October 1, 2008
Vol. 66
No. 2

All About Assessment / An Unintentional Deception


      Alignment is a nine-letter word which, when preceded by the word assessment, seems to trigger a flock of four-letter words. Yes, assessment alignment has caused more than a few educators to completely abandon curse control. Innocently, assessment alignment simply calls for a state's accountability tests to appropriately measure that state's official curricular aims. But this is a far tougher task than it appears to be.
      Especially since the 2002 enactment of No Child Left Behind (NCLB) and the subsequent release of regulations implementing that federal law, officials in state departments of education have been scrambling to come up with NCLB-required evidence that their accountability tests are suitably aligned with their state's content standards—that is, with their state's curricular aims.
      To secure such alignment evidence, state officials typically hire an external firm to conduct an independent study focused on the degree to which a state's accountability tests mesh with the state's content standards. Currently, the most popular of the available alignment approaches is the one devised by Norman Webb of the University of Wisconsin. Webb has developed a systematic procedure for determining the degree to which curricular “expectations and assessments are in agreement.”
      Webb's approach revolves around what the measurement community has, for decades, referred to as “content-related evidence of validity,” which tells us whether a test accurately measures students' possession of the skills and knowledge embodied in whatever curricular aims the test is supposed to assess. In almost all of today's assessment-alignment approaches, a group of judges is first trained, then asked to make item-by-item judgments about the degree to which an accountability test's items match the particular curricular aims those items were intended to assess.
      Webb's approach to assessment alignment reflects a traditional way of thinking about whether a test measures what it purports to measure. It is a well-intentioned procedure, and it satisfies federal demands for such alignment evidence. The only problem with Webb's system—and with most of the assessment-alignment approaches based on it—is that in an accountability context, it is misleading!
      Let's consider, for a moment, the two most important parts of an educational accountability system. First, we have curricular aims—referred to these days as content standards—representing the stuff we want kids to learn. Second, we have accountability tests whose function is to determine whether educators have been successful in getting kids to learn the stuff identified in the content standards. If students perform well enough on these accountability tests, we laud educators for having gotten kids to master the content standards. But if students don't perform well enough, we invoke sanctions intended to make unsuccessful educators shape up. These features make sense.
      Yet, for this approach to work properly, the tests used to evaluate instructional quality must be capable of determining whether teachers have, in fact, been instructionally successful. Accordingly, the following question would be inappropriate to ask judges as they review the relationship between content standards and the items on an accountability test: Is this test item in agreement with the curricular aim it is intended to measure? Such a softball question misses the mark. A more appropriate question would be something along these lines: Will this test item accurately measure how effectively students have been taught to master the curricular aim that the item measures?
      You see, because the heart of an educational accountability approach is the use of test results to gauge instructional quality—and because instructional quality should be determined by what students learn—the emphasis of a defensible alignment study must be on the issue of whether instruction was successful. If an accountability test's items are not capable of distinguishing between good and bad instruction, then educational accountability simply makes no sense.
      Test items that are only judged on whether they agree with a curricular aim may or may not accurately reflect instructional quality. For example, suppose a state-approved curricular aim in social studies calls for elementary school students to be able to employ simple maps to generate a sensible plan for getting from Point A to Point B. Note that the essence of this curricular aim is for students to be able to figure out how to get some place.
      But what if an item supposedly measuring this curricular aim dealt with subtle aspects of map reading—such as how boundaries are graphically defined—that were not central to what students actually needed when using a map to get someplace? Such an item might very well pass muster when using most of today's assessment-alignment procedures. The item is clearly related to map reading. But the fact that a test item is judged to be “in agreement” with this curricular aim does not signify that the item will help us determine how successfully teachers promoted students' mastery of a “get from Point A to Point B” curricular aim. And yet the accurate determination of instructional quality is the very cornerstone of a defensible education accountability program.
      Because most states now have evidence that their accountability tests are apparently “aligned” to their state's curricular aims, it is assumed that these tests do a satisfactory job of measuring educators' instructional quality. But this is the moment when deception dances onstage. We assume that accountability tests—because they are aligned—provide us with an accurate picture of educators' instructional quality. Much of the time, this assumption is unwarranted. Many successful schools will be regarded as losers; many unsuccessful schools will be seen as winners. We will be deceived because we allowed ourselves to be seduced by an insufficiently demanding conception of assessment alignment.
      End Notes

      1 Webb, N. L. (2002). Alignment study in language arts, mathematics, science, and social studies of state standards. Washington, DC: Council of Chief State School Officers.

      James Popham is Emeritus Professor in the UCLA Graduate School of Education and Information Studies. At UCLA he won several distinguished teaching awards, and in January 2000, he was recognized by UCLA Today as one of UCLA's top 20 professors of the 20th century.

      Popham is a former president of the American Educational Research Association (AERA) and the founding editor of Educational Evaluation and Policy Analysis, an AERA quarterly journal.

      He has spent most of his career as a teacher and is the author of more than 30 books, 200 journal articles, 50 research reports, and nearly 200 papers presented before research societies. His areas of focus include student assessment and educational evaluation. One of his recent books is Assessment Literacy for Educators in a Hurry.

      From our issue: Expecting Excellence