November 1, 2006
Vol. 64
No. 3

All About Accountability / Phony Formative Assessments: Buyer Beware!

James Popham

      The term formative assessment is rapidly moving to the head of this year's education fad parade. The reason is all too clear. Several years ago, Paul Black and Dylan Wiliam (1998) of King's College London presented a persuasive review of empirical studies dealing with the payoffs of well-conceived classroom assessments. The two British researchers concluded that when schools used the results of classroom assessments to adjust ongoing instruction, students not only mastered content better, but also improved their performance on external achievement tests.
      Given the pressure on educators to boost their students' scores on external accountability tests, the notion that classroom assessments could contribute to higher test scores was alluring to many education leaders. As news of Black and Wiliam's conclusions gradually spread into faculty lounges, test publishers suddenly began to relabel many of their tests as “formative.” This name-switching sales ploy was spurred on by the growing perception among educators that formative assessments could improve their students' test scores and help their schools dodge the many accountability bullets being aimed their way.
      More than one test company official has confided to me that companies affixed the “formative” label to just about any tests in their inventory. The companies sensed that the term would sell tests and appeal to many pressured educators, who would, in desperation, grasp at any score-improvement straws they could find.
      Assessment expert Lorrie Shepard believes that this approach, which is based solely on marketing motives, is corrupting the meaning of the term formative assessment, thereby diminishing the potentially positive effect of such assessments on student learning. During the 2006 National Large-Scale Assessment Conference, Shepard observed, “The research-based concept of formative assessment, closely grounded in classroom instructional processes, has been taken over—hijacked—by commercial test publishers and is used instead to refer to formal testing systems called ‘benchmark’ or ‘interim assessment systems.’”
      What, then, is formative assessment, and why is it so important for educators to understand what's involved? For an assessment to be formative, teachers (and ideally students as well) need to have the results in sufficient time to adjust—that is, form—ongoing instruction and learning. According to Wiliam, the biggest instructional payoffs occur when teachers use “short-cycle” assessments, in which test results are available quickly enough to enable teachers to adjust how they're teaching and students to alter how they're trying to learn.
      Educators need to realize that the research rationale for formative assessment is based on short-cycle assessments. Such rapid-turnaround assessments yield results during a class period or in the midst of a multiweek instructional unit. If the results don't get back in time for teachers to adjust instruction for the students being assessed, then it's not formative assessment.
      Profit-motivated testing firms (as well as dollar-driven consultants) may allege that districtwide or even statewide assessments, referred to variously as “benchmark” or “interim” tests, are, in fact, formative. But almost all these tests fail to get results back in time for meaningful instructional adjustments to take place for the tested students. Some take more than a month, especially those that require hand-scoring of student responses. Because the results come back for topic X when the teacher has already moved on to topic Z, such tests cannot be regarded as formative assessments. At the very least, test companies have no right to proclaim the effectiveness of their tests by riding on the research coattails of short-cycle classroom assessments used formatively.
      Moreover, for district-dispensed interim tests to spur timely and beneficial adjustments in teachers' instruction, the administration of those tests would have to mesh remarkably well with the curricular aims that teachers were addressing in the district's classrooms at that specific time. Although this curricular concurrence is possible, I've rarely witnessed it.
      Just because these large-scale tests don't qualify as formative doesn't mean that classroom assessments automatically can claim that advantage. Most classroom assessments will not supply information to help teachers adjust their instruction unless teachers deliberately design them to do so. Thus, the assessments can't be considered formative. Even if a teacher intends to create a test whose results will permit instructional adjustments, not all those well-intentioned tests will be as helpful as the teacher hoped. Many classroom tests are consummately cruddy.
      Properly formulated formative classroom assessments (or even sufficiently short-cycled district assessments) can help students learn better and can improve those students' scores on external accountability tests. Persuasive empirical evidence shows that these tests work; clearly, teachers should use them to improve both teaching and learning.
      I am not suggesting that longer-cycle tests, such as the so-called benchmark or interim tests that we often run into these days, are without merit. They quite possibly may enable teachers to make useful longer-term changes in instruction and curriculum. But if you encounter such a test that is glowingly labeled as “formative” and is swathed in research results associated with short-cycle classroom assessments, don't be hoodwinked by the sales pitch. In the future, evidence may show that benchmark or interim tests are instructionally beneficial in the short term. But research currently does not support that claim.
      References

      Black, P., & Wiliam, D. (1998). Inside the black box: Raising standards through classroom assessment. Phi Delta Kappan, 80(2), 139–148.

      Shepard, L. (2006, June 26). Panelist presentation delivered at the National Large-Scale Assessment Conference, sponsored by the Council of Chief State School Officers, San Francisco, CA.

      James Popham is Emeritus Professor in the UCLA Graduate School of Education and Information Studies. At UCLA he won several distinguished teaching awards, and in January 2000, he was recognized by UCLA Today as one of UCLA's top 20 professors of the 20th century.

      Popham is a former president of the American Educational Research Association (AERA) and the founding editor of Educational Evaluation and Policy Analysis, an AERA quarterly journal.

      He has spent most of his career as a teacher and is the author of more than 30 books, 200 journal articles, 50 research reports, and nearly 200 papers presented before research societies. His areas of focus include student assessment and educational evaluation. One of his recent books is Assessment Literacy for Educators in a Hurry.

      From our issue: NCLB: Taking Stock, Looking Forward