October 1, 2004 | Vol. 62, No. 2

All About Accountability / Tawdry Tests and AYP

      Unless one happens to suffer from long-term sleeping sickness, every U.S. educator knows about the federal law calling for public school students to make “adequate yearly progress” (AYP). That law, the sometimes-praised and often-reviled No Child Left Behind Act (NCLB), says that if a school fails to attain its annual AYP targets, bad things will follow.
      Although NCLB is a lengthy law, replete with all sorts of requirements, most public attention has understandably centered on the annual assessment sweepstakes wherein students' performances on state achievement tests determine which public schools will be identified as having hopped or flopped that year's AYP hurdles. AYP-failing schools that receive NCLB dollars face serious sanctions; AYP-failing schools that receive no NCLB funds chiefly suffer public embarrassment. It is no surprise, then, that teachers and administrators are eager to have their school's students make decisively “adequate” yearly progress.
      Because each state's officials determine their own performance expectations, and because different states choose different achievement tests to fulfill NCLB requirements, AYP failure rates vary wildly. In some states, about three-fourths of public schools are currently falling short of their AYP goals. In other states, fewer than 20 percent of the schools miss their AYP targets.
      Students' test scores are by far the most important determinant of whether a school has passed or failed AYP targets. Let's face it—evaluating a school's teachers on the basis of students' test-determined achievement is a reasonable notion. After all, schools exist so that teachers can help their students acquire important skills and knowledge. So it makes sense to judge a school's success on the basis of the learning that students display on appropriate achievement tests.
      Please reread the previous sentence to see whether you can identify its most important word. If you chose appropriate, you get an A+ in reading comprehension. That key word raises a pivotal question that all U.S. public school educators should currently be asking: Are our state's NCLB tests appropriate?
      In today's NCLB-dominated accountability world, few questions are more worth asking.
      The entire structure of NCLB accountability sits smack astride a cornerstone of test-determined student growth. If the tests used to measure student growth don't do that job accurately, then educators are being required to take part in an accountability game that they have no chance of winning. If the system determines AYP on the basis of test-signified improvements in students' learning, but the tests employed cannot spot such improvements, then NCLB's accountability strategy falls flat on its otherwise well-founded fanny.
      What, then, is an appropriate test that's capable of accurately identifying annual improvements in students' achievement? First, a good test measures only a modest number of curricular aims so that teachers are not overwhelmed by the prospect of promoting too many skills or bodies of knowledge. (However, because an appropriate test might then measure only a half-dozen rather than a half-hundred curricular aims, each of the curricular targets actually measured must be extraordinarily significant.) Second, whatever skills or bodies of knowledge the test assesses should be described with sufficient clarity so teachers can aim their instruction at well-understood curricular aims rather than at particular test items. Finally, an appropriate test must contain enough items related to each assessed skill or body of knowledge to determine students' mastery of that skill or body of knowledge, thereby enabling teachers to identify those parts of their instruction that need improvement. Any test lacking one or more of these three attributes is inappropriate for determining adequate yearly progress.
      Currently, many states appropriately assess their students' composition skills by administering writing-sample tests. Such tests typically focus on a powerful, properly described skill, and students' performances are judged using scoring guides whose evaluative criteria (organization, mechanics, and so on) can be taught effectively. It is also possible to conceptualize target skills for appropriate assessment in other fields—for instance, reading—so that a few significant, well-described, and instructionally addressable skills form the heart of a state's NCLB assessments. I'm delighted to report that Wyoming's education officials have already embarked on a full-scale effort to build instructionally sensitive NCLB tests for their state.
      What can educators do with their knowledge of this distinction between appropriate and inappropriate NCLB tests? I suggest that you discover just how appropriate your state's tests are for determining adequate yearly progress. Working by yourself, collaborating with colleagues, or enlisting the resources of your professional association, you should set out to discover how your state's NCLB tests jibe with the three essential attributes of appropriate tests.
      If your state's tests are as tawdry as, unfortunately, most states' current NCLB tests are, then you need to aggressively enter the public forum, enlisting the aid of willing colleagues to persuade your state's citizens and education policymakers to improve or replace those tests. Accountability systems implemented with inappropriate achievement tests harm students instead of helping them.

      James Popham is Emeritus Professor in the UCLA Graduate School of Education and Information Studies. At UCLA he won several distinguished teaching awards, and in January 2000, he was recognized by UCLA Today as one of UCLA's top 20 professors of the 20th century.

      Popham is a former president of the American Educational Research Association (AERA) and the founding editor of Educational Evaluation and Policy Analysis, an AERA quarterly journal.

      He has spent most of his career as a teacher and is the author of more than 30 books, 200 journal articles, 50 research reports, and nearly 200 papers presented before research societies. His areas of focus include student assessment and educational evaluation. One of his recent books is Assessment Literacy for Educators in a Hurry.
