September 1, 2006 | Vol. 64, No. 1

All About Accountability / Content Standards: The Unindicted Co-conspirator


      During William Jefferson Clinton's election campaign for the U.S. presidency, his staffers plastered their offices with placards asserting, “It's the economy, stupid!” They were reminding themselves that they needed to focus voters' attention on an issue that might well lead Clinton to victory. It did.
      When I look at the current mess that most states' education officials have made of their test-based accountability programs, I find myself wanting to toss a similar epithet their way—namely, “It's the content standards, stupid!” Although it's easy to snipe at a state's accountability tests or revile the federal legislation that spawned those tests, the true culprits in our current education calamity are states' official content standards, that is, the skills and bodies of knowledge that all of a state's students are supposed to learn. Most states have far too many content standards. Moreover, those standards are poorly conceptualized for either teaching or testing.
      What people mean when they use the phrase “content standard” varies all over the lot. In some states, content standards are little more than category labels describing collections of curricular aims in particular content areas. For instance, mathematics might have only a handful of content standards, titled “numerical operations” or “algebra,” for example. But each of these “standards” is little more than a label that subsumes a number of smaller-scope curricular aims, such as the specific numerical or algebraic skills that students are supposed to acquire. Typically, educators describe such smaller-scope curricular aims as “benchmarks,” “expectancies,” or “indicators.” Yet only when we get down to this smaller “grain size” do teachers actually know what to teach and assessment specialists actually know what to test. This is where problems arise.
      Put simply, whatever label we hang on curricular aims, when teachers and test makers become overwhelmed by too many of them, any test-based accountability program is certain to stumble. An unsound accountability program not only provides inaccurate evaluations of school quality, but also frequently fosters instructional activities inherently harmful to students.
      If a state's content standards—or the smaller-grain curricular aims that those content standards subsume—are too numerous, then teachers don't know where to aim their instructional efforts. Similarly, because test makers are unable to assess all of the state's sprawling curricular aims, test designers often settle for a sampling strategy in which certain curricular aims get assessed in some years, others get assessed in other years, and some rarely, if ever, get assessed. In such settings, teachers are obliged to guess which curricular aims will be assessed in a given year. Not surprisingly, many guess wrong.
      Moreover, the large number of curricular aims makes it impossible for test makers to include enough items in any year's accountability test to adequately assess a student's performance in every curricular aim. But how can teachers provide targeted instruction for students when they don't know which specific aims a student has or has not mastered? How can teachers improve their instruction over time when they don't know what is and isn't working?
      Even though test-development companies know full well that they can't do a defensible job of assessing such staggering collections of curricular aims, those companies rarely register even a peep of protest about the flawed nature of a state's content standards. After all, test development is a big-bucks business these days, and when testing companies try to garner heavy-duty contracts from states, they are reluctant to admit to state officials that they can't do everything—even though they can't.
      So despite the absurdity of trying to build an accountability test to assess more curricular aims than any test can possibly cover, most testing companies pledge that they will create accountability tests thoroughly “aligned” with a state's content standards. Sadly, the winning contractor usually creates an accountability test that not only is instructionally insensitive but also fails to identify first-rate instruction—even when such instruction is patently present.
      What's to be done about this sorry state of affairs? The only way out of this maze is to overhaul a state's content standards so they represent curricular aims that teachers can truly teach in the instructional time available and that they can assess in a way that pays off instructionally. Several states have recently been flirting with the idea of using a smaller number of broader “power standards.” However, even though the announced intention of this approach is to winnow the number of curricular aims that a state's accountability tests must measure, many state curriculum specialists are reluctant to abandon cherished skills or knowledge that they yearn for students to possess. Happily, both Kansas and Wyoming have made courageous efforts to reduce state-assessed curricular aims to a manageable number.
      If your state has already approved a set of too-numerous curricular aims, you need to apply whatever pressure you can, personally or in collaboration with colleagues, so that state educators immediately review these aims and, if necessary, revise them. It is possible to create sets of worthwhile content standards and wonderful accountability tests. But to do so, we desperately need to create—from scratch in many instances—sets of powerful curricular targets, which, when transformed into defensible accountability tests, will stimulate sound instruction.

      James Popham is Emeritus Professor in the UCLA Graduate School of Education and Information Studies. At UCLA he won several distinguished teaching awards, and in January 2000, he was recognized by UCLA Today as one of UCLA's top 20 professors of the 20th century.

      Popham is a former president of the American Educational Research Association (AERA) and the founding editor of Educational Evaluation and Policy Analysis, an AERA quarterly journal.

      He has spent most of his career as a teacher and is the author of more than 30 books, 200 journal articles, 50 research reports, and nearly 200 papers presented before research societies. His areas of focus include student assessment and educational evaluation. One of his recent books is Assessment Literacy for Educators in a Hurry.

      From this issue: Teaching to Student Strengths