April 1, 2005 | Vol. 62, No. 7

All About Accountability / Squandered Instructional Zeal

James Popham
      Earlier this year, I made a presentation at a three-day conference focused on improving teachers' classroom instruction. Soon after the conference got under way, I found myself caught up in the excitement of what was going on around me. Rarely have I encountered a collection of educators so zealously intent on acquiring additional instructional moxie. Moreover, the conference staff was presenting information in accord with what research tells us about teaching. It was an exhilarating conference, featuring solid content, a skilled staff, and eager participants.
      Yet as the conference went on, I found myself becoming disheartened. I began to realize that these instructionally energized participants would soon be heading home where, suitably jazzed by what they had learned at the conference, they would implement a host of proven teaching tactics. As a consequence, their students would learn better. Everything would be wonderful—until the state accountability tests were administered, typically in the spring, to satisfy the requirements of No Child Left Behind (NCLB).
      Even crackerjack classroom teachers may look like losers on their state's accountability scoreboard if the dominant indicators of success are students' scores on instructionally insensitive tests—that is, tests that cannot detect improvement in instruction. And, because almost every state currently uses instructionally insensitive accountability tests, these enthusiastic conference attendees would likely implement the nifty stuff they'd learned only to be judged later as instructional failures. A legitimate cause for depression!
      Two species of instructionally insensitive accountability tests can make it appear that even a superb teacher is unable to promote sufficient year-to-year improvement in students' scores. Some states use nationally standardized achievement tests, which are designed from the get-go to yield comparative scores that show who outperforms whom. Unfortunately, to secure the degree of score spread among test takers needed to make such comparisons, these national tests (or similarly constructed state tests) contain many items directly linked to students' socioeconomic status (SES). Such tests usually measure the demographic composition of a school's student body rather than the staff's instructional prowess.
      The second species of instructionally insensitive test is standards-based. Many states have created such tests to enable them to better assess students' mastery of the state's content standards—its official curricular aims. Unfortunately, most states have generated such a huge collection of content standards that overwhelmed teachers can't figure out what the tests will actually assess. Beyond that, these tests rarely provide timely, instructionally meaningful feedback to teachers or students. And students' scores on these tests also turn out to be closely linked to SES.
      Teachers who really want to improve instruction, however, dare not be dissuaded from using the most powerful instructional approaches in their classrooms—despite the seemingly contradictory evidence from flawed accountability tests. Instead, educators need to learn why tests, even NCLB-sanctioned state accountability tests, are likely to provide misleading evidence of teachers' degree of effectiveness.
      Once educators have learned more about the specific accountability tests used by their state, I recommend that they first transform the information into language suitable for laypersons, and then use a variety of media—print, in-person, and electronic—to convey the information to local community members, particularly parents and school board members. If citizens can learn—ideally in advance of the release of state test results—why students' scores on certain accountability tests do not accurately reflect teachers' instructional excellence, then those citizens are less apt to be misled by such test scores.
      The task is for educators to proactively educate their communities about what kinds of test-based evidence we should or should not employ to judge teachers' instructional competence. By presenting this information clearly and in a manner that the public does not see as self-serving and defensive, educators can prevent the results of off-target tests from squashing good teaching.
      Instructional zeal, if unrewarded, often evaporates. It is too precious to squander.

      James Popham is Emeritus Professor in the UCLA Graduate School of Education and Information Studies. At UCLA he won several distinguished teaching awards, and in January 2000, he was recognized by UCLA Today as one of UCLA's top 20 professors of the 20th century.

      Popham is a former president of the American Educational Research Association (AERA) and the founding editor of Educational Evaluation and Policy Analysis, an AERA quarterly journal.

      He has spent most of his career as a teacher and is the author of more than 30 books, 200 journal articles, 50 research reports, and nearly 200 papers presented before research societies. His areas of focus include student assessment and educational evaluation. One of his recent books is Assessment Literacy for Educators in a Hurry.

      From the issue: The Adolescent Learner