April 1, 2008
Vol. 65
No. 7

Ask About Accountability / When the Test Says You're Not So Smart

      Question: How can educators preserve the self-esteem of academically successful students who get so-so scores on college entrance exams, such as the SAT or ACT? Some of our school's most solid, well-rounded students become discouraged once they see their scores. They not only worry about what college they can get into, but some also begin to doubt their academic abilities.
      — Susan Rosenthal, Assistant Principal, Catalina Foothills High School, Tucson, Arizona
      Answer: This is an important question, Susan, because it raises a tip-of-the-iceberg issue regarding the effect today's tests have on students. Millions of U.S. kids are now required to take the yearly achievement tests mandated by No Child Left Behind (NCLB), and thousands of them will fail to earn proficient or better scores. Thus, for hordes of U.S. students, our schools are dishing out an annual test-transmitted message that says, "You aren't good enough!" Over time, what do you think the impact of this oft-repeated message will be? Test scores influence students' self-perceptions, and self-perceptions influence students' lives.
      You ask how educators should deal with students, especially solid and well-rounded kids, who get less-than-lustrous scores on college entrance exams. A number of those students will surely conclude that they "aren't smart" or, at least, that they "aren't smart enough" to succeed in a first-rate college. Most of them will be wrong.
      Similarly, students who earn less-than-proficient scores on standardized tests for several years in a row may become permanently doubtful about their academic abilities. Such conclusions, however, are also flat-out wrong. In many settings, the state-approved tests have been constructed using psychometric procedures that actually mask improvements in student performance. Unfortunately, the designers of these tests are often more concerned with creating sufficient score spread—that is, differences among test takers' scores for comparison purposes—than with measuring the amount of improvement in an individual student's achievement.
      The problem is that most students know practically nothing about educational testing. They believe that standardized tests are much more accurate than they truly are and that their test scores are precise representations not only of their current capabilities but also of their future potential. That's just not so.
      Take the ACT or SAT, for example. High school students' scores on these tests predict only about 25 percent of the grades these students will subsequently earn as college students. The remaining 75 percent of college grades are better predicted by nontest factors, such as students' motivation and study habits. It turns out that effort typically makes the difference in a college student's academic success. A pile of evidence vividly shows that when it comes to the grades students will earn in college, the ACT and SAT possess only modest predictive power.
      Students should know this before they sit down to take their first college entrance exam. If students hear about the limited predictive power of the SAT and ACT only after they have flopped on one or both of those exams, they will think that someone is just trotting out an excuse to make them feel better. We need to equip our students with an understanding of the basic concepts of educational testing—the notions of validity, reliability, and assessment bias, for instance. When they take a test, students who possess such understanding will know what a particular test can and can't tell them. For example, some achievement tests can help test takers get a fairly specific fix on what they do and don't know. Other achievement tests provide no particulars but only a general, tough-to-interpret picture of a student's accomplishment.
      For more than a decade, an emerging tenet of modern medicine has been that patients should become more responsible for their own health care. Rather than docilely submitting to the preferences of physicians or other health care professionals, patients are now being urged to respond more knowledgeably and independently to the recommendations of their health care practitioners. Patients need to become partners, not pawns, in managing their own health.
      Why should education be any different? Why should students not become more actively involved in managing the information elicited by the many tests they take? What students really need to learn about educational assessment can be boiled down to a few hours' worth of key concepts. Spreading this content over a student's many years in school would make promoting assessment literacy far from intrusive.
      One or more measurement-moxie staffers at the state or district level could take the lead in identifying a modest set of basic concepts about assessment that all students should know. Such content, along with procedures for teaching it, would be all that's needed to get underway in promoting students' assessment literacy.
      This brings us back to those academically successful students who get so-so scores on college entrance exams and to your question about self-esteem, Susan. By becoming assessment literate, these students will be less prone to discouragement and doubt, knowing that the effort they put forth—and not some score on a test—is the more reliable predictor of academic success. I'd also place a bet on students who specifically put forth the effort to become assessment literate. These students are almost certain to be winners.
      End Notes

      1 As reported in technical manuals for college admission exams, the relationship between high school students' test scores and the grades the students later earn in college rarely exceeds a correlation coefficient of .50. It's necessary to square such coefficients to determine what percentage of college grades can be accounted for by students' entrance exam scores. Squaring .50 gives us 25 percent—the percentage of college grades accounted for by students' scores on entrance exams.
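      The squaring step described in this end note is simple enough to verify directly. The snippet below is purely illustrative; the .50 figure is the approximate correlation cited above, not a value drawn from any particular test's technical manual.

```python
# Coefficient of determination: squaring a correlation coefficient r
# gives the share of variance in one measure (college grades) that is
# statistically accounted for by the other (entrance exam scores).
def variance_explained(r: float) -> float:
    """Return r-squared for a correlation coefficient r."""
    return r ** 2

r = 0.50  # rough exam-score-to-college-grades correlation cited above
share = variance_explained(r)
print(f"{share:.0%} of grade variance accounted for by exam scores")
# The remaining 1 - r**2 (here, 75 percent) is left to nontest factors
# such as motivation and study habits.
```

      Note that the drop from .50 to 25 percent is exactly why a "moderate" correlation can still leave most of the outcome unexplained.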

      James Popham is Emeritus Professor in the UCLA Graduate School of Education and Information Studies. At UCLA he won several distinguished teaching awards, and in January 2000, he was recognized by UCLA Today as one of UCLA's top 20 professors of the 20th century.

      Popham is a former president of the American Educational Research Association (AERA) and the founding editor of Educational Evaluation and Policy Analysis, an AERA quarterly journal.

      He has spent most of his career as a teacher and is the author of more than 30 books, 200 journal articles, 50 research reports, and nearly 200 papers presented before research societies. His areas of focus include student assessment and educational evaluation. One of his recent books is Assessment Literacy for Educators in a Hurry.

      From our issue: Poverty and Learning