April 1, 2006 | Vol. 63, No. 7

All About Accountability / Branded By a Test

James Popham

      If you were to ask 10 randomly selected adults whether they recalled their SAT or ACT scores, odds are that all of them would. Folks just don't forget their scores, and that's too bad.
      The SAT and ACT are the premier college entrance exams in the United States. Test takers generally regard them as definitive determiners of a person's intellectual ability. Many people believe that SAT or ACT scores not only govern whether high school students will get into college, but also decide whether those students will succeed in college and in later life. Such views, although widely held, are wrong.
      Surprisingly, most education professionals know little more than laypersons do about the SAT and ACT. This is problematic because teachers may pass along erroneous information to their students. Basic Reality One is that the two exams differ in meaningful ways. Many educators believe that the SAT and ACT are essentially interchangeable: two three-letter peas from the same assessment pod. That's not so. Even though both tests inform the same college admission decisions, they emphasize substantially different content.
      Let's start with the SAT, not only because it's older (it was first administered in 1926) but also because more students take it—more than two million annually. The initial name of the SAT was the Scholastic Aptitude Test. That label accurately reflected what the builders of the test were trying to assess—namely, the sorts of inborn academic aptitudes that would accurately predict a high school student's college grades. For more than three-quarters of a century, the SAT has focused on measuring high school students' verbal and quantitative aptitudes.
      Student performances on each of the two SAT sections (verbal and quantitative) are reported as scale scores ranging from a low of 200 to a high of 800. For years, students simply added the two scores together. You can be assured that if any test takers earned a combined score in the high 1400s or somewhere in the 1500s, those students did not just remember their scores—they often proclaimed them to anyone within earshot. I've been surprised that in recent years we've not seen students sporting high SAT score tattoos!
      Students who perform less well on the SAT also remember their scores. Students who earned a combined score of 960 on the SAT, however, rarely raced to the tattoo parlor, nor did they forget their so-so scores and what they seemed to imply. (With the addition of a writing section in 2005, the top score possible on the SAT is now 2400.)
      The ACT, formerly the American College Testing Program, was introduced in 1959 by University of Iowa professor E. F. Lindquist. Lindquist and his colleagues regarded the SAT as an entrance exam designed primarily for elite universities in the Northeast. Accordingly, rather than assessing students' inborn aptitudes, the ACT was intended from the get-go to be an achievement test. It measured the college-relevant skills and knowledge that students had already learned.
      The ACT assesses student achievement in four content fields: English, mathematics, reading, and science (with an optional essay test now available to assess writing). The ACT is used for college admissions more often than the SAT is in certain areas of the United States, particularly in the Midwest. The scale used to report students' ACT scores ranges from a low of 1 to a high of 36 in each content area, with the composite score computed as the average of the four content-area scores. As with the SAT, students remember their ACT scores.
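      To make that composite concrete, here is a minimal sketch of the averaging, assuming the conventional practice of rounding the average of the four section scores to the nearest whole number; the section scores themselves are hypothetical examples.

```python
# Illustrative sketch: an ACT composite as the rounded average of the
# four section scores (each reported on a 1-36 scale). Rounding halves
# upward is an assumption here, and the example scores are made up.

def act_composite(english: int, math: int, reading: int, science: int) -> int:
    """Average the four section scores and round to the nearest
    whole number, rounding fractions of one-half upward."""
    average = (english + math + reading + science) / 4
    return int(average + 0.5)  # round half up

# Example: section scores averaging 25.5 round up to a composite of 26.
print(act_composite(24, 27, 25, 26))  # -> 26
```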
      If Basic Reality One alerts us to the different purposes of these two tests, Basic Reality Two alerts us to an even more crucial point: Neither of these tests accurately predicts the grades that high school students will subsequently earn in college.
      To determine the predictive accuracy of either the SAT or the ACT, correlations are computed between students' scores on one of the exams and the grades those students go on to earn in college. With few exceptions, the correlations between these test scores and college grades hover around .50. Those .50 correlation coefficients are, to be sure, statistically significant; coefficients that high would rarely arise on the basis of chance alone. However (and this is the heart of Basic Reality Two), a correlation coefficient of .50 is not that blinking accurate in predicting anything!
      Because of statistical considerations too abstruse to dip into here, you must square that predictive validity coefficient of .50 to get an accurate idea of how predictive an SAT or ACT score really is. Multiply .50 by .50, and you end up with .25. This means that SAT or ACT scores account for only 25 percent of the variation in a college student's grades. Fully 75 percent of a college student's grade performance is explained by other factors, such as the student's motivation or study habits.
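      The arithmetic is worth spelling out. The sketch below simply squares a correlation coefficient to get the proportion of grade variation that test scores explain; the .50 value comes from the discussion above, and everything else is illustrative.

```python
# A minimal sketch of the squaring step: a predictive validity
# coefficient of .50 explains only 25 percent of the variation in
# college grades, leaving 75 percent to other factors.
r = 0.50                      # typical score-to-grades correlation
r_squared = r ** 2            # coefficient of determination: 0.25
unexplained = 1 - r_squared   # 0.75: motivation, study habits, etc.

print(f"explained by scores:     {r_squared:.0%}")    # 25%
print(f"explained by all else:   {unexplained:.0%}")  # 75%
print(f"other factors vs. scores: {unexplained / r_squared:.0f}x")  # 3x
```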
      Yes, SAT and ACT scores are somewhat predictive of how well a high school student will prosper academically in college. But it's important to remember that variables other than test scores have three times as much impact on that student's college performance.
      High school students can be successful in college and life with ho-hum SAT or ACT scores. Let's start judging students by their real achievements, not their test-predicted ones.

      James Popham is Emeritus Professor in the UCLA Graduate School of Education and Information Studies. At UCLA he won several distinguished teaching awards, and in January 2000, he was recognized by UCLA Today as one of UCLA's top 20 professors of the 20th century.

      Popham is a former president of the American Educational Research Association (AERA) and the founding editor of Educational Evaluation and Policy Analysis, an AERA quarterly journal.

      He has spent most of his career as a teacher and is the author of more than 30 books, 200 journal articles, 50 research reports, and nearly 200 papers presented before research societies. His areas of focus include student assessment and educational evaluation. One of his recent books is Assessment Literacy for Educators in a Hurry.

      From our issue: Teaching the Tweens