March 1, 2010 | Vol. 67, No. 6

Special Topic / Why We Should Stop Bashing State Tests

Grant Wiggins
An item-by-item look at state test results reveals that students lack higher-level reading and thinking skills.


It is, of course, a common lament: "Oh, those standardized tests! If it weren't for them…" But if you look closely at the released test items and student performance data for states that provide such information, your opinion may change. Mine did. Standardized tests can give us surprisingly valuable and counterintuitive insights into what our students are not learning.
Consider one such item: 10th graders read lyrics from Bob Dylan's "The Times They Are A-Changin'" and were asked what the lyrics suggest about the older generation the song addresses:
  A. They understand the problems of society.
  B. They represent an outdated set of values.
  C. They are the most open to change.
  D. They are role models for the speaker.
Well, we "better start swimmin' or [we'll] sink like a stone" in education—because only 58 percent of students chose the correct answer, B. Astonishingly, 19 percent chose A; 12 percent chose C; and 11 percent chose D. In other words, more than 40 percent of 10th graders think the lyrics mean the opposite of what they really do. It seems that a huge chunk of our students cannot even make the most basic sense of a biting song lyric.

A Critical Weakness in Student Understanding

This result is not an aberration, I am sorry to report. Over and over, in looking at fully disclosed tests and results where they are available—especially in Massachusetts, Florida, and Ohio, where the data are rich and revealing—I have found that far too many of our students, at all grade levels, do poorly on questions requiring inferences. Students are especially weak at drawing conclusions from nonfiction. On average, across all three states, only about 60 percent can identify the main idea or the author's purpose of a reading passage.
A typical item asked students to identify the main idea of a passage on the history of the computer:
  A. The abacus is the earliest form of computer. (answer chosen by 21 percent of students)
  B. The development of the computer spans many centuries. (correct answer; chosen by 64 percent)
  C. Lady Ada Byron's role in the computer's origin is often overlooked. (chosen by 4 percent)
  D. The world would be very different if Babbage's machine had been finished. (chosen by 11 percent)
The fact that the most frequently chosen wrong answer here is A reveals a serious problem that is evident across tests and grade levels: many students incorrectly select an answer containing an important fact from the passage instead of realizing that the question requires an inference and then picking the correct one.
Author's-purpose items show the same weakness. On one such question, asking why an author made a particular choice in writing a book, the results were:
  A. To keep readers from being confused. (chosen by 13 percent)
  B. To make the book interesting to readers. (correct answer; chosen by 62 percent)
  C. To remember all the information for the book. (chosen by 15 percent)
  D. To make the writing long enough to become a book. (chosen by 10 percent)
Overall, these are sobering results—and they are not easily explained away as artifacts of testing, as many testing critics would have us believe. As adults, all of these students will need to read nonfiction and make inferences on their own for professional, political, commercial, and personal reasons. Yet the results across states and grade levels reveal that a large portion of them cannot do so.
The math results are arguably even more appalling. No news here, really: for decades, the National Assessment of Educational Progress (NAEP) and the Trends in International Mathematics and Science Study (TIMSS) have shown that math teachers in the United States are not getting the job done, especially at the high school level. But the tests from the three aforementioned states show the problem clearly, too.
On all three states' high school geometry tests, for instance, students do poorly on questions that require them first to recognize the need to use the Pythagorean theorem (in any right triangle, a² + b² = c², where c represents the length of the hypotenuse and a and b represent the lengths of the other two sides) and then to apply it—despite the fact that this is arguably one of the most important concepts covered in the course. Figure 1 shows the weak performance of 8th grade Ohio students on one such question. Only 48 percent could figure it out.

Figure 1. Item from the Ohio 8th Grade Mathematics Test

The state tests contain dozens of Pythagorean items like this, in which students have to infer the need to use this core formula—and fail to. In some cases the tests even give strong hints, such as the little icon for right angles provided in a graphic. Even so, most students cannot go past rote learning to infer the correct formula from the information given. This content was "covered"—but not understood.
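To make concrete what these items demand, consider a minimal worked example with illustrative side lengths (the released Ohio item itself appears in Figure 1). Given a right triangle with legs of 6 and 8 units and an unlabeled third side, the student must first infer that the unknown side is a hypotenuse and that the theorem applies, and only then compute

$$c = \sqrt{a^2 + b^2} = \sqrt{6^2 + 8^2} = \sqrt{36 + 64} = \sqrt{100} = 10.$$

The arithmetic is trivial; what the item really tests is the inference that the Pythagorean theorem is the right tool.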
Figure 2 shows an item from the 2003 Massachusetts 10th grade algebra assessment. This result is doubly dismaying to me because even if you had forgotten the distance formula, you could plot the points and use the Pythagorean theorem to solve the item—or to derive the distance formula itself. Apparently, though, if our high school students can't "plug and chug" the answer, a majority of them are stymied. After a year or more of studying algebra, students simply do not understand linear relationships—the key topic of the entire year! It makes me wonder, frankly, what algebra teachers have been doing all year.

Figure 2. Item from the Massachusetts 10th Grade Algebra Test

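For the record, the derivation described above takes a single step: treat the segment joining two points $(x_1, y_1)$ and $(x_2, y_2)$ as the hypotenuse of a right triangle whose legs, of lengths $|x_2 - x_1|$ and $|y_2 - y_1|$, run parallel to the axes. The Pythagorean theorem then yields the distance formula directly:

$$d = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}.$$

A student who understands the geometry need not memorize the formula at all, which is precisely the kind of transfer the item rewards.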
But my favorite example by far (one that always makes faculties we work with in Understanding by Design trainings moan in dismay, roll their eyes, and shake their heads) comes from the Massachusetts 10th grade English test. Students were asked to read a thought-provoking, enjoyable piece of nonfiction about color blindness, 17 paragraphs long. The first paragraph:

A fellow 4th grader broke the news to me after she saw my effort on a class assignment involving scissors and construction paper. "You cut out a purple bluebird," she said. There was no reproach in her voice, just a certain puzzlement. Her observation opened my eyes—not that my eyes particularly help—to the fact that I am color-blind. In the 36 years since, I've been trying to understand what that means. I'm still not sure I do. …

And the 16th and 17th paragraphs:

… there's no ready source of information about how many presidents, or military heroes, or rock singers have been color-blind. Based on the law of averages, though, there must have been some. We are everywhere, trying to cope, trying to blend in. Usually we succeed. Until someone spots our purple bluebirds. Then the jig is up.

Students were then asked what kind of writing the selection is:
  A. a biography. (chosen by 38 percent)
  B. a scientific article. (chosen by 14 percent)
  C. an essay. (correct answer; chosen by 35 percent)
  D. an investigative report. (chosen by 12 percent)
When local papers ran follow-up articles on the MCAS results, they interviewed students who had taken the test. Asked about this question, many students said the piece could not be an essay because it was "funny" and "was not five paragraphs."
Here is our problem in a nutshell. Students are taught formulas that they learn and spit back unthinkingly—regardless of subject matter—all in the name of "meeting standards." Yet, as so many assessment results reveal, a large portion of U.S. students are so literal-minded that they cannot answer fairly simple questions requiring interpretation and transfer—which is surely the point of the state standards.
Is that the fault of the testing system? Or have our teachers and school administrators badly misunderstood what kind of curriculum and instruction a standards-based education demands? No research supports the oft-heard claim that the tests "demand" superficial coverage. (On the contrary, we see the most slavish test prep in inadequate schools, not in the best ones.) An education focused on student understanding—a prioritized curriculum aimed at transfer—would not yield such depressing results.

Lesson One: The Solution Is Local

Perhaps what should be a-changin' is our attitude toward test results. An examination of the released tests shows that most of the questions on the math and language arts tests are both appropriate and revealing—especially those that involve inferences about such key concepts as main idea, author's purpose, linear relationships, equivalency of fractions and decimals, and so on. (It is true that some of the vocabulary questions seem picayune, and the history and science assessments are in general weaker.)
In fact, despite the constant criticism leveled at state tests, local assessment is arguably the far weaker link in the whole chain of would-be reform. Many of us have seen firsthand how invalid and low-level many local tests are. And studies have shown for years that, in terms of Bloom's taxonomy, most teacher questions hit only the first two levels (knowledge and comprehension) rather than the four higher levels (application, analysis, synthesis, and evaluation).1 In one high-income suburban New Jersey district that some colleagues and I studied, we found not a single question requiring higher-level thinking on any of the marking-period tests. Even more surprising, there was no difference between the honors and regular-track versions of the same courses.
A close look at state test results shows me that both test-prep "teaching" and test bashing get it wrong. The test items that our students do most poorly on demand interpretation and transfer, not rote learning and recall. Better teaching and (especially) better local testing would raise state test scores. Teaching for greater understanding would improve results, not threaten them—as both common sense and the research indicate.

Lesson Two: Greater Transparency Is Essential

If the goal is to better understand and prepare for standards-based assessments in math and literacy, I highly recommend that educators use the (free!) resources provided by the three states discussed here, even if they don't teach in one of them, to demystify testing and focus on what the feedback reveals. But the long-term solution should be less jury-rigged. On the policy front, therefore, it is high time for all states to follow the lead of Massachusetts (www.doe.mass.edu/mcas), Florida (http://fcat.fldoe.org), and Ohio (http://ohio3-8.success-ode-state-oh-us.info): release all or most of the tests with item-by-item and school-by-school analyses, and report the percentage of students choosing each answer option, not just the correct one.
There simply cannot be genuine accountability unless state assessments provide such transparent feedback. (I trust that the U.S. groups working on the Core Standards and assessments are attending to this point.) In far too many states, alas, educators can still actually be reprimanded for just looking at the test questions when the tests are given, never mind getting data about how their students and students in other schools and districts did on each question.
The sorry excuse for such policies? As they said in Watergate, follow the money. Many state education department personnel have told me either that it is too expensive to release the tests or that their contract with the vendor prohibits it. But if we are interested in genuine reform based on useful feedback from tests, it is unacceptable to settle for mere accountability audits that keep the goals and results of state assessments too opaque for actual use.
So maybe Dylan had it right here, too: In assessment, both local and state, the "old road is rapidly agin'." His next words seem all the more prescient today: "Please get out of the new one if you can't lend your hand."
End Notes

1 See, for example, Archibald, D. A., & Grant, T. J. (1999). What's on the test? An analytical framework and findings from an examination of teachers' math tests. Educational Assessment, 6(4), 250.

Grant Wiggins (1950–2015) was president of Authentic Education, a consulting, research, and publishing company. He authored many books and was coauthor of Understanding by Design®, an award-winning framework for curriculum design that extolled the virtues of backward planning.  

Wiggins, a nationally recognized assessment expert, worked on some of the most influential reform initiatives in the country, including Vermont's portfolio system and Ted Sizer's Coalition of Essential Schools. He consulted with schools, districts, and state education departments on a variety of reform matters; organized conferences and workshops on standards clarification; and developed print materials and web resources on curricular change.

 

UNDERSTANDING BY DESIGN® and UbD® are registered trademarks of Backward Design, LLC used under license.

From our issue: Reading to Learn.