February 1, 2003
Vol. 60
No. 5

Perspectives / Blind Data

In the current political climate, data can be an educator's best friend—and not just because the No Child Left Behind legislation now requires schools to collect and disaggregate standardized test data each year. When ideology polarizes all arguments, data—another name for information, whether quantitative or qualitative—can help educators decide what action to take to improve instruction, change practice, or reform schools. Using data provides an alternative to relying on subjective opinion. But educators who want to use data well need to answer some crucial questions.

How Will We Use the Data?

“There are two broad streams of data use, one a carrot, one a stick,” Denis P. Doyle writes.

The same data—attendance, demographics, test scores, teacher characteristics, school spending, course-taking patterns—can be used for diagnosis or accountability purposes or both. For example, disaggregating test scores by identifiable groups of youngsters can provide the key to either bettering instruction or finger-pointing (or both). (2002, p. 31)
Because using data for finger-pointing will be in full force once the No Child Left Behind legislation is fully implemented, educators may find it hard in the days ahead to keep in mind the higher purpose for studying data. Authors in this issue point out the uses and misuses of data in education.
For example, Thomas R. Guskey (p. 6) leads off with an explanation of why classroom assessment data are superior to the once-a-year test data that rank students. Looking carefully at how many students know certain items on a given test can help a teacher decide to give corrective instruction, to present the concept in new ways, and to pay attention to trouble spots in a timely fashion. Using data to promote mastery learning yields far more valuable information than using data to measure this year's class against last year's.

Where Did the Data Come From?

The term scientifically based research is almost as much in use as data-driven decision making. Researcher Robert E. Slavin (p. 12) clarifies evidence-based research. In layman's terms, he explains random assignment, statistical significance, and the importance of sample size, and he warns against such misuses of research as cherry picking and bottom fishing (claiming that the positive or negative outliers prove success or failure). The practicing educator needs to be more informed than ever about the methodology of researchers in a time when the U.S. Department of Education stoops to deleting studies from ERIC that it disagrees with. As Diane Ravitch and Chester E. Finn write in reference to other new Department of Education practices, “How can anyone trust the numbers if there is opportunity for politicians . . . to massage the data or determine the timing and manner of their release?” (2002, p. 76)

Where Do We Start?

The mounds of data that we can collect in electronic warehouses can frighten people-oriented educators. Several of our authors demystify the process of analyzing data. Mike Schmoker (p. 22) suggests that an individual can start by focusing on two questions: How many of my students are succeeding? and What are my students' strengths and weaknesses? Victoria L. Bernhardt (p. 26) offers advice for administrators on organizing data and intersecting two, three, or four data categories to get answers to such questions as, Which program is making the biggest difference in achievement for at-risk students?

What Do the Data Really Say?

In a summary of their original research, the largest study yet conducted on this issue, Audrey L. Amrein and David C. Berliner (p. 32) examine whether high-stakes tests affect student improvement on other test measures, dropout rates, and motivation to learn. The data show that after adopting such exams, twice as many states slipped against the national averages on the SAT and the ACT as gained on them. Elementary students who did well on state tests did not have correspondingly good scores on the NAEP math tests. And, after adopting mandatory exit exams, twice as many states had graduation rates lower than the national average. In addition, the researchers found that students became less intrinsically motivated to learn when rewards and sanctions were attached to performance on tests.
All of these questions about data remind us of the power of knowledge—and the importance of using data with care and informed judgment.
References

Doyle, D. P. (2002). Knowledge-based decision making. The School Administrator, 59(11), 30–34.

Ravitch, D., & Finn, C. E. (2002). Time to save federal education data. Education Week, 21(42), 76.

Marge Scherer has contributed to Educational Leadership.

From our issue: Using Data to Improve Student Achievement