February 1, 2013 | Vol. 70, No. 5

Special Topic / Data: No Deus ex Machina

Before we see real changes in schooling, practitioners and policymakers will need to get smarter about data-based decision making.


Data-based decision making is all the rage. Secretary of Education Arne Duncan (2009) has emphatically declared, "I am a deep believer in the power of data to drive our decisions. Data gives us the roadmap to reform. It tells us where we are, where we need to go, and who is most at risk." In the past few years, all 50 states have adopted most or all of the Data Quality Campaign's framework for state data systems.
In important respects, this is a welcome development. Data expose inequities, create transparency, and help drive organizational improvement.
But something is amiss. Many educators regard talk of data-based decision making as an external imposition, sensing new obligations and what they see as a push to narrow schooling to test scores and graduation rates. Districts remain hidebound and bureaucratic, with precious few looking like data-informed learning organizations. And the data—which are relatively crude, consisting mostly of reading and math scores—are unequal to the heavy weight they're asked to bear.
Despite these challenges, enthusiasts continue to make sweeping claims about the restorative power of data. Too often, as we talk to policymakers, system leaders, funders, advocates, and vendors, we get a whiff of deus ex machina, the theatrical trick of having a god drop from the heavens to miraculously save the day. (The phrase literally means "god from the machine.") Like a Euripides tragedy in which an unforeseen development bails out the playwright who has written himself into a corner, would-be reformers too often suggest that this wonderful thing called "data" is going to resolve stubborn, long-standing problems.
We need a more measured view. Data can be a powerful tool. But we must recognize that collecting data is not using data; that data are an input into judgment rather than a replacement for it; that data can inform but not resolve difficult questions of politics and values; and that we need better ways to measure what matters, rather than valuing those things we can measure.

We've Been Here Before

Data have long promised easy answers, sometimes with discomfiting results. Frederick Kelly created the first modern multiple-choice test in 1914 (Murdoch, 2007). Others quickly followed suit. Edward Thorndike and Charles Judd devised achievement tests in spelling, handwriting, arithmetic, composition, and more (Butts & Cremin, 1953). By 1923, more than 300 standardized scales were available (Cubberley, 1919).
Stanford's iconic dean of education, Ellwood Cubberley (1919), cheered such assessments, insisting, "We can now measure an unknown class and say, rather definitely, that, for example, the class not only spells poorly but is 12 percent below standard" (p. 694). Cubberley explained,
Standardized tests have meant nothing less than the ultimate changing of school administration from guesswork to scientific accuracy. The mere personal opinions of school board members and the lay public … have been in large part eliminated. (p. 698)
Consider the IQ test, created to help sort new recruits mobilized for World War I. The U.S. government asked elite psychology professors to develop a system for gauging intelligence. In hindsight, some of the results were unreliable. In one analysis, testing expert H. H. Goddard identified 83 percent of Jews, 80 percent of Hungarians, and 79 percent of Italians as "feeble-minded" (Mathews, 2006). In one 1921 study, Harvard researcher Robert Yerkes concluded that "37 percent of whites and 89 percent of negroes" could be classified as "morons" (Gould, 1981, p. 227). Yerkes had no concerns about the results because the tests were "constructed and administered" to address potential biases and were "definitely known to measure native intellectual ability" (Graham, 2005, p. 48).
In the 1960s and 1970s, proponents of data and accountability again insisted that they had it right. U.S. Office of Education Associate Commissioner Leon Lessinger (1970) promised,
Once we have standardized, reliable data on the cost of producing a variety of educational results … legislators and school officials will at last be able to draw up budgets based on facts instead of on vague assertions. Through knowledge gained in the process of management, we will also be able to hold the schools accountable for results. (p. 10)
Lessinger was hardly alone; more than 4,000 books and articles on data and education accountability were published in the late 1960s and early 1970s (Browder, 1975). Yet in 2001, No Child Left Behind's architects started from the bipartisan conviction that U.S. schooling was nearly bereft of good data.
In hindsight, it seems clear that would-be reformers have consistently overestimated the potential of data and have used new data in inappropriate and troubling ways. We'd do well to keep this in mind if we intend to do more than repeat past mistakes.

Four Big Problems

For data to fulfill their promise, we need to remedy four problems with how we use them today (see Coburn & Turner, 2012; Little, 2012).

The Problem of Use

To date, there's been more interest in data systems, unit records, and the machinery of data than in how educators are supposed to use these to improve teaching and learning. Educators may be awash in data, but failures in teacher preparation, professional development, and district practices mean that few are equipped to take full advantage of new tools. Few schools or districts have provided teachers with opportunities to develop this expertise. Instead, we just ask teachers to use data "more often" and "better" on top of everything else they already do.
Insisting that educators use data without building reasonable processes for their use can be counterproductive—just another compliance exercise and source of resentment. This is especially unfortunate because using data could be not just another mandate handed down from on high, but a way of empowering educators to make informed decisions about improvement.

The Problem of Judgment

There's a seeming confidence that data will enable schools to just do "what works," allowing science to stand in for imperfect judgment. But this confidence is based on a caricatured view of scientific inquiry. As any real scientist or detective will tell you, questions of which hypotheses to pursue, what data to collect, and how to interpret conflicting and ambiguous information rely heavily on human skill, judgment, and expertise.
To put it another way, using data makes the job harder, not easier, because it requires both technical skill and comfort with the unsettling process of figuring out what you're doing right and wrong and how you know which is which. Like other tools, data are a great aid when wielded skillfully, but we're fooling ourselves if we fail to recognize that tools are only as good as the hands that hold them.

The Problem of Politics

Some reformers suggest that if we just "do what the data tell us," we'll be able to sidestep many messy political fights about policy and practice. This is an old hope—Progressive Era reformers like Cubberley also dreamed of expert rule that would enable schools to shake free from politics. Although we recognize that the spinning wheels of politically driven reform are an impediment to sustained school improvement, we believe it's naive to think that data can or should replace political debates over the values, goals, and purposes of public schools. People will interpret the same findings in different ways.
More to the point, whether studying class size or charter schooling, particular findings on student outcomes will only rarely trump personal views of what constitutes sensible policy and practice. It's most helpful to regard data as one input into an often chaotic political process, as something we can use to inform debates but not to settle them.

The Problem of Purpose

We're often imprecise about what kind of data to use for what purpose. There are two related issues here. The first is that crude data that were designed for public accountability are now being used to manage performance in ways that were never intended. The second is that we don't collect the kinds of data that would be more broadly useful for organizational improvement.
Schools and systems have embraced student achievement data but have paid scant attention to other forms. The result is "data-driven" systems in which leaders give short shrift to the operations, hiring, and financial practices that are the backbone of any well-run organization and essential to supporting educators. For instance, few districts know how long it takes to respond to a teaching applicant, how frequently teachers use formative assessments, or how rapidly school requests for supplies are processed and fulfilled. Absent such information, it's hard to know how data are supposed to help improve crucial practices and routines—or put educators in a position to succeed.

What To Do?

We believe in the promise of data—as a tool, not as a talisman. Practitioners, policymakers, and advocates need to get real about how to use data and what to expect as a result. Four recommendations can help shape this discussion.

Build Human Expertise

Data don't use themselves. At one level, building human expertise entails building technical skills so educators are better able to use data. More ambitiously, it involves building a field in which analysis and inquiry are central to the work.
One much-discussed education system, Finland, makes learning how to research a central part of teachers' graduate training. At the other end of the spectrum, entrepreneurial charters and the new principal and teacher programs they've created have made the use of data and continuous improvement a central part of their work. These approaches can serve as models in different ways.

Create Structures to Support Data Use

Teachers need training, support, and time to use data well. Increasingly sophisticated data use over time requires practice and coaching from mentors or colleagues who are experienced and skilled in using data wisely. Unbundling teacher roles to explicitly create teacher leaders with expertise in this area is one promising path.
Districts and networks can help by listening to school leaders' thoughts on how roles should be divvied up, who should take responsibility for specific roles, and what issues require more or less attention. Data can help inform such deliberations.
Finally, districts should focus on developing external infrastructure. This might include providing educators with expert help figuring out which questions to ask and how to use data—the kind of assistance provided by such programs as the Center for Education Policy Research's Strategic Data Project at Harvard University. Districts should also build management information systems that make it simple to access and examine data, and foster relationships that help educators tap the kinds of expertise and support they need.

Change the Climate Around Data

Because today's data discussion mostly concerns external accountability for schools and educators, it has focused almost exclusively on test scores in reading and math and on graduation rates. Not surprisingly, teachers have viewed this whole enterprise as an intrusion.
Data are more likely to spark improvement when principals and teachers internally champion their use; these educators may be driven to solve an urgent issue in their schools or by a desire to find cost savings. Data are empowering when they become a tool that helps people analyze and act on their environments.

Collect a Wider Range of Data

Test scores are useful and important, but they're not the only form of data. Student writing samples, videos of classroom teaching, and the number of parents who attend parent–teacher conferences are other forms. What data you should collect depends in part on what problem you're trying to solve.
Going forward, educators need to collect more granular and operational data that can help schools improve. For example, what professional development is delivered to which personnel, when, for how long, and by whom—and what are the results? What tutoring or after-school programs are delivered to which students, when, for how long, and by whom—and what are the results? Which schools use which reading and math programs, how well do the schools implement these programs, at what cost—and what are the results?
We've overinvested in data that are useful for public accountability, and we've underinvested in data that improve management or instruction. If we want increased performance, we need to reverse these priorities.

Beyond Number Crunching

Back in 2003, Michael Lewis's book Moneyball: The Art of Winning an Unfair Game made a splash with its exploration of how the Oakland A's general manager Billy Beane used new data tools to better evaluate baseball players. Various thinkers suggested that the same insights could apply to schooling. If we measured what really counts, they argued, we could do for education what Beane had done for the A's—turn a world of conventional wisdom into one ruled by informed analysis.
A decade later, the A's still haven't won a World Series, and the enthusiasm for Moneyball has been tempered a bit. Yes, there's a crucial role for number crunching, but there's also a role for the judgment of scouts and wise decision makers able to weigh the things that matter.
Schooling needs not just numbers, but also the system building that's at the heart of all good teams. Good baseball teams consistently draft well, develop talent, create effective systems of training, and build a winning culture. All of this is informed by data, but it's not just a matter of data.
Education success requires similar attention to selecting, developing, and using talent—and to creating the systems and cultures where it can excel. Yet these things have gotten short shrift, even from some of the most ardent champions of data.
Asking data to bring transformative change all on their own is doing little more than hoping for a last-act deus ex machina. If we were writing the ending to a Greek tragedy, that might work. But if we're trying to improve teaching and learning for 50 million kids? Not so much.

Browder, L. H. (1975). Who's afraid of educational accountability? Denver, CO: Cooperative Accountability Project.

Butts, R. F., & Cremin, L. A. (1953). A history of education in American culture. New York: Holt, Rinehart, and Winston.

Coburn, C., & Turner, E. O. (2012). The practice of data use: An introduction. American Journal of Education, 118(2), 99–111.

Cubberley, E. (1919). Public education in the United States: A study and interpretation of American educational history. Cambridge, MA: Riverside Press.

Duncan, A. (2009, June 8). Robust data gives us the roadmap to reform. Retrieved from U.S. Department of Education.

Gould, S. J. (1981). The mismeasure of man. New York: Norton.

Graham, P. A. (2005). Schooling America: How the public schools meet the nation's changing needs. New York: Oxford University Press.

Lessinger, L. (1970). Every kid a winner: Accountability in education. New York: Simon and Schuster.

Little, J. W. (2012). Understanding data use practice among teachers: The contribution of micro-process studies. American Journal of Education, 118(2), 143–166.

Mathews, J. (2006, November 14). Just whose idea was all this testing? Washington Post.

Murdoch, S. (2007). IQ: A smart history of a failed idea. Hoboken, NJ: Wiley.

Rick Hess is a resident scholar and the director of education policy studies at the American Enterprise Institute (AEI), where he works on K–12 and higher education issues. He also founded and chairs AEI's Conservative Education Reform Network.

Hess's research and writings are found in many scholarly and popular periodicals, including Harvard Educational Review, Forbes, The Hill, Teachers College Record, Phi Delta Kappan, Education Week, Washington Post, and U.S. News and World Report. He also writes Education Week's blog "Rick Hess Straight Up" and serves as an executive editor of Education Next. Hess taught education and public policy at Harvard, Georgetown, and Rice Universities and at the universities of Pennsylvania and Virginia.
