December 2008/January 2009 | Volume 66 | Number 4
Data: Now What?
Jeffrey R. Henig
Good research can spawn good policy, but only if we understand the limitations and benefits of research—and use a good dose of judgment.
Among the apple pie issues in U.S. education is the notion that good policy should be based on the best and latest research. This is a premise of long standing, embedded, for example, in the training requirements for school leaders who are expected to become literate consumers of education research.
In recent years, there has been renewed emphasis on the imperative to ensure that policy choices are scientifically based. No Child Left Behind mentions "research" 216 times, with more than half of those references specifically indicating that such research should be "scientifically based" (Manna & Petrilli, 2008). The U.S. Department of Education's Institute of Education Sciences, through its grant policies, promotion of randomized field trials, and What Works Clearinghouse, has provided detailed road maps of what greater reliance on strong research design might mean. Research findings and debates get deep coverage in outlets like Education Week and instant coverage in the blogosphere. And advocacy groups appear eager to enlist social science evidence and researchers to add legitimacy to their causes.
Practitioners, policymakers, and the average citizen can be excused, however, if the public spectacle of research leads them to question whether the payoff is worth the hype. On politically contentious policy questions, opposing cliques are ready and able to muster their own stable of researchers and findings to buttress their claims and challenge those cited by the other side. The seeming malleability of evidence reinforces cynicism about the independence and potential contribution of good scientific techniques.
This cynicism is exacerbated when researchers themselves get drawn into the fray, portraying conflicting findings not as part of the normal process through which research gradually circles in on more reliable findings but as the result of disingenuous or incompetent work by those whose interpretations differ from their own. Mark Twain captured the sour view that many thoughtful observers have come to find compelling. "There are three kinds of lies," he wrote, attributing the remark to Benjamin Disraeli: "lies, damned lies, and statistics."
For example, let's look at the uses and abuses of evidence claims in one particular issue area—charter schools. "Charter Schools Trail in Results, U.S. Data Reveals," read the Page 1 headline of an August 17, 2004, New York Times story (Schemo, 2004). The article described a report that the American Federation of Teachers (AFT) had released on the performance of charter schools, a rapidly expanding mechanism of education reform. "The first national comparison of test scores among children in charter schools and regular public schools shows charter school students often doing worse than comparable students in regular public schools," read the opening sentence. The reporter went on to describe the findings as "buried in mountains of data the Education Department released without public announcement," suggesting that the Bush administration, which was a vocal and active proponent of charter schools, had tried to suppress the evidence. For charter school supporters, this was what one observer labeled "a day that will live in infamy."
Within days, proponents of charters and other choice-based and market-oriented approaches to school reform had launched a counterattack. The Center for Education Reform, a Washington, D.C.–based organization supported by conservative foundations, coordinated the placement of a full-page ad in the New York Times, in which a number of prominent researchers both criticized the methodology of the AFT study and took the newspaper to task for failing to subject the report to a more rigorous and skeptical review. At the same time, school choice proponents got busy placing editorials, providing supporters with talking points for discussion on television talk shows, issuing critiques of the study on various electronic newsletters and blogs, and publicizing a report by a Harvard economist, which they claimed was much stronger methodologically and which they said arrived at the opposite conclusion—that charter schools were working very well indeed.
It was not long, however, before these rebuttals themselves were being subjected to intense scrutiny—and to charges that the critics of the AFT report were hypocritically and selectively using the standards of social science to mask and advance an agenda that was grounded more in ideology than in evidence.
For those committed to the notion that good research will spawn good policy, the case stands as a troubling one. It adds to the picture of education research in general, and of school choice research in particular, as being so highly politicized that evidence claims and methodological critiques function more as political weapons than as part of a collaborative effort to genuinely inform democratic decision making. It's important to clarify how research gets politicized in the first place and why that politicization should not get in the way of our understanding the real advantages that come from sound research, responsibly applied.
Most research goes as unnoticed as the proverbial tree falling in the forest. For example, the 2006 Annual Meeting of the American Educational Research Association included 1,946 substantive panels, at which 8,361 papers were presented. A search on Lexis-Nexis found that, in the months immediately following the meeting, only seven articles focusing on research presented at that meeting appeared in U.S. print media.
Occasionally, however, a policy issue becomes aligned with broader ideologically charged debates. One such charged debate in education concerns the degree to which children are naturally inclined to learn and the degree to which they need discipline and training to do so. Another involves the proper allocation of responsibility among family, community, and school. In the case of charter schools, the key polarizing debate is over the proper role of government as opposed to markets.
When issues come to be understood in terms that neatly overlay the standard ideological alignments, they readily become proxies for the big war between Right and Left. Much of day-to-day policymaking and implementation takes place in a less contentious environment. Some issues are highly technical and do not arouse great passions. Some are essentially matters of divvying up resources, deciding, for example, which states, congressional districts, or schools get a bigger share of public dollars—and these can be settled through logrolling and split-the-difference compromise.
In high-stakes ideological battles, however, key interest groups are unwilling or unable to find common ground. They resist any movement toward a position of compromise as a slippery path toward disaster. Some hear a study suggesting that very young children are intellectually curious as an invitation to wholeheartedly adopt child-centered pedagogies. Others take a finding suggesting that direct instruction provides some benefits for young children from chaotic neighborhoods or families as a mandate to constrain all teachers in a tightly scripted curriculum. Research suggesting that vouchers or charter schools perform badly or well is seen as fueling either a Leviathan government that maintains iron-fisted control of schools or a Wild West scenario in which private school providers run amok and the only consumers who count are those with cash in their pockets.
When this happens, the pressure to control messages relating to research can become intense. One way to do this is by directly funding or conducting research. Advocacy groups, aided at times by supportive foundations, can produce their own studies or seek out sympathetic researchers whom they can reasonably count on to generate results that fit the political message of the sponsoring group.
But probably at least as important as creating research is the role these groups play in disseminating research to broader audiences. Under normal circumstances, social science research percolates slowly and within small niches. A given study might take two to three years from conceptualization to completion and then another two years to make it through the peer-review process and appear in a scholarly journal.
However, when partisan and ideologically driven stakeholders see the findings as relevant to ongoing debates, they can and often do reach out to accelerate the process. Taking advantage of the Web and various electronic media, they broadcast distilled summaries of the findings, often glossing over the researchers' caveats and oversimplifying the policy implications. When the researchers are willing, these groups will sometimes more aggressively enlist them in the dissemination effort, providing them with media training to help them present their work in terms that—compared with the somewhat dense, abstract, and stuffy language that frequently characterizes academic discourse—are crisper, more definitive, and linked to concrete examples instead of theory.
The politicization of research can be dispiriting, but there are some encouraging signs. The quality of the empirical research on education policy issues is on the upswing, and even in controversial areas like charter schools, there are indications that research is converging on a more nuanced and informative view.
The research is getting better for at least three reasons. First, the field of education research is attracting better-trained researchers. Although there have always been important exceptions, the quality of research and research preparation in the typical education school has historically been mediocre, especially when it comes to sophisticated quantitative techniques. Scholars housed in the academic disciplines—political scientists, economists, sociologists, and the like—for many years ignored education as a focus for their research. They tended to see it as somewhat boring, parochial, and overly applied.
Since the 1980s, however, reforms such as vouchers, charter schools, high-stakes testing, and No Child Left Behind have stirred the waters. Economists interested in the role of markets, political scientists interested in federalism and the politics of privatization, and sociologists interested in the consequences of organizational redesign have suddenly seen education as a fruitful field in which to test hypotheses that relate to the key debates within their disciplines. These discipline-based newcomers to education research have often lacked deep substantive knowledge of schools and schooling, but increasingly they are becoming drawn in more deeply. And although some within the education community have been scornful of the "interlopers," certain education schools are rising to the bait and improving the quality of their research faculty and of their training of new education researchers.
Second, much better data are becoming available for isolating the factors affecting student achievement. State governments have dramatically improved the quality of their administrative data systems. This investment was partly prompted by the pressure from No Child Left Behind but partly emerged in response to the states' own need to better track students as they move through and among traditional and charter schools. States like Florida, Texas, North Carolina, and New York have begun making individual student-level data available to researchers. The result is a growing wave of studies that are able to isolate the effect of schools and teachers from the myriad other factors—such as race, family socioeconomic status, English proficiency, and student mobility. Such factors often have powerful consequences for student learning that can be mistaken for school effects if they are not independently measured and taken into account analytically.
Third, the federal pressure to improve the quality of education research has raised the ante when it comes to attention to research design. These federal efforts have stirred some controversy. By heavily emphasizing the use of randomized experiments as the gold standard in research designs, the Institute of Education Sciences provoked something of a backlash among researchers, who legitimately noted that such models were not always possible or even appropriate and that an array of other research strategies, if done well, can be extremely important in building scientific knowledge.
But even if overemphasized, randomized designs are indisputably a powerful investigative tool, and some large-scale experimental studies—like the multiyear study of the Washington, D.C., school voucher program (Wolf et al., 2008)—are certain to provide important findings. Moreover, the debate engendered by the federal government's efforts arguably has had positive spillover effects, making education researchers in general much more self-conscious and explicit about causal inference, thereby improving the quality and interpretation of studies that employ nonexperimental designs.
Although the public presentation of charter school research today is nearly as contentious as it was when the AFT report made waves in 2004, beneath the radar there is a growing convergence on a set of findings that fit neither the rosy predictions of the early advocates nor the dire fears of the early critics.
Such findings do not tell local communities or school leaders precisely what to do. They do, however, set the stage for public deliberation that is more civil, pragmatic, and informed.
No single research study can ever provide the kind of definitive and universal answers that policymakers and citizens seem to yearn for ("Just tell me yes or no whether this will work"). There are a number of reasons why this is so. Some have to do with fundamental limitations of data; measures, no matter how precise, are always one step or more removed from the concepts that we really are interested in. Test scores provide only an approximation of learning. Free lunch eligibility is a crude proxy for complex notions of class.
Some of the reasons have to do with the incredible complexity of untangling causal relationships in a complicated and multicausal world. And some have to do with the fact that time, context, and particularities of policy implementation matter: Policies that work under times of enrollment growth might fail in times of contraction; initiatives that work in racially and economically more homogenous settings might have different dynamics when poverty is concentrated or neighborhoods are more sharply segregated; reforms put in place by talented leaders and capable bureaucracies can fizzle when done amateurishly or without imagination and spark.
Although individual studies cannot carry the weight of contemporary expectations, research as a collective enterprise—comprising multiple studies, attacking questions from differing angles, and exposed to the checks and balances of studies in differing settings and using differing methodologies—can enrich our knowledge base. But sifting through accumulated studies to discern the weight of the evidence requires at some point the application of judgment. Research can inform decisions but cannot, in itself, displace the need for judgment.
Manna, P., & Petrilli, M. (2008). Double standard? "Scientifically based research" and the No Child Left Behind Act. In F. M. Hess (Ed.), When research matters: How scholarship influences education policy (pp. 63–88). Cambridge, MA: Harvard Education Press.
Schemo, D. J. (2004, August 17). Charter schools trail in results, U.S. data reveals. New York Times, p. 1.
Wolf, P., Gutmann, B., Puma, M., Kisida, B., Rizzo, L., & Eissa, N. (2008). Evaluation of the DC Opportunity Scholarship Program: Impacts after two years (NCEE 2000-4023). Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education.
Jeffrey R. Henig is Professor of Political Science and Education at Teachers College, Columbia University. He is the author of Spin Cycle: How Research Is Used in Policy Debates: The Case of Charter Schools (Russell Sage Foundation Publications, 2008).
Copyright © 2008 by Association for Supervision and Curriculum Development