
December 1, 2008 | Vol. 66, No. 4

Researchers and Educators: Allies in Learning


We know a great deal about the U.S. education system: the numbers and makeup of the students and staff in schools; the overall differences in student performance by racial and ethnic group, family income, and gender; the amount districts and schools spend and what they spend it on; the numbers of students going to postsecondary institutions; and so on.
At the same time, we know relatively little about the effectiveness of education programs, practices, and policies. Many education studies of the past few decades have relied on measurement approaches that are not rigorous enough to prove or disprove the effectiveness of the interventions they study. The result is that educators continue to use interventions or approaches whose underlying effectiveness may be limited or negligible. This is as frustrating for educators as it is for buyers of a product who discover it does not live up to its claims.
The quality and relevance of research in this area have improved in recent years because of federal requirements that instructional strategies and professional development be grounded in "scientifically based research." (See Shavelson & Towne, 2002, for a fuller discussion of the value of scientifically based research in education.) A range of research falls under the definition of scientifically based, but an exemplar is the randomized controlled trial—the clinical trials that are common in the medical field. The U.S. Department of Education's Institute of Education Sciences has made funding and promoting these types of studies a priority.
This recent emphasis on controlled trials is a welcome sign. In a typical education study of this kind, students are randomly assigned to a control group, which receives traditional instructional strategies and materials, or to a treatment group, in which they are taught using the intervention under study. Random assignment minimizes the chance that preexisting differences between the groups, rather than the intervention itself, explain differences in student outcomes, and it enables researchers to make causal statements linking those outcomes to the intervention. These kinds of studies provide decision makers with valid information about what works to improve student achievement in their school, district, or state; they can then allocate resources to products or approaches with proven effectiveness.
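The logic of random assignment described above can be sketched in a few lines of code. This is a minimal illustration, not a description of any actual study: the roster, group sizes, and simulated test scores are all invented for the example.

```python
import random
import statistics

def randomize(students, seed=42):
    """Randomly split a roster into equal treatment and control groups."""
    rng = random.Random(seed)
    shuffled = students[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Hypothetical roster of 200 students.
students = [f"student_{i}" for i in range(200)]
treatment, control = randomize(students)

# Simulated post-test scores (purely illustrative): a baseline of 70,
# noise, and a 5-point boost for the treatment group.
scores = {
    s: 70 + (5 if s in treatment else 0) + random.Random(s).gauss(0, 10)
    for s in students
}

# With random assignment, the difference in group means estimates the
# causal effect of the intervention, because nothing other than chance
# determined who received it.
effect = (statistics.mean(scores[s] for s in treatment)
          - statistics.mean(scores[s] for s in control))
print(f"estimated effect: {effect:.1f} points")
```

The key property is that group membership is decided by the coin flip alone, so any systematic difference in outcomes can be attributed to the intervention rather than to how students were selected.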
We need to couple the encouraging shift toward controlled trials with real use of the knowledge gained from them. It is not yet clear that educators are doing much with this new kind of information. Most educators are not trained to evaluate data and research or translate findings into on-the-ground practices or approaches that can improve student learning. The need for active channels of dissemination and translation is clear.

The Current State of Research

Does today's education research provide reliable and rigorous evidence about the programs, practices, and policies that help students achieve? The short answer is, not really. My work for the past three years as a principal investigator and director of the What Works Clearinghouse has shown me that we have a ways to go to reach this goal.
The What Works Clearinghouse was established in 2002 by the Institute of Education Sciences to provide educators, policymakers, and the public with a central and trusted source of scientific evidence about what works in education. The clearinghouse has found that, although rigorous studies exist, they are not the norm. In one review of reading programs for students in grades K–3, for example, clearinghouse researchers examined 887 studies of 153 reading programs. Of these, only 27 studies were randomized controlled trials that met the clearinghouse's evidence standards. Another 24 studies—mostly "quasi-experimental" studies that used statistical approaches to emulate true randomized controlled trials—met the standards with reservations. That only 16 percent of the reading interventions were supported by research meeting the clearinghouse's minimum bar for rigor is a finding in and of itself. Similar rates were found for studies of interventions related to middle school math, English language learning, early childhood education, and dropout prevention.
Some of the reasons for this low number may lie with the research community, some with educators, and some with the culture of education itself. A look at the field of medicine—and how doctors use data in their field—will provide some insights.

A Study in Contrasts

The fields of education and medicine share a need to understand what works to advance the goals of improving learning or improving health, respectively. Both fields engage in a continual process of developing and deploying interventions to achieve these goals. Yet the standards and processes used to assess the effectiveness of interventions differ markedly, as do the training and contextual experiences of those in the field.

Medicine and Research

From the moment they begin their training, perhaps as science majors or in pre-med programs, doctors are expected to use data and research findings in making decisions—and they expect the data and research to be of high quality. They use and create these data and studies in a culture in which randomized controlled trials are an accepted standard of rigor. Doctors are held to treatment standards and can be found liable if they do not provide appropriate treatment related to these standards. They know that the Food and Drug Administration requires controlled trials to evaluate new drugs and that new treatment regimens must be supported by strong evidence—such as that yielded by clinical trials—before professional organizations will accept them. This context creates in practitioners an expectation that they will use research throughout their careers.
Researchers in the health field have responded to this demand for rigorous research with decades of experimental studies. For example, a Cochrane Collaboration review identified 67 trials of "topical treatments for fungal infections of the skin and nails of the foot." That millions of people suffer from athlete's foot may explain why its treatments have been studied so often. But then again, millions of children receive reading instruction every year, and there have been relatively few rigorous studies of those reading "treatments."
Research literacy also has instilled caution in the medical field when discussing results. Discussions distinguish between whether a study finds differential outcomes (a correlation that may not be causal) or effects (a causal relationship between the treatment and outcomes observed). These distinctions have brought to light causal relationships between risk factors and diseases, between drugs and changes in patient health, and between courses of treatment and patient outcomes.

Education and Research

In contrast to the training that doctors receive, education training programs put little emphasis on teaching students to conduct research or interpret and use research findings. The vast majority of education school graduates receive their degrees without being exposed to a rigorous research study. It is therefore not surprising that many educators may not rely on research findings and evidence to make decisions.
Researchers studying education issues—many of whom may want to use experimental studies—also face resistance from educators about creating control groups. Fearful that students in a control group will miss out on an intervention they assume to be effective (perhaps by virtue of its newness), school and district leaders might decline to participate in experimental studies that could provide definitive evidence for what does work. Not participating reduces the risk that parents will complain that their children are being harmed by not getting the intervention. But if the intervention is ineffective, not receiving it is exactly what a parent should want!
Raising expectations for education research will benefit the field and the practice of educators at all levels. Most important, it will benefit students themselves, who are confronted every day with a hodgepodge of instructional strategies and instructional materials—some of which may work well, but some of which may not.

Questions of Concern to All

A shift in thinking at the federal level and new research designs are giving school and district leaders a reason to support better research. In the past, researchers designed studies to answer their questions. Researchers and funders are now seeing the importance of district buy-in—of collaborating with district educators to answer questions the districts may have—to conducting long-term, rigorous studies.
Consequently, the Institute of Education Sciences is doing more to meet districts' needs by funding studies that, in addition to answering researchers' questions, provide districts with useful information they might not otherwise be able to collect. Partnering with researchers, education leaders can help their students, deepen their own understanding of what works in schools and classrooms, and advance knowledge in the field.
For example, in a recent study of educational technology in which I participated (Dynarski et al., 2007), researchers crafted a joint question: Is education technology effective in improving academic achievement, and if so, under what conditions and practices? Instead of simply studying the technology that districts already had in place, researchers offered new software for improving reading and math skills as well as new hardware for districts to try out.
There was little downside for the districts—their current infrastructure was not in danger of being undermined, and they stood to gain technology and to practice using software with their students. Districts were eager to participate, and recruitment went fast. Fifteen products were tested in 132 schools across the United States. The research team assessed student achievement for more than 9,000 students, finding no link between the use of the technology and achievement after one year. A forthcoming report will examine effects after a second year of use.
This kind of information is useful for the districts that participated. They now have a better sense of both the effectiveness of the software products and the real demands of using them in classrooms. It's also useful for researchers and policymakers who are striving to understand how to improve software effectiveness and whether the products merit investments of public dollars.
Another example is a recent study of math curriculums that the Institute of Education Sciences is conducting with Mathematica Policy Research. School districts often have little evidence on which to base textbook adoption decisions, which affect all aspects of teaching and learning. The researchers offered districts the chance to use four textbooks and to train their teachers on their use. The textbooks were selected in response to public submissions from publishers and reviews of submissions by experts in mathematics education. Fifteen districts and 100 schools across the United States agreed to participate, and a report from the study is forthcoming. The information from the study will be useful for districts, which want to know which textbook works best with their students, as well as for researchers, who are trying to understand the effectiveness of various approaches for teaching math.
These research designs provide rigorous findings but also fit well with the management and operational structures in which districts operate. The opportunities are there, and districts and researchers working together to identify and exploit these opportunities will benefit everyone. Districts should promote their desire to innovate, to search for and use the best methods possible, and to be clear with the public that doing so means, on occasion, that some schools, teachers, and students will use newer methods and that the district will assess whether these newer methods are, in fact, better. This is a far sounder approach than simply relying on positive reports from teachers or on scrutiny of test score gains.

From Research to Practice

Although federal efforts, such as the U.S. Department of Education's Doing What Works Web site, may need to go further to increase educators' capacity to use research, there has been a gradual shift toward acceptance, and even expectation, of experiments and research-informed decisions.
At the same time, we need to do more to distill research findings and make them available to school districts and teachers. Researchers and funders can help craft recommendations for practitioners that are grounded in evidence and tested in real-world settings. School and district leaders can identify staff members responsible for keeping others informed of the latest research and for finding ways to collaborate more closely with researchers, propose studies to funders, or express needs for knowledge and support once studies are launched. District staff can work with university researchers, research organizations, or regional laboratories to clarify district needs and requirements, identify options for conducting the research, and ensure the research is both practical and sound.
For example, a district may be interested in using guided reading. Researchers could survey literature on its effectiveness and propose a pilot in which some schools use guided reading and others continue with their current reading approach. By randomly selecting from among schools that are interested in participating, researchers would create a randomized controlled trial through which the district will get a clear answer to the question about the effectiveness of guided reading, and researchers will have conducted a useful study that perhaps merits publication in an academic journal.
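A pilot like the guided reading example differs from the student-level case in one respect: whole schools, not individual students, are randomized. The sketch below illustrates that school-level assignment; the school names and pilot size are hypothetical.

```python
import random

def assign_schools(volunteer_schools, n_pilot, seed=7):
    """Randomly pick which volunteering schools pilot the new approach
    (here, guided reading); the rest keep their current reading program
    and serve as the control group."""
    rng = random.Random(seed)
    pilot = rng.sample(volunteer_schools, n_pilot)
    control = [s for s in volunteer_schools if s not in pilot]
    return pilot, control

# Hypothetical list of schools that volunteered for the pilot.
schools = [f"School {chr(65 + i)}" for i in range(12)]  # School A .. School L
pilot, control = assign_schools(schools, n_pilot=6)
print("guided reading:", pilot)
print("current approach:", control)
```

Because only schools that volunteered are in the pool, every participant had an equal chance of receiving the new approach, which answers the fairness objection while still producing a true experiment.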
Involving others who play a role in teacher training and professional development—including schools of education and professional associations for educators—is also important in creating a cycle of using data and research in making decisions, expecting data and research to be of high quality, and finding ways to connect research and practice in the pursuit of continuous improvement.
Medicine again provides an example in which recognition of practitioners' needs has spawned a whole new type of research. This translational research, whose primary aim is to connect experimental findings with clinical practices, has made rigorous research even more accessible to doctors. Its purpose is simple and easily transferred to education: "Basic scientists provide clinicians with new tools for use in patients and for assessment of their impact, and clinical researchers make novel observations about the nature and progression of disease that often stimulate basic investigations" (National Institutes of Health, n.d.). The National Institutes of Health currently operates 24 translational research centers and plans to operate 60 research centers by 2012.
The What Works Clearinghouse attempts to bridge this same divide on a smaller scale with its practice guide series, available on the clearinghouse's Web site. These publications distill research findings into guidance for educators on such topics as teaching English language learners, turning around low-performing schools, improving adolescent literacy, and reducing dropout rates. They include recommendations, the level of evidence supporting the recommendations, and detailed checklists for carrying out the recommendations.
These are good first steps and appropriate ways in which government-funded research can help decision makers and practitioners, but much more remains to be done. As a field, we need to not only design studies that help participants, researchers, and other educators, but also ensure that future teachers and education leaders are exposed to rigorous research in their training programs—and that the training clearly demonstrates the power of research to answer pressing questions. Developers also can play a role by using stronger research designs to study and refine their products.
These changes will not happen quickly, but the payoff could be great. The life expectancy of Americans has risen from 47 years in 1900 to 77 years in 2003 because of, among other things, advances in health care driven by research findings. Increases in student achievement of this magnitude are perhaps unlikely in the short term, but statistics like these demonstrate what a field focused on using research to improve practices can accomplish. If teachers, principals, district leaders, researchers, and federal officials work together to learn more about effective instructional practices, education programs, and curriculum strategies, we can learn more about what works in school and how to apply it.

References

Dynarski, M., Agodini, R., Heaviside, S., Novak, T., Carey, N., Campuzano, L., et al. (2007). Effectiveness of reading and mathematics software products: Findings from the first student cohort. Washington, DC: U.S. Department of Education, Institute of Education Sciences.

National Institutes of Health. (n.d.). Translational research. In NIH roadmap for medical research. Bethesda, MD: Author.

Shavelson, R. J., & Towne, L. (Eds.). (2002). Scientific research in education: Committee on scientific principles for education research. Washington, DC: National Academies Press.

End Notes

1 The Cochrane Collaboration is a group of approximately 11,500 volunteers in more than 90 countries who apply a rigorous, systematic process to review the effects of interventions tested in biomedical randomized controlled trials.

2 The study is fully funded by the Institute of Education Sciences and is being carried out by third-party evaluators who have no stake in the outcomes.
