
March 1, 2004 | Vol. 61, No. 6

Research on Reading: A Cautionary Tale

Research on reading has generated much controversy and confusion. In an attempt to clarify the situation, education policymakers sought a research method that would offer the final word on what works in reading instruction. Meta-analysis—a statistical procedure that synthesizes the data from a number of existing studies to determine important programmatic effects—appeared to offer great potential for objectivity and even-handedness. Disputes could be resolved by making the fullest use of the research literature. The National Reading Panel (2000) accordingly conducted a meta-analysis to review the best evidence available to guide instruction in reading. Unfortunately, the scientific method used in the Panel's report is flawed.
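To make the procedure concrete: in a fixed-effect meta-analysis, each study's effect size is pooled into a single estimate, weighted by the inverse of its variance. The sketch below is a toy illustration only; the studies and numbers are invented and are not the Panel's data.

```python
# Toy fixed-effect meta-analysis (invented data, not the Panel's).
# Each hypothetical study reports a standardized effect size d and
# the variance of that estimate.
studies = [
    {"d": 0.44, "var": 0.02},
    {"d": 0.20, "var": 0.01},
    {"d": 0.35, "var": 0.04},
]

# Fixed-effect model: weight each study by the inverse of its variance,
# so more precise studies count for more.
weights = [1 / s["var"] for s in studies]
pooled = sum(w * s["d"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"pooled effect size = {pooled:.3f} (SE = {pooled_se:.3f})")
```

Real meta-analyses add moderator analyses and random-effects models, but even this toy version shows where the disputes arise: which studies are included, how treatment and control groups are coded, and how studies are weighted all change the final number.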

Flawed Science

The National Reading Panel's most important conclusion is that its findings “provide solid support” for the idea that systematic phonics instruction is more effective than alternatives in teaching children to read. But Camilli, Vargas, and Yurecko (2003) reanalyzed the same data, also using meta-analysis, and found that the effect size of programs using systematic phonics was only half as large as that reported by the National Reading Panel. This effect, moreover, was substantially smaller than the facilitative effect of one-to-one instruction. Additional analyses have shown that the combined effect of a number of literacy activities appears to be larger than the effect of systematic phonics instruction alone.
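The "effect size" at stake in both analyses is a standardized mean difference (Cohen's d): the gap between treatment and control group means, expressed in pooled standard-deviation units. A minimal sketch, using invented post-test scores:

```python
# Cohen's d for two hypothetical groups (scores are invented).
from statistics import mean, stdev

treatment = [52, 58, 61, 55, 60, 57]  # hypothetical post-test scores
control = [50, 53, 49, 54, 51, 52]

# Pooled standard deviation across the two groups.
n1, n2 = len(treatment), len(control)
s1, s2 = stdev(treatment), stdev(control)
sp = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5

# d = difference in means, in pooled standard-deviation units.
d = (mean(treatment) - mean(control)) / sp
print(f"Cohen's d = {d:.2f}")
```

Because d depends on which groups are compared, a coding choice like the Panel's "more active" versus "less active" contrast directly shapes the effect size that comes out.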
The huge discrepancies between the National Reading Panel study and the analysis by Camilli and colleagues are due largely to differences in the scientific methods used to analyze the data. In our view, defensible studies of reading outcomes must take into account the multiple influences on reading ability and provide accurate descriptions of experimental effects.
Specifically, the process of extracting quantitative information from previous studies requires a more systematic approach. The National Reading Panel researchers decided to compare studies that had “more active” phonics interventions (experimental groups) with studies that had “less active” interventions (control groups). Yet this choice resulted in the experimental groups of some studies resembling the control groups of others. In other words, the more/less principle created a sliding scale and compromised the rigor of the Panel's report. The many influences on a student's reading ability cannot be reduced to such a more/less dimension.
Research studies of reading typically do not compare a pure treatment group to a pure control group. For example, even direct instruction in explicit decoding can be embedded within a comprehensive, meaning-based literacy program. This is why Fletcher and Lyon (1998) noted that gains in reading skills are reinforced by an emphasis on good literature, reading for enjoyment, and a meaning-based context. Unfortunately, the National Reading Panel analysis was designed to ignore the effects of literacy activities, and consequently it confounds the effects of systematic phonics with those of literacy-oriented treatments.
To justify this design decision, some researchers (see Archibald, 2003) have assumed that the literacy effect can simply be added to the systematic phonics effect to arrive at the “true” effect of the latter. But assuming that the phonics effect is the sum of all instructional efforts is the equivalent of false advertising. Such an assumption ignores questions like these: Was there a print-rich environment offering extended opportunities for writing practice? Did the treatment supplement or supplant classroom instruction? A scientific approach to reading research does not automatically attribute to systematic phonics the effects of other factors.

Untangling the Terminology

The small effect for systematic phonics, which was the focus of the National Reading Panel's report, is a less important finding than one might think. Neither the report nor its reanalysis addressed phonics in general. Rather, both focused on systematic phonics, an approach that introduces letter-sound correspondences in a predetermined schema. This is different from phonics as decoding—that is, using the sound and look of a word to figure out what it means in context.
We know of no mainstream researcher or teacher who thinks that decoding is unimportant. But the National Reading Panel's report has confused many consumers of research who loosely mix the terms decoding, phonics, and systematic phonics. A recent editorial, for example, argued that the study by Camilli and colleagues placed “even greater weight on the importance of phonics than did the National Reading Panel” (Editorial, 2003). It is difficult to determine whether the editorial's author understood the difference between decoding skills and systematic phonics instruction. But the issue is not whether “phonics,” or decoding skills, is important. The issue is how to teach decoding skills.

Beyond the “Phonics” Debate

In an attempt to answer the question, What works in reading instruction?, some have argued for a balance of methods that would incorporate both direct instruction and meaning-based approaches (McIntyre & Pressley, 1996; Morrow & Tracey, 1997; Pearson, 1995). Pressley, Rankin, and Yokoi (1996) surveyed the practices of effective reading teachers and concluded that teacher education should “include exposure to a number of approaches and practices intermingling different types of instruction” (p. 380). However, there is no secret formula that will address the needs of every reader. One unfortunate connotation of the balance approach is that the teacher may devise a single curriculum with a mix-and-match strategy to “catch” all students. This approach eliminates the teacher's professional power and responsibility for instructional decision making.
Even the most reductive, scripted reading programs can make the claim of “balance.” We prefer the term differentiation, which implies a different scenario. Using this strategy, the teacher assesses students' needs, determines the appropriate methods to address those needs, and creates individual and group experiences accordingly. Perhaps some students need direct instruction in decoding skills; perhaps none do. With differentiation in mind, a clearer interpretation of the National Reading Panel's findings on phonics instruction may be this: Direct instruction in phonics is necessary for certain at-risk kindergartners, but only if embedded in a print-rich, comprehensive literacy program and delivered in brief, individualized lessons.

Needed: Better Science

Unless researchers describe literacy, language, and phonics within a comprehensive framework, they may continue to conduct studies that have no clear implications for instructional practice. It seems noncontroversial to argue that there is no useful one-dimensional model of reading instruction. Consequently, scientific studies should be sensitive to the mosaic of effects present in any reading environment.
As researchers (Baumann, Hoffman, Moon, & Duffy-Hester, 1998; Morrow & Tracey, 1997) have shown, most teachers think decoding skills are important; these teachers provide daily phonics instruction in their classrooms. In a survey by Pressley and colleagues (1996), 95 percent of teachers reported explicitly teaching phonics. These studies indicate that most students receive phonics instruction, and Camilli and colleagues (2003) found that adding systematic phonics instruction to this base appears to have a small impact for certain groups of students.
The important recommendation here is that systematic phonics instruction may be valuable for selected students when added to a comprehensive literacy program, but it imparts little value when used as the only reading instruction for all students. The efficacy of phonics instruction depends on how teachers use their class time. If a teacher delivers 10 minutes of systematic phonics instruction each day to those students who need it, such instruction may have a high benefit-cost ratio. If phonics activities displace literature-rich and meaning-oriented instruction, however, the benefit of systematic phonics instruction will be greatly diminished.
As educators look for practical guidance about what works in reading instruction, they should know that some findings of “evidence-based research” have been greatly exaggerated. The complexity of classroom-relevant understandings of reading instruction may seem overwhelming, but this is why teachers should make curricular decisions. Effective teachers must understand the particular needs and situations of their students as well as a range of contextualized, evidence-based practices. Questions of whether direct instruction in decoding or phonics is necessary are less important than the questions of when, why, how, and to whom teachers should provide such instruction. These are core questions in the scientific study of reading.

Archibald, G. (2003, June 10). Researchers verify reading ability gets a boost from phonics. Washington Times.

Baumann, J. F., Hoffman, J. V., Moon, J., & Duffy-Hester, A. M. (1998). Where are teachers' voices in the phonics/whole language debate? Results from a survey of U.S. elementary classroom teachers. Reading Teacher, 51(8), 636–650.

Camilli, G., Vargas, S., & Yurecko, M. (2003). Teaching children to read: The fragile link between science and federal education policy. Education Policy Analysis Archives, 11(15).

Editorial. (2003, June 12). Sounding phonics. Richmond Times-Dispatch.

Fletcher, J. M., & Lyon, G. R. (1998). Reading: A research-based approach. In W. Evers (Ed.), What's gone wrong in America's classrooms (pp. 49–90). Stanford, CA: Hoover Institution Press.

McIntyre, E., & Pressley, M. (1996). Strategies and skills in whole language: An introduction to balanced teaching. Boston: Christopher-Gordon.

Morrow, L. M., & Tracey, D. H. (1997). Strategies used for phonics instruction in early childhood classrooms. Reading Teacher, 50(8), 644–651.

National Reading Panel. (2000). Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction. Reports of the subgroups. Washington, DC: National Institute of Child Health and Human Development.

Pearson, P. D. (1995, November). Reclaiming one center: A reading curriculum for all students and teachers. Presentation at the meeting of the California Reading Association, Anaheim, California.

Pressley, M., Rankin, J., & Yokoi, L. (1996). A survey of instructional practices of primary teachers nominated as effective in promoting literacy. Elementary School Journal, 96(4), 363–384.

From our issue: What Research Says About Reading