February 1, 2000
Vol. 57
No. 5

Measuring Reading at Grade Level

A simple, reliable, and cost-effective test can indicate whether a child is meeting grade-level expectations in reading.

Parents worry about their children's reading skills. Are they progressing as well as can be expected from 2nd or 3rd graders? Meetings with teachers often fail to provide the information that parents need to hear. "She seems to be doing well. She tries really hard and is making progress" is frequently the message. Or, "Don't worry, all he needs to do is to practice his reading a little bit more and he will do better."
The vagueness of such remarks can raise parental concerns. On what standard or on what evidence did the teacher base that conclusion? How did the teacher know that a given student was doing all right? What does it mean to be "doing all right"? Does it mean trying hard and doing the best that the child can? Or does it mean reading at grade level?
At a time when assessment, testing, and standards have become a central theme for educational renewal, many teachers still do not refer to performance standards to describe children's skills in reading. As a result, some children who "try hard" may lack basic skills, even though their teacher has not identified them as being "below the expected standard" in reading.
This reality, together with a mandate from the state of Washington that all children should read at grade level by the end of 3rd grade, spurred the Mukilteo School District to launch the Victory 1000 project, whose name reflects the roughly 1,000 days from the beginning of 1st grade to the end of 3rd grade. Although the state initiative required testing 2nd graders for reading performance, Victory 1000 aimed at giving teachers, parents, and administrators a barometer of district growth and performance in reading at several points as children moved from 1st to 3rd grade. The project had five goals:
  • To provide reliable and valid feedback on children's reading performance
  • To identify children reading at, above, or below grade level
  • To allow teachers to document student progress over time
  • To offer a testing method minimally intrusive to the instructional process
  • To provide easy-to-understand information to parents

Screening for Reading Performance

A plethora of tests and assessments allows teachers to determine how well a student is reading. Many of these methods, however, are developed by the classroom teacher or by the school district and are not tested for reliability or validity. Commercial or state-developed tests are available, but often require substantial classroom and scoring time.
Mukilteo administrators, in collaboration with representatives from Western Washington University's Woodring College of Education, decided to employ timed oral reading fluency measures to screen all students in grades 1 through 3. Testing would take place in September and early October, November, and May to enable the district to track students' progress over time. The district planned to establish district-level reading fluency norms and targets on the basis of data gathered during the first year. Further, the district could examine the predictive validity of timed oral fluency measures by using high-stakes tests, such as the Iowa Test of Basic Skills, as criterion measures.
Screening tests are controversial. In the past, schools have used tests to determine grade promotions or to identify students who would (or would not) have access to special programs and curricular offerings. Many educators, therefore, view such tests as a symbol of injustice, a way of separating students. The Mukilteo school board decided to take a second look at screening tests, not as a means of separating students, but as a tool for making better pedagogical decisions and for providing reliable and valid information about student progress to teachers, parents, and administrators.
Screening tests are more common outside of education. For example, checking blood pressure is a routine part of a physical exam. This noninvasive test takes only a few seconds to perform, but gives a wealth of information. If the result indicates hypertension, the doctor will order additional tests to find the cause. The screening test serves as a relatively low-cost, reliable way to identify patients who need further attention.
The Mukilteo school board members hoped that they, much like a doctor, could gather information about students' reading performance through a simple, nonintrusive procedure. They hoped that oral reading fluency measures could identify students who needed further attention because they were reading substantially below grade-level expectations.

Oral Reading Fluency Measures

Oral reading fluency measures are a broad group of classroom- and curriculum-based tasks that are sensitive to increments in student performance and learning trends. They are general outcome measures that help teachers answer a broad and important question: For a given child, how effective is the instructional program in producing growth over time compared with other possible strategies? These scores allow teachers to evaluate the effectiveness of their teaching and the different approaches that they use in their classrooms. Although the test does not directly evaluate strategies, it does evaluate students' outcomes and thus serves as a barometer of the effect of the instruction.
Oral reading fluency measures originated at the University of Minnesota in the early 1980s. They were developed as an alternative assessment model in reaction to a concern that standardized norm-referenced test items did not reflect local curriculum. The field of special education has researched and used these measures extensively. With recent emphases on accountability, educators now recognize these measures as appropriate tools for classroom teachers in general education (Bowers, 1993; Marston, Fuchs, & Deno, 1986).
Oral reading fluency measures represent an effective strategy for collecting data on student progress in reading. Students read aloud from a grade-appropriate passage for one minute while the tester records the number of words read correctly. A number of features make these measures useful for tracking children's learning. First, unlike broad, high-stakes achievement measures, oral fluency measures are directly derived from the curriculum. For example, in the case of Mukilteo, students read passages with content and difficulty levels comparable to what they would normally read in class.
Second, oral fluency measures use standardized methods; that is, the measures are administered and data are collected and scored in a consistent fashion from measurement to measurement. Teachers, therefore, can easily compare a child's test results and reading progress across the school year.
In addition to classroom tasks and standardized administration and scoring, oral fluency measures use the rate of performance as a key parameter. That is, oral reading fluency data are recorded and reported as the number of words that a child correctly reads in a minute. Rate is particularly important in reflecting and measuring the extent of automatized word recognition. The notion of automaticity—how quickly and effortlessly a child can process words—is a key factor in explaining why children struggle to achieve reading proficiency. Without the automatic processing of basic skills, a child has little cognitive space left over for more complex thinking skills (Reutzel & Hollingsworth, 1993; Shinn, Knutson, Good, & Tilly, 1992).
A fourth feature of oral reading fluency measures is their efficiency in terms of time and money. They require little time to administer and score, have minimal training requirements, and are inexpensive to construct.
The data of primary interest in an oral reading fluency measure are rate and accuracy. Rate is determined by counting the number of words read correctly in one minute (correct-words-per-minute, CWPM). To determine the CWPM score, the teacher counts the number of words read in a minute and subtracts the number of errors. For screening and monitoring children's progress in reading, teachers count only substitutions and mispronunciations as errors.
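The scoring rule described above is simple enough to express directly. The following is a minimal sketch; the function name and the example numbers are illustrative, not taken from the district's materials:

```python
def cwpm(words_attempted: int, errors: int) -> int:
    """Correct-words-per-minute for a one-minute timed oral reading.

    For screening purposes, only substitutions and mispronunciations
    count as errors; the score is simply words read minus errors.
    """
    return words_attempted - errors

# Hypothetical example: a child reads 87 words in one minute
# with 4 substitutions or mispronunciations.
score = cwpm(87, 4)  # 83 CWPM
```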

Victory 1000: Implementing Oral Reading Fluencies

The school district implemented the Victory 1000 project in three phases over the first year. During phase 1, a workshop introduced teachers to the program. This meeting focused on the rationale for oral fluency testing and trained teachers in testing procedures and data collection. A Web site provided teachers and parents with information about the tests and the procedures. To aid teachers and administrators as they gave and scored the tests and reported data, Western Washington University developed a computer program that allowed teachers to easily enter and clearly display their measurements.
In late September and early October of 1999, 3,332 children in grades 1 through 3 took an oral reading fluency test. First graders, who were generally not reading, took a letter-naming fluency test (Walsh, Price, & Gillingham, 1988). Correlating the letter-naming data to oral reading measures obtained later in the year tested the usefulness of this method. Children in 2nd and 3rd grade read passages from the Multilevel Academic Skills Inventory-Reading (MASI-R).
For phase 2 in October, all students read passages. Schools shared reports with students and parents. These reports compared each child's oral fluency measure with district and national targets (Taylor, Harris, Pearson, & Garcia, 1995).
Phase 3 took place in May. The district used the accumulated data to assess the validity of letter-naming fluency. A correlation between the September and early October letter-naming fluency data and the November oral reading fluency data indicated that letter naming, for nonreaders, is indeed a good indicator of reading achievement. This result was important because the school district needed to establish a norm for letter-naming fluency that could stand as a target for future comparisons. The district also used the data from the three screenings to establish district norms and to assess the validity of oral fluency measures at the 2nd and the 3rd grade levels by using the students' spring ITBS scores for the correlation.
Some interesting results appeared when the oral reading fluency scores were correlated with ITBS subtest scores. For example, students' one-minute oral fluency scores correlated strongly with their scores on the ITBS reading comprehension subtest. This result confirmed the belief that oral reading fluency screening was a valid and reliable tool and an excellent predictor of performance on high-stakes tests. That the correlations were so strong, even in areas outside reading, such as mathematics, calls into question the need for many of the time-consuming, costly tests prevalent in today's evaluation procedures.
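The article does not show the district's statistical procedure, but a correlation analysis of the kind described can be sketched as a Pearson coefficient between two score lists. The data values below are hypothetical, chosen only to illustrate the computation:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical fall CWPM scores and spring ITBS reading scores for
# six students; a strong positive r is the kind of evidence that
# would support using CWPM as a predictor of high-stakes results.
cwpm_scores = [45, 60, 72, 88, 95, 110]
itbs_scores = [152, 160, 171, 180, 188, 199]
r = pearson_r(cwpm_scores, itbs_scores)
```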

Empowering Teachers in the Screening Process

No curricular change or reform effort can be sustained in schools without the support of teachers (Darling-Hammond, 1997). Therefore, after phase 3, the district surveyed the participating teachers about how they valued the Victory 1000 reading fluency measures. Although opinions about the quality of the reading passages diverged across grades, teachers generally viewed the testing favorably and deemed the data to be valuable. In general, the respondents viewed the oral fluency approach as a testing procedure that was easy to administer and understand. Also, teachers valued its nonintrusive nature. They finally had a test that did not require hours of preparation and execution.
A few results stood out. 2nd grade teachers expressed a considerably more favorable attitude toward the testing than did 1st and 3rd grade teachers. A reason might be that 2nd graders were already a target group for state testing, and teachers could substitute the less intrusive oral reading fluency measures for the state test. They generally found the process simpler and more informative than expected and therefore became supportive. For 1st and 3rd grade teachers, however, the test was an add-on, a mandate from the school board and the administration. But even these teachers viewed the oral reading fluency testing as a valuable way to determine reading progress.

Hopes for the Future

We are living in a time marked by a widespread concern for the quality of education in the United States. The increased accountability for meeting national and state standards underpins much of the debate around testing and assessment. In a drive to document educational progress, teachers and students spend substantial portions of instructional time on testing. The success of the Victory 1000 project suggests alternatives.
With the national trend to link assessment to instruction, measures such as oral reading fluencies give teachers an assessment tool that yields important predictive information on the progress of their students in the context of classroom instruction. They can administer and score these simple, quick screening tests and be confident that the scores will provide useful information regarding overall basic-skill proficiency.
As a response to the Victory 1000 program, several efforts are under way. First, oral reading fluency testing will continue, and as new data are gathered, well-grounded district norms and targets will be developed. Second, because the initial, enthusiastic responses from the school board and parents indicate a desire to expand this type of testing to other subject areas, the district has initiated plans to develop a test for mathematics. Finally, many teachers want to use oral fluency testing more often than three times a year to track and assess particular students. A new computer program (available at http://www.wce.wwu.edu/SRSD) allows teachers to effectively save, compare, and display oral fluency data.
Although an oral reading fluency test will not answer all the questions about a child's reading performance, it serves as a good indicator and another way for teachers to quickly identify children who need further diagnostic testing—an important step in the right direction. Current instructional practice is often informed by what the teacher observes and hears rather than through reliable and valid measures. Students often take high-stakes tests at the end of the year, a practice that provides little information to the teacher about how students are doing. Oral reading fluencies, in contrast, allow teachers to monitor students' progress continually and enable them to provide vital information to students and parents about reading proficiency.
References

Bowers, P. G. (1993). Text reading and rereading: Determinants of fluency beyond word recognition. Journal of Reading Behavior, 25, 133–153.

Darling-Hammond, L. (1997). School reform at the crossroads: Confronting the central issues of teaching. Educational Policy, 11(2), 151–166.

Marston, D., Fuchs, L., & Deno, S. L. (1986). Measuring pupil progress: A comparison of standardized achievement tests and curriculum-related measures. Diagnostique, 11, 77–90.

Reutzel, D. R., & Hollingsworth, P. M. (1993). Effects of fluency training on second graders' reading comprehension. Journal of Educational Research, 86, 325–331.

Shinn, M. R., Knutson, N., Good, R. H., III, & Tilly, W. D., III. (1992). Curriculum-based measurement of oral reading fluency: A confirmatory analysis of its relation to reading. School Psychology Review, 21, 459–479.

Taylor, B., Harris, L. A., Pearson, P. D., & Garcia, G. (1995). Reading difficulties (2nd ed.). New York: McGraw-Hill.

Walsh, D. J., Price, G. G., & Gillingham, M. G. (1988). The critical but transitory importance of letter naming. Reading Research Quarterly, 23, 108–122.

From our issue: What Do We Mean by Results?