November 1, 2015 | Vol. 73, No. 3

Commentary / RTI: What Teachers Know That Computers Don't

Standardized test data can't replace teacher expertise in school improvement efforts.

Over the last decade, Response to Intervention (RTI) has fostered a significant shift in daily teaching and learning practices. Identifying students who are at risk and providing them with immediate and intensive instructional support is undeniably a powerful approach.
As a teacher who is fairly new to implementing RTI in our elementary school, I'm excited by its potential. As a theoretical framework, RTI embraces an admirably holistic approach to supporting every child by considering each child's instructional and environmental contexts to determine individualized interventions.
However, some of the assumptions underlying many schools' approaches to the model can have unintended consequences in the real world of classrooms. What are some of the assumptions, and how can we correct them?

Assumption 1. Frequent data collection is crucial.

A fundamental tenet of the model is that "RTI requires the continuous collection of data in order to determine student progress" (O'Meara, 2011, p. 18). The research suggests that the data should be drawn from multiple measures, such as teacher observations, formative and summative classroom assessments, and scientific screening instruments.
Objective, standardized, and scientific measures of student achievement hold a lot of appeal. Teachers inevitably vary in their level of assessment skill and expertise. And despite an increasing commitment to teacher collaboration, teachers have just a fraction of the time they need to compare student work and to practice scoring that work in relation to anchor assessments. What's more, teachers are profoundly influenced by context. A teacher might perceive a student who is an outlier in a cohort of high-performing students to be at risk when, in fact, the student would be considered on grade level by more objective measures. These are all good reasons to include some objective measures when identifying and monitoring students who need Tier 2 and Tier 3 support.
In actuality, however, it's been disheartening to see how uninformative the "objective" data can be. Last year, our district used two different standardized assessments to screen and monitor students' reading comprehension: the FAST (Formative Assessment System for Teachers) and the SRI (Scholastic Reading Inventory). In January, Robin, a vivacious 10-year-old with a vivid imagination, was reported to be in the 51st percentile nationally according to the FAST. But according to the SRI, taken just one week later, Robin was in the 99th percentile. I'd like to take credit for doubling her score in one week of instruction, but it doesn't seem plausible.
The assessments also demonstrated inconsistencies in their measurement of student growth. Consider Kyesha, a sophisticated reader who consistently demonstrates deep, inferential comprehension. When I compared Kyesha's September and January results on the FAST assessment, her growth was far less than expected: she improved by only 3 points, when 10 points is considered "moderate growth" (FAST, n.d.). The SRI, on the other hand, gave her a much higher growth rating. Her Lexile measure was 1114 in September and leapt to 1289 in January, dramatically exceeding the average expected growth of approximately 35 points annually for a student in this Lexile range (Knutson, Scholastic Research, & MetaMetrics, 2011).
In fact, of the 20 students in my 4th grade class, 8 increased their score on one assessment while decreasing it on the other. What's more, the discrepancies ran in both directions: some students went up on the SRI and down on the FAST, while others went down on the SRI and up on the FAST.
I was bewildered by the apparent deterioration in students' reading skill from September to January. In this same incredibly diverse cohort, 82 percent of the students scored in the proficient or advanced category on the rigorous reading portion of the MCAS (Massachusetts Comprehensive Assessment System) the previous spring, when they were in my 3rd grade class. My own daily work with these students assured me that they were deepening their comprehension and tackling increasingly challenging texts across the genres.
How can we explain these inconsistencies? Was a student hungrier on one testing date than the other? Did a student who has unreliable housing have a bed to sleep in one week but not the next? When our turn to test in the computer lab was during the last period, did more students rush through the test, anxious about missing their bus? Were some students more fatigued when they took the second test? Were other students more successful when they took the second test because they had more practice the week before? Were the reading passages on one test more engaging to a particular student?
Vanderbilt University professors Douglas Fuchs, Lynn Fuchs, and Donald Compton (2012) acknowledge some of the limitations of these universal screenings: "Findings from this research frequently show unacceptably high rates of false positives (or students who appear at risk but are not) with one-stage screens, particularly in the early grades" (p. 266). But instead of recommending that RTI programs make room for teacher expertise, the researchers suggest collecting more assessment data, following up the screening assessment with more in-depth testing for students who are flagged.
How does this play out in a real classroom, in a real school? My 4th graders took more than 25 district-mandated assessments in the first half of last year, with many weeks of spring high-stakes testing still ahead. For these students, who were accustomed to deep, interdisciplinary project-based learning, morale went down and disengagement went up. Instructional momentum was fragmented. Kids who were typically animated, curious, and motivated learners became bored, anxious, and apathetic.
Ironically, the more assessments we administer, the less reliable they become. Why? Students aren't fools. They don't mind taking a silly test every year or so to please the grown-ups. But once these instruments become monthly, or even weekly, students no longer feel motivated to put forth their best effort. They breezily click through their adaptive test screens, eager to get back to the delicious novel they are allowed to read after the testing session. Can we really blame them?

Assumption 2. Student progress is linear and consistent.

Another assumption of many progress-monitoring programs is that student progress marches forward with regularity and predictability. Educators know that nothing could be further from the truth. The toddler who knows only 20 words may suddenly begin speaking in three-word sentences; the kindergartner who is just sounding out c-a-t one day can leap into a Magic Tree House book the next; a student who struggles with the subtraction algorithm throughout 2nd grade masters it seemingly overnight in 3rd grade.
Teaching is much more like gardening than baking. When you bake a cake, you add the precise ingredients, put it in the oven, and know exactly what you'll have in 37 minutes' time. As teachers, however, we are cultivating seeds, knowing that the bloom may appear when we're no longer there to admire its beauty.
I am privileged to teach in a looping 3rd and 4th grade classroom, so I have longer to watch my seeds bloom than most teachers do. In the fall of her 3rd grade year, I met an eager English language learner named Sonia who had an incredible will to learn. That September, she scored below the reporting range on the SRI screener, effectively receiving a zero. By June, she had reached a Lexile level of 517. Then in the fall of her 4th grade year, she jumped up to 795, but she slipped to 789 in the winter. What accounted for this negative growth during the fall of 4th grade? The conventional wisdom would be that she had mediocre instruction, but I was her teacher both during the year when she gained 517 points and during the semester when she showed negative growth. Should we panic about Sonia? Or should we celebrate the overall upward trend? And should I, as her 4th grade teacher, be designated as a "low-growth teacher" and potentially suffer professional consequences?

Correcting the Overreliance on Standardized Assessment

I believe in the potential of RTI, but my experiences with implementation have left me with serious concerns about overtesting our students. I have a simple proposal for strengthening the RTI model: Move it closer to home. That is, return the focus of instructional decision making to the classroom teacher. Instead of supplementing the screening assessment with even more testing, combine a more limited set of screening data with teacher observation and analysis of student work over time.
What does this look like in practice?
1. Prioritize formative assessment data over universal screening data. Like most practitioners, I knew within three days of the beginning of school which 3rd graders would benefit from targeted, intensive instruction in math and reading. Hearing students read to me, asking them questions about their thinking, and watching them puzzle over a challenging math task are all practices that give me the data I need to improve my Tier 1 instruction and determine which students need instruction at higher tiers to get to grade level.
At our school's RTI meetings, we don't just discuss the numbers; we discuss the children. We look at authentic student work samples. We consider social-emotional factors that are supporting or impeding academic performance. Although the screening data have a place in our discussions, we don't allow the data to supplant our professional judgment. I am lucky to be in a school where our principal and coaches value my expertise. But many teachers do not have this good fortune.
2. Build reteaching time into the school day. Our 3rd and 4th grade teachers have a daily 30-minute "practice and challenge" block during which we conduct on-the-spot interventions for students who require reteaching. For example, if I notice that Kaya, Jamal, and Phi are struggling with reducing fractions to their simplest form, I simply pull them into a small group during that final block of the day and work with them toward mastery while the students who have demonstrated proficiency with this skill choose from a selection of other academic activities (such as independent reading, free-choice writing, and math enrichment).
3. Build time for collegial discussions of student needs. One significant advantage of the RTI model is that it emphasizes collaborative problem solving. Particularly in elementary schools, where students are generally with the same teacher throughout the day, having time to get support and ideas from colleagues about successful approaches to Tier 1 and Tier 2 instruction is crucial.

What Computers Don't Know

An overreliance on scientific measures and a devaluation of teacher expertise will have deleterious consequences for the whole profession, as new teachers enter a field that doesn't push them to own the formative and summative assessment process but instead asks them to defer to computer-generated reports. If teachers rely primarily on standardized data, they will lose the intimate knowledge of each student that's so crucial to our ability to advance students' skills.
I know if a student is having difficulty spelling "ch" blends because it's a sound that doesn't occur in her native language. I know when a student's writing looks below proficient because of grapho-motor deficits, even though his oral composition dictated to a teacher could rival Charles Dickens for creativity. I know which student understands the concepts behind the multiplication algorithm but doesn't yet have automatic recall with his 7s, 8s, and 9s tables. The computer can't tell me this information.
It's the unique and very human interaction between student and teacher that underlies true assessment—what researcher Vanessa Rodriguez describes as "the intangible synchrony that occurs between teacher and student" (Rodriguez, 2014, p. 133). Computers can't attain synchrony with students, but teachers can. And synchrony allows for much more than perceptive insights about a student's academic strengths and needs.
When teachers are really paying attention to each child—to the whole child, and not just to the child's scores—students feel "known." In classrooms characterized by synchrony, instead of test-driven dread and anxiety, students feel the kind of joy and security that motivates their natural curiosity to learn.
Author's note: Student names are pseudonyms.
References

FAST. (n.d.). Benchmark scores and normative information: 2014–2015 academic year. Minneapolis, MN: Author.

Fuchs, D., Fuchs, L., & Compton, D. (2012). Smart RTI: A next-generation approach to multilevel prevention. Exceptional Children, 78(3), 263–279.

Knutson, K. A., Scholastic Research, & MetaMetrics. (2011). Growth expectations: Setting achievable goals. New York: Scholastic.

O'Meara, J. (2011). RTI with differentiated instruction, grades K–5: A classroom teacher's guide. Thousand Oaks, CA: Corwin.

Rodriguez, V. (2014). The teaching brain. New York: The New Press.

Karen Engels is a 3rd and 4th grade teacher at the Graham and Parks School in Cambridge, Massachusetts.
