December 1, 2007 | Vol. 65, No. 4

Data in the Driver's Seat

Two New Jersey schools discover the benefits of interim assessments, clearly defined standards, and data-driven instruction.

Our story starts with two public middle schools in Newark, New Jersey. Both had student populations representative of Newark's Central Ward, where 90 percent of students qualify for free or reduced-price lunch and 85 percent are black. Students in both schools were generally well behaved and academically on task.
Despite the two schools' similar student populations, their 2003 achievement results revealed two very different pictures. One school, Greater Newark Academy, was in a tailspin: Only 7 percent of its 8th grade students had passed the state math test. The second school, North Star Academy, had more respectable results—well above the district average—but it was still behind its suburban New Jersey counterparts.
Over the ensuing four years, each school made massive gains in student achievement, outstripping the district average by at least 30 points on both math and English/language arts assessments and surpassing the statewide average in almost every category. How did these two schools end up with such tremendous student achievement? Therein lies our story.
Beginning in the 2002–03 school year, North Star Academy launched a model of data-driven instruction with interim assessments at the center, and Greater Newark followed suit the next year. In this case, interim assessments are defined as assessments given every 6 to 8 weeks throughout the school year to measure student progress toward meeting standards. Many schools are using interim assessments today, but not all are seeing such strong achievement gains. What separates those schools that use interim assessments effectively from those that do not? Certain key drivers of data-driven instruction made these two schools—and many more like them—so successful.

Assessment: Great Expectations?

Consider the following six questions, all of them standards-based problems involving percentages:
  1. What is 50% of 20?
  2. What is 67% of 81?
  3. Shawn got 7 correct answers out of 10 possible answers on his science test. What percentage of questions did he answer correctly?
  4. J. J. Redick was on pace to set a college basketball record in career free throw percentage. Going into the NCAA tournament in 2004, he had made 97 of 104 free throw attempts. What percentage of free throws had he made?
  5. J. J. Redick was on pace to set an NCAA record in career free throw percentage. Going into the NCAA tournament in 2004, he had made 97 of 104 free throw attempts. In the first tournament game, Redick missed his first five free throws. How far did his percentage drop from right before the tournament game to right after missing those free throws?
  6. J. J. Redick and Chris Paul were competing for the best free throw percentage. Redick made 94 percent of his first 103 shots, whereas Paul made 47 of 51 shots. (a) Which one had a better shooting percentage? (b) In the next game, Redick made only 2 of 10 shots, and Paul made 7 of 10 shots. What are their new overall shooting percentages? Who is the better shooter? (c) Jason argued that if J. J. and Chris each made their next 10 shots, their shooting percentages would go up the same amount. Is this true? Why or why not? Describe in detail how you arrived at your answers.
Note how the level of difficulty increases with each question. For the first question, a student could understand 50 percent as one-half and determine the answer without actually using percentages. Questions 3–6 could be considered attempts at real-world application or critical thinking, but Question 6 requires far more critical thinking and conceptual understanding than any other question. Despite these drastic differences, every one of the questions is standards based. This leads to the central point about the relationship between standards and interim assessments: Standards are meaningless until you define how you will assess them.
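To make the jump in rigor concrete, here is one worked solution to Question 5 (my figures, rounded to one decimal place):

\[
\frac{97}{104} \approx 93.3\%, \qquad \frac{97}{104+5} = \frac{97}{109} \approx 89.0\%,
\]

a drop of about 4.3 percentage points. A student must see that the five misses change the denominator but leave the numerator alone, a conceptual step none of the earlier questions demands.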
In many schools, teachers define the standards according to their own level of expectation, and those expectations vary radically from one classroom to the next. Thus, different teachers teach to different levels of mastery. We cannot expect every teacher to teach the skills required for complex problems like Question 6 as the standard for learning if that expectation is not explicit and transparent.
To help teachers hold their students to a common standard of rigor, Greater Newark Academy and North Star Academy shared the same interim assessments that North Star originally designed in alignment with New Jersey state tests. In this way, they defined one common level of mastery to which every grade-level teacher should teach. Teachers saw the assessments before teaching their unit so that they could plan their lessons with those expectations in mind. The assessments were administered every eight weeks, and the tests measured every standard that had been taught up to that date. Thus, the first step on the path to high student achievement was established: transparent, common, rigorous assessments.

Analysis: Watching "Poolside"

High-quality assessments do not guarantee student achievement; neither does the analysis of assessment results. For example, imagine a swimming coach trying to analyze the performance of his team. If he picked up the newspaper the day after the meet and read the times of his third-place swimmer, he might decide that she just has to swim faster. Yet if he had watched that swimmer at the meet, he would have noticed that she was the last one off the starting block but the fastest one in the water. At that point, his analysis would be clear: He needs to focus on getting her off the block faster.
School assessment analysis is no different. Looking at state test or interim assessment results in isolation is like reading a newspaper summary of a sports event: You can only draw summative conclusions, and those conclusions might actually be inaccurate. You have to be "poolside" to analyze effectively.
North Star developed a spreadsheet that teachers in both schools used to analyze results on the interim assessments, but the key factor was having teachers go back to the test to look at individual questions. Teachers in the two schools received results on the day after each assessment. They could then examine the data to determine where the students erred. Seeing which distractors students chose on the multiple-choice questions and examining student work on open-ended questions helped teachers recognize what students misunderstood and plan their instruction accordingly.
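The article does not reproduce North Star's spreadsheet, but a minimal sketch of the same item-level tally, written here in Python with an invented file layout and answer key, might look like this:

```python
import csv
from collections import Counter, defaultdict

# A minimal sketch, not North Star's actual tool. Assumed layout:
# responses.csv holds one row per student, with columns q1, q2, ...
# containing the multiple-choice option each student picked (A-D).
ANSWER_KEY = {"q1": "B", "q2": "D", "q3": "C"}  # illustrative key, not the real test's

def tally_distractors(path="responses.csv"):
    """Count how many students chose each option on each question."""
    counts = defaultdict(Counter)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            for q in ANSWER_KEY:
                counts[q][row[q].strip().upper()] += 1
    return counts

def report(counts):
    """Print percent correct per item and, crucially, which wrong
    options drew students -- the item-level 'poolside' view."""
    for q, key in ANSWER_KEY.items():
        total = sum(counts[q].values()) or 1
        print(f"{q}: {counts[q][key] / total:.0%} correct (key = {key})")
        for choice, n in counts[q].most_common():
            if choice != key:
                print(f"    distractor {choice}: {n} students")

if __name__ == "__main__":
    report(tally_distractors())
```

The printout mirrors the article's point: percent correct alone is the newspaper summary, while the distractor counts are the poolside view that tells a teacher what to reteach.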
For example, a 6th grade math teacher thought her students had problems with rates until she analyzed the question more closely. The question was, "How far would Jennifer travel in 2 1/4 hours if she drove 36 miles per hour?" The teacher analyzed the students' answers and discovered that most chose Answer C: 72 miles, instead of the correct answer of 81 miles. Thus, the students actually understood rates—because they multiplied 2 hours by the 36 miles to get 72—but they didn't know how to multiply by a mixed number (2 1/4 × 36).
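Spelled out, the distinction the teacher uncovered is a single line of arithmetic:

\[
2\tfrac{1}{4} \times 36 = \frac{9}{4} \times 36 = 81, \qquad \text{whereas} \qquad 2 \times 36 = 72.
\]

Students who chose 72 had the rate reasoning right and simply dropped the quarter hour, which accounts for the missing 9 miles.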
Greater Newark and North Star were able to avoid the mistakes of the swim coach by doing item-level, test-in-hand analysis. This enabled teachers to make solid, root-cause analyses, which in turn facilitated far more effective action plans. Being "poolside" made all the difference: Assessments and analysis were now linked.

Action: Taking Data out of the Binder

Even with high-quality interim assessments and effective analysis, student achievement will not improve without targeted follow-through. Most research about highly effective schools focuses on developing an action plan for reteaching particular standards (Symonds, 2003). Following this advice, schools often develop data binders containing analyses and action plans based on the previous round of assessments and keep a copy in the main office or in each classroom.
Yet the key question remains: Where is that information when teachers plan lessons? If a teacher plans lessons on Sunday night and the data binder is in the classroom, then the effect on teaching is greatly diminished. Action plans must be connected to lesson plans, which need to translate to changes in teaching.
Teachers at Greater Newark and North Star developed six-week action plans based on interim assessment results, and the most successful teachers had those action plans in hand when planning lessons. A 5th grade literacy teacher, for example, learned that her students could make basic inferences and identify the main idea, but they couldn't keep track of the sequence of events, nor could they identify the evidence in the text that supported their inferences. So the teacher redesigned her guided reading lessons to ask more questions related to these skills, and she created scaffolded guides to teach these skills more efficiently.
Teachers also used the action plans to design targeted tutoring sessions and differentiated small groups. Some teachers actually stapled their action plans to the front of their lesson plans to remind themselves of the connection between their assessment analysis and their teaching. The seamless coherence among assessments, analysis, and action creates the ideal classroom environment for significant gains in student learning.

Buy-In: Chicken or Egg?

Much research has been done about the data-driven culture of high-achieving schools, especially the role of teacher buy-in (Datnow, Park, & Wohlstetter, 2007). Unfortunately, the research has not adequately answered the question of whether that buy-in was a prerequisite for success—a true driver of achievement—or a by-product of a data-driven culture. An example from one of our two schools helps address this question.
When North Star launched its data-driven instruction model in 2003, most teachers were ambivalent about whether using interim assessments would have any effect. Some wondered, "Don't we already assess our students and analyze their progress?" A few were outright resistant.
Before the first interim assessment, North Star's leaders had teachers predict the performance of their students by labeling each question in one of three ways: Confident (students would answer correctly); Not Sure (students might or might not answer correctly); or No Way (students would not answer correctly). When the results came in, teachers were shocked to see that their students performed far worse than they expected. They implemented the three principles mentioned previously: using the assessments to evaluate the rigor of their teaching, doing test-in-hand analysis, and applying targeted action plans when planning lessons. They also pored over the next assessment in advance, hungry to prove that they could do better. On that next assessment, almost every teacher had students show gains in achievement.
While the school celebrated these improvements, some teachers still resisted the process. One teacher in particular, Ms. J, was adamant that she was not really improving her teaching and was only teaching to the test. At the end of the 2003–04 school year, school leaders compared her results with those from the previous year and saw that her current students had made much stronger gains in reading and language than her students had the year before, when there were no interim assessments. The teachers and school culture were the same for both cohorts; the only thing that had changed was the effective implementation of interim assessments. Yet although Ms. J clearly saw the gains her students had made, she still did not fully endorse the process.
Two years later at a faculty meeting, teachers debated shortening one part of the analysis and action plan process. Ms. J stood up and firmly defended keeping the process as it was because of the incredible value it added to her teaching. In two years, this teacher had gone from being the most vocal opponent to being an ardent supporter. The results came first; the buy-in came next. Data-driven instruction does not require teacher buy-in—it creates it.

Creating Better Schools

Greater Newark Academy and North Star Academy started at two different places when they decided to implement data-driven instruction: One was in danger of sanctions, and the other was considered good but had not made the transition to excellence. Both saw significant gains as a result of the effective implementation of interim assessments, which included a preestablished assessment calendar and a trained leadership team. In essence, they shifted the focus of the schools from what was being taught to what the students were learning.
These two schools are not alone. Over the past three years, more than 500 school leaders have attended workshops that I have delivered through New Leaders for New Schools and for various school systems. Participants then launched interim assessments and data-driven instruction in their schools. From this work have come dramatic student achievement gains in charter and district schools in the San Francisco Bay Area, Chicago, New York, Memphis, Baltimore, and Washington, D.C. With the proper interplay among interim assessments, analysis, action, and data-driven culture, schools can be transformed, and a new standard can be set for student learning.
References

Datnow, A., Park, V., & Wohlstetter, P. (2007). Achieving with data: How high-performing school systems use data to improve instruction for elementary students. Los Angeles: Center on Educational Governance, Rossier School of Education, University of Southern California.

State of New Jersey, Department of Education. (2004). New Jersey core curriculum content standards for mathematics. Trenton, NJ: Author. Available: www.state.nj.us/education/cccs/s4_math.htm

Symonds, K. W. (2003). After the test: How schools are using data to close the achievement gap. San Francisco: Bay Area School Reform Collaborative.

Paul Bambrick-Santoyo has contributed to Educational Leadership.

From the issue: Informative Assessment