February 1, 2003 | Vol. 60, No. 5

First Things First: Demystifying Data Analysis

To improve student achievement results, use data to focus on a few simple, specific goals.

I recently sat with a district administrator eager to understand her district's achievement results. Pages of data and statistical breakdowns covered the table. Looking somewhat helpless, she threw up her hands and asked me, “What do I do with all this?”
Many educators could empathize with this administrator. The experts' tendency to complicate the use and analysis of student achievement data often ensures that few educators avail themselves of data's simple, transparent power. The effective use of data depends on simplicity and economy. For a teacher or a teaching team, data analysis need only answer two questions:
  • How many students are succeeding in the subjects I teach?
  • Within those subjects, what are the areas of strength or weakness?
The answers to these two questions set the stage for targeted, collaborative efforts that can pay immediate dividends in achievement gains.
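To show how little machinery the first question requires, consider a minimal sketch in Python. The subjects, scores, and 70-point passing cutoff here are illustrative assumptions, not figures from any particular assessment.

```python
# Question 1: How many students are succeeding in the subjects I teach?
# Subjects, scores, and the passing cutoff below are illustrative only.
PASSING_SCORE = 70

scores = {
    "math":    [82, 64, 91, 73, 58, 77],
    "writing": [68, 75, 88, 54, 79, 81],
}

for subject, results in scores.items():
    passing = sum(1 for score in results if score >= PASSING_SCORE)
    pct = 100 * passing / len(results)
    print(f"{subject}: {passing} of {len(results)} passing ({pct:.0f}%)")
```

A spreadsheet tally would serve just as well; the point is that a passing percentage per subject is all a team needs to establish a baseline.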

Focusing Efforts

Answering the first question enables grade-level or subject-area teams of practitioners to establish high-leverage annual improvement goals—for example, moving the percentage of students passing a math or writing assessment from a baseline of 67 percent in 2003 to 72 percent in 2004. Abundant research and school evidence suggest that setting such goals may be the most significant act in the entire school improvement process, greatly increasing the odds of success (Little, 1987; McGonagill, 1992; Rosenholtz, 1991; Schmoker, 1999, 2001).
If we take pains to keep the goals simple and to avoid setting too many of them, they focus the attention and energies of everyone involved (Chang, Labovitz, & Rosansky, 1992; Drucker, 1992; Joyce, Wolf, & Calhoun, 1993). Such goals are quite different from the multiple, vague, ambiguous goal statements that populate many school improvement plans.

Turning Weakness into Strength

After the teacher team has set a goal, it can turn to the next important question: Within the identified subject or course, where do we need to direct our collective attention and expertise? In other words, where do the greatest number of students struggle or fail within the larger domains? For example, in English and language arts, students may have scored low in writing essays or in comprehending the main ideas in paragraphs. In mathematics, they may be weak in measurement or in number sense.
Every state or standardized assessment provides data on areas of strength and weakness, at least in certain core subjects. Data from district or school assessments, even gradebooks, can meaningfully supplement the large-scale assessments. After team members identify strengths and weaknesses, they can begin the real work of instructional improvement: the collaborative effort to share, produce, test, and refine lessons and strategies targeted to areas of low performance, where more effective instruction can make the greatest difference for students.
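For the second question, the same arithmetic applies one level down, at the strand level. Another sketch, again with hypothetical strand names and records:

```python
# Question 2: Within the subject, where do students struggle most?
# Each record notes whether a student met the standard on a strand;
# the strand names and results are hypothetical.
from collections import defaultdict

records = [
    ("essay writing", True), ("essay writing", False), ("essay writing", False),
    ("main ideas", True), ("main ideas", True), ("main ideas", False),
]

met = defaultdict(int)
total = defaultdict(int)
for strand, met_standard in records:
    total[strand] += 1
    if met_standard:
        met[strand] += 1

# List strands weakest-first so the team sees where to direct its effort.
for strand in sorted(total, key=lambda s: met[s] / total[s]):
    print(f"{strand}: {100 * met[strand] / total[strand]:.0f}% met standard")
```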

So What's the Problem?

Despite the importance of these two questions, practitioners can rarely answer them. For years, even as data and goals have been bywords in education, I have asked hundreds of teachers whether they know their improvement goals for the academic year and which of the subjects they teach have the lowest scores. The vast majority don't. Even fewer can answer the question: What are the low-scoring areas within a subject or course you teach?
Nor could I. As a middle and high school English teacher, I hadn't the foggiest notion about these data—from state assessments or from my own records. This is the equivalent of a mechanic not knowing which part of the car needs repair.
Why don't most schools provide teachers with data reports that address these two central questions? Perhaps the straightforward improvement scheme described here seems too simple to us, addicted as we are to elaborate, complex programs and plans (Schmoker, 2002; Stigler & Hiebert, 1999).

Over-Analysis and Overload

The most important school improvement processes do not require sophisticated data analysis or special expertise. Teachers themselves can easily learn to conduct the analyses that will have the most significant impact on teaching and achievement.
The extended, district-level analyses and correlational studies some districts conduct can be fascinating stuff; they can even reveal opportunities for improvement. But they can also divert us from the primary purpose of analyzing data: improving instruction to achieve greater student success. Over-analysis can contribute to overload—the propensity to create long, detailed, “comprehensive” improvement plans and documents that few read or remember. Because we gather so much data and because they reveal so many opportunities for improvement, we set too many goals and launch too many initiatives, overtaxing our teachers and our systems (Fullan, 1996; Fullan & Stiegelbauer, 1991).

Formative Assessment Data and Short-Term Results

A simple template for a focused improvement plan, with annual goals for improving students' state assessment scores, would go a long way toward solving the overload problem (Schmoker, 2001). It would enable teams of professional educators to establish their own improvement priorities, simply and quickly, for the students they teach and for those in similar grades, courses, or subject areas.
Using the goals that they have established, teachers can meet regularly to improve their lessons and assess their progress using another important source: formative assessment data. Gathered every few weeks or at each grading period, formative data enable the team to gauge levels of success and to adjust their instructional efforts accordingly. Formative, collectively administered assessments allow teams to capture and celebrate short-term results, which are essential to success in any sphere (Collins, 2001; Kouzes & Posner, 1995; Schaffer, 1988). Even conventional classroom assessment data work for us here, but with a twist. We don't just record these data to assign grades each period; we now look at how many students succeeded on that quiz, that interpretive paragraph, or that applied math assessment, and we ask ourselves why. Teacher teams can now “assess to learn”—to improve their instruction (Stiggins, 2002).
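A team tracking its progress this way needs nothing more elaborate than the following sketch. The 67 percent baseline and 72 percent target echo the goal example earlier in this article; the per-period results are invented for illustration.

```python
# Gauging short-term progress toward an annual goal from formative
# assessment data gathered each grading period. The baseline and target
# echo the article's 67%-to-72% example; period results are invented.
baseline, target = 67.0, 72.0
period_rates = [66.0, 69.0, 70.5]  # percent passing, one per grading period

for n, rate in enumerate(period_rates, start=1):
    print(f"Period {n}: {rate:.1f}% passing ({rate - baseline:+.1f} vs. baseline)")

print(f"Remaining gap to the annual goal: {target - period_rates[-1]:.1f} points")
```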
A legion of researchers from education and industry have demonstrated that instructional improvement depends on just such simple, data-driven formats—teams identifying and addressing areas of difficulty and then developing, critiquing, testing, and upgrading efforts in light of ongoing results (Collins, 2001; Darling-Hammond, 1997; DuFour, 2002; Fullan, 2000; Reeves, 2000; Schaffer, 1988; Senge, 1990; Wiggins, 1994). It all starts with the simplest kind of data analysis—with the foundation we have when all teachers know their goals and the specific areas where students most need help.

What About Other Data?

Used in the right measure, other data can also aid improvement. For instance, data on achievement differences among socioeconomic groups, on students reading below grade level, and on teacher, student, and parent perceptions can all guide improvement.
But data analysis shouldn't result in overload and fragmentation; it shouldn't prevent teams of teachers from setting and knowing their own goals and from staying focused on key areas for improvement. Instead of overloading teachers, let's give them the data they need to conduct powerful, focused analyses and to generate a sustained stream of results for students.
References

Chang, Y. S., Labovitz, G., & Rosansky, V. (1992). Making quality work: A leadership guide for the results-driven manager. Essex Junction, VT: Omneo.

Collins, J. (2001, October). Good to great. Fast Company, 51, 90–104.

Darling-Hammond, L. (1997). The right to learn: A blueprint for creating schools that work. New York: Jossey-Bass.

Drucker, P. (1992). Managing for the future: The 1990s and beyond. New York: Truman Talley Books.

DuFour, R. (2002). The learning-centered principal. Educational Leadership, 59(8), 12–15.

Fullan, M. (1996). Turning systemic thinking on its head. Phi Delta Kappan, 77(6), 420–423.

Fullan, M. (2000). The three stories of education reform. Phi Delta Kappan, 81(8), 581–584.

Fullan, M., & Stiegelbauer, S. (1991). The new meaning of educational change. New York: Teachers College Press.

Joyce, B., Wolf, J., & Calhoun, E. (1993). The self-renewing school. Alexandria, VA: ASCD.

Kouzes, J., & Posner, B. (1995). The leadership challenge. San Francisco: Jossey-Bass.

Little, J. W. (1987). Teachers as colleagues. In V. Richardson-Koehler (Ed.), Educator's handbook. White Plains, NY: Longman.

McGonagill, G. (1992). Overcoming barriers to educational restructuring: A call for "system literacy." ERIC Document No. ED 357512.

Reeves, D. (2000). Accountability in action. Denver, CO: Advanced Learning Press.

Rosenholtz, S. J. (1991). Teacher's workplace: The social organization of schools. New York: Teachers College Press.

Schaffer, R. H. (1988). The breakthrough strategy: Using short-term successes to build the high-performing organization. New York: Harper Business.

Schmoker, M. (1999). Results: The key to continuous school improvement (2nd ed.). Alexandria, VA: ASCD.

Schmoker, M. (2001). The results fieldbook: Practical strategies from dramatically improved schools. Alexandria, VA: ASCD.

Schmoker, M. (2002). Up and away. Journal of Staff Development, 23(2), 10–13.

Senge, P. (1990). The fifth discipline: The art and practice of the learning organization. New York: Doubleday.

Stiggins, R. (2002). Assessment crisis: The absence of assessment FOR learning. Phi Delta Kappan, 83(10), 758–765.

Stigler, J. W., & Hiebert, J. (1999). The teaching gap: Best ideas from the world's teachers for improving education in the classroom. New York: Free Press.

Wiggins, G. (1994). None of the above. The Executive Educator, 16(7), 14–18.

Mike Schmoker is a former administrator, English teacher, and football coach. He has written dozens of articles for educational journals, newspapers, and TIME magazine as well as multiple bestselling books for ASCD. In an EdWeek survey of national educational leaders, he was identified as among the best sources of practical "nuts and bolts…advice, wisdom and insight" on effective school improvement strategies.

Schmoker is a recipient of the Distinguished Service Award from the National Association of Secondary School Principals for his publications and presentations. A much sought-after presenter, he delivers keynotes and consults throughout the United States, Canada, Australia, China, and Jordan.

From the issue: Using Data to Improve Student Achievement