February 1, 2015 | Vol. 72, No. 5

Taking a District's Pulse

External peer review—in which educators use rigorous criteria to assess neighboring school districts—is a better way to measure quality.

Two crucial misconceptions have framed recent efforts to improve the quality of U.S. schools: (1) the conflation of achievement with learning accompanied by the misuse of achievement data, and (2) the erroneous belief that meaningful changes in school quality can be coerced through a national regimen of testing and accountability. Let's look at each misconception in turn—and at a more fruitful way some districts are assessing the quality of their schools.

Two Misconceptions …

Recent national reform efforts, from No Child Left Behind to Race to the Top, have conflated student achievement with student learning—but they are not synonymous. Achievement speaks to students' scores on tests. Learning is manifested by a deeper conceptual understanding; by the transfer, the application, and especially the creation of knowledge (Grennon Brooks & Brooks, 1999); and by the concomitant development of higher order thinking abilities.
Conflating high test scores with enduring learning has led to a paradox: As state test scores throughout the United States have risen, student results on what are widely considered to be broader measures of student learning, such as the SAT, the Programme for International Student Assessment, and the Trends in International Mathematics and Science Study, have remained relatively flat and, in some instances, declined (Carnoy & Rothstein, 2013).
The constant public focus on standardized test scores suggests the belief that the threat of sanctions or promise of rewards (mostly in the form of increased funding) will enhance students' test scores and improve school quality—that schools can somehow be coerced or seduced into improvement. It's a neat and simple supposition, yielding quick, outcome-based results through a process controlled by external agencies. To date, however, there's no evidence that this premise has generated any improvement in student learning.

And Accompanying High Stakes

The paradox of higher standardized test scores and flat scores on measures of deeper learning would be relatively benign were test scores not also used to judge school quality and teacher effectiveness. But they are. Under the current system, test scores = student achievement, and student achievement = school quality. This is a deeply flawed, corrosive equation for several reasons.
First, even the makers of the tests by which school quality is inappropriately gauged—as well as testing experts—acknowledge that this is a misuse of test results. Standardized tests are designed solely to assess student achievement, not school or teacher quality (Haney, 2002; Popham, 2001).
Second, this equation assumes that the tests are linked to meaningful standards and are therefore valid measures of student achievement relative to those standards, assumptions yet to be supported by reputable research.
Third, good schools offer their students much more than academic attainment, much of which is palpable and observable but not testable. There is a difference between high-performing schools and schools that serve higher-achieving students—and between low-performing schools and schools that serve lower-achieving students. Some schools considered high-performing because of their students' achievement are actually quite ordinary, and some labeled low-performing add enormous value to the lives of those who attend, both educationally and socially. Thus, it's not necessarily the case that a school in which 80 percent of the students achieve proficiency in reading and math is "better" than a school in which 60 percent of students achieve proficient scores. Yet this is what the current reform system would have us believe.

A Better Way

Gauging school quality is an elusive process that transcends monitoring achievement. So how can we come to more fully understand school quality and the conditions that promote improvements in student learning? One promising approach is external peer review.
External peer review—through which educators from outside a district review that district's practices with an eye for specific elements—has a rich history in educational improvement. The British Inspectorate model (Thomas, 1998) and the School Quality Review system in New York (Sobol, 2013) have provided valuable feedback to schools for decades. A third model, described in depth here, comes from the Tri-State Consortium, a learning organization currently serving 45 school districts in New York, New Jersey, and Connecticut.
Districts in the Tri-State Consortium receive visits triennially from teams of educators, who are chosen from other member districts, trained in the visit process, and have expertise in the focus area selected by the host district. Focus areas range from academic disciplines (K–12 math, science, world languages, and so on) to specific programs (such as library media, guidance, or Response to Intervention) to approaches that promote deeper thinking (such as the extent to which critical thinking is embedded in a district's curriculum).
Peer review teams examine each focus area by looking at eight broad indicators of systemic quality. Only one indicator focuses on student test scores; the others cover a range of processes and structures related to student performance and to internal and external support for student learning. They provide a comprehensive picture of a district's strengths and of areas ripe for attention.

The Power of the Process

Visiting team members are released by their districts to participate in three-day visits and are instructed in how to apply the eight indicators. (See the accompanying sidebar for a brief description.) Each indicator poses questions for the district to consider pertaining to its approach (what the system intends to happen); implementation (what is actually happening); and results (changes in student learning that can be attributed to the district's approach and implementation).
External peer review is particularly influential when it focuses on benchmarking, as the consortium does, rather than on qualitative judgments about programs and practices. Qualitative reviews are organized around questions like, "How well is the district doing this?" Benchmark reviews pose questions like, "To what extent is the district doing this?" Benchmarking implies an unfinished journey and continuous improvement.
The consortium has a set of rubrics for these indicators, enabling teams to pinpoint where a district is on its journey. For example, a district's approach to performance-based assessment may be seen as foundational, whereas its approach to professional learning may be systematic or even systemic. The rubrics serve as a mirror, helping districts understand more clearly how they currently function.
In preparation for visits, districts spend the better part of a year identifying the focus, gathering documentary evidence relating to each of the eight indicators, structuring the three days of the visit, and developing a set of essential questions that will frame the visit. This preparation serves as an internal self-study; many districts identify future directions on their own before the visit.
During a visit, a team of approximately 20 educators will typically hear an opening presentation by the district, review the documentary evidence, and spend the equivalent of one full day in each of the district's schools. They may shadow a few students and interview teachers, administrators, students, parents, and board of education members. Classroom observations of teachers are not part of the process; judgments of teacher quality based on a brief, one-time visit often yield misleading information and are incompatible with a critical friends model.
Each visit ends with a debriefing for the superintendent and whomever else the superintendent invites to the meeting. Generally, within six weeks of the visit, the team sends a report that includes a narrative section responding to the essential questions, a set of commendations and recommendations associated with each of the eight indicators, and the team's best thinking about where the district falls on the benchmarking rubric.

How It Worked in Hightop

Let's look at how the process unfolded for a district I'll call Hightop that identified its K–12 mathematics program as the focus. Some elementary schools in the Hightop district serve mostly white students from wealthy homes; others educate students from less wealthy families and minority backgrounds. Students from these various elementary schools feed into both of Hightop's middle and high schools.
Although its students appeared to achieve well on assessments, the district was concerned that buried within the high scores were subgroups not benefiting equitably from the curriculum. District leaders also wanted to probe the impact of grade-level transitions. As in many districts, elementary students were exposed to a constructivist math program, then transitioned to a middle school curriculum dominated by textbook and workbook assignments, and then to a course-based approach in high school. Two of Hightop's essential questions were, "To what extent does our K–12 math program address the needs of all learners?" and "To what extent is our K–12 math curriculum aligned and articulated?"
The visit confirmed that students from minority backgrounds weren't achieving, or learning, at the same level as the majority of students in the district. They were underrepresented in the most challenging math curriculums and courses the district offered. The visit also validated the concern about alignment: Transitioning from a conceptual approach to math in the elementary years to a more computational approach in the middle years was challenging for some students.
Thus, one set of the team's recommendations focused on data. They advised the district to gather data around the number and academic profiles of students taking and not taking high-level math classes in high school and to hold focus groups to explore why students do and don't take such classes. They also recommended
  • Generating data on the impact of regrouping for math in the elementary and middle years (particularly which course sequences students placed in low, middle, and high groups early on followed as they progressed).
  • Looking at student math work at transition points and examining ways to improve curriculum alignment during transition years.
  • Providing time for teachers and administrators to work collaboratively on these tasks.
Recommendations such as these challenge districts to move beyond the largely superficial data they receive from test reports and to generate a set of deeper questions that go to the heart of their core beliefs about student learning. As a result of the team's recommendations, Hightop examined its longstanding practice of regrouping for math in the elementary and middle grades. It made a concerted effort to encourage students from minority backgrounds to self-select into higher-level classes.
Several years after this review, larger numbers of minority students are participating in higher-level math courses at the high school and earning higher grades in these classes.

School Quality Redux

The peer review process honors and enhances a district's capacity to examine its own work, offers feedback about the strengths and needs of its schools, and helps that district make decisions on next steps. Many people who serve on visit teams describe the process as one of the most meaningful professional learning experiences they've had. And hosting districts consistently state that the experience of preparing for the visits and reflecting—and following through—on a manageable set of recommendations gives them leverage for improvement. Teachers and administrators come to view their district's work through a less parochial, more systemic lens.
Getting at school quality is messy, complex, and time intensive. Quality is best measured as a mix of hard and impressionistic data generated internally by people asking locally pertinent and complicated questions. Peer reviewers assist this process by experiencing firsthand what's actually happening in a school. They provide a mirror for districts to look into to determine whether they're pleased with what they see—and what they might do to foster improvement.
References

Carnoy, M., & Rothstein, R. (2013). What do international tests really show about U.S. student performance? Washington, DC: Economic Policy Institute.

Grennon Brooks, J., & Brooks, M. (1999). In search of understanding: The case for constructivist classrooms. Alexandria, VA: ASCD.

Haney, W. (2002). Lake Woebeguaranteed: Misuse of test scores in Massachusetts, Part I. Education Policy Analysis Archives, 10(24).

Popham, W. J. (2001). Uses and misuses of standardized achievement tests. National Association of Secondary School Principals Bulletin, 85(622), 24–31.

Sobol, T. (2013). My life in school. Scarsdale, NY: Public Schools of Tomorrow.

Thomas, G. (1998). A brief history of the genesis of the new schools' inspection system. British Journal of Educational Studies, 46(4), 415–427.

Martin G. Brooks is Superintendent of the Valley Stream Central High School District in Valley Stream, New York.

From the issue: Improving Schools: What Works?