November 1, 2015
Vol. 73
No. 3

What Data Dashboards Can Do

Data dashboards can help move schools toward greater efficiency and improve teaching and learning systemwide.

Imagine your car's dashboard had only a gas gauge. You would know if you were low on fuel, but you wouldn't know how fast you were going, whether your oil pressure was low, or whether the car was overheating. In short, you'd have some important information, but not enough to keep your car running at peak efficiency.
When it comes to school performance, most states operate with the equivalent of that inadequate dashboard. They measure performance on a single indicator—state test scores—and determine on the basis of that indicator alone whether a school needs assistance. Or they combine multiple indicators into an index or a letter grade that gives little information about how the school can improve.
At a time when states and the federal government are considering new approaches to replace the 13-year-old No Child Left Behind (NCLB) system, a number of states and districts around the United States are implementing new ways of measuring and reporting school performance. These systems, often called data dashboards, offer a way to track performance and hold schools, principals, and teachers accountable.
Like an automobile dashboard, data dashboards provide an array of information about school performance and practices. This information enables educators to focus on particular problems and, equally important, to monitor and address all the issues that affect performance. Just as drivers fill up their tanks before the gas gauge turns to E, a school using a data dashboard can address potential problems before they affect student achievement.
State-level dashboards are relatively new, but education systems in other countries, as well as a number of local districts, have used dashboard-type systems for some time. If you're considering establishing a dashboard system, here are some of the issues involved.

Accountability in Flux

Although states have held schools accountable for student performance for decades, NCLB introduced a new level of accountability. It required states to test all students in both reading and mathematics in grades 3–8, plus once in high school; to set standards that would indicate "proficiency" on the tests; and to set targets that would bring all students to proficiency by 2014.
Schools that failed to meet annual targets—either for the school as a whole or for subgroups within the school—would be subject to an escalating set of sanctions and interventions. Although the law also authorized states to monitor additional measures (usually graduation rates), high performance on these measures couldn't compensate for low performance on tests.
According to most observers, NCLB has been successful in focusing attention on previously underserved subgroups. Student achievement has risen somewhat, particularly in elementary school mathematics. But high school performance has remained flat, and relatively few students have demonstrated the deeper learning competencies—such as the ability to think critically, communicate effectively, and collaborate with peers—that are necessary for success in college and careers. Moreover, NCLB has produced an overemphasis on tests that fail to assess or encourage some of the most important skills.
NCLB's shortcomings have led state officials and advocates to consider new approaches to accountability, opening the door to ideas like data dashboards. In 2011, the U.S. Department of Education began issuing states waivers from certain NCLB requirements, allowing them to develop their own accountability systems. Some 43 states have received waivers, and their new systems vary in many ways from the NCLB model. Some states and districts have significantly broadened their measures of school performance beyond test scores. For example, Kentucky includes teacher performance, and a group of large school districts in California known as the California Office to Reform Education (CORE) monitors indicators of social and emotional learning.
Meanwhile, California passed a state law that required districts to measure school performance along eight dimensions: student achievement; student engagement; college and career readiness; school climate; parent involvement; basic services (such as access to materials and adequate facilities); implementation of state standards; and access to rigorous coursework.

Why Dashboards?

Current accountability systems have a number of advantages. They focus attention on outcomes and measure whether schools have achieved learning goals. They enable parents and policymakers to easily compare schools. And they can be administered with available data systems.
However, accountability systems that use a single indicator—test scores—to judge school performance can create incentives to focus on the relatively low levels of performance the tests measure.
Some states measure multiple indicators but combine them into an index or a letter grade, which can mask low performance on individual indicators. Such accountability systems might show that a school is low performing, but they don't tell educators how to improve. Moreover, they tend to rely solely on measures like state tests and graduation rates, which are calculated too late to address problems before they worsen.
Data dashboards can alleviate some of these problems. They provide an array of indicators in a transparent way so school staffs, parents, and public officials know each year how schools are performing on all the important measures. They can include measures of school operations and practices so schools know what to address if performance is low. They also enable districts to add real-time measures, such as grades and attendance rates, to supplement the state data and enable local leaders to monitor performance throughout the year.

Toward a Data Dashboard

Creating a data dashboard is not as simple as collecting and displaying all available information. Leaders in districts that have used such systems effectively suggest four important actions to take when developing them.

Choose the Right Indicators

To drive improvements in performance, the indicators in a dashboard need to reflect the most significant measures of a school's performance. To that end, effective dashboards rely on the outcome measures considered most crucial for that school, as well as research on the factors that contribute to high performance.
"It helps focus attention," said Maggie Glennon, a former assistant superintendent in Monroe County, Georgia, who is now a consultant with the Georgia Leadership Institute for School Improvement. "You have to prioritize. You can't do everything."
Many dashboards group indicators into a handful of categories, which clearly show the areas of focus. For example, Monroe County's dashboard, called a balanced scorecard, provides indicators in four categories: student learning outcomes, organizational effectiveness, public engagement, and professional learning. The scorecard contains some 70 individual measures of performance, but grouping them into these four categories enables leaders to see areas to address that might be contributing to lower student performance. "You can't improve student learning if the classrooms are dirty and teachers don't get their paychecks on time," Glennon said. "If teachers aren't happy, they won't do their best job teaching."
Similarly, school reports in Alberta, Canada, provide measures in six categories: safe and caring schools; student learning opportunities; student learning achievement; preparation for lifelong learning, the world of work and citizenship; parental involvement; and continual improvement. The indicators within those categories include a mix of outcome measures (such as test scores and graduation rates) and survey results (for example, teacher and parent satisfaction with parental involvement in decisions about their children's education).
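A category-grouped scorecard of this kind is easy to picture as a simple data structure. The sketch below borrows Monroe County's four category names; the individual measures, their values, and the targets are hypothetical examples, not the district's actual scorecard:

```python
# A minimal sketch of a category-grouped dashboard record.
# Category names come from Monroe County's balanced scorecard;
# the measures, values, and targets are hypothetical.

scorecard = {
    "Student Learning Outcomes": {
        "grade_3_reading_proficiency_pct": 78.0,
        "graduation_rate_pct": 91.5,
    },
    "Organizational Effectiveness": {
        "facilities_work_orders_closed_pct": 88.0,
        "payroll_on_time_pct": 100.0,
    },
    "Public Engagement": {
        "parent_survey_response_rate_pct": 42.0,
    },
    "Professional Learning": {
        "teachers_completing_pd_hours_pct": 95.0,
    },
}

def low_categories(scorecard, targets):
    """Return the categories containing any measure below its target."""
    flagged = []
    for category, measures in scorecard.items():
        for name, value in measures.items():
            if value < targets.get(name, 0):
                flagged.append(category)
                break  # one low measure is enough to flag the category
    return flagged
```

Grouping the measures this way is what lets a leader scan four category-level results instead of 70 individual ones, then drill into a flagged category for the specific measure that needs attention.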
Under California's state funding formula, which requires local accountability plans, districts' reports will include resource measures as well. For example, the reports include measures of basic services, such as materials and adequate facilities, and of access to coursework needed for college and career readiness. Through these reports, districts are held accountable for providing the resources schools need to succeed.
The measures of student performance included on dashboards tend to extend beyond test scores. For example, the districts in California's CORE network include measures of social and emotional well-being, which reflect the growing body of research on the importance of motivation and academic mindsets in student learning. Indicators include results from surveys on school climate, suspension and expulsion rates, and chronic absenteeism.
Meanwhile, districts such as Dallas Independent School District, the School District of Philadelphia, Pittsburgh Public Schools, San Jose Unified School District, and New Visions schools in New York City have participated in the College Readiness Indicator Systems (CRIS) project to develop and report indicators of college readiness on three dimensions (academic readiness, academic tenacity, and "college knowledge"), based on a framework developed by David Conley, the former director of the Educational Policy Improvement Center and an expert on college readiness. The districts collect data on these dimensions at the student, school, and district levels.

Identify the Most Crucial Indicators

Although dashboard systems include an array of indicators, administrators often spotlight a few key ones to ensure that schools keep monitoring them and address the most crucial problems in school performance.
For example, Monroe County Schools revises its dashboard system from time to time. If schools continue to exceed targets on an indicator, the district might not monitor it each year. However, some items, such as the attendance rate, need annual monitoring because they're crucial to student achievement.

San Jose's OPSTAT system (for opportunity statistics) measures school performance on 11 indicators (see fig. 1), but individual schools are particularly responsible for four key indicators: one for elementary schools (3rd grade reading performance); one for middle schools (Algebra 1 performance); and two for high schools (Advanced Placement or International Baccalaureate performance, and completion of coursework required for admission into the California state university system). Regardless of performance on other indicators, schools must develop plans for improving results on those measures, according to Jason Willis, the assistant superintendent for accountability and community engagement. These indicators encourage broader access to high-quality learning opportunities and track how students are doing when they receive these opportunities.

Figure 1. Key Performance Measures for San Jose Unified School District

  • Early literacy (preK–2nd grade)

  • Advanced reading achievement (3rd–8th grade)

  • Advanced mathematics achievement (3rd–8th grade)

  • English learner reclassification within 6 years

  • Socio-emotional learning scale

  • Writing performance assessment (3 or higher at grade 2, 4 or higher at grade 6)

  • Algebra I, B or better (8th grade)

  • Advanced Placement, 3 or better; International Baccalaureate, 4 or better

  • SAT (1650+), ACT (24+)

  • University of California/California State University, A-G course completion, C or better

  • Exhibiting 21st century skills

 


Set Appropriate Targets

In addition to presenting information on the current status of school performance, data dashboards also contribute to school improvement by showing how well schools raise their performance over time. To show progress, leaders set targets for improvement and hold themselves accountable for meeting those targets.
In Monroe County, the scorecard's color-coding scheme is based on whether schools meet targets in each of the categories. When schools meet or exceed a target, the indicator is marked in green. If schools are close to a target, it's marked in yellow. If schools fall short, it's marked in red.
Targets are set in negotiations between principals and district leaders. District officials want to set ambitious but reasonable targets, according to Glennon, the former assistant superintendent. The district wants schools to show steady gains, but it doesn't want to set targets that are so out of reach that teachers and principals grow discouraged. So, for example, the district seldom expects schools to reach 100 percent in any category because leaders recognize that any student could have a bad test day and cause the school to miss that goal. On the other hand, district leaders insist that schools raise their targets if a principal's initial proposal is too low.
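The green/yellow/red scheme Glennon describes amounts to a simple threshold rule. In the sketch below, the "close to target" band is set at 90 percent of the target; that cutoff is a hypothetical choice for illustration, since a district would negotiate its own:

```python
def status_color(value, target, close_band=0.9):
    """Classify an indicator against its target, scorecard-style:
    green = met or exceeded, yellow = close, red = fell short.
    The close_band fraction (90% of target here) is a hypothetical choice."""
    if value >= target:
        return "green"
    if value >= close_band * target:
        return "yellow"
    return "red"
```

For example, with a target of 95 percent attendance, a school at 96 percent shows green, one at 90 percent shows yellow, and one at 70 percent shows red.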

Use the Tools for Improvement

In addition to presenting data in a different way, districts using dashboard systems have transformed the way they approach accountability. Rather than simply present data or threaten intervention, district officials work with schools to improve performance.
Philadelphia's school report cards show how dashboards can contribute to improvement. The scores for each category of indicators (achievement, progress, climate, and college and career readiness) fall into four performance tiers: model, reinforce, watch, and intervene. Schools with model levels of performance are showcased throughout the city. At the reinforce and watch levels, the district and the schools monitor performance. The district intervenes only in schools, and in areas within a school, at the intervene level.
San Jose shifted from accountability as finger-pointing to accountability for improvement, according to Willis, the assistant superintendent. Since the district shifted to its OPSTAT system in 2013, principals regard it much more favorably. They meet quarterly with district leaders and work with them to develop plans for improvement if one indicator—say, reading performance—is low. "It's the polar opposite of the culture of mistrust that characterized the previous system," Willis said. "It's not intended to fire principals or label schools. There's an ongoing dialogue."

A Comprehensive System

With waivers from No Child Left Behind, and the possibility of a revised federal education law that gives them greater flexibility, states are experimenting with new accountability systems aimed at providing schools and communities with information that can help improve performance in all schools.
But data dashboards aren't the only elements of new systems. States are also creating school quality review processes, in which educators visit schools for several days to observe classes and interview teachers, gaining a fuller picture of teaching and learning and recommending improvements. Such systems are already in place in a number of states, including Rhode Island and Georgia.
In addition, states are developing and adopting performance assessments to supplement state tests. These assessments provide a broader view of student performance by tapping abilities that end-of-year tests seldom measure, such as the ability to write extended essays in a variety of genres or solve complex multistep mathematics problems. The assessments also provide information on student performance during the year, when teachers can still act on that information.
States also are creating professional growth systems and tying them to their accountability systems by connecting teachers and school leaders who struggle in a particular area with other educators who have demonstrated success in that area. For example, in the CORE districts in California, when a school's dashboard suggests that a school is having difficulty with a particular area (for instance, English language learners' performance in reading), the district links teachers from the struggling school with those from another school that performs well in that area.
At best, all of these elements should work in tandem. Together, they can transform an accountability system aimed at identifying low performance to one that's designed to support continual improvement for all schools.
End Notes

1 Erpenbach, W. J. (2014). A study of states' requests for waivers from requirements of the No Child Left Behind Act of 2001: New developments in 2013–14. Washington, DC: Council of Chief State School Officers. Retrieved from www.ccsso.org/Documents/2014/ASR_SCASS_ESEA%20Flex%20Addendum.pdf

2 Conley, D. (2007). Redefining college readiness. Eugene, OR: Educational Policy Improvement Center.

Robert Rothman has contributed to Educational Leadership.

From our issue: Doing Data Right