November 1, 2003
Vol. 61
No. 3

Myopia in Massachusetts

The state's focus on scores harms students and ignores crucial indicators of school quality.

Since 1998, the Massachusetts Comprehensive Assessment System (MCAS) has been a hallmark of the education landscape in Massachusetts. MCAS involves the annual testing of all students in grades 3 through 10, except grade 9. High school graduation depends on passing the tests in English and mathematics, and students with especially high marks can receive financial aid to attend the state's college and university system. State officials use the annual test results to judge whether school systems are meeting the state's standards and to reward or sanction schools on the basis of gains or declines in school scores.
State officials formally adopted the test-based accountability program in May 1999, explicitly rejecting the use of multiple indicators to describe school performance (Vigue & Engley, 1999). In 2001, the Massachusetts Department of Education began using test results to report performance trends in schools and districts. Researchers and educators have pointed out that the poor quality of test questions, mathematical errors in determining ratings, and the strong correlation between test results and community income levels indicate that the school rating system is misconceived and misleading (Bolon, 2001; Haney, 2002; McElhenny, 2001; Tuerck, 2001). State education officials, however, continue to describe this accountability system as “the shining star of education reform” (cited in Tantraphol, 2001).

Luck and School Score Gains

Do changes in test scores accurately reflect school performance in Massachusetts? Not necessarily, for two reasons.

The Changing Population of Test Takers

The most obvious flaw in the use of gains or declines in test scores to track school performance lies with the different composition of schools' test takers from year to year. In 2000, 19.6 percent of the state's students entered or left their school district during grades 1, 2, and 3, with student transience ranging from 12 to 51 percent in some districts (Massachusetts Department of Revenue, 2000). Because students relatively new to a school can make up a sizable portion of its enrollment, score gains and declines may reflect little more than the chance composition of test takers.
School closings, mergers, and reconfigurations contribute to the problem. For example, Cambridge Public Schools, an urban district of about 7,000 students, has recently closed 5 of 17 elementary schools, merging them with other schools and dispersing whole classrooms of English-language learners to different schools citywide. Around the state, a middle school closing or the decision to switch a district's elementary schools from K-8 to K-5 can disrupt the stability of a school's enrollment. Under such circumstances, test score shifts provide little insight into a particular school's quality.

Unreliability of Score Changes in Small Schools

Even with stable enrollments, chance plays a part in test score changes from year to year, especially in schools testing small numbers of students. In examining scores from North Carolina, Texas, and California, researchers have found wide fluctuations in scores from one year to the next, especially in small schools, leading them to conclude that 70 percent of test score changes in such schools reflect little more than random variation (Kane & Staiger, 2002). Score changes in small Massachusetts schools replicate this pattern. In a review of four years of 4th grade MCAS results from 977 elementary schools, Haney (2002) found that math scores in schools testing fewer than 100 students could swing 15–20 points from year to year, compared with shifts of about five points in schools testing 150 or more students.
Research findings such as these suggest that a school's test score changes may be indicators of luck rather than of school quality.
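
Why small schools show such wide swings is easy to demonstrate with a short simulation. The sketch below is a minimal illustration, not the researchers' own analysis; the score scale (mean 240, spread 15), the school sizes, and the number of replications are assumed values chosen only to show that the average of a small cohort of test takers fluctuates far more from year to year than the average of a large one, even when nothing about the school changes.

```python
import random

# Minimal sketch: simulate two independent "years" of test takers for schools
# of different sizes and compare the school-average score across the two
# years. All numeric settings here are assumptions for illustration only.
random.seed(1)

def cohort_average(n_students, mean=240.0, sd=15.0):
    """Average score of one simulated cohort of n_students."""
    return sum(random.gauss(mean, sd) for _ in range(n_students)) / n_students

for n in (25, 60, 100, 150, 300):
    swings = [abs(cohort_average(n) - cohort_average(n)) for _ in range(1000)]
    print(f"school testing {n:3d} students: typical year-to-year swing "
          f"= {sum(swings) / len(swings):.1f} points")
```

Running the sketch shows the typical swing shrinking steadily as the number of test takers grows, the same pattern the Kane and Staiger and Haney analyses describe.
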

What Test-Based Accountability Hides

Over the years, state officials have used MCAS scores not only to monitor schools, but also to tell the story of education reform overall. In 2001, state officials and news accounts referred to “unprecedented” 10th grade MCAS score gains as evidence that “reform [was] working” (cited in Greenberger, 2001). Two years later, policymakers noted an “unfortunate gap” between the rates of white and minority seniors who had failed MCAS and would not receive a diploma (cited in McMahon, 2003). Citing high overall pass rates for 12th graders, however, supporters continued to applaud the MCAS-based accountability system as “the major lever to keep teachers and students focused” on learning (cited in O'Shea, 2003).
But just as shining a light too intensely on one attic corner makes the rest of the space less visible, so the focus on MCAS scores as evidence that school reform is “working” obscures other measures of educational well-being that have not fared so well during the years of MCAS testing. As state officials applaud rising MCAS scores and pass rates, they discount indicators that signal weaker school holding power and graduating power.

Weaker School Holding Power

School holding power—that is, schools' capacity to hold on to all students while they move in a timely fashion from 9th to 12th grade—is a strong indicator of overall school health (Balfanz & Legters, 2001). In light of research showing higher dropout rates in states with high school graduation exams (Jacob, 2001), indicators of school holding power merit attention as part of accounting for education reform. But when policymakers tell the story of Massachusetts education reform, they ignore these indicators.
Increasing student attrition between grades 8/9 and grade 12. School holding power is strong when the number of students enrolled in the first year of high school approximates the number enrolled in the final year. At Boston College, researchers in the Progress Through the Education Pipeline Project—a Ford Foundation-funded project that examines the relationship between enrollment patterns and high-stakes testing throughout the United States—have analyzed statewide enrollment data and found that the percentage of students lost before grade 12 from each high school cohort has increased steadily during the years of education reform, whether calculated from 8th or 9th grade.
Of the 9th graders enrolled in October 1999, 22 percent failed to reach grade 12 by fall of 2002, compared to 18 percent of 9th graders enrolled in October 1992. Calculating from 8th grade, 16 percent of 8th graders enrolled in October 1998 failed to reach grade 12 in 2002, compared to 12 percent of 8th graders enrolled in October 1991 who failed to reach grade 12 in 1995. For the class of 2003, attrition was highest for Latino students, who showed an overall enrollment decline of 42 percent from 8th to 12th grade and a decline of 27 percent from 9th to 12th grade.
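
The attrition percentages above come from a straightforward comparison of cohort enrollment counts. The sketch below shows the calculation with hypothetical, rounded numbers that roughly echo the 22 percent figure for the class of 2003; the variable names and counts are illustrative assumptions, not the actual statewide figures.

```python
def cohort_attrition(start_enrollment, grade12_enrollment):
    """Percentage of a cohort that is no longer enrolled by grade 12."""
    return 100.0 * (start_enrollment - grade12_enrollment) / start_enrollment

# Hypothetical, rounded enrollment counts used only to illustrate the
# calculation; they are not the state's actual cohort figures.
ninth_graders_oct_1999 = 78_000
seniors_oct_2002 = 61_000
print(f"{cohort_attrition(ninth_graders_oct_1999, seniors_oct_2002):.0f}% "
      "of the cohort lost before grade 12")  # prints 22%
```
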
Increasing rates of grade failure, especially in grade 9. Research has long made clear that retaining students in grade depresses achievement, undermines motivation, and puts students at higher risk for dropping out (Hauser, 2001; Shepard & Smith, 1989). In 2000–2001, the last year for which state data are available, 24,640 students repeated a grade (Massachusetts Department of Education, n.d.).
The rates of grade failure in Massachusetts have been rising steadily since 1994–1995. During this period, the retention rate for African American students has increased from 4.4 to 5.9 percent, and for Latino students from 4.5 to 5.6 percent. The state also reports a steady increase in the percentage of students failing 9th grade, rising from 6.3 percent in 1995 to 8.4 percent in 2001. The rates are even higher in individual districts. In Boston, for example, 9th grade retention has climbed from 14.5 to 25.5 percent in six years (Massachusetts Department of Education, 2002a, 2002b).
Rising student exclusion rates. High student exclusion rates have long been correlated with dropping out (Wehlage & Rutter, 1986). Student exclusions, which include removing students for disciplinary reasons for more than 10 days, or even permanently, increased steadily in Massachusetts between 1998–1999 and 2000–2001 (Massachusetts Department of Education, 2003c). During that period, exclusions rose by 22 percent overall, with steeper increases for vulnerable groups: 70 percent among African American students and 67 percent among students in grade 9. In Springfield, one of the state's high-poverty districts, exclusions doubled and now represent 29 percent of all the state's exclusions.
Increasing 9th grade dropout rates. Since 1992–1993, the official dropout rate in Massachusetts has changed little. Every year, between 3.4 and 3.7 percent of high school students statewide drop out of school and do not return by October 1 of the following year. The dropout rate for 9th graders, however, has not been so steady. In 2000–2001, the most recent year for which data have been released, the rate reached 3.3 percent, the highest 9th grade dropout rate since the state's education reform began (Massachusetts Department of Education, 2002a). Because more of the state's dropouts are leaving school in the earlier high school grades, more of the least-engaged students do not even take the MCAS graduation tests.
Decline in dropouts returning to school. Students who drop out during a school year but reenroll by October 1 the following year are considered “returned dropouts.” Since the mid-1990s, fewer Massachusetts dropouts have been returning to school. The percentage of returned dropouts fell from 20.3 in 1995–1996 to 14.6 in 2000–2001 (Massachusetts Department of Education, 1997, 2002b). Ninth grade dropouts are now the least likely to return to school, with only 14 percent returning in 2000–2001 compared to a high of 23 percent in 1993–1994. Among ethnic groups, African American dropouts are least likely to return, with only 12 percent returning in 2000–2001 compared to a high of 18.8 percent in 1994–1995.

Declining Graduation Rates

Massachusetts defines its graduation rate as the percentage of 9th graders who graduate from high school four years later. Between 1996 and 2002, this percentage hovered between 75 and 77 percent (Massachusetts Department of Education, 2002a, 2002b). In 2003, however, if the number of students passing MCAS (Massachusetts Department of Education, 2003a, 2003b) serves as a proxy for the number of graduates from the class of 2003, the graduation rate will drop to 70.9 percent for all students (76.6 percent for white students, 54.2 percent for African American students, and 41.4 percent for Latino students). Graduation rates calculated on the basis of 8th grade enrollments will also decline, from a range of 80–83 percent over the seven-year period prior to the implementation of MCAS as an exit exam to 76.5 percent for all students (79.9 percent for white students, 65.7 percent for African American students, and 51.6 percent for Latino students).

Public Accounting or Public Relations?

The official account of education reform in Massachusetts is one of rising test scores, high pass rates, and dozens of exemplary schools. At best, this narrative is incomplete. In fact, the story of MCAS pass rates and school records is less straightforward, and official statements sound like a public relations campaign designed to promote high-stakes MCAS testing.

Miscalculated Pass Rates

The official narrative of school reform often emphasizes the story of hard-working students rising to meet the challenge of high-stakes testing. After the first round of testing, 67 percent of the class of 2003 had passed MCAS (Massachusetts Department of Education, 2001). Three retests followed, and in March 2003, state officials announced that 90 percent of the class had passed MCAS and could graduate from high school (Massachusetts Department of Education, 2003a, 2003b).
Basing pass rates on the number of high school seniors in October 2002 and not on the number originally enrolled in 9th grade in October 1999, however, inflates rates considerably. In short, the pass rates increased not only because about 8,700 more students passed the tests after the first round of testing, but also because approximately 17,000 students between grades 9 and 12 disappeared from the total number of enrolled students and were excluded from the accounting.
When all students were included in the calculation, pass rates were 70 percent, not 90 percent, for all students. Pass rates were 54 percent, not 75 percent, for African American students; and 40 percent, not 70 percent, for Latino students. Moreover, although the state reported that 15 districts had pass rates of 93 percent, the actual “on time” pass rates for those same districts ranged from 88 to 55 percent, depending on the district's student attrition rate (Haney, Madaus, & Wheelock, 2003).
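
The difference between the reported and recalculated pass rates is entirely a matter of the denominator. The sketch below illustrates the arithmetic with hypothetical, rounded counts consistent with the roughly 17,000 missing students noted above; the specific numbers are assumptions for illustration, not the official enrollment or testing figures.

```python
def pass_rate(students_passed, enrollment_base):
    """Pass rate expressed as a percentage of a chosen enrollment base."""
    return 100.0 * students_passed / enrollment_base

# Hypothetical, rounded counts used only to show the effect of the
# denominator; they are not the state's official figures.
ninth_graders_oct_1999 = 78_000      # original cohort
seniors_oct_2002 = 61_000            # roughly 17,000 fewer students
passed_mcas_by_march_2003 = 54_900

print(f"rate against seniors only:    "
      f"{pass_rate(passed_mcas_by_march_2003, seniors_oct_2002):.0f}%")      # ~90%
print(f"rate against original cohort: "
      f"{pass_rate(passed_mcas_by_march_2003, ninth_graders_oct_1999):.0f}%")  # ~70%
```

Holding the number of students who passed fixed, shrinking the denominator from the original 9th grade cohort to the surviving seniors is what lifts the reported rate from about 70 percent to about 90 percent.
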
Inflated pass rates reported by the state and repeated over and over in the media have clearly helped neutralize public opposition to the state's graduation requirement. In Massachusetts, achieving high pass rates reflects not just students' hard work, then, but also student attrition combined with a method of calculating the rates that discounts the students who did not progress to 12th grade with their class.

Award-Winning Schools: Exemplars of Progress?

The Massachusetts accountability policy awards $10,000 to individual schools designated as Compass Schools because of their exemplary MCAS gains. Although the stated purpose of the program is to identify model practices for replication by other schools, the program has repeatedly favored elementary schools that typically test small numbers of students, reaching as few as 27 in one school (Wheelock, 2002). State officials use the program as occasion to tout the benefits of education reform, but, in fact, the small numbers tested make it difficult to distinguish good schools from lucky schools.
At the same time, many high schools recognized for gains in 10th grade scores also show increases in student attrition between 9th and 10th grade (Wheelock, 2002). School score gains in this context hardly seem cause for celebration. To the contrary, they raise concerns that an accountability policy that relies on MCAS scores to describe schools as exemplary may actually discourage schools from holding on to students whose test score prospects threaten their ratings.
The claim that the reform is working in Massachusetts is plausible only as long as the focus remains on test scores alone. But test scores depend on other indicators, such as school holding power, that are not part of the accountability system. A complete account of reform requires attention to a wider variety of indicators, including those that highlight the status of the state's most vulnerable students.

Alternatives to Test-Based Accountability

The narrow test-based accountability system does a disservice to the public and to the students of Massachusetts. Using MCAS scores as the sole basis of the state's accountability program may appear to offer a clear account of reform in the state, but it actually hides from public view the full picture of how well schools are working for all students.
What are the alternatives? The Coalition for Authentic Reform in Education (n.d.) has proposed an accountability plan for Massachusetts that would draw a more comprehensive picture of school performance. The plan calls for limited standardized testing in reading and math, but it would also require locally developed performance-based assessments, such as portfolios, exhibitions, and performance tasks tied to the state's broad learning goals in all subject areas. In addition, the plan envisions school-quality reviews conducted on a three- to five-year cycle by teams of educators and trained community participants from other districts. The process is similar to a review process in place in Rhode Island and a review already conducted for charter schools in Massachusetts. Finally, schools would be required to describe the status of students' academic progress, allocation of school resources, and schools' holding power in an annual report to their own communities.
By including indicators of school holding and graduating power, the plan would produce a less triumphant picture of education reform. But such a plan is necessary if accountability is to serve school reform policies that attend to the status of all students, especially the most vulnerable.
References

Balfanz, R., & Legters, N. (2001, Jan. 13). How many central city high schools have a severe dropout problem, where are they located, and who attends them? Report prepared for Dropouts in America: How severe is the problem? What do we know about intervention and prevention? Forum convened by The Civil Rights Project, Harvard University, and Achieve, Inc., Harvard Graduate School of Education, Cambridge, Massachusetts.

Bolon, C. (2001). Significance of test-based ratings for metropolitan Boston schools. Education Policy Analysis Archives, 9(42). Available: http://epaa.asu.edu/epaa/v9n42

Coalition for Authentic Reform in Education. (n.d.). A call for an authentic statewide assessment system. Available: www.fairtest.org/care/accountability.html

Greenberger, S. S. (2001, Oct. 16). MCAS failures drop sharply. Boston Globe, p. A1.

Haney, W. (2002). Lake Woebeguaranteed: Misuse of test scores in Massachusetts, Part I. Education Policy Analysis Archives, 10(24). Available: http://epaa.asu.edu/epaa/v10n24

Haney, W., Madaus, G., & Wheelock, A. (2003, March). DOE report inflates MCAS pass rates for the Class of 2003. Available: www.bc.edu/research/nbetpp/pdf/doe_press.pdf

Hauser, R. (2001). Should we end social promotions? Truth and consequences. In G. Orfield & M. L. Kornhaber (Eds.), Raising standards or raising barriers: Inequality and high-stakes testing in public education (pp. 151–178). New York: Century Foundation Press.

Jacob, B. (2001). Getting tough: The impact of high school graduation exams. Educational Evaluation and Policy Analysis, 23(3), 99–121.

Kane, T. J., & Staiger, D. O. (2002). Volatility in school test scores: Implications for test-based accountability systems. In D. Ravitch (Ed.), Brookings papers on education policy, 2002 (pp. 235–283). Washington, DC: Brookings Institution. Available: www.dartmouth.edu/~dstaiger/Papers/KaneStaiger_brookings2002.pdf

Massachusetts Department of Education. (1995). Dropout rates in Massachusetts Public Schools: 1993–1994. Malden, MA: Author. Available: www.doe.mass.edu/infoservices/reports/dropout/9394

Massachusetts Department of Education. (1997). Dropout rates in Massachusetts Public Schools: 1995–1996. Malden, MA: Author. Available: www.doe.mass.edu/infoservices/reports/dropout/9596

Massachusetts Department of Education. (2001). Massachusetts Comprehensive Assessment System Spring 2001 MCAS Tests: Summary of state results. Commissioner's foreword. Available: www.doe.mass.edu/mcas/2001/results/statesum0.html

Massachusetts Department of Education. (2002a). Dropout rates in Massachusetts Public Schools: 2000–2001. Malden, MA: Author. Available: www.doe.mass.edu/infoservices/reports/dropout/0001

Massachusetts Department of Education. (2002b). Trend analysis of high school enrollment, dropout, and grade retentions over time. Available: www.doe.mass.edu/infoservices/reports/c03_analysis.html

Massachusetts Department of Education. (2003a). Progress report on students attaining the competency determination statewide and by district: Classes of 2003 and 2004. Malden, MA: Author. Available: www.doe.mass.edu/mcas/2002/results/0303cdprogrpt.pdf

Massachusetts Department of Education. (2003b). Progress report on the competency determination, classes of 2003 and 2004 [PowerPoint presentation]. Malden, MA: Author. Available: www.doe.mass.edu/mcas/news_archives.asp

Massachusetts Department of Education. (2003c). Student exclusions, 2000–2001. Malden, MA: Author. Available: www.doe.mass.edu/InfoServices/reports/exclusions/0001/full.pdf

Massachusetts Department of Education. (n.d.). Grade retention reports [Online report]. Available: www.doe.mass.edu/infoservices/reports/retention

Massachusetts Department of Revenue. (2000, April). Agawam Public Schools review. Boston: Education Management Accountability Board Report. Available: www.dls.state.ma.us/PUBL/edaudit/Agawam.pdf

McElhenny, J. (2001, Jan. 12). Scoring errors cited in MCAS report. New Bedford Standard Times. Available: www.s-t.com/daily/01-01/01-12-01/a03sr018.htm

McMahon, S. (2003, March 4). “Steady” gains on MCAS, but 6,000 students in state still won't receive diploma. Lowell Sun, p. A1.

O'Shea, M. E. (2003, July 6). Ed reform gets mixed critique. Springfield Republican, p. 1.

Shepard, L. A., & Smith, M. L. (1989). Flunking grades: Research and policies on retention. New York: Falmer Press.

Tantraphol, R. (2001, Jan. 10). Ratings add to MCAS test controversy. Springfield Union News, p. A1.

Tuerck, D. G. (2001, Jan. 20). MCAS rating system needs to be fixed. Boston Globe, p. A15.

Vigue, D. I., & Engley, H. (1999, May 26). Expansion of exams over more grades OK'd. Boston Globe, p. B3.

Wehlage, G. G., & Rutter, R. A. (1986). Dropping out: How much do schools contribute to the problem? Teachers College Record, 87(3), 374–392.

Wheelock, A. (2002). School awards programs and accountability in Massachusetts: Misusing MCAS scores to assess school quality. Paper prepared for the National Center for Fair and Open Testing, Cambridge, Massachusetts.

Anne Wheelock has been a contributor to Educational Leadership.
