At a lively panel discussion, four experts on performance assessment explained what they believe educators and researchers have learned from attempts to use performance assessment over the past several years.
"We've learned a lot," said Joan Herman, associate director of the National Center for Research on Evaluation, Standards, and Student Testing at UCLA. "In studies across a number of states, we've learned that the new forms of assessment serve a very important signaling function to schools—that teachers respond by changing their instructional practices. We've also learned that engaging teachers in the development and scoring of assessments can be a transformative process that encourages them to change their expectations for student performance and to engage students in complex thinking and problem solving. And lastly on the plus side, we've learned that putting teachers and parents and even students together in conversations around student work provides a powerful platform for school reform."
"On the chinks-in-the-armor side," Herman continued, "we've learned that it's going to be more difficult than we ever imagined to provide technically credible measures of individual student performance—and we must be able to provide reliable individual measures if we're going to satisfy the public."
"One of the initial insights was that this is very, very hard work," said Jay McTighe, director of the Maryland Assessment Consortium, a collaboration of 24 school districts working on performance assessment. "In fact, I've come to believe that the first-year effort involving teachers in developing performance tasks is an extraordinarily powerful learning experience, but people should be cautious about expecting high-quality, final-draft products as a result of that effort."
However, McTighe added, "the follow-up good news is that with models and experience and especially a cycle of revision-review-refinement, we have gotten high-quality performance assessment tasks for classroom use, and this model has extended into local school districts' use of the same process that we followed."
"If I were handing out medals to performance assessment," said Dennie Palmer Wolf, senior research associate at the Harvard Graduate School of Education and director of the Performance Assessment Collaborative for Education (PACE), "the medal I'd hand out first is for making public the discussion about what makes good work." Performance assessment, Wolf contended, has promoted "the idea that quality can be publicly discussed, that students can revise their way toward it, that you weren't born a 79—if you think about the social and political history of this country, those are huge contributions."
"Maybe what we've learned," Wolf continued, "is that we ought to be building [performance assessment] from the bottom up, not from the top down." With adequate funding to train teachers, concerns about inter-rater reliability might be lessened, she speculated.
"To some extent, we are guilty of overselling the power of performance assessment," said Warren Simmons, executive director of the Philadelphia Education Fund, a nonprofit organization that supports education reform in Philadelphia. "It was a strategy designed to be linked to at least two or three other components to actually improve student performance," he asserted. "Performance assessments must be linked to a clear set of rigorous standards, and we must have a way of helping people do the walk between the standards and the assessments, and the implications for improving curriculum and pedagogy."
"Underlying the power of assessment are these linkages that we need to build, and I don't think we've paid as much attention to that as we should," Simmons said. "That's glaringly evident in urban school districts that have been strained over the years in terms of the amount of money for professional development and the resources to develop new forms of curriculum. That's where we need to begin to focus our attention."