November 1, 2009
Vol. 67
No. 3

Looking at Student Work

How can teacher groups assess student work productively? By focusing on improving teaching, not on proving students "got it."


With the proliferation of data teams, lesson study groups, and professional learning communities, teachers today have plenty of opportunities to analyze student work together. But collaborating in teacher groups can be challenging and, sadly, unproductive, even when teams use recommended protocols.
For the past five years, we have studied eight professional learning communities of secondary-level math and science teachers who engage in inquiry centered on assessing student work. We've witnessed the challenges teachers encounter in sharing differing beliefs about teaching and learning, finding resources that inform their inquiry, identifying which student work to consider, and making sense of students' thinking in relation to learning goals.
Teachers are usually on their own in figuring out how to sustain effective collaboration. But it may not come naturally. The education culture prizes professional independence and privacy. Teachers' tendency to make decisions about teaching and assessment within individual classrooms gives them few opportunities to examine with others the effects of their teaching on students' understanding. If teacher collaboration is going to yield productive results, we must find ways to address these obstacles.
One key is for teacher groups to come to assessments with a truly inquisitive approach. Charalambos and Silver (2008) discuss the differing approaches individual teachers take in looking at student assessments: Some look with an eye to proving student learning gains, some to improving their practice through reflecting on data. We have extended this "proving or improving" idea to the agendas teachers adopt as they analyze student work collaboratively. The approach the group takes can significantly alter what and how much teachers learn from the experience.
Our research team provided support to these new professional learning communities as they launched, and we encouraged the groups to use assessment data to identify gaps in student learning or to judge whether teacher interventions were having the desired effect. But we found that in many cases, teachers approached data for the purpose of proving that students had learned and that teachers had done their jobs well. This approach is natural, given that external evaluators now use high-stakes tests to draw blanket conclusions about teachers and students, with huge financial implications. Demonstrating that a chosen intervention is working or that a student is approaching grade-level expectations is important. But although the proving approach is justifiable, it limits what teachers learn.

The Proving Approach: Are They "Getting It"?

When teachers used their time together to prove that students had learned and that they had taught well, they focused on whether students "got it." If students who achieved a 4 or higher were marked as proficient, for example, these groups focused on whether learners had scored at least a 4. Teachers often processed data in terms of percentages correct or incorrect. They devoted considerable time and attention to finding, adapting, and creating assessments that had a good chance of generating positive results.
Sometimes teachers were so focused on attaining score gains that they didn't consider questions like, What does "got it" mean to each of us? What kind of understanding did the students who received 4s have that the students who received 3s did not have? What are the students who received a 1, 2, or 3 showing us they need from us?
Teachers bent on proving also leaned toward considering behavioral or life factors, such as attendance, motivation, or home situation, to explain why a student performed in a particular way, rather than seeking clues in the student's work about what interventions might move that learner forward.
Our research team observed certain patterns among proving-focused teachers. These teachers held on to predetermined ideas about students' abilities. For example, teachers assumed that high-achieving students understood content even when their work did not explicitly reveal understanding, and they took for granted that low-achieving students did not understand without exploring such students' emerging learning. Provers more often held rigid ideas about how to express a grasp of content correctly. They compared students' responses to these rigid expectations without being open to alternative ways of understanding, processing, or expressing the desired knowledge.

The Improving Approach: What Are They Thinking?

Despite external pressure to prove that students had learned, some of the teacher groups we observed were able to talk about student work in terms of improving both teaching and learning, rather than exclusively taking a proving approach. Teacher groups that took an improving stance tried to use students' work to understand student thinking. This helped teachers understand what students needed as they planned further instruction.
These teachers looked for varied forms of assessments that could reveal students' thinking and then thoughtfully discussed how to interpret the data. For example, a group of science teachers we worked with at Cedar Grove Middle School determined that multiple-choice questions would not give them much information about their students' thinking. They decided to include space on assessments for students to write about why they chose the answer they did. Teachers pored over students' explanations in an attempt to understand their conceptions and misconceptions rather than simply placing students in "got it" or "hasn't got it" piles.
Improving-focused groups had more generative conversations about student work. Teachers' discussions yielded questions that teachers wrestled with; those questions led to additional questions and sometimes to spirited debates about what teaching and learning should look like. Teachers sharpened their thinking about instruction, learning styles, content expectations, formative assessment, the role of the teacher, and student engagement.
As the following two vignettes illustrate, our research suggests that an improving approach led teachers to deeper understandings about teaching and learning, greater satisfaction about the outcome of their collaborative work, and more informed classroom decisions.

Cedar Grove: Pausing to Ask Questions

Karen, the leader of the Cedar Grove learning community, led her group of science teachers through an inquiry cycle focused on science vocabulary. (See our research project Web site, www.vancouver.wsu.edu/stride, for more information on the inquiry cycle.) This cycle, which unfolded over an academic year, included reading literature, developing baseline assessments, creating teaching interventions, collecting more student data after those interventions, and examining this new data to determine next steps.
Although some members were eager to move quickly through the cycle, Karen encouraged the group to take time to ponder and talk about what they really wanted to know about their students' science learning.
The Cedar Grove teachers decided to concentrate on words that students would need in science courses but that were not tied to one particular unit or branch of science. Such words as system, model, and function were often more troubling to students than specific content words. One teacher gave an example of a student who knew what mucus was but could not articulate the function of mucus. Another shared that when students were asked to draw a model of cellular respiration, grasping the meaning of model was more difficult for them than explaining cellular respiration. Teachers built a list of words they believed students must master to express what they knew about science content and created a corresponding assessment.
Karen continually asked questions concerning the construction of the group's word list and baseline assessment. She acknowledged that she was unclear about what it actually meant to know vocabulary, saying, "Before we jump into finding out how we can solve [the vocabulary problem], I think we need to be really clear on what we're talking about. … How do we know when you've achieved [true knowledge of a concept]?"
Drawing on a research article, Karen facilitated a discussion that helped the group members sharpen their thinking about the specific vocabulary they would be assessing and how and why they were assessing it. After administering the vocabulary assessment they had created, some group members wanted to immediately score papers and plan interventions. Karen suggested they spend more time trying to understand and explore students' misconceptions.
During their analysis of responses on this vocabulary assessment, the teachers learned important truths about student thinking—and their own teaching. They recognized that they had often been unclear in how they used certain words, such as prediction and hypothesis, in class. The group took time to talk about and clarify distinctions among related words so teachers could be more precise in future instruction. Teachers also discovered that simply counting correct and incorrect answers on a multiple-choice assessment was insufficient. Several times, students' written explanations revealed errors in their thinking even though they had selected the correct answer.
Teachers were surprised by the thinking revealed in students' written responses. Some learners believed that, as one wrote, "system has to do with size" and thought human bodies were not systems. Many students did not recognize that a system needs to have uniform units of measurement. To create lessons to clarify this concept, teachers explored systems that students might already be familiar with, such as an iPod docking system.
Similarly, some students believed evidence was only evidence if it was obtained from a specific lab or if someone gave it to you. Teachers guessed that students might be extrapolating ideas about evidence from television crime shows and brainstormed how they could address these misconceptions.
At the end of the year, the group retested students on the vocabulary. They found that focusing on student thinking and creating lessons to correct misconceptions had led to learning gains. They planned to continue exploring student thinking through written assessments in the coming year.

Alder Creek: Clarifying Expectations

Cheryl and Lauren, coleaders of the Alder Creek learning community, focused their group's conversations on assessing students' written scientific conclusions and determining exactly what a high-quality conclusion should include. When Cheryl and Lauren were building their assessment of students' science conclusions, colleagues in the group asked them, "What's so important about writing conclusions?" Cheryl acknowledged that she was unsure. The group drew on readings and other resources to help them think out loud together about what they expected from their students and why.
The teachers tried to pinpoint the attributes of conclusion writing that they wanted their students to master before leaving their classes. They negotiated a common prioritized list of seven expectations (for example, the expectation that students provide data in support of their answers to a research question or their hypothesis). The group used these expectations to develop a tool that contained the critical facets of scientific conclusions and listed the errors they frequently saw students make in their use of data. (See Minstrell, Anderson, Kraus, & Minstrell, 2008, for more on critical facets.) Learning community members used the tool to assess students' science writing and track student progress. The list of errors helped teachers record the specific mistakes in data reporting that their students made throughout the year and decide together how to address widespread errors.
From past experience, the teachers sensed that students would need help determining what information should be included in a science conclusion and how to organize that information. They collectively built a graphic organizer for learners and agreed to use it.
After collecting student writing several times during the year and using their facet tool to compare their students' conclusion writing to the teachers' expectations, the group members discussed what they observed. They also looked at how each of the teachers had incorporated the graphic organizer into his or her classes to see whether anyone had discovered a particularly effective method.
After several rounds of looking at student writing, Alder Creek teachers found that students had benefited from the graphic organizer but—as they predicted—there were still areas of confusion regarding science concepts. Even though many students were organizing their conclusions well, students continued to display significant misunderstandings in their interpretations of data. Interpretation of data clearly needed to become a focus. The teachers discussed how they could center their next inquiry process on helping students make sense of and write about experimental data.

Enriching Complexities

Judging from our observations of many learning communities, we must admit that taking an improving approach does not remove the challenges involved in looking at student work collaboratively. In fact, as the scenarios described here show, taking an improving stance often unearths the formidable complexities of teaching and learning that stay hidden when the focus is on making cutoff scores. But we believe switching from a proving to an improving approach will yield more worthwhile discussions around student work—discussions that enrich our teaching as well as our students' understanding.
References

Charalambos, C., & Silver, E. A. (2008, January). Shifting from proving to improving: Using assessment as an integral part of instruction. Paper presented at the annual meeting of the Association of Mathematics Teacher Educators, Tulsa, OK.

Minstrell, J., Anderson, R., Kraus, P., & Minstrell, J. E. (2008). Bridging from practice to research and back: Tools to support formative assessment. In J. Coffey, R. Douglas, & C. Sterns (Eds.), Science assessment: Research and practical approaches (pp. 37–68). Arlington, VA: National Science Teachers Association Press.

End Notes

1 All names of schools and teachers are pseudonyms.
