November 1, 2009 | Vol. 67, No. 3

Responsive Writing Assessment

Grades and rubrics get in the way of creating better writers.


My grandmother and a recurrent case of the flu taught me something important about teaching and assessing writing. Ever since my grandmother had been diagnosed with cancer, it seemed that every time I went to visit her I ended up on the couch with my stomach aching and my head throbbing. One afternoon, when I was in 4th grade, she sat next to me, took my hand, and said, "Maja, I know my being sick is hard for you, but you don't need to worry about me." As she spoke, I realized that I'd been imagining entering a room and finding her unconscious on the floor. I'd actually been worrying myself sick.
I didn't stop worrying, of course. But her ability to help me understand what I was feeling and thinking in that moment—her responsiveness to me—was powerful. I didn't feel sick at her house anymore.
When I began teaching, I tried to bring the same kind of responsiveness to my relationships with my students. This priority on being responsive has guided my thinking about writing assessment.

Trapped in an Unresponsive System

Peter Johnston (2004) reminds us that a sense of agency—a person's feeling of being able to make a difference in the world—depends on a belief that the environment is responsive. Agency is not only vital to personal development and learning, but it is also central to a thriving democracy. Without it, citizens stop investing in the political process, figuring that nothing they do matters anyway.
Schools have a central role to play in creating educated and involved citizens, but they aren't always good at engendering a sense of agency in students or in teachers. In fact, they're fairly good at the opposite—creating a sense of learned helplessness. The high-stakes testing program of No Child Left Behind (NCLB) adds to the problem. Under pressure to cover a curriculum they haven't had much say in creating, teachers and students alike often feel trapped in a frenzied program that doesn't respond to their interests or needs.
The top-down pressure at the heart of NCLB negatively affects student performance. But it's worse than that: An education system ruled by this pressure is unresponsive to its developing citizens, and thus it undermines the sense of agency necessary for the success of the very democracy that created it.

The Folly of Grading Writing

Ever since the implementation of NCLB in 2002, I have considered it my responsibility to keep the pressure of the high-stakes testing movement from trickling down to my students. I am, after all, the only adult in the room. No matter what demands are placed on me to get through the curriculum or prepare students for standardized tests, I want to create a responsive environment for my students.
Creating a responsive writing curriculum came easily to me from the start. My teacher education program had provided guidance in designing a writers' workshop in the classroom—a wonderfully responsive structure that fosters student choice and individualized attention. My classroom didn't always look much like the workshops I'd read about in the professional literature, but the ideals of choice and my understanding of students' process guided my decisions about instruction.
Creating responsive assessments, however, did not come so easily. My training in assessment had involved discussions of the bell curve, the question of whether to give zeros, and the relative merits of short-answer, fill-in-the-blank, and multiple-choice tests.
Although I once gave a quiz on the steps of the writing process in a moment of desperation (and perhaps sleep deprivation), even as a teacher-in-training I knew that I didn't want to assess writing through tests. But in my education courses, we never actually looked at a piece of student writing and talked about different ways to assess it. We never even talked about what assessment was; I assumed that it was just a euphemism for grading. Certainly, there was no discussion of "responsive" assessment, unless that meant grading and returning essays promptly.
As I gained experience in the classroom, promptly graded essays didn't fit my growing sense of what responsive assessment should be. I came to the conclusion that grading and other ways of quantifying student writing were useless at best and harmful at worst. Working with college-prep students, alternative high school students, and adult learners who had once dropped out of school, I saw that bad grades didn't create more-invested writers; they only convinced students that they couldn't write or focused their attention on exactly what they should do to earn seven more points—hardly the kind of thinking that leads to better writing.
Just as bad, grading essays misrepresented the goal and nature of writing and reading. Assigning a number to each text implies that the writer is shooting for some objective standard. However, the goal of a writer (a person who has something to say) is to communicate with an audience (another person, who brings powerful, subjective perspectives to the reading). A writer's work, then, is to constantly negotiate what is inside at least two human minds. Given this complex dynamic, how can any grade be objective and accurate?
The testing industry's response to this conundrum has been to try to change the way readers read—to get them all to agree. The industry finds reader disagreement disagreeable because it casts doubt on the idea that the tests are reliable. The Educational Testing Service has spent enormous amounts of energy (and money) to remove reader disagreement from writing assessment—first by abolishing writing tests in the 1940s in favor of multiple-choice grammar tests that don't require readers, then by creating criteria to calibrate readers' ratings of student essays in the 1960s, and finally by weeding out test readers whose ratings don't agree with the rest (Wilson, 2006).
But different responses to the same piece of writing don't necessarily result from reader error. In fact, subjectivity is integral to the reading process. As educator and literary theorist Louise Rosenblatt (1993) observes, the reader and the text "work on" each other, and this "transaction" defines each reading experience (p. 380). Stanley Fish (1980) goes further and argues that for all intents and purposes, writing doesn't really exist without the reader bringing it into meaning.
More recently, neuroscientist Antonio Damasio (1999) describes how, as individuals, we continually construct a "sense of me" in relationship to the objects we perceive—including the texts we encounter. Constructing and experiencing this sense is not only one of the most powerful reasons we read and write, but it's also what enables us to do so. This is perhaps most obvious when it isn't happening properly. Some readers can decode each word, but they can't bring experiences, feelings, or previous thoughts to the words on the page; they can't create a sense of themselves in the act of reading. More phonics instruction doesn't help these readers. The reading process activates and depends on individual subjectivities, a fact that has driven the writing assessment industry crazy in its attempts to standardize readers' responses to essay tests.

Developing a Process for Responsive Assessment

Although grading essays didn't make much sense to me, I certainly had something to say when I read student writing. In fact, I believed that my response to student writing was among my most powerful pedagogical tools. That response would help students see how their specific piece of writing had affected an experienced reader's mind. After all, creating an effect in someone else's mind is a central goal of writers.
But couldn't such a response and grading coexist? Study after study, along with my own experience, suggested that the answer was no. Grades, apparently, had just as damaging an effect on teachers' readings and responses as they had on students' motivation and view of what it means to write. When comments and grades coexist, the comments are written to justify the grade and therefore don't tend to address how the writing has affected the reader (see Daiker, 1989; Huot, 2002).
In reflecting on my objections to grading, I began to think that narrating my responses while reading was, in fact, assessment. My responses to student writing were necessarily, frustratingly, wonderfully subjective. But that was okay, because the same thing would happen to the writing my students would do some day in "the real world"—their writing would convince, confuse, delight, bore, frustrate, or motivate their bosses, clients, family, friends, or professional communities. As a writing teacher and experienced reader, I could show students how their words affected me as I read and help them compare these effects with their intent. I could also provide other audiences and help students sort through various responses and assessments to clarify their intent and revise their writing.
I knew, however, that assessment involved more than just a description of what went on in my mind when I read. I thought of my grandmother. The power of her responsiveness to me lay in her ability to imagine what was going on in my mind; her response articulated something I hadn't been able to put into words myself.
I found myself trying to get inside the mind of the writer when I responded to student writing. At first, I did this out of necessity; student writers are often unsure about what they're trying to do. To help my students figure out their intent and how their words could better express that intent, I had to imagine myself in the middle of their process. It's not enough to tell students to make their writing clearer, more focused, or more engaging—students often can't do that because they don't know how, and they don't really understand how they went astray.
I was sometimes wrong when I imagined what they were trying to do when they wrote, but my attempt focused students' attention on their intent and process rather than on vague categories such as "organization." I was looking, then, for an approach to assessment that got inside both my mind as I read and my students' minds as they wrote.
Whereas my exposure to assessment in college had been limited to test formats I never wanted to use in a writing class, my on-the-job assessment training had mostly involved rubrics: the Michigan Educational Assessment Program (MEAP) four-point and six-point rubrics, Northwest Regional Educational Laboratory's 6+1 Trait rubric, and my own. I was frustrated with rubrics for many reasons. The most frustrating thing was that no matter how often I revised them or even invited my students to help create them, rubrics rarely reflected my response when I actually read individual papers. The problem was twofold: The rubric couldn't get inside my mind when I read, and it couldn't get inside my students' minds as they wrote.
Interestingly, many rubrics now include "reader response" language. Consider this comment from the 6+1 Trait rubric's sentence fluency category: "Phrasing does not sound natural. The patterns may create a sing-song rhythm, or a chop-chop cadence that lulls the reader to sleep." This comment may sound like it has gotten inside the reader's mind, but only in the same sense that a computerized phone menu can anticipate a caller's problems or needs. The company using such a menu can add as many options as it wants, but no caller experiences this as a real interaction.
Because I considered response the most pedagogically significant form of assessment, I put aside rubrics and other predetermined categories designed to generate reader agreement and grades. Instead, I tried to do what I imagined my grandmother would have done: help students learn to use language to mediate the gulf between the writer's and the reader's mind.

Getting Inside Sarah's Head

Sarah was a 12th grader in my college-prep research writing class last year. The first assignment was for students to research any aspect of their futures. Some students researched their top three college choices to figure out which school would best suit their interests and personalities. One investigated what it would be like to be a female engineer in a career dominated by males. One tried to figure out why he was so indecisive and how he could learn to deal with the unavoidable and important decisions he would face. Sarah's passion was music. She knew she didn't want to perform or teach, so she researched the field of music therapy to see whether it might be right for her.
One of my goals had been to teach students to find, synthesize, and skillfully use sources in their papers, and I'd asked students to use a combination of online and print sources in addition to at least one interview. When I read Sarah's paper, it was immediately obvious that she had plagiarized at least one source—big chunks of her writing didn't sound like her.
Dealing with students who plagiarize sources is an occupational hazard of writing teachers. Teachers often speak of "busting" students, threatening to run every paper through Turnitin.com and punish any instances of plagiarism with a zero. A common reaction to Sarah's paper would have been to give it a failing grade or at least return it to her and tell her to fix the problem before I read further.
Instead, I decided to find out what would happen if I narrated my response to Sarah's first draft and tried to get inside her mind. My response included this passage:

Sarah,

I like your perspective here; you've got to figure out a way to turn your passion into a job. It looks like you found some great sources, including information from schools offering degrees in this [music therapy], and the online interview, which seems more helpful to me than all the other jargony information about what the job entails.

Actually, the jargony nature of some of the information is a bit of a problem, because the jargon obscures the meaning. (What in the world does it mean to be "trained in the specific use of music therapy techniques as an adjunctive/augmentative therapy"?) Since you've stuck too close to the wording of these sources in your paper, you not only have a problem with plagiarism here, but the information goes over my head because I can't figure out what it means. In fact, I suspect that the information went over your head, which is why you didn't put it into your own words.

Fixing the plagiarism problem, then, means that you've got to figure out a way to make all this jargon a bit more concrete. I'd suggest using more of the online interview, since the story of the day-to-day realities of the job can help you form some images in your mind that will ground the abstract jargon and allow you to understand it enough to make it more simple for us. Let me know if you don't quite understand what I'm suggesting you should do here. …
After handing back the students' drafts, I approached Sarah at the end of class. I was curious whether I'd guessed right and whether she'd understood my feedback. I didn't need to ask her anything; she saw me and immediately began laughing. "Adjunctive therapy! I have no idea what that means, right? I even looked it up, because I knew I was supposed to use different words, but the definition didn't make any more sense than my source." As Sarah and I talked, we thought that her best course of action was to drop the plagiarized source altogether and talk to an actual music therapist about what he or she does, because the online interview didn't give enough information. Her next draft not only was more interesting to me, but also reflected Sarah's greater understanding of the career she was researching.
My attempt to narrate my reading of the plagiarized material and get inside Sarah's state of mind not only served to deflect her defensive reaction, but also enabled me to actually enjoy responding to the plagiarism problem. When I'd encountered plagiarism before, it had always made me angry, frustrated, and disengaged. But turning the reading into an imaginative exercise changed my experience. In addition, Sarah brought what she had learned from our conversation into her next research paper. When she encountered sources she didn't understand, she sought out people who could help explain the sources—something I do routinely in my own writing.
Could I have come to this insight if I had been using a rubric or other means of scoring Sarah's paper? Perhaps. But my focus would have been different. I would have been comparing the paper to something outside myself—the categories or standards on the rubric.

Beyond Grading to Responsiveness

Although I believe that grades and rubrics get in the way of the kinds of response and assessment that create better writers and, ultimately, better citizens, I still have to give grades. And my students still have to take standardized writing tests that are the epitome of unresponsiveness—tests in which they write on canned topics for faceless readers (and sometimes computers) and then receive a "response" months later in the form of a number or perhaps scripted "feedback."
I experiment with ways to get around the problem of grading—giving completion grades every time a draft is handed in but asking students to continue working on it until we're both happy; putting off grading until the third draft; or negotiating grades with students at the end of the semester. I have also found that students who invest in the writing process—including working through revisions that come from our assessment conversations—generally do fine on standardized writing tests.
But every time I feel the pressure to conflate assessment with quantification and standardization creeping up on me, worrying me and my students sick, I think of my grandmother and ask myself what she would do. It always makes me feel, think, write, read, teach, learn, and live a little better.
References

Daiker, D. (1989). Learning to praise. In C. Anson (Ed.), Writing and response (pp. 103–113). Urbana, IL: National Council of Teachers of English.

Damasio, A. (1999). The feeling of what happens: Body and emotion in the making of consciousness. New York: Harcourt Brace.

Fish, S. (1980). Literature in the reader. In J. P. Tompkins (Ed.), Reader-response criticism (pp. 70–100). Baltimore: Johns Hopkins University Press.

Huot, B. (2002). (Re)Articulating writing assessment for teaching and learning. Logan: Utah State University Press.

Johnston, P. (2004). Choice words. Portland, ME: Stenhouse.

Rosenblatt, L. (1993). Transactional theory: Against dualisms. College English, 55, 377–386.

Wilson, M. (2006). Rethinking rubrics in writing assessment. Portsmouth, NH: Heinemann.
