September 1, 2008 | Vol. 66, No. 1

The View from Somewhere

In making objectivity the Holy Grail of scoring writing, have we forgotten that real assessment of writing requires an interaction between two minds?


It's old news that high-stakes, summative assessment practices don't help students learn, although word hasn't yet trickled up to politicians. We know that when Carl, a struggling 8th grader, takes the state writing exam (theme—"Friends"), he won't receive any kind of helpful response if he writes,

im not going to write about friends because i don't have any friends and ive never had any friends and i had a friend once but he tried to kill me the end.
His failing score, when he sees it months later, will not help Carl develop his thoughts or guide him to think about how well his words match his intent. Even if Carl is lucky enough to live in a state that still uses human rather than computer labor for attaching numbers to written expression, the scorer will not establish a mentoring relationship with him, a bond we know is important to all learners and especially crucial to Carl.

Creating Distance

Anonymous assessment like this is not just unhelpful; it can also be damaging. The prompt reminds Carl of his difficult social interactions, the testing situation reinforces his many school failures, and these failures compound his social alienation. At the end of the day, we'd have to agree with Carl when he announces, before crumpling up his rough draft and stomping away, "No one is going to care what I have to say about this, anyway."
Why can't competent, caring teachers of writing assess the work of learners like Carl? Because we're not "objective" enough, of course. Objectivity is ingrained in these testing procedures. Teachers, the argument goes, are too close to students to be fair; our caring might interfere with our assessments.
All of this objectivity adds up to distance. Because of built-in physical, situational, and psychological distance, the scorer, scoring procedure, and reporting process won't touch Carl, except negatively. So he distances himself, refusing to engage fully in the test or in writing itself.
The earliest conceptions of objectivity in philosophical literature were quite different from those implied by our current usage. In late medieval times, the term was used to refer to "objects of thought, rather than those of the external world" (Daston, 1992, p. 600). This is an inversion of the distinction we now make between subjectivity (which is "unreliable" because our perceptions cannot be trusted) and objectivity (which is "reliable" because it exists outside the shifting nature of our minds). This definition began to change in the 18th century. In 1759, Adam Smith foreshadowed the connection policymakers now make among objectivity, impartiality, and distance when he described a man who viewed experience

neither from our own place nor from his, neither with our own eyes nor with his, but from the place and with the eyes of a third person, who has no particular connexion to either, and who judges with impartiality, between us. (Daston, 1992, p. 20)
Scorers and procedures in the current assessment landscape embody this man distanced from himself, standing in "no particular" place and looking through "no particular" eyes, a conundrum that Thomas Nagel (1986) described as "the view from nowhere." Carl cannot show his thoughts to someone who assumes this view from nowhere; he is lost enough already. Teachers' reluctance to challenge the suitability of objectivity undermines our commitment to teaching, learning, and assessment.

Rubrics and the View from Nowhere

Although the harm done by the distance and objectivity of high-stakes assessment practices is clear, more subtle incarnations of the pursuit of objectivity have similar effects. Rubrics were developed to make direct writing assessments palatable to testing companies—and thank goodness, or multiple-choice grammar tests would still be the only acceptable method of determining writing skill. However, the disagreement that is inevitable when unique individuals bring their perspectives to a text doesn't lend itself to objective testing procedures. So rubrics meet the demands of objectivity by distancing teachers from their own perceptions in order to create agreement among readers—writing assessment's view from nowhere.
Consider how the rubric in the following scenario pushes a group toward this universal but bland perspective. Teachers are assessing a 6th grader's response to a districtwide writing prompt. All but one think the paper should earn the high score of 4 on the districtwide rubric. The odd teacher out stubbornly insists that the paper is vacuous and shouldn't earn higher than a 2, so the session leader directs everyone to the wording of the rubric. He leads the group category by category through the scores for mechanics, sentence fluency, and organization, until the rebel teacher is forced to concede that according to the rubric, the student has earned a 4. What this teacher actually sees in the paper is not of concern; the rubric's rigid categories take precedence.
If the rubric had included a category for "originality," the teacher's insight might have stood a fighting chance. But the way that these teachers related to the rubric, feeling that they must fit their responses into it, should give us pause. Writing assessment expert Bob Broad (2003) spent a year listening to assessment conversations about student writing among members of a university English department. He identified 89 distinct categories of values the professors expressed about student writing. A typical rubric allows four to seven categories to which teachers must confine their reactions. Given Broad's observation, even a rubric with 15 categories would likely shut down teachers' responses, including the responses of rebel teachers, which might help students write more meaningfully. What matters with rubrics is agreement.
Agreement (or reliability) is central to objectivity in positivist testing theory, but it runs roughshod over the nature of writing and reading. Louise Rosenblatt (1976) and other literacy theorists have posited for years that the meaning and value of texts are not simply fixed in syntax and word choice; instead, meaning and value develop in relationship to the experiences that readers bring to words. People can't agree on the meaning (let alone the "worth") of any text because the truth of the text is bigger than anything we could express or agree on and because the meaning evolves with the perspective of its readers. This interaction of reader with text—and of reader with other readers—demands a shared assessment process that defies agreement or expression through numbers.
This collaborative nature of meaning and value in texts has always threatened the work of standardized test makers. Educational Testing Service's Paul Diederich was appalled in the early 1960s at the wide range of scores his 53 "esteemed readers" gave to papers. Instead of adjusting assessment to accommodate the nature of language, Diederich and his colleagues tried to adjust the nature of reading and writing to fit the needs of objective assessment procedures (Diederich, French, & Carlton, 1961). He tried to bypass reader reaction by standardizing it, distilling the comments of the 53 readers into a list of five factors—writing assessment's scientifically derived view from nowhere. We see the effects of Diederich's oversimplification in the introduction to a Michigan Educational Assessment Program rubric (2005): "Here is an explanation of what readers think about when they score your writing." The rubric lifts the readers out of their own responses and tells them what to think, thus violating the complexity of reading, writing, and responding for the sake of objectivity. Rubrics, which classroom teachers routinely use for formative assessment, compound the distance created by high-stakes tests.

Assessment as Conversation

I suggest that any assessment tool designed around the concept of objectivity is not, in fact, assessment. Before you dismiss me as too radical, consider the Latin root of the word assessment—assidere, to sit beside. I admire teachers who take the time to sit or crouch down next to their students to listen and question and work. The physical position conveys not only respect for the student but also recognition that, in order to help, the teacher must get closer to the student and his or her work. Proximity matters; the teacher gains a better view of what the student needs and can bring individual experience—and an individual perspective—to bear on the interaction.
What might an assessment process informed by a reader's unique view from somewhere and by the transaction among reader, text, and author look like? Consider what happens in a good conversation. I share my perspective or experience. This inspires you to respond—to share a similar experience, perhaps. These two stories blend and make me realize something I'd never thought about before, and I offer this new idea for your consideration. The process continues until we've reached new conclusions and raised new questions. Through this collaboration, we've articulated and developed our thoughts, even changed our minds. Neither could have done this without the other. This sounds very much like the transaction between text and reader that Rosenblatt describes—and like learning itself.
How can teachers view writing assessment as a conversation? First, pay attention to what goes on in your mind as you read students' writing, and articulate those reactions; give each student your experience of reading her or his paper. As students respond to your reactions, you might find that your interpretation and suggestions change. Be open to each writer's purposes, experiences, and personality, using everything you know about that person and yourself as a reader.

Conversation with Justin

An assessment conversation I recently had with my student Justin demonstrates how perspectives evolve in such assessment. Justin wrote about a tennis match he played during which his mind was really on the impending death of his grandfather. His first paragraph began,

Some days in life you find yourself lost, the trivial experiences of yesterday are no longer as important as they were a mere 24 hours before.
Six more general, abstract sentences followed, and I found myself zoning out and cursing whoever had taught Justin the inverted funnel introductory paragraph structure. I wasn't engaged until the fourth page, when he vividly described the barrage of images that assaulted him with every bounce of the tennis ball. I wanted to tell Justin to cut the first three pages. On the other hand, I was worried that my bias against five-paragraph essays was getting in the way of a "fair" assessment. I needed more information from Justin. So I flipped to the last page, where I'd asked students to write about how they'd started writing, where they'd struggled, and their concerns. Justin had written,

I was stuck for a while at the beginning. I was not exactly sure what I wanted my topic to be. … I wrote the first paragraph without any topic but then I decided to go with it and chose a story that fits it.
As I imagined the first paragraph as Justin's way out of writer's block, my initial reaction made sense. I wrote to Justin,

I think you may have struggled a bit in the intro, trying to find the memory you wanted to write about. There isn't anything really wrong with the intro—there are some interesting, true statements in it, but it is so abstract. What if you see the intro as useful in the sense that it helped you define your memory, but delete it because it doesn't fit the vivid, descriptive nature of the rest of the paper?
Then I remembered how Justin had told me that his response group had been amazed by the introduction. I imagined they saw his abstractions as sophisticated, and I thought that, developmentally speaking, Justin needed to be allowed to try out this kind of writing. To honor both perspectives, I added this suggestion:

When you read it [the intro] you layer all the things you know about the moment you were playing tennis … onto it. But your reader doesn't have the benefit of the knowledge you have, so the whole intro remains very abstract and general. … What if you pick the most meaningful sentences from your intro, and figure out a way to work them into the ending of the paper? This way, your reader will be able to pile the specific memories, feelings, and images onto these abstractions.
I was surprised by what I'd written; I'd never thought about the interaction between abstractions and details in quite this way before. In negotiating the conflict between my response and that of Justin's group, I had come to a new insight about writing. That insight came as I reflected on my reaction, his group's reaction, and Justin's intent and experience of writing. A variety of particular perspectives—not a bland, universal view of writing—informed my suggestions.
When Justin handed in his revision, which we both felt had improved tremendously, I asked him to reflect on his process. He wrote,

I liked how you addressed what you saw and felt from the paper and then gave your suggestions. Together they gave me a greater understanding of what you meant and made it easier to … know where each suggestion was coming from. I think that what you were saying was somewhere in the back of my mind and I just needed it vocalized to me in a fresh perspective.
For Justin to make difficult cuts and changes, it was important for him to know "where each suggestion was coming from." He needed my perspective, not the codified feedback of a rubric.
In assessment conversations like this, there is no single objective perspective—not only because the teacher's experience is fundamental to this process, but also because the teacher's perspective changes in response to the student's perspective, intention, and writing process. This model is blasphemous to many testing specialists, whose Holy Grail is objectivity. But it has the potential to help our students become better writers and assessors of their own work.
Using grades to prompt revision usually leads students to ask, "What changes should I make for three more points?" rather than, "What am I trying to say, and how can I say it more interestingly?" But most teachers need to turn in a grade at the end of the day. Linda Christensen (2004–2005) has negotiated this conflict by assigning a point value to each draft a student turns in. If students engage in each stage of revision, they receive all the possible points, although some may need to work through more drafts. I've gotten around the issue of points altogether by working with students to choose a small number of papers each term that we will work on until we are both happy. I don't have a substantive assessment conversation with each student for every paper, but not all papers need to go through this intense process. At the end of the term, the student and I decide together what grade to assign.

Dialogue Is Assessment

This view of assessment as a conversation works as well for research- and literature-based writing as it does for personal narrative. For any writing in which ideas, experience, intent, and audience matter—which is to say most anything worth writing or reading—a conversation is the only process responsive enough to expose the human mind's complex interactions with language. Even a frequently revised rubric I create won't be as responsive as a conversation. I don't want to be held to statements and reactions determined in advance, or even to my own best ideas, because I know they'll change as I collaborate with my students.
The conversations that teachers have with students about their writing—questioning, articulating, and changing perspectives together—are not dialogues we have before assessment takes place to give the student a fair shot at the best possible grade; they are assessment. Objectivity and reductionism may still rule in school, but let's not confuse a grade or any tool designed to generate a number with assessment. As long as I have any say in my own classroom, I'll be assessing writing through conversation, boldly asserting that the view from nowhere can never be as rich as the view from here.
References

Broad, B. (2003). What we really value: Beyond rubrics in teaching and learning. Logan: Utah State University Press.

Christensen, L. (2004–2005). Moving beyond judgment. Rethinking Schools, 19(2), 33–37.

Daston, L. (1992). Objectivity and the escape from perspective. Social Studies of Science, 22(4), 597–618.

Diederich, P. J., French, J., & Carlton, S. (1961). Factors in judgments of writing ability. Educational Testing Service Research Bulletin (No. 61–15). Princeton, NJ: Educational Testing Service.

Michigan Educational Assessment Program. (2005). High school English language arts test booklet, Form 1. Lansing: State Administrative Board, State of Michigan.

Nagel, T. (1986). The view from nowhere. Oxford, UK: Oxford University Press.

Rosenblatt, L. M. (1976). Literature as exploration. New York: Noble and Noble.
