September 1, 2020
Vol. 78
No. 1

Stepping Up the Grading Environment

Students' expectations of the grading process can be a self-fulfilling prophecy. Here's how to change the narrative.

Most of my career has been spent trying to engage students, to spur their interest—be it for instruction, grading, or assessment. I want to know: How can we make grading policies and decisions that are more empowering and accessible to our students? I believe the answer to this could involve disrupting the patterns that students have come to expect and anticipate within the grading process. Let me tell you a brief story.
In 2019, I interviewed Celeste Kidd, a professor of developmental psychology, for the ASCD streaming series Ask Them. Over dinner the night before our shoot, Kidd gave a fascinating, albeit troubling, account of one of her experiences working in a homeless shelter. A local church group decided to deliver gifts to the children during the holiday season. The parishioners arrived one morning, wrapped gifts in hand, and distributed them to the children. Upon receiving the gifts, the children predictably tore the wrapping paper from the packages to reveal an array of dolls and toy cars. Smiles and laughter filled the room. But suddenly the festive mood took a less expected turn. Almost at once the children started ripping the dolls' arms off, tearing the doors off toy cars, and smashing the mini windshields. Some children used scissors to give Barbie a mullet, and others gave her a buzz cut as though she'd just joined the Marines.
As the children engaged in this joyful destruction, the church folks stood speechless … stunned. Once one of the church organizers got over her initial shock, she approached Kidd and suggested that the children's destructive behavior was abnormal.
But Kidd saw it differently. "Wrecking toys is fun," she said. "Haven't you ever wrecked a toy?" She went on to describe some of the realities of a homeless shelter. Personal space is limited, and few things are secure—especially toys. These children had years of experience in these conditions, and they'd come to have certain expectations of that environment. They had learned that destruction was unavoidable, and if the owner of the toy didn't take control of its demise, someone else would.
This attention to environmental factors is central to Kidd's research. I had discovered her work while researching the well-known "marshmallow experiment," first conducted by Stanford University psychologist Walter Mischel. The study involved placing a marshmallow in front of a child, typically aged 3 to 5, and promising the young subject that they would receive a second marshmallow if they could wait 15 minutes before eating the first one. The time it took for the child to eat all or part of the first marshmallow was used as a singular measure of their self-control (Mischel & Ebbesen, 1970).
Kidd went on to conduct her own study (Kidd, Palmeri, & Aslin, 2013) that suggested that environmental factors had far more to do with a child deciding to wait than any supposed self-control. More recently, researchers have taken the marshmallow study further and shown that a child's socioeconomic standing looms large in their decision to wait or not. Like Kidd, they found that these young subjects were making calculated, rational decisions based on their past experiences (Watts, Duncan, & Quan, 2018).
Looking more specifically within a school setting, Kidd's broader research suggests that students will generally "tune out" if any of the following conditions exist:
  • They think they already know the material.
  • They believe they have no hope of learning it.
  • They don't have the background knowledge to make connections to new material.
  • They struggle to see the utility or relevance of what they're learning.
In light of this list, I've spent a lot of time pondering Kidd's contention that how students act in any situation depends a lot on their expectations of the environment. If student expectations have such a significant impact on engagement in the learning process, what about grading? Might students form expectations based on their past grading experiences?
I believe they do and that we can positively impact those expectations if we focus on three "I"s: investment, identity, and independence.

Investment: Give Students a Chance to Tell Their Story

For years I've heard teachers argue, "I'm getting my students ready for university" as justification for upholding antiquated grading rules and policies. Thankfully, this argument is eroding, but I fear that has more to do with what universities are doing than with anything K–12 systems are doing on their own. Many universities are already looking at student performance more holistically, asking applicants to write or record a personal profile in which they discuss challenges they've overcome, significant life events, and personal interests as a way to distinguish themselves from others. Check out the admissions websites for Brown University (https://admission.brown.edu) and the University of British Columbia (www.ubc.ca/admissions) as just two examples among many.
I wonder what K–12 schools could do to send students the message that their story matters. How could our grading policies and procedures demonstrate that we consider the whole student to be important? Here are a few ideas:
  • Examine what lies below the surface. Our school district has looked at the work of Tolisano and Hale (2018) to inform our process for getting students to report their own learning. Their analogy of an iceberg has been particularly helpful. When reporting student learning, too often we focus on the portion of the iceberg that extends above the waterline, the part we can all see. The final product of a project, the test score, the written paragraph, a speech—these are all "above the surface" elements. Tolisano and Hale encourage us to explore what lies below the waterline. To this end, our district designed self-reporting templates, usable for projects, assignments, or end-of-term grades, that ask students to tell us what "no one else would know" about their learning. Figure 1 shows one of our elementary templates, designed to have students tell us more than grades will ever reveal.
  • Be sure students understand the questions and the measurement scale. If we're going to construct grading environments in which we welcome student voice, there are steps we can take to improve the quality of these responses. As districts broaden the opportunities students have to tell their own story, questions around the accuracy of these accounts will surely arise. Thankfully, there is research emerging to guide us through these challenges. Rosen, Porter, and Rogers (2017) have identified four key considerations for making self-reporting more accurate:
      • Comprehension: Does the student understand what is being asked?
      • Retrieval: Does the student recall a balanced and representative collection of events in order to make an accurate report?
      • Judgement: Is a final conclusion based on sufficient data?
      • Response: Does the student understand the scale used in the response? Has the teacher taken social factors into account? (For instance, embarrassment may alter the student's report.)

Figure 1. Elementary Self-Reporting Template

Considering this research, I think we have more power than we realize to affect not only self-reporting accuracy, but student expectations of the grading experience.

Identity: Do My Grades Define Who I Am?

A number of years ago a colleague gave me some sage advice: "If you're going to make changes to your grading and assessment policies, write down what your students think about it."
As I became accustomed to listening to my students, I began hearing them make grading comments that were more about themselves than about the system. It dawned on me that grades had a huge impact on some students' self-concept and their very identity.
One story comes to mind more than any other, and I shared it in the conclusion of my book Grading Smarter, Not Harder (ASCD, 2014). One Friday afternoon, Melinda, a struggling student, dropped by to take a retest. She displayed very little academic confidence in class and was generally defeatist, so it surprised me a little that she'd come by on her own time to be reassessed. She asked to explain her understanding verbally rather than in writing. I recall her answering a series of questions with incredible detail.
As she was leaving, she stopped by the door and said, "You know, Mr. Dueck, this is probably the only class I'm putting effort into. I get to see what I'm right about and what I'm not right about. This helps me get my grades up. It makes me feel smarter than I really am—I mean, I don't get good grades, but in this class, I know things and I feel smart. I haven't really felt smart in the past."
This narrative speaks volumes about Melinda's self-concept and the role that grading changes and policies can play in forming it. By allowing retests, and by widening the window through which Melinda could demonstrate her understanding, I was building an environment in which she could feel empowered, feel smart.
If grades have a significant impact on our identity, perhaps we can harness this power. Here are a few ideas:
  • Failure is not an option … temporarily. MIT has an interesting approach for first-year students, called First Year Experimental Grading. According to the university's website, first-year students receive either a pass or no record for their courses. Yup, that's right—you can't fail. Now why would someone institute an idea like that? The answer is written in plain language on the website: They wish to "ease the transition" for students coming to MIT. You could argue that they want students to feel confident and less discouraged. (If you're a big fan of failure, don't get too alarmed. Students do transition to an A–F scale by their sophomore year.)
I can see a thousand eyes roll at this MIT practice, but let's just dream for a second. What if a high school informed all incoming 9th graders that the first month of feedback would be graded either pass or no record? What if there were no permanent grades for the first two or three assessments, but rather only written feedback or a "potential score" that could change as the work improved? What if we gave nongraded feedback such as, "You're starting to figure out this formula, but let me show you an adjustment that will greatly increase your chances for success"? I don't think the mountains would fall into the sea if we tried a few of these things.
  • Strength-based reporting. Look over the descriptions at the "lowest" or "weakest" end of your rubric or assessment tool. Are they phrased in negative or positive terminology? If they're at all like mine used to be, I'll put money on the former. Is it any wonder that some learners have a negative identity when they spend a lifetime having their demonstrations of learning cloaked in terms like fails to, incorrectly, and shows no evidence of?
Think about the language you use when you want someone to actually be engaged in learning something new … say, riding a bike. Do you give them a push and run behind them shouting out all of the things they're doing incorrectly? "You're not riding very fast! You appear unbalanced! You're not as good as that kid over there!"
Of course not. If we want them to persevere, to ever pick up a bike again after falling down, we tell them what they are doing right, how they are beginners, how they are getting it little by little. Why don't we do the same for struggling learners?
British Columbia was a haven for draft dodgers in the 1960s, so don't be surprised if there's a lingering feeling of nontraditional thinking around here, especially in education. One example of this is our move toward strength-based reporting. Our Ministry of Education has piloted a reporting model in which strength-based language is used for all learners. In many rubrics, the lowest level is phrased as: The student demonstrates an initial understanding of the concepts and competencies relevant to the expected learning.
Some other examples of how we can describe the beginning stages of learning include:
  • With support is able to …
  • Makes an attempt to …
  • Shows some signs of …
The next time you tackle a new piece of learning, such as making a pie crust, consider whether you want the feedback to be positive or negative in nature. Your feelings might just be the same as your students'.

Independence: Can I Measure and Report My Own Lifelong Learning Journey?

I think we've caught a glimpse of the need for independence in this COVID-19 era. Last school year, in what seemed like an instant, the pandemic sent our students home before spring break, and they did not come back for the rest of the year. Teachers and students suddenly needed to adapt to remote learning environments, define essential outcomes, and reach out digitally.
For most schools, this situation went on for months. What I witnessed, however, were innovative grading and assessment practices that will undoubtedly shift our landscape when we return to a new normal. Can we use these to set our students up to be lifelong learners? Can we empower students by allowing them to grade themselves?
One challenge that teachers have seen since shifting to remote learning is that their grading practices aren't as solid as they'd like. While providing remote support to schools, I had the opportunity to chat with a number of educators during the COVID-19 era. Heather Willman, a district instructional coach in Rochester, Minnesota, said, "One thing we are seeing is that some of our typical in-class assessment practices, such as multiple-choice testing, just aren't an option in remote learning situations. Obviously, students couldn't take these tests because they'd easily copy off each other or just Google the answer." Willman has realized that changes in assessment and grading are needed with or without social distancing. "If the answer is so easily copied or looked up, perhaps we need to go deeper," she said.
Ben Arcuri teaches senior chemistry at Penticton Secondary School in British Columbia and works with educators around North America. Faced with teaching his chemistry students from behind a screen, Arcuri looked for solutions. "I think this COVID-19 era was an opportunity to try new things, be inventive, think outside the box," Arcuri said. "We were clearly in uncharted waters."
One assessment strategy adopted by Arcuri was to conduct a digitally collaborative chemistry lab, but with an individual assessment. Arcuri assigned students to groups of three or four and sent out a different set of data to each group. Students communicated with each other by email, Zoom, Microsoft Teams, or chat. They discussed the data, conducted the lab remotely, and performed the needed calculations. Once they had collectively examined the problem, each student submitted an individual reflection or lab report as the final step. By using different data sets with each group, Arcuri greatly reduced the chances of copying while encouraging collaboration. Because the individual reflection or lab report was unique to each student, plagiarism was rare.
Similarly, middle school teacher Danika Heinrich at Skaha Lake Middle School in British Columbia has personalized responses to problems by asking students to determine their own carbon footprint or to calculate their personal water use. The purpose or question for these problems is consistent for the group, but each elicits a personal response that engages the learner while discouraging copying.
Teacher Marcus Blair adopted a four-level proficiency scale with his senior history class before the COVID-19 crisis and found it came in very handy during remote learning. Blair avoids terminology such as A+ or 100 percent and instead reports when students have been extending their learning or showing sophistication. "I really like having the upper level of my scale as extending," said Blair. "This allows students who want to go further and explore on their own to just go for it. I use a lot of inquiry questions [during remote learning], and each student's response is different. They can't just Google it."
All of these revelations stemming from the switch to remote learning can set students up to be more independent in both how they learn and how they seek information. Remote learning also teaches them to be more formal in their digital communications, which will come in handy once they return to the classroom as well as after they leave it. "I now recognize that we need to teach [technology communication] skills not only in times of social distancing, but also as students embark on careers and other paths that will require virtual and digital communication," said Heinrich.

Reshaping the Expectations

Our students arrive at each class with a set of expectations based on past experiences. Every day, in every interaction, and with every grading decision, we have the power to reshape those expectations and, in turn, their behaviors, helping them become invested, independent learners with a strong identity. Let's be courageous in harnessing that power.

Reflect & Discuss

➛ How can you include more student voice and choice in your grading and assessment?

➛ How do your students associate their identities with their grades?

➛ Have you seen flaws in your own grading system after switching to remote learning? In what ways can you adjust to fix those?

References

Kidd, C., Palmeri, H., & Aslin, R. N. (2013). Rational snacking: Young children's decision-making on the marshmallow task is moderated by beliefs about environmental reliability. Cognition, 126(1), 109–114.

Mischel, W., & Ebbesen, E. B. (1970). Attention in delay of gratification. Journal of Personality and Social Psychology, 16(2), 329–337.

MIT Registrar's Office. (n.d.). First-year grading. Retrieved from https://registrar.mit.edu/classes-grades-evaluations/grades/grading-policies/first-year-grading

Rosen, J. A., Porter, S. R., & Rogers, J. (2017, May 25). Understanding student self-reports of academic performance and course-taking behavior. AERA Open, 3(2).

Tolisano, S. R., & Hale, J. A. (2018). A guide to documenting learning: Making thinking visible, meaningful, shareable, and amplified. Thousand Oaks, CA: Corwin.

Watts, T. W., Duncan, G. J., & Quan, H. (2018). Revisiting the marshmallow test: A conceptual replication investigating links between early delay of gratification and later outcomes. Psychological Science, 29(7), 1159–1177.

For 23 years, Myron Dueck has worked as an educator and administrator. Through his current district position, as well as his work with educators around the world, he continues to develop grading, assessment, and reporting systems that give students a greater opportunity to show what they understand, adapt to the feedback they receive, and play a significant role in reporting their learning.

Dueck has been a part of administrative teams, district groups, school committees, and governmental bodies in both Canada and New Zealand, sharing his stories, tools, and first-hand experiences that have further broadened his access to innovative ideas. He is the author of the bestselling book Grading Smarter, Not Harder (ASCD, 2014) and the new book Giving Students a Say (ASCD, 2021).
