February 1, 2018 | Vol. 75, No. 5

The End of Points

Using points to measure achievement may seem fair and objective, but it can hide critical information about student learning.

Assessment
It is difficult to imagine teaching without points. Points are the currency of our classrooms, our units of measure to describe student learning. As a former teacher and school administrator, I know that over the course of a term, we might capture and calculate hundreds, even thousands, of points in our grade books. Even the most progressive, performance-based tasks are often translated into points.
But what if our use of points, specifically in assessments, actually thwarts effective teaching and learning? What if our use of points pulls our focus toward tallying specific responses and away from gleaning an overall level of understanding? What if we are so consumed with each tree that we are losing the forest?

When Points Don't Tell the Whole Story

To understand what I mean, suppose three students earned these scores on a recent quiz:
  • Miguel: 48 out of 60 points = 80%
  • Olivia: 48 out of 60 points = 80%
  • Kamryn: 48 out of 60 points = 80%
All three students appear to have performed identically, right? All of them achieved a B-, so although they didn't show the strongest knowledge of the content, they seem solid enough to move forward. But what if the quiz comprised three sections, each worth 20 points, and the students' point allotments were markedly different across the three sections, as shown in Figure 1?
Figure 1. The three students' point allotments across the quiz's three 20-point sections.
Now the students' performances don't seem identical at all. Olivia seems to be in a much better position than Miguel and Kamryn. If each part of the quiz addresses a distinct standard, then it's deeply problematic if all three students receive the same B-. Each student clearly has a different profile of what they know and what they don't yet know. If we were asked to identify students for additional support and looked only at composite quiz scores, we would miss the fact that Miguel and Kamryn clearly need support before moving on—Miguel with Part 1 content and Kamryn with Part 3 content.
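To see just how completely the composite hides this, consider a quick sketch of the arithmetic. The per-section breakdowns below are hypothetical stand-ins (the actual values appear only in Figure 1); the point is that three very different profiles all collapse into the same 48 out of 60.

```python
# A minimal sketch of the quiz arithmetic. The per-section points are
# hypothetical stand-ins for the values shown in Figure 1.

quiz_scores = {
    "Miguel": {"Part 1": 8,  "Part 2": 20, "Part 3": 20},
    "Olivia": {"Part 1": 16, "Part 2": 16, "Part 3": 16},
    "Kamryn": {"Part 1": 20, "Part 2": 20, "Part 3": 8},
}

POINTS_POSSIBLE = 60  # three sections, 20 points each

for student, sections in quiz_scores.items():
    earned = sum(sections.values())
    composite = earned / POINTS_POSSIBLE
    print(f"{student}: {earned}/{POINTS_POSSIBLE} = {composite:.0%}  sections: {sections}")

# Every composite prints as 80%, yet the section profiles differ sharply --
# exactly the information the single percentage erases.
```

Only the 80 percent makes it into the grade book; the section-by-section picture never does.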
This is the problem with measuring student achievement with points: Although it seems objective, accurate, and fair, relying on points can hide crucial information about our students. We want assessment scores to clearly represent each student's unique performance in our class—their specific strengths and weaknesses—but points often prevent us from doing this. What's more, accurate information about our students' achievement becomes hidden from them—Miguel and Kamryn might think that their 48/60 means they're as ready for the next unit as Olivia, which is simply not true.
Now let's adjust our scenario. Instead of assessing three distinct standards, what if our quiz assessed just one standard, but each section tested a different Depth of Knowledge (DOK) level? Now let's look again at the three students' scores, but this time (as represented by Figure 2), imagine that Part 1 assessed the students' recall of the content (DOK 1), Part 2 assessed whether the students could summarize the content (DOK 2), and Part 3 tested the students' abilities to compare the content to ideas from another unit (DOK 3/4). Under this scenario, Olivia still seems fine, but Kamryn's knowledge of the standard caps out at DOK 2, suggesting that she needs more support to gain a deeper understanding.
Figure 2. The same scores, with each quiz section mapped to a Depth of Knowledge level.
Miguel's performance now seems downright befuddling: According to the scores, he has a deep understanding of the material, more so than Olivia, but made several mistakes when the DOK was low. Based on his perfect scores on DOK 2–4 questions, he clearly has deep content knowledge, so did he just rush too quickly through the easy questions? Did he misunderstand the questions? Did he not remember the ancillary concepts? Finally, what grade best represents his knowledge? If we merely totaled the points from each section and assigned a grade, we would miss all these nuances—and miss a chance to guide the student toward a better understanding of the lesson content.

More Than the Math

Points give us cover and safety. They give our measurement of student performance a veneer of fairness and quantifiable objectivity. We assign a point maximum that a student can earn for each question, count up the total points the student earned, and then divide that total by the total points possible, and—presto!—we have a grade. Unfortunately, it's a grade that reflects only the points a student accumulated, not how those points were accumulated. If someone questions the accuracy of a student's grade, we can respond with, "It's how the math worked out." But this approach to measuring learning is a shirking of our professional responsibility and a denial of our professional expertise as educators. The mathematics of points should not dictate a student's grade when our professional judgment doesn't agree.
Worse, when we reduce our students' learning to point totals, our students respond in kind. They bargain, haggle, and argue with us over each point on each quiz and test. Just like us, they lose the forest (what level of knowledge they have demonstrated and how they need to improve) for the trees (how many points they earned). In fact, when points are the only way we speak about achievement, it can feel like teachers and students aren't even looking at the trees. We are counting leaves.
Many educators who recognize the weaknesses of points are turning to alternative approaches to scoring. What if instead of calculating point totals, for example, we created a scale with simple descriptors of stages along a continuum of standards mastery, as shown by the sample scale in Figure 3?
Figure 3. A sample scale with simple descriptors of stages along a continuum of standards mastery.
Many teachers use this simpler and more straightforward scale. For any given assessment, they review the totality of a student's answers and assign a grade that represents the overall level of knowledge the student demonstrates. If a student showed only surface level knowledge, the grade reflects that. If a student made simple mistakes but showed deep overall knowledge, the teacher can use his or her professional judgment to decide whether those mistakes materially affect the grade. If an assessment covers multiple standards, the teacher evaluates students' level of mastery on each standard independently, and enters each of those grades (A–F) in the grade book so information isn't collapsed and important details aren't lost. With this approach, clear and important information about a student's academic progress is not obscured by blanket point totals. What a student knows and doesn't yet know is now clear to the teacher and the student.
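In practice, a grade book built around this approach might look something like the sketch below. The level descriptors and the particular grades assigned here are hypothetical illustrations, not the scale from Figure 3; what matters is simply that each standard keeps its own entry rather than being averaged into a single total.

```python
# A minimal sketch of a standards-based grade book. The descriptors and
# the grades assigned here are hypothetical illustrations.

MASTERY_SCALE = {
    "A": "deep, flexible command of the standard",
    "B": "solid command with minor gaps",
    "C": "surface-level command",
    "D": "minimal command",
    "F": "does not yet demonstrate command",
}

# One entry per student per standard -- nothing is collapsed or lost.
grade_book = {
    "Miguel": {"Standard 1": "D", "Standard 2": "A", "Standard 3": "A"},
    "Olivia": {"Standard 1": "B", "Standard 2": "B", "Standard 3": "B"},
    "Kamryn": {"Standard 1": "A", "Standard 2": "A", "Standard 3": "D"},
}

for student, standards in grade_book.items():
    for standard, grade in standards.items():
        print(f"{student} | {standard}: {grade} ({MASTERY_SCALE[grade]})")
```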

Focusing on Mastery

This kind of thinking forces us to see the forest. What matters isn't whether a student scores 48 out of 60, or 88 out of 100, or 265 out of 297. What matters is the level of content mastery the student demonstrates. When teachers free themselves from points, the discourse of the classroom fundamentally changes. Teachers no longer return papers silently with students eager to compare point totals. Now, teachers facilitate a reflective debriefing, which might sound like: "Which of the three standards is your strongest, and with which one do you still have the most work to do?" Teachers can partner a student with another student who performed higher on a particular standard and ask them to identify what they need to learn to demonstrate a higher level of mastery. In this way, students stop talking about point accumulation and start describing learning in terms of mastery. It's not, "I'm three points away from a B." It's "I still haven't mastered how to factor inequalities."
Creating assessments becomes more intentional, too. First, we identify what a student would have to demonstrate to prove mastery of the content and which common misconceptions or errors would reveal less than sufficient mastery. Next, we organize these criteria into a simple rubric or checklist. Then, rather than design a quiz through a lens of point calculations ("How many multiple-choice questions should I have, and how much should each be worth so it's a 60-point quiz?"), we create assessments guided by the question: "What questions are necessary to determine each student's level of standards mastery?"
To illustrate this, I'll use the high school social studies standard, "Analyze voluntary and involuntary immigration trends since Reconstruction in terms of causes, regions of origin and destination, cultural contributions, and public and governmental response" (North Carolina's AH2.H.3.4, 2010). If we wanted to know whether a student mastered the standard, we'd want to know that the student could identify economic, political, and social factors that attracted immigrants to America and that pushed immigrants to leave their home countries, and also that the student could describe both hostile and welcoming reactions from America.
We might decide that we could confidently determine the student's level of mastery if we asked them to choose two countries and write a paragraph on each that described the factors that resulted in immigrants from that country coming to America, along with specific American reactions—both governmental and public—to each country's immigrants. Or, if many of our students are English language learners, we might decide that we want to reduce the language demands of the responses and instead would ask a handful of multiple-choice questions that address each of the elements of the standard.
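One way to capture those criteria before writing any questions is a simple checklist like the sketch below. The structure is a hypothetical illustration and the wording paraphrases the criteria described above; the teacher marks what a student's two paragraphs demonstrate and bases the mastery grade on what remains unmet.

```python
# A minimal sketch of a mastery checklist for the immigration standard
# discussed above. The wording paraphrases the criteria in the text;
# the structure itself is a hypothetical illustration.

checklist = {
    "Identifies economic, political, and social factors that attracted immigrants to America": False,
    "Identifies factors that pushed immigrants to leave their home countries": False,
    "Describes hostile American reactions, governmental and public": False,
    "Describes welcoming American reactions, governmental and public": False,
}

def not_yet_demonstrated(checklist):
    """Return the criteria the student's work has not yet shown."""
    return [criterion for criterion, met in checklist.items() if not met]

# After reading the student's two paragraphs, the teacher marks what the
# work demonstrates, then grades against what is still missing.
checklist["Identifies factors that pushed immigrants to leave their home countries"] = True
print(not_yet_demonstrated(checklist))
```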
If you worry that this approach forsakes the objectivity of points and instead invites subjectivity into assessment, remember: Points create only the illusion of objectivity. In either case, the teacher creates questions and determines how well students answer those questions. Awarding and totaling points doesn't magically make the process objective. In fact, liberating ourselves from points capitalizes on our professional expertise and helps us to be less subjective and more transparent. Rather than describing proficiency as a "90 percent," we articulate exactly what student performance constitutes evidence of mastery—even collectively agreeing within a grade or department—and share that with students, caregivers, and administrators. Teachers find, to their great relief, that with this approach there is far less haggling and arguing about grades. A student earns a grade based on the level of performance she demonstrates, and no amount of points earned through homework or extra credit can substitute for showing increased proficiency.
Ending our use of point scoring may seem a radical suggestion, but we need to dispel the myth that points make our grading more accurate or fair. We've seen that they generally do the opposite, making our grades less accurate, more confusing, and less fair. Liberating ourselves from points can be frightening, but it redirects us to focus not on the quantity of points our students earn, but on the quality of the learning our students demonstrate.
End Notes

1. Webb, N. (2002). Depth of knowledge levels for four content areas. Unpublished paper.

2. Marzano, R. J. (2010). Formative assessment and standards-based grading. Bloomington, IN: Marzano Research Laboratory, p. 44.

Joe Feldman has worked in education for over 20 years as a teacher, principal, and district administrator. He is the founder and CEO of Crescendo Education Group, which since 2013 has supported K-12 schools, districts, and colleges/universities nationwide to improve grading and assessment practices. He has presented at numerous education conferences, and his writings have been published in Education Week, Kappan, Educational Leadership, District Administrator, and Black Press USA. His book, Grading for Equity: What It Is, Why It Matters, and How It Can Transform Schools and Classrooms (Corwin), was published in 2018.
