November 1, 2008 | Vol. 66, No. 3

All About Assessment / The Assessment-Savvy Student

James Popham

This issue of Educational Leadership stresses the importance of giving students ownership of learning. That's an unarguably good thing to do. But it carries with it a corollary requirement—namely, that students who don't understand something really can't own it. These days, as never before, teachers rely on evidence from students' scores on state, district, and classroom tests to help them monitor their students' progress and make instructional decisions. But if students don't know where those progress-monitoring scores came from—or whether they're accurate—how can they truly own their learning?
So what do students really need to know about tests? Well, I recently received an e-mail from a reader in Arizona who asked me to lay out "a modest set of basic concepts about assessment that all students should know." Here's what I suggested.
First, start off with the fundamental function of educational assessment: namely, to collect overt evidence from students that permits teachers, parents, and students alike to make inferences about students' covert knowledge, skills, and affect. Let students know that assessment validity refers to those inferences—and not to the tests themselves. If students realize that frequently fallible grown-ups make inferences about students' learning progress—or lack of it—on the basis of students' test scores, they'll be less likely to ascribe unwarranted accuracy to the scores yielded by what someone has erroneously told them is "a valid test."
Perhaps highest on my list of "must-know" measurement truths is assessment's imprecision. Many students believe that because tests generate scores that have been subjected to sophisticated statistical analyses, those scores must be remarkably accurate. That's just not so! Students should regard test scores only as approximations, never as unerringly precise representations of someone's achievement. Indeed, remind students of the following adage: "If you torture statistics long enough, they'll confess to anything!"
To drive home the point about assessment imprecision, students should know just a bit about the concept of assessment reliability. In most instances, students probably don't need to be able to distinguish among the different types of reliability evidence that measurement mavens rely on to show that their tests measure with consistency. However, it would be helpful for students to know that meaningful differences exist among the kinds of reliability evidence they may encounter. For instance, in the case of students being asked to complete two versions of the same test at different times, is there evidence (called alternate-form reliability) that the two forms function similarly? Or, in the case of students being asked to retake the same exam they took several weeks earlier, is there evidence (called stability reliability) that the two exam administrations are likely to measure student learning in the same way?
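For teachers who want to show students what such reliability evidence actually looks like, here is a minimal sketch in Python, using invented scores for a handful of students. Both kinds of evidence named above boil down to a correlation between two sets of scores from the same students, and the same reliability figure also feeds the standard error of measurement, which quantifies the imprecision discussed earlier.

```python
# A minimal sketch, with invented scores for seven students, of how
# the two kinds of reliability evidence named above are quantified:
# as a correlation between two score sets from the same students.
import math
from statistics import correlation  # available in Python 3.10+

form_a = [14, 18, 11, 20, 16, 9, 17]   # scores on test form A
form_b = [13, 19, 12, 19, 15, 10, 18]  # scores on a parallel form B
retest = [15, 17, 11, 20, 17, 8, 16]   # form A again, weeks later

# Alternate-form reliability: do two versions of the test rank
# students similarly?
print(f"alternate-form reliability: {correlation(form_a, form_b):.2f}")

# Stability (test-retest) reliability: does the same test measure
# consistently across two administrations?
print(f"stability reliability:      {correlation(form_a, retest):.2f}")

# The textbook standard error of measurement turns a reliability
# coefficient into a plausible band around any observed score:
#     SEM = SD * sqrt(1 - reliability)
# The standard deviation and reliability below are hypothetical.
sd, reliability, observed = 15.0, 0.90, 72.0
sem = sd * math.sqrt(1 - reliability)
print(f"a score of {observed:.0f} is plausibly "
      f"{observed - sem:.0f}-{observed + sem:.0f}")  # roughly 67-77
```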
Students definitely need to know how to interpret commonly employed reporting techniques for standardized tests—including percentiles and scale scores. Most of today's educators know what a percentile is, of course, but scale scores are less well understood. Scale scores are born when students' number-correct scores are translated (with or without severe statistical massaging) into a brand-new, arbitrary numerical scale, such as one ranging from 200 to 400 or from 1,200 to 1,800 score points. Because scale scores are commonly in use these days, students need to know where a scale comes from in the first place and what the scale really represents. Many scale scores signify a student's status with respect to a general amalgam of skills and knowledge, such as a student's "mathematical understanding" or "reading comprehension." Students should understand that most of today's assessment scales provide essentially no diagnostic information about what students do and don't know.
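To make the arbitrariness of such scales concrete, here is a deliberately simplified sketch of how a raw, number-correct score might be mapped onto one of them. Operational testing programs typically use much heavier statistical machinery (item response theory and equating), so treat this linear conversion as an illustration of the idea rather than any real test's method.

```python
# A deliberately simplified illustration of converting a raw,
# number-correct score into a scale score on an arbitrary scale.
# Real programs usually apply item-response-theory-based scaling;
# the point here is only that the resulting number is arbitrary.

def to_scale_score(raw: int, max_raw: int,
                   low: int = 200, high: int = 400) -> int:
    """Linearly map a raw score (0..max_raw) onto [low, high]."""
    return round(low + (raw / max_raw) * (high - low))

# The same 42-of-60 performance lands on two different-looking scales:
print(to_scale_score(42, 60))               # 340 on a 200-400 scale
print(to_scale_score(42, 60, 1200, 1800))   # 1620 on a 1,200-1,800 scale
```

Notice that neither 340 nor 1,620 says anything about which items the student missed; the scale's range is simply a design choice.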

What's Up with Aptitude and Labels?

Because the predictive accuracy of aptitude tests plays such a prominent role in many students' futures, especially when college admission exams are involved, students need to understand that scores on such aptitude tests account for only about 25 percent of the variation in students' subsequent college grades. (Such tests typically correlate with college grades at roughly .50, and squaring that correlation gives the proportion of grade variation the scores can explain.) A student's motivation, study habits, and effort account for much more.
Also, students should learn how test scores are transformed into teacher-dispensed grades and state-dispensed labels, such as "below basic" or "proficient." Students need to understand the basics of what happens during that transformation process—and to recognize its limitations. For example, a teacher might describe to students the exact procedures state officials used to determine specific performance categories ("advanced," for instance) on their state's annual accountability tests. Reports of those standard-setting procedures can usually be obtained from state authorities. Students would quickly see that human judgment, not numbers, dominates such procedures.
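As a concrete, deliberately simplified picture of that transformation, the sketch below maps scale scores onto performance labels using cut scores. The cut points here are invented; in a real accountability system they come from a judgmental standard-setting panel, which is why human judgment dominates the process.

```python
# A deliberately simplified picture of how scores become labels.
# These cut scores are invented; in practice they are set by a
# standard-setting panel of human judges, not derived from the data.
CUTS = [("advanced", 350), ("proficient", 300),
        ("basic", 250), ("below basic", 0)]  # (label, lowest qualifying score)

def label_for(scale_score: int) -> str:
    """Return the performance label whose cut score the score meets."""
    for label, cut in CUTS:
        if scale_score >= cut:
            return label
    return CUTS[-1][0]  # safety net for scores below every cut

print(label_for(349))  # "proficient"
print(label_for(350))  # "advanced": a one-point swing changes the label
```

A one-point movement of a cut score, decided by a panel, can shift a student from one label to another.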
Any school's faculty could sit down with these few concepts as openers and improve on the list by adding or deleting according to their preferences. Once a school's teachers and administrators agree on various assessment concepts that students should learn, it is a simple matter to divvy up the concepts for teachers to either introduce or review at specific grade levels. This push for students' assessment literacy should really take very little time.
Student ownership of learning requires that students understand the evidence used to signify whether learning is taking place. Admittedly, to help students reach this understanding, teachers will first need to acquire sufficient assessment know-how themselves. But having more teachers understand a modest collection of assessment notions can only benefit our schools.

James Popham is Emeritus Professor in the UCLA Graduate School of Education and Information Studies. At UCLA he won several distinguished teaching awards, and in January 2000, he was recognized by UCLA Today as one of UCLA's top 20 professors of the 20th century.

Popham is a former president of the American Educational Research Association (AERA) and the founding editor of Educational Evaluation and Policy Analysis, an AERA quarterly journal.

He has spent most of his career as a teacher and is the author of more than 30 books, 200 journal articles, 50 research reports, and nearly 200 papers presented before research societies. His areas of focus include student assessment and educational evaluation. One of his recent books is Assessment Literacy for Educators in a Hurry.

From our issue: Giving Students Ownership of Learning