by W. James Popham
In the immediate instructional adjustment application of formative assessment, teachers gather assessment evidence, analyze that evidence, and change their teaching tactics (if a change is warranted) right then, within the same class session. Because immediate instructional adjustments by teachers can be based either on assessments of students' performance or on students' self-reported levels of understanding, we'll be considering each of these sorts of adjustments separately, in different chapters. The focus here, in Chapter 3, will be immediate adjustments based on data gathered via fairly conventional kinds of classroom assessments.
It is difficult to argue with the instructional virtues of immediacy; any sort of self-correcting system is certain to work better if along-the-way corrections are made as quickly as possible. If we were to fantasize about the way this particular application of formative assessment might work in a pedagogical paradise, teachers would use a medley of brief assessments to gauge students' status on a continual basis—perhaps several times during each class session— and then make instant, adroit instructional adjustments as required so students would be on the receiving end of constantly perfected, truly tailored instruction.
If the data indicate that students are getting what the teacher thinks they should be getting at this stage in the instruction, then the natural conclusion is that instruction is working pretty well. No change is required, and the class session can continue as planned. But if the evidence indicates (1) students are misunderstanding the instruction; (2) students haven't "gotten it at all" yet—that is, their pace of mastery is slower than anticipated; or (3) students have already reached the level of understanding the lesson is aiming for and have none of the questions or confusion the teacher has anticipated and planned to address, then an adjustment is necessary. It is definitely time for the teacher to invoke the hoped-for virtues of Instructional Plan B, Plan C, or whatever other approach the teacher may have ready to go. These alternate plans typically entail either more instruction or different instruction—and sometimes both. In instances when students' assessment evidence indicates they have already mastered whatever's being taught, the adjustment might even be to provide less instruction than the teacher had planned—to skip ahead to the next topic, say, rather than distribute an apparently unneeded set of skill-building experiences. Of course, as this formative assessment application's name implies, any adjustments to be made will be immediate.
But flipping from fantasy to reality, a real-world teacher must prepare for any kind of en-route evidence gathering and also be ready to make immediate instructional adjustments depending on the nature of the evidence collected. That kind of preparation takes time, and plenty of it. It means having to identify when to assess students—that is, at which point in an instructional sequence and how often. It also means figuring out how to assess students so that the data generated will support immediate and accurate diagnosis of students' learning status and then deciding what will happen instructionally, depending on the nature of the assessment results. As you can see, good teachers must be very deliberate about choosing to use this particular application of formative assessment lest they become overwhelmed and disenchanted.
There are five key choices facing teachers who are trying to decide whether to employ formative assessment for immediate instructional adjustments based on assessed performance data. Figure 3.1 provides an overview and a preview of the nature of each decision at stake.
The thrust of the immediate-adjustment application of formative assessment requires the teacher to be able to gain a very quick measurement of students' status. This means setting aside pencil-and-paper quizzes and the like in favor of evidence-gathering that can be done with just a few questions and a visual survey of student responses.
There are some fancy electronic response systems on the market capable of tallying and providing instant summaries of students' multiple-choice answers, but I want to focus on the more common (and budget friendly) techniques: (1) the thumbs-up/down procedure, (2) letter-card responses, and (3) whiteboard responses.
The thumbs-up/down procedure. This simple signaling technique involves the teacher's presenting the class with binary-choice questions—True/False or Right/Wrong test items. The teacher presents a question orally, on the chalkboard, or via a projection device. Then the students, at the teacher's prompt, display their answer by holding a thumb up (for "True") or a thumb down (for "False") in front of their chests, where the teacher can see the signal but classmates cannot. The teacher then scans the direction of students' extended thumbs and takes a rough tally of how many students have answered correctly. Based on the ratio of correct responses to incorrect responses, the teacher decides whether an instructional adjustment is in order.
A clear advantage of the thumbs-up/down procedure is that it doesn't require any additional materials, as the vast majority of students come equipped with at least one thumb. Because the teacher uses two-choice questions, it's usually possible to present these questions orally rather than write them on the chalkboard or present them on a display screen. However, the yes-or-no question format does have its drawbacks. Binary-choice test items, if not created with care, can easily slip into measuring nothing more than memorized information and, thus, encourage student thinking at lower rather than higher levels of cognition. Elsewhere (Popham, 2011), I have described how to create binary-choice items that elicit more than merely memorized information.
Letter-card responses. This is a straightforward technique built on students' instant responses to prepared multiple-choice questions. As its name implies, a letter-card system requires the teacher to provide students with pieces of cardstock (often standard index cards, white and approximately 5 by 8 inches in size), each printed with a large letter representing one of the response options students can make. The most common use of letter-cards involves letters A to E, and some teachers include a question-mark card students can use to indicate uncertainty.
As with the thumbs-up/down procedure, the teacher signals students when to raise their cards (so the teacher can see but other students can't), visually discerns what the students' response patterns are, and tells the students when "Cards down" is in order.
This technique's use of multiple-choice questions is one of its chief virtues. A teacher may design multiple-choice items to tease out very finely shaded pictures of understanding via answer-options that convey subtle differences in "correctness." In addition, some wrong-answer options can be structured so that when students choose certain of these "distracters," as they're often called, those responses signify a particular kind of confusion that can be addressed instructionally.
Yes, there's a bit more preparation involved with this assessment technique. Obviously, teachers must procure and create the cards, but they must also post or display each question along with all its possible answer options so that students don't have to rely on their memories to keep straight which letter is associated with which answer. Students who can't remember which letter corresponds to their intended answer won't generate accurate assessment data. Another consideration, which applies to any kind of selected-response test item, is that reliance on a letter-card technique does restrict a teacher to presenting questions for which students select an answer from presented options rather than being obliged to generate their answers "from scratch." Certain sorts of subskills really require students not merely to select a response from alternatives but, instead, to come up with an original response. We see this when students must be able to coalesce apparently disparate dollops of information into some kind of brand new cognitive casserole. To find out how well students can do such coalescing on their own, letter-cards are simply inadequate.
Whiteboard responses. This third quick-assessment technique is the answer when a teacher seeks a constructed response. Students respond to teacher questions by writing a word or a brief phrase on individual, tablet-sized, erasable whiteboards, and then present their answers in the usual way when the teacher calls for "Whiteboards up!" (Rags or old socks, preferably clean ones, can be used to erase old responses and prepare the board for its next use.) The challenge of using whiteboards is that the teacher must limit questions to those that can be answered satisfactorily through a single word or short phrase. Remember, the teacher is looking to gather data on student understanding via a quick visual scan.
To sum up, then, the questions a teacher needs to ask when facing this choice-point are "How much preparation time do I have?" and "What kind of data am I looking for—a general sense of whether students 'get it' or not, or more diagnostic information?" And, of course, although there's no reason for teachers to limit themselves to a single data-gathering approach, most teachers who adopt this particular application of formative assessment tend to employ one of the approaches more than others; they become familiar with a particular response system, and so do their students.
In choosing among these three alternatives for collecting assessment evidence, the overriding concern is always the same—namely, what sort of assessment approach will generate the data that will support the most valid inferences about students' status? If a less elaborate approach will do the assessment job well, then there's little need for a more elaborate approach.
As stated, I believe teachers must collect assessment evidence for each building block in the learning progression they are using— optimally, near the conclusion of instruction focused on the building block, but while there is still some time available for any assessment-dictated adjustments to right instructional wrongs. However, each teacher's affinity for assessment will be different. Some teachers may want to collect data on students' achievement of less significant sorts of en-route accomplishments that will contribute to building block mastery—what we might call the "subgoals" or "subcomponents" of the subskills or bodies of enabling knowledge.
A teacher's decision about how often to collect assessment evidence to inform an instant adjustment will almost always interact with the teacher's choice of how to collect that evidence. Certain data-gathering techniques are higher on the "hassle index" than others. For instance, a thumbs-up/down procedure is likely to require less preparation and administration time than a whiteboard procedure will. The "how often" decision is also affected by factors such as the students' level of background knowledge, the complexity of the subject matter at hand (more complicated subject matter means more opportunities for student-stumbles), and how much experience the teacher has with the subject matter (the better a teacher knows the material and the ways in which students are likely to misunderstand it, the less onerous the prep work required for immediate-adjustment formative assessment will be). During a five-week teaching unit, for example, some teachers may be quite satisfied to employ immediate-adjustment formative assessment only three or four times. Other teachers embarking on the same five-week unit might wish to assess students two or three times each week, focusing not only on students' mastery of a learning progression's building blocks, but also on students' grasp of numerous, less significant knowledge and skills. Here, the teacher needs to ask, "When, during the instructional sequence I have planned, would it make the most sense to make an instructional adjustment?" and "What effect would pausing to collect assessment evidence have on the planned instructional sequence?"
The third question to ask at this choice-point is one that, frankly, should trump the instructional and assessment considerations raised by the first two questions. That question is, "How willing am I to do this?" The best advice I can give to teachers here is that they must be honest about both their assessment time and inclination, and they must beware of setting an evidence-gathering schedule that is so demanding that it will lead them to abandon the entire formative assessment enterprise. Several teachers have told me about how they initially tried to use immediate-adjustment formative assessment every day in their classes. After all, it doesn't seem like such a big deal to call for thumbs-up/thumbs-down measures of understanding. But, because these folks were dedicated to doing formative assessment right, they also diligently prepared sets of instructional adjustments to have at the ready, depending on what the assessment data revealed—thus doubling or tripling their preparation burden. It wasn't too long before these teachers found themselves cutting back their formative assessment frequency on the grounds of sanity preservation.
If I were teaching high school students again and considering when to collect assessment evidence for the purpose of making immediate instructional adjustments, although I'd be influenced by instructional and assessment considerations, I'd make my decision chiefly on the basis of the third factor—namely, my own tolerance for such evidence gathering.
A teacher's task here is to figure out how many assessment items are enough to generate good data on students' mastery status relative to the subskill or body of enabling knowledge being assessed. The two key factors to consider are the building block's breadth (its grain size) and the relative importance of that building block to a student's overall mastery of the target curricular aim. Although all building blocks in a learning progression are crucial, some are "more crucial" than others. The larger the grain size of the building block in question, and the more crucial it is to mastery, the more items the teacher will need to make a sound adjust-or-don't-adjust decision.
The rationale for using more items for larger building blocks is a simple one: with more content subsumed in a building block, there is more content mastery to measure and, thus, more items to write. By contrast, a teacher might measure mastery of a small-grained building block covering an important but simple concept with a single, well-crafted question.
The exact number of items to include in an assessment is a teacher's judgment call. If the teacher thinks that one or two items are too few, but that three or four items will provide a sufficiently accurate estimate of students' status, then the teacher should opt for the larger number of items. Here, the question to ask is, "How much data would I need to be comfortable deciding that a student either has or has not mastered this building block?"
In TA1, I recommended that before collecting evidence of students' status, teachers should carefully think through what level of performance would signal, "Instructional adjustment is necessary." (Setting adjustment triggers after data collection tends to increase the frequency of concluding that students' performance is "good enough.")
The trick to formulating an adjustment trigger is to first set a required level for individual student performance and then spell out a required level for the whole class. To illustrate, a teacher might decide that the baseline of individual mastery for a certain subskill will be providing a correct thumbs-up/thumbs-down response to five of six questions, and that the adjustment trigger will be three-fourths of the class failing to reach this mastery level. In other words, the teacher might make a note that "if at least three-fourths of the class do not answer at least five of the six items correctly, I will make an instructional adjustment."
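For readers who find a concrete calculation helpful, the two-level trigger just described can be sketched as a small function. This is an illustrative sketch only: the 5-of-6 individual mastery level and the three-fourths class trigger come from the example above, while the function and parameter names are assumptions introduced here, not anything from the formative assessment literature.

```python
def adjustment_needed(correct_counts, mastery_minimum=5, trigger_fraction=0.75):
    """Decide whether an immediate instructional adjustment is warranted.

    correct_counts: one entry per student -- the number of items that
    student answered correctly on the quick classroom assessment.
    A student reaches mastery by answering at least mastery_minimum
    items correctly; the adjustment is triggered when at least
    trigger_fraction of the class falls short of that mastery level.
    """
    non_masters = sum(1 for n in correct_counts if n < mastery_minimum)
    return non_masters >= trigger_fraction * len(correct_counts)

# In a class of 20, suppose 16 students answer only 4 of 6 items correctly
# and 4 students answer all 6 correctly:
print(adjustment_needed([4] * 16 + [6] * 4))  # True: 16/20 is at least 3/4
```

Note that both thresholds are deliberately explicit parameters: as the chapter stresses, more crucial building blocks deserve more stringent individual and whole-class performance levels, so a teacher's "tipping points" will vary from assessment to assessment.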
The guiding principle in determining adjustment-trigger levels flows from the relative instructional importance of whatever subskill or body of knowledge the assessment is measuring. Really important subskills and enabling knowledge need more stringent performance levels than less important ones do—both in terms of the performance level established for an individual student and with regard to the "percent successful" performance level set for the entire group of students.
Ultimately, it all comes down to this. Remember, the essence of this application of formative assessment is the teacher's making an immediate instructional adjustment. The instantaneity of the adjustment means that the teacher must have planned what to do and be ready to implement that adjusted instruction immediately.
Margaret Heritage, a colleague of mine at UCLA, works frequently with teachers who are trying to employ formative assessment. She finds that the most difficult choice they face is figuring out what to do after assessment evidence has shown them that an instructional adjustment is needed (Heritage, Kim, Vendlinski, & Herman, 2009). It seems, therefore, that teachers might wish to prepare a template filled in with a variety of fix-it possibilities that make instructional sense to them. This template might be similar to Figure 3.2, which identifies potential sources of students' confusion a teacher ought to consider when deciding how to make adjustments if assessment evidence reveals less-than-effective instruction.
Figure 3.2. Possible Sources of Confusion and Sample Options for Instructional Adjustments

Possible source of confusion: My explanation of the key content was not clear.
Sample adjustments: Provide a new explanation using
… different metaphors or examples.
… a different mode of instruction (e.g., visual/aural).

Possible source of confusion: Students don't understand what they're supposed to be working toward.
Sample adjustments: Clarify the intended outcome of instruction by
… talking about the nature of the target curricular aim.
… focusing on evaluative criteria that will be used to judge the quality of their performance.

Possible source of confusion: My learning progression has shortcomings.
Sample adjustments: Review prior learning progression choices by
… determining if any earlier key building blocks have been omitted.
… deciding if adjustment-trigger levels for previous building blocks were too low.

Possible source of confusion: Students have not received adequate modeling.
Sample adjustments: Provide additional modeling by
… distributing and using a detailed rubric or performance model.
… conducting a demonstration.
… assigning heterogeneous pairwork in which a struggling student works with a buddy who has mastered the content.

Possible source of confusion: Students have not had sufficient time on task.
Sample adjustments: Provide additional guided or independent practice by
… setting small-group practice activities.
… assigning new homework problems.
The adjustment possibilities in this figure are illustrative, not definitive. In truth, the diversity of curricular aims, the composition of student groups, teachers' personal pedagogical styles, and a carload of contextual differences make it altogether impossible to prescribe what teachers should do when adjusting unsuccessful instruction. It is always possible that factors other than those captured in the figure will contribute to a given group of students' misconceptions and to less-than-satisfactory mastery of whatever the teacher has assessed. But let me stress again that a teacher employing this particular brand of formative assessment needs to have all possible Plans B, C, and so on prepared and ready to roll out if needed. A set of ideas such as those in the figure, informed by the teacher's own past pedagogical experiences, can provide useful starting points for these planning efforts.
Having introduced you to the key ingredients in this application of the formative assessment process, let's look now at a teacher who is using it in her classroom.
Alice Olson teaches world and U.S. history at Edison High School, a large urban secondary school in New England that draws students from a wide range of socioeconomic strata and ethnicities. This is Alice's sixth year of teaching and her fourth year at Edison. Although several of her colleagues use formative assessment in their classes, Alice is regarded by the rest of the faculty as the school's most knowledgeable user of formative assessment.
This year, Edison High School has a brand new principal, Kevin Todd. By his own admission, Kevin possesses almost no experience working with teachers who employ the formative assessment process, and he's eager to educate himself on its use. Alice has agreed to have one of her classes videotaped so that her principal can see formative assessment in action. This is no big deal, she assures him; at Edison, classes are routinely videotaped by students in the school's advanced technology program so that various lessons and instructional approaches can be reviewed by faculty members working in professional learning communities. Because of the considerable frequency with which these video recordings are made, most students have become quite inured to them and rarely "play to the camera."
So the following Tuesday morning, during a lesson about the Civil War, a video recording is made of Alice's second period 11th grade U.S. history class. Copies of the video are delivered to Principal Todd and Alice by Wednesday afternoon. They both watch the video prior to their after-school interview on Friday, during which they review several segments together.
The video recording opens with a lively, teacher-led discussion of the potential causes of the Civil War. The questions Alice poses for her students are not fact-focused questions but a series of why-type queries, and most of the students take an active part in trying to resolve the sometimes subtle issues they present. After about 15 minutes of this open-class discussion, Alice asks her students to take out their A-B-C-D letter-cards because, as she announces, she wants to collect formative assessment evidence about certain aspects of the Civil War "as seen through your eyes."
At that point in the class session, students take a set of 6-by-9-inch cards from their desks. Alice then uses a computer-controlled projector to show five multiple-choice questions, one at a time. After students have had a chance to consider a question for a few moments, Alice says, "Cards up!" and students hold up one of their letter-cards so that it's visible to her but not to their classmates. Alice scans the room, quickly jots down a tabulation of students' responses, then says, "Cards down." This process continues for all five questions, at which point Alice looks at her notes regarding students' responses, thanks the students for their cooperation, then announces, "Based on the answers you've provided, we're going to do a little shifting in what I had previously planned."
"All right," Alice continues, "I'm passing out a copy of an editorial that appeared two years ago in the Boston Globe, but its message is every bit as pertinent today as it was then. Please read the editorial carefully, then write a summary of 25 words or less that captures the editorial's central message. We'll take 15 minutes for this, and then I'll choose three of you—using our regular random-selection procedure—to read your summaries aloud. Everyone understand? OK, then. Please begin now."
When the allotted time has passed, Alice asks for everyone's attention and then calls on three students, one after the other. The summaries they read are very similar. The essence of the editorial is that Northerners often adopt a simplistic, uninformed explanation about the cause of the Civil War, ascribing it exclusively to "the South's wanting to retain its slaves." The editorial contends that the underlying causes of the Civil War were far more complicated than merely the slavery issue, and that today's Northerners have an obligation to understand the full range of events and motives that fostered this remarkably bloody conflict.
The class session closes with students working in study groups of three or four to identify at least a half-dozen factors that led to the Civil War. Meanwhile, Alice posts written instructions for the evening's homework assignment, which she reads aloud just prior to the bell: "Tonight, please read the previously assigned textbook pages covering the root causes of the Civil War and be prepared to look more closely at what led to a conflict in which so many Southerners and Northerners lost their lives."
by Principal Kevin Todd
Principal Todd:
Alice, thanks so much for agreeing to have your class recorded and for allowing me to look at the video. To get under way, I thought you did a wonderful job with your students, and they obviously were fully engaged in what was going on. But we're here to focus on formative assessment, so let's do that. Now, I assume that formative assessment was what took place when you asked your students to respond to those five multiple-choice questions, the ones you projected on the screen. Is that right?
Alice:
Well, Kevin, yes, you're almost right! The five questions were part of the formative assessment process. It is that last word, process, that's really key. The evidence I collected from students—the evidence they supplied with their letter-card answers to the five questions— was an important part of the process. But the process continues beyond that to include my deciding, based on this evidence, whether to make an instructional adjustment and what instructional adjustment to make.
Principal Todd:
OK, yes, I saw that you initiated the activity based on the Boston Globe editorial right after you saw how the kids responded to your multiple-choice items. Did you decide to assign that activity based on those answers?
Alice:
Yes, I did. My original plan had been to spend the second half of that class session on foreign involvement in the Civil War, and if a prespecified proportion of my students had answered all of the relevant items correctly, that's exactly what I would have done. Actually, only three of the five multiple-choice questions were related to my decision to launch the editorial activity. Those were Questions 1, 2, and 3, which I wrote specifically to generate evidence about the kids' understanding of the causes of the Civil War. Questions 4 and 5 dealt with another aspect of my instruction, the initial military strength of the Northern and Southern forces. Fortunately, almost all of the students did really well on those other two questions, but if they hadn't, the adjustment I had in my pocket was to re-explain the key military-strength differences in a different way. I had the outline for that re-explanation in my notes.
As for the instructional adjustment I did make? Well, I'm from Arizona originally, and when I first moved here, to New England, I read two articles by history professors talking about a "Northern take" on the causes of the Civil War. They argued that teachers in New England needed to go the extra mile to make sure their students acquired a balanced and accurate view of this aspect of U.S. history.
Principal Todd:
Tell me more about how you could tell from the class response to those three questions that your students had this "Northern take" on the war's causes.
Alice:
There's a lot you can do with multiple-choice questions! I wrote each of these questions to include one correct answer choice—a response capturing the understanding I want my students to acquire—and three wrong answers. But for each question, I designed one of the wrong-answer options to reflect the "Northern take." Here, see, for Question 1, it was Option A; for Question 2, it was Option B; and for Question 3, it was Option C. All I did, when I scanned the room for the students' answers, was mentally calculate the approximate proportion of my students who were incorrect, and then what proportion of those erring students had chosen the Northern-take responses for those three items.
Principal Todd:
How many wrong responses and "Northern take" responses were "too many"? In other words, how did you decide what levels of misunderstanding would mean making an instructional change?
Alice:
That's a good question, and I can tell you it's sometimes really tough to decide how bad is "bad enough" to require an instructional adjustment. It's something that I spend time considering during my planning, and I jot down notes next to each question about where the tipping points are. When I first began using formative assessment, I tended to be too optimistic about whether my instruction would work the way I had planned for it to work, and I was constantly setting levels for my students that were pretty high—you know, "Ninety percent of the students will answer 90 percent of questions correctly." I found that I was in a constant state of adjustment, which was exhausting. I've gotten better at setting performance levels now. It comes with experience, I think.
Principal Todd:
What were the change-determining levels you set for those three questions we were talking about? And how about for the last two questions, the ones that students did well enough on not to trigger an adjustment?
Alice:
That's what we call them, you know, "adjustment triggers." Well, starting with your second question, you can see in my notes here that I'd decided that if at least 80 percent of my class did not answer both Question 4 and Question 5 correctly, I would make the planned adjustment, which was to provide a different kind of explanation than the one I'd given earlier. As it turned out, almost 100 percent of the class came up with the correct letter-card responses to those two questions, so there was no need for an adjustment.
On Questions 1, 2, and 3, I had decided that if two-thirds of the class did not answer all three questions correctly, then I'd look further at their wrong-answer responses, to see if more than half of those who missed those questions had opted for the incorrect "Northern-take" response. If that happened, and it definitely did, I was ready to introduce the Boston Globe editorial activity.
Principal Todd:
So if less than half of the class had gone for those Northern-take responses, you wouldn't have made that adjustment?
Alice:
I'd have made an adjustment, but not that particular adjustment, which was a response designed to address a particular misunderstanding, a particular barrier to mastery. If my students had flopped on the three key questions but had not chosen the Northern-take options in the numbers that they did, I would have gone back to re-explain the generally accepted causes underlying the Civil War.
Actually, what we're talking about now is a very significant part of making on-the-spot instructional adjustments. The teacher has to scan student responses really quickly—within seconds, really. For the three questions we've been talking about, I had to first determine the students' percent of mistakes, and then figure out how many students had opted for the Northern-take distracters. When I'm scanning the collection of letter-cards in a large class, it's about making the most accurate estimate possible and then using that data to choose the most appropriate adjustment from a set of adjustments I'm prepared to make.
Do you remember those "choose your own adventure" books? They were big when I was in elementary school. These are books in which the author presents a scenario and then asks the reader to make a choice: "You walk down the hallway of the haunted house and see two doors: Do you want to open the red door or the black door? If you choose the red door, turn to page 15; if you choose the black door, turn to page 17." Well, applying formative assessment to make immediate instructional adjustments is like writing one of those books. You have to map out the various ways students might respond, and then be prepared with the right instructional adjustment depending on how they do respond.
Principal Todd:
OK, what you just explained brings up a good question. How often do you apply formative assessment in this way? There seems to be a lot of planning and preparation involved. Surely, Alice, you can't be doing this sort of thing every day—or can you?
Believe it or not, some teachers do. It's an issue that I've personally had trouble resolving. When I first began to use the formative assessment process—that was about four years ago—I knew I wanted to take frequent readings of students' progress. I would look at my lesson plans and, based on those plans, map out some reasonable learning progressions. Then I'd see where I might collect evidence related to the mastery of each subskill and each knowledge-focused building block in those progressions. I wound up deciding I'd need to gather assessment data every two or three weeks at minimum—slightly before the conclusion of instruction based on each building block. Well, I was so gung ho about formative assessment that I wanted to do frequent monitoring. The idea was to catch kids as they were just beginning to get off course and get them back on course. So I started using the letter-card assessments for immediate instructional adjustment once a week, assessing not just the building blocks but also the subcomponents of those building blocks—the things that students needed to know and be able to do to master the building blocks! What I found, though, was that it was positively overwhelming—and so time-consuming.
So what did you do then?
I cut back. Here's the thing: I believe in the power of formative assessment, and I didn't want to burn out on it. After all, I was the one who'd set it up to be so difficult for myself. Today, I think I have a system that works for me. I still prepare assessments for each of the building blocks in any unit that will last three weeks or more. Beyond the building block tests, however, I'm a bit more restrained when it comes to collecting assessment evidence of the lesser subgoals.
But you still assess some subgoals? How do you decide which subgoals to assess?
Yes, I do, and I decide based on experience, mostly. I focus on aspects of the content that have tripped up students in the past. Actually, that's what you saw in Tuesday's lesson. I wasn't assessing students' mastery of a learning progression's building block; I was measuring a lesser understanding that I suspected would get in the way of building block mastery.
If I were to encourage more of Edison's teachers to try out formative assessment in their own classes, Alice, would you be willing to help them arrive at a reasonable way to decide when to collect assessment evidence from their students?
Absolutely! But I'd want to make it clear that the frequency of data gathering depends, in very important ways, on an individual teacher's preferences. Sure, we need to know whether kids have mastered all the along-the-way building blocks, and in this application of formative assessment—the immediate-adjustment type—we need to be ready to launch into a different instructional activity if they haven't. But beyond that required amount of assessment, formative assessment really hinges on what a teacher is personally comfortable doing. Some people may be really happy and comfortable incorporating this kind of formative assessment into every class session, but it's not the best choice for me.
One other thing I should point out, Kevin: I never grade my students on any of these letter-card assessments, and I try to help them understand that the purpose of these assessments is to help me figure out how to teach them as well as I can. It's about them supplying me with the assessment evidence I need to do a better job—for them. I think getting this take on classroom assessment across to students is a particularly positive consequence of the formative assessment process. They are, in a real sense, partnering with me to sharpen my ongoing instruction.
Interviews are akin to ink blots: different people see different things in them. You are apt to have focused on something different than I did, but I believe Alice's comments should have alerted her principal to three important considerations for a teacher trying to apply formative assessment for the purpose of immediate instructional adjustments.
First, there was the issue of how well students had to perform on an in-class assessment before a teacher could conclude that an instructional adjustment was warranted. As Alice said, this is a tough choice and, even if a decision is made in advance, the decision may still turn out to be wrong. It's a teacher's judgment call, and Alice said her earlier estimates regarding students' likely performance had been too high. Fortunately, experience is a marvelous helper here. Seasoned teachers tend to have solid expectations regarding students' necessary levels of understanding, and they can be a good source of information for less-seasoned teachers.
Second, Kevin, the principal, sensibly dug into the "How often?" question—a tricky one. Alice stressed that a range in the frequency of assessments is possible—from a minimum focusing solely on a learning progression's building blocks all the way to almost-daily assessments. Alice has apparently chosen a middle stance, in which she makes sure to assess students' building block mastery but also tries to check on mastery of certain less-important curricular aims.
Finally, what came up several times in the interview was Alice's candid concern about the danger of making formative assessment so difficult that it becomes aversive. All sorts of refinements in formative assessment will not benefit students if teachers find the formative assessment process so onerous that they never use it.
Remember, there is no single best way to make formative assessment fly. Other teachers might well have made different decisions than Alice did. But as long as teachers adhere to the core concepts of formative assessment, those teachers' students will usually be successful.
Copyright © 2011 by ASCD. All rights reserved.
No part of this publication—including the drawings, graphs, illustrations, or chapters, except for brief quotations in
critical reviews or articles—may be reproduced or transmitted in any form or by any means, electronic or mechanical,
including photocopy, recording, or any information storage and retrieval system, without permission from ASCD.