by Susan M. Brookhart
Constructing an assessment always involves these basic principles:

1. Specify clearly and exactly the kind of thinking, about what content, you wish to see evidence for.
2. Design performance tasks or test items that require students to use the targeted thinking and content knowledge.
3. Decide what you will take as evidence that students have, in fact, exhibited this kind of thinking about the appropriate content.

This general three-part process applies to all assessment, including assessment of higher-order thinking. Assessing higher-order thinking almost always involves three additional principles:

1. Use introductory material or allow access to resource material.
2. Use novel material.
3. Attend separately to cognitive complexity and difficulty.
The first part of this chapter briefly describes the general principles that apply to all assessment, because without those, assessment of anything, including higher-order thinking, fails. The second section expands on the three principles for assessing higher-order thinking. A third section deals with interpreting student responses when assessing higher-order thinking. Whether you are interpreting work for formative feedback and student improvement or scoring work for grading, you should look for qualities in the work that are signs of appropriate thinking.
Begin by specifying clearly and exactly the kind of thinking, about what content, you wish to see evidence for. Check each learning goal you intend to assess to make sure that it specifies the relevant content clearly, and that it specifies what type of performance or task the student will be able to do with this content. If these are less than crystal clear, you have some clarifying to do.
This is more important than some teachers realize. It may seem like fussing with wording. After all, what's the difference between "the student understands what slope is" and "the student can solve multistep problems that involve identifying and calculating slope"? It's not just that one is wordier than the other. The second one specifies what students are able to do, specifically, that is both the target for learning and the way you will organize your assessment evidence.
If your target is just a topic, and you share it with students in a statement like "This week we're going to study slope," you are operating with the first kind of goal ("the student understands what slope is"). Arguably, one assessment method would be for you to ask students at the end of the week, "Do you understand slope now?" And, of course, they would all say, "Yes, we do."
Even with a less cynical approach, suppose you were going to give an end-of-week assessment to see what students knew about slope. What would you put on it? How would you know whether to write test items or performance tasks? One teacher might put together a test with 20 questions asking students to calculate slope using the point-slope formula. Another teacher might ask students to come up with their own problem situation in which finding the slope of a line is a major part of the solution, write it up as a small project, and include a class demonstration. These divergent approaches would probably result in different appraisals of students' achievement. Which teacher has evidence that the goal was met? As you have figured out by now, I hope, the point here is that you can't tell, because the target wasn't specified clearly enough.
Even with the better, clearer target—"The student can solve multistep problems that involve identifying and calculating slope"—you still have a target that's clear to only the teacher. Students are the ones who have to aim their thinking and their work toward the target. Before studying slope, most students would not know what a "multistep problem that involves identifying and calculating slope" looks like. To really have a clear target, you need to describe the nature of the achievement clearly for students, so they can aim for it.
In this case you might start with some examples of the kinds of problems that require knowing the rate of increase or decrease of some value with respect to the change in some other value. For example, suppose some physicians wanted to know whether and at what rate the expected life span for U.S. residents has changed since 1900. What data would they need? What would the math look like? Show students a few examples and ask them to come up with other scenarios of the same type until everyone is clear what kinds of thinking they should be able to do once they learn about slope.
Design performance tasks or test items that require students to use the targeted thinking and content knowledge. The next step is making sure the assessment really does call forth from students the desired knowledge and thinking skills. This requires that individual items and tasks tap intended learning, and that together as a set, the items or tasks on the assessment represent the whole domain of desired knowledge and thinking skills in a reasonable way.
Here's a simple example of an assessment item that does not tap intended learning. A teacher's unit on poetry stated the goal that students would be able to interpret poems. Her assessment consisted of a section of questions matching poems with their authors, a section requiring the identification of rhyme and meter schemes in selected excerpts from poems, and a section asking students to write an original poem. She saw these sections, rightly, as respectively tapping the new Bloom's taxonomy levels of Remember, Apply, and Create in the content area (poetry), and thought her assessment was a good one that included higher-order thinking. It is true that higher-order thinking was required. However, if you think about it, none of these items or tasks directly tapped students' ability to interpret poems.
Plan the balance of content and thinking with an assessment blueprint. Some sort of planning tool is needed to ensure that a set of assessment items or tasks represents the breadth and depth of knowledge and skills intended in your learning target or targets. The most common tool for this is an assessment blueprint. An assessment blueprint is simply a plan that indicates the balance of content knowledge and thinking skills covered by a set of assessment items or tasks. A blueprint allows your assessment to achieve the desired emphasis and balance among aspects of content and among levels of thinking. Figure 1.1 shows a blueprint for a high school history assessment on the English colonies.
Figure 1.1. Blueprint for a High School History Assessment on the English Colonies

| Content Outline | Remember | Understand | Apply | Analyze | Total |
| Founding of English colonies | Identify names, dates, and events. 10 points (100%) | | | | 10 points (10%) |
| Government of English colonies | Define proprietary, royal, and self-governing. 5 points (20%) | Describe the function of governors and legislatures in each colony. 10 points (40%) | Explain how the governments of the colonies effectively foreshadowed and prepared colonists for the American Revolution. 10 points (40%) | | 25 points (25%) |
| Life in English colonies | | Describe the roles of religion, work, climate, and location in colonial life. 15 points (100%) | | | 15 points (15%) |
| Relations with Native Americans | 5 points (20%) | | | Explain how colonial relations with Native Americans were influenced by land, food and resources, political events, and the French. 20 points (80%) | 25 points (25%) |
| Trade, commerce, and navigation | Identify goods and resources produced in the colonies. Define the mercantile theory of trade. 5 points (20%) | Describe British trade and navigation acts. Describe the triangular trade, including its role in slavery. 5 points (20%) | Explain how salutary neglect benefited all parties involved. 15 points (60%) | | 25 points (25%) |
| TOTAL | 25 points (25%) | 30 points (30%) | 25 points (25%) | 20 points (20%) | 100 points (100%) |

(Percentages within each row are shares of that topic's points; percentages in the Total column and the TOTAL row are shares of the 100-point test.)
The first column (Content Outline) lists the major topics the assessment will cover. The outline can be as simple or as detailed as you need to describe the content domain for your learning goals. The column headings across the top list the classifications in the Cognitive domain of the revised Bloom's taxonomy. Any other taxonomy of thinking (see Chapter 2) could be used as well.
The cells in the blueprint can list the specific learning targets and the points allocated for each, as this one does, or simply indicate the number of points allocated, depending on how comprehensive the content outline is. You can also use simpler blueprints, for example, a content-by-cognitive-level matrix without the learning targets listed. The points you select for each cell should reflect your learning target and your instruction. The example in Figure 1.1 shows a 100-point assessment to make the math easy. Each time you do your own blueprint, use the intended total points for that test as the basis for figuring percents; it will not often be exactly 100 points.
Notice that the blueprint allows you to fully describe the composition and emphasis of the assessment as a whole, so you can interpret it accurately. You can also use the blueprint to identify places where you need to add material. It is not necessary for every cell to be filled. The important thing is that the cells that are filled reflect your learning goals. Note also that the points in each cell do not all have to be 1-point test items. For example, the 10 points in the cell for explaining how colonial governments helped prepare citizens for participation in the American Revolution could be one 10-point essay, two 5-point essays, or any combination that totals 10 points.
A blueprint helps ensure that your assessment and the information about student achievement that comes from it have the emphasis you intend. In the assessment diagrammed in Figure 1.1, three topic areas (government, relations with Native Americans, and trade—25 percent each) have more weight than colonial life (15 percent) or the founding of the colonies (10 percent). You can plan what percentage of each topic area is allocated to what level of thinking from the points and percentages within the rows. And the total at the bottom tells you the distribution of kinds of thinking across the whole assessment. In Figure 1.1, 55 percent of the assessment is allocated to recall and comprehension (25 percent plus 30 percent), and 45 percent is allocated to higher-order thinking (20 percent plus 25 percent). If the emphases don't come out the way you intend, it's a lot easier to change the values in the cells at the planning stage than to rewrite parts of the assessment later on.
In fact, blueprints simplify the task of writing an assessment. The blueprint tells you exactly what kind of tasks and items you need. You might, when seeing a blueprint like this, decide that you would rather remove one of the higher-order thinking objectives and use a project, paper, or other performance assessment for that portion of your learning goals for the unit, and a test to cover the rest of the learning goals. So, for example, you might split off the question in the cell Analyze/Native Americans and assign a paper or project for that. You could recalculate the test emphases to reflect an 80-point test, and combine the project score with the test score for the final grade for the unit.
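Because a blueprint's emphases are just cell or row points divided by the test total, this recalculation is simple arithmetic. Here is a minimal sketch in Python (the topic labels and point values follow Figure 1.1; the helper name is illustrative, not from this book):

```python
# Minimal sketch of blueprint arithmetic using the Figure 1.1 topic totals.

def emphases(points):
    """Return each topic's share of the test as a percentage of total points."""
    total = sum(points.values())
    return {topic: round(100 * p / total, 2) for topic, p in points.items()}

blueprint = {
    "Founding": 10,
    "Government": 25,
    "Colonial life": 15,
    "Native Americans": 25,
    "Trade": 25,
}
print(emphases(blueprint))
# {'Founding': 10.0, 'Government': 25.0, 'Colonial life': 15.0,
#  'Native Americans': 25.0, 'Trade': 25.0}

# Split off the 20-point Analyze/Native Americans cell as a paper or project,
# leaving an 80-point test, and recompute the emphases.
blueprint["Native Americans"] -= 20
print(emphases(blueprint))
# {'Founding': 12.5, 'Government': 31.25, 'Colonial life': 18.75,
#  'Native Americans': 6.25, 'Trade': 31.25}
```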
Plan the balance of content and thinking for units. You can also use this blueprint approach for planning sets of assessments (in a unit, for example). Cross all the content for a unit with cognitive levels, then use the cells to plan how all the assessments fit together. Information about student knowledge, skills, and thinking from both tests and performance assessments can then be balanced across the unit.
Plan the balance of content and thinking for rubrics. And while we're on the subject of balance, use blueprint-style thinking to examine any rubrics you use. Decide on the balance of points you want for each criterion, taking into account the cognitive level required for each, and make sure the whole that they create does indeed reflect your intentions for teaching, learning, and assessing. For example, a common rubric format for written projects in many subjects assesses content completeness and accuracy, organization/communication, and writing conventions. If each criterion is weighted equally, only one-third of the project's score reflects content. Evaluating such a rubric for balance might lead you to decide to weight the content criterion double. Or it might lead you to decide there was too much emphasis on facts and not enough on interpretation, and you might change the criteria to content completeness and accuracy, soundness of thesis and reasoning, and writing conventions. You might then weight the first two criteria double, leading to a score that reflects 80 percent content (40 percent each for factual information and for higher-order thinking) and 20 percent writing.
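The weighting arithmetic is the same blueprint-style division. A small sketch in Python, using the criterion names from the example above (the weights are the doubled values just described):

```python
# Check rubric balance: double weight on the two content criteria.

weights = {
    "content completeness and accuracy": 2,
    "soundness of thesis and reasoning": 2,
    "writing conventions": 1,
}

total = sum(weights.values())
for criterion, weight in weights.items():
    print(f"{criterion}: {100 * weight / total:.0f}% of the score")
# content completeness and accuracy: 40% of the score
# soundness of thesis and reasoning: 40% of the score
# writing conventions: 20% of the score
```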
Decide what you will take as evidence that the student has, in fact, exhibited this kind of thinking about the appropriate content. After students have responded to your assessments, then what? You need a plan for interpreting their work as evidence of the specific learning you intended. If your assessment was formative (that is, it was for learning, not for grading), then you need to know how to interpret student responses and give feedback. The criteria you use as the basis for giving students feedback should reflect that clear learning target and vision of good work that you shared with the students.
If your assessment was summative (for grading), then you need to design a scheme to score student responses in such a way that the scores reflect degrees of achievement in a meaningful way. We will return to the matter of interpreting or scoring student work after we present some specific principles for assessing higher-order thinking. It will be easier to describe how to interpret or score work once we have more completely described how to prepare the tasks that will elicit that work.
Put yourself in the position of a student attempting to answer a test question or do a performance assessment task. Asking "How would I (the student) have to think to answer this question or do this task?" should help you figure out what thinking skills are required for an assessment task. Asking "What would I (the student) have to think about to answer the question or do the task?" should help you figure out what content knowledge is required for an assessment task. As for any assessment, both should match the knowledge and skills the assessment is intended to tap. This book focuses on the first question, the question about student thinking, but it is worth mentioning that both are important and must be considered together in assessment design.
As the beginning of this chapter foreshadowed, using three principles when you write assessment items or tasks will help ensure you assess higher-order thinking: (1) use introductory material or allow access to resource material, (2) use novel material, and (3) attend separately to cognitive complexity and difficulty. In the next sections, each of these principles is discussed in more detail.
Use introductory material. Using introductory material—or allowing students to use resource materials—gives students something to think about. For example, student performance on a test question about Moby Dick that does not allow students to refer to the book might say more about whether students can recall details from Moby Dick than about how well they can think about them.
You can use introductory material with many different types of test items and performance assessment tasks. Context-dependent multiple-choice item sets, sometimes called interpretive exercises, offer introductory material and then one or several multiple-choice items based on the material. Constructed-response (essay) questions with introductory material are similar, except students must write their own answers to the questions. Performance assessments—including various kinds of papers and projects—require students to make or do something more extended than answering a test question, and can assess higher-order thinking, especially if they ask students to support their choices or thesis, explain their reasoning, or show their work. In this book, we will look at examples of each of these three assessment types.
Use novel material. Novel material means material students have not worked with already as part of classroom instruction. Using novel material means students have to actually think, not merely recall material covered in class. For example, a seemingly higher-order-thinking essay question about how Herman Melville used the white whale as a symbol is merely recall if there was a class discussion on the question "What does the white whale symbolize in Moby Dick?" From the students' perspective, that essay question becomes "Summarize what we said in class last Thursday."
This principle about novel material can cause problems for classroom teachers in regard to higher-order thinking. For one thing, it means that only the teacher knows for sure whether a test item or performance assessment actually assesses higher-order thinking; others outside a given classroom can't tell by looking whether or not an assessment requires higher-order thinking for that particular class. For another, the novelty of the material on an assessment is under a teacher's control. Teachers who "teach to a test" by familiarizing the students with test material intended to be novel change the nature of the assessment. However well-intentioned, this practice short-circuits the intent of the instrument to assess higher-order thinking.
Teachers should avoid short-circuiting assessments that are meant to evaluate higher-order thinking by using in class the same questions or ideas that they know will be on the test. Sometimes this is easier said than done, as students may complain—and rightly so—"we never did that before." Students should be assessed on things they were taught to do, not surprised on a test or performance assessment with tasks for which they have had no practice.
The solution is that teachers who want their students to be able to demonstrate higher-order thinking should teach it. Dealing with novel ideas, solving problems, and thinking critically should not be something students feel they "never did before." By the time students arrive at a summative assessment that requires higher-order thinking in the content domain of instruction, they should have had many opportunities to learn and practice, using other novel material.
The following example includes three versions of an assessment requiring students to discuss a theme, in this case the moral of an Aesop's fable: a multiple-choice question, an essay question, and a performance assessment. All three of the examples present the students with introductory, novel material. In this case, the material is Aesop's fable "Androcles and the Lion." Giving students the fable text means they don't need to have memorized the tale. Using a new (to them) fable means students can't rely on a previous discussion or summary of the tale. To save space, the fable is printed only once here, but it would be printed above whichever format of the question you used. You would not use all three formats, just the one most appropriate for your assessment purposes.
Androcles and the Lion
Once upon a time, a slave escaped from his master. The slave's name was Androcles. He ran into the forest and came upon a lion in distress. The lion was lying down, moaning in pain. Androcles started to run away, but the lion did not run after him. Thinking that strange, Androcles turned back. As he approached the lion, the great beast put out his paw. Androcles saw that the paw was swollen and bleeding from a huge thorn that had become embedded in it. Androcles pulled out the thorn and bandaged the lion's paw. Soon the lion was able to stand, and licked Androcles's hand like a dog. The lion took Androcles to his cave, where Androcles could hide from his master, and brought him meat to eat each day. All was well until both Androcles and the lion were captured. Androcles was sentenced to be thrown to the lion, who had not been fed for several days, as an entertainment in the arena. Many people, including the emperor, came to see the spectacle. The lion was uncaged and, eagerly anticipating a meal, charged into the arena, where Androcles was waiting. When the lion approached Androcles, however, he recognized his old friend and once again licked Androcles's hand like a dog. The emperor was surprised, summoned Androcles, and asked how this could be so. Androcles told the emperor about coming upon the lion in the forest, caring for his paw, and living in his cave. Upon hearing the tale, the emperor pardoned Androcles and freed both Androcles and the lion.
Multiple-choice question to assess reasoning about the theme
1. The theme of Aesop's fable "Androcles and the Lion" can be expressed as "Gratitude is the sign of noble souls." Choose the plot detail that best expresses the theme.
A. The emperor ordered Androcles to be thrown to the lion.
*B. The lion did not eat Androcles.
C. Androcles pulled the thorn from the lion's paw.
Brief essay question to assess reasoning about the theme
2. The theme of Aesop's fable "Androcles and the Lion" can be expressed as "Gratitude is the sign of noble souls." Explain how the fable expresses this theme.
CRITERIA for feedback or rubrics:
• Identifies the portion of the fable's plot where a noble soul expresses gratitude
• Explains clearly how that plot element expresses the theme
Performance assessment to assess reasoning about the theme
3. The theme of Aesop's fable "Androcles and the Lion" can be expressed as "Gratitude is the sign of noble souls." Write an original fable expressing the same theme. Then explain how the theme applies in a similar way to both "Androcles and the Lion" and your own fable.
All three of the tasks call for analytical thinking. All three require that students be able to reason about the fable "Androcles and the Lion" and its theme. Note, however, that the formats are not completely interchangeable. They each tap a slightly different set of skills in addition to the core analysis required for explaining the theme. The multiple-choice version requires students to identify, from choices given, the portion of the fable's plot where a noble soul expresses gratitude. The short-essay version requires students to identify, from the text of the fable, the portion of the fable's plot where a noble soul expresses gratitude and to explain their reasoning. It also, therefore, requires students to exercise some writing skills. The performance assessment version requires students to do everything the short-essay version does, plus display synthetic or creative thinking to write an analogous fable and explain that thinking. It also requires more writing than the short-essay version. Which of these you would use would depend on exactly what you wanted to assess.
Manage cognitive complexity and difficulty separately. Realizing that level of difficulty (easy versus hard) and level of thinking (recall versus higher-order thinking) are two different qualities allows you to use higher-order-thinking questions and tasks with all learners. The misconception that recall is "easy" and higher-order thinking is "hard" leads to bad results. The two most insidious ones, in my opinion, are shortchanging young students and shortchanging low achievers of any age by offering them only recall and drill assignments because they are not "ready" to do higher-order thinking. In either case, while these students are waiting for you to think they are ready, they will also learn that school is boring. They may misbehave, they may drop out, and they certainly will not learn to think well.
Thinking tasks can be easy or hard, and so can recall-level tasks. If you doubt that, consider the following examples.

Recall, easier: Who is the main character in The Cat in the Hat?

Recall, harder: Name all the characters in …

Higher-order thinking, easier: Why do you think the Cat cleaned up the house on his way out, before Mother got home?

Higher-order thinking, harder: Hamlet wrestles with a major question in his soliloquy, "O, that this too, too solid flesh would melt" in Act 1, Scene 2, Lines 131–161. What is the question in his mind, and how do you think he resolves it by the end of his soliloquy? State your interpretation of his major question and his resolution, and use evidence from the speech to support it.
There are two ways to interpret student responses to items or tasks: one is to comment on the work, and the other is to score it. For either, it is important to apply criteria about the quality of thinking exhibited in the work. In this book, I suggest criteria with each essay or performance assessment example (as shown in the example using the fable). The criteria could be either the foundation for feedback or the basis for rubrics, or both, depending on how you used the assessment. The important points are that the criteria match your learning targets, and that progress against those criteria means learning.
Observing and discussing student reasoning directly can be a powerful way to assess higher-order thinking. Give students an assessment, and use it formatively. Have conversations with students about their reasoning, or give substantive written feedback. The conversations and feedback should be based on your learning target and criteria. Exactly what sort of thinking were you trying to assess? How should students interpret the quality of their thinking? What are some ways they might extend or deepen that thinking?
Here is an example from Robert Danka, an 8th grade mathematics teacher at Kittanning High School in Pennsylvania. He was familiarizing his students with the kind of open-ended math problems that might appear on the Pennsylvania System of School Assessment (PSSA) test. Open-ended PSSA items include phrases such as "Show all your work" and "Explain why you did each step." To do that, students need first to be able to identify the problem. Here is one part of one of the sample problems Robert used:
The Gomez family is taking a trip from Kittanning [Pennsylvania] to Atlanta, Georgia. The trip is 744 miles. They are leaving at 6 a.m. and would like to arrive at 6 p.m. How fast would they have to drive in order to arrive on time? Show and explain your work.
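For reference, the arithmetic the problem calls for: 6 a.m. to 6 p.m. is 12 hours of driving, and solving distance = rate × time for the rate gives 744 ÷ 12 = 62 miles per hour. A quick check in Python:

```python
# d = r * t, solved for the rate r. 6 a.m. to 6 p.m. is 12 hours.
distance_miles = 744
time_hours = 12
rate = distance_miles / time_hours
print(rate)  # 62.0, i.e., 62 miles per hour
```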
The major purpose for using this problem was to help students appraise the quality of their explanations of math problem solving, a formative purpose. These skills would help the students on the PSSA, a summative evaluation. This teacher gave students feedback on both the correctness of their answers and the quality of their explanations. Although it may seem automatic to the adults reading this chapter, identifying the problem as a distance problem that requires division is an important skill. Figure 1.2 reproduces two student responses for just the portion of the Gomez family trip problem I have used as an example.
For Student 1, Robert wrote, "This is correct, but explain why you divided—what are you looking to find? Your explanations are improving—continue to include every piece of data in the explanation." He noticed and named one strategy (including data in the explanation) that the student had been working on and did successfully, and gave one suggestion for improvement (provide a rationale for using division). Both of these would help the student make his reasoning more transparent to a reader, and would also help with the state test expectations for explaining reasoning.
For Student 2, this teacher wrote next to d = rt, "Good use of the formula!" Next to the explanation, he wrote, "62 __ ? Please refer to the question to display the units! Good explanation!" He noticed and named one specific strength (use of the formula) and made one general comment (good explanation) and one specific suggestion for improvement (specify the units).
A complex task requiring higher-order thinking can be subverted by a scoring scheme that gives points only for facts reported. Conversely, scoring the quality of students' reasoning on even some very simple tasks can assess higher-order thinking. For summative assessment of how students use higher-order thinking—for graded tests and projects—a scoring scheme must be devised in such a way that higher-order thinking is required to score well. This requirement means that soundness of thinking must figure into the criteria from which the rubric is developed. Some rubrics or other scoring schemes attend mainly to surface features or merely count the number of correct facts in students' responses. Such scoring schemes can turn an exercise in which students did use higher-order thinking into a score that doesn't reflect the thinking students did.
Multiple-choice questions. Multiple-choice questions would typically be scored with one point for a correct choice and no points for an incorrect choice. The "thinking" is encoded into the choosing. It is worth reminding readers here that for the resulting scores to mean that students use higher-order thinking, the questions have to be designed so that higher-order thinking really is required to answer.
Constructed-response and essay questions. For constructed-response answers to questions designed to tap various kinds of reasoning, often a rubric with a short scale will work well. Start with the criterion, the type of thinking you intended to assess. For example, ask, "Does the student weigh evidence before making decisions?" or "Does the student appropriately evaluate the credibility of the source?" Then use a scale that gives partial credit depending on the quality of the reasoning.
Here is an example of a task a 9th grade science teacher used to assess students' understanding of chemical and physical changes. Students had observed demonstrations about ice floating in water, then melting, and had drawn diagrams of the molecular structure. Then pairs of students were given cards with everyday events. They were to sort them into two categories, physical change and chemical change, and explain why they put them where they did. Then they were to write what they learned about physical and chemical changes. In passing, I should mention that this exercise sparked some interesting student higher-order thinking beyond simple categorizing and inductive thinking. For example, one student asked, "Is cutting the grass a chemical or physical change, if you consider that the cut part of the grass dies?"
Here is an example of a scoring scheme that could be used with the 9th grade science class example of physical and chemical changes. I list the scale as 2-1-0, but it could also be 3-2-1, or 6-4-2, or whatever weight is appropriate for other scores with which it needs to be combined for a particular test or grade composite score.
Did the student reason inductively from the examples to arrive at a clear, accurate description of physical and chemical changes?
2 = Completely and clearly—Response gives clear evidence of reasoning from the examples.
1 = Partially—Response is accurate, but reasoning from examples isn't clear or is only partial.
0 = No—Response does not demonstrate reasonable conclusions from the examples.
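Rescaling this 2-1-0 scale to 3-2-1 or 6-4-2, as mentioned above, is a linear transformation (multiply each score, then shift it) applied before the rubric score is combined with other points. A brief sketch in Python; the composite values are illustrative, not from the example:

```python
# Rescale a 2-1-0 rubric score with a linear transform a*score + b,
# then fold it into a larger point total. Values are illustrative.

def rescale(score, a, b):
    return a * score + b

print([rescale(s, 1, 1) for s in (2, 1, 0)])  # [3, 2, 1]  (3-2-1 scale)
print([rescale(s, 2, 2) for s in (2, 1, 0)])  # [6, 4, 2]  (6-4-2 scale)

# Combine the rescaled reasoning score with points from other items.
other_item_points = 14            # illustrative
composite = other_item_points + rescale(2, 2, 2)
print(composite)                  # 20
```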
Figure 1.3 presents responses from three student pairs. Each pair was to list one example each of a physical and a chemical change and then write a paragraph explaining what they had learned about physical and chemical changes from their inductive reasoning. Response 1 would score a 0. The teacher did not think that these students showed any evidence of having figured out differences between physical and chemical changes based on sorting the examples. Response 2 would score a 1. These students' statement about molecular structure is correct, but as the teacher commented, "Textbook response, got the concept but I'm not sure if it was from discussion." The response does not allow us to conclude much about their reasoning. Response 3 would score a 2. In fact, the teacher was very pleased and said, "Not an answer I would expect, but they really got the concept."
Performance assessments. Analytical rubrics are often used for scoring performance assessments, papers, and projects. The quality of thinking demonstrated in the work should figure prominently in at least one of the rubric trait scales. Teachers can write their own rubrics or select a rubric for use from among the many that are available on the Internet or in curriculum materials. An Internet search for "problem-solving rubrics," for example, yielded 85,500 results.
Before you use rubrics from the Internet or from curriculum materials, make sure they are good ones that will help you communicate clearly. Select or write rubrics that are appropriate to the content and thinking skills you intend to assess and that are appropriate for the educational development of your students. Select or write rubrics that describe qualities (e.g., "reasoning is logical and thoughtful") rather than count things (e.g., "includes at least three reasons").
It is particularly helpful if the same general thinking or problem-solving scheme can be applied to several different assignments. Students will learn that the thinking and reasoning qualities described in the rubric are their "learning target" and can practice generalizing them across the different assignments. The general rubrics can be used with each assignment or can be made specific for that assignment.
Excellent examples of problem-solving rubrics for different purposes exist, and many are available on the Internet. The NWREL Mathematics Problem-Solving Scoring Guide uses five criteria: conceptual understanding, strategies and reasoning, computation and execution, communication, and insights. Descriptions of performance on each criterion are given for each of four levels: emerging, developing, proficient, and exemplary. The rubric is available online.
In 2008, the School-wide Rubrics Committee at Lincoln High School in Lincoln, Rhode Island, developed several rubrics for teachers and students to use, many based on the National English Language Arts Standards. One of the schoolwide rubrics is for problem solving and is available online at www.lincolnps.org/HighSchool/rubrics/Problem-Solving%20School-wide%20Rubric.pdf. This rubric describes five criteria: understands the problem and devises a plan, implements a plan, reflects on results, creates an organizing structure, and demonstrates understanding of written language conventions (when appropriate). The rubric describes performance at each of four levels: exceeds standard, meets standard, nearly meets standard, and below standard.
The state of Kentucky uses an open-ended scoring guide for problem solving in mathematics, social studies, science, and arts and humanities. This general rubric can be defined in more specific detail for specific assessment items or tasks. The advantage of using such a general framework as the basis for scoring all kinds of work is that students will come to see the types of thinking expected in the general rubric as learning goals. They will be able to practice and work consistently toward these achievement outcomes. This rubric is holistic, which means it uses one overall scale to rate performance. It is called the Kentucky General Scoring Guide and is contained in each booklet of released items for the Kentucky Core Content Test (KCCT). Links to released items for different content areas and grade levels are available online.
There are many more excellent rubrics to be found. Use your own evaluative skills when searching for and appraising rubrics for your own purposes. When you select rubrics to use or to adapt for use, make sure that the criteria (scoring categories or traits) match the problem-solving or other skills you value and teach. Make sure that the descriptions of quality in each of the levels (proficient, and so on) match the expectations for performance in your school and district. Use rubrics not only for scoring summative assessments, but for instruction and formative assessment as well. For example, students can use rubrics to analyze the qualities of work samples, to self-assess drafts of their own work, to discuss work with peers, or to form the basis of student-teacher conferences.
This chapter discussed three general assessment principles, three specific principles for assessing higher-order thinking, and ways to interpret or score the student work from such assessments. I think of the material in this chapter as "the basics." These principles underlie all the assessment examples in the rest of the book. As you read the more specific examples in the following chapters, think of how each one works out these basic principles in the specific instance. This should help you develop the skills to apply these principles when you write your own assessments.