November 1, 2009 | Vol. 67, No. 3

From Test Takers to Test Makers

When students become partners in the assessment process, tests become vehicles for learning.


Meet Mary and Nathan, two typical students from my 25-year teaching career. If you're like most teachers, you'll probably recognize them.
Mary received top grades, but she did not participate much in class discussions, nor did she reveal deep insights when challenged with why questions: Why do you think the author comes to this conclusion? Why do you think the king decided to attack? Why do you think the substances changed colors when they were mixed? She did, however, do well on most tests because they were not made up of why questions, but of fact questions: What was the author's conclusion? When did the attack take place? What color did the substances become when mixed? Mary studied hard just before the test and developed good techniques for answering the questions. I knew Mary would get good grades, but I was worried about her understanding of the material.
Nathan, on the other hand, did not do well on tests. He was especially unhappy with fact questions. He wanted to see the relationships among the facts and understand how the facts had become facts. He complained that there was not enough time for this during the test and that there was never enough space to write explanations. Nathan made our class discussions interesting; he had in-depth knowledge and sought understanding when he felt it was absent. He excelled at the why questions. I knew Nathan understood the material, but this was not evident in his test scores.

Questioning Assessment

I started studying assessment as an academic because I was unhappy as a schoolteacher with how assessment took place. I realized that tests did not reflect the current level of competence and knowledge of students like Mary and Nathan. My first responsibility was to students, and traditional tests did not serve their needs—or my needs as a teacher. So I started looking for other ways to understand students' learning.
My thinking was influenced by a variety of assessment experts (for example, Black & Wiliam, 1998; Crooks, 1988; Gipps, 1994; Popham, 1995; Sadler, 1987, 1998; and Stiggins, 2002). I came to see assessment as a combination of processes—not merely a test taken at a specific point in time. The assessor (in our case, the teacher) can only make inferences about student learning on the basis of the data collected. The quality of conclusions drawn depends on the quality of the data. So how do we improve the data?
I realized that it would not be wise to do away with tests. The education system is based on testing, and testing is widely accepted among parents and students. Plus, tests illuminate an important aspect of students' learning profiles, namely, the ability to present factual knowledge within a given time limit. I started pursuing what I call complementary assessments (as opposed to alternative assessments). With complementary assessment, tests become more than a way to give grades; they also act as learning tools before, during, and after test administration.
How can teachers translate this view into practice? Here, I offer some ideas for using tests for learning as well as assessment. I have practiced these strategies in English as a foreign language, literature, and education classes with learners ranging from as young as 9 years old to the university level. Other teachers have put them to use in courses in mathematics, history, and other subjects.

Learning Through Creating the Test

Having students help write a test is useful for reviewing and for easing test anxiety. I usually apply the technique when we have completed a unit or when approaching the end of term, following these steps.
Step 1. Students look through their notes and other material from the unit and write down the main points they have learned. This is the first review, done by the individual student at home.
Step 2. The teacher invites the whole class to brainstorm a list of learning points, all of which are written on the board. The list serves as a review for all students, and it reveals what students perceive as the main points of the unit. Students like Mary might point out important facts, whereas the Nathans in the class will share how those facts intersect. I've found that the student-generated list does not always match my intended teaching points, so it helps me to refine my own teaching so that I better emphasize the points I want students to learn.
Step 3. The teacher discusses with the class how the learning points were covered, whether through discussions, lectures, group work, readings, digital learning, field trips, and so on. This step reminds the students and the teacher that learning and teaching activities come in different wrappings—that Mary's facts and Nathan's analyses are both part of learning.
Step 4. The teacher briefly informs the students about the basics of test design. I usually explain how open and closed test items tap into different types of knowledge, and I stress the importance of clarity of instructions and of the questions themselves. Students are usually keen on peeping into the "black box" of testing, and learners suffering from test anxiety may feel more at ease when they better understand test design.
Step 5. The class is divided into groups of five students. Each group designs approximately eight test tasks or items, keeping in mind the tips they were given in the previous step. Ideally, students will create a mix of multiple choice and discussion questions; however, most students' questions reflect the type of questions they have encountered on other tests in the class.
To write a good test item, the designer has to understand the topic well, so the student groups spend time in intense discussion and review. If any student does not understand the topic, other group members usually explain it. The teacher monitors the groups, pushing them to go beyond the factual level to design tasks that require the higher levels of understanding that students like Nathan long to share and that students like Mary need to develop.
Step 6. Each group presents its final product to the class. Group members answer classmates' questions about the test items. After the presentations, all test items are familiar to all students, and most misunderstandings have been clarified by the students, not by the teacher. If the students are unable to clarify some misunderstandings, the teacher steps in and reteaches the material in a different way. Often, these misunderstandings are evidence of a problem with the teaching, and the exercise gives the teacher useful feedback and ensures that students will not be assessed on material that they haven't had adequate opportunity to learn.
Step 7. The teacher collects the student-designed tests and creates a final version, using only items written by students and making sure questions from each group appear on the test. The goal is to create a test with a good mix of fact-based and analytical questions, so that students like both Mary and Nathan can demonstrate their strengths and push themselves to grow in their areas of weakness. If it seems a bit frightening to take the full step the first time you involve your students in designing tests, you can always have them design 50 percent of the test, while you design the other half.
Step 8. The students take the test.
Student-created tests give students repeated reviews of the material, both individually and in groups, and allow for fruitful peer teaching and learning. Students take the activity seriously because they know that they will take this test. Some teachers might worry that students will intentionally create easy test questions, but I have not found that to be the case. In fact, the student-created tests are usually more difficult than the tests I would write myself.
The main challenge to this approach is that it is time-consuming. For this reason, I do not apply the approach to every test, but I do use it at least once each term and twice during the year. The gain in learning time on those occasions outweighs the loss of teaching time.

Learning Through Taking the Test

Many testing situations create tension among students. The setting is formal, and they are anxious to get a good grade. When students are taking a test, they are not open to learning but are focused on presenting what they have already learned. However, I've found that less formal test situations can be learning opportunities. One informal test-administration strategy is group testing.
Step 1. The teacher announces the test as usual, often a week or so ahead of time. Students are not told at this point that they will be taking the test in groups.
Step 2. On test day, the students are randomly placed in groups of three.
Step 3. Each student must turn in an individual copy of the test but can freely discuss any test tasks with others in their group. It amazes me to see how much peer learning takes place within the groups. The Marys readily share the facts that they can recall, and the Nathans guide the Marys to make connections between the facts. Students who know the material well are generally eager to explain why they think the answer is correct, and this peer teaching enhances the learning of both the explainer and the person receiving the explanation.
Step 4. The teacher marks the papers. The individual test papers from group members are usually similar, but that's acceptable, as long as each student has completed his or her own paper. The group scores are presented to the whole class, and groups aspire to have the highest score. For a group to "win," all group members need to do their best.
In many workplaces, individual performance is related to group achievement, and we do not prepare students well for the ecology of the workplace if we focus only on individual performance. This approach enables each student to demonstrate his or her own learning while also getting practice in supporting the learning and achievement of others.

Learning Through Marking the Test

I used to spend much time scoring and writing comments on test papers, and when I returned the tests, the students only looked at the score and paid no attention to my careful comments. When students have just taken a test, some of the tension is behind them, yet the material is still fresh in their minds, which offers opportunities for learning. But when students focus only on the grade, they miss those opportunities. When learners score their own tests, the problem largely disappears.
Step 1. The teacher scores the test and returns it to the students as usual, but with nothing written on the actual test paper. The learner's strengths and weaknesses on the test, the score for each task, and the overall score are recorded and saved elsewhere. Students are informed of this.
Step 2. The teacher divides the class into groups and asks them to retake the test together. Students who did well on the test chair the group discussions. This motivates students like Mary who tend to test well to share their knowledge with peers. The Nathans, who might not have done so well, ask questions that push the Marys to grow in their understanding.
Step 3. Each group develops an answer key and presents it to the class. For open questions, the group writes a "best sample" response. If necessary, the teacher provides further explanations and makes corrections. Together, the whole class comes up with an answer key, which is put on the board.
Step 4. Each student corrects his or her own test, using a pen of a color different from the one used to take the test. The point value for each test item is written on the test paper, the student-developed key is displayed on the board, and examples of good open answers are provided. The student has all the information needed to act as an informed assessor.
Step 5. Each student writes the score he or she believes is appropriate on the test paper and returns it to the teacher. Because students know that the teacher has already scored the tests, they are unlikely to award themselves a score they do not deserve.
Step 6. After seeing the marks the students give themselves, the teacher calculates the final score for each learner by averaging the student and teacher scores. If there is a discrepancy of more than 10 percent, the teacher should meet with the student to clarify any disagreement. In my experience, the teacher score is usually the higher of the two scores, and discrepancies of more than 10 percent are rare.
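To make the arithmetic concrete with hypothetical numbers: on a 100-point test, if my saved score for a student is 84 and the student awards himself or herself 78, the final grade is the average of the two, 81. The six-point difference falls within the 10 percent threshold, so no meeting is needed; a self-assigned score of 70, a 14-point discrepancy, would prompt a brief conference to sort out where our judgments diverge.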
In the beginning, I asked students to score one another's tests, but they did not feel comfortable correcting peers' work, and they did not like peers seeing their personal strengths and weaknesses. Students are happy to ask one another questions and benefit from discussing the test with peers, but they resist being scored by a friend.

Learning from Testing

Testing activities can become learning activities when students are informed partners in assessment processes, a strategy whose effectiveness is documented in research (Hattie, 2009). If we accept that assessment is for learning and not merely for selection and decision making, we must seek out and apply such strategies.
This kind of student and teacher partnership can make tests a means through which both the fact-oriented Marys and the discussion-loving Nathans can share what they've learned and grow in their areas of weakness. Why not give them that chance?
References

Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education, 5, 7–74.

Crooks, T. J. (1988). The impact of classroom evaluation practices on students. Review of Educational Research, 58, 438–481.

Gipps, C. (1994). Beyond testing: Towards a theory of educational assessment. London: Falmer Press.

Hattie, J. A. (2009). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. New York: Routledge.

Popham, W. J. (1995). Classroom assessment: What teachers need to know. Boston: Allyn and Bacon.

Sadler, D. R. (1987). Specifying and promulgating achievement standards. Oxford Review of Education, 13, 191–209.

Sadler, D. R. (1998). Formative assessment: Revisiting the territory. Assessment in Education: Principles, Policy and Practice, 5, 77–84.

Stiggins, R. J. (2002). Assessment crisis: The absence of assessment FOR learning. Phi Delta Kappan, 83(10), 758–765.
