Tests That Teach

by Karen Tankersley

Table of Contents

Chapter 1. Constructed Response: Connecting Performance and Assessment

Helping Students Prepare for the Future

Early in their school careers, students learn that the teacher has the “right” answers to questions asked in the classroom. Successful students learn that their “job” is to figure out that “right” answer and provide it for the teacher. Students who can do this quickly and accurately are perceived as brighter and are rewarded with higher grades and more positive feedback. Students who have difficulty discerning the answer the teacher is seeking may well be viewed as less competent and given less latitude. In far too many classrooms, teachers do not require students to think deeply or move beyond the basic knowledge and comprehension level. Even those students who are perceived as bright, capable learners are seldom asked questions like “How do you know?” “How did you get that answer?” “Why do you think so?” or “Show me proof that your answer is correct.” This lack of cognitive follow-through in our classrooms leads to shallow thinking and encourages students to simply guess what the teacher is thinking during instruction rather than engage in genuinely deep thinking and learning.

Another issue that has limited the depth of thinking and learning in American classrooms is that our ever-expanding curriculum has been “a mile wide and an inch deep.” Traditional classrooms have emphasized facts and rote information at the expense of requiring students to apply higher-order thinking activities. As knowledge in the world continues to explode at exponential levels, this is no longer practical. Instead, students must be able to manage information and apply the appropriate level of sophistication needed to think deeply and process complex problems.

To solve problems in real-world situations, students must be able to apply knowledge and use thinking strategies to analyze, synthesize, and evaluate information. In the real world, answers are seldom black and white, and there are often many solutions to a problem. Preparing students with only surface-level knowledge does not lead to “deep thinking,” to intellectual independence, or to building a student's capacity to problem solve and analyze complex situations in the real world. Requiring students to think and process information at much deeper levels prepares them for the real role they will face in life and in tomorrow's workplace. We might be surprised at just how capable our students are if we push them to reach beyond where they are currently performing. As Mem Fox (1993) so aptly puts it in her book, Radical Reflections, “If we allow children to show us what they can do rather than accepting what they usually do, we would be in for some grand surprises. As adults, our feeble expectations of children's capabilities puts brakes on their potential” (p. 65).

Preparing students to think deeply, thoroughly, and critically is a task we cannot begin too early. If we create classrooms where students are willing to take risks, share their ideas and thoughts, delve deeply into issues and ideas, and take responsibility for their own learning, our students will become deep thinkers who will not only perform well today but also be more prepared for the increasingly complex world they will face tomorrow.

Public Accountability and High-Stakes Testing

Since the rise of the industrial era, one of the major roles of the public educational system has been to prepare students for the workplace. The public educational system, as it existed during much of the first half of the 20th century, was designed to prepare students either to continue on to college or to develop vocational skills that could be transferred directly to the workplace upon graduation. During the 1970s and 1980s, rapid breakthroughs in technology and increased pressures from global competition caused business leaders to begin questioning the preparedness of American graduates and the rigor of the curriculum in public school systems across the United States. Policymakers responded with passage of the Carl D. Perkins Vocational Education Act of 1984. As the economic noose continued to tighten around the profit margins of American companies, business and community leaders continued to pressure state and local policymakers for educational reform.

In 1994, President Clinton signed the $300 million School-to-Work Opportunities Act, followed by the Goals 2000: Educate America Act and the more aggressive Improving America's Schools Act of 1994. Goals 2000 funds ushered in the establishment of subject content standards by national professional organizations such as the International Reading Association and the National Council of Teachers of Mathematics, along with a movement toward uniform national curriculum standards. The Improving America's Schools Act of 1994 required states to adopt or develop their own state standards and to align assessments to grade-level benchmark standards. This act also required Title I schools to show progress on those assessments for continued funding allocations, which was a direct government attempt to control student performance via funding.

On the U.S. Department of Labor (DOL) Web site, one can find a table listing the 30 fastest-growing jobs from the 2006–07 Occupational Outlook Handbook. The table lists not only the number of jobs expected to be produced in coming years but also the level of education required to qualify for these occupations. Just 6 of these 30 jobs can be obtained through on-the-job training alone; the remaining 80 percent require postsecondary education. Of these 30 fast-growing careers, 30 percent will require an associate's degree, 33 percent will require a bachelor's degree, and 17 percent will require a master's or doctoral degree (U.S. DOL, 2007). National concern has also surfaced over ever-widening achievement gaps tied to poverty in growing minority populations. With mounting pressure from business and political leaders for highly skilled, technically literate workers, President George W. Bush formed a bipartisan coalition in 2001 to draft a more stringent version of national educational policy. This new policy, known as the No Child Left Behind Act (NCLB), established national goals that all children would reach proficiency in key content areas by 2014 and provided sanctions for failure to meet targeted goals and progress measures. Congress supported the plan, and the highly directive NCLB legislation overwhelmingly passed the House by a vote of 381–41 and the Senate by a vote of 87–10.

During the past couple of decades, business and community leaders have been loudly demanding that schools be held publicly accountable for student performance. This demand for accessible accountability is so apparent that an Internet search can provide information about the academic performance of any school in the country. The information is posted not only by the various state departments of education and often the school districts themselves but also by private organizations. These businesses make money by providing the public with various perceptual and performance statistics about schools anywhere in the country. We are also likely to hear how local school districts and schools have fared on state assessments on television, on the radio, and even on the front page of the newspaper in every city in the country. Everyone wants to know how their local schools have performed and expects to have easy access to this information as soon as it is available. The performance of local schools and school districts influences the price of real estate and where people buy their homes. It also determines whether voters are supportive of increased school funding needs in the local area.

Although education is a “states' rights” issue, even the federal government has taken steps to ensure more uniform accountability. As public accountability intensifies, pressure on state governments to ensure compliance has also magnified. Not only are test results being associated with specific school districts, but they are increasingly viewed as a reflection of the instructional skills of the individual classroom teacher. Recent Federal Register grant offerings demonstrate that policymakers are encouraging states to move accountability measures (both rewards and sanctions) down to the individual teacher level. In many areas, “pay for performance” measures are increasingly the focus of new pay structures for teachers. School districts such as Denver and Houston have already begun using student performance as the basis for teacher merit pay, and the movement is growing at both the state and federal levels. Despite the unpopularity of “pay for performance” with teacher labor groups, pay systems tied to student achievement are only likely to spread in the foreseeable future as researchers find ways to separate out the threads that link student growth directly to the instructional impact of a single teacher. Helping students perform well on state academic assessments is no longer optional but, rather, essential to a teacher's career and financial well-being. Whether we like it or not, high accountability, public scrutiny, and pressure for performance are here to stay.

The Standards Movement: What Should Students Know and Be Able to Do?

To better understand how we got here, let's take a look at educational trends over the past few decades. With the high-tech industry exploding, business leaders and legislators in the 1980s were making public calls for raising student achievement, especially in the areas of reading, writing, and mathematics. Because educational expectations differed from state to state, demands to increase instructional rigor and develop national curricular standardization grew loud. National professional organizations saw the writing on the wall and responded by developing, with professionals from the field, their own sets of benchmark “standards” to delineate the key curriculum for their content areas. The standards movement and “standards-based instruction” were born.

As each professional curriculum group released its “standards” documents, state educational departments across the country took these guidelines and personalized them, subject by subject, to meet the needs of their own students. Copies of the state content standards were shipped to districts, and educators received their own state's version of the “standards” delineating what students should know and be able to do. Often these early standards were broadly written and may even have spanned several grades with the same basic criteria for mastery. Although teachers' unions responded that the new standards limited teacher creativity and educational freedom, state departments were quick to assure local school districts that there was substantial room for “teacher interpretation” at the classroom level. Most states then pressed forward with developing new “standards-based assessments” for key “benchmark” grades in at least the core content areas of reading, writing, and mathematics to measure their new curriculum.

After the passage of NCLB, federal reporting requirements forced states to ramp up testing to more grade levels and more content areas, including science. State plans had to receive federal approval, or a state would risk losing not only federal education funds but other forms of federal aid as well. A few states considered opting out of the federal requirements or even mounting a legal challenge to them, but all soon changed their positions and began working to meet the regulations of the law. Although states still varied widely in content expectations and academic rigor, one giant step had been taken toward a more uniform, standard curriculum and the measurement of student performance.

Second-Generation Assessments Require Deeper Thinking

Many of the first state assessment instruments contained primarily multiple-choice items. As testing instruments became more sophisticated, short-answer and open-ended constructed-response items, which required students to apply their knowledge to more complex, performance-oriented tasks, began to appear more frequently. As states continue to make revisions, more and more constructed-response questions are being incorporated into assessment instruments.

If we examine the content standards now in place in most states, we will see that they require students to use higher-order thinking and reasoning skills rather than just memorize content information. This emphasis on higher-order thinking reflects the philosophy that while content knowledge and basic skills are important, it is the ability to reason and apply those skills that truly demonstrates mastery of content.

Despite the fact that the tests have changed to place greater emphasis on higher-order thinking through performance-based measures, some teachers have not changed the way they approach their daily instruction. As a result, it is on the constructed-response sections that students have the most difficulty applying their knowledge, which lowers their overall scores on these parts of the state test. By changing how we approach daily instruction, we can help our students develop the skills they need not only to do well on these assessment items but also to internalize what we are trying to teach them.

It is beyond the scope of this book to debate the merits or pitfalls of high-stakes testing. If one is interested in reading about this debate, a plethora of information on this issue is available on the Internet. Regardless of whether we feel high-stakes testing is good or bad, it exists, and until the political realities change, these are the “rules” under which we must operate. Like it or not, accountability is reality. For me, it is a waste of time to debate what I cannot control. I have chosen to take the path of helping teachers understand how to provide their students with the best possible preparation for success on state assessments, since this student success is critical.

This book will help teachers understand how to teach on a daily basis so that students can easily rise to the challenge of scoring well on state assessments, particularly on the constructed-response sections of these instruments. Your students' success is your success. This book will help you understand how to create a classroom that supports deep thinking and higher-order performance so that your students have the best chance to show what they know and are able to do.

What Is Constructed Response?

Let's start with a definition of what constructed-response items are so we have a common understanding of what the term means. Constructed-response questions are assessment items that ask students to apply knowledge, skills, and critical thinking abilities to real-world, standards-driven performance tasks. Sometimes called “open-response” items, constructed-response questions are so named because there is often more than one way to correctly answer the question, and they require students to “construct” or develop their own answers without the benefit of any suggestions or choices.

Constructed-response items can be very simple, requiring students to answer with only a sentence or two, or quite complex, requiring students to read a prompt or a specified text article, reflect on the key points, and then develop a meaningful essay or analysis of the information. Whether simple or complex, all constructed-response questions measure students' ability to apply, analyze, evaluate, and synthesize the knowledge that they have acquired in a more abstract way.

Although most states primarily use open-ended response questions on their state assessment instruments, some use constructed-response questions in a “closed” format. Closed constructed-response questions are similar to more traditional fill-in-the-blank types of questions and have only one right answer. They ask students to fill in a word or a phrase in a specific text and usually require only simple recall or, at best, an inference. Here's an example of a closed type of question: “According to the passage, Allen did not want _______ to go with the group to the park.” Although students might have to make an inference from the text passage, there is still only one correct answer that can complete the question, so divergent thinking or unique responses are unnecessary. Despite the fact that making an inference is considered a higher-order skill, the question still merely requires students to identify the correct answer. Closed questions generally do not stretch a student's thinking to any great extent.

In contrast, open-ended questions require students to think deeply about the question and to provide a much more in-depth response. Here's an example of such a question: “The principal thinks students need more time for learning and wants to do away with recess for all grades at your school. Write a persuasive essay telling him why you either agree or disagree with this idea.” This type of question requires students to think about a specific issue, evaluate their position on it, and then organize their thoughts and compose a meaningful, persuasive essay on the topic. This is a complex, high-level task that can demonstrate what the student knows and is independently able to do.

As we have already discussed, constructed-response questions may be simple or complex in nature, as in the example just given. Here's an example of a simple prompt, commonly referred to as a “brief response” question: “Briefly tell why Mary was afraid in the story.” Test makers might be looking for a student response such as “Mary was afraid because it was dark, and she had lost her key to get into her house. No one was home to let her in when she got there.” Students who could identify losing the key and being locked out as the two main reasons for Mary's fear would likely be awarded the full point allowance for the response. A student who simply noted that “It was dark” might be awarded only a portion of the possible points because although one piece of the answer was correct, the response was missing other key details required for full credit. In this response, the student did not demonstrate a thorough understanding that Mary's loss of the key was the event that caused her to be locked out in the dark.

Sometimes test makers provide a specific stimulus for students to use in constructing a response. Students may be presented with a graphic organizer, a map, a picture, or a diary excerpt that they must use as background information for the task they are asked to do. They may be asked to process the information, make inferences, or analyze information based on this stimulus. Students are then expected to write anything from a sentence to a fully developed essay using the information presented and the connections and interpretations they have made. For example, in language arts, students may be asked to identify and cite examples from a short story to highlight changes in attitude that occur in the lead character from the beginning to the end of the story. In math, students may be asked to draw a diagram, interpret data, or develop a sequential solution that solves a specific problem. In social studies, they may be asked to discuss the meaning of a theme such as “nationalism” or to create a graphic organizer comparing two regions or two contrasting concepts such as communism and capitalism.

Alternatively, sometimes students must use information to create a graphic organizer, create and correctly label a diagram, or draw a diagram showing a solution. Complex constructed-response questions require substantially more response time and require that students have an understanding of the processes that will be needed to complete the task. More often than not, in a complex response, students will be asked to provide examples or illustrations or to justify their thinking on a particular topic. When text passages are used as the stimulus for the response, students are often asked to refer directly to the text to show connections or examples of the points they are trying to make. When students regularly practice justifying their thoughts and linking information back to their own ideas, their ability to do the same at assessment time increases substantially.

Constructed-response questions are generally criterion referenced and may measure one broad content standard or several specific content standards. Test items will usually be scored manually against a pre-established rubric and sets of prescored sample papers, called anchor papers, that identify the range of allowable responses for each score level. The anchor papers help raters establish a degree of uniformity in how each paper is scored on the established rubric. Scoring is holistic and relatively objective, based on the actual components included in the response. Students may receive full credit or partial credit based on the pre-established rubric criteria.

Scoring Constructed-Response Questions

Almost every state now incorporates constructed-response items into its grade-level assessment instruments or graduation exams. While multiple-choice items typically carry only 1 point each, a single constructed-response item can account for as few as 2 or as many as 10 points of the total raw score. Depending on the state, constructed-response items may make up 25 to 50 percent of the total test students face each year.

Not only are constructed-response questions more demanding, but their placement in the test booklet can add an additional challenge. Although these questions require more intense effort and processing power, most test writers put them at the end of the multiple-choice segments. Because students may already be beginning to tire when they reach this point on the assessment, some have a tendency to gloss over the very items that require their most concentrated and deliberate efforts. As a result, some students lose points simply because they are too tired to devote the energy needed to complete the task. Unlike a multiple-choice question, which is scored either right or wrong, each constructed-response question is scored against a pre-established rubric, and students can earn partial credit for answers that are partially accurate but less fully developed. In any case, with a constructed-response question, students have multiple options for organizing and processing the information and creating a response that derives from their own thoughts, skills, and experiences. Helping students improve their ability to provide high-quality responses on constructed-response items can significantly improve their scores, because each item may carry many points that affect the overall result.

Teachers who routinely teach for deep understanding find that their students are also more motivated, interested, and involved in the learning process. Despite the increased use of constructed-response items on state and national assessments, many teachers have not changed their instructional approaches to help students practice responding in more comprehensive, open-ended ways. Deep thinking and processing must become the rule rather than the exception. While building superficial knowledge may have sufficed when tests were written entirely in a multiple-choice format, it is not enough now. We have seen that students lose the most points on the constructed-response sections, so clearly we must change how we work with students every day. Our daily instruction must help students delve deeper and go beyond superficial knowledge. By understanding and incorporating open-ended activities into the regular instructional program, teachers can feel confident that their students will quickly become better prepared to meet the challenges they will face on the constructed-response sections of assessments.

Classroom Instruction That Builds Thinking

Good teachers have always known that effective learning requires practice and very specific performance feedback to build success. Consider when a child learns to ride a two-wheeled bicycle for the first time. We provide support such as training wheels, or we hold onto the bike and run alongside the child until he or she can balance the bicycle unaided. All during the process, we provide feedback or suggestions on how to combine the actions of balancing and pedaling at the same time. In addition to teaching the child how to move forward, we also describe how to stop, turn, and use appropriate safety procedures. At first the task is difficult, but with persistence, the child makes progress. Although he or she may still fall or be wobbly now and then, it is the continued practice and our coaching that eventually help the child synthesize all of the information into accomplishing the task.

Constructed-response test questions entail a similar sequential process. For example, consider the following test item that might appear on a high school reading assessment: “Identify the mood in the passage from Joseph Conrad's Heart of Darkness. Use at least four specific examples and details from the text to support your answer.” This question requires students to read the given passage with sufficient comprehension not only to understand the concept of “mood” but also to analyze the text, choose relevant examples, and then support a logical argument about the text in written form. This is a complex task that many high school students find not only challenging but perhaps even overwhelming. It is therefore not surprising that students perform less well on open-ended, constructed-response items on state and national assessments than they do on multiple-choice items. With a multiple-choice item, depending on the number of responses presented for each question, students have at least a one-in-four or one-in-five chance of answering correctly, even when they have no idea what the right answer may be. Constructed-response questions, in contrast, require students to supply their own answer to the question (commonly referred to as a “prompt”).

In reading or social studies, a multiparagraph passage can provide test developers with multiple questions that build from simple to complex. After reading the passage, students may be provided with a comprehensive prompt or a series of smaller response prompts all based on the same text. Here are examples of comprehensive prompts:

  • “Write a paragraph summarizing the story.”
  • “What do you think was the lesson that Character X learned in this story?”
  • “Choose someone that you know who is like Character X. List and discuss the ways this person is like Character X. Use examples and illustrations from the text to support your response.”
  • “Suggest a different ending to this story.”

In other cases, students may need to create a chart showing differences or simply draw conclusions and formulate an answer to a question. Questions might be written as follows: “What conclusions can you draw about population growth in Country X from the data presented in the chart?” or “Given the location of this country on the map shown, what issues or concerns would you expect the people who live in this country to have?” To respond successfully to these types of questions, students must be able to analyze the information and then draw conclusions about the data. Sometimes a series of brief-answer questions is built on a single data set. In each case, students must reflect on the material to answer each of the open-ended questions in a thorough and comprehensive way.

In many states, the assessment of terminology and vocabulary is an integral part of state assessment instruments. To assess vocabulary knowledge, test makers frequently give students a short passage containing the target word and then ask them to interpret the meaning of a vocabulary term by selecting a multiple-choice sentence that correctly explains the usage of the word. Although these items are often multiple choice, they still require that students have a well-developed vocabulary to identify the correct response. Teachers can help students improve in this area by ensuring that students have a good understanding of how to use context clues and prefixes, suffixes, and root words to determine the probable meaning of a given word.

In some states, students are expected to be able to classify text by genre. Students might be asked not only to identify the genre but also to cite clues or information that helps categorize the text into this genre. Genre questions require that students fully understand the characteristics of various genres and are able to identify the elements that meet these characteristics in the passage. Here's an example of a genre question: “Discuss the elements found in this story that make it a fairy tale, and cite relevant examples from the story.” If students have not been taught to identify the characteristics of various literary genres (and state standards expect them to do so), they will not be able to perform well on a constructed-response item of this type.

In math, open-ended questions are generally developed around one of the following five key areas: number sense and operations; patterns, relationships, and algebraic concepts; geometry; measurement; or data, statistics, and probability. Students may be asked to describe the steps taken to solve a particular problem, predict a sequence or pattern, or calculate specific values from a given chart or table. In many cases, students need to understand not only how to perform the necessary calculations but also how to sift through irrelevant data to find the essential pieces of information. Many states will also expect students to explain how they arrived at answers or to sequentially describe the steps they used to solve the problem. Students will also have to understand essential math terms and vocabulary such as perimeter and area that might appear in the problem, as well as be able to apply the appropriate skills correctly.

On science assessments, test makers may ask students to interpret data from maps of various types in order to draw conclusions or make predictions. They may also ask them to describe or draw various scientific components, such as the layers of soil or the atmosphere, or discuss how certain features help specific animals survive in their habitats or how certain cycles occur in nature. If students do not have a true understanding of the processes involved in these scientific elements, they will not be able to perform the tasks. Again, familiarizing yourself with the grade-level content standards and all state-released items will help you clearly understand the expectations set for your students.

Just as practice and feedback help a child learn to ride a bicycle, if we want students to score well on constructed-response items on their annual assessments, we must provide them with many opportunities to use higher-level thinking where we can coach and shape their abilities. Students need to be trained in how to analyze the requirements of a task, explain their thinking, and support their analysis or opinions with concrete evidence. We must ask them regularly to make connections, synthesize responses, analyze sets of data, and use all of their background knowledge to interpret information. When this activity has become the norm rather than the exception, then we will have truly accomplished our goal of preparing our students for the demands facing them.

Creating a Classroom Conducive to Higher Levels of Thinking

Classrooms that maximize student learning potential are environments where independent thinking is valued and students are encouraged to see themselves as capable problem solvers. The teacher's role becomes that of the “guide by the side” rather than the “sage on the stage.” The atmosphere is supportive and nonthreatening so that risk taking and divergent thinking are encouraged and promoted. Questions are answered not primarily by the teacher but, rather, by the students themselves. Students are encouraged to experiment and grapple with ideas to make their own connections and find their own solutions. They are asked to explain their conclusions, outline their thought processes, and justify their decision making. In these classrooms, it is more important to talk about the process used to arrive at a specific response than about whether the answer was correct. In short, it is what the student has learned rather than what the teacher has taught that becomes the focus of daily operations.

To clarify this idea by again comparing it to a simple sport: we can conceptually teach a child to hold a baseball bat and to swing it at an oncoming ball. We can help the child understand the rules of baseball, how to run from base to base, and even the points to consider when stealing a base, but we cannot teach that player how to apply all of these skills to maximize the chance of success. It is the player who must learn to evaluate, to strategize, and to combine all of these skills to play the game successfully. The degree to which each individual can combine all of this training and information into successful play determines whether that person becomes an occasional player at the local park or a highly paid, major league professional. Likewise, how well students integrate what they have learned and practiced in the classroom helps determine their performance on assessments.

In classrooms that promote deep thinking, students receive ample time to explore and reflect on multiple perspectives. In these classrooms, learning is a community activity. Students frequently work together to develop, clarify, or blend their thoughts and ideas on the topics they are learning. They regularly ask not only their teacher but also one another for information, explanations, and clarifications. Activities draw from real-life examples that provide immediate and meaningful relevance to students' lives and experiences. Students understand that there may be more than one right answer and that the process is just as important as the answer itself. They use a variety of communication tools such as drawings, diagrams, tables, and other forms of graphic organizers or visual displays to process and present their thoughts and ideas. Comparing, interpreting, and analyzing information become common tasks that permeate learning in the classroom.

As a classroom teacher, I always thought that the best learning occurred when I remembered that learning should, above all, be enjoyable and stimulating. The chapters that follow will help you enhance what has been successful for you as well as take your instructional practices to the next level. With these changes, your classroom can foster the kind of successful, complex, independent thinkers capable of facing the challenges of the 21st century.