by Charlotte Danielson
Meaningful conversations about teaching and valid evaluations of teaching must be grounded in a clear definition of practice—a framework for teaching. This definition should reflect the professional consensus of educators in the particular school or district. Regardless of the purposes to be advanced, whether for professional development or for evaluation of teachers, a clear definition is essential.
But a clear definition of teaching is not sufficient. Both the support of teacher development and the evaluation of teacher performance require evidence of practice—evidence of each of the components of teaching identified in the adopted framework. The term evidence is not intended to suggest a courtroom or a litigious environment. Rather, it is intended to convey that conversations about teaching must be grounded in actual events, in actions or statements, in artifacts, or in decisions a teacher has made. Without such grounding, impressions of teachers' skills are based entirely on the observers' own idiosyncratic views of teaching and their understandings of what has occurred and what those events mean.
Mentors and coaches, no less than evaluators, depend for their work on evidence of practice. They collect the same evidence but use it for different purposes. For evaluators, evidence is the foundation of judgments they make about teachers. All the evidence they assemble, from a variety of sources—for example, formal and informal observations of practice or artifacts for those aspects of practice not observed in the classroom—serves as the basis of decisions they make about renewing a contract or offering tenure. Mentors and coaches, on the other hand, use the evidence they collect—from the same sources—to structure professional conversations. No judgments are based on it; it is used purely for formative purposes. So the question is not whether individuals in these different roles collect evidence; the question is how they use that evidence.
The focus in this chapter is on identifying the evidence needed to describe teachers' skill in the domains and components of teaching described in Enhancing Professional Practice: A Framework for Teaching.
Evidence comes from two principal sources: direct observation and the examination of artifacts. Observation is appropriate for the observable aspects of teaching—principally, a teacher's interaction with students in the classroom. But some essential aspects of teaching can't be observed—for example, a teacher's skill in planning or in communicating with families. Although a classroom observation might reveal indirect evidence of, for example, planning, only the planning documents themselves provide a coach or a supervisor with direct evidence of the teacher's skill in designing and sequencing meaningful learning experiences, locating appropriate resources, and developing suitable assessments.
The observation of classroom practice is the cornerstone of the evidence of a teacher's skill; engaging students in important learning is rightly considered to be the key to professional teaching. What teachers do in their interaction with students is what matters most in influencing student learning.
In general, observation of classroom practice, with the accompanying preconference and postconference, provides the best evidence of Domains 1, 2, and 3 of the framework for teaching: Planning and Preparation, the Classroom Environment, and Instruction. The preconference, also called a planning conference, provides an opportunity for a teacher to display important planning skills, at least as used in planning a single lesson. The postconference, also called a reflection conference, is an important opportunity for teacher self-assessment, reflection on practice, and professional conversation—activities that have been demonstrated to contribute to professional learning by teachers.
Of course, other important aspects of a teacher's work can be observed as well. For example, a teacher's conduct during faculty or team meetings demonstrates the teacher's engagement in a professional community; a parent conference demonstrates the teacher's skill in communicating with families. Furthermore, observations of practice apart from classroom teaching lend themselves to planning and reflection conferences. For example, if a teacher has arranged a presentation to a child study team, the planning conference enables a teacher to respond to questions such as these: "What are you hoping to accomplish?" and "What approach do you plan to take?" Likewise, the reflection conference may proceed along the same lines as the one following a lesson, with questions such as these: "Did you achieve what you hoped?" "What did you learn from the meeting that causes you to rethink your approach?"
In fact, many educators believe that, from the standpoint of teacher learning, the reflection conference is the most significant part of the observation process, whether the observation has been of a lesson or an event outside the classroom. This has been found to be the case whether the observation has been conducted for purposes of coaching or evaluation. Both the observer and the teacher have witnessed the same events, albeit from different perspectives. That shared experience provides the raw material for meaningful dialogue.
Artifacts offer the best, and in some cases the only, evidence of certain aspects of teaching. The planning documents discussed at the preconference for a lesson observation provide important evidence of a teacher's skill in planning, at least for a single lesson. But what about long-range planning? That is a different and very important skill. Only a unit plan enables teachers to demonstrate how they intend to engage students in sustained learning of complex concepts, with meaningful activities and suitable materials. Through a unit plan, teachers can demonstrate how they develop concepts over time with their students, with the content moving from simpler to more complex, through a variety of approaches. This skill cannot be observed in a single lesson or the lesson plan that accompanies it.
In general, artifacts are essential for teachers to demonstrate their skill in Domains 1 and 4 of the framework for teaching—Planning and Preparation, and Professional Responsibilities. Most of the components of these domains can be observed only indirectly, if at all. No number of classroom observations will enable a teacher to demonstrate the skills of maintaining records, communicating with families, or engaging in professional growth—some of the components of Domain 4. These can best be demonstrated through "stuff"—artifacts. For example, a class newsletter, a phone log, and a letter to parents about a new program indicate the range of a teacher's skill in communicating with families. These written materials should be clear, with proper use of language, and appropriate to the cultural and educational backgrounds of their recipients. For professional conversations or the evaluation of teacher performance in the components of Domains 1 and 4 of the framework for teaching, artifacts are critical.
Artifacts can also provide evidence for Domain 3, Instruction. For example, an assignment or the directions for an activity offer critical evidence of the level of intellectual rigor in the classroom. Furthermore, student work in response to the assignment offers important evidence of student engagement. It's easy to detect whether students have taken the assignment seriously and whether they have engaged thoughtfully with it. Finally, in discussing student work with teachers, it's possible to see how they use student work in their informal assessment of student learning.
Figure 1.1 provides some examples across the two dimensions of evidence—direct observation versus examination of artifacts, and classroom practice versus nonclassroom responsibilities. (See Appendix B for guidance on the collection and review of artifacts.)
How to Gather Evidence
The framework for teaching describes the work of teaching. Every component of the framework is supported by empirical research linking the component to increased student achievement. But the framework for teaching is a description of inputs—that is, what teachers do. As such, it provides only indirect evidence of what is, after all, the main outcome of schooling—student learning. Therefore, in addition to being asked to demonstrate their skill in teaching, teachers can be asked to provide direct evidence of their results with students. They may be asked to demonstrate that they have actually had an impact on the students they teach.
Evidence of student learning can take many forms. The most obvious form, particularly to noneducators, is the results of state-mandated standardized tests.
State-mandated standardized tests have the advantage of being externally administered and regarded as valid. For the subjects and grade levels in which they are administered, state tests are considered by many people to provide unimpeachable evidence of a teacher's effectiveness. However, the use of state-mandated standardized tests as indicators of a teacher's skill in teaching involves many conceptual and technical difficulties.
Unavailability of state tests for a full range of subjects and levels. State tests are not administered in all subjects and at all grade levels. Most states assess achievement in grades 3 through 8 in math and reading, and sometimes writing and science, and once more in high school, typically in math and reading. For elementary teachers and for middle and high school teachers who teach those subjects, the resulting data can provide an indication, though not a timely one, of their effectiveness. However, state-level tests don't exist for many subjects, or for specific courses at the secondary level—for example, music, world languages, or individual sciences such as biology or chemistry. Thus, state-level tests cannot be used to determine the effectiveness of many teachers. For example, even if a state has developed a science test for high school students, such a test will not assess students' knowledge of chemistry.
Incomprehensiveness of state assessments of student learning. Another difficulty with using standardized tests to measure teacher effectiveness is that such tests cannot assess all the learning outcomes considered important by both educators and the general public. For example, multiple-choice, machine-scorable tests can assess student knowledge of facts and procedures, but they are less able to evaluate conceptual understanding, thinking skills, or writing. Furthermore, standardized tests cannot evaluate any aspect of student achievement that depends on student production—for example, giving a presentation or speaking another language.
Student mobility. Any data about student learning is contaminated by the fact that during the course of the year some students enter the school and others leave. These students may not be included in the data, and their exclusion may skew the results. Therefore, the average score of students on a test, particularly if the numbers involved are small, may be highly influenced by the movement in and out of only a few students.
Influences beyond the classroom. Educators and others recognize that student learning—including the learning measured by state-mandated tests—is highly influenced by factors beyond the control of the school. Some families are much better equipped (through their own experiences or the resources they can provide) to support students in their learning. They are able to provide a quiet location for schoolwork and to offer assistance when needed. They can take weekend excursions to local places of interest and provide a home environment that promotes learning. It is well known that such factors greatly influence students' level of performance and rate of learning. Even within the school, student learning cannot always be attributed to individual teachers. Schools are complex systems, and the instructional skill of individual teachers is only one factor, albeit an important one. Other factors that come into play are the curriculum, the school's organization and master schedule, and available learning support. Furthermore, students are sometimes assigned to classes for which they lack important prerequisite knowledge; for example, they may be assigned to an algebra class without having an adequate understanding of fractions.
These various considerations suggest yet another challenge—namely, that test data constitute valid evidence of teaching only if such data are calculated on a value-added rather than an absolute basis; that is, what is the level of proficiency of students at the end of a school year compared with their predicted level of achievement (based on their previous rates of learning)? Such calculations require sophisticated statistical techniques and are possible only where a state or a district has invested heavily in psychometric analysis.
The challenges related to state-mandated tests are not presented as an argument against asking teachers to demonstrate their impact on the students they teach. Teachers recognize that it is not unreasonable to request that they be able to show that their students learn as a result of their actions. The question becomes one of evidence: How can a teacher demonstrate that? What would count as evidence? How much evidence is sufficient? Could a small sample suffice?
These questions about evidence are most critical in the context of teacher evaluation, where supervisors must be able to say with confidence that a teacher obtains good results with students. The amount and type of evidence of that teacher's impact should be, therefore, decided by the teacher and the supervisor together, and should relate to the school's and the teacher's aims for those students. For example, writing samples from September and May would offer evidence of growth in writing skills, just as science lab reports from fall and spring would demonstrate increased skill in analyzing data. (See Appendix B for guidelines for gathering such evidence.)
Some schools and districts encourage the collection and examination of feedback about teacher performance from students and parents. Such feedback can be extremely valuable to teachers in improving their practice. For example, a teacher might strive to treat all students fairly and with respect. But if the students don't believe that all students are treated fairly, the teacher should know that. Similarly, a teacher might strive to be responsive to parent questions and concerns, but if parents find the teacher inaccessible, that is important information for the teacher.
However, collecting information from students and parents presents some challenges. The typical methodology is opinion surveys with a rating scale by which people express their views. By definition, surveys elicit perceptual information—that is, information about people's views and opinions. Such information is not independently verifiable. However, in some instances, perceptions are important. For example, if students believe that a teacher shows favoritism toward some students, it does not matter what the teacher's intentions are or if the "objective" data show something else.
Educators contemplating the use of student and parent surveys must decide whether the respondents will be anonymous. Some students and parents fear retribution if teachers know the source of a negative comment. This retribution could take the form of unpleasant classroom interactions, poor grades, or reluctance by the teacher to write a favorable college recommendation. In addition, it's important to decide what will be done with the results. How will they be used?
Some administrators (and some boards of education) regard the results of student and parent surveys as part of the evaluation process and believe that the results should be sent to the administrator without the teacher even seeing them. In this situation, the issue of respondents' anonymity is not as important. Understandably, most teachers resist this use of student and parent surveys.
In considering the concerns surrounding the evidence that can be derived from student and parent surveys, it seems clear that their primary value is to help teachers understand the impact of their work on the most important people with whom they interact—their students and the parents of those students. Thus it makes sense that teachers be the sole recipients of the feedback, but that they be expected to summarize it for their administrators and be prepared to describe what they have learned from the surveys, what surprises they encountered, and how they plan to address any concerns raised.
Many educators collect evidence of teaching for use in several different contexts. Mentors and coaches increase their effectiveness with the teachers they serve if they can cite specific examples of a teacher's practice. And it's essential for evaluators, if they are to make accurate judgments about teaching, to base those judgments on specific evidence of practice. However, a number of issues must be considered in the collection and use of evidence.
Evidence—whether gathered through observation of practice or examination of artifacts—may be provided either formally or informally. Most systems of teacher evaluation require a certain number of formal classroom observations, and this requirement is typically spelled out in the negotiated agreement. But formal does not necessarily mean "announced"; instead, the term formal typically refers to observations that last for a certain minimum length of time, that are documented in a write-up placed in a teacher's file, and that are used in writing the annual evaluation.
An examination of artifacts may be formal as well; the collection of artifacts to be used in, for example, the teacher evaluation may be specified in advance, along with a schedule of when they are to be examined. In some schools, teachers prepare one artifact to be reviewed during the reflection conference for each of the three announced observations. Other schools set aside a separate time for the review of artifacts. In either case, there are no surprises; teachers know what is expected of them, and they have written guidelines to follow.
Most negotiated agreements specify whether evidence gathered informally "counts" in the evaluation process. Some contracts are clear on this point: only evidence collected through the formal (and often announced) observations may be used in making an evaluation. Other contracts include no statement on the point, which means that any evidence, whether it is collected through formal or informal means, may be used.
Another distinction occurs between announced and unannounced observations of practice and examinations of artifacts. Of course, in the context of teacher evaluation, the concept of evidence, and its accuracy, is critical. A strong case can be made for unannounced observations of practice and discussions of artifacts.
How can an administrator know with any certainty the quality of a teacher's practice? Teachers work with students approximately 5 hours each day, for at least 180 days each year, for a total of nearly 1,000 hours. Administrators do well to observe in a teacher's classroom for 4 of those hours each year and may observe for far less than that: typically 1 hour. This is a tiny percentage of the total—well under one-half of 1 percent. The best one can hope for is that the observed lesson is typical, representative of the total. But that conclusion may be unlikely if the observation has been announced; "dog and pony shows" are alive and well! An administrator may witness a lesson that has been much more thoroughly planned than would normally be the case, and for which the students have been specifically prepared.
Administrators sometimes argue that although an announced observation of teaching may cause them to see a highly planned and perhaps not completely typical lesson, it is adequate for their purposes. They can learn, they maintain, whether the teacher is capable of designing and executing a superior lesson.
But surely such an argument is not sufficient. An administrator may determine that a teacher is capable of teaching an excellent lesson when that teacher knows that the administrator will be present. But the students are there every day. To obtain an accurate view of a teacher's practice, it is critical to know, somehow, what happens in the classroom every day, even when no observer is present. This requirement is the strongest argument for including in an evaluation system at least some unannounced observations of teaching. Such observations can be "formal"—that is, they can be for the length of time specified in the contract and can yield a written report, but they are unannounced.
Of course, if a classroom observation is unannounced, it is not possible to conduct a planning conference. As a result, the teacher does not have the same opportunity to demonstrate skill in planning that such a conference affords. However, the planning discussion can be incorporated into the reflection conference— for example, by asking questions such as these: "What were you planning for the students to learn?" "Why were those suitable outcomes for this group of students?"
Unannounced observations of teaching also lend themselves to the unannounced examination of artifacts, particularly those connected to the observed lesson—for example, materials used in the lesson and samples of student work. Other artifacts of a teacher's practice routinely come to the supervisor's attention— for example, information sent to parents or materials prepared for a faculty meeting. Furthermore, teachers' conduct during faculty meetings and team work sessions, their presentation to a child study team, or their sharing of findings from a conference or an action research project are always conducted in the public sphere; observation of these situations is always informal. Figure 1.2 shows some of the characteristics of announced and unannounced and formal and informal procedures for the collection of evidence.
For purposes of mentoring or coaching, teachers collect evidence of the teaching of their colleagues and conduct nonjudgmental conversations about practice. When the evidence is used for performance appraisal, it is typically collected by supervisors or administrators. These may be department chairs, site administrators, or district-level content supervisors. Of course, some exceptions apply; in some settings, teachers are involved in the evaluation of their colleagues. Although rare, this situation is not unknown.
Evidence of teaching may be provided either by the teacher or by a mentor, a coach, or a supervisor. Of course, people's perspectives may differ slightly, because of their different roles. An observer cannot know the background of every situation and can't be aware of the particular challenges facing each individual student. On the other hand, even the most attentive teachers cannot be aware of everything happening in a classroom; another person's observations can help to supplement the teacher's perceptions.
All educators have a private, idiosyncratic view of good teaching, depending on their own experience as students or as parents with children of school age, their professional preparation, and their experience as teachers and interactions with colleagues. This private vision extends to the type of work students should be doing in school and the nature of suitable communication with families. When teachers join the faculty of a school, their views of teaching may be similar to those of other teachers and supervisors, but not necessarily; the value of an agreed-upon framework for teaching is to provide a common language and a shared understanding for use by teachers and administrators in a school and a district.
But simply adopting the framework for teaching does not guarantee that everyone understands it in the same way or that everyone would cite similar evidence for the different components. To achieve such consistency, training is needed to ensure common understanding of the framework for teaching and, for evaluators, consistent judgments about teaching based on evidence. In addition, the forms and procedures used must support that consistency.
Training in the framework for teaching can be highly valuable professional development for teachers, mentors and coaches, and evaluators. A careful study of Enhancing Professional Practice: A Framework for Teaching can provide a good start to becoming familiar with the framework. In such a book study, educators read a chapter of the book between meetings, and they come prepared to discuss the chapter's applicability to their own setting. For the sections of the book describing the various domains and components, the question of evidence can be added to the discussion: "Does this aspect of teaching apply to me, and if so, what would constitute evidence that it was well done?"
During high-quality training, educators come to understand the different components of the framework for teaching, how the components are related to one another, and which ones share the common themes of, for example, equity, high expectations, or the appropriate use of technology. They learn that the levels of performance in the framework are levels of performance of teaching, not of teachers, and how evidence is interpreted to reflect those different levels of performance.
But the most valuable aspect of training in the framework for teaching does not consist of the acquisition of skill in assembling evidence for the different components. The greatest value derives from the professional conversations among educators about their practice. In these conversations, teachers acquire ideas from one another ("Oh, that's a good idea; I never thought of doing it that way"), and administrators see the richness of experience that their teachers bring to their work. For this reason, it's important, when possible, for teachers, coaches, and administrators to participate in training together; it's important for teachers not to believe that administrators are engaged in secret activities. For all educators involved in supporting and evaluating teaching, the evidence is the same. Granted, the use to which that evidence is put is different, but the nature of the evidence is identical. And the conversations among all educators are enriched by having in the same room individuals with different perspectives and responsibilities within the school.
Occasionally a school district will announce that it is going to revise its system of teacher evaluation, and then set about revising the forms. It's important to recognize that a form is not an evaluation system; the system consists of all its elements, including the evaluative criteria, the procedures, the time lines for activities, and the decisions about who the evaluators are and the training they receive. However, the evaluation instruments (the forms) are important to a good system; they structure the manner in which evaluators collect evidence, and they determine the questions teachers consider for a planning conference and a reflection conference. Similarly, the issues teachers consider in assembling artifacts for a portfolio structure their thinking and their choices. Therefore, the forms provided in Appendixes A and B are offered to stimulate educators' thinking about the types of evidence they want to ask teachers to provide and the nature of the reflection (by teachers, mentors and coaches, and administrators) they intend to encourage.
When planning for teacher evaluation or designing a system of mentoring, induction, or professional development, evidence of teaching is essential. This is not evidence in a legalistic sense but constitutes the means by which teachers demonstrate their skill. Good teaching is not a mystery; educators, like members of other professions, have developed considerable consensus regarding what constitutes excellent practice. And to ascertain the level of skill, evaluators of teaching must have evidence of that practice. Some of the evidence comes from observations; other evidence comes from artifacts. What is important is that the method used be appropriate to the aspect of practice for which evidence is sought.
Figure 1.3 summarizes the sources of evidence for the components of the framework for teaching. The items in Column 2 refer to forms in Appendix A. The artifacts referred to in Column 5 are described in Appendix B.
Figure 1.3. Sources of Evidence for the Components of the Framework for Teaching

Sources of evidence include the teaching interview and the planning and reflection conferences (forms in Appendix A), observations of teaching, other observations of practice, and artifacts (described in Appendix B).

Domain 1: Planning and Preparation
  1a: Demonstrating Knowledge of Content and Pedagogy (Form C: Teaching Interview, Question 1)
  1b: Demonstrating Knowledge of Students (Interview, Questions 3, 4; Form F: Planning Conference, Questions 1–8)
  1c: Setting Instructional Outcomes (Conference, Questions 1, 2, 4)
  1d: Demonstrating Knowledge of Resources (Interview, Questions 8, 9)
  1e: Designing Coherent Instruction (Conference, Question 5)
  1f: Designing Student Assessments (Conference, Question 7)

Domain 2: The Classroom Environment
  2a: Creating an Environment of Respect and Rapport (Interview, Question 7)
  2b: Establishing a Culture for Learning
  2c: Managing Classroom Procedures (Interview, Question 5)
  2d: Managing Student Behavior
  2e: Organizing Physical Space (Interview, Question 10)

Domain 3: Instruction
  3a: Communicating with Students
  3b: Using Questioning and Discussion Techniques
  3c: Engaging Students in Learning
  3d: Using Assessment in Instruction
  3e: Demonstrating Flexibility and Responsiveness

Domain 4: Professional Responsibilities
  4a: Reflecting on Teaching (Form F: Reflection Conference, Questions 1–6; or Form D: Teacher Lesson Reflection)
  4b: Maintaining Accurate Records (artifacts—for example, instructional and noninstructional records)
  4c: Communicating with Families (artifacts—for example, communication with families)
  4d: Participating in a Professional Community (Interview, Question 13; artifacts—for example, participation in a professional community)
  4e: Growing and Developing Professionally (Interview, Question 2; artifacts—for example, professional development)
  4f: Showing Professionalism

Common Themes
  Student Assumption of Responsibility (Interview, Question 11)
  Appropriate Use of Technology (Interview, Question 12)
  Attention to Individual Students, Including Those with Special Needs (Conference, Question 6)

Note: Forms are in Appendix A; artifacts are described in Appendix B.
Copyright © 2008 by Association for Supervision and Curriculum Development. All rights reserved.
No part of this publication—including the drawings, graphs, illustrations, or chapters, except for brief quotations in
critical reviews or articles—may be reproduced or transmitted in any form or by any means, electronic or mechanical,
including photocopy, recording, or any information storage and retrieval system, without permission from ASCD.