by Richard Sagor
Chapters 7 and 8 introduced a variety of viable data collection techniques. However, employing proven techniques doesn't guarantee the quality of the findings that emerge. The reality is, action research simply isn't worth doing unless it is done well. Although that may sound like a tired refrain, it is far more than that. The imperative for maintaining high standards of quality is a truth learned, and sometimes painfully relearned, by teacher researchers. There are three fundamental reasons why you as a teacher researcher should hold yourself to the highest quality standards possible:
The first reason, your obligation to students, rests on the premise that the education of the community's young is a sacred trust placed upon you as an educator. Therefore, the decisions you make on behalf of students are actions of no small consequence. No one, least of all teachers, would wish to see students victimized by malpractice. When you make teaching decisions on the basis of sloppy research, you place your students at risk.
A second reason to hold your action research to the highest standards of quality is that understanding your influence on educational outcomes can significantly enhance your personal and collective feelings of efficacy. However, before you can take credit for the success reflected in your data, the quality of that data must withstand the scrutiny of the world's most critical jury—your own skeptical mind. Ultimately, if you doubt your own conclusions regarding the contribution you have made to your students, those findings won't have much impact on your feelings of self-worth.
The third factor, adding to the knowledge base, may not seem important if you are a teacher researcher practicing in a one-room school or you find yourself in a school culture that emphasizes individualism. However, it should be extremely relevant to the vast majority of teachers—those of you who tend to share what you've learned with your colleagues. Not infrequently, one of the unspoken reasons for conducting action research is to persuade or entice your skeptical colleagues to consider “your” perspective on an issue. When you present your research to peers who are skeptical about the theory you are following, you should expect a similar skepticism about the research findings you produce concerning those theories. If your pedagogical opponents can find fatal flaws in your action research data, all future efforts at persuasion become that much more difficult.
The criteria used to establish the quality of action research should be no different from those used with other forms of research. Topping any researcher's list of quality criteria are the twin pillars of science: validity and reliability, first introduced in Chapter 1. These concepts are so critical to the quality of action research that it is worth taking some time to discuss and explore each of them.
As you no doubt recall from Education Psychology 101, validity refers to the essential truthfulness of a piece of data. By asserting validity, the researcher is asserting that the data actually measure or reflect the specific phenomenon claimed. Scientific history is full of examples of research findings that were discredited because they were shown to lack validity.
A mercury thermometer is an example of a valid instrument yielding valid data. The height reached by the fluid in an accurate thermometer is a valid and appropriate measurement of air temperature. Similarly, the movement of a membrane in a barometer is an appropriate and valid way to determine barometric pressure. A ruler can be a valid way to measure length, and unfortunately (for those of us who are weight conscious) a bathroom scale can be a valid measure of weight.
Nothing has helped me understand the importance of attending to validity as much as my experience with performance assessment. One of the great accomplishments of the modern assessment movement has been drawing teacher attention to the value of authentic work products. Although bubble-sheet tests can, in many cases, produce valid data, teachers' preference for authentic work products is understandable. It is analogous to historians' preference for “primary source material” over “secondary source material.” Intuitively, we all know that words from the horse's mouth are more believable than words related by the horse's trainer. Similarly, a piece of actual student writing has more validity than a score obtained on the language section of a standardized multiple-choice exam. A performance by the school band is a better indicator of students' ability to execute a musical piece than are the students' grades in band.
However, even given the deserved popularity of performance and portfolio assessments, these types of data are not exempt from concerns regarding validity. For example, how should we react to the use of a written lab report as a means to assess student understanding of the scientific method? Should a lab report written in standard English be accepted as a valid indicator of a student's understanding of science?
Suppose you answered yes. Would you still accept that lab report as a valid indicator if you learned that the student lacked fluency in English? Probably not. This is because the English-language proficiency needed to complete the report introduced what scientists call an intervening and confounding variable. In the case of assessing the proficiency in science of a student with limited English proficiency, the written aspect of the report intervenes and thereby confounds the accuracy of the assessment. Intervening and confounding variables are factors that get in the way of valid assessment. This is why when conducting assessments on student learning and collecting data for action research, it is important to ask:
Are there any factors or intervening variables that should cause me to distrust these data?
Reliability is a different but no less important concept. Reliability relates to researchers' claims regarding the accuracy of their data. A few years ago, when a police officer issued me a ticket for speeding, I didn't question the validity of his using an expensive, city-issued speedometer. I was willing to concede to the officer the validity of measuring vehicular speed with a speedometer. However, I urged him to consider my thesis regarding the reliability of his speedometer. I respectfully suggested that although I knew he sincerely believed that his speedometer was accurate, he ought to consider the possibility that it could be damaged. I argued that if it were broken it wouldn't produce an accurate, credible, and reliable measure of my speed. What I was suggesting was that although speedometers are valid measures of speed, they aren't always reliable.
Unfortunately, I lost that argument. I fared no better when I presented the same “reasonable doubt” plea to the judge. Unbeknownst to me, the state police regularly establish the reliability (accuracy) of their speedometers by testing the speedometer on each patrol car every morning. In the end, I had to pay the fine. But in return I learned a memorable lesson on the value of establishing reliability.
Reliability problems in education often arise when researchers overstate the importance of data drawn from too small or too restricted a sample. For example, imagine if when I was a high school principal I claimed to the school board that I had evidence that the parents love our school's programs. When the board chair asked me how I could make such a claim, I responded by defensively asserting it was a conclusion based on “hard data”—specifically, a survey taken at the last winter band banquet. The board chair might respond that because that event was attended by only 5 percent of the school's parents and all the parents who attended had one thing in common—they had children in band—my conclusions were “unreliable.” He would be right. Claiming that such a small and select sample accurately represented the views of a total population (all the school's parents) stretches the credibility of my assertion well beyond reasonableness.
To enhance the reliability of your action research data, you need to continually ask yourself questions such as these when planning data collection: Is my sample large enough to support the claims I want to make? Is it representative of the full population I am drawing conclusions about?
To appreciate the concepts of validity and reliability and how you might establish them, consider how you would behave as a juror deliberating in a criminal trial. Lawyers for both sides would argue their cases as persuasively as possible. Your task as a juror is to determine which of the arguments to believe. In deciding if a lawyer had “proved the case,” you would probably ask these questions regarding validity: Are these claims credible? Can I truly believe that this evidence means what these witnesses and lawyers say it does? To determine the reliability of the evidence, you would ask questions such as these about the accuracy of the witnesses' recollections and testimony: Can I trust the accuracy of their eyes and ears? Could time or emotions have played a trick on their memories?
So how do legal “researchers”—defense lawyers and prosecutors—convince a jury of the essential truth and accuracy (validity and reliability) of their cases? They do it through the twin processes of corroboration and impeachment. When they want the jury to believe what one of their witnesses said, they bring in other independent witnesses. If an additional witness corroborates everything the first witness said, it increases the confidence a juror will have in the initial testimony. The more independent pieces of evidence a lawyer can place before a jury, the more the jurors will trust the truthfulness and accuracy of the claims. Conversely, if lawyers want the jury to doubt the truth and accuracy (validity and reliability) of the other side, they try to impeach (challenge the credibility of) the testimony of the other side, by, for example, entering into evidence alternative or irreconcilable reports on the same phenomenon from several independent sources.
Action researchers use a similar process to that used by lawyers. It is called triangulation, and, as was discussed in Chapters 1 and 2, it involves the use of multiple independent sources of data to establish the truth and accuracy of a claim.
There are ways to develop valid and reliable instruments without triangulation, but these methods are often problematic. First, they are time-consuming and frequently prohibitive in terms of cost. This is because significant field-testing is required to establish the validity and reliability of a measuring instrument. Just consider the many millions of dollars invested by publishers to support the validity and reliability of their standardized tests. But even if teachers were willing to invest the time, money, and energy required to establish technical validity (construct and content) for their home-grown instruments, they probably wouldn't be happy with what they produced.
For good reason, educators are intuitively unimpressed with “single instrument measures.” They tend to question whether any single tool could ever capture the full reality of any meaningful educational outcome. Occasionally I will meet a layperson who believes that SAT scores alone (or another piece of seemingly compelling data, such as college admissions data or discipline referrals) provide an accurate picture of a school's quality. But I have never met a knowledgeable educator who is willing to make a judgment based upon any of those same valid and reliable instruments. This is because educators know that what these “valid and reliable” instruments reveal is simply too narrow to justify conclusions regarding educational quality.
This is not to say that these instruments (SAT scores, college admissions, discipline referrals, and so forth) aren't valuable windows into the larger phenomenon (the quality of a school), but before conclusions can be drawn about the big picture, those findings need to be corroborated by looking at the phenomenon through a variety of other windows.
Figure 9.1 illustrates what a plan for triangulated data collection might look like to answer a question on the quality of a high school.
Although we might be skeptical about drawing conclusions regarding a school's quality from any one of the success indicators in Figure 9.1, if all of these instruments painted a similar picture, we would, no doubt, feel confident in declaring the school “good.”
Chapter 6 presented guidelines for producing a written problem statement/research proposal (Implementation Strategy #6). The sample proposal written by Richard and Georgia, although short, contained all the items expected from a formal research proposal except the data collection plan. Chapter 2 described the triangulation matrix as a helpful planning tool (Figure 2.3, p. 21). Figure 9.2 shows the triangulated data collection plan, in the form of a matrix, that Richard and Georgia used to answer their research questions. Implementation Strategy #10 can help you complete a triangulation matrix.
Figure 9.2. Triangulated Data Collection Plan

Research Question 1: Could we motivate our 8th graders to conduct and complete Real World Advocacy Projects?
- Data Source #1: Grade book records

Research Question 2: What would be the quality of the projects produced by our students?
- Data Source #1: Teacher assessments using a project rubric
- Data Source #2: Student self-assessments using the same rubric
- Data Source #3: Assessment by community members using the rubric

Research Question 3: Would the completion of Real World Advocacy Projects result in enhanced feelings of social efficacy for our students?
- Data Source #1: Surveys of students' other teachers
- Data Source #2: Interviews with random sample of students
- Data Source #3: Interviews with random sample of parents
WHAT: Constructing a data collection plan with high probability of producing valid and reliable answers to your research questions
Once you have developed a triangulated data collection plan, you have accomplished much of the hard work of action research. Most doctoral students report that the hardest aspect of completing a doctorate is getting a comprehensive research proposal through their dissertation committee. Once the rationale for their research has been established and a methodology (the data collection plan) for answering their research questions has been put in place, all that is left is to carry out the proposal. If you, alone or with colleagues, have followed the steps outlined in this book thus far, you are ready to proceed. Now all you have to do is carry out your plan.
Unfortunately, many beginning action researchers stall at this point, usually because completing the next stage, data collection, requires budgeting time from an already packed schedule. To get over this hurdle, it is helpful to commit to a time line and a process for completing the work of data collection. The rationale for formalizing this commitment is to keep the demands of a hectic work life from getting in the way of completing what should prove to be a most satisfying piece of work. Implementation Strategy #11 takes only a few minutes to complete, but doing so will help ensure that you get over the time hurdle and maintain your momentum for completing your research.
WHAT: Making a commitment to a plan for completing the data collection portion of your action research
Chapters 10 and 11 explore the three remaining steps in the action research process: data analysis, reporting, and action planning. Chapter 12 discusses a number of important ethical and methodological issues that will be particularly helpful for beginning researchers. If you intend to conduct your data collection before reading the rest of this book, I strongly recommend that you read Chapter 12 first.
Although this strategy suggests collecting three types of data to answer a research question, it is perfectly permissible to collect more than three types.
Copyright © 2000 by Association for Supervision and Curriculum Development. All rights reserved.
No part of this publication—including the drawings, graphs, illustrations, or chapters, except for brief quotations in
critical reviews or articles—may be reproduced or transmitted in any form or by any means, electronic or mechanical,
including photocopy, recording, or any information storage and retrieval system, without permission from ASCD.