
Premium Member Book (May 2000)

Guiding School Improvement with Action Research

by Richard Sagor

Table of Contents

Chapter 9. Data Collection: Building a Valid and Reliable Data Collection Plan

Chapters 7 and 8 introduced a variety of viable data collection techniques. However, employing proven techniques doesn't guarantee the quality of the findings that emerge. The reality is that action research simply isn't worth doing unless it is done well. Although that may sound like an old refrain, it is far more than that. The imperative for maintaining high standards of quality is a truth learned and sometimes painfully relearned by teacher researchers. There are three fundamental reasons why you as a teacher researcher should hold yourself to the highest quality standards possible:

  1. Your obligation to students
  2. The need for personal and collective efficacy
  3. The need to add to the professional knowledge base

The first reason, your obligation to students, rests on the premise that the education of the community's young is a sacred trust placed upon you as an educator. Therefore, the decisions you make on behalf of students are actions of no small consequence. No one, least of all teachers, would wish to see students victimized by malpractice. When you make teaching decisions on the basis of sloppy research, you place your students at risk.

A second reason to hold your action research to the highest standards of quality is that understanding your influence on educational outcomes can significantly enhance your personal and collective feelings of efficacy. However, before you can take credit for the success reflected in your data, the quality of that data must withstand the scrutiny of the world's most critical jury—your own skeptical mind. Ultimately, if you doubt your own conclusions regarding the contribution you have made to your students, those findings won't have much impact on your feelings of self-worth.

The third factor, adding to the knowledge base, may not seem important if you are a teacher researcher practicing in a one-room school or you find yourself in a school culture that emphasizes individualism. However, it should be extremely relevant to the vast majority of teachers—those of you who tend to share what you've learned with your colleagues. Not infrequently, one of the unspoken reasons for conducting action research is to persuade or entice your skeptical colleagues to consider “your” perspective on an issue. When you present your research to peers who are skeptical about the theory you are following, you should expect a similar skepticism about the research findings you produce concerning those theories. If your pedagogical opponents can find fatal flaws in your action research data, all future efforts at persuasion become that much more difficult.

Quality Criteria

The criteria used to establish the quality of action research should be no different from those used with other forms of research. Topping any researcher's list of quality criteria are the twin pillars of science: validity and reliability, first introduced in Chapter 1. These concepts are so critical to the quality of action research that it is worth taking some time to discuss and explore each of them.


As you no doubt recall from Education Psychology 101, validity refers to the essential truthfulness of a piece of data. By asserting validity, the researcher is asserting that the data actually measure or reflect the specific phenomenon claimed. Scientific history is full of examples of research findings that were discredited because they were shown to lack validity.

A mercury thermometer is an example of a valid instrument yielding valid data. The height reached by the fluid in an accurate thermometer is a valid and appropriate measurement of air temperature. Similarly, the movement of a membrane in a barometer is an appropriate and valid way to determine barometric pressure. A ruler can be a valid way to measure length, and unfortunately (for those of us who are weight conscious) a bathroom scale can be a valid measure of weight.

Nothing has helped me understand the importance of attending to validity as much as my experience with performance assessment. One of the great accomplishments of the modern assessment movement has been drawing teacher attention to the value of authentic work products. Although bubble-sheet tests can, in many cases, produce valid data, teachers' preference for authentic work products is understandable. It is analogous to historians' preference for “primary source material” over “secondary source material.” Intuitively, we all know that words from the horse's mouth are more believable than words related by the horse's trainer. Similarly, a piece of actual student writing has more validity than a score obtained on the language section of a standardized multiple-choice exam. A performance by the school band is a better indicator of students' ability to execute a musical piece than are the students' grades in band.

However, even given the deserved popularity of performance and portfolio assessments, these types of data are not exempt from concerns regarding validity. For example, how should we react to the use of a written lab report as a means to assess student understanding of the scientific method? Should a lab report written in standard English be accepted as a valid indicator of a student's understanding of science?

Suppose you answered yes. Would you still accept that lab report as a valid indicator if you learned that the student lacked fluency in English? Probably not. This is because the English-language proficiency needed to complete the report introduced what scientists call an intervening and confounding variable. In the case of assessing the proficiency in science of a student with limited English proficiency, the written aspect of the report intervenes and thereby confounds the accuracy of the assessment. Intervening and confounding variables are factors that get in the way of valid assessment. This is why when conducting assessments on student learning and collecting data for action research, it is important to ask:

Are there any factors or intervening variables that should cause me to distrust these data?


Reliability is a different but no less important concept. Reliability relates to researchers' claims regarding the accuracy of their data. A few years ago, when a police officer issued me a ticket for speeding, I didn't question the validity of his using an expensive, city-issued speedometer. I was willing to concede to the officer the validity of measuring vehicular speed with a speedometer. However, I urged him to consider my thesis regarding the reliability of his speedometer. I respectfully suggested that although I knew he sincerely believed that his speedometer was accurate, he ought to consider the possibility that it could be damaged. I argued that if it were broken it wouldn't produce an accurate, credible, and reliable measure of my speed. What I was suggesting was that although speedometers are valid measures of speed, they aren't always reliable.

Unfortunately, I lost that argument. I fared no better when I presented the same “reasonable doubt” plea to the judge. Unbeknownst to me, the state police regularly establish the reliability (accuracy) of their speedometers by testing the speedometer on each patrol car every morning. In the end, I had to pay the fine. But in return I learned a memorable lesson on the value of establishing reliability.

Reliability problems in education often arise when researchers overstate the importance of data drawn from too small or too restricted a sample. For example, imagine if when I was a high school principal I claimed to the school board that I had evidence that the parents love our school's programs. When the board chair asked me how I could make such a claim, I responded by defensively asserting it was a conclusion based on “hard data”—specifically, a survey taken at the last winter band banquet. The board chair might respond that because that event was attended by only 5 percent of the school's parents and all the parents who attended had one thing in common—they had children in band—my conclusions were “unreliable.” He would be right. Claiming that such a small and select sample accurately represented the views of a total population (all the school's parents) stretches the credibility of my assertion well beyond reasonableness.

To enhance the reliability of your action research data, you need to continually ask yourself these questions when planning data collection:

  • Is this information an accurate representation of reality?
  • Can I think of any reasons to be suspicious of its accuracy?

Establishing Validity and Reliability

To appreciate the concepts of validity and reliability and how you might establish them, consider how you would behave as a juror deliberating in a criminal trial. Lawyers for both sides would argue their cases as persuasively as possible. Your task as a juror is to determine which of the arguments to believe. In deciding if a lawyer had “proved the case,” you would probably ask these questions regarding validity: Are these claims credible? Can I truly believe that this evidence means what these witnesses and lawyers say it does? To determine the reliability of the evidence, you would ask questions such as these about the accuracy of the witnesses' recollections and testimony: Can I trust the accuracy of their eyes and ears? Could time or emotions have played a trick on their memories?

So how do legal “researchers”—defense lawyers and prosecutors—convince a jury of the essential truth and accuracy (validity and reliability) of their cases? They do it through the twin processes of corroboration and impeachment. When they want the jury to believe what one of their witnesses said, they bring in other independent witnesses. If an additional witness corroborates everything the first witness said, it increases the confidence a juror will have in the initial testimony. The more independent pieces of evidence a lawyer can place before a jury, the more the jurors will trust the truthfulness and accuracy of the claims. Conversely, if lawyers want the jury to doubt the truth and accuracy (validity and reliability) of the other side, they try to impeach (challenge the credibility of) the testimony of the other side, by, for example, entering into evidence alternative or irreconcilable reports on the same phenomenon from several independent sources.


Action researchers use a similar process to that used by lawyers. It is called triangulation, and, as was discussed in Chapters 1 and 2, it involves the use of multiple independent sources of data to establish the truth and accuracy of a claim.

There are ways to develop valid and reliable instruments without triangulation, but these methods are often problematic. First, they are time-consuming and frequently prohibitive in terms of cost. This is because significant field-testing is required to establish the validity and reliability of a measuring instrument. Just consider the many millions of dollars invested by publishers to support the validity and reliability of their standardized tests. But even if teachers were willing to invest the time, money, and energy required to establish technical validity (construct and content) for their home-grown instruments, they probably wouldn't be happy with what they produced.

For good reason, educators are intuitively unimpressed with “single instrument measures.” They tend to question whether any single tool could ever capture the full reality of any meaningful educational outcome. Occasionally I will meet a layperson who believes that SAT scores alone (or another piece of seemingly compelling data, such as college admissions data or discipline referrals) provide an accurate picture of a school's quality. But I have never met a knowledgeable educator who is willing to make a judgment based upon any of those same valid and reliable instruments. This is because educators know that what these “valid and reliable” instruments reveal is simply too narrow to justify conclusions regarding educational quality.

This is not to say that these instruments (SAT scores, college admissions, discipline referrals, and so forth) aren't valuable windows into the larger phenomenon (the quality of a school), but before conclusions can be drawn about the big picture, those findings need to be corroborated by looking at the phenomenon through a variety of other windows.

Figure 9.1 illustrates what a plan for triangulated data collection might look like to answer a question on the quality of a high school.

Figure 9.1. A Plan for Triangulated Data Collection

Although we might be skeptical about drawing conclusions regarding a school's quality from any one of the success indicators in Figure 9.1, if all of these instruments painted a similar picture, we would, no doubt, feel confident in declaring the school “good.”

Finishing the Research Proposal

Chapter 6 presented guidelines for producing a written problem statement/research proposal (Implementation Strategy #6). The sample proposal written by Richard and Georgia, although short, contained all the items expected from a formal research proposal except the data collection plan. Chapter 2 described the triangulation matrix as a helpful planning tool (Figure 2.3, p. 21). Figure 9.2 shows the triangulated data collection plan, in the form of a matrix, that Richard and Georgia used to answer their research questions. Implementation Strategy #10 can help you complete a triangulation matrix.

Figure 9.2. Triangulation Matrix for Real World Advocacy Project Research Questions

Research Question 1: Could we motivate our 8th graders to conduct and complete Real World Advocacy Projects?
  • Data Source #1: Teacher journals
  • Data Source #2: Student surveys
  • Data Source #3: Grade book records

Research Question 2: What would be the quality of the projects produced by our students?
  • Data Source #1: Teacher assessments using a project rubric
  • Data Source #2: Student self-assessments using the same rubric
  • Data Source #3: Assessment by community members using the rubric

Research Question 3: Would the completion of Real World Advocacy Projects result in enhanced feelings of social efficacy for our students?
  • Data Source #1: Surveys of students' other teachers
  • Data Source #2: Interviews with random sample of students
  • Data Source #3: Interviews with random sample of parents

Implementation Strategy #10—Building a Triangulated Data Collection Plan

WHAT: Constructing a data collection plan with high probability of producing valid and reliable answers to your research questions


  1. Prepare a four-column data collection matrix with separate rows for each research question (see Figure 9.2).
  2. Write your research questions in column 1 of your matrix.
  3. For each research question, ask yourself the following: What is one source of data that could help answer this question? Write your answer in column 2 next to the research question.
  4. Ask the question two more times to determine a second and third source of data, and write your answers in columns 3 and 4, respectively.* 
  5. Repeat this process for each research question.
  6. Review the completed matrix and ask yourself the following question: Are these the best sources of data I/we could collect in answer to each of these questions? When you are satisfied with your answer to this question, you have a completed data collection plan.
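For readers who find it helpful to see the finished matrix as a structure rather than a chart, the steps above can be sketched in a few lines of Python. This is only an illustration (the function name and the example questions, drawn from Figure 9.2, are stand-ins): the plan maps each research question to its independent data sources, and the check mirrors Steps 3 and 4, which ask for at least three sources per question.

```python
# A triangulated data collection plan as a simple dictionary:
# each research question maps to its list of independent data sources.
# Questions and sources here are abbreviated from Figure 9.2.
plan = {
    "Could we motivate our 8th graders to complete the projects?": [
        "Teacher journals",
        "Student surveys",
        "Grade book records",
    ],
    "What would be the quality of the projects our students produce?": [
        "Teacher assessments using a project rubric",
        "Student self-assessments using the same rubric",
        "Assessments by community members using the rubric",
    ],
}

def is_triangulated(plan, minimum=3):
    """Return True if every research question has at least `minimum`
    independent data sources, as Steps 3 and 4 of the strategy require."""
    return all(len(sources) >= minimum for sources in plan.values())

print(is_triangulated(plan))  # True: both questions have three sources
```

A plan that fails this check (for example, a question with a single survey as its only source) is exactly the "single instrument measure" the chapter cautions against.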

Last Important Step

Once you have developed a triangulated data collection plan, you have accomplished much of the hard work of action research. Most doctoral students report that the hardest aspect of completing a doctorate is getting a comprehensive research proposal through their dissertation committee. Once the rationale for their research has been established and a methodology (the data collection plan) for answering their research questions has been put in place, all that is left is to carry out the proposal. If you, alone or with colleagues, have followed the steps outlined in this book thus far, you are ready to proceed. Now all you have to do is carry out your plan.

Unfortunately, many beginning action researchers stall at this point, usually because completing the next stage, data collection, requires budgeting time from an already packed schedule. To get over this hurdle, it is helpful to commit to a time line and a process for completing the work of data collection. The rationale for formalizing this commitment is to keep the demands of a hectic work life from getting in the way of completing what should prove to be a most satisfying piece of work. Implementation Strategy #11 takes only a few minutes to complete, but doing so will help ensure that you get over the time hurdle and maintain your momentum for completing your research.

Implementation Strategy #11—Data Collection Time Line/To-Do List

WHAT: Making a commitment to a plan for completing the data collection portion of your action research


  1. Make a four-column list on a sheet of chart paper.
  2. Brainstorm (either individually or, if your research is a team effort, with your colleagues) a list of each thing that needs to be accomplished in order to complete your triangulated data collection plan. List these items (roughly in chronological order) in the left-hand column on the chart paper.
  3. In the second column, write the date that each should be accomplished. Then ask yourself if it is realistic to complete this item by that date. If the answer is yes, go to the next item. If the answer is no, determine the earliest “realistic” date.
  4. If working individually, go on to the next step. If working as a team, go through each item on the list and determine who is willing to be responsible for seeing that the item is accomplished by the agreed-upon date. Write that person's name in column 3.
  5. Ask yourself (or ask the team) the following question: What types of support or help might I/we need to complete each of these items? Perhaps you will need some support from your principal or some help from a professor at a local university. Write the name of the person or organization whose help you anticipate needing in the last column and commit to a time for making contact with these “critical friends.”
  6. One last time, ask yourself or your team if this plan is realistic. If you answer yes, you are ready to proceed. If you answer no, repeat this strategy.
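The four-column list above can likewise be sketched as data, which some teams may find handy for sharing or revising the plan. The tasks, dates, names, and supports below are hypothetical, invented purely for illustration; the check corresponds to Step 2's request that items appear in rough chronological order.

```python
from datetime import date

# Each row of the four-column to-do list from Strategy #11:
# (task, target date, responsible person, anticipated support).
# All entries are hypothetical examples.
todo = [
    ("Draft student survey", date(2000, 10, 2), "Georgia", "None"),
    ("Administer survey", date(2000, 10, 16), "Richard", "Principal (class time)"),
    ("Interview sample of parents", date(2000, 11, 6), "Georgia", "Office (contact list)"),
]

def in_chronological_order(rows):
    """Check Step 2: target dates should run in rough chronological order."""
    dates = [target for _, target, _, _ in rows]
    return all(earlier <= later for earlier, later in zip(dates, dates[1:]))

print(in_chronological_order(todo))  # True: the dates run earliest to latest
```

Whether kept on chart paper or in a file, the point is the same: a dated, named commitment makes it harder for a hectic work life to crowd out data collection.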

Chapters 10 and 11 explore the three remaining steps in the action research process: data analysis, reporting, and action planning. Chapter 12 discusses a number of important ethical and methodological issues that will be particularly helpful for beginning researchers. If you intend to conduct your data collection before reading the rest of this book, I strongly recommend that you read Chapter 12 first.


*  Although this strategy suggests collecting three types of data to answer a research question, it is perfectly permissible to collect more than three types.

