Admittedly, researchers attempting to “prove” the effects of outcome-based education face various problems. First, OBE is an umbrella concept under which various reform efforts can be placed, and people who ask, “What exactly is outcome-based education?” may receive several answers. Block, Efthim, and Burns (1989) include OBE in their conceptual overview of mastery learning, but mastery learning is not the only way to implement OBE. The fact that people who practice open education also claim to engage in outcome-based education suggests the breadth of the concept.
Creating another complication, Spady and Marshall (1991) distinguish among traditional, transitional, and transformational approaches to schoolwide or districtwide OBE, noting that the first can operate within an existing school system, while the last requires the creation of a whole new system. Even more basic is the distinction between OBE and the setting of high school graduation outcomes. Just because a state requires its high school graduates to achieve specific outcomes does not mean that schools necessarily engage in outcome-based education to prepare students for graduation assessments. And finally, a movement that purports to develop “complex thinkers,” “responsible citizens,” and “community contributors” faces complex measurement challenges, both conceptual and practical (Minneapolis Public Schools 1992).
Despite these limitations, a small but growing body of OBE research does exist. Relevant research includes information on classroom-based mastery learning; over 20 years of evidence from the Outcomes-Driven Developmental Model in Johnson City, New York; and studies from state-level OBE projects in Utah, Missouri, and Minnesota.
The Effects of Mastery Learning
Although OBE does not require mastery learning as an exclusive instructional model, many people consider mastery learning to be an integral part of OBE beliefs and practice (Burns 1987, Schleisman and King 1990, Spady 1982). In a comprehensive meta-analysis, C. Kulik, J. Kulik, and Bangert-Drowns (1990) integrate the various findings of the preceding decade (Guskey and Gates 1985, J. Kulik et al. 1979, Slavin 1987) and address inconsistencies in reports of the effects of mastery learning. They examine 108 studies of Bloom's Learning for Mastery and Keller's Personalized System of Instruction.1
Both approaches present material in short units, and students take formative tests on each unit.
The meta-analysis indicates that the average student in a mastery learning class performed at the 70th percentile, whereas the average student in a control class performed at the 50th percentile. The authors conclude that mastery learning does have positive effects on student achievement. Results were better in social science classes, on locally developed tests, in teacher-paced classes, when the required level of performance was high, and when the control group received less feedback. They also note that lower-aptitude students enjoyed a greater gain than higher-aptitude students.2
The most consistently negative effect was on course completion. Students in mastery learning classes completed fewer courses than students in control classes.
Although the authors argue that the positive effects are not as great as Bloom predicts, they do state that the overall effect of mastery learning is impressive when compared with other educational treatments: “Few educational treatments of any sort were consistently associated with achievement effects as large as those produced by mastery teaching.”
The Outcomes-Driven Developmental Model
One current working model of an outcome-based educational program at the district level is the Outcomes-Driven Developmental Model (ODDM) begun in 1972 in the Johnson City, New York, School District under Superintendent Albert Mamary.
The developers of ODDM make clear that it is an “empowering, participatory, and noncoercive” change process (Alessi et al. 1991). Johnson City's ODDM, using an instructional model similar to Learning for Mastery, has been so successful it is the only total school curriculum model validated by the National Diffusion Network (Vickery 1990). Adding to its credibility is the fact that Johnson City is a lower-middle-class community with few professional citizens and the second highest poverty rate of 10 urban districts in its county. Over 20 percent of its school population qualifies for free or reduced-price lunch, and it has a sizable Asian immigrant population with limited English proficiency. (See “On Creating an Environment Where All Students Learn: A Conversation with Al Mamary,” p. 24).
When Johnson City began its program in 1972, it ranked 14th out of 14 districts in its county on academic achievement as measured on standardized tests. Approximately 45 to 50 percent of its students scored at or above grade level in reading and math in grades 1 through 8. By 1977, the percentages rose to about 70 percent, and by 1984 ranged between 80 and 90 percent.
To have a consistent measure for tracking student progress, in 1984 the district chose to identify the number of students whose scores indicated achievement six months or more above grade level on the California Achievement Tests (CATs) in reading and math. They found that in 1976, 44 percent of all students had performed at six months or more above grade level in reading, and 53 percent had done so in math. By 1984 these figures had increased to 75 percent in reading and 79 percent in math.
Other indicators of success in Johnson City include performance on the New York State Regents exams and attainment of the Regents diploma. In 1989, for example, Johnson City students surpassed the state performance on every exam and either equaled or surpassed the county performance (with 70 percent of Johnson City students, 58 percent of county students, and 40 percent of students statewide participating in the exams). In 1986, 77 percent of Johnson City students received a Regents diploma, compared with 43 percent statewide and 59 percent countywide.
In 1988, the New York Board of Regents instituted more rigorous requirements for a Regents diploma. In 1989, Johnson City still outperformed the state and county, with 55 percent of its students receiving the diploma, compared with 33 percent statewide and 47 percent countywide. This placed Johnson City in the top 10 percent of schools statewide in percentage of students receiving Regents diplomas. Perhaps the most convincing evidence of Johnson City's success, however, is that in 1991–92, 100 percent of its 9th graders enrolled in algebra.
Lessons from States
At the state level, documentation of the effects of OBE is difficult to find, and what is available is largely perceptual. Nevertheless, data collected in Utah, Missouri, and Minnesota provide useful insights.
State evaluators in Utah conducted more than 300 interviews with board members, administrators, teachers, support staff, and students regarding progress toward implementation. They administered three questionnaires (at the district, school, and individual staff levels) about attitudes, opinions, beliefs, and perceived effects, and also asked for student achievement data. Thirty-four districts, 437 schools, and more than 7,400 teachers returned questionnaires, and 11 districts submitted student achievement data. The evaluators reached the following conclusions:
- Implementation of OBE generally requires a restructuring of the entire educational system and consequently takes a significant period of time.
- More OBE implementation takes place in districts that have adopted ODDM as a model than in other districts.
- More OBE implementation takes place in elementary schools than in secondary schools, and in smaller districts than in larger districts.
- Although the evidence is limited, districts with more complete implementation of OBE also appear to demonstrate higher student achievement gains.
- Districts using ODDM seem to be experiencing the most successful implementations (Applegate 1992).
Another state effort noted by OBE proponents is Missouri's Statewide Project for Improving Student Achievement (Cohen and Hyman 1991, Guskey and Block 1991). This project, called the Instructional Management System, involves the following components: (1) a statewide curriculum; (2) three state-endorsed instructional programs (mastery learning, outcome-based education, and cooperative learning); and (3) a criterion-referenced test, the Missouri Mastery Achievement Test (MMAT), designed to measure the curriculum's outcomes precisely.
Since 1986–87, scores on the mastery test have risen significantly statewide each year in nearly every subject area. At the same time, scores have increased on norm-referenced tests, including the Iowa Tests of Basic Skills for grades 2 through 8 and the Test of Achievement and Proficiency for grades 9 and 10.
One example of the Missouri project's success is an “Academic Achievement Demonstration Site,” the Thorpe Gordon Elementary School in Jefferson City. In 1987, approximately 40 to 60 percent of the students in this inner-city school ranked in the bottom two quintiles in language arts, mathematics, and science on the MMAT. By 1989, 10 percent or less were in the bottom two quintiles, with few students placing in the lowest one. In addition, 70 to 90 percent now rank in the top two quintiles, with 50 to 75 percent in the highest (Guskey and Block 1991).
In Minnesota, the Department of Education's Office of Educational Leadership worked in 10 project sites across the state from 1989 through 1991 to determine the effectiveness of an outcome-based system of education in improving student learning (Minnesota Department of Education 1990, Center for Applied Research and Educational Improvement 1991, Bosma and King 1992). Research activities, including interviews with students, teachers, administrators, and parents, sought to document the perceived effects of the changes made, that is, to provide initial evidence about what was happening to students as a result of the transformational OBE approaches being implemented (King et al. 1992).
The results across 37 schools involved during 1990–1991 included three perceived effects on student learning. Forty-nine percent of the respondents reported more and better learning. (“I've gotten a lot more out of class than the last few years.” “There's been a tremendous increase in student learning.” “We have set higher expectations, and students are achieving more.”) Forty-three percent reported increased student involvement in learning. (“Kids really take a stake in learning and are more responsible.” “I'm pushing myself more.”) Thirty-five percent reported different effects for different student types. Many parents expressed a sense that OBE “works for the average and unmotivated learner,” both because these students are allowed sufficient time and opportunities to succeed and because some become part of regular instruction for the first time. But many respondents reported negative perceptions for students who have succeeded in the traditional system. (“Admittedly we have picked up some we would've lost, but we are losing some at the top.” “We feel the higher students won't be challenged enough.”)
What, then, can we conclude about OBE as a restructuring effort to date? Acknowledging the paucity of hard data, we find at least three themes. First, the data from research on mastery learning; Johnson City, New York; Missouri; and Utah suggest that mastery learning and its ODDM implementations are effective at the classroom and building levels. Second, experiences in Johnson City and Utah indicate that the Outcomes-Driven Developmental Model can work and is readily adapted into traditional systems. Third, the mastery learning and Minnesota data document that OBE appears to benefit low-achieving students while having questionable effects on high-achieving students (Evans and King, in press). These three points speak cogently to the emerging possibilities of OBE within the traditional system. However, whether transformational OBE can effect similar changes remains to be seen.
What the data, or the lack of them, suggest is a compelling need for more research. But not just any research will do. First, we must be clear about what we mean by OBE. Second, we must determine what it is we want to do well in schools and how that can best be documented. For example, are we committed to “authentic” learning, with measures that tap such achievement (Newmann and Archbald 1992), or will we settle for improvements on standardized tests?
In our work with OBE and its increasing numbers of dedicated educators, we have become convinced that traditional studies are simply not rich enough to portray the changes that an OBE system may inspire. And so, we challenge researchers to devise innovative evaluation methodologies that truly capture the excitement of real and lasting change in schools.
Alessi, F. V., L. Rowe, and A. Mamary. (1991). “The Outcomes-Driven Developmental Model: A Program for Comprehensive School Improvement.” Johnson City, N.Y.: Johnson City Central School District.
Applegate, T. (1992). “Evaluation of OBE in Utah: Executive Summary.” Salt Lake City, Utah.
Block, J. H., H. E. Efthim, and R. B. Burns. (1989). Building Effective Mastery Learning Schools. New York: Longman.
Bosma, J., and J. A. King. (1992). Office of Educational Leadership Phase II Evaluation Report. Minneapolis: Center for Applied Research and Educational Improvement, College of Education, University of Minnesota.
Burns, R. (1987). Models of Instructional Organization: A Casebook on Mastery Learning and Outcome-Based Education. San Francisco: Far West Laboratory for Educational Research and Development.
Center for Applied Research and Educational Improvement. (January 1991). Office of Educational Leadership, Vol.1: Phase 1 Evaluation Report. Minneapolis: College of Education, University of Minnesota.
Cohen, S. A., and J. S. Hyman. (1991). “Can Fantasies Become Facts?” Educational Measurement: Issues and Practice 10, 1: 20–33.
Evans, K. M., and J. A. King. (1992). “The Outcomes of Outcome-Based Education: Research and Implications.” Paper presented at the Annual Meeting of the American Educational Research Association, San Francisco, Calif.
Evans, K. M., and J. A. King. (In press). “Outcome-Based and Gifted Education: Can We Assume Continued Support?” Roeper Review.
Guskey, T. R., and J. H. Block. (1991). “The Missouri Miracle: A Success Story about Statewide Collaboration to Improve Student Learning.” Outcomes 10, 2: 28–43.
Guskey, T. R., and S. L. Gates. (1985). “A Synthesis of Research of Group-Based Mastery Learning Programs.” Paper presented at the Annual Meeting of the American Educational Research Association, Chicago (ERIC Document Reproduction Service No. ED 262 088).
King, J. A., J. Bosma, and J. Binko. (1992). “After Two Years: A Study of Educational Transformation in Ten Minnesota Sites.” Paper presented at the Annual Meeting of the American Educational Research Association, San Francisco.
Kulik, C. C., J. A. Kulik, and R. L. Bangert-Drowns. (1990). “Effectiveness of Mastery Learning Programs: A Meta-Analysis.” Review of Educational Research 60, 2: 265–299.
Kulik, J. A., C. C. Kulik, and P. A. Cohen. (1979). “A Meta-Analysis of Outcome Studies of Keller's Personalized System of Instruction.” American Psychologist 34: 307–318.
Minneapolis Public Schools. (October 1992). “Exit and Supportive Outcomes.” Minneapolis, Minn.
Minnesota Department of Education. (1990). Transforming Education: The Minnesota Plan. St. Paul: Minnesota Department of Education, Office of Educational Leadership.
Newmann, F. M., and D. A. Archbald. (1992). “The Nature of Authentic Academic Achievement.” In Toward a New Science of Educational Testing and Assessment, edited by H. Berlak, F. M. Newmann, E. Adams, D. A. Archbald, T. Burgess, J. Raven, and R. A. Romberg. Albany: State University of New York Press.
Schleisman, K. E., and J. A. King. (1990). “Making Sense of Outcome-Based Education: What Is It and Where Did It Come From?” (Research Report No. 7). Minneapolis: Center for Applied Research and Educational Improvement, University of Minnesota.
Slavin, R. E. (1987). “Mastery Learning Reconsidered.” Review of Educational Research 57, 2: 175–213.
Spady, W. G. (1982). “Outcome-Based Instructional Management: A Sociological Perspective.” The Australian Journal of Education 26, 2: 123–143.
Spady, W. G., and K. J. Marshall. (1991). “Transformational Outcome-Based Educational Curriculum Restructuring.” Educational Researcher 6, 2: 9–15.
Varnon, C. J., and R. L. King. (1993). “A Tidal Wave of Change—OBE in the USA.” Outcomes 12, 1: 16–19.
Vickery, T. R. (1990). “ODDM: A Workable Model for Total School Improvement.” Educational Leadership 47, 7: 67–70.
1. The authors of the meta-analysis used effect size to compare results across the various studies examined. Effect size is defined as the difference between the mean scores of the experimental and control groups divided by the standard deviation of the control group. The overall finding for the meta-analysis was an effect size of 0.52 standard deviations, with individual effect sizes ranging from 0.22 to 1.58.
2. The effect sizes were 0.6 for lower-aptitude students and 0.4 for higher-aptitude students.
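The effect-size arithmetic described in the notes above can be sketched in a few lines of Python. This is purely illustrative: the score lists are hypothetical, and the percentile conversion assumes normally distributed scores, which is how an effect size of 0.52 translates into the "70th percentile" figure reported in the meta-analysis.

```python
from statistics import NormalDist, mean, stdev

def effect_size(experimental, control):
    """Effect size as defined in note 1: the difference between the
    experimental and control group means, divided by the standard
    deviation of the control group."""
    return (mean(experimental) - mean(control)) / stdev(control)

# Hypothetical unit-test scores for a mastery class and a control class.
mastery_scores = [78, 82, 75, 88, 80]
control_scores = [70, 74, 68, 76, 72]
es = effect_size(mastery_scores, control_scores)

# Converting an effect size to a percentile of the control distribution
# (assuming normality): a student 0.52 SD above the control mean falls
# at roughly the 70th percentile.
percentile = NormalDist().cdf(0.52) * 100
print(round(percentile))  # 70
```

The same conversion explains why modest-sounding standard-deviation gains can represent substantial shifts in a student's relative standing.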
Karen M. Evans is a doctoral candidate in educational policy at the University of Minnesota, and Jean A. King is an Associate Professor of Educational Policy and Administration, University of Minnesota, 105 Burton Hall, 178 Pillsbury Drive, S.E., Minneapolis, MN 55455-7496.