by David H. Rose, Anne Meyer, Nicole Strangman and Gabrielle Rappolt
In this chapter, you will learn how the qualities of various instructional media (text, speech, images, and digital media) affect their accessibility to students and see why flexibility in media is the key to providing instruction that reaches more students, more effectively.
Traditional media for teaching—speech, text, and images—are so ingrained in our methods and curriculum that we rarely pause to consider their use. Instead of thinking carefully about which medium to use in a given situation, we usually select what we have chosen in the past or what is convenient now.
What few of us recognize is that these media have very different things to offer. The inherent communicative strengths and weaknesses of speech, text, and images determine their suitability for different instructional purposes. As teachers, when selecting a medium for teaching, we should consider its appropriateness for the particular content or activity. But the selection process does not stop there. We also need to weigh the characteristics of our students. Each individual's facility with a medium is a function of the proclivities, strengths, and weaknesses of their learning networks and the particular demands each medium makes on these networks.
This analysis is not usually a part of how we understand and appraise our students' capacities, how we teach, and how we evaluate learners' progress. Unwittingly, we have allowed traditional media to shape these practices. Instead of considering students individually, we operate on a one-size-fits-all mindset. When we set goals, we often tie them to particular media without considering alternatives. When we evaluate children's abilities, it is often on the basis of their performance within a single medium. We categorize as disabled those students for whom a printed textbook, a lecture, a chart, or a videotape is difficult or impossible to use. We then prescribe for them special goals, teaching methods, and materials—often with a remedial focus. Students are assessed according to standards and standardized tests with little regard for how the chosen medium affects their learning or their ability to demonstrate that learning.
This situation has developed in part because traditional instructional media and materials are inflexible and not amenable to individualization. New electronic media offer the opportunity—and we believe, the obligation—to re-examine old assumptions about teaching media and tools and reconsider their impact on learners.
The first half of this chapter focuses on traditional fixed media: speech, printed text, and still or video images (as opposed to their new digital counterparts). We outline the nature of each as a means of communication, discuss advantages and drawbacks, show how each places certain demands on learners' brain networks, and explain how each interacts with individual differences. In the second half, we focus on digital media, highlighting their inherent wealth of flexibility and illustrating how this flexibility provides teachers a new and better approach to understanding and addressing learner differences.
Go back to Mississippi, go back to Alabama, go back to Georgia, go back to Louisiana, go back to the slums and ghettos of our northern cities, knowing that somehow this situation can and will be changed. Let us not wallow in the valley of despair. I say to you today, my friends, that in spite of the difficulties and frustrations of the moment, I still have a dream. It is a dream deeply rooted in the American dream. I have a dream that one day this nation will rise up and live out the true meaning of its creed: “We hold these truths to be self-evident: that all men are created equal.” I have a dream that one day on the red hills of Georgia the sons of former slaves and the sons of former slave owners will be able to sit down together at a table of brotherhood.
Hear, see, and read Dr. King's “I Have a Dream” in audio, video, text, and still pictures at http://www.cast.org/TeachingEveryStudent/Mlk
Most of us easily recognize this passage as text from Dr. Martin Luther King's “I Have a Dream” speech (1963, p. 219)—if not immediately, then surely when we read the familiar words, “I have a dream.” We would recognize the passage even more quickly if we could hear Dr. King's voice or see the scene on videotape or on the World Wide Web.
Although one might argue that the literal content of “I Have a Dream” remains the same whether it is presented as text, speech, or image, it is clear that each medium produces a qualitatively different effect. The textual version of Dr. King's speech contains the words themselves, powerful and evocative. The audio offers vocal cues—intonation, pauses, volume, and pitch—tools this eloquent orator used very skillfully to convey the meaning of his message. The video version adds to the vocal cues a variety of visual ones. There are Dr. King's gestures and facial expressions and the environmental context—the large, responsive audience and the majestic setting on The Mall in Washington, D.C. (see Figure 3.1).
[Figure 3.1 not available for electronic dissemination. © AP/Wide World Photos. Used with permission.]
As this example suggests, the particular qualities of speech, text, and images differentiate their respective communicative power and influence their suitability for various expressive purposes. Each medium's characteristics drive how the brain understands it and how effective it can be in instruction. Understanding the differences among the traditional classroom media can help teachers select the most appropriate means to present different content and tasks. Explorations into speech, text, and images can provide a better understanding of students' learning strengths and weaknesses. They reveal that barriers to learning do not necessarily lie within a student's physiological or psychological makeup, but rather occur at the intersection of the task, the student's strengths and weaknesses, and the instructional media and tools used to present that task.
Many of the distinctive qualities of speech relate to the physics of sound. Sound is a forceful, physical medium. Any disturbance that vibrates air—an explosion, a plucked guitar string, or the movement of a cricket's legs—creates a traveling wave of energy. The brain's recognition networks contain specialized modules for processing sound, located within the auditory cortex. When a wave of energy reaches our ears and the air pressure pounds on our eardrums, the energy is transformed into patterns of nerve impulses that the processing modules within our auditory cortex recognize as sound.
Here is a key point. The patterns of vibration that reach our ears are transient. For example, as you pronounce the word permanence, the first syllable, “per,” is gone before you pronounce the second syllable, “ma.” You can record sound to review later, but when you play it back, it's transient again. Although a digital recording, such as the one illustrated in Figure 3.2, might enable you to “see” a segment of a sound wave, interpreting that segment requires specialized expertise. It's just not possible to generate a true “still shot” of sound. In fact, time is inseparable from sound because the brain responds to the number of waves per second. A word spoken, a car horn, and a sonata are all patterns in time, which cannot be frozen and examined. Sound perception is thus intimately connected to transient sequences in time—and thus to memory.
Think of speech as a particularly rich subcategory of sound. Human vocal cords generate extremely complex patterns, and the ability to recognize the source of these sounds (the particular vibration patterns of the sender) and the meaning of vocalizations is one of our most remarkable traits.
The qualities of speech vary with its context. We speak very differently in conversation than we do when we are performing in a play, making stump speeches on the campaign trail, testifying in court, telling a bedtime story, or presenting a unit in the classroom. A good way to examine speech as a communicative medium is to consider it in the context of public speaking, a much-studied form of speech. The two major recommendations you will find in most popular guides to speechmaking highlight the medium's chief strength—its versatility and expressive power—and its chief limitation—the demand it makes on the audience's memory.
Advantages of speech. Perhaps the most basic rule of public speaking is never read your speech. The arguments mostly boil down to a single point: Reading text makes a presentation less natural-sounding, and therefore, less effective.
Natural speech works better because of its enormous expressive power. It offers a wonderfully rich vocal orchestration that enables us to express meanings clearly and energetically, beyond what words and syntax alone can convey. With our voices, we can vary intonation, pace, volume, and pitch to emphasize significant points, clarify intent and point of view, sharpen impact, provide background and emotional tone (such as sarcasm or irony), and even alter meaning. When we speak directly instead of reading text, we also tend to emphasize or clarify our words through physical cues, such as facial expression, gestures, motion, and posture. Speech is also interactive in that we can alter our tone, expressions, and gestures based on our audience's responses.
Limitations of speech. The second basic rule for speechmaking is keep it brief. In Say It in Six (Hoff & Maguire, 1996), the authors argue somewhat hyperbolically that you should never use more than six minutes to say anything. Other oratory experts concur that short and concise is best, but allow that a major speech could run as long as 18 to 20 minutes.
This call for brevity relates to speech's major limitation: its transience. Speech requires our listeners to remember, and it is easy to overload a listener's memory with a long, complex presentation . . . or even a long, complex sentence. Speech's transience and reliance on audience memory is also the reason why expert public speakers use techniques designed to reduce memory load and increase power. For example, Dr. King's “I Have a Dream” speech uses the technique of repetition to great effect. His repeated use of those four famous words provides the structure, rhythmic power, and emphasis that ultimately make the speech more memorable.
Both major characteristics of speech—its rich expressiveness and its transience—are experienced differently by different listeners. In order to find out why and how, we need to explore what understanding speech requires of the brain's recognition, strategic, and affective networks. Of course, speech is a medium used by both teachers and learners in the classroom. For the sake of brevity, we take the standpoint of the learner, focusing on the demands of understanding speech, rather than those of generating it.
Spoken communication conveys a sequence of shared-meaning elements that we call words. In the act of understanding speech, recognition networks rely on distributed, parallel processing to interpret many kinds of complex stimuli, while simultaneously processing individual sounds, words, phrases, and intonation. Of course, understanding speech requires both bottom-up processing—interpreting meaning from the flow of sounds—and top-down processing—using prior knowledge and context to predict what words will come next and to make sense of what we are hearing. To clarify how recognition networks process speech, let's first explore the steps required to recognize words and phrases.
Recognizing words requires a complex set of steps including hearing and differentiating individual sounds (also called phonemes) and attaching meaning to words. In speech, phonemes are uttered in rapid sequence. Understanding spoken language requires the ability to hear and distinguish these sounds rapidly to segment the stream of aural input into the words to which we can then attach appropriate meaning.
Barriers to understanding speech include hearing impairments and difficulty segmenting speech-sounds quickly. Research has shown that some children cannot process spoken language quickly enough to segment normal speech into phonemes. It's only when the speech is slowed dramatically (using a computer) that these children can learn to segment speech into the individual sounds that make up words (see Tallal et al., 1996).
Semantic recognition—the ability to attach meaning to individual words—is another critical part of understanding language and another place where barriers may arise. Some of the words we hear cannot be defined based solely on how they sound. The word rock, for example, can mean a type of music, a geological formation, or something we do to soothe a baby. Although most of us can easily assign meaning to the word rock in context (“I stubbed my toe on that rock”), some individuals are unable to connect these words to their meanings—although they can accurately hear and repeat spoken words. This dissociation reflects the fact that recognition networks have two ways of extracting meaning, each one vulnerable to individual differences. They operate in a bottom-up fashion to use phonemic information to recall meaning, but when definitions are ambiguous, they must process in a top-down fashion, using contextual information to derive meaning (see Federmeier, Segal, Lombrozo, & Kutas, 2000; Ward, Stott, & Parkin, 2000).
As we can see, the task of recognizing spoken words is a complex combination of bottom-up and top-down processing. But recognizing single words is just a tiny fraction of what we must do to interpret speech, which is long strings of words combined into phrases and sentences. We take for granted the understanding of grammar and syntax we apply when we speak and when we listen, and we usually are not aware that we must apply this knowledge to interpret even simple sentences. Difficulties using grammatical constructions can appear independently of other language problems in a disorder called agrammatism (Nadeau, 1988).
Visual cues are also very important for understanding speech because they convey meaning that the bare bones of speech simply cannot express. Facial expressions and gestures provide emphasis, context, and significance to verbal communications. If a student standing outside of the room were to overhear his teacher say to his parents, “Your son is a real trouble-maker,” he might become very concerned. However, he probably wouldn't be concerned if he could see his teacher's amused and benign expression—a cue that would help him recognize her joking tone.
Because people who are blind cannot access visual cues, they sometimes miss key facets of spoken conversation. This is also a problem for people with damage to certain parts of recognition networks in the brain's right hemisphere who, although able to see, cannot interpret emotions via facial expressions (see DeKosky et al., 1980). Students with attention deficits may also sometimes fail to interpret visual cues—not because they are incapable of seeing or understanding them, but because they may not properly attend to them (Barkley, 1997; Lyon, 1994).
As this discussion illustrates, understanding speech places extensive demands on a listener's brain. In a classroom context, students must recognize sounds, words, and a variety of aural and visual cues simultaneously in order to attach appropriate meaning to the streams of speech coming from their teacher and their classmates. Thank goodness for parallel processing! But recognition networks are not working alone when students are interpreting spoken language. Listening to and understanding speech also requires us to act strategically and to stay focused and engaged. These are the domains of the strategic and affective networks.
It seems obvious that strategic networks, specializing in motor plans and actions, play an essential role in speaking. After all, we have to plan what we're going to say and go through the physical activity of producing speech. However, strategic networks' role in listening is not as immediately clear. Although in a conversation we often must prepare an answer or question while listening to someone speak, the listening itself seems effortless and unplanned. Listening to lectures seems even more passive. How much strategy could listening require?
In fact, listening requires heavy participation of the brain's strategic networks, in part because of the memory demands imposed by speech's transience. To gain meaning we must actively remember what we hear. And despite experts' advice that speechmakers “keep it short,” we are often subjected to long speeches, particularly in academic settings! Organizing a long, continuous stream of speech into meaningful segments, placing these segments in context with prior knowledge, and engaging tactics to remember new concepts presented in the speech (taking notes, for example) are all strategic processes that are vital to understanding what we hear. A variety of brain imaging studies corroborate that what we call “active listening” is a significant act of cognition that engages modules throughout strategic networks (Gabrieli, Poldrack, & Desmond, 1998; Posner & Pavese, 1998; Smith, Jonides, Marshuetz, & Koeppe, 1998).
Differences in the ways our strategic networks operate manifest as functional differences in how well each of us can learn from information presented through speech. Students with executive-function disorders, students who have attention deficit hyperactivity disorder (ADHD), and students with other, subtler difficulties rarely have trouble recognizing speech as it is occurring. However, they often have trouble understanding speech because they cannot enact the strategies and skills necessary for active listening, learning, and remembering.
“Verbal working memory” is one strategic function fundamental to speech comprehension. Individuals with impaired working memory functions (for example, some individuals with epilepsy) may be fully capable of recognizing speech but nevertheless struggle to comprehend it because their frontal networks cannot “hold on” to spoken language long enough (Jambaque et al., 1993).
Concentration is another key factor. Understanding speech requires a listener to devote attention selectively and to screen out irrelevant stimuli. Various deficiencies in strategic networks affecting attention can seriously undermine speech comprehension. For example, some patients with Parkinson's disease—a condition characterized by degeneration of the frontal cortex—exhibit impaired sentence comprehension that appears to be due to a loss of selective attention (Grossman, 1999).
It is interesting to note that students with strong strategic listening skills seem to be able to compensate somewhat for problems with speech recognition. A good memory can help a student retain information long enough to support comparatively slow recognition processes. Further, strong top-down strategic skills help learners predict, hypothesize, and fill in gaps in what they hear. In this way, both strengths and weaknesses in students' strategic networks affect how well they are able to understand speech.
The demands speech places on memory also call on affective networks. In order to remember what they hear, listeners must stay actively engaged while they listen. Following the advice of the experts, speakers sustain listeners' attention by appealing to their interests, fears, hopes, and senses of humor. Thus, the affective dimension of language—the emotional content that is carried by words, but is distinct from the words themselves—is important for a speaker to consider.
Because affective networks are distributed, different modules process the emotional content conveyed by intonation, facial expression, and gesture. Thus, different people may have trouble with different aspects of the affective processing of speech. For example, neurologist and author Oliver Sacks describes a patient who could understand words perfectly but could not decode the expressive character of speech. When listening to someone speak, she was unable to discern an angry tone from a cheerful or sad one. This meant that unlike most people, she could not rely on affective cues to clarify ambiguous sentences and therefore, had to insist that those around her use unambiguous words. Strict adherence to “proper words in proper places” was the only way she could understand what a speaker intended to say (Sacks, 1985, p. 79).
Damage to speech recognition networks can disrupt the ability to distinguish the speaker's tone of voice (sad, angry, happy) while sparing literal word comprehension (Tucker, Watson, & Heilman, 1977). These kinds of selective deficits further illustrate the distributed, modular nature of language processing. Individuals differ in their ability to employ and interpret these emotional cues, just as they differ in their ability to understand semantic content or to apply strategies for listening and remembering.
Students with emotional difficulties (whether these difficulties have resulted from situational issues or inherent traits) can find listening very challenging. Emotional difficulties alone may impede a student's ability to use information conveyed through speech, in part because the feelings themselves can demand students' attention and make it difficult to concentrate on other matters.
As this discussion has outlined, speech, a seemingly simple medium that most of us take for granted, requires highly complex, rapid processing on the listener's part. With access to nearly a dozen separate channels of communication, including words, gesture, intonation, facial expression, pitch, volume, and pauses, speakers can convey their ideas with great intensity, sharp clarity, and strong emphasis. Because meaning is conveyed through several channels simultaneously, speech supports great subtlety and nuance, much like a complex piece of music. Expression or intonation can contradict words, creating irony or humor. Alternately, expression and intonation can reinforce words, creating emphasis and clarity.
The richness and power of speech make it an excellent medium for communication and teaching. This richness requires extensive processing by the three modular, interconnecting networks in the brain, and learners present myriad subtle strengths and weaknesses as listeners. However, the transience of speech that's inherent in the way we process sound presents a variety of challenges and potential barriers to learners. As teachers, when we are aware of these barriers, we can adjust our teaching methods and materials to support every student's learning. This may involve supplementing speech with other traditional media when conveying concepts or applying the power of digital media to provide additional support.
With the fundamental qualities of speech established, let's move on to text and images, the two other traditional classroom media. Although each offers its own advantages and challenges for students, both depend on the processing of light.
Perceiving visual media like text and images requires us to process light. Like sound, light is an invention of sorts, a perception our nervous system creates from traveling energy waves. Specialized cells in the retinas of our eyes absorb electromagnetic energy waves and convert them into a neural signal that our brains interpret as light. Using light, we are able to identify shape, color, depth, and motion—all critical for recognizing the world visually.
Although both sound and light are linked to traveling energy waves, the two kinds of waves are fundamentally distinct. Whereas sound is inherently mechanical, the product of energy created and propagated by movements and vibrations, light is electromagnetic energy reflected by an object—something less physical, more passive (Bregman, 1990). This might seem like a rather arcane point, but these differences factor into perception. For example, if we witness a rockslide, both sound and light contribute to our impression, but the sound has a tremendous effect, conveying a physical sense of moving masses hitting the ground. Compared to sound, light affects most of us less palpably, physically, and emotionally. Think about how silent movies seem less gripping than films with sound effects and music.
At the same time, light provides a more durable record than sound. The sound of a rockslide is created by the physical impact of rocks hitting each other and the ground, sending up vibrations that last only a few seconds. However, we can see the fallen rocks on the ground long after the noise has faded away. Whether you look at an object now or five hours from now, it will still be visible.
The visual cortex of the brain is specialized to process light. Cells in the visual cortex are selectively wired to cells in lower-order structures that conduct visual signals, newly converted from electromagnetic energy. Various parts of the visual cortex receive different patterns of input, reflecting their specialized roles in discerning features such as object motion, shape, and color. In addition, certain kinds of visual stimuli, such as letters and faces, are processed in separate regions. For example, processing modules important for reading text are specialized to recognize types of lines—diagonal lines, vertical lines, and curved lines. Modules important for decoding images are specialized to recognize color, texture, and form. Features of different sorts are extracted in parallel, endowing our visual systems with remarkable speed and efficiency.
We now have a basic idea of how light, and therefore the visual input, is processed by the nervous system. To read text and interpret images, we grapple with the strengths and limitations of the visual medium—and we bring our own unique brain networks to the task.
Because printed text is so central to our culture, learning to read and write is the primary focus of the early school years. Teachers tend to relegate communication in all other media—even speech—to peripheral status. Because text is so powerful and so ubiquitous, we rarely pause to consider its limitations and the way those limitations affect learners. Analyzing the qualities of text can help us make wiser choices about when to use this medium.
Advantages of text. Whereas oral language is a “presentational” medium, text is truly a “representational” one. That is to say, text can be viewed as a re-presentation of spoken language in a new format—a format that overcomes transience, the major liability of spoken language. Text reduces the memory demands of spoken language by providing us with a lasting record. This advance has served humanity by scaffolding cultural memory across time, enabling us to maintain historical records without continuous oral repetition. Text also permits us to reach a mass audience dispersed in both time and space.
The act of creating text also helps us in a variety of ways. Once written, notes can be revisited at any time (provided they are not lost) and even passed on to another. We can create text to support memory, attention, or even comprehension. Text also helps us stay on course during our work. Think of the checklists we make to track our progress through a series of tasks.
The permanence of text supports a fidelity not possible with speech. This feature is inordinately valuable—whether the text is a legal document or a love letter. Not only can text maintain an accurate record of past events, it can also help us communicate information more exactly in the present. We can prepare a communication on paper, then read and revise the text to make sure we include all the necessary information. The exact record text provides lets us first convey complex concepts, such as philosophical arguments or historical interpretations, which in turn can be reread, reexamined, and reconsidered.
Limitations of text. Although text has many advantages, it has significant limitations, too. The most obvious drawback of text as a communication tool is that it lacks the inherent expressiveness of speech. The text version of Dr. King's speech accurately reports his words, but there's no question that the emotive qualities and impact are diminished on the page.
Text is also bound by a large number of conventions that writers must follow and readers must understand. For example, various kinds of printed documents are presented differently. A novel is generally formatted in single columns, with chapter titles in large type and content presented on sequential pages. In contrast, a newspaper presents text in small columns with headings above stories. Several newspaper stories might begin on the same page and conclude on different pages. Each of the many presentational forms of text—novels, poems, newspaper articles, and reference books—requires readers to approach it in a certain manner. For example, narrative and expository texts are designed to be read from start to finish, in a linear, logical order (Bolter, 1991; McLuhan, 1994; Meskill, 1999). Reference books like dictionaries and thesauruses are designed for selective consultation and require the reader to apply prior knowledge of the rules of alphabetization. Thus, the conventions of textual communications convey information outside of the words themselves that is nevertheless essential to finding the words' meaning. Such conventions can be enormously helpful and supportive, offering structural cues that direct reading. But for some learners, these conventions are difficult to grasp and use.
Not all students experience the advantages and limitations of text in the same way. The reason relates to how text engages the three brain networks. Here, for the sake of brevity, we focus on what is involved in reading text, rather than producing it. Reading engages multiple areas in all three networks. The involvement of myriad individual processing modules within recognition, strategic, and affective networks leads to innumerable differences between learners' strengths and challenges in learning to read and in reading well.
Reading is a difficult task—and really a compilation of many tasks. Decoding text requires students to recognize several levels of complex patterns (letters, letter-sound correspondences, words, phrases, and sentences) and a variety of forms (essays, newspaper stories, poems). This requires the coordinated action of many different recognition modules. Because individual differences can crop up in any of these modules, learners' abilities to read are subject to various subtle or profound barriers, beginning at the letter level.
You might think that recognizing letters would be a challenging task, especially considering that each letter in the alphabet can appear in different fonts, sizes, colors, and styles. However, most people find letter recognition easy. This facility is due to the modularity and efficiency of our visual recognition networks, which divide and distribute the task of identifying letter features, processing them rapidly in parallel and combining the information very quickly. Still, some students have trouble identifying letters and confuse similar-looking or similar-sounding letters, even late in the process of learning to read (see Roswell & Natchez, 1977).
Making the connections between letter forms and letter sounds is much more complex and can be much more problematic. Although the English alphabet has only 26 letters, it has twice that many distinctive sounds. To represent all the sounds in our language, we use combinations of letters (called graphemes), based on complex correspondence relationships. The letter A, for example, represents a wide range of sounds as in law, play, and cat. Just as understanding the meaning of a spoken word may require semantic context, letter-sound relationships depend heavily upon letter context. Unlike the simple correlation of letter shape to letter name (which is primarily, if not entirely, a “bottom-up” process), aligning letters with their sounds requires top-down processing. Readers must apply conventions based on the placement of a letter in the context of its surrounding letters to be able to differentiate, for example, the A-sound in rat from the A-sound in rate.
These sound combinations represented in text are also essential to determining word meaning. Just as many spoken words sound the same but mean different things, many written words look the same but have different meanings, requiring yet another level of contextual, top-down processing. For example, you have to know the semantic context of the word “read” before you can pronounce it or identify its tense. (“Yesterday Boris read his spy orders; tomorrow, Natasha will read the secret code.”)
Brain imaging techniques are beginning to help researchers understand the relationship between modular recognition networks and various kinds of reading difficulties. For example, researchers have found significant differences in the brain activation of dyslexic and nondyslexic readers, indicating among dyslexics “a disruption within the neural systems serving to link the visual representation of the letters to the phonological structures they represent” (Shaywitz et al., 1998, p. 2640). These studies indicate an impairment of the module that links letters to sounds. As another example, students with autism are often “gifted” in recognizing the patterns of letter-to-sound correspondences and single-word decoding that comprise the early stages of reading. However, difficulties understanding what they read coincide with problems with other kinds of contextual, top-down processing. Thus, differences in recognition ability can splinter across many separate aspects of recognition and across individuals.
In sum, recognizing text is a complex, multi-dimensional process that places a variety of demands on learners' recognition networks. Readers must recognize letter forms, letter-sound correspondences, words, sentences, and larger units of meaning. Each of these tasks requires rapid, automatic processing and interconnections between a large number of neural modules.
Many educators fail to realize that recognition is just one of several elemental facets of reading. Strategic networks are also involved at all levels of reading, even decoding, although this is so primarily when children are learning or struggling to read. Strategic networks are most significantly involved in comprehension, which is not simply a matter of “recognizing” the meaning of text, but also involves constructing that meaning through interpretation and analysis.
This construction is an active process. Reading for meaning requires us to set goals (try to find out who shot President Lincoln, enjoy a good story, or learn the molecular structure of an organic compound); create and execute a plan to achieve those goals; generate hypotheses and test them against textual cues; constantly monitor what is being read and compare that to prior knowledge; reread when comprehension falters; and evaluate whether a goal has been achieved. Research confirms that skilled readers adapt their pace and approach according to their purposes and the types of texts they are reading (see McGann, 1991).
Separate modules within a strategic network manage the various elements of reading, and problems can occur at any part of the process. Difficulty setting goals, understanding purpose, interpreting structural cues and meaning within text, connecting prior knowledge with new content, monitoring progress, and remembering concepts are among the many weaknesses based in strategic networks. Further, when decoding is not automatic, the brain recruits strategic networks into the work of analyzing words—a process that sidetracks a learner's ability to focus on constructing meaning.
Learn about the research-proven Reciprocal Teaching method developed by Anne Marie Palincsar at: http://www.ncrel.org/sdrs/areas/issues/students/learning/lr2recib.htm
Affective networks provide an excellent illustration of how differently speech and text call upon and impact the nervous system. The energy of sound (and therefore speech) is directly “wired up” to affective networks in our brain cortex and limbic system, exerting an emotional impact that enhances and in some cases alters the meaning of a visual stimulus. For example, think of a scene from the movie Jaws. The camera is panning across the harbor at sunset . . . and the recognizable “shark theme” is rising on the soundtrack. Now imagine the same scene without sound, or accompanied by serene classical music, a soothing romantic song, or blaring rock music. This exercise shows us how much of a visual experience's emotional content can be conveyed through sound.
Text employs various methods and conventions to try to replicate the emotive power of speech. These include punctuation (compare Mom lost the car keys! with Mom lost the car keys? and Mom lost the car keys.), and descriptive parts of speech, like adverbs (“Sure!” he said mockingly or angrily or hopelessly or hurriedly or boldly or balefully). Spacing and visual variations such as boldface and underlining are additional ways of representing emotion on the printed page. But none of these visual conventions connects directly to the nervous system. To access emotion through text, readers must interpret these cues and conventions and “add back,” if you will, the expressive nuance of speech and sound. This requires top-down affective processing—using context to infer the correct emotional tone. Good readers can re-create the emotion an author embeds in text, but gleaning text's emotional content can be a challenge for poor readers who struggle to grasp the basic meaning of words. Because text does not have a direct route to emotion, it may be harder to engage students in text than in speech, leaving them vulnerable to outside distractions.
Experience the power of sound in video at http://www.cast.org/TeachingEveryStudent/sound
Difficulty understanding the emotions in text is only one part of the challenge for affective networks. A range of other challenges centers on students' experiences with the process of reading and their interest in the content presented. For students who have trouble reading, negative associations with text can build to the point where they no longer invest effort, convinced that they will fail. Students can also be turned off by reading if they continually encounter texts that are not relevant to them.
Thus, individual differences in affective networks can shape students' understanding of text and their engagement with the content. Being aware of text's demands on affective networks and being sensitive to students' individual differences can help teachers provide appropriate choices of reading matter, build supports to engage students, and use other media constructively.
Our exploration of recognition, strategic, and affective networks and how they are recruited by the task of reading provides us with additional insight into the potential advantages and limitations of text as an instructional medium. Recognizing patterns in text, enlisting and applying various interpretive strategies, and engaging with the process and content of reading are all highly complex acts involving modules in all three networks. Each part of the process is susceptible to individual differences, whether these are talents or difficulties. For example, although text reduces the memory demands posed by speech and offers visual organizational cues such as white space, formatting, and headings, the need to decode and to understand and apply those cues causes new and different barriers for many students.
When teachers get to know their students' strengths and weaknesses in recognition, strategy, and affect, they can make better choices about when and how to use text. Understanding the critical role of brain networks in reading and the different ways in which their performance can be derailed also helps teachers to be conscious of the choices they make when setting goals and selecting materials and methods for different learners.
Consideration of images—their strengths and weaknesses as a communication medium and the demands they place on learners—rounds out our discussion of traditional fixed instructional media. Still images are a longstanding component of instruction, but one that has usually been relegated to a lesser role. However, images' increasing cultural prevalence through television, magazines, movies, and, of course, computers is making them more important in classrooms.
Advantages of images. The major point of contrast between still images and language is that language communicates linearly and sequentially, whereas images communicate everything at once. Images offer immediacy and they capture the entirety of a view. Novelist Mark Helprin expresses a similar viewpoint through one of his characters:
“Paintings . . . (are) so easily apprehensible. They're present all at once, unlike music, or language, with which you can lie to the common man merely because he may not remember what has just been and cannot know what is coming. . . . Painting is tranquil and appeals directly to heart and soul.” (Helprin, 1991, p. 284)
Limitations of images. Although images can be used to simplify complex information (e.g., a visual display of data or a diagram of the water cycle), sometimes they are governed by conventions that can be quite complex, requiring training and practice to interpret. Further, images are not ideal for conveying conceptual, philosophical, and abstract information—anything below the surface or above the instance. Their ability to explain, clarify, ask questions, speculate, negate, or convey inner thoughts and emotion is inferior to language in many cases (Stephens, 1998). Of course, there are exceptions. Think about how powerfully expressive pictures of the American flag can be: the flag raised at Iwo Jima, the flag burned in protest, the flag brandished by an Olympian, or the flag flying over the rubble of the World Trade Towers.
Images also make unique demands on the nervous system. Let's touch briefly on some of the different processes involved in understanding images to highlight barriers some learners may face.
Interpreting images requires visual acuity and the ability to recognize the parts in relation to each other—in size, depth, movement, and many other ways. Because different visual features are processed in different parts of the visual system, some aspects of an image can be accessible to an observer, while others may not be.
For example, studies have shown that particular brain lesions can impair the ability to analyze parts of a picture, but not the ability to understand the picture's gestalt—the overall idea it's communicating. Other damage can disrupt the ability to perceive shape, but not color (Zeki, 1999). These specific deficits are not unlike those we observed with spoken language, as when a person with a particular language disorder might have difficulty understanding word meanings but can interpret emotional tone or vice versa. These insights make it clear that images, like other media, are processed in parallel by multiple modules within recognition networks.
Understanding an image requires the analysis of parts and wholes. Individuals create viewing strategies according to their own purposes and the nature of the image itself. Images used to convey complicated concepts make additional demands, requiring skilled interpretation based on the knowledge of graphic conventions, such as placing the most important elements at the center of the page, or organizing the various elements in ways that draw the eye and encourage it to linger.
Just as learners can be distracted when processing text, they can mistakenly focus on an image's unimportant elements. Strategic networks help viewers determine what is important and focus their attention where it is most productive. Skilled viewers are much more successful because they apply top-down processing: Their inspection is more firmly anchored to the meaning or knowledge they seek, and they are skilled at tying what they see to what they already know.
Affect is also crucial to deciding the important elements of images and understanding them. A viewer's individual emotional preoccupations may steer him away from his intended viewing strategy or prevent him from receiving the message the image's creator intended. On the other hand, a lack of emotional engagement may prevent a viewer from looking closely at an image or may block the viewer's comprehension of the image's emotional content. However, affect is not always negative. Positive affective engagement may feed an individual's interest—and persistence—in working with images.
Compared to speech and text, images offer a partially overlapping but unique set of advantages and limitations for teachers and learners. Images share the directness and emotive power of spoken language and the permanence of text. Unlike either of these media, they present everything at once rather than sequentially, giving viewers the opportunity to examine the information in images in their own preferred sequence. Barriers to understanding images can occur in any or all brain networks and include, among other things, an inability to see the images, difficulty with systematic examination and interpretation, or simple lack of interest. Considering the purpose of a particular lesson and the strengths and weaknesses of a student can help teachers evaluate the appropriateness of using images for teaching and student expression.
Thus far, we have provided an overview of how speech, text, and images each present a unique profile of strengths and weaknesses not experienced equally by all individuals. In the classroom, each medium poses barriers for some students, while offering particular opportunities to others. None works optimally for every student or for every situation, which means that several media options should be available.
Unfortunately, many classrooms continue to be dominated by a single medium—usually printed textbooks. This dominance prevents teachers from reaching all students and instead forces them to cater to those who find text accessible and create barriers for those who do not. There are further consequences. Even students able to access text are missing out because we know that there are other media more suitable for communicating particular kinds of material. Finally, students' preferences and proclivities for certain media and tools can play an important role in deepening their engagement and enhancing their success, even if they are capable of using a variety of media.
What is the alternative? Is the answer simply for teachers to incorporate more text- and image-based instruction? To make multiple presentations of the same content—using text for some, speech for others, and images for still others? This just isn't very feasible. Collecting and maintaining a sufficiently varied assortment of traditional media that would allow us to create an optimal instructional environment for every student in every unit of curriculum would be incredibly costly, consume too much space, and create nearly insurmountable logistical problems. No, more media is not a reasonable alternative. We don't need more media; we need better media.
Fortunately, we don't have to wait for better media to come along. Digital media—available now—can help overcome the limitations of traditional instructional formats.
Although digital media also represent information through text, sound, and images, the similarities to traditional media end there. By virtue of one essential feature—flexibility—digital media surpass traditional media in their ability to meet diverse students' varied needs in a variety of instructional contexts. This flexibility is inherent in the way digital content is stored and transmitted.
Over the last six thousand years, we humans have invented a wonderful array of storage mechanisms for our communications, ranging from stone to parchment to vinyl. Most of these techniques are an application of the same basic concept: Information is stored by taking a direct or encoded representation (a letter, an image, a sound) and physically embedding it in a medium (paper or vinyl), where it becomes permanent.
In digital media, content is stored in an entirely different manner. Rather than being embedded in a physical medium, the information is transformed into something abstract (bytes: ones and zeroes), which can then be presented in almost any medium and quickly transferred from one medium to another.
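To make this concrete, here is a minimal sketch in Python (the phrase and variable names are ours, purely for illustration): the same stored bytes can be inspected as raw numbers or re-created as displayable text.

```python
# The phrase is stored as abstract numbers (bytes), not as marks on a page.
content = "Mona Lisa".encode("utf-8")

# The same stored numbers can be "read back" in different ways.
as_numbers = list(content)          # the raw byte values
as_text = content.decode("utf-8")   # re-created as displayable text

print(as_numbers)  # [77, 111, 110, 97, 32, 76, 105, 115, 97]
print(as_text)     # Mona Lisa
```

Nothing about the stored numbers commits them to any one display; the same values could just as well drive a screen, a printer, or a speech synthesizer.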
Consider the familiar image of da Vinci's Mona Lisa (see Figure 3.3). A conventional print reproduction stores the masterpiece (paint permanently embedded in canvas) in a similarly fixed format: as ink permanently set on paper. A digital reproduction stores the Mona Lisa as a set of numbers in a computer. When these numbers are “read,” the image is re-created on the computer screen. But unlike the paint-on-canvas version or the ink-on-paper version, this image of the Mona Lisa is not permanently fixed in this one format and location. On the contrary, you could display the same content on your 14-inch computer monitor, a 40-foot video screen in Times Square, a Web page in Singapore, or a Palm Pilot.
Photograph by Martin Kausal. Reproduced with permission.
Moreover, the digital Mona Lisa is malleable: The whole image can be made darker or lighter, the greens subdued, or the edges sharpened. Parts of the image can be transposed or deleted, duplicated and recombined (as was the case with the image in Figure 3.3). What once was permanent can be altered, removed at will, or restored—multiple times.
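A toy sketch in Python suggests how such malleability works under the hood (the tiny pixel grid and the brighten function are invented for illustration): because an image is just numbers, lightening it is arithmetic, and the original numbers remain available for restoration.

```python
# A tiny grayscale "image": each number is one pixel's brightness (0-255).
image = [
    [10, 60, 120],
    [60, 120, 200],
]

def brighten(img, amount):
    """Return a copy of the image with every pixel lightened (capped at 255)."""
    return [[min(255, px + amount) for px in row] for row in img]

lighter = brighten(image, 40)
print(lighter)  # [[50, 100, 160], [100, 160, 240]]
print(image)    # the original numbers are untouched and can be restored
```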
Digital media offer a remarkable, almost paradoxical, set of features. They can save text, speech, and images reliably and precisely over time, and yet they offer tremendous flexibility in how and where that content can be redisplayed. The same content that is irrevocably fixed in a traditional medium can be flexibly accessed in a digital medium and changed or adapted. This is very useful to a teacher with a diverse classroom. Four aspects of digital media's flexibility are particularly beneficial for classroom application: versatility, transformability, the ability to be marked, and the ability to be networked. Let's examine each in turn.
Unlike a printed book, which can present only text and images, digital media can display content in many formats—text, still image, sound, moving image, combinations of text on video, sound in text, video in text, and more. Compared to print—and indeed, to any traditional medium—this versatility is astonishing. What's most exciting is that it offers users the chance to work in a preferred medium or interact with multiple media simultaneously. In a digitally equipped class studying Dr. King's “I Have a Dream” speech, for example, a student with reading problems, a student with a preference for visual material, and a student who loves to read could access the material as speech, video, and text, respectively. Or they could access it as video and text or text and speech.
Because the display instructions are separable from the content itself, digital media allow the same content to be displayed in multiple ways. Web pages are a familiar example. Various users accessing the same Web page can alter how the site's content is presented; they can change the appearance of text or images, adjust sound volume, turn off graphics, and so forth simply by selecting a different browser, changing browser settings, or using a different computer. We call these kinds of adjustments “within-media transformations,” because they adjust the way something is presented without changing it to another medium. That is, a loud or a soft sound is still a sound; large or small text is still text.
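As a rough illustration (the render function and its settings are hypothetical, not part of any real browser), a within-media transformation amounts to wrapping one store of content in different display instructions, leaving the content itself unchanged:

```python
def render(content, font_size="12pt", color="black"):
    """Wrap the same content in different display instructions (HTML/CSS)."""
    return f'<p style="font-size:{font_size}; color:{color}">{content}</p>'

passage = "Four score and seven years ago..."

# One store of content, two within-media presentations: still text either way.
default_view = render(passage)
large_print = render(passage, font_size="24pt")
```

Because the display instructions travel separately from the passage, a second reader can request large print without anyone editing the content itself.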
With digital media, “cross-media transformations”—transformations from one medium to another—are also possible. Speech recognition software, which automatically translates spoken language into text, is one example. Text-to-speech software, which transforms text into speech, is another. These tools can now be embedded into Web browsers and other software programs via translation algorithms so that the transformation from one medium to another can take place automatically and just in time—right when users want to access the material.
The capacity to transform digital content, both within and across media, is a powerful asset. Using within-media transformations, students who have trouble seeing small text can increase its size; those who have trouble understanding speech can slow the speech down or increase its volume. Using cross-media transformations, a teacher can set up a computer to read words aloud on demand for a student with dyslexia. These are just a few examples of the almost endless possibilities for application.
Hypertext markup language (HTML) is a code for constructing Web pages. HTML allows a Web page designer to “mark up” text, tagging different structural components such as the title, subheadings, or main body. Newer markup languages, like XML, provide an increasing variety of “tags” and even give designers the ability to invent their own tags. Once content is marked, a Web page designer can direct the display of the different components (put all text marked as headers in a 12-point Helvetica font, for example).
The advantage of these marking tools is that they allow teachers and students (with only a small amount of training) to flexibly alter content to accommodate needs or preferences. If you wanted to use a particular text to teach your students about literary devices, you could mark all the sentences containing similes and set them to be displayed in boldface type. A Latin teacher could mark the text to italicize all words that have Latin roots. Students trying to understand a detailed piece of text could mark it to underline all the summary sentences as a way to keep track of the most important information.
What differentiates the marking ability available with digital media from that used with traditional media (a textbook and a highlighter pen, for example) is that with digital media, markups can be shown or hidden, amended, expanded, or deleted. The same store of information can be marked in different ways for different students. It can also be unmarked and re-marked to suit the evolving needs of any particular student.
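A small Python sketch (our own rough heuristic, not a real curriculum tool) shows how digital markups can be applied and then hidden again, leaving the underlying text untouched:

```python
import re

def mark_similes(text):
    """Wrap sentences containing 'like' or 'as a' in <b> tags (a rough heuristic)."""
    sentences = re.split(r'(?<=[.!?])\s+', text)
    marked = []
    for s in sentences:
        if re.search(r'\b(like|as a)\b', s):
            marked.append(f"<b>{s}</b>")
        else:
            marked.append(s)
    return " ".join(marked)

def hide_marks(text):
    """Strip the markup to restore the unmarked text."""
    return re.sub(r'</?b>', '', text)

passage = "The fog crept in like a cat. Then the sun rose."
marked = mark_similes(passage)
print(marked)              # <b>The fog crept in like a cat.</b> Then the sun rose.
print(hide_marks(marked))  # The fog crept in like a cat. Then the sun rose.
```

A highlighter pen offers no equivalent of hide_marks: once ink is on the page, the marking is as permanent as the text.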
The fourth great advantage of digital media is that it's possible to link one piece of digitally stored content to another. This “networkability” allows digital media to incorporate embedded hyperlinks to all kinds of learning supports—from direct access to dictionaries and thesauruses, to prompts that can support reading comprehension, to supplementary content that builds background knowledge, to electronic notepads or visual organizers. Networkability makes possible rapid navigation between a word and its definition, an image and its description, a video and its caption, or a text passage from Robert McCloskey's Make Way for Ducklings and an audio file of real ducks quacking. All these things are possible with multimedia packages and a local network of rich digital resources. And if the network includes the World Wide Web, educators and learners gain access to all this and more—widely varying and continually updated materials, not to mention the perspectives and contributions of diverse experts, mentors, and peers from all over the world.
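In miniature, such networked supports resemble a simple lookup from content to linked resources (the glossary entries here are invented for illustration): a reader follows a link from a word to its definition on demand.

```python
# A miniature "network" of learning supports: words linked to definitions.
glossary = {
    "mallard": "a common wild duck",
    "drake": "an adult male duck",
}

def define(word):
    """Follow the link from a word to its definition, if one exists."""
    return glossary.get(word.lower(), "(no entry linked)")

print(define("Mallard"))  # a common wild duck
print(define("gosling"))  # (no entry linked)
```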
The Web is itself an excellent illustration of the four flexible characteristics of digital media. Its strengths parallel those of neural networks. Like neural networks, Web information is distributed across many different locations and interconnected via a dense collection of links. From any networked computer, there is near-immediate access to information elsewhere in the network. Even with billions of Web sites, the Web is far less complex and infinitely less intelligent than a human brain. Nevertheless, compared to traditional media, it has capabilities that are a much better match for the potential and diversity of the human learner.
With a better understanding of new and traditional media and how individual brains interact with each, teachers can reevaluate how they teach, how students learn, and how best to use various tools and techniques to individualize these processes. As our discussion of the three brain networks and individual differences makes plain, no single traditional instructional medium works for all students. And although it is possible to remove barriers and expand access to learning by offering content in a variety of media, the fixed nature of speech, text, and images makes this an impractical, unviable option for most educators.
The flexibility of new media opens new doors to diverse learners. Digital capacity to combine and transform text, speech, and images leads to a more diversified palette for communication—one that can accommodate the varied strengths and weaknesses of each medium and every brain.
Digital media also have the potential to transform the learning process. The hegemony of printed text has already disappeared in high-impact fields like advertising, entertainment, and communication, but in education, its dominance remains. In the years ahead, however, it is clear that text-only instruction will give way to a more deliberate application of multimedia. Instructional designers will use digital tools to tailor media to the task, to different kinds of learning, and to different kinds of students, reducing the barriers and inefficiencies inherent in one-size-fits-all printed textbooks. New expertise in the representational and expressive qualities of each medium and the new blends that will evolve will help educators reach a broader spectrum of students with a broader spectrum of knowledge.
Students with various kinds of disabilities are likely to be the earliest and most obvious beneficiaries. Media such as talking books, descriptive videos, and American Sign Language (ASL) tracks vastly increase both access and learning opportunities. Using digital tools actually changes those students' capacities and makes them far more capable. An extreme example is a student with severe physical and language disabilities who, independently, might be able to communicate only by indicating yes and no. With a computer and the right software tools, this student can be on an equal footing with others.
The incidental but equally important beneficiaries of new digital media for teaching will include teachers and students of subjects like math, music, geography, and physics—subjects that have never easily yielded their magic through linear text. Ultimately, the new media will benefit all learners.
Incorporating digital media into the classroom is an important step that requires thoughtful consideration. Accessing their flexibility requires a shift in how educators think about instruction. CAST's intent is to help teachers understand the process of integrating digital media into the curriculum to engage diverse learners in meaningful educational progress. Our framework, Universal Design for Learning, provides guidance for that process. In the next chapter, we define UDL in more detail and explain its three guiding principles.
Copyright © 2002 by Association for Supervision and Curriculum Development. All rights reserved.
No part of this publication—including the drawings, graphs, illustrations, or chapters, except for brief quotations in critical reviews or articles—may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission from ASCD.