March 1, 2012 | Vol. 69, No. 6

True—or Not?

How can students know whether the information they find online is True—or Not?


We choose everything—from toothpaste to health care, from education research to instructional strategies—by evaluating information. How do I gauge the effectiveness of our one-to-one laptop program? Which presidential candidate will get us out of this economic slide? Is watching SpongeBob SquarePants bad for my child?
We also make daily decisions about whom to trust with our information. Does this company sell my information to third-party advertisers? Should I share my concerns about a colleague with an administrator? When I'm out of town, should I alert the post office to hold my mail or just ask my neighbor to retrieve it from my mailbox?
The staggering volume and speed with which information is presented and the sophisticated ways in which facts and figures are represented make it practically impossible for an average adult to single-handedly judge accuracy and credibility without guidance. Why are we surprised to learn that bogus communications purporting to be from banks or credit card companies dupe smart adults into supplying personal or account information to scammers? Or to find that a high school senior's essay cites a 5th grader's slick-looking web page on the Greek gods? Or to learn that teens are making important life decisions on the basis of seemingly reputable health websites that contain inaccurate reproductive health information (Tolani & Yen, 2010)?
In this participatory digital world, we're all novices at some point when judging whom to trust. Appearance, credentials, and other indicators of quality that used to serve as shorthand tests of credibility don't readily transfer online. Our students may acknowledge that Wikipedia is unreliable, but they use it anyway—and so do we.

Smarter "Satisficing"

Many educators erroneously claim that students lack the capacity to evaluate information. On the contrary, even when they're geeking around, students wonder why some peers are seen as experts (Why are they listening to him?) and question how decisions about trust are made (Why does her argument appeal to me?). Indeed, youth are less trusting than their elders of sites that host their information. Young adults (ages 18–29) are more likely than older adults (ages 50–64) to customize their privacy settings to limit what others see, remove their name from tagged photographs, and delete comments that others have made on their profiles (Madden & Smith, 2010).
Skepticism requires effort. Realistically, we have neither the time nor the patience to analyze every source or fact thoroughly. We rely on rules of thumb—routines that have worked in the past—to arrive at quick trust judgments. As an education consultant, I have watched teachers parade out an information evaluation lesson as part of the discussion around the obligatory high school research essay. However, one lesson a year on the topic won't transfer to lifelong learning.
Embedded in every subject are natural tasks—locate a news article, select a photograph, read a map or graph—that lend themselves to evaluation using rules of thumb that work online. Just as we learn to get good-enough answers to our own quick reference questions—How do I get to that restaurant? Which cereal is cheaper?—students can learn to marshal effective rules of thumb to find answers that "satisfice" (both satisfy and suffice). These rules of thumb will prepare students to more deeply evaluate claims and arguments.

How We Evaluate Data

We define credibility variously as truthfulness, trustworthiness, expertise, objectivity, relevance, reliability, or even believability (Hilligoss & Rieh, 2008); and these criteria interact as a series of judgment calls. For example, during the period of corporate scandals and bank failures, we learned that auditors, lawmakers, and regulators were experts, but their credentials weren't necessarily a good rule of thumb for truthfulness or reliability. Or, when investigating incidents of bullying, an assistant principal may well take the victim's story as truthful, but certainly not as objective.
We should recognize that our gut may judge something authoritative because it confirms our beliefs or values. Or that we may find a report by the American Medical Association objective merely because we associate science with a lack of bias.
Cultural background also colors our evaluation decisions. During research for a debate on the U.S. military presence overseas, Yi-Min, a second-generation Chinese American student, might confidently use a government report, whereas Lupe, who is worried about the possible deportation of a family member, might discount the same report as suspect.
Technology has a role to play, too. Both Yi-Min and Lupe may accept Google's search engine at face value because they usually find answers to their questions on the first page of results. Just asking those two students to try a second search engine, like Bing, or an answer engine, like Wolfram Alpha, may induce them to question what they find because people tend to be more critical of results from a less familiar engine.

What About Wikipedia?

We tend to treat print as a determinant of information quality; we trust books, journals, mainstream news, and, by extension, e-books, e-journals, and subscription databases over blogs, tweets, and ephemeral digital information. Although we've read about fabricated and plagiarized reporting by writers at well-known publications (New York Times reporter Jayson Blair, for example), we still expect this medium to deliver accurate information because we know that magazine and newspaper editors typically require writers and reporters to corroborate their sources.
You might be surprised to learn that Wikipedia, vilified by many educators, concurs with these criteria for information quality. According to its guidelines, articles written for Wikipedia should cite "reliable, third-party published sources … credible published materials with a reliable publication process [and] authors [who] are generally regarded as trustworthy or authoritative" (Wikimedia Foundation, 2010). (For suggestions on how to assess credibility in Wikipedia, see "Weighing Truth in Wikipedia," p. 74.)

Rules of Thumb for Online Reading

In our dizzying world of click-and-go wikified information, everyone uses fast and frugal skimming strategies to evaluate information daily. Our challenge is to teach students to devise accurate rules that take advantage of new technology to quickly judge the quality of the information they want to use.
Here are some quick and dirty rules of thumb—digital reading strategies, in fact—that will intrigue students, spark their curiosity, and serve as sensible entry points to more sophisticated analyses of ideas.

Judging Importance: Who's Weighing In?

Articles submitted to journals are often peer reviewed; reviewers scrutinize methodology and claims for accuracy, evaluate the writer's expertise, and may return the manuscript for further revision before publication. However, most information on the web is published without such vetting, under the assumption that it will be evaluated and judged by many eyes afterwards.
On the web, voting mechanisms allow readers to weigh in on the value of a source. Aggregated mouse clicks and tags vote, in effect, on an individual's reputation or the importance of the information. James Surowiecki, a financial columnist for The New Yorker, calls this process "the wisdom of crowds." He argues that when a large number of people with diverse perspectives from different geographical locations evaluate information, their "votes" identify important individuals and ideas—but only if these voters don't know one another and can't influence one another's opinions. Many websites, from Amazon to Zillow, include recommender systems by which people can star or vote for everything from authors to real estate.
In science, because the traditional peer-review process often delays publication up to a year, scholars are seeking ways to publish important scientific discoveries more quickly. One such online experiment in open-access journal publishing, the Public Library of Science, invites submissions of scientific papers before peer review. For every article, the site shows social metrics, such as the number of people who view, download, bookmark, or cite the article, as well as comments, notes, blog posts, and stars (readers can award one to five stars). Students could quickly judge the importance of an article on, say, chronic fatigue syndrome by noting the number of citations and links to the article, its online usage, and the accrued ratings by named scientists or researchers who have expertise in the topic.
Google's PageRank algorithm is built on a similar premise—that clicks and links to a source are votes of confidence in its relevance and importance. In practice, although we tend to accept the top results of a Google search as both relevant and credible, clicks and links do not guarantee these qualities. A search for information on Martin Luther King Jr., for example, always brings up Stormfront's hate site on the first page of results because many schools link to it to show students how to evaluate biased information. Rules of thumb are shortcut evaluation measures, and they work only in certain situations; it's healthy to become skeptical when gut feelings or deeper scrutiny raise questions about a source.

Judging Trustworthiness: Who's Setting the Record Straight?

Unlike articles in print journals, which can take months or years to be vetted and published, content can be posted instantly online, albeit by an unnamed author whose assertion of authority could rest more on persuasive rhetoric than on credentials. Noticing how authors and publications handle errors is a quick way to check on their trustworthiness.
When Hwang Woo-suk's scientific claims of major breakthroughs in stem cell research were discredited, Science, the journal of the American Association for the Advancement of Science, reestablished credibility by announcing on its blog that it was retracting his papers. The journal provided full documentation of the issues, with direct links to its evaluating committee's report, its own response, and an accompanying editorial on its website.
Error correction behavior is a rule of thumb for truthfulness. Although wikis archive all editorial revisions, blogs and other digital publications can make changes invisibly, deleting or editing content without comment. The blogger or author who publicly acknowledges mistakes is more trustworthy than one who deletes errors without a trace.

Judging Accuracy: What Do Other Sources Say?

Significant problems accompany the competitive, round-the-clock scramble for new content. The basic facts of a disaster—the number of survivors of a mine explosion or the number of gallons released in an oil spill—may be wildly inaccurate in early reports, only to be corrected later. Therefore, simply comparing the dates and times of related news stories can serve as a rough rule of thumb for accuracy. Unfortunately, the deluge of retweets of initially compelling but inaccurate information may bury later retractions.
In July 2007, just before the International Medieval Congress, historian Marco Mostert, one of the event's organizers, was widely reported to have claimed that "underwear created literacy." Responding to a series of questions about the development of literacy in the Middle Ages, he had written,
The 13th century saw a growth in the number of towns all over Europe, a commercial revolution, and an unprecedented growth in the number of schools, especially in towns, which suggests literacy rates grew apace … The development of literacy was certainly helped by the introduction of paper, which was made of rags … In the 13th century, so it is thought, the use of underwear increased. This caused an increase in the amount of rags available for paper-making. So even the wearing of underclothes, it could be argued, was a factor in the development of medieval literacy!
Within hours, part of his response appeared in more than 12,000 blogs and news reports—and you can guess which part. After reading his comments in a Guardian news article and elsewhere, I e-mailed the author directly—a rule of thumb for corroboration that we should use more often than we do. He replied that his words had been taken out of context, distorted by the omission of his final qualifying sentence: "But seriously, there was a definite link between the development of urban centers and the development of literacy." Troubled by how his remark might blemish his scholarly work, he planned to write an article on the topic with proper documentation. But he never did. The written record remains uncontested: Underwear created literacy.
An effective rule of thumb is to find three different viewpoints about the original content, rather than three reiterations of the same content. Effective triangulation might mean contacting the primary source of the information, tracking down an original report rather than using a summary with an extracted quote, or consulting different types of sources (for example, a government entity, corporation, foundation, nonprofit organization, journalist, or eyewitness) to check the facts or ideas expressed.

Judging Infographics: Who's Behind the Chart?

Powerful computers with virtually unlimited storage capacity provide us with the ability to access and analyze vast quantities of data. Visual displays of information in graphs, charts, maps, tables, and infographics can help consumers make choices among products and help citizens discern trends or analyze complex issues.
Any thoughtful analysis of aggregated data from multiple sources must be grounded in an assessment of each of the sources that feed into the visualization, a process that is neither quick nor easy. However, an initial rule of thumb might be to determine the motivation and vested interests of the producer: Who created this visualization and why? For example, a politician wishing to convince voters to vote a particular way might use an infographic to persuade rather than to inform.
Once you determine the creator of the visual, you need to locate an evaluation ally. This might be a nonpartisan research organization that publishes reports or has developed tools to help journalists, citizens, and students analyze information in areas related to its mission. Take MapLight.org, for instance, which, according to its website, is funded by "organizations who favor greater transparency for our elected officials' actions." The organization has developed software that queries legislators' voting records as well as campaign contribution data to determine the influence that money might have on the voting patterns of elected officials. For example, one can ask for the data on a legislator's voting record on deepwater drilling and compare it with the size of contributions that the official has received from oil companies.

Taking It Schoolwide

The 21st century challenge for education and democracy is teaching ourselves and our students to assess credibility in a systematic, sustained, and scalable way. One school's approach has been to build a climate in which asking questions like, How do you know that? and Where is the evidence? has become a routine way of approaching information evaluation.
One school librarian has institutionalized credibility assessments by gaining the faculty's agreement that all students would add critical annotations to their bibliographies for any research conducted in any class. Below each reference, students must include a sentence or two that addresses each of four areas: author (What makes this author credible?); currency (How old is the source, and has it been updated?); subject (How thorough is the coverage of the topic?); and balance (Is the coverage biased in any way?). Four years later, students have learned how to define what credible information is, identify attributes that signal credibility, and state the rules of thumb they've used to determine credibility.
Teachers know that students' reading comprehension is affected by their interest in the text and task. When students are serious about an assignment, they're likely to evaluate a source thoroughly and systematically. However, for their day-to-day reading, when such intentional evaluation may not be called for, these four rules of thumb provide enough guidance to get a "good enough" result quickly and jump-start more analytical digital reading strategies.

Weighing Truth in Wikipedia

How can students evaluate the accuracy of articles in Wikipedia, arguably the most important source of objective information on the web? Here are some rules of thumb:

  • Look for length. Longer articles with more citations and more contributors are typically more accurate than shorter ones.

  • Look for text revisions. When you click on the WikiTrust tab, the software calculates the extent of revision for an article. Text that is highlighted in orange is newly written, whereas white text has been extensively revised by authors who have a reputation among Wikipedia editors for contributions and edits that remain unchallenged. Therefore, that text is likely to be more accurate. Also, you can scan the archive of comments and revisions to see how neutral contributors' tone and language are.

  • Look for editorial ratings. A small bronze star on the upper right-hand side of the article signals that Wikipedia editors consider this a polished article of high standard, whereas a small lock indicates that the article's content is controversial and is now protected from contentious revision after being edited for objectivity and neutrality.

  • Look at patterns of editing. Another software overlay on Wikipedia articles displays each contributor's pattern of editing. For example, when we search the "Deepwater Horizon oil spill" article through <LINK URL="http://wikidashboard.appspot.com">WikiDashboard</LINK>, we see a record of the contributions of a self-described fisherman named Michael Westbrook, who "loves the environment and wants to protect it." We can follow his edits during the Gulf of Mexico oil spill side-by-side with other editors' suggestions on how he could moderate his strident tone, source the facts he supplies, and remove speculation about BP's future stock value. When he fails to observe Wikipedia's editorial norms, he is blocked from editing. The record and reasoning for edits enable us to quickly judge the neutrality of individual contributors and the objectivity of the article at that point in time.


Hilligoss, B., & Rieh, S. Y. (2008). Developing a unifying framework of credibility assessment: Construct, heuristics, and interaction in context. Information Processing and Management, 44(4), 1467–1484.

Madden, M., & Smith, A. (2010). Reputation management and social media: How people monitor and maintain their identity through search and social media. Washington, DC: Pew Internet and American Life Project, Pew Research Center. Retrieved from http://pewinternet.org/Reports/2010/Reputation-Management.aspx

Tolani, A., & Yen, S. (2010). Many websites fail to dispel myths about IUDs, emergency contraception, birth control, and proper timing of pap smears. Retrieved from Lucile Packard Children's Hospital at Stanford at www.lpch.org/pdf/clinical/adolescent-medicine/yenPosterWebsitesInfoTeens.pdf

Wikimedia Foundation. (2010). Wikipedia: Identifying reliable sources. Retrieved from Wikipedia at http://en.wikipedia.org/wiki/wikipedia:Reliable_sources

From our issue: Reading: The Core Skill