Chapter 6: 21st-century media and issues

6.1.2 A digital media literacy intervention increases discernment between mainstream and false news in the United States and India

Andrew M. Guess, Michael Lerner, Benjamin Lyons, Jacob M. Montgomery, Brendan Nyhan, Jason Reifler, and Neelanjan Sircar

  Edited by David G. Rand, Massachusetts Institute of Technology, Cambridge, MA, and accepted by Editorial Board Member Margaret Levi April 28, 2020 (received for review November 20, 2019)



Few people are prepared to effectively navigate the online information environment. This global deficit in digital media literacy has been identified as a critical factor explaining widespread belief in online misinformation, leading to changes in education policy and the design of technology platforms. However, little rigorous evidence exists documenting the relationship between digital media literacy and people’s ability to distinguish between low- and high-quality news online. This large-scale study evaluates the effectiveness of a real-world digital media literacy intervention in both the United States and India. Our largely encouraging results indicate that relatively short, scalable interventions could be effective in fighting misinformation around the world.


Widespread belief in misinformation circulating online is a critical challenge for modern societies. While research to date has focused on psychological and political antecedents to this phenomenon, few studies have explored the role of digital media literacy shortfalls. Using data from preregistered survey experiments conducted around recent elections in the United States and India, we assess the effectiveness of an intervention modeled closely on the world’s largest media literacy campaign, which provided “tips” on how to spot false news to people in 14 countries. Our results indicate that exposure to this intervention reduced the perceived accuracy of both mainstream and false news headlines, but effects on the latter were significantly larger. As a result, the intervention improved discernment between mainstream and false news headlines among both a nationally representative sample in the United States (by 26.5%) and a highly educated online sample in India (by 17.5%). This increase in discernment remained measurable several weeks later in the United States (but not in India). However, we find no effects among a representative sample of respondents in a largely rural area of northern India, where rates of social media use are far lower.

Social media platforms have proved to be fertile ground for inflammatory political misinformation. People around the world increasingly worry that so-called “fake news” and other forms of dubious or false information are misleading voters—a fear that has inspired government actions to address the problem in a number of countries (1, 2).

Research into online misinformation has thus far focused on political, economic, and psychological factors (3–5). In this article, we focus on another human vulnerability to online political misinformation: shortfalls in digital media literacy.

While largely overlooked in the emerging empirical literature on digital disinformation and fake news, the concept of digital media literacy usefully captures the skills and competencies needed to successfully navigate a fragmented and complex information ecosystem (6). Even under ideal conditions, most people struggle to reliably evaluate the quality of information they encounter online because they lack the skills and contextual knowledge required to effectively distinguish between high- and low-quality news content.

The connection between digital media literacy and misinformation was identified early by theorists. “Misinformation—and disinformation—breeds as easily as creativity in the fever-swamp of personal publishing,” according to an influential 1997 introduction to the subject. “It will take all of the critical skills users can muster to separate truth from fiction” (ref. 7, p. xii).

More than 20 y later, these warnings seem prescient. Survey research shows that few people are prepared to effectively navigate the digital world. For example, the Pew Research Center found as recently as 2017 that only 17% of US adults have the skills and confidence to learn new information effectively online (8). Nonetheless, people worldwide increasingly obtain news and information from social media platforms that lack traditional editorial controls (9, 10), allowing politicians and other actors to widely disseminate misinformation via algorithmic news feeds. Without the necessary digital media literacy skills, people frequently fall victim to dubious claims they encounter in this context.

These concerns have become especially salient in the United States and India in recent years. In the United States, low-quality online articles were distributed widely on social media in the months before the 2016 US presidential election (11). This phenomenon created widespread fears that fake news was misleading people at a massive scale (12). Smartphone use has also made India, the world’s largest democracy, a fertile environment for online rumors and misinformation. Viral misinformation spread via WhatsApp in India has reportedly provoked hatred and ethnic violence (13). Moreover, online political misinformation became a significant concern during the 2019 Indian general election as political parties engaged in aggressive digital campaign efforts via short message service (SMS) and messaging applications like WhatsApp (14, 15). For instance, one analysis found that over 25% of the news shared on Facebook during the election by the governing Bharatiya Janata Party (BJP) came from dubious outlets (16).

Many nonprofits and governments are seeking to counter these trends (and the related threat of foreign manipulation campaigns) by improving the digital media literacy of news consumers (17–20). For instance, American universities increasingly teach media literacy to undergraduate students (21), and similar efforts are also being proposed at the kindergarten to grade 12 level (22). Similarly, WhatsApp and the National Association of Software and Service Companies announced plans to train nearly 100,000 people in India to spot misinformation through in-person events and posts on social media (23).

Despite the attention and resources these initiatives have received, however, little large-scale evidence exists on the effectiveness of promoting digital media literacy as a response to online misinformation. Existing scholarly work related to digital and media literacy is frequently qualitative in nature or focused on specific subpopulations and/or issues. Observational findings are mixed (24, 25) and randomized controlled trials remain rare (26).

However, two related but more specific approaches have been shown to be somewhat effective in countering misinformation and are important to note. First, inoculation interventions have been employed to protect audiences against misleading content by warning of misinformation and either correcting specific false claims or identifying tactics used to promote it. This approach has been shown to reduce the persuasiveness of misinformation in specific domains (27–32). In addition, other studies evaluate the effectiveness of providing warnings about specific misinformation (33, 34).

We therefore seek to determine whether efforts to promote digital media literacy can improve respondents’ ability to correctly evaluate the accuracy of online content across issues. Such a finding would suggest that digital media literacy shortfalls are a key factor in why people fall victim to misinformation. In particular, we consider the effects of exposure to Facebook’s “Tips to Spot False News,” which were developed in collaboration with the nonprofit First Draft and subsequently promoted at the top of users’ news feeds in 14 countries in April 2017 and printed in full-page newspaper advertisements in the United States, the United Kingdom, France, Germany, Mexico, and India (35–40). A variant of these tips was later distributed by WhatsApp (a Facebook subsidiary) in advertisements published in Indian and Pakistani newspapers in 2018 (41, 42). These tips are therefore almost surely the most widely disseminated digital media literacy intervention conducted to date. (The full treatments are provided in SI Appendix, section A.) The US treatment, which was adapted verbatim from Facebook’s campaign, consists of 10 strategies that readers can use to identify false or misleading stories that appear on their news feeds, whereas the India treatment, which uses adapted versions of messages shown in India by Facebook and WhatsApp, presents 6.

These interventions provide simple rules that can help individuals to evaluate the credibility of sources and identify indicators of problematic content without expending significant time or attention. For instance, one sample tip recommends that respondents “[b]e skeptical of headlines,” warning that “If shocking claims in the headline sound unbelievable, they probably are.” Such an approach should reduce reliance on low-effort processes that frequently lead people astray (e.g., perceptions of cognitive fluency) by teaching people more effective heuristics (e.g., skepticism toward catchy headlines). Importantly, the success of this approach does not require readers to take burdensome steps like conducting research or thinking deeply about each piece of news they encounter (which is typically impossible in practice given the volume of stories that social media users encounter). Instead, this intervention aims to provide simple decision rules that help people distinguish between mainstream and false news, which we call “discernment” following ref. 4.

There are important reasons to be skeptical about the effectiveness of this approach. Prior research has found that media literacy interventions like this can help people think critically about the media content they receive (43). However, prior studies focus mostly on offline health behavior; the extent to which these interventions are effective for controversial political claims or online (mis)information is largely unknown. Moreover, such interventions may struggle to overcome people’s reliance on heuristics such as familiarity and congeniality that news consumers use to evaluate the credibility of online stories (44, 45). Finally, attempting to identify false news through close scrutiny of a headline differs from the typical approach of professional fact checkers, who usually use “lateral reading” of alternative sources to corroborate claims (46).

We therefore conducted preregistered survey experiments in both the United States and India examining the effectiveness of presenting people with “tips” to help spot false news stories. [The US and India studies were each preregistered with Evidence in Governance and Politics; see Materials and Methods. All preregistered analyses are reported in this article or in the replication archive for the study (47).] Strikingly, our results indicate that exposure to variants of the Facebook media literacy intervention reduces people’s belief in false headlines. These effects are not only an artifact of greater skepticism toward all information—although the perceived accuracy of mainstream news headlines slightly decreased, exposure to the intervention widened the gap in perceived accuracy between mainstream and false news headlines overall. In the United States, the effects of the treatment were particularly strong and remained statistically measurable after a delay of approximately 3 wk. These findings suggest that efforts to promote digital media literacy can improve people’s ability to distinguish between false and mainstream news content, a result with important implications for both scientific research into why people believe misinformation online and policies designed to address the problem.

Our main research hypotheses evaluate whether the media literacy intervention reduces belief in false news stories (hypothesis 1 [H1]), increases belief in mainstream news content (H2), and improves respondents’ ability to distinguish between them (H3). We also consider three research questions (RQs) for which our a priori expectations were less clear. First, past research shows that the effects of many experimental treatments (e.g., in persuasion and framing studies) decay quickly over time (48), although providing participants with novel information may have more long-lasting effects (49). We therefore test the durability of our treatment effect by leveraging a two-wave panel design to test its effects several weeks after the initial intervention (RQ1). Second, it is also possible that interventions may work only to make individuals more skeptical of noncongenial content they are already inclined to dismiss, leaving their vulnerability to ideologically consistent misinformation unchanged. We therefore test for heterogeneity of the treatment effects based on the partisan congeniality of the content (RQ2). Finally, we test whether the intervention changed self-reported intentions to share false stories or subsequent online news consumption behavior in the US sample where these measures were available (RQ3). Additional analyses exploring heterogeneous treatment effects and alternate outcomes are discussed below, but full models appear in SI Appendix, section C. These analyses include whether intuitive cognitive style or prior headline exposure moderates the treatment effect, as well as whether the treatment affects the perceived credibility of “hyperpartisan” headlines.


US Survey Experiment.

Consistent with our first hypothesis (H1), randomized exposure to the media literacy intervention causes a decrease in the perceived accuracy of false news articles. Results from wave 1 of the US study in Table 1 show a decrease of nearly 0.2 points on a 4-point scale (intent to treat [ITT]: β=−0.196, SE=0.020; P<0.005). We observe similar effects of the media literacy intervention on the perceived accuracy of hyperpartisan headlines (ITT: β=−0.176, SE=0.020; P<0.005) (SI Appendix, section C, Table C2).

Table 1.

Effect of US media literacy intervention on perceived accuracy by news type

One concern is that the intent-to-treat effects described above understate the true effect of the intervention, which may have been neglected by some respondents. While we can offer the opportunity to read the digital literacy “fake news tips” intervention to a random subset of respondents, we cannot force every respondent to read these tips carefully.

We therefore also estimate the effect of the treatment on those who actually received it, which is known as the average treatment effect on the treated (ATT), using an instrumental variables approach. In this model, our indicator for receipt of treatment is the ability to correctly answer a series of follow-up questions about the content of the news tips (approximately two-thirds of respondents in the treatment condition [66%] were successfully treated) and our instrument is the original random assignment. Table 1 reports the ATT, which we compute using two-stage least-squares regression. With this approach, we estimate that the perceived accuracy of false headlines decreased by nearly 0.3 points on a 4-point scale (ATT: β=−0.299, SE=0.030; P<0.005).
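As a back-of-envelope check on this estimate (a sketch, not the authors' actual estimation code): with a binary instrument (random assignment), a binary indicator of treatment receipt, and noncompliance possible only in the treatment group, the two-stage least-squares estimate reduces to the Wald ratio of the ITT effect to the compliance rate:

```python
# Wald-ratio approximation of the ATT under one-sided noncompliance:
# ATT = ITT / Pr(received treatment | assigned to treatment).
itt = -0.196        # ITT effect on perceived accuracy of false headlines (4-point scale)
compliance = 0.66   # share of treatment-group respondents who passed the comprehension check
att = itt / compliance
print(round(att, 3))  # -0.297, close to the reported 2SLS estimate of -0.299
```

The small gap between −0.297 and the reported −0.299 plausibly reflects rounding of the published coefficients; the ratio is only an approximation of the full two-stage regression.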

We compare the characteristics of respondents who would successfully take the treatment only if assigned to it (“compliers”) to those who would not even if assigned to treatment (“never takers”) (SI Appendix, section B) (50). Compliers were more likely to be older, college graduates, interested in politics, politically knowledgeable, and Republican identifiers, and were more polarized in their feelings toward the two political parties than never takers. Compliers also scored lower in conspiracy predispositions and in their feelings toward Donald Trump. However, the substantive magnitudes of most of these differences are modest (SI Appendix, section B, Fig. B1). Crucially, we find no statistically significant evidence that respondents who take the treatment differ in their baseline propensity to visit untrustworthy websites compared to those who do not (analysis conducted among participants for whom presurvey behavioral data are available; see SI Appendix, section A for details). The average number of prior visits to false news websites is actually greater among compliers than among never takers, but this difference does not reach conventional thresholds of statistical significance (0.35 compared to 0.18; P=0.08).

Our next hypotheses predicted that the media literacy intervention would increase the perceived accuracy of mainstream news (H2) and increase people’s ability to successfully distinguish between mainstream and false news articles (H3). These results are shown in the second and third columns in Table 1. We find that exposure to the media literacy intervention had a small negative effect on belief in mainstream news in wave 1 (ITT, β=−0.046 [SE=0.017], P<0.01; ATT, β=−0.071 [SE=0.026], P<0.01). However, the negative effects of the intervention on the perceived accuracy of false news described above are larger. As a result, the media literacy intervention increased discernment between mainstream and false stories (ITT, β=0.146 [SE=0.024], P<0.005; ATT, β=0.223 [SE=0.035], P<0.005), demonstrating that it helped respondents to better distinguish between these two types of content. In relative terms, this effect represents a 26.5% improvement in respondents’ ability to distinguish between mainstream and false news stories compared to the control condition.
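As a rough consistency check (not the paper's estimation procedure, which models discernment directly), the discernment effect should approximate the gap between the two ITT effects reported above:

```python
# Discernment effect ~ (effect on mainstream) - (effect on false news):
effect_false = -0.196       # ITT effect on perceived accuracy of false headlines
effect_mainstream = -0.046  # ITT effect on perceived accuracy of mainstream headlines
gap = effect_mainstream - effect_false
print(round(gap, 2))  # 0.15, close to the jointly estimated discernment effect of 0.146
```

The slight difference from the reported 0.146 is expected, since the joint model estimates discernment directly rather than differencing two separate coefficients.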

In addition, we test the durability of these treatment effects in wave 2 per RQ1. After a delay between waves that averaged several weeks, the effect of the media literacy intervention on the perceived accuracy of false headlines remains statistically distinguishable from zero (SI Appendix, section C, Table C1). The median interval between waves was 20 d; the 5th to 95th percentile range was 16 to 29 d. While the effect is still present weeks later, its magnitude attenuates by more than half relative to wave 1 (ITT, β=−0.080 [SE=0.019], P<0.005; ATT, β=−0.121 [SE=0.028], P<0.005). In addition, the negative effect of the media literacy treatment on the perceived accuracy of mainstream news content was no longer statistically measurable by wave 2. As a result, the perceived accuracy difference between mainstream and false headlines remained statistically distinguishable from zero in the second wave, although its magnitude decayed (β=0.050; SE=0.020; P<0.05).

Fig. 1 illustrates the substantive magnitude of the intent-to-treat effects of the media literacy intervention in the United States using a binary indicator of perceived headline accuracy. The proportion of respondents rating a false headline as “very accurate” or “somewhat accurate” decreased from 32% in the control condition to 24% among respondents who were assigned to the media literacy intervention in wave 1, a decrease of 7 percentage points. This effect represents a relative decrease of approximately one-fourth in the percentage of people wrongly endorsing misinformation. Treatment effects continue to persist with this alternate measure—in wave 2, the intervention reduced the proportion of people endorsing false headlines as accurate from 33 to 29%, a 4-percentage-point effect. By contrast, the proportion of respondents rating a mainstream news headline as somewhat or very accurate decreased only from 57 to 55% in wave 1 and from 59 to 57% in wave 2.
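Treating the rounded percentages above as exact (an approximation: the 7-percentage-point figure in the text comes from the unrounded estimates), the "relative decrease of approximately one-fourth" works out as:

```python
control, treated = 0.32, 0.24  # share rating a false headline as accurate, wave 1 (rounded)
relative_decrease = (control - treated) / control
print(round(relative_decrease, 2))  # 0.25 with these rounded inputs
```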



6.1.2 A digital media literacy intervention increases discernment between mainstream and false news in the United States and India by Andrew M. Guess, Michael Lerner, Benjamin Lyons, Jacob M. Montgomery, Brendan Nyhan, Jason Reifler, and Neelanjan Sircar is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, except where otherwise noted.
