# The Psychology of Fake News

- Author(s): Gordon Pennycook and David G. Rand
- Date: 2021
- Publication: Trends in Cognitive Sciences
- [Link](https://doi.org/10.1016/j.tics.2021.02.007)

---

## Summary

This is a review article of the psychological research surrounding fake news and misinformation and, in particular, the interventions that seemed promising at the time. I would say that it is not a comprehensive review; it focuses mostly on making arguments in support of the authors' own work (which is, admittedly, promising).

### Why do people fall for fake news?

This section of the paper highlights three specific areas of interest that the authors dig into further: **political motivations, reasoning, and heuristics**.

#### Political motivations

The authors conclude this section with:

> Taken together, the evidence therefore suggests that political identity and politically motivated reasoning are not the primary factors driving the inability to tell truth from falsehood in online news.

The general argument is that political identity does play a role, but that its role is overhyped.

#### Reasoning

The authors conclude this section with:

> Thus, when it comes to the role of reasoning, it seems that people fail to discern truth from falsehood because they do not stop to reflect sufficiently on their prior knowledge (or have insufficient or inaccurate prior knowledge) and not because their reasoning abilities are hijacked by political motivations.

Work in this vein has a particular focus on dual-process theories stipulating that analytic thinking can override automatic, intuitive responses. Dual-process theories are a core component of research on the cognitive science of reasoning. These theories argue that human cognition can be partitioned into two fundamentally different types of processes that differ in their characteristics: **Type 1** (or System 1) processing is characterized primarily by automaticity, such that Type 1 outputs ('intuitions') come to mind directly as a response to the stimulus; **Type 2** (or System 2) processing is characterized by the deliberation that may or may not arise given a particular intuitive output (or set of outputs) [see Box 3 from the original text for some good references].

#### Heuristics

> Prior work in judgment and decision making ([[Paper_Khaneman_1982_JudgementsUnderUncertainty]]) indicates that people are likely to use heuristics or mental shortcuts when judging news headlines.

**What, then, are the specific features of fake news that influence people's intuitions or cause them to make mistakes when reasoning?** The features/factors that people use to guide their heuristics (outlined by the authors in this paper) are:

- **Familiarity / the *illusory truth effect***: feelings of familiarity brought on by repetition of fake news (possibly via increased processing fluency) likely contribute to increased belief in false claims.
- **News source**: participants are more likely to believe information provided by people whom they view as credible.
- **Social feedback from platforms** (e.g., "likes"): posts with more likes are presumed to be more accurate.
- **Emotional content**: fake news is often geared toward provoking shock, fear, anger, or (more broadly) moral outrage. This is problematic because other work shows that instructing individuals to rely on emotion increases their belief in false news.
### Believing versus Sharing Fake News

Generally, work covered in this paper suggests that people do not *want* to share fake news, but that they may share it as a result of the context of social media, which does not place accuracy judgments at the forefront of sharing behavior. In some sense, they lack the attentional focus to consider accuracy as a salient factor at the time of sharing. This supports what the authors call the "**inattention-based account**" of why people share misinformation.

Other views they cover but then discount are:

- The **confusion-based** account: people are fooled by misinformation and thus share it because they believe it is true.
  - They highlight work indicating that only 33% of the misinformation that was shared (in one study) was also believed. This suggests that most misinformation that was shared was actually **not** believed, which runs counter to this perspective.
- The **preference-based** account: this theory is based on the idea that people share misinformation that aligns with their political identity (or some other type of virtue signaling), such that they prioritize this behavior above the truth.
  - However, they again highlight work showing that, of the false headlines shared in one study, 16% were shared despite being identified as inaccurate. Thus, this does appear to happen, but it seems unlikely to explain the bulk of false or misleading content that is shared online.

### What Can Be Done? Interventions To Fight Fake News

The authors cover **current** and **new approaches**.

#### Current

- Machine learning to identify posts and then downrank them (a minimal sketch of this idea appears below)
  - Pros: automatic and scalable
  - Cons: truth is not black and white (even professional fact-checkers often disagree on how exactly to classify content); "nonstationarity" (misinformation content tends to evolve rapidly, so the features that are effective at identifying misinformation today may not be effective tomorrow)
- Corrections/warnings
  - Pros: a lot of work shows that this is effective at reducing misperceptions and sharing
  - Cons: not scalable, and typically only attached to blatantly false claims, which may lead to the *implied truth* problem: users may assume that [false or misleading] headlines *without* warnings have actually been verified.
  - Fact-checks also often fail to reach their intended audience (Guess, A.M. et al. [2020] Exposure to untrustworthy websites in the 2016 US election. Nat. Hum. Behav.), may fade over time (Swire, B. et al. [2017] The role of familiarity in correcting inaccurate information. J. Exp. Psychol. Learn. Mem. Cogn.), provide incomplete protection against familiarity effects (Pennycook, G. et al. [2018] Prior exposure increases perceived accuracy of fake news. J. Exp. Psychol. Gen.), and can cause corrected users to subsequently share more low-quality and partisan content (Mosleh, M. et al. [2021] Perverse consequences of debunking in a Twitter field experiment: being corrected for posting false news increases subsequent sharing of low quality, partisan, and toxic content. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, http://dx.doi.org/10.1145/3411764.3445642).
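To make the machine-learning approach above concrete, here is a minimal sketch of how a platform might score and downrank posts. This is an illustration under assumptions, not any platform's actual system: it assumes a simple TF-IDF bag-of-words classifier (the paper does not name a specific model), and the labeled headlines and the `penalty` parameter are hypothetical placeholders. The "nonstationarity" con shows up here as the need to frequently refit `model` as misinformation content evolves.

```python
# Minimal sketch (hypothetical data and model choice): score posts with a
# simple text classifier and demote those that look like misinformation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical placeholder training data: 1 = misinformation, 0 = not.
headlines = [
    "Scientists confirm chocolate cures all known diseases",
    "Local council approves budget for road repairs",
    "Secret moon base hidden from the public for decades",
    "Study finds moderate exercise improves sleep quality",
]
labels = [1, 0, 1, 0]

# TF-IDF bag-of-words features fed into a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(headlines, labels)

def downrank(posts, penalty=0.5):
    """Order posts by a ranking score that is reduced in proportion to
    the model's predicted probability of misinformation."""
    p_misinfo = model.predict_proba(posts)[:, 1]  # P(class 1) per post
    scored = [(post, 1.0 - penalty * p) for post, p in zip(posts, p_misinfo)]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

for post, score in downrank([
    "Miracle fruit reverses aging overnight",
    "City opens new public library branch",
]):
    print(f"{score:.2f}  {post}")
```

Because fact-checkers themselves disagree on labels and content drifts over time, a real deployment would have to treat both the training set and the demotion strength as moving targets rather than fixed choices.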
- Make sources of articles more visible/salient
  - Pros: attempts to leverage our heuristic reliance on source credibility
  - Cons: numerous studies find that making source information more salient (or removing it entirely) has little impact on whether people judge headlines to be accurate or inaccurate, though a few have found some positive results (see paper for sources).

#### New Approaches

- Inoculation/prebunking
  - Pros: promising results; there is some scalable, [ecologically valid work in conjunction with YouTube](https://www.science.org/doi/full/10.1126/sciadv.abo6254). See also Sander van der Linden's work and the Bad News Game.
  - Cons: prebunks (in practice) are *opt-in*; that is, people have to actively choose to engage (and often for long periods), and those who need the intervention most also seem most likely not to opt in.
    - Some "light-touch" inoculations that simply present users with information that may help them spot misinformation may not have this problem.

> **Important side note:** Inoculation and fact-checking methods attempt to help people distinguish what is and is not misinformation. However, as mentioned [[Paper_Pennycook_2021_PsychologyOfFakeNews#Believing versus Sharing Fake News | above]], sharing misinformation does not necessarily seem to be driven mostly by people being fooled (i.e., a lack of ability). Instead, it seems like they are simply not considering accuracy at all when sharing content (the inattention-based account).

- Accuracy prompts
  - Examples from studies: Twitter DMs asking users to rate the accuracy of a politically neutral headline; asking users how they know an article is accurate before sharing it
  - Pros: scalable; does not require a third-party "arbiter of truth"[^1]
  - Cons: unlikely to be effective for everyone all the time; unclear how long effects will last (most studies have tested only a short time period)
- Crowdsourced fact-checks
  - Pros: some work shows that the *average* rating of layperson crowds generally agrees with fact-checkers. The average is important here: individual lay ratings are noisy and biased, but much of that error cancels out in aggregate.
  - Jennifer Allen's work examining Twitter's Community Notes (formerly "Birdwatch") shows that people almost exclusively fact-check stories posted by members of their political out-group.
  - Cons: there is a publisher-based bias built into crowd ratings; because people rely on source-based heuristics, new/niche publishers are unfairly penalized for being relatively unknown.
    - There are likely some solutions to this problem.

[^1]: Note that most platforms already do this. See: https://www.poynter.org/ifcn/

---

#### Related

#psychology #misinformation #misinfo_interventions