Fact-Checking Impact Measurement


 * Pennycook & Rand (2017) conducted five experiments to test the effectiveness of attaching warnings to news stories that have been disputed by third-party fact-checkers. They found that while warnings led to a modest reduction in the perceived accuracy of fake news relative to a control condition, there was also an implied truth effect: the presence of warnings caused untagged stories to be seen as more accurate than in the control. Fact-checking also becomes ineffective when confirmation and desirability biases are prevalent. Another problem with this flagging strategy is that flagging can take considerable time, compared with the duration of an average news cycle, which does not exceed 48 hours (Tan et al., 2016) and is probably much shorter for the bulk of sharing on social media websites.
 * (Roozenbeek, J. & van der Linden, S. (2019)) “decades of research on human cognition finds that misinformation is not easily corrected. In particular, the continued influence effect of misinformation suggests that corrections are often ineffective as people continue to rely on debunked falsehoods (Nyhan and Reifler, 2010; Lewandowsky et al., 2012). Importantly, recent scholarship suggests that false news spreads faster and deeper than true information (Vosoughi, Roy, and Aral, 2018). Accordingly, developing better debunking and fact-checking tools is therefore unlikely to be sufficient to stem the flow of online misinformation (Chan et al., 2017; Lewandowsky, Ecker, and Cook, 2017).”
 * (Jankowicz, N. (2020)) “A press release, no matter how well written, cannot fully correct a salacious story. A fact-check, even if verified beyond a shadow of a doubt, will not convince a conspiracy theorist to give up his fervent speculations.”
 * (Jankowicz, N. (2020)) “... a Yale University study found that labeling content “disputed” on the platform [Facebook] had little effect on user behaviour. The labels helped only 3.6 percent of those surveyed identify false stories. More worryingly, among supporters of President Trump and those under twenty-six, the study found that the flags did the opposite of their intended goal and made users more likely to believe the content. Finally, the study warned about a truly frightening problem of scale. In what it dubs the “Implied Truth Effect,” the study found that users assumed unlabeled content was accurate, a nearly unsurmountable obstacle in the endless flow of information that is today’s internet.” [Gordon Pennycook et al., “The Implied Truth Effect,” Management Science, January 16, 2020.]
 * See also Artificial Intelligence Impact measurements and Legal Restrictions/Self-Regulation Restrictions Impact measurements
 * (Avaaz (2020b)) Anti-BLM pages on Facebook: "At the time of our investigation, many of the posts we reviewed did not carry a fact-checking label, even though the false or misleading narratives the posts were based on had previously been debunked by independent fact-checkers."
 * (Lewandowsky, S. et al. (2021)) "Research using this paradigm has consistently found that retractions rarely, if ever, have the intended effect of eliminating reliance on misinformation, even when people believe, understand, and later remember the retraction (e.g., Ecker, Lewandowsky, & Apai, 2011; Ecker, Lewandowsky, Swire, & Chang, 2011; Ecker, Lewandowsky, & Tang, 2010; Fein, McCloskey, & Tomlinson, 1997; Gilbert, Krull, & Malone, 1990; Gilbert, Tafarodi, & Malone, 1993; H. M. Johnson & Seifert, 1994, 1998, 1999; Schul & Mazursky, 1990; van Oostendorp, 1996; van Oostendorp & Bonebakker, 1999; Wilkes & Leatherbarrow, 1988; Wilkes & Reynolds, 1999). In fact, a retraction will at most halve the number of references to misinformation, even when people acknowledge and demonstrably remember the retraction (Ecker, Lewandowsky, & Apai, 2011; Ecker, Lewandowsky, Swire, & Chang, 2011); in some studies, a retraction did not reduce reliance on misinformation at all (e.g., H. M. Johnson & Seifert, 1994)."
 * (Lewandowsky, S. et al. (2021)) "The wealth of studies on this phenomenon have documented its pervasive effects, showing that it is extremely difficult to return the beliefs of people who have been exposed to misinformation to a baseline similar to those of people who were never exposed to it."
 * Lewandowsky, S. et al. (2021) sum up the key explanations for why retractions do not have the desired impact: mental models, retrieval failure, fluency and familiarity, reactance, the individual's worldview, and the individual's level of skepticism.
 * (Lewandowsky, S. et al. (2021)) "To date, only three factors have been identified that can increase the effectiveness of retractions: (a) warnings at the time of the initial exposure to misinformation, (b) repetition of the retraction, and (c) corrections that tell an alternative story that fills the coherence gap otherwise left by the retraction."
 * (Lewandowsky, S. et al. (2021)) "... it appears that a careful and prolonged dissection of incorrect arguments may facilitate the acquisition of correct information. To illustrate this point, Kowalski and Taylor (2009) conducted a naturalistic experiment in which they compared a standard teaching format with an alternative approach in which lectures explicitly refuted 17 common misconceptions about psychology but left others unchallenged. The results showed that direct refutation was more successful in reducing misconceptions than was the nonrefutational provision of the same information. On the basis of a more extensive review of the literature, Osborne (2010) likewise argued for the centrality of argumentation and rebuttal in science education, suggesting that classroom studies “show improvements in conceptual learning when students engage in argumentation” (p. 464)."
 * (Van der Linden (2020)) "Meta-analyses have consistently found that fact-checking and debunking interventions can be effective, including in the context of countering health misinformation on social media. However, not all medical misperceptions are equally amenable to corrections. In fact, these same analyses note that the effectiveness of interventions is significantly attenuated by (1) the quality of the debunk, (2) the passing of time, and (3) prior beliefs and ideologies."
 * (Van der Linden (2020)) "When designing corrections, simply labeling information as false or incorrect is generally not sufficient because correcting a myth by means of a simple retraction leaves a gap in people’s understanding of why the information is false and what is true instead. Accordingly, the recommendation for practitioners is often to craft much more detailed debunking materials. Reviews of the literature have indicated that best practice in designing debunking messages involves (1) leading with the truth, (2) appealing to scientific consensus and authoritative expert sources, (3) ensuring that the correction is easily accessible and not more complex than the initial misinformation, (4) a clear explanation of why the misinformation is wrong, and (5) the provision of a coherent alternative causal explanation. Although there is generally a lack of comparative research, some recent studies have shown that optimizing debunking messages according to these guidelines enhances their efficacy when compared with alternative or business-as-usual debunking methods."
 * (Van der Linden (2020)) "The consensus is therefore that, although practitioners should be aware of these backfire concerns, they should not prevent the issuing of corrections given the infrequent nature of these side effects."
 * (Van der Linden (2020)) "... there are two other notable problems with therapeutic approaches that limit their efficacy. The first is that retrospective corrections do not reach the same amount of people as the original misinformation. For example, estimates reveal that only about 40% of smokers were exposed to the tobacco industry’s court-ordered corrections. A related concern is that, after being exposed, people continue to make inferences on the basis of falsehoods, even when they acknowledge a correction. This phenomenon is known as the ‘continued influence of misinformation’, and meta-analyses have found robust evidence of continued influence effects in a wide range of contexts."
 * (Carey, J. et al. (2022)) "Preregistered survey experiments in the United States, Great Britain and Canada show that exposure to fact-checks decreased the perceived accuracy of targeted false claims about COVID-19 immediately after exposure. These decreases in false beliefs were often greatest among people who were previously most misinformed and/or who were potentially especially susceptible due to political affiliations or distrust of established authorities. However, we find no evidence that repeated exposure to fact-checks increases their effects or that exposure to these claims has durable effects on the accuracy of people’s beliefs over time."
 * (Lisa Forte on Twitter) "this raises an interesting point about news consumption on social media. In normal press they’d issue a retraction for poorly substantiated reports. That’s lost on social, so any retraction or admission of “fakery” is lost. No retrospective or prospective correction ability?"
 * (Alexandria Ocasio-Cortez on Twitter) "One thing I have learned about many fact-check operations is that while the facts of a given matter may be “objective,” who/when/what they check or publish seems almost entirely subjective. They rarely share their standards for what meets a check ..."

Russia-Ukraine war
 * A deepfake of Ukrainian president Zelensky was disseminated by Russian disinformation channels and was immediately debunked by Zelensky himself. Wired: "That short-lived saga could be the first weaponized use of deepfakes during an armed conflict, although it is unclear who created and distributed the video and with what motive. The way the fakery unraveled so quickly shows how malicious deepfakes can be defeated—at least when conditions are right."
 * (Twitter) Russian disinformation channels claimed that Ukrainian refugees had attacked volunteers in Germany, and influencers on TikTok repeated the claims. The local police in Euskirchen immediately debunked the story, and one TikTok influencer apologized.