Legal restrictions / Self-regulation: Impact measurement

 * (The New York Times (2021b)) Case study of Joseph Mercola, one of the "Disinformation Dozen".
 * (Center for Countering Digital Hate) "Analysis of a sample of anti-vaccine content that was shared or posted on Facebook and Twitter a total of 812,000 times between 1 February and 16 March 2021 shows that 65 percent of anti-vaccine content is attributable to the Disinformation Dozen ["The Disinformation Dozen are twelve anti-vaxxers who play leading roles in spreading digital misinformation about Covid vaccines."] ... Facebook, Google and Twitter have put policies into place to prevent the spread of vaccine misinformation; yet to date, all have failed to satisfactorily enforce those policies. All have been particularly ineffective at removing harmful and dangerous misinformation about coronavirus vaccines, though the scale of misinformation on Facebook, and thus the impact of their failure, is larger. Further, they have all failed to remove the accounts of prominent anti-vaxxers who have repeatedly violated their terms of service, as documented in later sections of this report."
 * (Center for Countering Digital Hate) CCDH found that just 10 publishers were responsible for 69% of climate denial content on Facebook.
 * (Avaaz (2021a)) "Avaaz found that, within this time period, the top 5 emitters of climate misinformation on Facebook were PragerU, Turning Point USA, John Stossel, Bjørn Lomborg, and Alan Jones. All five accumulated more than 61 million estimated views combined on posts containing climate falsehoods, including the claim that there is no evidence for the adverse effects of climate change. Facebook failed to label 88% of these posts. Moreover, Avaaz researchers found that the platform allowed these same actors to promote climate change denial and other misinformation through paid advertisements that were seen an estimated 6.9 million times by users."
 * (Avaaz (2021c)) "Research from Avaaz shows that in President Biden’s first 60 days in office, misinformation related to climate science and renewable energy racked up an estimated 25 million views across Facebook. While climate disinformation campaigns have been waged for decades, Facebook is a powerful new tool in misinformers’ playbooks. Avaaz’s research supports the hypothesis that tactics to prevent climate action are shifting away from outright climate change denialism, and toward more subtle misinformation narratives that promote “inactivism” ... Our research further suggests that Facebook is unprepared to address the growing and evolving landscape of climate misinformation. For instance, the top-performing misinformation posts in our sample garnered more interactions than factual posts about climate and energy from the New York Times and the Washington Post. Additionally, posts without a fact-checking label accounted for 45% of all estimated views."
 * (Avaaz (2021d)) "A majority (56%) of fact-checked misinformation content in major non-English European languages is not acted upon by Facebook, compared to only 26% of English-language content debunked by US-based fact checkers. This means Europeans are at greater risk of seeing and interacting with COVID-19-related misinformation: Italian speakers are least protected from misinformation, with measures lacking for 69% of Italian content examined. Next are French and Portuguese speakers, with measures lacking on 58% of French content and 50% of Portuguese content. Spanish speakers were most protected, though measures lacked for 33% of Spanish language content, which is still more than English content. Based on our sample analysis, on average, Facebook is almost one week slower to label non-English false content, taking 30 days for that content compared to 24 days for English-language false content."
 * (Avaaz (2021e)) "An analysis of the steps Facebook took throughout 2020 shows that if the platform had acted earlier, adopting civil society advice and proactively detoxing its algorithm, it could have stopped 10.1 billion estimated views of content from top-performing pages that repeatedly shared misinformation over the eight months before the US elections. Failure to downgrade the reach of these pages and to limit their ability to advertise in the year before the election meant Facebook allowed them to almost triple their monthly interactions, from 97 million interactions in October 2019 to 277.9 million interactions in October 2020 - catching up with the top 100 US media pages 2 (ex. CNN, MSNBC, Fox News) on Facebook. Facebook has now rolled back many of the emergency policies it instituted during the elections, returning to the algorithmic status quo that allowed conspiracy movements like QAnon and Stop the Steal to flourish."
 * (Avaaz (2021e)) "Avaaz identified 267 pages and groups - in addition to “Stop the Steal” groups - with a combined following of 32 million, spreading violence-glorifying content in the heat of the 2020 election. Out of the 267 identified pages and groups, 68.7 percent had Boogaloo, QAnon or militia-aligned names and shared content promoting imagery and conspiracy theories related to these movements. Despite clear violations of Facebook’s policies, 118 of those 267 pages and groups are still active on the platform and have a following of just under 27 million - of which 59 are Boogaloo, QAnon or militia-aligned. Among these we found at least three instances of content which verged on incitement to violence".
 * (Avaaz (2021e)) "The top 100 most popular false or misleading stories on Facebook, related to the 2020 elections, received an estimated 162 million views. The millions of users who saw these misinformation stories before they were labeled never received retroactive corrections to inform them that what they had seen wasn’t true. Although each of the 100 stories had a fact-check publicly available from an organisation working in partnership with Facebook, 24% of the stories (24) had no warning labels to inform users of falsehood."
 * (Saltz, E. et al. (2021)) For the USA: "We find that encounters with platform misinformation interventions are widespread (49% overall report some exposure occurring after the 2020 election), but the process for reviewing content on platforms is poorly understood (40% believe most or all content on online platforms is fact-checked, 17% are unsure). Interventions trigger polarized responses between Republicans and Democrats (roughly a standard deviation gap). Besides partisanship, we surface several additional traits that correlate strongly with intervention attitudes. Trust in institutions (though not any particular institution) strongly predicts support for different intervention types and intervention sources (though not any particular source). Moreover, positive experiences with misinformation interventions predict increased intervention support from both Democrats and Republicans, suggesting that, in some cases, perceived intervention efficacy and reduction of false positives may help to overcome disapproving attitudes."
 * (Frankfurter Rundschau on Twitter) Facebook comment sections can be flooded by trolls because Facebook limits the number of comments that can be deleted to 10,000.
 * (Rule of Law) Evaluation of the impact of the Ustawa o ochronie wolności słowa w internetowych serwisach społecznościowych (Act on the Protection of Freedom of Speech on Online Social Networking Services), proposed by the Polish Ministry of Justice.