Legal Restrictions / Self-Regulation Restrictions

Internal code: 4.3a

Target groups: initiators, recipients

General description
 * (HLEG, 2018) Online platforms are making efforts to respond to the distribution of disinformation. Key efforts include, first, steps to identify and remove illegitimate accounts; second, steps to integrate signals of credibility and trustworthiness into ranking algorithms and to recommend alternative content, increasing the "findability" of credible content; third, attempts to de-monetize the for-profit fabrication of false information; and fourth, collaboration with independent source-checking and fact-checking organizations.
 * EU Human Rights Guidelines on Freedom of Expression Online and Offline, 2018: An open society based on the rule of law can only operate effectively if there is an independent and pluralistic media environment offline and online. A free, diverse and independent press and other media provide public platforms that are essential to any society to ensure freedom of opinion and expression and the enjoyment of other human rights.

Interventions

 * (Reuters) "Meta Platforms ... denied a claim by the Kazakh government that it had been granted exclusive access to the social network's content reporting system".
 * (GARM) Brand Safety Floor + Suitability Framework "The Global Alliance for Responsible Media (GARM) is an industry first effort that unites marketers, media agencies, media platforms, and industry associations to safeguard the potential of digital media by reducing the availability and monetization of harmful content online."
 * (WSJ, 31.1.2021) Facebook clamps down on Groups.
 * (Facebook's Nathaniel Gleicher on Twitter) "We are now prohibiting Russian state media from running ads or monetizing on our platform anywhere in the world. We also continue to apply labels to additional Russian state media."
 * (TheNextWeb on Meta) "The social media behemoth has provisionally permitted hate speech in certain Facebook and Instagram posts. There’s no need to worry, however: the malice can only target Meta-approved baddies. Violent speech will now be permitted when targeting Russian soldiers and Russians in the context of the Ukraine invasion, according to internal emails seen by Reuters. Meta is also now allowing posts that call for death to Russian President Vladimir Putin or his Belarusian counterpart Alexander Lukashenko. The provisional policy change applies to Ukraine, Russia, Poland, Latvia, Lithuania, Estonia, Slovakia, Hungary, and Romania."
 * Reddit discussion on potential interventions
 * Twitter gives three justifications for its interventions: privacy, harm, and misinformation. All three areas are problematic. Since 2020 it has set up a user community, Bluesky, to begin addressing these interventions - see https://bluesky-community.net/.
 * (Yoel Roth on Twitter) "Today, we’re adding labels to Tweets that share links to Russian state-affiliated media websites and are taking steps to significantly reduce the circulation of this content on Twitter. We’ll roll out these labels to other state-affiliated media outlets in the coming weeks."
 * (NATO 2021) (p.13) Overview
 * (NATO 2021) "... the actions of social media companies do not represent a panacea to the problem of disinformation. Limited legislative and governmental regulation and intervention on these platforms have left it up to companies to create their own guidelines. This state of play is unsatisfactory and needs careful rethinking for two main reasons. First, self-regulation has not always been effective in eliminating false content or preventing hostile actors from disseminating misleading and divisive content (Berzina et al, 2019). Second, leaving social media companies solely responsible for tackling disinformation raises concerns about corporate censorship, politicization of the platforms, and lack of transparency. The boundary between disinformation on the one hand, and freedom of speech on the other must therefore be set through a democratic process and not be left to the discretion of private entities. In collaboration with the private sector and civil society, democratic governments should work towards achieving a shared understanding of the threat posed by online disinformation and set common standards on how to tackle it, while at the same time protecting freedom of speech."
 * (The Economist) Censorious governments use disinformation as a pretext to clamp down on critics.
 * (Belgian Senate (2021)) (p.23-30) Overview
 * (Belgian Senate (2021)) "Facebook is aware of its responsibility and therefore applies a three-pronged approach: remove, reduce, and inform."
 * (Belgian Senate (2021)) "The most far-reaching intervention is the removal of accounts or posts. This happens with posts that violate the house rules (for example, the use of fake accounts). More than one million fake accounts are removed every day."
 * (Belgian Senate (2021)) "Posts that do not directly violate the company's house rules but are of low quality or untrue are given less distribution. For this, Facebook works with external, independent fact-checkers (in Belgium: DPA, Knack, AFP). When these fact-checkers label something as 'false', the company drastically reduces the distribution of the post."
 * (Belgian Senate (2021)) "Facebook wants to inform its users: by giving them more context, they can decide for themselves what to read, trust, and share. When a post is rated as false by fact-checkers, Facebook shows warning labels with the fact-checkers' rating and research to users who see the post, try to share it, or have already done so."
 * (Belgian Senate (2021)) "YouTube pursues a similar reduction and removal policy."
 * (Belgian Senate (2021)) "Nevertheless, experts point to gaps in this approach, particularly regarding the operation of the algorithms."
 * (Belgian Senate (2021)) "In its note to the committee, Google states that it applies strict rules based on the existing European Code of conduct on countering illegal hate speech online."
 * (KRRiT (nd)) (p.42-58; 71-82) Overview
 * (KRRiT (nd)) "In Poland, the Marketing Communication Association (Stowarzyszenie Komunikacji Marketingowej SAR), a signatory of the Code of Practice on Disinformation, has initiated a campaign aimed at engaging and involving as many participants in the advertising market as possible in work on implementing standards of transparency and traffic quality on the Internet. SAR is currently running a Round Table project devoted to countering disinformation on the Internet and its effects on network users, consumers, and advertisers."
 * (KRRiT (nd)) (p.94-96) Facebook initiatives in Poland
 * EU Human Rights Guidelines on Freedom of Expression Online and Offline, 2018: The EU will:
 * - Support action by third countries to ensure legal, policy and regulatory frameworks based on international standards that protect and promote freedom of expression and information.
 * - Support actions by third countries to enact necessary procedures to facilitate individuals to receive information, including through freedom of information laws.
 * - Promote the independence of and the protection against political or commercial interference of all public bodies that regulate media, broadcasting or telecommunications.
 * - Support actions by third countries to improve transparency of media ownership, the adoption of measures against media concentration and fair and transparent licencing allocation as the associated risks have grown more acute in the digital age.
 * - Support actions by third countries aiming at the strengthening of journalistic and editorial independence, including through legal and financing mechanisms reinforcing financial self-sustainability of both public and private media.
 * - Encourage the promotion, in third countries, of measures, in particular voluntary self-regulatory initiatives and mechanisms such as media ethics codes, which enhance press accountability.
 * - Encourage free and pluralistic reporting on elections as well as equitable political party access to public service media during election campaigns.
 * - Encourage independent organisations to actively monitor the situation of media freedom and pluralism in different countries.


 * (Bloomberg) Blocking Russian-sponsored information outlets like Sputnik and RT. "The restrictions prohibit broadcasting content from the two television channels and their subsidiaries, downloading their apps and sharing their output on social media platforms, according to documents published by the European Commission today."
 * (The Guardian) Sanction disinformation outlets. "Twelve key disinformation outlets used to bolster Vladimir Putin have been hit with sanctions in an online crackdown on “false and misleading” reports claimed to be orchestrated by Russian intelligence."

Legal Restrictions / Self-Regulation Impact Measurement

Assumptions
 * (Miro-Llinares, F., Aguerri, J. (2021)) "... there are dozens of empirical studies that have attempted to describe and analyse an issue that, despite still being in the process of definition, has been identified as one of the key COVID-19 cyberthreats by Interpol, is considered a threat to democracy by many states and supranational institutions and, as a consequence, is subject to regulation or even criminalization. These legislative and criminal policy interventions form part of the first stage in the construction of a moral panic that may lead to the restriction of freedom of expression and information. By analysing empirical research that attempts to measure the extent of the issue and its impact, the present article aims to provide critical reflection on the process of constructing fake news as a threat. Via a systematic review of the literature, we observe, firstly, that the concept of fake news used in empirical research is limited and should be refocused because it has not been constructed according to scientific criteria and can fail to include relevant elements and actors, such as governments and traditional media. Secondly, the article analyses what is known scientifically about the extent, consumption and impact of fake news and argues that it is problematic to establish causal relationships between the issue and the effects it has been said to produce. This conclusion requires us to conduct further research and to reconsider the position of fake news as a threat as well as the resulting regulation and criminalization."
 * (Howley, D. (2021)) "Political disinformation, user harassment, and crypto scams already pervade the digital world, and they’ll only become bigger problems as we enter the metaverse — unless companies tackle them head-on."

Recommendations
 * European Commission (2018a). Action Plan against Disinformation - Code of Practice: large online platforms should immediately (i) ensure scrutiny of ad placement and transparency of political advertising, based on effective due diligence checks of the identity of the sponsors, (ii) close down fake accounts active on their services and (iii) identify automated bots and label them accordingly. Online platforms should also cooperate with the national audio-visual regulators and with independent fact-checkers and researchers to detect and flag disinformation campaigns in particular during election periods and to make fact-checked content more visible and widespread.
 * (NATO 2021) "... individual member states should translate the collective recommitment to the democratic values and principles that underpin the Alliance contained in the June 2021 communiqué into concrete actions at the national level. In particular, member countries should reassert their commitment to women’s equal rights and dedicate additional resources to understanding and countering the effects of gendered disinformation on our democracies."
 * (NATO 2021) "Allied countries should cooperate with digital companies in a more informal manner to develop a democratic digital domain in which freedom of speech is upheld and disinformation prevented from spreading. This cooperation should focus on four key aspects. First, Allied countries and digital companies should assess the impact of the measures that the latter have recently adopted, particularly around the 2020 US election, and replicate successful practices where appropriate. Second, Allied governments should urge these companies to invest in technologies, such as algorithms, that can automatically identify disinformation and flag it for users. Third, they should press digital companies to strengthen online accountability by bolstering their policies against users posting under false names. Finally, governments should work with these companies to take down authoritarian state-sponsored news outlets and social media accounts disseminating disinformation and propaganda."
 * (NATO 2021) "Member states should foster public trust in the integrity of electoral processes which constitute the basis of our democratic systems. To that end, they could consider developing a shared framework on the protection of elections against disinformation and propaganda. This framework should include common standards on strategic communications around elections, specific media regulations, and eventual sanctions against disruptive actors."
 * (NATO 2021) "Allied countries should cooperate to develop a common transatlantic legislative approach on regulating online content with a view to prevent the dissemination of disinformation and propaganda. Large technological companies should not be placed in a position to make unilateral decisions on the acceptability or veracity of content. Rather, such issues should be addressed through a strong legislative framework that would be transparent and democratic in nature and would both guarantee freedom of speech and prevent online hostile information activities. This framework should aim to replace currently competing national efforts with a common approach to managing the use of information technologies, including emerging technologies such as AI and deepfakes. Beyond preventing the misuse of information tools, this framework should also support a positive vision of digital technology as a democratic tool."
 * (The Royal Society (2022)) "To promote standards and guide startups, interested parties need to collaborate to develop examples of best practice for countering misinformation as well as datasets, tools, software libraries, and standardised benchmarks."
 * (Belgian Senate (2021)) "The Senate recommends ensuring that a balance is maintained between, on the one hand, the fight against disinformation and, on the other, respect for fundamental rights, in particular freedom of expression."
 * (Belgian Senate (2021)) "The Senate recommends being aware of the importance of social media, which can form veritable 'echo chambers' of disinformation among citizens."
 * (Belgian Senate (2021)) "The Senate recommends examining how to avoid social media platforms themselves having to judge the legality of posts by users of the platforms."
 * (Belgian Senate (2021)) "The Senate recommends actively involving social media platforms in informing citizens about how these platforms select and distribute posts and how citizens can detect disinformation."
 * (Belgian Senate (2021)) "The Senate recommends that social media companies make their policies on the criteria for removing posts and advertisements by politicians more transparent."
 * (Belgian Senate (2021)) "The Senate recommends that social media companies consult with one another and be transparent about their policies for countering the spread of disinformation."
 * (Belgian Senate (2021)) "The Senate recommends that political parties remain legally liable for what they share or distribute."
 * (Belgian Senate (2021)) "The Senate recommends supplementing the codes of conduct for political office-holders with the duty to refrain from deliberately spreading disinformation."
 * (Belgian Senate (2021)) "The Senate recommends that the laws on the recognition and financing of parties explicitly state that political parties may not use bots or fake accounts and must always remain legally liable."
 * (Belgian Senate (2021)) "The Senate recommends imposing a transparency obligation: on political parties when distributing political advertisements, and on all social media when distributing political content for payment."
 * (Belgian Senate (2021)) "The Senate recommends paying particular attention to making political candidates more resilient against disinformation, smear campaigns, hate speech, etc., of which women in particular are the victims."

Legal Restrictions / Self-Regulation Restrictions Projects