Algorithms

Internal code: 4.3b2
Target groups: (large) (social media) platforms

General description
 * Algorithms used to spread false information exploit human biases and predispositions, such as confirmation bias, the inclination to believe repeated stories, and attraction to novel content.

Interventions

Algorithms Impact measurement

Assumptions
 * FilterTube (2021) Experimental research on how personalization influences search results on YouTube.
 * Belgian Senate (2021) "… behind the algorithms they use there is always a particular choice. That is the core of the problem." (translated from Dutch)
 * Jankowicz, N. (2020) “The algorithmic recommendations on which social media platforms thrive ... had always been vectors for indoctrination and extremism.”
 * Chen, W. et al. (2021) "Given the political neutrality of the news feed curation, we find no evidence for attributing the conservative bias of the information ecosystem to intentional interference by the platform. The bias can be explained by the use (and abuse) of the platform by its users, and possibly to unintended effects of the policies that govern this use: neutral algorithms do not necessarily yield neutral outcomes."
 * Chen, W. et al. (2021) "While most drifters are embedded in clustered and homogeneous network communities, the echo chambers of conservative accounts grow especially dense and include a larger portion of politically active accounts. Social bots also seem to play an important role in the partisan social networks; the drifters, especially the Right-leaning ones, end up following a lot of them. Since bots also amplify the spread of low-credibility news, this may help explain the prevalent exposure of Right-leaning drifters to low-credibility sources."
 * Hindman, M. et al. (2022) "Most public activity on the platform comes from a tiny, hyperactive group of abusive users. ... And because Facebook’s algorithm rewards engagement, these superusers have enormous influence over which posts are seen first in other users’ feeds, and which are never seen at all. Even more shocking is just how nasty most of these hyper-influential users are. The most abusive people on Facebook, it turns out, are given the most power to shape what Facebook is."
 * Hindman, M. et al. (2022) "Perhaps the most important revelations that came from the former Facebook data engineer Frances Haugen’s trove of internal documents concerned the inner workings of Facebook’s key algorithm, called “Meaningful Social Interaction,” or MSI. Facebook introduced MSI in 2018, as it was confronting declining engagement across its platform ... The basics of MSI are simple: It ranks posts by assigning points for different public interactions. Posts with a lot of MSI tend to end up at the top of users’ news feeds—and posts with little are, usually, never seen at all. According to The Wall Street Journal, when MSI was first rolled out on the platform, a “like” was worth one point; reactions and re-shares were worth five points; “nonsignificant” comments were worth 15 points; and “significant” comments or messages were worth 30. A metric like MSI, which gives more weight to less frequent behaviors such as comments, confers influence on an even smaller set of users. Using the values referenced by The Wall Street Journal and drawing from Haugen’s documents, we estimate that the top 1 percent of publicly visible users would have produced about 45 percent of MSI on the pages and groups we observed ..."
 * Hindman, M. et al. (2022) "So long as adding up different types of engagement remains a key ingredient in Facebook’s recommendation system, it amplifies the choices of the same ultra-narrow, largely hateful slice of users. So who are these people? ... Of the 219 accounts with at least 25 public comments, 68 percent spread misinformation, reposted in spammy ways, published comments that were racist or sexist or anti-Semitic or anti-gay, wished violence on their perceived enemies, or, in most cases, several of the above."
 * Hindman, M. et al. (2022) "If each of Facebook’s 15,000 U.S. moderators aggressively reviewed several dozen of the most active users and permanently removed those guilty of repeated violations, abuse on Facebook would drop drastically within days. But so would overall user engagement."
 * In the Netherlands, supervision of algorithms is exercised by the AP (Autoriteit Persoonsgegevens, the Dutch Data Protection Authority). In addition, a new Algorithm Supervisory Body is planned, but few details are known yet.
 * Huszar, F. et al. "We present two sets of findings. First, we studied tweets by elected legislators from major political parties in seven countries. Our results reveal a remarkably consistent trend: In six out of seven countries studied, the mainstream political right enjoys higher algorithmic amplification than the mainstream political left. Consistent with this overall trend, our second set of findings studying the US media landscape revealed that algorithmic amplification favors right-leaning news sources. We further looked at whether algorithms amplify far-left and far-right political groups more than moderate ones; contrary to prevailing public belief, we did not find evidence to support this hypothesis."
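
The MSI point values that Hindman et al. cite above (like = 1, reaction/re-share = 5, "nonsignificant" comment = 15, "significant" comment or message = 30, per The Wall Street Journal) can be sketched as a toy feed scorer. This is a minimal illustration, not Facebook's actual implementation: the field names, functions, and ranking step are assumptions, and the real system is far more complex.

```python
# MSI-style engagement weights as reported by The Wall Street Journal.
# All names below are illustrative assumptions, not a real API.
MSI_WEIGHTS = {
    "likes": 1,
    "reactions": 5,
    "reshares": 5,
    "nonsignificant_comments": 15,
    "significant_comments": 30,
}

def msi_score(post: dict) -> int:
    """Sum the weighted interaction counts for a single post."""
    return sum(weight * post.get(kind, 0) for kind, weight in MSI_WEIGHTS.items())

def rank_feed(posts: list[dict]) -> list[dict]:
    """Order posts by descending MSI score, as an engagement-based ranker might."""
    return sorted(posts, key=msi_score, reverse=True)

posts = [
    {"id": "a", "likes": 100},                           # 100 points
    {"id": "b", "likes": 2, "significant_comments": 4},  # 2 + 120 = 122 points
    {"id": "c", "reshares": 10, "reactions": 5},         # 50 + 25 = 75 points
]
ranked = rank_feed(posts)
# Post "b", with only four "significant" comments, outranks post "a" with
# 100 likes — showing how heavily weighting rare behaviors concentrates
# ranking influence in the small set of users who produce them.
```

This is the dynamic the quote describes: because infrequent, high-weight interactions dominate the score, the hyperactive minority that produces them gains outsized control over what everyone else sees.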

Recommendations

Algorithm projects