Hate and Extremism on Social Media: How Do the Algorithms Contribute?
Updated: May 2, 2022
Hateful, extremist, and dangerous content is bound to find its way onto social media for as long as such ideas exist in people; the ideologies, and even passing thoughts, that live in a person's mind inevitably surface online as well. While steps can be taken to prevent this, such as banning accounts and particular language, some harmful content will inevitably slip through the cracks. Recommendation algorithms can exacerbate this issue. Recommendation systems use the data they are given (personal information, user interactions, etc.) to predict what the user will enjoy. For example, if you regularly interact with history-related TikToks, the app will recommend similar videos to you in the future, along with other videos the engine assumes you will like. One of the most prevalent concerns about these prediction-driven systems is that their recommendations can lead users toward increasingly harmful content.
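To make that mechanism concrete, the toy sketch below scores candidate videos by how much their topic tags overlap with a user's watch history. It is purely illustrative and not any platform's actual recommender; the data, tags, and function names are all hypothetical.

```python
# Illustrative sketch only: a toy content-based recommender that ranks
# candidate videos by how much their topic tags overlap with the topics
# a user has already interacted with. Real platforms use far more signals.

from collections import Counter

def build_interest_profile(watched_videos):
    """Count how often each topic tag appears in the user's watch history."""
    profile = Counter()
    for video in watched_videos:
        profile.update(video["tags"])
    return profile

def rank_candidates(profile, candidates):
    """Order unseen videos by overlap between their tags and the user's profile."""
    def score(video):
        return sum(profile.get(tag, 0) for tag in video["tags"])
    return sorted(candidates, key=score, reverse=True)

# Hypothetical data: a user who mostly watches history content.
history = [
    {"id": "v1", "tags": ["history", "ww2"]},
    {"id": "v2", "tags": ["history", "ancient-rome"]},
]
candidates = [
    {"id": "v3", "tags": ["history", "cold-war"]},
    {"id": "v4", "tags": ["cooking", "baking"]},
]

profile = build_interest_profile(history)
print([v["id"] for v in rank_candidates(profile, candidates)])  # ['v3', 'v4']
```

The history-focused candidate outranks the unrelated one purely because of past interactions, which is the feedback loop the rest of this piece is concerned with.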
In her New York Times article "YouTube, the Great Radicalizer," Zeynep Tufekci describes a concerning pattern in YouTube's recommendation algorithm (Tufekci, 2018). In her own experience, the platform would recommend increasingly extreme content based on previously watched videos. In her words, "Videos about vegetarianism led to videos about veganism. Videos about jogging led to videos about running ultramarathons" (Tufekci, 2018). The suspected reason behind this phenomenon is that users are more likely to engage with extreme content. If people are drawn to extreme videos and the recommendations keep escalating, they are likely to spend more time on the platform, essentially getting sucked into a rabbit hole of radical content. Tufekci's article focuses on how YouTube will recommend far-left or far-right videos to users who have watched even fairly neutral political content, a finding echoed by the Wall Street Journal's investigation of YouTube (Nicas, 2021). That investigation posits that the YouTube algorithm may have a built-in tendency to favor controversial content. The hypothesis is plausible: it would benefit the company for the algorithm to assume users want to see outrageous content, since controversial videos are viewed and sought out both by people who agree with their message and by those who do not. YouTube primarily cares about promoting content that will attract the most views and interactions possible. While recommending extreme and outrage-inducing content serves that goal, it could have severe consequences for the platform and its users if people's ideologies begin to shift toward more hateful or extremist views.
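The following toy example, with made-up numbers, illustrates that hypothesis: if a ranker optimizes purely for predicted engagement, and provocative videos tend to receive higher engagement predictions, the most extreme items float to the top by default. It is a sketch of the incentive, not a description of YouTube's actual system.

```python
# Toy illustration of the engagement-maximization hypothesis, not any
# platform's real ranking model. The scores and weights are invented.

videos = [
    # (title, predicted_watch_minutes, predicted_interactions) -- made-up numbers
    ("Balanced news recap",           4.0,  20),
    ("Provocative hot take",          9.5, 180),
    ("Extreme conspiracy deep-dive", 12.0, 260),
]

def engagement_score(video, watch_weight=1.0, interaction_weight=0.05):
    """Rank purely on predicted engagement, with no penalty for harmful content."""
    _, watch_minutes, interactions = video
    return watch_weight * watch_minutes + interaction_weight * interactions

ranked = sorted(videos, key=engagement_score, reverse=True)
for title, _, _ in ranked:
    print(title)

# The most extreme item ranks first because the objective never accounts
# for how harmful or radicalizing the content might be.
```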
The Wall Street Journal's theory is especially concerning when you consider the sheer amount of hateful and extremist content that exists on social media. YouTube, in particular, has come under fire for hosting propaganda from major terrorist groups despite its community guidelines (Murthy, 2021). While those guidelines explicitly state that hate speech and the promotion of terrorism are not allowed, such content slips past YouTube's defenses with relative ease (Neumann, 2013). One Islamic State defector said, "A lot of people when they come [to ISIS], they have a lot of enthusiasm about what they've seen online or what they've seen on YouTube" (Freytas-Tamura, 2015). People joining terrorist groups because of videos they see on YouTube is a disturbing reality, even if it is most likely not the norm. It does, however, shine a light on the dangers of having so many young, impressionable minds on social media.
YouTube is only one of several social media platforms on which harmful media runs rampant. A study by the Institute for Strategic Dialogue (ISD) found that extremist and hateful videos are more common on TikTok than one might imagine (O'Connor, 2021). In a sample of 1,030 videos, ISD found that 30% promoted white supremacy in some form and 24% displayed support for an extremist or terrorist organization (O'Connor, 2021). While the study has several possible confounds, such as selective interactions with certain types of content or regional biases, it shows how easy it is to come across extremist content on TikTok.
In the context of Tufekci's article and the Wall Street Journal's investigation, concerns over the prevalence of harmful content only deepen. If a user, especially an impressionable young person (as many social media users are), comes across a few extremist videos and is then consistently recommended even more extreme ones, there is a real possibility that their views could become increasingly extremist or hateful. That, however, is a worst-case scenario, one that may or may not occur. Regardless of whether an extremist rabbit hole has the power to corrupt our minds, concerns should still be raised about social media's content filtering. Social media platforms have only so much control over what their algorithms recommend, but they are still responsible for upholding their stated community values and ensuring that harmful content is kept to a minimum on their apps.
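One reason harmful content keeps slipping through the cracks, as noted above, is that automated filtering is brittle. The sketch below, which is not any platform's real moderation pipeline and uses entirely hypothetical banned phrases, shows how trivially obfuscated text can slip past an exact-match keyword blocklist.

```python
# Minimal sketch of a naive keyword filter (hypothetical phrases only):
# slightly obfuscated text sails past exact-match blocking that a human
# reviewer would flag instantly.

BLOCKLIST = {"extremist slogan", "recruitment link"}  # hypothetical banned phrases

def naive_filter(text: str) -> bool:
    """Return True if the text should be blocked by exact phrase matching."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in BLOCKLIST)

samples = [
    "Check out this extremist slogan",  # caught: exact match on a blocked phrase
    "Check out this extrem1st sl0gan",  # slips through: digits break the match
    "re cruitment lnk in bio",          # slips through: spacing and misspelling
]

for text in samples:
    print(f"{text!r} -> blocked={naive_filter(text)}")
```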
Bibliography:
Nicas, J. (2021, February 3). How YouTube drives people to the internet's darkest corners. The Wall Street Journal. Retrieved April 25, 2022, from https://www.wsj.com/articles/how-youtube-drives-viewers-to-the-internets-darkest-corners-1518020478
Tufekci, Z. (2018, March 10). YouTube, the Great Radicalizer. The New York Times. Retrieved April 25, 2022, from https://www.nytimes.com/2018/03/10/opinion/sunday/youtube-politics-radical.html
Murthy, D. (2021). Evaluating platform accountability: Terrorist content on YouTube. American Behavioral Scientist, 65(6), 800–824. https://doi.org/10.1177/0002764221989774
Neumann, P. R. (2013). Options and strategies for countering online radicalization in the United States. Studies in Conflict & Terrorism, 36(6), 431–459. https://doi.org/10.1080/1057610x.2013.784568
Freytas-Tamura, K. de. (2015, September 20). ISIS defectors reveal disillusionment. The New York Times. Retrieved April 25, 2022, from http://www.nytimes.com/2015/09/21/world/europe/isis-defectors-reveal-disillusionment.html
O'Connor, C. (2021). Hatescape: An in-depth analysis of extremism and hate speech on TikTok. Institute for Strategic Dialogue. Retrieved April 25, 2022, from https://www.politico.eu/wp-content/uploads/2021/08/24/ISD-TikTok-Hatescape-Report-August-2021.pdf