Can Social Media Algorithms Counter Prejudice?
Updated: May 2, 2022
Social media platforms are the most globalized form of socializing available today. While the idea of global socialization is powerful, it is also evidently flawed. Exposure to hateful and biased content, whether the bias is racial, gendered, or political, is unavoidable on all of the most popular platforms. Although you can report another user for spreading harmful content, there is little an individual can do to change the actual algorithms that major applications like Instagram employ. People are indeed responsible for the posts and comments they make on social media. Still, the recommendation engines behind these platforms might be responsible for cultivating an environment where biased ways of thinking can thrive. By creating echo chambers or promoting certain content, these algorithms might deepen some users' existing biases or create new ones in others. While several theories hold that social media algorithms promote biased behavior (the filter bubble hypothesis, for example), these machine learning algorithms are never actually held accountable for the behavior they encourage or create. The algorithms, or the computer scientists behind them, are rarely pushed to be better, to learn, or to spread awareness in the way content creators are.
Furthermore, the recommendation engine is a universal feature of these social media platforms: every user's unique feed is the product of one. Instead of forming echo chambers and endless loops of biased content, what if these algorithms promoted diverse media to counter the human inclination toward narrow thinking?
Recommendation algorithms that surface diverse content would not only expose users to varied perspectives, potentially countering the formation of biases, but would also benefit the platforms that employ them. A key flaw of recommendation engines is that they can become repetitive, constantly recommending similar content (Helberger et al., 2016). If every post someone sees while scrolling their explore feed relates to a single topic, the feed is bound to get boring. By showing users a wider variety of content, social media platforms could keep people interested longer, so that they spend more time on the app, and perhaps widen their interests, leading them to buy more of the products promoted to them.
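The repetitiveness described above can actually be quantified. One common approach in the recommender-systems literature is intra-list similarity: the average pairwise overlap between items in a feed. Below is a minimal sketch of that idea, assuming each post is described only by a set of hypothetical topic tags (the feeds and tags are invented for illustration):

```python
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    """Topic overlap between two posts: 1.0 means identical tag sets."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def intra_list_similarity(feed: list) -> float:
    """Average pairwise topic overlap across a feed; higher = more repetitive."""
    pairs = list(combinations(feed, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# Hypothetical feeds, each post reduced to its topic tags.
narrow_feed = [{"fitness"}, {"fitness", "diet"}, {"fitness"}]
varied_feed = [{"fitness"}, {"politics"}, {"travel", "food"}]

print(intra_list_similarity(narrow_feed))  # ~0.67: a repetitive feed
print(intra_list_similarity(varied_feed))  # 0.0: no topic overlap at all
```

A diversity-aware platform could monitor a score like this per feed and intervene when it climbs too high, which is one concrete way to operationalize "bound to get boring."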
In addition to the benefits that social media platforms could reap from increasing exposure diversity in their algorithms, users would benefit as well. They would enjoy their time on each platform more, and they would gain an opportunity to widen their worldview. With more varied feeds, people could be exposed to different political and moral views, which might promote interpersonal empathy and understanding.
A more varied feed has the potential to be beneficial in a multitude of ways; however, can recommendation systems actually produce one? Recommendation engines are designed to provide a personalized experience for each user and to find content they will enjoy (see "The 'In' on TikTok's Infamous Algorithm"). By definition, that goal is in tension with a diversity-aware system. Ideally, a diversity-aware recommendation system would still provide a personalized experience, but it would be less monolithic, exposing users to media they might not see under a standard recommendation system. It needs to strike the right balance between diverse content and content relevant to the user (Tintarev, 2017). In a model proposed by Nava Tintarev at the Delft University of Technology, the diversity-aware recommendation system considers both user diversity and content diversity. Most models of diversity-aware recommendation systems consider only item diversity, which is what drew my interest to Tintarev's model. The model aims to diversify the content the user sees while maintaining the user's satisfaction. Essentially, it treats both users and "items" (pieces of content, such as posts) as diverse, and it produces a list of recommended items that range in both topic and relevance to the user. So while this model prioritizes recommending highly personalized content, it also surfaces some posts that are less relevant to the user.
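Tintarev's actual model is more involved than this, but the balance the paragraph describes, ranking mostly relevant items while deliberately mixing in less similar ones, can be sketched with a greedy re-ranker in the style of maximal marginal relevance. Everything below (the post names, relevance scores, topic tags, and the `lam` trade-off weight) is a hypothetical illustration of that balancing act, not the published model:

```python
def rerank_diverse(candidates: dict, similarity, lam: float = 0.7) -> list:
    """Greedy re-ranking: at each step, pick the item maximizing
    lam * relevance - (1 - lam) * (max similarity to items already chosen).
    lam = 1.0 reproduces a pure relevance ranking; lowering lam adds diversity."""
    chosen = []
    remaining = dict(candidates)
    while remaining:
        best = max(
            remaining,
            key=lambda item: lam * remaining[item]
            - (1 - lam) * max((similarity(item, c) for c in chosen), default=0.0),
        )
        chosen.append(best)
        del remaining[best]
    return chosen

# Hypothetical posts with relevance scores and topic tags (illustration only).
relevance = {"gym_tips": 0.9, "protein_guide": 0.85, "local_news": 0.5, "art_doc": 0.4}
topics = {"gym_tips": {"fitness"}, "protein_guide": {"fitness", "diet"},
          "local_news": {"news"}, "art_doc": {"art"}}

def topic_sim(a: str, b: str) -> float:
    """Jaccard overlap of two posts' topic tags."""
    return len(topics[a] & topics[b]) / len(topics[a] | topics[b])

# With equal weight on relevance and diversity, the second fitness post
# gets pushed to the bottom even though it scores high on relevance alone.
print(rerank_diverse(relevance, topic_sim, lam=0.5))
# → ['gym_tips', 'local_news', 'art_doc', 'protein_guide']
```

The `lam` parameter is exactly the "right balance" the paragraph mentions: the platform can tune how much personalization it trades away for exposure diversity.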
Tintarev's model is just one of several proposals for diversity-aware recommendation systems. Such systems could be the key to maintaining a personalized feed without trapping users in echo chambers of content that only reflects how they already think. However, as with all machine learning algorithms, diversity-aware recommendation systems come with their own pitfalls and ethical concerns. They are not a one-size-fits-all solution, but they are something social media platforms should consider to improve the user experience on their applications.
Bibliography:
Helberger, N., Karppinen, K., & D'Acunto, L. (2016). Exposure diversity as a design principle for recommender systems. Information, Communication & Society, 21(2), 191–207. https://doi.org/10.1080/1369118x.2016.1271900
Tintarev, N. (2017). Presenting diversity-aware recommendations: Making challenging news acceptable. Boise State ScholarWorks. https://doi.org/10.18122/b2hq41