Corinne O'Leary-Lee

Filter Bubbles vs. Selective Exposure, Part 1

Updated: May 2, 2022

Have you ever been on social media and felt that the same trends and topics keep popping up? Perhaps the discovery pages on the platforms you use seem to be getting to know you a little too well? You might be one of the millions who have experienced the phenomenon of "filter bubbles." The filter bubble is one of two popular hypotheses about the adverse effects of the recommendation systems employed by social media platforms like TikTok, Instagram, and Twitter (Nguyen et al., 2014). Recommendation engines filter what content you see on your feed, and if you tend to interact with similar types of media, a filter bubble can form. While an upside of this effect is that most of the media you see will appeal to you and your views, filter bubbles can also cause harm. Essentially, these "bubbles" are exactly what they sound like: isolated social media experiences where users are only exposed to a particular type of content or viewpoint, leaving their feeds with little diversity. In this article, I'll analyze several studies on the effects of filter bubbles and ask whether it's better to enjoy a more comfortable social media experience or whether a lack of diversity in content could lead to serious consequences.
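To make that filtering mechanism concrete, here is a minimal, hypothetical sketch in Python of content-based filtering: each candidate post is scored by how much its topics overlap with the user's interaction history. The post structure, topic tags, and scoring rule are all my own assumptions for illustration; real platforms use far more sophisticated models, and none of this is drawn from the cited studies.

```python
# Toy content-based recommender: posts that resemble what the user already
# engaged with outrank everything else, which is how a bubble can form.
from collections import Counter

def topic_profile(interaction_history):
    """Estimate how often each topic appears in posts the user engaged with."""
    counts = Counter(topic for post in interaction_history for topic in post["topics"])
    total = sum(counts.values())
    return {topic: n / total for topic, n in counts.items()}

def score(post, profile):
    """A post scores higher the more its topics overlap the user's history."""
    return sum(profile.get(topic, 0.0) for topic in post["topics"])

def recommend(candidates, interaction_history, k=3):
    profile = topic_profile(interaction_history)
    return sorted(candidates, key=lambda p: score(p, profile), reverse=True)[:k]

# Hypothetical data: a user who mostly engages with cooking and travel posts.
history = [{"topics": ["cooking", "travel"]}, {"topics": ["cooking"]}]
candidates = [
    {"id": 1, "topics": ["cooking", "baking"]},
    {"id": 2, "topics": ["politics"]},
    {"id": 3, "topics": ["travel", "cooking"]},
]
print(recommend(candidates, history))  # cooking/travel posts rank above the rest
```

Even in this stripped-down version, the politics post always lands at the bottom of the feed, so the user keeps engaging with cooking and travel content, which further tilts the profile on the next pass.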

The first question to settle is whether recommendation systems are really to blame for filter bubbles or whether users are simply not choosing diverse content. In a study using data from MovieLens, a movie recommendation service, evidence was found that "recommendation-takers consume more content diverse movies than non-recommendation-takers, and that these users are actively seeking to watch more diverse movies" (Nguyen et al., 2014). The study also concluded that users who opted to take MovieLens's recommendations had an overall more positive experience with the service. Based on this evidence, it could be argued that the lack of content diversity stems not from recommendation systems but from the choices of the user, a behavior known as selective exposure (Spohr, 2017). But how does this translate to social media, where you can't necessarily pick and choose which recommendations you take from the app? While there are several similarities between a service like MovieLens and apps like TikTok and Instagram, you can't directly decide which posts appear on your "for you page" or "explore" feed. You can scroll past something or choose not to "like" a particular piece of media, but you can't entirely avoid what is recommended to you. With the study above in mind, it's easy to imagine how filter bubbles could be more harmful on social media than on a recommendation service like MovieLens.
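For a sense of what "content diversity" means in a study like this, one common approach is to measure the average pairwise distance between the items a user consumed. The toy genre vectors and the cosine-distance measure below are my own simplification for illustration, not the exact methodology of Nguyen et al. (2014), who work with much richer movie tag data.

```python
# Hypothetical illustration of a content-diversity score: the mean pairwise
# distance between consumed items. Higher = a more varied watch history.
from itertools import combinations
import math

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

def content_diversity(items):
    """Average distance over every pair of items the user consumed."""
    pairs = list(combinations(items, 2))
    return sum(cosine_distance(a, b) for a, b in pairs) / len(pairs)

# Made-up genre vectors for two fictional users: one watches near-identical
# movies, the other spreads out across genres.
narrow = [(1.0, 0.0, 0.0), (0.9, 0.1, 0.0), (1.0, 0.1, 0.0)]
varied = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
print(f"narrow viewer: {content_diversity(narrow):.3f}")
print(f"varied viewer: {content_diversity(varied):.3f}")  # noticeably higher
```

A metric like this is what lets a study compare recommendation-takers against non-takers in the first place: you can't argue about whether bubbles shrink diversity until you can put a number on diversity.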

A significant concern about the harm of filter bubbles on social media is increased ideological polarization, which is already rampant. People already tend to choose news sources based on their political affiliations. While the internet might seem like a solution to this problem because of its practically unlimited information, the sheer amount of data is part of the problem (Spohr, 2017). Because there is a nearly infinite amount of content to consume on social media, recommendation systems must be employed. Instead of users making a conscious choice to consume media from sources that align with their ideologies, the system chooses for them. The system's predictions can deepen polarization because people may not realize they are being served one-sided content and instead believe they are seeing diverse content. As a result, their views are widely validated, and their world shrinks. A study of the Facebook News Feed found that many Facebook users were not even aware of the recommendation algorithm employed by the platform (Rader & Gray, 2015). This lack of awareness can worsen ideological polarization and increase the consumption of misinformation. When people see news or other information from multiple sources that they believe represent a diverse array of perspectives, they are more likely to believe the information is accurate and, more importantly, unbiased.
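To see how this narrowing can compound, here is a rough, purely hypothetical simulation of the feedback loop: if the recommender over-samples whichever viewpoint currently dominates a user's feed (the boost standing in for the combined effect of engagement bias and the algorithm amplifying it), the feed drifts toward a single viewpoint within a few rounds. The boost factor and feed size are arbitrary assumptions, not measurements from any study.

```python
# Rough feedback-loop sketch: each round, the recommender over-samples the
# viewpoint that already dominates the feed, so the share of "same-view"
# posts drifts toward 100%. All parameters are invented for illustration.
import random

random.seed(0)

def next_feed_share(current_share, boost=0.3, feed_size=50):
    """Recommender over-samples the dominant viewpoint by `boost`."""
    p = min(1.0, current_share * (1 + boost))
    feed = [random.random() < p for _ in range(feed_size)]  # True = same-view post
    return sum(feed) / feed_size

share = 0.5  # the feed starts perfectly balanced
for step in range(8):
    share = next_feed_share(share)
    print(f"round {step}: {share:.0%} of feed matches the user's view")
```

Under these toy assumptions, a balanced feed collapses to near-total agreement in a handful of rounds, and crucially, nothing in the experience signals to the user that the collapse happened.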

The potential harm of filter bubbles spans far beyond news into almost every sector of how we perceive the world. For example, suppose a filter bubble causes a person's TikTok feed to recommend only content produced by creators who look like the user. The user is not actively choosing similar creators, so they might assume that most content creators on the app simply happen to look like them. Or perhaps a user holds extremist views, and their filter bubble recommends extremist content. This could lead the user to believe that their extremist ideologies are widely accepted, making them feel freer to act on them.

Validation of prejudiced views is, in general, the primary concern with filter bubbles. Filter bubbles are usually a consequence of existing user bias, not its cause. However, they can certainly contribute to reinforcing biases such as extremist political views, racism, sexism, and other harmful ideologies. Filter bubbles are one of several theories about how prejudices are created and manifested online. Another hypothesis, selective exposure, places more of the blame on the user than on the recommendation engine and may also explain how filter bubbles form; I will discuss it in my next article.



Bibliography:

Nguyen, T. T., Hui, P.-M., Harper, F. M., Terveen, L., & Konstan, J. A. (2014). Exploring the filter bubble: The effect of using recommender systems on content diversity. Proceedings of the 23rd International Conference on World Wide Web. https://doi.org/10.1145/2566486.2568012

Spohr, D. (2017). Fake news and ideological polarization. Business Information Review, 34(3), 150–160. https://doi.org/10.1177/0266382117722446

Rader, E., & Gray, R. (2015). Understanding user beliefs about algorithmic curation in the Facebook news feed. Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems. https://doi.org/10.1145/2702123.2702174

