Filter Bubble Teaching Activities

Continuing somewhat of a trend for this semester, I’m sharing some teaching activities that I’ve come up with for my class on critical thinking and information literacy in the hopes that either a) I can get some feedback on them, b) they will be useful to someone else, or c) both. I’m currently prepping a unit on fake news and filter bubbles, and I chose to focus my activities on filter bubbles and algorithms (though I’ve also tried activities on fake news in the past). To give credit where credit is due, Activity 1 is inspired in part by the Mozilla-endorsed Data Detox activities.

Activity 1: Take a peek at your filter bubble

This week, we read about “filter bubbles”: automated or self-selected processes that focus our attention on particular kinds of information (that we prefer) and restrict our access to other kinds of information (that we don’t like as much). One of the trickiest things about filter bubbles is that they’re invisible—we’re often not aware of them, and it’s hard to tell how they’re affecting the information that we see. This week, we’re going to try to make those bubbles a little bit more visible by figuring out what companies like Facebook and Google think we like! Here are the steps to complete to get a glimpse of the filter bubbles you might be in and then report on what you find for class credit.

Check at least one platform.

Below, I’ve listed instructions for how to learn about who tech platforms think you are and what you might be interested in. This helps them determine what sort of ads to serve you and possibly what content to prioritize in your feed. I would prefer that you pick multiple platforms to get multiple perspectives on this, but I don’t know for sure what platforms everyone uses, so I can’t guarantee that everyone has accounts on at least two of them!

While it is not officially part of this activity, I suggest that you take some extra time to explore the settings surrounding advertisements and personalization. Consider whether you’re comfortable with all of these settings, and consider removing some of the personalization from your experience to lessen the effect of your filter bubble.

Google

Log into the following website (with whatever Google account you use the most): https://adssettings.google.com/authenticated

As long as Ad personalization is ON at the top of the screen, you should be able to scroll down and see what factors influence the ads that you see on Google services.

Twitter

Log into the following website: https://twitter.com/settings/your_twitter_data/twitter_interests

Click on the first two menu items to see what interests Twitter and its business partners think you have. Click on the third item to request a list of advertisers that have taken an interest in you (and why).

Facebook

Log into Facebook, then click on: Settings → Ads → Ad Preferences → Your Interests → See all your interests

Review the interests that Facebook thinks you have!

Instagram

Log into Instagram, then click on: Settings → Ads → Ad Activity

This should give you an indication of the ads you’ve recently seen and interacted with on Instagram.

Other

If you know how to find this information on another website, feel free to do so!

Report out to the class!

Write a post below that reports on your experience with this activity.

You will earn half a point for including:

  • a description of the platform that you decided to examine
  • a summary of the information that you found there

You will earn another half-point for including:

  • a reflection on whether or not you believe the platform is correct in describing your interests
  • a few thoughts on how these perceived interests may be influencing your experience on this platform.

Activity 2: Explore the effect of algorithms

Algorithms played a big part in our discussion of filter bubbles in this week’s readings. In the context of personalization, algorithms are automated means of determining what we’re interested in and then providing us with more of that content. As we read, many people are concerned about the effect of algorithms—recently, they’ve been particularly controversial in the context of YouTube.
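
None of these platforms publish the code behind their feeds, so the sketch below is only a toy illustration (written in Python, with made-up interests and posts, and not any company’s real system). Still, it shows the basic mechanics of a filter bubble: once a platform has a profile of what it thinks you like, it can score everything it could show you, keep only the best matches, and quietly drop the topics that don’t fit.

    # Toy sketch of interest-based feed ranking (not any platform's real code).
    # Posts are scored by how well their topic matches an inferred interest
    # profile, and only the top-scoring posts make it into the "feed."

    # A made-up interest profile, like the ones you'll see in Activity 1
    interests = {"cooking": 0.9, "soccer": 0.7, "local news": 0.2}

    # Made-up posts, each tagged with a topic
    posts = [
        {"title": "10-minute weeknight pasta", "topic": "cooking"},
        {"title": "Champions League highlights", "topic": "soccer"},
        {"title": "City council budget vote", "topic": "local news"},
        {"title": "New exhibit at the art museum", "topic": "art"},
    ]

    def score(post):
        # Topics missing from the profile score 0 and rarely get shown
        return interests.get(post["topic"], 0.0)

    # Keep only the three best matches; the art museum post never appears
    feed = sorted(posts, key=score, reverse=True)[:3]
    for post in feed:
        print(f"{score(post):.1f}  {post['title']}")

Real recommendation systems use machine-learned models over thousands of signals rather than a hand-written dictionary, but the bubble-making step is the same: rank by predicted interest and show only the top of the list.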

Here’s an excerpt from a 2018 New York Times op-ed by Zeynep Tufekci, an influential tech researcher:

At one point during the 2016 presidential election campaign, I watched a bunch of videos of Donald Trump rallies on YouTube. I was writing an article about his appeal to his voter base and wanted to confirm a few quotations.

Soon I noticed something peculiar. YouTube started to recommend and “autoplay” videos for me that featured white supremacist rants, Holocaust denials and other disturbing content.

Since I was not in the habit of watching extreme right-wing fare on YouTube, I was curious whether this was an exclusively right-wing phenomenon. So I created another YouTube account and started watching videos of Hillary Clinton and Bernie Sanders, letting YouTube’s recommender algorithm take me wherever it would.

Before long, I was being directed to videos of a leftish conspiratorial cast, including arguments about the existence of secret government agencies and allegations that the United States government was behind the attacks of Sept. 11. As with the Trump videos, YouTube was recommending content that was more and more extreme than the mainstream political fare I had started with.

Intrigued, I experimented with nonpolitical topics. The same basic pattern emerged. Videos about vegetarianism led to videos about veganism. Videos about jogging led to videos about running ultramarathons.

It seems as if you are never “hard core” enough for YouTube’s recommendation algorithm. It promotes, recommends and disseminates videos in a manner that appears to constantly up the stakes. Given its billion or so users, YouTube may be one of the most powerful radicalizing instruments of the 21st century.

This “radicalizing” effect of YouTube’s recommendations has some pretty obvious connections with the filter bubble and with modern concerns about algorithmic personalization. We might start with a video on a subject that we like and then be recommended videos that are related to that subject but are more extreme. We start with our own opinions and then end up with stronger versions of them, all because YouTube is feeding us information that it thinks we’ll like.
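
To make that amplification pattern concrete, here is a second toy sketch (again in Python, with an invented catalog and invented “intensity” scores; it is not how YouTube’s recommender actually works). If a recommender always nudges you toward a related video that is just a bit more intense than the one you’re watching, then following the “Up next” chain drifts steadily toward the most extreme thing in the catalog:

    # Toy simulation of a recommendation chain that always favors a slightly
    # more "intense" related video. The catalog and scores are invented;
    # this illustrates the amplification pattern, not YouTube's actual algorithm.

    catalog = {
        # title: intensity (0 = mild, 1 = extreme)
        "Beginner jogging tips": 0.1,
        "Training for your first 10K": 0.3,
        "Marathon training mistakes": 0.5,
        "Ultramarathon documentary": 0.7,
        "Running 100 miles through the desert": 0.9,
    }

    def recommend(current_title):
        """Pick the video whose intensity is just above the current one."""
        current = catalog[current_title]
        more_intense = [t for t, i in catalog.items() if i > current]
        if not more_intense:
            return current_title  # already at the most extreme video
        return min(more_intense, key=lambda t: catalog[t])

    video = "Beginner jogging tips"
    trail = [video]
    for _ in range(4):  # follow four "Up next" recommendations in a row
        video = recommend(video)
        trail.append(video)

    print(" -> ".join(trail))
    # Beginner jogging tips -> Training for your first 10K -> ... ->
    # Running 100 miles through the desert

A small bias toward “a little more engaging than what you just watched,” applied over and over, is enough to produce the drift Tufekci describes, even if nobody designed the system to radicalize anyone.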

Now, no one outside the company knows exactly how YouTube’s algorithm works; it’s been 18 months since Tufekci wrote her piece, and YouTube is also sensitive to this kind of criticism, so it’s possible that this effect isn’t as strong as it used to be. That said, this kind of activity is a great way to see the effect that a personalization algorithm can have on our thinking! Here are the steps that you need to take:

  1. Go to YouTube and make sure that you’re logged in with whatever Google account you use the most (this will allow YouTube to consider preferences that you’ve already demonstrated in your past video watching).
  2. Search for a YouTube video that you already know you like, or one on a subject that you like. Write down the name of the video for later.
  3. Click on the “Up next” recommended video, which appears either to the right of the video you’re watching or below it, depending on your layout. Write down the name of this video. You do not need to watch any of the videos that you find in this way, though you are welcome to (as long as you’re wearing your critical thinking hat).
  4. Repeat step 3 until you have a total of ten videos: one that you picked, and nine that YouTube recommended for you (each based on the last one). If you are getting into disturbing or uncomfortable territory, you have my permission to stop before getting to ten videos.

For half a point, post your list of videos in the forum below. For another half-point, write a paragraph commenting on the “trail” of videos that YouTube led you down. Is there a radicalizing or amplifying effect, like Tufekci talks about? Are any of the videos at the “end” of the trail videos that you would search for on your own? How do you feel about the YouTube algorithm after this experience?