Political ideology and user choice—not algorithmic curation—are the biggest drivers of engagement with partisan and unreliable news via Google Search, a study finds.
The study addresses a long-standing concern that digital algorithms learn from user preferences and surface information that largely agrees with users’ attitudes and biases. However, search results shown to Democrats differ little in ideology from those shown to Republicans, the researchers find. The ideological differences emerge when people decide which search results to click, or which websites to visit on their own.
The results, published in the journal Nature, suggest the same is true about the proportion of low-quality content shown to users. The quantity doesn’t differ considerably among partisans, though some groups—particularly older participants who identify as “strong Republicans”—are more likely to engage with it.
Katherine Ognyanova, an associate professor of communication at the Rutgers School of Communication and Information and coauthor of the study, says Google’s algorithms do sometimes generate results that are polarizing and potentially dangerous.
“But what our findings suggest is that Google is surfacing this content evenly among users with different political views,” Ognyanova says. “To the extent that people are engaging with those websites, that’s based largely on personal political outlook.”
Despite the crucial role algorithms play in shaping the news people consume, few studies have focused on web search—and even fewer have compared exposure (defined as the links users see in search results), follows (the links from search results people choose to visit), and engagement (all the websites that a user visits while browsing the web).
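To make those three measures concrete, here is a minimal sketch of how they could be computed from hypothetical browsing logs. The record fields (`user_id`, `url`, `source`) and the sample data are illustrative assumptions, not the study's actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class PageRecord:
    user_id: str
    url: str
    source: str  # "search_result" (shown), "search_click" (followed), or "browse"

def measures(records: list[PageRecord], user_id: str) -> dict[str, set[str]]:
    """Split one user's log into the study's three measures.

    exposure   -> links shown in Google Search results
    follows    -> search-result links the user actually clicked
    engagement -> every site the user visited while browsing
    """
    mine = [r for r in records if r.user_id == user_id]
    return {
        "exposure": {r.url for r in mine if r.source == "search_result"},
        "follows": {r.url for r in mine if r.source == "search_click"},
        "engagement": {r.url for r in mine if r.source in ("search_click", "browse")},
    }

# Example: a user is shown two results, clicks one, and also browses directly to a third site.
log = [
    PageRecord("u1", "https://example-news.com/a", "search_result"),
    PageRecord("u1", "https://example-partisan.com/b", "search_result"),
    PageRecord("u1", "https://example-partisan.com/b", "search_click"),
    PageRecord("u1", "https://example-partisan.com/c", "browse"),
]
print(measures(log, "u1"))
```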
Part of the challenge has been measuring user activity. Tracking website visits requires access to people’s computers, and researchers have generally relied on more theoretical approaches to speculate about how algorithms affect polarization or push people into “filter bubbles” and “echo chambers” of political extremes.
To address these knowledge gaps, researchers at Rutgers, Stanford, and Northeastern universities conducted a two-wave study, pairing survey results with empirical data collected from a custom-built browser extension to measure exposure to, and engagement with, online content during the 2018 and 2020 United States elections.
Researchers recruited 1,021 participants to voluntarily install the browser extension for Chrome and Firefox. The software recorded the URLs of Google Search results, as well as Google and browser histories, giving researchers precise information on the content users were engaging with, and for how long.
Participants also completed a survey and self-reported their political identification on a seven-point scale that ranged from “strong Democrat” to “strong Republican.”
Results from both study waves showed that a participant’s political identification did little to influence the amount of partisan and unreliable news they were exposed to on Google Search. By contrast, there was a clear relationship between political identification and engagement with polarizing content.
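As an illustration of the kind of comparison that finding rests on, the sketch below contrasts the share of unreliable domains in what users were shown against what they engaged with, grouped by the seven-point identification score. The domain list, the field names, and the sample records are invented for the example; the study relied on external news-quality ratings rather than this toy list.

```python
from statistics import mean
from urllib.parse import urlparse

# Hypothetical set of domains rated unreliable (assumption for illustration only).
UNRELIABLE = {"example-unreliable.com"}

def unreliable_share(urls: list[str]) -> float:
    """Fraction of URLs whose domain is on the unreliable list."""
    if not urls:
        return 0.0
    return mean(1.0 if urlparse(u).netloc in UNRELIABLE else 0.0 for u in urls)

# Each record: political identification (1 = strong Democrat ... 7 = strong Republican),
# the URLs shown in search results (exposure), and the URLs actually visited (engagement).
users = [
    {"party_id": 1,
     "exposure": ["https://example-news.com/a", "https://example-unreliable.com/x"],
     "engagement": ["https://example-news.com/a"]},
    {"party_id": 7,
     "exposure": ["https://example-news.com/a", "https://example-unreliable.com/x"],
     "engagement": ["https://example-unreliable.com/x"]},
]

# Exposure shares can look similar across the scale even when engagement shares diverge.
for u in users:
    print(u["party_id"],
          "exposure:", round(unreliable_share(u["exposure"]), 2),
          "engagement:", round(unreliable_share(u["engagement"]), 2))
```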
Platforms such as Google, Facebook, and Twitter are technological black boxes: Researchers know what information goes in and can measure what comes out, but the algorithms that curate results are proprietary and rarely receive public scrutiny. Because of this, many blame the technology of these platforms for creating echo chambers and filter bubbles by systematically exposing users to content that conforms to and reinforces personal beliefs.
Ognyanova says the findings paint a more nuanced picture of search behavior.
“This doesn’t let platforms like Google off the hook,” she says. “They’re still showing people information that’s partisan and unreliable. But our study underscores that it is content consumers who are in the driver’s seat.”
Source: Rutgers University