Although artificial intelligence has a growing role in journalism, research finds that Americans don’t know about AI’s role in their lives—or their news.
Technology has repeatedly transformed the news media industry—telegraph, radio, television, and then the internet. Yet despite these evolutions, technology remained the medium and human journalists the messengers. The introduction of AI has changed that model.
Today, AI machines designed to perform the communicator role are generating news content independently of humans. That means AI is both the medium and the messenger, giving human journalists a new synthetic partner programmed to aid in news gathering.
A new study finds that many Americans are unaware of the role AI plays in their world, including in the production of news.
The findings come at a challenging time for the news media industry, when news organizations face historically low levels of public trust in their product. More Americans than ever report getting their news from the very social media platforms that use AI to deliver it, while, at the same time, news organizations are reporting on the dangers AI can pose to privacy, fairness, equality, safety, and security. It's a conundrum.
“So, how do we help people trust AI but still report on companies that use AI to the detriment of the public?” asks Chad S. Owsley, a doctoral student at the University of Missouri School of Journalism and coauthor of the recent study. “How do you separate the two?”
In a 2020 online survey, Owsley and coauthor Keith Greenwood, an associate professor, found that fewer than half of respondents (48%) were certain they had read, seen, or heard something about AI in the past year, while another 40% could only say it was possible. Only 25% of participants said they perceived AI as capable of writing or reporting news equal to or better than human journalists.
The 48% figure was consistent with a European study conducted three years earlier, which Owsley says was interesting given the high rate of technology use participants reported in the 2020 study. For example, 61% reported owning a smartphone. He expected that as people's use of technology increased, so would their awareness of AI.
“We seemed to have stalled for three years and that is worth asking why and how,” Owsley says, adding that the study did not seek to answer those questions, which he is addressing in his dissertation. “Some of this stuff is pretty geeky. There might be a general disinterest in a high level of detail [about AI]. People might be thinking: ‘I just want it to work. I don’t care how it works.'”
Despite this lack of awareness, some forms of AI are replacing journalists, while others serve as an aid to news-gathering.
In 2018, Forbes introduced a publishing platform called "Bertie," which uses AI to help reporters with news articles by identifying trends, suggesting headlines, and providing visual content to match relevant stories. The Washington Post and the Associated Press also use AI to perform the role of journalist. Financial reporting is among the topics most often aided by AI.
“The way an AI machine thinks or operates is based on constructed data,” Owsley says. “It has to have the information in tabulated form, such as spreadsheets and tables.”
The AI then takes that structured data and generates a human-language story.
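To make the idea of "tabulated data in, human-language story out" concrete, here is a minimal sketch of template-based automated reporting. The field names, figures, and the write_earnings_brief function are hypothetical illustrations, not the workings of Bertie or any real newsroom system, which are far more sophisticated.

```python
# Illustrative sketch only: a toy template-based "robot reporter" that turns
# one row of tabulated earnings data into a short human-language sentence.
# Field names and figures are hypothetical, not from any real newsroom system.

earnings_row = {
    "company": "Acme Corp",
    "quarter": "Q2 2020",
    "revenue_m": 412.0,        # revenue in millions of dollars
    "prior_revenue_m": 380.5,  # same quarter a year earlier
    "eps": 1.07,               # earnings per share
}

def write_earnings_brief(row: dict) -> str:
    """Generate a one-sentence earnings brief from structured data."""
    change = (row["revenue_m"] - row["prior_revenue_m"]) / row["prior_revenue_m"]
    direction = "rose" if change >= 0 else "fell"
    return (
        f'{row["company"]} reported {row["quarter"]} revenue of '
        f'${row["revenue_m"]:.1f} million, which {direction} '
        f'{abs(change):.1%} from a year earlier, with earnings of '
        f'${row["eps"]:.2f} per share.'
    )

print(write_earnings_brief(earnings_row))
# Prints: Acme Corp reported Q2 2020 revenue of $412.0 million, which rose
# 8.3% from a year earlier, with earnings of $1.07 per share.
```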
Greenwood says a longstanding criticism of innovations is that news organizations often jump into the “next best thing” without important considerations, such as “what does our audience think and how will they interact with this?”
“If organizations are going to be thinking about adopting these AI technologies, one of the things they need to be asking is, ‘OK, how does this fit with what our audience needs or expects or how does it fit with who they think we are?'” he says. “Rather than thinking, ‘Well, this is the future, and we need to go that way.'”
The study appears in the journal AI & Society.
Source: University of Missouri