In this podcast episode, experts offer up tips and tools you can use to help spot misinformation.
With the rise of misinformation and disinformation on the internet and social media, our brains struggle to process what we’re seeing and to judge whether an image, a video clip or a story is real.
Jenny Stromer-Galley and Jason Davis have studied these trends and created tools to help discern what’s real and what’s synthetic in content posted online and on social media.
Stromer-Galley is an expert in political campaigns and misinformation and is a professor in the School of Information Studies; Davis is an expert on misinformation and disinformation detection. He is a research professor with the Office of Research and Creative Activity in the S.I. Newhouse School of Public Communications, and is also co-director of the Real Chemistry Emerging Insights Lab.
“Depending on where people are getting their information, the quality and credibility of that information could be quite low,” Stromer-Galley says.
“It leaves the public more vulnerable to state actors who are trying to engage in disinformation campaigns or US-based malignant actors who are trying to manipulate the public for their own ends.”
“Our brains have not evolved as fast as the technology, and so we are still as vulnerable as we ever were to the same sorts of approaches at being deceived, intentionally or unintentionally,” Davis says.
“With this new digital landscape and digital speed and scale, we need digital tools to help us protect ourselves from ourselves sometimes, and sometimes from that malicious information ecosystem.”
On this “‘Cuse Conversation,” Stromer-Galley and Davis offer up tips and tools you can use to help spot misinformation, share advice to help us be better-informed consumers of information and social media, and analyze the latest research on misinformation trends in the upcoming presidential election:
Semantic forensics
Davis is involved with the Semantic Forensics (SemaFor) program, funded by the Defense Advanced Research Projects Agency (DARPA). Through that work, Davis is helping to advance research into detecting disinformation and misinformation in the media.
Semantic forensics, Davis says, is about understanding not just whether something is real or fake; it also delves into the why. What was the intent? Who was the target?
Now in his fourth year of concentrating on this research area, Davis has been developing digital tools that identify synthetic, manipulated media. The program also evaluates the detectors being used, striving to understand what they can and can’t do when identifying synthetic media, as well as how effective they are at telling real content from synthetic.
“We can say with confidence that this detector works for detecting these kinds of fake, synthetic images at a 98% accuracy, and it is capable of doing this but not being able to do that. They’re not a panacea, but here’s what they can do, so we learn how to use these detection devices properly and use them appropriately,” Davis says.
“Then there’s the development of the tools and the modeling of the threat landscape. How do we create controlled versions of what we know is going on out there in the wild so that we can study, train, and better understand our capabilities?”
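To make the evaluation idea concrete, here is a minimal sketch of how a harness might score a synthetic-image detector against a labeled test set. The detector interface and data layout here are hypothetical illustrations, not DARPA’s or the program’s actual tooling.

```python
# Minimal sketch of a detector-evaluation harness: score a
# synthetic-image detector against a labeled test set. The detector
# interface and data layout are hypothetical, not the program's
# actual tooling.
from dataclasses import dataclass


@dataclass
class Example:
    image_path: str
    is_synthetic: bool  # ground-truth label


def evaluate(detector, test_set: list[Example]) -> dict:
    """Tally a confusion matrix and derive headline metrics."""
    tp = fp = tn = fn = 0
    for ex in test_set:
        pred = detector(ex.image_path)  # detector returns True if it flags the image as synthetic
        if pred and ex.is_synthetic:
            tp += 1
        elif pred and not ex.is_synthetic:
            fp += 1
        elif not pred and not ex.is_synthetic:
            tn += 1
        else:
            fn += 1
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,            # e.g., the 98% figure
        "false_positive_rate": fp / max(fp + tn, 1),
        "missed_synthetic_rate": fn / max(fn + tp, 1),
    }
```

Reporting error rates alongside accuracy is what lets analysts say what a detector can and can’t do, rather than treating it as a panacea.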
Investigating social media
Stromer-Galley, who leads the University’s Institute for Democracy, Journalism and Citizenship ElectionGraph team, has studied misinformation trends in this presidential race and other top 2024 contests.
After the assassination attempt on former President Donald Trump in Butler, Pennsylvania, over the summer, the ElectionGraph team examined spending by candidates, political action committees, political parties and unknown actors on Facebook and Instagram advertisements that mentioned the presidential candidates.
The aim was to “visualize the firehose of information and misinformation coming at voters from groups with a jumble of motives, ties and trustworthiness ahead of the 2024 elections,” Stromer-Galley says.
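As a rough illustration of what such a visualization might be built on, here is a minimal sketch of an advertiser graph linking pages to the candidates their ads mention, weighted by spend. The records and structure are invented for illustration, not ElectionGraph’s actual data or pipeline.

```python
# Minimal sketch of an advertiser graph: pages connected to the
# candidates their ads mention, with edge weights accumulating spend.
# The records below are made up for illustration.
import networkx as nx

ads = [  # (page name, candidate mentioned, spend in USD)
    ("Page A", "Candidate X", 12_000),
    ("Page B", "Candidate X", 4_500),
    ("Page B", "Candidate Y", 7_000),
]

G = nx.Graph()
for page, candidate, spend in ads:
    if G.has_edge(page, candidate):
        G[page][candidate]["weight"] += spend  # accumulate repeat ad buys
    else:
        G.add_edge(page, candidate, weight=spend)

# Pages that touch many candidates or spend heavily stand out for review.
for page in sorted({p for p, _, _ in ads}):
    total = sum(d["weight"] for _, _, d in G.edges(page, data=True))
    print(f"{page}: {G.degree(page)} candidate(s), ${total:,} total spend")
```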
The findings showed that:
- negative social media advertising in the presidential race increased after the assassination attempt;
- nearly 3,500 Facebook pages from outside organizations spent $55 million over the past year in an effort to influence the public this election season; and
- there was a pattern of “coordinated inauthentic behavior” among some outside organizations, including a large network of Facebook pages running ads (costing an estimated $5 million) aimed at scamming the public under the guise of supporting a presidential candidate’s campaign that garnered roughly 234 million impressions.
“To our surprise, there was a large network of individuals and organizations, and we didn’t know who was behind it, that was running scam ads targeted at people who are activated and excited about the presidential election. The scammers were capitalizing on that enthusiasm: people turned over their credit card information and then got scammed,” Stromer-Galley says.
“While Facebook is trying to take down those pages, the scammers continue to stay a step ahead.”
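One simple signal analysts can use to surface this kind of coordination is many distinct pages running near-identical ad text. The sketch below assumes a hypothetical CSV of ad records with page_id and ad_text columns; it is illustrative only, not the ElectionGraph or Meta methodology.

```python
# Minimal sketch: group ad records by normalized ad text and flag
# clusters where many distinct pages run the same creative, one crude
# signal of possible coordinated inauthentic behavior. The CSV schema
# (page_id, ad_text) is assumed for illustration.
import csv
import re
from collections import defaultdict


def normalize(text: str) -> str:
    # Collapse whitespace and lowercase so trivial edits still match.
    return re.sub(r"\s+", " ", text).strip().lower()


def flag_clusters(csv_path: str, min_pages: int = 10) -> dict:
    pages_by_text = defaultdict(set)
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            pages_by_text[normalize(row["ad_text"])].add(row["page_id"])
    # Keep only ad texts shared by at least min_pages distinct pages.
    return {text: pages for text, pages in pages_by_text.items()
            if len(pages) >= min_pages}
```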
Tips to avoid misinformation
When you find yourself aimlessly scrolling through social media without questioning the validity of what you just saw, you are consuming content passively, and that leaves you susceptible to misinformation and disinformation.
Users are encouraged to embrace cognitive friction when scrolling because, according to both Davis and Stromer-Galley, social media apps are designed for you to absorb content at face value, without deeper thought about who was behind a post or what their intent might be. By increasing friction, you take the proactive step of slowing down and weighing the legitimacy of a post.
Both Davis and Stromer-Galley say the best defense against misinformation and disinformation campaigns is knowledge. They urge people to get their news from a wide range of diverse, traditional media outlets and not to rely on social media alone as a news source.
Source: Syracuse University