Advice from artificial intelligence experts may be just as influential as advice from human experts, researchers report.
Both human and robotic bearers of bad news may find, however, that they lose influence when their negative opinions run contrary to a positive crowd, according to a new study.
Machines that generate recommendations, or "AI experts," were as influential as human experts when recommending which photo a user should add to an online business profile, the researchers found.
But both AI and human experts failed to budge opinions if their feedback was negative and went against popular opinion among other users, says S. Shyam Sundar, professor of media effects in the Donald P. Bellisario College of Communications, co-director of the Media Effects Research Laboratory, and an affiliate of the Institute for Computational and Data Sciences (ICDS) at Penn State.
The findings suggest there are times when crowd opinion, through what is known as the bandwagon effect, can beat out the opinions of experts, whether those experts are AI or human, Sundar says.
Both AI and human experts who offered a positive evaluation of a business profile picture were able to influence users' own assessment of the photo, he says. If the experts did not like the photograph but the crowd evaluated it positively, however, the experts' influence waned.
Because people increasingly use social media to seek feedback, cues that signal expert opinion and crowd consensus may be important factors in shaping decisions, according to first author Jinping Wang, a doctoral candidate in mass communication.
“Nowadays, we often turn to online platforms for opinions from other people—like our peers and experts—before making a decision,” says Wang. “For example, we may turn to those sources when we want to know what movies to watch, or what photos to upload to social media platforms.”
AI experts are often less expensive than human experts and they can also work 24 hours a day, which, Wang suggests, might make them appealing to online businesses.
The researchers also found that the AI's group status (in this case, its designated national origin) did not seem to affect a person's acceptance of its recommendation. Among human experts, however, one from a similar national origin who offered a negative assessment of a photograph tended to be more influential than one from an unknown country offering the same negative rating.
While the finding that group status may not affect whether a person values an AI expert's judgment sounds like good news, Sundar suggests that the same cultural biases might still be at work in the AI expert, hidden in its programming and training data.
“It can be both good—and bad—because it all depends on what you feed the AI,” says Sundar. “While it is good to have faith in AI’s ability to transcend cultural biases, we have to keep in mind that if you train the AI on pictures from one culture, they could give misleading recommendations on pictures meant for use in other cultural contexts.”
The researchers recruited 353 people through an online crowdsourcing service and randomly assigned each participant to view a screenshot of a website that offered users recommendations for their business profile photos.
Researchers also told participants that the website allowed feedback from other users of the platform, in addition to expert raters. The screenshots represented the various conditions the researchers studied: whether the expert raters were human or AI; whether their feedback was positive or negative; and whether the rater came from a similar, different, or unknown national identity.
In the future, researchers plan to investigate the group dynamics of influence more deeply and examine whether the expert’s gender plays a role in influencing opinion.
The research appears in Computers in Human Behavior.
Source: Penn State