A new study by researchers from the University of Lausanne and the Munich Center for Machine Learning examines how users respond to fake news generated by artificial intelligence (AI) versus fake news written by humans. The study, conducted online with 988 participants from the U.S., compared the perceived accuracy of, and the willingness to share, 20 fake news items related to the COVID-19 pandemic.
The main findings of the study are:
- AI-generated fake news is perceived as less accurate than human-generated fake news, yet participants were equally willing to share both.
- Socio-economic and cognitive factors, such as age, gender, political orientation, and cognitive reflection, help explain who falls for AI-generated fake news.
- Users responded faster to AI-generated fake news than to human-generated fake news, suggesting that they may rely more on heuristics and less on critical thinking.
The study, which was pre-registered and followed ethical guidelines, is one of the first to explore the behavioral effects of AI-generated fake news, which poses a serious threat to the integrity of online information and the functioning of modern societies. The authors suggest that more evidence-based research is needed to understand the risks and design effective mitigation strategies.
What do you think of these results? Do you trust AI-generated content more or less than human-generated content?
Source: Comparing the willingness to share for human-generated vs. AI-generated fake news (arxiv.org)