7A: Consequences of misinformation exposure

Friday, June 13, 2025
11:40 AM - 12:40 PM
Lockewood Suite

Speaker

Assoc Prof Ciara Greene
Associate Professor
University College Dublin

Minimal real-world effects of one-off fake news exposure: Evidence from a study of food misinformation

Abstract

Discussions of online misinformation often assume that exposure to “fake news” will have direct, dire consequences for behaviour, but there is little to no research explicitly testing this hypothesis. Here, participants were recruited to an online study (N = 2,397) in which they were exposed to a fabricated news story about food contamination, with a subset subsequently completing a laboratory study (n = 143) in which they were offered the opportunity to eat the targeted foods. In contrast to our hypotheses, exposure to the fabricated story did not significantly worsen attitudes towards the target food, nor reduce the quantity of that food consumed in the laboratory. A follow-up experiment (n = 417) confirmed that the results were not specific to the particular story presented to participants. We conclude that a single exposure to misleading information does not necessarily produce substantial behavioural change, and that discussions of misinformation should avoid unsubstantiated claims.

Paper Number

197

Mr Didier Ching
PhD Student
University College Cork

Seeing is not believing: Assessing the effects of deepfake exposure on false memories, political opinions and voting intentions in the US 2024 election.

Abstract

Deepfake technology is often cited as a potentially powerful political misinformation tool. However, the literature is lacking in empirical evidence demonstrating uniquely potent effects of deepfakes on individuals’ beliefs, memories, and behaviours in political environments. Across multiple experiments, we systematically manipulated different elements of deepfakes, such as their valence (positive or negative deepfakes) and their repeated (illusory truth effect) or delayed (sleeper effect) exposure. We compared the same misinformation presented in different formats (text, synthetic audio, and deepfake videos). Overall, we found that deepfakes are not significantly more effective at influencing an individual’s political beliefs, memories, or behaviours than existing forms of misinformation, such as simple text. We urge future deepfake discourse to be centered on empirical evidence.

Paper Number

200

Dr Natasha van Antwerpen
Lecturer
The University of Adelaide

Examining Misinformation on Misinformation: A Longitudinal Investigation of Misinformation's Impact on Institutional Trust, Perceptions of Moral Decline, and Affective Polarisation

Abstract

Concern around misinformation has increasingly pointed to its societal impacts, including effects on polarization, perceived moral decline, and institutional trust, all of which could influence capacity for collective action on global challenges. However, these proposed negative effects lack causal evidence. Accordingly, we are conducting a within-between longitudinal experimental study investigating how repeated exposure to true vs. false headlines (between) impacts broader perceptions of moral decline, trust in political, scientific, and media institutions, and affective polarization, along with judgements of headline belief and morality and the impact of headline repetition on these constructs (within: four timepoints). In addition to providing longitudinal data on the illusory truth and moral repetition effects, the study will test whether misinformation erodes trust in institutions or increases polarization and perceived moral decline. Collectively, these findings will contribute to understanding the effects of misinformation over time, with implications for how to combat its spread and influence.

Paper Number

381

Dr Maryanne Brassil
Postdoctoral Researcher
Swansea University

The Liar’s Dividend: Investigating the Impact of Deepfake Claims on Trust in User-Generated Evidence

Abstract

The rapid advancement of deepfake technology not only threatens the authenticity of visual media but also risks undermining public trust in genuine content. A central concern is the “liar’s dividend”—a strategy in which individuals confronted with incriminating photo or video evidence claim it to be AI-generated, casting doubt on its validity. The potential impact of such denials on legal proceedings, where user-generated content serves a critical evidentiary role, remains uncertain. This talk will discuss initial findings from a series of experimental studies examining how references to deepfakes in courtroom contexts affect lay evaluations regarding the trustworthiness of user-generated evidence.

Paper Number

147

Chair

Dr Maryanne Brassil
Postdoctoral Researcher
Swansea University
