Study reveals comparable false memory rates between deepfakes and text, raising questions about the uniqueness of deepfake technology
A new study has found that deepfake videos, which use artificial intelligence to alter a clip's visual and auditory content, can generate false memories of movie remakes that never existed. Intriguingly, the study also found that simple text descriptions of the fabricated films produced similar rates of false memory. The findings, published in the open-access journal PLOS ONE on July 6, 2023, by Gillian Murphy of University College Cork, Ireland, and Lero, the Science Foundation Ireland Research Centre for Software, and colleagues, challenge the notion that deepfake technology is uniquely potent at manipulating memory.
Deepfakes are video clips in which AI is used to replace one person's face or voice with another's. As deepfake creation tools have become cheaper and more accessible, debate has intensified over their potential applications and risks, from spreading misinformation to manipulating viewers' memories.
To explore these potential risks and benefits, Murphy and colleagues invited 436 participants to complete an online survey that included deepfake videos of fictional movie remakes, such as Will Smith playing the character originally portrayed by Keanu Reeves in “The Matrix,” and Brad Pitt and Angelina Jolie starring in a remake of “The Shining.” Participants also saw genuine remakes, including “Charlie & The Chocolate Factory,” “Total Recall,” “Carrie,” and “Tomb Raider.” In some cases, participants read a text description of a remake instead of watching a deepfake video. Participants were told the deepfakes were fake only after completing the survey.
Consistent with previous research, both the deepfake videos and the text descriptions induced false memories of the non-existent remakes: on average, 49 percent of participants believed each fabricated remake was real, and many even reported remembering the fake remake as better than the original. Crucially, the text descriptions produced false-memory rates similar to those of the videos, suggesting that deepfake technology is no more powerful at distorting memory than simpler tools.
An overwhelming majority of participants said they were uncomfortable with the use of deepfake technology to recast films, citing concerns about artistic integrity and the disruption of the shared social experience of movies.
The findings carry implications for the future development and regulation of deepfake technology in the film industry.
The researchers remarked, “While deepfakes are of great concern for many reasons, such as non-consensual pornography and bullying, the current study suggests they are not uniquely powerful at distorting our memories of the past. Though deepfakes caused people to form false memories at quite high rates in this study, we achieved the same effects using simple text. In essence, this study shows we don’t need technical advances to distort memory, we can do it very easily and effectively using non-technical means.”
Reference:
Murphy G, Ching D, Twomey J, Linehan C (2023) Face/Off: Changing the face of movies with deepfakes. PLoS ONE 18(7): e0287503. https://doi.org/10.1371/journal.pone.0287503 (Open access: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0287503)