The Stereotypes and Mechanisms behind Deepfakes
Deepfakes are realistic photos and videos created with generative AI tools. In the past two years, deepfake production has increased by 900%, often targeting women and children. Amid this upsurge, public figures, including academics, journalists, and politicians, have been particularly at risk of having their faces and bodies exposed in AI-generated content. Stereotypes related to identities, particularly those of minoritized groups, are baked into these AI systems, enabling the creation of harmful content. The violence behind the production and consumption of deepfakes thus affects both the person being targeted and the audience witnessing the online content.
In this panel, we invite journalist Eva Hofman (De Groene Amsterdammer) to introduce and discuss her investigation into the role of AI deepfakes in last year's political campaign in the Netherlands, particularly in terms of gender, race, and religion. Following her talk, the experts George Azzopardi and Guru Bennabhaktula explain how deepfakes work and explore solutions to protect vulnerable groups.
The panel serves as the inaugural event of YARN's symposium "Targeting public voices: how scholars can protect themselves in times of generative AI", which will be held in mid-September 2026.
Programme
| Time | Activity |
|---|---|
| 12:00 | Walk-in and lunch |
| 12:45 | Welcome |
| 13:00 | Eva Hofman, journalist at De Groene Amsterdammer |
| 13:30 | George Azzopardi & Guru Swaroop Bennabhaktula (RUG) |
| 14:00 | Q&A |
| 14:30 | Final announcements (event in late September focusing on RUG researchers / ERC safety) |
Speakers

Eva Hofman is an investigative journalist and technology reporter at the weekly magazine De Groene Amsterdammer. She writes about internet culture, feminism, and the power of big tech.

George Azzopardi is an Associate Professor in Pattern Recognition at the Bernoulli Institute of Mathematics, Computer Science and Artificial Intelligence at the University of Groningen. He leads the PRISMA research team (Pattern Recognition with and for Impact: Signals, Models and Applications), which focuses on the robustness of pattern recognition models and on approaches that achieve more with less data. His interdisciplinary research spans (bio)medical imaging and video analysis, public safety, and digital forensics. He is also cofounder of ForensifAI, a University of Groningen spinoff that develops image authenticity tools aimed at countering deepfake misuse.

Guru Swaroop Bennabhaktula is a postdoctoral researcher at the University of Groningen, developing authentication techniques to counter face-morphing attacks in identity documentation. As co-founder of ForensifAI, he is dedicated to scaling forensic technologies to combat digital fraud and create tangible real-world societal impact. During his PhD at the University of Groningen, he focused on societal safety, contributing to the EU project 4NSEEK to combat child sexual abuse. His PhD journey continued at the University of Leon, Spain, where he worked on perceptual image hashing and on enhancing the robustness of neural networks. His academic background includes Mathematics and Computer Science.
Moderator

Marília Gehrke is an Assistant Professor of Media and Journalism Studies at the University of Groningen and the creator of the crowdsourced initiative Forced to Quit, which aims to map women in politics, journalism, and activism who have had to leave the public sphere due to violence. The project is an outcome of the Jantina Tammes School Early Career Prize 2024. Gehrke's research interests include gendered disinformation, fact-checking, AI/data journalism, and transparency.
Registration is free of charge. Please register by 20 May 2026; places are allocated on a first-come, first-served basis. Lunch will be provided, and your participation will be confirmed via email.
Organized by: Young Arts Network (YARN) and Jantina Tammes School of Digital Society, Technology, and Artificial Intelligence. Organizers: Marília Gehrke, Alex Lorson, Lars de Wildt, Maja Babic, Sandra Banjac.
