Jantina Tammes School of Digital Society, Technology and AI

'I see fear and ghost stories about AI as a new technology'

10 November 2025
Photo: Mirjam Plantinga

Associate Professor Mirjam Plantinga sees it as her mission to paint a realistic picture of artificial intelligence (AI). She is project leader at the ELSA AI Lab Northern Netherlands (ELSA-NN), a lab dedicated to the responsible development and application of AI in healthcare. In this edition of the JTS Scholars, she talks about the (im)possibilities of artificial intelligence. ‘When implementing AI, we must not lose sight of the fundamental questions of care.’

Text: Jelle Posthuma

About the JTS Scholars

A ‘JTS Scholar’ is a researcher (from postdoc to professor) affiliated with the University of Groningen who conducts research in fields related to the Jantina Tammes School: digitalization, digital technologies and artificial intelligence. In this series, we interview our Scholars about their expertise and future plans for interdisciplinary collaboration.

Read more about the JTS Scholar Programme

Recently, Plantinga was involved in the theatre production Kladiladi (short for ‘klap die laptop dicht’, Dutch for ‘slam that laptop shut’). She was approached by a theatre maker who had written a play about the impact of AI on society and the concerns surrounding it. Plantinga was asked to join as an expert for the post-performance discussion. ‘The play focuses on personal questions and the role AI can play in them. AI can do a lot, but it doesn’t solve our personal dilemmas for us. I found that to be a powerful message.’

With the play and the discussion afterwards, Plantinga and the creators wanted to highlight the positive aspects of AI. After the performance, she discussed the possibilities of the technology and received many enthusiastic reactions. ‘People were curious to learn more about AI. They realized the play was an exaggeration and wanted to know what is actually possible. Especially among older audiences, you can sense fear and ghost stories about this new technology. With our lecture, we wanted to dispel that fear and misunderstanding, and provide a realistic image of AI.’

Responsible innovation in healthcare

Reaching as broad an audience as possible to talk about the potential of artificial intelligence is one of Plantinga’s key goals. Her research focuses on innovations in healthcare and closely aligns with her role as project leader at the ELSA AI Lab Northern Netherlands (ELSA-NN). There, the responsible development and application of AI in healthcare are central.

‘AI has a major impact on society, and that brings responsibility. We must face not only the opportunities but also the risks and challenges, for example in areas such as sovereignty and climate. Our choices about responsible AI should be based on our values and principles, so legislation and technology can be aligned accordingly.’

The potential applications of AI in healthcare are highly diverse, Plantinga says. These range from treatment decision-making and diagnostics to administrative processes. ‘A lot is being developed, but due to regulations, it’s not always possible or desirable to apply AI models widely in healthcare. We use language models at home very differently than in hospitals. At the UMCG, for example, we work within secure environments and clear agreements so that sensitive data stay within hospital walls.’

AI can support healthcare, but in practice, the field is still in its early stages. In treatment decision-making, AI use is often still in the research phase, Plantinga explains. Applications in daily care are still limited. ‘However, language models are increasingly being used to create summaries of patient records.’

AI is often said to help reduce workload in healthcare, for example by generating draft responses or automatically transcribing conversations. ‘But whether it truly delivers results remains to be seen.’ AI-generated responses can save time, but a physician must still review them, the researcher points out. ‘The question is how much it really helps. It might even cause people to send more messages because they receive quicker or easier replies. Then we must ask whether the workload truly decreases. Expectations and norms can shift, precisely because technology enables that.’

Human questions first

When implementing AI, we must not lose sight of the fundamental questions in healthcare, Plantinga emphasizes. ‘That’s a pitfall. Perhaps AI reduces workload, but we must continue to question the causes of that pressure. We don’t want to use AI to paper over problems that we ourselves created in the first place. The problem still exists, even if technology helps us manage it.’

In treatment decision-making, AI offers countless possibilities for the future, but there are important caveats, the researcher notes. AI models are often based on medical data because that’s where sufficient data are available. ‘Factors such as mental health or someone’s living environment are harder to measure and may remain invisible, even though that holistic perspective is crucial.’

Data and limitations

AI can also perform well in analyzing scans, as it can process huge amounts of data and recognize patterns. ‘But fewer data are available for exceptions, which makes them harder to detect. Moreover, an AI model doesn’t understand the data, it only maps patterns or deviations. Some of those deviations mean little or nothing medically. If too many irrelevant deviations are detected, it doesn’t help healthcare. There’s even an example of an AI model that predicted a patient’s survival chances based on a hospital watermark on the scan. A doctor, of course, would never make such an error.’

That’s exactly why humans must remain in control, Plantinga stresses. ‘It remains human work. Expertise is needed to interpret AI outcomes. Artificial intelligence is smart in the sense that it can process a lot of data, but it can’t critically reflect and has no morality. Those human qualities are missing.’

Interdisciplinary collaboration

As a Scholar of the Jantina Tammes School, Plantinga wants to focus on interdisciplinary collaboration. ‘A researcher with a technical background can explain how AI technology works, while a social scientist can discuss what it means for our society. The Jantina Tammes School brings those perspectives together. The School’s broad network is also valuable because it allows us to reach diverse audiences. The ‘Jouw Technology van Morgen’ (‘Your Technology of Tomorrow’) programme is a great example of this. As a Scholar, I want to reach as wide a network as possible.’
