University of Groningen, founded in 1614 · top 100 university
Research · Bernoulli Institute

Jakob Schoeffer in Ukrant: Don’t count on AI for advice. What are the patient’s chances?

10 September 2025

Jakob was interviewed by Cecile Bruil in the Ukrant:

Can AI help doctors determine whether comatose patients are likely to wake up again? The model that Jakob Schoeffer tested certainly couldn’t. ‘Its predictions turned out to be completely random.’

Heart rate. Temperature. Pupil size. These are just a few things doctors check for in their comatose patients. They then have to make a difficult choice based on their knowledge and expertise: will they continue treating the patient? Or should they cease care in an effort to prevent suffering? 

This question has never been easy to answer. Even experienced doctors struggle to make the right decision. ‘Some patients are taken off ventilation when they might have recovered if they hadn’t been’, says AI researcher Jakob Schoeffer with the Faculty of Science and Engineering. But at the same time, some patients who will never recover continue to receive treatment.

It’s no surprise, then, that at a time when artificial intelligence is being utilised in so many fields, hospitals are investing in models that can make these difficult choices.

Statistical prediction

Schoeffer and his colleagues Maria De-Arteaga and Jonathan Elmer studied an AI model that had been trained on data from thousands of comatose patients. For this, they used machine-learning methods such as Random Forest and Logistic Regression. These are fairly simple models that use statistics to make predictions; unlike ChatGPT, they don’t come up with their own solutions. That allows the researchers to see how the system arrives at its choices, which also makes it easier to check.
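To illustrate (this is a toy sketch, not the study’s actual code, data, or features), logistic regression of the kind mentioned above fits a weighted sum of patient measurements and squashes it into a probability. The feature names and all numbers below are hypothetical:

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical features per patient: [heart rate (normalised),
# temperature (normalised), pupil reactivity (0 or 1)].
patients, labels = [], []
for _ in range(500):
    hr, temp = random.gauss(0, 1), random.gauss(0, 1)
    pupil = float(random.random() < 0.5)
    # Toy ground truth: reactive pupils strongly predict recovery.
    p_recover = sigmoid(2.0 * pupil + 0.5 * hr - 0.3 * temp - 1.0)
    patients.append([hr, temp, pupil])
    labels.append(float(random.random() < p_recover))

# Full-batch gradient descent on the logistic loss.
w, b, lr = [0.0, 0.0, 0.0], 0.0, 0.5
for _ in range(300):
    gw, gb = [0.0, 0.0, 0.0], 0.0
    for x, y in zip(patients, labels):
        err = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) - y
        for i in range(3):
            gw[i] += err * x[i]
        gb += err
    for i in range(3):
        w[i] -= lr * gw[i] / len(patients)
    b -= lr * gb / len(patients)

def predict_proba(x):
    """Predicted probability of recovery for one (toy) patient."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# The learned weights can be read off directly, which is what makes
# such models easier to audit than a black-box system.
print("weights:", [round(wi, 2) for wi in w])
```

Because each feature contributes through a single inspectable weight, a reviewer can check whether the model leans on clinically plausible signals, which is exactly the transparency the researchers wanted.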

Some patients might have recovered 

Schoeffer published the study’s results last summer. But his findings weren’t particularly encouraging. ‘The system wasn’t working as flawlessly as we’d hoped’, he says. 

The conclusions the AI model had drawn and those the doctors had reached still differed widely. The AI model would give some patients an 80-percent chance of recovery when doctors had put it at zero, or vice versa.

‘The AI’s predictions turned out to be completely random’, says Schoeffer. ‘That could have some far-reaching consequences if applied in a hospital. We absolutely can’t use it in its current state.’

No reliable data

But the study did make him realise where the model had gone wrong. It’s in the data that was used to train it, says Schoeffer. There is no reliable data on patients who were taken off ventilation, since we simply don’t know what would have happened if they hadn’t been taken off. That makes it hard to make the right prediction.
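The censoring problem Schoeffer describes can be made concrete with a small simulation (all numbers here are made up for illustration): once care is withdrawn, the recorded outcome is always “no recovery”, so the dataset systematically hides recoveries that continued treatment might have produced.

```python
import random

random.seed(1)

true_recoveries = 0      # what would happen with continued treatment
observed_recoveries = 0  # what the training data actually records
withdrawn = 0
N = 10_000

for _ in range(N):
    severity = random.random()                         # higher = worse prognosis
    would_recover = random.random() < (1.0 - severity) # counterfactual outcome
    withdraw = severity > 0.8                          # care withdrawn in grim cases

    if withdraw:
        withdrawn += 1
        observed = False   # withdrawal guarantees the label "no recovery"
    else:
        observed = would_recover

    true_recoveries += would_recover
    observed_recoveries += observed

hidden = true_recoveries - observed_recoveries
print(f"withdrawn: {withdrawn}, recoveries hidden by withdrawal: {hidden}")
```

In this toy setup a few percent of patients would have recovered despite withdrawal, but the recorded labels show none of them, so any model trained on such data inherits that blind spot.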

There are situations that simply should not involve AI

Schoeffer is now working on a new model; he no longer has the system predict the chance of survival, but rather the level of uncertainty of the situation. ‘We imagine the system saying something along the lines of: “based on earlier cases, this patient’s condition is very uncertain.”’

It might sound like a minor difference, but it does make the prediction as a whole much safer. To reach this particular conclusion, the model uses more data than the previous system. ‘Uncertainty is something we can measure in various ways for each patient’, says Schoeffer. 

The AI system also looks at conflicting signals from the body and at how doctors assessed similar situations in the past. It uses this data to estimate the degree of uncertainty instead of determining what the outcome will be. That means you don’t have to know whether a patient will recover in order to train the AI system on uncertainty.
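One simple way to operationalise this idea (a sketch under our own assumptions, not the team’s actual method) is to score a patient’s uncertainty as the disagreement among similar past cases, here the variance of clinicians’ assessments among the nearest historical neighbours. Note that no ground-truth outcome appears anywhere in the computation:

```python
import math

# Hypothetical historical cases: (features, clinician assessment in [0, 1],
# where 1 = confident the patient will recover).
history = [
    ([0.90, 0.10], 0.90), ([0.85, 0.15], 0.85), ([0.80, 0.10], 0.80),
    ([0.10, 0.90], 0.10), ([0.15, 0.85], 0.15), ([0.10, 0.80], 0.20),
    ([0.50, 0.50], 0.90), ([0.50, 0.45], 0.10),  # ambiguous region: disagreement
]

def uncertainty(x, k=3):
    """Variance of clinician assessments among the k nearest past cases."""
    nearest = sorted(history, key=lambda case: math.dist(case[0], x))[:k]
    vals = [assessment for _, assessment in nearest]
    mean = sum(vals) / k
    return sum((v - mean) ** 2 for v in vals) / k

# In regions where past cases agree, the score is low; where similar
# cases were judged very differently, it is high.
print("consistent region:", round(uncertainty([0.90, 0.10]), 3))
print("ambiguous region: ", round(uncertainty([0.50, 0.50]), 3))
```

The design choice matters: because the target is disagreement rather than survival, the censored outcomes of withdrawn patients never contaminate the training signal.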

No false expectations

This new model does have the potential to be used in hospitals, where it might just save lives, Schoeffer thinks. ‘Some doctors give up on their patients a little too quickly.’ In such cases, the AI system could possibly encourage doctors to reassess before taking a patient off ventilation.

People shouldn’t become dependent on AI

But, says Schoeffer, the AI system shouldn’t be creating false expectations, either. ‘We don’t want it to say that maybe, in a hypothetical universe where miracles occur, the patient could recover. We don’t want patients to suffer if there’s no hope left.’ 

The way the system is being used also needs very careful consideration, he feels. People are worried about using AI for decisions involving life and death, and rightly so. ‘That’s why the strict European laws on this are good and important’, he says. ‘There are so many possibilities, but there are situations that simply should not involve AI.’ 

Schoeffer is critical of the trend to use AI for normal, everyday things, like selecting candidates for job interviews. ‘There must be some kind of added value’, he says. ‘People shouldn’t become dependent on it. It can be helpful in some cases, but utterly detrimental in others.’ 

Interaction

During his PhD research on the interaction between people and AI, Schoeffer realised that the success of an AI system depends on the people using it: how they feel about it, how they work with it, and whether it actually helps them.

That interaction between people and AI is also important in his current research: ‘You have to have a good understanding of the weaknesses of both people and AI in order to utilise AI’s strong points’, he emphasises. 

Right now, Schoeffer is developing his new AI system in collaboration with the University of Pittsburgh Medical Center. It’s hopefully the first step towards a system that can save lives. But he’s also taking into account the fact that for now, there will be no AI support when it comes to treating comatose patients. ‘It’s even possible that the limitations on these high-risk decisions are too extensive. That would mean we would never implement it.’

And even if it does get implemented, says Schoeffer, the system will never function autonomously. ‘If something goes wrong, there will always be a doctor to take over and ignore the predictions from the AI system.’


https://ukrant.nl/magazine/what-are-the-comatose-patients-chances-dont-count-on-ai-for-advice-just-yet/?lang=en
