The GenAI bubble will burst, but don’t give up on AI altogether

Generative AI such as ChatGPT may seem convincing with its beautiful images and eloquent texts. ‘But people keep promoting the belief that generative AI provides universal tools that are capable of much more,’ says Michael Biehl, Professor of Machine Learning. ‘Sooner or later, the genAI bubble will burst,’ he says with certainty. But that doesn’t mean all of AI should be thrown out with the bathwater.
FSE Science Newsroom | Text: Charlotte Vlek
‘What generative AI (genAI) presents us with may look appealing: applications such as ChatGPT or Dall-E generate eloquent texts, beautiful images, and impressive videos,’ Biehl describes. ‘But there are several fundamental problems, such as factually incorrect texts, implausible images, and physically impossible videos. And these are intrinsic features of these systems, not bugs that will disappear with the next version.’
Biehl is concerned about the hype currently surrounding genAI: ‘Exaggerated praise predicts billion-dollar markets, while overwhelming fears range from the loss of countless jobs to the end of humanity.’ There are wild claims that genAI can pass the Turing test (see text box), and that it is capable of text comprehension, problem analysis, and logical reasoning.
The Turing test celebrates its 75th anniversary this month. Will GenAI pass the test?
In a famous article from October 1950, computer scientist Alan Turing posed the question: can machines think? Instead of answering it directly, he described a thought experiment in which a human judge communicates via text messages with both a computer and another human. If the judge cannot tell which is which, the computer has successfully imitated a human and is said to have passed the test: for all intents and purposes, it appears to think like a human.
Recently, claims have been made that large language models such as ChatGPT have passed the Turing test. Biehl explains: ‘Such claims of passing the Turing test or achieving high scores in academic exams are often based on dubious evaluation methods and are generally obscured by non-transparent training.’ For instance, ChatGPT was only able to pass the test after receiving very specific prompts instructing it to behave in a certain way. In addition, there is no clear definition of how long the test should last or how competent the human judge should be.
Finally, one might wonder whether passing the Turing test automatically means that a system is intelligent. Many researchers in the field believe that it does not. AI expert and critic Gary Marcus summarizes it as follows: the Turing test is mostly a test of how gullible people are.
Ethical problems
‘Sooner or later people will realize that generative AI is an illusion that they have been led to believe by Big Tech,’ Biehl says. ‘Right now, it’s a good thing that people are starting to become aware of the many ethical issues surrounding genAI.’ It is an impressive list, including the enormous waste of resources, the global exploitation of cheap labour to fix AI errors, the unauthorized use of copyrighted materials, the misuse of user data, and the uncritical amplification of biases in the training data. Last but not least, there is growing awareness and fear of possible manipulation by authoritarian regimes or by the big tech companies that release and control these systems.
Biehl: ‘People seem to think that all of artificial intelligence equals this over-hyped generative AI. But there is a longstanding, very broad field of research that works on very different things than producing eloquent strings of words.’ Researchers at the UG, for example, are working on computer programs that can support doctors in analysing medical data, even when there is insufficient training data, or that can be used as sparring partners in medical diagnosis. The emphasis is often on collaboration between humans and machines, or on computer systems that can support their conclusions with arguments.
When the bubble bursts
‘Eventually, people will realize that even the largest language models are just that: language models,’ Biehl predicts. ‘They put together sentences, shamelessly imitate, or blatantly reproduce and remix pieces of text from their training data. When the genAI bubble bursts — and it will — the media and public opinion will turn against everything called AI.’ That is why Biehl appeals to his colleagues in the field: ‘Don’t fall for the hype, don’t fuel it any further, and don’t try to take advantage of it.’ Biehl even suggests avoiding the term “AI” as much as possible. ‘It will backfire.’