will limit the use of AI, and works towards more responsible AI systems. Bart Verheij, Professor of AI & Argumentation at the University of Groningen, thinks that responsible AI should be capable of logical reasoning. That way, an AI system will be able to explain itself and be corrected where necessary.
FSE Science News Room | Charlotte Vlek
Hallucination is a nasty problem among AI systems: they may provide a very confident answer that is grammatically correct and looks good, but has no basis in fact. For instance, ChatGPT’s response to the question ‘Who was Jantina Tammes?’ looks good at first glance, but states that she was born in Loppersum (incorrect) and passed away in Haren (also incorrect). In the US, a lawyer used ChatGPT to write his plea in a suit against an airline, but the plea turned out to be full of made-up sources. The judge was not amused.
ChatGPT uses Machine Learning: a popular method in AI in which a computer is trained on enormous amounts of data. From these data, the computer learns to execute a concrete task. In essence, this is all statistics: the computer receives a huge number of examples and thus learns to produce the most likely response to a new assignment.
In the case of ChatGPT, the goal of the training was to produce the most likely next word in a conversation. Seen this way, it is not surprising that ChatGPT sometimes hallucinates: ChatGPT doesn’t ‘know’ anything about Jantina Tammes, but only produces what occurs most often in the datasets it was fed, and apparently Loppersum and Haren were words with a high prevalence in this case.
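The statistical idea of ‘produce the most likely next word’ can be sketched in a few lines. This is a deliberately tiny illustration with a made-up toy corpus, not how ChatGPT actually works internally (real systems use large neural networks, not raw word counts):

```python
from collections import Counter, defaultdict

# Toy corpus (invented for illustration): the model will simply learn
# which word most often follows which.
corpus = ("she was born in Loppersum . "
          "she was born in Loppersum . "
          "she was born in Groningen .").split()

# Count, for each word, what follows it.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def most_likely_next(word):
    # Pick the single most frequent continuation.
    return following[word].most_common(1)[0][0]

print(most_likely_next("in"))  # prints "Loppersum"
```

Because ‘Loppersum’ occurs more often after ‘in’ than ‘Groningen’ does, the model confidently outputs it, regardless of whether it is true. That is prevalence, not knowledge.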
Verheij: ‘But sometimes, such a language model turns out to be fairly good at tasks for which it was not trained, such as adding and subtracting. And sometimes ChatGPT produces a very precise argument, whereas in other instances, it is not capable of logical reasoning at all. No one understands exactly why and when, and that renders such a system unreliable.’
Verheij recognizes two main trends within AI: knowledge systems and data systems. A knowledge system operates on the basis of logic: you put knowledge and rules in, and what it returns is always correct and, if so desired, can be explained. These kinds of systems are built by people, from the ground up. Data systems work with enormous datasets and distill something from them on their own, for instance by means of Machine Learning.
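A knowledge system in this sense can be sketched very simply: the rules are written down explicitly, so every answer can be traced back to the rule that produced it. The rule below is a hypothetical example invented for illustration, not one from an actual system:

```python
# Toy knowledge system: an explicit, hand-written rule whose answers
# are always explainable. The voting rule here is hypothetical.
def may_vote(age, citizen):
    """Rule: a person may vote if they are a citizen aged 18 or over."""
    if not citizen:
        return False, "not a citizen"
    if age < 18:
        return False, "under the age limit of 18"
    return True, "citizen and 18 or over"

allowed, reason = may_vote(age=17, citizen=True)
print(allowed, reason)  # False under the age limit of 18
```

Unlike a trained model, this system cannot hallucinate: it can only say what its rules entail, and it can always point to the rule it applied.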
Modern AI in particular is not good at explaining itself
Under the supervision of Bart Verheij, PhD student Cor Steging investigated how Machine Learning deals with things like rules and logic. Steging took a rule from Dutch law that stipulates what counts as a wrongful act. From this rule, he generated a ‘perfect dataset’ of examples and studied what a computer distills from that dataset.
After training on this ‘perfect dataset’, the computer programme was able to indicate with high accuracy whether or not something was wrongful. So far so good. But the programme did not learn the correct underlying rules from the dataset. Verheij: ‘In particular, it failed to learn the exact combination of logical conditions that the law requires. And threshold values, such as age limits, were not recognized correctly.’
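The setup of such an experiment can be sketched as follows. The rule below is a hypothetical, heavily simplified stand-in with an age threshold, not the actual Dutch tort provision used in the study, and the learning step itself is only indicated in outline:

```python
import random

# Hypothetical simplified rule (not the real legal rule): an act is
# "wrongful" iff it caused harm AND the actor was 18 or older.
def wrongful(caused_harm, age):
    return caused_harm and age >= 18

# Generate a "perfect dataset": every label follows the rule exactly.
random.seed(0)
dataset = []
for _ in range(1000):
    caused_harm = random.random() < 0.5
    age = random.randint(10, 30)
    dataset.append((caused_harm, age, wrongful(caused_harm, age)))

# A model trained on this data may score high accuracy overall; the
# revealing probe is at the boundary, where cases at age 17 and 18
# show whether the threshold itself was learned.
boundary_cases = [(True, 17), (True, 18)]
print([wrongful(h, a) for h, a in boundary_cases])  # [False, True]
```

The point of the probe: a statistical learner can get almost every randomly drawn example right while still drawing the age boundary in the wrong place, which is exactly the kind of failure the study reports.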
‘Modern AI in particular, which is so powerful, is not good at explaining itself. It’s a black box.’ And that needs to change, according to Verheij. That’s why in Groningen, researchers are working on computational argumentation. ‘It would be great if humans and machines could support each other in a critical conversation. Because only humans understand the human world, and only machines are capable of processing so much information so quickly.’