Designing responsible artificial intelligence
PhD ceremony: C.C. (Cor) Steging, MSc
When: October 01, 2024
Start: 11:00
Supervisor: prof. dr. H.B. (Bart) Verheij
Co-supervisor: dr. S. Renooij
Where: Academy building RUG
Faculty: Science and Engineering
Artificial Intelligence (AI) has become an integral part of our society: we have smart assistants with speech recognition on our phones, self-driving cars, and online algorithms that recommend what we should buy, watch, or listen to. Most of these AI systems learn to make decisions from data: large quantities of examples from the past. The exact internal reasoning of such data-driven AI systems is, however, difficult to determine. This can cause an AI system to behave irresponsibly.
In his thesis, Cor Steging introduces a method to evaluate the internal reasoning of AI systems that learn from data. He shows that AI systems sometimes make the right decisions, but for the wrong reasons. For example, unbeknownst to us, an AI system can learn an undesirable, hidden bias from the data.
The method that Steging describes in his thesis can not only evaluate the internal reasoning of an AI system, but also adjust it and steer it in the right direction. Additionally, Steging shows how one can create an AI system with predefined reasoning, rather than having it learn its reasoning from data. This way, the system cannot accidentally learn to make decisions for the wrong reasons.
All of the methods discussed in the thesis build on the idea that the domain knowledge of human experts is crucial when designing AI systems that learn from data. The thesis shows that this knowledge is essential for designing responsible artificial intelligence.
See also: Responsible AI should be capable of logical reasoning