Reasoning with defeasible reasons
PhD ceremony: S. Pandzic, MA
When: October 29, 2020
Supervisors: prof. dr. B.P. (Barteld) Kooi, prof. A.M. Tamminga, prof. dr. L.C. (Rineke) Verbrugge
Where: Academy building RUG
The information age is marked by a tremendous amount of incoming information. Even so, the information we deal with is almost always incomplete or even conflicting. This thesis investigates the logical principles behind our ability to find the right answers and to recover from errors we make in drawing hasty conclusions. Consider, for example, the following headline: "NASA warns of an asteroid capable of ending human civilization approaching". The headline gives you a reason to conclude that the Earth is on a collision course. However, were you to read below the headline that although the asteroid is approaching, it will pass the Earth at more than sixteen times the distance to the Moon, you would doubt your reason to conclude that a collision is about to happen. This commonsense ability to question old reasons in the light of new information is known as the "defeasibility" of reasons.
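The asteroid example can be made concrete with a minimal sketch of a defeasible reason: a rule whose conclusion holds by default, but is retracted once a "defeater" enters the set of known facts. This is only an illustration of the general idea, not the formal system developed in the thesis; the class names and fact strings below are invented for the example.

```python
# Illustrative sketch of defeasible reasons (not the thesis's formalism).
from dataclasses import dataclass, field

@dataclass
class DefeasibleReason:
    premise: str                                  # fact that triggers the reason
    conclusion: str                               # what it defeasibly supports
    defeaters: set = field(default_factory=set)   # facts that undercut it

def conclusions(facts, reasons):
    """Conclusions of all reasons that are triggered and not defeated."""
    return {
        r.conclusion
        for r in reasons
        if r.premise in facts and not (r.defeaters & facts)
    }

# Hypothetical encoding of the headline example:
reason = DefeasibleReason(
    premise="asteroid approaching",
    conclusion="collision imminent",
    defeaters={"passes at 16x lunar distance"},
)

facts = {"asteroid approaching"}
print(conclusions(facts, [reason]))   # the headline alone supports the conclusion

facts.add("passes at 16x lunar distance")
print(conclusions(facts, [reason]))   # the follow-up text defeats the reason
```

Reading the body of the article adds a defeater, so the earlier conclusion is withdrawn rather than contradicted: this non-monotonic behavior is exactly what "defeasibility" refers to.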
Defeasible reasons came to the attention of AI researchers, who realized that designing intelligent computer programs requires a principled understanding of such commonsense abilities. The relevance of commonsense reasoning is nowadays underscored by the need to make AI systems more transparent, but also by the fact that AI systems still underperform on commonsense reasoning tasks.
This thesis investigates the role of logic in commonsense reasoning. First, it develops logical systems that successfully model defeasible and commonsense reasoning. Second, it shows why commonsense reasoners are bound to reason logically, despite being prone to errors.