Artificial Intelligence Colloquium - Dr. Sreejita Ghosh (TU Eindhoven, NL)
When: Mon 30-06-2025, 10:15 - 11:00
Where: 5161.0116
Link: https://rug-nl.zoom.us/j/69641790244?pwd=9wwJNasrMWYLLxHGopXecl3ahTR9O9.1
Title: Beyond performance: Extracting knowledge from an intrinsically interpretable probabilistic classifier model
Abstract: In the last few years, machine learning (ML) models have been rapidly integrated into the daily workflows of a general population that is largely unaware of how these models work internally. While a model's performance is important, it is not enough to ensure its reliability and fairness. Model-agnostic explainable AI techniques can only approximate a model's decision locally, whereas intrinsically interpretable models give direct access to the model's working logic. There are common misconceptions that interpretable models are limited to simple logistic regression, or that only deep learning models can handle non-linear decision boundaries. During my PhD I developed a variant of a family of nearest-prototype classifiers called Learning Vector Quantization (LVQ), which I see as a middle ground between these two extremes. The variant is called 'Probabilistic Angle Learning Vector Quantization'. In addition to being intrinsically interpretable, the model can learn from very few samples and handles class imbalance and missing values in an embedded manner.
Because it is probabilistic, we also know how certain the model is about its decisions. This has not only been an important part of my PhD; it also motivated my post-PhD research direction(s) and made me question what is really fair, pushing me down the slippery slope of philosophy. So even though there will be a few equations, there will also be room for non-technical discussion of fairness, reliability, and why I didn't choose a catchy name for the model or the talk.
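To give a rough feel for the class of models the talk is about, below is a minimal sketch of a nearest-prototype classifier that uses an angle (cosine) based dissimilarity and turns prototype distances into class probabilities via a softmax. This is an assumption-laden illustration, not the Probabilistic Angle LVQ presented in the talk: the class name, the temperature parameter, and the use of class means as prototypes are invented here for clarity, and no LVQ-style prototype training is performed.

import numpy as np

class AnglePrototypeClassifier:
    """Toy nearest-prototype classifier with an angle (cosine) dissimilarity
    and softmax-style probabilistic class assignments. Illustration only:
    this is NOT the Probabilistic Angle LVQ from the talk; prototypes are
    plain class means and no LVQ-style prototype training takes place."""

    def __init__(self, temperature=0.1):
        # temperature is a made-up knob controlling how peaked the
        # class-probability distribution becomes.
        self.temperature = temperature

    def fit(self, X, y):
        # One prototype per class: the class mean, a crude stand-in for
        # prototypes that an LVQ training procedure would learn.
        self.classes_ = np.unique(y)
        self.prototypes_ = np.vstack([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def _angle_dissimilarity(self, X):
        # d(x, w) = 1 - cos(x, w): depends only on directions, not magnitudes.
        eps = 1e-12
        Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + eps)
        Wn = self.prototypes_ / (np.linalg.norm(self.prototypes_, axis=1, keepdims=True) + eps)
        return 1.0 - Xn @ Wn.T

    def predict_proba(self, X):
        # Softmax over negative dissimilarities: the closer a prototype,
        # the higher the probability of its class.
        logits = -self._angle_dissimilarity(X) / self.temperature
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        p = np.exp(logits)
        return p / p.sum(axis=1, keepdims=True)

    def predict(self, X):
        return self.classes_[np.argmax(self.predict_proba(X), axis=1)]

# How peaked predict_proba is corresponds to what the abstract calls the
# model "knowing how certain it is" about a decision.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 3)) + np.repeat([[3, 0, 0], [0, 3, 0]], 20, axis=0)
    y = np.repeat([0, 1], 20)
    clf = AnglePrototypeClassifier().fit(X, y)
    print(clf.predict_proba(X[:3]))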