Learning vector quantization with applications in neuroimaging and biomedicine
PhD ceremony: | Mr R. (Rick) van Veen |
When: | May 02, 2022 |
Start: | 12:45 |
Supervisors: | Prof. dr. M. (Michael) Biehl, prof. dr. N. (Nicolai) Petkov, prof. dr. K.L. (Klaus) Leenders |
Co-supervisor: | dr. G.J. de Vries |
Where: | Academy building RUG |
Faculty: | Science and Engineering |
An early clinical diagnosis of neurodegenerative diseases is complex and not always possible, because the disorders share overlapping characteristics. Biomarkers are therefore needed, and functional brain imaging may provide them.
Machine learning can aid the diagnosis, but the decision process of some methods cannot always be understood. Machine learning also requires large amounts of data, so functional brain scans from several neuroimaging centers are combined into a single dataset. We show that this introduces unwanted variation in the data that can artificially inflate the measured performance of machine learning models.
Learning Vector Quantization is a type of machine learning that learns prototypical representations (prototypes) of the classes in the data. Additionally, it weights each input dimension by its relevance to the classification task. In one application example, we train a model on steroid measurements from patients with a benign or malignant adrenocortical tumor. In this case, the obtained models were directly interpretable and helped to decide between different measuring technologies.
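The classification step described above can be sketched in a few lines: each class is represented by a prototype, a relevance vector weights the contribution of each feature to the distance, and a sample is assigned to the class of its nearest prototype. The prototype positions, relevance values, and feature dimensions below are purely illustrative, not taken from the thesis.

```python
import numpy as np

def weighted_distance(x, w, relevances):
    """Squared Euclidean distance with per-feature relevance weights."""
    return np.sum(relevances * (x - w) ** 2)

def classify(x, prototypes, labels, relevances):
    """Assign x to the class label of the nearest prototype."""
    dists = [weighted_distance(x, w, relevances) for w in prototypes]
    return labels[int(np.argmin(dists))]

# Illustrative example: one prototype per class in a 3-feature space.
prototypes = np.array([[0.0, 0.0, 0.0],   # hypothetical "benign" prototype
                       [1.0, 1.0, 0.0]])  # hypothetical "malignant" prototype
labels = ["benign", "malignant"]

# Relevance weights: the third feature is judged irrelevant (weight 0),
# so it cannot influence the decision.
relevances = np.array([0.5, 0.5, 0.0])

sample = np.array([0.9, 0.8, 5.0])
print(classify(sample, prototypes, labels, relevances))  # prints "malignant"
```

In training, such methods move the winning prototypes toward correctly classified samples and away from misclassified ones, while adapting the relevance weights; inspecting the learned prototypes and relevances is what makes the models interpretable.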
Due to the complex nature of the brain data, the models trained to diagnose neurodegenerative diseases are not directly interpretable. Nonetheless, we show that prototypes and relevances can be reconstructed in the imaging space, which significantly increases the interpretability of the models. Additionally, we produce easy-to-understand representations of the data that visualize a patient's diagnosis and progression over time, leading to actionable scenarios. Lastly, we present a novel method to deal with center-related, unwanted variance in the data.