“It is hard to imagine a person who would feel comfortable in blindly agreeing with a system’s decision without a deep understanding of the decision-making rationale. Detailed explanations of AI decisions seem necessary to provide insight into the rationale the AI uses to draw a conclusion.”
This site is dedicated to my research on explainable artificial intelligence, driven by the vision that machine learning models should not only make decisions or output probabilities, but also provide human-understandable explanations of how they reach their conclusions.
Machine learning models must be accurate before they can provide reasonable explanations.
We validate all models on independent datasets to test their performance.
Machine learning models need to focus on key features that are causally related to the target and are often known a priori.
Relevance maps help us assess the level of noise and detect bias in the training data.
Machine learning models should provide an explanation describing the decision-making process in an intuitive way.
Our goal is to develop a system architecture that can extract relevant information and dynamically synthesize descriptive explanations.
Martin Dyrba, PhD
I am a researcher in artificial intelligence and translational research in neuroimaging. My research focuses on machine learning methods to detect neurodegenerative diseases such as Alzheimer’s disease.
At present, I am a research associate at the German Center for Neurodegenerative Diseases (DZNE), Rostock. Here, I work on novel approaches to improve the comprehensibility and interpretability of machine learning models.
Biosketch & activities
In 2011, I graduated from the University of Rostock. In 2016, I obtained my PhD in medical informatics. At the end of 2015, I was awarded the Steinberg-Krupp Alzheimer’s Research Prize for my work on Support Vector Machine models to detect Alzheimer’s disease based on multicenter neuroimaging data. I have been working as a reviewer for several grant agencies and international journals. I was guest editor for the Frontiers Research Topic ‘Deep Learning in Aging Neuroscience’.
Recently, I chaired the Featured Research Session ‘Doctor AI: Making computers explain their decisions’ at the Alzheimer’s Association International Conference (AAIC) 2020.
In 2020, I developed a convolutional neural network architecture to detect Alzheimer’s disease in MRI scans. The diagnostic performance was validated in three independent cohorts.
From the neural networks, we can derive relevance maps that indicate the brain areas with a high contribution to the diagnostic decision. Medial temporal lobe atrophy emerged as the most relevant area, which matched our expectations, as hippocampal volume is the best-established neuroimaging marker for Alzheimer’s disease.
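The text does not specify which attribution method produces these relevance maps, so as a minimal, model-agnostic illustration of the general idea, the sketch below uses occlusion sensitivity: masking image patches one at a time and recording how much the model’s output score drops. The toy model, patch size, and function names here are hypothetical, chosen only to keep the example self-contained; a real MRI pipeline would use a trained CNN and a dedicated attribution method.

```python
import numpy as np

def occlusion_map(model, image, patch=4):
    """Occlusion sensitivity: relevance of each patch is the score drop
    when that patch is masked out (set to zero)."""
    base_score = model(image)
    h, w = image.shape
    relevance = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0  # mask one patch
            relevance[i // patch, j // patch] = base_score - model(occluded)
    return relevance

# Toy stand-in for a classifier: mean intensity in a fixed region of
# interest (purely illustrative, not a real diagnostic model).
def toy_model(img):
    return img[8:16, 8:16].mean()

img = np.zeros((16, 16))
img[8:16, 8:16] = 1.0          # "signal" lies in the lower-right quadrant
rel = occlusion_map(toy_model, img, patch=8)
# Only the quadrant containing the signal receives nonzero relevance.
```

In this toy setup, masking the lower-right quadrant removes all of the signal the model relies on, so that cell of the relevance map carries the full score drop while the others stay at zero, mirroring how a relevance map highlights the brain regions a classifier actually uses.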
We are working on a system architecture that can extract relevant information and dynamically generate descriptive explanations of varying granularity on demand.
“AI will drastically change healthcare. We are working on making AI systems more reliable, transparent, and comprehensible.”
Student assistant, 2020
InteractiveVis: Python and Bokeh programming