Our Austrian sister project is connected to the German AI platform project EMPAIA (“Ecosystem for Pathology Diagnostics with AI Assistance”, www.empaia.org). As an EMPAIA reference center we make machine decisions transparent and retraceable, and hence interpretable for a medical expert. Our goal is to enable pathologists to understand the context of a machine decision and to re-enact it on demand. We
1. investigate how medical experts explain their decisions by studying their strategies as they explore the underlying explanatory factors of the data, so as to formalize a structural causal model of human decision making and map its features to AI/ML approaches. In digital pathology, for example, such mechanistic models can be used to analyze and predict the response of a functional network to features in histology slides, molecular data and family history.
2. develop methods to measure the quality of explanations; we pioneer solutions to measure “causability”, the extent to which an explanation of a statement to a human expert achieves a specified level of causal understanding with effectiveness, efficiency and satisfaction in a specified context of use.
3. use the insights gained from 1) and 2) to develop, test and evaluate novel interface techniques that can be trained by medical experts to make the underlying principles understandable to them. This will enhance reliability, accountability, fairness and trust in AI methods and foster ethically responsible ML.
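To illustrate item 1, the following is a minimal sketch of a structural causal model of an expert's decision. All variable names, mechanisms and coefficients are illustrative assumptions, not the project's actual model; the point is only the structure: each variable is a function of its parents plus exogenous noise, and an intervention (the do-operator) replaces a structural equation to estimate a causal effect.

```python
import random

def sample_case(rng):
    # Exogenous noise terms
    u_hist, u_mol, u_img = rng.random(), rng.random(), rng.random()
    # Structural equations: each variable depends on its parents + noise
    family_history = u_hist < 0.2                           # prior risk factor
    molecular = (0.5 if family_history else 0.1) + 0.3 * u_mol  # marker level
    histology = 0.6 * molecular + 0.4 * u_img               # slide feature
    diagnosis = histology > 0.45                            # expert's call
    return diagnosis

def do_family_history(rng, value):
    # do(family_history = value): cut the variable loose from its noise term,
    # keep the downstream mechanisms unchanged
    u_mol, u_img = rng.random(), rng.random()
    molecular = (0.5 if value else 0.1) + 0.3 * u_mol
    histology = 0.6 * molecular + 0.4 * u_img
    return histology > 0.45

rng = random.Random(0)
n = 10_000
do_pos = sum(do_family_history(rng, True) for _ in range(n)) / n
do_neg = sum(do_family_history(rng, False) for _ in range(n)) / n
effect = do_pos - do_neg
print(f"causal effect of family history on the diagnosis: {effect:.2f}")
```

Contrasting the two interventional distributions in this way is what makes the model's reasoning re-traceable: the effect attributed to a factor follows from the stated mechanisms rather than from correlations alone.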
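For item 2, a causability measurement can be operationalized as a questionnaire: an expert rates a set of statements about an explanation on a Likert scale, and the ratings are aggregated into a single score. The sketch below assumes ten statements on a five-point scale with the total normalized to [0, 1]; the aggregation rule and the example ratings are illustrative choices, not the project's published instrument.

```python
def causability_score(ratings, scale_max=5):
    """Aggregate Likert ratings (1..scale_max) into a score in [0, 1]
    by dividing the summed ratings by the maximum attainable total."""
    if not ratings:
        raise ValueError("no ratings given")
    if any(not 1 <= r <= scale_max for r in ratings):
        raise ValueError("rating outside Likert range")
    return sum(ratings) / (scale_max * len(ratings))

# Example: a pathologist rates ten statements about one explanation
ratings = [4, 5, 3, 4, 4, 5, 2, 4, 3, 4]
print(f"causability score: {causability_score(ratings):.2f}")  # 0.76
```

A normalized score makes explanations comparable across interfaces and contexts of use, which is exactly what evaluating the interface techniques of item 3 requires.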