DeSBi

P3: Uncertainty Assessment and Contrastive Explanations for Instance Segmentation

Project Summary

Project P3 aims to provide a formal basis for uncertainty modeling and explainability in the context of structured pixel-wise predictions on biomedical image data, with a particular focus on the use case of instance segmentation. We will use a combination of uncertainty modeling and contrastive explainability to make the proofreading of automatic instance segmentations more efficient, a current bottleneck in many biomedical applications. This project will use the explainable statistical tests developed in P2, together with the explanations in the context of uncertainty modeling developed jointly with P4, to identify features that explain differences between modes of predictive distributions. We will also work closely with P6 on Bayesian DL models and uncertainty quantification for biomedical imaging data.

Research Question(s)

How can we improve explainable AI (XAI) techniques for structured pixel-wise predictions on biomedical data by leveraging concept-based approaches such as Concept-Relevance Propagation (CRP)?

Research Framework

Our research aims to extend Concept-Relevance Propagation (CRP) to structured pixel-wise predictions in the biomedical domain. Specifically, we plan to integrate model-guided explanations with CRP to enhance the interpretability of pixel-level predictions in critical medical applications, such as biomedical imaging.
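To make the idea concrete, below is a minimal sketch of concept-conditional attribution in the spirit of CRP, assuming a PyTorch CNN classifier: the backward pass is restricted to a single channel (the "concept") of an intermediate layer, so the resulting input heatmap reflects only that concept's contribution to the prediction. The model, layer, channel, and class choices are illustrative assumptions, gradient*input stands in for LRP-style relevance, and this is not the API of any particular CRP implementation.

import torch
import torchvision.models as models

# Any CNN classifier will do for this sketch; pretrained weights are not needed.
model = models.resnet18(weights=None).eval()

concept_layer = model.layer4      # assumed layer hosting the "concept"
concept_channel = 42              # assumed channel index of the concept
target_class = 7                  # assumed class of interest

def mask_to_concept(grad):
    # Keep only the gradient that flows back through the chosen channel,
    # i.e. condition the backward pass on this single concept.
    masked = torch.zeros_like(grad)
    masked[:, concept_channel] = grad[:, concept_channel]
    return masked

def attach_mask(module, inputs, output):
    # Register the masking hook on the layer's output tensor at every forward pass.
    output.register_hook(mask_to_concept)

handle = concept_layer.register_forward_hook(attach_mask)

x = torch.randn(1, 3, 224, 224, requires_grad=True)   # dummy input image
logits = model(x)
logits[0, target_class].backward()

# Gradient*input as a simple surrogate for relevance: a pixel-wise map of how
# the chosen concept contributes to the target class for this particular input.
concept_heatmap = (x.grad * x).sum(dim=1).detach()
handle.remove()

Masking the backward signal at the layer output is the key design choice: it turns a standard class-wise heatmap into a concept-conditional one without retraining the model.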

Main Contribution

Towards the goal of integrating model-guided explanations with CRP for pixel-level predictions in the biomedical domain, we unify the use of explainable AI heatmaps for weakly supervised segmentation with the "Right for the Right Reasons" paradigm, demonstrating that differentiable heatmap architectures perform competitively in semi-supervised segmentation tasks, even outperforming standard encoder-decoder models when trained with limited pixel-level labels and image-level supervision [1]. In [2], we address the question of explaining predictive uncertainty, i.e., why a model 'doubts'. Although this aspect had previously been neglected in the literature, we show that predictive uncertainty is dominated by second-order effects, involving single features or product interactions between them. Moreover, we provide quanda, a Python toolkit designed as a unified framework for evaluating training data attribution methods [3]. As a further application to biomedical images, [4] studies concept-based explanations for the classification of Alzheimer's disease. Finally, in [5] we introduce a continuous-time score-based generative model that leverages fractional diffusion processes, improving image quality and pixel-wise diversity, an essential property for synthetic data used in AI systems for high-stakes medical applications such as structured pixel-wise predictions in biomedical imaging.
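To illustrate the second-order view of [2] (a sketch under simplifying assumptions, with notation chosen here for illustration rather than taken from the paper): let u(x) denote a twice-differentiable uncertainty score, e.g., the predictive variance of an ensemble, evaluated at input x. A second-order Taylor expansion around a reference point \tilde{x} reads

\[
u(x) \;\approx\; u(\tilde{x}) \;+\; \sum_i \frac{\partial u}{\partial x_i}\bigg|_{\tilde{x}} (x_i - \tilde{x}_i) \;+\; \sum_{i,j} R_{ij},
\qquad
R_{ij} \;=\; \frac{1}{2}\,\frac{\partial^2 u}{\partial x_i \partial x_j}\bigg|_{\tilde{x}} (x_i - \tilde{x}_i)(x_j - \tilde{x}_j).
\]

The diagonal terms R_{ii} capture the contributions of single features, while the off-diagonal terms R_{ij} (i ≠ j) capture product interactions between feature pairs; aggregating R_i = \sum_j R_{ij} yields a per-feature relevance map that can be visualized as a heatmap over the input. In this notation, saying that predictive uncertainty is dominated by second-order effects means that the quadratic term, rather than the first-order gradient term, carries most of the explanation.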

Publications

  • Yu, Xiaoyan, et al. "Model guidance via explanations turns image classifiers into segmentation models." World Conference on Explainable Artificial Intelligence. Cham: Springer Nature Switzerland, 2024.
  • Bley, Florian, et al. "Explaining Predictive Uncertainty by Exposing Second-Order Effects." arXiv preprint arXiv:2401.17441 (2024).
  • Bareeva, Dilyara, et al. "Quanda: An Interpretability Toolkit for Training Data Attribution Evaluation and Beyond." arXiv preprint arXiv:2410.07158 (2024).
  • Tinauer, Christian, et al. "Explainable concept mappings of MRI: Revealing the mechanisms underlying deep learning-based brain disease classification." World Conference on Explainable Artificial Intelligence. Cham: Springer Nature Switzerland, 2024.
  • Nobis, Gabriel, et al. "Generative fractional diffusion models." Advances in Neural Information Processing Systems, 2024.


Principal Investigators

Dagmar Kainmüller (UP/MDC)

Wojciech Samek (TU Berlin/HHI Fraunhofer)

Project Researchers

Gabriel Nobis (HHI Fraunhofer)

Claudia Winklmayr (MDC Berlin)