Project P3 aims to provide a formal basis for uncertainty modeling and explainability in the context of structured pixel-wise predictions on biomedical image data, with a particular focus on the use case of instance segmentation. We will combine uncertainty modeling with contrastive explainability to increase the efficiency of proofreading automatic instance segmentations, currently a bottleneck in many biomedical applications.
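To illustrate how uncertainty modeling can guide proofreading, the sketch below is a minimal, hypothetical example (not the project's actual pipeline): it stands in an ensemble of segmentation networks with random per-pixel foreground probabilities, computes a per-pixel entropy map as an uncertainty measure, and flags the most uncertain pixels for manual review. All names and thresholds are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for an ensemble of segmentation networks:
# each member produces a per-pixel foreground probability map.
H, W, N_MEMBERS = 64, 64, 8
prob_maps = rng.uniform(0.0, 1.0, size=(N_MEMBERS, H, W))

# Mean predictive probability and per-pixel binary entropy of that mean,
# a common uncertainty measure for binary segmentation.
p = prob_maps.mean(axis=0)
eps = 1e-12
entropy = -(p * np.log(p + eps) + (1.0 - p) * np.log(1.0 - p + eps))

# Flag the most uncertain pixels (top 5% here, an arbitrary choice)
# as candidates for manual proofreading.
threshold = np.quantile(entropy, 0.95)
flagged = entropy >= threshold
print(f"flagged {int(flagged.sum())} of {H * W} pixels for review")
```

In a real proofreading workflow the flagged regions, rather than a fixed quantile of pixels, would be prioritized in the annotation tool so that expert effort concentrates where the model disagrees with itself.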

This project will use the explainable statistical tests developed in P2, together with the explanations for uncertainty models developed jointly with P4, to identify features that explain differences between modes of predictive distributions. We will also work closely with P6 on Bayesian deep learning models and uncertainty quantification for biomedical imaging data.
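The idea of explaining differences between modes of a predictive distribution can be sketched as follows. This is an illustrative assumption, not the tests from P2: samples from two hypothetical modes (e.g. "split" vs. "merged" instance hypotheses) are summarized by interpretable features, and features are ranked by standardized mean difference to answer the contrastive question "why mode A rather than mode B?". All feature names and distributions are fabricated for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: samples from a segmentation model's predictive
# distribution fall into two modes. Each sample is summarized by
# interpretable features [object_count, mean_object_area].
mode_a = np.column_stack([
    rng.normal(10.0, 1.0, 50),  # object_count: instances split apart
    rng.normal(40.0, 5.0, 50),  # mean_object_area
])
mode_b = np.column_stack([
    rng.normal(5.0, 1.0, 50),   # object_count: instances merged
    rng.normal(42.0, 5.0, 50),  # mean_object_area barely changes
])

# Rank features by a standardized mean difference (Cohen's d style):
# the feature with the largest effect size best separates the modes.
diff = mode_a.mean(axis=0) - mode_b.mean(axis=0)
pooled_std = np.sqrt((mode_a.var(axis=0) + mode_b.var(axis=0)) / 2.0)
effect_size = np.abs(diff) / pooled_std

features = ["object_count", "mean_object_area"]
best = features[int(np.argmax(effect_size))]
print(f"most discriminative feature: {best}")
```

A proper analysis would replace the effect-size ranking with a calibrated statistical test, but the contrastive structure, comparing feature distributions across modes rather than describing a single prediction, stays the same.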