Deep neural networks have driven substantial progress across many domains, but their hierarchical structure and non-linear processing make individual model decisions difficult to interpret. One successful approach addresses this by propagating the output score backwards through the network, layer by layer, to assign relevance scores to all input features (see the sketch below). Project P4 expands on this line of work in three ways: improving the robustness of XAI techniques used to compute population-level explanations, developing training objectives and regularization techniques that enhance interpretability, and applying these methods to genomic and bioimage data. To evaluate pattern sharing between protein targets, the project will use the statistical methods for explanations developed in P2 as well as the deep-learning-based conditional independence tests from P1. In addition, we will collaborate with P3 to develop explanations under model uncertainty. The regularization and visual interpretation methods developed in this project will in turn be used in P5.
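
To illustrate the back-propagation of relevance mentioned above, the following is a minimal NumPy sketch of an LRP-ε style redistribution rule, one standard instance of this family of methods, applied to a hypothetical two-layer ReLU network. The network, its random weights, and the epsilon value are illustrative assumptions for the sketch, not the techniques developed in P4.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer ReLU network with random weights and zero biases.
W1, b1 = rng.normal(size=(4, 6)), np.zeros(6)
W2, b2 = rng.normal(size=(6, 1)), np.zeros(1)

def forward(x):
    """Forward pass, keeping the activations of every layer."""
    a0 = x
    a1 = np.maximum(0.0, a0 @ W1 + b1)
    a2 = a1 @ W2 + b2
    return [a0, a1, a2]

def lrp_epsilon(activations, weights, eps=1e-6):
    """Trace the output score back through the network (LRP-epsilon rule):
    R_j = a_j * sum_k( w_jk * R_k / (z_k + eps * sign(z_k)) )."""
    relevance = activations[-1].copy()              # start from the output score
    for a, W in zip(activations[-2::-1], weights[::-1]):
        z = a @ W                                   # pre-activations (zero biases here)
        z = z + eps * np.where(z >= 0, 1.0, -1.0)   # epsilon stabiliser
        s = relevance / z
        relevance = a * (s @ W.T)                   # redistribute to the layer below
    return relevance                                # one relevance score per input feature

x = rng.normal(size=4)
acts = forward(x)
R = lrp_epsilon(acts, [W1, W2])
print("output:", acts[-1], "input relevances:", R)
```

The key property of such rules is (approximate) conservation: the relevance scores at the input sum to roughly the output score, so each input feature's score can be read as its share of the model's decision.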