DeSBi

P5: Structured Explainability for Interactions in Deep Learning Models Applied to Pathogen Phenotype Prediction

Project Summary

This project combines statistical methodology and variable selection with deep learning to disentangle the complex non-linear interactions present in genomic and protein data. Project P5 aims to develop methods for variable selection and Bayesian regularization that can be applied within the framework of semi-structured mixed models, as well as in explanation methods for classification decisions such as layer-wise relevance propagation (LRP).
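A minimal sketch of the semi-structured idea: an interpretable linear predictor on structured covariates is combined with a neural-network contribution from unstructured inputs. To keep the linear coefficients identifiable, the network output can be projected onto the orthogonal complement of the structured design matrix. All names and the toy network below are illustrative assumptions, not the project's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: X holds structured (interpretable) covariates,
# Z holds unstructured inputs fed to a small network.
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
Z = rng.normal(size=(n, 5))

# Toy "deep" part: a fixed random one-hidden-layer ReLU net.
W1 = rng.normal(size=(5, 8))
W2 = rng.normal(size=(8,))
f = np.maximum(Z @ W1, 0.0) @ W2

# Orthogonalization: project the network output onto the orthogonal
# complement of span(X), so the structured coefficients stay identifiable.
P = X @ np.linalg.solve(X.T @ X, X.T)  # projection onto span(X)
f_orth = f - P @ f

# The semi-structured predictor adds both parts.
beta = np.array([1.0, -2.0])
eta = X @ beta + f_orth
```

After the projection, the deep part carries no component that the linear part could absorb, which is what makes the linear coefficients interpretable.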

Research Question

How can we extract explanations from deep learning models, and how can we understand and interpret complex interactions between features that are not necessarily explicitly stated, such as genomic motifs?

Research Framework

Improve explainability for interactions in deep learning for genomics.

Main Contribution

  • Propose a modified version of layer-wise relevance propagation (LRP) that enforces sparsity and concentrates relevance on the most important features;
  • Develop a semi-structured deep learning model with high predictive power that also provides explainability for complex structured data.
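The first contribution can be sketched as follows: standard epsilon-rule LRP redistributes the network output backwards through the layers, and a pruning step at each layer keeps only the largest relevances, forcing the explanation to be sparse. The top-k pruning rule below is an illustrative assumption, not the exact scheme of the published method.

```python
import numpy as np

def lrp_epsilon_pruned(weights, biases, x, k, eps=1e-6):
    """Epsilon-rule LRP through a ReLU MLP with a linear output layer,
    keeping only the k largest-magnitude relevances at each layer
    (a sketch of sparsity-enforcing pruning)."""
    # Forward pass, storing the activations of every layer.
    activations = [x]
    a = x
    for i, (W, b) in enumerate(zip(weights, biases)):
        z = W @ a + b
        a = np.maximum(z, 0.0) if i < len(weights) - 1 else z
        activations.append(a)

    # Initialize relevance at the output.
    R = activations[-1].copy()

    # Backward relevance propagation with the epsilon stabilizer.
    for i in range(len(weights) - 1, -1, -1):
        a_prev = activations[i]
        W, b = weights[i], biases[i]
        z = W @ a_prev + b
        z = z + eps * np.where(z >= 0, 1.0, -1.0)  # avoid division by zero
        s = R / z
        R = a_prev * (W.T @ s)
        # Prune: zero out all but the k largest-magnitude relevances.
        if np.count_nonzero(R) > k:
            drop = np.argsort(np.abs(R))[:-k]
            R[drop] = 0.0
    return R

# Example usage on a tiny random two-layer network.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)
x = rng.normal(size=3)
R = lrp_epsilon_pruned([W1, W2], [b1, b2], x, k=2)
```

The returned relevance vector has the shape of the input and, by construction, at most k nonzero entries, concentrating the explanation on the most relevant features.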

Publications

  • Yanez Sarmiento, P., Witzke, S., Klein, N. and Renard, B.Y. (2024), 'Sparse Explanations of Neural Networks Using Pruned Layer-Wise Relevance Propagation', Joint European Conference on Machine Learning and Knowledge Discovery in Databases (pp. 336–351). Cham: Springer Nature Switzerland.
  • Rakowski, A., Monti, R., Huryn, V., Lemanczyk, M., Ohler, U. and Lippert, C. (2024), 'Metadata-Guided Feature Disentanglement for Functional Genomics', Bioinformatics, 40(Supplement_2), pp. ii4–ii10.

Principal Investigators

Nadja Klein (KIT)

Bernhard Renard (UP/HPI)

Project Researchers

Dingyi Lai (KIT)

Paulo Yanez Sarmiento (HPI)