DeSBi Retreat 2024
7 - 8 July 2024
Our research unit, RU KI-FOR 5363, recently held its annual retreat—an event we eagerly anticipate each year. This retreat serves as a critical platform for promoting scientific exchange and taking full advantage of the diverse expertise within our project teams. More than just a gathering, it’s a space where we come together to discuss the progress of our work, showcase key achievements, and engage in meaningful discussions that drive innovation forward.
DAGStat 2025
24 - 28 March 2025
The DeSBi Research Unit organized a dedicated session on its research topics at DAGStat 2025 in Berlin. The session, titled "Fusing Deep Learning and Statistics Towards Understanding Structured Biomedical Data," was chaired by our associated postdoctoral researcher, Georg Keilbar. Oral presentations were delivered by unit PhD students and postdocs: Masoumeh Javanbakht, Marco Simnacher, Manuel Pfeuffer, and Sepideh Saran.
The presented topics covered a diverse range of cutting-edge research, including deep nonparametric conditional independence tests for images, visual explanations for statistical tests, deep modeling in the presence of known confounders with applications to neuroimaging data, and an empirical analysis of uncertainty quantification in genomics applications.

DeSBi Joint Seminar Series
The RU’s joint seminar series plays a vital role in fostering scientific exchange and enhancing our visibility within the research community. These seminars feature both internal and external speakers, creating a vibrant platform for knowledge sharing.
Join Us!
We invite everyone to participate in the upcoming seminars and engage in the dynamic discussions that shape our research landscape. Stay tuned for our schedule and speaker announcements.
KI-FOR 5363 DeSBi
Fusing Deep Learning and Statistics towards Understanding Structured Biomedical Data (DeSBi)
Publications

Paulo Yanez, Simon Witzke, Nadja Klein, Bernhard Y. Renard: Sparse Explanations of Neural Networks Using Pruned Layer-Wise Relevance Propagation
Published in: Machine Learning and Knowledge Discovery in Databases. Research Track. ECML PKDD 2024. Lecture Notes in Computer Science
Abstract:
Explainability is a key component in many applications involving deep neural networks (DNNs). However, current explanation methods for DNNs commonly leave it to the human observer to distinguish relevant explanations from spurious noise. This is not feasible anymore when going from easily human-accessible data such as images to more complex data such as genome…

Marco Simnacher, Xiangnan Xu, Hani Park, Christoph Lippert, Sonja Greven: Deep Nonparametric Conditional Independence Tests for Images
Published in: arXiv
Abstract:
Conditional independence tests (CITs) test for conditional dependence between random variables. As existing CITs are limited in their applicability to complex, high-dimensional variables such as images, we introduce deep nonparametric CITs (DNCITs). The DNCITs combine embedding maps, which extract feature representations of high-dimensional variables, with nonparametric CITs applicable to these feature representations.
Events
10 September, 14:00 - 15:00 / Hybrid
Grégoire Montavon (Charité): Explainable AI for Unsupervised Learning: Turning Raw Data into Scientific Insights.
Abstract: Raw data usually comes unsupervised. Turning unsupervised data into supervised data requires labels, which in turn require expert knowledge. Developing methods that are not tied to labels gives more flexibility in conducting various forms of scientific investigations. In this talk, I will present novel Explainable AI techniques, based on layer-wise relevance propagation (LRP), that work with unsupervised data and unsupervised ML models. These techniques formulate questions such as "what makes data dissimilar/similar" and "what makes features mutually predictable", and explain by highlighting instances, features, or concepts that are relevant. Medical use cases, ranging from inferring regulatory networks from omics data to detecting Clever Hans effects in medical foundation models, will be presented to highlight the actionability of unsupervised XAI.
9 September 2025, 14:30 - 15:30 / Hybrid
Philipp Bach (FU): DoubleMLDeep: Estimation of Causal Effects with Multimodal Data
Abstract: This paper explores the use of unstructured, multimodal data, namely text and images, in causal inference and treatment effect estimation. We propose a neural network architecture that is adapted to the double machine learning (DML) framework, specifically the partially linear model. An additional contribution of our paper is a new method to generate a semi-synthetic dataset which can be used to evaluate the performance of causal effect estimation in the presence of text and images as confounders. The proposed methods and architectures are evaluated on the semi-synthetic dataset and compared to standard approaches, highlighting the potential benefit of using text and images directly in causal studies. Our findings have implications for researchers and practitioners in economics, marketing, finance, medicine and data science in general who are interested in estimating causal quantities using non-traditional data.
14 May 2025, 10:00 - 12:00
WIAS/DeSBi Seminar Series - MSS: Vladimir Spokoiny: Estimation and Inference for Deep Neural Networks and Inverse Problems.
Abstract:
The talk discusses two important issues in modern high-dimensional statistics. The success of DNNs in practical applications is at the same time a great challenge for statistical theory due to the curse of dimensionality. Manifold-type assumptions are not really helpful and do not explain the double descent phenomenon, in which DNN accuracy improves with overparametrization. We offer a different view on the problem based on the notion of effective dimension and a calming device. The idea is to decouple the structural DNN relation by extending the parameter space and to use a proper regularization without any substantial increase of the effective dimension. The other related issue is the choice of regularization in inverse problems. We show that a simple ridge penalty (Tikhonov regularization) does a good job in any inverse problem for which the operator is more regular than the unknown signal. In the opposite case, one should use a model reduction technique such as spectral cut-off.
28 April 2025, 12:45 - 14:00
Dimosthenis Kontogiorgos (MIT): Explainable Human-Robot Collaboration (Talk at Fraunhofer HHI)
5 February 2025
Sophie Langer (University of Twente): Deep Learning Theory – What’s Next?
15 January 2025
Nadja Klein (KIT): A Statistical Perspective on Bayesian Deep Learning.
Bayesian deep learning fuses deep neural networks with Bayesian techniques to enable uncertainty quantification and enhance robustness in complex tasks such as image recognition or natural language processing. However, fully Bayesian estimation for neural networks is computationally intensive, requiring us to use approximate inference for virtually all practically relevant problems. Even for partially Bayesian neural networks, there is often a lack of clarity on how to adapt Bayesian principles to deep learning tasks, leaving practitioners overwhelmed by the theoretical aspects, such as choosing appropriate priors. So, how do we design scalable, reliable, and robust approximate Bayesian methods for deep learning? We address this question from a statistical perspective with a focus on "combining the best of both worlds": statistics and machine learning. We develop methods that deliver high-accuracy predictions and offer calibrated probabilistic confidence measures in those predictions. We showcase our work through real data examples and conclude with selected open challenges and directions for future research. The talk will start with a gentle introduction to Bayesian deep learning and aims to give intuitions rather than formulas.
20 March 2024
Jonas Peters (ETH Zurich): course on "Causality", DeSBi Short Courses, Humboldt University of Berlin.