KI-FOR 5363 DeSBi

Fusing Deep Learning and Statistics towards Understanding Structured Biomedical Data (DeSBi)

Events

10 September 2025, 14.00 - 15.00 / Hybrid

Grégoire Montavon (Charité): Explainable AI for Unsupervised Learning: Turning Raw Data into Scientific Insights.

Abstract: Raw data usually comes unsupervised. Turning unsupervised data into supervised data requires labels, which in turn require expert knowledge. Developing methods that are not tied to labels gives more flexibility in conducting various forms of scientific investigation. In this talk, I will present novel Explainable AI techniques, based on layer-wise relevance propagation (LRP), that work with unsupervised data and unsupervised ML models. These techniques formulate questions such as "what makes data dissimilar/similar" and "what makes features mutually predictable", and explain by highlighting instances, features or concepts that are relevant. Medical use cases, ranging from inferring regulatory networks from omics data to detecting Clever Hans effects in medical foundation models, will be presented to highlight the actionability of unsupervised XAI.
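
As background, the standard LRP epsilon rule illustrates how the relevance R_k of a neuron k is redistributed onto its inputs j in proportion to their contributions a_j w_{jk} (a generic sketch of the supervised rule; the unsupervised extensions presented in the talk build on this idea but are not reproduced here):

R_j = \sum_k \frac{a_j w_{jk}}{\epsilon + \sum_{0,j} a_j w_{jk}} R_k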

9 September 2025, 14.30 - 15.30 / Hybrid

Philipp Bach (FU): DoubleMLDeep: Estimation of Causal Effects with Multimodal Data

Abstract: This paper explores the use of unstructured, multimodal data, namely text and images, in causal inference and treatment effect estimation. We propose a neural network architecture that is adapted to the double machine learning (DML) framework, specifically the partially linear model. An additional contribution of our paper is a new method to generate a semi-synthetic dataset which can be used to evaluate the performance of causal effect estimation in the presence of text and images as confounders. The proposed methods and architectures are evaluated on the semi-synthetic dataset and compared to standard approaches, highlighting the potential benefit of using text and images directly in causal studies. Our findings have implications for researchers and practitioners in economics, marketing, finance, medicine and data science in general who are interested in estimating causal quantities using non-traditional data.
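
For orientation, the partially linear model used in the DML framework is commonly written as

Y = \theta_0 D + g_0(X) + \zeta, \qquad E[\zeta \mid D, X] = 0,
D = m_0(X) + V, \qquad E[V \mid X] = 0,

where Y is the outcome, D the treatment, \theta_0 the causal parameter of interest and X the confounders (here including text and images). DML estimates \theta_0 after partialling out machine-learning estimates of the nuisance functions g_0 and m_0. This is the standard textbook formulation; the multimodal network architecture proposed in the paper is not reproduced here.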

14 May 2025, 10.00 - 12.00

WIAS/DeSBi Seminar Series - MSS: Vladimir Spokoiny: Estimation and Inference for Deep Neural Networks and Inverse Problems.

Abstract: The talk discusses two important issues in modern high-dimensional statistics. The success of DNNs in practical applications is at the same time a great challenge for statistical theory due to the curse of dimensionality. Manifold-type assumptions are not really helpful and do not explain the double descent phenomenon, in which DNN accuracy improves with overparametrization. We offer a different view on the problem based on the notion of effective dimension and a calming device. The idea is to decouple the structural DNN relation by extending the parameter space and to use a proper regularization without any substantial increase of the effective dimension. The other, related issue is the choice of regularization in inverse problems. We show that a simple ridge penalty (Tikhonov regularization) does a good job in any inverse problem for which the operator is more regular than the unknown signal. In the opposite case, one should use a model reduction technique such as spectral cut-off.
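
To make the contrast between the two regularization strategies concrete, the following minimal numpy sketch compares a ridge (Tikhonov) estimate with a spectral cut-off estimate on a toy linear inverse problem; the operator, signal and tuning parameters are invented for illustration and are not taken from the talk.

import numpy as np

rng = np.random.default_rng(0)

# Toy ill-posed linear inverse problem y = A x + noise with a rapidly decaying spectrum.
n, d = 100, 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((d, d)))
spectrum = 1.0 / (np.arange(1, d + 1) ** 2)
A = U[:, :d] @ np.diag(spectrum) @ V.T
x_true = V[:, 0] + 0.5 * V[:, 1]              # signal aligned with the leading directions
y = A @ x_true + 1e-3 * rng.standard_normal(n)

# Ridge / Tikhonov regularization: x_hat = (A^T A + lam I)^(-1) A^T y.
lam = 1e-4
x_ridge = np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ y)

# Spectral cut-off: keep only the k leading singular directions of A.
k = 5
Us, s, Vt = np.linalg.svd(A, full_matrices=False)
x_cut = Vt[:k].T @ ((Us[:, :k].T @ y) / s[:k])

print("ridge error:  ", np.linalg.norm(x_ridge - x_true))
print("cut-off error:", np.linalg.norm(x_cut - x_true))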

28 April 2025, 12.45 - 14.00

Dimosthenis Kontogiorgos (MIT): Explainable Human-Robot Collaboration (Talk at Fraunhofer HHI)

5 February 2025

Sophie Langer (University of Twente): Deep Learning Theory – What’s Next?

15 January 2025

Nadja Klein (KIT): A Statistical Perspective on Bayesian Deep Learning.

Bayesian deep learning fuses deep neural networks with Bayesian techniques to enable uncertainty quantification and enhance robustness in complex tasks such as image recognition or natural language processing. However, fully Bayesian estimation for neural networks is computationally intensive, requiring us to use approximate inference for virtually all practically relevant problems. Even for partially Bayesian neural networks, there is often a lack of clarity on how to adapt Bayesian principles to deep learning tasks, leaving practitioners overwhelmed by the theoretical aspects, such as choosing appropriate priors. So, how do we design scalable, reliable, and robust approximate Bayesian methods for deep learning? We address this question from a statistical perspective with a focus on "combining the best of both worlds": statistics and machine learning. We develop methods that deliver high-accuracy predictions and offer calibrated probabilistic confidence measures in those predictions. We showcase our work through real data examples and conclude with selected open challenges and directions for future research. The talk will start with a gentle introduction to Bayesian deep learning and aims to give intuitions rather than formulas.
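
As a simple, concrete example of the kind of approximate inference the abstract alludes to, the following PyTorch sketch uses Monte Carlo dropout: dropout is kept active at prediction time and repeated stochastic forward passes yield a rough predictive mean and uncertainty. This is a generic illustration, not a method from the talk; the architecture and numbers are invented.

import torch
import torch.nn as nn

# A small regression network with a dropout layer.
model = nn.Sequential(
    nn.Linear(10, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 1),
)

x = torch.randn(32, 10)            # a batch of 32 hypothetical inputs

model.train()                      # keep dropout stochastic at prediction time
with torch.no_grad():
    samples = torch.stack([model(x) for _ in range(100)])  # 100 stochastic forward passes

pred_mean = samples.mean(dim=0)    # approximate predictive mean
pred_std = samples.std(dim=0)      # approximate predictive uncertainty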

20 March 2024

Jonas Peters (ETH Zurich): Course on "Causality", DeSBi Short Courses, Humboldt University of Berlin, Berlin.