News
October 3-4, 2025: talk on 'Machine Learning for Preference Elicitation in Combinatorial Auction Design' at the Berkeley–Columbia Meeting in Engineering and Statistics.
September 25-26, 2025: poster on deep learning theory at MoDL.
July 17, 2025: oral presentation (top 1% of submissions) of the paper 'Prices, Bids, Values: One ML-Powered Combinatorial Auction to Rule Them All' at ICML 2025.
February 16-21, 2025: participant in the BIRS workshop on Uncertainty Quantification in Neural Network Models; gave a talk and led a discussion on 'Inductive Bias of Neural Networks' and presented a poster on uncertainty quantification.
October 1, 2024: started a postdoc with Prof. Bin Yu at UC Berkeley on deep learning theory and uncertainty quantification.
July 26, 2024: defended the PhD thesis 'Inductive bias of neural networks and selected applications'.
July 8-12, 2024: talk on 'Path-dependent Neural Jump ODEs and their Application to Stochastic Filtering' at the 12th World Congress of the Bachelier Finance Society.
June 17-20, 2024: talk on 'Deep Learning Theory on Multi-task Learning' at the ETH – Hong Kong – Imperial Mathematical Finance Workshop.
March 25, 2024: talk on 'NOMU: Neural Optimization-based Model Uncertainty' at AMLD EPFL 2024.
February 19-26, 2024: attended AAAI 2024 for the paper 'Machine Learning-powered Combinatorial Clock Auction'.
June 25-28, 2023: visit to the University of Oxford.
February 15-28, 2023: visit to Bin Yu's research group at UC Berkeley.
February 6-15, 2023: oral presentation of the paper 'Bayesian Optimization-based Combinatorial Assignment' at AAAI 2023.
Research Experience
Since 2024, postdoctoral researcher in Prof. Bin Yu's group at UC Berkeley, working on deep learning theory and uncertainty quantification.
Education
2019-2024, Ph.D. at ETH Zurich, advised by Prof. Josef Teichmann. Prior to that, received a B.Sc. and M.Sc. (2019) in Technical Mathematics from the Technical University of Vienna.
Background
Research interests: the mathematical theory of deep learning (inductive bias, multi-task learning, compressibility); quantifying epistemic and aleatoric uncertainty of deep neural networks; deep learning for market design, in particular preference elicitation for combinatorial auctions; Neural Jump ODEs for irregularly observed time series; and compression of neural networks.
Miscellany
Enjoys working in teams on scientifically exciting open questions, especially trying to understand paradoxical phenomena; he particularly likes establishing new perspectives that can partially resolve such paradoxes. He also enjoys developing new methods and diagnosing, discussing, understanding, and improving them with collaborators. To him it is important that each team member is passionate about the project, and he is happy when students continue working with him for years after his supervision ends, purely out of excitement about their work.