Annual Meeting of the Association for Computational Linguistics · 2023
Resume (English only)
Academic Achievements
Publications: 'Are Language Models Efficient Reasoners? A Perspective from Logic Programming', NeurIPS 2025; 'MathGAP: Out-of-Distribution Evaluation on Problems with Arbitrarily Complex Proofs', ICLR 2025.
Research Experience
Currently a PhD student at ETH Zurich, supported by the Max Planck ETH Center for Learning Systems. Previously gained research experience at the University of California, Berkeley.
Education
PhD: ETH Zurich, advised by Mrinmaya Sachan, Ryan Cotterell, and Bernhard Schölkopf; MSc: Data Science, ETH Zurich; BSc: Industrial Engineering, Chalmers University of Technology; spent time at the University of California, Berkeley.
Background
Research Interests: Natural language processing, machine learning, and computational linguistics. Current focus: Efficiency of deductive reasoning in language models. Other interests: Linguistic and cognitive evaluation of LLMs, understanding and modeling human reading, test-time adaptation/scaling, context-free parsing, and constrained generation of LLMs.
Miscellany
Open to discussing ideas and techniques in his areas of interest.