- State-of-the-art methods for watermarking LLMs, HeavyWater and SimplexWater, accepted at NeurIPS 2025
- Paper on Inference-Time Reward Hacking in Large Language Models selected as a spotlight at NeurIPS 2025
- Paper on Leveraging the Sequential Nature of Language for Interpretability selected as a spotlight at the ICML 2025 Workshop on Assessing World Models
- Papers on Soft Best-of-n Sampling and Inference-Time Reward Hacking in LLMs
Research Experience
Claudio currently works under the mentorship of Flavio Calmon at Harvard’s School of Engineering and Applied Sciences. His research interests include inference-time alignment, interpretability, fairness, and the science of generative AI evaluations, as well as the economic implications of AI deployment.
Education
Claudio completed his Ph.D. in mathematics and electrical engineering (summa cum laude) at the Technical University of Munich under the guidance of Felix Krahmer in the Optimization and Data Analysis group, while concurrently affiliated with the Information Theory group led by Holger Boche.
Background
Claudio is a mathematician working on AI and machine learning at Harvard's School of Engineering and Applied Sciences. His research focuses on building the mathematical foundations of trustworthy AI: developing rigorous frameworks, algorithms, and theoretical guarantees for deploying AI systems safely and equitably. He harnesses tools from optimization, statistics, information theory, and signal processing to advance both theory and practice.
Miscellany
Claudio actively collaborates with lawyers and policymakers on AI governance, including contributions to G20 Summit policy discussions, to bridge the gap between technical innovation and responsible AI deployment.