Publications include 'Escaping Plato's Cave: Towards the Alignment of 3D and Text Latent Spaces' (CVPR 2025), 'Accelerating Transformer Inference for Translation via Parallel Decoding' (ACL 2023), 'Multimodal Neural Databases' (SIGIR 2023), and more.
Research Experience
Currently a Research Scientist at Nous Research, conducting post-training research on LLMs with a focus on enhancing their robustness, reliability, and alignment. Formerly a Research Scientist on Apple's Machine Learning Research (MLR) team, working on the robustness and reliability of foundation models via uncertainty estimation. Previously an Open Science Researcher at Hugging Face as part of BigScience, the open research workshop on large language models that introduced the now-popular instruction-tuning paradigm. Earlier, a Research Engineer at Pi School, School of Artificial Intelligence, working on a European Commission project promoting entrepreneurship and technology transfer in R&D through NLP-based tools.
Education
PhD in Computer Science, 2024, Sapienza University of Rome
MSc in Computer Science, 2020, University of Rome Tor Vergata
BSc in Computer Science, 2018, University of Rome Tor Vergata
Background
Research interests include Large Language Models, Natural Language Processing, and Representation Learning. During his PhD, he focused on building effective, efficient, and reliable large language models. He was previously a Research Scientist Intern at Apple on the MLR team. His current research focuses on improving the robustness and reliability of language models through uncertainty estimation and mechanistic interpretability.