No specific academic achievements like publications, awards, or patents are mentioned.
Research Experience
During his Ph.D., he interned at Meta FAIR with Asli Celikyilmaz on agent-based reasoning; at Microsoft Research with Patrick Xia and Jason Eisner on improving the reasoning capabilities of LLMs by revising their output; and at Amazon Alexa AI on constraining the output space of a neural network. He is experienced in fine-tuning LLMs at scale (up to Llama 70B), distilling reasoning capabilities from larger models (GPT-4, Claude, Llama 70B) into smaller ones (Llama, Mistral, T5, GPT-2), and aligning LLMs to generate more accurate and contextually appropriate responses (PPO, DPO, RLHF).
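The distillation work mentioned above is commonly trained with a soft-label objective in the style of classic knowledge distillation. The following is an illustrative sketch of that loss in plain Python, not the author's actual training code; the temperature value and the KL(teacher || student) direction are assumptions for the example.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 so gradients are comparable across temperatures.
    (Hypothetical helper for illustration only.)"""
    p = softmax(teacher_logits, temperature)  # soft teacher targets
    q = softmax(student_logits, temperature)  # student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return temperature ** 2 * kl
```

When the student matches the teacher's logits exactly the loss is zero, and it grows as the two distributions diverge; in practice this term is usually mixed with the ordinary cross-entropy on ground-truth labels.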
Education
He is pursuing a Ph.D. at ETH Zürich, Switzerland, under the supervision of Prof. Mrinmaya Sachan (ETH) and Nicholas Monath (Google DeepMind).
Background
His research interests include exploring the reasoning capabilities of large language models and enhancing the reasoning skills of smaller models through effective distillation techniques. He is also interested in alignment, autonomous agents, and multimodal models.
Miscellany
In his free time, he enjoys playing tennis, reading about conspiracy theories, and collecting sneakers.