Resume (English only)
Academic Achievements
Published several papers, including 'Compositional Instruction Following with Language Models and Reinforcement Learning' in TMLR 2024, which was also accepted to RLC 2025; presented CaT-Bench at EMNLP 2024, a benchmark evaluating how language models handle step dependencies in plans; and introduced CAPE at ICRA 2024, a method enabling robots to take corrective actions using large language models. Also co-created the first publicly released open-source LLM, OpenGPT-2, and the OpenWebText dataset.
Research Experience
During his PhD, worked as a consultant and research scientist at Blackbird.AI, where he created Compass, an agentic research and analysis application for multimodal social media content. Before his PhD, contributed to ConceptNet while working at Luminoso in Boston.
Education
Completed a B.Sc. and an M.Sc. in Computer Science at Brown University, advised by Stefanie Tellex and George Konidaris in the Humans to Robots Lab; currently a PhD student at the University of Texas at Austin, advised by Ray Mooney.
Background
Research interests include grounded natural language processing, reinforcement learning, and robotics. Aims to develop AI systems capable of understanding and interacting with complex environments through language and action.