Applied researcher currently working on GitHub Copilot, focusing on contextualization: improving model performance in both static and agentic modes through smart context selection.
Broadly interested in language model capabilities in textual understanding, coding, and tool use, as well as mitigating LLM hallucinations.
Works toward models that ground themselves in external, human-editable data sources, such as structured knowledge bases and open-domain text, and that use tools and code to accomplish tasks precisely.
In code generation, has investigated approaches to improving program synthesis for low-resource programming languages, verifying LLM dialogues through code generation and execution, and completing complex data science tasks.
In NLP, has extensive experience in natural language inference (NLI), particularly as applied to question answering, summarization, and knowledge graph usage via lexical semantic modeling.