Developed neurosymbolic program synthesis algorithms including sketch learning (Bayou), neural admissible heuristic search (Near), imitation-projected descent (Propel, PIRL), and LLM-aided evolution and abstraction (LaSR)
Pioneered neurosymbolic program representations such as modular neural architectures (Houdini), compositions of neural predictors and traditional software (DSE), and differentiable relaxations of symbolic programs (Near)
Published recent work on LLM agents for theorem proving (Copra), search-based natural language deduction (SCSearch), and compositional world modeling (Cosmos)
Delivered tutorials on neurosymbolic programming at NeurIPS 2022 and POPL 2023
Co-authored an op-ed on the 2023 AI Safety Executive Order
Research Experience
Leads the Trishul lab, investigating problems at the interface of automated reasoning, machine learning, and programming languages
Has worked on program synthesis for many years: early work based on symbolic formal methods (e.g., Lambda2, ConSynth); recent work on neurosymbolic approaches
Extensive background in automated reasoning: PhD work on automata-theoretic formal verification (Nested Trees); current work on neurosymbolic formal and informal (visual/textual) reasoning
On leave at Google DeepMind's Science and Strategic Initiatives Unit in London since Fall 2024