Publications
Published papers include 'Coeditor: Leveraging Repo-level Diffs for Code Auto-editing' (ICLR 2024, spotlight), 'TypeT5: Seq2seq Type Inference using Static Analysis' (ICLR 2023), 'STEADY: Simultaneous State Estimation and Dynamics Learning from Indirect Observations' (IROS 2022), 'OneVision: Centralized to Distributed Controller Synthesis with Delay Compensation' (IROS 2021), and 'LambdaNet: Probabilistic Type Inference using Graph Neural Networks' (ICLR 2020).
Research Experience
Led research efforts at Augment on code completion, retrieval context, and, most recently, the Next Edit feature. Research focuses on blending deep learning with static analysis to automate programming tasks such as type inference and code auto-editing.
Education
PhD in Computer Science, 2018-2023, University of Texas at Austin, advised by Işıl Dillig and Greg Durrett; BSc in Physics, 2013-2017, University of Science and Technology of China.
Background
Research interests include AI for Code, Software Engineering, and Programming Language Theory. Research scientist at Augment Computing, Inc., focused on developing AI-powered developer tools. Passionate about building tools that empower developers and about tackling complex AI challenges in software engineering.
Miscellany
Broadly interested in improving the software development process through AI technology.