News
October 2024: Two works on code refinement with LLMs and neuro-symbolic AI accepted at NeurIPS'24.
July 2024: Zhaoyu's work on deep learning for theorem proving accepted at COLM'24.
April 2024: Logan and Zhaoyu's work on autoformalizing Euclidean geometry accepted at ICML'24.
September 2023: Sissi and Zhaoyu's work on learning reliable logical rules accepted at NeurIPS'23; Qidong and Allen's work on fuzzing dynamic deep learning compilers accepted at APLAS'23.
April 2023: Allen's work on learning reliable neural specifications accepted at ICML'23 (oral).
Research Experience
Formerly an Assistant Professor at McGill University, now conducting research at the University of Toronto. Research focuses on combining statistical and logical methods to address the complexities and uncertainties of program reasoning, including automatically learning API specifications from large codebases and improving analysis accuracy through user feedback.
Education
Assistant Professor in the School of Computer Science at McGill University from 2021 to 2022; Ph.D. in Computer and Information Science from the University of Pennsylvania, advised by Mayur Naik; M.S. in Computer Science from Vanderbilt University; B.E. (with Honors) from Nankai University.
Background
Assistant Professor in the Department of Computer Science at the University of Toronto, faculty affiliate at the Vector Institute, affiliate member at Mila - Quebec AI Institute, and holder of a Canada CIFAR AI Chair. Research interests include improving software quality using AI-based techniques such as symbolic reasoning, constraint solving, statistical and probabilistic models, deep learning, and reinforcement learning.