Published multiple papers on topics including code vulnerability detection, safety of functionally correct patches, secure code generation, conformal safety shielding for imperfect-perception agents, compositional verification of autonomous systems, validating mechanistic interpretations, monitoring safety properties for autonomous driving systems, debugging and runtime analysis of neural networks, a survey of formal verification techniques for vision-based autonomous systems, attacks and defenses for large language models on coding tasks, and concept-based analysis of neural networks via vision-language models.
Research Experience
Assistant Professor in the Department of Computer Science at Colorado State University; teaching CS 580B1: Trustworthy Machine Learning (Fall 2024) and CS 454: Principles of Programming Languages (Spring 2025).
Education
PhD from the School of Computer Science at Georgia Tech; postdoctoral researcher at CMU CyLab, hosted by Dr. Corina Pasareanu.
Background
Research Interests: Analyzing the safety and security of software systems using formal methods, with a focus on evaluating and improving the trustworthiness (safety, security, and interpretability) of AI-enabled systems such as coding assistants and robots.
Miscellany
Open to interacting with and supervising students who are interested in or curious about topics related to his research.