Yunhao Yang
Google Scholar ID: hEKrDSEAAAAJ
University of Texas at Austin
Research interests: Formal methods · Autonomy · Privacy
- Homepage
- Google Scholar
Citations & Impact (all-time)
- Citations: 140
- H-index: 7
- i10-index: 4
- Publications: 20
- Co-authors: 14
Contact
- CV
- GitHub
- LinkedIn
Publications
9 items
- Learning Actionable Manipulation Recovery via Counterfactual Failure Synthesis (2026) · Cited: 0
- RepV: Safety-Separable Latent Spaces for Scalable Neurosymbolic Plan Verification (2025) · Cited: 0
- AD-VF: LLM-Automatic Differentiation Enables Fine-Tuning-Free Robot Planning from Formal Methods Feedback (2025) · Cited: 0
- VLN-Zero: Rapid Exploration and Cache-Enabled Neurosymbolic Vision-Language Planning for Zero-Shot Transfer in Robot Navigation (2025) · Cited: 0
- Foundation Models for Logistics: Toward Certifiable, Conversational Planning Interfaces (2025) · Cited: 0
- Real-Time Privacy Preservation for Robot Visual Perception (2025) · Cited: 0
- Evaluating Human Trust in LLM-Based Planners: A Preliminary Study (2025) · Cited: 0
- Know Where You're Uncertain When Planning with Multimodal Foundation Models: A Formal Framework (arXiv.org, 2024) · Cited: 1
Resume (English only)
Academic Achievements
- Publications:
- On the Privacy Risks of Deploying Recurrent Neural Networks in Machine Learning
- Self-Enhancing Multi-filter Sequence-to-Sequence Model
- Uncertainty-Guided Enhancement on Driving Perception System via Foundation Models
- Know Where You're Uncertain When Planning with Multimodal Foundation Models: A Formal Framework
- Towards Neuro-Symbolic Video Understanding
- Fine-Tuning Language Models Using Formal Methods Feedback: A Use Case in Autonomous Systems
- Multimodal Pretrained Models for Verifiable Sequential Decision-Making: Planning, Grounding, and Perception
- A Survey on Reinforcement Learning for Combinatorial Optimization
- Training Heterogeneous Features in Sequence to Sequence Tasks: Latent Enhanced Multi-filter Seq2Seq Model
- Awards:
- Best Student Paper Award (Proceedings of Intelligent Systems Conference 2022)
- Patents:
- Automaton-Based Controller and Method with Generative Language Models for Task Execution
- Fellowships and Awards:
- Amazon Scholar (2024-2025)
- Special Departmental Honor in Computer Science (2022)
- Research Distinction (2021)
- Undergraduate Research Fellowship (2020)
Research Experience
- Ph.D. Research: Conducting doctoral research at The University of Texas at Austin, focusing on integrating large language models and formal methods
- Projects: Involved in multiple research projects, including uncertainty-guided enhancement on driving perception systems via foundation models
Education
- Degrees: Ph.D. (in progress), M.S., B.S.
- School: The University of Texas at Austin
- Advisor: Prof. Ufuk Topcu
- Time: Started Ph.D. program in Fall 2022
- Major: Computer Science (M.S., Ph.D.), Mathematics (B.A.)
Background
- Research Interests: Integrating large language models and formal methods for autonomy, AI safety, and privacy
- Field: Computer Science
- Brief Introduction: Currently a member of the Center for Autonomy at The University of Texas at Austin, supervised by Prof. Ufuk Topcu.
Miscellany
- Community Services:
- Journal and Conference Reviewer
- Volunteer for STEM Girl Day at UT Austin (2024, 2025)
- Mentor for REACT RUE summer program (2023)
- Volunteer for TACC Back@TACC (2022, 2024)
- Volunteer for LASA High School Visit (2023)
- Volunteer for FIRST Technical Challenge: Texas-Central GEMS League Tournament (2023)
Co-authors (14 total)
- Ufuk Topcu (The University of Texas at Austin)
- Sandeep P. Chinchali (Assistant Professor, UT Austin ECE)
- Cyrus Neary (The University of British Columbia)
- Neel P. Bhatt (Postdoctoral Fellow, The University of Texas at Austin)
- Minkyu Choi (UT Austin, Lockheed Martin)
- Sahil Shah (IIT Bombay)
- Chandrajit Bajaj (Computational Applied Mathematics Chair, Professor of Computer Science)