Scholar
Yunqi Hong
Google Scholar ID: moSZIuEAAAAJ
University of California, Los Angeles
LLM post-training
Multimodal LLM
Homepage
Google Scholar
Citations & Impact (all-time)
Citations: 7
H-index: 2
i10-index: 0
Publications: 6
Co-authors: 0
Contact
Email
yunqihong@ucla.edu
GitHub
LinkedIn
Publications
8 items
Understanding Reward Hacking in Text-to-Image Reinforcement Learning
arXiv.org · 2026 · Cited: 1

When Distance Distracts: Representation Distance Bias in BT-Loss for Reward Models
2025 · Cited: 0

Adaptive Diagnostic Reasoning Framework for Pathology with Multimodal Large Language Models
2025 · Cited: 0

Uncertainty-Guided Selective Adaptation Enables Cross-Platform Predictive Fluorescence Microscopy
2025 · Cited: 0

QG-CoC: Question-Guided Chain-of-Captions for Large Multimodal Models
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing · 2025 · Cited: 0

IRIS: Intrinsic Reward Image Synthesis
2025 · Cited: 0

Unlabeled Data Improves Fine-Grained Image Zero-shot Classification with Multimodal LLMs
2025 · Cited: 0

Graph Neural Diffusion Networks for Semi-supervised Learning
arXiv.org · 2022 · Cited: 3
Resume (English only)
Academic Achievements
- Publication: Unlabeled Data Improves Fine-Grained Image Zero-shot Classification with Multimodal LLMs, NeurIPS 2025
- Preprint: IRIS: Intrinsic Reward Image Synthesis, arXiv preprint arXiv:2509.25562, 2025
Research Experience
- Currently conducting research in the Computer Science Department at UCLA
- Focused on LLM post-training, inference, and downstream applications
- Collaborating with Prof. Neil Y.C. Lin to develop LLM-driven methods for biomedical research
Education
- Degree: PhD student
- University: UCLA
- Advisor: Prof. Cho-Jui Hsieh
- Department: Computer Science
Background
- Research Interests: LLM post-training, inference, and downstream applications
- Current research focus: LLM reinforcement learning, reward modeling, and text-to-image generation
- Previous research areas: LLM automatic prompting, model interpretability, graph adversarial attacks, and recommender systems
- Collaborates with Prof. Neil Y.C. Lin on developing LLM-driven methods for biomedical research
Co-authors
0 total (list not available)