Min-Hsuan Yeh
Google Scholar ID: GXSualcAAAAJ
University of Wisconsin-Madison
Natural Language Processing
Homepage
Google Scholar
Citations & Impact
All-time
Citations: 55
h-index: 5
i10-index: 2
Publications: 11
Co-authors: 17
Contact
Email: minhsuan.yeh@gmail.com
CV
Twitter
GitHub
LinkedIn
Publications
8 items
Simulating and Understanding Deceptive Behaviors in Long-Horizon Interactions (2025) · Cited: 0
Clean First, Align Later: Benchmarking Preference Data Cleaning for Reliable LLM Alignment (2025) · Cited: 0
Cognition-of-Thought Elicits Social-Aligned Reasoning in Large Language Models (2025) · Cited: 0
LUMINA: Detecting Hallucinations in RAG System with Context-Knowledge Signals (2025) · Cited: 0
MetaMind: Modeling Human Social Thoughts with Metacognitive Multi-Agent Systems (2025) · Cited: 0
How to Steer LLM Latents for Hallucination Detection? (2025) · Cited: 0
Can Your Uncertainty Scores Detect Hallucinated Entity? (2025) · Cited: 0
Challenges and Future Directions of Data-Centric AI Alignment (2024) · Cited: 1
Resume (English only)
Academic Achievements
NeurIPS 2025: 'MetaMind: Modeling Human Social Thoughts with Metacognitive Multi-Agent Systems' (Spotlight)
NeurIPS 2025 Datasets and Benchmarks Track: 'Clean First, Align Later: Benchmarking Preference Data Cleaning for Reliable LLM Alignment'
TMLR 2025: 'HalluEntity: Benchmarking and Understanding Entity-Level Hallucination Detection' (J2C Certification, Top 10%)
ICML 2025 Position Track: 'Position: Challenges and Future Directions of Data-Centric AI Alignment'
ICML 2025: 'Steer LLM Latents for Hallucination Detection'
EMNLP 2024: 'COCOLOFA: A Dataset of News Comments with Common Logical Fallacies Written by LLM-Assisted Crowds'
FAccT 2024: 'Analyzing the Relationship Between Difference and Ratio-Based Fairness Metrics'
EMNLP 2022: 'Multi-VQG: Generating Engaging Questions for Multiple Images'
EMNLP 2021: 'Lying Through One’s Teeth: A Study on Verbal Leakage Cues'
Preprints: 'LUMINA: Detecting Hallucinations in RAG System with Context–Knowledge Signals'
Preprints: 'Cognition-of-Thought Elicits Social-Aligned Reasoning in Large Language Models'
Preprints: 'Simulating and Understanding Deceptive Behaviors in Long-Horizon Interactions'
Conference Reviewer: COLING’25, ICLR’25, ACL’25, ACL’24, COLING’24
Co-authors
17 total
Sharon Li, University of Wisconsin-Madison
Sean Du, Nanyang Technological University | UW-Madison
Seongheon Park, University of Wisconsin-Madison
Leitian Tao, University of Wisconsin-Madison
Lun-Wei Ku, Research Fellow, Academia Sinica
Xuanming Zhang, Stanford University, University of Wisconsin-Madison
Haobo Wang, Zhejiang University
Philip S. Thomas, University of Massachusetts