Scholar
Kehai Chen
Google Scholar ID: _M4Am0AAAAAJ
Harbin Institute of Technology (Shenzhen)
LLM
Natural Language Processing
Agent
Multimodal Generation
Homepage
Google Scholar
Citations & Impact (all-time)
Citations: 2,015
H-index: 27
i10-index: 48
Publications: 20
Co-authors: 0
Contact
GitHub
Publications (50 items)
SAT: Balancing Reasoning Accuracy and Efficiency with Stepwise Adaptive Thinking (2026), cited 0
Agentic Tool Use in Large Language Models (2026), cited 0
Long-form RewardBench: Evaluating Reward Models for Long-form Generation (2026), cited 0
Mitigating Translationese Bias in Multilingual LLM-as-a-Judge via Disentangled Information Bottleneck (2026), cited 0
Toward Robust LLM-Based Judges: Taxonomic Bias Evaluation and Debiasing Optimization (2026), cited 0
Beyond Token-Level Policy Gradients for Complex Reasoning with Large Language Models (2026), cited 0
Dynamics Within Latent Chain-of-Thought: An Empirical Study of Causal Structure (2026), cited 0
Beyond Unimodal Shortcuts: MLLMs as Cross-Modal Reasoners for Grounded Named Entity Recognition (2026), cited 1
Resume (English only)
Academic Achievements
Published over 90 papers in top-tier NLP/ML/AI conferences and journals, including ACL, NeurIPS, ICLR, AAAI, IJCAI, TPAMI, and TASLP.
Multiple papers accepted at NeurIPS, EMNLP, TASLP, ACM MM in 2025, including:
“A Survey on Human Preference Learning for Large Language Models” (ACM Computing Surveys)
“Thinking in Character: Advancing Role-Playing Agents with Role-Aware Reasoning” (NeurIPS)
“MASTER: Enhancing Large Language Model via Multi-Agent Simulated Teaching” (NeurIPS)
“Exploring Translation Mechanism of Large Language Models” (NeurIPS)
“XIFBench: Evaluating Large Language Models on Multilingual Instruction Following” (NeurIPS)
“Constituency Parsing using LLMs” (TASLP)
“BIMCompNet: Multimodal Dataset for Geometric Deep Learning in Building Information Model” (ACM MM)
“Generator-Assistant Stepwise Rollback Framework for Large Language Model Agent” (EMNLP)
“ORPP: Self-Optimizing Role-playing Prompts to Enhance Language Model Capabilities” (EMNLP)
“Benchmarking LLMs for Translating Classical Chinese Poetry: Evaluating Adequacy, Fluency, and Elegance” (EMNLP)
Co-authors
Co-authors: 0 (list not available)