Scholar
Xinglin Wang
Google Scholar ID: d3EPahgAAAAJ
Beijing Institute of Technology
Large Language Models
Reasoning
Evaluation
Citations & Impact
All-time
Citations
174
H-index
7
i10-index
6
Publications
20
Co-authors
8
Contact
No contact links provided.
Publications
17 items
Learning More from Less: Unlocking Internal Representations for Benchmark Compression
2026
Cited
0
Do Not Waste Your Rollouts: Recycling Search Experience for Efficient Test-Time Scaling
2026
Cited
0
Diagnosing and Mitigating System Bias in Self-Rewarding RL
2025
Cited
0
PatternKV: Flattening KV Representation Expands Quantization Headroom
2025
Cited
0
Do Retrieval Augmented Language Models Know When They Don't Know?
2025
Cited
0
Every Rollout Counts: Optimal Resource Allocation for Efficient Test-Time Scaling
arXiv.org · 2025
Cited
3
Mind the Quote: Enabling Quotation-Aware Dialogue in LLMs via Plug-and-Play Modules
2025
Cited
0
Silencer: From Discovery to Mitigation of Self-Bias in LLM-as-Benchmark-Generator
2025
Cited
0
Co-authors
8 total
Peiwen Yuan
Beijing Institute of Technology
Yiwei Li
Beijing Institute of Technology
Shaoxiong Feng
Beijing Institute of Technology; RedNote
Boyuan Pan
TechLead, RedNote Inc.
Chuyi Tan
Beijing Institute of Technology
Jiayi Shi (施家宜)
Beijing Institute of Technology
Bin Sun
School of Computer Science & Technology, Beijing Institute of Technology
Yueqi Zhang
Beijing Institute of Technology