Huawen Feng

Google Scholar ID: WsTNqM8AAAAJ
South China University of Technology, Alibaba Tongyi Lab, Microsoft Research Asia, Tencent Hunyuan X
NLP · Large Language Models · Post Training · Reinforcement Learning · Preference Optimization
Citations & Impact (All-time)
  • Citations: 102
  • H-index: 6
  • i10-index: 4
  • Publications: 13
  • Co-authors: 0
Resume
Academic Achievements
  • Hunyuan-TurboS: Advancing Large Language Models through Mamba-Transformer Synergy and Adaptive Chain-of-Thought, Technical Report.
  • WarriorCoder: Learning from Expert Battles to Augment Code Large Language Models, ACL 2025.
  • Training Large Language Models for Retrieval-Augmented Question Answering through Backtracking Correction, ICLR 2025.
  • Improving Factual Consistency of News Summarization by Contrastive Preference Optimization, EMNLP 2024 (Findings).
  • Well Begun Is Half Done: An Implicitly Augmented Generative Framework with Distribution Modification for Hierarchical Text Classification, COLING 2024.
  • Perturbation-Based Self-Supervised Attention for Attention Bias in Text Classification, IEEE/ACM Transactions on Audio, Speech, and Language Processing.
  • Joint Constrained Learning with Boundary-adjusting for Emotion-Cause Pair Extraction, ACL 2023.
  • It’s Better to Teach Fishing than Giving a Fish: An Auto-Augmented Structure-aware Generative Model for Metaphor Detection, EMNLP 2022 (Findings).
Education
  • Ph.D.: South China University of Technology, School of Computer Science and Engineering. Advisor: Prof. Qianli Ma.
  • Bachelor's Degree: South China University of Technology. Admitted to graduate study via postgraduate recommendation, then transferred to the direct Ph.D. program in the first year of graduate studies.
Background
  • Research Interests: LLM Alignment, Post Training, and Preference Optimization. Currently a third-year Ph.D. student in the School of Computer Science and Engineering at South China University of Technology (SCUT).
Miscellany
  • Contact Information: Email: 541119578@qq.com