International Conference on Learning Representations · 2022
Citations: 215
Resume
Academic Achievements
- Improving Instruct Models for Free: A Study on Partial Adaptation. EMNLP, 2025.
- An Alternative to FLOPS Regularization to Effectively Productionize SPLADE-Doc. SIGIR, 2025.
- Unsupervised Contrast-Consistent Ranking with Language Models. EACL, 2024.
- Overcoming Catastrophic Forgetting in Massively Multilingual Continual Learning. ACL Findings, 2023.
- Dataless Knowledge Fusion by Merging Weights of Language Models. ICLR, 2023.
- Attending to Entities for Better Text Understanding. AAAI, 2020.
- The UTexas System for TAC 2019 SM-KBP Task 3: Hypothesis Detection with Graph Convolutional Networks. TAC, 2019.
- Implicit Argument Prediction as Reading Comprehension. AAAI, 2019.
- The UTexas System for TAC SM-KBP Task 3: Probabilistic Generation of Coherent Hypotheses. TAC, 2018.
- Implicit Argument Prediction with Event Knowledge. NAACL, 2018.
- Representing Meaning with a Combination of Logical and Distributional Models. Computational Linguistics, 2016.
Research Experience
Prior to joining Bloomberg, he conducted research on natural language understanding and computational semantics at the University of Texas at Austin.
Education
Ph.D. in Computer Science, University of Texas at Austin (UT Austin); advisor: Dr. Katrin Erk; dates not specified.
Bachelor's degree in Automation and Economics, Tsinghua University.
Background
Research Interests: natural language processing and understanding, computational semantics.
Professional Field: Computer Science.
Brief Introduction: Pengxiang Cheng is a research scientist and team lead in Bloomberg's AI group, where he manages the Core NLP team. His work focuses on developing libraries and frameworks for NLP and LLM applications and on training foundation models with financial domain knowledge.