Xingchen Zeng

Google Scholar ID: NOlXqNEAAAAJ
Hong Kong University of Science and Technology (Guangzhou)
Research areas: Multimodal LLM · Visualization · High-dimensional Data
Citations & Impact (all-time)
  • Citations: 78
  • H-index: 3
  • i10-index: 3
  • Publications: 8
  • Co-authors: 0
Academic Achievements
  • Paper 'DaVinci: Reinforcing Visual-Structural Syntax in MLLMs for Generalized Scientific Diagram Parsing' submitted to ICLR 2026
  • Paper 'Chart-G1: Visually Grounded Chart Reasoning by Rewarding Multimodal Large Language Models' submitted to the PacificVis 2026 TVCG Track
  • Paper 'Advancing Multimodal Large Language Models in Chart Question Answering with Visualization-Referenced Instruction Tuning' published in IEEE Transactions on Visualization and Computer Graphics (Proc. IEEE VIS 2024)
  • Paper 'IntentTuner: An Interactive Framework for Integrating Human Intentions in Fine-tuning Text-to-Image Generative Models' published in Proceedings of the CHI Conference on Human Factors in Computing Systems 2024
  • One paper accepted at ICML 2025
  • One paper accepted at CHI 2025
Research Experience
  • Currently seeking research internship opportunities.
Education
  • PhD student in Data Science and Analytics at the Hong Kong University of Science and Technology (Guangzhou), supervised by Prof. Wei Zeng and Prof. Wei Wang
  • Bachelor's degree with honors from Central South University, supervised by Prof. Jiazhi Xia
Background
  • Research Interests: Empowering Large Language Models with human-like comprehension and creation of structured visual content, unlocking new capabilities in AI-driven design, reasoning, and communication.
Miscellany
  • Email: xingchen.zeng@outlook.com
  • Links: [Google Scholar] [GitHub] [Curriculum Vitae]