Jie-Jing Shao

Google Scholar ID: k1tEDpQAAAAJ
Nanjing University
Machine Learning · Neuro-Symbolic Learning · Reinforcement Learning
Citations & Impact (All-time)
  • Citations: 293
  • H-index: 10
  • i10-index: 10
  • Publications: 20
  • Co-authors: 6
Resume (English only)
Academic Achievements
  • 1. ChinaTravel: A Real-World Benchmark for Language Agents in Chinese Travel Planning, Preprint, 2024.
  • 2. Breaking the Self-Evaluation Barrier: Reinforced Neuro-Symbolic Planning with Large Language Models, IJCAI'25.
  • 3. Neuro-Symbolic Artificial Intelligence: Towards Improving the Reasoning Abilities of Large Language Models, IJCAI'25 Survey Track.
  • 4. Abductive Learning for Neuro-Symbolic Grounded Imitation, KDD'25, Best Paper Award at PAKDD 2024 Workshop.
  • 5. Offline Imitation Learning with Model-based Reverse Augmentation, KDD'24.
  • 6. Offline Imitation Learning without Auxiliary High-quality Behavior Data, under re-submission.
  • 7. Investigating the Limitation of CLIP Models: The Worst-performing Categories, Preprint, 2023.
  • 8. Open-Set Learning under Covariate Shift, Machine Learning, 2024.
  • 9. LOG: Active Model Adaptation for Label-Efficient OOD Generalization, NeurIPS'22.
  • 10. Active Model Adaptation Under Unknown Shift, KDD'22.
  • 11. Towards Robust Model Reuse in the Presence of Latent Domains, IJCAI'21.
Research Experience
  • Member of LAMDA Group, led by Professor Zhi-Hua Zhou. Research focuses on reinforcement learning and neuro-symbolic learning.
Education
  • Received a B.Sc. from Jilin University (Tang Aoqing Honors Program in Science) in 2019. Admitted to the M.Sc. program at Nanjing University the same year via exam-free recommendation. Began Ph.D. studies at Nanjing University in 2022. Supervisor: Professor Yu-Feng Li.
Background
  • Research Interests: Reinforcement Learning and Neuro-Symbolic Learning, with a focus on improving their generalization and data efficiency. Field: Computer Science & Technology.
Miscellany
  • Co-authored publications on related topics such as Semi-Supervised Learning and Abductive Learning.