Browse publications on Google Scholar ↗
Resume (English only)
Academic Achievements
NeurIPS 2024: Iteratively Refined Behavior Regularization for Offline Reinforcement Learning
NeurIPS 2024: Unlock the Intermittent Control Ability of Model Free Reinforcement Learning
NeurIPS 2024: CleanDiffuser: An Easy-to-use Modularized Library for Diffusion Models in Decision Making
ICML 2024: Rethinking Decision Transformer via Hierarchical Reinforcement Learning
ICML 2024: Imagine Big from Small: Unlock the Cognitive Generalization of Deep Reinforcement Learning from Simple Scenarios
IJCAI 2024: ENOTO: Improving Offline-to-Online Reinforcement Learning with Q-Ensembles
ICLR 2024: Uni-RLHF: Universal Platform and Benchmark Suite for Reinforcement Learning with Diverse Human Feedback
NeurIPS 2023: Reining Generalization in Offline Reinforcement Learning via Representation Distinction
CIKM 2023: A Hierarchical Imitation Learning-based Decision Framework for Autonomous Driving
AAAI 2023: SplitNet: A Reinforcement Learning based Sequence Splitting Method for the MinMax Multiple Travelling Salesman Problem
CAAI AIR 2023: OSCAR: OOD State Conservative Offline Reinforcement Learning for Sequential Decision Making
IJCAI 2022: PAnDR: Fast Adaptation to New Environments from Offline Experiences via Decoupling Policy and Environment Representations
NeurIPS 2021: A Hierarchical Reinforcement Learning Based Optimization Framework for Large-Scale Dynamic Pickup and Delivery Problems
KDD 2021: A Multi-Graph Attributed Reinforcement Learning based Optimization Algorithm for Large-scale H
Research Experience
Worked in Professor Jianye Hao's research group, publishing over 20 papers at top AI conferences.
Background
Currently an associate professor at Shanxi University. Research interests include reinforcement learning, offline reinforcement learning, embodied AI, and applications of deep reinforcement learning.
Miscellany
A huge fan of basketball, snowboarding, and orienteering.