HAMMER: Hamiltonian Curiosity Augmented Large Language Model Reinforcement

📅 2025-09-25
🤖 AI Summary
Existing curriculum reinforcement learning approaches often rely on hand-crafted difficulty annotations to order samples, which can trap training in suboptimal local minima: repeated exposure to easy samples early in training hampers policy exploration. Method: We propose a semantic-diversity-driven global curriculum learning framework that, for the first time, integrates the minimum semantic Hamiltonian path into RL-based dynamic sample sequencing. By modeling pairwise semantic similarity among training instances and optimizing the traversal path over the resulting semantic graph, the method explicitly enhances model curiosity and exploration breadth without requiring human-annotated difficulty labels. Contribution/Results: The framework achieves globally optimized curriculum organization, yielding consistent 3-4% average accuracy gains across diverse reasoning benchmarks. It also improves training stability and cross-task generalization, establishing a scalable, semantics-aware paradigm for curriculum learning.
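The ordering step described above can be sketched concisely. The paper's exact path solver is not given here, so the snippet below is only a minimal illustration: it assumes precomputed sample embeddings, uses cosine similarity as the edge weight of the semantic graph, and approximates a minimum-semantic Hamiltonian path with a greedy nearest-neighbor heuristic (the function name and these specific choices are assumptions, not the authors' implementation).

```python
import numpy as np

def semantic_curriculum_order(embeddings, start=0):
    """Greedy approximation of a minimum-semantic Hamiltonian path:
    at each step, move to the unvisited sample LEAST similar to the
    current one, so consecutive training samples are semantically
    far apart (maximizing diversity along the curriculum)."""
    X = np.asarray(embeddings, dtype=float)
    # Cosine similarity matrix: edge weights of the semantic graph.
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    sim = Xn @ Xn.T

    path = [start]
    remaining = set(range(len(X))) - {start}
    while remaining:
        cur = path[-1]
        # Next sample = minimum similarity to the current one.
        nxt = min(remaining, key=lambda j: sim[cur, j])
        path.append(nxt)
        remaining.remove(nxt)
    return path

# Toy example: two tight semantic clusters; the resulting order
# alternates between them rather than exhausting one cluster first.
emb = [[1.0, 0.0], [0.99, 0.1], [0.0, 1.0], [0.1, 0.99]]
order = semantic_curriculum_order(emb)  # e.g. [0, 2, 1, 3]
```

A greedy heuristic is a common stand-in here because the exact minimum Hamiltonian path is NP-hard; any approximate TSP-path solver could be substituted for larger datasets.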

📝 Abstract
Recent curriculum reinforcement learning for large language models (LLMs) typically relies on difficulty-based annotations for data filtering and ordering. However, such methods suffer from local optimization, where continual training on simple samples in the early steps can cause the policy to lose its capacity for exploration. We propose a novel schema, namely Hamiltonian curiosity augmented large language model reinforcement (HAMMER), that transfers diversity metrics, commonly used in dataset evaluation, into the dynamic reinforcement learning procedure: training samples are ordered along a minimum-semantic Hamiltonian path so that the initial training stage retains more exploration. From the theoretical perspective of generalization bounds, diversity-driven ordering facilitates stable convergence. Empirical evaluations indicate that HAMMER stimulates model "curiosity" and consistently achieves a 3% to 4% average accuracy gain across diverse inference benchmarks.
Problem

Research questions and friction points this paper is trying to address.

Addresses local optimization in curriculum reinforcement learning
Enhances exploration through diversity-driven sample ordering
Improves generalization bounds and model curiosity in training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Orders training samples via minimum-semantic Hamiltonian path
Transfers diversity metrics into reinforcement learning procedure
Augments LLM reinforcement with Hamiltonian curiosity for exploration
👥 Authors
Ming Yang (Fudan University; Tiansuan Lab, Ant Group Co., Ltd.)
Xiaofan Li (East China Normal University)
Zhiyuan Ma (Tiansuan Lab, Ant Group Co., Ltd.)
Dengliang Shi (Tiansuan Lab, Ant Group Co., Ltd.)
Jintao Du (Tiansuan Lab, Ant Group Co., Ltd.)
Yu Cheng (Tiansuan Lab, Ant Group Co., Ltd.)
Weiguo Zheng (Fudan University)