🤖 AI Summary
Existing curriculum reinforcement learning approaches often rely on hand-crafted difficulty annotations for sample ordering, leading to suboptimal local minima—repeated exposure to easy samples early in training hampers policy exploration. Method: We propose a semantic-diversity-driven global curriculum learning framework that, for the first time, integrates the Minimum Semantic Hamiltonian Path into RL-based dynamic sample sequencing. By modeling pairwise semantic similarity among training instances and optimizing the traversal path over the semantic graph, our method explicitly enhances model curiosity and exploration breadth—without requiring human-annotated difficulty labels. Contribution/Results: The framework achieves globally optimal curriculum organization, yielding consistent +3–4% average accuracy gains across diverse reasoning benchmarks. It significantly improves training stability and cross-task generalization, establishing a scalable, semantics-aware paradigm for curriculum learning.
📝 Abstract
Recent curriculum reinforcement learning for large language models (LLMs) typically rely on difficulty-based annotations for data filtering and ordering. However, such methods suffer from local optimization, where continual training on simple samples in the early steps can cause the policy to lose its exploration. We propose a novel schema, namely Hamiltonian curiosity augmented large language model reinforcement (HAMMER), that transfers diversity metrics, commonly used in dataset evaluation, into the dynamic reinforcement learning procedure, where training samples are ordered via a minimum-semantic Hamiltonian path making the initial training retrain more exploration. From a theoretical perspective of generalization bounds, diversity-driven ordering facilitates stable convergence. Empirical evaluations indicate that HAMMER stimulates model "curiosity" and consistently achieves a 3% to 4% average accuracy gain across diverse inference benchmark.