🤖 AI Summary
This work addresses the inefficient exploration and weak generalization of existing methods on open-ended, unbounded, and domain-specific scientific problems. To overcome these limitations, we propose a novel framework that integrates hierarchical evolutionary algorithms, reinforcement learning, and in-context learning. Our approach constructs a context-aware pool of diverse, high-quality candidate solutions and employs policy iteration for continuous refinement, enabling efficient exploration of complex solution spaces. Empirical results demonstrate state-of-the-art performance: on the circle packing task, our method achieves a record-breaking sum of radii of 2.63598308 using a 14B-parameter model; on the Adult and Bank Marketing datasets, it surpasses GPT-4o by an average of 5.95 F1 points, significantly advancing both solution quality and generalization.
📝 Abstract
Large language models (LLMs) with reasoning abilities have demonstrated growing promise for tackling complex scientific problems. Yet such tasks are inherently domain-specific, unbounded, and open-ended, demanding exploration across vast and flexible solution spaces. Existing approaches, whether purely learning-based or reliant on carefully designed workflows, often suffer from limited exploration efficiency and poor generalization. To overcome these challenges, we present HELIX -- a Hierarchical Evolutionary reinforcement Learning framework with In-context eXperiences. HELIX introduces two key novelties: (i) a diverse yet high-quality pool of candidate solutions that broadens exploration through in-context learning, and (ii) reinforcement learning for iterative policy refinement that progressively elevates solution quality. This synergy enables the discovery of more advanced solutions. On the circle packing task, HELIX achieves a state-of-the-art result with a sum of radii of 2.63598308 using only a 14B-parameter model. Across standard machine learning benchmarks, HELIX further surpasses GPT-4o equipped with a carefully engineered pipeline, delivering an average F1 improvement of 5.95 points on the Adult and Bank Marketing datasets.
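To make the circle packing benchmark concrete: a candidate solution is a set of circles that must lie inside a container without overlapping, and its score is the sum of their radii. The sketch below is a minimal, hypothetical scorer, assuming a unit-square container (a common formulation of this benchmark); the function name and the `eps` tolerance are illustrative choices, not part of the paper.

```python
import math

def packing_score(circles, eps=1e-9):
    """Score a candidate packing for the unit square [0,1] x [0,1].

    `circles` is a list of (x, y, r) triples. Returns the sum of radii
    if the packing is feasible, or None if any circle leaves the square
    or two circles overlap (touching is allowed).
    """
    for x, y, r in circles:
        # each circle must lie entirely inside the unit square
        if r <= 0 or x - r < -eps or x + r > 1 + eps \
                or y - r < -eps or y + r > 1 + eps:
            return None
    for i in range(len(circles)):
        for j in range(i + 1, len(circles)):
            xi, yi, ri = circles[i]
            xj, yj, rj = circles[j]
            # centers must be at least ri + rj apart (no overlap)
            if math.hypot(xi - xj, yi - yj) < ri + rj - eps:
                return None
    return sum(r for _, _, r in circles)
```

For instance, a 2x2 grid of four circles of radius 0.25 is feasible and scores 1.0, while a single circle of radius 0.6 centered in the square is rejected because it spills outside the container; a search method like the one summarized above would iterate over such candidates to push this score higher.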