InternBootcamp Technical Report: Boosting LLM Reasoning with Verifiable Task Scaling

📅 2025-08-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current large language models (LLMs) generalize poorly on complex, real-world reasoning tasks: mainstream reinforcement learning (RL) approaches remain confined to narrow-domain benchmarks (e.g., mathematics, programming) and fail to capture the diversity of authentic scenarios. Method: We propose InternBootcamp, a modular, open-source framework built around a heterogeneous environment of 1,000+ diverse reasoning tasks. It introduces a verifiable task-expansion paradigm in which automated agent-based task generation, human curation, and automated correctness validation form a closed-loop pipeline, enabling task-scale growth of over two orders of magnitude. The framework integrates RL training, synthetic data generation, and a robust, verifiable evaluation infrastructure. Contribution/Results: A 32B-parameter model trained with InternBootcamp achieves state-of-the-art performance on the accompanying Bootcamp-EVAL benchmark and significantly outperforms prior work on established benchmarks such as GSM8K and HumanEval, providing a reproducible baseline for general-purpose reasoning model development.

📝 Abstract
Large language models (LLMs) have revolutionized artificial intelligence by enabling complex reasoning capabilities. While recent advancements in reinforcement learning (RL) have primarily focused on domain-specific reasoning tasks (e.g., mathematics or code generation), real-world reasoning scenarios often require models to handle diverse and complex environments that narrow-domain benchmarks cannot fully capture. To address this gap, we present InternBootcamp, an open-source framework comprising 1000+ domain-diverse task environments specifically designed for LLM reasoning research. Our codebase offers two key functionalities: (1) automated generation of unlimited training/testing cases with configurable difficulty levels, and (2) integrated verification modules for objective response evaluation. These features make InternBootcamp fundamental infrastructure for RL-based model optimization, synthetic data generation, and model evaluation. Although manually developing such a framework with enormous task coverage is extremely cumbersome, we accelerate the development procedure through an automated agent workflow supplemented by manual validation protocols, which enables the task scope to expand rapidly. With these bootcamps, we further establish Bootcamp-EVAL, an automatically generated benchmark for comprehensive performance assessment. Evaluation reveals that frontier models still underperform in many reasoning tasks, while training with InternBootcamp provides an effective way to significantly improve performance, leading to our 32B model that achieves state-of-the-art results on Bootcamp-EVAL and excels on other established benchmarks. In particular, we validate that consistent performance gains come from including more training tasks, namely task scaling, over two orders of magnitude, offering a promising route towards a capable reasoning generalist.
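The two functionalities described above (case generation with configurable difficulty, plus an integrated verifier) can be sketched as a minimal task-environment class. This is an illustrative sketch only; the class and method names (`HypotheticalBootcamp`, `case_generator`, `verify`) are assumptions for this example, not the actual InternBootcamp API, and the toy arithmetic task stands in for the framework's 1000+ real environments.

```python
import random

class HypotheticalBootcamp:
    """Illustrative sketch of a bootcamp-style task environment (names are
    assumptions, not the real InternBootcamp API): each task pairs a case
    generator with a verifier so any response can be scored objectively."""

    def __init__(self, difficulty: int = 1):
        self.difficulty = difficulty  # configurable difficulty level

    def case_generator(self) -> dict:
        # Generate one training/testing case; here difficulty controls
        # operand magnitude in a toy addition task.
        a = random.randint(1, 10 ** self.difficulty)
        b = random.randint(1, 10 ** self.difficulty)
        return {"prompt": f"What is {a} + {b}?", "answer": a + b}

    def verify(self, case: dict, response: str) -> bool:
        # Objective verification: take the last number in the response
        # and compare it against the ground-truth answer.
        tokens = [t for t in response.split() if t.lstrip("-").isdigit()]
        return bool(tokens) and int(tokens[-1]) == case["answer"]


camp = HypotheticalBootcamp(difficulty=2)
case = camp.case_generator()
assert camp.verify(case, f"The answer is {case['answer']}")
assert not camp.verify(case, "I am not sure")
```

Because cases are generated programmatically, such an environment can emit unlimited fresh training and evaluation instances, which is what makes it usable for RL reward computation, synthetic data generation, and benchmark construction alike.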
Problem

Research questions and friction points this paper is trying to address.

Addressing diverse real-world reasoning gaps in LLMs
Automating scalable task generation for LLM training
Enhancing LLM performance through verifiable task scaling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated diverse task generation with configurable difficulty
Integrated verification modules for objective evaluation
Automated agent workflow for rapid task expansion
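The verification modules listed above are what make RL training on these tasks possible: a verifier maps each sampled model response to an objective binary reward. The following is a hedged sketch of that idea, not the paper's implementation; the helper name `verifiable_reward` and the inline verifier are hypothetical.

```python
# Hedged sketch (hypothetical helper, not InternBootcamp's code): a task
# verifier naturally yields a 0/1 reward signal for RL-based optimization.
def verifiable_reward(verify, case, responses):
    """Score each sampled response with the task's verifier."""
    return [1.0 if verify(case, r) else 0.0 for r in responses]


# Toy verifier for illustration: exact string match against the answer.
case = {"answer": 42}
verify = lambda c, r: r.strip() == str(c["answer"])

rewards = verifiable_reward(verify, case, ["42", "41", "42"])
# rewards == [1.0, 0.0, 1.0]
```

This closes the loop the report describes: generated cases provide prompts, and the verifier's judgment, rather than a learned reward model, supplies the training signal.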