🤖 AI Summary
Existing open-source reasoning datasets predominantly target mathematics and coding tasks, leaving a critical gap in resources for training general-purpose logical reasoning capabilities, largely because it is difficult to construct diverse, verifiable reinforcement learning (RL) data with controllable difficulty. Method: We propose the first controllable data synthesis framework designed specifically for logical reasoning, covering 35 formal logic task categories. The framework enables rule-based automatic verification and precise control over task difficulty and scale. It builds a synthesis pipeline grounded in formal logic modeling and combines Proximal Policy Optimization (PPO)-based RL with a multi-task hybrid training strategy. Contribution/Results: On the BBEH benchmark, our model outperforms DeepSeek-R1-Distill-Qwen-32B by 6 points. Hybrid training significantly enhances generalization across mathematical reasoning, code generation, and cross-domain logical inference. This work fills two gaps at once: the scarcity of open-source logical reasoning data and the absence of principled, controllable training paradigms for general logical reasoning.
📝 Abstract
Recent advances such as OpenAI-o1 and DeepSeek R1 have demonstrated the potential of Reinforcement Learning (RL) to enhance reasoning abilities in Large Language Models (LLMs). While open-source replication efforts have primarily focused on mathematical and coding domains, methods and resources for developing general reasoning capabilities remain underexplored. This gap is partly due to the challenge of collecting diverse and verifiable reasoning data suitable for RL. We hypothesize that logical reasoning is critical for developing general reasoning capabilities, as logic forms a fundamental building block of reasoning. In this work, we present SynLogic, a data synthesis framework and dataset that generates diverse logical reasoning data at scale, encompassing 35 diverse logical reasoning tasks. The SynLogic approach enables controlled synthesis of data with adjustable difficulty and quantity. Importantly, all examples can be verified by simple rules, making them ideally suited for RL with verifiable rewards. In our experiments, we validate the effectiveness of RL training on the SynLogic dataset with 7B and 32B models. SynLogic leads to state-of-the-art logical reasoning performance among open-source datasets, surpassing DeepSeek-R1-Distill-Qwen-32B by 6 points on BBEH. Furthermore, mixing SynLogic data with mathematical and coding tasks improves the training efficiency of these domains and significantly enhances reasoning generalization. Notably, our mixed training model outperforms DeepSeek-R1-Zero-Qwen-32B across multiple benchmarks. These findings position SynLogic as a valuable resource for advancing the broader reasoning capabilities of LLMs. We open-source both the data synthesis pipeline and the SynLogic dataset at https://github.com/MiniMax-AI/SynLogic.
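To make the key property concrete, here is a minimal sketch of what "controllable difficulty plus rule-based verifiable rewards" means in practice. The function names and the toy modular-arithmetic task below are illustrative assumptions, not code from the SynLogic repository: the point is only that each synthesized instance ships with an exact checker, so an RL reward can be computed by a simple rule rather than a learned reward model.

```python
import random

def synthesize_instance(difficulty: int, seed: int):
    """Toy logic task (hypothetical, for illustration): find an integer x
    with (x * a) % m == r. `difficulty` scales the search space size m,
    giving controllable hardness; `seed` makes generation reproducible."""
    rng = random.Random(seed)
    m = 10 ** difficulty          # larger difficulty -> larger search space
    a = rng.randrange(1, m)
    x = rng.randrange(m)          # a known-solvable instance by construction
    r = (x * a) % m
    prompt = f"Find an integer x in [0, {m}) with (x * {a}) % {m} == {r}."

    def verify(answer: int) -> float:
        # Rule-based verifier: binary reward, no reward model needed.
        return 1.0 if 0 <= answer < m and (answer * a) % m == r else 0.0

    return prompt, verify

# Usage: generate one instance and score candidate answers.
prompt, verify = synthesize_instance(difficulty=2, seed=0)
```

Because the generator constructs each instance from a known solution, every example is guaranteed solvable, and the verifier's binary output can be plugged directly into a PPO reward signal.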