🤖 AI Summary
To address the limited mathematical and code reasoning of medium-scale (32B) open-source language models, this work introduces AM-Thinking-v1—the first 32B dense-architecture model to reach parity with hundred-billion-parameter Mixture-of-Experts (MoE) models in mathematical and coding reasoning. Built on the Qwen2.5-32B base model, AM-Thinking-v1 uses a post-training pipeline that combines high-quality supervised fine-tuning (SFT) with reinforcement learning (RL), drawing on publicly available reasoning queries. It achieves state-of-the-art scores among open-source models of comparable scale: 85.3 on AIME 2024, 74.4 on AIME 2025, and 70.3 on LiveCodeBench. These results show that a carefully optimized 32B dense model can balance reasoning capability, deployment efficiency, and open collaboration without requiring MoE sparsity or very large parameter counts. The model, training recipes, and evaluation protocols are fully open-sourced.
📝 Abstract
We present AM-Thinking-v1, a 32B dense language model that advances the frontier of reasoning and embodies the collaborative spirit of open-source innovation. Outperforming DeepSeek-R1 and rivaling leading Mixture-of-Experts (MoE) models such as Qwen3-235B-A22B and Seed1.5-Thinking, AM-Thinking-v1 scores 85.3 on AIME 2024, 74.4 on AIME 2025, and 70.3 on LiveCodeBench, demonstrating state-of-the-art mathematical and coding capabilities among open-source models of similar scale. Built entirely from the open-source Qwen2.5-32B base model and publicly available queries, AM-Thinking-v1 leverages a meticulously crafted post-training pipeline, combining supervised fine-tuning and reinforcement learning, to deliver exceptional reasoning capabilities. This work demonstrates that the open-source community can achieve high performance at the 32B scale, a practical sweet spot for deployment and fine-tuning. By striking a balance between top-tier performance and real-world usability, we hope AM-Thinking-v1 inspires further collaborative efforts to harness mid-scale models, pushing reasoning boundaries while keeping accessibility at the core of innovation. We have open-sourced our model on [Hugging Face](https://huggingface.co/a-m-team/AM-Thinking-v1).