LIMI: Less is More for Agency

📅 2025-09-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Despite strong reasoning and generative capabilities, current AI systems lack agency—the capacity to autonomously execute tasks, manipulate tools, and produce tangible outcomes. Method: Challenging the prevailing “more data yields stronger agents” paradigm, we propose a “less-is-more” approach to cultivating agency: training models on only 78 high-quality, human-annotated demonstrations of autonomous behavior—spanning problem identification, hypothesis formulation, and solution execution—within collaborative software development and scientific research workflows. Contribution/Results: We establish the *Agency Efficiency Principle*: demonstration quality, not dataset scale, is decisive for autonomy. Our method achieves 73.5% on a comprehensive agency benchmark, outperforming state-of-the-art models trained on ~10,000 samples by an absolute 53.7%, despite using merely 1/128 of the data volume. This advances AI from passive “thinking” toward active “acting,” marking a pivotal shift toward agentic intelligence.

📝 Abstract
We define Agency as the emergent capacity of AI systems to function as autonomous agents actively discovering problems, formulating hypotheses, and executing solutions through self-directed engagement with environments and tools. This fundamental capability marks the dawn of the Age of AI Agency, driven by a critical industry shift: the urgent need for AI systems that don't just think, but work. While current AI excels at reasoning and generating responses, industries demand autonomous agents that can execute tasks, operate tools, and drive real-world outcomes. As agentic intelligence becomes the defining characteristic separating cognitive systems from productive workers, efficiently cultivating machine autonomy becomes paramount. Current approaches assume that more data yields better agency, following traditional scaling laws from language modeling. We fundamentally challenge this paradigm. LIMI (Less Is More for Intelligent Agency) demonstrates that agency follows radically different development principles. Through strategic focus on collaborative software development and scientific research workflows, we show that sophisticated agentic intelligence can emerge from minimal but strategically curated demonstrations of autonomous behavior. Using only 78 carefully designed training samples, LIMI achieves 73.5% on comprehensive agency benchmarks, dramatically outperforming state-of-the-art models: Kimi-K2-Instruct (24.1%), DeepSeek-V3.1 (11.9%), Qwen3-235B-A22B-Instruct (27.5%), and GLM-4.5 (45.1%). Most strikingly, LIMI demonstrates a 53.7% improvement over models trained on 10,000 samples, achieving superior agentic intelligence with 128 times fewer samples. Our findings establish the Agency Efficiency Principle: machine autonomy emerges not from data abundance but from strategic curation of high-quality agentic demonstrations.
Problem

Research questions and friction points this paper addresses.

Challenges the paradigm that more data yields better AI agency
Demonstrates agency follows different principles than language modeling
Shows efficient cultivation of machine autonomy requires strategic curation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Achieves agency with minimal curated training samples
Demonstrates superior performance using only 78 demonstrations
Establishes that agency emerges from demonstration quality, not data quantity
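The headline efficiency claims can be sanity-checked with quick arithmetic using only the figures reported above (78 vs. ~10,000 training samples, a 73.5% benchmark score, and a 53.7% absolute gain); this is a minimal sketch, not code from the paper:

```python
# Sanity-check LIMI's reported data-efficiency figures.
limi_samples = 78          # curated demonstrations used by LIMI
baseline_samples = 10_000  # samples used by the comparison models
limi_score = 73.5          # LIMI's agency-benchmark score (%)
absolute_gain = 53.7       # reported absolute gain over the 10k-sample models

# Data ratio: ~128x fewer samples, matching the "1/128 the data" claim.
ratio = baseline_samples / limi_samples
print(f"data ratio: {ratio:.1f}x")  # → data ratio: 128.2x

# Implied score of the 10k-sample baseline given the absolute gain.
implied_baseline = limi_score - absolute_gain
print(f"implied baseline score: {implied_baseline:.1f}%")  # → 19.8%
```

The ratio 10,000 / 78 ≈ 128.2 is consistent with the "128 times fewer samples" figure quoted in the abstract.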
👥 Authors
Yang Xiao (PolyU, GAIR)
Mohan Jiang (Shanghai Jiao Tong University)
Jie Sun (USTC, SII, GAIR)
Keyu Li (SJTU, SII, GAIR)
Jifan Lin (SJTU, GAIR)
Yumin Zhuang (SJTU, GAIR)
Ji Zeng (SJTU, GAIR)
Shijie Xia (Shanghai Jiao Tong University)
Qishuo Hua (SJTU, SII, GAIR)
Xuefeng Li (SJTU, SII, GAIR)
Xiaojie Cai (SJTU, SII, GAIR)
Tongyu Wang (SII)
Yue Zhang (SII)
Liming Liu (SII)
Xia Wu (Central University of Finance and Economics)
Jinlong Hou (Shanghai Innovation Institute, SII)
Yuan Cheng (SII)
Wenjie Li (PolyU)
Xiang Wang (USTC)
Dequan Wang (Shanghai Jiao Tong University)
Pengfei Liu (SJTU, SII, GAIR)