EffGen: Enabling Small Language Models as Capable Autonomous Agents

📅 2026-01-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high cost, privacy concerns, and deployment challenges of existing agent systems that rely on large language model APIs by proposing an efficient agent framework tailored for small language models. The framework incorporates prompt optimization with 70–80% context compression, dependency-aware task decomposition, a five-factor complexity-aware routing mechanism, and a unified memory architecture integrating short-term, long-term, and vector-based memory. It also supports multi-protocol communication via MCP, A2A, and ACP. Evaluated across 13 benchmarks, the framework consistently outperforms LangChain, AutoGen, and Smolagents in success rate, execution speed, and memory efficiency. Prompt optimization yields up to an 11.2% performance gain for small models (e.g., 1.5B parameters), while complexity-aware routing benefits larger models; their combination ensures consistent performance improvements across all model scales.
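The dependency-aware task decomposition described above can be illustrated with a short sketch. The subtask graph, its node names, and the `parallel_waves` helper below are hypothetical illustrations of the general idea (subtasks with no unmet dependencies run in parallel; dependent subtasks run in later waves), not effGen's actual implementation.

```python
from graphlib import TopologicalSorter

# Hypothetical subtask graph: each subtask maps to the subtasks it depends on.
subtasks = {
    "fetch_prices": set(),
    "fetch_news": set(),
    "summarize_news": {"fetch_news"},
    "compare": {"fetch_prices", "summarize_news"},
}

def parallel_waves(graph):
    """Group subtasks into waves: tasks within a wave have no unmet
    dependencies and can run in parallel; waves execute sequentially."""
    ts = TopologicalSorter(graph)
    ts.prepare()
    waves = []
    while ts.is_active():
        ready = list(ts.get_ready())   # all subtasks whose deps are satisfied
        waves.append(sorted(ready))
        ts.done(*ready)
    return waves

print(parallel_waves(subtasks))
# → [['fetch_news', 'fetch_prices'], ['summarize_news'], ['compare']]
```

Here the two fetch subtasks share no dependencies, so a scheduler could dispatch them concurrently before the sequential summarize and compare steps.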

📝 Abstract
Most existing language model agentic systems today are built and optimized for large language models (e.g., GPT, Claude, Gemini) via API calls. While powerful, this approach faces several limitations including high token costs and privacy concerns for sensitive applications. We introduce effGen, an open-source agentic framework optimized for small language models (SLMs) that enables effective, efficient, and secure local deployment (pip install effgen). effGen makes four major contributions: (1) Enhanced tool-calling with prompt optimization that compresses contexts by 70-80% while preserving task semantics, (2) Intelligent task decomposition that breaks complex queries into parallel or sequential subtasks based on dependencies, (3) Complexity-based routing using five factors to make smart pre-execution decisions, and (4) Unified memory system combining short-term, long-term, and vector-based storage. Additionally, effGen unifies multiple agent protocols (MCP, A2A, ACP) for cross-protocol communication. Results on 13 benchmarks show effGen outperforms LangChain, AutoGen, and Smolagents with higher success rates, faster execution, and lower memory usage. Our results reveal that prompt optimization and complexity routing have complementary scaling behavior: optimization benefits SLMs more (11.2% gain at 1.5B vs 2.4% at 32B), while routing benefits large models more (3.6% at 1.5B vs 7.9% at 32B), providing consistent gains across all scales when combined. effGen (https://effgen.org/) is released under the MIT License, ensuring broad accessibility for research and commercial use. Our framework code is publicly available at https://github.com/ctrl-gaurav/effGen.
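The complexity-based routing contribution can be sketched as a pre-execution scorer. The paper states only that five factors feed the routing decision; the factor names, weights, and threshold below are assumptions chosen for illustration, not effGen's actual API or values.

```python
from dataclasses import dataclass

@dataclass
class ComplexityFactors:
    """Five hypothetical pre-execution signals, each normalized to [0, 1]."""
    query_length: float       # length of the user query
    tool_count: float         # number of tools likely required
    reasoning_depth: float    # estimated multi-step reasoning depth
    ambiguity: float          # how underspecified the query is
    dependency_breadth: float # expected number of interdependent subtasks

# Assumed weights for the sketch; a real router would tune these.
WEIGHTS = (0.15, 0.25, 0.30, 0.15, 0.15)

def route(f: ComplexityFactors, threshold: float = 0.5) -> str:
    """Return 'direct' for simple queries, 'decompose' for complex ones."""
    values = (f.query_length, f.tool_count, f.reasoning_depth,
              f.ambiguity, f.dependency_breadth)
    score = sum(w * v for w, v in zip(WEIGHTS, values))
    return "decompose" if score >= threshold else "direct"

print(route(ComplexityFactors(0.2, 0.1, 0.2, 0.1, 0.0)))  # → direct
print(route(ComplexityFactors(0.8, 0.9, 0.7, 0.4, 0.9)))  # → decompose
```

Deciding the route before execution is what lets a framework skip the decomposition and multi-call overhead for queries a small model can answer in one pass.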
Problem

Research questions and friction points this paper is trying to address.

small language models
autonomous agents
local deployment
privacy concerns
token costs
Innovation

Methods, ideas, or system contributions that make the work stand out.

small language models
prompt optimization
task decomposition
complexity-based routing
unified memory system