An Empirical Study of Agent Developer Practices in AI Agent Frameworks

📅 2025-12-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Practitioners face significant challenges in selecting appropriate AI agent frameworks—over 80% of developers struggle to match frameworks to their requirements—and frequently encounter redundant implementation efforts. Method: This study presents the first empirical investigation of LLM-based agent frameworks, analyzing 11,910 developer community discussions across five dimensions: development efficiency, functional abstraction, learning cost, performance optimization, and maintainability. We conduct a multi-faceted qualitative and quantitative comparative analysis of ten mainstream frameworks. Contribution/Results: Our findings reveal substantial disparities among frameworks in satisfying developer needs, identify critical design bottlenecks—including abstraction gaps and steep learning curves—and propose empirically grounded improvement directions. The study delivers systematic, data-driven insights and methodological guidance to inform both framework evolution and developer decision-making, thereby advancing evidence-based practice in AI agent system development.

📝 Abstract
The rise of large language models (LLMs) has sparked a surge of interest in agents, leading to the rapid growth of agent frameworks. Agent frameworks are software toolkits and libraries that provide standardized components, abstractions, and orchestration mechanisms to simplify agent development. Despite the widespread use of agent frameworks, their practical applications and how they influence the agent development process remain underexplored. Different agent frameworks encounter similar problems during use, indicating that these recurring issues deserve greater attention and call for further improvements in agent framework design. Meanwhile, as the number of agent frameworks continues to grow and evolve, more than 80% of developers report difficulties in identifying the frameworks that best meet their specific development requirements. In this paper, we conduct the first empirical study of LLM-based agent frameworks, exploring the real-world experiences of developers in building AI agents. To compare how well agent frameworks meet developer needs, we collect developer discussions for ten mainstream agent frameworks, resulting in a total of 11,910 discussions. Finally, by analyzing these discussions, we compare the frameworks across five dimensions: development efficiency, functional abstraction, learning cost, performance optimization, and maintainability, which refers to how easily developers can update and extend both the framework itself and the agents built upon it over time. Our comparative analysis reveals significant differences among frameworks in how well they meet the needs of agent developers. Overall, we provide a set of findings and implications for the LLM-driven AI agent framework ecosystem, offering insights both for the design of future LLM-based agent frameworks and for agent developers.
Problem

Research questions and friction points this paper is trying to address.

Investigates recurring issues and design improvements in AI agent frameworks.
Explores difficulties developers face in selecting suitable frameworks for specific needs.
Compares frameworks across key dimensions like efficiency, abstraction, and maintainability.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Conducted the first empirical study of LLM-based agent frameworks
Analyzed 11,910 developer discussions across ten mainstream frameworks
Compared frameworks across five key dimensions: development efficiency, functional abstraction, learning cost, performance optimization, and maintainability