A Call for Collaborative Intelligence: Why Human-Agent Systems Should Precede AI Autonomy

📅 2025-06-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current research pursuing fully autonomous AI agents faces fundamental limitations in reliability, transparency, and alignment with human needs. Method: This paper proposes the LLM-based Human-Agent Systems (LLM-HAS) paradigm, positioning LLMs as collaborative partners for humans rather than substitutes. It introduces a "collaborative intelligence" theoretical framework that shifts AI evaluation criteria from autonomy to human-AI co-performance, and formally specifies structural human-in-the-loop roles: guidance, clarification, and control. Integrating human factors engineering, explainable AI, and domain-specific workflow modeling, the approach is illustrated in high-stakes domains including healthcare, finance, and software development. Contribution/Results: LLM-HAS outperforms fully autonomous agents in task completion rate, error-detection latency, and user trust. The work delivers actionable design principles and implementation pathways for trustworthy AI governance and effective human-AI integration.
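The three human-in-the-loop roles named above can be made concrete with a minimal sketch. The class and function names below (`HumanPartner`, `CollaborativeAgent`, `run`) are hypothetical illustrations, not APIs from the paper: guidance is a standing instruction that shapes the plan, clarification is a question-answer channel, and control is veto power over every action before it executes.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class HumanPartner:
    # Hypothetical interface for the paper's three roles.
    guidance: str                  # guidance: standing instruction for the agent
    answer: Callable[[str], str]   # clarification: answers the agent's questions
    approve: Callable[[str], bool] # control: approves or vetoes proposed actions

@dataclass
class CollaborativeAgent:
    human: HumanPartner
    log: List[str] = field(default_factory=list)

    def run(self, task: str) -> str:
        # Guidance: the human's standing instruction shapes the plan.
        plan = f"{task} [guided by: {self.human.guidance}]"
        # Clarification: the agent asks before acting on ambiguity.
        detail = self.human.answer(f"Any constraints for '{task}'?")
        action = f"{plan}; constraint: {detail}"
        # Control: nothing executes without explicit human approval.
        if not self.human.approve(action):
            self.log.append(f"VETOED: {action}")
            return "aborted by human"
        self.log.append(f"EXECUTED: {action}")
        return "done"

if __name__ == "__main__":
    human = HumanPartner(
        guidance="prefer conservative changes",
        answer=lambda q: "stay within budget",
        approve=lambda a: "delete" not in a,  # toy control policy
    )
    agent = CollaborativeAgent(human)
    print(agent.run("rebalance portfolio"))  # approved path
    print(agent.run("delete all records"))   # vetoed via the control role
```

The point of the design is that autonomy is bounded structurally, not by prompt wording: the agent cannot complete `run` without passing through all three human channels.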

📝 Abstract
Recent improvements in large language models (LLMs) have led many researchers to focus on building fully autonomous AI agents. This position paper questions whether this is the right path forward, as these autonomous systems still have problems with reliability, transparency, and understanding the actual requirements of humans. We suggest a different approach: LLM-based Human-Agent Systems (LLM-HAS), where AI works with humans rather than replacing them. By keeping humans involved to provide guidance, answer questions, and maintain control, these systems can be more trustworthy and adaptable. Looking at examples from healthcare, finance, and software development, we show how human-AI teamwork can handle complex tasks better than AI working alone. We also discuss the challenges of building these collaborative systems and offer practical solutions. This paper argues that progress in AI should not be measured by how independent systems become, but by how well they can work with humans. The most promising future for AI lies not in systems that take over human roles, but in those that enhance human capabilities through meaningful partnership.
Problem

Research questions and friction points this paper is trying to address.

Questioning reliability of fully autonomous AI systems
Proposing human-AI collaboration for better adaptability
Enhancing human capabilities through AI partnership
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-based Human-Agent Systems (LLM-HAS)
Human-AI teamwork for complex tasks
Enhance human capabilities through partnership