Towards Rationality in Language and Multimodal Agents: A Survey

📅 2024-06-01
📈 Citations: 6
Influential: 0
🤖 AI Summary
To address the limited rationality of large language models—manifested in knowledge gaps and inconsistent outputs—this paper systematically surveys state-of-the-art approaches for enhancing rationality in language and multimodal agents, proposing a reliable decision-making paradigm grounded in evidence-driven reasoning, logical consistency, and utility optimization. Methodologically, it introduces the first comprehensive evaluation framework for rational agents, integrating external tool invocation, symbolic reasoning engines, conformal risk control, and uncertainty quantification. A novel collaborative architecture is designed, unifying multimodal perception, multi-agent coordination, and programmatic execution. The work synthesizes over 100 key studies to identify root causes of rationality degradation and releases an open-source, continuously updated benchmark suite alongside a GitHub knowledge repository. These contributions provide both theoretical foundations and practical infrastructure for advancing rational AI research.

📝 Abstract
This work discusses how to build more rational language and multimodal agents and what criteria define rationality in intelligent systems. Rationality is the quality of being guided by reason, characterized by decision-making that aligns with evidence and logical principles. It plays a crucial role in reliable problem-solving by ensuring well-grounded and consistent solutions. Despite their progress, large language models (LLMs) often fall short of rationality due to their bounded knowledge space and inconsistent outputs. In response, recent efforts have shifted toward developing multimodal and multi-agent systems, as well as integrating modules such as external tools, program code, symbolic reasoners, utility functions, and conformal risk controls, rather than relying solely on a single LLM for decision-making. This paper surveys state-of-the-art advancements in language and multimodal agents, assesses their role in enhancing rationality, and outlines open challenges and future research directions. We maintain an open repository at https://github.com/bowen-upenn/Agent_Rationality.
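Of the modules the abstract lists, conformal risk control is perhaps the least familiar; the idea is to calibrate a threshold on a model's confidence scores so that the returned set of candidate answers covers the correct one with a guaranteed error rate. A minimal sketch of split-conformal prediction sets (the numbers and label names are hypothetical, not from the paper):

```python
import math

def conformal_quantile(cal_scores, alpha=0.1):
    """Split-conformal threshold: cal_scores are nonconformity values
    (e.g. 1 - model probability of the true answer) on held-out calibration data.
    Returns the score quantile that guarantees ~(1 - alpha) coverage."""
    n = len(cal_scores)
    k = math.ceil((n + 1) * (1 - alpha))  # finite-sample correction
    return sorted(cal_scores)[min(k, n) - 1]

def prediction_set(label_probs, qhat):
    """Keep every candidate answer whose nonconformity 1 - p(label)
    stays within the calibrated threshold qhat."""
    return {label for label, p in label_probs.items() if 1 - p <= qhat}

# Toy calibration scores and a toy answer distribution from an LLM.
cal = [0.05, 0.10, 0.20, 0.30, 0.45, 0.50, 0.60, 0.70, 0.80]
qhat = conformal_quantile(cal, alpha=0.2)        # -> 0.70 here
probs = {"A": 0.6, "B": 0.3, "C": 0.1}
print(prediction_set(probs, qhat))               # {'A', 'B'}
```

Rather than forcing a single (possibly wrong) answer, the agent abstains or defers when the prediction set contains more than one label, which is the sense in which such controls make LLM decisions more rational.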
Problem

Research questions and friction points this paper is trying to address.

Enhancing rationality in language and multimodal agents
Addressing limitations of large language models in decision-making
Surveying advancements and future directions in intelligent systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal multi-agent systems
Integration of external tools
Symbolic reasoners and risk controls