PALADIN: Self-Correcting Language Model Agents to Cure Tool-Failure Cases

📅 2025-09-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Tool-augmented language agents frequently suffer cascading failures and task interruptions in real-world settings due to tool malfunctions (e.g., timeouts, API errors), yet prevailing training paradigms optimize only successful execution traces and neglect robustness to failures. To address this, the paper proposes PALADIN, contributing: (1) the first large-scale failure trajectory dataset, comprising over 50,000 instances with fine-grained recovery annotations; (2) a two-stage recovery architecture that preserves base model capabilities via LoRA-based fine-tuning and integrates ToolScan (a systematic taxonomy for classifying tool failures) with a library of 55+ expert-crafted recovery strategies, enabling dynamic, similarity-based retrieval during execution; (3) strong cross-tool generalization, achieving 89.68% recovery accuracy on PaladinEval and ToolReflectEval, outperforming ToolBench by 57 percentage points and the original agent by 66 percentage points, while maintaining 95.2% recovery performance even on unseen APIs.

📝 Abstract
Tool-augmented language agents frequently fail in real-world deployment due to tool malfunctions (timeouts, API exceptions, or inconsistent outputs), triggering cascading reasoning errors and task abandonment. Existing agent training pipelines optimize only for success trajectories, failing to expose models to the tool failures that dominate real-world usage. We propose PALADIN, a generalizable framework for equipping language agents with robust failure recovery capabilities. PALADIN trains on 50,000+ recovery-annotated trajectories constructed via systematic failure injection and expert demonstrations on an enhanced ToolBench dataset. Training uses LoRA-based fine-tuning to retain base capabilities while injecting recovery competence. At inference, PALADIN detects execution-time errors and retrieves the most similar case from a curated bank of 55+ failure exemplars aligned with ToolScan's taxonomy, then executes the corresponding recovery action. This approach generalizes to novel failures beyond the training distribution, retaining 95.2% recovery performance on unseen tool APIs. Evaluation across PaladinEval and ToolReflectEval demonstrates consistent improvements in Recovery Rate (RR), Task Success Rate (TSR), Catastrophic Success Rate (CSR), and Efficiency Score (ES). PALADIN improves RR from 32.76% to 89.68% (+57 percentage points) over ToolBench and outperforms the strongest baseline CRITIC (76.34%) by +13.3 percentage points. Against vanilla agents, PALADIN achieves 89.86% RR (+66 percentage points over 23.75%). These results establish PALADIN as an effective method for building fault-tolerant agents capable of robust recovery in real-world tool environments.
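The abstract's "systematic failure injection" step can be illustrated with a minimal sketch: wrap a tool call so that, with some probability, it raises a realistic failure (timeout, API exception, inconsistent output) instead of returning normally, and log each attempt so the trajectory can later receive recovery annotations. All names here (`ToolFailure`, `inject_failures`, the failure-mode strings) are illustrative assumptions, not PALADIN's actual API.

```python
import random

FAILURE_MODES = ["timeout", "api_exception", "inconsistent_output"]

class ToolFailure(Exception):
    """Injected stand-in for a real tool malfunction."""
    def __init__(self, mode):
        super().__init__(f"injected tool failure: {mode}")
        self.mode = mode

def inject_failures(tool_fn, rate=0.3, rng=None):
    """Return a wrapped tool that fails with probability `rate`."""
    rng = rng or random.Random(0)  # fixed seed for reproducible trajectories
    def wrapped(*args, **kwargs):
        if rng.random() < rate:
            raise ToolFailure(rng.choice(FAILURE_MODES))
        return tool_fn(*args, **kwargs)
    return wrapped

# Collect (query, status, detail) triples; failed steps are the raw
# material that would later be paired with expert recovery annotations.
def search(query):
    return f"results for {query!r}"

flaky_search = inject_failures(search, rate=0.5)
trajectory = []
for q in ["weather", "flights", "hotels"]:
    try:
        trajectory.append((q, "ok", flaky_search(q)))
    except ToolFailure as e:
        trajectory.append((q, "fail", e.mode))
```

With the fixed seed, the same mix of successful and failed steps is produced on every run, which is convenient when building a reproducible training set.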
Problem

Research questions and friction points this paper is trying to address.

Addresses tool failure recovery in language agents
Enhances robustness against API exceptions and timeouts
Improves task success rates through systematic error correction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Trains agents with failure recovery using annotated trajectories
Detects errors and retrieves similar cases from failure exemplars
Uses LoRA fine-tuning to retain capabilities while adding recovery
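The retrieval step in the second bullet can be sketched as follows: on an execution-time error, score the error message against each entry in the failure-exemplar bank and apply the recovery action of the best match. This is an illustrative sketch, not PALADIN's implementation; a bag-of-words cosine similarity stands in for whatever embedding similarity the paper uses, and the exemplar entries are invented placeholders.

```python
from collections import Counter
import math

# Tiny stand-in for the curated bank of 55+ failure exemplars.
EXEMPLAR_BANK = [
    {"failure": "request timed out after 30s", "recovery": "retry with backoff"},
    {"failure": "HTTP 401 unauthorized api key", "recovery": "refresh credentials"},
    {"failure": "response schema mismatch unexpected field", "recovery": "re-parse with relaxed schema"},
]

def _vec(text):
    """Bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def _cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_recovery(error_message):
    """Return the recovery action of the most similar failure exemplar."""
    qv = _vec(error_message)
    best = max(EXEMPLAR_BANK, key=lambda ex: _cosine(qv, _vec(ex["failure"])))
    return best["recovery"]
```

For example, `retrieve_recovery("API call timed out")` matches the timeout exemplar and returns `"retry with backoff"`. The design point is that retrieval happens at inference time, so new exemplars can be added to the bank without retraining the model.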