RIMRULE: Improving Tool-Using Language Agents via MDL-Guided Rule Learning

📅 2025-12-31
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited tool-use capabilities of large language models (LLMs) when interacting with non-standard, poorly documented, or private APIs. To overcome this challenge, the authors propose a dynamic injection mechanism that automatically distills compact, interpretable bimodal rules—combining natural language and symbolic representations—from failed execution trajectories, guided by the Minimum Description Length (MDL) principle. These rules are efficiently retrieved and applied during inference without modifying the model’s weights. The approach significantly improves accuracy across multiple tool-use benchmarks, outperforming existing prompting strategies. Moreover, the distilled rules exhibit cross-architecture transferability, enabling reuse across different LLMs, and complement fine-tuning methods by offering a lightweight, modular enhancement to tool-augmented reasoning.
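The inference-time mechanism the summary describes, storing each rule in both natural-language and symbolic form, retrieving rules relevant to the planned tool call, and injecting them into the prompt, can be sketched as follows. This is an illustrative reconstruction, not the paper's actual implementation: the rule contents, the keyword-overlap retriever, and all function names here are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    nl: str               # natural-language statement of the rule
    symbolic: str         # structured symbolic form (format assumed here)
    keywords: frozenset   # index terms used for retrieval

# Hypothetical rules distilled from failed execution trajectories.
RULES = [
    Rule(
        nl="The 'date' argument of search_flights must be ISO-8601 (YYYY-MM-DD).",
        symbolic="search_flights(date) :- matches(date, r'\\d{4}-\\d{2}-\\d{2}')",
        keywords=frozenset({"search_flights", "date"}),
    ),
    Rule(
        nl="Call authenticate() before any booking_api call.",
        symbolic="booking_api(_) :- done(authenticate)",
        keywords=frozenset({"booking_api", "authenticate"}),
    ),
]

def retrieve(query_terms, rules, k=1):
    """Rank rules by keyword overlap with the planned tool call."""
    scored = sorted(rules, key=lambda r: -len(r.keywords & set(query_terms)))
    return [r for r in scored[:k] if r.keywords & set(query_terms)]

def inject(prompt, rules):
    """Prepend retrieved rules to the prompt; model weights are untouched."""
    if not rules:
        return prompt
    header = "\n".join(f"- {r.nl}" for r in rules)
    return f"Rules for this task:\n{header}\n\n{prompt}"

hits = retrieve({"search_flights", "date", "paris"}, RULES)
print(inject("Find a flight to Paris on 2026-03-01.", hits))
```

The natural-language half of each rule is what gets injected into the prompt, while the symbolic half would support structured matching; a production retriever would likely use embeddings rather than keyword overlap.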

📝 Abstract
Large language models (LLMs) often struggle to use tools reliably in domain-specific settings, where APIs may be idiosyncratic, under-documented, or tailored to private workflows. This highlights the need for effective adaptation to task-specific tools. We propose RIMRULE, a neuro-symbolic approach for LLM adaptation based on dynamic rule injection. Compact, interpretable rules are distilled from failure traces and injected into the prompt during inference to improve task performance. These rules are proposed by the LLM itself and consolidated using a Minimum Description Length (MDL) objective that favors generality and conciseness. Each rule is stored in both natural language and a structured symbolic form, supporting efficient retrieval at inference time. Experiments on tool-use benchmarks show that this approach improves accuracy on both seen and unseen tools without modifying LLM weights. It outperforms prompting-based adaptation methods and complements finetuning. Moreover, rules learned from one LLM can be reused to improve others, including long reasoning LLMs, highlighting the portability of symbolic knowledge across architectures.
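The MDL consolidation step in the abstract, keeping a candidate rule only when it shortens the total description of the observed failures, can be sketched as below. The cost model (character counts plus a fixed penalty per unexplained failure), the greedy search, and the toy coverage predicate are all assumptions for illustration; the paper's actual objective may differ.

```python
def description_length(rule_set, failures, covers, miss_cost=50):
    """Total cost = length of the rules + penalty per uncovered failure trace."""
    rule_cost = sum(len(r) for r in rule_set)
    uncovered = [f for f in failures if not any(covers(r, f) for r in rule_set)]
    return rule_cost + miss_cost * len(uncovered)

def consolidate(candidates, failures, covers):
    """Greedily keep a candidate rule only if it lowers total description length.

    Concise rules are tried first, so general, short rules that explain many
    failures crowd out narrow, redundant ones.
    """
    kept = []
    for rule in sorted(candidates, key=len):
        if (description_length(kept + [rule], failures, covers)
                < description_length(kept, failures, covers)):
            kept.append(rule)
    return kept

# Toy example: rules are keyword strings; a rule "covers" a failure trace
# if the keyword occurs in it.
covers = lambda rule, trace: rule in trace
failures = [
    "error: date format in search_flights",
    "error: date format in search_hotels",
    "error: missing auth before booking",
]
print(consolidate(["date format", "search_flights", "auth"], failures, covers))
```

In this toy run the general rule "date format" survives because it explains two failures at once, while the narrower "search_flights" rule is rejected as redundant, which is the generality-plus-conciseness trade-off the MDL objective encodes.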
Problem

Research questions and friction points this paper is trying to address.

tool use
large language models
domain-specific adaptation
API idiosyncrasy
reliable tool integration
Innovation

Methods, ideas, or system contributions that make the work stand out.

neuro-symbolic
dynamic rule injection
Minimum Description Length (MDL)
tool-use adaptation
symbolic knowledge portability