Agent-based Automated Claim Matching with Instruction-following LLMs

📅 2025-10-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high human effort and poor generalizability of manual prompt engineering in automated claim matching, this paper proposes a two-step agent-based method. First, an instruction-following large language model (LLM) automatically generates the task prompt; second, the generated prompt guides an LLM to perform claim matching as a binary classification task. A key empirical finding is that medium- and small-scale LLMs perform on par with larger models at prompt generation, enabling an efficient paradigm in which a different, smaller model handles each step of the pipeline. Experiments show that LLM-generated prompts can outperform state-of-the-art methods relying on handcrafted prompts while reducing computational cost, and the prompt generation process itself offers insight into how LLMs understand the claim matching task.
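The two-step pipeline above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `llm(text) -> str` callable interface, the instruction wording, and the yes/no answer parsing are all assumptions made here for clarity.

```python
def generate_prompt(prompt_llm, task_description):
    """Step 1: an instruction-following LLM writes the task prompt.
    `prompt_llm` is any callable mapping a text instruction to a text reply
    (a stand-in for a real model call)."""
    instruction = (
        "Write a concise instruction telling a model how to decide "
        f"whether two claims match. Task: {task_description}"
    )
    return prompt_llm(instruction)

def match_claims(classifier_llm, prompt, claim_a, claim_b):
    """Step 2: a (possibly different, smaller) LLM performs claim matching
    as binary classification, guided by the generated prompt."""
    query = (
        f"{prompt}\nClaim 1: {claim_a}\nClaim 2: {claim_b}\n"
        "Answer yes or no."
    )
    answer = classifier_llm(query)
    return answer.strip().lower().startswith("yes")

# Usage with stub callables standing in for real LLM calls:
prompt_llm = lambda text: "Decide if the two claims state the same fact."
classifier_llm = lambda text: "yes" if "same fact" in text else "no"

generated = generate_prompt(prompt_llm, "claim matching")
is_match = match_claims(classifier_llm, generated, "The sky is blue.",
                        "The sky appears blue.")
```

The point of the split is that `prompt_llm` and `classifier_llm` need not be the same model, which is what lets a smaller model take over either step.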

📝 Abstract
We present a novel agent-based approach for the automated claim matching task with instruction-following LLMs. We propose a two-step pipeline that first generates prompts with LLMs, to then perform claim matching as a binary classification task with LLMs. We demonstrate that LLM-generated prompts can outperform SOTA with human-generated prompts, and that smaller LLMs can do as well as larger ones in the generation process, saving computational resources. We also demonstrate the effectiveness of using different LLMs for each step of the pipeline, i.e., using one LLM for prompt generation and another for claim matching. Our investigation into the prompt generation process in turn reveals insights into the LLMs' understanding of claim matching.
Problem

Research questions and friction points this paper is trying to address.

Automating claim matching using agent-based LLM approaches
Optimizing prompt generation to outperform human-crafted prompts
Exploring computational efficiency with smaller versus larger language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Agent-based approach using instruction-following LLMs
Two-step pipeline with LLM-generated prompts
Smaller LLMs match larger models' performance