ExperienceWeaver: Optimizing Small-sample Experience Learning for LLM-based Clinical Text Improvement

📅 2026-01-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of clinical text refinement, which is hindered by the scarcity of high-quality data and the intricate constraints inherent in medical documentation, leading to suboptimal performance of existing large language models in few-shot settings. To overcome this limitation, the authors propose ExperienceWeaver, a novel framework that introduces an experience-based learning mechanism. By identifying error types and distilling multidimensional feedback into error-specific editing tips and high-level revision strategies, ExperienceWeaver enables the model to learn not just what to edit but how to edit through an agent-driven reasoning pipeline. This approach marks a paradigm shift from data retrieval to structured knowledge distillation. Extensive experiments across four clinical datasets demonstrate that ExperienceWeaver significantly outperforms state-of-the-art models such as Gemini-3 Pro, confirming its effectiveness and robustness under few-shot conditions.

📝 Abstract
Clinical text improvement is vital for healthcare efficiency but remains difficult due to limited high-quality data and the complex constraints of medical documentation. While Large Language Models (LLMs) show promise, current approaches struggle in small-sample settings: supervised fine-tuning is data-intensive and costly, while retrieval-augmented generation often provides superficial corrections without capturing the reasoning behind revisions. To address these limitations, we propose ExperienceWeaver, a hierarchical framework that shifts the focus from data retrieval to experience learning. Instead of simply recalling past examples, ExperienceWeaver distills noisy, multi-dimensional feedback into structured, actionable knowledge: error-specific Tips and high-level Strategies. By injecting this distilled experience into an agentic pipeline, the model learns "how to revise" rather than just "what to revise". Extensive evaluations across four clinical datasets demonstrate that ExperienceWeaver consistently improves performance, surpassing state-of-the-art models such as Gemini-3 Pro in small-sample settings.
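The Tips/Strategies distillation described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function and class names (`Tip`, `Strategy`, `distill_experience`, `build_revision_prompt`) are hypothetical, and where the paper uses an LLM agent to distill feedback, this sketch simply groups feedback by error type.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Tip:
    """Error-specific editing tip distilled from revision feedback."""
    error_type: str
    advice: str

@dataclass
class Strategy:
    """High-level revision strategy that spans many error types."""
    name: str
    description: str

def distill_experience(feedback):
    """Turn raw (error_type, comment) feedback pairs into Tips and Strategies.

    The first comment seen for each error type becomes its Tip; a fixed
    Strategy stands in for the paper's LLM-driven strategy induction.
    """
    by_type = defaultdict(list)
    for error_type, comment in feedback:
        by_type[error_type].append(comment)
    tips = [Tip(t, comments[0]) for t, comments in by_type.items()]
    strategies = [Strategy("consistency-first",
                           "Resolve terminology conflicts before style edits.")]
    return tips, strategies

def build_revision_prompt(text, tips, strategies):
    """Inject the distilled experience into a revision prompt for the LLM."""
    lines = ["Revise the clinical note below.", "", "Strategies:"]
    lines += [f"- {s.name}: {s.description}" for s in strategies]
    lines.append("Tips:")
    lines += [f"- [{t.error_type}] {t.advice}" for t in tips]
    lines += ["", "Note:", text]
    return "\n".join(lines)
```

A usage sketch: distilling three feedback items yields one Tip per distinct error type, which the prompt builder then lists ahead of the note so the model sees "how to revise" guidance, not just exemplars.

```python
feedback = [("abbreviation", "Expand nonstandard abbreviations."),
            ("dosage", "State dose units explicitly."),
            ("abbreviation", "Avoid 'qd'; write 'once daily'.")]
tips, strategies = distill_experience(feedback)
prompt = build_revision_prompt("Pt takes 5 aspirin qd.", tips, strategies)
```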
Problem

Research questions and friction points this paper is trying to address.

clinical text improvement
small-sample learning
large language models
medical documentation
experience learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Experience Learning
Small-sample Learning
Clinical Text Improvement
Hierarchical Knowledge Distillation
LLM-based Revision