AI Summary
Large language models (LLMs) for vertical domains face bottlenecks: they require massive annotated data, high computational resources, and lengthy adaptation cycles. Method: This paper proposes a user co-constructed zero-shot domain enhancement framework. It dynamically acquires user feedback via conversational interaction, employs attribution analysis and a scoring mechanism to identify high-value knowledge snippets, and injects them into context prompts in a structured manner, enabling fine-tuning-free, parameter-free dynamic domain evolution. A novel user collaborative editing mechanism transforms distributed, real-time feedback into interpretable and verifiable knowledge units. Results: Evaluated on 15k real-world user interactions in a financial domain, the framework significantly improves content professionalism and achieves a 32.7% accuracy gain, while eliminating training overhead entirely.
Abstract
Vertical-domain large language models (LLMs) play a crucial role in specialized scenarios such as finance, healthcare, and law; however, their training often relies on large-scale annotated data and substantial computational resources, impeding rapid development and continuous iteration. To address these challenges, we introduce the Collaborative Editable Model (CoEM), which constructs a candidate knowledge pool from user-contributed domain snippets, leverages interactive user-model dialogues combined with user ratings and attribution analysis to pinpoint high-value knowledge fragments, and injects these fragments via in-context prompts for lightweight domain adaptation. With this high-value knowledge, the LLM can generate more accurate and domain-specific content. In a financial information scenario, we collect 15k feedback instances from about 120 users and validate CoEM with user ratings that assess the quality of generated insights, demonstrating significant improvements in domain-specific generation while avoiding the time and compute overhead of traditional fine-tuning workflows.
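The pipeline described above (a user-contributed knowledge pool, scored by ratings and attribution, with top fragments injected as in-context prompts) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the `Snippet` structure, the scoring formula combining mean rating with an attribution weight, and the prompt template are all hypothetical stand-ins for CoEM's actual mechanisms.

```python
from dataclasses import dataclass, field

@dataclass
class Snippet:
    """A user-contributed domain knowledge fragment (hypothetical structure)."""
    text: str
    ratings: list = field(default_factory=list)  # user ratings, e.g. 1-5
    attribution: float = 0.0  # illustrative attribution weight in [0, 1]

    def score(self) -> float:
        # Illustrative value score: mean user rating, boosted by how strongly
        # attribution analysis links this snippet to good model outputs.
        mean = sum(self.ratings) / len(self.ratings) if self.ratings else 0.0
        return mean * (1.0 + self.attribution)

def build_prompt(query: str, pool: list, k: int = 3) -> str:
    """Select the top-k high-value snippets and inject them as in-context knowledge."""
    top = sorted(pool, key=lambda s: s.score(), reverse=True)[:k]
    knowledge = "\n".join(f"- {s.text}" for s in top)
    return f"Domain knowledge:\n{knowledge}\n\nQuestion: {query}"

# Toy candidate pool for a financial scenario
pool = [
    Snippet("The P/E ratio compares share price to earnings per share.", [5, 4], 0.6),
    Snippet("Bonds pay fixed coupons on a set schedule.", [3], 0.1),
]
print(build_prompt("How do I compare two stocks' valuations?", pool, k=1))
```

No model parameters are touched at any point; adaptation happens entirely through which fragments survive scoring and enter the prompt, which is what makes the approach fine-tuning-free.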