DACIP-RC: Domain Adaptive Continual Instruction Pre-Training via Reading Comprehension on Business Conversations

šŸ“… 2025-10-09
šŸ“ˆ Citations: 0
✨ Influential: 0
šŸ¤– AI Summary
Smaller large language models (LLMs) exhibit weak zero-shot instruction-following across domains, and conventional fine-tuning often induces catastrophic forgetting that degrades generalization to unseen tasks. To address this, the authors propose DACIP-RC, a domain adaptation framework that replaces conventional fine-tuning with continual instruction pre-training. Rather than relying on next-token prediction alone, DACIP-RC automatically constructs diverse instruction-response pairs from conversation transcripts through a reading-comprehension-inspired mechanism, and is reportedly the first work to apply instruction pre-training to business conversational data. By combining reading-comprehension-based instruction generation with continual pre-training, DACIP-RC adapts smaller LLMs to new domains while mitigating catastrophic forgetting. Empirical evaluations on real-world business tasks, including meeting summarization, action-item generation, and call-purpose identification, show substantial zero-shot improvements, supporting the method's effectiveness and industrial applicability.

šŸ“ Abstract
The rapid advancements in Large Language Models (LLMs) have enabled their adoption in real-world industrial scenarios for various natural language processing tasks. However, the high inference cost of large-scale LLMs makes their deployment impractical, necessitating the use of smaller models. Despite their efficiency, smaller LLMs lack robust zero-shot instruction-following capabilities across diverse domains, limiting their adaptability to dynamic user requirements. Traditional fine-tuning approaches exacerbate this issue by inducing catastrophic forgetting, reducing the model's generalization ability for unseen tasks. In this paper, we propose Domain Adaptive Continual Instruction Pre-Training via Reading Comprehension (DACIP-RC), a continual pre-training technique that enhances smaller LLMs' domain adaptability for business conversational tasks. Unlike conventional pre-training approaches that rely on next-token prediction, DACIP-RC generates diverse task instructions and responses via reading comprehension on conversation transcripts, enabling better instruction generalization. Our empirical evaluations demonstrate that DACIP-RC significantly improves zero-shot generalization across a wide range of business conversational tasks, including meeting summarization, action item generation, and call purpose identification. To the best of our knowledge, this is the first work to apply instruction pre-training on business conversational data, providing insights into how industries can leverage proprietary datasets for domain adaptation.
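
To make the pair-generation step concrete, here is a minimal sketch of building reading-comprehension-style instruction-response pairs from a transcript. The `generate` helper is a stand-in for a teacher LLM call, and the task templates are illustrative inventions, not the paper's actual prompts or task taxonomy.

```python
# Minimal sketch of DACIP-RC-style pair generation (illustrative only).
# `generate` is a stand-in for a teacher LLM; the templates below are
# invented examples of reading-comprehension tasks over a transcript.

from dataclasses import dataclass


@dataclass
class InstructionPair:
    instruction: str
    response: str


# Hypothetical reading-comprehension task templates keyed by task name.
TASK_TEMPLATES = {
    "summarization": "Summarize the following conversation:\n{transcript}",
    "action_items": "List the action items agreed on in this conversation:\n{transcript}",
    "call_purpose": "In one sentence, state the purpose of this call:\n{transcript}",
}


def generate(prompt: str) -> str:
    """Stand-in for a teacher LLM; replace with a real model call."""
    return "<teacher-model response>"


def build_pairs(transcript: str) -> list[InstructionPair]:
    """Expand one transcript into several instruction-response pairs."""
    pairs = []
    for template in TASK_TEMPLATES.values():
        instruction = template.format(transcript=transcript)
        pairs.append(InstructionPair(instruction, generate(instruction)))
    return pairs
```

Each transcript thus yields several heterogeneous instructions, which is what drives the instruction generalization the abstract describes.
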
Problem

Research questions and friction points this paper is trying to address.

Smaller LLMs lack robust zero-shot instruction-following across domains
Traditional fine-tuning causes catastrophic forgetting in smaller models
Limited adaptability of smaller LLMs to dynamic business requirements
Innovation

Methods, ideas, or system contributions that make the work stand out.

Domain-adaptive continual pre-training for smaller LLMs (sketched below)
Generates diverse task instructions and responses via reading comprehension on transcripts
Enhances zero-shot generalization on business conversational tasks
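
A plausible way to consume the generated pairs during continual pre-training is the standard causal-LM format with the loss restricted to response tokens. The abstract does not specify the exact objective, so the masking convention below (the PyTorch-style `-100` ignore index) and the `tokenize` callable are assumptions for illustration.

```python
# Sketch of formatting an instruction-response pair for continual
# pre-training. Supervising only the response tokens is an assumed
# convention (common in instruction tuning), not a confirmed detail.

from typing import Callable

IGNORE_INDEX = -100  # default ignore_index of torch.nn.CrossEntropyLoss


def format_example(
    instruction: str,
    response: str,
    tokenize: Callable[[str], list[int]],
) -> dict[str, list[int]]:
    """Concatenate instruction and response; mask the instruction tokens."""
    prompt_ids = tokenize(instruction)
    response_ids = tokenize(response)
    return {
        "input_ids": prompt_ids + response_ids,
        # Loss is computed only where labels != IGNORE_INDEX.
        "labels": [IGNORE_INDEX] * len(prompt_ids) + response_ids,
    }


# Toy usage with a character-level stand-in tokenizer:
example = format_example(
    "Summarize the call.", " Customer asked about billing.",
    tokenize=lambda text: [ord(ch) for ch in text],
)
```
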
šŸ‘„ Authors
Elena Khasanova (Dialpad Inc.)
Harsh Saini (Dialpad Inc.)
Md Tahmid Rahman Laskar (Senior Applied Scientist, Dialpad)
Xue-Yong Fu (Dialpad Inc.)
Cheng Chen (Dialpad Inc.)
Shashi Bhushan TN (Dialpad Inc.)

Topics: Large Language Models Ā· Natural Language Processing Ā· Deep Learning Ā· Question Answering Ā· Summarization