AI Summary
To address low task completion rates in multi-turn dialogue caused by context drift, this paper proposes a skeleton-guided multi-turn instruction generation framework. The method introduces the first taxonomy of nine human dialogue intent trajectories, explicitly encoding global dialogue structure into a structured generation skeleton; it further employs intent-constrained controllable data distillation to construct, from scratch, ConsistentChat, the first large-scale, cross-turn consistent multi-turn instruction dataset (15,000+ dialogues, 224,392 utterances). Experiments on the Light, TopDial, and MT-Eval benchmarks demonstrate that the framework improves dialogue consistency by 20-30% and achieves up to a 15% gain in task success rate. Key contributions include: (1) intent-driven dialogue structure modeling; (2) a skeleton-guided paradigm for controllable synthetic data generation; and (3) ConsistentChat, the first publicly available large-scale instruction dataset ensuring cross-turn consistency.
Abstract
Current instruction data synthesis methods primarily focus on single-turn instructions and often neglect cross-turn coherence, resulting in context drift and reduced task completion rates in extended conversations. To address this limitation, we propose Skeleton-Guided Multi-Turn Dialogue Generation, a framework that constrains multi-turn instruction synthesis by explicitly modeling human conversational intent. It operates in two stages: (1) Intent Modeling, which captures the global structure of human dialogues by assigning each conversation to one of nine well-defined intent trajectories, ensuring a coherent and goal-oriented information flow; and (2) Skeleton Generation, which constructs a structurally grounded sequence of user queries aligned with the modeled intent, thereby serving as a scaffold that constrains and guides the downstream instruction synthesis process. Based on this process, we construct ConsistentChat, a multi-turn instruction dataset with approximately 15,000 multi-turn conversations and 224,392 utterances. Experiments on the Light, Topdial, and MT-Eval benchmarks show that models fine-tuned on ConsistentChat achieve a 20-30% improvement in chat consistency and up to a 15% increase in task success rate, significantly outperforming models trained on existing single-turn and multi-turn instruction datasets.
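The two-stage process described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the trajectory labels are hypothetical placeholders (the summary does not name the nine trajectories), and both the intent classifier and the query generator are stubs standing in for the LLM-driven components.

```python
# Minimal sketch of skeleton-guided multi-turn dialogue generation.
# All names below are illustrative assumptions, not from the paper.

# Stage 1: Intent Modeling. The paper defines nine intent trajectories;
# their actual names are not given here, so generic labels are used.
INTENT_TRAJECTORIES = [f"trajectory_{i}" for i in range(1, 10)]

def model_intent(topic: str) -> str:
    """Assign a conversation topic to one of the nine trajectories.
    Stubbed with a deterministic hash; the real framework would use an
    LLM or classifier to capture the dialogue's global structure."""
    return INTENT_TRAJECTORIES[hash(topic) % len(INTENT_TRAJECTORIES)]

def generate_skeleton(topic: str, trajectory: str, n_turns: int = 5) -> list[str]:
    """Stage 2: Skeleton Generation. Produce a structurally grounded
    sequence of user-query slots aligned with the modeled trajectory.
    Real slots would be LLM-written queries; templates stand in here."""
    return [
        f"[turn {t}] {trajectory}: user query about {topic}"
        for t in range(1, n_turns + 1)
    ]

def synthesize_dialogue(topic: str) -> dict:
    """Tie the stages together: the skeleton scaffolds downstream
    instruction synthesis, so every turn stays anchored to one
    trajectory and cross-turn consistency is preserved."""
    trajectory = model_intent(topic)
    skeleton = generate_skeleton(topic, trajectory)
    return {"topic": topic, "trajectory": trajectory, "skeleton": skeleton}
```

In the actual framework each skeleton slot would then be filled with a concrete user query and an assistant response; the sketch only shows how the intent trajectory constrains the sequence of queries before any responses are generated.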