ESAinsTOD: A Unified End-to-End Schema-Aware Instruction-Tuning Framework for Task-Oriented Dialog Modeling

📅 2026-03-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes ESAinsTOD, a unified framework for end-to-end task-oriented dialogue systems that addresses the limited generalization of existing approaches, which are typically tailored to specific datasets. By integrating instruction tuning with a structured alignment mechanism, ESAinsTOD equips large language models with both instruction-aware and schema-aware capabilities, enabling conversation-level modeling across diverse task flows. The method combines full-parameter fine-tuning with joint instruction and schema alignment, significantly enhancing model generalization in cross-dataset, low-resource, and zero-shot settings while improving robustness against noise and cascading errors. Experimental results demonstrate that ESAinsTOD consistently outperforms state-of-the-art methods on benchmark datasets including CamRest676, In-Car, and MultiWOZ.

📝 Abstract
Existing end-to-end modeling methods for modular task-oriented dialog systems are typically tailored to specific datasets, making it challenging to adapt to new dialog scenarios. In this work, we propose ESAinsTOD, a unified End-to-end Schema-Aware Instruction-tuning framework for general Task-Oriented Dialog modeling. This framework introduces a structured methodology to go beyond simply fine-tuning Large Language Models (LLMs), enabling flexible adaptation to various dialogue task flows and schemas. Specifically, we leverage full-parameter fine-tuning of LLMs and introduce two alignment mechanisms to make the resulting system both instruction-aware and schema-aware: (i) instruction alignment, which ensures that the system faithfully follows task instructions to complete various task flows from heterogeneous TOD datasets; and (ii) schema alignment, which encourages the system to make predictions adhering to the specified schema. In addition, we employ session-level end-to-end modeling, which allows the system to access the results of previously executed task flows within the dialogue history, to bridge the gap between the instruction-tuning paradigm and the real-world application of TOD systems. Empirical results show that while a fine-tuned LLM serves as a strong baseline, our structured approach provides significant additional benefits. In particular, our findings indicate that: (i) ESAinsTOD outperforms state-of-the-art models by a significant margin on end-to-end task-oriented dialog modeling benchmarks: CamRest676, In-Car and MultiWOZ; (ii) more importantly, it exhibits superior generalization capabilities across various low-resource settings, with the proposed alignment mechanisms significantly enhancing zero-shot performance; and (iii) our instruction-tuning paradigm substantially improves the model's robustness against data noise and cascading errors.
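The abstract describes combining a task instruction, a target schema, and the session-level dialog history (including results of previously executed task flows) into a single training input. A minimal sketch of how such a prompt might be assembled is below; all function names, field names, and the prompt layout are illustrative assumptions, not the paper's actual implementation.

```python
def build_prompt(instruction, schema, history):
    """Assemble one session-level training prompt from a task instruction
    (instruction alignment), a slot schema (schema alignment), and the
    dialog history so far. Layout and names are hypothetical."""
    # Render the schema so the model's predictions can be constrained to it.
    schema_block = "\n".join(
        f"- {slot}: one of {sorted(values)}" for slot, values in schema.items()
    )
    # Session-level history exposes earlier task-flow results (e.g. DB lookups).
    history_block = "\n".join(history)
    return (
        f"### Instruction\n{instruction}\n\n"
        f"### Schema\n{schema_block}\n\n"
        f"### Dialog History\n{history_block}\n\n"
        f"### Response\n"
    )

prompt = build_prompt(
    instruction="Track the belief state, then generate a system response.",
    schema={
        "area": {"north", "south", "centre"},
        "pricerange": {"cheap", "expensive"},
    },
    history=[
        "User: I want a cheap restaurant in the centre.",
        "Belief: area=centre; pricerange=cheap",
        "DB result: 3 matches",
    ],
)
```

The point of the sketch is the coupling: the schema block tells the model which slot values are legal, while the history block carries forward executed task-flow results, which is how the abstract's "session-level end-to-end modeling" bridges instruction tuning and deployed TOD systems.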
Problem

Research questions and friction points this paper is trying to address.

Task-Oriented Dialog
End-to-End Modeling
Schema Adaptation
Generalization
Instruction Tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

schema-aware
instruction-tuning
end-to-end modeling
task-oriented dialog
alignment mechanisms
Authors
Dechuan Teng — Harbin Institute of Technology (Natural Language Processing, Task-oriented dialog systems)
Chunlin Lu — School of Computer Science and Engineering, Central South University, Changsha, China
Libo Qin — School of Computer Science and Engineering, Central South University, Changsha, China
Wanxiang Che — Professor, Harbin Institute of Technology (Natural Language Processing)