FedDTRE: Federated Dialogue Generation Models Powered by Trustworthiness Evaluation

📅 2025-10-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
In federated dialogue generation, balancing privacy preservation with personalization, mitigating overfitting under limited client data, and preventing the forgetting of global knowledge remain critical challenges. To address these, this paper proposes FedDTRE, a federated learning framework whose core innovation is a trustworthiness scoring mechanism grounded in a fairness-oriented evaluation dataset. The mechanism dynamically quantifies the reliability of both the global and local models and adaptively adjusts their aggregation weights, avoiding blind replacement of local models by the global one. This alleviates bias induced by data heterogeneity, suppresses client-level overfitting, and improves generalization. Experiments show that FedDTRE outperforms existing federated dialogue generation methods on standard metrics, including BLEU, ROUGE, Distinct, and human evaluation, yielding responses that are more coherent, contextually relevant, and lexically diverse.

📝 Abstract
With the rapid development of artificial intelligence, dialogue systems have become a prominent form of human-computer interaction. However, traditional centralized or fully local training approaches struggle to balance privacy preservation and personalization due to data privacy concerns and heterogeneous device capabilities. Federated learning, as a representative distributed paradigm, offers a promising solution, yet existing methods often overfit under limited client data and tend to forget global information after multiple training rounds, leading to poor generalization. To address these issues, we propose FedDTRE, a Federated adaptive aggregation strategy for Dialogue generation based on Trustworthiness Evaluation. Instead of directly replacing local models with the global model, FedDTRE leverages trustworthiness scores of both the global and local models on a fairness-oriented evaluation dataset to dynamically regulate the global model's contribution during local updates. Experimental results demonstrate that FedDTRE improves dialogue model performance and enhances the quality of generated dialogue.
Problem

Research questions and friction points this paper is trying to address.

Addresses privacy-personalization tradeoff in federated dialogue systems
Mitigates overfitting and global information forgetting in FL
Improves dialogue generation quality through trustworthiness-based aggregation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Federated adaptive aggregation for dialogue generation
Trustworthiness scores regulate global model contribution
Dynamic local updates using fairness evaluation dataset
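The aggregation idea above can be sketched in a few lines. This is a hypothetical illustration, not the authors' exact algorithm: it assumes the trustworthiness scores `s_global` and `s_local` come from evaluating each model on the fairness-oriented evaluation set, and blends parameters in proportion to the global model's relative score instead of overwriting the local model.

```python
def trust_weighted_update(global_params, local_params, s_global, s_local):
    """Blend global parameters into local ones by relative trust score.

    global_params / local_params: dicts mapping parameter names to values
    (e.g. flattened tensors); s_global / s_local: non-negative scores of
    each model on a held-out fairness evaluation set (names hypothetical).
    """
    # Share of the update attributed to the global model; a higher global
    # score pulls the client further toward the global parameters.
    alpha = s_global / (s_global + s_local)
    return {
        name: alpha * global_params[name] + (1 - alpha) * local_params[name]
        for name in local_params
    }


# Toy usage: the global model scores 3x higher, so it contributes 75%.
merged = trust_weighted_update({"w": 1.0}, {"w": 0.0}, s_global=3.0, s_local=1.0)
print(merged["w"])  # 0.75
```

Setting `alpha = 1` recovers plain global-model replacement (as in FedAvg-style local initialization), while `alpha = 0` keeps the client fully local; the score-driven interpolation sits between these extremes.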
Shule Lu
Beijing Advanced Innovation Center for Future Blockchain and Privacy Computing, Institute of Artificial Intelligence, Beihang University, China
Lingxiang Wang
Beihang University
Sijia Wen
Beijing Advanced Innovation Center for Future Blockchain and Privacy Computing, Institute of Artificial Intelligence, Beihang University, China
Ziwei Wang
Beijing Advanced Innovation Center for Future Blockchain and Privacy Computing, Institute of Artificial Intelligence, Beihang University, China
Hainan Zhang
Beihang University
Dialogue Generation · Text Generation · Federated Learning · Natural Language Processing