🤖 AI Summary
Mental health task-oriented dialogue (TOD) systems lack high-fidelity, failure-sensitive user simulators. Method: We propose the first adversarial user simulation framework for this domain, comprising a fine-tuned LLM-based generator and a binary classifier discriminator. Through iterative adversarial training, the generator learns to match the failure distribution of real users, with KL divergence and correlation between simulated and actual failure rates serving as distributional fidelity metrics. Contribution/Results: This is the first application of adversarial training to mental health TOD user simulation. It significantly improves failure exposure capability and failure-mode diversity. Experiments show strong correlation (r > 0.9) between simulated and real failure rates, a 42% reduction in KL divergence of failure distributions, and a 37% drop in discriminator accuracy within three training rounds—demonstrating substantially enhanced simulation authenticity.
📝 Abstract
Realistic user simulation is crucial for training and evaluating task-oriented dialogue (TOD) systems, yet creating simulators that accurately replicate human behavior remains challenging. A key property of effective simulators is their ability to expose failure modes of the systems they evaluate. We present an adversarial training framework that iteratively improves user simulator realism through a competitive dynamic between a generator (user simulator) and a discriminator. Applied to mental health support chatbots, our approach demonstrates that fine-tuned simulators dramatically outperform zero-shot base models at surfacing system issues, and adversarial training further enhances diversity, distributional alignment, and predictive validity. The resulting simulator achieves a strong correlation between simulated and real failure occurrence rates across diverse chatbot configurations while maintaining low distributional divergence of failure modes. Discriminator accuracy decreases drastically after three adversarial iterations, suggesting improved realism. These results provide evidence that adversarial training is a promising approach for creating realistic user simulators in mental health support TOD domains, enabling rapid, reliable, and cost-effective system evaluation before deployment.