Controlled Yet Natural: A Hybrid BDI-LLM Conversational Agent for Child Helpline Training

📅 2025-09-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Training child helpline counsellors has long relied on costly human-led role-play, while rule-based agents offer structure but limited linguistic flexibility. This paper introduces a virtual dialogue agent that integrates large language models (LLMs) into the rule-based Belief-Desire-Intention (BDI) framework, embedding LLMs into three components: intent recognition, response generation, and a bypass mechanism. This preserves behavioural controllability while improving linguistic variety and conversational realism. A script-based evaluation found the three LLM components non-inferior to human-crafted responses, and a within-subject experiment (N=37) found credible evidence that participants perceived the hybrid agent as more believable and held more positive attitudes toward it than a purely rule-based version; support for increased engagement was weaker (posterior probability = 0.845). The work points toward generative training agents that are realistic yet interpretable and controllable.

📝 Abstract
Child helpline training often relies on human-led roleplay, which is both time- and resource-consuming. To address this, rule-based interactive agent simulations have been proposed to provide a structured training experience for new counsellors. However, these agents might suffer from limited language understanding and response variety. To overcome these limitations, we present a hybrid interactive agent that integrates Large Language Models (LLMs) into a rule-based Belief-Desire-Intention (BDI) framework, simulating more realistic virtual child chat conversations. This hybrid solution incorporates LLMs into three components: intent recognition, response generation, and a bypass mechanism. We evaluated the system through two studies: a script-based assessment comparing LLM-generated responses to human-crafted responses, and a within-subject experiment (N=37) comparing the LLM-integrated agent with a rule-based version. The first study provided evidence that the three LLM components were non-inferior to human-crafted responses. In the second study, we found credible support for two hypotheses: participants perceived the LLM-integrated agent as more believable and reported more positive attitudes toward it than the rule-based agent. Additionally, although weaker, there was some support for increased engagement (posterior probability = 0.845, 95% HDI [-0.149, 0.465]). Our findings demonstrate the potential of integrating LLMs into rule-based systems, offering a promising direction for more flexible but controlled training systems.
Problem

Research questions and friction points this paper is trying to address.

Developing realistic virtual child chat conversations for helpline training
Overcoming limited language understanding in rule-based training agents
Integrating LLMs into BDI framework for controlled yet natural interactions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Embeds LLMs into a rule-based BDI framework for realism
Uses LLMs for intent recognition, response generation, and a bypass mechanism
Hybrid design balances behavioural control with natural conversation
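The hybrid loop described above can be sketched roughly as follows. This is an illustrative reconstruction only, not the authors' implementation: all function names (`llm_classify_intent`, `llm_generate_reply`, `llm_bypass`) and the rule table are hypothetical, and the LLM calls are replaced with stand-in logic so the sketch runs without external services.

```python
# Illustrative sketch of a hybrid BDI-LLM turn, assuming a design where
# a rule-based BDI layer selects intentions and LLMs handle the three
# components named in the paper: intent recognition, response
# generation, and a bypass mechanism for unrecognized input.

RULES = {  # rule-based mapping: recognized intent -> scripted intention
    "greeting": "respond_greeting",
    "ask_feelings": "share_feelings",
}

def llm_classify_intent(utterance):
    """Stand-in for the LLM intent-recognition component.

    A real system would prompt an LLM; keyword matching keeps the
    sketch self-contained."""
    text = utterance.lower()
    if "hello" in text or "hi" in text:
        return "greeting"
    if "feel" in text:
        return "ask_feelings"
    return None  # unrecognized -> falls through to the bypass

def llm_generate_reply(intention):
    """Stand-in for LLM response generation, conditioned on the
    intention chosen by the BDI layer (keeps behaviour controllable)."""
    templates = {
        "respond_greeting": "hi... i'm not sure why i'm texting",
        "share_feelings": "i've been feeling really down lately",
    }
    return templates[intention]

def llm_bypass(utterance):
    """Stand-in for the bypass mechanism: when no rule matches,
    an LLM improvises a safe, in-character reply."""
    return "i don't really know what to say to that"

def agent_turn(utterance):
    # 1. LLM-based intent recognition
    intent = llm_classify_intent(utterance)
    # 2. Rule-based BDI deliberation selects the intention
    if intent in RULES:
        # 3. LLM-based response generation under that intention
        return llm_generate_reply(RULES[intent])
    # 4. Bypass: handles inputs the rules do not cover
    return llm_bypass(utterance)
```

The design point is that the rule layer, not the LLM, decides *what* the virtual child does each turn; the LLMs only interpret input and phrase output, which is how such a system can stay controllable while sounding natural.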