AI Summary
This paper addresses three core challenges in in-game dialogue AI: character consistency, world-model alignment, and robust tool invocation. To this end, we propose a multi-LoRA fusion architecture that separately models (1) tool-call decision making, (2) response generation conditioned on tool execution results, and (3) open-domain dialogue generation without tool reliance. Leveraging vLLM for efficient multi-LoRA inference, our approach builds on the Qwen3-14B foundation model and combines synthetic data augmentation with multi-task fine-tuning, enabling strong performance and generalization under resource constraints. Evaluated on the CPDC 2025 GPU track, our method achieves first place in Task 1 (character-consistent dialogue) and Task 3 (tool-augmented reasoning), and second place in Task 2 (world-aligned response generation). These results demonstrate the effectiveness of our framework for building tool-enhanced, immersive game dialogue agents.
Abstract
This paper presents the opdainlp team's solution for the GPU track of the CPDC 2025 challenge. The challenge consists of three tasks aimed at building an in-game conversational AI that adheres to character personas, aligns with the game's worldview, and supports function calling. Balancing effectiveness against resource and time constraints during inference, we synthesized data for some of the tasks based on the datasets provided by the competition organizers. We fine-tuned Qwen3-14B with LoRA and model fusion, and at inference time served a single base model integrated with multiple LoRA adapters. Specifically, we used three distinct LoRA adapters to handle tool calling, response generation with tool call results, and response generation without tool call results, respectively. Multi-LoRA inference was implemented with vLLM. Our solution achieved first place in Task 1 and Task 3, and second place in Task 2 of the GPU track.
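The three-adapter setup described above can be sketched as a simple per-request router on top of vLLM's multi-LoRA support. This is a minimal illustration, not the authors' actual code: the adapter names, paths, routing conditions, and sampling settings below are assumptions; only the general vLLM pattern (a base model loaded with `enable_lora=True`, plus a per-request `LoRARequest`) follows the published API.

```python
def select_adapter(needs_tool_call: bool, has_tool_results: bool) -> str:
    """Route a dialogue turn to one of the three LoRA adapters.

    Adapter names are hypothetical placeholders for the three roles
    described in the paper.
    """
    if needs_tool_call:
        return "tool_call"            # decide whether/how to invoke a tool
    if has_tool_results:
        return "respond_with_tools"   # condition the reply on tool outputs
    return "respond_no_tools"         # open-domain dialogue, no tools


def generate_with_adapter(llm, prompt: str, adapter: str, adapter_paths: dict):
    """Run one generation with the chosen adapter via vLLM multi-LoRA.

    `llm` is assumed to have been constructed once as, e.g.:
        LLM(model="Qwen/Qwen3-14B", enable_lora=True, max_loras=3)
    `adapter_paths` maps adapter names to local adapter directories.
    """
    from vllm import SamplingParams
    from vllm.lora.request import LoRARequest

    lora_ids = {"tool_call": 1, "respond_with_tools": 2, "respond_no_tools": 3}
    return llm.generate(
        [prompt],
        SamplingParams(temperature=0.7, max_tokens=256),
        lora_request=LoRARequest(adapter, lora_ids[adapter], adapter_paths[adapter]),
    )
```

Keeping one base model in GPU memory and swapping lightweight adapters per request is what makes this design attractive under the track's resource constraints, compared with serving three separately fine-tuned 14B models.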