🤖 AI Summary
Existing evaluation of multimodal interactions in mobile intelligent assistants is bottlenecked by high manual labor costs, inconsistent criteria, and strong subjectivity. To address these challenges, this paper proposes an automated evaluation framework built on the large language model Qwen3-8B and a multi-agent collaboration paradigm. The framework adopts a three-tier agent architecture (interaction behavior evaluators, semantic consistency verifiers, and user experience decision-makers) to enable task decomposition and dynamic coordination. By combining supervised fine-tuning with multimodal understanding, the model automatically infers user satisfaction tendencies and detects generation defects. Experiments across eight mainstream intelligent assistants show strong agreement with human expert judgments and clear gains over baseline methods: satisfaction prediction accuracy increases by 23.6%, defect identification reaches an F1-score of 0.89, and overall matching accuracy with expert evaluations improves by 31.4%.
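The paper does not publish code, but the three-tier flow can be pictured as a sequential pipeline in which each agent sees the verdicts of the tiers before it. The sketch below is a hypothetical Python rendering: `Interaction`, `call_llm`, and the three agent functions are illustrative names, not the authors' implementation.

```python
from dataclasses import dataclass, field

# Hypothetical record for one assistant interaction; the paper's
# actual data schema is not specified.
@dataclass
class Interaction:
    user_input: str                       # text of the user's request
    assistant_output: str                 # the assistant's response (text part)
    screenshots: list[str] = field(default_factory=list)  # optional screen captures

def call_llm(prompt: str) -> str:
    """Placeholder for a call to the fine-tuned Qwen3-8B evaluator model."""
    raise NotImplementedError

def interaction_evaluation_agent(item: Interaction) -> dict:
    """Tier 1: scores the observable interaction behavior."""
    verdict = call_llm(
        f"Rate the interaction behavior.\nUser: {item.user_input}\n"
        f"Assistant: {item.assistant_output}"
    )
    return {"behavior": verdict}

def semantic_verification_agent(item: Interaction, prior: dict) -> dict:
    """Tier 2: checks semantic consistency between request and response."""
    verdict = call_llm(
        f"Is the response semantically consistent with the request?\n"
        f"User: {item.user_input}\nAssistant: {item.assistant_output}"
    )
    return {**prior, "consistency": verdict}

def experience_decision_agent(item: Interaction, prior: dict) -> dict:
    """Tier 3: aggregates earlier verdicts into a satisfaction tendency
    and a list of generation defects."""
    verdict = call_llm(
        f"Given the tier verdicts {prior}, infer the user's satisfaction "
        f"tendency and list any generation defects."
    )
    return {**prior, "decision": verdict}

def evaluate(item: Interaction) -> dict:
    """Run the three tiers in order, passing verdicts downstream."""
    result = interaction_evaluation_agent(item)
    result = semantic_verification_agent(item, result)
    return experience_decision_agent(item, result)
```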
📝 Abstract
With the rapid development of mobile intelligent assistant technologies, multimodal AI assistants have become essential interfaces for daily user interaction. However, current evaluation methods face high manual costs, inconsistent standards, and subjective bias. This paper proposes an automated multimodal evaluation framework based on large language models and multi-agent collaboration. The framework employs a three-tier agent architecture consisting of interaction evaluation agents, semantic verification agents, and experience decision agents. Through supervised fine-tuning of the Qwen3-8B model, we achieve strong agreement with human expert evaluations. Experimental results on eight major intelligent assistants demonstrate the framework's effectiveness in predicting user satisfaction and identifying generation defects.
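As a rough illustration of the supervised fine-tuning step, the sketch below uses Hugging Face's `trl` SFTTrainer on the Qwen3-8B checkpoint. The two-row dataset and the prompt format are placeholders, since the paper's expert-labeled training corpus and exact training configuration are not public.

```python
from datasets import Dataset
from trl import SFTConfig, SFTTrainer

# Toy stand-in for the expert-labeled evaluation corpus; the real rows
# would pair logged interactions with human expert verdicts.
train_dataset = Dataset.from_list([
    {"text": "User request: play some jazz\nAssistant reply: <...>\n"
             "Expert verdict: satisfied; no defects"},
    {"text": "User request: set an alarm for 7am\nAssistant reply: <...>\n"
             "Expert verdict: dissatisfied; defect: wrong alarm time"},
])

trainer = SFTTrainer(
    model="Qwen/Qwen3-8B",                       # base checkpoint named in the paper
    train_dataset=train_dataset,
    args=SFTConfig(output_dir="qwen3-8b-evaluator"),
)
trainer.train()
```

In practice an 8B model needs a multi-GPU setup or parameter-efficient methods (e.g. LoRA) to fine-tune; the paper does not say which approach the authors used.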