An Automated Multi-Modal Evaluation Framework for Mobile Intelligent Assistants

📅 2025-08-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing evaluation of multimodal interactions in mobile intelligent assistants faces bottlenecks including high manual labor costs, inconsistent criteria, and strong subjectivity. To address these challenges, this paper proposes an automated evaluation framework leveraging the large language model Qwen3-8B and a multi-agent collaboration paradigm. The framework adopts a three-tier agent architecture comprising interaction behavior evaluators, semantic consistency verifiers, and user experience decision-makers, enabling task decomposition and dynamic coordination. By integrating supervised fine-tuning with multimodal understanding capabilities, the model automatically infers user satisfaction tendencies and detects generation defects. Experiments across eight mainstream intelligent assistants show strong agreement with human expert judgments and significant gains over baseline methods: satisfaction prediction accuracy increases by 23.6%, defect identification reaches an F1-score of 0.89, and overall evaluation matching accuracy improves by 31.4%.
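The three-tier decomposition described above can be sketched as a simple pipeline. This is a minimal illustration only: the class names, scoring heuristics, and fusion weights are hypothetical placeholders, not the paper's actual Qwen3-8B-based agents.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a three-tier evaluation pipeline.
# All names and heuristics below are illustrative assumptions,
# not the paper's implementation.

@dataclass
class Interaction:
    user_query: str
    assistant_reply: str
    modalities: list = field(default_factory=lambda: ["text"])

class InteractionBehaviorEvaluator:
    """Tier 1: scores surface interaction behavior."""
    def evaluate(self, item: Interaction) -> float:
        # Placeholder heuristic: longer, non-empty replies score higher.
        return min(len(item.assistant_reply) / 100.0, 1.0)

class SemanticConsistencyVerifier:
    """Tier 2: checks whether the reply addresses the query."""
    def verify(self, item: Interaction) -> float:
        query_terms = set(item.user_query.lower().split())
        reply_terms = set(item.assistant_reply.lower().split())
        # Placeholder heuristic: lexical overlap as a consistency proxy.
        return len(query_terms & reply_terms) / max(len(query_terms), 1)

class UserExperienceDecisionMaker:
    """Tier 3: fuses lower-tier signals into a satisfaction verdict."""
    def decide(self, behavior: float, consistency: float) -> dict:
        score = 0.5 * behavior + 0.5 * consistency  # assumed equal weighting
        return {"satisfaction": score, "defect": score < 0.3}

def evaluate_interaction(item: Interaction) -> dict:
    """Run the three tiers in sequence, mirroring task decomposition."""
    behavior = InteractionBehaviorEvaluator().evaluate(item)
    consistency = SemanticConsistencyVerifier().verify(item)
    return UserExperienceDecisionMaker().decide(behavior, consistency)
```

In the paper's framework each tier would be an LLM-backed agent coordinating dynamically; here each tier is reduced to a deterministic stub so the control flow of the decomposition is visible.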

📝 Abstract
With the rapid development of mobile intelligent assistant technologies, multi-modal AI assistants have become essential interfaces for daily user interactions. However, current evaluation methods face challenges including high manual costs, inconsistent standards, and subjective bias. This paper proposes an automated multi-modal evaluation framework based on large language models and multi-agent collaboration. The framework employs a three-tier agent architecture consisting of interaction evaluation agents, semantic verification agents, and experience decision agents. Through supervised fine-tuning of the Qwen3-8B model, the framework achieves high evaluation agreement with human experts. Experimental results on eight major intelligent assistants demonstrate the framework's effectiveness in predicting user satisfaction and identifying generation defects.
Problem

Research questions and friction points this paper is trying to address.

Automated evaluation of multi-modal AI assistants
Reducing manual costs and subjective bias
Improving accuracy in user satisfaction prediction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated multi-modal evaluation framework
Large language model fine-tuning
Multi-agent collaboration architecture
👥 Authors

Meiping Wang, College of Software, Nankai University
Jian Zhong, College of Software, Nankai University
Rongduo Han, Nankai University
Liming Kang, College of Software, Nankai University
Zhengkun Shi, College of Software, Nankai University
Xiao Liang, vivo AI Lab
Xing Lin, vivo AI Lab
Nan Gao, College of Software, Nankai University
Haining Zhang, Nankai University