🤖 AI Summary
Existing MRHP research heavily relies on English and Indonesian datasets, leaving low-resource languages—such as Vietnamese—largely unaddressed, thereby hindering localized review helpfulness prediction. To address this gap, we introduce ViMRHP, the first large-scale Vietnamese multimodal review helpfulness prediction benchmark, covering four domains, 2K products, and 46K image–text reviews. We propose a human–AI collaborative annotation paradigm: leveraging large language models and multimodal understanding techniques to assist human annotators, coupled with rigorous quality validation. This reduces per-instance annotation time to 20–40 seconds and cuts costs by 65%, while systematically exposing AI’s limitations in complex semantic tasks. We release a high-quality, open-source dataset—including a human-verified subset—and conduct baseline experiments confirming its utility. ViMRHP fills a critical void in low-resource-language MRHP research and establishes a new foundation for cross-lingual multimodal helpfulness analysis.
📝 Abstract
Multimodal Review Helpfulness Prediction (MRHP) is an essential task in recommender systems, particularly on e-commerce platforms. Determining the helpfulness of user-generated reviews enhances user experience and improves consumer decision-making. However, existing datasets focus predominantly on English and Indonesian, resulting in a lack of linguistic diversity, especially for low-resource languages such as Vietnamese. In this paper, we introduce ViMRHP (Vietnamese Multimodal Review Helpfulness Prediction), a large-scale benchmark dataset for the MRHP task in Vietnamese. The dataset covers four domains and includes 2K products with 46K reviews. Constructing a dataset at this scale, however, requires considerable time and cost. To optimize the annotation process, we leverage AI to assist annotators in building the ViMRHP dataset. With AI assistance, annotation time per task drops from 90–120 seconds to 20–40 seconds while maintaining data quality, and overall costs are lowered by approximately 65%. Nevertheless, AI-generated annotations still fall short on complex annotation tasks, which we examine through a detailed performance analysis. In our experiments on ViMRHP, we evaluate baseline models on both human-verified and AI-generated annotations to assess their quality differences. The ViMRHP dataset is publicly available at https://github.com/trng28/ViMRHP.
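As a back-of-envelope check on the reported speedup, the per-task time reduction can be computed from the midpoints of the two ranges stated in the abstract (90–120 seconds unassisted, 20–40 seconds with AI assistance). Note this sketch only covers annotation *time*; the 65% figure in the paper refers to overall cost, which also depends on factors such as annotator rates not given here.

```python
# Midpoints of the annotation-time ranges reported in the abstract.
baseline_s = (90 + 120) / 2   # unassisted: 105 s per task
assisted_s = (20 + 40) / 2    # AI-assisted: 30 s per task

# Fraction of per-task annotation time saved with AI assistance.
time_saved = 1 - assisted_s / baseline_s
print(f"Time reduction per task: {time_saved:.0%}")  # ~71%
```

This suggests the AI-assisted pipeline cuts per-task annotation time by roughly 71% at the range midpoints (and by 56–83% at the range extremes), consistent in magnitude with the ~65% overall cost reduction the paper reports.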