MAPS: Motivation-Aware Personalized Search via LLM-Driven Consultation Alignment

📅 2025-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing personalized search methods overlook users’ pre-search consultation behaviors, thus failing to capture their underlying intent. This paper addresses the e-commerce scenario and proposes the first dynamic modeling framework for the “consultation → intent evolution → search” process. We introduce a Mixture-of-Attention Experts (MoAE) mechanism coupled with a dual-alignment framework—integrating contrastive learning and bidirectional attention—to jointly model latent consultation-driven intent, query semantics, user preferences, and item features, while fusing heterogeneous textual sources (consultations, reviews, and product descriptions). Our approach effectively aligns cross-modal semantics, mitigates historical behavioral noise, and bridges category–text semantic gaps. Extensive experiments on both real-world and synthetic datasets demonstrate significant improvements: +12.6% in Recall@10 and +9.8% in NDCG@10 over state-of-the-art methods.
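The reported gains are in standard top-K retrieval metrics. For reference, Recall@10 and binary-relevance NDCG@10 can be computed as in the generic sketch below; this is not the paper's evaluation code, and the item IDs are made up:

```python
import numpy as np

def recall_at_k(ranked, relevant, k=10):
    """Fraction of the relevant items that appear in the top-k of the ranking."""
    return len(set(ranked[:k]) & set(relevant)) / len(relevant)

def ndcg_at_k(ranked, relevant, k=10):
    """Binary-relevance NDCG: DCG of the ranking divided by the ideal DCG."""
    rel = set(relevant)
    dcg = sum(1.0 / np.log2(i + 2) for i, item in enumerate(ranked[:k]) if item in rel)
    idcg = sum(1.0 / np.log2(i + 2) for i in range(min(len(rel), k)))
    return dcg / idcg

# Toy example: two relevant items, ranked at positions 2 and 3.
ranked = [42, 7, 13, 5, 99, 1, 2, 3, 4, 6]
relevant = [7, 13]
print(recall_at_k(ranked, relevant))             # 1.0: both relevant items in top 10
print(round(ndcg_at_k(ranked, relevant), 3))     # 0.693: hits are not at the very top
```

A "+12.6% in Recall@10" claim then means the relative lift of this recall value averaged over test queries.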

📝 Abstract
Personalized product search aims to retrieve and rank items that match users' preferences and search intent. Despite their effectiveness, existing approaches typically assume that a user's query fully captures their real motivation. However, our analysis of a real-world e-commerce platform reveals that users often engage in relevant consultations before searching, indicating that they refine their intent through consultation according to their underlying motivations and needs. The motivation implied in these consultations is thus a key enhancing factor for personalized search. This unexplored area brings new challenges, including aligning contextual motivations with concise queries, bridging the category-text gap, and filtering noise within sequence history. To address these, we propose a Motivation-Aware Personalized Search (MAPS) method. It embeds queries and consultations into a unified semantic space via LLMs, uses a Mixture of Attention Experts (MoAE) to prioritize critical semantics, and introduces dual alignment: (1) contrastive learning aligns consultations, reviews, and product features; (2) bidirectional attention integrates motivation-aware embeddings with user preferences. Extensive experiments on real and synthetic data show that MAPS outperforms existing methods in both retrieval and ranking tasks.
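As a rough illustration of the Mixture of Attention Experts idea, the sketch below pools consultation-token embeddings with several attention "experts" and gates their outputs by the query. All dimensions, projections, and the gating form are hypothetical assumptions for illustration, not the paper's specification:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, K, V):
    """Scaled dot-product attention for a single query vector."""
    scores = K @ q / np.sqrt(q.shape[-1])
    return softmax(scores) @ V

d, n_tokens, n_experts = 16, 5, 3
# Hypothetical inputs: a query embedding and consultation-token embeddings
# (in MAPS these would come from an LLM's unified semantic space).
query = rng.standard_normal(d)
consult = rng.standard_normal((n_tokens, d))

# Each expert gets its own key/value projections (randomly initialised here).
W_k = rng.standard_normal((n_experts, d, d)) / np.sqrt(d)
W_v = rng.standard_normal((n_experts, d, d)) / np.sqrt(d)
W_gate = rng.standard_normal((d, n_experts)) / np.sqrt(d)

expert_out = np.stack([
    attention(query, consult @ W_k[e], consult @ W_v[e])
    for e in range(n_experts)
])                               # (n_experts, d): one pooled view per expert
gate = softmax(query @ W_gate)   # (n_experts,): query-dependent expert weights
fused = gate @ expert_out        # (d,): motivation-aware representation
```

The gate lets different experts specialise in different parts of the consultation text, with the query deciding which views matter for the current search.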
Problem

Research questions and friction points this paper is trying to address.

Aligning contextual motivations with concise queries
Bridging the category-text gap in personalized search
Filtering noise within sequence history for better search
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-driven unified semantic space embedding
Mixture of Attention Experts prioritizes semantics
Dual alignment integrates motivation-aware embeddings
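The contrastive half of the dual alignment can be sketched as a symmetric InfoNCE loss that pulls paired consultation and product embeddings together while pushing apart in-batch negatives. The pairing scheme, temperature, and loss form here are generic assumptions, not taken from the paper:

```python
import numpy as np

def info_nce(cons, prod, tau=0.1):
    """Symmetric InfoNCE: row i of `cons` and `prod` is a positive pair;
    all other rows in the batch serve as in-batch negatives."""
    cons = cons / np.linalg.norm(cons, axis=1, keepdims=True)
    prod = prod / np.linalg.norm(prod, axis=1, keepdims=True)
    logits = cons @ prod.T / tau          # (B, B) cosine-similarity matrix
    ix = np.arange(len(cons))

    def xent(l):
        # Cross-entropy with the diagonal entry as the target class.
        m = l.max(axis=1)
        log_z = m + np.log(np.exp(l - m[:, None]).sum(axis=1))
        return (log_z - l[ix, ix]).mean()

    return 0.5 * (xent(logits) + xent(logits.T))

rng = np.random.default_rng(1)
prod_emb = rng.standard_normal((8, 32))
aligned = prod_emb + 0.05 * rng.standard_normal((8, 32))   # close to its pair
shuffled = rng.standard_normal((8, 32))                    # unrelated vectors

# Aligned pairs should incur a much lower loss than random pairings.
loss_aligned = info_nce(aligned, prod_emb)
loss_random = info_nce(shuffled, prod_emb)
```

Minimising such a loss over consultation, review, and product-feature encoders is one standard way to place heterogeneous text sources in a shared space, which is what the bullet above describes.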
👥 Authors
Weicong Qin
Gaoling School of Artificial Intelligence, Renmin University of China
AI · NLP · Information Retrieval · Recommendation · Legal Intelligence
Yi Xu
Gaoling School of Artificial Intelligence, Renmin University of China, China
Weijie Yu
University of International Business and Economics, China
Chenglei Shen
Gaoling School of Artificial Intelligence, Renmin University of China
Recommender Systems · Large Language Models
Ming He
AI Lab at Lenovo Research, Lenovo Group Limited, China
Jianping Fan
AI Lab at Lenovo Research
AI · Computer Vision · Machine Learning · Quantum Computing
Xiao Zhang
Gaoling School of Artificial Intelligence, Renmin University of China, China
Jun Xu
Gaoling School of Artificial Intelligence, Renmin University of China, China