E-CARE: An Efficient LLM-based Commonsense-Augmented Framework for E-Commerce

📅 2025-11-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address low query–item matching accuracy and high real-time inference costs of large language models (LLMs) in e-commerce, this paper proposes E-CARE: a framework that leverages LLMs to automatically generate commonsense reasoning factor graphs, encoding commonsense knowledge into lightweight, structured representations—enabling commonsense injection with only one forward pass per query. Integrating supervised fine-tuning with cross-feature mining, E-CARE enhances the semantic understanding capability of downstream recommendation models. The approach achieves significant gains in relevance modeling while maintaining high computational efficiency. Experiments on two downstream tasks demonstrate that E-CARE improves Precision@5 by up to 12.1%, validating its effectiveness, efficiency, and scalability.

📝 Abstract
Finding relevant products given a user query plays a pivotal role in an e-commerce platform, as it can spark shopping behaviors and result in revenue gains. The challenge lies in accurately predicting the correlation between queries and products. Recently, mining the cross-features between queries and products based on the commonsense reasoning capacity of Large Language Models (LLMs) has shown promising performance. However, such methods suffer from high costs due to intensive real-time LLM inference during serving, as well as human annotations and potential Supervised Fine-Tuning (SFT). To boost efficiency while leveraging the commonsense reasoning capacity of LLMs for various e-commerce tasks, we propose the Efficient Commonsense-Augmented Recommendation Enhancer (E-CARE). During inference, models augmented with E-CARE can access commonsense reasoning with only a single LLM forward pass per query by utilizing a commonsense reasoning factor graph that encodes most of the reasoning schema from powerful LLMs. Experiments on two downstream tasks show an improvement of up to 12.1% in Precision@5.
Problem

Research questions and friction points this paper is trying to address.

Reducing high LLM inference costs in e-commerce
Eliminating human annotations and fine-tuning requirements
Enhancing query-product correlation prediction accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Efficient commonsense reasoning with single LLM pass
Commonsense reasoning factor graph encoding schemas
Improves precision while reducing inference costs
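The core efficiency idea above can be illustrated with a minimal sketch: commonsense knowledge is precomputed offline into a lightweight factor graph, so serving needs only a single LLM forward pass per query. All names here (`FactorGraph`, `encode_query`, the toy factors and items) are illustrative assumptions, not the paper's actual API.

```python
# Hedged sketch of the serving-time flow the abstract describes.
# Offline: a powerful LLM's reasoning is distilled into a factor graph
# (commonsense factors -> related items). Online: one encoding pass per
# query, then a cheap graph lookup. Everything below is a toy stand-in.

from dataclasses import dataclass, field


@dataclass
class FactorGraph:
    """Offline-built mapping from commonsense factors to related items."""
    factor_to_items: dict[str, set[str]] = field(default_factory=dict)

    def items_for(self, factors: list[str]) -> set[str]:
        # Union the item sets of all matched factors.
        hits: set[str] = set()
        for f in factors:
            hits |= self.factor_to_items.get(f, set())
        return hits


def encode_query(query: str) -> list[str]:
    # Stand-in for the single LLM forward pass: map the query onto
    # commonsense factors. A real system would call an LLM once here.
    toy_lexicon = {
        "hiking": ["outdoor", "footwear"],
        "rain": ["waterproof"],
    }
    factors: list[str] = []
    for token in query.lower().split():
        factors.extend(toy_lexicon.get(token, []))
    return factors


# Offline: build the graph once, reuse it for every query.
graph = FactorGraph({
    "outdoor": {"trail boots", "trekking poles"},
    "waterproof": {"rain jacket", "trail boots"},
    "footwear": {"trail boots", "sneakers"},
})

# Online: single encoding pass per query, then lightweight lookup.
candidates = graph.items_for(encode_query("hiking rain gear"))
print(sorted(candidates))
```

The design point being sketched is the split between expensive offline distillation (building `factor_to_items`) and cheap online serving (one encode plus set unions), which is what keeps per-query cost low.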
Ge Zhang
Huawei Noah’s Ark Lab, Montreal, Québec, Canada
R. Ajwani
Huawei Noah’s Ark Lab, Montreal, Québec, Canada
Tony Zheng
Huawei Noah’s Ark Lab, Montreal, Québec, Canada
Hongjian Gu
Huawei Noah’s Ark Lab, Montreal, Québec, Canada
Yaochen Hu
Huawei Technologies Canada, University of Alberta
Large scale machine learning, Optimization, Recommender systems, Approximation algorithms, Statistical machine learning
Wei Guo
Huawei Noah’s Ark Lab, Singapore
Mark Coates
Professor of Electrical Engineering, McGill University
Signal Processing, Computer Networks
Yingxue Zhang
Huawei Noah’s Ark Lab, Montreal, Québec, Canada