🤖 AI Summary
To address low query–item matching accuracy and the high cost of real-time large language model (LLM) inference in e-commerce, this paper proposes E-CARE, a framework that uses LLMs to automatically generate commonsense reasoning factor graphs, encoding commonsense knowledge into lightweight, structured representations so that commonsense can be injected with only one LLM forward pass per query. By integrating supervised fine-tuning with cross-feature mining, E-CARE enhances the semantic understanding of downstream recommendation models, achieving significant gains in relevance modeling at low computational cost. Experiments on two downstream tasks show that E-CARE improves Precision@5 by up to 12.1%, validating its effectiveness, efficiency, and scalability.
📝 Abstract
Finding relevant products for a user query plays a pivotal role on an e-commerce platform, as it can spark shopping behavior and drive revenue. The challenge lies in accurately predicting the correlation between queries and products. Recently, mining cross-features between queries and products with the commonsense reasoning capacity of Large Language Models (LLMs) has shown promising performance. However, such methods incur high costs from intensive real-time LLM inference during serving, as well as from human annotation and potential Supervised Fine-Tuning (SFT). To boost efficiency while still leveraging LLM commonsense reasoning for various e-commerce tasks, we propose the Efficient Commonsense-Augmented Recommendation Enhancer (E-CARE). During inference, models augmented with E-CARE access commonsense reasoning with only a single LLM forward pass per query, using a commonsense reasoning factor graph that encodes most of the reasoning schema of powerful LLMs. Experiments on two downstream tasks show an improvement of up to 12.1% in Precision@5.
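The single-pass idea described above can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the function names (`extract_factors`, `score_item`) and the factor schema are assumptions, and a stub dictionary stands in for the LLM that would produce the factor graph.

```python
from functools import lru_cache

# Hypothetical stand-in for a powerful LLM: given a query, it emits a small
# "factor graph" -- simplified here to a dict of commonsense factors.
def extract_factors(query: str) -> dict:
    canned = {
        "hiking boots": {"use_case": "outdoor", "needs": ["waterproof", "grip"]},
    }
    return canned.get(query, {"use_case": "general", "needs": []})

@lru_cache(maxsize=None)  # one "LLM forward pass" per query, then reused
def factor_graph(query: str) -> tuple:
    f = extract_factors(query)
    return (f["use_case"], tuple(f["needs"]))

def score_item(query: str, item_attrs: frozenset) -> int:
    # Lightweight downstream scorer: counts factor matches; no LLM call here.
    use_case, needs = factor_graph(query)
    return sum(attr in item_attrs for attr in (use_case, *needs))

print(score_item("hiking boots", frozenset({"outdoor", "waterproof"})))  # 2
print(score_item("hiking boots", frozenset({"indoor"})))                 # 0
```

The cache mirrors the paper's efficiency argument: the expensive reasoning step runs once per query, while every query–item scoring call afterward touches only the cached structured representation.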