🤖 AI Summary
Existing fraud detection methods struggle to extract complex risk signals from e-commerce transaction texts, and the practical efficacy of large language models in financial anti-fraud applications remains unclear. This work proposes a novel post-training framework that integrates reinforcement learning with a lightweight language model, leveraging the Group Sequence Policy Optimization (GSPO) algorithm and a rule-based reward mechanism to identify credit fraud using only raw transaction text, such as customer information, logistics details, and product descriptions. Through its exploration mechanism, the approach uncovers emerging fraud patterns that are difficult to capture with conventional feature engineering. Evaluated on a real-world global payment dataset, the method significantly improves F1 scores while enhancing model interpretability and generalization.
📄 Abstract
E-commerce platforms and payment solution providers face increasingly sophisticated fraud schemes, ranging from identity theft and account takeovers to complex money laundering operations that exploit the speed and anonymity of digital transactions. Despite their theoretical promise, the application of Large Language Models (LLMs) to fraud detection in real-world financial contexts remains largely unexplored, and their practical effectiveness in handling domain-specific e-commerce transaction data has yet to be empirically validated. To bridge this gap between the limitations of conventional machine learning and the untapped potential of LLMs in fraud detection, this paper proposes a novel approach that employs Reinforcement Learning (RL) to post-train lightweight language models specifically for fraud detection tasks using only raw transaction data. We utilize the Group Sequence Policy Optimization (GSPO) algorithm combined with a rule-based reward system to fine-tune language models of various sizes on a real-world transaction dataset provided by a Chinese global payment solution company. Through this reinforcement learning framework, the language models are encouraged to explore diverse trust and risk signals embedded within the textual transaction data, including patterns in customer information, shipping details, product descriptions, and order history. Our experimental results demonstrate the effectiveness of this approach, with post-trained language models achieving substantial F1-score improvements on held-out test data. Further analysis indicates that the observed performance gains are primarily attributable to the exploration mechanism inherent in reinforcement learning, which allows models to discover novel fraud indicators beyond those captured by traditional engineered features.
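The abstract does not spell out the reward rules or prompt format, so the following is a minimal sketch of what a rule-based reward for this setup could look like. It assumes (hypothetically) that the model is prompted to end its reasoning with a verdict line like `Verdict: fraud`, and that rewards from a group of sampled completions are mean-centered into advantages, in the spirit of group-based methods such as GSPO:

```python
import re

def rule_based_reward(completion: str, true_label: str) -> float:
    """Illustrative rule-based reward (the paper's actual rules are not
    given in the abstract): reward a correct parseable verdict, penalize
    malformed output."""
    match = re.search(r"Verdict:\s*(fraud|legitimate)", completion, re.IGNORECASE)
    if match is None:
        return -1.0  # no parseable verdict: format penalty
    predicted = match.group(1).lower()
    return 1.0 if predicted == true_label else 0.0

# A hypothetical group of sampled completions for one transaction;
# group-based RL methods compare completions within such a group.
group = [
    "Shipping and billing countries mismatch. Verdict: fraud",
    "Order history looks consistent. Verdict: legitimate",
    "The transaction seems unusual.",  # no verdict produced
]
rewards = [rule_based_reward(c, true_label="fraud") for c in group]
mean_reward = sum(rewards) / len(rewards)
# Mean-centered advantages (GSPO/GRPO-style advantages also divide by
# the group standard deviation; omitted here for brevity).
advantages = [r - mean_reward for r in rewards]
```

Because the reward is computed from the completion text alone, no learned reward model is needed; the exploration the abstract credits for the gains comes from sampling many such completions per transaction and reinforcing those whose reasoning led to a correct verdict.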