🤖 AI Summary
This paper addresses the low conversion rates and poor generalizability in personalized promotional offer generation. To tackle these challenges, we propose a contrastive learning–enhanced encoder-decoder framework built upon T5-Small. Specifically, we incorporate an InfoNCE loss-driven contrastive learning mechanism that aligns user profiles and promotional offers within a shared embedding space, dynamically refining the latent structure to improve adaptability to unseen user behaviors. Our key contribution lies in explicitly integrating contrastive learning into the offer generation pipeline—jointly optimizing for semantic relevance and behavioral consistency. Experiments on a synthetic dataset demonstrate that our approach achieves a 17% improvement in offer acceptance rate over standard supervised fine-tuning baselines, validating its effectiveness in enhancing recommendation relevance and customer satisfaction.
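The InfoNCE alignment described above can be illustrated with a minimal sketch: matched (user, offer) embedding pairs share a batch index, every other offer in the batch acts as an in-batch negative, and the loss is the softmax cross-entropy over cosine similarities. The function name, batching scheme, and temperature value below are our own illustrative choices, not details taken from the paper.

```python
import numpy as np

def info_nce_loss(user_emb, offer_emb, temperature=0.07):
    """Batch InfoNCE loss for paired (user, offer) embeddings.

    Row i of user_emb matches row i of offer_emb (the positive pair);
    all other rows serve as in-batch negatives.
    """
    # L2-normalize so dot products become cosine similarities
    u = user_emb / np.linalg.norm(user_emb, axis=1, keepdims=True)
    o = offer_emb / np.linalg.norm(offer_emb, axis=1, keepdims=True)
    logits = (u @ o.T) / temperature          # (batch, batch) similarity matrix
    # Numerically stable log-softmax along each row
    logits = logits - logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # The diagonal holds the log-probability of each positive pair
    return float(-np.mean(np.diag(log_prob)))
```

When the two embedding sets are perfectly aligned the diagonal dominates each row and the loss approaches zero; mismatched pairings yield a larger loss, which is the gradient signal that reshapes the shared embedding space during fine-tuning.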
📝 Abstract
Personalized marketing has emerged as a pivotal strategy for enhancing customer engagement and driving business growth. Academic and industry efforts have predominantly focused on recommendation systems and personalized advertisements, leaving personalized offer generation comparatively underexplored despite its significant potential for increasing conversion rates and improving customer satisfaction. Prior studies suggest that well-executed personalization strategies can boost revenue by up to 40 percent, underscoring the strategic importance of developing intelligent, data-driven approaches for offer generation. This work introduces SLM4Offer, a generative AI model for personalized offer generation, developed by fine-tuning a pre-trained encoder-decoder language model, specifically Google's Text-to-Text Transfer Transformer (T5-Small, 60M parameters), using a contrastive learning approach. SLM4Offer employs the InfoNCE (Information Noise-Contrastive Estimation) loss to align customer personas with relevant offers in a shared embedding space. A key innovation in SLM4Offer lies in the adaptive learning behaviour introduced by the contrastive loss, which reshapes the latent space during training and enhances the model's generalizability. The model is fine-tuned and evaluated on a synthetic dataset designed to simulate customer behaviour and offer acceptance patterns. Experimental results demonstrate a 17 percent improvement in offer acceptance rate over a supervised fine-tuning baseline, highlighting the effectiveness of contrastive objectives in advancing personalized marketing.
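The joint objective implied by the abstract — supervised generation plus contrastive alignment — can be sketched in our own notation (the weighting term λ and temperature τ are assumptions, not values stated in the paper):

```latex
\mathcal{L}_{\text{total}}
  = \mathcal{L}_{\text{CE}}
  + \lambda \, \mathcal{L}_{\text{InfoNCE}},
\qquad
\mathcal{L}_{\text{InfoNCE}}
  = -\frac{1}{N} \sum_{i=1}^{N}
    \log \frac{\exp\!\big(\operatorname{sim}(u_i, o_i)/\tau\big)}
              {\sum_{j=1}^{N} \exp\!\big(\operatorname{sim}(u_i, o_j)/\tau\big)}
```

Here $u_i$ and $o_i$ denote the embeddings of the $i$-th customer persona and its matched offer, $\operatorname{sim}$ is cosine similarity, $\tau$ is a temperature, and $\mathcal{L}_{\text{CE}}$ is the standard sequence-to-sequence cross-entropy of the T5 decoder.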