🤖 AI Summary
This work proposes CharacterFlywheel, an end-to-end, production-grade iterative optimization framework designed to enhance the user engagement and instruction-following capabilities of large language models in large-scale social chat applications. The framework integrates high-quality data curation, reward modeling, supervised fine-tuning (SFT), and reinforcement learning (RL), with iterative refinement driven by a closed loop of offline evaluation and online A/B testing to mitigate overfitting. Deployed in a system built on LLaMA 3.1, the approach delivered consistent improvements: of eight deployed models, seven showed positive engagement lift over the baseline (up to +8.8% in engagement breadth and +19.4% in engagement depth), while instruction adherence rose from 59.2% to 84.8% and the violation rate fell from 26.6% to 5.8%.
📝 Abstract
This report presents CharacterFlywheel, an iterative flywheel process for improving large language models (LLMs) in production social chat applications across Instagram, WhatsApp, and Messenger. Starting from LLaMA 3.1, we refined models across 15 generations using data from both internal and external real-user traffic. Through continuous deployments from July 2024 to April 2025, we conducted controlled 7-day A/B tests showing consistent engagement improvements: 7 of 8 newly deployed models demonstrated positive lift over the baseline, with the strongest performers achieving up to 8.8% improvement in engagement breadth and 19.4% in engagement depth. We also observed substantial gains in steerability, with instruction following increasing from 59.2% to 84.8% and instruction violations decreasing from 26.6% to 5.8%. We detail the CharacterFlywheel process, which integrates data curation, reward modeling to estimate and interpolate the landscape of engagement metrics, supervised fine-tuning (SFT), reinforcement learning (RL), and both offline and online evaluation to ensure reliable progress at each optimization step. We also discuss our methods for preventing overfitting and navigating production dynamics at scale. These contributions advance the scientific rigor and understanding of LLMs in social applications serving millions of users.
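The control flow of such a flywheel can be sketched as a simple loop: curate data from traffic, fine-tune (SFT), optimize against a reward signal (RL), then gate the candidate with offline evaluation before it graduates to an online A/B test. This is a minimal illustrative sketch, not the authors' implementation; all function names, the scalar "skill" stand-in for model quality, and the acceptance rule are assumptions made for clarity.

```python
# Hypothetical sketch of one flywheel-style optimization loop.
# A real system would train an LLM; here a dict with a scalar "skill"
# stands in for the model so the control flow is easy to follow.

def curate_data(traffic_logs):
    """Keep only high-quality conversations (here: a simple score filter)."""
    return [log for log in traffic_logs if log["quality"] >= 0.8]

def train_sft(model, data):
    """Stand-in for supervised fine-tuning on the curated data."""
    return {**model, "skill": model["skill"] + 0.05 * len(data)}

def train_rl(model, reward_fn):
    """Stand-in for RL against a learned reward model."""
    return {**model, "skill": model["skill"] + reward_fn(model)}

def offline_eval(model):
    """Proxy offline metric; gates whether a candidate reaches A/B testing."""
    return model["skill"]

def flywheel(model, traffic_logs, reward_fn, iterations=3):
    """Curation -> SFT -> RL -> offline gate, once per iteration.
    Only candidates that beat the incumbent offline are promoted
    (in production, promotion would mean a 7-day online A/B test)."""
    for _ in range(iterations):
        data = curate_data(traffic_logs)
        candidate = train_rl(train_sft(model, data), reward_fn)
        if offline_eval(candidate) > offline_eval(model):
            model = candidate  # promote the improved candidate
    return model

base = {"skill": 1.0}
logs = [{"quality": q} for q in (0.9, 0.5, 0.85)]  # two pass the filter
final = flywheel(base, logs, reward_fn=lambda m: 0.01, iterations=3)
print(final["skill"])  # skill grows by 0.11 per accepted iteration
```

The key design point mirrored here is the gate: each candidate must beat the incumbent on offline evaluation before deployment, which is what makes the loop a ratchet rather than a random walk.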