CharacterFlywheel: Scaling Iterative Improvement of Engaging and Steerable LLMs in Production

📅 2026-03-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes CharacterFlywheel, an end-to-end, production-grade iterative optimization framework designed to improve the user engagement and instruction-following capabilities of large language models in large-scale social chat applications. The framework integrates high-quality data curation, reward modeling, supervised fine-tuning (SFT), and reinforcement learning (RL), with iterative refinement driven by a closed loop of offline evaluation and online A/B testing that also mitigates overfitting. Deployed on a system based on LLaMA 3.1, the approach delivered consistent improvements across eight deployment iterations: seven showed notable engagement gains (up to +8.8% in breadth and +19.4% in depth), instruction adherence rose from 59.2% to 84.8%, and the violation rate dropped from 26.6% to 5.8%.

📝 Abstract
This report presents CharacterFlywheel, an iterative flywheel process for improving large language models (LLMs) in production social chat applications across Instagram, WhatsApp, and Messenger. Starting from LLaMA 3.1, we refined models across 15 generations using data from both internal and external real-user traffic. Through continuous deployments from July 2024 to April 2025, we conducted controlled 7-day A/B tests showing consistent engagement improvements: 7 of 8 newly deployed models demonstrated positive lift over the baseline, with the strongest performers achieving up to 8.8% improvement in engagement breadth and 19.4% in engagement depth. We also observed substantial gains in steerability, with instruction following increasing from 59.2% to 84.8% and instruction violations decreasing from 26.6% to 5.8%. We detail the CharacterFlywheel process, which integrates data curation, reward modeling to estimate and interpolate the landscape of engagement metrics, supervised fine-tuning (SFT), reinforcement learning (RL), and both offline and online evaluation to ensure reliable progress at each optimization step. We also discuss our methods for overfitting prevention and for navigating production dynamics at scale. These contributions advance the scientific rigor and understanding of LLMs in social applications serving millions of users.
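The flywheel the abstract describes (data curation → reward modeling → SFT → RL → offline evaluation gate → 7-day online A/B gate, repeated per generation) can be sketched as a simple control loop. The sketch below is purely illustrative: every component is a toy stand-in rather than the paper's implementation, and a "model" is reduced to a single quality score so the control flow is runnable.

```python
import random

# Illustrative sketch of an iterative improvement flywheel, assuming
# hypothetical components: data curation, reward modeling, SFT, and RL
# are collapsed into one noisy candidate-generation step, and a model
# is represented by a scalar quality score. None of this is the
# paper's actual API or training code.

def run_flywheel(base_score, generations=8, min_lift=0.0, seed=0):
    rng = random.Random(seed)
    baseline = base_score
    for _ in range(generations):
        # Stand-in for: curate data -> train reward model -> SFT -> RL.
        candidate = baseline + rng.uniform(-0.5, 1.0)
        if candidate <= baseline:
            continue                    # offline evaluation gate: skip regressions
        lift = candidate - baseline     # stand-in for a controlled 7-day A/B test
        if lift > min_lift:
            baseline = candidate        # promote: the next generation builds on this one
    return baseline
```

Because a candidate is promoted only when it passes both the offline gate and the online lift threshold, the baseline is monotonically non-decreasing across generations, which mirrors the closed-loop, regression-resistant progression the paper attributes to combining offline evaluation with online A/B testing.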
Problem

Research questions and friction points this paper addresses.

engagement
steerability
large language models
production deployment
social chat applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

CharacterFlywheel
iterative improvement
steerability
reward modeling
large language models
Authors

Yixin Nie (Meta, UNC Chapel Hill): Natural Language Processing, Machine Learning
Lin Guan (Meta GenAI): Reinforcement Learning, Planning, RLHF, AI Agents
Zhongyao Ma (OpenAI)
Anchit Gupta (CVIT, IIIT Hyderabad): Machine Learning, Computer Vision
Yipin Zhou (Facebook AI): Computer Vision
Xiao Li (Meta Superintelligence Labs)
Zhengping Zhou (Meta Superintelligence Labs)
Raymond Zeng (Meta Superintelligence Labs)
Gelin Zhou (Meta Superintelligence Labs)
Shigan Chu (Meta Superintelligence Labs)
Ajay Thampi (Meta Superintelligence Labs)
Wancen Mu (Meta Superintelligence Labs)
Nathan Shuster (Meta Superintelligence Labs)
Ketong Wang (Meta Superintelligence Labs)
Lin Chen (Facebook, Inc.): Machine Learning, NLP
Jason Brewer (Meta Superintelligence Labs)
Derek Hao Hu (Meta): visual search, visual recognition
Alexander McCauley (OpenAI)
Jason Weston (Meta): Artificial Intelligence, Machine Learning, Bioinformatics, Vision, Natural Language Processing
Sem Park (Meta Superintelligence Labs)
Na Zhang (Meta Superintelligence Labs)
Kevin Tang (Director, Meta): Computer Vision, Machine Learning