Synthesizing and Adapting Error Correction Data for Mobile Large Language Model Applications

📅 2025-05-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Typing-assist LLMs on mobile devices suffer from suboptimal input error correction due to distributional mismatch between synthetic training data and real-world user interactions. Method: This paper proposes a realistic, deployment-oriented data construction and adaptation framework for mobile LLM error correction. It comprises: (1) domain-knowledge-guided LLM synthesis of high-quality correction samples; (2) a novel data reweighting mechanism jointly leveraging online A/B test metric prediction and on-device lightweight language model scoring to align synthetic distributions with actual mobile interaction patterns; and (3) a privacy-preserving integration of on-device user feedback, offline evaluation, online metrics, and multi-source data mixing for training. Results: Experiments demonstrate significant improvements in correction accuracy on both offline benchmarks and live A/B tests, alongside reduced user editing latency, establishing a practical, deployable paradigm for lightweight, data-adapted error correction in mobile LLMs.
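Step (1) above prompts an LLM with error-correction domain knowledge to generate correction samples. As a stand-in for that pipeline, the sketch below generates (noisy, clean) pairs with a rule-based corruption model encoding one piece of the same domain knowledge: fat-finger substitutions between adjacent keys. The adjacency map, function name, and corruption rate are all illustrative assumptions, not the paper's actual prompts or noise model.

```python
import random

# Small excerpt of a QWERTY key-adjacency map (assumed, not from the paper).
ADJACENT = {"a": "qs", "e": "wr", "o": "ip", "t": "ry", "n": "bm"}

def corrupt(sentence, rate=0.15, seed=0):
    """Create a (noisy, clean) error-correction pair by simulating
    fat-finger substitutions: each mapped character is replaced by an
    adjacent key with probability `rate`."""
    rng = random.Random(seed)
    chars = []
    for ch in sentence:
        if ch in ADJACENT and rng.random() < rate:
            chars.append(rng.choice(ADJACENT[ch]))
        else:
            chars.append(ch)
    return "".join(chars), sentence

noisy, clean = corrupt("the quick brown fox")
```

In the paper the generator is an LLM guided by such knowledge rather than hand-written rules; the point here is only the shape of the output, i.e. aligned noisy/clean pairs suitable for training and evaluation.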

📝 Abstract
Error correction is an important capability when applying large language models (LLMs) to facilitate user typing on mobile devices. In this paper, we use LLMs to synthesize a high-quality dataset of error correction pairs to evaluate and improve LLMs for mobile applications. We first prompt LLMs with error correction domain knowledge to build a scalable and reliable addition to the existing data synthesis pipeline. We then adapt the synthetic data distribution to match the mobile application domain by reweighting the samples. The reweighting model is learnt by predicting (a handful of) live A/B test metrics when deploying LLMs in production, given the LLM performance on offline evaluation data and scores from a small privacy-preserving on-device language model. Finally, we present best practices for mixing our synthetic data with other data sources to improve model performance on error correction in both offline evaluation and production live A/B testing.
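The reweighting step in the abstract can be sketched as follows: fit a small predictor mapping each sample's offline-evaluation score and on-device language-model score to a live A/B test metric, then use the predicted metric as a (normalized) sample weight. Everything below is a minimal illustration under assumed names and simulated data; the paper's actual features, metrics, and model are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Per-sample features (both assumed): an offline-eval score and a score
# from a small privacy-preserving on-device language model.
offline_score = rng.uniform(0.0, 1.0, n)
ondevice_score = rng.uniform(0.0, 1.0, n)
X = np.column_stack([offline_score, ondevice_score, np.ones(n)])

# A handful of live A/B test metrics (simulated here) supervise a tiny
# linear predictor of the production metric.
ab_metric = 0.6 * offline_score + 0.3 * ondevice_score + rng.normal(0.0, 0.05, n)
w, *_ = np.linalg.lstsq(X, ab_metric, rcond=None)

# Reweight: higher predicted production impact -> larger weight,
# normalized so the weights can serve as sampling probabilities.
pred = X @ w
weights = np.clip(pred, 1e-6, None)
weights /= weights.sum()
```

A linear least-squares fit stands in for whatever regressor the authors use; the design point it illustrates is that only a handful of expensive A/B observations are needed to supervise the predictor, which then scores every synthetic sample cheaply.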
Problem

Research questions and friction points this paper is trying to address.

Synthesizing high-quality error correction data for mobile LLMs
Adapting synthetic data distribution to match mobile domain
Improving error correction performance via mixed data strategies
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs synthesize high-quality error correction datasets
Reweight samples to match mobile domain distribution
Mix synthetic data with other sources optimally
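The last contribution, mixing synthetic data with other sources, amounts to sampling a training set according to per-source proportions. A minimal sketch, with hypothetical source names and ratios (the paper's tuned mixture is not reproduced here):

```python
import random

def mix_sources(sources, proportions, n_samples, seed=0):
    """Sample a training mixture from multiple data sources.

    `sources` maps a source name to its list of examples; `proportions`
    maps the same names to mixing ratios (assumed to be tuned offline).
    """
    rng = random.Random(seed)
    names = list(sources)
    probs = [proportions[name] for name in names]
    mixture = []
    for _ in range(n_samples):
        name = rng.choices(names, weights=probs, k=1)[0]
        mixture.append(rng.choice(sources[name]))
    return mixture

mix = mix_sources(
    {"synthetic": ["s1", "s2"], "human": ["h1"], "logs": ["g1"]},
    {"synthetic": 0.5, "human": 0.3, "logs": 0.2},
    n_samples=10,
)
```

In practice the reweighting weights from the previous step could replace uniform sampling within the synthetic source, tying the two contributions together.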