🤖 AI Summary
This work addresses the challenges faced by on-device small language models in continual learning, where limited memory and computational resources hinder adaptation to shifting task distributions, often leading to catastrophic forgetting and unstable cloud offloading behavior. To tackle these issues, the authors propose DA-GRPO, a novel approach that integrates a cloud invocation budget constraint directly into a dual-advantage function within a Group Relative Policy Optimization framework. This method jointly optimizes on-device task learning and cloud collaboration decisions without fixed reward shaping or an external routing module. Experimental results demonstrate that DA-GRPO improves post-switch accuracy on mathematical reasoning and code generation tasks, mitigates catastrophic forgetting, and achieves stable, efficient edge-cloud collaboration under a fixed budget.
📝 Abstract
Locally deployed Small Language Models (SLMs) must continually support diverse tasks under strict memory and computation constraints, making selective reliance on cloud Large Language Models (LLMs) unavoidable. Regulating cloud assistance during continual learning is challenging, as naive reward-based reinforcement learning often yields unstable offloading behavior and exacerbates catastrophic forgetting as task distributions shift. We propose DA-GRPO, a dual-advantage extension of Group Relative Policy Optimization that incorporates cloud-usage constraints directly into advantage computation, avoiding fixed reward shaping and external routing models. This design enables the local model to jointly learn task competence and collaboration behavior, allowing cloud requests to emerge naturally during post-training while respecting a prescribed assistance budget. Experiments on mathematical reasoning and code generation benchmarks show that DA-GRPO improves post-switch accuracy, substantially reduces forgetting, and maintains stable cloud usage compared to prior collaborative and routing-based approaches.
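The abstract does not give the exact advantage formula, but the core idea of folding a cloud-usage budget into group-relative advantages can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the function names, the linear budget penalty, and the weight `lam` are all assumptions; standard GRPO normalizes each rollout's reward against its sampling group, and here a second, budget-aware term is added per rollout.

```python
import numpy as np

def group_relative_advantage(rewards):
    # GRPO-style advantage: normalize each rollout's reward
    # against the mean and std of its sampling group.
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + 1e-8)

def dual_advantage(task_rewards, cloud_calls, budget, lam=1.0):
    """Hypothetical dual advantage combining task success with a
    budget-aware collaboration term.

    task_rewards: per-rollout task rewards (e.g. 1.0 = correct answer).
    cloud_calls:  1 if the rollout requested cloud help, else 0.
    budget:       target fraction of rollouts allowed to call the cloud.
    lam:          assumed trade-off weight between the two terms.
    """
    a_task = group_relative_advantage(task_rewards)
    usage = np.asarray(cloud_calls, dtype=float)
    # Penalize cloud usage above the prescribed budget (positive when
    # under budget), then center it so it stays group-relative.
    a_cloud = -(usage - budget)
    a_cloud = a_cloud - a_cloud.mean()
    return a_task + lam * a_cloud
```

Under this sketch, a rollout that solves the task without calling the cloud receives the largest combined advantage, so staying under the assistance budget is rewarded directly through the advantage signal rather than through a separate routing model.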