🤖 AI Summary
This work addresses the high cost and latency challenges in deploying large language models, as well as the limitations of existing adaptive computation methods in optimization complexity and compatibility with distributed training. Building upon the AI Flow framework, the authors propose Ruyi2, an adaptive model that introduces a novel family-based parameter sharing mechanism and a variable-depth computation architecture, establishing a new "train once, deploy everywhere" paradigm. Implemented on Megatron-LM, Ruyi2 integrates 3D parallelism with an early-exit strategy to enable efficient large-scale distributed training. Experiments demonstrate that Ruyi2 achieves a 2–3× speedup over its predecessor Ruyi while matching the performance of similarly sized Qwen3 models, validating the effectiveness of the proposed family-based sharing strategy in enhancing the synergy between training and inference efficiency.
📄 Abstract
Large Language Models (LLMs) face significant challenges regarding deployment costs and latency, necessitating adaptive computing strategies. Building upon the AI Flow framework, we introduce Ruyi2 as an evolution of our adaptive model series, designed for efficient variable-depth computation. While early-exit architectures offer a viable efficiency-performance balance, the Ruyi model and other existing methods often struggle with optimization complexity and compatibility with large-scale distributed training. To bridge this gap, Ruyi2 introduces a stable "Familial Model" architecture implemented on Megatron-LM. Using 3D parallel training, it achieves a 2–3× speedup over Ruyi while performing comparably to same-sized Qwen3 models. These results confirm that family-based parameter sharing is a highly effective strategy, establishing a new "Train Once, Deploy Many" paradigm and providing a key reference for balancing architectural efficiency with high-performance capabilities.
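To make the variable-depth idea concrete, the following is a minimal sketch of early-exit inference with shared parameters: shallower "family members" reuse a prefix of one shared layer stack and a shared exit head, and decoding stops at the first depth whose prediction is confident enough. All names (`run_with_early_exit`, the confidence threshold, the toy dimensions) are illustrative assumptions, not APIs or details from the paper.

```python
import numpy as np

# Toy dimensions: hidden size, vocab size, number of layers.
D, V, L = 8, 16, 6
rng = np.random.default_rng(0)

# One shared stack of layer weights: a depth-k family member is just the
# first k layers of the same parameters ("train once, deploy many").
layers = [rng.standard_normal((D, D)) * 0.1 for _ in range(L)]
exit_head = rng.standard_normal((D, V)) * 0.1  # shared prediction head

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def run_with_early_exit(x, threshold=0.5):
    """Apply layers in order; return (token, depth) at the first layer
    whose exit-head confidence clears the threshold."""
    h = x
    for depth, W in enumerate(layers, start=1):
        h = np.tanh(h @ W)
        probs = softmax(h @ exit_head)
        if probs.max() >= threshold:  # confident enough: exit early
            return int(probs.argmax()), depth
    return int(probs.argmax()), L  # fell through: used the full depth

token, depth_used = run_with_early_exit(rng.standard_normal(D), threshold=0.2)
print(token, depth_used)  # depth_used tells which family member answered
```

Lowering the threshold trades accuracy for latency by letting more inputs exit at shallow depths; because every depth shares one set of weights, a single checkpoint serves the whole family.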