Ruyi2 Technical Report

📅 2026-02-25
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the high cost and latency of deploying large language models, along with the limitations of existing adaptive computation methods in optimization complexity and compatibility with distributed training. Building on the AI Flow framework, the authors propose Ruyi2, an adaptive model that introduces a novel family-based parameter sharing mechanism and a variable-depth computation architecture, establishing a new "Train Once, Deploy Many" paradigm. Implemented on Megatron-LM, Ruyi2 integrates 3D parallelism with an early-exit strategy to enable efficient large-scale distributed training. Experiments show that Ruyi2 achieves a 2–3× speedup over its predecessor Ruyi while matching the performance of similarly sized Qwen3 models, validating the family-based sharing strategy as an effective way to improve both training and inference efficiency.
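The early-exit control flow summarized above can be sketched as follows. This is a minimal illustration, not Ruyi2's implementation: the layer and exit-head functions are numeric stand-ins, and the confidence-threshold exit rule is an assumption, since the report does not specify the actual exit criterion.

```python
# Sketch of variable-depth inference via early exit: run transformer
# layers one at a time and stop as soon as an exit head is confident.
# All functions here are illustrative stand-ins, not Ruyi2's API.

NUM_LAYERS = 12
THRESHOLD = 0.9

def layer_forward(hidden, layer_idx):
    """Stand-in for one transformer block: refines the hidden state."""
    return [h + 0.1 * layer_idx for h in hidden]

def exit_head_confidence(hidden, layer_idx):
    """Stand-in exit head: confidence grows with depth, mimicking a
    classifier that becomes more certain in later layers."""
    return min(1.0, 0.2 + 0.08 * layer_idx)

def early_exit_forward(hidden, threshold=THRESHOLD):
    """Run layers until an exit head is confident enough, then stop.
    Returns the hidden state and the depth actually used."""
    for i in range(1, NUM_LAYERS + 1):
        hidden = layer_forward(hidden, i)
        if exit_head_confidence(hidden, i) >= threshold:
            return hidden, i          # early exit: skip remaining layers
    return hidden, NUM_LAYERS         # fell through: full depth

hidden, depth_used = early_exit_forward([0.0, 0.0])
print(f"exited at layer {depth_used} of {NUM_LAYERS}")
```

Because the loop stops at the first confident exit head, easy inputs spend fewer layers, which is the efficiency-performance trade-off the summary attributes to early-exit architectures.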

๐Ÿ“ Abstract
Large Language Models (LLMs) face significant challenges in deployment cost and latency, necessitating adaptive computing strategies. Building upon the AI Flow framework, we introduce Ruyi2 as an evolution of our adaptive model series designed for efficient variable-depth computation. While early-exit architectures offer a viable efficiency-performance balance, the Ruyi model and existing methods often struggle with optimization complexity and compatibility with large-scale distributed training. To bridge this gap, Ruyi2 introduces a stable "Familial Model" implemented on Megatron-LM. Using 3D parallel training, it achieves a 2–3× speedup over Ruyi while performing comparably to same-sized Qwen3 models. These results confirm that family-based parameter sharing is a highly effective strategy, establishing a new "Train Once, Deploy Many" paradigm and providing a key reference for balancing architectural efficiency with high-performance capability.
Problem

Research questions and friction points this paper is trying to address.

deployment cost
latency
adaptive computing
distributed training
optimization complexity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Familial Model
Adaptive Computation
3D Parallel Training
Early-exit Architecture
Train Once Deploy Many
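The "Train Once Deploy Many" idea above can be sketched with a toy familial model in which shallower family members reuse a prefix of one shared layer stack. This is a hedged illustration of family-based parameter sharing in general; the names `FamilialModel` and `member`, and the prefix-sharing scheme itself, are assumptions, not Ruyi2's actual design.

```python
# Sketch of family-based parameter sharing: one trained parameter set,
# many deployable sub-models of different depths. Names and the
# prefix-sharing scheme are illustrative assumptions.

class FamilialModel:
    """One full-depth parameter set; shallower family members reuse a
    prefix of the shared layers, so no parameters are duplicated."""

    def __init__(self, num_layers):
        # Shared parameters: one scalar per layer (stand-in for real weights).
        self.layer_params = [0.01 * (i + 1) for i in range(num_layers)]

    def member(self, depth):
        """Return a callable sub-model using only the first `depth` layers.
        The slice references the same trained values; nothing is retrained."""
        params = self.layer_params[:depth]
        def forward(x):
            for p in params:
                x = x + p   # stand-in for applying one transformer block
            return x
        return forward

family = FamilialModel(num_layers=12)
small, medium, full = family.member(4), family.member(8), family.member(12)
print(small(0.0), medium(0.0), full(0.0))
```

One training run produces the full-depth parameters; deployment then picks the family member whose depth fits each device's latency budget, which is what makes the paradigm "train once, deploy many".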
🔎 Similar Papers
Huan Song, Amazon AWS AI: deep learning, machine learning, graph neural networks, time-series analysis
Shuyu Tian, Institute of Artificial Intelligence (TeleAI), China Telecom
Junyi Hao, Institute of Artificial Intelligence (TeleAI), China Telecom
Minxiu Xu, Institute of Artificial Intelligence (TeleAI), China Telecom
Hongjun An, Institute of Artificial Intelligence (TeleAI), China Telecom
Yiliang Song, Institute of Artificial Intelligence (TeleAI), China Telecom
Jiawei Shao, Institute of Artificial Intelligence (TeleAI), China Telecom
Xuelong Li, Institute of Artificial Intelligence (TeleAI), China Telecom