🤖 AI Summary
To address the inefficiency of retraining and resource-intensive compression for multi-scale large language models (LLMs), this paper proposes a nested multi-scale inference architecture. Built upon a single 12B parent model, it enables zero-shot extraction of smaller child models (e.g., 9B, 6B) for flexible cross-scale deployment. Key contributions include: (i) group-aware State Space Model (SSM) elastic expansion and heterogeneous MLP expansion; (ii) layer importance estimation via normalized mean squared error; (iii) a hybrid Mamba-Attention backbone; (iv) an end-to-end differentiable router; and (v) a two-stage training curriculum tailored to reasoning models. Integrated with knowledge distillation and weight sharing, the method achieves performance on par with or surpassing state-of-the-art models after training on only 110 billion tokens. Compared to de novo training, it reduces training cost by 360×; relative to advanced compression methods, it cuts cost by 7×; and it maintains a constant deployment memory footprint.
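The core idea of zero-shot extraction via weight sharing can be sketched in a few lines. This is an illustrative toy, not the paper's code: it assumes (hypothetically) that each child model's weights are a leading block of the parent's weight matrices, so a smaller model is obtained by slicing rather than retraining.

```python
# Illustrative sketch of nested weight sharing (NOT the paper's implementation):
# a child model's weights are assumed to be the top-left block of the parent's,
# so extraction is a zero-shot slice with no additional training.

def extract_child(parent_weight, child_rows, child_cols):
    """Return the child submatrix shared with the parent (top-left block)."""
    return [row[:child_cols] for row in parent_weight[:child_rows]]

# Parent "layer" with 4x4 weights; the 2x2 child shares the top-left block.
parent = [
    [0.1, 0.2, 0.3, 0.4],
    [0.5, 0.6, 0.7, 0.8],
    [0.9, 1.0, 1.1, 1.2],
    [1.3, 1.4, 1.5, 1.6],
]
child = extract_child(parent, 2, 2)  # -> [[0.1, 0.2], [0.5, 0.6]]
```

Because every child is a view into the parent's parameters, storing the whole family costs no more memory than storing the parent alone, which is the intuition behind the constant deployment footprint claimed above.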
📝 Abstract
Training a family of large language models targeting multiple scales and deployment objectives is prohibitively expensive, requiring a separate training run for each size. Recent work on model compression through pruning and knowledge distillation has reduced this cost; however, the process still incurs hundreds of billions of tokens' worth of training cost per compressed model. In this paper, we present Nemotron Elastic, a framework for building reasoning-oriented LLMs, including hybrid Mamba-Attention architectures, that embed multiple nested submodels within a single parent model, each optimized for a different deployment configuration and budget. Each submodel shares weights with the parent model and can be extracted zero-shot at deployment without additional training or fine-tuning. We enable this functionality through an end-to-end trained router, tightly coupled to a two-stage training curriculum designed specifically for reasoning models. We additionally introduce group-aware SSM elastification that preserves Mamba's structural constraints, heterogeneous MLP elastification, normalized MSE-based layer importance for improved depth selection, and knowledge distillation enabling simultaneous multi-budget optimization. We apply Nemotron Elastic to the Nemotron Nano V2 12B model, simultaneously producing a 9B and a 6B model using only 110B training tokens; this yields over 360× cost reduction compared to training model families from scratch, and around 7× compared to SoTA compression techniques. Each of the nested models performs on par with or better than the SoTA in accuracy. Moreover, unlike other compression methods, the nested capability of our approach allows a many-in-one reasoning model whose deployment memory stays constant regardless of the number of models in the family.
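The normalized MSE-based layer importance mentioned above can be illustrated with a small ablation sketch. This is a hedged toy under assumed details: each layer is scored by how much the model's final hidden state changes when that layer is skipped, normalized by the magnitude of the reference output; the toy "model" and function names are illustrative, not the paper's.

```python
# Hedged sketch of normalized-MSE layer importance (illustrative assumptions):
# ablate one layer at a time, then score it by the normalized MSE between the
# reference hidden state and the ablated one. Low-scoring layers are the first
# candidates to drop when shrinking depth.

def nmse(ref, ablated):
    """Normalized mean squared error between two activation vectors."""
    num = sum((r - a) ** 2 for r, a in zip(ref, ablated))
    den = sum(r ** 2 for r in ref)
    return num / den

# Toy "model": each layer adds a fixed delta vector to the hidden state.
layers = [[0.5, 0.5], [0.0, 0.1], [1.0, -1.0]]

def forward(hidden, skip=None):
    for i, delta in enumerate(layers):
        if i == skip:
            continue
        hidden = [h + d for h, d in zip(hidden, delta)]
    return hidden

ref = forward([1.0, 1.0])
# Higher NMSE when a layer is removed means that layer matters more.
importance = [nmse(ref, forward([1.0, 1.0], skip=i)) for i in range(len(layers))]
```

In this toy, the layer with the largest delta receives the highest importance score, matching the intuition that depth selection should preserve the layers whose removal most perturbs the model's outputs.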