🤖 AI Summary
This work addresses a limitation of conventional quantization-aware training (QAT): it targets a single numerical format and thus struggles to accommodate precision requirements that change at inference time. To overcome this, the authors propose a multi-format QAT framework combined with a Slice-and-Scale conversion mechanism, enabling a single model to perform well across diverse MXINT and MXFP formats while allowing runtime precision switching without retraining. Anchored to a high-precision format such as MXINT8 or MXFP8, the model converts directly to lower-precision configurations, including formats unseen during training. Experiments show that the approach matches the accuracy of dedicated single-format QAT models at each target precision, with negligible degradation from format conversion, substantially improving deployment flexibility.
📝 Abstract
Quantization-aware training (QAT) is typically performed for a single target numeric format, while practical deployments often need to choose numerical precision at inference time based on hardware support or runtime constraints. We study multi-format QAT, where a single model is trained to be robust across multiple quantization formats. We find that multi-format QAT can match single-format QAT at each target precision, yielding one model that performs well across different formats, including formats that were not seen during training. To enable practical deployment, we propose the Slice-and-Scale conversion procedure for both MXINT and MXFP, which converts a high-precision representation into lower-precision formats without retraining. Building on this, we introduce a pipeline that (i) trains a model with multi-format QAT, (ii) stores a single anchor-format checkpoint (MXINT8/MXFP8), and (iii) converts on the fly to lower-precision MXINT or MXFP formats at runtime with negligible or no additional accuracy degradation. Together, these components provide a practical path to elastic precision scaling, allowing the runtime format to be selected at inference time across diverse deployment targets.
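The abstract does not spell out the Slice-and-Scale mechanics, but for MXINT formats one natural reading is: each block of elements shares a power-of-two scale, so a lower-precision element can be produced by slicing off low-order bits and folding them into the shared scale. The sketch below is an illustrative guess under that assumption; the block layout, rounding mode, and the function name `slice_and_scale_int` are hypothetical, not the paper's implementation:

```python
import numpy as np

def slice_and_scale_int(elems_int8, shared_exp, target_bits=4):
    """Illustrative Slice-and-Scale-style conversion (hypothetical):
    drop low-order bits of each MXINT8 block element and fold the
    dropped bits into the block's shared power-of-two scale."""
    drop = 8 - target_bits  # number of low-order bits to slice off
    # Round-to-nearest on the discarded bits, then arithmetic right shift.
    rounded = np.right_shift(elems_int8.astype(np.int32) + (1 << (drop - 1)), drop)
    # Clamp to the signed range of the narrower element type.
    lo, hi = -(1 << (target_bits - 1)), (1 << (target_bits - 1)) - 1
    sliced = np.clip(rounded, lo, hi).astype(np.int8)
    # Each decoded value is elem * 2**shared_exp, so the shifted elements
    # need the scale raised by 2**drop to represent roughly the same numbers.
    return sliced, shared_exp + drop

# Example block (4 elements for brevity; MX blocks are typically 32).
sliced, exp = slice_and_scale_int(np.array([100, -37, 8, 127], dtype=np.int8), -6)
print(sliced.tolist(), exp)  # [6, -2, 1, 7] -2
```

Decoding the first element, `6 * 2**-2 = 1.5` approximates the original `100 * 2**-6 = 1.5625`, which illustrates why such a conversion can skip retraining when multi-format QAT has already made the model robust to the lower-precision format.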