🤖 AI Summary
Existing Transformer inference lacks adaptability across computational dimensions, making it hard to satisfy diverse hardware and latency constraints simultaneously. This paper proposes AdaPerceiver, the first unified architecture enabling joint adaptation across depth, width, and token count, achieved via a scalable network design and efficient co-training that let a single model dynamically select its computation path. Its key contribution is overcoming the conventional limitation of dynamic computation to a single dimension, thereby significantly expanding the accuracy-throughput trade-off space. Experiments demonstrate that AdaPerceiver achieves 85.4% top-1 accuracy on image classification with 36% higher throughput than FlexiViT-L; matches ViT-H/14 performance on dense prediction tasks while reducing encoder FLOPs by 26×; and maintains ImageNet-1K accuracy while cutting FLOPs by 24–33%.
📝 Abstract
Modern transformer architectures achieve remarkable performance across tasks and domains but remain rigid in how they allocate computation at inference time. Real-world deployment often requires models to adapt to diverse hardware and latency constraints, yet most approaches to dynamic computation focus on a single axis -- such as reducing the number of tokens. We present AdaPerceiver, the first transformer architecture with unified adaptivity across depth, width, and tokens within a single model. We couple this architecture with an efficient joint training regime that ensures the model maintains performance across its various configurations. We evaluate AdaPerceiver on image classification, semantic segmentation, and depth estimation tasks. On image classification, AdaPerceiver expands the accuracy-throughput Pareto front, achieving 85.4% accuracy while yielding 36% higher throughput than FlexiViT-L. On dense prediction, AdaPerceiver matches ViT-H/14 on semantic segmentation and depth estimation while using $\sim$26$\times$ fewer encoder FLOPs (floating-point operations). Finally, we show how AdaPerceiver equipped with a policy can maintain ImageNet-1K accuracy (within $\pm$0.1 percentage points) while reducing FLOPs by 24–33%.
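To make the core idea concrete, here is a minimal sketch of what "unified adaptivity across depth, width, and tokens within a single model" amounts to at inference time: a single model exposes a grid of (depth, width, token-count) configurations, and a policy picks one under a compute budget. All names (`Config`, `select_config`) and the FLOPs proxy are hypothetical illustrations, not the AdaPerceiver paper's actual API or cost model.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Config:
    """One computation path a jointly adaptive model could expose (hypothetical)."""
    depth: int    # number of transformer blocks executed
    width: int    # hidden dimension used
    tokens: int   # number of tokens processed

    def flops(self) -> float:
        # Rough proxy: per-block cost scales with tokens and width^2,
        # total cost scales linearly with depth.
        return self.depth * self.tokens * self.width ** 2

def select_config(configs, budget):
    """Budget-driven policy: pick the most expensive configuration that fits,
    falling back to the cheapest one if nothing fits."""
    feasible = [c for c in configs if c.flops() <= budget]
    if not feasible:
        return min(configs, key=Config.flops)
    return max(feasible, key=Config.flops)

# A toy grid of configurations a single adaptive model could support.
grid = [Config(d, w, t) for d in (6, 12, 24) for w in (384, 768) for t in (64, 196)]

print(select_config(grid, budget=2e9))
```

The point of the joint search space is that the policy can trade one axis against another (e.g. fewer tokens at full width versus fewer blocks at full token count), which is exactly the expanded accuracy-throughput Pareto front the abstract describes.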