ParaFormer: Shallow Parallel Transformers with Progressive Approximation

📅 2025-10-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the inefficiency and poor deployability of deep models—characterized by slow training, high inference latency, and unsuitability for resource-constrained devices—this paper proposes ParaFormer, a shallow parallel Transformer architecture. Methodologically, it reformulates the Transformer as a closed-form function approximator, theoretically demonstrating that inter-layer collaboration can be algorithmically realized without deep stacking. Leveraging the universal approximation theorem, ParaFormer employs a multi-branch parallel structure with a progressive approximation mechanism that enforces collaborative convergence across branches. Experiments demonstrate that ParaFormer outperforms baseline models (e.g., ViT) on image classification while enabling up to 15.07× model compression. In multi-GPU training, it achieves a 3.30× speedup over FairScale. These results highlight significant improvements in computational efficiency, scalability, and hardware adaptability—advancing practical deployment of vision Transformers.

📝 Abstract
The widespread "deeper is better" philosophy has driven the creation of architectures like ResNet and Transformer, which achieve high performance by stacking numerous layers. However, increasing model depth comes with challenges such as longer training times, higher inference latency, and impracticality on resource-constrained devices. To address these issues, we propose ParaFormer, a shallow Transformer architecture designed for true parallelism in both structure and computation. By formulating standard Transformers as function approximators in closed form, our theoretical analysis shows that their performance relies on inter-layer collaboration for progressive approximation, rather than depth itself. While deep Transformers enforce this collaboration through sequential designs, we demonstrate that such collaboration is not inherently tied to sequential structures. ParaFormer removes the sequential constraint by organizing layers into parallel branches, enforcing inter-layer collaboration algorithmically. Specifically, we implement progressive approximation, ensuring that each new branch further reduces the loss from preceding branches, enabling faster convergence. Extensive experiments validate ParaFormer's effectiveness, outperforming standard Transformers like ViT. Moreover, ParaFormer supports up to 15.07x model compression and facilitates model expansion for adaptive continuous learning. Experimental results on multi-GPU deployment demonstrate that ParaFormer is 3.30x faster than widely used parallelism solutions such as FairScale. These advancements stem from our closed-form formulation of Transformers based on the Universal Approximation Theorem, which not only explains the "depth belief" but also opens new avenues for designing efficient Transformer architectures. Source code: https://(open-upon-acceptance)
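The progressive approximation idea described above, where each new parallel branch reduces the residual loss left by the preceding branches, can be illustrated with a minimal sketch. This is not the paper's implementation: the linear least-squares "branches" and the per-branch feature maps below are hypothetical stand-ins for the actual Transformer branches, chosen only to make the residual-fitting mechanism concrete.

```python
import numpy as np

# Toy regression problem standing in for a learning task.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 16))               # toy inputs
Y = np.tanh(X @ rng.normal(size=(16, 4)))    # toy targets

# One feature map per branch (hypothetical; real branches are Transformers).
branch_features = [X, X**2, np.sin(X)]

residual = Y.copy()
losses = [float(np.mean(residual**2))]       # loss before any branch is added

weights = []
for F in branch_features:
    # Progressive approximation: fit this branch to whatever the
    # previous branches have not yet explained.
    W, *_ = np.linalg.lstsq(F, residual, rcond=None)
    weights.append(W)
    residual = residual - F @ W              # remaining error for the next branch
    losses.append(float(np.mean(residual**2)))

# At inference time, branch outputs are computed in parallel and summed.
prediction = sum(F @ W for F, W in zip(branch_features, weights))
```

Because each branch is fitted to the residual of its predecessors, the recorded losses are non-increasing, mirroring the claimed collaborative convergence without any sequential stacking of layers.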
Problem

Research questions and friction points this paper is trying to address.

Addressing inefficiency of deep Transformers through parallel architecture
Enabling faster convergence with progressive approximation method
Achieving high model compression and low-latency deployment under resource constraints
Innovation

Methods, ideas, or system contributions that make the work stand out.

Shallow parallel architecture replaces deep sequential Transformers
Progressive approximation algorithm enables inter-layer collaboration
Closed-form formulation supports model compression and expansion
Wei Wang
Dept. of Computing, Hong Kong Polytechnic University, Hong Kong
Xiao-Yong Wei
Sichuan University, China
health computing · image/video retrieval · machine learning · data mining · computational linguistics
Qing Li
Dept. of Computing, Hong Kong Polytechnic University, Hong Kong