FFN Fusion: Rethinking Sequential Computation in Large Language Models

📅 2025-03-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Serial computation in feed-forward network (FFN) layers imposes a critical efficiency bottleneck on large language model (LLM) inference. Method: This paper proposes FFN Fusion, an architectural optimization that exposes the latent parallelizability of consecutive FFN layers in Transformers and shows that even full attention–FFN blocks can sometimes be parallelized. It reorders the computational flow via FFN sequence identification and fusion, structure-aware parallel restructuring, and composition with quantization and pruning. Contribution/Results: Applied to Llama-3.1-405B-Instruct, FFN Fusion yields a 253B-parameter model with a 1.71× inference speedup and 35× lower per-token cost; consistent gains are observed across models from 49B to 253B parameters, with effectiveness increasing at larger scales. Crucially, this work establishes a paradigm for efficient LLM inference grounded in exploiting inter-layer structural redundancy, enabling substantial latency reduction without compromising model fidelity.
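The core transformation can be sketched in a few lines. This is a hypothetical minimal model (toy linear layers, not the paper's code): with residual connections, two consecutive FFN blocks compute y = x1 + f2(x1) where x1 = x + f1(x); FFN Fusion instead applies both layers to the same input, y ≈ x + f1(x) + f2(x). Keeping f2 linear makes the gap between the two forms exactly the cross-term f2(f1(x)), which is the quantity that must be small for the fusion to preserve model behavior.

```python
def matvec(W, v):
    """Multiply matrix W (list of rows) by vector v."""
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def add(a, b):
    return [p + q for p, q in zip(a, b)]

# Two toy "FFN" layers, kept linear so the discrepancy is exact.
W1 = [[0.5, -0.2], [0.1, 0.3]]
W2 = [[-0.4, 0.6], [0.2, -0.1]]
f1 = lambda v: matvec(W1, v)
f2 = lambda v: matvec(W2, v)

x = [1.0, 2.0]

x1 = add(x, f1(x))                    # first residual block
sequential = add(x1, f2(x1))          # second residual block, reads x1
parallel = add(add(x, f1(x)), f2(x))  # fused form: both layers read x

# In the linear case the discrepancy is exactly the cross-term f2(f1(x)).
gap = [s - p for s, p in zip(sequential, parallel)]
cross = f2(f1(x))
assert all(abs(g - c) < 1e-12 for g, c in zip(gap, cross))
```

With real nonlinear FFNs the gap is no longer this exact expression, but the same intuition applies: the fusion is accurate when later layers depend only weakly on the outputs of their immediate predecessors.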

📝 Abstract
We introduce FFN Fusion, an architectural optimization technique that reduces sequential computation in large language models by identifying and exploiting natural opportunities for parallelization. Our key insight is that sequences of Feed-Forward Network (FFN) layers, particularly those remaining after the removal of specific attention layers, can often be parallelized with minimal accuracy impact. We develop a principled methodology for identifying and fusing such sequences, transforming them into parallel operations that significantly reduce inference latency while preserving model behavior. Applying these techniques to Llama-3.1-405B-Instruct, we create Llama-Nemotron-Ultra-253B-Base (Ultra-253B-Base), an efficient and soon-to-be publicly available model that achieves a 1.71X speedup in inference latency and 35X lower per-token cost while maintaining strong performance across benchmarks. Through extensive experiments on models from 49B to 253B parameters, we demonstrate that FFN Fusion becomes increasingly effective at larger scales and can complement existing optimization techniques like quantization and pruning. Most intriguingly, we find that even full transformer blocks containing both attention and FFN layers can sometimes be parallelized, suggesting new directions for neural architecture design.
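Once consecutive FFN layers are applied to the same input, their sum can be realized as one wider FFN rather than several narrow ones. A minimal sketch, assuming simple two-matrix ReLU FFNs with made-up weights (not the paper's implementation): stack the up-projection rows and concatenate the down-projection columns, and the single fused FFN reproduces the parallel sum exactly.

```python
def matvec(W, v):
    """Multiply matrix W (list of rows) by vector v."""
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def relu(v):
    return [max(0.0, x) for x in v]

def ffn(x, W_up, W_down):
    """A simple FFN: down-projection of ReLU(up-projection)."""
    return matvec(W_down, relu(matvec(W_up, x)))

# Two toy FFNs: d_model = 2, d_hidden = 3 each (weights are illustrative).
W_up1 = [[1.0, -1.0], [0.5, 0.5], [-0.3, 0.8]]
W_dn1 = [[0.2, -0.1, 0.4], [0.7, 0.3, -0.2]]
W_up2 = [[-0.6, 0.9], [0.4, -0.4], [0.1, 0.2]]
W_dn2 = [[0.5, 0.0, -0.3], [-0.2, 0.6, 0.1]]

# Fused weights: hidden width becomes 3 + 3 = 6.
W_up_fused = W_up1 + W_up2                              # stack rows
W_dn_fused = [r1 + r2 for r1, r2 in zip(W_dn1, W_dn2)]  # concat columns

x = [1.5, -0.5]
summed = [a + b for a, b in zip(ffn(x, W_up1, W_dn1), ffn(x, W_up2, W_dn2))]
fused = ffn(x, W_up_fused, W_dn_fused)

# The single wider FFN matches the sum of the two FFNs exactly.
assert all(abs(a - b) < 1e-12 for a, b in zip(summed, fused))
```

This identity is why the fused form reduces latency: one pair of larger matrix multiplications replaces a chain of sequential ones, and larger matmuls utilize accelerators far better than a serial chain of small ones.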
Problem

Research questions and friction points this paper is trying to address.

Sequential FFN computation limits parallelism and adds latency in LLM inference
Identifying which consecutive FFN layers can be parallelized without hurting accuracy
Reducing inference latency and per-token cost while preserving benchmark performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Identifies sequences of consecutive FFN layers, especially those left after removing attention layers, that can run in parallel
Fuses such sequences into single wider parallel operations with minimal accuracy impact
Grows more effective at larger scales and composes with quantization and pruning