🤖 AI Summary
Existing work primarily targets static CNN/Transformer workloads and struggles to address the dynamic mapping challenges posed by mixed request types and variable sequence lengths in LLM inference. This paper proposes a fine-grained mapping space exploration framework for multi-chiplet accelerators. It introduces a computation-execution-graph-based mapping encoding scheme that decouples micro-batch scheduling from inter-layer dependencies, enabling precise execution control across heterogeneous chiplets. It further develops a multi-objective evaluation engine, paired with a genetic algorithm for efficient search, that jointly models tensor parallelism, pipeline parallelism, and expert parallelism. Experiments show that the approach reduces the energy-delay product (EDP) by 63.12% on average over state-of-the-art methods, significantly improving both resource utilization and inference throughput.
📝 Abstract
Large Language Models (LLMs) impose massive computational demands, driving the need for scalable multi-chiplet accelerators. However, existing mapping space exploration efforts for such accelerators primarily target traditional CNN/Transformer workloads and fail to adequately support the dynamic behavior of mixed request types and variable sequence lengths in real-world LLM inference serving. To bridge this gap, we first propose a computation-execution-graph-based mapping encoding scheme that decouples micro-batches from layers, enabling fine-grained execution control on heterogeneous chiplets and flexibly representing diverse parallelism strategies. Second, building on this scheme, we develop the Compass framework, which integrates an evaluation engine with a genetic-algorithm-based mapping generation engine to achieve efficient mapping search. Compared to state-of-the-art works, our solution achieves an average energy-delay product (EDP) reduction of 63.12%.
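To make the search loop concrete, here is a minimal, illustrative sketch of a genetic-algorithm mapping search that minimizes an EDP objective. The cost model, layer costs, chiplet speeds, and all function names below are assumptions for illustration only; they do not reflect Compass's actual encoding or evaluation engine, which operates on full computation execution graphs rather than a flat layer-to-chiplet assignment.

```python
import random

# Toy problem: assign NUM_LAYERS pipeline layers to NUM_CHIPLETS heterogeneous
# chiplets so that a simple energy-delay-product (EDP) proxy is minimized.
# All constants are made up for illustration.
NUM_LAYERS = 8
NUM_CHIPLETS = 4
LAYER_COST = [3, 5, 2, 7, 4, 6, 1, 8]      # assumed per-layer work (arbitrary units)
CHIPLET_SPEED = [1.0, 1.2, 0.8, 1.5]       # assumed relative chiplet throughput

def edp(mapping):
    """EDP proxy: delay is set by the most-loaded chiplet; energy is total time."""
    per_chiplet = [0.0] * NUM_CHIPLETS
    for layer, chip in enumerate(mapping):
        per_chiplet[chip] += LAYER_COST[layer] / CHIPLET_SPEED[chip]
    delay = max(per_chiplet)
    energy = sum(per_chiplet)
    return delay * energy

def mutate(mapping, rate=0.2):
    # Reassign each layer to a random chiplet with probability `rate`.
    return [random.randrange(NUM_CHIPLETS) if random.random() < rate else chip
            for chip in mapping]

def crossover(a, b):
    # Single-point crossover over the layer-to-chiplet assignment vector.
    point = random.randrange(1, NUM_LAYERS)
    return a[:point] + b[point:]

def ga_search(pop_size=30, generations=40, seed=0):
    random.seed(seed)
    pop = [[random.randrange(NUM_CHIPLETS) for _ in range(NUM_LAYERS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=edp)
        elite = pop[: pop_size // 4]        # keep the best quarter as parents
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return min(pop, key=edp)

best = ga_search()
print("best mapping:", best, "EDP:", edp(best))
```

In the real framework, the chromosome would encode a computation execution graph (micro-batch scheduling decoupled from layers), and the fitness call would invoke the multi-objective evaluation engine instead of this one-line proxy.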