🤖 AI Summary
Scaling large language models (LLMs) intensifies GPU memory pressure, with training footprints often exceeding HBM capacity; existing memory swapping optimizations assume static computation graphs and thus fail in Eager Mode, where operator sequences change dynamically. Method: We propose Chameleon, the first adaptive memory swapping framework designed specifically for Eager Mode that supports dynamic operator sequences. Its core innovations include a lightweight online profiler, a dynamic swap-policy generation algorithm, and an optimized execution module, which jointly enable real-time memory awareness and scheduling without static graph analysis. Contribution/Results: Experiments show Chameleon reduces profiling overhead by 84.25%, enables training models up to 4x larger than hardware memory capacity, and achieves up to a 38.94% throughput improvement over recomputation- and parallelism-based baselines. This significantly enhances the trainability of large models in resource-constrained settings.
📝 Abstract
The increasing size of large language models (LLMs) has led to a surge in memory requirements during training, often exceeding the capacity of high-bandwidth memory (HBM). Swap-based memory optimization incurs neither accuracy loss nor additional end-to-end overhead when effectively overlapped with computation, making it an attractive solution. However, existing swap methods assume consistent operator sequences, which is impractical in Eager Mode, where operator sequences can change during training.
We propose Chameleon, which redesigns the end-to-end process of swap-based memory optimization and is the first work to consider varying operator sequences in Eager Mode. Chameleon (i) introduces a lightweight online profiler that enables continuous profiling for monitoring operator sequences, (ii) generates effective swap policies with limited operator information, and (iii) optimizes the policy execution module for accurate policy application and better performance. Experimental results demonstrate that Chameleon reduces profiling overhead by 84.25%, enables training models up to 4x larger than hardware memory while adapting to changes in operator sequences, and improves performance by up to 38.94% compared to recomputation or high-degree parallelism.
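The abstract does not include implementation details, but the general idea behind swap-based memory optimization can be illustrated with a minimal sketch. The code below is a hypothetical simulation (plain Python, simulated "device" and "host" pools; the `SwapManager` class and its policy are illustrative assumptions, not Chameleon's actual design): tensors that exceed device capacity are swapped out to host memory, and swapped back in ("prefetched") before the operator that needs them runs.

```python
# Illustrative sketch of swap-based memory management, NOT Chameleon's
# implementation: a limited "device" pool evicts resident tensors to a
# "host" pool when full, and prefetches them back on demand.

class SwapManager:
    def __init__(self, device_capacity):
        self.device_capacity = device_capacity  # bytes available on device
        self.device = {}  # name -> size, tensors resident on device
        self.host = {}    # name -> size, tensors swapped out to host

    def device_usage(self):
        return sum(self.device.values())

    def allocate(self, name, size):
        """Place a new tensor on device, swapping others out if needed."""
        self._make_room(size)
        self.device[name] = size

    def _make_room(self, size):
        # Evict (swap out) resident tensors until the new tensor fits.
        # A real system would pick victims based on when each tensor is
        # next needed; FIFO is used here purely for simplicity.
        while self.device_usage() + size > self.device_capacity:
            victim = next(iter(self.device))
            self.host[victim] = self.device.pop(victim)

    def prefetch(self, name):
        """Swap a tensor back in before the operator that uses it."""
        if name in self.host:
            size = self.host.pop(name)
            self._make_room(size)
            self.device[name] = size

mgr = SwapManager(device_capacity=100)
mgr.allocate("act0", 60)
mgr.allocate("act1", 60)   # exceeds capacity: act0 is swapped to host
assert "act0" in mgr.host
mgr.prefetch("act0")       # act1 evicted, act0 resident again
assert "act0" in mgr.device
```

In a real training system the swaps would be asynchronous transfers overlapped with kernel execution, and the eviction order would come from a swap policy; the point of the sketch is only the allocate/evict/prefetch life cycle that such a policy drives.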