On-Device Fine-Tuning via Backprop-Free Zeroth-Order Optimization

📅 2025-11-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Edge AI devices face severe memory constraints that hinder on-device fine-tuning of large models: conventional backpropagation (BP) must store layer activations and optimizer states, drastically limiting the deployable model size. This paper turns to memory-efficient zeroth-order optimization (MeZO), a BP-free fine-tuning approach that estimates gradients from forward passes under random perturbations, thereby eliminating all intermediate-activation and optimizer-state storage. Through theoretical analysis and system-level validation, the authors show that MeZO supports larger model capacity under identical memory budgets and, given sufficient training time, achieves higher fine-tuning accuracy. Experiments indicate that MeZO's memory efficiency substantially surpasses BP's: on representative edge devices it increases the number of tunable parameters by 2–3×. The work points toward a practical path for adapting large models in resource-constrained environments.

📝 Abstract
On-device fine-tuning is a critical capability for edge AI systems, which must support adaptation to different agentic tasks under stringent memory constraints. Conventional backpropagation (BP)-based training requires storing layer activations and optimizer states, a demand that can be only partially alleviated through checkpointing. In edge deployments in which the model weights must reside entirely in device memory, this overhead severely limits the maximum model size that can be deployed. Memory-efficient zeroth-order optimization (MeZO) alleviates this bottleneck by estimating gradients using forward evaluations alone, eliminating the need for storing intermediate activations or optimizer states. This enables significantly larger models to fit within on-chip memory, albeit at the cost of potentially longer fine-tuning wall-clock time. This paper first provides a theoretical estimate of the relative model sizes that can be accommodated under BP and MeZO training. We then numerically validate the analysis, demonstrating that MeZO exhibits accuracy advantages under on-device memory constraints, provided sufficient wall-clock time is available for fine-tuning.
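The gradient estimator described in the abstract can be sketched as follows. The two-point finite-difference form and the seed-replay trick that avoids storing the perturbation vector are the standard MeZO ingredients; the function names, hyperparameters, and toy loss below are illustrative, not taken from the paper:

```python
import numpy as np

def mezo_step(theta, loss_fn, eps=1e-3, lr=1e-2, seed=0):
    """One MeZO update using two forward passes and no backprop.

    The perturbation z is regenerated from `seed` rather than kept in
    memory, so the only persistent state is theta itself -- no
    activations, gradients, or optimizer moments are stored.
    """
    z = np.random.default_rng(seed).standard_normal(theta.shape)
    theta += eps * z                       # evaluate at theta + eps*z
    loss_plus = loss_fn(theta)
    theta -= 2.0 * eps * z                 # evaluate at theta - eps*z
    loss_minus = loss_fn(theta)
    theta += eps * z                       # restore theta in place
    grad_scale = (loss_plus - loss_minus) / (2.0 * eps)
    # Replay the RNG to rebuild z instead of having stored it.
    z = np.random.default_rng(seed).standard_normal(theta.shape)
    theta -= lr * grad_scale * z
    return theta

# Toy usage: drive a quadratic loss toward its minimum at the origin.
theta = np.array([3.0, -2.0])
for step in range(500):
    theta = mezo_step(theta, lambda t: float(np.sum(t ** 2)), seed=step)
```

Because each update touches the weights in place and reconstructs the perturbation from a scalar seed, peak memory stays at roughly the size of the weights themselves, which is the property the paper's capacity analysis builds on.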
Problem

Research questions and friction points this paper is trying to address.

Enables edge AI adaptation under memory constraints
Eliminates activation storage via zeroth-order optimization
Allows larger models on-device with accuracy tradeoffs
Innovation

Methods, ideas, or system contributions that make the work stand out.

On-device fine-tuning via zeroth-order optimization
Estimates gradients using forward evaluations only
Eliminates storage of activations and optimizer states
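The storage accounting behind these bullets can be illustrated with a back-of-envelope calculation. The 8 GiB budget and the fp32/Adam assumptions below are hypothetical, not figures from the paper; the achievable ratio in practice depends on precision, activation footprint, and which parameters are tuned:

```python
BYTES_FP32 = 4

def bp_param_budget(mem_bytes, activation_bytes=0):
    """Max trainable params under BP + Adam in fp32: weights, gradients,
    and two Adam moment buffers each cost one copy; activations are extra."""
    return (mem_bytes - activation_bytes) // (4 * BYTES_FP32)

def mezo_param_budget(mem_bytes):
    """Max trainable params under MeZO: only the weights persist."""
    return mem_bytes // BYTES_FP32

mem = 8 * 2**30  # hypothetical 8 GiB edge device
ratio = mezo_param_budget(mem) / bp_param_budget(mem)
print(ratio)  # → 4.0, even before counting BP's activation storage
```

Counting BP's activation storage (zero in this sketch) only widens the gap, which is consistent with the paper's claim that MeZO fits substantially larger models under the same on-device memory budget.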