🤖 AI Summary
Edge devices face severe memory constraints and typically support only forward inference, with no backward-propagation capability, which makes conventional gradient-based fine-tuning impractical.
Method: This paper proposes a gradient-free, efficient prompt tuning method tailored for such resource-constrained settings. Its core innovation is a two-stage hybrid optimization framework: (i) Stage I employs sharpness-aware evolutionary strategies to perform global, smooth exploration of the loss landscape; (ii) Stage II applies sparse zeroth-order optimization for local, high-precision refinement. The entire method relies exclusively on forward passes, integrating zeroth-order optimization, evolutionary algorithms, and sharpness-aware training.
Contribution/Results: The approach maintains competitive performance while remaining computationally efficient and safe to deploy on inference-only hardware. Extensive experiments on CLIP models demonstrate up to a 7% average accuracy gain over state-of-the-art forward-only methods, together with markedly faster convergence.
📝 Abstract
Fine-tuning vision-language models (VLMs) has achieved remarkable performance across various downstream tasks; yet it requires access to model gradients through backpropagation (BP), making it unsuitable for memory-constrained, inference-only edge devices. To address this limitation, previous work has explored various BP-free fine-tuning methods. However, these approaches typically rely on high-variance evolutionary strategies (ES) or zeroth-order (ZO) optimization and often fail to achieve satisfactory performance. In this paper, we propose a hybrid Sharpness-aware Zeroth-order optimization (SharpZO) approach, specifically designed to enhance the performance of ZO VLM fine-tuning via sharpness-aware warm-up training. SharpZO features a two-stage optimization process: a sharpness-aware ES stage that globally explores and smooths the loss landscape to construct a strong initialization, followed by a fine-grained local search via sparse ZO optimization. The entire optimization relies solely on forward passes. Detailed theoretical analysis and extensive experiments on CLIP models demonstrate that SharpZO significantly improves accuracy and convergence speed, achieving up to a 7% average gain over state-of-the-art forward-only methods.
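The two-stage, forward-only recipe can be illustrated on a toy objective. The sketch below is not the paper's implementation: the loss function, hyperparameters, and helper names (`es_warmup`, `sparse_zo_refine`, `sharpness_aware_fitness`) are all illustrative stand-ins. Stage I runs a simple evolutionary strategy on a sharpness-aware fitness (worst loss over small random perturbations, in the spirit of SAM); Stage II refines the result with sparse SPSA-style zeroth-order updates, using only two forward passes per step.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(p):
    # Toy non-convex loss standing in for the prompt-tuning objective;
    # only forward evaluations of this function are ever used.
    return np.sum(p ** 2) + 0.5 * np.sum(np.sin(3 * p))

def sharpness_aware_fitness(p, rho=0.05, n_probe=4):
    # Worst loss over random perturbations of radius rho approximates
    # the max-loss in a neighborhood, penalizing sharp minima.
    worst = loss(p)
    for _ in range(n_probe):
        u = rng.standard_normal(p.shape)
        u *= rho / np.linalg.norm(u)
        worst = max(worst, loss(p + u))
    return worst

def es_warmup(dim=8, pop=16, iters=60, sigma=0.3):
    # Stage I: a simple (mu, lambda)-ES minimizing the sharpness-aware
    # fitness -- global, smooth exploration for a strong initialization.
    mean = rng.standard_normal(dim)
    for _ in range(iters):
        cands = mean + sigma * rng.standard_normal((pop, dim))
        fits = np.array([sharpness_aware_fitness(c) for c in cands])
        elite = cands[np.argsort(fits)[: pop // 4]]
        mean = elite.mean(axis=0)
        sigma *= 0.97  # anneal exploration over time
    return mean

def sparse_zo_refine(p, iters=200, mu=1e-3, lr=0.05, k=4):
    # Stage II: sparse zeroth-order refinement; each step perturbs only
    # k coordinates and estimates a directional derivative from two
    # forward passes (SPSA-style finite difference).
    p = p.copy()
    for _ in range(iters):
        idx = rng.choice(p.size, size=k, replace=False)
        u = np.zeros_like(p)
        u[idx] = rng.choice([-1.0, 1.0], size=k)
        g = (loss(p + mu * u) - loss(p - mu * u)) / (2 * mu)
        p -= lr * g * u
    return p

p0 = es_warmup()        # Stage I: sharpness-aware ES warm-up
p_star = sparse_zo_refine(p0)  # Stage II: sparse ZO local search
```

In a real prompt-tuning setting, `p` would be the (possibly sparse) prompt parameters and `loss` a forward pass through the frozen CLIP model; the structure of the two stages is the part that carries over.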