SharpZO: Hybrid Sharpness-Aware Vision Language Model Prompt Tuning via Forward-Only Passes

📅 2025-06-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Edge devices face severe memory constraints and typically support only forward inference, lacking backward-propagation capability, which makes conventional fine-tuning infeasible. Method: This paper proposes a gradient-free, efficient prompt tuning method tailored to such resource-constrained settings. Its core innovation is a two-stage hybrid optimization framework: (i) Stage I employs a sharpness-aware evolutionary strategy to perform global, smoothing exploration of the loss landscape; (ii) Stage II applies sparse zeroth-order optimization for fine-grained local refinement. The entire method relies exclusively on forward passes, integrating zeroth-order optimization, evolutionary algorithms, and sharpness-aware training. Contribution/Results: The approach maintains computational efficiency and deployability on inference-only hardware while remaining competitive in accuracy. Extensive experiments on CLIP models show up to a 7% average accuracy gain over state-of-the-art forward-only methods, with markedly faster convergence.

📝 Abstract
Fine-tuning vision language models (VLMs) has achieved remarkable performance across various downstream tasks; yet, it requires access to model gradients through backpropagation (BP), making it unsuitable for memory-constrained, inference-only edge devices. To address this limitation, previous work has explored various BP-free fine-tuning methods. However, these approaches rely on high-variance evolutionary strategies (ES) or zeroth-order (ZO) optimization and often fail to achieve satisfactory performance. In this paper, we propose a hybrid Sharpness-aware Zeroth-order optimization (SharpZO) approach, specifically designed to enhance the performance of ZO VLM fine-tuning via a sharpness-aware warm-up training. SharpZO features a two-stage optimization process: a sharpness-aware ES stage that globally explores and smooths the loss landscape to construct a strong initialization, followed by a fine-grained local search via sparse ZO optimization. The entire optimization relies solely on forward passes. Detailed theoretical analysis and extensive experiments on CLIP models demonstrate that SharpZO significantly improves accuracy and convergence speed, achieving up to 7% average gain over state-of-the-art forward-only methods.
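The key primitive behind forward-only fine-tuning is the zeroth-order gradient estimate: the gradient is approximated from loss values alone via random directional finite differences, so no backpropagation is needed. A minimal illustrative sketch (not the paper's implementation; the quadratic toy loss, step sizes, and averaging over `q` directions are assumptions for demonstration):

```python
import numpy as np

def zo_gradient(loss_fn, theta, rng, mu=1e-3, q=5):
    """Randomized two-point zeroth-order (SPSA-style) gradient estimate,
    averaged over q random probe directions; uses only forward passes."""
    g = np.zeros_like(theta)
    for _ in range(q):
        u = rng.standard_normal(theta.shape)        # random probe direction
        delta = loss_fn(theta + mu * u) - loss_fn(theta - mu * u)
        g += (delta / (2.0 * mu)) * u               # directional derivative along u
    return g / q

# Toy usage: minimize a quadratic without ever computing a true gradient.
rng = np.random.default_rng(0)
loss = lambda t: float(np.sum(t ** 2))
theta = np.ones(4)
for _ in range(200):
    theta -= 0.1 * zo_gradient(loss, theta, rng)
```

Such estimators are unbiased in expectation but high-variance, which is exactly the weakness the paper's sharpness-aware warm-up stage is designed to mitigate.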
Problem

Research questions and friction points this paper is trying to address.

Enhance ZO VLM fine-tuning without backpropagation
Improve accuracy and convergence speed in VLMs
Enable memory-efficient tuning for edge devices
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid sharpness-aware zeroth-order optimization
Two-stage forward-only optimization process
Sharpness-aware warm-up training initialization
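The two-stage structure described above can be sketched in a heavily simplified form. Everything below is a hypothetical illustration under assumed design choices (a simple comma-selection ES, a random Gaussian sharpness probe, a random coordinate mask for sparsity); it is not the authors' algorithm:

```python
import numpy as np

def sharpzo_sketch(loss_fn, theta0, rng, es_steps=50, zo_steps=100,
                   pop=8, sigma=0.1, rho=0.05, mu=1e-3, lr=0.05, sparsity=0.5):
    """Illustrative two-stage forward-only loop in the spirit of SharpZO.
    Stage I: evolutionary search scored by a sharpness-aware fitness
             (loss at a randomly perturbed point, disfavoring sharp minima).
    Stage II: sparse zeroth-order refinement perturbing only a random
             subset of coordinates per step."""
    theta = theta0.copy()
    # ---- Stage I: sharpness-aware ES warm-up (global, smoothing search)
    for _ in range(es_steps):
        scored = []
        for _ in range(pop):
            cand = theta + sigma * rng.standard_normal(theta.shape)
            eps = rho * rng.standard_normal(theta.shape)   # sharpness probe
            scored.append((loss_fn(cand + eps), cand))     # perturbed-loss fitness
        theta = min(scored, key=lambda s: s[0])[1]
    # ---- Stage II: sparse ZO fine-tuning (local, high-precision)
    for _ in range(zo_steps):
        mask = rng.random(theta.shape) < sparsity          # sparse coordinate subset
        u = rng.standard_normal(theta.shape) * mask
        delta = loss_fn(theta + mu * u) - loss_fn(theta - mu * u)
        theta -= lr * (delta / (2.0 * mu)) * u
    return theta

# Toy usage on a quadratic stand-in for the prompt-tuning loss.
rng = np.random.default_rng(1)
quad = lambda t: float(np.sum(t ** 2))
theta = sharpzo_sketch(quad, 2.0 * np.ones(4), rng)
```

The division of labor mirrors the paper's rationale: the ES stage supplies a smooth, well-positioned initialization so the high-variance ZO stage only has to do cheap local refinement.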
Yifan Yang
University of California, Santa Barbara
Zhen Zhang
University of California, Santa Barbara
Rupak Vignesh Swaminathan
Amazon Alexa Speech
Large Language Models · Neural Efficiency · Speech Recognition
Jing Liu
Amazon AGI
Nathan Susanj
Applied Science Manager, Amazon
machine learning · deep learning · natural language processing · automatic speech recognition
Zheng Zhang
University of California, Santa Barbara