GLOV: Guided Large Language Models as Implicit Optimizers for Vision Language Models

📅 2024-10-08
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
To address the weak prompt-generalization capability of vision-language models (VLMs) on zero-shot cross-modal tasks, this paper proposes an LLM-based implicit optimization paradigm: large language models (LLMs) act as gradient-free optimizers that automatically generate and iteratively refine textual prompts via meta-prompt engineering, embedding-space offset guidance, and in-context learning. It further introduces a purity-based prompt ranking and an evaluation feedback mechanism to enable directed evolution of cross-modal prompts. Evaluated on 16 zero-shot visual classification benchmarks, the method achieves substantial improvements: +3.8% on average for CLIP-family models and +21.6% for LLaVA-family models, with a maximum gain of 57.5%. The core contribution is the first formulation of LLMs as implicit optimizers, enabling automatic prompt evolution without gradients or parameter updates.

📝 Abstract
In this work, we propose a novel method (GLOV) enabling Large Language Models (LLMs) to act as implicit Optimizers for Vision-Language Models (VLMs) to enhance downstream vision tasks. Our GLOV meta-prompts an LLM with the downstream task description, querying it for suitable VLM prompts (e.g., for zero-shot classification with CLIP). These prompts are ranked according to a purity measure obtained through a fitness function. In each optimization step, the ranked prompts are fed as in-context examples (with their accuracies) to equip the LLM with knowledge of the type of text prompts preferred by the downstream VLM. Furthermore, we explicitly steer the LLM generation process in each optimization step by adding an offset vector -- the difference of the embeddings of the positive and negative solutions found by the LLM in previous optimization steps -- to an intermediate layer of the network for the next generation step. This offset vector steers the LLM generation toward the type of language preferred by the downstream VLM, resulting in enhanced performance on the downstream vision tasks. We comprehensively evaluate our GLOV on 16 diverse datasets using two families of VLMs, i.e., dual-encoder (e.g., CLIP) and encoder-decoder (e.g., LLaVA) models -- showing that the discovered solutions can enhance recognition performance by up to 15.0% and 57.5% (3.8% and 21.6% on average) for these models.
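The optimization loop described above can be sketched in a few lines. The following is a minimal, hypothetical illustration only: `vlm_accuracy`, `embed`, and `llm_propose` are toy stand-ins (not the paper's actual models or APIs) for, respectively, the fitness function evaluated on the downstream VLM, the sentence-embedding step, and the meta-prompted LLM whose generation is steered by the positive-minus-negative offset vector.

```python
import random

def vlm_accuracy(prompt):
    # Toy fitness function: stands in for measuring zero-shot accuracy
    # of the downstream VLM (e.g., CLIP) with this text prompt.
    return 0.5 + 0.1 * prompt.count("photo") + random.random() * 0.01

def embed(prompt):
    # Toy embedding: stands in for the LLM's intermediate-layer embedding.
    return [len(prompt), prompt.count(" "), prompt.count("photo")]

def llm_propose(history, offset):
    # Stands in for meta-prompting the LLM with the ranked prompts and
    # their accuracies as in-context examples; in GLOV, `offset` would be
    # added to an intermediate LLM layer to steer generation.
    best_prompt = history[-1][0]
    return best_prompt + " photo"

def glov(init_prompts, steps=3):
    # Rank initial prompts by fitness (ascending: worst first, best last).
    history = sorted(((p, vlm_accuracy(p)) for p in init_prompts),
                     key=lambda x: x[1])
    for _ in range(steps):
        pos, neg = history[-1][0], history[0][0]
        # Offset = embedding of best solution minus embedding of worst.
        offset = [a - b for a, b in zip(embed(pos), embed(neg))]
        candidate = llm_propose(history, offset)
        history.append((candidate, vlm_accuracy(candidate)))
        history.sort(key=lambda x: x[1])
    return history[-1][0]  # best prompt found
```

For example, `glov(["a picture of a dog", "an image of a dog"])` iteratively produces prompts that score higher under the toy fitness function, mirroring how GLOV's ranked feedback and offset steering drive prompts toward language the downstream VLM prefers.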
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Visual Tasks
Unseen Categories
Innovation

Methods, ideas, or system contributions that make the work stand out.

GLOV
Prompt Engineering
Visual-Language Performance Enhancement