Probing the Limits of Stylistic Alignment in Vision-Language Models

📅 2025-09-29
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates the capability boundaries of small-scale vision-language models (VLMs) in generating stylistically specific image captions—e.g., humorous or romantic—in zero-shot and low-resource settings. To address insufficient style alignment and poor data efficiency, we propose a human-preference-driven few-shot reinforcement learning framework that integrates zero-shot transfer, style-contrastive learning, and model parameter-efficient design. Experiments demonstrate that style alignment “saturates” with ≤100 style-preference demonstrations, yielding substantial gains in subjective user satisfaction (+32.7%) and stylistic consistency (+41.5%). Furthermore, we quantitatively establish performance thresholds for small VLMs across distinct stylistic tasks—revealing both their practical potential and inherent limitations in low-resource regimes. Our work delivers a reproducible methodology and empirical benchmark for efficient, controllable multi-style vision-language generation.

Technology Category

Application Category

📝 Abstract
Vision-language models are increasingly used to generate image captions in specific styles, such as humor or romantic. However, these transformer-based models often struggle with this subjective task in a zero-shot setting. While preference data can be used to align them toward a desired style, such data is expensive to acquire, limiting the ability to explore the models' full capabilities. This work addresses this by studying the data efficiency of aligning small vision-language models to humor and romantic styles. This approach helps to define the performance limits of these models and determine how little preference data is needed to achieve stylistic saturation, benchmarking their capabilities and limitations.
Problem

Research questions and friction points this paper is trying to address.

Aligning vision-language models to stylistic preferences efficiently
Determining minimal preference data needed for stylistic saturation
Benchmarking performance limits of small vision-language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Aligns small vision-language models to styles
Studies data efficiency for humor and romantic styles
Determines minimal preference data for stylistic saturation
🔎 Similar Papers
No similar papers found.