🤖 AI Summary
This study investigates the capability limits of small vision-language models (VLMs) for generating image captions in specific styles, such as humorous or romantic, under zero-shot and low-resource conditions. To address weak style alignment and poor data efficiency, we propose a human-preference-driven, few-shot reinforcement learning framework that combines zero-shot transfer, style-contrastive learning, and parameter-efficient model design. Experiments show that style alignment "saturates" with ≤100 style-preference demonstrations, yielding substantial gains in subjective user satisfaction (+32.7%) and stylistic consistency (+41.5%). Furthermore, we quantitatively establish performance thresholds for small VLMs across distinct stylistic tasks, revealing both their practical potential and their inherent limitations in low-resource regimes. Our work delivers a reproducible methodology and empirical benchmark for efficient, controllable multi-style vision-language generation.
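The summary does not spell out the exact training objective, so the snippet below is only a hedged illustration of preference-driven alignment from a small demonstration set: a generic DPO-style preference loss in PyTorch. The function name, the β value, and the toy inputs are assumptions for illustration, not details taken from the paper.

```python
# Hedged sketch: a DPO-style preference loss over (preferred, rejected) styled-caption pairs.
# Not the paper's method; it only illustrates how <=100 human preference pairs could drive alignment.
import torch
import torch.nn.functional as F


def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Preference loss that pushes the policy toward captions humans preferred.

    Each argument is a 1-D tensor of per-caption log-probabilities (summed over
    tokens) under the trainable policy or the frozen reference model.
    """
    chosen_margin = policy_chosen_logp - ref_chosen_logp        # implicit reward of the preferred caption
    rejected_margin = policy_rejected_logp - ref_rejected_logp  # implicit reward of the rejected caption
    return -F.logsigmoid(beta * (chosen_margin - rejected_margin)).mean()


# Toy usage: random log-probabilities stand in for real VLM outputs on a
# mini-batch drawn from a small style-preference set.
torch.manual_seed(0)
n = 8
policy_chosen = torch.randn(n, requires_grad=True)
policy_rejected = torch.randn(n, requires_grad=True)
ref_chosen, ref_rejected = torch.randn(n), torch.randn(n)

loss = dpo_loss(policy_chosen, policy_rejected, ref_chosen, ref_rejected)
loss.backward()  # in a real setup, gradients would flow into the policy (e.g., LoRA adapters)
print(float(loss))
```

In practice the log-probabilities would come from the small VLM being aligned (the policy) and a frozen copy of it (the reference), scored on human-preferred versus rejected captions for the target style.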
📝 Abstract
Vision-language models are increasingly used to generate image captions in specific styles, such as humor or romance. However, these transformer-based models often struggle with this subjective task in a zero-shot setting. While preference data can be used to align them toward a desired style, such data is expensive to acquire, limiting the ability to explore the models' full capabilities. This work addresses that gap by studying the data efficiency of aligning small vision-language models to humorous and romantic styles. This approach helps define the performance limits of these models and determine how little preference data is needed to reach stylistic saturation, benchmarking their capabilities and limitations.
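One simple way to operationalize "how little preference data is needed to reach stylistic saturation", offered here purely as a hypothetical sketch rather than the paper's protocol, is to sweep the preference-set size and report the smallest budget after which a style metric stops improving. The scores, budgets, and threshold below are placeholders, not results from the work.

```python
# Hedged sketch of a data-efficiency sweep; all numbers are placeholders.

def saturation_point(scores_by_budget, min_gain=0.01):
    """Smallest preference budget after which the style score improves by less than min_gain."""
    budgets = sorted(scores_by_budget)
    for prev, curr in zip(budgets, budgets[1:]):
        if scores_by_budget[curr] - scores_by_budget[prev] < min_gain:
            return prev
    return budgets[-1]


# Placeholder style-consistency scores per preference-set size (illustration only).
scores = {10: 0.41, 25: 0.55, 50: 0.63, 75: 0.66, 100: 0.665}
print(saturation_point(scores))  # -> 75 under these made-up scores
```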