🤖 AI Summary
Existing cross-modal captioning systems (e.g., audio/video captioning) exhibit poor generalization and struggle with zero-shot adaptation to novel modalities.
Method: We propose a fine-tuning-free, inference-time classifier-guided framework: the pre-trained captioning model is frozen; GPT-4 automatically synthesizes multimodal classification data to train a lightweight, plug-and-play text classifier; during inference, prompt-driven semantic guidance dynamically adapts the system to modality-specific semantic requirements.
Contribution/Results: Our approach achieves modality adaptation purely at inference time, requiring no further training of the underlying captioning model. On zero-shot audio captioning, it attains state-of-the-art performance, significantly improving descriptive accuracy and the fidelity of sound-source associations. This work is the first to empirically validate that coupling a frozen language model with a classifier trained on synthetic data enhances generalization in cross-modal captioning.
📝 Abstract
Most current captioning systems use language models trained on data from specific settings, such as image captions collected via Amazon Mechanical Turk, limiting their ability to generalize to other modality distributions and contexts. This limitation hinders performance in tasks like audio or video captioning, where different semantic cues are needed. Addressing this challenge is crucial for creating more adaptable and versatile captioning frameworks applicable across diverse real-world contexts. In this work, we introduce a method to adapt captioning networks to the semantics of alternative settings, such as capturing audibility in audio captioning, where it is crucial to describe sounds and their sources. Our framework consists of two main components: (i) a frozen captioning system incorporating a language model (LM), and (ii) a text classifier that guides the captioning system. The classifier is trained on a dataset automatically generated by GPT-4, using tailored prompts specifically designed to enhance key aspects of the generated captions. Importantly, the framework operates solely during inference, eliminating the need for further training of the underlying captioning model. We evaluate the framework on various models and modalities, with a focus on audio captioning, and report promising results. Notably, when combined with an existing zero-shot audio captioning system, our framework improves its output quality and sets a new state of the art in zero-shot audio captioning.
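To make the guidance mechanism concrete, here is a minimal sketch of classifier-guided inference: the frozen LM proposes candidate captions with log-likelihoods, and a lightweight text classifier rescores them toward the target modality's semantics (e.g., audibility). The function names, the `alpha` weighting, and the keyword-based `toy_audibility_prob` stand-in are all hypothetical simplifications, not the paper's actual classifier or scoring rule.

```python
import math

def rerank_captions(candidates, classifier_prob, alpha=1.0):
    """Combine the frozen LM's log-likelihood with classifier guidance.

    candidates: list of (caption, lm_logprob) pairs from the frozen model.
    classifier_prob: maps a caption to the probability that it fits the
    target modality's semantics (e.g., describes audible events).
    Returns the caption with the highest combined score.
    """
    def score(item):
        caption, lm_logprob = item
        # Log-linear combination; clamp to avoid log(0).
        return lm_logprob + alpha * math.log(max(classifier_prob(caption), 1e-9))
    return max(candidates, key=score)[0]

# Hypothetical stand-in for the GPT-4-data-trained text classifier:
# favors captions that mention sounds or sound sources.
def toy_audibility_prob(caption):
    cues = ("sound", "barking", "engine", "hum", "siren")
    return 0.9 if any(c in caption.lower() for c in cues) else 0.1

candidates = [
    ("A dog in a park", -4.0),              # visually grounded caption
    ("A dog barking loudly nearby", -4.5),  # audible description
]
best = rerank_captions(candidates, toy_audibility_prob)
# The classifier term outweighs the small LM-score gap, so the
# audible caption wins despite its lower LM log-likelihood.
```

In practice the classifier could also steer generation token-by-token rather than rerank full candidates; this sketch only illustrates how a frozen LM score and a plug-and-play classifier combine at inference time.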