VT-FSL: Bridging Vision and Text with LLMs for Few-Shot Learning

📅 2025-09-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Few-shot learning (FSL) suffers from semantic hallucination: because support samples are sparse, generated class descriptions can contradict the visual evidence, which impairs generalization. To address this, the authors propose a cross-modal iterative prompting framework that conditions large language models (LLMs) on support images to generate precise textual descriptions, which in turn guide the zero-shot synthesis of semantically consistent images. A kernelized parallelotope volume minimization mechanism then geometrically aligns the textual, support, and synthetic visual representations, enabling structured, fine-grained semantic-visual alignment across all three. The method achieves state-of-the-art performance on ten standard, cross-domain, and fine-grained FSL benchmarks, significantly mitigating hallucination, enhancing multimodal consistency, and improving few-shot generalization.

📝 Abstract
Few-shot learning (FSL) aims to recognize novel concepts from only a few labeled support samples. Recent studies enhance support features by incorporating additional semantic information or designing complex semantic fusion modules. However, they still suffer from hallucinating semantics that contradict the visual evidence due to the lack of grounding in actual instances, resulting in noisy guidance and costly corrections. To address these issues, we propose a novel framework, bridging Vision and Text with LLMs for Few-Shot Learning (VT-FSL), which constructs precise cross-modal prompts conditioned on Large Language Models (LLMs) and support images, seamlessly integrating them through a geometry-aware alignment. It mainly consists of Cross-modal Iterative Prompting (CIP) and Cross-modal Geometric Alignment (CGA). Specifically, the CIP conditions an LLM on both class names and support images to generate precise class descriptions iteratively in a single structured reasoning pass. These descriptions not only enrich the semantic understanding of novel classes but also enable the zero-shot synthesis of semantically consistent images. The descriptions and synthetic images act respectively as complementary textual and visual prompts, providing high-level class semantics and low-level intra-class diversity to compensate for limited support data. Furthermore, the CGA jointly aligns the fused textual, support, and synthetic visual representations by minimizing the kernelized volume of the 3-dimensional parallelotope they span. It captures global and nonlinear relationships among all representations, enabling structured and consistent multimodal integration. The proposed VT-FSL method establishes new state-of-the-art performance across ten diverse benchmarks, including standard, cross-domain, and fine-grained few-shot learning scenarios. Code is available at https://github.com/peacelwh/VT-FSL.
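The abstract's Cross-modal Geometric Alignment (CGA) minimizes the kernelized volume of the 3-dimensional parallelotope spanned by the textual, support, and synthetic visual representations. A minimal sketch of that quantity, assuming an RBF kernel and a single pooled vector per modality (the paper's actual kernel choice and fusion are not specified here): the squared volume in the kernel's feature space equals the determinant of the 3×3 Gram matrix.

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    """RBF similarity between two representation vectors (gamma is an assumption)."""
    return np.exp(-gamma * np.sum((np.asarray(a) - np.asarray(b)) ** 2))

def kernelized_volume_sq(reps, gamma=1.0):
    """Squared volume of the parallelotope spanned by the representations
    in feature space: the determinant of their kernel Gram matrix."""
    K = np.array([[rbf_kernel(x, y, gamma) for y in reps] for x in reps])
    return np.linalg.det(K)

# Perfectly aligned modalities span a degenerate parallelotope (volume 0);
# inconsistent ones span a larger volume, so minimizing det(K) pulls the
# text, support, and synthetic representations toward agreement.
text_rep = np.array([1.0, 0.0])
support_rep = np.array([0.0, 1.0])
synth_rep = np.array([-1.0, -1.0])
loss_diverse = kernelized_volume_sq([text_rep, support_rep, synth_rep])
loss_aligned = kernelized_volume_sq([text_rep, text_rep, text_rep])
```

In a training loop this determinant would serve as an alignment loss term; using the kernel Gram matrix rather than a plain Gram matrix is what lets the volume capture nonlinear relationships among the three representations.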
Problem

Research questions and friction points this paper is trying to address.

Generating precise class descriptions from limited visual examples
Aligning multimodal representations to reduce semantic contradictions
Compensating for scarce training data through synthetic image generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cross-modal iterative prompting with LLMs for precise descriptions
Zero-shot synthesis of semantically consistent images
Geometry-aware alignment for multimodal representation integration
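As an illustration of the first bullet, the single-pass iterative prompting described in the abstract (draft a description from the support images, then check and revise it against the visual evidence within one structured response) could be templated roughly as below. The function name and step wording are hypothetical, not the paper's actual prompt, and the support images would be attached through whatever multimodal LLM interface is in use.

```python
def build_cip_prompt(class_name, num_support):
    # Hypothetical structured prompt: the iterative refinement happens
    # inside a single LLM response, mirroring the "single structured
    # reasoning pass" described in the abstract.
    return (
        f"You are given {num_support} support images of the class "
        f"'{class_name}'.\n"
        "Step 1: List only the visual attributes actually visible in the images.\n"
        "Step 2: Draft a class description using those attributes.\n"
        "Step 3: Check the draft against each image and revise any claim that\n"
        "contradicts the visual evidence.\n"
        "Return the final, revised description."
    )

prompt = build_cip_prompt("arctic tern", 5)
```

Grounding each step in the attached support images is what distinguishes this from name-only prompting, which is where the hallucinated semantics the paper targets tend to arise.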
👥 Authors
Wenhao Li, School of Software, Shandong University
Qiangchang Wang, Shandong University (Computer Vision, Deep Learning)
Xianjing Meng, School of Computing and Artificial Intelligence, Shandong University of Finance and Economics
Zhibin Wu, School of Software, Shandong University
Yilong Yin, School of Software, Shandong University