🤖 AI Summary
Current precision agriculture lacks fine-grained insect identification and semantic understanding, while existing multimodal models have insufficient entomological visual knowledge. Method: We introduce Insect-VL, the first large-scale vision-language dataset tailored for insect understanding, and propose Insect-LLaVA, a foundational multimodal model. Our approach features the first instruction-tuned multimodal dataset for entomology; a novel Patch-wise Relevant Attention mechanism that enhances local fine-grained feature modeling; and a Description Consistency loss that enables label-free learning of micro-features. Contribution/Results: Evaluated on our newly constructed insect vision-language question answering benchmark, Insect-LLaVA achieves state-of-the-art performance, significantly improving fine-grained classification, cross-modal retrieval, and generation. The framework establishes an interpretable and generalizable vision-language understanding paradigm for sustainable agriculture.
📝 Abstract
Multimodal conversational generative AI has shown impressive capabilities across a range of vision and language understanding tasks by learning from massive image-text data. However, current conversational models still lack knowledge of visual insects because they are typically trained on general-domain vision-language data. Meanwhile, understanding insects is a fundamental problem in precision agriculture and helps promote sustainable agricultural development. This paper therefore proposes a novel multimodal conversational model, Insect-LLaVA, to advance visual understanding of insect-domain knowledge. In particular, we first introduce a new large-scale Multimodal Insect Dataset with Visual Insect Instruction Data that enables the training of multimodal foundation models. Our proposed dataset allows conversational models to comprehend the visual and semantic features of insects. Second, we propose Insect-LLaVA, a general Large Language and Vision Assistant for Visual Insect Understanding. Then, to enhance insect feature learning, we develop an Insect Foundation Model that introduces micro-feature self-supervised learning with a Patch-wise Relevant Attention mechanism to capture the subtle differences among insect images. We also present a Description Consistency loss to improve micro-feature learning via text descriptions. Experimental results on our new Visual Insect Question Answering benchmarks demonstrate the effectiveness of our approach in visual insect understanding, and it achieves state-of-the-art performance on standard benchmarks of insect-related tasks.
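To make the two named mechanisms more concrete, the sketch below shows one plausible reading of them in PyTorch. This is not the paper's actual implementation: the module structure, the use of a mean-pooled global token as the attention query, and the cosine-based consistency objective are all assumptions made for illustration; the paper's exact formulation may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PatchwiseRelevantAttention(nn.Module):
    """Hypothetical sketch: score each patch embedding by its relevance to a
    global image token, then re-weight patches so that fine-grained local
    features (e.g. wing venation, antennae) dominate the pooled feature."""

    def __init__(self, dim: int):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        # patches: (batch, num_patches, dim) from a vision encoder
        global_token = patches.mean(dim=1, keepdim=True)        # (B, 1, D)
        q = self.query(global_token)                            # (B, 1, D)
        k = self.key(patches)                                   # (B, N, D)
        scores = (q @ k.transpose(1, 2)) / k.shape[-1] ** 0.5   # (B, 1, N)
        weights = scores.softmax(dim=-1)                        # relevance per patch
        attended = weights.transpose(1, 2) * patches            # re-weighted patches
        return attended.sum(dim=1)                              # (B, D) micro-feature


def description_consistency_loss(img_feat: torch.Tensor,
                                 txt_feat: torch.Tensor) -> torch.Tensor:
    """Hypothetical: pull the image micro-feature toward the embedding of its
    paired text description (1 - cosine similarity), requiring no class labels."""
    img = F.normalize(img_feat, dim=-1)
    txt = F.normalize(txt_feat, dim=-1)
    return (1.0 - (img * txt).sum(dim=-1)).mean()
```

In this reading, the attention module replaces plain mean pooling of patch tokens, and the consistency loss supplies a label-free training signal from image-description pairs, which matches the abstract's claim of self-supervised micro-feature learning.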