🤖 AI Summary
To address the limitations of unimodal representation and the frequent incompleteness of multimodal information in fine-grained plant identification, this paper proposes an end-to-end multimodal deep learning framework for organ-level plant recognition (flower, leaf, fruit, stem). Methodologically, it (1) applies multimodal fusion architecture search (MFAS) to automatically discover where and how modalities should be fused; (2) contributes Multimodal-PlantCLEF, a restructuring of PlantCLEF2015 into a large-scale, organ-annotated multimodal benchmark spanning 979 classes; and (3) introduces multimodal dropout, a regularization technique that improves robustness when some modalities are missing. Experiments report state-of-the-art accuracy of 82.61% on Multimodal-PlantCLEF, outperforming the best handcrafted late-fusion baseline by 10.33%. McNemar's test confirms statistical significance (p < 0.01).
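For context, the handcrafted late-fusion baseline the paper outperforms can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes one classifier per organ image whose softmax probabilities are averaged, which is one common late-fusion scheme; the function name `late_fusion` and the toy logits are invented for the example.

```python
import numpy as np

def late_fusion(per_modality_logits):
    """Handcrafted late fusion: score each organ image with its own
    classifier, then average the per-modality class probabilities."""
    def softmax(z):
        e = np.exp(z - z.max())        # subtract max for numerical stability
        return e / e.sum()
    probs = np.stack([softmax(z) for z in per_modality_logits])
    return probs.mean(axis=0)          # fused class distribution

# Toy example: 3-class logits from flower, leaf, and fruit classifiers
logits = [np.array([2.0, 0.5, 0.1]),
          np.array([1.5, 1.0, 0.2]),
          np.array([0.3, 2.2, 0.4])]
fused = late_fusion(logits)
pred = int(fused.argmax())
```

The fusion point here is fixed by hand at the prediction level; the paper's contribution is to search over such fusion choices automatically instead.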
📝 Abstract
Plant classification is vital for ecological conservation and agricultural productivity, enhancing our understanding of plant growth dynamics and aiding species preservation. Deep learning (DL) techniques have revolutionized this field by enabling autonomous feature extraction, greatly reducing the dependence on manual expertise. However, conventional DL models often rely on a single data source and thus fail to capture the full biological diversity of plant species. Recent research has turned to multimodal learning to overcome this limitation by integrating multiple data types, enriching the representation of plant characteristics. This shift introduces the challenge of determining the optimal point at which to fuse modalities. In this paper, we introduce a pioneering multimodal DL-based approach to plant classification with automatic modality fusion. Using multimodal fusion architecture search, our method integrates images of multiple plant organs (flowers, leaves, fruits, and stems) into a cohesive model. To address the lack of multimodal datasets, we contribute Multimodal-PlantCLEF, a restructured version of the PlantCLEF2015 dataset tailored to multimodal tasks. Our method achieves 82.61% accuracy on the 979 classes of Multimodal-PlantCLEF, surpassing state-of-the-art methods and outperforming late fusion by 10.33%. Through the incorporation of multimodal dropout, our approach demonstrates strong robustness to missing modalities. We validate our model against established benchmarks using standard performance metrics and McNemar's test, further underscoring its superiority.
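The multimodal dropout idea mentioned above can be sketched as follows. This is a minimal sketch based only on the abstract's description (randomly suppressing modalities during training so the model tolerates their absence at test time); the function name `modality_dropout`, the drop probability, and the keep-at-least-one rule are assumptions, not the paper's actual implementation.

```python
import numpy as np

def modality_dropout(features, p_drop=0.3, rng=None):
    """Randomly zero out entire modality feature vectors during training.

    features: dict mapping an organ modality name (e.g. 'flower', 'leaf')
    to its feature vector. Each modality is dropped independently with
    probability p_drop, but at least one modality is always kept so the
    fused representation is never empty.
    """
    rng = rng or np.random.default_rng()
    names = list(features)
    keep = rng.random(len(names)) >= p_drop
    if not keep.any():                       # guarantee one surviving modality
        keep[rng.integers(len(names))] = True
    return {name: feat if kept else np.zeros_like(feat)
            for (name, feat), kept in zip(features.items(), keep)}

# Toy example: four organ modalities, as in the paper's setting
feats = {o: np.ones(4) for o in ("flower", "leaf", "fruit", "stem")}
dropped = modality_dropout(feats, p_drop=0.5, rng=np.random.default_rng(0))
```

Training under such random modality suppression encourages the fused model not to over-rely on any single organ, which is consistent with the robustness to missing modalities reported above.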