🤖 AI Summary
To address the small-sample bottleneck in burn scar segmentation for remote sensing–based disaster assessment, this paper proposes a multi-granularity dual-encoder network. The method employs a local–global dual-encoder collaborative mechanism that jointly models fine-grained texture and coarse-grained semantic context, enhancing feature discriminability without requiring additional annotations. A lightweight architecture is adopted, integrated with multi-granularity feature fusion and an IoU-guided optimization training paradigm. Experimental results demonstrate that, under limited labeled data, the model achieves a +2.65% average IoU improvement over the baseline model. Moreover, it requires less than half the GFLOPs of state-of-the-art (SOTA) counterparts while attaining comparable or better generalization performance.
📝 Abstract
In crisis management and remote sensing, image segmentation plays a crucial role, enabling tasks such as disaster response and emergency planning through the analysis of visual data. Neural networks can analyze satellite acquisitions and determine which areas were affected by a catastrophic event. The main obstacle to their development in this context is data scarcity: the lack of extensive benchmark datasets limits the training of large neural network models. In this paper, we propose a novel methodology, namely Magnifier, to improve segmentation performance under limited data availability. The Magnifier methodology is applicable to any existing encoder-decoder architecture, as it extends a model by merging information at different contextual levels through a dual-encoder approach: a local and a global encoder. Magnifier analyzes the input data twice, with the local and global encoders extracting information from the same input at different granularities. This allows Magnifier to extract more information than other approaches given the same set of input images. Magnifier improves the quality of the results by +2.65% in average IoU while causing only a restrained increase in the number of trainable parameters compared to the original model. We evaluated our proposed approach against state-of-the-art burned area segmentation models, demonstrating, on average, comparable or better performance in less than half the GFLOPs.
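The dual-encoder idea above can be sketched in a few lines. This is a minimal, framework-free illustration of the concept only: the function names, the pooling-based "encoders", and the channel-wise concatenation are assumptions standing in for the paper's learned encoders and fusion, not the actual Magnifier implementation.

```python
import numpy as np

def global_encode(image, factor=4):
    """Coarse-granularity branch: a stand-in global encoder that
    average-pools the whole image (illustrative, not the paper's)."""
    h, w, c = image.shape
    return image[:h - h % factor, :w - w % factor].reshape(
        h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))

def local_encode(image, patch=4):
    """Fine-granularity branch: a stand-in local encoder that keeps a
    per-patch maximum as a proxy for learned texture features."""
    h, w, c = image.shape
    return image[:h - h % patch, :w - w % patch].reshape(
        h // patch, patch, w // patch, patch, c).max(axis=(1, 3))

def magnifier_features(image):
    """The same input is analyzed twice, at two granularities, and the
    resulting feature maps are merged along the channel axis."""
    g = global_encode(image)   # coarse semantic context
    l = local_encode(image)    # fine local detail
    return np.concatenate([l, g], axis=-1)

img = np.random.rand(32, 32, 3)
feats = magnifier_features(img)
print(feats.shape)  # (8, 8, 6): same spatial grid, doubled channels
```

The key point the sketch captures is that both branches see the same input, so no additional images or annotations are needed; a decoder downstream would consume the merged feature map exactly as it would a single encoder's output.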