🤖 AI Summary
Remote sensing image scene classification faces challenges including high noise levels, low-quality annotations, and insufficient feature diversity, which collectively limit model performance. To address these issues, this paper proposes a Cuttlefish Optimization–enhanced Bidirectional Recurrent Neural Network (CO-BRNN), which intelligently optimizes network parameters and gating mechanisms to significantly improve joint modeling of temporal and spatial features in complex remote sensing scenes. By integrating bio-inspired metaheuristic optimization with bidirectional sequence modeling, CO-BRNN achieves robust feature learning under limited supervision. Evaluated on standard remote sensing benchmarks, CO-BRNN attains a 97% classification accuracy—outperforming mainstream approaches such as MLP-CNN, CNN-LSTM, and LSTM-CRF. This work establishes a new paradigm for remote sensing scene understanding that is highly robust and less dependent on large-scale, high-fidelity annotated data.
📝 Abstract
Scene categorization (SC) in remotely sensed images is an important task with broad implications for many fields, including disaster management, environmental monitoring, urban planning, and more. Despite its many applications, achieving a high degree of accuracy in SC from remote sensing data has proven difficult. This is because conventional deep learning models require large, highly varied datasets and must cope with high noise levels in order to capture the important visual features. To address these problems, this study introduces a novel technique, the Cuttlefish Optimized Bidirectional Recurrent Neural Network (CO-BRNN), for scene classification in remote sensing data. The study compares the performance of CO-BRNN with existing techniques, including the Multilayer Perceptron-Convolutional Neural Network (MLP-CNN), Convolutional Neural Network-Long Short-Term Memory (CNN-LSTM), Long Short-Term Memory-Conditional Random Field (LSTM-CRF), Graph-Based (GB), Multilabel Image Retrieval Model (MIRM-CF), and Convolutional Neural Network with Data Augmentation (CNN-DA) approaches. The results demonstrate that CO-BRNN attained the highest accuracy of 97%, followed by LSTM-CRF with 90%, MLP-CNN with 85%, and CNN-LSTM with 80%. The study also highlights the significance of physical (ground-truth) verification to ensure the reliability of satellite data.
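To make the optimization idea concrete, the sketch below shows one way a cuttlefish-style metaheuristic could search BRNN hyperparameters (hidden units and learning rate). This is a hypothetical illustration, not the paper's implementation: the fitness function is a toy surrogate standing in for validation accuracy, and the "reflection" and "visibility" updates are simplified analogues of the cuttlefish operators.

```python
import random

def fitness(hidden_units, lr):
    # Toy surrogate for validation accuracy: peaks at hidden_units=128,
    # lr=0.01 (illustrative only; a real run would train and validate a BRNN).
    return -((hidden_units - 128) / 128) ** 2 - ((lr - 0.01) / 0.01) ** 2

def cuttlefish_search(pop_size=20, iters=50, seed=0):
    rng = random.Random(seed)
    # Initialize a population of candidate (hidden_units, learning_rate) pairs.
    pop = [(rng.randint(16, 512), rng.uniform(1e-4, 1e-1)) for _ in range(pop_size)]
    best = max(pop, key=lambda c: fitness(*c))
    for _ in range(iters):
        new_pop = []
        for h, lr in pop:
            # "Reflection" step: pull each candidate toward the current best.
            r = rng.uniform(0.0, 1.0)
            h2 = int(h + r * (best[0] - h))
            lr2 = lr + r * (best[1] - lr)
            # "Visibility" step: small random perturbation, clamped to bounds.
            h2 = max(16, min(512, h2 + rng.randint(-8, 8)))
            lr2 = min(1e-1, max(1e-4, lr2 * rng.uniform(0.9, 1.1)))
            new_pop.append((h2, lr2))
        pop = new_pop
        cand = max(pop, key=lambda c: fitness(*c))
        if fitness(*cand) > fitness(*best):
            best = cand
    return best

best = cuttlefish_search()
print(best)
```

In a full pipeline, `fitness` would be replaced by training a bidirectional RNN with the candidate hyperparameters and returning its validation accuracy on the remote sensing benchmark.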