Edge-Guided Shallow and Contextual Deep Feature Learning via Bidirectional Attention for Salient Object Detection in Optical Remote Sensing Images

Publisher:
IEEE (Institute of Electrical and Electronics Engineers, Inc.)
Publication Type:
Journal Article
Citation:
IEEE ACCESS, 2025, 13, (99), pp. 218066-218081
Issue Date:
2025
Salient object detection in optical remote sensing images aims to detect and segment objects that stand out visually, mimicking human interpretation. Accurate detection in remote sensing imagery is hindered by the broad variability in object appearance and the complexity of backgrounds. Despite recent advances, two major challenges remain. First, it is unclear how best to fuse shallow features with deep features. Second, an effective strategy for multi-scale feature processing is crucial, since patterns can vary significantly across levels. We introduce a novel bidirectional attention mechanism to address these challenges. Our framework comprises (1) a Parallel Convolution-Channel Attention (PCCA) module that strengthens edge representation and highlights salient feature channels, and (2) a Holistic Reverse Attention (HRA) module that preserves crucial details in high-level features. Coupled with a multiscale progressive decoder, our method enables precise feature integration. Extensive evaluation on benchmark datasets demonstrates the superiority of the proposed model over 17 state-of-the-art (SOTA) models, with an 11.57% reduction in Mean Absolute Error and a 0.4% improvement in S-measure.
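The abstract does not give implementation details for the PCCA module, but the channel-attention component it names typically follows a squeeze-and-excitation pattern: pool each channel to a scalar, pass the result through a small bottleneck, and rescale channels by the resulting gates. The sketch below is a minimal NumPy illustration of that generic pattern only; the function name, the reduction ratio, and the random weights are assumptions for demonstration, not the authors' implementation.

```python
import numpy as np

def channel_attention(feat, reduction=4, rng=None):
    """SE-style channel attention over a (C, H, W) feature map (illustrative only).

    Global-average-pool each channel, pass the descriptor through a small
    two-layer bottleneck, squash to (0, 1) with a sigmoid, and rescale
    channels. Weights are random here purely for demonstration.
    """
    c, h, w = feat.shape
    rng = rng or np.random.default_rng(0)
    w1 = rng.standard_normal((c, c // reduction)) * 0.1
    w2 = rng.standard_normal((c // reduction, c)) * 0.1

    squeeze = feat.mean(axis=(1, 2))                # (C,) per-channel descriptor
    hidden = np.maximum(squeeze @ w1, 0.0)          # ReLU bottleneck
    gates = 1.0 / (1.0 + np.exp(-(hidden @ w2)))    # sigmoid gates in (0, 1)
    return feat * gates[:, None, None]              # reweight channels

feat = np.random.default_rng(1).standard_normal((8, 16, 16))
out = channel_attention(feat)
print(out.shape)  # (8, 16, 16)
```

Because the gates lie in (0, 1), each output channel is an attenuated copy of its input channel; salient channels are suppressed less than others, which is the "highlights salient feature channels" behavior the abstract describes.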