Article

Alignment Integration Network for Salient Object Detection and Its Application for Optical Remote Sensing Images

1 Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
2 University of Chinese Academy of Sciences, Beijing 100049, China
* Author to whom correspondence should be addressed.
Sensors 2023, 23(14), 6562; https://doi.org/10.3390/s23146562
Submission received: 27 June 2023 / Revised: 18 July 2023 / Accepted: 19 July 2023 / Published: 20 July 2023
(This article belongs to the Section Intelligent Sensors)

Abstract

Salient object detection has made substantial progress due to the exploitation of multi-level convolutional features. The key point is how to combine these convolutional features effectively and efficiently. Due to the step-by-step down-sampling operations in almost all CNNs, multi-level features usually have different scales. Methods based on fully convolutional networks directly apply bilinear up-sampling to low-resolution deep features and then combine them with high-resolution shallow features by addition or concatenation, which neglects the compatibility of the features and results in misalignment problems. In this paper, to solve this problem, we propose an alignment integration network (ALNet), which aligns adjacent level features progressively to generate powerful combinations. To capture long-range dependencies for the high-level integrated features while maintaining high computational efficiency, a strip attention module (SAM) is introduced into the alignment integration procedure. Benefiting from SAM, multi-level semantics can be selectively propagated to predict precise salient objects. Furthermore, although integrating multi-level convolutional features can alleviate the blurred boundary problem to a certain extent, it is still unsatisfactory for restoring real object boundaries. Therefore, we design a simple but effective boundary enhancement module (BEM) to guide the network to focus on boundaries and other error-prone parts. Based on BEM, an attention weighted loss is proposed to push the network to generate sharper object boundaries. Experimental results on five benchmark datasets demonstrate that the proposed method achieves state-of-the-art performance on salient object detection. Moreover, we extend the experiments to remote sensing datasets, and the results further prove the universality and scalability of ALNet.

1. Introduction

As an important research branch in computer vision, salient object detection (SOD) has received much attention in recent years. It can serve as a fundamental pre-processing technique to facilitate various computer vision applications, such as foreground map evaluation [1], image retrieval [2], visual tracking [3,4], remote sensing image segmentation [5], and semantic segmentation [6].
Benefiting from the development of deep learning technology, great advancements [7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37] in SOD have been made. In [38], Wang et al. provide a comprehensive survey that reviews deep SOD algorithms from various aspects, including network architecture, level of supervision, and so on. As summarized by [38], most of the current deep learning based methods design their architectures based on fully convolutional networks (FCN) [39] to integrate multi-level convolutional features. However, due to stepwise down-sampling operations, features from different levels have contradictions, and the contextual information they possess is asymmetric, which results in misalignment problems during the feature aggregation process; current work tends to ignore this problem.
To address the misalignment problem, we explore various alignment technologies and propose a novel alignment integration network (ALNet) for SOD. Figure 1 illustrates the alignment processes of different technologies.
To increase interpretability of the models, we visualize the integrated feature maps of each model. An FCN-based model, which combines adjacent level features by direct addition, is utilized as the baseline model (i.e., w.o.Align). As we can see, features without alignment are fuzzy and unfocused. The important semantic and structural information is not well represented because of misalignment. Flow alignment (see Figure 1a), which has been proven to be effective in semantic segmentation [40], provides us with a feasible solution to alleviate the misalignment. Motivated by [40], we propose a flow alignment model to align adjacent level features for SOD. In flow alignment, semantic flow (i.e., offset Δ ) is learned for spatial warping of high-level features. The visualized results in Figure 1 demonstrate the effectiveness of flow alignment. However, the flow alignment only learns one offset at each spatial position of a feature, which is sometimes not enough to handle complex misalignment. Therefore, we further propose a deformable alignment model (see Figure 1b) by substituting deformable convolution for spatial warping to increase the offset diversity for better alignment. Compared with flow alignment, deformable alignment can better highlight the salient region as well as maintain useful spatial details. The details of flow alignment and deformable alignment are explained in Section 3.2.
Moreover, the ability of a network to model global context is also critical to performance improvement. Recently, non-local self-attention mechanism [41] has been proven to be effective in capturing long-range dependencies. However, how to effectively incorporate it in SOD is still challenging. First of all, we need to consider computational efficiency. In this paper, we introduce strip attention [42] into our network to augment contextual information as well as ensure computational efficiency. Second, the adaptation of the self-attention mechanism for SOD is also an important factor to consider. Different from [42], where strip attention is utilized once to enhance the final feature for scene parsing, in our ALNet, strip attention modules (SAMs) are embedded in the intermediate procedure of alignment integration to augment contextual information for the high-level integrated features. Due to SAM, the global semantics are selectively incorporated in the alignment integration to recover precise salient objects.
Furthermore, to strengthen the model’s learning ability at the object boundary, we design a simple but effective boundary enhancement module, which can output an attention map for the network. Based on the attention map, an attention weighted loss (AW loss) function is proposed to make the network pay more attention to the ambiguous and hard regions. Features from this branch are utilized as a complement for the multi-level integrated features to conduct the final prediction.
Finally, to prove the robustness and scalability of the proposed method, we directly apply our network to optical remote sensing images (RSIs) and compare it with state-of-the-art RSI-SOD methods [32,43,44,45,46] (salient object detection methods that are specially designed for RSIs). The extensive experiments demonstrate the effectiveness of our method.
The main contributions of our proposed method are summarized as follows:
  • We propose an alignment integration network (ALNet) to alleviate the misalignment problem in multi-level feature fusion, thereby generating effective representation for salient object detection.
  • Strip attention is introduced into our network to augment global contextual information for the high-level integrated features as well as keep computational efficiency.
  • To make the network focus more on the boundary and error-prone regions, we propose a boundary enhancement module and an attention weighted loss function to help the network generate results with sharper boundaries.
  • Experimental results on SOD benchmarks as well as remote sensing datasets demonstrate the effectiveness and scalability of the proposed ALNet.

2. Related Work

Existing deep SOD methods can be roughly categorized into multi-level feature integration based and boundary learning based approaches.

2.1. Integrating Multi-Level Features for SOD

A simple but effective way to integrate multi-level features is to add or concatenate features step by step, as in FCN [39], which is usually taken as a baseline model. However, with this direct integration, associations between features cannot be well modeled, resulting in unsatisfactory performance. In contrast to this direct way, Amulet [10] integrates multi-scale features in a fully connected way. Nevertheless, fusing features from all levels at every specific scale may introduce unnecessary redundant information. Based on FCN, PAGRN [12] introduces both channel-wise and spatial-wise attention to suppress irrelevant interference from the features and then combines the attentive features by stepwise addition. A pyramid fusion structure is utilized by Wei et al. [23] to fuse high-level semantics with low-level details via lateral connections. In [17], Wang et al. design an ingenious network that conducts both top-down and bottom-up inference in an iterative and cooperative manner. The predicted saliency map is integrated with multi-level features step by step for coarse-to-fine saliency estimation. Sun et al. [28] leverage average- and max-pooling modules to integrate the multi-level features in the spatial and channel dimensions, respectively. An architecture search framework is proposed by Zhang et al. [29] to automatically learn a multi-scale feature fusion strategy. All of these existing methods design ingenious modules to integrate features; nevertheless, they neglect the misalignment problem of multi-level features. To address this problem, we introduce alignment technology into SOD and further design an alignment integration network to relieve the misalignment for effective feature integration.

2.2. Boundary Learning for SOD

Precise salient object boundaries are beneficial to the performance of SOD methods. CNN-based methods suffer from blurred boundaries due to stride and pooling operations. Incorporating shallow layer features can alleviate the problem to a certain extent, but sometimes this is not enough. In order to obtain sharper object boundaries, some methods, such as [9,11,14], utilize a CRF [47] as a post-processing step to enhance object edges. However, the post-processing operation is too time-consuming to be employed in real-time applications. In [16], Wang et al. design a salient edge detection module to emphasize the importance of boundary information, and an L2-norm loss is employed to supervise the salient edges. BASNet [20] employs a hybrid loss that incorporates SSIM [48] to capture the structural information in an image. Weighted BCE and IOU losses are utilized by F3Net [23], which synthesizes the local structure information of a pixel to guide the network to focus more on local details. In [29], Zhang et al. employ a boundary loss [49] to penalize the misalignment of salient object boundaries. Mei et al. [37] adopt the patch-level edge preservation loss [50], which considers a local neighborhood of each pixel and assigns more attention to the object boundary. Different from these algorithms, in this paper we propose, based on the boundary enhancement module, an attention weighted loss that adaptively pushes the network to focus on hard pixels (i.e., pixels on boundaries or in other error-prone parts).

3. Materials and Methods

In this section, we explain the details of our proposed ALNet, whose main framework is shown in Figure 2.
The backbone includes five convolutional blocks, denoted as $\{\mathrm{Block}_\ell\}_{\ell=0}^{4}$. Multi-level features with different resolutions (i.e., 1/4, 1/8, 1/16, and 1/32 of the original resolution) are side-outputted from Block 1 to Block 4 and are denoted as $\{X_\ell\}_{\ell=1}^{4}$. Then, the features are sent to the pre-process module, which is explained in Section 3.1. Next, we propose the alignment integration module (AIM) to combine adjacent level features by feature alignment in Section 3.2. The boundary enhancement module (BEM), which is utilized to equip AIM to generate more powerful features, is explained in Section 3.3. Finally, we introduce the proposed attention weighted loss and the supervision strategy of our work in Section 3.4.

3.1. Pre-Process Module

As shown in Figure 2, the shallower features $\{X_\ell\}_{\ell=1}^{3}$ are each fed into a 1 × 1 convolution followed by batch normalization and ReLU operations. As for the top-level feature (i.e., $X_4$), an additional 3 × 3 convolution is applied to extract high-level semantics for the network. After pre-processing, we obtain the multi-level features $\{F_\ell\}_{\ell=1}^{4}$. Then, alignment integration is carried out on them.
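For concreteness, a minimal PyTorch sketch of this pre-process step is given below. The input channel widths (ResNet50-style side outputs) and the common output width of 64 are assumptions for illustration; the text does not specify them.

```python
import torch.nn as nn

def cbr(in_ch, out_ch, k):
    """k x k convolution followed by batch normalization and ReLU (the paper's CBR block)."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, k, padding=k // 2, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class PreProcess(nn.Module):
    """Compress the side-output features X1..X4 to a common width; the top level
    gets an extra 3x3 convolution to extract high-level semantics."""
    def __init__(self, in_channels=(256, 512, 1024, 2048), width=64):  # assumed widths
        super().__init__()
        self.reduce = nn.ModuleList([cbr(c, width, 1) for c in in_channels])
        self.top_extra = cbr(width, width, 3)

    def forward(self, xs):              # xs = [X1, X2, X3, X4]
        fs = [r(x) for r, x in zip(self.reduce, xs)]
        fs[-1] = self.top_extra(fs[-1])
        return fs                       # [F1, F2, F3, F4]
```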

3.2. Alignment Integration Module

Most of the existing methods directly integrate multi-level features without considering the misalignment problem between them. To alleviate this problem, we propose a novel alignment integration module (AIM), which is constructed based on feature alignment (FA). As shown in Figure 2, in AIM, adjacent level features undergo FA to generate an aligned feature, which is then fed into the next FA with the shallower level feature, and so on. The procedures of FA are shown in Figure 1. For the adjacent level features $F_\ell$ and $F_{\ell+1}$, we first generate an alignment offset for them.

3.2.1. Offset Generation

First of all, $F_{\ell+1}$, which denotes a high-level feature with low resolution, is up-sampled to the same size as $F_\ell$. Next, we concatenate them together and take the concatenated features as the input for a 3 × 3 convolution layer to output the alignment offset:
$\Delta_\ell = \mathrm{Conv}_{3\times 3}(\mathrm{Cat}(\mathrm{Up}(F_{\ell+1}), F_\ell)), \quad (1)$
where $\mathrm{Cat}(\cdot)$ and $\mathrm{Up}(\cdot)$ denote the concatenation and bilinear up-sampling operations, respectively. Then, we conduct feature alignment for them.
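A minimal sketch of the offset generation in Equation (1) is shown below, assuming both features already share the channel width produced by the pre-process module; the class name and the width are illustrative only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OffsetGenerator(nn.Module):
    """Equation (1): upsample the deeper feature, concatenate it with the shallower
    one, and predict the alignment offset with a 3x3 convolution."""
    def __init__(self, width=64, offset_ch=2):   # offset_ch=2 for flow, 18 for 3x3 deformable
        super().__init__()
        self.conv = nn.Conv2d(2 * width, offset_ch, kernel_size=3, padding=1)

    def forward(self, f_next, f_cur):
        # upsample the high-level feature to the shallower feature's resolution
        f_up = F.interpolate(f_next, size=f_cur.shape[2:], mode='bilinear', align_corners=False)
        offset = self.conv(torch.cat([f_up, f_cur], dim=1))
        return offset, f_up
```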

3.2.2. Feature Alignment

Two kinds of feature alignment models (i.e., flow alignment in Figure 1a and deformable alignment in Figure 1b), which intrinsically share the same formulation but differ in their offset diversity, are proposed in our work.
Flow Alignment. For flow alignment, the offset $\Delta_\ell \in \mathbb{R}^{H\times W\times 2}$ is utilized for the spatial warping of $F_{\ell+1}$:
$\tilde{F}_{\ell+1} = \mathcal{T}(F_{\ell+1}, \Delta_\ell), \quad (2)$
where $\mathcal{T}(\cdot,\cdot)$ represents the alignment transformation function; $\Delta_\ell$ consists of two feature maps, which represent the offsets for the x- and y-coordinates of each position on the feature map to be aligned, respectively. Let $\mathcal{T}_{hw}$ denote the output of $\mathcal{T}(F, \Delta)$ at position $(h, w)$. The function is defined as follows:
$\mathcal{T}_{hw} = \sum_{h'=1}^{H}\sum_{w'=1}^{W} F_{h'w'} \cdot \max\!\left(0,\, 1 - \left|h + \tfrac{\Delta_{1hw}}{\delta} - h'\right|\right) \cdot \max\!\left(0,\, 1 - \left|w + \tfrac{\Delta_{2hw}}{\delta} - w'\right|\right), \quad (3)$
which samples features at position $p = (h + \Delta_{1hw}/\delta,\ w + \Delta_{2hw}/\delta)$ of $F$ and linearly interpolates the values of the four neighbors (top-left, top-right, bottom-left, and bottom-right) of $p$ to approximate the output. The variable $\delta$ denotes the scale difference between $F$ and $\Delta$ (e.g., when $F$'s resolution is half that of $\Delta$, $\delta = 2$); $\Delta_{1hw}$ and $\Delta_{2hw}$ represent the learned 2D transformation offsets for position $(h, w)$.
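The warping in Equations (2) and (3) is standard bilinear sampling, so it can be sketched with `torch.nn.functional.grid_sample`. The sketch below warps the already upsampled feature (folding the scale factor δ into the bilinear upsampling); the offset channel order (x, y) and the coordinate normalization are implementation assumptions, not details given in the text.

```python
import torch
import torch.nn.functional as F

def flow_align(f_up, offset):
    """Warp the upsampled high-level feature f_up (B, C, H, W) by the learned flow
    offset (B, 2, H, W) using bilinear sampling, as in Equations (2)-(3)."""
    b, _, h, w = f_up.shape
    # base sampling grid in normalized [-1, 1] coordinates
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h, device=f_up.device),
                            torch.linspace(-1, 1, w, device=f_up.device),
                            indexing='ij')
    grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
    # convert pixel offsets to normalized coordinates (assumed (dx, dy) channel order)
    norm = torch.tensor([(w - 1) / 2.0, (h - 1) / 2.0], device=f_up.device)
    flow = offset.permute(0, 2, 3, 1) / norm
    return F.grid_sample(f_up, grid + flow, mode='bilinear', align_corners=True)
```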
Deformable Alignment. As for deformable alignment, a 3 × 3 deformable convolution is utilized in our network. The number of offsets is proportional to the kernel size of the deformable convolution; therefore, the learned offset is $\Delta_\ell \in \mathbb{R}^{H\times W\times 18}$. The feature is aligned by a modulated deformable convolution (i.e., DCN-v2 [51]) based on the offset:
$\tilde{F}_{\ell+1} = \mathrm{DeformConv}(\mathrm{Up}(F_{\ell+1}), \Delta_\ell). \quad (4)$
Let $Y$ denote the output of $\mathrm{DeformConv}(F, \Delta)$:
$Y(p) = \sum_{k=1}^{n^2} \omega(p_k) \cdot F(p + p_k + \Delta p_k) \cdot m_k(p), \quad (5)$
where $p$ is the spatial position, $p_k$ is the $k$th sampling offset of a standard convolution, and $n$ is the kernel size of the deformable convolution (i.e., 3); $\omega$ and $m$ are learnable parameters of the DeformConv. Compared with flow alignment, deformable alignment adaptively learns diverse offsets for the features and thus can handle the misalignment problem better, which corresponds with the experimental results in Section 5.1.
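A hedged sketch of this step using `torchvision.ops.DeformConv2d` follows. Predicting the 9 modulation masks from the same head as the 18 offsets is an assumption, since the text only specifies the 18-channel offset; the channel width is likewise assumed.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.ops import DeformConv2d

class DeformAlign(nn.Module):
    """Deformable alignment (Equations (4)-(5)): a 3x3 modulated deformable convolution
    (DCN-v2) warps the upsampled high-level feature with 18 learned offsets per position."""
    def __init__(self, width=64):                      # assumed channel width
        super().__init__()
        # 18 offset channels (2 per sampling point) + 9 modulation-mask channels
        self.offset_mask = nn.Conv2d(2 * width, 27, kernel_size=3, padding=1)
        self.dcn = DeformConv2d(width, width, kernel_size=3, padding=1)

    def forward(self, f_next, f_cur):
        f_up = F.interpolate(f_next, size=f_cur.shape[2:], mode='bilinear', align_corners=False)
        om = self.offset_mask(torch.cat([f_up, f_cur], dim=1))
        offset, mask = om[:, :18], torch.sigmoid(om[:, 18:])
        return self.dcn(f_up, offset, mask)
```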

3.2.3. Aligned Integration

The aligned integrated feature can be obtained by:
$\tilde{F}_\ell = \mathrm{CBR}_{3\times 3}(\tilde{F}_{\ell+1} + F_\ell), \quad (6)$
where $\mathrm{CBR}_{3\times 3}(\cdot)$ denotes a 3 × 3 convolution with batch normalization and ReLU operations. In AIM, multi-level features are integrated step by step as in FCN, but with the misalignment alleviated. The integrated feature of the last step (i.e., $\tilde{F}_1$) is equipped with both semantic information and spatial details.

3.2.4. Strip Attention Module

To augment the contextual information of the intermediate integrated features and promote their pixel-wise representative capacity, we incorporate non-local self-attention into our network. Standard non-local self-attention has a computational complexity of $O((H\times W)\times(H\times W))$, where $H$ and $W$ denote the spatial dimensions of the input feature map. In this paper, we introduce strip attention [42], which reduces the computational complexity to $O((H\times W)\times W)$ through a stripping operation, to add global context while maintaining efficiency. The strip attention module (SAM) is displayed in Figure 3.
For simplicity, here we use $F \in \mathbb{R}^{C\times H\times W}$ to denote the input feature. First, $F$ is fed into three convolutional layers with 1 × 1 filters, each followed by batch normalization and ReLU, to generate three new feature maps, namely $Q \in \mathbb{R}^{C'\times H\times W}$, $K \in \mathbb{R}^{C'\times H\times W}$, and $V \in \mathbb{R}^{C\times H\times W}$; $C'$ is an intermediate feature dimension for $Q$ and $K$. To make SAM efficient, we set $C'$ smaller than $C$.
A stripping operation (i.e., average pooling with pooling windows of size $H \times 1$) is applied to $K$ to encode the global context representation in the vertical direction, yielding $K \in \mathbb{R}^{C'\times 1\times W}$. We also tried applying $1 \times W$ pooling to incorporate context in the horizontal direction, but it had little effect on performance. Considering the computational complexity, we only use a one-direction stripping operation.
Next, we reshape $Q$ and $K$ to $\mathbb{R}^{C'\times N}$ and $\mathbb{R}^{C'\times W}$, respectively, where $N = H\times W$. Then, we calculate the strip attention map $\mathrm{SA} \in \mathbb{R}^{N\times W}$ along the horizontal direction as follows:
$\mathrm{SA} = \mathrm{softmax}(Q^{\mathrm{T}} \star K), \quad (7)$
where $\star$ denotes matrix multiplication and $\mathrm{T}$ denotes matrix transposition. Similarly, we apply stripping and reshape operations to $V$ and obtain $V \in \mathbb{R}^{C\times W}$. Then, we conduct a matrix multiplication between $\mathrm{SA}$ and $V^{\mathrm{T}}$ and reshape the result to get $F_{SA} \in \mathbb{R}^{C\times H\times W}$. The output feature can be formulated as:
$F' = F + F_{SA}. \quad (8)$
For the inputs $\tilde{F}_3$ and $\tilde{F}_2$, the outputs of SAM are denoted as $\tilde{F}'_3$ and $\tilde{F}'_2$, respectively. As shown in Figure 2, after adding SAM to our network, when $\ell = 1$ or $\ell = 2$, the input of Equations (2) and (4) should be $\tilde{F}'_{\ell+1}$. For SOD, a high-level feature is expected to be augmented by global context, whereas a shallow-level feature is supposed to place emphasis on structural details. Therefore, we do not add SAM to the shallow-level integration (i.e., level 1 in Figure 2). The experimental results in Section 5.2 demonstrate the rationality of our design (i.e., SAM-ver vs. SAM-ver-1).
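The vertical-strip variant of SAM described above can be sketched as follows; the inner channel number $C'$ and the use of batched matrix multiplication are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class StripAttention(nn.Module):
    """Strip attention module (SAM): vertical average pooling strips K and V,
    reducing the attention cost from O((HW)^2) to O(HW x W)."""
    def __init__(self, channels=64, inner=16):      # inner channels C' < C (assumed values)
        super().__init__()
        def cbr1(cin, cout):
            return nn.Sequential(nn.Conv2d(cin, cout, 1, bias=False),
                                 nn.BatchNorm2d(cout), nn.ReLU(inplace=True))
        self.q = cbr1(channels, inner)
        self.k = cbr1(channels, inner)
        self.v = cbr1(channels, channels)

    def forward(self, f):
        b, c, h, w = f.shape
        q = self.q(f).flatten(2).transpose(1, 2)       # (B, N, C'), N = H*W
        k = self.k(f).mean(dim=2)                      # vertical strip pooling -> (B, C', W)
        v = self.v(f).mean(dim=2)                      # (B, C, W)
        attn = torch.softmax(torch.bmm(q, k), dim=-1)  # strip attention map (B, N, W)
        out = torch.bmm(attn, v.transpose(1, 2))       # (B, N, C)
        return f + out.transpose(1, 2).reshape(b, c, h, w)   # residual connection, Eq. (8)
```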

3.3. Boundary Enhancement Module

An auxiliary boundary enhancement branch, which is simple but effective, is proposed to guide the network to focus on boundaries and other error-prone parts of the image. The boundary enhancement module (BEM) is illustrated in Figure 4.
We apply two convolutions, each followed by batch normalization, to the input feature to generate the attention map A, which is utilized as a weight for the loss computation in Section 3.4. Ground-truth boundary maps, which are pre-computed by the method in [52], provide guidance for the attention generation. In addition, we extract the intermediate feature as guidance to enhance and complement the input feature. As shown in Figure 4, the input feature and the guidance are concatenated together and fused by a 1 × 1 convolution with batch normalization and ReLU operations. The enhanced feature is then used to conduct the salient object prediction.
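A minimal sketch of BEM under this description is given below; the 3 × 3 kernel sizes and the sigmoid on the attention map are assumptions, since Figure 4 only specifies generic CB k×k blocks.

```python
import torch
import torch.nn as nn

class BoundaryEnhancement(nn.Module):
    """Boundary enhancement module (BEM): two Conv+BN layers produce an attention map A
    for the attention weighted loss; the intermediate feature serves as guidance and is
    fused back with the input by a 1x1 Conv+BN+ReLU."""
    def __init__(self, channels=64):                   # assumed channel width
        super().__init__()
        self.cb1 = nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1, bias=False),
                                 nn.BatchNorm2d(channels))
        self.cb2 = nn.Sequential(nn.Conv2d(channels, 1, 3, padding=1, bias=False),
                                 nn.BatchNorm2d(1))
        self.fuse = nn.Sequential(nn.Conv2d(2 * channels, channels, 1, bias=False),
                                  nn.BatchNorm2d(channels), nn.ReLU(inplace=True))

    def forward(self, f):
        guide = self.cb1(f)                            # intermediate guidance feature
        attn = torch.sigmoid(self.cb2(guide))          # attention map A in [0, 1]
        enhanced = self.fuse(torch.cat([f, guide], dim=1))
        return enhanced, attn
```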

3.4. Supervision Strategy

In this paper, a hybrid loss function is proposed to supervise the network. First, we introduce BCE [53] and IOU [54] losses to ensure smooth pixel-wise gradients as well as to optimize the global structure. For the saliency map $S$ and ground truth $G$, the BCE loss is calculated as follows:
$L_B(S, G) = -\sum_{x=1}^{H}\sum_{y=1}^{W}\left[G_{xy}\log(S_{xy}) + (1 - G_{xy})\log(1 - S_{xy})\right], \quad (9)$
where $(x, y)$ denotes the spatial position, and $H$ and $W$ represent the height and width of the image. The IOU loss is formulated as:
$L_I(S, G) = 1 - \frac{\sum_{x=1}^{H}\sum_{y=1}^{W} S_{xy} G_{xy}}{\sum_{x=1}^{H}\sum_{y=1}^{W}\left[S_{xy} + G_{xy} - S_{xy} G_{xy}\right]}. \quad (10)$
Furthermore, to boost the network to learn sharper boundaries, we propose an attention weighted loss (AW loss) based on the learned attention map A in Section 3.3. The AW loss can be considered as a combination of attention weighted BCE and IOU loss:
$L_{AW}(S, G, A) = L_{AWB}(S, G) + L_{AWI}(S, G) = -\frac{\sum_{x=1}^{H}\sum_{y=1}^{W}\left[G_{xy}\log(S_{xy}) + (1 - G_{xy})\log(1 - S_{xy})\right] A_{xy}}{\sum_{x=1}^{H}\sum_{y=1}^{W} A_{xy}} + 1 - \frac{\sum_{x=1}^{H}\sum_{y=1}^{W} S_{xy} G_{xy} A_{xy}}{\sum_{x=1}^{H}\sum_{y=1}^{W}\left[S_{xy} + G_{xy} - S_{xy} G_{xy}\right] A_{xy}}. \quad (11)$
In addition, to ensure that the attention map focuses on the boundary, we use an auxiliary weighted BCE loss $L_{AX}(A, G_b)$ to supervise $A$, where $G_b$ is the ground-truth boundary map (radius = 2) generated from $G$. The calculation of $L_{AX}$ follows [55].
The final loss function for the proposed network is as follows:
$L = L_B + L_I + \beta L_{AW} + \lambda L_{AX}, \quad (12)$
where $\beta = 1$ and $\lambda = 20$ are weighting coefficients for the loss function. We set these parameters based on experimental experience.
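A sketch of the loss computation in Equations (9)-(12) is shown below. The batch reduction (mean over images), the epsilon terms, and treating the predictions as post-sigmoid probabilities are implementation assumptions; the auxiliary edge loss $L_{AX}$ is passed in precomputed because its exact form follows [55].

```python
import torch

def aw_loss(pred, gt, attn, eps=1e-7):
    """Attention weighted loss (Equation (11)): BCE and IOU terms re-weighted by the
    attention map A from BEM, so boundary/error-prone pixels count more."""
    s, g, a = pred.flatten(1), gt.flatten(1), attn.flatten(1)
    bce = -(g * torch.log(s + eps) + (1 - g) * torch.log(1 - s + eps))
    l_awb = (bce * a).sum(dim=1) / (a.sum(dim=1) + eps)
    inter = (s * g * a).sum(dim=1)
    union = ((s + g - s * g) * a).sum(dim=1)
    l_awi = 1 - inter / (union + eps)
    return (l_awb + l_awi).mean()

def total_loss(pred, gt, attn, aux_edge_loss, beta=1.0, lam=20.0, eps=1e-7):
    """Equation (12): L = L_B + L_I + beta * L_AW + lambda * L_AX.
    Equation (9) is written as a sum over pixels; averaging it here is an assumption."""
    s, g = pred.flatten(1), gt.flatten(1)
    l_b = -(g * torch.log(s + eps) + (1 - g) * torch.log(1 - s + eps)).mean()
    l_i = (1 - (s * g).sum(dim=1) / ((s + g - s * g).sum(dim=1) + eps)).mean()
    return l_b + l_i + beta * aw_loss(pred, gt, attn) + lam * aux_edge_loss
```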

4. Results

Experimental results of the proposed work are displayed in this section. In Section 4.1 and Section 4.2, we introduce the datasets and evaluation metrics of the experimental results. Implementation details of the proposed ALNet are described in Section 4.3. In Section 4.4, we compare our method with the state-of-the-art models from both quantitative and qualitative aspects. Furthermore, we conduct extension experiments on optical remote sensing images (RSIs) and compare our ALNet with state-of-the-art RSI-SOD methods. The details are introduced in Section 4.5.

4.1. Datasets

The experiments are conducted on five benchmark datasets: ECSSD [56], HKU-IS [57], PASCAL-S [58], DUT-OMRON [59], and DUTS [60]. The ECSSD dataset contains 1000 natural images with complex structures. In HKU-IS, there are 4447 images, which include multiple salient objects or objects touching the image boundary. PASCAL-S, which is generated from the PASCAL VOC dataset [61], contains 850 images. DUT-OMRON is a challenging dataset with 5168 images. DUTS is a relatively large dataset that contains 10,553 training images and 5019 testing images. We train our network on the training images of DUTS for salient object detection.
In addition, in order to further demonstrate the stability and scalability of ALNet, we test the proposed method on two optical remote sensing datasets dedicated to SOD: ORSSD [44] and EORSSD [43]. ORSSD is the first publicly available dataset for SOD in optical remote sensing images. It contains 800 images (600 for training and 200 for testing), which are collected from Google Earth and some existing RSI datasets. EORSSD is a large public dataset for RSI-SOD that extends ORSSD to 2000 images (1400 for training and 600 for testing). Specifically, we augment the training sets of EORSSD and ORSSD by flipping and rotation, generating seven additional variants of each original training image, as sketched below. On EORSSD, we train our ALNet on 11,200 augmented pairs; on ORSSD, we train our ALNet on 4800 augmented pairs.
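The seven flip/rotation variants are not enumerated in the text; a common choice, assumed here, is the horizontal flip plus the 90°/180°/270° rotations of both the original and the flipped image, yielding eight images per original (1400 × 8 = 11,200 and 600 × 8 = 4800).

```python
from PIL import Image

def eight_fold_augment(img: Image.Image):
    """Return the original plus seven flip/rotation variants (assumed to be the
    horizontal flip and the 90/180/270-degree rotations of both), i.e. 8x the data."""
    variants = []
    for angle in (0, 90, 180, 270):
        rot = img.rotate(angle, expand=True)
        variants.append(rot)
        variants.append(rot.transpose(Image.FLIP_LEFT_RIGHT))
    return variants   # apply the same transforms to the ground-truth mask
```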

4.2. Metrics

We adopt the popular precision-recall (PR) curves, F-measure curves, mean F-measure ($F_\beta$) [62], weighted F-measure ($F_\beta^\omega$) [63], mean absolute error ($M$) [64], and mean E-measure ($E_\xi^m$) [1] as our evaluation metrics. Mean F-measure is an overall performance measurement, which is defined as:
$F_\beta = \frac{(1 + \beta^2)\times \mathrm{Precision}\times \mathrm{Recall}}{\beta^2\times \mathrm{Precision} + \mathrm{Recall}}, \quad (13)$
where $\beta^2 = 0.3$ to emphasize precision. The weighted F-measure offers an intuitive generalization of the mean F-measure by replacing precision and recall with their weighted versions $\mathrm{Precision}^\omega$ and $\mathrm{Recall}^\omega$. As suggested in [65], $\beta^2$ for the weighted F-measure is set to 1.0. The mean absolute error is defined as the average pixel-wise absolute difference between the binary ground truth $G$ and the saliency map $S$, which can be computed by:
$MAE = \frac{1}{W\times H}\sum_{x=1}^{W}\sum_{y=1}^{H}\left|S(x, y) - G(x, y)\right|, \quad (14)$
where $W$ and $H$ denote the width and height of the saliency map, respectively. The E-measure considers both local pixel values and image-level statistics. It can be computed by:
$E_\xi = \frac{1}{W\times H}\sum_{x=1}^{W}\sum_{y=1}^{H}\theta(\xi), \quad (15)$
where $\theta(\xi)$ is the enhanced alignment matrix. The mean E-measure ($E_\xi^m$) is utilized in our experiments.
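For reference, minimal NumPy sketches of MAE (Equation (14)) and the F-measure (Equation (13)) are given below; the adaptive threshold used to binarize the saliency map is a common convention, not something specified in the text.

```python
import numpy as np

def mae(sal, gt):
    """Mean absolute error (Equation (14)) between a saliency map and the binary GT,
    both given as arrays with values in [0, 1]."""
    return np.abs(sal.astype(np.float64) - gt.astype(np.float64)).mean()

def f_measure(sal, gt, beta2=0.3):
    """F-measure (Equation (13)) at a single binarization; an adaptive threshold of
    twice the mean saliency (a common convention, assumed here) is used."""
    thresh = min(2 * sal.mean(), 1.0)
    pred = sal >= thresh
    tp = np.logical_and(pred, gt > 0.5).sum()
    precision = tp / (pred.sum() + 1e-8)
    recall = tp / ((gt > 0.5).sum() + 1e-8)
    return (1 + beta2) * precision * recall / (beta2 * precision + recall + 1e-8)
```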

4.3. Implementation Details

The proposed method is implemented on the PyTorch platform. We conduct our experiments on a PC with an Intel Core i7-9700KF CPU (with 3.9 GHz Turbo boost) and a single NVIDIA RTX 2080 Ti GPU. The input images are resized to 352 × 352 for both training and testing. We use data augmentation methods such as normalization, cropping, and flipping. The parameters of the backbone are initialized from VGG16 [66], ResNet50 [67], or MSCAN-b [68] for fair comparison with existing methods. We use the SGD optimizer [69] to train the entire network end to end. The base learning rate is set to 0.05, and warm-up and linear decay strategies are used to adjust the learning rate. The momentum and the weight decay are set to 0.9 and $1\times 10^{-4}$, respectively. The batch size is set to 30 (for the ResNet50 backbone) or 20 (for the VGG16 and MSCAN-b backbones), and we train the network for 60 epochs. Apex (https://github.com/NVIDIA/apex (accessed on 20 December 2022)) and fp16 are utilized to accelerate the training process.
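A hypothetical sketch of this optimization setup (SGD, base learning rate 0.05, momentum 0.9, weight decay 1e-4, warm-up followed by linear decay) is shown below; the warm-up length and the number of iterations per epoch are assumptions, as they are not reported.

```python
import torch

def build_optimizer_and_scheduler(model, epochs=60, iters_per_epoch=350, warmup_iters=500):
    """Mirror the reported hyper-parameters; scheduling granularity is assumed per-iteration."""
    optimizer = torch.optim.SGD(model.parameters(), lr=0.05,
                                momentum=0.9, weight_decay=1e-4)
    total_iters = epochs * iters_per_epoch

    def lr_lambda(it):
        if it < warmup_iters:                          # linear warm-up
            return (it + 1) / warmup_iters
        # linear decay to zero over the remaining iterations
        return max(0.0, 1.0 - (it - warmup_iters) / (total_iters - warmup_iters))

    scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
    return optimizer, scheduler
```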
For the extended experiments on remote sensing datasets, the implementation details are the same as for the original SOD experiments; the only difference is the training dataset. Specifically, on the EORSSD and ORSSD datasets, we resize the input images to 288 × 288 and train our ALNet for 65 epochs and 45 epochs, respectively.
For the VGG16, ResNet50, and MSCAN-b backbones, the inference time of the proposed method for a 352 × 352 image is 0.0235 s (43 fps), 0.0188 s (53 fps), and 0.0280 s (36 fps), respectively, which demonstrates the feasibility of our method for real-time applications. The source code will be released to facilitate reproducibility.

4.4. Comparison to State-of-the-Art Methods

We compare our proposed algorithm with 17 state-of-the-art salient object detection methods, including AFNet [15], PAGENet [16], PS [17], ASNet [18], CPD [19], BASNet [20], EGNet [21], SCRN [22], F3Net [23], GateNet [24], GCPANet [25], ITSD [26], MINet [27], A-MSF [29], VST [35], ICON [36], and DCENet [37].
For fair comparisons, we directly use the saliency maps offered by the authors or use the provided codes to generate the results. As some algorithms employ various backbones, we compare against their best results.
Quantitative Comparison. Table 1 shows the quantitative comparison results in terms of mean F-measure, weighted F-measure, mean absolute error, and mean E-measure. We also compare the computational complexity and the size of parameters in the second and third columns of Table 1 (i.e., MACs and Params). Figure 5 shows the P-R curves and F-measure curves of our method and the state-of-the-art methods. From the results, we can see that, for VGG16 and ResNet50 backbone, our proposed network performs favorably against other state-of-the-art methods on all datasets and metrics, as well as keeps the complexity and model size relatively small, which demonstrates the effectiveness of our proposed network based on alignment integration.
For the attention based backbone, we implement our network based on MSCAN-b, which utilizes multi-scale convolution attention to encode features. Compared with the existing methods, ALNet-MS ranks first on most of the datasets and metrics. It is noteworthy that ALNet-MS has smaller MACs and Params than the existing methods. The experiments based on different backbones all prove that our proposed network can achieve state-of-the-art performance in both effectiveness and efficiency.
Qualitative Comparison. In Figure 6, we compare the visual results of the methods for qualitative evaluation. Benefiting from multi-level alignment integration, our network can generate powerful integrated features, which contain both high-level semantics and spatial details, to segment salient regions even in very challenging scenes (e.g., the 1st and 2nd rows in Figure 6). In addition, compared with other boundary learning based methods such as F3Net and A-MSF, our proposed method can generate relatively clear and accurate object boundaries.

4.5. Extension Experiment on the Remote-Sensing Datasets

To further discuss the proposed model’s robustness and scalability, we conduct experiments on optical remote sensing datasets. We compare our ALNet with four state-of-the-art RSI-SOD methods: LVNet [44], DAFNet [43], MJRB [45], and ACCoNet [46]. For fair comparison with existing methods, the network is initialized from ResNet50.
Quantitative Comparison. The quantitative comparison results of mean F-measure, weighted F-measure, mean absolute error, and mean E-measure are shown in Table 2. For the EORSSD dataset, the proposed method ranks first on all metrics. For the ORSSD dataset, the result of our method is also competitive. In Figure 7, we display the F-measure curves of the proposed method with state-of-the-art methods on two remote sensing datasets. Our method performs well against state-of-the-art RSI-SOD methods. It is worth mentioning that the proposed method is a universal framework for salient object detection and not dedicated to optical remote sensing images. However, the results demonstrate the effectiveness and scalability of the proposed network.
Qualitative Comparison. The qualitative results, including several challenging and representative scenes of optical remote sensing images, are shown in Figure 8.
For the first scene (i.e., object with shadows), being affected by the shadows, ACCoNet, MJRB, DAFNet, and LVNet cannot generate accurate and sharp boundaries, but our method can better highlight the object and produce relatively accurate results.
For the scene with a tiny object, which is typical in optical remote sensing images, our proposed method can segment the tiny object with fine details; compared with the other methods, the object shape generated by our method is closer to the ground truth.
Another difficult scene is one with multiple objects. As shown in Figure 8, ACCoNet and MJRB incorrectly predict non-salient interference in the background as foreground. DAFNet generates blurry salient regions, and LVNet fails to detect the real objects in the first row of this scene. In contrast, our method captures all objects finely without any redundant regions.
For the scene with irregular geometric structures (e.g., lakes and rivers), the saliency maps of our method obviously have sharper boundaries, and the highlighted regions are concentrated. From the visual results, we can see that our method can better deal with the complex and challenging scenes in optical remote sensing images, which further proves the reliability of the method.

5. Discussion

In this section, we conduct ablation studies for all the proposed modules (i.e., feature alignment, strip attention module, and boundary enhancement module) in our ALNet and analyze the effectiveness of them in Section 5.1, Section 5.2, and Section 5.3, respectively. For a comprehensive analysis of the model, we further discuss the failure cases in Section 5.4.

5.1. Effectiveness of Feature Alignment

We use an FCN-based model, which combines adjacent level features by direct addition, as the baseline model (i.e., w.o.Align). Different alignment technologies are applied to the baseline model. The results based on ResNet50 are shown in the first part of Table 3. F-Align and D-Align denote flow alignment and deformable alignment, respectively. The comparisons between the alignment methods and the baseline demonstrate that the misalignment problem in multi-level feature integration indeed degrades performance. Deformable alignment performs better than flow alignment, which indicates the importance of offset diversity.
In addition, we visualize the last-stage integrated features of different methods to make the results explainable, as shown in Figure 9. As we can see, features without alignment are fuzzy and lack both semantic and structural information. After alignment, the models can generate more meaningful feature representation. Compared with flow alignment, deformable alignment features can better highlight salient regions and have more precise boundaries, which coincides with the quantitative results in Table 3.

5.2. Effectiveness of SAM

On the basis of D-Align, we conduct stripping operations in both the vertical and horizontal directions for the integrated feature. In the second part of Table 3, we list the results of using vertical stripping (+SAM-ver), horizontal stripping (+SAM-hori), and both directions (+BSAM). The experimental results demonstrate that SAM is effective for adaptively encoding global contextual relations for the integrated feature. SAM-ver is superior to SAM-hori on most of the datasets, and using both directions does not bring further improvement. On the basis of SAM-ver, we also add SAM to the integration of feature level 1 (SAM-ver-1). The results show that SAM-ver is better than SAM-ver-1, which indicates that shallow-level integration prefers spatial details to global context.
Furthermore, SAM is essentially a simplified self-attention mechanism; to further prove its effectiveness, we compare it with a non-local module in Table 3. Compared with non-local attention, SAM performs better in our network and, thanks to the stripping operations, is more computationally efficient.

5.3. Effectiveness of BEM

The boundary enhancement module is a simple but effective branch that helps the network generate clear boundaries. In the third part of Table 3, we conduct ablation studies for BEM based on +SAM-ver. The results indicate that BEM is effective for performance improvement. Removing $L_{AX}$ or $L_{AW}$ lowers the final results, which proves the effectiveness of each part of BEM.
In Figure 10, we compare the saliency maps with and without BEM and visualize attention map A at the same time.
From the results, we can see that BEM can learn reasonable attention maps, which make the network put more emphasis on boundary and error-prone regions. The results with BEM obviously have sharper boundaries and can deal with more complex backgrounds.

5.4. Failure Cases

The failure cases of our method are displayed in Figure 11.
In the first row, our method incorrectly predicts the oranges as foreground objects. In the second row, the whole bed (not just the pillow) is taken as the salient object by our method. In the third row, our method cannot detect the real object (i.e., the board with “organic”). Similarly, other state-of-the-art methods also fail in these cases. We summarize the possible reasons for these failure cases: (1) insufficient training samples (e.g., 1st row); (2) controversial annotations (e.g., 2nd row); (3) overly complex scenes that require additional information, such as depth (e.g., 3rd row).

6. Conclusions

In this paper, an alignment integration network (ALNet) is proposed to alleviate the misalignment problems in combining multi-level convolutional features. Feature alignment is designed in our network to align adjacent level features step by step to produce effective feature representations for salient object detection. To help the network encode global context, a strip attention module is introduced to augment the representative capacity of the features. Finally, we construct a boundary enhancement module and an attention weighted loss function to make the network focus on boundaries and hard regions. Comprehensive experiments are conducted on five SOD benchmarks and two remote sensing datasets. The experimental results demonstrate the state-of-the-art performance of our ALNet as well as the effectiveness of each proposed module.

Author Contributions

X.Z. designed and implemented the whole model architecture and wrote the manuscript. Y.Y. and Y.W. proofread the manuscript. X.C. and C.W. provided suggestions and reviewed the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The source code and datasets will be available at https://github.com/zhangxiaoning666/ (accessed on 18 July 2023).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Fan, D.; Gong, C.; Cao, Y.; Ren, B.; Cheng, M.-M.; Borji, A. Enhanced-alignment measure for binary foreground map evaluation. In Proceedings of the International Joint Conference on Artificial Intelligence, Stockholm, Sweden, 13–19 July 2018; pp. 698–704. [Google Scholar]
  2. Yang, X.; Qian, X.; Xue, Y. Scalable mobile image retrieval by exploring contextual saliency. IEEE Trans. Image Process. 2015, 24, 1709–1721. [Google Scholar] [CrossRef] [PubMed]
  3. Mahadevan, V.; Vasconcelos, N. Biologically inspired object tracking using center-surround saliency mechanisms. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 541–554. [Google Scholar] [CrossRef] [Green Version]
  4. Borji, A.; Frintrop, S.; Sihite, D.N.; Itti, L. Adaptive object tracking by learning background context. In Proceedings of the 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Providence, RI, USA, 16–21 June 2012; pp. 23–30. [Google Scholar]
  5. Chen, F.; Liu, H.; Zeng, Z.; Zhou, X.; Tan, X. Bes-net: Boundary enhancing semantic context network for high-resolution image semantic segmentation. Remote Sens. 2022, 14, 1638. [Google Scholar] [CrossRef]
  6. Zhao, K.; Han, Q.; Zhang, C.; Xu, J.; Cheng, M. Deep hough transform for semantic line detection. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 4793–4806. [Google Scholar] [CrossRef] [PubMed]
  7. Wang, L.; Lu, H.; Ruan, X.; Yang, M.-H. Deep networks for saliency detection via local estimation and global search. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015. [Google Scholar]
  8. Liu, N.; Han, J. Dhsnet: Deep hierarchical saliency network for salient object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar]
  9. Hou, Q.; Cheng, M.-M.; Hu, X.; Borji, A.; Tu, Z.; Torr, P.H. Deeply supervised salient object detection with short connections. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 3203–3212. [Google Scholar]
  10. Zhang, P.; Wang, D.; Lu, H.; Wang, H.; Ruan, X. Amulet: Aggregating multi-level convolutional features for salient object detection. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017. [Google Scholar]
  11. Liu, N.; Han, J.; Yang, M.-H. Picanet: Learning pixel-wise contextual attention for saliency detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 3089–3098. [Google Scholar]
  12. Zhang, X.; Wang, T.; Qi, J.; Lu, H.; Wang, G. Progressive attention guided recurrent network for salient object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018. [Google Scholar]
  13. Liu, J.-J.; Hou, Q.; Cheng, M.-M.; Feng, J.; Jiang, J. A simple pooling-based design for real-time salient object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 3917–3926. [Google Scholar]
  14. Zhu, L.; Chen, J.; Hu, X.; Fu, C.-W.; Xu, X.; Qin, J.; Heng, P.-A. Aggregating attentional dilated features for salient object detection. IEEE Trans. Circuits Syst. Video Technol. 2020, 30, 3358–3371. [Google Scholar] [CrossRef]
  15. Feng, M.; Lu, H.; Ding, E. Attentive feedback network for boundary-aware salient object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 1623–1632. [Google Scholar]
  16. Wang, W.; Zhao, S.; Shen, J.; Hoi, S.C.; Borji, A. Salient object detection with pyramid attention and salient edges. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 1448–1457. [Google Scholar]
  17. Wang, W.; Shen, J.; Cheng, M.-M.; Shao, L. An iterative and cooperative top-down and bottom-up inference network for salient object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 5968–5977. [Google Scholar]
  18. Wang, W.; Shen, J.; Dong, X.; Borji, A.; Yang, R. Inferring salient objects from human fixations. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 1913–1927. [Google Scholar] [CrossRef] [PubMed]
  19. Wu, Z.; Su, L.; Huang, Q. Cascaded partial decoder for fast and accurate salient object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 3907–3916. [Google Scholar]
  20. Qin, X.; Zhang, Z.; Huang, C.; Gao, C.; Dehghan, M.; Jagersand, M. Basnet: Boundary-aware salient object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 7479–7489. [Google Scholar]
  21. Zhao, J.; Liu, J.; Fan, D.; Cao, Y.; Yang, J.; Cheng, M. Egnet:edge guidance network for salient object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 8779–8788. [Google Scholar]
  22. Wu, Z.; Su, L.; Huang, Q. Stacked cross refinement network for edge-aware salient object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 7264–7273. [Google Scholar]
  23. Wei, J.; Wang, S.; Huang, Q. F3net: Fusion, feedback and focus for salient object detection. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 12321–12328. [Google Scholar]
  24. Zhao, X.; Pang, Y.; Zhang, L.; Lu, H.; Zhang, L. Suppress and balance: A simple gated network for salient object detection. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 35–51. [Google Scholar]
  25. Chen, Z.; Xu, Q.; Cong, R.; Huang, Q. Global context-aware progressive aggregation network for salient object detection. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 10599–10606. [Google Scholar]
  26. Zhou, H.; Xie, X.; Lai, J.; Chen, Z.; Yang, L. Interactive two-stream decoder for accurate and fast saliency detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 9141–9150. [Google Scholar]
  27. Pang, Y.; Zhao, X.; Zhang, L.; Lu, H. Multi-scale interactive network for salient object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 9413–9422. [Google Scholar]
  28. Sun, L.; Chen, Z.; Wu, Q.J.; Zhao, H.; He, W.; Yan, X. Ampnet: Average-and max-pool networks for salient object detection. IEEE Trans. Circuits Syst. Video Technol. 2021, 31, 4321–4333. [Google Scholar] [CrossRef]
  29. Zhang, M.; Liu, T.; Piao, Y.; Yao, S.; Lu, H. Auto-msfnet: Search multi-scale fusion network for salient object detection. In Proceedings of the 29th ACM International Conference on Multimedia, Chengdu, China, 20–24 October 2021; pp. 667–676. [Google Scholar]
  30. Hu, X.; Fu, C.-W.; Zhu, L.; Wang, T.; Heng, P.-A. Sac-net: Spatial attenuation context for salient object detection. IEEE Trans. Circuits Syst. Video Technol. 2021, 31, 1079–1090. [Google Scholar] [CrossRef]
  31. Hussain, R.; Karbhari, Y.; Ijaz, M.F.; Woźniak, M.; Singh, P.K.; Sarkar, R. Revise-net: Exploiting reverse attention mechanism for salient object detection. Remote Sens. 2021, 13, 4941. [Google Scholar] [CrossRef]
  32. Huang, Z.; Chen, H.; Liu, B.; Wang, Z. Semantic-guided attention refinement network for salient object detection in optical remote sensing images. Remote Sens. 2021, 13, 2163. [Google Scholar] [CrossRef]
  33. Zhang, L.; Zhang, Q.; Zhao, R. Progressive dual-attention residual network for salient object detection. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 5902–5915. [Google Scholar] [CrossRef]
  34. Zhang, Q.; Duanmu, M.; Luo, Y.; Liu, Y.; Han, J. Engaging part-whole hierarchies and contrast cues for salient object detection. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 3644–3658. [Google Scholar] [CrossRef]
  35. Liu, N.; Zhang, N.; Wan, K.; Shao, L.; Han, J. Visual saliency transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021. [Google Scholar]
  36. Zhuge, M.; Fan, D.; Liu, N.; Zhang, D.; Xu, D.; Shao, L. Salient object detection via integrity learning. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 3738–3752. [Google Scholar] [CrossRef]
  37. Mei, H.; Liu, Y.; Wei, Z.; Zhou, D.; Wei, X.; Zhang, Q.; Yang, X. Exploring dense context for salient object detection. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 1378–1389. [Google Scholar] [CrossRef]
  38. Wang, W.; Lai, Q.; Fu, H.; Shen, J.; Ling, H.; Yang, R. Salient object detection in the deep learning era: An in-depth survey. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 3239–3259. [Google Scholar] [CrossRef]
  39. Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015. [Google Scholar]
  40. Li, X.; You, A.; Zhu, Z.; Zhao, H.; Yang, M.; Yang, K.; Tan, S.; Tong, Y. Semantic flow for fast and accurate scene parsing. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 775–793. [Google Scholar]
  41. Wang, X.; Girshick, R.; Gupta, A.; He, K. Non-local neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7794–7803. [Google Scholar]
  42. Song, Q.; Mei, K.; Huang, R. Attanet: Attention-augmented network for fast and accurate scene parsing. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual, 2–9 February 2021; Volume 35, pp. 2567–2575. [Google Scholar]
  43. Zhang, Q.; Cong, R.; Li, C.; Cheng, M.; Fang, Y.; Cao, X.; Zhao, Y.; Kwong, S. Dense attention fluid network for salient object detection in optical remote sensing images. IEEE Trans. Image Process. 2021, 30, 1305–1317. [Google Scholar] [CrossRef]
  44. Li, C.; Cong, R.; Hou, J.; Zhang, S.; Qian, Y.; Kwong, S. Nested network with two-stream pyramid for salient object detection in optical remote sensing images. IEEE Trans. Geosci. Remote Sens. 2019, 57, 9156–9166. [Google Scholar] [CrossRef] [Green Version]
  45. Tu, Z.; Wang, C.; Li, C.; Fan, M.; Zhao, H.; Luo, B. Orsi salient object detection via multiscale joint region and boundary model. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–13. [Google Scholar] [CrossRef]
  46. Li, G.; Liu, Z.; Zeng, D.; Lin, W.; Ling, H. Adjacent context coordination network for salient object detection in optical remote sensing images. IEEE Trans. Cybern. 2023, 53, 526–538. [Google Scholar] [CrossRef] [PubMed]
  47. Lafferty, J.; McCallum, A.; Pereira, F.C. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning, Williamstown, MA, USA, June 2001; pp. 282–289. [Google Scholar]
  48. Wang, Z.; Simoncelli, E.P.; Bovik, A.C. Multiscale structural similarity for image quality assessment. In Proceedings of the Thrity-Seventh Asilomar Conference on Signals, Systems Computers, Pacific Grove, CA, USA, 9–12 November 2003; Volume 2, pp. 1398–1402. [Google Scholar]
  49. Bokhovkin, A.; Burnaev, E. Boundary loss for remote sensing imagery semantic segmentation. In Proceedings of the International Symposium on Neural Networks, Moscow, Russia, 10–12 July; Springer: Berlin/Heidelberg, Germany, 2019; pp. 388–401. [Google Scholar]
  50. Zhao, T.; Wu, X. Pyramid feature attention network for saliency detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 3085–3094. [Google Scholar]
  51. Zhu, X.; Hu, H.; Lin, S.; Dai, J. Deformable convnets v2: More deformable, better results. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019. [Google Scholar]
  52. Takikawa, T.; Acuna, D.; Jampani, V.; Fidler, S. Gated-scnn: Gated shape cnns for semantic segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 5229–5238. [Google Scholar]
  53. Boer, P.-T.D.; Kroese, D.P.; Mannor, S.; Rubinstein, R.Y. A tutorial on the cross-entropy method. Ann. Oper. Res. 2005, 134, 19–67. [Google Scholar] [CrossRef]
  54. Yu, J.; Jiang, Y.; Wang, Z.; Cao, Z.; Huang, T. Unitbox: An advanced object detection network. In Proceedings of the 24th ACM International Conference on Multimedia, Amsterdam, The Netherlands, 15–19 October 2016; pp. 516–520. [Google Scholar]
  55. Borse, S.; Wang, Y.; Zhang, Y.; Porikli, F. Inverseform: A loss function for structured boundary-aware segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 5901–5911. [Google Scholar]
  56. Yan, Q.; Xu, L.; Shi, J.; Jia, J. Hierarchical saliency detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Portland, OR, USA, 23–28 June 2013. [Google Scholar]
  57. Li, G.; Yu, Y. Visual saliency based on multiscale deep features. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015. [Google Scholar]
  58. Li, Y.; Hou, X.; Koch, C.; Rehg, J.M.; Yuille, A.L. The secrets of salient object segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 23–28 June 2014. [Google Scholar]
  59. Yang, C.; Zhang, L.; Lu, H.; Ruan, X.; Yang, M.-H. Saliency detection via graph-based manifold ranking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Portland, OR, USA, 23–28 June 2013. [Google Scholar]
  60. Wang, L.; Lu, H.; Wang, Y.; Feng, M.; Wang, D.; Yin, B.; Ruan, X. Learning to detect salient objects with image-level supervision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 June 2017. [Google Scholar]
  61. Everingham, M.; Gool, L.V.; Williams, C.K.I.; Winn, J.; Zisserman, A. The pascal visual object classes (voc) challenge. Int. J. Comput. Vis. 2010, 88, 303–338. [Google Scholar] [CrossRef] [Green Version]
  62. Achanta, R.; Hemami, S.; Estrada, F.; Susstrunk, S. Frequency-tuned salient region detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Miami, FL, USA, 20—25 June 2009; pp. 1597–1604. [Google Scholar]
  63. Margolin, R.; Zelnik-Manor, L.; Tal, A. How to evaluate foreground maps? In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 23–28 June 2014; pp. 248–255. [Google Scholar]
  64. Perazzi, F.; Krahenbuhl, P.; Pritch, Y.; Hornung, A. Saliency filters: Contrast based filtering for salient region detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Providence, RI, USA, 16–21 June 2012; pp. 733–740. [Google Scholar]
  65. Borji, A.; Cheng, M.M.; Jiang, H.; Li, J. Salient object detection: A benchmark. IEEE Trans. Image Process. 2015, 24, 5706–5722. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  66. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. In Proceedings of the International Conference on Learning Representations, San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
  67. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar]
  68. Guo, M.H.; Lu, C.; Hou, Q.; Liu, Z.; Cheng, M.; Hu, S. Segnext: Rethinking convolutional attention design for semantic segmentation. Adv. Neural Inf. Process. Syst. 2022, 35, 1140–1156. [Google Scholar]
  69. Bottou, L. Stochastic gradient descent tricks. Neural Netw. Tricks Trade 2012, 7700, 421–436. [Google Scholar]
  70. Tay, F.E.H.; Yuan, L.; Chen, Y.; Wang, T.; Yu, W.; Shi, Y.; Jiang, Z.; Tay, F.E.; Feng, J.; Yan, S. Tokens-to-token vit: Training vision transformers from scratch on imagenet. In Proceedings of the International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021. [Google Scholar]
  71. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021. [Google Scholar]
Figure 1. Illustration of various alignment technologies. CBR 3 × 3 means a 3 × 3 convolution followed by batch normalization and ReLU operations. Conv 3 × 3 is a 3 × 3 convolution to generate offset. The feature maps of each model are visualized by averaging along the channel dimension. Larger values are denoted by hot colors, and vice versa.
Figure 2. Main framework of our alignment integration network; CBR k × k means a k × k convolution followed by batch normalization and ReLU operations. We first side-output the multi-level convolutional features from the backbone and process them with the pre-process module. An additional 3 × 3 convolution operation is applied to the top-level feature to encode high-level semantics. Then, features from multiple levels are fed into the alignment integration module, in which adjacent level features are progressively combined by feature alignment. A strip attention module is utilized to capture non-local contextual information for the intermediate integrated feature. The final integrated feature is further enhanced by a boundary enhancement module, and the enhanced feature is exploited to conduct the salient object prediction.
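Purely to illustrate the data flow described in the caption above, a schematic forward pass could be organized as follows. The class and attribute names (ALNetSketch, pre_process, align_blocks, sam, bem, head) are placeholders, and the exact placement of SAM and BEM is our reading of the figure rather than the official code.

```python
import torch.nn as nn
import torch.nn.functional as F


class ALNetSketch(nn.Module):
    """Rough sketch of the Figure 2 data flow (hypothetical names)."""

    def __init__(self, backbone, pre_process, align_blocks, sam, bem, head):
        super().__init__()
        self.backbone = backbone          # multi-level feature extractor
        self.pre_process = pre_process    # per-level channel-reduction (CBR) blocks
        self.align_blocks = align_blocks  # one alignment-integration block per adjacent pair
        self.sam = sam                    # strip attention for intermediate integrations
        self.bem = bem                    # boundary enhancement module
        self.head = head                  # 1x1 convolution producing saliency logits

    def forward(self, image):
        feats = [p(f) for p, f in zip(self.pre_process, self.backbone(image))]
        x = feats[-1]                                      # top-level semantics
        for i, (shallow, align) in enumerate(zip(reversed(feats[:-1]), self.align_blocks)):
            x = align(shallow, x)                          # align the deeper feature, then fuse
            if i < len(self.align_blocks) - 1:
                x = self.sam(x)                            # non-local context on intermediate features
        x, attention_map = self.bem(x)                     # enhanced feature + attention map A
        pred = self.head(x)
        pred = F.interpolate(pred, size=image.shape[2:], mode='bilinear', align_corners=False)
        return pred, attention_map
```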
Figure 3. Illustration of the strip attention module. Strip-wise operations are utilized to reduce the computational complexity of this module.
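The precise formulation of SAM is given in the paper body; as a generic illustration of why strip-shaped attention is cheaper than full non-local attention, the sketch below (hypothetical names, vertical variant only) lets each pixel attend to the H positions in its own column instead of all H × W positions, reducing the attention cost from O((HW)²) to O(H²W).

```python
import torch
import torch.nn as nn


class VerticalStripAttention(nn.Module):
    """Generic column-wise self-attention sketch; not the paper's exact SAM."""

    def __init__(self, channels, key_dim=64):
        super().__init__()
        self.query = nn.Conv2d(channels, key_dim, 1)
        self.key = nn.Conv2d(channels, key_dim, 1)
        self.value = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        # Treat every column as an independent sequence of length h
        q = self.query(x).permute(0, 3, 2, 1).reshape(b * w, h, -1)   # (b*w, h, d)
        k = self.key(x).permute(0, 3, 2, 1).reshape(b * w, h, -1)
        v = self.value(x).permute(0, 3, 2, 1).reshape(b * w, h, c)
        attn = torch.softmax(q @ k.transpose(1, 2) / q.shape[-1] ** 0.5, dim=-1)
        out = (attn @ v).reshape(b, w, h, c).permute(0, 3, 2, 1)      # back to (b, c, h, w)
        return x + out                                                # residual connection
```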
Figure 4. Illustration of the boundary enhancement module; CB k × k denotes a k × k convolution followed by batch normalization. The input feature is processed to generate an attention map for the attention weighted loss, and the intermediate feature serves as guidance to produce the enhanced feature.
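The attention weighted loss is defined in the main text; the snippet below is only a schematic of how an attention map such as A can re-weight a pixel-wise BCE term so that boundary and other error-prone pixels contribute more. The function name and weighting scheme are hypothetical.

```python
import torch.nn.functional as F


def attention_weighted_bce(pred_logits, gt, attention, eps=1e-6):
    """Pixel-wise BCE re-weighted by an attention map A in [0, 1] (hypothetical scheme).

    Pixels the attention map marks as boundary / error-prone contribute more
    to the loss than easy interior pixels.
    """
    weight = 1.0 + attention                       # emphasize highlighted pixels
    bce = F.binary_cross_entropy_with_logits(pred_logits, gt, reduction='none')
    return (weight * bce).sum() / (weight.sum() + eps)


# Example usage (all tensors of shape (B, 1, H, W)):
# loss = attention_weighted_bce(pred_logits, gt_mask, attention_map.detach())
```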
Figure 5. P-R curves and F-measure curves of the proposed method compared with other state-of-the-art methods on five benchmark datasets.
Figure 6. Visual comparisons of our results and the state-of-the-art methods. Our method can uniformly highlight salient regions and produce sharper boundaries even with complex background distractions in the scene.
Figure 7. F-measure curves of the proposed method compared with state-of-the-art RSI-SOD methods on two remote sensing datasets.
Figure 8. Visual comparisons with four representative state-of-the-art RSI-SOD methods.
Figure 9. Visualization of feature maps with and without alignment. Larger values appear in warmer colors and smaller values in cooler colors.
Figure 10. Visual comparisons for BEM. The attention map A generated by BEM is also visualized; w.o.BEM denotes our network without BEM.
Figure 11. Failure cases of our proposed method and other state-of-the-art methods.
Table 1. Comparisons with 17 methods on 5 benchmark datasets. The best two results in each part are shown in red and blue; ↑ indicates that a higher value is better, whereas ↓ indicates that a lower value is better. ‘-V’: VGG16 [66], ‘-R’: ResNet50 [67], ‘-T2’: T2T-ViT [70], ‘-S’: SWIN [71], ‘-MS’: MSCAN-b [68].
Method | MACs (G) | Params (M) | ECSSD | HKU-IS | PASCAL-S | DUTS | DUT-OMRON
Each dataset cell lists Fβ / Fβω / M / Eξm.

VGG-based backbone
PAGENet (2019) | - | - | 0.904 / 0.886 / 0.042 / 0.936 | 0.884 / 0.865 / 0.037 / 0.935 | 0.811 / 0.783 / 0.076 / 0.878 | 0.793 / 0.769 / 0.052 / 0.883 | 0.743 / 0.722 / 0.062 / 0.849
AFNet (2019) | 21.66 | 35.95 | 0.905 / 0.886 / 0.042 / 0.935 | 0.888 / 0.869 / 0.036 / 0.934 | 0.824 / 0.797 / 0.070 / 0.883 | 0.812 / 0.785 / 0.046 / 0.893 | -
ASNet (2020) | - | - | 0.890 / 0.865 / 0.047 / 0.926 | 0.873 / 0.846 / 0.041 / 0.923 | 0.817 / 0.784 / 0.070 / 0.882 | - | 0.760 / 0.715 / 0.061 / 0.854
ALNet-V | 48.24 | 15.95 | 0.928 / 0.915 / 0.033 / 0.950 | 0.920 / 0.910 / 0.027 / 0.956 | 0.836 / 0.815 / 0.064 / 0.900 | 0.853 / 0.836 / 0.037 / 0.921 | 0.765 / 0.744 / 0.056 / 0.864

ResNet50-based backbone
PS (2019) | - | - | 0.904 / 0.881 / 0.041 / 0.937 | 0.883 / 0.856 / 0.038 / 0.933 | 0.814 / 0.780 / 0.071 / 0.883 | 0.804 / 0.762 / 0.048 / 0.892 | 0.760 / 0.730 / 0.061 / 0.867
CPD (2019) | 17.7 | 47.85 | 0.913 / 0.898 / 0.037 / 0.942 | 0.892 / 0.875 / 0.034 / 0.938 | 0.819 / 0.794 / 0.071 / 0.882 | 0.821 / 0.795 / 0.043 / 0.898 | 0.742 / 0.719 / 0.056 / 0.847
BASNet (2019) | 127.36 | 87.06 | 0.917 / 0.904 / 0.037 / 0.943 | 0.902 / 0.889 / 0.032 / 0.943 | 0.818 / 0.793 / 0.076 / 0.879 | 0.822 / 0.803 / 0.048 / 0.895 | 0.767 / 0.751 / 0.056 / 0.865
EGNet (2019) | 157.21 | 111.69 | 0.918 / 0.903 / 0.037 / 0.943 | 0.902 / 0.887 / 0.031 / 0.944 | 0.823 / 0.795 / 0.074 / 0.881 | 0.839 / 0.815 / 0.039 / 0.907 | 0.760 / 0.738 / 0.053 / 0.857
SCRN (2019) | 15.09 | 25.23 | 0.916 / 0.900 / 0.037 / 0.939 | 0.894 / 0.876 / 0.034 / 0.935 | 0.833 / 0.807 / 0.063 / 0.892 | 0.833 / 0.803 / 0.040 / 0.900 | 0.749 / 0.720 / 0.056 / 0.848
F3Net (2020) | 16.43 | 25.54 | 0.924 / 0.912 / 0.033 / 0.948 | 0.910 / 0.900 / 0.028 / 0.952 | 0.835 / 0.816 / 0.061 / 0.898 | 0.851 / 0.835 / 0.035 / 0.920 | 0.766 / 0.747 / 0.053 / 0.864
GateNet (2020) | 162.13 | 128.63 | 0.913 / 0.894 / 0.040 / 0.936 | 0.897 / 0.880 / 0.033 / 0.937 | 0.826 / 0.797 / 0.067 / 0.886 | 0.837 / 0.809 / 0.040 / 0.906 | 0.757 / 0.729 / 0.055 / 0.855
GCPANet (2020) | 54.31 | 67.06 | 0.916 / 0.903 / 0.035 / 0.944 | 0.901 / 0.889 / 0.031 / 0.944 | 0.829 / 0.808 / 0.062 / 0.895 | 0.841 / 0.821 / 0.038 / 0.911 | 0.756 / 0.734 / 0.056 / 0.853
ITSD (2020) | 15.96 | 26.47 | 0.921 / 0.910 / 0.034 / 0.947 | 0.904 / 0.894 / 0.031 / 0.947 | 0.831 / 0.812 / 0.066 / 0.894 | 0.840 / 0.823 / 0.041 / 0.913 | 0.768 / 0.750 / 0.061 / 0.865
MINet (2020) | 87.11 | 126.38 | 0.923 / 0.911 / 0.033 / 0.950 | 0.909 / 0.897 / 0.029 / 0.952 | 0.830 / 0.809 / 0.064 / 0.896 | 0.844 / 0.825 / 0.037 / 0.917 | 0.757 / 0.738 / 0.056 / 0.860
A-MSF (2021) | 17.5 | 32.5 | 0.927 / 0.916 / 0.033 / 0.951 | 0.912 / 0.903 / 0.027 / 0.956 | 0.842 / 0.822 / 0.061 / 0.901 | 0.855 / 0.841 / 0.034 / 0.928 | 0.772 / 0.757 / 0.050 / 0.873
DCENet (2022) | 59.78 | 192.96 | 0.924 / 0.913 / 0.035 / 0.948 | 0.908 / 0.898 / 0.029 / 0.951 | 0.845 / 0.825 / 0.061 / 0.902 | 0.849 / 0.833 / 0.038 / 0.918 | 0.769 / 0.753 / 0.055 / 0.865
ICON-R (2023) | 20.91 | 33.09 | 0.928 / 0.918 / 0.032 / 0.954 | 0.912 / 0.902 / 0.029 / 0.953 | 0.838 / 0.818 / 0.064 / 0.899 | 0.853 / 0.836 / 0.037 / 0.924 | 0.779 / 0.761 / 0.057 / 0.876
ALNet-R | 19.82 | 28.46 | 0.932 / 0.923 / 0.030 / 0.955 | 0.921 / 0.913 / 0.026 / 0.959 | 0.843 / 0.826 / 0.059 / 0.907 | 0.860 / 0.847 / 0.035 / 0.928 | 0.778 / 0.761 / 0.055 / 0.874

Attention-based backbone
VST-T2 (2021) | 23.16 | 44.63 | 0.920 / 0.910 / 0.033 / 0.951 | 0.907 / 0.897 / 0.029 / 0.952 | 0.835 / 0.816 / 0.061 / 0.902 | 0.845 / 0.828 / 0.037 / 0.919 | 0.774 / 0.755 / 0.058 / 0.871
ICON-S (2023) | 52.59 | 94.30 | 0.940 / 0.936 / 0.023 / 0.966 | 0.929 / 0.925 / 0.022 / 0.968 | 0.865 / 0.854 / 0.048 / 0.924 | 0.893 / 0.886 / 0.025 / 0.954 | 0.815 / 0.804 / 0.043 / 0.900
ALNet-MS | 15.14 | 27.45 | 0.943 / 0.938 / 0.024 / 0.964 | 0.936 / 0.932 / 0.020 / 0.969 | 0.866 / 0.851 / 0.051 / 0.922 | 0.899 / 0.893 / 0.024 / 0.955 | 0.817 / 0.806 / 0.043 / 0.903
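For reference, the Fβ and M (MAE) columns above follow the standard salient object detection definitions; the NumPy sketch below uses β² = 0.3 and a simple adaptive threshold, which may differ from the exact thresholding protocol used in the paper, and the weighted F-measure (Fβω) and E-measure (Eξm) follow their original papers.

```python
import numpy as np


def mae(sal, gt):
    """Mean absolute error (the M column) between maps scaled to [0, 1]."""
    return np.abs(sal.astype(np.float64) - gt.astype(np.float64)).mean()


def f_measure(sal, gt, beta2=0.3, threshold=None):
    """F-beta score with beta^2 = 0.3, the usual SOD convention."""
    if threshold is None:
        # A common adaptive choice: twice the mean saliency value, capped at 1
        threshold = min(2 * sal.mean(), 1.0)
    binary = sal >= threshold
    tp = np.logical_and(binary, gt > 0.5).sum()
    precision = tp / (binary.sum() + 1e-8)
    recall = tp / ((gt > 0.5).sum() + 1e-8)
    return (1 + beta2) * precision * recall / (beta2 * precision + recall + 1e-8)
```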
Table 2. Comparisons with four state-of-the-art RSI-SOD methods on two remote sensing datasets. The best two results are shown in red and blue.
Methods | Backbone | EORSSD | ORSSD
Each dataset cell lists Fβ / Fβω / M / Eξm.

LVNet (2019) | VGG | 0.736 / 0.702 / 0.015 / 0.882 | 0.800 / 0.775 / 0.021 / 0.926
DAFNet (2021) | ResNet | 0.784 / 0.783 / 0.006 / 0.929 | 0.851 / 0.844 / 0.011 / 0.954
MJRB (2022) | ResNet | 0.806 / 0.792 / 0.010 / 0.921 | 0.857 / 0.842 / 0.015 / 0.939
ACCoNet (2023) | ResNet | 0.846 / 0.852 / 0.007 / 0.966 | 0.895 / 0.896 / 0.009 / 0.977
ALNet-R | ResNet | 0.865 / 0.865 / 0.006 / 0.967 | 0.895 / 0.892 / 0.009 / 0.975
Table 3. Ablation study of the proposed method, with results based on the ResNet50 backbone. The table is divided into three parts to demonstrate the effectiveness of each proposed module in ALNet. The best results are shown in bold.
Settings | ECSSD | PASCAL-S | DUTS | DUT-OMRON
Each dataset cell lists Fβ / Fβω / M.

Effectiveness of alignment
w.o.Align | 0.898 / 0.883 / 0.043 | 0.814 / 0.790 / 0.070 | 0.808 / 0.787 / 0.045 | 0.727 / 0.701 / 0.063
F-Align | 0.921 / 0.908 / 0.035 | 0.834 / 0.812 / 0.062 | 0.847 / 0.830 / 0.038 | 0.757 / 0.736 / 0.056
D-Align | 0.923 / 0.910 / 0.034 | 0.837 / 0.816 / 0.062 | 0.852 / 0.835 / 0.037 | 0.766 / 0.747 / 0.056

Effectiveness of SAM
+SAM-ver | 0.928 / 0.917 / 0.032 | 0.838 / 0.818 / 0.063 | 0.859 / 0.845 / 0.035 | 0.777 / 0.758 / 0.054
+SAM-hori | 0.924 / 0.912 / 0.033 | 0.841 / 0.822 / 0.061 | 0.856 / 0.842 / 0.036 | 0.773 / 0.755 / 0.056
+BSAM | 0.927 / 0.916 / 0.032 | 0.837 / 0.818 / 0.064 | 0.854 / 0.839 / 0.037 | 0.768 / 0.749 / 0.062
+SAM-ver-1 | 0.926 / 0.915 / 0.033 | 0.842 / 0.822 / 0.062 | 0.855 / 0.840 / 0.036 | 0.769 / 0.750 / 0.058
+Non-Local | 0.924 / 0.912 / 0.034 | 0.837 / 0.815 / 0.064 | 0.852 / 0.835 / 0.037 | 0.772 / 0.752 / 0.055

Effectiveness of BEM
+BEM | 0.932 / 0.923 / 0.030 | 0.843 / 0.826 / 0.059 | 0.860 / 0.847 / 0.035 | 0.778 / 0.761 / 0.055
w/o L_AX | 0.927 / 0.917 / 0.031 | 0.836 / 0.817 / 0.064 | 0.855 / 0.841 / 0.036 | 0.771 / 0.753 / 0.057
w/o L_AW | 0.928 / 0.917 / 0.031 | 0.843 / 0.824 / 0.061 | 0.855 / 0.841 / 0.035 | 0.777 / 0.759 / 0.053
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
