Article

SREDet: Semantic-Driven Rotational Feature Enhancement for Oriented Object Detection in Remote Sensing Images

1 School of Electronic Information Engineering, Beihang University, Beijing 100191, China
2 Institute of Unmanned System, Beihang University, Beijing 100191, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(13), 2317; https://doi.org/10.3390/rs16132317
Submission received: 24 May 2024 / Revised: 19 June 2024 / Accepted: 21 June 2024 / Published: 25 June 2024

Abstract:
Significant progress has been achieved in the field of oriented object detection (OOD) in recent years. Compared to natural images, objects in remote sensing images are densely arranged and arbitrarily oriented, and the images contain a large amount of background information. Feature extraction in OOD becomes more challenging due to the diversity of object orientations. In this paper, we propose a semantic-driven rotational feature enhancement method, termed SREDet, to fully leverage the joint semantic and spatial information of oriented objects in remote sensing images. We first construct a multi-rotation feature pyramid network (MRFPN), which fuses multi-angle and multiscale feature maps to enhance the capability to extract features from different orientations. Then, considering the feature confusion and contamination caused by the dense arrangement of objects and background interference, we present a semantic-driven feature enhancement module (SFEM), which decouples features in the spatial domain to separately enhance object features and weaken background features. Furthermore, we introduce an error source evaluation metric for rotated object detection to further analyze detection errors and demonstrate the effectiveness of our method. Extensive experiments show that SREDet achieves superior performance on two commonly used remote sensing object detection datasets (i.e., DOTA and HRSC2016).

Graphical Abstract

1. Introduction

Oriented object detection in remote sensing images aims to use rotated bounding boxes to accurately determine the position and category of objects of interest [1,2]. It has gradually evolved into a significant domain within computer vision [3] and serves as a foundational task for various applications, such as smart cities, maritime rescue, and battlefield surveillance [4,5,6,7,8,9]. Due to the overhead perspective and long imaging distances [10], remote sensing images typically have several characteristics: (1) objects are distributed with arbitrary orientations and varied appearances; (2) dense small-scale objects, such as vehicles and ships, often cluster together closely; and (3) the images contain a significant amount of background information.
Regarding the first characteristic of remote sensing imagery, regular convolutional networks cannot guarantee consistent features when the object is rotated [11], as shown in Figure 1A. We input an image and its 90-degree-rotated version into the network, and the resulting feature map visualizations are depicted in Figure 1(A2). We observe that the feature maps exhibit accurate and well-represented responses under normal input conditions. However, when the object is rotated, the extracted features show missing components and weakened responses. The last two characteristics of remote sensing images introduce noise into object detection, including both interference between objects and background noise, as shown in Figure 1B. Densely arranged objects may suffer from inter-class feature coupling and intra-class feature boundary blurring, leading to less prominent feature responses for some objects, as seen in the yellow circle of Figure 1(B2). Using the valid objects in DOTA-v1.0 [12] as a reference, we leverage the corresponding segmentation information from iSAID [13] to perform a pixel-wise analysis, in which background pixels account for 96.95% of the total. The abundance of background information means that background areas resembling objects may be erroneously activated, as observed in the red circle of Figure 1(B2).
Currently, rotation-invariant feature extraction methods follow two main approaches. One improves the network architecture itself, for example by designing rotation-invariant structures based on group convolution [14]. However, these methods require complex network design and are challenging to train. The other integrates feature maps from different angles [15], but these feature maps lack semantic communication across scales, leading to low information utilization. For oriented object detection, in addition to extracting rotation-invariant features, optimizing the feature maps is also crucial. Representative feature enhancement and attention mechanism methods fall into three categories. First, effective attention mechanisms, such as channel attention and spatial attention, are introduced to focus on the salient features of the object, addressing noise and boundary-blur problems [16,17]. However, using information from feature pooling operations to generate weights and reconstruct feature maps does not guarantee the reliability of the weighting; incorrect channels or spatial locations may still be activated. Second, supplementary supervision, such as object box boundaries, center points, or masks, can help strengthen features [18,19,20]. However, these supervision signals may not be comprehensive enough, leading to feature overlap issues. Third, considering the distinctions between the regression and classification tasks within the detection heads, designing different feature activation methods to decouple features can address feature incompatibility problems [21]. However, this approach may fail to suppress interference from background noise, resulting in many false positives during detection.
In this paper, to address the aforementioned issues, we design a multi-rotation feature pyramid network (MRFPN) architecture for oriented object detection in remote sensing imagery. This architecture enhances the rotation-invariant characteristics of objects while strengthening contextual information and semantic consistency between feature maps by acquiring a more comprehensive set of rotation-invariant features fused across various rotation angles and scales. The feature response of our proposed architecture is depicted in Figure 1(A3). Furthermore, we introduce a novel component, the semantic-driven feature enhancement module (SFEM), to obtain more precise and reliable semantic information for enhancing feature maps. It approximately decouples the features of different object categories and enforces constraints, achieving feature denoising in the spatial domain. This component reduces inter-class feature coupling and intra-class interference and alleviates background interference to achieve robust rotation detection, as illustrated in Figure 1(B3). Finally, we propose a new evaluation metric for oriented object detection to assess the model's response to different types of errors, further demonstrating the effectiveness of the proposed modules. The main contributions of this work are summarized as follows:
  • We propose a semantic-driven rotational feature enhancement method for oriented object detection, effectively addressing the significant rotational feature variations and complex backgrounds in remote sensing object detection.
  • We introduce a multi-rotation feature pyramid network to extract rotation-invariant features and maintain the consistency of multiscale semantic information. This module utilizes multi-angle and multiscale feature maps combined with deformable convolutions to represent remote sensing objects.
  • We innovatively integrate semantic information into oriented object detection by designing the semantic-driven feature enhancement module in an implicit supervision paradigm. It enhances features along the channel and spatial dimensions, effectively addressing inter-class coupling and background interference in feature maps.
  • We introduce a novel evaluation metric for oriented object detection that refines different error types, which can reflect the sensitivity of the model to various types of errors. Extensive experiments demonstrate the superiority of the proposed method.

2. Related Work

2.1. Arbitrary Oriented Object Detection

In recent years, object detection in remote sensing images has become increasingly popular. Unlike general object detection, objects in remote sensing images can be oriented in arbitrary directions. With the continuous advancement of deep learning, many excellent methods have emerged for detecting rotated objects [22]. Oriented R-CNN [23] employs a novel box encoding scheme called midpoint offset to constrain oriented candidate regions effectively, while R3Det [24] addresses feature misalignment during refinement with a feature refinement module. YOLOv8 [25] designs an entirely new backbone network, significantly enhancing the ability to extract target features and improving performance on oriented object detection tasks in remote sensing. Zhang et al. [26] and Yang et al. [27] integrate image super-resolution methods for detecting small objects within vast backgrounds, even on low-resolution inputs. Additionally, an adaptive detection system based on early-exit neural networks [28] reduces training costs by allowing high-confidence samples to exit the model early, thus improving the efficiency of detecting complex remote sensing images. Besides convolutional networks, transformer architectures have also made significant contributions. Ma et al. [29] are the first to implement an end-to-end transformer-based framework for oriented object detection. Building on the DETR framework of Carion et al. [30], Dai et al. [31] propose an adaptive oriented proposal refinement module, which effectively enhances rotation detection for remote sensing targets. Additionally, Yu et al. [32] introduce spatial transform decoupling, providing a simple yet effective solution for oriented object detection with the ViT framework.

2.2. Rotation Invariant Feature Extraction

The varied orientations of objects in remote sensing imagery highlight the crucial need for extracting rotation-invariant features. To tackle this challenge, researchers have proposed two main approaches. The first modifies the convolution operation itself to enable the extraction of rotation-sensitive features. Cohen et al. [33] first propose the concept of group convolution, integrating four-fold rotational equivariance into CNNs. Hoogeboom et al. [34] extend group convolution to hexagonal lattices, incorporating six-fold rotational equivariance; this adaptation enables more efficient handling of rotations, improving feature recognition across various orientations. Following this, ReDet [14] constructs a backbone to extract rotation-invariant features. Pu et al. [35] design an adaptive rotated convolution whose kernels rotate adaptively to extract target features effectively. Mei et al. [36] propose using a polar coordinate transformation to convert rotational changes into translational changes, thereby mitigating the rotation sensitivity of CNNs. The second approach extracts rotation-invariant features through feature mappings with rotational channels. Han et al. [37] and Deng et al. [11] utilize convolutional kernels at different angles to generate feature maps in various directions, thereby enriching the orientation information represented in the feature maps. Zheng et al. [38] propose an object-wise rotation-invariant semantic representation framework to guide the network in learning rotation-invariant features. Finally, Cao et al. [15] construct a rotation-invariant spatial pooling pyramid by rotating feature maps to extract rotation-invariant features.

2.3. Semantic Information Feature Enhancement

Remote sensing images often include complex background details that can introduce noise into feature maps, potentially impacting object detection performance. Traditional channel or spatial attention mechanisms may not always accurately enhance the regions corresponding to actual objects. To overcome this challenge, several approaches have been developed that leverage semantic information to enhance feature maps. Yang et al. [19] and Li et al. [39] utilize binary masks as supervisory information to spatially weight feature maps according to predicted probability maps, aiming to focus the model's attention on relevant areas. Yu et al. [7] use a deep segmentation network to enhance the relationship between roads and vehicles, incorporating this into a visual attention mechanism with spatiotemporal constraints to detect small vehicles. Correspondingly, Yang et al. [40] introduce multi-mask supervision to implicitly generate weight information, decoupling the features of different objects. Song et al. [20] use the regions enclosed by the midpoints of the edges of the object's bounding box as masks, acquiring weight information for object and non-object areas through supervised learning of spatial feature encoding. Cao et al. [15] develop semantic edge supervision based on object box boundary information, effectively addressing the challenges of complex backgrounds and the lack of contextual cues in remote sensing object detection. Liu et al. [18] transform object boxes into two-dimensional Gaussian representations to obtain center and boundary masks, improving object localization accuracy while suppressing interference from complex backgrounds. Finally, Zhang et al. [41] introduce a multistage enhancement network that enhances tiny objects at both the instance level and the feature level across different stages of the detector.

3. Method

The proposed SREDet is based on a fundamental single-stage detector. The complete framework, as depicted in Figure 2, consists of four main components: a feature extraction backbone, the MRFPN for extracting rotation-invariant features across multiple angles and scales, the SFEM for feature enhancement driven by semantic segmentation information, and the oriented detection head for classification and regression tasks.

3.1. Multi-Rotation Feature Pyramid Network

We consider the distinct characteristics of remote sensing imagery, specifically the fact that objects exhibit arbitrary orientations in the overhead view. We therefore believe that the extraction of rotation-invariant object features is essential for oriented object detection in remote sensing applications. Traditional convolutional neural networks cannot directly extract features that remain consistent under object rotation, leading to discrepancies in the features extracted from the same object at different angles. To overcome this limitation, two primary strategies have been developed: Deng et al. [11] and Weiler et al. [42] modify the convolution architecture to support the extraction of rotation-invariant features, while Han et al. [37] integrate multi-angle features to augment directional information and extract rotation-invariant characteristics.
In response to the previously mentioned issue, we propose a feature pyramid network that integrates multiscale and multi-angle features, as shown in Figure 2. This network aims to reduce discrepancies in feature extraction for the same object from different angles and to approximate rotation-invariant features as closely as possible. Let I be the input image. The output after passing through the backbone extraction network is as follows:
F_{\theta_i} = B(T_{\theta_i}(I)),
where B(·) represents the backbone network, θ_i represents the different rotation angles, and T represents the rotation operation.
After obtaining multi-angle feature maps through the backbone network, to ensure feature map consistency, we designed a rotation feature alignment module (RFAM) as seen in Figure 3. This module maps rotated features back to their original states and concatenates n branches together in the channel dimension. Finally, the features are fused using convolution with a kernel size of 1:
C_i = \mathrm{Conv}\left(\mathrm{Concat}\left[T_{\theta_1}(F_{\theta_1})_i, T_{\theta_2}(F_{\theta_2})_i, \ldots, T_{\theta_n}(F_{\theta_n})_i\right]\right) \quad (i = 3, 4, 5).
To merge semantic information from different levels and extract features of objects with varying aspect ratios and shapes, we enhanced the feature pyramid network (FPN) [43] architecture by substituting the conventional convolutional layers with deformable convolutions. The outputs of different levels can be represented by the following formulas:
P_i = \mathrm{DCN}\left(C_i + \mathrm{Interpolation}(C_{i+1})\right) \; (i = 3, 4), \qquad P_5 = \mathrm{DCN}(C_5), \qquad P_i = \mathrm{DCN}(P_{i-1}) \; (i = 6, 7),
where DCN represents deformable convolution and P_i represents the outputs of the MRFPN.
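For concreteness, the following PyTorch-style sketch illustrates the multi-angle extraction and RFAM fusion described above. The use of 90-degree rotations via torch.rot90, the square-input assumption (so that the inverse rotation realigns the feature maps), and the backbone interface returning the C3–C5 levels are assumptions made for illustration, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class RFAM(nn.Module):
    """Rotation feature alignment: rotate each branch back, concatenate, fuse with a 1x1 conv."""

    def __init__(self, channels: int, num_branches: int):
        super().__init__()
        self.fuse = nn.Conv2d(channels * num_branches, channels, kernel_size=1)

    def forward(self, branch_feats, ks):
        # branch_feats[j] was extracted from an input rotated by ks[j] * 90 degrees,
        # so rotate it back before concatenating along the channel dimension.
        aligned = [torch.rot90(f, k=-k, dims=(2, 3)) for f, k in zip(branch_feats, ks)]
        return self.fuse(torch.cat(aligned, dim=1))


def multi_rotation_features(backbone, image, ks=(0, 1, 2, 3)):
    """Compute F_theta = B(T_theta(I)) for theta = k * 90 degrees.

    `backbone(x)` is assumed to return a list of feature maps (e.g., C3-C5).
    Returns per_level[level][branch], ready for RFAM fusion into C_i.
    """
    per_level = None
    for k in ks:
        rotated = torch.rot90(image, k=k, dims=(2, 3))   # T_theta(I)
        feats = backbone(rotated)                        # F_theta
        if per_level is None:
            per_level = [[] for _ in feats]
        for lvl, f in enumerate(feats):
            per_level[lvl].append(f)
    return per_level
```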

3.2. Semantic-Driven Feature Enhancement Module

In remote sensing scenarios, background information is abundant, which can lead to inadvertent amplification of features similar to certain categories of objects. This phenomenon, in turn, generates a significant number of false positive samples during detection. Furthermore, the characteristic presentation of objects as densely packed small objects in remote sensing imagery leads to mutual interference among object features. This interference culminates in the blurring of feature maps and a diminished activation level.
To enhance feature maps, attention mechanisms such as channel attention, spatial attention, and hybrid attention are commonly employed to reweight the feature maps, highlighting significant areas while suppressing irrelevant ones. However, this approach computes responses from the spatial and channel statistics of the feature maps themselves and cannot guarantee the effectiveness and reliability of the areas being enhanced or suppressed. To improve the reliability of the augmented regions, Li et al. [39] and Cao et al. [44] utilize mask information obtained from bounding boxes to assist in the enhancement of feature maps. Yang et al. [19] employ an explicit feature map enhancement approach, whereby the probability predicted by the mask branch is directly multiplied onto the original feature maps. In contrast, Yang et al. [40] adopt an implicit feature map enhancement method, using convolution to generate weights, with the same dimensions as the original feature maps, from the feature maps of the layer preceding the mask prediction, and then applying these weights to the original feature maps, as seen in Figure 4. In this paper, we define directly multiplying the predicted semantic probabilities with the feature maps in the spatial domain as explicit feature enhancement. Conversely, generating a set of weights from the semantic feature maps and then weighting the feature maps accordingly is defined as implicit enhancement.
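The following minimal sketch contrasts the two styles just defined; the tensor shapes, the single 3 × 3 convolution used to produce the implicit weights, and the stand-alone mask branch are placeholder assumptions rather than the exact design of any cited method.

```python
import torch
import torch.nn as nn

feat = torch.randn(1, 256, 64, 64)           # an FPN feature map
mask_logits = torch.randn(1, 1, 64, 64)      # output of a mask-prediction branch

# Explicit enhancement: multiply the predicted probability map directly onto the features.
explicit = feat * torch.sigmoid(mask_logits)

# Implicit enhancement: generate weights with the same dimensions as the features from
# the layer preceding the mask prediction, then reweight the original features with them.
pre_mask_feat = torch.randn(1, 256, 64, 64)  # feature map one layer before the mask head
weight_head = nn.Conv2d(256, 256, kernel_size=3, padding=1)
implicit = feat * torch.sigmoid(weight_head(pre_mask_feat))
```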
However, we believe that the use of bounding boxes to generate mask information still presents certain inadequacies:
  • The overlap of bounding boxes for objects can still lead to mixing features within and between classes.
  • The shape of some objects cannot be closely aligned with the bounding boxes, resulting in masks that incorporate excessive background information. This not only complicates the task of mask prediction but may also inadvertently enhance certain background regions.
We use semantic segmentation information to resolve the aforementioned issues and employ an implicit feature map enhancement approach. The features of objects belonging to different categories are decoupled into their respective channels, and the features of objects and backgrounds are separately enhanced and weakened within the spatial domain. The architecture of the SFEM is illustrated in Figure 2. To enhance the network's accuracy in predicting semantic segmentation without incurring additional computational costs, we employ dilated convolutions to expand the receptive field of the feature maps, thereby furnishing more abundant semantic information. The process of feature extraction can be represented as follows:
F' = \mathrm{conv}_{d_n}\left(\cdots \mathrm{conv}_{d_1}(F, W_1, b_1) \cdots, W_n, b_n\right).
We then use a convolution with a kernel size of 1 to adjust the number of channels and employ a sigmoid activation function to generate the feature weights:
F_{out} = \underbrace{\mathrm{sigmoid}\left(\mathrm{conv}_{1\times 1}(F')\right)}_{W_{SFEM}} \odot F.
From another perspective, the network can be viewed as decoupling the features of different categories into their respective channels. Without loss of generality, assuming that the dataset includes a total of L categories and that the test image contains the first L_0 categories, the output can be represented as follows:
F_{out} = \bigoplus_{i=1}^{L_0}\bigoplus_{n=1}^{C_i} w_n^i \odot x_n^i \;\oplus\; \bigoplus_{j=L_0+1}^{L}\bigoplus_{m=1}^{C_j} w_m^j \odot x_m^j \;\oplus\; \bigoplus_{t=1}^{C_{bg}} w_t^{bg} \odot x_t^{bg},
where F_out ∈ R^{C×H×W}, ⊙ is the element-wise product, and ⊕ denotes concatenation along the channel dimension. C_i represents the number of channels belonging to the i-th category, and w_n^i and x_n^i denote the weight and feature of the i-th category along the n-th channel. The meanings of the remaining symbols can be deduced following the same logic.
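A minimal SFEM-style module following the equations above is sketched below, assuming 256-channel inputs, three stacked dilated convolutions with rates (1, 2, 4), and 16 output classes; these hyperparameters are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn


class SFEM(nn.Module):
    """Semantic-driven feature enhancement: dilated context, implicit weights, auxiliary seg head."""

    def __init__(self, channels: int = 256, num_classes: int = 16, dilations=(1, 2, 4)):
        super().__init__()
        layers = []
        for d in dilations:                      # stacked dilated convs enlarge the receptive field
            layers += [nn.Conv2d(channels, channels, 3, padding=d, dilation=d),
                       nn.ReLU(inplace=True)]
        self.dilated = nn.Sequential(*layers)    # F' = conv_dn(... conv_d1(F) ...)
        self.weight_proj = nn.Conv2d(channels, channels, kernel_size=1)
        self.seg_head = nn.Conv2d(channels, num_classes, kernel_size=1)  # supervised with seg labels

    def forward(self, feat):
        context = self.dilated(feat)
        w_sfem = torch.sigmoid(self.weight_proj(context))  # implicit weights W_SFEM
        enhanced = w_sfem * feat                           # F_out = W_SFEM ⊙ F
        seg_logits = self.seg_head(context)                # used only for the segmentation loss
        return enhanced, seg_logits
```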

3.3. Identifying Oriented Object Detection Errors

The primary evaluation metric for oriented object detection in remote sensing images is the mean average precision (mAP). Although mAP succinctly summarizes model performance, it is difficult to discern which errors constrain the model's performance. For example, a false positive may result from misclassification, incorrect orientation, inaccurate localization, or background confusion. Inspired by Bolya et al. [45], we introduce an analysis of oriented object detection errors for rotation detection.

3.3.1. Defining Main Error Types

To comprehensively assess the error distribution underlying mAP, false positive and false negative samples are classified into five distinct types, as shown in Figure 5. We use the rotated intersection-over-union (RIoU) metric to quantify the overlap between two rotated bounding boxes, where RIoU_max denotes the maximum RIoU between a false positive sample and its corresponding ground truth (GT). Additionally, t_b represents the threshold for background objects, conventionally set to 0.1, while t_f signifies the threshold for foreground objects. Since we primarily focus on the mAP_50 metric of the model, t_f is generally set to 0.5 unless otherwise noted.
  • Classification Error: RIoU_max ≥ t_f, but the predicted category is incorrect.
  • Localization Error: The predicted category is correct, but t_b ≤ RIoU_max < t_f.
  • Cls and Loc Error: t_b ≤ RIoU_max < t_f, and the predicted category is incorrect.
  • Background Error: The background is falsely detected as an object, with RIoU_max ≤ t_b.
  • Missed Error: All undetected GT instances.
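A hedged sketch of how a single prediction could be assigned to one of these types is given below. It assumes a rotated-IoU routine is available elsewhere (e.g., from a detection toolbox), treats the exact threshold boundaries as an implementation detail, and leaves Missed Errors to be collected from ground truths that no prediction matches.

```python
T_BG, T_FG = 0.1, 0.5   # background and foreground thresholds t_b and t_f


def classify_prediction(riou_max: float, category_correct: bool) -> str:
    """Assign a prediction to one of the error types above (or mark it a true positive)."""
    if riou_max >= T_FG:
        return "true_positive" if category_correct else "classification_error"
    if riou_max >= T_BG:
        return "localization_error" if category_correct else "cls_and_loc_error"
    return "background_error"  # RIoU_max below t_b: background falsely detected as an object
```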

3.3.2. Setting Evaluation Metrics

Simply counting the occurrences of each error type does not scientifically demonstrate its impact on the performance of the model. To evaluate the impact of each error type, we modify the errors by type and recalculate the mean average precision to obtain ΔmAP. The difference between mAP_mod and the original mAP serves as the evaluation metric:
\Delta mAP = mAP_{\mathrm{mod}} - mAP.
We will modify each error type according to the following procedure:
  • Modify Classification Error: Modify the incorrectly predicted categories to the correct categories. If duplicate detections occur, remove the object boxes with low confidence.
  • Modify Localization Error: Replace the predicted object boxes with the corresponding GT object boxes. If duplicate detections occur, remove the object boxes with low confidence.
  • Modify Cls and Loc Error: Because it is impossible to determine which GT object box matches the predicted object box, remove it from the false positives.
  • Modify Background Error: Remove all prediction boxes that misclassify background as objects.
  • Modify Missed Error: When calculating mAP, subtract the number of missed ground truths from the total GT count. From another perspective, this treats the model as having precisely detected all missed objects.
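The per-type metric can then be computed as sketched below; evaluate_map and fix_errors are placeholders for an existing rotated-box mAP evaluator and for the type-specific corrections listed above (relabeling, substituting the GT box, dropping the false positive, or shrinking the GT set for missed detections).

```python
def delta_map(predictions, ground_truths, error_type, evaluate_map, fix_errors):
    """Delta mAP = mAP after fixing one error type minus the original mAP."""
    base = evaluate_map(predictions, ground_truths)
    # Apply the correction rule for this error type; missed errors modify the GT set instead.
    fixed_preds, fixed_gts = fix_errors(predictions, ground_truths, error_type)
    return evaluate_map(fixed_preds, fixed_gts) - base
```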

3.4. Loss Function

Our loss function mainly consists of three components; besides the classification and regression losses from the original single-stage detector, we incorporate a semantic segmentation task loss to supervise the SFEM module. Therefore, the total loss definition for SREDet is as follows:
L = L_{cls}(l_i, l_i^*) + L_{reg}(t_i, t_i^*) + L_{seg}(p_i, p_i^*),
where L_cls denotes the classification loss, l_i signifies the probability predicted by the network that an anchor is an object, and l_i^* represents the corresponding ground truth label. Our network employs focal loss [46] as the classification loss. The L1 loss is used as the regression loss L_reg, and t_i and t_i^* denote the predicted bounding box and the ground truth bounding box, respectively. Each box is represented in vector form, and the boxes are encoded following the format specified in (9) and (10):
t_x = (x - x_a)/w_a, \quad t_y = (y - y_a)/h_a, \quad t_w = \log(w/w_a), \quad t_h = \log(h/h_a), \quad t_\theta = \theta - \theta_a,
t_x^* = (x^* - x_a)/w_a, \quad t_y^* = (y^* - y_a)/h_a, \quad t_w^* = \log(w^*/w_a), \quad t_h^* = \log(h^*/h_a), \quad t_\theta^* = \theta^* - \theta_a.
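The encoding in (9) is straightforward to express in code; the sketch below assumes angles in radians and omits any angle-range normalization the detector may additionally apply.

```python
import math


def encode_rotated_box(box, anchor):
    """Encode a rotated box (x, y, w, h, theta) against its anchor, following Eq. (9)."""
    x, y, w, h, theta = box
    xa, ya, wa, ha, ta = anchor
    return (
        (x - xa) / wa,       # t_x
        (y - ya) / ha,       # t_y
        math.log(w / wa),    # t_w
        math.log(h / ha),    # t_h
        theta - ta,          # t_theta
    )
```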
In the SFEM, we utilize the Dice loss [47] for the semantic segmentation task. Given its application across multiple pyramid layers, the overall semantic segmentation loss is formulated as follows:
L_{seg} = \sum_{i=1}^{S} \varepsilon_i \left(1 - \frac{2 \times |p_i^* \cap p_i|}{|p_i^*| + |p_i|}\right),
where S represents the number of feature maps used for supervision and ε_i represents the weight coefficient associated with each feature map. p_i^* represents the set of pixels in the ground-truth semantic segmentation mask, and p_i represents the set of pixels in the predicted semantic segmentation mask. The intersection |p_i^* ∩ p_i| counts the pixels common to both the prediction and the ground truth, while |p_i^*| and |p_i| count the pixels in the ground-truth and predicted semantic segmentation masks, respectively.
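A soft (probabilistic) version of this multi-level Dice loss can be sketched as follows; the epsilon smoothing term and the soft-mask formulation are assumptions for numerical stability rather than details taken from the paper.

```python
import torch


def soft_dice_loss(pred_prob: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Dice term 1 - 2|p* ∩ p| / (|p*| + |p|), computed with soft masks."""
    inter = (pred_prob * target).sum()
    return 1.0 - 2.0 * inter / (pred_prob.sum() + target.sum() + eps)


def multi_level_seg_loss(pred_probs, targets, level_weights):
    """L_seg: one Dice term per supervised pyramid level, weighted by epsilon_i."""
    return sum(w * soft_dice_loss(p, t)
               for p, t, w in zip(pred_probs, targets, level_weights))
```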

4. Results

In this section, we provide a comprehensive description of the two datasets employed in our experiments along with a detailed discussion of the principal results obtained. Furthermore, we meticulously outline and analyze the design of our ablation studies, shedding light on their significance and impact.

4.1. Datasets

4.1.1. DOTA and iSAID

DOTA [12] is one of the most commonly used datasets for remote sensing image detection and is currently available in three versions. Since only the DOTA-v1.0 dataset has been annotated with segmentation information by other scholars to form the iSAID dataset [13], we select DOTA-v1.0 as the experimental data. The dataset comprises 2806 high-resolution aerial images covering various complex scenes and shooting angles. It contains a rich variety of targets, including planes (PL), baseball diamonds (BD), bridges (BR), ground track fields (GTF), small vehicles (SV), large vehicles (LV), ships (SH), tennis courts (TC), basketball courts (BC), storage tanks (ST), soccer-ball fields (SBF), roundabouts (RA), harbors (HA), swimming pools (SP), and helicopters (HC), totaling 188,282 annotated objects. We used DOTA's standard rotated bounding boxes as a reference and filtered the corresponding segmentation labels from the iSAID dataset. We divided the dataset according to the original DOTA partitioning: 1/2 training set, 1/6 validation set, and 1/3 test set. The results in Table 1 were obtained by training on the training and validation sets and then predicting on the test set, with the test results acquired through the official evaluation server. Results in the remaining tables are, by default, based on training on the training set and testing on the validation set.

4.1.2. HRSC2016

HRSC2016 is a publicly available remote sensing dataset specifically designed for ship detection. It includes images from six prominent harbors, featuring two primary scenarios: ships at sea and ships near the shore. The dataset comprises a total of 1061 images and 2976 object instances. The training set contains 436 images, the validation set includes 181 images, and the test set comprises 444 images. Image sizes range from 300 × 300 pixels to 1500 × 900 pixels, with the majority exceeding 1000 × 600 pixels. The original dataset provides ship targets labeled with oriented bounding boxes and we annotated all targets with semantic segmentation to facilitate model training.

4.2. Implementation Details

We conducted experiments on multiple baselines. For one approach, we selected networks such as RetinaNet [46] and Faster R-CNN [48] as baseline networks, using ResNet101 as the default backbone. To maintain consistency, the experiments were trained and tested on the MMRotate platform [49]. We used the SGD optimizer, setting the momentum and weight decay to 0.9 and 0.0001, respectively. A MultiStepLR strategy was adopted, starting with a learning rate of 0.0025. The training spanned 24 epochs, with the learning rate automatically reduced to 1/10 of its original value at epochs 16 and 22. Rotated non-maximum suppression was applied to the predicted rotated bounding boxes to minimize redundancy, with a confidence score threshold of 0.1 and an IoU threshold of 0.1. For another approach, we conducted experiments using YOLOv8 as the baseline, employing the default YOLOv8 framework configuration [25]. The initial learning rate was set to 0.01, and the final learning rate was 0.001. The momentum was configured at 0.937, and the weight decay was set to 0.0005. We implemented a warmup period of 3.0 epochs, during which the initial momentum was set to 0.8 and the initial bias learning rate was 0.1. All training and testing experiments were conducted on an RTX A6000 with a batch size of 2.
The DOTA dataset comprises large-scale images, so during the training and testing phases the images were divided into 1024 × 1024 patches with a 200-pixel overlap. Various data augmentation techniques were employed, specifically random flipping (with a probability of 0.25), random rotation (with a probability of 0.25), and random color transformation (with a probability of 0.25). When YOLOv8 was trained, mosaic augmentation was also introduced. Additionally, to further enhance network performance, multiscale training and testing were applied. When training on the HRSC2016 dataset, the number of training epochs was set to 72. The initial learning rate was set to 0.0025 and reduced to 1/10 of its original value at epochs 48 and 66. The data processing method was the same as that applied to the DOTA dataset.
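For reference, the MMRotate-style snippet below summarizes the RetinaNet/Faster R-CNN training settings described above in configuration form. The field names follow the general MMDetection/MMRotate config convention and should be read as an illustrative sketch, not the authors' released configuration file.

```python
# SGD with momentum 0.9 and weight decay 1e-4; lr 0.0025 decayed 10x at epochs 16 and 22.
optimizer = dict(type='SGD', lr=0.0025, momentum=0.9, weight_decay=0.0001)
lr_config = dict(policy='step', step=[16, 22])
runner = dict(type='EpochBasedRunner', max_epochs=24)

# Rotated NMS on the predicted rotated boxes: confidence threshold 0.1, IoU threshold 0.1.
test_cfg = dict(score_thr=0.1, nms=dict(iou_thr=0.1))
```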

4.3. Main Results

4.3.1. Results on DOTA

We evaluate the proposed method against other state-of-the-art approaches on the DOTA dataset. The results are presented in Table 1. Our method achieved an mAP_50 of 76.34%, and with multiscale training and testing it achieved an mAP_50 of 79.32%. Comparing the per-category metrics, our detector performed best for small vehicles, ships, and tennis courts, and achieved the second-best results for ground track fields, basketball courts, roundabouts, and helicopters.
Table 1. Comparison to state-of-the-art methods on the DOTA-v1.0 dataset. R-101 denotes ResNet-101 (likewise for R-50 and R-152), RX-101 denotes ResNeXt-101 and H-104 denotes Hourglass-104. The best result is highlighted in bold, and the second-best result is underlined. * denotes multiscale training and multiscale testing.
Method | Backbone | PL | BD | BR | GTF | SV | LV | SH | TC | BC | ST | SBF | RA | HA | SP | HC | mAP_50
FR-O [12] | R-101 | 79.09 | 69.12 | 17.17 | 63.49 | 34.20 | 37.16 | 36.20 | 89.19 | 69.60 | 58.96 | 49.40 | 52.52 | 46.69 | 44.80 | 46.30 | 52.93
RRPN [50] | R-101 | 88.52 | 71.20 | 31.66 | 59.30 | 51.85 | 56.19 | 57.25 | 90.81 | 72.84 | 67.38 | 56.69 | 52.84 | 53.08 | 51.94 | 53.58 | 61.01
RetinaNet-R [46] | R-101 | 88.92 | 67.67 | 33.55 | 56.83 | 66.11 | 73.28 | 75.24 | 90.87 | 73.95 | 75.07 | 43.77 | 56.72 | 51.05 | 55.86 | 21.46 | 62.02
CADNet [51] | R-101 | 87.80 | 82.40 | 49.40 | 73.50 | 71.10 | 63.50 | 76.60 | 90.90 | 79.20 | 73.30 | 48.40 | 60.90 | 62.00 | 67.00 | 62.20 | 69.90
O2-DNet [52] | H-104 | 89.31 | 82.14 | 47.33 | 61.21 | 71.32 | 74.03 | 78.62 | 90.76 | 82.23 | 81.36 | 60.93 | 60.17 | 58.21 | 66.98 | 61.03 | 71.04
CenterMap-Net [53] | R-50 | 88.88 | 81.24 | 53.15 | 60.65 | 78.62 | 66.55 | 78.10 | 88.83 | 77.80 | 83.61 | 49.36 | 66.19 | 72.10 | 72.36 | 58.70 | 71.74
BBAVector [54] | R-101 | 88.35 | 79.96 | 50.69 | 62.18 | 78.43 | 78.98 | 87.94 | 90.85 | 83.58 | 84.35 | 54.13 | 60.24 | 65.22 | 64.28 | 55.70 | 72.32
SCRDet [19] | R-101 | 89.98 | 80.65 | 52.09 | 68.36 | 68.36 | 60.32 | 72.41 | 90.85 | 87.94 | 86.86 | 65.02 | 66.68 | 66.25 | 68.24 | 65.21 | 72.61
DRN [55] | H-104 | 89.71 | 82.34 | 47.22 | 64.10 | 76.22 | 74.43 | 85.84 | 90.57 | 86.18 | 84.89 | 57.65 | 61.93 | 69.30 | 69.63 | 58.48 | 73.23
Gliding Vertex [56] | R-101 | 89.89 | 85.99 | 46.09 | 78.48 | 70.32 | 69.44 | 76.93 | 90.71 | 79.36 | 83.80 | 57.79 | 68.35 | 72.90 | 71.03 | 59.78 | 73.39
SRDF [20] | R-101 | 87.55 | 84.12 | 52.33 | 63.46 | 78.21 | 77.02 | 88.13 | 90.88 | 86.68 | 85.58 | 47.55 | 64.88 | 65.17 | 71.42 | 59.51 | 73.50
R3Det [24] | R-152 | 89.49 | 81.17 | 50.53 | 66.10 | 70.92 | 78.66 | 78.21 | 90.81 | 85.26 | 84.23 | 61.81 | 63.77 | 68.16 | 69.83 | 67.17 | 73.74
FCOSR-S [57] | R-50 | 89.09 | 80.58 | 44.04 | 73.33 | 79.07 | 76.54 | 87.28 | 90.88 | 84.89 | 85.37 | 55.95 | 64.56 | 66.92 | 76.96 | 55.32 | 74.05
S2A-Net [37] | R-50 | 89.11 | 82.84 | 48.37 | 71.11 | 78.11 | 78.39 | 87.25 | 90.83 | 84.90 | 85.64 | 60.36 | 62.60 | 65.26 | 69.13 | 57.94 | 74.12
SCRDet++ [40] | R-101 | 89.20 | 83.36 | 50.92 | 68.17 | 71.61 | 80.23 | 78.53 | 90.83 | 86.09 | 84.04 | 65.93 | 60.80 | 68.83 | 71.31 | 66.24 | 74.41
Oriented R-CNN [23] | R-50 | 88.79 | 82.18 | 52.64 | 72.14 | 78.75 | 82.35 | 87.68 | 90.76 | 85.35 | 84.68 | 61.44 | 64.99 | 67.40 | 69.19 | 57.01 | 75.00
MaskOBB [58] | RX-101 | 89.56 | 89.95 | 54.21 | 72.90 | 76.52 | 74.16 | 85.63 | 89.85 | 83.81 | 86.48 | 54.89 | 69.64 | 73.94 | 69.06 | 63.32 | 75.33
CBDA-Net [18] | R-101 | 89.17 | 85.92 | 50.28 | 65.02 | 77.72 | 82.32 | 87.89 | 90.48 | 86.47 | 85.90 | 66.85 | 66.48 | 67.41 | 71.33 | 62.89 | 75.74
DODet [59] | R-101 | 89.61 | 83.10 | 51.43 | 72.02 | 79.16 | 81.99 | 87.71 | 90.89 | 86.53 | 84.56 | 62.21 | 65.38 | 71.98 | 70.79 | 61.93 | 75.89
SREDet (ours) | R-101 | 89.36 | 85.51 | 50.87 | 74.52 | 80.50 | 74.78 | 86.43 | 90.91 | 87.40 | 83.97 | 64.36 | 69.10 | 67.72 | 73.65 | 65.93 | 76.34
SREDet (ours) * | R-101 | 90.23 | 86.75 | 54.34 | 80.81 | 80.41 | 79.37 | 87.02 | 90.90 | 88.28 | 86.84 | 70.16 | 70.68 | 74.43 | 76.11 | 73.42 | 79.32
This performance can be attributed to the sensitivity of the rotation-invariant features to object orientation and to the feature enhancement realized through semantic information. As demonstrated by its detection of swimming pools, helicopters, and planes, our method effectively identifies and regresses objects with irregular shapes. This is primarily due to incorporating semantic segmentation information as supervision, allowing the network to focus precisely on object and contextual features against complex backgrounds and providing more regression clues. Additionally, our method performs well with densely arranged objects, such as vehicles, benefiting from the SFEM, which reduces the coupling of intra-class features and thereby highlights crucial features. We also observed that, for ground track fields, roundabouts, and baseball diamonds, utilizing semantic segmentation information is more efficient than using object bounding-box masks. The primary reason is that masks may include background information or other objects, causing feature confusion or erroneous enhancement. Our method also adeptly handles the challenges posed by arbitrary orientations, irregular shapes, dense arrangements, and varying scales of remote sensing objects, achieving precise rotated object detection.
From the visualized detection results in Figure 6, it can be observed that our network achieves excellent detection results for various types of objects. As seen in the first row of images, the network can accurately detect harbors of different shapes and sizes, primarily due to the MRFPN module's ability to extract features of varying scales and shapes. From the fourth column of images, it is evident that the network performs well on dense objects, which is largely attributable to the SFEM, which alleviates feature overlap among similar objects and enhances the feature maps of small objects.

4.3.2. Ablation Study

We conducted ablation studies on the proposed modules to determine their respective contributions and effectiveness. All experiments employed simple random flipping as an augmentation technique to avoid overfitting. The results of these experiments are depicted in Table 2, while the error-type metrics proposed in this paper are presented in Table 3.
First, to ascertain the effectiveness of the MRFPN and SFEM modules individually, model variants that incorporated only one module each were built on top of the baseline. Integrating the MRFPN module led to a 4.7-point increase in detection performance, particularly for objects such as basketball courts, storage tanks, and harbors. This improvement suggests that the multiscale rotation-invariant features extracted by the module help the network detect objects that vary in scale and orientation. Incorporating the SFEM module resulted in a 5.4-point improvement, with notable gains for objects such as large vehicles, swimming pools, helicopters, and ships, indicating that the SFEM effectively intensifies object features and mitigates feature overlap among closely spaced objects. Finally, applying both modules together yielded an overall improvement of 6.3 points, demonstrating that the two feature enhancement components produce a synergistic effect: the multiscale rotation-invariant features extracted by MRFPN benefit the semantic segmentation task within the SFEM, while the SFEM suppresses noise in the features extracted by MRFPN.
We compared the responses of the different error types to the various improvement strategies, as seen in Table 3. In general, all improvement strategies contributed to decreases in classification errors, regression errors, false positives from background detection, and missed detections, validating the effectiveness of our proposed modules. When only MRFPN was introduced, E_bkg was 5.84 and E_miss was 6.76. Similarly, with SFEM alone, E_bkg was 5.56 and E_miss was 7.13. The comparison reveals that MRFPN provides richer features (most notably by reducing classification errors) and reduces missed detections, but it may misidentify some background as objects. On the other hand, SFEM suppresses background noise and enhances object features (most notably reducing regression errors), but it can lead to more missed objects. However, when both modules are applied together, they simultaneously reduce false positives from the background and missed detections, suggesting that the two components work synergistically.

4.3.3. Detailed Evaluation and Performance Testing of Components

In this section, we mainly explore the impact of different styles of semantic labels (SemSty) and feature enhancement methods (Enh-Mtds) on network performance. Expl and Impl represent the explicit and implicit enhancement methods described in Section 3.2, respectively. Mask refers to the semantic mask obtained from object bounding boxes, and Seg indicates semantic segmentation information. Based on the experimental results in Table 4, we observe that under the same Mask annotation, the implicit method outperforms the explicit method by 1.5 points. Similarly, under Seg annotation, the implicit method surpasses the explicit method by 1.4 points. Thus, for the same type of semantic label, the implicit enhancement method is superior to the explicit one. This advantage primarily stems from the implicit method's ability to decouple the features of different objects into separate channels, facilitating the classification and regression of the various object categories. In contrast, the explicit enhancement method is highly dependent on the accuracy of the semantic segmentation, where any misclassification or omission in the segmentation directly impacts network performance.
Furthermore, we also observe that under the explicit enhancement approach, Seg annotation improves performance by 0.9 points compared to Mask annotation, and under the implicit enhancement method, Seg annotation leads to a 0.8-point improvement over Mask annotation. Therefore, using Seg for supervision is superior to using Mask under the same feature enhancement method, mainly because the precise semantic information reduces background contamination and inter-class feature overlap. Specifically, Seg annotation significantly outperforms Mask annotation for objects such as roundabouts. This is primarily because, in the original DOTA annotations, the labeling of RA is not uniform and may enclose objects such as small or large vehicles, leading to inter-class feature overlap when the Mask is used directly as semantic supervision. For objects with irregular shapes, such as swimming pools and helicopters, the Mask may also include part of the background, affecting the network's regression performance.
We compared the responses of different feature enhancement strategies to the various error types, as seen in Table 5. All improvement strategies led to reductions in classification errors, regression errors, false positives from the background, and missed detections, which demonstrates the effectiveness and versatility of the methods. Notably, using Masks as supervisory information with explicit feature enhancement yielded the largest improvement in missed detections. However, among the four strategies, this approach showed the least improvement in false positives from the background, primarily because using the Mask as semantic supervision reduces the difficulty of semantic segmentation but also increases the risk of incorrect segmentation.
Regarding false positives from the background, under the same style of semantic annotation, models using implicit enhancement outperform those with explicit enhancement. This advantage arises mainly because the semantic segmentation information does not directly modify the network features; instead, it indirectly generates weights for spatial feature enhancement and for decoupling different types of features, mitigating the direct impact of semantic segmentation errors. Concerning regression errors, using Seg under the same feature enhancement method is superior to using the Mask, mainly because Seg as semantic supervision provides more accurate enhancement regions, which aids the network's regression task.
We provide a detailed visualization of different strategies for feature enhancement, as seen in Figure 7. From the visualization results of object boxes, it is evident that when using Masks as semantic guidance, false detections occur (as indicated by the red circles in the figure). Additionally, for some object detection cases, the results are suboptimal, failing to completely enclose the objects (as indicated by the green circles in the figure). This is primarily attributed to the utilization of Mask as a semantic guide, which introduces erroneous semantic information. In (e), for example, areas of the sea without ships are segmented as harbors, directly impacting the generation of feature weights and resulting in poor detection outcomes.
Regarding the feature maps, employing implicit enhancement effectively decouples features of different categories into different channels, as demonstrated by (h) and (k) as well as (i) and (l). It is apparent that (h) and (i) enhance features belonging to the category of ships, while (k) and (l) enhance features characteristic of harbors. Furthermore, a comparison of feature maps reveals that for images containing dense objects, using Segmentation as semantic supervision is more effective, yielding clearer and more responsive feature maps.
In the MRFPN, we tested different numbers of feature layers and compared the use of standard convolutions with deformable convolutions, as seen in Table 6. The experiments revealed that when using standard convolutions, there was no significant difference in performance between using four and five feature layers. However, after employing DCN for feature extraction, additional feature layers improved the network’s performance. This improvement is primarily attributed to the DCN’s enhanced capability to extract features from irregular targets.
We experimentally analyzed different strategies within the SFEM module, as seen in Table 7. When an equal number of dilated convolutions is stacked at each layer of the feature pyramid, enhancing all feature maps yields better outcomes than enhancing only a subset of them. When enhancing the same set of feature maps, appropriately stacking a certain number of dilated convolutions improves the model's detection performance. The primary reason is that multiple layers of dilated convolutions give the SFEM module a larger receptive field, enabling it to acquire more comprehensive contextual information.
We proposed a method for implicitly generating weights from semantic segmentation information to enhance the feature maps; the accuracy of the semantic segmentation therefore directly affects the network's performance. In the SFEM module, we tested three different losses, as seen in Table 8. The comparison shows that, without adjusting the loss weights, focal loss performs best on the DOTA dataset because of the class imbalance in remote sensing images. However, Dice loss has a stronger ability to distinguish target regions, and, according to our statistics, background pixels account for 96.95% of the dataset. We therefore introduced weights to the Dice loss, setting the classification weight of background pixels to 1 and that of foreground pixels to 20. The experimental results show that this approach achieved the best performance.
We conducted comparative experiments to test the SFEM module with different base models, including the two-stage detection algorithm Faster R-CNN and the single-stage object detection model YOLOv8, as seen in Table 9.
All models were trained on the training set and tested on the validation set. Our module achieved an improvement of 0.88 points in mAP_50 over Faster R-CNN, which is less pronounced than for the single-stage detectors. The main reason is that the RPN operation in the two-stage algorithm already helps the network focus on the key feature regions of the target rather than detecting over the entire feature map. Our module achieved improvements of 0.61 and 0.76 points in mAP_50 over YOLOv8-m and YOLOv8-l, respectively. It is worth noting that, for a fair comparison, no pre-trained models were used during training, and the default data augmentation of YOLOv8 was applied.

4.3.4. Results on HRSC2016

The experimental results on HRSC2016 are presented in Table 10.
With the proposed modules, our SREDet achieves an exemplary performance of 89.9%. Compared to specialized ship detectors, SREDet shows an improvement of 5.2 points over the baseline model. We also present the visualization results of ship detection in Figure 8, where it is evident that our proposed network effectively detects ships. SREDet exceeds the performance of the other leading two-stage and single-stage detectors in the comparison.

5. Conclusions

This study proposes a semantic-enhanced rotation object detection network, SREDet, targeting remote sensing image data. On the DOTA and HRSC2016 datasets, our network achieved m A P 50 scores of 79.32% and 89.84%, respectively, surpassing other advanced methods compared in this study. First, the MRFPN module is designed to extract rotation-invariant features by fusing multi-angle feature maps. Second, the SFEM module, which utilizes semantic segmentation information for feature enhancement, is introduced. This module decouples the features of different object categories into separate channels. We compared our approach on several baselines, including Faster R-CNN and YOLOv8, and SFEM consistently improved detection accuracy, demonstrating the effectiveness of our proposed method. Finally, we introduced error-type analysis methods from general object detection, providing more refined evaluation metrics for rotated object detection. These metrics can demonstrate the network’s ability to handle different types of errors, guiding further network improvements. However, the application of semantic segmentation information in this study is not comprehensive, as it does not consider the dependencies between different semantics. These relationships could further optimize the object representation in the network. In the future, we will investigate ways to integrate information from both semantic segmentation and object detection streams, designing better network structures to enhance rotation object detection capabilities. Furthermore, we will refine the proposed error-type evaluation metrics, focusing on angle error analysis, to provide a more comprehensive evaluation system.

Author Contributions

This research was a collaborative effort among seven authors, each contributing significantly to various aspects of the project. Y.W. was responsible for the conceptualization of experimental ideas and the design of the overall methodology. H.Z. played a crucial role in verifying the experimental design and conducting the initial analysis of the data. D.Q. took charge of the detailed analysis and interpretation of the experimental results. Q.L. contributed by exploring and investigating the experimental outcomes in depth. C.W. managed the organization and preprocessing of the data, ensuring its readiness for analysis. Z.Z. was instrumental in drafting the initial manuscript and in the primary visualization of the experimental results, including the creation of figures. Finally, W.D. was responsible for reviewing and revising the initial manuscript, providing critical feedback and making necessary modifications to enhance the quality of the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the National Natural Science Foundation of China under Grants U20B2042 and 62076019.

Data Availability Statement

Publicly available datasets were used in this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Wen, L.; Cheng, Y.; Fang, Y.; Li, X. A comprehensive survey of oriented object detection in remote sensing images. Expert Syst. Appl. 2023, 224, 119960. [Google Scholar] [CrossRef]
  2. Li, K.; Wan, G.; Cheng, G.; Meng, L.; Han, J. Object detection in optical remote sensing images: A survey and a new benchmark. ISPRS J. Photogramm. Remote Sens. 2020, 159, 296–307. [Google Scholar] [CrossRef]
  3. Han, W.; Chen, J.; Wang, L.; Feng, R.; Li, F.; Wu, L.; Tian, T.; Yan, J. Methods for small, weak object detection in optical high-resolution remote sensing images: A survey of advances and challenges. IEEE Geosci. Remote Sens. Mag. 2021, 9, 8–34. [Google Scholar] [CrossRef]
  4. Yang, L.; Jiang, H.; Cai, R.; Wang, Y.; Song, S.; Huang, G.; Tian, Q. Condensenet v2: Sparse feature reactivation for deep networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 3569–3578. [Google Scholar]
  5. Wang, Y.; Bashir, S.M.A.; Khan, M.; Ullah, Q.; Wang, R.; Song, Y.; Guo, Z.; Niu, Y. Remote sensing image super-resolution and object detection: Benchmark and state of the art. Expert Syst. Appl. 2022, 197, 116793. [Google Scholar] [CrossRef]
  6. Gao, T.; Niu, Q.; Zhang, J.; Chen, T.; Mei, S.; Jubair, A. Global to local: A scale-aware network for remote sensing object detection. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5615614. [Google Scholar] [CrossRef]
  7. Yu, R.; Li, H.; Jiang, Y.; Zhang, B.; Wang, Y. Tiny vehicle detection for mid-to-high altitude UAV images based on visual attention and spatial-temporal information. Sensors 2022, 22, 2354. [Google Scholar] [CrossRef]
  8. Pu, Y.; Liang, W.; Hao, Y.; Yuan, Y.; Yang, Y.; Zhang, C.; Hu, H.; Huang, G. Rank-DETR for high quality object detection. arXiv 2024, arXiv:2310.08854. [Google Scholar]
  9. Wang, Y.; Ding, W.; Zhang, B.; Li, H.; Liu, S. Superpixel labeling priors and MRF for aerial video segmentation. IEEE Trans. Circuits Syst. Video Technol. 2019, 30, 2590–2603. [Google Scholar] [CrossRef]
  10. Yang, L.; Chen, Y.; Song, S.; Li, F.; Huang, G. Deep Siamese networks based change detection with remote sensing images. Remote Sens. 2021, 13, 3394. [Google Scholar] [CrossRef]
  11. Deng, C.; Jing, D.; Han, Y.; Deng, Z.; Zhang, H. Towards feature decoupling for lightweight oriented object detection in remote sensing images. Remote Sens. 2023, 15, 3801. [Google Scholar] [CrossRef]
  12. Xia, G.S.; Bai, X.; Ding, J.; Zhu, Z.; Belongie, S.; Luo, J.; Datcu, M.; Pelillo, M.; Zhang, L. DOTA: A large-scale dataset for object detection in aerial images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 3974–3983. [Google Scholar]
  13. Waqas Zamir, S.; Arora, A.; Gupta, A.; Khan, S.; Sun, G.; Shahbaz Khan, F.; Zhu, F.; Shao, L.; Xia, G.S.; Bai, X. iSAID: A large-scale dataset for instance segmentation in aerial images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Long Beach, CA, USA, 16–17 June 2019; pp. 28–37. [Google Scholar]
  14. Han, J.; Ding, J.; Xue, N.; Xia, G.S. Redet: A rotation-equivariant detector for aerial object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 2786–2795. [Google Scholar]
  15. Cao, D.; Zhu, C.; Hu, X.; Zhou, R. Semantic-Edge-Supervised Single-Stage Detector for Oriented Object Detection in Remote Sensing Imagery. Remote Sens. 2022, 14, 3637. [Google Scholar] [CrossRef]
  16. Lu, X.; Ji, J.; Xing, Z.; Miao, Q. Attention and feature fusion SSD for remote sensing object detection. IEEE Trans. Instrum. Meas. 2021, 70, 5501309. [Google Scholar] [CrossRef]
  17. Li, C.; Xu, C.; Cui, Z.; Wang, D.; Zhang, T.; Yang, J. Feature-attentioned object detection in remote sensing imagery. In Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, 22–25 September 2019; pp. 3886–3890. [Google Scholar]
  18. Liu, S.; Zhang, L.; Lu, H.; He, Y. Center-boundary dual attention for oriented object detection in remote sensing images. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5603914. [Google Scholar] [CrossRef]
  19. Yang, X.; Yang, J.; Yan, J.; Zhang, Y.; Zhang, T.; Guo, Z.; Sun, X.; Fu, K. Scrdet: Towards more robust detection for small, cluttered and rotated objects. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 8232–8241. [Google Scholar]
  20. Song, B.; Li, J.; Wu, J.; Chang, J.; Wan, J.; Liu, T. SRDF: Single-Stage Rotate Object Detector via Dense Prediction and False Positive Suppression. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5616616. [Google Scholar] [CrossRef]
  21. Ming, Q.; Miao, L.; Zhou, Z.; Dong, Y. CFC-Net: A critical feature capturing network for arbitrary-oriented object detection in remote-sensing images. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5605814. [Google Scholar] [CrossRef]
  22. Li, Z.; Wang, Y.; Zhang, N.; Zhang, Y.; Zhao, Z.; Xu, D.; Ben, G.; Gao, Y. Deep learning-based object detection techniques for remote sensing images: A survey. Remote Sens. 2022, 14, 2385. [Google Scholar] [CrossRef]
  23. Xie, X.; Cheng, G.; Wang, J.; Yao, X.; Han, J. Oriented R-CNN for object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021; pp. 3520–3529. [Google Scholar]
  24. Yang, X.; Yan, J.; Feng, Z.; He, T. R3Det: Refined single-stage detector with feature refinement for rotating object. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual, 2–9 February 2021; Volume 35, pp. 3163–3171. [Google Scholar]
  25. Jocher, G.; Chaurasia, A.; Qiu, J. YOLO by Ultralytics. 2023. Available online: https://github.com/ultralytics/ultralytics (accessed on 1 December 2023).
  26. Zhang, J.; Lei, J.; Xie, W.; Fang, Z.; Li, Y.; Du, Q. SuperYOLO: Super resolution assisted object detection in multimodal remote sensing imagery. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5605415. [Google Scholar] [CrossRef]
  27. Yang, L.; Han, Y.; Chen, X.; Song, S.; Dai, J.; Huang, G. Resolution adaptive networks for efficient inference. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 2369–2378. [Google Scholar]
  28. Yang, L.; Zheng, Z.; Wang, J.; Song, S.; Huang, G.; Li, F. An Adaptive Object Detection System based on Early-exit Neural Networks. IEEE Trans. Cogn. Dev. Syst. 2023, 16, 332–345. [Google Scholar] [CrossRef]
  29. Ma, T.; Mao, M.; Zheng, H.; Gao, P.; Wang, X.; Han, S.; Ding, E.; Zhang, B.; Doermann, D. Oriented object detection with transformer. arXiv 2021, arXiv:2106.03146. [Google Scholar]
  30. Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; Zagoruyko, S. End-to-end object detection with transformers. In Proceedings of the European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2020; pp. 213–229. [Google Scholar]
  31. Dai, L.; Liu, H.; Tang, H.; Wu, Z.; Song, P. AO2-DETR: Arbitrary-oriented object detection transformer. IEEE Trans. Circuits Syst. Video Technol. 2022, 33, 2342–2356. [Google Scholar] [CrossRef]
  32. Yu, H.; Tian, Y.; Ye, Q.; Liu, Y. Spatial transform decoupling for oriented object detection. In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 20–27 February 2024; Volume 38, pp. 6782–6790. [Google Scholar]
  33. Cohen, T.; Welling, M. Group equivariant convolutional networks. In Proceedings of the International Conference on Machine Learning. PMLR, New York, NY, USA, 20–23 June 2016; pp. 2990–2999. [Google Scholar]
  34. Hoogeboom, E.; Peters, J.W.; Cohen, T.S.; Welling, M. HexaConv. arXiv 2018, arXiv:1803.02108. [Google Scholar]
  35. Pu, Y.; Wang, Y.; Xia, Z.; Han, Y.; Wang, Y.; Gan, W.; Wang, Z.; Song, S.; Huang, G. Adaptive rotated convolution for rotated object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–3 October 2023; pp. 6589–6600. [Google Scholar]
  36. Mei, S.; Jiang, R.; Ma, M.; Song, C. Rotation-invariant feature learning via convolutional neural network with cyclic polar coordinates convolutional layer. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5600713. [Google Scholar] [CrossRef]
  37. Han, J.; Ding, J.; Li, J.; Xia, G.S. Align deep features for oriented object detection. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5602511. [Google Scholar] [CrossRef]
  38. Zheng, S.; Wu, Z.; Du, Q.; Xu, Y.; Wei, Z. Oriented Object Detection For Remote Sensing Images via Object-Wise Rotation-Invariant Semantic Representation. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5625515. [Google Scholar] [CrossRef]
  39. Li, Y.; Huang, Q.; Pei, X.; Jiao, L.; Shang, R. RADet: Refine feature pyramid network and multi-layer attention network for arbitrary-oriented object detection of remote sensing images. Remote Sens. 2020, 12, 389. [Google Scholar] [CrossRef]
  40. Yang, X.; Yan, J.; Liao, W.; Yang, X.; Tang, J.; He, T. SCRDet++: Detecting small, cluttered and rotated objects via instance-level feature denoising and rotation loss smoothing. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 2384–2399. [Google Scholar] [CrossRef]
  41. Zhang, T.; Zhang, X.; Zhu, X.; Wang, G.; Han, X.; Tang, X.; Jiao, L. Multistage Enhancement Network for Tiny Object Detection in Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5611512. [Google Scholar] [CrossRef]
  42. Weiler, M.; Cesa, G. General E(2)-equivariant steerable CNNs. Adv. Neural Inf. Process. Syst. 2019, 32, 8792–8802. [Google Scholar]
  43. Lin, T.Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2117–2125. [Google Scholar]
  44. Cao, L.; Zhang, X.; Wang, Z.; Ding, G. Multi angle rotation object detection for remote sensing image based on modified feature pyramid networks. Int. J. Remote Sens. 2021, 42, 5253–5276. [Google Scholar] [CrossRef]
  45. Bolya, D.; Foley, S.; Hays, J.; Hoffman, J. Tide: A general toolbox for identifying object detection errors. In Proceedings of the Computer Vision—ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; Proceedings, Part III 16. Springer: Berlin/Heidelberg, Germany, 2020; pp. 558–573. [Google Scholar]
  46. Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2980–2988. [Google Scholar]
  47. Milletari, F.; Navab, N.; Ahmadi, S.A. V-net: Fully convolutional neural networks for volumetric medical image segmentation. In Proceedings of the 2016 Fourth International Conference on 3D Vision (3DV), Stanford, CA, USA, 25–28 October 2016; pp. 565–571. [Google Scholar]
  48. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 39, 1137–1149. [Google Scholar] [CrossRef] [PubMed]
  49. Zhou, Y.; Yang, X.; Zhang, G.; Wang, J.; Liu, Y.; Hou, L.; Jiang, X.; Liu, X.; Yan, J.; Lyu, C.; et al. MMRotate: A rotated object detection benchmark using PyTorch. In Proceedings of the 30th ACM International Conference on Multimedia, Lisboa, Portugal, 10 October 2022; pp. 7331–7334. [Google Scholar]
  50. Ma, J.; Shao, W.; Ye, H.; Wang, L.; Wang, H.; Zheng, Y.; Xue, X. Arbitrary-oriented scene text detection via rotation proposals. IEEE Trans. Multimed. 2018, 20, 3111–3122. [Google Scholar] [CrossRef]
  51. Zhang, G.; Lu, S.; Zhang, W. CAD-Net: A context-aware detection network for objects in remote sensing imagery. IEEE Trans. Geosci. Remote Sens. 2019, 57, 10015–10024. [Google Scholar] [CrossRef]
  52. Wei, H.; Zhang, Y.; Chang, Z.; Li, H.; Wang, H.; Sun, X. Oriented objects as pairs of middle lines. ISPRS J. Photogramm. Remote Sens. 2020, 169, 268–279. [Google Scholar] [CrossRef]
  53. Wang, J.; Yang, W.; Li, H.C.; Zhang, H.; Xia, G.S. Learning center probability map for detecting objects in aerial images. IEEE Trans. Geosci. Remote Sens. 2020, 59, 4307–4323. [Google Scholar] [CrossRef]
  54. Yi, J.; Wu, P.; Liu, B.; Huang, Q.; Qu, H.; Metaxas, D. Oriented object detection in aerial images with box boundary-aware vectors. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Virtual, 5–9 January 2021; pp. 2150–2159. [Google Scholar]
  55. Pan, X.; Ren, Y.; Sheng, K.; Dong, W.; Yuan, H.; Guo, X.; Ma, C.; Xu, C. Dynamic refinement network for oriented and densely packed object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 11207–11216. [Google Scholar]
  56. Xu, Y.; Fu, M.; Wang, Q.; Wang, Y.; Chen, K.; Xia, G.S.; Bai, X. Gliding vertex on the horizontal bounding box for multi-oriented object detection. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 43, 1452–1459. [Google Scholar] [CrossRef]
  57. Li, Z.; Hou, B.; Wu, Z.; Ren, B.; Yang, C. FCOSR: A simple anchor-free rotated detector for aerial object detection. Remote Sens. 2023, 15, 5499. [Google Scholar] [CrossRef]
  58. Wang, J.; Ding, J.; Guo, H.; Cheng, W.; Pan, T.; Yang, W. Mask OBB: A semantic attention-based mask oriented bounding box representation for multi-category object detection in aerial images. Remote Sens. 2019, 11, 2930. [Google Scholar] [CrossRef]
  59. Cheng, G.; Yao, Y.; Li, S.; Li, K.; Xie, X.; Wang, J.; Yao, X.; Han, J. Dual-aligned oriented detector. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–11. [Google Scholar] [CrossRef]
  60. Zhang, Z.; Sabuncu, M. Generalized cross entropy loss for training deep neural networks with noisy labels. Adv. Neural Inf. Process. Syst. 2018, 31, 14334–14345. [Google Scholar]
  61. Jiang, Y.; Zhu, X.; Wang, X.; Yang, S.; Li, W.; Wang, H.; Fu, P.; Luo, Z. R2CNN: Rotational region CNN for orientation robust scene text detection. arXiv 2017, arXiv:1706.09579. [Google Scholar]
  62. Shu, Z.; Hu, X.; Sun, J. Center-point-guided proposal generation for detection of small and dense buildings in aerial imagery. IEEE Geosci. Remote Sens. Lett. 2018, 15, 1100–1104. [Google Scholar] [CrossRef]
  63. Ding, J.; Xue, N.; Long, Y.; Xia, G.S.; Lu, Q. Learning RoI transformer for oriented object detection in aerial images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 2849–2858. [Google Scholar]
  64. Liao, M.; Zhu, Z.; Shi, B.; Xia, G.S.; Bai, X. Rotation-sensitive regression for oriented scene text detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 5909–5918. [Google Scholar]
  65. Ren, Z.; Tang, Y.; He, Z.; Tian, L.; Yang, Y.; Zhang, W. Ship detection in high-resolution optical remote sensing images aided by saliency information. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5623616. [Google Scholar] [CrossRef]
Figure 1. Challenges in Feature Extraction: Poor Rotation Handling, Feature Overlapping, Enhancement Errors, and Weak Responses. Columns show the images (left) and their feature maps produced by RetinaNet (middle) and our model (right). Specifically, (A1,B1) represent the images selected from the DOTA dataset, (A2,B2) represent the feature maps generated by ResNet50 + FPN, and (A3,B3) represent the feature maps extracted by the ResNet50 + MRFPN and ResNet50 + FPN + SFEM variants of our method.
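As a self-contained illustration of the rotation sensitivity that Figure 1 visualizes, the sketch below feeds an image and its 90°-rotated copy through a feature extractor and compares the resulting maps. Using a plain torchvision ResNet-50 (rather than the ResNet50 + FPN used for the figure) and probing the `layer3` node are assumptions made only to keep the example runnable; the input tensor is a random stand-in for a DOTA image crop.

```python
import torch
from torchvision.models import resnet50
from torchvision.models.feature_extraction import create_feature_extractor

# Build a feature extractor that returns an intermediate feature map.
backbone = resnet50(weights=None).eval()
extractor = create_feature_extractor(backbone, return_nodes={"layer3": "feat"})

image = torch.rand(1, 3, 512, 512)              # stand-in for a DOTA image crop
rotated = torch.rot90(image, k=1, dims=(2, 3))  # rotate the input by 90 degrees

with torch.no_grad():
    f_orig = extractor(image)["feat"]
    f_rot = extractor(rotated)["feat"]

# Rotate the second map back; a rotation-robust extractor would make the two agree.
f_rot_back = torch.rot90(f_rot, k=-1, dims=(2, 3))
discrepancy = (f_orig - f_rot_back).abs().mean().item()
print(f"mean absolute feature discrepancy under 90-degree rotation: {discrepancy:.4f}")
```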
Figure 2. Overall architecture of the proposed SREDet model. SREDet consists of four parts. First, the backbone network performs initial feature extraction. Then, multi-angle (different colors represent feature maps at different angles) and multiscale feature maps are fused through the MRFPN to extract rotation-invariant features. The fused features are fed into the SFEM to suppress background noise and enhance foreground objects. Finally, the processed features are passed to both the classification and regression heads to obtain oriented bounding box predictions.
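To make the four-stage data flow in Figure 2 concrete, here is a minimal PyTorch-style sketch of the forward pass. The module names `backbone`, `mrfpn`, `sfem`, `cls_head`, and `reg_head` are placeholders assumed for illustration; only the call order reflects the figure, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SREDetSketch(nn.Module):
    """Illustrative pipeline for Figure 2: backbone -> MRFPN -> SFEM -> heads."""

    def __init__(self, backbone, mrfpn, sfem, cls_head, reg_head):
        super().__init__()
        self.backbone = backbone   # initial multi-scale feature extraction
        self.mrfpn = mrfpn         # fuses multi-angle and multiscale feature maps
        self.sfem = sfem           # semantic-driven enhancement / background suppression
        self.cls_head = cls_head   # per-level classification scores
        self.reg_head = reg_head   # per-level oriented-box regression (x, y, w, h, theta)

    def forward(self, images: torch.Tensor):
        feats = self.backbone(images)             # e.g. C3..C5 feature maps
        pyramid = self.mrfpn(feats)               # e.g. P3..P7 rotation-aware maps
        enhanced = [self.sfem(p) for p in pyramid]
        cls_scores = [self.cls_head(p) for p in enhanced]
        box_preds = [self.reg_head(p) for p in enhanced]
        return cls_scores, box_preds
```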
Figure 3. Structure of Rotation Feature Alignment Module. This module maps the features from different orientations back to the original direction, and extracts features more closely aligned with the object through deformable convolution.
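The Figure 3 caption describes two operations: rotating a feature map extracted at a given angle back to the canonical orientation, and refining it with a deformable convolution. The sketch below is one possible reading of that idea, assuming inverse rotation via `affine_grid`/`grid_sample` and a 3×3 deformable kernel from `torchvision.ops`; the paper's exact module may differ.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.ops import DeformConv2d

class RotationFeatureAlign(nn.Module):
    """Sketch of Figure 3: rotate a feature map back to 0 degrees, then refine it."""

    def __init__(self, channels: int):
        super().__init__()
        # 18 = 2 offsets (x, y) per position of a 3x3 kernel
        self.offset_pred = nn.Conv2d(channels, 18, kernel_size=3, padding=1)
        self.deform_conv = DeformConv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, feat: torch.Tensor, angle_deg: float) -> torch.Tensor:
        n = feat.size(0)
        theta = math.radians(-angle_deg)  # inverse rotation back to the original direction
        rot = feat.new_tensor([[math.cos(theta), -math.sin(theta), 0.0],
                               [math.sin(theta),  math.cos(theta), 0.0]])
        grid = F.affine_grid(rot.unsqueeze(0).repeat(n, 1, 1), feat.size(),
                             align_corners=False)
        aligned = F.grid_sample(feat, grid, align_corners=False)
        offsets = self.offset_pred(aligned)        # sampling offsets follow the object shape
        return self.deform_conv(aligned, offsets)  # features more closely aligned with the object
```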
Figure 4. Different Semantic Formats and Enhancement Strategies. This figure shows two types of semantic annotation and two distinct enhancement methods, where (a,b) demonstrate the implicit and explicit enhancement, respectively. $F_{in}$ and $F_{out}$ represent the feature map before and after enhancement, and $W$ indicates the weights generated by different strategies.
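The contrast in Figure 4 can be sketched as two ways of producing the weights $W$ applied to $F_{in}$. The reading below, in which "explicit" takes $W$ directly from the semantic prediction while "implicit" learns $W$ under auxiliary semantic supervision, is an assumption about the general idea; the 1×1 semantic head and the residual re-weighting are likewise illustrative choices, not the paper's exact module.

```python
import torch
import torch.nn as nn

class SemanticEnhancementSketch(nn.Module):
    """One possible reading of the two enhancement strategies in Figure 4."""

    def __init__(self, channels: int, mode: str = "implicit"):
        super().__init__()
        self.mode = mode
        self.semantic_head = nn.Conv2d(channels, 1, kernel_size=1)   # foreground logits
        self.attn = nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1),
                                  nn.Sigmoid())                       # learned weights W

    def forward(self, f_in: torch.Tensor):
        sem_logits = self.semantic_head(f_in)   # supervised by mask / segmentation GT
        if self.mode == "explicit":
            w = torch.sigmoid(sem_logits)       # W taken straight from the semantic map
        else:
            w = self.attn(f_in)                 # W learned, shaped by the auxiliary loss
        f_out = f_in * w + f_in                 # residual re-weighting of F_in
        return f_out, sem_logits
```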
Figure 5. Definition of error types. Red boxes denote the GT of the object, green boxes represent false positive samples, and yellow highlighted line segments indicate the RIoU condition for each error type.
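The error types in Figure 5 (and in Tables 3 and 5) can be assigned to a false-positive prediction from its rotated IoU (RIoU) with the ground truth. The small function below is a sketch in the spirit of the TIDE toolbox [45]; the 0.5/0.1 thresholds and the priority order are assumptions, not values stated in the paper, and $E_{miss}$ is counted separately per unmatched ground-truth box rather than here.

```python
def classify_false_positive(riou_same_cls: float, riou_other_cls: float,
                            fg_thr: float = 0.5, bg_thr: float = 0.1) -> str:
    """Assign an error type to a prediction that is not a true positive.

    riou_same_cls  : best RIoU with a GT box of the predicted class (< fg_thr here)
    riou_other_cls : best RIoU with a GT box of any other class
    """
    if riou_other_cls >= fg_thr:
        return "E_cls"       # well localized, but on an object of another class
    if riou_same_cls >= bg_thr:
        return "E_loc"       # right class, poorly localized
    if riou_other_cls >= bg_thr:
        return "E_cls_loc"   # wrong class and poorly localized
    return "E_bkg"           # no sufficient overlap with any GT: background error

# E_miss is tallied per ground-truth box that no prediction covers above bg_thr.
```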
Figure 6. Visualization of Detection Results. Predictions on the DOTA dataset obtained with our SREDet method.
Figure 7. Visualization of Different Strategies. (a–l) The first row (a–c) and the second row (d–f) present the detected object boxes and the output semantic maps, respectively. The last two rows (g–l) show feature maps from different channels. The first column represents the baseline, while the second and third columns illustrate the results obtained by employing Mask and Segmentation, respectively, as semantic guidance for implicit feature map enhancement.
Figure 8. Visualization of Detection Results. Predictions on the HRSC2016 dataset obtained with our SREDet method.
Table 2. Results of ablation experiments on the DOTA dataset.

| Method | MRFPN | SFEM | mAP50 | BR | GTF | SV | LV | SH | BC | ST | SBF | HA | SP | HC |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| baseline | | | 63.4 | 41.3 | 60.3 | 65.6 | 69.8 | 78.1 | 55.2 | 59.7 | 50.5 | 58.8 | 52.9 | 40.3 |
| Ours | ✓ | | 68.1 | 43.7 | 66.7 | 67.5 | 75.6 | 85.8 | 66.1 | 66.0 | 48.1 | 67.6 | 58.0 | 55.8 |
| Ours | | ✓ | 68.8 | 47.5 | 68.7 | 68.4 | 77.5 | 86.2 | 60.4 | 64.4 | 57.9 | 62.6 | 59.3 | 57.1 |
| Ours | ✓ | ✓ | 69.7 | 47.8 | 70.2 | 68.6 | 78.1 | 86.6 | 65.7 | 65.8 | 58.1 | 67.2 | 58.9 | 57.4 |
Table 3. Error type metrics of ablation experiments on the DOTA dataset.

| Method | MRFPN | SFEM | E_cls | E_loc | E_cls&loc | E_bkg | E_miss |
|---|---|---|---|---|---|---|---|
| baseline | | | 2.27 | 8.87 | 0.10 | 6.14 | 7.52 |
| Ours | ✓ | | 1.72 | 7.59 | 0.11 | 5.84 | 6.76 |
| Ours | | ✓ | 1.81 | 7.33 | 0.08 | 5.56 | 7.13 |
| Ours | ✓ | ✓ | 1.75 | 7.43 | 0.09 | 5.51 | 6.77 |
Table 4. Results of Semantic Supervision with Different Strategies. Enh-Mtds denotes the enhancement method (Expl/Impl) and Sem-Sty the semantic style (Mask/Seg).

| Method | Expl | Impl | Mask | Seg | mAP50 | PL | BD | GTF | BC | SBF | RA | HA | SP | HC |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| baseline | | | | | 63.4 | 88.5 | 74.9 | 60.3 | 55.2 | 50.5 | 63.9 | 58.8 | 52.9 | 40.3 |
| Ours | | | | | 66.5 | 88.7 | 77.1 | 62.9 | 58.6 | 54.2 | 61.0 | 60.9 | 57.2 | 52.2 |
| Ours | | | | | 67.4 | 88.9 | 76.8 | 63.2 | 60.8 | 53.6 | 63.9 | 61.3 | 57.6 | 54.5 |
| Ours | | | | | 68.0 | 88.8 | 77.1 | 70.0 | 62.1 | 53.3 | 61.5 | 62.3 | 58.7 | 56.1 |
| Ours | | | | | 68.8 | 89.2 | 77.4 | 68.7 | 60.4 | 57.9 | 64.1 | 62.6 | 59.3 | 57.1 |
Table 5. Error type metrics of Semantic Supervision with Different Strategies. Enh-Mtds denotes the enhancement method (Expl/Impl) and Sem-Sty the semantic style (Mask/Seg).

| Method | Expl | Impl | Mask | Seg | E_cls | E_loc | E_cls&loc | E_bkg | E_miss |
|---|---|---|---|---|---|---|---|---|---|
| baseline | | | | | 2.27 | 8.87 | 0.10 | 6.14 | 7.52 |
| Ours | | | | | 1.74 | 7.76 | 0.06 | 6.51 | 6.91 |
| Ours | | | | | 1.75 | 7.73 | 0.07 | 5.70 | 7.25 |
| Ours | | | | | 1.76 | 7.64 | 0.07 | 6.05 | 7.15 |
| Ours | | | | | 1.81 | 7.33 | 0.08 | 5.56 | 7.13 |
Table 6. Ablative study of MRFPN with different strategies.

| MRFPN Layers | Use DCN | mAP50 |
|---|---|---|
| {P3, P4, P5} | | 67.98 |
| {P3, P4, P5, P6} | | 68.08 |
| {P3, P4, P5, P6} | | 68.09 |
| {P3, P4, P5, P6, P7} | | 68.08 |
| {P3, P4, P5, P6, P7} | | 68.11 |
Table 7. Ablative study of SFEM with different strategies.

| Enhanced Layers | Stacked Dilated Convolution | mAP50 |
|---|---|---|
| {P3, P4, P5} | {1, 1, 1} | 68.1 |
| {P3, P4, P5} | {4, 3, 2} | 68.5 |
| {P3, P4, P5, P6, P7} | {1, 1, 1, 1, 1} | 68.3 |
| {P3, P4, P5, P6, P7} | {4, 4, 3, 2, 2} | 68.8 |
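A per-level stack of dilated convolutions such as those configured in Table 7 can be sketched as follows. Interpreting each per-level number (e.g. {4, 3, 2} for {P3, P4, P5}) as the count of stacked 3×3 dilated convolutions, with the dilation rate growing by one per layer, is an assumption about the configuration rather than a detail stated in the table; the 256-channel width in the usage line is likewise assumed.

```python
import torch.nn as nn

def build_dilated_stack(channels: int, num_convs: int) -> nn.Sequential:
    """Build one per-level stack of 3x3 dilated convolutions (sketch for Table 7)."""
    layers = []
    for i in range(num_convs):
        dilation = i + 1  # 1, 2, 3, ... enlarges the receptive field per stacked conv
        layers += [nn.Conv2d(channels, channels, kernel_size=3,
                             padding=dilation, dilation=dilation),
                   nn.ReLU(inplace=True)]
    return nn.Sequential(*layers)

# Example: the best setting in Table 7, {4, 4, 3, 2, 2} over {P3, ..., P7}.
sfem_stacks = [build_dilated_stack(256, n) for n in (4, 4, 3, 2, 2)]
```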
Table 8. Ablation study of the SFEM with different loss functions. BG represents the background class weight and FG represents the foreground class weight.

| Loss | Weights | mAP50 | mAP |
|---|---|---|---|
| Focal loss [46] | | 68.80 | 50.11 |
| CE loss [60] | BG{1}, FG{1} | 67.89 | 49.87 |
| Dice loss [47] | BG{1}, FG{1} | 67.99 | 50.07 |
| Dice loss [47] | BG{1}, FG{20} | 68.83 | 50.28 |
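A foreground/background-weighted Dice loss [47] in the spirit of the best row of Table 8 (BG{1}, FG{20}) could look like the sketch below. Only the 1/20 weight ratio comes from the table; the exact way the weights enter the Dice formulation, and the (N, 1, H, W) tensor layout, are assumptions made for illustration.

```python
import torch

def weighted_dice_loss(logits: torch.Tensor, target: torch.Tensor,
                       fg_weight: float = 20.0, bg_weight: float = 1.0,
                       eps: float = 1e-6) -> torch.Tensor:
    """Class-weighted Dice loss for a binary semantic map (sketch for Table 8).

    logits, target: (N, 1, H, W); target holds 1 on object pixels, 0 on background.
    """
    prob = torch.sigmoid(logits)
    # Per-pixel weights: fg_weight on object pixels, bg_weight elsewhere.
    w = torch.where(target > 0.5,
                    torch.full_like(target, fg_weight),
                    torch.full_like(target, bg_weight))
    inter = (w * prob * target).sum(dim=(1, 2, 3))
    denom = (w * (prob + target)).sum(dim=(1, 2, 3))
    dice = (2.0 * inter + eps) / (denom + eps)
    return (1.0 - dice).mean()
```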
Table 9. Performance of SFEM with Various Base Models.

| Base Model | Backbone | with SFEM | mAP50 | mAP |
|---|---|---|---|---|
| Faster R-CNN [48] | ResNet101 | | 70.24 | 53.12 |
| Faster R-CNN [48] | ResNet101 | ✓ | 71.12 | 53.28 |
| YOLOv8-m [25] | CSPDarknet | | 74.75 | 57.32 |
| YOLOv8-m [25] | CSPDarknet | ✓ | 75.36 | 58.06 |
| YOLOv8-l [25] | CSPDarknet | | 75.08 | 57.81 |
| YOLOv8-l [25] | CSPDarknet | ✓ | 75.84 | 58.47 |
Table 10. Comparison with state-of-the-art methods on the HRSC2016 dataset.

| Methods | Backbone | Size | mAP50 |
|---|---|---|---|
| R2CNN [61] | ResNet101 | 800 × 800 | 73.1 |
| R2PN [50] | VGG16 | / | 79.6 |
| OLPD [62] | ResNet101 | 800 × 800 | 88.4 |
| RoI-Trans [63] | ResNet101 | 512 × 800 | 86.2 |
| R3Det [24] | ResNet101 | 800 × 800 | 89.3 |
| RetinaNet (baseline) [46] | ResNet101 | 800 × 800 | 84.6 |
| RRD [64] | VGG16 | 384 × 384 | 84.3 |
| BBAVectors [54] | ResNet101 | 800 × 800 | 89.7 |
| SDet [65] | ResNet101 | 800 × 800 | 89.2 |
| SREDet (ours) | ResNet101 | 800 × 800 | 89.8 |