A Fast and Lightweight Detection Network for Multi-Scale SAR Ship Detection under Complex Backgrounds
Abstract
1. Introduction
- A Channel-Attention Path Enhancement block (CAPE-Block) is designed by adding a bottom-up enhancement path with a channel-attention mechanism to the feature pyramid network (FPN); it shortens the path of information transmission and enhances the precise positioning information stored in low-level feature maps.
- A fast and lightweight detection network is designed based on CAPE-Block, ASIR-Block, Focus-Block, and SPP-Block for multi-scale SAR ship detection under complex backgrounds.
- A novel loss function is designed for the training of FASC-Net. Binary cross-entropy loss is used to calculate the objectness loss and the classification loss, and GIoU loss is used to calculate the loss of the prediction box. Three hyperparameters are introduced to balance the weights of the three sub-losses.
- Compared with other state-of-the-art methods (e.g., Faster R-CNN [17], SSD [13], YOLO-V4 [30], DAPN [24], HR-SDNet [25], and Quad-FPN [26]), a series of comparative experiments and ablation studies on the SSDD dataset [31], SAR-Ship-Dataset [32], and HRSID dataset [33] shows that our FASC-Net achieves higher mean average precision (mAP) and faster detection speed with fewer parameters.
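The loss design above can be sketched numerically. The sketch below is illustrative rather than the authors' implementation: the weight names `w_box`, `w_obj`, and `w_cls` stand in for the three balancing hyperparameters (whose symbols did not survive extraction here), and their default values are assumptions.

```python
import math

def bce(p, y):
    """Binary cross-entropy for one predicted probability p and binary label y."""
    eps = 1e-7
    p = min(max(p, eps), 1.0 - eps)  # clamp to avoid log(0)
    return -(y * math.log(p) + (1.0 - y) * math.log(1.0 - p))

def total_loss(giou, p_obj, y_obj, p_cls, y_cls,
               w_box=0.05, w_obj=1.0, w_cls=0.5):
    """Weighted sum of the three sub-losses for one prediction.

    giou: generalized IoU of predicted vs. ground-truth box, in [-1, 1];
    the box loss is 1 - GIoU, so perfect overlap contributes zero loss.
    """
    loss_box = 1.0 - giou
    loss_obj = bce(p_obj, y_obj)   # objectness loss
    loss_cls = bce(p_cls, y_cls)   # classification loss
    return w_box * loss_box + w_obj * loss_obj + w_cls * loss_cls
```

A perfect prediction (GIoU = 1, confident correct objectness and class) drives all three terms toward zero, while the weights trade box accuracy against confidence calibration.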
2. Materials and Methods
2.1. Backbone
2.1.1. Focus-Block
2.1.2. ASIR-Block
2.1.3. SPP-Block
2.2. CAPE-Block, Prediction, and Post-Processing
2.2.1. CAPE-Block
2.2.2. Prediction
2.2.3. Post-Processing
2.3. Network Architecture of FASC-Net
2.4. Training of FASC-Net
2.5. Dataset Description
- SSDD: SSDD [31] was built by following the production process of the PASCAL VOC dataset, because PASCAL VOC is widely used and its data format is standardized. SSDD contains 1160 images with an average size of 480 × 480 pixels and 2456 ships collected from Radarsat-2, TerraSAR-X, and Sentinel-1. The SAR ships in SSDD have resolutions of 1–10 m with HH, HV, VV, and VH polarizations, as shown in Figure 10. Images whose index suffix is 1 or 0 are selected as the testing dataset and the rest as the training dataset, giving a training-to-testing ratio of 8:2.
- SAR-Ship-Dataset: This dataset [32] takes Sentinel-1 and Gaofen-3 SAR data as its dominant sources. Its ship slices have diverse backgrounds and rich ship types, which makes the dataset useful for a variety of SAR image applications. It contains 43,819 images of size 256 × 256 pixels, and the SAR ships have resolutions of 5–20 m with HH, HV, VV, and VH polarizations. There is plenty of noise in these images, which makes them a good test of the proposed network's ability to resist noise interference. The SAR images are randomly divided into a training dataset and a testing dataset at a ratio of 8:2.
- HRSID: HRSID [33] is a dataset for ship detection, semantic segmentation, and instance segmentation in high-resolution SAR images. It borrows from the construction process of the COCO dataset and includes SAR images of different coastal ports, sea areas, sea conditions, polarizations, and resolutions. The dataset contains 5604 high-resolution SAR images of size 800 × 800 pixels and 16,951 ship instances from TerraSAR-X and Sentinel-1. The SAR ships in HRSID have resolutions of 0.1–3 m with HH, HV, and VV polarizations. The high resolution and complex coastal backgrounds of these images make it possible to accurately assess the ability of the proposed network to detect multi-scale targets under complex coastal backgrounds. The training and testing datasets contain 3642 and 1961 SAR images, respectively, a ratio of 13:7.
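As a minimal illustration of the SSDD split rule above (images whose index suffix is 1 or 0 go to the testing set, everything else to training), assuming zero-padded numeric filenames, which are made up here:

```python
def split_ssdd(filenames):
    """Send images whose numeric index ends in 0 or 1 to the test set.

    With uniformly distributed indices this yields roughly a 8:2
    training-to-testing ratio, matching the split described for SSDD.
    """
    train, test = [], []
    for name in filenames:
        stem = name.rsplit(".", 1)[0]      # drop the file extension
        if stem[-1] in ("0", "1"):
            test.append(name)
        else:
            train.append(name)
    return train, test

# Hypothetical filenames for illustration only.
train, test = split_ssdd([f"ship_{i:06d}.jpg" for i in range(100)])
```

Splitting by filename suffix rather than randomly makes the partition reproducible across experiments.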
3. Experiments and Results
3.1. Data Augmentation
3.2. Evaluation Criteria
3.3. Detection Results on the SSDD Dataset
3.4. Detection Results on the SAR-Ship-Dataset
3.5. Detection Results on the HRSID Dataset
3.6. Methods Comparison
3.7. Generalization Ability Verification
4. Discussion
- FPNet: Composed of traditional convolutional layers and an FPN-Block, FPNet has the same network width and depth as FASC-Net. The traditional convolutional layers are used for downsampling and feature extraction. FPNet is trained with the same data augmentation methods, training settings, and loss function as FASC-Net.
- A-FPNet: A-FPNet is obtained by replacing all the traditional convolutional layers of FPNet with ASIR-Blocks. The effect of ASIR-Blocks can be evaluated by comparing the parameters and performance of FPNet and A-FPNet.
- FA-FPNet: FA-FPNet is obtained by replacing the first ASIR-Block-2 used for down-sampling in A-FPNet with a Focus-Block. The effect of the Focus-Block can be verified by comparing the parameters and performance of FA-FPNet and A-FPNet.
- FAS-FPNet: FAS-FPNet is obtained by adding an SPP-Block to FA-FPNet. The effect of the SPP-Block can be verified by comparing the performance of FA-FPNet and FAS-FPNet, and the effect of the CAPE-Block can be verified by comparing the performance of FAS-FPNet and FASC-Net.
5. Conclusions
- Compared with the other six excellent methods, the fast and lightweight FASC-Net achieves higher detection performance on the three datasets and has fewer parameters.
- The detection performance improves remarkably when upgrading FPN-Block to CAPE-Block, because it can shorten the path of information transmission and make full use of the precise positioning information stored in low-level features.
- Using ASIR-Net as the backbone of the detection network significantly decreases the number of parameters while maintaining detection accuracy.
- Compared with L2 loss and mean-square-error loss, GIoU loss better reflects the overlap between the predicted and ground-truth boxes and improves the training of the network.
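For reference, the GIoU mentioned in the last point extends IoU with a penalty based on the smallest enclosing box, so even non-overlapping boxes receive a meaningful (negative) score and a useful gradient. A minimal implementation for axis-aligned (x1, y1, x2, y2) boxes:

```python
def giou(box_a, box_b):
    """Generalized IoU of two (x1, y1, x2, y2) boxes; in [-1, 1], 1 = exact match."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection area (zero if the boxes do not overlap).
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union
    # Smallest axis-aligned box enclosing both inputs.
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    c_area = cw * ch
    # Penalize the empty part of the enclosing box.
    return iou - (c_area - union) / c_area
```

Plain IoU is zero for any pair of disjoint boxes, so it cannot tell a near miss from a far one; GIoU grows more negative as the boxes move apart, which is what makes 1 − GIoU usable as a regression loss.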
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Yang, M.; Guo, C.; Zhong, H.; Yin, H. A curvature-based saliency method for ship detection in SAR images. IEEE Geosci. Remote Sens. Lett. 2020, 18, 1590–1594.
- Lin, H.; Chen, H.; Jin, K.; Zeng, L.; Yang, J. Ship detection with superpixel-level Fisher vector in high-resolution SAR images. IEEE Geosci. Remote Sens. Lett. 2019, 17, 247–251.
- Chen, J.; Chen, Y.; Yang, J. Ship detection using polarization cross-entropy. IEEE Geosci. Remote Sens. Lett. 2009, 6, 723–727.
- Shirvany, R.; Chabert, M.; Tourneret, J.Y. Ship and oil-spill detection using the degree of polarization in linear and hybrid/compact dual-pol SAR. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2012, 5, 885–892.
- Tello, M.; López-Martínez, C.; Mallorqui, J.J. A novel algorithm for ship detection in SAR imagery based on the wavelet transform. IEEE Geosci. Remote Sens. Lett. 2005, 2, 201–205.
- Tello, M.; Lopez-Martinez, C.; Mallorqui, J.; Bonastre, R. Automatic detection of spots and extraction of frontiers in SAR images by means of the wavelet transform: Application to ship and coastline detection. In Proceedings of the 2006 IEEE International Symposium on Geoscience and Remote Sensing, Denver, CO, USA, 31 July–4 August 2006; pp. 383–386.
- Leng, X.; Ji, K.; Xing, X.; Zhou, S.; Zou, H. Area ratio invariant feature group for ship detection in SAR imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 2376–2388.
- Pappas, O.; Achim, A.; Bull, D. Superpixel-level CFAR detectors for ship detection in SAR imagery. IEEE Geosci. Remote Sens. Lett. 2018, 15, 1397–1401.
- Kang, M.; Leng, X.; Lin, Z.; Ji, K. A modified Faster R-CNN based on CFAR algorithm for SAR ship detection. In Proceedings of the 2017 International Workshop on Remote Sensing with Intelligent Processing (RSIP), Shanghai, China, 18–21 May 2017; pp. 1–4.
- Park, K.; Park, J.J.; Jang, J.C.; Lee, J.H.; Oh, S.; Lee, M. Multi-spectral ship detection using optical, hyperspectral, and microwave SAR remote sensing data in coastal regions. Sustainability 2018, 10, 4064.
- Dai, J.; Li, Y.; He, K.; Sun, J. R-FCN: Object detection via region-based fully convolutional networks. Adv. Neural Inf. Process. Syst. 2016, 379–387.
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. Adv. Neural Inf. Process. Syst. 2015, 28, 91–99.
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. SSD: Single shot multibox detector. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2016; pp. 21–37.
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788.
- Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2980–2988.
- Jiao, J.; Zhang, Y.; Sun, H.; Yang, X.; Gao, X.; Hong, W.; Fu, K.; Sun, X. A densely connected end-to-end neural network for multiscale and multiscene SAR ship detection. IEEE Access 2018, 6, 20881–20892.
- Li, J.; Qu, C.; Shao, J. A ship detection method based on cascade CNN in SAR images. Control Decis. 2019, 34, 2191–2197.
- Kang, M.; Ji, K.; Leng, X.; Lin, Z. Contextual region-based convolutional neural network with multilayer fusion for SAR ship detection. Remote Sens. 2017, 9, 860.
- Chang, Y.L.; Anagaw, A.; Chang, L.; Wang, Y.C.; Hsiao, C.Y.; Lee, W.H. Ship detection based on YOLOv2 for SAR imagery. Remote Sens. 2019, 11, 786.
- Lin, Z.; Ji, K.; Leng, X.; Kuang, G. Squeeze and excitation rank Faster R-CNN for ship detection in SAR images. IEEE Geosci. Remote Sens. Lett. 2018, 16, 751–755.
- Wang, Y.; Wang, C.; Zhang, H.; Dong, Y.; Wei, S. Automatic ship detection based on RetinaNet using multi-resolution Gaofen-3 imagery. Remote Sens. 2019, 11, 531.
- Zhang, T.; Zhang, X.; Shi, J.; Wei, S. Depthwise separable convolution neural network for high-speed SAR ship detection. Remote Sens. 2019, 11, 2483.
- Zhang, X.; Wang, H.; Xu, C.; Lv, Y.; Fu, C.; Xiao, H.; He, Y. A lightweight feature optimizing network for ship detection in SAR image. IEEE Access 2019, 7, 141662–141678.
- Cui, Z.; Li, Q.; Cao, Z.; Liu, N. Dense attention pyramid networks for multi-scale ship detection in SAR images. IEEE Trans. Geosci. Remote Sens. 2019, 57, 8983–8997.
- Wei, S.; Su, H.; Ming, J.; Wang, C.; Yan, M.; Kumar, D.; Shi, J.; Zhang, X. Precise and robust ship detection for high-resolution SAR imagery based on HR-SDNet. Remote Sens. 2020, 12, 167.
- Zhang, T.; Zhang, X.; Ke, X. Quad-FPN: A novel quad feature pyramid network for SAR ship detection. Remote Sens. 2021, 13, 2771.
- Rostami, M.; Kolouri, S.; Eaton, E.; Kim, K. Deep transfer learning for few-shot SAR image classification. Remote Sens. 2019, 11, 1374.
- Zhang, X.; Huo, C.; Xu, N.; Jiang, H.; Cao, Y.; Ni, L.; Pan, C. Multitask learning for ship detection from synthetic aperture radar images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 8048–8062.
- Iandola, F.N.; Han, S.; Moskewicz, M.W.; Ashraf, K.; Dally, W.J.; Keutzer, K. SqueezeNet: AlexNet-level accuracy with 50× fewer parameters and <0.5 MB model size. arXiv 2016, arXiv:1602.07360.
- Bochkovskiy, A.; Wang, C.Y.; Liao, H.Y.M. YOLOv4: Optimal speed and accuracy of object detection. arXiv 2020, arXiv:2004.10934.
- Li, J.; Qu, C.; Shao, J. Ship detection in SAR images based on an improved Faster R-CNN. In Proceedings of the 2017 SAR in Big Data Era: Models, Methods and Applications (BIGSARDATA), Beijing, China, 13–14 November 2017; pp. 1–6.
- Wang, Y.; Wang, C.; Zhang, H.; Dong, Y.; Wei, S. A SAR dataset of ship detection for deep learning under complex backgrounds. Remote Sens. 2019, 11, 765.
- Wei, S.; Zeng, X.; Qu, Q.; Wang, M.; Su, H.; Shi, J. HRSID: A high-resolution SAR images dataset for ship detection and instance segmentation. IEEE Access 2020, 8, 120234–120254.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 1904–1916.
- Lin, T.Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2117–2125.
- Rezatofighi, H.; Tsoi, N.; Gwak, J.; Sadeghian, A.; Reid, I.; Savarese, S. Generalized intersection over union: A metric and a loss for bounding box regression. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 658–666.
- Loshchilov, I.; Hutter, F. SGDR: Stochastic gradient descent with warm restarts. arXiv 2016, arXiv:1608.03983.
- Zhang, T.; Zhang, X.; Shi, J.; Wei, S. HyperLi-Net: A hyper-light deep learning network for high-accurate and high-speed ship detection from synthetic aperture radar imagery. ISPRS J. Photogramm. Remote Sens. 2020, 167, 123–153.
- Zhang, T.; Zhang, X.; Ke, X.; Zhan, X.; Shi, J.; Wei, S.; Pan, D.; Li, J.; Su, H.; Zhou, Y.; et al. LS-SSDD-v1.0: A deep learning dataset dedicated to small ship detection from large-scale Sentinel-1 SAR images. Remote Sens. 2020, 12, 2997.
- Chen, J.; Xing, M.; Yu, H.; Liang, B.; Peng, J.; Sun, G.C. Motion compensation/autofocus in airborne synthetic aperture radar: A review. IEEE Geosci. Remote Sens. Mag. 2021, 2–23.
- Moreira, A.; Prats-Iraola, P.; Younis, M.; Krieger, G.; Hajnsek, I.; Papathanassiou, K.P. A tutorial on synthetic aperture radar. IEEE Geosci. Remote Sens. Mag. 2013, 1, 6–43.
Layers | Anchor 1 | Anchor 2 | Anchor 3 |
---|---|---|---|
P1 | (116, 90) | (156, 198) | (373, 326) |
P2 | (30, 61) | (62, 45) | (59, 119) |
P3 | (10, 13) | (16, 30) | (33, 23) |
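The table above assigns the largest anchors to P1 (the coarsest prediction map) and the smallest to P3 (the finest). One common way to decide which anchor is responsible for a ground-truth box is shape-IoU matching, sketched below as an assumption; the paper's exact assignment rule is not reproduced here.

```python
# Anchor (width, height) pairs per prediction layer, from the table above.
ANCHORS = {
    "P1": [(116, 90), (156, 198), (373, 326)],  # coarsest map, large ships
    "P2": [(30, 61), (62, 45), (59, 119)],      # medium ships
    "P3": [(10, 13), (16, 30), (33, 23)],       # finest map, small ships
}

def shape_iou(wh_a, wh_b):
    """IoU of two boxes sharing the same center, compared by width/height only."""
    inter = min(wh_a[0], wh_b[0]) * min(wh_a[1], wh_b[1])
    union = wh_a[0] * wh_a[1] + wh_b[0] * wh_b[1] - inter
    return inter / union

def best_anchor(gt_wh):
    """Return the (layer, anchor) with the highest shape IoU for a ground-truth size."""
    candidates = [(layer, a) for layer, anchors in ANCHORS.items() for a in anchors]
    return max(candidates, key=lambda la: shape_iou(gt_wh, la[1]))
```

Under this matching, a small 12 × 14 pixel ship lands on P3 and a 150 × 200 pixel ship on P1, which is consistent with the multi-scale role of the three prediction layers.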
Input | Operator | n | k | s | Output | Parameters |
---|---|---|---|---|---|---|
3 × 640 × 640 | Focus | 8 | 3 | - | 8 × 320 × 320 | 880 |
8 × 320 × 320 | ASIR-Block-2 | 8 | 3 | 2 | 8 × 160 × 160 | 1496 |
8 × 160 × 160 | ASIR-Block-1 | 8 | 3 | 1 | 8 × 160 × 160 | 1496 |
8 × 160 × 160 | ASIR-Block-2 | 16 | 3 | 2 | 16 × 80 × 80 | 1768 |
16 × 80 × 80 | ASIR-Block-1 | 16 | 3 | 1 | 16 × 80 × 80 | 5040 |
16 × 80 × 80 | Channel-Attention | - | - | - | 16 × 80 × 80 | 280 |
16 × 80 × 80 | ASIR-Block-2 | 32 | 3 | 2 | 32 × 40 × 40 | 6096 |
32 × 40 × 40 | ASIR-Block-1 | 32 | 3 | 1 | 32 × 40 × 40 | 18,272 |
32 × 40 × 40 | Channel-Attention | - | - | - | 32 × 40 × 40 | 552 |
32 × 40 × 40 | ASIR-Block-2 | 64 | 3 | 2 | 64 × 20 × 20 | 22,432 |
64 × 20 × 20 | SPP | 64 | - | - | 64 × 20 × 20 | 10,432
64 × 20 × 20 | ASIR-Block-1 | 64 | 3 | 1 | 64 × 20 × 20 | 69,312 |
64 × 20 × 20 | CBS | 32 | 1 | 1 | 32 × 20 × 20 | 2112 |
32 × 20 × 20 | Channel-Attention | - | - | - | 32 × 20 × 20 | 552 |
32 × 20 × 20 | Upsample | - | - | - | 32 × 40 × 40 | 0 |
32 × 40 × 40 | Concat (−1, 6) | - | - | - | 64 × 40 × 40 | 0 |
64 × 40 × 40 | ASIR-Block-1 | 32 | 3 | 1 | 32 × 40 × 40 | 61,056 |
32 × 40 × 40 | CBS | 32 | 1 | 1 | 32 × 40 × 40 | 1088 |
32 × 40 × 40 | Channel-Attention | - | - | - | 32 × 40 × 40 | 552 |
32 × 40 × 40 | Upsample | - | - | - | 32 × 80 × 80 | 0 |
32 × 80 × 80 | Concat (−1, 4) | - | - | - | 48 × 80 × 80 | 0 |
48 × 80 × 80 | ASIR-Block-1 | 32 | 3 | 1 | 32 × 80 × 80 | 36,592 |
32 × 80 × 80 | Conv | 18 | 1 | 1 | 18 × 80 × 80 | 576 |
32 × 80 × 80 | ASIR-Block-2 | 64 | 3 | 2 | 64 × 40 × 40 | 22,432
64 × 40 × 40 | Concat (−1, 14) | - | - | - | 96 × 40 × 40 | 0
96 × 40 × 40 | ASIR-Block-1 | 64 | 3 | 1 | 64 × 40 × 40 | 140,768 |
64 × 40 × 40 | Conv | 18 | 1 | 1 | 18 × 40 × 40 | 1152
64 × 40 × 40 | ASIR-Block-2 | 64 | 3 | 2 | 64 × 20 × 20 | 69,312 |
64 × 20 × 20 | Concat (−1, 10) | - | - | - | 96 × 20 × 20 | 0 |
96 × 20 × 20 | ASIR-Block-1 | 128 | 3 | 1 | 128 × 20 × 20 | 165,472 |
128 × 20 × 20 | Conv | 18 | 1 | 1 | 18 × 20 × 20 | 2304 |
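The first row of the table (Focus: 3 × 640 × 640 → 8 × 320 × 320 with only 880 parameters) relies on a YOLOv5-style slicing step that moves each 2 × 2 pixel neighbourhood into the channel dimension before a small convolution maps the stacked slices to the output width. A sketch of the slicing alone, on a single channel (as nested lists) for clarity:

```python
def focus_slice(channel):
    """Space-to-depth on one H x W channel.

    Returns four (H/2) x (W/2) slices: even/even, odd/even, even/odd, and
    odd/odd rows/columns. Stacking these slices for every input channel
    quadruples the channel count while halving each spatial side; a
    convolution then produces the Focus-Block's output channels.
    """
    return [
        [row[0::2] for row in channel[0::2]],  # top-left pixel of each 2x2 patch
        [row[0::2] for row in channel[1::2]],  # bottom-left
        [row[1::2] for row in channel[0::2]],  # top-right
        [row[1::2] for row in channel[1::2]],  # bottom-right
    ]
```

Because the rearrangement is lossless and parameter-free, the Focus-Block downsamples without discarding pixels, which helps explain its tiny parameter count in the table.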
Method | Dataset | Precision (%) | Recall (%) | F1 (%) | mAP (%) | FPS |
---|---|---|---|---|---|---|
FASC-Net | SSDD | 95.6 ± 1.1 | 94.5 ± 0.4 | 94.9 ± 0.4 | 97.4 ± 0.3 | 42.5 ± 2.1 |
Method | Dataset | Precision (%) | Recall (%) | F1 (%) | mAP (%) | FPS
---|---|---|---|---|---|---|
FASC-Net | SAR-Ship-Dataset | 91.1 ± 0.9 | 92.2 ± 0.6 | 92.1 ± 0.5 | 96.1 ± 0.4 | 60.4 ± 2.6 |
Method | Dataset | Precision (%) | Recall (%) | F1 (%) | mAP (%) | FPS
---|---|---|---|---|---|---|
FASC-Net | HRSID | 88.3 ± 1.2 | 80.2 ± 0.5 | 84.1 ± 0.4 | 88.3 ± 0.5 | 24.5 ± 1.6 |
Methods | SSDD | SAR-Ship-Dataset | HRSID | Params (10⁶) | Model Size (MB) | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
P | R | F1 | mAP | FPS | P | R | F1 | mAP | FPS | P | R | F1 | mAP | FPS | |||
Faster R-CNN | 87.1 | 90.5 | 88.8 | 89.7 | 11.9 | 93.3 | 86.9 | 89.9 | 91.7 | 23.7 | 81.5 | 82 | 81.8 | 80.7 | 10.2 | 41.3 | 178.54 |
SSD | 86.2 | 92.3 | 89.2 | 92.3 | 16.1 | 86.8 | 93.6 | 90.1 | 92.2 | 30.5 | 80.1 | 83 | 81.6 | 81.51 | 13.8 | 14.1 | 58.22 |
YOLO-V4 | 96.9 | 95.9 | 96.4 | 96.3 | 22.7 | 93.1 | 90.5 | 91.8 | 93.1 | 32.6 | 85.9 | 83.3 | 84.6 | 87.7 | 14.5 | 11.2 | 40.27 |
DAPN | 85.6 | 91.4 | 88.4 | 90.6 | 12.2 | 87.3 | 93.4 | 90.3 | 91.9 | 21.5 | 83.4 | 80.5 | 81.9 | 81.9 | 13 | 22.5 | 90.73 |
HR-SDNet | 95.1 | 93 | 94.1 | 96.8 | 8.3 | 93.3 | 92.1 | 92.7 | 92.3 | 8.9 | 86.9 | 81.2 | 83.9 | 88.2 | 6.8 | 74.8 | 265.44 |
Quad-FPN | 89.5 | 95.8 | 92.6 | 95.3 | 11.4 | 77.6 | 96.1 | 85.9 | 94.4 | 23 | 88 | 87.3 | 87.7 | 86.1 | 13.4 | 15.2 | 60.52 |
FASC-Net | 95.6 | 94.5 | 94.9 | 97.4 | 42.5 | 91.1 | 92.2 | 92.1 | 96.1 | 60.4 | 88.3 | 80.2 | 84.1 | 88.3 | 24.5 | 0.64 | 1.47 |
Image | Place | Time | Polarization | GT Ships | Resolution | Image Size |
---|---|---|---|---|---|---|
Image 1 | Tokyo Port | 20 June 2020 | VV | 536 | 5 m × 20 m | 25,479 × 16,709 |
Image 2 | Singapore Strait | 6 June 2020 | VV | 760 | 5 m × 20 m | 25,650 × 16,768 |
Networks | Image 1 | Image 2 | ||||||
---|---|---|---|---|---|---|---|---|
P | R | F1 | mAP | P | R | F1 | mAP | |
CFAR | 60.2 | 75.6 | 67 | - | 62.7 | 75 | 68.3 | - |
Faster R-CNN | 77.7 | 73.7 | 75.6 | 73.7 | 74.8 | 77.2 | 76 | 74.3 |
SSD | 67.5 | 62.1 | 64.7 | 61.4 | 70 | 61.7 | 65.6 | 64.8 |
YOLO-V4 | 76.9 | 74.1 | 75.5 | 73.9 | 80.9 | 72.3 | 76.4 | 75.3 |
HR-SDNet | 84.9 | 68.8 | 76 | 68.8 | 85.8 | 70.1 | 77.1 | 70.5 |
Quad-FPN | 78.5 | 74.4 | 76.4 | 74.4 | 79.5 | 73.2 | 76.2 | 74.9 |
FASC-Net | 88.7 | 70.4 | 78.5 | 79.8 | 84.5 | 72.4 | 78 | 78.6 |
Networks | Configurations | Results | Params (10⁶) | |
---|---|---|---|---|---|---|---|---|
AB | FB | SB | CB | F1 (%) | mAP (%) | FPS | |
FPNet | | | | | 88.1 ± 0.5 | 93.1 ± 0.3 | 20.2 ± 1.1 | 3.72
A-FPNet | √ | | | | 88.2 ± 0.6 | 92.6 ± 0.6 | 50.5 ± 1.6 | 0.27
FA-FPNet | √ | √ | | | 89.3 ± 0.7 | 92.9 ± 0.6 | 52.1 ± 1.8 | 0.25
FAS-FPNet | √ | √ | √ | | 90.7 ± 0.6 | 94.8 ± 0.5 | 49.3 ± 1.9 | 0.27
FASC-Net | √ | √ | √ | √ | 94.9 ± 0.4 | 97.4 ± 0.3 | 42.5 ± 2.1 | 0.64
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Yu, J.; Zhou, G.; Zhou, S.; Qin, M. A Fast and Lightweight Detection Network for Multi-Scale SAR Ship Detection under Complex Backgrounds. Remote Sens. 2022, 14, 31. https://doi.org/10.3390/rs14010031