LH-YOLO: A Lightweight and High-Precision SAR Ship Detection Model Based on the Improved YOLOv8n
Abstract
1. Introduction
- The lightweight StarNet-nano structure is designed for the LH-YOLO backbone, significantly enhancing network efficiency. StarNet-nano retains the key feature extraction capabilities by streamlining the convolutional modules and reducing layer complexity, thereby cutting both the parameter count and the computational overhead. The design balances lightweight structure with efficiency, substantially reducing parameters and computational load while maintaining high detection precision.
- We also introduce the LFE-C2f structure in the neck of LH-YOLO, effectively reducing model parameters while maintaining performance comparable to that of YOLOv8n. LFE-C2f reduces redundant convolutional computation through a branching feature fusion design, improving overall model efficiency. The core operation of LFE-C2f is element-wise multiplication, which maps the input features into a high-dimensional nonlinear feature space and significantly enhances the model's feature representation capability (a minimal sketch of this operation follows this list).
- To mitigate the high computational load of the YOLOv8 detection head, we designed a reused and shared convolutional detection (RSCD) head that employs a weight-sharing mechanism to improve parameter utilization (a sketch of this idea also follows the list). This design reduces the parameter count while improving detection-head performance. Overall, LH-YOLO contains only 1.862M parameters, 1.144M fewer than YOLOv8n, a 38.1% reduction. Despite this substantial reduction, LH-YOLO surpasses YOLOv8n in SAR ship detection, achieving a mAP50 that is 1.4% higher on the HRSID dataset.
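To make the element-wise multiplication at the heart of StarNet-nano and LFE-C2f concrete, the following is a minimal PyTorch sketch of a star-style block in the spirit of the star operation described in the StarNet paper [38]. It is an illustration only, not the exact module used in LH-YOLO; the class name `StarBlock`, the channel expansion ratio, and the choice of depthwise convolution and activation are assumptions of this sketch.

```python
import torch
import torch.nn as nn


class StarBlock(nn.Module):
    """Illustrative star-operation block (assumed structure, not the paper's exact module).

    Two parallel 1x1 convolutions project the input; their element-wise product
    implicitly maps features into a high-dimensional nonlinear space, which is the
    core idea behind the StarNet-nano backbone and the LFE-C2f module described above.
    """

    def __init__(self, channels: int, expansion: int = 2):
        super().__init__()
        hidden = channels * expansion
        self.dw = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)  # depthwise spatial mixing
        self.f1 = nn.Conv2d(channels, hidden, 1)    # branch 1
        self.f2 = nn.Conv2d(channels, hidden, 1)    # branch 2
        self.proj = nn.Conv2d(hidden, channels, 1)  # project back to the input width
        self.act = nn.ReLU6()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        identity = x
        x = self.dw(x)
        x = self.act(self.f1(x)) * self.f2(x)       # element-wise "star" multiplication
        return identity + self.proj(x)              # residual connection


if __name__ == "__main__":
    block = StarBlock(64)
    out = block(torch.randn(1, 64, 80, 80))
    print(out.shape)  # torch.Size([1, 64, 80, 80])
```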
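Similarly, the weight-sharing idea behind the RSCD head can be sketched as one set of convolutions reused across all feature-pyramid scales, so the head parameters are counted once rather than once per scale. The sketch below is a simplified illustration under assumed layer widths and omits details such as per-scale normalization and the distribution-focal regression branch; it is not the authors' actual head.

```python
import torch
import torch.nn as nn


class SharedDetectionHead(nn.Module):
    """Simplified weight-shared detection head (illustrative, not the paper's exact RSCD head).

    The same stem and prediction convolutions are applied to every input scale,
    so their parameters are shared instead of duplicated per scale.
    """

    def __init__(self, in_channels: int, num_classes: int = 1, reg_channels: int = 64):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(in_channels, in_channels, 3, padding=1),
            nn.SiLU(),
        )
        self.cls_pred = nn.Conv2d(in_channels, num_classes, 1)   # shared classification conv
        self.box_pred = nn.Conv2d(in_channels, reg_channels, 1)  # shared box-regression conv

    def forward(self, feats):
        # feats: list of neck feature maps (e.g., strides 8/16/32), all assumed to be
        # projected to the same channel width before entering the head.
        outputs = []
        for f in feats:
            f = self.stem(f)
            outputs.append((self.cls_pred(f), self.box_pred(f)))
        return outputs


if __name__ == "__main__":
    head = SharedDetectionHead(in_channels=128)
    feats = [torch.randn(1, 128, s, s) for s in (80, 40, 20)]
    for cls_map, box_map in head(feats):
        print(cls_map.shape, box_map.shape)
```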
2. Methodology
2.1. The Architecture of YOLOv8
2.2. The Proposed LH-YOLO Structure
2.2.1. Structure of the Lightweight StarNet-Nano Backbone Network
2.2.2. Lightweight Feature Extraction Module
2.2.3. Reused and Shared Convolutional Detection Head
3. Experiments
3.1. Implementation Details
3.1.1. Platform
3.1.2. Datasets
- HRSID: The HRSID dataset comprises 5604 cropped SAR images and 16,951 ships. All images are 800 × 800 pixels in size. Ships in the dataset are categorized into three classes based on pixel area: small ships (<482 pixels), medium ships (482–1452 pixels), and large ships (>1452 pixels); a small helper illustrating this pixel-area binning follows the dataset descriptions. Specifically, the dataset includes 9242 small ships, 7388 medium ships, and 321 large ships [45]. The SAR images in HRSID exhibit very high spatial resolution and complex backgrounds, making the dataset a vital source for developing high-precision ship detection algorithms.
- SAR-Ship-Dataset: The SAR-Ship-Dataset comprises 43,819 cropped SAR images and 59,535 ships. All images are 256 × 256 pixels in size. Specifically, the dataset includes 35,695 small ships, 23,660 medium ships, and 180 large ships [46]. The dataset provides images captured by different SAR sensors, covering different imaging angles and geographical regions. Because the data originate from multiple SAR sensors, image quality and resolution vary considerably, requiring the algorithm to handle data from various sources. Furthermore, the detection algorithm must exhibit strong scale invariance to handle ship targets of varying sizes and shapes. In short, the dataset offers image diversity and a wide range of application scenarios, covering various sensors and sea regions, making it well suited to developing detection algorithms with strong generalization capabilities.
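For illustration, the pixel-area categorization quoted in the HRSID description above can be written as a small helper function. The thresholds come directly from that description; the function name and the width × height area convention are assumptions of this sketch.

```python
def ship_size_category(width_px: float, height_px: float) -> str:
    """Bin a ship bounding box into HRSID-style size classes by pixel area.

    Thresholds follow the dataset description above: small < 482 pixels,
    medium 482-1452 pixels, large > 1452 pixels.
    """
    area = width_px * height_px
    if area < 482:
        return "small"
    if area <= 1452:
        return "medium"
    return "large"


if __name__ == "__main__":
    print(ship_size_category(20, 20))   # 400 px  -> small
    print(ship_size_category(30, 30))   # 900 px  -> medium
    print(ship_size_category(40, 40))   # 1600 px -> large
```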
3.1.3. Evaluation Metrics
3.2. Ablation Experiment
3.3. Comparison with Other Methods
4. Discussion
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Moreira, A.; Prats-Iraola, P.; Younis, M.; Krieger, G.; Hajnsek, I.; Papathanassiou, K.P. A tutorial on synthetic aperture radar. IEEE Geosci. Remote Sens. Mag. 2013, 1, 6–43. [Google Scholar] [CrossRef]
- Wysocki, K.; Niewińska, M. Counteracting imagery (IMINT), optoelectronic (EOIMINT) and radar (SAR) intelligence. Sci. J. Mil. Univ. Land Forces 2022, 54, 222–244. [Google Scholar] [CrossRef]
- Agrawal, S.; Khairnar, G.B. A comparative assessment of remote sensing imaging techniques: Optical, sar and lidar. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, XLII-5/W3, 1–6. [Google Scholar] [CrossRef]
- Li, J.; Xu, C.; Su, H.; Gao, L.; Wang, T. Deep Learning for SAR Ship Detection: Past, Present and Future. Remote Sens. 2022, 14, 2712. [Google Scholar] [CrossRef]
- Alexandre, C.; Devillers, R.; Mouillot, D.; Seguin, R.; Catry, T. Ship Detection with SAR C-Band Satellite Images: A Systematic Review. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 14353–14367. [Google Scholar] [CrossRef]
- Yasir, M.; Jianhua, W.; Mingming, X.; Hui, S.; Zhe, Z.; Shanwei, L.; Colak, A.T.I.; Hossain, M.S. Ship detection based on deep learning using SAR imagery: A systematic literature review. Soft Comput. 2023, 27, 63–84. [Google Scholar] [CrossRef]
- Liu, T.; Zhang, J.; Gao, G.; Yang, J.; Marino, A. CFAR Ship Detection in Polarimetric Synthetic Aperture Radar Images Based on Whitening Filter. IEEE Trans. Geosci. Remote Sens. 2020, 58, 58–81. [Google Scholar] [CrossRef]
- Smith, M.; Varshney, P. VI-CFAR: A novel CFAR algorithm based on data variability. In Proceedings of the 1997 IEEE National Radar Conference, Syracuse, NY, USA, 13–15 May 1997; pp. 263–268. [Google Scholar] [CrossRef]
- Blake, S. OS-CFAR theory for multiple targets and nonuniform clutter. IEEE Trans. Aerosp. Electron. Syst. 1988, 24, 785–790. [Google Scholar] [CrossRef]
- Abdou, L.; Soltani, F. OS-CFAR and CMLD threshold optimization in distributed systems using evolutionary strategies. Signal Image Video Process. 2008, 2, 155–167. [Google Scholar] [CrossRef]
- Arisoy, S.; Kayabol, K. Mixture-Based Superpixel Segmentation and Classification of SAR Images. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1721–1725. [Google Scholar] [CrossRef]
- Wang, X.; Li, G.; Plaza, A.; He, Y. Revisiting SLIC: Fast Superpixel Segmentation of Marine SAR Images Using Density Features. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–18. [Google Scholar] [CrossRef]
- Peng, B.; Peng, B.; Zhou, J.; Xie, J.; Liu, L. Scattering Model Guided Adversarial Examples for SAR Target Recognition: Attack and Defense. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5236217. [Google Scholar] [CrossRef]
- Huang, Q.; Zhu, W.; Li, Y.; Zhu, B.; Gao, T.; Wang, P. Survey of Target Detection Algorithms in SAR Images. In Proceedings of the 2021 IEEE 5th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), Chongqing, China, 12–14 March 2021; pp. 1756–1765. [Google Scholar] [CrossRef]
- Er, M.J.; Zhang, Y.; Chen, J.; Gao, W. Ship detection with deep learning: A survey. Artif. Intell. Rev. 2023, 56, 11825–11865. [Google Scholar] [CrossRef]
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar]
- Wang, A.; Chen, H.; Liu, L.; Chen, K.; Lin, Z.; Han, J.; Ding, G. YOLOv10: Real-Time End-to-End Object Detection. arXiv 2024, arXiv:2405.14458. [Google Scholar]
- Adarsh, P.; Rathi, P.; Kumar, M. YOLO v3-Tiny: Object Detection and Recognition using one stage improved model. In Proceedings of the 2020 6th International Conference on Advanced Computing and Communication Systems (ICACCS), Coimbatore, India, 6–7 March 2020; pp. 687–694. [Google Scholar] [CrossRef]
- Wang, Z.; Hua, Z.; Wen, Y.; Zhang, S.; Xu, X.; Song, H. E-YOLO: Recognition of estrus cow based on improved YOLOv8n model. Expert Syst. Appl. 2024, 238, 122212. [Google Scholar] [CrossRef]
- Jiang, P.; Ergu, D.; Liu, F.; Cai, Y.; Ma, B. A Review of Yolo algorithm developments. Procedia Comput. Sci. 2022, 199, 1066–1073. [Google Scholar] [CrossRef]
- Guo, Y.; Chen, S.; Zhan, R.; Wang, W.; Zhang, J. LMSD-YOLO: A Lightweight YOLO Algorithm for Multi-Scale SAR Ship Detection. Remote Sens. 2022, 14, 4801. [Google Scholar] [CrossRef]
- Tang, X.; Zhang, J.; Xia, Y.; Xiao, H. DBW-YOLO: A High-Precision SAR Ship Detection Method for Complex Environments. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 7029–7039. [Google Scholar] [CrossRef]
- Humayun, M.F.; Nasir, F.A.; Bhatti, F.A.; Tahir, M.; Khurshid, K. YOLO-OSD: Optimized Ship Detection and Localization in Multiresolution SAR Satellite Images Using a Hybrid Data-Model Centric Approach. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 5345–5363. [Google Scholar] [CrossRef]
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. SSD: Single Shot MultiBox Detector. In Proceedings of the Computer Vision—ECCV 2016, Amsterdam, The Netherlands, 11–14 October 2016; Leibe, B., Matas, J., Sebe, N., Welling, M., Eds.; Springer: Cham, Switzerland, 2016; pp. 21–37. [Google Scholar]
- Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal Loss for Dense Object Detection. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2999–3007. [Google Scholar] [CrossRef]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149. [Google Scholar] [CrossRef]
- He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2980–2988. [Google Scholar] [CrossRef]
- Cai, Z.; Vasconcelos, N. Cascade R-CNN: Delving Into High Quality Object Detection. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 6154–6162. [Google Scholar] [CrossRef]
- Zhang, Y.; Hao, Y. A Survey of SAR Image Target Detection Based on Convolutional Neural Networks. Remote Sens. 2022, 14, 6240. [Google Scholar] [CrossRef]
- Feng, Y.; Chen, J.; Huang, Z.; Wan, H.; Xia, R.; Wu, B.; Sun, L.; Xing, M. A Lightweight Position-Enhanced Anchor-Free Algorithm for SAR Ship Detection. Remote Sens. 2022, 14, 1908. [Google Scholar] [CrossRef]
- Yasir, M.; Liu, S.; Pirasteh, S.; Xu, M.; Sheng, H.; Wan, J.; de Figueiredo, F.A.; Aguilar, F.J.; Li, J. YOLOShipTracker: Tracking ships in SAR images using lightweight YOLOv8. Int. J. Appl. Earth Obs. Geoinf. 2024, 134, 104137. [Google Scholar] [CrossRef]
- Gao, Z.; Yu, X.; Rong, X.; Wang, W. Improved YOLOv8n for Lightweight Ship Detection. J. Mar. Sci. Eng. 2024, 12, 1774. [Google Scholar] [CrossRef]
- Wang, J.; Cui, Z.; Jiang, T.; Cao, C.; Cao, Z. Lightweight Deep Neural Networks for Ship Target Detection in SAR Imagery. IEEE Trans. Image Process. 2023, 32, 565–579. [Google Scholar] [CrossRef]
- Wang, K.; Liew, J.H.; Zou, Y.; Zhou, D.; Feng, J. PANet: Few-Shot Image Semantic Segmentation With Prototype Alignment. In Proceedings of the The IEEE International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019. [Google Scholar]
- Wang, C.Y.; Liao, H.Y.M.; Wu, Y.H.; Chen, P.Y.; Hsieh, J.W.; Yeh, I.H. CSPNet: A new backbone that can enhance learning capability of CNN. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 390–391. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 1904–1916. [Google Scholar] [CrossRef]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Ma, X.; Dai, X.; Bai, Y.; Wang, Y.; Fu, Y. Rewrite the Stars. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–22 June 2024. [Google Scholar]
- Zhou, D.; Hou, Q.; Chen, Y.; Feng, J.; Yan, S. Rethinking bottleneck structure for efficient mobile network design. In Proceedings of the Computer Vision—ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; Proceedings, Part III; Springer: Berlin/Heidelberg, Germany, 2020; pp. 680–697. [Google Scholar]
- Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520. [Google Scholar]
- Rao, Y.; Zhao, W.; Tang, Y.; Zhou, J.; Lim, S.N.; Lu, J. HorNet: Efficient High-Order Spatial Interactions with Recursive Gated Convolutions. In Proceedings of the Advances in Neural Information Processing Systems; Koyejo, S., Mohamed, S., Agarwal, A., Belgrave, D., Cho, K., Oh, A., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2022; Volume 35, pp. 10353–10366. [Google Scholar]
- Li, X.; Wang, W.; Wu, L.; Chen, S.; Hu, X.; Li, J.; Tang, J.; Yang, J. Generalized Focal Loss: Learning Qualified and Distributed Bounding Boxes for Dense Object Detection. In Proceedings of the Advances in Neural Information Processing Systems; Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M., Lin, H., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2020; Volume 33, pp. 21002–21012. [Google Scholar]
- Tian, Z.; Shen, C.; Chen, H.; He, T. FCOS: Fully Convolutional One-Stage Object Detection. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 9626–9635. [Google Scholar] [CrossRef]
- Ghiasi, G.; Lin, T.Y.; Le, Q.V. NAS-FPN: Learning Scalable Feature Pyramid Architecture for Object Detection. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 7029–7038. [Google Scholar] [CrossRef]
- Wei, S.; Zeng, X.; Qu, Q.; Wang, M.; Su, H.; Shi, J. HRSID: A High-Resolution SAR Images Dataset for Ship Detection and Instance Segmentation. IEEE Access 2020, 8, 120234–120254. [Google Scholar] [CrossRef]
- Wang, Y.; Wang, C.; Zhang, H.; Dong, Y.; Wei, S. A SAR Dataset of Ship Detection for Deep Learning under Complex Backgrounds. Remote Sens. 2019, 11, 765. [Google Scholar] [CrossRef]
- Zhang, Y.; Chen, C.; Hu, R.; Yu, Y. ESarDet: An Efficient SAR Ship Detection Method Based on Context Information and Large Effective Receptive Field. Remote Sens. 2023, 15, 3018. [Google Scholar] [CrossRef]
- Kong, W.; Liu, S.; Xu, M.; Yasir, M.; Wang, D.; Liu, W. Lightweight algorithm for multi-scale ship detection based on high-resolution SAR images. Int. J. Remote Sens. 2023, 44, 1390–1415. [Google Scholar] [CrossRef]
- Ren, X.; Bai, Y.; Liu, G.; Zhang, P. YOLO-Lite: An Efficient Lightweight Network for SAR Ship Detection. Remote Sens. 2023, 15, 3771. [Google Scholar] [CrossRef]
- Luo, Y.; Li, M.; Wen, G.; Tan, Y.; Shi, C. SHIP-YOLO: A Lightweight Synthetic Aperture Radar Ship Detection Model Based on YOLOv8n Algorithm. IEEE Access 2024, 12, 37030–37041. [Google Scholar] [CrossRef]
- Chen, S.; Wang, H. SAR target recognition based on deep learning. In Proceedings of the 2014 International Conference on Data Science and Advanced Analytics (DSAA), Shanghai, China, 30 October–2 November 2014; pp. 541–547. [Google Scholar] [CrossRef]
- Ding, J.; Chen, B.; Liu, H.; Huang, M. Convolutional Neural Network With Data Augmentation for SAR Target Recognition. IEEE Geosci. Remote Sens. Lett. 2016, 13, 364–368. [Google Scholar] [CrossRef]
- Zeng, G.Q.; Wei, H.N.; Lu, K.D.; Geng, G.G.; Weng, J. DACO-BD: Data Augmentation Combinatorial Optimization-Based Backdoor Defense in Deep Neural Networks for SAR Image Classification. IEEE Trans. Instrum. Meas. 2024, 73, 2526213. [Google Scholar] [CrossRef]
- Yu, T.; Shigang, W.; Jian, W.; Yan, Z.; Jiehua, L.; Jiaqi, Y.; Dongliang, L. Scene-aware data augmentation for ship detection in SAR images. Int. J. Remote Sens. 2024, 45, 3396–3411. [Google Scholar] [CrossRef]
- Wang, G.; Qin, R.; Xia, Y. M-FSDistill: A Feature Map Knowledge Distillation Algorithm for SAR Ship Detection. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 13217–13231. [Google Scholar] [CrossRef]
- Yu, J.; Chen, J.; Wan, H.; Zhou, Z.; Cao, Y.; Huang, Z.; Li, Y.; Wu, B.; Yao, B. SARGap: A Full-Link General Decoupling Automatic Pruning Algorithm for Deep Learning-Based SAR Target Detectors. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5202718. [Google Scholar] [CrossRef]
Parameter | | HRSID | SAR-Ship-Dataset
---|---|---|---
Data sources | | Sentinel-1B; TerraSAR-X; TanDEM-X | GF-3; Sentinel-1
Image size | | 800 × 800 | 256 × 256
Images number | | 5604 | 43,819
Ships number | Small | 9242 | 35,695
 | Medium | 7388 | 23,660
 | Large | 321 | 180
Resolution (m) | | 0.5, 1, 3 | 3–25
Dataset | StarNet-Nano | LFE-C2f | RSCD | Precision | Recall | mAP50 | Model Size (MB) | Params (M) | FLOPs (G)
---|---|---|---|---|---|---|---|---|---
HRSID | | | | 0.940 | 0.901 | 0.952 | 6.3 | 3.006 | 12.6
 | ✓ | | | 0.937 | 0.899 | 0.949 | 5.7 | 2.705 | 12.5
 | | ✓ | | 0.938 | 0.902 | 0.955 | 5.4 | 2.532 | 10.7
 | | | ✓ | 0.954 | 0.906 | 0.965 | 5.0 | 2.363 | 10.2
 | ✓ | ✓ | | 0.946 | 0.904 | 0.958 | 5.3 | 2.505 | 12.0
 | ✓ | | ✓ | 0.943 | 0.886 | 0.957 | 4.4 | 2.062 | 10.2
 | | ✓ | ✓ | 0.947 | 0.901 | 0.962 | 4.2 | 1.956 | 9.6
 | ✓ | ✓ | ✓ | 0.952 | 0.908 | 0.966 | 4.1 | 1.862 | 9.6
SAR-Ship-Dataset | | | | 0.882 | 0.885 | 0.931 | 6.2 | 3.006 | 1.3
 | ✓ | | | 0.890 | 0.881 | 0.931 | 5.7 | 2.705 | 1.2
 | | ✓ | | 0.885 | 0.878 | 0.927 | 5.3 | 2.532 | 1.1
 | | | ✓ | 0.892 | 0.894 | 0.937 | 4.9 | 2.363 | 1.0
 | ✓ | ✓ | | 0.890 | 0.886 | 0.931 | 5.3 | 2.505 | 1.2
 | ✓ | | ✓ | 0.893 | 0.884 | 0.935 | 4.4 | 2.062 | 1.0
 | | ✓ | ✓ | 0.886 | 0.888 | 0.932 | 4.2 | 1.956 | 1.0
 | ✓ | ✓ | ✓ | 0.895 | 0.889 | 0.938 | 4.0 | 1.862 | 1.0
Dataset | Model | Precision | Recall | mAP50 | Params (M)
---|---|---|---|---|---
HRSID | YOLOv3-Tiny | 0.943 | 0.822 | 0.904 | 12.128
 | YOLOv5 | 0.925 | 0.893 | 0.945 | 2.503
 | YOLOv8n | 0.940 | 0.901 | 0.952 | 3.006
 | YOLOv10n | 0.934 | 0.883 | 0.957 | 2.265
 | Faster R-CNN | 0.911 | 0.871 | 0.875 | 41.753
 | Cascade R-CNN | 0.902 | 0.890 | 0.894 | 69.395
 | Mask R-CNN | 0.894 | 0.858 | 0.863 | 44.396
 | ESarDet * | - | - | 0.932 | 6.200
 | Improved YOLOx-Tiny * | 0.936 | - | 0.868 | 1.464
 | LH-YOLO (ours) | 0.952 | 0.908 | 0.966 | 1.862
SAR-Ship-Dataset | YOLOv3-Tiny | 0.885 | 0.822 | 0.892 | 12.128
 | YOLOv5 | 0.887 | 0.864 | 0.916 | 2.503
 | YOLOv8n | 0.886 | 0.886 | 0.931 | 3.006
 | YOLOv10n | 0.877 | 0.879 | 0.926 | 2.265
 | Faster R-CNN | 0.824 | 0.943 | 0.935 | 41.753
 | Cascade R-CNN | 0.836 | 0.945 | 0.936 | 63.395
 | Mask R-CNN | 0.819 | 0.942 | 0.937 | 44.396
 | YOLO-lite | 0.948 | 0.881 | 0.921 | 7.640
 | SHIP-YOLO | 0.932 | 0.928 | 0.966 | 2.500
 | LH-YOLO (ours) | 0.895 | 0.889 | 0.938 | 1.862