HY1C/D-CZI Noctiluca scintillans Bloom Recognition Network Based on Hybrid Convolution and Self-Attention
Abstract
1. Introduction
2. Materials and Methods
2.1. CZI Data and NSB Event Information
2.2. Noctiluca scintillans Bloom Index
2.3. Annotation Method
2.4. Dataset Construction
2.5. Evaluation Criteria
3. Noctiluca scintillans Bloom Recognition Network (NSBRNet)
3.1. Network Structure
3.2. Inception Conv Block
3.3. Swin Attention Block
4. Results
4.1. Result Validation
4.2. Comparison Experiment
5. Discussion
5.1. Ablation Experiment
5.2. NSBI Applicability Analysis
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
Date | Region | Longitude | Latitude |
---|---|---|---|
17 August 2020 | East China Sea | 123°36′–125°32′ | 31°58′–33°18′ |
17 August 2020 | East China Sea | 124°92′–135°45′ | 32°63′–34°73′ |
14 February 2021 | Beibu Gulf | 107°71′–109°38′ | 19°21′–21°25′ |
13 March 2022 | Dapeng Bay | 108°84′–118°15′ | 19°49′–21°45′ |
10 April 2022 | Yangjiang | 111°45′–120°83′ | 20°08′–22°06′ |
10 April 2022 | Dapeng Bay | 112°05′–121°62′ | 22°90′–24°90′ |
Confusion Matrix | Ground Truth: Positive | Ground Truth: Negative
---|---|---
Recognition Result: Positive | True-Positive (TP) | False-Positive (FP)
Recognition Result: Negative | False-Negative (FN) | True-Negative (TN)
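The TP/FP/FN/TN counts above yield the four evaluation metrics reported in the tables below (Precision, Recall, F1-score, and IoU, all in percent). As a minimal sketch, assuming binary bloom masks where 1 marks *Noctiluca scintillans* bloom pixels, the metrics can be computed as follows (the function name and toy masks are illustrative, not from the paper):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Compute Precision, Recall, F1-score, and IoU (in %) for a binary
    bloom mask, using the TP/FP/FN/TN definitions of the confusion matrix.
    `pred` and `truth` are same-shaped arrays of 0/1 (or bool) values."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.logical_and(pred, truth).sum()    # bloom predicted, bloom present
    fp = np.logical_and(pred, ~truth).sum()   # bloom predicted, none present
    fn = np.logical_and(~pred, truth).sum()   # bloom missed
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    iou = tp / (tp + fp + fn)                 # intersection over union
    return {k: round(100 * float(v), 2) for k, v in
            dict(precision=precision, recall=recall, f1=f1, iou=iou).items()}

# Toy 2x2 example: tp=1, fp=1, fn=1
pred  = [[1, 1], [0, 0]]
truth = [[1, 0], [1, 0]]
print(segmentation_metrics(pred, truth))
# → {'precision': 50.0, 'recall': 50.0, 'f1': 50.0, 'iou': 33.33}
```

Note that IoU = TP / (TP + FP + FN) is always the strictest of the four, which matches the tables below where IoU is consistently the lowest score for every model.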
Model | Precision (%) | Recall (%) | F1-Score (%) | IOU (%) |
---|---|---|---|---|
Res UNet | 91.51 | 85.42 | 88.26 | 79.22 |
UNet | 90.75 | 84.46 | 87.38 | 77.82 |
Swin UNet | 87.72 | 83.64 | 85.49 | 74.92 |
Trans UNet | 87.76 | 82.79 | 85.03 | 74.25 |
FCN-8s | 85.00 | 80.57 | 82.67 | 70.75 |
PSPNet (ResNet34) | 84.52 | 76.28 | 80.12 | 66.94 |
NSBRNet | 92.22 | 88.20 | 90.10 | 82.18 |
Model | Precision (%) | Recall (%) | F1-Score (%) | IOU (%) |
---|---|---|---|---|
UNet | 90.75 | 84.46 | 87.38 | 77.82 |
NSBRNet (SAB) | 91.52 | 85.94 | 88.56 | 79.70 |
NSBRNet (ICB) | 92.02 | 86.14 | 88.89 | 80.28 |
NSBRNet | 92.22 | 88.20 | 90.10 | 82.18 |
Input Bands | Precision (%) | Recall (%) | F1-Score (%) | IOU (%) |
---|---|---|---|---|
R, G, B, NIR | 92.11 | 87.25 | 89.51 | 81.25 |
R, G, B, NIR, NSBI | 92.22 | 88.20 | 90.10 | 82.18 |
Share and Cite
Cui, H.; Chen, S.; Hu, L.; Wang, J.; Cai, H.; Ma, C.; Liu, J.; Zou, B. HY1C/D-CZI Noctiluca scintillans Bloom Recognition Network Based on Hybrid Convolution and Self-Attention. Remote Sens. 2023, 15, 1757. https://doi.org/10.3390/rs15071757