OESA-UNet: An Adaptive and Attentional Network for Detecting Diverse Magnetopause under the Limited Field of View
Abstract
1. Introduction
2. Materials
3. Methods
3.1. Image Adaptive Preprocessing
3.2. Attention Block
3.2.1. Squeeze and Excitation Units
3.2.2. Convolutional Block Attention Module
3.3. EfficientNet as Encoder
3.4. Loss Function
3.5. OESA-UNet Architecture
4. Experiments and Metrics
4.1. Training
4.2. Evaluation Metrics
5. Results
6. Discussion
7. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Cravens, T.E. Comet Hyakutake X-ray Source: Charge Transfer of Solar Wind Heavy Ions. Geophys. Res. Lett. 1997, 24, 105–108. [Google Scholar] [CrossRef]
- Bhardwaj, A.; Elsner, R.F.; Randall Gladstone, G.; Cravens, T.E.; Lisse, C.M.; Dennerl, K.; Branduardi-Raymont, G.; Wargelin, B.J.; Hunter Waite, J.; Robertson, I.; et al. X-rays from Solar System Objects. Planet. Space Sci. 2007, 55, 1135–1189. [Google Scholar] [CrossRef]
- Wang, C.; Sun, T. Methods to Derive the Magnetopause from Soft X-ray Images by the SMILE Mission. Geosci. Lett. 2022, 9, 30. [Google Scholar] [CrossRef]
- Branduardi-Raymont, G.; Wang, C.; Escoubet, C.P.; Sembay, S.; Donovan, E.; Dai, L.; Li, L.; Li, J.; Agnolon, D.; Raab, W.; et al. Imaging solar-terrestrial interactions on the global scale: The SMILE mission. In Proceedings of the EGU General Assembly Conference Abstracts, Online, 19–30 April 2021; p. EGU21-3230. [Google Scholar]
- Sun, T.R.; Wang, C.; Wei, F.; Sembay, S. X-ray Imaging of Kelvin-Helmholtz Waves at the Magnetopause. J. Geophys. Res. Space Phys. 2015, 120, 266–275. [Google Scholar] [CrossRef]
- Soman, M.R.; Hall, D.J.; Holland, A.D.; Burgon, R.; Buggey, T.; Skottfelt, J.; Sembay, S.; Drumm, P.; Thornhill, J.; Read, A.; et al. The SMILE Soft X-ray Imager (SXI) CCD Design and Development. J. Inst. 2018, 13, C01022. [Google Scholar] [CrossRef]
- Xu, Q.; Tang, B.; Sun, T.; Li, W.; Zhang, X.; Wei, F.; Guo, X.; Wang, C. Modeling of the Subsolar Magnetopause Motion Under Interplanetary Magnetic Field Southward Turning. Space Weather 2022, 20, 12. [Google Scholar] [CrossRef]
- Haaland, S.; Gjerloev, J. On the Relation between Asymmetries in the Ring Current and Magnetopause Current. J. Geophys. Res. Space Phys. 2013, 118, 7593–7604. [Google Scholar] [CrossRef]
- Haaland, S.; Paschmann, G.; Øieroset, M.; Phan, T.; Hasegawa, H.; Fuselier, S.A.; Constantinescu, V.; Eriksson, S.; Trattner, K.J.; Fadanelli, S.; et al. Characteristics of the Flank Magnetopause: MMS Results. J. Geophys. Res. Space Phys. 2020, 125, e2019JA027623. [Google Scholar] [CrossRef]
- Walsh, B.M.; Sibeck, D.G.; Nishimura, Y.; Angelopoulos, V. Statistical Analysis of the Plasmaspheric Plume at the Magnetopause. J. Geophys. Res. Space Phys. 2013, 118, 4844–4851. [Google Scholar] [CrossRef]
- Robertson, I.P.; Cravens, T.E. X-ray Emission from the Terrestrial Magnetosheath. Geophys. Res. Lett. 2003, 30, 2002GL016740. [Google Scholar] [CrossRef]
- Jorgensen, A.M.; Xu, R.; Sun, T.; Huang, Y.; Li, L.; Dai, L.; Wang, C. A Theoretical Study of the Tomographic Reconstruction of Magnetosheath X-ray Emissions. J. Geophys. Res. Space Phys. 2022, 127, 4. [Google Scholar] [CrossRef]
- Collier, M.R.; Connor, H.K. Magnetopause Surface Reconstruction from Tangent Vector Observations. J. Geophys. Res. Space Phys. 2018, 123, 12. [Google Scholar] [CrossRef]
- Jorgensen, A.M.; Sun, T.; Wang, C.; Dai, L.; Sembay, S.; Zheng, J.; Yu, X. Boundary Detection in Three Dimensions with Application to the SMILE Mission: The Effect of Model-Fitting Noise. J. Geophys. Res. Space Phys. 2019, 124, 4341–4355. [Google Scholar] [CrossRef]
- Sun, T.; Wang, C.; Connor, H.K.; Jorgensen, A.M.; Sembay, S. Deriving the Magnetopause Position from the Soft X-ray Image by Using the Tangent Fitting Approach. J. Geophys. Res. Space Phys. 2020, 125, 9. [Google Scholar] [CrossRef]
- Wang, J.; Wang, R.; Li, D.; Sun, T.; Peng, X. An Approach of Filtering Simulated Magnetospheric X-ray Images Based on Self-Supervised Network and Random Forest. Phys. Scr. 2023, 98, 096002. [Google Scholar] [CrossRef]
- Kass, M.; Witkin, A.; Terzopoulos, D. Snakes: Active Contour Models. Int. J. Comput. Vis. 1988, 1, 321–331. [Google Scholar] [CrossRef]
- Burman, R.; Paul, S.; Das, S. A Differential Evolution Approach to Multi-Level Image Thresholding Using Type II Fuzzy Sets. In Swarm, Evolutionary, and Memetic Computing; Springer: Cham, Switzerland, 2013; pp. 274–285. [Google Scholar]
- Singh, S.; Singh, R. Comparison of Various Edge Detection Techniques. In Proceedings of the 2015 2nd International Conference on Computing for Sustainable Global Development (INDIACom), New Delhi, India, 11–13 March 2015; pp. 393–396. [Google Scholar]
- Xu, Q.; Ma, Z.; He, N.; Duan, W. DCSAU-Net: A Deeper and More Compact Split-Attention U-Net for Medical Image Segmentation. Comput. Biol. Med. 2023, 154, 106626. [Google Scholar] [CrossRef]
- Shinde, P.P.; Shah, S. A Review of Machine Learning and Deep Learning Applications. In Proceedings of the 2018 Fourth International Conference on Computing Communication Control and Automation (ICCUBEA), Pune, India, 16–18 August 2018; pp. 1–6. [Google Scholar]
- Zhu, X.; Cheng, Z.; Wang, S.; Chen, X.; Lu, G. Coronary Angiography Image Segmentation Based on PSPNet. Comput. Methods Programs Biomed. 2021, 200, 105897. [Google Scholar] [CrossRef]
- Chen, L.-C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 834–848. Available online: https://ieeexplore.ieee.org/abstract/document/7913730 (accessed on 11 December 2023).
- Bi, L.; Feng, D.; Kim, J. Dual-Path Adversarial Learning for Fully Convolutional Network (FCN)-Based Medical Image Segmentation. Vis. Comput. 2018, 34, 1043–1052. [Google Scholar] [CrossRef]
- Khosravan, N.; Mortazi, A.; Wallace, M.; Bagci, U. PAN: Projective Adversarial Network for Medical Image Segmentation. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2019; Springer: Cham, Switzerland, 2019; pp. 68–76. [Google Scholar]
- Jha, D.; Smedsrud, P.H.; Riegler, M.A.; Johansen, D.; de Lange, T.; Halvorsen, P.; Johansen, H.D. ResUNet++: An Advanced Architecture for Medical Image Segmentation. In Proceedings of the 2019 IEEE International Symposium on Multimedia (ISM), San Diego, CA, USA, 9–11 December 2019; pp. 225–2255. [Google Scholar]
- Yu, W.; Yang, T.; Chen, C. Towards Resolving the Challenge of Long-Tail Distribution in UAV Images for Object Detection. In Proceedings of the 2021 IEEE Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 3–8 January 2021; pp. 3257–3266. [Google Scholar]
- Hu, Y.Q.; Guo, X.C.; Wang, C. On the Ionospheric and Reconnection Potentials of the Earth: Results from Global MHD Simulations. J. Geophys. Res. Space Phys. 2007, 112, A07215. [Google Scholar] [CrossRef]
- Otsu, N. A Threshold Selection Method from Gray-Level Histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef]
- Hu, J.; Shen, L.; Sun, G. Squeeze-and-Excitation Networks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 7132–7141. [Google Scholar]
- Woo, S.; Park, J.; Lee, J.-Y.; Kweon, I.S. CBAM: Convolutional Block Attention Module. In Proceedings of the Computer Vision – ECCV 2018: 15th European Conference, Munich, Germany, 8–14 September 2018. [Google Scholar]
- Zeiler, M.D.; Fergus, R. Visualizing and Understanding Convolutional Networks. In Computer Vision—ECCV 2014; Springer: Cham, Switzerland, 2014; pp. 818–833. Available online: https://link.springer.com/chapter/10.1007/978-3-319-10590-1_53 (accessed on 11 December 2023).
- Zhou, B.; Khosla, A.; Lapedriza, A.; Oliva, A.; Torralba, A. Learning Deep Features for Discriminative Localization. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 2921–2929. [Google Scholar]
- Tan, M.; Le, Q.V. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. arXiv 2020, arXiv:1905.11946. [Google Scholar]
- Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.-C. MobileNetV2: Inverted Residuals and Linear Bottlenecks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520. [Google Scholar]
- Gikunda, P.K.; Jouandeau, N. State-of-the-Art Convolutional Neural Networks for Smart Farms: A Review. In Intelligent Computing; Arai, K., Bhatia, R., Kapoor, S., Eds.; Advances in Intelligent Systems and Computing; Springer International Publishing: Cham, Switzerland, 2019; Volume 997, pp. 763–775. ISBN 978-3-030-22870-5. [Google Scholar]
- Xie, C.; Tan, M.; Gong, B.; Wang, J.; Yuille, A.; Le, Q.V. Adversarial Examples Improve Image Recognition. arXiv 2020, arXiv:1911.09665. [Google Scholar]
- Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; Li, F.-F. ImageNet: A Large-Scale Hierarchical Image Database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar]
- Chaurasia, A.; Culurciello, E. LinkNet: Exploiting Encoder Representations for Efficient Semantic Segmentation. In Proceedings of the 2017 IEEE Visual Communications and Image Processing (VCIP), St. Petersburg, FL, USA, 10–13 December 2017; pp. 1–4. [Google Scholar]
- He, P.; Jiao, L.; Shang, R.; Wang, S.; Liu, X.; Quan, D.; Yang, K.; Zhao, D. MANet: Multi-Scale Aware-Relation Network for Semantic Segmentation in Aerial Scenes. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–15. [Google Scholar] [CrossRef]
- Syazwany, N.S.; Nam, J.-H.; Lee, S.-C. MM-BiFPN: Multi-Modality Fusion Network With Bi-FPN for MRI Brain Tumor Segmentation. IEEE Access 2021, 9, 160708–160720. [Google Scholar] [CrossRef]
- Cao, H.; Wang, Y.; Chen, J.; Jiang, D.; Zhang, X.; Tian, Q.; Wang, M. Swin-Unet: Unet-like Pure Transformer for Medical Image Segmentation. arXiv 2021, arXiv:2105.05537. [Google Scholar]
Track Number | Time Window in Each Year (UTC, month.day Thh:mm:ss) |
---|---|
1 | 1.1 T00:00:00–1.3 T04:00:00 |
2 | 3.15 T00:00:00–3.17 T04:00:00 |
3 | 5.27 T00:00:00–5.29 T04:00:00 |
4 | 8.8 T00:00:00–8.10 T04:00:00 |
5 | 11.20 T00:00:00–11.22 T04:00:00 |
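The windows above use a compact month.day plus UTC time notation (e.g., 1.1 T00:00:00 is 1 January, 00:00:00 UTC). The short Python sketch below expands one window string into concrete datetimes; the helper name and the assumption that every window is written as a start–end pair within a single year are ours, for illustration only.

```python
from datetime import datetime

def parse_window(window: str, year: int):
    """Expand a 'M.D THH:MM:SS–M.D THH:MM:SS' window into two UTC datetimes."""
    def parse_endpoint(endpoint: str) -> datetime:
        date_part, time_part = endpoint.split(" T")
        month, day = (int(v) for v in date_part.split("."))
        hour, minute, second = (int(v) for v in time_part.split(":"))
        return datetime(year, month, day, hour, minute, second)

    start, end = (parse_endpoint(part.strip()) for part in window.split("–"))
    return start, end

# Track 1 from the table above, expanded for an arbitrary year.
print(parse_window("1.1 T00:00:00–1.3 T04:00:00", 2024))
```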
No. | Solar Wind Density (cm−3) | Solar Wind Velocity (km/s) | Bx (nT) | By (nT) | Bz (nT) |
---|---|---|---|---|---|
1 | 5 | 400 | 0 | 0 | 0 |
2 | 5 | 900 | 0 | 0 | 5 |
3 | 5 | 800 | 0 | 10 | 0 |
4 | 7 | 500 | 10 | 0 | 0 |
5 | 15 | 800 | 0 | 0 | −5 |
6 | 20 | 800 | 0 | −10 | −10 |
7 | 20 | 400 | 0 | 10 | −20 |
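The seven upstream solar wind/IMF cases above can be collected into a plain configuration structure before being handed to an MHD run. The sketch below is a minimal illustration; the SolarWindCase class and its field names are our own, not an interface from the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SolarWindCase:
    """One upstream solar wind / IMF condition used as simulation input."""
    number: int
    density_cm3: float      # solar wind number density (cm^-3)
    velocity_km_s: float    # solar wind speed (km/s)
    bx_nt: float            # IMF Bx (nT)
    by_nt: float            # IMF By (nT)
    bz_nt: float            # IMF Bz (nT)

# The seven cases listed in the table above.
SOLAR_WIND_CASES = [
    SolarWindCase(1,  5, 400,  0,   0,   0),
    SolarWindCase(2,  5, 900,  0,   0,   5),
    SolarWindCase(3,  5, 800,  0,  10,   0),
    SolarWindCase(4,  7, 500, 10,   0,   0),
    SolarWindCase(5, 15, 800,  0,   0,  -5),
    SolarWindCase(6, 20, 800,  0, -10, -10),
    SolarWindCase(7, 20, 400,  0,  10, -20),
]
```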
Methods | Recall | Precision | F1 Score | Accuracy | ||||
---|---|---|---|---|---|---|---|---|
DeeplabV3 [23] | 79.5% | 36.8% | 57.9% | 99.1% | 90% | 1.25 | 0.00 | 0.1992 |
DeeplabV3+ [23] | 83.2% | 57.6% | 62.4% | 99.5% | 88.4% | 1.25 | 0.00 | 0.1644 |
FPN [41] | 33% | 95.8% | 49.1% | 99.5% | 82% | 1.3 | 0.00 | 0.1171 |
PAN [21] | 95.3% | 57.5% | 71.7% | 99.3% | 89.3% | 1.63 | 0.00 | 0.1316 |
PSPNet [22] | 74.7% | 35.8% | 55.4% | 99.1% | 90.1% | 1.58 | 0.00 | 0.0905 |
MANet [40] | 85.2% | 71.3% | 79.5% | 99.2% | 94.3% | 0.46 | 0.00 | 0.0726 |
LinkNet [39] | 85.4% | 70.3% | 76.2% | 99.4% | 94.3% | 0.41 | 0.00 | 0.0555 |
UNet [42] | 85.3% | 84.8% | 85.0% | 99.6% | 94.5% | 0.28 | 0.00 | 0.0263 |
UNet++ [26] | 87.8% | 88.2% | 86.5% | 99.8% | 95.4% | 0.26 | 0.00 | 0.018 |
Ours | 93.8% | 92.1% | 92.9% | 99.9% | 97.4% | 0.10 | 0.00 | 0.005 |
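For reference, the percentage columns follow the standard pixel-wise definitions of recall, precision, F1 score, and accuracy over binary masks. The sketch below computes them from a predicted mask and a ground-truth mask; it is a definition-level illustration, not the authors' evaluation code.

```python
import numpy as np

def binary_segmentation_metrics(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8):
    """Pixel-wise recall, precision, F1, and accuracy for binary masks (0/1)."""
    pred = pred.astype(bool)
    target = target.astype(bool)

    tp = np.logical_and(pred, target).sum()
    fp = np.logical_and(pred, ~target).sum()
    fn = np.logical_and(~pred, target).sum()
    tn = np.logical_and(~pred, ~target).sum()

    recall = tp / (tp + fn + eps)
    precision = tp / (tp + fp + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)
    accuracy = (tp + tn) / (tp + tn + fp + fn + eps)
    return {"recall": recall, "precision": precision, "f1": f1, "accuracy": accuracy}

# Toy example: a 4x4 ground-truth mask with one mislabeled pixel in the prediction.
gt = np.zeros((4, 4), dtype=np.uint8); gt[1:3, 1:3] = 1
pr = gt.copy(); pr[1, 1] = 0
print(binary_segmentation_metrics(pr, gt))
```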
Methods | Recall | Precision | F1 Score | Accuracy | ||||
---|---|---|---|---|---|---|---|---|
OVSA-UNet (Vgg16 backbone) | 76.4% | 72.2% | 74.3% | 99.6% | 94.5% | 0.267 | 0.00 | 0.0251 |
ORSA-UNet (Resnet50 backbone) | 84.0% | 78.7% | 81.2% | 99.7% | 96.2% | 0.287 | 0.00 | 0.0172 |
OMSA-UNet (MobilenetV2 backbone) | 88.1% | 87.0% | 87.7% | 99.7% | 94.9% | 0.291 | 0.00 | 0.0106 |
ODSA-UNet (Densenet121 backbone) | 88.2% | 86.9% | 87.7% | 99.8% | 92.0% | 0.204 | 0.00 | 0.0114 |
ODPSA-UNet (Dpn68 backbone) | 92.6% | 92.4% | 92.5% | 99.8% | 96.9% | 0.147 | 0.00 | 0.0105 |
OXSA-UNet (Xception backbone) | 93.9% | 91.6% | 92.8% | 99.9% | 96.7% | 0.110 | 0.00 | 0.008 |
OESA-UNet (Ours) | 93.8% | 92.1% | 92.9% | 99.9% | 97.4% | 0.10 | 0.00 | 0.005 |
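The backbone study above follows the common practice of keeping the decoder and attention blocks fixed while swapping the pretrained encoder. The sketch below shows one way to build such encoder variants with the segmentation_models_pytorch library; the library choice, the exact encoder identifiers (e.g., "efficientnet-b0" for the EfficientNet family), and the single-channel input are assumptions for illustration, not necessarily the authors' implementation.

```python
import torch
import segmentation_models_pytorch as smp

# Encoders mirroring the backbone comparison in the table above.
backbones = ["vgg16", "resnet50", "mobilenet_v2", "densenet121",
             "dpn68", "xception", "efficientnet-b0"]

models = {
    name: smp.Unet(
        encoder_name=name,       # swapped per experiment; decoder stays the same
        encoder_weights=None,    # set to "imagenet" for pretrained initialization
        in_channels=1,           # single-channel soft X-ray image (assumption)
        classes=1,               # binary magnetopause mask
    )
    for name in backbones
}

# Forward a dummy batch through one variant to confirm the output resolution.
x = torch.randn(2, 1, 256, 256)
with torch.no_grad():
    y = models["efficientnet-b0"](x)
print(y.shape)  # expected: torch.Size([2, 1, 256, 256])
```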
Methods | Recall | Precision | F1 Score | Accuracy | ||||
---|---|---|---|---|---|---|---|---|
OES-UNet (w/o CBAM block) | 93.0% | 91.5% | 92.6% | 99.9% | 97.2% | 0.15 | 0.00 | 0.008 |
OEA-UNet (w/o SE block) | 92.6% | 92.5% | 92.5% | 99.9% | 97.0% | 0.131 | 0.00 | 0.007 |
ESA-UNet (w/o preprocessing) | 91.9% | 91.3% | 91.5% | 99.9% | 96.8% | 0.134 | 0.00 | 0.009 |
OESA-UNet (Ours) | 93.8% | 92.1% | 92.9% | 99.9% | 97.4% | 0.10 | 0.00 | 0.005 |
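The "w/o preprocessing" row removes the adaptive preprocessing stage (Section 3.1), which appears to build on Otsu's threshold selection method cited in the reference list. The OpenCV sketch below illustrates Otsu-based preprocessing of a grayscale image; the contrast-stretching step and all variable names are illustrative assumptions, not the paper's exact pipeline.

```python
import cv2
import numpy as np

def otsu_preprocess(image: np.ndarray) -> np.ndarray:
    """Normalize a grayscale image and apply Otsu's global threshold."""
    # Stretch to the full 8-bit range so faint emission is not lost.
    stretched = cv2.normalize(image, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    # Otsu picks the threshold that minimizes intra-class intensity variance.
    _, binary = cv2.threshold(stretched, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary

# Toy example: a bright blob on a noisy background.
img = np.random.poisson(3, (128, 128)).astype(np.float32)
img[40:80, 40:80] += 20
mask = otsu_preprocess(img)
print(mask.dtype, mask.shape, np.unique(mask))
```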
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).