Field Patch Extraction Based on High-Resolution Imaging and U2-Net++ Convolutional Neural Networks
Abstract
1. Introduction
- (1) To evaluate and compare the efficacy of several deep neural networks, namely FCN, SegNet, U2-Net, DeepLabv3+, and U-Net, in farmland extraction.
- (2) To evaluate the impact and effectiveness of integrating separable convolution and CBAM modules in a cultivated land extraction task, analyzing how the inclusion of these modules influenced the accuracy and performance of the segmentation models used.
- (3) To extract a distribution map of the cultivated land using a limited number of training samples covering different types of cultivated land blocks.
- (4) To validate the applicability of our model at various resolutions.
2. Materials and Methods
2.1. Study Area
2.2. Datasets
2.2.1. Satellite Data Collection and Preprocessing
2.2.2. Plot Boundary Labeling and Sample Making
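The outline above does not reproduce the sample-making code, but cutting a large georeferenced scene into fixed-size training tiles is the usual step at this stage. Below is a minimal sketch, assuming GDAL (cited in the references) and the 512 × 512 input size listed in the module table; the file names, stride, and helper name are illustrative only.

```python
# Minimal sketch: tile a large GeoTIFF into 512x512 training patches with GDAL.
# Assumptions: input path, output directory, and stride are illustrative only.
import os
from osgeo import gdal

TILE = 512          # patch size used by the model (see the input-size table)
STRIDE = 512        # non-overlapping tiles; reduce for overlap if desired

def tile_geotiff(src_path: str, out_dir: str) -> None:
    os.makedirs(out_dir, exist_ok=True)
    ds = gdal.Open(src_path)
    width, height = ds.RasterXSize, ds.RasterYSize
    for y in range(0, height - TILE + 1, STRIDE):
        for x in range(0, width - TILE + 1, STRIDE):
            out_path = os.path.join(out_dir, f"tile_{y}_{x}.tif")
            # srcWin = [xoff, yoff, xsize, ysize] crops the window while
            # preserving the georeferencing of the source image
            gdal.Translate(out_path, ds, srcWin=[x, y, TILE, TILE])

if __name__ == "__main__":
    tile_geotiff("scene.tif", "tiles")  # hypothetical file names
```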
2.3. U2-Net++ Deep Learning Model
2.3.1. RSU Structure
- (1) The input convolution layer, an ordinary convolutional layer used for local feature extraction, converts the input feature map x (H × W × Cin) into an intermediate feature map F1(x) with Cout channels.
- (2) A symmetrical U-Net-like encoder–decoder structure of height L takes the intermediate feature map F1(x) as input and learns to extract and encode multiscale contextual information U(F1(x)), where U denotes the U-Net structure. A larger L results in a deeper residual U-block (RSU), more pooling operations, a larger receptive field, and richer local and global features; by configuring this parameter, multiscale features can be extracted from input feature maps at any spatial resolution. Multiscale features are extracted from the gradually downsampled feature maps and encoded back into high-resolution feature maps through progressive upsampling, concatenation, and convolution, which alleviates the loss of detail caused by large-scale upsampling.
- (3) Local and multiscale features are fused through a residual connection, that is, H_RSU(x) = U(F1(x)) + F1(x) (see the sketch after this list).
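As referenced in item (3), the following is a minimal PyTorch sketch of a residual U-block. It is not the paper's exact configuration: the encoder–decoder is shortened to height L = 2 for brevity and the channel counts in the usage comment are illustrative; only the structure — input convolution F1, a small U-shaped encoder–decoder U, and the residual fusion H_RSU(x) = U(F1(x)) + F1(x) — follows the description above.

```python
# Minimal sketch of a residual U-block (RSU): H_RSU(x) = U(F1(x)) + F1(x).
# The encoder-decoder is simplified to height L = 2; the paper's blocks are
# deeper and vary L per stage.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvBNReLU(nn.Module):
    def __init__(self, c_in, c_out, dilation=1):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, 3, padding=dilation, dilation=dilation)
        self.bn = nn.BatchNorm2d(c_out)

    def forward(self, x):
        return F.relu(self.bn(self.conv(x)))

class RSU(nn.Module):
    def __init__(self, c_in, c_mid, c_out):
        super().__init__()
        self.conv_in = ConvBNReLU(c_in, c_out)          # F1: local feature extraction
        self.enc1 = ConvBNReLU(c_out, c_mid)
        self.enc2 = ConvBNReLU(c_mid, c_mid)
        self.bottom = ConvBNReLU(c_mid, c_mid, dilation=2)
        self.dec2 = ConvBNReLU(2 * c_mid, c_mid)
        self.dec1 = ConvBNReLU(2 * c_mid, c_out)

    def forward(self, x):
        f1 = self.conv_in(x)                            # F1(x)
        e1 = self.enc1(f1)
        e2 = self.enc2(F.max_pool2d(e1, 2))             # downsample
        b = self.bottom(e2)                             # bottom of the U
        d2 = self.dec2(torch.cat([b, e2], dim=1))
        d2_up = F.interpolate(d2, size=e1.shape[2:],
                              mode="bilinear", align_corners=False)  # upsample
        d1 = self.dec1(torch.cat([d2_up, e1], dim=1))   # U(F1(x))
        return d1 + f1                                  # residual fusion

# x = torch.randn(1, 3, 512, 512); y = RSU(3, 32, 64)(x)  # y: (1, 64, 512, 512)
```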
2.3.2. Depth-Wise Separable Convolution
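The heading refers to the depth-wise separable convolution of MobileNets (cited in the references), which factors a standard K × K convolution into a per-channel K × K convolution followed by a 1 × 1 point-wise convolution. A minimal PyTorch sketch is shown below; which convolutions in the paper's network it replaces is not restated here, so the channel arguments are illustrative.

```python
# Minimal sketch: a 3x3 depth-wise separable convolution (MobileNets-style).
# Factoring a KxK conv into a per-channel KxK conv plus a 1x1 point-wise conv
# cuts parameters and multiply-adds roughly by a factor of K*K for wide layers.
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, c_in, c_out, kernel_size=3, padding=1):
        super().__init__()
        # groups=c_in: each filter sees only its own input channel
        self.depthwise = nn.Conv2d(c_in, c_in, kernel_size,
                                   padding=padding, groups=c_in)
        self.pointwise = nn.Conv2d(c_in, c_out, kernel_size=1)  # mixes channels

    def forward(self, x):
        return self.pointwise(self.depthwise(x))
```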
2.3.3. Spatial-Channel Attention Mechanism
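The spatial–channel attention mechanism named here corresponds to the CBAM module of Woo et al. (cited in the references). The following is a minimal PyTorch sketch of CBAM-style attention with the commonly used defaults (reduction ratio 16, 7 × 7 spatial kernel); its exact placement within U2-Net++ is not restated in this outline.

```python
# Minimal sketch of CBAM-style attention: channel attention from pooled
# descriptors, then spatial attention from channel-wise mean/max maps.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
        )
        self.spatial = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2)

    def forward(self, x):
        # Channel attention: shared MLP over average- and max-pooled descriptors
        avg = self.mlp(F.adaptive_avg_pool2d(x, 1))
        mx = self.mlp(F.adaptive_max_pool2d(x, 1))
        x = x * torch.sigmoid(avg + mx)
        # Spatial attention: convolve stacked channel-wise mean and max maps
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.max(dim=1, keepdim=True).values], dim=1)
        return x * torch.sigmoid(self.spatial(s))
```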
2.3.4. Loss
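The outline does not spell out the loss function. U2-Net-style training commonly sums binary cross-entropy over every side output plus the fused output; the sketch below assumes that formulation and should be read as an assumption, not the paper's verbatim loss.

```python
# Sketch of a U2-Net-style deep-supervision loss: BCE summed over each side
# output plus the fused output. Assumed formulation, following the original
# U2-Net paper rather than a stated equation in this article.
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def multi_output_bce(side_logits, fused_logits, target):
    """side_logits: list of per-stage logit maps; fused_logits: final fused map."""
    loss = bce(fused_logits, target)
    for logits in side_logits:
        loss = loss + bce(logits, target)
    return loss
```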
2.4. Accuracy Evaluation
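The results tables report precision, recall, F1-score, overall accuracy (OA), and IoU. The sketch below computes these pixel-wise metrics from binary prediction and ground-truth masks; the function name and the epsilon guard are illustrative.

```python
# Sketch: pixel-wise precision, recall, F1-score, overall accuracy, and IoU
# from binary prediction and ground-truth masks.
import numpy as np

def evaluate(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> dict:
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)
    oa = (tp + tn) / (tp + tn + fp + fn + eps)     # overall accuracy
    iou = tp / (tp + fp + fn + eps)                # intersection over union
    return {"precision": precision, "recall": recall,
            "f1": f1, "oa": oa, "iou": iou}
```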
3. Results
3.1. Performance Contribution of the CBAM and Depth-Wise Separable Convolution Modules
3.2. Extraction Results in Different Areas
3.3. Performance Evaluation of Different Deep Learning Models
3.4. Applicability of Different Resolutions in Different Regions
3.4.1. Dryland in Northern China (Gaoqing County)
3.4.2. Paddy Fields in Southern China (Suxicheng District)
3.4.3. Terraced Fields in Southwestern China (Yuanyang County)
4. Discussion
- (1) Although the proposed model has been shown to be accurate and effective, it has certain limitations. Analysis of the results showed that when an image contained unclear field boundaries, boundaries obscured by trees or shadows, large within-field variations in crops, or irregular field shapes, the extracted field boundaries were unclear. However, this was to be expected: even to a human observer, such boundaries are difficult to discern. Trees and shadows alter the local color of an image, which can prevent the model from detecting a field or can produce curved boundaries that are not conducive to engineering applications. Post-processing methods exist for obtaining smoother boundaries, but relying on them runs counter to our original intention for using deep learning. Despite these issues, we were still able to extract the field boundaries effectively.
- (2) The images used in this study were acquired in winter and spring, when no crops were planted in the fields; we chose this period to extract more accurate field boundaries and to minimize the influence of other factors on the extraction results. However, if no corresponding high-resolution images of the target area are available for this period, it is difficult to continue this work. The training samples were all bare soil and would therefore contribute little to images containing crops, and if bare-soil and cropped images were trained together, the model would not achieve the highest accuracy in either scenario. The proposed method therefore trains separate weights for different regions and seasons in engineering applications, which still requires a substantial amount of work. Follow-up research will accordingly address both samples and models to ensure that all types of fields can be extracted with a single, complete sample set.
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Haworth, B.T.; Biggs, E.; Duncan, J.; Wales, N.; Boruff, B.; Bruce, E. Geographic Information and Communication Technologies for Supporting Smallholder Agriculture and Climate Resilience. Climate 2018, 6, 97. [Google Scholar] [CrossRef]
- Jain, M.; Singh, B.; Srivastava, A.A.K.; Malik, R.K.; McDonald, A.J.; Lobell, D.B. Using Satellite Data to Identify the Causes of and Potential Solutions for Yield Gaps in India’s Wheat Belt. Environ. Res. Lett. 2017, 12, 094011. [Google Scholar] [CrossRef]
- Neumann, K.; Verburg, P.H.; Stehfest, E.; Müller, C. The Yield Gap of Global Grain Production: A Spatial Analysis. Agric. Syst. 2010, 103, 316–326. [Google Scholar] [CrossRef]
- Wagner, M.P.; Oppelt, N. Extracting Agricultural Fields from Remote Sensing Imagery Using Graph-Based Growing Contours. Remote Sens. 2020, 12, 1205. [Google Scholar] [CrossRef]
- Persello, C.; Tolpekin, V.A.; Bergado, J.R.; de By, R.A. Delineation of agricultural fields in smallholder farms from satellite images using fully convolutional networks and combinatorial grouping. Remote Sens. Environ. 2019, 231, 111253. [Google Scholar] [CrossRef] [PubMed]
- Zhao, W.Z.; Du, S.H.; Emery, W.J. Object-based convolutional neural network for high-resolution imagery classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 3386–3396. [Google Scholar] [CrossRef]
- Blaschke, T. Object based image analysis for remote sensing. ISPRS J. Photogramm. Remote Sens. 2010, 65, 2–16. [Google Scholar] [CrossRef]
- Matton, N.; Canto, G.; Waldner, F.; Valero, S.; Morin, D.; Inglada, J.; Arias, M.; Bontemps, S.; Koetz, B.; Defourny, P. An automated method for annual cropland mapping along the season for various globally-distributed agrosystems using high spatial and temporal resolution time series. Remote Sens. 2015, 7, 13208–13232. [Google Scholar] [CrossRef]
- Marvaniya, S.; Devi, U.; Hazra, J.; Mujumdar, S.; Gupta, N. Small, sparse, but substantial: Techniques for segmenting small agricultural fields using sparse ground data. Int. J. Remote Sens. 2021, 42, 1512–1534. [Google Scholar] [CrossRef]
- Turker, M.; Kok, E.H. Field-based sub-boundary extraction from remote sensing imagery using perceptual grouping. ISPRS J. Photogramm. Remote Sens. 2013, 79, 106–121. [Google Scholar] [CrossRef]
- Yan, L.; Roy, D.P. Automated crop field extraction from multi-temporal Web Enabled Landsat Data. Remote Sens. Environ. 2014, 144, 42–64. [Google Scholar] [CrossRef]
- Cheng, T.; Ji, X.S.; Yang, G.X.; Zheng, H.; Ma, J.; Yao, X.; Zhu, Y.; Cao, W. DESTIN: A new method for delineating the boundaries of crop fields by fusing spatial and temporal information from WorldView and Planet satellite imagery. Comput. Electron. Agric. 2020, 178, 105787. [Google Scholar] [CrossRef]
- Evans, C.; Jones, R.; Svalbe, I.; Berman, M. Segmenting multispectral Landsat TM images into field units. IEEE Trans. Geosci. Remote Sens. 2002, 40, 1054–1064. [Google Scholar] [CrossRef]
- Watkins, B.; van Niekerk, A. Automating field boundary delineation with multi-temporal Sentinel-2 imagery. Comput. Electron. Agric. 2019, 167, 105078. [Google Scholar] [CrossRef]
- García-Pedrero, A.; Gonzalo-Martín, C.; Lillo-Saavedra, M. A machine learning approach for agricultural parcel delineation through agglomerative segmentation. Int. J. Remote Sens. 2017, 38, 1809–1819. [Google Scholar] [CrossRef]
- Chen, B.; Qiu, F.; Wu, B.; Du, H. Image Segmentation Based on Constrained Spectral Variance Difference and Edge Penalty. Remote Sens. 2015, 7, 5980–6004. [Google Scholar] [CrossRef]
- Belgiu, M.; Csillik, O. Sentinel-2 cropland mapping using pixel-based and object-based time-weighted dynamic time warping analysis. Remote Sens. Environ. 2018, 204, 509–523. [Google Scholar] [CrossRef]
- Crommelinck, S.; Bennett, R.; Gerke, M.; Yang, M.Y.; Vosselman, G. Contour Detection for UAV-Based Cadastral Mapping. Remote Sens. 2017, 9, 171. [Google Scholar] [CrossRef]
- Masoud, K.M.; Persello, C.; Tolpekin, V.A. Delineation of Agricultural Field Boundaries from Sentinel-2 Images Using a Novel Super-Resolution Contour Detector Based on Fully Convolutional Networks. Remote Sens. 2020, 12, 59. [Google Scholar] [CrossRef]
- Xu, W.; Deng, X.; Guo, S.; Chen, J.; Wang, X. High-Resolution U-Net: Preserving Image Details for Cultivated Land Extraction. Sensors 2020, 20, 4064. [Google Scholar] [CrossRef]
- Waldner, F.; Diakogiannis, F.I. Deep Learning on Edge: Extracting Field Boundaries from Satellite Images with a Convolutional Neural Network. Remote Sens. Environ. 2020, 245, 111741. [Google Scholar] [CrossRef]
- Wang, S.; Waldner, F.; Lobell, D.B. Delineating Smallholder Fields Using Transfer Learning and Weak Supervision. In AGU Fall Meeting 2021; AGU: Washington, DC, USA, 2021. [Google Scholar]
- Shelhamer, E.; Long, J.; Darrell, T. Fully convolutional networks for semantic segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 640–651. [Google Scholar] [CrossRef]
- Xia, L.; Luo, J.; Sun, Y.; Yang, H. Deep extraction of cropland parcels from very high-resolution remotely sensed imagery. In Proceedings of the 2018 7th International Conference on Agro-geoinformatics (Agro-geoinformatics), Hangzhou, China, 6–9 August 2018; pp. 1–5. [Google Scholar] [CrossRef] [PubMed]
- Pal, M.; Rasmussen, T.; Porwal, A. Optimized lithological mapping from multispectral and hyperspectral remote sensing images using fused multi-classifiers. Remote Sens. 2020, 12, 177. [Google Scholar] [CrossRef]
- Zhou, Z.; Siddiquee, M.M.R.; Tajbakhsh, N.; Liang, J. UNet++: A Nested U-Net Architecture for Medical Image Segmentation. arXiv 2018, arXiv:1807.10165. [Google Scholar]
- Sun, J.; Peng, Y.; Li, D.; Guo, Y. Segmentation of the Multimodal Brain Tumor Images Used Res-U-Net; Springer: Cham, Switzerland, 2021; pp. 263–273. [Google Scholar]
- Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495. [Google Scholar]
- Qin, X.B.; Zhang, Z.C.; Huang, C.Y.; Dehghan, M.; Zaiane, O.R.; Jagersand, M. U2-Net: Going deeper with nested U-structure for salient object detection. Pattern Recognit. 2020, 106, 107404. [Google Scholar] [CrossRef]
- Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861. [Google Scholar]
- Woo, S.; Park, J.; Lee, J.Y.; Kweon, I.S. Cbam: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19. [Google Scholar]
- Dong, Z.; Wang, K.; Qu, Z.; Haibo, W. A New Rural Scenery of ‘Four Harvests a Year’ in Huagou Town, Gaoqing County. Zibo Daily, 23 May 2023. [Google Scholar]
- Cheng, X. Research on Comprehensive Evaluation of Urban Green Logistics Distribution. Ph.D. Thesis, Hunan University of Technology, Zhuzhou, China, 2017. [Google Scholar]
- Bradski, G. The OpenCV Library. Dr. Dobbs J. Softw. Tools Prof. Program. 2000, 25, 120–123. [Google Scholar]
- GDAL/OGR Contributors. GDAL/OGR Geospatial Data Abstraction Software Library. Available online: https://gdal.org (accessed on 1 June 2020).
- van Kemenade, H.; Wiredfool; Murray, A.; Clark, A.; Karpinsky, A.; Gohlke, C.; Dufresne, J.; Nulano; Crowell, B.; Schmidt, D.; et al. Python-Pillow/Pillow 7.1.2 (7.1.2). Available online: https://zenodo.org/record/3766443 (accessed on 1 June 2020).
- Xie, S.; Tu, Z. Holistically Nested edge detection. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1395–1403. [Google Scholar]
- Li, C.; Kao, C.; Gore, J.; Ding, Z. Minimization of region-scalable fitting energy for image segmentation. IEEE Trans. Image Process. 2008, 17, 1940–1949. [Google Scholar]
Module | (0, x) | (1, x) | (2, x) | (3, x) | (4, x) |
---|---|---|---|---|---|
Image size | 512 × 512 | 256 × 256 | 128 × 128 | 64 × 64 | 32 × 32 |
Node | 0, 0 | 1, 0 | 2, 0 | 3, 0 | 4, 0 | 0, 1 | 1, 1 | 2, 1 | 3, 1 | 0, 2 | 1, 2 | 2, 2 | 0, 3 | 1, 3 | 0, 4 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
I | 3 | 64 | 128 | 256 | 512 | 192 | 384 | 256 | 1024 | 256 | 512 | 1024 | 320 | 640 | 384 |
M | 32 | 32 | 64 | 128 | 256 | 32 | 32 | 32 | 126 | 32 | 32 | 64 | 32 | 32 | 32 |
O | 64 | 128 | 256 | 512 | 512 | 64 | 128 | 128 | 512 | 54 | 128 | 256 | 64 | 128 | 64 |
Items | Parameters and Versions |
---|---|
CPU | Intel® Core™ i9-10980XE @3.00 GHz |
RAM | 128 GB |
HDD | DELL PERC H730P Adp SCSI Disk Device, 21 TB |
GPU | NVIDIA GeForce RTX 3090 |
OS | Windows 10 Professional |
Environment | PyTorch 1.10.0 + Python 3.8 |
Methods | Precision (%) | Recall (%) | F1-Score (%) | OA (%) | IoU (%) | Time (h) |
---|---|---|---|---|---|---|
U-Net | 89.54 | 72.20 | 79.50 | 71.32 | 66.74 | 7.40 |
FCN | 97.04 | 84.20 | 89.77 | 85.39 | 81.92 | 7.55 |
U-Net++ | 96.64 | 89.75 | 92.94 | 89.79 | 86.84 | 7.59 |
SegNet | 97.04 | 84.20 | 89.77 | 85.39 | 81.92 | 7.06 |
DeepLabv3+ | 97.53 | 92.99 | 95.19 | 93.23 | 90.87 | 9.83 |
U2-Net | 97.06 | 93.99 | 95.49 | 93.69 | 91.44 | 9.78 |
U2-Net++ | 98.82 | 95.49 | 97.13 | 95.58 | 94.42 | 10.12 |