Rice Mapping in Training Sample Shortage Regions Using a Deep Semantic Segmentation Model Trained on Pseudo-Labels
Abstract
1. Introduction
- (RQ1) Based on synthetic aperture radar (SAR) images, which multi-temporal dataset was optimal for rice mapping?
- (RQ2) In the absence of training samples, could the proposed workflow (i.e., the combination of K-means and random forest, K–RF) effectively extract rice distribution information as pseudo-labels?
- (RQ3) How did the U-Net model trained on the pseudo-labels perform?
2. Study Area and Datasets
2.1. Study Area
2.2. Sentinel-1 Datasets
2.3. Reference Datasets
3. Methodology
3.1. Global Search of Optimal Multi-Temporal Datasets for Rice Mapping
3.2. Optimal Time Series Analysis for Rice Mapping
3.3. The Process of K–RF
- (1) Masking of water bodies, buildings, and part of the dryland vegetation. The backscattering time series of a water body stays consistently low, so water was masked with the threshold VHmax ≤ −20 dB, set after experimental comparison. Buildings can likewise be masked out with a minimum-value threshold, since their backscattering time series stays consistently high. The minimum backscattering of rice, in contrast, is close to that of water because of the transplanting period, when specular scattering from the flooded surface dominates; dryland vegetation grows without obvious standing water, so volume scattering dominates and its backscattering coefficient stays higher than that of water throughout the rice-growth cycle. Consequently, a single minimum-value threshold can mask out buildings and part of the dryland vegetation over the whole rice-growth period without removing rice; after experimental comparison, it was set to VHmin ≥ −17 dB (a minimal thresholding sketch is given after this list).
- (2) Difference calculation between images acquired on adjacent dates. Because the flooding before and after the transplanting period causes a large fluctuation in the backscattering of rice, the backscattering coefficient at each observation time was subtracted from that at the next observation time, highlighting the relative change in the backscattering of rice during its growth cycle. The corresponding expression is reconstructed after this list.
- (3) Clustering of the time-series difference images. The difference images generated in step (2) were clustered into four classes (the number was determined after testing three, four, and five) using the K-means algorithm with Euclidean distance. The standard deviation of each centroid vector was then calculated: the centroid with the largest standard deviation was defined as the rice centroid, the centroid with the smallest standard deviation as the centroid of water and buildings (i.e., non-vegetation), and the remaining centroids as non-rice vegetation (see the clustering sketch after this list).
- (4) Homogeneous sample construction. From the current samples (the first set of samples was taken from step (3)), 10,000 non-vegetation pixels were randomly selected as non-vegetation training samples. For rice and non-rice vegetation, a homogeneous-window rule was applied: when all pixels in the window belonged to rice (or non-rice vegetation), the center pixel of the window was selected as a rice (or non-rice vegetation) training point. After experimental comparison, the window size was set to 11 × 11.
- (5) Loop-based training. An RF classifier was trained on the homogeneous samples generated in step (4), and the trained classifier was then used to extract further potential rice pixels with less obvious features as the new current samples. The overlap between the samples extracted by the RF and the previous samples was calculated; if it exceeded the set threshold, the sample-construction process left the loop, otherwise steps (4) and (5) were repeated on the samples extracted by the current RF. After experimental comparison, the overlap threshold was set to 0.9 (a combined sketch of steps (4) and (5) is given after this list).
- (6) Final fine training sample generation. After the loop terminated, the last result produced was taken as the final fine training sample (i.e., the pseudo-label) required for U-Net model training.
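A minimal NumPy sketch of the step (1) thresholding, assuming the multi-temporal VH backscatter is stacked as an array of shape (T, H, W) in dB; the function name `coarse_masks` and the array layout are illustrative assumptions, while the −20 dB and −17 dB thresholds are those reported above:

```python
import numpy as np

def coarse_masks(vh_stack, water_thr=-20.0, high_thr=-17.0):
    """Step (1): coarse masks from a VH backscatter time series (dB).

    vh_stack -- np.ndarray of shape (T, H, W), one band per acquisition date.
    Returns boolean masks for water bodies, for buildings / part of the
    dryland vegetation, and for the remaining candidate pixels.
    """
    vh_max = vh_stack.max(axis=0)            # temporal maximum per pixel
    vh_min = vh_stack.min(axis=0)            # temporal minimum per pixel

    water_mask = vh_max <= water_thr         # always low -> water body
    high_mask = vh_min >= high_thr           # always high -> buildings / dryland vegetation
    keep_mask = ~(water_mask | high_mask)    # kept for the following steps (rice remains here)

    return water_mask, high_mask, keep_mask
```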
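The step (2) difference operation can be written as follows. The original equation is not preserved in this extract, so this is a reconstruction from the description above, with the notation chosen here:

$$
\Delta\sigma^{0}_{\mathrm{VH}}(t_i) = \sigma^{0}_{\mathrm{VH}}(t_{i+1}) - \sigma^{0}_{\mathrm{VH}}(t_i), \qquad i = 1, 2, \ldots, N-1,
$$

where $\sigma^{0}_{\mathrm{VH}}(t_i)$ is the VH backscattering coefficient at the $i$-th acquisition date and $N$ is the number of acquisitions, so an $N$-date time series yields $N-1$ difference images.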
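A scikit-learn sketch of the step (3) clustering, assuming the N − 1 difference images are stacked as a NumPy array; the class coding (2 = rice, 1 = non-rice vegetation, 0 = non-vegetation) and the function name are our choices, while the four clusters and the centroid standard-deviation rule follow the text:

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_difference_series(diff_stack, n_clusters=4, seed=0):
    """Step (3): K-means clustering of the time-series difference images.

    diff_stack -- np.ndarray of shape (N-1, H, W) from step (2).
    Returns an (H, W) label map: 2 = rice, 1 = non-rice vegetation,
    0 = non-vegetation (water and buildings).
    """
    n_diff, h, w = diff_stack.shape
    features = diff_stack.reshape(n_diff, -1).T        # one (N-1)-dim vector per pixel

    km = KMeans(n_clusters=n_clusters, random_state=seed).fit(features)
    centroid_std = km.cluster_centers_.std(axis=1)     # temporal std of each centroid vector

    rice_id = int(np.argmax(centroid_std))             # largest fluctuation -> rice
    nonveg_id = int(np.argmin(centroid_std))           # smallest fluctuation -> water / buildings

    labels = np.where(km.labels_ == rice_id, 2,
                      np.where(km.labels_ == nonveg_id, 0, 1)).astype(np.uint8)
    return labels.reshape(h, w)
```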
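A combined sketch of steps (4) and (5), using the same class coding as above. The 11 × 11 window, the 10,000 non-vegetation samples, and the 0.9 overlap threshold come from the text; measuring the overlap as the IoU of the rice class, the per-pixel feature layout (H, W, F), and the RF settings (e.g., 100 trees) are assumptions made for illustration:

```python
import numpy as np
from scipy.ndimage import minimum_filter
from sklearn.ensemble import RandomForestClassifier

def build_homogeneous_samples(label_map, window=11, n_nonveg=10000, seed=0):
    """Step (4): keep a rice / non-rice vegetation pixel only when the whole
    window x window neighbourhood shares its class; draw 10,000 random
    non-vegetation pixels. 255 marks unlabeled pixels."""
    rng = np.random.default_rng(seed)
    samples = np.full(label_map.shape, 255, dtype=np.uint8)
    for cls in (2, 1):                                   # 2 = rice, 1 = non-rice vegetation
        homogeneous = minimum_filter((label_map == cls).astype(np.uint8),
                                     size=window) == 1   # every pixel in the window is cls
        samples[homogeneous] = cls
    nonveg = np.flatnonzero(label_map == 0)              # 0 = non-vegetation
    chosen = rng.choice(nonveg, size=min(n_nonveg, nonveg.size), replace=False)
    samples.ravel()[chosen] = 0
    return samples

def iterative_rf_labeling(features, init_label_map, overlap_thr=0.9, max_iter=10):
    """Step (5): loop "build samples -> train RF -> reclassify" until the rice
    maps of two consecutive iterations overlap by at least overlap_thr."""
    h, w, f = features.shape
    x_all = features.reshape(-1, f)
    current = init_label_map
    for _ in range(max_iter):
        samples = build_homogeneous_samples(current)     # step (4)
        labeled = samples.ravel() != 255
        rf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
        rf.fit(x_all[labeled], samples.ravel()[labeled])
        new = rf.predict(x_all).reshape(h, w).astype(np.uint8)

        prev_rice, new_rice = current == 2, new == 2
        overlap = (np.logical_and(prev_rice, new_rice).sum()
                   / max(np.logical_or(prev_rice, new_rice).sum(), 1))
        current = new
        if overlap >= overlap_thr:                       # maps have stabilised
            break
    return current                                       # final fine pseudo-label, step (6)
```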
3.4. Evaluation Parameters for Quantitative Analysis
4. Experimental Results and Discussion
4.1. Pseudo-Labels Generated by Different Methods
4.1.1. Pseudo-Labels Generated by the K–RF
4.1.2. Pseudo-Labels Generated by the Transfer Model
4.1.3. Comparison between the Two Types of Pseudo-Labels
4.2. Rice Mapping Results of U-Net Trained on Different Pseudo-Labels
4.3. Comparison of Rice Mapping Performance with Other Rice Mapping Methods
4.4. Discussion
5. Conclusions
- (1) Based on the CDL data and the multi-temporal Sentinel-1 images, the global search showed that the multi-temporal datasets acquired from the sowing period to the mature period were optimal for local rice mapping, with a rice mapping accuracy, user accuracy, kappa, and F1 of 84.63%, 94.98%, 0.84, and 0.90, respectively;
- (2) When the latitudes of the target and reference regions differed significantly, datasets containing only the middle-to-late growth stages shared by rice in the two regions were more suitable for the transfer model than datasets covering the whole rice-growth cycle;
- (3) Compared with the U-Net trained on the TF pseudo-labels, the U-Net trained on the K–RF pseudo-labels could further correct the errors contained in its pseudo-labels;
- (4) Compared with the two rule-based methods, the U-Net trained on the K–RF pseudo-labels had obvious advantages, both in the error of the extracted rice area and in the extracted spatial distribution of rice.
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
1. Sharma, A.; Liu, X.; Yang, X.; Shi, D. A patch-based convolutional neural network for remote sensing image classification. Neural Netw. 2017, 95, 19–28.
2. Tian, F.; Wu, B.; Zeng, H.; Zhang, X.; Xu, J. Efficient Identification of Corn Cultivation Area with Multitemporal Synthetic Aperture Radar and Optical Images in the Google Earth Engine Cloud Platform. Remote Sens. 2019, 11, 629.
3. Phan, A.; Ha, D.N.; Man, C.D.; Nguyen, T.T.; Bui, H.Q.; Nguyen, T.T. Rapid Assessment of Flood Inundation and Damaged Rice Area in Red River Delta from Sentinel 1A Imagery. Remote Sens. 2019, 11, 2034.
4. Xu, J.; Yang, J.; Xiong, X.; Li, H.; Huang, J.; Ting, K.C.; Ying, Y.; Lin, T. Towards interpreting multi-temporal deep learning models in crop mapping. Remote Sens. Environ. 2021, 264, 112599.
5. Lasko, K.; Vadrevu, K.P.; Tran, V.T.; Justice, C. Mapping Double and Single Crop Paddy Rice With Sentinel-1A at Varying Spatial Scales and Polarizations in Hanoi, Vietnam. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 498–512.
6. Singha, M.; Dong, J.; Zhang, G.; Xiao, X. High resolution paddy rice maps in cloud-prone Bangladesh and Northeast India using Sentinel-1 data. Sci. Data 2019, 6, 24–31.
7. Liu, X.; Zhai, H.; Shen, Y.; Lou, B.; Jiang, C.; Li, T.; Hussain, S.B.; Shen, G. Large-Scale Crop Mapping From Multisource Remote Sensing Images in Google Earth Engine. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 414–427.
8. Mansaray, L.R.; Wang, F.; Huang, J.; Yang, L.; Kanu, A.S. Accuracies of support vector machine and random forest in rice mapping with Sentinel-1A, Landsat-8 and Sentinel-2A datasets. Geocarto Int. 2020, 35, 1088–1108.
9. Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 580–587.
10. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149.
11. Chai, D.; Newsam, S.; Zhang, H.K.; Qiu, Y.; Huang, J. Cloud and cloud shadow detection in Landsat imagery based on deep convolutional neural networks. Remote Sens. Environ. 2019, 225, 307–316.
12. Qiu, C.; Schmitt, M.; Geiß, C.; Chen, T.-H.K.; Zhu, X.X. A framework for large-scale mapping of human settlement extent from Sentinel-2 images via fully convolutional neural networks. ISPRS J. Photogramm. Remote Sens. 2020, 163, 152–170.
13. Yuan, Q.; Shen, H.; Li, T.; Li, Z.; Li, S.; Jiang, Y.; Xu, H.; Tan, W.; Yang, Q.; Wang, J.; et al. Deep learning in environmental remote sensing: Achievements and challenges. Remote Sens. Environ. 2020, 241, 111716.
14. Thorp, K.R.; Drajat, D. Deep machine learning with Sentinel satellite data to map paddy rice production stages across West Java, Indonesia. Remote Sens. Environ. 2021, 265, 112679.
15. Parente, L.; Taquary, E.; Silva, A.P.; Souza, C.M.; Ferreira, L.G. Next Generation Mapping: Combining Deep Learning, Cloud Computing, and Big Remote Sensing Data. Remote Sens. 2019, 11, 2881.
16. Gargiulo, M.; Dell’Aglio, D.A.G.; Iodice, A.; Riccio, D.; Ruello, G. Integration of Sentinel-1 and Sentinel-2 Data for Land Cover Mapping Using W-Net. Sensors 2020, 20, 2969.
17. Zhang, D.; Pan, Y.; Zhang, J.; Hu, T.; Zhao, J.; Li, N.; Chen, Q. A generalized approach based on convolutional neural networks for large area cropland mapping at very high resolution. Remote Sens. Environ. 2020, 247, 111912.
18. Wei, P.; Chai, D.; Lin, T.; Tang, C.; Du, M.; Huang, J. Large-scale rice mapping under different years based on time-series Sentinel-1 images using deep semantic segmentation model. ISPRS J. Photogramm. Remote Sens. 2021, 174, 198–214.
19. Ni, R.; Tian, J.; Li, X.; Yin, D.; Li, J.; Gong, H.; Zhang, J.; Zhu, L.; Wu, D. An enhanced pixel-based phenological feature for accurate paddy rice mapping with Sentinel-2 imagery in Google Earth Engine. ISPRS J. Photogramm. Remote Sens. 2021, 178, 282–296.
20. You, N.; Dong, J.; Huang, J.; Du, G.; Zhang, G.; He, Y.; Yang, T.; Di, Y.; Xiao, X. The 10-m crop type maps in Northeast China during 2017–2019. Sci. Data 2021, 8, 41.
21. Jiang, X.; Fang, S.; Huang, X.; Liu, Y.; Guo, L. Rice Mapping and Growth Monitoring Based on Time Series GF-6 Images and Red-Edge Bands. Remote Sens. 2021, 13, 579.
22. Wei, S.; Zhang, H.; Wang, C.; Wang, Y.; Xu, L. Multi-Temporal SAR Data Large-Scale Crop Mapping Based on U-Net Model. Remote Sens. 2019, 11, 68.
23. Pan, Z.; Xu, J.; Guo, Y.; Hu, Y.; Wang, G. Deep Learning Segmentation and Classification for Urban Village Using a Worldview Satellite Image Based on U-Net. Remote Sens. 2020, 12, 1574.
24. Pang, J.; Zhang, R.; Yu, B.; Liao, M.; Lv, J.; Xie, L.; Li, S.; Zhan, J. Pixel-level rice planting information monitoring in Fujin City based on time-series SAR imagery. Int. J. Appl. Earth Obs. Geoinf. 2021, 104, 102551.
25. Paris, C.; Bruzzone, L. A Novel Approach to the Unsupervised Extraction of Reliable Training Samples From Thematic Products. IEEE Trans. Geosci. Remote Sens. 2020, 59, 1930–1948.
26. Zhu, A.-X.; Zhao, F.-H.; Pan, H.-B.; Liu, J.-Z. Mapping Rice Paddy Distribution Using Remote Sensing by Coupling Deep Learning with Phenological Characteristics. Remote Sens. 2021, 13, 1360.
27. Zhang, C.; Zhang, H.; Zhang, L. Spatial domain bridge transfer: An automated paddy rice mapping method with no training data required and decreased image inputs for the large cloudy area. Comput. Electron. Agric. 2021, 181, 105978.
28. Yang, G.; Yu, W.; Yao, X.; Zheng, H.; Cao, Q.; Zhu, Y.; Cao, W.; Cheng, T. AGTOC: A novel approach to winter wheat mapping by automatic generation of training samples and one-class classification on Google Earth Engine. Int. J. Appl. Earth Obs. Geoinf. 2021, 102, 102446.
29. Hao, P.; Di, L.; Zhang, C.; Guo, L. Transfer Learning for Crop classification with Cropland Data Layer data (CDL) as training samples. Sci. Total Environ. 2020, 733, 138869.
30. Xu, J.; Zhu, Y.; Zhong, R.; Lin, Z.; Lin, T. DeepCropMapping: A multi-temporal deep learning approach with improved spatial generalizability for dynamic corn and soybean mapping. Remote Sens. Environ. 2020, 247, 111946.
31. Ge, S.; Zhang, J.; Pan, Y.; Yang, Z.; Zhu, S. Transferable deep learning model based on the phenological matching principle for mapping crop extent. Int. J. Appl. Earth Obs. Geoinf. 2021, 102, 102451.
32. Zhang, W.; Liu, H.; Wu, W.; Zhan, L.; Wei, J. Mapping rice paddy based on machine learning with Sentinel-2 multi-temporal data: Model comparison and transferability. Remote Sens. 2020, 12, 1620.
33. Arthur, D.; Vassilvitskii, S. K-Means++: The Advantages of Careful Seeding. In Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, Philadelphia, PA, USA, 2007; pp. 1027–1035. Available online: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.360.7427&rep=rep1&type=pdf (accessed on 1 September 2020).
34. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32.
35. Xiao, X.; Boles, S.; Liu, J.; Zhuang, D.; Frolking, S.; Li, C.; Salas, W.; Moore, B. Mapping paddy rice agriculture in southern China using multi-temporal MODIS images. Remote Sens. Environ. 2005, 95, 480–492.
36. Liu, L.; Huang, J.; Xiong, Q.; Zhang, H.; Song, P.; Huang, Y.; Dou, Y.; Wang, X. Optimal MODIS data processing for accurate multi-year paddy rice area mapping in China. GISci. Remote Sens. 2020, 57, 687–703.
37. Liu, L.; Xiao, X.; Qin, Y.; Wang, J.; Xu, X.; Hu, Y.; Qiao, Z. Mapping cropping intensity in China using time series Landsat and Sentinel-2 images and Google Earth Engine. Remote Sens. Environ. 2020, 239, 111624.
38. Hu, Q.; Yin, H.; Friedl, M.A.; You, L.; Li, Z.; Tang, H.; Wu, W. Integrating coarse-resolution images and agricultural statistics to generate sub-pixel crop type maps and reconciled area estimates. Remote Sens. Environ. 2021, 258, 112365.
39. USDA National Agricultural Statistics Service. Cropland Data Layer. 2020. Available online: https://www.nass.usda.gov/Research_and_Science/ (accessed on 24 January 2020).
40. Boryan, C.; Yang, Z.; Mueller, R.; Craig, M. Monitoring US agriculture: The US Department of Agriculture, National Agricultural Statistics Service, Cropland Data Layer Program. Geocarto Int. 2011, 26, 341–358.
41. Ashourloo, D.; Shahrabi, H.S.; Azadbakht, M.; Rad, A.M.; Aghighi, H.; Radiom, S. A novel method for automatic potato mapping using time series of Sentinel-2 images. Comput. Electron. Agric. 2020, 175, 105583.
42. Yaramasu, R.; Bandaru, V.; Pnvr, K. Pre-season crop type mapping using deep neural networks. Comput. Electron. Agric. 2020, 176, 105664.
43. Sun, L.; Gao, F.; Xie, D.; Anderson, M.; Chen, R.; Yang, Y.; Yang, Y.; Chen, Z. Reconstructing daily 30 m NDVI over complex agricultural landscapes using a crop reference curve approach. Remote Sens. Environ. 2021, 253, 112156.
44. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015; pp. 234–241. Available online: https://arxiv.org/pdf/1505.04597.pdf (accessed on 20 October 2020).
45. Chai, D.; Newsam, S.; Huang, J. Aerial image semantic segmentation using DCNN predicted distance maps. ISPRS J. Photogramm. Remote Sens. 2020, 161, 309–322.
46. Samani, N.; Gohari-Moghadam, M.; Safavi, A.A. A simple neural network model for the determination of aquifer parameters. J. Hydrol. 2007, 340, 1–11.
47. Nguyen, D.B.; Gruber, A.; Wagner, W. Mapping rice extent and cropping scheme in the Mekong Delta using Sentinel-1A data. Remote Sens. Lett. 2016, 7, 1209–1218.
48. Zhan, P.; Zhu, W.; Li, N. An automated rice mapping method based on flooding signals in synthetic aperture radar time series. Remote Sens. Environ. 2021, 252, 112112.
49. Dong, J.; Xiao, X.; Kou, W.; Qin, Y.; Zhang, G.; Li, L.; Jin, C.; Zhou, Y.; Wang, J.; Biradar, C.; et al. Tracking the dynamics of paddy rice planting area in 1986–2010 through time series Landsat images and phenology-based algorithms. Remote Sens. Environ. 2015, 160, 99–113.
50. Bazzi, H.; Baghdadi, N.; El Hajj, M.; Zribi, M.; Minh, D.H.T.; et al. Mapping Paddy Rice Using Sentinel-1 SAR Time Series in Camargue, France. Remote Sens. 2019, 11, 887.
51. Inoue, S.; Ito, A.; Yonezawa, C. Mapping Paddy Fields in Japan by Using a Sentinel-1 SAR Time Series Supplemented by Sentinel-2 Images on Google Earth Engine. Remote Sens. 2020, 12, 1622.
| Jiangsu (i.e., the Target Region) | Arkansas River Basin (i.e., the Reference Region) | | |
| --- | --- | --- | --- |
| 2019 | 2017 | 2018 | 2019 |
| 4/23 | 4/4 | 3/30 | 4/6 |
| 5/5 | 4/28 | 4/23 | 4/30 |
| 5/17 | 5/22 | 5/17 | 5/24 |
| 5/29 | 6/3 | 5/29 | 6/5 |
| 6/10 | 6/15 | 6/10 | 6/17 |
| 6/22 | 6/27 | 6/22 | 6/29 |
| 7/4 | 7/9 | 7/4 | 7/11 |
| 7/16 | 7/21 | 7/16 | 7/23 |
| 7/28 | 8/2 | 7/28 | 8/4 |
| 8/9 | 8/14 | 8/9 | 8/16 |
| 8/21 | 8/26 | 8/21 | 8/28 |
| 9/2 | 9/7 | 9/2 | 9/9 |
| 9/26 | 9/19 | 9/14 | 9/21 |
| Region | County Name | S1/Saa | S1/10³ Hectares | S2/10³ Hectares | S3/10³ Hectares | S4/10³ Hectares |
| --- | --- | --- | --- | --- | --- | --- |
| Northern part | Ganyu | 18% | 27.76 | 7.41 | 17.82 | 26.21 |
| | Donghai | 31% | 65.07 | 34.31 | 58.81 | 66.26 |
| | Xinyi | 17% | 28.10 | 14.72 | 21.72 | 30.38 |
| | Guanyun | 31% | 48.16 | 48.97 | 64.79 | 46.97 |
| | RMSE | / | / | 19.62 | 10.66 | 1.61 |
| | RRMSE | / | / | 49.70% | 27.76% | 5.16% |
| Central part | Jinhu | 27% | 38.29 | 30.58 | 38.43 | 36.73 |
| | Gaoyou | 27% | 53.48 | 46.03 | 45.57 | 46.48 |
| | Baoying | 36% | 54.34 | 52.75 | 59.71 | 53.44 |
| | Hongze | 22% | 31.66 | 31.62 | 34.76 | 39.09 |
| | Jianhu | 40% | 47.08 | 37.97 | 44.09 | 48.87 |
| | RMSE | / | / | 6.33 | 4.68 | 4.70 |
| | RRMSE | / | / | 14.02% | 9.52% | 12.30% |
| Southern part | Gaochun | 16% | 6.04 | 8.17 | 6.56 | 6.12 |
| | Lishui | 20% | 17.10 | 10.51 | 7.87 | 9.24 |
| | Liyang | 12% | 31.37 | 22.42 | 21.09 | 31.47 |
| | Yixing | 4% | 24.98 | 20.49 | 20.09 | 23.44 |
| | Wujin | 13% | 4.40 | 17.95 | 4.65 | 1.11 |
| | Jintan | 28% | 12.73 | 26.03 | 13.93 | 10.33 |
| | Danyang | 16% | 29.60 | 30.34 | 24.67 | 28.62 |
| | RMSE | / | / | 8.52 | 5.86 | 3.41 |
| | RRMSE | / | / | 125.14% | 26.30% | 34.04% |
| The three parts | RMSE (Total) | / | / | 11.85 | 7.09 | 3.56 |
| | RRMSE (Total) | / | / | 86.78% | 22.88% | 23.68% |
| Pseudo-Label Generation Method | Datasets | IoU/% | RMSE/10³ Hectares | RRMSE |
| --- | --- | --- | --- | --- |
| K–RF (K–RF pseudo-labels) | Training | 94.46 | / | / |
| | Validation | 93.79 | / | / |
| | Test | 95.02 | / | / |
| | Northern | 96.73 | 8.80 | 25.34% |
| | Central | 93.16 | 4.25 | 8.39% |
| | Southern | 95.20 | 4.23 | 29.82% |
| Transfer model (TF pseudo-labels) | Training | 92.41 | / | / |
| | Validation | 92.20 | / | / |
| | Test | 93.25 | / | / |
| | Northern | 94.15 | 4.21 | 14.06% |
| | Central | 86.86 | 5.64 | 13.51% |
| | Southern | 96.66 | 3.74 | 35.87% |
| Method | Region | RMSE/10³ Hectares | RRMSE |
| --- | --- | --- | --- |
| Nguyen et al. [47] | Northern | 13.86 | 31.97% |
| Zhan et al. [48] | | 33.91 | 68.57% |
| U-Net trained on the K–RF pseudo-labels | | 8.80 | 25.34% |
| Nguyen et al. [47] | Central | 37.52 | 80.14% |
| Zhan et al. [48] | | 9.66 | 26.31% |
| U-Net trained on the K–RF pseudo-labels | | 4.25 | 8.39% |
| Nguyen et al. [47] | Southern | 19.45 | 90.65% |
| Zhan et al. [48] | | 20.68 | 257.34% |
| U-Net trained on the K–RF pseudo-labels | | 4.23 | 29.82% |