Figure 1.
Study area: a test site located in Guangzhou, China, representative of the typical cropland regions of southern China (a), and some non-conventional cropland parcels that were divided into many small trial plots: citrus (b), vegetables (c), rice (d), and corn (e).
Figure 2.
Crop growth stages in the study area. Rice: (1) seedling, (2) tillering, (3) spikelet development, (4) flowering and fruiting. Corn: (1) seedling, (2) spike, (3) flowering to maturity. Soybean: (1) emergence, (2) seedling, (3) flower bud differentiation, (4) blooming and fruiting, (5) seed filling, (6) harvesting. Peanut: (1) sowing and emergence, (2) seedling, (3) flowering and pegging, (4) pod setting, (5) maturity. Sugarcane: (1) sowing to infancy, (2) emergence, (3) tillering, (4) elongation, (5) maturity. Banana: (1) seedling, (2) vigorous growth, (3) flower bud burst, (4) fruiting, (5) fruit development and harvest. Citrus: (1) flower bud differentiation, (2) budding, (3) flowering, (4) fruit growth and development, (5) fruit ripening, (6) flower bud differentiation. Pumpkin: (1) emergence, (2) vining, (3) flowering, (4) harvesting. Pepper: (1) germination, (2) seedling, (3) flowering and fruit setting, (4) fruiting. Vegetables: growth cycles vary from 60 to 120 days, with cultivation all year round.
Figure 3.
Spectral features from UAV and field measurements: spectral reflectance curves of the main crops in the investigated cropland parcels, measured with a spectroradiometer (a), and spectral reflectance curves with confidence intervals (CI) of the main land classes and main crops in the cropland parcels, derived from the UAV images (b,c).
Figure 4.
The U-Net (a) and U-Net++ (b) architectures.
Figure 5.
Overall experiment framework.
Figure 6.
Maps extracted from the test dataset by different convolutional neural network (CNN) architectures. (a) Original multispectral data (standard false-color composite), (b) ground truth label, and (c–h) results of experiments A1–A6.
Figure 7.
Maps extracted from the test dataset at different spatial resolutions in experiment group B. (a) Original multispectral data (standard false-color composite), (b) ground truth label, and (c–j) results of experiments B1–B8.
Figure 8.
Local prediction details of the maps extracted from the test dataset in experiment group B. (a-1) Original multispectral data (standard false-color composite), (b-1) ground truth, (c-1) result of experiment B3, and (d-1) result of experiment B6. (a-2,a-3) Local details of (a-1) within the black boxes marked in (b-1); (b-2,b-3), (c-2,c-3), and (d-2,d-3) the corresponding local details of (b-1), (c-1), and (d-1), respectively.
Figure 9.
Maps extracted from the test dataset with different spectral compositions in experiment group C. (a) Original multispectral data (standard false-color composite), (b) ground truth label, and (c–f) results of experiments C1–C4.
Figure 10.
Local prediction details of the maps extracted from the test dataset in experiment group C. (a-1) Original multispectral data (standard false-color composite), (b-1) ground truth, (c-1) result of experiment C1, and (d-1) result of experiment C4. (a-2,a-3) Local details of (a-1) within the black boxes marked in (b-1); (b-2,b-3), (c-2,c-3), and (d-2,d-3) the corresponding local details of (b-1), (c-1), and (d-1), respectively.
Figure 11.
Maps extracted from the test dataset with different terrain information in experiment group D. (a) Original multispectral data (standard false-color composite), (b) ground truth label, and (c–e) results of experiments D1–D3.
Figure 12.
Local prediction details of the maps extracted from the test dataset in experiment group D. (a-1) Original multispectral data (standard false-color composite), (b-1) ground truth, (c-1) result of experiment D1, (d-1) result of experiment D2, and (e-1) result of experiment D3. (a-2–a-4) Local details of (a-1) within the black boxes marked in (b-1); (b-2–b-4), (c-2–c-4), (d-2–d-4), and (e-2–e-4) the corresponding local details of (b-1), (c-1), (d-1), and (e-1), respectively.
Figure 13.
Maps extracted from additional periods for the robustness test. (a) Ground truth label, (b–d) original multispectral data (standard false-color composite) collected in August, November, and December, respectively, and (e–g) the maps extracted for August, November, and December, respectively.
Table 1.
Model training parameters. ‘Learning rate decay = 0.9’: the learning rate is decayed to 90% of its current value when triggered; ‘time-validation = 3’: a prediction is made on the validation dataset after every 3 training epochs; ‘patience = 2’: the learning rate is decayed if the model does not improve after 2 consecutive validation predictions; ‘epsilon = 0.001’: when the difference between the validation losses of the last two evaluations is less than 0.001, the model is considered unable to improve at the current learning rate, and the learning-rate reduction mechanism is triggered.
| Item | Parameter |
|---|---|
| Batch size | 12 |
| Optimizer | Adam |
| Maximum epochs | 350 |
| Initial learning rate | 1 × 10⁻⁴ |
| Learning rate decay | 0.9 |
| Minimum learning rate | 1 × 10⁻⁷ |
| Time-validation | 3 |
| Patience | 2 |
| Epsilon | 1 × 10⁻³ |
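The decay mechanism described in Table 1 behaves like a reduce-on-plateau learning-rate scheduler. A minimal sketch of that behavior, assuming the paper's parameter values; the class and attribute names are ours, not the authors':

```python
class PlateauLRScheduler:
    """Decay the learning rate when validation loss stops improving.

    Mirrors the mechanism in Table 1: if the validation loss improves
    by less than `epsilon` for `patience` consecutive validation
    passes, multiply the learning rate by `decay`, never dropping
    below `min_lr`.
    """

    def __init__(self, lr=1e-4, decay=0.9, min_lr=1e-7,
                 patience=2, epsilon=1e-3):
        self.lr = lr
        self.decay = decay
        self.min_lr = min_lr
        self.patience = patience
        self.epsilon = epsilon
        self.best_loss = float("inf")
        self.stalled = 0

    def step(self, val_loss):
        """Call after each validation pass; returns the current lr."""
        if self.best_loss - val_loss < self.epsilon:
            # No meaningful improvement over the best loss so far.
            self.stalled += 1
            if self.stalled >= self.patience:
                self.lr = max(self.lr * self.decay, self.min_lr)
                self.stalled = 0
        else:
            self.stalled = 0
        self.best_loss = min(self.best_loss, val_loss)
        return self.lr
```

In a framework such as Keras or PyTorch, the built-in `ReduceLROnPlateau` callback/scheduler provides equivalent functionality.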
Table 2.
Exploration of different down-sampling architectures of networks.
| Experiment Code | Network | Size of Max Pooling | Number of Down-Sampling Layers |
|---|---|---|---|
| A1 | U-Net | (2,2) | 4 |
| A2 | U-Net | (2,2) | 5 |
| A3 | U-Net | (2,2) | 6 |
| A4 | U-Net | (4,4) | 3 |
| A5 | U-Net++ | (2,2) | 4 |
| A6 | U-Net++ | (4,4) | 3 |
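One way to compare the configurations in Table 2 is by the total down-sampling factor each produces at the network bottleneck (pooling size raised to the number of pooling layers). A quick check, for illustration only:

```python
def downsampling_factor(pool_size, num_layers):
    """Total reduction of each spatial dimension at the bottleneck
    after `num_layers` max-pooling steps of width `pool_size`."""
    return pool_size ** num_layers

# Pooling width and layer count per experiment, from Table 2.
configs = {"A1/A5": (2, 4), "A2": (2, 5), "A3": (2, 6), "A4/A6": (4, 3)}
factors = {code: downsampling_factor(p, n) for code, (p, n) in configs.items()}
```

Note that A3 ((2,2) × 6 layers) and A4/A6 ((4,4) × 3 layers) reach the same 64× reduction through different depths, so they probe depth versus pooling size at a fixed receptive-field scale.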
Table 3.
Exploration of different spatial resolutions of the data.
| Experiment Code | Network | Spatial Resolution of Image (meters) |
|---|---|---|
| B1 | U-Net++ | 0.3 |
| B2 | U-Net++ | 0.4 |
| B3 | U-Net++ | 0.5 |
| B4 | U-Net++ | 0.6 |
| B5 | U-Net++ | 0.7 |
| B6 | U-Net++ | 0.8 |
| B7 | U-Net++ | 0.9 |
| B8 | U-Net++ | 1.0 |
Table 4.
Exploration of different spectral compositions of data.
| Experiment Code | Network | Band Composition of Image |
|---|---|---|
| C1 | U-Net++ | RGB |
| C2 | U-Net++ | RGB + RE |
| C3 | U-Net++ | RGB + NIR |
| C4 | U-Net++ | RGB + RE + NIR |
Table 5.
Exploration of different terrain information of the data.
| Experiment Code | Network | Band Composition of Image |
|---|---|---|
| D1 | U-Net++ | RGB + RE + NIR + DSM |
| D2 | U-Net++ | RGB + RE + NIR |
| D3 | U-Net++ | RGB + RE + NIR + Slope |
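Experiment D3 feeds a slope band derived from the terrain surface into the network. The paper does not specify how its slope layer was produced; a standard finite-difference derivation from a DSM raster, for illustration only, is:

```python
import numpy as np

def slope_from_dsm(dsm, cell_size):
    """Slope in degrees from a DSM raster via finite differences.

    `dsm` is a 2-D array of elevations (meters); `cell_size` is the
    raster resolution (meters per pixel). This is the usual
    gradient-magnitude formulation, not necessarily the authors' tool.
    """
    dz_dy, dz_dx = np.gradient(dsm, cell_size)
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

# A planar ramp rising 1 m per 1 m cell has a uniform 45-degree slope.
dsm = np.tile(np.arange(5, dtype=float), (5, 1))
slope = slope_from_dsm(dsm, cell_size=1.0)
```

The resulting slope raster can then be stacked with the spectral bands as an extra input channel, as in the D3 configuration.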
Table 6.
Test overall accuracy (OA), Kappa coefficient, and Intersection-over-Union (IoU) of different network architectures.
| | A1 | A2 | A3 | A4 | A5 | A6 |
|---|---|---|---|---|---|---|
| OA (%) | 83.5 | 89.9 | 88.2 | 86.0 | 91.3 | 90.4 |
| Kappa (%) | 62.9 | 74.6 | 68.6 | 64.4 | 78.7 | 76.6 |
| IoU (%) | 83.1 | 89.6 | 87.9 | 85.8 | 91.1 | 90.2 |
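OA, Kappa, and IoU in Tables 6–10 can all be computed from the counts of a binary confusion matrix. A minimal sketch using the standard formulas (the paper does not publish its evaluation code, and we compute the IoU of the positive cropland class):

```python
def metrics(tp, fp, fn, tn):
    """Overall accuracy, Cohen's Kappa, and cropland-class IoU
    from binary confusion-matrix counts (true/false positives,
    false/true negatives, in pixels)."""
    n = tp + fp + fn + tn
    oa = (tp + tn) / n
    # Expected agreement by chance, from the marginal totals.
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (oa - pe) / (1 - pe)
    iou = tp / (tp + fp + fn)
    return oa, kappa, iou
```

Equivalent results are available from `sklearn.metrics` (`accuracy_score`, `cohen_kappa_score`, `jaccard_score`) when working from flattened label maps.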
Table 7.
Test OA, Kappa coefficient, and IoU of different spatial resolutions (experiment group B).
| | B1 | B2 | B3 | B4 | B5 | B6 | B7 | B8 |
|---|---|---|---|---|---|---|---|---|
| OA (%) | 94.2 | 94.1 | 94.5 | 93.9 | 93.8 | 93.9 | 92.7 | 92.3 |
| Kappa (%) | 85.1 | 84.9 | 85.6 | 84.1 | 84.7 | 84.3 | 82.2 | 80.6 |
| IoU (%) | 94.0 | 93.9 | 94.4 | 93.6 | 93.5 | 93.7 | 92.4 | 91.9 |
Table 8.
Test OA, Kappa coefficient, and IoU of different spectral compositions (experiment group C).
| | C1 | C2 | C3 | C4 |
|---|---|---|---|---|
| OA (%) | 94.5 | 92.5 | 94.0 | 95.6 |
| Kappa (%) | 84.8 | 81.0 | 84.7 | 88.6 |
| IoU (%) | 94.3 | 92.1 | 93.8 | 95.5 |
Table 9.
Test OA, Kappa coefficient, and IoU of different terrain information (experiment group D).
| | D1 | D2 | D3 |
|---|---|---|---|
| OA (%) | 95.6 | 94.5 | 95.9 |
| Kappa (%) | 88.6 | 85.6 | 89.2 |
| IoU (%) | 95.5 | 94.4 | 95.7 |
Table 10.
Test OA, Kappa coefficient, and IoU of other periods.
| | August | November | December |
|---|---|---|---|
| OA (%) | 97.2 | 96.9 | 96.5 |
| Kappa (%) | 90.6 | 89.6 | 88.4 |
| IoU (%) | 97.1 | 96.7 | 96.4 |