Article

Limited-Samples-Based Crop Classification Using a Time-Weighted Dynamic Time Warping Method, Sentinel-1 Imagery, and Google Earth Engine

1 College of Geodesy and Geomatics, Shandong University of Science and Technology, Qingdao 266590, China
2 Key Laboratory of Regional Sustainable Development Modeling, Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Sciences, Beijing 100101, China
3 Shandong Ruizhi Flight Control Technology, Co., Ltd., Qingdao 266500, China
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(4), 1112; https://doi.org/10.3390/rs15041112
Submission received: 5 December 2022 / Revised: 2 February 2023 / Accepted: 15 February 2023 / Published: 17 February 2023
(This article belongs to the Special Issue Monitoring Crops and Rangelands Using Remote Sensing)

Abstract

Reliable crop type classification supports the scientific basis for food security and sustainable agricultural development. However, a labor- and time-efficient crop classification method based on limited samples is still lacking. To this end, we used the Google Earth Engine (GEE) and Sentinel-1A/B SAR time series to develop eight crop classification strategies that combine two sampling methods (central and scattered), two perspectives (object-based and pixel-based), and two classifiers (Time-Weighted Dynamic Time Warping (TWDTW) and Random Forest (RF)). We carried out 30 classification runs with different samples for each strategy to classify the crop types at the North Dakota–Minnesota border in the U.S. We then compared their classification accuracies and assessed the sensitivity of accuracy to sample size. The results showed that TWDTW generally performed better than RF, especially for small-sample classification. Object-based classifications had higher accuracies than pixel-based classifications, and the object-based TWDTW had the highest accuracy. RF performed better under the scattered sampling strategy than under the central sampling strategy. TWDTW performed better than RF in distinguishing soybean and dry bean, whose time series curves are similar. The accuracies of all eight classification strategies improved with increasing sample size; TWDTW was more robust, while RF was more sensitive to sample size changes. RF required many more samples than TWDTW to achieve satisfactory accuracy and outperformed TWDTW only when the sample size exceeded 50. The accuracy comparisons indicated that TWDTW has stronger temporal and spatial generalization capabilities and high potential for early, historical, and limited-samples-based crop type classification. The findings of our research are worthwhile contributions to the methodology and practice of crop type classification as well as sustainable agricultural development.

1. Introduction

Cropland is critical to food security and sustainable development [1]. To meet the increasing human demand for food caused by population growth [2,3], cropland use has been intensified, and the structure and distribution of crop types have changed significantly. However, these changes have led to negative impacts, such as increased agricultural water use, decreased soil fertility, and degraded ecosystems, and have threatened regional food security and sustainable development [4,5,6,7]. Efficient and accurate crop classification information can help develop sound cropland protection policies and reasonable agricultural planning, which can effectively optimize the crop type structure and mitigate negative environmental impacts. Therefore, it is urgent to develop an effective methodological framework to accurately identify crop type distributions, providing basic scientific information for sustainable agricultural management and development [8].
With the improvement in the quantity and quality of satellite sensors and remote sensing (RS) imagery, many studies on crop-type monitoring have been carried out [9,10]. For example, the National Agricultural Statistics Service of the United States Department of Agriculture (USDA-NASS) and the National Oceanic and Atmospheric Administration (NOAA) initiated the Large Area Crop Inventory Experiment (LACIE) in the 1970s to improve domestic and international crop forecasting methods [11], and the USDA-NASS has generated Cropland Data Layer maps since 1997 using medium-resolution satellite imagery and extensive ground-truth data [12,13]. Moreover, the Earth Observation Team of the Science and Technology Branch (STB) at Agriculture and Agri-Food Canada (AAFC) has produced Annual Crop Inventory digital maps based on satellite imagery since 2009 [14], and the European Space Agency (ESA) launched the Global Monitoring for Food Security (GMFS) project in 2003 [15].
Generally, different crop types have their unique spectral characteristics that vary with the phenological period [16], which makes crop type classification based on time-series RS images more effective than single-period images [17,18,19]. However, it is also more time-consuming and labor-intensive to process long-time series RS images. In recent years, the rapid development of big data cloud platforms such as Google Earth Engine (GEE) has provided researchers with open-access RS datasets, widely-used algorithms, and high-performance computing ability [20,21], significantly reducing the labor and time consumption of large-scale and long-term crop classification studies [22,23]. Thus, it is urgent to efficiently classify crop types based on the GEE platform and long time-series RS images.
Previous studies have mostly carried out crop classification based on optical RS images such as MODIS, Landsat, and Sentinel-2 due to their global coverage, short revisit periods, and easy accessibility [24,25,26]. However, the quality of optical imagery is affected by clouds and weather, which makes it hard to obtain continuous cloud-free time series images. Although filtering methods such as harmonic and Savitzky–Golay filtering can be used to reconstruct optical time series [18,27], the reconstructed data still carry large uncertainty in persistently cloud-affected areas [28]. Compared to optical images, synthetic aperture radar (SAR) images are hardly affected by clouds due to the strong penetrability of their longer wavelengths [29], making it easy to obtain effective and continuous time series data. The C-band SAR data provided by the Sentinel-1 satellites offer backscatter that is highly sensitive to crop structural changes and have been widely and effectively used in crop type classification [30,31]. However, the coherent speckle noise in SAR imagery can lead to high intra-class variance [32], especially for pixel-based classification methods, compromising classification accuracy. Object-based methods can effectively reduce the spectral variation and noise interference within an object by averaging a large number of pixels [33,34,35]. Simple Non-Iterative Clustering (SNIC) [36] is a commonly used image segmentation method that creates efficient and compact polygons based on pixel values and spatial distances and is easily accessible on the GEE platform. Previous studies used the SNIC method for image segmentation and object-based classification in GEE, achieving higher accuracy than the pixel-based method [23,37].
Traditional machine learning classifiers such as Random Forest (RF) and support vector machine (SVM) have commonly been used for SAR-based crop classification [33,35,38,39]; they achieve satisfactory accuracy but usually require a large number of training samples. Dynamic Time Warping (DTW) [40], which compares the similarity between two time series curves [41,42], is an effective method for limited-samples-based crop classification due to its reduced sensitivity to training samples [43]. In addition, a time weight was added to balance the effects of planting time differences on crop classification, producing the Time-Weighted Dynamic Time Warping (TWDTW) method [44]. Previous studies have demonstrated the good performance of this method in land use, forest, and crop classifications using optical or SAR RS images [44,45,46].
The sampling strategy also affects crop classification accuracy. In general, the more correct samples covering the study area, the more accurate the crop classification. However, collecting large numbers of scattered samples is time-consuming and labor-intensive. Thus, it is urgent to adopt effective classification methods that are insensitive to the spatial and temporal distribution of samples. To this end, taking the North Dakota–Minnesota border in the U.S. as the study area, we carried out 30 crop classification runs to map the major crop types (i.e., spring wheat, dry bean, corn, soybean, sugar beet, and hay) in 2018 based on different limited samples and Sentinel-1 SAR time series images on GEE. We then compared the crop classification results based on different sampling strategies (i.e., central sampling and scattered sampling), different perspectives (i.e., object-based and pixel-based), and different classifiers (i.e., TWDTW and RF) and assessed their sensitivities to sample size (i.e., 2, 5, 10, 20, 50, and 100 samples for each crop type). Moreover, we summarized these eight classification schemes to discuss their advantages and disadvantages, implications, and future research topics for accurate crop type mapping and sustainable agricultural management.

2. Materials and Methods

2.1. Study Area

The study area is located on the border between eastern North Dakota and northwestern Minnesota in the USA (96°32′49″W–97°24′14″W, 47°30′0″N–48°10′48″N) (Figure 1), with an area of 4824.61 km2. Agriculture occupies a high proportion of the economy and is vital to regional development of these two states. The crop types and planting structures in the two states are different, but both are dominated by corn and soybean. To confirm the spatial adaptability of the TWDTW-based method, we selected the border area between these two states as the study area to classify six major crop types, i.e., spring wheat, dry bean, corn, soybean, sugar beet, and hay.

2.2. Data Sources

The data sources used in this study mainly include Sentinel-1A/B SAR images, the U.S. Cropland Data Layer (CDL) maps, and data from the National Agriculture Imagery Program (NAIP).
The Sentinel-1A/B SAR images were from the Copernicus initiative conducted by the European Space Agency. The dataset went through multiple processing steps, including boundary noise correction, speckle filtering, radiometric slope correction, and Savitzky–Golay filtering. Boundary noise correction [47,48] was applied to each image to remove border regions with excessively high or low incidence angles. Lee filtering [49] with a kernel size of 9 was used to reduce coherent speckle noise. Angle-based radiometric slope correction [50,51] was performed using the Shuttle Radar Topography Mission (SRTM) digital elevation model (DEM) at a resolution of 30 m [52]. In addition, previous studies have shown that cross-polarization is the best single polarization for classification [53,54]. Thus, we extracted the VH-polarized time series containing 58 image periods in 2018 from the Sentinel-1A/B SAR images for crop type classification and then smoothed the time series using the Savitzky–Golay (SG) filter [55]. All backscatter values of the VH-polarized time series were converted from linear values to dB values and multiplied by a factor of 10,000. All processing was performed on the GEE platform.
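As an illustration of this workflow, the following sketch uses the GEE Python API to assemble the 2018 VH time series. It is a minimal sketch, not the exact implementation: the study-area rectangle is approximate, the focal mean is only a simple stand-in for the Lee filter, and the boundary noise correction, slope correction, and SG filtering steps are omitted.

```python
import ee

ee.Initialize()

# Approximate study-area rectangle (see Section 2.1 for the exact extent).
roi = ee.Geometry.Rectangle([-97.404, 47.5, -96.547, 48.18])

# Sentinel-1 GRD, IW mode, VH polarization, year 2018. Note that this GEE
# catalog serves backscatter already converted to dB.
s1_vh = (ee.ImageCollection('COPERNICUS/S1_GRD')
         .filterBounds(roi)
         .filterDate('2018-01-01', '2019-01-01')
         .filter(ee.Filter.eq('instrumentMode', 'IW'))
         .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VH'))
         .select('VH'))

def smooth_and_scale(img):
    # 9 x 9 focal mean as a simple stand-in for Lee speckle filtering [49],
    # followed by the paper's scaling of the dB values by 10,000.
    smoothed = img.focal_mean(radius=4, kernelType='square', units='pixels')
    return ee.Image(smoothed.multiply(10000)
                    .copyProperties(img, ['system:time_start']))

vh_series = s1_vh.map(smooth_and_scale)
print('VH images in 2018:', vh_series.size().getInfo())
```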
The CDL maps were from the CropScape program of the U.S. NASS. We used the CDL maps to select crop samples of the six major types, i.e., spring wheat, dry bean, corn, soybean, sugar beet, and hay, for classification and validation. We used two sampling strategies, i.e., scattered sampling and central sampling. The former is a commonly used method that selects samples uniformly across the study area; the latter selects samples locally and centrally within the study area, reducing the difficulty of field sample collection. To explore the spatial generalizability of the TWDTW classification method, we carried out 30 crop classification runs with different limited samples (i.e., 5 points for each crop type) for each training sample selection strategy; one of the selected sample distributions is shown in Figure 2. Moreover, we assessed the sensitivity of the different classification strategies to gradually increasing sample sizes (i.e., 2, 5, 10, 20, 50, and 100 training samples for each crop type). We then randomly selected 1650 verification samples for each classification to assess the accuracy.
The NAIP acquires aerial imagery during the agricultural growing season in the continental U.S. [56]. NAIP imagery has a ground resolution of 1 m, with four bands of red, green, blue, and near-infrared. We used the 2018 NAIP images to perform object-based segmentation to obtain high-resolution cropland plots.

2.3. Methods

In this study, we first preprocessed the Sentinel-1 SAR images to construct a VH-polarized time series (Figure 3). We then used the SNIC method to segment the study area into multi-pixel objects. Third, based on the U.S. CDL maps, we used the two strategies of central sampling and scattered sampling to select samples of the six major crops. Finally, we classified the crop types at the North Dakota–Minnesota border using the TWDTW and RF classifiers, pixel-based and object-based perspectives, and central and scattered sampling on the GEE platform and compared their classification performance.

2.3.1. Simple Non-Iterative Clustering (SNIC) Image Segmentation

Simple Linear Iterative Clustering (SLIC) is one of the most prominent superpixel segmentation algorithms; it is based on a localized k-means optimization in the five-dimensional CIELAB color and image space, starting from seeds chosen on a regular grid [57]. It has been widely used in RS image segmentation due to its simplicity, computational efficiency, good boundary adherence, and limited adjacency. The SNIC algorithm is a simplified, non-iterative, and improved version of SLIC that explicitly enforces connectivity from the start [36]. SNIC first generates K seeds on a regular grid over the RS image. The superpixel centroid of the k-th seed is C[k] = {Xk, Ck}, where Xk and Ck refer to its spatial position and CIELAB color, respectively. The seeds then populate a distance-minimizing priority queue of candidate pixels, starting from the superpixel centroids; the pixel with the minimum distance pops out of the queue and is aggregated into the corresponding superpixel. The distance of the k-th superpixel centroid C[k] to the j-th candidate pixel is dj,k, calculated as follows.
$$ d_{j,k} = \sqrt{ \frac{\left\| X_j - X_k \right\|_2^2}{s} + \frac{\left\| C_j - C_k \right\|_2^2}{m} } $$
where s and m are the normalizing factors for the spatial and color distances, respectively. The main parameters of SNIC are the superpixel seed spacing and the compactness factor, which control the distance between superpixel centroids and the shape compactness of the superpixels, respectively.
The fine details and structures of high-spatial-resolution images are beneficial for segmentation [58], so we used all four bands of the 1 m resolution NAIP images for SNIC segmentation. In this study, SNIC segmentation was performed with a compactness factor of 1 and superpixel seed spacings of 15, 20, 25, and 30 to identify the segmentation result closest to the actual field boundaries.
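A hedged sketch of this segmentation step with the GEE Python API is given below; the NAIP asset ID and the reuse of the `roi` geometry from the earlier Sentinel-1 sketch are assumptions, while the seed spacing of 20 and compactness of 1 follow the values reported here.

```python
import ee

ee.Initialize()

# 2018 NAIP mosaic (R, G, B, NIR) over the study area; roi as in the
# earlier Sentinel-1 sketch.
naip = (ee.ImageCollection('USDA/NAIP/DOQQ')
        .filterBounds(roi)
        .filterDate('2018-01-01', '2019-01-01')
        .mosaic())

# SNIC segmentation with the parameters selected in this study.
snic = ee.Algorithms.Image.Segmentation.SNIC(
    image=naip,
    size=20,         # superpixel seed spacing (pixels)
    compactness=1,   # shape compactness factor
    connectivity=8)

# The 'clusters' band labels each pixel with its superpixel ID; averaging
# the VH time series within each label yields the object-based curves.
clusters = snic.select('clusters')
```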

2.3.2. Time-Weighted Dynamic Time Warping (TWDTW)

DTW is a commonly used method to measure the similarity between two time series. It was originally applied in the field of speech recognition and has since been widely used for crop type classification [59,60]. Because planting and growth conditions differ among crops, growth phenology and time series curves differ between crop types but are similar within the same type [61]. DTW uses a cost matrix to measure the similarity between the reference and target time series curves. For the reference time series curve of the training samples, R = {R(1), R(2), …, R(m), …, R(M)}, and the target time series curve of the candidate pixels, T = {T(1), T(2), …, T(n), …, T(N)}, the cost matrices are calculated as follows.
$$ d(m,n) = \left| R(m) - T(n) \right| $$

$$ d_{M \times N} = \begin{pmatrix} d(1,1) & d(1,2) & \cdots & d(1,n) & \cdots & d(1,N) \\ d(2,1) & d(2,2) & \cdots & d(2,n) & \cdots & d(2,N) \\ \vdots & \vdots & & \vdots & & \vdots \\ d(m,1) & d(m,2) & \cdots & d(m,n) & \cdots & d(m,N) \\ \vdots & \vdots & & \vdots & & \vdots \\ d(M,1) & d(M,2) & \cdots & d(M,n) & \cdots & d(M,N) \end{pmatrix} $$

$$ D_{M \times N} = \begin{pmatrix} D(1,1) & D(1,2) & \cdots & D(1,n) & \cdots & D(1,N) \\ D(2,1) & D(2,2) & \cdots & D(2,n) & \cdots & D(2,N) \\ \vdots & \vdots & & \vdots & & \vdots \\ D(m,1) & D(m,2) & \cdots & D(m,n) & \cdots & D(m,N) \\ \vdots & \vdots & & \vdots & & \vdots \\ D(M,1) & D(M,2) & \cdots & D(M,n) & \cdots & D(M,N) \end{pmatrix} $$

$$ D(m,n) = \begin{cases} d(1,1), & m = 1 \ \text{and} \ n = 1 \\ d(1,n) + D(1,\, n-1), & m = 1 \ \text{and} \ n \neq 1 \\ d(m,1) + D(m-1,\, 1), & m \neq 1 \ \text{and} \ n = 1 \\ d(m,n) + \min \{ D(m-1,\, n-1),\, D(m-1,\, n),\, D(m,\, n-1) \}, & m \neq 1 \ \text{and} \ n \neq 1 \end{cases} $$
where m and n are the time indices, and M and N are the lengths of the reference and target time series, respectively. R(m) and T(n) are the values of the reference and target time series at time indices m and n, respectively. d(m,n) and dM×N are the absolute difference value and the cost matrix, and D(m,n) and DM×N are the accumulated absolute difference value and the accumulated cost matrix between the reference and target time series, respectively. The smaller the final accumulated cost D(M,N), the higher the similarity between the two time series.
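The recursion in the equations above translates directly into code. The following NumPy sketch computes the accumulated cost D(M,N) for two one-dimensional curves; it mirrors the definitions here and is not taken from the paper's implementation.

```python
import numpy as np

def dtw_cost(reference: np.ndarray, target: np.ndarray) -> float:
    """Accumulated DTW cost D(M, N) between a reference curve R and a
    target curve T, following the recursion defined above."""
    M, N = len(reference), len(target)
    d = np.abs(reference[:, None] - target[None, :])  # cost matrix d(m, n)
    D = np.zeros((M, N))
    D[0, 0] = d[0, 0]
    for n in range(1, N):                             # first row (m = 1)
        D[0, n] = d[0, n] + D[0, n - 1]
    for m in range(1, M):                             # first column (n = 1)
        D[m, 0] = d[m, 0] + D[m - 1, 0]
    for m in range(1, M):
        for n in range(1, N):
            D[m, n] = d[m, n] + min(D[m - 1, n - 1], D[m - 1, n], D[m, n - 1])
    return D[M - 1, N - 1]  # smaller cost = higher similarity
```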
When DTW is used for classification, the calculation of the overall minimum accumulated cost Dmin should take all sample types into account. Specifically, the method first calculates the median accumulated cost for each sample type and then takes the minimum over all the medians to obtain the overall minimum Dmin.
$$ D_{\min} = \min \left\{ \operatorname{median}\left( D(m,n)_{1,1},\, D(m,n)_{1,2},\, \ldots,\, D(m,n)_{1,v} \right),\ \ldots,\ \operatorname{median}\left( D(m,n)_{u,1},\, D(m,n)_{u,2},\, \ldots,\, D(m,n)_{u,v} \right) \right\} $$
where u and v represent the type and number of training samples, respectively.
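Based on this rule, class assignment can be sketched as follows, reusing the `dtw_cost` function above; `ref_curves_by_class`, a hypothetical mapping from each crop type to its reference VH curves, is an assumed input.

```python
import numpy as np

def classify_pixel(target, ref_curves_by_class):
    """Assign the class whose training curves give the smallest median
    accumulated cost. `ref_curves_by_class` (hypothetical) maps each crop
    type to a list of reference curves; `dtw_cost` is defined above."""
    medians = {crop: np.median([dtw_cost(ref, target) for ref in curves])
               for crop, curves in ref_curves_by_class.items()}
    return min(medians, key=medians.get)  # class realizing D_min
```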
However, there may be small temporal differences between the time series curves of the same sample type. For crop type classification, the time series curves of the same crop type may differ due to differences in planting management and climatic conditions. Thus, the TWDTW method adds a logical time weight ω to the calculation of the absolute difference d(m,n) and the cost matrix dM×N [44].
$$ d(m,n) = \left| R(m) - T(n) \right| + \omega(m,n) $$

$$ \omega(m,n) = \frac{1}{1 + e^{-\alpha \left( g(t_m,\, t_n) - \beta \right)}} $$
where ω(m,n) and g(tm, tn) are the logical time weight and the time difference between the reference and target time series at time indices m and n, respectively. The logistic time cost used in the TWDTW method imposes a lower penalty on smaller time differences and a larger penalty on larger ones, which reduces the sensitivity of the classification compared with a linear weight [44]. In this study, α and β were set to 0.1 and 15, respectively, ensuring a low penalty for time differences of less than 15 days. The logical time weight was multiplied by a coefficient of 500 and then added to the cost matrix.
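The weighted cost matrix can be sketched as below with the settings reported here (α = 0.1, β = 15 days, weight scaled by 500); the day-of-year vectors `ref_days` and `tgt_days` are assumed inputs, and the resulting matrix would then be fed into the same accumulation recursion as in plain DTW.

```python
import numpy as np

def twdtw_cost_matrix(reference, target, ref_days, tgt_days,
                      alpha=0.1, beta=15, weight_scale=500):
    """Time-weighted cost matrix d(m, n) = |R(m) - T(n)| + omega(m, n),
    with the logistic time weight and the coefficients used in this study."""
    reference = np.asarray(reference, dtype=float)
    target = np.asarray(target, dtype=float)
    # g(t_m, t_n): absolute time difference in days between observations.
    g = np.abs(np.asarray(ref_days)[:, None] - np.asarray(tgt_days)[None, :])
    omega = 1.0 / (1.0 + np.exp(-alpha * (g - beta)))  # logistic time weight
    return np.abs(reference[:, None] - target[None, :]) + weight_scale * omega
```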

2.3.3. Random Forest Classification

RF is a classical machine learning algorithm based on the integration of decision tree rules and is widely used in crop type and land use classification and data prediction. The algorithm feeds the training data into multiple decision trees to generate multiple classification results and uses majority voting to determine the final class. Its high noise immunity and stability can effectively improve the accuracy and robustness of crop classification studies [62,63,64]. In this study, we used the RF classifier with 100 trees.
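On GEE, such a classifier can be instantiated as in the sketch below; `training` (an ee.FeatureCollection of sample points with a 'crop' label) and `vh_stack` (the 58-band VH image) are hypothetical names for products of the earlier steps.

```python
import ee

ee.Initialize()

# Train a 100-tree Random Forest on the VH time series features; `training`
# and `vh_stack` are hypothetical names for the sample collection and the
# 58-band VH stack built earlier.
classifier = (ee.Classifier.smileRandomForest(numberOfTrees=100)
              .train(features=training,
                     classProperty='crop',
                     inputProperties=vh_stack.bandNames()))

# Apply the trained classifier to the full VH stack.
classified = vh_stack.classify(classifier)
```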

2.3.4. Classification Accuracy Assessment

In this study, the accuracy assessment of the classification results was performed based on the widely used error matrix and its associated measures, including producer's accuracy (PA), user's accuracy (UA), overall accuracy (OA) [65], and the Kappa coefficient [66]. The producer's accuracy reflects the proportion of validation samples of a specific category that are correctly classified, the user's accuracy reflects the proportion of pixels assigned to a specific category that are correctly classified, and the overall accuracy and Kappa coefficient reflect the accuracy across all categories. We used the error matrix to assess the accuracies of the 30 classification runs of each strategy and used the box plot method [67] to show the accuracy statistics of the crop classifications.
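A compact sketch of these measures, assuming the reference and predicted crop labels of the 1650 verification samples are available as arrays:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, cohen_kappa_score

def accuracy_report(y_true, y_pred, labels):
    """Error matrix and derived PA, UA, OA, and Kappa for one run;
    rows of the matrix are reference labels, columns are predictions."""
    cm = confusion_matrix(y_true, y_pred, labels=labels)
    pa = np.diag(cm) / cm.sum(axis=1)  # producer's accuracy per class
    ua = np.diag(cm) / cm.sum(axis=0)  # user's accuracy per class
    oa = np.diag(cm).sum() / cm.sum()  # overall accuracy
    kappa = cohen_kappa_score(y_true, y_pred, labels=labels)
    return cm, pa, ua, oa, kappa
```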

2.3.5. Sensitivity Assessment of Classification Accuracy to Sample Size

To assess the sensitivities of the different classification strategies to the number of sample points, we carried out multiple crop classifications with gradually increasing sample sizes (i.e., 2, 5, 10, 20, 50, and 100 samples for each crop type). Each classification strategy with each sample size was executed three times to obtain an average accuracy, and we compared the averaged accuracies of the eight classification strategies.

3. Results

3.1. Time Series Curves of Major Crop Types

Different crops had their own unique VH time series curves under both the scattered and central sampling strategies (Figure 4). Anomalously high VH backscatter values were observed for all crops around the 100th day of the year (DOY), implying unusually strong backscattering during this period. After that, the VH curves showed significant differences between crops. Specifically, spring wheat was sown slightly earlier than the others, reaching a peak before the 150th DOY, and had three significant peaks with smaller values than other crops during the growing season. The rising and falling times of dry bean and soybean were similar on the VH curves, but dry bean had only one peak while soybean had two. Corn had three significant peaks, and the third arrived later than in other crops, indicating its later harvest period. The first peak of sugar beet arrived a little earlier and had slightly higher values than those of other crops. In addition, hay maintained low values and decreased slowly after the 150th DOY, with a low peak on the 300th DOY. The time series curves of the five sample points for each crop (especially spring wheat, soybean, sugar beet, and corn) based on central sampling showed less variation than those based on scattered sampling (Figure 5), indicating that the growth conditions of the same crop were similar in nearby areas but differed more across distant areas.

3.2. SNIC Image Segmentation Results

The SNIC algorithm produced differentiated segmentation results under different seed spacing parameters (Figure 6). As the seed spacing increased, the image superpixels or pixel objects obtained by segmentation became smoother. Seed spacings that are too large or too small lead to results inconsistent with the actual fields. The segmentation results were fragmented and rough when using a small seed spacing (e.g., 15), whereas large seed spacings (e.g., 25 and 30) led to under-segmentation with over-smoothed pixel objects. Thus, we finally chose an intermediate seed spacing (i.e., 20) to segment the NAIP image and obtained a 58-period object-based VH time series for crop type classification.

3.3. Comparisons of Different Classification Results and Accuracies

By comparing the crop classification results of the different methods, we found that object-based classification performed better than pixel-based classification (Figure 7), and the crop type distribution maps zoomed in on a partial area depict these characteristics in more detail (Figure 8). Affected by salt-and-pepper effects, the pixel-based classification results contained many unavoidable small patches, whereas the object-based classification results had far fewer. In addition, because the time series curves of soybean and dry bean were similar, there was a certain degree of misclassification between these two crops. With the object-based classification method, the TWDTW algorithm produced significantly less misclassification than the RF algorithm.
Accuracy assessment showed that the accuracies of the TWDTW algorithm were generally higher than those of the RF algorithm under either the scattered or central sampling strategy and with either the pixel- or object-based method (Table 1). The object-based TWDTW algorithm had the highest overall accuracy under both the central and scattered sampling strategies, at 82.11% and 81.64%, respectively. For the TWDTW algorithm, the object-based method was much better than the pixel-based method. Due to its poor generalization ability, the RF algorithm performed poorly in limited-samples-based classification, especially under central sampling. This indicated that RF classification needs the support of comprehensive samples, while TWDTW has a strong spatial generalization ability and is suitable for limited-samples-based crop classification.
We performed 30 classification runs with different samples for each method to further compare their accuracies using box plots (Figure 9). The comparison showed that TWDTW performed better than RF under the same strategies. The object-based methods had higher accuracies than the pixel-based ones, and the object-based TWDTW had the highest accuracy under either the scattered or central sampling strategy. Moreover, the advantage of TWDTW over RF was greater under central sampling than under scattered sampling.
In terms of crop type, spring wheat had the highest accuracy, while dry bean had the lowest (Table 2). Under central sampling, the user's accuracies of dry bean, corn, and hay and the producer's accuracies of soybean and dry bean were relatively low with the pixel-based or object-based RF method. The time series curves of soybean and dry bean were very similar, differing in shape only between the 180th and 250th DOY, which led to a large number of misclassifications between them (Figure 10). However, under either the scattered or central sampling strategy, both the object-based and pixel-based TWDTW results outperformed RF, and the PA and UA of dry bean and soybean improved significantly (Table 2). This indicated that TWDTW could effectively exploit the differences between soybean and dry bean between the 180th and 250th DOY despite their overall similar curves.

3.4. Sensitivities of Different Classification Strategies to Sample Size

Sensitivity analyses showed that the TWDTW algorithm generally had higher accuracy than the RF algorithm (Figure 11), especially for classifications with a small sample size under the central sampling strategy (Figure 11a,b). Under the central sampling strategy, TWDTW consistently performed better than RF as the sample size increased. However, RF performed better than TWDTW when the sample size exceeded 50 per crop type under the scattered sampling strategy (Figure 11b). This indicated that RF requires many more samples than TWDTW to achieve satisfactory classification accuracy.
Along with the increasing sample size, the accuracy improved for all classification methods (Figure 11). The accuracy improvement in RF due to increased sample size was more significant than that of TWDTW under each strategy (Figure 11c,d), indicating that TWDTW was more robust while RF was more sensitive to the sample size change. For the TWDTW algorithm, the object-based classifications performed much better than pixel-based ones, especially for the central sampling strategy with a small sample size. For the RF algorithm, the scattered sampling strategy performed better than the central sampling one, especially for the object-based classifications with large sample sizes.

4. Discussion

4.1. Advantages of GEE Platform and Sentinel-1 SAR Images for Crop Classification

Previous studies generally classified crop types based on a single algorithm or compared the classification accuracies of multiple algorithms. In this study, we compared eight crop type classification strategies based on different sampling methods (i.e., central and scattered sampling), different perspectives (i.e., object-based and pixel-based), and different classifiers (i.e., TWDTW and RF) using Sentinel-1A/B SAR images and the GEE big-data cloud platform. Owing to its open-access datasets and algorithms and powerful computing ability, GEE has been widely used in mapping crop types. In addition, unlike optical images, the Sentinel-1 SAR images used in this study have strong penetrability and can provide long-term, cloud-free, continuous time series data for crop classification [28,29]. Previous research provided an Analysis Ready Data (ARD) framework for researchers to easily preprocess Sentinel-1 SAR backscatter on GEE [51]. The easily accessible SAR data and efficient processing capabilities of the GEE platform meant that all of the Sentinel-1A/B data processing in this study took only about ten minutes. This makes it easy to obtain dense time series images, which can provide more detailed crop phenology information and effectively improve the accuracy of crop classification [68]. GEE also provides the efficient Simple Non-Iterative Clustering method for effective image segmentation. Moreover, we found that the object-based method can effectively reduce the coherent speckle noise of SAR data and improve classification accuracy compared to the pixel-based method, which is consistent with previous studies [35,69].

4.2. Implications for Crop Classification and Sustainable Agricultural Management

The same crop type may differ in sowing time and growth conditions, producing time series curves with similar shapes but different start–end times and amplitudes. Such ubiquitous differences are difficult to capture with traditional machine learning methods such as RF but can be handled effectively by the TWDTW method [24,46,61]. Although TWDTW has been used less than RF, we found that TWDTW can outperform RF in few-samples-based crop classification due to its strong spatial generalization ability, consistent with previous research [70,71]. We also found that TWDTW was robust to sample size changes and performed better than RF under the central sampling strategy, which proved its effectiveness for crop classification in large areas with limited samples [72,73]. Previous studies have confirmed the effectiveness of the TWDTW method in complex agricultural areas using a small number of samples for crop classification [8] and have also shown that TWDTW performed better than other methods (e.g., RF and SVM) in classification trials with recursive sample reduction [43]. However, we found that RF performed better than TWDTW when the sample size exceeded 50. This indicated that RF requires a large number of representative and spatially distributed samples to achieve satisfactory accuracy [74], consuming more labor and time.
Crop classification plays an important role in precision agriculture, food security, and sustainable cropland use [75]. By accurately identifying different crop types and their growth stages, farmers can optimize their management practices to increase yields, reduce costs, and minimize environmental impacts [76,77]. The use of Sentinel-1 and TWDTW for crop classification can provide effective social benefits, especially in situations where the number of samples is limited. Sentinel-1 SAR can provide effective and continuous time series images in any climatic situation. The TWDTW classifier only requires local and limited samples to achieve satisfactory classification accuracy in a large area, which can effectively improve sample collection efficiency and reduce the time, manpower, and money required for crop monitoring. This is particularly important for crop classification in some cloudy and rainy regions near the equator in Africa and Latin America [78,79]. Moreover, the combination of TWDTW and RF in crop classification may lead to significant improvements in classification accuracy and sustainable agricultural management.

4.3. Limitations and Future Research Topics

In this study, we only identified the crop types at the North Dakota–Minnesota border in 2018. Future research can focus on larger-scale and longer-term crop classification, providing basic data on the characteristics, driving mechanisms, and impacts of spatiotemporal crop type changes and supporting decision-making for future scenario optimization. Moreover, we used only one linear polarization band (i.e., VH) of the Sentinel-1A/B SAR images to classify crop types. Previous research based on RADARSAT-2 data has confirmed that crop classification using multiple polarization parameters extracted by Cloude–Pottier and Freeman–Durden decomposition achieved higher accuracy than using one linear polarization alone [22]. Other research used Sentinel-1 dual-polarization decomposition to obtain entropy, anisotropy, and alpha angle parameters to improve classification accuracy [80]. Thus, future research can focus on exploring more useful polarization decomposition parameters for higher-accuracy crop classification.
In this study, we segmented high-resolution NAIP images to carry out lower-resolution classification based on Sentinel-1 data, which may help improve classification performance. However, large-area high-resolution images are difficult to obtain, especially freely and in a timely manner. In addition, we used only visual interpretation to determine the seed spacing parameter for image segmentation. Future research should assess the classification accuracies under different seed spacings and different methods to determine the most efficient segmentation parameter. The classification performance of TWDTW with other data sources (e.g., Landsat-8 and Sentinel-2), in other study areas, or for other crop types should also be analyzed in future research to assess its robustness. In addition, early crop mapping can provide important information for crop growth monitoring and yield prediction [81]. TWDTW allows the use of additional data from previous years and the incorporation of temporal information (e.g., crop phenology changes) to improve classification accuracy [75]; it thus has potential applications for early crop mapping as well as historical and limited-samples-based crop type classification and deserves more attention in future studies.

5. Conclusions

By comparing the accuracies of eight classification strategies, we found that the TWDTW algorithm generally performed better than the RF algorithm, especially for small-sample classification with a central sampling strategy. The object-based methods had higher accuracies than pixel-based ones, and the object-based TWDTW had the highest accuracy under either scattered or central sampling strategies. For the RF method, the scattered sampling performed much better than central sampling, especially for large-sample classification. In terms of crop type, although soybean and dry bean had relatively more misclassifications than other crops due to their similar curves, the TWDTW performed significantly better than RF in distinguishing these two crops.
Sensitivity analyses found that the accuracies improved for all eight classification strategies with increasing sample size. The improvement in RF was more significant than that of TWDTW, indicating that TWDTW was more robust while RF was more sensitive to sample size change. TWDTW performed much better than RF in limited-sample classifications, while RF performed slightly better than TWDTW when the sample size exceeded 50. This proved that RF requires many more samples than TWDTW to achieve satisfactory classification accuracy.
The comparison results indicated that TWDTW has stronger temporal and spatial generalization capabilities and can effectively capture the similar curve characteristics of the same crop despite small differences induced by sowing time and climatic conditions. TWDTW only requires a labor-, time-, and cost-effective sampling strategy to generate local and limited samples for large-area classification, giving it numerous potential applications for early, historical, and limited-samples-based crop type classification that deserve more attention in the future. The findings of this study are worthwhile contributions to the methodology and practice of crop type classification as well as sustainable agricultural development.

Author Contributions

Conceptualization, Y.L.; methodology, X.X. and L.J.; software, L.J.; validation, L.J.; formal analysis, Y.L.; data curation, L.J.; writing—original draft preparation, X.X., L.J. and G.R.; writing—review and editing, Y.L.; visualization, L.J.; supervision, Y.L.; project administration, X.X. and Y.L.; funding acquisition, X.X. and Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Natural Science Foundation of China (Grant Nos. 42271273 and 42201289) and a project funded by the China Postdoctoral Science Foundation (Grant No. 2021M700143).

Data Availability Statement

The data presented in this research are available on request from the corresponding author.

Acknowledgments

We thank the Google Earth Engine for the computing platform and the European Space Agency for the Sentinel-1 data. We also appreciate the editors and anonymous reviewers for their valuable time, constructive suggestions, and insightful comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Liu, Y.; Song, W. Modelling crop yield, water consumption, and water use efficiency for sustainable agroecosystem management. J. Clean. Prod. 2020, 253, 119940. [Google Scholar] [CrossRef]
  2. Godfray, H.C.J.; Beddington, J.R.; Crute, I.R.; Haddad, L.; Lawrence, D.; Muir, J.F.; Pretty, J.; Robinson, S.; Thomas, S.M.; Toulmin, C. Food security: The challenge of feeding 9 billion people. Science 2010, 327, 812–818. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Brown, M.; De Beurs, K.; Marshall, M. Global phenological response to climate change in crop areas using satellite remote sensing of vegetation, humidity and temperature over 26 years. Remote Sens. Environ. 2012, 126, 174–183. [Google Scholar] [CrossRef]
  4. Liu, Y.; Song, W.; Deng, X. Spatiotemporal patterns of crop irrigation water requirements in the Heihe River Basin, China. Water 2017, 9, 616. [Google Scholar] [CrossRef] [Green Version]
  5. Liu, Y.; Song, W.; Mu, F. Changes in ecosystem services associated with planting structures of cropland: A case study in Minle County in China. Phys. Chem. Earth Parts A/B/C 2017, 102, 10–20. [Google Scholar] [CrossRef]
  6. Li, H.; Zhang, C.; Zhang, S.; Atkinson, P.M. Full year crop monitoring and separability assessment with fully-polarimetric L-band UAVSAR: A case study in the Sacramento Valley, California. Int. J. Appl. Earth Obs. Geoinf. 2019, 74, 45–56. [Google Scholar] [CrossRef] [Green Version]
  7. Xiao, X.; Li, X.; Jiang, T.; Tan, M.; Hu, M.; Liu, Y.; Zeng, W. Response of net primary production to land use and climate changes in the middle-reaches of the Heihe River Basin. Ecol. Evol. 2019, 9, 4651–4666. [Google Scholar] [CrossRef] [Green Version]
  8. Csillik, O.; Belgiu, M.; Asner, G.P.; Kelly, M. Object-based time-constrained dynamic time warping classification of crops using Sentinel-2. Remote Sens. 2019, 11, 1257. [Google Scholar] [CrossRef] [Green Version]
  9. Liu, Y.; Song, W.; Deng, X. Changes in crop type distribution in Zhangye City of the Heihe River Basin, China. Appl. Geogr. 2016, 76, 22–36. [Google Scholar] [CrossRef]
  10. Gao, F.; Anderson, M.C.; Zhang, X.; Yang, Z.; Alfieri, J.G.; Kustas, W.P.; Mueller, R.; Johnson, D.M.; Prueger, J.H. Toward mapping crop progress at field scales through fusion of Landsat and MODIS imagery. Remote Sens. Environ. 2017, 188, 9–25. [Google Scholar] [CrossRef] [Green Version]
  11. Pinter, P.J., Jr.; Ritchie, J.C.; Hatfield, J.L.; Hart, G.F. The agricultural research service’s remote sensing program. Photogramm. Eng. Remote Sens. 2003, 69, 615–618. [Google Scholar] [CrossRef] [Green Version]
  12. Boryan, C.; Yang, Z.; Mueller, R.; Craig, M. Monitoring US agriculture: The US department of agriculture, national agricultural statistics service, cropland data layer program. Geocarto Int. 2011, 26, 341–358. [Google Scholar] [CrossRef]
  13. USDA-NASS. USDA-National Agricultural Statistics Service, Cropland Data Layer; USDA-NASS: Washington, DC, USA, 2016. [Google Scholar]
  14. Fisette, T.; Rollin, P.; Aly, Z.; Campbell, L.; Daneshfar, B.; Filyer, P.; Smith, A.; Davidson, A.; Shang, J.; Jarvis, I. AAFC annual crop inventory. In Proceedings of the 2013 Second International Conference on Agro-Geoinformatics (Agro-Geoinformatics), Fairfax, VA, USA, 12–16 August 2013; pp. 270–274. [Google Scholar]
  15. Sannier, C.; Gilliams, S.; Ham, F.; Fillol, E. Use of Satellite Image Derived Products for Early Warning and Monitoring of the Impact of Drought on Food Security in Africa. In Time-Sensitive Remote Sensing; Springer: Berlin/Heidelberg, Germany, 2015; pp. 183–198. [Google Scholar]
  16. Gao, H.; Wang, C.; Wang, G.; Fu, H.; Zhu, J. A novel crop classification method based on ppfSVM classifier with time-series alignment kernel from dual-polarization SAR datasets. Remote Sens. Environ. 2021, 264, 112628. [Google Scholar] [CrossRef]
  17. Kennedy, R.E.; Yang, Z.; Cohen, W.B. Detecting trends in forest disturbance and recovery using yearly Landsat time series: 1. LandTrendr—Temporal segmentation algorithms. Remote Sens. Environ. 2010, 114, 2897–2910. [Google Scholar] [CrossRef]
  18. Melaas, E.K.; Friedl, M.A.; Zhu, Z. Detecting interannual variation in deciduous broadleaf forest phenology using Landsat TM/ETM+ data. Remote Sens. Environ. 2013, 132, 176–185. [Google Scholar] [CrossRef]
  19. Gómez, C.; White, J.C.; Wulder, M.A. Optical remotely sensed time series data for land cover classification: A review. ISPRS J. Photogramm. Remote Sens. 2016, 116, 55–72. [Google Scholar] [CrossRef] [Green Version]
  20. Gorelick, N.; Hancher, M.; Dixon, M.; Ilyushchenko, S.; Thau, D.; Moore, R. Google Earth Engine: Planetary-scale geospatial analysis for everyone. Remote Sens. Environ. 2017, 202, 18–27. [Google Scholar] [CrossRef]
  21. Mutanga, O.; Kumar, L. Google Earth Engine applications. Remote Sens. 2019, 11, 591. [Google Scholar] [CrossRef] [Green Version]
  22. Jiao, X.; Kovacs, J.M.; Shang, J.; McNairn, H.; Walters, D.; Ma, B.; Geng, X. Object-oriented crop mapping and monitoring using multi-temporal polarimetric RADARSAT-2 data. ISPRS J. Photogramm. Remote Sens. 2014, 96, 38–46. [Google Scholar] [CrossRef]
  23. Tassi, A.; Vizzari, M. Object-oriented LULC classification in Google Earth Engine combining SNIC, GLCM, and machine learning algorithms. Remote Sens. 2020, 12, 3776. [Google Scholar] [CrossRef]
  24. Dong, Q.; Chen, X.; Chen, J.; Zhang, C.; Liu, L.; Cao, X.; Zang, Y.; Zhu, X.; Cui, X. Mapping winter wheat in North China using Sentinel 2A/B data: A method based on phenology-time weighted dynamic time warping. Remote Sens. 2020, 12, 1274. [Google Scholar] [CrossRef] [Green Version]
  25. Chaves, M.E.; Alves, M.d.C.; Sáfadi, T.; de Oliveira, M.S.; Picoli, M.C.; Simoes, R.E.; Mataveli, G.A. Time-weighted dynamic time warping analysis for mapping interannual cropping practices changes in large-scale agro-industrial farms in Brazilian Cerrado. Sci. Remote Sens. 2021, 3, 100021. [Google Scholar] [CrossRef]
  26. Liu, Y.; Wang, J. Revealing Annual Crop Type Distribution and Spatiotemporal Changes in Northeast China Based on Google Earth Engine. Remote Sens. 2022, 14, 4056. [Google Scholar] [CrossRef]
  27. Wilson, B.T.; Knight, J.F.; McRoberts, R.E. Harmonic regression of Landsat time series for modeling attributes from national forest inventory data. ISPRS J. Photogramm. Remote Sens. 2018, 137, 29–46. [Google Scholar] [CrossRef]
  28. Chen, Y.; Cao, R.; Chen, J.; Liu, L.; Matsushita, B. A practical approach to reconstruct high-quality Landsat NDVI time-series data by gap filling and the Savitzky–Golay filter. ISPRS J. Photogramm. Remote Sens. 2021, 180, 174–190. [Google Scholar] [CrossRef]
  29. Wan, Y.; Zhang, R.; Pan, X.; Fan, C.; Dai, Y. Evaluation of the Significant Wave Height Data Quality for the Sentinel-3 Synthetic Aperture Radar Altimeter. Remote Sens. 2020, 12, 3107. [Google Scholar] [CrossRef]
  30. McNairn, H.; Shang, J. A review of multitemporal synthetic aperture radar (SAR) for crop monitoring. Multitemporal Remote Sens. 2016, 20, 317–340. [Google Scholar] [CrossRef]
  31. Li, M.; Bijker, W. Vegetable classification in Indonesia using Dynamic Time Warping of Sentinel-1A dual polarization SAR time series. Int. J. Appl. Earth Obs. Geoinf. 2019, 78, 268–280. [Google Scholar] [CrossRef]
  32. Nagraj, G.M.; Karegowda, A.G. Crop mapping using SAR imagery: A review. Int. J. Adv. Res. Comput. Sci. 2016, 7, 47–52. [Google Scholar]
  33. Salehi, B.; Daneshfar, B.; Davidson, A.M. Accurate crop-type classification using multi-temporal optical and multi-polarization SAR data in an object-based image analysis framework. Int. J. Remote Sens. 2017, 38, 4130–4155. [Google Scholar] [CrossRef]
  34. Wu, Q.; Zhong, R.; Zhao, W.; Fu, H.; Song, K. A comparison of pixel-based decision tree and object-based Support Vector Machine methods for land-cover classification based on aerial images and airborne lidar data. Int. J. Remote Sens. 2017, 38, 7176–7195. [Google Scholar] [CrossRef]
  35. Luo, C.; Qi, B.; Liu, H.; Guo, D.; Lu, L.; Fu, Q.; Shao, Y. Using time series Sentinel-1 images for object-oriented crop classification in Google Earth Engine. Remote Sens. 2021, 13, 561. [Google Scholar] [CrossRef]
  36. Achanta, R.; Susstrunk, S. Superpixels and polygons using simple non-iterative clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4651–4660. [Google Scholar]
  37. Mahdianpari, M.; Salehi, B.; Mohammadimanesh, F.; Brisco, B.; Homayouni, S.; Gill, E.; DeLancey, E.R.; Bourgeau-Chavez, L. Big data for a big country: The first generation of Canadian wetland inventory map at a spatial resolution of 10-m using Sentinel-1 and Sentinel-2 data on the Google Earth Engine cloud computing platform. Can. J. Remote Sens. 2020, 46, 15–33. [Google Scholar] [CrossRef]
  38. Belgiu, M.; Drăguţ, L. Random forest in remote sensing: A review of applications and future directions. ISPRS J. Photogramm. Remote Sens. 2016, 114, 24–31. [Google Scholar] [CrossRef]
  39. Asam, S.; Gessner, U.; Almengor González, R.; Wenzl, M.; Kriese, J.; Kuenzer, C. Mapping Crop Types of Germany by Combining Temporal Statistical Metrics of Sentinel-1 and Sentinel-2 Time Series with LPIS Data. Remote Sens. 2022, 14, 2981. [Google Scholar] [CrossRef]
  40. Sakoe, H.; Chiba, S. Dynamic programming algorithm optimization for spoken word recognition. IEEE Trans. Acoust. Speech Signal Process. 1978, 26, 43–49. [Google Scholar] [CrossRef] [Green Version]
  41. Berndt, D.J.; Clifford, J. Using dynamic time warping to find patterns in time series. In Proceedings of the 3rd International Conference on Knowledge Discovery and Data Mining, Seattle, WA, USA, 31 July–1 August 1994; pp. 359–370. [Google Scholar]
  42. Maus, V.; Câmara, G.; Appel, M.; Pebesma, E. dtwSat: Time-weighted dynamic time warping for satellite image time series analysis in R. J. Stat. Softw. 2019, 88, 1–31. [Google Scholar] [CrossRef] [Green Version]
  43. Cheng, K.; Wang, J. Forest-type classification using time-weighted dynamic time warping analysis in mountain areas: A case study in southern China. Forests 2019, 10, 1040. [Google Scholar] [CrossRef] [Green Version]
  44. Maus, V.; Câmara, G.; Cartaxo, R.; Sanchez, A.; Ramos, F.M.; De Queiroz, G.R. A time-weighted dynamic time warping method for land-use and land-cover mapping. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 3729–3739. [Google Scholar] [CrossRef]
  45. Huete, A.; Didan, K.; Miura, T.; Rodriguez, E.P.; Gao, X.; Ferreira, L.G. Overview of the radiometric and biophysical performance of the MODIS vegetation indices. Remote Sens. Environ. 2002, 83, 195–213. [Google Scholar] [CrossRef]
  46. Belgiu, M.; Csillik, O. Sentinel-2 cropland mapping using pixel-based and object-based time-weighted dynamic time warping analysis. Remote Sens. Environ. 2018, 204, 509–523. [Google Scholar] [CrossRef]
  47. Hird, J.N.; DeLancey, E.R.; McDermid, G.J.; Kariyeva, J. Google Earth Engine, open-access satellite data, and machine learning in support of large-area probabilistic wetland mapping. Remote Sens. 2017, 9, 1315. [Google Scholar] [CrossRef] [Green Version]
  48. Stasolla, M.; Neyt, X. An Operational Tool for the Automatic Detection and Removal of Border Noise in Sentinel-1 GRD Products. Sensors 2018, 18, 3454. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  49. Lee, J.-S. Digital image enhancement and noise filtering by use of local statistics. IEEE Trans. Pattern Anal. Mach. Intell. 1980, PAMI-2, 165–168. [Google Scholar] [CrossRef] [Green Version]
  50. Vollrath, A.; Mullissa, A.; Reiche, J. Angular-based radiometric slope correction for Sentinel-1 on Google Earth Engine. Remote Sens. 2020, 12, 1867. [Google Scholar] [CrossRef]
  51. Mullissa, A.; Vollrath, A.; Odongo-Braun, C.; Slagter, B.; Balling, J.; Gou, Y.; Gorelick, N.; Reiche, J. Sentinel-1 SAR Backscatter Analysis Ready Data Preparation in Google Earth Engine. Remote Sens. 2021, 13, 1954. [Google Scholar] [CrossRef]
  52. Farr, T.G.; Rosen, P.A.; Caro, E.; Crippen, R.; Duren, R.; Hensley, S.; Kobrick, M.; Paller, M.; Rodriguez, E.; Roth, L. The shuttle radar topography mission. Rev. Geophys. 2007, 45, 1–33. [Google Scholar] [CrossRef] [Green Version]
  53. Paris, J.F. Radar backscattering properties of corn and soybeans at frequencies of 1.6, 4.75, and 13.3 Ghz. IEEE Trans. Geosci. Remote Sens. 1983, GE-21, 392–400. [Google Scholar] [CrossRef]
  54. McNairn, H.; Shang, J.; Jiao, X.; Champagne, C. The contribution of ALOS PALSAR multipolarization and polarimetric data to crop classification. IEEE Trans. Geosci. Remote Sens. 2009, 47, 3981–3992. [Google Scholar] [CrossRef] [Green Version]
  55. Press, W.H.; Teukolsky, S.A. Savitzky-Golay smoothing filters. Comput. Phys. 1990, 4, 669–672. [Google Scholar] [CrossRef]
  56. Maxwell, A.E.; Warner, T.A.; Vanderbilt, B.C.; Ramezan, C.A. Land cover classification and feature extraction from National Agriculture Imagery Program (NAIP) Orthoimagery: A review. Photogramm. Eng. Remote Sens. 2017, 83, 737–747. [Google Scholar] [CrossRef]
  57. Achanta, R.; Shaji, A.; Smith, K.; Lucchi, A.; Fua, P.; Süsstrunk, S. SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 2274–2282. [Google Scholar] [CrossRef] [Green Version]
  58. Petitjean, F.; Kurtz, C.; Passat, N.; Gançarski, P. Spatio-temporal reasoning for the classification of satellite image time series. Pattern Recognit. Lett. 2012, 33, 1805–1815. [Google Scholar] [CrossRef] [Green Version]
  59. Virnodkar, S.S.; Pachghare, V.K.; Patil, V.; Jha, S.K. Application of machine learning on remote sensing data for sugarcane crop classification: A review. ICT Anal. Appl. 2020, 2, 539–555. [Google Scholar] [CrossRef]
  60. Li, F.; Ren, J.; Wu, S.; Zhao, H.; Zhang, N. Comparison of regional winter wheat mapping results from different similarity measurement indicators of NDVI time series and their optimized thresholds. Remote Sens. 2021, 13, 1162. [Google Scholar] [CrossRef]
  61. Gella, G.W.; Bijker, W.; Belgiu, M. Mapping crop types in complex farming areas using SAR imagery with dynamic time warping. ISPRS J. Photogramm. Remote Sens. 2021, 175, 171–183. [Google Scholar] [CrossRef]
  62. Tatsumi, K.; Yamashiki, Y.; Torres, M.A.C.; Taipe, C.L.R. Crop classification of upland fields using Random forest of time-series Landsat 7 ETM+ data. Comput. Electron. Agric. 2015, 115, 171–179. [Google Scholar] [CrossRef]
  63. Hariharan, S.; Mandal, D.; Tirodkar, S.; Kumar, V.; Bhattacharya, A.; Lopez-Sanchez, J.M. A novel phenology based feature subset selection technique using random forest for multitemporal PolSAR crop classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 4244–4258. [Google Scholar] [CrossRef] [Green Version]
  64. Saini, R.; Ghosh, S.K. Crop classification on single-date Sentinel-2 imagery using random forest and support vector machine. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, 42, 683–688. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Location of the study area. The RGB true-color composite image is from the 2018 NAIP data.
Figure 2. Spatial distribution of training samples under the scattered (a) and central (b) sampling strategies, and of the verification samples (c).
Figure 3. Methodological framework for crop type classification.
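The TWDTW classifier at the core of this framework extends classic dynamic time warping by adding a logistic penalty on the time difference between matched observations, so that alignments far apart in the growing season are discouraged. The following is a minimal illustrative Python sketch of that idea, not the authors' implementation; the parameter values alpha and beta are assumptions chosen for illustration.

```python
import numpy as np

def logistic_time_weight(dt_days, alpha=0.1, beta=100.0):
    """Logistic penalty on the time difference (days) between matched dates."""
    return 1.0 / (1.0 + np.exp(-alpha * (dt_days - beta)))

def twdtw_distance(x, x_days, y, y_days, alpha=0.1, beta=100.0):
    """Time-weighted DTW distance between two (e.g., VH backscatter) series.

    x, y           : 1-D value arrays
    x_days, y_days : acquisition day-of-year for each observation
    """
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)  # cumulative cost matrix
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dt = abs(x_days[i - 1] - y_days[j - 1])
            # Local cost = value difference plus the time-weighted penalty.
            cost = abs(x[i - 1] - y[j - 1]) + logistic_time_weight(dt, alpha, beta)
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

A pixel or object is then assigned to the crop class whose reference curve yields the smallest TWDTW distance.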
Figure 4. Mean VH time series curves of the training samples for each crop: the original (a) and SG-filtered (b) curves of the scattered samples, and the original (c) and SG-filtered (d) curves of the central samples.
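The SG (Savitzky–Golay) smoothing behind the filtered curves in Figures 4 and 5 can be reproduced with SciPy, for example; the series values, window length, and polynomial order below are illustrative assumptions rather than the study's actual settings.

```python
import numpy as np
from scipy.signal import savgol_filter

# Hypothetical mean VH backscatter (dB) time series for one crop's samples.
vh_series = np.array([-18.2, -17.9, -16.5, -15.8, -14.9, -14.1,
                      -13.8, -14.2, -15.1, -16.3, -17.0, -17.8])

# Savitzky-Golay smoothing: a 5-observation window fitted with a
# 2nd-order polynomial (illustrative parameters).
vh_smoothed = savgol_filter(vh_series, window_length=5, polyorder=2)
```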
Figure 5. SG-filtered time series curves of the different crops under the scattered and central sampling strategies.
Figure 6. SNIC segmentation results using seed spacing parameters of 15 (a), 20 (b), 25 (c), and 30 (d).
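For the object-based strategies, a minimal Earth Engine Python sketch of SNIC segmentation over a Sentinel-1 composite might look as follows. The point geometry, date range, compactness, and connectivity are illustrative assumptions; Figure 6 corresponds to varying the seed-grid spacing over 15, 20, 25, and 30.

```python
import ee

ee.Initialize()

# Hypothetical point near the North Dakota-Minnesota border study area.
region = ee.Geometry.Point(-96.8, 47.5)

# Illustrative Sentinel-1 VH median composite for one growing season.
image = (ee.ImageCollection('COPERNICUS/S1_GRD')
         .filterBounds(region)
         .filterDate('2019-04-01', '2019-11-01')
         .filter(ee.Filter.eq('instrumentMode', 'IW'))
         .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VH'))
         .select('VH')
         .median())

# Regular seed grid; Figure 6 varies this spacing over 15, 20, 25, and 30.
seeds = ee.Algorithms.Image.Segmentation.seedGrid(25)

# SNIC superpixel segmentation; compactness and connectivity are assumed values.
objects = ee.Algorithms.Image.Segmentation.SNIC(
    image=image,
    seeds=seeds,
    compactness=1,
    connectivity=8,
).select('clusters')
```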
Figure 7. Crop classification results under different sampling strategies, perspectives, and classifiers. RF and TWDTW are abbreviations for Random Forest and Time-Weighted Dynamic Time Warping, respectively.
Figure 8. Crop classification results in the zoomed area under different sampling strategies, perspectives, and classifiers. RF and TWDTW are abbreviations for Random Forest and Time-Weighted Dynamic Time Warping, respectively.
Figure 9. Box plots of the overall accuracies of the 30 classification runs for each strategy. The red line marks the average accuracy. RF and TWDTW are abbreviations for Random Forest and Time-Weighted Dynamic Time Warping, respectively.
Figure 10. Confusion matrices under different sampling strategies, perspectives, and classifiers. RF and TWDTW are abbreviations for Random Forest and Time-Weighted Dynamic Time Warping, respectively.
Figure 11. Accuracy sensitivity of the different classification strategies to sample size. RF and TWDTW are abbreviations for Random Forest and Time-Weighted Dynamic Time Warping, respectively.
Table 1. Overall accuracies and Kappa coefficients under different sampling strategies, perspectives, and classifiers.

Sampling Strategy   | Classification Method | Overall Accuracy (%) | Kappa Coefficient
--------------------|-----------------------|----------------------|------------------
Scattered sampling  | Pixel-based, RF       | 73.17                | 0.66
                    | Object-based, RF      | 73.58                | 0.67
                    | Pixel-based, TWDTW    | 73.17                | 0.66
                    | Object-based, TWDTW   | 81.64                | 0.77
Central sampling    | Pixel-based, RF       | 71.63                | 0.64
                    | Object-based, RF      | 69.65                | 0.62
                    | Pixel-based, TWDTW    | 75.39                | 0.68
                    | Object-based, TWDTW   | 82.11                | 0.77

Abbreviations: RF and TWDTW refer to Random Forest and Time-Weighted Dynamic Time Warping, respectively.
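For reference, the overall accuracy in Table 1 is the fraction of correctly classified verification samples, and the Kappa coefficient is the chance-corrected agreement kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed and p_e the chance-expected agreement. A minimal sketch with placeholder labels:

```python
from sklearn.metrics import accuracy_score, cohen_kappa_score, confusion_matrix

# Placeholder reference and predicted labels for the verification samples.
y_true = ['spring wheat', 'corn', 'soybean', 'corn', 'hay', 'soybean']
y_pred = ['spring wheat', 'corn', 'dry bean', 'corn', 'hay', 'soybean']

oa = accuracy_score(y_true, y_pred)        # overall accuracy
kappa = cohen_kappa_score(y_true, y_pred)  # chance-corrected agreement
cm = confusion_matrix(y_true, y_pred)      # rows: reference, columns: predicted
```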
Table 2. Producer's and User's accuracies (unit: %) under different sampling strategies, perspectives, and classifiers.

Sampling / Method     | Accuracy   | Spring Wheat | Dry Bean | Corn | Soybean | Sugar Beet | Hay
----------------------|------------|--------------|----------|------|---------|------------|-----
Scattered sampling    |            |              |          |      |         |            |
Pixel-based, RF       | User's     | 92.0         | 41.2     | 64.1 | 80.5    | 89.0       | 46.6
                      | Producer's | 85.9         | 61.5     | 80.3 | 55.9    | 75.7       | 87.3
Object-based, RF      | User's     | 88.0         | 38.4     | 70.4 | 78.6    | 88.5       | 59.0
                      | Producer's | 89.2         | 66.9     | 80.2 | 52.9    | 78.9       | 78.5
Pixel-based, TWDTW    | User's     | 88.7         | 50.0     | 63.1 | 75.6    | 91.6       | 48.9
                      | Producer's | 88.1         | 70.9     | 57.6 | 64.7    | 58.8       | 89.3
Object-based, TWDTW   | User's     | 91.9         | 59.5     | 76.7 | 87.8    | 90.3       | 56.0
                      | Producer's | 91.6         | 74.8     | 79.7 | 70.7    | 82.4       | 92.4
Central sampling      |            |              |          |      |         |            |
Pixel-based, RF       | User's     | 88.2         | 49.3     | 54.5 | 83.0    | 80.0       | 47.4
                      | Producer's | 87.6         | 68.9     | 76.9 | 47.7    | 77.0       | 85.6
Object-based, RF      | User's     | 92.5         | 37.5     | 56.8 | 78.8    | 84.7       | 50.7
                      | Producer's | 84.7         | 68.2     | 85.5 | 41.6    | 73.5       | 90.1
Pixel-based, TWDTW    | User's     | 81.4         | 55.9     | 72.3 | 84.4    | 75.4       | 53.9
                      | Producer's | 93.4         | 55.6     | 63.5 | 62.2    | 82.6       | 80.2
Object-based, TWDTW   | User's     | 88.0         | 68.4     | 72.3 | 90.2    | 85.2       | 62.8
                      | Producer's | 92.5         | 75.0     | 74.2 | 72.3    | 84.3       | 93.4

Abbreviations: RF and TWDTW refer to Random Forest and Time-Weighted Dynamic Time Warping, respectively.
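Producer's accuracy is the fraction of reference samples of a class that the map labels correctly (the complement of omission error), while User's accuracy is the fraction of samples mapped as a class that truly belong to it (the complement of commission error). A minimal NumPy sketch with a placeholder confusion matrix:

```python
import numpy as np

# Placeholder confusion matrix: rows = reference classes, columns = mapped classes.
cm = np.array([[90,  3,  7],
               [ 5, 80, 15],
               [ 8, 12, 60]])

producers_acc = np.diag(cm) / cm.sum(axis=1)  # per reference class (omission view)
users_acc = np.diag(cm) / cm.sum(axis=0)      # per mapped class (commission view)
```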
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
