Article

Comparison and Assessment of Data Sources with Different Spatial and Temporal Resolution for Efficient Orchard Mapping: Case Studies in Five Grape-Growing Regions

1 College of Land Science and Technology, China Agricultural University, Beijing 100083, China
2 Key Laboratory of Remote Sensing for Agri-Hazards, Ministry of Agriculture and Rural Affairs, Beijing 100083, China
3 Key Laboratory for Agricultural Land Quality, Ministry of Natural Resources of the People’s Republic of China, Beijing 100083, China
4 Ministry of Education Key Laboratory for Earth System Modelling, Department of Earth System Science, Institute for Global Change Studies, Tsinghua University, Beijing 100084, China
5 Department of Earth System Science, Ministry of Education Ecological Field Station for East Asian Migratory Birds, Tsinghua University, Beijing 100084, China
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(3), 655; https://doi.org/10.3390/rs15030655
Submission received: 14 December 2022 / Revised: 18 January 2023 / Accepted: 19 January 2023 / Published: 22 January 2023
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)

Abstract

As one of the most important agricultural production types in the world, orchards have high economic, ecological, and cultural value, so accurate and timely orchard maps are in high demand for many applications. Selecting a remote-sensing (RS) data source is a critical step in efficient orchard mapping, and a single RS image rarely offers both rich temporal and rich spatial information, so a trade-off between spatial and temporal resolution must be made. Taking grape-growing regions as an example, we tested imagery at different spatial and temporal resolutions as classification inputs (including Worldview-2, Landsat-8, and Sentinel-2) and compared and assessed their orchard-mapping performance using the same random forest classifier. Our results showed that the overall accuracy improved from 0.6 to 0.8 as the spatial resolution of the input images increased from 58.86 m to 0.46 m (simulated from Worldview-2 imagery). The overall accuracy improved from 0.7 to 0.86 when the number of images used for classification in one year increased from 2 to 20 (Landsat-8) or to approximately 60 (Sentinel-2). The marginal benefit to accuracy of increasing the level of detail (LoD) of temporal features is higher than that of spatial features, indicating that temporal information has greater classification power than spatial information. The highest accuracy achieved with a very high-resolution (VHR) image can be matched using only four to five medium-resolution multi-temporal images, or even two to three growing-season images, with the same classifier. Combining spatial and temporal features from multi-source data can improve overall accuracy by 5% to 7% compared to using temporal features alone, and it can compensate for the accuracy loss caused by missing data or low-quality images in single-source input. Although multi-source data yield the best accuracy, single-source data improve computational efficiency while still achieving acceptable accuracy. This study provides practical guidance on selecting data at various spatial and temporal resolutions for the efficient mapping of other types of annual crops or orchards.

1. Introduction

Horticultural orchards are one of the most important agricultural production types in the world [1]. According to data from the Food and Agriculture Organization of the United Nations (FAO) (http://www.fao.org/faostat/zh/#data/QC accessed on 18 August 2022), the world’s orchard area and annual production show a continuous growth trend. The orchard industry is also an essential component of the agricultural industry structure [2]; it creates strong competitive advantages, offers benefits beyond those of conventional agriculture, and helps farmers achieve rapid growth in agricultural income [3]. Methods that quickly and efficiently obtain relevant information, such as orchard area and spatial distribution, are of great significance for guiding fruit production and planning planting-structure adjustments [4]. The traditional method of obtaining farmland information relies mainly on manual field surveys, which demand substantial human resources, are economically inefficient, and update slowly. Remote-sensing technology provides an advanced and convenient means to solve this problem and is widely used in orchard classification [5,6,7].
Generally, the remote-sensing (RS) data obtained from multiple sensors have varying spatial, temporal, and spectral resolutions. Thus, selecting suitable RS data is a critical step in carrying out successful orchard classification [8]. Each RS data source contributes information through its respective strengths in the classification procedure. The first category is spatial information: high-resolution images, such as QuickBird and Worldview images, provide more detailed spatial information about the surface [9]. The second category is spectral information: differences among spectral profiles help classify RS images into land cover classes, with more spectral bands providing greater discriminating ability [8,10]. The third category is temporal information: accurate and timely information about the nature and extent of the land surface and its change over time is essential for producing finer classification maps [11].
It is difficult for one type of RS image to offer both rich temporal and rich spatial information [12,13]. In general, very high-resolution (VHR) imagery achieves meter or sub-meter resolution and can contain rich spectral information. However, individual VHR optical sensors have poor temporal resolution, mainly due to the satellite revisit period, competing orders from different users on the satellite pointing, the limited life of a satellite mission, and weather conditions [13,14]; few VHR images are available in a given year. The processing of VHR data is also often complex and computationally expensive when applied to large-scale problems [15,16]. High-temporal-resolution images, such as those from Landsat-8 and Sentinel-2, generally contain richer spectral information than VHR images but have lower spatial resolution. Such data are less expensive and simpler to acquire and are suitable for studying large-scale problems [17]. Therefore, to achieve efficient orchard mapping, a trade-off among these types of data must be made during data selection.
Orchards are not easy to distinguish from other vegetation types in orchard mapping, although different spectral bands can help in vegetation classification [18]. The ground cultivation of orchards has unique characteristics, and the texture characteristics of different orchard types differ from those of other crops [19]. Most orchards are also perennial crops, so their temporal profiles carry highly useful classification information [20]. Therefore, RS images rich in spatial and temporal information are the first choice for orchard classification.
Many studies have compared and assessed the potential of images with different spatial resolutions [21,22] or different temporal resolutions [23,24] for crop classification, in order to select the spatial and temporal resolutions that achieve the best classification performance. However, these studies focus on demonstrating the great potential of multi-source RS data in crop classification and on comparing the abilities of different spatial information and different temporal information separately. Few studies have investigated, from the perspective of the substitution relationship between spatial and temporal information, which RS feature is more efficient for accurate classification as resolution changes, and under which circumstances one can replace the other at a given level of classification accuracy.
Grape, one of the four most widely grown and productive fruits globally, has demonstrated high economic, environmental, and cultural importance over the past 20 years [25]. Viticulture is also very sensitive to climate change; the increased use of water for grape planting under global warming means that efficient water use is needed now more than ever [26]. Therefore, producing timely and accurate vineyard maps is necessary for the relevant sectors.
Our study was guided by four questions: (1) How does accuracy change with the spatial resolution of the classification input data? (2) How does accuracy change with the temporal resolution of the input data? (3) Comparing the classification abilities of temporal and spatial information, can the best classification performance of one type of input (images with high spatial resolution or high temporal resolution) be reached using the other, and how? (4) How much does using multi-source input data improve classification accuracy? By answering these questions, using grape-growing regions as an example, we address the data-selection challenges and trade-offs inherent in the mapping process. Our study can provide guidance for data selection in different mapping applications and is also useful for other types of orchards or perennial crops.

2. Study Area and Data

2.1. Study Area

Vineyards are widely distributed around the world, and among all atmospheric elements, temperature is considered the most important factor driving grape growth [27]. As a result, the growth characteristics of grapes may differ among climate zones. Therefore, in order to draw conclusions with universal suitability, five grape-growing regions with different climate types were selected for the experiment (Figure 1).
The climate types and characteristics of the five study areas are shown in Table 1. Study area 1 (SA1) is situated in Caiyu Town, Daxing District, Beijing, China. SA1 covers 22.84 km2 and lies in a mid-latitude zone with a temperate semi-humid continental monsoon climate [28]. Study area 2 (SA2), in Penglai City, Shandong Province, China, spans approximately 21.11 km2. It has a pleasant temperate monsoon climate, with a lengthy, cold winter and brief, rainy spring and autumn seasons. Study area 3 (SA3), situated in the northwest of the Turpan region, Xinjiang Province, China, covers about 21.94 km2 and has a continental desert climate: hot and dry, with little precipitation overall and more falling in summer than in winter. These three study areas are situated in China’s “golden belt” of grape growing, where the agricultural landscape is highly fragmented.
Two additional study regions were chosen in the world’s primary viticultural belt. Study area 4 (SA4) is located in the Haut-Médoc region of Bordeaux, France, close to Chateau Bayatu, and has a moderate maritime climate; its area is 22.26 km2. Study area 5 (SA5) is located in Sonoma, California, in the United States. It has a typical Mediterranean climate with pleasant winters and dry summers and covers roughly 25.64 km2. Recently, the grape-growing industry in this region has been under considerable strain due to drought [29]. The viticulture parcels in these two areas are larger and more compact.

2.2. Data

2.2.1. VHR Images

Vineyard-specific texture features, which distinguish vineyards from other classes, can be observed in VHR images (Figure 2). Worldview-2 (WV-2) images (DigitalGlobe, Inc., USA) were used in this study to compare the classification accuracy of input images at different spatial resolutions. The image dataset included two types of images: a panchromatic image with a spatial resolution of 0.5 m and a multispectral image with a spatial resolution of 2 m [30,31]. The multispectral image contains four standard bands (red, green, blue, and NIR1) and four new bands (coastal blue, yellow, red edge, and NIR2). The bands and acquisition dates of the WV-2 images used for each study area are listed in Table 2.
The images at the different spatial resolutions used in this study were downloaded through commercial software (version 18.8.5.0), whose levels correspond to different image resolutions. The VHR image data (0.46 m) originated from WV-2, and the remaining images were obtained by resampling the WV-2 imagery with this software; the detailed resolutions are shown in Table 3.
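Although the coarser levels used here were produced with commercial software, an equivalent degradation can be sketched with the Google Earth Engine (GEE) Python API used in the later processing steps. This is a minimal sketch, assuming a hypothetical asset ID for the WV-2 mosaic and an illustrative resolution ladder; the levels actually used are those in Table 3.

```python
import ee

ee.Initialize()

# Hypothetical asset ID for a pan-sharpened WV-2 mosaic (0.46 m); the study
# itself produced its multi-resolution levels with commercial software.
wv2 = ee.Image('users/example/wv2_sa1_046m')

def degrade(image, target_scale):
    """Simulate a coarser sensor: mean-aggregate the fine pixels, then
    reproject the result to the target ground sample distance (metres)."""
    return (image
            .reduceResolution(reducer=ee.Reducer.mean(), bestEffort=True)
            .reproject(crs=image.projection(), scale=target_scale))

# Illustrative resolution ladder between 0.46 m and 58.86 m.
levels = [0.92, 1.84, 3.68, 7.36, 14.72, 29.43, 58.86]
degraded = {scale: degrade(wv2, scale) for scale in levels}
```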

2.2.2. Multi-Temporal-Resolution Images

To investigate the effect of the temporal resolution of the input images on classification accuracy, we employed Landsat-8 and Sentinel-2 imagery, which have higher temporal but lower spatial resolution. The Landsat-8 satellite offers a 16-day revisit period. After Sentinel-2B was launched on 7 March 2017, the revisit time over the study areas decreased from 10 to 5 days [32]. Landsat-8 provides a multispectral spatial resolution of 30 m, and Sentinel-2 achieves a spatial resolution of 10 m in most optical bands. Based on the GEE platform, we collected and processed Landsat-8 (https://developers.google.com/earth-engine/datasets/catalog/LANDSAT_LC08_C02_T1_RT) (accessed on 18 August 2022) and Sentinel-2 images (https://developers.google.com/earth-engine/datasets/catalog/COPERNICUS_S2_SR) (accessed on 18 August 2022) for the corresponding years (Table 3).
For Landsat-8 images, the “pixel_qa” band was used to remove pixels affected by cloud and cloud shadow [32]. Sentinel-2 images with a cloudy-pixel percentage of less than 20% were selected and masked using the QA60 band to reduce the effect of clouds [33]. Figure 3 shows the temporal distribution of the usable images. These images were used to build regular time series of various lengths as classification variables, allowing us to assess the effectiveness of different amounts of temporal information in mapping orchards.
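Both masking steps follow standard GEE patterns; the sketch below illustrates them with the Python API. The area of interest is a hypothetical point buffer, and note that the Collection 2 Landsat dataset linked above exposes its quality band as QA_PIXEL (pixel_qa is the Collection 1 name).

```python
import ee

ee.Initialize()

roi = ee.Geometry.Point([116.5, 39.6]).buffer(3000)  # hypothetical AOI in SA1

def mask_s2(img):
    # QA60: bit 10 = opaque clouds, bit 11 = cirrus.
    qa = img.select('QA60')
    mask = (qa.bitwiseAnd(1 << 10).eq(0)
              .And(qa.bitwiseAnd(1 << 11).eq(0)))
    return img.updateMask(mask)

def mask_l8(img):
    # QA_PIXEL (Collection 2): bit 3 = cloud, bit 4 = cloud shadow.
    qa = img.select('QA_PIXEL')
    mask = (qa.bitwiseAnd(1 << 3).eq(0)
              .And(qa.bitwiseAnd(1 << 4).eq(0)))
    return img.updateMask(mask)

s2 = (ee.ImageCollection('COPERNICUS/S2_SR')
      .filterBounds(roi)
      .filterDate('2021-01-01', '2022-01-01')
      .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 20))
      .map(mask_s2))

l8 = (ee.ImageCollection('LANDSAT/LC08/C02/T1_RT')
      .filterBounds(roi)
      .filterDate('2021-01-01', '2022-01-01')
      .map(mask_l8))
```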

2.2.3. Sample Collection

Field samples were collected in SA1, SA2, and SA3 in 2021–2022. In order to draw more convincing conclusions, SA4 and SA5, as typical grape-growing regions, were necessary additions to this study. Because these two study areas were difficult to reach in the field, their samples were collected through image interpretation and statistical data (CropScape—NASS CDL Program (gmu.edu), https://land.copernicus.eu/pan-european/corine-land-cover) (accessed on 18 August 2022). To lower the interpretation uncertainty, time series of spectral indices, Google Earth historical imagery, and street views were also used to support the interpretation [34,35].
Based on the actual situation of the five study areas, we designed a classification scheme and built a sample database. The classification system comprises seven categories: grape, woodland, cropland, grassland, impervious surface, water, and others. Samples were distributed as evenly as possible during interpretation and stored as generalized polygons. We divided the sample dataset, using 70% for training and 30% for validation. The number of samples allocated under the different planting conditions is shown in Table 4. There is some class imbalance due to planting characteristics, which the random forest (RF) classifier can mitigate [36].
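A reproducible 70/30 split of such a sample set can be sketched in the GEE Python API as follows; the asset ID and the 'class' property are illustrative assumptions.

```python
import ee

ee.Initialize()

# Hypothetical asset of interpreted sample polygons, each carrying a numeric
# 'class' property (e.g., 0 = grape, 1 = woodland, ..., 6 = others).
samples = ee.FeatureCollection('users/example/sa1_samples')

# Attach a uniform random number to each feature and split 70/30.
samples = samples.randomColumn('rand', seed=42)
training = samples.filter(ee.Filter.lt('rand', 0.7))
validation = samples.filter(ee.Filter.gte('rand', 0.7))
```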

3. Methods

The workflow consists of the following steps: (1) data preparation and pre-processing to create different spatial and temporal resolution images (see Section 3.1 and Section 3.2 for more details); (2) design of classification inputs, where both single-source data and multi-source data are used to identify the vineyard; (3) sample collection; and (4) classification and accuracy assessment (Figure 4).

3.1. Spatial Features

The object-oriented (OO) method achieves higher classification accuracy when classifying VHR images [37,38], so spatial features in this study were extracted with the OO approach. First, images were segmented using the simple non-iterative clustering (SNIC) algorithm on the GEE platform. In this algorithm, the cluster size is controlled by the seed spacing, so we found the most suitable spacing by testing different values according to the planting-structure characteristics of the crop patches. After that, texture features were calculated with the gray-level co-occurrence matrix (GLCM) algorithm [39,40], which requires an 8-bit gray-level image as input. In our code, these images were generated through a linear combination of the red, green, and blue bands of the VHR image, according to Formula (1):
$$\mathrm{Gray} = 0.299 \times \mathrm{RED} + 0.587 \times \mathrm{GREEN} + 0.114 \times \mathrm{BLUE} \tag{1}$$
Then, after appropriate standardization, the seven most suitable GLCM metrics (Table 5) were selected. Principal component analysis (PCA) was applied to these metrics, and the mean value of each object in the SNIC segmentation results was calculated for the first and second principal components (PC1, PC2), which contain about 70% of the texture information. The object-averaged PC1 and PC2 bands were finally added to the bands extracted from the SNIC segmentation process. The same operations were applied to extract spatial features from the images at each spatial resolution.
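A condensed sketch of this pipeline in the GEE Python API is given below. The asset ID, the band names, and the reflectance scaling are assumptions for illustration, and the PCA step is omitted for brevity; the GLCM metrics actually retained are those in Table 5.

```python
import ee

ee.Initialize()

wv2 = ee.Image('users/example/wv2_sa1_046m')  # hypothetical WV-2 asset

# (1) SNIC segmentation; the seed spacing ('size') was tuned per study area
# to match the typical crop-patch size.
snic = ee.Algorithms.Image.Segmentation.SNIC(
    image=wv2, size=30, compactness=1, connectivity=8)

# (2) 8-bit grayscale image from the RGB bands, following Formula (1);
# assumes band names 'red'/'green'/'blue' and reflectance scaled to [0, 1].
gray = wv2.expression(
    '0.299 * R + 0.587 * G + 0.114 * B',
    {'R': wv2.select('red'),
     'G': wv2.select('green'),
     'B': wv2.select('blue')}).multiply(255).toByte().rename('gray')

# (3) GLCM texture metrics on the grayscale image; Table 5 lists the seven
# metrics retained from the bands this call returns.
glcm = gray.glcmTexture(size=3)

# (4) Average the texture bands over each SNIC cluster to obtain object-level
# spatial features (the PCA of the metrics is not shown; note the default
# maxSize of reduceConnectedComponents caps the object size).
spatial_features = (glcm
    .addBands(snic.select('clusters'))
    .reduceConnectedComponents(reducer=ee.Reducer.mean(),
                               labelBand='clusters'))
```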

3.2. Temporal Features

To compare fairly with the mapping performance of the spatial features, we also used the OO approach when constructing the time series. A regular time series was constructed to achieve efficient mapping. Some studies have found that temporal aggregation (e.g., the maximum) is a promising method for image compositing [41]. Missing parts of the series were filled by linear interpolation in time, and overlapping parts were merged using the maximum value composite (MVC) method. We then adjusted the compositing interval to control the density of the time series built from all available observations, for the purpose of comparing mapping performance at various temporal resolutions.
Generally, the maximum and minimum values of the Enhanced Vegetation Index (EVI) are used as thresholds for the growing and non-growing seasons, and the corresponding dates are used as the start and end dates of the growing season [42]. Following Liu and Huete [43], the EVI is defined as in Formula (2), where in general G = 2.5, C1 = 6.0, C2 = 7.5, and L = 1:
$$\mathrm{EVI} = G \times \frac{\rho_{NIR} - \rho_{RED}}{\rho_{NIR} + C_1 \times \rho_{RED} - C_2 \times \rho_{BLUE} + L} \tag{2}$$
We summarized the seasonal growth of vineyards and the other vegetation classes as expressed by the EVI (Figure 5). Considering the temporal patterns of vineyards and the other classes, the growing season was determined to be April–October. The denser the time series, the better the subtle distinctions between vineyards and other areas can be identified, allowing the temporal features to contribute to recognition.
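The EVI computation and the regular compositing can be sketched with Sentinel-2 in the GEE Python API as follows; the 15-day interval is illustrative, and the linear interpolation of empty windows is noted but not implemented.

```python
import ee

ee.Initialize()

def add_evi(img):
    # Formula (2) with G = 2.5, C1 = 6.0, C2 = 7.5, L = 1; Sentinel-2 SR
    # bands B8 (NIR), B4 (red), B2 (blue), scaled from DN to reflectance.
    evi = img.expression(
        'G * (NIR - RED) / (NIR + C1 * RED - C2 * BLUE + L)',
        {'NIR': img.select('B8').divide(10000),
         'RED': img.select('B4').divide(10000),
         'BLUE': img.select('B2').divide(10000),
         'G': 2.5, 'C1': 6.0, 'C2': 7.5, 'L': 1})
    return img.addBands(evi.rename('EVI'))

s2 = (ee.ImageCollection('COPERNICUS/S2_SR')
      .filterDate('2021-01-01', '2022-01-01')
      .map(add_evi))

# One maximum-value composite (MVC) of EVI per fixed interval; shrinking the
# interval densifies the regular time series.
interval_days = 15
start = ee.Date('2021-01-01')
n = ee.Number(365).divide(interval_days).floor()

def mvc(i):
    t0 = start.advance(ee.Number(i).multiply(interval_days), 'day')
    window = s2.filterDate(t0, t0.advance(interval_days, 'day'))
    return window.select('EVI').max().set('system:time_start', t0.millis())

series = ee.ImageCollection(ee.List.sequence(0, n.subtract(1)).map(mvc))
# Empty windows (no cloud-free observation) were filled in the study by
# linear interpolation between neighbouring composites (omitted here).
```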

3.3. Classification

Limitations in data storage and availability require a trade-off between the temporal and spatial resolution of the images when mapping vineyards at large scales. Therefore, based on the exploration of images at different spatial and temporal resolutions for orchard mapping, we designed four groups of classification inputs (Figure 6).

3.4. Random Forest Classifier

To date, random forest (RF) [44] is considered one of the most widely used algorithms for land cover classification with RS data [45,46,47,48]. The reasons RF has received considerable interest over the last two decades are [49,50]: (1) it handles outliers and noisy datasets well; (2) it performs well with high-dimensional and multi-source datasets; (3) it achieves higher accuracy than other popular classifiers, such as SVM, k-NN, or MLC, in many applications [51,52]; and (4) it is not prone to overfitting, whereas deep-learning (DL) methods must be supported by large numbers of training samples, and overfitting is likely when only a few labeled samples are available for training a deep neural network [53,54].
Another factor that makes the RF classifier more popular than other machine-learning algorithms is that only two parameters (ntree and mtry) need to be optimized [55]. A meta-analysis of 349 GEE peer-reviewed articles from the past 10 years showed that RF is the most commonly used satellite image classification algorithm [41,56]. Considering all these reasons, we chose RF for the present study. The construction of the RF classifier involves two primary aspects: random selection of data and random selection of features. The basic principles of the algorithm are as follows: (1) Using sampling with replacement, subsets are drawn from the original training dataset (the statistical DN values and probabilities of the reference images) with the same number of samples as the original dataset [57]; the number of features considered at each split was left at the default value (mtry) of the GEE implementation. (2) Multiple CART decision trees are constructed from these subsets, with each node split on a random subset of variables, to form a random forest; based on recommendations from previous studies [58,59] and pretests on our data, we selected 50 trees (ntree = 50). (3) The predictions of the individual trees are aggregated by voting. (4) The resulting RF classifier classifies the data [60].
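Under these settings, training and applying the classifier in the GEE Python API can be sketched as below; the asset IDs, the 'class' property, and the 10 m sampling scale are illustrative assumptions.

```python
import ee

ee.Initialize()

# Feature image (temporal and/or spatial bands) and training polygons;
# both asset IDs are hypothetical.
stack = ee.Image('users/example/feature_stack')
training = ee.FeatureCollection('users/example/sa1_training')

# Sample the feature bands at the training polygons.
train_data = stack.sampleRegions(
    collection=training, properties=['class'], scale=10)

# Random forest with 50 trees (ntree = 50); the number of variables per
# split (mtry) is left at the GEE default.
classifier = (ee.Classifier.smileRandomForest(numberOfTrees=50)
              .train(features=train_data,
                     classProperty='class',
                     inputProperties=stack.bandNames()))

classified = stack.classify(classifier)
```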

3.5. Accuracy Assessment and Comparison

In each study area, we evaluated the classification accuracy using the same validation sample set. Overall accuracy (OA) is one of the simplest and most popular accuracy measures [61]. The OA defines the overall percentage of correctly classified categories, calculated as the number of correctly classified class pixels divided by the total number of pixels in the dataset [61]. The accuracy of individual categories can also be evaluated in terms of “Producer Accuracy” (PA) and “User Accuracy” (UA) [62]. The PA defines the percentage accuracy of each class in a map, calculated by dividing the number of correct pixels in each class by the total number of pixels of that class from the reference data [63]. The UA defines how close the resulting classification map is to ground observations, calculated by dividing the number of correctly classified pixels in each category by the total number of pixels classified in that class [63]. These indicators have been successfully used to validate classification maps generated at different geographical scales [64,65,66]. The calculation formula for each indicator is as follows (3)–(5):
$$\mathrm{OA} = \frac{TP + TN}{N} \tag{3}$$

$$\mathrm{PA} = \frac{TP}{TP + FN} \tag{4}$$

$$\mathrm{UA} = \frac{TP}{TP + FP} \tag{5}$$
where TP and FN denote samples whose true class is positive and that the model predicts as positive and negative, respectively [67]; TN and FP denote samples whose true class is negative and that the model predicts as negative and positive, respectively [67]; and N is the total number of samples. We compared the vineyard-mapping capabilities of the different inputs by comparing the OA, PA, and UA and by visually inspecting the classification maps.
To further compare the impact of changing the level of detail (LoD) of spatial or temporal features on mapping accuracy, we calculated the marginal benefit of data volume to accuracy, i.e., the contribution of each additional pixel to accuracy as the classification features change. The calculation formula is given in (6):
$$\text{Accuracy marginal benefit} = \frac{\Delta \mathrm{Accuracy}}{\Delta \mathrm{Pixels}} \tag{6}$$
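All three accuracy measures can be read off an error matrix; a sketch in the GEE Python API, continuing the earlier snippets, is given below (property names are assumptions, and the marginal-benefit helper for Formula (6) is a plain client-side function).

```python
import ee

ee.Initialize()

# 'stack', 'classifier', and 'validation' as in the previous sketches.
val_data = stack.sampleRegions(
    collection=validation, properties=['class'], scale=10)
validated = val_data.classify(classifier)

# Confusion matrix of true class ('class') vs. prediction ('classification');
# OA, PA, and UA follow Formulas (3)-(5).
cm = validated.errorMatrix('class', 'classification')
print('OA:', cm.accuracy().getInfo())
print('PA per class:', cm.producersAccuracy().getInfo())
print('UA per class:', cm.consumersAccuracy().getInfo())

def accuracy_marginal_benefit(acc_b, acc_a, pixels_b, pixels_a):
    """Formula (6): accuracy gain per additional input pixel between two
    input configurations a and b."""
    return (acc_b - acc_a) / (pixels_b - pixels_a)
```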

4. Results

4.1. Classification Accuracies of Images at Various Spatial Resolutions

As the spatial resolution of the input imagery increased, the overall classification accuracies of the wine-growing regions showed a general rising trend (Figure 7). In all five study areas, imagery with a spatial resolution of 0.46 m was classified with the best accuracy, ranging from 77.4% to 86.9%. Compared to SA4 and SA5, the other three study areas have slightly lower best-case classification accuracy because of their different planting patterns: in China, vineyard parcels are fragmented and dispersed, mixed with fields of other crop types, whereas vineyards in SA4 and SA5 are more uniform and clustered.
By evaluating the PA and UA of vineyards calculated from the confusion matrix, we found that grapes were classified with acceptable accuracies in all five study areas (Figure 8). SA4 and SA5 had the best UA and PA, which were around 90%, followed by SA1. In general, the UA is greater than the PA for each study area, indicating a lower commission error than omission error.

4.2. Classification Accuracies of Images at Various Temporal Resolutions

The OA of vineyard identification increased with the number of images in the input time series, while the uncertainty decreased (Figure 9). The red points in Figure 9 mark where the change in classification accuracy stabilizes for the whole dataset. The average stable-point accuracy across all five study areas was in the range of 87–88%, which can be achieved using a time series of, on average, 22 Sentinel-2 images or 10 Landsat-8 images per year. Sentinel-2 images reach a higher stable classification accuracy than Landsat-8 images. The highest stable accuracy was obtained in SA5 with Sentinel-2 (around 92.5%), reached using a time series of 20 Sentinel-2 images per year. Accuracy improves rapidly with increasing time series density until the stable-point level is reached; it then increases slowly and gradually saturates. Compared to the other study areas, the number of images acquired in SA4 was small and their quality poor, mainly because this area lies at the intersection of image swaths, which limits the number of images available for constructing regular time series of different lengths (Figure 9(a4)). In this study area, the highest accuracy achievable with Landsat-8 temporal features is lower than that achievable with the spatial features.
We chose the mapping result with the highest OA for each study area; the PA and UA for grape in these maps were around 80% (Figure 10). Sentinel-2 imagery yields a higher OA for vineyard recognition than Landsat-8 imagery. The best PA and UA were found in SA5, with SA3 coming in second at about 90%. Overall, the PA is higher than the UA, indicating a higher commission error than omission error when employing temporal features to identify vineyards.

4.3. Mapping Performance Comparison of Single-Source Data

From Figure 11, we can see that the accuracy marginal benefits (AMB) decrease gradually with the degree of feature variation. For the spatial features, we calculated the AMB as the resolution increased from Level 11 to Level 18. The AMB decreases as the spatial resolution increases; in other words, although the accuracy of VHR images is high, each additional pixel contributes less to the accuracy improvement. Similarly, for the temporal features, we calculated the AMB as the time series length grew from two images up to the stable-point length (red points in Figure 9). The AMB also decreases gradually as the time series lengthens. The AMB of Landsat-8 is higher than that of Sentinel-2 by about 7.2 × 10⁻⁵.
Comparing the AMB of the spatial and temporal features, we find that the AMB of spatial features lies within the range of 2.25 × 10⁻⁷ to 9.27 × 10⁻⁵ and that of temporal features within 3.03 × 10⁻⁵ to 1.53 × 10⁻⁴; the former is lower than the latter, indicating that accuracy benefits more from increasing the LoD of temporal features than from increasing that of spatial features of similar data volume. That is, the contribution to accuracy from changes in temporal features is more efficient than that from changes in spatial features. This further demonstrates that, under these experimental conditions, the classification performance of temporal features is superior to that of spatial features.
Due to the difficulty of obtaining VHR images, especially for larger study areas, we would like to explore the alternative of using lower-spatial-resolution but multi-temporal images to obtain comparable mapping performance.
Assuming the same conditions (the same classifier and the same training dataset), and based on the results in Section 4.1 and Section 4.2, we tested the minimum number of multi-temporal Landsat-8 or Sentinel-2 images in a time series needed, as classification input, to reach the best mapping accuracy obtained with WV-2 imagery. The green lines in Figure 9 represent the optimal accuracy achieved using spatial features extracted from WV-2 imagery only; these accuracies can be reached using a relatively short Landsat-8 or Sentinel-2 image time series. In most of the study areas, the shortest such time series includes fewer than five images (four images for SA1, SA2, and SA5, and four Landsat-8 or five Sentinel-2 images in SA3). The results in SA4 differ slightly: reaching the best spatial-feature accuracy there (OA = 0.869) takes nine Sentinel-2 images in the time series.
In the analysis above, we considered time series distributed over the whole season, ignoring the superior discriminating power of images acquired during the growing season. It is therefore important to explore how the temporal distribution of the images in the time series affects the complementary relationship between spatial and temporal features. The distribution of the numbers of images acquired in the growing season is shown in Table 6.
As with the findings from the full-season images, the OA increased with the number of growing-season images in the input time series, and the uncertainty decreased (Figure 12). The green lines in Figure 9 represent the optimal accuracy achieved using spatial features extracted from WV-2 imagery only; these accuracies can be reached with an even shorter Landsat-8 or Sentinel-2 growing-season series. In most of the study areas, the shortest growing-season time series includes no more than three images (three images for SA1 and SA2, two images for SA5, and three Landsat-8 or four Sentinel-2 images in SA3). Thus, shorter growing-season time series can yield results similar to those of full-season time series: generally, three growing-season images may substitute for five full-season images while still matching the best mapping performance achievable with spatial features.

4.4. Mapping Performance Comparison of Multi-Source Data

After evaluating the mapping performance with single-source inputs, we explored the potential of integrating multi-source data for vineyard mapping by simply fusing the spatial and temporal features at the feature level. The accuracy after integrating the spatial features of the VHR images was higher than when using only the temporal features of the time series (Figure 13). For Landsat-8 imagery, the accuracy improved significantly, especially when the image density was low: in SA2, the accuracy improved by about 5% with two images, and in SA3 by 6.5% with three images. For Sentinel-2 imagery, the OA improvement after fusion was not obvious. SA4 is a special case: image quality in this region is poor, and including spatial features can partly compensate for this deficiency.
Since the remaining multi-spectral bands were not used in the previous experiment, we then added the multi-spectral bands to the time series and fused them with the spatial features, using the time series length at which accuracy stabilizes under changing temporal features. We note that combining spatial features with temporal features from time series that incorporate the multi-spectral bands yields better results (Table 7). In SA5, the OA using temporal features from time series that include Landsat-8’s multi-spectral data improved by around 4–7% compared to the original temporal features alone, and for Sentinel-2 the OA improved by about 2–4%. Overall, after adding multi-spectral features, the classification accuracy of Landsat-8 images improved more markedly than that of Sentinel-2 images.
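At the feature level, this fusion amounts to concatenating the band stacks before sampling and classification; a minimal GEE Python sketch, with all asset IDs hypothetical, is:

```python
import ee

ee.Initialize()

temporal_stack = ee.Image('users/example/evi_series_stack')     # EVI series
spectral_stack = ee.Image('users/example/band_series_stack')    # multi-spectral series
spatial_stack = ee.Image('users/example/wv2_spatial_features')  # WV-2 OO features

# Fused input: temporal + multi-spectral + spatial features, classified with
# the same RF classifier as in Section 3.4.
fused = temporal_stack.addBands(spectral_stack).addBands(spatial_stack)
```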

4.5. Mapping Results

The best mapping results obtained in this study using single-source and multi-source data are shown in Figure 14. Because the OO approach was adopted, salt-and-pepper noise was significantly reduced and the boundaries of each type of parcel were more uniform. At small scales, the Sentinel-2 maps show some cartographic advantages over the Landsat-8 maps, mainly because of Sentinel-2’s higher temporal and spatial resolution, which preserves finer classification detail.
A detailed comparison between the vineyard maps and Google Earth high-resolution images is shown in Figure 15. We found a high degree of spatial consistency and good discrimination of vineyards. Vineyards are easily confused with cropland, woodland, or roads in SA1, SA2, and SA3, the study areas where cultivation is dispersed. For example, in SA2, roads were misclassified as vineyards using Landsat-8 imagery. This indicates that even where the 30 m resolution of the Landsat images suffices for detailed mapping, mixed pixels remain highly probable, allowing vineyards to be confused with the roadside trees on both sides of a road.
Across the mapping results of the five study areas, the selected single-source data classify less effectively than the multi-source data, and performance improves as classification features are added (Table 8). However, adding features can sometimes lead to incorrect classification instead: in SA1, for example, sheds were incorrectly classified as vineyard after the spatial features were added, probably because the two share similar texture features and the addition confuses the classifier. Mapping should therefore be analyzed according to the specific conditions of the study area. Overall, despite limitations and potential errors, the distinction between vineyards and other classes in this study achieved relatively satisfactory results, whether features from single-source or multi-source data were used as input.

5. Discussion

This work investigates some data selection strategies for the efficient mapping of orchards by designing four different classification inputs to compare and evaluate the classification accuracy at different spatial and temporal resolutions, using grape-growing regions as examples.
Considering both single-source and multi-source data, we first compared how accuracy changes with spatial and with temporal resolution. We found that accuracy increased with spatial resolution, which is basically consistent with previous studies [21,22,68]. However, the accuracy increased with fluctuations, which may be related to the SNIC segmentation algorithm we used: the optimal segmentation scale for classification depends on the real patchiness of the vegetation and land cover types in the study area [22,69]. To control variables, we did not change the segmentation scale as the spatial resolution changed, which explains the fluctuations to some extent; the upward trend nevertheless remains.
We found that accuracy increases with the length of the time series, which is generally consistent with previous studies [23,24]. The UA and PA results (Figure 9) reveal that the classification results carry some errors and uncertainties, generally for two reasons: image error and sample error. First, constructing a regular time series requires processing missing and overlapping images, which increases the uncertainty of the classification results. In addition, although the resolution of the Sentinel-2 and Landsat-8 images is sufficient for detailed mapping, mixed pixels remain probable; pixels containing multiple land cover types are consequently difficult to classify because of their similarity. Furthermore, owing to the limited number of samples from the field campaign, the training samples may carry some uncertainty.
We then found, by comparing AMB metrics, that the classification ability of temporal features is stronger than that of spatial features, meaning the accuracy of the highest-spatial-resolution images can be matched using images from shorter time series. We consider the main reason to be that texture features may not be fully exploited by simple machine-learning algorithms, whereas temporal features better represent perennial crops and are useful for identifying seasonal cropland [70]. The accuracy obtained using four or five images can replace that obtained from the highest-spatial-resolution images. The results in SA4, however, differ slightly: nine Sentinel-2 images are needed to achieve this substitution. We see two reasons for this: (1) under the local planting structure, the spatial features are more recognizable there, and (2) the Sentinel-2 images for that period had high cloud coverage and poor quality (Figure 3). In later applications, specific problems should likewise be analyzed in light of the regional planting structure and the uneven distribution of regional image quality.
Focusing on growing-season images, we found that the substitution relationship could be achieved using only two or three growing-season images. This finding still has some limitations and uncertainties: (1) the determination of the growing season is rather generalized; (2) the interval between growing-season images is fixed; and (3) the method of constructing growing-season time series of different lengths is simple. The results can nevertheless serve as a useful reference. In subsequent research and applications, exhaustive experiments could be conducted by delineating more accurate growing seasons and by combining permutations of growing-season images [24].
Finally, we investigated from the perspective of multi-source data and found that the spatial, temporal, and spectral features can be fully utilized, and better classification accuracy can be obtained if multi-source data are selected. Although the overall gains were modest, they were relatively significant across all metrics in this study, suggesting that multi-feature inputs improved classification results compared to more traditional inputs of single-source data. This agrees with the findings of other studies [71,72].
Future research perspectives exist for this study. On the one hand, considering that the northern hemisphere holds the larger share of global viticultural area [73] and a typical climatic distribution, our study areas are predominantly located in the northern hemisphere, so the findings apply at a northern-hemisphere scale. A simple preliminary experiment we conducted in Chile, South America, showed some similarity to the findings of this study; in the future, we will further explore whether the conclusions apply at a global scale by considering the southern hemisphere’s viticultural area. On the other hand, the conclusions are premised on a universally adaptable classification algorithm (RF) and simple feature-processing methods, which mainly establish a lower bound on accuracy. Algorithms such as deep learning could later be used to improve the utilization of individual features and take maximum advantage of remote-sensing images.

6. Conclusions

Orchards are an important part of the agricultural industry, and accurate orchard mapping is of great significance for monitoring and guiding agricultural activities. Data selection is one of the most important steps in mapping: limited by processing efficiency and data availability, selecting data inputs involves a trade-off between spatial and temporal resolution. Thus, in this study, taking five grape-growing areas as examples, we compared and evaluated the classification accuracies of data sources at different spatial and temporal resolutions using the same classifier. The results indicate that: (1) Accuracy increases with increasing spatial resolution, with the highest accuracy range of 77.4–86.9% occurring at the highest resolution (0.46 m). (2) An increase in temporal resolution also improves mapping performance; before reaching a stable accuracy, accuracy rises rapidly with increasing time series density, after which it saturates. (3) For single-source data selection, the marginal benefit to accuracy of increasing the LoD of temporal features is higher than that of spatial features; under our experimental conditions with the RF classifier, temporal features derived from Landsat-8 or Sentinel-2 image time series are more suitable for vineyard mapping than the spatial features of WV-2. The best accuracy achieved with spatial features extracted from WV-2 can be substituted by four or five multi-temporal Landsat-8 or Sentinel-2 images, and the time series can be shorter still, two to three images, when they are acquired in the growing season. (4) For multi-source data selection, fusing the spatial features of VHR images with temporal features improved accuracy over single-source data (by about 5%). When time series of all spectral bands are included and fused with spatial features, the resulting model achieves the highest accuracy, improving OA by 3–7% compared to models using only temporal features.
This study provides guidance for data selection in different mapping application contexts. For the efficient mapping of large areas, high-temporal-resolution imagery can be considered, and acceptable accuracy can be achieved with four to five full-season images or two to three growing-season images; for finer thematic mapping of small areas, multi-source data can be considered, although the improvement in accuracy is limited. The framework for evaluating the contribution of different input data to accuracy can serve as a reference for mapping other types of annual crops or orchards.

Author Contributions

Conceptualization, Z.Y., Y.Z., L.Y., Z.L., X.Z., and S.L.; Data curation, Z.Y., H.W., H.L., X.Y., and T.R.; Funding acquisition, Y.Z.; Investigation, Z.Y., H.W., H.L., X.Y., and T.R.; Methodology, Z.Y., Y.Z., L.Y., Z.L., X.Z., and S.L.; Project administration, Y.Z.; Supervision, Y.Z. and L.Y.; Validation, Z.Y.; Visualization, Z.Y.; Writing—original draft, Z.Y.; Writing—review and editing, Y.Z. and L.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under grant no. 42001352.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

For privacy reasons, the VHR images (spatial resolution up to 0.46 m) used in this study cannot be made publicly available.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Kozhoridze, G.; Orlovsky, N.; Orlovsky, L.; Blumberg, D.G.; Golan-Goldhirsh, A. Classification-based mapping of trees in commercial orchards and natural forests. Int. J. Remote Sens. 2018, 39, 8784–8797.
2. Cen, Y.; Li, L.; Guo, L.; Li, C.; Jiang, G. Organic management enhances both ecological and economic profitability of apple orchard: A case study in Shandong Peninsula. Sci. Hortic. 2020, 265, 109201.
3. Chen, Z.; Sarkar, A.; Hasan, A.K.; Li, X.; Xia, X. Evaluation of farmers’ ecological cognition in responses to specialty orchard fruit planting behavior: Evidence in Shaanxi and Ningxia, China. Agriculture 2021, 11, 1056.
4. Yang, Y.; Huang, Q.; Wu, W.; Luo, J.; Gao, L.; Dong, W.; Wu, T.; Hu, X. Geo-parcel based crop identification by integrating high spatial-temporal resolution imagery from multi-source satellite data. Remote Sens. 2017, 9, 1298.
5. Ashourloo, D.; Shahrabi, H.S.; Azadbakht, M.; Aghighi, H.; Nematollahi, H.; Alimohammadi, A.; Matkan, A.A. Automatic canola mapping using time series of Sentinel-2 images. ISPRS J. Photogramm. Remote Sens. 2019, 156, 63–76.
6. Johansen, K.; Phinn, S.; Witte, C.; Philip, S.; Newton, L. Mapping banana plantations from object-oriented classification of SPOT-5 imagery. Photogramm. Eng. Remote Sens. 2009, 75, 1069–1081.
7. Xu, H.; Qi, S.; Gong, P.; Liu, C.; Wang, J. Long-term monitoring of citrus orchard dynamics using time-series Landsat data: A case study in southern China. Int. J. Remote Sens. 2018, 39, 8271–8292.
8. Chen, B.; Huang, B.; Xu, B. Multi-source remotely sensed data fusion for improving land cover classification. ISPRS J. Photogramm. Remote Sens. 2017, 124, 27–39.
9. Duarte, L.; Silva, P.; Teodoro, A.C. Development of a QGIS plugin to obtain parameters and elements of plantation trees and vineyards with aerial photographs. ISPRS Int. J. Geo-Inf. 2018, 7, 109.
10. Maschler, J.; Atzberger, C.; Immitzer, M. Individual tree crown segmentation and classification of 13 tree species using airborne hyperspectral data. Remote Sens. 2018, 10, 1218.
11. Duarte, L.; Teodoro, A.C.; Monteiro, A.T.; Cunha, M.; Gonçalves, H. QPhenoMetrics: An open source software application to assess vegetation phenology metrics. Comput. Electron. Agric. 2018, 148, 82–94.
12. Dalponte, M.; Marzini, S.; Solano-Correa, Y.T.; Tonon, G.; Vescovo, L.; Gianelle, D. Mapping forest windthrows using high spatial resolution multispectral satellite images. Int. J. Appl. Earth Obs. Geoinf. 2020, 93, 102206.
13. Solano-Correa, Y.T.; Bovolo, F.; Bruzzone, L. Generation of homogeneous VHR time series by nonparametric regression of multisensor bitemporal images. IEEE Trans. Geosci. Remote Sens. 2019, 57, 7579–7593.
14. Solano-Correa, Y.T.; Bovolo, F.; Bruzzone, L. An approach for unsupervised change detection in multitemporal VHR images acquired by different multispectral sensors. Remote Sens. 2018, 10, 533.
15. Pehani, P.; Čotar, K.; Marsetič, A.; Zaletelj, J.; Oštir, K. Automatic geometric processing for very high resolution optical satellite data based on vector roads and orthophotos. Remote Sens. 2016, 8, 343.
16. Vahidi, H.; Klinkenberg, B.; Johnson, B.A.; Moskal, L.M.; Yan, W. Mapping the individual trees in urban orchards by incorporating Volunteered Geographic Information and very high resolution optical remotely sensed data: A template matching-based approach. Remote Sens. 2018, 10, 1134.
17. Higginbottom, T.P.; Symeonakis, E.; Meyer, H.; van der Linden, S. Mapping fractional woody cover in semi-arid savannahs using multi-seasonal composites from Landsat data. ISPRS J. Photogramm. Remote Sens. 2018, 139, 88–102.
18. Peña, M.; Liao, R.; Brenning, A. Using spectrotemporal indices to improve the fruit-tree crop classification accuracy. ISPRS J. Photogramm. Remote Sens. 2017, 128, 158–169.
19. Sarron, J.; Malézieux, É.; Sané, C.A.B.; Faye, É. Mango yield mapping at the orchard scale based on tree structure and land cover assessed by UAV. Remote Sens. 2018, 10, 1900.
20. Brinkhoff, J.; Vardanega, J.; Robson, A.J. Land cover classification of nine perennial crops using Sentinel-1 and -2 data. Remote Sens. 2019, 12, 96.
21. Fisher, J.R.; Acosta, E.A.; Dennedy-Frank, P.J.; Kroeger, T.; Boucher, T.M. Impact of satellite imagery spatial resolution on land use classification accuracy and modeled water quality. Remote Sens. Ecol. Conserv. 2018, 4, 137–149.
22. Räsänen, A.; Virtanen, T. Data and resolution requirements in mapping vegetation in spatially heterogeneous landscapes. Remote Sens. Environ. 2019, 230, 111207.
23. Hao, P.; Zhan, Y.; Wang, L.; Niu, Z.; Shakir, M. Feature selection of time series MODIS data for early crop classification using random forest: A case study in Kansas, USA. Remote Sens. 2015, 7, 5347–5369.
24. Xu, Y.; Yu, L.; Peng, D.; Cai, X.; Cheng, Y.; Zhao, J.; Zhao, Y.; Feng, D.; Hackman, K.; Huang, X. Exploring the temporal density of Landsat observations for cropland mapping: Experiments from Egypt, Ethiopia, and South Africa. Int. J. Remote Sens. 2018, 39, 7328–7349.
25. Mirás-Avalos, J.M.; Araujo, E.S. Optimization of vineyard water management: Challenges, strategies, and perspectives. Water 2021, 13, 746.
26. Carrasco-Benavides, M.; Ortega-Farías, S.; Gil, P.M.; Knopp, D.; Morales-Salinas, L.; Lagos, L.O.; de la Fuente, D.; López-Olivari, R.; Fuentes, S. Assessment of the vineyard water footprint by using ancillary data and EEFlux satellite images. Examples in the Chilean central zone. Sci. Total Environ. 2022, 811, 152452.
27. Fraga, H.; Pinto, J.G.; Santos, J.A. Climate change projections for chilling and heat forcing conditions in European vineyards and olive orchards: A multi-model assessment. Clim. Chang. 2019, 152, 179–193.
28. Beck, H.E.; Zimmermann, N.E.; McVicar, T.R.; Vergopolan, N.; Berg, A.; Wood, E.F. Present and future Köppen-Geiger climate classification maps at 1-km resolution. Sci. Data 2018, 5, 1–12.
29. Medellín-Azuara, J.; Escriva-Bou, A.; Abatzoglou, J.A.; Viers, J.H.; Cole, S.A.; Rodríguez-Flores, J.M.; Sumner, D.A. Economic Impacts of the 2021 Drought on California Agriculture; University of California: Merced, CA, USA, 2022.
30. Chuang, Y.-C.M.; Shiu, Y.-S. A comparative analysis of machine learning with WorldView-2 pan-sharpened imagery for tea crop mapping. Sensors 2016, 16, 594.
31. Madonsela, S.; Cho, M.A.; Mathieu, R.; Mutanga, O.; Ramoelo, A.; Kaszta, Ż.; Van De Kerchove, R.; Wolff, E. Multi-phenology WorldView-2 imagery improves remote sensing of savannah tree species. Int. J. Appl. Earth Obs. Geoinf. 2017, 58, 65–73.
32. Li, J.; Chen, B. Global revisit interval analysis of Landsat-8-9 and Sentinel-2A-2B data for terrestrial monitoring. Sensors 2020, 20, 6631.
33. Mahdianpari, M.; Salehi, B.; Mohammadimanesh, F.; Homayouni, S.; Gill, E. The first wetland inventory map of Newfoundland at a spatial resolution of 10 m using Sentinel-1 and Sentinel-2 data on the Google Earth Engine cloud computing platform. Remote Sens. 2018, 11, 43.
34. Tarko, A.; Tsendbazar, N.-E.; De Bruin, S.; Bregt, A.K. Producing consistent visually interpreted land cover reference data: Learning from feedback. Int. J. Digit. Earth 2021, 14, 52–70.
35. Fan, L.; Yang, J.; Sun, X.; Zhao, F.; Liang, S.; Duan, D.; Chen, H.; Xia, L.; Sun, J.; Yang, P. The effects of Landsat image acquisition date on winter wheat classification in the North China Plain. ISPRS J. Photogramm. Remote Sens. 2022, 187, 1–13.
36. More, A.; Rana, D.P. Review of random forest classification techniques to resolve data imbalance. In Proceedings of the 2017 1st International Conference on Intelligent Systems and Information Management (ICISIM), Aurangabad, India, 5–6 October 2017; pp. 72–78.
37. Tassi, A.; Gigante, D.; Modica, G.; Di Martino, L.; Vizzari, M. Pixel- vs. object-based Landsat 8 data classification in Google Earth Engine using random forest: The case study of Maiella National Park. Remote Sens. 2021, 13, 2299.
38. Tassi, A.; Vizzari, M. Object-oriented LULC classification in Google Earth Engine combining SNIC, GLCM, and machine learning algorithms. Remote Sens. 2020, 12, 3776.
39. Haralick, R.M.; Shanmugam, K.; Dinstein, I.H. Textural features for image classification. IEEE Trans. Syst. Man Cybern. 1973, 3, 610–621.
40. Haralick, R.M. Statistical and structural approaches to texture. Proc. IEEE 1979, 67, 786–804.
41. Phan, T.N.; Kuch, V.; Lehnert, L.W. Land cover classification using Google Earth Engine and random forest classifier—The role of image composition. Remote Sens. 2020, 12, 2411.
42. White, M.A.; Thornton, P.E.; Running, S.W. A continental phenology model for monitoring vegetation responses to interannual climatic variability. Global Biogeochem. Cycles 1997, 11, 217–234.
43. Liu, H.Q.; Huete, A. A feedback based modification of the NDVI to minimize canopy background and atmospheric noise. IEEE Trans. Geosci. Remote Sens. 1995, 33, 457–465.
44. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32.
45. Amani, M.; Mahdavi, S.; Afshar, M.; Brisco, B.; Huang, W.; Mohammad Javad Mirzadeh, S.; White, L.; Banks, S.; Montgomery, J.; Hopkinson, C. Canadian wetland inventory using Google Earth Engine: The first map and preliminary results. Remote Sens. 2019, 11, 842.
46. Huang, C.; Zhang, C.; He, Y.; Liu, Q.; Li, H.; Su, F.; Liu, G.; Bridhikitti, A. Land cover mapping in cloud-prone tropical areas using Sentinel-2 data: Integrating spectral features with NDVI temporal dynamics. Remote Sens. 2020, 12, 1163.
47. Teluguntla, P.; Thenkabail, P.S.; Oliphant, A.; Xiong, J.; Gumma, M.K.; Congalton, R.G.; Yadav, K.; Huete, A. A 30-m Landsat-derived cropland extent product of Australia and China using random forest machine learning algorithm on Google Earth Engine cloud computing platform. ISPRS J. Photogramm. Remote Sens. 2018, 144, 325–340.
48. Zhang, T.; Su, J.; Xu, Z.; Luo, Y.; Li, J. Sentinel-2 satellite imagery for urban land cover classification by optimized random forest classifier. Appl. Sci. 2021, 11, 543.
49. Mahdianpari, M.; Salehi, B.; Mohammadimanesh, F.; Motagh, M. Random forest wetland classification using ALOS-2 L-band, RADARSAT-2 C-band, and TerraSAR-X imagery. ISPRS J. Photogramm. Remote Sens. 2017, 130, 13–31.
50. Xia, J.; Falco, N.; Benediktsson, J.A.; Du, P.; Chanussot, J. Hyperspectral image classification with rotation random forest via KPCA. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 1601–1609.
51. Abdel-Rahman, E.M.; Mutanga, O.; Adam, E.; Ismail, R. Detecting Sirex noctilio grey-attacked and lightning-struck pine trees using airborne hyperspectral data, random forest and support vector machines classifiers. ISPRS J. Photogramm. Remote Sens. 2014, 88, 48–59.
52. Rodriguez-Galiano, V.F.; Chica-Rivas, M. Evaluation of different machine learning methods for land cover mapping of a Mediterranean area using multi-seasonal Landsat images and Digital Terrain Models. Int. J. Digit. Earth 2014, 7, 492–509.
53. Nitze, I.; Schulthess, U.; Asche, H. Comparison of machine learning algorithms random forest, artificial neural network and support vector machine to maximum likelihood for supervised crop type classification. In Proceedings of the 4th GEOBIA, Rio de Janeiro, Brazil, 7–9 May 2012; Volume 79, p. 3540.
54. Song, A.; Choi, J. Fully convolutional networks with multiscale 3D filters and transfer learning for change detection in high spatial resolution satellite images. Remote Sens. 2020, 12, 799.
55. Maxwell, A.E.; Warner, T.A.; Fang, F. Implementation of machine-learning classification in remote sensing: An applied review. Int. J. Remote Sens. 2018, 39, 2784–2817.
56. Tamiminia, H.; Salehi, B.; Mahdianpari, M.; Quackenbush, L.; Adeli, S.; Brisco, B. Google Earth Engine for geo-big data applications: A meta-analysis and systematic review. ISPRS J. Photogramm. Remote Sens. 2020, 164, 152–170.
57. Sadras, V.; Bongiovanni, R. Use of Lorenz curves and Gini coefficients to assess yield inequality within paddocks. Field Crops Res. 2004, 90, 303–310.
58. Cánovas-García, F.; Alonso-Sarría, F.; Gomariz-Castillo, F.; Oñate-Valdivieso, F. Modification of the random forest algorithm to avoid statistical dependence problems when classifying remote sensing imagery. Comput. Geosci. 2017, 103, 1–11.
59. Ghimire, B.; Rogan, J.; Galiano, V.R.; Panday, P.; Neeti, N. An evaluation of bagging, boosting, and random forests for land-cover classification in Cape Cod, Massachusetts, USA. GISci. Remote Sens. 2012, 49, 623–643.
60. Yan, X.; Li, J.; Yang, D.; Li, J.; Ma, T.; Su, Y.; Shao, J.; Zhang, R. A random forest algorithm for Landsat image chromatic aberration restoration based on GEE cloud platform—A case study of Yucatán Peninsula, Mexico. Remote Sens. 2022, 14, 5154.
61. Congalton, R.G. A review of assessing the accuracy of classifications of remotely sensed data. Remote Sens. Environ. 1991, 37, 35–46.
62. Story, M.; Congalton, R.G. Accuracy assessment: A user’s perspective. Photogramm. Eng. Remote Sens. 1986, 52, 397–399.
63. Yuh, Y.G.; Tracz, W.; Matthews, H.D.; Turner, S.E. Application of machine learning approaches for land cover monitoring in northern Cameroon. Ecol. Inform. 2022, 74, 101955.
64. Liu, L.; Zhang, X.; Gao, Y.; Chen, X.; Shuai, X.; Mi, J. Finer-resolution mapping of global land cover: Recent developments, consistency analysis, and prospects. J. Remote Sens. 2021, 2021, 5289697.
65. Sari, I.L.; Weston, C.J.; Newnham, G.J.; Volkova, L. Assessing accuracy of land cover change maps derived from automated digital processing and visual interpretation in tropical forests in Indonesia. Remote Sens. 2021, 13, 1446.
66. Yang, Y.; Yang, D.; Wang, X.; Zhang, Z.; Nawaz, Z. Testing accuracy of land cover classification algorithms in the Qilian Mountains based on GEE cloud platform. Remote Sens. 2021, 13, 5064.
67. Zhang, L.; Liu, Z.; Liu, D.; Xiong, Q.; Yang, N.; Ren, T.; Zhang, C.; Zhang, X.; Li, S. Crop mapping based on historical samples and new training samples generation in Heilongjiang Province, China. Sustainability 2019, 11, 5052.
  68. Wang, L.; Liu, J.; Gao, J.; Yang, L.; Yang, F.; Wang, X. Relationship between accuracy of winter wheat area remote sensing identification and spatial resolution. Trans. Chin. Soc. Agric. Eng. 2016, 32, 152–160. [Google Scholar] [CrossRef]
  69. Cao, J.; Leng, W.; Liu, K.; Liu, L.; He, Z.; Zhu, Y. Object-based mangrove species classification using unmanned aerial vehicle hyperspectral images and digital surface models. Remote Sens. 2018, 10, 89. [Google Scholar] [CrossRef] [Green Version]
  70. Dong, J.; Xiao, X.; Kou, W.; Qin, Y.; Zhang, G.; Li, L.; Jin, C.; Zhou, Y.; Wang, J.; Biradar, C. Tracking the dynamics of paddy rice planting area in 1986–2010 through time series Landsat images and phenology-based algorithms. Remote Sens. Environ. 2015, 160, 99–113. [Google Scholar] [CrossRef]
  71. Wang, L.; Wang, J.; Zhang, X.; Wang, L.; Qin, F. Deep segmentation and classification of complex crops using multi-feature satellite imagery. Comput. Electron. Agric. 2022, 200, 107249. [Google Scholar] [CrossRef]
  72. Anastasiou, E.; Balafoutis, A.; Darra, N.; Psiroukis, V.; Biniari, A.; Xanthopoulos, G.; Fountas, S. Satellite and proximal sensing to estimate the yield and quality of table grapes. Agriculture 2018, 8, 94. [Google Scholar] [CrossRef]
  73. Tuck, B.; Gartner, W.; Appiah, G. Vineyards and Grapes of the North; University of Minnesota: Minneapolis, MI, USA, 2016. [Google Scholar]
Figure 1. The five grape-growing regions, distributed in different climatic zones around the world: SA1: Caiyu Town, Daxing District, Beijing, China; SA2: Penglai City, Shandong Province, China; SA3: Turpan, Xinjiang, China; SA4: Chateau Bayatu, France; SA5: Sonoma, CA, USA.
Figure 2. VHR images of (a) vineyards and of other land-cover types: (b) woodland, (c) cropland, (d) grassland, (e) impervious surface, and (f) water.
Figure 3. Temporal distribution of all available Landsat-8 (a) and Sentinel-2 (b) images with scene cloud coverage less than 20%.
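As an illustration of the scene-level cloud screening behind Figure 3, the following minimal sketch (Google Earth Engine Python API) filters Landsat-8 and Sentinel-2 collections to scenes with less than 20% cloud cover. The collection IDs are real GEE datasets, but the SA1-like point and the one-year window are assumptions for illustration, not the study's published code.

```python
# Hedged sketch: gather Landsat-8 and Sentinel-2 scenes with <20% scene-level
# cloud cover over a study area, as summarized in Figure 3.
import ee

ee.Initialize()

aoi = ee.Geometry.Point([116.6, 39.5])  # hypothetical point near SA1

landsat8 = (ee.ImageCollection('LANDSAT/LC08/C02/T1_L2')
            .filterBounds(aoi)
            .filterDate('2020-01-01', '2021-01-01')
            .filter(ee.Filter.lt('CLOUD_COVER', 20)))  # scene cloud % metadata

sentinel2 = (ee.ImageCollection('COPERNICUS/S2_SR_HARMONIZED')
             .filterBounds(aoi)
             .filterDate('2020-01-01', '2021-01-01')
             .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 20)))

# Number of usable scenes per sensor, i.e., the counts plotted in Figure 3.
print(landsat8.size().getInfo(), sentinel2.size().getInfo())
```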
Figure 4. Workflow for orchard mapping of five grape-growing regions with different spatial and temporal resolutions.
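The classification step of the Figure 4 workflow uses a random forest classifier. A minimal sketch of how this is commonly run in Google Earth Engine follows; the asset names, the 'landcover' property, the 10 m sampling scale, and the tree count of 100 are illustrative assumptions, not the authors' settings.

```python
# Hedged sketch: random forest classification of a feature stack in GEE.
import ee

ee.Initialize()

inputs = ee.Image('users/example/feature_stack')        # hypothetical asset
samples = ee.FeatureCollection('users/example/samples')  # labeled polygons

# Extract per-pixel training samples from the labeled polygons.
training = inputs.sampleRegions(collection=samples,
                                properties=['landcover'],
                                scale=10)

# Train GEE's random forest implementation; 100 trees is an illustrative choice.
rf = ee.Classifier.smileRandomForest(numberOfTrees=100) \
       .train(features=training,
              classProperty='landcover',
              inputProperties=inputs.bandNames())

classified = inputs.classify(rf)  # per-pixel land-cover map
```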
Figure 5. Temporal patterns of EVI for vineyard, woodland, cropland, and grassland. Dots are EVI observations from individual images; lines are the fitted temporal patterns. The interval between the red and blue vertical lines marks the vineyard growing season.
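For reference, EVI (Liu and Huete, 1995) is computed as EVI = 2.5 × (NIR − Red)/(NIR + 6 × Red − 7.5 × Blue + 1). The sketch below pairs that formula with a double-logistic fit, one common way to obtain smooth annual profiles like those in Figure 5; the observation values and the choice of fitting function are illustrative assumptions, not the study's data or code.

```python
# Hedged sketch: compute EVI and fit a double-logistic phenology curve.
import numpy as np
from scipy.optimize import curve_fit

def evi(nir, red, blue):
    """Enhanced Vegetation Index: 2.5*(NIR - Red)/(NIR + 6*Red - 7.5*Blue + 1)."""
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

def double_logistic(t, base, amp, sos, r1, eos, r2):
    """Rise at start of season (sos) and fall at end of season (eos)."""
    return base + amp * (1.0 / (1.0 + np.exp(-r1 * (t - sos)))
                         - 1.0 / (1.0 + np.exp(-r2 * (t - eos))))

print(evi(nir=0.45, red=0.08, blue=0.04))  # e.g., a mid-season canopy, ~0.57

# Made-up one-year EVI profile (day of year vs. EVI) for illustration only.
doy = np.array([30, 60, 100, 130, 160, 190, 220, 250, 280, 310, 340])
obs = np.array([0.10, 0.12, 0.18, 0.35, 0.52, 0.60, 0.62, 0.55, 0.40, 0.22, 0.12])

params, _ = curve_fit(double_logistic, doy, obs,
                      p0=[0.1, 0.5, 120, 0.1, 290, 0.1], maxfev=10000)
print(params)  # fitted base, amplitude, SOS, rise rate, EOS, fall rate
```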
Figure 6. Experimental design: four groups of classification inputs.
Figure 7. The relationship between the spatial resolution of input imagery and the OA for the five study areas.
Figure 8. User’s and producer’s accuracy (UA and PA) for grape identification using spatial features.
Figure 9. The relationship between the number of Landsat-8 (a1–a5) or Sentinel-2 (b1–b5) images as inputs and the OA for the five study areas.
Figure 10. UA and PA for vineyards on the maps with the best OA in each study area: (a) maps produced from Landsat-8 images; (b) maps produced from Sentinel-2 images.
Figure 11. Marginal benefits of accuracy for the five study areas (a1–a3,a5,b1–b5). Panels (a1–a3,a5) compare the marginal benefits of spatial features with those of Landsat-8 temporal features; SA4 has no Landsat-8 panel because too few Landsat-8 images met the requirements during its experimental period. Panels (b1–b5) compare the marginal benefits of spatial features with those of Sentinel-2 temporal features.
Figure 12. The relationship between the number of growing season images and OA in five study areas, where (a,c,e,g,i) are the results obtained using Landsat-8 images, and (b,d,f,h,j) are the results obtained using Sentinel-2 images.
Figure 13. The increase in OA after fusing spatial features with Landsat-8 (a,c,e,g,i) and Sentinel-2 (b,d,f,h,j) temporal features for the five study areas.
Figure 14. Comparison between classification maps from different features and Google Earth high-resolution images; black boxes mark examples of correct classification, and red boxes mark examples of incorrect classification.
Figure 15. The VHR image and the classification maps of the five study areas using different classification inputs. First dashed box: spatial features; second dashed box: temporal features; third dashed box: spatial + temporal features; fourth dashed box: spatial + temporal + spectral features.
Table 1. Specific information on the five study areas.
Study Area | Climate Type | Max_Temp | Min_Temp | Aver_Prec
SA1 | Temperate sub-humid continental monsoon | 17 °C | 7 °C | 556 mm
SA2 | Warm temperate monsoon | 28.8 °C | −2.3 °C | 664 mm
SA3 | Continental desert | 49.6 °C | −28.7 °C | 118 mm
SA4 | Temperate marine | 17 °C | 10 °C | 656 mm
SA5 | Mediterranean | 13.4 °C | 5 °C | 510 mm
Note: Max_temp represents the maximum temperature; Min_temp represents the minimum temperature; Aver_prec represents average annual precipitation.
Table 2. Acquisition dates and bands of the images used in this study.
Study Area | VHR Acquisition Date | Landsat-8/Sentinel-2 Acquisition Period
SA1 | 14 August 2020 | 2020–2021
SA2 | 6 June 2021 | 2021–2022
SA3 | 15 September 2021 | 2021–2022
SA4 | 19 August 2018 | 2018–2019
SA5 | 22 October 2020 | 2020–2021
Note: Selected VHR bands (all study areas): blue, green, red. Selected Landsat-8/Sentinel-2 bands: blue, green, red, NIR, SWIR1, SWIR2, and red edge 1–3 (the red-edge bands are available from Sentinel-2 only).
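For readers who want to map the band names in Table 2 onto sensor band identifiers, the sketch below selects and renames the standard Landsat-8 Collection 2 surface-reflectance and Sentinel-2 L2A band IDs. The correspondence is standard for these two sensors, but the selection code itself is an illustration, not the authors' ingestion code.

```python
# Hedged sketch: Table 2's bands by their standard Landsat-8/Sentinel-2 IDs.
# Red-edge bands exist only on Sentinel-2.
import ee

ee.Initialize()

l8_bands = {'blue': 'SR_B2', 'green': 'SR_B3', 'red': 'SR_B4',
            'nir': 'SR_B5', 'swir1': 'SR_B6', 'swir2': 'SR_B7'}
s2_bands = {'blue': 'B2', 'green': 'B3', 'red': 'B4', 'nir': 'B8',
            'swir1': 'B11', 'swir2': 'B12',
            'red_edge1': 'B5', 'red_edge2': 'B6', 'red_edge3': 'B7'}

# select(old_names, new_names) both subsets and renames the bands.
l8 = ee.ImageCollection('LANDSAT/LC08/C02/T1_L2') \
       .select(list(l8_bands.values()), list(l8_bands.keys()))
s2 = ee.ImageCollection('COPERNICUS/S2_SR_HARMONIZED') \
       .select(list(s2_bands.values()), list(s2_bands.keys()))
```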
Table 3. Different levels of spatial resolution of the images used for classification.
Level | Resolution (m/pix)
11 | 58.86
12 | 29.41
13 | 14.71
14 | 7.36
15 | 3.68
16 | 1.84
17 | 0.92
18 | 0.46
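The levels in Table 3 follow a factor-of-two pyramid: each level roughly halves the pixel size from 58.86 m down to the native 0.46 m of the Worldview-2 image, so resolution(level) ≈ 0.46 × 2^(18 − level) m/pix. A quick arithmetic check (small deviations from the table, e.g., 58.88 vs. 58.86, are rounding effects):

```python
# Reproduce Table 3's resolution pyramid from the native 0.46 m pixel.
for level in range(11, 19):
    print(level, round(0.46 * 2 ** (18 - level), 2))
# 11 58.88, 12 29.44, 13 14.72, 14 7.36, 15 3.68, 16 1.84, 17 0.92, 18 0.46
```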
Table 4. Sample size of grapes and other land cover types used in this study.
Label | Type | SA1 | SA2 | SA3 | SA4 | SA5
0 | Grape | 41 | 66 | 47 | 51 | 47
1 | Woodland | 28 | 27 | 28 | 24 | 25
2 | Cropland | 28 | 25 | 23 | 22 | 22
3 | Grassland | 18 | 20 | 17 | 16 | 18
4 | Impervious surface | 30 | 31 | 33 | 30 | 33
5 | Water | 15 | / | / | 11 | /
6 | Others | 16 | 15 | 15 | 17 | /
 | Total | 176 | 184 | 163 | 171 | 145
Note: Values are numbers of sample polygons.
Table 5. Selected GLCM metrics.
Bands | Description
Contrast | Measures the drastic change in gray level between adjacent pixels.
Correlation | Measures the linear relationship between the gray levels of neighboring pixels.
Entropy | Measures the degree of disorder in the image; it is high when the image is texturally complex or noisy.
Variance | Measures the dispersion of the gray-level distribution, drawing attention to the visible borders of land-cover patches.
Inverse Difference Moment (IDM) | Measures the homogeneity of the gray-level distribution.
Sum Average (SAVG) | Measures the average of gray-level values in the image.
Angular Second Moment (ASM) | Measures the uniformity (energy) of the gray-level distribution of the image.
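Table 5's metrics are standard Haralick GLCM textures. As a minimal sketch, Google Earth Engine's glcmTexture() derives them directly, emitting bands suffixed _contrast, _corr, _ent, _var, _idm, _savg, and _asm; the asset ID, band names, input scaling, and 3-pixel window below are illustrative assumptions.

```python
# Hedged sketch: derive Table 5's GLCM textures with GEE's glcmTexture().
import ee

ee.Initialize()

wv2 = ee.Image('users/example/wv2_sa1')  # hypothetical Worldview-2 asset

# glcmTexture() needs an integer image; build an 8-bit grayscale band,
# assuming reflectance-like values in [0, 1]. The reducer names it "mean".
gray = wv2.select(['red', 'green', 'blue']) \
          .reduce(ee.Reducer.mean()).multiply(255).toByte()

glcm = gray.glcmTexture(size=3)  # 3-pixel neighborhood
texture = glcm.select(['mean_contrast', 'mean_corr', 'mean_ent', 'mean_var',
                       'mean_idm', 'mean_savg', 'mean_asm'])
```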
Table 6. Number of images acquired in the growing season, and the length of the shortest image time series (using growing-season images only/images from all seasons) needed to reach the optimal classification accuracy achieved with spatial features.
Study Area | Image Type | Images Acquired in Growing Season | Growing-Season Images Required | Full-Season Images Required
SA1 | Landsat-8 | 9 | 3 | 4
SA1 | Sentinel-2 | 19 | 3 | 4
SA2 | Landsat-8 | 7 | 3 | 4
SA2 | Sentinel-2 | 23 | 3 | 4
SA3 | Landsat-8 | 12 | 3 | 4
SA3 | Sentinel-2 | 16 | 4 | 5
SA4 | Landsat-8 | 3 | / | /
SA4 | Sentinel-2 | 14 | 8 | 9
SA5 | Landsat-8 | 9 | 2 | 4
SA5 | Sentinel-2 | 23 | 2 | 4
Table 7. The number of images (inflection point) at which the OA begins to level off, and the corresponding OA using only temporal features, a combination of spatial and temporal features, and a combination of spatial and temporal features after adding new spectral bands.
Study Area | Imagery Type | Inflection Point | Temporal Features | Spatio-Temporal Features | Temporal, Spectral, and Spatial Features
SA1 | Landsat-8 | 5 | 0.864 | 0.874 | 0.886
SA1 | Sentinel-2 | 10 | 0.880 | 0.891 | 0.910
SA2 | Landsat-8 | 6 | 0.827 | 0.840 | 0.873
SA2 | Sentinel-2 | 18 | 0.860 | 0.869 | 0.880
SA3 | Landsat-8 | 9 | 0.882 | 0.889 | 0.913
SA3 | Sentinel-2 | 18 | 0.869 | 0.893 | 0.926
SA4 | Landsat-8 | 6 | 0.827 | 0.838 | 0.854
SA4 | Sentinel-2 | 9 | 0.870 | 0.880 | 0.894
SA5 | Landsat-8 | 11 | 0.844 | 0.872 | 0.914
SA5 | Sentinel-2 | 12 | 0.915 | 0.929 | 0.940
Table 8. The optimal vineyard classification OA (%) obtained using different classification inputs (the OA of the spatial + temporal + spectral input was taken at the point where the temporal features approach saturation).
Classification Inputs | Imagery Type | SA1 | SA2 | SA3 | SA4 | SA5
Spatial | Worldview-2 | 82.9 | 77.4 | 81.3 | 86.9 | 80.7
Temporal | Landsat-8 | 89.0 | 85.6 | 89.6 | 82.7 | 91.2
Temporal | Sentinel-2 | 89.1 | 91.0 | 89.7 | 89.2 | 93.6
Spatial + Temporal | Landsat-8 | 90.8 | 86.8 | 93.1 | 83.8 | 91.8
Spatial + Temporal | Sentinel-2 | 91.0 | 90.8 | 92.9 | 93.0 | 95.6
Spatial + Temporal + Spectral | Landsat-8 | 88.6 | 87.3 | 91.3 | 85.4 | 91.4
Spatial + Temporal + Spectral | Sentinel-2 | 91.0 | 88.0 | 92.6 | 89.4 | 94.0
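A worked sketch of the accuracy measures reported in Table 8 and Figures 8 and 10: overall accuracy (OA) is the fraction of all validation samples classified correctly, producer's accuracy (PA) divides correct vineyard samples by the reference total (confusion-matrix row), and user's accuracy (UA) divides them by the predicted total (column). The 2 × 2 matrix below is an invented illustration, not the study's validation data.

```python
# Hedged sketch: OA, PA, and UA from a binary confusion matrix.
import numpy as np

cm = np.array([[86, 9],     # rows: reference class (vineyard, other)
               [14, 91]])   # cols: predicted class (vineyard, other)

oa = np.trace(cm) / cm.sum()    # fraction of all samples labeled correctly
pa = cm[0, 0] / cm[0, :].sum()  # producer's accuracy for vineyard (row-wise)
ua = cm[0, 0] / cm[:, 0].sum()  # user's accuracy for vineyard (column-wise)
print(f"OA={oa:.3f}, PA={pa:.3f}, UA={ua:.3f}")  # OA=0.885, PA=0.905, UA=0.860
```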
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
