Article

Crop Classification in Mountainous Areas Using Object-Oriented Methods and Multi-Source Data: A Case Study of Xishui County, China

1 State Key Laboratory of Remote Sensing Science, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China
2 University of Chinese Academy of Sciences, Beijing 100049, China
* Author to whom correspondence should be addressed.
Agronomy 2023, 13(12), 3037; https://doi.org/10.3390/agronomy13123037
Submission received: 22 November 2023 / Revised: 7 December 2023 / Accepted: 10 December 2023 / Published: 11 December 2023

Abstract

Accurate crop mapping provides fundamental data for digital agriculture and ecological security. However, current crop classification methods perform poorly in mountainous areas, where cropland field parcels are small and multiple crops are cultivated. This study proposed a new object-oriented classification method to address this issue, using multi-source data and object features to achieve multi-crop classification in mountainous areas. Firstly, a deep learning method was employed to extract cropland field parcels in mountainous areas. Subsequently, multi-source data were fused on the basis of the cropland field parcels, and object features tailored to mountainous crops were designed for crop classification. Comparative analysis indicates that the proposed classification method demonstrates strong performance, enabling accurate mapping of various crops in mountainous regions. The F1 score and overall accuracy (OA) of the proposed method are 0.8449 and 0.8502, representing a 10% improvement over the pixel-based random forest classification results. Furthermore, qualitative analysis reveals that the proposed method exhibits higher classification accuracy for smaller plots and more precise delineation of crop boundaries. Finally, meticulous crop mapping of corn, sorghum, rice, and other crops in Xishui County, Guizhou Province, demonstrates the significant potential of the proposed method for crop classification in mountainous scenarios.

1. Introduction

The accurate and extensive mapping of crop classification has significant implications for agriculture and ecology [1]. Classifying crops allows for a better understanding of their growth habits and environmental adaptability, enabling targeted selection of planting areas and optimization of cultivation and fertilization techniques [2]. When combined with soil data, climate data, and other relevant information, precise crop mapping can play a significant role in fields such as agricultural insurance [3], soil pollution tracking [4], and crop yield estimation [5]. Remote sensing data provide extensive surface coverage and rich spectral information, making remote sensing one of the mainstream approaches to crop mapping [6,7,8].
Due to variations in factors such as water content, chlorophyll levels, and cellular structure, different crops display different reflectance in specific spectral bands. Vegetation indices are calculated from these spectral bands and can further accentuate the distinctions among crops; consequently, they have been widely utilized in crop classification. Zheng et al. [9] effectively utilized Landsat time-series normalized difference vegetation index (NDVI) data to classify different crop types in complex scenarios and proposed a method for the intelligent selection of crop samples based on expert knowledge. With the advancement of classification algorithms and the substantial expansion of remote sensing data, numerous machine learning and deep learning methods have been employed in crop classification. These methods can automatically extract patterns and features of crops from large amounts of remote sensing data, enabling automated analysis and prediction [10]. Commonly used methods for crop classification, such as random forest (RF) [11], extreme gradient boosting (XGBoost) [12], and recurrent neural networks (RNN) [13], have been validated in numerous regions. However, most of these methods are pixel-based, producing an individual prediction for each pixel in a remote sensing image. Cropland parcels in mountainous regions are generally small, and their boundaries are relatively indistinct, resulting in pronounced mixed pixels in commonly used remote sensing images such as Sentinel-2 imagery. A mixed pixel is a pixel containing multiple cover types, which can lead to errors in classification results [14]. Ren et al. [15] employed cropland masks to alleviate the issue of mixed pixels in mountainous regions and conducted crop mapping in northeastern China. However, acquiring cropland masks proved challenging and did not address the problem of mixed pixels between different crop types.
Object-oriented classification is a method based on objects rather than pixels [16]. Object-oriented classification groups pixels in an image into objects with similar features and obtains more accurate information by extracting and classifying these features. Object-oriented classification preserves the shape and spatial structure of land features better than pixel-based classification, to some extent alleviating the impact of mixed pixels, thus enhancing the accuracy and interpretability of the classification [17]. Zhang et al. [18] employed optimized spectral feature sets and object-oriented classification methods for crop classification, significantly reducing the salt-and-pepper noise issue caused by mixed pixels and enhancing the accuracy of crop classification. Jiao et al. [19] utilized an object-oriented classification method and RADARSAT-2 data to classify five types of crops, including wheat, oats, soybeans, rapeseed, and forage, resulting in a 6% increase in accuracy. Sun et al. [20] employed cropland field parcels as the primary units of analysis and integrated optical and radar data to reconstruct time series imagery in cloudy and rainy mountainous regions, resulting in a commendable level of accuracy in crop classification. However, these methods did not specifically design features for object classification, and the accuracy of the obtained objects is also relatively low.
In addition, the complex planting structure of crops in mountainous areas makes precise crop classification relatively challenging [21]. For intricate landscapes such as mountainous regions, relying solely on a single data source is inadequate for crop classification [22]: constrained by the physical performance of satellite sensors, imagery from a single data source cannot achieve high resolution in both the spatial and spectral domains [23]. It therefore becomes imperative to incorporate additional data sources to obtain richer information and enhance classification accuracy. In the field of remote sensing, data fusion refers to the amalgamation of data from diverse satellites or sensors to attain more comprehensive, precise, and valuable information, thereby enhancing the effectiveness of classification [24]. Ienco et al. [25] constructed a Siamese deep learning network that fused Sentinel-1 and Sentinel-2 data at the feature level to enhance the accuracy of land cover classification. Deep learning allows for the integration of multi-source data at the feature level, facilitating the mutual matching of features; however, it necessitates the construction of corresponding networks, which is notably intricate and challenging. Huang et al. [26] combined the abundant spectral information of hyperspectral data with the strong capability of LiDAR in extracting intricate plant structures, thereby enabling precise crop classification. However, high-resolution hyperspectral and LiDAR data come at a higher cost, making their application in large-scale crop extraction challenging. A commonly employed tool for data fusion in crop classification is the Google Earth Engine (GEE) platform [27]. The platform integrates a vast amount of satellite imagery and geographic information system (GIS) data, provides users with robust capabilities for Earth observation and analysis, and can automatically fuse the selected remote sensing data [28,29,30]. Liu et al. [31] combined various data sources, including Sentinel-1, Sentinel-2, and Landsat, on the GEE platform to classify three types of crops (wheat, rapeseed, and maize), achieving an accuracy of 84.25%. However, GEE is limited to the data available on the platform and cannot handle remote sensing data from other sources, which restricts the integration of additional data.
Roughly one-third of the Earth’s land consists of mountainous or hilly terrain [32], underscoring the importance of accurate crop classification in these areas. Nevertheless, crop extraction in mountainous regions still faces numerous challenges, including insufficient accuracy in classifying small land parcels, imprecise classification boundaries, and cumbersome data fusion processes. Xishui is a county in Guizhou Province, China, with the majority of its administrative area being mountainous, where a variety of crops, such as corn, rice, and sorghum, are cultivated. Motivated by these issues, we developed an object-oriented crop classification method tailored to mountainous terrain and conducted experiments in Xishui County. In summary, our work encompasses the following:
  • Using a deep learning cropland field parcel extraction algorithm, we accurately extracted cropland field parcels and developed an object-oriented crop classification method, based on these parcels, tailored to mountainous terrain.
  • A data fusion method has been developed by utilizing cropland field parcels, simplifying the data fusion process and eliminating the need for cloud platforms and extensive processing of remote sensing images.
  • We designed cropland field parcel features for crop classification based on the crop characteristics of Xishui County.
  • We obtained the refined crop classification mapping of Xishui County through the proposed method.
The structure of this paper is as follows: Section 2 describes the data and sample conditions utilized, as well as the methods and evaluation criteria employed. Section 3 discusses the results of crop classification, providing both qualitative and quantitative comparative analyses with other approaches. Section 4 delves into the importance of utilizing classification features and elucidates the advantages and limitations of the proposed method. Finally, Section 5 summarizes this study, emphasizing the practical value of the proposed approach.

2. Materials and Methods

2.1. Study Area

Xishui is a county in Zunyi City, Guizhou Province, China, located in the northern part of Guizhou, with a total area of approximately 3127.7 km² (28°6′8″–28°49′59″ N, 105°50′28″–106°45′2″ E; Figure 1). Xishui County is situated in the transitional zone between the northwest slope of the Dalou Mountain Range and the southern edge of the Sichuan Basin, featuring numerous mountain ranges and valleys with highly undulating terrain. The highest elevation is 1841.9 m, while the lowest elevation is 275.4 m. Xishui County enjoys a subtropical humid monsoon climate, with an average annual temperature of 13.5 °C, average annual precipitation of 1109.9 mm, and an annual average sunshine duration of 1053.0 h [33]. Due to the predominantly terraced or sloping nature of Xishui County’s arable land and its relatively low fertility, the primary crop cultivated in the region is maize. Sorghum, a traditional Chinese brewing material, is similar to maize in that it can be grown in relatively infertile soil. With a rich history of brewing, Xishui County is home to several large distilleries, making sorghum a widely grown crop. Additionally, in the relatively flat basins or river valleys, a variety of crops, such as rice, chili peppers, sweet potatoes, and tobacco, are cultivated. The complex and diverse planting structure of Xishui County’s terraced fields poses a significant challenge for crop classification.

2.2. Datasets

2.2.1. Remote Sensing Imagery

It is challenging to attain the desired classification accuracy using a single data source in mountainous terrain. Therefore, this paper utilizes a variety of remote sensing and geographic data for crop classification mapping. The data used include Gaofen-2 imagery, Jilin-1 imagery, Sentinel-2 imagery, ZY-02 D/E imagery, and Copernicus Digital Elevation Model (DEM) data, introduced below:
  • Gaofen-2 Imagery. The Gaofen-2 satellite is the first domestically developed civilian optical remote sensing satellite in China with a spatial resolution better than 1 m. It is equipped with two high-resolution cameras, one with a resolution of 0.8 m for panchromatic imaging and the other with a resolution of 3.2 m for multi-spectral imaging. It features sub-meter spatial resolution, high positioning accuracy, and rapid attitude maneuvering capabilities [34].
  • Jilin-1 Imagery. The Jilin-1 satellite constellation is a Chinese commercial optical remote sensing satellite constellation. Currently, 79 Jilin-1 satellites have been successfully placed into their designated orbits, establishing the world’s largest sub-meter commercial remote sensing satellite constellation. Each satellite is equipped with a 0.75 m panchromatic camera and a 3 m multi-spectral camera, enabling the satellite constellation to achieve 23–25 revisits per day for any location worldwide [35].
  • Sentinel-2 Imagery. The Sentinel-2 satellite is part of the European Space Agency’s (ESA) Copernicus program, and its images can be downloaded from the official website of the ESA (https://scihub.copernicus.eu/, accessed on 21 November 2023). The data used in this article is the bottom-of-atmosphere reflectance data (L2A level) processed by ESA, with 12 spectral bands and a resolution of 10–60 m. Four of these bands are red-edge bands, which are sensitive to vegetation, making them suitable for crop classification [36].
  • ZY-02 D/E Imagery. The Ziyuan-1 (“Resource-1”) 02D/02E satellites are part of a medium-resolution Earth observation constellation constructed under Chinese leadership. Each satellite carries a visible near-infrared camera and a hyperspectral camera. The imagery used in this paper was captured by the visible near-infrared camera, with a panchromatic resolution of 2.5 m and a multi-spectral resolution of 10 m. This imagery not only contains a red-edge band suitable for crop classification but also offers a higher resolution than Sentinel-2 imagery, making it more suitable for classifying crops on small mountainous parcels [37].
  • Copernicus DEM. The Copernicus DEM is a global DEM project developed by the ESA for the European Union’s Earth observation program. This DEM collects elevation data using various technologies such as radar altimetry, optical satellites, and lidar, covering the entire globe with a resolution of 30 m [38].
We performed data processing on the Gaofen-2, Jilin-1, and ZY-02 D/E data, including pansharpening and radiometric correction. For the Sentinel-2 images, bands with a resolution coarser than 10 m were resampled to 10 m. When utilizing multiple datasets, the registration of the data is of utmost importance. The Copernicus DEM and Sentinel-2 images are released by ESA, and their geographical positioning is accurate [39]. Hence, we used the Sentinel-2 image as the base image and employed feature point matching and the DEM to register the Gaofen-2, Jilin-1, and ZY-02 D/E data to it. Through manual sampling inspection, the geometric errors of our registered data are within 1 m. The parameters of the data used in this paper are detailed in Table 1.
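As a minimal sketch of one of these preprocessing steps, the snippet below upsamples a 20 m Sentinel-2 band to 10 m with bilinear resampling using rasterio; the file names are placeholders, and the pansharpening and registration steps are not shown.

```python
import rasterio
from rasterio.enums import Resampling

SCALE = 2  # 20 m -> 10 m

# Placeholder file names; any 20 m Sentinel-2 band (e.g., a red-edge
# band) would be handled the same way.
with rasterio.open("S2_B05_20m.tif") as src:
    data = src.read(
        out_shape=(src.count, src.height * SCALE, src.width * SCALE),
        resampling=Resampling.bilinear,
    )
    # Shrink the pixel size in the geotransform by the same factor.
    transform = src.transform * src.transform.scale(1 / SCALE, 1 / SCALE)
    profile = src.profile
    profile.update(height=data.shape[1], width=data.shape[2],
                   transform=transform)

with rasterio.open("S2_B05_10m.tif", "w", **profile) as dst:
    dst.write(data)
```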

2.2.2. Crop Ground Reference Samples

Figure 2 depicts the distribution of crop sample points collected through multiple field surveys within the study area. All field surveys were conducted in July 2023, close to the acquisition time of the remote sensing imagery. During the field surveys, we used handheld GPS devices (Garmin eTrex 221x, with a positioning error of less than 3 m) to record nearly 2000 sample points. We also compared the sample points with high-resolution remote sensing images during sampling to ensure their accurate placement within the respective cropland field parcels. The samples primarily comprise three types of crops: corn, sorghum, and rice. A small number of other crops, such as sweet potatoes, chili peppers, and tobacco, are recorded under the “other crops” category. To better demonstrate the performance of the method, it is necessary to retain a larger number of samples for accuracy validation; we therefore allocated 40% of the samples for model training and reserved 60% for accuracy verification. The specific quantities of each type of sample are detailed in Table 2.
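A minimal sketch of such a split, assuming the sample points sit in a table with a crop label column (the table and column names are hypothetical); stratification keeps each crop’s proportion equal in both subsets.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Stand-in for the surveyed sample points; `crop` is a hypothetical
# label column.
samples = pd.DataFrame({
    "point_id": range(20),
    "crop": ["corn", "sorghum", "rice", "other crops"] * 5,
})

# 40% for training, 60% reserved for accuracy verification,
# stratified by crop type.
train, test = train_test_split(
    samples, train_size=0.4, stratify=samples["crop"], random_state=0
)
```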

2.3. Methods

As depicted in Figure 3, the proposed object-oriented method for crop classification in complex mountainous terrain is composed of four components: (1) utilizing high-resolution imagery and a cropland field parcel generation algorithm to produce all parcel objects within the research area; (2) fusing the collected Sentinel-2, ZY-02 D/E, and DEM data on the basis of the parcel objects and constructing object features; (3) model training and classification using an RF classifier; and (4) accuracy assessment of the produced results.

2.3.1. Method for Extracting Cropland Field Parcels

The cropland field parcel is an area of cropland with relatively uniform internal characteristics, serving as the fundamental spatial unit of arable land [40]. Accurately obtaining the location and area of cropland can be achieved through the use of cropland field parcels, thereby providing fundamental data support for digital agricultural services. However, both land surveying and manual interpretation based on remote sensing images or GIS systems require a significant amount of time and manpower, rendering them unsuitable for large-scale or multi-period cropland field parcel acquisition. With the advancement of deep learning technology, various arable land parcel extraction algorithms based on deep learning have been developed, such as BsiNet [41], SEANet [42], ResUNet_a [43], etc. These methods utilize multi-task convolutional neural networks to obtain cropland field parcel edges, attributes, and distance estimates and integrate these to derive parcel results. BsiNet consolidates the three parallel decoders of multi-task learning into a singular encoder, thereby enhancing computational efficiency and reducing network parameters. The BsiNet also incorporates a spatial feature enhancement module to enhance the recognition performance of small-area cropland field parcels. The lightweight structure of BsiNet and its capability to recognize small cropland field parcels make it suitable for the scene in Xishui County. Therefore, we have chosen this method for cropland field parcel extraction.
The deep learning model requires samples for training. Long et al. [41] have publicly released the dataset they used, but its scenes do not closely match our study area. As shown in Figure 4, we therefore manually delineated cropland field parcel samples covering approximately 321.87 km² of Xishui County from the high-resolution imagery, using 80% of them together with the publicly available data for model training and reserving the remaining 20% for accuracy validation. The model’s parameters and structural design during training followed the recommendations of Long et al. Furthermore, to fully utilize the annotated samples, we employed a series of data augmentation techniques, such as random selection, cropping, scaling, and the addition of random noise, to enhance the effectiveness of our model [44].
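The sketch below illustrates the general flavor of such augmentations (random flips, quarter-turn rotations, and additive noise applied jointly to an image chip and its parcel mask); it is an assumption-level example, not the authors’ exact pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image, mask):
    """Jointly augment an image chip (H, W, C) and its parcel mask (H, W)
    with random flips, 90-degree rotations, and additive pixel noise."""
    if rng.random() < 0.5:                       # horizontal flip
        image, mask = image[:, ::-1], mask[:, ::-1]
    if rng.random() < 0.5:                       # vertical flip
        image, mask = image[::-1, :], mask[::-1, :]
    k = int(rng.integers(0, 4))                  # 0-3 quarter turns
    image, mask = np.rot90(image, k), np.rot90(mask, k)
    image = image + rng.normal(0.0, 0.01, image.shape)  # random noise
    return image, mask
```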

2.3.2. Object-Oriented Classification

2.3.2.1. Utilizing Parcel Objects for Multi-Source Data Fusion

A single data source cannot meet the demand for high-precision crop classification in complex mountainous terrain such as Xishui County. Among the data used in this paper, the highest spatial resolution of the Sentinel-2 data is 10 m, and its red-edge bands have a resolution of only 20 m. Such resolution is relatively low for mountainous terrain with numerous terraced fields and small land parcels, making it susceptible to misclassification. The ZY-02 D/E data, with its 2.5 m spatial resolution, is better suited to mountainous terrain; however, it has only one red-edge band and relatively low spectral resolution, potentially leading to inaccurate differentiation of crops with similar spectral properties. The DEM data can, to some extent, indicate the likelihood of a certain crop being cultivated, but it cannot play a primary role. Lastly, high-resolution imagery (HR imagery) offers extremely high resolution and rich texture information, enabling accurate extraction of crop boundaries; however, it consists only of the RGB and near-infrared bands and cannot undertake the task of crop classification on its own. Therefore, to obtain both accurate boundaries and correct categories in crop mapping, it is necessary to utilize all the aforementioned data simultaneously, which requires data fusion.
This paper proposes a method of multi-source data fusion based on cropland field parcels. The process of generating cropland field parcels from HR imagery involves the utilization of spatial and textural information. Thus, the parcels representing accurate boundaries of the crops inherently encapsulate the most crucial features of HR imagery in crop classification. By leveraging the geographical coordinates of the land parcels and the projection information of the imagery, it is possible to obtain pixel values of the multi-spectral data and DEM data that fall within the coverage of these parcels. In this manner, the unification of all data is achieved by treating the cropland field parcel as a cohesive unit, consolidating the information of all data into the parcel unit. In contrast to fusion methods using GEE, this approach offers the flexibility to freely incorporate available image data. Moreover, it eliminates the need for complex formatting and resampling processes to construct data cubes, thereby greatly streamlining the data fusion workflow.
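A minimal sketch of this parcel-based fusion idea, assuming co-registered rasters and a parcel vector file (all file names are placeholders): each parcel polygon clips out the pixels of every raster that fall within it, so that all sources are consolidated at the parcel level.

```python
import geopandas as gpd
import rasterio
from rasterio.mask import mask as rio_mask

# Placeholder file names for the parcel vectors and the co-registered
# rasters (Sentinel-2, ZY-02 D/E, DEM).
parcels = gpd.read_file("parcels.gpkg")

def pixels_in_parcel(raster_path, geometry, crs):
    """Return the pixels of one raster that fall inside one parcel."""
    with rasterio.open(raster_path) as src:
        geom = gpd.GeoSeries([geometry], crs=crs).to_crs(src.crs)
        data, _ = rio_mask(src, geom, crop=True, filled=False)
        return data  # masked array: bands x rows x cols

for _, parcel in parcels.iterrows():
    s2 = pixels_in_parcel("sentinel2.tif", parcel.geometry, parcels.crs)
    zy = pixels_in_parcel("zy02de.tif", parcel.geometry, parcels.crs)
    dem = pixels_in_parcel("dem.tif", parcel.geometry, parcels.crs)
    # ...aggregate each pixel block into parcel-level features
    # (see Section 2.3.2.2)
```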

2.3.2.2. The Construction of Object Features

To enhance the classification performance of the model, it is necessary to construct features for classifying the land parcels. Due to the relatively low resolution of the multi-spectral images used, as shown in Figure 5, mixed pixels occur noticeably in the images. These mixed pixels contain reflectance information from various land features, which may interfere with crop classification. However, we observed that within a cropland field parcel object, mixed pixels are primarily distributed at the edges of the object, whereas the pixels near the centroid predominantly consist of pure pixels, as they are relatively distant from other cover types. These pure pixels better represent the spectral characteristics of crops. Therefore, we obtain the spectral characteristics of a parcel object by weighting each pixel’s value according to its distance from the object centroid. Specifically, we use Formula (1) to calculate the spectral characteristics:
$$\mathrm{Feature} = \sum_{i=1}^{n} \left( \frac{2}{n} - \frac{D_i}{S} \right) \times V_i \qquad (1)$$

In Formula (1), $n$ represents the number of pixels within the land parcel object, $D_i$ denotes the Euclidean distance from the $i$-th pixel to the centroid of the land parcel, $S$ represents the sum of the distances from all pixels within the land parcel object to the centroid, and $V_i$ signifies the value of the $i$-th pixel. The weights sum to one and decrease with distance from the centroid.
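A short sketch of Formula (1), assuming the pixel values and pixel coordinates of one band within one parcel have already been extracted (e.g., by the parcel-based clipping above); the function name is illustrative.

```python
import numpy as np

def weighted_parcel_feature(values, coords):
    """Distance-weighted spectral feature of one parcel (Formula (1)).

    values: (n,) pixel values of one band inside the parcel.
    coords: (n, 2) pixel coordinates.
    Pixels near the centroid (likely pure) receive larger weights than
    edge pixels (likely mixed)."""
    values = np.asarray(values, dtype=float)
    coords = np.asarray(coords, dtype=float)
    centroid = coords.mean(axis=0)
    d = np.linalg.norm(coords - centroid, axis=1)  # D_i
    s = d.sum()                                    # S
    if s == 0:                                     # degenerate 1-pixel parcel
        return float(values.mean())
    n = len(values)
    weights = 2.0 / n - d / s                      # weights sum to 1
    return float(np.sum(weights * values))
```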
Furthermore, during our field surveys and examination of remote sensing images, we noticed that the cropland field parcels of rice, chili, tobacco, and other such crops mostly exhibit relatively regular shapes and similar areas, whereas the areas and shapes of corn and sorghum parcels differ significantly. In particular, corn and sorghum are predominantly cultivated in the terraced fields in the mountains of Xishui County. This aligns with the general pattern of crop cultivation: rice, chili, tobacco, and other such crops require water retention or irrigation and have relatively high economic value, so they are predominantly cultivated on relatively flat and regular land, while the elongated terraced fields on sloping hillsides have relatively infertile soil suitable only for resilient crops such as corn or sorghum. To further validate this pattern, we calculated the area and circularity of the cropland field parcels of 1998 samples and plotted the scatter diagram shown in Figure 6. Circularity is a commonly used parameter for describing the shape of a plot, with the formula being [45]:
$$\mathrm{Circularity} = \frac{4 \pi A}{P^2} \qquad (2)$$
In Formula (2), A represents the area of the land plot, and P represents its perimeter. Generally, the more compact and regular the shape, the greater the circularity, with a circle attaining the maximum value of 1. From Figure 6, it can be observed that the circularity values of rice and other crops are mostly distributed at higher positions, with smaller areas, while the circularity of corn and sorghum is more dispersed, and the majority of cropland field parcels are planted with corn or sorghum. Overall, the crop category influences the circularity and area of the cropland field parcels to some extent. Therefore, we also adopted these two indicators as classification features.
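A small sketch of Formula (2) on shapely polygons, with two toy parcels showing why the measure separates compact plots from narrow terraces:

```python
import math
from shapely.geometry import Polygon

def circularity(polygon) -> float:
    """Circularity = 4*pi*A / P**2 (Formula (2)): 1.0 for a circle,
    lower for elongated shapes such as narrow terraces."""
    return 4 * math.pi * polygon.area / polygon.length ** 2

# A compact 30 m x 30 m plot scores ~0.79, while a 10 m x 100 m
# terrace-like strip scores ~0.26.
print(circularity(Polygon([(0, 0), (30, 0), (30, 30), (0, 30)])))
print(circularity(Polygon([(0, 0), (100, 0), (100, 10), (0, 10)])))
```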
Ultimately, the features used for our object-oriented classification consist of five components: the distance-weighted averages (Formula (1)) of the pixels covered by each cropland field parcel in the Sentinel-2 imagery, the ZY-02 D/E imagery, and the DEM, together with the area and the circularity of the cropland field parcel.

2.3.3. Classifier

In this paper, we employed Random Forest as the classifier for object-oriented crop classification. Random Forest is a commonly used machine learning method widely utilized for classification tasks in the field of remote sensing. Random Forest introduces randomness in building each decision tree by conducting random sampling with replacement in the data and considering a random subset of features at each node to increase diversity. Therefore, Random Forest is a collection of multiple decision trees, each independently learning from the data and ultimately making predictions through voting or averaging. The advantage of Random Forest lies in its ability to reduce the risk of overfitting and enhance the overall model robustness and accuracy by combining multiple models, making it particularly suitable for addressing classification problems involving large volumes of data and complex features [46].
We employed a random grid search [47] to select the parameters of the Random Forest. A random grid search addresses the limitations of an exhaustive grid search by randomly sampling the parameter space; because it does not need to try every possible setting, it can often find better parameter settings more quickly at the same computational cost. The final parameter settings of the Random Forest model used for crop mapping are shown in Table 3.
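A minimal sketch of such a search with scikit-learn; the search ranges below are illustrative assumptions, not the authors’ settings (those appear in Table 3).

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

# Illustrative search space; the actual ranges used are in Table 3.
param_distributions = {
    "n_estimators": list(range(100, 1001, 100)),
    "max_depth": [None, 10, 20, 30],
    "min_samples_split": [2, 5, 10],
    "max_features": ["sqrt", "log2"],
}
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions,
    n_iter=50,        # random samples drawn from the grid
    cv=5,
    n_jobs=-1,
    random_state=0,
)
# search.fit(X_train, y_train)  # parcel features and crop labels
# best_rf = search.best_estimator_
```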

2.3.4. Accuracy Assessment

To assess the effectiveness of parcel extraction, we utilized a series of evaluation metrics for object geometric and positional accuracy, including the over-classification error (OC), under-classification error (UC), and total error (TC) of individual cropland field parcels. From these three metrics, the global over-classification error (GOC), global under-classification error (GUC), and global total error (GTC) over all cropland field parcels are derived. OC, UC, and TC can be expressed by the following formulas:
$$OC(M_i) = 1 - \frac{\mathrm{area}(M_i \cap O_i)}{\mathrm{area}(O_i)}, \quad UC(M_i) = 1 - \frac{\mathrm{area}(M_i \cap O_i)}{\mathrm{area}(M_i)}, \quad TC(M_i) = \sqrt{\frac{OC(M_i)^2 + UC(M_i)^2}{2}} \qquad (3)$$

In Formula (3), $M_i$ represents the parcel being evaluated, with area $\mathrm{area}(M_i)$; $O_i$ is the sample parcel with the largest overlapping area with the evaluated parcel, with area $\mathrm{area}(O_i)$; and $\mathrm{area}(M_i \cap O_i)$ is the area of intersection of the two parcels. Based on the per-parcel OC, UC, and TC, the accuracy of the overall parcel extraction results is obtained as shown in Formula (4), where $N$ is the total number of parcel objects:

$$GOC = \sum_{i=1}^{N} OC(M_i)\,\frac{\mathrm{area}(M_i)}{\sum_{j=1}^{N} \mathrm{area}(M_j)}, \quad GUC = \sum_{i=1}^{N} UC(M_i)\,\frac{\mathrm{area}(M_i)}{\sum_{j=1}^{N} \mathrm{area}(M_j)}, \quad GTC = \sum_{i=1}^{N} TC(M_i)\,\frac{\mathrm{area}(M_i)}{\sum_{j=1}^{N} \mathrm{area}(M_j)} \qquad (4)$$
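A compact sketch of Formulas (3) and (4) on shapely polygons, assuming each extracted parcel has already been matched to the reference parcel with which it overlaps most:

```python
import numpy as np

def parcel_errors(pred, ref):
    """OC, UC, and TC of one extracted parcel `pred` against its
    best-matching reference parcel `ref` (shapely polygons); Formula (3)."""
    inter = pred.intersection(ref).area
    oc = 1.0 - inter / ref.area
    uc = 1.0 - inter / pred.area
    tc = np.sqrt((oc**2 + uc**2) / 2.0)
    return oc, uc, tc

def global_errors(pairs):
    """Area-weighted GOC, GUC, and GTC over (pred, ref) pairs; Formula (4)."""
    areas = np.array([pred.area for pred, _ in pairs])
    errors = np.array([parcel_errors(pred, ref) for pred, ref in pairs])
    weights = areas / areas.sum()
    return tuple((errors * weights[:, None]).sum(axis=0))  # GOC, GUC, GTC
```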
The confusion matrix was employed to assess the crop classification results, as it is the standard format for evaluating crop classification accuracy. In the confusion matrix, the number of rows equals the number of categories to be evaluated, and the element $P_{i,j}$ in the $i$-th row and $j$-th column indicates the number of pixels that actually belong to category $i$ and were predicted as category $j$. From the confusion matrix, we primarily utilized four evaluation metrics. Firstly, the producer’s accuracy (PA) signifies the proportion of pixels correctly classified into a category relative to the total number of pixels in that category. Secondly, the user’s accuracy (UA) represents the ratio of pixels correctly classified into a category to the total number of pixels classified as that category. Through PA and UA, we can analyze the classification results for each crop type and explore the reasons for accuracy variations. The overall accuracy (OA) refers to the proportion of correctly classified pixels in the total number of pixels. Lastly, the Kappa coefficient (KC) [48] serves as a measure of consistency and can be used to gauge classification performance; in this context, consistency denotes whether the model’s predictions align with the actual classification outcomes.
We also utilized the F1 score as an evaluation metric, as the classification process focuses not only on PA or UA. In the assessment of classification accuracy, the F1 score is widely used as an evaluation metric because it simultaneously considers precision and recall. The F1 score can be seen as the harmonic mean of the model’s precision and recall. The relevant calculation formula is as follows:
$$\mathrm{Precision} = \frac{TP}{TP + FP}, \quad \mathrm{Recall} = \frac{TP}{TP + FN}, \quad F1\ \mathrm{Score} = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}} \qquad (5)$$

In Formula (5), a true positive (TP) is a pixel correctly classified as positive, a false positive (FP) is a negative pixel incorrectly classified as positive, and a false negative (FN) is a positive pixel incorrectly classified as negative.
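The sketch below computes PA, UA, OA, the Kappa coefficient, and a macro-averaged F1 score from label arrays with scikit-learn, as one way to realize the assessment described above:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, cohen_kappa_score, f1_score

def assess(y_true, y_pred, labels):
    """Per-class PA/UA plus OA, Kappa, and macro F1."""
    cm = confusion_matrix(y_true, y_pred, labels=labels)
    pa = np.diag(cm) / cm.sum(axis=1)   # producer's accuracy (recall)
    ua = np.diag(cm) / cm.sum(axis=0)   # user's accuracy (precision)
    oa = np.diag(cm).sum() / cm.sum()   # overall accuracy
    kc = cohen_kappa_score(y_true, y_pred, labels=labels)
    f1 = f1_score(y_true, y_pred, labels=labels, average="macro")
    return pa, ua, oa, kc, f1

# e.g., labels = ["corn", "sorghum", "rice", "other crops"]
```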

3. Results

3.1. The Precision of Cropland Field Parcel Extraction

Because our crop classification is based on object-oriented classification of cropland field parcels, the accuracy of parcel extraction directly affects the precision of the crop classification. Therefore, we first validated the accuracy of the cropland field parcels. Accuracy testing on the reserved 20% of samples yielded the results shown in Table 4. Quantitative analysis reveals that the GTC of parcel extraction in Xishui County is 0.128, indicating good alignment between our extracted parcels and the manually annotated samples. The overall precision of the cropland field parcels is relatively high.
To further validate the effectiveness of the cropland field parcels, we visually inspected the extraction results. Figure 7 shows a partial illustration of these results. We examined the extraction results in most scenarios found in Xishui County, including relatively flat river valley areas and mountainous areas dominated by terraced fields. In either scenario, the method accurately extracts the majority of land parcels and obtains largely correct parcel boundaries. Although some small parcels are partially omitted, the overall precision of the land parcels is relatively high, making them suitable as the fundamental unit of object-oriented classification.

3.2. The Result of Object-Oriented Classification

3.2.1. Comparison with Other Methods

To validate the effectiveness of the object-oriented classification method in this paper, we employed three other methods for comparison: (1) a pixel-based random forest classification method; (2) a patch-based deep learning method, SPTNet [49]; and (3) a voting-based object classification method. To apply these methods, all data were resampled to 2.5 m resolution, matching the ZY-02 D/E imagery, and then stacked into a single image so that each method could utilize all the data features.
The pixel-based random forest method is the most traditional crop classification approach, classifying each pixel of the synthesized image based on its band values. The patch-based method involves classifying crops by inputting patches of a certain size around the pixels to be classified into a deep network. The object-based classification method based on voting is a crop classification approach proposed by Wang et al. [50], which involves using random forest for pixel-level classification and then determining the category of the cropland field parcels based on the proportion of each category under cropland field parcels coverage. In the comparative experiment, the random forest model also utilized a random grid search to determine parameters, while the deep learning model was set according to the default parameters. The accuracy evaluation results are shown in Table 5.
As shown in Table 5, our proposed object-oriented crop classification method achieved the highest accuracy both for individual crop types and overall. Its OA, KC, and F1 values of 0.8502, 0.8348, and 0.8449 were significantly higher than those of the other methods, demonstrating the efficacy of the proposed method. SPTNet had the lowest accuracy, with an F1 score of only 0.6257, which may be attributed to two factors. First, while the patch-based method introduces spatial information, it may also introduce a large amount of interference for very small croplands, decreasing classification accuracy. Second, deep learning models have many parameters and require a substantial number of training samples, which our dataset lacks; this may have caused SPTNet to overfit, ultimately yielding the lowest classification accuracy. By excluding some non-cultivated land through the cropland field parcels and optimizing crop edges to reduce salt-and-pepper noise, the object-based methods significantly improved accuracy over the others, with at least a 7% increase in OA. Compared to the voting-based method, our approach showed a 3.45% improvement in F1, indicating that our designed parcel features can further enhance crop classification capability.
To further evaluate the proposed method’s performance, we compared the prediction results across different crop distributions, parcel sizes, and terrain conditions, as shown in Figure 8. Because our field survey samples lack other land cover categories, the pixel-based and patch-based methods are prone to significant false classifications, identifying many non-cultivated areas as crops. The use of cropland field parcels excludes the influence of non-cultivated land, optimizing crop boundaries and effectively improving classification accuracy. In particular, the object-oriented crop classification method exhibits stronger classification capability for small cropland field parcels. For instance, the third row of Figure 8 shows a typical mountainous terraced area with numerous small parcels; while the other methods incorrectly classify some narrow terraces as rice, our approach accurately classifies these plots as sorghum by incorporating features such as parcel area and circularity.
Moreover, as depicted in rows two and four of Figure 8, the pixel-based method can correctly classify some small plots; however, due to the influence of mixed pixels, only the central pixels of these plots are correctly classified, so accurate classification of the entire parcel is impossible. The vote-based method assigns the category covering the maximum area within the parcel, which can misclassify the entire parcel. In contrast, our approach uses the centroid-distance-weighted average of the pixels within the parcel as the feature, effectively mitigating misclassification caused by mixed pixels. Finally, as shown in the first and second rows of Figure 8, the cropland field parcels we extracted unavoidably contain some errors; for instance, in the first row, forest land was mistakenly extracted as a parcel, and in the second row, buildings were erroneously extracted as parcels. The voting method would perpetuate these errors, whereas our approach classifies objects directly, allowing erroneous objects to be categorized as background and further enhancing classification accuracy.
In conclusion, qualitative and quantitative analysis shows that the object-oriented method effectively reduces both omissions and false detections in the crop extraction results, surpassing the comparative methods on all accuracy indicators. Specifically, the fusion of high-resolution data and parcels significantly enhances the extraction of crop boundary details, and the parcel features designed for mountainous crops reduce the misclassification of small parcels and, to a certain extent, correct errors in parcel extraction, resulting in more refined crop classification outcomes.

3.2.2. Crop Mapping in Xishui County

As depicted in Figure 9, we utilized the proposed object-oriented classification method to complete the classification of principal crops, including corn, sorghum, rice, and other crops, at a resolution of 0.8 m in Xishui County, Guizhou Province, in 2023.

4. Discussion

4.1. Assessment of Feature Importance

As a decision tree ensemble, Random Forest can calculate the importance of its input features. Each time a tree node is split during the construction of the Random Forest, the model records the decrease in impurity achieved by splitting on that feature, and the average of these decreases is taken as the feature’s importance [51]. This yields the relative importance of each feature, enabling us to understand each feature’s contribution to the model. We used this method to assess the importance of the 23 features used in our proposed object-oriented classification method, as shown in Figure 10.
From Figure 10, it can be observed that the summed importance of the Sentinel-2 bands is 0.3673, while that of the ZY-02 D/E bands is 0.4557. The importance values of the DEM, cropland field parcel area, and cropland field parcel circularity are 0.0343, 0.0824, and 0.0591, respectively. This indicates that the two types of multi-spectral imagery dominate the crop classification process, with ZY-02 D/E making the greater contribution due to its higher resolution. The cumulative importance of the DEM, parcel area, and parcel circularity amounts to 0.1776, indicating to some extent the significance of incorporating parcel attribute features. Within the two types of multi-spectral imagery, the dominant bands are the 5th, 6th, 7th, and 8th bands of the Sentinel-2 imagery and the 7th band of the ZY-02 D/E imagery, together accounting for more than half of the total importance within their respective imagery types. This underscores the sensitivity of the red-edge bands to crop growth and their significant role in crop classification. Examining feature importance not only enhances the interpretability of the model but also allows important features to be selected for simpler models, improving efficiency and providing insights for subsequent research.
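Reading these values from a fitted model is straightforward; the sketch below assumes the `search.best_estimator_` from the earlier training sketch and a hypothetical ordering of the 23 feature names.

```python
# Hypothetical feature names: 12 Sentinel-2 bands, 8 ZY-02 D/E bands,
# DEM, parcel area, parcel circularity (23 in total).
names = ([f"S2_B{i}" for i in range(1, 13)]
         + [f"ZY_B{i}" for i in range(1, 9)]
         + ["DEM", "parcel_area", "parcel_circularity"])

rf = search.best_estimator_  # fitted RandomForestClassifier
for name, imp in sorted(zip(names, rf.feature_importances_),
                        key=lambda pair: -pair[1]):
    print(f"{name:20s} {imp:.4f}")
```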

4.2. Advantages and Limitations

This study introduces an object-oriented crop classification method that integrates multi-source data, achieving satisfactory classification results even in mountainous areas with small arable plots and diverse crops. Firstly, the proposed method employs deep learning for parcel extraction, yielding excellent results in mountainous scenarios. Secondly, the method integrates high-resolution data, Sentinel-2 data, ZY-02 D/E data, and DEM data on the basis of the parcels, simplifying the data fusion process. Lastly, the study designs object features for crop classification tailored to the characteristics of crops in mountainous areas. Compared to the other methods, the proposed approach achieves the highest extraction accuracy regardless of crop type, and in the qualitative comparison it significantly enhances the extraction of small crop plots and improves crop edges. These comparisons demonstrate the advantages of our method.
Although our method has certain advantages, it also has some limitations. Firstly, it is based on object-oriented classification of cropland field parcels, so the accuracy of parcel extraction directly affects the final classification accuracy, requiring precise parcel extraction results. Although parcel extraction algorithms have made rapid progress in recent years, deep learning-based methods require a large number of training samples and often perform poorly in areas lacking samples, limiting the applicability of our method. Secondly, the spectral features of the cropland field parcels we constructed may contain redundancies; as shown in Figure 10, some bands have relatively low importance due to their insensitivity to vegetation, and using high-dimensional data may degrade classifier performance. Lastly, because crops such as tobacco and chili have similar spectral characteristics, we grouped them as other crops during classification without distinguishing them separately. In future research, we will attempt to optimize the parcel extraction model to enhance its generalization ability, apply this method over a larger scope, and optimize and select features from the multiple data sources to simplify the model and improve its classification ability.

5. Conclusions

This study proposed a novel method for crop classification in mountainous areas using object-oriented classification and multi-source remote sensing data. Firstly, a deep learning method was used to extract cropland field parcels in mountainous areas. Subsequently, multi-source data were fused on the basis of the cropland field parcels, and corresponding object features were designed to classify crops in mountainous areas. Comparative analysis indicates that the inclusion of cropland field parcels can significantly enhance crop classification performance in mountainous areas. Compared to mainstream methods such as pixel-based RF, the proposed method demonstrates the highest classification accuracy, enabling the accurate classification of various crops in mountainous areas. The F1 score of the proposed method is 0.8449 and the KC is 0.8438, representing improvements of 3.56% and 4.16%, respectively, over the vote-based classification method. This demonstrates that our proposed method effectively enhances crop classification in mountainous areas. The crop classification mapping of corn, sorghum, rice, and other crops in Xishui County, Guizhou Province, also validates the effectiveness of this method in practical applications.

Author Contributions

Conceptualization, Y.B. and Z.C.; methodology, Y.L. and Y.B.; validation, X.T. and Y.B.; formal analysis, Y.B.; resources, Z.C.; data curation, Y.B.; writing—original draft preparation, Y.L. and X.T.; writing—review and editing, X.T.; visualization, X.T.; supervision, X.T.; project administration, Z.C.; funding acquisition, Z.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by The National Key Research and Development Program of China (No. 2021YFB3901300).

Data Availability Statement

The data presented in this study are available in the article.

Acknowledgments

We are grateful to the anonymous reviewers whose constructive suggestions have improved the quality of this study. Additionally, we would like to express our sincere thanks to all the field data collectors.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chen, X.; Cui, Z.; Fan, M.; Vitousek, P.; Zhao, M.; Ma, W.; Wang, Z.; Zhang, W.; Yan, X.; Yang, J.; et al. Producing more grain with lower environmental costs. Nature 2014, 514, 486–489. [Google Scholar] [CrossRef]
  2. You, N.; Dong, J.; Huang, J.; Du, G.; Zhang, G.; He, Y.; Yang, T.; Di, Y.; Xiao, X. The 10-m crop type maps in Northeast China during 2017–2019. Sci. Data 2021, 8, 41. [Google Scholar] [CrossRef] [PubMed]
  3. Benami, E.; Jin, Z.; Carter, M.R.; Ghosh, A.; Hijmans, R.J.; Hobbs, A.; Kenduiywo, B.; Lobell, D.B. Uniting remote sensing, crop modelling and economics for agricultural risk management. Nat. Rev. Earth Environ. 2021, 2, 140–159. [Google Scholar] [CrossRef]
  4. Gholizadeh, A.; Kopăcková, V. Detecting vegetation stress as a soil contamination proxy: A review of optical proximal and remote sensing techniques. Int. J. Environ. Sci. Technol. 2019, 16, 2511–2524. [Google Scholar] [CrossRef]
  5. Rasti, S.; Bleakley, C.J.; Holden, N.; Whetton, R.; Langton, D.; O’Hare, G. A survey of high resolution image processing techniques for cereal crop growth monitoring. Inf. Process. Agric. 2021, 9, 300–315. [Google Scholar] [CrossRef]
  6. Zhong, L.; Hu, L.; Zhou, H. Deep learning based multi-temporal crop classification. Remote Sens. Environ. 2019, 221, 430–443. [Google Scholar] [CrossRef]
  7. Kobayashi, N.; Tani, H.; Wang, X.; Sonobe, R. Crop classification using spectral indices derived from Sentinel-2A imagery. J. Inf. Telecommun. 2020, 4, 67–90. [Google Scholar] [CrossRef]
  8. Wang, L.; Wang, J.; Liu, Z.; Zhu, J.; Qin, F. Evaluation of a deep-learning model for multispectral remote sensing of land use and crop classification. Crop J. 2022, 10, 1435–1451. [Google Scholar] [CrossRef]
  9. Zheng, B.; Myint, S.W.; Thenkabail, P.S.; Aggarwal, R.M. A support vector machine to identify irrigated crop types using time-series Landsat NDVI data. Int. J. Appl. Earth Obs. Geoinf. 2015, 34, 103–112. [Google Scholar] [CrossRef]
  10. Virnodkar, S.S.; Pachghare, V.K.; Patil, V.; Jha, S.K. Application of machine learning on remote sensing data for sugarcane crop classification: A review. ICT Anal. Appl. Proc. ICT4SD 2019, 2, 539–555. [Google Scholar]
  11. Yang, N.; Liu, D.; Feng, Q.; Xiong, Q.; Zhang, L.; Ren, T.; Zhao, Y.; Zhu, D.; Huang, J. Large-scale crop mapping based on machine learning and parallel computation with grids. Remote Sens. 2019, 11, 1500. [Google Scholar] [CrossRef]
  12. Chen, T.; He, T.; Benesty, M.; Khotilovich, V.; Tang, Y.; Cho, H.; Chen, K.; Mitchell, R.; Cano, I.; Zhou, T.; et al. Xgboost: Extreme Gradient Boosting; R Package Version 0.4-2. 2015, pp. 1–4. Available online: https://cran.ms.unimelb.edu.au/web/packages/xgboost/vignettes/xgboost.pdf (accessed on 21 November 2023).
  13. Fan, J.; Bai, J.; Li, Z.; Ortiz-Bobea, A.; Gomes, C.P. A GNN-RNN approach for harnessing geospatial and temporal information: Application to crop yield prediction. Proc. AAAI Conf. Artif. Intell. 2022, 36, 11873–11881. [Google Scholar] [CrossRef]
  14. Hsieh, P.F.; Lee, L.C.; Chen, N.Y. Effect of spatial resolution on classification errors of pure and mixed pixels in remote sensing. IEEE Trans. Geosci. Remote Sens. 2001, 39, 2657–2663. [Google Scholar] [CrossRef]
  15. Ren, T.; Xu, H.; Cai, X.; Yu, S.; Qi, J. Smallholder crop type mapping and rotation monitoring in mountainous areas with Sentinel-1/2 imagery. Remote Sens. 2022, 14, 566. [Google Scholar] [CrossRef]
  16. Ma, L.; Li, M.; Ma, X.; Cheng, L.; Du, P.; Liu, Y. A review of supervised object-based land-cover image classification. ISPRS J. Photogramm. Remote Sens. 2017, 130, 277–293. [Google Scholar] [CrossRef]
  17. Costa, H.; Foody, G.M.; Boyd, D.S. Using mixed objects in the training of object-based image classifications. Remote Sens. Environ. 2017, 190, 188–197. [Google Scholar] [CrossRef]
  18. Zhang, X.; Sun, Y.; Shang, K.; Zhang, L.; Wang, S. Crop classification based on feature band set construction and object-oriented approach using hyperspectral images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 4117–4128. [Google Scholar] [CrossRef]
  19. Jiao, X.; Kovacs, J.M.; Shang, J.; McNairn, H.; Walters, D.; Ma, B.; Geng, X. Object-oriented crop mapping and monitoring using multi-temporal polarimetric RADARSAT-2 data. ISPRS J. Photogramm. Remote Sens. 2014, 96, 38–46. [Google Scholar] [CrossRef]
  20. Sun, Y.; Li, Z.L.; Luo, J.; Wu, T.; Liu, N. Farmland parcel-based crop classification in cloudy/rainy mountains using Sentinel-1 and Sentinel-2 based deep learning. Int. J. Remote Sens. 2022, 43, 1054–1073. [Google Scholar] [CrossRef]
  21. Kyere, I.; Astor, T.; Graß, R.; Wachendorf, M. Agricultural crop discrimination in a heterogeneous low-mountain range region based on multi-temporal and multi-sensor satellite data. Comput. Electron. Agric. 2020, 179, 105864. [Google Scholar] [CrossRef]
  22. Zhang, K.; Chen, Y.; Zhang, B.; Hu, J.; Wang, W. A multitemporal mountain rice identification and extraction method based on the optimal feature combination and machine learning. Remote Sens. 2022, 14, 5096. [Google Scholar] [CrossRef]
  23. Rocchini, D. Effects of spatial and spectral resolution in estimating ecosystem α-diversity by satellite imagery. Remote Sens. Environ. 2007, 111, 423–434. [Google Scholar] [CrossRef]
  24. Li, J.; Hong, D.; Gao, L.; Yao, J.; Zheng, K.; Zhang, B.; Chanussot, J. Deep learning in multimodal remote sensing data fusion: A comprehensive review. Int. J. Appl. Earth Obs. Geoinf. 2022, 112, 102926. [Google Scholar] [CrossRef]
  25. Ienco, D.; Interdonato, R.; Gaetano, R.; Minh, D.H.T. Combining Sentinel-1 and Sentinel-2 Satellite Image Time Series for land cover mapping via a multi-source deep learning architecture. ISPRS J. Photogramm. Remote Sens. 2019, 158, 11–22. [Google Scholar] [CrossRef]
  26. Huang, Z.; Xie, S. Classification Method for Crop by fusion Hyper Spectral and LiDAR Data. In Proceedings of the 2022 14th International Conference on Measuring Technology and Mechatronics Automation (ICMTMA), Changsha, China, 15–16 January 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 1011–1014. [Google Scholar]
  27. Tamiminia, H.; Salehi, B.; Mahdianpari, M.; Quackenbush, L.; Adeli, S.; Brisco, B. Google Earth Engine for geo-big data applications: A meta-analysis and systematic review. ISPRS J. Photogramm. Remote Sens. 2020, 164, 152–170. [Google Scholar] [CrossRef]
  28. Shelestov, A.; Lavreniuk, M.; Kussul, N.; Novikov, A.; Skakun, S. Large scale crop classification using Google earth engine platform. In Proceedings of the 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Fort Worth, TX, USA, 23–28 July 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 3696–3699. [Google Scholar]
  29. Luo, C.; Liu, H.J.; Lu, L.P.; Liu, Z.R.; Kong, F.C.; Zhang, X.L. Monthly composites from Sentinel-1 and Sentinel-2 images for regional major crop mapping with Google Earth Engine. J. Integr. Agric. 2021, 20, 1944–1957. [Google Scholar] [CrossRef]
Figure 1. The geographical location and basic information of Xishui County. (a) High-resolution image of Xishui County. (b) Guizhou Province, China. (c) The spatial location of Xishui in Guizhou.
Figure 2. Distribution of sample points in the study area. The colors of the sample points in the lower images are consistent with the legend.
Figure 3. Overview of the proposed framework for crop mapping with object-oriented classification.
Figure 4. Overall distribution and specific details of the plotted cropland field parcel samples.
Figure 5. Mixed pixels within cropland field parcels at different resolutions. (a) High-resolution imagery (0.8 m). (b) ZY-02D/E imagery (2.5 m).
Figure 6. Area–circularity scatter plot of all cropland field parcels corresponding to the samples collected in ground surveys.
Figure 7. Cropland field parcel extraction results in different scenarios. From left to right: (a) High-resolution images. (b) Boundaries of the cropland field parcels. (c) Cropland field parcels, with each parcel shown in a different color.
Figure 8. Examples of the classification results on the dataset. From left to right: (a) False-color composite (Band 4, Band 3, Band 7) of the ZY-02D/E image; (b) Pixel-based RF; (c) SPTNet; (d) Vote-based RF; (e) Object-oriented classification.
Figure 9. Mapping results of corn, rice, sorghum, and other crops from multi-source satellite images in Xishui County, Guizhou Province, China, obtained by the proposed object-oriented classification method.
Figure 10. The importance of each feature. SB-i represents the i-th Sentinel-2 band, and ZYB-i represents the i-th ZY-02D/E band.
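A ranking like Figure 10 can be reproduced from a fitted random forest. The following is a minimal, self-contained sketch, not the authors' code: it assumes scikit-learn's impurity-based (Gini) feature_importances_, and the band counts and random data are hypothetical placeholders rather than the study's actual feature set.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical feature set following the caption's naming convention
# (SB-i: Sentinel-2 bands, ZYB-i: ZY-02D/E bands); counts are placeholders.
feature_names = [f"SB-{i}" for i in range(1, 13)] + [f"ZYB-{i}" for i in range(1, 5)]

rng = np.random.default_rng(0)
X = rng.random((500, len(feature_names)))   # placeholder object-feature matrix
y = rng.integers(0, 4, size=500)            # placeholder crop labels (4 classes)

clf = RandomForestClassifier(n_estimators=43, max_depth=14).fit(X, y)

# Impurity-based (Gini) importances, sorted as in a Figure-10-style bar chart.
for name, score in sorted(zip(feature_names, clf.feature_importances_),
                          key=lambda t: t[1], reverse=True):
    print(f"{name}: {score:.4f}")
```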
Table 1. The parameters of the dataset used in this paper.

| Data Type | Number of Images | Acquisition Time | Resolution (m) |
|---|---|---|---|
| Gaofen-2 | 91 | May 2023–September 2023 | 0.8 |
| Jilin-1 | 54 | May 2023–September 2023 | 0.75 |
| Sentinel-2 | 2 | 17 July 2023, 30 July 2023 | 10–60 |
| ZY-02D/E | 1 | 17 July 2023 | 2.5 |
| Copernicus DEM | — | 2011–2015 | 30 |
Table 2. The number of reference samples that were divided into train and validation samples.

| Crop Type | Total | Train | Validation |
|---|---|---|---|
| Corn | 785 | 314 | 471 |
| Sorghum | 728 | 291 | 437 |
| Rice | 342 | 136 | 206 |
| Other Crops | 143 | 57 | 86 |
| Total | 1998 | 798 | 1200 |
Table 3. Parameter settings of the Random Forest classifier used.

| Parameter Name | Value |
|---|---|
| n_estimators | 43 |
| max_depth | 14 |
| min_samples_leaf | 1 |
| min_samples_split | 2 |
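For concreteness, the settings in Table 3 can be instantiated with scikit-learn's RandomForestClassifier. The sketch below is an assumption about the setup rather than the authors' code: the feature matrix, label encoding, and sample dimensions are placeholders, and any parameter not listed in Table 3 is left at the library default.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Placeholder training data: 798 training samples (Table 2) with a
# hypothetical 20-dimensional object-feature vector per parcel.
rng = np.random.default_rng(42)
X_train = rng.random((798, 20))
y_train = rng.integers(0, 4, size=798)  # 0: corn, 1: sorghum, 2: rice, 3: other crops

# Parameter values taken from Table 3; everything else is a scikit-learn default.
clf = RandomForestClassifier(
    n_estimators=43,
    max_depth=14,
    min_samples_leaf=1,
    min_samples_split=2,
)
clf.fit(X_train, y_train)
```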
Table 4. Geometric errors of the extracted cropland field parcels.

| GOC | GUC | GTC |
|---|---|---|
| 0.132 | 0.104 | 0.128 |
Table 5. Comparison of the proposed method with other crop classification methods on the accuracy evaluation indices for the dataset.

| Method | Metric | Corn | Rice | Sorghum | Other Crops | OA | KC | F1 |
|---|---|---|---|---|---|---|---|---|
| Pixel-based RF | PA | 0.6978 | 0.7588 | 0.7615 | 0.7324 | 0.7548 | 0.7447 | 0.7351 |
| | UA | 0.7055 | 0.7484 | 0.7292 | 0.7105 | | | |
| SPTNet | PA | 0.6496 | 0.6354 | 0.6403 | 0.6780 | 0.6339 | 0.6406 | 0.6257 |
| | UA | 0.6219 | 0.6543 | 0.6749 | 0.6371 | | | |
| Vote-based RF | PA | 0.8166 | 0.8004 | 0.8082 | 0.7913 | 0.8272 | 0.8022 | 0.8093 |
| | UA | 0.8164 | 0.7747 | 0.8108 | 0.8164 | | | |
| Object-oriented classification | PA | 0.8588 | 0.8311 | 0.8374 | 0.8365 | 0.8502 | 0.8438 | 0.8449 |
| | UA | 0.8397 | 0.8524 | 0.8236 | 0.8501 | | | |

PA: producer's accuracy; UA: user's accuracy; OA: overall accuracy; KC: kappa coefficient. OA, KC, and F1 are computed over all classes.
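As a reading aid for Table 5, PA and UA are class-wise accuracies, while OA, KC, and F1 summarize performance over all classes. The sketch below is an assumption rather than the paper's evaluation code: it computes these indices from a confusion matrix whose rows are reference labels and columns are predictions, and it treats the reported F1 as the macro average of per-class F1 scores.

```python
import numpy as np

def accuracy_metrics(cm):
    """PA, UA, OA, kappa, and macro F1 from a confusion matrix.

    cm[i, j] counts validation samples of reference class i
    predicted as class j (rows: reference, columns: prediction).
    """
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    pa = np.diag(cm) / cm.sum(axis=1)   # producer's accuracy (recall per class)
    ua = np.diag(cm) / cm.sum(axis=0)   # user's accuracy (precision per class)
    oa = np.trace(cm) / n               # overall accuracy
    pe = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / n ** 2  # chance agreement
    kc = (oa - pe) / (1.0 - pe)         # kappa coefficient
    f1 = np.mean(2 * pa * ua / (pa + ua))  # macro-averaged F1 (assumption)
    return pa, ua, oa, kc, f1
```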