Article

Object-Based Crop Species Classification Based on the Combination of Airborne Hyperspectral Images and LiDAR Data

1 State Key Laboratory of Remote Sensing Science, Research Center for Remote Sensing and GIS, and School of Geography, Beijing Normal University, Beijing 100875, China
2 Beijing Key Laboratory for Remote Sensing of Environment and Digital Cities, Beijing 100875, China
* Author to whom correspondence should be addressed.
Remote Sens. 2015, 7(1), 922-950; https://doi.org/10.3390/rs70100922
Submission received: 9 November 2014 / Accepted: 6 January 2015 / Published: 15 January 2015

Abstract: Identification of crop species is an important issue in agricultural management. In recent years, many studies have explored this topic using multi-spectral and hyperspectral remote sensing data. In this study, we propose a framework for mapping crop species by combining hyperspectral and Light Detection and Ranging (LiDAR) data in an object-based image analysis (OBIA) paradigm. The aims of this work were the following: (i) to understand the performances of different spectral dimension-reduced features from hyperspectral data and their combination with LiDAR-derived height information in image segmentation; (ii) to understand what classification accuracies of crop species can be achieved by combining hyperspectral and LiDAR data in an OBIA paradigm, especially in regions that have a fragmented agricultural landscape and a complicated crop planting structure; and (iii) to understand the contributions of the crop height derived from LiDAR data, as well as the geometric and textural features of image objects, to the separability of crop species. The study region was an irrigated agricultural area in the central Heihe river basin, which is characterized by many crop species, complicated crop planting structures, and a fragmented landscape. The airborne hyperspectral data acquired by the Compact Airborne Spectrographic Imager (CASI) with a 1 m spatial resolution and the Canopy Height Model (CHM) data derived from the LiDAR data acquired by the airborne Leica ALS70 LiDAR system were used for this study. The image segmentation accuracies of different feature combination schemes (very high-resolution imagery (VHR), VHR/CHM, and minimum noise fraction transformed data (MNF)/CHM) were evaluated and analyzed. The results showed that VHR/CHM outperformed the other two combination schemes with a segmentation accuracy of 84.8%. The object-based crop species classification results of different feature integrations indicated that incorporating the crop height information into the hyperspectral extracted features provided a substantial increase in the classification accuracy. The combination of MNF and CHM produced higher classification accuracy than the combination of VHR and CHM and than the MNF-only classification. The textural and geometric features in the object-based classification could significantly improve the accuracy of the crop species classification. By using the proposed object-based classification framework, a crop species classification result with an overall accuracy of 90.33% and a kappa of 0.89 was achieved in our study area.


1. Introduction

Precise crop mapping is vitally important for agricultural management applications such as crop damage estimation [1], crop acreage and yield estimation [2], and precision agriculture [3]. Detailed crop maps provide the basic data for scientific studies and governmental decision-making. Compared with conventional field investigation approaches, remote sensing is considered a cost-effective, labor-saving, and time-efficient method of vegetation mapping and has been widely applied to crop mapping [4].
It is challenging to discriminate between different crop species with multi-spectral remote sensing data. One reason is the spectral similarity between different types of crops [5]. Hyperspectral remote sensing data, which have up to hundreds of narrow spectral bands from the visible to the infrared region of the spectrum, are more powerful for identifying different crop species than multi-spectral images. To investigate the capability of hyperspectral data to distinguish different crops, studies on choosing appropriate hyperspectral waveband locations have been performed [6,7]. However, due to the variability within the same crop caused by growth calendars, farmer decisions, and local weather [5], it is still a challenging task to choose hyperspectral remote sensing data with proper bands and acquisition dates to classify crops in detail. To improve the accuracy of crop species classification, incorporating plant canopy structure information into the optical remote sensing classification is promising. LiDAR systems, which can measure the vertical structure of vegetation, have been used in tree species inventory [8,9,10,11], and the combination of hyperspectral and LiDAR data has shown its potential for tree species classification [12,13]. As for crops, canopy height differences between crop species are more pronounced than those between tree species, so using three-dimensional information on the crops to differentiate crops that have similar spectral characteristics could be even more promising.
Another reason why it is difficult to discriminate between different crop species with multi-spectral remote sensing data is the limited spatial resolution of the images [5]. In regions with spatially fragmented landscapes and complicated planting structures, high spatial resolution remote sensing data are important for accurate crop species classification. Coarse or medium spatial resolution remote sensing images contain "mixed" pixels of multiple land cover types or crop species, which makes the data insufficient or inadequate for detailed crop species classification [14,15]. VHR images, which can provide detailed observations at the scale of fine plant patterns or even individual plants, are more promising. VHR images have been widely used in urban land cover classification, forest inventory [9,16,17], and crop species mapping [18,19,20,21].
However, high spatial resolution imagery alone might not be effective in accurately mapping crop species, because the pixels of a VHR image also capture information on the soil background and shadows, even though the crops are the only targets of the mapping. The background information increases the spectral variability and the number of mixed pixels within parcels, which decreases the statistical separability between different classes in pixel-based classification [22]. This scenario is known as the H-resolution problem [14]. As a way of solving the H-resolution problem, object-based image analysis (OBIA) has been developed and used in crop species classification [5,20,23].
In contrast to pixel-based classification, object-based classification considers image objects to be the basic classification units [14,16]. One advantage of object-based classification over pixel-based classification is that it can achieve more reliable classification results by combining different types of object features [19], such as spectral, textural, and geometric features. Object-based image classification consists of two stages: image segmentation and image object classification. In the image segmentation procedure, the remote sensing imagery is partitioned into relatively homogeneous regions, the "image objects" [24]. Previous studies have shown that image segmentation based on multi-sensor data achieves higher segmentation accuracy than segmentation based on multi-spectral data alone [16,25,26,27]; in particular, combining three-dimensional features of the vegetation canopy derived from LiDAR data with high spatial resolution images can improve the segmentation accuracy [16,17,28,29]. In the image object classification process, each segmented object is labeled as a corresponding class using an appropriate classification algorithm.
While the effectiveness of combining hyperspectral- and LiDAR-derived vegetation height data for tree species mapping has been confirmed by several studies [11,12,13,30,31], the combination of hyperspectral and LiDAR data has never been used for crop species classification, and the effectiveness of this combination in crop species mapping is unknown. Furthermore, most studies that were based on combining hyperspectral- and LiDAR-derived vegetation height data for tree species mapping relied on pixel-based classification and, thus, ignored the geometric and textural features that lie in high spatial resolution remote sensing data.
The main objective of this study was to develop a framework for mapping crop species by combining hyperspectral and LiDAR data in an object-based image analysis (OBIA) paradigm and to test the effectiveness of this framework in the irrigated agricultural region. The study area is located in the middle reaches of the Heihe River Basin, Gansu Province, China, where the landscape is fragmented and the crop planting structure is complicated. The specific aims of this paper are: (i) to understand the performances of different spectral dimension-reduced features from hyperspectral data and their combinations with LiDAR-derived height information in image segmentation; (ii) to understand what classification accuracies of crop species can be achieved by combining hyperspectral and LiDAR data in an OBIA paradigm; and (iii) to understand the contributions of the crop height derived from LiDAR data and the textural and geometric features of the image objects to the crop species’ separabilities.
The remainder of this paper is organized as follows. In Section 2, we describe the study area and the dataset used in the analysis. In Section 3, we present our methods for data pre-processing, image segmentation and segmentation accuracy assessment, and object-based classification. The results are presented and analyzed in Section 4. A summary of the entire study and the conclusions are presented in Section 5.

2. Study Area and Data

2.1. Study Area

The study area is located in the middle reaches of the Heihe River basin (north corner: 38°54′5.55′′N, 100°21′23.39′′E; east corner: 38°52′42.20′′N, 100°24′34.88′′E; south corner: 38°50′16.38′′N, 100°22′36.24′′E; west corner: 38°51′42.37′′N, 100°19′25.84′′E), approximately eight kilometers southwest of Zhangye City, Gansu Province, China (Figure 1). The area is located in an artificial oasis in which irrigated crops and forest are the dominant vegetation types. The vegetation species in the study area include shelter forest, cereal crops (maize, wheat) and vegetables (leek, lettuce, cauliflower, potato, watermelon, and pepper). In addition to vegetation cover, there are man-made buildings and roads in the study area.
Figure 1. The study area, a cultivation base in the middle reaches of the Heihe River, Gansu Province, China, is approximately 8 km southwest of Zhangye City. The hyperspectral cube was obtained from airborne CASI, and the remote sensing image on top of the cube is a false color composition of three hyperspectral imagery bands (R: band centered at 826 nm, G: band centered at 683 nm, B: band centered at 540 nm).

2.2. Data

The data used in this study were supplied by the Heihe Watershed Allied Telemetry Experimental Research (HiWATER) [32]. The overall objectives of HiWATER were to improve the observability of hydrological and ecological processes, to build a world-class watershed observing system, and to enhance the applicability of remote sensing in integrated eco-hydrological studies and water resource management at the basin scale.

2.2.1. Airborne Remote Sensing Data

The remote sensing data used in this study included hyperspectral images that were collected by airborne CASI (Compact Airborne Spectrographic Imager, ITRES Research Ltd., Calgary, AB, Canada) and CHM data that were acquired by an airborne LiDAR system (Leica ALS70 LiDAR system). The CASI sensor was on board a Harbin Y-12 aircraft that was at an average flying altitude of 2000 m above the ground on 29 June 2012. The hyperspectral images that were acquired by the CASI had 48 bands that ranged from 380 nm to 1050 nm in wavelength. The spectral resolution of the hyperspectral data was 7 nm. The spatial resolution of the hyperspectral data was 1.0 m. While the CASI was in flight, the ground control points for the geometric rectification and the atmospheric parameters for the atmospheric correction of the CASI hyperspectral remote sensing data were measured simultaneously. The atmospheric correction and geometric rectification of the CASI hyperspectral data were conducted by the HiWATER team [32]. The hyperspectral data were georeferenced using Universal Transverse Mercator (UTM) coordinates with the WGS84 datum.
The airborne LiDAR data were obtained from a flight mission conducted by HiWATER on 19 July 2012. The LiDAR sensor flew at an altitude of approximately 1500 m and collected the first and last returns of each emitted pulse. The average point density of the LiDAR data was 4 points per m2, and the vertical accuracy was 0.05–0.3 m. The returns from the ground (e.g., bare soil, roads) and from non-ground targets (e.g., plant canopies, building roofs) were separated and used to generate a digital elevation model (DEM) and a digital surface model (DSM) using TerraScan software (Terrasolid Ltd., Helsinki, Finland). The DEM and DSM derived from the LiDAR point-cloud data were georeferenced and resampled to a spatial resolution of 1 m for the convenience of combining them with the CASI hyperspectral data.

2.2.2. Field Data Collection

The vegetation types and crop species in the study area were surveyed during the HiWATER flight campaign (8 July–9 August 2012) [32]. Areas with complex planting structures were surveyed intensively. The distribution of the survey points is shown in Figure 2. In total, 912 ground survey points covering 393 crop parcels were collected; for most parcels, more than two points were surveyed. Non-vegetation types such as buildings, roads, and shadow were visually interpreted from the 1-m spatial resolution VHR image for classifier training and classification accuracy assessment. Details of the "ground truth" data are listed in Table 1. Reference polygons for the image segmentation accuracy assessment were plotted according to the field survey points; a total of 122 reference polygons were chosen for the image segmentation accuracy assessment and optimal segmentation parameter selection.
Table 1. Details of the ground truth data used for classifier training and classification accuracy assessment.
Class | Parcels | Classification Training (points) | Accuracy Assessment (points) | Total (points)
Maize | 51 | 68 | 34 | 102
Orchard | 43 | 64 | 32 | 96
Shelter forest | 42 | 40 | 20 | 60
Leek | 31 | 70 | 35 | 105
Lettuce | 26 | 50 | 25 | 75
Cauliflower | 41 | 78 | 39 | 117
Nursery | 35 | 44 | 22 | 66
Potato | 40 | 80 | 40 | 120
Watermelon | 41 | 60 | 30 | 90
Pepper | 43 | 54 | 27 | 81
Buildings | 35 | 58 | 29 | 87
Road | 26 | 48 | 24 | 72
Shadow | 46 | 74 | 37 | 111
Maize through pepper are ground-investigated points (912 in total); buildings, road, and shadow are visually interpreted points (270 in total).
Figure 2. Locations of field survey parcels and reference objects that were used in our study. The yellow stars are the locations of field survey parcels. The blue polygons are reference objects for image segmentation accuracy assessment.

3. Methodology

In this study, we employed airborne hyperspectral and LiDAR data for object-based crop species classification. The object-based crop classification consisted of four steps: (1) image feature extraction from the hyperspectral and LiDAR data for image segmentation; (2) image segmentation to generate image objects; (3) object feature extraction and selection for object-based crop species classification; and (4) object-based classification using the support vector machine (SVM), a non-parametric machine learning classifier. As shown in Figure 3, the data were first geometrically co-registered to minimize the classification uncertainty induced by geometric errors between the different data sources. The hyperspectral data were transformed using the MNF transformation to extract spectral features. The DSM and DEM data derived from the LiDAR point cloud were used to build the CHM data, which were applied in the image segmentation and classification processes. Different data combination schemes were tested for image segmentation, and we propose a novel image segmentation assessment method for selecting the optimum segmentation result and segmentation parameters. Once the image objects were built, object features were extracted from each data source, including spectral features, object texture, object geometric features, and crop height. In the classification process, the kernel-based SVM classifier was employed to classify crop species using the extracted features. Finally, an accuracy assessment was performed with the support of the ground survey data.
Figure 3. Flowchart of crop species classification procedure.

3.1. Image Feature Extraction

The features extracted from the hyperspectral images and LiDAR data were used in both the image segmentation and the crop species classification procedures.
Hyperspectral imagery has been considered promising for crop species classification due to its high spectral resolution [6,33]. However, the high dimensionality of hyperspectral data can cause the Hughes phenomenon in the classification process [34]. Statistical analysis has revealed that many hyperspectral imagery bands are highly correlated [4,35,36], which means that it is necessary to perform dimension reduction on the hyperspectral imagery [31]. The MNF transformation has been widely used for hyperspectral feature extraction [4,10,12,37,38], and studies have shown that MNF-extracted features perform well in hyperspectral data-based image segmentation [26]. Thus, the MNF transformation was used to extract the most informative spectral features of the hyperspectral data in this study. The first 10 layers of the MNF transformation were selected because the eigenvalues of the MNF transformation revealed that they accounted for more than 94% of the total information.
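The exact MNF implementation used for the CASI data is not given in the paper; the following NumPy sketch only illustrates the underlying idea, with the noise covariance estimated from horizontal neighbor differences (an assumption, since other noise estimators are possible).

```python
import numpy as np

def mnf_transform(cube, n_components=10):
    """Minimum Noise Fraction transform of a hyperspectral cube (rows, cols, bands).

    The noise covariance is estimated from horizontal neighbor differences, the
    data are noise-whitened, and a PCA of the whitened data yields components
    ordered by decreasing signal-to-noise ratio.
    """
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(np.float64)
    X -= X.mean(axis=0)

    # Noise estimate: differences between horizontally adjacent pixels
    diff = (cube[:, 1:, :] - cube[:, :-1, :]).reshape(-1, bands).astype(np.float64)
    noise_cov = np.cov(diff, rowvar=False) / 2.0

    # Whiten the data with respect to the noise covariance
    n_evals, n_evecs = np.linalg.eigh(noise_cov)
    whitener = n_evecs @ np.diag(1.0 / np.sqrt(np.maximum(n_evals, 1e-12)))
    Xw = X @ whitener

    # PCA of the noise-whitened data; the leading eigenvectors maximize the SNR
    s_evals, s_evecs = np.linalg.eigh(np.cov(Xw, rowvar=False))
    order = np.argsort(s_evals)[::-1]
    mnf = Xw @ s_evecs[:, order[:n_components]]
    explained = s_evals[order] / s_evals.sum()
    return mnf.reshape(rows, cols, n_components), explained
```

With the 48-band CASI cube as input, the cumulative sum of `explained` can be inspected to check how much of the total eigenvalue sum the first components carry, which is the kind of criterion used above to retain the first 10 layers.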
The problem of shadowing is especially significant in high-resolution imaging [39,40]. Because the spatial resolution of the hyperspectral data was as fine as 1 m, elevated plants such as trees cast shadows that had to be considered in the classification process. Although it is remarkably difficult to interpret shadowed areas in an image because of the reduction or total loss of spectral information on the shaded objects [39], we found that the Photochemical Reflectance Index (PRI) was effective for shadow detection in the hyperspectral images. PRI was calculated using Equation (1) [41]:
$$\mathrm{PRI} = \frac{R_{528} - R_{567}}{R_{528} + R_{567}} \quad (1)$$
where R528 and R567 are the reflectances of the bands centered at 528 nm and 567 nm, respectively. After several tests with different thresholds, an optimal PRI threshold of 0 was determined: areas where the PRI value was greater than 0 were labeled as shadow.
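A minimal sketch of this shadow masking is shown below; the band indices for 528 nm and 567 nm are assumptions that would have to be looked up in the CASI band list.

```python
import numpy as np

def pri_shadow_mask(cube, idx_528, idx_567, threshold=0.0):
    """Label pixels with PRI > threshold as shadow (Equation (1)).

    idx_528 and idx_567 are the indices of the CASI bands centered nearest to
    528 nm and 567 nm; the small constant only guards against division by zero.
    """
    r528 = cube[:, :, idx_528].astype(np.float64)
    r567 = cube[:, :, idx_567].astype(np.float64)
    pri = (r528 - r567) / (r528 + r567 + 1e-12)
    return pri > threshold
```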
To capture the height differences between crop species, the high-density LiDAR point cloud was used to characterize the three-dimensional structure of the crop canopy. The DSM and DEM derived from the LiDAR point cloud, provided by the HiWATER project, were used to calculate the CHM, which was computed as the difference between the DSM and the DEM. The CHM map of the study area is shown in Figure 4.
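Computing the CHM is a simple raster difference; a hedged rasterio sketch is given below, where the file names are placeholders and both rasters are assumed to be co-registered 1 m grids.

```python
import rasterio

# File names are placeholders; both rasters are assumed to be co-registered
# 1 m grids in the same coordinate reference system.
with rasterio.open("dsm_1m.tif") as dsm_src, rasterio.open("dem_1m.tif") as dem_src:
    dsm = dsm_src.read(1).astype("float32")
    dem = dem_src.read(1).astype("float32")
    profile = dsm_src.profile

chm = dsm - dem          # canopy height model
chm[chm < 0] = 0         # clamp small negative differences to zero

profile.update(dtype="float32")
with rasterio.open("chm_1m.tif", "w", **profile) as dst:
    dst.write(chm, 1)
```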
Figure 4. CHM map of the study area.

3.2. Image Segmentation and Segmentation Accuracy Assessment

In object-based classification, the image segmentation accuracy significantly influences the classification accuracy [19,42]: segments that match the reference objects better produce higher classification accuracies. Under-segmentation refers to the case in which a segment contains parts that belong to different regions and should be split. Because an under-segmented object is labeled as a single class in the object-based classification, under-segmentation has a negative impact on the classification results [16]. We employed the widely used multi-resolution segmentation method (FNEA, Fractal Net Evolution Approach) [43] to generate image objects. FNEA is a bottom-up region-merging algorithm that merges smaller neighboring objects (starting with one-pixel objects) into larger segments until the user-defined heterogeneity thresholds are exceeded. The segmentation is controlled by three parameters: scale, color (the weight of the spectral properties), and shape (smoothness and compactness). FNEA is a scale-dependent segmentation algorithm, and the quality of the segmentation and of the overall object-based classification depends largely on the segmentation scale [44].
Appropriate segmentation parameters are crucial for achieving an optimal image segmentation result [20,45]. To obtain such a result, researchers have explored several indicators for optimal parameter identification [24,45,46]; however, no effective method of obtaining the optimum segmentation automatically has been established so far. We propose a segmentation accuracy feedback process to select the optimum FNEA segmentation parameters and segmentation result. The procedure is as follows (a code sketch is given below): (1) define the range of each parameter and the increment used in every iteration (e.g., scale: 5 to 100 with an increment of 5; compactness: 0.05 to 1 with an increment of 0.05; shape: 0.05 to 1 with an increment of 0.05); (2) segment the images with each parameter combination (20 × 20 × 20 segmentations); and (3) assess the segmentation accuracy of each result and select the most accurate result over the whole iteration procedure. The key step of this process is the assessment of the image segmentation accuracy.
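The feedback loop amounts to a grid search; in the sketch below, `segment` and `assess_segmentation` are hypothetical stand-ins for the FNEA segmentation call (run in eCognition in this study) and the reference-based accuracy assessment of Section 3.2, and `image_layers` / `reference_polygons` are assumed inputs.

```python
import itertools
import numpy as np

# Parameter ranges follow the example in the text.
scales        = np.arange(5, 105, 5)           # 20 values
shapes        = np.arange(0.05, 1.0001, 0.05)  # 20 values
compactnesses = np.arange(0.05, 1.0001, 0.05)  # 20 values

best = {"asr": -1.0}
for scale, shape, compactness in itertools.product(scales, shapes, compactnesses):
    # Hypothetical segmentation call (e.g., FNEA in eCognition)
    segments = segment(image_layers, scale=scale, shape=shape, compactness=compactness)
    # Reference-based accuracy assessment (Section 3.2)
    asr, osr, usr = assess_segmentation(segments, reference_polygons)
    if asr > best["asr"]:
        best = {"asr": asr, "osr": osr, "usr": usr,
                "params": (scale, shape, compactness)}

print("optimum parameters (scale, shape, compactness):", best["params"])
print("ASR / OSR / USR:", best["asr"], best["osr"], best["usr"])
```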
There are many methods for image segmentation accuracy assessment [47]. In general, they can be divided into two categories: methods based on the statistical characteristics of the image object values, and methods based on reference objects. Although both can quantitatively evaluate the segmentation accuracy, the reference-based methods are considered more objective [48]. Hence, we followed the reference (ground truth)-based approach presented in [16,48,49], in which the segmentation accuracy is measured by the overlapping regions and the distances between the reference polygons and the segmented objects. Unlike existing reference-based methods, in which both the area and the position discrepancy are considered [16,42], we used the area discrepancy as the only criterion to evaluate the segmentation. We did not use a position discrepancy index because the reference polygons and the segmentation results were under the same georeference system, so the area discrepancy already reflects most of the positional disagreement. To describe the difference between the reference polygons and the segmentation results, we defined accurate segmentation (AS), over-segmentation (OS), and under-segmentation (US) as follows:
(1)
As shown in Figure 5a–c, there can be multiple segmented objects that have overlapping regions with the same reference polygon. We define a reference polygon as Over-Segmented (OS) if one of the following three conditions holds: (a) more than one of the overlapping regions is greater than 10% of the reference polygon’s area; (b) each of the overlapping regions is less than (or equal to) 10% of the reference polygon’s area, but their total area is more than 90% of the reference polygon’s area, as shown in Figure 5b; or (c) only one overlapping region is greater than 10% of the reference polygon’s area, but it is less than 90% of the reference polygon’s area, as shown in Figure 5c.
(2)
As shown in Figure 5d, for each reference polygon there can be one or more segmented objects that have overlapping regions with it. We define the reference polygon as Under-Segmented (US) if there is an overlapping region that is greater than 90% of the reference polygon’s area but smaller than 90% of the corresponding segmented object’s area.
(3)
As shown in Figure 5e, for each reference polygon there can be one or more segmented objects that have overlapping regions with it. We define the reference polygon as Accurate-Segmented (AS) if there is an overlapping region that is both greater than 90% and less than 110% of the reference polygon’s area, and also greater than 90% of the segmented object’s area.
Figure 5. Illustrations of Over-Segmented (a–c), Under-Segmented (d), and Accurate-Segmented (e) objects. In (a), more than three Overlapped Regions are greater than 10% of the Reference Polygon; in (b), there are 11 Overlapped Regions, but each of them is less than 10% of the Reference Polygon’s area; under condition (c), only one Overlapped Region is greater than 10% of the Reference Polygon’s area, but the region is less than 90% of the Reference Polygon’s area. In (d), only one Overlapped Region is greater than 90% of the Reference Polygon’s area, but the Overlapped Region is less than 90% of the segmented object’s area (image object a in (d)). In (e), only one Overlapped Region is both greater than 90% of the Reference Polygon’s area and 90% of the segmented object’s area (image object a in (e)).
For the entire set of image segmentation results, we define OSR (over-segmentation rate), USR (under-segmentation rate), and ASR (accurate segmentation rate) to assess the image segmentation accuracy. The OSR, USR, and ASR are defined in Equations (2)–(4):
$$\mathrm{OSR} = \frac{\sum_{i=1}^{N_O} \mathrm{Area}(OR_i)}{\mathrm{Area}(R)} \times 100\% \quad (2)$$
$$\mathrm{USR} = \frac{\sum_{i=1}^{N_U} \mathrm{Area}(UR_i)}{\mathrm{Area}(R)} \times 100\% \quad (3)$$
$$\mathrm{ASR} = \frac{\sum_{i=1}^{N_A} \mathrm{Area}(AR_i)}{\mathrm{Area}(R)} \times 100\% \quad (4)$$
where NO, NU, and NA denote the numbers of Over-Segmented, Under-Segmented, and Accurate-Segmented polygons, respectively; Area(R) is the overall area of the reference polygons; and Area(ORi), Area(URi), and Area(ARi) are the areas of the ith Over-Segmented, Under-Segmented, and Accurate-Segmented polygons, respectively. The segmentation result with the largest ASR is selected as the optimum segmentation result, and the corresponding segmentation parameters are selected as the optimum segmentation parameters. The corresponding OSR and USR are also recorded to quantify the segmentation errors.
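A hedged sketch of these rules is shown below using Shapely geometries; it classifies each reference polygon and accumulates the area-weighted rates of Equations (2)–(4). The fallback to "OS" for every polygon that is neither AS nor US is a simplification of the three over-segmentation conditions.

```python
# Reference polygons and segments are assumed to be Shapely polygon geometries
# in the same projected coordinate system (so .area is in square meters).

def classify_reference(ref_poly, segments, t_low=0.10, t_high=0.90):
    """Label one reference polygon as 'AS', 'US', or 'OS' (Section 3.2)."""
    overlaps = []
    for seg in segments:
        inter = ref_poly.intersection(seg)
        if not inter.is_empty:
            overlaps.append((inter.area, seg.area))

    ref_area = ref_poly.area
    # Accurate segmentation: one overlap covers 90-110% of the reference polygon
    # and more than 90% of its segment (definition (3)).
    for a, s in overlaps:
        if t_high * ref_area < a < 1.1 * ref_area and a > t_high * s:
            return "AS"
    # Under-segmentation: the overlap covers the reference polygon but is only a
    # minor part of a much larger segment (definition (2)).
    for a, s in overlaps:
        if a > t_high * ref_area and a < t_high * s:
            return "US"
    # Remaining cases are treated as over-segmentation (conditions (a)-(c)).
    return "OS"

def segmentation_rates(reference_polygons, segments):
    """Area-weighted ASR, OSR, and USR in percent (Equations (2)-(4))."""
    total = sum(p.area for p in reference_polygons)
    rates = {"AS": 0.0, "OS": 0.0, "US": 0.0}
    for p in reference_polygons:
        rates[classify_reference(p, segments)] += p.area
    return {k: 100.0 * v / total for k, v in rates.items()}
```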

3.3. Object Feature Extraction

3.3.1. Image Object Crop Height Feature Extraction

The mean value of an object is one of the most important statistical parameters for describing an image object. However, because of the intense heterogeneity of a high spatial resolution image, the image object CHM mean value calculated directly from all pixels might not represent the height of the object’s dominant crop type. For example, for the orchard and watermelon image objects shown in Figure 6, the pixel values represent both the plants (fruit trees, watermelon) and their background (e.g., weeds or bare soil). A directly calculated mean value can be misleading because it represents neither of the two cover types, even though the crop plants are dominant; moreover, this average height can be close to the mean height of another crop type, making the object mean heights of two otherwise distinguishable crop types similar. As shown in Figure 6, some of the CHM pixel values within an object actually represent the height of the background, so the directly computed object mean does not represent the dominant crop height. Therefore, the CHM object data should be pre-processed to ensure that the CHM image object mean value represents the height of its dominant crop.
Figure 6. The distribution of the crop CHM value and background in an image object (a) Orchard image object, (b) CHM distribution in orchard image object; (c) Watermelon image object, (d) CHM distribution in watermelon image object.
To make an object’s CHM mean value represent the average height of its dominant crop, we propose a pre-processing step that eliminates the background values when calculating the image object height mean. Because the field investigation showed that the minimum crop height was 0.3 m (for watermelon), we chose 0.2 m as the threshold for eliminating background values: for each image object, CHM values of less than 0.2 m were considered background and were excluded when calculating the object’s mean CHM value. The results before and after this pre-processing are shown in Figure 7.
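A minimal sketch of this background removal, assuming the CHM and a boolean mask of the object's pixels are available as NumPy arrays:

```python
import numpy as np

def object_mean_crop_height(chm, object_mask, background_threshold=0.2):
    """Mean CHM of one image object after removing background pixels.

    `chm` is the 1 m canopy height model and `object_mask` a boolean array of
    the same shape marking the object's pixels. Pixels below the 0.2 m
    threshold (bare soil, low weeds between rows) are excluded from the mean.
    """
    heights = chm[object_mask]
    crop_heights = heights[heights >= background_threshold]
    if crop_heights.size == 0:      # object contains no vegetated pixels
        return 0.0
    return float(crop_heights.mean())
```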
From the pre-processed CHM data, we randomly chose 10 objects for each of the 10 crop species with reference to the ground investigation data, and the height of each crop was calculated from the height values of its 10 objects; the results are shown in Figure 8. Different crop species have distinguishable plant heights, and taller plants such as shelter forest, nursery, orchard, and maize show larger height differences.
Figure 7. The frequency distributions and the means of the CHM value of image objects before and after pre-processing. (Left): before pre-processing; (Right): after pre-processing. The red dotted lines show the mean values of their corresponding CHM image objects.
Figure 8. Crop height box plot of the 10 crop species.

3.3.2. Extraction of the Image Object Texture Features

Because crops are usually cultivated along a certain direction or follow a given spatial pattern, artificially planted crops have strong arrangement characteristics. This planting structure is captured by the remote sensing imagery as image texture, and texture features can provide valuable discriminating spatial characteristics in an object-based classification [5,50,51]. In this study, we employed the gray-level co-occurrence matrix (GLCM) [52], a statistical texture measure, to extract object-level textural features. The texture features were computed from the LiDAR-derived CHM data because the CHM exhibits more obvious texture than the hyperspectral images. To match the scale of the crop planting structure on the ground, a window size of 3 × 3 pixels (3 m × 3 m on the ground) was chosen to calculate the second-order texture measures (standard deviation, angular second moment, contrast, dissimilarity, and entropy). The texture features are summarized in Table 2.
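The sketch below computes these measures with scikit-image's `graycomatrix`/`graycoprops` (spelled `greycomatrix`/`greycoprops` in older releases); the quantization to 32 gray levels and the use of the plain window standard deviation instead of the GLCM-based standard deviation are simplifying assumptions.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_texture(chm_window, levels=32):
    """Second-order texture measures of a CHM window (e.g., 3 x 3 pixels)."""
    # Quantize heights to integer gray levels before building the GLCM
    w = chm_window - chm_window.min()
    w = np.floor(w / (w.max() + 1e-12) * (levels - 1)).astype(np.uint8)

    glcm = graycomatrix(w, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    p = glcm.mean(axis=3)[:, :, 0]                # average over the four directions
    entropy = -np.sum(p * np.log(p + 1e-12))      # GLCM entropy

    return {
        "std":           float(chm_window.std()),
        "asm":           float(graycoprops(glcm, "ASM").mean()),
        "contrast":      float(graycoprops(glcm, "contrast").mean()),
        "dissimilarity": float(graycoprops(glcm, "dissimilarity").mean()),
        "entropy":       float(entropy),
    }
```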

3.3.3. Geometric Feature Extraction of Image Objects

Image objects of man-made constructions or artificially cultivated areas usually have distinguishable geometric shapes. Parcels of cultivated crops such as leek, maize, and watermelon are mostly rectangular, whereas natural vegetation tends to have irregular borders, and man-made constructions such as roads are linear with large length/width ratios. We therefore employed three geometric indicators: the shape index, the length/width ratio, and the rectangular fit [53]. The shape index describes the smoothness of an image object border: the smoother the border, the lower the shape index. It is calculated as the border length of the image object divided by four times the square root of its area. The rectangular fit describes how well an image object fits into a rectangle of similar size and proportions, where 0 indicates no fit and 1 indicates a completely fitting object. We normalized the shape index and length/width ratio to values between 0 and 1 to make the results comparable. For detailed definitions of the shape index, length/width ratio, and rectangular fit, readers are referred to [50,53]; a hedged code sketch of these indicators is given after Figure 9.
Table 2. Second-order textural measures of LiDAR-derived CHM image objects, calculated using GLCM.
Statistic Feature | Expression | Description
Standard deviation | $\sqrt{\sum_{i,j=0}^{N-1} P_{i,j}\,(i,j-\mu_{i,j})^2}$ | Measures the dispersion of the values around the mean, similar to contrast or dissimilarity.
GLCM angular second moment | $\sum_{i,j=0}^{N-1} P_{i,j}^2$ | High when the GLCM is locally homogeneous.
GLCM contrast | $\sum_{i,j=0}^{N-1} P_{i,j}\,(i-j)^2$ | A measure of the amount of local variation in the image.
GLCM dissimilarity | $\sum_{i,j=0}^{N-1} P_{i,j}\,\lvert i-j \rvert$ | Similar to contrast, but increases linearly. High if the local region has a high contrast.
GLCM entropy | $\sum_{i,j=0}^{N-1} P_{i,j}\,(-\ln P_{i,j})$ | High if the elements of the GLCM are distributed equally; low if the elements are close to either 0 or 1.
Parameters: $i$ is the row number; $j$ is the column number; $P_{i,j}$ is the normalized value in cell $i,j$; $N$ is the number of rows or columns.
Figure 9. Image geometric features: (a) the segmentation result of the remote sensing imagery; (b) the shape index feature; (c) the length/width ratio; and (d) the rectangular fit feature.
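A hedged Shapely sketch of the three geometric indicators is given below; the rectangular fit is approximated as the ratio of the object area to the area of its minimum rotated bounding rectangle, which is a simplified proxy for the definition in [53], and the normalization step described above is omitted.

```python
import math
# Image objects are assumed to be available as Shapely polygons in a projected CRS.

def geometric_features(obj_poly):
    """Shape index, length/width ratio, and (approximate) rectangular fit."""
    area = obj_poly.area
    perimeter = obj_poly.length

    # Shape index: border length divided by four times the square root of the area
    shape_index = perimeter / (4.0 * math.sqrt(area))

    # Length/width ratio from the minimum rotated bounding rectangle
    rect = obj_poly.minimum_rotated_rectangle
    xs, ys = rect.exterior.coords.xy
    edges = [math.hypot(xs[i + 1] - xs[i], ys[i + 1] - ys[i]) for i in range(2)]
    length_width = max(edges) / max(min(edges), 1e-12)

    # Approximate rectangular fit: share of the bounding rectangle filled by the object
    rectangular_fit = area / rect.area

    return shape_index, length_width, rectangular_fit
```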

3.4. Classification

The support vector machine (SVM) was chosen to classify the crop species of the objects in this study. The classification features came from multiple sources and had a high dimensionality, for which parametric classifiers such as the maximum likelihood classifier would be inadequate; thus, the non-parametric, distribution-free SVM classifier was applied. The SVM classifier is based on statistical learning theory and determines the location of the decision boundaries that produce an optimal separation of classes. Training the classifier is relatively easy even with limited training samples, and it has offered state-of-the-art performance on ill-posed classification problems associated with high-dimensional features [39,54,55].
Considering the high dimensionality and complexity of the derived classification features, a radial basis function (RBF) kernel was selected to reduce the computational burden caused by the high dimensionality. Training the SVM classifier involves tuning two parameters: the cost of constraint violation (C) and sigma (σ). Larger values of C can lead to an over-fitted model, whereas σ controls the shape of the separating hyperplane [12]. Different combinations of features were tested for classification to obtain the optimal classification result. Training samples were derived from the field-investigated ground points and crop species data: a random selection of two-thirds of the surveyed parcels (608 points covering 262 crop parcels plus 180 points of non-vegetation types) was used as training data for the SVM classification.
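A minimal scikit-learn sketch of this training step is shown below; `X_train`, `y_train`, and `X_test` are assumed to hold the object feature matrix (MNF means, CHM, GLCM, and geometric features) and the corresponding labels, and the C/gamma grid is illustrative rather than the grid actually used in the study.

```python
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X_train / y_train: object features and crop species labels from the surveyed
# parcels; X_test: objects held out for accuracy assessment (assumed inputs).
pipeline = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
param_grid = {
    "svc__C":     [1, 10, 100, 1000],
    "svc__gamma": [0.001, 0.01, 0.1, 1],   # gamma corresponds to 1 / (2 * sigma^2)
}
search = GridSearchCV(pipeline, param_grid, cv=5)
search.fit(X_train, y_train)

predicted = search.predict(X_test)
print("best C / gamma:", search.best_params_)
```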

3.5. Classification Accuracy Assessment

To evaluate the effectiveness of integrating hyperspectral and LiDAR data in crop species classification, a confusion matrix analysis was used. The confusion matrix provides the Overall Accuracy (OA), the percentage of correctly classified samples; the Producer’s Accuracy (PA), which indicates how well the reference samples of a category were classified; and the User’s Accuracy (UA), which indicates the probability that a sample classified into a category actually represents that category on the ground [56]. For each classification result, the one-third of the field-surveyed parcels (304 points covering 131 crop parcels and 90 points of non-vegetation types) that was not used in the classifier training process was used as the “ground truth” for the confusion matrix analysis.
A Kappa analysis and Kappa Z-test (Equations (5) and (6)) were used to assess the overall performances of the classifications with different features. The Kappa Z-test between two classification results determines which result is better [57]. At the 95% confidence level, the critical value is 1.96: a Z value greater than 1.96 indicates that a classification is significantly better than a random result, or that one classification is significantly better than another [56].
$$Z = \frac{k_1}{\sqrt{\mathrm{Var}(k_1)}} \quad (5)$$
$$Z = \frac{\lvert k_1 - k_2 \rvert}{\sqrt{\mathrm{Var}(k_1) + \mathrm{Var}(k_2)}} \quad (6)$$
where k1 and k2 are the two Kappa values, and Var(k1) and Var(k2) are their variances.
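A hedged sketch of this accuracy assessment is given below; `y_true` and `y_pred` are assumed to be the reference and predicted labels of the held-out objects, and the kappa variances needed for the Z-tests are assumed to come from the confusion-matrix analysis (they are not computed here).

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

def accuracy_report(y_true, y_pred):
    """Overall, producer's, and user's accuracies plus kappa from a confusion matrix."""
    cm = confusion_matrix(y_true, y_pred)          # rows: reference, columns: predicted
    overall = np.trace(cm) / cm.sum()
    producers = np.diag(cm) / cm.sum(axis=1)       # per-class, omission-error view
    users = np.diag(cm) / cm.sum(axis=0)           # per-class, commission-error view
    return overall, producers, users, cohen_kappa_score(y_true, y_pred)

def kappa_z_single(kappa, var_kappa):
    """Equation (5): test whether a single kappa differs significantly from zero."""
    return kappa / np.sqrt(var_kappa)

def kappa_z_pair(k1, var_k1, k2, var_k2):
    """Equation (6): test whether two kappa values differ significantly."""
    return abs(k1 - k2) / np.sqrt(var_k1 + var_k2)

# |Z| > 1.96 is significant at the 95% confidence level.
```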

4. Results and Discussion

4.1. Segmentations of Different Image Feature Integrations

Three image segmentation schemes were carried out with the FNEA algorithm to obtain the optimum image segmentation result (see Table 3). For the VHR (very high resolution)-based segmentation scheme, the VHR image was a false color composite of three hyperspectral bands (R: band centered at 826 nm, G: band centered at 683 nm, B: band centered at 540 nm); in the VHR/CHM-based segmentation, the CHM data derived from the LiDAR data were expected to help differentiate crop species; and in the MNF/CHM-based segmentation, the MNF features generated from the hyperspectral data provided the spectral differences between crop species. In the schemes that included the CHM layer, its weight was set to 30 to emphasize the information in the third dimension, while the weights of the other layers were set to 1. The scale parameters ranged from 5 to 80, the shape parameters from 0.05 to 0.45, and the compactness parameters from 0.1 to 0.9 (see Table 3).
Table 3. Segmentation parameters for each of the integration schemes.
Segmentation Scheme | Scale Range | Shape Range | Compactness Range | Scale Increment | Shape Increment | Compactness Increment
VHR-based segmentation | 5–80 | 0.05–0.45 | 0.1–0.9 | 5 | 0.05 | 0.1
VHR/CHM-based segmentation | 5–80 | 0.05–0.45 | 0.1–0.9 | 5 | 0.05 | 0.1
MNF/CHM-based segmentation | 5–80 | 0.05–0.45 | 0.1–0.9 | 5 | 0.05 | 0.1
The segmentation accuracies of the three segmentation schemes were evaluated using the reference polygons, following the segmentation accuracy assessment method described in Section 3.2. The reference polygons used in the segmentation accuracy assessment were the polygons plotted according to the field survey points (see Figure 2).
Figure 10. Image segmentation parameters and their corresponding segmentation accuracies. (a–c) are segmentation parameters of the VHR/CHM data integration; (d–f) are segmentation parameters of the MNF/CHM data integration; and (g–i) are segmentation parameters of the VHR data. In (a), (d), and (g), the best segmentation result will be obtained within the shape parameter range, and the corresponding shape parameter will be chosen as the optimum shape parameter for its segmentation scheme. Similarly, the optimum compactness and scale parameters will be obtained from the results shown in (b), (e), (h) and (c), (f), (i).
The segmentation accuracies of all segmentation schemes are shown in Figure 10. Within the parameter ranges listed in Table 3, an optimum segmentation result was obtained for each segmentation scheme, and the corresponding parameters were chosen as the optimum segmentation parameters. Take the VHR/CHM-based segmentation as an example: to choose the optimum shape parameter, we first chose the scale and compactness parameters arbitrarily (15 and 0.8, respectively); the shape parameter was then varied from 0.05 to 0.45 with an increment of 0.05, giving nine segmentation results. The accuracies of these nine results were assessed, as shown in Figure 10a. The best segmentation accuracy was 84.8%, and the corresponding shape parameter of 0.05 was chosen as the optimum shape parameter for the VHR/CHM segmentation scheme. Each optimum segmentation parameter of the other schemes was chosen by following the same process. The optimum segmentation parameters and the segmentation results of each scheme are summarized in Table 4.
Table 4. Optimum Segmentation parameters and their corresponding results of the different segmentation feature integration schemes.
Segmentation Scheme | Optimum Scale | Optimum Shape | Optimum Compactness | OSR (%) | USR (%) | ASR (%)
VHR-based segmentation | 30 | 0.2 | 0.7 | 3.20 | 24.00 | 72.80
VHR/CHM-based segmentation | 15 | 0.05 | 0.8 | 6.40 | 8.80 | 84.80
MNF/CHM-based segmentation | 10 | 0.05 | 0.3 | 17.60 | 20.80 | 61.60
Table 4 shows that the optimum segmentation accuracy of the VHR/CHM scheme outperforms the other two segmentation schemes, with an ASR of 84.8%, an OSR of 6.4%, and a USR of 8.8%. The optimum segmentation accuracies of the VHR data alone are 72.8%, 3.2%, and 24% for ASR, OSR, and USR, respectively, and those of the MNF/CHM data combination are 61.6%, 17.6%, and 20.8%. The VHR/CHM-based segmentation achieved an ASR 12% higher than that of the VHR-based segmentation and 23.2% higher than that of the MNF/CHM-based segmentation. The USR of the VHR/CHM-based segmentation was 8.8%, lower than that of the other two segmentation schemes.
The image segmentation results of the three segmentation schemes are shown in Figure 11. The VHR/CHM-based image segmentation outperformed the VHR imagery in partitioning vegetation types with significant height differences (such as tree crowns and orchards). In the MNF-transformed imagery, the edges of the crop fields became somewhat fuzzier than in the original image, so the MNF/CHM-based segmentation had a higher under-segmentation rate. Because under-segmentation has a negative impact on object-based classification [44], the VHR/CHM-based segmentation result, which had the lowest under-segmentation rate, was adopted for the subsequent classification procedures.
Figure 11. Image segmentation results of the three segmentation schemes. (a) VHR image of the sub-study area; (b–d) show the VHR-, VHR/CHM-, and MNF/CHM-based segmentation results.

4.2. Classification Accuracies of Different Data Integrations

The shaded areas extracted using the PRI index are displayed in Figure 12. Visually interpreted shadowed areas were used to evaluate the PRI-based shadow extraction, and the results showed that the accuracy of the shadow areas extracted using the hyperspectral-derived PRI was 97.66%. The PRI-extracted shadow areas were applied as a mask to delineate the shadowed areas before the classification procedure.
Figure 12. The PRI extracted shadow area: (a) image with shaded areas; (b) extracted shadow area (in red polygons).
Classifications of crop species with five different feature integrations were conducted using the SVM classifier; the data combination schemes are listed in Table 5. The classifications with VHR and MNF data were taken as benchmarks to comparatively evaluate the performances of the data combination schemes for crop species classification. The results show that the combination of the hyperspectral MNF-transformed features and the LiDAR-derived CHM data produced the more accurate classification results. As shown in Figure 13, classification based only on VHR data resulted in a lower overall accuracy than its combination with CHM data: an increase of more than 8% in overall classification accuracy was obtained when VHR was integrated with CHM. The overall classification accuracy of the MNF/CHM combination is 9.16% higher than that of the MNF classification. By incorporating the GLCM and geometric features, the overall classification accuracy increased by more than 2% over the MNF/CHM-based classification.
Table 5. Summary of classification schemes.
Feature Integration | Feature Description
VHR | Four bands with a 1 m spatial resolution derived from the hyperspectral data (bands centered at 454.4 nm, 540.4 nm, 697.7 nm, and 826.3 nm)
VHR/CHM | VHR bands plus the CHM derived from the LiDAR data
MNF | The first 10 components of the MNF-transformed hyperspectral data
MNF/CHM | MNF features combined with the CHM data
MNF/CHM/GLCM/Geometric | MNF features and CHM combined with GLCM features (object-level standard deviation, angular second moment, contrast, dissimilarity, and entropy) and geometric features (shape index, length/width ratio, and rectangular fit)
Figure 13. Overall classification accuracies of different data integrations.
The LiDAR-derived CHM data made a substantial contribution to the classification accuracy increment of crop species with significant crop height differences. As shown in Table 6, compared with the VHR-based classification, the VHR/CHM-based classification accuracy increments of taller plants (shelter forest, orchards, nursery, and maize) were greater than those of the shorter crops. The VHR/CHM-based classification achieved an overall classification accuracy of 83.21%, with a kappa value of 0.8, which is 8.15% higher than the VHR-based classification (the overall classification accuracy is 75.06%, with a kappa of 0.7). The increments of the Producer’s and User’s classification accuracies are 15.55% and 15.55% for shelter forest, 34.79% and 19.37% for orchards, 0% and −20% for nursery, and 7.48% and 8.95% for maize. Similar results were achieved with the MNF/CHM- and MNF-based classification.
However, shorter plants such as cauliflower, leek, lettuce, maize, pepper, potato, and watermelon showed no accuracy increment or only a small one between the VHR/CHM- and VHR-based classifications: the maximum Producer’s and User’s accuracy increases are 11.76% for pepper and 8.95% for maize. The small accuracy increments for the shorter crops arose mainly because the vertical resolution of the LiDAR data (20 cm) is comparable to or larger than the plant height differences among the short plant species (such as cauliflower, leek, pepper, lettuce, potato, and watermelon). Similarly, the accuracy increments between the MNF/CHM- and MNF-based classifications are smaller for the shorter crops than for the taller crops, as shown in Table 6.
Compared to the VHR/CHM-based classification, the classification accuracy increased 5% when the MNF features were incorporated. The Z-test between the kappa statistics of the VHR/CHM- and MNF/CHM-based classification results was 2.89, which means that the result of the kappa statistic of the latter is significantly larger than that of the former. The overall accuracy of the MNF/CHM-based classification is 88.30%, with a kappa of 0.86. Crops that have an accuracy increment include leek, maize, nursery, orchard, potato, and shelter forest. This accuracy increment is mainly due to the addition of the MNF-transformed hyperspectral data.
Table 6. Plant heights and classification accuracy increments from VHR to VHR/CHM, and from MNF to MNF/CHM-based classifications.
Crop Species | Crop Height (cm) | PA Increment, VHR to VHR/CHM (%) | UA Increment, VHR to VHR/CHM (%) | PA Increment, MNF to MNF/CHM (%) | UA Increment, MNF to MNF/CHM (%)
Orchard | 369 | 34.79 | 19.37 | 24.54 | 20.65
Shelter forest | 1685 | 15.55 | 15.55 | 27.78 | 14.91
Nursery | 356 | 0 | −20 | −3.47 | −20
Maize | 212 | 7.48 | 8.95 | 5.76 | 11.17
Leek | 34 | −10 | 1.32 | 2.55 | −3.02
Cauliflower | 52 | 0 | −3.7 | 0.05 | −3.7
Pepper | 56 | 11.76 | 7.69 | 7.22 | 16.08
Lettuce | 37 | 0 | 0 | −5.75 | 0
Potato | 54 | 0 | 7.14 | −5.96 | 7.51
Watermelon | 30 | 0 | 0 | 0 | 9.42
Buildings | 412 | 5.83 | 0 | 0 | 20
Road | 0 | 11.11 | 5.56 | 51.99 | −10.26
Shadow | 0 | 0 | 0 | −28.17 | 2.53
The classification accuracy assessment results of the five classification schemes are shown in Table 7. For the MNF/CHM/GLCM/Geometric scheme, the overall classification accuracy is 90.33% with a kappa value of 0.89, and both the producer’s and user’s accuracies of watermelon are 100%. The nursery class has the lowest accuracy, with a producer’s accuracy of 66.67% and a user’s accuracy of 80%. The producer’s and user’s accuracies of maize, the most widely distributed crop type in our study area, are 100% and 85.6%, respectively. For most of the crop species, both the producer’s and user’s accuracies are above 80%. The result of the MNF/CHM/GLCM/Geo-based classification is depicted in Figure 14.
Table 7. Summary of classification accuracies of the different classification schemes.
Crop Species | VHR PA | VHR UA | VHR/CHM PA | VHR/CHM UA | MNF PA | MNF UA | MNF/CHM PA | MNF/CHM UA | MNF/CHM/GLCM/Geo PA | MNF/CHM/GLCM/Geo UA
Cauliflower | 96.3 | 100 | 96.3 | 96.3 | 96.3 | 96.3 | 96.3 | 96.3 | 94.44 | 94.44
Leek | 86.67 | 83.87 | 76.67 | 85.19 | 76.67 | 92 | 80 | 77.42 | 80 | 80
Lettuce | 100 | 100 | 100 | 100 | 100 | 100 | 87.5 | 100 | 90 | 100
Maize | 92.52 | 59.64 | 100 | 68.59 | 100 | 69.93 | 100 | 82.31 | 100 | 84.52
Nursery | 66.67 | 100 | 66.67 | 80 | 83.33 | 71.43 | 66.67 | 80 | 60 | 75
Orchard | 41.3 | 70.37 | 76.09 | 89.74 | 80.43 | 92.5 | 73.91 | 87.18 | 71.88 | 88.46
Pepper | 70.59 | 92.31 | 82.35 | 100 | 70.59 | 85.71 | 82.35 | 100 | 81.82 | 100
Potato | 76.47 | 92.86 | 76.47 | 100 | 82.35 | 100 | 70.59 | 100 | 72.73 | 100
Shelter forest | 75.56 | 75.56 | 91.11 | 91.11 | 91.11 | 93.18 | 88.89 | 90.91 | 86.67 | 96.3
Watermelon | 100 | 93.33 | 100 | 93.33 | 100 | 93.33 | 100 | 100 | 100 | 100
Buildings | 100 | 87.5 | 100 | 93.33 | 100 | 82.35 | 100 | 100 | 100 | 100
Road | 22.22 | 88.89 | 27.78 | 100 | 22.22 | 88.89 | 97.22 | 89.74 | 95.83 | 88.46
Shadow | 55.56 | 90.91 | 55.56 | 90.91 | 61.11 | 91.67 | 50 | 90 | 100 | 92.3
OA | 75.06 | | 83.21 | | 83.46 | | 88.3 | | 90.33 |
Kappa | 0.7 | | 0.8 | | 0.81 | | 0.86 | | 0.89 |
PA: Producer’s Accuracy (%); UA: User’s Accuracy (%).
Figure 14. The crop species map classified using the combination of MNF/CHM/GLCM/Geometric features in the object-based classification paradigm.

5. Conclusions

In this paper, we proposed a framework for mapping crop species by combining hyperspectral and LiDAR data in an object-based image analysis (OBIA) paradigm. To test the effectiveness of this framework, a study was conducted in an irrigated agricultural region in the central Heihe River Basin, where the landscape is fragmented and the crop planting structure is quite complex. A pre-processing procedure was proposed for extracting the mean crop height of the segmented image objects. The performances of different spectral dimension-reduced features from the hyperspectral data and their combinations with LiDAR-derived height information in image segmentation were evaluated and compared, and the contributions of the crop height derived from the LiDAR data and of the geometric and textural features of the image objects to the separability of the crop species were studied.
We evaluated and compared the performances of different combinations of features extracted from hyperspectral and LiDAR data for image segmentation and image classification. The main indications and conclusions derived from our analysis are the following:
(i)
The framework we presented in this study for mapping crop species by combining hyperspectral and LiDAR data in an object-based image analysis (OBIA) paradigm is effective. This approach produced a good crop species classification result, with an overall accuracy of 90.33% and a kappa coefficient of 0.89 in our study area, where there was a spatially fragmented agricultural landscape and a complicated planting structure.
(ii)
The image segmentation accuracy depends heavily on the hyperspectral dimension-reduction method. In this case, the VHR data selected from the hyperspectral bands yielded higher segmentation accuracy than the MNF-transformed data, and incorporating the CHM information extracted from the high point density LiDAR data significantly improved the segmentation accuracy of the VHR data.
(iii)
The height information derived from LiDAR data provided a substantial increase in the crop species classification accuracy. The MNF/CHM combination produced higher accuracy of crop species classification than VHR/CHM.
(iv)
Incorporating the textural and geometric features (i.e., the shape index, length-width ratio, and rectangular fit) of objects could significantly increase the crop species classification accuracy, which indicates that, due to its ability to provide diverse textural and geometric features, object-based image classification is effective for crop species mapping in regions with spatially fragmented landscape and complicated planting structure.
The remote sensing data used in this paper were airborne hyperspectral data with a high spatial resolution and LiDAR data with a high point-cloud density. However, the crop species classification method presented here is also applicable to combining satellite hyperspectral data of moderate spatial resolution with LiDAR data of lower point-cloud density for crop mapping over large areas. For future development of this study, it would be interesting to investigate the performance of LiDAR data combined with additional features derived from hyperspectral data in both image segmentation and classification. Further testing of the method in different areas with other kinds of crops and with LiDAR data of different quality should also be attempted.

Acknowledgments

This work was jointly supported by the National Natural Science Foundation of China (Grant No. 91125004), the National Key Basic Research Program of China (973 Program) Project (Grant No. 2013CB733403), the National Natural Science Foundation of China (Grant No. 41271347), and the High-Tech Research and Development Program of China (863 Program) Project (Grant No. 2012AA12A305). The authors would like to thank the three anonymous reviewers for their constructive comments, which helped to improve this paper.

Author Contributions

Yanchen Bo contributed the main idea, designed the methodology, and finished the final version of this paper. Xiaolong Liu processed all the data, conducted all the experiments, and drafted the preliminary version of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

