Article

Evaluation of an Airborne Remote Sensing Platform Consisting of Two Consumer-Grade Cameras for Crop Identification

1 College of Resource and Environment, Huazhong Agricultural University, 1 Shizishan Street, Wuhan 430070, China
2 USDA-Agricultural Research Service, Aerial Application Technology Research Unit, 3103 F & B Road, College Station, TX 77845, USA
3 College of Mechanical and Electronic Engineering, Northwest A&F University, 22 Xinong Road, Yangling 712100, China
4 Anhui Engineering Laboratory of Agro-Ecological Big Data, Anhui University, 111 Jiulong Road, Hefei 230601, China
5 College of Engineering, Huazhong Agricultural University, 1 Shizishan Street, Wuhan 430070, China
* Author to whom correspondence should be addressed.
Remote Sens. 2016, 8(3), 257; https://doi.org/10.3390/rs8030257
Submission received: 21 January 2016 / Revised: 7 March 2016 / Accepted: 11 March 2016 / Published: 18 March 2016
(This article belongs to the Special Issue Remote Sensing in Precision Agriculture)

Abstract

Remote sensing systems based on consumer-grade cameras have been increasingly used in scientific research and remote sensing applications because of their low cost and ease of use. However, the performance of consumer-grade cameras for practical applications has not been well documented in related studies. The objective of this research was to apply three commonly-used classification methods (unsupervised, supervised, and object-based) to three-band imagery with RGB (red, green, and blue bands) and four-band imagery with RGB and near-infrared (NIR) bands to evaluate the performance of a dual-camera imaging system for crop identification. Airborne images were acquired from a cropping area in Texas and mosaicked and georeferenced. The mosaicked imagery was classified using the three classification methods to assess the usefulness of NIR imagery for crop identification and to evaluate performance differences between the object-based and pixel-based methods. Image classification and accuracy assessment showed that the additional NIR band imagery improved crop classification accuracy over the RGB imagery and that the object-based method achieved better results with additional non-spectral image features. The results from this study indicate that the airborne imaging system based on two consumer-grade cameras used in this study can be useful for crop identification and other agricultural applications.


1. Introduction

Remote sensing has played a key role in precision agriculture and other agricultural applications [1]. It provides a very efficient and convenient way to capture and analyze agricultural information. As early as 1972, the Multispectral Scanner System (MSS) sensors were used for accurate identification of agricultural crops [2]. Since then, numerous commercial satellite and custom-built airborne imaging systems have been developed for remote sensing applications, with agriculture being one of the major application fields. To many people, remote sensing suggests complex designs and sophisticated sensors on satellites and other platforms. This perception was accurate for a long time, especially for the scientific-grade remote sensing systems on satellites and manned aircraft. However, with advances in electronic imaging technology, remote sensing sensors have made significant breakthroughs in digitalization, miniaturization, ease of use, spatial resolution, and affordability. Recently, more and more imaging systems based on consumer-grade cameras have been designed as remote sensing platforms [3,4,5,6].
As early as the 2000s, consumer-grade cameras with 35 mm film were mounted on unmanned aerial vehicle (UAV) platforms to acquire imagery over small areas for range and resource managers [7]. Remote sensing systems with consumer-grade cameras have advantages over scientific-grade platforms, such as low cost, high spatial resolution [8,9], and easy deployment [7]. There are two main types of consumer-grade cameras: non-modified and modified. Non-modified cameras capturing visible light with R, G, and B channels have been commonly used since the early days of consumer-grade cameras for remote sensing [7,10,11]. This type of camera has also been commonly used for aerial photogrammetric surveys [12]. However, most scientific-grade remote sensing sensors include near-infrared (NIR) detection capabilities because of the NIR band's sensitivity to plants, water, and other cover types [13]. Therefore, using modified cameras to capture NIR light with different modification methods is becoming popular.
Modified NIR cameras are generally created by replacing the NIR-blocking filter in front of the complementary metal-oxide-semiconductor (CMOS) or charge-coupled device (CCD) sensor with a long-pass NIR filter. A dual-camera imaging system is a common choice, with one camera for normal color imaging and the other for NIR imaging [14,15]. With a dual-camera configuration, four-band imagery with RGB and NIR sensitivities can be captured simultaneously, and some consumer-grade cameras have been modified for this purpose [10]. The other approach is to remove the internal NIR-blocking filter and replace it with a blue-blocking [16] or a red-blocking filter [17]. This approach can capture three-band imagery, including the NIR band and two visible bands, with a single camera.
Both methods can be implemented, but each still has some issues [16]. With dual cameras, the separate RGB and NIR images need to be registered to each other. With a single modified camera, the NIR light may contaminate the other two bands when the NIR band is captured. In other words, extensive post-processing is required to obtain the desired imagery. Therefore, it is necessary to decide whether a simple unmodified RGB camera or a relatively complex dual-camera system with an NIR band should be selected. Many remote sensing imaging systems based on unmodified single cameras have been used for agricultural applications with satisfactory results [9,14,18]. Consumer-grade cameras cannot capture very detailed spectral information, but most of them can obtain very high or ultra-high ground resolution because these cameras can be carried on such low-flying platforms as UAVs, airplanes, and balloons. Moreover, with the rapid advancement of object-based image analysis (OBIA) methods, imagery with abundant shape, texture, and context information can usually improve classification performance for practical applications [19,20]. Therefore, whether to choose pixel-based or object-based image processing methods is another question that needs to be considered.
Despite some shortcomings, consumer-grade cameras, with their low cost and easy deployment, have gradually come into use in different application fields over the last decade, in particular for agricultural applications such as crop identification [10], monitoring [11,14,21], mapping [9], pest detection [18], detection of invasive species [17], and crop phenotyping [6,22]. It is becoming practical for farmers to use these consumer-grade cameras for crop production and protection. Therefore, it is important to evaluate and compare the performance of normal RGB cameras and modified NIR cameras using different image processing methods.
Scientists at the Aerial Application Technology Research Unit at the U.S. Department of Agriculture-Agricultural Research Service's Southern Plains Agricultural Research Center in College Station, Texas, have assembled a dual-camera imaging system with both visible and NIR sensitivities using two consumer-grade cameras. The overall goal of this study was to evaluate the performance of RGB imagery alone and RGB imagery combined with NIR imagery for crop identification. The specific objectives were to: (1) compare the differences between the three-band and four-band imagery for crop identification; (2) assess the image classification performance of the object-based and pixel-based analysis methods; and (3) make recommendations for the selection of imaging systems and image processing methods for practical applications.

2. Materials and Methods

2.1. Study Area

This study was conducted in the Brazos River Valley near College Station, Texas, in July 2015 within a cropping area of 21.5 km2 (Figure 1). The area is located in the lower part of the Brazos River valley (the "Brazos Bottom") and has a humid subtropical climate.
In this area, the main crops in the 2015 growing season were cotton, corn, sorghum, soybean, and watermelon. Cotton was the main crop with the largest cultivated area, and it had very diverse growing conditions due to different planting dates and management practices. At the time of imaging, most cornfields were near physiological maturity with very few green leaves, and corn was drying in the fields for harvest, while most sorghum fields were in the reproductive phase and beginning to senesce. Soybean and watermelon were at the vegetative growth stages. Due to cloudy and rainy weather in much of May and June, aerial imagery was not acquired during the optimum period for crop discrimination based on the crop calendars for this study area. However, such weather conditions are a common constraint for agricultural remote sensing.

2.2. Imaging System and Airborne Image Acquisition

2.2.1. Imaging System and Platform

The dual-camera imaging system used in this study consisted primarily of two Nikon D90 digital CMOS cameras with Nikon AF Nikkor 24 mm f/2.8D lenses (Nikon, Inc., Melville, NY, USA). One camera was used to capture three-band RGB images. The other camera was modified to capture NIR images by replacing the infrared-blocking filter installed in front of the camera's CMOS sensor with a 720 nm long-pass filter (Life Pixel Infrared, Mukilteo, WA, USA). The other components of the system included two GPS receivers, a video monitor, and a wireless remote trigger, as shown in Figure 2. A detailed description of this system can be found in the single-camera imaging system described by Yang et al. [18]. The difference between the two imaging systems is that the single-camera system contained only one Nikon D90 camera for taking RGB images, while the dual-camera system had the RGB camera plus a modified camera for the NIR imaging needed for this study. The dual-camera imaging system was attached via a camera mount box onto an Air Tractor AT-402B as shown in Figure 2.

2.2.2. Spectral Characteristics of the Cameras

The spectral sensitivity of the two cameras was measured in the laboratory using a monochromator (Optical Building Blocks, Inc., Edison, NJ, USA) and a calibrated photodiode. The two cameras were spectrally calibrated with their lenses by taking photographs of monochromatic light from the monochromator projected onto a white panel. The calibrated photodiode was positioned at the same distance as the cameras to measure the light intensity. The relative spectral response could be calculated for a given wavelength λ and a given channel (R, G, or B) as shown in Equation (1) [23]:
R(λ, n) = [C(λ, n) − C_dark] / I(λ),  n = r, g, b;  λ = 400–1000 nm  (1)
where R(λ, n) is the spectral response of channel n (= r, g, b) at wavelength λ, I(λ) is the light intensity measured with the photodiode at wavelength λ, C(λ, n) is the digital count of channel n at wavelength λ, and C_dark is the mean digital count of the dark background for channel n. The wavelength ranged from 400 to 1000 nm with a measurement interval of 20 nm. The average digital count for each channel was determined for the center of the projected light beam using image-processing software (MATLAB R2015a, MathWorks, Inc., Natick, MA, USA). In addition, the images were recorded in the camera's raw format.
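For readers who prefer a computational view of Equation (1), the following is a minimal sketch of the per-channel response calculation, assuming the mean digital counts and photodiode intensities have already been extracted from the raw images; the array names and the final peak normalization (used for plotting curves such as those in Figure 3) are illustrative assumptions, not part of the original workflow.

```python
import numpy as np

def relative_spectral_response(counts, dark_counts, intensity):
    """Relative spectral response per channel, following Equation (1):
    R(lambda, n) = (C(lambda, n) - C_dark) / I(lambda)."""
    counts = np.asarray(counts, dtype=float)        # (n_wavelengths, 3) mean counts for r, g, b
    dark = np.asarray(dark_counts, dtype=float)     # (3,) mean dark-background counts
    intensity = np.asarray(intensity, dtype=float)  # (n_wavelengths,) photodiode intensity
    response = (counts - dark) / intensity[:, None]
    # Normalize each channel to its peak so the curves are comparable (as in Figure 3)
    return response / response.max(axis=0)

# Hypothetical measurement grid: 400-1000 nm at 20 nm steps, as reported above
wavelengths = np.arange(400, 1001, 20)
```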
The normalized spectral sensitivities of the two cameras (Figure 3) show that the sensitivity extended from 400 to 700 nm for the non-modified camera and from 680 to 1000 nm for the modified camera. With the 720 nm long-pass filter, the spectral response rose from zero at 680 nm to its maximum at 720 nm. The spectral sensitivity curves overlap somewhat among the channels of each camera. This is very common in consumer-grade cameras and is one of the reasons that this type of camera was not commonly used for scientific applications in the past. For the modified camera, the red channel had a much stronger response in the NIR range than the blue and green channels and the monochrome imaging mode. Thus, the red channel was chosen as the NIR band for remote sensing applications.

2.3. Image Acquisition and Pre-Processing

2.3.1. Airborne Image Acquisition

Airborne images were taken over the study area at an altitude of 1524 m (5000 ft.) above ground level (AGL) with a ground speed of 225 km/h (140 mph) on 15 July 2015 under sunny conditions. The spatial resolution was 0.35 m at this height. To achieve at least 50% overlap along and between the flight lines, images were acquired at 5-s intervals. The two cameras simultaneously and independently captured 144 images each over the study area. Each image was recorded in both 12-bit RAW format for processing and JPG format for viewing and checking.
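As a rough consistency check of these flight parameters, the ground sampling distance and along-track overlap can be estimated from the reported altitude, focal length, ground speed, and trigger interval. The sensor dimensions and pixel count below are nominal Nikon D90 values and are assumptions, not figures from the paper.

```python
# Back-of-the-envelope check of GSD and along-track overlap for the flight
# parameters above. Sensor geometry uses nominal Nikon D90 values
# (23.6 mm x 15.8 mm, 4288 x 2848 px), which are assumptions, not paper values.
focal_length_mm = 24.0
altitude_m = 1524.0
sensor_width_mm, sensor_height_mm = 23.6, 15.8
pixels_wide = 4288

gsd_m = altitude_m * (sensor_width_mm / pixels_wide) / focal_length_mm  # ~0.35 m

ground_speed_ms = 225.0 / 3.6                # 225 km/h -> 62.5 m/s
exposure_spacing_m = ground_speed_ms * 5.0   # 5-s trigger interval -> ~312 m

# Assume the short sensor dimension is oriented along track (worst case)
along_track_footprint_m = altitude_m * sensor_height_mm / focal_length_mm
forward_overlap = 1.0 - exposure_spacing_m / along_track_footprint_m

print(f"GSD ~ {gsd_m:.2f} m, forward overlap ~ {forward_overlap:.0%}")  # ~0.35 m, ~69%
```

These values are consistent with the reported 0.35-m resolution and the target of at least 50% forward overlap.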

2.3.2. Image Pre-Processing

Vignetting and geometric distortion are inherent issues with most cameras and usually introduce some inaccuracy into image analysis results, especially for modified cameras [21,24]. Therefore, the free Capture NX-D 1.2.1 software (Nikon, Inc., Tokyo, Japan) provided by the camera manufacturer was used to correct the vignetting and geometric distortion in the images. The corrected images were saved in 16-bit TIFF format to preserve image quality.
There were 144 images to be mosaicked for each camera. Pix4D Mapper (Pix4D, Inc., Lausanne, Switzerland), a software package for automatic image mosaicking with high accuracy [25], was selected for this task. To improve the positional accuracy of the mosaicked images, square white plastic panels, 1 m on a side, were placed across the study area. A Trimble GPS Pathfinder ProXRT receiver (Trimble Navigation Limited, Sunnyvale, CA, USA), which provided 0.2-m average horizontal position accuracy with real-time OmniSTAR satellite correction, was used to collect the coordinates of these panels. Fifteen ground control points (GCPs), as shown in Figure 1, were used for georeferencing. As shown in Figure 4, the spatial resolutions were 0.399 and 0.394 m for the mosaicked RGB and NIR images, respectively. The absolute horizontal position accuracy was 0.470 and 0.701 m for the respective mosaicked images. These positional errors were well within 1 to 3 times the ground sampling distance (GSD) or spatial resolution [26].
To generate a mosaicked four-band image, the mosaicked RGB and NIR images were registered to each other using the AutoSync module in ERDAS Imagine (Intergraph Corporation, Madison, AL, USA). The RGB image was chosen as the reference image because it had better image quality than the NIR image. Several tie control points were selected manually before the automatic registration. Thousands of tie points were then generated by AutoSync, and a third-order polynomial geometric model was used as recommended for this number of tie points [27]. The root mean square (RMS) error for the registration was 0.49 pixels, or 0.2 m. The combined image was resampled to 0.4-m spatial resolution. The color-infrared (CIR) composite of the four-band image is shown in Figure 4.
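The AutoSync workflow itself is proprietary, but the underlying idea of fitting a polynomial transform to tie points and checking the residual RMS error can be sketched with open-source tools. The snippet below uses scikit-image as a stand-in, and the tie-point arrays are hypothetical; it is not the procedure used in ERDAS Imagine.

```python
import numpy as np
from skimage import transform

def register_polynomial(nir_points, rgb_points, order=3):
    """Fit a polynomial transform mapping NIR tie points onto the RGB reference
    and report the residual RMS error in pixels (a stand-in for ERDAS AutoSync)."""
    src = np.asarray(nir_points, dtype=float)   # (N, 2) tie-point coordinates in the NIR mosaic
    dst = np.asarray(rgb_points, dtype=float)   # (N, 2) matching coordinates in the RGB mosaic
    tform = transform.estimate_transform("polynomial", src, dst, order=order)
    residuals = tform(src) - dst
    rms = np.sqrt((residuals ** 2).sum(axis=1).mean())
    return tform, rms

# Usage with hypothetical tie points found by an automatic matcher:
# tform, rms_px = register_polynomial(nir_pts, rgb_pts)
# print(f"registration RMS = {rms_px:.2f} px")
```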

2.4. Crop Identification

The selection of band combinations and classification methods generally influences classification results. To quantify and analyze these effects on crop identification, three typical and widely used image classification methods (unsupervised, supervised, and object-based) were selected. Meanwhile, to examine how the number of land use and land cover (LULC) classes affects the classification results, six different class groupings were defined as shown in Table 1. It should be noted that the three-band or four-band image was first classified into 10 classes and the classification results were then regrouped into six, five, four, three, and two classes. For the ten-class grouping, the impervious class mainly included paved roads and buildings, bare soil and fallow were grouped in one class, and the water class included the river, ponds, and pools. Considering that soybean and watermelon accounted for only a small portion of the study area, they were treated as non-crop vegetation along with grass and forest in the five-class grouping and as non-crop in the four- and three-class groupings.
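As a concrete illustration of this regrouping step, the sketch below remaps a ten-class map to the two-class grouping with a lookup table. The integer class codes are hypothetical, and the crop/non-crop assignment follows Table 1 as reconstructed here (with soybean and watermelon counted as crop in the two-class grouping).

```python
import numpy as np

# Hypothetical integer codes for the ten-class map
TEN_CLASSES = {1: "impervious", 2: "bare soil and fallow", 3: "water", 4: "grass",
               5: "forest", 6: "soybean", 7: "watermelon", 8: "corn",
               9: "sorghum", 10: "cotton"}

# Two-class grouping (Table 1): classes 1-5 -> non-crop (0), classes 6-10 -> crop (1)
TWO_CLASS = {1: 0, 2: 0, 3: 0, 4: 0, 5: 0,
             6: 1, 7: 1, 8: 1, 9: 1, 10: 1}

def regroup(class_map, lookup):
    """Remap a classified image (2-D array of class codes) using a lookup table."""
    out = np.zeros_like(class_map)
    for old, new in lookup.items():
        out[class_map == old] = new
    return out

# two_class_map = regroup(ten_class_map, TWO_CLASS)
```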

2.4.1. Pixel-Based Classification

The unsupervised Iterative Self-Organizing Data Analysis (ISODATA) technique and supervised maximum likelihood classification were chosen as the pixel-based methods in this study. Given the diversity and complexity of the land cover in the study area, the number of clusters was set to ten times the number of land cover classes. The maximum number of iterations was set to 20 and the convergence threshold to 0.95. All of the clusters were then assigned to the 10 land cover classes. For the supervised maximum likelihood classification, each class was further divided into 3 to 10 subclasses because of the variability within each land cover class, and these subclasses were merged after classification. For each subclass, 5 to 15 training samples were selected, and the total number of training samples was approximately equal to the number of clusters in ISODATA. The same training samples were used for supervised classification of both the three-band and four-band images.
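A minimal sketch of the two pixel-based approaches is shown below, using open-source stand-ins rather than the software used in the study: k-means approximates ISODATA (which additionally splits and merges clusters between iterations), and Gaussian maximum likelihood classification corresponds to quadratic discriminant analysis. The array names and parameter values are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

def classify_pixels(image, train_pixels=None, train_labels=None, n_clusters=100):
    """Pixel-based classification of a (rows, cols, bands) image with 3 or 4 bands."""
    pixels = image.reshape(-1, image.shape[-1]).astype(float)

    if train_pixels is None:
        # Unsupervised: k-means as a simple stand-in for ISODATA
        # (ISODATA additionally splits/merges clusters between iterations)
        labels = KMeans(n_clusters=n_clusters, max_iter=20, n_init=5,
                        random_state=0).fit_predict(pixels)
    else:
        # Supervised: Gaussian maximum likelihood == quadratic discriminant analysis
        clf = QuadraticDiscriminantAnalysis(store_covariance=True)
        clf.fit(train_pixels, train_labels)
        labels = clf.predict(pixels)

    return labels.reshape(image.shape[:2])
```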

2.4.2. Object-Based Classification

OBIA has been recognized for its outstanding classification performance on high-resolution imagery. Segmentation and the definition of classification rules are the main steps of object-based classification. To make the process transparent and the results objective, the estimation of scale parameter (ESP) tool was used for choosing segmentation parameters [28], and the classification and regression tree (CART) classifier was used for generating classification rules.
Segmentation for object-based classification was performed using the commercial software eCognition Developer (Trimble Inc., Munich, Germany). The segmentation process, which integrates spectral, shape, and compactness factors, is very important for the subsequent classification [29], but standardized or widely accepted methods for determining the optimal scale for different types of imagery or applications are lacking [30]. To minimize the influence of subjective choices in this step, candidate segmentation scales were estimated with the ESP tool [28], which evaluates the variation in heterogeneity of image objects iteratively generated at multiple scale levels to identify the most appropriate scales. For this study, a scale step of 50 was used to search for optimal segmentation scales from 0 to 10,000 with the ESP tool, and three segmentation parameters (SP) (1900, 4550, and 9200) were estimated. To simplify the processing, the SP of 4550 was used for image segmentation, as it was suitable for most land cover classes without over-segmentation or under-segmentation. To further improve the segmentation results, spectral difference segmentation with a scale of 1000 was performed to merge neighboring objects with similar spectral values. The three-band and four-band image segmentations produced 970 and 950 image objects, respectively, as shown in Figure 5.
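eCognition's multiresolution segmentation and the ESP tool are proprietary, so the sketch below only illustrates the general idea of generating image objects and summarizing their spectra, using Felzenszwalb segmentation from scikit-image as an open-source alternative; it is not the algorithm used in the study, and all parameter values are illustrative.

```python
import numpy as np
from skimage.segmentation import felzenszwalb

def segment_objects(image, scale=400, sigma=0.8, min_size=200):
    """Segment a (rows, cols, bands) mosaic into image objects.
    Felzenszwalb segmentation is only a rough analogue of eCognition's
    multiresolution segmentation; the parameter values here are illustrative."""
    return felzenszwalb(image.astype(float), scale=scale, sigma=sigma, min_size=min_size)

def object_mean_spectra(image, labels):
    """Mean band values for each image object (one row per object)."""
    flat = image.reshape(-1, image.shape[-1]).astype(float)
    ids = labels.ravel()
    n_objects = ids.max() + 1
    sums = np.zeros((n_objects, flat.shape[1]))
    np.add.at(sums, ids, flat)
    counts = np.bincount(ids, minlength=n_objects)[:, None]
    return sums / counts
```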
Object-based classification in software such as eCognition is mainly based on a series of rules built from object features. User knowledge and past experience can be translated into constraint rules for classification [31]. However, manually defined rules can be unreliable and highly subjective. Therefore, the CART algorithm was used to train the object-based classification rules [32]. Because CART is a non-parametric rule-based classifier with a "white box" workflow [30], the structure and terminal nodes of a decision tree are easy to interpret, allowing the user to understand and evaluate the mechanism of the object-based classification method.
The CART classifier included in eCognition creates a decision-tree model from selected features of the training samples. To minimize the impact of selecting different sample sets, the sample sets used in the supervised classification were also used here; the difference was that the samples were converted into image objects. These image objects, containing the class information, were then used as the training samples for the object-based classification. Three main feature types were used for modeling: layer spectral features, including some vegetation indices (VIs); shape features; and texture features (Table 2) [33,34,35,36,37,38,39,40,41,42,43,44,45].
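The following sketch trains a CART-style classifier on a small illustrative subset of the object features in Table 2 (band means, NDVI, and an eCognition-style shape index), using scikit-learn's DecisionTreeClassifier in place of the CART implementation built into eCognition. The band order, feature subset, and array names are assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

def build_object_features(mean_spectra, areas, perimeters):
    """Assemble a small illustrative per-object feature table: band means,
    NDVI (when an NIR band is present), and an eCognition-style shape index."""
    feats = [np.asarray(mean_spectra, dtype=float)]
    if feats[0].shape[1] == 4:                   # assumed band order: blue, green, red, NIR
        red, nir = feats[0][:, 2], feats[0][:, 3]
        ndvi = (nir - red) / (nir + red + 1e-9)
        feats.append(ndvi[:, None])
    shape_index = np.asarray(perimeters, float) / (4.0 * np.sqrt(np.asarray(areas, float)))
    feats.append(shape_index[:, None])
    return np.hstack(feats)

# Train CART-style rules from labeled training objects (hypothetical arrays):
# X_train = build_object_features(train_means, train_areas, train_perimeters)
# tree = DecisionTreeClassifier(random_state=0).fit(X_train, train_labels)
# print(export_text(tree))   # inspect the "white box" rule structure
```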

2.5. Accuracy Assessment

For accuracy assessment, 200 random points were generated for each classification map in a stratified random pattern, with at least 10 points assigned to each class. Three classification methods were applied to two types of images, giving six classification maps, so a total of 1200 points were used for accuracy assessment [30]. The number of points and percentages by class type are given in Table 3. Ground verification of all the points for the LULC classes was performed shortly after image acquisition; if one or more points fell within a field, the field was checked. Overall accuracy [46], the confusion matrix [47], and the kappa coefficient [48] were calculated. To evaluate the performance of the image types and methods, average kappa coefficients were calculated by class and by method.
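These accuracy measures can be computed directly from the reference and predicted labels at the assessment points; the sketch below is a minimal illustration with hypothetical arrays, not the software actually used in the study.

```python
from sklearn.metrics import confusion_matrix, accuracy_score, cohen_kappa_score

def assess_accuracy(reference_labels, predicted_labels):
    """Confusion matrix, overall accuracy, and kappa for one classification map."""
    cm = confusion_matrix(reference_labels, predicted_labels)
    overall = accuracy_score(reference_labels, predicted_labels)
    kappa = cohen_kappa_score(reference_labels, predicted_labels)
    return cm, overall, kappa

# Usage with hypothetical ground-verified points:
# cm, oa, k = assess_accuracy(ground_truth_at_points, map_values_at_points)
# print(f"overall accuracy = {oa:.2%}, kappa = {k:.2f}")
```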

3. Results

3.1. Classification Results

Figure 6 shows the ten-class classification maps produced by the three methods applied to the three-band and four-band images: unsupervised classification of the three-band image (3US), unsupervised classification of the four-band image (4US), supervised classification of the three-band image (3S), supervised classification of the four-band image (4S), object-based classification of the three-band image (3OB), and object-based classification of the four-band image (4OB). To compare the actual differences between the pixel-based and object-based methods directly [29], no post-processing operations such as clump, sieve, or eliminate were performed on the pixel-based classification maps, and no generalization was applied to the object-based classification maps.
Most of the classification maps appear to distinguish the different land cover types reasonably well. Visually, the "salt-and-pepper" effect on the pixel-based maps is the most obvious difference from the object-based maps. The object-based maps have a much cleaner appearance because the different cover types are represented by homogeneous image objects. Without considering map accuracy, the object-based classification maps look cleaner and more suitable for producing thematic maps. Visually, it is difficult to judge the differences between the unsupervised and supervised methods or between the three-band and four-band images.
Specifically, the classification results for water and impervious surfaces were highly consistent. Because of the lack of an NIR band, some water areas in the three-band image were classified as bare soil and fallow by all the methods. Sorghum and corn were difficult to distinguish because both crops were at late growth stages with reduced green leaf area. Corn was close to physiological maturity and its above-ground biomass was fully senescent, whereas sorghum was in the reproductive phase and beginning to senesce but still had significant green leaf material. Although the late growth stages caused a reduction in canopy NDVI values for both corn and sorghum, background weeds and soil exposure also affected the overall NDVI values. All crops and cover types showed varying degrees of confusion among themselves. This confusion also occurred in the object-based maps, but it does not appear to be as obvious as in the pixel-based maps.

3.2. Accuracy Assessment

Table 4 summarizes the accuracy assessment results for the ten-class and two-class classification maps for the three methods applied to the two images. The accuracy assessment results for the other groupings are discussed and compared with the ten-class and two-class results in Section 4.3. Overall accuracy for the ten-class classification maps ranged from 58% for 3US to 78% for 4OB, and overall kappa ranged from 0.51 for 3US to 0.74 for 4OB. As expected, the overall accuracy and kappa were higher for the four-band image than for the three-band image for all three methods. Among the three methods, the object-based method performed better than the two pixel-based methods, and the supervised method was slightly better than the unsupervised method.
For the individual classes, the non-plant classes such as water, impervious, and bare soil and fallow had better and more stable accuracy results across all six scenarios, with average kappa values of 0.85, 0.82, and 0.74, respectively. Due to its variable growth stages and management conditions, the main crop class, cotton, had a relatively low accuracy with an average kappa of 0.47 across all the scenarios. Although at later growth stages, sorghum and corn had relatively good accuracy with average kappa values of 0.67 and 0.62, respectively. The main reason was that both crops were senescing and had less green leaf material, so they could easily be distinguished from other vegetation. Soybean and watermelon had unstable accuracy results among the six scenarios, but their differentiation was significantly improved with the object-based method. Grass and forest in the study area were difficult to distinguish using the pixel-based methods, but they were more accurately separated with the object-based method.
For the two broad classes (crop and non-crop), overall accuracy ranged from 76% for 3US to 91% for 4OB, and overall kappa ranged from 0.51 for 3US to 0.82 for 4OB. Clearly, overall accuracy and kappa were generally higher for the two-class maps than for the ten-class maps. The two-class classification maps will be useful for applications in which only total cropping area information is needed.

4. Discussion

4.1. Importance of NIR Band

To analyze the importance of the NIR band, some kappa coefficients from Table 4 were rearranged and the average coefficients by image (AKp1) and by method (AKp2) were calculated (Table 5).
It can be seen from Table 5 that the NIR band improved the kappa coefficients for four of the five crops and for three of the five non-crop classes. The net increases in AKp1 for the four crops were 0.28 for soybean, 0.12 for watermelon, 0.07 for cotton, and 0.03 for sorghum, while the decrease in AKp1 for corn was 0.05. Although the classification of soybean was greatly improved, soybean accounted for only a very small portion (less than 2.5%) of the study area. Because of its small area and misclassification, the classification results for soybean were unstable, as shown by the single zero kappa value in Table 4. The improvement for watermelon was mainly due to the object-based classification method. The classification of corn became worse mainly because of its later growth stage. Corn had low chlorophyll content, as shown by its flat RGB response, and reduced water content, as indicated by its relatively low NIR reflectance compared with the other vegetation classes. These observations are confirmed by the spectral curves shown in Figure 3, which were derived by calculating the average spectral values for each class from the training samples used for the supervised classification. The spectral curve of corn had the lowest NIR reflectance among the vegetation classes. In other words, the NIR band was not sensitive to corn at this stage, which had an NIR response similar to that of the bare soil and fallow fields. In fact, the bare soil and fallow class was one of the main classes confused with corn, as shown in Table 4.
For the non-crop classes, the NIR band improved the classification of the water, impervious, and bare soil classes. This result conforms with the general knowledge that NIR is effective at distinguishing water and impervious surfaces. The grass and forest classes also benefited from the NIR band with the supervised method.
Comparing the three-band and four-band images by classification method, the average kappa coefficients (AKp2) increased for each of the three methods for the combined crop class and for two of the three methods for the combined non-crop class. If AKp3 is the average of the AKp2 values for the three methods, AKp3 increased from 0.52 for the three-band image to 0.61 for the four-band image for the crop class, and from 0.67 to 0.71 for the non-crop class. Thus the crop class benefited from the NIR band more than the non-crop class.
If AKp4 is the average of the AKp3 values for the two general classes, AKp4 increased from 0.60 for the three-band image to 0.66 for the four-band image. Therefore, the addition of the NIR band improved the classification results over the normal RGB image.
To illustrate the classification results and explain the misclassification between some classes, the spectral separability between any two classes, in terms of Euclidean distance, was calculated in ERDAS Imagine. To facilitate discussion, the Euclidean distance was normalized by the following formula:
x′ = (x − x0) / max|x − x0|  (2)
where x is the Euclidean distance between any two classes based on the training samples, x0 is the average of all pairwise class distances for either the three-band or the four-band image, and x′ is the normalized spectral distance, ranging from −1 for the worst separability to 1 for the best separability.
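A minimal sketch of the normalization in Equation (2), applied to a vector of pairwise class distances, is shown below; the variable names and the example values are hypothetical.

```python
import numpy as np

def normalize_separability(distances):
    """Normalize pairwise Euclidean distances as in Equation (2):
    x' = (x - x0) / max|x - x0|, where x0 is the mean of all pairwise distances.
    Values near +1 indicate the best separability, near -1 the worst."""
    d = np.asarray(distances, dtype=float)
    centered = d - d.mean()
    return centered / np.abs(centered).max()

# e.g. normalize_separability([12.0, 4.5, 8.3]) -> approx. [0.99, -1.00, 0.01]
```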
From the normalized Euclidean distance results shown in Figure 7, the forest and impervious classes had the best separation, while soybean and cotton had the worst separation for both the three-band and four-band images. These results help explain why some classes had higher classification accuracy and kappa values than others. In general, the non-crop classes such as forest, water, and impervious had high separability from the crop classes, while the crop classes had relatively low separability among themselves. Corn and sorghum are near the bottom of the list, which further explains why they were difficult to separate. More class pairs lie above the average spectral separability for the four-band image than for the three-band image, indicating that the NIR band is useful for crop identification, especially for plants at their vegetative growth stages.

4.2. Importance of Object-Based Method

As can be seen from Table 4 and Table 5, the selection of the classification methods had a great effect on classification results. To clearly see this effect, the kappa analysis results were rearranged by the classification methods as shown in Table 6.
The average kappa coefficients between the three-band and four-band images (AKp5) were calculated for all the crop and non-crop classes for each of the three classification methods. For all the crop classes, the object-based method performed best with the highest AKp5 values, followed by the supervised and unsupervised methods. Moreover, the object-based method performed better than the other two methods for all the non-crop classes except water, for which the unsupervised method was best. Similarly, if AKp6 is the average of the AKp5 values for the five crop classes, the AKp6 values for the crop class were 0.43, 0.51, and 0.76 for the unsupervised, supervised, and object-based methods, respectively. The AKp6 values for the non-crop class were 0.65, 0.65, and 0.78 for the three respective classification methods.
Clearly, the object-based method was superior to the pixel-based methods. This is because the object-based method uses homogeneous image objects as the processing units and draws on many shape and texture features, as shown in Table 2, whereas the pixel-based methods use only the spectral information in each pixel. Figure 8 shows the decision trees and the number of features involved in the object-based classification process, which were created automatically by eCognition.
Figure 9 shows the average kappa coefficients and their differences for the crop and non-crop classes for the three classification methods. The difference in AKp6 between the crop and non-crop classes decreased from 0.22 for the unsupervised method to 0.14 for the supervised method and 0.02 for the object-based method. Evidently, non-crop had a better average kappa coefficient than crop for the pixel-based methods because most of the non-crop classes, such as water, impervious, and bare soil and fallow, had better spectral separability than the other classes. However, crop and non-crop had essentially the same average kappa coefficient for the object-based classification method.
To explain this, the statistics for the decision-tree models used in the object-based classification are summarized in Table 7. As can be seen from Figure 8 and Table 7, the three-band image used more non-spectral features, and at a higher frequency, than the four-band image, which could compensate for the lack of an NIR band in the normal RGB image. Most branches of the decision-tree models used shape and texture features (95% for the three-band image and 82% for the four-band image), and these features were used more than once, with an average of 1.62 uses for the three-band image and 1.15 for the four-band image. All of these results show the importance and advantage of non-spectral features for image classification, particularly when sufficient spectral information is not available.
As shown in Table 6, the pixel-based methods performed better than the object-based method for distinguishing water. This is because the spectral information alone was sufficient to distinguish water, and the non-spectral features could lead to worse results with the object-based method. Thus, the four-band image with the pixel-based methods achieved better classification results for water.

4.3. Importance of Classification Groupings

Thus far, only the ten-class and two-class classification results shown in Table 4 have been discussed. Figure 10 shows the overall accuracy and overall kappa for the six class groupings defined in Table 1 based on the six classification types.
The overall accuracy generally increased as the number of classes decreased. However, this was not necessarily the case for the overall kappa. The two-class, five-class, and ten-class classifications had higher kappa values than the three-class, four-class, and six-class classifications, except that the two-class classification for the object-based method had a slightly higher kappa value than the ten-class classification. Overall classification accuracy simply considers the probability of image pixels being correctly identified in the classification map. The kappa coefficient, by contrast, considers not only the correct classifications but also the effects of omission and commission errors. From the spectral separability shown in Figure 3 and Figure 7, it can be seen that class groupings that merged subclasses with poor spectral separability generally had higher kappa values. For example, the five-class classifications achieved the second highest kappa values for both the supervised and object-based methods. This grouping combined four vegetation classes (grass, forest, soybean, and watermelon) with similar spectral characteristics into one class. Approximately two-thirds of the spectral separability values between any two of these four classes were below the average level, so these classes were easily confused during the classification process. This confusion was eliminated when the classes were grouped into one class. Therefore, depending on the requirements of a particular application, the available classes can be regrouped according to their spectral characteristics into appropriate classes to improve classification results. With such regrouping, the agronomic usefulness of the classification map for specific crops is reduced, but the map can still be used for a LULC census.

4.4. Implications for Selection of Imaging Platform and Classification Method

From the above analysis, both the additional NIR band and the object-based method improved the performance of image classification for crop identification. The imaging system used in this study included a modified camera to capture NIR information; the camera, along with the lens, GPS receiver, and modification fee, cost about $1300. Moreover, the images from the two cameras need to be aligned for analysis. The object-based classification method performed better than the pixel-based methods, but it involves complex and time-consuming processing, such as segmentation and rule training, and requires experienced operators to use the software. Therefore, how to weigh such factors as cost, ease of use, and acceptable classification accuracy is a real and practical issue for users, especially those without much remote sensing knowledge and experience.
Based on the results from this study, some suggestions are provided for consideration. If users do not have much experience in image processing, a single RGB camera with pixel-based classification can be used. For users with some image processing experience, a dual-camera system with the NIR sensitivity and pixel-based classification methods may be a good combination. For users with sufficient image processing experience, either a single RGB camera or a dual-camera system in conjunction with object-based classification may be an appropriate choice. It is also possible to modify a single RGB camera to have two visible bands and one NIR band [16,49]. This will eliminate the image alignment involved with the dual-camera system.

5. Conclusions

This study addressed important and practical issues related to the use of consumer-grade RGB cameras and modified NIR cameras for crop identification, which is a common remote sensing application in agriculture. By comprehensively comparing the performance of three commonly used classification methods on the three-band and four-band images over a relatively large cropping area, several useful results were obtained.
Firstly, the NIR image from the modified camera improved classification results over the normal RGB imagery alone. This finding is consistent with common knowledge and with results from scientific-grade imaging systems. Moreover, the importance of the NIR band was especially evident in the classification results from the pixel-based methods. Since pixel-based methods are generally easy to use for operators without much remote sensing experience, imaging systems with more spectral information should be considered for these users.
Secondly, many non-spectral features, such as shape and texture, can be derived from the imagery to improve classification accuracy. However, object-based methods are more complex and time-consuming and require a better understanding of the classification process, so only advanced users with considerable image processing experience are likely to obtain good results with object-based methods, even with RGB images alone. Moreover, appropriately grouping classes with similar spectral response can improve classification results if these classes do not need to be separated. Overall, the selection of imaging systems, image processing methods, and class groupings needs to consider the budget, the application requirements, and the operators' experience. The results from this study demonstrate that the dual-camera imaging system is useful for crop identification and has potential for other agricultural applications. More research is needed to evaluate this type of imaging system for crop monitoring and pest detection.

Acknowledgments

This project was conducted as part of a visiting scholar research program, and the first author was financially supported by the National Natural Science Foundation of China (Grant No. 41201364 and 31501222) and the China Scholarship Council (201308420447). The authors wish to thank Jennifer Marshall and Nicholas Mondrik of the College of Science of Texas A&M University, College Station, Texas, for allowing us to use their monochromator and assisting in the measurements of the spectral response of the cameras. Thanks are also extended to Fred Gomez and Lee Denham of USDA-ARS in College Station, Texas, for acquiring the images for this study.

Author Contributions

Jian Zhang designed and conducted the experiment, processed and analyzed the imagery, and wrote the manuscript. Chenghai Yang guided the study design, participated in camera testing and image collection, advised in data analysis, and revised the manuscript. Huaibo Song, Wesley Clint Hoffmann, Dongyan Zhang, and Guozhong Zhang were involved in the process of the experiment and ground data collection. All authors reviewed and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Disclaimer

Mention of trade names or commercial products in this publication is solely for the purpose of providing specific information and does not imply recommendation or endorsement by the U.S. Department of Agriculture.

References

  1. Mulla, D.J. Twenty five years of remote sensing in precision agriculture: Key advances and remaining knowledge gaps. Biosyst. Eng. 2013, 114, 358–371. [Google Scholar] [CrossRef]
  2. Bauer, M.E.; Cipra, J.E. Identification of agricultural crops by computer processing of ERTS MSS data. In Proceedings of the Symposium on Significant Results Obtained from the Earth Resources Technology Satellite-1, New Carollton, IN, USA, 5–9 March 1973; pp. 205–212.
  3. Colomina, I.; Molina, P. Unmanned aerial systems for photogrammetry and remote sensing: A review. ISPRS J. Photogramm. Remote Sens. 2014, 92, 79–97. [Google Scholar] [CrossRef]
  4. Zhang, C.; Kovacs, J.M. The application of small unmanned aerial systems for precision agriculture: A review. Precis. Agric. 2012, 13, 693–712. [Google Scholar] [CrossRef]
  5. Watts, A.C.; Ambrosia, V.G.; Hinkley, E.A. Unmanned aircraft systems in remote sensing and scientific research: Classification and considerations of use. Remote Sens. 2012, 4, 1671–1692. [Google Scholar] [CrossRef]
  6. Araus, J.L.; Cairns, J.E. Field high-throughput phenotyping: The new crop breeding frontier. Trends Plant Sci. 2014, 19, 52–61. [Google Scholar] [CrossRef] [PubMed]
  7. Quilter, M.C.; Anderson, V.J. Low altitude/large scale aerial photographs: A tool for range and resource managers. Rangel. Arch. 2000, 22, 13–17. [Google Scholar]
  8. Hunt, E., Jr.; Daughtry, C.; McMurtrey, J.; Walthall, C.; Baker, J.; Schroeder, J.; Liang, S. Comparison of remote sensing imagery for nitrogen management. In Proceedings of the Sixth International Conference on Precision Agriculture and Other Precision Resources Management, Minneapolis, MN, USA, 14–17 July 2002; pp. 1480–1485.
  9. Wellens, J.; Midekor, A.; Traore, F.; Tychon, B. An easy and low-cost method for preprocessing and matching small-scale amateur aerial photography for assessing agricultural land use in Burkina Faso. Int. J. Appl. Earth Obs. Geoinf. 2013, 23, 273–278. [Google Scholar] [CrossRef]
  10. Oberthür, T.; Cock, J.; Andersson, M.S.; Naranjo, R.N.; Castañeda, D.; Blair, M. Acquisition of low altitude digital imagery for local monitoring and management of genetic resources. Comput. Electron. Agric. 2007, 58, 60–77. [Google Scholar] [CrossRef]
  11. Hunt, E.R., Jr.; Cavigelli, M.; Daughtry, C.S.; Mcmurtrey, J.E., III; Walthall, C.L. Evaluation of digital photography from model aircraft for remote sensing of crop biomass and nitrogen status. Precis. Agric. 2005, 6, 359–378. [Google Scholar] [CrossRef]
  12. Chiabrando, F.; Nex, F.; Piatti, D.; Rinaudo, F. UAV and RPV systems for photogrammetric surveys in archaelogical areas: Two tests in the piedmont region (Italy). J. Archaeol. Sci. 2011, 38, 697–710. [Google Scholar] [CrossRef]
  13. Nijland, W.; de Jong, R.; de Jong, S.M.; Wulder, M.A.; Bater, C.W.; Coops, N.C. Monitoring plant condition and phenology using infrared sensitive consumer grade digital cameras. Agric. For. Meteorol. 2014, 184, 98–106. [Google Scholar] [CrossRef]
  14. Murakami, T.; Idezawa, F. Growth survey of crisphead lettuce (Lactuca sativa L.) in fertilizer trial by low-altitude small-balloon sensing. Soil Sci. Plant Nutr. 2013, 59, 410–418. [Google Scholar] [CrossRef]
  15. Yang, C.; Westbrook, J.K.; Suh, C.P.C.; Martin, D.E.; Hoffmann, W.C.; Lan, Y.; Fritz, B.K.; Goolsby, J.A. An airborne multispectral imaging system based on two consumer-grade cameras for agricultural remote sensing. Remote Sens. 2014, 6, 5257–5278. [Google Scholar] [CrossRef]
  16. Miller, C.D.; Fox-Rabinovitz, J.R.; Allen, N.F.; Carr, J.L.; Kratochvil, R.J.; Forrestal, P.J.; Daughtry, C.S.T.; McCarty, G.W.; Hively, W.D.; Hunt, E.R. NIR-green-blue high-resolution digital images for assessment of winter cover crop biomass. GISci. Remote Sens. 2011, 48, 86–98. [Google Scholar]
  17. Artigas, F.; Pechmann, I.C. Balloon imagery verification of remotely sensed Phragmites australis expansion in an urban estuary of New Jersey, USA. Landsc. Urban Plan. 2010, 95, 105–112. [Google Scholar] [CrossRef]
  18. Yang, C.; Hoffmann, W.C. Low-cost single-camera imaging system for aerial applicators. J. Appl. Remote Sens. 2015, 9, 096064. [Google Scholar] [CrossRef]
  19. Wang, Q.; Zhang, X.; Wang, Y.; Chen, G.; Dan, F. The design and development of an object-oriented UAV image change detection system. In Geo-Informatics in Resource Management and Sustainable Ecosystem; Springer-Verlag: Berlin/Heidelberg, Germany, 2013; pp. 33–42. [Google Scholar]
  20. Diaz-Varela, R.; Zarco-Tejada, P.J.; Angileri, V.; Loudjani, P. Automatic identification of agricultural terraces through object-oriented analysis of very high resolution DSMs and multispectral imagery obtained from an unmanned aerial vehicle. J. Environ. Manag. 2014, 134, 117–126. [Google Scholar] [CrossRef] [PubMed]
  21. Lelong, C.C.D. Assessment of unmanned aerial vehicles imagery for quantitative monitoring of wheat crop in small plots. Sensors 2008, 8, 3557–3585. [Google Scholar] [CrossRef]
  22. Liebisch, F.; Kirchgessner, N.; Schneider, D.; Walter, A.; Hund, A. Remote, aerial phenotyping of maize traits with a mobile multi-sensor approach. Plant Methods 2015, 11. [Google Scholar] [CrossRef] [PubMed]
  23. Labbé, S.; Roux, B.; Bégué, A.; Lebourgeois, V.; Mallavan, B. An operational solution to acquire multispectral images with standard light cameras: Characterization and acquisition guidelines. In Proceedings of the International Society of Photogrammetry and Remote Sensing Workshop, Newcastle, UK, 11–14 September 2007; pp. TS10:1–TS10:6.
  24. Lebourgeois, V.; Bégué, A.; Labbé, S.; Mallavan, B.; Prévot, L.; Roux, B. Can commercial digital cameras be used as multispectral sensors? A crop monitoring test. Sensors 2008, 8, 7300–7322. [Google Scholar] [CrossRef] [Green Version]
  25. Visockiene, J.S.; Brucas, D.; Ragauskas, U. Comparison of UAV images processing softwares. J. Meas. Eng. 2014, 2, 111–121. [Google Scholar]
  26. Pix4D. Getting GCPs in the Field or through Other Sources. Available online: https://support.pix4d.com/hc/en-us/articles/202557489-Step-1-Before-Starting-a-Project-4-Getting-GCPs-in-the-Field-or-Through-Other-Sources-optional-but-recommended- (accessed on 26 December 2015).
  27. Hexagon Geospatial. ERDAS IMAGINE Help. Automatic Tie Point Generation Properties. Available online: https://hexagongeospatial.fluidtopics.net/book#!book;uri=fb4350968f8f8b57984ae66bba04c48d;breadcrumb=23b983d2543add5a03fc94e16648eae7-b70c75349e3a9ef9f26c972ab994b6b6-0041500e600a77ca32299b66f1a5dc2d-e31c18c1c9d04a194fdbf4253e73afaa-19c6de565456ea493203fd34d5239705 (accessed on 26 December 2015).
  28. Drǎguţ, L.; Tiede, D.; Levick, S.R. ESP: A tool to estimate scale parameter for multiresolution image segmentation of remotely sensed data. Int. J. Geogr. Inf. Sci. 2010, 24, 859–871. [Google Scholar] [CrossRef]
  29. Duro, D.C.; Franklin, S.E.; Dubé, M.G. A comparison of pixel-based and object-based image analysis with selected machine learning algorithms for the classification of agricultural landscapes using Spot-5 HRG imagery. Remote Sens. Environ. 2012, 118, 259–272. [Google Scholar] [CrossRef]
  30. Myint, S.W.; Gober, P.; Brazel, A.; Grossman-Clarke, S.; Weng, Q. Per-pixel vs. Object-based classification of urban land cover extraction using high spatial resolution imagery. Remote Sens. Environ. 2011, 115, 1145–1161. [Google Scholar] [CrossRef]
  31. Laliberte, A.S.; Fredrickson, E.L.; Rango, A. Combining decision trees with hierarchical object-oriented image analysis for mapping arid rangelands. Photogramm. Eng. Remote Sens. 2007, 73, 197–207. [Google Scholar] [CrossRef]
  32. Breiman, L.; Friedman, J.; Stone, C.J.; Olshen, R.A. Classification and Regression Trees; Wadsworth, Inc.: Monterey, CA, USA, 1984. [Google Scholar]
  33. Rouse, J.W., Jr.; Haas, R.; Schell, J.; Deering, D. Monitoring vegetation systems in the Great Plains with ERTS. In Proceedings of the Third ERTS-1 Symposium NASA, NASA SP-351, Washington, DC, USA, 10–14 December 1973; pp. 309–317.
  34. Jordan, C.F. Derivation of leaf-area index from quality of light on the forest floor. Ecology 1969, 50, 663–666. [Google Scholar] [CrossRef]
  35. Richardson, A.J.; Everitt, J.H. Using spectral vegetation indices to estimate rangeland productivity. Geocarto Int. 1992, 7, 63–69. [Google Scholar] [CrossRef]
  36. Roujean, J.L.; Breon, F.M. Estimating par absorbed by vegetation from bidirectional reflectance measurements. Remote Sens. Environ. 1995, 51, 375–384. [Google Scholar] [CrossRef]
  37. McFeeters, S. The use of the normalized difference water index (NDWI) in the delineation of open water features. Int. J. Remote Sens. 1996, 17, 1425–1432. [Google Scholar] [CrossRef]
  38. Rondeaux, G.; Steven, M.; Baret, F. Optimization of soil-adjusted vegetation indices. Remote Sens. Environ. 1996, 55, 95–107. [Google Scholar] [CrossRef]
  39. Huete, A.R. A soil-adjusted vegetation index (SAVI). Remote Sens. Environ. 1988, 25, 295–309. [Google Scholar] [CrossRef]
  40. Kauth, R.J.; Thomas, G. The tasselled cap—A graphic description of the spectral-temporal development of agricultural crops as seen by Landsat. In Proceedings of the Symposia on Machine Processing of Remotely Sensed Data, West Lafayette, IN, USA, 29 June–1 July 1976; pp. 41–51.
  41. Woebbecke, D.; Meyer, G.; Von Bargen, K.; Mortensen, D. Color indices for weed identification under various soil, residue, and lighting conditions. Trans. ASAE 1995, 38, 259–269. [Google Scholar] [CrossRef]
  42. Meyer, G.E.; Hindman, T.W.; Laksmi, K. Machine vision detection parameters for plant species identification. Proc. SPIE 1999, 3543. [Google Scholar] [CrossRef]
  43. Camargo Neto, J. A Combined Statistical-Soft Computing Approach for Classification and Mapping Weed Species in Minimum-Tillage Systems. Ph.D. Thesis, University of Nebraska, Lincoln, NE, USA, 2004. [Google Scholar]
  44. Kataoka, T.; Kaneko, T.; Okamoto, H. Crop growth estimation system using machine vision. In Proceedings of the 2003 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, Kobe, Japan, 20–24 July 2003; pp. 1079–1083.
  45. Woebbecke, D.M.; Meyer, G.E.; Von Bargen, K.; Mortensen, D.A. Plant species identification, size, and enumeration using machine vision techniques on near-binary images. Proc. SPIE 1993, 1836, 208–219. [Google Scholar]
  46. Story, M.; Congalton, R.G. Accuracy assessment-a user’s perspective. Photogramm. Eng. Remote Sens. 1986, 52, 397–399. [Google Scholar]
  47. Stehman, S.V.; Czaplewski, R.L. Design and analysis for thematic map accuracy assessment: Fundamental principles. Remote Sens. Environ. 1998, 64, 331–344. [Google Scholar] [CrossRef]
  48. Rosenfield, G.H.; Fitzpatrick-Lins, K. A coefficient of agreement as a measure of thematic classification accuracy. Photogramm. Eng. Remote Sens. 1986, 52, 223–227. [Google Scholar]
  49. Hunt, E.R.; Hively, W.D.; Fujikawa, S.J.; Linden, D.S.; Daughtry, C.S.; McCarty, G.W. Acquisition of NIR-green-blue digital photographs from unmanned aircraft for crop monitoring. Remote Sens. 2010, 2, 290–305. [Google Scholar] [CrossRef]
Figure 1. The study area: (a) geographic map of the study area, which is located in the lower part of the Brazos River basin near College Station, Texas; and (b) image map of the study area from Google Earth.
Figure 2. Imaging system and platform: (a) components of the imaging system (two Nikon D90 cameras with Nikkor 24 mm lenses, two Nikon GP-1A GPS receivers, a 7-inch portable LCD video monitor, and a wireless remote shutter release); (b) cameras mounted on the right step of an Air Tractor AT-402B; (c) close-up picture showing the custom-made camera box; and (d) close-up picture of the cameras in the box.
Figure 3. Normalized spectral sensitivity of two Nikon D90 cameras and relative reflectance of 10 land use and land cover (LULC) classes. The dotted lines represent different channels of the RGB camera (Nikon-color-r, Nikon-color-g, and Nikon-color-b) and the modified NIR camera (Nikon-nir-r, Nikon-nir-g, Nikon-nir-b, and Nikon-nir-mono). The solid lines represent the relative reflectance of 10 LULC classes.
Figure 4. Mosaicked images for this study: (a) three-band RGB image, band 1 = blue, band 2 = green and band 3 = red; (b) NIR band image; and (c) CIR composite (NIR, red and green) extracted from the four-band image, band 1 = blue, band 2 = green, band 3 = red and band 4 = NIR.
Figure 5. Image segmentation results: (a) three-band image segmentation with 970 image objects; and (b) four-band image segmentation with 950 image objects.
Figure 6. Image classification results (ten-class): (a) unsupervised classification for three-band image (3US); (b) unsupervised classification for four-band image (4US), (c) supervised classification for three-band image (3S); (d) supervised classification for four-band image (4S); (e) object-based classification for three-band image (3OB); and (f) object-based classification for three-band image (4OB).
Figure 7. Spectral separability between any two classes for the three-band and four-band images.
Figure 8. Decision tree models for object-based classification. Abbreviations for the 10 classes: IM = impervious, BF = bare soil and fallow, GA = grass, FE = forest, WA = water, SB = soybean, WM = watermelon, CO = corn, SG = sorghum, and CT = cotton. (n) is the ID number of each feature, ranging from (1) to (42), as described in Table 2.
Figure 9. Average kappa coefficients (AKp6) for the crop and non-crop classes obtained by the three classification methods, and the difference between them. US = unsupervised classification, S = supervised classification, OB = object-based classification.
Figure 10. Overall accuracy (a) and overall kappa (b) for the six class groupings under the six classification types: unsupervised classification of the three-band image (3US), unsupervised classification of the four-band image (4US), supervised classification of the three-band image (3S), supervised classification of the four-band image (4S), object-based classification of the three-band image (3OB), and object-based classification of the four-band image (4OB).
Table 1. Definition of six different class groupings for image classification.
| Ten-Class | Six-Class | Five-Class | Four-Class | Three-Class | Two-Class |
|---|---|---|---|---|---|
| impervious | non-crop | non-crop (non-vegetation) | non-crop (with soybean and watermelon) | non-crop (with soybean and watermelon) | non-crop |
| bare soil and fallow | non-crop | non-crop (non-vegetation) | non-crop (with soybean and watermelon) | non-crop (with soybean and watermelon) | non-crop |
| water | non-crop | non-crop (non-vegetation) | non-crop (with soybean and watermelon) | non-crop (with soybean and watermelon) | non-crop |
| grass | non-crop | non-crop (vegetation) | non-crop (with soybean and watermelon) | non-crop (with soybean and watermelon) | non-crop |
| forest | non-crop | non-crop (vegetation) | non-crop (with soybean and watermelon) | non-crop (with soybean and watermelon) | non-crop |
| soybean | soybean | non-crop (vegetation) | non-crop (with soybean and watermelon) | non-crop (with soybean and watermelon) | crop |
| watermelon | watermelon | non-crop (vegetation) | non-crop (with soybean and watermelon) | non-crop (with soybean and watermelon) | crop |
| corn | corn | corn | corn | grain | crop |
| sorghum | sorghum | sorghum | sorghum | grain | crop |
| cotton | cotton | cotton | cotton | cotton | crop |
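Each coarser grouping in Table 1 is a relabeling of the ten-class result; a minimal Python sketch of such a lookup (dictionary and function names are illustrative, not from the paper) is shown below.

```python
# Mapping from the ten-class labels to the two-class and three-class groupings in Table 1.
TEN_TO_TWO = {
    "impervious": "non-crop", "bare soil and fallow": "non-crop", "water": "non-crop",
    "grass": "non-crop", "forest": "non-crop",
    "soybean": "crop", "watermelon": "crop", "corn": "crop",
    "sorghum": "crop", "cotton": "crop",
}

TEN_TO_THREE = {
    "corn": "grain", "sorghum": "grain", "cotton": "cotton",
    # all remaining classes, including soybean and watermelon, fall into non-crop
    **{c: "non-crop (with soybean and watermelon)"
       for c in ("impervious", "bare soil and fallow", "water", "grass",
                 "forest", "soybean", "watermelon")},
}

def regroup(labels, mapping):
    """Relabel a sequence of ten-class names with a coarser grouping."""
    return [mapping[label] for label in labels]

print(regroup(["corn", "grass", "soybean"], TEN_TO_THREE))
# ['grain', 'non-crop (with soybean and watermelon)', 'non-crop (with soybean and watermelon)']
```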
Table 2. List of object features for decision tree modeling.
| Source | Feature Type | Feature Name | For Three-Band | For Four-Band |
|---|---|---|---|---|
| References | VI | (1) Normalized Difference Vegetation Index (NDVI) = (NIR − R)/(NIR + R) [33] | No | Yes |
| | | (2) Ratio Vegetation Index (RVI) = NIR/R [34] | No | Yes |
| | | (3) Difference Vegetation Index (DVI) = NIR − R [35] | No | Yes |
| | | (4) Renormalized Difference Vegetation Index (RDVI) = √(NDVI × DVI) [36] | No | Yes |
| | | (5) Normalized Difference Water Index (NDWI) = (G − NIR)/(G + NIR) [37] | No | Yes |
| | | (6) Optimized Soil-Adjusted Vegetation Index (OSAVI) = (NIR − R)/(NIR + R + 0.16) [38] | No | Yes |
| | | (7) Soil-Adjusted Vegetation Index (SAVI) = 1.5 × (NIR − R)/(NIR + R + 0.5) [39] | No | Yes |
| | | (8) Soil Brightness Index (SBI) = √(NIR² + R²) [40] | No | Yes |
| | | (9) B* = B/(B + G + R), (10) G* = G/(B + G + R), (11) R* = R/(B + G + R) | Yes | Yes |
| | | (12) Excess Green (ExG) = 2G* − R* − B* [41], (13) Excess Red (ExR) = 1.4R* − G* [42], (14) ExG − ExR [43] | Yes | Yes |
| | | (15) Color Index of Vegetation Extraction (CIVE) = 0.441R − 0.811G + 0.385B + 18.78745 [44] | Yes | Yes |
| | | (16) Normalized Difference Index (NDI) = (G − R)/(G + R) [45] | Yes | Yes |
| eCognition | Layer | Mean of (17) B, (18) G, (19) R, and (20) Brightness | Yes | Yes |
| | | (21) Mean of NIR | No | Yes |
| | | Standard deviation of (22) B, (23) G, (24) R | Yes | Yes |
| | | (25) Standard deviation of NIR | No | Yes |
| | | HSI ((26) Hue, (27) Saturation, (28) Intensity) | Yes | Yes |
| | Geometry | (29) Area, (30) Border length | Yes | Yes |
| | | (31) Asymmetry, (32) Compactness, (33) Density, (34) Shape index | Yes | Yes |
| | Texture | GLCM ((35) Homogeneity, (36) Contrast, (37) Dissimilarity, (38) Entropy, (39) Angular 2nd moment, (40) Mean, (41) StdDev, (42) Correlation) | Yes | Yes |
| | | Total number of features | 32 | 42 |
1 (n) is the number for each feature, ranging from (1) to (42).
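The spectral indices in Table 2 reduce to per-pixel (or per-object mean) band arithmetic. The NumPy sketch below computes a subset of features (1)–(16) from float band arrays; the function name and the division-by-zero guard are illustrative choices, not taken from the paper.

```python
import numpy as np

def vegetation_indices(r, g, b, nir=None):
    """Compute several Table 2 vegetation indices from float band arrays."""
    eps = 1e-10                                                # guard against division by zero
    total = b + g + r + eps
    b_n, g_n, r_n = b / total, g / total, r / total            # (9)-(11) B*, G*, R*
    exg = 2 * g_n - r_n - b_n                                  # (12) Excess Green
    exr = 1.4 * r_n - g_n                                      # (13) Excess Red
    cive = 0.441 * r - 0.811 * g + 0.385 * b + 18.78745        # (15) CIVE
    ndi = (g - r) / (g + r + eps)                              # (16) NDI
    out = {"ExG": exg, "ExR": exr, "ExG-ExR": exg - exr, "CIVE": cive, "NDI": ndi}
    if nir is not None:                                        # NIR-based indices, four-band image only
        out["NDVI"] = (nir - r) / (nir + r + eps)              # (1)
        out["RVI"] = nir / (r + eps)                           # (2)
        out["DVI"] = nir - r                                   # (3)
        out["OSAVI"] = (nir - r) / (nir + r + 0.16)            # (6)
        out["SAVI"] = 1.5 * (nir - r) / (nir + r + 0.5)        # (7)
        out["SBI"] = np.sqrt(nir**2 + r**2)                    # (8)
    return out
```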
Table 3. Count and percentage by class type for 1200 reference points.
| Class Type | Count | Percentage | Class Type | Count | Percentage |
|---|---|---|---|---|---|
| Impervious | 55 | 4.6% | Soybean | 29 | 2.4% |
| Bare Soil and Fallow | 186 | 15.6% | Watermelon | 49 | 4.1% |
| Grass | 162 | 13.5% | Corn | 100 | 8.3% |
| Forest | 106 | 8.8% | Sorghum | 115 | 9.6% |
| Water | 69 | 5.8% | Cotton | 329 | 27.3% |
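Each percentage in Table 3 is simply the class count divided by the 1200 reference points in total; a quick check in Python (counts copied from the table) regenerates the column.

```python
counts = {"Impervious": 55, "Bare Soil and Fallow": 186, "Grass": 162, "Forest": 106,
          "Water": 69, "Soybean": 29, "Watermelon": 49, "Corn": 100, "Sorghum": 115,
          "Cotton": 329}
total = sum(counts.values())                                   # 1200 reference points
percent = {k: round(100 * v / total, 1) for k, v in counts.items()}
print(total, percent["Corn"])                                  # 1200 8.3
```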
Table 4. (a)–(f) Accuracy assessment results.
(a) 
Unsupervised Classification for Three-Band Image (3US)
Rows: classification data (CD) classes; columns: reference data (RD) classes IM, BF, GA, FE, WA, SB, WM, CO, SG, CT, followed by ten-class Pa (%), Ua (%), Kp and two-class Pa (%), Ua (%), Kp.
Non-cropIM44 1110000000080800.79
BF91431213051374977590.71
GA02717109091744610.3875750.51
FE001571240001354590.50
WA0900610100188850.88
cropSB00000000000NaN0.0
WM0014900900018280.16
CO0167021672152972490.6876770.51
SG02354021713796269370.62
CT2322281222515848700.36
Overall kappa: 0.51 (ten-class); 0.51 (two-class)
Overall accuracy: 58% (ten-class); 76% (two-class)
(b) 
Supervised Classification for Three-Band Image (3S)
Rows: classification data (CD) classes; columns: reference data (RD) classes IM, BF, GA, FE, WA, SB, WM, CO, SG, CT, followed by ten-class Pa (%), Ua (%), Kp and two-class Pa (%), Ua (%), Kp.
Non-cropIM3730000000067930.66
BF1215271612853282670.77
GA44832000194121451520.4477800.58
FE10551100001148740.45
WA0400560101081900.80
SB0001019000366830.65
WM0310300912218300.16
cropCO0910040663133563450.5882800.62
SG1819001912694860410.54
CT032830283121318456650.42
Overall kappa: 0.53 (ten-class); 0.60 (two-class)
Overall accuracy: 60% (ten-class); 80% (two-class)
(c) 
Object-Based Classification for Three-Band Image (3OB)
Rows: classification data (CD) classes; columns: reference data (RD) classes IM, BF, GA, FE, WA, SB, WM, CO, SG, CT, followed by ten-class Pa (%), Ua (%), Kp and two-class Pa (%), Ua (%), Kp.
Non-cropIM5170010001093850.92
BF2142211010372876720.72
GA139152200131556690.5187870.75
FE02259631001191740.89
WA0710500000172850.71
cropSB00000210002172500.71
WM0750004600994690.94
CO114313008491784640.8288880.75
SG015000013802970630.66
CT033030430420863820.53
Overall kappa: 0.68 (ten-class); 0.75 (two-class)
Overall accuracy: 72% (ten-class); 88% (two-class)
(d) 
Unsupervised Classification for Four-Band Image (4US)
Rows: classification data (CD) classes; columns: reference data (RD) classes IM, BF, GA, FE, WA, SB, WM, CO, SG, CT, followed by ten-class Pa (%), Ua (%), Kp and two-class Pa (%), Ua (%), Kp.
Non-cropIM50280010000091630.90
BF41411111061263476650.71
GA015620175171735530.2870790.48
FE0014410000042960.39
WA00006400000931000.92
cropSB000110220001176500.75
WM006131015001231320.28
CO04600035891758600.5483750.60
SG1422212115623854420.47
CT0860330417102120061570.44
Overall kappa: 0.52 (ten-class); 0.54 (two-class)
Overall accuracy: 59% (ten-class); 77% (two-class)
(e) 
Supervised Classification for Four-Band Image (4S)
Rows: classification data (CD) classes; columns: reference data (RD) classes IM, BF, GA, FE, WA, SB, WM, CO, SG, CT, followed by ten-class Pa (%), Ua (%), Kp and two-class Pa (%), Ua (%), Kp.
Non-cropIM3820000000069950.68
BF10141102103753676660.71
GA288618306142053580.4679820.61
FE10367110001863740.60
WA00006400000931000.92
cropSB0000019000266900.65
WM05154002304147440.45
CO01415001469162769470.6584810.65
SG01013101212712962510.57
CT4620140711111519660690.47
Overall kappa: 0.59 (ten-class); 0.63 (two-class)
Overall accuracy: 65% (ten-class); 82% (two-class)
(f) 
Object-Based Classification for Four-Band Image (4OB)
Rows: classification data (CD) classes; columns: reference data (RD) classes IM, BF, GA, FE, WA, SB, WM, CO, SG, CT, followed by ten-class Pa (%), Ua (%), Kp and two-class Pa (%), Ua (%), Kp.
Non-cropIM5140000000093930.92
BF115320412332182810.79
GA181037011221364750.5990910.80
FE10239230000187770.85
WA1003610000088940.88
cropSB00000230001779580.79
WM0600004400090880.89
CO07310017471174710.7292910.83
SG0090010201033890600.88
CT082231311022869850.61
Overall kappa: 0.74 (ten-class); 0.82 (two-class)
Overall accuracy: 78% (ten-class); 91% (two-class)
Note: Bold values correspond to the number of points correctly classified. IM = impervious, BF = bare soil and fallow, GA = grass, FE = forest, WA = water, SB = soybean, WM = watermelon, CO = corn, SG = sorghum, CT = cotton. RD = reference data; CD = classification data; Pa = producer's accuracy; Ua = user's accuracy; Kp = kappa coefficient.
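The Pa, Ua, overall accuracy, and overall kappa values in Table 4 follow standard confusion-matrix definitions. The NumPy sketch below is a generic implementation of those definitions rather than the authors' code; the per-class Kp is computed here as the row-wise (user's side) conditional kappa, which is an assumption about the statistic reported in the table.

```python
import numpy as np

def accuracy_metrics(cm):
    """Confusion-matrix statistics; rows = classification data (CD), columns = reference data (RD)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    diag = np.diag(cm)
    row = cm.sum(axis=1)                           # points per classified class
    col = cm.sum(axis=0)                           # points per reference class
    with np.errstate(divide="ignore", invalid="ignore"):
        pa = diag / col                            # producer's accuracy per class
        ua = diag / row                            # user's accuracy per class
        overall = diag.sum() / n                   # overall accuracy
        pe = (row * col).sum() / n**2              # expected chance agreement
        kappa = (overall - pe) / (1 - pe)          # overall kappa coefficient
        # per-class conditional kappa (user's/row side); assumed meaning of "Kp" above
        kp = (ua - col / n) / (1 - col / n)
    return pa, ua, kp, overall, kappa

# Illustrative 2 x 2 matrix (not data from the paper):
pa, ua, kp, oa, k = accuracy_metrics([[44, 2], [11, 63]])
```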
Table 5. Results of kappa analysis by three methods and two images for (a) Crop and (b) Non-crop.
(a) Crop. Kappa coefficients: columns 3US, 3S, 3OB, AKp1 are for the three-band image; columns 4US, 4S, 4OB, AKp1 are for the four-band image.

| Crop | 3US | 3S | 3OB | AKp1 | 4US | 4S | 4OB | AKp1 |
|---|---|---|---|---|---|---|---|---|
| SB | 0.00 | 0.65 | 0.71 | 0.45 | 0.75 | 0.65 | 0.79 | 0.73 |
| WM | 0.16 | 0.16 | 0.94 | 0.42 | 0.28 | 0.45 | 0.89 | 0.54 |
| CO | 0.68 | 0.58 | 0.82 | 0.69 | 0.54 | 0.65 | 0.72 | 0.64 |
| SG | 0.62 | 0.54 | 0.66 | 0.61 | 0.47 | 0.57 | 0.88 | 0.64 |
| CT | 0.36 | 0.42 | 0.53 | 0.44 | 0.44 | 0.47 | 0.61 | 0.51 |
| AKp2 | 0.36 | 0.47 | 0.73 | | 0.50 | 0.56 | 0.78 | |

(b) Non-crop. Same column layout as (a).

| Non-crop | 3US | 3S | 3OB | AKp1 | 4US | 4S | 4OB | AKp1 |
|---|---|---|---|---|---|---|---|---|
| IM | 0.79 | 0.66 | 0.92 | 0.79 | 0.90 | 0.68 | 0.92 | 0.83 |
| BF | 0.71 | 0.77 | 0.72 | 0.73 | 0.71 | 0.71 | 0.79 | 0.74 |
| GA | 0.38 | 0.44 | 0.51 | 0.44 | 0.28 | 0.46 | 0.59 | 0.44 |
| FE | 0.50 | 0.45 | 0.89 | 0.61 | 0.39 | 0.60 | 0.85 | 0.61 |
| WA | 0.88 | 0.80 | 0.71 | 0.80 | 0.92 | 0.92 | 0.88 | 0.91 |
| AKp2 | 0.65 | 0.62 | 0.75 | | 0.64 | 0.68 | 0.81 | |
Note: IM = impervious, BF = bare soil and fallow, GA = grass, FE = forest, WA = water, SB = soybean, WM = watermelon, CO = corn, SG = sorghum, CT = cotton. AKp1 = average kappa coefficient among the three methods for each class with the same image; AKp2 = average kappa coefficient among the crop classes or non-crop classes with the same classification method.
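The AKp1 and AKp2 values are arithmetic means of the listed kappa coefficients; for example, AKp1 for soybean with the three-band image is (0.00 + 0.65 + 0.71)/3 ≈ 0.45. A minimal check in Python (values copied from sub-table (a); variable names are illustrative):

```python
soybean_three_band = [0.00, 0.65, 0.71]            # 3US, 3S, 3OB kappas for SB
crop_3ob = [0.71, 0.94, 0.82, 0.66, 0.53]          # SB, WM, CO, SG, CT kappas for 3OB
akp1_soybean = sum(soybean_three_band) / len(soybean_three_band)
akp2_3ob = sum(crop_3ob) / len(crop_3ob)
print(round(akp1_soybean, 2), round(akp2_3ob, 2))  # 0.45 0.73
```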
Table 6. Kappa analysis results arranged by classification method for (a) Crop and (b) Non-crop.
(a) Crop. Kappa coefficients: columns 3US, 4US, AKp5 are for the unsupervised method; 3S, 4S, AKp5 for the supervised method; 3OB, 4OB, AKp5 for the object-based method.

| Crop | 3US | 4US | AKp5 | 3S | 4S | AKp5 | 3OB | 4OB | AKp5 |
|---|---|---|---|---|---|---|---|---|---|
| SB | 0.00 | 0.75 | 0.38 | 0.65 | 0.65 | 0.65 | 0.71 | 0.79 | 0.75 |
| WM | 0.16 | 0.28 | 0.22 | 0.16 | 0.45 | 0.31 | 0.94 | 0.89 | 0.92 |
| CO | 0.68 | 0.54 | 0.61 | 0.58 | 0.65 | 0.62 | 0.82 | 0.72 | 0.77 |
| SG | 0.62 | 0.47 | 0.55 | 0.54 | 0.57 | 0.56 | 0.66 | 0.88 | 0.77 |
| CT | 0.36 | 0.44 | 0.40 | 0.42 | 0.47 | 0.45 | 0.53 | 0.61 | 0.57 |
| AKp6 | | | 0.43 | | | 0.51 | | | 0.76 |

(b) Non-crop. Same column layout as (a).

| Non-crop | 3US | 4US | AKp5 | 3S | 4S | AKp5 | 3OB | 4OB | AKp5 |
|---|---|---|---|---|---|---|---|---|---|
| IM | 0.79 | 0.90 | 0.85 | 0.66 | 0.68 | 0.67 | 0.92 | 0.92 | 0.92 |
| BF | 0.71 | 0.71 | 0.71 | 0.77 | 0.71 | 0.74 | 0.72 | 0.79 | 0.76 |
| GA | 0.38 | 0.28 | 0.33 | 0.44 | 0.46 | 0.45 | 0.51 | 0.59 | 0.55 |
| FE | 0.50 | 0.39 | 0.45 | 0.45 | 0.60 | 0.53 | 0.89 | 0.85 | 0.87 |
| WA | 0.88 | 0.92 | 0.90 | 0.80 | 0.92 | 0.86 | 0.71 | 0.88 | 0.80 |
| AKp6 | | | 0.65 | | | 0.65 | | | 0.78 |
Note: IM = impervious, BF = bare soil and fallow, GA = grass, FE = forest, WA = water, SB = soybean, WM = watermelon, CO = corn, SG = sorghum, CT = cotton. AKp5 = average kappa coefficient for each class between the three-band and four-band images; AKp6 = average of the AKp5 values for the crop or non-crop classes.
Table 7. Statistical results of decision tree models for object-based classification.
| Properties | Three-Band | Four-Band |
|---|---|---|
| Number of end nodes (number of branches) | 39 | 33 |
| Maximum number of tree levels | 10 | 10 |
| First level to use non-spectral features | 3 | 4 |
| Number of branches that used non-spectral features | 63 | 38 |
| Average times non-spectral features were used for each branch | 1.62 | 1.15 |
| Ratio of branches that used non-spectral features (%) | 95 | 82 |
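Table 7 summarizes the structure of the decision tree models sketched in Figure 8, and comparable statistics can be gathered from any trained tree. The sketch below uses scikit-learn's CART implementation purely as an illustration (it is not the authors' modeling tool), with a placeholder feature matrix whose columns are assumed to follow the Table 2 ordering, so that zero-based indices 28–41 correspond to the geometry and texture (non-spectral) features.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Placeholder object-feature matrix: 42 columns assumed to follow the Table 2 order,
# with columns 28-41 holding the geometry and texture (non-spectral) features.
rng = np.random.default_rng(0)
X = rng.random((500, 42))
y = rng.integers(0, 10, 500)                       # 10 LULC class labels (synthetic)

tree = DecisionTreeClassifier(max_depth=10, random_state=0).fit(X, y)

non_spectral = list(range(28, 42))                 # indices of geometry/texture features (illustrative)
node_features = tree.tree_.feature                 # feature index per node, -2 for leaf nodes
splits = node_features[node_features >= 0]
print("end nodes:", tree.get_n_leaves())
print("tree depth:", tree.get_depth())
print("splits on non-spectral features:", int(np.isin(splits, non_spectral).sum()))
```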
