Article

Mapping of Olive Trees Using Pansharpened QuickBird Images: An Evaluation of Pixel- and Object-Based Analyses

by
Isabel Luisa Castillejo-González
Department of Graphic Engineering and Geomatics, University of Cordoba, Campus de Rabanales, 14071 Córdoba, Spain
Agronomy 2018, 8(12), 288; https://doi.org/10.3390/agronomy8120288
Submission received: 2 November 2018 / Revised: 27 November 2018 / Accepted: 27 November 2018 / Published: 2 December 2018
(This article belongs to the Special Issue Remote Sensing Applications for Agriculture and Crop Modelling)

Abstract

This study sought to verify whether remote sensing can efficiently delineate olive tree canopies using QuickBird (QB) satellite imagery. This paper compares four classification algorithms applied in pixel- and object-based analyses. To increase the spectral and spatial resolution of the standard QB image, three different pansharpened images were obtained by varying the weights of the red and near-infrared bands. The results showed slight differences between classifiers. The Maximum Likelihood algorithm yielded the best results in pixel-based classifications, with an average overall accuracy (OA) of 94.2%. In object-based analyses, the Maximum Likelihood and Decision Tree classifiers offered the highest precision, with average OAs of 95.3% and 96.6%, respectively. No clear difference was observed between pixel- and object-based analyses: average OA values increased by approximately 1% for all classifiers except Decision Tree, which improved by up to 4.5%. Altering the weights of different bands in the pansharpening process gave satisfactory results, with general performance improvements of up to 9% and 11% in pixel- and object-based analyses, respectively. Thus, object-based analysis with the DT algorithm and pansharpened imagery with an over-weighted near-infrared band is highly recommended for obtaining accurate maps for site-specific management.

Graphical Abstract

1. Introduction

Nowadays, one of the most important objectives in agriculture is to apply precision agriculture (PA) in as many scenarios as possible to control inputs efficiently and, consequently, to reduce production costs and the environmental pollution produced by this activity. To facilitate and assist this change in farm management, government institutions tend to regulate and encourage different PA-based techniques. The European Commission is greatly concerned about the new challenges in agriculture and promotes change through different legal instruments and key texts [1]. Diverse action areas are regulated, such as farming, protection of the natural or agricultural environment, food safety, security and traceability, and even climate change mitigation. Some of the recommended or mandatory practices rely on accurate control of the spatial distribution of crops. To obtain funds from the Common Agricultural Policy [2], PA promotes, among many other actions, the application of site-specific management or integrated management systems in crop production to reduce the use of fertilisers, herbicides, and pesticides, and the establishment of certain agro-environmental conservation measures such as cover crops in olive orchards [3]. Different monitoring systems have been developed to verify the correct application of PA techniques. The expensive, time-consuming, and imprecise system based on sampling and ground visits to a small percentage of fields has forced a search for new techniques that reduce costs and increase the controlled area while maintaining high accuracy in the analyses. Remote sensing data can significantly overcome the deficiencies of ground visits, allowing accurate maps to be produced.
Several studies have addressed diverse PA topics by remote sensing, such as obtaining accurate crop maps [4,5,6]; detecting the location of weeds [7,8,9], pests [10,11,12], and diseases [13,14,15] to apply site-specific management; or determining the level of water stress to design optimal irrigation systems [16,17,18]. Nevertheless, most precision agriculture studies with remote sensing have analysed herbaceous crops, as these crops usually cover the whole field and are easier to study with digital imagery. Woody crops present very different spectral responses between tree canopies, soils, and the other covers present in the field, so very high spatial resolution images are needed. Most studies aiming to characterize tree architecture used airborne or Unmanned Aerial Vehicle (UAV) images to obtain canopy information [19,20,21]. Few studies used satellite imagery [22,23,24], and most of those were aided by LiDAR information [25,26,27].
High-resolution satellite imagery can be useful to accurately map tree canopies. Companies that distribute Earth observation satellite images usually offer the user community two separate products: a high-spatial-resolution panchromatic image and a low-spatial-resolution multispectral one. While the multispectral image facilitates the discrimination of land cover types, the panchromatic image allows each land cover to be delimited accurately [28]. Fusion techniques have been developed to exploit the advantages of both resolutions in one image. The pansharpening fusion method injects the spatial detail of the panchromatic image into each band of the multispectral image [29]. The resulting pansharpened image can help to delineate tree canopies accurately enough to apply PA techniques.
Supervised classification methods are extensively used in land use classification studies [30]. These procedures extrapolate the spectral characteristics obtained from user-defined training sites to the other areas of the image. Nowadays, there are classification routines based on spectral or angular distances, probability analysis, and more advanced data mining techniques. There is no single ideal classification routine; the most appropriate method is determined by the needs and requirements of each study [31]. Many remote sensing classifications are based on pixels as the minimum spatial information unit. These analyses provide very good results for homogeneous land uses. Nevertheless, increased spatial resolution raises the intraclass spectral variability and reduces classification performance and accuracy when pixel-based analyses are used [32]. This is particularly true when the pixel size is significantly smaller than the average size of the objects of interest [33]. To overcome this problem, segmentation techniques, in which adjacent pixels are grouped into spectrally and spatially homogeneous objects, have been developed. The main segmentation algorithms can be classified into two general classes: edge-based and object-merging algorithms [34]. Most of the segmentation procedures developed are object-merging algorithms, which take some pixels as seeds and grow regions around them based on certain homogeneity criteria [35]. Although the object-based approach dates back to Kettig and Landgrebe [36], it was long neglected in favour of simpler pixel-based analyses. Some researchers have reported that segmentation reduces the local variation caused by texture, shadow, and shape in classifications of forest trees [37,38] and agricultural trees [39]. However, object-based classifications that accurately map olive tree canopies in typical dryland Mediterranean agricultural areas using only high-spatial-resolution satellite imagery are lacking.
Therefore, the main objective of this paper was to evaluate the potential of four supervised classification routines, applied in pixel- and object-based analyses, to delineate olive tree canopies using a pansharpened QuickBird image. An additional goal was to assess the effect of varying the spectral weights in the pansharpening process to emphasise the spectral information of some wavelengths over others.

2. Materials and Methods

2.1. Study Area and Satellite Image Acquisition

This study focused on dryland olive (Olea europaea L.) orchards representative of the typical continental Mediterranean climate [40]. The analysis was conducted in five olive orchard fields named A, B, C, D, and E (Figure 1) located near Montilla, province of Córdoba (Andalusia, southern Spain; centre UTM X-Y coordinates 355,746–4,164,520 m, datum WGS84). This agricultural region has a typical continental Mediterranean climate characterized by short mild winters and long dry summers [41]. The study sites were located in a farmer-managed area where the farmers made decisions individually; thus, the studied fields differed in characteristics such as the size and morphology of the olive crowns and the presence or absence of vegetation cover or soil tillage. The total areas of fields A, B, C, D, and E were 2.16 ha, 22.80 ha, 15.66 ha, 24.55 ha, and 28.44 ha, respectively.
On 10 July 2004, a QuickBird (QB) satellite digital image was acquired for the study area. The QB satellite provided four multispectral bands (blue, B: 450–520 nm; green, G: 520–600 nm; red, R: 630–690 nm; and near-infrared, NIR: 760–900 nm) with a spatial resolution of 2.8 m, and a panchromatic band (PAN: 450–900 nm) with a spatial resolution of 0.7 m. The radiometric resolution of the QB image was 11 bits. A QB Standard image product was ordered, which included radiometric, sensor, and geometric corrections previously carried out by the distributor [42]. No atmospheric correction was needed because each classification was carried out on a single-date image on the same relative scale [43].

2.2. Data Fusion: Pansharpening of Multispectral Images

To obtain an image of high spectral and spatial resolution, a pansharpening process was carried out with the QB bands. Pansharpening techniques produce new bands with the spectral resolution of the multispectral bands and the spatial resolution of the panchromatic band. In this study, a weighted variant of the fusion algorithm based on the wavelet transform calculated with the à trous algorithm was used to fuse the multispectral and panchromatic bands [44]. This fusion method consists basically of successive convolutions between the analysed image and a low-pass filter called the scaling function, which is commonly the b3-spline function. The filter applied at each decomposition level is obtained from the filter of the previous level by intercalating zeros between its rows and columns. The wavelet coefficients are obtained from the difference between two consecutive decomposition levels.
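As a rough illustration of the decomposition just described, the following minimal sketch computes à trous wavelet planes with NumPy/SciPy. It is only a sketch under stated assumptions (the function names and the mirror boundary handling are choices made here), not the IJFUSION implementation used in this study.

```python
import numpy as np
from scipy.ndimage import convolve

# b3-spline scaling function, the low-pass filter named in the text
B3 = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0

def atrous_kernel(level):
    """Intercalate 2**level - 1 zeros between the b3-spline taps."""
    step = 2 ** level
    taps = np.zeros((len(B3) - 1) * step + 1)
    taps[::step] = B3
    return np.outer(taps, taps)  # separable 2-D low-pass kernel

def atrous_planes(image, levels):
    """Wavelet planes of `image`: each plane is the difference between
    two consecutive smoothed approximations, as described in the text."""
    planes, approx = [], image.astype(float)
    for j in range(levels):
        smoothed = convolve(approx, atrous_kernel(j), mode="mirror")
        planes.append(approx - smoothed)  # wavelet coefficients, level j
        approx = smoothed
    return planes, approx  # detail planes plus the low-pass residual
```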
As the red and especially the near-infrared bands are very important for discriminating vegetation, these two bands were weighted differently. Thus, to evaluate the effect of differently weighted pansharpened bands on the classifications, three weight combinations were proposed: (a) 1-1-1-1, (b) 1-1-5-5, and (c) 1-1-1-10 as weight factors for the B-G-R-NIR bands, respectively. As a result of this fusion or pansharpening process, three different images with four multispectral bands (B, G, R, and NIR) and a spatial resolution of 0.7 m were obtained.
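Building on the previous sketch, the weighted injection of panchromatic detail into each band might look as follows. The per-band weight normalisation is an assumption made for illustration; the exact injection rule of the algorithm in [44] may differ.

```python
def weighted_pansharpen(ms_bands, pan, weights, levels=2):
    """Inject PAN wavelet detail into each multispectral band.

    ms_bands : list of 2-D arrays already resampled to the PAN grid (0.7 m)
    pan      : 2-D panchromatic array
    weights  : per-band injection weights, e.g. (1, 1, 5, 5) for B-G-R-NIR
    """
    planes, _ = atrous_planes(pan, levels)
    detail = sum(planes)              # high-frequency content of the PAN band
    w = np.asarray(weights, dtype=float)
    w = w / w.mean()                  # assumed normalisation of the weights
    return [band + wk * detail for band, wk in zip(ms_bands, w)]

# e.g. the 1-1-1-10 product:
# fused = weighted_pansharpen([b, g, r, nir], pan, (1, 1, 1, 10))
```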
Usually, the global quality of the resulting pansharpened image is estimated with the ERGAS index (Erreur Relative Globale Adimensionnelle de Synthèse) [45]. This relative dimensionless global error in synthesis offers a global picture of the spectral quality of the fused product. Nevertheless, in pansharpening, high spectral quality implies low spatial quality and vice versa, which makes it necessary to control not only the spectral quality of the process but also the spatial quality. Thus, a spatial index based on the ERGAS concept and translated to the spatial domain [46] was also used.
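For reference, the spectral ERGAS index has the standard closed form below [45], where $h$ and $l$ are the pixel sizes of the panchromatic and multispectral images, $N$ is the number of bands, $\mathrm{RMSE}(B_k)$ is the root-mean-square error between the $k$-th fused and original band, and $\mu_k$ is the mean of the $k$-th original band; roughly speaking, the spatial variant [46] evaluates the analogous expression against the panchromatic image instead of the multispectral reference:

$$\mathrm{ERGAS} = 100\,\frac{h}{l}\sqrt{\frac{1}{N}\sum_{k=1}^{N}\frac{\mathrm{RMSE}^{2}(B_k)}{\mu_k^{2}}}$$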

2.3. Segmentation

A segmentation procedure was performed to partition the QB pansharpened images into homogeneous objects using the Fractal Net Evolution Approach (FNEA) segmentation algorithm [47] (Figure 2). This algorithm performs a multiresolution, bottom-up, region-merging segmentation, in which individual pixels are merged into objects over successive fusion iterations. The merging process continues until a threshold derived from the user-defined parameters is reached. The result is an image in which the pixels are aggregated into highly homogeneous objects at an arbitrary resolution.
The segmentation process can be controlled by weighting the input data and defining three parameters. The scale parameter controls the size of the objects, while the colour and shape parameters define the importance of the spectral and morphological information, respectively, in object generation. The segmentation parameters were set by testing different input scenarios and evaluating their ability to delineate olive crowns. For each field, the scale parameter was adjusted first, to control the size of the objects according to the characteristics of the field. Then, with the scale parameter fixed, the weights of the spectral and morphological information were defined. The morphological information comprises two characteristics, the compactness and the smoothness of the objects. In this study, as tree crowns present a compact structure, these two characteristics were fixed in all tested scenarios to 0.8 and 0.2 for compactness and smoothness, respectively.
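As a sketch of how these parameters interact, the merge cost below follows the published multiresolution segmentation criterion [47]: a candidate fusion is accepted while the size-weighted increase in heterogeneity stays below the squared scale parameter. The simplified object representation is hypothetical, not the eCognition data model.

```python
import math

def merge_cost(obj1, obj2, merged, w_colour=0.6, w_compact=0.8,
               band_weights=None):
    """Increase in weighted heterogeneity caused by merging two objects.

    Each object is a dict with 'n' (pixel count), 'std' (per-band standard
    deviations), 'l' (border length) and 'b' (bounding-box perimeter).
    """
    def colour_h(o):  # spectral heterogeneity
        bw = band_weights or [1.0] * len(o["std"])
        return sum(w * s for w, s in zip(bw, o["std"]))

    def shape_h(o):   # morphological heterogeneity
        compactness = o["l"] / math.sqrt(o["n"])
        smoothness = o["l"] / o["b"]
        return w_compact * compactness + (1 - w_compact) * smoothness

    def delta(h):     # size-weighted change in heterogeneity
        return merged["n"] * h(merged) - (obj1["n"] * h(obj1)
                                          + obj2["n"] * h(obj2))

    # merge accepted while this cost stays below scale_parameter ** 2
    return w_colour * delta(colour_h) + (1 - w_colour) * delta(shape_h)
```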
The segmentation procedure generates not only the mean spectral information of the objects, derived from the spectral information of the pixels that form each object, but also a large amount of data divided mainly into three categories: spectral, morphological, and textural. In this study, only some spectral and morphological variables derived from the segmentation process were used to characterize the olive orchards. Eight object-based features, three spectral and five morphological, were included in the analysis (Table 1). The spectral feature Mean is calculated for each multispectral band independently, yielding four final features: Mean (Blue), Mean (Green), Mean (Red), and Mean (Near-Infrared).
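The two vegetation indices of Table 1 are computed from the object mean values of the red ($\rho_R$) and near-infrared ($\rho_{NIR}$) bands [48,49]:

$$\mathrm{NDVI} = \frac{\rho_{NIR} - \rho_R}{\rho_{NIR} + \rho_R}, \qquad \mathrm{RDVI} = \frac{\rho_{NIR} - \rho_R}{\sqrt{\rho_{NIR} + \rho_R}}$$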

2.4. Classification and Accuracy Assessments

Both pixel- and object-based analyses with four different supervised classifiers were conducted on the five olive orchard fields. The classification algorithms were Minimum Distance (MD), Spectral Angle Mapper (SAM), Maximum Likelihood (ML), and Decision Tree (DT). MD assigns each pixel to the category that presents the minimum spectral distance between the spectral signal of the pixel and the spectral average of the class; the spectral distance is the Euclidean distance in N-band spectral space [50]. Similarly, SAM measures spectral similarity but assigns each pixel to the class that presents the minimum spectral angle instead of the minimum spectral distance. The spectral angle between two spectra is calculated by taking the arccosine of the dot product of the two spectral vectors [51]. ML creates classification rules based on probabilistic algorithms that consider the spectral average and variance of each class. This algorithm assigns a pixel to the most probable class and thus minimizes the probability of error using Bayesian theory [52]. Finally, the DT classifier creates decision models based on conditional control statements. In this study, the DT classification was performed with the data mining C4.5 algorithm, a top-down inducer of decision trees that expands nodes in depth-first order at each step using a divide-and-conquer strategy [53]. Ground-truth land use was randomly defined to substantiate and validate the classification procedures. For each field, a sample of distant and independent locations was digitized directly on the image. Approximately 25% of the sampled surface was used to collect the spectral signatures in the training process, and the remaining 75% was used to assess the accuracy of the classifications. To avoid any subjective estimation, the training and verification procedure did not change in any of the classifications.
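The two distance-based rules are simple enough to sketch directly. The NumPy fragment below illustrates the MD and SAM assignment steps; the actual classifications were run in ENVI, so all names here are illustrative:

```python
import numpy as np

def md_classify(pixels, class_means):
    """Minimum Distance: nearest class mean in N-band Euclidean space [50].

    pixels      : (n_pixels, n_bands) array of spectra
    class_means : (n_classes, n_bands) array of training-class mean spectra
    """
    d = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    return d.argmin(axis=1)

def sam_classify(pixels, class_means):
    """Spectral Angle Mapper: smallest angle between spectral vectors [51]."""
    p = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)
    m = class_means / np.linalg.norm(class_means, axis=1, keepdims=True)
    angles = np.arccos(np.clip(p @ m.T, -1.0, 1.0))  # (n_pixels, n_classes)
    return angles.argmin(axis=1)
```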
To determine the accuracy obtained with every classifier in each olive orchard field, the confusion matrix of the classification and the Kappa test were analysed. The confusion matrix compares the percentage of classified pixels of each class with the verified ground-truth class, indicating the correct assignments and the errors among the studied classes [38]. In addition to the detailed accuracies obtained for every classification category, the confusion matrix provides the overall accuracy (OA), which indicates the overall percentage of correctly classified pixels. The Kappa test yields the Kappa coefficient (K), which determines whether the results obtained in the classification are significantly better than those of a random classification. The combination of both accuracy values is more conservative than a simple percent agreement value [54,55].
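Both accuracy values follow directly from the confusion matrix; a minimal sketch, with hypothetical counts, is:

```python
import numpy as np

def accuracy_metrics(cm):
    """Overall accuracy and Kappa coefficient from a confusion matrix
    (rows = reference classes, columns = classified classes)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    oa = np.trace(cm) / n                                   # overall accuracy
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2   # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return oa, kappa

# hypothetical two-class (olive crown / background) matrix
oa, k = accuracy_metrics([[950, 50], [30, 970]])  # -> 0.96, 0.92
```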
In pixel-based classification it is common to observe isolated misclassified pixels dispersed within the area of another class. To reduce this so-called salt-and-pepper noise and increase the accuracy of the classifications, a 3 × 3 majority filter was applied to the classified maps. In object-based classification this noise is practically eliminated when the pixels are grouped in the segmentation process.
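A 3 × 3 majority (modal) filter of this kind can be sketched with SciPy as follows; this is a generic implementation for non-negative integer class labels, not the routine of any particular image processing package:

```python
import numpy as np
from scipy.ndimage import generic_filter

def majority_filter(classified, size=3):
    """Replace each pixel by the most frequent class label in its
    size x size neighbourhood, removing salt-and-pepper noise."""
    modal = lambda window: np.bincount(window.astype(int)).argmax()
    return generic_filter(classified, modal, size=size, mode="nearest")
```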
The pansharpened QB bands were generated with the IJFUSION software (Polytechnic University of Madrid, Spain). The segmentations used in the object-based classifications were obtained with the eCognition Developer 8 software (Definiens AG). The Weka 3.8 software (University of Waikato, New Zealand) determined the decision tree sequences. Finally, ENVI 5.1 (Harris Geospatial Solutions) was used to carry out all the pixel- and object-based classifications.

3. Results and Discussion

3.1. Data Fusion: Pansharpening of Multispectral Images

Three pansharpening procedures were performed with the entire QB image. To control the quality of the process, two ERGAS indexes were calculated. Table 2 shows the spectral and spatial ERGAS indexes obtained for each pansharpen combination. The spectral ERGAS index showed values of 0.72, 1.84, and 1.83, whereas the spatial ERGAS index exhibited slightly higher values of 1.13, 1.73, and 1.86, for the pansharpen B-G-R-NIR combinations 1-1-1-1, 1-1-5-5, and 1-1-1-10, respectively. Since a value of 0 represents the maximum quality for each ERGAS index, the lower the ERGAS value, the higher the quality of the fused image. ERGAS errors lower than 3 are considered to indicate good quality in a fused product [56]. In this study, the ERGAS errors obtained were considerably low (lower than 2 in all combinations), which implies high spectral and spatial quality in the images obtained [57]. This is important in this type of study, as accurate isolation of olive crowns requires very high spatial resolution without loss of spectral information, especially in complex areas where a mixture of olive tree, natural cover, and soil spectral data can be observed. The increase in the ERGAS values of the pansharpened images in which the red and near-infrared bands were over-weighted was predictable. Gonzalo and Lillo-Saavedra [44] conceived the pansharpening algorithm used in this study to apply to every band the exact weight factor that yields equal spatial and spectral quality, that is, to obtain the "best fused image". In this study, while ensuring that the quality of the fused images did not exceed the ERGAS limit, the band weights were chosen according to the needs of the study rather than to optimize pansharpening quality.

3.2. Segmentation

For the fifteen pansharpened fields analysed (three pansharpen combinations × five olive orchard fields), a considerable number of input parameters were tested to obtain the criteria that provided the most satisfactory segmentation scenarios (Table 3). Each olive orchard field presented different characteristics, which required different segmentation parameters. As the aim of the segmentation was to obtain objects with an area similar to or smaller than an olive tree canopy, the values of the scale parameter were low, ranging from 12 in fields A and E to 25 in fields B and C. The pansharpened images with weight 1-1-1-1 always required a smaller scale parameter than the other two pansharpened images. In the segmentation process, the spectral information (colour) carried more weight than the morphology of the objects (shape) in most of the scenarios evaluated, although colour never exceeded 70% of the weight. The two types of information were weighted equally (50%) only in the 1-1-5-5 and 1-1-1-10 combinations of field D and the 1-1-1-10 combination of field B. As mentioned in Section 2.3, the compactness and smoothness characteristics of the object morphology were fixed to 0.8 and 0.2, respectively.

3.3. Olive Orchard Fields Classification

The accuracy assessments, OA and K, of the different pixel- and object-based classifications carried out on the differently weighted pansharpened images of every field are displayed in Table 4. Figure 3 shows an example of the least and the most accurate olive crown classifications in an individual field. Table 4 reveals slight differences between classifiers: most of the scenarios evaluated yielded very high classification accuracies, showing that olive tree canopies could be discriminated very accurately in most of the tests carried out. All the classifiers achieved high, comparable classification results, although some stood out above others. In pixel-based classifications, ML exhibited the highest average OA and K, with values of 94.2% and 0.89, respectively. In object-based classifications, ML obtained the highest accuracy values in eight of the 15 images analysed and DT obtained the highest precision in the remaining seven classifications. Nevertheless, the average results were slightly higher with the DT classifier, which offered OA and K values of 96.6% and 0.94, while ML obtained values of 95.3% and 0.91. The ML and DT classification algorithms exhibited high reliability in all classifications performed, while SAM and MD yielded more erratic results, always showing the lowest accuracies of each analysis. As an example, in the pixel-based classification of field E, MD obtained the highest OA values for the pansharpen combinations 1-1-5-5 and 1-1-1-10, with 93.4% and 95.2% respectively, whereas the same classifier showed a very low OA value of 77.3% for the pansharpen combination 1-1-1-1. Nevertheless, all the olive orchard fields could be classified very accurately with at least one classifier, showing OA values greater than 90%. The results observed in this study satisfied the commonly accepted requirements for an accurate classification: an OA value of at least 85% [58] and a Kappa coefficient exceeding 0.75 [59]. Such high accuracies are essential if the olive orchard map obtained is to be used in precision agriculture to design site-specific management. If one classifier had to be singled out, ML could be selected, considering that it yielded one of the highest precisions in all classifications performed and that its computational and expertise requirements are lower than those of the other most accurate classifier, DT, a data mining algorithm that demands deeper knowledge. Additionally, the ML algorithm is implemented in most image processing software.
No clear difference was observed between pixel- and object-based analyses. Although the object-based classifications analysed a greater number of segmented variables than the pixel-based ones, the precision achieved was only slightly higher. In object-based analyses, the increase in average OA values was approximately 1% for all classifiers except DT, which improved its general performance by an average of 4.5% and showed the most accurate average OA (96.6%). Twelve of the fifteen pansharpen combinations classified improved in accuracy when the object-based analysis was performed, but these increases were usually minimal, in many cases not exceeding 1%. The most significant variations between object- and pixel-based analyses were observed in fields D and E. An improvement of 6.4% was observed for the pansharpen combination 1-1-1-10 of field D, where the most accurate pixel-based analysis, performed with the ML classifier, yielded an OA of 92.3%, whereas the most accurate object-based classification, obtained with the DT algorithm, reached an OA of 98.7%. Similarly, accuracy increases greater than 4% were observed with the pansharpen combination 1-1-5-5 of both fields D and E, with the greatest OA values of 98.3% and 97.9% obtained with the DT algorithm. A similarly small advantage of object-based techniques can be observed in other precision agriculture studies in which pixel-based analyses already offer reasonable performance. Pérez-Ortiz et al. [60] tested different scenarios of pixel- and object-based classification to detect weeds in sunflower crops and obtained similar results, with improvements of up to 6%. Similarly, Castillejo-González et al. [61] evaluated pixel- and object-based classifications to distinguish late-season wild oat weed patches in wheat fields and suggested that the small size of the objects and the excellent behaviour of the classification algorithms in the pixel-based classifications meant that the object-based classifications did not produce a significant improvement in precision.
In object-based analyses, eight different segmented variables were classified, which means more information that can enhance or worsen the capacity to distinguish among categories. The MD, SAM, and ML algorithms give equal weight to all the variables involved in the classification process and use all of them, regardless of whether they improve or degrade the classification. The DT algorithm, by contrast, analyses all the variables and selects only the information that really helps to distinguish among categories, increasing its efficiency. Figure 4 shows the percentage of use of each variable in the DT classifications. Of the eight segmented variables handled in this study, the DT algorithm used only six across the full set of analyses, and only two or three were necessary in most of the DT classifications. All spectral variables were used in the DT analyses, but the bands selected most frequently were the NIR mean layer and the NDVI index. Whereas the NIR mean layer showed the highest level of intervention in the DT analyses, with 44.4% for pansharpen combination 1-1-1-10 and 37.5% for combination 1-1-5-5, the NDVI band was used in all three pansharpen combinations, with 27.3%, 22.2%, and 12.5% of intervention for combinations 1-1-1-1, 1-1-1-10, and 1-1-5-5, respectively. Of the five geometrical segmented variables, only Width, Length, and Border index were used in those classifications. The Width feature was the most useful, with 45.5% of intervention for combination 1-1-1-1 and 12.5% for combination 1-1-5-5. The scarce use of morphological variables can be explained by the fact that the olive tree presents different canopy architectures depending on characteristics such as age, farm management, variety, and pruning (Figure 5). The very high accuracies obtained with few variables in the DT classifications agree with [62], which concluded that DT tends to be very efficient and robust when a large number of predictor variables are introduced into a model, generally performing fast and being insensitive to noise in the input data. This behaviour explains the rise in accuracy that the DT classifications exhibited in object-based analyses compared with the ML algorithm.
The pansharpening process was necessary to obtain enough spatial resolution in the images to accurately distinguish the olive tree canopies. The idea of altering the spectral information of the pansharpened images to emphasize the spectral bands most useful for distinguishing vegetation from other land uses gave satisfactory results, offering accuracy increases in three of the five fields studied. This improvement is especially significant in fields D and E, spectrally complex fields in which it was more difficult to isolate the olive trees. Field D showed the greatest improvement with the spectrally altered images. In pixel-based classifications, increases of approximately 6% and 4% were observed between the combination 1-1-1-1 (OA of 88.1%) and the combinations 1-1-5-5 (94.2%) and 1-1-1-10 (92.3%), respectively. The increases observed in object-based classifications were more prominent: the combination 1-1-1-1 obtained an OA of 87.2%, whereas the pansharpen combinations 1-1-5-5 and 1-1-1-10 showed OA values of 98.3% and 98.7%, respectively, which implies accuracy increases of more than 11%. Similarly, field E showed important accuracy increases of 6% and 9% in pixel-based analyses and of 8% and 7% in object-based classifications for the 1-1-5-5 and 1-1-1-10 pansharpen combinations, respectively. Although no studies based on pansharpened images with different spectral weights were found, equal weighting of the multispectral bands in the pansharpening process has been evaluated in agronomic scenarios. When homogeneous land uses such as herbaceous crops were analysed with pansharpened imagery, the improvements in classification accuracy due to the more spectrally and spatially detailed imagery could not be considered remarkable with respect to the multispectral imagery [63]. Nevertheless, in land uses with high intraclass variability such as woody crops, where the individual trees must be isolated from the soil and other covers present in the field to design site-specific crop management, the spatial detail is needed. García-Torres et al. [64] analysed the capacity of different spatial resolutions to isolate olive trees and suggested that imagery with spatial resolutions from 0.25 to 1.5 m was generally suitable for characterizing olive groves with olive trees of over 3–4 m². Few studies have used pansharpened images to isolate trees; it is more common to obtain tree information from the fusion of different types of data, such as multispectral, hyperspectral, or LiDAR data, or from the analysis of very high spatial resolution UAV imagery. Johnson et al. [65] tested different pansharpening processes to map residential-area trees and damaged oak trees. Since a hybrid approach combining two pansharpening methods produced the best results in their study, they recommended that users perform the pansharpening themselves rather than purchase pansharpened imagery directly from the image vendor, so that they can adapt the methodology to their analysis.
An accurate map of olive orchard fields enables the design of site-specific management treatments and can contribute to the follow-up and assessment of agri-environmental regulations. Further studies will focus on establishing a hierarchical classification system that aims to discriminate all olive orchard fields present in the entire QB scene while evaluating the potential of image sharpening for classifying different land uses. In the first level of the hierarchical analysis, the olive orchard fields will be identified and isolated from the other land uses in the whole studied region and, subsequently, the olive trees of each field will be characterized. Additionally, with each tree individually discriminated, different agri-environmental indicators of olive orchards related to the number and area of the trees (e.g., potential productivity of each tree and potential production of each plot) and to bare soil and other vegetation covers (e.g., risk of erosion and run-off) can be predicted.

4. Conclusions

The olive production sector is characterized by a large number of small operators, which directly affects production. The results of the present study show that pansharpened multispectral QuickBird imagery can be successfully used to map and delineate olive tree canopies. Accurate knowledge of the location and delimitation of the olive trees can be used as a basis for the precision management of fertilizers, pesticides, and watering, since there is an obvious relationship between tree size and potential productivity based on the requirements for nutrients, watering doses, and plant protection products such as fungicides. Although all the classification algorithms tested offered accurate results, ML and DT were the two most precise classifiers. As the red and especially the near-infrared bands are very important for discriminating vegetation, altering the weights of these spectral bands in the pansharpening process significantly enhanced the accuracy of the olive tree delineation, with average improvements of up to 9% and 11% in pixel- and object-based analyses, respectively.
With regard to recommending one methodology to delineate olive tree crowns, two considerations should be made: the improvement in accuracy obtained and the computational or expertise requirements involved in the process. The decision of whether or not to carry out more complex analyses will depend on the importance of achieving maximum accuracy and on the cost/efficiency ratio sought. To create a crop inventory, pixel-based classification with the ML classifier and standard pansharpened images would be the best choice. However, to produce a map ready to be used in precision agriculture decision-making, object-based analysis with the DT algorithm and pansharpened imagery with an over-weighted near-infrared band is highly recommended.

Funding

This research received no external funding.

Acknowledgments

I extend thanks to Javier Carrasco for his assistance with data processing.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Precision Agriculture in Europe. Legal, Social and Ethical Considerations. European Parliament (European Parliamentary Research Service). Available online: http://www.europarl.europa.eu/RegData/etudes/STUD/2017/603207/EPRS_STU(2017)603207_EN.pdf (accessed on 24 October 2018).
  2. The European Parliament and the Council of the European Union. Regulation 1306/2013 of the European Parliament and of the Council of 17 December 2013 on the financing, management and monitoring of the common agricultural policy and repealing Council Regulations (EEC) No 352/78, (EC) No 165/94, (EC) No 2799/98, (EC) No 814/2000, (EC) No 1290/2005 and (EC) No 485/2008 OJ L 347, 20.12.2013. Off. J. Eur. Union 2013, 347, 549–607. [Google Scholar]
  3. The European Parliament and the Council of the European Union. Regulation 1107/2009 of the European Parliament and of the Council of 21 October 2009 concerning the placing of plant protection products on the market and repealing Council Directives 79/117/EEC and 91/414/EEC. Off. J. Eur. Union 2009, 309, 1–50. [Google Scholar]
  4. de Castro, A.I.; Jiménez-Brenes, F.M.; Torres-Sánchez, J.; Peña, J.M.; Borra-Serrano, I.; López-Granados, F. 3-D characterization of vineyards using a novel UAV imagery-based OBIA procedure for precision viticulture applications. Remote Sens. 2018, 10, 584. [Google Scholar] [CrossRef]
  5. Fitzgerald, G.J.; Maas, S.J.; Detar, W.R. Spider mite detection and canopy component mapping in cotton using hyperspectral imagery and spectral mixture analysis. Precis. Agric. 2004, 5, 275–289. [Google Scholar] [CrossRef]
  6. Yang, C.; Everitt, J.H.; Bradford, J.M. Comparison of QuickBird satellite imagery and airborne imagery for mapping grain sorghum yield patterns. Precis. Agric. 2006, 7, 33–44. [Google Scholar] [CrossRef]
  7. López-Granados, F.; Torres-Sánchez, J.; Serrano-Pérez, A.; de Castro, A.I.; Mesas-Carrascosa, F.; Peña, J. Early season weed mapping in sunflower using UAV technology: Variability of herbicide treatment maps against weed thresholds. Precis. Agric. 2016, 17, 183–199. [Google Scholar] [CrossRef]
  8. Huang, Y.; Reddy, K.N.; Fletcher, R.S.; Pennington, D. UAV low-altitude remote sensing for precision weed management. Weed Technol. 2018, 32, 2–6. [Google Scholar] [CrossRef]
  9. López-Granados, F.; Torres-Sánchez, J.; De Castro, A.; Serrano-Pérez, A.; Mesas-Carrascosa, F.; Peña, J. Object-based early monitoring of a grass weed in a grass crop using high resolution UAV imagery. Agron. Sustain. Dev. 2016, 36, 67. [Google Scholar] [CrossRef]
  10. Du, Q.; Chang, N.; Yang, C.; Srilakshmi, K.R. Combination of multispectral remote sensing, variable rate technology and environmental modeling for citrus pest management. J. Environ. Manag. 2008, 86, 14–26. [Google Scholar] [CrossRef]
  11. Percival, D.C.; Gallant, D.; Harrington, T.; Brown, G. Potential for commercial unmanned aerial vehicle use in wild blueberry production. Acta Hortic. 2017, 1180, 233–240. [Google Scholar] [CrossRef]
  12. Rai, M.; Ingle, A. Role of nanotechnology in agriculture with special reference to management of insect pests. Appl. Microbiol. Biotechnol. 2012, 94, 287–293. [Google Scholar] [CrossRef] [PubMed]
  13. Herrmann, I.; Vosberg, S.K.; Ravindran, P.; Singh, A.; Chang, H.; Chilvers, M.I.; Conley, S.P.; Townsend, P.A. Leaf and canopy level detection of fusarium virguliforme (sudden death syndrome) in soybean. Remote Sens. 2018, 10, 426. [Google Scholar] [CrossRef]
  14. Hillnhütter, C.; Mahlein, A.; Sikora, R.A.; Oerke, E. Remote sensing to detect plant stress induced by heterodera schachtii and rhizoctonia solani in sugar beet fields. Field Crops Res. 2011, 122, 70–77. [Google Scholar] [CrossRef]
  15. Santoso, H.; Gunawan, T.; Jatmiko, R.H.; Darmosarkoro, W.; Minasny, B. Mapping and identifying basal stem rot disease in oil palms in north sumatra with QuickBird imagery. Precis. Agric. 2011, 12, 233–248. [Google Scholar] [CrossRef]
  16. Fisher, D.K.; Hinton, J.; Masters, M.H.; Aasheim, C.; Butler, E.S.; Reichgelt, H. Improving irrigation efficiency through remote sensing technology and precision agriculture in SE Georgia. In Proceedings of the ASAE Annual International Meeting 2004, Ottawa, ON, Canada, 1–4 August 2004; pp. 3461–3470. [Google Scholar]
  17. Gago, J.; Douthe, C.; Coopman, R.E.; Gallego, P.P.; Ribas-Carbo, M.; Flexas, J.; Escalona, J.; Medrano, H. UAVs challenge to assess water stress for sustainable agriculture. Agric. Water Manag. 2015, 153, 9–19. [Google Scholar] [CrossRef]
  18. Rossini, M.; Fava, F.; Cogliati, S.; Meroni, M.; Marchesi, A.; Panigada, C.; Giardino, C.; Busetto, L.; Migliavacca, M.; Amaducci, S.; et al. Assessing canopy PRI from airborne imagery to map water stress in maize. ISPRS J. Photogramm. Remote Sens. 2013, 86, 168–177. [Google Scholar] [CrossRef]
  19. Gonzalez-Dugo, V.; Zarco-Tejada, P.; Nicolás, E.; Nortes, P.A.; Alarcón, J.J.; Intrigliolo, D.S.; Fereres, E. Using high resolution UAV thermal imagery to assess the variability in the water status of five fruit tree species within a commercial orchard. Precis. Agric. 2013, 14, 660–678. [Google Scholar] [CrossRef]
  20. Launeau, P.; Kassouk, Z.; Debaine, F.; Roy, R.; Mestayer, P.G.; Boulet, C.; Rouaud, J.; Giraud, M. Airborne hyperspectral mapping of trees in an urban area. Int. J. Remote Sens. 2017, 38, 1277–1311. [Google Scholar] [CrossRef]
  21. Torres-Sánchez, J.; López-Granados, F.; Serrano, N.; Arquero, O.; Peña, J.M. High-throughput 3-D monitoring of agricultural-tree plantations with unmanned aerial vehicle (UAV) technology. PLoS ONE 2015, 10, e0130479. [Google Scholar] [CrossRef]
  22. Kim, C.; Hong, S. Identification of tree species from high resolution satellite imagery by using crown parameters. Presented at the SPIE—The International Society for Optical Engineering 2008, Cardiff, Wales, UK, 15–18 September 2008. [Google Scholar] [CrossRef]
  23. Molinier, M.; Astola, H. Feature selection for tree species identification in very high resolution satellite images. In Proceedings of the 2011 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Vancouver, BC, Canada, 24–29 July 2011; pp. 4461–4464. [Google Scholar]
  24. Arockiaraj, S.; Kumar, A.; Hoda, N.; Jeyaseelan, A.T. Identification and quantification of tree species in open mixed forests using high resolution QuickBird satellite imagery. J. Trop. For. Environ. 2015, 5. [Google Scholar] [CrossRef]
  25. Caughlin, T.T.; Rifai, S.W.; Graves, S.J.; Asner, G.P.; Bohlman, S.A. Integrating LiDAR-derived tree height and Landsat satellite reflectance to estimate forest regrowth in a tropical agricultural landscape. Remote Sens. Ecol. Conserv. 2016, 2, 190–203. [Google Scholar] [CrossRef]
  26. Hawryło, P.; Wezyk, P. Predicting growing stock volume of scots pine stands using Sentinel-2 satellite imagery and airborne image-derived point clouds. Forests 2018, 9, 274. [Google Scholar] [CrossRef]
  27. Vierling, L.A.; Vierling, K.T.; Adam, P.; Hudak, A.T. Using satellite and airborne LiDAR to model woodpecker habitat occupancy at the landscape scale. PLoS ONE 2013, 8, e80988. [Google Scholar] [CrossRef] [PubMed]
  28. González-Audícana, M.; Saleta, J.L.; Catalán, R.G.; García, R. Fusion of multispectral and panchromatic images using improved IHS and PCA mergers based on wavelet decomposition. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1291–1299. [Google Scholar] [CrossRef]
  29. Otazu, X.; González-Audícana, M.; Fors, O.; Núñez, J. Introduction of sensor spectral response into image fusion methods. Application to wavelet-based methods. IEEE Trans. Geosci. Remote Sens. 2005, 43, 2376–2385. [Google Scholar] [CrossRef]
  30. Ma, L.; Li, M.; Ma, X.; Cheng, L.; Du, P.; Liu, Y. A review of supervised object-based land-cover image classification. ISPRS J. Photogramm. Remote Sens. 2017, 130, 277–293. [Google Scholar] [CrossRef]
  31. Smits, P.C.; Dellepiane, S.G.; Schowengerdt, R.A. Quality assessment of image classification algorithms for land-cover mapping: A review and a proposal for a cost-based approach. Int. J. Remote Sens. 1999, 20, 1461–1486. [Google Scholar] [CrossRef]
  32. Immitzer, M.; Atzberger, C.; Koukal, T. Tree species classification with random forest using very high spatial resolution 8-band WorldView-2 satellite data. Remote Sens. 2012, 4, 2661–2693. [Google Scholar] [CrossRef]
  33. Whiteside, T.G.; Boggs, G.S.; Maier, S.W. Comparing object-based and pixel-based classifications for mapping savannas. Int. J. Appl. Earth Obs. Geoinf. 2011, 13, 884–893. [Google Scholar] [CrossRef]
  34. Funck, J.W.; Zhong, Y.; Butler, D.A.; Brunner, C.C.; Forrer, J.B. Image segmentation algorithms applied to wood defect detection. Comput. Electron. Agric. 2003, 41, 157–179. [Google Scholar] [CrossRef]
  35. Yu, Q.; Gong, P.; Clinton, N.; Biging, G.; Kelly, M.; Schirokauer, D. Object-based detailed vegetation classification with airborne high spatial resolution remote sensing imagery. Photogramm. Eng. Remote Sens. 2006, 72, 799–811. [Google Scholar] [CrossRef]
  36. Kettig, R.L.; Landgrebe, D.A. Classification of multispectral image data by extraction and classification of homogeneous objects. IEEE Trans. Geosci. Electron. 1976, 14, 19–26. [Google Scholar] [CrossRef]
  37. Hulet, A.; Roundy, B.A.; Petersen, S.L.; Jensen, R.R.; Bunting, S.C. An object-based image analysis of pinyon and juniper woodlands treated to reduce fuels. Environ. Manag. 2014, 53, 660–671. [Google Scholar] [CrossRef] [PubMed]
  38. MacLean, M.G.; Congalton, R.G. Using object-oriented classification to map forest community types. In Proceedings of the American Society for Photogrammetry and Remote Sensing Annual Conference, Milwaukee, WI, USA, 1–4 May 2011. [Google Scholar]
  39. Jiménez-Brenes, F.M.; López-Granados, F.; Castro, A.I.; Torres-Sánchez, J.; Serrano, N.; Peña, J.M. Quantifying pruning impacts on olive tree architecture and annual canopy growth by using UAV-based 3D modelling. Plant Methods 2017, 13, 55. [Google Scholar] [CrossRef] [PubMed]
  40. International Olive Council (IOC). Olive Growing and Nursery Production. Available online: http://www.internationaloliveoil.org/projects/paginas/Section-a.htm (accessed on 26 November 2018).
  41. Ayerza, R.; Steven Sibbett, G. Thermal adaptability of olive (Olea europaea L.) to the arid Chaco of Argentina. Agric. Ecosyst. Environ. 2001, 84, 277–285. [Google Scholar] [CrossRef]
  42. DigitalGlobe. Information Products: Standard Imagery. Available online: https://dg-cms-uploads-production.s3.amazonaws.com/uploads/document/file/21/Standard_Imagery_DS_10-7-16.pdf (accessed on 20 November 2018).
  43. Song, C.; Woodcock, C.E.; Seto, K.C.; Lenney, M.P.; Macomber, S.A. Classification and change detection using Landsat TM data: When and how to correct atmospheric effects? Remote Sens. Environ. 2001, 75, 230–244. [Google Scholar] [CrossRef]
  44. Gonzalo, C.; Lillo-Saavedra, M. A directed search algorithm for setting the spectral–spatial quality trade-off of fused images by the wavelet Á Trous method. Can. J. Remote Sens. 2008, 34, 367–375. [Google Scholar] [CrossRef]
  45. Ranchin, T.; Wald, L. Fusion of high spatial and spectral resolution images: The ARSIS concept and its implementation. Photogramm. Eng. Remote Sens. 2000, 66, 49–61. [Google Scholar]
  46. Lillo-Saavedra, M.; Gonzalo, C. Spectral or spatial quality for fused satellite imagery? A trade-off solution using the wavelet Á Trous algorithm. Int. J. Remote Sens. 2006, 27, 1453–1464. [Google Scholar] [CrossRef]
  47. Baatz, M.; Schäpe, A. Multiresolution segmentation: An optimization approach for high quality multi-scale image segmentation. In Proceedings of the 12th Symposium for Applied Geographic Information Processing (Angewandte Geographische Informationsverarbeitung XII) AGIT 2000, Salzburg, Austria, 5–7 July 2000. [Google Scholar]
  48. Rouse, J.W., Jr.; Haas, R.; Schell, J.; Deering, D. Monitoring vegetation systems in the Great Plains with ERTS. NASA Spec. Publ. 1974, 351, 309–317. [Google Scholar]
  49. Roujean, J.L.; Breon, F.M. Estimating PAR absorbed by vegetation from bidirectional reflectance measurements. Remote Sens. Environ. 1995, 51, 375–384. [Google Scholar] [CrossRef]
  50. Hodgson, M.E. Reducing the computational requirements of the minimum-distance classifier. Remote Sens. Environ. 1988, 25, 117–128. [Google Scholar] [CrossRef]
  51. Hecker, C.; Van Der Meijde, M.; Van Der Werff, H.; Van Der Meer, F.D. Assessing the influence of reference spectra on synthetic SAM classification results. IEEE Trans. Geosci. Remote Sens. 2008, 46, 4162–4172. [Google Scholar] [CrossRef]
  52. Strahler, A.H. The use of prior probabilities in maximum likelihood classification of remotely sensed data. Remote Sens. Environ. 1980, 10, 135–163. [Google Scholar] [CrossRef]
  53. Quinlan, R. C4.5: Programs for Machine Learning; Morgan Kaufmann: San Mateo, CA, USA, 1993. [Google Scholar]
  54. Congalton, R.G. A review of assessing the accuracy of classifications of remotely sensed data. Remote Sens. Environ. 1991, 37, 35–46. [Google Scholar] [CrossRef]
  55. Rogan, J.; Franklin, J.; Roberts, D.A. A comparison of methods for monitoring multitemporal vegetation change using thematic mapper imagery. Remote Sens. Environ. 2002, 80, 143–156. [Google Scholar] [CrossRef]
  56. Wald, L. Data Fusion: Definitions and Architectures. Fusion of Images of Different Spatial Resolutions; Presses de l'Ecole, Ecole des Mines de Paris: Paris, France, 2002; p. 200. ISBN 2-911762-38-X. [Google Scholar]
  57. Gonzalo-Martín, C.; Lillo-Saavedra, M. Quickbird image fusion by a multirresolution-multidirectional joint image representation. IEEE Lat. Am. Trans. 2007, 5, 32–37. [Google Scholar] [CrossRef]
  58. Foody, G.M. Status of land cover classification accuracy assessment. Remote Sens. Environ. 2002, 80, 185–201. [Google Scholar] [CrossRef]
  59. Monserud, R.A.; Leemans, R. Comparing global vegetation maps with the kappa statistic. Ecol. Model. 1992, 62, 275–293. [Google Scholar] [CrossRef]
  60. Pérez-Ortiz, M.; Gutiérrez, P.A.; Peña, J.M.; Torres-Sánchez, J.; Hervás-Martínez, C.; López-Granados, F. An experimental comparison for the identification of weeds in sunflower crops via unmanned aerial vehicles and object-based analysis. Lect. Notes Comput. Sci. 2015, 9094, 252–262. [Google Scholar] [CrossRef]
  61. Castillejo-González, I.L.; Peña-Barragán, J.M.; Jurado-Expósito, M.; Mesas-Carrascosa, F.J.; López-Granados, F. Evaluation of pixel- and object-based approaches for mapping wild oat (Avena sterilis) weed patches in wheat fields using QuickBird imagery for site-specific management. Eur. J. Agron. 2014, 59, 57–66. [Google Scholar] [CrossRef]
  62. Brodley, C.E.; Friedl, M.A. Decision tree classification of land cover from remotely sensed data. Remote Sens. Environ. 1997, 61, 399–409. [Google Scholar] [CrossRef]
  63. Castillejo-González, I.L.; López-Granados, F.; García-Ferrer, A.; Peña-Barragán, J.M.; Jurado-Expósito, M.; de la Orden, M.S.; González-Audicana, M. Object- and pixel-based analysis for mapping crops and their agro-environmental associated measures using QuickBird imagery. Comput. Electron. Agric. 2009, 68, 207–215. [Google Scholar] [CrossRef]
  64. García-Torres, L.; Peña-Barragán, J.M.; López-Granados, F.; Jurado-Expósito, M.; Fernández-Escobar, R. Automatic assessment of agro-environmental indicators from remotely sensed images of tree orchards and its evaluation using olive plantations. Comput. Electron. Agric. 2008, 61, 179–191. [Google Scholar] [CrossRef]
  65. Johnson, B.A.; Tateishi, R.; Hoan, N.T. Satellite image pansharpening using a hybrid approach for object-based image analysis. ISPRS Int. J. Geo-Inf. 2012, 1, 228–241. [Google Scholar] [CrossRef]
Figure 1. Location of the study area in Andalusia, southern Spain. The olive orchard fields are depicted in detail by QuickBird pansharpened images.
Figure 2. Multiresolution segmentation of pansharpened QuickBird (QB) imagery in field A. Pansharpen weight (B-G-R-NIR): (a) 1-1-1-1; (b) 1-1-5-5; and (c) 1-1-1-10.
Figure 3. Result of the least (a,c) and most accurate (b,d) olive orchard classifications of field E.
Figure 4. Relative contribution of object-based variables for DT classifications.
Figure 5. Example of differences in the morphology of the olive crowns.
Table 1. Object-based features derived from segmentation.

| Categories | Features | Brief Description |
|---|---|---|
| Spectral | Mean | Mean of the intensity values of all pixels forming an image object |
| | NDVI | Normalized Difference Vegetation Index [48] |
| | RDVI | Renormalized Difference Vegetation Index [49] |
| Shape | Area | Number of pixels forming an image object |
| | Asymmetry | Relative length of an image object compared to a regular ellipse polygon |
| | Border index | Ratio between the border length of the image object and that of the smallest enclosing rectangle |
| | Length | Product of the number of pixels and the length-to-width ratio of an image object |
| | Width | Ratio between the number of pixels and the length-to-width ratio of an image object |
Table 2. Spectral and spatial ERGAS indexes used to control the quality of the pansharpened images.

| Pansharpen Weight (B-G-R-NIR) | Spectral ERGAS | Spatial ERGAS |
|---|---|---|
| 1-1-1-1 | 0.72 | 1.13 |
| 1-1-5-5 | 1.84 | 1.73 |
| 1-1-1-10 | 1.83 | 1.86 |
Table 3. Most satisfactory segmentation parameters obtained for each pansharpened field.

| Field | Pansharpen Weight (B-G-R-NIR) | Scale Parameter | Colour | Shape | Compactness | Smoothness |
|---|---|---|---|---|---|---|
| A | 1-1-1-1 | 12 | 0.6 | 0.4 | 0.8 | 0.2 |
| A | 1-1-5-5 | 20 | 0.7 | 0.3 | 0.8 | 0.2 |
| A | 1-1-1-10 | 20 | 0.7 | 0.3 | 0.8 | 0.2 |
| B | 1-1-1-1 | 15 | 0.7 | 0.3 | 0.8 | 0.2 |
| B | 1-1-5-5 | 25 | 0.7 | 0.3 | 0.8 | 0.2 |
| B | 1-1-1-10 | 25 | 0.5 | 0.5 | 0.8 | 0.2 |
| C | 1-1-1-1 | 15 | 0.6 | 0.4 | 0.8 | 0.2 |
| C | 1-1-5-5 | 25 | 0.7 | 0.3 | 0.8 | 0.2 |
| C | 1-1-1-10 | 17 | 0.6 | 0.4 | 0.8 | 0.2 |
| D | 1-1-1-1 | 14 | 0.6 | 0.4 | 0.8 | 0.2 |
| D | 1-1-5-5 | 14 | 0.5 | 0.5 | 0.8 | 0.2 |
| D | 1-1-1-10 | 14 | 0.5 | 0.5 | 0.8 | 0.2 |
| E | 1-1-1-1 | 12 | 0.6 | 0.4 | 0.8 | 0.2 |
| E | 1-1-5-5 | 19 | 0.7 | 0.3 | 0.8 | 0.2 |
| E | 1-1-1-10 | 22 | 0.7 | 0.3 | 0.8 | 0.2 |
Table 4. Classification accuracies of olive orchards at the five fields analysed in the three pansharpened images using different classification algorithms.

Pixel-based analyses:

| Field | Image 1 | MD 2 OA 3 | K | SAM OA | K | ML OA | K | DT OA | K |
|---|---|---|---|---|---|---|---|---|---|
| A | 1-1-1-1 | 94.4 | 0.91 | 92.6 | 0.88 | 95.0 | 0.92 | 91.7 | 0.86 |
| A | 1-1-5-5 | 91.7 | 0.86 | 90.8 | 0.84 | 94.4 | 0.91 | 98.7 | 0.97 |
| A | 1-1-1-10 | 89.4 | 0.82 | 88.0 | 0.79 | 92.8 | 0.87 | 98.6 | 0.98 |
| B | 1-1-1-1 | 91.3 | 0.83 | 94.2 | 0.89 | 98.7 | 0.97 | 95.7 | 0.91 |
| B | 1-1-5-5 | 95.2 | 0.91 | 95.4 | 0.91 | 98.7 | 0.98 | 92.1 | 0.89 |
| B | 1-1-1-10 | 95.7 | 0.86 | 86.5 | 0.73 | 97.5 | 0.95 | 94.3 | 0.90 |
| C | 1-1-1-1 | 90.4 | 0.81 | 93.4 | 0.87 | 97.7 | 0.95 | 96.5 | 0.94 |
| C | 1-1-5-5 | 95.8 | 0.92 | 93.1 | 0.86 | 97.7 | 0.95 | 92.4 | 0.89 |
| C | 1-1-1-10 | 95.5 | 0.91 | 85.3 | 0.71 | 96.6 | 0.93 | 94.1 | 0.90 |
| D | 1-1-1-1 | 69.7 | 0.39 | 75.4 | 0.51 | 88.1 | 0.76 | 84.8 | 0.80 |
| D | 1-1-5-5 | 87.5 | 0.75 | 87.9 | 0.76 | 94.2 | 0.88 | 88.5 | 0.84 |
| D | 1-1-1-10 | 89.5 | 0.79 | 85.3 | 0.71 | 92.3 | 0.85 | 87.3 | 0.83 |
| E | 1-1-1-1 | 77.3 | 0.55 | 76.7 | 0.54 | 86.7 | 0.73 | 86.8 | 0.83 |
| E | 1-1-5-5 | 93.4 | 0.87 | 91.5 | 0.83 | 93.0 | 0.86 | 90.7 | 0.89 |
| E | 1-1-1-10 | 95.2 | 0.91 | 88.1 | 0.76 | 89.9 | 0.80 | 88.8 | 0.85 |

Object-based analyses:

| Field | Image 1 | MD OA | K | SAM OA | K | ML OA | K | DT OA | K |
|---|---|---|---|---|---|---|---|---|---|
| A | 1-1-1-1 | 96.2 | 0.93 | 96.6 | 0.94 | 97.8 | 0.96 | 94.9 | 0.91 |
| A | 1-1-5-5 | 96.4 | 0.94 | 94.5 | 0.91 | 97.9 | 0.96 | 98.9 | 0.98 |
| A | 1-1-1-10 | 91.7 | 0.86 | 95.7 | 0.93 | 98.8 | 0.98 | 98.7 | 0.98 |
| B | 1-1-1-1 | 88.8 | 0.78 | 91.3 | 0.82 | 99.1 | 0.98 | 96.9 | 0.95 |
| B | 1-1-5-5 | 94.6 | 0.89 | 97.3 | 0.94 | 99.3 | 0.99 | 99.3 | 0.98 |
| B | 1-1-1-10 | 94.1 | 0.88 | 77.9 | 0.59 | 98.7 | 0.97 | 98.5 | 0.97 |
| C | 1-1-1-1 | 88.8 | 0.78 | 88.4 | 0.77 | 97.5 | 0.95 | 97.0 | 0.94 |
| C | 1-1-5-5 | 95.6 | 0.91 | 96.5 | 0.93 | 97.9 | 0.96 | 98.7 | 0.98 |
| C | 1-1-1-10 | 94.9 | 0.90 | 64.9 | 0.31 | 98.5 | 0.97 | 98.6 | 0.97 |
| D | 1-1-1-1 | 78.9 | 0.58 | 86.1 | 0.72 | 79.7 | 0.59 | 87.2 | 0.82 |
| D | 1-1-5-5 | 87.1 | 0.74 | 85.6 | 0.71 | 88.4 | 0.77 | 98.3 | 0.96 |
| D | 1-1-1-10 | 85.7 | 0.72 | 88.0 | 0.76 | 91.6 | 0.83 | 98.7 | 0.97 |
| E | 1-1-1-1 | 82.0 | 0.64 | 79.2 | 0.59 | 89.4 | 0.79 | 89.0 | 0.84 |
| E | 1-1-5-5 | 97.1 | 0.94 | 96.8 | 0.94 | 97.5 | 0.95 | 97.9 | 0.96 |
| E | 1-1-1-10 | 96.1 | 0.92 | 82.1 | 0.64 | 97.3 | 0.95 | 97.1 | 0.95 |

1 Pansharpen weight (B-G-R-NIR); 2 Method of classification: MD, Minimum Distance; SAM, Spectral Angle Mapper; ML, Maximum Likelihood; DT, Decision Tree; 3 Accuracy values: OA, overall accuracy (%); K, Kappa coefficient.
