1. Introduction
Recent advances in remotely sensed imagery and geospatial image processing using unmanned aerial vehicles (UAVs) have enabled the creation of rapid and ongoing monitoring tools for crop management [1,2,3,4] and the detection/surveillance of insect pests. However, challenges remain in remote sensing applications, such as detecting early incursions of cryptic pest species such as grape phylloxera (Daktulosphaira vitifoliae Fitch) in vineyards. Grape phylloxera is currently present in most grape-growing countries, but is relatively localised in wine districts of southeastern Australia. Grape phylloxera is a very small insect that lives primarily underground, feeds on the roots of grapevines, and consequently damages the root system. This impairs water and nutrient uptake, causing stress that is expressed above-ground as impaired and altered photosynthesis, changes in pigment ratios, reduced canopy, slow stunted growth, and reduced yield [5]. The symptoms of infestation usually appear after two to three years, although in some instances this can take longer [6].
The current surveillance practice for growers is to visually inspect their grapevines from December to April for signs of grapevine damage attributable to a phylloxera infestation [7]. A more intensive method of monitoring utilises phylloxera emergence traps at a density of one trap every fifth panel of every third grapevine row (the standard trapping method) [8]. Traps are deployed in the spring and summer months, when immature (first instar) insects move from the grapevine roots onto the soil surface [7]. The traps are usually placed at the base of suspected infested grapevines and inspected after two to four weeks. In addition, the roots of suspected infested grapevines can be inspected for yellow galls, swellings on the older roots, and the yellow insects themselves [9]. Although these practices have been extensively used as standard methods, they are time consuming, season dependent, labour intensive, and require taxonomic expertise [10].
Ground-based spectral observations using hyperspectral imagery have been conducted to characterise the spectral response of phylloxera-infested stressed grapevines [11]; however, that study focused only on leaf-level observations. In addition, several aerial studies have been conducted for phylloxera detection and monitoring [12,13,14]. A canopy-level characterisation of phylloxera has been studied [15], but further research is needed to develop reliable predictive detection methods. Typical indices for the analysis of vegetation stress include the normalised difference vegetation index (NDVI) and the colour infrared composite (CIR) [7], but they cannot distinguish the stress caused by phylloxera infestation from stress caused by other sources, and may only be useful for monitoring temporal changes in phylloxera distribution over known infested vineyards [16]. Aerial thermal imagery can be used to identify grapevines showing indications of yellowing or decline that require closer inspection [17].
A spatial and temporal analysis methodology has been applied to grapevine vigour analysis to assess the health of a vineyard [18]; however, hyperspectral imagery was not considered. Micro UAVs have been used to generate georeferenced orthophotos for precision agriculture, with the aim of producing vegetation indices in the visible (VIS) and near-infrared (NIR) spectrum [19], yet higher spectral resolution imagery is needed to produce a wider range of vegetation indices. UAV-based point cloud and hyperspectral imagery systems have been developed and used for biomass estimation of wheat and barley crops [20,21], but this new technology still needs to be tested in broader applications. Low-altitude aerial imagery has been trialled to detect diseases in avocado trees using different vegetation indices derived from RGB images [22]. UAV and aerial RGB imagery have also been applied to the detection of infestation symptoms on olive trees (canopy discolouration) and to palm tree mapping [23], but only visible symptoms were considered, and further research is needed to include vegetation indices and spectral signature analysis.
This paper presents the methodology for developing a predictive detection method for pests using processed airborne RGB, multispectral, and hyperspectral data combined with ground-collected data. Among the methods used for processing the data are the photogrammetry of RGB imagery, the development of an airborne RGB-based digital elevation model in order to assess the canopy vigour, the georeferencing of different datasets, the processing of multispectral and hyperspectral imagery to generate vegetation indices, and the extraction of mean spectral signatures of grapevines for different levels of pest infestation at the canopy level.
Section 3 describes the UAV and sensors used for airborne imagery collection. Section 4 describes the field experiments carried out in December 2016 and February 2017. Finally, we show results for the correlation of the most relevant vegetation indices with an expert vigour assessment and a digital vigour model, and provide conclusions on these preliminary findings, together with insights into the ongoing and further work required to generate a predictive model for pest infestation in crops (in this case grapevine phylloxera). A second paper will focus on verifying and validating a predictive model in a case study of a serious pest in vineyards; we envision its title to be Improving Plant Health Surveillance in Vineyards and Row Crops: A Predictive Model for the Early Detection, Extent, and Impact of Grape Phylloxera.
2. Methods
2.1. Predictive Detection Model Workflow
Our approach for generating a predictive model for pests and diseases in vineyards or other row crops (e.g., avocado trees, macadamia trees) employs multiple stages (Figure 1). The first stage of the process is data collection. This includes the collection of airborne RGB, multispectral, and hyperspectral imagery, together with ground data in the form of ground control points (GCP), reflectance references, expert visual vigour assessments, EM-38 soil conductivity, and ground trap counts.
In the second stage, multispectral and hyperspectral images are processed to obtain radiance. All of the imagery is then orthorectified with structure-from-motion algorithms, using the ground control points as input. This process generates orthomosaics of the entire crop. The multispectral and hyperspectral orthomosaics are processed to obtain reflectance data cubes using mean white references obtained from boards and from imagery taken during pre- and post-flight operations.
The next stage uses these reflectance images to extract spectral signatures of the crop at the canopy level, at different levels of infestation, for the calculation of numerous vegetation indices selected based on the symptoms of infestation (e.g., reduced chlorophyll content, leaf yellowing). Digital surface models (DSM) and digital terrain models (DTM) are obtained from the RGB orthomosaics in order to produce a digital vigour model (DVM).
The next step in this stage is to combine all of the multiple sources of data into a single information system. To do this, a segmentation process extracts data for individual plants, with the aim of creating a table that contains the different attributes of each single plant within the crop. The result of the previous step is an attribute table containing the extracted values of several vegetation indices (see Table 1), expert vigour assessment classes, and values estimated from the DVM. The table also includes trap data, which is only used for the verification of suspected infestations, and EM38 soil conductivity data collected in the field, which could highlight differences in soil moisture and thus possible regions where phylloxera could establish.
The following is the list of the attributes used in the georeferenced table (a minimal sketch of the assembled table follows the list):
Tree number
Latitude
Longitude
Block
Row
Panel
Tree variety
Expert visual vigour assessment
Digital vigour model
Multispectral-derived indices and bands
Hyperspectral-derived indices and bands
EM38 data
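To make the structure concrete, the sketch below assembles such a table as a pandas DataFrame. This is an illustrative Python sketch, not the study's implementation (the study used ArcMap for this step), and all column names and values are hypothetical placeholders.

```python
import pandas as pd

# One record per grapevine; the columns mirror the attribute list above.
# All values are hypothetical placeholders, not data from the study.
records = [
    {
        "tree_number": 1, "latitude": -37.75, "longitude": 145.40,
        "block": "A", "row": 12, "panel": 5, "tree_variety": "Chardonnay",
        "expert_vigour_class": 3,       # expert visual vigour assessment class
        "dvm_height_m": 0.85,           # digital vigour model estimate (m)
        "ndvi": 0.71, "mcari": 0.18,    # multispectral/hyperspectral indices
        "em38_conductivity": 42.0,      # EM38 soil conductivity reading
    },
]
attribute_table = pd.DataFrame.from_records(records)
```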
Once the table is populated with georeferenced data, the next stage in the process is the development of a pest predictive detection model. The first step in this stage is to carry out a correlation analysis between the expert vigour assessment, taken as ground truth, and the vegetation indices calculated from the multispectral and hyperspectral data, the EM38 data, and the DVM.
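Continuing the illustrative sketch above, a first correlation screen could be expressed as follows; the predictor column names are the hypothetical ones introduced earlier, and the choice of Spearman rank correlation (a reasonable option for ordinal vigour classes) is our assumption rather than the study's stated statistic.

```python
# Correlate each candidate predictor with the expert vigour class.
# `attribute_table` is the hypothetical DataFrame sketched above.
predictors = ["dvm_height_m", "ndvi", "mcari", "em38_conductivity"]
correlations = attribute_table[predictors].corrwith(
    attribute_table["expert_vigour_class"], method="spearman"
)
print(correlations.sort_values(ascending=False))
```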
The results obtained in the correlation analysis are the foundation for the development of a preliminary phylloxera detection model, which is followed by an evaluation and a ground-based verification to obtain a final phylloxera detection model. This final stage is beyond the scope of this paper and is part of ongoing research work.
2.2. Orthorectification, Hyperspectral Imagery Processing, and GIS Tools
Data orthorectification of the colour and multispectral data was conducted using Agisoft PhotoScan (Agisoft LLC., St. Petersburg, Russia). The software allowed us to perform the photogrammetric processing of digital images and to generate three-dimensional (3D) spatial data, including 3D models, orthomosaics, and digital elevation models (DEM).
The photogrammetric orthorectification process starts with the alignment of photos, where the software refines the camera position of each photo and builds sparse and dense point cloud models of the objects in the multiple collected airborne images. Key points are then projected onto the selected reference system. Finally, ground control points (GCP) with accurately surveyed geolocations can be used to improve the sparse point cloud.
We processed the hyperspectral data with Headwall SpectralViewer, MATLAB, and the Scyllarus® DATA61 open-source Matlab toolbox. The Scyven software from DATA61 was used for analysing and cropping the white reference (this process is explained in Section 2.4) and for transforming radiance data cubes into reflectance [24]. Matlab was used to produce the different indices, run a first approach to pixel classification based on these indices, generate plots of the phylloxera spectral response, and process the hyperspectral data cubes to obtain radiance.
The ArcMap 10.5 (ESRI, Redlands, CA, USA) software was used to incorporate the different sources of data into several layers in order to analyse and visualise the data. It was also used to extract the spectral signatures of the grapevines next to the phylloxera traps, and to generate the attribute table of vegetation indices for every plant.
Google Earth enabled us to visualise overlaid imagery obtained from indices and phylloxera traps, as well as to distinguish the boundaries of the different grapevine varieties in the crops.
2.3. Georeferencing
Georeferencing using the GCPs was conducted and reported using AUSPOS, a free online service managed by Geoscience Australia, and took less than 10 min per GCP. We included the GCPs in the georeferencing of the multispectral and colour data after the generation of a sparse point cloud, followed by the camera location optimisation and dense point cloud generation routines.
2.4. Ground Control Points (GCP) and White Reference Boards
We refined the georeferencing of the collected imagery using custom-made GCPs with high-contrast ground markers, and applied precise point positioning (PPP) localisation methods to geolocate them. We used a triple-frequency Novatel DL-V3 GPS receiver with a Novatel 703 antenna and the Geoscience Australia online service to localise the GCPs to a 3 cm accuracy level, whereas the standard single-frequency GPS chips in the Canon/MicaSense sensors typically provided positioning accuracy within 5–10 m.
Additionally, we deployed a set of ground reflectance references in order to calculate reflectance from the multispectral and hyperspectral imagery. A MicaSense reflectance reference board (grey reference) and a 100 percent white reference Spectralon target were used before and after each flight as the primary calibration tools for the cameras. We also deployed a set of nine white reference boards across the sites so that reflectance could be adjusted in case of changing lighting conditions during the flight.
Figure 2 shows a GCP with white reference boards located alongside, and the grey and white reflectance references.
Images of the white reference boards were taken with the hyperspectral camera on the ground before and after the flights. The hyperspectral images of the white reference board were first verified to avoid using images with saturation in any of their bands (Figure 3a,b), and the portion of the image with the highest radiance values was cropped (Figure 3c). The aerial images were then divided by the mean radiance value of the bands extracted from the cropped white reference image to obtain the reflectance of the surveyed field.
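The following Python sketch illustrates this verification and normalisation step, assuming the data cubes are NumPy arrays of shape (rows, cols, bands) and a 12-bit sensor ceiling for the saturation check; both assumptions, and the function names, are ours.

```python
import numpy as np

DN_MAX = 4095  # assumed 12-bit sensor ceiling; the actual value depends on the camera

def is_saturated(raw_white_cube: np.ndarray) -> bool:
    """True if any pixel of the white-reference image hits the sensor ceiling,
    in which case the image is discarded and another one is used."""
    return bool((raw_white_cube >= DN_MAX).any())

def to_reflectance(radiance_cube: np.ndarray, white_crop: np.ndarray) -> np.ndarray:
    """Divide an aerial radiance cube (rows, cols, bands) by the per-band mean
    radiance of the cropped white-reference region to obtain reflectance."""
    white_mean = white_crop.reshape(-1, white_crop.shape[-1]).mean(axis=0)
    return radiance_cube / white_mean  # broadcasts along the band axis
```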
2.5. Vigour Assessment
Vigour assessment is conducted both visually by an expert and by using the orthorectified images to generate a digital vigour model. Estimating and analysing crop and grapevine vigour from orthorectified imagery is challenging when the crop or vineyard is on sloping terrain where the elevation changes significantly over a small area. Even within one row of grapevines, it is possible to see elevation changes considerably larger than the height of the grapevines themselves. This means that using the terrain's flat surface as a reference to estimate the vigour and height of individual grapevines is not accurate, and a better method is required.
Our approach was to generate dense point clouds (DPC) and classify the points into at least two classes: grapevines and terrain. We ran an optimisation of the DPC (camera = 10 px, marker accuracy = 0.001 m, marker placement = 0.1 px), and additionally kept the reprojection error of the generated DPC below 2 px. As the segmentation criterion in the classification, we used a set of geometrically defined constraints, which we optimised experimentally. We set the maximum distance to 0.6 m, which limits the distance between the point in question and the terrain model, and the maximum angle to 70 deg, which limits the maximum slope of the ground within the scene. We then built two types of meshes and DEMs. One set forms a digital surface model (DSM). The second set uses the rest of the DPC, augmented with inferred surfaces constructed by closing the gaps under the grapevines; this forms a bare-earth or digital terrain model (DTM). We then subtracted the elevations of the DTM from the DSM to obtain a digital vigour model (DVM), in order to analyse and better represent differences in vigour.
In this way, we generated unbiased models of grapevine height or vigour. We then localised the grapevine trunks based on ground data and performed a zonal analysis of each grapevine within a 0.2 m radius of the trunk.
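An illustrative sketch of the DVM subtraction and the per-grapevine zonal analysis follows, assuming the DSM and DTM are aligned NumPy rasters and the trunk positions have already been converted to pixel coordinates; the helper names and the pixel-based zone are our own illustration.

```python
import numpy as np

def digital_vigour_model(dsm: np.ndarray, dtm: np.ndarray) -> np.ndarray:
    """Canopy height above bare earth: DVM = DSM - DTM."""
    return dsm - dtm

def zonal_vigour(dvm: np.ndarray, trunk_rc: tuple, radius_px: int) -> float:
    """Mean DVM value in a circular zone around one trunk; radius_px is the
    0.2 m zone radius converted to pixels at the raster's resolution."""
    rows, cols = np.ogrid[:dvm.shape[0], :dvm.shape[1]]
    r0, c0 = trunk_rc
    mask = (rows - r0) ** 2 + (cols - c0) ** 2 <= radius_px ** 2
    return float(dvm[mask].mean())
```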
2.6. Hyperspectral Processing
We implemented the workflow described in Figure 1 (Stage 2) to pre- and post-process the airborne-collected hyperspectral data. First, the hyperspectral data cubes were transformed from digital numbers into radiance using the Headwall Hyperspec software. Second, we orthorectified each of the scans using the GPS time-stamps from the flight. We then generated orthomosaics by stitching together multiple scans. Finally, the reflectance for the orthomosaic data cubes was determined by dividing the radiance data cubes by the mean radiance of the white reference boards located in the field during the survey flights.
2.7. Mean Spectral Signatures for Different Grapevine Types
We extracted the mean spectral signature for plants with different values of the Modified Chlorophyll Absorption in Reflectance Index (MCARI). In ArcMap 10.5, we selected polygons containing the pixels of a single crop or grapevine. Figure 4 shows an example of the process, where different regions are selected for the signature extraction. Grapevines within the red, orange, and yellow areas show signs of phylloxera infestation, whereas grapevines within the green region are healthy.
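Once a polygon has been rasterised to a pixel mask (e.g., by the GIS tool), the signature extraction itself is a per-band average; a minimal sketch, with the mask assumed given:

```python
import numpy as np

def mean_signature(reflectance_cube: np.ndarray, polygon_mask: np.ndarray) -> np.ndarray:
    """Mean reflectance per band over the pixels inside a selected polygon.

    reflectance_cube: (rows, cols, bands) array; polygon_mask: (rows, cols) boolean.
    Returns a 1-D array of length `bands`: the mean spectral signature.
    """
    return reflectance_cube[polygon_mask].mean(axis=0)
```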
2.8. Vegetation Indices
We calculated vegetation indices from the multispectral and hyperspectral imagery. The indices were selected to evaluate symptoms of infestation such as the premature yellowing of leaves and a reduction in chlorophyll content. A list of the indices used, together with their equations, is given in Table 1, where the prefix H denotes hyperspectral data and M denotes multispectral data.
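As an example of the band arithmetic involved, the sketch below computes two widely used indices from Table 1, the NDVI and the MCARI, using their standard published formulas with reflectances taken at the nearest available band centres:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalised difference vegetation index."""
    return (nir - red) / (nir + red)

def mcari(r700: np.ndarray, r670: np.ndarray, r550: np.ndarray) -> np.ndarray:
    """Modified chlorophyll absorption in reflectance index (Daughtry et al., 2000),
    sensitive to chlorophyll variation; arguments are reflectances at ~700 nm,
    670 nm, and 550 nm."""
    return ((r700 - r670) - 0.2 * (r700 - r550)) * (r700 / r670)
```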
We also created six new indices based on the analysis of the spectral reflectance of infested and uninfested grapevines. The two distinct spectral responses were subtracted to highlight the main differences and to explore relevant bands with large differences as well as bands with equal reflectance. In this way, we detected seven relevant spectral bands, from which we created the indices PI1 to PI6, as described in Appendix A, Equations (A1)–(A6).
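A sketch of how such a band screen might be implemented is shown below; the exact selection procedure and the resulting PI1–PI6 definitions are those of Appendix A, so the function and its top-k selection are purely our illustration.

```python
import numpy as np

def candidate_bands(sig_infested: np.ndarray, sig_healthy: np.ndarray,
                    wavelengths: np.ndarray, top_k: int = 7) -> np.ndarray:
    """Wavelengths with the largest absolute reflectance difference between
    the infested and uninfested mean spectral signatures."""
    diff = np.abs(sig_infested - sig_healthy)
    return wavelengths[np.argsort(diff)[::-1][:top_k]]
```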
3. UAV and Sensors
The methodology described in Section 2 is platform and sensor agnostic, provided the UAV platform or sensor has similar characteristics. For the purposes of this work, we demonstrate the methods using the following:
3.1. UAV
The UAV used is an S800 EVO Hexacopter (DJI Ltd., Shenzhen, China) weighing 6.0 kg and capable of carrying an additional payload of up to 2.0 kg. The frame is fitted with a retractable undercarriage, providing a sensor field of view clear of obstacles. The UAV uses a 16,000 mAh six-cell LiPo battery, which provides a maximum hover time of approximately 20 min with no sensor payload. A WooKong-M flight controller forms the navigation and control system of the UAV, and comes with a stabilisation controller, a GPS unit with an inbuilt compass, and an inertial measurement unit (IMU). The flight controller has multiple autopilot modes, enabling remote control by an operator, autonomous flight along a payload-specific predefined flight path, position and altitude hold, and go-home/landing with enhanced fail-safe operation.
3.2. High Resolution RGB Camera
A Canon 5DsR camera (Canon Inc., Tokyo, Japan) was integrated into the S800 UAV to capture ultra-HD colour (RGB) GPS-stamped images. The camera features a full-frame 50.6-megapixel CMOS sensor with dual DIGIC 6 processors and a 28 mm Canon lens. Specific flight patterns were flown to enable processing of the images with photogrammetry software (e.g., Agisoft PhotoScan) in order to generate comprehensive geospatial orthomosaics, 3D models, and DEMs. This sensor gives our study the ability to generate remarkably detailed spatial models of the crop, capturing the spatial component of the infestation symptoms.
3.3. Multispectral Camera
We used a MicaSense RedEdge multispectral camera (MicaSense Inc., Seattle, WA, USA) to capture five discrete spectral bands: 475 nm (blue), 560 nm (green), 668 nm (red), 717 nm (red edge), and 840 nm (NIR). The sensor combines 1.2-megapixel CMOS spatial capability with the two additional red edge and NIR spectral bands. The images produced by this camera were orthorectified and stitched together to produce rasters that were used to generate vegetation indices, e.g., the NDVI. Additionally, this camera supported the generation of DEMs with resolutions of 3.26 cm/px for 60 m flights and 6.74 cm/px for 100 m flights.
3.4. Hyperspectral Sensor
Hyperspectral imagery was acquired using a Headwall Nano-Hyperspec (Headwall Photonics Inc., Bolton, MA, USA). The hyperspectral sensor recorded data cubes of 274 spectral bands in the visible and near-infrared (VNIR) range (400–1000 nm), with a ~2.2 nm spectral interval and a 5 nm spectral resolution (full width at half maximum (FWHM) with a 20 μm slit). This camera is equipped with a calibrated f/1.8, 4.8 mm Schneider lens, which results in a 50.7 deg field of view over 640 pixels. The collected hyperspectral data cubes are synchronised with GPS/inertial navigation system (INS) position and orientation information in order to perform data cube orthorectification and the mapping of multiple data cubes. The hyperspectral sensor was integrated into the S800 UAV using a custom gimbal designed by the Queensland University of Technology (QUT) Research Engineering Facility (REF). This gimbal has two axes, ensuring the seamless operation of the push-broom hyperspectral scanner in windy conditions. The QUT REF gimbal design features carbon reinforcement over a 3D-printed structure, advanced dampening, brushless motors, and a BaseCam SimpleBGC 32-bit gimbal controller, with a total weight below 1 kg. Mounting the push-broom scanner on the gimbal enhances the camera performance in high-turbulence environments by ensuring consistent and minimal required overlaps between consecutive data cubes over the large study area. This leads to increased overall flight efficiency in open environments.
Figure 5a shows the S800 UAV with the hyperspectral sensor on the gimbal during one of the missions. The gimbal computer-assisted design (CAD) model is presented in Figure 5b.
3.5. Expert Visual Vigour Assessment and EM-38 Data
In addition to the UAV airborne data and GCPs, the methods use ground truth data to confirm the presence of grape phylloxera. The ground truth data included ground traps for insect presence and abundance (with, where necessary, digging to confirm the presence of the insect on the roots), expert visual vigour assessments, and EM38 soil conductivity. Table 2 shows an example of the vigour assessment classification for phylloxera-infested grapevines used in the visual assessment performed by the expert in the field.
4. Field Experiments
Two major aerial surveys were carried out at two locations during December 2016 and February 2017. Different sensors were deployed to collect comprehensive datasets. Our objective was to evaluate diverse state-of-the-art remote sensing capabilities, and their combinations, in order to assess the methodology for the early detection of phylloxera infestations. This section describes the different aerial platforms and sensors used, and how the data were collected.
Two phylloxera-infested vineyards with multiple grapevine varieties in the Yarra Valley, Victoria, Australia were surveyed (see Figure 6). These two sites were selected because they have been continuously monitored by Victorian Department of Economic Development, Jobs, Transport and Resources (DEDJTR) personnel over several grapevine-growing seasons. Site one included 104 rows of ungrafted V. vinifera cultivars Chardonnay, Pinot Noir, Shiraz, and Merlot over an area of 8.5 ha, with 160 to 266 grapevines per row. Site two included 80 rows of ungrafted V. vinifera cultivars Cabernet Sauvignon, Pinot Noir, Merlot, and Roussanne, with 59 to 63 grapevines per row over 3.16 ha. In this paper, we present our analysis of a single block within one vineyard containing the Chardonnay variety, together with some notable results from different blocks for method verification purposes.
We placed eight GCPs over the sites for an hour, which led to a horizontal positional uncertainty below 0.014 m and a vertical uncertainty below 0.046 m (95% confidence levels, GDA94) over the array of points. Figure 7 shows the locations of four GCPs, as marker flags, distributed over site 1.
In the studied vineyards, the top, middle, and bottom supportive wires were positioned at approximately 1.75 m, 1.55 m, and 1.2 m above the ground, respectively (average values for the two studied vineyards). These were used as approximate guides to generate the canopy vigour classes (see Table 2).
The imagery was acquired on two consecutive days at each of two stages of the grapevine phenological cycle: post-flowering (14–15 December 2016) and veraison (14–15 February 2017). This approach is common [7,11], and enables temporal studies of the identification of phylloxera symptoms and population development, as well as validation.
6. Conclusions and Further Research
In this work, we presented a methodology for processing airborne-collected imagery from RGB, multispectral, and hyperspectral cameras to assess the condition of vineyards, with the aim of generating a predictive model for the detection of pests in crops and vineyards. The methods were demonstrated for the detection of phylloxera in vineyards. In particular, we presented the implementation of a digital vigour model, the evaluation of several vegetation indices, and the creation and evaluation of new indices based on hyperspectral signatures to highlight symptoms of grapevine phylloxera infestation.
We presented the results of comparing a digital vigour model of the vineyard with an expert visual assessment. We found that the two assessments correlate positively, indicating that the developed method is a sound approach for generating vigour assessments in vineyards.
We generated vegetation indices to highlight possible symptoms of phylloxera infestation, such as changes in the reflectance/absorption of light indicating a reduction in the chlorophyll content of the grapevines. This can be done remotely using both the multispectral and the hyperspectral collected and processed imagery.
Furthermore, we identified mean spectral signatures for different levels of infestation for the Chardonnay variety at two different times of the year, which helped us find regions of interest in the spectrum and generate new vegetation indices to highlight grape phylloxera infestation.
The development and use of a specific phylloxera index, or of an existing vegetation index, is important to vineyard managers for two reasons. First, such an index could be used on imagery collected from an infested vineyard to determine the extent (area) and severity of the plant pest and its impact on grape production. This would also aid vineyard management decisions, such as where to implement hygiene protocols. Second, such an index could improve the potential for early detection of the pest. At present, the existing surveillance methods (visual vine and root inspection and emergence traps) are ineffectual at detecting early infestation of the vines by grape phylloxera.
We have shown that hyperspectral imagery has the potential to detect grape phylloxera before it is apparent to visual inspection. The next stage is to determine whether these hyperspectral indices can be adapted to the multispectral data without a loss in correlative power. Future work will focus on deriving predictive models (utilising these indices) and testing their accuracy in predicting the presence/abundance of the plant pest. A paper describing the development and validation of the predictive model (Stage 4 of Figure 1) is in preparation as a complement to this paper.
The methods, workflow, results, and analysis presented in this research will contribute to the generation of valuable information for plant pest surveillance. The presented methodology could also be extrapolated to other areas of remote sensing research, such as mineral exploration, biodiversity, and ecological assessment.