Article

VineInspector: The Vineyard Assistant

1
Engineering Department, School of Science and Technology, UTAD—University of Trás-os-Montes e Alto Douro, Quinta de Prados, 5000-801 Vila Real, Portugal
2
CITAB—Centre for the Research and Technology of Agro-Environment and Biological Sciences, UTAD—University of Trás-os-Montes e Alto Douro, Quinta de Prados, 5000-801 Vila Real, Portugal
3
INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, Pólo da FEUP, Faculdade de Engenharia da Universidade do Porto, 4200-465 Porto, Portugal
4
INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, Pólo da UTAD, UTAD—University of Trás-os-Montes e Alto Douro, Quinta de Prados, 5000-801 Vila Real, Portugal
*
Author to whom correspondence should be addressed.
Agriculture 2022, 12(5), 730; https://doi.org/10.3390/agriculture12050730
Submission received: 26 April 2022 / Revised: 18 May 2022 / Accepted: 18 May 2022 / Published: 22 May 2022
(This article belongs to the Special Issue Internet of Things (IoT) for Precision Agriculture Practices)

Abstract

Proximity sensing approaches, with a wide array of sensors available for use in precision viticulture contexts, can nowadays be considered both well-known and mature technologies. Still, several in-field practices performed throughout different crops rely on direct visual observation, supported by gained experience, to assess aspects of plants' phenological development, as well as indicators relating to the onset of common plagues and diseases. Aiming to mimic in-field direct observation, this paper presents VineInspector: a low-cost, self-contained and easy-to-install system, which is able to measure microclimatic parameters and to acquire images using multiple cameras. It is built upon a stake structure, rendering it suitable for deployment across a vineyard. Distinguishable attributes in the periodically acquired images are detected, classified and tallied using artificial intelligence approaches. Furthermore, the system is made available through an IoT cloud-based support system. VineInspector was field-tested under real operating conditions to assess not only the robustness and the operating functionality of the hardware solution, but also the accuracy of the AI approaches. Two applications were developed to evaluate VineInspector's consistency as a viticulturist's assistant in everyday practices. One was intended to determine the size of the very first grapevine shoots, one of the required parameters of the well-known 3–10 rule to predict primary downy mildew infection. The other was developed to tally grapevine moth males captured in sex traps. Results show that VineInspector is a logical step in smart proximity monitoring by mimicking direct visual observation from experienced viticulturists. While the latter are traditionally responsible for a set of everyday practices in the field, these are time- and resource-consuming. VineInspector was proven to be effective in two of these practices, performing them automatically. Therefore, it enables both the continuous monitoring and assessment of a vineyard's phenological development in a more efficient manner, paving the way for more assertive and timely practices against pests and diseases.

1. Introduction

Decision-making in Precision Agriculture (PA) everyday practices is progressively becoming more reliant on data, which can be periodically acquired from both the environment and the crop alike. Indeed, knowing the value of parameters that may have some bearing on crops' phytosanitary condition and their development throughout a season, and that also enable a characterisation of both spatial and temporal variabilities with different degrees of granularity, can only be considered a great asset toward sustainable PA practices.
Data is usually acquired through remote and/or proximity sensing. Remote sensing data consists mostly of aerial imagery acquired by sensors that are coupled to one of three platforms: satellites, manned aircraft and unmanned aerial vehicles (UAVs). Both sensors and platforms enable a multitude of application scenarios: while the former are able to acquire several types of data in different spectral ranges—e.g., RGB, near-infrared (NIR), thermal, multispectral, hyperspectral, LiDAR, ground penetrating radar (GPR)—the latter provide options with regard to coverage, autonomy, cost, payload capacity (whose restrictions have been largely put aside due to the miniaturisation of sensors), geographic and atmospheric contexts, detail level, access, and the temporal frequency of data acquisition. It is fair to recognise the meaningful role that unmanned aerial systems (UAV + sensor + ground station) have had in PA in the last few years. Conversely, proximity sensing data derives from in-field sensors, able to acquire samples of agrometeorological parameters—e.g., temperature, relative humidity, solar radiation, precipitation—but also of those that characterise plants' development, through the so-called phytosensors—e.g., dendrometers, Granier probes to estimate sap flow. Deploying electronics in both harsh and remote environments has its own set of challenges, such as power, robustness, data transmission and granularity [1]. While the latter directly affects cost—more spatial detail usually means placing additional sensors—power requirements are a direct consequence of the number and type of parameters to be measured, but also of the intended temporal detail (i.e., more readings mean more power).
Either in remote or in proximity sensing, image sensors have been proven to provide rigorous qualitative and quantitative assessments of plants' phenological development, as well as of their context [2,3]. Indeed, valuable information can be extracted from this imagery, including plants' phenological status—upon which many cultural practices are based—and morphological or other changes that may indicate the existence of several anomalies, such as nutritional deficiencies [4,5], disease manifestation [6], thermal stress [7] and water stress [8], among others. However, there are still many contexts—geographical, environmental, crop-related, socio-economic and technological—where the trained human eye is critical in evaluating parameters in the field, since purely technological approaches are not yet able to mimic it. Therefore, image sensors are capable of acquiring data that can potentially become useful in training automated approaches to measure parameters traditionally assessed by the trained human eye.
Viticulture has been steadily benefiting from significant technological advances resulting from research and development work done worldwide [9]. Whereas the technology integration rate could be better in Precision Viticulture (PV), both the food and wine markets remain very relevant in the world's economy and social structure. However, viticulture continues to be perceived as largely traditional and reliant on human experience as a support for the decision-making process. All things considered, using image sensors able to capture different spectra, coupled with monitoring systems duly scattered in the field, can be particularly relevant in vineyard management practices. In fact, they will enable the training of artificial intelligence (AI) systems capable of mimicking the human eye and therefore of somehow incorporating acquired experience. This approach is presented herein, and it will change today's reality in viticulture regarding the periodic trips that experts need to make to the vineyard—e.g., in the harvesting season, a daily in situ assessment is needed to determine the following day's operations—when just one observation could eventually be enough to support the decision-making process. Some examples of practices that require an expert's direct and frequent observation in the vineyard are the budding of vines, checking for signs of diseases, and assessing vines' phenological status and its evolution.
Current technological development in electronics, communications and embedded systems provides solutions to master both in-field data acquisition processes and devices [10]. Furthermore, these solutions are compatible with autonomous operation, have a low energy consumption that enables them to function by harvesting power from the environment, can be integrated in the landscape so that the impact on both the culture and the cultural practices is minimised, and often present a very low cost [11]. These solutions can incorporate image sensors, enabling the assessment of parameters that would otherwise be very difficult to quantify. The VineInspector system is presented in this paper. It is based on a low-cost, autonomous single-board computer (SBC), which can easily be installed in a vineyard. It has a set of RGB cameras whose sole purpose is to enable the early detection of vine disease incidence. Two example applications were developed within this study: downy mildew incidence risk calculation and the counting of grape moths captured in insect traps. Image analysis through AI is used in both applications. The potential risk of downy mildew infection occurrence can be estimated through the automatic detection of the vines' phenological state [12]—specifically sprouting, as grapevine shoots' size is key to estimating the potential risk of downy mildew infection—based on the well-known 3–10 rule [13]. The rule is called 3–10 because it assumes that primary infections are likely to happen when the following conditions occur simultaneously: air temperature is equal to or higher than 10 °C during the previous 24–48 h, at least 10 mm of continuous rain has fallen during the previous 24–48 h, and grapevine shoots measure at least 10 cm in length [14]. As for the grape moth, the analysis, classification and tally of insects captured in traps baited with a pheromone that attracts grape moth males enable an automatic risk assessment and the triggering of both preventive and mitigating interventions.
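To make the rule concrete, the following minimal sketch shows how its three conditions might be checked from hourly weather samples and a measured shoot length. The data structures, and the reading of the temperature condition as a minimum over the observation window, are illustrative assumptions rather than the exact formulation used by any warning service.

```python
from dataclasses import dataclass
from typing import Sequence

@dataclass
class HourlySample:
    air_temp_c: float    # air temperature (°C)
    rainfall_mm: float   # rainfall accumulated during the hour (mm)

def downy_mildew_risk(window: Sequence[HourlySample], shoot_length_cm: float) -> bool:
    """True when the three 3-10 rule conditions hold simultaneously over the
    previous 24-48 h window: temperature >= 10 °C, >= 10 mm of accumulated
    rain, and grapevine shoots at least 10 cm long."""
    temperature_ok = all(s.air_temp_c >= 10.0 for s in window)
    rainfall_ok = sum(s.rainfall_mm for s in window) >= 10.0
    shoots_ok = shoot_length_cm >= 10.0
    return temperature_ok and rainfall_ok and shoots_ok
```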
The paper is organised as follows: Section 2 presents the state-of-the-art of image classification with regard to grapevine downy mildew detection and prediction, as well as to the automatic tallying of trapped insects. Section 3 provides a detailed description of VineInspector's hardware and software components. Moreover, the methodology followed to implement an experimental AI-based classification engine is also presented, together with the two case-study applications developed. Section 4 presents the results and a detailed discussion. Lastly, Section 5 finishes the paper with some conclusions drawn from practical in-field evaluations and presents future work.

2. Image Classification

Having still images or video captured by in-field data acquisition devices represents (i) the added cost of equipping them with cameras; and (ii) the need to transmit larger data files to a digital infrastructure located elsewhere. The upside is that crops' dynamics may be monitored using proximity image-based approaches, provided that the acquired images' features are (automatically) detected and classified. While well-known computer vision techniques—such as feature descriptors for object detection—may be used for this purpose, they mostly require that the important features be chosen in each image. Given both the complexity and heterogeneity present in in-field crop-related images—e.g., no two plants develop in exactly the same way—together with the environmental context—e.g., lighting conditions—the number of classes to classify is bound to increase. Therefore, image feature classification becomes more and more cumbersome, requiring more resources to be accomplished in a timely manner. An end-to-end learning concept was introduced with Machine Learning (ML). Indeed, the most common approach in PA (supervised learning) is to have a set of annotated images—where the different object classes that may be present in each image are outlined—processed by learning algorithms to train neural networks. After that, the latter should be capable of automatically classifying these same object classes in other images. Since each solution is trained rather than programmed, applications such as image classification, semantic segmentation and object detection have become faster, more accurate and highly flexible, and require less intervention from experts.
Image classification in PA is currently being extensively researched and already has some applications deployed. However, this paper deals specifically with proximity images acquired by in-field data acquisition devices to support PV practices. Both case studies—one to predict primary downy mildew infection by determining the size of the very first grapevine shoots (one of the required parameters in the 3–10 rule); and the other to tally grapevine moth males captured in sex traps—address key issues in vineyard management. As such, and for the sake of conciseness, this section will only present published work that is somewhat related to these two case studies.

2.1. Downy Mildew

Downy mildew is a disease originating in the American continent that was accidentally brought to Europe in the 1870s, through French territory [15,16]. It spread quickly and is now the most destructive pathogen in wine growing regions with rainy springs and summers [17]. Downy mildew is caused by Plasmopara viticola (Berk. & M.A. Curtis) [18]: an endoparasite that develops inside grapevine organs and can infect virtually each and every green organ, particularly the shoots, leaves, inflorescences, tendrils and even the petioles [19,20]. With favourable weather conditions, this disease can cause heavy losses for grape growers. Indeed, in extreme situations it can lead to a total production loss [21]. Early detection is not simple, given that observable symptoms usually appear 7 to 10 days after the infection. However, it is key in controlling the spread of downy mildew [22]. This is the main reason why the potential risk of infection is determined by government agricultural agencies in many countries, through prediction models that are based on weather forecasts and on data acquired from meteorological stations [23,24,25]. The variability of the monitored regions, combined with the granularity of the data coming from weather stations, often makes forecasts less accurate and timely. Furthermore, to the best of our knowledge, all readily available monitoring techniques make use of only common agrometeorological data, such as air temperature and accumulated precipitation. They may also include data from leaf wetness sensors, among others. However, the beginning of the vegetative cycle—bud break—varies with different agrometeorological contexts, as well as with the grapevine varieties. Indeed, it is usually detected by in-field direct visual observation, even though it can be predicted considering the evolution of climatic conditions.
This section reviews the most relevant published research work on the prediction and/or detection of downy mildew in crops, especially those that use images to assess the existence of early symptoms of the disease.
The 3–10 rule is used by Pérez-Expósito et al. [14,26] in their VineSens system: a platform that provides decision support in vineyard management. This work is relevant mainly because it provides further validation of the use of the 3–10 rule in downy mildew risk assessment. VineSens relies on a wireless sensor network made up of autonomous sensor nodes that acquire and store meteorological data. Through this data and using epidemiological models—specifically the 3–10 rule—the system issues an alert when the downy mildew infection risk reaches a certain threshold and preemptive measures must therefore be taken. Resorting to image processing, Sobolu et al. [27] developed a technique for the automatic detection of downy mildew. Segmentation techniques were applied in various colour spaces and the experimental results showed that the disease was recognised quite accurately in the HSV colour space. The authors state that this technique can detect leaf symptoms even in the onset phase and is therefore able to help prevent the spread of infection throughout the whole vineyard.
The approaches of both Lloret et al. [28] and Kim et al. [29] rely on images acquired from fixed spots within the field. The former presents a wireless sensor network in which each node has the capability not only to acquire images, but also to detect any abnormal state in plant leaves through image processing techniques. If a deficiency is identified, the sensor node notifies the farmer by sending a message. Although no images are transmitted outside the sensor nodes, this approach resorts to proximity images and local processing to extract useful information for decision support systems. Indeed, the authors suggest that it will be possible to add a database with images of symptoms, together with a trained neural network, to provide accurate diagnosis from a local perspective. As for Kim et al. [29], the authors developed an automatic real-time disease monitoring system for the early detection of downy mildew symptoms in onions. Images are acquired using a PTZ (pan, tilt, zoom) camera and the leaves' infected regions are identified using a DNN (deep neural network) model based on the VGG16 architecture. Hence, both works make it possible to identify the infection as soon as it manifests in the leaves' colour and/or shape.
As for Abdelghafour et al. [30,31], they studied the potential of using proximity colour images to detect downy mildew symptoms in grapevines. Images are acquired through an in-field imaging sensor coupled to a tractor. Furthermore, an algorithmic strategy for the detection of various forms of leaf symptoms in high-resolution proximal images is also presented. The authors concluded that this approach both reliably detects downy mildew symptoms and is able to estimate the affected tissue area.

2.2. Insect Tally

A possible way to deal with some crop pests is by installing pheromone diffusers in fields. They work by saturating the nearby atmosphere with the pests' female sex pheromones, thus creating sexual confusion in males. This technique aims to mislead adult moth males by hindering chemical communication between the sexes, thereby preventing moth females from laying fertile eggs and significantly reducing the pests' impact on crops [32]. These pheromones are also used in sticky traps, where males are captured.
Therefore, traps yield information about the timing of the appearance and activity of certain pests and auxiliaries, allowing treatments to be carried out at the right time. However, the tally of captured insects is still mainly done visually through field work, which is time consuming, expensive and can always introduce delays in the decision-making process. There are a few more published works addressing this application than there are on downy mildew detection and/or infection risk prediction.
Espinoza et al. [33] proposed an approach to detect and monitor two of the most aggressive pests affecting tomato-producing greenhouses in southern Spain: the whitefly (Bemisia tabaci—Gennadius, 1889) and thrips (Frankliniella occidentalis—Pergande, 1895). Both are caught using sticky traps. Detection and monitoring are carried out based on the combination of image processing and artificial neural networks. Digital images of sticky traps are obtained using an image acquisition system, and the detection of objects in the images, segmentation, and estimation of morphological and colour properties are performed by an image processing algorithm for each of the detected objects. Classification is performed using a feed-forward multi-layer artificial neural network. The proposed whitefly identification algorithm achieved an accuracy of 96% and thrips identification an accuracy of 92%. Song et al. [34] proposed a method that can be applied to noisy images from sticky traps to identify and classify three insect species—Harpalus affinis (Schrank, 1781), Sternolophus rufipes (Fabricius, 1792), and Hydrophilidae spp. (Latreille, 1802)—also enabling the tally of each species' individuals. The authors' aim was not to propose a method that stood out from the existing ones with regard to general performance, but rather to develop a method that had the best performance for the considered species. These species have the particularity that individuals' bodies reflect light, which is key to the insect identification process. Individual insects are distinguished through the light points created by the light reflection on their backs. Accuracy was 99.47%, 96.41% and 89.91% when identifying Harpalus affinis, Sternolophus rufipes, and Hydrophilidae spp., respectively. Ramalingam et al. [35] proposed a remote and real-time monitoring system for insect sticky traps, as well as an insect detection method using Deep Learning (DL) techniques. The monitoring system consists of end nodes with a smart wireless camera oriented towards the sticky trap. Insect detection and classification is done using a Faster Region-based Convolutional Neural Network (R-CNN) with a ResNet-50 backbone that was trained using images of built environment insects and farm field insects. According to the experimental results, the authors found that the proposed system can automatically identify insects present in the traps with an average accuracy of 94%. Liu et al. [36] featured a new end-to-end convolutional neural network-based automatic pest detection architecture called PestNet. It consists of three main parts: automatic feature extraction is performed using a channel-spatial attention (CSA) module; the second part is the region proposal network (RPN), which is adopted to provide region proposals, such as the positions of potential pests, based on feature maps extracted from the images; lastly, the third part uses a position-sensitive score map (PSSM) instead of fully connected layers to reduce the classification computational cost. In addition, the authors also applied contextual regions of interest (RoIs) as contextual information of pest characteristics to improve detection accuracy. The authors tested this approach using a 10-year dataset they created (Multi-class Pests Dataset 2018—MPD2018) and the experimental results show that PestNet performs well in detecting multi-class pests, achieving an average accuracy of 75.46%. Ding et al. [37] proposed an automatic detection system based on DL for identifying and counting pests in images obtained from field traps. The pest detection method is based on a convolutional neural network (ConvNet), which offers the advantage of being accurate and fast, requiring minimal data pre-processing. Qualitative and quantitative experiments demonstrate the effectiveness of the proposed method on a codling moth dataset. Compared to other insect detection approaches, this method does not use pest-specific engineering, which allows it to be easily extended to other species and environments.
These last few works are even more closely related to VineInspector. Rustia et al. [38] developed and tested a system based on a wireless sensor network that uses camera modules and environmental sensors to simultaneously and continuously acquire insect trap images and measure temperature, relative humidity and light intensity in greenhouses. Each wireless sensor network node is based on a Raspberry Pi 3, to which a Raspberry Pi Camera v2 module and add-on environmental sensors are connected. An image processing algorithm was developed to automatically detect and count insects present in sticky traps with a 93% average temporal detection accuracy, when compared with manual counting. The developed processing algorithm runs on a remote server and aims to segment objects from the background and filter out non-insect objects. For this, the authors use colour space conversion and colour segmentation techniques to isolate potential insects. Then, a Support Vector Machine (SVM) classifies the data to verify whether it is actually an insect or not. Bakkay et al. [39] developed a method to detect, recognise and tally insects, more precisely the European grapevine moth (Lobesia botrana, Denis & Schiffermüller), in trap images. This approach aims to analyse the tally's evolution in order to adapt treatments and thus avoid, whenever possible, the application of pesticides. The segmentation process involves two main contributions: (i) the use of an adaptive k-means clustering that is able to eliminate different types of noise, i.e., artefacts or non-insect elements; and (ii) the use of a region merging algorithm for separating touching insects. The authors state that quantitative evaluations show that the proposed method can detect insects with higher accuracy than other commonly used approaches. Zhong et al. [40] presented an image-based system to detect, classify and tally six species of flying insects: bees, flies, mosquitoes, moths, chafers and fruit flies. The system is composed of a yellow sticky trap installed in the insect monitoring area, which in turn is observed by a camera that collects images in real time. The detection and coarse counting method is based on the YOLO object detection system. The training stage was carried out using a single class containing all six insect species. Classification and fine counting of insects were performed using an SVM. Based on the YOLO and SVM combination, the need for training data is minimised. This system has been implemented on a Raspberry Pi and the test results can be sent to an agricultural monitoring service platform, which is the basis for providing accurate prevention and treatment methods based on a combination of pest information and other environmental data. An average counting accuracy of 92.50% and an average classification accuracy of 90.18% were obtained, thus showing a promising performance.
Several other works on the identification, classification and tallying of insects in traps can be found in Lima et al. [41], Preti et al. [42] and Júnior et al. [43].
This small set of reference research works unequivocally shows that the use of proximity images in PA is swiftly progressing. Furthermore, image classification is being done using different artificial intelligence approaches. The next section presents VineInspector in detail: a system designed to capture images with multiple local cameras, acquire agro-meteorological data, and use an artificial intelligence approach on this heterogeneous data to extract valuable knowledge for PV practices.

3. The VineInspector

This section presents the VineInspector system in all its dimensions: (i) the hardware setup to manage, acquire and transmit data from both sensors and cameras; (ii) the software to self-manage and to handle acquired data; and (iii) the interaction with a remote cloud-based platform (mySense [44]) through web services, whose aim is to classify field-acquired images. The experimental setup and the two case-study applications are also presented.

3.1. Hardware Architecture

VineInspector was built around a low-cost Single Board Computer (SBC) Orange Pi PC Plus, OPi, (Shenzhen Xunlong Software Co., Ltd., Shenzhen, China) and a shield specifically designed to accommodate auxiliary power control circuits for the entire system, as well as to provide a GSM/GPRS 2G/3G connection with a remote cloud-based platform. Furthermore, a 3S 18650 lithium battery charger and balance protection board (Sure Electronics, George Town, Malaysia) is used to recharge three 3000 mAh batteries with energy harvested from the sun through a 10 W solar panel. A simplified hardware diagram is presented in Figure 1.
The shield has a low-power microcontroller (PIC32MM0064GPL028 from Microchip Technology Inc., Chandler, AZ, USA) responsible for managing the OPi's power supply. Communication between the shield and the OPi is done through a serial communication interface (RX/TX). To ensure that the OPi is never left permanently on, a watchdog timer (WDT) function has been implemented in this microcontroller. The WDT timeout is reset whenever the OPi changes the state of a control pin. If this change does not occur within 20 s, the WDT causes a system restart through a power cycle. With regard to communication with the cloud-based remote platform, the shield uses a GSM/GPRS 2G/3G Telit GL865-QUAD-V3 modem (Telit Wireless Solutions, London, UK). As for local connections, an IEEE 802.11x (Wi-Fi) network is also available. Through this local network, VineInspector can be configured and/or have its data accessed via a smartphone app. This connection is turned on and off by a push button on the shield (not shown in Figure 1).
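On the OPi side, feeding this watchdog amounts to toggling the control pin well within the 20 s window. A minimal sketch is given below, assuming the pin is exposed through the standard Linux sysfs GPIO interface; the pin number and toggle period are illustrative, not the actual wiring used by the shield.

```python
import threading
import time

KEEPALIVE_PIN = "/sys/class/gpio/gpio7/value"  # hypothetical pin; actual wiring differs
TOGGLE_PERIOD_S = 10                           # well within the shield's 20 s WDT window

def feed_watchdog(stop: threading.Event) -> None:
    state = "0"
    while not stop.is_set():
        state = "1" if state == "0" else "0"
        with open(KEEPALIVE_PIN, "w") as pin:
            pin.write(state)                   # each state change resets the shield's WDT
        time.sleep(TOGGLE_PERIOD_S)

stop_event = threading.Event()
threading.Thread(target=feed_watchdog, args=(stop_event,), daemon=True).start()
```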
The Orange Pi PC Plus was chosen because it is one of the SBCs with the best price/features ratio. Moreover, it also includes embedded Multi-Media Card (eMMC) memory, where the entire file system can be kept. This solution makes it less vulnerable to failures such as those that occur with the traditional microSD flash memory card and its corresponding mechanical contact/spring interface. One last reason for choosing the OPi is that it has a sleep mode that saves power when not in use. To extend the OPi's capability to interact with different types of sensors, and also to be able to have them acquire data during the long periods in which the OPi is turned off, an external data acquisition system—SPWAS'21—was used. This low-cost and low-power system has a fully OPi-compatible serial interface and was developed in a previous work [45]. With regard to image acquisition, low-cost USB cameras are used. Each has an associated image channel.

3.2. Software Architecture

VineInspector has three software components worth mentioning: (i) the firmware embedded in the shield's microcontroller, which ensures the system's correct operation; (ii) the OPi's software, which fundamentally enables data gathering, temporary storage and transmission; and (iii) the remote cloud-based platform application, developed based on an AI approach, which enables the classification of visual elements present in the transmitted images. These three software components are succinctly explained in the following subsections.

3.2.1. Shield Microcontroller’s Firmware

The shield's microcontroller is essentially used to manage the OPi's power supply at regular intervals or at a specific time. To this end, it has a real-time clock that is programmed by the OPi. Therefore, this very low-power consumption device is continuously powered on. When the microcontroller boots up for the first time, it enables the OPi's power supply long enough for it to establish an internet connection. The OPi then sends the correct date/time to the shield via the TX/RX serial connection. Whenever the OPi's software finishes its tasks, it sends a command to the microcontroller instructing it to turn off the power. It will be turned on again only at the next pre-set time. The firmware's flowchart is illustrated in Figure 2.

3.2.2. OPi's Software

As previously stated, VineInspector is responsible for gathering data, storing it temporarily, and transmitting it to a remote cloud-based platform. A script in Python—started at boot time—automatically executes this process. Typically, the OPi is powered up at occasions pre-programmed in the shield's microcontroller, as already mentioned. Therefore, as soon as the system is started, an internet connection to the remote cloud-based platform is established using the shield's 2G/3G modem. Then, instructions are requested through the remote platform's API. This request returns the remote platform's date and time, as well as any configuration commands that may be queued for sending. The received date and time are used to set these parameters on both the OPi and the shield's microcontroller.
As soon as the date and time setting process is complete, the image acquisition procedure by the available cameras begins. Images are then stored locally and registered in a local MySQL database. SPWAS'21 is the external device responsible for acquiring data from the remaining available sensors. It operates independently, and the process to retrieve its data involves a simple download command from the flash memory. Then, the data—both numeric and images—are sent to the remote cloud-based platform through an HTTP POST request.
The request's body carries either a JSON data envelope or the image data, base64-encoded, depending on the type of data being sent. The HTTP POST request header may optionally include Global Positioning System (GPS) coordinates to update the device's location. All requests are acknowledged by the cloud-based platform. If the acknowledgement message is received, the data is deleted from the local database. Otherwise, the data will be re-transmitted at the next opportunity.
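A minimal sketch of this upload step is shown below. The endpoint path, payload field names and GPS header are illustrative assumptions, since the mySense API details are not specified here; only the general pattern (JSON envelope or base64-encoded image, delete locally only after acknowledgement) follows the description above.

```python
import base64
import json
from typing import Optional

import requests  # assumed HTTP client

PLATFORM_URL = "https://mysenseapi.utad.pt/upload"  # hypothetical endpoint path

def send_readings(readings: dict, gps: Optional[str] = None) -> bool:
    """POST a JSON envelope of sensor readings; return True on acknowledgement."""
    headers = {"Content-Type": "application/json"}
    if gps:
        headers["X-Device-GPS"] = gps  # optional location update (hypothetical header)
    response = requests.post(PLATFORM_URL, headers=headers,
                             data=json.dumps(readings), timeout=60)
    return response.ok                 # delete the local copy only if acknowledged

def send_image(path: str, channel: int) -> bool:
    """POST one image, base64-encoded, tagged with its imaging channel."""
    with open(path, "rb") as image_file:
        payload = {"channel": channel,
                   "image": base64.b64encode(image_file.read()).decode("ascii")}
    response = requests.post(PLATFORM_URL, json=payload, timeout=300)
    return response.ok
```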
When the data exchange between VineInspector and the remote cloud-based platform (mySense) ends, the Python script signals the shield's microcontroller that a system shutdown will soon follow and that the power can therefore be shut off within just a few seconds. VineInspector is left idle—with a very reduced power consumption—and will wake up upon the shield's microcontroller real-time clock (RTC) signal. A simplified flowchart of the OPi's software is presented in Figure 3.

3.2.3. Remote Cloud-Based Platform

Whilst the use of in-field sensors' data in a wide array of PA applications is firmly established and well-known, VineInspector's contribution lies in its ability to capture images from multiple channels, classify them and automatically extract relevant features. This subsection describes in detail the procedure followed to classify and extract elements of interest from acquired images.
The mySense environment (https://mysenseapi.utad.pt, accessed on 17 May 2022) is an IoT platform specifically tailored to support a range of different applications and services within the scope of PA/PV practices [44]. Figure 4 depicts the sequence of steps followed each time an image is sent to mySense by a VineInspector. It should be noted that each HTTP POST request identifies the VineInspector imaging channel, thus making it possible to choose which classification model should be used. Therefore, each and every imaging channel has an associated classification model in mySense.
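Conceptually, this per-channel binding can be thought of as a simple registry on the platform side, as sketched below; the detector functions and channel numbering are hypothetical placeholders, not the actual mySense implementation.

```python
from typing import Callable, Dict, List

def detect_shoots(image_bytes: bytes) -> List[dict]:
    ...  # would run the grapevine shoots Scaled-YOLOv4 model (placeholder)

def detect_moths(image_bytes: bytes) -> List[dict]:
    ...  # would run the grape moth trap Scaled-YOLOv4 model (placeholder)

# Hypothetical registry mapping each VineInspector imaging channel to a model.
CHANNEL_MODELS: Dict[int, Callable[[bytes], List[dict]]] = {
    1: detect_shoots,  # camera pointing at a grapevine row
    2: detect_shoots,  # close-up grapevine camera
    3: detect_moths,   # camera inside the delta sticky trap
}

def classify_incoming_image(channel: int, image_bytes: bytes) -> List[dict]:
    # The channel id is taken from the HTTP POST request that delivered the image.
    return CHANNEL_MODELS[channel](image_bytes)
```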
The main purpose of this work revolves around the VineInspector as a whole system, able to acquire both in-field sensor data and crops' proximity images that will support visual inspection applications. Whilst two case-study applications were developed to prove that VineInspector is able to perform reliably in harsh field contexts and obtain accurate information for viticulturists, there was no special concern about the most appropriate AI approaches to use in each situation. Indeed, the suitability, accuracy and general performance of AI approaches is not the focus of this work. Furthermore, it was already stated that each imaging channel may have a different AI approach assigned, so that every PA application can reach higher efficiency and accuracy levels. As such, future research will aim at establishing a relation between PA applications and the most suitable AI approaches, considering the available dataset, computational resources, communications and socio-economical contexts in which they are deployed. The common denominator will be VineInspector. Bearing this in mind, the AI approach chosen to implement the two case-study applications—Scaled-YOLOv4 [46]—resulted not only from previous works, where it performed well in different situations, but also from the knowledge that it is a more generic approach that may be used in diverse contexts with good results. Scaled-YOLOv4 is the new state-of-the-art in object detection and emerged from the YOLOv4 model by efficiently scaling the network design and scale (width, depth and number of stages in the convolutional neural network backbone and neck). For now, that will more than suffice to prove that VineInspector is able to render quality information to viticulturists.
The training process is more complex, as it involves a dataset that should be as extensive and diversified as possible to improve classification accuracy. Looking at the Figure 4 flowchart, whenever an image is submitted and classified with the previously trained model, it is also subjected to a supervision process (knowledge base). This enables growing both the dataset to be used in subsequent training processes and the dataset that will evaluate accuracy. In YOLOv4, the classification model is applied to an image at multiple locations and scales, and the image's high-scoring regions are considered detections. The image is divided into multiple regions, a bounding box prediction is made, and then the probabilities for each of these regions are weighed [47]. This approach yields much faster classification than traditional R-CNN networks.

3.3. Experimental Setup

A VineInspector equipped with three cameras—one pointing to a grapevine row (ELP 2.2 MP USB camera, 2.8 mm focal length, with waterproof case, Shenzhen Technology Co., Ltd., Shenzhen, China), another pointing to a grapevine in greater detail (ELP 2.2 MP USB camera, 3.6 mm focal length, with waterproof case), and the third one inside a common delta sticky trap (HVBCAM 5.0 MP USB camera with a 160-degree fish-eye lens, Huiber Vision Technology Co., Ltd., Shenzhen, China)—was placed in a 2 ha Malvasia Fina (white grape variety) vineyard located at the University of Trás-os-Montes e Alto Douro (UTAD) Campus, in Vila Real, Portugal (41.286875, −7.735219), as depicted in Figure 5. The VineInspector device was installed in the vineyard by directly fastening it to one of the bale stakes and pointing the cameras at the elements of interest. Through its Wi-Fi connection (activated by a button), it is possible to check the correct position of the cameras using a specific smartphone application where the images can be accessed in real time. Both the VineInspector and the two developed applications were assessed throughout 2021.
VineInspector's standard operation mode is to have four images acquired by each of the three cameras throughout the day, at different moments: sunrise, noon, mid-afternoon and late afternoon. Each image is then made readily available to viticulturists through the mySense platform.
It is at this stage that artificial intelligence approaches come into play to further process each image. Automatic classification is then done considering the established requirements for crop monitoring. Taking the two example applications developed, the aim was to tally grapevine moth males captured in the sticky trap, and to determine the size of the grapevine's shoots to assess downy mildew incidence probability based on the 3–10 rule. A Scaled-YOLOv4 implementation in the PyTorch framework, provided by Wong Kin-Yiu [48], was used for both applications. Training was done on a cloud-based machine using the Gradient Paperspace platform. This machine is equipped with an octa-core Intel® Xeon® CPU E5-2623 v4 @ 2.60 GHz, 30 GB of RAM and an NVIDIA Quadro P5000 GPU, with 16 GB of GDDR5 memory and 2560 CUDA cores. Mish-CUDA [49], a PyTorch CUDA implementation of the Mish activation function, was used to run processes on the NVIDIA GPU.
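For reference, the Mish activation that Mish-CUDA accelerates is the simple element-wise function below; this is a plain PyTorch expression of it, not the fused CUDA kernel itself.

```python
import torch
import torch.nn.functional as F

def mish(x: torch.Tensor) -> torch.Tensor:
    # Mish(x) = x * tanh(softplus(x)); Mish-CUDA provides a fused GPU kernel
    # computing this same function for the Scaled-YOLOv4 backbone layers.
    return x * torch.tanh(F.softplus(x))
```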

3.4. Grapevine Shoots Application

An initial dataset of grapevine shoot images was built within a time frame in which grapevines (i) had no shoots growing; (ii) had shoots developing, but still under 10 cm in size; and (iii) had shoots already developed beyond 10 cm. All of these images had their grapevine shoot regions annotated and divided into three distinct classes, respectively: “no_shoots”, “shoots_smaller_than_10”, and “shoots_greater_than_10”. Annotations were made using the Label-Images-Tool [50], which enables saving them in a YOLOv4-compatible format in .txt files. Figure 6 depicts some examples of grapevine shoot images that resulted from the annotation process.
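In the YOLO-compatible format produced by the annotation tool, each .txt file holds one line per annotated region: the class index followed by the box centre and size, normalised to the image dimensions. A small parsing sketch, assuming that standard layout, is shown below.

```python
CLASS_NAMES = ["no_shoots", "shoots_smaller_than_10", "shoots_greater_than_10"]

def parse_yolo_annotation(txt_path: str):
    """Read a YOLO-format annotation file: each line is
    '<class_id> <x_center> <y_center> <width> <height>', with all
    coordinates normalised to [0, 1] relative to the image size."""
    boxes = []
    with open(txt_path) as annotation_file:
        for line in annotation_file:
            class_id, x_center, y_center, width, height = line.split()
            boxes.append((CLASS_NAMES[int(class_id)],
                          float(x_center), float(y_center),
                          float(width), float(height)))
    return boxes
```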
The artificial intelligence approach was trained based on this dataset, composed of 238 grapevine images. Furthermore, the annotation process rendered 2489 images, of which 1230 are of shoots smaller than 10 cm, 985 of shoots bigger than 10 cm, and 274 of regions where shoots had not yet grown. The Roboflow platform—a development tool for building computer vision-based applications—was then used to divide the dataset into 70% for training, 20% for validation and 10% for testing, to apply data augmentation techniques, and to create three versions of the initial dataset to further assess the impact of image quantity and resolution on the accuracy of both the detection and classification processes. While one version has the original images with their resolution scaled down from 2592 × 1944 px to 1900 × 1900 px, a data augmentation process was carried out to create the other two versions. Indeed, it replicated existing training images with transformations that included rotations between −5° and +5°, brightness variations between −20% and +20%, horizontal flipping, and blurring up to 1 px. The result was a total of 3849 images with grapevine shoots smaller than 10 cm, 4734 images where they are bigger than 10 cm, and 982 images without visible shoots at the time. Image resolution is the difference between these two versions: one has the original resolution scaled down to 1024 × 1024 px and the other to 512 × 512 px. It should be noted that, whilst the images that compose the initial dataset were acquired when all three classes—“no_shoots”, “shoots_smaller_than_10”, and “shoots_greater_than_10”—could be represented, natural phenological development dictates that once grapevines have shoots bigger than 10 cm, the absence of shoots becomes rarer within that time frame. For that reason alone, the “no_shoots” class has fewer images than the other two. Table 1 sums up the dataset versions' data.
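The augmentation itself was carried out on the Roboflow platform; purely as an indication of what it corresponds to, the sketch below reproduces similar pixel-level transformations with torchvision. Note that for a detection dataset the bounding boxes must be transformed together with the pixels, which Roboflow handles and this pixel-only sketch does not.

```python
from torchvision import transforms

# Indicative, pixel-only approximation of the augmentations described above.
augment = transforms.Compose([
    transforms.RandomRotation(degrees=5),                      # rotations between -5° and +5°
    transforms.ColorJitter(brightness=0.2),                    # brightness variations of ±20%
    transforms.RandomHorizontalFlip(p=0.5),                    # horizontal flipping
    transforms.GaussianBlur(kernel_size=3, sigma=(0.1, 1.0)),  # mild blurring
])
```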
Two YOLOv4 architectures—YOLOv4-CSP and YOLOv4-P7—were used in five different training studies carried out with the three versions of the dataset, as described in Table 2. Besides training with three classes, the cloud-based machine's GPU allowed a batch size of 8 for the dataset versions with 512 × 512 px and 1024 × 1024 px image resolutions. As for the version with the 1900 × 1900 px resolution images, only a batch size of 2 was possible. Furthermore, the hyperparameters—e.g., learning rate = 0.01, momentum = 0.938, decay = 0.0005—were kept at their default values and the number of epochs was set to 500, since from this value onward precision stabilised.
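As an illustration only, these defaults correspond to an SGD configuration along the lines of the sketch below; the small convolutional layer merely stands in for the Scaled-YOLOv4 network.

```python
import torch

network = torch.nn.Conv2d(3, 16, kernel_size=3)   # stand-in for the Scaled-YOLOv4 network
optimizer = torch.optim.SGD(network.parameters(),
                            lr=0.01,               # learning rate
                            momentum=0.938,
                            weight_decay=0.0005)   # decay
```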

3.5. Grapevine Moth Males Tally Application

The grapevine moth Lobesia botrana is one of the pests with a relevant economic impact in some of the Portuguese wine regions. Hence, it made perfect sense to develop and test an application capable of detecting, classifying and tallying grapevine moth males captured in field traps. This was achieved using the same approach described in the previous subsection. Indeed, a small camera equipped with a fish-eye lens was fitted inside a field sticky trap, as presented in Figure 5a. All acquired images were analysed and every existing grapevine moth male was duly annotated.
The initial dataset of captured insects is composed of 36 images. After being properly annotated, it yielded 1014 images of grapevine moth males. Again, the Roboflow platform was used not only to divide the dataset into 70% for training, 20% for validation and 10% for testing, but also to create a new version of the dataset by means of a data augmentation process. Indeed, it replicated existing training images with transformations that included rotations between −45° and +45°, brightness variations between −25% and +25%, exposure variations between −15% and +15%, and blurring up to 0.25 px. This dataset version resulted in a total of 146 trap images with a 1024 × 1024 px resolution, where 3239 grapevine moth males were annotated. Figure 7 depicts some examples of grapevine moth images obtained after the annotation process. Considering that the initial dataset still had a reduced number of images, this augmented version was the one used to train the AI approach. Table 3 sums up the dataset-related data.
Training was done considering one class only and using a batch size of 8. As in the grapevine shoots approach, the hyperparameters—e.g., learning rate = 0.01, momentum = 0.938, decay = 0.0005—were also kept at their default values. The number of epochs was set to 500, since precision stabilised from then on.
Unlike in the grapevine shoots application, and considering that the existing dataset was still quite small, no different training configurations were compared. Indeed, the aim was just to validate this approach as an automatic way to tally grapevine moth males captured in field traps, and to assess its performance as a viable VineInspector service. Therefore, training was done using images with a 1024 × 1024 px resolution. Moreover, the YOLOv4-CSP architecture was selected, as it was one of those that presented the best overall results in detecting and classifying grapevine shoots, as will be shown in the results section.
VineInspector acquired images from a coupled field trap between 13 August and 27 September 2021. Each image had its grapevine moth males tallied by this application and the results sent to the mySense platform. This made them available to users, allowing remote monitoring of the tally's evolution.

4. Results and Discussion

This section presents the results from both case-study applications, as well as the evaluation of the classification algorithm training process. With regard to the downy mildew infection prediction application, occurrences—days on which warnings were generated—during the year 2021 are compared with those issued by the Direcção Regional de Agricultura e Pescas—Norte (DRAPN), the official government entity responsible for generating this type of warning for the north of Portugal. For the other case study, concerning the number of grapevine moth males captured in sticky traps, the application returned the tallies over the several days in which the trap was monitored. Finally, the VineInspector device operation is analysed to better characterise the power consumption profile, as well as the data exchange with the remote platform.

4.1. Grapevine Shoots Application

Training assessment was done using the mean Average Precision at an intersection-over-union threshold of 0.5 (mAP@0.5), precision, recall and F1-score. While Figure 8 depicts the mAP@0.5, precision and recall curves, Table 4 shows the best results obtained for each training study.
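For clarity, precision, recall and F1-score are derived from the true positive, false positive and false negative detection counts as in the sketch below; mAP@0.5 additionally averages precision over recall levels, using an IoU threshold of 0.5 to decide whether a detection matches an annotated box.

```python
def detection_metrics(tp: int, fp: int, fn: int):
    """Precision, recall and F1-score from detection counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1
```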
The results support that image resolution has a direct bearing on each model's training performance. Indeed, the training studies carried out with the lowest resolution images (512 × 512 px) were those that had the worst overall performance, even when resorting to a more complex architecture (YOLOv4-P7). This happens because these lower-resolution images portray a highly complex natural environment, where no two grapevine shoots are similar. As for the training studies in which higher resolution images—1024 × 1024 px and 1900 × 1900 px—were used, they presented the best overall performance results: “0da_1900px_csp” had both a higher mAP@0.5 and recall, with only 0.03% less precision, when compared with “4da_1024px_csp”. Still, the “0da_1900px_csp” precision curve shows an upward tendency; as such, training it for more epochs may eventually lead it to surpass the “4da_1024px_csp” precision value. Lastly, the training study with the worst performance with regard to precision and F1-score was the one that used pre-trained weights (“4da_512px_csp_pretrained”). Considering these results, the model chosen to run with the testing portion of the dataset was the one from the “0da_1900px_csp” training study. Figure 9 shows the detection results in four grapevine shoot images.
By automatically identifying grapevine shoots measuring more than 10 cm, together with an environmental context in which the average air temperature is greater than 10 °C and rainfall exceeds 10 mm within a 24–48 h period, the system is capable of issuing timely alerts when a setting favourable to the development of grapevine downy mildew arises. Action can therefore be taken swiftly to reduce or even completely avoid damage caused by the disease. Between March and July 2021—the months of interest for downy mildew monitoring—these three parameters were monitored and the generated events are presented in the Figure 10 timeline.
On 2, 3 and 11 April, temperature and rainfall conditions were favourable for downy mildew development. However, as grapevine shoots had not yet exceeded 10 cm in length, no warning was issued. On 16 April, shoots began to exceed 10 cm and, from that point onward, whenever there were favourable temperature and rainfall conditions, warnings were generated: this happened on 22, 23, 25 and 27 April; 10, 12, 14, 16 and 17 May; and finally 19, 20 and 21 June. These dates were compared with those of the official warnings issued by DRAPN. For the same region in which VineInspector was installed, DRAPN generated warnings on 1 April—advising treatment only if grapevine shoots had exceeded 10 cm—and on 10, 21 and 25 April. From then on, DRAPN advised continuous treatment for mildew prevention without stating specific days, as weather conditions remained unstable during the following months. Comparing VineInspector and DRAPN warning dates, the former was spot on. Indeed, its warnings were even more precise, as they made it possible to know the downy mildew risk for a specific parcel and not for an extended region. Moreover, VineInspector issues warnings continuously and throughout the whole season, specifying each day on which risk exists, so that prevention and treatment interventions can be managed in the best possible way. Warnings issued after 25 April meet DRAPN's continuous treatment advice.

4.2. Grapevine Moth Males Tally

Figure 11 depicts the mAP@0.5, precision and recall curves obtained by the training process. The highest mAP@0.5, precision and recall values were 0.93, 0.73 and 0.97, respectively. These values are quite acceptable considering the dataset size.
Figure 12 presents the classification process in four example images and Figure 13 depicts the tally evolution throughout the entire monitoring period.
Taking a closer look at Figure 13, the first grapevine moth males were captured and classified only one day after the field trap was placed. By late August, around 40 moth males had been tallied, and by the end of September, 60. It is also clear that 25 August, 30 August and 16 September were the days with the steepest climb in the number of moth males captured and classified. A fact worth noting is that the tally value happens to decrease several times during the monitoring period. This can be explained by the time of day at which some images were acquired. Indeed, images acquired in the late afternoon have a portion directly affected by the sun. As a consequence, some captured grapevine moth males are not identified. Another reason may be that, in the first few days after being captured, grapevine moths are still alive, even though stuck on the trap's glue. In fact, they remain capable of small movements and position changes, which may lead to them not being detected. Increasing the training dataset with more images, some of them acquired in roughly these same conditions, will probably solve these issues. This will be done next year, rendering the grapevine moth males tally application even more reliable and accurate.

4.3. Operating Record

VineInspector is based on a low-cost autonomous SBC, as presented in Section 3.1. While power consumption can be considered low within a regular operating context, it cannot be disregarded, as it represents an important limitation when configuring the overall system's operation and when selecting (and developing) power harvesting and storage solutions for the field (in this case, the source is exclusively the sun). Figure 14 depicts a common operation cycle that begins right when the shield's microcontroller powers on the VineInspector system. VineInspector had an average current consumption of 386.51 mA during the 18 min and 10 s that it took to complete this cycle. Outside the active period, current consumption is about 1 mA. Considering this consumption profile, and that this operation cycle is repeated four times a day, the average current consumption is 20.45 mA.
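The reported daily average follows directly from the consumption profile, as the short calculation below verifies.

```python
ACTIVE_CURRENT_MA = 386.51     # average draw while an operation cycle runs
IDLE_CURRENT_MA = 1.0          # draw outside the active period
CYCLE_S = 18 * 60 + 10         # 18 min 10 s per cycle
CYCLES_PER_DAY = 4
DAY_S = 24 * 3600

active_s = CYCLE_S * CYCLES_PER_DAY
average_ma = (ACTIVE_CURRENT_MA * active_s + IDLE_CURRENT_MA * (DAY_S - active_s)) / DAY_S
print(f"{average_ma:.2f} mA")  # ~20.45 mA, matching the value reported above
```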
The transmission of acquired images is undoubtedly the process in which most of the VineInspector operation time—and thus energy budget—is spent. Indeed, the modem used (2G version) has an upload rate of around 64 kbps, which means a lengthy transmission time for an image whose size can be around 1 MB. Low bandwidth and poor network coverage are also very common issues in PV applications that require data to be transmitted from the field. This can also weigh in when considering limiting factors that may restrict the reduction of the VineInspector power consumption profile. Even so, reducing this long upload time is a mandatory improvement for an upcoming version of VineInspector.
VineInspector was thoroughly tested in the field under real operating conditions for a year. No bugs or malfunctions that could have resulted in data loss were detected. In about 3.9% of the operating period, the data link was lost during the image upload process. However, in each case the data was successfully transmitted on a second attempt. This goes to show both the robustness and the reliability achieved with VineInspector.

5. Conclusions and Future Work

Data is becoming increasingly important within the PA/PV context. Indeed, getting to know the context—both physical and environmental—in which a crop grows is key to sustainable management practices, to optimising development, and to improving yield and quality. Whilst reliability and precision are often sold as two of the most important characteristics of a crop monitoring system (and they really are), spatial and temporal granularity are equally important to maintain a continuous feel for what goes on with a crop in the field. Plagues and diseases are of particular relevance due to their seasonal phytosanitary and economic impacts: being able to identify plant characteristics and/or environmental conditions favourable to their development can trigger localised and timely treatments to mitigate losses. Early detection can do the same, albeit at a more advanced stage, when there are already visible signs in the plants. Including proximity image sensors in the (already) wide array of available monitoring technologies enables not only a more realistic perception of crop dynamics, but also the use of AI/ML algorithms with locally captured images, which can render valuable automatic information for decision support systems.
The VineInspector is a mature approach to acquiring, storing and transmitting proximity field data, featuring detection/classification in captured images by means of AI/ML techniques. While data is undoubtedly important to have, information is what really matters when managing a crop—in this instance, vineyards—efficiently and in a more sustainable way. Therefore, this paper presents not only a VineInspector operating record throughout a monitoring period in a harsh field context, but also two applications that directly address (i) environmental and plant conditions favourable to the onset of diseases; and (ii) the early detection of plagues. Grapevine shoot detection and classification was able to successfully isolate shoots bigger than 10 cm, which is particularly useful in determining the beginning of the grapevines' vegetative cycle—in turn very useful to tune grapevine phenology prediction models—but also in applying the 3–10 rule, widely used for the detection of primary infections of grapevine downy mildew. In fact, the 10 cm measurement is related to an average leaf area of 6 to 8 cm² [13]. Thus, this approach becomes particularly useful to evaluate the area exposed to the first primary infection. The second application successfully tallies grapevine moth males captured in field traps. It enables not only determining when they first show up, but also assessing the intensity and timeline of the attack. As such, it is also possible to understand on which days—and even in which part of the day—more grapevine moths appeared, and thus apply the proper treatment more effectively.
One of VineInspector's major advantages is its flexibility with regard to remote applications supported by AI/ML-based algorithms, which are independent of the number of existing image channels. In addition to the two case study applications presented in this work, and since VineInspector collects both meteorological data and images, it has the potential to be used in numerous other applications and crops, such as apple orchards, olive groves, tomato plantations and blueberry plantations. In fact, the monitoring of the olive fruit fly through traps placed in olive groves is currently being worked on.
As future work, we intend to address several important issues. One of them is tracking grapevines' phenological stages using the images continuously acquired and sent by VineInspector, which will require expanding the training dataset and increasing the number of classes. The automatic detection of these phenological stages is of utmost relevance, since many cultural operations in the vineyard rely on phenological changes. More accurate predictions will contribute to more efficient and sustainable decision support systems and vineyard management practices. The idea is to later extend this functionality to other crops, such as apple orchards. Another issue to tackle in the future is related to trap monitoring: we intend to develop models that correlate the collected environmental data with the insect tallies, in order to predict when insects will appear with greater intensity and thus perform the necessary treatments to prevent or minimise damage. A possible starting point for such a model is sketched below.
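As a hedged sketch of that future-work idea, an ordinary least-squares fit relating daily environmental features to daily trap tallies could serve as a baseline. The feature choice and the use of scikit-learn are assumptions for illustration, not the authors' method.

```python
# Baseline sketch: relate daily trap tallies to daily environmental features.
# Illustrative only; feature selection and model choice are assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_tally_model(daily_features: np.ndarray, daily_tallies: np.ndarray) -> LinearRegression:
    """daily_features: (n_days, n_vars), e.g. [mean_temp, humidity, rainfall];
    daily_tallies: (n_days,) moth counts derived from the trap images."""
    model = LinearRegression()
    model.fit(daily_features, daily_tallies)
    return model

# model = fit_tally_model(features, tallies)
# expected_tomorrow = model.predict(tomorrow_features.reshape(1, -1))
```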

Author Contributions

Conceptualization, J.M. and R.M.; methodology, R.M. and I.C.; software, J.M. and R.M.; validation, E.P., I.C. and J.J.S.; formal analysis, E.P.; investigation, J.M. and R.M.; writing—original draft preparation, J.M., R.M. and E.P.; writing—review and editing, E.P., N.S. and R.S.; visualization, N.S. and R.S.; supervision, R.M. and F.N.d.S.; project administration, R.M.; funding acquisition, R.M. and F.N.d.S. All authors have read and agreed to the published version of the manuscript.

Funding

The authors would like to acknowledge Interreg VA España—Portugal (POCTEP), as part of project “SIAPD—Sistema integrado transregional de apoio ao combate de pragas e doenças na agricultura” (0655_SIAPD_6_P), for partial funding, as well as the ERDF—European Regional Development Fund through the Operational Programme for Competitiveness and Internationalisation—COMPETE 2020 Programme, within project POCI-01-0145-FEDER-006961. This research was also partially funded by National Funds through the FCT—Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) as part of projects UIDB/50014/2020 and UIDB/04033/2020, and as part of the doctoral fellowships with the references SFRH/BD/129813/2017 and SFRH/BD/137968/2018, with funds from the Portuguese State Budget and the European Union Community Budget through the ESF—European Social Fund.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI - Artificial Intelligence
CPU - Central Processing Unit
CSA - Channel-spatial Attention
CUDA - Compute Unified Device Architecture
DL - Deep Learning
DNN - Deep Neural Network
eMMC - embedded Multi-Media Card
GDDR - Graphics Double Data Rate
GPR - Ground Penetrating Radar
GPRS - General Packet Radio Service
GPU - Graphics Processing Unit
GSM - Global System for Mobile Communications
HTTP - HyperText Transfer Protocol
JSON - JavaScript Object Notation
mAP - mean Average Precision
ML - Machine Learning
NIR - Near-infrared
OPi - Orange Pi
PA - Precision Agriculture
PSSM - Position-sensitive Score Map
PTZ - Pan, Tilt, Zoom
PV - Precision Viticulture
R-CNN - Region-based Convolutional Neural Network
RoI - Region of Interest
RPN - Region Proposal Network
RTC - Real Time Clock
SBC - Single-board Computer
SPWAS - Solar Powered Wireless Acquisition System
SVM - Support Vector Machine
UAV - Unmanned Aerial Vehicle
WDT - Watchdog Timer
YOLO - You Only Look Once

References

1. Kassim, M.R.M. IoT applications in smart agriculture: Issues and challenges. In Proceedings of the 2020 IEEE Conference on Open Systems (ICOS), Kota Kinabalu, Malaysia, 17–19 November 2020; pp. 19–94.
2. Reed, B.C.; Schwartz, M.D.; Xiao, X. Remote sensing phenology. In Phenology of Ecosystem Processes; Springer: New York, NY, USA, 2009; pp. 231–246.
3. Richardson, A.D.; Klosterman, S.; Toomey, M. Near-surface sensor-derived phenology. In Phenology: An Integrative Environmental Science; Springer: Dordrecht, The Netherlands, 2013; pp. 413–430.
4. Tomkiewicz, D.; Piskier, T. A plant based sensing method for nutrition stress monitoring. J. Precis. Agric. 2012, 13, 370–383.
5. Andrianto, H.; Faizal, A.; Kurniawan, N.B.; Aji, D.P. Performance evaluation of IoT-based service system for monitoring nutritional deficiencies in plants. Inf. Process. Agric. 2021, in press.
6. Barbedo, J.G. Digital image processing techniques for detecting, quantifying and classifying plant diseases. SpringerPlus 2013, 2, 660.
7. Pineda, M.; Baron, M.; Perez-Bueno, M.L. Thermal imaging for plant stress detection and phenotyping. Remote Sens. 2020, 13, 68.
8. Matese, A.; Baraldi, R.; Berton, A.; Cesaraccio, C.; Gennaro, S.F.D.; Duce, P.; Facini, O.; Mameli, M.G.; Piga, A.; Zaldei, A. Estimation of water stress in grapevines using proximal and remote sensing methods. Remote Sens. 2018, 10, 114.
9. Saiz-Rubio, V.; Rovira-Más, F. From smart farming towards agriculture 5.0: A review on crop data management. Agronomy 2020, 10, 207.
10. Barbato, M.; Giaconi, G.; Liparulo, L.; Maisto, M.; Panella, M.; Proietti, A.; Orlandi, G. Smart Devices and Environments: Enabling Technologies and Systems for the Internet of Things; Maia Edizioni: Rome, Italy, 2014.
11. Diedrichs, A.L.; Tabacchi, G.; Grünwaldt, G.; Pecchia, M.; Mercado, G.; Antivilo, F.G. Low-power wireless sensor network for frost monitoring in agriculture research. In Proceedings of the 2014 IEEE Biennial Congress of Argentina (ARGENCON), Bariloche, Argentina, 11–13 June 2014; pp. 525–530.
12. Maddalena, G.; Russo, G.; Toffolatti, S.L. The study of the germination dynamics of Plasmopara viticola oospores highlights the presence of phenotypic synchrony with the host. Front. Microbiol. 2021, 12, 698586.
13. Baldacci, E. Epifitie di Plasmopara Viticola (1941–16) Nell’Oltrepò Pavese ed Adizione del Calendario di Incubazione Come Strumento di Lotta; Atti Istituto Botanico, Laboratorio Crittogamico: Pavia, Italy, 1947; pp. 45–85.
14. Pérez-Expósito, J.P.; Fernández-Caramés, T.M.; Fraga-Lamas, P.; Castedo, L. VineSens: An Eco-Smart Decision-Support Viticulture System. Sensors 2017, 17, 465.
15. Millardet, A. Notes sur les Vignes Américaines et Opuscules Divers sur le Même Sujet; Éditions Féret: Bordeaux, France, 1881.
16. Viennot-Bourgin, G. Les Champignons Parasites des Plantes Cultivées; Masson: Paris, France, 1949.
17. Jackson, R.S. 4-Vineyard Practice. In Wine Science, 3rd ed.; Jackson, R.S., Ed.; Food Science and Technology; Academic Press: San Diego, CA, USA, 2008; pp. 108–238.
18. Gessler, C.; Pertot, I.; Perazzolli, M. Plasmopara viticola: A review of knowledge on downy mildew of grapevine and effective disease management. Phytopathol. Mediterr. 2011, 50, 3–44.
19. Dubos, B. Maladies Cryptogamiques de la Vigne: Champignons Parasites des Organes Herbacés et du Bois de la Vigne; Éditions Féret: Bordeaux, France, 2002.
20. Fontaine, S.; Remuson, F.; Caddoux, L.; Barrès, B. Investigation of the sensitivity of Plasmopara viticola to amisulbrom and ametoctradin in French vineyards using bioassays and molecular tools. Pest Manag. Sci. 2019, 75, 2115–2123.
21. Amaral, B.D.; Viana, A.P.; Santos, E.A.; Ribeiro, R.M.; da Silva, F.A.; Ambrósio, M.; Walker, A.M. Prospecting for resistance of interspecific hybrids of Vitis spp. to Plasmopara viticola. Euphytica 2020, 216, 68.
22. Maia, M.; Maccelli, A.; Nascimento, R.; Ferreira, A.E.N.; Crestoni, M.E.; Cordeiro, C.; Figueiredo, A.; Silva, M. Early detection of Plasmopara viticola-infected leaves through FT-ICR-MS metabolic profiling. Int. Soc. Hortic. Sci. 2018, 1248, 575–580.
23. Rosa, M.; Genesio, R.; Gozzini, B.; Maracchi, G.; Orlandini, S. PLASMO: A computer program for grapevine downy mildew development forecasting. Comput. Electron. Agric. 1993, 9, 205–215.
24. Wu, B.M.; Subbarao, K.V.; van Bruggen, A.H.C.; Pennings, G.G.H. Validation of weather and leaf wetness forecasts for a lettuce downy mildew warning system. Can. J. Plant Pathol. 2001, 23, 371–383.
25. Viret, O.; Bloesch, B.; Taillens, J.; Siegfried, W.; Dupuis, D. Forecast and control of downy mildew (Plasmopara viticola) infections using weather stations. Rev. Suisse Vitic. Arboric. Hortic. 2001, 33, 1–12.
26. Pérez-Expósito, J.P.; Fernández-Caramés, T.M.; Fraga-Lamas, P.; Castedo, L. An IoT Monitoring System for Precision Viticulture. In Proceedings of the 2017 IEEE International Conference on Internet of Things (iThings) and IEEE Green Computing and Communications (GreenCom) and IEEE Cyber, Physical and Social Computing (CPSCom) and IEEE Smart Data (SmartData), Exeter, UK, 21–23 June 2017; pp. 662–669.
27. Sobolu, R.; Cordea, M.; Pop, I.; Popescu, D.; Pusta, D. Grapes’ leaves disease detection through image processing. Sci. Pap. Ser. Hortic. 2019, 63, 247–252.
28. Lloret, J.; Bosch, I.; Sendra, S.; Serrano, A. A Wireless Sensor Network for Vineyard Monitoring That Uses Image Processing. Sensors 2011, 11, 6165–6196.
29. Kim, W.-S.; Lee, D.-H.; Kim, Y.-J. Machine vision-based automatic disease symptom detection of onion downy mildew. Comput. Electron. Agric. 2020, 168, 105099.
30. Abdelghafour, F.; Rançon, F.; Keresztes, B.; Germain, C.; da Costa, J.-P. On-Board Colour Imaging for the Detection of Downy Mildew; Wageningen Academic Publishers: Wageningen, The Netherlands, 2019; Chapter 23; pp. 195–202.
31. Abdelghafour, F.; Keresztes, B.; Germain, C.; da Costa, J.-P. In Field Detection of Downy Mildew Symptoms with Proximal Colour Imaging. Sensors 2020, 20, 4380.
32. Moschos, T.; Souliotis, C.; Broumas, T.; Kapothanassi, V. Control of the European grapevine moth Lobesia botrana in Greece by the mating disruption technique: A three-year survey. Phytoparasitica 2004, 32, 83.
33. Espinoza, K.; Valera, D.L.; Torres, J.A.; López, A.; Molina-Aiz, F.D. Combination of image processing and artificial neural networks as a novel approach for the identification of Bemisia tabaci and Frankliniella occidentalis on sticky traps in greenhouse agriculture. Comput. Electron. Agric. 2016, 127, 495–505.
34. Song, Y.-J.; Kim, M.-H.; Lee, S.-H. A counting method for the number of Sternolophus rufipes and Hydrochara affinis in a noisy trap image. J. Asia-Pac. Entomol. 2019, 22, 802–806.
35. Ramalingam, B.; Mohan, R.E.; Pookkuttath, S.; Gómez, B.F.; Sairam Borusu, C.S.C.; Wee Teng, T.; Tamilselvam, Y.K. Remote Insects Trap Monitoring System Using Deep Learning Framework and IoT. Sensors 2020, 20, 5280.
36. Liu, L.; Wang, R.; Xie, C.; Yang, P.; Wang, F.; Sudirman, S.; Liu, W. PestNet: An end-to-end deep learning approach for large-scale multi-class pest detection and classification. IEEE Access 2019, 7, 45301–45312.
37. Ding, W.; Taylor, G. Automatic moth detection from trap images for pest management. Comput. Electron. Agric. 2016, 123, 17–28.
38. Rustia, D.J.A.; Lin, C.E.; Chung, J.-Y.; Zhuang, Y.-J.; Hsu, J.-C.; Lin, T.-T. Application of an image and environmental sensor network for automated greenhouse insect pest monitoring. J. Asia-Pac. Entomol. 2020, 23, 17–28.
39. Bakkay, M.C.; Chambon, S.; Rashwan, H.A.; Lubat, C.; Barsotti, S. Automatic detection of individual and touching moths from trap images by combining contour-based and region-based segmentation. IET Comput. Vis. 2018, 12, 138–145.
40. Zhong, Y.; Gao, J.; Lei, Q.; Zhou, Y. A Vision-Based Counting and Recognition System for Flying Insects in Intelligent Agriculture. Sensors 2018, 18, 1489.
41. Lima, M.C.F.; Leandro, M.E.D.d.; Valero, C.; Coronel, L.C.P.; Bazzo, C.O.G. Automatic Detection and Monitoring of Insect Pests—A Review. Agriculture 2020, 10, 161.
42. Preti, M.; Verheggen, F.; Angeli, S. Insect pest monitoring with camera-equipped traps: Strengths and limitations. J. Pest Sci. 2021, 94, 203–217.
43. De Cesaro Júnior, T.; Rieder, R. Automatic identification of insects from digital images: A survey. Comput. Electron. Agric. 2020, 178, 105784.
44. Morais, R.; Silva, N.; Mendes, J.; Adão, T.; Pádua, L.; López-Riquelme, J.; Pavón-Pulido, N.; Sousa, J.J.; Peres, E. mySense: A comprehensive data management environment to improve precision agriculture practices. Comput. Electron. Agric. 2019, 162, 882–894.
45. Morais, R.; Mendes, J.; Silva, R.; Silva, N.; Sousa, J.J.; Peres, E. A Versatile, Low-Power and Low-Cost IoT Device for Field Data Gathering in Precision Agriculture Practices. Agriculture 2021, 11, 619.
46. Wang, C.-Y.; Bochkovskiy, A.; Liao, H.-Y.M. Scaled-YOLOv4: Scaling Cross Stage Partial Network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 13029–13038.
47. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788.
48. Wong, K.-Y. Implementation of Scaled-YOLOv4 Using PyTorch Framework; Zenodo: Geneva, Switzerland, 2021.
49. Brandon, T. Mish-Cuda: Self Regularized Non-Monotonic Activation Function; Zenodo: Geneva, Switzerland, 2019.
50. Lin, T.; Mendes, J.; Rflynn; Jorlogicus; ChrisDal; Jay, M.; Mattio, T.; Wang, M.; Jaewchoi; Vdalv; et al. Label-Images-Tool: Graphical Image Annotation Tool and Label Object Bounding Boxes in Images; Zenodo: Geneva, Switzerland, 2021.
Figure 1. VineInspector's hardware setup architecture.
Figure 2. Shield microcontroller's firmware flowchart.
Figure 3. OPi's software flowchart.
Figure 4. Remote cloud-based platform software flowchart.
Figure 5. VineInspector installed in a UTAD vineyard, located on campus: (a) the entire system, with the common delta sticky trap in the foreground; (b) detail of VineInspector, showing the upper shield.
Figure 6. Grapevine shoot images that compose the training dataset: (Top) shoots longer than 10 cm; (Middle) shoots shorter than 10 cm; (Bottom) no shoots growing yet.
Figure 7. Grapevine moth male images that resulted from the annotation process.
Figure 8. mAP@0.5, precision and recall curves: (Dark blue) 4da_512px_csp; (Green) 4da_512px_csp_pretrained; (Orange) 4da_1024px_csp; (Light blue) 0da_1900px_csp; (Red) 4da_512px_p7.
Figure 9. Grapevine shoot detection and classification results over acquired images.
Figure 10. Timeline of generated events within the downy mildew prediction process. T48 represents the last 48 h average temperature and R48 the last 48 h total rainfall.
Figure 11. Training process for grapevine moth male detection and classification: mAP@0.5 (green), precision (blue) and recall (orange) curves.
Figure 12. Evolution of the number of grapevine moth males caught in the trap: (a) 18 August, 1 detection; (b) 24 August, 18 detections; (c) 27 August, 43 detections; (d) 5 September, 58 detections.
Figure 13. Grapevine moth male tally plot during the entire monitoring period.
Figure 14. VineInspector common operation power consumption profile over an 18 min period.
Table 1. Grapevine shoots initial dataset versions.

Data Augmentation Techniques | Number of Images (Total/Training/Validation/Testing) | Annotations: Shoots < 10 cm | Annotations: Shoots > 10 cm | Annotations: No Shoots | Resolution
None applied | 238/167/47/24 | 1230 | 985 | 274 | 1900 × 1900 px
Flip: horizontal; Rotation: between ±5°; Brightness: between ±20%; Blur: up to 1 px | 906/835/47/24 | 3849 | 4734 | 982 | 1024 × 1024 px and 512 × 512 px
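For reference, the augmentations listed in Table 1 (horizontal flip, rotation within ±5°, brightness within ±20%, slight blur) can be approximated with an off-the-shelf library such as Albumentations, which also transforms YOLO-format bounding boxes. This is an illustrative sketch under those assumptions, not the exact pipeline used to build the dataset.

```python
# Approximation of the Table 1 augmentations with Albumentations.
# Illustrative only; the original dataset was not necessarily built this way.
import albumentations as A

transform = A.Compose(
    [
        A.HorizontalFlip(p=0.5),                                  # flip: horizontal
        A.Rotate(limit=5, p=0.5),                                 # rotation within +/-5 degrees
        A.RandomBrightnessContrast(brightness_limit=0.2,
                                   contrast_limit=0.0, p=0.5),    # brightness within +/-20%
        A.Blur(blur_limit=3, p=0.3),                              # slight blur (roughly 1 px)
    ],
    bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]),
)

# image: HxWx3 NumPy array; bboxes: list of [x_c, y_c, w, h] in YOLO format
# augmented = transform(image=image, bboxes=bboxes, class_labels=class_labels)
# aug_image, aug_boxes = augmented["image"], augmented["bboxes"]
```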
Table 2. Training studies for the grapevine shoot detection and classification processes.

Training Configuration | YOLO Architecture | Data Augmentation | Resolution | Pre-Trained Weights
4da_512px_csp | YOLOv4-CSP | Yes | 512 × 512 px | No
4da_512px_csp_pretrained | YOLOv4-CSP | Yes | 512 × 512 px | yolo-csp.weights
4da_1024px_csp | YOLOv4-CSP | Yes | 1024 × 1024 px | No
0da_1900px_csp | YOLOv4-CSP | No | 1900 × 1900 px | No
4da_512px_p7 | YOLOv4-P7 | Yes | 512 × 512 px | No
Table 3. Grapevine moth males augmented dataset version.

Data Augmentation Techniques | Number of Images (Total/Training/Validation/Testing) | Grapevine Moth Males Annotations | Resolution
Rotation: between ±45°; Brightness: between ±25%; Exposure: between ±15%; Blur: up to 0.25 px | 146/135/7/4 | 3239 | 1024 × 1024 px
Table 4. Assessment results for the different training studies related with grapevine shoot detection and classification.

Training Configuration | mAP@0.5 | Precision | Recall | F1-Score
4da_512px_csp | 0.78 | 0.73 | 0.83 | 0.78
4da_512px_csp_pretrained | 0.80 | 0.66 | 0.85 | 0.74
4da_1024px_csp | 0.81 | 0.75 | 0.85 | 0.80
0da_1900px_csp | 0.84 | 0.72 | 0.90 | 0.80
4da_512px_p7 | 0.80 | 0.68 | 0.83 | 0.75
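The F1-scores in Table 4 are the harmonic mean of the corresponding precision and recall values, and can be reproduced directly from the table:

```python
# F1 is the harmonic mean of precision and recall; the Table 4 values
# follow directly from the reported precision/recall pairs.
def f1(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

for name, p, r in [("4da_512px_csp", 0.73, 0.83),
                   ("4da_512px_csp_pretrained", 0.66, 0.85),
                   ("4da_1024px_csp", 0.75, 0.85),
                   ("0da_1900px_csp", 0.72, 0.90),
                   ("4da_512px_p7", 0.68, 0.83)]:
    print(f"{name}: F1 = {f1(p, r):.2f}")  # 0.78, 0.74, 0.80, 0.80, 0.75
```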
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
