Article

A Non-Invasive Method Based on Computer Vision for Grapevine Cluster Compactness Assessment Using a Mobile Sensing Platform under Field Conditions

by Fernando Palacios 1,2, Maria P. Diago 1,2 and Javier Tardaguila 1,2,*
1 Televitis Research Group, University of La Rioja, 26006 Logroño (La Rioja), Spain
2 Instituto de Ciencias de la Vid y del Vino, University of La Rioja, CSIC, Gobierno de La Rioja, 26007 Logroño, Spain
* Author to whom correspondence should be addressed.
Sensors 2019, 19(17), 3799; https://doi.org/10.3390/s19173799
Submission received: 8 July 2019 / Revised: 27 August 2019 / Accepted: 30 August 2019 / Published: 2 September 2019
(This article belongs to the Special Issue Emerging Sensor Technology in Agriculture)

Abstract:
Grapevine cluster compactness affects grape composition, fungal disease incidence, and wine quality. To date, cluster compactness assessment has relied on visual inspection performed by trained evaluators, with very scarce application in the wine industry. The goal of this work was to develop a new, non-invasive method based on the combination of computer vision and machine learning technology for cluster compactness assessment under field conditions from on-the-go red, green, blue (RGB) image acquisition. A mobile sensing platform was used to automatically capture RGB images of grapevine canopies and fruiting zones at night using artificial illumination. A set of 195 clusters of four red grapevine varieties from three commercial vineyards was photographed over several seasons one week prior to harvest. After image acquisition, cluster compactness was evaluated in the laboratory by a panel of 15 experts following the International Organization of Vine and Wine (OIV) 204 standard as a reference method. The developed algorithm comprises several steps: an initial, semi-supervised image segmentation, followed by automated cluster detection and automated compactness estimation using a Gaussian process regression model. A calibration model (95 clusters as the training set and 100 clusters as the test set) and a leave-one-out cross-validation model (LOOCV; performed on the whole set of 195 clusters) were elaborated. On the test set, a coefficient of determination (R2) of 0.68 and a root mean squared error (RMSE) of 0.96 were obtained between the image-based compactness estimates and the average of the evaluators' ratings (on a 1-9 scale). Additionally, the leave-one-out cross-validation yielded an R2 of 0.70 and an RMSE of 1.11.
The results show that the newly developed computer vision-based method could be commercially applied by the wine industry for efficient cluster compactness estimation from on-the-go RGB image acquisition platforms in commercial vineyards.

1. Introduction

Grapevine cluster compactness is a key attribute related to grape composition, fruit health status, and wine quality [1,2]. Compactness describes the density of a cluster in terms of the degree of aggregation of its berries. Highly compacted winegrape clusters can be affected to a greater extent by fungal diseases, such as powdery mildew [3] and botrytis [4], than loose ones [5].
The most prevalent method for assessing cluster compactness was developed by the International Organization of Vine and Wine (OIV) [6] and has been applied in several research studies [7,8]. The OIV method assesses cluster compactness by visual inspection, assigning each cluster to one of five classes. The class assignment takes into account several morphological features of the berries and pedicels, which are visually appraised by trained experts. This method and others designed to evaluate compactness in specific varieties [9,10,11] tend to be inaccurate due to the intrinsic subjectivity of an evaluation tied to the evaluator's opinion. Moreover, these visual inspection methods are laborious and time-consuming, as they may also require the manual measurement of specific cluster morphological parameters. Therefore, alternative methods for objectively and accurately assessing cluster compactness are needed for wine industry applications.
Computer vision and image processing technology enables low-cost, automated information extraction and its analysis from images taken using a digital camera. This technology is being used in viticulture to estimate key parameters such as vine pruning weight [12,13], the number of flowers per inflorescence [14,15], canopy features [16], or yield [17,18], as well as to provide relevant information to grape harvesting robots [19,20].
Automated cluster compactness estimation by computer vision methods was recently attempted by Cubero et al. [21] and Chen et al. [22]. The former involved the automated extraction of image descriptors from red, green, blue (RGB) cluster images taken from different cluster views under laboratory conditions. From these descriptors, a partial least squares (PLS) calibration model was developed to predict their associated OIV compactness rating. In the approach followed by Chen et al. [22], a multi-perspective imaging system was developed, which made use of different mirror reflections that facilitated the simultaneous acquisition of images from multiple views from a single shot. Additionally, the system also included a weighing sensor for cluster mass measurement. Then, a set of image descriptors and features derived from the data provided by the sensing system were automatically extracted and used to calibrate several models. Of these, the PLS model achieved the best results.
Previously developed computer vision methods for cluster compactness assessment provided accurate and objective compactness estimation only working under controlled laboratory conditions, which requires the destructive collection of clusters in the vineyard. This is a laborious and time-consuming practice that precludes the appraisal of cluster compactness as a standard grape quality parameter prior to harvest, thus limiting its industrial applicability. Moreover, to the best of our knowledge, there is no commercial method available to assess grapevine cluster compactness under field conditions in an automated way.
The purpose of this work was to develop a new, non-invasive, and proximal method based on computer vision and machine learning technology for assessing grapevine cluster compactness from on-the-go RGB image acquisition in commercial vineyards.

2. Materials and Methods

2.1. Experimental Layout

The trials were carried out during the 2016, 2017, and 2018 seasons in three commercial vineyards planted with four different red grapevine varieties (Vitis vinifera L.). The vines were trained onto a vertical shoot positioned (VSP) trellis system and were partially defoliated at fruit set.
A total of 195 red grape clusters, comprising five distinct datasets, were labeled in the field prior to image acquisition in three commercial vineyards.
  • Vineyard site #1: Located in Logroño (lat. 42°27′42.3″N; long. 2°25′40.4″W; La Rioja, Spain) with 2.8 m row spacing and 1.2 m vine spacing, where a set of 95 Tempranillo clusters were imaged and sampled during season 2016, denoted as T16.
  • Vineyard site #2: Located in Logroño (lat. 42°28′34.2″N; long. 2°29′10.0″W; La Rioja, Spain) with 2.5 m row spacing and 1 m vine spacing, where a set of 25 Grenache clusters were imaged and sampled during season 2017, denoted as G17.
  • Vineyard site #3: Located in Vergalijo (lat. 42°27′46.0″N; long. 1°48′13.1″W; Navarra, Spain) with 2 m row spacing and 1 m vine spacing, where three sets totalling 75 clusters of Syrah, Cabernet Sauvignon, and Tempranillo (25 per grapevine variety) were imaged and sampled during season 2018, denoted as S18, CS18, and T18, respectively.
The data were divided into a training set, formed by 95 clusters of T16 dataset, and an external validation test set, formed by 100 clusters of G17, S18, CS18, and T18 datasets, in order to test the system performance on new or additional varieties and vineyards.

2.2. Image Acquisition

Vineyard canopy images were taken on-the-go at a speed of 5 km/h one week prior to harvest using a mobile sensing platform developed at the University of La Rioja. Image acquisition was performed at night using an artificial illumination system mounted on the mobile platform in order to obtain homogeneous illumination of the vines and to separate the vine under evaluation from the vines of the opposite row. An all-terrain vehicle (ATV) (Trail Boss 330, Polaris Industries, Medina, Minnesota, USA) was modified to incorporate all components as described in the work of Diago et al. [23] (Figure 1).
Additionally, some elements were modified:
  • RGB camera: a mirrorless Sony α7II RGB camera (Sony Corp., Tokyo, Japan) mounting a full-frame complementary metal oxide semiconductor (CMOS) sensor (35 mm, 24.3 MP resolution) and equipped with a Zeiss 24–70 mm lens was used for image acquisition in vineyard sites #1 and #2, while a Canon EOS 5D Mark IV RGB camera (Canon Inc., Tokyo, Japan) mounting a full-frame CMOS sensor (35 mm, 30.4 MP) and equipped with a Canon EF 35 mm f/2 IS USM lens was used in vineyard site #3.
  • Industrial computer: a Nuvo-3100VTC industrial computer was used for image storage and for setting the camera parameters of the Canon EOS 5D Mark IV through custom-developed software, while the parameters of the Sony α7II were set on the camera itself and its images were stored on a Secure Digital (SD) card.
The camera was positioned at a distance of 1.5 m from the canopy. The camera parameters were manually set at the beginning of the experiment in each vineyard.

2.3. Reference Measurements of Cluster Compactness

After image acquisition, the labeled clusters were manually collected, and their compactness was visually evaluated in a laboratory at the University of La Rioja by a panel composed of 15 experts following the OIV 204 standard [6]. In this reference method, each cluster was classified into one of five discrete classes (Figure 2), ranging from 1 (the loosest clusters) to 9 (the most compact). In the visual assessment, several aspects related to the morphology of the cluster, such as berry mobility, pedicel visibility, and berry deformation by pressure, were taken into consideration. The average of the evaluators' ratings was used as the reference compactness value for each cluster.

2.4. Image Processing

Image processing comprised several steps that can be summarized as semi-supervised image segmentation followed by cluster detection and compactness estimation for each detected cluster. While cluster detection and compactness estimation were fully automated, the image segmentation step required the intervention of the user for each dataset. The algorithm was developed and tested using Matlab R2017b (Mathworks, Natick, MA, USA). The flowchart of the algorithm process for a new set of images is described in Figure 3.
To ensure consistency in the analysis of the complete algorithm, the classifier at each step was trained on the training set and, for the test set, evaluated on the output produced by the classifier of the previous step, except for the initial image segmentation, for which a model was trained on each individual dataset.

2.4.1. Semi-Supervised Image Segmentation

For the proper compactness estimation of every cluster visible in the image, the clusters and their main elements (grapes and rachis) first had to be detected.
Most red winegrape pixels are easily distinguishable from pixels of other vine elements by their color. Hence, an initial pixel-wise, color-based segmentation was performed on every image. For this approach, seven classes representing the grapevine elements potentially present in the images were defined: “grape”, “rachis”, “trunk”, “shoot”, “leaf”, “gap”, and “trellis”. For extracting cluster candidates, only groups of pixels belonging to the first class (“grape”) were used, but rachis identification was also relevant for compactness assessment. In summary, performing an image segmentation over the seven classes described above as a first step eliminated the need for any further color segmentation.
A set of 3500 pixels was manually labeled (500 pixels per class), and color features were extracted considering a combination of two color spaces: RGB and CIE L*a*b* (CIELAB) [24]. In this approach, a pixel p was mathematically represented as in Equation (1):
p = (R_p, G_p, B_p, L_p, a_p, b_p)    (1)
where R_p, G_p, B_p and L_p, a_p, b_p represent the values of p for the three channels of the RGB and the CIELAB color spaces, respectively. The function rgb2lab from Matlab R2017b was used for the RGB to CIELAB color space conversion.
A multinomial logistic regression model was trained with the set described above in order to obtain a pixel-wise, color-based classifier. This classifier, a generalization of logistic regression to multiclass problems [25], predicts the probability of each possible outcome for an observation as the relative probability of belonging to each class over belonging to a class chosen arbitrarily as the reference. Assuming that n is the reference class of a set of classes {1, …, n}, the output of the classifier for p is [π_1, …, π_n], where π_i represents the probability of p belonging to class i for i = 1, …, n − 1 [Equation (2)], and π_n represents the probability for the reference class [Equation (3)].
π_i = exp(β_{i,0} + Σ_{j=1}^{k} β_{i,j} x_j) / (1 + Σ_{l=1}^{n−1} exp(β_{l,0} + Σ_{j=1}^{k} β_{l,j} x_j))    (2)
π_n = 1 / (1 + Σ_{l=1}^{n−1} exp(β_{l,0} + Σ_{j=1}^{k} β_{l,j} x_j)) = 1 − Σ_{l=1}^{n−1} π_l    (3)
where k is the number of predictor variables, x_j is the j-th predictor variable, and β_{i,j} is the estimated coefficient of x_j for the i-th class.
The segmentation was performed by assigning to each pixel the class with the highest probability (Figure 4).
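As an illustration, Equations (2) and (3) can be sketched in a few lines of NumPy (the paper's implementation was in Matlab; the coefficients below are arbitrary random values standing in for the fitted model). Segmentation then assigns each pixel the class with the highest probability:

```python
import numpy as np

def multinomial_probabilities(p, beta):
    """Class probabilities for one pixel p = [R, G, B, L, a, b],
    following Equations (2) and (3). beta has shape (n - 1, k + 1):
    one row of coefficients per non-reference class, intercept first;
    the n-th class is the reference class."""
    x = np.concatenate(([1.0], p))         # prepend 1 for the intercept beta_{i,0}
    scores = np.exp(beta @ x)              # exp(beta_{i,0} + sum_j beta_{i,j} x_j)
    denom = 1.0 + scores.sum()
    pi = scores / denom                    # Equation (2): classes 1..n-1
    return np.append(pi, 1.0 - pi.sum())   # Equation (3): reference class

# Illustrative coefficients for the seven classes ("grape", "rachis", ...).
rng = np.random.default_rng(0)
beta = 0.1 * rng.normal(size=(6, 7))               # 6 non-reference classes, 6 features + intercept
pixel = np.array([120.0, 30.0, 40.0, 35.0, 45.0, 10.0])  # R, G, B, L*, a*, b*
probs = multinomial_probabilities(pixel, beta)
label = int(np.argmax(probs))                      # segmentation: highest-probability class
```

By construction the seven probabilities always sum to one, so taking the argmax per pixel yields a complete seven-class segmentation map.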

2.4.2. Cluster Detection

Using the initial segmentation, a mask of cluster candidates was then generated by selecting those pixels assigned to the “grape” class (Figure 4c).
While the initial color segmentation allowed filtering out most of the non-cluster elements present in the image, a second filtering step was needed to remove those non-cluster groups of pixels with a color similar to the “grape” class, which can form objects of different shapes and sizes. This second filtering can be summarized as:
  • A morphological opening (morphological erosion operation followed by a dilation) of the clusters’ candidates mask using a circular kernel with a radius of three pixels.
  • An extraction of a sub-image per minimal bounding box that contains a connected component (groups of connected pixels) in the clusters’ candidates mask.
  • An extraction of features representing the information contained in each sub-image. For this, the bag-of-visual-words (BoVW) model was employed.
  • A classification of “cluster” vs. “non-cluster” sub-images.
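The first two steps above can be sketched with NumPy and SciPy (an illustrative stand-in for the paper's Matlab implementation; the toy mask and sizes are invented for the example):

```python
import numpy as np
from scipy import ndimage

def circular_kernel(radius):
    """Disc-shaped binary structuring element of the given pixel radius."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return x * x + y * y <= radius * radius

def extract_candidate_boxes(grape_mask, radius=3):
    """Morphological opening (erosion followed by dilation) of the
    cluster-candidates mask with a circular kernel, then one minimal
    bounding box per connected component."""
    opened = ndimage.binary_opening(grape_mask, structure=circular_kernel(radius))
    labeled, n_components = ndimage.label(opened)
    return ndimage.find_objects(labeled), opened

# Toy mask: a 20 x 20 candidate (kept) and a 2 x 2 speck (removed by the opening).
mask = np.zeros((50, 50), dtype=bool)
mask[5:25, 5:25] = True
mask[40:42, 40:42] = True
boxes, opened = extract_candidate_boxes(mask)
sub_images = [opened[b] for b in boxes]    # one sub-image per bounding box
```

Each returned bounding box crops one connected component, which then goes through the BoVW feature extraction and "cluster" vs. "non-cluster" classification described next.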
The bag-of-visual-words (BoVW) model is a concept adapted from document classification to image classification and object categorization [26]. In this model, images are treated as documents formed by local features denominated “visual words”. These words are grouped to form a “vocabulary” or “codebook”. Then, every image is represented by the number of occurrences of every “codeword” in the codebook. In this work, local features were 64-length speeded up robust features (SURF) descriptor vectors [27] clustered by a k-means algorithm [28].
Given a set of n training sub-images, Tr = {tr_1, …, tr_n}, and their classes Y = {y_1, …, y_n}, manually labeled as “cluster” (total and partial clusters) or “non-cluster”, the process adopted for the training set was the following:
  • Extract the SURF points of every sub-image.
  • Cluster the SURF points using k-means. The set of cluster centroids forms the codebook of k codewords.
  • Extract the bag-of-words representation of each sub-image:
    • Assign each SURF point of the image to the nearest centroid of the codebook.
    • Compute the histogram counting the number of SURF points assigned to each centroid.
Then, each tr_i had a feature vector x_i ∈ ℝ^k that was used to train a support vector machine (SVM) classifier [29]. This machine learning algorithm for supervised classification or regression transforms the input data into a high-dimensional feature space using a kernel function and finds the hyperplane that maximizes the distance to the nearest training data point of any class. Once the classifier was trained, only the third step (bag-of-words extraction) was performed for new sets of cluster candidate sub-images, and the resulting feature vectors were used to classify each sub-image as “cluster” or “non-cluster”, preserving only cluster sub-images for further analysis (Figure 5).
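The codeword-assignment and histogram steps can be sketched as follows (a NumPy illustration; the random codebook and descriptors stand in for the k-means centroids and real SURF descriptors):

```python
import numpy as np

def bovw_histogram(descriptors, codebook):
    """Bag-of-visual-words representation of one sub-image: assign each
    local descriptor (e.g., a 64-length SURF vector) to its nearest
    codeword and count the occurrences of every codeword."""
    # Squared Euclidean distances, shape (n_descriptors, k).
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)
    return np.bincount(nearest, minlength=codebook.shape[0])

rng = np.random.default_rng(1)
codebook = rng.normal(size=(100, 64))      # k = 100 centroids from k-means
descriptors = rng.normal(size=(250, 64))   # SURF descriptors of one sub-image
hist = bovw_histogram(descriptors, codebook)   # feature vector x_i for the SVM
```

The resulting k-length histogram is the fixed-size feature vector fed to the SVM regardless of how many SURF points each sub-image produced.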

2.4.3. Cluster Compactness Estimation

This step involved the extraction of a set of features from the cluster morphology related to its compactness. For that purpose, the pixels of the initial segmentation mask corresponding to the sub-images classified as clusters in the previous step had to be extracted. For a given detected cluster sub-image, the following procedure was applied:
  • A new mask using only pixels of “grape” and “rachis” classes was created.
  • A morphological opening on “grape” pixels using a circular kernel with a radius of two pixels was applied.
  • A morphological opening on “rachis” pixels using a circular kernel with a radius of two pixels was also applied.
  • A mask containing only the largest connected component formed by “grape” and “rachis” pixels, denoted as mask “A”, was created.
  • A mask containing the convex hulls of each “grape” pixel’s connected component (that can represent several grouped berries on compact clusters or isolated grapes on loose clusters), denoted as mask “B”, was created.
  • The final mask was created containing “grape” pixels and “rachis” pixels that were in mask “A” and inside the region of the convex hull of mask “B”. Those “rachis” pixels in mask “A” that were outside of the convex hull of mask “B” and connected at least two connected components of “grape” pixels were included as well.
The features to estimate the cluster compactness were extracted from the last mask containing only “grape” and “rachis” pixels (Figure 6). These features were the following:
  • Ratio of the area of the convex hull body of the cluster corresponding to holes (AH)
  • Ratio of the clusters area corresponding to berries (AB)
  • Ratio of the area corresponding to “rachis” (AR)
  • Average width at 25 ± 5 % of the length of the cluster (W25)
  • Average width at 50 ± 5 % of the length of the cluster (W50)
  • Average width at 75 ± 5 % of the length of the cluster (W75)
  • Ratio between “rachis” and “grape” pixels (RatioRG)
  • Roundness of “grape” pixels (RDGrape): 4 × π × A_Grape / P_Grape²
  • Compactness shape factor of “grape” pixels (CSFGrape): P_Grape² / A_Grape
  • Ratio between the maximum width and the length of the cluster (AS)
  • Ratio between W75 and W25 (RatioW75_W25)
  • Proportion of the “rachis” pixels “inside” the cluster (RRin)
  • Proportion of the “rachis” pixels “outside” the cluster (RRout)
  • Ratio of the area of the cluster over the mean area of the clusters of its set (RAoM)
where A_Grape and P_Grape correspond to the area and the perimeter computed considering only “grape” pixels.
While some of these features were already addressed by Cubero et al. [21] and Chen et al. [22], they had to be adapted to the new situation of field conditions, while others were specifically designed for this study. Features based on cluster widths and lengths (W25, W50, W75) required a prior rotation of the cluster mask along its longest axis so that the width of the cluster matched the horizontal axis and the whole set of clusters shared a similar orientation. The features RRin and RRout were calculated as the proportion of “rachis” pixels completely surrounded by “grape” pixels and the proportion of the remaining “rachis” pixels, respectively. The feature RAoM provided a measure of the size of a cluster relative to the average of the clusters in its set and added robustness against images of clusters taken at different distances.
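As an illustration of two of the features above, RDGrape and CSFGrape can be computed from a binary mask as follows (a NumPy sketch; approximating the perimeter by counting boundary pixels is an assumption of this example, one of several possible estimates):

```python
import numpy as np

def grape_shape_features(grape_mask):
    """RDGrape = 4*pi*A/P^2 and CSFGrape = P^2/A for a binary "grape"
    mask. The area A is the foreground pixel count; the perimeter P is
    approximated as the number of foreground pixels having at least one
    4-connected background neighbour."""
    padded = np.pad(grape_mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    boundary = grape_mask & ~interior
    area = float(grape_mask.sum())
    perimeter = float(boundary.sum())
    roundness = 4.0 * np.pi * area / perimeter ** 2   # RDGrape
    csf = perimeter ** 2 / area                       # CSFGrape
    return roundness, csf

# A filled disc: roundness should be of order 1 under this estimate.
y, x = np.ogrid[-30:31, -30:31]
disc = x * x + y * y <= 30 * 30
rd_disc, csf_disc = grape_shape_features(disc)
```

Note that RDGrape and CSFGrape are reciprocal up to the constant 4π, so a compact, round mask scores high on the former and low on the latter.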
The compactness estimation was performed by a Gaussian process regression (GPR) model trained with the data extracted from n clusters {(x_i, y_i)}_{i=1}^{n}, where x_i ∈ ℝ^14 represents the 14-feature vector and y_i ∈ [1, 9] represents the average of the evaluators' ratings for the i-th cluster.
Gaussian process regression models are probabilistic kernel-based machine learning models that use a Bayesian approach to solve regression problems estimating uncertainty at predictions [30]. A Gaussian process regression model is described in Equation (4):
g(x) = f(x) + h(x)ᵀβ    (4)
where h(x) is a vector of basis functions, β is the vector of coefficients of h(x), and f(x) ~ GP(0, k(x, x′)) is a zero-mean Gaussian process with covariance function k(x, x′).
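A minimal zero-mean GPR prediction can be sketched with NumPy as follows (an illustration, not the paper's model: a squared-exponential kernel is used instead of the ARD Matérn 5/2 kernel reported later, the basis term h(x)ᵀβ is dropped, and the feature vectors and ratings are synthetic):

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    """Squared-exponential covariance function k(x, x')."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-0.5 * d2 / length_scale ** 2)

def gpr_predict(X_train, y_train, X_new, noise=1e-4):
    """Posterior mean and variance of the zero-mean GP f(x) in Equation (4)."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_new, X_train)
    alpha = np.linalg.solve(K, y_train)
    mean = K_s @ alpha
    var = 1.0 - np.einsum('ij,ji->i', K_s, np.linalg.solve(K, K_s.T))
    return mean, var

rng = np.random.default_rng(2)
X = rng.uniform(0.0, 1.0, size=(40, 14))   # 14-feature vectors (one per cluster)
y = 1.0 + 8.0 * X[:, 0]                    # synthetic ratings in the [1, 9] range
mean, var = gpr_predict(X, y, X[:5])       # predictions at known clusters
```

The posterior variance is what makes GPR attractive here: every compactness estimate comes with an uncertainty that shrinks near well-represented training clusters.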

2.4.4. Performance Evaluation Metrics

The results obtained at each step were analyzed using a set of metrics corresponding to classification tasks in the case of the multinomial logistic regression and the support vector machine, and to regression in the case of the Gaussian process regression. The metrics chosen for classification performance are commonly used for evaluating binary classifiers, where a sample is identified as belonging to a positive class or a negative class. For the multinomial logistic regression, the positive and negative classes were the class under evaluation and the rest of them (e.g., “grape” class vs. “non-grape” classes), while for the support vector machine, the “cluster” and “non-cluster” classes were considered, respectively. The metrics calculated were sensitivity [Equation (5)], specificity [Equation (6)], F1 score [Equation (7)], and intersection over union [IoU; Equation (8)].
Sensitivity = TP / (TP + FN)    (5)
Specificity = TN / (TN + FP)    (6)
F1 = 2 × Precision × Sensitivity / (Precision + Sensitivity)    (7)
IoU = TP / (TP + FP + FN)    (8)
where TP represents the “true positives” (number of positive samples correctly classified as positive class), FP represents the “false positives” (number of negative samples incorrectly classified as positive class), TN represents the “true negatives” (number of negative samples correctly classified as negative class), and FN represents the “false negatives” (number of positive samples incorrectly classified as negative class). Precision was defined as in Equation (9):
Precision = TP / (TP + FP)    (9)
The area under the receiver operating characteristic (ROC) curve (AUC) [31] was also considered. For regression, the coefficient of determination (R2) and the root mean squared error (RMSE) were selected.
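For concreteness, Equations (5)-(9) in code (pure Python; the confusion counts in the example are invented):

```python
def binary_metrics(tp, fp, tn, fn):
    """Classification metrics of Equations (5)-(9) from the confusion counts."""
    sensitivity = tp / (tp + fn)                                  # Equation (5)
    specificity = tn / (tn + fp)                                  # Equation (6)
    precision = tp / (tp + fp)                                    # Equation (9)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)  # Equation (7)
    iou = tp / (tp + fp + fn)                                     # Equation (8)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "f1": f1, "iou": iou}

# Example: 80 clusters detected, 20 false detections, 10 clusters missed.
m = binary_metrics(tp=80, fp=20, tn=90, fn=10)
```

Note that IoU penalizes both false positives and false negatives in a single number, which is why it is a stricter summary than either sensitivity or precision alone.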

2.4.5. Hyperparameter Optimization Procedure

Support vector machine and Gaussian process regression are two machine learning algorithms with a set of hyperparameters that are not learned from the data and need to be set before training. The most traditional hyperparameter selection method is a brute-force grid search over a manually predefined set of values for each hyperparameter, seeking the combination that optimizes a performance metric. Instead, in this work, a Bayesian optimization algorithm [32], which has been shown to outperform other optimization algorithms [33], was used to find the best hyperparameter set. This algorithm searches for the hyperparameter set that optimizes an objective function (here, a performance metric of the machine learning algorithm) using a Gaussian process trained on the objective function evaluations. The Gaussian process is updated with the result of each evaluation, and an acquisition function determines the next point, i.e., the next set of hyperparameters, to be evaluated within a bounded domain.
The functions fitcsvm and fitrgp of Matlab R2017b were used to train the support vector machine and the Gaussian process regression models, respectively, selecting their hyperparameters with Bayesian optimization. A Gaussian process with automatic relevance determination (ARD) Matérn 5/2 kernel model and the expected-improvement-plus acquisition function were used for this purpose. The ranges considered for each hyperparameter and the final values are shown in Table 1.
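The loop just described can be illustrated with a deliberately small NumPy sketch: a single one-dimensional "hyperparameter", a Gaussian process surrogate with a squared-exponential kernel (an assumption of this example; the paper used an ARD Matérn 5/2 kernel and the expected-improvement-plus acquisition), and expected improvement evaluated over a candidate grid. The quadratic objective stands in for a cross-validated error curve:

```python
import math
import numpy as np

def gp_posterior(X, y, Xs, ls=1.0, noise=1e-5):
    """Surrogate model: GP posterior mean/variance at candidate points Xs."""
    k = lambda A, B: np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / ls ** 2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(Xs, X)
    m = y.mean()                                  # centre the observations
    mu = m + Ks @ np.linalg.solve(K, y - m)
    var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mu, np.maximum(var, 1e-12)

def expected_improvement(mu, var, best):
    """EI for minimisation: expected amount by which a candidate improves
    on the best objective value observed so far."""
    s = np.sqrt(var)
    z = (best - mu) / s
    Phi = 0.5 * (1.0 + np.array([math.erf(v / math.sqrt(2.0)) for v in z]))
    phi = np.exp(-0.5 * z ** 2) / math.sqrt(2.0 * math.pi)
    return (best - mu) * Phi + s * phi

def bayes_opt(objective, low, high, n_init=4, n_iter=12, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(low, high, n_init)            # random initial evaluations
    y = np.array([objective(h) for h in X])
    cand = np.linspace(low, high, 200)            # bounded candidate domain
    for _ in range(n_iter):
        mu, var = gp_posterior(X, y, cand)
        h = cand[int(np.argmax(expected_improvement(mu, var, y.min())))]
        if np.min(np.abs(X - h)) < 1e-9:          # avoid re-evaluating a point
            h = rng.uniform(low, high)
        X, y = np.append(X, h), np.append(y, objective(h))
    return float(X[int(np.argmin(y))]), float(y.min())

# Toy "cross-validated error" curve with its minimum at h = 2.
best_h, best_val = bayes_opt(lambda h: (h - 2.0) ** 2 + 0.5, 0.0, 5.0)
```

Each iteration spends one (in practice expensive) model-training run where the acquisition function deems it most promising, which is why this approach typically needs far fewer evaluations than an exhaustive grid search.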

3. Results and Discussion

3.1. Initial Segmentation Performance

The initial segmentation process was a key step towards the accurate assessment of cluster compactness. Therefore, a different segmentation model was applied to each grapevine variety and vineyard to avoid errors associated with slight differences in color and illumination between the images captured in different vineyards, which would occur if a unique segmentation model were applied. Accordingly, five sets of 3500 pixels each (500 pixels per class) were manually labeled, one for each variety and vineyard, and used to train five distinct multinomial logistic regression models.
As shown in Table 2, overall, the five models achieved good results in terms of sensitivity, specificity, F1 score, AUC, and IoU when applied to their specific set of images. With regard to the most relevant classes (“rachis” and “grape”), similarly good specificity values were obtained for all sets, while more variable outcomes were obtained for the remaining metrics. In general terms, the T18 model yielded the best results for these two relevant classes, closely followed by the S18 model (in this case only for the “grape” class) and slightly outperformed by the CS18 model for the “rachis” class in terms of sensitivity and AUC. More modest results were obtained for models G17 and T16 (Table 2). In particular, model T16 yielded values under 0.9 in the sensitivity and F1 score metrics for the “rachis” class and in IoU for both the “rachis” and “grape” classes.
To compare the segmentation performance between individual models on each dataset versus a unique segmentation model, two additional cross-validation methods were applied: a five-fold cross-validation, where at each iteration, the training fold was formed by four datasets and the test fold by the remaining dataset, and a ten-fold stratified cross-validation, where at each iteration, the training and the test folds comprised data at an equal proportion of each dataset and class. The comparison of the results for these two validation methods revealed that better results were obtained for all metrics when the training and the test contained data from the same dataset (ten-fold CV). Comparing both methods with the average of the results obtained by the individual models indicated that applying individual segmentation models produced a substantial improvement in all metrics (except for specificity, for which only a slight increase was recorded) and for all classes (with the exception of the “gap” class, for which similar results were obtained using the three methods). The increase in these metrics for the relevant classes (“grape” and “rachis”) and the importance of this step highlights the need for applying individual models for each dataset. Differences between performances could be related to differences in color tonality of the vine elements segmented (e.g., different green tonalities for leaves) between grape varieties and vineyards.
The results show that, given a set of images taken on a vineyard, a multinomial logistic regression model trained with a small subset of pixels manually labeled from the images can be applied to effectively segment vine images in the predefined classes using color information. Also, the pixels needed for compactness estimation (“grape” and “rachis” pixels) can be extracted.

3.2. Cluster Detection Performance

A support vector machine was trained with 600 sub-images manually labeled into 300 “cluster” (total and partial clusters) and 300 “non-cluster” sub-images automatically extracted from the segmentation performed on the T16 set.
The classifier was validated against a set of 800 sub-images automatically extracted from the segmentation performed on sets G17, S18, CS18, and T18 (200 sub-images per set) and manually labeled into 400 “cluster” (total and partial clusters) and 400 “non-cluster” sub-images (100 sub-images of each class per set). A set of k values in the range from 10 to 200 was chosen for the k-means algorithm, and the performance of the classifier for the “cluster” class was evaluated (Table 3). The model trained with k = 100 yielded the best results for all metrics. Similar results were obtained for all k values in terms of sensitivity and F1 score, while for specificity and AUC, the model trained with k = 10 performed worse than the rest of the models. The best model (k = 100) showed similar values in sensitivity, specificity, and F1 score (between 0.76 and 0.8), with specificity being slightly superior, and a higher value for AUC.
These results could be improved by incorporating new data, as shown in Table 3. A five-fold cross-validation (each fold being a different dataset) was performed to train new support vector machine classifiers. The results show an improvement in all metrics for all tested k values, with the exception of specificity for the k = 100 model, which decreased slightly. This model still achieved the best results for all metrics except specificity. The k = 100 model was chosen as the final model, as it yielded the best results on the test set for all metrics as well as the best results in almost all metrics in the cross-validation.
The performance of the support vector machine proves that this classifier trained with the bag-of-visual-words representation of “cluster” and “non-cluster” sub-images can be applied to classify new sub-images from new datasets previously unknown to the classifier, and therefore, it can be used to filter non-cluster pixel groups before compactness estimation.

3.3. Cluster Compactness Estimation

A Gaussian process regression model was trained on the set of features extracted from the clusters of the T16 set (95 clusters) and validated with the automatically detected clusters of the G17, S18, CS18, and T18 sets (100 clusters). A coefficient of determination (R2) of 0.68 and an RMSE of 0.96 were achieved (Figure 7). This is a remarkable result, considering that the four test sets were totally unknown to the model and were taken in different vineyards. Moreover, the S18, CS18, and T18 test sets were photographed with a different RGB camera than the one used for the T16 training set. Even more relevant, three of the four sets (75 of the 100 clusters) comprised varieties different from the one used for training (Tempranillo was used for training, while Grenache, Syrah, Cabernet Sauvignon, and Tempranillo clusters formed the test set). This outcome paves the way for cluster compactness estimation of winegrape varieties that are not represented in the training data (in contrast to the work presented by Cubero et al. [21]) and for real-world application on new varieties and vineyards without the need to include specific data from them, which would otherwise require collecting new clusters and having their compactness assessed by trained experts.
On the other hand, when leave-one-out cross validation was performed over all datasets (195 clusters), a coefficient of determination (R2) of 0.70 and an RMSE of 1.11 were obtained (Figure 8). The algorithm performed accurately over most of the compactness range but tended to slightly underestimate very compact clusters with an OIV rating close to 9. A plausible explanation is that this very compact class is mainly characterized by the deformation of berries caused by the pressure among them, a feature that is difficult to capture by image analysis.
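The leave-one-out protocol itself is easy to express with scikit-learn's cross-validation utilities. The sketch below uses the same synthetic stand-in data as before (real inputs would be the 195 cluster feature vectors and their averaged OIV ratings), kept small so the one-model-per-sample fitting stays fast.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import r2_score

rng = np.random.default_rng(2)

# Small synthetic stand-in for the full 195-cluster feature set.
X = rng.uniform(0, 1, size=(60, 2))
y = np.clip(1 + 8 * X[:, 0] + rng.normal(0, 0.4, 60), 1, 9)

gpr = GaussianProcessRegressor(kernel=Matern(nu=0.5), alpha=0.1,
                               normalize_y=True)
# Each cluster is predicted by a model trained on all the others.
pred = cross_val_predict(gpr, X, y, cv=LeaveOneOut())
print(round(r2_score(y, pred), 2))
```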
The accuracy of the compactness estimation is also strongly influenced by the results of the previous steps: a high misclassification rate in the initial segmentation of “grape” and “rachis” pixels would produce wrong shapes in the final cluster mask and poor feature extraction for the cluster. In addition, before its compactness can be estimated, a cluster must first have been detected by the BOVW model.
The cluster compactness estimation using the developed methodology in this work could be limited by some experimental in-field conditions, as follows:
  • Occlusion of the cluster: the estimation is performed only on the visible region of the cluster, so a high level of occlusion could increase the estimation error. An example of a cluster partially occluded by leaves is shown in Figure 9a, and the final mask extracted for compactness estimation in Figure 9b, where the cluster mask presents an anomalous shape that would lead to an incorrect compactness estimate.
  • Cluster overlapping: highly overlapped clusters would be identified as a single cluster, so a unique estimate would be obtained for the whole overlapped set. An example is illustrated in Figure 9c, where several clusters overlap; in the extracted mask (Figure 9d), the clusters cannot be separated from each other for proper individual compactness estimation.
In the current state of the system, the occlusion problem could be overcome by defoliating the side of the vineyard to be photographed. For cluster overlapping, the groups of overlapped clusters could be isolated in the field, or the separation between them could be manually labeled on the images with a color clearly different from the “rachis” and “grape” colors (e.g., the “trunk” color).

3.4. Commercial Applicability

The developed system can be efficiently used to estimate cluster compactness in commercial vineyards. Image acquisition with a mobile sensing platform allows the user to capture a large number of automatically geo-referenced images in extensive vineyards. The geo-referenced compactness estimates could therefore be used to generate a map of the spatial variability in cluster compactness and to delineate zones of similar compactness. This information could be highly relevant for sorting grapes before harvest, as cluster compactness is often linked to grape quality and health status.
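One simple way to turn geo-referenced estimates into such a zone map is to average them over a regular grid. The sketch below is purely illustrative: the positions, cell size (20 m), and estimates are hypothetical, and a real workflow would use the GPS coordinates recorded by the platform and a GIS or interpolation tool.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical geo-referenced data: local (x, y) positions in metres
# plus the image-based OIV compactness estimate for each cluster.
x = rng.uniform(0, 100, 300)
y = rng.uniform(0, 60, 300)
oiv = rng.uniform(1, 9, 300)

# Average the estimates over 20 m x 20 m cells to delineate zones of
# similar compactness.
cells = {}
for i, j, v in zip((x // 20).astype(int), (y // 20).astype(int), oiv):
    cells.setdefault((i, j), []).append(v)
zone_means = {c: float(np.mean(v)) for c, v in cells.items()}
print(len(zone_means), "zones")
```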
The non-invasive nature of the system could also enable an early identification of very compact clusters before harvest in order to establish strategies against fungal diseases, such as botrytis.
It is also remarkable that the model relies exclusively on features extracted directly from image analysis, which opens the possibility of applying the algorithm directly in new vineyards and to new varieties, in contrast to previous works. Cubero et al. [21] included the winegrape variety of the cluster as a feature in their PLS model, which requires collecting additional clusters of the variety whose compactness is to be estimated, evaluating their compactness following the OIV method, and re-training the model. Chen et al. [22] introduced features derived from the cluster mass measured by a weighing sensor, which requires harvesting the clusters beforehand.

3.5. Future Work

While the current system is capable of estimating compactness in commercial vineyards under uncontrolled field conditions, some improvements can still be made. The algorithm works properly only for red winegrape varieties because the initial segmentation step relies on color information alone; white grape pixels could easily be misclassified as “leaf” or “rachis” pixels. A more robust segmentation algorithm that combines color and texture information, or that resorts to deep learning techniques, could be developed to overcome this problem. These solutions could also help to develop a fully automated system.
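The color-only limitation is easy to see in a toy version of the segmentation step. The sketch below trains a multinomial logistic regression on synthetic CIELAB samples for three of the seven classes (real samples are labeled manually on the vineyard images); a white grape pixel would fall among the leaf/rachis colors, which is exactly why texture or deep features are proposed.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)

# Synthetic CIELAB pixel samples; the class centers are illustrative
# guesses, not measured values.
def sample(center, n=200):
    return rng.normal(center, 5.0, size=(n, 3))

X = np.vstack([sample([30, 15, 5]),    # dark red "grape" pixels
               sample([50, -40, 40]),  # green "leaf" pixels
               sample([55, 10, 35])])  # brownish "rachis" pixels
y = np.repeat(["grape", "leaf", "rachis"], 200)

# Multinomial logistic regression over color features only.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict([[30, 15, 5], [70, -5, 30]]))  # second point ~ a white grape color
```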
The compactness estimation model would also benefit from a more advanced image analysis algorithm capable of extracting features that represent the deformation of berries (to increase the accuracy in detecting highly compact clusters) and the degree of occlusion of the cluster (to avoid estimations on highly occluded clusters), and of separating overlapped clusters (to enable an individual estimation for each cluster in the overlapped set).

4. Conclusions

The results of this work show that the developed system was able to estimate winegrape cluster compactness under field conditions using on-the-go RGB computer vision (at 5 km/h using a mobile sensing platform) and machine learning. The system provides a semi-automated, non-invasive, and time-efficient method for estimating the compactness of a large number of red grapevine clusters in the field. It could be applied to determine the spatial variability of cluster compactness in commercial vineyards, which could serve as a new quality input to drive decisions on, for example, harvest classification or differential fungicide spraying. The developed methodology constitutes a new tool to improve decision making in precision viticulture and could be helpful for the wine industry.

Author Contributions

M.P.D. and J.T. conceived and designed the experiments. F.P. developed the algorithm and validated the results. F.P., M.P.D. and J.T. wrote the paper.

Funding

Fernando Palacios would like to acknowledge research funding through FPI grant 286/2017 from Universidad de La Rioja, Gobierno de La Rioja. Dr. Maria P. Diago is funded by the Spanish Ministry of Science, Innovation and Universities with a Ramón y Cajal grant (RYC-2015-18429).

Acknowledgments

The authors would like to thank Ignacio Barrio, Diego Collado, Eugenio Moreda, and Saúl Río for their help in collecting field data.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hed, B.; Ngugi, H.K.; Travis, J.W. Relationship between cluster compactness and bunch rot in Vignoles grapes. Plant Dis. 2009, 93, 1195–1201.
  2. Tello, J.; Marcos, J.I. Evaluation of indexes for the quantitative and objective estimation of grapevine bunch compactness. Vitis 2014, 53, 9–16.
  3. Austin, C.N.; Wilcox, W.F. Effects of sunlight exposure on grapevine powdery mildew development. Phytopathology 2012, 102, 857–866.
  4. Vail, M.; Marois, J. Grape cluster architecture and the susceptibility of berries to Botrytis cinerea. Phytopathology 1991, 81, 188–191.
  5. Molitor, D.; Behr, M.; Hoffmann, L.; Evers, D. Research note: Benefits and drawbacks of pre-bloom applications of gibberellic acid (GA3) for stem elongation in Sauvignon blanc. S. Afr. J. Enol. Vitic. 2012, 33, 198–202.
  6. OIV. OIV Descriptor list for grape varieties and Vitis species. OIV 2009, 18, 178. Available online: http://www.oiv.int/public/medias/2274/code-2e-edition-finale.pdf (accessed on 2 September 2019).
  7. Palliotti, A.; Gatti, M.; Poni, S. Early leaf removal to improve vineyard efficiency: gas exchange, source-to-sink balance, and reserve storage responses. Am. J. Enol. Vitic. 2011, 62, 219–228.
  8. Tardaguila, J.; Blanco, J.; Poni, S.; Diago, M. Mechanical yield regulation in winegrapes: Comparison of early defoliation and crop thinning. Aust. J. Grape Wine Res. 2012, 18, 344–352.
  9. Zabadal, T.J.; Bukovac, M.J. Effect of CPPU on fruit development of selected seedless and seeded grape cultivars. HortScience 2006, 41, 154–157.
  10. Evers, D.; Molitor, D.; Rothmeier, M.; Behr, M.; Fischer, S.; Hoffmann, L. Efficiency of different strategies for the control of grey mold on grapes including gibberellic acid (Gibb3), leaf removal and/or botrycide treatments. OENO One 2010, 44, 151–159.
  11. Tello, J.; Aguirrezábal, R.; Hernáiz, S.; Larreina, B.; Montemayor, M.I.; Vaquero, E.; Ibáñez, J. Multicultivar and multivariate study of the natural variation for grapevine bunch compactness. Aust. J. Grape Wine Res. 2015, 21, 277–289.
  12. Kicherer, A.; Klodt, M.; Sharifzadeh, S.; Cremers, D.; Töpfer, R.; Herzog, K. Automatic image-based determination of pruning mass as a determinant for yield potential in grapevine management and breeding. Aust. J. Grape Wine Res. 2017, 23, 120–124.
  13. Millan, B.; Diago, M.P.; Aquino, A.; Palacios, F.; Tardaguila, J. Vineyard pruning weight assessment by machine vision: towards an on-the-go measurement system. OENO One 2019, 53.
  14. Aquino, A.; Millan, B.; Gutiérrez, S.; Tardáguila, J. Grapevine flower estimation by applying artificial vision techniques on images with uncontrolled scene and multi-model analysis. Comput. Electron. Agric. 2015, 119, 92–104.
  15. Liu, S.; Li, X.; Wu, H.; Xin, B.; Petrie, P.R.; Whitty, M. A robust automated flower estimation system for grape vines. Biosystems Eng. 2018, 172, 110–123.
  16. Diago, M.P.; Krasnow, M.; Bubola, M.; Millan, B.; Tardaguila, J. Assessment of vineyard canopy porosity using machine vision. Am. J. Enol. Vitic. 2016, 67, 229–238.
  17. Nuske, S.; Wilshusen, K.; Achar, S.; Yoder, L.; Narasimhan, S.; Singh, S. Automated visual yield estimation in vineyards. J. Field Rob. 2014, 31, 837–860.
  18. Millan, B.; Velasco-Forero, S.; Aquino, A.; Tardaguila, J. On-the-Go Grapevine Yield Estimation Using Image Analysis and Boolean Model. J. Sens. 2018, 2018.
  19. Luo, L.; Tang, Y.; Zou, X.; Ye, M.; Feng, W.; Li, G. Vision-based extraction of spatial information in grape clusters for harvesting robots. Biosystems Eng. 2016, 151, 90–104.
  20. Luo, L.; Tang, Y.; Lu, Q.; Chen, X.; Zhang, P.; Zou, X. A vision methodology for harvesting robot to detect cutting points on peduncles of double overlapping grape clusters in a vineyard. Comput. Ind. 2018, 99, 130–139.
  21. Cubero, S.; Diago, M.P.; Blasco, J.; Tardáguila, J.; Prats-Montalbán, J.M.; Ibáñez, J.; Tello, J.; Aleixos, N. A new method for assessment of bunch compactness using automated image analysis. Aust. J. Grape Wine Res. 2015, 21, 101–109.
  22. Chen, X.; Ding, H.; Yuan, L.-M.; Cai, J.-R.; Chen, X.; Lin, Y. New approach of simultaneous, multi-perspective imaging for quantitative assessment of the compactness of grape bunches. Aust. J. Grape Wine Res. 2018, 24, 413–420.
  23. Diago, M.P.; Aquino, A.; Millan, B.; Palacios, F.; Tardaguila, J. On-the-go assessment of vineyard canopy porosity, bunch and leaf exposure by image analysis. Aust. J. Grape Wine Res. 2019, 25, 363–374.
  24. Luo, M.R. CIELAB. In Encyclopedia of Color Science and Technology; Springer: Berlin/Heidelberg, Germany, 2014; pp. 1–7.
  25. Dobson, A.J.; Barnett, A. An Introduction to Generalized Linear Models; Chapman and Hall/CRC: New York, NY, USA, 2008.
  26. Csurka, G.; Dance, C.; Fan, L.; Willamowski, J.; Bray, C. Visual categorization with bags of keypoints. In Proceedings of the Workshop on Statistical Learning in Computer Vision, ECCV, Prague, Czech Republic, 15 May 2004; pp. 1–2.
  27. Bay, H.; Tuytelaars, T.; Van Gool, L. SURF: Speeded up robust features. In Computer Vision – ECCV 2006; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2006; Volume 3951, pp. 404–417.
  28. Lloyd, S. Least squares quantization in PCM. IEEE Trans. Inf. Theory 1982, 28, 129–137.
  29. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297.
  30. Williams, C.K.; Rasmussen, C.E. Gaussian Processes for Machine Learning; MIT Press: Cambridge, MA, USA, 2006.
  31. Bradley, A.P. The use of the area under the ROC curve in the evaluation of machine learning algorithms. Pattern Recognit. 1997, 30, 1145–1159.
  32. Mockus, J.; Tiesis, V.; Zilinskas, A. The application of Bayesian methods for seeking the extremum. In Towards Global Optimization; Elsevier: Amsterdam, The Netherlands, 2014; pp. 117–129.
  33. Jones, D.R. A Taxonomy of Global Optimization Methods Based on Response Surfaces. J. Glob. Optim. 2001, 21, 345–383.
Figure 1. Mobile sensing platform for on-the-go image acquisition: a modified all-terrain vehicle (ATV) incorporating a red, green, blue (RGB) camera, Global Positioning System (GPS), and an artificial illumination system mounted on an adaptable structure.
Figure 2. Examples of clusters with different compactness ratings according to the International Organization of Vine and Wine (OIV) 204 standard: class 1 (a) very loose clusters; class 3 (b) loose clusters; class 5 (c) medium compact clusters; class 7 (d) compact clusters; class 9 (e) very compact clusters.
Figure 3. Flow-chart of the full algorithm for a new set of images. First, a set of manually labeled pixels was required to train a multinomial logistic regression model to segment the whole image set. Second, cluster candidates were extracted and filtered using a bag-of-visual-words model. Finally, compactness features were extracted, and the estimation was performed on each cluster by the Gaussian process regression model.
Figure 4. Initial semi-supervised segmentation. A set of pixels was manually labeled on the original images (a) into seven predefined classes (“grape”, “rachis”, “trunk”, “shoot”, “leaf”, “gap”, or “trellis”), and the multinomial logistic regression model segmented the whole set of images (b). Some pixels of elements without a predefined class were misclassified (e.g., yellow leaves identified as “rachis”, dry leaves identified as “shoot”, or ground identified as “trunk”). The pixels classified as “grape” and marked in white (c) were used for identifying cluster candidates.
Figure 5. Cluster candidates’ extraction and filtering. (a) Bounding boxes were extracted from connected components of grape pixels and (b) filtered using the bag-of-visual-words (BOVW) model to estimate the cluster compactness of the final non-filtered regions.
Figure 6. Extraction of the clusters’ final masks for compactness estimation. (a) Extracted cluster sub-image, (b) its corresponding segmentation, and (c) a cluster mask obtained using “grape” and “rachis” pixels and morphological operations.
Figure 7. Performance of the GPR model on the test set (100 clusters); correlation between the cluster compactness estimation performed by the model and the OIV ratings (reference method) evaluated visually by the panel of experts.
Figure 8. Performance of the GPR model performing leave-one-out cross validation (LOOCV) on the whole data (195 clusters); correlation between the cluster compactness estimation performed by the model and the OIV ratings (reference method) evaluated visually by the panel of experts.
Figure 9. Examples of cluster occlusions and overlapping in commercial vineyards limiting compactness estimation. Cluster partially occluded by leaves (a), multiple overlapped clusters (c), and final segmented masks used for compactness estimation (b,d).
Table 1. Hyperparameter range considered for each classifier and the used final values.
Classifier | Kernel Function (fixed) | Optimized Hyperparameter | Search Range | Final Value
SVM | Radial basis function (RBF) | Box constraint | [10⁻³, 10³] | 1.4654
SVM | Radial basis function (RBF) | Kernel scale | [10⁻³, 10³] | 24.628
GPR | Exponential | Sigma | [10⁻⁴, 22.5184] | 0.83194
GPR | Exponential | Kernel scale | [0.1216, 121.6122] | 91.5821
SVM: support vector machine; GPR: Gaussian process regression.
Table 2. Performance results of each multinomial logistic regression model for segmenting images in their respective dataset using a 10-fold stratified cross validation on the manually labeled pixel sets in terms of sensitivity, specificity, F1 score, area under the receiver operating characteristic (ROC) curve (AUC) and intersect over union (IoU) metrics. Their average was compared with the performance of a single segmentation with all datasets combined using a 5-fold and a 10-fold stratified cross validation.
Canopy Class | T16 | G17 | S18 | CS18 | T18 | Average | 5-Fold CV | 10-Fold CV
Sensitivity
Trellis | 0.9420 | 0.8760 | 0.9500 | 0.9200 | 0.9760 | 0.9328 | 0.5284 | 0.6700
Gap | 0.9740 | 0.9900 | 0.9920 | 0.9960 | 0.9940 | 0.9892 | 0.9868 | 0.9880
Leaf | 0.9760 | 0.9340 | 0.9060 | 0.9680 | 0.9700 | 0.9508 | 0.7476 | 0.8980
Shoot | 0.9440 | 0.9520 | 0.9880 | 1.0000 | 0.9900 | 0.9748 | 0.8992 | 0.9208
Rachis | 0.8640 | 0.9020 | 0.9220 | 0.9660 | 0.9580 | 0.9224 | 0.6912 | 0.7964
Trunk | 0.9020 | 0.9560 | 0.9540 | 0.9660 | 0.9980 | 0.9552 | 0.3620 | 0.6992
Grape | 0.9320 | 0.9720 | 0.9700 | 0.9540 | 0.9800 | 0.9616 | 0.7652 | 0.8732
Specificity
Trellis | 0.9897 | 0.9883 | 0.9930 | 0.9883 | 0.9970 | 0.9913 | 0.9554 | 0.9648
Gap | 0.9950 | 0.9987 | 0.9967 | 0.9990 | 0.9977 | 0.9974 | 0.9955 | 0.9963
Leaf | 0.9960 | 0.9893 | 0.9897 | 0.9960 | 0.9947 | 0.9931 | 0.9796 | 0.9829
Shoot | 0.9900 | 0.9963 | 0.9983 | 0.9997 | 0.9987 | 0.9966 | 0.9335 | 0.9789
Rachis | 0.9813 | 0.9850 | 0.9810 | 0.9927 | 0.9947 | 0.9869 | 0.9181 | 0.9673
Trunk | 0.9803 | 0.9820 | 0.9923 | 0.9933 | 0.9987 | 0.9893 | 0.9151 | 0.9423
Grape | 0.9900 | 0.9907 | 0.9960 | 0.9927 | 0.9963 | 0.9931 | 0.9661 | 0.9751
F1 Score
Trellis | 0.9401 | 0.9003 | 0.9538 | 0.9246 | 0.9789 | 0.9396 | 0.5884 | 0.7123
Gap | 0.9721 | 0.9910 | 0.9861 | 0.9950 | 0.9900 | 0.9868 | 0.9801 | 0.9831
Leaf | 0.9760 | 0.9349 | 0.9207 | 0.9719 | 0.9690 | 0.9545 | 0.7996 | 0.8978
Shoot | 0.9421 | 0.9645 | 0.9890 | 0.9990 | 0.9910 | 0.9771 | 0.7826 | 0.8996
Rachis | 0.8745 | 0.9056 | 0.9057 | 0.9612 | 0.9628 | 0.9220 | 0.6334 | 0.7993
Trunk | 0.8931 | 0.9264 | 0.9540 | 0.9631 | 0.9950 | 0.9463 | 0.3869 | 0.6836
Grape | 0.9357 | 0.9586 | 0.9729 | 0.9550 | 0.9790 | 0.9602 | 0.7773 | 0.8634
AUC
Trellis | 0.9658 | 0.9322 | 0.9715 | 0.9542 | 0.9865 | 0.9620 | 0.7419 | 0.8174
Gap | 0.9845 | 0.9943 | 0.9943 | 0.9975 | 0.9958 | 0.9933 | 0.9912 | 0.9922
Leaf | 0.9860 | 0.9617 | 0.9478 | 0.9820 | 0.9823 | 0.9720 | 0.8636 | 0.9405
Shoot | 0.9670 | 0.9742 | 0.9932 | 0.9998 | 0.9943 | 0.9857 | 0.9164 | 0.9499
Rachis | 0.9227 | 0.9435 | 0.9515 | 0.9793 | 0.9763 | 0.9547 | 0.8047 | 0.8818
Trunk | 0.9412 | 0.9690 | 0.9732 | 0.9797 | 0.9983 | 0.9723 | 0.6386 | 0.8207
Grape | 0.9610 | 0.9813 | 0.9830 | 0.9733 | 0.9882 | 0.9774 | 0.8656 | 0.9241
IoU
Trellis | 0.8870 | 0.8187 | 0.9117 | 0.8598 | 0.9587 | 0.8872 | 0.4169 | 0.5532
Gap | 0.9456 | 0.9821 | 0.9725 | 0.9901 | 0.9803 | 0.9741 | 0.9610 | 0.9667
Leaf | 0.9531 | 0.8778 | 0.8531 | 0.9453 | 0.9399 | 0.9139 | 0.6661 | 0.8146
Shoot | 0.8906 | 0.9315 | 0.9782 | 0.9980 | 0.9821 | 0.9561 | 0.6428 | 0.8175
Rachis | 0.7770 | 0.8275 | 0.8276 | 0.9253 | 0.9283 | 0.8571 | 0.4635 | 0.6657
Trunk | 0.8068 | 0.8628 | 0.9120 | 0.9288 | 0.9901 | 0.9001 | 0.2399 | 0.5193
Grape | 0.8792 | 0.9205 | 0.9473 | 0.9138 | 0.9589 | 0.9239 | 0.6358 | 0.7596
Table 3. Performance results of the support vector machine classifier for cluster detection validated with the external set and performing a 5-fold cross validation on the whole data for several k values of k-means tested in terms of sensitivity, specificity, F1 score, and AUC metrics.
Set | k | Sensitivity | Specificity | F1 Score | AUC
Test Set | k = 10 | 0.760 | 0.660 | 0.724 | 0.751
Test Set | k = 50 | 0.738 | 0.770 | 0.750 | 0.828
Test Set | k = 100 | 0.765 | 0.795 | 0.777 | 0.865
Test Set | k = 150 | 0.750 | 0.770 | 0.758 | 0.841
Test Set | k = 200 | 0.720 | 0.765 | 0.737 | 0.848
5-Fold CV | k = 10 | 0.811 | 0.678 | 0.761 | 0.821
5-Fold CV | k = 50 | 0.804 | 0.788 | 0.798 | 0.884
5-Fold CV | k = 100 | 0.821 | 0.781 | 0.805 | 0.903
5-Fold CV | k = 150 | 0.790 | 0.805 | 0.796 | 0.888
5-Fold CV | k = 200 | 0.799 | 0.813 | 0.804 | 0.902
CV: cross validation.

Palacios, F.; Diago, M.P.; Tardaguila, J. A Non-Invasive Method Based on Computer Vision for Grapevine Cluster Compactness Assessment Using a Mobile Sensing Platform under Field Conditions. Sensors 2019, 19, 3799. https://doi.org/10.3390/s19173799
