Review

Object Detection and Recognition Techniques Based on Digital Image Processing and Traditional Machine Learning for Fruit and Vegetable Harvesting Robots: An Overview and Review

College of Engineering and Technology, Northeast Forestry University, Harbin 150040, China
* Author to whom correspondence should be addressed.
Agronomy 2023, 13(3), 639; https://doi.org/10.3390/agronomy13030639
Submission received: 4 January 2023 / Revised: 18 February 2023 / Accepted: 19 February 2023 / Published: 23 February 2023
(This article belongs to the Special Issue Agricultural Unmanned Systems: Empowering Agriculture with Automation)

Abstract

The accuracy, speed, and robustness of object detection and recognition are directly related to the harvesting efficiency, quality, and speed of fruit and vegetable harvesting robots. In order to explore the development status of object detection and recognition techniques for fruit and vegetable harvesting robots based on digital image processing and traditional machine learning, this article summarizes and analyzes some representative methods. This article also demonstrates the current challenges and future potential developments. This work aims to provide a reference for future research on object detection and recognition techniques for fruit and vegetable harvesting robots based on digital image processing and traditional machine learning.

1. Introduction

Fruit harvesting is an important aspect of farming that directly affects the yield and profitability of cultivation. With the increasing scale of global cultivation (e.g., global annual production of fruits and vegetables such as tomato, citrus, apple, and strawberry has reached 182 million tons [1], 89 million tons [2], 86 million tons [3], and 9 million tons [4], respectively), the tension between the heavy labor demands of traditional production methods and growing labor shortages has become increasingly pronounced. The labor cost of fruit and vegetable harvesting has reached 30–50% of the total production cost [5,6,7,8,9]. Fruit and vegetable harvesting robots have therefore attracted broad attention in the agricultural field (as shown in Figure 1) because of their high productivity and low production cost [10,11]. As shown in Figure 2, a series of harvesting robots targeting typical fruits and vegetables such as plums [12], apples [13,14,15,16], sweet peppers [17,18,19], strawberries [6,7,20], litchis [21], tomatoes [22,23], and kiwifruits [24] have been developed and applied in greenhouses and orchards. Fruit and vegetable harvesting robots have entered a critical period in the progression from laboratory research to industrial applications.
Object detection and recognition is a core function of the vision system of a fruit and vegetable harvesting robot; its accuracy, speed, and robustness directly determine harvesting efficiency, quality, and speed. Vision systems of harvesting robots vary with the picking target, and are mainly characterized by the imaging sensor used and the specific crop visual information to be extracted. Black-and-white (monochrome), RGB, spectral, and thermal cameras (as shown in Table 1) are widely used in harvesting robots to obtain the color, shape, texture, and size of fruits in a specific operational area. Different processes of object detection and recognition of fruits and vegetables are shown in Figure 3. Many researchers have conducted extensive and in-depth research on object detection and recognition techniques for fruit and vegetable harvesting robots based on digital image processing and traditional machine learning. The research can be subdivided into the following aspects:
(1) Techniques based on digital image processing, such as color feature (RGB (Red, Green, Blue) [25,26,27,28], HSV (Hue, Saturation, Value) [29,30,31], HSI (Hue, Saturation, Intensity) [32,33,34], Lab (Lightness, green–red and blue–yellow axes) [33,35,36], HSB (Hue, Saturation, Brightness), YCbCr)-based methods, shape feature-based methods [37,38,39,40,41,42,43,44,45,46], texture feature-based methods [44,47,48,49,50,51,52], and multi-feature fusion-based methods [17,28,39,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67].
(2) Image segmentation and classifiers based on traditional machine learning, such as K-means clustering algorithm-based methods [68,69,70,71,72,73,74,75], SVM (Support Vector Machine) algorithm-based methods [54,57,69,73,76,77,78,79,80,81,82,83,84], KNN (K Nearest Neighbor) algorithm-based methods [36,85,86,87,88,89,90,91], AdaBoost (Adaptive Boosting) algorithm-based methods [62,92,93,94,95,96,97,98,99], decision tree algorithm-based methods [100,101,102,103,104,105,106,107], and Bayesian algorithm-based methods [108,109,110,111,112,113].
This article provides an overview and review of the progress in object detection and recognition techniques for fruit and vegetable harvesting robots based on digital image processing and traditional machine learning. Although there have been some reviews of techniques for object detection and recognition of fruits and vegetables [114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135], the contributions of this work are to: (1) systematically summarize object detection and recognition techniques of fruit and vegetable harvesting robots based on digital image processing and traditional machine learning in recent years; (2) systematically analyze the advantages, disadvantages, and applicability of various techniques; and (3) demonstrate the current challenges and future potential developments. Through this clearer and more comprehensive overview and review, we aim to provide a reference for future research on object detection and recognition techniques of fruit and vegetable harvesting robots based on digital image processing and traditional machine learning.
The outline of this overview and review is shown in Figure 4. The organization of this paper is as follows: in Section 2, we provide an overview and review of the research and development in object detection and recognition techniques of fruits and vegetables based on digital image processing. We present separate discussions focused on color, shape, and texture feature-based methods and on multi-feature fusion-based methods.
In Section 3, we provide an overview and review of the research and development in object detection and recognition techniques of fruits and vegetables based on traditional machine learning. We present separate discussions focused on K-means clustering, SVM, KNN, AdaBoost, decision tree, and Bayesian algorithm-based methods.
Section 4 extends our discussion to the challenges and further research of object detection and recognition techniques of fruits and vegetables. A summary of findings and conclusions is presented in Section 5.

2. Techniques Based on Digital Image Processing

Colors, shapes, and textures are important features used by fruit and vegetable harvesting robots for detecting and recognizing target objects. Many researchers have conducted extensive and in-depth research on object detection and recognition techniques of fruits and vegetables based on color features (RGB [25,26,27,28], HSV [29,30,31], HSI [32,33,34], Lab [33,35,36], HSB, YCbCr), shape features [37,38,39,40,41,42,43,44,45,46], texture features [44,47,48,49,50,51,52], and multi-feature fusion [17,28,39,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67] (as shown in Figure 5). Table 2 compares the results reported by different researchers and presents an analysis of the advantages, disadvantages, and applicability of the various techniques.

2.1. Techniques Based on Color Features

Mature fruits and vegetables usually have significant and stable color features, which provide a useful set of indicators for detection and recognition. Object detection and recognition techniques of fruits and vegetables based on color features extract color features through the Color Histogram, Color Set, Color Moment, and Color Coherence Vector. These techniques are mainly applicable to cases where the colors of fruits and vegetables differ significantly from the backgrounds (branches, leaves, trunks), such as tomatoes [28,31], apples [29,35], mangoes [34], bananas, cherries, citrus, prunes, and strawberries.
Goel and Sehgal [28] detected and recognized several ripening stages of tomatoes using RGB image information. This research has a positive implication for selecting the best ripening stage of fruits and vegetables. For example, fruits and vegetables that need to be transported over long distances can be harvested at an early stage of ripeness.
Zemmour et al. [26] analyzed different color spaces. The results showed that evaluating different color spaces is important because, for different kinds of fruits and vegetables, one color space may be superior to the others. In order to improve the accuracy of the detection and recognition of tomatoes, marigold flowers, and apples, Malik et al. [31], Sethy et al. [30], and Yu et al. [29], respectively, converted RGB images into the HSV color space and then separated the image luminance channel. Ratprakhon et al. [34] converted RGB images into the HSI color space to detect and recognize the ripeness of mangoes. Tan et al. [36] and Biffi et al. [35], respectively, converted RGB images into the Lab color space to detect and recognize blueberries and apples. Zemmour et al. [26] suggested that the Lab color space is better suited to low-quality images because it is more robust to image noise. In challenging color conditions (for example, where fruit and vegetable colors are similar to the backgrounds), other features could be considered to improve the effectiveness of object detection and recognition for fruit and vegetable harvesting robots.
The detection and recognition time of fruits and vegetables based on color features is relatively long. In order to shorten it, Yang et al. [25] proposed an Otsu thresholding method based on the two-times-Red-minus-Green-minus-Blue (2R-G-B) color feature to segment images. Lv et al. [27] processed the R-channel and G-channel images of orchard apple RGB images using an Adaptive Gamma Correction method, which not only shortened the detection and recognition time but also mitigated the influence of changing lighting conditions. Zemmour et al. [26] proposed an automatic parameter tuning procedure developed specifically for a dynamic adaptive thresholding algorithm for object detection and recognition of fruits and vegetables; the thresholds were selected by quantifying the required relationship between the true and false positive rates.
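To make the color-index idea concrete, the following is a minimal sketch of 2R-G-B thresholding with Otsu's method using OpenCV and NumPy. The file name, morphology kernel, and color index details are illustrative assumptions, not the exact setup of [25].

```python
import cv2
import numpy as np

def segment_red_fruit(bgr_image):
    """Segment reddish fruit pixels with a 2R-G-B color index and Otsu's threshold.

    Rough sketch of the color-index approach; the post-processing in [25] may differ.
    """
    bgr = bgr_image.astype(np.int32)
    # 2R - G - B is large for ripe red fruit and small or negative for
    # green foliage and brown branches.
    index = 2 * bgr[:, :, 2] - bgr[:, :, 1] - bgr[:, :, 0]
    index = np.clip(index, 0, 255).astype(np.uint8)
    # Otsu's method picks the threshold separating the fruit/background histogram.
    _, mask = cv2.threshold(index, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Morphological opening removes small speckles from the binary mask.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

if __name__ == "__main__":
    image = cv2.imread("apple_orchard.jpg")  # hypothetical input image
    cv2.imwrite("fruit_mask.png", segment_red_fruit(image))
```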
In general, techniques for object detection and recognition of fruits and vegetables based on color features are less dependent on image size. However, the variability and uncertainty of fruit and vegetable maturity can affect the accuracy, speed, and robustness of detection and recognition. These techniques are mainly applicable to structured environments such as greenhouses.

2.2. Techniques Based on Shape Features

Mature fruits and vegetables usually have significant and stable shape features, and geometric shape provides another set of indicators for detection and recognition. Techniques for object detection and recognition of fruits and vegetables based on shape features extract shape features using the Boundary Feature Method, Fourier Shape Descriptor, Shape Factor, and Shape Moment Invariants. These techniques are mainly applied to cases where the shapes of fruits and vegetables differ significantly from the backgrounds. For example, apples and citrus are usually round compared with the branches and leaves, while a cucumber has an elongated shape (as shown in Figure 6).
For round fruits, Hannan et al. [45] detected and recognized fruits in clusters by shape analysis. This method can better detect and recognize target objects in changing lighting conditions. Jana and Parekh [42] proposed a shape-based fruit detection and recognition method. It involves a pre-processing step to normalize a fruit image with respect to variations in translation, rotation, and scaling, and utilizes features that do not change due to varying distances, growth stages, or surface appearances of fruits. The method was applied to 210 images of 7 fruit classes. The overall recognition accuracy ranged from 88 to 95%. Lu et al. [39] proposed a new shape analysis method called Hierarchical Contour Analysis (HCA). The hierarchical contour maps around each local maximum were extracted and fitted with Circular Hough Transform, and the fitted circles were predicted as fruit targets if their radii were in a predetermined range. The HCA can effectively utilize shape information, and does not need to extract and analyze the edge in an image. Therefore, it is efficient and robust under various lighting conditions and occlusions in natural environments. Lin et al. [37] also proposed a method for the detection and recognition of fruits and vegetables based on shape features. The research results showed that the method is competitive for detecting most kinds (such as green, orange, circular, and non-circular) of fruits and vegetables in natural environments.
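As an illustration of the circle-fitting step that several of these shape-based methods build on (e.g., the contour fitting with the Circular Hough Transform in [39]), the following is a minimal OpenCV sketch. The radius range and accumulator parameters are assumptions that would need tuning per crop and camera, and this is a simplified sketch rather than the HCA pipeline itself.

```python
import cv2
import numpy as np

def detect_round_fruits(bgr_image, min_radius=20, max_radius=80):
    """Detect roughly circular fruit candidates with the Circular Hough Transform.

    Simplified sketch: real systems combine this with color/texture cues
    to reject false circles produced by leaves and branches.
    """
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)  # suppress noise before edge accumulation
    circles = cv2.HoughCircles(
        gray,
        cv2.HOUGH_GRADIENT,
        dp=1.2,              # inverse accumulator resolution
        minDist=min_radius,  # minimum distance between detected centers
        param1=100,          # upper Canny edge threshold
        param2=40,           # accumulator threshold: lower -> more (noisier) circles
        minRadius=min_radius,
        maxRadius=max_radius,
    )
    return [] if circles is None else np.round(circles[0]).astype(int)  # (x, y, r) rows

if __name__ == "__main__":
    image = cv2.imread("green_citrus.jpg")  # hypothetical orchard image
    for x, y, r in detect_round_fruits(image):
        cv2.circle(image, (x, y), r, (0, 0, 255), 2)
    cv2.imwrite("detected_fruits.png", image)
```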
Since the shapes of fruits and vegetables are usually not affected by the colors, object detection and recognition techniques of fruits and vegetables based on shape features are more effective for cases where the colors of fruits and vegetables are similar to the backgrounds, while the shapes of fruits and vegetables are significantly different from the backgrounds, such as green citrus [37,40,44], green apples [38,43,46], cucumbers, green peppers, and watermelons.
In general, techniques for object detection and recognition of fruits and vegetables based on shape features are less dependent on lighting conditions. However, in unstructured environments, the randomness of fruit and vegetable growth can affect the accuracy, speed, and robustness of detection and recognition of fruits and vegetables. These techniques are mainly applicable to natural orchards with certain agricultural operations.

2.3. Techniques Based on Texture Features

Mature fruits and vegetables usually have significant and stable texture features, and their surfaces are usually smoother than the backgrounds. Texture features therefore provide another set of indicators for detection and recognition. Techniques for object detection and recognition of fruits and vegetables based on texture features extract texture features through the GLCM (Grey Level Co-Occurrence Matrix), Tamura texture features, SAR (Simultaneous Auto-Regression), the Gabor transform, and the Wavelet transform. These techniques are mainly applicable to cases where the textures of fruits and vegetables differ significantly from the backgrounds, such as apples [52], bitter melons [51], citrus [44], papayas [110], and pineapples [51].
Trey et al. [49] used leaf texture features as parameters for plant family detection and recognition; the results showed that the method achieves a perfect classification of three plant families of the Ivorian flora. Rahman et al. [47] detected and recognized tomato leaf diseases using 13 statistical features calculated from leaf images with the GLCM algorithm, implemented as a cell phone application. The method achieved an accuracy of 100% for healthy leaves, 95% for early blight, 90% for Septoria leaf spot, and 85% for late blight.
Since the surface textures of fruits and vegetables are usually not affected by the colors and shapes, techniques for object detection and recognition of fruits and vegetables based on texture features are more effective for cases where the colors and shapes of fruits and vegetables are similar to the backgrounds, while the textures of fruits and vegetables are significantly different from the backgrounds. Kurtulmus et al. [44] used circular Gabor texture analysis for the detection and recognition of green citrus. The method detected and recognized target fruits by scanning the whole image, but the correct rate was only 75.3%. To improve the accuracy of detection and recognition of fruits and vegetables, Chaivivatrakul and Dailey [51] proposed a texture-based feature detection and recognition method for green fruits. The method involves interest point feature extraction and descriptor computation, interest point classification using support vector machines, candidate fruit point mapping, and morphological closing and fruit region extraction. This approach can effectively improve the correct rate of detection and recognition of green fruits (more than 85%). In addition, Hameed et al. [48] proposed a texture-based latent space disentanglement method to enhance the learning of representations for novel data samples.
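To make the GLCM step concrete, the following is a minimal sketch of extracting co-occurrence texture features with scikit-image. The distances, angles, quantization, and chosen properties are illustrative assumptions rather than the exact feature sets of [47] or [51].

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # greycomatrix/greycoprops in older scikit-image

def glcm_features(gray_patch, levels=32):
    """Compute a small GLCM texture descriptor for one grayscale image patch.

    Sketch only: the distance/angle set and the properties are illustrative.
    """
    # Quantize to fewer gray levels so the co-occurrence matrix stays small.
    patch = (gray_patch.astype(np.float64) / 256.0 * levels).astype(np.uint8)
    glcm = graycomatrix(
        patch,
        distances=[1, 3],                               # pixel offsets
        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
        levels=levels,
        symmetric=True,
        normed=True,
    )
    props = ("contrast", "homogeneity", "energy", "correlation")
    # Average each property over distances and angles -> 4-element feature vector.
    return np.array([graycoprops(glcm, p).mean() for p in props])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_patch = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in for a fruit patch
    print(glcm_features(fake_patch))
```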
In general, the main problem of techniques for object detection and recognition of fruits and vegetables based on texture features is that changing lighting conditions and complex backgrounds can affect the accuracy, speed, and robustness of detection and recognition. These techniques are mainly applicable to greenhouse environments.

2.4. Techniques Based on Multi-Feature Fusion

Techniques for object detection and recognition of fruits and vegetables based on a single kind of feature can recognize fruits in natural environments, but they usually have certain limitations. Techniques that fuse two or more features can effectively improve the accuracy, speed, and robustness of detection and recognition [59,92,95,136,137,138,139].
In terms of color and shape features, Liu et al. [60] proposed a method for the detection and recognition of incomplete red apples (as shown in Figure 7); the results are shown in Figure 8. The method can detect not only apples but also other fruits whose colors differ from the backgrounds, such as oranges, kiwifruits, and tomatoes. However, the method only localizes fruits with rectangular boxes; pixel-wise segmentation is more accurate than detection boxes, so recognizing fruits at the pixel level could be the focus of further work. Arad et al. [17] and Liu et al. [58] extracted color features from the RGB color channels of fruit and vegetable images and extracted morphological features from the detected fruit and vegetable borders using morphological operations, and then detected and recognized bell peppers, grapefruits, and peaches.
In terms of color and texture features, to solve segmentation problems, Lin and Zou [62] proposed a new segmentation method using color and texture features. This method incorporates HSV color features and Leung–Malik texture features to detect citrus using fixed-size sub-windows. Madgi and Danti [63] classified fruits and vegetables based on color features and GLCM texture features. The research results showed that the combination of color with GLCM texture features is more effective than combined color and LBP texture features.
In terms of shape and texture features, Lu et al. [39], Mustaffa et al. [61], and Bhargava and Bansal [54] recognized fruits and vegetables by shape features including area, perimeter, and roundness, and constructed fruit and vegetable textures based on local binary patterns. Finally, they classified green citrus, multi-species durians, and multi-species apples.
In terms of color, shape, and texture features, Rakun et al. [52] achieved apple detection and recognition under uneven lighting conditions, partial fruit shading, and a similar background by combining color, shape, and texture features. Basavaiah and Anthony [56] proposed a detection and recognition method based on color, shape, and texture features for a variety of tomato diseases. Azarmdel et al. [57] and Septiarini et al. [53], respectively, achieved the detection and recognition of mulberries and oil palms based on multiple features such as color, shape, and texture features.
Currently, digital image processing techniques used for the detection and recognition of fruits and vegetables generally require setting thresholds on color, shape, and texture features, but the optimal thresholds often vary from image to image. To address this problem, Payne et al. [66] proposed RGB and YCbCr color segmentation combined with texture segmentation based on the variability of neighboring pixels to divide pixels into target fruit and background pixels with high accuracy. However, this method relies heavily on the color features of images, and its recognition accuracy is low when the color features are not obvious. For this reason, Payne et al. [65] extended the earlier algorithm by reducing the reliance on color features through boundary-constrained mean and edge detection filters and increasing the use of texture filtering; the results showed that recognition accuracy improved significantly over the earlier version. Yamamoto et al. [64] used a multi-feature fusion method to avoid the tedious step of setting thresholds for each image and to improve the accuracy of detection and recognition.
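As a concrete illustration of multi-feature fusion, the following sketch concatenates color, shape, and texture descriptors for one candidate region into a single feature vector that any classifier can consume. The specific descriptors (hue–saturation histogram, Hu moments, GLCM properties) and their sizes are illustrative choices, not those of any single cited method.

```python
import cv2
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def fused_feature_vector(bgr_patch):
    """Concatenate color, shape, and texture descriptors for one candidate region."""
    hsv = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2HSV)
    # Color: a coarse hue-saturation histogram, normalized for scale invariance.
    color = cv2.calcHist([hsv], [0, 1], None, [8, 4], [0, 180, 0, 256]).flatten()
    color /= color.sum() + 1e-9

    gray = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2GRAY)
    # Shape: Hu moments of the binarized patch (rotation/scale invariant).
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    shape = cv2.HuMoments(cv2.moments(binary)).flatten()

    # Texture: a few averaged GLCM properties on a quantized patch.
    quant = (gray // 8).astype(np.uint8)  # 32 gray levels
    glcm = graycomatrix(quant, [1], [0, np.pi / 2], levels=32, symmetric=True, normed=True)
    texture = np.array([graycoprops(glcm, p).mean()
                        for p in ("contrast", "homogeneity", "energy", "correlation")])

    return np.concatenate([color, shape, texture])  # feed this to any classifier

if __name__ == "__main__":
    patch = cv2.imread("candidate_region.png")  # hypothetical cropped candidate
    print(fused_feature_vector(patch).shape)
```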

3. Image Segmentation and Classifiers Based on Machine Learning

Since machine learning can derive laws from sample data that can hardly be summarized by theoretical analysis, many researchers have conducted extensive and in-depth research on techniques for object detection and recognition of fruits and vegetables based on the K-means clustering algorithm [68,69,70,71,72,73,74,75], SVM algorithm [54,57,69,73,76,77,78,79,80,81,82,83,84], KNN algorithm [36,85,86,87,88,89,90,91], AdaBoost algorithm [62,92,93,94,95,96,97,98,99], decision tree algorithm [100,101,102,103,104,105,106,107], and Bayesian algorithm [108,109,110,111,112,113] (as shown in Figure 9 and Figure 10). Table 3 compares the results reported by different researchers and presents an analysis of the advantages, disadvantages, and applicability of the various techniques.
In general, compared with techniques based on digital image processing, techniques based on traditional machine learning have improved the speed, accuracy, and robustness of the detection and recognition of fruits and vegetables to different degrees. However, techniques based on traditional machine learning are sensitive to abnormal input data. Various parameters need to be set in advance before training, and the final classification performance depends on these settings; some parameters are also affected by changing lighting conditions, which makes tuning more complicated. At the same time, the current mainstream image segmentation approaches and classifiers based on traditional machine learning are often solutions for specific scenes, so they usually lack generality. They are less effective for multi-class problems and are mainly applicable to the detection and recognition of a single species in greenhouse environments.

3.1. Techniques Based on K-Means Clustering Algorithm

The K-means clustering algorithm is a widely used unsupervised learning method that automatically partitions input data into clusters according to the distances between samples and cluster centroids. Techniques for object detection and recognition of fruits and vegetables based on the K-means clustering algorithm are widely used. Wang et al. [75] proposed a litchi detection and recognition algorithm based on K-means clustering; the results showed that the method is robust to changing lighting conditions, with the highest average recognition rates of un-occluded and partially occluded litchi reaching 98.8% and 97.5%, respectively. Luo et al. [72] proposed a K-means clustering-based method for detecting and recognizing the cutting points of double-overlapping grape clusters for harvesting robots in a complex vineyard environment; the recognition accuracy of the overlapping grape clusters was 88.33%, and the success rate of detecting the cutting points on the peduncles was 81.66%. Jiao et al. [70] also proposed a fast detection and localization method for overlapping apples based on K-means clustering and a local maximum algorithm.
In order to further resist the effect of changing lighting conditions, Wang et al. [74] improved the wavelet transform and used the K-means clustering algorithm to segment target images. The method not only accurately segments fruits with different colors, but also maintains high accuracy for the detection and recognition of fruits under changing lighting conditions.
In order to exclude interference information in images as much as possible, Luo et al. [72] used the K-means clustering algorithm to obtain a complete closed target image region after segmentation, denoising, and filling operations on the captured image. To obtain more feature information of target fruits, Moallem et al. [73] applied the K-means clustering algorithm to the Cb component in the YCbCr color space, achieved defect segmentation using a Multi-Layer Perceptron (MLP) neural network, and then extracted statistical, textural, and geometric features from the refined defect regions. Although the classification accuracy of this method is high, its weaknesses are clear: first, the number of clusters K must be specified in advance, which is difficult; second, the randomly initialized centroids can strongly affect the classification results.
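As an illustration of the clustering-based segmentation described above, the following is a minimal sketch that clusters pixels in the Lab color space with K-means and keeps the cluster closest to a reference fruit color. The value of K, the reference color, and the choice of color space are illustrative assumptions, not the settings used in the cited studies.

```python
import cv2
import numpy as np

def kmeans_fruit_mask(bgr_image, k=3, fruit_lab=(150, 180, 160)):
    """Cluster pixels in Lab space and keep the cluster nearest a reference fruit color.

    Rough sketch: real pipelines add denoising, hole filling, and
    lighting compensation around this clustering step.
    """
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
    samples = lab.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    # 5 restarts with k-means++ initialization to reduce sensitivity to seeding.
    _, labels, centers = cv2.kmeans(samples, k, None, criteria, 5, cv2.KMEANS_PP_CENTERS)
    # Pick the cluster whose center is closest to the assumed fruit color.
    ref = np.array(fruit_lab, dtype=np.float32)
    target = int(np.argmin(np.linalg.norm(centers - ref, axis=1)))
    return (labels.reshape(lab.shape[:2]) == target).astype(np.uint8) * 255

if __name__ == "__main__":
    image = cv2.imread("litchi_cluster.jpg")  # hypothetical field image
    cv2.imwrite("kmeans_mask.png", kmeans_fruit_mask(image))
```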
In general, these techniques do not require labels and can automatically separate target objects from backgrounds according to the distances between input data. The advantages of techniques for object detection and recognition of fruits and vegetables based on the K-means clustering algorithm are therefore short computation time, fast response, and good clustering performance (especially when the clusters are dense and well separated). The disadvantages are sensitivity to abnormal data and the strong influence of the randomly selected initial centroids, and of the choice of K, on the classification results.

3.2. Techniques Based on SVM Algorithm

The SVM algorithm is a widely used supervised learning method, commonly applied to linear/nonlinear regression analysis and pattern classification. It achieves classification by solving for the separating hyperplane that correctly partitions the training set with the largest geometric margin. Techniques for object detection and recognition of fruits and vegetables based on the SVM algorithm are widely used. Bhargava and Bansal [54], Patel and Chaudhari [78], Singh and Singh [82], and Moallem et al. [73] compared the performance of different classifiers (SVM, KNN, etc.) for the detection and recognition of different fruits and vegetables; in their studies, the SVM classifier performed better than the other classifiers.
To improve the cooperative capability of fruit and vegetable harvesting robots, Sepúlveda et al. [77] implemented a cooperative operation between the arms of a two-armed eggplant harvesting robot based on the SVM algorithm. To address the problems of local occlusions, irregular shapes, and high similarity to backgrounds, Ji et al. [81] proposed a green pepper recognition method based on a least-squares support vector machine optimized by the improved particle swarm optimization (IPSO-LSSVM). The research results showed that the recognition rate of green peppers was 89.04%, and the average recognition time was 320 ms. This approach meets the requirements of accuracy and time of greenhouse green pepper harvesting robots.
To further improve the accuracy, speed, and robustness of detection and recognition of fruits and vegetables, Yang et al. [80] also proposed an image segmentation method for Hangzhou white chrysanthemum based on the least-square support vector machine (LS-SVM). The research results showed that the trained LS-SVM model and SVM model could effectively segment the images of Hangzhou white chrysanthemum from complicated backgrounds under three lighting conditions, namely, front lighting, back lighting, and overshadowing, with an accuracy of above 90%. When segmenting an image, the SVM algorithm required 1.3 s, while the proposed LS-SVM algorithm needed just 0.7 s. In addition, the implementation of the proposed segmentation algorithm on the harvesting robot achieved an 81% harvesting success rate.
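As a concrete illustration of the generic SVM classification step underlying these methods (not the LS-SVM variants above), the following scikit-learn sketch trains an RBF-kernel SVM on hand-crafted feature vectors such as the color/shape/texture descriptors discussed in Section 2. The synthetic data, kernel, and hyperparameters are assumptions for illustration only.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for hand-crafted descriptors of candidate regions;
# 1 = fruit, 0 = background.
rng = np.random.default_rng(42)
features = rng.normal(size=(400, 43))
labels = (features[:, 0] + 0.5 * features[:, 1] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0
)

# Feature scaling matters for RBF SVMs; the pipeline keeps it with the model.
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
svm.fit(X_train, y_train)
print(f"held-out accuracy: {svm.score(X_test, y_test):.3f}")
```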
In general, the advantages of techniques for object detection and recognition of fruits and vegetables based on the SVM algorithm are that they simplify classification and regression problems and generalize well to data outside the training set. At the same time, they can cope with the small samples of target fruits available in natural environments, and mapping to a high-dimensional space does not increase the computational complexity. Therefore, these techniques can effectively segment fruit and vegetable images containing many highlight points. The disadvantages are that they are sensitive to the tuning of the algorithm parameters and the selection of the kernel function, which must be reselected for each new dataset. In addition, they perform well on binary classification tasks but are less effective for multi-class problems.

3.3. Techniques Based on KNN Algorithm

The KNN algorithm is a widely used supervised learning method, commonly applied to classification and regression. It classifies an unknown feature vector into the most common class among its K nearest neighbors in the training set. Techniques for object detection and recognition of fruits and vegetables based on the KNN algorithm are widely used. Based on the KNN algorithm, Tan et al. [36], Astuti et al. [90], Suban et al. [89], Sarimole and Rosiana [85], and Sarimole and Fadillah [86] detected and recognized the ripeness of blueberries, oil palms, papayas, betel nuts, and pomegranates, respectively.
Tanco et al. [91] studied the detection and recognition of fruits and vegetables using three types of classifiers (SVM, KNN, and decision tree); the results showed that the KNN algorithm produced the best detection and recognition results. Ghazal et al. [88] trained and tested six supervised machine learning methods (SVM, KNN, decision tree, Bayesian, Linear Discriminant Analysis, and feed-forward back propagation neural network) on the publicly available Fruits 360 dataset; the KNN-based methods achieved relatively high classification accuracy.
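To illustrate the KNN classification compared in these studies, the following is a minimal scikit-learn sketch that evaluates a few values of K on the same hand-crafted features; the synthetic data and the candidate K values are assumptions for illustration.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for ripeness descriptors (e.g., color histograms) with
# three ripeness classes; replace with real extracted features.
rng = np.random.default_rng(1)
features = rng.normal(size=(300, 20))
labels = rng.integers(0, 3, size=300)
labels = np.where(features[:, 0] > 0.5, 2, labels)  # inject some class structure

# K controls the bias/variance trade-off discussed in the text:
# small K overfits, large K oversmooths.
for k in (1, 3, 5, 11):
    knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=k))
    scores = cross_val_score(knn, features, labels, cv=5)
    print(f"K={k:2d}  mean CV accuracy: {scores.mean():.3f}")
```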
In general, techniques based on the KNN algorithm classify a sample by measuring the distance between feature vectors and voting among the K nearest neighbors. Their advantages are high classification accuracy, relative insensitivity to abnormal data, and the absence of assumptions about the input data. However, choosing a reasonable value of K is tedious. With a small K, model complexity is high, overfitting is likely, the estimation error increases, and predictions are very sensitive to the nearest neighboring instances. With a larger K, model complexity and estimation error decrease, which suits small datasets, but the approximation error increases. The disadvantages are the large computational effort and high time and space complexity. Moreover, the detection and recognition accuracy for fruits and vegetables is easily affected by growth environments and lighting conditions.

3.4. Techniques Based on AdaBoost Algorithm

The AdaBoost algorithm is a widely used supervised learning method. It is commonly used in two-class problems, multi-class single-label problems, multi-class multi-label problems, large-class single-label problems, and regression problems. Different classifiers (weak classifiers) are trained using the same training set, and then these weak classifiers are pooled to form a stronger final classifier (strong classifier). Techniques for object detection and recognition of fruits and vegetables based on the AdaBoost algorithm are widely used. Kumar et al. [93] introduced a novel plant species classifier based on the extraction of morphological features using a Multilayer Perceptron with the AdaBoost algorithm. In addition, they tested the classification accuracy of different classifiers, such as KNN, decision tree, and the Multilayer Perceptron. The research results showed that a precision rate of 95.42% was achieved using the proposed machine learning classifier, which is one of the state-of-the-art algorithms.
Ling et al. [94] proposed a tomato detection method combining an AdaBoost classifier and color analysis, and applied it to a harvesting robot. The results showed that the ripe-tomato detection success rate was about 95%, and 5% of the ripe tomatoes were missed because of occluding leaves; when the leaf occlusion area exceeds 50% of the tomato area, the target tomato might not be detected. The method is also robust and can cope with environmental factors such as changing lighting conditions and partial occlusions and overlaps. Its speed of about 10 fps is sufficient for the harvesting robot to operate in real time.
To further cope with challenges such as changing lighting conditions, cluttered backgrounds, and cluster occlusions, Lin and Zou [62] also proposed a novel segmentation method using the AdaBoost classifier and texture–color features. The research results showed that the method achieved a precision of 0.867 and recall of 0.768. However, the method may over-segment images because the LM filter bank tends to be influenced by illumination changes. A possible solution is to investigate an illumination invariant version of an LM filter bank.
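As a concrete sketch of boosting weak classifiers into a stronger one, the following scikit-learn example trains an AdaBoost ensemble of shallow decision stumps on candidate-region features. The synthetic data, base learner depth, and number of estimators are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for texture-color descriptors of candidate sub-windows;
# 1 = tomato, 0 = background.
rng = np.random.default_rng(7)
features = rng.normal(size=(500, 30))
labels = ((features[:, 0] * features[:, 1]) > 0).astype(int)  # non-linear boundary

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.3, random_state=0
)

# Each weak learner is a depth-1 "decision stump"; boosting reweights the
# training samples so later stumps focus on previously misclassified ones.
ada = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(max_depth=1),  # base_estimator= in scikit-learn < 1.2
    n_estimators=200,
    learning_rate=0.5,
    random_state=0,
)
ada.fit(X_train, y_train)
print(f"held-out accuracy: {ada.score(X_test, y_test):.3f}")
```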
In general, the advantages of techniques for object detection and recognition of fruits and vegetables based on the AdaBoost algorithm are that they can use different classification algorithms as weak classifiers and make good use of weak classifiers for cascading, with high detection and recognition accuracy. The disadvantages are that during the training process, the AdaBoost algorithm will cause the weight of difficult samples to exponentially increase, and the training will be biased towards such difficult samples, which makes the AdaBoost algorithm vulnerable to noise interference. In addition, the AdaBoost algorithm relies on weak classifiers, which often have a long training time.

3.5. Techniques Based on Decision Tree Algorithm

The decision tree algorithm is a widely used supervised learning method, commonly used in decision-making problems. Classification starts from the root node: the corresponding features of the item to be classified are tested and the output branches are selected according to their values until a leaf node is reached, whose stored category is taken as the decision result. Wajid et al. [105] investigated the applicability and performance of various classification algorithms including Naïve Bayes, Artificial Neural Networks, and decision trees; the results showed that the decision tree classifier performs better than the other methods for orange detection, with an accuracy, precision, and sensitivity of 93.13%, 93.45%, and 93.24%, respectively. In addition, to investigate implementation cost relative to classification performance, Kuang et al. [103] compared two types of machine learning algorithms (a multivariate alternating decision tree and deep-learning-based kiwifruit classifiers); the results showed that traditional decision tree classifiers can achieve comparable classification performance at a fraction of the cost.
Ma et al. [104] proposed a segmentation method based on a decision tree which is constructed by a two-step coarse-to-fine procedure. Firstly, a coarse decision tree is built by the CART (Classification and Regression Tree) algorithm with a feature subset. The feature subset consists of color features that are selected by Pearson’s Rank correlations. Then, the coarse decision tree is optimized by pruning. Using the optimized decision tree, segmentation of images is achieved by conducting pixel-wise classification. Abd al karim and Karim [100] also proposed a decision tree classifier to classify fruit types. The Fruits 360 dataset was used, where 70% of the dataset was used in the training phase and 30% was used in the testing phase. Chen et al. [102] proposed a classification method for kernel and impurity particles using the decision tree algorithm.
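In the spirit of the CART-style pixel classification in [104] (though not a reproduction of that method), the following minimal scikit-learn sketch trains a pruned decision tree on per-pixel color features; the synthetic data and the pruning parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for per-pixel color features (e.g., R, G, B, H, S, V);
# 1 = fruit pixel, 0 = background pixel.
rng = np.random.default_rng(3)
pixels = rng.uniform(0, 255, size=(2000, 6))
labels = ((2 * pixels[:, 0] - pixels[:, 1] - pixels[:, 2]) > 60).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    pixels, labels, test_size=0.3, random_state=0
)

# scikit-learn's DecisionTreeClassifier uses a CART-style algorithm; limiting
# depth and leaf size plays the role of the pruning step that curbs overfitting.
tree = DecisionTreeClassifier(max_depth=4, min_samples_leaf=20, random_state=0)
tree.fit(X_train, y_train)
print(f"held-out accuracy: {tree.score(X_test, y_test):.3f}")
print(export_text(tree, feature_names=["R", "G", "B", "H", "S", "V"]))
```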
In general, the advantages of techniques for object detection and recognition of fruits and vegetables based on the decision tree algorithm are that they enumerate the full range of feasible solutions to the decision problem, along with the expected value of each feasible solution in various states. They can visually show the decision process of the whole problem at each stage and in the decision sequence. When applied to a complex multi-stage decision-making problem, the stages are obvious and the hierarchy is clear, so that various factors can be thoroughly considered, which is conducive to making the right decision. The disadvantages are that they are prone to overfitting and do not perform well on data with strongly correlated features. In addition, for data with an imbalanced number of samples per category, the decision tree results are biased toward features with more values.

3.6. Techniques Based on Bayesian Algorithm

The Bayesian algorithm is a widely used supervised learning method that classifies by minimizing Bayesian risk, minimizing the probability of error, or maximizing the posterior probability. It is commonly used on large-scale databases because it offers high accuracy and computational speed on large amounts of data, is robust to isolated noise points, and requires only a small training set to estimate the parameters needed for classification.
Kusuma and Setiadi [113] proposed a classification method using feature histogram extraction and a Naïve Bayes classifier for tomato recognition. In addition, Sari et al. [110] proposed a classification method for papaya types based on leaf images using a Naïve Bayes classifier and LBP feature extraction. In the research of Reyes et al. [108], the method based on the Bayesian algorithm, together with off-the-shelf hardware, made it possible to classify cherries optimally in real time to meet international fruit quality standards.
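As a minimal illustration of Bayesian classification on hand-crafted descriptors, the following scikit-learn sketch fits a Gaussian Naïve Bayes model; the synthetic histogram-like features and class labels are assumptions for illustration, not data from the cited studies.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in for per-sample feature histograms (e.g., color or LBP
# histograms) with three quality classes.
rng = np.random.default_rng(5)
class_means = np.array([0.2, 0.5, 0.8])
labels = rng.integers(0, 3, size=600)
features = rng.normal(loc=class_means[labels, None], scale=0.15, size=(600, 16))

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.3, random_state=0
)

# GaussianNB estimates per-class priors and per-feature Gaussian likelihoods,
# then assigns the class with the maximum posterior probability.
nb = GaussianNB()
nb.fit(X_train, y_train)
print(f"class priors: {np.round(nb.class_prior_, 3)}")
print(f"held-out accuracy: {nb.score(X_test, y_test):.3f}")
```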
In general, the advantages of techniques for object detection and recognition of fruits and vegetables based on Bayesian algorithms are the simplicity of recognition and classification processes, the fast response time, the better performance for small-scale data, the ability to handle multiple classification tasks, and the suitability for incremental training. The disadvantage is that the prior probabilities need to be calculated. Furthermore, the recognition performance is affected by the fact that the prior probabilities depend on the target image features. In addition, the recognition function may fail for data (variable features) that do not appear in the training set.

4. Challenges and Further Research

As summarized and reviewed in this article, various techniques for object detection and recognition of fruits and vegetables, each with their own pros and cons, have been investigated in the past. However, it is difficult to find studies reporting the absolute accuracy of each technique and comparisons of performance between those techniques in the same environment.
Therefore, open publishing of all reference datasets and all code is necessary. Some frequently used image databases of fruits and vegetables are shown in Table 4. As much as possible, further research should be carried out based on these open datasets to help compare different techniques. Moreover, the international community might consider continually providing and updating quality reference datasets.
In addition, there are many factors leading to the low accuracy, slow speed, and poor robustness of object detection and recognition of fruit and vegetable harvesting robots. They can be summarized into the following aspects: (1) similar backgrounds; (2) clustered/partially occluded/swaying fruits; (3) sensitivity to changing lighting conditions; (4) night image recognition; (5) blur and noise in images; (6) high computation time and real-time limitations; and (7) generalization ability. To be more specific:
(1) Object detection and recognition of fruits and vegetables require fast response capability to improve the harvesting efficiency. The current mainstream object detection and recognition techniques based on digital image processing and traditional machine learning have certain limitations, although they may have good accuracy performance. In complex environments, influenced by many factors such as changing lighting conditions and growth states of fruits, the more factors the method considers, the more complex the method, and the longer the running computation time. This will lead to low real-time performance for vision systems.
(2) When fruit and vegetable harvesting robots work, they can only detect and recognize target objects covered by the pre-trained model, while in practice there is often more than one kind of target object to be harvested. In addition, because of the obvious seasonality and timeliness of fruit harvesting, harvesting robots are used only during the harvesting season and sit idle for the rest of the year, which weakens their economics. Therefore, the generalization ability of the algorithms still needs to be enhanced to achieve the detection and recognition of multiple kinds of fruits and vegetables. Future research could make the algorithms generalizable (i.e., able to recognize fruits with similar characteristics after learning from one kind of target object). In addition, night image recognition algorithms may be required so that harvesting robots can work during the day and continue at night.
(3) Object detection and recognition of fruits and vegetables require the detection and recognition of clustered/partially occluded/swaying fruits. However, the presence of clustered/partially occluded/swaying parts may cause confusion in images, which is currently a greater challenge for detection and recognition in unstructured environments. A popular method is the Circular Hough Transform, which is more effective for round objects such as apples, oranges, and tomatoes. However, research results showed that this method is not only prone to false positives generated by the contours of other objects, such as leaves, but also has a long computation time. Another popular method is to use a blowing device to avoid leaf occlusions and to move adjacent fruits to one side. However, this method will increase the weight of end-effectors of harvesting robots, and may not be applicable to all kinds of crops. Future research could focus on agricultural operations, including tree pruning and pollination methods, to improve the visibility of target fruits, which may help to improve detection and recognition accuracy.
As summarized and reviewed in this article, methods based on multi-feature fusion and the SVM algorithm achieve a better accuracy rate in addressing these challenges. Furthermore, methods based on multi-algorithm fusion should be paid more attention. In addition, further research should focus on solving these challenges and improving the accuracy, speed, robustness, and generalization of vision systems, while reducing the overall complexity and cost. The optimization of network models, the accuracy of sensing systems, multi-sensor data fusion, fault-tolerant computing of machine vision, and decision making using a big data cloud platform may be key breakthroughs for further techniques for object detection and recognition of fruits and vegetables.

5. Conclusions

The intelligent harvesting robot is one of the most important artificial intelligence (AI) robots used for fruit and vegetable harvesting in modern agriculture. An excellent vision system can greatly enhance the environmental perception ability of a harvesting robot. However, current vision systems of harvesting robots still cannot fully meet the requirements of commercialization. This article summarizes and reviews the progress in developing techniques for object detection and recognition of fruit and vegetable harvesting robots based on digital image processing and traditional machine learning. Although previous reviews of techniques for object detection and recognition of fruits and vegetables have been published, the contributions of this work are: (1) a systematic summary of the techniques developed in recent years for object detection and recognition of fruit and vegetable harvesting robots based on digital image processing and traditional machine learning; (2) a systematic analysis of the advantages, disadvantages, and applicability of various techniques; and (3) a demonstration of the current challenges and future potential developments. Through this clearer and more comprehensive overview and review, we aim to provide a reference for future research on techniques for object detection and recognition of fruit and vegetable harvesting robots based on digital image processing and traditional machine learning.
The current challenges of techniques for object detection and recognition of fruits and vegetables are mainly the similar backgrounds, clustered/partially occluded/swaying fruits, sensitivity to changing lighting conditions, night image recognition, blur and noise in images, high computation time and real-time limitations, and generalization ability.
Techniques for object detection and recognition of fruit and vegetable harvesting robots based on digital image processing can be subdivided into color feature (RGB, HSV, HSI, Lab, HSB, YCbCr)-based methods, shape feature-based methods, texture feature-based methods, and multi-feature fusion-based methods.
As summarized and reviewed in this article, techniques based on digital image processing require precise information about the target fruit features, which are usually used for object detection and recognition of fruits and vegetables based on features such as colors, shapes, and textures. However, in complex environments, these features of the target objects are affected by non-controllable factors, resulting in low accuracy, slow speed, and poor robustness of object detection and recognition of fruits and vegetables. Methods based on multi-feature fusion can improve the accuracy and robustness of object detection and recognition of fruits and vegetables. However, it is important to determine which features to integrate; for example, Lab color space could be used more for low-quality images because it is more robust to noise in images. In addition, the combination of color with GLCM texture features has proven to be more effective than combined color and LBP texture features.
Object detection and recognition techniques of fruit and vegetable harvesting robots based on traditional machine learning can be subdivided into K-means clustering algorithm-based methods, SVM algorithm-based methods, KNN algorithm-based methods, AdaBoost algorithm-based methods, decision tree algorithm-based methods, and Bayesian algorithm-based methods.
In general, techniques based on traditional machine learning perform well, but they require various parameters to be set in advance, and these settings have a large impact on recognition accuracy. For classifiers, prior probabilities need to be obtained from the training set in advance, and classification accuracy is affected by the weights of difficult-to-classify samples. As summarized and reviewed in this article, methods based on the SVM algorithm achieve a better accuracy rate. However, the current mainstream image segmentation approaches and classifiers based on traditional machine learning are often solutions for specific scenes; they usually lack generality, are less effective for multi-class problems, and are mainly applicable to the detection and recognition of a single species in greenhouse environments. Methods based on multi-algorithm fusion should receive more attention; this may be a breakthrough for future techniques for object detection and recognition of fruits and vegetables.
Further research into and development of techniques for object detection and recognition for fruit and vegetable harvesting robots are necessary. Commercial applications of harvesting robots need to be further addressed through integrated horticultural and engineering approaches for improved image segmentation, and for increased overall performance of crop detection and recognition.

Author Contributions

Conceptualization, F.X. and Y.C.; methodology, F.X. and Y.C.; analysis, F.X.; investigation, F.X., Y.C., X.L., G.X. and H.W.; resources, F.X., H.W. and Y.L.; data curation, F.X.; writing—original draft preparation, F.X.; writing—review and editing, F.X., H.W. and Y.L.; visualization, F.X.; supervision, H.W. and Y.L.; project administration, F.X., H.W. and Y.L.; funding acquisition, H.W. and Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Natural Science Foundation of Heilongjiang Province of China (LH2020C047), Northeast Forestry University Foundation (2572022DP01) and China Postdoctoral Science Foundation (2019T120248, 2017M611338).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Tomato Production. 2020. Available online: https://ourworldindata.org/grapher/tomato-production (accessed on 1 October 2022).
  2. FAOSTAT. Available online: https://www.fao.org/faostat/en/#data/QCL (accessed on 1 October 2022).
  3. Apple Production. 2020. Available online: https://ourworldindata.org/grapher/apple-production (accessed on 1 October 2022).
  4. Strawberry—Wikipedia. Available online: https://en.wikipedia.org/wiki/Strawberry (accessed on 1 October 2022).
  5. Zhang, K.; Lammers, K.; Chu, P.; Li, Z.; Lu, R. System Design and Control of an Apple Harvesting Robot. Mechatronics 2021, 79, 102644.
  6. Xiong, Y.; Ge, Y.; Grimstad, L.; From, P.J. An Autonomous Strawberry-Harvesting Robot: Design, Development, Integration, and Field Evaluation. J. Field Robot. 2020, 37, 202–224.
  7. Xiong, Y.; Peng, C.; Grimstad, L.; From, P.J.; Isler, V. Development and Field Evaluation of a Strawberry Harvesting Robot with a Cable-Driven Gripper. Comput. Electron. Agric. 2019, 157, 392–402.
  8. Anjom, F.K.; Vougioukas, S.G.; Slaughter, D.C. Development of a Linear Mixed Model to Predict the Picking Time in Strawberry Harvesting Processes. Biosyst. Eng. 2018, 166, 76–89.
  9. Silwal, A.; Davidson, J.R.; Karkee, M.; Mo, C.; Zhang, Q.; Lewis, K. Design, Integration, and Field Evaluation of a Robotic Apple Harvester. J. Field Robot. 2017, 34, 1140–1159.
  10. Wang, Z.; Xun, Y.; Wang, Y.; Yang, Q. Review of Smart Robots for Fruit and Vegetable Picking in Agriculture. Int. J. Agric. Biol. Eng. 2022, 15, 33–54.
  11. Zhou, H.; Wang, X.; Au, W.; Kang, H.; Chen, C. Intelligent Robots for Fruit Harvesting: Recent Developments and Future Challenges. Precis. Agric. 2022, 23, 1856–1907.
  12. Brown, J.; Sukkarieh, S. Design and Evaluation of a Modular Robotic Plum Harvesting System Utilizing Soft Components. J. Field Robot. 2021, 38, 289–306.
  13. Yan, B.; Fan, P.; Lei, X.; Liu, Z.; Yang, F. A Real-Time Apple Targets Detection Method for Picking Robot Based on Improved YOLOv5. Remote Sens. 2021, 13, 1619.
  14. He, L.; Fu, H.; Karkee, M.; Zhang, Q. Effect of Fruit Location on Apple Detachment with Mechanical Shaking. Biosyst. Eng. 2017, 157, 63–71.
  15. Ji, W.; Zhao, D.; Cheng, F.; Xu, B.; Zhang, Y.; Wang, J. Automatic Recognition Vision System Guided for Apple Harvesting Robot. Comput. Electr. Eng. 2012, 38, 1186–1195.
  16. Zhao, D.; Lv, J.; Ji, W.; Zhang, Y.; Chen, Y. Design and Control of an Apple Harvesting Robot. Biosyst. Eng. 2011, 110, 112–122.
  17. Arad, B.; Balendonck, J.; Barth, R.; Ben-Shahar, O.; Edan, Y.; Hellström, T.; Hemming, J.; Kurtser, P.; Ringdahl, O.; Tielen, T.; et al. Development of a Sweet Pepper Harvesting Robot. J. Field Robot. 2020, 37, 1027–1039.
  18. Lehnert, C.; English, A.; McCool, C.; Tow, A.W.; Perez, T. Autonomous Sweet Pepper Harvesting for Protected Cropping Systems. IEEE Robot. Autom. Lett. 2017, 2, 872–879.
  19. Bac, C.W.; Hemming, J.; Van Henten, E.J. Stem Localization of Sweet-Pepper Plants Using the Support Wire as a Visual Cue. Comput. Electron. Agric. 2014, 105, 111–120.
  20. Hayashi, S.; Shigematsu, K.; Yamamoto, S.; Kobayashi, K.; Kohno, Y.; Kamata, J.; Kurita, M. Evaluation of a Strawberry-Harvesting Robot in a Field Test. Biosyst. Eng. 2010, 105, 160–171.
  21. Xiong, J.; He, Z.; Lin, R.; Liu, Z.; Bu, R.; Yang, Z.; Peng, H.; Zou, X. Visual Positioning Technology of Picking Robots for Dynamic Litchi Clusters with Disturbance. Comput. Electron. Agric. 2018, 151, 226–237.
  22. Feng, Q.; Zou, W.; Fan, P.; Zhang, C.; Wang, X. Design and Test of Robotic Harvesting System for Cherry Tomato. Int. J. Agric. Biol. Eng. 2018, 11, 96–100.
  23. Kondo, N.; Yata, K.; Iida, M.; Shiigi, T.; Monta, M.; Kurita, M.; Omori, H. Development of an End-Effector for a Tomato Cluster Harvesting Robot. Eng. Agric. Environ. Food 2010, 3, 20–24.
  24. Williams, H.A.M.; Jones, M.H.; Nejati, M.; Seabright, M.J.; Bell, J.; Penhall, N.D.; Barnett, J.J.; Duke, M.D.; Scarfe, A.J.; Ahn, H.S.; et al. Robotic Kiwifruit Harvesting Using Machine Vision, Convolutional Neural Networks, and Robotic Arms. Biosyst. Eng. 2019, 181, 140–156.
  25. Yang, Q.; Chen, C.; Dai, J.; Xun, Y.; Bao, G. Tracking and Recognition Algorithm for a Robot Harvesting Oscillating Apples. Int. J. Agric. Biol. Eng. 2020, 13, 163–170.
  26. Zemmour, E.; Kurtser, P.; Edan, Y. Automatic Parameter Tuning for Adaptive Thresholding in Fruit Detection. Sensors 2019, 19, 2130.
  27. Lv, J.; Wang, Y.; Xu, L.; Gu, Y.; Zou, L.; Yang, B.; Ma, Z. A Method to Obtain the Near-Large Fruit from Apple Image in Orchard for Single-Arm Apple Harvesting Robot. Sci. Hortic. 2019, 257, 108758.
  28. Goel, N.; Sehgal, P. Fuzzy Classification of Pre-Harvest Tomatoes for Ripeness Estimation—An Approach Based on Automatic Rule Learning Using Decision Tree. Appl. Soft Comput. 2015, 36, 45–56.
  29. Yu, X.; Fan, Z.; Wang, X.; Wan, H.; Wang, P.; Zeng, X.; Jia, F. A Lab-Customized Autonomous Humanoid Apple Harvesting Robot. Comput. Electr. Eng. 2021, 96, 107459.
  30. Sethy, P.K.; Routray, B.; Behera, S.K. Detection and Counting of Marigold Flower Using Image Processing Technique. In Advances in Computer, Communication and Control, 2nd ed.; Biswas, U., Banerjee, A., Pal, S., Biswas, A., Sarkar, D., Haldar, S., Eds.; Springer: Singapore, 2019; Volume 41, pp. 87–93.
  31. Malik, M.H.; Zhang, T.; Li, H.; Zhang, M.; Shabbir, S.; Saeed, A. Mature Tomato Fruit Detection Algorithm Based on Improved HSV and Watershed Algorithm. IFAC-Paper 2018, 51, 431–436.
  32. Muthukrishnan, V.; Ramasamy, S.; Damodaran, N. Disease Recognition in Philodendron Leaf Using Image Processing Technique. Environ. Sci. Pollut. Res. 2021, 28, 67321–67330.
  33. Nanehkaran, Y.A.; Zhang, D.; Chen, J.; Tian, Y.; Al-Nabhan, N. Recognition of Plant Leaf Diseases Based on Computer Vision. J. Ambient. Intell. Humaniz. Comput. 2020, 1–18.
  34. Ratprakhon, K.; Neubauer, W.; Riehn, K.; Fritsche, J.; Rohn, S. Developing an Automatic Color Determination Procedure for the Quality Assessment of Mangos (Mangifera Indica) Using a CCD Camera and Color Standards. Foods 2020, 9, 1709.
  35. Biffi, L.J.; Mitishita, E.A.; Liesenberg, V.; Centeno, J.A.S.; Schimalski, M.B.; Rufato, L. Evaluating the Performance of a Semi-Automatic Apple Fruit Detection in a High-Density Orchard System Using Low-Cost Digital RGB Imaging Sensor. Bull. Geod. Sci. 2021, 27, 1–20.
  36. Tan, K.; Lee, W.S.; Gan, H.; Wang, S. Recognising Blueberry Fruit of Different Maturity Using Histogram Oriented Gradients and Colour Features in Outdoor Scenes. Biosyst. Eng. 2018, 176, 59–72.
  37. Lin, G.; Tang, Y.; Zou, X.; Cheng, J.; Xiong, J. Fruit Detection in Natural Environment Using Partial Shape Matching and Probabilistic Hough Transform. Precis. Agric. 2020, 21, 160–177.
  38. Sun, S.; Jiang, M.; He, D.; Long, Y.; Song, H. Recognition of Green Apples in an Orchard Environment by Combining the GrabCut Model and Ncut Algorithm. Biosyst. Eng. 2019, 187, 201–213.
  39. Lu, J.; Lee, W.S.; Gan, H.; Hu, X. Immature Citrus Fruit Detection Based on Local Binary Pattern Feature and Hierarchical Contour Analysis. Biosyst. Eng. 2018, 171, 78–90.
  40. Zhuang, J.J.; Luo, S.M.; Hou, C.J.; Tang, Y.; He, Y.; Xue, X.Y. Detection of Orchard Citrus Fruits Using a Monocular Machine Vision-Based Method for Automatic Fruit Picking Applications. Comput. Electron. Agric. 2018, 152, 64–73.
  41. Oo, L.M.; Aung, N.Z. A Simple and Efficient Method for Automatic Strawberry Shape and Size Estimation and Classification. Biosyst. Eng. 2018, 170, 96–107.
  42. Jana, S.; Parekh, R. Shape-Based Fruit Recognition and Classification. In Proceedings of the International Conference on Computational Intelligence, Communications, and Business Analytics, Kolkata, India, 24–25 March 2017.
  43. Linker, R.; Cohen, O.; Naor, A. Determination of the Number of Green Apples in RGB Images Recorded in Orchards. Comput. Electron. Agric. 2012, 81, 45–47.
  44. Kurtulmus, F.; Lee, W.S.; Vardar, A. Green Citrus Detection Using ‘Eigenfruit’, Color and Circular Gabor Texture Features under Natural Outdoor Conditions. Comput. Electron. Agric. 2011, 78, 140–149.
  44. Kurtulmus, F.; Lee, W.S.; Vardar, A. Green Citrus Detection Using ‘Eigenfruit’, Color and Circular Gabor Texture Features under Natural Outdoor Conditions. Comput. Electron. Agric. 2011, 78, 140–149. [Google Scholar] [CrossRef]
  45. Hannan, M.W.; Burks, T.F.; Bulanon, D.M. A Machine Vision Algorithm Combining Adaptive Segmentation and Shape Analysis for Orange Fruit Detection. Agric. Eng. Int. CIGR J. 2009, XI, 1281. [Google Scholar]
  46. Safren, O.; Alchanatis, V.; Ostrovsky, V.; Levi, O. Detection of Green Apples in Hyperspectral Images of Apple-Tree Foliage Using Machine Vision. Trans. Am. Soc. Agric. Biol. Eng. 2007, 50, 2303–2313. [Google Scholar] [CrossRef]
  47. Rahman, S.U.; Alam, F.; Ahmad, N.; Arshad, S. Image Processing Based System for the Detection, Identification and Treatment of Tomato Leaf Diseases. Multimed. Tools Appl. 2022, 82, 9431–9445. [Google Scholar] [CrossRef]
  48. Hameed, K.; Chai, D.; Rassau, A. Texture-Based Latent Space Disentanglement for Enhancement of a Training Dataset for ANN-Based Classification of Fruit and Vegetables. Inf. Process. Agric. 2021, 10, 85–105. [Google Scholar] [CrossRef]
  49. Trey, Z.F.; Goore, B.T.; Bagui, K.O.; Tiebre, M.S. Classification of Plants into Families Based on Leaf Texture. Int. J. Comput. Sci. Netw. Secur. 2021, 21, 205–211. [Google Scholar] [CrossRef]
  50. Pulido, C.; Solaque, L.; Velasco, N. Weed Recognition by SVM Texture Feature Classification in Outdoor Vegetable Crops Images. Ing. E Investig. 2017, 37, 68–74. [Google Scholar] [CrossRef]
  51. Chaivivatrakul, S.; Dailey, M.N. Texture-Based Fruit Detection. Precis. Agric. 2014, 15, 662–683. [Google Scholar] [CrossRef]
  52. Rakun, J.; Stajnko, D.; Zazula, D. Detecting Fruits in Natural Scenes by Using Spatial-Frequency Based Texture Analysis and Multiview Geometry. Comput. Electron. Agric. 2011, 76, 80–88. [Google Scholar] [CrossRef]
  53. Septiarini, A.; Sunyoto, A.; Hamdani, H.; Kasim, A.A.; Utaminingrum, F.; Hatta, H.R. Machine Vision for the Maturity Classification of Oil Palm Fresh Fruit Bunches Based on Color and Texture Features. Sci. Hortic. 2021, 286, 110245. [Google Scholar] [CrossRef]
  54. Bhargava, A.; Bansal, A. Classification and Grading of Multiple Varieties of Apple Fruit. Food Anal. Methods 2021, 14, 1359–1368. [Google Scholar] [CrossRef]
  55. Yu, L.; Xiong, J.; Fang, X.; Yang, Z.; Chen, Y.; Lin, X.; Chen, S. A Litchi Fruit Recognition Method in a Natural Environment Using RGB-D Images. Biosyst. Eng. 2021, 204, 50–63. [Google Scholar] [CrossRef]
  56. Basavaiah, J.; Anthony, A.A. Tomato Leaf Disease Classification Using Multiple Feature Extraction Techniques. Wirel. Pers. Commun. 2020, 115, 633–651. [Google Scholar] [CrossRef]
  57. Azarmdel, H.; Jahanbakhshi, A.; Mohtasebi, S.S.; Muñoz, A.R. Evaluation of Image Processing Technique as an Expert System in Mulberry Fruit Grading Based on Ripeness Level Using Artificial Neural Networks (ANNs) and Support Vector Machine (SVM). Postharvest Biol. Technol. 2020, 166, 111201. [Google Scholar] [CrossRef]
  58. Liu, T.; Ehsani, R.; Toudeshki, A.; Zou, X.; Wang, H. Identifying Immature and Mature Pomelo Fruits in Trees by Elliptical Model Fitting in the Cr–Cb Color Space. Precis. Agric. 2019, 20, 138–156. [Google Scholar] [CrossRef]
  59. Wu, J.; Zhang, B.; Zhou, J.; Xiong, Y.; Gu, B.; Yang, X. Automatic Recognition of Ripening Tomatoes by Combining Multi-Feature Fusion with a Bi-Layer Classification Strategy for Harvesting Robots. Sensors 2019, 19, 612. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  60. Liu, X.; Zhao, D.; Jia, W.; Ji, W.; Sun, Y. A Detection Method for Apple Fruits Based on Color and Shape Features. IEEE Access 2019, 7, 67923–67933. [Google Scholar] [CrossRef]
  61. Mustaffa, M.R.; Yi, N.X.; Abdullah, L.N.; Nasharuddin, N.A. Durian Recognition Based on Multiple Features and Linear Discriminant Analysis. Malays. J. Comput. Sci. 2018, 57–72. [Google Scholar] [CrossRef]
  62. Lin, G.; Zou, X. Citrus Segmentation for Automatic Harvester Combined with AdaBoost Classifier and Leung-Malik Filter Bank. IFAC-Paper 2018, 51, 379–383. [Google Scholar] [CrossRef]
  63. Madgi, M.; Danti, A. An Enhanced Classification of Indian Vegetables Using Combined Color and Texture Features. Int. J. Comput. Eng. Appl. 2018, XII(III), 1–8. [Google Scholar]
  64. Yamamoto, K.; Guo, W.; Yoshioka, Y.; Ninomiya, S. On Plant Detection of Intact Tomato Fruits Using Image Analysis and Machine Learning Methods. Sensors 2014, 14, 12191–12206. [Google Scholar] [CrossRef] [Green Version]
  65. Payne, A.; Walsh, K.; Subedi, P.; Jarvis, D. Estimating Mango Crop Yield Using Image Analysis Using Fruit at ‘Stone Hardening’ Stage and Night Time Imaging. Comput. Electron. Agric. 2014, 100, 160–167. [Google Scholar] [CrossRef]
  66. Payne, A.B.; Walsh, K.B.; Subedi, P.P.; Jarvis, D. Estimation of Mango Crop Yield Using Image Analysis—Segmentation Method. Comput. Electron. Agric. 2013, 91, 57–64. [Google Scholar] [CrossRef]
  67. Stajnko, D.; Rakun, J.; Blanke, M. Modelling Apple Fruit Yield Using Image Analysis for Fruit Colour, Shape and Texture. Eur. J. Hortic. Sci. 2009, 74, 260–267. [Google Scholar]
  68. Fan, P.; Lang, G.; Guo, P.; Liu, Z.; Yang, F.; Yan, B.; Lei, X. Multi-Feature Patch-Based Segmentation Technique in the Gray-Centered RGB Color Space for Improved Apple Target Recognition. Agriculture 2021, 11, 273. [Google Scholar] [CrossRef]
  69. Habib, M.T.; Majumder, A.; Jakaria, A.Z.M.; Akter, M.; Uddin, M.S.; Ahmed, F. Machine Vision Based Papaya Disease Recognition. J. King Saud Univ. Comput. Inf. Sci. 2020, 32, 300–309. [Google Scholar] [CrossRef]
  70. Jiao, Y.; Luo, R.; Li, Q.; Deng, X.; Yin, X.; Ruan, C.; Jia, W. Detection and Localization of Overlapped Fruits Application in an Apple Harvesting Robot. Electronics 2020, 9, 1023. [Google Scholar] [CrossRef]
  71. Sun, S.; Song, H.; He, D.; Long, Y. An Adaptive Segmentation Method Combining MSRCR and Mean Shift Algorithm with K-Means Correction of Green Apples in Natural Environment. Inf. Process. Agric. 2019, 6, 200–215. [Google Scholar] [CrossRef]
  72. Luo, L.; Tang, Y.; Lu, Q.; Chen, X.; Zhang, P.; Zou, X. A Vision Methodology for Harvesting Robot to Detect Cutting Points on Peduncles of Double Overlapping Grape Clusters in a Vineyard. Comput. Ind. 2018, 99, 130–139. [Google Scholar] [CrossRef]
  73. Moallem, P.; Serajoddin, A.; Pourghassem, H. Computer Vision-Based Apple Grading for Golden Delicious Apples Based on Surface Features. Inf. Process. Agric. 2017, 4, 33–40. [Google Scholar] [CrossRef] [Green Version]
  74. Wang, C.; Tang, Y.; Zou, X.; SiTu, W.; Feng, W. A Robust Fruit Image Segmentation Algorithm against Varying Illumination for Vision System of Fruit Harvesting Robot. Optik 2017, 131, 626–631. [Google Scholar] [CrossRef]
  75. Wang, C.; Zou, X.; Tang, Y.; Luo, L.; Feng, W. Localisation of Litchi in an Unstructured Environment Using Binocular Stereo Vision. Biosyst. Eng. 2016, 145, 39–51. [Google Scholar] [CrossRef]
  76. Zhang, Z.; Zhou, J.; Yan, Z.; Wang, K.; Mao, J.; Jiang, Z. Hardness Recognition of Fruits and Vegetables Based on Tactile Array Information of Manipulator. Comput. Electron. Agric. 2021, 181, 105959. [Google Scholar] [CrossRef]
  77. Sepúlveda, D.; Fernández, R.; Navas, E.; Armada, M.; González-De-Santos, P. Robotic Aubergine Harvesting Using Dual-Arm Manipulation. IEEE Access 2020, 8, 121889–121904. [Google Scholar] [CrossRef]
  78. Patel, C.C.; Chaudhari, V.K. Comparative Analysis of Fruit Categorization Using Different Classifiers. Adv. Eng. Optim. Through Intell. Tech. 2020, 949, 153–164. [Google Scholar] [CrossRef]
  79. Dhakshina Kumar, S.; Esakkirajan, S.; Bama, S.; Keerthiveena, B. A Microcontroller Based Machine Vision Approach for Tomato Grading and Sorting Using SVM Classifier. Microprocess. Microsyst. 2020, 76, 103090. [Google Scholar] [CrossRef]
  80. Yang, Q.; Luo, S.; Chang, C.; Xun, Y.; Bao, G. Segmentation Algorithm for Hangzhou White Chrysanthemums Based on Least Squares Support Vector Machine. Int. J. Agric. Biol. Eng. 2019, 12, 127–134. [Google Scholar] [CrossRef]
  81. Ji, W.; Chen, G.; Xu, B.; Meng, X.; Zhao, D. Recognition Method of Green Pepper in Greenhouse Based on Least-Squares Support Vector Machine Optimized by the Improved Particle Swarm Optimization. IEEE Access 2019, 7, 119742–119754. [Google Scholar] [CrossRef]
  82. Singh, S.; Singh, N.P. Machine Learning-Based Classification of Good and Rotten Apple. In Recent Trends in Communication, Computing, and Electronics, 2nd ed.; Khare, A., Tiwary, U.S., Sethi, I.K., Singh, N., Eds.; Springer: Singapore, 2019; Volume 524, pp. 377–386. [Google Scholar] [CrossRef]
  83. Liu, G.; Mao, S.; Kim, J.H. A Mature-Tomato Detection Algorithm Using Machine Learning and Color Analysis. Sensors 2019, 19, 2023. [Google Scholar] [CrossRef] [Green Version]
  84. Lv, Q.; Cai, J.; Liu, B.; Deng, L.; Zhang, Y. Identification of Fruit and Branch in Natural Scenes for Citrus Harvesting Robot Using Machine Vision and Support Vector Machine. Int. J. Agric. Biol. Eng. 2014, 7, 115–121. [Google Scholar] [CrossRef]
  85. Sarimole, F.M.; Rosiana, A. Classification of Maturity Levels in Areca Fruit Based on HSV Image Using the KNN Method. J. Appl. Eng. Technol. Sci. 2022, 4, 64–73. [Google Scholar] [CrossRef]
  86. Sarimole, F.M.; Fadillah, M.I. Classification of Guarantee Fruit Murability Based on HSV Image With K-Nearest Neighbor. J. Appl. Eng. Technol. Sci. 2022, 4, 48–57. [Google Scholar] [CrossRef]
  87. Behera, S.K.; Rath, A.K.; Sethy, P.K. Maturity Status Classification of Papaya Fruits Based on Machine Learning and Transfer Learning Approach. Inf. Process. Agric. 2021, 8, 244–250. [Google Scholar] [CrossRef]
  88. Ghazal, S.; Qureshi, W.S.; Khan, U.S.; Iqbal, J.; Rashid, N.; Tiwana, M.I. Analysis of Visual Features and Classifiers for Fruit Classification Problem. Comput. Electron. Agric. 2021, 187, 106267. [Google Scholar] [CrossRef]
  89. Suban, I.B.; Paramartha, A.; Fortwonatus, M.; Santoso, A.J. Identification the Maturity Level of Carica Papaya Using the K-Nearest Neighbor. In Proceedings of the International Conference on Electronics Representation and Algorithm “Innovation and Transformation for Best Practices in Global Community”, Yogyakarta, Indonesia, 12–13 December 2019. [Google Scholar] [CrossRef]
  90. Astuti, I.F.; Nuryanto, F.D.; Widagdo, P.P.; Cahyadi, D. Oil Palm Fruit Ripeness Detection Using K-Nearest Neighbour. In Proceedings of the International Conference on Mathematics, Science and Computer Science, Balikpapan, Indonesia, 24 October 2018. [Google Scholar] [CrossRef]
  91. Tanco, M.M.; Tejera, G.; Martino, J.M.D. Computer Vision Based System for Apple Detection in Crops. In Proceedings of the International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, Funchal, Madeira, Portugal, 27–29 January 2018. [Google Scholar] [CrossRef]
  92. Tu, S.; Pang, J.; Liu, H.; Zhuang, N.; Chen, Y.; Zheng, C.; Wan, H.; Xue, Y. Passion Fruit Detection and Counting Based on Multiple Scale Faster R-CNN Using RGB-D Images. Precis. Agric. 2020, 21, 1072–1091. [Google Scholar] [CrossRef]
  93. Kumar, M.; Gupta, S.; Gao, X.Z.; Singh, A. Plant Species Recognition Using Morphological Features and Adaptive Boosting Methodology. IEEE Access 2019, 7, 163912–163918. [Google Scholar] [CrossRef]
  94. Ling, X.; Zhao, Y.; Gong, L.; Liu, C.; Wang, T. Dual-Arm Cooperation and Implementing for Robotic Harvesting Tomato Using Binocular Vision. Robot. Auton. Syst. 2019, 114, 134–143. [Google Scholar] [CrossRef]
  95. Fu, L.; Duan, J.; Zou, X.; Lin, G.; Song, S.; Ji, B.; Yang, Z. Banana Detection Based on Color and Texture Features in the Natural Environment. Comput. Electron. Agric. 2019, 167, 105057. [Google Scholar] [CrossRef]
  96. Wang, C.; Lee, W.S.; Zou, X.; Choi, D.; Gan, H.; Diamond, J. Detection and Counting of Immature Green Citrus Fruit Based on the Local Binary Patterns (LBP) Feature Using Illumination-Normalized Images. Precis. Agric. 2018, 19, 1062–1083. [Google Scholar] [CrossRef]
  97. Fernandes, A.; Utkin, A.; Eiras-Dias, J.; Silvestre, J.; Cunha, J.; Melo-Pinto, P. Assessment of Grapevine Variety Discrimination Using Stem Hyperspectral Data and AdaBoost of Random Weight Neural Networks. Appl. Soft Comput. 2018, 72, 140–155. [Google Scholar] [CrossRef]
  98. Luo, L.; Tang, Y.; Zou, X.; Wang, C.; Zhang, P.; Feng, W. Robust Grape Cluster Detection in a Vineyard by Combining the AdaBoost Framework and Multiple Color Components. Sensors 2016, 16, 2098. [Google Scholar] [CrossRef] [Green Version]
  99. Zhao, Y.; Gong, L.; Zhou, B.; Huang, Y.; Liu, C. Detecting Tomatoes in Greenhouse Scenes by Combining AdaBoost Classifier and Colour Analysis. Biosyst. Eng. 2016, 148, 127–137. [Google Scholar] [CrossRef]
  100. Abd al karim, M.H.; Karim, A.A. Using Texture Feature in Fruit Classification. Eng. Technol. J. 2021, 39, 67–79. [Google Scholar] [CrossRef]
  101. Abasi, S.; Minaei, S.; Jamshidi, B.; Fathi, D. Development of an Optical Smart Portable Instrument for Fruit Quality Detection. IEEE Trans. Instrum. Meas. 2021, 70, 1–9. [Google Scholar] [CrossRef]
  102. Chen, J.; Lian, Y.; Li, Y. Real-Time Grain Impurity Sensing for Rice Combine Harvesters Using Image Processing and Decision-Tree Algorithm. Comput. Electron. Agric. 2020, 175, 105591. [Google Scholar] [CrossRef]
  103. Kuang, Y.C.; Streeter, L.; Cree, M.J.; Ooi, M.P.L. Evaluation of Deep Neural Network and Alternating Decision Tree for Kiwifruit Detection. In Proceedings of the IEEE International Instrumentation and Measurement Technology Conference, Auckland, New Zealand, 20–23 May 2019. [Google Scholar] [CrossRef]
  104. Ma, J.; Du, K.; Zheng, F.; Zhang, L.; Sun, Z. A Segmentation Method for Processing Greenhouse Vegetable Foliar Disease Symptom Images. Inf. Process. Agric. 2019, 6, 216–223. [Google Scholar] [CrossRef]
  105. Wajid, A.; Singh, N.K.; Junjun, P.; Mughal, M.A. Recognition of Ripe, Unripe and Scaled Condition of Orange Citrus Based on Decision Tree Classification. In Proceedings of the International Conference on Computing, Mathematics and Engineering Technologies, Sukkur, Pakistan, 3–4 March 2018. [Google Scholar] [CrossRef]
  106. Ilic, M.; Ilic, S.; Jovic, S.; Panic, S. Early Cherry Fruit Pathogen Disease Detection Based on Data Mining Prediction. Comput. Electron. Agric. 2018, 150, 418–425. [Google Scholar] [CrossRef]
  107. Ishikawa, T.; Hayashi, A.; Nagamatsu, S.; Kyutoku, Y.; Dan, I.; Wada, T.; Oku, K.; Saeki, Y.; Uto, T.; Tanabata, T.; et al. Classification of Strawberry Fruit Shape by Machine Learning. In Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences Technical Commission II Mid-term Symposium “Towards Photogrammetry 2020”, Riva del Garda, Italy, 4–7 June 2018. [Google Scholar] [CrossRef] [Green Version]
  108. Reyes, J.F.; Contreras, E.; Correa, C.; Melin, P. Image Analysis of Real-Time Classification of Cherry Fruit from Colour Features. J. Agric. Eng. 2021, 52, 1–6. [Google Scholar] [CrossRef]
  109. Chithra, P.L.; Henila, M. Apple Fruit Sorting Using Novel Thresholding and Area Calculation Algorithms. Soft Comput. 2021, 25, 431–445. [Google Scholar] [CrossRef]
  110. Sari, C.A.; Puspa Sari, I.; Rachmawanto, E.H.; Rosal Ignatius Moses Setiadi, D.; Proborini, E.; Bijanto; Ali, R.R.; Rizqa, I. Papaya Fruit Type Classification Using LBP Features Extraction and Naive Bayes Classifier. In Proceedings of the International Seminar on Application for Technology of Information and Communication (iSemantic), Semarang, Indonesia, 19–20 September 2020. [Google Scholar] [CrossRef]
  111. Muhathir; Santoso, M.H.; Muliono, R. Analysis Naïve Bayes in Classifying Fruit by Utilizing Hog Feature Extraction. J. Inform. Telecommun. Eng. 2020, 4, 151–160. [Google Scholar] [CrossRef]
  112. Abdelghafour, F.; Rosu, R.; Keresztes, B.; Germain, C.; Da Costa, J.P. A Bayesian Framework for Joint Structure and Colour Based Pixel-Wise Classification of Grapevine Proximal Images. Comput. Electron. Agric. 2019, 158, 345–357. [Google Scholar] [CrossRef]
  113. Kusuma, A.; Setiadi, D.R.I.M.; Putra, M.D.M. Tomato Maturity Classification Using Naive Bayes Algorithm and Histogram Feature Extraction. J. Appl. Intell. Syst. 2018, 3, 39–48. [Google Scholar] [CrossRef] [Green Version]
  114. Lv, J.; Xu, H.; Xu, L.; Zou, L.; Rong, H.; Yang, B.; Niu, L.; Ma, Z. Recognition of Fruits and Vegetables with Similar-Color Background in Natural Environment: A Survey. J. Field Robot. 2022, 39, 888–904. [Google Scholar] [CrossRef]
  115. Ukwuoma, C.C.; Qin, Z.; Heyat, M.B.B.; Ali, L.; Almaspoor, Z.; Monday, H.N. Recent Advancements in Fruit Detection and Classification Using Deep Learning Techniques. Math. Probl. Eng. 2022, 2022, 9210947. [Google Scholar] [CrossRef]
  116. Li, Y.; Feng, Q.; Li, T.; Xie, F.; Liu, C.; Xiong, Z. Advance of Target Visual Information Acquisition Technology for Fresh Fruit Robotic Harvesting: A Review. Agronomy 2022, 12, 1366. [Google Scholar] [CrossRef]
  117. Aslam, F.; Khan, Z.; Tahir, A.; Parveen, K.; Albasheer, F.O.; Abrar, S.U.; Khan, D.M. A Survey of Deep Learning Methods for Fruit and Vegetable Detection and Yield Estimation. Big Data Anal. Comput. Intell. Cybersecur. 2022, 111, 299–323. [Google Scholar] [CrossRef]
  118. Li, Z.; Yuan, X.; Wang, C. A Review on Structural Development and Recognition–Localization Methods for End-Effector of Fruit–Vegetable Picking Robots. Int. J. Adv. Robot. Syst. 2022, 19, 172988062211049. [Google Scholar] [CrossRef]
  119. Darwin, B.; Dharmaraj, P.; Prince, S.; Popescu, D.E.; Hemanth, D.J. Recognition of Bloom/Yield in Crop Images Using Deep Learning Models for Smart Agriculture: A Review. Agronomy 2021, 11, 646. [Google Scholar] [CrossRef]
  120. Maheswari, P.; Raja, P.; Apolo-Apolo, O.E.; Pérez-Ruiz, M. Intelligent Fruit Yield Estimation for Orchards Using Deep Learning Based Semantic Segmentation Techniques—A Review. Front. Plant Sci. 2021, 12, 684328. [Google Scholar] [CrossRef]
  121. Bhargava, A.; Bansal, A. Fruits and Vegetables Quality Evaluation Using Computer Vision: A Review. J. King Saud Univ. Comput. Inf. Sci. 2021, 33, 243–257. [Google Scholar] [CrossRef]
  122. Saleem, M.H.; Potgieter, J.; Arif, K.M. Automation in Agriculture by Machine and Deep Learning Techniques: A Review of Recent Developments. Precis. Agric. 2021, 22, 2053–2091. [Google Scholar] [CrossRef]
  123. Tang, Y.; Chen, M.; Wang, C.; Luo, L.; Li, J.; Lian, G.; Zou, X. Recognition and Localization Methods for Vision-Based Fruit Picking Robots: A Review. Front. Plant Sci. 2020, 11, 510. [Google Scholar] [CrossRef]
  124. Jia, W.; Zhang, Y.; Lian, J.; Zheng, Y.; Zhao, D.; Li, C. Apple Harvesting Robot under Information Technology: A Review. Int. J. Adv. Robot. Syst. 2020, 17, 1729881420925310. [Google Scholar] [CrossRef]
  125. Tripathi, M.K.; Maktedar, D.D. A Role of Computer Vision in Fruits and Vegetables among Various Horticulture Products of Agriculture Fields: A Survey. Inf. Process. Agric. 2020, 7, 183–203. [Google Scholar] [CrossRef]
  126. Naranjo-Torres, J.; Mora, M.; Hernández-García, R.; Barrientos, R.J.; Fredes, C.; Valenzuela, A. A Review of Convolutional Neural Network Applied to Fruit Image Processing. Appl. Sci. 2020, 10, 3443. [Google Scholar] [CrossRef]
  127. Koirala, A.; Walsh, K.B.; Wang, Z.; McCarthy, C. Deep Learning—Method Overview and Review of Use for Fruit Detection and Yield Estimation. Comput. Electron. Agric. 2019, 162, 219–234. [Google Scholar] [CrossRef]
  128. Rehman, T.U.; Mahmud, M.S.; Chang, Y.K.; Jin, J.; Shin, J. Current and Future Applications of Statistical Machine Learning Algorithms for Agricultural Machine Vision Systems. Comput. Electron. Agric. 2019, 156, 585–605. [Google Scholar] [CrossRef]
  129. Shamshiri, R.R.; Weltzien, C.; Hameed, I.A.; Yule, I.J.; Grift, T.E.; Balasundram, S.K.; Pitonakova, L.; Ahmad, D.; Chowdhary, G. Research and Development in Agricultural Robotics: A Perspective of Digital Farming. Int. J. Agric. Biol. Eng. 2018, 11, 1–14. [Google Scholar] [CrossRef]
  130. Kamilaris, A.; Prenafeta-Boldú, F.X. Deep Learning in Agriculture: A Survey. Comput. Electron. Agric. 2018, 147, 70–90. [Google Scholar] [CrossRef] [Green Version]
  131. Zhu, N.; Liu, X.; Liu, Z.; Hu, K.; Wang, Y.; Tan, J.; Huang, M.; Zhu, Q.; Ji, X.; Jiang, Y.; et al. Deep Learning for Smart Agriculture: Concepts, Tools, Applications, and Opportunities. Int. J. Agric. Biol. Eng. 2018, 11, 32–44. [Google Scholar] [CrossRef]
  132. Iqbal, Z.; Khan, M.A.; Sharif, M.; Shah, J.H.; Rehman, M.H.U.; Javed, K. An Automated Detection and Classification of Citrus Plant Diseases Using Image Processing Techniques: A Review. Comput. Electron. Agric. 2018, 153, 12–32. [Google Scholar] [CrossRef]
  133. Hameed, K.; Chai, D.; Rassau, A. A Comprehensive Review of Fruit and Vegetable Classification Techniques. Image Vis. Comput. 2018, 80, 24–44. [Google Scholar] [CrossRef]
  134. Zhao, Y.; Gong, L.; Huang, Y.; Liu, C. A Review of Key Techniques of Vision-Based Control for Harvesting Robot. Comput. Electron. Agric. 2016, 127, 311–323. [Google Scholar] [CrossRef]
  135. Gongal, A.; Amatya, S.; Karkee, M.; Zhang, Q.; Lewis, K. Sensors and Systems for Fruit Detection and Localization: A Review. Comput. Electron. Agric. 2015, 116, 8–19. [Google Scholar] [CrossRef]
  136. Mao, S.; Li, Y.; Ma, Y.; Zhang, B.; Zhou, J.; Wang, K. Automatic Cucumber Recognition Algorithm for Harvesting Robots in the Natural Environment Using Deep Learning and Multi-Feature Fusion. Comput. Electron. Agric. 2020, 170, 105254. [Google Scholar] [CrossRef]
  137. Zhao, S.; Liu, J.; Wu, S. Multiple Disease Detection Method for Greenhouse-Cultivated Strawberry Based on Multiscale Feature Fusion Faster R_CNN. Comput. Electron. Agric. 2022, 199, 107176. [Google Scholar] [CrossRef]
  138. Wu, G.; Li, B.; Zhu, Q.; Huang, M.; Guo, Y. Using Color and 3D Geometry Features to Segment Fruit Point Cloud and Improve Fruit Recognition Accuracy. Comput. Electron. Agric. 2020, 174, 105475. [Google Scholar] [CrossRef]
  139. Lin, G.; Tang, Y.; Zou, X.; Xiong, J.; Fang, Y. Color-, Depth-, and Shape-Based 3D Fruit Detection. Precis. Agric. 2020, 21, 1–17. [Google Scholar] [CrossRef]
Figure 1. Digital farming with agricultural robotics (source: www.AdaptiveAgroTech.com (accessed on 1 October 2022)).
Figure 2. Typical harvesting robots: (a) a plum harvesting robot (Photo: Reprinted with permission from Ref. [12]. 2021, Brown, J.); (b,d–f) apple harvesting robots (Photo: Reprinted with permission from Ref. [13]. 2021, Yan, B.; Ref. [14]. 2017, He, L.; Ref. [15]. 2012, Ji, W.; Ref. [16]. 2011, Zhao, D.); (c,n–p) sweet pepper harvesting robots (Photo: Reprinted with permission from Ref. [17]. 2020, Arad, B.; Ref. [18]. 2017, Lehnert, C.; Ref. [19]. 2014, Bac, C.W.); (g–i) strawberry harvesting robots (Photo: Reprinted with permission from Ref. [6]. 2020, Xiong, Y.; Ref. [7]. 2019, Xiong, Y.; Ref. [20]. 2010, Hayashi, S.); (j) a litchi harvesting robot (Photo: Reprinted with permission from Ref. [21]. 2018, Xiong, J.); (k,m) tomato harvesting robots (Photo: Reprinted with permission from Ref. [22]. 2018, Feng, Q.; Ref. [23]. 2010, Kondo, N.); (l) a kiwifruit harvesting robot (Photo: Reprinted with permission from Ref. [24]. 2019, Williams, H.A.M.).
Figure 3. Different processes of object detection and recognition of fruits and vegetables.
Figure 4. The outline of this overview and review.
Figure 5. Techniques based on digital image processing.
Figure 6. Samples of cucumbers in a natural complex environment (Photo: Reprinted with permission from Ref. [136]. 2020, Mao, S.).
Figure 7. Two kinds of apple fruits: (a) completely red fruits; (b) incompletely red fruits (Photo: Reprinted with permission from Ref. [60]. 2019, Liu, X.).
Figure 8. Detection results of different images: (a1–a4) images taken under front light; (b1–b4) images taken under backlight; (c1–c4) images taken under side light; (d1–d4) images taken under artificial light (Photo: Reprinted with permission from Ref. [60]. 2019, Liu, X.).
Figure 9. The idea of image segmentation and classifiers based on traditional machine learning.
Figure 10. Techniques based on traditional machine learning.
Table 1. Comparison of frequently used sensors for fruit and vegetable recognition.

| Sensors | Features Exploited | Advantages | Disadvantages |
| --- | --- | --- | --- |
| Black/white camera | Shape and texture features | A negligible effect of changing lighting conditions | Lack of color information of target objects |
| RGB camera | Color, shape, and texture features | Exploits all the basic features of target objects | Highly sensitive to changing lighting conditions |
| Spectral camera | Color features and spectral information | Provides more information about reflectance | Computationally expensive for complete spectrum analysis |
| Thermal camera | Thermal signatures | Color invariant | Dependency on minute thermal differences |
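To make the illumination sensitivity attributed to plain RGB cameras in Table 1 concrete, the short sketch below converts raw RGB values to normalized chromaticity coordinates, a common pre-processing step before color-based segmentation. It is a minimal illustration assuming only NumPy; the function name and the sample pixel values are our own assumptions, not taken from any cited study.

```python
# Minimal sketch: normalized (r, g) chromaticity coordinates remove much of the
# overall-brightness component of an RGB pixel, which Table 1 lists as the main
# weakness of plain RGB imaging. Assumes only NumPy; the sample pixels are made up.
import numpy as np

def chromaticity(rgb: np.ndarray) -> np.ndarray:
    """Map an (..., 3) array of RGB values to (r, g) chromaticity coordinates."""
    rgb = rgb.astype(np.float64)
    total = rgb.sum(axis=-1, keepdims=True)
    total[total == 0] = 1.0               # avoid division by zero for black pixels
    return rgb[..., :2] / total           # r = R/(R+G+B), g = G/(R+G+B)

if __name__ == "__main__":
    # The same red surface seen under dim and bright light: raw values differ
    # greatly, but the chromaticity coordinates are identical.
    dim, bright = np.array([60, 20, 10]), np.array([180, 60, 30])
    print(chromaticity(dim), chromaticity(bright))
```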
Table 2. Comparison of techniques based on digital image processing.

| Applied Crops | Description | Sensors | Advantages | Improvements | Value of Metric Used (%) | Ref. |
| --- | --- | --- | --- | --- | --- | --- |
| Apple | A method to obtain the near-large fruit from apple images in orchards | RGB camera | The R-channel and G-channel images of orchard apple RGB images are processed with an adaptive gamma correction method | Future work may include improving the detection rate | 70 | [27] |
| Tomato | A mature tomato detection algorithm based on an improved HSV color space and an improved watershed segmentation | RGB camera | Mature red tomatoes are detected successfully even under varying illumination | The accuracy of recognition needs to be improved | 81.6 | [31] |
| Apple | The potential use of close-range, low-cost terrestrial RGB imaging sensors for fruit detection in a high-density apple orchard | RGB camera | Band combinations are generated as additional parameters for fruit detection | Unripe fruits under poor lighting are not detected by the methodology | 75 | [35] |
| Blueberry | Recognizing blueberry fruit of different maturity using histogram of oriented gradients and color features in outdoor scenes | RGB camera | The a* and b* features of the L*a*b* color space are used to discard non-fruit regions | The speed of detection needs to be improved | Mature fruit: 96.1; intermediate fruit: 94.2; young fruit: 86 | [36] |
| Apple | A Hough circle transform algorithm is proposed to fit and extract apple shapes | RGB camera | To overcome the limitations of the global Hough transform, a local-parameter adaptive Hough transform is used | When multiple overlapping apples are not arranged in a straight line, recognition errors occur easily | 91.3 (72 ms) | [25] |
| Citrus, tomato, pumpkin, bitter gourd, towel gourd, and mango | Fruit detection in natural environments using partial shape matching (PSM) and the probabilistic Hough transform (PHT) | RGB camera | PSM and PHT are used for sub-fragment detection and aggregation without requiring painstaking design of specific features for each type of fruit, which makes the algorithm generalizable | PHT uses a scale-variant dissimilarity metric to determine the probability of a vote, so it may fail to detect fruits with large scale changes | 78.3; 84.8; 74.5; 76.2; 80.7; 91.9 | [37] |
| Orange | A machine vision algorithm combining adaptive segmentation and shape analysis for orange fruit detection | RGB camera | In segmentation, the orange is enhanced using the red chromaticity coefficient, which enables adaptive segmentation under variable outdoor illumination | The speed of detection needs to be improved | 93 | [45] |
| Green fruits | A technique based on texture analysis is proposed for detecting green fruits | RGB camera | The method is sufficiently accurate for precise location and monitoring of textured fruit in the field | The method needs to be improved to better handle disadvantageous conditions such as strong sunlight and occlusion | Pineapple: 85; bitter melon: 100 | [51] |
| Green apple | Detection of green apples in hyperspectral images of apple-tree foliage using machine vision | Spectral camera | The method uses several techniques, such as extraction and classification of homogeneous objects, to analyze hyperspectral data | Independent studies under varied conditions and with more crop varieties are needed to verify the robustness of the method | 88.1 | [46] |
| Green citrus | Green citrus detection using ‘eigenfruit’, color, and circular Gabor texture features under natural outdoor conditions | RGB camera | Color, shape, and texture features are used together to detect immature green citrus fruits, including scanning the image with a sub-window and merging the results of different classifiers by majority voting | Future work may include improving the detection rate, reducing the processing time, and accommodating more varied outdoor conditions | 75.3 | [44] |
| Immature citrus | Immature citrus fruit detection based on local binary pattern features and hierarchical contour analysis | RGB camera | The good occlusion tolerance is mainly due to the robust LBP texture descriptor and the hierarchical contour analysis, which uses the pattern of light intensity distribution on the fruit surface | Fruits occluded very seriously, or completely, by leaves and other fruits could not be detected by the proposed method | 82.3 | [39] |
| Litchi | A method of ripe litchi recognition for two litchi varieties using RGB-D images | RGB-D camera | A random forest binary classification model is trained on color and texture features to recognize litchi fruits | Depth segmentation can effectively reduce the false positive rate of litchi recognition | Green litchi: 89.92; red litchi: 94.5 | [55] |
| Oil palm fresh fruit bunch | Maturity classification of oil palm fresh fruit bunches based on color and texture features | RGB camera | Forty features are extracted from several color spaces and reduced to five features using PCA to optimize the computation time | The speed of detection needs to be improved | 98.3 | [53] |
| Strawberry | A simple color-thresholding algorithm based on the RGB channels for detecting strawberries | RGB-D camera | The vision system combines color thresholding with screening of object area and depth range to select ripe and reachable strawberries, which is fast to process | Future work could merge detections from multiple frames so that occluded strawberries become visible from different views | Isolated strawberry: 96.8; occluded strawberry: 53.6 | [18] |
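Most of the entries in Table 2 build on color-space thresholding followed by morphological clean-up and contour analysis. The sketch below is a minimal, hedged illustration of that family of techniques, assuming OpenCV 4 and NumPy; the file name, HSV thresholds, kernel size, and minimum contour area are illustrative assumptions rather than values reported in any of the cited papers.

```python
# A minimal sketch (not the method of any cited paper) of HSV thresholding plus
# morphological filtering for red-fruit candidates, assuming OpenCV 4 and NumPy.
import cv2
import numpy as np

def detect_red_fruit(path: str, min_area: float = 500.0):
    """Return bounding boxes of candidate red-fruit regions in a BGR image."""
    bgr = cv2.imread(path)
    if bgr is None:
        raise FileNotFoundError(path)

    # HSV separates chromatic content from intensity, which makes simple color
    # thresholds less sensitive to illumination than raw RGB values.
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)

    # Red wraps around hue = 0 in OpenCV's 0-179 hue range, so two masks are combined.
    lower = cv2.inRange(hsv, (0, 80, 60), (10, 255, 255))
    upper = cv2.inRange(hsv, (170, 80, 60), (179, 255, 255))
    mask = cv2.bitwise_or(lower, upper)

    # Morphological opening/closing suppresses noise and fills small holes.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

    # Contours of the remaining blobs are treated as fruit candidates.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]

if __name__ == "__main__":
    for x, y, w, h in detect_red_fruit("tomato.jpg"):   # "tomato.jpg" is a placeholder path
        print(f"candidate fruit at x={x}, y={y}, w={w}, h={h}")
```

In practice the thresholds are tuned per crop and per camera, and many of the cited studies add shape constraints (e.g., circularity or Hough-circle fitting) on top of the color mask.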
Table 3. Comparison of techniques based on traditional machine learning.

| Applied Crops | Description | Sensors | Advantages | Improvements | Value of Metric Used (%) | Ref. |
| --- | --- | --- | --- | --- | --- | --- |
| Litchi | A litchi recognition algorithm based on K-means clustering is presented to separate litchi from leaves, branches, and background | Two CCD color cameras | The method is robust against the influence of varying illumination and precisely recognizes litchi | Future research could improve the localization accuracy of litchi via hardware and software improvements | Unoccluded: 98.8; partially occluded: 97.5 | [75] |
| Apple | Development of a real-time machine vision recognition system to guide a harvesting robot in picking apples under different conditions | CCD camera | A segmentation method based on seeded region growing and color features is applied, and color and shape features of color images are extracted | Reducing the recognition execution time is still a challenge | 89 (352 ms) | [14] |
| Aubergine | An algorithm based on an SVM classifier is implemented to detect and locate aubergines automatically | TOF camera | An occlusion algorithm is applied to aubergines with low visibility due to leaf occlusion, planning collaborative behavior between the arms to resolve occlusion and proceed with dual-arm harvesting | Most failures are related to changing lighting conditions, so future work should prioritize improvements to image acquisition | 91.67 (26 ms) | [77] |
| Citrus | Identification of fruits and branches in natural scenes for a citrus harvesting robot using machine vision and a support vector machine | Color CCD camera | A multi-class support vector machine, followed by morphological operations, is used to segment the fruits and branches simultaneously | Feature extraction and the real-time response of the identification method have to be further optimized | 92.4 | [73] |
| Tomato | An algorithm is proposed for tomato detection in regular color images to reduce the influence of illumination and occlusion | RGB camera | The method uses a combination of shape, texture, and color information; HOG descriptors are adopted, and an SVM classifier implements the classification task | Future research could focus on further improving the detection accuracy and extending the method to other growth stages of tomatoes | 94.41 (950 ms) | [83] |
| Green pepper | A green pepper recognition method based on a least-squares support vector machine optimized by improved particle swarm optimization | RGB camera | To reduce the complexity of the data calculations and improve efficiency, the extracted feature vectors are normalized and used as the input eigenvectors of the least-squares support vector machine (LSSVM) | Because of the high rate of missed recognitions, the correct recognition rate of green pepper needs to be improved | 89.04 (320 ms) | [81] |
| Tomato | A dual-arm cooperative approach for a tomato harvesting robot using a binocular vision sensor | Stereo camera | A tomato detection algorithm combining an AdaBoost classifier and color analysis is proposed and employed by the harvesting robot | Future work could focus on improving the successful harvesting rate under uncertain conditions | 96 | [93] |
| Tomato | Detecting tomatoes in greenhouse scenes by combining an AdaBoost classifier and color analysis | RGB camera | To use shape, texture, and color information, Haar-like features, an AdaBoost algorithm, and APV-based color analysis are implemented | Future work could include enhancing detection rates, reducing the processing time, covering more tomato cultivars, and accommodating more varied unstructured environments | 96 | [99] |
| Immature green citrus | Only regular RGB images of the citrus canopy are used to detect immature green citrus fruit in natural environments | RGB camera | A local binary pattern feature-based AdaBoost classifier is built to remove false positives; a sub-window scans the difference image between the illumination-normalized image and the CHT detection result to detect small and partially occluded fruit | Image processing speed could be improved by decreasing the false positive removal time | 85.6 | [96] |
| Grain impurity of rice | Real-time grain impurity sensing for rice combine harvesters using image processing and a decision tree algorithm | CMOS camera | The illumination method is optimized by histogram equalization, and decision tree classification is used | Future work may include improving the detection rate, reducing the processing time, and accommodating more varied outdoor conditions | 76 | [102] |
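Several rows of Table 3 pair hand-crafted descriptors (HOG, LBP, color statistics) with a classical classifier such as an SVM. The following is a minimal sketch of a HOG-plus-SVM patch classifier, assuming scikit-image and scikit-learn are available; the patch size, HOG parameters, SVM settings, and the random placeholder data are assumptions for illustration only, not settings from any cited paper.

```python
# A minimal sketch of the HOG-feature + SVM classification pipeline that several
# Table 3 entries rely on. Assumes scikit-image and scikit-learn; all parameters
# and the synthetic training data are illustrative placeholders.
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

PATCH_SIZE = (64, 64)   # assumed patch size

def hog_features(gray_patch: np.ndarray) -> np.ndarray:
    """Describe a grayscale patch with a HOG descriptor vector."""
    patch = resize(gray_patch, PATCH_SIZE, anti_aliasing=True)
    return hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")

def train_fruit_classifier(patches, labels):
    """Train an RBF-kernel SVM on HOG descriptors of fruit / background patches."""
    X = np.array([hog_features(p) for p in patches])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
    clf.fit(X, labels)
    return clf

if __name__ == "__main__":
    # Random patches stand in for real fruit/background crops.
    rng = np.random.default_rng(0)
    patches = [rng.random((80, 80)) for _ in range(40)]
    labels = [1] * 20 + [0] * 20   # 1 = fruit, 0 = background (placeholder labels)
    clf = train_fruit_classifier(patches, labels)
    print("predicted:", clf.predict([hog_features(patches[0])]))
```

A real pipeline would slide such a classifier over candidate regions produced by a color or depth pre-segmentation stage, as several of the cited studies do.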
Table 4. Some frequently used image databases of crops: fruits and vegetables.

| Datasets | Total Samples | Training Sets | Testing Sets | Species | Web-Link | Year |
| --- | --- | --- | --- | --- | --- | --- |
| Fruits-360 | 90,380 | 67,692 | 22,688 | 131 (100 × 100 pixels) | https://www.kaggle.com/datasets/moltean/fruits (accessed on 16 February 2023) | 2020 |
| Fruit-A | 22,495 | 16,854 | 5641 | 33 (100 × 100 pixels) | https://www.kaggle.com/datasets/sshikamaru/fruit-recognition (accessed on 16 February 2023) | 2022 |
| Fruit-B | 21,000 | 15,000 | val.: 3000; test: 3000 | 15 (224 × 224 pixels) | https://www.kaggle.com/datasets/misrakahmed/vegetable-image-dataset (accessed on 16 February 2023) | 2021 |
| Fruit quality classification | 19,526 | - | - | 18 (256 × 256/192 pixels) | https://www.kaggle.com/datasets/ryandpark/fruit-quality-classification (accessed on 16 February 2023) | 2022 |
| Fresh and rotten fruits | 13,599 | 10,901 | 2698 | 6 | https://www.kaggle.com/datasets/sriramr/fruits-fresh-and-rotten-for-classification (accessed on 16 February 2023) | 2019 |
| Lemon quality control dataset | 2533 | - | - | 3 (256 × 256 pixels) | https://github.com/robotduinom/lemon_dataset (accessed on 16 February 2023) | 2022 |
| Pistachio | 2148 | - | - | 2 | https://www.muratkoklu.com/datasets/ (accessed on 16 February 2023) | 2022 |
| Grapevine leaves dataset | 500 | - | - | 5 | | 2022 |
| Apple | 1300 | 1000 | 300 | 2 | https://data.nal.usda.gov/search/type/dataset (accessed on 16 February 2023) | 2020 |
| Cauliflower | 656 | - | - | 4 | https://www.kaggle.com/datasets/noamaanabdulazeem/cauliflower-dataset (accessed on 16 February 2023) | 2022 |
| Sweet pepper and peduncle segmentation | 620 | - | - | 8 | https://www.kaggle.com/datasets/lemontyc/sweet-pepper (accessed on 16 February 2023) | 2021 |
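Most of the public datasets in Table 4 follow a folder-per-class layout. As a hedged starting point, the sketch below walks such a directory tree, extracts a simple color-histogram feature from each image, and evaluates a k-nearest-neighbor classifier on a hold-out split; the directory path, file extension, histogram bin count, and k are assumptions to be adapted to the dataset actually downloaded (the code uses Pillow, NumPy, and scikit-learn).

```python
# A minimal sketch of loading a folder-per-class image dataset such as those in
# Table 4 (e.g., a local copy of Fruits-360, where each class has its own
# subdirectory) and classifying it with a traditional method. Paths, the *.jpg
# extension, histogram bins, and k are placeholder assumptions.
from pathlib import Path
import numpy as np
from PIL import Image
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

def color_histogram(img_path: Path, bins: int = 8) -> np.ndarray:
    """A simple RGB color-histogram feature vector (bins per channel)."""
    img = np.asarray(Image.open(img_path).convert("RGB"))
    hists = [np.histogram(img[..., c], bins=bins, range=(0, 255))[0] for c in range(3)]
    feat = np.concatenate(hists).astype(np.float64)
    return feat / feat.sum()              # normalize so image size does not matter

def load_dataset(root: str):
    """Read <root>/<class_name>/<image>.jpg into feature and label arrays."""
    X, y = [], []
    for class_dir in sorted(Path(root).iterdir()):
        if not class_dir.is_dir():
            continue
        for img_path in class_dir.glob("*.jpg"):
            X.append(color_histogram(img_path))
            y.append(class_dir.name)
    return np.array(X), np.array(y)

if __name__ == "__main__":
    X, y = load_dataset("fruits-360/Training")   # assumed local path
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.25, random_state=0, stratify=y)
    knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
    print("hold-out accuracy:", accuracy_score(y_te, knn.predict(X_te)))
```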