Communication

Computer Vision System for Mango Fruit Defect Detection Using Deep Convolutional Neural Network

R. Nithya, B. Santhi, R. Manikandan, Masoumeh Rahimi and Amir H. Gandomi
1 School of Computing, SASTRA Deemed University, Thanjavur 613401, India
2 Faculty of Electrical and Computer Engineering, Shahr-e-Rey Branch, Islamic Azad University, Tehran 1815163111, Iran
3 Faculty of Engineering and Information Systems, University of Technology Sydney, Sydney, NSW 2007, Australia
* Authors to whom correspondence should be addressed.
Foods 2022, 11(21), 3483; https://doi.org/10.3390/foods11213483
Submission received: 24 August 2022 / Revised: 14 October 2022 / Accepted: 27 October 2022 / Published: 2 November 2022
(This article belongs to the Special Issue Vision-Based Sensors and Algorithms for Food Processing)

Abstract

Machine learning techniques play a significant role in agricultural applications, particularly in the computerized grading and quality evaluation of fruits. In the agricultural domain, automation improves the quality, productivity, and economic growth of a country. Quality grading of fruits is an essential measure in the export market, especially the detection of defects on a fruit's surface. This is particularly pertinent for mangoes, which are highly popular in India. However, manual grading of mangoes is a time-consuming, inconsistent, and subjective process. Therefore, a computer-assisted grading system has been developed for defect detection in mangoes. Recently, machine learning techniques, such as deep learning methods, have achieved efficient results in digital image classification. Specifically, the convolutional neural network (CNN) is a deep learning technique employed here for automated defect detection in mangoes. This study proposes a computer vision system, which employs a CNN, to classify mangoes as good or defective. After training and testing the system on a publicly available mango database, the experimental results show that the proposed method achieved an accuracy of 98%.

1. Introduction

Mango is a major fruit crop in India that is rich in vitamins and minerals [1]. Total worldwide mango production is estimated at 50.6 million tons, 39% of which occurs in India; Thailand and China are the next largest producers. Uttar Pradesh, Tamil Nadu, Telangana, Andhra Pradesh, Kerala, Bihar, and Karnataka are the main producing states in India [2]. While mango production is increasing every year in India, the total export of mangoes from India remains very low due to the lack of nondestructive, reliable automated tools and techniques for quality grading. The quality of a mango can be judged from its skin, which displays green spots and yellowish speckles. In India, quality grading of mango fruit is done manually by experienced evaluators and is based on defect detection of the fruit's surface. Manual sorting requires workers to perform demanding sensory tasks at high volume over long working hours, so manual grading of mango fruit is a time-consuming process [3,4,5]. In addition, this process requires a significant number of employees, which increases the cost of production and can lead to uncertain and inaccurate results because individual judgments are subjective and inconsistent across fruits. Overcoming these limitations calls for computer image sensors that are more effective and efficient. Therefore, we propose a reliable method for automatic defect detection of mangoes and a computer vision system for automated grading.
The defect detection of mango fruits using a computer vision system includes three macro steps: preprocessing of an input image, feature extraction, and classification of the input image. The computer vision system in the agricultural domain enhances the quality of food products for the export market, in which fruit grading is an essential process to select quality fruits. Hence, there is a need for an intelligent computer vision system for fruit grading. Over the past few years, non-destructive machine learning techniques have been employed for efficient fruit quality assurance and have become an integral part of developing computer vision systems for fruit defect detection [6,7]. In addition, image processing and data mining techniques are widely employed in the agricultural domain for computerized defect detection and quality grading of produce. The objective of this work was to develop a computer vision system for mango defect detection using advanced machine learning techniques, such as the convolutional neural network (CNN).
The remainder of the paper is organized as follows. A summary of the related works is described in Section 2. In Section 3, the outline of the proposed model and methodology are explained. Section 4 presents the performance measures to assess the effectiveness of the proposed methodology. Section 5 discusses the experimental results, and Section 6 provides a comparative analysis of the proposed method and existing works. Section 7 contains concluding remarks about the proposed methodology and obtained results.

2. Related Work

Machine learning and image processing techniques have been extensively used in the agricultural domain over the last few years. In particular, computer vision systems provide a nondestructive, low-cost, fast and reliable means for fruit defect detection. Thus far, several works have investigated automated fruit defect detection based on the fruit’s surface.
Patel et al. [8] proposed a computer vision system for the non-destructive physical characterization of mangoes, considering various morphological features and multilinear regression models for quality grading, and obtained an accuracy of 97.9%. Similarly, Patel et al. [9] developed a computer vision system for defect detection of mangoes using a reflected ultraviolet imaging technique. They found that a 400 nm band-pass filter is appropriate for detecting defective mangoes that were not identified by an RGB color camera. Nandi et al. [10] introduced a computer vision methodology for grading mangoes based on maturity and quality. They used fuzzy incremental learning for grading mangoes and obtained an accuracy of 87%. Nandi et al. [11] presented a machine vision system for the prediction of mango maturity level; after various texture features were extracted from mango images, the relevant features were selected by the recursive feature elimination technique using SVM as a classifier. Huang et al. [12] proposed a computer vision methodology for the non-destructive detection of mango quality, which includes a colorimetric sensor array, principal component analysis, and support vector classification for qualitative discrimination. They classified the mangoes into three grades and obtained an accuracy of 97.5%. Guojin et al. [13] introduced a computer vision technique for mango appearance rank classification based on appearance characteristics, using an extreme learning machine neural network to rank the mangoes. Sahu et al. [14] developed an automated tool for maturity and defect identification of mango fruits using digital image analysis, considering the size, color, and shape features of the mangoes. Andrushia et al. [15] presented an automatic skin disease identification system for mangoes in which extracted features such as texture, color, and shape are selected from a digital image using artificial bee colony optimization with SVM as a classifier. Momin et al. [16] developed a computer vision system for grading mango fruits based on geometry and shape features, using image processing techniques such as global thresholding, color binarization, median filtering, and morphological processing, and achieved an accuracy of 97%.
Additionally, Raghavendra et al. [17] proposed an optimal wavelength selection methodology for mango defect detection and obtained an accuracy of 84.5%. Kumari et al. [18] developed an automated system for mango defect detection using enhanced fuzzy k-means clustering, maximally correlated principal component analysis, and a back-propagation-based discriminant classifier. Patel et al. [19] presented a monochrome computer vision system for mango defect detection, which achieved an accuracy of 97.88%.

3. Methodology

The proposed computer vision system includes the following phases: (a) preprocessing to enhance the image quality; (b) data augmentation to increase the number of data samples; and (c) classification of mango images as either good or defective. The outline of the proposed method is shown in Figure 1.

3.1. Dataset

The dataset used in this study contains 50 good and 50 defective Kent mango images, each 1024 × 1024 pixels in size. The database is publicly available at http://www.cofilab.com/portfolio/mangoesdb/ (accessed on 1 March 2022).

3.2. Preprocessing

Preprocessing is an important step in the computer vision system that is performed to enhance image quality. Noise removal and image enhancement to show defective areas on the surface of fruits are preprocessing techniques considered in this work.

3.2.1. Histogram Equalization

Histogram equalization (HE) is a widely used preprocessing technique to improve the quality of a digital image. It improves image contrast by redistributing the image histogram: HE spreads out the most frequent intensity values so that pixel intensities become approximately uniformly distributed [20]. This approach is suitable for images whose foregrounds and backgrounds are both dark or both bright. Let I be a given image represented as an m × n matrix of integer pixel intensities ranging from 0 to L − 1, where L is the number of grey levels in the image, often 256. Let p denote the normalized histogram of I with a bin for each possible intensity:
$$p_g = \frac{\text{number of pixels with intensity } g}{\text{total number of pixels}}, \qquad g = 0, 1, \ldots, L-1.$$
The histogram-equalized image $I_1$ is then given by
$$I_1(i, j) = \left\lfloor (L - 1) \sum_{g=0}^{I(i,j)} p_g \right\rfloor.$$
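The paper gives no source code; as a minimal illustration of this mapping, a NumPy sketch (with our own, hypothetical function name, not the authors' MATLAB implementation) could look like:

```python
import numpy as np

def histogram_equalize(image, levels=256):
    """Global histogram equalization of a grayscale uint8 image."""
    # Normalized histogram p_g: fraction of pixels at each intensity g
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    p = hist / image.size
    # Cumulative sum scaled to [0, L-1] gives the new intensity for each old one
    mapping = np.floor((levels - 1) * np.cumsum(p)).astype(np.uint8)
    return mapping[image]
```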

3.2.2. Adaptive Wiener Filter

The adaptive Wiener filter is a linear filter that is useful for de-noising and smoothing; it minimizes the mean squared error between the filtered and true image. The amount of smoothing performed by this filter depends on the local image variance: where the variance is large, less smoothing is performed, and where the variance is small, more smoothing is performed. This low-pass filter was applied in a local neighborhood of 3 × 3 pixel blocks of the image. The adaptive Wiener filter reduces background noise without blurring, retaining the high-frequency and edge regions of the image.
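As a sketch of how such a filter might be applied per colour channel with a 3 × 3 window (using SciPy's generic Wiener filter as a stand-in for the authors' MATLAB routine):

```python
import numpy as np
from scipy.signal import wiener

def wiener_denoise_rgb(image):
    """3x3 adaptive Wiener filtering applied independently to each RGB channel."""
    img = image.astype(float)
    # scipy.signal.wiener estimates the local mean and variance in each window and
    # smooths more strongly where the local variance is small.
    filtered = [wiener(img[..., c], mysize=(3, 3)) for c in range(img.shape[-1])]
    return np.clip(np.stack(filtered, axis=-1), 0, 255).astype(np.uint8)
```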

3.3. Data Augmentation

The data augmentation approach is widely employed in deep learning models to enlarge the data samples [21], which includes transformation techniques like rotation, flipping, shearing, and cropping. This is an essential process in deep learning models such as CNN that require a large number of data samples for training. This technique helps the deep learning model to enhance the classification performance by generalizing better and thereby reducing overfitting. While the database consists of 100 images of Kent mangoes (50 good and 50 defective) as mentioned, more images are needed to train the deep learning model. Therefore, scaling and rotation transformation were employed to augment the data samples. Through data augmentation, the initial 100 mango images were augmented into 800 images by applying rotation (90°, 180°, 270°, and 360°), flipping and scaling transformations. The sample images of the data augmentation process are shown in Figure 2.
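A minimal sketch of such an augmentation step, assuming the images sit in a folder of JPEG files (the directory handling and the 50% rescale factor are our illustrative choices, not specified in the paper):

```python
from pathlib import Path
from PIL import Image

def augment_folder(src_dir, dst_dir):
    """Write rotated, flipped, and rescaled copies of every mango image."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for path in sorted(Path(src_dir).glob("*.jpg")):
        img = Image.open(path)
        variants = {
            "rot90": img.rotate(90),
            "rot180": img.rotate(180),
            "rot270": img.rotate(270),
            "flip": img.transpose(Image.FLIP_LEFT_RIGHT),
            "half": img.resize((img.width // 2, img.height // 2)),
        }
        for tag, out in variants.items():
            out.save(dst / f"{path.stem}_{tag}.jpg")
```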

3.4. Convolutional Neural Network

The convolutional neural network (CNN) is an improved artificial neural network [22] that is capable of classifying and recognizing defect regions in mango images via a computer vision system. This type of network processes visual inputs and performs tasks such as object recognition, segmentation, and classification of images [23]. A CNN is structurally similar to a multilayer perceptron neural network.

3.4.1. Convolution Layer

In a CNN, the convolution layer applies a filter to extract features from the input image and produces feature maps, or activation maps, as the output. The parameters used in the convolution operation are the filter size F and stride S. The first convolution layer produces low-level feature maps such as edges, corners, and lines, while subsequent layers generate high-level feature maps. The input image of size W × H × C (width, height, channels) is convolved with N kernels of size k × k × D, where D is the number of RGB channels and k is less than the image dimensions [24]. Convolution with N kernels generates N feature maps. The convolution operation starts from the top-left corner of the image and is repeated until the kernel reaches the bottom-right corner.
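To make the operation concrete, here is a naive NumPy sketch of one kernel sliding over an H × W × D input with a given stride (illustrative only; deep learning libraries implement this far more efficiently):

```python
import numpy as np

def convolve_single_kernel(volume, kernel, stride=1):
    """Valid convolution of an H x W x D input with one k x k x D kernel."""
    H, W, _ = volume.shape
    k = kernel.shape[0]
    out_h = (H - k) // stride + 1
    out_w = (W - k) // stride + 1
    feature_map = np.zeros((out_h, out_w))
    for i in range(out_h):          # slide from the top-left corner ...
        for j in range(out_w):      # ... to the bottom-right corner
            patch = volume[i * stride:i * stride + k, j * stride:j * stride + k, :]
            feature_map[i, j] = np.sum(patch * kernel)
    return feature_map
```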

3.4.2. Pooling Layer

The pooling layer, also known as the down-sampling layer, reduces the size of the feature maps and the computational complexity of the CNN. The pooling operation is applied after a convolution and provides a degree of spatial invariance. It minimizes the dimensions of each feature map while preserving the most important features. The most common pooling approaches are average pooling and max-pooling. In average pooling, the average of the values in the region of the feature map covered by the filter is used, whereas max-pooling takes the maximum value from that region. In this study, max-pooling was employed.
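A corresponding sketch of 2 × 2 max-pooling on a single feature map (again a plain NumPy illustration, not the authors' code):

```python
import numpy as np

def max_pool(feature_map, size=2, stride=2):
    """Max-pooling: keep the largest activation in each size x size window."""
    H, W = feature_map.shape
    out_h = (H - size) // stride + 1
    out_w = (W - size) // stride + 1
    pooled = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            window = feature_map[i * stride:i * stride + size,
                                 j * stride:j * stride + size]
            pooled[i, j] = window.max()
    return pooled
```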

3.4.3. Fully Connected Layer

This layer uses the features obtained from the pooling layer for the classification of the input images. The fully connected layer flattens the pooling output into a long vector. The last fully connected layer uses a softmax activation function to compute, for each class, an output between 0 and 1 that is used to predict the label of the input image. One or more fully connected layers are placed at the end of the CNN architecture.
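In code, this final stage amounts to a flattened vector passing through a weight matrix followed by a softmax; a small sketch (the weights shown are random placeholders):

```python
import numpy as np

def dense_softmax(flattened, weights, bias):
    """Fully connected layer followed by softmax over the class scores."""
    logits = flattened @ weights + bias
    exp = np.exp(logits - logits.max())   # subtract the max for numerical stability
    return exp / exp.sum()                # class probabilities in [0, 1], summing to 1

# Example: a 2048-element flattened feature vector mapped to two classes
rng = np.random.default_rng(0)
probs = dense_softmax(rng.normal(size=2048), rng.normal(size=(2048, 2)), np.zeros(2))
```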

3.4.4. Rectified Linear Units (ReLU)

CNN applies a ReLU activation function to the convolved feature after every convolution operation in order to introduce nonlinearity into the model. ReLU is an activation function that is used to improve the training of deep convolutional neural networks. The advantage of the ReLU activation function is faster training [25].
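ReLU itself is simply an element-wise maximum with zero:

```python
import numpy as np

def relu(x):
    """Element-wise ReLU: negative activations are set to zero."""
    return np.maximum(0.0, x)
```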

3.4.5. CNN Architecture

Herein, a CNN architecture with 13 layers, namely 6 convolution, 5 pooling, and 2 fully connected layers, was developed to perform automatic defect detection of mangoes without separate feature extraction and selection processes. The CNN operations, including convolution, non-linearity, pooling or sub-sampling, and classification, were performed on the input images, each of which can be represented as a matrix of pixel values. Each convolution layer (1, 3, 5, 7, 9 and 10) is convolved with its respective kernel size (3, 4 and 8). A max-pooling operation is applied to the feature maps after the convolution operation. The final layer is the fully connected layer, which predicts the output. As seen in Figure 3 and Table 1, the input image of size 224 × 224 × 3 is fed into the CNN, and the input pixel values are normalized from [0, 255] to [0, 1]. Each convolution layer performs the convolution operation with its kernel and is followed by a pooling operation and then another convolution layer.
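The network itself was implemented in MATLAB; purely as an illustration of a layout with 6 convolution, 5 pooling, and 2 fully connected layers, a simplified Keras sketch is shown below (filter counts loosely follow Table 1, while kernel sizes and strides are simplified, so the exact feature-map shapes differ from the table):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(input_shape=(224, 224, 3), num_classes=2):
    """A 6-conv / 5-pool / 2-FC network in the spirit of Section 3.4.5."""
    return models.Sequential([
        layers.Rescaling(1.0 / 255, input_shape=input_shape),  # [0, 255] -> [0, 1]
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(2),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(2),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(2),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(2),
        layers.Conv2D(64, 3, activation="relu"),
        layers.Conv2D(128, 3, activation="relu"),
        layers.MaxPooling2D(2),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(64, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),  # good vs. defective
    ])
```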

4. Performance Measures

A classification test and k-fold cross-validation were performed to determine how effectively the proposed method identifies mangoes as either good or defective. In k-fold cross-validation, all images are used for both training and testing: the data samples are randomly divided into k groups [26], one fold is used for testing, the remaining k − 1 folds are used for training, and the process is repeated so that each fold serves as the test set once. In this work, 10-fold cross-validation was used. In the classification test, the presence of a defect (a defective mango) is treated as positive, whereas the absence of a defect is negative. Four outcomes are possible, as described in the confusion matrix in Table 2: True Negative (TN), True Positive (TP), False Negative (FN), and False Positive (FP) [27]. The classification performance was estimated by several measures, including the area under the ROC (Receiver Operating Characteristic) curve (AUC), accuracy, sensitivity, and specificity, which are calculated from the four outcomes as follows:
  • TP—Defective mangoes correctly classified as defective.
  • FP—Good mangoes incorrectly classified as defective.
  • TN—Good mangoes correctly classified as good.
  • FN—Defective mangoes incorrectly classified as good.
Accuracy = (TP + TN)/(TP + FP + TN + FN)
Sensitivity = TP/(TP + FN)
Specificity = TN/(TN + FP)
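These measures follow directly from the confusion-matrix counts; a small sketch, checked against the fold-1 counts later reported in Table 2:

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, and specificity from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # recall on the defective (positive) class
    specificity = tn / (tn + fp)   # recall on the good (negative) class
    return accuracy, sensitivity, specificity

# Fold 1 in Table 2: 39 + 39 correct, 1 + 1 wrong -> 97.5% for all three measures
print(classification_metrics(tp=39, fp=1, tn=39, fn=1))  # (0.975, 0.975, 0.975)
```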

5. Experimental Results

In this work, we employed a deep learning method and binary classification to identify defective mangoes using the proposed computer vision system. Images of Kent mangoes were obtained from a publicly available web database; the images are 1024 × 1024 pixels in size. Data augmentation was applied to the database to artificially increase the number of mango images from 100 to 800, which were then used for the performance evaluation of the proposed system. In this system, human intervention is not required to obtain features from the input images. The proposed CNN model was implemented in MATLAB on a computer with a 2.83 GHz processor and 8 GB of RAM. The model was trained with the Adam optimizer for 10 epochs at learning rates of 0.1, 0.01, 0.001, and 0.0001; the highest classification accuracy was obtained with a learning rate of 0.001 and a batch size of 32. Histogram equalization was used to enhance the input images. Standard performance measures such as accuracy, recall, and precision were used to assess the proposed system, and 10-fold cross-validation was employed for evaluation. The objective of the proposed deep learning model is an optimal fit, which lies between an underfit and an overfit model and is identified from the training and validation losses depicted in Figure 4.
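For illustration, the training configuration described above could be expressed as follows, reusing the hypothetical build_model() sketch from Section 3.4.5 and random placeholder arrays in place of the actual augmented dataset (the original experiments were run in MATLAB, so this is only a schematic analogue):

```python
import numpy as np
import tensorflow as tf

# Placeholder data standing in for the 800 augmented mango images and their labels
x_train = np.random.randint(0, 256, size=(800, 224, 224, 3)).astype("float32")
y_train = np.random.randint(0, 2, size=(800,))

model = build_model()  # sketch from Section 3.4.5
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),  # best-performing rate
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
model.fit(x_train, y_train, epochs=10, batch_size=32, validation_split=0.1)
```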
The confusion matrix of the 10-fold cross-validation for Kent mango defect detection is shown in Table 2. In this work, a positive value indicates a defective mango, whereas good-quality mangoes are considered negative. The classification performance measures (sensitivity, accuracy, and specificity) of the proposed model in Table 2 reveal that less than 3% of the mangoes were misclassified in any fold. The average accuracy of the proposed model was 98.5%, suggesting that it can efficiently identify defective mangoes. Figure 5 presents the classification results of the 10 folds. Figure 6 shows the ROC curve of the proposed method, which describes the classification ability of the binary classifier; the obtained AUC value is 0.98.

6. Discussion

The proposed computer vision system is a simple and efficient tool for the automated detection of defective mangoes using advanced machine learning techniques. Experiments were carried out on a dataset of 800 mango images. Because a CNN requires a large number of images for good classification performance, data augmentation was performed to increase the number of data samples. In addition, the proposed model produced consistent results across all iterations. Table 3 summarizes the various approaches proposed by researchers for automated defect detection of mango, including image processing techniques. Compared to related works, the proposed deep learning model obtained the highest classification accuracy. As seen in the confusion matrix, the proposed CNN model correctly classified the good-quality and defective mangoes.
Moreover, classification on the Kent mango database also demonstrates that the proposed model obtained good classification accuracy. One of the most important advantages of the proposed deep learning method is that, unlike traditional machine learning models, it does not need separate segmentation, feature extraction, or feature selection processes. However, a disadvantage of the proposed deep learning model is that training is computationally expensive and requires a large amount of data. A small dataset is one of the major challenges in training a deep learning model; therefore, we applied data augmentation to obtain a larger dataset.
Training the deep learning model is essential to increasing its classification performance. Herein, the experimental results reveal that normal mangoes were efficiently distinguished from defective mangoes. Importantly, the proposed computer vision system can be used in export marketing to improve the objective evaluation of quality mangoes. It can also be used in retail stores to ensure the quality of the mangoes. To the best of our knowledge, this is the first study to propose a deep learning model for detecting mango defects. In future studies, the proposed deep learning methodology can be employed to develop a generalized computer vision system for defect identification of various fruits and vegetables.

7. Conclusions

This study aimed to develop a computer vision system for defect identification in mangoes using advanced machine learning techniques, which greatly benefits countries seeking to improve the export marketing of this fruit. The proposed system, which employs a CNN deep learning model, was evaluated on 800 images of mangoes and obtained a classification accuracy of 98.5%. The experimental results show that the proposed model can efficiently detect defective mangoes. This computerized system was developed to replace the manual evaluation of mango fruit, providing automated non-destructive defect detection. Therefore, the developed computer vision system is useful for evaluators to easily detect defects in mangoes.

Author Contributions

Drafting, R.N.; collecting the dataset, B.S.; experimental analysis, B.S., R.M. and M.R.; proposing the new method or methodology, R.N.; and proofreading, B.S., R.M., M.R. and A.H.G.; supervising, M.R. and A.H.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by University of Technology Sydney Internal Fund for A.H.G.

Data Availability Statement

Data will be shared for review based on the editorial reviewer’s request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhen, O.P.; Hashim, N.; Maringgal, B. Quality evaluation of mango using non-destructive approaches: A review. J. Agric. Food Eng. 2020, 1, 0003.
  2. Verma, M.K.; Srivastav, M.; Usha, K. Calender of Operations for Mango Cultivation. Division of Fruits and Horticultural Technology; ICAR-Indian Agricultural Research Institute: New Delhi, India, 2015.
  3. Sadegaonkar, V.D.; Wagh, K.H. Quality inspection and grading of mangoes by computer vision & Image Analysis. Int. J. Eng. Res. Appl. 2013, 3, 1208–1212.
  4. Bhargava, A.; Bansal, A. Fruits and vegetables quality evaluation using computer vision: A review. J. King Saud Univ.-Comput. Inf. Sci. 2021, 33, 243–257.
  5. Nagle, M.; Intani, K.; Romano, G.; Mahayothee, B.; Sardsud, V.; Müller, J. Determination of surface color of ‘all yellow’ mango cultivars using computer vision. Int. J. Agric. Biol. Eng. 2016, 9, 42–50.
  6. Albarrak, K.; Gulzar, Y.; Hamid, Y.; Mehmood, A.; Soomro, A.B. A Deep Learning-Based Model for Date Fruit Classification. Sustainability 2022, 14, 6339.
  7. Vélez-Rivera, N.; Blasco, J.; Chanona-Pérez, J.; Calderón-Domínguez, G.; de Jesús Perea-Flores, M.; Arzate-Vázquez, I.; Farrera-Rebollo, R. Computer vision system applied to classification of “Manila” mangoes during ripening process. Food Bioprocess Technol. 2014, 7, 1183–1194.
  8. Patel, K.K.; Kar, A.; Khan, M.A. Development and an application of computer vision system for nondestructive physical characterization of mangoes. Agric. Res. 2020, 9, 109–124.
  9. Patel, K.K.; Kar, A.; Khan, M.A. Potential of reflected UV imaging technique for detection of defects on the surface area of mango. J. Food Sci. Technol. 2019, 56, 1295–1301.
  10. Nandi, C.S.; Tudu, B.; Koley, C. A machine vision technique for grading of harvested mangoes based on maturity and quality. IEEE Sens. J. 2016, 16, 6387–6396.
  11. Nandi, C.S.; Tudu, B.; Koley, C. A machine vision-based maturity prediction system for sorting of harvested mangoes. IEEE Trans. Instrum. Meas. 2014, 63, 1722–1730.
  12. Huang, X.; Lv, R.; Wang, S.; Aheto, J.H.; Dai, C. Integration of computer vision and colorimetric sensor array for nondestructive detection of mango quality. J. Food Process Eng. 2018, 41, e12873.
  13. Guojin, L.; Diyong, D.; Shuang, C. Research on Mango Detection and Classification by Computer Vision. J. Agric. Mech. Res. 2015, 10, 4.
  14. Sahu, D.; Potdar, R.M. Defect identification and maturity detection of mango fruits using image analysis. Am. J. Artif. Intell. 2017, 1, 5–14.
  15. Andrushia, A.D.; Trephena, P.A. Artificial bee colony based feature selection for automatic skin disease identification of mango fruit. In Nature Inspired Optimization Techniques for Image Processing Applications; Springer: Cham, Switzerland, 2019; pp. 215–233.
  16. Momin, M.A.; Rahman, M.T.; Sultana, M.S.; Igathinathane, C.; Ziauddin, A.T.M.; Grift, T.E. Geometry-based mass grading of mango fruits using image processing. Inf. Process. Agric. 2017, 4, 150–160.
  17. Raghavendra, A.; Guru, D.S.; Rao, M.K. Mango internal defect detection based on optimal wavelength selection method using NIR spectroscopy. Artif. Intell. Agric. 2021, 5, 43–51.
  18. Kumari, N.; Belwal, R. Hybridized approach of image segmentation in classification of fruit mango using BPNN and discriminant analyzer. Multimed. Tools Appl. 2021, 80, 4943–4973.
  19. Patel, K.K.; Kar, A.; Khan, M.A. Monochrome computer vision for detecting common external defects of mango. J. Food Sci. Technol. 2021, 58, 4550–4557.
  20. Xie, Y.; Ning, L.; Wang, M.; Li, C. Image enhancement based on histogram equalization. J. Phys. Conf. Ser. 2019, 1314, 012161.
  21. Shorten, C.; Khoshgoftaar, T.M. A survey on image data augmentation for deep learning. J. Big Data 2019, 6, 60.
  22. Acharya, U.R.; Oh, S.L.; Hagiwara, Y.; Tan, J.H.; Adam, M.; Gertych, A.; San Tan, R. A deep convolutional neural network model to classify heartbeats. Comput. Biol. Med. 2017, 89, 389–396.
  23. Acharya, U.R.; Fujita, H.; Oh, S.L.; Hagiwara, Y.; Tan, J.H.; Adam, M.; Tan, R.S. Deep convolutional neural network for the automated diagnosis of congestive heart failure using ECG signals. Appl. Intell. 2019, 49, 16–27.
  24. Paoletti, M.E.; Haut, J.M.; Plaza, J.; Plaza, A. A new deep convolutional neural network for fast hyperspectral image classification. ISPRS J. Photogramm. Remote Sens. 2018, 145, 120–147.
  25. Schmidt-Hieber, J. Nonparametric regression using deep neural networks with ReLU activation function. Ann. Stat. 2020, 48, 1875–1897.
  26. Gebrehiwot, A.; Hashemi-Beni, L.; Thompson, G.; Kordjamshidi, P.; Langan, T.E. Deep convolutional neural network for flood extent mapping using unmanned aerial vehicles data. Sensors 2019, 19, 1486.
  27. Ruuska, S.; Hämäläinen, W.; Kajava, S.; Mughal, M.; Matilainen, P.; Mononen, J. Evaluation of the confusion matrix method in the validation of an automated system for measuring feeding behaviour of cattle. Behav. Process. 2018, 148, 56–62.
Figure 1. Overview of the proposed methodology.
Figure 2. Images of mango samples: (a) Sample mango images; (b) Images rotated at 90°; (c) Images rotated at 180°; (d) Images rotated at 270°.
Figure 3. The proposed CNN architecture model.
Figure 4. Training loss vs. validation loss.
Figure 5. The classification accuracy of the 10 folds of cross-validation.
Figure 6. ROC curve for mango defect detection using the proposed CNN model.
Table 1. The proposed CNN architecture model.

| Layer | Type | Number of Feature Maps | Number of Neurons in the Layer | Size of the Kernel Used to Form Each Feature Map | Stride |
|---|---|---|---|---|---|
| 0 | Input | 3 | 224 × 224 × 3 | - | - |
| 1 | Convolution | 16 | 222 × 222 × 16 | 3 × 3 × 3 | 1 |
| 2 | Max-pooling | 16 | 111 × 111 × 16 | 2 × 2 | 2 |
| 3 | Convolution | 32 | 109 × 109 × 32 | 3 × 3 × 16 | 1 |
| 4 | Max-pooling | 32 | 55 × 55 × 32 | 2 × 2 | 2 |
| 5 | Convolution | 32 | 53 × 53 × 32 | 3 × 3 × 32 | 1 |
| 6 | Max-pooling | 32 | 27 × 27 × 32 | 2 × 2 | 2 |
| 7 | Convolution | 64 | 25 × 25 × 64 | 3 × 3 × 32 | 1 |
| 8 | Max-pooling | 64 | 23 × 23 × 64 | 3 × 3 × 64 | 1 |
| 9 | Convolution | 64 | 12 × 12 × 64 | 2 × 2 | 2 |
| 10 | Convolution | 128 | 10 × 10 × 128 | 3 × 3 × 64 | 1 |
| 11 | Max-pooling | 128 | 8 × 8 × 128 | 3 × 3 × 128 | 1 |
| 12 | Fully Connected | - | 128 | - | - |
| 13 | Fully Connected | - | 64 | - | - |
| 14 | Output | - | 2 | - | - |
Table 2. Performance measures of the proposed method.

| k-Fold | Test Result | Actual Good | Actual Defect | Sensitivity (%) | Specificity (%) | Accuracy (%) |
|---|---|---|---|---|---|---|
| 1 | Good | 39 | 1 | 97.5 | 97.5 | 97.5 |
|   | Defect | 1 | 39 | | | |
| 2 | Good | 40 | 1 | 97.5 | 100 | 98.75 |
|   | Defect | 0 | 39 | | | |
| 3 | Good | 40 | 1 | 97.5 | 100 | 98.75 |
|   | Defect | 0 | 39 | | | |
| 4 | Good | 38 | 0 | 100 | 95 | 97.5 |
|   | Defect | 2 | 40 | | | |
| 5 | Good | 40 | 0 | 100 | 100 | 100 |
|   | Defect | 0 | 40 | | | |
| 6 | Good | 40 | 1 | 97.5 | 100 | 98.75 |
|   | Defect | 0 | 39 | | | |
| 7 | Good | 40 | 1 | 97.5 | 100 | 98.75 |
|   | Defect | 0 | 39 | | | |
| 8 | Good | 39 | 0 | 100 | 97.5 | 98.75 |
|   | Defect | 1 | 40 | | | |
| 9 | Good | 40 | 1 | 97.5 | 100 | 98.75 |
|   | Defect | 0 | 39 | | | |
| 10 | Good | 39 | 0 | 100 | 100 | 97.5 |
|   | Defect | 0 | 39 | | | |
Table 3. Comparison of the proposed model with existing works.

| Reference | Features | Classifier | Accuracy (%) |
|---|---|---|---|
| Proposed model | - | CNN | 98.5 |
| Patel et al. [19] | Morphological | Multi-linear regression models | 88.75 and 97.88 |
| Nandi et al. [10] | Color-based | Fuzzy incremental learning | 87 |
| Nandi et al. [11] | Color-based | Support vector machine | 96 |
| Huang et al. [12] | Colorimetric sensor array and principal component analysis | Support vector classification | 97.5 |
| Momin et al. [16] | Geometry and shape features | Global thresholding, median filter, color binarization, and morphological processing | 97 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
