
Classification and Analysis of Pistachio Species with Pre-Trained Deep Learning Models

1 School of Electrical Engineering and Computer Science, Gwangju Institute of Science and Technology, Gwangju 61005, Korea
2 Doganhisar Vocational School, Selcuk University, Konya 42930, Turkey
3 Guneysinir Vocational School, Selcuk University, Konya 42490, Turkey
4 Department of Computer Engineering, Selcuk University, Konya 42031, Turkey
* Author to whom correspondence should be addressed.
Electronics 2022, 11(7), 981; https://doi.org/10.3390/electronics11070981
Submission received: 24 February 2022 / Revised: 21 March 2022 / Accepted: 21 March 2022 / Published: 22 March 2022

Abstract

Pistachio is a shelled fruit from the Anacardiaceae family, native to the Middle East. Kirmizi and Siirt pistachios are the major types grown and exported in Turkey. Since the prices, tastes, and nutritional values of these types differ, the type of pistachio is important in trade. This study aims to identify these two types of pistachios, which are frequently grown in Turkey, by classifying them via convolutional neural networks. Within the scope of the study, images of the Kirmizi and Siirt pistachio types were obtained through a computer vision system. The dataset includes a total of 2148 images, 1232 of the Kirmizi type and 916 of the Siirt type. Three different convolutional neural network models were used to classify these images. Models were trained with the transfer learning method using the pre-trained AlexNet, VGG16, and VGG19 architectures. The dataset was split into 80% training and 20% test. As a result of the performed classifications, the success rates obtained from the AlexNet, VGG16, and VGG19 models were 94.42%, 98.84%, and 98.14%, respectively. The models' performances were evaluated through sensitivity, specificity, precision, and F-1 score metrics, as well as ROC curves and AUC values. The highest classification success was achieved with the VGG16 model. The obtained results reveal that these methods can be used successfully in the determination of pistachio types.

1. Introduction

Pistachio (Pistacia vera L.) is an agricultural product native to the Middle East and Central Asia. The world’s major pistachio producers, Iran, the USA, Turkey, and Syria, contribute close to 90% of total world production [1]. Pistachio production in Turkey encompasses many varieties under different names. Among these, the most preferred types are Kirmizi, Siirt, and Halabi. The Red Aleppo variety, widely grown in Syria, is also grown in Turkey, though it is less preferred there. Likewise, the Syria-specific Achoury, Alemi, El Bataury, Obaid, and Ayimi types are also grown in Turkey [2,3,4].
The pistachio kernel, a good source of fat (50–60%), contains unsaturated fatty acids essential to human nutrition. It is widely used in the manufacture of confectionery and snack foods, and, owing to the dark green color of its kernel, pistachio is highly preferred in the ice cream and pastry industries [5]. The shell (endocarp) of the pistachio splits open along its seam. This is desirable, since pistachios are often marketed in their shells to be eaten by hand as a snack food [2,3,4,6].
As an expensive agricultural product, the price of pistachio that reaches the consumer depends on the quality of the product. Therefore, determining the quality of shelled pistachios is an important issue in terms of economy, export, and marketing. Increased quality leads to improvements in consumption and marketing. Moreover, determining the quality of pistachios accurately and easily via smart systems is equally important for preventing economic losses in export and marketing [7,8]. Consequently, new methods and technologies are needed for the separation and classification of pistachios.
The aim of this study is to be able to classify pistachio types by using their images in a quick and effective way. Within the scope of the study,
  • Kirmizi and Siirt pistachio kernels were collected for this study, and all pistachio images were captured with a computer vision system specially designed for this purpose.
  • A dataset of 2148 images was obtained, with collected images of two pistachio types that are commonly grown.
  • Different CNN architectures were compared through classification experiments in order to determine the most suitable and most successful model.
  • A comprehensive analysis of CNN models was carried out and preliminary preparations were made for future studies.
The remainder of this study is organized as follows: Section 2 reviews related studies in the literature; Section 3 describes the image acquisition process, the dataset, the CNN architectures, transfer learning, the confusion matrix, and the performance metrics used in the study; Section 4 presents the experimental results; and Section 5 concludes with an evaluation of the results and suggestions for future work.

2. Related Works

In recent years, there have been many studies in the literature focusing on the classification of agricultural products with deep learning and machine learning methods [9,10,11,12,13]. However, the number of studies about the classification of pistachio types, especially studies where deep learning is used, is quite limited. These studies in the literature are summarized below.
Mahdavi-Jafari et al. introduced an intelligent system for the classification of pistachios (open shell, filled and closed shell, empty and closed shell) using an ANN. The ANN was trained on acoustic signals generated by pistachio impacts on a steel plate. The fast Fourier transform (FFT), discrete cosine transform (DCT), and discrete wavelet transform (DWT) were used for signal processing. According to the results of the study, the proposed method achieves an accuracy above 99.89% [7].
In their study, Farazi et al. used deep learning and machine learning architectures to distinguish open-shelled pistachios from rotten pistachios and shells. A set of 1000 individual images of pistachios and shells was increased to 20,000 by data augmentation. Features were extracted from these images using the AlexNet and GoogleNet architectures, and the 300 most effective features were selected from them with PCA. Finally, these features were given as input to an SVM classifier to perform the classification. The highest classification accuracy of 99% was achieved with the features obtained from the GoogleNet architecture [14].
Omid et al., in their study, proposed a system based on image processing and machine learning techniques to classify peeled pistachios. The peeled pistachios were graded into five classes using artificial neural networks (ANN) and support vector machine (SVM) for classification. While the classification accuracy is 99.4% with ANN, this rate is 99.8% with SVM [15].
In their study, Abbaszadeh et al. used deep auto-encoder neural networks to classify pistachios as defective and flawless. As a result of the study, a classification accuracy of 80.3% was obtained in the detection of defective pistachios [16].
In another study on the classification of defective and perfect pistachios, Dini et al. benefited from pre-trained CNN-based algorithms. The results obtained from a total of 958 images show that classification accuracy of 95.8%, 97.2%, and 95.83% was achieved from the models GoogleNet, ResNet, and VGG16, respectively [17].
In the study carried out by Dheir et al., CNN algorithms were utilized to classify nuts. Within the scope of the study, five types of nuts, namely, chestnut, hazelnut, nut forest, nut pecan, and walnut, were classified with a dataset including 2868 images. As a result, an accuracy of 98% was achieved via pre-trained ConvNet [18].
In their study, Vidyarthi et al. used the random forest (RF) machine learning technique to estimate the size and mass of pistachio kernels. The estimated kernel mass was reported to correlate closely with the manually measured mass at the 95% confidence level [19].
Rahimzadeh and Attar proposed a system with computer vision to determine whether different pistachio types are open-mouth or closed-mouth. CNN-based ResNet50, ResNet152, and VGG16 models were used to extract features and classify pistachio images. The average classification success achieved via these models was 85.28%, 85.19%, and 83.32%, respectively [20].
In the study conducted by Ozkan et al., image processing and machine learning techniques were utilized to classify two different types of pistachios. A total of 16 features were extracted from the pistachio images. At the same time, principal component analysis (PCA) was applied to improve accuracy, reduce the number of features, and improve the distribution of samples. For classification, the k-nearest neighbors (KNN) algorithm was used and a classification accuracy of 94.18% was obtained [21]. Table 1 summarizes the results obtained from studies conducted on pistachio.

3. Materials and Methods

3.1. Image Acquisition

Images of the Kirmizi and Siirt pistachio types were used in this study. Pistachio kernels were collected specifically for this purpose. In the computer vision system developed for this study, the pistachios were placed in a special lighting box and imaged with a Prosilica GT2000C camera. The computer vision system provides a high level of repeatability at a relatively low cost and, more importantly, provides high-quality images without compromising accuracy. Equipped with a CMOS sensor, the Prosilica GT2000C is an RGB camera capable of capturing images at a resolution of 2048 × 1088 and a maximum frame rate of 53.7 fps. During image acquisition, the camera was fixed at a set distance above a background surface in order to prevent shadows. These images were used by the authors with different features in a different study [21].

3.2. Pistachio Image Dataset

A total of 2148 images were obtained, 1232 of Kirmizi pistachios and 916 of Siirt pistachios. The acquired images were used in their original form for the deep learning models [21]. Each image was sized at 600 × 600 pixels. Sample pistachio images from the dataset are given in Figure 1.
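As an illustration of the split used later in the study, the following sketch reproduces an 80/20 partition of 2148 labeled samples. The file names and random seed are hypothetical placeholders, not the actual dataset paths or the authors' procedure.

```python
import random

def train_test_split(samples, test_ratio=0.2, seed=42):
    """Shuffle and split a list of (image, label) pairs."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_ratio)
    return shuffled[n_test:], shuffled[:n_test]

# 1232 Kirmizi and 916 Siirt samples, as in the dataset description.
samples = [(f"kirmizi_{i}.jpg", "Kirmizi") for i in range(1232)] + \
          [(f"siirt_{i}.jpg", "Siirt") for i in range(916)]
train, test = train_test_split(samples)
print(len(train), len(test))  # 1719 429
```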

3.3. Convolutional Neural Network

Deep learning, a popular machine learning technique studied broadly in recent years, is a multi-layered method used to extract and define features from large amounts of data [22]. It contains separate layers with specific tasks, such as the convolution, activation, pooling, flatten, and fully connected layers [23].
Convolution layer: The convolution layer is a functional layer used to extract features from input data. The input is scanned with a defined filter, and the data are transformed into a feature space via a locally weighted sum [24]. In the first convolution layer, which is directly connected to the input image, low-level features such as colors and edges are extracted [25].
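The locally weighted sum described above can be sketched as a naive 2D convolution. This is illustrative only; the filter values are hypothetical, and real frameworks use optimized implementations.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide the kernel over the image; each output is a weighted sum."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# A vertical-edge filter responds where intensity changes left-to-right.
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
edge_kernel = np.array([[-1, 1],
                        [-1, 1]], dtype=float)
print(conv2d(image, edge_kernel))  # strongest response at the edge column
```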
Activation Layer (Non-linearity layer): An activation layer is the layer on which a nonlinear function is applied for each pixel on the images [26]. In recent studies, the rectified linear units (ReLu) activation function has started to be used instead of the most commonly used sigmoid and hyperbolic tangent activation functions [27].
Pooling (down-sampling) layer: The pooling layer, another building block of the CNN architecture, reduces the number of parameters and the amount of computation in the network. This has two benefits: the first is to reduce the amount of computation for the following layers, and the second is to help prevent the network from memorizing (overfitting). Average, maximum, sum, and mean pooling are the methods commonly used in the pooling layer [28].
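Max pooling, one of the methods listed above, can be sketched as follows; the feature-map values are illustrative.

```python
import numpy as np

def max_pool(feature_map, size=2, stride=2):
    """2x2 max pooling with stride 2 halves each spatial dimension."""
    h = (feature_map.shape[0] - size) // stride + 1
    w = (feature_map.shape[1] - size) // stride + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = feature_map[i*stride:i*stride+size,
                                    j*stride:j*stride+size].max()
    return out

fm = np.array([[1, 3, 2, 0],
               [4, 6, 1, 2],
               [7, 2, 9, 3],
               [1, 0, 4, 5]], dtype=float)
print(max_pool(fm))  # [[6. 2.] [7. 9.]]
```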
Flatten layer: The task of this layer is simply to prepare the input data for the last layers. Since the fully connected layers take input data as one-dimensional arrays, this is the layer where matrix-type data from the previous layers are converted into one-dimensional arrays. As the whole feature map is represented by a single row, this process is called flattening [28].
Fully connected layers: This layer is dependent on all fields of the previous layer. The number of this layer may vary in different architectures. At the nodes in these layers, the features are kept, and the learning process is carried out by changing the weight and bias values. This layer, which is responsible for performing the actual processing by taking input from all the various feature extraction stages, analyzes the outputs of all the processing layers [29].
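The flatten and fully connected steps described above can be sketched together as follows. The weights and bias are random stand-ins, not learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
feature_map = rng.standard_normal((4, 4))   # output of earlier layers

flattened = feature_map.reshape(-1)         # flatten: 16-element vector
weights = rng.standard_normal((2, 16))      # fully connected: 2 outputs
bias = np.zeros(2)
logits = weights @ flattened + bias         # weighted sum plus bias

# Softmax converts the two logits into class probabilities.
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(flattened.shape, logits.shape)
```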

3.4. Transfer Learning

Transfer learning, which aims to use the knowledge learned during the training phase to solve different or similar problems, has recently become more popular. Transfer learning is basically the process of transferring to a network weights previously learned from a large dataset [30]. Using a pre-trained CNN architecture also allows researchers to feed the features extracted before the fully connected layers into different classifiers. This pre-training can yield positive results in terms of performance [31].
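A minimal sketch of this idea: the pre-trained layers are kept frozen as a feature extractor, and only a new head is trained on the target task. The "pre-trained" weights below are random stand-ins, not actual learned parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a pre-trained layer; in transfer learning it stays frozen.
frozen_weights = rng.standard_normal((64, 128))

def extract_features(x):
    """Frozen feature extractor: these weights are never updated."""
    return np.maximum(frozen_weights @ x, 0.0)   # ReLU activation

# New trainable head with 2 outputs (Kirmizi vs. Siirt).
head = np.zeros((2, 64))

def predict(x):
    return head @ extract_features(x)

# Only `head` would receive gradient updates during fine-tuning.
x = rng.standard_normal(128)
print(predict(x).shape)  # (2,)
```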

3.5. Pre-Trained CNN Models

There are many CNN architectures in the literature. Classification successes, model sizes, and speeds are taken into account when deciding which of these architectures to use. The models used in the study were decided after many trials, and the most successful and fastest models were preferred.

3.5.1. AlexNet

Developed by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton, AlexNet was the first CNN architecture to become widely popular [32]. AlexNet performed significantly well in the 2012 ILSVRC competition. With an architecture similar to LeNet, AlexNet is one of the deepest and most important architectures in which all convolution layers are brought together [33].

3.5.2. VGG16

The VGG16 architecture, developed by Karen Simonyan and Andrew Zisserman, was introduced at the ILSVRC 2014 competition. The key insight of this architecture is that network depth can be a major factor in achieving better performance. VGG16 has 16 convolutional and fully connected layers in total. Its difference from the AlexNet architecture is that it uses fixed-size filters. Owing to the extensive training of the network, VGG16 can achieve excellent accuracy even on datasets with a small number of images [34,35].

3.5.3. VGG19

VGG19 has a total of 24 main layers: 16 convolutional, 5 pooling, and 3 fully connected layers. The convolution layers use 3 × 3 filters to preserve spatial dimensions, and each is followed by the ReLU activation function. Max pooling is applied between convolution blocks to reduce the spatial dimension, using a 2 × 2 filter with a stride of 2 and no padding [36,37].

3.6. Confusion Matrix

The confusion matrix is utilized to evaluate the predictive performance on training and test data. The values in the matrix are widely used to measure performance in classification problems [38]. The confusion matrix of the two-class classification problem in this study is shown in Table 2. The meanings of the rows and columns are as follows.
  • TP: True Positive. Examples where the true value of the model is 1 and the predicted value is 1.
  • TN: True Negative. Examples where the true value of the model is 0 and the predicted value is 0.
  • FP: False Positive. Examples where the true value of the model is 0 and the predicted value is 1.
  • FN: False Negative. Examples where the true value of the model is 1 and the predicted value is 0.
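The four counts defined above can be computed from label/prediction pairs as follows; the labels below are illustrative, not from the experiments.

```python
def confusion_matrix(y_true, y_pred):
    """Count TP, TN, FP, FN for a two-class problem (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
print(confusion_matrix(y_true, y_pred))  # (3, 3, 1, 1)
```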

3.7. Performance Metrics

Performance metrics are used to evaluate classifier performances. Within the scope of this study, true-positive (TP), true-negative (TN), false-positive (FP), and false-negative (FN) values, and basic performance metrics of accuracy, F-1 score, sensitivity, precision, and specificity, were calculated [39,40,41,42]. The calculation of these metrics is shown in Table 3.
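The metrics in Table 3 follow directly from the confusion-matrix counts. The counts below are illustrative, not the study's actual results.

```python
def metrics(tp, tn, fp, fn):
    """Compute the basic metrics from confusion-matrix counts."""
    accuracy    = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)     # recall / true-positive rate
    specificity = tn / (tn + fp)     # true-negative rate
    precision   = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, precision, f1

acc, sens, spec, prec, f1 = metrics(tp=240, tn=185, fp=2, fn=2)
print(round(acc, 4))  # 0.9907
```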

3.8. ROC and AUC

One of the commonly used techniques to evaluate a classifier’s performance is the receiver operating characteristic (ROC) curve. The x-axis of this curve represents 1 − specificity (the false-positive rate) and the y-axis represents sensitivity. The area under the curve (AUC) is a generally accepted measure of how well the classifier separates the classes on the test data. An AUC value close to 1 indicates that the model performs well [43].
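A minimal sketch of how an ROC curve and its AUC can be computed from predicted scores: sweep a threshold over the scores, trace (false-positive rate, true-positive rate) pairs, and integrate with the trapezoidal rule. The scores and labels are illustrative, and score ties are ignored for simplicity.

```python
def roc_auc(y_true, scores):
    """AUC via the trapezoidal rule over the empirical ROC curve."""
    pairs = sorted(zip(scores, y_true), reverse=True)
    pos = sum(y_true)
    neg = len(y_true) - pos
    tpr, fpr, tp, fp = [0.0], [0.0], 0, 0
    for _, label in pairs:          # lower the threshold one step at a time
        if label == 1:
            tp += 1
        else:
            fp += 1
        tpr.append(tp / pos)
        fpr.append(fp / neg)
    # Trapezoidal integration of TPR over FPR.
    return sum((fpr[i+1] - fpr[i]) * (tpr[i+1] + tpr[i]) / 2
               for i in range(len(fpr) - 1))

y_true = [1, 1, 0, 1, 0, 0]
scores = [0.9, 0.8, 0.7, 0.6, 0.3, 0.1]
print(roc_auc(y_true, scores))
```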

4. Experimental Results

In this section, classification analyses of three different models created with transfer learning are performed in order to classify pistachio images. The models were trained on the pistachio images using the transfer learning method with the pre-trained AlexNet, VGG16, and VGG19 models. The outputs of the models were adjusted to two classes, in accordance with the pistachio dataset used in the study. The experiments were carried out on a computer with an Intel i5 10200H 2.4 GHz processor, 16 GB RAM, and a GTX 1650 Ti graphics card. The working environment was MATLAB 2020b. The models were trained for eight epochs with a mini-batch size of 11. The learning rate was set to 0.0001 and the SGDM optimization method was chosen as the solver.
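The SGDM (stochastic gradient descent with momentum) update used as the solver can be sketched on a toy quadratic loss, with the learning rate of 0.0001 stated above. The momentum value of 0.9 is an assumption (a common default); the paper does not state it, and the loss here is not the network's actual loss.

```python
def sgdm_step(w, velocity, grad, lr=0.0001, momentum=0.9):
    """One SGDM update: accumulate velocity, then move the weight."""
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

# Minimize f(w) = w^2 (gradient 2w) starting from w = 1.0.
w, v = 1.0, 0.0
for _ in range(1000):
    w, v = sgdm_step(w, v, grad=2 * w)
print(abs(w) < 1.0)  # the weight has moved toward the minimum at 0
```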
The pistachio dataset used in the training of AlexNet, VGG16, and VGG19 models is divided as 80% training and 20% validation. Figure 2 shows the block diagram of the processes from data acquisition to classification.
Due to the difference in the number of layers in the AlexNet, VGG16, and VGG19 architectures, the learning level and classification success of the models may vary. When a new CNN model is created from scratch, determining the number of convolution, pooling, and activation layers and their parameter values can take a long time. Hence, new models were created via the transfer learning method by using previously trained models with high classification success; by making changes in the layers of these existing architectures, the models can be trained successfully.
In order to train on the pistachio images using the weights of the pre-trained models, some layers needed to be revised. In all models, the fc8 layer takes a 4096-dimensional input. Since there were two classes in the dataset used in the study, the number of outputs of the fc8 layer was changed to two for all models. As the softmax and classification layers are connected to the fc8 layer, these layers were also replaced in all models.
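The head replacement described above can be sketched as resizing the classifier matrix from the original 1000 ImageNet outputs to the 2 pistachio classes. All weights here are random or zero stand-ins, not the actual pre-trained values.

```python
import numpy as np

features = 4096                              # fc8 input dimension
old_head = np.zeros((1000, features))        # original ImageNet classifier
# Replacement head: freshly initialized, 2 outputs for Kirmizi / Siirt.
new_head = 0.01 * np.random.default_rng(0).standard_normal((2, features))

x = np.ones(features)                        # a stand-in fc7 feature vector
print(old_head.shape, (new_head @ x).shape)  # (1000, 4096) (2,)
```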
AlexNet architecture consists of one image input, five convolution, seven ReLu, two normalization, five pooling, three fully connected, two dropout, one softmax, and one classification layers. Figure 3 shows the fine-tuning performed on the AlexNet model used in the study.
The VGG16 architecture has 1 image input, 13 convolution, 12 ReLu, 1 normalization, 5 pooling, 3 fully connected, 2 dropout, 1 softmax, and 1 classification layer. Figure 4 shows the fine-tuning performed on the VGG16 model. Unlike the VGG16 architecture, the VGG19 architecture has extra convolution and ReLu layers in blocks 3, 4, and 5. Figure 5 shows the fine-tuning performed on the VGG19 model.
As can be seen in Figure 3, Figure 4 and Figure 5, the number of layers in all three models is different. Therefore, the features extracted from the images also differ. After each layer of the models, different features of the image are extracted and classified. Figure 6 gives the views of some activation maps of the sample image given as input to AlexNet, VGG16, and VGG19 models.
As a result of fine-tuning performed on AlexNet, VGG16, and VGG19 architectures, the models were trained by using pistachio images via the transfer learning method. In order to objectively measure the performance of the models, the parameters were not changed during the training. The training times also differed due to the different number of layers in the models. Table 4 shows the training times of the three models.
As a result of the training of all three models, a classification success of 94.42%, 98.84%, and 98.14% was obtained from the AlexNet, VGG16, and VGG19 models, respectively. Figure 7 shows the training, validation, and loss graphs for all three models.
The dataset is divided as 80% training and 20% testing. Figure 8 shows the confusion matrices obtained as a result of the classifications performed through the models.
When Figure 8 is examined, it can be seen that the number of misclassified pistachio images is 23 in the AlexNet model, while this number is 5 in the VGG16 model and 8 in the VGG19 model. The number of correctly classified pistachio images in the AlexNet, VGG16, and VGG19 models is 406, 425, and 422, respectively. The models’ performance metrics were calculated from the confusion matrix data; they are shown in Table 5, and the graphs for these metrics are given in Figure 9.
According to Table 5, the highest classification success belongs to the VGG16 model. In the VGG16, it can be seen that sensitivity, specificity, precision, and F-1 score metrics are also the highest as in the classification success. Table 6 shows the classification success of the models in percent.
ROC curves provide information about the learning levels of the models. Figure 10 gives the ROC curves of the models created by using sensitivity and specificity metrics.
Figure 10 shows the ROC curves of all models. AUC, the area under the ROC curve, shows the learning level of the models. The AUC takes a value between 0 and 1. It is seen that the closer the AUC value is to 1, the higher the learning level of the model. The AUC values of AlexNet, VGG16, and VGG19 models are 0.989, 1, and 0.996, respectively.

5. Discussion and Conclusions

This study aimed to classify pistachio images by using three different convolutional neural network models. The dataset of the study includes a total of 2148 images of the Kirmizi and Siirt pistachio types. Pre-trained CNN architectures with proven classification success were used to create the models: AlexNet, VGG16, and VGG19 were fine-tuned and trained on the pistachio images. The dataset was split into 80% training and 20% test. As a result of the performed classifications, classification successes of 94.42%, 98.84%, and 98.14% were obtained from the AlexNet, VGG16, and VGG19 models, respectively, so the highest classification success was achieved with the VGG16 model. Confusion matrices and performance metrics were used to analyze the models’ performances in more detail, and the highest values in these metrics were also obtained with the VGG16 model. In CNN architectures, the number of layers is not always directly proportional to classification success; high classification success can be achieved with models containing the optimum number of layers for the dataset used. The VGG16 architecture emerged as the most suitable CNN architecture for the pistachio dataset, which is why the highest classification success was obtained from it. The confusion matrices show how many samples of each pistachio type were classified correctly or incorrectly.
When the literature is examined, there are studies on the classification of open- and closed-shell pistachios [7,20], defective pistachios [14,16], peeled pistachios [15], and pistachio varieties with machine learning [21]. Only one of these studies is directly comparable to ours [21]. Ozkan, Koklu, and Saracoglu achieved a highest classification success of 94.18% in their study; in this study, the highest classification success was 98.84%.
The methods used in this study can serve as an example for future studies in this field. On the other hand, it is possible to achieve different classification successes by utilizing different artificial intelligence methods, and, depending on the number of images in the dataset, different classification successes can be obtained from different models. Pistachios can thus be distinguished quickly and effectively. A broader classification study can be conducted by collecting images of different pistachio types, and the application can be made mobile and used to determine the pistachio type in the field.

Author Contributions

Conceptualization, D.S. and Y.S.T.; methodology, M.K.; software, I.C.; validation, R.K., I.A.O. and I.C.; formal analysis, Y.S.T.; investigation, M.K.; resources, R.K.; data curation, I.A.O. and M.K.; writing—original draft preparation, Y.S.T.; writing—review and editing, D.S.; visualization, R.K. and H.-N.L.; supervision, I.C.; project administration, M.K.; funding acquisition, H.-N.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Research Foundation of Korea (NRF) Grant funded by the Korean government (MSIP) (NRF-2021R1A2B5B03002118), and this research was supported by the Ministry of Science and ICT (MSIT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2021-0-01835) supervised by the IITP (Institute of Information & Communications Technology Planning & Evaluation).

Data Availability Statement

The dataset used in the study can be accessed from the link https://www.muratkoklu.com/datasets/Pistachio_Image_Dataset.zip (accessed on 15 March 2022).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Saglam, C.; Cetin, N. Prediction of Pistachio (Pistacia vera L.) Mass Based on Shape and Size Attributes by Using Machine Learning Algorithms. Food Anal. Methods 2022, 15, 739–750. [Google Scholar] [CrossRef]
  2. Parfitt, D.; Kafkas, S.; Batlle, I.; Vargas, F.J.; Kallsen, C.E. Pistachio. In Fruit Breeding; Springer: Berlin/Heidelberg, Germany, 2012; pp. 803–826. [Google Scholar] [CrossRef]
  3. Barghchi, M.; Alderson, P. Pistachio (Pistacia vera L.). In Trees II; Springer: Berlin/Heidelberg, Germany, 1989; pp. 68–98. [Google Scholar] [CrossRef]
  4. Kashaninejad, M.; Mortazavi, A.; Safekordi, A.; Tabil, L. Some physical properties of Pistachio (Pistacia vera L.) nut and its kernel. J. Food Eng. 2006, 72, 30–38. [Google Scholar] [CrossRef]
  5. Bonifazi, G.; Capobianco, G.; Gasbarrone, R.; Serranti, S. Contaminant detection in pistachio nuts by different classification methods applied to short-wave infrared hyperspectral images. Food Control 2021, 130, 108202. [Google Scholar] [CrossRef]
  6. Maskan, M.; Şükrü, K. Fatty acid oxidation of pistachio nuts stored under various atmospheric conditions and different temperatures. J. Sci. Food Agric. 1998, 77, 334–340. [Google Scholar] [CrossRef]
  7. Mahdavi-Jafari, S.; Salehinejad, H.; Talebi, S. A Pistachio Nuts Classification Technique: An ANN Based Signal Processing Scheme. In Proceedings of the 2008 International Conference on Computational Intelligence for Modelling Control & Automation, Vienna, Austria, 10–12 December 2008; IEEE: New York City, NY, USA, 2008. [Google Scholar] [CrossRef]
  8. Mahmoudi, A.; Omid, M.; Aghagolzadeh, A. Artificial neural network based separation system for classifying pistachio nuts varieties. In Proceedings of the International Conference on Innovations in Food and Bioprocess Technologies, Pathum Thani, Thailand, 12 December 2006. [Google Scholar]
  9. Khan, M.A.; Alqahtani, A.; Khan, A.; Alsubai, S.; Binbusayyis, A.; Ch, M.M.I.; Yong, H.-S.; Cha, J. Cucumber Leaf Diseases Recognition Using Multi Level Deep Entropy-ELM Feature Selection. Appl. Sci. 2022, 12, 593. [Google Scholar] [CrossRef]
  10. Yasmeen, U.; Khan, M.A.; Tariq, U.; Khan, J.A.; Yar, M.A.E.; Hanif, C.A.; Mey, S.; Nam, Y. Citrus Diseases Recognition Using Deep Improved Genetic Algorithm. Comput. Mater. Contin. 2021, 71, 3667–3684. [Google Scholar] [CrossRef]
  11. Shah, F.A.; Khan, M.A.; Sharif, M.; Tariq, U.; Khan, A.; Kadry, S.; Thinnukool, O. A Cascaded Design of Best Features Selection for Fruit Diseases Recognition. Comput. Mater. Contin. 2022, 70, 1491–1507. [Google Scholar] [CrossRef]
  12. Koklu, M.; Cinar, I.; Taspinar, Y.S. Classification of rice varieties with deep learning methods. Comput. Electron. Agric. 2021, 187, 106285. [Google Scholar] [CrossRef]
  13. Cinar, I.; Koklu, M. Classification of Rice Varieties Using Artificial Intelligence Methods. Int. J. Intell. Syst. Appl. Eng. 2019, 7, 188–194. [Google Scholar] [CrossRef] [Green Version]
  14. Farazi, M.; Abbas-Zadeh, M.J.; Moradi, H. A machine vision based pistachio sorting using transferred mid-level image representation of Convolutional Neural Network. In Proceedings of the 2017 10th Iranian Conference on Machine Vision and Image Processing (MVIP), Isfahan, Iran, 22–23 November 2017; IEEE: New York City, NY, USA, 2017. [Google Scholar] [CrossRef]
  15. Omid, M.; Firouz, M.S.; Nouri-Ahmadabadi, H.; Mohtasebi, S.S. Classification of peeled pistachio kernels using computer vision and color features. Eng. Agric. Environ. Food 2017, 10, 259–265.
  16. Abbaszadeh, M.; Rahimifard, A.; Eftekhari, M.; Zadeh, H.G.; Fayazi, A.; Dini, A.; Danaeian, M. Deep Learning-Based Classification of the Defective Pistachios via Deep Autoencoder Neural Networks. arXiv 2019, arXiv:1906.11878.
  17. Dini, A.; Zadeh, H.G.; Rahimifard, A.; Fayazi, A.; Eftekhari, M.; Abbaszadeh, M. Designing a Hardware System to Separate Defective Pistachios from Healthy Ones Using Deep Neural Networks. Iran. J. Biosyst. Eng. 2020, 51, 149–159.
  18. Dheir, I.M.; Mettleq, A.S.A.; Elsharif, A.A. Nuts Types Classification Using Deep Learning. Int. J. Acad. Inf. Syst. Res. 2020, 3, 12–17.
  19. Vidyarthi, S.K.; Tiwari, R.; Singh, S.K.; Xiao, H. Prediction of size and mass of pistachio kernels using random forest machine learning. J. Food Process Eng. 2020, 43, e13473.
  20. Rahimzadeh, M.; Attar, A. Detecting and counting pistachios based on deep learning. Iran J. Comput. Sci. 2021, 5, 69–81.
  21. Ozkan, I.A.; Koklu, M.; Saraçoğlu, R. Classification of Pistachio Species Using Improved K-NN Classifier. Health 2021, 23, e2021044.
  22. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90.
  23. Dandıl, E.; Polattimur, R. Dog behavior recognition and tracking based on faster R-CNN. J. Fac. Eng. Archit. Gazi Univ. 2020, 35, 819–834.
  24. Chen, H.; Chen, A.; Xu, L.; Xie, H.; Qiao, H.; Lin, Q.; Cai, K. A deep learning CNN architecture applied in smart near-infrared analysis of water pollution for agricultural irrigation resources. Agric. Water Manag. 2020, 240, 106303.
  25. Arsa, D.M.S.; Susila, A.A.N.H. VGG16 in batik classification based on random forest. In Proceedings of the 2019 International Conference on Information Management and Technology (ICIMTech), Jakarta/Bali, Indonesia, 19–20 August 2019; IEEE: New York, NY, USA, 2019.
  26. Bayar, B.; Stamm, M.C. A deep learning approach to universal image manipulation detection using a new convolutional layer. In Proceedings of the 4th ACM Workshop on Information Hiding and Multimedia Security, Vigo, Spain, 20–22 June 2016.
  27. Glorot, X.; Bordes, A.; Bengio, Y. Deep sparse rectifier neural networks. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, Fort Lauderdale, FL, USA, 11–13 April 2011.
  28. Akhtar, N.; Ragavendran, U. Interpretation of intelligence in CNN-pooling processes: A methodological survey. Neural Comput. Appl. 2020, 32, 879–898.
  29. Habib, G.; Qureshi, S. Optimization and Acceleration of Convolutional Neural Networks: A Survey. J. King Saud Univ.-Comput. Inf. Sci. 2020.
  30. Yosinski, J.; Clune, J.; Bengio, Y.; Lipson, H. How Transferable Are Features in Deep Neural Networks? arXiv 2014, arXiv:1411.1792.
  31. Kolar, Z.; Chen, H.; Luo, X. Transfer learning and deep convolutional neural networks for safety guardrail detection in 2D images. Autom. Constr. 2018, 89, 58–70.
  32. Smirnov, E.A.; Timoshenko, D.M.; Andrianov, S.N. Comparison of regularization methods for ImageNet classification with deep convolutional neural networks. AASRI Procedia 2014, 6, 89–94.
  33. Sakib, S.; Ahmed, N.; Kabir, A.J.; Ahmed, H. An overview of convolutional neural network: Its architecture and applications. Preprints 2019, 2018110546.
  34. Theckedath, D.; Sedamkar, R. Detecting affect states using VGG16, ResNet50 and SE-ResNet50 networks. SN Comput. Sci. 2020, 1, 1–7.
  35. Koklu, M.; Cinar, I.; Taspinar, Y.S. CNN-based bi-directional and directional long-short term memory network for determination of face mask. Biomed. Signal Process. Control 2022, 71, 103216.
  36. Carvalho, T.; de Rezende, E.R.S.; Alves, M.T.P.; Balieiro, F.K.C.; Sovat, R.B. Exposing Computer Generated Images by Eye’s Region Classification via Transfer Learning of VGG19 CNN. In Proceedings of the 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), Cancun, Mexico, 18–21 December 2017; IEEE: New York, NY, USA, 2017.
  37. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
  38. Tutuncu, K.; Cataltas, O.; Koklu, M. Occupancy detection through light, temperature, humidity and CO2 sensors using ANN. Int. J. Ind. Electron. Electr. Eng. 2016, 5, 63–67.
  39. Koklu, M.; Tutuncu, K. Classification of chronic kidney disease with most known data mining methods. Int. J. Adv. Sci. Eng. Technol. 2017, 5, 14–18.
  40. Acharya, U.R.; Fernandes, S.L.; WeiKoh, J.E.; Ciaccio, E.J.; Fabell, M.K.M.; Tanik, U.J.; Rajinikanth, V.; Yeong, C.H. Automated Detection of Alzheimer’s Disease Using Brain MRI Images—A Study with Various Feature Extraction Techniques. J. Med. Syst. 2019, 9, 302.
  41. Rajinikanth, V.; Raj, A.N.J.; Thanaraj, K.P.; Naik, G.R. A Customized VGG19 Network with Concatenation of Deep and Handcrafted Features for Brain Tumor Detection. Appl. Sci. 2020, 10, 3429.
  42. Koklu, M.; Kursun, R.; Taspinar, Y.S.; Cinar, I. Classification of Date Fruits into Genetic Varieties Using Image Analysis. Math. Probl. Eng. 2021, 2021, 4793293.
  43. Taspinar, Y.S.; Cinar, I.; Koklu, M. Classification by a stacking model using CNN features for COVID-19 infection diagnosis. J. X-ray Sci. Technol. 2022, 30, 73–88.
Figure 1. Examples of pistachio images used in the study.
Figure 2. Block diagram showing the operation of the study.
Figure 3. Transfer learning process for AlexNet.
Figure 4. Transfer learning process for VGG16.
Figure 5. Transfer learning process for VGG19.
Figure 6. Some activation maps of AlexNet (a), VGG16 (b), and VGG19 (c) models.
Figure 7. Training–validation accuracy and loss graphs of the developed models (a): AlexNet, (b): VGG16, (c): VGG19.
Figure 8. Confusion matrix of all models.
Figure 9. Comparison of performance metrics.
Figure 10. ROC curves of all models.
Table 1. Pistachio studies in the literature.
| No | Data Pieces | Classes | Classifier | Accuracy (%) | References |
|----|-------------|---------|------------|--------------|------------|
| 1 | 150 | 3 | ANN | 99.89 | Mahdavi-Jafari, Salehinejad, and Talebi (2008) |
| 2 | 1000 | 3 | AlexNet+SVM / GoogleNet+SVM | 98 / 99 | Farazi, Abbas-Zadeh, and Moradi (2017) |
| 3 | 850 | 5 | ANN / SVM | 99.40 / 99.80 | Omid et al. (2017) |
| 4 | 305 | 2 | Deep autoencoder neural networks | 80.30 | Abbaszadeh et al. (2019) |
| 5 | 958 | 2 | GoogleNet / ResNet / VGG16 | 95.80 / 97.20 / 95.83 | Dini et al. (2020) |
| 6 | 2868 | 5 | ConvNet | 98 | Dheir, Abu Mettleq, and Elsharif (2020) |
| 7 | 3927 | 2 | ResNet50 / ResNet152 / VGG16 | 85.28 / 85.19 / 83.32 | Rahimzadeh and Attar (2021) |
| 8 | 2148 | 2 | KNN | 94.18 | Ozkan, Koklu, and Saracoglu (2021) |
Table 2. Two-class confusion matrix.
|                           | True Class: Positive (P) | True Class: Negative (N) |
|---------------------------|--------------------------|--------------------------|
| Predicted Class: Positive | TP (True Positive)       | FP (False Positive)      |
| Predicted Class: Negative | FN (False Negative)      | TN (True Negative)       |
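The four cells of Table 2 are obtained by tallying each test prediction against its true label. A minimal sketch of that tally is below; the 1/0 encoding of the two pistachio classes is an assumption for illustration, not a detail taken from the paper.

```python
def confusion_counts(y_true, y_pred):
    """Tally a two-class confusion matrix (1 = positive class, 0 = negative class)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

# Toy example: 5 test samples.
tp, fp, fn, tn = confusion_counts([1, 1, 1, 0, 0], [1, 0, 1, 0, 1])
```

With these four counts in hand, all of the performance metrics used in the study follow by simple arithmetic.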
Table 3. Formulas for performance metrics.
| Performance Metric | Formula |
|--------------------|---------|
| Accuracy     | (TP + TN) / (TP + TN + FP + FN) |
| F-1 Score    | 2TP / (2TP + FP + FN) |
| Sensitivity  | TP / (TP + FN) |
| Precision    | TP / (TP + FP) |
| Specificity  | TN / (TN + FP) |
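The formulas in Table 3 can be evaluated directly from the confusion-matrix counts. A small sketch follows; the example counts are hypothetical and are not results from the paper.

```python
def performance_metrics(tp, tn, fp, fn):
    """Compute the five metrics of Table 3 from confusion-matrix counts."""
    total = tp + tn + fp + fn
    return {
        "accuracy": (tp + tn) / total,
        "f1_score": 2 * tp / (2 * tp + fp + fn),
        "sensitivity": tp / (tp + fn),  # true-positive rate (recall)
        "precision": tp / (tp + fp),    # positive predictive value
        "specificity": tn / (tn + fp),  # true-negative rate
    }

# Hypothetical counts for illustration only.
m = performance_metrics(tp=90, tn=80, fp=20, fn=10)
```

Note that accuracy alone can hide class imbalance (here 1232 Kirmizi vs. 916 Siirt images), which is why the paper also reports sensitivity, specificity, precision, and F-1 score.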
Table 4. Training times of AlexNet, VGG16, and VGG19 models.
|              | AlexNet     | VGG16        | VGG19       |
|--------------|-------------|--------------|-------------|
| Elapsed Time | 17 min 7 s  | 90 min 28 s  | 99 min 0 s  |
Table 5. Performance metrics of all models.
| Metric      | AlexNet | VGG16  | VGG19  |
|-------------|---------|--------|--------|
| Accuracy    | 0.9442  | 0.9884 | 0.9814 |
| Sensitivity | 0.9869  | 0.9956 | 0.9913 |
| Specificity | 0.8955  | 0.9801 | 0.9701 |
| Precision   | 0.9150  | 0.9828 | 0.9742 |
| F-1 Score   | 0.9496  | 0.9884 | 0.9827 |
Table 6. Classification accuracy of all models (%).
|          | AlexNet | VGG16 | VGG19 |
|----------|---------|-------|-------|
| Accuracy | 94.42   | 98.84 | 98.14 |

Share and Cite

MDPI and ACS Style

Singh, D.; Taspinar, Y.S.; Kursun, R.; Cinar, I.; Koklu, M.; Ozkan, I.A.; Lee, H.-N. Classification and Analysis of Pistachio Species with Pre-Trained Deep Learning Models. Electronics 2022, 11, 981. https://doi.org/10.3390/electronics11070981

