Article

Bionic Electronic Nose Based on MOS Sensors Array and Machine Learning Algorithms Used for Wine Properties Detection

1 School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China
2 COFCO Huaxia Greatwall Wine Co., Ltd. No. 555, Changli 066600, China
3 School of Artificial Intelligence, Hebei University of Technology, Tianjin 300130, China
4 Beijing Advanced Innovation Center for Soft Matter Science and Engineering, Beijing University of Chemical Technology, Beijing 100029, China
5 Department of Chemistry, Institute of Inorganic and Analytical Chemistry, Goethe-University, 60438 Frankfurt, Germany
* Author to whom correspondence should be addressed.
Sensors 2019, 19(1), 45; https://doi.org/10.3390/s19010045
Submission received: 28 November 2018 / Revised: 19 December 2018 / Accepted: 21 December 2018 / Published: 22 December 2018
(This article belongs to the Special Issue Biomimetic Sensor Arrays)

Abstract

In this study, a portable electronic nose (E-nose) prototype is developed using metal oxide semiconductor (MOS) sensors to detect odors of different wines. Odor detection facilitates the distinction of wines with different properties, including areas of production, vintage years, fermentation processes, and varietals. Four popular machine learning algorithms—extreme gradient boosting (XGBoost), random forest (RF), support vector machine (SVM), and backpropagation neural network (BPNN)—were used to build identification models for different classification tasks. Experimental results show that BPNN achieved the best performance, with accuracies of 94% and 92.5% in identifying production areas and varietals, respectively; and SVM achieved the best performance in identifying vintages and fermentation processes, with accuracies of 67.3% and 60.5%, respectively. Results demonstrate the effectiveness of the developed E-nose, which could be used to distinguish different wines based on their properties following selection of an optimal algorithm.

1. Introduction

Wine is one of the most popular drinks in the world and plays an important social role at the table. Approximately 24.3 billion liters of wine were consumed in 2017 (International Organisation of Vine and Wine), with the United States the world's largest consumer at 3.26 billion liters and China ranked fifth at 1.79 billion liters. Given this vast consumer market, wine identification and classification have gained increasing attention as a means of detecting mislabeling, since wine sale prices vary widely with vintage year, fermentation process, age, varietal, and geographical origin [1].
Aroma is an important indicator that cannot be ignored when assessing wine quality in a timely manner during production. It is composed of hundreds of volatile chemical compounds at different concentrations that are closely related to wine attributes [2]. Distinguishing wines is typically challenging because of the complexity and heterogeneity of their headspace [3]. However, wine classification is essential for preserving the high economic value of wine products, protecting wine quality, preventing illegal labeling, guaranteeing quality in the import–export market, and controlling beverage processing [4]. Although gas chromatography, mass spectrometry, and other methods are acceptable means of volatile analysis of wine, they are time-consuming and labor-intensive [5].
The electronic nose (E-nose), an apparatus designed to mimic human olfactory perception, has recently become a powerful tool in the food industry [6,7,8,9,10] and other fields [11,12,13]. Regarding wine quality detection, Wei et al. [14] reported an E-nose application to distinguish between wines aged in oak barrels and others. Lozano et al. [15] developed an in-situ, on-line E-nose system for monitoring wine preservation and evolution in tanks in real time. Most relevant reports have focused on single properties of wine, whereas systematic analysis using an E-nose is lacking.
In this study, an E-nose prototype was developed to identify wines with different areas of production, vintage years, fermentation processes, and varietals. The device was mainly composed of a sensor array and an STM32F4 series-based microcontroller unit (MCU). Support vector machine (SVM) [16], random forest (RF) [17], extreme gradient boosting (XGBoost) [18], and backpropagation neural network (BPNN) algorithms were employed for pattern recognition to analyze volatile odorants in wines. The aim of this study was to classify wines with different attributes using the developed E-nose system. The remainder of this paper is organized as follows. First, we outline the development of a portable E-nose system based on a metal oxide semiconductor (MOS) sensor array and an STM32F4 MCU; then, we present an analytical method for evaluating wines according to their properties using the embedded E-nose system.

2. Materials and Methods

2.1. Independently Developed E-Nose Prototype

2.1.1. Sensor Array

In this work, the sensor array used in the E-nose was composed of six different metal oxide semiconductor (MOS) sensors assembled in an acrylic box. The selected sensors were manufactured by Figaro Engineering Inc., Osaka, Japan. Each MOS sensor selectively adsorbs different volatile molecules, resulting in conductivity changes; therefore, a unique set of response curves from the sensor array can be obtained for each target substance. The nomenclature and characteristics of the sensors are listed in Table 1. Photos of the printed circuit board (PCB) used for data acquisition, which was designed by us and produced by an original equipment manufacturer (J&C CO., LTD, Shenzhen, China), are presented in Figure 1.

2.1.2. Microprocessor and Peripheral Modules

In the proposed E-nose device, an STM32F407 microcontroller was used to run the system and the embedded algorithms. The embedded software had two main functions: (1) acquiring the sensor responses; and (2) processing the data and communicating with the host computer.
Each sensor in the array responded to a different set of volatile organic compounds in the tested substances. For subsequent computer analysis and identification, the sensor responses were digitized for feature extraction. A multiplexed analog-to-digital converter (ADC) was therefore included in the design; the sensor signals were converted into digital values by the ADC according to control signals sent from the MCU via the serial peripheral interface. After processing, the digital data were transmitted to the host computer over an RS485 serial port for model training and testing.
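As a rough illustration of the host-side acquisition over this serial link, the Python sketch below reads one frame of digitized sensor values; the port name, baud rate, and comma-separated frame format are assumptions for illustration, not the actual firmware protocol.

```python
# Hypothetical host-side acquisition sketch (pyserial); port, baud rate, and
# frame format are assumed, not taken from the actual firmware.
import serial

N_SENSORS = 6

def read_sample(port="/dev/ttyUSB0", baud=115200):
    """Read one comma-separated frame of six sensor voltages from the RS485 link."""
    with serial.Serial(port, baud, timeout=2) as link:
        frame = link.readline().decode("ascii").strip()  # e.g. "1.23,0.98,..."
        values = [float(v) for v in frame.split(",")]
        if len(values) != N_SENSORS:
            raise ValueError("unexpected frame: %s" % frame)
        return values
```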
Considering the auxiliary heating requirements of MOS sensors, we designed a high-power supply circuit and isolated the power supply used for auxiliary heating from the supply used for the chips to enhance system stability. The other reserved interfaces and modules on the PCB (e.g., the debugger interface) are not discussed in detail. The MOS-based E-nose prototype is depicted in Figure 2.

2.2. Wine Samples

As presented in Table 2, Table 3, Table 4 and Table 5, 14 wine samples (15 bottles per type) provided by COFCO Huaxia Greatwall Wine Co., Ltd. (in Qinhuangdao, Hebei, China) were divided into four groups to analyze their different properties (producing area, varietal, vintage, and fermentation processes). For each kind of wine, samples were taken from three different manufacturer lots. Furthermore, two of the three lots of wine were used for model training; the remaining lot was used for testing.
In the last column of Table 2, Table 3 and Table 4, “*” indicates that the fermentation processes of the given samples were identical, as stated by the manufacturer, but the details were not made public. All experiments were performed in the authors’ laboratory at a temperature of 25 ± 1 °C and a relative humidity of 50 ± 2%.
For each sample, as shown in Figure 3, 50 mL of wine was put into a 100 mL vial and allowed to equilibrate with the air in the vial for 15 min. The workflow of the E-nose prototype is divided into a capturing process and a cleaning process. In the capturing process, the headspace gas of the sample is drawn into the E-nose by the flow-control unit and interacts with the sensor array. The volatiles are adsorbed by the MOS sensors, which increases their conductivity until it stabilizes at a constant value as the sensor surface saturates. During the cleaning process, air filtered by a carbon adsorbent is drawn into the E-nose by the flow-control unit and the analytes are removed from the sensor surface, so the conductivity decreases and stabilizes at another constant value once the analytes are completely removed. Both the capturing and the cleaning process last 90 s.

2.3. Pattern Recognition Methods

2.3.1. Back-Propagation Neural Network (BPNN)

BPNN is a multi-layered feedforward artificial neural network in which the backpropagation learning method is used to calculate the gradient needed to update the network weights. The backpropagation process involves two stages: a feedforward stage, in which the input presented at the input nodes is propagated forward to compute the outputs at the output units; and a backward phase, in which the connection weights are adjusted based on the differences between the computed and desired outputs [19]. Through repeated iterations, the network's response converges toward the desired response.
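To make the two stages concrete, the following minimal NumPy sketch performs one training step of a single-hidden-layer network: a feedforward pass followed by a backward pass that adjusts the weights from the output error. The sigmoid activation and learning rate are illustrative assumptions, not the configuration used in this study.

```python
# Minimal backpropagation step for a single-hidden-layer network (illustrative).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(x, y, W1, W2, lr=0.1):
    # Feedforward stage: propagate the input to the output units.
    h = sigmoid(x @ W1)        # hidden activations
    out = sigmoid(h @ W2)      # network outputs

    # Backward stage: adjust weights from the output-layer errors.
    delta_out = (out - y) * out * (1.0 - out)
    delta_hid = (delta_out @ W2.T) * h * (1.0 - h)
    W2 -= lr * np.outer(h, delta_out)
    W1 -= lr * np.outer(x, delta_hid)
    return W1, W2
```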

2.3.2. Support Vector Machines (SVMs)

SVMs are supervised learning models with associated learning algorithms that analyze data for classification and regression analysis [16]. The SVM algorithm finds the separating hyperplane that gives the largest margin to the training samples; this optimal hyperplane maximizes the margin of the training data. For nonlinearly separable classification problems, SVM applies a kernel function to transform the original space into a higher-dimensional space, in which a hyperplane is constructed to solve the classification problem that is not linearly separable in the original low-dimensional space. The four best-known kernels are linear, polynomial, radial basis function (RBF), and sigmoid [20].
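As a brief illustration (not the authors' exact configuration), the scikit-learn sketch below compares the four kernels named above on a hypothetical feature matrix X with labels y using three-fold cross-validation.

```python
# Compare the four common SVM kernels by cross-validation (X, y are assumed
# feature/label arrays; all other settings are scikit-learn defaults).
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

for kernel in ("linear", "poly", "rbf", "sigmoid"):
    clf = SVC(kernel=kernel)
    print(kernel, cross_val_score(clf, X, y, cv=3).mean())
```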

2.3.3. Random Forest (RF)

Random forests, or random decision forests, are an ensemble learning method for classification, regression, and other tasks that operates by constructing multiple decision trees at training time and outputting the class that is the mode of the individual trees' classes (classification) or their mean prediction (regression) [17]. First, for a given training set, a number of bootstrap samples (the amount depending on the number of classification and regression trees) are obtained by bootstrapping. Second, the RF algorithm grows classification and regression trees (CARTs), each built using random vectors. The general approach to injecting randomness into tree formation is to choose the number of features (NF) in the random subset considered at each node, i.e., NF attributes are candidates for splitting at each node of the CART; NF can be defined using the empirical formula NF = √M, where M denotes the total number of features. Finally, an RF classifier is built by growing CARTs under supervised training, and the final classification result is determined by majority voting among the CARTs.
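A minimal scikit-learn sketch of such a forest is shown below; it applies the NF = √M rule via max_features="sqrt" and the 15 trees reported later in Section 3.3, with X_train/y_train and X_test/y_test as assumed training and testing arrays.

```python
# Random forest with NF = sqrt(M) features per split and 15 CARTs (sketch).
from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier(n_estimators=15, max_features="sqrt")
rf.fit(X_train, y_train)                            # supervised training
print("test accuracy:", rf.score(X_test, y_test))   # majority-vote prediction
```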

2.3.4. Extreme Gradient Boosting (XGBoost)

XGBoost is a scalable tree boosting system that has become a highly effective and widely used machine learning method [18]. The algorithm is based on the idea of ‘boosting’, which combines the predictions of a set of ‘weak’ learners into a ‘strong’ learner through an additive training strategy [21]. XGBoost aims to prevent overfitting while optimizing computational resources by redefining the objective function and tree structure and by optimizing the execution efficiency of the algorithm.
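A correspondingly small sketch with the xgboost Python package is given below; the tree count, depth, and learning rate are illustrative assumptions rather than the tuned values used in the experiments.

```python
# Gradient-boosted trees with XGBoost (sketch; hyperparameters are assumed).
from xgboost import XGBClassifier

xgb = XGBClassifier(n_estimators=15, max_depth=3, learning_rate=0.3)
xgb.fit(X_train, y_train)        # additive training of weak tree learners
print("test accuracy:", xgb.score(X_test, y_test))
```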

3. Results and Discussion

3.1. Response Curves and Features

Before training and testing, the sensor response data were analyzed and preprocessed to obtain suitable input features for the models. Figure 4 shows typical response signals of the sensor array to samples from different batches during the 90 s measurement. Each response curve represents the voltage variation of one sensor over time as the wine volatiles reached the measurement chamber. The voltage of each sensor increased rapidly and then flattened off as the process reached steady state.

3.2. Principal Component Analysis (PCA) for Wine Volatiles

Principal component analysis (PCA) is widely used for feature extraction (otherwise known as dimensionality reduction) in pattern recognition. PCA is mathematically defined as an orthogonal linear transformation that transforms data to a new coordinate system such that the greatest variance by some projection of the data comes to lie on the first coordinate (the first principal component), the second greatest variance on the second coordinate, and so on [22]. In general, the first few principal components whose cumulative variance contribution exceeds 95% are considered dimensionality-reduced data and often contain nearly all information from the original data.
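For reference, the scikit-learn sketch below keeps exactly enough principal components to reach a 95% cumulative variance contribution; X stands for an assumed samples × features matrix of sensor data.

```python
# PCA retaining the components whose cumulative variance exceeds 95% (sketch).
from sklearn.decomposition import PCA

pca = PCA(n_components=0.95)        # float threshold = cumulative variance ratio
X_reduced = pca.fit_transform(X)    # X: (n_samples, n_features) sensor matrix
print(pca.explained_variance_ratio_.cumsum())
```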
According to the MOS sensor principle, the response curve of a sensor to affinity substances rises quickly at first and then gradually flattens. Generally, several kinds of features (e.g., stable value [SV], mean-differential coefficient value, and response area value [23,24]) extracted from E-nose signals can be used in pattern recognition algorithms. In this work, we used a simple feature, namely the SV. Because detection lasted 90 s per sample and the response of each sensor stabilized after approximately 70 s, as shown in Figure 4, the values recorded after the 70th second were treated as stable; the last 10 data points (from the 81st to the 90th second) were used as input features for model training and testing. Therefore, four datasets were formed for the four groups of experiments, expressed as 450 × 6, 600 × 6, 450 × 6, and 600 × 6 matrices, respectively.
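The stable-value extraction described above can be sketched as follows, assuming the raw recordings are stored as an array of shape (number of measurements, 90 time points, 6 sensors); the array name and layout are assumptions for illustration.

```python
# Stable-value (SV) feature extraction: keep the last 10 time points (81-90 s)
# of each sensor and stack them as separate 6-D samples (sketch).
import numpy as np

def extract_sv_features(raw):
    # raw: assumed shape (n_measurements, 90, 6) = (measurement, second, sensor)
    sv = raw[:, 80:90, :]                # seconds 81-90 for every sensor
    return sv.reshape(-1, raw.shape[2])  # e.g. 45 measurements -> 450 x 6 matrix
```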
PCA results for the four sets of experimental data are presented in Figure 5, with the dimension reduced from six variables to two principal components. The four subplots show that clustering among the various classes is present but, in many cases, highly overlapping. None of the sample sets could be easily separated in the new two-dimensional projection space obtained by PCA. We attribute this to the fact that the odor of wine is strong and rich; in each of the four sets of experiments, the samples within a group differed in only one attribute, and the differences between them were minimal. In PCA, principal components with a small contribution rate are neglected, yet they may reflect important differences among sample types. In particular, some useful information was lost after the data dimension was reduced.
In addition, the PCA loadings plots are shown in Figure 6. Each of the six variables (MOS1, …, MOS6) is represented in each subplot by a vector, whose direction and length indicate how that variable (sensor) contributes to the two principal components. In the first subplot, the first principal component had positive coefficients for MOS2, MOS3, and MOS4, with MOS4 the largest, showing that MOS4 contributed most to the first principal component during dimension reduction. Similarly, as shown in Figure 6b–d, the sensors with the largest contribution to the first principal component were MOS4, MOS5, and MOS5, respectively. It is worth noting that the contribution of the same sensor differed across tasks. In practice, removing the sensors with low contributions did not improve the experimental performance in this work.

3.3. Comparison of Properties Classification Based on Four Methods

In this work, four methods, namely BPNN, SVM, RF, and XGBoost, were used to classify different properties of wines. The methods were implemented on a PC in Python using TensorFlow (an open-source software library for high-performance numerical computation). For ease of comparison, all experimental results (the accuracy of the four models in each experiment) are given in Table 6, where “Original”, “4-D”, and “2-D” indicate that the input features were the original features, 4-dimensional features obtained by PCA, and 2-dimensional features obtained by PCA, respectively.
In BPNN training, different numbers of neurons in the hidden layer were explored. During training, three-fold cross-validation was applied to evaluate the generalization performance of the BPNN model, with the final evaluation performed on the testing set. Taking the first set of experiments as an example, the optimized classification model is shown in Figure 7; the number of neurons in the hidden layer was determined to be 12. During training, dropout (randomly setting a fraction of the units to zero at each update) was applied to help prevent overfitting.
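A compact tf.keras sketch of such a model (one hidden layer of 12 neurons followed by dropout) is given below; the activation functions, dropout rate, optimizer, and class count are assumptions, since the paper reports only the hidden-layer size and the use of dropout.

```python
# BPNN sketch in tf.keras: 6 input features, 12 hidden neurons, dropout,
# softmax output (hyperparameters other than the 12 neurons are assumed).
import tensorflow as tf

def build_bpnn(n_features=6, n_classes=3):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_features,)),
        tf.keras.layers.Dense(12, activation="relu"),
        tf.keras.layers.Dropout(0.2),   # randomly zero a fraction of units
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```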
In addition, to verify the earlier conjecture (i.e., that principal components with small contributions may still contain important information), we compared the performance of the original features and the dimension-reduced features using the model built for each set of experiments. Experimental results are shown in Table 6. The discrimination accuracy declined as the feature dimension decreased. In identifying wine production areas and varietals, BPNN achieved the best performance, with accuracies of 94% and 92.5%, respectively, using the original features. These results indicate that BPNN possessed strong nonlinear fitting capabilities in the two classification tasks. Note that, compared with the other methods, training BPNN was the most time-consuming (about three to six seconds, whereas the others took only tens of milliseconds). Given such small inputs, comparing computing time is of little value, so training time is not discussed further.
For the RF-based classifier model, the main parameters were the number of decision trees and the number of features (NF) in the random subset considered at each node of the growing trees. During model construction, the number of decision trees was optimized first, after which NF was determined. For the number of trees, a larger amount is better but takes longer to compute; results stop improving substantially beyond a critical number of trees, which is related to the NF considered when splitting a node. A lower NF leads to a greater reduction in variance but a larger increase in bias. Only 15 decision trees were used in our experiments to build the classifier model, and NF was defined using the empirical formula mentioned earlier. The performance of RF is also reported in Table 6, revealing that RF was mediocre in all experiments.
In the SVM-based model, an RBF was chosen as the kernel function. To optimize the penalty parameter (C) and the kernel parameter gamma (c) in the SVM model, a grid search with exponentially growing sequences of C and c was applied. The optimal combination of parameters was then determined according to the distinguishability of the computed hyperplane. Finally, the four parameter combinations ([C, c]) used for the four tasks were determined to be [10, 0.15], [10, 0.15], [20, 0.1], and [15, 0.1], respectively. According to the experimental results in Table 6, SVM achieved the best performance in identifying wine vintage and fermentation processes, with accuracies of 67.3% and 60.5%, respectively.
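The grid search over exponentially growing C and gamma sequences can be sketched with scikit-learn as below; the specific grids and the three-fold split are assumptions consistent with the cross-validation used elsewhere in the paper.

```python
# Grid search for the RBF-SVM penalty C and kernel parameter gamma (sketch).
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

param_grid = {"C": [2 ** k for k in range(-2, 8)],
              "gamma": [2 ** k for k in range(-8, 2)]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=3)
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)
```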
XGBoost is an efficient implementation of the gradient boosting machine, a representative ensemble learning method. Its model parameters are highly similar to those of the RF model; thus, the parameter selection process followed that of the RF model. The experimental results of the XGBoost-based method are also given in Table 6 and show that the XGBoost model did not achieve ideal performance.

4. Conclusions

In this study, an E-nose prototype was developed based on MOS sensors and STM32F4 MCU. As an alternative detection method to traditional monitoring technologies, the device was employed to identify different wines. Four popular machine learning algorithms (BPNN, RF, SVM, and XGBoost) were used to build identification models for different classification tasks. Their performance was compared based on the accuracy of testing samples. The following conclusions can be drawn.
(1) PCA was not helpful for distinguishing the different wines in this work, because details of the wine aromas were lost after the dimensions were reduced. Since the system included only six sensors, the 6-dimensional features are barely sufficient to describe the odor information, and further reduction removes useful detail. In particular, the accuracy of the experiments with dimension-reduced samples was poor for all tasks. Overall, distinguishing vintage and fermentation processes was more difficult than distinguishing producing area and varietal.
(2) Among the experimental results, BPNN and SVM performed better than RF and XGBoost; also, SVM consumed less time than the BPNN. It was found that RF and XGBoost were not good choices when dealing with low-dimensional samples, even though they performed well on many tasks. Due to the practicality of SVM, an SVM-based algorithm can be easily migrated to our developed E-nose.
(3) Results were encouraging and demonstrated that the E-nose, as a non-destructive instrument, can be used to differentiate wines when an optimal pattern recognition algorithm is selected. Although the identification accuracy of wine vintages and fermentation processes was not high enough, experiments demonstrated the effectiveness of the developed E-nose and algorithms. Insufficient samples or sensors may have affected performance; we will explore this possibility further in the next phase.
This study provides a critical outlook on the development of wine evaluation and process control, which will assist manufacturers in standardizing operation processes and reducing costs while helping to protect consumer rights.

Author Contributions

All authors contributed extensively to the study presented in this manuscript. Y.G. and Q.L. contributed significantly to the conception of the study. H.L. designed the device and analyzed the measurements. B.Y. and L.Z. provided, marked, and analyzed the experimental samples. Y.G. supervised the work and contributed with valuable discussions and scientific advice. All authors contributed in writing this manuscript.

Funding

The authors would like to thank the National Natural Science Foundation of China (Grant Nos. 61672094 and U1501251) for its support.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Yu, H.; Dai, X.; Yao, G.; Xiao, Z. Application of gas chromatography-based electronic nose for classification of Chinese rice wine by wine age. Food Anal. Methods 2014, 7, 1489–1497.
2. Vera, L.; Mestres, M.; Boqué, R.; Busto, O.; Guasch, J. Use of synthetic wine for models transfer in wine analysis by HS-MS e-nose. Sens. Actuators B 2010, 143, 689–695.
3. Berna, A. Metal oxide sensors for electronic noses and their application to food analysis. Sensors 2010, 10, 3882–3910.
4. Lozano, J.; Arroyo, T.; Santos, J.P.; Cabellos, J.M.; Horrillo, M.C. Electronic nose for wine ageing detection. Sens. Actuators B 2008, 133, 180–186.
5. Brudzewski, K.; Osowski, S.; Golembiecka, A. Differential electronic nose and support vector machine for fast recognition of tobacco. Expert Syst. Appl. 2012, 39, 9886–9891.
6. Zhu, J.C.; Chen, F.; Wang, L.Y.; Niu, Y.; Xiao, Z. Evaluation of the synergism among volatile compounds in Oolong tea infusion by odour threshold with sensory analysis and E-nose. Food Chem. 2017, 221, 1484–1490.
7. Kodogiannis, V.S. Application of an electronic nose coupled with fuzzy-wavelet network for the detection of meat spoilage. Food Bioprocess Technol. 2017, 10, 730–749.
8. Haddi, Z.; Mabrouk, S.; Bougrini, M.; Tahri, K.; Sghaier, K.; Barhoumi, H.; El Bari, N.E.; Maaref, A.; Jaffrezic-Renault, N.; Bouchikhi, B. E-Nose and e-Tongue combination for improved recognition of fruit juice samples. Food Chem. 2014, 150, 246–253.
9. Li, Q.; Gu, Y.; Wang, N.F. Application of random forest classifier by means of a QCM-based e-nose in the identification of Chinese liquor flavors. IEEE Sens. J. 2017, 17, 1788–1794.
10. Mei, C.; Yang, M.; Shu, D.; Jiang, H.; Liu, G. Monitoring the wheat straw fermentation process using an electronic nose with pattern recognition methods. Anal. Methods 2015, 7, 6006–6011.
11. Jia, P.; Tian, F.; He, Q.; Fan, S.; Liu, J.; Yang, S.X. Feature extraction of wound infection data for electronic nose based on a novel weighted KPCA. Sens. Actuators B 2014, 201, 555–566.
12. De Cesare, F.; Pantalei, S.; Zampetti, E.; Macagnano, A. Electronic nose and SPME techniques to monitor phenanthrene biodegradation in soil. Sens. Actuators B 2008, 131, 63–70.
13. Brudzewski, K.; Osowski, S.; Pawlowski, W. Metal oxide sensor arrays for detection of explosives at sub-parts-per million concentration levels by the differential electronic nose. Sens. Actuators B 2012, 161, 528–533.
14. Wei, Y.J.; Yang, L.L.; Liang, Y.P.; Li, J.M. Application of electronic nose for detection of wine-aging methods. Adv. Mater. Res. 2014, 875–877.
15. Lozano, J.; Santos, J.P.; Suárez, J.I.; Cabellos, M.; Arroyo, T.; Horrillo, C. Automatic Sensor System for the Continuous Analysis of the Evolution of Wine. Am. J. Enol. Vitic. 2015, 66, 148–155.
16. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297.
17. Kuswanto, H.; Salamah, M.; Fachruddin, M.I. Random Forest Classification and Support Vector Machine for Detecting Epilepsy Using Electroencephalograph Records. Am. J. Appl. Sci. 2017, 14, 533–539.
18. Chen, T.; Guestrin, C. XGBoost: A Scalable Tree Boosting System. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 785–794.
19. Aguilera, T.; Lozano, J.; Paredes, J.A.; Alvarez, F.J.; Suárez, J. Electronic nose based on independent component analysis combined with partial least squares and artificial neural networks for wine prediction. Sensors 2012, 12, 8055–8072.
20. Li, Q.; Gu, Y.; Jia, J. Classification of multiple Chinese liquors by means of a QCM-based e-nose and MDS-SVM classifier. Sensors 2017, 17, 272.
21. Fan, J.; Wang, X.; Wu, L.; Zhou, H.; Zhang, F.; Yu, X.; Liu, X.; Xiang, Y. Comparison of Support Vector Machine and Extreme Gradient Boosting for predicting daily global solar radiation using temperature and precipitation in humid subtropical climates: A case study in China. Energy Convers. Manag. 2018, 164, 102–111.
22. Jolliffe, I.T. Principal Component Analysis; Springer: New York, NY, USA, 2002.
23. Zhao, Y.; Guo, Z.H.; Meng, F.L.; Liu, J.H. Dynamic detection and recognition system based on the segmental average differentiation. Chin. J. Sens. Actuators 2007, 8, 1706–1711.
24. Wei, Z.; Wang, J.; Jin, W. Evaluation of varieties of set yogurts and their physical properties using a voltammetric electronic tongue based on various potential waveforms. Sens. Actuators B 2013, 177, 684–694.
Figure 1. Printed circuit board (PCB) for data acquisition based on metal oxide semiconductor (MOS) sensor array. (a) front view; (b) back view.
Figure 2. MOS-based E-nose prototype.
Figure 3. Illustration of the sample detection.
Figure 4. Response curves to data from different batches: (a) Batch 1; (b) Batch 4; (c) Batch 8; (d) Batch 11.
Figure 5. Principal component analysis (PCA) plots of the wine sample measurements (samples of each kind from the training lots are presented). (a) samples with different producing areas; (b) samples with different varietals; (c) samples with different vintages; (d) samples with different fermentation processes.
Figure 6. PCA loadings plots of the wine sample measurements (samples of each kind from the training lots are presented). (a) samples with different producing areas; (b) samples with different varietals; (c) samples with different vintages; (d) samples with different fermentation processes.
Figure 7. Classification model based on back-propagation neural network (BPNN).
Table 1. Standard sensor array in E-nose system.
Number | Sensor | Object Substances for Sensing | Cross-Sensitive Object
MOS1 | TGS826 | Ammonia | Isobutane, ethanol, etc.
MOS2 | TGS832 | Halocarbon gas | Ethanol, R134a refrigerant, etc.
MOS3 | TGS2600 | Air pollutants (hydrogen, ethanol, etc.) | Isobutane, carbon monoxide, etc.
MOS4 | TGS2602 | Air pollutants (VOCs, ammonia, H2S, etc.) | Ammonia, hydrogen sulfide, toluene, etc.
MOS5 | TGS2611 | Methane | Hydrogen
MOS6 | TGS2620 | Alcohol, solvent vapors | Carbon monoxide, hydrogen, etc.
Table 2. Details of wine samples with different producing areas.
Label No. | Producing Area | Varietal | Vintage | Fermentation Processes (Yeast ID, Fermentation Container, Storage Container)
1 | Huaxia | Cabernet sauvignon | 2016 | *
2 | Renxuan | Cabernet sauvignon | 2016 | *
3 | Zuimei | Cabernet sauvignon | 2016 | *
Table 3. Details of wine samples with different varietals.
Label No. | Producing Area | Varietal | Vintage | Fermentation Processes (Yeast ID, Fermentation Container, Storage Container)
4 | Huaxia | Cabernet sauvignon | 2017 | *
5 | Huaxia | Marselan | 2017 | *
6 | Huaxia | Long Zibao | 2017 | *
7 | Huaxia | Merlot | 2017 | *
Table 4. Details of wine samples with different vintages.
Label No. | Producing Area | Varietal | Vintage | Fermentation Processes (Yeast ID, Fermentation Container, Storage Container)
8 | Renxuan | Marselan | 2017 | *
9 | Renxuan | Marselan | 2016 | *
10 | Renxuan | Marselan | 2014 | *
Table 5. Details of wine samples with different fermentation processes.
Label No. | Producing Area | Varietal | Vintage | Fermentation Processes (Yeast ID, Fermentation Container, Storage Container)
11 | Huaxia | Cabernet sauvignon | 2017 | CC17, Stainless steel tank, Stainless steel tank
12 | Huaxia | Cabernet sauvignon | 2017 | SC5, Stainless steel tank, Stainless steel tank
13 | Huaxia | Cabernet sauvignon | 2017 | CC17, Stainless steel tank, Oak barrel
14 | Huaxia | Cabernet sauvignon | 2017 | SC5, Stainless steel tank, Oak barrel
Table 6. Comparisons of the four methods in the classification tasks.
Method | Producing Area (Original / 4-D / 2-D) | Varietal (Original / 4-D / 2-D) | Vintage (Original / 4-D / 2-D) | Fermentation Processes (Original / 4-D / 2-D)
BPNN | 94.0 / 33.3 / 33.3 | 92.5 / 50.0 / 46.0 | 52.7 / 32.7 / 30.7 | 52.0 / 38.5 / 50.0
RF | 87.3 / 64.0 / 36.7 | 79.0 / 24.5 / 48.5 | 47.3 / 21.3 / 32.0 | 56.5 / 38.5 / 39.5
SVM | 70.0 / 28.7 / 28.7 | 91.0 / 52.5 / 39.0 | 67.3 / 33.3 / 33.3 | 60.5 / 39.5 / 55.5
XGBoost | 90.7 / 66.0 / 56.7 | 59.5 / 39.5 / 49.5 | 50.0 / 33.3 / 33.3 | 57.5 / 39.0 / 39.5
Bold values indicate the best results.
