Communication

Identification of Specific Substances in the FAIMS Spectra of Complex Mixtures Using Deep Learning

1 Guangxi Colleges and Universities Key Laboratory of Biomedical Sensing and Intelligent Instrument, Guilin University of Electronic Technology, Guilin 541004, China
2 School of Life and Environmental Sciences, Guilin University of Electronic Technology, Guilin 541004, China
* Authors to whom correspondence should be addressed.
Sensors 2021, 21(18), 6160; https://doi.org/10.3390/s21186160
Submission received: 4 August 2021 / Revised: 1 September 2021 / Accepted: 9 September 2021 / Published: 14 September 2021
(This article belongs to the Collection Gas Sensors)

Abstract

High-field asymmetric ion mobility spectrometry (FAIMS) spectra of single chemicals are easy to interpret, but identifying specific chemicals within complex mixtures is difficult. This paper demonstrates that a FAIMS system can detect specific chemicals in complex mixtures. A homemade FAIMS system was used to analyze pure ethanol, ethyl acetate, acetone, 4-methyl-2-pentanone, butanone, and their mixtures in order to create datasets. An EfficientNetV2 discriminant model was constructed, and a blind test set was used to verify whether the deep-learning model is capable of the required task. The results show that the pre-trained EfficientNetV2 model converged with a learning rate of 0.1 and 200 training epochs. Specific substances in complex mixtures can be effectively identified using the trained model and the homemade FAIMS system. Accuracies of 100%, 96.7%, and 86.7% were obtained for ethanol, ethyl acetate, and acetone in the blind test set, much higher than those achieved by conventional methods. The deep-learning network provides higher accuracy than traditional FAIMS spectral analysis methods. This simplifies the FAIMS spectral analysis process and contributes to the further development of FAIMS systems.

1. Introduction

High-field asymmetric ion mobility spectrometry (FAIMS) is a new technology that uses the nonlinear variation of ion mobility under high electric fields to separate and recognize materials. FAIMS offers high sensitivity, fast detection, and ease of miniaturization. Thus, it is expected to become an alternative to mass spectrometry (MS), ion mobility spectrometry (IMS), and other analytical techniques [1,2,3,4].
FAIMS distinguishes ions using the differences between their ion mobility coefficients in low and high electric fields. When a sample is detected, FAIMS generates a unique chromatogram of the substance, which is called a fingerprint spectrum. Fingerprint spectra represent the compression of multiple FAIMS curves into a three-dimensional image, where the horizontal dimension represents the compensating voltage (CV), and the vertical dimension represents the radio frequency (RF) voltage. The intensity of the detected charged ions is presented using color.
Analysis of FAIMS spectra has become particularly important as FAIMS technology has developed. In the past, most FAIMS systems, including commercial equipment, focused on the identification of individual chemical substances [5,6], such as benzene, toluene, and ethanol, for specific applications. Individual chemicals are distinguished well by their spectral shapes and numerical values. However, identifying specific chemicals within a mixture of substances is difficult because the ionic signatures of different substances may overlap in a nonlinear manner and may be accompanied by the generation of new ions. In many cases, FAIMS is used to analyze biological samples, such as urine and feces, which often contain a variety of volatile substances. Therefore, achieving substance-specific detection in complex mixtures is critical to the further application of FAIMS systems.
Researchers have typically relied on a simple set of analysis tools that includes principal component analysis (PCA), linear discriminant analysis (LDA) [7,8,9,10], and extraction of image shapes. Such methods have achieved good results with regard to discriminant analysis of single substances [11,12] but cannot be implemented well in the analysis of mixtures. The research team of Cristina E. Davis used computer vision and natural language processing to address this problem and demonstrated the ability to maintain high levels of substance-specific identification despite the presence of other substances [13]. However, their method requires manual extraction of features from the data, which introduces a subjective element and is cumbersome.
Machine learning may provide an avenue for the analysis of FAIMS spectra. Recent advances in the fields of deep learning, computer vision, and natural language processing might be applied to the analysis of FAIMS spectra; in particular, these advances may help to identify specific substances within mixtures. Deep learning has proven to be a great success in the field of computer vision. Models such as AlexNet [14], GoogLeNet [15], VGG [16], ResNet [17], RNNs [18], and LSTMs [19,20] have shown powerful performance in image classification, image detection, and sequence prediction. This excellent performance has prompted researchers to apply deep learning to a number of fields, including medicine [21,22,23,24], agriculture [25,26,27], genomics [28,29,30], sentiment analysis [31,32,33,34], and knowledge graphs [35,36,37]. However, we have not yet seen the application of deep learning in the field of FAIMS. The purpose of this paper is to combine FAIMS and deep learning and to explore the possible application of deep learning to FAIMS spectral analysis.
This study used the EfficientNetV2 model, which is cutting edge in the field of deep learning, to test spectra from a homemade FAIMS system. Traditional data processing methods such as the wavelet transform and PCA dimensionality reduction were abandoned; instead, the collected spectral data were analyzed and classified in the presence of interfering chemicals. The final results showed that pre-trained deep-learning models can be established to identify specific chemicals in mixtures of substances. This supports the further application of FAIMS to practical detection, especially for the analysis of biological samples that contain multiple substances [38].

2. Materials and Methods

2.1. FAIMS System

A homemade FAIMS system was used for sample acquisition. The entire FAIMS system is shown in Figure 1. The system was composed of a sampler module, FAIMS chip, weak current detector, power module, microprocessor controller, and spectrum display module. The carrier gas blew the sample through the sampler and into the ionization zone. After ionization by the 10.6 eV photo-discharge UV lamp (PKS106, Heraeus Co., Ltd., Hanau, Germany), the charged ions entered the migration zone.
The separation region of the homemade FAIMS experimental platform is composed of two parallel copper plates, with a size of 15 mm × 10 mm, which is the key structure to achieve ion separation. By applying high-field asymmetric waveform voltage and compensation voltage, the ions can pass through the separation region horizontally and then reach the detection region.
When the electric field strength is below 10,000 V/cm, the ion mobility is essentially independent of the field strength. However, under high-field conditions (E > 10,000 V/cm), the ion mobility varies with the field strength and exhibits a nonlinear trend.
The asymmetric square waveform applied to the separation region is shown in Figure 1 (L and H). The maximum voltage is denoted U_max and the minimum voltage U_min. Since the gap of the separation region is constant (0.2 mm), each voltage can be expressed as an electric field strength; the maximum field strength is E_max and the minimum is E_min. Let t_1 be the time ions move under the high-field condition, with mobility k_1, and t_2 the time under the low-field condition, with mobility k_2. The waveform satisfies formula (1):

E_max · t_1 = E_min · t_2 = H = L   (1)

The distance moved by an ion in one period can then be expressed as follows:

ΔS = S_1 − S_2 = v_1 · t_1 − v_2 · t_2 = k_1 · E_max · t_1 − k_2 · E_min · t_2 = (k_1 − k_2) · H   (2)

In formula (2), v_1 and S_1 are the velocity and displacement of the ion in the high field, v_2 and S_2 are the velocity and displacement in the low field, and the ion mobility is k = v/E.
Because the high-field asymmetric square waveform has a high frequency (1 MHz), the ions in the separation region accumulate many tiny superimposed displacements; eventually such ions collide with a plate and are neutralized, as shown in Figure 1 (ions a and c).
When a high-field asymmetric square waveform and a compensation voltage are applied to the separation region simultaneously, the displacement difference of an ion within a period is neutralized, so that the ion can traverse the separation region horizontally and reach the detection region, as shown in Figure 1 (ion b).
Different ions have different mobilities and therefore different displacement differences. Specific ions can thus be selected by neutralizing their displacement differences with specific compensation voltage values.
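The relations in formulas (1) and (2) can be checked numerically. The following sketch uses illustrative values (the mobilities, field strength, and duty cycle are assumptions, not measurements from this work); only the 1 MHz frequency and the 0.2 mm gap come from the text:

```python
# Illustrative values (assumptions, not measurements from the paper):
k1, k2 = 2.1, 2.0          # ion mobilities in the high/low field, cm^2/(V*s)
E_max = 20000.0            # high-field strength, V/cm (above the 10,000 V/cm threshold)
T = 1e-6                   # waveform period at 1 MHz, s
t1 = 0.25 * T              # time spent at E_max per period
t2 = T - t1                # time spent at E_min per period
E_min = E_max * t1 / t2    # area balance of formula (1): E_max*t1 = E_min*t2

H = E_max * t1             # area of the high-field half-cycle
dS = (k1 - k2) * H         # net drift per period from formula (2), cm

# Compensation field that cancels dS over one period (mean mobility used):
k_mean = 0.5 * (k1 + k2)
E_c = dS / (k_mean * T)    # required compensation field, V/cm
U_c = E_c * 0.02           # gap d = 0.2 mm = 0.02 cm, so CV = E_c * d, in volts
```

With these assumed numbers the net drift is a few micrometers per period and the cancelling compensation voltage lands at a few volts, consistent with the ±13 V CV sweep described in Section 2.3.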
The ions that reached the detection zone were amplified by the weak current detector and transmitted to the microprocessor controller, which communicated with a host computer via a serial port. The host computer software was written in Qt. When a sample was tested, the data acquisition command was sent to the microprocessor by clicking “Start.” The host computer received the CV and weak current data via the serial port and displayed, in real time, a single curve that represented the sample in the spectral display area. After one data acquisition process was completed, the host computer entered the idle state, changed the RF voltage, and repeated the above operation to complete the plotting of multiple FAIMS curves. After all operations were completed, the data were added to the database and exported to a data table to facilitate data management. The plot function enabled the plotting of FAIMS spectra for subsequent data analysis and processing.

2.2. Sample Information

Ethanol (Xilong Chemical Co., Ltd., Shantou, China, 99.7%), ethyl acetate (Guangfu Technology Development Co., Ltd., Tianjin, China, 99.5%), acetone (Xilong Chemical Co., Ltd., Shantou, China, 99.5%), 2-butanone (Xilong Chemical Co., Ltd., Shantou, China, 99.5%), and 4-methyl-2-pentanone (Xilong Chemical Co., Ltd., Shantou, China, 99.0%) were used as experimental samples. A mixture was made by mixing equal volumes of the above five pure compounds. The pure compounds and their mixtures were reconfigured before each experiment.

2.3. Experimental Protocol

Pure compound data acquisition: 0.5 mL of a pure compound was placed in brown glassware using a pipette. The container was then put into the sampler, with high-purity nitrogen (Ruida Chemical Technology Co., Ltd., Nanning, China, 99.999%) as the carrier gas. The carrier gas blew the volatile vapor from the headspace of the liquid sample into the FAIMS instrument for detection. In the homemade FAIMS system, the RF voltage was ramped from 180 V to 280 V in 10 V steps. The compensating voltage ranged from −13 V to +13 V, and 1000 sampling points were used. Each sample took approximately 4–5 min to measure. Before each sample change, the sampler was purged with the carrier gas and a spectrum of the empty sampler was collected to ensure that the FAIMS system was not contaminated with the previous sample.
Mixed substance data acquisition: 0.5 mL of each pure compound was placed into brown glassware using a pipette, shaken and mixed well, and assayed as above. A total of 295 spectra were collected during the experiment. All spectra were collected within the same month at a temperature of 25 °C to ensure the consistency of the data. Fresh pure compounds and mixtures were used for each experiment.
The carrier gas flow rate through the sampler was controlled at 1500 mL/min using a D08-1F flowmeter (Sevenstar Electronic Technology Co., Ltd., Beijing, China). The carrier gas flow rate affects the acquired FAIMS spectra because higher flow rates blow larger quantities of volatile sample into the FAIMS system per unit time, which increases the detected current. Therefore, all experimental conditions other than the choice of sample were kept consistent: the carrier gas flow rate was 1500 mL/min, the ballast resistance was 6 MΩ, and the RF voltage step was 10 V.
Computer software written in Qt was used to collect the data and draw the spectra. The resolution of the spectra could be set; higher resolutions provide more information but lead to longer model training times. The spectral resolution used in the experiments was 479 × 381 pixels. Python 3.7 was used as the programming language, and PyTorch 1.7.0 [39] was the deep-learning framework used with the model.

2.4. Principal Component Analysis and Support Vector Machine

Principal component analysis (PCA) is a commonly used method for data dimensionality reduction. It generates low-dimensional new variables by linear combination of high-dimensional original variables, and the new variables reflect the signal information of the original variables to the maximum extent [40,41,42]. Support vector machine (SVM) is an effective traditional classification model. It uses mathematical methods to find the best decision surface in a sample, thus separating the sample data [43,44,45].
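The PCA + SVM baseline used for comparison in Section 3.3 can be sketched as a scikit-learn pipeline. The random arrays below are stand-ins for flattened FAIMS spectra, not real data:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Sketch: flatten each spectrum, keep the top 50 principal components,
# then classify with an SVM. Random data stand in for FAIMS spectra.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 200))    # 100 flattened (downsampled) spectra
y = rng.integers(0, 2, size=100)   # class 1 (contains target) vs class 2

clf = make_pipeline(StandardScaler(), PCA(n_components=50), SVC(kernel="rbf"))
clf.fit(X, y)
pred = clf.predict(X)
```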

2.5. Deep Learning Network

A convolutional neural network (CNN) is a type of deep-learning algorithm that has proven quite successful in the field of computer vision. Unlike traditional FAIMS spectral analysis methods (PCA or wavelet analysis), a CNN first uses convolution layers to extract image features, then uses pooling layers to perform down-sampling operations (which improves the efficiency and robustness of the model); the data eventually reach the output layer via the activation function. The output layer converts its activations into probabilities for each class label using a logistic function, and the class label with the highest probability is the output result [46,47]. CNNs have no need to artificially select subjectively important features, which increases the objectivity of the results to some extent. Figure 2 shows the basic structure of a CNN.
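The convolution → pooling → output structure described above can be illustrated with a minimal PyTorch model. This is a toy sketch of the generic CNN pattern, not the paper's EfficientNetV2 model:

```python
import torch
import torch.nn as nn

# Minimal CNN mirroring the structure described above:
# convolution -> pooling -> fully connected output -> softmax probabilities.
class TinyCNN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # extract local image features
            nn.ReLU(),
            nn.MaxPool2d(2),                             # down-sample for efficiency/robustness
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                     # accept any input resolution
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.classifier(x)                        # logits for each class label

model = TinyCNN()
logits = model(torch.randn(1, 3, 64, 64))
probs = torch.softmax(logits, dim=1)  # probabilities sum to 1; argmax is the prediction
```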
In CNNs, the depth of the network, the width of the network, and the input image resolution largely determine the performance of the model. Researchers have been exploring the best architectures for CNNs, such as AlexNet, VGG, GoogLeNet, and ResNet. Before the Google Brain team proposed the EfficientNet architecture in 2019 [48], previous architectures achieved better performance by expanding a single dimension of the CNN. In contrast, the EfficientNet network uses neural architecture search to simultaneously determine the optimal depth, width, and input image resolution of the model. Compared with other networks, the EfficientNet model ensures higher accuracy while significantly reducing the parameter size of the model. Proposed by Google in 2021, EfficientNetV2 is an update of the EfficientNet architecture that offers better efficiency and smaller model parameters [49]. EfficientNet networks have been used to recognize and classify images in other fields and have achieved excellent performance [50,51,52,53,54]. We chose EfficientNetV2 as the experimental model because of its high accuracy and small parameter size; the latter is an unparalleled advantage over other models, since hardware resources are a main consideration for portable analytical instruments. This study used the model to explore whether deep learning can detect specific substances within mixtures via the FAIMS system.

3. Results and Discussion

3.1. FAIMS Spectra

The FAIMS spectra provide three-dimensional CV, RF voltage, and current data. Five chemicals commonly encountered in detection applications were selected for the experiment: ethanol, ethyl acetate, acetone, 2-butanone, and 4-methyl-2-pentanone. Their corresponding FAIMS spectra are shown in Figure 3. In fact, the recognition of pure compounds can be achieved easily, whether from the shape of the spectra or the design of the algorithm. The pure compounds were then mixed and placed into the FAIMS system. The resulting spectra are shown in Figure 4; the mixtures considered are ethanol + ethyl acetate, ethanol + acetone, ethanol + acetone + ethyl acetate, acetone + ethyl acetate, ethanol + 4-methyl-2-pentanone, ethanol + 4-methyl-2-pentanone + ethyl acetate, and 4-methyl-2-pentanone + 2-butanone + ethyl acetate. The goal of this experiment was to explore whether the deep-learning model could specifically identify certain substances within a mixture. All the experimental conditions were the same as in Section 2.3 and Section 2.4.
The color scale range differs from sample to sample in Figure 3 and Figure 4 in order to better illustrate the differences among the samples. In the experiments themselves, the color scales of all samples spanned a fixed range (0 pA–350 pA), which was favorable for the experimental results. The dataset is characterized in Table 1.

3.2. CNNs and FAIMS Spectra

FAIMS spectra can be viewed as images. Different substances produce different FAIMS spectra. Unlike traditional analytical discrimination methods, convolutional neural network methods preserve local two-dimensional features as well as sequential information. Since FAIMS spectral data are sequential in nature (any pixel in each row of a FAIMS spectrum is related to the pixels before and after it), the use of CNNs for FAIMS spectral analysis is further justified. However, it should be noted that the dataset required to train a deep-learning model from scratch is huge. While it was not possible to provide such a huge dataset in this experiment, transfer learning provided a convenient path. Specifically, a model that was previously trained on a huge dataset was then trained using a smaller dataset. Within CNNs, different layers learn different image features. The shallower the layer, the more generic the features learned; the deeper the layer, the more relevant the features learned are to a specific task. Therefore, generic features can be pre-trained using a large dataset and the resulting model later trained for specific tasks [55]. Fine-tuning of pre-trained models is one way to implement transfer learning [56]. In this study, the impacts of training a completely new model and of using a pre-trained model are explored. The results show that a fine-tuned, pre-trained model provides better performance than a completely new model.

3.3. Experimental Results

Pure ethanol and mixtures that contain ethanol are classified as class 1 (serial numbers 1, 6, 7, 8, 11, 13, and 14 in Table 1) samples. Other pure compounds and mixtures that do not contain ethanol are classified as class 2 (serial numbers 2, 3, 4, 5, 9, 10, 12, and 15 in Table 1) samples. The same experiment was performed again by classifying the mixtures as above but first with ethyl acetate and then acetone taking the place of the ethanol. The dataset obtained according to this method is shown in Table 2. The EfficientNetV2 model was built using Python and the homemade dataset was used to train the model. Before feeding the data into the model, the dataset was divided into training, validation, and blind test sets. The training set was provided to the model in order to train the model parameters. The validation set was used to test the performance of the model in order to update the parameters. Finally, after a model that performed well on the validation set was obtained, a blind test set was used to simulate the real situation. The final results obtained using the blind test set were considered representative of the real situation [57]. The blind test set used in the experiment has 30 spectra, and its distribution is shown in the column “Blind test set number” in Table 1.
In this experiment, 80% of the data were used to train the model, 20% of the data were used to validate the model, and finally an additional blind test set of 30 images was used to test model performance. When the model produces an output of 0, the sample contains the substance of interest; in contrast, the sample does not contain the substance when the output is 1. The hyperparameters that affect the model include the learning rate (lr) and the number of epochs. A smaller learning rate might improve model accuracy but can cause overfitting, in which the model performs well on the training set but poorly on the validation set [58]. Too many epochs increase the time required to train the model, and too few epochs can lead to non-convergence of the model. It should be noted that different datasets have different optimal learning rates and epoch counts (lr = 0.1 and epochs = 200 for this experimental dataset), which must be determined on a task-specific basis. Due to the small dataset, standard five-fold cross-validation was used to verify model robustness. Briefly, five-fold cross-validation randomly divides the entire dataset into five equal parts, where four portions are used to train the model and one portion is used to validate model performance [59]. Each fold generates an accuracy, and the average of the five results serves as the final evaluation metric of the model.
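The five-fold protocol can be sketched with scikit-learn. Random features stand in for FAIMS spectra, and a small SVM stands in for EfficientNetV2; only the split-train-average structure mirrors the procedure described above:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

# Five-fold cross-validation sketch; the data and classifier are stand-ins.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 32))      # placeholder spectra features
y = rng.integers(0, 2, size=60)    # binary labels: contains target or not

accuracies = []
splitter = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, val_idx in splitter.split(X, y):
    clf = SVC().fit(X[train_idx], y[train_idx])           # train on four folds
    accuracies.append(clf.score(X[val_idx], y[val_idx]))  # validate on the fifth

final_score = float(np.mean(accuracies))  # the average accuracy is the reported metric
```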
It is stated above that the use of pre-trained models is feasible for small-sample tasks. The impacts of full training and use of pre-trained models that are fine-tuned on the experimental task are explored in Figure 5. The pre-trained model with fine-tuning is more accurate than the model with full training. This shows the potential for the application of transfer learning to FAIMS spectral analysis and indicates a new path for future FAIMS spectral analysis tasks. The remainder of this paper references the pre-trained, fine-tuned model.
As shown in Figure 5a, the deep-learning model can achieve excellent performance on the dataset, regardless of whether we seek to distinguish ethanol, ethyl acetate, or acetone from the mixtures. Average accuracies of 98.1%, 96.7%, and 94.2% are obtained for ethanol, ethyl acetate, and acetone in the five-fold cross-validation study. This indicates that EfficientNetV2 is a robust algorithm that can detect specific substances within complex mixtures.
In order to simulate the performance of the model in real situations, the study used a blind test set. The blind test set contained 30 FAIMS spectra, labeled according to the same scheme used to construct the modeling dataset. The results are shown in Table 3. Ethanol has the highest accuracy rate, at 100%; ethyl acetate has almost the same accuracy rate as in the model validation set; and acetone exhibits a decrease of about 8%, which may be related to the small number of acetone samples and acetone-containing mixtures. All samples exhibit an accuracy rate that exceeds 85%. A larger dataset may provide better results, but the data are sufficient to show that applying deep learning to a homemade FAIMS spectra dataset can enable the detection of specific substances in mixtures.
In addition, traditional analysis methods were used to test the data. A normalization operation was first performed on all images, which sped up the convergence of the model. PCA was used to downscale the images and select the top 50 principal components as features, and then an SVM model was used for classification. The results produced using the entire dataset are shown in Table 4.
Upon comparing Table 3 and Table 4, it can be seen that deep learning provides higher accuracy than traditional PCA + SVM analysis methods. This may be because the mechanisms that govern FAIMS are complex. Simple linear methods may not be sufficient to explain FAIMS spectra well. In Figure 3 and Figure 4, the FAIMS spectra of the different substances cannot be represented linearly. Neural networks offer better performance on nonlinear tasks. This is because neural networks can approximate arbitrary functions. Thus, neural networks are more suitable for FAIMS spectral analysis than linear methods. More importantly, deep learning does not require data preprocessing or manual extraction of subjectively important features. The model automatically extracts features, thus simplifying the use of FAIMS in practical applications.
In short, the experiments show that the deep-learning model can detect specific compounds within mixtures. In particular, the pre-trained model shows its potential for application to FAIMS spectral analysis. The pre-trained deep-learning model is more efficient and provides more accurate results than traditional FAIMS spectral analysis methods.

3.4. Application

One potential application of this method is the detection of complex biological samples. Studies have shown that diabetic patients have different levels of acetone in exhaled gas than healthy individuals, which can be used as a biomarker for diabetes [60,61]; dimethyl disulphide in the feces of cholera patients can be used as a biomarker [62,63]; and trimethylaminuria patients have much higher levels of trimethylamine in their urine and sweat than healthy individuals [64,65]. Samples such as exhaled gas, urine, and feces contain a large number of different volatiles, and identifying specific disease markers among them is a challenge that deep learning may help address. Deep learning may also enable the prediction of disease-marker concentrations, which vary at different stages of disease; we expect FAIMS combined with deep learning to enable early prediction of disease, which would be exciting.

4. Conclusions

This study used a deep-learning model to detect specific substances in complex mixtures via a homemade FAIMS system. This is the first application of deep learning to the analysis of FAIMS mixtures for component identification, and it demonstrates that transfer-learning techniques have potential for FAIMS spectral analysis. The fine-tuned EfficientNetV2 deep-learning model was used to test for the presence of ethanol, ethyl acetate, and acetone in complex mixtures using a self-constructed dataset. The results showed that the deep-learning model could identify specific substances in complex mixtures. In addition to providing better results than traditional methods, the deep-learning model did not require data pre-processing or manual feature extraction. This further improves the efficiency of FAIMS and, to a certain extent, removes human judgment from the process, making the results more objective.
In conclusion, this study used a pre-trained deep-learning model to identify specific substances within complex mixtures. Model performance is expected to improve as the dataset is expanded. This result is exciting with regard to future research. We can use deep learning to experiment on more mixtures and attempt to analyze complex biological samples. This will promote the development of the field of FAIMS spectral analysis. However, the model size remains an issue to be addressed. It might be solved by deploying the model in the cloud. Of course, this study is only a small step forward. In addition to substance characterization, future work may include the quantification of specific substances in mixtures, which can be achieved by collecting the FAIMS spectra of mixtures that contain different amounts of substances. Future work may also include further cloud deployment of the model and development of faster, more portable FAIMS systems.

Author Contributions

Conceptualization, H.L.; methodology, J.P.; software, J.P.; validation, H.L. and J.P.; formal analysis, H.Z.; investigation, W.X.; resources, Z.C.; data curation, X.D.; writing—original draft preparation, H.L. and J.P.; writing—review and editing, X.D.; visualization, X.D.; supervision, W.X.; project administration, H.L.; funding acquisition, H.L. and W.X. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported by the National Natural Science Foundation of China under grant numbers 62163009, 61864001 and 61761013, the state key program of Guangxi Natural Science Foundation Program under grant number 2021JJD170019, and the Foundation of Guangxi Key Laboratory of Automatic Detecting Technology and Instruments (Guilin University of Electronic Technology) under grant number YQ21111.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

Figure 1. The homemade FAIMS experimental platform.
Figure 2. The structure of a typical convolutional neural network.
Figure 3. FAIMS spectra of pure compounds: (a) ethanol, (b) ethyl acetate, (c) acetone, (d) 4-methyl-2-pentanone, and (e) 2-butanone.
Figure 4. FAIMS spectra of several mixtures: (a) ethanol + ethyl acetate, (b) ethanol + acetone, (c) ethanol + acetone + ethyl acetate, (d) acetone + ethyl acetate, (e) ethanol + 4-methyl-2-pentanone, (f) ethanol + 4-methyl-2-pentanone + ethyl acetate, (g) 4-methyl-2-pentanone + ethyl acetate, (h) ethanol + 2-butanone, (i) ethanol + 2-butanone + ethyl acetate, and (j) 2-butanone + ethyl acetate.
Figure 5. Results produced by fine-tuned and fully trained models. (a) Performance on the model validation set and (b) the number of epochs required for model convergence.
Table 1. Characterization of the experimental dataset.
| Serial Number | Sample | Model Dataset Number | Blind Test Set Number |
|---|---|---|---|
| 1 | ethanol | 18 | 2 |
| 2 | ethyl acetate | 13 | 2 |
| 3 | acetone | 18 | 2 |
| 4 | 4-methyl-2-pentanone | 18 | 2 |
| 5 | 2-butanone | 19 | 2 |
| 6 | ethanol + ethyl acetate | 18 | 2 |
| 7 | ethanol + acetone | 18 | 2 |
| 8 | ethanol + acetone + ethyl acetate | 18 | 2 |
| 9 | acetone + ethyl acetate | 17 | 2 |
| 10 | ethanol + 4-methyl-2-pentanone | 18 | 2 |
| 11 | ethanol + 4-methyl-2-pentanone + ethyl acetate | 18 | 2 |
| 12 | 4-methyl-2-pentanone + ethyl acetate | 17 | 2 |
| 13 | ethanol + 2-butanone | 19 | 2 |
| 14 | ethanol + 2-butanone + ethyl acetate | 18 | 2 |
| 15 | 2-butanone + ethyl acetate | 18 | 2 |
| Total | | 265 | 30 |
Table 2. Data distribution.
| Sample | Number of Class 1 Samples * | Number of Class 2 Samples | Total |
|---|---|---|---|
| ethanol | 145 | 120 | 265 |
| ethyl acetate | 137 | 126 | 265 |
| acetone | 71 | 194 | 265 |
* Class 1 indicates pure compounds of interest to the model as well as mixtures containing the substance of interest, whereas class 2 indicates other pure compounds as well as mixtures that do not contain the substance.
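The binary labeling scheme described in the footnote can be expressed programmatically. The sketch below is a hypothetical helper (not the authors' code): a sample is assigned class 1 whenever the target substance appears among the components of its mixture name, and class 2 otherwise. Sample names follow Table 1.

```python
# Hypothetical helper reproducing the Table 2 labeling rule:
# class 1 if the target substance is a component of the sample, else class 2.
def binary_label(sample_name: str, target: str) -> int:
    components = [c.strip() for c in sample_name.split("+")]
    return 1 if target in components else 2

samples = ["ethanol",
           "acetone + ethyl acetate",
           "ethanol + 4-methyl-2-pentanone + ethyl acetate"]
labels = [binary_label(s, "ethanol") for s in samples]
print(labels)  # [1, 2, 1]
```

Splitting on "+" rather than substring matching avoids mislabeling, e.g. "ethanol" would otherwise match inside a longer compound name.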
Table 3. The performance of the blind test set.
| Blind Test Set | Class | Number of Real Labels | Number of Predicted Labels | Accuracy |
|---|---|---|---|---|
| 1 | other | 14 | 14 | 100% |
| | ethanol | 16 | 16 | |
| 2 | other | 14 | 13 | 96.7% |
| | ethyl acetate | 16 | 17 | |
| 3 | other | 22 | 18 | 86.7% |
| | acetone | 8 | 12 | |
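The accuracies in Table 3 are the fraction of the 30 blind samples classified correctly. As a minimal sketch (the per-sample labels below are illustrative, chosen only to match the set 2 counts, where one "other" sample is predicted as "ethyl acetate"):

```python
# Illustrative labels consistent with Table 3, blind test set 2:
# 14 true "other", 16 true "ethyl acetate"; the model predicts
# 13 "other" and 17 "ethyl acetate", i.e. one false positive.
real = ["other"] * 14 + ["ethyl acetate"] * 16
pred = ["other"] * 13 + ["ethyl acetate"] * 17

accuracy = sum(r == p for r, p in zip(real, pred)) / len(real)
print(f"{accuracy:.1%}")  # 96.7%
```

With 29 of 30 samples correct, the accuracy is 29/30 ≈ 96.7%, matching the table.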
Table 4. Performance of PCA and the SVM.
| Sample | Training Set | Validation Set | Blind Dataset |
|---|---|---|---|
| ethanol | 74.5% | 79.2% | 70.0% |
| ethyl acetate | 81.5% | 75.0% | 66.7% |
| acetone | 85.4% | 78.8% | 80.0% |
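The PCA step of the PCA + SVM baseline in Table 4 can be sketched with plain NumPy: center the spectra, take the SVD, and project onto the top principal components before classification. The data shapes and component count here are assumptions for illustration, not the authors' settings, and the SVM step is omitted.

```python
import numpy as np

# Placeholder data standing in for flattened FAIMS spectra:
# 265 samples (as in Table 1), assumed 100 features each.
rng = np.random.default_rng(0)
X = rng.normal(size=(265, 100))

# PCA via SVD of the centered data matrix; rows of Vt are principal axes.
X_centered = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
k = 20  # assumed number of retained components
X_reduced = X_centered @ Vt[:k].T

print(X_reduced.shape)  # (265, 20)
```

The reduced feature matrix would then be fed to an SVM classifier to reproduce a baseline of this kind.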
Li, H.; Pan, J.; Zeng, H.; Chen, Z.; Du, X.; Xiao, W. Identification of Specific Substances in the FAIMS Spectra of Complex Mixtures Using Deep Learning. Sensors 2021, 21, 6160. https://doi.org/10.3390/s21186160