Article

Low-Cost CO Sensor Calibration Using One Dimensional Convolutional Neural Network

1 Department of Mechanical and Electrical Engineering, Massey University, Auckland 0632, New Zealand
2 Massey Agrifood Digital Lab., Massey University, Palmerston North 4410, New Zealand
* Author to whom correspondence should be addressed.
Sensors 2023, 23(2), 854; https://doi.org/10.3390/s23020854
Submission received: 20 December 2022 / Revised: 3 January 2023 / Accepted: 6 January 2023 / Published: 11 January 2023
(This article belongs to the Section Sensor Networks)

Abstract

The advent of cost-effective sensors and the rise of the Internet of Things (IoT) present the opportunity to monitor urban pollution at a high spatio-temporal resolution. However, these sensors suffer from poor accuracy that can be improved through calibration. In this paper, we propose to use One Dimensional Convolutional Neural Network (1DCNN)-based calibration for low-cost carbon monoxide sensors and benchmark its performance against several Machine Learning (ML)-based calibration techniques. We make use of three large datasets collected by research groups around the world from field-deployed low-cost sensors co-located with accurate reference sensors. Our investigation shows that 1DCNN performs consistently across all datasets. Gradient boosting regression, another ML technique that has not been widely explored for gas sensor calibration, also performs reasonably well. For all datasets, the introduction of temperature and relative humidity data improves the calibration accuracy. Cross-sensitivity to other pollutants can be exploited to improve the accuracy further. This suggests that low-cost sensors should be deployed as a suite or an array to measure covariate factors.

1. Introduction

Urban air pollution has been linked to adverse effects on the environment, public health and quality of life [1]. Therefore, there is a concerted effort to alleviate the effects of air pollution [2]. Monitoring air pollution can raise awareness among the general public and subsequently lead to a sustainable urban environment [3]. Conventional air quality monitoring systems typically involve the deployment of a small number of expensive stationary stations [4]. While the data from these stations are accurate, the poor spatial resolution hinders the generation of robust, city-wide air quality data. Low-cost sensors have been identified as an option to supplement the information captured by conventional air quality monitoring systems [4,5]. Many countries around the world [6,7,8] have started to adopt this approach to monitor urban pollutants at high spatial resolution.
Researchers have developed many low-cost sensors in the past few decades to capture real-time air pollution data [9,10]. Two of the most widely reported low-cost gas sensors to measure ambient air pollution are Metal Oxide (MOX) [11] and Electrochemical (EC) [12] gas sensors. Such sensors have been used in various scenarios such as road-side pollution measurement [7,13], rural and urban air pollution measurement [8,14,15,16,17,18], mobile vehicular pollution measurement [19], source attribution [20], personal exposure monitoring [4], etc. However, the data generated by these sensors are not as accurate as the measurements produced by the conventional sensors [6,21,22] due to the influence of many covariate factors. For example, EC sensors developed for measuring common pollutants, such as Carbon Monoxide (CO), Nitrogen Dioxide (NO2), and Ozone (O3) are impacted by ambient temperature, relative humidity, and cross-sensitivity to other gases [12,13,17,21,23,24]. Researchers have been working to develop calibration strategies and techniques to improve the accuracy of low-cost sensors. Such sensors can be calibrated by co-locating them with accurate sensors so that the calibrated measurements of the low-cost sensor closely agree with the co-located accurate reference sensor [9]. The co-located measurements are often performed during “field deployment” as it is difficult to emulate the inherently complex nature of the ambient conditions in a controlled lab setup [8,14,25,26]. The key component of the calibration is the training of regression models to capture the complex, often nonlinear, relationship between the raw sensor output and the ground truth provided by the accurate reference sensor.
Many computational calibration approaches to improve the accuracy of low-cost sensors have been reported in the literature. Classic statistical regressions such as Multiple Linear Regression (MLR) are still being employed in recent works [6,27,28,29,30,31,32,33]. State-of-the-art calibration methods include supervised Machine Learning (ML) techniques such as Support Vector Regression (SVR) [34,35,36,37,38,39], ensemble ML techniques, such as Random Forest Regression (RFR) [8,34,36,40,41,42,43], and Neural Networks (NN) such as Multilayer Perceptron (MLP) [25,27,28,37,38,39,43,44] and Recurrent Neural Networks (RNN) [37,38,39,40,45,46]. Table 1 summarizes the ML techniques used for the calibration of low-cost gas sensors. With a few exceptions [6,37], most of these studies typically utilize one set of data to demonstrate the calibration performance, making it difficult to ascertain the generalizability of the techniques.
Our literature review shows that among the NN-based techniques, the One Dimensional Convolutional Neural Network (1DCNN) has not been well investigated for low-cost gas sensor calibration. 1DCNN has demonstrated excellent performance in a variety of applications (e.g., indoor localization [47], human activity recognition [48], and time series forecasting [49]). However, there are only two reports [50,51] of 1DCNN being utilized for the calibration of low-cost air pollution monitoring sensors. Kureshi et al. [50] employed it for the calibration of Particulate Matter (PM) sensors. In a recent publication that investigated the impact of the pandemic on air quality, Vajs et al. [51] employed 1DCNN to calibrate low-cost NO2 sensors. However, they did not benchmark its performance against any other ML techniques; therefore, it is impossible to ascertain its (comparative) efficacy. Similarly, Gradient Boosting Regression (GBR), an ensemble learning technique, has also not been widely utilized for gas sensor calibration, although it has shown good performance in other applications (e.g., PM sensor calibration [52] and prediction and forecasting [53,54]). Bagkis et al. [41] report the only work that employed GBR for gas sensor calibration. However, their work mainly focused on temporal drift correction, and the performance of GBR was not benchmarked against sophisticated techniques such as NNs.
Contribution Statement:
  • This paper proposes applying 1DCNN and GBR for calibrating low-cost CO sensors. As far as we know, this is the first work to benchmark these algorithms against NN-based algorithms for this task.
  • Furthermore, this work, in contrast to most studies reported in the literature, evaluates the calibration models across multiple datasets, enabling us to draw more robust conclusions.
  • We show that 1DCNN-based calibration is consistently accurate compared to several Machine Learning (ML) based techniques across three large CO datasets.
  • We also highlight that GBR, an ML technique that has not been investigated widely for low-cost gas sensor calibration, performs quite accurately for all three datasets.

2. Dataset Description

Table 2 provides a summary of the three datasets collected from two locations in Italy and one in China. These are multi-sensor datasets, but we have focused on the calibration of the CO sensor, as this gas is an essential component of the Air Quality Index (AQI) [55], and both the raw (from low-cost sensors) and reference CO data are available for all three deployments. We found that the data collected by the cost-effective multi-sensor devices and reference sensors have missing samples. Previous research has found evidence of cross-sensitivity in these gas measurements (e.g., see [21]). Therefore, for any given instant, all pollutant (and temperature and relative humidity) data need to be available from the cost-effective sensor alongside the reference CO data for multivariate calibration. As a result, we removed readings from each dataset at any time instant where any pollutant data from the cost-effective sensors or the CO ground truth data were missing. Please see Figure 1 for the CO ground truth distributions and temperature/relative humidity data for all three datasets. The World Health Organization (WHO) recommended limits for CO exposure are no more than 9–10 ppm averaged over 8 h, 25–35 ppm over 1 h, and 90–100 ppm over 15 min [56]. As can be seen, the CO concentrations at all three monitoring sites are lower than these thresholds.
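As an illustration of this filtering step, the following is a minimal sketch in Python/pandas; the column names are hypothetical placeholders, not the labels used in the original datasets.

```python
import pandas as pd

# Hypothetical column names; the actual datasets use their own labels.
SENSOR_COLS = ["co_raw", "no2_raw", "o3_raw", "temp", "rh"]
REFERENCE_COL = "co_reference"

def drop_incomplete_rows(df: pd.DataFrame) -> pd.DataFrame:
    """Keep only time instants where every low-cost sensor channel
    and the reference CO reading are present."""
    required = SENSOR_COLS + [REFERENCE_COL]
    return df.dropna(subset=required).reset_index(drop=True)

# Example usage with a toy frame containing one incomplete row.
raw = pd.DataFrame({
    "co_raw": [1.2, None, 0.9],
    "no2_raw": [30.0, 28.0, 25.0],
    "o3_raw": [40.0, 41.0, 39.0],
    "temp": [18.0, 19.5, 20.1],
    "rh": [55.0, 53.0, 50.0],
    "co_reference": [1.1, 1.0, 0.8],
})
clean = drop_incomplete_rows(raw)  # the second (incomplete) row is removed
```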

2.1. Dataset 1

The dataset was recorded by a multi-sensor device [37] containing an array of five low-cost MOX sensors that measured CO, NO2, O3, Non-Methanic Hydrocarbons (NMHC), and NOX along with temperature (T) and relative humidity (RH). It includes 9357 samples of hourly averaged responses recorded between 10 March 2004 and 4 April 2005 in the Lombardy Region, Italy. Ground truth was provided by a co-located certified reference analyzer, a conventional monitoring station equipped with a spectrometer [37], which supplied hourly averaged CO concentrations. After removing missing data points, we were left with 6941 samples for each pollutant, T, RH, and the CO ground truth. More details of the dataset can be found in [17,21].

2.2. Dataset 2

This dataset includes the responses of a MONICA multi-sensor device [44] deployed in the Italian city of Naples. The gas sensor hardware consists of an array of electrochemical gas sensors that measure CO, NO2, and O3, along with T and RH. Hourly averaged responses were recorded along with reference CO concentrations from a certified analyzer (Teledyne T300, manufactured by Teledyne API). After discarding the missing data, a total of 13,595 samples collected over 31 months (5 April 2018–24 November 2020) are available. More details of the dataset can be found in [28]. It should be noted that the auxiliary electrode data of the CO sensor is available for the MONICA device and has been utilized during calibration.

2.3. Dataset 3

This dataset was recorded by a Sniffer4D multi-sensor device [29] deployed in the Chinese city of Guangzhou. The array of EC gas sensors measured CO, NO2, and O3 along with T and RH. A total of 3450 samples of hourly averaged data collected over a span of six months between 1 October 2018 and 1 March 2019 are utilized, along with reference CO concentrations collected from a certified analyzer (Thermo Scientific 48i-TLE). More details of the dataset can be found in [29]. Please note that this dataset is also available at a higher per-minute sampling rate.

3. Methodology

The calibration was framed as a supervised regression problem such that
$\mathrm{CO}_{calibrated} = \Phi\{\mathrm{CO}_{raw}, \bar{X}\}$
Here, $\mathrm{CO}_{calibrated}$ is the calibrated CO reading computed from the raw CO reading of the sensor ($\mathrm{CO}_{raw}$) and $\bar{X}$, which comprises covariate factors such as T, RH, and other pollutant readings from the sensor array (e.g., uncalibrated NO2 and O3 readings from the low-cost sensor array). For Dataset 2, $\mathrm{CO}_{raw}$ includes both the working electrode and auxiliary electrode data. $\Phi$ is the regression model whose parameters are derived from the training data to minimize the Mean Square Error (MSE) between the calibrated output and the ground truth received from the reference CO sensor. The training set is a subset of the dataset. We have considered two different Train Test Splits (TTS) for this study. In TTS1, each of the three datasets is split so that 90% of the data is used to train (and validate, as discussed later) the calibration model, while the remaining 10% is used for evaluating the performance of the trained model. This 90/10 split represents the scenario where a co-located low-cost sensor is being used as a backup in case the reference grade monitor is out of commission for a short period due to fault or maintenance. In TTS2, the train/test split is 20/80. This emulates a scenario where a low-cost sensor is co-located with a reference sensor for a set period for calibration and afterward deployed in the field for monitoring pollutants at locations where no reference AQM station is available. It should be noted that for both train/test splits, we have used consecutive samples: the first 90% or 20% of the samples were used for training, and the remaining data were used for testing. This imitates a practical scenario where the sensor is co-located with the reference for a set period of time for calibration and then taken for field deployment. It also allows the calibration algorithms to exploit the temporal correlation between contiguous samples.
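A minimal sketch of how such a chronological split could be implemented is shown below; the variable names and calling code are illustrative only.

```python
import numpy as np

def chronological_split(X: np.ndarray, y: np.ndarray, train_fraction: float):
    """Split consecutive (time-ordered) samples: the first `train_fraction`
    of the record is used for training/validation, the rest for testing."""
    n_train = int(len(X) * train_fraction)
    return X[:n_train], X[n_train:], y[:n_train], y[n_train:]

# TTS1: 90/10 split (low-cost sensor as backup for a reference monitor).
# TTS2: 20/80 split (short co-location, then long field deployment).
# X_tr, X_te, y_tr, y_te = chronological_split(X, y, 0.9)   # TTS1
# X_tr, X_te, y_tr, y_te = chronological_split(X, y, 0.2)   # TTS2
```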
Three different regression cases were considered for each of the ML algorithms.

3.1. Scenarios 1–3

3.1.1. Scenario 1 (SC1)

This involves deriving regressors or calibration models so that
$\mathrm{CO}_{calibrated}^{SC1} = \Phi_{SC1}\{\mathrm{CO}_{raw}\}$
The regressor, $\Phi_{SC1}$, is derived solely based on the raw CO sensor input to minimize the MSE between $\mathrm{CO}_{calibrated}^{SC1}$ and the ground truth. For datasets 1 and 3, $\mathrm{CO}_{raw}$ represents the working electrode data of the low-cost CO sensor. For dataset 2, $\mathrm{CO}_{raw}$ comprises both working and auxiliary electrode data.

3.1.2. Scenario 2 (SC2)

The second case introduces temperature and relative humidity readings as part of the input so that
$\mathrm{CO}_{calibrated}^{SC2} = \Phi_{SC2}\{\mathrm{CO}_{raw}, T, RH\}$
The regressor, $\Phi_{SC2}$, is now derived from three input variables: the raw CO sensor data, temperature, and relative humidity, to minimize the MSE between $\mathrm{CO}_{calibrated}^{SC2}$ and the ground truth. Accurate T and RH sensors are inexpensive, and it is reasonable to expect the availability of these readings for any deployment. As mentioned before, the literature suggests that low-cost gas sensor operations are impacted by T and RH. Therefore, introducing a multivariate calibration strategy is the next logical step.

3.1.3. Scenario 3 (SC3)

Cross-sensitivity is a known issue with low-cost gas sensors. However, this dependency can also be exploited to improve the calibration if the covariate pollutant data is available. In fact, it is quite common to construct and deploy a sensor array consisting of multiple pollutant sensors (as was the case in the three deployments that produced the datasets used in this research). Therefore, the last case further introduces other pollutant readings from the sensor array as part of the input that leads to
$\mathrm{CO}_{calibrated}^{SC3} = \Phi_{SC3}\{\mathrm{CO}_{raw}, T, RH, \mathrm{NO_2}_{raw}, \mathrm{O_3}_{raw}\}$
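The three scenarios differ only in which columns of the multi-sensor record are fed to the regressor. The following sketch illustrates one way to assemble the inputs, assuming the data are held in a pandas DataFrame; the column names are hypothetical placeholders.

```python
import pandas as pd

def build_inputs(df: pd.DataFrame, scenario: str) -> pd.DataFrame:
    """Assemble the regressor inputs for each scenario (column names are
    hypothetical placeholders, not the original dataset labels)."""
    if scenario == "SC1":          # raw CO only
        cols = ["co_raw"]
    elif scenario == "SC2":        # raw CO + temperature + relative humidity
        cols = ["co_raw", "temp", "rh"]
    elif scenario == "SC3":        # all covariates, incl. raw NO2 and O3
        cols = ["co_raw", "temp", "rh", "no2_raw", "o3_raw"]
    else:
        raise ValueError(f"unknown scenario: {scenario}")
    return df[cols]

# For Dataset 2 the raw CO input would include both the working and
# auxiliary electrode columns, e.g. ["co_we", "co_aux"] instead of "co_raw".
```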

3.2. Machine Learning Algorithms

Convolutional Neural Networks (CNNs) have become a popular machine learning technique during the last decade [57]. Conventional CNNs are mainly designed to process two-dimensional (2D) data, e.g., videos and images [58]. This structure can be modified into a 1DCNN to deal with one-dimensional signals [59,60,61]. Compared to their 2D counterparts, 1DCNNs have lower computational complexity, a compact structure (1–2 hidden CNN layers), and shorter training times, and are thus suitable for low-cost real-time applications [58]. 1DCNNs are also well suited to dealing with time series. In this paper, we propose to use a 1DCNN-based regressor for the calibration of low-cost CO sensors. Figure 2 shows an example of the 1DCNN structure used in our work.
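The sketch below shows a minimal 1DCNN regressor of the kind depicted in Figure 2, written with the Keras API. The filter counts, kernel size, dropout rate, and learning rate here are placeholders; in our study these hyperparameters are selected by grid search for each dataset, scenario, and split, as described later in this section.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_1dcnn(lookback: int, n_features: int) -> tf.keras.Model:
    """A minimal 1DCNN regressor: two Conv1D blocks followed by a dense
    head producing a single calibrated CO value (all sizes are placeholders)."""
    model = models.Sequential([
        layers.Input(shape=(lookback, n_features)),
        layers.Conv1D(filters=32, kernel_size=3, activation="relu", padding="same"),
        layers.MaxPooling1D(pool_size=2),
        layers.Conv1D(filters=16, kernel_size=3, activation="relu", padding="same"),
        layers.Flatten(),
        layers.Dropout(0.2),
        layers.Dense(32, activation="relu"),
        layers.Dense(1),                       # calibrated CO (ppm)
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mse")
    return model

# Example: a lookback window of 8 hourly samples with 5 input channels (SC3).
model = build_1dcnn(lookback=8, n_features=5)
```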
As mentioned in Section 1, we also develop calibration models using GBR, an ensemble learning technique that has not been widely utilized for gas sensor calibration.

ML Algorithms for Benchmarking

We trained and evaluated/benchmarked the calibration performance of 1DCNN and GBR alongside three other ML-based techniques commonly reported in the literature, as discussed in Section 1. These are
  • MLP has been employed in many reported works on gas sensor calibration. Please note that in the literature, it is sometimes referred to as an Artificial Neural Network (ANN), Feedforward Neural Network (FNN), Back Propagation Neural Network (BPNN), or simply Neural Network.
  • Recent literature suggests that Recurrent Neural Networks, or RNNs, are well suited for sensor calibration due to their ability to exploit temporal correlation in the data. After some preliminary investigation, we selected Long Short-Term Memory (LSTM) as the RNN-based technique for our benchmark work.
  • Random Forest Regressor, or RFR, is an ensemble learning technique that has shown good performance in several works for low-cost gas sensor calibration and was, therefore, also selected to benchmark against.
  • Furthermore, linear regression is the most commonly employed technique for calibrating low-cost gas sensors and is, therefore, also utilized for benchmarking purposes (a minimal instantiation of the non-neural baselines is sketched after this list).
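As an illustration (not necessarily the exact implementation used in this work), the non-neural benchmark models could be instantiated with scikit-learn as follows; the hyperparameter values shown are placeholders that grid search would replace, and the MLP and LSTM benchmarks would be built with Keras in the same way as the 1DCNN sketched above.

```python
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor

# Non-neural benchmark calibration models; hyperparameter values are
# placeholders that the grid search described below would replace.
baselines = {
    "Linear": LinearRegression(),
    "RFR": RandomForestRegressor(n_estimators=200, max_depth=10,
                                 max_leaf_nodes=100, random_state=0),
    "GBR": GradientBoostingRegressor(max_depth=3, min_samples_leaf=2,
                                     min_samples_split=5, random_state=0),
}

# Each model maps the scenario inputs to the reference CO concentration:
# for name, reg in baselines.items():
#     reg.fit(X_train, y_train)
#     rmse = ((reg.predict(X_test) - y_test) ** 2).mean() ** 0.5
```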
A rigorous training, validation, and testing approach has been followed in this work. All the regressors have hyperparameters that were tuned on the relevant training datasets and evaluated on the corresponding testing sets. One way to ensure that the tuned hyperparameters generalize is through validation; k-fold (k = 10) cross-validation has been implemented in this work. Multiple models with various hyperparameters are trained on the training dataset. The trained models are tested on the validation dataset, and the best-performing model is selected. The best-performing model is then trained using both the training and validation datasets. Lastly, this newly trained model is evaluated on the testing dataset. The following steps and Figure 3 provide a detailed description of this process:
Step 1: The dataset is split into training and testing datasets.
Step 2: The training is conducted using ten-fold cross-validation, where the training dataset is divided into ten equal-sized parts. Each time, nine of the ten parts are used to perform a grid search for hyperparameter tuning and are then evaluated against the remaining tenth part (validation). This process is repeated ten times, and the best hyperparameter combination is selected across all ten evaluations.
Step 3: The best-performing model is further trained using the entirety of the training dataset. This training is done ten times over, and an average value of the predicted output is calculated.
Step 4: The final output is evaluated on the (unseen) testing dataset by computing the performance metrics.
Table 3 lists the hyperparameters that were tuned for all the ML algorithms. The final hyperparameters for every calibration model can be found online (https://github.com/Sharafat-Ali/AirQualityResults, accessed on 19 December 2022).
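A minimal sketch of the grid search with ten-fold cross-validation (Steps 1–4), using GBR as an example estimator, is given below. The grid values are illustrative; the tuned values for every calibration model are those published in the repository linked above.

```python
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV

# Hypothetical grid over the GBR hyperparameters listed in Table 3.
param_grid = {
    "max_depth": [2, 3, 4],
    "min_samples_leaf": [1, 2, 5],
    "min_samples_split": [2, 5, 10],
}

search = GridSearchCV(
    estimator=GradientBoostingRegressor(random_state=0),
    param_grid=param_grid,
    cv=10,                                  # ten-fold cross-validation (Step 2)
    scoring="neg_mean_squared_error",       # minimize MSE, as in the paper
    n_jobs=-1,
)
# search.fit(X_train, y_train)              # grid search on the training data
# best_model = search.best_estimator_       # refit on the full training set (Step 3)
# y_pred = best_model.predict(X_test)       # evaluate on unseen test data (Step 4)
```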
Instead of training for a set number of epochs, an early stopping method was used during the training-validating stage. This method allowed the training to end once the model performance stopped improving on the validation set. The validation set's mean squared error (MSE) was monitored at each epoch. The training would stop when the MSE ceased to decrease by a certain tolerance threshold for a select number of epochs (the patience). The model weights with the minimum validation MSE within that patience window were taken as the final weights. Given the extensive and differing number of hyperparameters used across the various ML algorithms, it is difficult to exhaustively define the parameters in the manuscript; we have added a reference for interested readers [62]. Figure 4 shows an example of the training and validation losses for 1DCNN.
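A sketch of such an early-stopping configuration with Keras is shown below; the tolerance and patience values are illustrative rather than the tuned ones.

```python
from tensorflow.keras.callbacks import EarlyStopping

# Minimal early-stopping setup; min_delta and patience are illustrative.
early_stop = EarlyStopping(
    monitor="val_loss",          # validation MSE
    min_delta=1e-4,              # tolerance threshold for "no improvement"
    patience=20,                 # number of epochs to wait before stopping
    restore_best_weights=True,   # keep the weights with the minimum val_loss
)

# history = model.fit(
#     X_train, y_train,
#     validation_data=(X_val, y_val),
#     epochs=500,
#     callbacks=[early_stop],
# )
```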
Several performance metrics have been used in this study to benchmark and evaluate the different calibration models. These metrics, in various ways, measure the residuals or errors, i.e., deviations of the calibrated output of the low-cost sensors ($\mathrm{CO}_{calibrated}$) from the ground truth ($\mathrm{CO}_{reference}$) for the test data (10% or 80% of every dataset, depending on the split) that has never been used for training.

3.3. Performance Metrics

The Root Mean Square Error (RMSE), which is the standard deviation of the residuals and is commonly used as a performance metric for sensor calibration [29,63,64,65,66], was used where
$RMSE = \sqrt{\frac{1}{N}\sum_{i=0}^{N-1}\left[\mathrm{CO}_{calibrated} - \mathrm{CO}_{reference}\right]^2}$
Here N is the number of samples in the relevant test data set.
Another metric utilized is the Coefficient of Determination ($R^2$), which measures the goodness of fit in regression analysis [67,68,69]. It is computed as
$R^2 = 1 - \frac{\sum_{i=0}^{N-1}\left[\mathrm{CO}_{calibrated} - \mathrm{CO}_{reference}\right]^2}{\sum_{i=0}^{N-1}\left[\mathrm{CO}_{reference} - \mathrm{mean}(\mathrm{CO}_{reference})\right]^2}$
While the appropriateness of $R^2$ to determine the fit of nonlinear regressors has been questioned, it is still commonly used within the discipline of air pollutant measurement (e.g., see [14,15,40,66,68]).
In some instances, we have also plotted the Cumulative Distribution Function (CDF) of absolute errors, $\mathrm{abs}[\mathrm{CO}_{calibrated} - \mathrm{CO}_{reference}]$, for a more detailed investigation.
Target diagrams [15,40] were constructed to visualize the calibration models. The y-axis represents the Mean Bias Error (MBE) normalized by the standard deviation of the ground truth so that
$MBE = \mathrm{mean}(\mathrm{CO}_{calibrated}) - \mathrm{mean}(\mathrm{CO}_{reference})$
$\mathrm{Normalised\ MBE} = \frac{MBE}{\sigma_{reference}}$
Here, $\sigma_{reference}$ is the standard deviation of the ground truth for the relevant test data set. The x-axis of the diagram represents the normalized unbiased estimate of the RMSE, the Centered RMSE (CRMSE), given as
$CRMSE = \sqrt{RMSE^2 - MBE^2}$
$\mathrm{Normalised\ CRMSE} = \frac{CRMSE}{\sigma_{reference}}$
The normalized CRMSE is multiplied by $\mathrm{sign}\{\sigma_{calibrated} - \sigma_{reference}\}$ to produce the target diagrams, with $\sigma_{calibrated}$ being the standard deviation of the calibrated data for the relevant test data set.
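For reference, the following sketch computes these metrics (RMSE, R2, and the normalized MBE and CRMSE used as target-diagram coordinates) as defined above.

```python
import numpy as np

def calibration_metrics(co_calibrated: np.ndarray, co_reference: np.ndarray) -> dict:
    """Compute the evaluation metrics used in this work on the test data."""
    residuals = co_calibrated - co_reference
    rmse = np.sqrt(np.mean(residuals ** 2))
    r2 = 1.0 - np.sum(residuals ** 2) / np.sum(
        (co_reference - co_reference.mean()) ** 2)
    mbe = co_calibrated.mean() - co_reference.mean()
    crmse = np.sqrt(rmse ** 2 - mbe ** 2)
    sigma_ref = co_reference.std()
    sign = np.sign(co_calibrated.std() - co_reference.std())
    return {
        "RMSE": rmse,
        "R2": r2,
        "normalised_MBE": mbe / sigma_ref,             # target diagram y-axis
        "normalised_CRMSE": sign * crmse / sigma_ref,  # target diagram x-axis
    }
```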

4. Results and Discussion

Table 4 shows the performance of the calibration algorithms for all datasets in terms of RMSE (in ppm) and R2. We can make the following observations:
  • The accuracy of every calibration algorithm improves (lower RMSE, higher R2) as we go from SC1 (CO only) to SC2 (CO with T & RH) to SC3 (all inputs). The accuracy improves when temperature and humidity are included (SC2) alongside the raw CO data. There is further improvement in accuracy when the other pollutant data are also introduced (SC3) to exploit the dependencies arising from cross-sensitivity. This clearly emphasizes the importance of deploying low-cost sensors as multi-sensor platforms. Not only does this allow multiple pollutants to be monitored with a single unit, but the accuracy of the measured data also improves.
  • Every ensemble and neural network-based algorithm outperforms linear regression-based calibration for all scenarios. In almost every instance, 1DCNN is the best-performing algorithm. This shows that 1DCNN could potentially improve the relative accuracy of low-cost multi-sensor air pollutant monitors. GBR and LSTM are the next most accurate algorithms. While LSTM has gained much traction for gas sensor calibration, GBR-based calibration appears to have received far less attention and warrants further investigation.
  • The accuracy of any given algorithm is better for the 90/10 split (TTS1) compared to the 20/80 split (TTS2). Interestingly, the accuracy improvement from SC1 (CO only) to SC3 (all inputs) is more noticeable than from TTS2 (20/80) to TTS1 (90/10). The covariate factors appear more important than longer training/co-location time. For example, consider the performance of 1DCNN for Dataset 1. The RMSE improves from 0.599 ppm to 0.393 ppm from SC1 (CO only) to SC3 (all inputs) for TTS2 (or from 0.542 ppm to 0.349 ppm for TTS1). Whereas for SC3 (all inputs) models, the RMSE improves from 0.393 ppm to 0.349 ppm when going from TTS2 (20/80) to TTS1 (90/10) (for SC1, it is from 0.599 ppm to 0.542 ppm).
  • The accuracy of the models derived and evaluated with TTS2 (20/80) is not significantly worse than those for TTS1 (90/10). This suggests that with sophisticated calibration models, such as the ones presented in this work, not only could the low-cost sensor platforms be utilized as a backup for a reference-grade monitor (TTS1), but they could also be deployed for reasonably accurate CO monitoring over a long duration after a short co-location (TTS2). It should be noted that the accuracy of the calibration models could be improved further by periodic co-location and recalibration (please see [44]).
  • The accuracy of the algorithms appears to be the best for dataset 3. This could be due to the comparatively small number of low-concentration CO readings in dataset 3; low-cost sensors typically struggle to register low gas concentrations. This is corroborated later in the section with box plots of residuals. Dataset 3 also covers a considerably shorter time span; therefore, the sensor may have experienced lower drift and degradation than the sensors of the other two measurement campaigns. It should be noted that the sensor platforms used to collect datasets 2 and 3 are constructed from the same CO sensors and therefore present an opportunity for a reasonably objective evaluation of this effect.
Figure 5 shows the CDF of absolute errors for the 1DCNN-based calibration (error CDF plots for all scenarios can be found at https://github.com/Sharafat-Ali/AirQualityResults, accessed on 19 December 2022). The impact of T and RH on accuracy improvement seems more prominent for datasets 2 and 3, whereas the impact of cross-sensitivity to other pollutants appears more significant for dataset 1. This could be because datasets 2 and 3 were collected using EC sensors, whereas those used for dataset 1 are MOX-based. It should be noted that these observations are consistent across all calibration techniques.
Target diagrams for the 1DCNN-based calibration models are shown in Figure 6 (target diagrams for all algorithms can be found at https://github.com/Sharafat-Ali/AirQualityResults, accessed on 19 December 2022). The following observations can be made:
  • All points lie within the unit circle (radius = 1); therefore, the variance of the residuals is smaller than the variance of the reference measurements. It is an essential characteristic of a functional calibration model [15], indicating that the variability of the dependent variable (calibrated output) is explained by the independent variable (the reference data) and not the residual [27]. It should be noted that all calibration algorithms presented in this work fulfill this criterion.
  • The distance from the origin, which measures the normalized RMSE (RMSE/$\sigma_{reference}$), clearly shows that the SC3 (all covariate inputs) regressors are more accurate than the SC2 (CO with T and RH) regressors, which are in turn more accurate than the SC1 (CO only) regressors. This once again demonstrates the importance of the availability of covariate factors, such as temperature, relative humidity, and other pollutants.
  • The majority of the points lie in the left half-plane, indicating that the standard deviation of the calibrated sensor data for most models is smaller than the ground truth standard deviation.
  • For TTS1 (90/10), the points lie above the x-axis, indicating that the models, on average, slightly overestimate the CO concentration. For TTS2 (20/80), a few models also slightly underestimate the CO concentration.
Low-cost sensors have been reported to suffer from variability in accuracy over different dynamic ranges of pollutants [27,40]. Therefore, we performed a quantitative investigation by constructing box plots of residuals as a percentage of ground truth at each decile of the ground truth (please see Figure 7 for the performance of the 1DCNN-based calibration). The figures show that in the lower concentration range of CO, the accuracy is worst (median values further from zero) and exhibits more variability (larger boxes). The models underestimate at lower concentrations and slightly overestimate at higher concentrations. The variability appears to be more prominent for datasets 1 and 2. The likely reason for this increased variability and degraded performance is the difference in CO levels experienced during the measurement campaigns. For datasets 1 and 2, the first deciles of CO data start at 0.0873 ppm and 0.066 ppm, respectively, compared to dataset 3 (starting at 0.296 ppm). Since low-cost sensors are likely to struggle with sensitivity at low pollutant concentrations [22,70], this hardware limitation is probably causing the performance degradation at the lowest deciles for datasets 1 and 2.
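A sketch of how such decile-wise residual box plots can be produced from the calibrated and reference series is given below; function and variable names are illustrative.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

def residuals_by_decile(co_calibrated: np.ndarray, co_reference: np.ndarray):
    """Box plots of residuals as a percentage of ground truth, grouped by
    decile of the reference CO concentration (a sketch in the spirit of Figure 7)."""
    df = pd.DataFrame({
        "reference": co_reference,
        "residual_pct": 100.0 * (co_calibrated - co_reference) / co_reference,
    })
    df["decile"] = pd.qcut(df["reference"], q=10, labels=False, duplicates="drop") + 1
    df.boxplot(column="residual_pct", by="decile")
    plt.xlabel("Decile of reference CO")
    plt.ylabel("Residual (% of reference)")
    plt.suptitle("")   # remove the automatic group-by title
    plt.show()
```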

Computational Cost

The computational cost of the ML algorithms can be mainly attributed to the training and tuning of the hyperparameters. However, it should be noted that these activities are not performed on the sensor devices, which are constrained in terms of computational capability and energy storage. The low-cost sensors are deployed to collect and send the data to a backend server or a cloud-based infrastructure that performs the calibration/training offline. By investing in this infrastructure, we can deploy a city-wide low-cost sensor network at a very high spatial resolution. This is far more cost-effective than deploying a large number of expensive reference-grade sensors. We trained the ML algorithms on a workstation equipped with a 16-core AMD Ryzen processor, 128 GB RAM, and two NVIDIA A40 GPUs. The cost of this workstation (less than $20,000) is far lower than that of a single reference-grade gas sensor.
Once an algorithm has been trained, it can produce a calibrated output almost instantaneously. For example, even for the largest dataset (dataset 2), the trained 1DCNN algorithm takes less than 10 s to produce the calibrated output for the entirety of 80% of the data (20/80 split) for SC3, even on a simple PC (Intel Core i7-8700 3.20GHz CPU, 16 GB RAM) with no additional GPU. Therefore, it is possible to have a real-time operation with ML-based algorithms, where the data (electrode reading, T, RH, other pollutant readings) for one time instant will be sent to the cloud for the algorithm to produce the calibrated output only for that instant.
The computational complexity of ML algorithms can be benchmarked by comparing the total number of learnable parameters. Table 5 shows the number of learnable parameters for the three most consistent ML algorithms (1DCNN, GBR, and LSTM) and the linear regression for scenario 3 for the 90/10 splits (TTS1). The performance improvement of the ML algorithms comes at the cost of increased computational complexity, and GBR requires the highest number of parameters to be learned.
The number of learnable parameters increases as more covariate factors are included. Table 6 shows the total number of learnable parameters for 1DCNN for all three scenarios for the 90/10 splits. We can observe that the number of learnable parameters increases as we go from SC1 to SC2 and then from SC2 to SC3. Thus, the increased accuracy comes at the cost of learning more parameters.
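As a note on how such counts can be obtained, Keras reports the number of learnable parameters of a compiled model directly; for tree ensembles an analogous figure could be derived from the node counts of the fitted trees (an assumption about one possible counting convention, not necessarily the one used to produce Tables 5 and 6).

```python
# For the neural-network models (1DCNN, LSTM, MLP) built in Keras, the
# number of learnable parameters can be read directly from the model:
n_params = model.count_params()   # e.g., the 1DCNN sketched earlier
print(f"Learnable parameters: {n_params}")

# For tree ensembles such as GBR, one possible analogue is the total node
# count of the fitted trees (hypothetical convention, shown for illustration):
# n_params = sum(tree[0].tree_.node_count for tree in fitted_gbr.estimators_)
```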

5. Conclusions and Future Work

We proposed 1DCNN-based multivariate regressors for the calibration of low-cost CO sensors. The 1DCNN-based calibration algorithms were benchmarked against several machine learning-based calibration techniques and linear regression and were found to be the most accurate based on several performance metrics. GBR-based calibration models also, in general, perform better than many of the popular ML techniques for all datasets. Therefore, both 1DCNN and GBR should be further explored for low-cost gas sensor calibration.
We can also conclude that the CO sensors' accuracy can be significantly improved by using data from the other sensors of a multi-sensor platform. Therefore, low-cost sensors should be deployed as multi-sensor arrays. It also appears that low-cost CO sensors can be calibrated reasonably accurately through a short co-location with a reference sensor and then deployed for a significantly more extended period of monitoring.
Data augmentation can improve the performance of ML algorithms and can be explored for improving the accuracy of the calibration algorithms. It should be noted that the sensors were tested on data that were collected from the same location. It is not apparent how the calibration models would perform for the same sensors located in an area with significantly different pollutant concentrations and meteorological conditions. In fact, the accuracy of ML algorithms may deteriorate when dealing with data that are outside of the calibration range. One way to deal with this issue would be to train multiple models at various concentration levels or distributions, and a heuristic-based algorithm can then be utilized to switch between the models depending on the pollutant concentration. The sensors will need to be collocated at multiple locations (e.g., central city, industrial area, suburb, etc.) to develop multiple calibration models. The models calibrated for a certain CO concentration can be used to develop the calibration models for other concentration levels. Future work can investigate the efficacy of such a strategy and transfer calibration if the same sensor-platform data is available from multiple locations.
In this work, we have only calibrated CO data. All three datasets also contain the raw and reference data for NO2. In the future, we will investigate and benchmark the performance of 1DCNN and GBR-based calibration for NO2.
Since low-cost sensor platforms are not likely to have an identical response, one might need a slightly different calibration model for each sensor. However, the development of such calibration models can be expedited through transfer calibration, where a base model, developed through extensive collocated deployments, is fine-tuned with a short collocation to address the slightly dissimilar hardware responses. This can be explored in a future study.

Author Contributions

Conceptualization, S.A. and F.A.; methodology, S.A. and F.A.; software, S.A.; formal analysis, S.A.; investigation, S.A.; writing—original draft preparation, S.A. and F.A.; writing—review and editing, S.A., F.A., K.M.A. and J.P.; supervision, F.A., K.M.A. and J.P.; funding acquisition, J.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partly funded by a doctoral scholarship provided by the NZ Product Accelerator for S.A. The APC was funded by Massey University School of Food and Advanced Technology.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Dataset 1 is open access and can be found here: https://archive.ics.uci.edu/mL/datasets/Air+Quality#, accessed on 19 December 2022. Datasets 2 and 3 were collected from De Vito et al. and Liang et al., respectively.

Acknowledgments

The authors wish to thank De Vito et al. and Liang et al. for making their data available for this research.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Kampa, M.; Castanas, E. Human health effects of air pollution. Environ. Pollut. 2008, 151, 362–367.
  2. Cohen, A.J.; Brauer, M.; Burnett, R.; Anderson, H.R.; Frostad, J.; Estep, K.; Balakrishnan, K.; Brunekreef, B.; Dandona, L.; Dandona, R. Estimates and 25-year trends of the global burden of disease attributable to ambient air pollution: An analysis of data from the Global Burden of Diseases Study 2015. Lancet 2017, 389, 1907–1918.
  3. Alshamsi, A.; Anwar, Y.; Almulla, M.; Aldohoori, M.; Hamad, N.; Awad, M. Monitoring pollution: Applying IoT to create a smart environment. In Proceedings of the 2017 International Conference on Electrical and Computing Technologies and Applications (ICECTA), Ras Al Khaimah, United Arab Emirates, 21–23 November 2017; pp. 1–4.
  4. Castell, N.; Dauge, F.R.; Schneider, P.; Vogt, M.; Lerner, U.; Fishbain, B.; Broday, D.; Bartonova, A. Can commercial low-cost sensor platforms contribute to air quality monitoring and exposure estimates? Environ. Int. 2017, 99, 293–302.
  5. Hu, K.; Sivaraman, V.; Luxan, B.G.; Rahman, A. Design and Evaluation of a Metropolitan Air Pollution Sensing System. IEEE Sens. J. 2016, 16, 1448–1459.
  6. Jiao, W.; Hagler, G.; Williams, R.; Sharpe, R.; Brown, R.; Garver, D.; Judge, R.; Caudill, M.; Rickard, J.; Davis, M. Community Air Sensor Network (CAIRSENSE) project: Evaluation of low-cost sensor performance in a suburban environment in the southeastern United States. Atmos. Meas. Technol. 2016, 9, 5281–5292.
  7. Mead, M.I.; Popoola, O.; Stewart, G.; Landshoff, P.; Calleja, M.; Hayes, M.; Baldovi, J.; McLeod, M.; Hodgson, T.; Dicks, J. The use of electrochemical sensors for monitoring urban air quality in low-cost, high-density networks. Atmos. Environ. 2013, 70, 186–203.
  8. Borrego, C.; Costa, A.M.; Ginja, J.; Amorim, M.; Coutinho, M.; Karatzas, K.; Sioumis, T.; Katsifarakis, N.; Konstantinidis, K.; De Vito, S.; et al. Assessment of air quality microsensors versus reference methods: The EuNetAir joint exercise. Atmos. Environ. 2016, 147, 246–263.
  9. Maag, B.; Zhou, Z.; Thiele, L. A Survey on Sensor Calibration in Air Pollution Monitoring Deployments. IEEE Internet Things J. 2018, 5, 4857–4870.
  10. Yi, W.; Lo, K.; Mak, T.; Leung, K.; Leung, Y.; Meng, M. A Survey of Wireless Sensor Network Based Air Pollution Monitoring Systems. Sensors 2015, 15, 29859.
  11. Fine, G.F.; Cavanagh, L.M.; Afonja, A.; Binions, R. Metal Oxide Semi-Conductor Gas Sensors in Environmental Monitoring. Sensors 2010, 10, 5469–5502.
  12. Cross, E.S.; Williams, L.R.; Lewis, D.K.; Magoon, G.R.; Onasch, T.B.; Kaminsky, M.L.; Worsnop, D.R.; Jayne, J.T. Use of electrochemical sensors for measurement of air pollution: Correcting interference response and validating measurements. Atmos. Meas. Technol. 2017, 10, 3575–3588.
  13. Popoola, O.A.; Stewart, G.B.; Mead, M.I.; Jones, R.L. Development of a baseline-temperature correction methodology for electrochemical sensors and its implications for long-term stability. Atmos. Environ. 2016, 147, 330–343.
  14. Spinelle, L.; Gerboles, M.; Villani, M.G.; Aleixandre, M.; Bonavitacola, F. Field calibration of a cluster of low-cost available sensors for air quality monitoring. Part A: Ozone and nitrogen dioxide. Sens. Actuators B Chem. 2015, 215, 249–257.
  15. Spinelle, L.; Gerboles, M.; Villani, M.G.; Aleixandre, M.; Bonavitacola, F. Field calibration of a cluster of low-cost commercially available sensors for air quality monitoring. Part B: NO, CO and CO2. Sens. Actuators B Chem. 2017, 238, 706–715.
  16. Brienza, S.; Galli, A.; Anastasi, G.; Bruschi, P. A Low-Cost Sensing System for Cooperative Air Quality Monitoring in Urban Areas. Sensors 2015, 15, 12242.
  17. De Vito, S.; Piga, M.; Martinotto, L.; Di Francia, G. CO, NO2 and NOx urban pollution monitoring with on-field calibrated electronic nose by automatic bayesian regularization. Sens. Actuators B Chem. 2009, 143, 182–191.
  18. Fang, X.; Bate, I. Using multi-parameters for calibration of low-cost sensors in urban environment. Networks 2017, 7, 33.
  19. Wang, Y.; Chen, G. Efficient Data Gathering and Estimation for Metropolitan Air Quality Monitoring by Using Vehicular Sensor Networks. IEEE Trans. Veh. Technol. 2017, 66, 7234–7248.
  20. Heimann, I.; Bright, V.B.; McLeod, M.W.; Mead, M.I.; Popoola, O.A.M.; Stewart, G.B.; Jones, R.L. Source attribution of air pollution by spatial scale separation using high spatial density networks of low cost air quality sensors. Atmos. Environ. 2015, 113, 10–19.
  21. De Vito, S.; Massera, E.; Piga, M.; Martinotto, L.; Di Francia, G. On field calibration of an electronic nose for benzene estimation in an urban pollution monitoring scenario. Sens. Actuators B Chem. 2008, 129, 750–757.
  22. Hagan, D.H.; Isaacman-VanWertz, G.; Franklin, J.P.; Wallace, L.M.; Kocar, B.D.; Heald, C.L.; Kroll, J.H. Calibration and assessment of electrochemical air quality sensors by co-location with regulatory-grade instruments. Atmos. Meas. Technol. 2018, 11, 315–328.
  23. De Vito, S.; Fattoruso, G.; Pardo, M.; Tortorella, F.; Di Francia, G. Semi-supervised learning techniques in artificial olfaction: A novel approach to classification problems and drift counteraction. IEEE Sens. J. 2012, 12, 3215–3224.
  24. Masey, N.; Gillespie, J.; Ezani, E.; Lin, C.; Wu, H.; Ferguson, N.S.; Hamilton, S.; Heal, M.R.; Beverland, I.J. Temporal changes in field calibration relationships for Aeroqual S500 O3 and NO2 sensor-based monitors. Sens. Actuators B Chem. 2018, 273, 1800–1806.
  25. Esposito, E.; De Vito, S.; Salvato, M.; Bright, V.; Jones, R.L.; Popoola, O. Dynamic neural network architectures for on field stochastic calibration of indicative low cost air quality sensing systems. Sens. Actuators B Chem. 2016, 231, 701–713.
  26. Holstius, D.M.; Pillarisetti, A.; Smith, K.R.; Seto, E. Field calibrations of a low-cost aerosol sensor at a regulatory monitoring site in California. Atmos. Meas. Technol. 2014, 7, 1121–1131.
  27. Topalović, D.B.; Davidović, M.D.; Jovanović, M.; Bartonova, A.; Ristovski, Z.; Jovašević-Stojanović, M. In search of an optimal in-field calibration method of low-cost gas sensors for ambient air pollutants: Comparison of linear, multilinear and artificial neural network approaches. Atmos. Environ. 2019, 213, 640–658.
  28. De Vito, S.; Esposito, E.; Massera, E.; Formisano, F.; Fattoruso, G.; Ferlito, S.; Del Giudice, A.; D’Elia, G.; Salvato, M.; Polichetti, T. Crowdsensing IoT Architecture for Pervasive Air Quality and Exposome Monitoring: Design, Development, Calibration, and Long-Term Validation. Sensors 2021, 21, 5219.
  29. Liang, Y.; Wu, C.; Jiang, S.; Li, Y.J.; Wu, D.; Li, M.; Cheng, P.; Yang, W.; Cheng, C.; Li, L.; et al. Field comparison of electrochemical gas sensor data correction algorithms for ambient air measurements. Sens. Actuators B Chem. 2021, 327, 128897.
  30. Badura, M.; Batog, P.; Drzeniecka-Osiadacz, A.; Modzel, P. Low- and Medium-Cost Sensors for Tropospheric Ozone Monitoring—Results of an Evaluation Study in Wrocław, Poland. Atmosphere 2022, 13, 542.
  31. Hofman, J.; Nikolaou, M.; Shantharam, S.P.; Stroobants, C.; Weijs, S.; La Manna, V.P. Distant calibration of low-cost PM and NO2 sensors; evidence from multiple sensor testbeds. Atmos. Pollut. Res. 2022, 13, 101246.
  32. Rogulski, M.; Badyda, A.; Gayer, A.; Reis, J. Improving the Quality of Measurements Made by Alphasense NO2 Non-Reference Sensors Using the Mathematical Methods. Sensors 2022, 22, 3619.
  33. Zuidema, C.; Schumacher, C.S.; Austin, E.; Carvlin, G.; Larson, T.V.; Spalt, E.W.; Zusman, M.; Gassett, A.J.; Seto, E.; Kaufman, J.D.; et al. Deployment, Calibration, and Cross-Validation of Low-Cost Electrochemical Sensors for Carbon Monoxide, Nitrogen Oxides, and Ozone for an Epidemiological Study. Sensors 2021, 21, 4214.
  34. Cordero, J.M.; Borge, R.; Narros, A. Using statistical methods to carry out in field calibrations of low cost air quality sensors. Sens. Actuators B Chem. 2018, 267, 245–254.
  35. Djedidi, O.; Djeziri, M.A.; Morati, N.; Seguin, J.-L.; Bendahan, M.; Contaret, T. Accurate detection and discrimination of pollutant gases using a temperature modulated MOX sensor combined with feature extraction and support vector classification. Sens. Actuators B Chem. 2021, 339, 129817.
  36. Bigi, A.; Mueller, M.; Grange, S.K.; Ghermandi, G.; Hueglin, C. Performance of NO, NO2 low cost sensors and three calibration approaches within a real world application. Atmos. Meas. Technol. 2018, 11, 3717–3735.
  37. De Vito, S.; Esposito, E.; Salvato, M.; Popoola, O.; Formisano, F.; Jones, R.; Di Francia, G. Calibrating chemical multisensory devices for real world applications: An in-depth comparison of quantitative machine learning approaches. Sens. Actuators B Chem. 2018, 255, 1191–1210.
  38. Esposito, E.; De Vito, S.; Salvato, M.; Fattoruso, G.; Bright, V.; Jones, R.L.; Popoola, O. Stochastic Comparison of Machine Learning Approaches to Calibration of Mobile Air Quality Monitors. Springer Int. Publ. 2018, 294–302.
  39. Esposito, E.; De Vito, S.; Salvato, M.; Fattoruso, G.; Di Francia, G. Computational Intelligence for Smart Air Quality Monitors Calibration. In Computational Science and Its Applications—ICCSA 2017, Proceedings of the 17th International Conference, Trieste, Italy, 3–6 July 2017; Springer: Cham, Switzerland, 2017; pp. 443–454.
  40. Zimmerman, N.; Presto, A.A.; Kumar, S.P.; Gu, J.; Hauryliuk, A.; Robinson, E.S.; Robinson, A.L.; Subramanian, R. A machine learning calibration model using random forests to improve sensor performance for lower-cost air quality monitoring. Atmos. Meas. Technol. 2018, 11, 291–313.
  41. Bagkis, E.; Kassandros, T.; Karatzas, K. Learning Calibration Functions on the Fly: Hybrid Batch Online Stacking Ensembles for the Calibration of Low-Cost Air Quality Sensor Networks in the Presence of Concept Drift. Atmosphere 2022, 13, 416.
  42. Bittner, A.S.; Cross, E.S.; Hagan, D.H.; Malings, C.; Lipsky, E.; Grieshop, A.P. Performance characterization of low-cost air quality sensors for off-grid deployment in rural Malawi. Atmos. Meas. Technol. 2022, 15, 3353–3376.
  43. Malings, C.; Tanzer, R.; Hauryliuk, A.; Kumar, S.P.; Zimmerman, N.; Kara, L.B.; Presto, A.A.; Subramanian, R. Development of a general calibration model and long-term performance evaluation of low-cost sensors for air pollutant gas monitoring. Atmos. Meas. Technol. 2019, 12, 903–920.
  44. De Vito, S.; Di Francia, G.; Esposito, E.; Ferlito, S.; Formisano, F.; Massera, E. Adaptive machine learning strategies for network calibration of IoT smart air quality monitoring devices. Pattern Recognit. Lett. 2020, 136, 264–271.
  45. Athira, V.; Geetha, P.; Vinayakumar, R.; Soman, K.P. DeepAirNet: Applying Recurrent Networks for Air Quality Prediction. Procedia Comput. Sci. 2018, 132, 1394–1403.
  46. Fonollosa, J.; Sheik, S.; Huerta, R.; Marco, S. Reservoir computing compensates slow response of chemosensor arrays exposed to fast varying gas concentrations in continuous monitoring. Sens. Actuators B Chem. 2015, 215, 618–629.
  47. Tariq, O.B.; Lazarescu, M.T.; Lavagno, L. Neural networks for indoor person tracking with infrared sensors. IEEE Sens. Lett. 2021, 5, 6000204.
  48. Cho, H.; Yoon, S.M. Divide and conquer-based 1D CNN human activity recognition using test data sharpening. Sensors 2018, 18, 1055.
  49. Cavalli, S.; Amoretti, M. CNN-based multivariate data analysis for bitcoin trend prediction. Appl. Soft Comput. 2021, 101, 107065.
  50. Kureshi, R.R.; Mishra, B.K.; Thakker, D.; John, R.; Walker, A.; Simpson, S.; Thakkar, N.; Wante, A.K. Data-Driven Techniques for Low-Cost Sensor Selection and Calibration for the Use Case of Air Quality Monitoring. Sensors 2022, 22, 1093.
  51. Vajs, I.; Drajic, D.; Cica, Z. COVID-19 Lockdown in Belgrade: Impact on Air Pollution and Evaluation of a Neural Network Model for the Correction of Low-Cost Sensors’ Measurements. Appl. Sci. 2021, 11, 10563.
  52. Johnson, N.E.; Bonczak, B.; Kontokosta, C.E. Using a gradient boosting model to improve the performance of low-cost aerosol monitors in a dense, heterogeneous urban environment. Atmos. Environ. 2018, 184, 9–16.
  53. Persson, C.; Bacher, P.; Shiga, T.; Madsen, H. Multi-site solar power forecasting using gradient boosted regression trees. Sol. Energy 2017, 150, 423–436.
  54. Johnson, N.E.; Ianiuk, O.; Cazap, D.; Liu, L.; Starobin, D.; Dobler, G.; Ghandehari, M. Patterns of waste generation: A gradient boosting model for short-term waste prediction in New York City. Waste Manag. 2017, 62, 3–11.
  55. Idrees, Z.; Zou, Z.; Zheng, L. Edge Computing Based IoT Architecture for Low Cost Air Pollution Monitoring Systems: A Comprehensive System Analysis, Design Considerations & Development. Sensors 2018, 18, 3021.
  56. World Health Organization. Ambient (Outdoor) Air Pollution. Available online: https://www.who.int/news-room/fact-sheets/detail/ambient-(outdoor)-air-quality-and-health (accessed on 1 January 2023).
  57. Gu, J.; Wang, Z.; Kuen, J.; Ma, L.; Shahroudy, A.; Shuai, B.; Liu, T.; Wang, X.; Wang, G.; Cai, J. Recent advances in convolutional neural networks. Pattern Recognit. 2018, 77, 354–377.
  58. Kiranyaz, S.; Avci, O.; Abdeljaber, O.; Ince, T.; Gabbouj, M.; Inman, D.J. 1D convolutional neural networks and applications: A survey. Mech. Syst. Signal Process. 2021, 151, 107398.
  59. Ince, T.; Kiranyaz, S.; Eren, L.; Askar, M.; Gabbouj, M. Real-time motor fault detection by 1-D convolutional neural networks. IEEE Trans. Ind. Electron. 2016, 63, 7067–7075.
  60. Kiranyaz, S.; Ince, T.; Gabbouj, M. Real-time patient-specific ECG classification by 1-D convolutional neural networks. IEEE Trans. Biomed. Eng. 2015, 63, 664–675.
  61. Liu, X.; Zhou, Q.; Zhao, J.; Shen, H.; Xiong, X. Fault diagnosis of rotating machinery under noisy environment conditions based on a 1-D convolutional autoencoder and 1-D convolutional neural network. Sensors 2019, 19, 972.
  62. Géron, A. Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow; O’Reilly Media, Inc.: Sebastopol, CA, USA, 2022.
  63. Zhu, S.; Lian, X.; Liu, H.; Hu, J.; Wang, Y.; Che, J. Daily air quality index forecasting with hybrid models: A case in China. Environ. Pollut. 2017, 231, 1232–1244.
  64. Zhu, J.; Wu, P.; Chen, H.; Zhou, L.; Tao, Z. A Hybrid Forecasting Approach to Air Quality Time Series Based on Endpoint Condition and Combined Forecasting Model. Int. J. Environ. Res. Public Health 2018, 15, 1941.
  65. Wang, P.; Liu, Y.; Qin, Z.; Zhang, G. A novel hybrid forecasting model for PM10 and SO2 daily concentrations. Sci. Total Environ. 2015, 505, 1202–1212.
  66. Jiang, P.; Li, C.; Li, R.; Yang, H. An innovative hybrid air pollution early-warning system based on pollutants forecasting and Extenics evaluation. Knowl. Based Syst. 2019, 164, 174–192.
  67. Colls, J.; Tiwary, A. Air Pollution: Measurement, Modelling and Mitigation, 3rd ed.; CRC Press: London, UK, 2017.
  68. Wang, J.; Niu, T.; Wang, R. Research and Application of an Air Quality Early Warning System Based on a Modified Least Squares Support Vector Machine and a Cloud Model. Int. J. Environ. Res. Public Health 2017, 14, 249.
  69. Wu, Q.; Lin, H. Daily urban air quality index forecasting based on variational mode decomposition, sample entropy and LSTM neural network. Sustain. Cities Soc. 2019, 50, 101657.
  70. Rai, A.C.; Kumar, P.; Pilla, F.; Skouloudis, A.N.; Di Sabatino, S.; Ratti, C.; Yasar, A.; Rickerby, D. End-user perspective of low-cost sensors for outdoor air pollution monitoring. Sci. Total Environ. 2017, 607, 691–705.
Figure 1. Distribution (histogram) of reference CO data in ppm for datasets 1, 2, and 3 in (a), (b) and (c), respectively. The median and standard deviations of the CO data are (1.66, 1.26), (0.49, 0.40), and (0.67, 0.25), respectively. Box plots of ambient temperature are shown in (d–f). Median temperature values are 17.8 °C, 29.3 °C, and 25.4 °C, respectively. Box plots of relative humidity (RH) are shown in (g–i). Median RH values are 48.9%, 28.5%, and 52.1%, respectively. All data are presented in hourly averaged samples.
Figure 2. An example of a 1DCNN model developed for this study. Please note that many parameters are determined through grid search and tuning. Therefore, they vary depending on the data set, scenario (input variables used), and time split.
Figure 3. Details of hyperparameter training.
Figure 4. Example of training and validation losses for 1DCNN during k-fold cross-validation.
Figure 5. Empirical CDF plots of calibration error for 1DCNN. (a–c) compare the CDF of error of the 1DCNN algorithm with that of linear regression for both the 90/10 (TTS1) and 20/80 (TTS2) splits for scenario 3 (with all available input parameters) for datasets 1–3. (d–f) show how the CDF of error of the 1DCNN algorithm improves from scenarios 1–3 for TTS1. (g–i) show how the CDF of error of the 1DCNN algorithm improves from scenarios 1–3 for TTS2.
Figure 6. Target diagrams of 1DCNN for all datasets. (a) TTS1 (90/10) and (b) TTS2 (20/80).
Figure 7. Box plots of residuals from the 1DCNN algorithm as a percentage of reference CO for TTS1 (90/10) with all available input parameters (SC3). (a) Dataset 1. (b) Dataset 2. (c) Dataset 3.
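The quantities summarized in Figure 7 are residuals expressed as a percentage of the reference CO; a minimal sketch is given below, where the small eps guard is an added assumption to avoid division by near-zero reference values:

```python
import numpy as np

def percentage_residuals(y_ref, y_cal, eps=1e-6):
    """Residuals of the calibrated output as a percentage of the reference CO."""
    y_ref, y_cal = np.asarray(y_ref, float), np.asarray(y_cal, float)
    return 100.0 * (y_cal - y_ref) / np.maximum(y_ref, eps)

# A box plot of these values per dataset reproduces the style of Figure 7.
```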
Table 1. Summary of the ML techniques used for the calibration of low-cost gas sensors.

ML Technique | Key Aspects | References
Multiple Linear Regression (MLR) | Classical statistical regression technique that uses a linear combination of several explanatory variables (e.g., temperature, relative humidity) to compute the calibrated sensor output. | [6,27,28,29,30,31,32,33]
Random Forest Regression (RFR) | An ensemble learning method that develops a nonlinear regression model from several explanatory variables (e.g., temperature, relative humidity) to compute the calibrated sensor output. | [8,34,36,40,41,42,43]
Support Vector Regression (SVR) | A supervised learning algorithm that uses several explanatory variables (e.g., temperature, relative humidity) to compute the calibrated sensor output. | [34,35,36,37,38,39]
Multilayer Perceptron (MLP) | A classical neural network, trained by backpropagation, that maps several explanatory variables (e.g., temperature, relative humidity) to the calibrated sensor output. | [25,27,28,37,38,39,43,44]
Recurrent Neural Networks (RNN) | A neural network that captures the sequential characteristics of the data and is trained with backpropagation through time to map several explanatory variables (e.g., temperature, relative humidity) to the calibrated sensor output. | [37,38,39,40,45,46]
Table 2. Details of the three datasets used in this study.

Dataset | Time Span (Days) | Location | Number of Samples | Low-Cost Sensor Array | Other Pollutants Measured | Reference CO Sensor
1 [37] | 391 | Lombardy Region, Italy | 6941 | MOX | NO2, O3, NMHC, NOX | Fixed conventional monitoring station equipped with spectrometer analyzers
2 [44] | 965 | Naples, Italy | 13,595 | EC | NO2, O3 | Teledyne T300
3 [29] | 152 | Guangzhou, China | 3639 | EC | NO2, O3 | Thermo Scientific 48i-TLE
Table 3. List of hyperparameters that were tuned for each ML-based algorithm.

Algorithm | List of Hyperparameters
RFR | Maximum depth of the tree, maximum number of leaf nodes, and number of trees in the forest.
GBR | Maximum depth of the individual regression estimators, minimum number of samples required to be at a leaf node, and minimum number of samples required to split an internal node.
MLP | Number of hidden layers, number of neurons in the hidden layer, activation function in the hidden layer, dropout rate in the dropout layer, learning rate of the optimizer, and batch size.
LSTM | Number of LSTM layers, time steps, number of units in the LSTM layers, activation function, dropout rate in the dropout layers, learning rate of the optimizer, and batch size.
1DCNN | Number of 1D convolution layers, lookback, number of filters in the convolution layer, activation function in the convolution layer, kernel size, pool size in the max pooling layer, dropout rate in the dropout layer, number of neurons in the dense layer, activation function in the dense layer, learning rate of the optimizer, and batch size.
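As an illustration of how such a search might be set up for one of the models, the sketch below applies scikit-learn's GridSearchCV to the GBR hyperparameters listed in Table 3; the candidate values and fold count are assumptions, not the grid used in the study.

```python
# Illustrative grid search over the GBR hyperparameters of Table 3
# (candidate values are assumptions for demonstration only).
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV, KFold

param_grid = {
    "max_depth": [2, 3, 5],            # maximum depth of the individual estimators
    "min_samples_leaf": [1, 5, 10],    # minimum samples required at a leaf node
    "min_samples_split": [2, 10, 20],  # minimum samples required to split a node
}
search = GridSearchCV(
    GradientBoostingRegressor(),
    param_grid,
    scoring="neg_root_mean_squared_error",
    cv=KFold(n_splits=5, shuffle=False),   # k-fold CV, preserving temporal order
)
# search.fit(X_train, y_train); search.best_params_ holds the selected values.
```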
Table 4. The accuracy of the calibration algorithms in terms of R2 and RMSE (in ppm). The best performance for each scenario is bolded.

Dataset | Test/Train Split | Scenario | Performance Metric | LR/MLR | RFR | GBR | MLP | LSTM | 1DCNN
Dataset 1 | TTS1 | SC1 | RMSE | 0.554 | 0.554 | 0.556 | 0.551 | 0.545 | 0.541
Dataset 1 | TTS1 | SC2 | RMSE | 0.543 | 0.508 | 0.506 | 0.526 | 0.501 | 0.502
Dataset 1 | TTS1 | SC3 | RMSE | 0.384 | 0.346 | 0.349 | 0.350 | 0.344 | 0.344
Dataset 1 | TTS2 | SC1 | RMSE | 0.613 | 0.609 | 0.609 | 0.613 | 0.600 | 0.598
Dataset 1 | TTS2 | SC2 | RMSE | 0.597 | 0.594 | 0.594 | 0.588 | 0.572 | 0.564
Dataset 1 | TTS2 | SC3 | RMSE | 0.437 | 0.404 | 0.409 | 0.415 | 0.405 | 0.396
Dataset 1 | TTS1 | SC1 | R2 | 0.803 | 0.806 | 0.805 | 0.810 | 0.808 | 0.811
Dataset 1 | TTS1 | SC2 | R2 | 0.812 | 0.837 | 0.838 | 0.833 | 0.838 | 0.841
Dataset 1 | TTS1 | SC3 | R2 | 0.905 | 0.924 | 0.923 | 0.923 | 0.925 | 0.924
Dataset 1 | TTS2 | SC1 | R2 | 0.767 | 0.770 | 0.771 | 0.768 | 0.780 | 0.781
Dataset 1 | TTS2 | SC2 | R2 | 0.779 | 0.781 | 0.781 | 0.786 | 0.798 | 0.805
Dataset 1 | TTS2 | SC3 | R2 | 0.882 | 0.899 | 0.897 | 0.896 | 0.900 | 0.904
Dataset 2 | TTS1 | SC1 | RMSE | 0.305 | 0.175 | 0.172 | 0.252 | 0.175 | 0.173
Dataset 2 | TTS1 | SC2 | RMSE | 0.254 | 0.132 | 0.127 | 0.219 | 0.130 | 0.124
Dataset 2 | TTS1 | SC3 | RMSE | 0.234 | 0.129 | 0.120 | 0.204 | 0.119 | 0.117
Dataset 2 | TTS2 | SC1 | RMSE | 0.300 | 0.187 | 0.182 | 0.238 | 0.183 | 0.185
Dataset 2 | TTS2 | SC2 | RMSE | 0.249 | 0.147 | 0.145 | 0.217 | 0.147 | 0.145
Dataset 2 | TTS2 | SC3 | RMSE | 0.229 | 0.143 | 0.141 | 0.188 | 0.134 | 0.136
Dataset 2 | TTS1 | SC1 | R2 | 0.507 | 0.837 | 0.842 | 0.713 | 0.837 | 0.841
Dataset 2 | TTS1 | SC2 | R2 | 0.658 | 0.907 | 0.914 | 0.755 | 0.909 | 0.918
Dataset 2 | TTS1 | SC3 | R2 | 0.710 | 0.910 | 0.923 | 0.868 | 0.924 | 0.927
Dataset 2 | TTS2 | SC1 | R2 | 0.443 | 0.784 | 0.795 | 0.675 | 0.792 | 0.789
Dataset 2 | TTS2 | SC2 | R2 | 0.616 | 0.865 | 0.869 | 0.713 | 0.870 | 0.871
Dataset 2 | TTS2 | SC3 | R2 | 0.674 | 0.873 | 0.877 | 0.842 | 0.890 | 0.887
Dataset 3 | TTS1 | SC1 | RMSE | 0.075 | 0.075 | 0.072 | 0.073 | 0.074 | 0.072
Dataset 3 | TTS1 | SC2 | RMSE | 0.060 | 0.046 | 0.044 | 0.049 | 0.045 | 0.044
Dataset 3 | TTS1 | SC3 | RMSE | 0.054 | 0.043 | 0.038 | 0.049 | 0.039 | 0.038
Dataset 3 | TTS2 | SC1 | RMSE | 0.080 | 0.082 | 0.082 | 0.082 | 0.080 | 0.079
Dataset 3 | TTS2 | SC2 | RMSE | 0.067 | 0.064 | 0.063 | 0.063 | 0.062 | 0.062
Dataset 3 | TTS2 | SC3 | RMSE | 0.060 | 0.060 | 0.053 | 0.056 | 0.053 | 0.049
Dataset 3 | TTS1 | SC1 | R2 | 0.920 | 0.919 | 0.926 | 0.926 | 0.922 | 0.926
Dataset 3 | TTS1 | SC2 | R2 | 0.948 | 0.970 | 0.972 | 0.965 | 0.971 | 0.973
Dataset 3 | TTS1 | SC3 | R2 | 0.958 | 0.974 | 0.980 | 0.971 | 0.978 | 0.979
Dataset 3 | TTS2 | SC1 | R2 | 0.895 | 0.890 | 0.890 | 0.897 | 0.894 | 0.901
Dataset 3 | TTS2 | SC2 | R2 | 0.927 | 0.933 | 0.936 | 0.936 | 0.937 | 0.937
Dataset 3 | TTS2 | SC3 | R2 | 0.941 | 0.941 | 0.954 | 0.957 | 0.954 | 0.967
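Both performance metrics in Table 4 can be computed directly from the held-out test data; a minimal sketch, assuming scikit-learn:

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

def evaluate(y_ref, y_cal):
    """RMSE (ppm) and R^2 of the calibrated output against the reference CO."""
    rmse = float(np.sqrt(mean_squared_error(y_ref, y_cal)))
    r2 = float(r2_score(y_ref, y_cal))
    return rmse, r2

# rmse, r2 = evaluate(y_test, model.predict(X_test))
```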
Table 5. The number of learnable parameters for LR/MLR, GBR, LSTM, and 1DCNN in SC3 for TTS1.

Dataset | Algorithm | Number of Learnable Parameters
Dataset 1 | LR/MLR | 8
Dataset 1 | GBR | 1,750,000
Dataset 1 | LSTM | 35,371
Dataset 1 | 1DCNN | 227,241
Dataset 2 | LR/MLR | 9
Dataset 2 | GBR | 22,500,000
Dataset 2 | LSTM | 137,021
Dataset 2 | 1DCNN | 393,551
Dataset 3 | LR/MLR | 9
Dataset 3 | GBR | 14,400,000
Dataset 3 | LSTM | 287,701
Dataset 3 | 1DCNN | 520,591
Table 6. The number of learnable parameters for 1DCNN in different scenarios.

Dataset | Scenario | Number of Learnable Parameters
Dataset 1 | SC1 | 48,191
Dataset 1 | SC2 | 58,921
Dataset 1 | SC3 | 227,241
Dataset 2 | SC1 | 73,241
Dataset 2 | SC2 | 129,681
Dataset 2 | SC3 | 393,551
Dataset 3 | SC1 | 187,841
Dataset 3 | SC2 | 400,601
Dataset 3 | SC3 | 520,591
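For the neural models, the counts in Tables 5 and 6 correspond to the trainable weights and biases of the fitted network. A minimal sketch for a Keras model is shown below, using the hypothetical build_1dcnn() helper from the earlier sketch; tree-ensemble counts are obtained differently, by tallying the parameters of the fitted estimators.

```python
import numpy as np

def count_keras_parameters(model):
    """Total number of trainable weights and biases in a Keras model."""
    return int(sum(np.prod(w.shape.as_list()) for w in model.trainable_weights))

# Equivalent built-in shortcut: model.count_params()
# e.g., count_keras_parameters(build_1dcnn()) for the hypothetical 1DCNN sketch above.
```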
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
