Article

Fiber Bragg Grating Dynamic Calibration Based on Online Sequential Extreme Learning Machine

Qiufeng Shang and Wenjie Qin
Department of Electronic and Communication Engineering, North China Electric Power University, No. 619 Yong Hua Street, Baoding 071003, China
* Author to whom correspondence should be addressed.
Sensors 2020, 20(7), 1840; https://doi.org/10.3390/s20071840
Submission received: 27 January 2020 / Revised: 22 March 2020 / Accepted: 24 March 2020 / Published: 26 March 2020
(This article belongs to the Section Optical Sensors)

Abstract
The calibration process of a fiber Bragg grating (FBG) sensor is critical to its performance, and real-time dynamic calibration is essential to improve measurement accuracy. In this paper, we present a dynamic calibration method for FBG temperature measurement based on the online sequential extreme learning machine (OS-ELM). During measurement, the calibration model is continuously updated rather than retrained, which avoids tedious recalculation and increases prediction speed. Compared with polynomial fitting, a back propagation (BP) network, and a radial basis function (RBF) network, the dynamic method showed both better generalization performance and a faster learning process. Dynamic calibration allows the real-time measured data of the FBG sensor to be fed continuously into the calibration model as online learning samples, solving the insufficient-coverage problem of static calibration training samples and thereby improving the long-term stability, prediction accuracy, and generalization ability of the FBG sensor.

1. Introduction

Fiber Bragg grating (FBG) sensors have considerable advantages, such as high sensitivity, high accuracy, immunity to electromagnetic interference, stable chemical properties, compact size, and light weight. They are widely used in the measurement and monitoring of physical quantities, including strain, temperature, and humidity [1,2,3,4,5]. In recent decades, the development of optoelectronic technology has gradually expanded the application range of FBG sensors. FBG sensors currently find applications in structural health monitoring [6,7], aeronautic prospecting [8], electrical measurement [9], the production of medical devices [10,11], composite detection [12], and other fields. By monitoring the Bragg wavelength, it is possible to monitor the parameters that induce the wavelength shift of the FBG sensor, namely temperature and/or strain. Calibration determines the mapping between the wavelength and the physical quantity, and it is one of the critical factors affecting the performance of the sensor.
The static calibration of FBG temperature sensors has been researched for a long time. As early as 1998, the authors of [13] pointed out that the Bragg wavelength of fiber gratings has a non-linear relationship with temperature over the range of 4.2–350 K and determined the effect of embedding and of the manufacturing process on the fibers' temperature dependence; it is therefore essential to calibrate fiber grating sensor measurements. In 2006, the authors of [14] used a fifth-order polynomial to describe the temperature–wavelength correspondence and found that the wavelength drift caused by temperature change is highly non-linear over the range of 4.2–350 K. In 2012, the authors of [15] proposed a calibration algorithm based on a lookup table, whose size can be selected according to the accuracy of the measurement data and the processing-time requirements. Compared with polynomial fitting calibration, the lookup-table algorithm reduces the processing time and the measurement errors caused by imperfect fitting of polynomial functions. In addition, the authors of [16] and [17] put forward temperature calibration methods for FBG sensors based on a back propagation (BP) network and a radial basis function (RBF) network, respectively, and found that neural networks achieve a higher calibration accuracy than polynomial fitting. The feasibility of neural networks was thus verified for complex calibration relationships.
However, in actual engineering, we find that the wavelength–temperature response curve of an FBG sensor changes with time. This change is mainly caused by the temperature drift of the Fabry–Perot (F–P) etalon [18], the FBG pre-stretching amplitude, and the sealability between the FBG and the packaging material [19]. If a static calibration method were adopted, the measurement error would increase greatly. Therefore, we propose a dynamic calibration method based on the online sequential extreme learning machine (OS-ELM), which has the advantages of a fast learning speed, strong adaptability, and good generalization [20,21]. Additionally, the OS-ELM has been shown to work well in online prediction tasks in several fields [22,23,24]. To the best of our knowledge, this is the first study in the past ten years to report a long-term stability improvement for FBG dynamic calibration. This study may provide a new perspective on FBG sensors for temperature measurement.

2. Methods and Experiment Setup

2.1. Extreme Learning Machine

The extreme learning machine (ELM) is the basis of the OS-ELM. The ELM is a single-hidden-layer feedforward neural network consisting of an input layer, a hidden layer, and an output layer. The $N$ training samples and the network output are described by $(\mathbf{x}_j, \mathbf{t}_j) \in \mathbb{R}^n \times \mathbb{R}^m$, $j = 1, 2, \ldots, N$ and $f_{\tilde{N}}(\mathbf{x}_j) = \sum_{i=1}^{\tilde{N}} \beta_i h_i(\mathbf{x}_j)$, $j = 1, 2, \ldots, N$, respectively.

Here, $\mathbf{x}_j$ is an $n \times 1$ input vector and $\mathbf{t}_j$ is an $m \times 1$ target vector. $\tilde{N}$ is the number of hidden nodes, which approximates $N$, and $\beta_i$ is the weight vector between the $i$-th hidden node and the output layer. $h_i(\mathbf{x}_j)$ is the output of the $i$-th hidden node for input $\mathbf{x}_j$, as shown in Equation (1), where $\mathbf{w}_i$ is the weight vector between the input layer and the $i$-th hidden node and $b_i$ is the bias of the $i$-th hidden node:

$$ h_i(\mathbf{x}_j) = g(\mathbf{w}_i \cdot \mathbf{x}_j + b_i), \quad j = 1, 2, \ldots, N \qquad (1) $$

According to [25], if $N = \tilde{N}$, then $\sum_{i=1}^{N} \beta_i h_i(\mathbf{x}_j) = \mathbf{t}_j$, $j = 1, 2, \ldots, N$; in matrix form,

$$ \mathbf{H}\boldsymbol{\beta} = \mathbf{T} \qquad (2) $$

Executing the ELM is equivalent to finding the minimum-norm solution of Equation (2), that is, minimizing $\| \mathbf{H}\boldsymbol{\beta} - \mathbf{T} \|$. Let $\tilde{\boldsymbol{\beta}}$ denote the least-squares solution of Equation (2); it is given by $\tilde{\boldsymbol{\beta}} = \mathbf{H}^{\dagger}\mathbf{T}$, where $\mathbf{H}^{\dagger}$ is the Moore–Penrose generalized inverse of $\mathbf{H}$, which can be computed by the orthogonalization method or by iterative methods [25].
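To make the training procedure above concrete, the following is a minimal Python/NumPy sketch (not the authors' implementation) of ELM training with a sigmoid activation, assuming a single-input (wavelength), single-output (temperature) regression as in this paper; all function and variable names are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def elm_train(X, T, n_hidden, seed=0):
    """Train an ELM: random hidden-layer parameters, least-squares output weights.

    X: (N, n) input matrix, T: (N, m) target matrix."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1.0, 1.0, size=(X.shape[1], n_hidden))  # input weights w_i
    b = rng.uniform(-1.0, 1.0, size=n_hidden)                # biases b_i
    H = sigmoid(X @ W + b)                                   # hidden-layer output matrix H
    beta = np.linalg.pinv(H) @ T                             # beta = H^dagger T (Moore-Penrose)
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Evaluate f(X) = h(X) beta."""
    return sigmoid(X @ W + b) @ beta
```

Only the output weights are solved for; the hidden-layer parameters stay at their random values, which is what makes the training a single linear least-squares problem.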

2.2. OS-ELM

The ELM is a static batch-learning process: the training sample set is not updated as new data arrive. The OS-ELM was proposed by G. Huang's team to address this issue. The OS-ELM is generally divided into an initial training phase and an online learning phase. In the initial training phase, the network learns the initial $N_0$ training samples $(\mathbf{x}_j, \mathbf{t}_j) \in \mathbb{R}^n \times \mathbb{R}^m$, $j = 1, 2, \ldots, N_0$. The initial output weights $\boldsymbol{\beta}_0$ minimize $\| \mathbf{H}_0\boldsymbol{\beta} - \mathbf{T}_0 \|$, where $\boldsymbol{\beta}_0 = \mathbf{K}_0^{-1}\mathbf{H}_0^{\mathrm{T}}\mathbf{T}_0$ and $\mathbf{K}_0 = \mathbf{H}_0^{\mathrm{T}}\mathbf{H}_0$. When entering the online learning phase, the first new data point or data block of size $N_1$ is learned, and the training sample set is updated to $(\mathbf{x}_j, \mathbf{t}_j) \in \mathbb{R}^n \times \mathbb{R}^m$, $j = 1, 2, \ldots, N_0 + N_1$. At this point, the network is updated as shown in Equations (3) and (4):

$$ \boldsymbol{\beta}_1 = \mathbf{K}_1^{-1}\begin{bmatrix}\mathbf{H}_0\\ \mathbf{H}_1\end{bmatrix}^{\mathrm{T}}\begin{bmatrix}\mathbf{T}_0\\ \mathbf{T}_1\end{bmatrix} \qquad (3) $$

$$ \mathbf{K}_1 = \begin{bmatrix}\mathbf{H}_0\\ \mathbf{H}_1\end{bmatrix}^{\mathrm{T}}\begin{bmatrix}\mathbf{H}_0\\ \mathbf{H}_1\end{bmatrix} \qquad (4) $$

To obtain a form suitable for continuous online learning, expressing $\boldsymbol{\beta}_1$ in terms of $\boldsymbol{\beta}_0$ gives

$$ \boldsymbol{\beta}_1 = \boldsymbol{\beta}_0 + \mathbf{K}_1^{-1}\mathbf{H}_1^{\mathrm{T}}(\mathbf{T}_1 - \mathbf{H}_1\boldsymbol{\beta}_0) \qquad (5) $$

Generalizing, when the $(k+1)$-th data point or data block is learned,

$$ \boldsymbol{\beta}_{k+1} = \boldsymbol{\beta}_k + \mathbf{K}_{k+1}^{-1}\mathbf{H}_{k+1}^{\mathrm{T}}(\mathbf{T}_{k+1} - \mathbf{H}_{k+1}\boldsymbol{\beta}_k) \qquad (6) $$

$$ \mathbf{K}_{k+1} = \mathbf{K}_k + \mathbf{H}_{k+1}^{\mathrm{T}}\mathbf{H}_{k+1} \qquad (7) $$
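The recursion in Equations (6) and (7) can be coded directly. Below is a minimal Python/NumPy sketch of an OS-ELM, reusing the sigmoid ELM structure from Section 2.1; it is illustrative only, and the small `ridge` term is an added assumption (not in the paper) to keep the initial matrix invertible when the first block is small.

```python
import numpy as np

class OSELM:
    """Minimal OS-ELM for regression, following Eqs. (3)-(7); illustrative sketch."""

    def __init__(self, n_features, n_hidden, ridge=1e-8, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.uniform(-1.0, 1.0, size=(n_features, n_hidden))  # fixed input weights
        self.b = rng.uniform(-1.0, 1.0, size=n_hidden)                # fixed biases
        self.ridge = ridge   # regularizer (assumption, not part of the paper's formulas)
        self.K = None        # K_k = H^T H accumulated so far
        self.beta = None     # output weights beta_k

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

    def fit_initial(self, X0, T0):
        """Initial training phase: beta_0 = K_0^{-1} H_0^T T_0."""
        H0 = self._hidden(X0)
        self.K = H0.T @ H0 + self.ridge * np.eye(H0.shape[1])
        self.beta = np.linalg.solve(self.K, H0.T @ T0)
        return self

    def partial_fit(self, Xk, Tk):
        """Online learning phase: update K and beta for a new data block, Eqs. (6)-(7)."""
        Hk = self._hidden(Xk)
        self.K = self.K + Hk.T @ Hk
        self.beta = self.beta + np.linalg.solve(self.K, Hk.T @ (Tk - Hk @ self.beta))
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta
```

Note that only the small $\tilde{N} \times \tilde{N}$ matrix $\mathbf{K}$ and the output weights are kept between updates, so each new block is absorbed without revisiting the earlier training data.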

2.3. Experiment Setup

The experimental setup is depicted in Figure 1. The temperature change can be detected by measuring the wavelength shift of the FBG. In Figure 1, line segments without arrows represent optical transmission, while those with arrows represent electrical transmission. The light from the broadband source passed through an isolator. A tunable F–P filter with a center wavelength of 1550 nm, a free spectral range (FSR) of 98.8 nm, and a bandwidth of 0.177 nm was adopted in this system. Driven by a triangle wave, the tunable F–P filter produced a narrow-band tunable light that scanned across the broadband spectrum. The narrow-band tunable light was split into two branches by an optical coupler. The upper branch was transmitted to the FBG through a circulator, and the reflected light was detected by a photodetector (PD1). When the transmission wavelength of the tunable F–P filter coincided with the reflection wavelength of the FBG, PD1 detected the maximum light intensity. The lower branch passed through the F–P etalon and was detected by another photodetector (PD2). The F–P etalon was similar in structure to the F–P filter, its main part also being an F–P cavity. The F–P etalon, which had an FSR of 0.798 nm and a finesse of 6.61, was selected for its wavelength-marking function and served as the wavelength reference. PD1 and PD2 had an operating wavelength range of 1100–1650 nm, a bandwidth of 4 MHz, a dark current of less than 0.85 nA, and a sensitivity of −52 dBm. PD1 and PD2 converted the detected optical signals into electrical signals, which were sent to a personal computer (PC) via a data acquisition card; the PC then performed denoising and peak detection. The data acquisition card simultaneously acquired the FBG reflection spectrum and the transmission spectrum of the F–P etalon. Since the wavelength of each positive peak in the etalon transmission spectrum was known, the Bragg wavelength of the FBG was determined by comparing the peak position of the FBG reflection spectrum with the peak positions of the F–P etalon transmission spectrum.
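As an illustration of this last step, the sketch below shows one way the Bragg wavelength could be recovered from the two acquired traces, by locating peaks and interpolating between the known etalon peak wavelengths. The paper does not give its demodulation code, so the peak-finding settings, the `etalon_peak_wavelengths` array, and all names here are assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

def bragg_wavelength(fbg_trace, etalon_trace, etalon_peak_wavelengths):
    """Estimate the FBG Bragg wavelength from one scan (illustrative sketch).

    fbg_trace, etalon_trace: sampled PD1/PD2 signals over one triangle-wave scan.
    etalon_peak_wavelengths: known wavelengths (nm) of the etalon transmission peaks,
    spaced by the etalon FSR (about 0.798 nm here)."""
    # Sample indices of the etalon peaks and of the single FBG reflection peak.
    etalon_idx, _ = find_peaks(etalon_trace, prominence=0.1)
    fbg_idx = int(np.argmax(fbg_trace))

    # Map sample index -> wavelength by interpolating between the etalon peaks,
    # then read off the wavelength at the FBG peak position.
    n = min(len(etalon_idx), len(etalon_peak_wavelengths))
    return float(np.interp(fbg_idx, etalon_idx[:n], etalon_peak_wavelengths[:n]))
```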

3. Results and Discussion

3.1. Data Set

In order to verify the improvement in measurement accuracy, generalization ability, and long-term stability achieved by OS-ELM dynamic calibration of the FBG sensor, four data acquisition experiments were conducted, each providing wavelength–temperature pairs. The experimental results are displayed in Figure 2. The four experiments were conducted in chronological order, with an interval of five months between the first and second experiments, five days between the second and third experiments, and nine months between the third and fourth experiments. In the first experiment, six temperature points were taken: 10, 15, 20, 24, 28, and 32 °C. The second experiment also took six, unevenly spaced, temperature points: 12, 14, 18, 22, 26, and 30 °C. The ranges of the third and fourth experiments were 13–16 °C and 5–9 °C, respectively. It can be seen from Figure 2 that the wavelength–temperature maps of the four experiments form different curves, and no single curve can be fitted to represent their relationship. The discrepancy between measurement sets was mainly caused by the temperature drift of the F–P etalon, the FBG pre-stretching amplitude, and the sealability between the FBG and the packaging material. Since eliminating the discrepancy in the optical path with hardware would increase the complexity and cost of the system, this paper studies a dynamic calibration method to eliminate it.

3.2. Simulated Analysis

To compare the dynamic calibration model against static calibration models under controlled conditions, the ELM (i.e., an OS-ELM with only the initial training phase) was used as the static counterpart of the OS-ELM. Owing to limited space, only the 352 data pairs from the third experiment, which had the most severe noise of the four experiments, were used for verification. The 352 pairs were first randomly divided into 300 pairs as a training data set and the remaining 52 pairs as a testing data set for accuracy testing, as shown in Figure 3. Figure 4 gives the data set for generalization performance testing: 110 data points in the temperature range 14–15 °C were taken as the training data set, and the remaining 242 data points were used as the testing data set.
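A minimal sketch of these two splits is shown below, assuming the 352 wavelength–temperature pairs are available as NumPy arrays; the loader name and all other identifiers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical arrays of shape (352,): wavelengths (nm) and temperatures (degC)
# wl, temp = load_experiment_3()

def accuracy_split(wl, temp, n_train=300):
    """Random split used for the prediction-accuracy test (300 train / 52 test)."""
    idx = rng.permutation(len(wl))
    tr, te = idx[:n_train], idx[n_train:]
    return (wl[tr], temp[tr]), (wl[te], temp[te])

def generalization_split(wl, temp, lo=14.0, hi=15.0):
    """Split used for the generalization test: train inside 14-15 degC, test outside."""
    mask = (temp >= lo) & (temp <= hi)
    return (wl[mask], temp[mask]), (wl[~mask], temp[~mask])
```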
Commonly used activation functions for the ELM include the sigmoid function (sig), the sine function (sin), the hard-limit function (hardlim), and the radial basis function (radbas). The ELM model performs differently depending on the activation function used. The performance of these four activation functions was compared in terms of the root mean square error (RMSE) and the goodness of fit (R²), given by Equations (8) and (9), respectively, and the results are shown in Table 1.
$$ \mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(f_{\tilde{N}}(\mathbf{x}_i) - t_i\right)^2} \qquad (8) $$

$$ R^2 = \frac{\sum_{i=1}^{N}\left(t_i - \overline{f_{\tilde{N}}(\mathbf{x})}\right)^2}{\sum_{i=1}^{N}\left(f_{\tilde{N}}(\mathbf{x}_i) - \overline{f_{\tilde{N}}(\mathbf{x})}\right)^2} \qquad (9) $$

where $N$ is the total number of testing samples and $\tilde{N}$ is the number of hidden-layer neurons. $f_{\tilde{N}}(\mathbf{x}_i)$ and $t_i$ are the temperature measurements from the ELM and the thermometer, respectively, and $\overline{f_{\tilde{N}}(\mathbf{x})}$ is the mean of $f_{\tilde{N}}(\mathbf{x}_i)$.
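The two metrics can be computed in a few lines; the sketch below takes the R² definition exactly as written in Equation (9), and all names are illustrative.

```python
import numpy as np

def rmse(pred, target):
    """Root mean square error, Eq. (8)."""
    return float(np.sqrt(np.mean((pred - target) ** 2)))

def goodness_of_fit(pred, target):
    """Goodness of fit R^2 as written in Eq. (9)."""
    mean_pred = pred.mean()
    return float(np.sum((target - mean_pred) ** 2) / np.sum((pred - mean_pred) ** 2))
```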
The RMSE describes the precision of the prediction: the closer its value is to 0, the better the prediction performance. Correspondingly, the closer the R² value is to 1, the better the fitted regression curve matches the observations. As shown in Table 1, the hardlim function clearly had the worst performance. The sigmoid function returned the smallest RMSE, and the sine function had the shortest training time. The radial basis function was close to the sigmoid function in terms of R²; however, its RMSE was larger than that of the sigmoid function. A comprehensive analysis showed that the ELM performs best with the sigmoid activation function.
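For reference, the four candidate activation functions could be coded as below for the ELM sketch given earlier; the hardlim and radbas definitions follow the usual MATLAB-style conventions, which is an assumption since the paper does not define them explicitly.

```python
import numpy as np

def sig(z):      # sigmoid
    return 1.0 / (1.0 + np.exp(-z))

def sin_act(z):  # sine
    return np.sin(z)

def hardlim(z):  # hard limit: 1 if z >= 0, else 0
    return (z >= 0).astype(float)

def radbas(z):   # radial basis: exp(-z^2)
    return np.exp(-z ** 2)
```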
The prediction accuracies of the ELM, polynomial, BP, and RBF models were also compared. To make the comparison fair, the best-performing configuration of each calibration model was used. As shown in Table 2, the polynomial took the least time, the ELM had the lowest RMSE, and R² was very close for all models. In terms of real-time performance the polynomial was the best, while the ELM was the best in terms of accuracy. As a prediction model, the generalization performance should also be considered; it is compared below.
Table 3 compares the generalization performance of the polynomial, BP, RBF, and ELM models. The RBF and ELM performed best in terms of RMSE and R², but the RBF took more time than the ELM. Figure 5 shows a boxplot of the differences between the predicted and observed values of the four models, used to analyze their stability. It can be seen from Figure 5 that the generalization prediction error of the polynomial was the largest, while that of the ELM was the smallest. Meanwhile, the whiskers of the ELM box are the shortest in the boxplot, so the ELM prediction was more stable than the others.
Comparing the prediction accuracy and generalization performance of the four models (polynomial, BP, RBF, and ELM) shows that the ELM model outperformed the other three in both respects.

3.3. Dynamic Calibration

The ELM model can be regarded as an OS-ELM with only the initial training phase, so the ELM is a static model. The calibration based on the OS-ELM is dynamic: the calibration model is continuously updated as new data arrive, rather than being retrained. The performance of the dynamic calibration was evaluated in two respects, stability and generalization. The long-term stability was verified with the first and third experimental data sets (interval of five months), and the short-term stability was verified with the second and third experimental data sets (interval of five days). Data from the first experiment (10–32 °C) and from the fourth experiment (5–9 °C), acquired 14 months apart, were used to verify the long-term generalization performance.
Firstly, the short-term stability was studied. The data set of the second experiment was used to train the calibration network, and the trained network was then used to predict the data of the third experiment. The prediction results are shown in Figure 6, and a boxplot of the prediction errors is shown in Figure 7. The prediction errors of the polynomial, BP, RBF, and ELM were 1.2436 °C, 1.2316 °C, 1.2350 °C, and 1.1956 °C, respectively, while the prediction error of the OS-ELM was 0.2 °C.
To verify the long-term stability, the data set of the first experiment was used to train the calibration network, and the data of the third experiment were then predicted by the trained network. The prediction results are shown in Figure 8, and a boxplot of the prediction errors is shown in Figure 9. The advantages of the OS-ELM dynamic calibration in prediction accuracy and stability can be clearly observed.
To study the long-term generalization performance of the dynamic calibration, the data set of the first experiment was used to train the calibration network, and the trained network was then used to predict the data of the fourth experiment. The online learning samples were the data of the fourth experiment in the range of 5–6 °C. The prediction results are shown in Figure 10, and a boxplot of the prediction errors is shown in Figure 11. The clear advantages of the OS-ELM can again be seen.
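As a usage illustration of this dynamic procedure (building on the OSELM sketch from Section 2.2), the snippet below mimics the long-term generalization test: initial training on the first experiment, one online update with the 5–6 °C block of the fourth experiment, and prediction over the remaining points. The loader names, the number of hidden nodes, and all other identifiers are assumptions.

```python
import numpy as np

# Hypothetical loaders returning wavelength (nm) and temperature (degC) arrays.
# wl1, t1 = load_experiment_1()   # initial training data (10-32 degC)
# wl4, t4 = load_experiment_4()   # data to be predicted (5-9 degC)

def dynamic_calibration_demo(wl1, t1, wl4, t4):
    model = OSELM(n_features=1, n_hidden=20)          # OSELM class from the Section 2.2 sketch
    model.fit_initial(wl1.reshape(-1, 1), t1.reshape(-1, 1))

    online = (t4 >= 5.0) & (t4 <= 6.0)                # online learning samples (5-6 degC)
    model.partial_fit(wl4[online].reshape(-1, 1), t4[online].reshape(-1, 1))

    pred = model.predict(wl4[~online].reshape(-1, 1)).ravel()
    return np.max(np.abs(pred - t4[~online]))          # maximum absolute prediction error
```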
The comparative analyses above show that the dynamic calibration model based on the OS-ELM has both excellent generalization performance and high prediction accuracy. Dynamic calibration allows the sensor's field-measured data to be fed continuously into the network model as online learning samples, which solves the problems of the large drift error of static calibration models and the insufficient coverage of the initial training samples.

4. Discussion

This paper provides a new dynamic model-updating method that differs from traditional static calibration. In the dynamic updating phase, both the current prediction accuracy and the historical record are considered, which helps to reduce the fitting error caused by insufficient online learning samples. In addition, the dynamic calibration based on the OS-ELM significantly improved the prediction accuracy and generalization performance compared with previous static calibration methods. The maximum absolute error was 0.502 °C in the short-term stability experiment, 0.516 °C in the long-term stability experiment, and 0.374 °C in the long-term generalization experiment. Future research will focus on improving the calibration model according to the data characteristics.

Author Contributions

Formal analysis, Q.S. and W.Q.; Funding acquisition, Q.S.; Investigation, W.Q.; Methodology, W.Q.; Project administration, Q.S.; Writing—original draft, W.Q.; Writing—review & editing, Q.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Natural Science Foundation of China (grant number 61775057), and Natural Science Foundation of Hebei Province (grant number E2019502179).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Ma, S.; Guo, J.; Guo, L.; Cao, J.; Zhang, B. On-line monitoring system for downhole temperature and pressure. Opt. Eng. 2014, 8, 087102.
2. Antunes, P.; Travanca, R.; Rodrigues, H.; Melo, J.; Jara, J.; Varum, H.; Andre, P. Dynamic structural health monitoring of slender structures using optical sensors. Sensors 2012, 5, 6629–6644.
3. Mohanty, L.; Tjin, S.C.; Lie, D.T.T.; Panganiban, S.E.C.; Chow, P.K.H. Fiber grating sensor for pressure mapping during total knee arthroplasty. Sens. Actuators A Phys. 2007, 2, 323–328.
4. Xiong, L.; Jiang, G.; Guo, Y.; Liu, H. A Three-dimensional Fiber Bragg Grating Force Sensor for Robot. IEEE Sens. J. 2018, 9, 3632–3639.
5. Huang, F.; Chen, T.; Si, J.; Pham, X.; Hou, X. Fiber laser based on a fiber Bragg grating and its application in high-temperature sensing. Opt. Commun. 2019, 452, 233–237.
6. Hong, C.; Zhang, Y.; Zhang, M.; Gordon, L.L.M.; Liu, L. Application of FBG sensor for geotechnical health monitoring, a review of sensor design, implementation methods and packaging techniques. Sens. Actuators A Phys. 2016, 244, 184–197.
7. Zhang, X.; Wang, P.; Liang, D.; Fan, C.; Li, C. A soft self-repairing for FBG sensor network in SHM system based on PSO–SVR model reconstruction. Opt. Commun. 2015, 343, 38–46.
8. Lamberti, A.; Chiesura, G.; Luyckx, G.; Degrieck, J.; Kaufmann, M.; Vanlanduit, S. Dynamic strain measurements on automotive and aeronautic composite components by means of embedded fiber Bragg grating sensors. Sensors 2015, 10, 27174–27200.
9. Marignetti, F.; de Santis, E.; Avino, S.; Tomassi, G.; Giorgini, A.; Malara, P.; De Natale, P.; Gagliardi, G. Fiber Bragg grating sensor for electric field measurement in the end windings of high-voltage electric machines. IEEE Trans. Ind. Electron. 2016, 5, 2796–2802.
10. Dziuda, L.; Skibniewski, F.W.; Krej, M.; Lewandowski, J. Monitoring respiration and cardiac activity using fiber Bragg grating-based sensors. IEEE Trans. Biomed. Eng. 2012, 7, 1934–1942.
11. Domingues, F.; Alberto, N.; Leitão, C.; Tavares, C.; Lima, E.; Radwan, A.; Sucasas, V.; Rodriguez, J.; André, P.; Antunes, P. Insole optical fiber sensor architecture for remote gait analysis-an eHealth solution. IEEE Internet Things J. 2017, 6, 207–214.
12. Okabe, Y.; Tsuji, R.; Takeda, N. Application of chirped fiber Bragg grating sensors for identification of crack locations in composites. Compos. Part A Appl. Sci. 2003, 1, 59–65.
13. Reid, M.B.; Ozcan, M. Temperature dependence of fiber optic Bragg gratings at low temperatures. Opt. Eng. 1998, 1, 237–242.
14. Roths, J.; Andrejevic, G.; Kuttler, R.; Süßer, M. Calibration of fiber Bragg cryogenic temperature sensors. In International Optical Fiber Sensors Conference; OSA: Washington, DC, USA, 2006; pp. 81–85.
15. Saccomanno, A.; Breglio, G.; Irace, A.; Bajko, M.; Szillasi, Z.; Buontempo, S.; Giordano, M.; Cusano, A. A calibration method based on look-up-table for cryogenic temperature Fiber Bragg Grating sensors. In Proceedings of the 3rd Asia Pacific Optical Sensors Conference, Sydney, Australia, 31 January–3 February 2012; pp. 83513–83515.
16. An, Y.; Wang, X.; Qu, Z.; Liao, T.; Nan, Z. Fiber Bragg grating temperature calibration based on BP neural network. Optik 2018, 172, 753–759.
17. An, Y.; Wang, X.; Qu, Z.; Liao, T.; Wu, L.; Nan, Z. Stable temperature calibration method of fiber Bragg grating based on radial basis function neural network. Opt. Eng. 2019, 9, 096105.
18. Jiang, J.; Zang, C.; Wang, Y.; Zhang, X.; Liu, Y.; Yang, Y.; Xie, R.; Fan, X.; Liu, T. Investigation of composite multi-wavelength reference stabilization method for FBG demodulator in unsteady temperature environment. J. Optoelectron. Laser 2018, 6, 5–11.
19. Xie, R.; Zhang, X.; Wang, S.; Jiang, J.; Liu, K.; Zang, C.; Chu, Q.; Liu, T. Research on influencing factors of FBG temperature sensors stability. J. Optoelectron. Laser 2018, 4, 363–369.
20. Huang, G.; Zhu, Q.; Siew, C. Extreme learning machine: Theory and applications. Neurocomputing 2006, 1, 489–501.
21. Rong, H.; Huang, G.; Sundararajan, N.; Saratchandran, P. Online Sequential Fuzzy Extreme Learning Machine for Function Approximation and Classification Problems. IEEE Trans. Syst. Man Cybern. B 2009, 4, 1067–1072.
22. Li, Z.; Fan, X.; Chen, G.; Yang, G.; Sun, Y. Optimization of iron ore sintering process based on ELM model and multi-criteria evaluation. Neural Comput. Appl. 2017, 8, 2247–2253.
23. Lu, F.; Wu, J.; Huang, J.; Qiu, X. Aircraft engine degradation prognostics based on logistic regression and novel OS-ELM algorithm. Aerosp. Sci. Technol. 2018, 84, 661–671.
24. Wang, X.; Yang, K.; Kalivas, J.H. Comparison of extreme learning machine models for gasoline octane number forecasting by near-infrared spectra analysis. Optik 2020, 200, 163325.
25. Liang, N.; Huang, G.; Saratchandran, P.; Sundararajan, N. A fast and accurate online sequential learning algorithm for feedforward networks. IEEE Trans. Neural Netw. 2006, 6, 1411–1423.
Figure 1. The fiber Bragg grating (FBG) sensing system.
Figure 2. The four data acquisition experiments.
Figure 3. Data for the prediction accuracy experiments.
Figure 4. Data for the generalization ability experiments.
Figure 5. A boxplot of prediction errors for different calibration models.
Figure 6. Performance comparisons of different calibration models in terms of short-term stability.
Figure 7. A boxplot of prediction errors for different calibration models in terms of short-term stability.
Figure 8. Performance comparisons of different calibration models in terms of long-term stability.
Figure 9. A boxplot of prediction errors for different calibration models in terms of long-term stability.
Figure 10. Performance comparisons of different calibration models in terms of generalization.
Figure 11. A boxplot of prediction errors for different calibration models in terms of generalization.
Table 1. Performance of the prediction accuracy of different activation functions.

| Activation Function | Training Time (s) | Testing Time (s) | Training RMSE (°C) | Testing RMSE (°C) | Training R² | Testing R² |
|---|---|---|---|---|---|---|
| sig | 0.1094 | 0 | 0.0533 | 0.1027 | 0.9772 | 0.9698 |
| sin | 0.0154 | 0 | 0.0555 | 0.1803 | 0.9790 | 0.8983 |
| hardlim | 0.0156 | 0 | 0.3677 | 0.8426 | −5.5816 × 10⁻¹² | 3.0073 × 10⁻¹³ |
| radbas | 0.0625 | 0 | 0.3774 | 0.8660 | 0.9772 | 0.9698 |
Table 2. Performances of prediction accuracy of different calibration models.

| Calibration Model | Training Time (s) | Testing Time (s) | Training RMSE (°C) | Testing RMSE (°C) | Training R² | Testing R² |
|---|---|---|---|---|---|---|
| Polynomial | 0.0156 | 0 | 0.0544 | 0.1327 | 0.9781 | 0.9686 |
| BP | 0.4063 | 0.0156 | 0.0535 | 0.1601 | 0.9789 | 0.9220 |
| RBF | 7.9688 | 0.0544 | 0.0625 | 0.1321 | 0.9781 | 0.9686 |
| ELM | 0.1094 | 0 | 0.0533 | 0.1027 | 0.9772 | 0.9698 |
Table 3. Performances of generalization of different calibration models.

| Calibration Model | Training Time (s) | Testing Time (s) | Training RMSE (°C) | Testing RMSE (°C) | Training R² | Testing R² |
|---|---|---|---|---|---|---|
| Polynomial | 0.0469 | 0 | 0.0764 | 0.0476 | 0.9120 | 0.9787 |
| BP | 0.4688 | 0 | 0.0799 | 0.1067 | 0.9052 | 0.8920 |
| RBF | 1.4844 | 0 | 0.0762 | 0.0467 | 0.9125 | 0.9819 |
| ELM | 0.0156 | 0 | 0.0762 | 0.0456 | 0.9125 | 0.9818 |
