To validate the reconstruction performance of the regularized extreme learning machine (RELM) algorithm, we carried out a series of numerical simulation experiments. By comparing the reconstruction results of the linear back projection (LBP) technique, the Tikhonov regularization (TR) technique, the truncated singular value decomposition (TSVD) technique, the conjugate gradient (CG) technique, the Landweber technique, the simultaneous iterative reconstruction technique (SIRT), the Gaussian process regression (GPR) method, the extreme learning machine (ELM) method, and the RELM technique, this section qualitatively and quantitatively analyzes the numerical characteristics of the RELM algorithm. The square sensor with 12 electrodes, as shown in
Figure 6, is adopted in this simulation study. The parameters of the sensor are listed in
Table 1. Through numerical simulation and experimental measurements, we obtained 1500 sets of training samples for this study. The test samples are not included in the training sample set. In the sample generation process for both the simulation experiments and the actual experiments, the inputs are the capacitance correlation coefficients and the outputs are the corresponding true images. In
Table 2, we show several training sample pairs. We use the relative error (RE) to evaluate the quality of the reconstruction results, which is defined as follows:

$$\mathrm{RE} = \frac{\lVert \hat{g} - g \rVert}{\lVert g \rVert}$$

where $g$ is the gray value of the real image and $\hat{g}$ is the gray value of the predicted image. The smaller the RE, the higher the quality of the reconstructed image.
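For concreteness, the RE can be computed directly from the pixel gray values of the two images; a minimal Python sketch (array names are illustrative, not from the paper) is:

```python
import numpy as np

def relative_error(g_true, g_pred):
    """Relative error between the true and reconstructed gray-value images.

    g_true : array of gray values of the real (phantom) image
    g_pred : array of gray values of the predicted image
    """
    g_true = np.asarray(g_true, dtype=float).ravel()
    g_pred = np.asarray(g_pred, dtype=float).ravel()
    return np.linalg.norm(g_pred - g_true) / np.linalg.norm(g_true)
```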
5.1. Case 1
To test the performance of the RELM algorithm, seven groups of simulation experiments are carried out in this section, and the imaging quality of various algorithms is compared. The seven kinds of dielectric constant distributions shown in
Figure 7 are taken as the phantom objects for numerical simulation, in which the high permittivity is the yellow area and the low permittivity is the blue area. In
Figure 7a, the radius of the circle is 10 mm. For
Figure 7b, the radius of the circle is 8 mm. The height of laminar flow in
Figure 7c is 20 mm. In
Figure 7d and
Figure 7g, the sides of the square are 16 mm and 15 mm, respectively. For
Figure 7e, there are two rectangles with a length of 60 mm and a width of 15 mm. In
Figure 7f, the side length of the square is 10 mm, and the radius of the circle is 5 mm. The TR algorithm with a regularization parameter of 0.01 provides the initial values for the iterative algorithms mentioned in this paper. The parameters of the CG algorithm, the Landweber algorithm, and the SIRT are listed in
Table 3,
Table 4 and
Table 5, respectively. The number of hidden layer nodes in both ELM and RELM is 1000. The reconstruction results of various algorithms are shown in
Figure 8 and
Figure 9. To quantitatively evaluate the reconstruction accuracy of the various algorithms, we calculate the REs, shown in
Table 6, between the reconstructed and true permittivity.
The imaging results of the LBP technique, the TR technique, and the TSVD method are shown in
Figure 8. Although they can roughly recognize the contour of the phantom objects, their imaging results are poor when faced with complex imaging targets. In
Figure 9, we can see that the reconstruction results of the iterative algorithms, i.e., the CG technique, the Landweber technique, and the SIRT, are significantly superior to those of the non-iterative algorithms. However, the imaging results of all three algorithms still exhibit distortion and cannot reconstruct the phantom objects well.
In
Figure 9(a4–g4) and
Figure 9(a5–g5), the GPR algorithm and the ELM algorithm take full advantage of machine learning, which considerably improves the final imaging quality compared with the traditional imaging algorithms mentioned above. When the imaging target is complex or contains right angles, the imaging quality of the classical imaging algorithms is poor. Because they learn from a large number of training samples, the GPR algorithm and the ELM algorithm perform better on these imaging tasks. However, from
Figure 9(a4–g4) and
Figure 9(a5–g5), we see that for seven kinds of imaging targets, the imaging results of the ELM algorithm and the GPR algorithm still have different degrees of artifacts. From
Figure 9(a6–g6), we observe that the RELM algorithm produces fewer artifacts than the ELM algorithm, and its imaging results are the closest to the phantom objects. The reason why the RELM can improve the image reconstruction accuracy in engineering applications is that it not only has strong generalization and classification capabilities but also improves the robustness of the numerical solution. From the above analysis, the RELM algorithm can be divided into two steps: (1) the RELM mapping model between the capacitance correlation coefficient and the imaging target is learned from the training samples, and (2) the capacitance correlation coefficient is calculated and used as the input of the RELM model to predict the final image. The implementation process is relatively simple, and there is no need to manually set the weights and biases of the hidden layer, which is in line with practical engineering needs.
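As an illustration of these two steps, a minimal RELM sketch in Python is given below. It assumes a sigmoid activation, a ridge (Tikhonov) penalty on the output weights, and illustrative variable names and regularization coefficient; the actual settings used in the paper's training may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def relm_train(X, T, n_hidden=1000, lam=1e-3):
    """Step 1: learn the mapping from capacitance correlation coefficients
    X (n_samples x n_inputs) to target images T (n_samples x n_pixels).

    The input weights W and biases b are drawn randomly and left fixed;
    only the output weights beta are solved for, with a Tikhonov (ridge)
    penalty lam that stabilizes the numerical solution.
    """
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))            # hidden-layer output
    # Regularized least squares: beta = (H^T H + lam*I)^(-1) H^T T
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ T)
    return W, b, beta

def relm_predict(X, W, b, beta):
    """Step 2: map new capacitance correlation coefficients to images."""
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```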
High-quality reconstructed images are an important guarantee for reliable measurement results. From the results listed in
Table 6, we see that, compared with other algorithms mentioned, the RELM algorithm achieves the highest imaging accuracy for different simulation targets. For phantom objects in
Figure 7, the REs of the RELM algorithm are 2.13%, 2.06%, 1.09%, 1.68%, 1.49%, 1.84%, and 1.31%, respectively. The quantitative error comparison verifies that the RELM algorithm is highly reliable in solving the inverse problem of ECT and greatly improves the reconstruction accuracy.
5.2. Case 2
The robustness of an image reconstruction algorithm is another important index for evaluating it. In practical applications, the ECT system is affected by noise when collecting capacitance data, and small input errors can cause the output to deviate significantly from the true result. Consequently, to make the simulation results of the RELM algorithm more convincing, we add 5% and 10% random noise to the original capacitance data to assess the RELM algorithm. The phantom objects to be reconstructed and the imaging results are presented in
Figure 7 and
Figure 10 and
Figure 11. We executed 100 computations, and the image error mean is displayed in
Table 7. The noise level is defined by the following equation:

$$\delta = \frac{\lVert C_{\text{noise}} - C \rVert}{\lVert C \rVert} \times 100\%$$

where $\delta$ is the capacitance noise level, $C_{\text{noise}}$ is the noisy capacitance data, and $C$ is the raw capacitance data.
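As an illustration, noise at a prescribed level can be injected into the raw capacitance data as in the following sketch; the Gaussian noise model and scaling are assumptions, since the paper only states that the added noise is random.

```python
import numpy as np

rng = np.random.default_rng(1)

def add_capacitance_noise(C, level):
    """Add random noise to raw capacitance data C so that
    ||C_noise - C|| / ||C|| equals the requested level (e.g., 0.05 or 0.10)."""
    C = np.asarray(C, dtype=float)
    noise = rng.standard_normal(C.shape)
    noise *= level * np.linalg.norm(C) / np.linalg.norm(noise)
    return C + noise
```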
From the reconstruction results in
Figure 10 and
Figure 11, we find that under noise levels of 5% and 10%, the imaging results of the phantom objects are still satisfactory, which shows that the RELM algorithm has strong robustness and stability under different noise levels. As can be seen quantitatively from the image error means in
Table 7, the image errors remain small under the 10% noise level. The image error means of phantom objects (a)–(g) are 4.51%, 4.16%, 4.79%, 3.61%, 4.27%, 3.79%, and 3.43%, respectively. The qualitative and quantitative analyses show that the RELM algorithm has strong anti-noise performance under different levels of noise and various complex conditions.
5.3. Case 3
When acquiring the RELM mapping model between the capacitance correlation coefficient and the imaging target, it is first necessary to determine the number of hidden layer nodes (NHLN). In the numerical simulations and experiments, it is found that the NHLN affects the quality of the imaging results. To explore how the NHLN influences the image accuracy, a series of simulation experiments is carried out in this section, focusing on the relationship between the NHLN and the image errors of the imaging results. This relationship is illustrated numerically below. The reconstructed targets are shown in
Figure 7, and the image error as a function of the NHLN is shown in
Figure 12.
From
Figure 12, we observe that the image errors of the seven kinds of phantom objects decrease rapidly as the NHLN increases. When the NHLN is greater than 700, the image errors change very little. However, it is worth noting that although increasing the NHLN can improve the imaging quality of the RELM algorithm, it also increases the computational burden; especially when the training sample set is large, the time cost rises significantly. Therefore, in actual ECT applications, the trade-off between image errors and the NHLN should be balanced, and an appropriate NHLN should be selected according to the actual needs.
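One way to produce such a curve is to sweep the NHLN, retrain the RELM for each setting, and record the mean RE over the test phantoms. A minimal sketch under these assumptions, reusing the illustrative relm_train, relm_predict, and relative_error helpers defined above, is:

```python
import numpy as np

# X_train, T_train, X_test, T_test are assumed to hold the training and test
# capacitance correlation coefficients and the corresponding true images.
node_counts = range(100, 1600, 100)
mean_res = []
for n in node_counts:
    W, b, beta = relm_train(X_train, T_train, n_hidden=n)
    T_pred = relm_predict(X_test, W, b, beta)
    res = [relative_error(t, p) for t, p in zip(T_test, T_pred)]
    mean_res.append(np.mean(res))   # mean RE for this NHLN setting
```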