Article

Quantitative Research on Generalized Linear Modeling of SEU and Test Programs Based on Small Sample Data

Fei Chu, Hongzhuan Chen, Chunqing Yu, Lihua You, Liang Wang and Yun Liu
1 Department of Economics and Management, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
2 Beijing Microelectronics Technology Institute, Beijing 100076, China
3 College of Engineering and Computer Science, Australian National University, Canberra 2600, Australia
* Author to whom correspondence should be addressed.
Electronics 2022, 11(14), 2242; https://doi.org/10.3390/electronics11142242
Submission received: 29 May 2022 / Revised: 16 July 2022 / Accepted: 17 July 2022 / Published: 18 July 2022
(This article belongs to the Special Issue Radiation Tolerant Electronics, Volume II)

Abstract

Complex integrated circuits (ICs) have complex functions and multiple working modes, so many factors affect their single event effect (SEE) performance. The SEE performance of complex ICs is highly program-dependent, and the single event sensitivity measured in a typical operating mode is generally used to represent the single event performance of the circuit. Traditional evaluation methods fail to consider the cross effects of multiple factors and the combined influence of each factor on the single event soft error cross section. To solve this problem, a new quantitative method for evaluating the single event soft error cross section under different test programs, based on a generalized linear model, is proposed. The laser test data are divided into two groups: a training set and a validation set. The former is used for model construction and parameter estimation with five methods, including the generalized linear model and an Ensemble method, while the latter is used for quantitative evaluation and validation of the single event soft error cross section predicted by the model. In terms of percentage error, the minimum mean estimation error on the validation set is 13.93%. Therefore, the generalized linear model evaluates the single event soft error cross section of circuits under different test programs with high accuracy, which provides a new idea for the evaluation of single event effects in complex ICs.

1. Introduction

With the rapid development of aerospace technology and deeper exploration of space, the performance requirements for spacecraft are increasing. Correspondingly, the reliability and radiation-hardened performance of complex integrated circuits (ICs) face higher requirements [1]. As the feature size of semiconductor devices shrinks, the single event effect (SEE) increasingly threatens the safety of space missions [2]. Therefore, it is necessary to evaluate the SEE sensitivity of complex ICs before they are applied to space missions [3]. At present, the SEE evaluation methods recognized by the industry mainly include heavy ion tests, proton tests, and other radiation tests. The SEE sensitivity of complex ICs is strongly program-dependent, and different users have different concerns and applications, so the test programs cannot be exhaustively traversed. In addition, because beam time on heavy ion accelerators is limited and expensive, it is not feasible to carry out radiation tests on all ICs. Therefore, several simulation and fault injection methods have been used to study the SEE sensitivity of complex ICs.
Asenek et al. [4] proposed a Single Event Upset (SEU) simulation prediction method based on the duty cycle to predict the SEU cross section of processors under different test programs, and a simple error rate prediction model was preliminarily established. However, the duty-factor analysis was difficult to apply to complex applications, such as programs with conditional branches. Touloupis et al. [5] carried out SEU simulations on a processor under different programs based on fault injection, and applied a new multi-fault injection model to dual-fault injection. The model was designed to represent possible non-concurrent radiation-induced soft errors, but it was only applicable to the specific processor studied in that paper. Gao and Li [6] studied the relationship between the dynamic and static SEU rates of satellite microprocessors using the concept of program duty ratio and fault injection technology, but the results were verified only by fault injection and not by radiation experiments. Yu et al. [7] proposed a method to predict the SEE response of complex ICs, which required a detailed analysis of different test programs and a large amount of work; the prediction method has been verified by radiation experiments and provides good guidance.
The simulation method requires establishing different models for different circuits, which is complicated and time-consuming, and the simulation results show relatively large errors compared with real test results. Fault injection also has limitations in the precision, accuracy, and speed of modeling. None of the above methods take into account the cross influence of multiple factors, their combined influence on the SEE soft error cross section, or its accurate quantification.
In view of these shortcomings, laser SEE test data are used as a small-sample training set for modeling based on the generalized linear model. The SEE cross section of the circuit under different test programs is evaluated quantitatively, and the evaluation errors of the training set and validation set under different methods are verified and compared. This provides a new idea for evaluating the SEE soft error cross section of complex ICs. With the new method, the SEU cross section of a device under different test programs can be predicted without carrying out radiation tests in every test mode, which effectively solves the problem of evaluating the radiation performance of complex ICs under different test modes with high accuracy.

2. Circuit Descriptions and Radiation Experiments

2.1. Circuit Description

The research object is a 32-bit radiation-hardened microprocessor, which has all the typical characteristics of complex ICs, such as large scale, high frequency, multiple modules, and complex functions. System-level error detection and correction is adopted by the radiation-hardened microprocessor. The circuit consists of an integer processing unit (IU), a floating point processing unit (FPU), a cache (CACHE), a register file (REGFILE), a debugging support unit (DSU), a serial port (UART), a storage/interrupt controller, a watchdog, a timer, and other units, which exchange data through the AMBA bus. The functional block diagram of the microprocessor is shown in Figure 1.

2.2. Experiment Setting

In this paper, a set of functional test programs is developed to simulate the typical functional states of a user, which brings the CPU instruction coverage to 100% and also covers all the logic units of the circuit. The P1–P4 test programs are taken as the training set; they perform a single-precision integer operation and a double-precision floating point operation with the CACHE enabled and disabled, respectively, covering eight standard test functions. P1 is a single-precision integer operation in CACHE ON mode, and P2 is a single-precision integer operation in CACHE OFF mode. P3 is a double-precision floating-point operation in CACHE ON mode, and P4 is a double-precision floating-point operation in CACHE OFF mode. The P5–P8 test programs are designed as validation test programs. In order to better verify the effectiveness of the model, these validation programs are chosen randomly and are not required to achieve full instruction coverage or logic unit coverage. The processor continuously accesses REGFILE data when executing the different test programs. The statistics of register usage of the target circuit under the eight test programs are shown in Table 1.

2.3. Radiation Experiments

The single event upset soft error cross sections (SEU cross sections) of the circuit under different test programs are obtained by pulsed laser testing [8]. During the test, the working voltage of the target circuit is set to the lowest level, with an IO voltage of 2.97 V and a core voltage of 1.62 V. Backside irradiation is carried out using a PL2210A-P17 pulsed laser at a 100 Hz repetition rate. The laser SEE test site is shown in Figure 2. The laser test data of the training set are shown in Table 2, and the laser test data of the validation set are shown in Table 3, in which the effective laser energy (i.e., the laser energy focused in the active region, E_eff) has been converted to an equivalent heavy ion LET value [9].
The conversion between the initial laser energy E_0 and the equivalent heavy ion LET value is given by Formula (1):

$$E_{\mathrm{eff}} = f\,(1 - R)\,e^{-\alpha h}\,(1 + R')\,E_0, \qquad \mathrm{LET} = 0.082\,E_{\mathrm{eff}} + 2.07 \tag{1}$$

where f is the effect factor of the spot, R is the reflectance of the device surface, R' is the metal layer reflectance of the device, α is the measured absorption coefficient of the device's silicon substrate, and h is the Planck constant.
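For reference, Formula (1) can be coded directly. The R sketch below assumes the device-specific constants (f, R, R', α, h) are supplied by the user, since their measured values are not listed here; only the 0.082 and 2.07 coefficients come from the paper.

```r
# Sketch of the conversion in Formula (1). The arguments f, R, R_metal,
# alpha and h are placeholders for the measured device constants.
effective_energy <- function(E0, f, R, R_metal, alpha, h) {
  f * (1 - R) * exp(-alpha * h) * (1 + R_metal) * E0
}

equivalent_LET <- function(E_eff) {
  0.082 * E_eff + 2.07   # equivalent LET in MeV.cm^2/mg
}

equivalent_LET(324)      # about 28.6, consistent with the 324 pJ column of Table 2
```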
Four validation programs, P5–P8, were designed, and laser tests were carried out on each of them at different laser energies. The obtained laser test data of P5–P8 are shown in Table 3.

3. Modeling and Parameter Estimation

Compared with data obtained by software simulation, laser test data are closer to the radiation sensitivity of the circuit in the actual radiation environment, so a model established from laser test data has higher accuracy. There are 16 observations of laser test data, which constitute typical small sample data, and the use of second-order (or higher) polynomial or tree models may lead to overfitting [10,11,12]. The generalized linear model is linear with respect to the unknown parameters but nonlinear with respect to the known variables. A nonlinear functional relationship between the independent variables and the dependent variable can be established based on the linear parameters and multiple basis functions, and the corresponding model has a good fitting effect and prediction accuracy. Therefore, the pulsed laser SEE test data of P1–P4 are used as the training set to build the model based on the generalized linear model. Four methods, the generalized least squares method [10,11], the weighted least squares method [13], the median regression method [12], and the least trimmed squares method [12], are used to estimate the parameters of the generalized linear model. These four methods are then combined with optimal weights to form the Ensemble method, used as the fifth method. A laser test is then performed on the target circuit under the P5–P8 validation programs, and the obtained laser test data are used as the validation set to verify the generalized linear model. The flow chart of the method for quantitatively evaluating the SEE soft error cross section of complex ICs based on the generalized linear model is shown in Figure 3.
First, the P1–P4 training test programs are written, and 16 groups of small sample test data are obtained by laser testing. These data form the training set for the generalized linear model. Then, GLS, WLS, MR, LTS, and Ensemble models are established and their parameters are optimized on the target functions. The P5–P8 test programs are written as the validation set, and the generalized linear model is used to predict the SEE soft error cross section; meanwhile, the soft error cross section under irradiation is obtained by laser testing. By evaluating the prediction error and the confidence interval on the test data, the precision of the quantitative prediction model is obtained.

3.1. Model Building

The generalized linear model is established using Formula (2).
$$g(SEU) = \beta_0 + \sum_{i=1}^{p} \beta_i f_i(\mathrm{LET}, \mathrm{times}, T_1, T_2) + \varepsilon \tag{2}$$
Formula (2) represents the quantitative relationship between SEU and its impact factors LET, times, T1, and T2. Since the SEU soft error cross section must be positive while the right side of the equation ranges over the entire real line, the logarithm of SEU is taken as the response, i.e., a log link of the Poisson-regression type is used. The model is linear with respect to the unknown parameters but nonlinear with respect to the known independent variables. Here, f_i(LET, times, T1, T2) can be any nonlinear function of the independent variables chosen according to observations and assumptions, such as neural networks, GBDT (gradient boosting decision trees), spline functions, or polynomials of the independent variables [10,11]. In this paper, each f_i is taken as the corresponding independent variable itself, so as to minimize the number of parameters, prevent overfitting, and achieve better predictions. The software used for modeling and parameter estimation in this paper is R 3.5.3.
At this point, the above model can be simplified to Formula (3):

$$Y = X\beta + \varepsilon \tag{3}$$

where Y = log(SEU) = (y_1, y_2, ..., y_n)^T, n is the number of experiments (16 in this paper), and SEU is a column vector of length n. The columns of X are the number of register reads (times), the program execution cycle (T1), the average register access time (T2), the LET value, and an all-ones intercept term (I); each row of X is one observation. The dimension of the matrix X is n × 5. β is the unknown parameter column vector of length 5 to be solved, and ε is the measurement error column vector of length n.
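A minimal R sketch of Formula (3) follows, assuming the 16 training observations are stored in a data frame named train with columns SEU, LET, times, T1 and T2 (the data frame and column names are ours, not from the paper).

```r
# Response Y = log(SEU) and the n x 5 design matrix X of Formula (3):
# intercept, LET, register reads (times), execution cycle (T1) and
# average access time (T2). `train` is an assumed data frame holding
# the 16 laser test observations.
Y <- log(train$SEU)
X <- model.matrix(~ LET + times + T1 + T2, data = train)
dim(X)                      # 16 x 5

fit_full <- lm(Y ~ X - 1)   # ordinary least squares fit of Y = X beta + eps
coef(fit_full)
```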

3.2. Variable Selection

There are multiple criteria, such as C_p, AIC, and BIC, for variable selection [11]. In this paper, the AIC criterion is chosen; it is the sum of the negative log-likelihood and a penalty term on the number of parameters, and the lower its value, the better the expected prediction performance of the model. The result of variable selection using the AIC criterion is shown in Figure 4.
As can be seen from Figure 4, the final result of variable selection retains the number of register reads (times), the program execution cycle (T1), the LET value, and the all-ones intercept term (I), and it excludes the average register access time (T2).
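As a sketch, the base R step() function performs this kind of AIC-based selection on the full model from the previous snippet; the data frame and column names remain our own assumptions.

```r
# Stepwise selection under AIC; on the data described in the paper this
# should drop T2 and keep LET, times, T1 and the intercept.
fit_full <- lm(log(SEU) ~ LET + times + T1 + T2, data = train)
fit_aic  <- step(fit_full, direction = "both", trace = 0)
formula(fit_aic)   # expected: log(SEU) ~ LET + times + T1
```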

3.3. Parameter Estimation

For different assumptions on the measurement error, or different loss functions, different parameter estimates β̂ and different predicted SEU cross sections can be obtained. In order to further analyze the experimental data and obtain a more accurate and robust model, the following four methods are first used for model building: the first two are based on a Gaussian distribution assumption, while the latter two are robust parameter estimation methods. All of them solve Formula (3), and β̂ is the estimated value of β. The methods for parameter estimation are shown in Table 4.
The parameter estimates for the training set obtained by the above four methods are shown in Table 5. It can be seen that the coefficients of LET and times obtained by the different methods are positive, while that of T1 is negative, and the values from the four methods are relatively close; that is, the SEU cross section increases with LET and times and decreases as T1 increases.
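One possible R implementation of the four estimators in Table 4 is sketched below; the package choices (quantreg for median regression, robustbase for least trimmed squares) are ours, and other implementations would serve equally well.

```r
library(quantreg)     # rq(): quantile regression; tau = 0.5 gives median regression
library(robustbase)   # ltsReg(): least trimmed squares

f <- log(SEU) ~ LET + times + T1          # model retained after AIC selection

fit_gls <- lm(f, data = train)                           # GLS of Table 4 (reduces to OLS here)
fit_wls <- lm(f, data = train, weights = 1 / train$SEU)  # WLS with w_i = 1/SEU_i
fit_mr  <- rq(f, tau = 0.5, data = train)                # median regression
fit_lts <- ltsReg(f, data = train)                       # least trimmed squares

cbind(GLS = coef(fit_gls), WLS = coef(fit_wls),
      MR  = coef(fit_mr),  LTS = coef(fit_lts))          # compare with Table 5
```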

3.4. Model Optimization

Considering that each of the four methods in Section 3.3 has its own advantages and disadvantages, the Ensemble method [10,11], which applies an optimized linear weighting to the predicted values of these four methods by minimizing the combined variance, is adopted to reduce the prediction variance, shrink the confidence interval, and make the predictions more robust. It can be seen from Formula (4) that, after the Ensemble step, the variance of the forecast value, which is the weighted average of the evaluations from the four methods above, is minimized and thus becomes more reliable.
$$\min_{p}\; p^{T}\Sigma p \quad \text{s.t.}\quad \sum_{i=1}^{4} p_i = 1,\;\; p_i \ge 0 \tag{4}$$
Σ is the covariance matrix of the evaluation errors of the four methods.
The covariance matrix of the evaluation errors of these four methods, calculated on the training set, is shown in Table 6, where the diagonal entries are the variances of the evaluation errors. It can be seen that the evaluation variances of the two robust parameter estimation methods, MR and LTS, are both less than 50, which means the standard deviation of their SEU cross section prediction errors is less than √50 ≈ 7.07. The covariance between the WLS method and the LTS method is the smallest, at −2.9, so the correlation coefficient of these two methods is ρ(WLS, LTS) = −2.9/√(230.93 × 48.53) ≈ −0.03. This negative correlation is exploited by the Ensemble method to further reduce the variance of the SEU evaluation error.
The interior point method for constrained optimization [12,13] is used to calculate the optimal weights p = (0.00118461, 0.09161878, 0.68351429, 0.22368231)^T for GLS, WLS, MR, and LTS, respectively. It can be seen that the MR method has the largest weight, while the GLS method has almost zero weight.
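Comparable weights can be obtained (up to numerical tolerance) with a standard quadratic programming solver; the sketch below uses the quadprog package, which is our choice rather than the interior point implementation used in the paper, and takes Σ from Table 6.

```r
library(quadprog)

# Covariance matrix of evaluation errors (Table 6), order: GLS, WLS, MR, LTS.
Sigma <- matrix(c(122.15, 165.03, 33.73, 10.30,
                  165.03, 230.93, 31.04, -2.90,
                   33.73,  31.04, 38.39, 38.97,
                   10.30,  -2.90, 38.97, 48.53),
                nrow = 4, byrow = TRUE)

# solve.QP minimises (1/2) p' D p - d' p subject to A' p >= b0,
# with the first meq constraints treated as equalities.
Dmat <- 2 * Sigma
dvec <- rep(0, 4)
Amat <- cbind(rep(1, 4), diag(4))   # sum(p) = 1, then p_i >= 0
bvec <- c(1, rep(0, 4))

sol <- solve.QP(Dmat, dvec, Amat, bvec, meq = 1)
round(sol$solution, 6)              # ensemble weights for GLS, WLS, MR, LTS
```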

3.5. Evaluation Error

In order to verify the evaluation performance of the model on the test data, the five methods above are applied to the training set and the validation set, respectively. For the evaluation of the error, the root mean square error (RMSE) [10,11], the mean absolute error, the mean absolute percentage error, and other indicators are selected; the smaller the value, the smaller the evaluation error and the higher the evaluation accuracy. The evaluation errors of the training set and the validation set under the different methods are shown in Table 7.
Taking RMSE [10,11], the most commonly used metric, as an example, the Ensemble and MR methods have relatively low RMSE on both the training set and the validation set, while the other three methods have relatively large evaluation errors on the validation set. The MR method, as one of the robust parameter estimation methods, shows a lower evaluation error on the training set and the validation set than the GLS and WLS methods based on the Gaussian distribution, indicating that the error of the SEU data collected in the experiment may be non-Gaussian, or that its Gaussian character is not evident because of the small sample. This also verifies the reliability of the MR and Ensemble methods for this data modeling; that is, the quantitative evaluation model established here is effective and has a certain accuracy.
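The error metrics of Table 7 are straightforward to compute once the predictions are back-transformed from the log scale. The sketch below assumes a fitted model fit_aic from the earlier snippets and a validation data frame valid with the same columns as train; both names are our assumptions.

```r
rmse <- function(pred, obs) sqrt(mean((pred - obs)^2))          # root mean square error
mae  <- function(pred, obs) mean(abs(pred - obs))               # mean absolute error
mape <- function(pred, obs) 100 * mean(abs(pred - obs) / obs)   # mean absolute error in percent

# Predictions on the validation set; exp() undoes the log link of Formula (2).
pred <- exp(predict(fit_aic, newdata = valid))
c(RMSE = rmse(pred, valid$SEU),
  MAE  = mae(pred, valid$SEU),
  MAPE = mape(pred, valid$SEU))
```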

4. Confidence Interval Analysis

In the section above, the evaluation errors are compared. Considering that the quantity of data is small, the performance of the five methods can be affected by accidental data; therefore, a further analysis of the confidence intervals is carried out in this section. A shorter confidence interval means a smaller evaluation error variance and higher reliability, so the confidence intervals of the five methods are compared further here.
In this paper, the bootstrap method based on statistical resampling is used to calculate the confidence interval of the SEU cross section. Multiple SEU cross section estimates are obtained by repeated sampling with replacement. The number of bootstrap replications in this paper is set as B = 300.
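A plain nonparametric bootstrap of the prediction is enough to produce this kind of interval; the sketch below resamples the 16 training observations and refits the selected model (train, valid and the column names are still our assumed data structures, not the paper's code).

```r
set.seed(1)
B <- 300                                   # number of bootstrap replications

boot_pred <- replicate(B, {
  idx <- sample(nrow(train), replace = TRUE)              # resample with replacement
  fit <- lm(log(SEU) ~ LET + times + T1, data = train[idx, ])
  exp(predict(fit, newdata = valid))                      # predicted SEU cross sections
})

# 95% percentile interval for each validation observation.
apply(boot_pred, 1, quantile, probs = c(0.025, 0.975))
```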
Figure 5 and Figure 6 show the evaluation values and 95% confidence intervals of the five methods on the training set and the validation set, respectively. The evaluation error is the difference between the evaluation value and the soft error cross section observed in the laser experiment. The gray bars represent the known experimental values, and the points and line segments in five colors are the respective evaluation values and confidence intervals of the five methods; the narrower the confidence interval, the higher the evaluation accuracy. Figure 7 and Figure 8 show the probability density functions of the evaluation errors of the five methods on the training set and the validation set, calculated with a kernel method (a Gaussian kernel with a bandwidth determined by cross-validation). It can be seen that the error value corresponding to the highest density of the Ensemble and MR methods on the validation set is concentrated near zero, with the former more concentrated at zero than the latter. The error distributions of the other three methods are relatively flat, and the error corresponding to their highest probability density is far from zero. As can be seen from Figure 5, Figure 6, Figure 7 and Figure 8 and Table 7, the Ensemble method has the best evaluation accuracy (RMSE) and the narrowest confidence interval on the validation set, and it can be used to evaluate the SEE soft error cross section under different test programs and laser energies.
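Density curves of this kind can be reproduced with base R's kernel density estimator; in the sketch below, err is an assumed vector of evaluation errors for one method on the validation set, and "ucv" selects an unbiased cross-validation bandwidth, which is one way to realize the cross-validated bandwidth mentioned above.

```r
# Gaussian kernel density estimate of the evaluation errors with a
# cross-validation bandwidth.
d <- density(err, kernel = "gaussian", bw = "ucv")
plot(d, main = "Evaluation error density", xlab = "Evaluation error")
```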

5. Conclusions

A new method based on the generalized linear model for the quantitative evaluation of the SEE soft error cross section under different test programs is presented. Different test programs are designed, and the laser test data and register usage statistics of the different test programs are divided into a training group and a validation group. The training set is used for modeling and parameter estimation with the five methods, and the validation set is used to evaluate the model accuracy. The evaluation values, evaluation errors, 95% confidence intervals, and probability density functions of the five methods on the training set and the validation set are computed. The results show that the quantitative evaluation method for complex ICs based on the generalized linear model achieves a mean percentage error as low as 13.93%. The method accounts for the combined effect of the various factors affecting SEE sensitivity and quantifies it, and it is applicable to small sample experimental data under different test programs, which is a practical innovation.

Author Contributions

Conceptualization, F.C. and H.C.; methodology, F.C.; software, C.Y. and L.Y.; validation, F.C., L.W. and Y.L.; formal analysis, F.C.; investigation, F.C. and C.Y.; resources, C.Y.; data curation, L.Y.; writing—original draft preparation, F.C. and Y.L.; writing—review and editing, H.C. and L.W.; visualization, L.Y.; supervision, C.Y.; project administration, F.C. and H.C.; funding acquisition, F.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Chen, R.; Zhang, F.; Chen, W.; Ding, L.; Guo, X.; Shen, C.; Luo, Y.; Zhao, W.; Zheng, L.; Guo, H.; et al. Single-Event Multiple Transients in Conventional and Guard-Ring Hardened Inverter Chains Under Pulsed Laser and Heavy-Ion Irradiation. IEEE Trans. Nucl. Sci. 2018, 64, 2511–2518.
2. Raine, M.; Hubert, G.; Gaillardin, M.; Artola, L.; Paillet, P.; Girard, S.; Sauvestre, J.-E.; Bournel, A. Impact of the Radial Ionization Profile on SEE Prediction for SOI Transistors and SRAMs Beyond the 32-nm Technological Node. IEEE Trans. Nucl. Sci. 2011, 58, 840–847.
3. Huang, P.; Chen, S.; Chen, J.; Liang, B.; Chi, Y. Heavy-Ion-Induced Charge Sharing Measurement with a Novel Uniform Vertical Inverter Chains (UniVIC) SEMT Test Structure. IEEE Trans. Nucl. Sci. 2015, 62, 3330–3338.
4. Asenek, V.; Underwood, C.; Velazco, R. SEU induced errors observed in microprocessor systems. IEEE Trans. Nucl. Sci. 1998, 45, 2876–2883.
5. Touloupis, E.; Flint, A.; Chouliaras, V.A. Study of the Effects of SEU-Induced Faults on a Pipeline-Protected Microprocessor. IEEE Trans. Comput. 2007, 56, 1585–1596.
6. Gao, J.; Li, Q. An SEU rate prediction method for microprocessors of space applications. J. Nucl. Tech. 2012, 35, 201–205.
7. Yu, C.Q.; Fan, L. A Prediction Technique of Single Event Effects on Complex Integrated Circuits. J. Semicond. 2015, 36, 115003-1–115003-5.
8. Buchner, S.P.; Miller, F.; Pouget, V.; McMorrow, D.P. Pulsed-Laser Testing for Single-Event Effects Investigations. IEEE Trans. Nucl. Sci. 2013, 60, 1852–1875.
9. Shangguan, S.P. Experimental Study on Single Event Effect Pulsed Laser Simulation of New Material Devices. Ph.D. Thesis, National Space Science Center, The Chinese Academy of Sciences, Beijing, China, 2020.
10. Murphy, K.P. Machine Learning: A Probabilistic Perspective; MIT Press: Cambridge, MA, USA, 2012.
11. Hastie, T.; Tibshirani, R.; Friedman, J. The Elements of Statistical Learning, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2009.
12. Rousseeuw, P.J.; Hampel, F.R.; Ronchetti, E.M.; Stahel, W.A. Robust Statistics: The Approach Based on Influence Functions; John Wiley & Sons: Hoboken, NJ, USA, 2011.
13. Jun, S. Mathematical Statistics, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2003.
Figure 1. Functional block diagram of the microprocessor.
Figure 2. Laser SEE test site.
Figure 3. The flow chart of the method.
Figure 4. Variable selection using AIC criteria.
Figure 5. Evaluations, 95% confidence intervals and real values of the five methods on the training set.
Figure 6. Evaluations, 95% confidence intervals and real values of the five methods on the validation set.
Figure 7. Density function of evaluation errors of five methods on training set.
Figure 8. Density function of evaluation errors of five methods on validation set.
Table 1. Statistics of register usage under eight test programs.

Programs | Register Reads (Times) | Program Execution Cycle T1 (s) | Average Register Access Time T2 (us)
P1 | ~29,843  | 0.063 | 2.11
P2 | ~29,843  | 0.714 | 23.93
P3 | ~327,660 | 0.072 | 0.22
P4 | ~327,660 | 0.857 | 2.62
P5 | ~74,468  | 0.330 | 17.48
P6 | ~71,015  | 0.837 | 2.45
P7 | ~301,430 | 0.524 | 21.20
P8 | ~193,327 | 0.247 | 14.62
Table 2. Laser SEE soft error cross section of training set (10^-7 errors/cm2).

Test Programs | 324 pJ = 28.7 MeV.cm2/mg | 432 pJ = 37.5 MeV.cm2/mg | 621 pJ = 53 MeV.cm2/mg | 950 pJ = 80 MeV.cm2/mg
P1 | 11   | 31.8 | 51.7 | 114.3
P2 | 6.3  | 22   | 33   | 69.9
P3 | 16.9 | 46.5 | 72   | 147.8
P4 | 9.3  | 24.4 | 35   | 76.8
Table 3. Laser SEE soft error cross section of validation set (10^-7 errors/cm2).

Test Programs | 135 pJ = 13.1 MeV.cm2/mg | 241 pJ = 21.8 MeV.cm2/mg | 241 pJ = 21.8 MeV.cm2/mg | 889 pJ = 75 MeV.cm2/mg
P5 | 18.1 | 23.5 | 33.6 | 112.2
P6 | 21.1 | 16.5 | 26.0 | 59.6
P7 | 21.9 | 14.7 | 39.5 | 131.7
P8 | 4.8  | 11.7 | 15.2 | 70.8
Table 4. Methods for parameter estimation.

GLS, generalized least squares [10,11]:
$$\hat{\beta}_{GLS} = \arg\min_{\beta} \sum_{i=1}^{n} (Y_i - X_i^{T}\beta)^2 = (X^{T}X)^{-1}X^{T}Y$$
where X_i is the column vector formed from the i-th row of the matrix X.

WLS, weighted least squares [13]:
$$\hat{\beta}_{WLS} = \arg\min_{\beta} \sum_{i=1}^{n} w_i (Y_i - X_i^{T}\beta)^2 = (X^{T}WX)^{-1}X^{T}WY, \quad W = \mathrm{diag}(w_1, w_2, \ldots, w_n), \; w_i = 1/SEU_i$$
where X_i is the column vector formed from the i-th row of the matrix X.

MR, median regression [12]:
$$\hat{\beta}_{MR} = \arg\min_{\beta} \sum_{i=1}^{n} |Y_i - X_i^{T}\beta|$$
To calculate β̂_MR, slack variables can be introduced and the simplex method can be used [13].

LTS, least trimmed squares [12]:
$$\hat{\beta}_{LTS} = \arg\min_{\beta} \sum_{i=1}^{h} \varepsilon_{(i)}^2, \quad h = \left\lfloor \frac{n+p+1}{2} \right\rfloor, \quad |\varepsilon_{(1)}| \le |\varepsilon_{(2)}| \le \cdots \le |\varepsilon_{(n)}|$$
where p is the number of columns of X, ⌊m⌋ denotes the largest integer not greater than m, and (ε_(1), ε_(2), ..., ε_(n))^T is the vector obtained by sorting the elements of ε = (ε_1, ε_2, ..., ε_n)^T by absolute value from smallest to largest.
Table 5. Estimation parameters for training set under different methods.

Column Names of X | GLS | WLS | MR | LTS
(Intercept) | 1.71685 | 1.52232 | 2.34029 | 2.64312
LET | 0.03934 | 0.04237 | 0.03053 | 0.02697
times | 1.0578 × 10^-6 | 1.18 × 10^-6 | 8.26 × 10^-7 | 7.48 × 10^-6
T1 | −0.78625 | −0.79952 | −0.78671 | −0.83454
Table 6. Covariance matrix of the evaluation errors of the four methods.

    | GLS | WLS | MR | LTS
GLS | 122.15 | 165.03 | 33.73 | 10.30
WLS | 165.03 | 230.93 | 31.04 | −2.90
MR  | 33.73  | 31.04  | 38.39 | 38.97
LTS | 10.30  | −2.90  | 38.97 | 48.53
Table 7. Evaluation errors of training set and validation set under different methods.

Training set:
Method | GLS | WLS | MR | LTS | Ensemble
Root mean square error | 10.74 | 14.87 | 6.22 | 8.03 | 6.40
Mean absolute error | 9.11 | 11.31 | 4.38 | 5.05 | 4.76
Mean absolute error in percent (%) | 22.38 | 30.98 | 12.96 | 16.73 | 13.33

Validation set:
Method | GLS | WLS | MR | LTS | Ensemble
Root mean square error | 8.92 | 11.11 | 6.44 | 7.57 | 6.41
Mean absolute error | 7.58 | 9.34 | 5.03 | 6.03 | 4.83
Mean absolute error in percent (%) | 19.39 | 24.15 | 14.0 | 16.4 | 13.93