Article

An AI-Based Adaptive Surrogate Modeling Method for the In-Service Response of UVLED Modules

Department of Mechanical and Computer-Aided Engineering, Feng Chia University, Taichung 40724, Taiwan
Electronics 2022, 11(18), 2861; https://doi.org/10.3390/electronics11182861
Submission received: 15 August 2022 / Revised: 6 September 2022 / Accepted: 7 September 2022 / Published: 9 September 2022
(This article belongs to the Special Issue Applications of AI in Intelligent System Development)

Abstract

Forecasting the response of in-service complex electronic systems remains challenging due to their uncertainty. An AI-based adaptive surrogate modeling method, comprising offline and online learning procedures, is proposed in this research for different systems with significant variety. The offline learning abstracts knowledge from the known information and represents it as root models. The in-service response is modeled by a linear combination of these root models, which are evolved by online learning against the continuously arriving measurements. This research applies a performance measurement dataset of UVLED modules with considerable deviation to verify the proposed method. Part of the dataset is selected to generate the root models by offline learning, and these root models are applied to the online learning procedures to build the adaptive surrogate model (ASM) of the different systems. The results show that after approximately 10 online learning iterations, the ASM achieves the capability of predicting 1000 h of response.

1. Introduction

The complexity and dynamics of current electronic systems induce a high level of performance uncertainty during field application. Many of these complex systems consist of multiple components/subsystems, and the complexity induces multiple failure/deterioration mechanisms under multiple physical loadings, eventually increasing the performance uncertainties among the individuals [1,2]. An LED module is a good example of a complex system. It exhibits a variety of physics because it converts electrical energy to light and heat; it comprises multiple components, including the LED chip, wires, die-bonding material, and substrate. Considering the manufacturing uncertainties and different thermal degradation rates, the light output of the LED module might deviate from the statistical averages [3,4]. These biases increase in-service inconsistency and system maintenance costs.
Dragičević et al. [5] reviewed recent reliability research on power electronics, identified a paradigm shift toward the design-for-reliability (DfR) approach, and found that the reliability performance of power devices is typically investigated under different thermal loadings, because high temperature plays an important role in many relevant degradation mechanisms. Fan et al. [6] investigated the long-term reliability of LED packages, using thermal loading as one of the degradation factors. Yazdan Mehr et al. [7] studied the degradation of polymer materials and correlated its impact with the light/color output of LED modules. Moreover, Lu et al. [8] further studied the color shift of LED modules. Sun et al. [9] studied the driver electronics in LED systems.
To facilitate the DfR concept, researchers have developed modeling techniques to predict the average reliability responses of complex electronic systems from a given set of design and loading parameters [10,11]. Conventionally, the physics-of-failure concept is applied: by analyzing the failure/degradation mechanisms in detail, corresponding physics-driven models have been developed over decades [12]. Due to the interactions between multiple failure/degradation mechanisms, AI-based reliability modeling methods have also been developed [4]. Zhao et al. [13] indicated that neural network methods are viable for power electronics because the significant development of computing hardware unleashes the potential of neural networks in dealing with complex tasks, and the structure of neural networks is flexible enough for performance improvement.
Chou et al. [14] and Hsiao et al. [15] proposed deep machine learning modeling methods to replace the expert-driven finite element model. Yuan et al. [16] applied the long short-term memory (LSTM) method for the solder joint risk assessment of wafer-level chip-scale packaging (WLCSP) with limited datasets. Panigrahy et al. [17] overviewed the efficiencies and accuracies of the finite element method based on AI-assisted design-on-simulation methods, including artificial neural networks, recurrent neural networks, support-vector regression, kernel ridge regression, k-nearest neighbors, and random forests. Fan et al. [18] applied neural network architecture to model the spectral power distribution (SPD) of a light source. Yuan et al. [4] improved this method using a gated network with a two-step learning algorithm to build the empirical relationships between the design parameters, the thermal aging loading, and the SPD of LED products.
The maintenance costs of electronic systems grow as their complexity increases. Due to the uncertainties of the system, the maintenance cost covers more than the material and operational costs; resource planning, storage, and management must also be included [19]. Accurate prediction of an in-service electronic system's response therefore contributes to reducing its maintenance cost. Jin et al. [20] applied a stochastic model to predict the failures of an in-service electric system by considering latent failures. Grenyer et al. [21] reviewed recent scientific approaches to the uncertainty engineering problem and identified two major research gaps: the lack of frameworks to aggregate multivariate uncertainty, and limited approaches to forecasting individual and aggregate uncertainty for complex engineering systems, especially in the in-service phase. Moreover, Grenyer et al. believe that deep learning techniques might contribute to uncertainty forecasting methods.
Li et al. [22] developed a structure-adjustable online learning neural network for gradually available data, such as in-service data, by applying an adjustable hidden layer to overcome the stability–plasticity dilemma of the online learning. Hu and Du [23] developed a time-dependent surrogate model with inner and outer loops. Moreover, Hu and Mahadevan [24] reported that the crossing points did not need to be highly accurate to predict the reliability during their study using the single-loop Kriging surrogate modeling method. Lieu et al. [25] developed an adaptive surrogate model based on a deep neural network by introducing a threshold to switch from a global prediction to a local one.
In this research, an AI-based adaptive surrogate modeling method that is able to represent and predict the individual product performance characteristics is established. As indicated in Figure 1, the hybrid machine learning method comprises offline learning and online learning parts. The experimental information is (partially) provided to the offline machine learning procedure in order to train the root models. Consequently, the same root models are input into the online machine learning to obtain the ASM against different training data. In this research, three sets of direct measurements of the UVLED module were applied to verify this algorithm. With careful definitions of error estimation, the function of offline learning, the predictive capability of the online learning procedure, and the ASM control scheme were analyzed.
This paper is organized as follows: The fundamental scientific issues, field application requirements, and literature review are presented in the first section, “Introduction”. The following section, “Methods”, presents hybrid offline/online machine learning methods for adaptive surrogate models, along with the mathematical definition of the error estimations. The section “UVLED Measurement” describes the test object and the characteristics of the data. The section “Offline/Online Machine Learning” describes the implementation and results of hybrid machine learning. The following section, “Discussion”, discusses the offline/online learning parameters and online learning control scheme to stabilize the learning against the measurement noise. The Conclusions of this paper are presented in the final section.

2. Methods

For an in-service electronic system, the response at time $t$ is written as $f(t)$. To emphasize the "future" response of such an in-service electronic system, we consider $t > t_m$, where $t_m$ is the current measurement time.
The response $f(t)$ can be expressed as a linear combination of several evolved root models, as follows:

$$ f(t) = \sum_i \alpha_{i,t_m} \cdot F_{i,t_m}(w_{i,t_m}, p_t), $$  (1)

where $\alpha_{i,t_m}$ and $F_{i,t_m}$ are the weightings and evolved root models obtained at time $t_m$, respectively. In this study, $F_{i,t_m}$ represents a neural network, while $w_{i,t_m}$ and $p_t$ represent the network parameters obtained at time $t_m$ and the inputs at time $t$, respectively. This linear combination of evolved root models (Equation (1)) is defined as the adaptive surrogate model (ASM). It should be noted that the root model in this research is defined as a neural network model due to its numerical flexibility and information-abstracting ability.
The hybrid machine learning consists of two steps, as illustrated in Figure 1. The offline learning step contributes the $F_{i,0}$, and the collection of $F_{i,0}$ ($i = 1, \ldots, n$) constitutes the root models. By learning the measurement points at $t_m$, the weightings $\alpha_{i,t_m}$ and the evolved root models $F_{i,t_m}$ are obtained by the subsequent online learning procedure. It should be noted that one set of root models can, in principle, be the starting point of several online learning procedures, as illustrated in Figure 1.
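To make Equation (1) concrete, the following minimal sketch evaluates an ASM as a weighted sum of small neural network root models. It is illustrative only: the class and function names are hypothetical, and the fully connected sigmoid layers merely mirror the "3,3,3,1" root model structure described later in Section 4.1.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class RootModel:
    """A small fully connected network F_i(w_i, p_t); layer sizes are
    illustrative and follow the '3,3,3,1' structure used in this paper."""
    def __init__(self, weights, biases):
        self.weights = weights  # list of weight matrices, one per layer
        self.biases = biases    # list of bias vectors, one per layer

    def predict(self, p_t):
        a = np.asarray(p_t, dtype=float)   # inputs p_t at time t
        for W, b in zip(self.weights, self.biases):
            a = sigmoid(W @ a + b)
        return a[0]                        # scalar response

def asm_predict(root_models, alphas, p_t):
    """Equation (1): the ASM response is the weighted linear combination of
    the evolved root models obtained at the current measurement time t_m."""
    return sum(a * rm.predict(p_t) for a, rm in zip(alphas, root_models))
```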

2.1. The Offline Machine Learning

The main purpose of the offline machine learning is to obtain stable and reliable root models. The experimental measurements are categorized into groups to form the training datasets for the offline machine learning, as $D_i = \{(p_i, q_i)_{t=0}, (p_i, q_i)_{t=1}, \ldots\}$, where $p_i$ and $q_i$ are the input and output vectors, respectively. Backpropagation-based approaches with optimizers are used to obtain the optimized weightings against the inputs and time, forming the $i$-th root model $F_i(w_i, p_i) = q_i$.
To reduce the instability of the weightings of the neural-network-based approach, the combination of a genetic algorithm (GA) and a principal component analysis (PCA) framework, as described by Yuan et al. [26], was implemented, using the progressing GA optimizer. The exponential kernel function for the PCA can be expressed as follows:

$$ K(x, x') = \sigma \cdot \exp\!\left(-\frac{\|x - x'\|^2}{2}\right), $$  (2)

where $x$ and $x'$ are the genes obtained by different GA procedures, and the parameter $\sigma$ is set to 1.
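A minimal sketch of this kernel is given below, assuming the common exponential form $\sigma \cdot \exp(-\|x - x'\|^2 / 2)$ with $\sigma = 1$; the function name is illustrative and the exact scaling used in the PCA step is an assumption.

```python
import numpy as np

def exponential_kernel(x, x_prime, sigma=1.0):
    """Exponential kernel of Equation (2) between two GA genes
    (chromosome vectors); sigma is set to 1 in this research."""
    x = np.asarray(x, dtype=float)
    x_prime = np.asarray(x_prime, dtype=float)
    return sigma * np.exp(-np.sum((x - x_prime) ** 2) / 2.0)
```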

2.2. The Online Machine Learning

The goal of the online machine learning is to obtain the stable weightings $\alpha_{i,t_m}$ and the evolved root models $F_{i,t_m}$. The in-service response $f(t)$ can then be expressed by the linear combination in Equation (1). Since the information obtained at time $t_m$ is prescribed by the root models, the stability–plasticity dilemma mentioned by Li et al. [22] can be reduced without complicating the neural network structure.
The online training set at time $t_m$ is defined as $D_{t_m} = \{q_{t_m}, q_{t_m-1}, \ldots, q_{t_m-n}\}$, where $q$ is the measured response vector, $n > 0$, and $m - n \geq 0$. In this research, balancing between a large $n$ to avoid measurement instability and a small $n$ to increase the online machine learning speed, $n$ was set to 3.
The ASM weightings, $\alpha_{i,t_m}$ in Equation (1), are defined as the normalized inverses of the online training errors (against $D_{t_m}$). In each online machine learning step, the parameters $w_{i,t_m}$ of the neural network $F_{i,t_m}$ are updated by the machine learning process against $D_{t_m}$. Whether the learned $w_{i,t_m}$ is retained at $t_{m+1}$ is an option of the online machine learning procedure.
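A minimal sketch of one way to turn the online training errors into the ASM weightings $\alpha_{i,t_m}$ is shown below, assuming "the ratio of the inverse of the online training errors" means normalized inverse errors; the function name and the small epsilon guard are illustrative.

```python
import numpy as np

def asm_weightings(online_errors, eps=1e-12):
    """Compute alpha_{i,t_m} as the normalized inverses of each evolved root
    model's online training error against D_{t_m}: smaller error, larger weight."""
    inv = 1.0 / (np.asarray(online_errors, dtype=float) + eps)
    return inv / inv.sum()
```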
To stabilize the online machine learning, the weighting change ratio is defined as follows:

$$ \mathrm{WCR}(w, w^0) = \frac{1}{n} \sum_{j=1}^{n} \left( \frac{w_j - w_j^0}{w_j^0} \right)^2, $$  (3)

where $w^0$ and $w$ are the neural network weighting vectors before and after the online machine learning, respectively, $w_j^0$ and $w_j$ are the components of these vectors, and $n$ is the component count of the vectors.
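The weighting change ratio can be computed as in the sketch below; it reads Equation (3) as the mean squared relative change of the weighting components, which is an interpretation rather than a verbatim transcription, and the function name is illustrative.

```python
import numpy as np

def weighting_change_ratio(w, w0, eps=1e-12):
    """One reading of Equation (3): mean squared relative change of the neural
    network weighting vector before (w0) and after (w) an online learning step."""
    w = np.asarray(w, dtype=float)
    w0 = np.asarray(w0, dtype=float)
    return np.mean(((w - w0) / (w0 + eps)) ** 2)
```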

2.3. The Definitions of the Predictive Capability

Given a prediction accuracy tolerance $l_c$ ($l_c > 0$), and considering an in-service system at time $t_m$, the system response before $t_m$ is recorded by the hybrid machine learning algorithm into the ASM, but the information after $t_m$ is unknown. The predictive capability (PC) at $t_m$ is defined in two ways. First, from the practical point of view, the PC is defined as $PC_p$, an estimation based on the information obtained before $t_m$. Define $l_p$ as the error estimation of the ASM at time $t_p$:

$$ l_p = \left\| f(t_p) - \sum_i \alpha_{i,t_m} \cdot F_{i,t_m}(w_{i,t_m}, p_{t_p}) \right\|_2 . $$

Define the set $s_\alpha$ as a collection of consecutive $l_p$ satisfying $l_p < l_c$ for $t_p \leq t_m$; any $l_\beta$ with $l_\beta \geq l_c$ is excluded and terminates the set. Define the function $C(\cdot)$ as the component counter of a set; $PC_p$ is then defined as follows:

$$ PC_p = \left( C(s_\alpha) - 1 \right) \cdot \Delta t , $$  (4)

where $\Delta t$ is the data measurement period.
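A sketch of $PC_p$ under this reading of Equation (4) is given below: consecutive in-tolerance errors are counted backward from $t_m$, and the count is converted to a time span. The function name and the list convention (oldest measurement first) are assumptions.

```python
def predictive_capability_practical(errors_before_tm, l_c, delta_t):
    """Equation (4): count the consecutive error estimations l_p < l_c when
    walking backward from t_m, stop at the first violation, and convert the
    count into a time span (C(s_alpha) - 1) * delta_t."""
    count = 0
    for l_p in reversed(errors_before_tm):  # newest measurement first
        if l_p < l_c:
            count += 1
        else:
            break                           # first out-of-tolerance error ends s_alpha
    return max(count - 1, 0) * delta_t
```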
On the other hand, to fine-tune the parameters of the hybrid machine learning procedure, one may apply known in-service historical responses. We define the corresponding prediction capability as the PC from the oracle's point of view ($PC_o$). The term "oracle" comes from Greek mythology, and refers to someone able to communicate directly with the gods and relay their response or message to someone else. In the actual application, the prediction at time $t + n$ can be obtained at time $t$ by the ASM, but its accuracy cannot, because the actual system response at time $t + n$ is not available. In this parameter-tuning stage, however, the response at $t + n$ is known from the historical data. Therefore, the term "oracle" is applied to draw a clear distinction between $PC_o$ and $PC_p$. Define the actual response as $g(t)$ and $l_t$ as the error estimation of the ASM at time $t$, which becomes

$$ l_t = \left\| g(t) - \sum_i \alpha_{i,t_m} \cdot F_{i,t_m}(w_{i,t_m}, p_t) \right\|_2 . $$  (5)

$PC_o$ can be defined as follows:

$$ PC_o = \max t \quad \text{subject to} \quad l_t, l_{t-1}, \ldots, l_{t-\gamma} \leq l_c \ \text{ and } \ t - \gamma > t_m , \ \gamma \geq 0 . $$  (6)
From the application point of view, $PC_p$ should be smaller than, and as close as possible to, $PC_o$, so as to ensure that the prediction is accurate yet conservative. Achieving this depends on the characteristics of $g(t)$, on the online machine learning settings that stabilize the neural network training against $D_{t_m}$, and on the selection of $l_c$. It is recommended that $l_c$ be selected with reference to the learning performance of the root models against $D_i$.
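For parameter tuning with historical data, $PC_o$ of Equation (6) can be evaluated as in the sketch below: starting just after $t_m$, the largest time reached while every error stays within $l_c$ is returned. The function signature is illustrative, and the time and error sequences are assumed to be aligned and sorted.

```python
def predictive_capability_oracle(times, errors, t_m, l_c):
    """One reading of Equation (6): with the historical ('oracle') response
    available, find the largest time t such that every error l_t from just
    after t_m up to t stays within the tolerance l_c."""
    pc_o = t_m
    for t, l_t in zip(times, errors):
        if t <= t_m:
            continue
        if l_t <= l_c:
            pc_o = t          # the in-tolerance streak extends to t
        else:
            break             # first violation ends the streak
    # (pc_o - t_m) divided by the measurement period gives the capability
    # expressed in units of delta_t, as used in Table 3.
    return pc_o
```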

3. UVLED Measurement

The UVLED packages (left panel of Figure 2a) were first mounted onto a metal-core printed circuit board (right panel of Figure 2a). The peak emission wavelength of the test samples ranged from 365 nm to 375 nm, with a rated driving current of 350 mA. Both constant-stress and step-stress accelerated degradation tests were implemented by Liang et al. [27]. Three experiments, with input currents within the UVLED design range, were selected for this investigation and are denoted as sets A, B, and C. Table 1 lists the loading conditions of each test set. There were 14 samples in each set, and the reliability measurement results, in terms of the radiation power reduction (in %) over time, are plotted in Figure 2b–d.
Analyzing the measurement data from Figure 2b–d, we can first eliminate the statistical outliers, such as A19 in Figure 2b, B1 and B9 in Figure 2c, and C46 and C52 in Figure 2d. The deviation of sets A, B, and C is plotted against time in Figure 3a–c, respectively. Deviation existed regardless of whether the test was constant-stress (set A) or step-stress (sets B and C). The average deviation (i.e., the difference between the maximum and the minimum) over time was 0.0806, 0.0611, and 0.0860 for sets A, B, and C, respectively. Since the degradation tests were carried out within a controlled oven and power supply, and the light output of the UVLED was measured using a calibrated integrating sphere [27], these deviations indicate that the in-service response of the UVLED system is influenced by the interaction of multiple causes of degradation.

4. Offline/Online Machine Learning

4.1. The Offline Machine Learning

The measurement data of sets B and C were categorized into four groups to increase their internal numerical similarity. Four offline training datasets were then formulated, as listed in Table 2. Each training set comprised one data group from set B and one from set C to balance the loading conditions. Referring to Table 1, if only set B were selected for offline learning, the corresponding root model would see very limited variation in base temperature; the same holds if only set C were selected.
At the initial stage of the offline modeling, the same structure was applied to all four root models. The "3,3,3,1" structure was applied, which has three inputs (i.e., the base temperature, input current, and time), one output (the radiation power reduction), and two hidden layers (each with three neurons). The activation function was fixed to the sigmoid function.
Using the genetic algorithm proposed by Yuan et al. [26] with the "progressing" GA optimizer, 2000 initial neural network parameter sets were randomly generated for each GA run, and each component of the chromosomes followed a zero-mean Gaussian distribution. Each chromosome was trained against the offline training datasets, and the best error norm was defined as the fitness ranking factor. The fitness ranking ensures that only the best five chromosomes enter the next population. The iteration is considered converged when the norms of the new generation of chromosome vectors are less than 0.1. Moreover, a PCA was applied to the GA optimization results to obtain the PCA gene. Four GA runs were carried out, resulting in the four best chromosomes for each offline training dataset. The super chromosome was then obtained by inputting these four chromosomes into the principal component analysis with the kernel function shown in Equation (2). To achieve the root models, an extra 10,000 backpropagation iterations (with a learning rate of 0.3) were applied to these four super chromosomes. Applying the error estimation given in Equation (5), the averaged errors of the data related to sets B and C are listed in Table 2, with the corresponding plots in Figure 4a–d. It should be noted that the "3,3,3,1" structure was chosen because it is the simplest structure for which the B- and C-related averaged errors are less than 0.03.
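The offline GA selection loop described above can be sketched as follows. This is a simplified, self-contained skeleton: the population size, number of survivors, and convergence tolerance follow the text, but Gaussian perturbation of the survivors stands in for the crossover/mutation operators, whose exact form is not detailed here, and `error_fn` is a placeholder for training a chromosome against an offline dataset and returning its error norm.

```python
import numpy as np

def offline_ga(error_fn, num_params, pop_size=2000, n_survivors=5,
               tol=0.1, max_generations=1000, rng=None):
    """Simplified skeleton of one offline GA run: zero-mean Gaussian initial
    chromosomes, fitness ranking by the error norm returned by error_fn,
    survival of the best five chromosomes, and convergence when the survivors
    change by less than `tol` between generations."""
    rng = np.random.default_rng() if rng is None else rng
    population = rng.normal(0.0, 1.0, size=(pop_size, num_params))
    prev_survivors = None
    for _ in range(max_generations):
        fitness = np.array([error_fn(c) for c in population])
        survivors = population[np.argsort(fitness)[:n_survivors]]
        if prev_survivors is not None and np.linalg.norm(survivors - prev_survivors) < tol:
            break
        prev_survivors = survivors.copy()
        # Gaussian perturbation of randomly chosen survivors stands in for
        # the crossover/mutation operators of the actual GA.
        parents = survivors[rng.integers(0, n_survivors, pop_size - n_survivors)]
        population = np.vstack([survivors, parents + rng.normal(0.0, 0.1, parents.shape)])
    return survivors  # best chromosomes of this run, fed into the PCA step
```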

4.2. The Online Machine Learning

During the online machine learning, the dataset size was limited to three. The A17 measurement case within set A was first selected as the test case. Figure 5 shows the online machine learning procedure.
Figure 5a shows the initial stage, where the ASM curve (blue line) differs significantly from the dashed line (the ground truth). Figure 5b shows the first online learning step, where three measurement points are collected to train the evolved root models, and the ASM changes compared to Figure 5a. Figure 5c–j show the subsequent online training processes, and one can observe that the ASM gradually approaches the ground truth.
Considering the quality of the input data, four online learning points are worth mentioning: the radiation power reductions at 672, 1008, 1344, and 1512 h show significant discontinuities, and Figure 5c,e,g,h show the corresponding online learning processes, respectively. Although the discontinuous points worsen the ASMs' slope and their predictive capability, as depicted in Figure 5e,h, no significant divergence or catastrophic failure of the ASM training was detected. This is mainly because there are multiple measurement points in the training dataset. The online machine learning stopped at 1848 h because the 11th ASM was very close to the ground truth, as shown in Figure 5j.
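One online learning iteration, as applied above, can be sketched as follows. The helper names are hypothetical: `train_fn` stands for the backpropagation update of an evolved root model against the three-point window, and `error_fn` for the error estimation of Equation (5) used to weight the models.

```python
def online_learning_step(root_models, history, train_fn, error_fn):
    """One online learning iteration: take the latest three measurement
    points, update each evolved root model against this window, and recompute
    the ASM weightings from the inverse training errors."""
    window = history[-3:]                       # online training set D_{t_m}
    for rm in root_models:
        train_fn(rm, window)                    # backpropagation updates w_{i,t_m} in place
    errors = [error_fn(rm, window) for rm in root_models]
    inv = [1.0 / (e + 1e-12) for e in errors]
    alphas = [v / sum(inv) for v in inv]        # normalized inverse errors
    return alphas
```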

5. Discussion

5.1. The Quality of the Data

Beyond case A17, Figure 6 shows the hybrid learning results of cases A17, A25, and A28. From a high-level perspective, although the measurement data show significant differences (the dotted lines), with an average difference of 0.0635% at each time point, all three ASMs (the solid lines) provide good predictive capabilities after a certain amount of online training. It should be noted that all ASMs in Figure 6 started with the same root models, shown in Figure 4.
The error estimations of the ASMs were obtained by Equation (5) from the oracle's point of view. Considering the ASM obtained at each $t_m$, each error estimation is the average of the output of Equation (5) over the whole time scale. Analyzing the trend of the error estimation, the error was high at the initial state and decreased with learning. However, the error increased at approximately 1344 h, and then decreased again until a relatively low error plateau was reached. To understand this mechanism, one should refer to Figure 5g, which shows the ASM at the same time slot. When the measured data show a significant discontinuity, the data quality might impact the slope of the ASM, worsening its predictive capability. A similar condition occurs in cases A25 and A28 as well.
From the practical point of view, when the online learning reaches 1344 h, one cannot avoid or ignore this data discontinuity, because the future ($t > 1344$ h) is unknown. However, when the time reaches the next measurement point, i.e., $t = 1344 + 168$ h, the previous discontinuity becomes historical information and is of less interest to the online learning. Hence, this paper relies only on the use of multiple measurement points in the online training set to stabilize the online training quality.

5.2. The Use of Predictive Capability ($PC_p$ and $PC_o$)

Using Equations (5) and (6) in the prediction of the in-service electronic system is ideal, but is not applicable, because the in-service response is unknown (until it reaches the measurement point) and comes with certain uncertainties. Another practical prediction capability estimator should be defined.
Figure 7a shows the online machine learning of case A17 at $t = 840$ h. The thick blue curve is the third ASM, written as ASM (3). The red dotted curve and the grey dashed curve are derived from ASM (3), which is known at $t = 840$ h. The light-grey dotted curve shows the ground truth, and the data after $t = 840$ h are unknown from the practical point of view. Hence, $PC_p$ is required to identify the predictive capability of ASM (3), shown as the red dotted curve in Figure 7a. A similar situation applies at $t = 1848$ h, as shown in Figure 7b.
Due to the unavailability of future data, the computation of $PC_p$ is limited to the information acquired at and before the time at which the ASM is obtained. First, $l_c$ was set to 0.02 by considering the learning capabilities of the root models, as shown in Table 2. Then, Equation (4) was applied to compute $PC_p$ from the quality of the ASM.
Applying Equations (4) and (6), Table 3 was generated for cases A17, A25, and A28. The value of $PC_o - PC_p$ represents the quality of $PC_p$: if the value is positive, $PC_p$ is conservative; otherwise, $PC_p$ is aggressive. Analyzing $PC_o - PC_p$ in Table 3, one can observe that the negative values occur approximately between $t = 1008$ and 1176 h, near the discontinuity point at 1344 h. Such a discontinuity cannot be foreseen before the measurement point. The prediction becomes conservative after approximately 1680–1848 h because the ground truth shows few fluctuations. Throughout the online machine learning process, the average difference is below approximately 2 (in units of $\Delta t$), meaning that the index $PC_p$ is reasonable and conservative.

5.3. The Contribution of the Root Models

Referring to Equation (1), the weightings of the evolved root models and the changes in $\alpha_{i,t_m}$ at each online learning step represent the contribution of the evolved root models. The weightings during the online learning in cases A17, A25, and A28 are plotted in Figure 8. The weightings at the beginning of the online learning show certain similarities; however, they evolved diversely as the learning proceeded. All of the (evolved) root models showed the potential to carry the highest weighting at some stage of the learning. This also shows the equal importance of the four root models in this research.

5.4. The Neural Network Weighting Control Scheme

Each online machine learning procedure involves a backpropagation process that adjusts the weightings of the evolved root models. There are at least two starting strategies for the online learning. In the first, the backpropagation uses the weightings of the zero-hour root models as the initial weightings; after a considerable number of machine learning iterations, a low error can be achieved. This is called the "back-to-original" approach. In the second, the backpropagation starts from the previous learning results, using the previously learned weightings as the starting point. This is called the "progressing" approach.
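A minimal sketch of the two starting strategies described above is given below; the function and attribute names are illustrative, and deep copies are used so that the stored zero-hour root models are never modified by the subsequent backpropagation.

```python
import copy

def initial_models(strategy, zero_hour_models, previous_models):
    """Select the starting point for an online backpropagation step:
    'back-to-original' restarts from the zero-hour root models, whereas
    'progressing' continues from the previously evolved root models."""
    if strategy == "back-to-original":
        return [copy.deepcopy(m) for m in zero_hour_models]
    elif strategy == "progressing":
        return [copy.deepcopy(m) for m in previous_models]
    raise ValueError(f"unknown strategy: {strategy}")
```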
Both the "back-to-original" and "progressing" approaches were carried out against case B15 (Figure 9a), which was not used in any of the training sets of the root models. The same root models were prepared for the online machine training. The contributions of the root models, in terms of $\alpha_{i,t_m}$ in Equation (1), for the "back-to-original" and "progressing" approaches are plotted in Figure 9b,c, respectively. Compared to the differences between Figure 8a–c, a certain similarity can be found between Figure 9b,c. Moreover, the average difference in the error $l_t$ (Equation (5)) for all ASMs is approximately 0.008, which is significantly lower than the errors in Table 2; therefore, one can conclude that the ASMs obtained from the "back-to-original" and "progressing" approaches are similar.
However, the difference between the "back-to-original" and "progressing" approaches can be detected in the weighting change (based on Equation (3)) of the neural networks. It is reasonable that the weighting change of the "progressing" approach should be small, because the training sets of any two adjacent online training steps are similar. The weighting changes of the four root models were computed by Equation (3), as shown in Figure 10. Figure 10a shows that the weighting change of RM1 under the "progressing" approach is mostly small, but reaches its peak at approximately 2184 h, corresponding to the measurement data discontinuity point. The weighting changes of RM2 are small (Figure 10b), as is its contribution (Figure 9b,c). The weighting change of RM3 is insignificant under the "back-to-original" approach, but not under the "progressing" approach. The weighting change of RM4 shows a significant peak at 2016 h. Hence, the "progressing" approach shows a low weighting change when the measured data are continuous, and it becomes sensitive when a measurement discontinuity is encountered.

6. Conclusions

In this research, an AI-based hybrid machine learning method, including offline and online machine learning, was developed to obtain an adaptive surrogate model (ASM) for the performance prediction of an in-service complex electronic system. The offline machine learning aims to obtain the root models (i.e., neural network models) based on known experience. Since the quality of the root models impacts the later online learning performance, a genetic algorithm is recommended to obtain stable root models. The ASM is then obtained through the online learning algorithm against the available measurements, via the linear combination of Equation (1).
Three sets of UVLED module performance measurements were used for the validation of the hybrid machine learning, with three input parameters, namely the case temperature, input current, and time, and with the radiation power reduction (in %) as the output. Via the offline learning, including the genetic algorithm and principal component analysis processes, four root models were obtained with an error norm of 0.0179, and these four root models were applied to all of the online machine learning.
During the online machine learning, three data points are provided to the backpropagation to generate the ASM at each measurement point. The results show that when the measurement data exhibit significant discontinuity, the error of the ASM increases. These errors can be decreased after a few more online learning iterations.
Considering the unavailability of future measurements at the present time, the predictive capability from the practical point of view is defined by Equation (4). With an error criterion of 0.02, the average difference between the actual capability and the prediction is approximately $2 \cdot \Delta t$ ($\Delta t = 168$ h), and the definition of the practical predictive capability is believed to be reasonable and conservative.
The contribution of the four root models was studied, and their equal importance was detected. To stabilize the online learning, the "back-to-original" and "progressing" approaches were investigated for their stability and sensitivity. The ASMs from both approaches performed with high similarity. However, in the "progressing" approach, the weighting change of the evolved root models was stable, yet very sensitive to changes in the incoming data. The quality of the root models influences the online learning performance, and more advanced genetic algorithms, such as non-dominated sorting genetic algorithms (NSGAs), should be implemented.
In this research, the same four root models were applied to all online learning optimizations. When the loading conditions were within the ranges covered by the root models, the ASM approached the real response after approximately 9–10 learning iterations (approximately 1500–1800 h), with a real prediction capability of more than $6 \cdot \Delta t$ ($\Delta t = 168$ h), considering an average response deviation of 0.0760 and a given accuracy requirement of 0.02.
In engineering terms, the success of the ASM in this research was not intrinsic [28], but a combination of many factors, including full coverage of the potential degradation mechanisms through a priori knowledge, flexible and robust root models obtained via offline learning, and sufficient online learning against reliable real-time measurements. Using the known data, one should fine-tune these offline/online training parameters and validate the accuracy of the ASM to improve its applicability.

Funding

This research is partially supported by the “Applying the real-time machine learning for the AI reliability modeling of the light output depreciation and color shifting of the micro LED array packaging” project of Feng Chia University, sponsored by the National Science and Technology Council, under the grant no. MOST 111-2221-E-035-043.

Data Availability Statement

The data presented in this study are available in this article.

Conflicts of Interest

The author declares no conflict of interest.

References

1. Newman, M.E.J. Resource Letter CS–1: Complex Systems. Am. J. Phys. 2011, 79, 800–810.
2. ElMaraghy, W.; ElMaraghy, H.; Tomiyama, T.; Monostori, L. Complexity in engineering design and manufacturing. CIRP Ann. 2012, 61, 793–814.
3. van Driel, W.D.; Fan, X.J. Solid State Lighting Reliability: Components to Systems; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2012; Volume 1.
4. Yuan, C.C.A.; Fan, J.; Fan, X. Deep machine learning of the spectral power distribution of the LED system with multiple degradation mechanisms. J. Mech. 2021, 37, 172–183.
5. Dragicevic, T.; Wheeler, P.; Blaabjerg, F. Artificial Intelligence Aided Automated Design for Reliability of Power Electronic Systems. IEEE Trans. Power Electron. 2019, 34, 7161–7171.
6. Fan, J.; Zhang, M.; Luo, X.; Qian, C.; Fan, X.; Ji, A.; Zhang, G. Phosphor–silicone interaction effects in high power white light emitting diode packages. J. Mater. Sci. Mater. Electron. 2017, 28, 17557–17569.
7. Yazdan Mehr, M.; Bahrami, A.; van Driel, W.D.; Fan, X.J.; Davis, J.L.; Zhang, G.Q. Degradation of optical materials in solid-state lighting systems. Int. Mater. Rev. 2019, 65, 102–128.
8. Lu, G.; van Driel, W.D.; Fan, X.; Fan, J.; Qian, C.; Zhang, G.Q. Color shift acceleration on mid-power LED packages. Microelectron. Reliab. 2017, 78, 294–298.
9. Sun, B.; Fan, X.; Qian, C.; Zhang, G. PoF-Simulation-Assisted Reliability Prediction for Electrolytic Capacitor in LED Drivers. IEEE Trans. Ind. Electron. 2016, 63, 6726–6735.
10. van Roosmalen, A.J.; Zhang, G.Q. Reliability challenges in the nanoelectronics era. Microelectron. Reliab. 2006, 46, 1403–1414.
11. Tarashioon, S.; Baiano, A.; van Zeijl, H.; Guo, C.; Koh, S.W.; van Driel, W.D.; Zhang, G.Q. An approach to "Design for Reliability" in solid state lighting systems at high temperatures. Microelectron. Reliab. 2012, 52, 783–793.
12. Zhang, G.Q.; Driel, W.D.; Fan, X.J. Mechanics of Microelectronics; Springer: Dordrecht, The Netherlands, 2006.
13. Zhao, S.; Blaabjerg, F.; Wang, H. An Overview of Artificial Intelligence Applications for Power Electronics. IEEE Trans. Power Electron. 2021, 36, 4633–4658.
14. Chou, P.H.; Chiang, K.N.; Liang, S.Y. Reliability Assessment of Wafer Level Package using Artificial Neural Network Regression Model. J. Mech. 2019, 35, 829–837.
15. Hsiao, H.Y.; Chiang, K.N. AI-assisted reliability life prediction model for wafer-level packaging using the random forest method. J. Mech. 2021, 37, 28–36.
16. Yuan, C.C.A.; Lee, C.-C. Solder Joint Reliability Modeling by Sequential Artificial Neural Network for Glass Wafer Level Chip Scale Package. IEEE Access 2020, 8, 143494–143501.
17. Panigrahy, S.K.; Tseng, Y.C.; Lai, B.R.; Chiang, K.N. An Overview of AI-Assisted Design-on-Simulation Technology for Reliability Life Prediction of Advanced Packaging. Materials 2021, 14, 5342.
18. Fan, J.; Li, Y.; Fryc, I.; Qian, C.; Fan, X.; Zhang, G. Machine-Learning Assisted Prediction of Spectral Power Distribution for Full-Spectrum White Light-Emitting Diode. IEEE Photonics J. 2020, 12, 8945382.
19. Caswell, N.S.; Nikolaou, C.; Sairamesh, J.; Bitsaki, M.; Koutras, G.D.; Iacovidis, G. Estimating value in service systems: A case study of a repair service system. IBM Syst. J. 2008, 47, 87–100.
20. Jin, T.; Liao, H.; Kilari, M. Reliability growth modeling for in-service electronic systems considering latent failure modes. Microelectron. Reliab. 2010, 50, 324–331.
21. Grenyer, A.; Erkoyuncu, J.A.; Zhao, Y.; Roy, R. A systematic review of multivariate uncertainty quantification for engineering systems. CIRP J. Manuf. Sci. Technol. 2021, 33, 188–208.
22. Li, G.; Liu, M.; Dong, M. A new online learning algorithm for structure-adjustable extreme learning machine. Comput. Math. Appl. 2010, 60, 377–389.
23. Hu, Z.; Du, X. Mixed Efficient Global Optimization for Time-Dependent Reliability Analysis. J. Mech. Des. 2015, 137, 051401.
24. Hu, Z.; Mahadevan, S. A Single-Loop Kriging Surrogate Modeling for Time-Dependent Reliability Analysis. J. Mech. Des. 2016, 138, 061406.
25. Lieu, Q.X.; Nguyen, K.T.; Dang, K.D.; Lee, S.; Kang, J.; Lee, J. An adaptive surrogate model to structural reliability analysis using deep neural network. Expert Syst. Appl. 2022, 189, 116104.
26. Yuan, C.; Fan, X.; Zhang, G. Solder Joint Reliability Risk Estimation by AI-Assisted Simulation Framework with Genetic Algorithm to Optimize the Initial Parameters for AI Models. Materials 2021, 14, 4835.
27. Liang, B.; Wang, Z.; Qian, C.; Ren, Y.; Sun, B.; Yang, D.; Jing, Z.; Fan, J. Investigation of Step-Stress Accelerated Degradation Test Strategy for Ultraviolet Light Emitting Diodes. Materials 2019, 12, 3119.
28. Molnar, C. Interpretable Machine Learning; Lulu Press: Morrisville, NC, USA, 2020.
Figure 1. The architecture of the adaptive surrogate modeling method.
Figure 2. The UVLED reliability experiment and results: (a) the test samples (UVLED package and the package mounted on the metal-core printed circuit board); (b–d) the radiation power reduction for sets A, B, and C, respectively. The test conditions of sets A, B, and C are listed in Table 1.
Figure 3. The deviation of the UVLED reliability test results: (a) set A; (b) set B; (c) set C.
Figure 4. The training results of the root models (RMs): (a) RM1; (b) RM2; (c) RM3; (d) RM4. The dashed lines represent the measurement data listed in Table 2. The thick purple and dark yellow curves of each panel show the root model using the loading conditions set by sets B and C, respectively.
Figure 5. The online learning against case A17: (a) the initial state; (b–j) the 9 online learning steps.
Figure 6. The online machine learning results of A17, A25, and A28. The upper curves show the ground truth (dotted lines) and the best ASM, with the Y-axis on the left-hand side. The lower dashed curves with symbols (grey square, red circle, and blue triangle) represent the error estimations of A17, A25, and A28, respectively. These errors are computed by Equation (5), with the Y-axis on the right-hand side.
Figure 7. The illustration of the prediction capability concept: The hybrid machine learning of case A17 is applied. (a) and (b) are at t = 840 and 1848 h, respectively.
Figure 8. The contribution of the root models during the online machine learning: (a) A17; (b) A25; (c) A28. The dotted curves in panels (a–c) are the ground truth, with the Y-axis on the left-hand side. The bottom stacked column plots are the weightings of the evolved root models after each online learning iteration, and the corresponding Y-axis is located on the right-hand side.
Figure 9. The contribution of root models for (a) case B15 using the (b) "back-to-original" and (c) "progressing" online machine learning optimizers.
Figure 10. The neural network weighting change of root models during the online learning procedure when the "back-to-original" and "progressing" optimizers are applied: (a) RM1; (b) RM2; (c) RM3; (d) RM4.
Table 1. The loading conditions of the UVLED reliability experiments.

                      Set A      Set B           Set C
Samples               14         14              14
Temperature (°C)      35         55              55–85 *
Current (mA)          350        350–450 **      350
Time (hours)          Measured every 168 h

*: Base temperature started from 55 °C, and continuously increased every 504 h at a step size of 5 °C until it reached 85 °C. **: Input current started from 350 mA, and continuously increased every 504 h at a step size of 50 mA until it reached 450 mA.
Table 2. The offline training datasets and the learning results of the root models.

Root Model    B-Related Training Set    C-Related Training Set    B-Related Averaged Errors    C-Related Averaged Errors
RM1           b1 ¹                      c1 ³                      0.0145                       0.0147
RM2           b2 ²                      c2 ⁴                      0.0147                       0.0272
RM3           b1 ¹                      c2 ⁴                      0.0154                       0.0267
RM4           b2 ²                      c1 ³                      0.0142                       0.0146

1: Group b1 comprises cases B1, B2, B4, B13, B12, and B14. 2: Group b2 comprises cases B3, B5, B7, B8, B10, and B11. 3: Group c1 comprises cases C47, C50, C53, C55, C58, and C59. 4: Group c2 comprises cases C46, C48, C49, C54, C56, and C57.
Table 3. Comparison of predictive capabilities from the practical point of view and the oracle's point of view for cases A17, A25, and A28.

Aging Time    A17                          A25                          A28
(Hours)       PCp*  PCo*  PCo − PCp*       PCp*  PCo*  PCo − PCp*       PCp*  PCo*  PCo − PCp*
504           1     1     0                1     1     0                1     1     0
672           2     4     2                2     4     2                2     4     2
840           2     5     3                2     5     3                2     5     3
1008          4     2     −2               4     4     0                4     4     0
1176          4     3     −1               4     1     −3               4     1     −3
1344          1     4     3                1     2     1                1     4     3
1512          2     4     2                2     2     0                2     6     4
1680          3     10    7                2     2     0                3     5     2
1848          4     9     5                2     9     7                4     6     2
2016          4     8     4                2     8     6                4     8     4
2184          4     7     3                3     7     4                5     7     2
(Avg)                     1.93                         1.71                         1.50

*: With the unit of Δt = 168 h.
