Article

Reliability Modeling of Systems with Undetected Degradation Considering Time Delays, Self-Repair, and Random Operating Environments

by
Hoang Pham
Department of Industrial and Systems Engineering, Rutgers University, Piscataway, NJ 08854, USA
Mathematics 2024, 12(18), 2916; https://doi.org/10.3390/math12182916
Submission received: 29 August 2024 / Revised: 13 September 2024 / Accepted: 14 September 2024 / Published: 19 September 2024

Abstract
In some settings, systems may not fail completely but instead undergo performance degradation, leading to reduced efficiency. A significant concern arises when a system transitions into a degraded state without immediate detection, with the degradation only becoming apparent after an unpredictable period. Undetected degradation can result in failures with significant consequences. For instance, a minor crack in an oil pipeline might go unnoticed, eventually leading to a major leak, environmental harm, and costly cleanup efforts. Similarly, in the nuclear industry, undetected degradation in reactor cooling systems could cause overheating and potentially catastrophic failure. This paper focuses on reliability modeling for systems experiencing degradation, accounting for time delays associated with undetected degraded states, self-repair mechanisms, and varying operating environments. The paper presents a reliability model for degraded, time-dependent systems, incorporating various aspects of degradation. It first discusses the model assumptions and formulation, followed by numerical results obtained from system modeling using the developed program. Various scenarios are illustrated, incorporating time delays and different parameter values. Through computational analysis of these complex systems, we observe that the probability of the system being in the undetected degraded state tends to stabilize shortly after the initial degradation begins. The model is valuable for predicting and establishing an upper bound on the probability of the undetected, degraded state and the system’s overall reliability. Finally, the paper outlines potential avenues for future research.

1. Introduction

In today’s competitive business world, there is an urgent demand for methods to predict the reliability characteristics of complex systems, including airplanes, submarines, nuclear power plants, drones, robots, and patient monitoring systems. Additionally, reducing avoidable breakdowns during field operations and extending the overall lifespan of these systems has become crucial [1]. In many systems, particularly in mechanical systems, the failure rate increases over time due to aging and environmental effects [2]. Vehicles, for example, are complex systems consisting of mechanical, hydraulic, electrical, sensor, and software subsystems. As time progresses, the performance and efficiency of vehicles gradually decrease because of the effects of fatigue damage accumulation, environmental corrosion, aging, external vibrations, and random shocks, which are common and typical for automotive systems [3].
Moreover, the failure rate of systems, such as data centers, power generation stations, medical equipment, oil pipeline networks, and security network systems, may be influenced not only by time but also by the system’s condition and environment. Factors such as vibration levels, external conditions, and the number of random shocks the system endures can contribute to its degradation. As a result, these systems may not fail completely but can degrade, leading to reduced overall efficiency [4]. However, a critical concern arises when the system’s status deteriorates without immediate detection, only being identified after some time—if it has not already failed. In many vital systems, such degradation can occur without immediate awareness or clear indicators, making prevention challenging. For example, an undetected failure of a sensor that indicates an inadequate level of transmission fluid can cause a car engine to stop suddenly. Similarly, an electrical power system might suddenly cease to function due to an undetected degraded state that has transitioned from a normal operating mode. Another example is cardiac arrest, where the heart unexpectedly stops beating due to an undetected degraded state, leading to a complete interruption of blood flow throughout the body.
Unlike abrupt failures that might trigger immediate alerts, the undetected degraded state poses a significant risk due to its inconspicuous nature. In high-performance computing, data centers are expensive facilities housing thousands of computers, which perform critical tasks such as serving websites [5]. These facilities must ensure reliability, data accessibility, privacy, and security. However, network congestion caused by a high number of concurrent users can lead to data unavailability or loss, thereby impacting the overall performance of the data center. For example, if a single computer is expected to crash three times a year, a data center with 10,000 computers could experience nearly 100 crashes a day. In a data center with 100,000 computers, there could be around 1000 crashes each day [6]. Stefanovici et al. [6] reported in 2015 that about 4% of Google’s millions of computers experienced undetectable errors each year, resulting in unexpected shutdowns.
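The crash estimates quoted above follow from simple expected-value arithmetic; the following back-of-envelope sketch (assuming, as in the text, about three crashes per machine per year) illustrates the calculation:

```python
# Expected daily crash count for a fleet of machines, assuming
# ~3 crashes per machine per year (the figure cited in the text).
crashes_per_machine_per_year = 3

def expected_crashes_per_day(machines: int) -> float:
    return machines * crashes_per_machine_per_year / 365

print(round(expected_crashes_per_day(10_000)))   # 82, i.e., on the order of 100 per day
print(round(expected_crashes_per_day(100_000)))  # 822, i.e., on the order of 1000 per day
```

The exact values (82 and 822 per day) are consistent with the order-of-magnitude figures cited in the text.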
Detecting when a system enters an undetected degraded state poses a challenging task. The following examples illustrate such situations:
(1) Components in a power grid may degrade gradually, reducing efficiency or leading to failures. These problems may go unnoticed until an unexpected event, such as a peak load, triggers the underlying fault.
(2) A medical device monitoring a patient’s vital signs may gradually deteriorate, producing inaccurate readings. The issue may only become evident when the device fails to provide reliable data in a situation that compromises patient safety.
(3) Degradation in the performance of storage devices, such as hard drives, may go unnoticed until a system failure occurs.
In these cases, undetected degradation in systems can result in failures with severe damage. Identifying such issues often requires advanced monitoring systems and sensors capable of recognizing patterns and processes differing from expected behavior. These applications raise the critical challenge of detecting undetected degraded states in industrial and biological systems, thereby enhancing overall system reliability and minimizing the risk of unexpected system failures.
In this paper, the author presents a reliability model to study undetected degraded states. The analysis takes into account time delays, time-dependent external influences arising from various environmental factors, and self-healing mechanisms.
Many researchers [2,3,4,7,8,9,10,11,12,13,14] have developed models to assess the reliability of systems subject to multiple degradation processes and random shocks. For example, Pham et al. [2] presented a model to predict the reliability and availability of multi-state degraded systems with partial repairs. Yu et al. [3] extended the model presented in [2] with the aim of analyzing the reliability of degraded automotive systems subjected to minimal repairs and negative repair effects.
Li and Pham [4] developed a generalized multi-state degraded system reliability model that considered multiple competing degradation processes and random shocks without accounting for maintenance aspects. Subsequently, they [7] developed optimal inspection policies for the condition-based maintenance of degraded systems subject to multiple competing processes. Wang and Pham [8] developed a dependent competing risk model for systems subject to multiple degradation processes and random shocks using time-varying copulas. In their model, they considered two types of random shocks: fatal and non-fatal shocks. An extended study by Hu et al. [9] focused on condition-based maintenance planning for systems with dependent soft and hard failures. Chang et al. [10] developed a model for the reliability analysis of systems subject to dependent degradation-shock failure processes by considering the effects of degradation levels on both degradation rates and hard failure thresholds.
Continuing the idea of dependent competing degraded systems, Hao et al. [11] introduced a model for mutually dependent competing processes. Wang et al. [12] developed an interdependent framework that integrates natural degradation and random shock processes into the competing failure model. Park and Pham [13] recently developed a reliability model and condition-based maintenance strategy under warranty, exploring dependence between the degradation process and random shocks. Castro et al. [14] recently studied a dependent complex degrading system with non-periodic inspection times and multiple dependent degradation processes. A comprehensive review of the applications of reinforcement and deep reinforcement learning for maintenance planning and degradation maintenance modeling can be found in [15] and [16], respectively.
Time-delay processes in biological systems have been extensively studied in recent years using stochastic delay differential equations [17]. Pham [18] presents a dynamic model of multiple time-delay interactions between virus-infected cells and the body’s immune system in the context of autoimmune diseases. Additionally, he studies a time-delay model considering the interactions between tumor viruses and the immune system, incorporating the effects of chemotherapy and autoimmune diseases [19] using delay differential equations. Kumaran et al. [20] discuss a comprehensive analysis of stochastic delay differential equations applied to various systems. However, little work has been done on applying time-delay models in reliability.
It is important to note that the cited literature does not thoroughly examine undetected degraded system states or explore the impact of self-healing, self-repair, or systems with time delays. This paper presents a reliability model for degraded time-dependent systems, considering factors such as time delays, self-healing and self-repair capabilities, undetected degraded states, and random operating environments. While existing research has explored reliability modeling for various aspects of degraded systems with competing risks and shocks, our proposed model is the first to address the relationship among undetected degraded states, self-healing mechanisms, and random operating environments within a system.
The paper is structured as follows: Section 2 discusses the model description and presents the results of the mathematical analysis. Section 3 provides numerical results for the proposed model. Finally, Section 4 includes a brief discussion and outlines potential directions for future research.

2. Model Description

The system is assumed to be operating as intended, ensuring reliable functionality. However, systems can sometimes transition into an undetected degraded state while still functioning, which is characterized by a gradual decline in performance. We will first introduce some notations and model assumptions, followed by a discussion on the formulation of the system model. The states of the system and their corresponding probabilities are as follows:
S1: The system operates normally (i.e., operating state).
S2: The system is in the degradation state but undetected (i.e., undetected degraded state).
S3: The system is in the degradation state and detected (i.e., detected degraded state).
S4: The system undergoes minor repair (i.e., minor repaired state).
S5: The system undergoes major repair (i.e., major repaired state).
S6: The system undergoes severe repair (i.e., severe repaired state).
S7: The system has failed (i.e., failed state).
P1(t): probability that the system operates normally at time t (i.e., P1(t) = P(S1(t))).
P2(t): probability that the system is in the degradation state but undetected at time t.
P3(t): probability that the system is in the degradation state and detected at time t.
P4(t): probability that the system undergoes minor repair at time t.
P5(t): probability that the system undergoes major repair at time t.
P6(t): probability that the system undergoes severe repair at time t.
P7(t): probability that the system has failed at time t.
Additional notations are included in the discussion of the model assumptions and explanations below. As a result, we will not repeat them in the list of notations above.

2.1. Model Assumptions and Explanation

  • A system initially operates normally, but it may enter an undetected degraded state (S2) with a constant rate, say ‘a’, where performance diminishes without immediate awareness. This state can only be identified after a random time interval, which follows an exponential distribution, with a constant rate ‘b’. In simpler terms, during the undetected degraded state transition, the system may outwardly appear to operate fine, making it challenging to notice the subtle decline in performance. Detection mechanisms, whether automated or through manual monitoring, play a crucial role in identifying these subtle deviations and enabling timely corrective measures to maintain system reliability.
  • The degradation may occur due to various factors, such as wear and tear, software bugs, or external influences. After a certain period, anomalies in the system’s behavior may become more apparent. Through monitoring, analysis, or the activation of built-in diagnostic mechanisms, the system’s degraded state can be detected. This detection is crucial, as it allows for proactive measures to be taken before the system reaches a critical failure point. In the detected degraded state, the system experiences a reduction in efficiency or capability, but this does not lead to a complete failure.
  • The system in the normal state is subject to transitions, with constant rates a, n, and r, into an undetected degraded state, detected degraded state, and failed state, respectively.
  • The system in the undetected degraded state is subject to transitions into either the detected degraded state or the failed state, with rates b and w, respectively. It can also enter the major repair state, with a constant rate u2, and with a delay time τ3 due to undetected degradation.
  • As soon as the degraded state of the system is detected, subject to a time delay τ1, the system is inspected. It then undergoes either minor repair (with probability d1), major repair (with probability d2), or severe repair (with probability 1 − d1 − d2). It can also transition to the failed state, with a constant rate c. The time needed for inspection is exponentially distributed, with a constant rate f. Minor repair, with a constant rate e, can return the system to a normal state. Additionally, the minor repair state is subject to the time-dependent rates $w_1 e^{-m_1 t}$ and $w_2 e^{-m_1 t}$, which can transition the system to the major and severe repair states, respectively, due to imperfect repairs.
  • A major repair, with constant rates qh and (1 − q)h, can return the system to a normal state or an undetected degraded state, respectively. Moreover, the system can enter the severe repair state, with a constant rate z, subject to the time delay τ4. It is also subject to the time-dependent rate $z e^{-m_2 t}$, which can transition it to the failed state due to imperfect repairs.
  • Similarly, a severe repair can return the system to a normal state, with a constant rate (1 − d3 − d4)g (subject to the time delay τ2), or to an undetected degraded state, with a constant rate d4g. Additionally, there is a rate d3g of transition to the failed state due to imperfect repair.
  • The system is assumed to operate as intended with the assistance of self-healing resources, resulting in reliable functionality characterized by the time-dependent function $k e^{-m t^5}$, with the uncertainty of the random environment captured by a constant rate v. For example, when k = 0, indicating the absence of self-healing, the system lacks the capability to recover autonomously. When k > 0, a self-healing resource becomes available to support system operation. This resource may include inspections or online support to ensure continuous system functionality.
  • The system is assumed to be capable of self-repair or recovering from a temporary fault, restoring the system’s status to a normal condition. For example, the system may have a transition rate u1, allowing it to transit from an undetected degraded state back to a normal operating state due to self-repair, which addresses a temporary fault that can disappear after a random short time.
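To keep the many rates straight, the transition structure described in the assumptions above can be tabulated. The following mapping is a plain restatement of those bullets, not code from the paper; the rate names follow the text, and the exponent signs in the time-dependent rates are reconstructed assumptions:

```python
# (from_state, to_state) -> transition rate, per the model assumptions.
# Delay annotations (tau_i) indicate the lagged argument used in the model.
transitions = {
    ("S1", "S2"): "a",
    ("S1", "S3"): "n",
    ("S1", "S7"): "r",
    ("S2", "S3"): "b",
    ("S2", "S7"): "w",
    ("S2", "S5"): "u2 (delay tau3)",
    ("S2", "S1"): "u1 (self-repair, delay tau3)",
    ("S3", "S4"): "d1*f (delay tau1)",
    ("S3", "S5"): "d2*f (delay tau1)",
    ("S3", "S6"): "(1 - d1 - d2)*f (delay tau1)",
    ("S3", "S7"): "c (delay tau1)",
    ("S4", "S1"): "e",
    ("S4", "S5"): "w1*exp(-m1*t) (imperfect repair)",
    ("S4", "S6"): "w2*exp(-m1*t) (imperfect repair)",
    ("S5", "S1"): "q*h",
    ("S5", "S2"): "(1 - q)*h",
    ("S5", "S6"): "z (delay tau4)",
    ("S5", "S7"): "z*exp(-m2*t) (imperfect repair)",
    ("S6", "S1"): "(1 - d3 - d4)*g (delay tau2)",
    ("S6", "S2"): "d4*g (delay tau2)",
    ("S6", "S7"): "d3*g (delay tau2)",
}
```

Each state’s balance equation in Section 2.2 is then the sum of its inflows minus its outflows from this table, plus the self-healing source term for S1.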

2.2. Model Formulation

Based on the assumptions and explanations provided, the model includes seven time-dependent probability functions: P1(t), P2(t), …, P7(t). A detailed derivation of P1(t), which represents the probability of the system operating normally, is presented below. The remaining probability functions can be derived in a similar manner.
Derivation of P1(t):
The probability of the system operating normally increases due to its time-dependent self-healing function k·m(t), which enables autonomous recovery, as follows:
$\frac{\partial P_1(t)}{\partial t} = k\,e^{-m t^5}$
where we consider the function m(t) in the form $m(t) = e^{-m t^5}$.
The probability of a system operating normally increases with the completion of repairs at the constant rates e, qh, and (1 − d3 − d4)g, which correspond to the minor, major, and severe repair states, respectively. The severe-repair contribution is subject to a time delay, denoted τ2, due to the complexity involved in the severe state:
$\frac{\partial P_1(t)}{\partial t} = e P_4(t) + q h P_5(t) + (1 - d_3 - d_4) g P_6(t - \tau_2)$
The probability of a system operating normally increases due to a self-repair constant rate, denoted as u1. This rate involves automatic recovery from temporary faults, restoring the system to normal conditions from an undetected state:
$\frac{\partial P_1(t)}{\partial t} = u_1 P_2(t - \tau_3)$
The probability of a system operating normally decreases due to the constant failure rates of a, n, or r, which lead to transitions into the undetected degraded state, detected state, or failure state, respectively:
$\frac{\partial P_1(t)}{\partial t} = -a P_1(t) - (n + r) P_1(t)$
From the equations above, we present the probability rate that the system operates normally over time t as follows:
$\frac{\partial P_1(t)}{\partial t} = k\,e^{-m t^5} - a P_1(t) + e P_4(t) + q h P_5(t) + u_1 P_2(t - \tau_3) + (1 - d_3 - d_4) g P_6(t - \tau_2) - (n + r) P_1(t)$
In summary, when the system is in a normal operating state, it may transition into the undetected degraded state, the detected degraded state, or the failed state, with transition rates denoted as a, n, and r, respectively. Additionally, the effects of the self-healing and self-repair mechanisms are considered, represented by the function $k\,e^{-m t^5}$ and the constant rate u1, respectively.
Similarly, we can derive the probability rates of other states in the system based on the model assumptions and explanations provided. Thus, the mathematical model of the degraded system includes the rates of time-dependent probability states over time, denoted as P1(t), P2(t),…, P7(t) as follows:
$\frac{\partial P_1(t)}{\partial t} = k\,e^{-m t^5} - a P_1(t) + e P_4(t) + q h P_5(t) + u_1 P_2(t - \tau_3) + (1 - d_3 - d_4) g P_6(t - \tau_2) - (n + r) P_1(t)$
$\frac{\partial P_2(t)}{\partial t} = -b P_2(t) + a P_1(t) + (1 - q) h P_5(t) + d_4 g P_6(t - \tau_2) - v u_1 P_2(t - \tau_3) - u_2 P_2(t - \tau_3) - w P_2(t - \tau_3)$
$\frac{\partial P_3(t)}{\partial t} = -c P_3(t - \tau_1) - f P_3(t - \tau_1) + b \left( 1 - \sum_{i=1,\, i \neq 2}^{7} P_i(t) \right) + n P_1(t)$
$\frac{\partial P_4(t)}{\partial t} = -e P_4(t) + d_1 f P_3(t - \tau_1) - (w_1 + w_2) e^{-m_1 t} P_4(t)$
$\frac{\partial P_5(t)}{\partial t} = -h P_5(t) + d_2 f P_3(t - \tau_1) + u_2 P_2(t - \tau_3) - z P_5(t) + w_1 e^{-m_1 t} P_4(t) - z e^{-m_2 t} P_5(t)$
$\frac{\partial P_6(t)}{\partial t} = -g P_6(t - \tau_2) + (1 - d_1 - d_2) f P_3(t - \tau_1) + z P_5(t - \tau_4) + w_2 e^{-m_1 t} P_4(t)$
$\frac{\partial P_7(t)}{\partial t} = c P_3(t - \tau_1) + d_3 g P_6(t - \tau_2) + w P_2(t - \tau_3) + r P_1(t) + z e^{-m_2 t} P_5(t)$
Figure 1 illustrates the state-transition representation of the proposed mathematical model corresponding to the above equations.
The author developed algorithms in R to numerically compute Pi(t), for i = 1, 2, …, 7. This study evaluates the reliability of the time-delay degraded system, denoted R(t). Reliability, in this context, is defined as the probability that the system is functioning normally at time t or is in a degraded state that remains undetected. Specifically,
$R(t) = P_1(t) + P_2(t)$.
The availability of the time-delay degraded system, denoted as A(t), is defined as the probability that the system is operational at time t. This is expressed as,
$A(t) = \sum_{i=1}^{6} P_i(t)$.
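The paper’s computations were performed with the author’s R program, which is not reproduced here. The following is a minimal, illustrative Python sketch of how the seven delay differential equations can be integrated numerically: it uses forward Euler with a constant-history approximation for the lagged terms, the parameter names follow the notation above, and the exponent signs in the time-dependent rates are reconstructed assumptions. For serious work, a dedicated DDE solver would be preferable.

```python
import numpy as np

def solve_degradation_model(p, tau, P0, t_end=200.0, dt=0.01):
    """Forward-Euler integration of the seven state-probability equations.

    p  : dict of rate parameters (a, b, c, e, f, g, h, n, q, r, v, w, u1, u2,
         w1, w2, z, d1..d4, k, m, m1, m2), named as in the text (illustrative)
    tau: (tau1, tau2, tau3, tau4) time delays
    P0 : length-7 vector of initial state probabilities P1(0)..P7(0)
    """
    n_steps = int(round(t_end / dt)) + 1
    P = np.zeros((n_steps, 7))
    P[0] = P0
    tau1, tau2, tau3, tau4 = tau

    def lag(state, i, delay):
        """P_state at time t - delay; constant P0 history assumed for t < 0."""
        j = i - int(round(delay / dt))
        return P[max(j, 0), state]

    for i in range(n_steps - 1):
        t = i * dt
        P1, P2, P3, P4, P5, P6, P7 = P[i]
        P2d = lag(1, i, tau3)                    # P2(t - tau3)
        P3d = lag(2, i, tau1)                    # P3(t - tau1)
        P5d = lag(4, i, tau4)                    # P5(t - tau4)
        P6d = lag(5, i, tau2)                    # P6(t - tau2)
        heal = p['k'] * np.exp(-p['m'] * t**5)   # self-healing (sign assumed)
        w1t = p['w1'] * np.exp(-p['m1'] * t)     # imperfect minor repair
        w2t = p['w2'] * np.exp(-p['m1'] * t)
        zt = p['z'] * np.exp(-p['m2'] * t)       # imperfect major repair
        dP = np.array([
            heal - p['a'] * P1 + p['e'] * P4 + p['q'] * p['h'] * P5
                + p['u1'] * P2d + (1 - p['d3'] - p['d4']) * p['g'] * P6d
                - (p['n'] + p['r']) * P1,
            -p['b'] * P2 + p['a'] * P1 + (1 - p['q']) * p['h'] * P5
                + p['d4'] * p['g'] * P6d - p['v'] * p['u1'] * P2d
                - p['u2'] * P2d - p['w'] * P2d,
            -(p['c'] + p['f']) * P3d
                + p['b'] * (1 - (P1 + P3 + P4 + P5 + P6 + P7)) + p['n'] * P1,
            -p['e'] * P4 + p['d1'] * p['f'] * P3d - (w1t + w2t) * P4,
            -p['h'] * P5 + p['d2'] * p['f'] * P3d + p['u2'] * P2d
                - p['z'] * P5 + w1t * P4 - zt * P5,
            -p['g'] * P6d + (1 - p['d1'] - p['d2']) * p['f'] * P3d
                + p['z'] * P5d + w2t * P4,
            p['c'] * P3d + p['d3'] * p['g'] * P6d + p['w'] * P2d
                + p['r'] * P1 + zt * P5,
        ])
        P[i + 1] = P[i] + dt * dP
    t_grid = np.linspace(0.0, t_end, n_steps)
    R = P[:, 0] + P[:, 1]        # reliability R(t) = P1(t) + P2(t)
    A = P[:, :6].sum(axis=1)     # availability A(t)
    return t_grid, P, R, A
```

With a chosen parameter dictionary, delay set, and initial condition, the returned arrays can be plotted to explore the qualitative behavior reported in Section 3 (an initial transient followed by stabilization of the state probabilities).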
It should be noted that when the model structure, as shown in Figure 1, does not take into account time delays, random self-repair mechanisms, undetected system degradation, and random operating environments, the modeling results may align with some existing studies in the literature [16].

3. Numerical Results

In this section, we present numerical results for the model based on Equations (6)–(14). Although estimating all 25 parameter values in the model can be challenging, practitioners and analysts can derive reliable estimates based on their knowledge of a given application and use them as the model’s parameter values. Our goal is to illustrate the proposed model for a degraded system subject to time delays, random self-healing, undetected degradation, and random operating environments. To demonstrate the model results, we assume that the parameter values are as given in Table 1. Researchers can adapt the proposed model to their specific application by using parameter values relevant to their own research.
In this analysis, we will examine the numerical results for cases with and without time delays, considering various values of the self-healing function. The investigation is based on two scenarios for the initial conditions, detailed as follows:
  • Scenario 1: The set of initial probability conditions (i.e., at time t = 0) is as follows: P1(0) = 1.0; P2(0) = 0.0; P3(0) = P4(0) = P5(0) = P6(0) = P7(0) = 0.0. This implies that the system is certain to be operating normally at the start.
  • Scenario 2: The set of initial probability conditions (i.e., at time t = 0) is: P1(0) = 0.9; P2(0) = 0.1; P3(0) = P4(0) = P5(0) = P6(0) = P7(0) = 0.0. In Scenario 2, the system initially operates as intended, but there is a 10% chance that it is already in an undetected degraded state.
It is important to note that the modeling approach does not depend on these particular initial values; analysts can assign any valid initial conditions when numerically solving the system of differential equations. We will now present various cases with different parameter values for the self-healing factors, such as k and v, along with different sets of time delays.
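For concreteness, the two initial-condition scenarios above can be encoded as probability vectors. This is a trivial sketch for illustration; the state ordering [P1, …, P7] is an assumption of this snippet:

```python
import numpy as np

# State order: [P1, P2, P3, P4, P5, P6, P7].
scenario_1 = np.array([1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])  # surely normal at t = 0
scenario_2 = np.array([0.9, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0])  # 10% chance undetected degraded

# Valid initial conditions for the probability model must sum to one.
assert np.isclose(scenario_1.sum(), 1.0) and np.isclose(scenario_2.sum(), 1.0)
```

Either vector can be passed as the initial condition when numerically solving the system of differential equations.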
  • Case 1: System with no time delay for various values of k (k = 0.0; 0.00008; 0.0001) and the initial condition P1(0) = 1
Figure 2 illustrates the probability results for Case 1, where the system initially operates in the normal state, that is, P1(0) = 1.0 and k = 0. We observe that the probabilities of the system being in state S2 (i.e., the undetected degraded state) and in state S3 (i.e., the detected degraded state) both increase significantly at the beginning, stabilizing around the 40th time unit. Concurrently, the reliability of the system decreases notably at first, leveling off around the 175th time unit. As expected, the probability of system failure increases linearly, as depicted in Figure 2.
Figure 3, Figure 4, Figure 5, Figure 6 and Figure 7 present the probability results for various states with different values of k, specifically k = 0, 0.00008, and 0.0001. In the absence of self-healing (k = 0), the system lacks the capability to recover autonomously. When k > 0, a self-healing resource becomes available to support system operation; this resource may include inspections or online support to ensure continuous system functionality. The results in Figure 3, Figure 4, Figure 5, Figure 6 and Figure 7 show minimal variation because the chosen values of k are very small, nearly zero. More pronounced effects are expected for larger values of k.
  • Case 2: System with no time delay for various values of k (k = 0.0; 0.00008; 0.0001), P1(0) = 0.9, and P2(0) = 0.1
Figure 8 illustrates the probability results for Case 2, where the system initially operates in the normal state but may also be in the undetected degraded state, with P1(0) = 0.90, P2(0) = 0.10, and k = 0. We observe that the probability of the system being in state S2 (i.e., the undetected degraded state) notably decreases initially, stabilizing around the 60th time unit. This contrasts with Case 1, the only difference being the initial conditions. In state S3 (i.e., the detected degraded state), the probability increases significantly at the beginning until around the 20th time unit and then starts to decrease, stabilizing around the 90th time unit.
The reliability of the system experiences a notable, initial decrease, leveling off around the 225th time unit. As expected, the probability of system failure increases linearly, as depicted in Figure 8.
Figure 9, Figure 10, Figure 11, Figure 12 and Figure 13 present the probability results for various states with different values of k, specifically k = 0.0, 0.00008, and 0.0001. It is worth noting that there is a slight variation in the probability of the operating state, P1, between k = 0 and k = 0.0001, as shown in Figure 9. This suggests that more pronounced effects are expected for larger values of k.
  • Case 3: System with various time delays with P1(0) = 1.0, P2(0) = 0.0, and k = 0
Figure 14, Figure 15, Figure 16, Figure 17 and Figure 18 present the probability results for various states with three sets of time delays when k = 0. The three sets of time delays are: (0,0,0,0), (10,10,10,15), and (20,20,20,30). The time delay set (10,10,10,15) is denoted as τ1 = 10, τ2 = 10, τ3 = 10, and τ4 = 15. The set (0,0,0,0) is equivalent to no time delay. To simplify, we refer to the time delay sets (0,0,0,0), (10,10,10,15), and (20,20,20,30) as TD set 1, TD set 2, and TD set 3, respectively.
We observe that the results are consistent, except for the probability of the detected degraded state P3, as shown in Figure 16, which differs between TD set 1 and set 2. Additionally, based on Figure 14, Figure 15, Figure 16, Figure 17 and Figure 18, there is significant variation in the probability of each state for TD set 3 compared to both TD set 1 and set 2. In general, we anticipate significant, impactful results as the time delays increase.
  • Case 4: System with various time delays with P1(0) = 1.0, P2(0) = 0.0, and k = 0.0001
Figure 19, Figure 20, Figure 21, Figure 22 and Figure 23 present the probability results for various states with three sets of time delays when k = 0.0001. We observe consistent results across various parameters, with the exception of the probability of the detected degraded state, P3, as illustrated in Figure 21. Notably, this probability (P3) varies between TD set 1 and set 2. Based on Figure 19, Figure 20, Figure 21, Figure 22 and Figure 23, there is significant variation in the probability of each state for TD set 3 compared to both TD set 1 and set 2. In general, we anticipate significant, impactful results with increasing time delays. The findings of Case 4 align with those of Case 3.
  • Case 5: same as Case 4 except P1(0) = 0.9, P2(0) = 0.1; k = 0.0001
Figure 24 illustrates the probability results for Case 5 with TD set 3 (20,20,20,30), where the initial conditions are P1(0) = 0.9, P2(0) = 0.1, and k = 0.0001. Figure 25, Figure 26, Figure 27, Figure 28 and Figure 29 present the probability results for various states, with three sets of time delays when k = 0.0001 and with the initial conditions P1(0) = 0.9 and P2(0) = 0.1. We observe consistent results across various parameters, except for the probability of the detected degraded state, P3, as illustrated in Figure 27. Notably, this probability (P3) varies between TD set 1 and set 2. Based on Figure 25, Figure 26, Figure 27, Figure 28 and Figure 29, there is significant variation in the probability of each state for TD set 3 compared to both TD set 1 and set 2. In general, we anticipate significant, impactful results with increasing time delays. The findings for Case 5 align with those of Cases 3 and 4.

4. Conclusions and Remarks

Detecting when a system enters a degraded state is crucial, as undetected degradation can lead to failures with significant consequences, including risks not only to the system but also to its environment, as well as business losses. This paper presents what the author believes to be the first reliability model that considers this important aspect of systems undergoing degradation, accounting for time delays associated with undetected degraded states, self-repair mechanisms, and varying operating environments. The system is designed to self-repair or recover from temporary faults, restoring it to a normal condition. We also provide numerical results to illustrate the proposed model with various sets of time delays and parameter values. Notably, reliability modeling reveals significant impacts as time delays increase, which is crucial for accurately predicting the reliability of new industrial systems or products and avoiding overestimation of their reliability.
Future research should focus on determining a mathematical upper bound for the probability of undetected degraded states, which will aid in setting appropriate targets for these probabilities. Additionally, further exploration using real data from actual industrial systems is essential to fully account for time delays, undetected degraded states, self-healing and self-repair mechanisms, and operating environments. Although this paper contributes to reliability modeling, the author recognizes that selecting all parameter values in the system structure for model assessment can be challenging in practice, as mentioned in Section 3. In addition, while system analysts can derive reliable estimates based on their knowledge of specific applications, future research should explore methods such as failure mode and effects analysis (FMEA) and the use of machine learning models to address these practical challenges more effectively.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study.

Conflicts of Interest

The author declares no conflicts of interest.

Figure 1. The proposed model configuration, where m(t) = e^(m·t^5), w1(t) = w1·e^(m1·t), w2(t) = w2·e^(m1·t), and z(t) = z·e^(m2·t).
Figure 2. The reliability, availability, and probability of each state for Case 1, where initially the system is in the normal operating state, i.e., P1(0) = 1.0, with k = 0.
Figure 3. The probability of the operating state P1(t) for various values of k.
Figure 4. The probability of state P2(t) for various values of k.
Figure 5. The probability of state P3(t) for various values of k.
Figure 6. The reliability of the system, R(t), for various values of k.
Figure 7. The availability of the system, A(t), for various values of k.
Figure 8. The reliability, availability, and probability of each state for Case 2, where initially P1(0) = 0.9, P2(0) = 0.1, and k = 0.
Figure 9. The probability of the operating state P1(t) for various values of k.
Figure 10. The probability of state P2(t) for various values of k.
Figure 11. The probability of state P3(t) for various values of k.
Figure 12. The reliability of the system, R(t), for various values of k.
Figure 13. The availability of the system, A(t), for various values of k.
Figure 14. The probability of the operating state P1(t) for various sets of time delays when k = 0.
Figure 15. The probability of state P2(t) for various sets of time delays when k = 0.
Figure 16. The probability of state P3(t) for various sets of time delays when k = 0.
Figure 17. The reliability of the system, R(t), for various sets of time delays when k = 0.
Figure 18. The availability of the system, A(t), for various sets of time delays when k = 0.
Figure 19. The probability of the operating state P1(t) for various sets of time delays when k = 0.0001.
Figure 20. The probability of state P2(t) for various sets of time delays when k = 0.0001.
Figure 21. The probability of state P3(t) for various sets of time delays when k = 0.0001.
Figure 22. The reliability of the system, R(t), for various sets of time delays when k = 0.0001.
Figure 23. The availability of the system, A(t), for various sets of time delays when k = 0.0001.
Figure 24. The reliability, availability, and probability of each state for Case 5 with TD set 3 (20, 20, 20, 30), where initially P1(0) = 0.9, P2(0) = 0.1, and k = 0.0001.
Figure 25. The probability of the operating state P1(t) for various sets of time delays when k = 0.0001.
Figure 26. The probability of state P2(t) for various sets of time delays when k = 0.0001.
Figure 27. The probability of state P3(t) for various sets of time delays when k = 0.0001.
Figure 28. The reliability of the system, R(t), for various sets of time delays when k = 0.0001.
Figure 29. The availability of the system, A(t), for various sets of time delays when k = 0.0001.
Table 1. Model parameter values (units: per day).
a = 0.025, b = 0.045, c = 0.00005, d1 = 0.45, d2 = 0.5, d3 = 0.02, d4 = 0.003, e = 0.08, f = 0.085
g = 0.03, h = 0.035, k = 0.005, m = 0.00001, m1 = 0.00002, m2 = 0.00004, n = 0.0025, q = 0.075
r = 0.00002, u1 = 0.008, u2 = 0.002, v = 0.001, w = 0.0007, w1 = 0.0005, w2 = 0.00008, z = 0.004
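As a small sketch, the time-varying rate functions given in the Figure 1 caption can be evaluated directly with the Table 1 values. The exponent in m(t) is reproduced here as printed in the caption, so that reading (and this whole snippet) should be treated as an assumption rather than the paper's verified formulation:

```python
import math

# Table 1 values (per day) appearing in the Figure 1 rate functions.
m, m1, m2 = 0.00001, 0.00002, 0.00004
w1, w2, z = 0.0005, 0.00008, 0.004

def m_t(t):
    # As printed in the Figure 1 caption: m(t) = e^(m*t^5).
    # Note this term grows very quickly with t under this reading.
    return math.exp(m * t**5)

def w1_t(t):
    return w1 * math.exp(m1 * t)  # w1(t) = w1*e^(m1*t)

def w2_t(t):
    return w2 * math.exp(m1 * t)  # w2(t) = w2*e^(m1*t)

def z_t(t):
    return z * math.exp(m2 * t)   # z(t) = z*e^(m2*t)
```

At t = 0 each function reduces to its base parameter (with m(0) = 1), and all four rates increase with t, consistent with degradation accelerating over time.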
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
