Article

Predictive Resilience Modeling Using Statistical Regression Methods

1 Department of Electrical and Computer Engineering, University of Massachusetts Dartmouth, Dartmouth, MA 02747, USA
2 Department of Industrial, Manufacturing and Systems Engineering, The University of Texas at El Paso, El Paso, TX 79968, USA
3 Aerojet Rocketdyne, Jupiter, FL 33478, USA
4 Faculty of Business and Law, Deakin University, Burwood, VIC 3125, Australia
5 Engineer Research and Development Center, U.S. Army Corps of Engineers, Concord, MA 01742, USA
* Authors to whom correspondence should be addressed.
Mathematics 2024, 12(15), 2380; https://doi.org/10.3390/math12152380
Submission received: 1 July 2024 / Revised: 23 July 2024 / Accepted: 24 July 2024 / Published: 31 July 2024

Abstract:
Resilience describes the capacity of systems to react to, withstand, adjust to, and recover from disruptive events. Despite numerous metrics proposed to quantify resilience, few studies predict these metrics or the restoration time to nominal performance levels, and these studies often focus on a single domain. This paper introduces three methods to model system performance and resilience metrics, which are applicable to various engineering and social science domains. These models utilize reliability engineering techniques, including bathtub-shaped functions, mixture distributions, and regression analysis incorporating event intensity covariates. Historical U.S. job loss data during recessions are used to evaluate these approaches’ predictive accuracy. This study computes goodness-of-fit measures, confidence intervals, and resilience metrics. The results show that bathtub-shaped functions and mixture distributions accurately predict curves possessing V, U, L, and J shapes but struggle with W and K shapes involving multiple disruptions or sudden performance drops. In contrast, covariate-based models effectively track all curve types, including complex W and K shapes, like the successive shocks in the 1980 U.S. recession and the sharp decline in the 2020 U.S. recession. These models achieve a high predictive accuracy for future performance and resilience metrics, evidenced by low sums of squared errors and high adjusted coefficients of determination.

1. Introduction

Resilience engineering [1,2] may be regarded as a generalization of the theory of repairable systems [3] from reliability engineering. In this context, system performance can deteriorate due to aging or external shocks, but there are also proactive efforts to preserve performance at a level comparable to its optimal operating state. Resilience engineering finds applications across various domains, including economics [4,5,6], the social sciences [7,8,9,10,11,12,13], and engineering [14,15,16,17,18]. Consequently, researchers have proposed numerous metrics [19] to measure system resilience. However, these metrics are typically applied retrospectively, after recovery from a disruption. While this approach enables the assessment of system performance under stresses and shocks and can guide future decisions, it lacks the ability to predict when the system will return to a specified performance level or how quickly and cost-effectively it will achieve a target performance level. Without predictive models, policymakers and emergency management organizations may struggle to respond optimally to disruptive scenarios impacting thousands to billions of lives.
The majority of studies on resilience engineering include resilience metrics [20,21,22,23], which enable quantitative analyses of system performance preserved or lost after a single disruptive event. Fewer studies explore the effects of disruptions [24,25] on the performance of the overall business process in order to identify effective methods to mitigate failures, or employ stochastic techniques such as fuzzy methods [26,27], Markov processes [28,29], Bayesian networks [30], Petri nets [31,32], vector auto-regression [33], and machine learning [34] to model the performance of systems that are subject to multiple shocks. Another line of research [35,36,37,38,39,40,41] optimally allocates internal components to increase performance or enhance maintainability. However, few of these past studies provide the means to predict the time required for a system to be restored to the desired level of performance, and most do not explicitly consider the external activities that drive changes in performance.
To address this gap in past research, this paper explores three alternative methods to characterize system performance and predict various resilience metrics using techniques from reliability engineering. Figure 1 presents the research steps. First, datasets on system performance are collected. Second, resilience models based on (i) bathtub-shaped hazard functions [42], (ii) mixture distributions, and (iii) regression models that incorporate covariates to characterize deterioration and recovery are built. Third, to validate the performance of each model on the datasets considered, we calculate goodness-of-fit measures and interval-based resilience metrics. Fourth, these results are then compared to select the best-performing model. Lastly, the selected model is used to predict system performance and consequently resilience metrics, which subsequently guide decision-making for enhancing system resilience. For illustration, we evaluate these alternative approaches using widely accessible historical data on seven U.S. recessions to predict changes in the nonfarm payroll employment index during economic crises. Our findings suggest that the bathtub-shaped hazard function and mixture distributions perform effectively on data sets displaying V, U, L, and J-shaped curves. However, these models fail to characterize the W and K-shaped curves that often appear in various domains, particularly in economic systems, which deviate from the assumption of a single decline followed by a recovery. This limitation arises because the models do not explicitly account for the underlying factors driving degradation and recovery. In contrast, models incorporating explanatory covariates can capture curves of any shape, including those with multiple shocks, sharp drops, and slow recovery, which the other models struggle to fit. These covariate models offer a superior goodness-of-fit and predictive accuracy for resilience metrics, providing a versatile framework for data collection and predictive modeling across various resilience curves.
This paper is structured as follows: Section 2 reviews relevant research on resilience engineering. Section 3 develops three resilience models. Section 4 presents model fitting and validation techniques. Section 5 describes interval-based metrics to quantify resilience. Section 6 illustrates the proposed approaches to model resilience. Section 7 provides conclusions and future research ideas.

2. Related Research

This section reviews relevant research on resilience metrics to quantify system performance subject to single and multiple disruptive events as well as optimal allocation strategies to enhance system resilience.
Research on quantitative resilience metrics includes Hosseini et al. [19], who reviewed resilience definitions across domains, classifying methods into qualitative and quantitative, with quantitative methods further divided into generic metrics and structural modeling. Cheng et al. [20] summarized methods for assessing system resilience, including interval-based, point-based, and probabilistic metrics. Bruneau and Reinhorn [21] quantified resilience as the area under the performance curve, while Ouyang and Dueñas-Osorio [22] defined this area as a ratio relative to a minimum level of performance. Yang and Frangopol [43] measured performance loss due to hazards, and Zhou et al. [44] quantified resilience as a performance loss ratio relative to a performance target. Zobel [45] focused on maintained performance from the lowest point to recovery. Reed et al. [46] focused on the concept of average maintained performance, and Cimellaro [47] emphasized pre- and post-critical condition performance using weighted areas under the curve.
Fewer studies have proposed methods that are specific to the resilience assessment of systems subjected to multiple hazards. For example, Copeland and Hall [24] demonstrated how automobile industry production responds to demand shocks using differential equations that capture the dynamics of automakers’ prices, sales, and inventories. Dhulipala and Flint [28] modeled system recovery as a semi-Markov process, where each new hazard is considered to be a new process and is connected to prior events through inter-event dependency. Zeng et al. [29] modeled multistate energy systems using a continuous-time discrete-state Markov chain via a Markov reward process, where losses incurred by extreme events are characterized by the reward rates associated with the sojourns in degraded states and the transitions among the states. Yin et al. [30] presented three resilience models based on Bayesian networks, including one incorporating data and expert knowledge in order to identify relationships between different perturbing events. Yan et al. [32] modeled nuclear power plant resilience relevant to safety using Petri nets, including the system’s state of health as well as mitigation, recovery, and maintenance processes. Grant and Yung [33] analyzed the network robustness and vulnerability of a global firm by applying a vector auto-regressive model that considers various factors related to the firm’s health, such as equity returns, credit default swap spreads, and profit growth. Han and Goetz [48] estimated growth in performance as a linear function of policies adopted to overcome degradation. Lopez et al. [27] defined personalized fuzzy numbers to integrate uncertainty and imprecision while modeling the most important performance indicators in supply chain resilience. Garbero and Letta [34] explored machine learning algorithms to predict household resilience based on factors such as military conflict, natural disasters, or economic crises. However, one challenge associated with many of these studies is the lack of data or of simple data collection requirements. Moreover, failure and recovery times are often assumed to be exponentially distributed, where events occur independently at a constant average rate, yet this assumption rarely holds in practice because economic systems are subject to complex human behavior, and the failure rate of physical systems tends to increase as time passes and wearout occurs.
Another common theme in the resilience literature is the optimal allocation of components and/or plans to mitigate failures in infrastructure systems in order to increase performance after extreme events. For example, Schryen et al. [35] applied heuristic algorithms to identify an optimal schedule that assigns rescue units to incidents during the response phase of disasters. Zou and Chen [36] proposed a simulation approach that optimizes maintenance and repair schedules to maximize the anticipated resilience enhancement of a power system. Najarian and Lim [37] formulated an optimal allocation of supplies to enhance infrastructure resilience, considering a mix of linear curves for single degradation and recovery events. Pan et al. [38] developed a resilience model to optimize the recovery of a transportation network, including the recovery sequence and resource allocation. Yang et al. [39] presented a hybrid resilience metric for simple and complex equipment considering the key elements of reliability engineering such as failure rate, mean time to repair (MTTR), mean time between failures (MTBF), and maintainability. Jalilpoor et al. [40] developed a microgrid optimal allocation algorithm based on an attacker–defender model to improve the infrastructural resilience of power systems. Masruroh et al. [41] adjusted production and distribution based on the available capacity of supply chains and decided whether unfulfilled demand becomes backorders or lost sales. While there are several studies on optimal effort allocation to guide rapid and efficient increases in system performance, they are typically restricted to the reliability of internal components and to redundancy allocation or maintenance schedules, not to the external activities that degrade or enhance system performance in a manner characteristic of reliability deterioration and improvement.
In contrast to past research, this paper applies statistical techniques to develop resilience models possessing predictive capabilities that can (i) predict the time at which a system will recover to a specified level of performance as well as interval-based resilience metrics regardless of the application domain, (ii) guide data collection on relevant external hazards and activities that impact system performance to enhance predictions, (iii) accommodate both single or multiple shocks without further modification of the mathematical formulation, and (iv) benefit from but not require detailed information about the system such as state transition rates or probability distributions.

3. Resilience Models

This section presents quantitative methods based on reliability engineering and statistics techniques to model system resilience. While the concept of performance varies by domain, it can be generalized as the extent to which a system or task achieves its goals, measured as a percentage of the baseline performance level under nominal conditions. For example, in the context of economic systems [49], performance may be measured by long-term achievements, such as the sustainable growth and development of firms or countries, as well as short-term results, such as the time it takes to stabilize an economy after unpredictable disruptions. In industrial and manufacturing settings [50,51], resilience is assessed based on the capacity to sustain and swiftly restore production processes, operations, and quality levels after disruptions, including cyber attacks, supply chain interruptions, or equipment failure.
To illustrate the concept of changes in performance, Figure 2 shows notional resilience curves possessing a bathtub shape. The original performance P(t) at time t_h, when a disruption is encountered, is represented by the horizontal line. Performance declines until it reaches a degraded level at time t_d. If the degradation is instantaneous, then t_d = t_h. In some cases, the system is not resilient, resulting in the performance dropping to P(t_d) = 0. Conversely, if it is resilient, system performance increases from t_d until a high-level performance is achieved at time t_r. Figure 2 shows three scenarios: (i) when system performance is degraded (dotted), (ii) when system performance is nominal (solid), and (iii) when system performance is enhanced (dashed). Some systems, such as power generation, may recover only to the maximum nominal performance, while economic and automated systems involving artificial intelligence and machine learning can achieve an improved performance.

3.1. Bathtub-Shaped Functions

Alternative resilience curves similar to the curves depicted in Figure 2 may be specified from methods used in reliability engineering such as bathtub-shaped functions λ ( t ) , defined as follows:
P(t) = \begin{cases} P(t_h), & t < t_h \\ c\,\lambda(t), & t_h \le t < t_r \\ P(t_r), & t \ge t_r \end{cases}
where P(t_h) is the nominal performance before a hazard, P(t_r) is the performance after recovery, and c is a constant to ensure continuity, since in general P(t_h) ≠ λ(t_h). Common bathtub-shaped functions that admit analytical forms of system performance P(t) include the quadratic model:
\lambda(t) = \alpha + \beta t + \gamma t^{2}
when -2(\alpha\gamma)^{1/2} \le \beta < 0 and \alpha, \gamma \ge 0, and the competing risks model:
\lambda(t) = \frac{\alpha}{1 + \beta t} + 2\gamma t
when \alpha, \beta \ge 0 and 0 < \gamma \le \alpha\beta/2.
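As a concrete illustration, the Python sketch below evaluates Equation (1) with the quadratic function of Equation (2). The hazard and recovery times, performance levels, and parameter values are illustrative assumptions chosen to satisfy the constraints above, not estimates from any of the datasets discussed later.

```python
import numpy as np

def quadratic_hazard(t, alpha, beta, gamma):
    """Bathtub-shaped quadratic function of Equation (2)."""
    return alpha + beta * t + gamma * t ** 2

def resilience_curve(t, t_h, t_r, P_h, P_r, alpha, beta, gamma):
    """Piecewise performance P(t) of Equation (1); c rescales the hazard so the
    curve is continuous at the hazard time t_h."""
    lam = quadratic_hazard(t, alpha, beta, gamma)
    c = P_h / quadratic_hazard(t_h, alpha, beta, gamma)
    return np.where(t < t_h, P_h, np.where(t < t_r, c * lam, P_r))

t = np.arange(0, 48)  # months
# alpha, gamma >= 0 and -2*sqrt(alpha*gamma) <= beta < 0, as required above
P = resilience_curve(t, t_h=6, t_r=34, P_h=100.0, P_r=100.0,
                     alpha=1.0, beta=-0.08, gamma=0.002)
print(P.round(2))
```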

3.2. Mixture Distributions

Alternative resilience curves may be written as mixture distributions following:
P(t) = a_1(t)\,(1 - F_1(t)) + a_2(t)\,F_2(t)
where a_1(t) and a_2(t) are the transition from degradation and the transition to recovery, respectively, with a_1(t) → 1 as t → 0 and a_1(t) → 0 as t → ∞. The terms (1 − F_1(t)) and F_2(t) represent the processes of degradation and recovery, respectively, where F_1(t) and F_2(t) are cumulative distribution functions.
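The following minimal sketch evaluates Equation (3), assuming a constant degradation transition a_1(t) = 1, a logarithmic recovery transition a_2(t) = β ln(t), and Weibull forms for F_1 and F_2 (choices that anticipate Section 6.2). All parameter values are illustrative placeholders.

```python
import numpy as np

def weibull_cdf(t, lam, k):
    return 1.0 - np.exp(-(t / lam) ** k)

def mixture_performance(t, beta, lam1, k1, lam2, k2):
    """P(t) = a1(t)*(1 - F1(t)) + a2(t)*F2(t), Equation (3)."""
    a1 = 1.0                       # constant transition from degradation
    a2 = beta * np.log(t)          # increasing transition to recovery
    return a1 * (1.0 - weibull_cdf(t, lam1, k1)) + a2 * weibull_cdf(t, lam2, k2)

t = np.arange(1, 49)               # months (start at 1 so ln(t) is defined)
P = mixture_performance(t, beta=0.30, lam1=12.0, k1=2.0, lam2=24.0, k2=3.0)
print(P.round(3))                  # normalized performance, 1.0 = pre-hazard level
```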

3.3. Covariates

A resilience curve can also be expressed in discrete intervals as follows:
P(i) = P(i − 1) + ΔP(i)
where P(i) and P(i − 1) are the current and past performance, respectively, and ΔP(i) represents the change in performance. Alternative regression methods [52] incorporating covariates that influence the deterioration and recovery phases are used to model ΔP(i). These methods include (i) multiple linear regression, (ii) multiple linear regression with interactions, and (iii) polynomial regression, assuming normally distributed residuals. The normality assumption ensures that the model findings are reliable and can be generalized from the sample data to an overall population with confidence. It can be verified in three ways: (i) the histogram of the residuals is roughly bell-shaped, (ii) the quantile–quantile plot (QQ plot) falls along a roughly straight line at a 45-degree angle, and (iii) scatter plots of the residuals against time and the fitted values show no trend and a constant variance, supporting the independence and linearity assumptions. In cases where the normality assumption is violated, strategies to address the violation can be applied, such as transforming the data to stabilize and normalize the variability of the residuals, or repeating the analysis without outliers, if any are detected, to assess their influence on the model.
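The three visual checks can be produced with a few lines of Python, as sketched below; the residuals here are synthetic placeholders standing in for the residuals of a fitted model.

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
residuals = rng.normal(loc=0.0, scale=1.0, size=48)    # placeholder residuals

fig, axes = plt.subplots(1, 3, figsize=(12, 3.5))

axes[0].hist(residuals, bins=10)                        # (i) roughly bell-shaped?
axes[0].set_title("Histogram of residuals")

stats.probplot(residuals, dist="norm", plot=axes[1])    # (ii) QQ plot ~ straight line?
axes[1].set_title("QQ plot")

axes[2].scatter(np.arange(len(residuals)), residuals)   # (iii) constant variance over time?
axes[2].axhline(0.0, linestyle="--")
axes[2].set_title("Residuals vs. time")

plt.tight_layout()
plt.show()
```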

3.3.1. Multiple Linear Regression (MLR)

Multiple linear regression characterizes performance changes as follows:
\Delta \hat{P}(i) = \beta_0 + \sum_{j=1}^{m} \beta_j X_j(i)
where β_0 represents an initial change in system performance, X_1(i), X_2(i), …, X_m(i) are the m detrimental or restorative covariates driving changes in performance, and β_1, β_2, …, β_m are coefficients quantifying the impact of these hazards and activities. Thus, coefficients associated with hazards (restoration activities) are likely to possess a negative (positive) value, indicating that their presence leads to decreased (increased) performance. Alternative forms of regression include interaction and polynomial terms.
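A minimal sketch of Equation (5) follows, with the coefficients estimated by ordinary least squares; the covariate matrix, true coefficients, and noise level are synthetic assumptions used only for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 48, 3
X = rng.random((n, m))                          # detrimental/restorative covariates
dP = 0.5 - 2.0 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(0, 0.1, n)

design = np.column_stack([np.ones(n), X])       # prepend intercept column
beta_hat, *_ = np.linalg.lstsq(design, dP, rcond=None)
dP_hat = design @ beta_hat                      # predicted changes, Equation (5)
print("coefficients:", beta_hat.round(3))
```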

3.3.2. Multiple Linear Regression with Interaction

Multiple linear regression with interaction models performance changes as follows:
\Delta \hat{P}(i) = \beta_0 + \sum_{j=1}^{m} \beta_j X_j(i) + \sum_{j=1}^{m} \sum_{k=j+1}^{m} \beta_{j(m+k)} X_j(i)\, X_k(i)
where β j ( m + k ) is the coefficient characterizing the interaction between the two covariates.

3.3.3. Polynomial Regression

Polynomial regression predicts performance changes as follows:
\Delta \hat{P}(i) = \beta_0 + \sum_{j=1}^{\rho} \sum_{k=1}^{m} \beta_{j(j+k)} X_k(i)^{j}
where ρ is the maximum degree of the polynomial and β_{j(j+k)} is the coefficient characterizing the impact of the hazard or effort on performance.
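The design matrices for Equations (6) and (7) can be assembled from the same covariate matrix used in the sketch above; the column ordering below is a convenience, not a convention imposed by the models.

```python
import numpy as np
from itertools import combinations

def design_with_interactions(X):
    """Intercept, main effects, and all pairwise products X_j * X_k (Equation (6))."""
    cols = [np.ones(len(X))] + [X[:, j] for j in range(X.shape[1])]
    cols += [X[:, j] * X[:, k] for j, k in combinations(range(X.shape[1]), 2)]
    return np.column_stack(cols)

def design_polynomial(X, degree):
    """Intercept plus X_k, X_k^2, ..., X_k^degree for each covariate (Equation (7))."""
    cols = [np.ones(len(X))]
    for j in range(1, degree + 1):
        cols += [X[:, k] ** j for k in range(X.shape[1])]
    return np.column_stack(cols)

X = np.random.default_rng(2).random((48, 3))
print(design_with_interactions(X).shape)      # (48, 7): intercept, 3 mains, 3 pairs
print(design_polynomial(X, degree=2).shape)   # (48, 7): intercept, 3 linear, 3 quadratic
```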

4. Model Fitting, Validation, and Inference

This section outlines methods for fitting the proposed models, validating them, and making inferences from them.

4.1. Model-Fitting Techniques

Least squares estimation (LSE) and maximum likelihood estimation (MLE) are common methods [53] to estimate model parameters. LSE fits a curve by minimizing the discrepancy between the observed and predicted values, while MLE identifies the parameter values that are most likely to have generated the observed data, explicitly accounting for the uncertainty in the random variables.
Least squares estimation [54] minimizes the discrepancy between the actual and predicted performance data to estimate the parameter values of a model as follows:
\min \sum_{i=1}^{n} \left( R(t_i) - P(t_i) \right)^{2}
where n data samples are used to fit the model, and R ( t i ) and P ( t i ) are the actual and predicted performance at time t i .
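A hedged sketch of Equation (8) applied to the competing-risk form of Section 3.1 is given below, with the constant c folded into the fitted parameters to keep them identifiable; the observed series R(t_i) is synthetic, and the starting values are arbitrary.

```python
import numpy as np
from scipy.optimize import curve_fit

def competing_risk_perf(t, A, beta, G):
    """c*lambda(t) with A = c*alpha and G = 2*c*gamma folded together."""
    return A / (1.0 + beta * t) + G * t

t = np.arange(1, 49, dtype=float)                        # months
rng = np.random.default_rng(3)
R_obs = competing_risk_perf(t, 80.0, 0.2, 0.8) + rng.normal(0, 1.0, t.size)

popt, _ = curve_fit(competing_risk_perf, t, R_obs,
                    p0=[50.0, 0.1, 1.0], bounds=(0.0, np.inf))
sse = np.sum((R_obs - competing_risk_perf(t, *popt)) ** 2)   # objective of Equation (8)
print("estimates:", popt.round(3), " SSE:", round(sse, 3))
```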
To apply maximum likelihood estimation, the performance changes ΔP(t_i) between intervals (i − 1) and (i) are assumed to be normally distributed with variance σ² as follows:
f(\Delta P; \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left( -\frac{1}{2\sigma^2} \left( \Delta P(i) - \mu \right)^{2} \right)
where μ is the mean change in performance. For example, substituting Equation (5) into Equation (9) produces a resilience curve that is characterized by multiple linear regression:
f(\Delta P; \beta_0, \beta_j, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left( -\frac{1}{2\sigma^2} \left( \Delta P(i) - \beta_0 - \sum_{j=1}^{m} \beta_j X_j(i) \right)^{2} \right)
where the ΔP(i), 1 ≤ i ≤ n, are mutually independent, and the likelihood function of ΔP(i) is as follows:
L(\Delta P; \beta_0, \beta_j, \sigma^2) = \prod_{i=1}^{n} f(\Delta P(i); \beta_0, \beta_j, \sigma^2) = \frac{1}{(2\pi\sigma^2)^{n/2}} \exp\left( -\frac{1}{2\sigma^2} \sum_{i=1}^{n} \left( \Delta P(i) - \beta_0 - \sum_{j=1}^{m} \beta_j X_j(i) \right)^{2} \right)
Then, MLE estimates the parameters by taking the natural logarithm of Equation (11), computing partial derivatives with respect to each model parameter, and equating to zero:
\frac{\partial}{\partial \theta} \ln L(\Delta P; \theta) = 0
where θ = {β_0, β_1, …, β_m, σ²}. Solving this system of equations produces the maximum likelihood estimates θ̂.
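The sketch below carries out Equations (9)–(12) numerically for the multiple linear regression case by minimizing the negative log-likelihood; the data are synthetic, and the log-sigma parameterization is simply a convenience to keep the variance positive. For normally distributed residuals, the resulting coefficient estimates coincide with those from least squares.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n, m = 48, 2
X = np.column_stack([np.ones(n), rng.random((n, m))])
dP = X @ np.array([0.4, -1.8, 1.2]) + rng.normal(0, 0.2, n)

def neg_log_likelihood(theta):
    beta, log_sigma = theta[:-1], theta[-1]        # log-sigma keeps sigma > 0
    sigma2 = np.exp(2.0 * log_sigma)
    resid = dP - X @ beta
    return 0.5 * n * np.log(2.0 * np.pi * sigma2) + np.sum(resid ** 2) / (2.0 * sigma2)

theta0 = np.zeros(m + 2)                           # [beta_0, beta_1, beta_2, log_sigma]
fit = minimize(neg_log_likelihood, theta0, method="Nelder-Mead")
print("MLEs:", fit.x.round(3))
```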

4.2. Goodness-of-Fit Measures

Goodness-of-fit measures offer an objective and quantitative way to evaluate and compare different models on a specific data set. In practice, no model excels in all measures, but those having lower errors are preferred. Thus, choosing a model is ultimately a subjective decision made by the decision-maker, who must balance model complexity with predictive accuracy. Table 1 summarizes commonly used goodness-of-fit measures, including the root mean squared error (RMSE), predictive root mean squared error (PRMSE), and adjusted coefficient of determination (r_adj^2). Here, n is the number of observations used for model fitting, ν is the number of observations reserved for testing the model’s ability to predict unseen data, R(i) − P(i) denotes the residual between an observation and the model prediction, R̄ is the mean of the actual data over all intervals considered, and n − p is the degrees of freedom, representing the information that is free to vary after estimating the p model parameters. RMSE measures the magnitude of the residuals during the model-fitting process to assess model accuracy. PRMSE is similar to RMSE, but it specifically evaluates the model performance on unseen data. Lower values of RMSE and PRMSE are preferred. Lastly, r_adj^2 represents the proportion of variation in the dependent variable explained by the model, penalizing the model when additional parameters are not significant for predictions. An r_adj^2 value approaching 1.0 represents a strong correlation between the actual and predicted data. Conversely, negative or near-zero r_adj^2 values suggest no or a weak linear relationship, often due to poor model predictions.
Table 1. Goodness-of-fit measures.
RMSE = \sqrt{ \frac{1}{n-p} \sum_{i=1}^{n} \left( R(i) - P(i) \right)^{2} }
PRMSE = \sqrt{ \frac{1}{\nu} \sum_{i=n+1}^{n+\nu} \left( R(i) - P(i) \right)^{2} }
r^{2}_{adj} = 1 - \left( 1 - \frac{ \sum_{i=1}^{n} \left( R(i) - \bar{R} \right)^{2} - \sum_{i=1}^{n} \left( R(i) - P(i) \right)^{2} }{ \sum_{i=1}^{n} \left( R(i) - \bar{R} \right)^{2} } \right) \frac{n-1}{n-p-1}
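The Table 1 measures translate directly into code. The sketch below assumes R and P are aligned arrays of actual and predicted performance, with n observations used for fitting, ν held-out observations for prediction, and p estimated parameters; the sample data are synthetic.

```python
import numpy as np

def rmse(R, P, p):
    """Root mean squared error over the n fitted observations."""
    n = len(R)
    return np.sqrt(np.sum((R - P) ** 2) / (n - p))

def prmse(R_future, P_future):
    """Predictive RMSE over the nu observations held out for prediction."""
    return np.sqrt(np.mean((R_future - P_future) ** 2))

def r2_adj(R, P, p):
    """Adjusted coefficient of determination."""
    n = len(R)
    sst = np.sum((R - R.mean()) ** 2)
    sse = np.sum((R - P) ** 2)
    r2 = (sst - sse) / sst
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

R = np.linspace(100.0, 90.0, 20)
P = R + np.random.default_rng(5).normal(0.0, 0.5, 20)
print(round(rmse(R, P, p=3), 3), round(r2_adj(R, P, p=3), 3))
```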

4.3. Confidence Intervals

A confidence interval (CI) [55] represents the uncertainty in the model predictions according to a level specified by the user. A wider interval indicates higher uncertainty, whereas a narrower interval implies greater precision. For bathtub-shaped functions and mixture distribution models, the CI is computed as follows:
CI = \Delta P(t_i) \pm t_{1-\alpha/2,\,(n-p)} \sqrt{ \hat{\sigma}^{2} }
where
\hat{\sigma}^{2} = \frac{1}{n-p} \sum_{i=1}^{n} \left( R(t_i) - P(t_i) \right)^{2}
is the estimated variance of the residuals between the observations and the model predictions, with n − p degrees of freedom, and t_{1−α/2, (n−p)} is the t-distribution critical value at a user-specified significance level α. As the sample size increases, the t-distribution converges toward the z-distribution. For covariate models, the CI is expressed as follows:
CI = \Delta \hat{P}(i) \pm t_{1-\alpha/2,\,(n-p)} \sqrt{ \frac{ \sum_{i=1}^{n} \left( R(i) - P(i) \right)^{2} }{ n - p } \left( 1 + X_i^{T} \left( X^{T} X \right)^{-1} X_i \right) }
where the square root term represents the residuals of the fit based on the matrix X of dimension n × (m + 1), consisting of n observations and m covariates, with the first column containing ones.
Empirical coverage (EC) is calculated by dividing the number of observations that fall within the CI by the total number of observations (n), resulting in the percentage of observations that fall within these intervals.
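A minimal sketch of the covariate-model interval of Equation (18) together with the empirical coverage computation described above is shown below; the design matrix, coefficients, and noise level are synthetic assumptions, and the interval uses the usual normal-theory prediction-interval form.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n, p = 48, 3                                        # p = number of estimated coefficients
X = np.column_stack([np.ones(n), rng.random((n, p - 1))])
dP = X @ np.array([0.3, -1.5, 1.0]) + rng.normal(0, 0.2, n)

beta_hat, *_ = np.linalg.lstsq(X, dP, rcond=None)
dP_hat = X @ beta_hat
s2 = np.sum((dP - dP_hat) ** 2) / (n - p)           # residual variance estimate
t_crit = stats.t.ppf(0.975, df=n - p)               # 95% two-sided critical value

XtX_inv = np.linalg.inv(X.T @ X)
half_width = t_crit * np.sqrt(s2 * (1.0 + np.einsum("ij,jk,ik->i", X, XtX_inv, X)))
lower, upper = dP_hat - half_width, dP_hat + half_width

coverage = np.mean((dP >= lower) & (dP <= upper))   # empirical coverage
print(f"empirical coverage: {coverage:.2%}")
```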

5. Interval-Based Resilience Metrics

Interval-based resilience metrics [20] measure system performance over a certain time to quantify how resilient a system is from hazard to recovery periods. Table 2 summarizes eight resilience metrics that are commonly applied in the literature, including (i) performance preserved (PP) [21], (ii) average normalized performance preserved (ANPP) [22,56], (iii) performance lost (PL) [43], (iv) average normalized performance lost (ANPL) [44], (v) performance from minimum (PFM) [45], (vi) average performance preserved (APP) [46], (vii) average performance lost (APL) [46], and (viii) weighted average of performance preserved (WAPP) [47] prior and after the lowest performance.
Table 2. Interval-based resilience metrics.
PP = \int_{t_h}^{t_r} P(t)\,dt
ANPP = \frac{ \int_{t_h}^{t_r} P(t)\,dt }{ P(t_h)\,(t_r - t_h) }
PL = P(t_h)\,(t_r - t_h) - \int_{t_h}^{t_r} P(t)\,dt
ANPL = \frac{ \int_{t_h}^{t_r} \left( P(t_h) - P(t) \right) dt }{ P(t_h)\,(t_r - t_h) }
PFM = \int_{t_d}^{t_r} P(t)\,dt - P(t_d)\,(t_r - t_d)
APP = \frac{ \int_{t_h}^{t_r} P(t)\,dt }{ t_r - t_h }
APL = \frac{ P(t_h)\,(t_r - t_h) - \int_{t_h}^{t_r} P(t)\,dt }{ t_r - t_h }
WAPP = \alpha\, \frac{ \int_{t_h}^{t_d} P(t)\,dt }{ t_d - t_h } + (1-\alpha)\, \frac{ \int_{t_d}^{t_r} P(t)\,dt }{ t_r - t_d }
Table 2 shows that the performance preserved ( P P ) calculates the area under the curve between the hazard and recovery times. This can be averaged and normalized as the ratio of actual to optimal system performance ( A N P P ). The performance lost ( P L ) measures the area above the curve between hazard and recovery times, which can also be averaged and normalized as A N P L . The performance from minimum ( P F M ) is defined as the performance retained from the minimum point to recovery, excluding the rectangular area below the minimum performance level. The average performance preserved ( A P P ) and average performance lost ( A P L ) compute the area under and above the curve, respectively, over the duration between hazard and recovery. Lastly, the weighted average performance preserved (WAPP) characterizes the disruption and recovery processes using a weight factor α within ( 0 , 1 ) specified by the user to emphasize either the hazard or recovery processes from the minimum performance level.
Engineers and decision-makers typically use historical data to compute resilience metrics, retrospectively assessing system performance under extreme events to identify weak points for future improvements. By replacing t_h with the first time interval not used in the model fit (t_{n+1}) and using projections such as the predicted minimum performance time (t̂_d) when the actual lowest performance has not been encountered, these metrics can also be applied predictively. This approach provides policymakers and emergency management teams with a framework for early vulnerability detection and insights to proactively plan for resilience investments without waiting for disasters to occur.
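To make the definitions concrete, the sketch below discretizes a few of the Table 2 integrals with the trapezoidal rule on a synthetic V-shaped performance series; the curve and time indices are illustrative only.

```python
import numpy as np

t = np.arange(0, 25, dtype=float)                        # months from hazard to recovery
P = np.concatenate([np.linspace(100, 80, 13), np.linspace(80, 100, 12)])  # V-shaped curve

def trapz(y, x):
    """Simple trapezoidal rule for the integrals in Table 2."""
    return np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(x))

t_h, t_r = t[0], t[-1]
t_d = t[np.argmin(P)]                                    # time of the lowest performance
area = trapz(P, t)                                       # integral of P(t) over [t_h, t_r]

PP = area
ANPP = area / (P[0] * (t_r - t_h))
PL = P[0] * (t_r - t_h) - area
APP = area / (t_r - t_h)
mask = t >= t_d
PFM = trapz(P[mask], t[mask]) - P.min() * (t_r - t_d)
print(dict(PP=round(PP, 1), ANPP=round(ANPP, 3), PL=round(PL, 1),
           APP=round(APP, 2), PFM=round(PFM, 1)))
```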

6. Illustrations

This section illustrates the three proposed modeling approaches to characterize system performance and assess resilience using different examples. Seven U.S. recession datasets documented by the Bureau of Labor Statistics Current Employment Statistics Program [57,58] are shown in Figure 3, including the most recent recession caused by the COVID-19 pandemic. In each example, we evaluate each model’s performance on these datasets by computing goodness-of-fit measures and resilience metrics. More broadly, these modeling efforts aim to enhance the overall theory of predictive resilience engineering across multiple domains.
Each curve depicted in Figure 3 represents nonfarm payroll employment normalized to time step zero, the employment peak before the start of a recession. Economists describe recession curves [59] using letters of the English alphabet, including J, K, L, U, V, and W. Simpler recessions include J-shaped curves with a slow recovery returning to growth, L-shaped trends with a decline followed by prolonged underperformance, U-shaped curves with gradual deterioration and recovery, and V-shaped curves with a sharp, brief decline followed by a strong recovery. More complex recessions feature K-shaped curves, with a sharp drop and divergent recovery paths, and W-shaped curves, with two periods of decline and recovery.

6.1. Example I: Bathtub-Shaped Functions

In this first example, the least squares method (Equation (8)) was applied to estimate the parameters of the quadratic and competing risk bathtub models (Equations (1) and (2)) using 90% of each dataset shown in Figure 3. The remaining 10% of the data not used for model fitting were then predicted by the models, and the RMSE (Equation (13)), PRMSE (Equation (14)), r_adj^2 (Equation (15)), and EC were computed, as shown in Table 3. In each case, the sample size was four years or (n = 48) months, with the exception of the 2020–2022 time series, which contained n = 34 months of data at the time the calculations were performed. Table 3 shows that the competing risk model yielded better results on four out of the seven datasets. Additionally, it was very close across almost all the measures in the cases where the quadratic model achieved the best performance. However, both models performed poorly on the 1980 and 2020–2022 datasets due to their respective W- and K-shaped curves, resulting in undesirably low and negative r_adj^2 values for the quadratic model.
Figure 4 illustrates the U.S. recession data from 2001–2005, along with the fitted quadratic model and the 95% confidence interval (represented by the grey region around the quadratic model fit), calculated using Equation (16). The dashed vertical line at t = 42 indicates that the initial 43 months were used to fit the model and the final 5 months were used to check its predictive capabilities. Two out of the forty-eight observed datapoints fall outside the confidence interval, resulting in a conservative empirical coverage of 95.83%.
Figure 5 presents the 1990–1993 dataset, along with the fitted competing risk model and its confidence interval. Five out of the forty-eight observed data points are outside the confidence interval, achieving a slightly optimistic empirical coverage (EC) of 89.58%. A better model fit and more accurate predictions result in a lower sum of squared errors (SSE) and consequently a smaller variance estimate (Equation (17)). This leads to a narrower confidence interval (CI) (Equation (16)), indicating more precise predictions for that model. However, narrower confidence intervals tend to have lower empirical coverage, as more datapoints fall outside the confidence limits.
In addition to validating the ability of the bathtub-shaped function to fit the resilience curves using statistical methods, the interval-based metrics described in Section 5 are also predicted. These metrics include the performance preserved (PP, Equation (19)), normalized performance preserved (ANPP, Equation (20)), performance lost (PL, Equation (21)), normalized performance lost (ANPL, Equation (22)), performance preserved from the minimum (PFM, Equation (23)), average performance preserved (APP, Equation (24)), average performance lost (APL, Equation (25)), and weighted average performance preserved before/after the minimum (WAPP, Equation (26)). Figure 6 shows the relative error of the quadratic and competing risk resilience metrics computed using the 1990–1993 dataset, assuming α = 0.5 , which assigns equal weight to the average performance preserved before and after the minimum. For example, the relative error for performance preserved is calculated as follows:
\delta = \frac{ PP_{actual} - PP_{predicted} }{ PP_{actual} }
where P P a c t u a l and P P p r e d i c t e d are the empirical performance preserved and model predictions, respectively.
Figure 6 shows that the quadratic model better predicted four out of seven metrics, but the relative error was lower than 0.01 across all metrics for both models, with the exception of the average normalized performance lost (ANPL, Equation (22)), because the system improved its performance to a higher level than the initial performance. As a result, a negative performance loss is experienced in the predictive period, and the respective relative errors for the ANPL of the quadratic and competing risk models are δ = 0.1440 and 0.2695 . Since these values are substantially larger than the errors in the other metrics, they are not shown in Figure 6 to enable an easier comparison of the other seven metrics.

6.2. Example II: Mixture Distributions

In this second example, the least squares method (Equation (8)) was applied to estimate the parameters of the mixture distribution models (Equation (3)) with 90 % of the seven datasets present in Figure 3. Four different mixtures of F 1 ( t ) and F 2 ( t ) were adopted based on reliability engineering [60] to model disruptions and recoveries, including the Weibull (Wei) distribution
F(t) = 1 - e^{-(t/\lambda)^{k}}
and the exponential (Exp) distribution, which is obtained when k = 1 in Equation (28). For simplicity, the transition from degradation is considered to be a constant a 1 ( t ) = 1 . Different recovery transition forms were considered, including
a_2(t) \in \{ \beta,\ \beta t,\ e^{\beta t},\ \beta \ln(t) \}
to illustrate an increasing trend characteristic of the economic data. The remaining 10 % of the data not used for model fitting were then predicted using the models, and the RMSE (Equation (13)), PRMSE (Equation (14)), r a d j 2 (Equation (15)), and EC were calculated, as presented in Table 4 for a 2 ( t ) = β ln ( t ) , which worked well for all the recessions considered in Figure 3.
Table 4 shows that mixing Exponential distributions (Exp–Exp) was not appropriate to fit any of the datasets. In contrast, at least one of the other F 1 ( t ) and F 2 ( t ) combinations resulted in a high r a d j 2 for all the datasets, except for the 1980 and 2020–2022 datasets, due to their W-shaped and K-shaped curves, respectively. In a few cases, the bathtub-shaped curves performed slightly better than the mixture distribution models, perhaps because the mixture models include numerous parameters that increase model complexity and do not significantly improve the fit.
Figure 7 presents the (Wei–Exp) model fitted to the 1990–1993 data, and Figure 8 shows the (Exp–Wei) and (Wei–Wei) models fitted to the 1981–1983 data, since the (Exp–Wei) model performed better with respect to the RMSE and r_adj^2, whereas the (Wei–Wei) model achieved a lower PRMSE and a correspondingly narrower confidence interval. In Figure 7, since one observed datapoint is outside the confidence interval, the empirical coverage is 97.91%. In Figure 8, all but one datapoint are within the (Wei–Wei) confidence interval (light grey), achieving 97.91% empirical coverage. However, three datapoints are outside the (Exp–Wei) confidence interval (dark grey), resulting in 93.75% empirical coverage for this model.
The relative error (Equation (27)) of the predictions for each of the interval-based metrics described in Section 5, from performance preserved (PP, Equation (19)) to weighted average performance preserved before/after minimum (WAPP, Equation (26)) with α = 0.5, is reported in Figure 9 for the mixture distribution combinations fitted to the 1990–1993 data. The average normalized performance lost (ANPL, Equation (22)) exhibited higher errors due to the normalization step, with δ values of 0.1018, 0.4347, 0.4348, and 0.4514 for Exp–Exp, Wei–Exp, Exp–Wei, and Wei–Wei, respectively. These are not reported in Figure 9 to maintain comparability among the smaller errors in the other seven metrics. Figure 9 and the numerical values from Equation (22) indicate that the Wei–Exp model achieved the lowest relative error in four of the eight predicted metrics, while the Exp–Exp and Wei–Wei models each achieved the lowest in two metrics. Considering Table 4 and Figure 9 together, the combination of Weibull for deterioration and exponential for recovery most frequently predicted performance and metrics accurately for the datasets considered, although several other combinations also exhibited a similar accuracy.

6.3. Example III: Covariates

In this third example, the covariate models based on multiple linear regression (Equation (5)), multiple linear regression with interaction (Equation (6)), and polynomial regression (Equation (7)) were applied to all of the data sets shown in Figure 3, and their goodness-of-fit and predictive accuracy were assessed. To this end, covariates were collected from January of the year in which a recession began and subsequently normalized by dividing all values in all intervals by the maximum value observed for that covariate in all of the intervals considered.
Forward stepwise selection [52] was performed to identify a subset of covariates that exhibited a strong relationship to the change in performance. The procedure begins with a model (MLR, MLR with interaction, or polynomial regression) containing no covariates and applies maximum likelihood estimation to 90% of a dataset according to Equation (12) to estimate the model parameters for predicting the changes in performance. It then predicts the remaining 10% of the data not used for model fitting and computes the RMSE (Equation (13)), PRMSE (Equation (14)), and r_adj^2 (Equation (15)). The addition of each individual covariate is tested one at a time using the r_adj^2 criterion, and the covariate (if any) whose inclusion produces the greatest improvement in model fit according to the r_adj^2 value is added. This process repeats until none of the remaining covariates increase the proportion of the variation in the change in performance explained by the model.
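The selection loop can be sketched as follows. Because the maximum likelihood coefficient estimates coincide with ordinary least squares when the residuals are normally distributed, the sketch uses an OLS fit and the adjusted R² criterion; the covariate matrix, response, and column indices are synthetic placeholders, not the covariates of Table 5.

```python
import numpy as np

def fit_r2_adj(cols, y):
    """Fit ordinary least squares on the selected covariate columns and return r_adj^2."""
    n = len(y)
    design = np.column_stack([np.ones(n)] + cols)
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    sse = np.sum((y - design @ beta) ** 2)
    sst = np.sum((y - y.mean()) ** 2)
    p = design.shape[1]                               # number of estimated coefficients
    return 1.0 - (sse / sst) * (n - 1) / (n - p - 1)

def forward_stepwise(X, y):
    remaining = list(range(X.shape[1]))
    selected, best = [], -np.inf
    while remaining:
        score, j = max((fit_r2_adj([X[:, k] for k in selected + [c]], y), c)
                       for c in remaining)
        if score <= best:                             # no covariate improves the fit
            break
        best = score
        remaining.remove(j)
        selected.append(j)
    return selected, best

rng = np.random.default_rng(7)
X = rng.random((43, 8))                               # 8 candidate covariates, ~90% of 48 months
y = 0.4 - 2.0 * X[:, 2] + 1.3 * X[:, 5] + rng.normal(0.0, 0.1, 43)
print(forward_stepwise(X, y))                         # e.g., ([2, 5], r_adj^2 close to 1)
```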
To characterize the degradation and recovery curves of each recession, it was important to investigate, identify, and collect covariates that could be useful to forecast changes in economic systems [61,62] over time. Therefore, eight covariates were considered for each of the datasets, with the exception of the 2020–2022 U.S. recession, which considered twenty-one covariates. The eight covariates included in all of the datasets are shown in Table 5, while the 2020–2022 U.S. recession induced by the COVID-19 pandemic also considered thirteen additional covariates, shown in Table 6, related to the pandemic and to market factors for which data have become more readily available in recent years.
For each dataset, Table 7 reports the set of covariates selected for inclusion in the model according to the forward stepwise selection procedure, as well as the corresponding goodness-of-fit measures for each of the three types of regression considered. Table 7 indicates that at least one of the three alternative regression models achieved a very high r_adj^2 on each dataset, with the exception of the 1980 data, where MLR achieved the highest r_adj^2 = 0.4772, which was still much higher relative to the values attained by the bathtub-shaped functions and mixture distributions reported in Table 3 and Table 4. Table 7 also indicates that both the interaction and polynomial regressions outperformed the simpler MLR on all measures on six of the seven datasets, with fewer or the same number of covariates, and they achieved competitive values on the dataset where MLR obtained the smallest RMSE and highest r_adj^2. These results indicate that MLR with interaction or polynomial regression may also adequately characterize the 1980 dataset, despite these more complex models being penalized for including additional coefficients for interactions between covariates or higher-order terms. Thus, MLR with interaction achieved the best or very competitive values on all seven datasets.
Since neither bathtub-shaped functions nor mixture distributions were able to characterize the 1980 and 2020–2022 U.S. recessions well, Figure 10 and Figure 11 show these datasets and the fitted covariate models, respectively, as well as their 95% confidence intervals computed using Equation (18). The causes of the 1980 U.S. recession included the Federal Reserve’s contractionary monetary policy to combat double-digit inflation and the lingering effects of the energy crisis. Since MLR with interaction performed well on all datasets, exhibited the smallest PRMSE, and was a close second to MLR with respect to the RMSE and r_adj^2 measures given in Table 7 on the 1980 dataset, the results of this model are shown in Figure 10. Five of the eight covariates were selected by the stepwise procedure to produce the resilience curve, entering in the following order: crude oil price, Standard & Poor’s 500 stock market index, treasury yield curve, mortgage rate, and federal funds. The covariate model tracked the trends well, despite multiple shocks. Moreover, all but one of the predicted points fell within the confidence intervals, demonstrating a high empirical coverage.
For the job losses caused by COVID-19 in 2020, the model fit shown in Figure 11 was computed using five of the twenty-one available covariates selected by the stepwise procedure, including durable goods orders, new orders index, unemployment benefits claims, workplace closures, and consumer activity. The covariate model based on multiple linear regression with interaction achieved a very high r_adj^2 of 0.9827, as well as very low RMSE and PRMSE values, and it characterized the sharp drop in the data very well. A good fit was not possible using the bathtub-shaped functions or any of the mixture distribution models, since the curve was neither U- nor V-shaped, preventing good predictions from those models. Moreover, all but three datapoints were within the confidence interval, achieving an empirical coverage of 93.75% and substantially improved predictive abilities relative to the models without covariates.
For the sake of comparison with the results achieved using the bathtub-shaped functions and mixture distributions shown in Figure 6 and Figure 9, the three types of regression were fit to the 1990–1993 recession data, and the resilience metrics were computed by discretizing the integrals in Section 5 into summations over the time intervals. Thus, Figure 12 shows the relative errors (Equation (27)) of the resilience metrics, except for the normalized average performance lost (ANPL, Equation (22)), where δ = 0.0826, 0.1070, and 0.1239 for multiple linear regression, multiple linear regression with interaction, and polynomial regression, respectively, in order to preserve visual comparability among the smaller errors in the remaining metrics.
Figure 12 and the numerical values determined from Equation (22) indicate that the simple MLR and MLR with interaction were competitive, since both of these regression methods achieved the lowest relative error on four of the eight metrics predicted. Moreover, a comparison of Figure 6, Figure 9, and Figure 12 indicates that the covariate models improved the prediction of the resilience metrics substantially, since all three types of regression achieved relative errors of less than 0.008 across all the metrics.
Normality assumption validation: As described in Section 3.3, in order to make valid inferences from regression models, the normality assumption for a model’s residuals should be verified according to methods such as the inspection of a histogram, QQ plot, and scatter plot of the variance of the residuals.
Figure 13 and Figure 14 show these methods of validation for the 1980 and 2020–2022 model residuals, respectively. Figure 13 exhibits a right-skewed bell-shaped histogram, which may be a result of the second more severe shock, but it does not violate the normality assumption. The line of the QQ plot is also straight, which suggests that the residuals are normally distributed. Both scatter plots of the residuals against time and P ( t ) exhibit constant variance, which support the independence and linearity assumptions, respectively. The shocks occurring in the 1980 recession were caused by unusual events in the US. Therefore, two outliers were observed in each of the plots shown in Figure 13, accurately reflecting the potential surprises inherent to economic systems. Hence, there is no justifiable reason to remove them from the model. Keeping these outliers in the model enables conservative predictions in future intervals, where there is a possibility that extreme events may occur again.
Figure 14 exhibits a left-skewed bell-shaped histogram, which may be a result of the sharp degradation but does not violate the normality assumption. The QQ plot also follows an approximately straight line, suggesting that the residuals are normally distributed. Both scatter plots of the residuals against time and P ( t ) exhibit non-constant variance, where the variance increases over time or as P ( t ) increases, indicating that the independence and linearity assumptions cannot be verified. These observations do not necessarily violate the normality assumption [52] because these patterns may be a result of the small sample size, since the 2020–2022 U.S. recession data contain only 34 intervals. Since the histogram and QQ plot suggest that the residuals are normally distributed, and it is not possible to increase the sample size, the subtle patterns and possible violations observed in the scatter plots are not considered further. The outliers visible in the histogram and QQ plot shown in Figure 14 are legitimate observations that explain the strong shock experienced during the 2020–2022 US recession. Therefore, these outliers were not excluded due to their extremeness, because doing so could have distorted the results by removing valuable information about the variability inherent in economic systems.
Formal statistical tests [52] such as the Kolmogorov–Smirnov, Shapiro–Wilk, and Anderson–Darling tests can be performed to check whether the null hypothesis that a data sample is normally distributed is rejected. For all tests, the null hypothesis may be rejected at the 95% confidence level when the respective p-values are less than 0.05. Applying these three tests to the residuals of the 1980 model, the respective p-values were 0.00145068, 0.00000006, and 0.00000012, indicating that all the tests rejected the null hypothesis that the data were normally distributed, which disagrees with the visual results inferred from the plots shown in Figure 13. For the 2020–2022 model residuals, the respective p-values for the three tests were 0.251607, 0.051159, and 0.112513, which do not reject the null hypothesis, agreeing with the histogram and QQ plot shown in Figure 14. Formal statistical tests for normality may also possess low power on small sample sizes, and the tests can be sensitive to any deviation from an idealized normal bell curve, such as the outliers observed in the data, which suggest a lack of normality. As discussed previously, the outliers were not related to typos during data collection; therefore, their removal from the analysis cannot be justified. Hence, a visual assessment of normality is often more valuable than a formal test, especially for data possessing a small sample size.
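The three tests are available in scipy.stats, as sketched below on synthetic residuals; note that scipy reports critical values rather than a p-value for the Anderson–Darling test, so the statistic is compared against the 5% critical value instead.

```python
import numpy as np
from scipy import stats

residuals = np.random.default_rng(8).normal(0.0, 1.0, 34)   # placeholder residuals

# Kolmogorov-Smirnov against a normal with the sample mean and standard deviation
ks_stat, ks_p = stats.kstest(residuals, "norm",
                             args=(residuals.mean(), residuals.std(ddof=1)))
sw_stat, sw_p = stats.shapiro(residuals)                     # Shapiro-Wilk
ad_result = stats.anderson(residuals, dist="norm")           # Anderson-Darling

print(f"KS p = {ks_p:.4f}, Shapiro-Wilk p = {sw_p:.4f}")
print(f"Anderson-Darling statistic = {ad_result.statistic:.4f} "
      f"(5% critical value = {ad_result.critical_values[2]:.3f})")
```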

6.4. Example IV: Comparison of Resilience Models

In this final example, the best bathtub, mixture, and covariate models for each of the datasets from Figure 3 are compared. Table 8 summarizes the goodness-of-fit measures for the best bathtub, mixture, and covariate model on each dataset. The results indicate that the covariate models performed best across all seven datasets considered for all measures, with the exception of PRMSE on the 1974–1976 and 2001–2005 datasets, where the bathtub-shaped hazard functions and mixture distributions also achieved very good results because of the J-shaped curve exhibited. Moreover, MLR with interaction consistently performed best among the alternative regression models. Thus, the bathtub and mixture models are too rigid to model curves exhibiting more than one decrease and increase, and they often also lack the flexibility to fit curves exhibiting a single decrease and increase. In both cases, poor predictions may result, and these models should therefore be used with caution.
For each of the seven datasets, Table 8 shows that the best covariate model achieved an r_adj^2 > 0.9, with the exception of the 1980 data, where the system experienced two shocks. Nevertheless, the MLR with interaction model still improved the r_adj^2 value from −0.023 (competing risk) and −0.809 (Weibull–Weibull) to 0.472. Moreover, the bathtub-shaped hazard functions and mixture distributions did not fit the 2020–2022 data possessing a K-shaped curve well, but the MLR with interaction model improved the r_adj^2 value from −0.493 (competing risk) and −0.647 (Weibull–Weibull) to 0.982. Thus, we conclude that bathtub-shaped functions can predict curves possessing J, L, U, and V shapes, but not K and W shapes. On the other hand, covariate models are able to accurately capture the trends in any curve, as long as an appropriate set of covariates that describe the change in the performance of a system is available. It should also be noted that RMSE, r_adj^2, and empirical coverage explicitly penalize covariate models for their additional parameters, yet the covariate models consistently achieved higher r_adj^2 values and produced more accurate empirical coverage values while simultaneously tracking and predicting performance and resilience metrics much more precisely than the simpler parametric models, which do not explain the underlying factors driving degradation and recovery.
Figure 15 and Figure 16 compare the best models from each of the three classes of models for the 1990–1993 and 2001–2005 recessions, respectively, since each of the best models fit these two datasets well. Figure 15 shows that although the competing risk bathtub-shaped hazard function and Wei–Exp mixture distribution both fit this dataset well and achieve satisfactory goodness-of-fit measures and resilience metrics, their fitted curves are smooth and cannot capture the small variations in the change in performance. In contrast, the covariate model using MLR with interaction both tracks and predicts the observed trends more precisely, thanks to the inclusion of additional information about external processes that degrade and restore system performance. Similarly, Figure 16 indicates that the covariate model based on MLR with interaction also tracks and predicts the 2001–2005 dataset better than the quadratic bathtub hazard function and Wei–Exp mixture distribution.
To further compare the three classes of the proposed models, Figure 17 and Figure 18 illustrate the best-fitting model for each class for the 1980 and 2020–2022 recessions. These figures highlight the shortcomings of the bathtub-shaped hazard functions and mixture distributions, which performed poorly in capturing the complexity of these recessions. This poor performance underscores the importance of the flexibility of the covariate modeling approach. Figure 17 and Figure 18 reveal that bathtub-shaped functions and mixture distribution models are inadequate for modeling multiple degradation events, such as the W-shaped curve observed in the 1980 recession, or the sharp drops characteristic of the K-shaped curves seen in the 2020–2022 data. In contrast, the covariate model based on multiple linear regression (MLR) with interaction terms effectively tracks these curves, accurately capturing both small variations and rapid changes in performance. Including additional relevant covariates could further reduce residuals and enhance the adjusted r 2 value, indicating even better model performance. These results suggest that when a system experiences multiple disruptive events or sudden performance declines, predictions can be substantially improved by consistently collecting information about external activities that impact system behavior. This approach not only enhances model accuracy but also provides a more comprehensive understanding of the factors influencing system resilience.
To compare the resilience metrics, the relative errors (Equation (27)) of the predictions for each of the interval-based metrics described in Section 5 computed based on competing risks, Wei–Exp, and MLR with interaction models are shown in Figure 19 for the 1990–1993 data, since all these models fit this dataset well. Due to the normalization step, the relative errors of the normalized average performance lost (ANPL, Equation (22)) were δ = 0.2695 , 0.4347 , and 0.1355 for the competing risk, Wei–Exp and interaction models, respectively, and they are not shown in Figure 19 to preserve comparability among the smaller errors in the other seven metrics. The results suggest that the covariate model predicted resilience metrics for the 1990–1993 data best.
In general, bathtub-shaped functions and mixture distribution models lack flexibility, imposing strong assumptions about trends that can result in poor model fits, low goodness-of-fit, and inaccurate predictions when the data do not conform to a prescribed shape. Thus, the resilience models based on multiple linear regression appear to be the most suitable for a wide variety of trends. Hence, they also outperform traditional models when quantifying system resilience with interval-based metrics. Given these findings, it is recommended that policymakers adopt the covariate modeling approach for its superior flexibility, accuracy, and comprehensive assessment capabilities. Covariate models enable more informed decisions regarding the resilience of systems, helping to identify efficient resource allocation to enhance system resilience, which leads to better preparedness for multiple disruptive events and may reduce the costs associated with emergency responses. It is important to note that the covariate models presented in this study might be enhanced by (i) adding additional relevant covariates to the model to provide a more comprehensive understanding of the factors influencing resilience; (ii) detecting and addressing nonlinear collinearity between covariates to simplify interpretability and reduce model complexity; (iii) continuously updating the models with new data and insights to ensure that the models remain relevant and accurate in changing environments; (iv) integrating sophisticated statistical techniques to improve the predictive capabilities; and (v) engaging with experts from various fields to provide diverse perspectives and innovative solutions to improve resilience modeling.

7. Conclusions

This paper presented three alternative approaches to model performance and predict the resilience metrics of systems, including (i) bathtub-shaped functions, (ii) mixture distributions, and (iii) regression models incorporating detrimental and restorative covariates. These models were evaluated using historical data on nonfarm payroll employment during U.S. recessions. Our results indicated that traditional methods from reliability engineering, such as bathtub-shaped functions and mixture distributions, perform well for systems possessing a single smooth degradation and recovery event. However, systems undergoing multiple shocks or experiencing sharp drops followed by slow recoveries could not be accurately captured by either of these models, as their predictive power is restricted by their parametric formulations. In contrast, covariate models effectively captured disruptions and recoveries of all shapes, demonstrating enhanced predictive capabilities, the most accurate confidence interval coverage, and lower relative errors for resilience metrics. Consequently, it is highly recommended to collect data on potentially relevant covariates to enable a more precise assessment of system performance. In summary, resilience models empower decision-makers to make well-informed choices, aiding in their understanding of the impacts of shocks and stresses.
The next steps of this research involve investigating more sophisticated statistical methods, such as time series analogs and machine learning models, to move beyond relying solely on performance changes due to hazards and activities in the current time interval. Furthermore, statistical methods will be tested to handle outliers such as (i) transformation of the dependent variable to address the violation of the assumption of multivariate normality, (ii) bootstrapping techniques to avoid inferring residual distribution assumptions, and (iii) non-parametric hypothesis tests to analyze small sample sizes and potentially non-normal data. Moreover, optimal allocation problems will be formulated and solved to distribute limited resources among the covariates describing restoration activities to achieve a desired system performance within a specific time frame.

Author Contributions

Conceptualization, P.S., M.H. (Mariana Hidalgo), I.L. and L.F.; methodology, P.S., M.H. (Mariana Hidalgo) and L.F.; validation, P.S. and L.F.; formal analysis, M.H. (Mindy Hotchkiss) and L.D.; investigation, P.S., M.H. (Mindy Hotchkiss), L.D. and L.F.; resources, P.S.; data curation, P.S.; writing—original draft preparation, P.S. and M.H. (Mariana Hidalgo); writing—review and editing, I.L. and L.F.; visualization, I.L., M.H. (Mindy Hotchkiss) and L.F.; supervision, L.F.; project administration, L.F.; funding acquisition, L.F. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Homeland Security Community of Best Practices (HS CoBP) through the U.S. Department of the Air Force under contract FA8075-18-D-0002/FA8075-21-F-0074. The views and conclusions expressed in this paper are those of the authors and do not reflect the official policy or position of the U.S. Department of Homeland Security, the U.S. Air Force, or the U.S. Government. Igor Linkov was funded by the US Army Engineer Research and Development Center FLEX project on Compounding Threats to assist with this article.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

Author Mindy Hotchkiss was employed by Aerojet Rocketdyne. Author Igor Linkov was employed by the U.S. Army Corps of Engineers. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. The companies had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Hollnagel, E.; Woods, D.D.; Leveson, N. Resilience Engineering: Concepts and Precepts; Ashgate Publishing, Ltd.: Farnham, UK, 2006. [Google Scholar]
  2. Linkov, I.; Trump, B. The Science and Practice of Resilience. Risk, Systems and Decisions; Springer International Publishing: Berlin/Heidelberg, Germany, 2019. [Google Scholar]
  3. Trivedi, K.S. Probability & Statistics with Reliability, Queuing and Computer Science Applications; John Wiley & Sons: Hoboken, NJ, USA, 2008. [Google Scholar]
  4. Holden, K.; Peel, D.A. Combining Economic Forecasts. J. Oper. Res. Soc. 1988, 39, 1005–1010. [Google Scholar] [CrossRef]
  5. Danisman, G.O.; Demir, E.; Zaremba, A. Financial resilience to the COVID-19 Pandemic: The role of banking market structure. Appl. Econ. 2021, 53, 4481–4504. [Google Scholar] [CrossRef]
  6. Keil, M.; Leamer, E.; Li, Y. An investigation into the probability that this is the last year of the economic expansion. J. Forecast. 2023, 42, 1228–1244. [Google Scholar] [CrossRef]
  7. Bruneau, M.; Chang, S.E.; Eguchi, R.T.; Lee, G.C.; O’rourke, T.D.; Reinhorn, A.M.; Shinozuka, M.; Tierney, K.J.; Wallace, W.A.; Winterfeldt, D.V. A Framework to Quantitatively Assess and Enhance the Seismic Resilience of Communities. Earthq. Spectra 2003, 19, 733–752. [Google Scholar] [CrossRef]
  8. Rose, A. Economic resilience to natural and man-made disasters: Multidisciplinary origins and contextual dimensions. Environ. Hazards 2007, 7, 383–398. [Google Scholar] [CrossRef]
  9. Chen, L.; Miller-Hooks, E. Resilience: An Indicator of Recovery Capability in Intermodal Freight Transport. Transp. Sci. 2011, 46, 109–123. [Google Scholar] [CrossRef]
  10. Crowe, S.; Vasilakis, C.; Skeen, A.; Storr, P.; Grove, P.; Gallivan, S.; Utley, M. Examining the feasibility of using a modelling tool to assess resilience across a health-care system and assist with decisions concerning service reconfiguration. J. Oper. Res. Soc. 2014, 65, 1522–1532. [Google Scholar] [CrossRef]
  11. Fairbanks, R.J.; Wears, R.L.; Woods, D.D.; Hollnagel, E.; Plsek, P.; Cook, R.I. Resilience and Resilience Engineering in Health Care. Jt. Comm. J. Qual. Patient Saf. 2014, 40, 376–383. [Google Scholar] [CrossRef] [PubMed]
  12. Sahebjamnia, N.; Torabi, S.; Mansouri, S. Integrated business continuity and disaster recovery planning: Towards organizational resilience. Eur. J. Oper. Res. 2015, 242, 261–273. [Google Scholar] [CrossRef]
  13. Guidotti, R.; Gardoni, P.; Rosenheim, N. Integration of physical infrastructure and social systems in communities’ reliability and resilience analysis. Reliab. Eng. Syst. Saf. 2019, 185, 476–492. [Google Scholar] [CrossRef]
  14. Verdaguer, M.; Molinos-Senante, M.; Clara, N.; Santana, M.; Gernjak, W.; Poch, M. Optimal fresh water blending: A methodological approach to improve the resilience of water supply systems. Sci. Total Environ. 2018, 624, 1308–1315. [Google Scholar] [CrossRef] [PubMed]
  15. Giahi, R.; MacKenzie, C.A.; Hu, C. Design optimization for resilience for risk-averse firms. Comput. Ind. Eng. 2020, 139, 106122. [Google Scholar] [CrossRef]
  16. Lowe, C.J.; Macdonald, M. Space mission resilience with inter-satellite networking. Reliab. Eng. Syst. Saf. 2020, 193, 106608. [Google Scholar] [CrossRef]
  17. Toroghi, S.S.H.; Thomas, V.M. A framework for the resilience analysis of electric infrastructure systems including temporary generation systems. Reliab. Eng. Syst. Saf. 2020, 202, 107013. [Google Scholar] [CrossRef]
  18. Patriarca, R.; De Paolis, A.; Costantino, F.; Di Gravio, G. Simulation model for simple yet robust resilience assessment metrics for engineered systems. Reliab. Eng. Syst. Saf. 2021, 209, 107467. [Google Scholar] [CrossRef]
  19. Hosseini, S.; Barker, K.; Ramirez-Marquez, J.E. A review of definitions and measures of system resilience. Reliab. Eng. Syst. Saf. 2016, 145, 47–61. [Google Scholar] [CrossRef]
  20. Cheng, Y.; Elsayed, E.A.; Huang, Z. Systems resilience assessments: A review, framework and metrics. Int. J. Prod. Res. 2021, 60, 595–622. [Google Scholar] [CrossRef]
  21. Bruneau, M.; Reinhorn, A. Exploring the concept of seismic resilience for acute care facilities. Earthq. Spectra 2007, 23, 41–62. [Google Scholar] [CrossRef]
  22. Ouyang, M.; Duenas-Osorio, L. Time-dependent resilience assessment and improvement of urban infrastructure systems. Chaos: Interdiscip. J. Nonlinear Sci. 2012, 22, 033122. [Google Scholar] [CrossRef]
  23. Kott, A.; Linkov, I. To Improve Cyber Resilience, Measure It. Computer 2021, 54, 80–85. [Google Scholar] [CrossRef]
  24. Copeland, A.; Hall, G. The Response of Prices, Sales, and Output to Temporary Changes in Demand. J. Appl. Econom. 2011, 26, 232–269. [Google Scholar] [CrossRef]
  25. Röglinger, M.; Plattfaut, R.; Borghoff, V.; Kerpedzhiev, G.; Becker, J.; Beverungen, D.; vom Brocke, J.; Van Looy, A.; del Río-Ortega, A.; Rinderle-Ma, S.; et al. Exogenous Shocks and Business Process Management. Bus. Inf. Syst. Eng. 2022, 64, 669–687. [Google Scholar] [CrossRef]
  26. Azadeh, A.; Salehi, V.; Arvan, M.; Dolatkhah, M. Assessment of resilience engineering factors in high-risk environments by fuzzy cognitive maps: A petrochemical plant. Saf. Sci. 2014, 68, 99–107. [Google Scholar] [CrossRef]
  27. López, C.; Ishizaka, A.; Gul, M.; Yücesan, M.; Valencia, D. A calibrated Fuzzy Best-Worst-method to reinforce supply chain resilience during the COVID 19 pandemic. J. Oper. Res. Soc. 2023, 74, 1968–1991. [Google Scholar] [CrossRef]
  28. Dhulipala, S.L.; Flint, M.M. Series of semi-Markov processes to model infrastructure resilience under multihazards. Reliab. Eng. Syst. Saf. 2020, 193, 106659. [Google Scholar] [CrossRef]
  29. Zeng, Z.; Fang, Y.P.; Zhai, Q.; Du, S. A Markov reward process-based framework for resilience analysis of multistate energy systems under the threat of extreme events. Reliab. Eng. Syst. Saf. 2021, 209, 107443. [Google Scholar] [CrossRef]
  30. Yin, J.; Ren, X.; Liu, R.; Tang, T.; Su, S. Quantitative analysis for resilience-based urban rail systems: A hybrid knowledge-based and data-driven approach. Reliab. Eng. Syst. Saf. 2022, 219, 108183. [Google Scholar] [CrossRef]
  31. Miehle, D.; Björn, H.; Stefan, P.; Übelhör, J. Modeling IT Availability Risks in Smart Factories. Bus. Inf. Syst. Eng. 2020, 62, 323–345. [Google Scholar] [CrossRef]
  32. Yan, R.; Dunnett, S.; Andrews, J. A Petri net model-based resilience analysis of nuclear power plants under the threat of natural hazards. Reliab. Eng. Syst. Saf. 2023, 230, 108979. [Google Scholar] [CrossRef]
  33. Grant, E.; Yung, J. The Double-Edged Sword of Global Integration: Robustness, Fragility, and Contagion in the International Firm Network. J. Appl. Econom. 2021, 36, 760–783. [Google Scholar] [CrossRef]
  34. Garbero, A.; Letta, M. Predicting Household Resilience with Machine Learning: Preliminary Cross-Country Tests. Empir. Econ. 2022, 63, 2057–2070. [Google Scholar] [CrossRef]
  35. Schryen, G.; Rauchecker, G.; Comes, T. Resource Planning in Disaster Response. Bus. Inf. Syst. Eng. 2015, 57, 243–249. [Google Scholar] [CrossRef]
  36. Zou, Q.; Chen, S. Enhancing resilience of interdependent traffic-electric power system. Reliab. Eng. Syst. Saf. 2019, 191, 106557. [Google Scholar] [CrossRef]
  37. Najarian, M.; Lim, G.J. Optimizing infrastructure resilience under budgetary constraint. Reliab. Eng. Syst. Saf. 2020, 198, 106801. [Google Scholar] [CrossRef]
  38. Pan, X.; Dang, Y.; Wang, H.; Hong, D.; Li, Y.; Deng, H. Resilience model and recovery strategy of transportation network based on travel OD-grid analysis. Reliab. Eng. Syst. Saf. 2022, 223, 108483. [Google Scholar] [CrossRef]
  39. Yang, B.; Zhang, L.; Zhang, B.; Wang, W.; Zhang, M. Resilience Metric of Equipment System: Theory, Measurement and Sensitivity Analysis. Reliab. Eng. Syst. Saf. 2021, 215, 107889. [Google Scholar] [CrossRef]
  40. Jalilpoor, K.; Oshnoei, A.; Mohammadi-Ivatloo, B.; Anvari-Moghaddam, A. Network hardening and optimal placement of microgrids to improve transmission system resilience: A two-stage linear program. Reliab. Eng. Syst. Saf. 2022, 224, 108536. [Google Scholar] [CrossRef]
  41. Masruroh, N.A.; Putra, R.K.E.; Mulyani, Y.P.; Rifai, A.P. Strategic insights into recovery from supply chain disruption: A multi-period production planning model. J. Oper. Res. Soc. 2023, 74, 1775–1799. [Google Scholar] [CrossRef]
  42. Nafreen, M.; Fiondella, L. Software Reliability Models with Bathtub-shaped Fault Detection. In Proceedings of the 2021 Annual Reliability and Maintainability Symposium (RAMS), Orlando, FL, USA, 24–27 May 2021; pp. 1–7. [Google Scholar] [CrossRef]
  43. Yang, D.Y.; Frangopol, D.M. Life-cycle management of deteriorating civil infrastructure considering resilience to lifetime hazards: A general approach based on renewal-reward processes. Reliab. Eng. Syst. Saf. 2019, 183, 197–212. [Google Scholar] [CrossRef]
  44. Zhou, Y.; Wang, J.; Yang, H. Resilience of Transportation Systems: Concepts and Comprehensive Review. IEEE Trans. Intell. Transp. Syst. 2019, 20, 4262–4276. [Google Scholar] [CrossRef]
  45. Zobel, C.W. Representing perceived tradeoffs in defining disaster resilience. Decis. Support Syst. 2011, 50, 394–403. [Google Scholar] [CrossRef]
  46. Reed, D.A.; Kapur, K.C.; Christie, R.D. Methodology for assessing the resilience of networked infrastructure. IEEE Syst. J. 2009, 3, 174–180. [Google Scholar] [CrossRef]
  47. Cimellaro, G.P.; Reinhorn, A.M.; Bruneau, M. Seismic resilience of a hospital system. Struct. Infrastruct. Eng. 2010, 6, 127–144. [Google Scholar] [CrossRef]
  48. Han, Y.; Goetz, S.J. Predicting US County Economic Resilience from Industry Input-Output Accounts. Appl. Econ. 2019, 51, 2019–2028. [Google Scholar] [CrossRef]
  49. Phillips, P.C.B.; Sul, D. Economic Transition and Growth. J. Appl. Econom. 2009, 24, 1153–1185. [Google Scholar] [CrossRef]
  50. Fowler, D.S.; Epiphaniou, G.; Higgins, M.D.; Maple, C. Aspects of resilience for smart manufacturing systems. Strateg. Chang. 2023, 32, 183–193. [Google Scholar] [CrossRef]
  51. Leng, J.; Zhong, Y.; Lin, Z.; Xu, K.; Mourtzis, D.; Zhou, X.; Zheng, P.; Liu, Q.; Zhao, J.L.; Shen, W. Towards resilience in Industry 5.0: A decentralized autonomous manufacturing paradigm. J. Manuf. Syst. 2023, 71, 95–114. [Google Scholar] [CrossRef]
  52. Kleinbaum, D.G.; Kupper, L.L.; Nizam, A.; Rosenberg, E.S. Applied Regression Analysis and Other Multivariable Methods; Cengage Learning: Boston, MA, USA, 1999. [Google Scholar]
  53. Casella, G.; Berger, R. Statistical Inference; Cengage Learning: Boston, MA, USA, 2001. [Google Scholar]
  54. Pham, H. Software Reliability; John Wiley & Sons: Hoboken, NJ, USA, 1999. [Google Scholar]
  55. Hogg, R.; McKean, J.; Allen, C. Introduction to Mathematical Statistics; Pearson Prentice Hall: London, UK, 2005. [Google Scholar]
  56. Ouyang, M.; Duenas-Osorio, L. Multi-dimensional hurricane resilience assessment of electric power systems. Struct. Saf. 2014, 48, 15–24. [Google Scholar] [CrossRef]
  57. U.S. Bureau of Labor Statistics. The Recession of 2007–2009. 2022. Available online: https://www.bls.gov/spotlight/2012/recession/ (accessed on 17 January 2022).
  58. U.S. Bureau of Labor Statistics. Comparing Labor Force Participation in the Recovery from the 2020 Recession with Prior Recessions. 2022. Available online: https://www.bls.gov/opub/ted/2021/comparing-labor-force-participation-in-the-recovery-from-the-2020-recession-with-prior-recessions.htm (accessed on 17 January 2022).
  59. Challet, D.; Solomon, S.; Yaari, G. The Universal Shape of Economic Recession and Recovery after a Shock. Economics 2009, 3, 1–24. [Google Scholar] [CrossRef]
  60. Leemis, L. Reliability: Probabilistic Models and Statistical Methods, 2nd ed.; Lightning Source: La Vergne, TN, USA, 2009. [Google Scholar]
  61. Maas, B. Short-term forecasting of the US unemployment rate. J. Forecast. 2021, 39, 394–411. [Google Scholar] [CrossRef]
  62. Fendel, R.; Mai, N.; Mohr, O. Recession probabilities for the Eurozone at the zero lower bound: Challenges to the term spread and rise of alternatives. J. Forecast. 2021, 40, 1000–1026. [Google Scholar] [CrossRef]
  63. Macrotrends. 1 Year Treasury Rate—54 Year Historical Chart. 2022. Available online: https://www.macrotrends.net/2492/1-year-treasury-rate-yield-chart (accessed on 1 October 2022).
  64. Official Data. Stock Market Returns Since 1980. 2022. Available online: https://www.officialdata.org/us/stocks/s-p-500/1980 (accessed on 1 October 2022).
  65. FRED Economic Data. Consumer Price Index for All Urban Consumers: Durables in U.S. City Average. 2022. Available online: https://fred.stlouisfed.org/series/CUUR0000SAD (accessed on 1 October 2022).
  66. Macrotrends. Federal Funds Rate—62 Year Historical Chart. 2022. Available online: https://www.macrotrends.net/2015/fed-funds-rate-historical-chart (accessed on 1 October 2022).
  67. FRED Economic Data. Personal Consumption Expenditures: Durable Goods. 2022. Available online: https://fred.stlouisfed.org/series/PCDG (accessed on 1 October 2022).
  68. Macrotrends. Crude Oil Prices—70 Year Historical Chart. 2022. Available online: https://www.macrotrends.net/1369/crude-oil-price-history-chart (accessed on 1 October 2022).
  69. FRED Economic Data. Industrial Production: Manufacturing: Durable Goods: Motor Vehicles and Parts (NAICS = 3361-3). 2022. Available online: https://fred.stlouisfed.org/series/IPG3361T3N (accessed on 1 October 2022).
  70. Macrotrends. S&P 500 vs. Durable Goods Orders. 2022. Available online: https://www.macrotrends.net/2601/sp-500-vs-durable-goods-chart (accessed on 1 October 2022).
  71. YCharts. U.S. ISM Manufacturing New Orders Index. 2022. Available online: https://ycharts.com/indicators/us_ism_manufacturing_new_orders_index (accessed on 1 October 2022).
  72. The Balance. What Is the Consumer Confidence Index? 2022. Available online: https://www.thebalancemoney.com/consumer-confidence-index-news-impact-3305743 (accessed on 1 October 2022).
  73. Hartman, T.; Schneider, H. The Shape of Things to Come. 2022. Available online: https://graphics.reuters.com/HEALTH-CORONAVIRUS/ECONOMICS/yxmpjozjyvr/ (accessed on 3 October 2022).
  74. Our World in Data. COVID-19 Stringency Index. 2022. Available online: https://ourworldindata.org/grapher/covid-stringency-index?tab=chart (accessed on 3 October 2022).
  75. USAFacts. U.S. COVID-19 Cases and Deaths by State. 2022. Available online: https://usafacts.org/visualizations/coronavirus-covid-19-spread-map/ (accessed on 3 October 2022).
  76. Our World in Data. Workplace Closures during the COVID-19 Pandemic. 2022. Available online: https://ourworldindata.org/grapher/workplace-closures-covid?region=NorthAmerica (accessed on 4 October 2022).
  77. Our World in Data. Workplaces: How Did the Number of Visitors Change Since the Beginning of the Pandemic? 2022. Available online: https://ourworldindata.org/grapher/workplace-visitors-covid?tab=chart&country=~USA (accessed on 4 October 2022).
  78. USAFacts. US Coronavirus Vaccine Tracker. 2022. Available online: https://usafacts.org/visualizations/covid-vaccine-tracker-states/ (accessed on 4 October 2022).
  79. Centers for Disease Control and Prevention. Rates of COVID-19 Cases and Deaths by Vaccination Status. 2022. Available online: https://covid.cdc.gov/covid-data-tracker/#rates-by-vaccine-status (accessed on 4 October 2022).
  80. Our World in Data. Face Covering Policies during the COVID-19 Pandemic. 2022. Available online: https://ourworldindata.org/grapher/face-covering-policies-covid (accessed on 4 October 2022).
Figure 1. Steps of this research.
Figure 2. Conceptual resilience curves possessing a bathtub shape.
Figure 3. Payroll change in U.S. recessions from peak employment.
Figure 4. Quadratic model fit to 2001–2005 U.S. recession data.
Figure 5. Competing risk model fit to the 1990–1993 U.S. recession dataset.
Figure 6. Bathtub-shaped resilience metrics for the 1990–1993 data.
Figure 7. 1990–1993 U.S. recession dataset and the (Wei–Exp) model fit.
Figure 8. 1981–1983 U.S. recession dataset and the (Exp–Wei) and (Wei–Wei) model fits.
Figure 9. Mixture distribution resilience metrics for the 1990–1993 U.S. recession data.
Figure 10. 1980 U.S. recession dataset and regression model fit with six covariates.
Figure 11. 2020–2022 U.S. recession dataset and regression model fit with four covariates.
Figure 12. Covariate model resilience metrics for the 1990–1993 U.S. recession data.
Figure 13. Verification of the normality assumptions of the 1980 model.
Figure 14. Verification of the normality assumptions of the 2020–2022 model.
Figure 15. Best model fits applied to the 1990–1993 U.S. recession dataset.
Figure 16. Best model fits applied to the 2001–2005 U.S. recession dataset.
Figure 17. Best model fits applied to the 1980 U.S. recession dataset.
Figure 18. Best model fits applied to the 2020–2022 U.S. recession dataset.
Figure 19. Interval-based resilience metrics using competing risk, Wei–Exp, and MLRI models and 1990–1993 U.S. recession data.
Table 3. Bathtub-shaped function validation based on seven U.S. recessions.

U.S. Recession | n | Measure | Quadratic | Competing Risk
1974–1976 | 48 | RMSE | 0.0071 * | 0.0075
1974–1976 | 48 | PRMSE | 0.0016 * | 0.0021
1974–1976 | 48 | r²_adj | 0.9179 * | 0.9077
1974–1976 | 48 | EC | 95.83% | 97.91%
1980 | 48 | RMSE | 0.0102 | 0.0097 *
1980 | 48 | PRMSE | 0.0137 | 0.0135 *
1980 | 48 | r²_adj | 0.0716 | 0.0231 *
1980 | 48 | EC | 62.50% | 64.58%
1981–1983 | 48 | RMSE | 0.0105 | 0.0063 *
1981–1983 | 48 | PRMSE | 0.0249 | 0.0078 *
1981–1983 | 48 | r²_adj | 0.8434 | 0.9537 *
1981–1983 | 48 | EC | 93.75% | 97.91%
1990–1993 | 48 | RMSE | 0.0010 | 0.0009 *
1990–1993 | 48 | PRMSE | 0.0016 | 0.0010 *
1990–1993 | 48 | r²_adj | 0.9951 | 0.9964 *
1990–1993 | 48 | EC | 85.41% | 89.58%
2001–2005 | 48 | RMSE | 0.0013 * | 0.0015
2001–2005 | 48 | PRMSE | 0.0008 * | 0.0009
2001–2005 | 48 | r²_adj | 0.9591 * | 0.9483
2001–2005 | 48 | EC | 95.83% | 91.66%
2007–2009 | 48 | RMSE | 0.0060 * | 0.0064
2007–2009 | 48 | PRMSE | 0.0061 * | 0.0079
2007–2009 | 48 | r²_adj | 0.9205 * | 0.9107
2007–2009 | 48 | EC | 95.83% | 95.83%
2020–2022 | 34 | RMSE | 0.0301 | 0.0271 *
2020–2022 | 34 | PRMSE | 0.0334 | 0.0134 *
2020–2022 | 34 | r²_adj | 0.3784 | 0.4930 *
2020–2022 | 34 | EC | 88.23% | 88.23%

Asterisks (*) mark the best measure achieved for each data set.
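For readers who wish to reproduce measures of the kind reported in Tables 3–8, the sketch below shows one plausible set of definitions. PRMSE is assumed here to be the RMSE over held-out (predicted) observations and EC the empirical coverage of the prediction intervals; these assumptions may differ in detail from the definitions used in the paper.

```python
# A sketch of plausible definitions for the measures reported in Tables 3-8.
# Assumptions: PRMSE is taken as the RMSE over held-out (predicted) observations,
# and EC as the fraction of observations falling inside the prediction interval.
import numpy as np

def rmse(y, y_hat):
    """Root mean square error of the fitted curve."""
    return np.sqrt(np.mean((np.asarray(y) - np.asarray(y_hat)) ** 2))

def adjusted_r2(y, y_hat, n_params):
    """Adjusted coefficient of determination for a model with n_params parameters."""
    y, y_hat = np.asarray(y), np.asarray(y_hat)
    n = y.size
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return 1.0 - (1.0 - r2) * (n - 1) / (n - n_params - 1)

def empirical_coverage(y, lower, upper):
    """Share of observations inside the [lower, upper] prediction band."""
    y = np.asarray(y)
    return np.mean((y >= np.asarray(lower)) & (y <= np.asarray(upper)))
```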
Table 4. Mixture distribution validation using data from seven U.S. recessions.

U.S. Recession | Measure | Exp–Exp | Wei–Exp | Exp–Wei | Wei–Wei
1974–1976 | RMSE | 0.0201 | 0.0074 * | 0.0074 | 0.0116
1974–1976 | PRMSE | 0.0147 | 0.0083 * | 0.0084 | 0.0129
1974–1976 | r²_adj | 0.3447 | 0.9109 * | 0.9109 | 0.7908
1974–1976 | EC | 95.83% | 97.91% | 97.91% | 97.91%
1980 | RMSE | 0.0136 | 0.0136 | 0.0136 | 0.0133 *
1980 | PRMSE | 0.0322 | 0.0322 | 0.0322 | 0.0322 *
1980 | r²_adj | 0.8093 | 0.8958 | 0.8961 | 0.8090 *
1980 | EC | 91.66% | 91.66% | 91.66% | 91.66%
1981–1983 | RMSE | 0.0229 | 0.0089 | 0.0089 * | 0.0099
1981–1983 | PRMSE | 0.0291 | 0.0194 | 0.0194 | 0.0126 *
1981–1983 | r²_adj | 0.4039 | 0.9091 | 0.9188 * | 0.8924
1981–1983 | EC | 95.83% | 91.66% | 93.75% | 97.91%
1990–1993 | RMSE | 0.0216 | 0.0021 * | 0.0021 | 0.0036
1990–1993 | PRMSE | 0.0155 | 0.0029 * | 0.0029 | 0.0046
1990–1993 | r²_adj | 0.7228 | 0.9800 * | 0.9799 | 0.9463
1990–1993 | EC | 93.75% | 97.91% | 97.91% | 95.83%
2001–2005 | RMSE | 0.0209 | 0.0021 * | 0.0021 | 0.0033
2001–2005 | PRMSE | 0.0142 | 0.0027 * | 0.0027 | 0.0050
2001–2005 | r²_adj | 0.0068 | 0.8966 * | 0.8965 | 0.7544
2001–2005 | EC | 93.75% | 97.91% | 97.91% | 97.91%
2007–2009 | RMSE | 0.0193 | 0.0198 | 0.0068 * | 0.0113
2007–2009 | PRMSE | 0.0222 | 0.0222 | 0.0135 | 0.0107 *
2007–2009 | r²_adj | 0.1892 | 0.1506 | 0.8984 * | 0.7365
2007–2009 | EC | 95.83% | 95.83% | 95.83% | 97.91%
2020–2022 | RMSE | 0.0241 | 0.0284 | 0.0284 | 0.0234 *
2020–2022 | PRMSE | 0.0057 | 0.0384 | 0.0384 | 0.0057 *
2020–2022 | r²_adj | 0.5985 | 0.4434 | 0.4434 | 0.6478 *
2020–2022 | EC | 91.17% | 91.17% | 91.17% | 88.23%

Asterisks (*) mark the best measure achieved for each data set.
Table 5. Eight market-related covariates considered for each dataset.

Covariate | Name | Reference
X1 | Treasury yield curve | [63]
X2 | Standard & Poor's 500 Index stock market | [64]
X3 | Consumer price index | [65]
X4 | Federal funds rate | [66]
X5 | Personal consumption expenditures | [67]
X6 | Crude oil price | [68]
X7 | Mortgage rate | [69]
X8 | Industrial production | [69]
Table 6. Thirteen additional covariates considered for the 2020–2022 U.S. recession.

Covariate | Name | Reference
X9 | Durable goods orders | [70]
X10 | New order index | [71]
X11 | Consumer confidence index | [72]
X12 | Consumer activity | [73]
X13 | Unemployment benefits claims | [73]
X14 | Stringency index | [74]
X15 | Number of deaths | [75]
X16 | Number of cases | [75]
X17 | Workplace closures | [76]
X18 | Number of visitors to the workplace | [77]
X19 | Population fully vaccinated | [78]
X20 | Number of cases among vaccinated people | [79]
X21 | Face covering | [80]
Table 7. Regression validation on seven U.S. recessions.

U.S. Recession | Goodness of Fit | Multiple Linear Regression | MLR with Interaction | Polynomial Regression
1974–1976 | Covariates | X5, X3, X8, X6, X7 | X5, X3, X8, X2 | X5, X3, X8, X6, X7
1974–1976 | RMSE | 0.0053 | 0.0023 * | 0.0033
1974–1976 | PRMSE | 0.0038 | 0.0025 | 0.0012 *
1974–1976 | r²_adj | 0.9457 | 0.9892 * | 0.9788
1974–1976 | EC | 97.91% | 97.91% | 95.83%
1980 | Covariates | X6, X2, X1, X7, X3, X5 | X6, X2, X1, X7, X4 | X6, X2, X1, X7, X3, X5
1980 | RMSE | 0.0022 * | 0.0022 | 0.0022
1980 | PRMSE | 0.0034 | 0.0032 * | 0.0035
1980 | r²_adj | 0.4772 * | 0.4728 | 0.457
1980 | EC | 95.83% | 97.91% | 95.83%
1981–1983 | Covariates | X7, X3, X4, X5, X1 | X7, X3, X4, X2 | X7, X3, X4, X5
1981–1983 | RMSE | 0.0018 | 0.0016 * | 0.0018
1981–1983 | PRMSE | 0.0011 | 0.0008 * | 0.0024
1981–1983 | r²_adj | 0.9954 | 0.9966 * | 0.9954
1981–1983 | EC | 97.91% | 95.83% | 97.91%
1990–1993 | Covariates | X3, X7, X1, X8, X6 | X3, X7, X1, X2 | X3, X7, X1, X5
1990–1993 | RMSE | 0.0009 | 0.0007 * | 0.0007
1990–1993 | PRMSE | 0.0015 | 0.0009 * | 0.0009
1990–1993 | r²_adj | 0.9962 | 0.9974 * | 0.9973
1990–1993 | EC | 95.83% | 95.83% | 95.83%
2001–2005 | Covariates | X5, X3, X4, X8, X7 | X5, X3, X4, X8, X6 | X5, X3, X1
2001–2005 | RMSE | 0.0013 | 0.0009 * | 0.0010
2001–2005 | PRMSE | 0.0017 | 0.0015 | 0.0007 *
2001–2005 | r²_adj | 0.9604 | 0.9773 * | 0.9746
2001–2005 | EC | 95.83% | 93.75% | 95.83%
2007–2009 | Covariates | X7, X5, X1, X2, X4 | X7, X5, X1, X2, X6 | X7, X5, X1, X2, X3
2007–2009 | RMSE | 0.0031 | 0.0014 * | 0.0023
2007–2009 | PRMSE | 0.0036 | 0.0005 * | 0.0011
2007–2009 | r²_adj | 0.9791 | 0.9955 * | 0.9878
2007–2009 | EC | 97.91% | 97.91% | 97.91%
2020–2022 | Covariates | X9, X10, X13, X2, X17 | X9, X10, X13, X17, X12 | X9, X10, X13, X2, X17
2020–2022 | RMSE | 0.0196 | 0.0031 * | 0.0194
2020–2022 | PRMSE | 0.0034 | 0.0011 * | 0.0023
2020–2022 | r²_adj | 0.3578 | 0.9827 * | 0.3679
2020–2022 | EC | 79.41% | 93.75% | 82.35%

Asterisks (*) mark the best measure achieved for each data set.
Table 8. Comparison of resilience models on data from seven U.S. recessions.

U.S. Recession | Goodness of Fit | Bathtub-Shaped Functions | Mixture Distributions | Covariates
1974–1976 | Model | Quadratic | Wei–Exp | Interaction
1974–1976 | RMSE | 0.0071 | 0.0074 | 0.0023 *
1974–1976 | PRMSE | 0.0016 * | 0.0083 | 0.0025
1974–1976 | r²_adj | 0.9179 | 0.9109 | 0.9892 *
1974–1976 | EC | 95.83% | 97.91% | 97.91%
1980 | Model | Competing Risk | Wei–Wei | Interaction
1980 | RMSE | 0.0097 | 0.0133 | 0.0022 *
1980 | PRMSE | 0.0135 | 0.0322 | 0.0032 *
1980 | r²_adj | 0.0231 | 0.8090 | 0.4728 *
1980 | EC | 64.58% | 91.66% | 97.91%
1981–1983 | Model | Competing Risk | Exp–Wei | Interaction
1981–1983 | RMSE | 0.0063 | 0.0089 | 0.0016 *
1981–1983 | PRMSE | 0.0078 | 0.0194 | 0.0008 *
1981–1983 | r²_adj | 0.9537 | 0.9188 | 0.9966 *
1981–1983 | EC | 97.91% | 93.75% | 97.91%
1990–1993 | Model | Competing Risk | Wei–Exp | Interaction
1990–1993 | RMSE | 0.0009 | 0.0021 | 0.0007 *
1990–1993 | PRMSE | 0.0010 | 0.0029 | 0.0009 *
1990–1993 | r²_adj | 0.9964 | 0.9800 | 0.9974 *
1990–1993 | EC | 89.58% | 97.91% | 95.83%
2001–2005 | Model | Quadratic | Wei–Exp | Interaction
2001–2005 | RMSE | 0.0013 | 0.0021 | 0.0009 *
2001–2005 | PRMSE | 0.0008 * | 0.0027 | 0.0015
2001–2005 | r²_adj | 0.9591 | 0.8966 | 0.9773 *
2001–2005 | EC | 95.83% | 97.91% | 93.75%
2007–2009 | Model | Quadratic | Exp–Wei | Interaction
2007–2009 | RMSE | 0.0060 | 0.0068 | 0.0014 *
2007–2009 | PRMSE | 0.0061 | 0.0135 | 0.0005 *
2007–2009 | r²_adj | 0.9205 | 0.8984 | 0.9955 *
2007–2009 | EC | 95.83% | 95.83% | 97.91%
2020–2022 | Model | Competing Risk | Wei–Wei | Interaction
2020–2022 | RMSE | 0.0271 | 0.0234 | 0.0031 *
2020–2022 | PRMSE | 0.0134 | 0.0057 | 0.0011 *
2020–2022 | r²_adj | 0.4930 | 0.6478 | 0.9827 *
2020–2022 | EC | 88.23% | 88.23% | 93.75% *

Asterisks (*) mark the best measure achieved for each data set.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

