
A Software Reliability Model Considering a Scale Parameter of the Uncertainty and a New Criterion

1 Department of Computer Science and Statistics, Chosun University, Gwangju 61452, Republic of Korea
2 Institute of Well-Aging Medicare & Chosun University LAMP Center, Chosun University, Gwangju 61452, Republic of Korea
3 Department of Industrial and Systems Engineering, Rutgers University, Piscataway, NJ 08855-8018, USA
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(11), 1641; https://doi.org/10.3390/math12111641
Submission received: 17 April 2024 / Revised: 10 May 2024 / Accepted: 22 May 2024 / Published: 23 May 2024
(This article belongs to the Section Mathematics and Computer Science)

Abstract

It is becoming increasingly common for software to operate in diverse environments. However, even if software performs well in the test phase, uncertain operating environments may cause new software failures. Traditional software reliability models for uncertain operating environments tend to fit only special cases because of the large number of assumptions they involve. To address this problem, this study proposes a new software reliability model for uncertain operating environments that minimizes both its assumptions and the number of parameters in the model, so that it can be applied to general situations better than the traditional models. In addition, past studies have demonstrated the superiority of software reliability models using individual criteria based on the difference between the actual and estimated values. We therefore also propose a new multi-criteria decision method that considers multiple goodness-of-fit criteria simultaneously. The multi-criteria decision method using ranking is useful for comprehensive evaluation because it ranks and weights multiple criteria rather than relying on any single criterion. Based on this, 21 existing models are compared with the proposed model on two datasets, and the proposed model is found to be superior on both datasets under 15 criteria and the multi-criteria decision method using ranking.

1. Introduction

Software is a set of programs organized by algorithms to perform specific tasks. With the development of technology, many fields have become increasingly dependent on software. As software is used in many areas, there has been significant interest in its reliability. Software reliability is a measure of how long software can be used without failure. Because software is utilized in so many fields, a failure or a lack of reliability can cause significant problems, which highlights the importance of software reliability.
Considerable research has been conducted on software reliability. Various methods have been used to estimate and predict software failures, such as error-seeding models, failure rate models, curve-fitting models, time series models, and nonhomogeneous Poisson processes (NHPPs). The NHPP software reliability model (SRM) is the most representative of these methodologies. The first NHPP SRM was developed by Goel and Okumoto in 1979, who proposed an SRM in which the cumulative number of failures approaches a finite limit exponentially over time [1,2]. Based on this, SRMs assuming that the cumulative number of software failures increases along an S-shaped curve were developed [3,4,5,6], and research was conducted on SRMs that reflect the various testing efforts made to measure software reliability. In addition, SRMs that assume imperfect debugging, that is, where failures and defects that occur may not be corrected, have been proposed [7], and many SRMs that assume generalized imperfect debugging have been studied [8,9,10,11,12].
Software structures have become increasingly diverse and complex because they are actively used in many industries. Because of the complex organization of software, the failures that occur are not independent, and software failures can have dependent effects on other software failures [13,14]. Kim et al. [15] and Lee et al. [16] proposed an SRM that assumed that software failures occur in a dependent manner.
Because software is used in many different ways, its roles have become very diverse, and so have the environments in which it runs. Many studies have been conducted on SRMs that assume uncertainty in operating environments [17,18,19,20,21,22,23,24,25,26]. Here, uncertain operating environments refer to the field environments in which actual consumers use the software. To model them, Teng and Pham [18] and Pham [19] proposed SRMs utilizing a gamma distribution, and Song et al. [20] assumed an exponential distribution for the uncertain operating environment and applied a monotonically increasing failure detection rate function to present a new model. Many researchers have proposed SRMs that consider uncertain factors [21,22,23,24]. Chang et al. [25] proposed an SRM that considers testing coverage, which indicates whether sufficient tests have been performed, in uncertain operating environments. Lee et al. [26] proposed an uncertain-operating-environment SRM that assumes dependent failures to reflect complex software structures.
In this study, we propose a new NHPP SRM that considers the uncertain operating environments of software. Traditional NHPP SRMs that consider uncertain operating environments include several mathematical assumptions, which make the models complicated and mean that they fit well only in the special cases covered by those assumptions. Our proposed NHPP SRM minimizes the mathematical assumptions and can be applied to general situations better than traditional SRMs. In addition, whereas past software reliability studies judged the excellence of a model using various criteria based on the difference between the actual and estimated values, we propose a multi-criteria decision method using ranking (MCDMR) that considers multiple criteria simultaneously through ranking. Section 2 introduces the SRM and the new NHPP model that considers uncertain operating environments. Section 3 introduces integrated criteria enabling the consideration of multiple criteria simultaneously, and Section 4 presents the numerical example results. Finally, Section 5 concludes this paper.

2. A New Software Reliability Model

2.1. Software Reliability Model

Software reliability assesses how long a piece of software can be used without failure. The reliability function R ( t ) used to evaluate the software reliability is given by the following equation:
R(t) = P(T > t) = \int_t^{\infty} f(u)\,du
It is defined as the probability that the system is still operating after time t . The reliability function R ( t ) is derived from the probability density function f ( t ) by treating the lifetime or failure time as a random variable T . In the simplest case, the number of failures per unit time is assumed to be Poisson distributed with mean λ ; however, this mean is then constant and does not depend on time. To improve on this, many traditional SRMs follow a nonhomogeneous Poisson process (NHPP) with a time-dependent mean value function m ( t ) . The NHPP is represented by the following equation:
\Pr\{N(t) = n\} = \frac{\{m(t)\}^n}{n!}\, e^{-m(t)}, \quad n = 0, 1, 2, \ldots, \; t \ge 0
where N ( t ) is the total number of failures up to time t and m ( t ) is the mean value function from time 0 to time t . The mean value function m ( t ) is obtained as follows:
m(t) = \int_0^t \lambda(s)\,ds
m ( t ) , the integral of the intensity function λ ( t ) (the instantaneous failure rate at time t ), is the mean number of failures from time 0 to time t . From this relationship, the reliability function R ( t ) is derived as follows.
R(t) = e^{-\int_0^t \lambda(s)\,ds} = e^{-m(t)}
Thus, the reliability function R ( t ) used to assess software reliability is obtained from the intensity function λ ( t ) and the mean value function m ( t ) .
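The relationship R(t) = e^{-m(t)} can be sketched numerically. The example below assumes the classic Goel–Okumoto mean value function m(t) = a(1 − e^{−bt}) with illustrative (not fitted) parameter values:

```python
import math

def goel_okumoto_m(t, a=100.0, b=0.1):
    """Goel-Okumoto mean value function m(t) = a(1 - e^(-bt)).

    The values of a and b here are illustrative, not fitted parameters.
    """
    return a * (1.0 - math.exp(-b * t))

def reliability(t, m):
    """R(t) = e^(-m(t)): the probability of zero failures in (0, t]."""
    return math.exp(-m(t))

# Reliability starts at 1 and decreases as expected failures accumulate.
print(reliability(0.0, goel_okumoto_m))  # 1.0
print(reliability(1.0, goel_okumoto_m) > reliability(5.0, goel_okumoto_m))  # True
```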

2.2. Proposed NHPP SRM

In an NHPP SRM, the mean value function m ( t ) is obtained by solving a differential equation, where a ( t ) is the expected number of faults at time t and b ( t ) is the failure detection rate. The most basic differential equation is shown in Equation (1).
\frac{dm(t)}{dt} = b(t)\,[a(t) - m(t)] \quad (1)
Equation (1) assumes that the software operates in a single, fixed environment and that software failures occur independently. Traditional SRMs specify the forms of a ( t ) and b ( t ) in Equation (1) and solve the differential equation to obtain the SRM in Equation (2).
m(t) = e^{-\int_{t_0}^{t} b(\tau)\,d\tau}\left[m_0 + \int_{t_0}^{t} a(\tau)\,b(\tau)\,e^{\int_{t_0}^{\tau} b(s)\,ds}\,d\tau\right] \quad (2)
where m 0 is the initial condition, and t 0 is the start time of the debugging process.
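As a quick numerical sanity check on Equations (1) and (2): with constant a(t) = a, b(t) = b, m_0 = 0, and t_0 = 0, the solution reduces to the Goel–Okumoto form m(t) = a(1 − e^{−bt}), which should satisfy the differential equation. The values of a and b below are illustrative:

```python
import math

# With constant a(t) = a, b(t) = b, m0 = 0, t0 = 0, Equation (2) reduces to
# m(t) = a(1 - e^(-bt)), which should satisfy Equation (1): dm/dt = b [a - m(t)].
a, b = 100.0, 0.2  # illustrative values

def m(t):
    return a * (1.0 - math.exp(-b * t))

t, h = 3.0, 1e-6
lhs = (m(t + h) - m(t - h)) / (2 * h)  # dm/dt via central difference
rhs = b * (a - m(t))
assert abs(lhs - rhs) < 1e-4  # both sides agree numerically
```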
However, the environment in which software operates varies significantly depending on its function and surroundings. For example, the operating systems and computer specifications of the users of a piece of software vary widely, and it is not practical for companies to test every such situation before release; therefore, the errors that may occur in different environments vary widely. Considering the uncertainty of the environment in which the software operates can therefore improve software reliability assessment. In this study, we propose an NHPP SRM that considers uncertain operating environments, based on the following assumptions.
The initial condition of m ( t ) is m ( 0 ) = 0 ;
a ( t ) = N is the expected number of software faults before testing;
η follows a gamma distribution with parameters α and β ;
β is a parameter containing the failure detection rate.
Our proposed SRM considers uncertain operating environments by extending the differential equations that derive the traditional SRM to assume uncertain operating environments. The differential equation for the model is derived by adding the uncertain operating environment parameter η , as in Equation (3):
\frac{dm(t)}{dt} = \eta\, b(t)\,[a(t) - m(t)] \quad (3)
where the parameter η follows a gamma distribution with parameters α and β , and the scale parameter, β , contains the fault detection rate of the software.
Rather than the additional parameters that make up traditional forms of a ( t ) and b ( t ) , the proposed model uses the parameters α and β of the gamma distribution. The parameter η in Equation (3) follows a gamma distribution with parameters α and β , where a ( t ) = N and b ( t ) = β²t/(βt + 1). N is the total expected number of failures, and α and β are the shape and scale parameters of η in uncertain operating environments. Solving Equation (3) with these forms and taking the expectation over η gives Equation (4):
m(t) = N\left[1 - \left(\frac{\beta}{\beta + \int_0^t b(s)\,ds}\right)^{\alpha}\right] \quad (4)
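Equation (4) arises from averaging the conditional solution m(t | η) = N(1 − e^{−η ∫ b(s) ds}) over the gamma-distributed η, with β acting as the rate parameter. A quick Monte Carlo check of this expectation, using illustrative values for α, β, and the integral, confirms the closed form:

```python
import numpy as np

# Monte Carlo check of the gamma mixture behind Equation (4):
# E_eta[1 - e^(-eta * I)] = 1 - (beta / (beta + I))^alpha,
# where eta ~ Gamma(shape=alpha, rate=beta). All values are illustrative.
rng = np.random.default_rng(0)
alpha, beta, I = 2.0, 1.5, 0.8  # I stands in for integral_0^t b(s) ds

# numpy's gamma sampler uses a scale parameter, so scale = 1 / rate
eta = rng.gamma(shape=alpha, scale=1.0 / beta, size=1_000_000)
mc = np.mean(1.0 - np.exp(-eta * I))          # Monte Carlo expectation
closed = 1.0 - (beta / (beta + I)) ** alpha   # closed form from Equation (4)
assert abs(mc - closed) < 5e-3
```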
In Equation (4), since b ( t ) = β²t/(βt + 1), substitution and simplification yield the final form of the proposed NHPP SRM under uncertain operating environments, Equation (5). Traditional NHPP SRMs for uncertain operating environments have been proposed with multiple assumptions, often consisting of four or more parameters [18,19,20,25,26]; these assumptions may make them appropriate only for particular situations. The proposed NHPP SRM contains fewer parameters than previously proposed uncertain-operating-environment SRMs and, because it makes few assumptions, can be regarded as a model for general rather than special situations.
m(t) = N\left[1 - \left(\frac{\beta}{\beta + \int_0^t \frac{\beta^2 s}{\beta s + 1}\,ds}\right)^{\alpha}\right] = N\left[1 - \left(\frac{\beta}{\beta + \beta t - \ln(\beta t + 1)}\right)^{\alpha}\right] \quad (5)
Table 1 shows the mean value functions of the traditional SRMs and the proposed model. Models 1–14 are SRMs with independence assumptions, models 15–18 are SRMs considering uncertain operating environments, models 19 and 20 are SRMs considering dependent failures, model 21 is an SRM considering both dependent failures and uncertain operating environments, and model 22 is the proposed SRM considering uncertain operating environments. For the detailed construction of models 1 to 21, see the corresponding papers.

3. Multi-Criteria Decision Method Using Ranking

To demonstrate software reliability, multiple criteria based on the distance between the actual and estimated values are used. We propose a new measure that integrates these multiple criteria using multi-criteria decision-making (MCDM). Traditional MCDM methods integrate quantitative and qualitative measures to reflect the variability of the indicators. In MCDM, the most important issue is determining the weights of the criteria. Singh [27] recommended an approach using Monte Carlo simulation; Saxena et al. [28], Kumar et al. [29], and Garg et al. [30] used the entropy method to determine the criterion weights, together with the technique for order preference by similarity to an ideal solution (TOPSIS) to rank SRMs. Our proposed MCDMR instead determines the weights from the rank of each model under each criterion.
First, matrix C is constructed from the s SRMs to be compared and the k criteria calculated for each model. The rank of each model under each criterion is then organized in matrix R :
C = \begin{bmatrix} C_{11} & C_{12} & \cdots & C_{1k} \\ C_{21} & C_{22} & \cdots & C_{2k} \\ \vdots & \vdots & \ddots & \vdots \\ C_{s1} & C_{s2} & \cdots & C_{sk} \end{bmatrix}, \quad R = \begin{bmatrix} R_{11} & R_{12} & \cdots & R_{1k} \\ R_{21} & R_{22} & \cdots & R_{2k} \\ \vdots & \vdots & \ddots & \vdots \\ R_{s1} & R_{s2} & \cdots & R_{sk} \end{bmatrix}
Based on this, each criterion value is normalized as y_{ij} = C_{ij} / \sum_{i=1}^{s} C_{ij} (i = 1, 2, \ldots, s, \; j = 1, 2, \ldots, k), and the ranks are transformed into weights w_{ij} = R_{ij} / \sum_{i=1}^{s} R_{ij}. Matrix V is then created by multiplying each normalized y_{ij} by its weight w_{ij}.
V = \begin{bmatrix} V_{11} & V_{12} & \cdots & V_{1k} \\ V_{21} & V_{22} & \cdots & V_{2k} \\ \vdots & \vdots & \ddots & \vdots \\ V_{s1} & V_{s2} & \cdots & V_{sk} \end{bmatrix} = \begin{bmatrix} w_{11}y_{11} & w_{12}y_{12} & \cdots & w_{1k}y_{1k} \\ w_{21}y_{21} & w_{22}y_{22} & \cdots & w_{2k}y_{2k} \\ \vdots & \vdots & \ddots & \vdots \\ w_{s1}y_{s1} & w_{s2}y_{s2} & \cdots & w_{sk}y_{sk} \end{bmatrix}
The MCDMR defines V 1 , , V s as the sum of all components of each row in a matrix V . For example, V 1 for the first row is the sum of the components from V 11 to V 1 k .
MCDMR = \begin{bmatrix} V_{11} + V_{12} + \cdots + V_{1k} \\ V_{21} + V_{22} + \cdots + V_{2k} \\ \vdots \\ V_{s1} + V_{s2} + \cdots + V_{sk} \end{bmatrix} = \begin{bmatrix} V_1 \\ V_2 \\ \vdots \\ V_s \end{bmatrix}
After computing V_1 to V_s, the smaller the value, the better the corresponding SRM. This is useful for a comprehensive judgment because it does not rely on individual criteria alone but considers multiple criteria simultaneously. Therefore, our proposed MCDMR simultaneously considers multiple goodness-of-fit criteria through ranking.
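The MCDMR computation can be sketched as follows. This minimal implementation assumes every criterion is oriented so that smaller is better (criteria such as R², where larger is better, would first be transformed, e.g. to 1 − R²):

```python
import numpy as np

def mcdmr_scores(C):
    """Sketch of the MCDMR score for an (s, k) criterion matrix C.

    Assumes every column is oriented so that smaller is better.
    Ties in ranking are broken arbitrarily in this simple sketch.
    Returns a length-s vector; the smallest score marks the best model.
    """
    C = np.asarray(C, dtype=float)
    # Rank matrix R: rank 1 = best (smallest) value within each column.
    ranks = C.argsort(axis=0).argsort(axis=0) + 1
    # Column-wise normalization: y_ij = C_ij / sum_i C_ij
    y = C / C.sum(axis=0)
    # Rank-based weights: w_ij = R_ij / sum_i R_ij
    w = ranks / ranks.sum(axis=0)
    # V = element-wise product; MCDMR score = row sums (smaller = better).
    return (w * y).sum(axis=1)

# Model 0 is best on both criteria, so it receives the smallest score.
scores = mcdmr_scores([[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]])
assert scores.argmin() == 0
```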

4. Numerical Example

4.1. Data Information

Two datasets were used in this study. The first dataset comprises failure data from software developed by Bell Labs [31]. The software consists of 21,700 instructions written by nine programmers. Failure data were observed for 12 h, and 104 failures were observed. The second dataset was obtained by testing an online software package developed by IBM [32]. The software, which consists of 40,000 lines of code, was tested for 21 days, and 46 cumulative failures were recorded. We demonstrated that the proposed SRM performs well in estimating software failures using these two datasets.

4.2. Criteria

We compared the proposed SRM with several existing SRMs using 15 criteria. The 15 criteria are derived from the difference between the actual and estimated values; Table 2 lists the formula for each criterion. In Table 2, m ^ ( t ) is the estimated value of the model m ( t ) , y i is the actual value, n is the number of observations, and m is the number of parameters in each model.
Mean squared error (MSE) is the sum of the squared differences between the estimated and actual values, adjusted for the number of observations and parameters [33]. Root mean squared error (RMSE) is the square root of the MSE [33,34]. Predictive ratio risk (PRR) and predictive power (PP) are based on the difference between the actual and predicted values, divided by the predicted value for PRR and by the actual value for PP [35]. R 2 is the coefficient of determination of the regression equation, and a d j _ R 2 is the adjusted coefficient of determination, which accounts for the number of parameters when determining explanatory power [36]. Mean absolute error (MAE) is the sum of the absolute differences between the estimated and actual values, adjusted for the number of observations and parameters [37,38]. Akaike’s information criterion (AIC) is used to compare maximized likelihoods; it aims to maximize the Kullback–Leibler agreement between the model and the probability distribution of the data. The Bayesian information criterion (BIC) combines the sum of squared residuals with the number of model parameters and is a modified form of the AIC penalty [39,40]; it penalizes model complexity more strongly than AIC. Predicted relative variation (PRV) is the standard deviation of the prediction bias [41]. Root mean square prediction error (RMSPE) estimates the closeness of each model’s predictions [41,42]. Mean error of prediction (MEOP) is the sum of the absolute deviations between the actual and estimated values [37,43]. The Theil statistic (TS) is the average percentage deviation of all time points from the actual values [37].
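Several of the simpler criteria above can be computed directly from the residuals. The definitions below use commonly cited forms; the exact penalty conventions (e.g. the n − m divisor in MSE and MAE) are assumptions taken from the cited references rather than a transcription of Table 2:

```python
import numpy as np

def criteria(y, y_hat, n_params):
    """Four distance-based criteria in commonly used forms (assumed,
    not transcribed from Table 2): MSE, RMSE, R^2, and MAE."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    n = len(y)
    resid = y - y_hat
    sse = float(np.sum(resid ** 2))
    mse = sse / (n - n_params)                            # mean squared error
    rmse = np.sqrt(mse)                                   # root mean squared error
    r2 = 1.0 - sse / float(np.sum((y - y.mean()) ** 2))   # coefficient of determination
    mae = float(np.sum(np.abs(resid))) / (n - n_params)   # mean absolute error
    return {"MSE": mse, "RMSE": rmse, "R2": r2, "MAE": mae}
```

A perfect prediction yields MSE = RMSE = MAE = 0 and R² = 1, matching the direction of each criterion described above.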
Pham’s information criterion (PIC) considers the trade-off between model fit and the number of parameters by slightly increasing the penalty for each additional parameter when the sample is fairly small [44], and Pham’s criterion (PC) imposes a larger penalty than PIC [45]. Based on these criteria, we compared the existing NHPP SRMs with our proposed model. The closer R 2 and a d j _ R 2 are to 1 and the closer the other 13 criteria are to 0, the better the fit. In addition, we used the MCDMR proposed in Section 3 to evaluate the models comprehensively across the criteria. Using R and MATLAB, we estimated the parameters of each model by least squares estimation [46], which minimizes the difference between the model-estimated values and the actual number of failures, and calculated the criteria to compare the models.
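Least squares estimation of the proposed model's parameters can be sketched with SciPy's curve_fit. The cumulative-failure data below are hypothetical, not the Bell Labs or IBM datasets:

```python
import numpy as np
from scipy.optimize import curve_fit

def m_proposed(t, N, alpha, beta):
    """Proposed mean value function, Equation (5)."""
    denom = beta + beta * t - np.log(beta * t + 1.0)
    return N * (1.0 - (beta / denom) ** alpha)

# Hypothetical cumulative-failure data (time, cumulative failure count).
t_obs = np.arange(1.0, 9.0)
y_obs = np.array([10.0, 18.0, 25.0, 30.0, 34.0, 37.0, 39.0, 40.0])

# Least squares estimation of (N, alpha, beta); p0 is an illustrative guess.
popt, _ = curve_fit(m_proposed, t_obs, y_obs, p0=[50.0, 2.0, 1.0],
                    bounds=(1e-6, np.inf))

# MSE with n observations and m parameters, as in Table 2's convention.
resid = y_obs - m_proposed(t_obs, *popt)
mse = np.sum(resid ** 2) / (len(y_obs) - len(popt))
```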

4.3. Results on Dataset 1

Table 3 shows the estimated parameter values for each model obtained from dataset 1. The estimated parameters of our proposed model are α̂ = 2.7507, β̂ = 1.4068, and N̂ = 127.7906. Figure 1 shows a graphical representation of the cumulative number of failures and the estimated values at each time point in dataset 1. The black dots represent the actual data, and the solid red line represents the estimated values of the proposed SRM. The thick black dashed line represents the 95% confidence interval of the proposed model, and the thin black dashed line represents the 99% confidence interval of the proposed model.
Since it is not easy to judge how well the model fits from Figure 1 alone, we interpret the results in conjunction with Table 4, which shows the criteria calculated from the estimates of each model based on the parameter estimates in Table 3. MSE, RMSE, Adj_R², MAE, MEOP, PC, and PIC are the best, with values of 3.834, 1.958, 0.992, 1.878, 1.878, 1.691, 8.420, and 38.176, whereas R², PRV, RMSPE, and TS are the second best, with values of 0.994, 1.771, 1.771, and 2.156. The remaining criteria are also among the best of all the models. The newly proposed MCDMR, which considers all 15 criteria together, is also the best at 0.00253.
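The confidence bands shown in Figure 1 are commonly constructed for NHPP models by treating N(t) as approximately normal with mean and variance m̂(t), giving m̂(t) ± z·√m̂(t); this is an assumed standard construction, not necessarily the authors' exact procedure:

```python
import math
from scipy.stats import norm

def nhpp_confidence_interval(m_hat, level=0.95):
    """Commonly used confidence band for an NHPP mean value function:
    treat N(t) as approximately normal with mean and variance m(t),
    giving m(t) +/- z * sqrt(m(t)). Assumed construction, for illustration."""
    z = norm.ppf(0.5 + level / 2.0)  # two-sided normal quantile
    half = z * math.sqrt(m_hat)
    return m_hat - half, m_hat + half

lo95, hi95 = nhpp_confidence_interval(100.0, 0.95)
lo99, hi99 = nhpp_confidence_interval(100.0, 0.99)
assert lo99 < lo95 < 100.0 < hi95 < hi99  # the 99% band is wider
```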

4.4. Results on Dataset 2

Table 5 shows the estimated parameter values for each model obtained from dataset 2. The estimated parameter values of the proposed model are α ^ = 33.6027 , β ^ = 4.9010 , and N ^ = 167.9314 . Figure 2 shows a graphical representation of the cumulative number of failures and the estimated values at each time point in dataset 2. The black dots represent the actual data, and the solid red line represents the estimated values of the proposed SRM. The thick black dashed line represents the 95% confidence interval of the proposed model, and the thin black dashed line represents the 99% confidence interval of the proposed model.
Also, since it is not easy to judge how well the model fits from Figure 2 alone, we interpret the results in conjunction with Table 6, which shows the criteria calculated from the estimates of each model based on the parameter estimates in Table 5. MSE, RMSE, R², Adj_R², AIC, PC, and PIC have the best results, with values of 1.387, 1.178, 0.995, 0.994, 76.587, 4.891, and 28.301, whereas PP, MAE, BIC, PRV, RMSPE, MEOP, and TS are the second best, with values of 0.231, 0.998, 79.720, 1.117, 1.117, 0.946, and 4.047. The newly proposed MCDMR, which considers all 15 criteria together, is also the best at 0.00271.

5. Conclusions and Remarks

In this paper, we proposed a new NHPP SRM that considers uncertain operating environments. Unlike the existing NHPP SRMs that consider uncertain operating environments, the proposed model minimizes assumptions and fits general situations with fewer parameters. In addition, to demonstrate the superiority of the proposed model, a new MCDMR incorporating multiple criteria for comparing the models has been proposed. The proposed NHPP SRM, which considers uncertain operating environments, was compared with several traditional NHPP SRMs using two datasets, and its superiority was demonstrated.
The development of traditional SRMs is based on limited circumstances and fixed assumptions. However, as the software industry develops, software structures become increasingly complex, and various assumptions are added to predict software failures. This complicates the structure of SRMs and causes them to work well only in special cases. To improve this, research that minimizes these assumptions is greatly needed. In addition, it is very difficult for traditional NHPP SRMs to model data generated in real time. Therefore, research is being conducted on SRMs using data-driven machine learning algorithms [47]. Kumar and Singh [48] and Jaiswal and Malhotra [49] applied well-known machine learning methods such as artificial neural networks, support vector machines, cascade correlation neural networks, decision trees, and fuzzy inference systems to predict software reliability. Wang et al. [50] and Chen et al. [51] predicted software failures using recurrent neural networks that exploit the characteristics of sequential data and showed good performance. Zainuddin et al. [52] and Wang et al. [53] proposed models for predicting software failures using long short-term memory (LSTM) and gated recurrent units (GRU), which are extensions of recurrent neural networks. Future research should extend the model proposed in this study to general rather than specialized situations, for example by using machine learning and deep learning.

Author Contributions

Conceptualization, H.P.; funding acquisition, I.H.C. and K.Y.S.; software, Y.S.K.; writing—original draft, K.Y.S. and Y.S.K.; writing—review and editing, K.Y.S., I.H.C. and H.P.; visualization, K.Y.S. and Y.S.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Basic Science Research Program through the National Research Foundation (NRF) of Korea, funded by the Ministry of Education (NRF-2021R1F1A1048592 and NRF-2021R1I1A1A01059842), and was supported by the Global-Learning & Academic Research Institution for Master’s and PhD students, and Postdoc (LAMP) Program of the National Research Foundation of Korea (NRF) grant funded by the Ministry of Education (RS-2023-00285353).

Data Availability Statement

Data are available in a publicly accessible repository.

Acknowledgments

This research was supported by the Basic Science Research Program through the National Research Foundation (NRF) of Korea and the Global-Learning & Academic Research Institution for Master’s and PhD students and Postdoc (LAMP) Program of the National Research Foundation of Korea (NRF).

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Goel, A.L.; Okumoto, K. Time-Dependent Error-Detection Rate Model for Software Reliability and Other Performance Measures. IEEE Trans. Reliab. 1979, R-28, 206–211.
2. Hossain, S.A.; Dahiya, R.C. Estimating the Parameters of a Non-Homogeneous Poisson-Process Model for Software Reliability. IEEE Trans. Reliab. 1993, 42, 604–612.
3. Yamada, S.; Ohba, M.; Osaki, S. S-Shaped Reliability Growth Modeling for Software Error Detection. IEEE Trans. Reliab. 1983, R-32, 475–484.
4. Osaki, S.; Hatoyama, Y. Inflexion S-Shaped Software Reliability Growth Models. In Stochastic Models in Reliability Theory; Springer: Berlin/Heidelberg, Germany, 1984; pp. 144–162.
5. Zhang, X.; Teng, X.; Pham, H. Considering Fault Removal Efficiency in Software Reliability Assessment. IEEE Trans. Syst. Man Cybern.-Part A Syst. Hum. 2003, 33, 114–120.
6. Yamada, S.; Ohtera, H.; Narihisa, H. Software Reliability Growth Models with Testing-Effort. IEEE Trans. Reliab. 1986, 35, 19–23.
7. Yamada, S.; Tokuno, K.; Osaki, S. Imperfect Debugging Models with Fault Introduction Rate for Software Reliability Assessment. Int. J. Syst. Sci. 1992, 23, 2241–2252.
8. Pham, H.; Zhang, X. An NHPP Software Reliability Model and Its Comparison. Int. J. Reliab. Qual. Saf. Eng. 1997, 4, 269–282.
9. Pham, H.; Nordmann, L.; Zhang, Z. A General Imperfect-Software-Debugging Model with S-Shaped Fault-Detection Rate. IEEE Trans. Reliab. 1999, 48, 169–175.
10. Kapur, P.K.; Pham, H.; Anand, S.; Yadav, K. A Unified Approach for Developing Software Reliability Growth Models in the Presence of Imperfect Debugging and Error Generation. IEEE Trans. Reliab. 2011, 60, 331–340.
11. Pham, H. System Software Reliability; Springer: London, UK, 2006.
12. Roy, P.; Mahapatra, G.S.; Dey, K.N. An NHPP Software Reliability Growth Model with Imperfect Debugging and Error Generation. Int. J. Reliab. Qual. Saf. Eng. 2014, 21, 1450008.
13. Li, Q.; Pham, H. Modeling Software Fault-Detection and Fault-Correction Processes by Considering the Dependencies between Fault Amounts. Appl. Sci. 2021, 11, 6998.
14. Pan, Z.; Nonaka, Y. Importance Analysis for the Systems with Common Cause Failures. Reliab. Eng. Syst. Saf. 1995, 50, 297–300.
15. Kim, Y.S.; Song, K.Y.; Pham, H.; Chang, I.H. A Software Reliability Model with Dependent Failure and Optimal Release Time. Symmetry 2022, 14, 343.
16. Lee, D.H.; Chang, I.H.; Pham, H. Software Reliability Model with Dependent Failures and SPRT. Mathematics 2020, 8, 1366.
17. Pradhan, V.; Dhar, J.; Kumar, A. Testing Coverage-based Software Reliability Growth Model Considering Uncertainty of Operating Environment. Syst. Eng. 2023, 26, 449–462.
18. Teng, X.; Pham, H. A New Methodology for Predicting Software Reliability in the Random Field Environments. IEEE Trans. Reliab. 2006, 55, 458–468.
19. Pham, H. A New Software Reliability Model with Vtub-Shaped Fault-Detection Rate and the Uncertainty of Operating Environments. Optimization 2014, 63, 1481–1490.
20. Song, K.Y.; Chang, I.H.; Pham, H. A Three-Parameter Fault-Detection Software Reliability Model with the Uncertainty of Operating Environments. J. Syst. Sci. Syst. Eng. 2017, 26, 121–132.
21. Haque, M.A.; Ahmad, N. Software Reliability Modeling under an Uncertain Testing Environment. Int. J. Model. Simul. 2023, 1–7.
22. Cao, P.; Tang, G.; Zhang, Y.; Luo, Z. Qualitative Evaluation of Software Reliability Considering Many Uncertain Factors. In Ecosystem Assessment and Fuzzy Systems Management; Springer: Cham, Switzerland, 2014; pp. 199–205.
23. Pachauri, B.; Jain, A.; Raman, S. An Improved SRGM Considering Uncertain Operating Environment. Palest. J. Math. 2022, 11, 38–45.
24. Asraful Haque, M.; Ahmad, N. A Logistic Growth Model for Software Reliability Estimation Considering Uncertain Factors. Int. J. Reliab. Qual. Saf. Eng. 2021, 28, 2150032.
25. Chang, I.H.; Pham, H.; Lee, S.W.; Song, K.Y. A Testing-Coverage Software Reliability Model with the Uncertainty of Operating Environments. Int. J. Syst. Sci. Oper. Logist. 2014, 1, 220–227.
26. Lee, D.; Chang, I.; Pham, H. Study of a New Software Reliability Growth Model under Uncertain Operating Environments and Dependent Failures. Mathematics 2023, 11, 3810.
27. Singh, P. A Neutrosophic-Entropy Based Adaptive Thresholding Segmentation Algorithm: A Special Application in MR Images of Parkinson’s Disease. Artif. Intell. Med. 2020, 104, 101838.
28. Saxena, P.; Kumar, V.; Ram, M. A Novel CRITIC-TOPSIS Approach for Optimal Selection of Software Reliability Growth Model (SRGM). Qual. Reliab. Eng. Int. 2022, 38, 2501–2520.
29. Kumar, V.; Saxena, P.; Garg, H. Selection of Optimal Software Reliability Growth Models Using an Integrated Entropy–Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) Approach. Math. Methods Appl. Sci. 2021, 1–21.
30. Garg, R.; Raheja, S.; Garg, R.K. Decision Support System for Optimal Selection of Software Reliability Growth Models Using a Hybrid Approach. IEEE Trans. Reliab. 2022, 71, 149–161.
31. Musa, J.D. Software Reliability Data, Deposited in IEEE Computer Society Repository; IEEE: New York, NY, USA, 1979.
32. Ohba, M. Software Reliability Analysis Models. IBM J. Res. Dev. 1984, 28, 428–443.
33. Armstrong, J.S.; Collopy, F. Error Measures for Generalizing about Forecasting Methods: Empirical Comparisons. Int. J. Forecast. 1992, 8, 69–80.
34. Piper, E.L.; Boote, K.J.; Jones, J.W.; Grimm, S.S. Comparison of Two Phenology Models for Predicting Flowering and Maturity Date of Soybean. Crop Sci. 1996, 36, 1606–1614.
35. Iqbal, J. Software Reliability Growth Models: A Comparison of Linear and Exponential Fault Content Functions for Study of Imperfect Debugging Situations. Cogent Eng. 2017, 4, 1286739.
36. Akossou, A.Y.J.; Palm, R. Impact of Data Structure on the Estimators R-Square and Adjusted R-Square in Linear Regression. Int. J. Math. Comput. 2013, 20, 84–93.
37. Sharma, K.; Garg, R.; Nagpal, C.K.; Garg, R.K. Selection of Optimal Software Reliability Growth Models Using a Distance Based Approach. IEEE Trans. Reliab. 2010, 59, 266–276.
38. Willmott, C.; Matsuura, K. Advantages of the Mean Absolute Error (MAE) over the Root Mean Square Error (RMSE) in Assessing Average Model Performance. Clim. Res. 2005, 30, 79–82.
39. Kuha, J. AIC and BIC. Sociol. Methods Res. 2004, 33, 188–229.
40. Weakliem, D.L. A Critique of the Bayesian Information Criterion for Model Selection. Sociol. Methods Res. 1999, 27, 359–397.
41. Xu, J.; Yao, S. Software Reliability Growth Model with Partial Differential Equation for Various Debugging Processes. Math. Probl. Eng. 2016, 2016, 2476584.
42. Witt, S.F.; Witt, C.A. Modeling and Forecasting Demand in Tourism; Academic Press: London, UK, 1992.
43. Allen, D.M. Mean Square Error of Prediction as a Criterion for Selecting Variables. Technometrics 1971, 13, 469–475.
44. Ali, N.N. A Comparison of Some Information Criteria to Select a Weather Forecast Model. Turk. J. Comput. Math. Educ. (TURCOMAT) 2021, 12, 2494–2500.
45. Pham, H. On Estimating the Number of Deaths Related to COVID-19. Mathematics 2020, 8, 655.
46. Wang, L.; Hu, Q.; Liu, J. Software Reliability Growth Modeling and Analysis with Dual Fault Detection and Correction Processes. IIE Trans. 2016, 48, 359–370.
47. Gamiz, M.L.; Navas-Gomez, F.J.; Raya-Miranda, R. A Machine Learning Algorithm for Reliability Analysis. IEEE Trans. Reliab. 2021, 70, 535–546.
48. Kumar, P.; Singh, Y. An Empirical Study of Software Reliability Prediction Using Machine Learning Techniques. Int. J. Syst. Assur. Eng. Manag. 2012, 3, 194–208.
49. Jaiswal, A.; Malhotra, R. Software Reliability Prediction Using Machine Learning Techniques. Int. J. Syst. Assur. Eng. Manag. 2018, 9, 230–244.
50. Wang, J.; Zhang, C. Software Reliability Prediction Using a Deep Learning Model Based on the RNN Encoder–Decoder. Reliab. Eng. Syst. Saf. 2018, 170, 73–82.
  51. Li, C.; Zheng, J.; Okamura, H.; Dohi, T. Software Reliability Prediction through Encoder-Decoder Recurrent Neural Networks. Int. J. Math. Eng. Manag. Sci. 2022, 7, 325–340. [Google Scholar] [CrossRef]
  52. Zainuddin, Z.; EA, P.A.; Hasan, M.H. Predicting Machine Failure Using Recurrent Neural Network-Gated Recurrent Unit (RNN-GRU) through Time Series Data. Bull. Electr. Eng. Inform. 2021, 10, 870–878. [Google Scholar]
  53. Wang, H.; Zhuang, W.; Zhang, X. Software Defect Prediction Based on Gated Hierarchical LSTMs. IEEE Trans. Reliab. 2021, 70, 711–727. [Google Scholar] [CrossRef]
Figure 1. Prediction of all models for dataset 1.
Figure 2. Prediction of all models for dataset 2.
Table 1. SRMs.

No. | Model | Mean value function | Note
1 | Goel-Okumoto (GO) [1] | $m(t)=a\left(1-e^{-bt}\right)$ | Concave
2 | Hossain-Dahiya (HDGO) [2] | $m(t)=\log\left(\dfrac{e^{a}-c}{e^{ae^{-bt}}-c}\right)$ | Concave
3 | Yamada et al. (DS) [3] | $m(t)=a\left(1-(1+bt)e^{-bt}\right)$ | S-Shape
4 | Ohba (IS) [4] | $m(t)=\dfrac{a\left(1-e^{-bt}\right)}{1+\beta e^{-bt}}$ | S-Shape
5 | Yamada et al. (YE) [6] | $m(t)=a\left(1-e^{-\gamma\alpha\left(1-e^{-\beta t}\right)}\right)$ | Concave
6 | Yamada et al. (YR) [6] | $m(t)=a\left(1-e^{-\gamma\alpha\left(1-e^{-\beta t^{2}/2}\right)}\right)$ | S-Shape
7 | Yamada et al. (YID 1) [7] | $m(t)=\dfrac{ab}{\alpha+b}\left(e^{\alpha t}-e^{-bt}\right)$ | Concave
8 | Yamada et al. (YID 2) [7] | $m(t)=a\left(1-e^{-bt}\right)\left(1-\dfrac{\alpha}{b}\right)+\alpha at$ | Concave
9 | Pham-Zhang (PZ) [8] | $m(t)=\dfrac{(c+a)\left(1-e^{-bt}\right)-\frac{ab}{b-\alpha}\left(e^{-\alpha t}-e^{-bt}\right)}{1+\beta e^{-bt}}$ | Both
10 | Pham et al. (PNZ) [9] | $m(t)=\dfrac{a\left(1-e^{-bt}\right)\left(1-\frac{\alpha}{b}\right)+\alpha at}{1+\beta e^{-bt}}$ | Both
11 | Pham (IFD) [11] | $m(t)=a\left(1-e^{-bt}\right)\left(1+(b+d)t+bdt^{2}\right)$ | Concave
12 | Roy et al. (RMD) [12] | $m(t)=a\alpha\left(1-e^{-bt}\right)-\dfrac{ab}{b-\beta}\left(e^{-\beta t}-e^{-bt}\right)$ | Concave
13 | Zhang et al. (ZFR) [5] | $m(t)=\dfrac{a}{p-\beta}\left(1-\left(\dfrac{(1+\alpha)e^{-bt}}{1+\alpha e^{-bt}}\right)^{\frac{c}{b}(p-\beta)}\right)$ | S-Shape
14 | Kapur et al. (KSRGM) [10] | $m(t)=\dfrac{A}{1-\alpha}\left(1-\left(\left(1+bt+\dfrac{b^{2}t^{2}}{2}\right)e^{-bt}\right)^{p(1-\alpha)}\right)$ | S-Shape
15 | Chang et al. (TC) [25] | $m(t)=N\left(1-\left(\dfrac{\beta}{\beta+(at)^{b}}\right)^{\alpha}\right)$ | Both
16 | Teng-Pham (TP) [18] | $m(t)=\dfrac{a}{p-q}\left(1-\left(\dfrac{\beta}{\beta+(p-q)\ln\left(\frac{c+e^{bt}}{c+1}\right)}\right)^{\alpha}\right)$ | S-Shape
17 | Song et al. (3P) [20] | $m(t)=N\left(1-\dfrac{\beta}{\beta+\frac{a}{b}\ln\left(\frac{(1+c)e^{bt}}{1+ce^{bt}}\right)}\right)$ | S-Shape
18 | Pham (Vtub) [19] | $m(t)=N\left(1-\left(\dfrac{\beta}{\beta+a^{t^{b}}-1}\right)^{\alpha}\right)$ | S-Shape
19 | Kim et al. (DPF1) [15] | $m(t)=\dfrac{a}{1+ah\left(\frac{1+c}{c+e^{bt}}\right)^{a}}$ | Concave, Dependent
20 | Lee et al. (DPF2) [16] | $m(t)=\dfrac{a}{1+ah\left(\frac{b+c}{c+be^{bt}}\right)^{a/b}}$ | Concave, Dependent
21 | Lee et al. (UOP) [26] | $m(t)=N\left(1-\left(\dfrac{\beta}{\beta+\frac{c}{b}\ln\left(\frac{\alpha+e^{bt}}{1+\alpha}\right)}\right)^{\alpha}\right)$ | S-Shape, Dependent
22 | Proposed Model | $m(t)=N\left(1-\left(\dfrac{\beta}{\beta+\beta t-\ln(\beta t+1)}\right)^{\alpha}\right)$ | S-Shape
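Several of the Table 1 models (TC, TP, 3P, Vtub, UOP, and the proposed model) share the uncertain-operating-environment template $m(t)=N\left(1-\left(\frac{\beta}{\beta+G(t)}\right)^{\alpha}\right)$, where $G(t)$ is an accumulated fault-detection term and $\alpha$, $\beta$ play the role of shape and scale parameters of the environmental uncertainty. The following minimal sketch (function names are illustrative, not from the paper) evaluates the template with the TC testing-coverage choice $G(t)=(at)^{b}$ and the dataset-1 estimates reported in Table 3:

```python
import math

def m_uncertain(t, N, alpha, beta, G):
    """Uncertain-operating-environment template shared by several Table 1
    models: m(t) = N * (1 - (beta / (beta + G(t)))**alpha)."""
    return N * (1.0 - (beta / (beta + G(t))) ** alpha)

# TC (Chang et al.) takes G(t) = (a*t)**b; the numbers below are the
# dataset-1 estimates reported in Table 3.
a_hat, b_hat = 0.123963, 0.887039

def tc_coverage(t):
    """Testing-coverage term G(t) = (a t)^b of the TC model."""
    return (a_hat * t) ** b_hat

for week in (0, 1, 5, 21):
    print(week, round(m_uncertain(week, N=125.3202, alpha=2.764066,
                                  beta=1.760686, G=tc_coverage), 2))
```

Because $G(0)=0$, the template guarantees $m(0)=0$, and $m(t)\to N$ as $G(t)$ grows, so $N$ is directly interpretable as the total fault content.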
Table 2. Criteria for model comparisons.

No. | Criteria | Formula
1 | MSE | $\dfrac{\sum_{i=1}^{n}\left(\hat{m}(t_i)-y_i\right)^{2}}{n-m}$
2 | RMSE | $\sqrt{\dfrac{\sum_{i=1}^{n}\left(\hat{m}(t_i)-y_i\right)^{2}}{n-m}}$
3 | PRR | $\sum_{i=1}^{n}\left(\dfrac{\hat{m}(t_i)-y_i}{\hat{m}(t_i)}\right)^{2}$
4 | PP | $\sum_{i=1}^{n}\left(\dfrac{\hat{m}(t_i)-y_i}{y_i}\right)^{2}$
5 | $R^2$ | $1-\dfrac{\sum_{i=1}^{n}\left(\hat{m}(t_i)-y_i\right)^{2}}{\sum_{i=1}^{n}\left(y_i-\bar{y}\right)^{2}}$
6 | adj_$R^2$ | $1-\dfrac{\left(1-R^{2}\right)(n-1)}{n-m-1}$
7 | MAE | $\dfrac{\sum_{i=1}^{n}\left|\hat{m}(t_i)-y_i\right|}{n-m}$
8 | AIC | $-2\log L+2m$
9 | BIC | $-2\log L+m\log n$
10 | PRV | $\sqrt{\dfrac{\sum_{i=1}^{n}\left(y_i-\hat{m}(t_i)-\frac{\sum_{i=1}^{n}\left(\hat{m}(t_i)-y_i\right)}{n}\right)^{2}}{n-1}}$
11 | RMSPE | $\sqrt{\mathrm{PRV}^{2}+\left(\dfrac{\sum_{i=1}^{n}\left(\hat{m}(t_i)-y_i\right)}{n}\right)^{2}}$
12 | MEOP | $\dfrac{\sum_{i=1}^{n}\left|\hat{m}(t_i)-y_i\right|}{n-m+1}$
13 | TS | $100\sqrt{\dfrac{\sum_{i=1}^{n}\left(y_i-\hat{m}(t_i)\right)^{2}}{\sum_{i=1}^{n}y_i^{2}}}$
14 | PIC | $\sum_{i=1}^{n}\left(\hat{m}(t_i)-y_i\right)^{2}+m\dfrac{n-1}{n-m}$
15 | PC | $\dfrac{n-m}{2}\log\left(\dfrac{\sum_{i=1}^{n}\left(\hat{m}(t_i)-y_i\right)^{2}}{n}\right)+m\dfrac{n-1}{n-m}$
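Most of the Table 2 criteria are simple functions of the residuals between the fitted mean value function $\hat{m}(t_i)$ and the observed cumulative failure counts $y_i$. A small self-contained sketch of a subset of them (MSE, RMSE, MAE, $R^2$, TS); the function name is illustrative:

```python
import math

def fit_criteria(y, m_hat, num_params):
    """Evaluate a subset of the Table 2 criteria.

    y          : observed cumulative failure counts y_1..y_n
    m_hat      : fitted values m^(t_1)..m^(t_n)
    num_params : m, the number of model parameters
    """
    n, m = len(y), num_params
    resid = [mh - yi for mh, yi in zip(m_hat, y)]
    sse = sum(r * r for r in resid)          # sum of squared errors
    y_bar = sum(y) / n
    return {
        "MSE": sse / (n - m),
        "RMSE": math.sqrt(sse / (n - m)),
        "MAE": sum(abs(r) for r in resid) / (n - m),
        "R2": 1.0 - sse / sum((yi - y_bar) ** 2 for yi in y),
        "TS": 100.0 * math.sqrt(sse / sum(yi * yi for yi in y)),
    }
```

A perfect fit gives MSE = 0 and $R^2=1$; note that MSE, RMSE, and MAE in Table 2 penalize the parameter count $m$ through the $n-m$ denominator, which distinguishes them from their textbook versions.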
Table 3. Parameter estimation of model from dataset 1.

No | Model | $\hat{a}$ | $\hat{b}$ | $\hat{\alpha}$ | $\hat{\beta}$ | $\hat{N}$ | $\hat{\gamma}$ | $\hat{c}$ | $\hat{p}$ | $\hat{q}$ | $\hat{d}$ | $\hat{h}$
1 | GO | 103.9957 | 0.25072 | | | | | | | | |
2 | HDGO | 103.9956 | 0.25072 | | | | | 0.003999 | | | |
3 | DS | 94.42989 | 0.64986 | | | | | | | | |
4 | IS | 103.9919 | 0.250789 | | 0.000336 | | | | | | |
5 | YE | 135.4096 | | 0.079955 | 0.118615 | | 22.03046 | | | | |
6 | YR | 99.87051 | | 0.30301 | 0.081458 | | 8.609839 | | | | |
7 | YID1 | 80.07877 | 0.368316 | 0.026422 | | | | | | | |
8 | YID2 | 77.60045 | 0.381731 | 0.033943 | | | | | | | |
9 | PZ | 94.9835 | 4.728741 | 0.203146 | 3.469121 | | | 13.93669 | | | |
10 | PNZ | 77.57743 | 0.382112 | 0.033973 | 0.000898 | | | | | | |
11 | IFD | 11.11251 | 0.830547 | | | | | | | | 1.07 × 10⁻⁷ |
12 | RMD | 20.84912 | 0.289209 | 5.46394 | 0.069861 | | | | | | |
13 | ZFR | 16.75903 | 0.009765 | 0.010496 | 0.158331 | | | 1.571965 | 0.319499 | | |
14 | KSRGM | 50.34612 | 67.63962 | 0.504539 | | | | | 0.008162 | | |
15 | TC | 0.123963 | 0.887039 | 2.764066 | 1.760686 | 125.3202 | | | | | |
16 | TP | 273.867 | 0.001876 | 78.32247 | 0.094677 | | | 15.26608 | 3.2414 | 0.611472 | |
17 | 3P | 7.009842 | 5.44 × 10⁻⁸ | | 0.131806 | 137.9787 | | 234.0264 | | | |
18 | Vtub | 1.050673 | 0.921887 | 1.257807 | 0.251814 | 127.3638 | | | | | |
19 | DPF1 | 99.2143 | 0.00489 | | | | | 0.060524 | | | | 28.30202
20 | DPF2 | 99.21582 | 0.000271 | | | | | 0.058441 | | | | 28.30183
21 | UOP | 0.959106 | 0.248168 | 1.021567 | 0.927368 | 133.764 | | | | | |
22 | NEW | | | 2.750654 | 1.406831 | 127.7906 | | | | | |
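The estimates in Tables 3 and 5 come from fitting each mean value function to the observed failure data; the optimizer itself is not restated here, so the sketch below illustrates the idea with ordinary least squares on the GO model, minimized by a simple iterative grid refinement (an assumption — any standard LSE or MLE routine would serve):

```python
import math

def go_model(t, a, b):
    """Goel-Okumoto mean value function m(t) = a(1 - exp(-b t))."""
    return a * (1.0 - math.exp(-b * t))

def lse_fit_go(times, y, a_box, b_box, grid=60, rounds=4):
    """Least-squares fit of (a, b) by repeatedly shrinking a search box
    around the best grid point; purely illustrative of how parameter
    estimates such as those in Table 3 are obtained."""
    (a_lo, a_hi), (b_lo, b_hi) = a_box, b_box
    best = None  # (sse, a, b)
    for _ in range(rounds):
        a_step = (a_hi - a_lo) / grid
        b_step = (b_hi - b_lo) / grid
        for i in range(grid + 1):
            a = a_lo + i * a_step
            for j in range(grid + 1):
                b = b_lo + j * b_step
                sse = sum((go_model(t, a, b) - yi) ** 2
                          for t, yi in zip(times, y))
                if best is None or sse < best[0]:
                    best = (sse, a, b)
        # shrink the box to a few grid cells around the current optimum
        _, a, b = best
        a_lo, a_hi = a - 2 * a_step, a + 2 * a_step
        b_lo, b_hi = max(b - 2 * b_step, 1e-9), b + 2 * b_step
    return best[1], best[2]

# Sanity check: recover known parameters from synthetic, noise-free GO data.
times = list(range(1, 22))
data = [go_model(t, 100.0, 0.25) for t in times]
a_est, b_est = lse_fit_go(times, data, (50.0, 150.0), (0.01, 1.0))
```

On real failure data the squared-error surface is less benign than in this synthetic check, which is why multiple starting boxes (or a dedicated nonlinear least-squares solver) are advisable in practice.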
Table 4. Comparison of all criteria from dataset 1.

Criteria | GO | HDGO | DS | IS | YE | YR | YID1 | YID2 | PZ | PNZ | IFD
MSE | 6.474 | 7.193 | 44.306 | 7.196 | 5.728 | 87.848 | 3.994 | 4.040 | 5.413 | 4.546 | 291.502
RMSE | 2.544 | 2.682 | 6.656 | 2.683 | 2.393 | 9.373 | 1.999 | 2.010 | 2.327 | 2.132 | 17.073
PRR | 0.037 | 0.037 | 1.205 | 0.037 | 0.019 | 3.155 | 0.012 | 0.011 | 0.005 | 0.011 | 3.110
PP | 0.029 | 0.029 | 0.323 | 0.029 | 0.016 | 0.505 | 0.011 | 0.011 | 0.005 | 0.011 | 0.900
R2 | 0.990 | 0.990 | 0.928 | 0.990 | 0.993 | 0.886 | 0.994 | 0.994 | 0.994 | 0.994 | 0.574
adjR2 | 0.987 | 0.986 | 0.912 | 0.986 | 0.988 | 0.821 | 0.992 | 0.992 | 0.989 | 0.991 | 0.414
MAE | 2.274 | 2.526 | 5.566 | 2.526 | 2.510 | 8.872 | 1.948 | 1.963 | 2.396 | 2.210 | 18.217
AIC | 67.156 | 69.156 | 115.306 | 69.160 | 67.385 | 158.805 | 61.394 | 61.667 | 68.850 | 63.665 | 100.809
BIC | 68.126 | 70.610 | 116.275 | 70.614 | 69.325 | 160.745 | 62.848 | 63.121 | 71.275 | 65.605 | 102.264
PRV | 2.412 | 2.412 | 6.230 | 2.412 | 2.036 | 7.823 | 1.807 | 1.817 | 1.856 | 1.817 | 14.116
RMSPE | 2.425 | 2.425 | 6.337 | 2.425 | 2.041 | 7.979 | 1.808 | 1.818 | 1.856 | 1.818 | 15.337
MEOP | 2.067 | 2.274 | 5.060 | 2.274 | 2.231 | 7.886 | 1.753 | 1.766 | 2.097 | 1.965 | 16.395
TS | 2.953 | 2.953 | 7.725 | 2.953 | 2.484 | 9.729 | 2.200 | 2.213 | 2.259 | 2.213 | 18.797
PC | 10.627 | 11.251 | 20.244 | 11.253 | 10.860 | 21.781 | 8.604 | 8.656 | 11.881 | 9.935 | 27.910
PIC | 66.939 | 68.405 | 445.261 | 68.429 | 51.327 | 708.287 | 39.612 | 40.030 | 45.746 | 41.864 | 2627.181
MCDMR | 0.01486 | 0.02002 | 0.10848 | 0.02181 | 0.01322 | 0.18699 | 0.00254 | 0.00414 | 0.01381 | 0.00577 | 0.00904

Criteria | RMD | ZFR | KSRGM | TC | TP | 3P | Vtub | DPF1 | DPF2 | UOP | NEW
MSE | 6.760 | 10.797 | 6.542 | 5.166 | 12.833 | 5.352 | 5.192 | 10.760 | 10.760 | 4.798 | 3.834
RMSE | 2.600 | 3.286 | 2.558 | 2.273 | 3.582 | 2.314 | 2.279 | 3.280 | 3.280 | 2.190 | 1.958
PRR | 0.028 | 0.037 | 0.062 | 0.006 | 0.037 | 0.009 | 0.006 | 0.026 | 0.026 | 0.005 | 0.005
PP | 0.023 | 0.029 | 0.043 | 0.006 | 0.029 | 0.009 | 0.006 | 0.030 | 0.030 | 0.005 | 0.005
R2 | 0.991 | 0.990 | 0.992 | 0.994 | 0.990 | 0.994 | 0.994 | 0.986 | 0.986 | 0.995 | 0.994
adjR2 | 0.986 | 0.977 | 0.987 | 0.989 | 0.971 | 0.989 | 0.989 | 0.978 | 0.978 | 0.990 | 0.992
MAE | 2.656 | 3.790 | 2.495 | 2.425 | 4.528 | 2.531 | 2.458 | 3.254 | 3.254 | 2.326 | 1.878
AIC | 68.697 | 75.164 | 61.969 | 67.149 | 77.048 | 67.226 | 66.973 | 81.712 | 81.710 | 67.235 | 63.825
BIC | 70.636 | 78.074 | 63.908 | 69.573 | 80.443 | 69.650 | 69.398 | 83.651 | 83.649 | 69.660 | 65.280
PRV | 2.208 | 2.413 | 2.152 | 1.813 | 2.401 | 1.845 | 1.818 | 2.797 | 2.797 | 1.747 | 1.771
RMSPE | 2.217 | 2.426 | 2.179 | 1.813 | 2.414 | 1.846 | 1.818 | 2.797 | 2.797 | 1.747 | 1.771
MEOP | 2.361 | 3.249 | 2.218 | 2.122 | 3.773 | 2.215 | 2.150 | 2.893 | 2.893 | 2.036 | 1.691
TS | 2.699 | 2.954 | 2.655 | 2.207 | 2.940 | 2.246 | 2.212 | 3.405 | 3.405 | 2.127 | 2.156
PC | 11.522 | 16.058 | 11.391 | 11.718 | 19.591 | 11.842 | 11.736 | 13.382 | 13.381 | 11.459 | 8.420
PIC | 59.581 | 75.779 | 57.836 | 44.021 | 79.566 | 45.323 | 44.203 | 91.581 | 91.579 | 41.444 | 38.176
MCDMR | 0.03436 | 0.01289 | 0.00987 | 0.39357 | 0.01903 | 0.03164 | 0.01472 | 0.03334 | 0.03262 | 0.00745 | 0.00253
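The MCDMR rows of Tables 4 and 6 come from the paper's multi-criteria decision method using ranking, which scores each model across all criteria simultaneously. Its exact weighting is defined in the body of the paper; the sketch below shows only the generic rank-then-aggregate idea with equal weights (the function name and the equal-weight choice are illustrative assumptions):

```python
def rank_aggregate(scores, higher_is_better=frozenset()):
    """Rank every model under each criterion (1 = best), then average the
    ranks; a lower aggregate score indicates a better overall model."""
    models = list(next(iter(scores.values())))
    totals = dict.fromkeys(models, 0.0)
    for crit, vals in scores.items():
        # Criteria such as R^2 are better when larger; the rest when smaller.
        ordered = sorted(models, key=vals.get,
                         reverse=crit in higher_is_better)
        for rank, name in enumerate(ordered, start=1):
            totals[name] += rank
    return {name: total / len(scores) for name, total in totals.items()}

# Toy illustration with three dataset-1 values taken from Table 4.
scores = {
    "MSE": {"GO": 6.474, "DS": 44.306, "NEW": 3.834},
    "R2":  {"GO": 0.990, "DS": 0.928, "NEW": 0.994},
}
avg_rank = rank_aggregate(scores, higher_is_better={"R2"})
```

On these two criteria the proposed model (NEW) obtains the best average rank, consistent with the small MCDMR values it attains in Table 4.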
Table 5. Parameter estimation of model from dataset 2.

No | Model | $\hat{a}$ | $\hat{b}$ | $\hat{\alpha}$ | $\hat{\beta}$ | $\hat{N}$ | $\hat{\gamma}$ | $\hat{c}$ | $\hat{p}$ | $\hat{q}$ | $\hat{d}$ | $\hat{h}$
1 | GO | 258479.7 | 8.27 × 10⁻⁶ | | | | | | | | |
2 | HDGO | 709.7827 | 0.003083 | | | | | 0.462394 | | | |
3 | DS | 77.25299 | 0.096622 | | | | | | | | |
4 | IS | 59.28546 | 0.16839 | | 8.278172 | | | | | | |
5 | YE | 4709.105 | | 2.411867 | 0.000372 | | 0.507832 | | | | |
6 | YR | 77.83392 | | 2.677162 | 0.004541 | | 0.522922 | | | | |
7 | YID1 | 19614.07 | 8.39 × 10⁻⁵ | 0.030908 | | | | | | | |
8 | YID2 | 1.491036 | 0.306843 | 1.745694 | | | | | | | |
9 | PZ | 17.01691 | 0.153619 | 0.16534 | 5.82304 | | | 44.56855 | | | |
10 | PNZ | 11.71105 | 0.195293 | 0.198144 | 1.382584 | | | | | | |
11 | IFD | 8.684959 | 0.138925 | | | | | | | | 0.023631 |
12 | RMD | 104.218 | 0.033806 | 1.146429 | 0.149771 | | | | | | |
13 | ZFR | 38.40169 | 0.193953 | 5.65856 | 0.17475 | | | 0.178089 | 0.753317 | | |
14 | KSRGM | 32.74804 | 0.98946 | 0.821748 | | | | | 0.108743 | | |
15 | TC | 0.019064 | 1.567033 | 839.154 | 221.1735 | 78.78594 | | | | | |
16 | TP | 102.844 | 0.173628 | 1.206102 | 1.133787 | | | 16.86387 | 1.556427 | 0.089124 | |
17 | 3P | 0.49233 | 0.168722 | | 0.240659 | 61.55869 | | 116.1341 | | | |
18 | Vtub | 1.970059 | 0.689159 | 0.292767 | 19.85291 | 87.25193 | | | | | |
19 | DPF1 | 51.44718 | 0.004767 | | | | | 0.028006 | | | | 2.634778
20 | DPF2 | 51.45985 | 5.03 × 10⁻⁵ | | | | | 0.010786 | | | | 2.633591
21 | UOP | 0.467491 | 0.042319 | 1.537043 | 1.497623 | 226.1843 | | | | | |
22 | NEW | | | 33.62065 | 4.901007 | 167.9314 | | | | | |
Table 6. Comparison of all criteria from dataset 2.

Criteria | GO | HDGO | DS | IS | YE | YR | YID1 | YID2 | PZ | PNZ | IFD
MSE | 6.568 | 7.728 | 1.637 | 1.395 | 7.559 | 2.420 | 2.936 | 1.701 | 1.564 | 1.646 | 2.104
RMSE | 2.563 | 2.780 | 1.279 | 1.181 | 2.749 | 1.556 | 1.714 | 1.304 | 1.251 | 1.283 | 1.451
PRR | 0.805 | 0.863 | 26.321 | 0.679 | 0.817 | 55.688 | 0.356 | 3.064 | 0.843 | 0.968 | 0.451
PP | 1.852 | 2.041 | 1.208 | 0.297 | 1.889 | 1.588 | 0.526 | 0.608 | 0.331 | 0.369 | 0.351
R2 | 0.972 | 0.969 | 0.993 | 0.994 | 0.972 | 0.991 | 0.988 | 0.993 | 0.994 | 0.994 | 0.992
adjR2 | 0.969 | 0.964 | 0.992 | 0.993 | 0.964 | 0.989 | 0.986 | 0.992 | 0.993 | 0.992 | 0.990
MAE | 2.232 | 2.525 | 1.107 | 0.973 | 2.545 | 1.477 | 1.542 | 1.170 | 1.104 | 1.157 | 1.305
AIC | 77.325 | 79.554 | 78.118 | 76.699 | 81.386 | 83.031 | 79.127 | 78.662 | 80.793 | 79.560 | 78.284
BIC | 79.414 | 82.688 | 80.207 | 79.832 | 85.564 | 87.209 | 82.261 | 81.796 | 86.016 | 83.738 | 81.417
PRV | 2.325 | 2.459 | 1.224 | 1.120 | 2.371 | 1.375 | 1.606 | 1.236 | 1.118 | 1.183 | 1.372
RMSPE | 2.490 | 2.629 | 1.246 | 1.120 | 2.527 | 1.432 | 1.625 | 1.237 | 1.119 | 1.183 | 1.376
MEOP | 2.120 | 2.392 | 1.052 | 0.921 | 2.403 | 1.395 | 1.461 | 1.109 | 1.039 | 1.093 | 1.236
TS | 9.047 | 9.552 | 4.516 | 4.058 | 9.181 | 5.195 | 5.887 | 4.481 | 4.052 | 4.284 | 4.984
PC | 19.035 | 20.350 | 5.834 | 4.940 | 20.103 | 10.423 | 11.639 | 6.726 | 7.655 | 7.146 | 8.642
PIC | 126.890 | 142.437 | 33.200 | 28.438 | 133.214 | 45.851 | 56.180 | 33.948 | 31.280 | 32.690 | 41.213
MCDMR | 0.09265 | 0.11500 | 0.03568 | 0.00512 | 0.10967 | 0.08481 | 0.04813 | 0.02263 | 0.01316 | 0.01963 | 0.03299

Criteria | RMD | ZFR | KSRGM | TC | TP | 3P | Vtub | DPF1 | DPF2 | UOP | NEW
MSE | 1.616 | 1.671 | 1.850 | 1.723 | 1.794 | 1.569 | 1.544 | 2.002 | 2.001 | 1.573 | 1.387
RMSE | 1.271 | 1.293 | 1.360 | 1.313 | 1.339 | 1.253 | 1.243 | 1.415 | 1.415 | 1.254 | 1.178
PRR | 3.112 | 0.797 | 36.310 | 6.012 | 0.707 | 0.681 | 0.561 | 0.336 | 0.336 | 0.271 | 0.375
PP | 0.611 | 0.321 | 1.174 | 0.760 | 0.303 | 0.297 | 0.270 | 0.583 | 0.582 | 0.193 | 0.231
R2 | 0.994 | 0.994 | 0.993 | 0.994 | 0.994 | 0.994 | 0.995 | 0.992 | 0.992 | 0.994 | 0.995
adjR2 | 0.992 | 0.992 | 0.991 | 0.992 | 0.991 | 0.993 | 0.993 | 0.991 | 0.991 | 0.993 | 0.994
MAE | 1.150 | 1.179 | 1.239 | 1.207 | 1.257 | 1.095 | 1.094 | 1.204 | 1.203 | 1.131 | 0.998
AIC | 80.090 | 82.790 | 83.337 | 82.566 | 84.738 | 80.702 | 80.602 | 78.795 | 78.792 | 80.563 | 76.587
BIC | 84.268 | 89.057 | 87.515 | 87.788 | 92.049 | 85.924 | 85.824 | 82.973 | 82.970 | 85.786 | 79.720
PRV | 1.170 | 1.119 | 1.245 | 1.169 | 1.120 | 1.120 | 1.110 | 1.300 | 1.300 | 1.122 | 1.117
RMSPE | 1.172 | 1.119 | 1.254 | 1.174 | 1.121 | 1.120 | 1.111 | 1.304 | 1.304 | 1.122 | 1.117
MEOP | 1.086 | 1.105 | 1.170 | 1.136 | 1.173 | 1.030 | 1.030 | 1.137 | 1.136 | 1.065 | 0.946
TS | 4.244 | 4.054 | 4.542 | 4.252 | 4.059 | 4.058 | 4.025 | 4.725 | 4.723 | 4.063 | 4.047
PC | 6.987 | 9.326 | 8.140 | 8.425 | 11.252 | 7.679 | 7.549 | 8.810 | 8.805 | 7.700 | 4.891
PIC | 32.171 | 33.061 | 36.160 | 33.810 | 35.114 | 31.356 | 30.952 | 38.740 | 38.719 | 31.423 | 28.301
MCDMR | 0.01999 | 0.02143 | 0.05685 | 0.03090 | 0.02850 | 0.01373 | 0.00941 | 0.03148 | 0.02953 | 0.01523 | 0.00271

Share and Cite

MDPI and ACS Style

Song, K.Y.; Kim, Y.S.; Pham, H.; Chang, I.H. A Software Reliability Model Considering a Scale Parameter of the Uncertainty and a New Criterion. Mathematics 2024, 12, 1641. https://doi.org/10.3390/math12111641
