This Special Issue is devoted to probability, statistics, stochastic processes, and their various applications in the analysis of systems and networks. It includes works related to the analysis and applications of different queuing models, beginning with general approaches to modeling queuing systems and networks. Significant attention is devoted to probabilistic and statistical methods in telecommunications; particular emphasis is placed on the asymptotic analysis of queuing networks under heavy load, where original approaches are being developed, and on the calculation of distributions in retrial queuing systems. We also welcome considerations of general complex networks and their structures in terms of, e.g., topology and graph theory; mathematical methods and models in smart cities; specialized statistical methods, such as statistical estimation in biology/ecology, medicine, and neural networks; and works on parameter estimation in complex technical systems, etc.
The authors’ geographical distribution is shown in Table 1; the 21 authors are from eight different countries. Note that a paper is often written by more than one author, and that authors frequently collaborate with colleagues who have different or multiple affiliations.
Queuing systems are widely used in many areas of real life. While single-server queuing systems suffice in some cases, multi-server systems can efficiently handle the most complex applications. Multi-server queuing systems (compared to well-designed single-server systems) are more complex and more difficult to analyze, especially when the arrival time distribution is arbitrary. The paper [1] is devoted to the analytical and computational analysis of queue length distributions for a complex multi-server queuing system. Introducing a quorum further complicates the model; in view of this, a two-dimensional Markov chain must be employed. To the best of the authors' knowledge, this system has not been considered before. An elegant closed-form analytical solution and an efficient algorithm for obtaining the queue length distribution at three different epochs are presented.
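As context for the multi-server setting, the sketch below computes the stationary queue length distribution of a classical M/M/c queue from the standard birth-death balance equations. It is a minimal baseline illustration only, not the quorum model of [1]; the arrival rate lam, service rate mu, server count c, and truncation level N are hypothetical parameters.

```python
import math

def mmc_queue_length_distribution(lam, mu, c, N=200):
    """Stationary distribution of an M/M/c queue, truncated at N jobs.

    Standard birth-death result: for n < c, pi_n = pi_0 * (lam/mu)**n / n!;
    for n >= c, pi_n = pi_0 * (lam/mu)**n / (c! * c**(n - c)).
    Requires lam < c * mu for stability.
    """
    a = lam / mu  # offered load
    weights = []
    for n in range(N + 1):
        if n < c:
            w = a ** n / math.factorial(n)
        else:
            w = a ** n / (math.factorial(c) * c ** (n - c))
        weights.append(w)
    total = sum(weights)
    return [w / total for w in weights]

# Example: 3 servers, per-server utilization lam / (c * mu) = 0.8.
pi = mmc_queue_length_distribution(lam=2.4, mu=1.0, c=3)
print("P(empty system) =", pi[0])
print("P(all servers busy) =", sum(pi[3:]))
```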
Specialists in medical geography and zoogeography, mining, applications of meteorology to field problems, etc., have considerable interest in large or extreme outliers in sets of empirical data. The following aspects matter to these specialists: the essential importance of large outliers, the risk of errors when large outliers are studied by standard, previously applied methods, the speed of information processing, and the ease of interpreting the results obtained. To meet these requirements, algorithms for interval pattern recognition and accompanying auxiliary computational procedures were developed in [2]. These algorithms were developed for the specific samples provided by users (short samples, the presence of rare events in them, or the difficulty of constructing interpretation scenarios). What they have in common is that either original optimization procedures are constructed for them or known optimization procedures are employed. The authors present a series of results on the processing of observations through the extraction of large outliers, both in time series and in planar and spatial observations. The algorithms presented in [2] are fast, are sufficiently valid in terms of specially selected indices, and have been tested on specific measurements, accompanied by meaningful interpretations.
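As a generic point of comparison (not the interval pattern recognition algorithms of [2]), the sketch below flags large outliers in a univariate sample with the standard median/MAD rule; the threshold k = 3.5 is a conventional, hypothetical choice.

```python
import statistics

def flag_large_outliers(sample, k=3.5):
    """Flag points far from the median in robust (MAD) units.

    Uses the median absolute deviation scaled by 1.4826 so that it is
    consistent with the standard deviation under normality.
    """
    med = statistics.median(sample)
    mad = statistics.median(abs(x - med) for x in sample)
    scale = 1.4826 * mad
    if scale == 0:
        return []
    return [x for x in sample if abs(x - med) / scale > k]

# Example: a short sample containing one rare extreme event.
data = [4.1, 3.9, 4.3, 4.0, 4.2, 3.8, 4.1, 19.7]
print(flag_large_outliers(data))  # -> [19.7]
```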
In [
3], the authors present an alternative and simpler approach to finding the stationary distribution of the number of jobs in a queuing model with finite buffer space, using the roots of its characteristic equation. The main advantage of this alternative approach is that it provides a unified way of dealing with both finite-buffer and infinite-buffer systems. The queue length distribution is obtained both at departure epochs and at a random epoch.
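To illustrate the root-based technique in its simplest classical form (a single root, rather than the full finite-buffer machinery of [3]), the sketch below solves the characteristic equation of a D/M/1 queue, sigma = exp(-(1 - sigma)/rho), by bisection; the queue length seen by an arrival is then geometric with parameter sigma. The utilization rho = 0.8 is a hypothetical example value.

```python
import math

def dm1_root(rho, tol=1e-12):
    """Unique root in (0, 1) of sigma = exp(-(1 - sigma) / rho), rho < 1.

    This is the characteristic equation of the D/M/1 queue; the stationary
    number of jobs seen by an arrival is geometric with this parameter.
    """
    f = lambda s: s - math.exp(-(1.0 - s) / rho)
    lo, hi = 1e-9, 1.0 - 1e-9  # f(lo) < 0 < f(hi) whenever rho < 1
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

sigma = dm1_root(rho=0.8)
# P(arrival finds n jobs) = (1 - sigma) * sigma**n
print("sigma =", sigma)
print("P(arrival finds an empty system) =", 1 - sigma)
```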
Typically, a complex system consists of various components that are subject to maintenance policies. In [4], the authors consider systems containing components that undergo both preventive maintenance and corrective (repair) maintenance. Preventive maintenance is treated via a failure-based model in which a complete renewal is implemented after every n-th failure occurs. The paper also proposes an imperfect corrective maintenance model in which each repair worsens the lifetime of a component or system, whose probability distribution gradually changes through an increasing failure rate. The paper demonstrates the reliability mathematics for quantifying unavailability. In the renewal process model with failure-based preventive maintenance, each renewal starts a new renewal cycle, within which the component undergoes what is denoted as the real aging process. Imperfect corrective maintenance leads to an undesirable increase in the unavailability function, which can be counteracted by a correctly chosen failure-based preventive maintenance policy, i.e., replacing the correctly chosen component, considering both cost and unavailability, after the n-th failure occurs. The number n is considered the decision variable, while cost is the objective function in the optimization process. The paper describes a new method for finding the optimal failure-based preventive maintenance policy for a system under a given reliability constraint. The decision variable n is optimally chosen for each component from a set of possible realistic maintenance policies. The authors focus on a discrete maintenance model in which each component is operated in one of several maintenance modes. A fixed value of the decision variable determines one maintenance mode as well as the cost of that mode. The system optimization process is computationally demanding because if the system contains k components, each with three maintenance modes, 3^k maintenance configurations need to be evaluated; a brute-force enumeration of this search space is sketched below. Discrete maintenance optimization is demonstrated for two systems taken from the literature.
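The following sketch illustrates only the brute-force search over the 3^k discrete configurations; the per-component unavailability and cost tables and the availability constraint are hypothetical placeholders, not data from [4].

```python
from itertools import product

# Hypothetical per-component data: for each component, three maintenance
# modes, each with an (unavailability, cost) pair. Not taken from [4].
components = [
    [(0.020, 10.0), (0.010, 25.0), (0.005, 60.0)],
    [(0.030, 8.0),  (0.015, 20.0), (0.007, 55.0)],
    [(0.025, 12.0), (0.012, 30.0), (0.006, 70.0)],
]

def series_availability(choice):
    """Availability of a series system under the chosen modes."""
    avail = 1.0
    for comp, mode in zip(components, choice):
        avail *= 1.0 - comp[mode][0]
    return avail

def total_cost(choice):
    return sum(comp[mode][1] for comp, mode in zip(components, choice))

# Enumerate all 3^k configurations; minimize cost subject to a
# hypothetical availability (reliability) constraint.
REQUIRED_AVAILABILITY = 0.96
best = min(
    (c for c in product(range(3), repeat=len(components))
     if series_availability(c) >= REQUIRED_AVAILABILITY),
    key=total_cost,
)
print("optimal modes:", best, "cost:", total_cost(best))
```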
Today’s smart grids make it possible to efficiently manage energy supply and consumption while avoiding various safety risks. System disturbances can be caused by both natural and man-made events. Operators must be aware of the different types and causes of power system disturbances to make informed decisions and respond appropriately. Research [
5] proposes a solution to this problem: a deep learning-based attack detection model for power systems that can be trained using data and logs collected from phasor measurement units (PMUs). Property and specification information is used to create features, and the data are fed to various machine learning methods, of which a random forest combined with AdaBoost was chosen as the main classifier. Data from openly available simulated power systems, containing 37 scenarios of power system events, are used to test the model. The proposed model was compared with other designs on various evaluation metrics. Simulation results showed that this model provides a detection rate of 93.6% and an accuracy of 93.91%, which is higher than that of existing methods.
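A minimal sketch of a boosted random forest classifier of the kind described above is given below, using scikit-learn; the synthetic data, feature dimensions, and hyperparameters are placeholders rather than the PMU dataset or settings of [5], and the estimator keyword assumes scikit-learn >= 1.2.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Placeholder stand-in for PMU-derived features: binary labels
# (normal event vs. attack) and 40 synthetic features.
X, y = make_classification(n_samples=2000, n_features=40,
                           n_informative=12, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# AdaBoost with a shallow random forest as its base estimator
# ("estimator" requires scikit-learn >= 1.2).
model = AdaBoostClassifier(
    estimator=RandomForestClassifier(n_estimators=25, max_depth=4,
                                     random_state=0),
    n_estimators=10,
    random_state=0,
)
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```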
In [
6], a variant of group testing (GT) models, called noise threshold group testing (NTGT), is considered, in which, if there is more than one defective sample in a pool, the test result for that pool is positive. The authors deal with a variant of the GT model in which, as in the diagnosis of COVID-19 infection, not only do false positives and false negatives occur when the virus concentration falls below the threshold, but unexpected measurement noise can also turn a correct result above the threshold into an incorrect one. The authors aim to determine how many tests are needed to recover a small set of defective samples in such an NTGT problem. To do this, they find necessary and sufficient conditions on the number of tests needed to recover all defective samples. First, Fano's inequality is used to obtain a lower bound on the number of tests, which yields the necessary condition. Second, an upper bound is found using the MAP decoding method, which leads to a sufficient condition for recovering the defective samples in the NTGT problem. As a result, the authors show that the necessary and sufficient conditions for the successful reconstruction of defective samples in NTGT coincide. In addition, they show a tradeoff between the proportion of defective samples and the density of the group testing matrix, which is then used to construct an optimal NTGT structure.
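For intuition about such information-theoretic lower bounds (in the simpler noiseless setting, not the exact Fano-based bound of [6]): any GT scheme must distinguish all C(n, k) possible defective sets, and each test returns one bit, so at least log2 C(n, k) tests are needed. The sketch below evaluates this counting bound for hypothetical values of n and k.

```python
import math

def counting_lower_bound(n, k):
    """Noiseless counting bound: number of tests T >= log2(C(n, k)).

    T tests can distinguish at most 2**T hypotheses, while there are
    C(n, k) possible sets of k defectives among n samples.
    """
    return math.ceil(math.log2(math.comb(n, k)))

# Example: 1000 samples, 10 defectives.
n, k = 1000, 10
print("T >=", counting_lower_bound(n, k), "tests")
print("individual testing would use", n, "tests")
```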
The paper [
7] introduces a stochastic process called an inhomogeneous Markov system in a stochastic environment in continuous time (S-NHMSC). The ordinary inhomogeneous Markov process is a special case of the S-NHMSC. The author studies the expected population structure of the S-NHMSC; the first central classical problem is to find conditions under which the asymptotic behavior of the expected population structure exists, and the second is to determine which expected relative population structures are possible limits when the limiting vector of input probabilities into the population is controlled. Finally, the rate of convergence is studied.
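As a toy illustration of expected population structures (for the ordinary inhomogeneous Markov special case in discrete time, far simpler than the continuous-time S-NHMSC of [7]), the sketch below iterates the expected relative structure q(t+1) = q(t) P(t) under time-varying transition matrices; the matrices, horizon, and initial structure are hypothetical.

```python
import numpy as np

# Hypothetical time-varying transition matrices P(t) over three grades,
# converging to a fixed stochastic matrix P_inf.
P_inf = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.2, 0.1, 0.7]])
P_0 = np.array([[0.5, 0.3, 0.2],
                [0.3, 0.5, 0.2],
                [0.2, 0.3, 0.5]])

def P(t, horizon=50):
    """Inhomogeneous transition matrix interpolating from P_0 to P_inf."""
    w = min(t / horizon, 1.0)
    return (1.0 - w) * P_0 + w * P_inf

q = np.array([1.0, 0.0, 0.0])  # initial expected relative structure
for t in range(200):
    q = q @ P(t)  # q(t+1) = q(t) P(t)
print("limiting expected relative structure:", q)
# Cross-check: this should match the left stationary vector of P_inf.
```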
In various areas of human activity, there is inevitably a need to select the best (rational) courses of action from the alternatives proposed. When retrospective statistics are available, risk analysis is a convenient tool for solving this choice problem. However, when planning the growth and development of complex systems, a new approach to decision making is needed. The article [
8] deals with the concept of risk synthesis when comparing alternatives for the development of a special class of complex systems, which the authors call smart expansive systems. “Smart” in this case means a system capable of balancing its growth and development while considering possible external and internal risks and constraints. Smart expansive systems are considered in the quasi-linear approximation and under stationary problem-solving conditions. In the general case, when the object of comparison is not the alternative itself but some scalar measure of its risks, the problem of selecting the objects most exposed to risk reduces to evaluating the weights of the factors influencing the integral risk. The result is the complex problem of analyzing the risks of objects, which is solved through the quantity by which the integral risk can be minimized. Risks are treated as the antipotential of system development, acting as retarders of the system's reproduction rate. The authors give a brief characterization of a smart expansive system and propose approaches to modeling the functional dependence of the integral risk of such a system's functioning on the set of partial risks, measured, as a rule, on synthetic scales of pairwise comparisons. The solution to the problem of reducing the dimensionality of the influencing factors (partial risks) by the vector compression method (in group and interscale formulations) is described, and examples of applying the vector compression method to practical problems are given. The paper also presents an original method of processing matrices of incomplete pairwise comparisons with fuzzily specified information, based on the idea of constructing benchmark-consistent solutions.
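For background on deriving weights from pairwise comparisons (the standard principal-eigenvector method, not the authors' benchmark-consistent procedure for incomplete fuzzy matrices), the sketch below recovers factor weights from a small complete reciprocal comparison matrix; the matrix entries are hypothetical.

```python
import numpy as np

# Hypothetical reciprocal pairwise comparison matrix A, where A[i, j]
# expresses how much more factor i contributes to the integral risk
# than factor j (so A[j, i] = 1 / A[i, j]).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# Principal-eigenvector weights (standard AHP-style technique): the
# weight vector is the eigenvector of the largest eigenvalue, normalized.
eigvals, eigvecs = np.linalg.eig(A)
principal = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, principal].real)
w /= w.sum()
print("factor weights:", np.round(w, 3))
```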
In [
9], the following two optimization problems on acyclic directed graphs (digraphs) are solved. The first consists of determining the minimum-cardinality set of arcs whose removal from the acyclic digraph breaks all paths passing through a given subset of its vertices. The second problem is to determine the smallest set of arcs whose addition to the acyclic digraph turns it into a strongly connected one. The first problem is solved by reducing it to the maximum flow and minimum cut problem. The second is solved by calculating the minimum number of added arcs and determining the smallest set of such arcs in terms of a minimum covering of the arcs of the acyclic digraph. The solutions of these problems extend to an arbitrary digraph by decomposing it into components of cyclic equivalence and the arcs between them.
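The sketch below illustrates the two ingredients on a toy DAG using networkx: a minimum s-t edge cut (the max-flow/min-cut reduction) and the classical Eswaran-Tarjan count max(#sources, #sinks) for the minimum number of arcs whose addition makes a weakly connected DAG strongly connected. The example graph is hypothetical, and this is not the full construction of [9].

```python
import networkx as nx

# Hypothetical DAG.
G = nx.DiGraph([(1, 2), (1, 3), (2, 4), (3, 4), (4, 5), (3, 5)])

# Ingredient 1: a minimum set of arcs disconnecting vertex 1 from
# vertex 5 (minimum edge cut, computed via max-flow/min-cut duality).
cut = nx.minimum_edge_cut(G, 1, 5)
print("min arc cut separating 1 from 5:", cut)

# Ingredient 2: for a weakly connected DAG, the minimum number of arcs
# to add to make it strongly connected is max(#sources, #sinks)
# (Eswaran-Tarjan).
sources = [v for v in G if G.in_degree(v) == 0]
sinks = [v for v in G if G.out_degree(v) == 0]
print("arcs needed for strong connectivity:",
      max(len(sources), len(sinks)))  # here, adding 5 -> 1 suffices
```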
The paper [
10] considers the reliability function of a k-out-of-n system under conditions in which the failures of its components lead to an increase in the load on the remaining ones and, consequently, to a change in their residual lifetimes. It should be noted that developing models able to account for the fact that failures of system components decrease the residual lifetime of the remaining ones is of crucial importance for increasing system reliability. In [
10], a new approach to modeling this situation, based on order statistics of the system components' lifetimes, is proposed. An algorithm for calculating the system reliability function and the first two moments of its failure-free operation time is developed. The numerical study includes a sensitivity analysis of special cases of the considered model based on two real systems. The results obtained show the sensitivity of the system reliability characteristics to the shape of the lifetime distribution, as well as to the value of the coefficient of variation at a fixed mean value.
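As a simplified illustration of the order statistics view (for independent, identically distributed lifetimes without the load sharing effect that is the actual subject of [10]), the sketch below estimates the reliability characteristics of a k-out-of-n system by Monte Carlo: the system works while at least k of its n components work, so its lifetime is the (n - k + 1)-th order statistic of the component lifetimes. The Weibull parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def kofn_lifetime_samples(n, k, shape, scale, n_sim=100_000):
    """System lifetimes of a k-out-of-n system with i.i.d. Weibull components.

    The system fails at the (n - k + 1)-th component failure, i.e., its
    lifetime is the (n - k + 1)-th order statistic of component lifetimes.
    """
    lifetimes = scale * rng.weibull(shape, size=(n_sim, n))
    order_stats = np.sort(lifetimes, axis=1)
    return order_stats[:, n - k]  # (n - k + 1)-th smallest, 0-indexed

T = kofn_lifetime_samples(n=5, k=3, shape=1.5, scale=10.0)
print("mean time to failure:", T.mean())
print("coefficient of variation:", T.std() / T.mean())
print("reliability at t = 5:", (T > 5.0).mean())
```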