1. Introduction
Multi-functional integrated RF systems (MIRFSs) [1] play a pivotal role in various applications, including communication, radar, and electronic warfare. These systems [2] integrate multiple functions into a single unit and are essential to modern electronic warfare and communication platforms [3]. However, their complexity presents significant challenges in terms of testability and evaluation. Ensuring the reliability and performance of MIRFSs throughout their lifecycle requires comprehensive testability strategies that can address their unique requirements and complexities.
The current state of research on MIRFSs highlights the necessity of effective testability modeling to ensure system reliability and performance. Researchers have concentrated on developing methods and tools to predict, analyze, and improve the ability to detect and diagnose faults within these complex systems. Traditional testability models often fall short due to the intricate nature of MIRFSs, which includes diverse functionalities and interdependencies between components. For instance, MIRFSs are characterized by the integration of multiple RF functions [4], each with its own set of performance parameters and failure modes, making fault isolation and diagnosis particularly challenging. Therefore, more advanced and tailored modeling approaches are necessary to effectively evaluate and improve the testability of these systems, ensuring that faults can be accurately detected and diagnosed to maintain system performance.
In the field of multi-criteria decision-making for complex engineering systems [5], significant advancements have been made to address the challenges [6] posed by the diversity and complexity of testability parameters [7]. This paper applies MCDM methods to the testability analysis of MIRFSs, leveraging the advantages of MCDM models in the decision-making process of complex system design and testability to prioritize and select the most critical parameters that impact system performance. These models facilitate the evaluation of trade-offs between different design and testability parameters [8], enabling the identification of optimal solutions that balance performance, cost, and reliability [9]. However, single MCDM methods [10] often struggle to integrate and balance the multifaceted requirements of such complex systems [11], owing to the dynamic interactions between various system parameters [12] and the need to consider the interdependencies between different subsystems. Therefore, this paper proposes an integrated MCDM model to provide a comprehensive assessment of system-level performance and testability.
The innovative contributions of our work can be summarized as follows:
(1) A comprehensive testability framework applicable to the entire lifecycle and system level of MIRFS has been proposed, enabling testability analysis to fully cover all stages and critical links. This approach ensures that testability is not limited to isolated components but rather spans the entire system, allowing for a more thorough evaluation of MIRFS performance. The framework includes all stages, from design to deployment and operation, supporting continuous assessment and enhancement of system testability and reliability.
(2) To address the diversity and complexity of testability parameters, this study constructs a basic testability parameter system for MIRFSs from both usage requirements and system requirements. This parameter system provides a scientifically effective method and tool for quantitatively evaluating the testability performance of MIRFSs, ensuring that all critical aspects are adequately measured and assessed. By establishing a comprehensive set of parameters that reflect both operational and technical requirements, the system allows for a detailed and accurate evaluation of the MIRFS, supporting better decision-making and optimization.
(3) Based on the integrated MCDM model, this study develops a parameter indicator allocation model that considers the mutual influence between units. The determined system-level testability indicators are refined and allocated to each subsystem, ensuring that the testability requirements are met throughout the entire system. This approach enhances the precision and effectiveness of testability analysis in MIRFSs, leading to more reliable and robust systems. By taking into account the interdependencies between subsystems, the model ensures that the allocation of testability indicators is optimized, supporting improved fault detection and diagnosis capabilities across the entire system.
The remainder of this paper is organized as follows. Section 2 presents a qualitative analysis of the testability framework that comprehensively spans the entire lifecycle and all system levels. Section 3 offers a detailed explanation of the proposed method, including the testability indicator framework as well as the testability indicator allocation framework. Section 4 uses illustrative examples to validate the proposed methods. Finally, Section 5 provides the concluding remarks and future research directions.
2. Literature Review
In the process of conducting testability allocation for MIRFSs, multi-criteria decision-making (MCDM) provides an effective tool for addressing challenges involving multiple conflicting indicators and complex decision-making environments [13]. Testability allocation is a key step in ensuring system reliability, maintainability, and availability.
In the testability modeling and allocation of a MIRFS, the application of MCDM can significantly improve the scientific rigor [14,15] and effectiveness [16,17] of decision-making. The Analytic Hierarchy Process (AHP) can effectively allocate and rank the weights of various indicators in testability allocation by constructing a hierarchical structure model and pairwise comparison matrices [18], thereby optimizing testability strategies [19]. The Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) ranks each scheme by calculating its relative closeness to the ideal solution [20], helping to select the optimal testability scheme. The Analytic Network Process (ANP) considers the interdependence and feedback relationships between indicators [21], making the decision-making process more comprehensive and accurate [22]. The Decision-Making Trial and Evaluation Laboratory (DEMATEL) method identifies and analyzes the interactions and degrees of influence among various factors by constructing an impact relationship diagram [23], ensuring more reasonable testability allocation decisions in complex systems [24]. These methods have shown significant advantages in MIRFS testability modeling and allocation, helping decision-makers make scientifically reasonable choices under multiple indicators and constraints.
Specifically, the hierarchical structure analysis and weight allocation of the AHP [25,26] make complex decision-making problems transparent and operable [27,28,29], making it suitable for handling priority issues among multiple testability indicators in MIRFSs. The TOPSIS, when evaluating the relative advantages and disadvantages of multiple solutions, is based on the concepts of ideal and negative ideal solutions [30], making the decision-making process intuitive and easy to understand, and effectively identifying the testability plan closest to the ideal state. The ANP considers the interdependence [31] and feedback between various indicators [32,33], making the decision model more realistic [34,35] and able to handle the complex correlation relationships in MIRFS testability modeling, improving the accuracy and reliability of decision-making. DEMATEL helps identify key factors [36] and main influencing paths [37] by constructing and analyzing factor impact diagrams [38], enabling the better understanding and optimization of testability allocation schemes in complex systems [39].
Recent advancements in cross-domain fault diagnosis have also demonstrated significant potential for enhancing testability in complex systems. For example, subgraph convolutional networks (SGCNs) [40,41] have been effectively utilized to manage and analyze complex, interconnected data structures in systems with multi-source sensor data. These networks can capture the complex dependencies and nonlinear patterns that often exist in heterogeneous data from various sensors, making them highly applicable to MIRFS testability modeling and allocation, where multi-source data fusion is a critical challenge [42,43]. Integrating these advanced approaches with existing MCDM frameworks, such as the AHP and DEMATEL, could provide a more comprehensive analysis of a MIRFS's complex system structure and interactions, leading to more optimized testability allocation strategies [44]. This integrated method would not only enhance the precision of resource allocation but also leverage deep learning capabilities to process large volumes of multi-source data, improving system reliability and stability.
The combination of these methods can not only improve the scientific rigor and comprehensiveness of MIRFS testability modeling and allocation [45], but also significantly reduce the risks and costs caused by decision errors [46], ensuring efficient operation and maintenance of the system throughout its entire lifecycle [47]. The comprehensive application of MCDM methods can better cope with the complexity [48] and uncertainty in MIRFS testability allocation, thereby achieving better system performance.
3. The Proposed Methodology
To date, over a hundred types of testability parameters have been defined, inevitably raising issues such as strong domain specificity, ambiguous or overlapping meanings, and difficulty of verification. Therefore, it is crucial to carefully select parameters that accurately reflect the testability characteristics of the MIRFS from the many available options, and subsequently establish a robust foundational testability parameter framework for the MIRFS.
The testability design process requires coordination and cooperation between the manufacturer and the ordering user to complete. The contractor needs to carry out testability design work according to the system requirements, while the ordering user conducts a trade-off analysis based on the specific usage of the system engineering project, proposes testability design suggestions for usage requirements, and hands them over to the contractor for implementation. Therefore, testability requirements not only reflect the system requirements but also meet the usage requirements of the system.
Therefore, this article will comprehensively analyze and meticulously determine the testability index framework of MIRFSs from the perspectives of both system requirements and practical usage needs, ensuring a holistic and well-rounded approach.
The MIRFS utilizes channel synthesis, aperture synthesis, and software synthesis to eliminate the isolation between subsystems found in traditional modular systems. Component units can effectively serve multiple subsystems, assisting them in efficiently completing specific functions and various tasks. Aperture synthesis enables multiple RF functions to switch freely, rapidly, and seamlessly between different apertures.
Channel synthesis involves the re-partitioning and integration of analog and digital circuits between antennas and integrated processing units, achieving channel- and resource-sharing, as well as the construction of a reconfigurable and universal transmission and reception system. Software integration provides a unified and scalable platform for various signal-processing, data-handling, and storage applications within the system, thereby enhancing the information-sharing capabilities among the MIRFS’s functions. The combination of these three technologies has significantly strengthened the interconnection between units, rendering traditional testability allocation methods insufficient to meet the evolving trend of functional structure integration in MIRFSs.
The MCDM method has its unique advantages over other existing methods in the testability modeling and allocation process of MIRFSs.
(1) Multiple criteria can be handled: MCDM is essentially designed to handle multiple (usually conflicting or influencing) criteria. In the whole life process of a MIRFS, decision-makers are allowed to consider various factors such as cost, reliability, maintainability, and testability at the same time, and fully weigh the impact of various factors on the system. It has obvious advantages over the traditional methods that cannot effectively capture multiple influencing factors.
(2) It can cope with the complexity of the system: The MIRFS contains multiple subsystems, which are interdependent. DEMATEL, the ANP, and other methods can model this interdependence well and capture the hierarchical structure or network characteristics of the complex system. By comparison, fault tree analysis (FTA) and failure mode and effects analysis (FMEA), although well suited to identifying key fault points, cannot fully capture the multi-dimensional dependencies between components.
(3) Fully integrates the preferences of stakeholders: The preferences of stakeholders (customer requirements) need to be fully considered in the whole life process of a MIRFS. The MCDM method can deal with the different priorities of multiple stakeholders (such as engineers, managers, and customers) in the testability aspect of the system design process, while the traditional method does not consider the preferences of stakeholders.
According to the above analysis, this paper applies the MCDM method to the testability modeling and allocation of a MIRFS, so as to meet the testability requirements across the whole MIRFS lifecycle. Therefore, a testability allocation method based on MCDM is proposed, as shown in Figure 1. This method is mainly divided into two parts. The first part screens a large number of testability indicators to obtain the MIRFS basic parameter framework. The second part establishes a MIRFS testability allocation model and allocates testability indicators to the parameters in the basic parameter framework obtained in the first part, verifying the rationality of the allocation model.
In the first part, the AHP and TOPSIS are integrated. The AHP is used to determine the weight of each factor, and the TOPSIS uses these weights to rank the candidate schemes. This combination compensates for the AHP's weakness in ranking and enhances the accuracy of the decision-making. The AHP relies more on the subjective judgment of experts, while the TOPSIS can use objective data for calculation; combining the two considers problems more comprehensively and avoids the limitations of a single method.
In the second part, the ANP and DEMATEL are integrated. DEMATEL is used to identify and quantify the interaction between factors, and the ANP can make decision analyses based on these relationships. This combination is more in line with the actual situation, because the factors in many decision-making problems are not completely independent. The ANP considers the interdependence between factors, and DEMATEL can help identify which factors have the greatest impact on other factors, making the network structure of the ANP more reasonable and effective.
AHP-TOPSIS and DEMATEL-ANP can be used to solve different levels of problems in MIRFS testability modeling and allocation, respectively. The former is suitable for determining priorities and ranking, and the latter is suitable for analyzing the complex relationships and dependencies between factors.
3.1. Construction of Testability Indicator Framework
To align with the testability requirements analysis concept across the entire framework and lifecycle, it is essential to establish a comprehensive testability indicator framework for the MIRFS grounded in the foundational testability parameter framework.
(1) Testability indicator framework under different testability methods
Automatic Test Equipment (ATE), Built-In Test (BIT), and manual testing are currently the three main testability methods for system maintenance in the field of aviation engineering. The testability methods for system equipment vary across maintenance environments and task stages, and the testability parameter indicators that each method focuses on also vary. The testability parameters proposed for BIT, namely Mean Time Between False Alarms (MTBEB), Mean Time to Repair for BIT (MTTRB), and Mean Time Between Repairs for BIT (MBRT), can be included in the basic testability parameters when necessary. However, BIT detection and isolation execute quickly, so their test time can be ignored; BIT therefore need not consider Mean Fault Detection Time (MFDT) or Mean Fault Isolation Time (MFIT).
(2) The testability index framework of multi-level structure in MIRFS
The structural properties of, and functional differences between, units at the various levels of the MIRFS mean that their testability parameters cannot be generalized. In other words, the testability of any system unit should not be defined by a fixed set of testability parameter indicators. Instead, the basic testability parameter indicators should be refined and expanded to accommodate the specific testability requirements of individual units.
Figure 2 shows a sample diagram of the MIRFS's multi-level testability parameter correlation framework. Taking a three-tier maintenance system as an example, denote the foundational testability parameter set of the MIRFS as $P_0$, and the testability parameter sets of subsystems 1 to 3 as $P_1$, $P_2$, and $P_3$, respectively.
Based on the system testability parameters, new parameters are included according to specific requirements, so that $P_0 \subseteq P_1$, $P_0 \subseteq P_2$, and $P_0 \subseteq P_3$. Unlike traditional systems, synthesis enables a component unit $m$ of the MIRFS to serve multiple subsystems under algorithmic allocation. The parameter set corresponding to unit $m$, denoted $P_m$, must retain the parameters of every subsystem that the unit serves, and the unit-layer parameters are determined consistently with the subsystem layer. For example, in the three-level maintenance system, two units shared with subsystem 2 should retain the parameters of subsystems 1 and 3, respectively, while both also retain the parameters of subsystem 2.
The multi-level parameter framework of the MIRFS can be expressed in set language as Equation (1):
$$P_0 \subseteq P_i \ (i = 1, 2, 3), \qquad P_m = \bigcup_{i \in S(m)} P_i, \tag{1}$$
where $S(m)$ denotes the set of subsystems served by unit $m$.
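To make these set relationships concrete, the following minimal Python sketch mirrors Equation (1) using illustrative, hypothetical parameter and unit names: each subsystem set extends the foundational set, and a shared unit retains the union of the parameter sets of the subsystems it serves.

```python
# Minimal sketch of Equation (1); all parameter and unit names are illustrative.
P0 = {"FDR", "FIR", "FAR"}              # foundational testability parameter set
P1 = P0 | {"MFDT"}                      # subsystem 1 adds a time parameter
P2 = P0 | {"MTTRB"}                     # subsystem 2 adds a BIT repair parameter
P3 = P0 | {"RTOKR"}                     # subsystem 3 adds a ratio parameter

served = [P1, P2]                       # hypothetical unit m shared by subsystems 1 and 2
P_m = set().union(*served)              # unit-level set = union over served subsystems

assert P0 <= P_m                        # the foundational set is always retained
print(sorted(P_m))
```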
(3) A testability index framework for the full design process of MIRFS
The testability indicator requirements outlined in Figure 3 summarize the different engineering phases, serving as the theoretical foundation for designing a comprehensive testability indicator framework that spans the entire MIRFS design process.
In different engineering phases, the requirements for the same parameter indicator often vary. During the validation phase, analysts review system requirements and propose usage indicators, which can be divided into threshold values and target values. The threshold values reflect the minimum acceptable usage requirements, while the target values represent the desired expectations for usage indicators. In the design phase, the research and production department conducts an implementability analysis based on these usage indicators. Feedback is then provided to the validation department, leading to modifications and adjustments. Contract indicators are subsequently discussed and agreed upon in consultation with the design department. These contract indicators consist of minimum acceptable values and specified values, where the former represents the minimum contractual requirements and the latter indicates the desired performance expectations.
During the research and production phase, constrained by technical defects, slightly higher design indicators are often proposed to control and guide the contract indicators, ensuring that the completed design can meet the contractual requirements.
(4) MIRFS testability index framework for the whole system lifecycle
The testability indicator framework constructed from the first three perspectives is closely interrelated and permeates the entire system design process. The MIRFS testability design process should comprehensively incorporate all three key aspects and establish a three-dimensional testability indicator framework that is applicable to the various testability methods, the complete system lifecycle, and the overall system structural hierarchy, as illustrated in Figure 4.
3.2. Construction of Parameter Indicator Framework
The usage requirements reflect the subjective demands put forward by the ordering user based on their past experience in similar equipment development and actual engineering needs. They cover specific requirements such as reliability, maintenance assurance, and limiting constraints. Only by comprehensively considering these usage requirements and building a testability index framework on these basic conditions can the system characteristics and requirements be truly integrated into the system design. System requirements are a collection of various requirements, including performance, functional structure, and so on, established according to the task objectives that need to be achieved. From a usage perspective, system requirements are objective requirements, and using them to evaluate testability parameters can, to some extent, avoid the influence of subjective experience on parameter selection. This section first uses the Analytic Hierarchy Process (AHP) to prioritize the testability candidate parameter set along the usage requirement dimension, and then uses the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) to prioritize the candidate set along the system requirement dimension, thereby assisting in the comprehensive construction of the testability parameter framework.
3.2.1. Evaluation and Analysis of Alternative Parameters Based on Usage Requirements
After the model construction is completed, the solution process must be executed according to the hierarchical structure. The specific steps are as follows:
Step 1.1 Establish the inter-level relationship judgment matrices.
Relying on the relationships between adjacent levels, use the nine-point scale method to construct judgment matrices indicating the relative importance of elements within the same level with respect to an element from the previous level. The value of the matrix element $a_{ij}$ is defined as follows:
1—factor $i$ is equally important compared with factor $j$.
3—factor $i$ is slightly more important than factor $j$.
5—factor $i$ is noticeably more important than factor $j$.
7—factor $i$ is strongly more important than factor $j$.
9—factor $i$ is extremely more important than factor $j$.
The remaining numbers indicate degrees of importance between the adjacent judgment values above, and $a_{ji}$ is the reciprocal of $a_{ij}$.
Step 1.2 Consistency check.
When comparing the importance of elements within the same level, it is essential to perform a consistency check on each judgment matrix to avoid logical conflicts in judgment. The degree of consistency is measured by calculating the Consistency Ratio ($CR$), as shown in Equation (2):
$$CR = \frac{CI}{RI}, \tag{2}$$
where $CI$ represents the consistency index and $RI$ represents the random consistency index. The $CI$ calculation equation is shown in Equation (3), where $\lambda_{\max}$ represents the maximum eigenvalue of the judgment matrix and $n$ represents the matrix order:
$$CI = \frac{\lambda_{\max} - n}{n - 1}. \tag{3}$$
When there is inconsistency, the eigenvalue adjustment method is first adopted to minimize the $CR$ while preserving the original data as much as possible. If direct adjustment is difficult, the criteria can be reevaluated and discussed with experts to refine the judgments and optimize the original data.
Step 1.3 Calculation of weight vectors for adjacent levels.
The method used to calculate weights can lead to different results. Therefore, this study uses the arithmetic mean method, the geometric mean method, and the eigenvalue method to calculate weights separately, and the average is taken as the final weight. The arithmetic mean and geometric mean weights are obtained as shown in Equation (4):
$$w_i^{(a)} = \frac{1}{n}\sum_{j=1}^{n}\frac{a_{ij}}{\sum_{k=1}^{n} a_{kj}}, \qquad w_i^{(g)} = \frac{\left(\prod_{j=1}^{n} a_{ij}\right)^{1/n}}{\sum_{k=1}^{n}\left(\prod_{j=1}^{n} a_{kj}\right)^{1/n}}. \tag{4}$$
Step 1.4 Calculation of priority for the scheme layer.
Taking the average weight vectors of the scheme layer with respect to the criterion layer obtained in Step 1.3 and arranging them in order, a weight matrix $W_M$ is formed. According to Equation (5), the comprehensive weight vector of the indicator layer with respect to the goal layer is obtained. In other words, using the MIRFS usage requirements as evaluation criteria, the priorities of the testability alternative parameters are determined as $S_1$.
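As a worked illustration of Steps 1.1–1.4, the Python sketch below builds a small judgment matrix (its values and size are invented, not taken from the paper's tables), computes weights by the arithmetic mean, geometric mean, and eigenvalue methods, averages them, and runs the consistency check of Equations (2) and (3).

```python
import numpy as np

# Standard Saaty random consistency indices, indexed by matrix order.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])         # illustrative reciprocal judgment matrix
n = A.shape[0]

# Arithmetic mean method: row averages of the column-normalized matrix.
w_arith = (A / A.sum(axis=0)).mean(axis=1)

# Geometric mean method: normalized row-wise geometric means.
g = A.prod(axis=1) ** (1.0 / n)
w_geom = g / g.sum()

# Eigenvalue method: normalized principal eigenvector.
vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w_eig = np.abs(vecs[:, k].real)
w_eig /= w_eig.sum()

w = (w_arith + w_geom + w_eig) / 3.0    # averaged final weight (Step 1.3)

# Consistency check: CI = (lambda_max - n)/(n - 1), CR = CI/RI.
CI = (vals.real.max() - n) / (n - 1)
CR = CI / RI[n]
print(w.round(4), round(CR, 4))         # CR < 0.1 is conventionally acceptable
```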
3.2.2. Analysis of System Requirement Dimension Parameter Indicators
Step 1.5 Building a testability parameter evaluation matrix.
Quantitatively analyze the correlation between the multiple system requirements and the alternative testability parameters to obtain a correlation evaluation matrix $R$. Nine-scale fuzzy numbers are used to reflect the correlation between system requirements and indicator parameters: larger values indicate stronger correlations, while a value of zero signifies no correlation.
Step 1.6 Calculation of the impact of system requirements on entropy weight.
For the evaluation matrix $R$, standardize it sequentially through Equations (7) and (8). After obtaining the membership matrix, utilize Equation (9) to calculate the entropy value $H_i$ of each requirement; the corresponding entropy weight $w_i$ is calculated by Equation (10).
Step 1.7 Calculation of the scores for system testability parameters.
From matrix $R$, take each column to form the ideal solution vector $R^{+}$ and the worst solution vector $R^{-}$; $R^{+}$ and $R^{-}$ are calculated by Equation (11). Subsequently, use Equation (12) to calculate the distances $D_i^{+}$ and $D_i^{-}$ to the positive and negative ideal solutions, respectively; the resulting parameter scores $S_2$ are then calculated from $D_i^{+}$ and $D_i^{-}$ by Equation (13).
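The sketch below illustrates Steps 1.5–1.7 on an invented correlation matrix, assuming the common entropy-TOPSIS formulations for the normalization, entropy weights, and closeness score; the paper's Equations (7)–(13) may differ in detail.

```python
import numpy as np

# Rows = candidate testability parameters, columns = system requirements;
# entries use the nine-scale correlation scores (illustrative values).
R = np.array([[7, 5, 3, 1],
              [5, 7, 1, 3],
              [3, 1, 7, 5],
              [1, 3, 5, 7],
              [5, 5, 3, 3]], dtype=float)
n = R.shape[0]                               # number of candidate parameters

P = R / R.sum(axis=0)                        # membership matrix (column-normalized)
H = -(P * np.log(P + 1e-12)).sum(axis=0) / np.log(n)   # entropy per requirement
w = (1.0 - H) / (1.0 - H).sum()              # entropy weights of the requirements

V = P * w                                    # weighted normalized matrix
R_pos, R_neg = V.max(axis=0), V.min(axis=0)  # ideal and worst solution vectors

D_pos = np.linalg.norm(V - R_pos, axis=1)    # distances to the ideal solution
D_neg = np.linalg.norm(V - R_neg, axis=1)    # distances to the worst solution
S2 = D_neg / (D_pos + D_neg)                 # relative closeness = parameter score
print((S2 / S2.sum()).round(4))
```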
3.2.3. Integration of Evaluation Results
Step 1.8 Calculation of system testability parameter score.
Using the method of seeking equilibrium solutions in game theory, balance and integrate the evaluation results $S_1$ and $S_2$. First, a linear combination of $S_1$ and $S_2$ forms the comprehensive parameter evaluation vector $S$, as shown in Equation (14):
$$S = \lambda_1 S_1 + \lambda_2 S_2, \tag{14}$$
where $\lambda_1$ and $\lambda_2$ are the combination coefficients to be solved, with the objective of minimizing the sum of the deviations between $S$ and each of $S_1$ and $S_2$. The constraint conditions are shown in Equation (15). According to the fundamental principle of differentiation, the system of equations that must be satisfied to obtain the optimal solution of the above objective function is shown in Equation (16).
By solving Equation (16) and normalizing the absolute values, $\lambda_1$ and $\lambda_2$ are obtained. Substituting these values into Equation (14) yields the comprehensive importance of the alternative parameters. The top three parameters are then selected by priority to develop the testability parameter indicator system for the airborne MIRFS.
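A minimal sketch of Step 1.8 follows, assuming the first-order conditions of Equation (16) reduce to the standard two-by-two linear system used in game-theoretic combination weighting; the vectors S1 and S2 are illustrative stand-ins for the two evaluation results.

```python
import numpy as np

S1 = np.array([0.35, 0.17, 0.13, 0.09, 0.06])   # AHP priorities (usage dimension)
S2 = np.array([0.21, 0.20, 0.15, 0.09, 0.11])   # TOPSIS scores (system dimension)

# First-order optimality conditions of the deviation-minimization problem
# (assumed standard form): B @ [l1, l2] = [S1.S1, S2.S2].
B = np.array([[S1 @ S1, S1 @ S2],
              [S2 @ S1, S2 @ S2]])
b = np.array([S1 @ S1, S2 @ S2])
lam = np.linalg.solve(B, b)
lam = np.abs(lam) / np.abs(lam).sum()           # normalize the absolute values

S = lam[0] * S1 + lam[1] * S2                   # Equation (14): comprehensive importance
print(lam.round(4), S.round(4))
```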
3.3. MIRFS Testability Indicator Allocation Model
The previous section completed the screening of testability parameters for the MIRFS, and now parameter allocation is carried out for each parameter.
Step 2.1 Constructing a testability allocation model for MIRFS.
Select typical influencing factors as criterion layer elements ($P_i$) and divide the MIRFS into three subsystems, communication, radar, and electronic countermeasures, according to their functions, as network layer element groups ($E_1$–$E_3$). Each element group $E_i$ consists of system reconfigurable antenna units ($E_{i1}$), integrated transceiver units ($E_{i2}$), and data comprehensive processing units ($E_{i3}$). The detailed MIRFS testability allocation model is illustrated in Figure 5.
The elements within a group belong to the same subsystem and are interconnected through circuits, leading to cascading effects, which manifest as self-feedback within the element group. Structural reconstruction of similar elements across different element groups is significantly influenced by the sharing of units between various subsystems, which results from the functional switching and dynamic reallocation of resources.
Step 2.2 Solving element unit weight vectors.
Using an element $P_s$ of the control layer as the criterion and an element $E_{ij}$ of a given network layer element group as the sub-criterion, the relative dominance of the influence of the element group elements on $E_{ij}$ can be compared. The quantification of the influence under different criteria can be achieved through data collection or expert scoring methods. The Saaty scale is used for pairwise comparison, and the results are entered in the comparison matrix according to the experts' judgment. By adjusting the weights, a balance can be achieved among multiple testability indicators so that the testability indicator distribution is reasonable. The specific values of the Saaty scaling method and their corresponding meanings are as follows:
1 (equally important)—the two factors have the same contribution or influence on the goal;
3 (slightly important)—one factor is slightly more important than another;
5 (obviously important)—one particular factor is significantly more important and influential than another specific factor;
7 (very important)—one factor is much more important than another;
9 (extremely important)—one specific factor is considered to be absolutely more important and significant than another factor;
2, 4, 6, 8 (intermediate value)—these values are used to indicate the intermediate degree between the above adjacent judgments. For example, 2 is between 1 (equally important) and 3 (slightly important), which is used to indicate that it is slightly more important than “equally important”;
1/3, 1/5, 1/7, 1/9 (reverse judgment scale value)—these values are used to indicate reverse judgment. For example, if one factor is slightly less important than another, use 1/3; if significantly unimportant, use 1/5; and so on. These values are the reciprocal of the original scale value and are used to reflect the relative importance at the symmetrical position of the comparison matrix.
After comparison, the feature influence matrices can be constructed and combined by blocks into a super weight matrix $W$. Then, using $P_s$ as the criterion and the element groups $E_i$ as sub-criteria, the influence of the element groups is compared to obtain the weight influence matrix $A$ between subsystems. The weight influence matrix $A$ and the super weight matrix $W$ are then combined by weighting to obtain the weighted supermatrix $\bar{W}$. $W$ and $A$ can be calculated by Equations (17) and (18), and $\bar{W}$ can be calculated directly using Equation (19).
Theorem 1. When a certain layer of a cyclic system is internally dependent, the limit of its supermatrix exists; namely, $W^{\infty} = \lim_{t \to \infty} \bar{W}^{t}$ exists, and each of its columns is a normalized eigenvector for eigenvalue 1. According to the principle of limit vector sorting, the $j$th column of $W^{\infty}$ is the relative ranking vector of the network layer elements with respect to element $j$ under the given criterion. Therefore, when the limit exists, repeated multiplications are performed until the elements in each column stabilize, and the resulting column vectors represent the weights of the network layer elements under that criterion. According to the calculation equation, a weighted judgment hypermatrix can be obtained, and the weight vector $k$ of each element under $P_s$ can be solved.
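The sketch below illustrates the supermatrix mechanics of Step 2.2 and Theorem 1 under stated assumptions: a random column-stochastic matrix stands in for the expert-derived unweighted supermatrix, and block scaling by a 3x3 group-influence matrix plays the role of Equations (17)–(19).

```python
import numpy as np

rng = np.random.default_rng(0)

W = rng.random((9, 9))                  # stand-in unweighted supermatrix (3 groups x 3 elements)
W /= W.sum(axis=0)                      # make columns stochastic

A = np.array([[0.5, 0.3, 0.2],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])         # illustrative group-level weight influence matrix

# Weighted supermatrix: scale each 3x3 block of W by the matching entry of A,
# then re-normalize the columns so they remain stochastic.
Wbar = W * np.kron(A, np.ones((3, 3)))
Wbar /= Wbar.sum(axis=0)

# Power iteration until the columns stabilize (the limit supermatrix of Theorem 1).
L = Wbar.copy()
for _ in range(200):
    L_next = L @ Wbar
    if np.allclose(L_next, L, atol=1e-10):
        break
    L = L_next

k = L[:, 0]                             # any column: element weights under the criterion
print(k.round(4))
```

Repeating this per criterion yields the vectors $k_1$–$k_4$, which Step 2.3 combines linearly with criterion weights that sum to one.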
Step 2.3 Solving the comprehensive importance of elements.
The comprehensive importance calculation method can be divided into linear and nonlinear approaches. The linear calculation equation is shown as Equation (20).
The weights of the criterion-specific vectors are determined by the relative importance vector of the criteria with respect to the target and can be adjusted appropriately according to the analysis focus, ensuring that they sum to 1. For convenience of calculation, this article chooses the linear solution method.
Step 2.4 Obtaining a comprehensive impact matrix.
Specifically, through expert discussions, questionnaire surveys, and other methods, the influence relationships between elements are analyzed pairwise. A four-level scale (0, 1, 2, 3) is used to measure the degree of influence between indicators. The value of the matrix element is defined as follows:
0—factor $i$ has no effect on factor $j$.
1—factor $i$ has little effect on factor $j$, which can be ignored.
2—factor $i$ has a certain impact on factor $j$, which needs to be considered.
3—factor $i$ has a significant impact on factor $j$ and is an important consideration in the decision-making process.
Through this scale, experts score the direct impact of each pair of factors to form an initial direct relation matrix $X$. After the direct relation matrix is constructed, the data need to be standardized to ensure that all entries lie between 0 and 1, which guarantees the stability and consistency of the subsequent calculation. The normalized direct relation matrix $N$ is calculated as shown in Equation (21):
$$N = sX, \tag{21}$$
where $s$ is a scaling factor whose calculation is shown in Equation (22):
$$s = \frac{1}{\max\limits_{1 \le i \le n}\sum_{j=1}^{n} x_{ij}}. \tag{22}$$
After obtaining the standardized direct relation matrix $N$, calculate the comprehensive influence matrix $T$. The comprehensive influence matrix $T$ includes both direct and indirect impacts, which are the basis for constructing the impact relationship diagram. The calculation equation is shown as Equation (23):
$$T = N(I - N)^{-1}. \tag{23}$$
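A minimal DEMATEL sketch of Step 2.4 follows, assuming the common row-sum scaling rule for Equation (22); the 0–3 direct-relation scores are invented for illustration.

```python
import numpy as np

# Initial direct relation matrix X from expert 0-3 scoring (illustrative).
X = np.array([[0, 3, 2, 1],
              [1, 0, 3, 2],
              [2, 1, 0, 3],
              [1, 2, 1, 0]], dtype=float)

s = 1.0 / X.sum(axis=1).max()           # scaling factor (Equation (22))
N = s * X                               # normalized direct relation matrix (Equation (21))

I = np.eye(X.shape[0])
T = N @ np.linalg.inv(I - N)            # comprehensive influence matrix (Equation (23))

influence = T.sum(axis=1)               # how strongly each factor influences the others
dependence = T.sum(axis=0)              # how strongly each factor is influenced
print(T.round(3))
print(influence.round(3), dependence.round(3))
```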
The specific process is illustrated in Figure 6.
Step 2.5 Construction of MIRFS testability indicator allocation framework.
To avoid significant differences between the comprehensive importance values of different units, which could lead to considerable variations in the detection and isolation rates allocated to them, a sigmoid function is introduced. This keeps the allocation weight within the range (0, 1), with the sigmoid slope decreasing as the importance gap increases, thereby mitigating unreasonable allocations caused by large gaps in comprehensive importance. The calculated weights are then used to allocate the fault detection rate, fault isolation rate, and false alarm rate, which can be calculated by Equations (24) and (25), where $w_i$ represents the $i$th element of the weight vector, $\lambda$ is the system failure rate, $\gamma_{FD}$ denotes the system fault detection rate, $\lambda_i$ denotes the failure rate of unit $i$, $\gamma_{FDi}$ represents the fault detection rate of unit $i$, and $\gamma_{FD}^{*}$ and $\gamma_{FI}^{*}$ indicate the required values of the system fault detection rate and fault isolation rate.
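Since Equations (24) and (25) are not reproduced above, the following sketch is only a stand-in for the sigmoid-moderation idea: the comprehensive weights are squashed through a sigmoid so large gaps are damped, and an assumed exponent-style rule then spreads the required system detection rate across units while keeping every unit rate inside (0, 1). All values and the specific rule are hypothetical.

```python
import numpy as np

lam = np.array([0.9, 1.1, 1.0, 1.2, 0.8])            # unit failure rates (illustrative)
w = np.array([0.095, 0.101, 0.109, 0.112, 0.121])    # comprehensive importance weights
gamma_sys = 0.95                                     # required system fault detection rate

# Sigmoid moderation: map weight gaps into (0, 2) around 1 so that large
# differences in w are damped rather than passed through linearly.
s = 2.0 / (1.0 + np.exp(-(w - w.mean()) / (w.std() + 1e-12)))

# Assumed allocation rule (not the paper's): exponent-style spreading keeps
# every unit-level detection rate strictly inside (0, 1).
gamma_unit = 1.0 - (1.0 - gamma_sys) ** s

# Failure-rate-weighted system-level check of the resulting allocation.
achieved = np.dot(lam, gamma_unit) / lam.sum()
print(gamma_unit.round(4), round(achieved, 4))
```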
4. Result and Discussion
4.1. Parameter Screening and Construction of Parameter Framework
(1) Preliminary screening of testability parameters
Testability parameters can be roughly divided into two categories: characteristic parameters and capability parameters. The physical characteristic parameters reflect the inherent properties of the test equipment, including aspects such as its physical volume, the number of components it contains, and its deployment location within the system. Additionally, these parameters encompass the structural design and material quality, which are crucial for determining the equipment's robustness. The use characteristics, on the other hand, are reflected through metrics such as reliability, maintainability, and supportability, which together influence the long-term performance and efficiency of the equipment. The technical level and experience of the maintenance personnel are the most common parameters for subjective capability evaluation, and these can vary greatly depending on training and expertise. Meanwhile, the objective capability parameters can be further divided into three categories: range parameters, which define the operational limits; time parameters, which assess the duration of certain processes; and ratio parameters, which provide insight into the efficiency and effectiveness of operations. The range parameters include test coverage, ambiguity group size, etc.; the time parameters include fault detection time, BIT fault interval time, etc.; and the ratio parameters mainly include the false alarm rate, fault isolation rate, non-reproducibility rate, etc. The classification diagram of testability parameters is shown in Figure 7.
Through the preliminary classification and analysis of testability parameters, the range of parameters is initially narrowed, yet a considerable number of optional parameters remain, and the issue of overlapping correlations between these parameters persists. To further narrow down the range of alternative parameters for MIRFS testability, a series of scientific and reasonable parameter-screening criteria, based on both quantitative and qualitative assessments to ensure comprehensive coverage and applicability across different scenarios, are proposed as the foundation for more effective decision-making. The specific criteria are as follows:
Clarity: The selected testability parameters should have clear and definite definitions and well-established mathematical calculation methods, as specified in the existing outline standards or technical manuals.
Universality: Try to avoid selecting testability parameters based on subjective needs and experiences to ensure the universality of the index system.
Reflexivity: Testability parameters should not only clearly reflect testability performance characteristics but also accurately reflect their relationship with other performance-related characteristics and attributes.
Comprehensiveness: Give priority to the testability parameters that are representative, that is, can comprehensively reflect the test characteristics and level, and realize the simplification of the testability index system structure.
Independence: It is quite difficult to completely avoid the correlation between testability parameters, which is generally reflected by the set relationships of adjacency, complementarity, inclusion, and others. Only by carefully selecting parameters that are relatively independent of each other’s meaning and unique characteristics can the parameter architecture be effectively optimized.
Testability: According to the actual test conditions, the testability parameters that are easy to measure and obtain, are easy to quantify and compare, and can be accurately measured are selected.
Verifiability: The testability parameter value can be calculated according to the data obtained in the design stage, and then the subsequent assessment, evaluation, and verification related to the parameter index can be completed.
Convertibility: Select, as far as possible, testability parameters that can be converted from usage parameters into contract parameters; parameters that cannot be verified can only serve as usage parameters and cannot be further converted into contract parameters.
Based on the above model construction principles, the target layer (T) is to screen the basic testability parameters of the airborne MIRFS. Combining the literature and an analysis of the actual use process of the MIRFS, 12 usage requirements, including accurate reporting of system status (G1), continuous uninterrupted operation (G2), and downtime caused by faults (G3), are ultimately selected as the criteria layer (G1–G12). At the solution layer, ten distinct alternative parameters are defined: fault detection rate (FDR), fault isolation rate (FIR), false alarm rate (FAR), Mean Fault Detection Time (MFDT), Mean Fault Isolation Time (MFIT), Mean Time Between False Alarms (MTBEB), Mean Time to Repair for BIT (MTTRB), Non-Reproducibility Rate (CNDR), Retest Qualification Rate (RTOKR), and Mean Time Between Repairs for BIT (MBRT). These parameters are designated as the solution layer indicators (M1–M10).
(2) Calculation of the importance of alternative parameters based on usage requirements
As shown in the hierarchical relationships in Figure 5, the target layer–criterion layer judgment matrix is denoted as $Q_0$, and the twelve criterion-layer-to-indicator-layer judgment matrices are denoted as $Q_1$–$Q_{12}$; three of these judgment matrices are shown in Table 1, Table 2 and Table 3. The average weight vector of the criterion layer with respect to the target layer is calculated from the judgment matrix $Q_0$ as {0.0582, 0.925, 0.1765, 0.0294, 0.263, 0.0244, 0.1939, 0.0349, 0.341, 0.0497, 0.0510, 0.0291}.
Similarly, for $Q_1$ to $Q_{12}$, the average weight vectors of the scheme layer with respect to the criterion layer are obtained. Taking their transposes and arranging them in sequence, a weight matrix $W_M$ is formed. The comprehensive weight vector from the indicator layer to the target layer is then derived from Equation (5), resulting in the priorities $S_1$ of the candidate testability parameters based on the MIRFS usage requirements: {0.3510, 0.1735, 0.1278, 0.0948, 0.0600, 0.0369, 0.0383, 0.0261, 0.0578, 0.0338}.
(3) Calculation of the relative importance of candidate parameters based on the system requirement dimension criteria
For the evaluation matrix $R$, the membership matrix is obtained according to Equations (7) and (8). The weight vector composed of the entropy weights $w_i$ of each system requirement is calculated using Equations (9) and (10). The results are shown in Table 4, with $w$ = {0.0974, 0.1103, 0.1324, 0.1289, 0.1155, 0.0667, 0.1077, 0.1474, 0.0937}. Subsequently, $S_2$ is calculated and normalized to obtain the evaluation scores for the candidate parameter framework requirements: {0.2059, 0.2048, 0.1523, 0.0912, 0.1132, 0.0184, 0.0184, 0.0672, 0.0639, 0.0646}.
According to game theory, the comprehensive importance of the candidate parameters is obtained from Equation (14). Based on priority, the top three parameters—fault detection rate (FDR), fault isolation rate (FIR), and false alarm rate (FAR)—are selected as the basic testability parameter indicators for the airborne MIRFS. These parameters are used to construct the MIRFS testability parameter indicator framework.
4.2. Establishment and Verification of Testability Allocation Method
In the actual engineering situation, four typical influencing factors, namely failure rate, failure importance, mean time to repair, and cost, are selected as criterion layer elements ($P_1$–$P_4$). The MIRFS is divided into communication, radar, and electronic countermeasure subsystems as the network layer element groups ($E_1$–$E_3$). Each group is composed of antenna units ($E_{i1}$), integrated transceiver units ($E_{i2}$), and data comprehensive processing units ($E_{i3}$) that can be integrated and reconstructed by the system. The elements in an element group belong to the same subsystem and are interconnected through circuits, producing a cascade effect that manifests as self-feedback within the element group; the structural reconfiguration of similar elements across different element groups due to function switching reflects the influence of unit-sharing among the subsystems. In this example, the required system-level indicator values are assumed to be given.
First, the control weights of the corresponding control layer are calculated. According to the expert scores, the control criterion weights and the element weights of the scheme layer are obtained, and the control weights follow from the control criterion integration equation shown in Equation (26). The control weights are used to weight the hypermatrix appropriately. The corresponding calculation results are provided in Table 5 and Table 6, respectively.
Taking $P_1$ as the criterion and each element $E_{ij}$ in $E_i$ as the sub-criterion, the influence of the elements of each element group on $E_{ij}$ is compared. The influence quantification under different criteria can be achieved through data collection or the expert scoring method. After comparison, the characteristic influence matrices can be constructed and combined by blocks into a super weight matrix.
Take $P_1$ as an example. With $P_1$ as the criterion and $E_{ij}$ as the sub-criterion, the influence of the element groups is compared to obtain the weight influence matrix $A_1$ between subsystems, as shown in Table 7, and all local priority vectors are integrated into a supermatrix to obtain the unweighted supermatrix $W_1$, as shown in Table 8. The weighted supermatrix is obtained by weighting $W_1$ with the integrated weight matrix $A_1$, as shown in Table 9. Multiple power operations are performed on the weighted supermatrix, as in Equation (27), until the matrix converges; the resulting matrix is called the limit supermatrix, as shown in Table 10.
The column corresponding to this criterion can be extracted from the limit supermatrix; it is the final weight vector of each element under the control criterion $P_1$, denoted $k_1$. This process is repeated to obtain the weight vectors $k_2$, $k_3$, and $k_4$ corresponding to criteria $P_2$, $P_3$, and $P_4$, respectively; the weighted hypermatrices under criteria $P_2$, $P_3$, and $P_4$ are shown in Table 11, Table 12 and Table 13. The calculation results of $k_1$, $k_2$, $k_3$, and $k_4$ are as follows:
Establish an optimization model to solve for the optimized comprehensive importance vector $Z$; it can be obtained that $Z$ = {0.0515, 0.0661, 0.1046, 0.1109, 0.1069, 0.0837, 0.1874, 0.1189, 0.1700}. From Table 14, the mixed weight $w$ can be obtained using the mixed weight matrix calculation in Equation (28).
Based on this analysis, the values of $w$ are as follows: {0.0951, 0.1014, 0.1093, 0.1118, 0.1208, 0.1146, 0.1108, 0.1137, 0.1225}.
After calculating $w$, the fault detection rate and isolation rate can be allocated. The values of $w$ obtained from the ordinary traditional formula method and from the MCDM method are substituted into Equations (24) and (25) to derive the allocation results for the fault detection rate (FDR), fault isolation rate (FIR), and false alarm rate (FAR).
4.3. Comparison and Analysis of Results
To comprehensively evaluate the effectiveness of the MCDM method in the allocation of testability indicators, three comparison methods are introduced: Simple Weighted Method (SWM), Equal Distribution Method (EDM), and Historical Data-Based Allocation (HDBA). By comparing these methods, the performance of the MCDM method in different testability indicators (fault detection rate, fault isolation rate, and false alarm rate) can be more clearly understood, ensuring that the research results are persuasive and practically valuable. Below are brief descriptions of these three different methods along with their respective calculation results.
(1) Simple Weighted Method (SWM)
The Simple Weighted Method allocates testability indicators proportionally based on the failure rates and importance of each system component. The weighting factors are simply weighted according to the importance and failure rates of the components, ultimately obtaining the fault detection rate (FDR), fault isolation rate (FIR), and false alarm rate (FAR) for each component.
According to Equation (29), the weight factor $w$ can be calculated; the failure rates of the individual units are shown in Table 15. It can be obtained that $w$ = {0.095, 0.095, 0.1, 0.1, 0.105, 0.105, 0.11, 0.11, 0.115}.
(2) Equal Distribution Method (EDM)
The Equal Distribution Method evenly distributes testability indicators among all system components. This method assumes that each component is equally important to the system's testability, resulting in the same FDR, FIR, and FAR for each component. It can be obtained that $w$ = {0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1}.
(3) Historical Data-Based Allocation (HDBA)
The Historical Data-Based Allocation method utilizes historical failure data to allocate testability indicators. By calculating the failure rates of each component based on historical data, the FDR, FIR, and FAR are allocated accordingly. It can be obtained that $w$ = {0.093, 0.099, 0.107, 0.109, 0.118, 0.112, 0.108, 0.111, 0.120}.
The obtained weight factors are substituted into Equations (24) and (25) to derive the allocation results of the testability indicators, as shown in Table 16.
Quantitative Analysis: The MCDM-based method consistently outperforms other methods in terms of fault detection rate (FDR) and fault isolation rate (FIR) across all test elements (E11 to E33). For example, the highest FDR achieved by the MCDM method is 0.9898, compared to a maximum of about 0.9666 for other methods. Similarly, the FIR values for the MCDM method are higher, with a maximum of 0.9783, significantly exceeding those of the other methods. Additionally, the false alarm rate (FAR) for the MCDM method is notably lower, with a minimum value of 0.0309, which is substantially lower than the corresponding values for the other methods. These results indicate that the MCDM method is more effective at reducing false alarms while maintaining high fault detection and isolation capabilities.
Qualitative Analysis: Compared to traditional methods, the MCDM method offers several qualitative advantages. It provides a more comprehensive approach by considering multiple factors, particularly those that are often conflicting, leading to a more holistic and optimized decision-making process. Unlike traditional methods that may rely heavily on experience or a single criterion, the MCDM method integrates both expert judgment and quantitative data, enhancing the accuracy and reliability of the decisions. Moreover, the MCDM method demonstrates greater adaptability and robustness when handling various metrics simultaneously, allowing it to maintain stable performance under different conditions, providing more reliable outcomes in real-world applications.
5. Conclusions
MIRFSs present significant challenges due to their complexity and the diversity of their testability parameters. The intricate interactions among their components and their wide range of functionalities can result in instability and false alarms, which can substantially affect the reliability and performance of electronic systems. This study introduces a comprehensive testability framework and an integrated MCDM model specifically designed for MIRFSs, addressing the complexities of fault detection and system testability. A series of comparative and ablation experiments were conducted to validate the effectiveness of the proposed method. Based on the results, the main conclusions are as follows:
(1) The proposed testability framework, suitable for the entire lifecycle and system level of MIRFS, ensures comprehensive testability coverage across all stages and key links of the system. This holistic approach significantly enhances the ability to detect and diagnose faults, improving overall system reliability.
(2) By constructing a foundational testability parameter framework based on both usage requirements and system requirements, a scientifically sound method for quantitatively evaluating the testability performance of a MIRFS has been established. This framework enables detailed and precise assessments, thereby supporting better decision-making and optimizing system performance.
(3) The parameter indicator allocation model, developed using the integrated MCDM model, accounts for the mutual influence between units and refines system-level testability indicators for each subsystem. This approach ensures that testability requirements are consistently fulfilled across the entire system, thereby significantly improving the overall precision and effectiveness of the testability analysis.
Despite these strengths, there are several limitations to the current approach. The reliance on expert judgment for assigning weights and determining priorities within the MCDM framework may introduce subjectivity, potentially leading to biases in the decision-making process. Additionally, the computational complexity associated with integrating multiple MCDM methods could pose challenges, particularly for large-scale systems. Future work could focus on developing more efficient algorithms and thoroughly exploring data-driven methods to significantly reduce dependency on expert input and further enhance model scalability and flexibility.
The application of the proposed MCDM framework and testability methods in real-world MIRFSs will be explored in future work to achieve practical implementation and validation. To further enhance the testability framework and the integrated MCDM model, future research could explore the integration of recent advancements in subgraph convolutional networks (SGCNs), which have shown promising results in handling complex, interconnected data structures and could be particularly beneficial for the MIRFS, given its intricate component interactions and dependencies. By incorporating an SGCN, the model could better capture the hierarchical and relational information within the MIRFS, allowing for more precise fault detection and diagnosis. Furthermore, integrating an SGCN with the existing MCDM-based approach could provide a dual advantage: leveraging the MCDM model's decision-making capabilities while enhancing them with the deep learning abilities of the SGCN to analyze complex graph structures. This integration would enable a more refined and comprehensive analysis of the MIRFS's subsystem interactions, potentially identifying novel testability indicators and further refining existing ones through continuous dynamic data analysis.
In conclusion, the proposed testability framework and MCDM model offer a robust foundation for enhancing the MIRFS’s testability. However, by actively exploring these advanced research directions, the model can be further refined and effectively adapted to better meet the evolving demands of complex integrated systems, thereby ensuring continued improvements in both reliability and performance.