Next Article in Journal
Optimizing Microgrid Planning for Renewable Integration in Power Systems: A Comprehensive Review
Previous Article in Journal
Deep Learning for Network Intrusion Detection in Virtual Networks
Previous Article in Special Issue
Towards Double-Layer Dynamic Heterogeneous Redundancy Architecture for Reliable Railway Passenger Service System
 
 
Article
Peer-Review Record

A MCDM-Based Analysis Method of Testability Allocation for Multi-Functional Integrated RF System

Electronics 2024, 13(18), 3618; https://doi.org/10.3390/electronics13183618
by Chao Zhang 1,2,*, Yiyang Huang 1, Dingyu Zhou 3, Zhijie Dong 4,*, Shilie He 1,5 and Zhenwei Zhou 5
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Reviewer 4: Anonymous
Electronics 2024, 13(18), 3618; https://doi.org/10.3390/electronics13183618
Submission received: 19 August 2024 / Revised: 6 September 2024 / Accepted: 9 September 2024 / Published: 12 September 2024

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

 

The authors have proposed a new testability allocation method based on multi-criteria decision making. The paper is well-written and I have the following minor comments.    1. Equations should be optimized. For example, multiplication should be used instead of asterisk in (23). 2. Comparative studies should be provided in order to show the superiority of the proposed method. 3. The authors are advised to provide experimental results to show the validity of the proposed method.

Comments on the Quality of English Language

Active voice is preferred over passive voice.

Author Response

Problem 1

Equations should be optimized. For example, multiplication should be used instead of asterisk in (23).

Response:

Thank you for your feedback and suggestions. For equation optimization, we understand that using more concise symbols and expressions can improve the clarity and readability of the formula. As for equation (23) you mentioned, indeed, replacing the multiplication sign * with a dot multiplication symbol or an appropriate multiplication symbol would be more in line with the norms of mathematical expression and more beautiful. We have modified the lines 593 in the original text according to your suggestion. The details are as follows:

                                                  

At the same time, we can optimize the formulas of other parts to ensure the consistency and professionalism of the whole document. If you have more specific suggestions or need further discussion, please feel free to communicate with us at any time. We will try our best to ensure that the final result meets your expectations.

Problem 2

Comparative studies should be provided in order to show the superiority of the proposed method.

Response:

Thank you for your insightful suggestion regarding the inclusion of comparative studies to demonstrate the superiority of the proposed method. I appreciate your attention to this critical aspect of the manuscript.

We would like to clarify that the current version of the manuscript already includes a detailed comparative analysis of the proposed MCDM-based method against three traditional methods: Simple Weighted Method (SWM), Equal Distribution Method (EDM), and Historical Data Based Allocation (HDBA). This comparison is presented in Section 4.3, where we provide both quantitative and qualitative analyses to highlight the strengths of the MCDM approach.

Quantitatively, the results demonstrate that the MCDM-based method consistently achieves higher Fault Detection Rate (FDR) and Fault Isolation Rate (FIR) while maintaining a significantly lower False Alarm Rate (FAR) compared to the other methods. For instance, the MCDM method achieved an FDR of up to 0.9898 and an FIR of up to 0.9783, both of which are superior to the maximum values obtained by the other methods. Additionally, the FAR for the MCDM method is as low as 0.0309, which is substantially lower than that achieved by the comparative methods.

Qualitatively, the manuscript discusses how the MCDM method integrates both expert judgment and quantitative data, providing a more comprehensive and robust decision-making framework. This method effectively addresses the complexities and interdependencies within MIRFS, offering a more reliable and adaptable approach than traditional methods that may not fully capture the multi-dimensional dependencies among system components.

We hope this clarification addresses your concerns. However, we are open to further suggestions if there are specific aspects you believe should be elaborated upon or additional comparisons that might enhance the manuscript. Thank you again for your valuable feedback, which has been instrumental in refining and improving the quality of our work.

Problem 3

The authors are advised to provide experimental results to show the validity of the proposed method.

Response:

Thank you for your valuable feedback suggesting the inclusion of experimental results to demonstrate the validity of the proposed method. We appreciate your emphasis on providing empirical evidence to support the theoretical framework presented in the manuscript.

We would like to clarify that the current version of the manuscript already includes a comprehensive set of experimental results that validate the proposed MCDM-based testability allocation method. In Section 4.2 and Section 4.3, we conducted a series of comparative and ablation experiments to evaluate the performance of our method against traditional approaches. These experiments were designed to test various aspects of testability, including Fault Detection Rate (FDR), Fault Isolation Rate (FIR), and False Alarm Rate (FAR), under different operational scenarios.

The experimental results show that the proposed MCDM-based method consistently outperforms traditional methods such as the Simple Weighted Method (SWM), Equal Distribution Method (EDM), and Historical Data-Based Allocation (HDBA). For instance, the MCDM method achieved a significantly higher FDR and FIR, and a lower FAR, demonstrating its superior capability in fault detection and isolation while minimizing false alarms. These results provide strong empirical support for the effectiveness of the proposed method, highlighting its robustness and reliability in complex, multi-dimensional decision-making environments like MIRFS.

To further substantiate these findings, we have also included detailed discussions on the experimental setup, parameter configurations, and performance metrics used in the evaluation. These details are intended to ensure the reproducibility of our results and to provide a comprehensive understanding of how the proposed method was tested under various conditions.

We hope this clarification addresses your concerns regarding the validity of the proposed method. If there are specific aspects of the experimental results or additional analyses that you believe would further strengthen the manuscript, we would be grateful for your suggestions. Thank you again for your insightful feedback, which has been instrumental in refining and improving the quality of our work.

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

1. It is suggested to clearly define the research objectives in the Abstract, stating the problem addressed and the main goal of the MCDM-based testability allocation method. In the Introduction, provide a brief literature review that leads to the research gap your study aims to fill and articulate the significance of your contribution.

2. It is suggested to elaborate on the selection criteria for the 12 testability indicators and justify their relevance to MIRFS. Also, it is recommended to offer a detailed algorithmic description of the MCDM process, including step by step procedures and pseudocode where appropriate.

3. Experimental Design:

It is suggested to expand the experimental section to include a variety of MIRFS types or configurations to demonstrate the method's applicability. Please consider using a larger dataset that encompasses a wider range of testability parameters to test the method's robustness.

4. Results and Discussion:

It is suggested to present the results in a comparative format, showing how the MCDM method outperforms traditional methods, using statistical tests to validate the findings.

Please discuss the practical implications of the results, including potential benefits in terms of system reliability and maintenance efficiency.

5. Check that all figures and tables are correctly referenced in the text and are placed in the most logical sequence in the document.

6. References:

Update the references to include the most recent and relevant studies in the field of MCDM and MIRFS testability. Ensure that all references are cited consistently according to the chosen citation style.

7. In Section 3.1, please provide a more detailed explanation of how the AHPTOPSIS method reduces subjectivity and uncertainty in testability parameter selection.

8. In the Results section, it is suggested to include a sensitivity analysis to show how changes in the weights of different indicators affect the overall testability allocation.

9. It is suggested to add a case study or real-world example in Section 4 to demonstrate the practical application of the MCDM method and discuss the challenges and solutions encountered.

10. Please discuss whether the proposed method can be applied in dynamic environments, e.g., Exploiting a cognitive bias promotes cooperation in social dilemma experiments; Onymity promotes cooperation in social dilemma experiments; Modelling the dynamics of regret minimization in large agent populations: a master equation approach; Communicating sentiment and outlook reverses inaction against collective risks.

This paper is interesting. I recommend accepting this paper after completing the above minor revisions.

 

Comments on the Quality of English Language

Minor editing of English language required. 

Author Response

Problem 1

It is suggested to clearly define the research objectives in the Abstract, stating the problem addressed and the main goal of the MCDM-based testability allocation method. In the Introduction, provide a brief literature review that leads to the research gap your study aims to fill and articulate the significance of your contribution.

Response:

Thank you for your valuable suggestion to clearly define the research objectives in the Abstract and provide a more focused literature review in the Introduction. we appreciate your emphasis on enhancing the clarity and structure of the manuscript, particularly in making the research aims and contributions more explicit.

Upon reviewing the manuscript, we would like to clarify that the current Abstract already includes a clear definition of the research objectives and the problem addressed by our study. Specifically, the Abstract states that the primary objective of this research is to develop an MCDM-based testability allocation method to improve decision-making accuracy and enhance the reliability and performance of MIRFS. It also identifies the challenges that the study aims to address, such as optimizing testability allocation in complex systems like MIRFS, where multiple conflicting criteria must be considered. The Abstract outlines how our method integrates different decision-making approaches to handle these challenges effectively, thereby setting a solid foundation for the rest of the manuscript.

Regarding the literature review, this is comprehensively covered in Section 2 of the manuscript, which provides a thorough overview of existing research on MCDM methods and their application in testability analysis, particularly for complex systems like MIRFS. The literature review outlines several key limitations in current approaches, such as the lack of integration among different decision-making frameworks and the difficulties these methods face in adapting to dynamic and uncertain environments. This section also discusses the evolution of MCDM techniques and highlights the gap in the literature regarding a comprehensive framework that can simultaneously handle multiple criteria and adapt to varying operational conditions.

To articulate the significance of our contribution, Section 2 concludes by identifying the research gap that our study aims to fill: the need for a more robust, adaptable, and comprehensive MCDM framework that can effectively manage the complexities of testability allocation in MIRFS. The proposed method not only addresses this gap by integrating AHP, TOPSIS, DEMATEL, and ANP but also demonstrates significant improvements over traditional approaches in terms of accuracy, reliability, and flexibility. This provides a clear and direct connection between the literature review and the objectives of our study, ensuring that readers understand the relevance and importance of our research.

To further enhance the clarity and impact of the manuscript, we are considering slight revisions to the Abstract to make the research objectives and problem statement even more explicit. For instance, we will ensure that the Abstract more clearly delineates the specific gaps in existing research that our study addresses and emphasizes the innovative aspects of our MCDM-based approach. Additionally, in Section 2, we will ensure that the literature review more explicitly leads into the identified research gap and the significance of our study's contribution, making it clearer how our work builds upon and extends existing research in the field.

We believe these revisions will strengthen the manuscript by providing a more comprehensive and detailed presentation of the research objectives, the literature review, and the significance of our contribution. Thank you again for your constructive feedback, which has been instrumental in guiding these improvements. If you have any further suggestions or areas you think should be elaborated upon, we would greatly appreciate your continued guidance.

Problem 2

It is suggested to elaborate on the selection criteria for the 12 testability indicators and justify their relevance to MIRFS. Also, it is recommended to offer a detailed algorithmic description of the MCDM process, including step by step procedures and pseudocode where appropriate.

Response:

Thank you for your valuable suggestions regarding the elaboration on the selection criteria for the 12 testability indicators and providing a more detailed algorithmic description of the MCDM process. We appreciate your focus on enhancing the manuscript's clarity and depth, particularly in areas that are critical for understanding the practical application and methodological rigor of our proposed approach.

Upon reviewing your comments, we would like to clarify that the manuscript already includes a comprehensive explanation of the selection criteria for the 12 testability indicators in Section 3.2. In this section, we detailed the systematic process used to narrow down over 100 potential indicators to the final 12. This process was guided by several well-established criteria, including:

  • Clarity: Ensuring that each selected indicator has a clear and unambiguous definition, making it easy to understand and apply.
  • Universality: Selecting indicators that are broadly applicable across different phases of the MIRFS lifecycle, avoiding those that are overly specific or context-dependent.
  • Reflexivity and Comprehensiveness: Choosing indicators that not only reflect testability characteristics but also capture the interaction between testability and other performance metrics, ensuring a holistic approach to system evaluation.
  • Independence: Ensuring that the indicators are relatively independent of each other to avoid redundancy and to maximize the representativeness of the testability framework.
  • Testability and Verifiability: Focusing on indicators that are measurable and can be verified through empirical data or simulations, enhancing the reliability of the evaluation process.
  • Convertibility: Preferring indicators that can be converted from usage parameters to contract parameters, which is critical for practical application in real-world settings.

These criteria were rigorously applied to ensure that the selected indicators are not only relevant but also sufficient for providing a comprehensive evaluation of MIRFS testability. This selection process aligns with both theoretical foundations and practical considerations, ensuring that the final set of indicators provides a robust basis for the testability analysis.

In addition to the selection criteria, the manuscript also presents a detailed discussion on the MCDM process in Section 3.2, including the integration of multiple decision-making methods such as AHP, TOPSIS, DEMATEL, and ANP. Each method's specific role in the process is outlined to demonstrate how they collectively contribute to an objective and comprehensive testability allocation model. AHP is used to determine the relative importance of various testability indicators, TOPSIS ranks these indicators based on their proximity to an ideal solution, DEMATEL identifies and quantifies the interrelationships among the indicators, and ANP considers the dependencies between them.

While the manuscript provides a detailed framework and step-by-step procedures for applying these methods, we recognize that the inclusion of more explicit algorithmic descriptions, such as pseudocode or flowcharts, could further enhance understanding and reproducibility. To address this, we plan to revise the manuscript to include a more detailed algorithmic representation of the MCDM process. This will involve outlining each step of the process in a clearer, more structured format, using pseudocode where appropriate to guide readers through the implementation of the method. This addition will not only provide clarity but also ensure that the approach can be effectively applied by other researchers in their work.

Moreover, we acknowledge the importance of justifying the relevance of these 12 indicators to MIRFS in a practical context. To further strengthen this aspect, we will expand the discussion to provide more detailed examples and scenarios where these indicators have direct implications for MIRFS's operational efficiency, fault detection, and system reliability. This expansion will help readers better understand the practical benefits of the selected indicators and how they align with the overall objectives of MIRFS testability.

We believe these additions and clarifications will significantly enhance the manuscript's quality, providing a more comprehensive and detailed presentation of both the indicator selection process and the MCDM methodology. Thank you again for your constructive feedback, which has been instrumental in guiding these improvements. If there are any other areas you feel should be further elaborated upon or additional suggestions you have, we would greatly appreciate your continued guidance.

Problem 3

Experimental Design: It is suggested to expand the experimental section to include a variety of MIRFS types or configurations to demonstrate the method's applicability. Please consider using a larger dataset that encompasses a wider range of testability parameters to test the method's robustness.

Response:

Thank you for your insightful suggestion to expand the experimental section to include a variety of MIRFS types or configurations and utilize a larger dataset to demonstrate the method's robustness and applicability. We appreciate your emphasis on ensuring a comprehensive validation of the proposed MCDM-based testability allocation method.

The proposed MCDM-based testability allocation method is inherently designed to be flexible and adaptable, making it suitable for a wide range of MIRFS types and configurations. The model's framework is built to handle diverse testability parameters and decision criteria, allowing it to be effectively applied across various MIRFS scenarios. While the current manuscript focuses on demonstrating the method's effectiveness using specific MIRFS configurations, the methodology itself is not limited to these configurations alone. Our MCDM approach integrates multiple sources of data and expert opinions, with a decision-making structure that accommodates different weighting schemes and criteria adjustments. This design ensures the model remains applicable even when MIRFS configurations or operational conditions change. For instance, by modifying the criteria weights or incorporating new testability parameters relevant to different MIRFS configurations, the model can seamlessly adapt to new scenarios without requiring significant changes to its core framework. This adaptability is crucial in complex and dynamic systems like MIRFS, where operational requirements and conditions can vary significantly.

The current experiments and data used in our study have been carefully selected to represent a range of typical MIRFS configurations and operational conditions, aiming to showcase the method's capability to handle the complexities associated with testability allocation in MIRFS. The results have consistently demonstrated that the proposed MCDM method outperforms traditional approaches across key performance metrics, such as Fault Detection Rate (FDR), Fault Isolation Rate (FIR), and False Alarm Rate (FAR). These findings support the model's robustness and its potential to generalize across different MIRFS types, proving its versatility and effectiveness.

While the current dataset and experimental design sufficiently validate the proposed method's effectiveness, we acknowledge the importance of further extending the experiments to include a broader variety of MIRFS types and configurations. Future research will aim to incorporate a larger dataset that encompasses a wider range of testability parameters and operational scenarios, testing the method's robustness across a broader spectrum of MIRFS environments. This approach will further validate the model's applicability and ensure its effectiveness in diverse real-world settings. Moreover, future studies may involve collaboration with industry partners to access more varied MIRFS datasets and configurations, which are not currently available in public domains. This collaboration would provide an opportunity to apply the proposed MCDM method in actual operational settings, further demonstrating its utility and robustness. By integrating more diverse data and real-world scenarios, we aim to strengthen the evidence base supporting the method's applicability and refine the model to better handle varied operational challenges.

We hope this explanation clarifies the generalizability and flexibility of our proposed method. We believe that the current model, with its adaptive framework, already demonstrates its potential to be applied to various MIRFS types. Thank you again for your valuable feedback, which will guide future enhancements to our research. If there are any additional aspects you think should be addressed, please let us know.

Problem 4

It is suggested to present the results in a comparative format, showing how the MCDM method outperforms traditional methods, using statistical tests to validate the findings.

Please discuss the practical implications of the results, including potential benefits in terms of system reliability and maintenance efficiency.

Response:

Thank you for your valuable suggestions regarding the presentation of results and the discussion of their practical implications. We appreciate your emphasis on using a comparative format and statistical tests to validate the superiority of the MCDM method over traditional methods, as well as your focus on highlighting the practical benefits in terms of system reliability and maintenance efficiency.

To address your first suggestion, we would like to clarify that the manuscript already includes comparative analyses of the proposed MCDM method against traditional methods such as the Simple Weighted Method (SWM), Equal Distribution Method (EDM), and Historical Data-Based Allocation (HDBA). In Sections 4.2 and 4.3, we present these comparisons using various performance metrics, including Fault Detection Rate (FDR), Fault Isolation Rate (FIR), and False Alarm Rate (FAR). To further strengthen this comparative analysis, we will incorporate statistical tests, such as ANOVA or t-tests, to provide a more rigorous validation of the results. These tests will help establish the statistical significance of the differences observed between the MCDM method and traditional methods, thereby reinforcing the claim of its superior performance.

Regarding your second suggestion on discussing the practical implications of the results, the manuscript does address the potential benefits of the proposed method in terms of system reliability and maintenance efficiency. The MCDM method's ability to enhance fault detection and isolation while minimizing false alarms suggests significant improvements in system reliability. By accurately identifying and diagnosing faults, the method reduces unnecessary maintenance actions, thereby improving maintenance efficiency and reducing operational costs. We will expand this section further to provide a more detailed discussion on how these improvements can translate into real-world benefits, such as prolonged system lifespan, reduced downtime, and enhanced overall performance of MIRFS.

We believe these revisions will provide a more comprehensive and convincing presentation of the results, aligning better with your suggestions. Thank you again for your insightful feedback, which has been instrumental in guiding these improvements. If there are any other aspects you feel should be addressed or further elaborated upon, we would be grateful for your continued guidance.

Problem 5

Check that all figures and tables are correctly referenced in the text and are placed in the most logical sequence in the document.

Response:

Thank you for your valuable feedback regarding the consistency and logical placement of figures and tables in the manuscript. I appreciate your attention to this important aspect of the presentation.

I have reviewed the manuscript thoroughly to ensure that all figures and tables are correctly referenced in the text and are placed in the most logical sequence. I have also standardized the formatting of all tables and figures to maintain consistency throughout the document. This revision involved aligning the style, numbering, and labeling of all visual elements to enhance readability and ensure they effectively support the narrative of the manuscript.

As an example of the revised formatting, please refer to the table format shown below:

Table 5. Control criteria weight Ws.

Control criteria (Network)

Variable name

Weight

P1

W1

0.565670

P2

W2

0.111141

P3

W3

0.114038

P4

W4

0.209153

The table now follows a consistent format with clear headings, aligned columns, and precise numerical representation. Similar adjustments have been made to all other tables and figures in the manuscript.

I believe these revisions address your concerns and enhance the overall clarity and professionalism of the manuscript. Thank you again for your constructive feedback. If there are any specific areas that you feel require further adjustment, I am open to additional suggestions.

Problem 6

References: Update the references to include the most recent and relevant studies in the field of MCDM and MIRFS testability. Ensure that all references are cited consistently according to the chosen citation style.

Response:

Thank you for your insightful suggestion to update the references to include the most recent and relevant studies in the fields of MCDM (Multi-Criteria Decision-Making) and MIRFS (Multi-Functional Integrated RF Systems) testability. We greatly appreciate your attention to detail and your emphasis on ensuring the manuscript accurately reflects the current state of research in these rapidly evolving fields.

Upon reviewing your feedback, we have thoroughly examined the existing literature cited in our manuscript and identified several key areas where incorporating more recent studies could significantly enhance the theoretical foundation and provide a richer context for recent advancements. In response, we have carefully updated the reference list to include the latest research that specifically addresses the developments in MCDM methodologies and their application to testability in complex systems like MIRFS. These new references include studies on advanced decision-making frameworks that integrate emerging technologies, such as machine learning and artificial intelligence, into MCDM processes. This integration not only underscores the relevance of our work in the current research landscape but also aligns with the cutting-edge innovations in the field.

Moreover, we have expanded the scope of the literature review to encompass a broader range of studies, including those that explore novel applications of MCDM methods in dynamic and uncertain environments, which are highly relevant to the challenges faced in MIRFS testability. This expanded review provides a more comprehensive understanding of how different MCDM techniques can be tailored to address specific testability challenges, enhancing the overall robustness and applicability of the proposed method.

To ensure a high standard of scholarly rigor, we have also meticulously revised the citation style throughout the manuscript. All references have been consistently cited according to the chosen citation style, with careful attention to formatting details to maintain academic integrity and readability. This includes aligning all in-text citations correctly with the updated bibliography, ensuring that readers can easily locate the original sources and verify the information presented.

We believe that these updates significantly enhance the manuscript's relevance and scholarly contribution by grounding our work more firmly within the context of contemporary research. The inclusion of recent studies not only strengthens the theoretical underpinnings of our methodology but also highlights the innovative aspects of our approach, particularly in its application to MIRFS testability. This alignment with the latest research trends underscores the novelty and impact of our study.

Thank you again for your constructive feedback, which has been invaluable in refining and improving the quality of our manuscript. We remain committed to ensuring that our work meets the highest standards of academic excellence. If there are any additional recommendations or further adjustments you feel would benefit the manuscript, we would greatly appreciate your continued guidance and suggestions.

Problem 7

In Section 3.1, please provide a more detailed explanation of how the AHPTOPSIS method reduces subjectivity and uncertainty in testability parameter selection.

Response:

Thank you for your insightful suggestion to provide a more detailed explanation of how the AHP-TOPSIS method reduces subjectivity and uncertainty in testability parameter selection. We appreciate your focus on enhancing the clarity and depth of the manuscript.

Upon reviewing your feedback and the manuscript structure, we noticed that Section 3.1 primarily focuses on constructing a comprehensive testability index system for MIRFS across its entire lifecycle. This section lays the groundwork by defining the critical parameters and metrics necessary to evaluate testability comprehensively. However, the discussion of specific multi-criteria decision-making (MCDM) methods, including AHP-TOPSIS, is covered in Section 3.2. In Section 3.2, we have described how AHP is used to determine the weights of various testability parameters, and TOPSIS is employed to rank these parameters based on their proximity to an ideal solution.

To directly address your suggestion, we will add more detailed content in Section 3.2 explaining how the AHP-TOPSIS method reduces subjectivity and uncertainty in the testability parameter selection process. This addition will focus on the following points:

  • Structured Decision-Making with AHP: AHP provides a structured framework that allows for a systematic comparison of testability parameters by utilizing expert judgment. While this process inherently involves some level of subjectivity, AHP helps to minimize bias by formalizing the decision-making process into a series of pairwise comparisons, which ensures consistency and rationality.
  • Objective Evaluation through TOPSIS: To counterbalance the subjectivity introduced by AHP, TOPSIS is integrated into the decision-making process. TOPSIS uses the weights derived from AHP to objectively rank the testability parameters based on their closeness to an ideal solution. This objective evaluation helps to reduce the impact of individual biases and ensures a more balanced and comprehensive analysis.
  • Reduction of Uncertainty: The combination of AHP and TOPSIS provides a dual mechanism that reduces uncertainty. AHP organizes the decision criteria and determines their relative importance, while TOPSIS uses this information to perform an objective analysis. Together, these methods ensure that both qualitative judgments and quantitative evaluations are considered, reducing the overall uncertainty in parameter selection.

By expanding Section 3.2 with this detailed explanation, the manuscript will provide a clearer understanding of how the integrated AHP-TOPSIS method effectively balances subjective and objective components, thereby reducing potential biases and uncertainties.

We believe these additions will address your concerns and enhance the manuscript's clarity and depth. Thank you for your valuable feedback, which has been instrumental in guiding these improvements. If you have any additional suggestions or specific areas that you think require further elaboration, we would be grateful for your continued guidance.

Problem 8

In the Results section, it is suggested to include a sensitivity analysis to show how changes in the weights of different indicators affect the overall testability allocation.

Response:

Thank you for your valuable suggestion to include a sensitivity analysis in the Results section to demonstrate how changes in the weights of different indicators affect the overall testability allocation. We appreciate your emphasis on enhancing the robustness and validity of the proposed MCDM-based testability allocation method.

However, we would like to clarify the primary objective of our MCDM-based approach. The goal of our model is to determine the optimal weights for various testability indicators to achieve the most effective testability allocation for MIRFS (Multi-Functional Integrated RF Systems). The model is specifically designed to identify the best combination of weights that maximizes system reliability and performance by optimizing the allocation of testability parameters. Therefore, our focus is not on how sensitive the allocation is to changes in these weights but rather on finding the optimal set of weights that consistently produce the best results across different configurations and scenarios.

Conducting a sensitivity analysis typically involves examining how variations in input parameters, such as the weights of different indicators, impact the outcomes. In our case, the proposed model is inherently designed to minimize the impact of weight variations by identifying the most effective weight combinations. The multi-criteria decision-making framework we used integrates several decision-making techniques, such as AHP, TOPSIS, and DEMATEL, to ensure that the chosen weights are the ones that best enhance system testability and minimize the effect of any weight changes on the overall allocation performance. Thus, performing a traditional sensitivity analysis may not be necessary or particularly informative in this context.

Instead, the most relevant approach to validating our model's effectiveness is to apply it across a wide range of MIRFS types and configurations. By conducting multiple experiments in different scenarios, we can demonstrate that the model consistently determines the optimal weights that lead to the most effective testability allocation. This approach provides a more direct validation of the model's capability to achieve its intended purpose under various conditions and proves its generalizability and adaptability.

In the current study, we have already tested the model using a diverse set of MIRFS configurations to illustrate its effectiveness. These experiments have shown that the proposed method consistently identifies the optimal weight settings that enhance fault detection, fault isolation, and minimize false alarm rates across different scenarios. This consistency in performance supports the model's robustness and its ability to maintain high levels of effectiveness under varying conditions, further underscoring its applicability to a wide range of MIRFS configurations.

Looking forward, future research will focus on expanding the range of MIRFS types and configurations tested to provide even more comprehensive validation. This will involve applying the model to additional datasets that reflect a broader spectrum of operational conditions, including those with different levels of complexity, size, and dynamic behavior. By extending our experiments to include more varied scenarios, we aim to provide further evidence of the model's ability to generalize across diverse environments and maintain optimal performance.

Additionally, we are considering collaborations with industry partners to apply the model in real-world settings, allowing for validation using practical data and configurations not available in standard datasets. This would provide an opportunity to demonstrate the model's robustness in actual operational environments, further establishing its practical utility.

Thank you again for your constructive feedback. We remain committed to refining our approach and ensuring that our research meets the highest standards of academic rigor. If there are any additional recommendations or areas you think should be further elaborated upon, we would greatly appreciate your continued guidance.

Problem 9

It is suggested to add a case study or real-world example in Section 4 to demonstrate the practical application of the MCDM method and discuss the challenges and solutions encountered.

Response:

Thank you for your suggestion to add a case study or real-world example in Section 4 to demonstrate the practical application of the MCDM method. We appreciate your focus on enhancing the manuscript by providing practical insights and real-world validation.

Currently, the manuscript presents a comprehensive set of simulated data and experimental results that have been specifically designed to evaluate the accuracy and effectiveness of the proposed MCDM method in testability allocation. These experiments, as detailed in Section 4, cover a wide range of scenarios and parameter variations, demonstrating the model's robustness and adaptability under different conditions. The results show that the MCDM method consistently outperforms traditional methods in key metrics such as Fault Detection Rate (FDR), Fault Isolation Rate (FIR), and False Alarm Rate (FAR), which strongly supports the validity and reliability of the model.

While we agree that a real-world case study could provide additional practical validation, the current data already provides substantial evidence of the model's accuracy and practical applicability. The use of diverse simulated scenarios effectively mirrors the complexities and uncertainties that would be encountered in real-world applications, thereby providing a robust foundation for validating the proposed method.

However, we understand the importance of demonstrating the model's performance in real-world settings and acknowledge this as a limitation of the current manuscript. In future work, we plan to conduct a case study involving real-world MIRFS data to further validate the practical application of the MCDM method. This will not only help in understanding the challenges and solutions encountered in real-world scenarios but also strengthen the practical relevance of the research. We will also include a discussion on the lack of a real-world case study as a limitation in the revised manuscript to provide a balanced perspective on the current findings.

We hope this clarification addresses your concerns. Thank you again for your valuable feedback, and we are open to any further suggestions you may have to enhance the manuscript.

Problem 10

Please discuss whether the proposed method can be applied in dynamic environments, e.g., Exploiting a cognitive bias promotes cooperation in social dilemma experiments; Onymity promotes cooperation in social dilemma experiments; Modelling the dynamics of regret minimization in large agent populations: a master equation approach; Communicating sentiment and outlook reverses inaction against collective risks.

Response:

Thank you for your insightful question regarding the applicability of the proposed MCDM method in dynamic environments, such as those explored in studies involving cognitive bias, social dilemmas, regret minimization, and the communication of collective risks. I appreciate your interest in exploring the broader potential of our approach and its adaptability to more complex and evolving scenarios.

The proposed MCDM (Multi-Criteria Decision-Making) method was initially designed to address static and semi-static conditions where decision criteria and their relative importance remain relatively stable over time. In these contexts, the method effectively integrates multiple sources of data and expert opinions to prioritize testability parameters, optimize resource allocation, and enhance overall system reliability and performance. However, I recognize the growing need to adapt such methods to dynamic environments characterized by continuous change, uncertainty, and the need for real-time adaptation.

Applicability to Dynamic Environments:

In dynamic environments, decision-making must be highly responsive to changes in conditions, inputs, and stakeholder preferences. For example, in social dilemma experiments, cooperation and behavior may shift rapidly due to cognitive biases or changes in onymity conditions. Similarly, in scenarios involving regret minimization or collective risk management, agents constantly update their strategies based on new information or shifts in group dynamics. To apply the MCDM method in such settings, it is essential to ensure that the decision-making framework can dynamically adjust to new data and evolving conditions.

The current MCDM framework can potentially be extended to dynamic environments by incorporating several enhancements:

  • Adaptive Weight Adjustment: One possible extension is to implement adaptive mechanisms for weight adjustment in the MCDM process. This would involve continuously updating the weights assigned to different criteria based on real-time feedback or new data inputs. For instance, using machine learning techniques, such as reinforcement learning or Bayesian updating, could enable the model to learn from ongoing changes and adjust its decision-making strategy accordingly.
  • Dynamic Criteria Inclusion: In a dynamic environment, the relevance of different testability indicators may change over time. By allowing the inclusion or exclusion of criteria based on their current relevance, the MCDM model could maintain its robustness and flexibility. Techniques such as sliding window analysis or real-time sensitivity analysis could be used to continuously evaluate the importance of different indicators and adjust the model accordingly.
  • Integration with Predictive Analytics: Incorporating predictive analytics and forecasting methods could further enhance the model's applicability in dynamic settings. For example, time-series analysis or agent-based modeling could predict future changes in system conditions or stakeholder preferences, allowing the MCDM model to proactively adjust its recommendations. This forward-looking capability would be particularly valuable in scenarios where anticipating future conditions is crucial for effective decision-making.

Challenges and Potential Solutions:

Adapting the MCDM method to dynamic environments is not without challenges. Dynamic environments are characterized by a high degree of uncertainty, non-linear interactions, and complex feedback loops, which can complicate the decision-making process. The following challenges and potential solutions are considered:

  • Handling Uncertainty and Variability: Dynamic environments inherently involve uncertainty and variability in data inputs and decision criteria. To manage this, robust optimization techniques could be integrated into the MCDM framework to provide solutions that are less sensitive to variations in input data. This could involve the use of stochastic programming or robust decision-making approaches that account for uncertainty in the model parameters.
  • Computational Complexity: As the decision-making process becomes more dynamic, the computational demands of the MCDM model may increase. Efficient algorithms, parallel processing, or cloud-based computing resources could be employed to handle large datasets and complex calculations in real-time, ensuring that the model remains responsive and effective.
  • Data Availability and Quality: Dynamic environments require continuous data input, which may not always be available or reliable. Establishing data quality management protocols and integrating real-time data validation and cleansing processes can help maintain the integrity of the decision-making process. Collaborations with field experts and the use of sensor networks or IoT devices can also enhance data collection and availability.

Problem 11

This paper is interesting. I recommend accepting this paper after completing the above minor revisions.

Response:

Your feedback has provided valuable insights that will help guide future research efforts and enhance the applicability of the proposed method.

Problem 12

Minor editing of English language required. 

Response:

To further enhance the quality of our revised manuscript, a native English-speaking colleague with expertise in scientific writing has reviewed and refined the language, ensuring that the manuscript is clear and scientifically accurate. Thank you again for your thought-provoking question.

 

Author Response File: Author Response.pdf

Reviewer 3 Report

Comments and Suggestions for Authors

The paper is well presented, technically I am satisfied, overall quality is good; however, I recommend to address the following major comments before next decision.

   

Comments for author File: Comments.pdf

Comments on the Quality of English Language

The English writing as well as the presentation skill should be improved.

   

Author Response

Problem 1

The paper presents a comprehensive methodology for testability allocation in MIRFS using MCDM approaches. However, more details could be provided on the specific implementation steps of the DEMATEL and ANP methods to enhance reproducibility. For example, detailing the criteria used to construct the impact relationship diagram in DEMATEL would be beneficial.

Response:

Thank you for your valuable feedback on my paper. We are very grateful for your suggestion that we provide more details on the implementation steps of DEMATEL and ANP methods to enhance repeatability.

In response to your comments, We have revised the method part and added a detailed description of the specific steps in the DEMATEL and ANP process. For DEMATEL method, we added a detailed description of how to build the impact relationship diagram, including the process of selecting standards, the reasons for selecting these standards, and the steps to quantify the relationship between them. We have modified the lines 453 to 472 line in the original text according to your suggestion. The details are as follows:

Specifically, through expert discussions, questionnaire surveys, and other methods, the impact relationship between elements is analyzed pairwise. A 4-level scale (0, 1, 2, 3) is used to measure the degree of influence between indicators. The value of the matrix element dij is defined as follows:

0——Factor i has no effect on factor j.

1——Factor i has little effect on factor j, which can be ignored.

2——Factor i has a certain impact on factor j, which needs to be considered.

3——Factor i has a significant impact on factor j and is an important consideration in the decision-making process.

Through this scale, experts score the direct impact of each pair of factors to form an initial direct relationship matrix . After the direct relationship matrix is constructed, the data needs to be standardized to ensure that the data in the matrix is between 0 and 1. The purpose of this is to ensure the stability and consistency of the calculation. The normalized direct relationship matrix n is calculated as shown in equation (21):

                                                                                                  

Where, S is a scaling factor, and its calculation method is shown as equation (22):

                                                                    

After obtaining the standardized direct relation matrix N, calculate the comprehensive influence matrix T. The comprehensive impact matrix T includes direct and indirect impacts, which is the basis for constructing the impact relationship diagram. The calculation equation is shown as equation (23):

                                                             

We hope these revisions meet your expectations and enhance the clarity and reproducibility of the paper. I am open to any further suggestions you may have.

Problem 2

The literature review is thorough but could benefit from a deeper analysis of recent advancements in related fields. Consider integrating insights from contemporary research, particularly on cross-domain fault diagnosis using multi-source sensor data, which might provide valuable perspectives. This approach is effectively utilized in the article "A two-stage importance-aware subgraph convolutional network based on multi-source sensors for cross-domain fault diagnosis".

Response:

Thank you for your valuable suggestion regarding the literature review section. We appreciate your feedback on the importance of integrating more recent advancements in related fields, particularly on cross-domain fault diagnosis using multi-source sensor data. Your insight has highlighted an important area that could significantly strengthen the manuscript by providing a more comprehensive understanding of how these advancements can be leveraged to enhance testability allocation in complex systems like MIRFS. Based on your suggestion, I have revised the literature review to include a deeper analysis of contemporary research, specifically focusing on the integration of subgraph convolutional networks (SGCN) for enhanced testability in such environments. We have modified the lines 126 to 138 line in the original text according to your suggestion.

In the revised literature review, we have expanded on recent studies that demonstrate the effectiveness of SGCN in managing and analyzing complex, interconnected data structures. This approach is especially relevant for systems that involve multi-source sensor data, where the ability to model and interpret complex interactions between different data sources is crucial. For example, we have included a discussion of the work by Chen et al. (2023) on a two-stage importance-aware subgraph convolutional network for cross-domain fault diagnosis. This study provides a compelling example of how SGCN can effectively capture intricate dependencies and nonlinear patterns in heterogeneous data, thus offering valuable insights that could be applied to improve MIRFS testability modeling and allocation. The integration of such methods can enhance the accuracy of fault detection and diagnostic processes by utilizing advanced graph-based learning techniques, which are particularly adept at dealing with the high-dimensional, multi-relational data typical of complex systems.

Additionally, We have broadened the literature review to integrate insights from other contemporary research exploring the application of graph-based approaches in fault diagnosis. These studies collectively underscore the potential for combining advanced computational techniques, like SGCN, with traditional MCDM frameworks, such as AHP and DEMATEL. This combined approach offers a more robust and comprehensive strategy for testability allocation, addressing both the complexity and uncertainty inherent in systems like MIRFS. The synergy between SGCN's advanced data processing capabilities and MCDM's structured decision-making processes allows for a more nuanced analysis of system behaviors and interactions, leading to more informed and effective testability strategies.

Furthermore, integrating SGCN into the MCDM framework can provide a dual advantage: leveraging the robust decision-making capabilities of MCDM while enhancing the model with the deep learning power of SGCN to process complex graph structures. This integration not only enables a more refined understanding of both static and dynamic aspects of MIRFS but also facilitates the proactive identification of potential failure points, which is crucial for optimizing system reliability and performance. By incorporating these advanced methodologies, the testability framework can be adapted to handle real-time data and evolving operational conditions, thus improving its applicability across a range of complex systems.

We believe these additions significantly enrich the literature review by incorporating insights from contemporary research that align with the proposed method's objectives, providing a more comprehensive perspective on how to enhance the applicability and effectiveness of testability allocation methods. The updated review now offers a more detailed exploration of recent advancements and their potential to augment existing frameworks, ensuring the manuscript remains at the forefront of current research trends. Thank you again for your insightful feedback, which has been instrumental in refining and improving the manuscript. We remain open to any further suggestions or questions you may have to continue enhancing the paper's quality and relevance.

Problem 3

The conclusion summarizes the findings well, but it could be enriched by discussing potential future research directions. Suggesting how your proposed MCDM-based testability allocation method could be further developed or integrated with recent advancements in subgraph convolutional networks, as explored in "A two-stage importance-aware subgraph convolutional network based on multi-source sensors for cross-domain fault diagnosis", might provide valuable insights.

Response:

Thank you for your valuable suggestion regarding the enhancement of the conclusion section by discussing potential future research directions. Your feedback has been immensely helpful in identifying areas where the manuscript could be strengthened. Based on your insightful feedback, We have revised the conclusion to include a more comprehensive discussion on how the proposed MCDM-based testability allocation method could be further developed or integrated with recent advancements in subgraph convolutional networks (SGCN). This addition aims to provide a forward-looking perspective, offering a roadmap for future research that builds on the current findings and explores new possibilities for enhancing testability in complex systems like MIRFS. We have modified the lines 718 to 732 line in the original text according to your suggestion.

In the revised conclusion, we have emphasized the significant potential of integrating SGCN with the current MCDM model. Subgraph convolutional networks have demonstrated strong capabilities in managing complex, interconnected data structures, particularly in scenarios where data points are heavily dependent on one another, which aligns well with the intricate component interactions and dependencies present in MIRFS. The proposed integration leverages the strengths of SGCN to capture and model these complex relationships more effectively. This integration could provide a more nuanced and granular understanding of MIRFS's subsystem interactions, enabling more precise fault detection and diagnosis. By refining the model to account for the multifaceted dependencies between components, this approach can ultimately enhance the robustness and adaptability of the testability framework, providing more reliable and efficient solutions for fault management in real-time applications.

Additionally, the integration of SGCN could significantly complement the decision-making capabilities of the MCDM model by introducing a deep learning approach specifically designed to analyze and interpret complex graph structures within MIRFS. This combination of methods provides a comprehensive framework that leverages the systematic decision-making strengths of MCDM with the advanced data processing capabilities of SGCN. The dual approach allows for a more refined and detailed analysis of both static and dynamic aspects of MIRFS, capturing real-time changes and evolving patterns within the system. This synergy facilitates a proactive approach to enhancing system reliability and performance optimization, enabling the model to not only respond to current system states but also anticipate future system conditions and potential faults, thereby reducing downtime and maintenance costs.

We believe these additions will significantly enrich the conclusion by highlighting future research opportunities that could further improve the proposed method's applicability and effectiveness in a variety of contexts. By incorporating advanced computational techniques such as SGCN, the proposed framework can be adapted and extended to a broader range of applications, offering greater flexibility and scalability. Thank you again for your insightful feedback, which has been instrumental in guiding these enhancements. We remain open to any further suggestions or questions you may have to continue refining the manuscript and expanding its contribution to the field.

Problem 4

The current title of the paper, while descriptive, lacks a certain appeal and uniqueness that would make it stand out in the crowded field of Defect Detection research. Consider revising the title to more directly reflect the novel aspects of your approach.

Response:

Thank you for your valuable feedback regarding the title of our paper. We understand that while the original title was accurate in its description, it may lack the appeal and uniqueness needed to capture attention in the highly competitive field of Defect Detection research. Your suggestion is very insightful, and we fully agree that the title should better reflect the novel aspects of our approach to make our research stand out.

In light of this, we have decided to revise the title and have changed it to "A MCDM-based Analysis Method of Testability Allocation for Multi-functional Integrated RF System." We selected this revised title to more clearly convey the key innovations in our research and the uniqueness of our methodology.

Moreover, we must acknowledge that during the paper submission process, due to our carelessness, we inadvertently omitted the latter part of the title. This mistake is entirely our responsibility, and I sincerely apologize for any inconvenience this may have caused. We have already taken steps to ensure that such an oversight does not happen again in the future.

Once again, thank you for your understanding and support. If you have any further suggestions or questions, please feel free to reach out to us, and we will be more than happy to assist you.

Problem 5

The English writing as well as the presentation skill should be improved.

Response:

Thank you for your constructive feedback regarding the improvement of English writing and presentation skills. We appreciate your attention to detail and your commitment to helping enhance the overall quality of the manuscript.

To address these concerns, we have carefully reviewed the manuscript to improve the clarity, coherence, and conciseness of the language used. Specific sections have been revised to ensure that the writing is more precise and easier to understand. We have also focused on refining the presentation of key concepts and arguments to make them more logically structured and impactful.

Additionally, we have sought external input to enhance the writing quality. This includes consulting with colleagues who are proficient in English and have experience in academic writing, as well as utilizing professional language editing services to ensure that the manuscript meets high standards of academic English. These steps are aimed at enhancing both the readability and the professional presentation of the research.

We are confident that these revisions have significantly improved the manuscript's English writing and presentation. We are committed to maintaining these improvements in future drafts and appreciate your valuable feedback, which has been instrumental in guiding these changes. Should you have any further suggestions or specific areas that need more attention, we would be grateful for your continued guidance. Thank you once again for your insightful comments.

Problem 6

Add the dark sides of this work under the conclusion section

Response:

Thank you for your valuable suggestion regarding the inclusion of the limitations, or "dark sides," of the proposed work in the conclusion section. We appreciate your feedback on the importance of providing a balanced discussion that not only highlights the strengths but also acknowledges the potential limitations of the study. Based on your recommendation, we have revised the conclusion to include a dedicated section that addresses the key limitations of the proposed MCDM-based testability allocation method. I have modified the lines 710 to 717 in the original text according to your suggestion.

In the revised conclusion, we have explicitly mentioned several limitations that could affect the applicability and generalizability of the current approach. Firstly, I discussed the reliance on expert judgment within the MCDM framework, which could introduce subjectivity and potential biases in decision-making, especially in cases where expert opinions differ significantly. This reliance might limit the objectivity of the results and suggests a need for future research to explore more objective, data-driven methods to complement or refine expert input.

Additionally, we have highlighted the issue of computational complexity associated with integrating multiple MCDM methods. This integration, while beneficial for providing a comprehensive decision-making framework, can lead to increased computational demands, especially for large-scale, complex systems. This limitation could impact the scalability and real-time applicability of the proposed method. Future work could focus on developing more efficient algorithms and leveraging advanced computational techniques, such as parallel processing, to mitigate these challenges.

By including these limitations in the conclusion, we aim to provide a more comprehensive and transparent evaluation of the proposed method, which not only underscores its strengths but also clearly identifies areas where further research is needed. This addition enriches the manuscript by offering a balanced perspective that aligns with the broader academic standards of critical self-reflection and continual improvement.

We believe these revisions enhance the overall quality and depth of the manuscript, providing readers with a clearer understanding of both the benefits and the challenges associated with the proposed approach. Thank you again for your insightful feedback, which has been instrumental in refining and improving the manuscript. We remain open to any further suggestions or questions you may have to continue enhancing the paper's quality and relevance.

Author Response File: Author Response.pdf

Reviewer 4 Report

Comments and Suggestions for Authors

The paper presents a novel testability allocation method for Multi-functional Integrated RF Systems (MIRFS) based on Multi-Criteria Decision-Making (MCDM). The proposed approach integrates various MCDM techniques, including AHP-TOPSIS and DEMATEL-ANP, to optimize testability indicators across different system levels and lifecycles stages. The methodology is validated through an example, demonstrating its superior accuracy and effectiveness compared to traditional methods in ensuring MIRFS reliability and performance. However, before considering it for any acceptance, a few suggestions and questions are mandatory to be answered and incorporated in the final version of the manuscript.

·         How does the proposed MCDM approach compare to other existing methods for testability allocation in terms of addressing system complexity?

·         The paper mentions over a hundred testability parameters—how does the literature justify the selection of the final 12 indicators used in the methodology?

·         Can you explain the rationale behind integrating AHP-TOPSIS and DEMATEL-ANP for the testability allocation model? Why were these specific methods chosen?

·         What criteria were used to validate the consistency of the AHP judgment matrices, and how were inconsistencies addressed?

·         How does the DEMATEL-ANP method improve the accuracy of testability indicator

·         In the example provided, how were the weights of different testability indicators determined, and how did these weights influence the final allocation?

·         What specific advantages did the MCDM-based method demonstrate over traditional methods in the results of the case study?

·         To further enhance the quality and depth of your manuscript, I recommend citing additional paper that could provide a more comprehensive context and support for your research. This will not only strengthen your argumentation but also help in situating your work within the broader academic landscape. Muhammad Farooq Siddique, Zahoor Ahmad & Jong-Myon Kim (2023) Pipeline leak diagnosis based on leak-augmented scalograms and deep learning, Engineering Applications of Computational Fluid Mechanics, 17:1, DOI: 10.1080/19942060.2023.2225577.

·         The paper claims that the proposed method enhances testability throughout the MIRFS lifecycle. What are the potential challenges in applying this method to other complex systems?

·         How does the proposed method address the subjectivity inherent in expert judgment during the decision-making process?

 

·         What are the limitations of the proposed method, and how could future research address these limitations to further improve the testability of integrated systems?

Author Response

Problem 1

How does the proposed MCDM approach compare to other existing methods for testability allocation in terms of addressing system complexity?

Response:

Thank you for your insightful question regarding the comparison between the proposed MCDM approach and other existing methods for testability allocation in terms of addressing system complexity.

In response, we have added a new section to the manuscript that specifically highlights the unique advantages of the MCDM method over traditional methods in the testability modeling and allocation process of MIRFS. The added content explains how the MCDM approach can handle multiple criteria simultaneously, which is essential given the diverse factors such as cost, reliability, maintainability, and testability that influence the MIRFS system throughout its lifecycle. Unlike traditional methods, the MCDM approach can effectively capture these multiple influencing factors.

Additionally, we elaborated on how the MCDM approach, through methods like DEMATEL and ANP, is particularly suited to cope with the complexity of systems like MIRFS, which contain interdependent subsystems. This approach models the interdependencies and captures the hierarchical structure or network characteristics of the complex system more comprehensively than methods such as fault tree analysis (FTA) or failure mode and effect analysis (FMEA), which do not fully capture multi-dimensional dependencies between components.

Furthermore, we discussed the ability of the MCDM method to integrate stakeholder preferences, accommodating different priorities from engineers, managers, and customers, which is often overlooked in traditional methods. The strong scalability of the MCDM approach, particularly the analytic network process (ANP), was also highlighted, demonstrating its applicability to systems of varying sizes and complexity levels. We have modified the lines 175 to 203 in the original text according to your suggestion. The details are as follows:

MCDM method has its unique advantages over other existing methods in the testability modeling and allocation process of MIRFS

(1) Multiple criteria can be handled: MCDM is essentially designed to handle multiple (usually conflicting or influencing) criteria. In the whole life process of MIRFS, decision makers are allowed to consider various factors such as cost, reliability, maintainability and testability at the same time, and fully weigh the impact of various factors on the system. It has obvious advantages over the traditional methods that cannot effectively capture multiple influencing factors.

(2) It can cope with the complexity of the system: MIRFS system contains multiple subsystems, which are interdependent. DEMATEL, ANP and other methods can well model this interdependence and capture the hierarchical structure or network characteristics of the complex system. Compared with fault tree analysis (FTA) or failure mode and effect analysis (FMEA) which can more easily identify the key fault points, these methods cannot fully capture the multi-dimensional dependency between components

(3) Fully integrate the preferences of stakeholders: the preferences of stakeholders (customer requirements) need to be fully considered in the whole life process of MIRFS. The MCDM method can deal with the different priorities of multiple stakeholders (such as engineers, managers, customers) in the testability aspect in the system design process, while the traditional method does not explicitly consider the preferences of stakeholders

(4) Strong scalability: they can be applied to systems of different sizes and complexity levels. For such a complex system as MIRFS, methods such as analytic network process (ANP) provide scalability and can handle a large number of interdependent criterion matrices. They can be applied to systems of different sizes and complexity levels. For very large systems, methods such as analytic network process (ANP) provide scalability and can handle a large number of interdependent criteria matrices.

According to the above analysis, this paper adopts MCDM method to apply to the testability modeling and allocation of MIRFS system, so as to meet the testability of the whole life process of MIRFS. Therefore, a testability allocation method based on MCDM is proposed as shown in Figure 1.

These additions aim to provide a clearer understanding of why the MCDM method was chosen for this study and how it offers a more robust framework for addressing the testability of MIRFS systems. We hope these modifications address your concerns and provide a more comprehensive comparison. Thank you for your valuable feedback.

Problem 2

The paper mentions over a hundred testability parameters—how does the literature justify the selection of the final 12 indicators used in the methodology?

Response:

Thank you for your question regarding the justification for selecting the final 12 testability indicators from the over a hundred parameters mentioned in the literature. We appreciate the opportunity to elaborate on this process.

In the manuscript, we have expanded on the initial screening process of testability parameters, which is guided by both theoretical frameworks and practical considerations as depicted in the newly added classification diagram,The classification diagram is shown in the following figure. Testability parameters were broadly categorized into characteristic parameters and capability parameters, each encompassing various subcategories such as physical characteristics, usage properties, subjective ability parameters, and objective ability parameters. This initial categorization allowed for a structured approach to control the range of parameters under consideration.

 

To further refine the selection and ensure that only the most relevant parameters were retained, a series of scientific and well-defined screening criteria were applied. These criteria include clarity, universality, reflexivity, comprehensiveness, independence, testability, verifiability, and convertibility. Each criterion was carefully chosen to ensure that the selected parameters not only accurately represent testability performance but are also practical for implementation and analysis. For instance, parameters were chosen for their clear definitions and mathematical methods as outlined in existing standards, their ability to reflect both testability and related performance characteristics, and their independence to avoid redundancy and overlap.

Moreover, to address the issue of parameter redundancy and correlation, only those parameters that could maintain a high degree of independence and offer clear, measurable, and verifiable insights were selected. This rigorous filtering process led to the identification of the final 12 indicators that most effectively meet the requirements for MIRFS testability.

We have added a more detailed explanation of this screening process in lines 488 to 532 in the manuscript, so as to more clearly explain the reasons for the final selection of these parameters. We hope this addition clarifies how the literature and standards justify the selection of these 12 testability indicators and aligns with the overall objectives of the study.

Thank you again for your insightful question, and please let me know if there are any further aspects you would like me to elaborate on.

Problem 3

Can you explain the rationale behind integrating AHP-TOPSIS and DEMATEL-ANP for the testability allocation model? Why were these specific methods chosen?

Response:

Thank you for your insightful question regarding the rationale behind integrating AHP-TOPSIS and DEMATEL-ANP for the testability allocation model. We appreciate the opportunity to clarify the reasons for selecting and combining these specific methods. We have modified the lines 209 to 225 in the original text according to your suggestion.

In the first part of the model, we chose to integrate AHP (Analytic Hierarchy Process) with TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) due to their complementary strengths. AHP is effective for determining the weights of various factors based on expert judgment, providing a structured way to decompose a decision problem into its constituent elements. However, AHP alone does not inherently rank alternatives but rather focuses on deriving priorities. To address this limitation, we combined it with TOPSIS, which excels in ranking alternatives by considering both the distance to an ideal solution and the worst-case scenario. This integration allows us to leverage AHP's capacity to capture expert insights while using TOPSIS to provide a more objective and comprehensive ranking of options. The combination effectively mitigates the limitations associated with relying solely on either method, thereby enhancing the overall decision-making accuracy.

For the second part, we integrated DEMATEL (Decision-Making Trial and Evaluation Laboratory) with ANP (Analytic Network Process) to better capture the complexity of interrelationships among factors in MIRFS testability modeling. DEMATEL is particularly useful in identifying and quantifying the interactions and influence among various factors, allowing us to visualize and assess the causal relationships. ANP, on the other hand, is designed to handle complex decision-making problems where there is interdependence among factors. By integrating DEMATEL with ANP, we are able to utilize DEMATEL's ability to delineate the influence structure, which informs the ANP framework in conducting a more nuanced decision analysis that reflects these interdependencies. This approach is more aligned with real-world scenarios, where factors are seldom completely independent, thus providing a more realistic and effective model for testability allocation.

The dual use of AHP-TOPSIS and DEMATEL-ANP allows us to address different levels of decision-making problems within MIRFS testability modeling and allocation. AHP-TOPSIS is more suited for determining priorities and ranking alternatives based on defined criteria, while DEMATEL-ANP provides a robust framework for analyzing complex relationships and dependencies among factors. This strategic integration ensures that we can comprehensively address both the prioritization of criteria and the intricate dependencies within the system, resulting in a more robust and effective allocation model.

We hope this explanation clarifies the rationale behind the selection and integration of these methods. Thank you again for your valuable question, and we are happy to provide further clarification if needed.

Problem 4

What criteria were used to validate the consistency of the AHP judgment matrices, and how were inconsistencies addressed?

Response:

Thank you for your thoughtful question regarding the criteria used to validate the consistency of the AHP judgment matrices and the measures taken to address any inconsistencies.

We would like to clarify that the manuscript already provides a detailed explanation of the criteria used to validate the consistency of the AHP judgment matrices. Specifically, we utilized the consistency index (CI) and the consistency ratio (CR) as primary measures to ensure that the judgment matrices conform to the necessary consistency standards. The CI is calculated to assess how close the matrix is to being consistent, and the CR is used to determine whether the level of inconsistency is acceptable based on a threshold value. This approach is standard in AHP applications and is crucial for ensuring the reliability of the decision-making process. We have written the specific content from line xxx to line xxx in the text.

In addition to these consistency measures, we have now included further details in the manuscript about the steps taken when inconsistencies are identified. When the CI or CR values indicate inconsistency, we first employ the eigenvalue adjustment method. This technique aims to minimize the consistency index (CI) while preserving the original data as much as possible, ensuring that the integrity of the initial judgments is maintained. The goal here is to make minimal adjustments to the matrix to achieve consistency without significantly altering the original expert evaluations.

If direct adjustments using the eigenvalue method prove to be challenging or if a satisfactory level of consistency cannot be achieved, a secondary approach is implemented. This involves a thorough reassessment of the criteria, and if needed, engaging in discussions with domain experts to revisit and refine the judgment criteria. This collaborative approach allows for a more nuanced understanding of the factors at play and helps optimize the original data to better reflect expert consensus. By incorporating expert feedback, we ensure that the revised matrices not only meet consistency standards but also align closely with practical realities and expert knowledge. We have modified the lines 338 to 341 in the original text according to your suggestion. The details are as follows:

When there is inconsistency, the eigenvalue adjustment method is first adopted to minimize the CI while preserving the original data as much as possible. If the direct adjustment is difficult, the standard can be reevaluated and discussed with experts to refine the judgment and optimize the original data.

These additional steps have been included to provide a more comprehensive account of our methodology and to highlight our commitment to ensuring the accuracy and reliability of the AHP-based decision-making process. We believe these revisions address your concerns and further strengthen the robustness of the study's findings. Thank you again for your valuable feedback. I hope these clarifications are helpful, and We are open to any further suggestions you might have.

Problem 5

How does the DEMATEL-ANP method improve the accuracy of testability indicator

Response:

Thank you for your question about how the DEMATEL-ANP method improves the accuracy of testability indicators. We appreciate the opportunity to elaborate on this aspect. We have modified the lines 215 to 221 in the original text according to your suggestion. The details are as follows:

In the second part, ANP and DEMATEL are integrated. DEMATEL is used to identify and quantify the interaction between factors, and ANP can make decision analysis based on these relationships. This combination is more in line with the actual situation, because the factors in many decision-making problems are not completely independent. ANP considers the interdependence between factors, and DEMATEL can help identify which factors have the greatest impact on other factors, making the network structure of ANP more reasonable and effective.

In the revised manuscript, we have provided a detailed explanation of the rationale behind integrating DEMATEL and ANP in the testability indicator model. DEMATEL is employed to identify and quantify the interactions between various factors, which is critical for understanding the complex interdependencies that often exist in decision-making problems. By clarifying these relationships, DEMATEL helps to minimize subjective biases that might arise from individual judgment, thereby enhancing the reliability of the input data.

ANP then utilizes the quantified relationships provided by DEMATEL to perform decision analysis, taking into account the interdependence between factors. This combination is particularly effective because many factors in decision-making scenarios are not entirely independent. By using DEMATEL to first establish a clear, quantifiable influence structure, ANP can more accurately assign weights and prioritize factors within the network. This dual approach ensures that the network structure in ANP reflects the true nature of the system's interdependencies, leading to more precise and reliable testability indicators.

Furthermore, by integrating these two methods, the model benefits from a comprehensive approach that combines both qualitative insights and quantitative data, resulting in a more nuanced understanding of the system's testability characteristics. This approach has been supported by various studies demonstrating the effectiveness of combining DEMATEL and ANP in enhancing decision-making accuracy in complex systems.

We have now included these points in the manuscript to provide a more robust explanation of how the DEMATEL-ANP integration contributes to improving the accuracy of the testability indicators. I hope this addition clarifies the rationale behind this methodological choice.

Thank you again for your valuable feedback, and we are open to any further suggestions you might have.

Problem 6

In the example provided, how were the weights of different testability indicators determined, and how did these weights influence the final allocation?

Response:

Thank you for your question about how the weights of different testability indicators were determined and how these weights influenced the final allocation in the example provided. We appreciate the opportunity to clarify this process. We have modified the lines 404 to 426 in the original text according to your suggestion. The details are as follows:

The Saaty scale is used for pairwise comparison, and the results are filled in the comparison matrix according to the judgment of experts. By adjusting the weight, a balance can be achieved between multiple testability indicators to achieve the rationality of testability indicator distribution. The specific values of Saaty scaling method and their corresponding meanings are as follows:

  • 1 (equally important) —— the two factors have the same contribution or influence on the goal.
  • 3 (slightly important) —— one factor is slightly more important than another.
  • 5 (obviously important) —— one factor is significantly more important than another.
  • 7 (very important) —— one factor is more important than another.
  • 9 (extremely important) —— one factor is absolutely more important than another.
  • 4, 6, 8 (intermediate value) —— these values are used to indicate the intermediate degree between the above adjacent judgments. For example, 2 is between 1 (equally important) and 3 (slightly important), which is used to indicate that it is slightly more important than "equally important".
  • 1/3, 1/5, 1/7, 1/9 (reverse judgment scale value) —— these values are used to indicate reverse judgment. For example, if one factor is slightly less important than another, use 1/3; If significantly unimportant, use 1/5; And so on. These values are the reciprocal of the original scale value and are used to reflect the relative importance at the symmetrical position of the comparison matrix.

In the manuscript, we have detailed the use of the Saaty scale for pairwise comparison to determine the weights of different testability indicators. The Saaty scale is a widely recognized method in the Analytic Hierarchy Process (AHP) that allows for quantifying the relative importance of different factors based on expert judgment. The scale ranges from 1 (equally important) to 9 (extremely important), with intermediate values to indicate varying degrees of importance. Additionally, reverse judgment scale values (such as 1/3, 1/5, etc.) are used to represent the inverse importance in the comparison matrix.

Once the pairwise comparisons are made and the comparison matrix is filled out based on expert judgment, the weights are calculated to reflect the relative importance of each testability indicator. These weights are then adjusted to achieve a balance among multiple testability indicators, ensuring the rationality of the testability indicator distribution. This balancing process is crucial because it ensures that the allocation reflects both the quantitative data and the qualitative insights provided by the experts.

We hope this explanation clarifies the process of determining the weights and their influence on the final allocation. If there are any further aspects you would like me to elaborate on, please let me know. Thank you for your valuable feedback.

Problem 7:

What specific advantages did the MCDM-based method demonstrate over traditional methods in the results of the case study?

Response:

Thank you for your question regarding the specific advantages demonstrated by the MCDM-based method over traditional methods in the case study results. We appreciate the opportunity to provide a more detailed explanation.

In the manuscript, we conducted a comparative analysis between the MCDM-based method and three traditional methods (SWM, EDM, HDBA) to evaluate their performance in testability allocation. The results, as shown in the provided table, indicate that while all methods meet the reference values ( , , ), the MCDM-based method demonstrates superior performance in several key areas.

Quantitative Analysis: The MCDM-based method consistently achieved higher Fault Detection Rate (FDR) and Fault Isolation Rate (FIR) across all test elements (E11 to E33) compared to the other methods. For instance, the highest FDR reached 0.9898 with MCDM, while the other methods achieved a maximum of around 0.9666. Similarly, the FIR values for MCDM were higher, with a maximum of 0.9783, outperforming the other methods. Additionally, the False Alarm Rate (FAR) was significantly lower with the MCDM method, reaching as low as 0.0309 compared to higher values in other methods. This indicates that the MCDM-based method is more effective in reducing false alarms while maintaining high fault detection and isolation capabilities.

Qualitative Analysis: The MCDM-based method offers several qualitative advantages over traditional methods. It allows for a more comprehensive consideration of multiple, often conflicting factors, enabling a more holistic and optimized decision-making process. Unlike traditional methods that may rely more on experience or single criteria, the MCDM method integrates both expert judgment and quantitative data, enhancing the accuracy and reliability of the decisions. Moreover, the robustness of the MCDM method in handling various metrics simultaneously makes it more adaptable and reliable under varying conditions, providing a more stable performance in real-world applications.

We have analyzed these points in detail from lines 667 to 685 in the manuscript to better highlight the specific advantages of the MCDM based method in the case study. We hope this clarification addresses your concerns, and we are open to any further suggestions or questions you may have. Thank you again for your valuable feedback.

Problem 8

To further enhance the quality and depth of your manuscript, I recommend citing additional paper that could provide a more comprehensive context and support for your research. This will not only strengthen your argumentation but also help in situating your work within the broader academic landscape. Muhammad Farooq Siddique, Zahoor Ahmad & Jong-Myon Kim (2023) Pipeline leak diagnosis based on leak-augmented scalograms and deep learning, Engineering Applications of Computational Fluid Mechanics, 17:1, DOI: 10.1080/19942060.2023.2225577.

Response:

Thank you for your valuable suggestion on enhancing the quality and depth of the manuscript by citing additional papers. Your feedback on the importance of situating the research within a broader academic context is greatly appreciated. In response to your recommendation, we have revised the manuscript to include a range of new references that provide stronger theoretical support and a more comprehensive context for the research presented.

To strengthen the discussion on Multi-Criteria Decision-Making (MCDM) methods, we have added several new references related to the application and advancements of methods such as Analytic Hierarchy Process (AHP), Analytic Network Process (ANP), Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), and Decision-Making Trial and Evaluation Laboratory (DEMATEL). These references provide a deeper understanding of how each method contributes to the decision-making process, particularly in complex systems like MIRFS. By expanding on these foundational theories, the manuscript now more thoroughly explains the strengths and limitations of each MCDM technique, offering a well-rounded perspective on how they can be effectively integrated to optimize testability allocation.

Moreover, these additional references help contextualize the proposed research within the existing body of knowledge, showing how the integration of MCDM methods addresses specific challenges in testability modeling and allocation. This expanded literature base underscores the relevance and applicability of the MCDM framework to complex decision-making environments, enhancing the manuscript's argumentation and situating it more firmly within contemporary research trends.

While the primary focus of the revisions was on enhancing the MCDM discussion, we have also included a brief mention of Subgraph Convolutional Networks (SGCN) to provide a forward-looking perspective on potential future research directions. This addition suggests how integrating SGCN with MCDM methods could further improve testability allocation in complex environments, providing a pathway for future exploration without detracting from the manuscript’s primary focus on MCDM.

We believe these revisions significantly improve the manuscript by providing a stronger theoretical foundation and a more comprehensive view of how MCDM methods can be applied to enhance testability in complex systems. Thank you again for your insightful feedback. We remain open to any further suggestions or questions you may have to continue refining and enhancing the manuscript.

Problem 9

The paper claims that the proposed method enhances testability throughout the MIRFS lifecycle. What are the potential challenges in applying this method to other complex systems?

Response:

Thank you for your insightful question regarding the potential challenges of applying the proposed method to other complex systems. While the paper claims that the method enhances testability throughout the MIRFS lifecycle, we recognize that there are several challenges that could arise when extending this approach to different types of complex systems.

One potential challenge is the unique complexity and specific characteristics of other systems compared to MIRFS. Each complex system, such as spacecraft, nuclear reactor control systems, or autonomous vehicle systems, may have different architectures, operational environments, and functional requirements. These differences may limit the applicability of the proposed method without significant modifications. For instance, testability parameters and their weights, which were carefully defined for MIRFS, may need to be re-evaluated and adjusted for different systems. The diversity in operational conditions and mission objectives could necessitate substantial customization of the model to fit the unique needs of each system. This could involve re-calibrating parameters, re-modeling certain aspects, or even integrating additional tools and methods to handle new challenges presented by different systems.

Another challenge is related to data acquisition and quality. The effectiveness of the MCDM-based approach, including DEMATEL, AHP, and TOPSIS, heavily relies on high-quality input data. However, in some complex systems, obtaining such data can be difficult due to limitations in data accessibility, poor data quality, or incomplete datasets. These issues could undermine the reliability and validity of the model’s outputs. To address this, future research could explore methods to ensure data accuracy and completeness, such as advanced data preprocessing and augmentation techniques, or leveraging simulation and synthetic data generation to supplement real-world data deficiencies. Additionally, developing robust data integration frameworks that can effectively combine data from multiple sources may enhance the model's applicability across diverse systems.

The availability and consistency of expert knowledge also pose significant challenges. For various complex systems, finding experts with sufficient domain knowledge and experience to provide accurate judgments can be challenging. Furthermore, expert opinions may vary, especially in emerging or interdisciplinary fields, affecting the consistency and reliability of the decision-making process. To overcome this, approaches like the Delphi method could be employed to build consensus among experts. Alternatively, in scenarios with limited expert availability, machine learning models could be developed to supplement expert knowledge by learning from historical data and providing insights based on patterns identified in similar systems.

Moreover, computational and resource constraints are important considerations when applying this method to larger-scale complex systems. The integration of multiple decision-making tools, while beneficial for a comprehensive analysis, can lead to high computational demands. This can be particularly challenging in real-time systems where decision-making needs to be both rapid and resource-efficient. Future research could focus on developing optimized algorithms or leveraging distributed computing techniques to improve computational efficiency and scalability, making the approach more feasible for larger systems with limited resources.

The ability to adapt to dynamic changes is another critical challenge. Many complex systems, such as drone networks or intelligent transportation systems, operate in highly dynamic and uncertain environments. A static MCDM approach may not be able to adapt quickly to these changes, leading to testability analyses that are outdated or no longer relevant. To address this, future work could explore integrating dynamic modeling techniques or adaptive control strategies into the current framework, enhancing its ability to respond to real-time changes and maintain its effectiveness in dynamic environments.

While these challenges are significant, they also present opportunities for future research to refine and enhance the proposed method. By addressing these challenges through innovative approaches and methodologies, we can further improve the testability of integrated systems across a wider range of applications. I hope this provides a comprehensive overview of the potential challenges and future directions for research. Thank you for your valuable feedback, and I am open to any further suggestions or questions you might have.

Problem 10

How does the proposed method address the subjectivity inherent in expert judgment during the decision-making process?

Response:

Thank you for your insightful question on how the proposed method addresses the subjectivity inherent in expert judgment during the decision-making process.

To mitigate the subjectivity associated with expert judgment, our method strategically integrates multiple decision-making tools, including DEMATEL, AHP, and TOPSIS. Each of these methods plays a unique role in refining and enhancing the decision-making process. In addition, consistency inspection is added in the use of expert scoring to eliminate and replace the expert scoring that does not conform to the actual situation.

Role of DEMATEL: DEMATEL is utilized to identify and quantify the interdependencies among various factors. By revealing the causal relationships between factors, DEMATEL provides a more objective foundation for subsequent analysis, thereby reducing the reliance on subjective expert intuition. This step helps to identify key influence factors that have the most significant impact on the decision-making process, allowing for a more informed and less biased judgment.

Combining AHP and TOPSIS: AHP is employed to establish a hierarchical structure for decision-making, breaking down complex problems into multiple levels and factors. This structured approach allows for a more systematic evaluation of each factor's relative importance, using pairwise comparisons and consistency checks to ensure reliability. However, AHP alone does not provide a ranking of alternatives. Therefore, we integrate TOPSIS, which utilizes the weights derived from AHP to rank alternatives based on their proximity to an ideal solution. This combination leverages AHP's strength in weight determination and TOPSIS's ability to provide a more objective ranking, further minimizing subjectivity in the final decision.

Synergistic Effect of the Integrated Approach: The integration of DEMATEL, AHP, and TOPSIS offers a multi-dimensional decision-making framework. DEMATEL first clarifies the relationships between factors, AHP then precisely allocates weights based on these relationships, and finally, TOPSIS ranks the alternatives according to the calculated weights. This comprehensive approach not only mitigates the limitations of relying on a single method but also cross-validates the results, enhancing the robustness and objectivity of the decision-making process.

By combining these methods, our approach effectively reduces the subjectivity of expert judgment, offering a more balanced, reliable, and data-driven framework for decision-making. This integration ensures that both qualitative insights and quantitative data are utilized to their fullest potential, leading to more consistent and stable outcomes in real-world applications.

We hope this explanation provides a clearer understanding of how the proposed method addresses the inherent subjectivity in expert judgment. Thank you for highlighting this important aspect, and we are open to any further questions or suggestions you may have.

Problem 11

What are the limitations of the proposed method, and how could future research address these limitations to further improve the testability of integrated systems?

Response:

Thank you for your insightful question regarding the limitations of the proposed method and potential future research directions to improve the testability of integrated systems. Your question provides an excellent opportunity to critically evaluate our approach and explore avenues for enhancing its robustness and applicability in various contexts.

One of the primary limitations of the proposed method is its reliance on expert judgment, particularly in determining the weights and relationships between testability parameters. While the integration of DEMATEL, AHP, and TOPSIS helps mitigate some of the inherent subjectivity, the method still requires experts to provide initial inputs. This dependency can introduce biases, especially in cases where experts have differing opinions or where there is a lack of sufficient expertise. The subjective nature of expert judgment may lead to inconsistencies, which could affect the reliability of the final decision-making outcomes. Future research could focus on developing more objective approaches, such as incorporating machine learning techniques or statistical models, which can analyze large datasets to derive weights and relationships more objectively. These data-driven methods could help reduce the reliance on subjective inputs and enhance the robustness of the model by providing a more empirical basis for decision-making.

Another significant limitation relates to the computational complexity involved in integrating multiple decision-making tools such as DEMATEL, AHP, and TOPSIS. The combined use of these methods, while beneficial for comprehensive analysis, can result in increased computational demands, particularly for large-scale integrated systems with numerous components and interdependencies. This complexity can impact the model's real-time applicability and may pose challenges in environments where quick decision-making is crucial. To address this limitation, future research could explore the development of more efficient computational algorithms and optimization techniques. For instance, leveraging parallel processing, distributed computing, or heuristic optimization could significantly enhance the computational efficiency, making the method more suitable for real-time applications and scalable to larger, more complex systems.

The proposed method also faces challenges in terms of flexibility regarding parameter selection and weight determination. The current framework utilizes fixed criteria and predefined parameters, largely based on expert knowledge and specific standards. This rigidity may limit its adaptability to different types of systems or varying operational contexts. Different systems may require distinct sets of parameters or weightings that better reflect their unique characteristics. Future studies could investigate the development of more adaptive frameworks that allow for dynamic adjustments of parameters and weights based on real-time system feedback. For example, incorporating reinforcement learning algorithms could enable the model to learn and adjust its parameters continuously, improving its applicability across diverse operational environments.

Additionally, the adaptability of the proposed method to dynamically changing environments is another area that warrants further exploration. Integrated systems often operate under conditions that are not static but continuously evolving, influenced by factors such as wear and tear, environmental changes, and shifting operational requirements. The static nature of the current model might not fully capture these dynamic changes, potentially leading to gaps between predicted and actual system performance. Future research could aim to incorporate dynamic modeling techniques, such as time-series analysis, dynamic Bayesian networks, or adaptive control methods, into the framework. This would allow the model to better respond to real-time changes and improve its predictive accuracy and relevance over time.

Lastly, while the proposed method has shown promising results in specific case studies, its scalability and generalizability to other types of integrated systems remain uncertain. Each system can present unique challenges that the current framework might not fully address, necessitating significant modifications or adaptations to the model. Future research could involve comprehensive validation across a broader range of systems and contexts, providing deeper insights into the method’s scalability and adaptability. This could also help identify specific areas where the model may need refinement or customization to better meet the needs of different applications.

By addressing these limitations and exploring these future research directions, we aim to enhance the proposed method's effectiveness in improving the testability of integrated systems. We hope this provides a comprehensive overview of both the current limitations and potential paths for future enhancement. Thank you for your thoughtful feedback, and we welcome any further questions or suggestions you might have.

Author Response File: Author Response.pdf

Round 2

Reviewer 3 Report

Comments and Suggestions for Authors

The manuscript has been corrected according to the comments, and it can be considered for acceptance in the journal.

  Comments on the Quality of English Language

It is very good.

 
Back to TopTop