Adapting Commercial Best Practices to U.S. Air Force Maintenance Scheduling
Round 1
Reviewer 1 Report
The paper presents initial results from a study adapting maintenance scheduling methods from commercial aviation to the military, with the intent of gaining readiness hours and reducing costs. The study is based on a limited sample size but indicates a promising direction to be explored further. The authors could have tried a simpler analysis, since they knew a priori that they lacked the data for a strong statistical approach. A stronger focus on showing the detailed conditions that enable the IDF approach, with insights on where the pain and gain points are, would have helped. For instance, why did the Avg TNMCMS Hours go up, and can this be avoided?
In the introduction:
Can the authors clarify what "different set of constraints and incentives" they are considering between commercial and military operations, and possibly how these may influence the adoption of scheduling methods?
Before proposing the IDF solution, what exactly is the gap to be filled/solved?
What are the problems with other solutions? (DTO maintenance scheduling processes or others)
What are the differences between what you propose and current practice?
How does this proposal fit into MSG-3?
Section 2.1:
Figure 2 and the associated description text can be improved. The authors should describe all the steps of the image in the text, maybe using cycle-step numbering for clarity. Questions for clarification:
Who is responsible for the "EASILY MANAGED BY SCHEDULING" step?
Why is the "Scheduled Maint" a circular process?
Does the cube represent a different area or is it just to explain the packaging (the story happens before with the loose parts)?
Why is the relationship between "Scheduled Maint" and "Sortie Generation" bidirectional?
Expand TCTOs?
Don't assume your reader knows the acronyms. Try to avoid abbreviations such as "Maint" in images.
Again, Figure 3 needs to be improved.
Why are there 3 text blocks? Do they represent different functions or steps?
Maybe use a different color for the IDF drawing part and another for the bundling.
Is there an order to this conversion, i.e., first the IDF and then the bundling?
The text should better explain the whole process. For example: how is the separation of PDM into HSC done in practice? Why are the arrows bidirectional?
Do not use acronyms in the images (as per the MDPI Instructions for Authors) or abbreviations such as "INSP".
Start Section 2.2 by explaining what Home Station Checks are.
Introduce the concept of work cards before using it. If possible, map this concept onto the IDF/bundling process in Figure 3.
Before Figure 4:
- Clarify: what is meant by the "probability" of a finding (the first variable)?
- Is there any uncertainty in the "expected time"? Is it being taken into account?
- 2riskier work card": Please put the risk matrix to understand how the task is categorized.
After Figure 4, you have five acronyms in one line, which is very hard to read!
Why respect a window of plus or minus 10 days? Shouldn't the due date always be respected?
Figure 6: indicate which image (upper/lower) refers to Traditional vs. Experimental HSC Scheduling. What is the meaning of the red and blue lines? What does each line represent (Mids/Swings)?
Section 2.3:
"sizing, scheduling, and supportability" - Please, describe what is meant by each one.
"Additionally, no additional", please rephrase.
"These variables are compared ..." - If it is used to compare why you are using "+" signal in the Figure 7?
Figure 8: improve the quality of this image for readability (font size and line width).
"Nevertheless, it was statistically improbable that the means for the TNMCMS hours 360 would be exactly the same for the control and experimental groups." - how can you tell? please carify what you meant to imply.
Table 4:
Please specify what the different colors on the arrows (red, yellow, green) mean.
Why did the Avg TNMCMS Hours go up? Did IDF add work to a fixed scheduled plan?
Figure 10: what do these colors mean? Are they related to any previous definition?
Per the MDPI guide: improve the resolution, do not cut off the text, and use acronyms only after the complete name has first been given.
Author Response
Please see our responses to your comments in the attachment.
Author Response File: Author Response.docx
Reviewer 2 Report
The manuscript “Adapting Commercial Best Practices to U.S. Air Force Maintenance Scheduling” presents a novel maintenance scheduling technique and applies it to a sample of the USAF’s C-5M fleet. The paper gives a clear overview of the adopted methodology and presents solid insights as to the C-5M application.
While accounting for these strengths, there are several issues with the paper. These are identified below and associated suggestions for improvement are given as follows:
· While the authors position their research with respect to CBM (and, by extension, the theme of the special issue), the connection to CBM and the associated relevance of the work can be further explicated. In particular, the paper lacks detail on how a CBM policy matches with or is influenced by the proposed maintenance scheduling technique (IDF). I.e., how will a CBM capability influence the predictability of maintenance work, as for instance reflected in task substitution, interval escalation or task removal? Only interval escalation is obliquely discussed in the current version of the manuscript.
· The positioning with respect to the academic state of the art has to be improved; the novel contributions of the work are not sharply identified. In particular, two issues spring to mind. First, while availability / mission readiness is a key consideration, the presented work does not acknowledge its relation to the flight and maintenance planning (FMP) problem and the related literature; integration and/or alignment with the flight planning process is missing in the current work. Second, mission readiness is not defined and can therefore be interpreted in multiple ways. To resolve this, the following literature can be considered. These are merely some suggestions as a starting point; the authors are encouraged to look beyond these papers and include their own selections:
o Marlow, D., and R. Dell. "Optimal short-term military aircraft fleet planning." Journal of Applied Operational Research 9.1 (2017): 38-53.
o Peschiera, Franco, et al. "A novel solution approach with ML-based pseudo-cuts for the Flight and Maintenance Planning problem." OR Spectrum 43.3 (2021): 635-664.
o Verhoeff, M., W. J. C. Verhagen, and Richard Curran. "Maximizing operational readiness in military aviation by optimizing flight and maintenance planning." Transportation Research Procedia 10 (2015): 941-950.
· A rationale for the adopted risk scales is missing, and validation against real-life data is not performed. This is particularly noticeable relative to Figure 4, where the authors note "measure … via historical data".
· Do the authors consider real-life stochasticity that may be present relative to the work-package critical path? Several works are available in the literature that assess how variations in task execution times carry over into critical path modelling, analysis, and subsequent recovery (see the critical-path sketch after this list).
· The authors make an effort to check the assumptions related to the use of the t-test, which is commendable. However, it is not entirely clear how the assumption of normality is accepted given the shape of the histograms in Figure 8. The authors claim that "Histograms for each metric shown in Figure 8 showed that despite only having 10 pairs of samples, it can reasonably be assumed that the normality assumption holds (since the data is not heavily skewed in one direction)". This is rather a stretch, to be honest. A formal test of normality is not performed, and a visual check shows clear issues with normality in (a) and (b) in particular. If the data is assumed not to be distributed normally, which other (non-parametric) test would be relevant, and would this change the findings of the research? (See the normality-check sketch after this list.)
· The authors mention that “Other data that are not currently measured in the internal USAF databases such as work-card detail, maintenance and flying schedules, and qualitative risk assessments was collected manually using Excel. Completing these Excel documents required consistent communication between front-line maintenance technicians, schedulers, and production superintendents.” – How did this communication effort (presumably with a time requirement) affect the experimental setup, in particular versus the mentioned control variables?
· The authors mention that “Manning was consistent within the experiment. This is to say that manning did not vary from one day to the next during the HSC.” Has this been quantitatively evaluated?
· Given the time required to generate additional data (and to resolve the current issue with a limited dataset), have the authors considered simulating the benchmark process versus the updated process? The availability of real-life data could be used to set up valid process parameters in such a case, whereas the issue of sample size can easily be addressed under such conditions (see the process-simulation sketch after this list).
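To illustrate the critical-path comment above, here is a minimal Monte Carlo sketch of how task-duration variability propagates through a small work package. The task network and triangular-distribution parameters are hypothetical, not taken from the paper:

```python
# Sketch: how stochastic task durations shift a work package's makespan.
# The task network and duration parameters below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Tasks: (low, mode, high) duration in hours for a triangular distribution.
tasks = {"A": (4, 6, 12), "B": (3, 5, 8), "C": (6, 8, 16), "D": (2, 3, 5)}
# Precedence: task -> predecessors (A and B open the package; D closes it).
preds = {"A": [], "B": [], "C": ["A"], "D": ["B", "C"]}
order = ["A", "B", "C", "D"]  # a topological order of the network

makespans = []
for _ in range(10_000):
    dur = {t: rng.triangular(*tasks[t]) for t in order}
    finish = {}
    for t in order:
        # A task starts when all of its predecessors have finished.
        start = max((finish[p] for p in preds[t]), default=0.0)
        finish[t] = start + dur[t]
    makespans.append(finish["D"])

makespans = np.array(makespans)
print(f"mean makespan: {makespans.mean():.1f} h, "
      f"P90: {np.percentile(makespans, 90):.1f} h")
```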
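Regarding the normality comment, a minimal sketch of the kind of formal check and non-parametric fallback being suggested; the arrays and values below are illustrative placeholders, not the paper's data:

```python
# Sketch: formal normality check on paired differences, with a
# non-parametric fallback. All data values here are illustrative only.
import numpy as np
from scipy import stats

control = np.array([412, 388, 450, 397, 421, 405, 433, 390, 418, 402])       # hypothetical
experimental = np.array([398, 395, 430, 380, 410, 400, 425, 385, 405, 399])  # hypothetical

diffs = experimental - control

# Shapiro-Wilk tests normality of the paired differences, which is the
# actual assumption behind the paired t-test (not each sample's histogram).
w_stat, p_norm = stats.shapiro(diffs)
print(f"Shapiro-Wilk on differences: W = {w_stat:.3f}, p = {p_norm:.3f}")

if p_norm > 0.05:
    # Normality not rejected: the paired t-test is defensible.
    t_stat, p_val = stats.ttest_rel(experimental, control)
    print(f"Paired t-test: t = {t_stat:.3f}, p = {p_val:.3f}")
else:
    # Normality rejected: the Wilcoxon signed-rank test is the usual
    # non-parametric alternative for paired samples.
    w, p_val = stats.wilcoxon(experimental, control)
    print(f"Wilcoxon signed-rank: W = {w:.1f}, p = {p_val:.3f}")
```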
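Regarding the simulation suggestion, a minimal sketch of comparing the benchmark and updated processes via Monte Carlo; all distribution choices and parameters are hypothetical and would in practice be fitted to the authors' real-life data:

```python
# Sketch: Monte Carlo comparison of a benchmark vs. an updated HSC process.
# Distribution families and parameters are hypothetical stand-ins; they
# would be estimated from the collected real-life data.
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # simulated HSC events per policy

# Benchmark: one large work package; duration lognormal around ~7 days.
benchmark_days = rng.lognormal(mean=np.log(7.0), sigma=0.35, size=N)

# Updated (IDF-style bundling): smaller packages; shorter mean, less spread.
updated_days = rng.lognormal(mean=np.log(5.5), sigma=0.25, size=N)

for name, d in [("benchmark", benchmark_days), ("updated", updated_days)]:
    print(f"{name}: mean {d.mean():.2f} d, P95 {np.percentile(d, 95):.2f} d")
```

With fitted parameters, such a setup sidesteps the 10-pair sample-size limitation, since arbitrarily many synthetic HSC events can be generated under each policy.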
Author Response
Please see our responses to your comments in the attachment.
Author Response File: Author Response.docx
Reviewer 3 Report
This article presents a new scheduling technique for aircraft maintenance planning, called the Inspection Development Framework (IDF). This problem is crucial for the availability of aircraft and therefore for operations. The article shows that the segmentation of maintenance needs has a crucial impact on flight hours, maintenance downtime, and the number of sorties. The results are validated on a real-world implementation from January to July 2021, rather than on simulations or mathematical models. I have a good opinion of this article; it contains interesting expert knowledge on a real case study of the United States Air Force (USAF). These results seem interesting to me for a wider audience than the USAF alone. I propose some revisions to improve this article.

The weakness is that it reads more like an engineering report than an academic publication. Few references are provided, and some are internal, while the field of aircraft maintenance planning has received a lot of attention over the past decade. I propose to situate the contributions of this article relative to recent developments in the field, by comparing the practices used in maintenance planning, and also by discussing what perspectives the results of this article bring to other articles. My first impression on reading this article is that these results on segmenting maintenance operations run contrary to the current practice of grouping many operations into a single maintenance period, which has the crucial impact of extending the duration of maintenance visits. Sometimes, maintenance periods are used to carry out other maintenance visits in parallel, to take advantage of an immobilization of the aircraft and thus avoid a new period of unavailability.

To situate the contributions of this article relative to the academic state of the art, I propose three references below. Many of the references cited in these three recent articles may be of interest for the state-of-the-art section of this article:

C. Li et al. An improved optimization algorithm for the aircraft maintenance and repair task scheduling problem. Mathematics 2022, 10(20), 3777. https://doi.org/10.3390/math10203777

F. Peschiera et al. A novel solution approach with ML-based pseudo-cuts for the Flight and Maintenance Planning problem. OR Spectrum, 43, 2021, pp. 635-664. https://doi.org/10.1007/s00291-020-00591-z

Shahmoradi-Moghadam, H., Safaei, N., Sadjadi, S.J. (2021). Robust Aircraft Fleet Maintenance Scheduling: A Hybrid Simulation-Optimization Approach. IEEE Access, 9, 17854-17865. https://ieeexplore.ieee.org/iel7/6287639/9312710/09333564.pdf

Operations research papers refer to the "flight and maintenance planning problem" (FMPP) as the class of complex optimization problems involving the joint optimization of maintenance planning and flight hours while respecting technical and maintenance constraints. FMPPs with different specific constraints are considered in this article and in the cited works, by the NPS in the United States and by the armed forces of China, France, Greece, Switzerland, and the Netherlands. This is useful for providing comparative analyses and comparing the assumptions made in these articles. The third reference (and some of the previous works) also uses simulation techniques, which is a related state-of-the-art approach, complementary to real-world implementation.
Author Response
Please see our responses to your comments in the attachment.
Author Response File: Author Response.docx
Round 2
Reviewer 2 Report
The authors have provided a thorough revision plus associated response to reviewer comments. Substantial (and necessary) improvements have been made to the positioning / theoretical context, while most concerns regarding the analysis work have been addressed through the revisions and response. While some comments are essentially considered beyond the scope of the current work, the authors have clarified this throughout the manuscript and as part of the recommendations / future research, which is a satisfactory response to me.
Reviewer 3 Report
Dear authors,
My remarks are correctly taken into account; I have no further remarks. The response letter contains very interesting points that improve the first version of the paper. Maybe some of these points deserve to be inserted in the paper (if not too close to confidential elements, of course).
I recommend acceptance.