Article

Optimal Inspection and Maintenance Policy: Integrating a Continuous-Time Markov Chain into a Homing Problem

Department of Mathematics and Industrial Engineering, Polytechnique Montréal, C.P. 6079, Succursale Centre-Ville, Montréal, QC H3C 3A7, Canada
* Author to whom correspondence should be addressed.
Machines 2024, 12(11), 795; https://doi.org/10.3390/machines12110795
Submission received: 14 October 2024 / Revised: 8 November 2024 / Accepted: 8 November 2024 / Published: 10 November 2024

Abstract
The state of a machine is modeled as a controlled continuous-time Markov chain. Moreover, the machine is being serviced at random times. The aim is to maximize the time until the machine must be repaired, while taking the maintenance costs into account. The dynamic programming equation satisfied by the value function is derived, enabling optimal decision-making regarding inspection rates, and special problems are solved explicitly. This approach minimizes direct maintenance costs along with potential failure expenses, establishing a robust methodology for determining inspection frequencies in reliability-centered maintenance. The results contribute to the advancement of maintenance strategies and provide explicit solutions for particular cases, offering ideas for application in reliability engineering.

1. Introduction

The preventive maintenance framework is an essential element in ensuring the reliability and efficiency of operating systems in the context of reliability engineering, maintenance, and availability optimization. Maintenance planning takes into account scheduled tasks to ensure the continuity of operations and prevent breakdowns by proactively managing potential failures before they occur. Optimizing these strategies may prove more difficult if degradations are considered stochastic processes, and if the inherent randomness of system operation is taken into account. A mathematical model commonly used to represent these processes is the continuous-time Markov chain, which effectively represents the probabilistic state transitions of the machine over the planned period.
A key element of maintenance optimization is the adjustment of costs associated with maintenance activities and potential system failures. An effective maintenance policy must take into account both the direct costs of performing maintenance tasks and the indirect costs associated with system downtime, lost production, and repairs due to failures. This optimal adjustment is vital in areas where high demands are placed on system reliability and availability for operational safety and process costs.
Optimal control of the frequency of inspections, a fundamental element of preventive maintenance, is essential to achieving this objective. Inspection frequency dictates how often a system’s condition is checked, which in turn affects the scheduling and need for maintenance interventions. More frequent inspections can lead to excessive maintenance costs, while less frequent inspections can increase the risk of failure and the resulting downtime. Determining the optimal inspection rate is therefore essential to minimize maintenance costs and improve system reliability.
Our research has focused on developing an optimal inspection and maintenance plan for a system by modeling its state of degradation as a continuous-time Markov chain. The basic concept is to dynamically regulate the inspection rate u, which specifies the frequency or intensity of preventive maintenance tasks. This rate serves as a control parameter that balances the trade-off between maintenance actions and the risk of system failure. A higher inspection rate u results in more inspection tasks, which may reduce failure rates but simultaneously increases direct inspection–maintenance costs. Conversely, a low rate u reduces immediate maintenance expenditure but may result in higher long-term costs due to increased failures and downtime. The problem is to identify the optimal rate u* that balances these conflicting factors. Our objective is to apply dynamic programming to determine the optimal value of u, which minimizes the long-term costs associated with both inspection operations and potential system downtime caused by failures.
Dynamic programming is perfectly aligned with this problem, as it allows for the systematic evaluation of different maintenance policies under uncertainty. The dynamic nature of the problem, where the state of the system and the optimal maintenance plan selection are influenced by past states and actions, necessitates a model that can handle sequential decision-making under randomness. By employing dynamic programming in a continuous-time Markov chain framework, this study proposes an approach that defines the optimal inspection–maintenance rate and yields valuable insights into the optimal timing of inspection–maintenance activities, based on the state of the system. Such insights can assist in designing robust inspection–maintenance plans that adapt to real-time changes in system conditions and operational requirements.
This methodology develops a new type of homing problem applied in reliability theory, characterized by decision-making under randomness. Deriving the dynamic programming equation and applying it involves notable theoretical challenges, which this research seeks to tackle. These complexities include precisely modeling the degradation phenomena, properly incorporating the inspection–maintenance and downtime costs, and ensuring the computational tractability when addressing the dynamic programming equations for sophisticated systems. Resolving these challenges will facilitate the formulation of more advanced inspection–maintenance strategies that are not only cost effective but also adaptable to diverse operational settings and varying risk preferences.

1.1. Literature Review

A thorough review of recent developments in the optimal control of continuous-time Markov chains, particularly in reliability theory, reveals that the subject has been widely studied, especially in the context of inspection–maintenance planning. Previous works cover a wide range of models and strategies for establishing optimal inspection and maintenance policies that improve machine reliability, availability, and safety while reducing costs. These explorations point to ongoing efforts to improve optimal control methods in this field.
We now present a chronological list of important articles related to our paper.
  • G. Parmigiani [1], ‘Optimal scheduling of fallible inspections’. The article provides an optimal control model for a manufacturing system, applying a continuous-time Markov chain for the optimal control of maintenance frequencies, in order to improve system reliability.
  • E. K. Boukas and Z. K. Liu [2], ‘Production and maintenance control for manufacturing systems’. This article deals with maintenance control using a continuous-time Markov process. The model focuses on the optimal control of preventive and corrective maintenance rates to improve system reliability.
  • C.-H. Wang and S. H. Sheu [3], ‘Determining the optimal production-maintenance policy with inspection errors: using a Markov chain’. The paper develops an optimal policy for inspection intervals and maintenance for a deteriorating production system using a continuous-time Markov chain model to minimize total cost, taking into account system reliability.
  • H. Suryadi and L. G. Papageorgiou [4], ‘Optimal maintenance planning and crew allocation for multipurpose batch plants’. This article proposes a mathematical programming method for optimizing maintenance planning in plants, using continuous-time Markov chains to model maintenance in order to improve system reliability.
  • W. Liying, F. Youtong, S. Liying and L. Baoyou [5], ‘On fault diagnosis and inspection policy for deteriorating system’. It develops a fault diagnosis and inspection policy and optimizes the inspection cycle to maximize revenue, allowing imperfect repairs before component replacement. A Markov vector process and a numerical example are used for validation.
  • H. R. Golmakani and F. Fattahipour [6], ‘Optimal replacement policy and inspection interval for condition-based maintenance’. This work uses a Markov process to provide an optimal replacement strategy and inspection frequency for condition-based maintenance. The model aims to minimize maintenance costs, with inspection intervals balancing the costs of preventive and failure replacements, thus improving system reliability.
  • F. Naderkhani and V. Makis [7], ‘Optimal condition-based maintenance policy for a partially observable system with two sampling intervals’. The study develops an optimal conditional maintenance strategy using a continuous-time hidden Markov process. It tunes inspection intervals according to aging states, in order to minimize maintenance costs and improve reliability.
  • K. He [8], ‘Optimal Maintenance Planning in Novel Settings’. This thesis develops the method of optimal maintenance planning in healthcare systems, focusing on events with imperfect inspection intervals and unpunctual preventive maintenance with the use of stochastic processes.
  • Q. Sun, Z.-S. Ye and N. Chen [9], ‘Optimal inspection and replacement policies for multi-unit systems subject to degradation’. It proposes optimal inspection and replacement strategies for aging systems, using the Markov decision process framework. It uses the Wiener process to model component degradation and finds inspection intervals and replacement choices to minimize total operational cost while maintaining system reliability.
  • P. Cao, K. Yang and K. Liu [10], ‘Optimal selection and release problem in software testing process: A continuous-time stochastic control approach’. It proposes optimal selection for software testing using continuous-time stochastic control. It takes into account the trade-off between testing costs and availability time to minimize total costs. The proposed model uses dynamic programming to determine when to test, release, or reject software, taking into account software reliability.
  • Q. Sun, Z.-S. Ye and X. Zhu [11], ‘Managing component degradation in series systems for balancing degradation through reallocation and maintenance’. The control of component aging in serial systems is examined through the optimization of preventive replacement and reallocation policy by deploying stochastic optimization and Markov chain.
  • C. P. Andriotis, K. G. Papakonstantinou and E. N. Chatzi [12], ‘Value of structural health monitoring quantification in partially observable stochastic environments’. This paper focuses on the optimal life cycle control of aging systems in an uncertain environment. It uses partially observable Markov decision processes to find optimal intervention and monitoring strategies.
  • S. Gan, N. Yousefi and D. W. Coit [13], ‘Optimal control-limit maintenance policy for a production system with multiple process states’. It develops a maintenance strategy for a production system with several processing states, incorporating machine age and spare parts control to optimize maintenance tasks. The method uses a discrete-time Markov decision process to identify optimal maintenance activities, with the aim of minimizing overall long-term costs.
  • P. Vrignat, F. Kratz and M. Avila [14], ‘Sustainable manufacturing, maintenance policies, prognostics and health management: A literature review’. This work contains a review of sustainable manufacturing focusing on maintenance policies, prognostics, and health management systems (HMS). It discusses the incorporation of Industry 4.0 and e-maintenance to evolve proactive maintenance strategies that are consistent with sustainability goals.
  • P. G. Morato, K. G. Papakonstantinou, C. P. Andriotis, J. S. Nielsen and P. Rigo [15], ‘Optimal inspection and maintenance planning for deteriorating structural components through dynamic Bayesian networks and Markov decision processes’. The paper introduces a scheme integrating dynamic Bayesian networks and partially observable Markov decision processes to explore optimal planning of structural deterioration inspection and maintenance. It highlights the limitations of the heuristic method for optimizing structural reliability and minimizing costs through stochastic optimization.
  • M. Roux, Y.-P. Fang and A. Barros [16], ‘Maintenance planning under imperfect monitoring: an efficient POMDP model using interpolated value function’. The field is improved by introducing an efficient partially observable Markov decision process model for maintenance planning in the case of imprecise observations.
  • M. Lefebvre and P. Pazhoheshfar [17], ‘An optimal control problem for the maintenance of a machine’. The study develops an optimal control problem for machine maintenance using a discrete-time stochastic process model. The methodology consists of solving a dynamic programming equation to determine whether maintenance activities should be performed at each unit of time, given that the final time will be a random variable.
  • Y. Wang and Y. Li [18], ‘Replacement policy for a single-component machine with limited spares in a finite time horizon’. This study develops a maintenance scheduling scheme with a Markov decision process to explore the optimal control of a system with limited spare parts over a finite period of time.
  • S. Nasersarraf, S. Asadzadeh and Y. Samimi [19], ‘Determining the optimal policy in condition-based maintenance for electrical panels’. It develops a state-based optimal maintenance policy for electrical panels in parallel systems, using a proportional hazards model to account for failure dependency between components. The model aims to minimize expected total costs while maintaining system reliability by determining optimal inspection intervals and preventive replacement strategies.
  • W. Wang and X. Chen [20], ‘Piecewise deterministic Markov process for condition-based imperfect maintenance models’. This work proposes a condition-based maintenance model using a piecewise deterministic Markov process. It integrates corrective and imperfect maintenance. The paper implements optimal control theory to explore the optimal maintenance policy, with the aim of minimizing total system cost.
  • G. Wang and Z. Zhu [21], ‘Optimal control of sampled-data systems based on an optimized stochastic sampling’. This work covers the optimal control of a system with sampled data and stochastic sampling. The paper presents several cost functions, including one for sampling frequency, and implements dynamic programming to obtain optimal controllers in finite and infinite time.
Now, continuous-time Markov chains and dynamic programming are increasingly applied beyond classical industrial contexts, supporting advanced strategies in reliability and optimization as shown by recent studies (see, for example, Liu et al. [22] and Mancuso et al. [23]) in the context of prognostics and health management (PHM).
Despite remarkable developments in the optimal control of continuous-time Markov chains applied in the field of maintenance optimization, there are still a number of gaps in this literature. Many existing models are tailored to specific applications, such as production systems or healthcare, with incomplete evaluation of the optimal inspection rate in a more generalized setting. In addition, the use of dynamic programming to determine continuous decision variables, such as inspection rates, has received little attention, particularly in the context of homing problems in reliability theory.
This research aims to fill existing gaps by proposing an extended framework for defining the optimal preventive maintenance and inspection rate of a system using continuous-time Markov chains and dynamic programming. The presented model will provide a robust approach to the homing problem in reliability theory optimization, which incorporates advanced stochastic control practices to formulate optimal maintenance planning. In addition, by synthesizing the latest advances in inspection–maintenance planning optimization and dynamic programming, this study contributes to the broader goal of advancing reliability theory and the cost-effectiveness of engineering systems, proposing practical policies for industries, where component reliability is of great importance.
This research addresses the question of optimal inspection times for maintenance in a generalized framework by improving the applicability of continuous-time Markov chains in a more global context. Section 3 of the paper deals with inspection rates obtained by dynamic programming, allowing us to propose a versatile methodology for determining the optimal inspection rate in a system, without limiting its relevance to specialized domains. This methodology is supported by Equation (18), where the generalized formulation of the optimal inspection rate calculation is described, enhancing the flexibility of the model for various maintenance application contexts.
The gap is further bridged by the use of dynamic programming to determine continuous decision variables, such as the inspection rate, in the maintenance and reliability domain and in the context of the homing problem. The results, particularly in Section 4 and Section 5, illustrate how the determination of the optimal inspection rate is supported by dynamic programming, especially when random final times are taken into account in the cost function, a novel implementation of the homing problem in this field of application. By focusing on continuous decision variables, the proposed model fills this gap with a robust approach to calculating inspection frequencies that balances the cost and reliability associated with the system's maintenance and operating states, as highlighted by our results.

2. Mathematical Formulation of the Problem

Let X(t) denote the state of a machine at time t. We assume that the stochastic process {X(t), t ≥ t_0} is a continuous-time Markov chain having the following states:

0: the machine is working;
1: the machine is undergoing routine maintenance;
2: the machine is undergoing prolonged maintenance;
3: the machine is being repaired.
The transition diagram of the Markov chain is shown in Figure 1. This figure provides a visual illustration of state transitions during the machine’s operational life cycle, which is modeled as a continuous-time Markov chain. The arrows indicate the possible transitions between the defined states, each with a specified transition probability. From the “Working” state, the machine can take different paths: it can enter the “Routine Maintenance” state (with transition probability p_{0,1}), the “Prolonged Maintenance” state (with transition probability p_{0,2}), or the “Repair” state (with transition probability p_{0,3}). Each of these transitions is triggered by particular maintenance requirements or failure events. At the end of “Routine Maintenance” or “Prolonged Maintenance”, the machine returns to the “Working” state, as indicated by the transitions p_{1,0} = 1 and p_{2,0} = 1, respectively. This indicates that once maintenance is complete, the machine is once again operating at full capacity (as good as new). Similarly, the transition p_{3,0} = 1 indicates that after “Repair”, the machine returns to the “Working” state, thus regaining full functionality (as if it were new).
The time spent by the Markov chain in state i is an exponential random variable T_i with parameter ν_i, for i = 0, 1, 2, 3. Moreover, inspections are carried out at random times. We assume that the time elapsed between two inspection operations is an exponential random variable with parameter u > 0. When the machine is serviced, there is a probability p that no serious problem will be discovered (and 1 − p that at least one important problem will be detected).
Let p_{i,j} be the probability that the Markov chain will move to state j when it leaves state i, for i, j ∈ {0, 1, 2, 3} with i ≠ j. We can write (see, for instance, Lefebvre [24]) that p_{j,0} = 1 for j = 1, 2, 3 and

$$p_{0,1} = \frac{pu}{u + \nu_0}, \qquad p_{0,2} = \frac{(1-p)\,u}{u + \nu_0} \qquad \text{and} \qquad p_{0,3} = \frac{\nu_0}{u + \nu_0}. \tag{1}$$
Next, we define the first-passage time

$$\tau(i) = \inf\{t \ge t_0 : X(t) = 3 \ \text{or} \ t = t_1\,(>t_0) \mid X(t_0) = i\} \tag{2}$$

for i = 0, 1, 2 (with τ(3) = t_0).
We suppose that the optimizer can choose any value of the constant u ∈ U at any time instant, so that u = u(t) is the control variable and p_{i,j} becomes p_{i,j}(t). We look for the value u*(t) ≥ 0 of u(t) that minimizes the expected value of the cost criterion

$$J(i) = \int_{t_0}^{\tau(i)} \left\{ f[u(t)] - \lambda \right\} \mathrm{d}t + K\, I_F, \tag{3}$$

where f(0) = 0, f[u(t)] > 0 for u(t) > 0, λ > 0, K > 0, and I_F is the indicator function of the event F: the optimizer chooses u(t) ≡ 0. Therefore, the aim is to maximize the expected time until the machine needs to be repaired, while taking into account the control costs f[u(t)].
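To make the model concrete, the following Python sketch (our illustration, not part of the original analysis) simulates the controlled chain under a constant inspection rate u, following the dynamics of Equations (1) and (8), and estimates E[J(0)] by Monte Carlo. The default parameter values mirror the particular case treated in Section 4 (λ = ν_i = 1, p = 1/2, t_1 = 2, with the quadratic cost (20) and q_0 = 2); the value K = 1 is an arbitrary placeholder.

```python
import random

def simulate_J(u, p=0.5, nu=(1.0, 1.0, 1.0), lam=1.0, K=1.0,
               q0=2.0, t0=0.0, t1=2.0):
    """One realization of the cost J(0) under a constant inspection rate u.

    States: 0 = working, 1 = routine maintenance, 2 = prolonged
    maintenance, 3 = repair. The run stops at tau(0), i.e., when the
    chain hits state 3 or the horizon t1 is reached.
    """
    t, state, cost = t0, 0, 0.0
    f = 0.5 * q0 * u ** 2                    # running cost f(u), Eq. (20)
    while t < t1:
        if state == 0:
            stay = random.expovariate(nu[0])
            dt = min(stay, t1 - t)
            cost += (f - lam) * dt           # integrand f(u) - lambda
            t += dt
            if t >= t1:
                break
            # embedded transition probabilities of Equation (1)
            p01 = p * u / (u + nu[0])
            p02 = (1 - p) * u / (u + nu[0])
            r = random.random()
            state = 1 if r < p01 else (2 if r < p01 + p02 else 3)
            if state == 3:                   # machine must be repaired
                break
        else:
            # maintenance states: u(t) = 0, so the integrand is -lambda
            stay = random.expovariate(nu[state])
            dt = min(stay, t1 - t)
            cost += -lam * dt
            t += dt
            state = 0                        # back to the working state
    if u == 0:
        cost += K                            # terminal cost K * I_F
    return cost

def expected_J(u, n=100_000):
    """Monte Carlo estimate of E[J(0)] for a constant rate u."""
    return sum(simulate_J(u) for _ in range(n)) / n
```

For instance, comparing expected_J(0.9) and expected_J(1.0) gives a simulation-based counterpart of the comparison carried out analytically in Section 4.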
  • Remarks.
(i)
The final cost K is imposed if the optimizer decides not to perform any maintenance, to reflect the fact that when the machine needs to be repaired (state 3), repair costs should be higher.
(ii)
The above problem is a particular homing problem, in which the optimizer controls a stochastic process until a certain event occurs; see Whittle [25,26]. The authors have recently considered homing problems for queuing systems; see Lefebvre and Yaghoubi [27,28].
(iii)
To the best of our knowledge, this is the first time that a homing problem will be treated for a continuous-time Markov chain that is not a queuing model. Numerous papers have been published in the field of reliability, in which continuous-time Markov chains have been used as models. Moreover, as we have seen in the literature review, optimal control problems for these models have also been the subject of numerous publications. In our problem, rather than controlling a stochastic process until a final time that is either fixed or infinite, we stop controlling the process at the instant when the machine needs to be repaired (or a given amount of time has elapsed). In reality, this instant is indeed a random variable.
(iv)
To validate the formulated model, we apply a series of actions designed to assess its robustness and reliability. As a first step, a sensitivity analysis is carried out to examine the effect of varying key parameters on the model’s results. This analysis enables us to verify that the model remains reliable and produces consistent results under a particular range of hypothetical scenarios. In addition, we analyze the model’s response to several initial conditions and parameters in order to simulate different system behaviors. These verification steps demonstrate the usefulness of the model for maintenance decision-making.
Now, the value function is defined by
$$F(i, t_0) = \inf_{u(t),\; t_0 \le t < \tau(i)} E[J(i)] \tag{4}$$
for i = 0, 1, 2, 3. We can write that F(3, t_0) = 0. Moreover, when the Markov chain is in state 1 or 2, we have u(t) = 0. It follows that
$$F(i, t_0) = -\lambda\, E[T_i] + F(0, t_0) = -\frac{\lambda}{\nu_i} + F(0, t_0) \quad \text{for } i = 1, 2. \tag{5}$$
  • Remark. Let us denote by F_0(0, t_0) the expected value of J(0) if the optimizer uses no control at all, so that u(t) ≡ 0. Since f(0) = 0, we can write that
$$F_0(0, t_0) = -\lambda\, E[T_0] + K = -\frac{\lambda}{\nu_0} + K. \tag{6}$$
Thus, the value function F(0, t_0) must necessarily be smaller than or equal to −λ/ν_0 + K.
In the next section, the dynamic programming equation satisfied by the function F(0, t_0) will be derived. Then, in Section 4, a particular case of the above problem will be solved explicitly. In Section 5, the case when the constant t_1 in the definition of τ(i) tends to infinity will be treated. We will conclude this paper with a few remarks in Section 6.

3. Dynamic Programming Equation

Assume that the machine is working at time t_0, so that X(t_0) = 0. We have
$$P[T_0 \le \Delta t] = 1 - e^{-\nu_0 \Delta t} = \nu_0\, \Delta t + o(\Delta t). \tag{7}$$
It follows that we can write that
$$X(t_0 + \Delta t) = \begin{cases} 1 & \text{with probability } \nu_0\, \Delta t\; p_{0,1}(t_0) \ [+\, o(\Delta t)],\\ 2 & \text{with probability } \nu_0\, \Delta t\; p_{0,2}(t_0),\\ 3 & \text{with probability } \nu_0\, \Delta t\; p_{0,3}(t_0),\\ 0 & \text{with probability } 1 - \nu_0\, \Delta t. \end{cases} \tag{8}$$
  • Remark. The fact that the various probabilities in the above equation are all proportional to Δt is essential in order to use dynamic programming.
Let
$$g[u(t)] := f[u(t)] - \lambda. \tag{9}$$
We have
$$F(0, t_0) = \inf_{u(t),\; t_0 \le t < \tau(0)} E\!\left[\int_{t_0}^{t_0+\Delta t} g[u(t)]\, \mathrm{d}t + \int_{t_0+\Delta t}^{\tau(0)} g[u(t)]\, \mathrm{d}t\right] = \inf_{u(t),\; t_0 \le t < \tau(0)} E\!\left[ g[u(t_0)]\, \Delta t + \int_{t_0+\Delta t}^{\tau(0)} g[u(t)]\, \mathrm{d}t + o(\Delta t)\right]. \tag{10}$$
Moreover,
$$E\!\left[\int_{t_0+\Delta t}^{\tau(0)} g[u(t)]\, \mathrm{d}t\right] = E\!\left\{ E\!\left[\int_{t_0+\Delta t}^{\tau(0)} g[u(t)]\, \mathrm{d}t \;\Big|\; X(t_0+\Delta t)\right]\right\}. \tag{11}$$
Hence, using Bellman’s principle of optimality, we can write that
$$\inf_{u(t),\; t_0+\Delta t \le t < \tau(0)} E\!\left[\int_{t_0+\Delta t}^{\tau(0)} g[u(t)]\, \mathrm{d}t\right] = E\!\left[ F\big(X(t_0+\Delta t),\, t_0+\Delta t\big)\right]. \tag{12}$$
Furthermore, we deduce from Equation (8) that [since X(t_0) = 0, and since the term corresponding to j = 3 vanishes because F(3, ·) = 0]
$$E\!\left[F\big(X(t_0+\Delta t),\, t_0+\Delta t\big)\right] = \sum_{j=1}^{2} F(j,\, t_0+\Delta t)\, \nu_0\, \Delta t\; p_{0,j}(t_0) + F(0,\, t_0+\Delta t)\,(1 - \nu_0\, \Delta t) + o(\Delta t). \tag{13}$$
Next, assuming that the function F(0, t_0) is differentiable with respect to t_0, we deduce from Taylor’s theorem that

$$F(\cdot,\, t_0+\Delta t) = F(\cdot,\, t_0) + \Delta t\, F'(\cdot,\, t_0) + o(\Delta t), \tag{14}$$

where the prime denotes differentiation with respect to t_0.
It follows that
$$F(\cdot,\, t_0+\Delta t)\,(1 - \nu_0\, \Delta t) = (1 - \nu_0\, \Delta t)\, F(\cdot,\, t_0) + \Delta t\, F'(\cdot,\, t_0) + o(\Delta t). \tag{15}$$
Making use of the above results, we find that
$$0 = \inf_{u(t_0)} \left\{ g[u(t_0)]\, \Delta t + \nu_0\, \Delta t \sum_{j=1}^{2} p_{0,j}(t_0)\, F(j, t_0) - \nu_0\, \Delta t\, F(0, t_0) + \Delta t\, F'(0, t_0) + o(\Delta t) \right\}. \tag{16}$$
Furthermore, we deduce from Equation (5) that
$$0 = \inf_{u(t_0)} \left\{ g[u(t_0)]\, \Delta t + \nu_0\, \Delta t \sum_{j=1}^{2} p_{0,j}(t_0) \left[ -\frac{\lambda}{\nu_j} + F(0, t_0) \right] - \nu_0\, \Delta t\, F(0, t_0) + \Delta t\, F'(0, t_0) + o(\Delta t) \right\}. \tag{17}$$
Finally, dividing each side of the preceding equation by Δt and taking the limit as Δt decreases to zero, we can state the following proposition.
Proposition 1.
The value function F(0, t_0) satisfies the dynamic programming equation (DPE)
$$0 = \inf_{u(t_0)} \left\{ g[u(t_0)] + \nu_0 \sum_{j=1}^{2} p_{0,j}(t_0) \left[ -\frac{\lambda}{\nu_j} + F(0, t_0) \right] - \nu_0\, F(0, t_0) + F'(0, t_0) \right\}. \tag{18}$$
Moreover, we have the boundary condition F(0, t_1) = 0 if u(t_0) > 0.
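When U is a finite set, Proposition 1 also lends itself to a direct numerical treatment: starting from the boundary condition F(0, t_1) = 0, one can integrate backward in time while minimizing the expression between the curly brackets over U at each step. The following sketch (our illustration, assuming NumPy is available and the quadratic cost (20)) implements this idea; it is not the computation used in the paper.

```python
import numpy as np

def solve_dpe(U, t1=2.0, h=1e-4, p=0.5, nu=(1.0, 1.0, 1.0),
              lam=1.0, q0=2.0):
    """Backward Euler scheme for the DPE (18) with a finite control set U.

    Equation (18) gives F'(0, t0) = -min_{u in U} bracket(u, F), where
    bracket(u, F) collects all the terms of (18) other than F'(0, t0).
    """
    U = np.asarray(U, dtype=float)
    g = 0.5 * q0 * U**2 - lam            # g(u) = f(u) - lambda, Eq. (9)
    w = U / (U + nu[0])                  # p_{0,1} + p_{0,2} = u/(u + nu_0)
    ts = np.arange(t1, -h / 2, -h)       # grid from t1 down to 0
    F = np.zeros_like(ts)                # F[0] = F(0, t1) = 0
    for k in range(len(ts) - 1):
        bracket = (g
                   + nu[0] * w * (p * (-lam / nu[1] + F[k])
                                  + (1 - p) * (-lam / nu[2] + F[k]))
                   - nu[0] * F[k])
        F[k + 1] = F[k] + h * bracket.min()  # step from t to t - h
    return ts, F

ts, F = solve_dpe(U=[0.9, 1.0])          # two-valued control set
print(F[-1])                             # approximate F(0, 0)
```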
Let u_0 := u(t_0). We have
$$p_{0,1}(t_0) = \frac{p\, u_0}{u_0 + \nu_0} \qquad \text{and} \qquad p_{0,2}(t_0) = \frac{(1-p)\, u_0}{u_0 + \nu_0}. \tag{19}$$
Furthermore, the function f(·) is often chosen to be of the form
$$f[u(t)] = \frac{1}{2}\, q_0\, u^2(t), \tag{20}$$
where q_0 is a positive constant. Then, if u(t) is a continuous function of t, we can obtain the optimal control u_0* in terms of F(0, t_0) by differentiating the expression between the curly brackets in Equation (18) with respect to u_0 and solving the resulting equation for u_0.
If we are able to find an explicit expression for u_0*, we substitute it into Equation (18) and try to solve the resulting differential equation, subject to the boundary condition F(0, t_1) = 0. A symbolic sketch of this step is given below.
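The following SymPy sketch (ours, for illustration) forms the expression between the curly brackets of Equation (18) with the quadratic cost (20), differentiates it with respect to u_0, and solves the first-order condition numerically; F(0, t_0) is treated as a fixed constant at this stage, and the sample values at the end are purely illustrative.

```python
import sympy as sp

u0, nu0, nu1, nu2, lam, p, q0 = sp.symbols(
    'u0 nu0 nu1 nu2 lambda p q0', positive=True)
F = sp.Symbol('F')   # F(0, t0); it may be negative

# Expression between the curly brackets in the DPE (18), with
# g(u) = q0*u**2/2 - lambda and p_{0,j}(t0) as in Equation (19);
# the term F'(0, t0) is omitted since it does not depend on u0.
bracket = (q0 * u0**2 / 2 - lam
           + nu0 * (p * u0 / (u0 + nu0)) * (-lam / nu1 + F)
           + nu0 * ((1 - p) * u0 / (u0 + nu0)) * (-lam / nu2 + F)
           - nu0 * F)

stationarity = sp.simplify(sp.diff(bracket, u0))
print(stationarity)   # first-order condition for u0*

# Illustrative numbers (Section 4 values; F = -0.5 is arbitrary):
eq = stationarity.subs({q0: 2, nu0: 1, nu1: 1, nu2: 1,
                        lam: 1, p: sp.Rational(1, 2), F: -0.5})
print(sp.nsolve(eq, u0, 0.5))   # candidate optimal rate for this F
```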
In the next section, we will consider the simplest possible case, namely, the one where u(t) can only take one of the two values k_1 or k_2.

4. Particular Case

Suppose that the set U contains only two values, denoted by k_1 and k_2. We deduce from Equation (18) that we must consider the following equation:
$$0 = g(k_i) + \nu_0\, \frac{p\, k_i}{k_i + \nu_0} \left[ -\frac{\lambda}{\nu_1} + F(0, t_0) \right] + \nu_0\, \frac{(1-p)\, k_i}{k_i + \nu_0} \left[ -\frac{\lambda}{\nu_2} + F(0, t_0) \right] - \nu_0\, F(0, t_0) + F'(0, t_0) \tag{21}$$
for i = 1, 2.
Let us choose the function f[u(t)] defined in Equation (20) with q_0 = 2. Furthermore, we take λ = ν_i = 1 for i = 0, 1, 2, 3, p = 1/2, k_1 = 1 and k_2 = 0.9. With these values, we find that we must solve the first-order linear differential equations (denoting the function F(0, t_0) by F_i(t_0) when u_0 = k_i, for i = 1, 2)
$$1 = -F_1(t_0) + 2\, F_1'(t_0) \tag{22}$$
and
$$0 = 639 - 1000\, F_2(t_0) - 1900\, F_2'(t_0). \tag{23}$$
The solutions to the above equations that satisfy the boundary condition F i ( t 1 ) = 0 for i = 1 , 2 are
$$F_1(t_0) = -1 + e^{(t_0 - t_1)/2} \tag{24}$$
and
$$F_2(t_0) = \frac{639}{1000} \left[ 1 - e^{10\,(t_1 - t_0)/19} \right]. \tag{25}$$
The two functions are presented in Figure 2 in the case when t_1 = 2. We see that the optimal solution in this example is to choose
$$u_0 = \begin{cases} 0.9 & \text{if } 0 \le t_0 \le t^*, \\ 1 & \text{if } t^* < t_0 \le 2, \end{cases} \tag{26}$$
where t* ≈ 1.2285. This switching point can be checked numerically, as sketched below.
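The following sketch (ours; SciPy is assumed available) locates t* from the explicit solutions (24) and (25), and also recovers the no-control threshold mentioned in the remark that follows.

```python
import numpy as np
from scipy.optimize import brentq

t1 = 2.0
F1 = lambda t: -1.0 + np.exp((t - t1) / 2)                 # Equation (24)
F2 = lambda t: 0.639 * (1.0 - np.exp(10 * (t1 - t) / 19))  # Equation (25)

# Switching point t*: crossing of the two value functions.
t_star = brentq(lambda t: F1(t) - F2(t), 0.0, 1.99)
print(t_star)   # ~1.2285, as in Equation (26)

# Remark below, with K = 0: control is worthwhile only while
# min(F1, F2) <= K - 1 = -1; on [0, t*] the minimum is F2.
t_nc = brentq(lambda t: F2(t) + 1.0, 0.0, 1.0)
print(t_nc)     # ~0.21
```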
  • Remark. The value of u_0 given in Equation (26) is actually the optimal solution if we assume that the optimizer must choose u(t) equal to k_1 = 1 or k_2 = 0.9 for any t ∈ [t_0, t_1). However, if we admit the possibility that we can choose u(t) ≡ 0, then this result is correct at time t_0 if and only if the corresponding value function is less than or equal to K − 1 [see Equation (6)]. When K = 0, we find that the optimizer should use no control at all for t_0 ∈ [0.21, 2) (approximately).
In the next section, we will consider the case when t_1 tends to infinity.

5. The Time-Invariant Case

Once we have computed the value function in the preceding section, we can take the limit as t_1 tends to infinity. Actually, as can be seen in the example that we presented, the value function will then become a constant. This is true because when t_1 → ∞, the problem becomes time-invariant, so that F'(0, t_0) = 0. It follows that the DPE given in Equation (18) reduces to
$$0 = \inf_{u(t_0)} \left\{ g[u(t_0)] + \nu_0 \sum_{j=1}^{2} p_{0,j}(t_0) \left[ -\frac{\lambda}{\nu_j} + F(0, t_0) \right] - \nu_0\, F(0, t_0) \right\}. \tag{27}$$
We will now present a problem that can be solved explicitly. Assume that the function f(·) in the cost function J(i) defined in Equation (3) is such that
$$f(u_0) = \frac{c\, u_0^2}{(u_0 + \nu_0)^2}, \tag{28}$$
where c is a positive constant. Notice that f(0) = 0 and f(u_0) > 0 if u_0 > 0, as required. Moreover, we find that f'(u_0) > 0, so that the maintenance costs increase with u_0, which is logical.
Next, assume that ν_1 = ν_2. Then, differentiating Equation (27) with respect to u_0, we obtain, after simplification, that
$$0 = \frac{2c\, u_0}{u_0 + \nu_0} - \frac{\nu_0}{\nu_1} + \nu_0\, F(0, t_0). \tag{29}$$
It follows that the optimal control is given by
$$u_0^* = \frac{\nu_0^2 \left[ 1 - \nu_1\, F(0, t_0) \right]}{2c\, \nu_1 - \nu_0 \left[ 1 - \nu_1\, F(0, t_0) \right]}, \tag{30}$$
which must be non-negative.
Let us take ν_0 = ν_1 = 1. Then, u_0* becomes

$$u_0^* = \frac{1 - F(0, t_0)}{2c - 1 + F(0, t_0)}. \tag{31}$$
Substituting this expression into Equation (27), we obtain, if p = 1/2, that the value function F(0, t_0) satisfies the algebraic equation
$$0 = 3\, F^2(0, t_0) - 2\,(2c - \lambda)\, F(0, t_0) - 4c\lambda - 2\lambda + 1. \tag{32}$$
We find that
$$3\, F(0, t_0) = 2c - \lambda - \sqrt{4c^2 + 8c\lambda + \lambda^2 + 6\lambda - 3}, \tag{33}$$
from which we deduce that the optimal control is given by
$$u_0^* = \frac{3 - 2c + \lambda + \sqrt{\lambda^2 + (8c + 6)\lambda + 4c^2 - 3}}{8c - 3 - \lambda - \sqrt{\lambda^2 + (8c + 6)\lambda + 4c^2 - 3}}. \tag{34}$$
Now, it is clear that the optimal control should be u_0* = 0 if K = 0 and λ ≤ 1. Let us choose λ = 10. Then, for u_0* in Equation (34) to be positive (that is, for its denominator to be positive), we find that the constant c must be such that
$$c > \frac{1}{5}\left(12 + \sqrt{139}\right) \approx 4.76.$$
For instance, if we take c = 5, then F(0, t_0) = −√657/3 = −√73 ≈ −8.544 and
$$u_0^* = \frac{1 + \sqrt{73}}{9 - \sqrt{73}} \approx 20.93.$$
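These closed-form results are easy to check numerically. The following sketch (ours, for illustration) evaluates the root (33) and then the optimal rate through Equation (31), which is equivalent to Equation (34):

```python
import math

def F_value(c, lam):
    """Root (33) of the algebraic equation (32)."""
    return (2 * c - lam
            - math.sqrt(4 * c**2 + 8 * c * lam + lam**2 + 6 * lam - 3)) / 3

def u_star(c, lam):
    """Optimal inspection rate, Equations (31) and (34)."""
    F = F_value(c, lam)
    return (1 - F) / (2 * c - 1 + F)

print(F_value(5, 10))   # -8.544... = -sqrt(657)/3
print(u_star(5, 10))    # 20.93...
```

Numerically, the denominator 2c − 1 + F changes sign near c ≈ 4.76 when λ = 10, in agreement with the condition on c given above.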
The optimal control is shown in Figure 3 for c ∈ [5, 10].
  • Remark. If u(t) ≡ 0, we have F_0(0, t_0) = K − 10. Therefore, the optimal control is indeed equal to (approximately) 20.93 when λ = 10 and c = 5 if and only if K > 1.456 (that is, K > 10 − √73).

6. Conclusions

In this paper, we considered an optimal control problem, known as a homing problem, for a continuous-time Markov chain used as a model for the state of a machine. The aim was to find the optimal maintenance rate in order to maximize the time until the machine needs to be repaired, while taking the maintenance costs into account.
The model could be generalized by defining other states. For instance, there could be a state corresponding to the case when the machine has broken down and cannot be repaired. If the optimizer uses no control at all (that is, if the machine is never serviced), there could be a high probability that the chain will move from the working state to this absorbing state.
We were able to obtain explicit and exact solutions to two particular problems. In general, numerical methods or simulations could be used to obtain the value function and the optimal control.

Author Contributions

Conceptualization, M.L.; Methodology, M.L. and R.Y.; Writing original draft, M.L.; Writing review and editing, R.Y.; Funding acquisition, M.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Sciences and Engineering Research Council of Canada.

Data Availability Statement

Data are contained within the article.

Acknowledgments

This research was supported by the Natural Sciences and Engineering Research Council of Canada. The authors also wish to thank the anonymous reviewers of this paper for their constructive comments.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Parmigiani, G. Optimal scheduling of fallible inspections. Oper. Res. 1993, 44, 360–367. [Google Scholar] [CrossRef]
  2. Boukas, E.K.; Liu, Z.K. Production and maintenance control for manufacturing systems. IEEE Trans. Autom. Control 2001, 46, 1455–1460. [Google Scholar] [CrossRef]
  3. Wang, C.-H.; Sheu, S.H. Determining the optimal production-maintenance policy with inspection errors: Using a Markov chain. Comput. Oper. Res. 2003, 30, 1–17. [Google Scholar] [CrossRef]
  4. Suryadi, H.; Papageorgiou, L.G. Optimal maintenance planning and crew allocation for multipurpose batch plants. Int. J. Prod. Res. 2004, 42, 355–377. [Google Scholar] [CrossRef]
  5. Liying, W.; Youtong, F.; Liying, S.; Baoyou, L. On fault diagnosis and inspection policy for deteriorating system. In Proceedings of the 26th Chinese Control Conference, Shenyang, China, 25–27 July 2007; pp. 477–481. [Google Scholar] [CrossRef]
  6. Golmakani, H.R.; Fattahipour, F. Optimal replacement policy and inspection interval for condition-based maintenance. Int. J. Prod. Res. 2011, 49, 5153–5167. [Google Scholar] [CrossRef]
  7. Naderkhani, F.; Makis, V. Optimal condition-based maintenance policy for a partially observable system with two sampling intervals. Int. J. Adv. Manuf. Tech. 2014, 78, 795–805. [Google Scholar] [CrossRef]
  8. He, K. Optimal Maintenance Planning in Novel Settings. Ph.D. Dissertation, University of Pittsburgh, Pittsburgh, PA, USA, 2017. [Google Scholar]
  9. Sun, Q.; Ye, Z.-S.; Chen, N. Optimal Inspection and Replacement Policies for Multi-Unit Systems Subject to Degradation. IEEE Trans. Reliab. 2018, 67, 404–413. [Google Scholar] [CrossRef]
  10. Cao, P.; Yang, K.; Liu, K. Optimal selection and release problem in software testing process: A continuous-time stochastic control approach. Eur. J. Oper. Res. 2020, 285, 211–222. [Google Scholar] [CrossRef]
  11. Sun, Q.; Ye, Z.-S.; Zhu, X. Managing component degradation in series systems for balancing degradation through reallocation and maintenance. IISE Trans. 2020, 52, 797–810. [Google Scholar] [CrossRef]
  12. Andriotis, C.P.; Papakonstantinou, K.G.; Chatzi, E.N. Value of structural health monitoring quantification in partially observable stochastic environments. Struct. Saf. 2021, 93, 102072. [Google Scholar] [CrossRef]
  13. Gan, S.; Yousefi, N.; Coit, D.W. Optimal control-limit maintenance policy for a production system with multiple process states. Comput. Ind. Eng. 2021, 158, 107454. [Google Scholar] [CrossRef]
  14. Vrignat, P.; Kratz, F.; Avila, M. Sustainable manufacturing, maintenance policies, prognostics and health management: A literature review. Reliab. Eng. Syst. Saf. 2022, 218, 108140. [Google Scholar] [CrossRef]
  15. Morato, P.G.; Papakonstantinou, K.G.; Andriotis, C.P.; Nielsen, J.S.; Rigo, P. Optimal inspection and maintenance planning for deteriorating structural components through dynamic Bayesian networks and Markov decision processes. Struct. Saf. 2022, 94, 102140. [Google Scholar] [CrossRef]
  16. Roux, M.; Fang, Y.-P.; Barros, A. Maintenance planning under imperfect monitoring: An efficient POMDP model using interpolated value function. IFAC-PapersOnLine 2022, 55, 128–135. [Google Scholar] [CrossRef]
  17. Lefebvre, M.; Pazhoheshfar, P. An optimal control problem for the maintenance of a machine. Int. J. Syst. Sci. 2022, 53, 3364–3373. [Google Scholar] [CrossRef]
  18. Wang, Y.; Li, Y. Replacement policy for a single-component machine with limited spares in a finite time horizon. IET Conf. Proc. 2022, 21, 1503–1510. [Google Scholar] [CrossRef]
  19. Nasersarraf, S.; Asadzadeh, S.; Samimi, Y. Determining the optimal policy in condition-based maintenance for electrical panels. Iran. Electr. Ind. J. Qual. Product. 2023, 12, 37–45. [Google Scholar]
  20. Wang, W.; Chen, X. Piecewise deterministic Markov process for condition-based imperfect maintenance models. Reliab. Eng. Syst. Saf. 2023, 236, 109271. [Google Scholar] [CrossRef]
  21. Wang, G.; Zhu, Z. Optimal control of sampled-data systems based on an optimized stochastic sampling. Int. J. Robust Nonlinear Control 2023, 33, 4304–4325. [Google Scholar] [CrossRef]
  22. Liu, B.; Lin, J.; Zhang, L.; Xie, M. A dynamic maintenance strategy for prognostics and health management of degrading systems: Application in locomotive wheel-sets. In Proceedings of the 2018 IEEE International Conference on Prognostics and Health Management (ICPHM), Seattle, WA, USA, 4–7 June 2018; pp. 1–5. [Google Scholar] [CrossRef]
  23. Mancuso, A.; Compare, M.; Salo, A.; Zio, E. Optimal Prognostics and Health Management-driven inspection and maintenance strategies for industrial systems. Reliab. Eng. Syst. Saf. 2021, 210, 107536. [Google Scholar] [CrossRef]
  24. Lefebvre, M. Applied Stochastic Processes; Springer: New York, NY, USA, 2007. [Google Scholar]
  25. Whittle, P. Optimization over Time; Wiley: Chichester, UK, 1982; Volume I. [Google Scholar]
  26. Whittle, P. Risk-Sensitive Optimal Control; Wiley: Chichester, UK, 1990. [Google Scholar]
  27. Lefebvre, M.; Yaghoubi, R. Optimal service time distribution for an M/G/1 queue. Axioms 2024, 9, 594. [Google Scholar] [CrossRef]
  28. Lefebvre, M.; Yaghoubi, R. Optimal control of a queueing system. Optimization 2024, 1–14. [Google Scholar] [CrossRef]
Figure 1. Transition diagram of the Markov chain.
Figure 2. Functions F_1(t_0) (solid line) and F_2(t_0) defined, respectively, in Equations (24) and (25) for t_0 ∈ [0, 2], when t_1 = 2.
Figure 3. Optimal control u_0* given in Equation (34) when λ = 10 and c ∈ [5, 10].