Article

Novel Adaptive Hidden Markov Model Utilizing Expectation–Maximization Algorithm for Advanced Pipeline Leak Detection

by Omid Zadehbagheri 1, Mohammad Reza Salehizadeh 1,*, Seyed Vahid Naghavi 2, Mazda Moattari 1 and Behzad Moshiri 3,4

1 Department of Electrical Engineering, Marvdasht Branch, Islamic Azad University, Marvdasht 1477893780, Iran
2 Engineering Division, Research Institute of Petroleum Industry, Tehran 1485613111, Iran
3 School of ECE, College of Engineering, University of Tehran, Tehran 1439957131, Iran
4 Department of ECE, University of Waterloo, Waterloo, ON N2L 3G1, Canada
* Author to whom correspondence should be addressed.
Modelling 2024, 5(4), 1339-1364; https://doi.org/10.3390/modelling5040069
Submission received: 20 August 2024 / Revised: 14 September 2024 / Accepted: 20 September 2024 / Published: 24 September 2024
(This article belongs to the Topic Oil and Gas Pipeline Network for Industrial Applications)

Abstract: In the oil industry, leakage from pipelines carrying hydrocarbon fluids causes significant environmental and economic damage. Recently, there has been a growing trend toward employing data mining techniques for detecting leaks. Among these methods is the Hidden Markov Model, which performs well on stationary data but loses accuracy when a leak causes a drop in pressure or flow. This paper presents an adaptive Hidden Markov method. Previous methods had low accuracy because they extracted insufficient information for precise leak detection, often classifying the size and location of leaks only broadly. In contrast, the proposed model extracts hidden features to accurately identify the location and size of leaks, even in noisy conditions. Simulating a leak in a section of an oil pipeline in the Iranian Oil Export Corridor demonstrates the proposed method's superiority over common methods such as K-NN, SVM, Naive Bayes, and logistic regression.

1. Introduction

1.1. Background and Literature Review

1.1.1. Importance of Leak Detection

Pipelines are among the most important channels for transporting oil and gas; consequently, regular inspection of pipelines is essential. Leaks in gas and oil pipeline networks result in significant economic losses and environmental pollution. For instance, from 2015 to 2020, 954 pipeline leaks were reported in the United States, causing financial damage of USD 1.11 billion. Proper measures for detecting leaks have therefore gradually attracted researchers' attention in recent years. Millions of kilometers of pipelines act as crucial conduits in the global oil and gas industry, representing 3% to 7% of the total transported oil or gas [1]. Oil leakage can occur for several reasons, including unauthorized diversions or branches from pipelines, which may be due to pipeline theft.
Ruptures and leaks might continue to happen, and operators must provide systems for leak detection to mitigate the ramifications. This, in turn, could reduce the risk for the pipeline operator. A steady decline in the number of spills has been evident over the last two decades as reported by the Department of Transportation Office of Pipeline Safety (DTOPS). Nevertheless, since 2001, the overall average number of spills seems to have leveled off at a rate of about 100 incidents annually. Even though the yearly reports may vary below and above the 2020 value, on average, a flattened-out curve is observed. The reason for the termination of the declining trend from 2001 to 2020 has not been specifically identified [2]. Therefore, the search for effective leak detection methods is crucial given the potential harm caused by pipeline damage.

1.1.2. Existing Methods and Research Gaps

In recent decades, numerous techniques for monitoring the structural health of pipelines have been developed to detect such damage. As a result, a wide range of validated and established methods now exist for locating pipeline damage. These methods can be categorized in various ways, such as hardware- vs. software-based, direct vs. indirect detection, and internal vs. external detection. Leak detection commonly employs both software- and hardware-based methods [3].
Leak detection methods based on hardware include magnetic flux, characteristic impedance, radioactive tracing, gas infrared imaging, and self-organizing wireless sensor detection [4]. While hardware-based methods have improved over the years and proven useful, they can still be complex and time-consuming. On the other hand, software-based methods have not improved as much and have been slower to develop, with current systems in the market still using design concepts from the 1980s [4]. Software-based methods cover wavelet analysis, support vector machine, flow balance, negative pressure wave, statistical, fuzzy neural network, generalized correlation analysis, sound wave, and real-time model detection methods [5].
The pressure profile of the pipeline plays a crucial role in leak detection. Leaks can be identified by a sudden drop in pressure, followed by a partial recovery. This pressure pulse travels both downstream and upstream in a wave-like pattern through the pipeline. Accurately addressing the velocity of hydraulic transient wave propagation, and detecting and locating leaks, is essential for the automated supervision of the pipeline. To monitor pressure profiles, operators deploy sensors at different locations along the pipeline. These readings are compared with pressure profiles generated from fluid flow modeling, which incorporates pipeline friction factors, the Darcy–Weisbach equation, and fluid flow parameters. By comparing these values, operators can detect anomalies that indicate leaks. However, this method does not pinpoint the leak location, necessitating further verification and assessment before making a shutdown decision. The complexity and length of pipeline networks, with multiple pumps and compressors, cause anomalies and delays in the pressure profile. Current methods for detecting pipeline leaks, in addition to location techniques, are generally categorized into two primary approaches under the internal method: real-time transient modeling and pipeline data modeling. These methods utilize steady-state pipeline models to predict unknown parameters and more accurately detect and locate leaks. The pressure distribution in the pipeline and leak location is predicted based on the flow and pressure rate signals at both ends of the pipelines using pipeline-model-based methods. In [6,7,8], the data-based approach has been implemented using neural networks and fuzzy logic to classify leak size and distance. The model-based approach is implemented using state estimation along with the calibration of the fitting loss coefficient as described in [9].
One of the important topics in the control literature is model-based fault detection and identification, which has attracted researchers’ attention recently [10]. If we consider a leak as a fault in the pipeline network, reviewing the methods used in this regard can be effective. On the other hand, data mining algorithms are widely used in many industries, including the energy industry. They have already been implemented as computer-aided leak detection methods. Various algorithms can be introduced to detect breaks to increase reliability and improve accuracy and sensitivity. Applying machine learning techniques to leak detection opens the opportunity to investigate the size and type of leaks that are undetectable by traditional methods [11]. With the advancement of machine learning, there has been a growing focus on methods based on statistical and probabilistic models that can effectively handle signal uncertainty. Support vector machines (SVMs) are a commonly used supervised learning algorithm for classification tasks. SVMs are robust and adaptable, suitable for linear and non-linear classifications, regression, and outlier detection. Qu et al. proposed a real-time monitoring and pipeline leak detection system using SVM [12]. The k-nearest neighbors (K-NN) algorithm utilizes feature similarity to predict values for new data points and is widely recognized as one of the most popular algorithms in machine learning and data mining, both in industry and academia. K-NN has also been effectively applied in leak detection [13]. Bayes is an approach for classifying phenomena based on their probability of occurrence or non-occurrence [14]. The Naive Bayes algorithm is applied to detect faults in different applications [15,16]. Additionally, deep learning methods have been employed in processing acoustic signals for leak detection, such as deep belief networks (DBNs) combined with genetic algorithms (GAs), which offer potential improvements in detection accuracy [17]. 
A Hidden Markov Model (HMM) is a statistical model that can be viewed as a simplified dynamic Bayesian network [18]. Amongst the statistical and probabilistic models, Markov and Hidden Markov Models (HMMs) are more capable of classifying and modeling patterns, particularly for signals that exhibit non-stationary behavior and low repeatability, which creates an effective ability to detect faults [19,20,21]. The Hidden Markov Model (HMM) is a combination of two stochastic processes: one observed and the other, namely the hidden process, unknown. The hidden process can be viewed via its produced sequence of observations. In [22], new connections between Hidden Markov Models and undirected graphical models are given. There are two reasons for using these models: (a) understanding and predicting an unknown process using another observable process and (b) explaining the variation of the unknown process based on observational changes [23]. The most well-known applications of HMMs include temporal pattern recognition (e.g., speech recognition), signal processing, novel gene clustering, handwriting recognition, gesture recognition, speech tagging, musical scores, prediction of water consumption, cyberattack detection in petroleum systems, acoustic leak detection, DNA sequencing, pipeline safety monitoring, and robotics [24]. Markov models are also used in the literature for leak identification [25]. Ai et al. created a pipeline monitoring and leak detection system that employs Hidden Markov Models (HMM) to identify acoustic signals indicative of damage [26]. Qiu et al. proposed an early-warning model of the compressor unit in the equipment chain of the gas pipeline based on a neural network and HMM, which exhibited superior generalization accuracy [27]. Ai et al. introduced a leak detection and pipeline monitoring system using the Linear Prediction Spectrum Coefficient and HMM approach for calculating damage acoustic signals [28]. Fagiani et al. proposed a framework for automatic leak detection in water and natural gas networks, utilizing three statistical methods: the Gaussian mixture model, Hidden Markov Models (HMM), and a one-class support vector machine. The HMM exhibited the best performance among them [29]. Liu et al. developed a leak detection approach based on data mining, which incorporates Markov feature extraction along with a dual-phase decision strategy, integrating both short-term and long-term techniques. To detect leakages, they used a distance between raw pressure and linear pressure estimation [30]. Zhang et al. recently used the Gaussian mixture model and HMM to identify the negative pressure wave that causes the leakage [31].
One of the challenges faced by existing leak detection methods is the inadequate extraction of leak information. The features extracted from pressure and flow signals cannot fully represent all the useful information. To enhance the effectiveness of leak detection methods, more efficient feature extraction methods are needed. Feature extraction from the Markov chain can offer insights into dynamic changes and improve leak detection in the presence of noise [30]. Another problem with the above models is that while they can identify leak points, they cannot identify the size of the leak. Looking at the data as a chain, in addition to identifying the location of the leak, can help determine the leak size. The assumption is that future states depend solely on the current state and not on past events (the Markov property). This assumption typically allows for reasoning and computation with the model, which would otherwise be infeasible.
This paper introduces a new method for identifying leaks in oil and gas transmission pipelines. The developed method is based on the HMM and utilizes regression analysis; called the Adaptive Hidden Markov Model (AHMM), it identifies and estimates leak characteristics. It allows for the efficient modeling of raw pressure and flow data and can be used for both online and offline analysis. By simulating a leak in a section of an oil pipeline in the Iranian Oil Export Corridor, the accuracy of the method is evaluated using the pipeline pressure and flow data. Since actual leakage data were not available, OLGA v2022.1.0 software, which is widely regarded as the industrial standard for flow assurance and operational analysis in oil and gas pipelines, was adopted to simulate the pipeline in the event of a leak. Mechanical characteristics such as the diameter, thickness, and length of the construction materials and the pipeline profile, as well as dynamic parameters including the operating pressure and flow of the line, are modeled in the OLGA software [32,33]. Many authors have conducted flow assurance studies using the OLGA code, which has been utilized to simulate system behavior and solve multiphase flow problems in various case studies. For comparison, the proposed algorithm is evaluated against Naive Bayes, K-NN, SVM, and logistic regression models. Moreover, the model can properly detect the size of the leakage, and its capability and superiority in identifying the location of leaks are investigated as well.

1.2. Contribution Highlights and Paper Organization

The paper’s contributions are summarized as follows:
  • Introducing the Adaptive Hidden Markov Model (AHMM): This novel method identifies the size and location of oil leakages more accurately than existing techniques like K-NN, SVM, logistic regression, and Naive Bayes, and operates effectively in detecting small leaks. The AHMM extracts linear flow and pressure trends using the Hidden Markov concept, providing a robust analysis in both online and offline settings.
  • Practical Application and Flexibility: The proposed AHMM algorithm has been successfully tested using simulation data generated by OLGA, a widely used industrial standard for pipeline simulation. Although actual leak data were unavailable, the simulations were carefully calibrated with real pipeline parameters from a section of the Iranian Oil Export pipeline, ensuring that the results closely reflect real-life operational conditions. These promising results suggest that the model holds potential for practical implementation. The flexibility of the AHMM allows for effective analysis of pressure and flow data, making it adaptable to various real-world scenarios.
The paper is organized as follows: Section 2 provides a comprehensive overview of leak detection principles, emphasizing the fundamental models and equations involved. Section 3 explains the Hidden Markov Model (HMM) and its application in leak detection. Section 4 describes the proposed Adaptive Hidden Markov Model (AHMM) method for leak detection, including the regression model and the EM algorithm for parameter estimation. Section 5 presents the numerical analysis and results, demonstrating the performance of the AHMM in detecting leaks and comparing it with other common methods. Finally, Section 6 concludes the research by summarizing the findings, emphasizing the benefits of the proposed method, and suggesting directions for future research.
This structure aims to provide a comprehensive understanding of the proposed method, from theoretical foundations to practical applications, and offers a clear pathway for further advancements in pipeline leak detection.

2. Principle of Leak Detection

In this section, we review the fundamental principles of pipeline leak detection. Our proposed method is not based on traditional mathematical models but instead fully relies on simulated data and data-driven analysis. For this purpose, we have utilized the OLGA software as a digital twin model, which accurately simulates the behavior of pipelines under various conditions and provides reliable data for our analysis. These simulated data are then applied to methods such as the Adaptive Hidden Markov Model (AHMM) and other data-driven comparative techniques. In the following, we will provide further details on how the OLGA software operates, the simulation process, and the impact of leaks on the pressure and flow profiles within the pipeline.

2.1. Model and Equations

The mass transport equation, which governs the transport of a mass field $m_i$ moving with velocity $U_i$, can be expressed as follows:
$$\partial_t m_i + \partial_z (m_i U_i) = \sum_j \psi_{ji} + G_i$$
In the equation, $\partial_t$ denotes differentiation with respect to time, while $\partial_z$ denotes spatial differentiation. The term $\psi_{ji}$ refers to the rate of mass transfer between the $i$-th and $j$-th mass fields, and $G_i$ represents any mass source or sink. For a momentum field $m_i U_i$, the momentum balance equation can be expressed as follows:
$$\partial_t (m_i U_i) + \partial_z (m_i U_i^2) = m_i g \cos\varphi + P_i + G_i U_i + \sum_j \left(\Psi_{ji}^{+} U_j - \Psi_{ji}^{-} U_i\right) + \sum_j F_{ji}\,(U_j - U_i) - F_{iw} U_i$$
The momentum balance equation accounts for several factors, including the pipe inclination angle ($\varphi$), the pressure force ($P_i$), the friction forces between the $i$-th and $j$-th mass fields ($F_{ji}$), and the wall friction ($F_{iw}$). The momentum contribution of the mass source/sink $G_i$ is denoted by $G_i U_i$. Momentum contributions from mass transfer between the $j$-th and $i$-th mass fields are represented by $\Psi_{ji}$: $\Psi_{ji}^{+}$ denotes the net contribution from mass field $i$ to $j$, while $\Psi_{ji}^{-}$ denotes the net contribution from mass field $j$ to $i$.
The volume equation, which accounts for the relationship between pressure, temperature, and fluid volume, can be derived by applying the equation of state and the basic constraint that the volume of fluid is equal to the pipe volume. It can be expressed as
$$\sum_L \frac{m_L}{\rho_L^2}\frac{d\rho_L}{dP}\,\partial_t P + \sum_L \frac{m_L}{\rho_L^2}\frac{d\rho_L}{dT}\,\partial_t T + \sum_L \frac{1}{\rho_L}\left[\partial_z (m_L U_L) + G_L\right] = 0$$
where L denotes the existing phases.
The energy balance equation for a specific mass field m i can be written as
$$\partial_t (m_i E_i) + \partial_z (m_i U_i H_i) = S_i + Q_i + \sum_j T_{ij} E_j$$

2.2. Impact of Leakage on Flow

Since the pipeline network is closed and its connections are specified, the continuity law can be used to determine the amount of possible leakage in the pipelines [3]:
$$\frac{dm_1}{dt} = \frac{dm_2}{dt}$$
where d m / d t is the mass changes over time, i.e., the mass flow. In this method, based on the fluid’s mass flow rate entering the line from the source station and taking into account the existing outlets along the pipeline, the amount of leakage in the line can be determined. In case of leakage in any of the pipeline points, the amount of flow drop measured between both flowmeters will indicate the amount of leakage.
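The mass-balance check described above can be sketched in a few lines. This is an illustrative example, not the paper's implementation; the flow values and the alarm threshold below are made up.

```python
# Mass-balance leak check: the difference between inlet and outlet mass
# flow indicates a possible leak. All numbers are illustrative.

def estimate_leak_rate(inlet_flow_kg_s, outlet_flow_kg_s):
    """Return the apparent leak rate (kg/s) from a mass balance."""
    return inlet_flow_kg_s - outlet_flow_kg_s

def is_leaking(inlet_flow_kg_s, outlet_flow_kg_s, threshold_kg_s=0.5):
    """Flag a leak when the imbalance exceeds a measurement-noise threshold."""
    return estimate_leak_rate(inlet_flow_kg_s, outlet_flow_kg_s) > threshold_kg_s

leak = estimate_leak_rate(110.0, 107.2)
print(leak, is_leaking(110.0, 107.2))
```

In practice the threshold would be set from flowmeter accuracy, since a small imbalance can be measurement noise rather than a leak.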
For the pipeline under study, which is 87 km long and has a diameter of 42 inches, the inlet flow is approximately 60,000 barrels per day, and the inlet pressure is 18 bar. Based on this, the flow and pressure profiles have been plotted. It is worth mentioning that more detailed information about the pipeline is provided in Section 5 of the paper.
Based on Figure 1, the amount of volumetric flow drop after the occurrence of leakage is proportional to the amount of leakage. It should be noted that if the chart represented mass flow instead of volumetric flow, the mass flow drop after the leakage would be exactly equal to the mass flow of the leakage.

2.3. Impact of Leakage on Pressure

Frictional and other similar losses in pipelines affect the pressure of the fluid flowing through them, and changes along the pipeline route, such as changes in altitude and weather conditions, must also be accounted for; fluid equations should therefore be used to calculate and investigate all these changes. The most appropriate method for investigating the conditions of the fluid passing through the pipeline is the Bernoulli equation:
$$\frac{P_1}{\rho_1} + \frac{1}{2}U_1^2 + g h_1 = \frac{P_2}{\rho_2} + \frac{1}{2}U_2^2 + g h_2$$
In this equation, the pressure measured by the pressure transmitters along the pipeline will be used in the calculations. Additionally, the velocity of the fluid will be determined by measuring the volume flow at different points, and the pressure changes due to height changes will be evaluated in the height term in the equation. In the absence of leakage along the pipeline, the fluid pressure profile will align along the pipeline, ideally forming a straight line with a negative slope. However, in the event of a leak at any point on the line, the uniformity of the pattern at the site of the leak will be disrupted, causing a discontinuity in the pressure profile. Figure 2 illustrates the effect of leakage in the pipeline on the pressure profile. It is important to note that while the frictional effects are indeed considered in the pipeline profile, their influence is relatively small compared to the impact of elevation changes. Therefore, the pressure profile appears nearly linear, with the dominant factor being the change in elevation, and the effects of friction being less noticeable.
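Solving the Bernoulli equation above for the downstream pressure gives a quick consistency check against transmitter readings. The following is a minimal sketch assuming incompressible single-phase flow ($\rho_1 = \rho_2$); all numbers are illustrative, not taken from the studied pipeline.

```python
# Downstream pressure from the Bernoulli equation, assuming an
# incompressible single-phase fluid (rho1 == rho2). Illustrative values.

def downstream_pressure(p1, rho, u1, u2, h1, h2, g=9.81):
    """P2 from P1/rho + U1^2/2 + g*h1 = P2/rho + U2^2/2 + g*h2 (Pa)."""
    return p1 + rho * (0.5 * (u1**2 - u2**2) + g * (h1 - h2))

# A crude-oil-like fluid (~850 kg/m^3) at constant speed over a 10 m drop:
p2 = downstream_pressure(p1=18e5, rho=850.0, u1=2.0, u2=2.0, h1=10.0, h2=0.0)
print(p2)  # pressure rises by rho*g*10 ≈ 0.83 bar
```

A measured $P_2$ noticeably below this predicted value is the kind of discontinuity in the pressure profile that Figure 2 illustrates.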

3. Hidden Markov Model

The definition of an HMM is based on a pair of stochastic processes. In the finite-state case, the underlying process is a discrete homogeneous Markov chain, which is not visible, which is why it is called hidden. This hidden layer generates a sequence of stochastic observations. The structure of an HMM mainly consists of a Markov chain of latent variables, where each latent variable is linked to an observable outcome.

3.1. Markov Chain

Markov chains provide a relatively straightforward and versatile framework for modeling sequential patterns in time series data. These probabilistic models have found widespread application in various fields, including finance, biology, and natural language processing. They are especially effective for modeling systems where the next state depends only on the current state, rather than on the sequence of prior events. Consider a random process {q_t, t ∈ T}, where T = {0, 1, 2, …} is the set of time points and S = {S_1, S_2, …, S_N} is the set of possible states. At any given time t + 1, the state q_{t+1} depends only on the state at the previous time, q_t:
$$P(q_{t+1} = S_j \mid q_1, q_2, \ldots, q_t = S_i) = P(q_{t+1} = S_j \mid q_t = S_i) = P_{ij},$$
where P_ij is the probability of transitioning from state S_i to state S_j. This means that the past information is fully summarized by knowledge of the current state, which is the essence of the Markov property.
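The Markov property can be illustrated by simulating a small chain where each step depends only on the current state. The two states and the transition probabilities below are illustrative, not estimated from pipeline data.

```python
import random

# A two-state Markov chain: the next state is drawn using only the
# current state. State names and probabilities are illustrative.

P = {  # transition matrix: P[i][j] = P(q_{t+1} = j | q_t = i)
    "normal": {"normal": 0.95, "leak": 0.05},
    "leak":   {"normal": 0.10, "leak": 0.90},
}

def step(state, rng):
    """Draw the next state given only the current state."""
    r, acc = rng.random(), 0.0
    for nxt, p in P[state].items():
        acc += p
        if r < acc:
            return nxt
    return nxt  # guard against floating-point rounding

rng = random.Random(0)
chain = ["normal"]
for _ in range(20):
    chain.append(step(chain[-1], rng))
print(chain)
```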

3.2. Typical HMM Method

Mathematically, a standard Hidden Markov Model (HMM) is defined by an underlying stochastic process that includes hidden states forming a Markov chain. The behavior of an HMM can be explained by the following factors:
N = the number of hidden states in the Markov chain. Even though the states are hidden, they may still correspond to physical concepts. The individual states are denoted by S = {S_1, S_2, …, S_N}, and the state at time t is denoted by q_t.
π = the initial state probability distribution, π = {π_i}, where π_i = P(q_1 = S_i), 1 ≤ i ≤ N.
A = the state transition probability matrix, A = {P_ij}, where P_ij = P(q_{t+1} = S_j | q_t = S_i), 1 ≤ i, j ≤ N; P_ij is the probability of transitioning from state i at time t to state j at time t + 1, which reflects the Markov property.
B = the emission probability distribution at state S_j, B = {b_j(y_t)}, where b_j(y_t) = P(y_t | q_t = S_j), 1 ≤ j ≤ N, describing the probability of emitting symbol y_t from state j.
The HMM assumes that the observations Y = (y_1, y_2, y_3, …, y_T) are generated from a finite set of hidden states and have different properties in different states. A typical HMM is therefore defined by the three-tuple λ = (π, A, B).
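The three-tuple λ = (π, A, B) can be made concrete with a toy generator: hidden states evolve by A, and each state emits a Gaussian observation. All numbers below are illustrative, not fitted to any pipeline.

```python
import random

# A minimal lambda = (pi, A, B) with N = 2 hidden states and Gaussian
# emissions. Parameter values are made up for illustration.

pi = [0.9, 0.1]                       # initial state distribution
A = [[0.95, 0.05],                    # A[i][j] = P(q_{t+1}=j | q_t=i)
     [0.10, 0.90]]
B = [(18.0, 0.2), (15.0, 0.5)]        # (mean, std) of emission per state

def sample_hmm(T, rng):
    """Generate T observations and their hidden states from (pi, A, B)."""
    states, obs = [], []
    q = rng.choices([0, 1], weights=pi)[0]
    for _ in range(T):
        states.append(q)
        mu, sigma = B[q]
        obs.append(rng.gauss(mu, sigma))
        q = rng.choices([0, 1], weights=A[q])[0]
    return states, obs

states, obs = sample_hmm(50, random.Random(1))
print(states[:10], [round(y, 2) for y in obs[:3]])
```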
In an HMM, each hidden state is governed by an initial probability π, and the transition between states at time t is represented by a transition matrix A. Within each hidden state qt, an observation is emitted according to its distribution. This observable stochastic process set forms the foundation of the Hidden Markov Model. A graphical representation is provided in Figure 3.
The observed values of the random variables in the HMM can be generated from either a discrete distribution or a continuous one, such as Gaussian distribution. For an HMM to be effective in real-world applications, three fundamental problems must be addressed:
  • Evaluation Problem: Given a sequence of observations y_1, y_2, y_3, …, y_T and a model λ = (π, A, B), how can we efficiently calculate the probability of the observation sequence given the model, P(Y|λ)?
  • Optimal State Sequence Problem: Given a sequence of observed values y_1, y_2, y_3, …, y_T and a model λ = (π, A, B), how can we determine the most likely sequence of hidden states q_1, q_2, …, q_T that best explains the observations?
  • Training Problem: How can the parameters of the model λ = (π, A, B) be optimized to maximize P(Y|λ), the probability of the observation sequence y_1, y_2, y_3, …, y_T given the model?
In the postulated model, all three problems will be examined for the AHMM and possible solution methods explained. Solving the Evaluation Problem paves the way for identifying the best observation features of the linear trend. Finding the exact location of the leak depends on solving the Optimal State Sequence Problem. By solving the Training Problem, the best parameters of the regression models can be estimated; based on the information obtained from the Evaluation Problem, a regression model with optimal parameters can then be fitted to the data and the leakage rate monitored. Solving the Training Problem yields optimal models for the flow and pressure data inside the pipelines and may help detect the intensity of the leakage.
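The Optimal State Sequence Problem is classically solved with the Viterbi algorithm. The following is a generic log-space sketch with toy discrete emissions, not the paper's AHMM implementation; the states, symbols, and probabilities are illustrative.

```python
import math

# Viterbi algorithm: most likely hidden-state path, computed in log space
# to avoid underflow. Toy discrete emissions for illustration.

def viterbi(obs, pi, A, b):
    """Return the most likely state sequence for the observations."""
    N = len(pi)
    delta = [math.log(pi[i]) + math.log(b(i, obs[0])) for i in range(N)]
    back = []
    for y in obs[1:]:
        prev = delta
        back.append([max(range(N), key=lambda i: prev[i] + math.log(A[i][j]))
                     for j in range(N)])
        delta = [prev[back[-1][j]] + math.log(A[back[-1][j]][j])
                 + math.log(b(j, y)) for j in range(N)]
    path = [max(range(N), key=lambda i: delta[i])]
    for ptr in reversed(back):          # backtrack through the pointers
        path.append(ptr[path[-1]])
    return list(reversed(path))

# State 0 ("no leak") mostly emits "high" pressure; state 1 ("leak"), "low".
emis = [{"high": 0.9, "low": 0.1}, {"high": 0.2, "low": 0.8}]
path = viterbi(["high", "high", "low", "low"],
               pi=[0.8, 0.2],
               A=[[0.9, 0.1], [0.2, 0.8]],
               b=lambda i, y: emis[i][y])
print(path)  # → [0, 0, 1, 1]
```

The decoded switch from state 0 to state 1 plays the role of locating where the leak regime begins along the observation sequence.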
This study introduces an Adaptive Hidden Markov Model (AHMM) to extend the linear trend characteristics when the dynamic data of oil pressure or flow is predictable.

4. The Adaptive Hidden Markov Model (AHMM)

The two fundamental assumptions in this paper are as follows:
The probability of P(yt|qt = i, qt−1, yt−1, …, q1, y1) = P(yt|qt = i) = bi(yt), which means that the observations only depend on their generating state.
P(qt = j|yt, qt−1 = i, yt−1, …, q1, y1) = P(qt = j|qt−1 = i) = Pij, which means that the hidden states have the Markov property.
When a leak occurs, the fluid density at the leakage point quickly decreases due to the loss of the flow medium and a drop in pressure. Because of varying pressure levels in the pipeline transmission process, numerical-based methods often produce numerous false positives. However, both normal and leak samples can yield similar Markov chains despite their significant differences. To address this problem, we extract the linear trend in flow and pressure using the Hidden Markov Model concept.
According to the presented models, the pipeline pressure and flow are random variables that depend on the distance traveled from the origin. Therefore, the linear regression model can be used to show the pressure and flow inside the pipelines. Assume that the pipeline pressure/flow follows the following model:
$$y_t = \alpha + \beta X_t + e_t, \quad t = 1, \ldots, T$$
where $X$ is the design matrix, $\beta$ is the unknown slope of the regression line, $\alpha$ is the unknown intercept, $y$ denotes the observations (here, the pressure or flow of the material inside the pipeline), and $e$ is the error term, assumed to follow a Gaussian distribution. When a leak happens in the pipeline, the pressure or flow at point $t$ drops, so the linear model that the pressure/flow follows changes. We suppose that the pressure or flow data follow the AHMM model:
$$y_t = \alpha_i + \beta_i X_t + e_t, \quad t = 1, \ldots, T, \quad i = 1, \ldots, N$$
where $y_t$ and $X_t$ are as before, $\alpha_i$ and $\beta_i$ are unobservable fixed parameters, and the error term is $e_t \sim N(0, \sigma_i^2)$. The hidden state space of this model is denoted by S = {1, 2, …, N}; for example, S = {1, 2} is a special case of the AHMM with only two hidden states, where state 1 generates leak-free observations and state 2 generates observations with leakage.
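The AHMM data model above can be illustrated by generating a synthetic pressure-like series whose regression line switches when the hidden state changes. All parameter values and the leak onset time below are made up for illustration.

```python
import random

# Synthetic data from the two-state AHMM idea: each hidden state has its
# own regression line (intercept, slope, noise). Here state 0 plays the
# role of "leak-free" and state 1 "leakage"; all numbers are illustrative.

def simulate_ahmm(T, rng, leak_at=60):
    """Pressure-like series whose line switches when the leak begins."""
    alpha = [18.0, 15.5]          # intercept per hidden state
    beta = [-0.02, -0.035]        # slope per hidden state (drop along pipe)
    sigma = [0.05, 0.08]          # noise std per hidden state
    x, y, states = [], [], []
    for t in range(T):
        i = 0 if t < leak_at else 1
        states.append(i)
        x.append(float(t))
        y.append(alpha[i] + beta[i] * t + rng.gauss(0.0, sigma[i]))
    return x, y, states

x, y, states = simulate_ahmm(100, random.Random(2))
print(states[58:62], round(y[0], 2), round(y[-1], 2))
```

Fitting the AHMM to such a series amounts to recovering both the switch point (leak location) and the per-state regression parameters (leak size).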
The Expectation–Maximization algorithm will be used for estimating the parameters of the AHMM. It can be utilized in two scenarios:
(i) when there are missing values;
(ii) when direct maximum likelihood estimation is challenging, making it difficult to estimate the parameters of the complete data.
Since the number of observations is very large, estimating the parameters in real situations can be challenging. To overcome this problem, one can use the EM algorithm, which consists of two steps; a demonstration of this design is shown in Figure 4. First, in the E-step, given the value of $\lambda^k$ and the observations y = {y_1, y_2, …, y_T}, the expected value of the logarithm of the likelihood function is computed. Then, in the M-step, this expected log-likelihood is maximized with respect to the parameters. As these two steps are iterated, the logarithm of the likelihood function keeps increasing until it converges to a local maximum or the maximum number of iterations, max_iter, is reached.
The details of the EM algorithm for estimating the initial probabilities $\pi_i$, the transition probabilities $P_{ij}$, and the parameters of the linear part of the model, $\alpha_i$, $\beta_i$, and $\sigma_i^2$, are explained below.

4.1. Expectation–Maximization Algorithm

The Expectation–Maximization (EM) algorithm is an iterative computational approach for maximum likelihood estimation that is widely applicable to incomplete data problems [30]. The parameters of the AHMM can be efficiently calculated using the EM algorithm. The likelihood function of the complete data is expressed as
$$L(\lambda \mid Y, Q) = \pi_{q_1} \prod_{t=1}^{T} P_{q_{t-1} q_t}\, b_{q_t}(y_t).$$
Generally, the emission model $b_{q_t}(y_t) = b_i(y_t)$ is assumed to be a linear regression. In this context, $\lambda$ is the vector of model parameters, $\lambda = (A, B, \pi) = \{P_{ij}, \alpha_i, \beta_i, \sigma_i^2, \pi_i\}$, which contains the parameters of the linear regression ($\alpha_i$, $\beta_i$, $\sigma_i^2$), the transition probabilities between hidden states ($P_{ij}$), and the probability of starting from the $i$-th hidden state ($\pi_i$). The likelihood function for the incomplete data can be calculated as follows.
$$L_c(\lambda \mid y) = P(y \mid \lambda) = \sum_{q_1, \ldots, q_T = 1}^{N} P(y, q \mid \lambda) = \sum_{q_1, \ldots, q_T = 1}^{N} \pi_{q_1} \prod_{t=1}^{T} P_{q_{t-1} q_t}\, b_{q_t}(y_t)$$
In the M-step, maximizing Equation (9) using a Lagrange multiplier $\gamma$ and the constraint $\sum_{i=1}^{N} \pi_i = 1$ yields
$$\hat{\pi}_i = \frac{P(q_1 = i \mid Y, \lambda^k)}{\sum_{i=1}^{N} P(q_1 = i \mid Y, \lambda^k)}$$
By differentiating Equation (9) with respect to $P_{ij}$ in a similar way, we obtain
$$\hat{P}_{ij} = \frac{\sum_{t=1}^{T} P(q_t = i, q_{t+1} = j \mid \lambda^k, Y)}{\sum_{t=1}^{T} P(q_t = i \mid \lambda^k, Y)}$$
Appendix A contains the details of the derivation of the initial value $\pi_i$ and the transition probability $P_{ij}$.
Also, because the observations are generated by the linear regression, the observations of the $i$-th hidden state have a Gaussian distribution with mean $X\beta_i$ and variance $\sigma_i^2$. That is, $E(y_i \mid q_i, X) = X\beta_i$ and $y_i \sim N(X\beta_i, \sigma_i^2)$, so we have
$$\sum_{i=1}^{N} \left( \sum_{t=1}^{T} \log b_i(y_t) \right) P(q_t = i \mid Y, \lambda^k)$$
The partial derivatives of Equation (9) with respect to $\alpha_i$, $\beta_i$, and $\sigma_i^2$ are taken from the Gaussian distribution below:
$$b_i(y_t) = \varphi\left(y_t;\, \alpha_i + X\beta_i,\, \sigma_i^2\right) = \frac{1}{\sqrt{2\pi\sigma_i^2}} \exp\left( -\frac{\left(y_t - (\alpha_i + X\beta_i)\right)^2}{2\sigma_i^2} \right)$$
Taking the partial derivative of Equation (13) with respect to $\alpha_i$ and setting it to 0 gives
$$\hat{\alpha}_i = \frac{\sum_{t=1}^{T} P(q_t = i \mid Y, \lambda^k)\left(y_t - \beta_i^{k} X_t\right)}{\sum_{t=1}^{T} P(q_t = i \mid Y, \lambda^k)}$$
Similarly, taking the partial derivatives of Equation (13) with respect to the slope $\beta_i$ and the variance $\sigma_i^2$ and setting them to 0 gives
$$\hat{\beta}_i = \frac{\sum_{t=1}^{T} P(q_t = i \mid Y, \lambda^k)\, X_t \left(y_t - \hat{\alpha}_i\right)}{\sum_{t=1}^{T} X_t^2\, P(q_t = i \mid Y, \lambda^k)}$$
$$\hat{\sigma}_i^2 = \frac{\sum_{t=1}^{T} P(q_t = i \mid Y, \lambda^k)\left(y_t - \hat{\alpha}_i - X_t \hat{\beta}_i\right)^2}{\sum_{t=1}^{T} P(q_t = i \mid Y, \lambda^k)}$$

4.2. Efficient Calculation of the Desired Quantities

To perform this evaluation, the forward–backward algorithm is used here. This algorithm uses the iteration-$k$ parameters $\lambda^k$ to estimate the iteration-$(k+1)$ parameters $\lambda^{k+1}$. The forward variable is defined as $\theta_t(i) = P(y_1, \ldots, y_t, q_t = i \mid \lambda)$, and it can be calculated in the following three steps:
$$\theta_1(i) = \pi_i\, b_i(y_1)$$
$$\theta_{t+1}(j) = \left[ \sum_{i=1}^{N} \theta_t(i)\, P_{ij} \right] b_j(y_{t+1}), \qquad t = 1, 2, \ldots, T-1$$
$$P(Y \mid \lambda) = \sum_{i=1}^{N} P(y_1, y_2, \ldots, y_T, q_T = i \mid \lambda) = \sum_{i=1}^{N} \theta_T(i)$$
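The three forward steps can be sketched in plain Python, assuming Gaussian emissions centered on each state's regression mean (the function and variable names are illustrative; a production implementation would rescale the recursion or work in log space to avoid numerical underflow for long series):

```python
import math

def gaussian_pdf(y, mean, var):
    """Emission density b_i(y) for a Gaussian regression state."""
    return math.exp(-(y - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def forward(y, pi, P, means, variances):
    """Forward variables theta[t][i] = P(y_1..y_t, q_t = i | lambda).

    means[t][i] is alpha_i + X_t * beta_i, the state-i regression mean at time t.
    """
    N, T = len(pi), len(y)
    theta = [[0.0] * N for _ in range(T)]
    for i in range(N):                       # initialization: theta_1(i) = pi_i b_i(y_1)
        theta[0][i] = pi[i] * gaussian_pdf(y[0], means[0][i], variances[i])
    for t in range(T - 1):                   # induction for t = 1, ..., T-1
        for j in range(N):
            s = sum(theta[t][i] * P[i][j] for i in range(N))
            theta[t + 1][j] = s * gaussian_pdf(y[t + 1], means[t + 1][j], variances[j])
    likelihood = sum(theta[T - 1])           # termination: P(Y | lambda) = sum_i theta_T(i)
    return theta, likelihood
```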
And, in a similar way for the backward variable, we have
$$\kappa_t(i) = P(y_{t+1}, y_{t+2}, \ldots, y_T \mid q_t = i, \lambda)$$
The three backward steps are
$$\kappa_T(i) = 1$$
$$\kappa_t(i) = \sum_{j=1}^{N} P_{ij}\, b_j(y_{t+1})\, \kappa_{t+1}(j), \qquad t = T-1, T-2, \ldots, 1$$
$$P(Y \mid \lambda) = \sum_{i=1}^{N} P(y_1, y_2, \ldots, y_T \mid q_1 = i)\, P(q_1 = i) = \sum_{i=1}^{N} \kappa_1(i)\, b_i(y_1)\, \pi_i$$
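A matching sketch of the backward recursion, under the same Gaussian-emission assumption (illustrative names, unscaled for brevity):

```python
import math

def gaussian_pdf(y, mean, var):
    """Emission density b_i(y) for a Gaussian regression state."""
    return math.exp(-(y - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def backward(y, P, means, variances):
    """Backward variables kappa[t][i] = P(y_{t+1}..y_T | q_t = i, lambda)."""
    N = len(P)
    T = len(y)
    kappa = [[0.0] * N for _ in range(T)]
    for i in range(N):
        kappa[T - 1][i] = 1.0               # termination: kappa_T(i) = 1
    for t in range(T - 2, -1, -1):          # induction for t = T-1, ..., 1
        for i in range(N):
            kappa[t][i] = sum(P[i][j]
                              * gaussian_pdf(y[t + 1], means[t + 1][j], variances[j])
                              * kappa[t + 1][j]
                              for j in range(N))
    return kappa
```

As a sanity check, summing $\pi_i\, b_i(y_1)\, \kappa_1(i)$ over $i$ reproduces the same likelihood $P(Y \mid \lambda)$ as the forward termination step.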
Then, we can calculate $\gamma_t(i)$ and $\eta_t(i,j)$ from $\kappa_t(i)$ and $\theta_t(i)$:
$$\gamma_t(i) = P(q_t = i \mid y_1, y_2, \ldots, y_T, \lambda) = \frac{P(y_1, y_2, \ldots, y_T, q_t = i \mid \lambda)}{P(y_1, y_2, \ldots, y_T \mid \lambda)} = \frac{\theta_t(i)\, \kappa_t(i)}{\sum_{i=1}^{N} \theta_t(i)\, \kappa_t(i)}$$
$$\eta_t(i,j) = P(q_t = i, q_{t+1} = j \mid Y, \lambda) = \frac{P(q_t = i, y_1, \ldots, y_t)\, P_{ij}\, b_j(y_{t+1})\, P(y_{t+2}, \ldots, y_T \mid q_{t+1} = j)}{P(y_1, y_2, \ldots, y_T)} = \frac{\theta_t(i)\, P_{ij}\, b_j(y_{t+1})\, \kappa_{t+1}(j)}{\sum_{i=1}^{N} \sum_{j=1}^{N} \theta_t(i)\, P_{ij}\, b_j(y_{t+1})\, \kappa_{t+1}(j)}$$
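Given the forward and backward variables, both posteriors reduce to direct normalization; a minimal sketch, where `B[t][j]` denotes a precomputed emission matrix $b_j(y_t)$ and the names are illustrative:

```python
def posteriors(theta, kappa, P, B):
    """State posteriors gamma[t][i] = P(q_t = i | Y) and pair posteriors
    eta[t][i][j] = P(q_t = i, q_{t+1} = j | Y), from forward/backward variables."""
    T, N = len(theta), len(theta[0])
    gamma = []
    for t in range(T):
        w = [theta[t][i] * kappa[t][i] for i in range(N)]
        z = sum(w)                                  # normalizer: P(Y | lambda)
        gamma.append([wi / z for wi in w])
    eta = []
    for t in range(T - 1):
        w = [[theta[t][i] * P[i][j] * B[t + 1][j] * kappa[t + 1][j]
              for j in range(N)] for i in range(N)]
        z = sum(sum(row) for row in w)              # normalize over all (i, j) pairs
        eta.append([[wij / z for wij in row] for row in w])
    return gamma, eta
```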
Using Equations (20) and (21), one can estimate the transition probability and initial state probability of the AHMM as follows:
Transition probability:
$$P_{ij} = \frac{\sum_{t=1}^{T-1} \eta_t(i,j)}{\sum_{t=1}^{T-1} \gamma_t(i)}, \qquad i, j \in \{1, \ldots, N\}$$
Initial state probability:
$$\pi_i = \gamma_1(i)$$
The proposed regression models have three unknown parameters, $\alpha_i$, $\beta_i$, and $\sigma_i^2$. In this case, the parameter estimation using the EM algorithm proceeds as follows:
$$\alpha_i^{(k+1)} = \frac{\sum_{t=1}^{T} P(q_t = i \mid y, \lambda^k)\left(y_t - \beta_i^{(k)} X_t\right)}{\sum_{t=1}^{T} P(q_t = i \mid y, \lambda^k)}$$
$$\beta_i^{(k+1)} = \frac{\sum_{t=1}^{T} P(q_t = i \mid y, \lambda^k)\, X_t\left(y_t - \alpha_i^{(k+1)}\right)}{\sum_{t=1}^{T} X_t^2\, P(q_t = i \mid y, \lambda^k)}$$
$$\left(\sigma_i^2\right)^{(k+1)} = \frac{\sum_{t=1}^{T} P(q_t = i \mid y, \lambda^k)\left(y_t - \alpha_i^{(k+1)} - X_t \beta_i^{(k+1)}\right)^2}{\sum_{t=1}^{T} P(q_t = i \mid y, \lambda^k)}$$
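One full M-step can be sketched from these updates, taking the posteriors `gamma` and `eta` as inputs. This is a sketch under the weighted-least-squares form of the regression updates; the function and variable names are illustrative, not the authors' implementation:

```python
def m_step(y, X, gamma, eta, beta_prev):
    """One M-step: re-estimate pi, P, and each state's regression parameters
    (alpha_i, beta_i, sigma_i^2) from the posteriors gamma and eta."""
    T = len(y)
    N = len(gamma[0])
    pi = list(gamma[0])                                     # pi_i = gamma_1(i)
    P = [[sum(eta[t][i][j] for t in range(T - 1))           # P_ij = sum eta / sum gamma
          / sum(gamma[t][i] for t in range(T - 1))
          for j in range(N)] for i in range(N)]
    alpha, beta, sigma2 = [], [], []
    for i in range(N):
        w = [gamma[t][i] for t in range(T)]                 # posterior weights of state i
        alpha_i = sum(w[t] * (y[t] - beta_prev[i] * X[t]) for t in range(T)) / sum(w)
        beta_i = (sum(w[t] * X[t] * (y[t] - alpha_i) for t in range(T))
                  / sum(w[t] * X[t] ** 2 for t in range(T)))
        sigma2_i = (sum(w[t] * (y[t] - alpha_i - X[t] * beta_i) ** 2 for t in range(T))
                    / sum(w))
        alpha.append(alpha_i)
        beta.append(beta_i)
        sigma2.append(sigma2_i)
    return pi, P, alpha, beta, sigma2
```

Alternating this with the forward–backward E-step until the log-likelihood change falls below the convergence criterion reproduces the iteration loop of Figure 4.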
After estimating the parameters using Equation (24), an optimized model is obtained for the AHMM. Based on this optimized model and using the Viterbi algorithm, the most probable hidden state for each observation can be obtained.

4.3. Viterbi Algorithm

The Viterbi algorithm is a dynamic programming technique used to determine the most probable sequence of hidden states based on a given sequence of observations. It is widely used in signal processing and machine learning, especially in the context of Hidden Markov Models. By computing the probabilities of all possible state sequences and selecting the one with the highest probability, the Viterbi algorithm is able to find the most likely explanation for a given observation sequence. The parameters of the algorithm are
$$\delta_t(i) = \max_{q_1, \ldots, q_{t-1}} P\left(q_1, \ldots, q_{t-1},\, q_t = i,\, y_1, y_2, \ldots, y_t\right)$$
The variable $\delta_t(i)$ in Equation (25) is the highest probability over all single state paths ending in state $i$ at time $t$. It can be computed using the following recursive relations:
$$\delta_1(i) = \max_i P(q_1 = i, y_1) = \pi_i\, b_i(y_1), \qquad \psi_1(i) = 0, \qquad i = 1, \ldots, N$$
$$\delta_t(j) = \max_{i} \delta_{t-1}(i)\, P_{ij}\, b_j(y_t), \qquad \psi_t(j) = \arg\max_{i} \delta_{t-1}(i)\, P_{ij}, \qquad t = 2, \ldots, T,\; j = 1, \ldots, N$$
$$p_T^* = \max_{i = 1, \ldots, N} \delta_T(i), \qquad q_T^* = \arg\max_{i = 1, \ldots, N} \delta_T(i)$$
$$q_t^* = \psi_{t+1}\left(q_{t+1}^*\right), \qquad t = T-1, \ldots, 1$$
Using the Viterbi algorithm, the hidden state of each observation can be determined by Equation (28).
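The recursion and backtracking can be sketched in log space, which avoids the underflow of multiplying many small probabilities (names are illustrative; `log_B[t][i]` is a precomputed $\log b_i(y_t)$ for the state-$i$ regression emission):

```python
import math

def viterbi(log_pi, log_P, log_B):
    """Most probable hidden-state path for an observation sequence.

    log_pi[i] = log pi_i, log_P[i][j] = log P_ij, log_B[t][i] = log b_i(y_t).
    """
    T, N = len(log_B), len(log_pi)
    delta = [[0.0] * N for _ in range(T)]
    psi = [[0] * N for _ in range(T)]
    for i in range(N):                                   # initialization
        delta[0][i] = log_pi[i] + log_B[0][i]
    for t in range(1, T):                                # recursion
        for j in range(N):
            best_i = max(range(N), key=lambda i: delta[t - 1][i] + log_P[i][j])
            psi[t][j] = best_i
            delta[t][j] = delta[t - 1][best_i] + log_P[best_i][j] + log_B[t][j]
    q = [max(range(N), key=lambda i: delta[T - 1][i])]   # q*_T = argmax_i delta_T(i)
    for t in range(T - 2, -1, -1):                       # backtrack: q*_t = psi_{t+1}(q*_{t+1})
        q.insert(0, psi[t + 1][q[0]])
    return q
```

For example, with two equally likely states, a uniform transition matrix (all log probabilities `math.log(0.5)`), and emissions that strongly favor state 1 for the first half of the series and state 2 for the second half, the decoded path switches states exactly once.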
By tracking changes in the hidden states, this algorithm aids in precisely locating pipeline leaks. Rather than maximizing the expected number of individually correct states, the Viterbi algorithm maximizes the probability of the state sequence as a whole. Graphically, the candidate sequences are represented as paths within a fragment of the AHMM lattice. The primary objective of the Viterbi algorithm is to identify the most probable path, which can subsequently be used to accurately determine the leak's location, as shown in Figure 5.

5. Numerical Analysis

In this section, the AHMM is simulated and the estimated model parameters are compared with their true values. The model is represented by Equation (8). To simulate the model, the parameters are fixed and the estimates are then evaluated against them. The chosen parameters are a sample size of T = 50, a number of hidden states I = 2, and a design matrix $X_t = 12t$, $t = 1, \ldots, T$, so that $X_t$ ranges over 12, 24, …, 600. We set α1 = 2000 and choose α2 such that the two lines intersect at the point $X_{13} = 156$. In addition, β1 = −2 and β2 = 0, −1, −2, −3, −4 are the slopes, and the error term $e_t \sim N(0, \sigma_i^2)$ is assumed with variances $\sigma_1^2 = 1$ and $\sigma_2^2 = 2$.
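A minimal sketch of this data-generating setup, using β2 = −1 as one of the slope settings; the transition matrix here is an illustrative assumption, since the section only reports the EM starting values, not the true transition probabilities of the simulator:

```python
import random

def simulate_ahmm(T=50, seed=1):
    """Simulate the two-state switching regression of Section 5: X_t = 12t,
    state 1 has (alpha1, beta1) = (2000, -2), and alpha2 is chosen so that
    the two regression lines intersect at X = 156."""
    rng = random.Random(seed)
    beta = [-2.0, -1.0]                                     # beta2 = -1 (one Table 1 setting)
    alpha = [2000.0, 2000.0 + (beta[0] - beta[1]) * 156.0]  # lines cross at X = 156
    sigma = [1.0, 2.0 ** 0.5]                               # sigma1^2 = 1, sigma2^2 = 2
    P = [[0.9, 0.1], [0.1, 0.9]]                            # assumed sticky transitions
    q = 0
    X, y, states = [], [], []
    for t in range(1, T + 1):
        x = 12.0 * t
        q = 0 if rng.random() < P[q][0] else 1              # sample next hidden state
        X.append(x)
        states.append(q)
        y.append(alpha[q] + beta[q] * x + rng.gauss(0.0, sigma[q]))
    return X, y, states
```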
The initial values of the EM algorithm are π1 = 0.5 and P11 = P22 = 0.5, the maximum iteration number is $max_{iter}$ = 200, and the convergence criterion ε is set to $1 \times 10^{-6}$. The parameters of the AHMM are re-estimated by the EM algorithm, and the results are summarized in Table 1. It can be observed that the estimated parameters are nearly identical to the true values, with very small standard deviations, providing solid support for the method. For instance, in setting number 1, the true values of β1 and β2 are −2 and 0, respectively, and after 200 iterations the final estimates are −1.9999 and 0.0009. The standard deviations of the estimates are 0.0293 and 0.0314, showing that the slope coefficients of the lines have been recovered with high accuracy.
Setting number 2 of Table 1 is taken as a representative example, with the simulation results exhibited in Figure 6 and Figure 7. To show the convergence trend, the EM algorithm runs for 200 iterations. The true values are denoted by the dotted red lines and the estimated parameter values are displayed as blue points. Generally, after the first 20–30 iterations, the estimated values closely approach the true values, demonstrating that the EM algorithm can effectively approximate the parameters of the AHMM.
In Figure 6, the horizontal axis indicates the values of the design matrix, which extend from 12 to 600. The vertical axis indicates the values simulated by the regression model. The green lines indicate the regression models whose parameters have been estimated using the AHMM in each iteration, and the black lines indicate the final estimated models. As can be seen, the estimated hidden values and states correspond to the actual values and completely identify the real states. In Figure 7, the convergence trend of the parameters $\beta_i$, $\alpha_i$, $\sigma_i^2$, $P_{ii}$, $i = 1, 2$, under the iterative EM algorithm is shown. These parameters indicate the intercept, slope, and standard deviation of the observations and the hidden-state transition matrix between the two regression models. As depicted, the true values differ from the initial values; with the iterations of the EM algorithm, however, the estimated values closely match the true values after 30 iterations, and the estimates remain almost constant thereafter.

5.1. Practical Results of the Model

Pipeline leak detection systems are usually evaluated in terms of speed of detection, ease of implementation, affordability, ability to detect faults, and determination of deterioration conditions. In this paper, efforts have been made to satisfy these conditions with the practical implementation of the system.
The pipeline in this study is part of the Iranian Oil Export Corridor, which connects two major pump stations. The pipeline is 87 km long and has a diameter of 42 inches. The inlet flow to the pipeline is 550 kg/s, and the inlet pressure is 18 bar. The pipeline profile is shown in Figure 8. Next, different leakage scenarios were applied and the pressure and flow characteristics of different points were extracted from the OLGA software. To evaluate the validity of the simulation results, the simulated flow and pressure were matched against real data at the important checkpoints.
These scenarios include leaks with (0, 0.1, …, 2) inch sizes, assumed at (5, 10, 20, …, 80) km distances from the origin. With 20 different leak sizes at nine different locations, 180 different scenarios were created. The flow and pressure information for each scenario was observed at N = 78 locations. For each scenario, a training sample of size 0.6N and a test sample of size 0.4N were chosen.
For each scenario, the model is fitted using the training samples and its performance is evaluated on the test samples.
The training set is used to estimate the AHMM parameters: a change in hidden state identifies the leak location, and the change in the slope of the fitted lines indicates the size of the leak.
The details of the training stage are specific to the classifier type. The results were obtained using four well-known classifiers: the support vector machine (SVM), k-nearest neighbor (k-NN), Naive Bayes, and linear logistic regression. To measure the accuracy of the model in leak detection, the F1 score was used, which is examined in the following subsection. To extract the leak size and distance information from the test sample, the training and optimal-state-sequence problems of the AHMM were solved. The Root Mean Squared Error (RMSE) was used to measure the accuracy of the leakage estimation, which is also examined below.

5.2. Measures of Fault Detection and Performance

The main goal of leak detection is to trigger an alarm when anomalies are detected in the data. Leak detection can be framed as a binary classification problem, where the system distinguishes between normal and abnormal conditions. Alarm triggering is typically associated with the categorization of events into 'negative' and 'positive' classes, which include true negative (TN), false negative (FN), true positive (TP), and false positive (FP) outcomes. $Pr$ and $R$, the precision and recall rates, respectively, are defined as follows:
$$Pr = \frac{\text{number of TP}}{\text{number of TP} + \text{number of FP}}$$
$$R = \frac{\text{number of TP}}{\text{number of TP} + \text{number of FN}}$$
Additionally, the F1 score is another useful measure defined as
$$F_1 = \frac{2\, Pr \cdot R}{Pr + R}$$
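These three measures follow directly from binary labels; a minimal sketch (1 = leak, 0 = no leak; the function name is illustrative):

```python
def f1_score(actual, predicted):
    """Precision, recall, and F1 from binary leak labels."""
    tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
    fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
    fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
    pr = tp / (tp + fp)          # precision
    r = tp / (tp + fn)           # recall
    return pr, r, 2 * pr * r / (pr + r)
```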
These detection algorithms have been applied to the data and their findings have been compared using these measures [25].
For gauging the reconstruction error at each size, the RMSE is utilized, given by
$$RMSE = \sqrt{\frac{1}{T} \sum_{i=1}^{T} \left(y_i - \hat{y}_i\right)^2}$$
where the AHMM components are used to estimate the reconstructed leakage size in the test samples.
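The reconstruction error above is a one-line computation; a small sketch:

```python
import math

def rmse(actual, estimated):
    """Root Mean Squared Error between true and reconstructed leak sizes."""
    T = len(actual)
    return math.sqrt(sum((a, e) == (a, e) and (a - e) ** 2 for a, e in zip(actual, estimated)) / T)
```

Equivalently, `rmse` is the square root of the mean squared residual between each true value and its AHMM-based estimate.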

5.3. Numeric Results

As shown in Figure 9, the pressure/flow transmitters are installed along the pipeline at specified distances from the source. At each point, a Remote Transfer Unit (RTU) receives information from the local equipment and transmits it to the Master Terminal Unit (MTU).
The pressure and flow information can be used separately to identify the leak size and location. The leak detection process based on the data is as follows:
(i) Simulated leak pressure data with (0, 0.1, …, 2) inch sizes located at (5, 10, 20, …, 80) km distances from the source were used as inputs to the OLGA software.
(ii) For each scenario, pressure data were observed at N = 78 locations.
(iii) Each set of observed data was randomly split into a test sample and a training sample.
(iv) The AHMM optimal-state-sequence classifiers obtained from the training sample were applied, assigning each of the test data to the leak or no-leak state.
(v) The test samples were classified using the k-NN, Naive Bayes, SVM, and linear logistic regression algorithms.
(vi) After classifying the different scenarios with the presented algorithms, the precision, recall, and F1 score indicators were calculated and the accuracy of the classifiers was compared.
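As an illustration of the classification step, a from-scratch k-nearest-neighbor vote over feature vectors can serve as a stand-in for the library classifiers actually used; the function name, feature layout, and k are illustrative:

```python
def knn_predict(train_X, train_y, x, k=3):
    """Classify a test point by majority vote among its k nearest
    training samples under squared Euclidean distance."""
    order = sorted(range(len(train_X)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(train_X[i], x)))
    votes = [train_y[i] for i in order[:k]]          # labels of the k nearest neighbors
    return max(set(votes), key=votes.count)          # majority label (leak / no leak)
```

The same train/test protocol described above then yields the precision, recall, and F1 values reported for each classifier.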
Moreover, to assess the accuracy of the model in measuring the amount of leakage, the following procedure is followed (its first steps parallel parts (i)–(iv) of the process above):
(i) For the various leak scenarios, the pressure was estimated by the regression models fitted with the EM algorithm.
(ii) For each model, the leak size was determined by solving the optimal-state-sequence problem.
(iii) Using the AHMM, the pressures at the sample points were estimated.
(iv) Given the true pressures at the test sample points, the RMSE was used to assess size estimation with the fitted AHMM model.
(v) The leak size from the selected model was compared with the actual values, and the RMSE of the method was calculated.
The above steps were followed to model the flows. The accuracy of identifying the amount and location of leakage for flow data was obtained using a similar algorithm. The steps of the AHMM-based leakage detection method are depicted in Figure 10.
As depicted in Figure 10, the proposed method in this paper consists of two main sections. The modeling of the pipeline status is performed in the first part. This part consists of a sample section, data transformation, feature extraction, and model construction. The second part introduces a decision method that employs a distance indicator as a classifier switch. This method allows for the selection of an appropriate detection model for any unlabeled samples.
Table 2 and Table 3 exhibit the accuracy of leak-location identification for different leaks. When adopting the pressure data, the highest F1 score, 0.978, belongs to the AHMM.
This model has the best performance among the classification methods used. The K-NN model has an F1 score of 0.944, the SVM model 0.938, and logistic regression and Naive Bayes 0.914 and 0.946, respectively. Evidently, based on the collected data, the best performance is displayed by the AHMM, followed by Naive Bayes, while the worst performance is exhibited by logistic regression.
When using the flow data, the F1 score of the AHMM is again 0.978. The K-NN model has an F1 score of 0.988 and the SVM model 0.984, while the corresponding values for logistic regression and Naive Bayes are 0.989 and 0.952, respectively. For the flow data, therefore, the best performance belongs to logistic regression, followed by K-NN, whereas the worst performance is shown by Naive Bayes.
With the flow data, the AHMM performance is almost identical to its performance with the pressure data; however, the K-NN, SVM, and logistic regression classifiers perform better than the AHMM on these data.
The proposed model can also successfully estimate the pressure and flow themselves. Therefore, to measure this capability, the RMSE is used for the pressure and flow estimates of the AHMM. Figure 11 shows the estimated RMSE values for flow and pressure for different leak sizes (0, 0.1, …, 2) in inches. As displayed, the AHMM estimates the pressure at the test samples, and this estimate is compared with the actual values for the different leak sizes. A similar process is performed to compare the estimated flow with the actual flow using the RMSE.
As observed in Figure 11 and also in Figure 1 and Figure 2, the increase in leak size has a significant impact on the pressure and flow profiles in the pipeline. With the increase in leak size, the slope of the regression lines in the pressure profile increases. This change in the flow profile is reflected as an intercept increase after the leak occurrence. The increase in the slope of the regression lines leads to significant changes in the AHMM parameters, which in turn improves the model’s performance in detecting leaks. Specifically, for leaks larger than 0.3 inches, the AHMM shows much better performance in pressure data. This improvement is less apparent in flow data due to structural differences and smaller changes in the intercept in the model parameters. However, the results from flow data are still satisfactory and can be used for accurately estimating the size of the leaks. Overall, the analysis of pressure and flow data shows that the AHMM is capable of effectively detecting both small and large leaks.
Considering the authors’ approach in practically implementing the results and taking into account the costs of installation, operation, maintenance, and calibration of pressure measurement sensors compared to flow measurement sensors, along with the high accuracy of pressure measurement equipment used in the oil and gas industry, the results are operationally valuable and useful.

6. Conclusions

This paper presented a novel approach, the AHMM (Adaptive Hidden Markov Model), which combined a regression model with a Hidden Markov Model to identify the size and location of leakages in oil pipelines using pressure or flow data. The EM algorithm was employed for parameter estimation with maximum likelihood, and numerical analysis demonstrated that the calculated parameter values closely matched the actual preset values. The results indicated that the proposed AHMM-based method achieved significant improvements in leak detection compared to conventional methods, with improvements ranging from at least 4% to a maximum of 7% in all indicators based on pressure classification.
Moreover, the AHMM-based method demonstrated several advantages, including higher performance in leak detection, the ability to detect small leaks, and accurately locating and determining the size of leaks. Additionally, this method could be easily applied, with minor adjustments, to other pipelines, such as those for water and gas, making it suitable for a broad range of applications. The proposed method also enabled the data-based monitoring of the pipeline status and benefited from boosting and updating the database of training samples to enhance the accuracy of diagnosis and other functions.
Given the complexities in correlating pressure and flow data due to non-linear factors, future research could explore a two-stage approach. In this method, the pressure and flow data would be processed separately to detect leaks, and then the results could be combined into a final model that leverages both data sets. This approach could further improve the accuracy and robustness of leak detection in pipelines.
Overall, the AHMM-based method presented in this paper offered a promising approach for effective leak detection in oil pipelines and held potential for further advancements in the field of pipeline monitoring and maintenance. The model was highly flexible and capable of expansion to multiple linear regression or non-linear models by altering the likelihood function. Additionally, incorporating the Multiple Classifier Fusion Algorithm (MCFA) could further enhance the accuracy and robustness of leak detection. These topics suggest potential areas for further research.

Author Contributions

Conceptualization, O.Z. and M.R.S.; methodology, S.V.N.; software, O.Z.; validation, M.M., M.R.S. and B.M.; formal analysis, M.R.S.; investigation, S.V.N.; resources, M.R.S.; data curation, O.Z.; writing—original draft preparation, O.Z.; writing—review and editing, S.V.N.; visualization, M.R.S.; supervision, S.V.N.; project administration, M.R.S.; funding acquisition, M.R.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

Nomenclature

Symbol Description
Physical Quantities
m Mass
U Velocity
G Mass Source/Sink
g Acceleration Of Gravity
h Height
P Pressure Force
φ Fluid Angle to Gravity
ψ Rate of Mass Transfer
L Existing Phases
E Field Energy
H Enthalpy of the Field
S Source/Sink of Enthalpy
Q Heat Flow in Pipe Wall
T Energy Transfer Between Different Fields
ρ Density
F Friction Forces
F w Wall Friction
Ψ Momentum Contributions Related to Mass Transfer
Mathematical Operations
δt Differentiation in Time
δz Spatial Differentiation
∑ Summation
cosCosine
logLogarithm
maxMaximum
argmaxArgument of the Maximum
δt(mi) Time Differentiation of Mass
δz(mi) Spatial Differentiation of Mass
δt(miUi) Time Differentiation of Momentum
δz(miUi) Spatial Differentiation of Momentum
dm/dt Mass Change Over Time
Statistical Measures
β Unknown Parameter of the Regression Line Slope
α Unknown Parameter Indicating the Intercept
γ Lagrange Multiplier
σ 2 Variance
L() Likelihood Function
Performance Metrics
F1 F1 Score
DR Detection Rate
FAR False Alarm Rate
MAR Missed Alarm Rate
RMSE Root Mean Square Error
Pr Precision
R Recall Rate
TP True Positive
FP False Positive
TN True Negative
FN False Negative
AHMM Parameters
α i Intercept Parameter in AHMM
β i Regression Line Slope Parameter in AHMM
σ i 2 Variance of the Error Term for the i-th Hidden State
α ^ i Estimator of the Parameter α i in AHMM
β ^ i Estimator of the Parameter β i in AHMM
σ ^ i 2 Estimator of the Variance of the Error Term for the i-th Hidden State
Pij Transition Probability from State i to j
πi Initial Probability of State i
λ Parameter Vector in the AHMM
N Number of Hidden States Within the Markov Chain
A Matrix of State Transition Probabilities
B Emission Probability Distribution
y t Observed Symbol at Time t
q t Hidden State at Time t
q t + 1 Hidden State at Time t + 1
S Set of Possible States of AHMM
s i State of the System at the i-th Time Step in the Hidden Markov Model
Pij Probability of Transition from State si to State sj
θ t i Forward Variable
κ t i Backward Variable
Q Set of Hidden States
X Design Matrix
maxiter Maximum Number of Iterations
Abbreviations
RTU Remote Transfer Unit
MTU Master Terminal Unit
SCADA Supervisory Control and Data Acquisition
EM Expectation–Maximization Algorithm

Appendix A. Details of the Derivation Process

To apply the EM algorithm, assuming that $\lambda^k$ is known, the logarithm of the likelihood function for the complete data in Equation (9) can be expressed as follows:
$$Q(\lambda, \lambda^k) = E\left[\log P(Y, q \mid \lambda) \mid Y, \lambda^k\right] = \sum_{q \in Q} \log \pi_{q_1}\, P(q \mid Y, \lambda^k) + \sum_{q \in Q} \left( \sum_{t=1}^{T} \log P_{q_{t-1} q_t} \right) P(q \mid Y, \lambda^k) + \sum_{i=1}^{N} \left( \sum_{t=1}^{T} \log b_i(y_t) \right) P(q_t = i \mid Y, \lambda^k)$$
In the M-step, maximizing Equation (A1) with respect to $\pi_i$ using the Lagrange multiplier $\gamma$ under the constraint $\sum_{i=1}^{N} \pi_i = 1$ yields
$$\frac{1}{\pi_i} P(q_1 = i \mid Y, \lambda^k) - \gamma = 0$$
Rearranging Equation (A2), it becomes
$$\pi_i = \frac{P(q_1 = i \mid Y, \lambda^k)}{\gamma}$$
Because i = 1 N π i = 1 , then
$$\sum_{i=1}^{N} \pi_i = \sum_{i=1}^{N} \frac{P(q_1 = i \mid Y, \lambda^k)}{\gamma} = 1$$
and
$$\gamma = \sum_{i=1}^{N} P(q_1 = i \mid Y, \lambda^k)$$
Thus, the estimator of the initial value π i can be obtained as
$$\hat{\pi}_i = \frac{P(q_1 = i \mid Y, \lambda^k)}{\sum_{i=1}^{N} P(q_1 = i \mid Y, \lambda^k)}$$
For the term $\sum_{j=1}^{N} \sum_{i=1}^{N} \left( \sum_{t=1}^{T} \log P_{ij} \right) P(q_t = i, q_{t+1} = j \mid Y, \lambda^k)$ in Equation (A1), we can again use a Lagrange multiplier $\gamma$ with the constraint $\sum_{j=1}^{N} P_{ij} = 1$ to derive the transition probability $P_{ij}$:
$$\frac{\partial}{\partial P_{ij}} \left[ \sum_{j=1}^{N} \sum_{i=1}^{N} \left( \sum_{t=1}^{T} \log P_{ij} \right) P(q_t = i, q_{t+1} = j \mid Y, \lambda^k) - \gamma \left( \sum_{j=1}^{N} P_{ij} - 1 \right) \right] = 0$$
Carrying out the differentiation in Equation (A3) with respect to the transition probability $P_{ij}$ and setting it to 0, we obtain
$$\frac{1}{P_{ij}} \sum_{t=1}^{T} P(q_t = i, q_{t+1} = j \mid \lambda^k, Y) - \gamma = 0$$
Rearranging, this can be written as
$$P_{ij} = \frac{\sum_{t=1}^{T} P(q_t = i, q_{t+1} = j \mid \lambda^k, Y)}{\gamma}$$
Because $\sum_{j=1}^{N} P_{ij} = 1$, we have
$$\sum_{j=1}^{N} P_{ij} = \sum_{j=1}^{N} \frac{\sum_{t=1}^{T} P(q_t = i, q_{t+1} = j \mid \lambda^k, Y)}{\gamma} = 1$$
Thus, the estimator of the transition probability P i j can be obtained as
$$\hat{P}_{ij} = \frac{\sum_{t=1}^{T} P(q_t = i, q_{t+1} = j \mid \lambda^k, Y)}{\sum_{t=1}^{T} P(q_{t+1} = j \mid \lambda^k, Y)}$$

References

  1. Duru, C.; Ani, C. A statistical analysis on the leak detection performance of underground and overground pipelines with wireless sensor networks through the maximum likelihood ratio test. Sadhana 2017, 42, 1889–1899. [Google Scholar] [CrossRef]
  2. PHMSA. Available online: https://www.phmsa.dot.gov/data-and-statistics/pipeline/pipeline-incident-20-year-trends (accessed on 18 April 2024).
  3. Adegboye, M.A.; Fung, W.-K.; Karnik, A. Recent Advances in Pipeline Monitoring and Oil Leakage Detection Technologies: Principles and Approaches. Sensors 2019, 19, 2548. [Google Scholar] [CrossRef] [PubMed]
  4. Henrie, M.; Carpenter, P.; Nicholas, R.E. Pipeline Leak Detection Handbook; Elsevier: Amsterdam, The Netherlands, 2016. [Google Scholar] [CrossRef]
  5. Wang, C.; Zhang, Y.; Song, J.; Liu, Q.; Dong, H. A novel optimized SVM algorithm based on PSO with saturation and mixed time-delays for classification of oil pipeline leak detection. Syst. Sci. Control Eng. 2019, 7, 75–88. [Google Scholar] [CrossRef]
  6. Rajasekaran, U.; Kothandaraman, M. A Survey and Study of Signal and Data-Driven Approaches for Pipeline Leak Detection and Localization. J. Pipeline Syst. Eng. Pract. 2024, 15, 03124001. [Google Scholar] [CrossRef]
  7. Zadehbagheri, O.; Salehizadeh, M.R.; Naghavi, V.; Moatari, M. Design of Pipeline leak Detection System using Neural Network on Scada Platform of National Iranian Oil Company. Pet. Res. 2020, 31, 39–50. [Google Scholar]
  8. Valizadeh, S.; Moshiri, B.; Salahshoor, K.; Hakim, A.H.; Vasant, P.; Barsoum, N. Multiphase Pipeline Leak Detection Based on Fuzzy Classification. AIP Conf. Proc. 2009, 1159, 72–80. [Google Scholar] [CrossRef]
  9. Navarro, A.; Begovich, O.; Sánchez, J.; Besancon, G. Real-Time Leak Isolation Based on State Estimation with Fitting Loss Coefficient Calibration in a Plastic Pipeline. Asian J. Control 2017, 19, 255–265. [Google Scholar] [CrossRef]
  10. Song, J.; He, X. Model-based fault diagnosis of networked systems: A survey. Asian J. Control 2022, 24, 526–536. [Google Scholar] [CrossRef]
  11. Korlapati, N.V.S.; Khan, F.; Noor, Q.; Mirza, S.; Vaddiraju, S. Review and analysis of pipeline leak detection methods. J. Pipeline Sci. Eng. 2022, 2, 100074. [Google Scholar] [CrossRef]
  12. Qu, Z.; Feng, H.; Zeng, Z.; Zhuge, J.; Jin, S. A SVM-based pipeline leakage detection and pre-warning system. Measurement 2010, 43, 513–519. [Google Scholar] [CrossRef]
  13. Valizadeh, S.; Moshiri, B.; Salahshoor, K. Leak Detection in Transportation Pipelines Using Feature Extraction and KNN Classification. In Pipelines 2009; American Society of Civil Engineers: Reston, VA, USA, 2009; pp. 580–589. [Google Scholar] [CrossRef]
  14. Cascianelli, S.; Costante, G.; Crocetti, F.; Ricci, E.; Valigi, P.; Fravolini, M.L. Data-based design of robust fault detection and isolation residuals via LASSO optimization and Bayesian filtering. Asian J. Control 2021, 23, 57–71. [Google Scholar] [CrossRef]
  15. Habibi, H.; Howard, I.; Habibi, R. Bayesian Fault Probability Estimation: Application in Wind Turbine Drivetrain Sensor Fault Detection. Asian J. Control 2020, 22, 624–647. [Google Scholar] [CrossRef]
  16. Habibi, H.; Howard, I.; Habibi, R. Bayesian Sensor Fault Detection in a Markov Jump System. Asian J. Control 2017, 19, 1465–1481. [Google Scholar] [CrossRef]
  17. Siddique, M.F.; Ahmad, Z.; Ullah, N.; Ullah, S.; Kim, J.-M. Pipeline Leak Detection: A Comprehensive Deep Learning Model Using CWT Image Analysis and an Optimized DBN-GA-LSSVM Framework. Sensors 2024, 24, 4009. [Google Scholar] [CrossRef]
  18. Yang, D.; Hai, X.; Ren, Y.; Cui, J.; Li, K.; Zeng, S. A hybrid fault prediction method for control systems based on extended state observer and hidden Markov model. Asian J. Control 2023, 25, 418–432. [Google Scholar] [CrossRef]
  19. Jia, T.; Song, J.; Niu, Y.; Chen, B.; Cao, Z. Optimized hybrid design with stabilizing transition probability for stochastic Markovian jump systems under hidden Markov mode detector. Asian J. Control 2022, 24, 2787–2795. [Google Scholar] [CrossRef]
  20. Wang, A.; Fei, M.; Song, Y. Optimal fault-tolerant control for Markov jump power systems with asynchronous actuator faults. Asian J. Control 2023, 25, 4466–4480. [Google Scholar] [CrossRef]
  21. Lu, D.; Zeng, G.; Liu, J. Non-Fragile Simultaneous Actuator and Sensor Fault-Tolerant Control Design for Markovian Jump Systems Based on Adaptive Observer. Asian J. Control 2018, 20, 125–134. [Google Scholar] [CrossRef]
  22. Saize, S.; Yang, X. On the definitions of hidden Markov models. Appl. Math. Model. 2024, 125, 617–629. [Google Scholar] [CrossRef]
  23. Westhead, D.R.; Vijayabaskar, M.S. (Eds.) Hidden Markov Models; Springer: New York, NY, USA, 2017; Volume 1552. [Google Scholar] [CrossRef]
  24. Che, J.; Zhu, Y.; Zhou, D. Hidden Markov model-based robust H∞ fault estimation for Markov switching systems with application to a single-link robot arm. Asian J. Control 2021, 23, 2227–2238. [Google Scholar] [CrossRef]
  25. Ozkan, H.; Ozkan, F.; Kozat, S.S. Online Anomaly Detection under Markov Statistics with Controllable Type-I Error. IEEE Trans. Signal Process. 2016, 64, 1435–1445. [Google Scholar] [CrossRef]
  26. Ai, C.; Sun, X.; Zhao, H.; Ma, R.; Dong, X. Pipeline damage and leak sound recognition based on HMM. In Proceedings of the 2008 7th World Congress on Intelligent Control and Automation, Chongqing, China, 25–27 June 2008; pp. 1940–1944. [Google Scholar] [CrossRef]
  27. Qiu, J.; Liang, W.; Zhang, L.; Yu, X.; Zhang, M. The early-warning model of equipment chain in gas pipeline based on DNN-HMM. J. Nat. Gas Sci. Eng. 2015, 27, 1710–1722. [Google Scholar] [CrossRef]
  28. Ai, C.; Zhao, H.; Ma, R.; Dong, X. Pipeline Damage and Leak Detection Based on Sound Spectrum LPCC and HMM. In Proceedings of the Sixth International Conference on Intelligent Systems Design and Applications, Jinan, China, 16–18 October 2006; pp. 829–833. [Google Scholar] [CrossRef]
  29. Fagiani, M.; Squartini, S.; Gabrielli, L.; Severini, M.; Piazza, F. A Statistical Framework for Automatic Leakage Detection in Smart Water and Gas Grids. Energies 2016, 9, 665. [Google Scholar] [CrossRef]
  30. Liu, J.; Zang, D.; Liu, C.; Ma, Y.; Fu, M. A leak detection method for oil pipeline based on markov feature and two-stage decision scheme. Measurement 2019, 138, 433–445. [Google Scholar] [CrossRef]
  31. Zhang, M.; Chen, X.; Li, W. Hidden Markov models for pipeline damage detection using piezoelectric transducers. J. Civ. Struct. Health Monit. 2021, 11, 745–755. [Google Scholar] [CrossRef]
  32. Available online: https://www.slb.com/products-and-services/delivering-digital-at-scale/software/olga/olga-dynamic-multiphase-flow-simulator (accessed on 14 September 2024).
  33. Ramezani, A.; Moshiri, B.; Abdulhai, B.; Kian, A. Estimation of free flow speed and critical density in a segmented freeway using missing data and Monte Carlo-based expectation maximisation algorithm. IET Control Theory Appl. 2011, 5, 123–130. [Google Scholar] [CrossRef]
Figure 1. Flow changes after leak occurrence.
Figure 2. Pressure changes after leak occurrence.
Figure 3. Graphical representation of the dependence structure of a Hidden Markov Model, where yt is the observable process and qt is the hidden chain.
Figure 4. EM algorithm for AHMM.
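The E-step/M-step alternation that Figure 4 depicts for the AHMM can be illustrated with a deliberately simplified sketch: EM for a two-component Gaussian mixture. The full Baum–Welch procedure adds transition probabilities via the forward–backward recursions, but the structure of the update is the same. This is an illustrative sketch under our own assumptions, not the paper's AHMM implementation; all names are ours.

```python
import math
import random

def em_two_gaussians(data, iters=50):
    """Simplified EM: fit a two-component Gaussian mixture to 1-D data.

    Illustrates the E-step (posterior responsibilities) and the M-step
    (responsibility-weighted parameter updates) that Baum-Welch
    generalizes to hidden Markov chains.
    """
    mu = [min(data), max(data)]  # crude but effective initialization
    sigma = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            p = [w[k] / (sigma[k] * math.sqrt(2 * math.pi))
                 * math.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2)
                 for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: re-estimate weights, means, and standard deviations
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
            sigma[k] = math.sqrt(max(var, 1e-6))
    return mu, sigma, w
```

Each iteration provably does not decrease the data likelihood, which is why the parameter traces in Figure 7 flatten out as the algorithm converges.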
Figure 5. Graphical representation of all probable pathways in an AHMM with states S1 and S2. The objective of the Viterbi algorithm is to identify the most probable sequence, depicted by the solid line, thereby solving the optimal state sequence problem.
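The Viterbi recursion sketched in Figure 5 can be written compactly in a few lines. The sketch below assumes discrete observation symbols for readability (the paper's model works with continuous pressure and flow observations), and all state, probability, and symbol names are illustrative, not taken from the paper.

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most probable hidden-state sequence for `obs`.

    Log-probabilities are used to avoid numerical underflow on
    long observation sequences.
    """
    # V[t][s] = log-probability of the best path ending in state s at time t
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            # choose the predecessor that maximizes the path probability
            prev, logp = max(
                ((p, V[t - 1][p] + math.log(trans_p[p][s])) for p in states),
                key=lambda x: x[1],
            )
            V[t][s] = logp + math.log(emit_p[s][obs[t]])
            back[t][s] = prev
    # backtrack from the best final state
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))

# Example: two regimes (S1 = normal, S2 = leak) with binarized sensor readings
states = ["S1", "S2"]
start = {"S1": 0.9, "S2": 0.1}
trans = {"S1": {"S1": 0.9, "S2": 0.1}, "S2": {"S1": 0.1, "S2": 0.9}}
emit = {"S1": {"normal": 0.8, "drop": 0.2}, "S2": {"normal": 0.2, "drop": 0.8}}
path = viterbi(["normal", "normal", "drop", "drop"], states, start, trans, emit)
# → ['S1', 'S1', 'S2', 'S2']
```

The dynamic program keeps only the best path into each state at each time step, so its cost is linear in the sequence length rather than exponential in the number of pathways drawn in Figure 5.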
Figure 6. Fitting process of linear models using the AHMM algorithm.
Figure 7. Convergence process of parameter estimation. (a) α1, (b) α2, (c) β1, (d) β2, (e) σ1, (f) σ2, (g) P11, (h) P22.
Figure 8. Diagram of the simulated pipeline.
Figure 9. A schematic of the designed leak detection system with k RTUs and diagram of the simulated pipeline profile.
Figure 10. Steps of the AHMM-based leakage detection method.
Figure 11. (a) RMSE for different leakage sizes using the AHMM pressure data; (b) RMSE for different leakage sizes using the AHMM flow data.
Table 1. Estimation of parameters of the AHMM for the simulated data.

| Setting | Quantity | π1 | p11 | p22 | α1 | α2 | β1 | β2 | σ1² | σ2² |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | True value | 1 | 0.9 | 1 | 2000 | 1688 | −2 | 0 | 1 | 2 |
| 1 | Estimate | 1 | 0.92 | 1 | 1999.922 | 1687.721 | −1.9999 | 0.0009 | 0.622 | 2.13 |
| 1 | Std. dev. | 0.0001 | 0.0011 | 0 | 1.8559 | 0.9604 | 0.0293 | 0.0314 | 0.938 | 0.1122 |
| 2 | True value | 1 | 0.9 | 1 | 2000 | 1844 | −2 | −1 | 1 | 2 |
| 2 | Estimate | 1 | 0.9185 | 1 | 1999.922 | 1843.721 | −1.9999 | −0.999 | 0.6225 | 2.1322 |
| 2 | Std. dev. | 0.0001 | 0.00022 | 0.0018 | 0.1806 | 0.0826 | 0.00183 | 0.00019 | 0.0294 | 0.0005 |
| 3 | True value | 1 | 0.9 | 1 | 2000 | 2000 | −2 | −2 | 1 | 2 |
| 3 | Estimate | 1 | 0.8719 | 0.993 | 2000 | 1999.76 | −2.002 | −1.999 | 0.4349 | 2.0732 |
| 3 | Std. dev. | 0.0002 | 0.168 | 0.3463 | 0.0445 | 0.2081 | 0.0009 | 0.0009 | 0.3365 | 0.2789 |
| 4 | True value | 1 | 0.9 | 1 | 2000 | 2156 | −2 | −3 | 1 | 2 |
| 4 | Estimate | 1 | 0.9185 | 1 | 1999.922 | 2155.72 | −1.9999 | −2.999 | 0.6225 | 2.132 |
| 4 | Std. dev. | 0.0001 | 0 | 0.0006 | 0.417 | 1.3443 | 0.0041 | 0.0032 | 0.1169 | 0.1715 |
| 5 | True value | 1 | 0.9 | 1 | 2000 | 2312 | −1 | −4 | 1 | 2 |
| 5 | Estimate | 1 | 0.91859 | 1 | 1999.922 | 2311.72 | −1.999 | −3.999 | 0.6225 | 2.132 |
| 5 | Std. dev. | 0 | 0.0007 | 0 | 0.5383 | 1.903 | 0.0052 | 0.0045 | 0.1647 | 0.2981 |
Table 2. Classification based on pressure.

| Model | Precision | Recall | F1 Score |
|---|---|---|---|
| AHMM | 0.974783 | 0.980324 | 0.977545 |
| K-NN | 0.943281 | 0.945343 | 0.944311 |
| SVM | 0.934868 | 0.941408 | 0.938126 |
| Logistic Regression | 0.899915 | 0.927853 | 0.913671 |
| Naive Bayes | 0.945415 | 0.946655 | 0.946035 |
Table 3. Classification based on flow.

| Model | Precision | Recall | F1 Score |
|---|---|---|---|
| AHMM | 0.982782 | 0.973328 | 0.978032 |
| K-NN | 0.981434 | 0.993878 | 0.987617 |
| SVM | 0.974711 | 0.994316 | 0.984416 |
| Logistic Regression | 0.982736 | 0.995628 | 0.98914 |
| Naive Bayes | 0.954286 | 0.488414 | 0.951776 |
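The precision, recall, and F1 scores reported in Tables 2 and 3 follow the standard definitions, with F1 the harmonic mean of precision and recall. A minimal sketch for a single positive class (names and the tiny example are illustrative, not the paper's evaluation code):

```python
def precision_recall_f1(y_true, y_pred, positive=1):
    """Compute precision, recall, and F1 for one class of a label sequence."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

Because F1 is the harmonic mean, it can never exceed the larger of precision and recall, which makes the tabulated triples easy to sanity-check.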
