Article

Change Point Detection for Diversely Distributed Stochastic Processes Using a Probabilistic Method

by
Muhammad Rizwan Khan
1 and
Biswajit Sarkar
2,*
1
Department of Industrial Engineering, Hanyang University, 222 Wangsimni-Ro, Seoul 133-791, Korea
2
Department of Industrial & Management Engineering, Hanyang University, Ansan, Gyeonggi-do 15588, Korea
*
Author to whom correspondence should be addressed.
Inventions 2019, 4(3), 42; https://doi.org/10.3390/inventions4030042
Submission received: 11 June 2019 / Revised: 1 August 2019 / Accepted: 2 August 2019 / Published: 8 August 2019

Abstract: Unpredicted deviations in time series data are called change points. These unexpected changes indicate transitions between states. Change point detection is a valuable technique in modeling to estimate unanticipated property changes underlying time series data. It can be applied in different areas like climate change detection, human activity analysis, medical condition monitoring and speech and image analyses. Supervised and unsupervised techniques are equally used to identify changes in time series. Even though change point detection algorithms have improved considerably in recent years, several unresolved challenges remain. Previous work on change point detection was limited to specific areas; therefore, more studies are required to investigate appropriate change point detection techniques applicable to any data distribution to assess the numerical productivity of any stochastic process. This research is primarily focused on the formulation of an innovative methodology for change point detection of diversely distributed stochastic processes using a probabilistic method with variable data structures. Bayesian inference and a likelihood ratio test are used to detect a change point at an unknown time ($k$). The likelihood of $k$ is determined and used in the likelihood ratio test. Parameter changes are evaluated by critically analyzing the parameter expectations before and after a change point. Real-time data of particulate matter concentrations at different locations were used for numerical verification because the locations differ in environment, population density and transportation vehicle density. Therefore, this study provides an understanding of how well the recommended model could perform for different data structures.

1. Introduction

Unexpected deviations in time series data are called change points. These sudden changes indicate transitions between states. Change point detection is worthwhile in modeling to estimate unexpected property changes underlying time series data. It is applicable in different areas like climate change detection, human activity analysis, medical condition monitoring and speech and image analyses. Supervised and unsupervised techniques are equally used to identify changes in time series. Even though change point detection algorithms have improved considerably in recent years, several unresolved challenges remain [1].
Several techniques have been recommended for the identification of undocumented change points in climate data sequences [2]. A change-point analysis technique has been described and its potential applications have been highlighted through a number of examples [3]. The kernel-based change point (KCP) detection procedure can only be used to detect a particular type of change; therefore, based on the Gaussian KCP method, a new nonparametric approach, called KCP-corr, was proposed for predicting correlation changes. KCP-corr performs better than the CUSUM technique, which specifically aims to identify correlation changes [4]. A generalized likelihood ratio test (GLRT) was used for detecting changes in the mean of a one-dimensional Gaussian process [5]. A new method was recommended for change point detection in a Brownian motion with a time-dependent diffusion coefficient in fractional Brownian motion [6]. A production inventory model with probabilistic deterioration was developed for two-echelon supply chain management [7]. A two-stage change point detection technique for machine monitoring was suggested [8]. A Bayesian approach was used for change point detection of polluted days [9].
A statistical change point algorithm was proposed in which a direct density-ratio estimation technique was used for nonparametric deviation measurement between time series samples through relative Pearson divergence for variable data structures [10]. An innovative statistical approach for online change point detection was recommended in which the estimation method could also be updated online [11]. An economic production quantity model with stochastic demand was developed for an imperfect production system [12]. For a change point test in a series, the Karhunen-Loeve expansion of the limit Gaussian processes was recommended [13]. The test for sudden changes in random fields was presented as a Cramer-von Mises type test and was dependent on Hilbert space theory [14]. An integrated inventory model was developed to determine the optimal lot size and production uptime while considering stochastic machine breakdown and multiple shipments for a single buyer and single vendor [15]. A new methodology was introduced for the identification of structural changes in linear quantile regression models because the conventional mean regression technique is not appropriate for the identification of such structural changes at the tails [16]. A supply chain model with stochastic lead time, trade-credit financing and transportation discounts was developed to create a coordination mechanism among transportation discounts, trade-credit financing, the number of shipments, product quality improvement and reduced setup cost such that the total cost of the whole system is reduced when the supplier offers a trade-credit period to the buyer [17]. The fuzzy classification maximum likelihood change point (FCML-CP) algorithm was suggested for the detection of simultaneous multiple change points in the mean and variance of a process, and it reduces analysis time [18]. For sequential data series, a Bayesian change point algorithm was presented, but it imposed unreliable restrictions on the number of change points and their locations [19].
The Bayesian change point detection (BCPD) technique suggested in this research paper can overcome the challenges in identifying the location and number of change points because of its probabilistic basis. This methodology is based on posterior distributions and a likelihood ratio test to deduce whether a change point has occurred. It can also update itself linearly as new data points are observed. Posterior distribution monitoring is the best way to identify the presence of a new change point in the observed data points. Simulation studies illustrate that this algorithm allows rapid detection of existing change points and also has a low rate of false detection [19]. Previous work on change point detection was limited to specific areas; therefore, more studies are required to investigate appropriate change point detection techniques that are applicable to any data distribution to assess the numerical productivity of any stochastic process. This research is primarily focused on the formulation of an innovative methodology for change point detection of diversely distributed stochastic processes by a probabilistic method with variable data structures. The parameter expectations before and after a change point must be critically analyzed so that the parameter change can be evaluated. Bayesian inference and the likelihood ratio test are used to detect a change point at an unknown time ($k$).
Real-time data of particulate matter concentrations at different sites were used to validate the proposed approach. An investigation of particulate matter (PM) pollution status was conducted to evaluate the long-term trends in Seoul, which showed a decreasing trend during the study period (2004–2013) [20]. The long-term behavior of particulate matter at urban roadside and background locations in Seoul, Korea, was analyzed, and the mean PM values exhibited a slight decline over the decade [21]. A probabilistic method was used to comprehensively analyze the change point ($k$), the parameters before the change point ($\mu_1, \mu_2, \ldots, \mu_n$) and the parameters after the change point ($\eta_1, \eta_2, \ldots, \eta_n$). Hence, simulation models were built on diverse data structures of different areas to consider different features, that is, environment, population densities and transportation vehicle densities. Therefore, this study delivers a vision of how well the suggested model could perform in different areas. The paper is organized as follows: Section 2 discusses the literature regarding Bayesian change point detection, while Section 3 refers to the problem definition, explains the assumptions and notation and demonstrates the formulation of the mathematical models. Section 4 and Section 5 depict the real-world application of the model and the results to validate the practical application of the proposed models. Section 6 discusses the results for each area; finally, Section 7 presents the conclusions of this study.

2. Related Literature

A basic literature review for Bayesian change point methodology was performed. An approach was proposed to detect changes in a non-homogeneous Poisson process; it was used to detect whether a change in event rate had occurred, the time of the change and the event rate before and after the change [22]. A novel Bayesian approach was suggested to detect abnormal regions in multiple time series. A model was built in which the posterior distribution was used for independent sampling to draw Bayesian inferences. This approach was evaluated on simulated CNVs (copy number variations) and real data to confirm that the methodology is more accurate than other methods [23]. An economic manufacturing quantity model with probabilistic deterioration was developed for a production system [24]. A comparison of the Expectation Maximization (EM) method and the Bayesian method for change point detection of multivariate data was performed. The Bayesian technique involves less computational work, while EM reveals better performance for unsuitable priors and minor changes [25]. A min-max distribution-free continuous-review model was presented with a service level constraint and variable lead time [26]. A Bayesian change point detection model was recommended to identify flooding attacks in VoIP systems in which the Session Initiation Protocol (SIP) is used as a signaling mechanism [27].
To acquire accurate and reliable change detection maps for land cover monitoring, a new post-classification methodology with iterative slow feature analysis (ISFA) and Bayesian soft fusion was proposed. This methodology includes three steps: first, defining the class probability of the images; then, producing a continuous change probability map; and, last, obtaining the posterior probabilities for the class arrangements of coupled pixels [28]. An economic production quantity model was developed with a random defective rate, rework process and backorders for a single-stage production system [29]. A Bayesian change point detection methodology was developed to analyze biomarker time series data in women for earlier diagnosis of ovarian cancer [30]. A method for the approximation of digital planar curves with line segments and circular arcs using genetic algorithms was proposed [31]. The Generalized Extreme Value (GEV) fused lasso penalty function was used to detect change points for annual maximum precipitation (AMP) in South Korea. A comparison between the GEV fused lasso and Bayesian change point analysis was conducted, which indicated that the GEV fused lasso method should be used when water resource structures are hydrologically designed [32]. Mathematical models were developed for work-in-process-based inventory by incorporating the effect of a random defect rate on lot size and the expected total cost function [33]. An innovative Bayesian approach was suggested to detect change points in extreme precipitation data, with a model based on a generalized Pareto distribution. Four different situations were used for the analysis: first with no change, second with a shape change, third with a scale change and fourth with both shape and scale changes [34]. See Table 1 for a comparison of the studies of different authors and for the difference between previous works and this work.

3. Methodological Part

3.1. Problem Definition

This research is primarily focused on the formulation of a unique methodology for change point detection of diverse data structures following any kind of distribution at any unknown time ($k$) in any area across the globe. The existing procedures for change point detection are either very complicated or not applicable to stochastic processes and random time series. Therefore, a more precise, well-defined and easily applicable approach for change point detection of stochastic processes and random time series has been proposed. Second, an analysis of these changes needs to be conducted to determine whether or not these change points are favorable. For this, a comparison of distribution parameters before and after a change point has to be performed to evaluate the change in question. Third, the alteration in parameter expectations must be measured to define new policies for further improvements in the current states. For the anticipated goals, the probabilistic method is used to determine posterior probabilities of the data, and the change point in the Bayesian model is identified through a likelihood ratio test. The suggested model is numerically validated using real-time data of particulate matter concentrations and particulate matter hazards in different areas of Seoul, South Korea, observed from January 2004 to December 2013. The change point ($k$) for particulate matter ($PM_{2.5}$ and $PM_{10}$) daily concentrations, the parameters before the change point ($\mu_1, \mu_2, \ldots, \mu_n$) and the parameters after the change point ($\eta_1, \eta_2, \ldots, \eta_n$) are comprehensively analyzed. The central idea for using different regions is their considerably different features, that is, environment, population densities and transportation vehicle densities. Hence, this study can also be the basis for implementation of the recommended model in different areas. Later, this probabilistic method is verified by the CUSUM approach, and the results of the CUSUM approach are compared with those of the probabilistic method.
  • The probabilistic method is based on probability distributions, which can be fitted to the data distribution. In this case, the data distribution is defined first and then the proposed method is applied to obtain the results. This methodology is better suited to random data structures and time series.
  • The CUSUM approach is directly applicable to the raw data, which is good for deterministic data structures.

3.2. Notations

The list of notation used to represent the random variables and parameters is as follows.
Indices
$i$: sequence data points in time series, where $i \in \{1, 2, \ldots, n\}$
$h$: replication number (multiple simulations are performed to obtain the converged value), $h \in \{1, 2, \ldots, m\}$
$j$: position in the replication or chain ($V_{hj}$ is the $j$th observation from the $h$th replication), $j \in \{1, 2, \ldots, n\}$
Random variables
$Y$: random process or stochastic process
$y$: variable ($Y$) at any given point
$y_i$: variable ($Y$) at point $i$, where $i \in \{1, 2, \ldots, n\}$
Parameters
$k$: change point in a random process
$\mu_1$: first parameter before change point $k$ associated with the probability distribution function of random variable $Y$
$\eta_1$: first parameter after change point $k$ associated with the probability distribution function of random variable $Y$
$\mu_2$: second parameter before change point $k$ associated with the probability distribution function of random variable $Y$
$\eta_2$: second parameter after change point $k$ associated with the probability distribution function of random variable $Y$
$\mu_n$: $n$th parameter before change point $k$ associated with the probability distribution function of random variable $Y$
$\eta_n$: $n$th parameter after change point $k$ associated with the probability distribution function of random variable $Y$
$\mu$: mean before change point $k$ for the Normal distribution of random variable $Y$
$\eta$: mean after change point $k$ for the Normal distribution of random variable $Y$
$\sigma^2$: variance before change point $k$ for the Normal distribution of random variable $Y$
$\phi^2$: variance after change point $k$ for the Normal distribution of random variable $Y$
$\theta_0$: mean of the Normal prior distribution $p(\mu)$
$\tau_0^2$: variance of the Normal prior distribution $p(\mu)$
$\theta_k$: mean of the Normal posterior distribution $p(\mu \mid \sigma^2, y_1, y_2, y_3, \ldots, y_k)$
$\tau_k^2$: variance of the Normal posterior distribution $p(\mu \mid \sigma^2, y_1, y_2, y_3, \ldots, y_k)$
$\lambda_0$: mean of the Normal prior distribution $p(\eta)$
$\omega_0^2$: variance of the Normal prior distribution $p(\eta)$
$\lambda_n$: mean of the Normal posterior distribution $p(\eta \mid \phi^2, y_{k+1}, y_{k+2}, y_{k+3}, \ldots, y_n)$
$\omega_n^2$: variance of the Normal posterior distribution $p(\eta \mid \phi^2, y_{k+1}, y_{k+2}, y_{k+3}, \ldots, y_n)$
$\bar{y}$: average of the data values
Variables
$p(\mu)$: prior Normal distribution of the mean before change point $k$
$p(\sigma^2)$: prior distribution of the variance before change point $k$
$p(\eta)$: prior Normal distribution of the mean after change point $k$
$p(\phi^2)$: prior distribution of the variance after change point $k$
$p(\mu, \sigma^2)$: joint prior probability before change point $k$
$p(\eta, \phi^2)$: joint prior probability after change point $k$
$p(\mu \mid y_i)$: posterior Normal distribution of the mean before change point $k$
$p(\eta \mid y_i)$: posterior Normal distribution of the mean after change point $k$
$p(\mu, \sigma^2 \mid y_i)$: joint posterior probability before change point $k$
$p(\eta, \phi^2 \mid y_i)$: joint posterior probability after change point $k$
$p(y_i \mid \mu, \sigma^2)$: likelihood or sampling model given $(\mu, \sigma^2)$
$p(y_i \mid \eta, \phi^2)$: likelihood or sampling model given $(\eta, \phi^2)$
$k_0$: prior sample size for the mean parameters $(\mu, \eta)$
$k_k$: prior sample size $k_0 + k$
$k_n$: prior sample size $k_0 + (n - k)$
$\nu_0$: prior sample size for the variance parameters $(\sigma^2, \phi^2)$
$\sigma_0^2$: prior sample variance
$\nu_k$: prior sample size $\nu_0 + k$
$\nu_n$: prior sample size $\nu_0 + (n - k)$
$V$: mean of the chain or replications (average daily pollutant concentration)
$V_{hj}$: $j$th observation from the $h$th replication
$\bar{V}_h$: mean of the $h$th replication
$\bar{V}$: mean of the $m$ replications
$B$: between-sequence variance, representing the variance of the replication means about the mean of the $m$ replications
$S_h^2$: variance of the $h$th replication
$W$: within-sequence variance, the mean variance of the $m$ replications
$Var(V)$: overall estimate of the variance of $V$ in the target distribution
$Var(V)_{\mu}$: overall variance of the first parameter before change point $k$
$Var(V)_{\eta}$: overall variance of the first parameter after change point $k$
$Var(V)_{\sigma^2}$: overall variance of the second parameter before change point $k$
$Var(V)_{\phi^2}$: overall variance of the second parameter after change point $k$
$R$: estimated potential scale reduction for convergence
$R_{\mu}$: convergence of the first parameter before change point $k$
$R_{\eta}$: convergence of the first parameter after change point $k$
$R_{\sigma^2}$: convergence of the second parameter before change point $k$
$R_{\phi^2}$: convergence of the second parameter after change point $k$
$R_k$: convergence of the change point ($k$)
$S_i$: cumulative sum

3.3. Assumptions

The following assumptions were used for the proposed model:
  • $Y$ represents the random data at a given time $t$, and this random data series is distributed over the state space $y \in \{1, 2, \ldots, n\}$, which can take any random value.
  • $Y(0) = 0$ means that no event occurred at time $t = 0$, while the time series random data are observed on intervals of equal length.
  • The random data structure follows a specific probability distribution function in any interval of length ($t$), resulting in a random variable with parameters ($\mu_1, \mu_2, \ldots, \mu_n$).

3.4. Formulation of Change Point Detection Model

The probability distribution function of a random variable $Y$ with the parameters $\mu_1, \mu_2, \ldots, \mu_n$ at any specific point $y$ is given as follows:
$$Y \sim P(\mu_1, \mu_2, \ldots, \mu_n) = f(y; \mu_1, \mu_2, \ldots, \mu_n) = p(Y = y \mid \mu_1, \mu_2, \ldots, \mu_n) \quad \text{for } y \in \{1, 2, \ldots, n\}$$
After defining the probability distribution function of the random process $Y$, the process is divided into two segments: the first segment describes the process before the change point, and the second segment describes the process after the change point. Let the change point in the random process $Y$ be denoted by $k$, let ($\mu_1, \mu_2, \ldots, \mu_n$) be the random variable parameters before change point $k$, and let ($\eta_1, \eta_2, \ldots, \eta_n$) be the random variable parameters after change point $k$.
$$y_i \sim P(\mu_1, \mu_2, \ldots, \mu_n) = f(y; \mu_1, \mu_2, \ldots, \mu_n) = p(Y = y_i \mid \mu_1, \mu_2, \ldots, \mu_n) \quad \text{for } i = 1, 2, \ldots, k$$
$$y_i \sim P(\eta_1, \eta_2, \ldots, \eta_n) = f(y; \eta_1, \eta_2, \ldots, \eta_n) = p(Y = y_i \mid \eta_1, \eta_2, \ldots, \eta_n) \quad \text{for } i = k+1, k+2, \ldots, n$$
The joint probability function is the product of the marginal probability functions. If the random variable $Y = y_i$ with parameters ($\mu_1, \mu_2, \ldots, \mu_n$) is modeled, then the joint probability function of the sample data is as follows:
$$p(Y = y_i \mid \mu_1, \mu_2, \ldots, \mu_n) = \prod_{i=1}^{k} p(y_i \mid \mu_1, \mu_2, \ldots, \mu_n) \quad \text{for } i \in \{1, 2, \ldots, k\}$$
A class of prior densities is conjugate for the likelihood/sampling model $p(y_i \mid \mu_1)$ if the posterior probability distribution is in the same class. Therefore, the prior distribution $p(\mu_1)$ and posterior distribution $p(\mu_1 \mid y_i)$ will follow the same conjugate prior distribution as the likelihood/sampling model $p(y_i \mid \mu_1)$. However, the likelihood $p(y_i \mid \mu_1)$ follows a random distribution based on the data. Bayes' theorem can be used to determine the posterior probability $p(\mu_1 \mid y_i)$ of the individual parameter $\mu_1$:
$$\text{Posterior probability} \propto \text{Prior probability} \times \text{Likelihood}$$
$$p(\mu_1 \mid y_i) \propto p(\mu_1)\, p(y_i \mid \mu_1)$$
Bayesian inference for multiple unknown parameters is not conceptually different from the one-parameter case. For any joint prior distribution $p(\mu_1, \mu_2, \ldots, \mu_n)$, posterior inference proceeds using Bayes' rule:
$$p(\mu_1, \mu_2, \ldots, \mu_n \mid y_i) = \frac{p(y_i \mid \mu_1, \mu_2, \ldots, \mu_n)\, p(\mu_1, \mu_2, \ldots, \mu_n)}{p(y_i)} \quad \text{for } i = 1, 2, \ldots, k$$
$$p(\eta_1, \eta_2, \ldots, \eta_n \mid y_i) = \frac{p(y_i \mid \eta_1, \eta_2, \ldots, \eta_n)\, p(\eta_1, \eta_2, \ldots, \eta_n)}{p(y_i)} \quad \text{for } i = k+1, k+2, \ldots, n$$
The inference for this multi-parameter model can be broken down into multiple one-parameter problems. First, make an inference for $\mu_1$ when remaining parameters ($\mu_2, \ldots, \mu_n$) are known and use a conjugate prior distribution for $\mu_1$. For any (conditional) prior probability $p(\mu_1 \mid \mu_2, \ldots, \mu_n)$, the posterior distribution will satisfy
$$p(\mu_1 \mid y_1, y_2, y_3, \ldots, y_k, \mu_2, \ldots, \mu_n) \propto p(\mu_1 \mid \mu_2, \ldots, \mu_n)\, p(y_1, y_2, y_3, \ldots, y_k \mid \mu_1, \mu_2, \ldots, \mu_n)$$
$$p(\mu_1 \mid y_1, y_2, y_3, \ldots, y_k, \mu_2, \ldots, \mu_n) = \frac{p(\mu_1 \mid \mu_2, \ldots, \mu_n)\, p(y_1, y_2, y_3, \ldots, y_k \mid \mu_1, \mu_2, \ldots, \mu_n)}{p(y_1, y_2, y_3, \ldots, y_k \mid \mu_2, \ldots, \mu_n)}$$
The posterior parameters combine the prior parameters with terms from the data:
Posterior information = Prior information + Data information
Hence, the prior distribution and sampling model are as follows:
$$p(y_1, y_2, y_3, \ldots, y_k \mid \mu_1, \mu_2, \ldots, \mu_n) \sim P(\mu_1, \mu_2, \ldots, \mu_n) \quad \text{for } i = 1, 2, \ldots, k$$
$$p(y_{k+1}, y_{k+2}, y_{k+3}, \ldots, y_n \mid \eta_1, \eta_2, \ldots, \eta_n) \sim P(\eta_1, \eta_2, \ldots, \eta_n) \quad \text{for } i = k+1, k+2, \ldots, n$$
$$\mu_1, \mu_2, \ldots, \mu_n \sim \text{Conjugate prior distribution}(\text{Posterior hyperparameters}) \quad \text{for } i = 1, 2, \ldots, k$$
$$\eta_1, \eta_2, \ldots, \eta_n \sim \text{Conjugate prior distribution}(\text{Posterior hyperparameters}) \quad \text{for } i = k+1, k+2, \ldots, n$$
Thus, the posterior inference for first parameter $\mu_1$ and $\eta_1$ can be given by
$$p(\mu_1 \mid y_1, y_2, y_3, \ldots, y_k, \mu_2, \ldots, \mu_n) \sim \text{Conjugate posterior distribution}(\text{Posterior hyperparameters}) \quad \text{for } i = 1, 2, \ldots, k$$
$$p(\eta_1 \mid y_{k+1}, y_{k+2}, y_{k+3}, \ldots, y_n, \eta_2, \ldots, \eta_n) \sim \text{Conjugate posterior distribution}(\text{Posterior hyperparameters}) \quad \text{for } i = k+1, k+2, \ldots, n$$
Just as the prior distribution for $\mu_1$ and $\mu_2, \ldots, \mu_n$ can be decomposed as $p(\mu_1, \mu_2, \ldots, \mu_n) = p(\mu_1 \mid \mu_2, \ldots, \mu_n)\, p(\mu_2, \ldots, \mu_n)$, the posterior distribution can be similarly decomposed:
$$p(\mu_1, \mu_2, \ldots, \mu_n \mid y_1, y_2, y_3, \ldots, y_k) = p(\mu_1 \mid y_1, y_2, y_3, \ldots, y_k, \mu_2, \ldots, \mu_n)\, p(\mu_2, \ldots, \mu_n \mid y_1, y_2, y_3, \ldots, y_k)$$
Similarly, $p(\eta_1, \eta_2, \ldots, \eta_n \mid y_{k+1}, y_{k+2}, y_{k+3}, \ldots, y_n)$ after a change point can be given by
$$p(\eta_1, \eta_2, \ldots, \eta_n \mid y_{k+1}, y_{k+2}, y_{k+3}, \ldots, y_n) = p(\eta_1 \mid y_{k+1}, y_{k+2}, y_{k+3}, \ldots, y_n, \eta_2, \ldots, \eta_n)\, p(\eta_2, \ldots, \eta_n \mid y_{k+1}, y_{k+2}, y_{k+3}, \ldots, y_n)$$
The conditional distribution of $\mu_1$ given $\mu_2, \ldots, \mu_n$ and the data ($y_1, y_2, y_3, \ldots, y_n$) was obtained in the previous sections. The posterior distribution of all other parameters $\mu_2, \ldots, \mu_n$ can be found by integrating over the unknown value of $\mu_1$:
$$p(\mu_2 \mid y_1, \ldots, y_k) \propto p(\mu_2)\, p(y_1, \ldots, y_k \mid \mu_2) = p(\mu_2) \int p(y_1, \ldots, y_k \mid \mu_1, \mu_2)\, p(\mu_1 \mid \mu_2)\, d\mu_1$$
$$\mu_2 \mid y_1, y_2, y_3, \ldots, y_k \sim \text{Conjugate posterior distribution}(\text{Posterior hyperparameters})$$
Similarly, $p(\eta_2 \mid y_{k+1}, y_{k+2}, y_{k+3}, \ldots, y_n)$ after a change point can be given by
$$\eta_2 \mid y_{k+1}, y_{k+2}, y_{k+3}, \ldots, y_n \sim \text{Conjugate posterior distribution}(\text{Posterior hyperparameters})$$
The Bayesian inference for the parameters $\mu_3, \ldots, \mu_n$ and $\eta_3, \ldots, \eta_n$ in the distribution can be determined in a similar way as used for $\mu_2$ and $\eta_2$.
The likelihood function for change point detection can be determined as:
$$L(Y;\ \text{change point},\ \text{parameter}_1, \text{parameter}_2, \ldots, \text{parameter}_n) = \exp\Big(\text{change point} \times \big(\text{expectation after change point} - \text{expectation before change point}\big)\Big) \left(\frac{\text{expectation before change point}}{\text{expectation after change point}}\right)^{\sum_{i=1}^{\text{change point}} y_i}$$
$$L\big(Y; k, (\mu_1, \mu_2, \ldots, \mu_n), (\eta_1, \eta_2, \ldots, \eta_n)\big) = \exp\Big(k\big(E[Y = y_i \mid \eta_1, \eta_2, \ldots, \eta_n] - E[Y = y_i \mid \mu_1, \mu_2, \ldots, \mu_n]\big)\Big) \left(\frac{E[Y = y_i \mid \mu_1, \mu_2, \ldots, \mu_n]}{E[Y = y_i \mid \eta_1, \eta_2, \ldots, \eta_n]}\right)^{\sum_{i=1}^{k} y_i}$$
The change point for the random process $Y$ is detected by the likelihood ratio test (LRT). The LRT begins with a comparison of the likelihood scores of two models: one is the null model and the other is the alternative model. The test is based on the likelihood ratio, which states how many times more likely the data are under one model than the other. This likelihood ratio is compared to a critical value to decide whether to reject the null model.
$$f(\text{change point} \mid Y, \text{parameters before change point}, \text{parameters after change point}) = \frac{L(Y;\ \text{change point}, \text{parameters before change point}, \text{parameters after change point})}{\sum_{j=1}^{n} L(Y;\ j, \text{parameters before change point}, \text{parameters after change point})}$$
The likelihood ratio test for change point $k$ given the random variable $Y$ and parameters before and after change points is as follows:
$$f\big(k \mid Y, (\mu_1, \mu_2, \ldots, \mu_n), (\eta_1, \eta_2, \ldots, \eta_n)\big) = \frac{L\big(Y; k, (\mu_1, \mu_2, \ldots, \mu_n), (\eta_1, \eta_2, \ldots, \eta_n)\big)}{\sum_{j=1}^{n} L\big(Y; j, (\mu_1, \mu_2, \ldots, \mu_n), (\eta_1, \eta_2, \ldots, \eta_n)\big)}$$
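As an illustration of how this ratio can be evaluated numerically, the sketch below computes the normalized likelihood over all candidate change points for a Poisson-distributed process whose expectations before and after the change are assumed known; the function name, the use of Python/NumPy and the log-space evaluation are choices made here for clarity and are not part of the original formulation.

```python
import numpy as np

def changepoint_likelihood_ratio(y, mu, eta):
    """Normalized likelihood f(k | Y, mu, eta) over candidate change points,
    using L(Y; k) = exp(k (eta - mu)) * (mu / eta)^(sum_{i<=k} y_i).
    mu and eta are the expected values before and after the change."""
    y = np.asarray(y, dtype=float)
    ks = np.arange(1, len(y))                 # candidate change points 1 .. n-1
    cum = np.cumsum(y)
    log_lik = ks * (eta - mu) + cum[ks - 1] * np.log(mu / eta)
    log_lik -= log_lik.max()                  # stabilize before exponentiating
    lik = np.exp(log_lik)
    return ks, lik / lik.sum()                # normalized as in the ratio above
```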

3.4.1. Multiple Change Points Detection

After the first change point $k$ is detected, the data can be broken into two distinct segments, one on each side of the change point: 1 to $k$ and $k+1$ to $n$. The same procedure described above is then applied to each segment separately to detect multiple change points in the random process $Y$; a recursive sketch of this segmentation is given below.
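A minimal recursive (binary segmentation) sketch of this idea follows; the helper name, the minimum segment length and the convention that the single-change detector returns None when no change is found are all assumptions made for illustration.

```python
def detect_changepoints(y, detect_single, min_len=10, offset=0):
    """Recursive binary segmentation: find one change point, then repeat the
    search on the segments before and after it. `detect_single` is any
    single-change-point detector (for example, one based on the likelihood
    ratio above) that returns a change index, or None when no change is found."""
    if len(y) < 2 * min_len:
        return []
    k = detect_single(y)
    if k is None:
        return []
    before = detect_changepoints(y[:k], detect_single, min_len, offset)
    after = detect_changepoints(y[k:], detect_single, min_len, offset + k)
    return sorted(before + [offset + k] + after)
```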

3.4.2. Convergence of the Parameters

A single simulation run cannot represent the real features of the resulting model. Therefore, the Gelman-Rubin convergence diagnostic is used to estimate the steady-state parameters by running multiple sequences of the chain. A lack of convergence can be detected by comparing multiple sequences, but it cannot be detected by looking at a single sequence. Therefore, multiple sequences of the chain are run to estimate the actual characteristics of the target distribution (Gelman and Rubin [35,36,37]). Here, $m$ replications of the simulation ($m = 10$) are performed, each of length $n = 1000$. If the target distribution is unimodal, Cowles and Carlin recommend running at least 10 chains [38]. The mean pollutant concentration is the parameter of interest and is denoted by $V$.
Scalar summary: $V$ = mean of the chain (average daily pollutant concentrations).
Let $V_{hj}$ be the $j$th observation from the $h$th replication, that is, a single observation of the mean pollutant concentration per day,
where the replication number is $h \in \{1, 2, \ldots, m\}$ and the observation number within a replication is $j \in \{1, 2, \ldots, n\}$.
Mean of the $h$th replication:
$$\bar{V}_h = \frac{1}{n} \sum_{j=1}^{n} V_{hj}$$
Mean of the $m$ replications:
$$\bar{V} = \frac{1}{m} \sum_{h=1}^{m} \bar{V}_h$$
The between-sequence variance represents the variance of the replication means about the mean of the $m$ replications and is calculated as follows:
$$B = \frac{n}{m-1} \sum_{h=1}^{m} (\bar{V}_h - \bar{V})^2$$
The variance of each replication is calculated to determine the within-sequence variance:
$$S_h^2 = \frac{1}{n-1} \sum_{j=1}^{n} (V_{hj} - \bar{V}_h)^2$$
The within-sequence variance is the mean variance of the $m$ replications, determined as given below:
$$W = \frac{1}{m} \sum_{h=1}^{m} S_h^2$$
Finally, the within-sequence variance and between-sequence variance are combined to obtain an overall estimate of the variance of $V$ in the target distribution:
$$Var(V) = \frac{n-1}{n} W + \frac{1}{n} B$$
Convergence is identified by calculating
$$R = \sqrt{\frac{Var(V)}{W}}$$
This factor $R$ (the estimated potential scale reduction) is the ratio between upper and lower bounds on the standard deviation of $V$; $Var(V)$ can be reduced by running a larger number of iterations. Further iterations of the chain must be performed if the potential scale reduction is high. The replications are run for all scalar summaries until $R$ is lower than 1.1 or 1.2.
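The following short sketch computes $B$, $W$, $Var(V)$ and $R$ for a set of replications stored as rows of an array; the square-root form of the potential scale reduction is assumed here, and the function name is illustrative.

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction R for a scalar summary.
    `chains` is an (m, n) array: m replications with n observations each."""
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)                          # mean of each replication
    grand_mean = chain_means.mean()                            # mean of the m replications
    B = n / (m - 1) * np.sum((chain_means - grand_mean) ** 2)  # between-sequence variance
    W = chains.var(axis=1, ddof=1).mean()                      # within-sequence variance
    var_v = (n - 1) / n * W + B / n                            # overall variance estimate
    return np.sqrt(var_v / W)                                  # close to 1 at convergence
```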

3.5. Flowchart Algorithm

The flowchart for change point ($k$) detection for any random process $Y$ is shown in the figure below (flowchart of the proposed change point detection algorithm).

3.6. Comparison Method for Change Point Detection

For comparative analysis, a change point analysis was performed using a combination of CUSUM (cumulative sum control charts) and bootstrapping.

3.6.1. The CUSUM Technique

The CUSUM is a sequential analysis technique typically used for monitoring change detection. CUSUM charts are constructed by calculating and plotting a cumulative sum based on the data. The cumulative sums are calculated as follows.
  • First, calculate the average:
$$\bar{y} = \frac{y_1 + y_2 + y_3 + \cdots + y_n}{n}$$
  • Start the cumulative sum at zero by setting $S_0 = 0$.
  • Calculate the other cumulative sums by adding the difference between the current value and the average to the previous sum, that is,
$$S_i = S_{i-1} + (y_i - \bar{y})$$
The cumulative sum is not the sum of the values; it is the cumulative sum of the differences between the values and the average. Because the average is subtracted from each value, the final cumulative sum must be zero. Some practice is required to interpret a CUSUM chart. If, during a certain period of time, the values are greater than the overall average, the sum will steadily increase because the differences added to the cumulative sum are positive. Hence, an upward trend in the CUSUM chart indicates a period of time when the overall average is less than the values. Similarly, a downward trend in the chart shows that the overall average is above the values. A rapid change in the trend of the CUSUM indicates a shift or change in the average, and periods when the CUSUM chart follows a straight line indicate no change in the average.
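The cumulative sums $S_0, \ldots, S_n$ used for the chart can be computed directly; a minimal sketch (the function name is chosen here for illustration) follows.

```python
import numpy as np

def cusum(y):
    """Cumulative sums of deviations from the overall mean: S_0 = 0 and
    S_i = S_{i-1} + (y_i - ybar), as used to build the CUSUM chart."""
    y = np.asarray(y, dtype=float)
    return np.concatenate(([0.0], np.cumsum(y - y.mean())))
```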

3.6.2. Bootstrap Analysis

Bootstrap analysis can be performed to determine the confidence level of an apparent change. The magnitude of the change, $S_{diff}$, must be estimated before performing the bootstrap analysis:
$$S_{diff} = S_{max} - S_{min}$$
$$S_{max} = \max_{i = 0, 1, 2, \ldots, n} S_i$$
$$S_{min} = \min_{i = 0, 1, 2, \ldots, n} S_i$$
Once the estimator of the magnitude of the change has been selected, the bootstrap analysis can be performed. A single bootstrap is performed as follows.
  • Generate a bootstrap sample of $n$ units, denoted $y_1^0, y_2^0, y_3^0, \ldots, y_n^0$, by randomly reordering the original $n$ values. This is called sampling without replacement.
  • Based on the bootstrap sample, calculate the bootstrap CUSUM, denoted $S_0^0, S_1^0, S_2^0, \ldots, S_n^0$.
  • Calculate the maximum, minimum and difference of the bootstrap CUSUM, denoted $S_{max}^0$, $S_{min}^0$ and $S_{diff}^0$.
  • Determine whether the bootstrap difference $S_{diff}^0$ is less than the original difference $S_{diff}$.
The idea behind bootstrapping is that the bootstrap samples represent random reorderings of the data that mimic the behavior of the CUSUM if no change has occurred. By performing a large number of bootstrap samples, the variability of $S_{diff}$ under the assumption of no change can be estimated and compared with the $S_{diff}$ value calculated from the data in its original order; if the bootstrap CUSUM charts tend to stay closer to zero than the CUSUM of the data in its original order, a change likely occurred. A bootstrap analysis consists of performing a large number of bootstraps and counting the number of bootstraps for which $S_{diff}^0$ is less than $S_{diff}$. Let $N$ be the number of bootstrap samples performed and let $X$ be the number of bootstraps for which $S_{diff}^0 < S_{diff}$. Then, the confidence level that a change occurred, as a percentage, is calculated as follows:
$$\text{Confidence level} = 100 \frac{X}{N} \%$$
A high confidence level is strong evidence that a change has occurred. Ideally, one would estimate the distribution of $S_{diff}^0$ based on all possible reorderings of the data instead of bootstrapping, but this is usually not possible; therefore, for a better estimate, the number of bootstrap samples should be increased. Bootstrapping is a distribution-free methodology with only one assumption, namely an independent error structure. Change-point analysis and control charting are both based on the mean-shift model. Let $y_1, y_2, y_3, \ldots, y_n$ represent the data in time order. The mean-shift model can be written as
$$y_i = \mu_i + \epsilon_i$$
where $\mu_i$ is the average at time $i$. Generally $\mu_i = \mu_{i-1}$, except for a small number of values of $i$ called the change points. $\epsilon_i$ is the random error associated with the $i$th value and is assumed to be independent with a mean of zero. Once a change has been detected, an estimate of the time at which the change occurred can be made. One such estimator is the CUSUM estimator. Let $m$ be such that
$$|S_m| = \max_{i = 0, 1, 2, \ldots, n} |S_i|$$
Here, $S_m$ is the point furthest from zero in the CUSUM chart. The point $m$ estimates the last point before the change occurred, and the point $m+1$ estimates the first point after the change.
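The whole procedure, that is, the confidence level from bootstrapping plus the CUSUM estimate $m$ of the last point before the change, can be sketched as follows; the function name, the default of 1000 bootstrap samples and the seed handling are illustrative choices.

```python
import numpy as np

def cusum_bootstrap(y, n_boot=1000, seed=None):
    """Confidence level (in percent) that a change occurred, and the CUSUM
    estimate m of the last point before the change."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)
    S = np.concatenate(([0.0], np.cumsum(y - y.mean())))      # original CUSUM
    s_diff = S.max() - S.min()
    count = 0
    for _ in range(n_boot):
        yb = rng.permutation(y)                               # reorder without replacement
        Sb = np.concatenate(([0.0], np.cumsum(yb - yb.mean())))
        if Sb.max() - Sb.min() < s_diff:                      # S0_diff < S_diff
            count += 1
    confidence = 100.0 * count / n_boot
    m = int(np.argmax(np.abs(S)))                             # point furthest from zero
    return confidence, m
```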

3.6.3. Mean and Variance Estimation

Once a change has been detected, the data can be broken into two segments, one on each side of the change-point, 1 to m and m + 1 to n. Then, the two segments can be analyzed by determining their parameters.
$$\text{Mean} = \bar{y} = \frac{\sum y_i}{n}$$
$$\text{Variance} = \sigma^2 = \frac{\sum (y_i - \text{Mean})^2}{n}$$
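A small sketch of this per-segment summary (the split index $m$ and the function name are illustrative):

```python
import numpy as np

def segment_stats(y, m):
    """Mean and (population) variance of the segments 1..m and m+1..n,
    matching the Mean and Variance formulas above."""
    before, after = np.asarray(y[:m], float), np.asarray(y[m:], float)
    return (before.mean(), before.var()), (after.mean(), after.var())
```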

4. Computational Experiment

Section 4.1 describes the numerical verification of the formulated mathematical model to authenticate the validity of the model. Real-time data of particulate matter daily concentrations for four different sites in Seoul, South Korea, were utilized for this investigation, as given in Section 4.2.

4.1. Toy Model for Validation with Known Solution

As shown in Figure 1, Figure 2 and Figure 3, an artificial data set of random data was generated, consisting of two segments of equal length with 50 data points each. The samples are drawn from the Poisson distributions Poisson(5) and Poisson(2.5), respectively. Thus, the change point occurs at the 50th data point.
Table 2 describes the results for this artificial data set obtained through the probabilistic method. As shown in Figure 4, Figure 5, Figure 6, Figure 7, Figure 8, Figure 9, Figure 10 and Figure 11, the following results were acquired by applying the method explained in Section 3.4 to this artificial data set.
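A minimal sketch that reproduces this kind of artificial data set and recovers the known change point with the likelihood ratio of Section 3.4 is given below; the random seed and the assumption that the true rates 5 and 2.5 are known are illustrative simplifications.

```python
import numpy as np

rng = np.random.default_rng(42)                      # arbitrary seed
y = np.concatenate([rng.poisson(5.0, 50),            # first segment:  Poisson(5)
                    rng.poisson(2.5, 50)])           # second segment: Poisson(2.5)

mu, eta = 5.0, 2.5                                   # expectations before/after the change
ks = np.arange(1, len(y))
log_lik = ks * (eta - mu) + np.cumsum(y)[ks - 1] * np.log(mu / eta)
k_hat = int(ks[np.argmax(log_lik)])
print("estimated change point:", k_hat)              # close to 50 for this data set
```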

4.2. Particulate Matter ($PM_{2.5}$ and $PM_{10}$) Change Points for Four Different Sites

Particulate matter ($PM_{2.5}$ and $PM_{10}$) concentrations are considered as Normally distributed. A random variable $Y$ is understood as Normally distributed with mean $\mu$ and variance $\sigma^2 > 0$ if the probability distribution function at any given point $y$ in the sample space is given as follows:
$$Y \sim \text{Normal}(\mu, \sigma^2) = f(y; \mu, \sigma^2) = p(Y = y \mid \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(y-\mu)^2}{2\sigma^2}} \quad \text{for } y \in \{0, 1, 2, \ldots, n\}$$
The distribution is symmetric about the mean $\mu$, and $\sigma^2$ represents the variance. The numerical details for the $PM_{2.5}$ and $PM_{10}$ concentrations are given in Table 3 and Table 4, respectively.
Here, the results were acquired by applying the method explained in Section 3.4 to the particulate matter ($PM_{2.5}$ and $PM_{10}$) concentrations for four different sites (Guro, Nowon, Songpa and Yongsan) in Seoul, South Korea. The daily data observed from January 2004 to December 2013 were used to compute the change point of both pollutants. The particulate matter ($PM_{2.5}$ and $PM_{10}$) concentrations are shown in Figure 12, Figure 13, Figure 14, Figure 15, Figure 16, Figure 17, Figure 18 and Figure 19.
A change point for a process with different data structures is identified to establish that a change has occurred, the most likely period in which the change occurred, and the parameter behavior before and after the change point. If the particulate matter ($PM_{2.5}$ and $PM_{10}$) concentrations are Normally distributed and the change point for the random process is denoted by $k$, it is supposed that the data follow a Normal distribution with mean $\mu$ and variance $\sigma^2$ until the point $k$. After the point $k$, the data are Normally distributed with parameters mean ($\eta$) and variance ($\phi^2$), which can be represented as
$$y_i \sim \text{Normal}(\mu, \sigma^2) = f(y; \mu, \sigma^2) = p(Y = y_i \mid \mu, \sigma^2) \quad \text{for } i = 1, 2, \ldots, k$$
$$y_i \sim \text{Normal}(\eta, \phi^2) = f(y; \eta, \phi^2) = p(Y = y_i \mid \eta, \phi^2) \quad \text{for } i = k+1, k+2, \ldots, n$$
Moreover, the notation "$\sim$" means "is distributed as".
If the proposed model is $(Y_1, Y_2, Y_3, \ldots, Y_k \mid \mu, \sigma^2) \sim \text{Normal}(\mu, \sigma^2)$, then the joint pdf (probability density function) is given by
$$p(y_1, y_2, y_3, \ldots, y_k \mid \mu, \sigma^2) = \prod_{i=1}^{k} p(y_i \mid \mu, \sigma^2) = \prod_{i=1}^{k} \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{1}{2}\left(\frac{y_i - \mu}{\sigma}\right)^2} = (2\pi\sigma^2)^{-\frac{k}{2}} \exp\left(-\frac{1}{2} \sum_{i=1}^{k} \left(\frac{y_i - \mu}{\sigma}\right)^2\right)$$
Expanding the quadratic term in the exponent, it can be seen that $p(y_1, y_2, y_3, \ldots, y_k \mid \mu, \sigma^2)$ depends on $y_1, y_2, y_3, \ldots, y_k$ through
$$\sum_{i=1}^{k} \left(\frac{y_i - \mu}{\sigma}\right)^2 = \frac{1}{\sigma^2} \sum y_i^2 - \frac{2\mu}{\sigma^2} \sum y_i + \frac{k \mu^2}{\sigma^2}$$
It can be shown that $\left(\sum y_i^2, \sum y_i\right)$ makes up a two-dimensional sufficient statistic. Knowing the values of these quantities is equivalent to knowing the values of $\bar{y} = \sum y_i / k$ and $s^2 = \sum (y_i - \bar{y})^2 / (k-1)$, so $(\bar{y}, s^2)$ is also a sufficient statistic.
Inference for this two-parameter model can be broken down into two one-parameter problems.

4.2.1. Bayesian Inference for Mean When Variance Is Known

Firstly, make an inference for $\mu$ when $\sigma^2$ is known and use a conjugate prior distribution for $\mu$. For any (conditional) prior probability $p(\mu \mid \sigma^2)$, the posterior distribution will satisfy
$$p(\mu \mid y_1, y_2, y_3, \ldots, y_k, \sigma^2) \propto p(\mu \mid \sigma^2) \times e^{-\frac{1}{2\sigma^2} \sum (y_i - \mu)^2} \propto p(\mu \mid \sigma^2) \times e^{c_1 (\mu - c_2)^2}$$
A class of prior distributions is conjugate for a likelihood or sampling model $p(y_1, y_2, y_3, \ldots, y_k \mid \mu, \sigma^2)$ if the resulting posterior distribution is also in the same class. The above calculation indicates that, if $p(\mu \mid \sigma^2)$ is to be conjugate, it must include quadratic terms like $e^{c_1 (\mu - c_2)^2}$. The simplest such class of probability densities on $\mathbb{R}$ is the Normal family of densities, suggesting that if $p(\mu \mid \sigma^2)$ is Normal and $y_1, y_2, y_3, \ldots, y_k$ are $\text{Normal}(\mu, \sigma^2)$, then $p(\mu \mid y_1, \ldots, y_k, \sigma^2)$ is also a Normal density.
Hence, the Bayesian model for parameter mean before change point μ can be given by
$$p(y_1, y_2, y_3, \ldots, y_k \mid \mu, \sigma^2) \sim \text{Normal}(\mu, \sigma^2)$$
$$\mu \sim \text{Normal}(\theta_0, \tau_0^2)$$
$$p(\mu \mid \sigma^2, y_1, y_2, y_3, \ldots, y_k) \sim \text{Normal}(\theta_k, \tau_k^2)$$
$$\theta_0 = \text{mean of } k_0 \text{ prior observations}$$
Consider the particular case in which $\tau_0^2 = \frac{\sigma^2}{k_0}$:
$$p(\mu, \sigma^2) = p(\mu \mid \sigma^2)\, p(\sigma^2) = \text{dnorm}\left(\mu, \theta_0, \tau_0 = \frac{\sigma}{\sqrt{k_0}}\right) \times p(\sigma^2)$$
The parameters $\theta_0$ and $k_0$ can be interpreted as the mean and sample size, respectively, from a set of prior observations.
Similarly, the Bayesian model for the parameter mean after the change point, $\eta$, can be given by
$$p(y_{k+1}, y_{k+2}, y_{k+3}, \ldots, y_n \mid \eta, \phi^2) \sim \text{Normal}(\eta, \phi^2)$$
$$\eta \sim \text{Normal}(\lambda_0, \omega_0^2)$$
$$p(\eta \mid \phi^2, y_{k+1}, y_{k+2}, y_{k+3}, \ldots, y_n) \sim \text{Normal}(\lambda_n, \omega_n^2)$$
$$\omega_0^2 = \frac{\phi^2}{k_0}$$
$$\omega_n^2 = \frac{1}{\frac{1}{\omega_0^2} + \frac{n-k}{\phi^2}}$$
$$\lambda_0 = \text{mean of prior observations}$$
$$\lambda_n = \frac{k_0}{k_0 + (n-k)} \lambda_0 + \frac{n-k}{k_0 + (n-k)} \bar{y}$$
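A small sketch of this conjugate update for the post-change mean is given below; it uses the standard precision-weighted form, which reduces to the $k_0$-weighted average above when $\omega_0^2 = \phi^2 / k_0$. The function and argument names are illustrative.

```python
import numpy as np

def posterior_mean_after_change(seg, lam0, omega0_sq, phi_sq):
    """Conjugate update for eta with known variance phi^2:
    returns (lambda_n, omega_n^2) for the data after the change point."""
    seg = np.asarray(seg, dtype=float)
    n_minus_k = len(seg)
    omega_n_sq = 1.0 / (1.0 / omega0_sq + n_minus_k / phi_sq)
    lam_n = omega_n_sq * (lam0 / omega0_sq + n_minus_k * seg.mean() / phi_sq)
    return lam_n, omega_n_sq
```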

4.2.2. Bayesian Inference for Variance ($\sigma^2$, $\phi^2$)

For $\sigma^2$, a family of prior distributions is required with support on $(0, \infty)$. One such family of distributions is the Gamma family; unfortunately, this family is not conjugate for the Normal variance. However, the Gamma family does turn out to be a conjugate class of densities for $\frac{1}{\sigma^2}$ (the precision). When using such a prior distribution, $\sigma^2$ has an Inverse-Gamma distribution. For interpretability later, instead of using $a_1$ and $b_1$, this prior distribution of $\sigma^2$ can be parameterized as:
$$\text{Precision} = \frac{1}{\sigma^2} \sim \text{Gamma}(a_1, b_1) \sim \text{Gamma}\left(\frac{\nu_0}{2}, \frac{\nu_0 \sigma_0^2}{2}\right)$$
$$\text{Variance before change point} = \sigma^2 \sim \text{Inverse-Gamma}(a_1, b_1) \sim \text{Inverse-Gamma}\left(\frac{\nu_0}{2}, \frac{\nu_0 \sigma_0^2}{2}\right)$$
$$\text{Variance after change point} = \phi^2 \sim \text{Inverse-Gamma}(a_2, b_2) \sim \text{Inverse-Gamma}\left(\frac{\nu_0}{2}, \frac{\nu_0 \phi_0^2}{2}\right)$$
The prior parameters ($\sigma_0^2, \nu_0$) can be interpreted as the sample variance and sample size of the prior observations, respectively. For posterior inference, the prior distributions and sampling model are as follows:
$$\frac{1}{\sigma^2} \sim \text{Gamma}\left(\frac{\nu_0}{2}, \frac{\nu_0 \sigma_0^2}{2}\right)$$
$$(\mu \mid \sigma^2) \sim \text{Normal}\left(\theta_0, \frac{\sigma^2}{k_0}\right)$$
$$(y_1, y_2, y_3, \ldots, y_k \mid \mu, \sigma^2) \sim \text{Normal}(\mu, \sigma^2)$$
After a change point, the prior distributions and sampling model are
$$\frac{1}{\phi^2} \sim \text{Gamma}\left(\frac{\nu_0}{2}, \frac{\nu_0 \phi_0^2}{2}\right)$$
$$(\eta \mid \phi^2) \sim \text{Normal}\left(\lambda_0, \frac{\phi^2}{k_0}\right)$$
$$(y_{k+1}, y_{k+2}, y_{k+3}, \ldots, y_n \mid \eta, \phi^2) \sim \text{Normal}(\eta, \phi^2)$$

4.2.3. Joint Inference for Mean and Variance

Just as the prior distribution for $\mu$ and $\sigma^2$ can be decomposed as $p(\mu, \sigma^2) = p(\mu \mid \sigma^2)\, p(\sigma^2)$, the posterior distribution can be similarly decomposed:
$$p(\mu, \sigma^2 \mid y_1, y_2, y_3, \ldots, y_k) = p(\mu \mid \sigma^2, y_1, y_2, y_3, \ldots, y_k)\, p(\sigma^2 \mid y_1, y_2, y_3, \ldots, y_k)$$
$$p(\eta, \phi^2 \mid y_{k+1}, y_{k+2}, y_{k+3}, \ldots, y_n) = p(\eta \mid \phi^2, y_{k+1}, y_{k+2}, y_{k+3}, \ldots, y_n)\, p(\phi^2 \mid y_{k+1}, y_{k+2}, y_{k+3}, \ldots, y_n)$$
The conditional distribution of $\mu$ given $\sigma^2$ and the data ($y_1, y_2, y_3, \ldots, y_k$) has already been obtained:
$$p(\mu \mid \sigma^2, y_1, y_2, y_3, \ldots, y_k) \sim \text{Normal}(\theta_k, \tau_k^2) \sim \text{Normal}\left(\theta_k, \frac{\sigma^2}{k_k}\right)$$
where $k_k = k_0 + k$ and $\theta_k = \frac{k_0}{k_0 + k} \theta_0 + \frac{k}{k_0 + k} \bar{y} = \frac{k_0 \theta_0 + k \bar{y}}{k_k}$.
Therefore, if $\theta_0$ is the mean of $k_0$ prior observations, then $E(\mu \mid \sigma^2, y_1, \ldots, y_k)$ is the sample mean of the current and prior observations, and $Var(\mu \mid \sigma^2, y_1, \ldots, y_k)$ is $\sigma^2$ divided by the total number of observations, both prior and current.
$$p(\eta \mid \phi^2, y_{k+1}, y_{k+2}, y_{k+3}, \ldots, y_n) \sim \text{Normal}(\lambda_n, \omega_n^2) \sim \text{Normal}\left(\lambda_n, \frac{\phi^2}{k_n}\right)$$
where $k_n = k_0 + (n-k)$ and $\lambda_n = \frac{k_0}{k_0 + (n-k)} \lambda_0 + \frac{n-k}{k_0 + (n-k)} \bar{y} = \frac{k_0 \lambda_0 + (n-k) \bar{y}}{k_n}$.
The posterior distribution of $\sigma^2$ can be found by integrating over the unknown value of $\mu$:
$$p(\sigma^2 \mid y_1, \ldots, y_k) \propto p(\sigma^2)\, p(y_1, \ldots, y_k \mid \sigma^2) = p(\sigma^2) \int p(y_1, \ldots, y_k \mid \mu, \sigma^2)\, p(\mu \mid \sigma^2)\, d\mu$$
$$\frac{1}{\sigma^2} \,\Big|\, y_1, \ldots, y_k \sim \text{Gamma}\left(\frac{\nu_k}{2}, \frac{\nu_k \sigma_k^2}{2}\right)$$
where $\nu_k = \nu_0 + k$ and $\sigma_k^2 = \frac{1}{\nu_k}\left[\nu_0 \sigma_0^2 + (k-1)s^2 + \frac{k_0 k}{k_k}(\bar{y} - \theta_0)^2\right]$. It has been shown that the variance before the change point ($\sigma^2$) and the variance after the change point ($\phi^2$) follow an Inverse-Gamma distribution. The prior and posterior distributions for the variance parameters are given as follows:
$$\sigma^2 \sim \text{Inverse-Gamma}\left(\frac{\nu_0}{2}, \frac{\nu_0 \sigma_0^2}{2}\right)$$
$$\sigma^2 \mid y_1, y_2, y_3, \ldots, y_k \sim \text{Inverse-Gamma}\left(\frac{\nu_k}{2}, \frac{\nu_k \sigma_k^2}{2}\right) \sim \text{Inverse-Gamma}\left(\frac{\nu_0 + k}{2}, \frac{1}{2}\left[\nu_0 \sigma_0^2 + (k-1)s^2 + \frac{k_0 k}{k_0 + k}(\bar{y} - \theta_0)^2\right]\right)$$
Similarly, the variance after the change point ($\phi^2$) is given as
$$\phi^2 \sim \text{Inverse-Gamma}\left(\frac{\nu_0}{2}, \frac{\nu_0 \phi_0^2}{2}\right)$$
$$\phi^2 \mid y_{k+1}, y_{k+2}, y_{k+3}, \ldots, y_n \sim \text{Inverse-Gamma}\left(\frac{\nu_n}{2}, \frac{\nu_n \phi_n^2}{2}\right) \sim \text{Inverse-Gamma}\left(\frac{\nu_0 + (n-k)}{2}, \frac{1}{2}\left[\nu_0 \phi_0^2 + (n-k-1)s^2 + \frac{k_0 (n-k)}{k_0 + (n-k)}(\bar{y} - \lambda_0)^2\right]\right)$$
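Putting the mean and variance updates together, the sketch below computes the posterior hyper-parameters ($\theta_k$, $k_k$, $\nu_k$, $\sigma_k^2$) for one segment and builds the corresponding Inverse-Gamma posterior with SciPy; the prior values passed in are illustrative, and the same function can be applied to the post-change segment with ($\lambda_0$, $\phi_0^2$).

```python
import numpy as np
from scipy import stats

def conjugate_posterior(seg, theta0, k0, nu0, sigma0_sq):
    """Normal/Inverse-Gamma conjugate update for one segment:
    returns theta_k, k_k and the Inverse-Gamma posterior of the variance."""
    seg = np.asarray(seg, dtype=float)
    k = len(seg)
    ybar, s_sq = seg.mean(), seg.var(ddof=1)
    kk = k0 + k
    theta_k = (k0 * theta0 + k * ybar) / kk
    nu_k = nu0 + k
    sigma_k_sq = (nu0 * sigma0_sq + (k - 1) * s_sq
                  + k0 * k / kk * (ybar - theta0) ** 2) / nu_k
    # sigma^2 | y ~ Inverse-Gamma(nu_k / 2, nu_k * sigma_k_sq / 2)
    var_posterior = stats.invgamma(a=nu_k / 2, scale=nu_k * sigma_k_sq / 2)
    # and mu | sigma^2, y ~ Normal(theta_k, sigma^2 / kk)
    return theta_k, kk, var_posterior
```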

4.2.4. Improper Priors

Since $k_0$ and $\nu_0$ are prior sample sizes, the smaller these parameters are, the more objective the estimates will be. The posterior parameters, as $k_0$ and $\nu_0$ get smaller and smaller, are
$$\theta_k = \frac{k_0 \theta_0 + k \bar{y}}{k_0 + k}$$
$$\sigma_k^2 = \frac{1}{\nu_0 + k}\left[\nu_0 \sigma_0^2 + (k-1)s^2 + \frac{k_0 k}{k_k}(\bar{y} - \theta_0)^2\right]$$
Thus, as $k_0, \nu_0 \to 0$,
$$\theta_k \to \bar{y}$$
and
$$\sigma_k^2 \to \frac{k-1}{k} s^2 = \frac{1}{k} \sum (y_i - \bar{y})^2$$
Improper priors lead to the following posterior distributions for the variance:
$$\frac{1}{\sigma^2} \,\Big|\, y_1, y_2, y_3, \ldots, y_k \sim \text{Gamma}\left(\frac{k}{2}, \frac{1}{2} \sum_{i=1}^{k} (y_i - \bar{y})^2\right)$$
$$\frac{1}{\phi^2} \,\Big|\, y_{k+1}, y_{k+2}, y_{k+3}, \ldots, y_n \sim \text{Gamma}\left(\frac{n-k}{2}, \frac{1}{2} \sum_{i=k+1}^{n} (y_i - \bar{y})^2\right)$$
Similarly, the posterior distributions for the mean are given as
$$\mu \mid \sigma^2, y_1, y_2, y_3, \ldots, y_k \sim \text{Normal}\left(\frac{1}{k} \sum_{i=1}^{k} y_i, \frac{\sigma^2}{k}\right)$$
$$\eta \mid \phi^2, y_{k+1}, y_{k+2}, y_{k+3}, \ldots, y_n \sim \text{Normal}\left(\frac{1}{n-k} \sum_{i=k+1}^{n} y_i, \frac{\phi^2}{n-k}\right)$$
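Under these improper-prior posteriors, Monte Carlo draws of ($\mu$, $\sigma^2$) and ($\eta$, $\phi^2$) for a candidate change point can be generated as in the sketch below; the function name, the number of draws and the seed handling are illustrative, and this is not the paper's MATLAB implementation.

```python
import numpy as np

def posterior_draws(y, k, n_draws=1000, seed=None):
    """Draws of (mu, sigma^2) for y_1..y_k and (eta, phi^2) for y_{k+1}..y_n
    from the improper-prior posteriors above."""
    rng = np.random.default_rng(seed)

    def draw(seg):
        seg = np.asarray(seg, dtype=float)
        m = len(seg)
        ss = np.sum((seg - seg.mean()) ** 2)
        precision = rng.gamma(shape=m / 2.0, scale=2.0 / ss, size=n_draws)  # 1/var ~ Gamma(m/2, ss/2)
        var = 1.0 / precision
        mean = rng.normal(seg.mean(), np.sqrt(var / m))                     # mean | var ~ Normal(ybar, var/m)
        return mean, var

    (mu, sigma2), (eta, phi2) = draw(y[:k]), draw(y[k:])
    return mu, sigma2, eta, phi2
```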

4.2.5. Likelihood Ratio Test and Likelihood Function

As the expected value of a Normal distribution is the mean, the following likelihood ratio test is applied for change point detection:
$$f(k \mid y, \mu, \eta, \sigma^2, \phi^2) = \frac{L(Y; k, \mu, \eta)}{\sum_{j=1}^{n} L(Y; j, \mu, \eta)}$$
The likelihood function for the expected value of a Normal distribution is determined as
$$L(Y; k, \mu, \eta) = \exp\big(k(\eta - \mu)\big) \left(\frac{\mu}{\eta}\right)^{\sum_{i=1}^{k} y_i}$$
For the probabilistic method, MATLAB was used for change point detection of the particulate matter ($PM_{2.5}$ and $PM_{10}$) data during the study period 2004–2013 for four different sites (Guro, Nowon, Songpa and Yongsan) in Seoul, South Korea. Ten replications of each simulation were performed with 1100 observations in each replication. The first 100 observations were discarded as a burn-in period. The replication mean $\bar{V}_h$ of the remaining 1000 observations was computed for each replication, as shown in Table 5 and Table 6. The mean ($\bar{V}$) of the replication means was used to obtain the converged values of the parameters.
Moreover, the CUSUM charts of the particulate matter ($PM_{2.5}$ and $PM_{10}$) concentrations are shown in Figure 20, Figure 21, Figure 22, Figure 23, Figure 24, Figure 25, Figure 26 and Figure 27 for the four different sites Guro, Nowon, Songpa and Yongsan in Seoul, South Korea.
In addition, the bootstrap analyses of the CUSUM charts are shown in Figure 28, Figure 29, Figure 30, Figure 31, Figure 32, Figure 33, Figure 34 and Figure 35.
The value of $k$ is uniform over $y_1, \ldots, y_n$.

5. Results

Summarized forms of the particulate matter ($PM_{2.5}$ and $PM_{10}$) change point ($k$), the parameters before a change point (mean $\mu$, variance $\sigma^2$) and the parameters after a change point (mean $\eta$, variance $\phi^2$) during the study period 2004–2013 for four different sites (Guro, Nowon, Songpa and Yongsan) in Seoul, South Korea, are given in Table 7, Table 8, Table 9 and Table 10. The results were computed using the numerical example of the mathematical model given in Section 4. The annual particulate matter trend in Seoul exhibited at the official Air Korea website is shown in Figure 36, which shows a decreasing trend during 2004–2013. These particulate matter concentrations are given in μg/m$^3$ [39].

5.1. $PM_{2.5}$ Change Point (k) through Probabilistic Method

Table 7 describes the results obtained through the probabilistic method, where ($k$) is the predicted change point, which varies for the different areas. The results indicate a reduction of the $PM_{2.5}$ concentrations after the change point ($k$), where ($\mu$) represents the mean concentration before the change point ($k$) and ($\eta$) the mean concentration after the change point ($k$). The variance before the change point ($\sigma^2$) and the variance after the change point ($\phi^2$) were determined through the Inverse-Gamma distribution with conjugate hyper-parameters.

5.2. $PM_{2.5}$ Last Point before Change (k) and First Point after Change (k + 1) through CUSUM Approach

Table 8 presents the results obtained for $PM_{2.5}$ through the CUSUM approach, where ($k$) is the last point before the change and ($k+1$) is the first point after the change; thus, the change point lies somewhere between ($k$) and ($k+1$). This method also shows a reduction of the $PM_{2.5}$ concentrations after the change point, as ($\mu$) represents the mean concentration before the change point and ($\eta$) the pollutant concentration after the change point. The variance before the change point ($\sigma^2$) and the variance after the change point ($\phi^2$) were determined through the formula $\sigma^2 = \sum (X_i - \text{Mean})^2 / n$.

5.3. $PM_{10}$ Change Point (k) through Probabilistic Method

Table 9 presents the results obtained for $PM_{10}$ through the probabilistic method. Here, the expected change point ($k$) differs for the different areas. These results show a reduction of the $PM_{10}$ concentrations after the change point ($k$), where ($\mu$) is the mean $PM_{10}$ concentration before the change point ($k$) and ($\eta$) represents the mean $PM_{10}$ concentration after the change point ($k$). The variance before the change point ($\sigma^2$) and the variance after the change point ($\phi^2$) were determined through the Inverse-Gamma distribution with conjugate hyper-parameters.

5.4. $PM_{10}$ Last Point before Change (k) and First Point after Change (k + 1) through CUSUM Approach

The results obtained for $PM_{10}$ through the CUSUM approach are described in Table 10, where the last point before the change is ($k$) and the first point after the change is ($k+1$); therefore, the change point lies anywhere between ($k$) and ($k+1$). This method also depicts a reduction of the $PM_{10}$ concentrations after the change point: ($\mu$) represents the $PM_{10}$ concentration before the change point and ($\eta$) the $PM_{10}$ concentration after the change point. The variance before the change point ($\sigma^2$) and the variance after the change point ($\phi^2$) were determined through the formula $\sigma^2 = \sum (X_i - \text{Mean})^2 / n$.

6. Discussion

6.1. Guro (Seoul, South Korea)

Guro is located in the southwestern part of Seoul and has an essential location as a transport link for railroads and land routes. The largest digital industrial complex in Korea is located in Guro. The policies of the Ministry of Environment in South Korea have decreased the particulate matter ($PM_{2.5}$ and $PM_{10}$) concentrations and the occurrence of polluted days in Guro.

6.1.1. Probabilistic Method

Table 7 presents a reduction of 16.02% in the particulate matter $PM_{2.5}$ concentration in Guro, from mean $\mu = 0.02931$ mg/m$^3$ to mean $\eta = 0.02461$ mg/m$^3$, with a change point $k = 1228$. This point occurred on 13 July 2010 within the seven years of data from March 2007 to December 2013. Therefore, the expected pollutant concentration before 13 July 2010 was 0.02931, which changed to 0.02461 after 13 July 2010. On the other hand, the variance changed from $\sigma^2 = 4.14015 \times 10^{-30}$ to $\phi^2 = 1.39483 \times 10^{-30}$. Similarly, Table 7 indicates an 18.88% reduction in $PM_{10}$ concentration from mean $\mu = 0.05789$ mg/m$^3$ to mean $\eta = 0.04696$ mg/m$^3$ with a change point $k = 2025.19$. This change point occurred on 19 October 2009 in the period 2004–2013 and involved a change in variance from $\sigma^2 = 5.05815 \times 10^{-29}$ to $\phi^2 = 1.34766 \times 10^{-29}$.

6.1.2. CUSUM Approach

The CUSUM approach also indicates a reduction in the $PM_{2.5}$ and $PM_{10}$ concentrations from ($\mu$) to ($\eta$) after the change. For Guro, Table 8 and Table 10 depict the change of the $PM_{2.5}$ and $PM_{10}$ concentrations through the CUSUM approach, respectively: the change point for the $PM_{2.5}$ concentrations lies between point 1570 ($k$) and 1571 ($k+1$), and it occurred between point 1836 ($k$) and 1837 ($k+1$) for the $PM_{10}$ concentrations.

6.2. Nowon (Seoul, South Korea)

Nowon is positioned in the northeastern part of Seoul and has the highest population density in Seoul, with 619,509 persons living in 35.44 km$^2$. The area is surrounded by mountains and forests on the northeast. The policies of the Ministry of Environment in Nowon have improved the particulate matter ($PM_{2.5}$ and $PM_{10}$) concentrations from ($\mu$, $\sigma^2$) to ($\eta$, $\phi^2$). The improvement in the reduction of pollutant concentrations varies and is greater than that in Guro.

6.2.1. Probabilistic Method

For Nowon, Table 7 and Table 9 depict the change points $k = 1502.84$ and $k = 1860.49$ for the $PM_{2.5}$ and $PM_{10}$ concentrations, respectively. The change point ($k = 1502.84$) for $PM_{2.5}$ occurred on 10 August 2009 for the period March 2005–December 2013. The parameters ($\mu = 0.02950$ mg/m$^3$, $\sigma^2 = 1.13558 \times 10^{-30}$) and ($\eta = 0.02354$ mg/m$^3$, $\phi^2 = 5.483837 \times 10^{-31}$) indicate a 20.21% reduction in the $PM_{2.5}$ expected value after the change point, while there was a minor change in the variance parameter. Similarly, the change point ($k = 1860.49$) for $PM_{10}$ occurred on 18 April 2009 for the period 2004–2013, and a 23.38% reduction in the pollutant expectation was observed, from $\mu = 0.05706$ mg/m$^3$ to $\eta = 0.04372$ mg/m$^3$, with a change in variance from $\sigma^2 = 1.10597 \times 10^{-29}$ to $\phi^2 = 4.24382 \times 10^{-30}$.

6.2.2. CUSUM Approach

Moreover, the CUSUM approach also validates the reduction of the PM concentrations and polluted days. In the case of Nowon, Table 8 and Table 10 depict the change of the $PM_{2.5}$ and $PM_{10}$ concentrations through the CUSUM approach, respectively: the change point for the $PM_{2.5}$ concentrations lies between point 1474 ($k$) and 1475 ($k+1$), and it occurred between point 1952 ($k$) and 1953 ($k+1$) for the $PM_{10}$ concentrations.

6.3. Songpa (Seoul, South Korea)

Songpa is situated in the southeastern part of Seoul and has the largest population, with 647,000 residents. As per the Ministry of Environment policies in Songpa, there was a significant reduction in pollutant concentrations from ($\mu$, $\sigma^2$) to ($\eta$, $\phi^2$).

6.3.1. Probabilistic Method

For Songpa, Table 7 shows that the change point ($k$) for the $PM_{2.5}$ pollutant concentration was 1745.82, which occurred on 23 March 2009. The reduction in the $PM_{2.5}$ expectation was 12.65%, from $\mu = 0.02812$ mg/m$^3$ to $\eta = 0.02456$ mg/m$^3$, while the variance changed from $\sigma^2 = 9.79587 \times 10^{-31}$ to $\phi^2 = 1.93175 \times 10^{-30}$. Correspondingly, in Table 9, a 24.26% improvement in the mean $PM_{10}$ concentration, from $\mu = 0.05758$ mg/m$^3$ to $\eta = 0.04361$ mg/m$^3$, occurred after the change point ($k = 2025.53$) on 1 January 2010, while the variance changed from $\sigma^2 = 7.0782 \times 10^{-29}$ to $\phi^2 = 5.12048 \times 10^{-30}$.

6.3.2. CUSUM Approach

As per the CUSUM approach, there is a decrease in the PM concentrations. Table 8 and Table 10 depict the change of the $PM_{2.5}$ and $PM_{10}$ concentrations through the CUSUM approach, respectively: the change point for the $PM_{2.5}$ concentrations lies between point 1455 ($k$) and 1456 ($k+1$), and it occurred between point 1515 ($k$) and 1516 ($k+1$) for the $PM_{10}$ concentrations.

6.4. Yongsan (Seoul, South Korea)

Yongsan is located at the center of Seoul and is home to almost 250,000 people. Prominent locations in Yongsan include Yongsan station, an electronics market and the Itaewon commercial area, all with heavy traffic and transportation. Consequently, the policies of the Ministry of Environment in Yongsan have affected the particulate matter (PM2.5 and PM10) concentrations, producing a remarkable decrease from (μ, σ²) to (η, ϕ²).

6.4.1. Probabilistic Method

Similarly, for Yongsan, Table 7 and Table 9 show that the particulate matter (PM2.5 and PM10) concentrations changed from (μ, σ²) to (η, ϕ²) after the change point (k). The change point (k = 1501.15) for the PM2.5 concentration occurred on 19th August 2008, with an 18.97% reduction from (μ = 0.03066 mg/m³) to (η = 0.02485 mg/m³) and a change in variance from (σ² = 1.01342 × 10⁻³⁰) to (ϕ² = 3.22178 × 10⁻³⁰). Correspondingly, the change point (k = 2019.46) for the PM10 concentration, which occurred on 24th November 2009, produced a 23.79% reduction in the expected value, from (μ = 0.05970 mg/m³) to (η = 0.04550 mg/m³), while the variance changed from (σ² = 4.50556 × 10⁻²⁹) to (ϕ² = 5.79728 × 10⁻³⁰).

6.4.2. CUSUM Approach

The CUSUM approach is applied directly to the raw data, which makes it better suited to deterministic data structures; it also shows a reduction in pollutant concentrations. For Yongsan, Table 8 and Table 10 present the changes in PM2.5 and PM10 concentrations detected by the CUSUM approach, respectively: the change point for the PM2.5 concentration lies between point 1738 (k) and point 1739 (k + 1), and for the PM10 concentration it lies between point 1795 (k) and point 1796 (k + 1).

6.5. Managerial Insights

  • This model presents a suitable technique for change point detection in diversely distributed data structures for all kinds of stochastic processes.
  • By detecting change points in different areas, such as climate change detection, human activity analysis and medical condition monitoring, and by analyzing the parameters before and after the change points, the results of legislative efforts can be understood and it can be determined whether these change points are favorable.
  • A comparison of the parameters before and after a change point evaluates the performance relative to the previous status, which is also helpful for predicting future behavior under the current strategies.
  • This study of change point detection also defines the current levels of the area under study, which is helpful for designing new policies for further improvement.
  • This research provides guidance for defining new goals if previously defined goals have been achieved and indicates whether the standards need to be revised to overcome upcoming challenges.

7. Conclusions

The key motivation of this research was to develop an appropriate change point detection model for diversely distributed data structures. The probabilistic method was verified with the CUSUM approach, and the results of the two methods were compared. The proposed methodology is based on probability distributions and is better suited to random data structures and time series, whereas the CUSUM approach is applied directly to the raw data and is therefore more appropriate for deterministic data structures. The model is applicable to various stochastic processes because different data structures follow different probability distributions. The parameter expectations before and after a change point were also estimated to measure the effectiveness of the policies applied. To verify the model, four major locations in Seoul, South Korea (Guro, Nowon, Songpa and Yongsan) were chosen as study areas in view of their different characteristics, such as climate zone, environment, population and population density. In all cases, the results were calculated and conclusions were drawn by applying the model to real-time data sets. The parameters before and after the change points of the particulate matter concentrations indicated a reduction in pollutant concentrations over a 10-year period. The annual particulate matter trend in Seoul published on the Air Korea official website, shown in Figure 36, exhibits a similar decreasing trend during 2004–2013. The overall outcomes of this study indicate the effectiveness of the policies applied to reduce pollutant concentrations over time; nevertheless, further reduction in PM concentrations is required to achieve the set standards. This study can be further extended by locating change segments through multiple change points.

Author Contributions

Conceptualization, M.R.K. and B.S.; Data curation, M.R.K.; Formal analysis, M.R.K. and B.S.; Funding acquisition, B.S.; Investigation, M.R.K.; Methodology, M.R.K.; Project administration, B.S.; Resources, B.S.; Software, M.R.K.; Supervision, B.S.; Validation, M.R.K. and B.S.; Visualization, M.R.K. and B.S.; Writing—original draft, M.R.K.; Writing—review and editing, B.S.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Aminikhanghahi, S.; Cook, D.J. A survey of methods for time series change point detection. Knowl. Inf. Syst. 2017, 51, 339–367. [Google Scholar] [CrossRef] [PubMed]
  2. Reeves, J.; Chen, J.; Wang, X.L.; Lund, R.; Lu, Q.Q. A review and comparison of changepoint detection techniques for climate data. J. Appl. Meteorol. Climatol. 2007, 46, 900–915. [Google Scholar] [CrossRef]
  3. Taylor, W. Change-point analysis: A powerful tool for detecting changes. Retriev. July 2000, 5, 2012. [Google Scholar]
  4. Cabrieto, J.; Tuerlinckx, F.; Kuppens, P.; Wilhelm, F.H.; Liedlgruber, M.; Ceulemans, E. Capturing correlation changes by applying kernel change point detection on the running correlations. Inf. Sci. 2018, 447, 117–139. [Google Scholar] [CrossRef]
  5. Keshavarz, H.; Scott, C.; Nguyen, X. Optimal change point detection in Gaussian processes. J. Stat. Plan. Inference 2018, 193, 151–178. [Google Scholar] [CrossRef] [Green Version]
  6. Kucharczyk, D.; Wyłomańska, A.; Sikora, G. Variance change point detection for fractional Brownian motion based on the likelihood ratio test. Phys. A Stat. Mech. Its Appl. 2018, 490, 439–450. [Google Scholar] [CrossRef]
  7. Sarkar, B. A production-inventory model with probabilistic deterioration in two-echelon supply chain management. Appl. Math. Model. 2013, 37, 3138–3151. [Google Scholar] [CrossRef]
  8. Lu, G.; Zhou, Y.; Lu, C.; Li, X. A novel framework of change-point detection for machine monitoring. Mech. Syst. Signal Process. 2017, 83, 533–548. [Google Scholar] [CrossRef]
  9. Khan, M.R.; Sarkar, B. Change Point Detection for Airborne Particulate Matter (PM2.5, PM10) by Using the Bayesian Approach. Mathematics 2019, 7, 474. [Google Scholar] [CrossRef]
  10. Liu, S.; Yamada, M.; Collier, N.; Sugiyama, M. Change-point detection in time-series data by relative density-ratio estimation. Neural Netw. 2013, 43, 72–83. [Google Scholar] [CrossRef] [Green Version]
  11. Hilgert, N.; Verdier, G.; Vila, J.P. Change detection for uncertain autoregressive dynamic models through nonparametric estimation. Stat. Methodol. 2016, 33, 96–113. [Google Scholar] [CrossRef] [Green Version]
  12. Sarkar, B.; Sana, S.S.; Chaudhuri, K. An economic production quantity model with stochastic demand in an imperfect production system. Int. J. Serv. Oper. Manag. 2011, 9, 259–283. [Google Scholar] [CrossRef]
  13. Górecki, T.; Horváth, L.; Kokoszka, P. Change point detection in heteroscedastic time series. Econom. Stat. 2017, 7, 63–88. [Google Scholar] [CrossRef]
  14. Bucchia, B.; Wendler, M. Change-point detection and bootstrap for Hilbert space valued random fields. J. Multivar. Anal. 2017, 155, 344–368. [Google Scholar] [CrossRef] [Green Version]
  15. Taleizadeh, A.A.; Samimi, H.; Sarkar, B.; Mohammadi, B. Stochastic machine breakdown and discrete delivery in an imperfect inventory-production system. J. Ind. Manag. Optim. 2017, 13, 1511–1535. [Google Scholar] [CrossRef] [Green Version]
  16. Zhou, M.; Wang, H.J.; Tang, Y. Sequential change point detection in linear quantile regression models. Stat. Probab. Lett. 2015, 100, 98–103. [Google Scholar] [CrossRef]
  17. Kim, S.J.; Sarkar, B. Supply chain model with stochastic lead time, trade-credit financing, and transportation discounts. Math. Probl. Eng. 2017, 2017. [Google Scholar] [CrossRef]
  18. Lu, K.P.; Chang, S.T. Detecting change-points for shifts in mean and variance using fuzzy classification maximum likelihood change-point algorithms. J. Comput. Appl. Math. 2016, 308, 447–463. [Google Scholar] [CrossRef]
  19. Ruggieri, E.; Antonellis, M. An exact approach to Bayesian sequential change point detection. Comput. Stat. Data Anal. 2016, 97, 71–86. [Google Scholar] [CrossRef] [Green Version]
  20. Ahmed, E.; Kim, K.H.; Shon, Z.H.; Song, S.K. Long-term trend of airborne particulate matter in Seoul, Korea from 2004 to 2013. Atmos. Environ. 2015, 101, 125–133. [Google Scholar] [CrossRef]
  21. Kim, K.H.; Pandey, S.K.; Nguyen, H.T.; Chung, S.Y.; Cho, S.J.; Kim, M.Y.; Oh, J.M.; Sunwoo, Y. Long-term behavior of particulate matters at urban roadside and background locations in Seoul, Korea. Transp. Res. Part D Transp. Environ. 2010, 15, 168–174. [Google Scholar] [CrossRef]
  22. Gupta, A.; Baker, J.W. Estimating spatially varying event rates with a change point using Bayesian statistics: Application to induced seismicity. Struct. Saf. 2017, 65, 1–11. [Google Scholar] [CrossRef]
  23. Bardwell, L.; Fearnhead, P. Bayesian detection of abnormal segments in multiple time series. Bayesian Anal. 2017, 12, 193–218. [Google Scholar] [CrossRef]
  24. Sarkar, M.; Sarkar, B. An economic manufacturing quantity model with probabilistic deterioration in a production system. Econ. Model. 2013, 31, 245–252. [Google Scholar] [CrossRef]
  25. Keshavarz, M.; Huang, B. Bayesian and Expectation Maximization methods for multivariate change point detection. Comput. Chem. Eng. 2014, 60, 339–353. [Google Scholar] [CrossRef]
  26. Moon, I.; Shin, E.; Sarkar, B. Min–max distribution free continuous-review model with a service level constraint and variable lead time. Appl. Math. Comput. 2014, 229, 310–315. [Google Scholar] [CrossRef]
  27. Kurt, B.; Yıldız, Ç.; Ceritli, T.Y.; Sankur, B.; Cemgil, A.T. A Bayesian change point model for detecting SIP-based DDoS attacks. Digit. Signal Process. 2018, 77, 48–62. [Google Scholar] [CrossRef]
  28. Wu, C.; Du, B.; Cui, X.; Zhang, L. A post-classification change detection method based on iterative slow feature analysis and Bayesian soft fusion. Remote Sens. Environ. 2017, 199, 241–255. [Google Scholar] [CrossRef]
  29. Sarkar, B.; Cárdenas-Barrón, L.E.; Sarkar, M.; Singgih, M.L. An economic production quantity model with random defective rate, rework process and backorders for a single stage production system. J. Manuf. Syst. 2014, 33, 423–435. [Google Scholar] [CrossRef]
  30. Mariño, I.P.; Blyuss, O.; Ryan, A.; Gentry-Maharaj, A.; Timms, J.F.; Dawnay, A.; Kalsi, J.; Jacobs, I.; Menon, U.; Zaikin, A. Change-point of multiple biomarkers in women with ovarian cancer. Biomed. Signal Process. Control 2017, 33, 169–177. [Google Scholar] [CrossRef] [Green Version]
  31. Sarkar, B.; Singh, L.K.; Sarkar, D. Approximation of digital curves with line segments and circular arcs using genetic algorithms. Pattern Recognit. Lett. 2003, 24, 2585–2595. [Google Scholar] [CrossRef]
  32. Jeon, J.J.; Sung, J.H.; Chung, E.S. Abrupt change point detection of annual maximum precipitation using fused lasso. J. Hydrol. 2016, 538, 831–841. [Google Scholar] [CrossRef]
  33. Kang, C.W.; Ullah, M.; Sarkar, B.; Hussain, I.; Akhtar, R. Impact of random defective rate on lot size focusing work-in-process inventory in manufacturing system. Int. J. Prod. Res. 2017, 55, 1748–1766. [Google Scholar] [CrossRef]
  34. Chen, S.; Li, Y.; Kim, J.; Kim, S.W. Bayesian change point analysis for extreme daily precipitation. Int. J. Climatol. 2017, 37, 3123–3137. [Google Scholar] [CrossRef]
  35. Gelman, A. Inference and monitoring convergence. In Markov Chain Monte Carlo in Practice; CRC Press: Boca Raton, FL, USA, 1996; pp. 131–143. [Google Scholar]
  36. Gelman, A.; Rubin, D.B. Inference from iterative simulation using multiple sequences. Stat. Sci. 1992, 7, 457–472. [Google Scholar] [CrossRef]
  37. Gelman, A.; Rubin, D.B. A single series from the Gibbs sampler provides a false sense of security. Bayesian Stat. 1992, 4, 625–631. [Google Scholar]
  38. Cowles, M.K.; Carlin, B.P. Markov chain Monte Carlo convergence diagnostics: A comparative review. J. Am. Stat. Assoc. 1996, 91, 883–904. [Google Scholar] [CrossRef]
  39. Air Korea, Annual Air Quality Trends. Available online: https://www.airkorea.or.kr/eng/annualAirQualityTrends?pMENU_NO=161 (accessed on 26 July 2019).
Figure 1. Artificial data set for rate (Poisson distribution).
Figure 2. Artificial data set for Poisson distribution before change point.
Figure 3. Artificial data set for Poisson distribution after change point.
Figure 4. Artificial data set time series for Poisson distribution.
Figure 5. Change point (k).
Figure 6. Change point (k) frequency histogram.
Figure 7. Change point (k) density histogram.
Figure 8. Rate before change point.
Figure 9. Rate before change point density histogram.
Figure 10. Rate after change point.
Figure 11. Rate after change point density histogram.
Figure 12. Guro PM2.5 data.
Figure 13. Guro PM10 data.
Figure 14. Nowon PM2.5 data.
Figure 15. Nowon PM10 data.
Figure 16. Songpa PM2.5 data.
Figure 17. Songpa PM10 data.
Figure 18. Yongsan PM2.5 data.
Figure 19. Yongsan PM10 data.
Figure 20. CUSUM chart for Guro PM2.5.
Figure 21. CUSUM chart for Guro PM10.
Figure 22. CUSUM chart for Nowon PM2.5.
Figure 23. CUSUM chart for Nowon PM10.
Figure 24. CUSUM chart for Songpa PM2.5.
Figure 25. CUSUM chart for Songpa PM10.
Figure 26. CUSUM chart for Yongsan PM2.5.
Figure 27. CUSUM chart for Yongsan PM10.
Figure 28. CUSUM chart for Guro PM2.5 plus 10 bootstraps.
Figure 29. CUSUM chart for Guro PM10 plus 10 bootstraps.
Figure 30. CUSUM chart for Nowon PM2.5 plus 10 bootstraps.
Figure 31. CUSUM chart for Nowon PM10 plus 10 bootstraps.
Figure 32. CUSUM chart for Songpa PM2.5 plus 10 bootstraps.
Figure 33. CUSUM chart for Songpa PM10 plus 10 bootstraps.
Figure 34. CUSUM chart for Yongsan PM2.5 plus 10 bootstraps.
Figure 35. CUSUM chart for Yongsan PM10 plus 10 bootstraps.
Figure 36. Annual particulate matter trend in Seoul (Air Korea).
Table 1. Previous studies on this topic.

| Document | Change-Point Detection | BCPD (Bayesian Change-Point Detection) | Time Series |
| Cabrieto et al. (2018) [4] | Kernel change point detection, correlation changes | – | – |
| Keshavarz et al. (2018) [5] | Generalized likelihood ratio test, one-dimensional Gaussian process | – | – |
| Kucharczyk et al. (2018) [6] | Likelihood ratio test, fractional Brownian motion | – | – |
| Lu et al. (2017) [8] | Change-point detection, machine monitoring, anomaly measure (AR model), Martingale test | – | Time series |
| Hilgert et al. (2016) [11] | On-line change detection, autoregressive dynamic models, CUSUM-like scheme | – | – |
| Górecki et al. (2017) [13] | Change point detection, heteroscedastic, Karhunen–Loève expansions | – | Time series |
| Bucchia and Wendler (2017) [14] | Change point detection, bootstrap, Hilbert space valued random fields (Cramér–von Mises type test) | – | Time series |
| Zhou et al. (2015) [16] | Sequential change point detection, linear quantile regression models | – | – |
| Lu and Chang (2016) [18] | Detecting change points, mean/variance shifts, FCML-CP algorithm | – | – |
| Liu et al. (2013) [10] | Change point detection, relative density ratio estimation | – | Time series |
| Ruggieri and Antonellis (2016) [19] | – | Bayesian sequential change point detection | – |
| Keshavarz and Huang (2014) [25] | – | Bayesian and Expectation Maximization methods, multivariate change point detection | – |
| Kurt et al. (2018) [27] | – | Bayesian change point model, SIP-based DDoS attack detection | – |
| Wu et al. (2017) [28] | – | Post-classification change detection, iterative slow feature analysis, Bayesian soft fusion | – |
| Gupta and Baker (2017) [22] | – | Spatial event rates, change point, Bayesian statistics, induced seismicity | – |
| Marino et al. (2017) [30] | – | Change point, multiple biomarkers, ovarian cancer | – |
| Jeon et al. (2016) [32] | – | Abrupt change point detection, annual maximum precipitation, fused lasso | – |
| Bardwell and Fearnhead (2017) [23] | – | Bayesian detection, abnormal segments | Time series |
| Chen et al. (2017) [34] | – | Bayesian change point analysis, extreme daily precipitation | – |
| This study | Change point detection | Bayesian approach and likelihood ratio test for change point detection | Time series |
Table 2. Change point (k) for artificial data set.

| Rate before change (θ) | Rate after change (λ) | Change point (K) |
| 5.0 | 2.5 | 50 |
Table 3. PM2.5 Normal distribution.

| Area | μ (mean, location) | σ² (variance: scale or deviation) |
| Guro | 0.02675 | 0.01614 |
| Nowon | 0.02652 | 0.01551 |
| Songpa | 0.02638 | 0.01544 |
| Yongsan | 0.02649 | 0.01621 |
Table 4. PM10 Normal distribution.

| Area | μ (mean, location) | σ² (variance: scale or deviation) |
| Guro | 0.05340 | 0.03644 |
| Nowon | 0.05036 | 0.03331 |
| Songpa | 0.05193 | 0.03811 |
| Yongsan | 0.05379 | 0.03958 |
Table 5. PM2.5 converged values of parameters (probabilistic method). Each entry is the replication mean; the final row is the mean of the 10 replications (V̄).

| Replication | Guro: μ | Guro: η | Guro: σ² | Guro: ϕ² | Guro: K | Nowon: μ | Nowon: η | Nowon: σ² | Nowon: ϕ² | Nowon: K |
| V1 | 0.0293 | 0.0246 | 3.98 × 10⁻³⁰ | 1.41 × 10⁻³⁰ | 1229.02 | 0.0296 | 0.0235 | 1.17 × 10⁻³⁰ | 5.16 × 10⁻³¹ | 1497.30 |
| V2 | 0.0292 | 0.0246 | 4.37 × 10⁻³⁰ | 1.28 × 10⁻³⁰ | 1261.72 | 0.0296 | 0.0235 | 1.16 × 10⁻³⁰ | 5.17 × 10⁻³¹ | 1529.49 |
| V3 | 0.0294 | 0.0246 | 4.14 × 10⁻³⁰ | 1.46 × 10⁻³⁰ | 1199.08 | 0.0296 | 0.0236 | 1.16 × 10⁻³⁰ | 5.62 × 10⁻³¹ | 1475.20 |
| V4 | 0.0294 | 0.0246 | 4.00 × 10⁻³⁰ | 1.40 × 10⁻³⁰ | 1214.48 | 0.0295 | 0.0236 | 1.12 × 10⁻³⁰ | 5.46 × 10⁻³¹ | 1485.39 |
| V5 | 0.0292 | 0.0246 | 4.40 × 10⁻³⁰ | 1.34 × 10⁻³⁰ | 1232.26 | 0.0295 | 0.0235 | 1.16 × 10⁻³⁰ | 5.23 × 10⁻³¹ | 1527.50 |
| V6 | 0.0294 | 0.0246 | 3.75 × 10⁻³⁰ | 1.48 × 10⁻³⁰ | 1202.37 | 0.0295 | 0.0236 | 1.09 × 10⁻³⁰ | 5.75 × 10⁻³¹ | 1478.39 |
| V7 | 0.0293 | 0.0247 | 3.20 × 10⁻³⁰ | 1.45 × 10⁻³⁰ | 1236.93 | 0.0294 | 0.0236 | 1.11 × 10⁻³⁰ | 5.72 × 10⁻³¹ | 1497.28 |
| V8 | 0.0292 | 0.0246 | 4.36 × 10⁻³⁰ | 1.32 × 10⁻³⁰ | 1248.80 | 0.0295 | 0.0235 | 1.14 × 10⁻³⁰ | 4.95 × 10⁻³¹ | 1524.84 |
| V9 | 0.0293 | 0.0245 | 4.13 × 10⁻³⁰ | 1.33 × 10⁻³⁰ | 1253.42 | 0.0295 | 0.0235 | 1.12 × 10⁻³⁰ | 5.32 × 10⁻³¹ | 1525.16 |
| V10 | 0.0294 | 0.0247 | 4.07 × 10⁻³⁰ | 1.49 × 10⁻³⁰ | 1206.30 | 0.0294 | 0.0236 | 1.13 × 10⁻³⁰ | 5.98 × 10⁻³¹ | 1487.82 |
| Mean of 10 replications (V̄) | 0.0293 | 0.0246 | 4.14 × 10⁻³⁰ | 1.39 × 10⁻³⁰ | 1228.44 | 0.0295 | 0.0235 | 1.14 × 10⁻³⁰ | 5.44 × 10⁻³¹ | 1502.84 |

| Replication | Songpa: μ | Songpa: η | Songpa: σ² | Songpa: ϕ² | Songpa: K | Yongsan: μ | Yongsan: η | Yongsan: σ² | Yongsan: ϕ² | Yongsan: K |
| V1 | 0.0280 | 0.0245 | 1.01 × 10⁻³⁰ | 1.89 × 10⁻³⁰ | 1788.62 | 0.0302 | 0.0248 | 1.12 × 10⁻³⁰ | 3.26 × 10⁻³⁰ | 1563.53 |
| V2 | 0.0281 | 0.0245 | 9.82 × 10⁻³¹ | 1.71 × 10⁻³⁰ | 1753.39 | 0.0302 | 0.0248 | 1.09 × 10⁻³⁰ | 3.25 × 10⁻³⁰ | 1534.23 |
| V3 | 0.0282 | 0.0245 | 9.45 × 10⁻³¹ | 1.92 × 10⁻³⁰ | 1709.28 | 0.0308 | 0.0249 | 1.00 × 10⁻³⁰ | 3.33 × 10⁻³⁰ | 1457.41 |
| V4 | 0.0281 | 0.0246 | 1.05 × 10⁻³⁰ | 1.94 × 10⁻³⁰ | 1725.05 | 0.0308 | 0.0249 | 9.87 × 10⁻³¹ | 3.19 × 10⁻³⁰ | 1482.48 |
| V5 | 0.0281 | 0.0245 | 9.19 × 10⁻³¹ | 1.97 × 10⁻³⁰ | 1774.62 | 0.0303 | 0.0249 | 9.59 × 10⁻³¹ | 2.95 × 10⁻³⁰ | 1543.11 |
| V6 | 0.0282 | 0.0246 | 9.35 × 10⁻³¹ | 2.16 × 10⁻³⁰ | 1702.20 | 0.0308 | 0.0249 | 9.76 × 10⁻³¹ | 3.27 × 10⁻³⁰ | 1476.89 |
| V7 | 0.0282 | 0.0247 | 9.58 × 10⁻³¹ | 1.90 × 10⁻³⁰ | 1732.44 | 0.0313 | 0.0249 | 9.10 × 10⁻³¹ | 3.24 × 10⁻³⁰ | 1458.11 |
| V8 | 0.0280 | 0.0246 | 1.01 × 10⁻³⁰ | 1.80 × 10⁻³⁰ | 1768.74 | 0.0303 | 0.0248 | 1.10 × 10⁻³⁰ | 3.30 × 10⁻³⁰ | 1552.81 |
| V9 | 0.0281 | 0.0245 | 1.02 × 10⁻³⁰ | 1.96 × 10⁻³⁰ | 1785.43 | 0.0304 | 0.0248 | 1.02 × 10⁻³⁰ | 3.04 × 10⁻³⁰ | 1545.42 |
| V10 | 0.0282 | 0.0245 | 9.74 × 10⁻³¹ | 2.06 × 10⁻³⁰ | 1718.40 | 0.0316 | 0.0249 | 9.77 × 10⁻³¹ | 3.38 × 10⁻³⁰ | 1397.51 |
| Mean of 10 replications (V̄) | 0.0281 | 0.0246 | 9.80 × 10⁻³¹ | 1.93 × 10⁻³⁰ | 1745.82 | 0.0307 | 0.0248 | 1.01 × 10⁻³⁰ | 3.22 × 10⁻³⁰ | 1501.15 |
Table 6. PM10 converged values of parameters (probabilistic method). Each entry is the replication mean; the final row is the mean of the 10 replications (V̄).

| Replication | Guro: μ | Guro: η | Guro: σ² | Guro: ϕ² | Guro: K | Nowon: μ | Nowon: η | Nowon: σ² | Nowon: ϕ² | Nowon: K |
| V1 | 0.0580 | 0.0472 | 9.96 × 10⁻³⁰ | 5.48 × 10⁻²⁹ | 1951.13 | 0.0571 | 0.0439 | 1.08 × 10⁻²⁹ | 4.40 × 10⁻³⁰ | 1815.41 |
| V2 | 0.0578 | 0.0468 | 5.59 × 10⁻²⁹ | 8.24 × 10⁻³⁰ | 2074.00 | 0.0566 | 0.0436 | 1.10 × 10⁻²⁹ | 3.79 × 10⁻³⁰ | 1903.76 |
| V3 | 0.0579 | 0.0471 | 5.29 × 10⁻²⁹ | 9.69 × 10⁻³⁰ | 1985.46 | 0.0572 | 0.0439 | 1.07 × 10⁻²⁹ | 4.80 × 10⁻³⁰ | 1817.89 |
| V4 | 0.0579 | 0.0470 | 5.51 × 10⁻²⁹ | 9.20 × 10⁻³⁰ | 2021.47 | 0.0571 | 0.0437 | 1.15 × 10⁻²⁹ | 4.10 × 10⁻³⁰ | 1865.10 |
| V5 | 0.0578 | 0.0469 | 5.64 × 10⁻²⁹ | 8.29 × 10⁻³⁰ | 2063.70 | 0.0562 | 0.0434 | 1.19 × 10⁻²⁹ | 3.50 × 10⁻³⁰ | 1967.87 |
| V6 | 0.0580 | 0.0470 | 5.52 × 10⁻²⁹ | 9.02 × 10⁻³⁰ | 2015.71 | 0.0575 | 0.0439 | 1.07 × 10⁻²⁹ | 4.49 × 10⁻³⁰ | 1826.11 |
| V7 | 0.0580 | 0.0470 | 5.37 × 10⁻²⁹ | 8.90 × 10⁻³⁰ | 2018.96 | 0.0571 | 0.0438 | 1.12 × 10⁻²⁹ | 4.18 × 10⁻³⁰ | 1848.09 |
| V8 | 0.0578 | 0.0469 | 5.32 × 10⁻²⁹ | 8.72 × 10⁻³⁰ | 2057.58 | 0.0565 | 0.0436 | 1.13 × 10⁻²⁹ | 4.04 × 10⁻³⁰ | 1914.57 |
| V9 | 0.0578 | 0.0467 | 5.88 × 10⁻²⁹ | 8.17 × 10⁻³⁰ | 2076.96 | 0.0571 | 0.0435 | 1.12 × 10⁻²⁹ | 3.99 × 10⁻³⁰ | 1883.78 |
| V10 | 0.0580 | 0.0471 | 5.46 × 10⁻²⁹ | 9.72 × 10⁻³⁰ | 1986.92 | 0.0581 | 0.0440 | 1.03 × 10⁻²⁹ | 5.14 × 10⁻³⁰ | 1762.35 |
| Mean of 10 replications (V̄) | 0.0579 | 0.0470 | 5.06 × 10⁻²⁹ | 1.35 × 10⁻²⁹ | 2025.19 | 0.0571 | 0.0437 | 1.11 × 10⁻²⁹ | 4.24 × 10⁻³⁰ | 1860.49 |

| Replication | Songpa: μ | Songpa: η | Songpa: σ² | Songpa: ϕ² | Songpa: K | Yongsan: μ | Yongsan: η | Yongsan: σ² | Yongsan: ϕ² | Yongsan: K |
| V1 | 0.0578 | 0.0438 | 6.63 × 10⁻²⁹ | 5.39 × 10⁻³⁰ | 1965.25 | 0.0597 | 0.0457 | 4.48 × 10⁻²⁹ | 5.65 × 10⁻³⁰ | 2006.82 |
| V2 | 0.0576 | 0.0435 | 7.07 × 10⁻²⁹ | 4.35 × 10⁻³⁰ | 2034.64 | 0.0596 | 0.0453 | 4.45 × 10⁻²⁹ | 5.33 × 10⁻³⁰ | 2055.41 |
| V3 | 0.0576 | 0.0437 | 6.84 × 10⁻²⁹ | 5.13 × 10⁻³⁰ | 1997.23 | 0.0599 | 0.0457 | 4.41 × 10⁻²⁹ | 6.31 × 10⁻³⁰ | 1968.12 |
| V4 | 0.0577 | 0.0436 | 6.93 × 10⁻²⁹ | 4.90 × 10⁻³⁰ | 2019.75 | 0.0598 | 0.0455 | 4.55 × 10⁻²⁹ | 5.70 × 10⁻³⁰ | 2000.97 |
| V5 | 0.0575 | 0.0435 | 7.30 × 10⁻²⁹ | 3.16 × 10⁻³⁰ | 2068.54 | 0.0595 | 0.0454 | 4.56 × 10⁻²⁹ | 5.35 × 10⁻³⁰ | 2064.62 |
| V6 | 0.0575 | 0.0437 | 7.14 × 10⁻²⁹ | 6.09 × 10⁻³⁰ | 2019.05 | 0.0598 | 0.0456 | 4.37 × 10⁻²⁹ | 6.30 × 10⁻³⁰ | 1996.19 |
| V7 | 0.0576 | 0.0436 | 7.13 × 10⁻²⁹ | 5.60 × 10⁻³⁰ | 2030.35 | 0.0597 | 0.0455 | 4.62 × 10⁻²⁹ | 5.96 × 10⁻³⁰ | 2022.05 |
| V8 | 0.0575 | 0.0434 | 7.25 × 10⁻²⁹ | 4.71 × 10⁻³⁰ | 2055.77 | 0.0596 | 0.0454 | 4.44 × 10⁻²⁹ | 5.38 × 10⁻³⁰ | 2042.75 |
| V9 | 0.0574 | 0.0435 | 7.42 × 10⁻²⁹ | 4.98 × 10⁻³⁰ | 2062.38 | 0.0596 | 0.0453 | 4.75 × 10⁻²⁹ | 5.64 × 10⁻³⁰ | 2055.56 |
| V10 | 0.0575 | 0.0437 | 7.07 × 10⁻²⁹ | 6.89 × 10⁻³⁰ | 2002.31 | 0.0599 | 0.0456 | 4.43 × 10⁻²⁹ | 6.34 × 10⁻³⁰ | 1982.16 |
| Mean of 10 replications (V̄) | 0.0576 | 0.0436 | 7.08 × 10⁻²⁹ | 5.12 × 10⁻³⁰ | 2025.53 | 0.0597 | 0.0455 | 4.51 × 10⁻²⁹ | 5.80 × 10⁻³⁰ | 2019.46 |
Table 7. PM2.5 change point (k) for the Normal distribution, parameters before the change point (mean = μ, variance = σ²), and parameters after the change point (mean = η, variance = ϕ²). Probabilistic method for change point detection (Normal distribution).

| Parameters | Guro | Nowon | Songpa | Yongsan |
| Mean before change point = μ (mg/m³) | 0.02931 | 0.02950 | 0.02812 | 0.03066 |
| Mean after change point = η (mg/m³) | 0.02461 | 0.02354 | 0.02456 | 0.02485 |
| Variance before change point = σ² | 4.14015 × 10⁻³⁰ | 1.13558 × 10⁻³⁰ | 9.79587 × 10⁻³¹ | 1.01342 × 10⁻³⁰ |
| Variance after change point = ϕ² | 1.39483 × 10⁻³⁰ | 5.43837 × 10⁻³¹ | 1.93175 × 10⁻³⁰ | 3.22178 × 10⁻³⁰ |
| Change point = k | 1228.44 | 1502.84 | 1745.82 | 1501.15 |
| Convergence for μ = Rμ | 1.00032 | 1.00020 | 1.00086 | 1.00285 |
| Overall estimate of variance for μ = Var(V)μ | 0.000004 | 0.000003 | 0.000002 | 0.000033 |
| Convergence for η = Rη | 0.99986 | 1.00009 | 1.00012 | 0.99999 |
| Overall estimate of variance for η = Var(V)η | 0.000002 | 0.000002 | 0.000002 | 0.000001 |
| Convergence for σ² = Rσ² | 1.000105 | 0.999798 | 0.999982 | 1.000099 |
| Overall estimate of variance for σ² = Var(V)σ² | 3.41 × 10⁻⁵⁹ | 1.31 × 10⁻⁶⁰ | 1.73 × 10⁻⁶⁰ | 3.81 × 10⁻⁶⁰ |
| Convergence for ϕ² = Rϕ² | 1.000165 | 1.000371 | 0.999849 | 0.999884 |
| Overall estimate of variance for ϕ² = Var(V)ϕ² | 4.31 × 10⁻⁶⁰ | 5.96 × 10⁻⁶¹ | 2.28 × 10⁻⁵⁹ | 2.33 × 10⁻⁵⁹ |
| Convergence for k = Rk | 0.99985 | 0.99995 | 1.00011 | 1.00098 |
| Overall estimate of variance for k = Var(V)k | 698111.33 | 521067.70 | 853426.78 | 1008921.72 |
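The R and Var(V) rows of Tables 7 and 9 are convergence diagnostics computed across the ten replications [35,36,38]. A minimal sketch of the Gelman–Rubin potential scale reduction factor is given below, under the assumption that each replication is treated as a separate chain; the function name is illustrative and this is not necessarily the exact diagnostic the authors computed.

```python
import numpy as np


def gelman_rubin(chains):
    """Gelman-Rubin potential scale reduction factor (R) for one parameter.

    `chains` is an (m, n) array: m replications (chains) with n draws each.
    Returns (R, V), where V is the pooled between/within variance estimate.
    """
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    within = chains.var(axis=1, ddof=1).mean()       # W: mean within-chain variance
    between = n * chain_means.var(ddof=1)            # B: between-chain variance
    pooled = (n - 1) / n * within + between / n      # V: pooled variance estimate
    return np.sqrt(pooled / within), pooled
```

R values close to 1, as reported in Tables 7 and 9, indicate that the replications agree; markedly larger values (e.g., Rϕ² = 1.221403 for Guro PM10 in Table 9) suggest that the corresponding estimate is less stable.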
Table 8. PM2.5 last point before change (k) and first point after change (k + 1) through the CUSUM approach (Normal distribution).

| Area (Seoul, South Korea) | Mean before change = μ (mg/m³) | Mean after change = η (mg/m³) | Variance before change = σ² | Variance after change = ϕ² | K (last point before change) | k + 1 (first point after change) | Sm (most extreme point) | Smax (highest point in CUSUM) | Smin (lowest point in CUSUM) | Sdiff (magnitude of change) | Confidence level (%) |
| Guro | 0.02894 | 0.02301 | 0.00029642 | 0.00017677 | 1570 | 1571 | 3.428 | 3.428 | −0.143 | 3.572 | 100 |
| Nowon | 0.03084 | 0.02242 | 0.00027245 | 0.00017550 | 1474 | 1475 | 6.370 | 6.370 | −0.486 | 6.856 | 100 |
| Songpa | 0.02928 | 0.02420 | 0.00028241 | 0.00019390 | 1455 | 1456 | 4.218 | 4.218 | −0.223 | 4.441 | 100 |
| Yongsan | 0.02884 | 0.02412 | 0.00032178 | 0.00019152 | 1738 | 1739 | 4.076 | 4.076 | −0.167 | 4.243 | 100 |
Table 9. PM10 change point (k) for the Normal distribution, parameters before the change point (mean = μ, variance = σ²), and parameters after the change point (mean = η, variance = ϕ²). Probabilistic method for change point detection (Normal distribution).

| Parameters | Guro | Nowon | Songpa | Yongsan |
| Mean before change point = μ (mg/m³) | 0.05789 | 0.05706 | 0.05758 | 0.05970 |
| Mean after change point = η (mg/m³) | 0.04696 | 0.04372 | 0.04361 | 0.04550 |
| Variance before change point = σ² | 5.05815 × 10⁻²⁹ | 1.10597 × 10⁻²⁹ | 7.07826 × 10⁻²⁹ | 4.50556 × 10⁻²⁹ |
| Variance after change point = ϕ² | 1.34766 × 10⁻²⁹ | 4.24382 × 10⁻³⁰ | 5.12048 × 10⁻³⁰ | 5.79728 × 10⁻³⁰ |
| Change point = k | 2025.19 | 1860.49 | 2025.53 | 2019.46 |
| Convergence for μ = Rμ | 1.00046 | 1.00417 | 1.00059 | 1.00053 |
| Overall estimate of variance for μ = Var(V)μ | 0.00001 | 0.00003 | 0.00001 | 0.00001 |
| Convergence for η = Rη | 0.99980 | 1.00165 | 1.00057 | 1.00038 |
| Overall estimate of variance for η = Var(V)η | 0.00004 | 0.00001 | 0.00001 | 0.00001 |
| Convergence for σ² = Rσ² | 1.038910 | 1.000918 | 1.000627 | 1.000106 |
| Overall estimate of variance for σ² = Var(V)σ² | 2.78 × 10⁻⁵⁷ | 7.15 × 10⁻⁵⁹ | 2.41 × 10⁻⁵⁷ | 1.08 × 10⁻⁵⁷ |
| Convergence for ϕ² = Rϕ² | 1.221403 | 1.001116 | 1.000223 | 1.000568 |
| Overall estimate of variance for ϕ² = Var(V)ϕ² | 6.40 × 10⁻⁵⁸ | 7.18 × 10⁻⁵⁹ | 6.97 × 10⁻⁵⁸ | 7.71 × 10⁻⁵⁹ |
| Convergence for k = Rk | 1.00095 | 1.00245 | 1.00078 | 1.00059 |
| Overall estimate of variance for k = Var(V)k | 625170.52 | 599959.50 | 404584.27 | 522327.71 |
Table 10. PM10 last point before change (k) and first point after change (k + 1) through the CUSUM approach (Normal distribution).

| Area (Seoul, South Korea) | Mean before change = μ (mg/m³) | Mean after change = η (mg/m³) | Variance before change = σ² | Variance after change = ϕ² | K (last point before change) | k + 1 (first point after change) | Sm (most extreme point) | Smax (highest point in CUSUM) | Smin (lowest point in CUSUM) | Sdiff (magnitude of change) | Confidence level (%) |
| Guro | 0.05914 | 0.04721 | 0.00166718 | 0.00088748 | 1836 | 1837 | 10.538 | 10.538 | −0.118 | 10.656 | 100 |
| Nowon | 0.05743 | 0.04153 | 0.00139338 | 0.00061631 | 1952 | 1953 | 13.949 | 13.949 | −0.194 | 14.143 | 100 |
| Songpa | 0.06106 | 0.04484 | 0.00217706 | 0.00077376 | 1515 | 1516 | 13.838 | 13.838 | −0.126 | 13.963 | 100 |
| Yongsan | 0.06105 | 0.04617 | 0.00216982 | 0.00082027 | 1795 | 1796 | 13.046 | 13.046 | −0.108 | 13.154 | 100 |
