Article

Change Point Test for the Conditional Mean of Time Series of Counts Based on Support Vector Regression

Department of Statistics, Seoul National University, Seoul 08826, Korea
* Author to whom correspondence should be addressed.
Entropy 2021, 23(4), 433; https://doi.org/10.3390/e23040433
Submission received: 25 March 2021 / Revised: 2 April 2021 / Accepted: 3 April 2021 / Published: 7 April 2021

Abstract

This study considers support vector regression (SVR) and twin SVR (TSVR) for the time series of counts, wherein the hyperparameters are tuned using the particle swarm optimization (PSO) method. For prediction, we employ the framework of integer-valued generalized autoregressive conditional heteroskedasticity (INGARCH) models. As an application, we consider change point problems, using the cumulative sum (CUSUM) test based on the residuals obtained from the PSO-SVR and PSO-TSVR methods. We conduct Monte Carlo simulation experiments to illustrate the methods' validity with various linear and nonlinear INGARCH models. Subsequently, a real data analysis, with the return times of extreme events constructed based on the daily log-returns of Goldman Sachs stock prices, is conducted to exhibit its scope of application.

1. Introduction

In this study, we developed a forecasting method for the time series of counts based on support vector regression (SVR) with particle swarm optimization (PSO), and used it to detect a change in the conditional mean of the time series based on the cumulative sum (CUSUM) test calculated from integer-valued generalized autoregressive conditional heteroskedasticity (INGARCH) residuals. Over the past few decades, the time series of counts has gained increasing attention from researchers in diverse scientific areas. Following the research conducted in [1,2,3,4,5], two classes of models, namely integer-valued autoregressive (INAR) and INGARCH models, have been popular for analyzing the time series of counts; see [6] for more details. These models have been harnessed to analyze polio data [7], crime data [8], car accident traffic data [9], and financial data [10].
Although the basic theories and analytical tools for these models are quite well developed in the literature, as seen in [11,12,13,14,15], a restriction on their usage exists because both INAR and INGARCH models are mostly assumed to have a linear structure in their conditional mean. In INGARCH models, Poisson and negative binomial distributions have been widely adopted as the conditional distribution of current observations given past information. Assuming these distributions is not impractical, because the correct specification of the underlying distribution is not essential when estimating the conditional mean equation, as demonstrated by [16], who considered the quasi-maximum likelihood estimation (QMLE) method for the time series of counts. However, for the QMLE approach to perform adequately, the conditional mean structure must be correctly specified. As misspecification can potentially lead to false conclusions in real situations, we considered SVR as a nonparametric algorithm for forecasting the time series of counts. To our knowledge, the current study is the first attempt in the literature to use SVR for the prediction of time series of counts based on the INGARCH scheme.
SVR has been one of the most popular nonparametric algorithms for forecasting time series and has been proven to outperform classical time series models, such as autoregressive moving average (ARMA) and GARCH models, as it can approximate nonlinearity without knowledge of the underlying dynamic structure of the time series [17,18,19,20,21,22,23,24,25]. SVR has the merit of implementing the "structural risk minimization principle" [26] and seeks a balance between model complexity and empirical risk [27]. Moreover, SVR requires only a small number of tuning parameters, and obtaining a global solution is not problematic because the training reduces to solving a quadratic programming problem (QPP).
SVR has been modified in various manners, for example, smooth SVR [28], least squares (LS)-SVM [29], and twin SVR (TSVR) [30]. Unlike SVR, TSVR generates two nonparallel hyperplanes and has a significant advantage over SVR in computational speed; for relevant references, see [31,32,33]. Here, we harness the SVR and TSVR methods together with the particle swarm optimization (PSO) algorithm, originally proposed by [34], to determine a set of optimal hyperparameters and thereby enhance their efficacy. For an overview of PSO, see [35,36].
As an application of our SVR method, we consider the problem of detecting a significant change in the conditional mean of the INGARCH time series. Since the seminal work of [37], the parameter change detection problem has been a core issue in various research areas. As financial time series often suffer from structural changes, owing to changes in governmental policy and critical social events, and ignoring them can lead to false conclusions, change point tests have been an important research topic in time series analysis; see [38,39] for a general review. The CUSUM test has long been used as a tool for detecting a change point, owing to its practical efficiency [40,41,42,43,44]. As regards the time series of counts, see [7,45,46,47,48,49,50].
Among the CUSUM tests, we adopted the residual-based CUSUM test, because the residual method can successfully discard the correlations of the time series and enhance the performance of the CUSUM test in terms of both stability and power; see [51,52]. The authors of the recent references [43,53] developed a simple residual-based CUSUM test for location-scale time series models, based on which the authors of [21,22] devised a hybridization of the SVR and CUSUM methods for handling the change point problem for AR and GARCH time series and demonstrated its superiority over classical models. However, their approach is not directly applicable here and requires a new modification for effective performance, especially in the proxies used for prediction, as seen in Section 3.4, because the simple or exponential moving average type proxies conventionally used for SVR-GARCH models [22] would not work adequately in our current study. Here, we instead used the proxies obtained through the linear INGARCH fit to the time series of counts.
The rest of this paper is organized as follows. Section 2 reviews the principle of the CUSUM test and the CUSUM of squares test for the INGARCH models and then briefly describes how to apply the SVR-INGARCH method for constructing the CUSUM tests. Section 3 presents the SVR- and TSVR-INGARCH models for forecasting the conditional mean and describes the SVR and TSVR methods with PSO. Section 4 discusses the Monte Carlo simulations conducted to evaluate the performance of the proposed method. Section 5 presents the real data analysis, using the return times of extreme events constructed based on the daily log-returns of Goldman Sachs (GS) stock prices. Finally, Section 6 provides concluding remarks.

2. INGARCH Model-Based Change Point Test

Let $\{Y_t, t \ge 1\}$ be a time series of counts. To make inferences for $\{Y_t\}$, one can consider fitting a parametric model to $Y_t$, for instance, the INGARCH model with a conditional distribution from the one-parameter exponential family and a link function $f_\theta$, parameterized by $\theta \in \Theta \subset \mathbb{R}^d$, that describes the conditional expectation, namely,

$$Y_t \mid \mathcal{F}_{t-1} \sim p(y \mid \eta_t), \qquad X_t := E(Y_t \mid \mathcal{F}_{t-1}) = f_\theta(X_{t-1}, Y_{t-1}), \tag{1}$$

where $\mathcal{F}_t$ denotes the past information up to time $t$, $f_\theta$ is defined on $[0, \infty) \times \mathbb{N}_0$ with $\mathbb{N}_0 = \{0, 1, \ldots\}$, and $p(\cdot \mid \cdot)$ is a probability mass function given by

$$p(y \mid \eta) = \exp\{\eta y - A(\eta)\}\, h(y), \qquad y \ge 0,$$

where $\eta$ is the natural parameter, $A(\cdot)$ and $h(\cdot)$ are known real-valued functions, $B = A'$ is strictly increasing, and $\eta_t = B^{-1}(X_t)$. Here, $B(\eta_t)$ and $B'(\eta_t)$ are the conditional mean and variance of $Y_t$ given past observations, respectively. For example, for the Poisson distribution with mean $\lambda$, we have $\eta = \log \lambda$, $A(\eta) = e^\eta$, $h(y) = 1/y!$, and $B(\eta) = e^\eta = \lambda$. The symbols $X_t(\theta)$ and $\eta_t(\theta)$ are used when the dependence on $\theta$ needs to be emphasized.
Conventionally, $f_\theta$ is assumed to be bounded below by some real number $c > 0$ and to satisfy

$$\sup_{\theta \in \Theta} |f_\theta(x, y) - f_\theta(x', y')| \le \nu_1 |x - x'| + \nu_2 |y - y'| \tag{2}$$

for all $x, x' \ge 0$ and $y, y' \in \mathbb{N}_0$, where $\nu_1, \nu_2 \ge 0$ satisfy $\nu_1 + \nu_2 < 1$, which, according to [12], allows $\{Y_t\}$ to be strictly stationary and ergodic, as required for the consistency of the parameter estimates.
In practice, Poisson or negative binomial (NB) linear INGARCH(1,1) models with $X_t = \omega + \alpha X_{t-1} + \beta Y_{t-1}$, $\omega > 0$, $\alpha \ge 0$, $\beta \ge 0$, $\alpha + \beta < 1$, are frequently used. For the former, we assume $Y_t \mid \mathcal{F}_{t-1} \sim \mathrm{Poisson}(X_t)$, whereas for the latter, we assume $Y_t \mid \mathcal{F}_{t-1} \sim \mathrm{NB}(r, p_t)$ with $X_t = r(1 - p_t)/p_t = \omega + \alpha X_{t-1} + \beta Y_{t-1}$, where $r \in \mathbb{N}$ and $Y \sim \mathrm{NB}(r, p)$ denotes the negative binomial distribution with mass function $P(Y = k) = \frac{(k + r - 1)!}{(r - 1)!\, k!} (1 - p)^k p^r$, $k \ge 0$.
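For illustration, the following is a minimal R sketch of generating a Poisson linear INGARCH(1,1) series; the burn-in length and starting value are our own choices, not prescribed by the model.

```r
# Simulate a Poisson linear INGARCH(1,1): X_t = omega + alpha*X_{t-1} + beta*Y_{t-1}
sim_ingarch <- function(n, omega, alpha, beta, burn = 200) {
  N <- n + burn
  Y <- numeric(N); X <- numeric(N)
  X[1] <- omega / (1 - alpha - beta)   # stationary mean as a starting value
  Y[1] <- rpois(1, X[1])
  for (t in 2:N) {
    X[t] <- omega + alpha * X[t - 1] + beta * Y[t - 1]  # conditional mean recursion
    Y[t] <- rpois(1, X[t])                              # Y_t | F_{t-1} ~ Poisson(X_t)
  }
  list(Y = tail(Y, n), X = tail(X, n))                  # drop the burn-in
}

set.seed(1)
dat <- sim_ingarch(1000, omega = 3, alpha = 0.3, beta = 0.3)
```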
Let $\theta_0$ be the true parameter, which is assumed to be an interior point of the compact parameter space $\Theta$. Then $\theta_0$ is estimated using the conditional likelihood function of Model (1), based on the observations $Y_1, \ldots, Y_n$:

$$\tilde{L}_n(\theta) = \prod_{t=1}^{n} \exp\{\tilde{\eta}_t(\theta) Y_t - A(\tilde{\eta}_t(\theta))\}\, h(Y_t), \tag{3}$$

where $\tilde{\eta}_t(\theta) = B^{-1}(\tilde{X}_t(\theta))$ is updated through the equations $\tilde{X}_t(\theta) = f_\theta(\tilde{X}_{t-1}(\theta), Y_{t-1})$ for $t \ge 2$, with an initial value $\tilde{X}_1$. The conditional maximum likelihood estimator (CMLE) of $\theta_0$ is then obtained as the maximizer of the likelihood function in Equation (3):

$$\hat{\theta}_n = \operatorname*{argmax}_{\theta \in \Theta} \tilde{L}_n(\theta) = \operatorname*{argmax}_{\theta \in \Theta} \sum_{t=1}^{n} \tilde{\ell}_t(\theta),$$

with $\tilde{\ell}_t(\theta) = \tilde{\eta}_t(\theta) Y_t - A(\tilde{\eta}_t(\theta))$, which is the log-likelihood $\log p(Y_t \mid \tilde{\eta}_t(\theta))$ up to the term $\log h(Y_t)$ not involving $\theta$. The authors of [12,50] showed that, under certain conditions, $\hat{\theta}_n$ converges to $\theta_0$ in probability and $\sqrt{n}(\hat{\theta}_n - \theta_0)$ is asymptotically normally distributed as $n$ tends to $\infty$. This $\hat{\theta}_n$ is harnessed to make predictions and calculate residuals.
In our current study, we aim to extend Model (1) to the nonparametric model:

$$Y_t \mid \mathcal{F}_{t-1} \sim p(y \mid \eta_t), \qquad X_t = g(X_{t-1}, Y_{t-1}), \tag{4}$$

where $p(\cdot \mid \cdot)$ and $g$ are unknown, and $g$ is implicitly assumed to satisfy Equation (2). Provided that $g \in \{f_\theta : \theta \in \Theta\}$ and $p(\cdot \mid \cdot)$ is known a priori, one can estimate $g$ with $\hat{g} = f_{\hat{\theta}}$. Even if $p(\cdot \mid \cdot)$ is unknown, one can still use the Poisson or NB quasi-maximum likelihood estimation (QMLE) method as in [16]; see also [54] for various types of CUSUM tests based on QMLEs. However, when no prior information on $g$ is available, parametric modeling may hamper the inference, and in this case, one can estimate $g$ with the nonparametric SVR method described in Section 3.
For Model (1), setting up the null and alternative hypotheses

$H_0$: $\theta$ remains the same over $t = 1, \ldots, n$ vs. $H_1$: not $H_0$,

the authors of [50] considered the problem of detecting a change in $\theta$ based on the CUSUM test:

$$\hat{T}_n^{res} = \max_{1 \le k \le n} \frac{1}{\sqrt{n}\, \hat{\tau}_n} \left| \sum_{t=1}^{k} \hat{\epsilon}_t - \frac{k}{n} \sum_{t=1}^{n} \hat{\epsilon}_t \right| \tag{5}$$

with the residuals $\hat{\epsilon}_t = Y_t - \tilde{X}_t(\hat{\theta}_n)$ and $\hat{\tau}_n^2 = \frac{1}{n}\sum_{t=1}^{n} \hat{\epsilon}_t^2 - \left(\frac{1}{n}\sum_{t=1}^{n} \hat{\epsilon}_t\right)^2$. Furthermore, the authors of [55,56] employed the residual-based CUSUM of squares test:

$$\hat{T}_n^{square} = \max_{1 \le k \le n} \frac{1}{\sqrt{n}\, \tilde{\tau}_n} \left| \sum_{t=1}^{k} \hat{\epsilon}_t^2 - \frac{k}{n} \sum_{t=1}^{n} \hat{\epsilon}_t^2 \right| \tag{6}$$

with $\tilde{\tau}_n^2 = \tilde{\gamma}_n(0) + 2\sum_{h=1}^{h_n} \tilde{\gamma}_n(h)$, $\tilde{\gamma}_n(h) = \frac{1}{n}\sum_{t=1}^{n-h} (\hat{\epsilon}_t^2 - \overline{\epsilon^2})(\hat{\epsilon}_{t+h}^2 - \overline{\epsilon^2})$, $\overline{\epsilon^2} = \frac{1}{n}\sum_{t=1}^{n} \hat{\epsilon}_t^2$, and $h_n = 2(\log_{10} n)^2$.
The authors of [50] verified that, under the null $H_0$, $\hat{T}_n^{res}$ behaves asymptotically the same as

$$T_n = \max_{1 \le k \le n} \frac{1}{\sqrt{n}\, \tau} \left| \sum_{t=1}^{k} \epsilon_t - \frac{k}{n} \sum_{t=1}^{n} \epsilon_t \right|,$$

where $\epsilon_t = Y_t - X_t(\theta_0)$ and $\tau^2 = \mathrm{Var}(\epsilon_1)$. As $\{\epsilon_t\}$ forms a sequence of martingale differences, we obtain $T_n \to T := \sup_{0 \le s \le 1} |B^{\circ}(s)|$ in distribution [57], where $B^{\circ}$ denotes a Brownian bridge, owing to Donsker's invariance principle; thus, since $\hat{T}_n^{res}$ is asymptotically equivalent to $T_n$, we have $\hat{T}_n^{res} \to T$ in distribution. For instance, $H_0$ is rejected at the level of 0.05 if $\hat{T}_n^{res} \ge 1.3397$, a critical value obtainable with Monte Carlo simulations. Similarly, the authors of [55] verified that $\hat{T}_n^{square} \to T$ in distribution, so the same critical values as for $\hat{T}_n^{res}$ can be harnessed. Provided that a change point exists, its location is identified as

$$\hat{k}_n = \operatorname*{argmax}_{1 \le k \le n} \left| \sum_{t=1}^{k} \hat{\epsilon}_t^{\,i} - \frac{k}{n} \sum_{t=1}^{n} \hat{\epsilon}_t^{\,i} \right|, \qquad i = 1, 2.$$
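For concreteness, the following is a minimal R sketch of the two test statistics in Equations (5) and (6) computed from a residual series `res`; the lag truncation follows the definition of $h_n$ above, with the floor taken so that the sum index is an integer.

```r
# CUSUM test (5) based on raw residuals
cusum_res <- function(res) {
  n <- length(res)
  tau <- sqrt(mean(res^2) - mean(res)^2)          # hat tau_n
  S <- cumsum(res)
  max(abs(S - (1:n) / n * S[n])) / (sqrt(n) * tau)
}

# CUSUM of squares test (6) with a long-run variance estimate
cusum_sq <- function(res) {
  n <- length(res)
  e2 <- res^2; ebar <- mean(e2)
  hn <- floor(2 * (log10(n))^2)
  gam <- sapply(0:hn, function(h)                 # tilde gamma_n(h)
    sum((e2[1:(n - h)] - ebar) * (e2[(1 + h):n] - ebar)) / n)
  tau2 <- gam[1] + 2 * sum(gam[-1])               # tilde tau_n^2
  S <- cumsum(e2)
  max(abs(S - (1:n) / n * S[n])) / (sqrt(n) * sqrt(tau2))
}
# reject H0 at the 5% level when the statistic exceeds 1.3397
```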
This CUSUM framework for parametric models can be easily adopted for nonparametric models as long as the residuals $\hat{\epsilon}_t$ can be accurately calculated, as seen in [21,22], which deal with the change point problem for SVR-ARMA and SVR-GARCH models. Below, when dealing with Model (4), instead of the residuals in Equation (5), we use $\hat{\epsilon}_t = Y_t - \hat{g}(X_{t-1}, Y_{t-1})$ in the construction of the CUSUM tests in Equations (5) and (6).
When estimating $g$ with SVR and TSVR, we train pairs $(y_t, x_t)$, either with $y_t = \tilde{X}_t$ and $x_t = (\tilde{X}_{t-1}, Y_{t-1})^T$ or with $y_t = Y_t$ and $x_t = (\tilde{X}_{t-1}, Y_{t-1})^T$, where $\tilde{X}_t$ is a proper proxy. The former has been used for the SVR-GARCH model in [21], while the latter is newly considered here, inspired by the fact that $Y_t = g(X_{t-1}, Y_{t-1}) + \nu_t$, where the error process $\{\nu_t\}$ is a sequence of martingale differences; this also holds for Model (1) because we can express $Y_t = X_t + \nu_t$ in that case. See Step 3 in Section 3.4 below for more details.

3. SVR-INGARCH Model

In this section, we provide an outline of the SVR, TSVR, and PSO methods for a quick reference and describe the change point test based on the SVR-INGARCH model.

3.1. Support Vector Regression

SVR is an extension of the support vector machine (SVM), originally proposed by [58], and merits accurate nonlinear prediction. SVR aims to identify a nonlinear function of the form $f(x) = w^T \phi(x) + b$, where $x$ denotes a vector of inputs, $w$ and $b$ are regression parameters, and $\phi$ is a known feature map that induces the kernel. The optimal $w$ and $b$ are determined from the $\epsilon$-insensitive loss function [26]:

$$\ell_\epsilon(y, f(x)) = \begin{cases} |y - f(x)| - \epsilon, & \text{if } |y - f(x)| \ge \epsilon, \\ 0, & \text{otherwise.} \end{cases} \tag{7}$$
Given input vectors $x_i$, scalar outputs $y_i$, $i = 1, \ldots, n$, and a constant $C > 0$, we construct the objective function of the SVR as follows:

$$\text{minimize} \quad \frac{1}{2}\|w\|^2 + C \sum_{i=1}^{n} (\xi_{1,i} + \xi_{2,i}), \tag{8}$$

$$\text{subject to} \quad y_i - w^T \phi(x_i) - b \le \epsilon + \xi_{2,i}, \quad w^T \phi(x_i) + b - y_i \le \epsilon + \xi_{1,i}, \quad \xi_{1,i} \ge 0, \ \xi_{2,i} \ge 0,$$

where $\xi_{1,i}, \xi_{2,i} \ge 0$ denote slack variables that allow some points to lie outside the $\epsilon$-band with a penalty, and $C$ controls the trade-off between the function complexity and the training error.
To obtain the optimal $w$ and $b$, we formulate an unconstrained optimization problem using Lagrange multipliers [27]. The Karush–Kuhn–Tucker (KKT) conditions then lead to the following dual form:

$$\text{maximize} \quad -\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} (\alpha_{1,i} - \alpha_{2,i})(\alpha_{1,j} - \alpha_{2,j})\, \phi(x_i)^T \phi(x_j) - \epsilon \sum_{i=1}^{n} (\alpha_{1,i} + \alpha_{2,i}) + \sum_{i=1}^{n} (\alpha_{1,i} - \alpha_{2,i})\, y_i, \tag{9}$$

subject to $\sum_{i=1}^{n} (\alpha_{1,i} - \alpha_{2,i}) = 0$, $0 \le \alpha_{1,i} \le C$, $0 \le \alpha_{2,i} \le C$, where $\alpha_{1,i}$ and $\alpha_{2,i}$ denote dual variables [26]. Subsequently, the optimization problem in Equation (9) yields the solutions $\hat{w}$, $\hat{b}$, $\hat{f}$ of $w$, $b$, $f$, as follows:
$$\hat{w} = \sum_{i=1}^{n} (\alpha_{1,i} - \alpha_{2,i})\, \phi(x_i),$$

$$\hat{b} = \begin{cases} y_i - \hat{w}^T \phi(x_i) - \epsilon, & 0 < \alpha_{1,i} < C, \\ y_i - \hat{w}^T \phi(x_i) + \epsilon, & 0 < \alpha_{2,i} < C, \end{cases} \qquad \hat{f}(x) = \sum_{i=1}^{n} (\alpha_{1,i} - \alpha_{2,i})\, K(x_i, x) + \hat{b} \tag{10}$$

with $K(x, y) = \phi(x)^T \phi(y)$. In particular, we employ the Gaussian kernel $K(x, y) = \exp\left(-\frac{\|x - y\|^2}{2\gamma^2}\right)$ for $K$ in Equation (10) and determine the tuning parameters $\gamma^2$ and $C$ in Equation (8) and $\epsilon$ in the loss function in Equation (7) using the PSO method over the cube of $(C, \gamma^2, \epsilon)$ with $1 \le C \le 100$, $0.1 \le \gamma^2 \le 1$, and $0.1 \le \epsilon \le 1$.
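As an illustration, the following is a minimal sketch of fitting an $\epsilon$-SVR with the Gaussian kernel via the R package "kernlab" on toy data; the hyperparameter values are placeholders for those selected by PSO. Note that `rbfdot` parameterizes the kernel as $\exp(-\sigma\|x - y\|^2)$, so $\sigma = 1/(2\gamma^2)$ for the kernel above.

```r
library(kernlab)

set.seed(1)
# toy training pairs: x_t = (proxy, lagged count), y_t = current count
x <- cbind(runif(200, 2, 8), rpois(200, 5))
y <- rpois(200, 0.5 + 0.5 * x[, 1] + 0.3 * x[, 2])

fit <- ksvm(x, y, type = "eps-svr", kernel = "rbfdot",
            kpar = list(sigma = 1 / (2 * 0.5)),  # gamma^2 = 0.5, say
            C = 10, epsilon = 0.1)               # placeholders for PSO choices
pred <- as.numeric(predict(fit, x))
```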

3.2. Twin Support Vector Regression

TSVR is a modified version of SVR [30]. Similar to the twin SVM (TSVM) [59], TSVR derives two nonparallel functions $f_1(x) = w_1^T \phi_1(x) + b_1$ and $f_2(x) = w_2^T \phi_2(x) + b_2$, which respectively determine the $\epsilon_1$-insensitive down-bound and the $\epsilon_2$-insensitive up-bound of the data. Given input vectors $x_i$ and outputs $y_i$, $i = 1, \ldots, n$, the linear TSVR can be formulated as the pair of constrained minimization problems:

$$\text{minimize} \quad f(w_1, b_1, \xi_1) = \frac{1}{2}\|Y - e\epsilon_1 - (A w_1 + e b_1)\|^2 + C_1 e^T \xi_1, \quad \text{subject to} \quad Y - (A w_1 + e b_1) \ge e\epsilon_1 - \xi_1, \ \xi_1 \ge 0; \tag{11}$$

$$\text{minimize} \quad f(w_2, b_2, \xi_2) = \frac{1}{2}\|Y + e\epsilon_2 - (A w_2 + e b_2)\|^2 + C_2 e^T \xi_2, \quad \text{subject to} \quad (A w_2 + e b_2) - Y \ge e\epsilon_2 - \xi_2, \ \xi_2 \ge 0, \tag{12}$$

where $Y = (y_1, \ldots, y_n)^T$, $A = (x_1, \ldots, x_n)^T$, $\epsilon_1, \epsilon_2 \ge 0$, $e$ denotes a vector whose components are all equal to 1, $C_1, C_2 \ge 0$ are hyperparameters, and $\xi_1, \xi_2 \ge 0$ are slack variables. Each QPP has $n$ constraints instead of $2n$ constraints and thus has an advantage of faster computational speed. To obtain the optimal $w_1$ and $b_1$ in Equation (11), we solve the QPP using the Lagrangian function:
$$L(w_1, b_1, \xi_1, \alpha_1, \beta_1) := \frac{1}{2}\|Y - e\epsilon_1 - (A w_1 + e b_1)\|^2 + C_1 e^T \xi_1 - \alpha_1^T \big(Y - (A w_1 + e b_1) - e\epsilon_1 + \xi_1\big) - \beta_1^T \xi_1, \tag{13}$$

where $\alpha_1 \ge 0$ and $\beta_1 \ge 0$ are Lagrange multiplier vectors. If an optimal solution exists, it must satisfy the following KKT conditions:

$$-A^T \big(Y - e\epsilon_1 - (A w_1 + e b_1)\big) + A^T \alpha_1 = 0, \qquad -e^T \big(Y - e\epsilon_1 - (A w_1 + e b_1)\big) + e^T \alpha_1 = 0, \tag{14}$$

$$C_1 e - \alpha_1 - \beta_1 = 0, \quad \alpha_1^T \big(Y - (A w_1 + e b_1) - e\epsilon_1 + \xi_1\big) = 0, \quad \beta_1^T \xi_1 = 0, \quad Y - (A w_1 + e b_1) \ge e\epsilon_1 - \xi_1, \quad \xi_1 \ge 0, \ \alpha_1 \ge 0, \ \beta_1 \ge 0. \tag{15}$$
We define $G = (A \ \ e)$, $h_1 = Y - e\epsilon_1$, and $u_1 = (w_1^T \ b_1)^T$. Combining Equations (14) and (15), we have

$$u_1 = (G^T G)^{-1} G^T (h_1 - \alpha_1). \tag{16}$$

However, since $G^T G$ is only positive semidefinite, we introduce a regularization term $\sigma I$, where $\sigma > 0$ is very small, to cope with possibly ill-conditioned cases, and use $u_1 = (G^T G + \sigma I)^{-1} G^T (h_1 - \alpha_1)$. Next, substituting Equation (16) and the KKT conditions into Equation (13), we obtain the dual QPP form:
$$\text{maximize} \quad -\frac{1}{2}\alpha_1^T G (G^T G)^{-1} G^T \alpha_1 + h_1^T G (G^T G)^{-1} G^T \alpha_1 - h_1^T \alpha_1, \quad \text{subject to} \quad 0 \le \alpha_1 \le C_1 e,$$

which yields $u_1$. Likewise, Equation (12) admits the dual QPP form:

$$\text{minimize} \quad \frac{1}{2}\alpha_2^T G (G^T G)^{-1} G^T \alpha_2 - h_2^T G (G^T G)^{-1} G^T \alpha_2 + h_2^T \alpha_2, \quad \text{subject to} \quad 0 \le \alpha_2 \le C_2 e,$$

where $\alpha_2$ is the corresponding Lagrange multiplier vector and $h_2 = Y + e\epsilon_2$. This yields $u_2 = (w_2^T \ b_2)^T$. The estimated regressor can then be formulated as follows:
$$f(x) = \frac{1}{2}\big(f_1(x) + f_2(x)\big) = \frac{1}{2}(w_1 + w_2)^T x + \frac{1}{2}(b_1 + b_2).$$

To extend the linear TSVR to a nonlinear one, we use the kernel-generated nonparallel surfaces $f_1(x) = K(x^T, A^T) w_1 + b_1$ and $f_2(x) = K(x^T, A^T) w_2 + b_2$. The optimization problem in this case is analogous to that of the linear TSVR, and the nonlinear TSVR regressor is obtained as follows:

$$f(x) = \frac{1}{2}\big(f_1(x) + f_2(x)\big) = \frac{1}{2}(w_1 + w_2)^T K(A, x) + \frac{1}{2}(b_1 + b_2).$$
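The dual QPPs above are box-constrained quadratic programs that can be handed to any QP solver. The following is a minimal sketch, under toy data of our own, of solving the first dual with the R package "osqp" (the solver used in our simulations) and recovering $u_1$ through the regularized version of Equation (16); `solve_osqp` minimizes $\frac{1}{2}\alpha^T P \alpha + q^T \alpha$ subject to $l \le A\alpha \le u$, so the sign of the maximization objective is flipped.

```r
library(osqp)

set.seed(1)
A1 <- cbind(runif(50), runif(50)); Y <- rnorm(50)  # toy data matrix and response
n <- nrow(A1); e <- rep(1, n)
eps1 <- 0.1; C1 <- 10; sigma <- 1e-5               # placeholder hyperparameters

G  <- cbind(A1, e)                                 # G = (A e)
h1 <- Y - e * eps1                                 # h1 = Y - e*eps1
Si <- solve(t(G) %*% G + sigma * diag(ncol(G)))    # (G'G + sigma I)^{-1}
M  <- G %*% Si %*% t(G)

res <- solve_osqp(P = M, q = as.numeric(h1 - M %*% h1),
                  A = diag(n), l = rep(0, n), u = rep(C1, n),
                  pars = osqpSettings(verbose = FALSE))
alpha1 <- res$x
u1 <- Si %*% t(G) %*% (h1 - alpha1)                # regularized Equation (16)
w1 <- u1[1:2]; b1 <- u1[3]
```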

3.3. Particle Swarm Optimization Method

In the standard PSO algorithm [34], a set of $d$ hyperparameters is treated as a particle, namely a $d$-dimensional vector in the search region $S = \{p = (p_1, \ldots, p_d)^T \in \mathbb{R}^d : l_k \le p_k \le u_k \text{ with } l_k, u_k \in \mathbb{R} \text{ for } k = 1, \ldots, d\}$. Here, $N$ particles are modeled to move in $S$, with positions $p_i = (p_{i1}, \ldots, p_{id})^T$ and velocities $v_i = (v_{i1}, \ldots, v_{id})^T$ for $i = 1, \ldots, N$. The previous best position of the $i$-th particle is represented by $p_i^{best} = (p_{i1}^{best}, \ldots, p_{id}^{best})^T$, and the previous best position of all particles by $g^{best} = (g_1^{best}, \ldots, g_d^{best})^T$. At each iteration $k$, where $1 \le k \le K_{max}$ with maximum iteration number $K_{max}$, the velocity and position of the $i$-th particle are updated as follows:

$$v_i^{k+1} = w_k v_i^k + c_1 r_1 (p_i^{best,k} - p_i^k) + c_2 r_2 (g^{best,k} - p_i^k), \qquad p_i^{k+1} = p_i^k + v_i^{k+1},$$

where $c_1$ and $c_2$ are two acceleration factors, $r_1$ and $r_2$ are two random variables following a uniform distribution over $[0, 1]$, and $w_k$ is an inertia factor defined by

$$w_k = (w_{start} - w_{end}) \frac{K_{max} - k}{K_{max}} + w_{end},$$

where $w_{start}$ and $w_{end}$ are the initial and final values of the inertia. Once the positions of the particles are updated, $p_i^{best}$ and $g^{best}$ are also updated as follows:

$$p_i^{best,k+1} = \begin{cases} p_i^{k+1}, & \text{if } f(p_i^{k+1}) < f(p_i^{best,k}), \\ p_i^{best,k}, & \text{otherwise,} \end{cases} \qquad g^{best,k+1} = \operatorname*{argmin}_{p_i^{best,k+1}} f(p_i^{best,k+1}).$$
The final $g^{best}$ from this procedure is used as the optimal hyperparameter vector in estimating the SVR and TSVR models, as seen below; a minimal implementation of the above updates is sketched next.
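The following is a minimal R sketch of the velocity, position, and best-position updates above for minimizing an objective `f` over a box; the swarm size, acceleration factors, and inertia endpoints are our own illustrative choices.

```r
pso_minimize <- function(f, lower, upper, N = 20, K_max = 100,
                         c1 = 2, c2 = 2, w_start = 0.9, w_end = 0.4) {
  d <- length(lower)
  P <- t(replicate(N, runif(d, lower, upper)))  # initial positions in S
  V <- matrix(0, N, d)                          # initial velocities
  Pbest <- P; fbest <- apply(P, 1, f)
  g <- Pbest[which.min(fbest), ]
  Lo <- matrix(lower, N, d, byrow = TRUE); Up <- matrix(upper, N, d, byrow = TRUE)
  for (k in 1:K_max) {
    wk <- (w_start - w_end) * (K_max - k) / K_max + w_end      # inertia schedule
    r1 <- runif(1); r2 <- runif(1)
    Gm <- matrix(g, N, d, byrow = TRUE)
    V <- wk * V + c1 * r1 * (Pbest - P) + c2 * r2 * (Gm - P)   # velocity update
    P <- pmin(pmax(P + V, Lo), Up)                             # position update, clamped to S
    fval <- apply(P, 1, f)
    better <- fval < fbest                                     # update personal bests
    Pbest[better, ] <- P[better, ]; fbest[better] <- fval[better]
    g <- Pbest[which.min(fbest), ]                             # update global best
  }
  list(par = g, value = min(fbest))
}

# e.g., tuning (C, gamma^2, eps) over the cube used in Section 3.1:
# pso_minimize(validation_mae, lower = c(1, 0.1, 0.1), upper = c(100, 1, 1))
```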

3.4. PSO-TSVR Model-Based CUSUM Test

In this subsection, we explain how to estimate $X_t$ in Model (4) using the SVR and TSVR methods with PSO as described above, and how to construct the CUSUM test based on the residuals obtained from the SVR-INGARCH model. In the following steps, we assume that the time series $\{Y_1, \ldots, Y_n, Y_{n+1}, \ldots, Y_{n+n'}\}$ has no change points; a condensed implementation sketch is given after the steps.
  • Step 1. As in Section 3.3, in order to apply the PSO method, we set a space of hyperparameters and initialize the positions $\{p_1^0, \ldots, p_N^0\}$ and velocities $\{v_1^0, \ldots, v_N^0\}$ of the particles to be evaluated within this space. Subsequently, we divide the given time series into two parts, $\{Y_1, \ldots, Y_n\}$ and $\{Y_{n+1}, \ldots, Y_{n+n'}\}$; the former is used as a training set, while the latter is used as a validation set.
  • Step 2. Compute the initial estimates of $X_t$ based on the training time series. For this task, we use two different methods. The first method uses moving averages (Niemira, 1994):
    $$\tilde{X}_t = \frac{1}{m}\sum_{j=1}^{m} Y_{t-j+1}, \tag{17}$$
    where $m$ is a positive integer. When $t$ is smaller than $m$, $\tilde{X}_t$ is computed as the average of the first $t$ observations, i.e., $\tilde{X}_t = \frac{1}{t}\sum_{j=1}^{t} Y_{t-j+1}$ for $t < m$.
    The second method uses the Poisson QMLE [54], assuming that the time series follows Model (1); for example,
    $$\tilde{X}_t = \hat{\omega} + \hat{\alpha} \tilde{X}_{t-1} + \hat{\beta} Y_{t-1}. \tag{18}$$
    These estimates play the role of a proxy for $X_t$ and replace the true conditional mean.
  • Step 3. For particles $p_i^k$, $k = 1, 2, \ldots, K_{max}$, we fit the SVR and TSVR models to the training pairs $(y_t, x_t)$, either with $y_t = \tilde{X}_t$ and $x_t = (\tilde{X}_{t-1}, Y_{t-1})^T$ or with $y_t = Y_t$ and $x_t = (\tilde{X}_{t-1}, Y_{t-1})^T$, with some proper proxy $\tilde{X}_{t-1}$, to obtain $\hat{g}$. Subsequently, for the first choice, we obtain
    $$\hat{X}_t = \hat{g}(Y_{t-1}, \tilde{X}_{t-1}), \tag{19}$$
    named "$\hat{X}_t$-targeting", and for the second,
    $$\hat{Y}_t = \hat{g}(Y_{t-1}, \tilde{X}_{t-1}), \tag{20}$$
    named "$\hat{Y}_t$-targeting", where $\hat{Y}_t$ is an estimate of $Y_t$ that serves as an estimate of $X_t$ as well, since the predicted $\hat{Y}_t$ is itself an estimate of the conditional expectation.
  • Step 4. Applying the estimated SVR and TSVR models and using the same proxy formula as in Step 2 for the validation time series, the mean absolute error (MAE) is computed as follows:
    $$\mathrm{MAE} = \frac{1}{n'}\sum_{t=n+1}^{n+n'} |\hat{X}_t - \tilde{X}_t|$$
    for the case of Equation (19), and
    $$\mathrm{MAE} = \frac{1}{n'}\sum_{t=n+1}^{n+n'} |\hat{Y}_t - \tilde{X}_t|$$
    for the case of Equation (20). The MAE is employed here because it is more robust to outliers in model fitting than the root mean square error.
  • Step 5. Update $p_i^k$, $v_i^k$, $p_i^{best,k}$, and $g^{best,k}$ as in Section 3.3, and repeat Steps 3 and 4 until the MAE in Step 4 converges within a tolerance or $k$ reaches the maximum iteration number $K_{max}$.
  • Step 6. Apply the estimated SVR and TSVR models with the parameters selected in Step 5 to a testing time series and perform the CUSUM tests in Equations (5) and (6) based on the residuals $\hat{\epsilon}_t = Y_t - \hat{g}(Y_{t-1}, \tilde{X}_{t-1})$.
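The following condensed sketch ties Steps 2–6 together with the MA proxy, $\hat{Y}_t$-targeting, and fixed hyperparameters standing in for the PSO search; it reuses the helpers `sim_ingarch` and `cusum_sq` sketched earlier in this paper, which are our own illustrative constructions.

```r
library(kernlab)

set.seed(2)
train <- sim_ingarch(1000, 3, 0.3, 0.3)$Y   # training series
test  <- sim_ingarch(1000, 3, 0.3, 0.3)$Y   # testing series (no change)

ma_proxy <- function(y, m = 5)              # proxy of Equation (17)
  sapply(seq_along(y), function(t) mean(y[max(1, t - m + 1):t]))

Xtr <- ma_proxy(train)
fit <- ksvm(x = cbind(Xtr[-length(train)], train[-length(train)]),  # (proxy, lagged count)
            y = train[-1], type = "eps-svr", kernel = "rbfdot",
            kpar = list(sigma = 1), C = 10, epsilon = 0.1)

Xte <- ma_proxy(test)
res <- test[-1] - as.numeric(predict(fit, cbind(Xte[-length(test)], test[-length(test)])))
cusum_sq(res)   # compare with the 5% critical value 1.3397
```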

4. Simulation Results

In this section, we apply the PSO-SVR and PSO-TSVR models to the INAR(1) and INGARCH(1,1) models and evaluate the performance of the proposed CUSUM tests. For this task, we generate a time series of length 1000 ($n = 500$, $n' = 500$) to evaluate the empirical sizes and powers at the nominal level of 0.05. The size and power are calculated as the rejection rate of the null hypothesis of no change over 500 repetitions. The simulations were conducted with R version 3.6.3, running on Windows 10. Moreover, we use the following R packages: "pso" for the PSO [60], "kernlab" for the Gaussian kernel [61], and "osqp" [62] for solving the quadratic problem. The procedure for the simulation is as follows.
  • Step 1. Generate a time series of length 1000 to train the PSO-SVR and PSO-TSVR models.
  • Step 2. Apply the estimation scheme described in Section 3.4. For the moving average proxy, we use $m = 5$. In this procedure, the time series generated in Step 1 is divided into a training set of $n = 500$ and a validation set of $n' = 500$, as in Step 1 of Section 3.4.
  • Step 3. Generate a testing time series of length 1000 to evaluate the size and power. For computing sizes, we generate a time series with no change, whereas to examine the power, we generate a time series with a change point in the middle.
  • Step 4. Apply the estimated model in Step 2 to the time series of Step 3 and conduct the residual CUSUM and CUSUM of squares tests.
  • Step 5. Repeat the above steps N times, e.g., 500, and then compute the empirical sizes and powers.
We consider the INGARCH(1,1) and INAR(1) models, as these are the most widely used models in practice (a data-generating sketch for the INAR model follows the list):
  • Model 1. $Y_t \mid \mathcal{F}_{t-1} \sim \mathrm{Poisson}(X_t)$, $X_t = \omega + \alpha X_{t-1} + \beta Y_{t-1}$;
  • Model 2. $Y_t = \phi \circ Y_{t-1} + Z_t$, $Z_t \sim \mathrm{Poisson}(\omega)$, where $\circ$ is the binomial thinning operator and $|\phi| < 1$.
Further, upon one referee's suggestion, we also consider the softplus INGARCH(1,1) model in [63]:
  • Model 3. $Y_t \mid \mathcal{F}_{t-1} \sim \mathrm{Poisson}(X_t)$, $X_t = s_c(\omega + \alpha Y_{t-1} + \beta X_{t-1})$, where $s_c(x) = c \log(1 + \exp(x/c))$.
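For reference, the following is a minimal sketch of generating Model 2, where the binomial thinning $\phi \circ Y_{t-1}$ is drawn as a Binomial$(Y_{t-1}, \phi)$ variable given $Y_{t-1}$ (so $\phi \in [0, 1]$ is assumed in this sketch):

```r
sim_inar <- function(n, omega, phi, burn = 200) {
  N <- n + burn
  Y <- numeric(N)
  Y[1] <- rpois(1, omega / (1 - phi))   # stationary mean as a starting value
  for (t in 2:N)
    Y[t] <- rbinom(1, Y[t - 1], phi) +  # phi o Y_{t-1}: binomial thinning
            rpois(1, omega)             # innovation Z_t
  tail(Y, n)                            # drop the burn-in
}
```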
Under the null hypothesis, we use the parameter settings as follows.
  • Model 1:
    Case 1: ω = 3 , α = 0.3 , β = 0.3 ;
    Case 2: ω = 5 , α = 0.3 , β = 0.3 ;
    Case 3: ω = 3 , α = 0.6 , β = 0.3 ;
    Case 4: ω = 3 , α = 0.3 , β = 0.6 ;
  • Model 2:
    Case 1: ω = 3 , ϕ = 0.3 ;
    Case 2: ω = 5 , ϕ = 0.3 ;
    Case 3: ω = 3 , ϕ = 0.7 ;
  • Model 3:
    c = 1 , ω = 3 , α = 0.3 , β = 0.3 .
Under the alternative hypothesis, we only consider the case of a single parameter change, with the other parameters remaining the same. Table A1, Table A2, Table A3 and Table A4 in Appendix A summarize the results for Model 1. Here, MA and ML denote the proxies obtained from the moving average in Equation (17) and the Poisson QMLE in Equation (18), and $\hat{Y}_t$ and $\hat{X}_t$ denote the two targeting methods in Equations (20) and (19), respectively. The tables show that the difference between the SVR and TSVR methods is marginal. Moreover, in most cases, $\hat{T}_n^{square}$ appears to be much more stable than $\hat{T}_n^{res}$; that is, the latter test suffers from more severe size distortions. In terms of power, $\hat{T}_n^{res}$ with the ML proxy and $\hat{X}_t$-targeting tends to outperform the others. However, the gap between this test and $\hat{T}_n^{square}$ is only marginal; therefore, considering the stability of the test, $\hat{T}_n^{square}$ is highly favored for Model 1. Table A5, Table A6 and Table A7 summarize the results for Model 2, showing that $\hat{T}_n^{res}$ exhibits a more stable performance for the INAR models than for the INGARCH models; however, it is still not as stable as $\hat{T}_n^{square}$, although it tends to outperform $\hat{T}_n^{square}$ in terms of power. Table A8 summarizes the results for Model 3, showing no significant differences from the results of the previous models. This result, to a certain extent, coincides with that of Lee and Lee (2020), who considered parametric INGARCH models for a change point test. Overall, our findings strongly confirm the reliability of using $\hat{T}_n^{square}$, particularly with the ML proxy and $\hat{X}_t$-targeting. However, in practice, one can additionally implement $\hat{T}_n^{res}$ because either test can react more sensitively than the other in specific situations.
Table A9 lists the computing times (in seconds) of the SVR and TSVR methods when implemented in R on Windows 10, running on a PC with an Intel i7-3770 processor (3.4 GHz) with 8 GB of RAM, wherein the figures denote the averages of training times in simulations, and the values in the parentheses indicate the sample standard deviations. In each model and parameter setting, the values of the two quickest results are written in boldface. As reported by [21,30], the TSVR method is shown to markedly reduce the CPU time. In particular, the results indicate that the computational speed of the TSVR-based method, with the ML proxy and X ^ t -targeting, appears to be the fastest in most cases. The result suggests that using the TSVR-based CUSUM tests is beneficial when computational speed is of significance to the implementation.

5. Real Data Analysis

In this section, we analyze the return times of extreme events constructed from the daily log-returns of GS stock prices from 1 January 2003 to 28 June 2019, obtained using the R package "quantmod". We used the data from 2 January 2003 to 29 June 2007 as the training set and those from 1 July 2009 to 28 June 2019 as the test set. Figure 1 and Figure 2 exhibit the GS stock prices and 100 times the log-returns, with the ranges of the training and test sets denoted by the green and blue vertical lines, respectively. As shown in Figure 2, the time series between the training and test sets exhibits severe volatility, owing to the financial crisis of 2008; therefore, it is omitted from our data analysis.
Before applying the PSO-SVR-INGARCH and PSO-TSVR-INGARCH methods, similarly to [12,14], we first transform the given time series into the hitting times $\tau_1, \tau_2, \ldots$ at which the log-returns of the GS stock fall outside the 0.15 and 0.85 quantiles of the training data, namely $-1.242$ and $1.440$, respectively. More specifically, $\tau_1 = \inf\{t \ge 1 : w_t \notin [-1.242, 1.440]\}$, $\tau_2 = \inf\{t > \tau_1 : w_t \notin [-1.242, 1.440]\}, \ldots$, where $w_t$ denotes 100 times the log-return. We then set $Y_t := \tau_t - \tau_{t-1}$, which forms the return times of these extreme events. Consequently, the training set is transformed into an integer-valued time series of length 341, and the test set into one of length 844; Figure 3 plots $Y_t$.
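The transformation itself takes only a few lines; the sketch below uses a simulated price path as a stand-in for the GS series retrieved with "quantmod".

```r
set.seed(3)
price <- 100 * cumprod(1 + rnorm(4000, 0, 0.02))  # placeholder for GS closing prices
w <- 100 * diff(log(price))                       # 100 times the log-returns
ql <- quantile(w, 0.15); qu <- quantile(w, 0.85)  # 0.15 / 0.85 sample quantiles
hits <- which(w < ql | w > qu)                    # hitting times tau_1, tau_2, ...
Y <- diff(c(0, hits))                             # return times Y_t = tau_t - tau_{t-1}
```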
To determine whether the training set itself exhibits a change, we apply the Poisson QMLE method and the CUSUM of squares test from [54]. The resulting CUSUM statistic $\hat{T}_n^{square}$ has a value of 0.590, which is smaller than the theoretical critical value of 1.358; thus, the null hypothesis of no change is not rejected at the nominal level of 0.05, supporting the adequacy of the training set. The residual-based CUSUM of squares tests based on the SVR and TSVR models with the ML proxy and $\hat{X}_t$-targeting are then applied; subsequently, both tests detect a change point at the 441st observation of the testing data, corresponding to 16 October 2013. The red vertical line in Figure 3 denotes the detected change point.
To examine how the change affects the dynamic structure of the time series, we fit a Poisson linear INGARCH model to the training series and to the testing series before and after the change point. For the training series, the fitted INGARCH model has $\hat{\omega} = 0.334$, $\hat{\alpha} = 0.813$, and $\hat{\beta} = 0.086$. For the testing series before the change, we obtain $\hat{\omega} = 0.135$, $\hat{\alpha} = 0.878$, and $\hat{\beta} = 0.067$, which are not very different from those of the training series. After the change point, however, the fitted parameters are $\hat{\omega} = 1.045$, $\hat{\alpha} = 0.560$, and $\hat{\beta} = 0.147$, confirming a significant change in the parameters. For instance, the sum $\alpha + \beta$, which is 0.899 in the training data, changes from 0.945 to 0.707 within the testing data.

6. Concluding Remarks

In this study, we proposed CUSUM tests based on the residuals obtained from the SVR- and TSVR-INGARCH models to detect a parameter change in the conditional mean of the time series of counts. To improve accuracy and efficiency, we also employed the PSO method to obtain an optimal set of hyperparameters. Monte Carlo simulations were conducted using the INAR and INGARCH models with various parameter settings. The results showed that the TSVR method using the ML proxy and the conditional mean $\hat{X}_t$-targeting method is recommendable, as it generally performs well and markedly reduces computational time. Our method was then applied to the analysis of the return times of extreme events constructed from the daily log-returns of Goldman Sachs stock prices and detected one change. Overall, our findings, based on a simulation study and real data analysis, demonstrate the validity of our method. Although the proposed method performs well in general, its performance may be limited when the amount of available training data is not large enough or when the dataset has features that violate stationarity, e.g., high volatility. The method can also suffer from over-fitting to a specific training sample. Thus, developing more robust methods would be an important task, which we leave as a future project.

Author Contributions

Conceptualization, S.L. (Sangyeol Lee); methodology, S.L. (Sangyeol Lee) and S.L. (Sangjo Lee); software, S.L. (Sangjo Lee); formal analysis, S.L. (Sangyeol Lee) and S.L. (Sangjo Lee); data curation, S.L. (Sangjo Lee); writing—original draft preparation, S.L. (Sangyeol Lee) and S.L. (Sangjo Lee); writing—review and editing, S.L. (Sangyeol Lee) and S.L. (Sangjo Lee); funding acquisition, S.L. (Sangyeol Lee). All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT), grant no. NRF-2021R1A2C1004009.

Data Availability Statement

Publicly available datasets were analyzed in this study. This data can be found here: https://finance.yahoo.com, (accessed on 10 August 2020).

Acknowledgments

We sincerely thank the Editor and the two anonymous reviewers for their precious time and valuable comments.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
SVR: Support vector regression
TSVR: Twin support vector regression
PSO: Particle swarm optimization
INGARCH: Integer-valued generalized autoregressive conditional heteroskedasticity
CUSUM: Cumulative sum
INAR: Integer-valued autoregressive
QMLE: Quasi-maximum likelihood estimation
ARMA: Autoregressive and moving average
GARCH: Generalized autoregressive conditional heteroskedasticity
NB: Negative binomial
CMLE: Conditional maximum likelihood estimator
SVM: Support vector machine
MAE: Mean absolute error

Appendix A

Table A1. Empirical sizes and powers for the INGARCH(1,1) model, Case 1 (ω = 3, α = 0.3, β = 0.3). Here and in Tables A2–A8, T^res and T^sq abbreviate the statistics in Equations (5) and (6).

Method                     SVR                               TSVR
Target               Ŷ_t            X̂_t              Ŷ_t            X̂_t
Proxy              MA      ML      MA      ML       MA      ML      MA      ML
size      T^res   0.072   0.104   0.000   0.096    0.076   0.064   0.000   0.074
          T^sq    0.040   0.034   0.032   0.042    0.036   0.038   0.032   0.046
power
 ω → 5    T^res   1.000   1.000   0.640   1.000    0.984   0.860   0.274   0.996
          T^sq    0.968   0.980   0.776   0.994    0.984   0.964   0.808   0.990
 ω → 10   T^res   1.000   1.000   1.000   1.000    0.972   0.962   0.948   0.956
          T^sq    1.000   1.000   1.000   1.000    1.000   1.000   1.000   1.000
 α → 0.5  T^res   1.000   1.000   0.978   1.000    0.976   0.868   0.620   0.982
          T^sq    0.986   0.994   0.936   0.944    0.986   0.992   0.944   1.000
 β → 0.5  T^res   1.000   1.000   0.988   1.000    0.982   0.870   0.664   0.982
          T^sq    0.952   0.950   0.826   0.956    0.916   0.924   0.826   0.980
Table A2. Empirical sizes and powers for the INGARCH(1,1) model, Case 2 (ω = 5, α = 0.3, β = 0.3).

Method                     SVR                               TSVR
Target               Ŷ_t            X̂_t              Ŷ_t            X̂_t
Proxy              MA      ML      MA      ML       MA      ML      MA      ML
size      T^res   0.062   0.114   0.000   0.104    0.072   0.068   0.000   0.084
          T^sq    0.048   0.044   0.048   0.044    0.038   0.048   0.050   0.046
power
 ω → 3    T^res   1.000   1.000   0.878   1.000    1.000   0.986   0.826   1.000
          T^sq    0.620   0.558   0.636   0.566    0.556   0.520   0.590   0.504
 ω → 10   T^res   1.000   1.000   1.000   1.000    0.962   0.920   0.770   1.000
          T^sq    0.996   0.996   0.996   1.000    0.992   1.000   0.986   1.000
 α → 0.5  T^res   1.000   1.000   1.000   1.000    0.974   0.918   0.790   1.000
          T^sq    0.996   0.994   0.988   1.000    0.986   0.980   0.956   1.000
 β → 0.5  T^res   1.000   1.000   1.000   1.000    0.976   0.928   0.824   1.000
          T^sq    0.990   0.988   0.902   0.994    0.946   0.948   0.870   1.000
Table A3. Empirical sizes and powers for the INGARCH(1,1) model, Case 3 (ω = 3, α = 0.6, β = 0.3).

Method                     SVR                               TSVR
Target               Ŷ_t            X̂_t              Ŷ_t            X̂_t
Proxy              MA      ML      MA      ML       MA      ML      MA      ML
size      T^res   0.176   0.278   0.002   0.230    0.154   0.180   0.000   0.180
          T^sq    0.036   0.036   0.042   0.046    0.032   0.048   0.040   0.042
power
 ω → 5    T^res   1.000   1.000   0.990   1.000    0.970   0.928   0.746   1.000
          T^sq    0.964   0.948   0.856   0.948    0.956   0.956   0.866   0.972
 α → 0.3  T^res   1.000   1.000   1.000   1.000    0.998   0.996   1.000   1.000
          T^sq    0.998   0.998   1.000   0.998    0.994   0.972   0.996   1.000
 β → 0.1  T^res   1.000   1.000   1.000   1.000    1.000   1.000   1.000   1.000
          T^sq    0.992   0.998   0.974   0.998    0.958   0.908   0.860   0.998
Table A4. Empirical sizes and powers for the INGARCH(1,1) model, Case 4 (ω = 3, α = 0.3, β = 0.6).

Method                     SVR                               TSVR
Target               Ŷ_t            X̂_t              Ŷ_t            X̂_t
Proxy              MA      ML      MA      ML       MA      ML      MA      ML
size      T^res   0.258   0.260   0.020   0.174    0.172   0.158   0.018   0.100
          T^sq    0.036   0.040   0.032   0.030    0.032   0.026   0.024   0.030
power
 ω → 5    T^res   0.992   0.996   0.820   0.998    0.934   0.904   0.422   0.848
          T^sq    0.492   0.472   0.326   0.428    0.602   0.586   0.570   0.776
 α → 0.1  T^res   1.000   1.000   0.994   1.000    0.996   0.994   0.994   1.000
          T^sq    0.800   0.780   0.436   0.604    0.634   0.674   0.406   0.644
 β → 0.3  T^res   1.000   1.000   0.996   1.000    0.998   0.994   1.000   1.000
          T^sq    0.926   0.920   0.716   0.862    0.882   0.890   0.712   0.956
Table A5. Empirical sizes and powers for the INAR(1) model, Case 1 (ω = 3, ϕ = 0.3).

Method                     SVR                               TSVR
Target               Ŷ_t            X̂_t              Ŷ_t            X̂_t
Proxy              MA      ML      MA      ML       MA      ML      MA      ML
size      T^res   0.052   0.070   0.000   0.070    0.064   0.060   0.000   0.060
          T^sq    0.048   0.060   0.064   0.054    0.066   0.052   0.058   0.062
power
 ω → 5    T^res   1.000   1.000   0.466   1.000    0.996   0.990   0.156   1.000
          T^sq    0.960   0.994   0.800   1.000    0.982   0.990   0.810   1.000
 ω → 10   T^res   1.000   1.000   1.000   1.000    0.984   0.966   0.838   0.976
          T^sq    1.000   1.000   1.000   1.000    1.000   1.000   1.000   1.000
 ϕ → 0.1  T^res   0.906   0.932   0.000   0.940    0.922   0.922   0.000   0.934
          T^sq    0.218   0.150   0.166   0.080    0.084   0.078   0.178   0.072
 ϕ → 0.5  T^res   1.000   1.000   0.106   1.000    0.994   0.992   0.026   1.000
          T^sq    0.600   0.598   0.164   0.584    0.606   0.562   0.186   0.542
Table A6. Empirical sizes and powers for the INAR(1) model, Case 2 (ω = 5, ϕ = 0.3).

Method                     SVR                               TSVR
Target               Ŷ_t            X̂_t              Ŷ_t            X̂_t
Proxy              MA      ML      MA      ML       MA      ML      MA      ML
size      T^res   0.068   0.084   0.000   0.070    0.068   0.068   0.000   0.060
          T^sq    0.062   0.044   0.038   0.048    0.042   0.048   0.036   0.040
power
 ω → 3    T^res   1.000   1.000   0.630   1.000    1.000   0.998   0.692   1.000
          T^sq    0.578   0.566   0.716   0.442    0.492   0.552   0.708   0.452
 ω → 10   T^res   1.000   1.000   0.992   1.000    0.976   0.962   0.688   1.000
          T^sq    0.986   0.998   0.996   1.000    1.000   1.000   0.998   1.000
 ϕ → 0.1  T^res   0.990   1.000   0.002   0.996    0.998   0.996   0.000   0.996
          T^sq    0.282   0.224   0.162   0.096    0.122   0.148   0.170   0.112
 ϕ → 0.5  T^res   1.000   1.000   0.340   1.000    1.000   0.996   0.138   1.000
          T^sq    0.800   0.822   0.172   0.830    0.812   0.774   0.204   0.812
Table A7. Empirical sizes and powers for the INAR(1) model, Case 3 (ω = 3, ϕ = 0.7).

Method                     SVR                               TSVR
Target               Ŷ_t            X̂_t              Ŷ_t            X̂_t
Proxy              MA      ML      MA      ML       MA      ML      MA      ML
size      T^res   0.116   0.072   0.002   0.054    0.092   0.056   0.000   0.048
          T^sq    0.048   0.058   0.034   0.054    0.054   0.052   0.038   0.058
power
 ω → 5    T^res   1.000   1.000   0.762   1.000    0.978   0.916   0.426   0.976
          T^sq    0.940   0.960   0.648   0.942    0.940   0.936   0.728   0.990
 ϕ → 0.3  T^res   1.000   1.000   0.896   1.000    1.000   1.000   0.946   1.000
          T^sq    0.852   0.846   0.284   0.836    0.854   0.906   0.210   0.898
Table A8. Empirical sizes and powers for the softplus INGARCH(1,1) model (c = 1, ω = 3, α = 0.3, β = 0.3).

Method                     SVR                               TSVR
Target               Ŷ_t            X̂_t              Ŷ_t            X̂_t
Proxy              MA      ML      MA      ML       MA      ML      MA      ML
size      T^res   0.072   0.114   0.000   0.108    0.080   0.062   0.000   0.072
          T^sq    0.034   0.032   0.034   0.038    0.032   0.036   0.030   0.046
power
 ω → 5    T^res   1.000   0.998   0.666   1.000    0.978   0.862   0.284   0.996
          T^sq    0.966   0.982   0.792   0.992    0.984   0.962   0.828   0.986
 α → 0.5  T^res   1.000   0.998   0.984   1.000    0.980   0.834   0.658   0.982
          T^sq    0.982   0.986   0.932   0.994    0.982   0.988   0.924   1.000
 β → 0.5  T^res   1.000   1.000   0.984   1.000    0.980   0.842   0.704   0.978
          T^sq    0.948   0.938   0.810   0.950    0.926   0.936   0.830   0.964
Table A9. Computing times (in seconds) for training the SVR and TSVR methods; sample standard deviations are in parentheses.

Method                               SVR                                       TSVR
Target                        Ŷ_t                X̂_t                   Ŷ_t                X̂_t
Proxy                      MA        ML        MA        ML          MA        ML        MA        ML
INGARCH
 ω = 3, α = 0.3, β = 0.3   918.02    863.33   1466.45    825.02      193.70    192.68    185.44    193.09
                          (190.61)  (201.94)  (247.97)  (305.41)     (19.79)   (19.11)   (15.94)   (21.02)
 ω = 5, α = 0.3, β = 0.3  1633.97   1065.13   1231.81   1029.88      224.41    245.24    155.52    162.26
                          (319.64)  (229.89)  (191.73)  (363.53)     (22.42)   (27.33)   (16.06)   (17.89)
 ω = 3, α = 0.6, β = 0.3  1979.82   1552.36   2456.07   1766.37      195.97    207.49    191.79    192.45
                          (242.94)  (366.01)  (395.27)  (666.71)     (20.59)   (21.05)   (19.60)   (19.34)
 ω = 3, α = 0.3, β = 0.6  1877.27   1940.01   2697.95   2219.46     1198.25    196.93    186.23    189.34
                          (389.79)  (347.92)  (356.38)  (587.99)     (20.95)   (20.56)   (18.98)   (21.95)
INAR
 ω = 3, ϕ = 0.3            780.01   1379.59   1340.27    754.08      191.38    189.67    182.64    192.35
                          (189.10)  (358.28)  (256.68)  (255.29)     (20.17)   (19.59)   (16.42)   (26.07)
 ω = 5, ϕ = 0.3           1000.50    972.17   1128.04   1336.76      240.72    226.47    150.59    105.45
                          (235.13)  (240.82)  (217.57)  (516.75)     (31.46)   (23.05)   (12.85)   (14.81)
 ω = 10, ϕ = 0.3           828.82   1437.23   1542.48    769.35      194.25    193.05    184.82    194.41
                          (208.43)  (378.40)  (239.95)  (288.29)     (19.67)   (17.84)   (16.14)   (21.17)
softplus INGARCH
 c = 1, ω = 3,             692.41    711.94    911.11    692.71      193.99    193.54    186.39    196.66
 α = 0.3, β = 0.3         (118.22)  (146.43)  (157.28)  (206.60)     (19.01)   (18.86)   (17.34)   (20.83)

References

  1. Al-Osh, M.A.; Aly, E.-E.A.A. First order autoregressive time series with negative binomial and geometric marginals. Commun. Stat. Theory Methods 1992, 21, 2483–2492.
  2. Alzaid, A.A.; Al-Osh, M. An integer-valued pth-order autoregressive structure (INAR(p)) process. J. Appl. Probab. 1990, 27, 314–324.
  3. Ferland, R.; Latour, A.; Oraichi, D. Integer-valued GARCH process. J. Time Ser. Anal. 2006, 27, 923–942.
  4. Fokianos, K.; Rahbek, A.; Tjøstheim, D. Poisson autoregression. J. Am. Stat. Assoc. 2009, 104, 1430–1439.
  5. McKenzie, E. Some simple models for discrete variate time series. J. Am. Water Resour. Assoc. 1985, 21, 645–650.
  6. Weiß, C.H. An Introduction to Discrete-Valued Time Series; Wiley: New York, NY, USA, 2018.
  7. Kang, J.; Lee, S. Parameter change test for random coefficient integer-valued autoregressive processes with application to polio data analysis. J. Time Ser. Anal. 2009, 30, 239–258.
  8. Kim, H.; Lee, S. Improved CUSUM monitoring of Markov counting process with frequent zeros. Qual. Reliab. Eng. Int. 2019, 35, 2371–2394.
  9. Lee, Y.; Lee, S.; Tjøstheim, D. Asymptotic normality and parameter change test for bivariate Poisson INGARCH models. Test 2018, 27, 52–69.
  10. Kim, B.; Lee, S. Robust change point test for general integer-valued time series models based on density power divergence. Entropy 2020, 22, 493.
  11. Christou, V.; Fokianos, K. Quasi-likelihood inference for negative binomial time series models. J. Time Ser. Anal. 2014, 35, 55–78.
  12. Davis, R.A.; Liu, H. Theory and inference for a class of nonlinear models with application to time series of counts. Stat. Sin. 2016, 26, 1673–1707.
  13. Jazi, M.A.; Jones, G.; Lai, C.D. First-order integer valued AR processes with zero inflated Poisson innovations. J. Time Ser. Anal. 2012, 33, 954–963.
  14. Kim, B.; Lee, S. Robust estimation for general integer-valued time series models. Ann. Inst. Stat. Math. 2019, 72, 1371–1396.
  15. Zhu, F. A negative binomial integer-valued GARCH model. J. Time Ser. Anal. 2011, 32, 54–67.
  16. Ahmad, A.; Francq, C. Poisson QMLE of count time series models. J. Time Ser. Anal. 2016, 37, 291–314.
  17. Bezerra, P.C.S.; Albuquerque, P.H.M. Volatility forecasting via SVR–GARCH with mixture of Gaussian kernels. Comput. Manag. Sci. 2017, 14, 179–196.
  18. Cao, L.; Tay, F.E. Financial forecasting using support vector machines. Neural Comput. Appl. 2001, 10, 184–192.
  19. Chen, S.; Härdle, W.K.; Jeong, K. Forecasting volatility with support vector machine-based GARCH model. J. Forecast. 2010, 29, 406–433.
  20. Cherkassky, V.; Ma, Y. Practical selection of SVM parameters and noise estimation for SVM regression. Neural Netw. 2004, 17, 113–126.
  21. Lee, S.; Lee, S.; Moon, M. Hybrid change point detection for time series via support vector regression and CUSUM method. Appl. Soft Comput. 2020, 89, 106101.
  22. Lee, S.; Kim, C.K.; Lee, S. Hybrid CUSUM change point test for time series with time-varying volatilities based on support vector regression. Entropy 2020, 22, 578.
  23. Pérez-Cruz, F.; Afonso-Rodríguez, J.A.; Giner, J. Estimating GARCH models using support vector machines. Quant. Finance 2003, 3, 163–172.
  24. Shim, J.; Kim, Y.; Lee, J.; Hwang, C. Estimating value at risk with semiparametric support vector quantile regression. Comput. Stat. 2012, 27, 685–700.
  25. Shim, J.; Hwang, C.; Seok, K. Support vector quantile regression with varying coefficients. Comput. Stat. 2016, 31, 1015–1030.
  26. Vapnik, V.N. The Nature of Statistical Learning Theory; Springer: New York, NY, USA, 2000.
  27. Smola, A.J.; Schölkopf, B. A tutorial on support vector regression. Stat. Comput. 2004, 14, 199–222.
  28. Lee, Y.J.; Hsieh, W.F.; Huang, C.M. ϵ-SSVR: A smooth support vector machine for ϵ-insensitive regression. IEEE Trans. Knowl. Data Eng. 2005, 17, 678–685.
  29. Suykens, J.A.K.; Vandewalle, J. Least squares support vector machine classifiers. Neural Process. Lett. 1999, 9, 293–300.
  30. Peng, X. TSVR: An efficient twin support vector machine for regression. Neural Netw. 2010, 23, 365–372.
  31. Gupta, D.; Pratama, M.; Ma, Z.; Li, J.; Prasad, M. Financial time series forecasting using twin support vector regression. PLoS ONE 2019, 14, 1–27.
  32. Tomar, D.; Agarwal, S. Twin support vector machine: A review from 2007 to 2014. Egypt. Inform. J. 2015, 16, 55–69.
  33. Zhong, P.; Xu, Y.; Zhao, Y. Training twin support vector regression via linear programming. Neural Comput. Appl. 2012, 21, 399–407.
  34. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN'95—International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; pp. 1942–1948.
  35. Wang, D.; Tan, D.; Liu, L. Particle swarm optimization algorithm: An overview. Soft Comput. 2018, 22, 387–408.
  36. Zhang, Y.; Wang, S.; Ji, G. A comprehensive survey on particle swarm optimization algorithm and its applications. Math. Probl. Eng. 2015, 2015, 1–38.
  37. Page, E.S. A test for a change in a parameter occurring at an unknown point. Biometrika 1955, 42, 523.
  38. Chen, J.; Gupta, A.K. Parametric Statistical Change Point Analysis: With Applications to Genetics, Medicine, and Finance, 2nd ed.; Birkhäuser: Boston, MA, USA, 2012.
  39. Csörgö, M.; Horváth, L. Limit Theorems in Change-Point Analysis; John Wiley & Sons: New York, NY, USA, 2012.
  40. Berkes, I.; Gombay, E.; Horváth, L.; Kokoszka, P. Sequential change-point detection in GARCH(p,q) models. Econ. Theory 2004, 20, 1140–1167.
  41. Inclán, C.; Tiao, G.C. Use of cumulative sums of squares for retrospective detection of changes of variance. J. Am. Stat. Assoc. 1994, 89, 913–923.
  42. Lee, S.; Ha, J.; Na, O.; Na, S. The cusum test for parameter change in time series models. Scand. Stat. Theory Appl. 2003, 30, 781–796.
  43. Oh, H.; Lee, S. Modified residual CUSUM test for location-scale time series models with heteroscedasticity. Ann. Inst. Stat. Math. 2018, 71, 1059–1091.
  44. Ross, G.J. Modelling financial volatility in the presence of abrupt changes. Physica A 2013, 392, 350–360.
  45. Kang, J.; Lee, S. Parameter change test for Poisson autoregressive models. Scand. Stat. Theory Appl. 2014, 41, 1136–1152.
  46. Fokianos, K.; Fried, R. Interventions in INGARCH processes. J. Time Ser. Anal. 2010, 31, 210–225.
  47. Franke, J.; Kirch, C.; Kamgaing, J.T. Changepoints in times series of counts. J. Time Ser. Anal. 2012, 33, 757–770.
  48. Fokianos, K.; Gombay, E.; Hussein, A. Retrospective change detection for binary time series models. J. Stat. Plan. Inference 2014, 145, 102–112.
  49. Rakitzis, A.C.; Castagliola, P.; Maravelakis, P.E. On the modelling and monitoring of general inflated Poisson processes. Qual. Reliab. Eng. Int. 2016, 32, 1837–1851.
  50. Lee, Y.; Lee, S. CUSUM test for general nonlinear integer-valued GARCH models: Comparison study. Ann. Inst. Stat. Math. 2019, 71, 1033–1057.
  51. De Pooter, M.; Van Dijk, D. Testing for Changes in Volatility in Heteroskedastic Time Series—A Further Examination. Available online: https://repub.eur.nl/pub/1627/ (accessed on 28 July 2020).
  52. Lee, S.; Tokutsu, Y.; Maekawa, K. The cusum test for parameter change in regression models with ARCH errors. J. Jpn. Stat. Soc. 2004, 34, 173–188.
  53. Lee, S. Location and scale-based CUSUM test with application to autoregressive models. J. Stat. Comput. Simul. 2020, 90, 2309–2328.
  54. Lee, S.; Lee, S. Exponential family QMLE-based CUSUM test for integer-valued time series. Commun. Stat. Simul. Comput. 2021, accepted.
  55. Lee, S. Residual-based CUSUM of squares test for Poisson integer-valued GARCH models. J. Stat. Comput. Simul. 2019, 89, 3182–3195.
  56. Lee, S.; Kim, D.; Seok, S. Modelling and inference for counts time series based on zero-inflated exponential family INGARCH models. J. Stat. Comput. Simul. 2021, in press.
  57. Billingsley, P. Convergence of Probability Measures, 2nd ed.; John Wiley & Sons: Hoboken, NJ, USA, 1999.
  58. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297.
  59. Khemchandani, R.; Chandra, S. Twin support vector machines for pattern classification. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 905–910.
  60. Bendtsen, C. pso: Particle Swarm Optimization. R Package Version 1.0.3. Available online: https://cran.r-project.org/web/packages/pso (accessed on 27 July 2020).
  61. Karatzoglou, A.; Smola, A.; Hornik, K.; Zeileis, A. kernlab—An S4 package for kernel methods in R. J. Stat. Softw. 2004, 11, 1–20.
  62. Stellato, B.; Banjac, G.; Goulart, P.; Boyd, S. osqp: Quadratic Programming Solver Using the 'osqp' Library. R Package Version 0.6.0.3. Available online: https://cran.r-project.org/web/packages/osqp (accessed on 27 July 2020).
  63. Weiss, C.H.; Zhu, F.; Hoshiyar, A. Softplus INGARCH models. Stat. Sin. 2020, in press.
Figure 1. Goldman Sachs stock price.
Figure 2. Log-return of Goldman Sachs stock price.
Figure 3. Return time of extreme events of Goldman Sachs stock prices.
