1. Introduction
During the last two decades, the statistics and finance literature has provided substantive empirical evidence that many financial time series contain surprise elements, or jumps. For example, Johannes [1] found significant evidence for the presence of jumps in the three-month Treasury bill rate, while Dungey et al. [2] found significant evidence of jumps in U.S. Treasury bond prices. It is also well understood that, compared to continuous price changes, jumps require distinctly different modelling, inference and testing tools. Consequently, given the implications for the valuation of derivative securities, the design of risk management tools and statistical portfolio allocation performance measures, tremendous research efforts have been dedicated to jump detection tools in asset price data. Some important examples are [3,4,5].
Among them, Barndorff-Nielsen and Shephard [6,7] proposed a bipower variation (BPV) measure to separate the jump variance from the diffusive variance. This separation is important since the jump variance is the quadratic variation of the jump component, while the diffusive variance is the actual volatility [6]. Consistent with the BPV approach, Andersen et al. [8] proposed a method utilizing a jump-robust estimator of realized volatility and characteristics of the BPV to detect the existence of jumps and estimate jump size, although this method cannot detect the exact location or the exact number of jumps over a given time period. The work in [9] developed a rolling-based nonparametric jump test built on the properties of BPV to detect jumps and to estimate jump size and jump arrival time. The work in [10] proposed a jump test based on the notion of a variance swap that explicitly takes market micro-structure noise into account.
Although jump detection tests demonstrate great success in identifying the existence of jumps, especially with high frequency data, they cannot provide an exact location of jumps even ex post. To address this issue, a wavelet-based method utilizing the different convergence rates of wavelet coefficients at high resolution levels with or without jumps was introduced by [11,12]. Following this approach, the method proposed by [12] provides, in addition to jump size, an estimate of the number of jumps and an interval estimate of the jump location, but no exact point estimate of that location. The work in [13] extended the wavelet-based jump detection analysis by providing a framework in which a formal jump detection test with known distributional properties is defined. Moreover, the use of a maximum overlap discrete wavelet transformation rather than a discrete wavelet transformation, as in [12], enabled them to provide an exact point estimate of the jump location. In addition, Barunik and Vacha [14] implemented wavelet-based jump localization techniques for multivariate asset price processes, showing that wavelets are very useful for detecting and localizing co-jumps in multivariate data.
As such, wavelet-based jump detection analysis constitutes the most complete tool in that it delivers information on the number, size and location of jumps within a formal testing framework. Even though this encompassing feature renders wavelets a most powerful tool, to the best of our knowledge, the performance of these tests under different non-standard data characteristics has not yet been investigated. We contend that several important features may coexist with jumps in the data and have the potential to severely influence the performance of wavelet-based jump tests. From this perspective, one may first consider the effect of a drift term in the data. The drift term can be deterministic as well as time varying, and the latter case allows for structural breaks with either abrupt or smooth occurrences. Similar to the drift term, the dynamics of spot volatility in the data generation process, whether stochastic or deterministic, may also affect the outcome of the wavelet analysis. Other data features, such as the jump location, i.e., whether the jump occurs at the beginning or the end of the data, can also be influential. Similarly, in the case of multiple jumps, features such as consecutive or distant jumps may have consequences for test performance.
In this paper, we conduct an extensive simulation study that addresses all of the above-mentioned data characteristics in a systematic manner to track the performance of wavelet-based tests vis-à-vis changes in the data features. While performing this task, we also compare the performance of the wavelet-based tests with non-wavelet-based alternatives, introduce a bootstrap version of the wavelet jump test and include this version in the performance analysis.
Our findings indicate that the performance of the wavelet-based tests is remarkably robust to non-standard features in the data. Although their performance is affected to some extent by the presence of non-standard features, as expected, the extent of this influence does not raise any concern about the success of the wavelet-based tests. Conversely, the performance of the non-wavelet-based tests is seriously affected when they are confronted with non-standard data features. Under standard data features, although a uniform superiority of wavelets over the non-wavelet alternatives cannot be established, there is at least one wavelet version that performs as well as the alternative tests. We further emphasize that this performance is attained even though the success criteria for the wavelet-based tests are much more demanding than those for the other tests: wavelet-based tests must not only detect the presence of jumps, but must also simultaneously predict their number and location. As previously mentioned, by design, the non-wavelet tests can only indicate whether a jump (or jumps) occurs; they cannot indicate the number or the location of the jumps.
The remainder of the paper is organized as follows. Section 2 and Section 3 provide an overview of the wavelet- and non-wavelet-based jump tests used in the simulations. Section 4 describes the data generation processes and the simulation settings. Section 5 illustrates the results, and Section 6 concludes the paper.
2. Wavelet-Based Jump Tests
In this section, we briefly outline the wavelet decomposition analysis and provide an overview of wavelet-based jump tests.
2.1. Wavelet Decomposition
Wavelets are wave-like functions that oscillate in a finite subset of the real numbers and eventually fade out. Using these functions, one can construct a filter that effectively separates fluctuations at different frequencies in a time series process. This separation is achieved by utilizing two types of filters, namely a high pass and a low pass filter. Let
denote the high pass filter coefficients, where
L is the filter length. These filter coefficients satisfy the following conditions:
These equations indicate that the filter coefficients fluctuate around zero and have unit energy. Furthermore, using the quadrature mirror relation, we obtain the complementary, i.e., low pass, filter
:
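To make these conditions concrete, the following sketch verifies the zero-sum and unit-energy conditions for the Haar high pass filter and derives the low pass filter via the quadrature mirror relation g_l = (-1)^(l+1) h_(L-1-l); the Haar coefficients and the sign convention are the standard ones, and conventions differ slightly across references:

```python
import numpy as np

# Haar high pass (wavelet) filter: the shortest filter, with L = 2.
L = 2
h = np.array([1.0, -1.0]) / np.sqrt(2.0)

# Zero-sum and unit-energy conditions on the high pass coefficients.
assert abs(h.sum()) < 1e-12
assert abs(np.sum(h**2) - 1.0) < 1e-12

# Quadrature mirror relation: g_l = (-1)^(l+1) * h_(L-1-l).
g = np.array([(-1.0) ** (l + 1) * h[L - 1 - l] for l in range(L)])
print(g)  # prints [0.70710678 0.70710678]
```

The resulting low pass filter averages adjacent observations, while the high pass filter differences them, which is precisely the frequency separation described above.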
There are various types of wavelet decomposition considered in the literature. Discrete wavelet transformation (DWT), maximum overlap discrete wavelet transformation (MODWT), continuous wavelet transformation and dual-tree wavelet transformation are among the more popular ones. As stated above, the most elaborate wavelet jump analysis, that of [13], incorporates the MODWT and is able to locate the exact jump points, reducing the jump location problem to the jump detection problem (the property of the MODWT that makes this possible is that, combined with a zero phase correction, the MODWT yields as many wavelet coefficients as observations).
Let
y be the length-
T vector of observations (this section follows [13]). The MODWT coefficients after the
J-level transformation are characterized by the
vector
, which is obtained by:
where
is the
transformation matrix. We express the wavelet coefficients as:
where
is the vector of the wavelet coefficients associated with changes on a scale of length
and
is the length-
T vector of scaling or approximation coefficients associated with averages on a scale of length
. It is noted that while
corresponds to the highest frequency coefficients,
corresponds to the lowest frequency coefficients at the level of the
J transformation.
The transformation matrix
can be decomposed into the
sub-matrices, each of which has a dimension of
:
Note that the MODWT uses the rescaled filters for
.
From these rescaled filters, we obtain the sub-matrices:
where
are the zero padded unit scale wavelet filter coefficients in reverse order. That is, coefficients
are taken from the orthonormal wavelet family of length
L, and the remaining values
are defined as zero. The other components in
are the circularly shifted versions of
, for instance:
In practice, a pyramid algorithm is used to compute the MODWT. First, we apply high pass
and low pass
filters by convolving them with the observed series
y to obtain the first level wavelet and scaling coefficients, respectively:
After obtaining the first level coefficients, we apply the filters to the previous level scaling coefficients:
Finally, we obtain the decomposition, which is as follows:
. In this analysis, we only use the first level wavelet coefficient
. For a more general discussion of wavelets, see [15].
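As a concrete illustration of the first stage of the pyramid algorithm, the sketch below computes first level MODWT wavelet and scaling coefficients by circular convolution with the rescaled Haar filters; the function and variable names are ours, and circular filtering is used for simplicity at the boundary:

```python
import numpy as np

def modwt_level1(y, h):
    """First-level MODWT: circularly convolve y with the rescaled high pass
    filter h/sqrt(2) and its quadrature mirror low pass counterpart."""
    y = np.asarray(y, dtype=float)
    T, L = len(y), len(h)
    ht = np.asarray(h) / np.sqrt(2.0)  # rescaled high pass filter
    gt = np.array([(-1.0) ** (l + 1) * ht[L - 1 - l] for l in range(L)])
    W = np.zeros(T)  # wavelet (detail) coefficients
    V = np.zeros(T)  # scaling (smooth) coefficients
    for t in range(T):
        for l in range(L):
            W[t] += ht[l] * y[(t - l) % T]
            V[t] += gt[l] * y[(t - l) % T]
    return W, V

y = np.array([1.0, 2.0, 3.0, 4.0])
W, V = modwt_level1(y, np.array([1.0, -1.0]) / np.sqrt(2.0))
# The MODWT yields as many coefficients as observations and preserves energy.
assert len(W) == len(y)
assert abs(np.sum(W**2) + np.sum(V**2) - np.sum(y**2)) < 1e-12
```

Note that, unlike the DWT, no downsampling occurs, which is the property exploited for exact jump localization.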
2.2. Wavelet Jump Test
As mentioned in the Introduction, the method of [12], utilizing the different convergence rates of wavelet coefficients at a high resolution level with or without jumps, provides an estimate of the number of jumps and an interval estimate of the jump location, but no exact point estimate of that location. Xue et al. [13] extended this analysis by developing a formal test (the XGF test) with exact distributional properties and by providing an exact point estimate of the jump location. The XGF framework decomposes the time series into several low and high frequency additive components to separate the underlying process into jump and noise components. The framework utilizes the properties of the multi-resolution decomposition of the data through the MODWT. The smooth, i.e., scaling, coefficients from the multi-resolution decomposition contain the information on the low frequency features of the underlying series, namely jumps and trend. The detail, i.e., wavelet, coefficients contain the information on the high frequency features of the underlying series, which include the market micro-structure noise. A useful property of the MODWT is that it yields as many scaling and wavelet coefficients as there are data points. With these features of the MODWT and a zero phase distortion wavelet function, the location of jumps can be detected precisely from a noisy time series process. The testing procedure, which relies on the first level wavelet coefficients providing the highest frequency resolution, is summarized as follows:
The null hypothesis is : There is no jump between and vs. : There is a jump between and
First, pick an appropriate wavelet (with filter length L), and apply the first level maximum overlap discrete wavelet transform (MODWT) to the observed series . Note that we lose the first data points after applying the filter.
Retrieve only high pass filtered coefficients, for example, for all
Obtain the spot volatility estimator from the following variant of the bipower variation:
The test statistic is given as for all
is asymptotically distributed as where d is determined according to the wavelet type chosen (see, Theorem 1, Proposition 2–3 of XGF, 2014).
We reject the null hypothesis at time , if where and are the -th quantile of .
These procedures can be applied to all data points to test the presence of and locate the jump points.
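The steps above can be sketched as follows. This is a schematic version only: an illustrative universal-style threshold multiplier stands in for the exact XGF critical values, and a simplified bipower-type spot volatility estimate is used:

```python
import numpy as np

def wavelet_jump_points(y, mult=4.0):
    """Flag candidate jump points where the absolute first level Haar MODWT
    coefficient is large relative to a jump-robust bipower-type scale.
    `mult` is an illustrative multiplier, not a formal XGF quantile."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    W = (y - np.roll(y, 1)) / 2.0  # first level Haar MODWT coefficients
    W[0] = 0.0                     # boundary coefficient discarded
    a = np.abs(W)
    # Bipower-type robust scale from products of adjacent coefficients.
    scale = np.sqrt((np.pi / 2.0) * np.mean(a[1:-1] * a[2:]))
    thr = mult * scale * np.sqrt(2.0 * np.log(T))
    return [t for t in range(1, T) if a[t] > thr]

rng = np.random.default_rng(42)
y = np.cumsum(0.01 * rng.standard_normal(1024))  # jump-free diffusion
y[500:] += 1.0                                   # one jump at t = 500
pts = wavelet_jump_points(y)                     # locates the jump
```

Because the bipower-type scale is robust to the jump itself, the jump coefficient stands out sharply against the diffusive coefficients.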
2.3. Bootstrap Wavelet Jump Test
In this section, we introduce the bootstrap version of the wavelet-based jump tests. We adopt a wild bootstrap procedure to improve the small sample inference of these tests. The general idea is to replicate the small sample distribution of the
statistic by simulating a bootstrap sample. Since the resampling should be conducted under the null hypothesis, the wild bootstrap procedure is the most appropriate choice; other bootstrapping techniques may not preserve the jump information at a given time period or may even spread a jump into no-jump periods. A similar bootstrap method is applied by Dovonon et al. [16] for the BNS high frequency jump test. We adopt a similar framework with some differences. The following procedure is used for bootstrapping:
Choose B as the number of bootstrap replications.
Apply the testing procedure explained in the previous section to the observed series with the Haar filter (any other filter will also work, but for the parameter
d, we have a closed form solution for the Haar filter). We then calculate
from Equation (2), and we use
to obtain
as a consistent estimator of the true population spot volatility process for
(see [13]).
Let be random draws from . For each , generate as the bootstrapped innovations.
Using the Euler method, generate the bootstrap replicate of the Price process:
We do not include the drift term in the bootstrap sample since the testing procedure is asymptotically invariant to the drift process. Further, can be chosen according to the sampling frequency of the data.
Calculate the bootstrapped test statistic for by using the testing procedure described in the previous section.
Repeat Steps 2–5 many times (B times) to obtain an empirical distribution for the bootstrap samples . Denote this bootstrap distribution as .
Reject the null hypothesis if where is the -th quantile of the empirical distribution of for each .
The bootstrapped p-values are computed as follows:
Steps 1–8 can be used to analyse the jump dynamics in particular price data. However, for Monte Carlo simulations, we need a computationally less intensive algorithm. Accordingly, the following method, which modifies Step 6, is useful for conducting the simulation exercises.
- 6∗
The above algorithm is an example of a double bootstrap since, for each data point, we need to bootstrap the
statistic
B times. This requires the computation of the
statistics. However, since, for each data point, the asymptotic distribution of the
statistic is the same (XGF), by setting
, we can apply the fast double bootstrap algorithm, first proposed by [17]. The following routine summarizes the fast double bootstrap for the wavelet-based jump test:
- 6∗.1
Set , and for each , obtain .
- 6∗.2
Instead of finding the empirical distribution for each , we compute this distribution over , that is the sequence becomes the common bootstrap sample for all i. Since all elements of this sequence follow the same distribution and the test statistic is asymptotically pivotal, we use this algorithm.
- 6∗.3
From the empirical distribution obtained in the previous step, we calculate the critical values and follow Step 6 by replacing the critical value with the double fast bootstrapped statistic.
- 6∗.4
The p-values become
We note the similarity of our approach to the methodology of [16], in which the authors do not use the spot volatility estimator directly in their bootstrapping routine. Nevertheless, since this estimator is consistent up to a constant, we use it in our bootstrapping scheme. The simulation exercises also show that the bootstrapping procedure above works as intended.
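A minimal sketch of the wild bootstrap resampling step is given below; the spot volatility input, the statistic function and the Euler step are illustrative placeholders rather than the exact XGF quantities:

```python
import numpy as np

def wild_bootstrap_null(sigma_hat, dt, B, stat_fn, seed=0):
    """Simulate B no-jump replicates via an Euler scheme with the estimated
    spot volatility (no drift, no jumps) and collect the test statistic on
    each replicate to form an empirical null distribution."""
    rng = np.random.default_rng(seed)
    T = len(sigma_hat)
    stats = np.empty(B)
    for b in range(B):
        eta = rng.standard_normal(T)  # wild bootstrap innovations
        y_star = np.cumsum(sigma_hat * np.sqrt(dt) * eta)
        stats[b] = stat_fn(y_star)
    return stats

# Illustrative use with a toy statistic: max absolute increment.
sigma_hat = np.full(1024, 0.2)
stats = wild_bootstrap_null(sigma_hat, dt=1.0 / 1024, B=200,
                            stat_fn=lambda y: np.max(np.abs(np.diff(y))))
crit = np.quantile(stats, 0.99)  # bootstrap critical value
```

Because the replicates are generated under the no-jump null, any jump in the observed series produces a statistic in the far tail of this empirical distribution.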
3. Non-Wavelet Jump Tests
In this section, we briefly describe the other jump tests used in our analysis. The first test that we consider was proposed by Barndorff-Nielsen and Shephard [7] (BNS test). This test focuses on the difference or ratio of the realized bipower variation and the realized quadratic variation of the price data. The key point of the BNS test lies in the different behaviours of the realized quadratic variation and the bipower variation in the presence and absence of jumps. When there is a jump in a given interval, the quadratic variation absorbs this information, but the bipower variation behaves as if there were no jump. Moreover, when there is no jump, the bipower variation and the quadratic variation behave similarly. Consequently, the gap between the two variation measures becomes apparent in the presence of jumps, whereas it vanishes in the absence of jump dynamics. Accordingly, the BNS test utilizes the following jump test statistic, which can be used to test the presence of jumps in a given interval with the partition
:
where
is the return process and
denotes a standard normal variable
u. Furthermore, under the null hypothesis of no jump in the given interval, this test statistic converges to a standard normal distribution. Notice that the difference term in the numerator of the BNS statistic diverges when there is a jump in the price data.
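The mechanics of this gap can be illustrated as follows; this is a sketch of the two variation measures only, and the studentization with the quarticity estimate used in the actual BNS statistic is omitted:

```python
import numpy as np

def rv_bpv(r):
    """Realized variance and (scaled) bipower variation of a return vector."""
    r = np.asarray(r, dtype=float)
    rv = np.sum(r**2)                                             # absorbs jumps
    bpv = (np.pi / 2.0) * np.sum(np.abs(r[1:]) * np.abs(r[:-1]))  # jump robust
    return rv, bpv

rng = np.random.default_rng(7)
r = 0.01 * rng.standard_normal(10000)  # no-jump returns: RV and BPV agree
rv0, bpv0 = rv_bpv(r)
r[5000] += 0.5                         # add one jump
rv1, bpv1 = rv_bpv(r)
# The RV - BPV gap opens up only in the presence of the jump.
assert abs(rv0 - bpv0) < 0.1
assert (rv1 - bpv1) > 0.1
```

The jump enters the realized variance squared but appears in the bipower variation only through cross-products with small adjacent returns, which is why the gap diverges with a jump.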
The second test included in this study is the nonparametric jump test of Jiang and Oomen [10] (JO test). The JO test considers the swap variance measure rather than the bipower variation. This measure is defined as follows:
where
and
is defined above. Using the difference between the swap variance measure and the realized quadratic volatility of the price data, the authors construct their test statistic as follows:
The consistent estimates for
are given in the JO paper (for a detailed discussion of the construction of the test statistic and the choice of the estimate for
, see [10]). Moreover, the JO test considers the same null hypothesis as the BNS test; under the null of no jumps, the test statistic again has a standard normal distribution.
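A compact sketch of the swap variance measure and its relation to realized variance follows; the full JO statistic with its variance estimate is omitted, and the function name is ours:

```python
import numpy as np

def swap_variance(p):
    """Swap variance from a log-price path p: SwV = 2 * sum(R_i - r_i),
    where R_i are simple returns and r_i log returns. Without jumps,
    SwV and the realized variance estimate the same quadratic variation."""
    r = np.diff(np.asarray(p, dtype=float))  # log returns
    R = np.expm1(r)                          # simple returns
    return 2.0 * np.sum(R - r)

rng = np.random.default_rng(11)
p = np.cumsum(0.005 * rng.standard_normal(5000))  # jump-free log price
swv = swap_variance(p)
rv = np.sum(np.diff(p)**2)
# For small, jump-free returns, 2*(e^r - 1 - r) ≈ r^2, so SwV ≈ RV.
assert abs(swv - rv) < 1e-3
```

The JO statistic exploits the fact that, with a jump, the third and higher order terms in this expansion no longer vanish, so SwV and RV diverge.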
As the above discussion indicates, these two jump tests share many common features (we omit the test of [9] because it is identical to the wavelet test with the Haar filter). They use the same null hypothesis and can detect the presence of jumps in a given interval. However, they cannot determine the number of jumps in this interval. The wavelet-based test, in contrast, can detect the number and location of the jumps, as well as the presence of jump dynamics. In this regard, the wavelet jump tests provide much richer information for practitioners.
4. Data Generation Processes and Simulation Settings
We apply jump tests to one-dimensional simulated asset price data. Following [13], the true log-price
is generally assumed to be a semi-martingale of the form:
where the three terms in Equation (3) correspond to the drift, diffusion and jump parts of
, respectively. In the diffusion term,
is a standard Brownian motion, and the diffusion variance
is the spot volatility. Regarding the finite activity jump part,
represents the number of jumps in
up to time
t, and
denotes the jump size. Further, we represent the price process as a diffusion:
As previously mentioned, one purpose of this study is to investigate the sensitivity of the performance of the jump tests to the various data characteristics. These characteristics are considered with respect to the three components in Equation (3).
4.1. Drift Component,
Regarding the drift term,
, we consider no-drift (
), constant drift (
) and time-varying drift cases. The purpose of introducing the time-varying case is to allow smooth structural breaks in the data characteristics. Structural breaks are usually regarded as discontinuous changes in the mean process. Following [18], these break processes are approximated by the following Fourier function, which allows both smooth and abrupt breaks with random sizes and locations; randomness is introduced through the Monte Carlo simulations when creating the simulated high frequency data with jump and smooth structural break features:
where
n is the number of frequencies included in the approximation (
is the indicator function taking a value of one when statement
A is true, and zero otherwise). Because a Fourier expansion is capable of approximating absolutely integrable functions to any desired degree of accuracy, smooth or abrupt breaks can be approximated by Fourier sinusoidals with an appropriate frequency mix. If a single frequency is used (
), then the transitions tend to be smooth, whereas higher
n values facilitate the analysis of abrupt breaks.
As they are both discontinuous shifts in the data, jumps and structural breaks can be viewed as the same type of event occurring at different frequency levels of the data. Jumps are assumed to be relevant only in high frequency data (in minutes), whereas structural breaks are relevant only at lower frequencies (in weeks). However, smooth structural breaks can be allowed in high frequency frameworks as well. Therefore, we only allow smooth breaks () along with (abrupt) jumps.
z determines the number of these smooth breaks. We randomly select break times (or locations of breaks) such that (c determines the length of time between two consecutive breaks) for all smooth break points with . is the random amplitude of the Fourier function; hence, it captures the sizes of the (random) breaks.
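To illustrate how the frequency mix controls the break shape, the sketch below approximates a level shift by a partial sum of the square-wave Fourier series; this is our own illustrative construction, not the paper's exact specification:

```python
import numpy as np

def fourier_level_shift(T, n, amp=1.0):
    """Partial square-wave Fourier sum: n = 1 gives a single smooth
    sinusoidal transition; larger n gives an increasingly abrupt break."""
    t = np.arange(T)
    mu = np.zeros(T)
    for k in range(1, 2 * n, 2):  # odd harmonics only
        mu += (4.0 / (np.pi * k)) * np.sin(2.0 * np.pi * k * t / T)
    return amp * mu

smooth = fourier_level_shift(1000, n=1)   # smooth transition
abrupt = fourier_level_shift(1000, n=50)  # near step function
# With many harmonics, the path sits near +amp and -amp on the two halves.
assert abs(abrupt[250] - 1.0) < 0.05 and abs(abrupt[750] + 1.0) < 0.05
```

This mirrors the statement in the text: a single frequency produces a smooth transition, while a richer frequency mix approximates an abrupt break to any desired accuracy.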
4.2. Spot Volatility Component,
Regarding the spot volatility,
, we consider a constant (
) and a stochastic volatility case in which the logarithm of the variance evolves as:
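The stochastic volatility case can be sketched with an Euler discretization of a driftless log-variance process, matching the exponential Brownian motion without drift used in the parametric configurations; the parameter name `xi` for the volatility-of-volatility is ours:

```python
import numpy as np

def simulate_spot_vol(T, dt, sigma0, xi, seed=5):
    """Euler path of sigma_t with d log(sigma_t^2) = xi dB_t (no drift)."""
    rng = np.random.default_rng(seed)
    log_var = np.empty(T)
    log_var[0] = 2.0 * np.log(sigma0)
    dB = np.sqrt(dt) * rng.standard_normal(T - 1)
    for t in range(1, T):
        log_var[t] = log_var[t - 1] + xi * dB[t - 1]
    return np.exp(0.5 * log_var)  # spot volatility path

sigma = simulate_spot_vol(T=1024, dt=1.0 / 1024, sigma0=0.2, xi=0.5)
assert sigma.shape == (1024,) and np.all(sigma > 0.0)
```

Exponentiating half the log-variance guarantees a strictly positive volatility path, which the diffusion term of the price process requires.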
4.3. Jump Component,
In this study, the jump component is introduced in both a deterministic and a random manner. The deterministic jump process is the trivial case where the number of jumps, the jump magnitude and the jump location are fixed, non-stochastic numbers. Conversely, in the random jump process, all of these become random. In this setting, the jump component is assumed to follow a compound Poisson process with double exponential jump magnitudes. The number of jumps is , and the jump magnitude follows a truncated double exponential process characterized by the rate parameters and and the truncation parameter . To generate the truncated double exponential process, we use the following routine:
Generate by drawing from .
To determine the jump locations, we draw integers without replacement from the discrete uniform distribution (unlike smooth breaks, we allow the jumps to appear at the beginning or the end of the sample). We denote these jump locations as , and the set of jump locations as .
We generate a random variable, with for all .
We generate two random variables, and for all .
We generate the truncated double exponential process as for all .
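The routine above can be sketched as follows; the parameter names (`lam`, `eta1`, `eta2`, `trunc`) and the exact way the truncation enters the magnitudes are our illustrative assumptions:

```python
import numpy as np

def random_jump_path(T, lam, eta1, eta2, trunc, seed=9):
    """Compound Poisson jump component: a Poisson number of jumps at uniform
    locations (drawn without replacement), with signed magnitudes from a
    truncated double exponential (magnitude at least `trunc`)."""
    rng = np.random.default_rng(seed)
    J = np.zeros(T)
    N = min(rng.poisson(lam), T)                  # number of jumps
    locs = rng.choice(T, size=N, replace=False)   # jump locations
    for i in locs:
        sign = 1.0 if rng.random() < 0.5 else -1.0
        rate = eta1 if sign > 0 else eta2         # separate up/down rates
        J[i] = sign * (trunc + rng.exponential(1.0 / rate))
    return J

J = random_jump_path(T=1024, lam=3.0, eta1=4.0, eta2=6.0, trunc=0.3)
# Every realized jump magnitude respects the truncation bound.
assert np.all((J == 0.0) | (np.abs(J) >= 0.3))
```

The truncation keeps every jump magnitude bounded away from zero, so the generated jumps remain distinguishable from ordinary diffusive increments.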
In our simulations, we first consider the influence of the jump location on the performance of the test statistics in a single jump framework. We generate price data that contain a single jump with varying (fixed) locations. Second, we consider multiple jumps by adopting a setup with two consecutive jumps. We introduce varying (fixed) distances between these two jumps and evaluate the performance of the tests. Finally, we investigate whether jumps with random numbers, magnitudes and locations alter the results.
4.4. Monte Carlo Simulation
With respect to the Monte Carlo simulations, for a given specification of the drift () and spot volatility () terms, we first generate the continuous time price data without the jumps (), and we then add jumps with different specifications.
We set the sample size as and the length of the training period as , and accept . With the length of , we generate multivariate normal random variables where is a positive definite variance covariance matrix such that and as in XGF.
We generate two Brownian motions
and
from
.
is employed to simulate the time varying spot volatility,
, as in Equation (6) using the Euler method. Similarly, for a given specification,
and
,
is used to simulate the following continuous time price data without jumps:
Having obtained a jump-free continuous time process as above, we burn the first
observation and add the jump process
to
to obtain a simulated path of the price data
:
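Putting the pieces together, a minimal Euler sketch of the simulated price path (burn-in of the continuous part, then jumps added) is as follows, with illustrative argument names:

```python
import numpy as np

def simulate_price_path(T, dt, mu, sigma, J, burn=100, seed=13):
    """Generate T + burn Euler increments mu*dt + sigma*sqrt(dt)*dW,
    discard the first `burn` points, then add the cumulative jump path."""
    rng = np.random.default_rng(seed)
    n = T + burn
    dW = np.sqrt(dt) * rng.standard_normal(n)
    y = np.cumsum(mu * dt + sigma * dW)[burn:]  # jump-free log price
    return y + np.cumsum(J)                     # jumps enter as level shifts

J = np.zeros(1024)
J[500] = 0.5  # one jump of size 0.5 at the 500th point
p = simulate_price_path(T=1024, dt=1.0 / 1024, mu=0.0, sigma=0.2, J=J)
assert p.shape == (1024,)
```

Adding the cumulated jump process after the burn-in keeps the continuous and jump components cleanly separable, which is exactly the structure the tests are designed to detect.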
All simulations are run in R, and the codes are available upon request from the authors.
Parametric Configurations
The parametric values are set as follows. The constant drift is set equal to either or . For the time-varying drift, we introduce smooth () single breaks () with random sizes, where . Additionally, the truncation parameter for the distribution of the smooth break locations, c, is set to five. As in XGF, the constant spot volatility is set equal to . In the time-varying spot volatility case, it is assumed to follow an exponential Brownian motion without a drift term, where and . Furthermore, when the jumps are generated at random locations with different magnitudes, we use the parameter values , , , .
5. Results
To evaluate the performance of the wavelet-based jump tests and their alternatives, the BNS and JO tests, under different data configurations, we run a series of Monte Carlo simulations to compute their size and power statistics. The number of Monte Carlo replications is set equal to 1000, and the number of data points is fixed at 1024 for each simulation (increasing the number of replications further does not significantly alter the results).
Size, as usual, is defined as the frequency of the spurious detection of jumps when there is, in fact, no jump in the data generation process. The BNS and JO tests, unlike the wavelet-based tests, cannot detect the jump location, but they can determine whether there are any jumps in a given interval. Hence, in terms of comparing sizes, a direct comparison between wavelet-based tests and their alternatives, the BNS and JO tests, is possible.
However, power comparisons are not as straightforward as those of size. When the null of no jump is rejected by the BNS and JO tests, we do not know whether it is rejected in favour of a single jump or of multiple jumps, unlike the wavelet-based tests, for which we know the exact number and locations of the jumps in the case of rejection. Therefore, a perfectly fair power comparison between wavelet-based tests and their alternatives does not appear possible. Accordingly, we define power as the frequency of detecting the presence of a jump (or jumps) in a pre-specified interval for the BNS and JO tests, while it is defined as the frequency of detecting the correct jump locations for the wavelet-based tests.
In our analysis, we also search for the most powerful wavelet filter to be used in wavelet-based jump tests. For this purpose, 29 different wavelet filters from four different wavelet classes are included in the search. The included classes are the extremal phase (daublet), least asymmetric (symlet), best localized and coiflet filters. They are denoted as , , and , respectively, with the subscript L denoting the filter length, as before. The filter length can be as great as 30, and it changes from one class to another. All 29 wavelet filters are included in the analysis, as follows: (daublet with ten different lengths); (symlet with nine different lengths); (best localized with five different lengths); (coiflet with five different lengths).
From these filters, the extremal phase filters, also known as Daubechies wavelets, are compactly supported filters of finite length and can be characterized by a maximal number of vanishing moments [15]. A special filter in this class is
, namely the Haar wavelet, which is frequently used in the time series literature. The second class comprises the symlets, or least asymmetric filters, which are modified versions of the Daubechies filters obtained by increasing their symmetry. The other two classes are also generalizations of the Daubechies filters. To save space, we only report the six most successful wavelets in terms of size and power. These are listed in Table 1 (the complete simulation results can be provided upon request).
The list in Table 1 reflects the fact that filter lengths greater than six or eight are not as useful in wavelet jump tests, which may result from the over-smoothing property of longer wavelets.
We report the empirical power calculations computed under different jump characteristics. The rejection frequencies calculated at the one percent level of significance are presented for single, consecutive and random numbers of jumps. For each of these jump characteristics, we also consider both the constant and stochastic volatility cases. However, since the differences between constant and stochastic spot volatility are qualitatively unimportant, we provide the latter results in Appendix A.
Table 2, Table 3, Table 4 and Table 5 summarize the size-corrected empirical power values for the various cases.
5.1. Single Jump
The size and power analysis is first conducted for the single jump case, for which a relatively fair comparison is possible between the wavelet-based tests and the BNS and JO jump tests (XGF also conducted a similar power analysis for the single jump case; our analysis extends theirs and contains much richer data generation configurations).
Table 2 provides the results for the constant spot volatility case (
).
As evidenced from the first vertical block of Table 2, where the data exhibit no drift (
), the BNS and JO tests perform as well as the best of the wavelet-based tests (
) as far as power is concerned (the performances of the standard and the bootstrapped versions of
are identical). However, as illustrated in the remaining two vertical blocks of the table, when the data exhibit a non-constant drift (
) or a structural break (
), the performance of the non-wavelet tests deteriorates significantly, with noticeable size distortions (we read the size of the tests from the no-jump rows; the tests are performed at the one percent significance level). The JO test almost always indicates the presence of a jump irrespective of whether there is, in fact, a jump in the data, whereas the BNS test has lower size distortions, although they are still well above acceptable levels. In contrast, the wavelet tests exhibit no size distortions in the non-zero drift cases, just as in the zero drift case. In terms of power, the BNS and JO tests perform as well as the wavelet-based tests in the case of large jump sizes (
). In the case of moderate jump sizes (
), all wavelet-based tests, except
, are superior to the BNS test and equal to the JO test in terms of power. However, this apparently strong power performance of the JO test is questionable because, when a non-zero drift is present in the data, the test is oversized at the maximum level. This means that the test cannot differentiate between a non-zero drift and a jump and thus considers both to be jumps. In these power comparisons between wavelet-based and non-wavelet tests, the wavelet-based tests, unlike the BNS and JO tests, must also indicate the exact location of the jump to be considered as powerful as the others.
Another noticeable pattern emerging from Table 2 is that as the filter length used in the wavelet-based tests increases, the power decreases. As previously indicated, this is a general result and constitutes the reason why wavelet tests with longer filters are excluded from the tables.
Jump locations are introduced in a non-random manner at points 30, 475 and 875 (approximately 3, 45 and 85 percent of the data, respectively). The results presented in Table 2 indicate that the jump location is not significantly important with respect to the power and size properties of the tests. It only appears that jumps occurring near the beginning of the sample (at location 30, i.e., in the first three percent of the data) cause some of the tests to be oversized. This result is expected since the spot volatility estimate is consistent but may perform poorly in small samples. When the jump occurs in the middle or at the end of the sample, these size distortions disappear. Regarding power, for some of the tests, we only observe a slight deterioration when the jump location moves towards the end of the sample. Furthermore, the use of the bootstrap or non-bootstrap version of the wavelet test does not appear to significantly affect the results.
5.2. Multiple Jumps
In this section, we evaluate the performance of the test statistics under two consecutive jumps, assuming the same jump magnitude for both. To control for the impact of jump location and focus on the effect of multiple jumps, we again do not randomize jump locations. Accordingly, we fix the location of the first jump at the 300th period; the second jump follows the first at varying (non-random) distances. To save space, we tabulate only three distances between the first and second jump, namely 2, 10 and 20 data points, which is sufficient to capture the main dynamics.
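The two-jump design described above can be sketched as follows. The function, its parameter values (sample size, volatility, jump size) and the simple Gaussian diffusion increments are our own hypothetical illustration of the design, not the simulation code used in the study.

```python
import numpy as np

def simulate_two_jump_path(n=1000, first=300, distance=10, jump_size=1.0,
                           sigma=0.01, seed=0):
    """Gaussian diffusion increments plus two equal-sized jumps.

    The first jump is fixed at index `first`; the second follows at
    `first + distance`, mirroring the 2/10/20-point spacings in the text.
    """
    rng = np.random.default_rng(seed)
    increments = sigma * rng.standard_normal(n)   # diffusive part
    increments[first] += jump_size                # first jump
    increments[first + distance] += jump_size     # second (consecutive) jump
    return np.cumsum(increments)                  # log-price path
```

Varying `distance` over 2, 10 and 20 reproduces the three spacings tabulated in the section.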
Table 3 summarizes the power results for the first jump. Since the size results remain the same as in the single-jump case, they are not included in this or the following tables. Regarding the wavelet-based tests, the results depicted in the table are almost identical to those of the single-jump case in
Table 2. This equivalence implies that the power and size properties for the first jump are independent of the presence of the second jump. This result is expected because the wavelet-based tests use information only up to the jump point (by the same logic, introducing more jumps would yield similar results). In contrast, compared to the single-jump results of
Table 2, the JO and BNS tests exhibit significantly more power when no drift is present. This is also expected because the presence of more than one jump apparently strengthens the ability of the JO and BNS tests to detect them, as their conclusions are irrespective of the number of jumps. However, even in this case, some wavelet-based tests are equally successful or even slightly better. It is also noted that the non-wavelet tests suffer from serious size distortions when a non-zero drift manifests itself in the data. Hence, the main conclusion from the previous section remains valid, as does the suggestion that, compared to their alternatives, the wavelet-based tests are relatively successful in detecting the first jump even when non-standard features are present in the data.
We now focus on the power results for the second jump, which constitutes a more interesting case for the wavelet-based tests. By construction, the results for the non-wavelet tests remain identical to the first-jump case, as their conclusions are irrespective of the number of jumps. The results of the wavelet-based tests, however, may differ, as their success may vary across jumps. The power results are tabulated in
Table 4. Compared to the single-jump and first-jump cases in the previous tables, some wavelet tests exhibit significant power losses with moderate jump magnitudes (
) associated with jump distances of ten or more data points. This new result renders those wavelets less powerful than the non-wavelet tests on non-zero drift data, unlike before. Nonetheless, since the best of the wavelet tests still perform better than the non-wavelet tests in these cases as well, the main conclusion obtained previously is not altered.
5.3. Random Jumps
Thus far, the jump characteristics, such as number, location and magnitude, have been modelled as non-random and kept fixed across simulations. In this section, we consider the implications of randomized jump behaviour by introducing random jump locations (or times) and magnitudes. In the randomized jump setting, as outlined in
Section 4.3, the number of jumps is assumed to be distributed as Poisson with rates
, and random jump magnitudes are distributed as a truncated version of the double exponential process with a truncation level of
. The Poisson rate
determines the average number of jumps, and the truncation point
indicates the minimum value that jump magnitudes may assume (after drawing the random jump magnitudes using the truncation parameter, actual jump sizes are obtained by multiplying this random quantity by the spot volatility at the jump location). As usual, the tabulated
and
values are illustrative, owing to space considerations (as explained in
Section 4.3, once the number of jumps is determined, their locations are randomly determined through a uniform distribution). We only include two values to represent large (
) and small (
) jump magnitudes and three values to represent single (
) and multiple (
) jumps (As above, the results for test sizes remain the same. Note also that values such as
and
, in this random setting, could not ensure the absence of jumps for all realizations if they had been included).
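The randomized jump mechanism just described, a Poisson number of jumps, uniformly distributed locations and truncated double exponential magnitudes scaled by the spot volatility at the jump location, can be sketched as follows. The function name and the rejection-sampling implementation of the truncation are our own assumptions for illustration.

```python
import numpy as np

def draw_random_jumps(n_obs, poisson_rate, truncation, spot_vol, seed=0):
    """Draw random jump times and sizes for the randomized setting.

    The number of jumps is Poisson(poisson_rate); locations are uniform
    over the sample; raw magnitudes follow a double exponential (Laplace)
    distribution truncated so that |magnitude| >= truncation, and actual
    jump sizes are obtained by scaling with the spot volatility at the
    jump location.
    """
    rng = np.random.default_rng(seed)
    n_jumps = rng.poisson(poisson_rate)
    times = rng.integers(0, n_obs, size=n_jumps)
    magnitudes = []
    for _ in range(n_jumps):
        m = rng.laplace()
        while abs(m) < truncation:   # reject draws below the truncation level
            m = rng.laplace()
        magnitudes.append(m)
    sizes = np.array(magnitudes) * np.asarray(spot_vol)[times]
    return times, sizes
```

The truncation thus guarantees a minimum jump magnitude in every realization, which is why very small rate values cannot ensure the absence of jumps across all replications.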
The power results are illustrated in
Table 5. For the wavelet tests, power is defined as the detection of all jumps simultaneously; failing to detect even one of them counts as a failure. For the non-wavelet tests, as before, we only check whether any jump is detected, regardless of the number of jumps. The results are similar to those of the second-jump case above (
Table 4). We observe significant power losses in most of the wavelet-based tests when the jump magnitudes are small (
). Nonetheless, we do not observe the same power deterioration in
, and the best of the wavelet tests still demonstrates full power in all conditions.
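The two power definitions used in this comparison can be expressed compactly as follows. The function names and the toy inputs are our own illustration; replication records would in practice come from the simulated tests.

```python
import numpy as np

def wavelet_power(detected_sets, true_sets):
    """Power when success requires detecting every true jump location.

    A replication counts as a success only if the set of true jump
    locations is fully contained in the detected locations.
    """
    hits = [set(t).issubset(set(d)) for d, t in zip(detected_sets, true_sets)]
    return float(np.mean(hits))

def nonwavelet_power(rejected_flags):
    """Power when success only requires rejecting the no-jump null,
    regardless of how many jumps are present."""
    return float(np.mean(rejected_flags))
```

The all-or-nothing criterion in `wavelet_power` is the stricter one, which explains why power losses on small jump magnitudes show up for the wavelet tests but not for the non-wavelet alternatives.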
6. Conclusions
In this paper, we analysed the performance of wavelet and non-wavelet jump tests under various data characteristics. Unlike the alternatives, wavelet jump tests have the clear advantage of detecting the exact number and locations of jumps. In addition, wavelet-based tests are robust to several non-standard data features. Our results indicate that this robustness cannot be easily attained by the non-wavelet alternatives.
In this paper, we focused on smooth breaks in the drift term and nonstationary stochastic volatility in the diffusive variance of the price process. However, there are other important features that may distort jump detection procedures. One of these features is the presence of microstructure noise in the price data. If the data include highly volatile microstructure noise, the jump detection methods may require further refinements. We leave this discussion for a future comprehensive study.
It is also important to note that not all wavelet filters of different lengths exhibit the same success in jump detection. In particular, as the filter length increases, the power of the wavelet-based tests is negatively affected. The reason behind this result may be the over-smoothing property of longer filters. As a result, we recommend that practitioners use the wavelet filters , or in their analyses.