Communication

Fast Generalized Sliding Sinusoidal Transforms

by Vitaly Kober 1,2,3
1 Center of Scientific Research and Higher Education of Ensenada, Ensenada 22860, BC, Mexico
2 Department of Mathematics, Chelyabinsk State University, Chelyabinsk 454001, Russia
3 Institute for Information Transmission Problems, Russian Academy of Sciences, Moscow 127051, Russia
Mathematics 2023, 11(18), 3829; https://doi.org/10.3390/math11183829
Submission received: 28 May 2023 / Revised: 27 August 2023 / Accepted: 4 September 2023 / Published: 6 September 2023

Abstract

Discrete cosine and sine transforms closely approximate the Karhunen–Loeve transform for first-order Markov stationary signals with high and low correlation coefficients, respectively. Discrete sinusoidal transforms can be used in data compression, digital filtering, spectral analysis and pattern recognition. Short-time transforms based on discrete sinusoidal transforms are suitable for the adaptive processing and time–frequency analysis of quasi-stationary data. The generalized sliding discrete transform is a type of short-time transform, that is, a fixed-length windowed transform that slides over a signal with an arbitrary integer step. In this paper, eight fast algorithms for calculating various sliding sinusoidal transforms based on a generalized solution of a second-order linear nonhomogeneous difference equation and pruned discrete sine transforms are proposed. The performances of the algorithms in terms of computational complexity and execution time were compared with those of recursive sliding and fast discrete sinusoidal algorithms. The low complexity of the proposed algorithms resulted in significant time savings.

1. Introduction

Discrete cosine transform (DCT) and discrete sine transform (DST) are two of the most common unitary transforms in signal processing due to their close approximation to the Karhunen–Loeve transform (KLT) [1]. For signals with a correlation coefficient close to one, DCT provides a better approximation of KLT than DST, whereas DST is closer to KLT when the correlation coefficient is less than 0.5. Because KLT consists of eigenvectors of the covariance matrix of data, there is no single unique transform for all random processes, and there are no fast algorithms. In contrast, DCT and DST are data-independent, and many fast algorithms have been proposed.
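As a quick illustration of this well-known approximation property (not part of the original paper), the following Python sketch builds the covariance matrix of a first-order Markov signal and compares its KLT eigenvectors with an orthonormal DCT-II basis; the function names, the order N = 16, and the correlation coefficient rho = 0.95 are illustrative choices.

```python
import numpy as np

def dct2_basis(N):
    # Orthonormal DCT-II basis: row s is cos(pi*(n + 0.5)*s / N), suitably normalized
    n = np.arange(N)
    C = np.cos(np.pi * (n[None, :] + 0.5) * n[:, None] / N)
    C[0, :] *= 1.0 / np.sqrt(2.0)
    return C * np.sqrt(2.0 / N)

def klt_basis_ar1(N, rho):
    # Covariance of a first-order Markov (AR(1)) process: R[i, j] = rho**|i - j|
    idx = np.arange(N)
    R = rho ** np.abs(idx[:, None] - idx[None, :])
    # KLT rows are eigenvectors of R, sorted by decreasing eigenvalue
    _, V = np.linalg.eigh(R)
    return V[:, ::-1].T

N, rho = 16, 0.95
C = dct2_basis(N)
Phi = klt_basis_ar1(N, rho)
# Eigenvector sign and ordering are arbitrary, so compare magnitudes of inner products:
# each KLT vector should be nearly parallel to some DCT-II vector for rho close to 1.
match = np.abs(Phi @ C.T).max(axis=1)
print("min/mean |<klt, dct>|:", match.min().round(3), match.mean().round(3))
```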
Discrete sinusoidal transforms were originally introduced to process long-term stationary data [2]. They have found widespread application in various forms of signal processing, such as data coding [3], adaptive digital filtering [4], image enhancement [5], and interpolation [6,7]. Short-time signal processing [8] is an appropriate technique for time–frequency analyses and processing nonstationary data, such as ECG signal processing [9], spectral analysis and speech processing [10], adaptive linear filtering [5,11], radar emitter recognition [12], spectral analysis of biological signals [13], heart sound classification [14], and time–frequency analyses of supply voltage systems [15]. The hopping discrete transform [16,17,18] is a time series of equidistant windowed signal transforms. The generalized sliding discrete transform is a type of short-time transform in which a fixed-length windowed transform slides over a signal with an arbitrary integer step. In this case, the window signal is constantly updated with new samples, and the old samples are discarded. To obtain an acceptable spectral resolution, the window size must be such that the signal in the window can be considered approximately stationary. DCT and DST are important unitary transforms for the spectral analysis of time-varying signals. Because the computation of discrete sinusoidal transforms in a window sliding over a signal with a one-sample step is computationally expensive, fast algorithms have been proposed to compute eight types of transforms using recursive equations [19,20].
When the signal spectrum changes slowly, the window can move by more than one sample at a time to speed up the spectral analysis. To quickly calculate the DCT in a window moving with small integer steps, a recursive sliding algorithm was proposed [21]. Recursive equations and fast algorithms have been proposed for calculating various types of generalized sliding DCTs [22] that can be tuned as a trade-off between hardware consumption and performance.
Four types of DCT and four types of DST were classified in [23]. This paper proposes eight fast algorithms for calculating sliding discrete sinusoidal transforms based on the derived recursive equations and fast-pruned DSTs. The contributions of this work are as follows:
  • A generalized solution of a second-order linear nonhomogeneous difference equation is obtained using the unilateral z-transform.
  • Based on the generalized solution, eight generalized sliding sinusoidal transforms are proposed.
  • Fast algorithms have been proposed for computing generalized sliding sinusoidal transforms. The algorithms are based on the derived recursive equations and fast-pruned DSTs.
  • The performance of the proposed algorithms in terms of computational complexity and execution time is compared with those of recursive sliding and fast sinusoidal algorithms.
The remainder of this paper is organized as follows. Section 2 introduces the notation and recalls the sliding discrete sinusoidal transforms. A generalized solution of a second-order linear nonhomogeneous difference equation and generalized sliding sinusoidal transforms are presented in Section 2. The design of the fast algorithms is presented in Section 3. In Section 4, the performances of the proposed, recursive sliding and well-known fast discrete sinusoidal algorithms with respect to computational complexity and execution time are presented and discussed. Section 5 summarizes our findings.

2. Generalized Sliding Discrete Sinusoidal Transforms

2.1. Sliding Discrete Sinusoidal Transforms

Let us recall the definition of sliding discrete sinusoidal transforms [20] using the following notation: \mathrm{sn}_N(rs) \triangleq \sin\!\left(\frac{\pi}{N}rs\right), \mathrm{cs}_N(rs) \triangleq \cos\!\left(\frac{\pi}{N}rs\right), where N is the transform order and r, s \in \mathbb{R}.
The sliding cosine transforms (SCT) of types I, II, III, and IV can be defined as
\text{SCT-I:}\quad y_s(k) = \sum_{n=-N_1}^{N_2} x(k+n)\,\mathrm{cs}_N\big(s(n+N_1)\big), \quad (1)
\text{SCT-II:}\quad y_s(k) = \sum_{n=-N_1}^{N_2} x(k+n)\,\mathrm{cs}_N\big(s(n+N_1+1/2)\big), \quad (2)
\text{SCT-III:}\quad y_s(k) = \sum_{n=-N_1}^{N_2} x(k+n)\,\mathrm{cs}_N\big((s+1/2)(n+N_1)\big), \quad (3)
\text{SCT-IV:}\quad y_s(k) = \sum_{n=-N_1}^{N_2} x(k+n)\,\mathrm{cs}_N\big((s+1/2)(n+N_1+1/2)\big), \quad (4)
where x(k), k \in \mathbb{Z}, is the input signal; N_1 and N_2 are integers; for SCT-I, the signal window size is N = N_1 + N_2, and for the other cosine transforms the window size is N = N_1 + N_2 + 1; for SCT-I, the transform coefficients around time k are y_s(k), k \in \mathbb{Z}, s = 0, \dots, N, whereas for the other cosine transforms the coefficients are y_s(k), k \in \mathbb{Z}, s = 0, \dots, N-1.
The sliding sine transforms (SST) of types I, II, III, and IV are given as
\text{SST-I:}\quad y_s(k) = \sum_{n=-N_1}^{N_2} x(k+n)\,\mathrm{sn}_N\big(s(n+N_1+1)\big), \quad (5)
\text{SST-II:}\quad y_s(k) = \sum_{n=-N_1}^{N_2} x(k+n)\,\mathrm{sn}_N\big(s(n+N_1+1/2)\big), \quad (6)
\text{SST-III:}\quad y_s(k) = \sum_{n=-N_1}^{N_2} x(k+n)\,\mathrm{sn}_N\big((s-1/2)(n+N_1+1)\big), \quad (7)
\text{SST-IV:}\quad y_s(k) = \sum_{n=-N_1}^{N_2} x(k+n)\,\mathrm{sn}_N\big((s-1/2)(n+N_1+1/2)\big), \quad (8)
where x(k), k \in \mathbb{Z}, is the input signal; N_1 and N_2 are integers; for SST-I, the signal window size is N = N_1 + N_2 + 2, and for the other sine transforms the window size is N = N_1 + N_2 + 1; for SST-I, the transform coefficients around time k are y_s(k), k \in \mathbb{Z}, s = 1, \dots, N-1, whereas for the other sine transforms the coefficients are y_s(k), k \in \mathbb{Z}, s = 1, \dots, N.
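For reference, a direct, unoptimized implementation of two of these definitions (SCT-II and SST-I) is sketched below in Python; it is only intended as a correctness baseline for the fast algorithms discussed later, and the function names and test parameters are mine, not the paper's.

```python
import numpy as np

def sct2(x, k, N1, N2):
    # SCT-II coefficients y_s(k), s = 0..N-1, with N = N1 + N2 + 1 (Equation (2))
    N = N1 + N2 + 1
    n = np.arange(-N1, N2 + 1)
    s = np.arange(N)[:, None]
    return (x[k + n][None, :] * np.cos(np.pi * s * (n + N1 + 0.5) / N)).sum(axis=1)

def sst1(x, k, N1, N2):
    # SST-I coefficients y_s(k), s = 1..N-1, with N = N1 + N2 + 2 (Equation (5))
    N = N1 + N2 + 2
    n = np.arange(-N1, N2 + 1)
    s = np.arange(1, N)[:, None]
    return (x[k + n][None, :] * np.sin(np.pi * s * (n + N1 + 1) / N)).sum(axis=1)

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
print(sct2(x, 10, 7, 8)[:4])   # window size N = 16
print(sst1(x, 10, 7, 7)[:4])   # window size N = 16
```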
The recursive relationships between three consecutive spectra for all sliding sinusoidal transforms are completely determined by a second-order linear nonhomogeneous difference equation, as follows [20]:
y_s(k+2) + a_s\,y_s(k+1) + b_s\,y_s(k) = f_s(k). \quad (9)
The parameters a_s and f_s(k) for all sliding transforms are listed in Table 1. The parameter b_s is equal to one for all transforms. Note that, for a fixed s, Equation (9) is a linear constant-coefficient difference equation. Next, we determine a generalized solution to this equation.
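The shift property summarized by Equation (9) and Table 1 can be checked numerically. The sketch below does this for SST-I, using the SST-I row of Table 1 as reconstructed above; the random test signal and the window parameters are arbitrary.

```python
import numpy as np

def sst1(x, k, N1, N2):
    # Direct SST-I (Equation (5)); window size N = N1 + N2 + 2
    N = N1 + N2 + 2
    n = np.arange(-N1, N2 + 1)
    s = np.arange(1, N)[:, None]
    return (x[k + n][None, :] * np.sin(np.pi * s * (n + N1 + 1) / N)).sum(axis=1)

rng = np.random.default_rng(1)
x = rng.standard_normal(128)
N1 = N2 = 7
N = N1 + N2 + 2
k = 40
s = np.arange(1, N)

# Left-hand side of Equation (9) with a_s = -2 cs_N(s) and b_s = 1
lhs = (sst1(x, k + 2, N1, N2)
       - 2.0 * np.cos(np.pi * s / N) * sst1(x, k + 1, N1, N2)
       + sst1(x, k, N1, N2))
# SST-I row of Table 1: f_s(k) = [x(k-N1) - (-1)^s x(k+N2+2)] * sn_N(s)
rhs = (x[k - N1] - (-1.0) ** s * x[k + N2 + 2]) * np.sin(np.pi * s / N)
print(np.allclose(lhs, rhs))   # expected: True
```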

2.2. Generalized Solution of Second-Order Linear Nonhomogeneous Difference Equation

The second-order constant-coefficient difference equation is given as follows:
y(k+2) + a\,y(k+1) + b\,y(k) = f(k), \quad (10)
where a, b \in \mathbb{R}, f(k) is a given real sequence that is not identically zero, and k is a non-negative integer. The objective is to find a generalized solution to Equation (10) with the initial values y(0), y(K), where K is an integer and K \ge 2. To solve the problem, we first find the solution of Equation (10), assuming that the initial values y(0), y(1) are given. The unilateral z-transform [8] is used to analyze causal systems (inputs are specified for k \ge 0) described by linear constant-coefficient difference equations with prescribed initial conditions. The bilateral and unilateral z-transforms differ significantly in their time-shifting properties. By applying the unilateral z-transform and using its shifting property, the equation can be written as
\big(Y(z) - y(0) - y(1)z^{-1}\big)z^{2} + a\big(Y(z) - y(0)\big)z + b\,Y(z) = F(z), \quad (11)
where Y(z) and F(z) are the z-transforms of y(k) and f(k), respectively. Note that when solving the equation using the unilateral z-transform, the initial values are already taken into account. To simplify the inverse z-transform, we define the following rational function:
W(z) \triangleq \frac{z}{z^{2}+az+b} = \frac{z}{(z-\lambda_1)(z-\lambda_2)}, \quad (12)
where \lambda_1, \lambda_2 are the roots of the denominator of W(z), so that a = -(\lambda_1+\lambda_2) and b = \lambda_1\lambda_2.
Next, Equation (11) is rewritten as
Y(z) = y(0)\,(z+a)\,W(z) + y(1)\,W(z) + F(z)\,W(z)\,z^{-1}. \quad (13)
W(z) can be represented by a partial fraction expansion as follows:
W(z) = \begin{cases} \dfrac{1}{\lambda_2-\lambda_1}\left(\dfrac{1}{1-\lambda_2 z^{-1}} - \dfrac{1}{1-\lambda_1 z^{-1}}\right), & \lambda_2 \neq \lambda_1,\\[1ex] \dfrac{z^{-1}}{\left(1-\lambda_1 z^{-1}\right)^{2}}, & \lambda_2 = \lambda_1. \end{cases} \quad (14)
According to Table 15 of [8], the inverse z-transform can be calculated as
w(k) = \begin{cases} \dfrac{\lambda_2^{k}-\lambda_1^{k}}{\lambda_2-\lambda_1}, & \lambda_2 \neq \lambda_1,\\[1ex] k\,\lambda_1^{k-1}, & \lambda_2 = \lambda_1. \end{cases} \quad (15)
Considering that w(k) = 0 for k < 1, and using the shifting and convolution properties (see Table 16 of [8]), the solution of the difference equation given in Equation (10) has the form
y(k) = y(0)\big(w(k+1) + a\,w(k)\big) + y(1)\,w(k) + \sum_{m=2}^{k} w(m-1)\,f(k-m). \quad (16)
Using Equations (12) and (15), Equation (16) can be rewritten as
y(k) = \begin{cases} \dfrac{1}{\lambda_2-\lambda_1}\left[\sum_{m=2}^{k}\left(\lambda_2^{m-1}-\lambda_1^{m-1}\right)f(k-m) - b\,y(0)\left(\lambda_2^{k-1}-\lambda_1^{k-1}\right) + y(1)\left(\lambda_2^{k}-\lambda_1^{k}\right)\right], & \lambda_2 \neq \lambda_1,\\[1ex] \sum_{m=2}^{k}(m-1)\,\lambda_1^{m-2}f(k-m) - b\,y(0)(k-1)\,\lambda_1^{k-2} + y(1)\,k\,\lambda_1^{k-1}, & \lambda_2 = \lambda_1. \end{cases} \quad (17)
Now, suppose that y(0) and y(K) are given; then, y(1) can be expressed from Equation (17) as follows:
y(1) = \begin{cases} \dfrac{1}{\lambda_2^{K}-\lambda_1^{K}}\left[y(K)\left(\lambda_2-\lambda_1\right) - \sum_{m=2}^{K}\left(\lambda_2^{m-1}-\lambda_1^{m-1}\right)f(K-m) + b\,y(0)\left(\lambda_2^{K-1}-\lambda_1^{K-1}\right)\right], & \lambda_2 \neq \lambda_1,\\[1ex] \dfrac{1}{K\lambda_1^{K-1}}\left[y(K) - \sum_{m=2}^{K}(m-1)\,\lambda_1^{m-2}f(K-m) + b\,y(0)(K-1)\,\lambda_1^{K-2}\right], & \lambda_2 = \lambda_1. \end{cases} \quad (18)
Finally, substituting y 1 into Equation (17) and considering that b = λ 1 λ 2 , one can arrive at a generalized solution of the problem as follows:
y(k) = \begin{cases} \dfrac{1}{\lambda_2-\lambda_1}\left[\sum_{m=2}^{k}\left(\lambda_2^{m-1}-\lambda_1^{m-1}\right)f(k-m) - \dfrac{\lambda_2^{k}-\lambda_1^{k}}{\lambda_2^{K}-\lambda_1^{K}}\sum_{m=2}^{K}\left(\lambda_2^{m-1}-\lambda_1^{m-1}\right)f(K-m)\right] + \dfrac{y(0)\left(\lambda_2^{K}\lambda_1^{k}-\lambda_1^{K}\lambda_2^{k}\right) + y(K)\left(\lambda_2^{k}-\lambda_1^{k}\right)}{\lambda_2^{K}-\lambda_1^{K}}, & \lambda_2 \neq \lambda_1,\\[1ex] \sum_{m=2}^{k}(m-1)\,\lambda_1^{m-2}f(k-m) - \dfrac{k\lambda_1^{k-K}}{K}\sum_{m=2}^{K}(m-1)\,\lambda_1^{m-2}f(K-m) - \dfrac{k-K}{K}\,y(0)\,\lambda_1^{k} + \dfrac{k\lambda_1^{k-K}}{K}\,y(K), & \lambda_2 = \lambda_1. \end{cases} \quad (19)
We are interested in the relationship between three equidistant values of the function, y(0), y(K), and y(2K), which simplifies the equation:
y(2K) = \begin{cases} \dfrac{1}{\lambda_2-\lambda_1}\left[\sum_{m=2}^{K}\left(\left(\lambda_2^{m-1}-\lambda_1^{m-1}\right)f(2K-m) - \lambda_2^{K}\lambda_1^{K}\left(\lambda_2^{1-m}-\lambda_1^{1-m}\right)f(m-2)\right) + \left(\lambda_2^{K}-\lambda_1^{K}\right)f(K-1)\right] - \lambda_2^{K}\lambda_1^{K}\,y(0) + \left(\lambda_2^{K}+\lambda_1^{K}\right)y(K), & \lambda_2 \neq \lambda_1,\\[1ex] \sum_{m=2}^{K}(m-1)\left(\lambda_1^{m-2}f(2K-m) + \lambda_1^{2K-m}f(m-2)\right) + K\lambda_1^{K-1}f(K-1) - \lambda_1^{2K}\,y(0) + 2\lambda_1^{K}\,y(K), & \lambda_2 = \lambda_1. \end{cases} \quad (20)
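Equation (20) can be verified against direct iteration of the difference equation. The following sketch checks the λ2 ≠ λ1 branch for an arbitrary real-root example; a, b, K, the forcing sequence, and the initial values are test inputs chosen here, not taken from the paper.

```python
import numpy as np

# Arbitrary second-order equation y(k+2) + a*y(k+1) + b*y(k) = f(k)
lam1, lam2 = 2.0, 3.0                       # distinct roots -> a = -(l1+l2), b = l1*l2
a, b = -(lam1 + lam2), lam1 * lam2
K = 4
rng = np.random.default_rng(2)
f = rng.standard_normal(2 * K)
y = np.zeros(2 * K + 1)
y[0], y[1] = 0.7, -1.3
for k in range(2 * K - 1):
    y[k + 2] = -a * y[k + 1] - b * y[k] + f[k]          # direct iteration

# Equation (20), lambda2 != lambda1 branch, expressed through y(0) and y(K)
m = np.arange(2, K + 1)
S = ((lam2 ** (m - 1) - lam1 ** (m - 1)) * f[2 * K - m]
     - (lam2 * lam1) ** K * (lam2 ** (1 - m) - lam1 ** (1 - m)) * f[m - 2]).sum()
S += (lam2 ** K - lam1 ** K) * f[K - 1]
y2K = (S / (lam2 - lam1)
       - (lam2 * lam1) ** K * y[0]
       + (lam2 ** K + lam1 ** K) * y[K])
print(np.isclose(y2K, y[2 * K]))   # expected: True
```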

2.3. Recursive Equations for Generalized Sliding Sinusoidal Transforms

Sliding sinusoidal transforms based on the recursive Equation (20) are referred to as generalized sliding discrete sinusoidal transforms. The transforms are computed in a window sliding over the signal with an integer step K \ge 2. The parameters \lambda_1, \lambda_2 for all generalized sliding sinusoidal transforms are listed in Table 2.
It can be seen that the parameters are described by the same expression with different arguments, that is, \lambda_{1,2} = \mathrm{cs}_N(\varphi_s) \pm i\,\mathrm{sn}_N(\varphi_s) = \exp(\pm i\pi\varphi_s/N), where \varphi_s = s, s+1/2, or s-1/2 depending on the transform. Therefore, by substituting \lambda_1, \lambda_2 into Equation (20), we can obtain a second-order recursive equation common to all generalized sliding sinusoidal transforms as follows:
y_s(2K) = \begin{cases} \sum_{m=1}^{K}\bar{f}_s(m)\dfrac{\mathrm{sn}_N(m\varphi_s)}{\mathrm{sn}_N(\varphi_s)} - y_s(0) + 2\,\mathrm{cs}_N(K\varphi_s)\,y_s(K), & \lambda_2 \neq \lambda_1,\\[1ex] \sum_{m=1}^{K} m\,\bar{f}_s(m) - y_s(0) + 2\,y_s(K), & \lambda_1 = \lambda_2 = 1,\\[1ex] \sum_{m=1}^{K} m\,\bar{f}_s(m)(-1)^{m+1} - y_s(0) + 2(-1)^{K}\,y_s(K), & \lambda_1 = \lambda_2 = -1, \end{cases} \quad (21)
where \bar{f}_s(m) = f_s(m-1) + f_s(2K-m-1) for m = 1, \dots, K-1, and \bar{f}_s(K) = f_s(K-1).
From Equation (21), one can obtain recursive equations for a generalized SCT of types I, II, III and IV, respectively, as
\text{GSCT-I, GSCT-II:}\quad y_s(2K) = \begin{cases} \sum_{m=1}^{K}\bar{f}_s(m)\dfrac{\mathrm{sn}_N(ms)}{\mathrm{sn}_N(s)} - y_s(0) + 2\,\mathrm{cs}_N(Ks)\,y_s(K), & s \in [1, N-1],\\[1ex] \sum_{m=1}^{K} m\,\bar{f}_s(m) - y_s(0) + 2\,y_s(K), & s = 0,\\[1ex] \sum_{m=1}^{K} m\,\bar{f}_s(m)(-1)^{m+1} - y_s(0) + 2(-1)^{K}\,y_s(K), & s = N \ (\text{only GSCT-I}), \end{cases} \quad (22)
\text{GSCT-III, GSCT-IV:}\quad y_s(2K) = \sum_{m=1}^{K}\bar{f}_s(m)\dfrac{\mathrm{sn}_N\big(m(s+1/2)\big)}{\mathrm{sn}_N(s+1/2)} - y_s(0) + 2\,\mathrm{cs}_N\big(K(s+1/2)\big)\,y_s(K), \quad s \in [0, N-1]. \quad (23)
Similarly, recursive equations can be derived for a generalized SST of types I, II, III and IV, respectively, as
\text{GSST-I, GSST-II:}\quad y_s(2K) = \begin{cases} \sum_{m=1}^{K}\bar{f}_s(m)\dfrac{\mathrm{sn}_N(ms)}{\mathrm{sn}_N(s)} - y_s(0) + 2\,\mathrm{cs}_N(Ks)\,y_s(K), & s \in [1, N-1],\\[1ex] \sum_{m=1}^{K} m\,\bar{f}_s(m)(-1)^{m+1} - y_s(0) + 2(-1)^{K}\,y_s(K), & s = N \ (\text{only GSST-II}), \end{cases} \quad (24)
\text{GSST-III, GSST-IV:}\quad y_s(2K) = \sum_{m=1}^{K}\bar{f}_s(m)\dfrac{\mathrm{sn}_N\big(m(s-1/2)\big)}{\mathrm{sn}_N(s-1/2)} - y_s(0) + 2\,\mathrm{cs}_N\big(K(s-1/2)\big)\,y_s(K), \quad s \in [1, N]. \quad (25)
Next, recursive Equations (22)–(25) are used to develop fast algorithms for computing generalized sliding sinusoidal transforms.
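Putting the pieces together, the sketch below evaluates the GSST-I recursion of Equation (24) for one window hop and compares the result with a directly computed SST-I window; the first term is evaluated as a plain sum here (its pruned DST-I form is the subject of Section 3). The signal, step K, and reference time are arbitrary test choices.

```python
import numpy as np

def sst1(x, k, N1, N2):
    # Direct SST-I (Equation (5)): y_s(k) for s = 1..N-1, with N = N1 + N2 + 2
    N = N1 + N2 + 2
    n = np.arange(-N1, N2 + 1)
    s = np.arange(1, N)[:, None]
    return (x[k + n][None, :] * np.sin(np.pi * s * (n + N1 + 1) / N)).sum(axis=1)

rng = np.random.default_rng(3)
x = rng.standard_normal(256)
N1 = N2 = 7
N = N1 + N2 + 2                      # window size 16
K, k0 = 5, 50                        # step and reference time ("time 0" of the recursion)
s = np.arange(1, N)

def f1(k):
    # SST-I component f^1_s(k) of Table 3 (as reconstructed above), at absolute time k0 + k.
    # For SST-I, f_s(k) = f^1_s(k) * sn_N(s), so the division by sn_N(s) in
    # Equation (24) cancels and fbar can be built directly from f^1.
    return x[k0 + k - N1] - (-1.0) ** s * x[k0 + k + N2 + 2]

# fbar_s(m) = f_s(m-1) + f_s(2K-m-1) for m = 1..K-1, and fbar_s(K) = f_s(K-1)
fbar1 = [f1(m - 1) + f1(2 * K - m - 1) for m in range(1, K)] + [f1(K - 1)]

y0 = sst1(x, k0, N1, N2)
yK = sst1(x, k0 + K, N1, N2)
dst = sum(fbar1[m - 1] * np.sin(np.pi * m * s / N) for m in range(1, K + 1))
y2K = dst - y0 + 2.0 * np.cos(np.pi * K * s / N) * yK      # Equation (24), s in [1, N-1]
print(np.allclose(y2K, sst1(x, k0 + 2 * K, N1, N2)))       # expected: True
```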

3. Fast Algorithms for Computing Generalized Sliding Sinusoidal Transforms

3.1. Fast-Pruned Discrete Sine Transforms

We design two fast-pruned DSTs that are used in the next subsection. The sum in the first term of Equations (22) and (24) has the form
F_N(s) = \sum_{m=1}^{K}\bar{f}_s(m)\,\mathrm{sn}_N(ms), \quad s \in [1, N-1]. \quad (26)
Suppose that \bar{f}_s(m) = 0 for m > K, with 1 < K < N-1, and that s in \bar{f}_s(m) is fixed; then, Equation (26) is a pruned DST of type I (DST-I). If N is a power of 2, the decimation-in-time algorithm [24] recursively splits the input DST-I into two half-length DST-I transforms with even and odd indices, as follows:
F_N(s) = F_{N/2}^{o}(s) + F_{N/2}^{e}(s), \qquad F_N(N-s) = F_{N/2}^{o}(s) - F_{N/2}^{e}(s), \quad s \in [1, N/2-1], \quad (27)
where
F_{N/2}^{o}(s) = \frac{1}{2\,\mathrm{cs}_N(s)}\sum_{m=1}^{K_1}\big(\bar{f}_s(2m-1)+\bar{f}_s(2m+1)\big)\,\mathrm{sn}_{N/2}(ms), \qquad F_{N/2}^{e}(s) = \sum_{m=1}^{K_2}\bar{f}_s(2m)\,\mathrm{sn}_{N/2}(ms), \quad s \in [1, N/2-1], \quad (28)
F_N(N/2) = \sum_{m=1}^{K_1}\bar{f}_{N/2}(2m-1)(-1)^{m+1}, \quad (29)
with K_1 = \lfloor (K+1)/2 \rfloor and K_2 = \lfloor K/2 \rfloor, where \lfloor \cdot \rfloor denotes the integer part.
This recursive procedure can be used to design a fast-pruned DST-I algorithm. Figure 1 shows a flow graph for N =   16 and K = 5 .
The solid and dashed blue lines represent transfer factors of 1 and -1, respectively. A circle indicates addition if more than one input line enters it from the left. C_s on the graph means multiplication by the coefficient C_s = 1/(2\,\mathrm{cs}_N(s)).
The computational complexity of the pruned DST-I algorithm with respect to additions and multiplications can be estimated recursively as C_{ADD}^{DST\text{-}I}(N,K) = N/2 + C_{ADD}^{DST\text{-}I}(N/2,K_1) + C_{ADD}^{DST\text{-}I}(N/2,K_2) + K_1 - 2 and C_{MUL}^{DST\text{-}I}(N,K) = N/2 + C_{MUL}^{DST\text{-}I}(N/2,K_1) + C_{MUL}^{DST\text{-}I}(N/2,K_2) - 1, respectively. The initial conditions are C_{ADD}^{DST\text{-}I}(N,1) = 0 and C_{MUL}^{DST\text{-}I}(N,1) = N/2 - 1.
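A straightforward (unoptimized) Python realization of this recursive decomposition is sketched below and checked against the direct pruned DST-I of Equation (26). It follows Equations (27)–(29) as reconstructed above; the recursion simply bottoms out with a direct evaluation, and no attempt is made to reach the operation counts quoted in the text.

```python
import numpy as np

def dst1_direct(v, N):
    # Pruned DST-I of Equation (26): F_N(s) = sum_{m=1..K} v[m-1]*sin(pi*m*s/N), s = 1..N-1
    m = np.arange(1, len(v) + 1)[:, None]
    s = np.arange(1, N)[None, :]
    return (np.asarray(v, dtype=float)[:, None] * np.sin(np.pi * m * s / N)).sum(axis=0)

def dst1_pruned(v, N):
    # Recursive decimation-in-time pruned DST-I following Equations (27)-(29)
    v = np.asarray(v, dtype=float)
    K = len(v)
    if N <= 2 or K == 1:
        return dst1_direct(v, N)                    # trivial base case
    pad = np.concatenate([[0.0], v, [0.0, 0.0]])    # pad[m] = v(m); v(0) = v(K+1) = v(K+2) = 0
    K1, K2 = (K + 1) // 2, K // 2
    g_odd = np.array([pad[2 * m - 1] + pad[2 * m + 1] for m in range(1, K1 + 1)])
    g_even = np.array([pad[2 * m] for m in range(1, K2 + 1)])
    s_half = np.arange(1, N // 2)
    Fo = dst1_pruned(g_odd, N // 2) / (2.0 * np.cos(np.pi * s_half / N))   # Equation (28), odd part
    Fe = dst1_pruned(g_even, N // 2)                                       # Equation (28), even part
    F = np.empty(N - 1)
    F[:N // 2 - 1] = Fo + Fe                        # F_N(s)   = Fo(s) + Fe(s), s = 1..N/2-1
    F[N // 2:] = (Fo - Fe)[::-1]                    # F_N(N-s) = Fo(s) - Fe(s), s = 1..N/2-1
    m1 = np.arange(1, K1 + 1)
    F[N // 2 - 1] = (pad[2 * m1 - 1] * (-1.0) ** (m1 + 1)).sum()   # Equation (29), s = N/2
    return F

rng = np.random.default_rng(4)
N, K = 16, 5
v = rng.standard_normal(K)
print(np.allclose(dst1_pruned(v, N), dst1_direct(v, N)))   # expected: True
```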
Consider the sum in the first term in Equations (23) and (25), which can be written as
F_N(s) = \sum_{m=1}^{K}\bar{f}_s(m)\,\mathrm{sn}_N\big(m(s-1/2)\big), \quad s \in [1, N]. \quad (30)
Similarly, for 1 < K < N-1 and fixed s in \bar{f}_s(m), this is a pruned DST of type III (DST-III). If N is a power of 2, the decimation-in-time algorithm [24] recursively decomposes the input DST-III into two half-length DST-III transforms as follows:
F_N(s) = F_{N/2}^{o}(s) + F_{N/2}^{e}(s), \qquad F_N(N-s+1) = F_{N/2}^{o}(s) - F_{N/2}^{e}(s), \quad s \in [1, N/2], \quad (31)
where
F_{N/2}^{o}(s) = \frac{1}{2\,\mathrm{cs}_{2N}(2s-1)}\sum_{m=1}^{K_1}\big(\bar{f}_s(2m-1)+\bar{f}_s(2m+1)\big)\,\mathrm{sn}_{N/2}\big(m(s-1/2)\big), \qquad F_{N/2}^{e}(s) = \sum_{m=1}^{K_2}\bar{f}_s(2m)\,\mathrm{sn}_{N/2}\big(m(s-1/2)\big), \quad s \in [1, N/2], \quad (32)
with K_1 = \lfloor (K+1)/2 \rfloor and K_2 = \lfloor K/2 \rfloor.
A flow graph of the fast-pruned DST-III algorithm designed on the basis of Equations (30)-(32) is shown in Figure 2 for N = 16 and K = 5. C_s means multiplication by the coefficient C_s = 1/(2\,\mathrm{cs}_{2N}(2s-1)).
The computational complexity of the pruned DST-III algorithm with respect to additions and multiplications can also be estimated with the following recursive equations: C_{ADD}^{DST\text{-}III}(N,K) = N/2 + C_{ADD}^{DST\text{-}III}(N/2,K_1) + C_{ADD}^{DST\text{-}III}(N/2,K_2) and C_{MUL}^{DST\text{-}III}(N,K) = N/2 + C_{MUL}^{DST\text{-}III}(N/2,K_1) + C_{MUL}^{DST\text{-}III}(N/2,K_2), respectively. The initial conditions are C_{ADD}^{DST\text{-}III}(N,1) = 0 and C_{MUL}^{DST\text{-}III}(N,1) = N/2.
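An analogous sketch for the pruned DST-III, following Equations (31) and (32) as reconstructed above, is given below; here no separate midpoint term is needed because cs_2N(2s - 1) is nonzero for s = 1, ..., N/2.

```python
import numpy as np

def dst3_direct(v, N):
    # Pruned DST-III, Equation (30): F_N(s) = sum_{m=1..K} v[m-1]*sin(pi*m*(s-1/2)/N), s = 1..N
    m = np.arange(1, len(v) + 1)[:, None]
    s = np.arange(1, N + 1)[None, :]
    return (np.asarray(v, dtype=float)[:, None] * np.sin(np.pi * m * (s - 0.5) / N)).sum(axis=0)

def dst3_pruned(v, N):
    # Recursive decimation-in-time pruned DST-III following Equations (31)-(32)
    v = np.asarray(v, dtype=float)
    K = len(v)
    if N <= 2 or K == 1:
        return dst3_direct(v, N)
    pad = np.concatenate([[0.0], v, [0.0, 0.0]])    # pad[m] = v(m); v(K+1) = v(K+2) = 0
    K1, K2 = (K + 1) // 2, K // 2
    g_odd = np.array([pad[2 * m - 1] + pad[2 * m + 1] for m in range(1, K1 + 1)])
    g_even = np.array([pad[2 * m] for m in range(1, K2 + 1)])
    s_half = np.arange(1, N // 2 + 1)
    Fo = dst3_pruned(g_odd, N // 2) / (2.0 * np.cos(np.pi * (2 * s_half - 1) / (2 * N)))
    Fe = dst3_pruned(g_even, N // 2)
    F = np.empty(N)
    F[:N // 2] = Fo + Fe                            # F_N(s),     s = 1..N/2
    F[N // 2:] = (Fo - Fe)[::-1]                    # F_N(N-s+1), s = 1..N/2
    return F

rng = np.random.default_rng(5)
N, K = 16, 5
v = rng.standard_normal(K)
print(np.allclose(dst3_pruned(v, N), dst3_direct(v, N)))   # expected: True
```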

3.2. Fast Generalized Sliding Sinusoidal Transforms

To use the designed fast-pruned algorithms, we first decompose f s k given in Table 1 into components that do not depend on s (see Table 3).
It can be seen that f_s^0(k) is s-independent, whereas each of f_s^1(k) and f_s^2(k) takes only two values depending on the parity of s. Using the components of f_s(k), one can define \bar{f}_s^i(m) = f_s^i(m-1) + f_s^i(2K-m-1) for m = 1, \dots, K-1 and \bar{f}_s^i(K) = f_s^i(K-1), i = 0, 1, 2, and express the first terms of the generalized sliding sinusoidal transforms (see recursive Equations (22)–(25)) in terms of the fast-pruned DST algorithms as follows:
\text{GSCT-I:}\quad \sum_{m=1}^{K}\bar{f}_s(m)\frac{\mathrm{sn}_N(ms)}{\mathrm{sn}_N(s)} = \frac{\text{DST-I}\big[\bar{f}_s^1(m)\big] + \mathrm{cs}_N(s)\,\text{DST-I}\big[\bar{f}_s^2(m)\big]}{\mathrm{sn}_N(s)} = \frac{\text{DST-I}\big[\bar{f}_s^1(m) + \big(\bar{f}_s^2(m-1)+\bar{f}_s^2(m+1)\big)/2\big] + \bar{f}_s^2(K)\,\mathrm{sn}_N\big((K+1)s\big)/2}{\mathrm{sn}_N(s)}, \quad \bar{f}_s^2(K+1) = \bar{f}_s^2(0) = 0, \; s \in [1, N-1]; \quad (33)
\text{GSCT-II:}\quad \sum_{m=1}^{K}\bar{f}_s(m)\frac{\mathrm{sn}_N(ms)}{\mathrm{sn}_N(s)} = \frac{1}{2\,\mathrm{sn}_{2N}(s)}\,\text{DST-I}\big[\bar{f}_s^1(m)\big], \quad s \in [1, N-1]; \quad (34)
\text{GSCT-III:}\quad \sum_{m=1}^{K}\bar{f}_s(m)\frac{\mathrm{sn}_N\big(m(s+1/2)\big)}{\mathrm{sn}_N(s+1/2)} = \frac{\text{DST-III}\big[\bar{f}_{s+1}^0(m)\big] + \mathrm{cs}_N(s+1/2)\,\text{DST-III}\big[\bar{f}_{s+1}^2(m)\big]}{\mathrm{sn}_N(s+1/2)} + \text{DST-III}\big[\bar{f}_{s+1}^1(m)\big] = \frac{\text{DST-III}\big[\bar{f}_{s+1}^0(m) + \big(\bar{f}_{s+1}^2(m-1)+\bar{f}_{s+1}^2(m+1)\big)/2\big] + \bar{f}_{s+1}^2(K)\,\mathrm{sn}_N\big((K+1)(s+1/2)\big)/2}{\mathrm{sn}_N(s+1/2)} + \text{DST-III}\big[\bar{f}_{s+1}^1(m)\big], \quad \bar{f}_{s+1}^2(K+1) = \bar{f}_{s+1}^2(0) = 0, \; s \in [0, N-1]; \quad (35)
\text{GSCT-IV:}\quad \sum_{m=1}^{K}\bar{f}_s(m)\frac{\mathrm{sn}_N\big(m(s+1/2)\big)}{\mathrm{sn}_N(s+1/2)} = \frac{1}{2\,\mathrm{sn}_{2N}(s+1/2)}\,\text{DST-III}\big[\bar{f}_{s+1}^0(m)\big] + \frac{1}{2\,\mathrm{cs}_{2N}(s+1/2)}\,\text{DST-III}\big[\bar{f}_{s+1}^1(m)\big], \quad s \in [0, N-1]; \quad (36)
\text{GSST-I:}\quad \sum_{m=1}^{K}\bar{f}_s(m)\frac{\mathrm{sn}_N(ms)}{\mathrm{sn}_N(s)} = \text{DST-I}\big[\bar{f}_s^1(m)\big], \quad s \in [1, N-1]; \quad (37)
\text{GSST-II:}\quad \sum_{m=1}^{K}\bar{f}_s(m)\frac{\mathrm{sn}_N(ms)}{\mathrm{sn}_N(s)} = \frac{1}{2\,\mathrm{cs}_{2N}(s)}\,\text{DST-I}\big[\bar{f}_s^1(m)\big], \quad s \in [1, N-1]; \quad (38)
\text{GSST-III:}\quad \sum_{m=1}^{K}\bar{f}_s(m)\frac{\mathrm{sn}_N\big(m(s-1/2)\big)}{\mathrm{sn}_N(s-1/2)} = \frac{\mathrm{cs}_N(s-1/2)\,\text{DST-III}\big[\bar{f}_s^1(m)\big] + \text{DST-III}\big[\bar{f}_s^2(m)\big]}{\mathrm{sn}_N(s-1/2)} + \text{DST-III}\big[\bar{f}_s^0(m)\big] = \frac{\text{DST-III}\big[\bar{f}_s^2(m) + \big(\bar{f}_s^1(m-1)+\bar{f}_s^1(m+1)\big)/2\big] + \bar{f}_s^1(K)\,\mathrm{sn}_N\big((K+1)(s-1/2)\big)/2}{\mathrm{sn}_N(s-1/2)} + \text{DST-III}\big[\bar{f}_s^0(m)\big], \quad \bar{f}_s^1(K+1) = \bar{f}_s^1(0) = 0, \; s \in [1, N]; \quad (39)
\text{GSST-IV:}\quad \sum_{m=1}^{K}\bar{f}_s(m)\frac{\mathrm{sn}_N\big(m(s-1/2)\big)}{\mathrm{sn}_N(s-1/2)} = \frac{1}{2\,\mathrm{sn}_{2N}(s-1/2)}\,\text{DST-III}\big[\bar{f}_s^1(m)\big] + \frac{1}{2\,\mathrm{cs}_{2N}(s-1/2)}\,\text{DST-III}\big[\bar{f}_s^0(m)\big], \quad s \in [1, N]. \quad (40)
Here, DST-I[v(m)] and DST-III[v(m)] denote the fast-pruned DST-I and DST-III algorithms applied to the input sequence v(m), respectively.
Finally, by substituting Equations (33)–(40) into recursive Equations (22)–(25), we can obtain a fast implementation of all generalized sliding sinusoidal transforms. To more clearly illustrate the design of the proposed algorithms, a simple example for computing the generalized DST-I coefficients for K = 2 ,   N 1 =   7 ,   N 2 =   7 , and N =   16 is provided in Appendix A.
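As an end-to-end example, the sketch below computes one hop of the generalized SCT-II using Equation (22) together with the decomposition of Equation (34), and compares the result with a directly computed SCT-II window; the pruned DST-I term is evaluated as a plain sum for clarity. The component f^1 is taken from the SCT-II row of Table 3 as reconstructed above, and the signal and parameters are arbitrary test choices.

```python
import numpy as np

def sct2_direct(x, k, N1, N2):
    # Direct SCT-II (Equation (2)); coefficients y_s(k), s = 0..N-1, N = N1 + N2 + 1
    N = N1 + N2 + 1
    n = np.arange(-N1, N2 + 1)
    s = np.arange(N)[:, None]
    return (x[k + n][None, :] * np.cos(np.pi * s * (n + N1 + 0.5) / N)).sum(axis=1)

rng = np.random.default_rng(6)
x = rng.standard_normal(256)
N1, N2 = 7, 8
N = N1 + N2 + 1                       # window size 16
K, k0 = 4, 60                         # step and reference time
s = np.arange(1, N)                   # s = 0 uses the separate branch of Equation (22)

def f1(k):
    # SCT-II component f^1_s(k) from Table 3 (as reconstructed); full f_s = f^1_s * cs_2N(s)
    sign = (-1.0) ** s
    return (x[k0 + k - N1] - x[k0 + k - N1 + 1]
            + sign * (x[k0 + k + N2 + 2] - x[k0 + k + N2 + 1]))

fbar1 = [f1(m - 1) + f1(2 * K - m - 1) for m in range(1, K)] + [f1(K - 1)]
# Equation (34): the first term of Equation (22) is a pruned DST-I of fbar^1
# scaled by 1/(2 sn_2N(s)); it is evaluated directly here for clarity.
dst = sum(fbar1[m - 1] * np.sin(np.pi * m * s / N) for m in range(1, K + 1))
first = dst / (2.0 * np.sin(np.pi * s / (2.0 * N)))

y0 = sct2_direct(x, k0, N1, N2)[1:]          # coefficients at time 0 (s >= 1)
yK = sct2_direct(x, k0 + K, N1, N2)[1:]      # coefficients at time K
y2K = first - y0 + 2.0 * np.cos(np.pi * K * s / N) * yK          # Equation (22)
print(np.allclose(y2K, sct2_direct(x, k0 + 2 * K, N1, N2)[1:]))  # expected: True
```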

3.3. Analysis of Computational Complexity

Taking into account Equations (22)–(25) and (33)–(40), Table 3, and the complexity of fast-pruned DST-I and DST-III algorithms, the computational complexity of the proposed generalized sliding sinusoidal transforms can be estimated as shown in Table 4.
Note that the calculation of \bar{f}_s^i(k) from f_s^i(k) around time k requires K-1 additions. The coefficients f_s^i(k), k = 1, \dots, K-1, i = 0, 1, 2, have already been calculated and stored at time K. Additional costs are also required to calculate the initial K coefficients for each f_s^i(k), i = 0, 1, 2.
The proposed algorithms require 2N memory locations to store the transform coefficients computed at times 0 and K. Note that no additional memory is required to store the output, as the recursive calculation is made “in-place” using the memory originally occupied by the computed coefficients at time 0. The algorithms also need 3 N / 2 additional memory locations for each pruned DST for storing F N s and C s factors. Depending on the type of generalized sinusoidal transform, additional memory locations are required to store f s i k , f ¯ s i k and sinusoidal weights.

4. Results

Fast radix-2 algorithms are the most popular fast algorithms for computing discrete sinusoidal transforms. In this section, the performance of the proposed algorithms in terms of computational costs and execution time is compared with that of fast radix-2 algorithms when computing generalized sliding sinusoidal transforms. In addition, the computational complexity of the proposed GSCTs is compared with that of recursive generalized sliding cosine algorithms [22].
Table 5 and Table 6 show the computational complexity of the proposed algorithms in terms of additions and multiplications, respectively, when step K is varied and window size N is fixed.
First, the computational complexity of the proposed GSCTs was compared with that of recursive generalized sliding cosine algorithms [22]. The complexity of the latter algorithms with respect to additions and multiplications is presented in Table 7 and Table 8, respectively.
One can observe that the proposed GSCTs are superior to the corresponding recursive generalized sliding cosine algorithms.
Next, the performance of the proposed algorithms was compared with that of fast radix-2 DCT and DST algorithms. The computational complexity of common fast radix-2 DCT/DST algorithms with respect to additions and multiplications is listed in Table 9.
It can be seen that the proposed algorithms GSCT-I, GSST-I, GSCT-II, GSST-II, GSCT-IV, and GSST-IV outperform the corresponding fast algorithms for a wide range of the parameter K. The proposed algorithms GSCT-III and GSST-III are only efficient for small K values.
Table 10 and Table 11 show how the computational complexity with respect to additions and multiplications changes as window size N varies and K is fixed. It can be seen that, as window size N increases, the gain in the proposed algorithms increases in terms of computational complexity.
The actual running time of an algorithm depends on the hardware and software characteristics and the specific implementation of the algorithm. The purpose of the following simulation was to demonstrate that the theoretical computational complexity of the tested algorithms closely correlates with the execution time of the implemented algorithms. All the tested algorithms were implemented on a laptop with an Intel Core i7-2630QM processor and 8 GB RAM using MATLAB R2016a.
All the experiments were repeated 100 times to ensure statistically correct results, and the average runtime for each algorithm was calculated. Figure 3 and Figure 4 show the runtime performance of the proposed GSCT-I, GSCT-II, GSCT-III, and GSCT-IV algorithms and the corresponding fast radix-2 algorithms.
Considering that floating-point addition and multiplication in modern processors are comparable in terms of execution time, we can observe that the theoretical results presented in Table 5, Table 6 and Table 9 are in good agreement with the experimental results in Figure 3 and Figure 4.
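The sketch below is only a rough Python analog of such an experiment, not the paper's MATLAB code: it times brute-force recomputation of SST-I in every window against the generalized sliding update of Equation (24), so the absolute numbers are not comparable with Figures 3 and 4, and no fast radix-2 reference is included. The signal length, window size, and step are illustrative choices.

```python
import time
import numpy as np

rng = np.random.default_rng(7)
x = rng.standard_normal(60_000)
N1 = N2 = 127
N = N1 + N2 + 2                       # SST-I window size 256
K = 8                                 # window step
s = np.arange(1, N)
n = np.arange(-N1, N2 + 1)
B = np.sin(np.pi * s[:, None] * (n + N1 + 1)[None, :] / N)     # (N-1) x N SST-I matrix
positions = np.arange(N1 + 2 * K, len(x) - N2 - 2 * K, K)

t0 = time.perf_counter()
direct = [B @ x[p + n] for p in positions]                      # O(N^2) work per window
t_direct = time.perf_counter() - t0

sign = (-1.0) ** s
sinK = np.sin(np.pi * np.arange(1, K + 1)[:, None] * s[None, :] / N)
cosKs = 2.0 * np.cos(np.pi * K * s / N)
ks = np.arange(2 * K - 1)
t0 = time.perf_counter()
y2, y1 = B @ x[positions[0] + n], B @ x[positions[1] + n]       # two seed windows
sliding = [y2, y1]
for p in positions[2:]:
    k0 = p - 2 * K                                              # "time 0" of the recursion
    f1 = x[k0 + ks - N1][:, None] - sign[None, :] * x[k0 + ks + N2 + 2][:, None]
    fbar = np.vstack([f1[:K - 1] + f1[2 * K - 2:K - 1:-1], f1[K - 1:K]])   # fbar_s(m), m = 1..K
    y_new = (fbar * sinK).sum(axis=0) - y2 + cosKs * y1         # Equation (24), s in [1, N-1]
    sliding.append(y_new)
    y2, y1 = y1, y_new
t_sliding = time.perf_counter() - t0

print(np.allclose(direct[-1], sliding[-1]),
      f"direct: {t_direct:.3f} s, sliding: {t_sliding:.3f} s")
```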

5. Conclusions

A generalized solution of a second-order linear nonhomogeneous difference equation was derived. Based on the generalized solution, eight generalized sliding sinusoidal transforms were proposed. Fast algorithms were developed to compute the generalized sliding sinusoidal transforms. The algorithms were based on the derived recursive equations and the designed fast-pruned DSTs. The performance of the proposed algorithms in terms of computational complexity and execution time was compared with that of recursive generalized sliding cosine transforms and fast discrete sinusoidal algorithms. GSCT-I, GSST-I, GSCT-II, GSST-II, GSCT-IV, and GSST-IV outperformed the corresponding fast algorithms for a wide range of parameters. It was also shown that the obtained theoretical results were in good agreement with the experimental results.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

The author thanks the Russian Science Foundation, grant No. 22-19-20071.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A

The output coefficients y_s(2K), s = 1, ..., 15, are computed at time 2K = 4. Two coefficients, f_0^e = x(-7) - x(9) and f_0^o = x(-7) + x(9), can be borrowed from time K. Auxiliary data are calculated as
f_1^e = x(-6) - x(10); f_1^o = x(-6) + x(10); f_2^e = x(-5) - x(11); f_2^o = x(-5) + x(11);
fbar_1^e = f_0^e + f_2^e; fbar_1^o = f_0^o + f_2^o;
S_1 = 0.195 fbar_1^o; S_2 = 0.383 fbar_1^e; S_3 = 0.556 fbar_1^o; S_4 = 0.707 fbar_1^e;
S_5 = 0.832 fbar_1^o; S_6 = 0.9239 fbar_1^e; S_7 = 0.981 fbar_1^o;
T_1 = 0.3827 f_1^o; T_2 = 0.7071 f_1^e; T_3 = 0.9239 f_1^o.
The output coefficients are calculated as follows:
y_1(4) = S_1 - y_1(0) + 1.848 y_1(2) + T_1
y_2(4) = S_2 - y_2(0) + 1.414 y_2(2) + T_2
y_3(4) = S_3 - y_3(0) + 0.765 y_3(2) + T_3
y_4(4) = S_4 - y_4(0) + f_1^e
y_5(4) = S_5 - y_5(0) - 0.765 y_5(2) + T_3
y_6(4) = S_6 - y_6(0) - 1.414 y_6(2) + T_2
y_7(4) = S_7 - y_7(0) - 1.848 y_7(2) + T_1
y_8(4) = fbar_1^e - y_8(0) - 2 y_8(2)
y_9(4) = S_7 - y_9(0) - 1.848 y_9(2) - T_1
y_10(4) = S_6 - y_10(0) - 1.414 y_10(2) - T_2
y_11(4) = S_5 - y_11(0) - 0.765 y_11(2) - T_3
y_12(4) = S_4 - y_12(0) - f_1^e
y_13(4) = S_3 - y_13(0) + 0.765 y_13(2) - T_3
y_14(4) = S_2 - y_14(0) + 1.414 y_14(2) - T_2
y_15(4) = S_1 - y_15(0) + 1.848 y_15(2) - T_1
The algorithm complexity is 24 multiplications and 49 additions.
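The example can be checked numerically. The sketch below reproduces the computation for random input samples (the appendix leaves the actual input unspecified), with the lost minus signs restored as above.

```python
import numpy as np

# Numerical check of the Appendix A example (GSST-I, K = 2, N1 = N2 = 7, N = 16):
# the recursion output at time 2K must equal the directly computed SST-I window.
rng = np.random.default_rng(8)
x = {i: v for i, v in zip(range(-7, 12), rng.standard_normal(19))}   # samples x(-7)..x(11)

N, K = 16, 2
s = np.arange(1, N)
sn = lambda r: np.sin(np.pi * r / N)

def sst1(k):
    # Direct SST-I around time k, window x(k-7)..x(k+7)
    return np.array([sum(x[k + n] * sn(si * (n + 8)) for n in range(-7, 8)) for si in s])

f1 = lambda k: x[k - 7] - (-1.0) ** s * x[k + 9]        # SST-I component (Table 3)
fbar1 = [f1(0) + f1(2), f1(1)]                          # fbar_s(1), fbar_s(2)
y4 = (sum(fbar1[m - 1] * sn(m * s) for m in (1, 2))
      - sst1(0) + 2.0 * np.cos(np.pi * K * s / N) * sst1(2))
print(np.allclose(y4, sst1(4)))                         # expected: True
```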

References

  1. Jain, A.K. A fast Karhunen-Loeve transform for a class of random processes. IEEE Trans. Commun. 1976, 24, 1023–1029. [Google Scholar] [CrossRef]
  2. Jain, A.K. A sinusoidal family of unitary transforms. IEEE Trans. Pattern Anal. Mach. Intell. 1979, PAMI-1, 356–365. [Google Scholar] [CrossRef]
  3. Rose, K.; Heiman, A.; Dinstein, I. DCT/DST alternate-transform image coding. IEEE Trans. Commun. 1990, 38, 94–101. [Google Scholar] [CrossRef]
  4. Lee, J.; Un, C. Performance of transform-domain LMS adaptive digital filters. IEEE Trans. Acoust. Speech Signal Process. 1986, 34, 499–510. [Google Scholar] [CrossRef]
  5. Kober, V. Robust and efficient algorithm of image enhancement. IEEE Trans. Consum. Electron. 2006, 52, 655–659. [Google Scholar] [CrossRef]
  6. Wang, Z.; Wang, L. Interpolation using the fast discrete sine transform. Signal Process. 1992, 26, 131–137. [Google Scholar] [CrossRef]
  7. Kim, M.; Lee, Y.-L. Discrete sine transform-based interpolation filter for video compression. Symmetry 2017, 9, 257. [Google Scholar] [CrossRef]
  8. Oppenheim, A.V.; Schafer, R.W. Discrete-Time Signal Processing, 3rd ed.; Prentice-Hall: Upper Saddle River, NJ, USA, 2009. [Google Scholar]
  9. Sharma, R.R.; Kumar, M.; Pachori, R.B. Joint time-frequency domain-based CAD disease sensing system using ECG signals. IEEE Sensors J. 2019, 19, 3912–3920. [Google Scholar] [CrossRef]
  10. Portnoff, M. Short-time Fourier analysis of sampled speech. IEEE Trans. Acoust. Speech Signal Process. 1981, 29, 364–373. [Google Scholar] [CrossRef]
  11. Shi, J.; Zheng, J.; Liu, X.; Xiang, W.; Zhang, Q. Novel short-time fractional Fourier transform: Theory, implementation, and applications. IEEE Trans. Signal Process. 2020, 68, 3280–3295. [Google Scholar] [CrossRef]
  12. Wang, X.; Huang, G.; Zhou, Z.; Tian, W.; Yao, J.; Gao, J. Radar emitter recognition based on the energy cumulant of short-time Fourier transform and reinforced deep belief network. Sensors 2018, 18, 3103. [Google Scholar] [CrossRef]
  13. Wang, Y.; Veluvolu, K.C. Time-frequency analysis of non-stationary biological signals with sparse linear regression based Fourier linear combiner. Sensors 2017, 17, 1386. [Google Scholar] [CrossRef]
  14. Thalmayer, A.; Zeising, S.; Fischer, G.; Kirchner, J. A robust and real-time capable envelope-based algorithm for heart sound classification: Validation under different physiological conditions. Sensors 2020, 20, 972. [Google Scholar] [CrossRef]
  15. Priyadarshini, M.S.; Krishna, D.; Kumar, K.V.; Amaresh, K.; Goud, B.S.; Bajaj, M.; Altameem, T.; El-Shafai, W.; Fouda, M.M. Significance of harmonic filters by computation of short-time Fourier transform-based time–frequency representation of supply voltage. Energies 2023, 16, 2194. [Google Scholar] [CrossRef]
  16. Park, C.; Ko, S. The hopping discrete Fourier transform. IEEE Signal Process. Mag. 2014, 31, 135–139. [Google Scholar] [CrossRef]
  17. Kober, V. Fast recursive computation of sliding DHT with arbitrary step. Sensors 2020, 20, 5556. [Google Scholar] [CrossRef]
  18. Kober, V. Fast hopping discrete sine transform. IEEE Access 2021, 9, 94293–94298. [Google Scholar] [CrossRef]
  19. Xi, J.; Chicharo, J.F. Computing running DCT's and DST's based on their second-order shift properties. IEEE Trans. Circuits Syst. I 2000, 47, 779–783. [Google Scholar] [CrossRef]
  20. Kober, V. Fast algorithms for the computation of sliding discrete sinusoidal transforms. IEEE Trans. Signal Process. 2004, 52, 1704–1710. [Google Scholar] [CrossRef]
  21. Kober, V. Recursive algorithms for computing sliding DCT with arbitrary step. IEEE Sensors 2021, 21, 11507–11513. [Google Scholar] [CrossRef]
  22. Qian, L.; Luo, S.; He, S.; Chen, G. Recursive algorithms for direct computation of generalized sliding discrete cosine transforms. In Proceedings of the 3rd International Congress on Image and Signal Processing, Yantai, China, 16–18 October 2010; pp. 3017–3020. [Google Scholar] [CrossRef]
  23. Wang, Z. Fast algorithms for the discrete W transform and for the discrete Fourier transform. IEEE Trans. Acoust. Speech Signal Process. 1984, 32, 803–816. [Google Scholar] [CrossRef]
  24. Yip, P.; Rao, K.R. Fast decimation-in-time algorithms for a family of discrete sine and cosine transforms. Circuits Syst. Signal Process. 1984, 3, 387–408. [Google Scholar] [CrossRef]
  25. Hou, H.S. A fast recursive algorithm for computing the discrete cosine transform. IEEE Trans. Acoust. Speech Signal Process. 1987, 35, 1455–1461. [Google Scholar] [CrossRef]
  26. Wang, Z. Fast discrete sine transform algorithms. Signal Process. 1990, 19, 91–102. [Google Scholar] [CrossRef]
  27. Britanak, V. On the discrete cosine computation. Signal Process. 1994, 40, 183–194. [Google Scholar] [CrossRef]
  28. Britanak, V. A unified discrete cosine and sine transform computation. Signal Process. 1995, 43, 333–339. [Google Scholar] [CrossRef]
Figure 1. Flow graph of a fast-pruned DST-I algorithm, N = 16 and K = 5.
Figure 2. Flow graph of a fast-pruned DST-III algorithm, N = 16 and K = 5.
Figure 3. Running time (milliseconds) of the tested DCT-I and DCT-II algorithms per window when N = 256 and K varies.
Figure 4. Running time (milliseconds) of the tested DCT-III and DCT-IV algorithms per window when N = 256 and K varies.
Table 1. Parameters of recursive equations for sliding sinusoidal transforms.
SCT-I: a_s = -2\,\mathrm{cs}_N(s); f_s(k) = x(k-N_1) + (-1)^s x(k+N_2+2) - [x(k-N_1+1) + (-1)^s x(k+N_2+1)]\,\mathrm{cs}_N(s)
SCT-II: a_s = -2\,\mathrm{cs}_N(s); f_s(k) = [x(k-N_1) - x(k-N_1+1) + (-1)^s (x(k+N_2+2) - x(k+N_2+1))]\,\mathrm{cs}_{2N}(s)
SCT-III: a_s = -2\,\mathrm{cs}_N(s+1/2); f_s(k) = x(k-N_1) + (-1)^s x(k+N_2+2)\,\mathrm{sn}_N(s+1/2) - x(k-N_1+1)\,\mathrm{cs}_N(s+1/2)
SCT-IV: a_s = -2\,\mathrm{cs}_N(s+1/2); f_s(k) = [x(k-N_1) - x(k-N_1+1)]\,\mathrm{cs}_{2N}(s+1/2) + (-1)^s [x(k+N_2+1) + x(k+N_2+2)]\,\mathrm{sn}_{2N}(s+1/2)
SST-I: a_s = -2\,\mathrm{cs}_N(s); f_s(k) = [x(k-N_1) - (-1)^s x(k+N_2+2)]\,\mathrm{sn}_N(s)
SST-II: a_s = -2\,\mathrm{cs}_N(s); f_s(k) = [x(k-N_1) + x(k-N_1+1) - (-1)^s (x(k+N_2+2) + x(k+N_2+1))]\,\mathrm{sn}_{2N}(s)
SST-III: a_s = -2\,\mathrm{cs}_N(s-1/2); f_s(k) = x(k-N_1)\,\mathrm{sn}_N(s-1/2) + (-1)^s x(k+N_2+1)\,\mathrm{cs}_N(s-1/2) - (-1)^s x(k+N_2+2)
SST-IV: a_s = -2\,\mathrm{cs}_N(s-1/2); f_s(k) = [x(k-N_1) + x(k-N_1+1)]\,\mathrm{sn}_{2N}(s-1/2) + (-1)^s [x(k+N_2+1) - x(k+N_2+2)]\,\mathrm{cs}_{2N}(s-1/2)
Table 2. Parameters \lambda_1, \lambda_2 for generalized sliding sinusoidal transforms.
GSCT-I, GSCT-II, GSST-I, GSST-II: \lambda_{1,2} = \mathrm{cs}_N(s) \pm i\,\mathrm{sn}_N(s)
GSCT-III, GSCT-IV: \lambda_{1,2} = \mathrm{cs}_N(s+1/2) \pm i\,\mathrm{sn}_N(s+1/2)
GSST-III, GSST-IV: \lambda_{1,2} = \mathrm{cs}_N(s-1/2) \pm i\,\mathrm{sn}_N(s-1/2)
Table 3. Decomposition of f_s(k) into s-independent components.
SCT-I (s = 0, ..., N): f_s(k) = f_s^1(k) + f_s^2(k)\,\mathrm{cs}_N(s);
  even s: f_e^1(k) = x(k-N_1) + x(k+N_2+2), f_e^2(k) = -x(k-N_1+1) - x(k+N_2+1);
  odd s: f_o^1(k) = x(k-N_1) - x(k+N_2+2), f_o^2(k) = x(k+N_2+1) - x(k-N_1+1).
SCT-II (s = 0, ..., N-1): f_s(k) = f_s^1(k)\,\mathrm{cs}_{2N}(s);
  even s: f_e^1(k) = x(k-N_1) - x(k-N_1+1) + x(k+N_2+2) - x(k+N_2+1);
  odd s: f_o^1(k) = x(k-N_1) - x(k-N_1+1) - x(k+N_2+2) + x(k+N_2+1).
SCT-III (s = 0, ..., N-1): f_s(k) = f_s^0(k) + f_s^1(k)\,\mathrm{sn}_N(s+1/2) + f_s^2(k)\,\mathrm{cs}_N(s+1/2), with f_s^0(k) = x(k-N_1) and f_s^2(k) = -x(k-N_1+1);
  even s: f_e^1(k) = x(k+N_2+2); odd s: f_o^1(k) = -x(k+N_2+2).
SCT-IV (s = 0, ..., N-1): f_s(k) = f_s^0(k)\,\mathrm{cs}_{2N}(s+1/2) + f_s^1(k)\,\mathrm{sn}_{2N}(s+1/2), with f_s^0(k) = x(k-N_1) - x(k-N_1+1);
  even s: f_e^1(k) = x(k+N_2+1) + x(k+N_2+2); odd s: f_o^1(k) = -x(k+N_2+1) - x(k+N_2+2).
SST-I (s = 1, ..., N-1): f_s(k) = f_s^1(k)\,\mathrm{sn}_N(s);
  even s: f_e^1(k) = x(k-N_1) - x(k+N_2+2); odd s: f_o^1(k) = x(k-N_1) + x(k+N_2+2).
SST-II (s = 1, ..., N): f_s(k) = f_s^1(k)\,\mathrm{sn}_{2N}(s);
  even s: f_e^1(k) = x(k-N_1) + x(k-N_1+1) - x(k+N_2+2) - x(k+N_2+1);
  odd s: f_o^1(k) = x(k-N_1) + x(k-N_1+1) + x(k+N_2+2) + x(k+N_2+1).
SST-III (s = 1, ..., N): f_s(k) = f_s^0(k)\,\mathrm{sn}_N(s-1/2) + f_s^1(k)\,\mathrm{cs}_N(s-1/2) + f_s^2(k), with f_s^0(k) = x(k-N_1);
  even s: f_e^1(k) = x(k+N_2+1), f_e^2(k) = -x(k+N_2+2); odd s: f_o^1(k) = -x(k+N_2+1), f_o^2(k) = x(k+N_2+2).
SST-IV (s = 1, ..., N): f_s(k) = f_s^0(k)\,\mathrm{sn}_{2N}(s-1/2) + f_s^1(k)\,\mathrm{cs}_{2N}(s-1/2), with f_s^0(k) = x(k-N_1) + x(k-N_1+1);
  even s: f_e^1(k) = x(k+N_2+1) - x(k+N_2+2); odd s: f_o^1(k) = x(k+N_2+2) - x(k+N_2+1).
Table 4. Complexity of fast-generalized sliding sinusoidal transforms.
GSCT-I: C_ADD = 3N + 10K - 3 + C_ADD_DST-I(N, K); C_MUL = 3N + 4K - 2 + C_MUL_DST-I(N, K)
GSST-I: C_ADD = 2N + 3K - 3 + C_ADD_DST-I(N, K); C_MUL = N - 1 + C_MUL_DST-I(N, K)
GSCT-II, GSST-II: C_ADD = 2N + 9K - 3 + C_ADD_DST-I(N, K); C_MUL = 2N + K - 2 + C_MUL_DST-I(N, K)
GSCT-III, GSST-III: C_ADD = 4N + 5K - 3 + 2 C_ADD_DST-III(N, K); C_MUL = 3N + 2K + 2 C_MUL_DST-III(N, K)
GSCT-IV, GSST-IV: C_ADD = 3N + 5K - 2 + 2 C_ADD_DST-III(N, K); C_MUL = 2N + 2 C_MUL_DST-III(N, K)
Table 5. Complexity of the proposed algorithms in terms of additions, N = 256.
Step K | GSCT-I | GSST-I | GSCT-II, GSST-II | GSCT-III, GSST-III | GSCT-IV, GSST-IV
2 | 1039 | 769 | 781 | 1543 | 1288
3 | 1176 | 899 | 917 | 1804 | 1549
4 | 1312 | 1028 | 1052 | 2065 | 1810
5 | 1386 | 1095 | 1125 | 2198 | 1943
8 | 1604 | 1292 | 1340 | 2597 | 2342
10 | 1689 | 1363 | 1423 | 2735 | 2480
16 | 1936 | 1568 | 1664 | 3149 | 2894
32 | 2352 | 1872 | 2064 | 3741 | 3486
64 | 2962 | 2240 | 2624 | 4413 | 4158
Table 6. Complexity of the proposed algorithms in terms of multiplications, N = 256.
Step K | GSCT-I | GSST-I | GSCT-II, GSST-II | GSCT-III, GSST-III | GSCT-IV, GSST-IV
2 | 1027 | 508 | 765 | 1284 | 1024
3 | 1093 | 570 | 828 | 1414 | 1152
4 | 1159 | 632 | 891 | 1544 | 1280
5 | 1193 | 662 | 922 | 1610 | 1344
8 | 1295 | 752 | 1015 | 1808 | 1536
10 | 1331 | 780 | 1045 | 1876 | 1600
16 | 1439 | 864 | 1135 | 2080 | 1792
32 | 1599 | 960 | 1247 | 2368 | 2048
64 | 1791 | 1024 | 1343 | 2688 | 2304
Table 7. Complexity of recursive sliding DCT algorithms [22] in terms of additions, N = 256.
Step K | DCT-I | DCT-II | DCT-III | DCT-IV
2 | 2295 | 1275 | 2304 | 2304
3 | 3315 | 1785 | 3328 | 3328
4 | 4335 | 2295 | 4352 | 4352
5 | 5355 | 2805 | 5376 | 5376
8 | 8415 | 4335 | 8448 | 8448
10 | 10,455 | 5355 | 10,496 | 10,496
16 | 16,575 | 8415 | 16,640 | 16,640
32 | 32,895 | 16,575 | 33,024 | 33,024
64 | 65,535 | 32,895 | 65,792 | 65,792
Table 8. Complexity of recursive sliding DCT algorithms [22] in terms of multiplications, N = 256.
Step K | DCT-I | DCT-II | DCT-III | DCT-IV
2 | 2304 | 1288 | 2304 | 2304
3 | 3328 | 1804 | 3328 | 3328
4 | 4352 | 2320 | 4352 | 4352
5 | 5376 | 2836 | 5376 | 5376
8 | 8448 | 4384 | 8448 | 8448
10 | 10,496 | 5416 | 10,496 | 10,496
16 | 16,640 | 8512 | 16,640 | 16,640
32 | 33,024 | 16,768 | 33,024 | 33,024
64 | 65,792 | 33,280 | 65,792 | 65,792
Table 9. Complexity of fast DCT/DST algorithms in terms of additions/multiplications; N = 256.
Operations | FDCT-I [25] | FDST-I [26] | FDCT-II [27], FDST-II [26] | FDCT-III [28], FDST-III [26] | FDCT-IV [25] | FDST-IV [26]
ADD | 2716 | 2554 | 2818 | 2818 | 3456 | 3072
MUL | 917 | 769 | 1025 | 1025 | 1664 | 1280
Table 10. Complexity of the proposed algorithms in terms of additions, K = 16.
N | GSCT-I | GSST-I | GSCT-II, GSST-II | GSCT-III, GSST-III | GSCT-IV, GSST-IV
32 | 368 | 224 | 320 | 461 | 430
64 | 592 | 416 | 512 | 845 | 782
128 | 1040 | 800 | 896 | 1613 | 1486
256 | 1936 | 1568 | 1664 | 3149 | 2894
512 | 3728 | 3104 | 3200 | 6221 | 5710
1024 | 7312 | 6176 | 6272 | 12,365 | 11,342
2048 | 14,480 | 12,320 | 12,416 | 24,653 | 22,606
Table 11. Complexity of the proposed algorithms in terms of multiplications, K = 16.
N | GSCT-I | GSST-I | GSCT-II, GSST-II | GSCT-III, GSST-III | GSCT-IV, GSST-IV
32 | 207 | 80 | 127 | 288 | 224
64 | 383 | 192 | 271 | 544 | 448
128 | 738 | 416 | 559 | 1056 | 896
256 | 1439 | 864 | 1135 | 2080 | 1792
512 | 2847 | 1760 | 2287 | 4128 | 3584
1024 | 5643 | 3552 | 4591 | 8224 | 7168
2048 | 11,295 | 7136 | 9199 | 16,416 | 14,336