Article

Enhancing Integer Time Series Model Estimations through Neural Network-Based Fuzzy Time Series Analysis

by
Mohammed H. El-Menshawy
1,
Mohamed S. Eliwa
2,3,
Laila A. Al-Essa
4,
Mahmoud El-Morshedy
5,6,* and
Rashad M. EL-Sagheer
1,7
1
Mathematics Department, Faculty of Science, Al-Azhar University, Naser City, Cairo 11884, Egypt
2
Department of Statistics and Operations Research, College of Science, Qassim University, Buraydah 51482, Saudi Arabia
3
Department of Mathematics, Faculty of Science, Mansoura University, Mansoura 35516, Egypt
4
Department of Mathematical Sciences, College of Science, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
5
Department of Mathematics, College of Science and Humanities in Al-Kharj, Prince Sattam Bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
6
Department of Statistics and Computer Science, Faculty of Science, Mansoura University, Mansoura 35516, Egypt
7
High Institute of Computer and Management Information System, First Statement, Cairo 11865, Egypt
*
Author to whom correspondence should be addressed.
Symmetry 2024, 16(6), 660; https://doi.org/10.3390/sym16060660
Submission received: 17 March 2024 / Revised: 26 April 2024 / Accepted: 15 May 2024 / Published: 27 May 2024

Abstract

This investigation explores the effect of applying neural network-based fuzzy time series (FTS) models to the estimation of a variety of spectral functions in integer time series models, focusing on the new skew integer autoregressive model of order one (NSINAR(1)). To support this estimation, a dataset of NSINAR(1) realizations with sample size n = 1000 is generated. These input values are fuzzified via fuzzy logic, and the ability of artificial neural networks to identify fuzzy relationships is harnessed to generate output values that improve prediction accuracy. The study analyzes the improvement in the smoothing of the spectral function estimators for NSINAR(1) obtained by using both the input and the output values. The effectiveness of the output-value estimates is evaluated against the input-value estimates through a mean-squared error (MSE) analysis, which quantifies how much better the output-value estimates perform.

1. Introduction

In recent years, a variety of FTS methodologies have emerged in the literature. An FTS can be defined as a time series whose observations are represented by fuzzy sets. Ref. [1] provided the initial description of FTSs and outlined their fundamental definitions. Fundamentally, an FTS comprises three consecutive steps: fuzzification, fuzzy relationship formation, and defuzzification. Numerous studies investigating these processes [2,3,4,5] have focused mainly on the last of these steps. Additionally, because the formation of fuzzy relationships is directly related to forecasting, numerous studies have concentrated on it [6,7,8,9,10]. Artificial neural networks have proven adept at identifying fuzzy relationships, which improves forecasting accuracy. Generally, artificial neural network techniques have been successful in a wide range of applications, as in [11,12,13], and researchers often employ neural networks to build the fuzzy relationships of an FTS. Over time, there have been notable breakthroughs in modeling integer-valued time series, owing to the substantial attention given to this topic. The integer autoregressive (INAR) model is among the best of them for modeling count series. In recent years, researchers have been striving to improve the INAR model's capacity to accurately reproduce observed data. This effort has involved modifying various aspects of INAR models: some works adjust the marginal distribution (see [14,15,16,17]), others the order of the models [18,19], and yet others the thinning operators [20,21,22,23,24,25,26,27]. Ref. [28] investigates certain statistical measures for a number of INAR(1) models. To fit specific data and characterize it more accurately, [29] provided an INAR(1) model based on two thinning operators. The fuzzy logic employed in [30] enhanced the estimation of all density functions. Ref. [31] conducted research on the FTS technique to enhance periodograms while preserving their statistical characteristics. Ref. [32] investigates a few statistical properties and all density functions of the ZTPINAR(1) process. Ref. [33] applies an FTS based on the Chen method to smooth estimates for the DCGINAR(1) process. Ref. [34] used a novel FTS technique to improve the estimation of stationary processes' unknown parameters over traditional methods. Ref. [35] employed a fuzzy Markov chain to enhance the estimates of the density functions of the MCGINAR(1) process.
Since it focuses on building the fuzzy relationships that drive forecasting, [36] was the first work to apply neural networks to FTS prediction. Ref. [37] proposes a novel method for handling high-order multivariate fuzzy time series based on artificial neural networks. Ref. [38] develops a neural network-based fuzzy time series model to enhance the forecasting of observations. Ref. [39] provides a new hybrid fuzzy time series technique that uses artificial neural networks for defuzzification and a fuzzy c-means (FCM) method for fuzzification. Ref. [40] creates and develops precise statistical forecasting models to predict the monthly air pollution index (API) and assesses these models for tracking the state of air quality. In a distinctive approach, [41] created a pi-sigma neural network (PSNN) to build the fuzzy associations in high-order FTSs and modified particle swarm optimization (PSO) to train the network's weights. By combining a convolutional neural network with an FTS method, [42] proposed a short-term load forecasting approach. Ref. [43] employed various techniques to forecast the air pollution index (API) of Kuala Lumpur, Malaysia, for the year 2017, including artificial neural networks (ANNs); autoregressive integrated moving average (ARIMA); trigonometric regressors, Box–Cox transformation, ARMA errors, trend and seasonality (TBATS); and multiple fuzzy time series (FTS) models. Ref. [44] built an LSTM recurrent neural network based on trend fuzzy granulation for long-term time series forecasting. A unique multi-functional recurrent fuzzy neural network (MFRFNN) for time series prediction was developed in [45]. Ref. [46] suggested shallow and deep neural network models for demand forecasting of fuzzy time series pharmaceutical data.
The purpose of this research is to apply fuzzy time series (FTSs) based on neural network models [36] to enhance the estimation of the density functions of integer time series, namely the spectral, bispectral, and normalized bispectral density functions. The NSINAR(1) model is used in this strategy. All spectral functions and their smoothed estimates for NSINAR(1) are computed for this purpose. We use neural network-based FTSs and forecast realizations to generate the output values from the NSINAR(1) observational "input values". All density functions are then estimated with both the input and the output values. The contribution of the output values of the neural network-based FTS to the smoothing of these estimates is investigated by contrasting the two cases.

2. Developing the Forecasting Models

The procedures for establishing a forecasting model are as follows: data preparation; model evaluation and selection; and neural network construction (in terms of input variable selection, transfer function, structure, etc.) [47].

2.1. Preparing Data

A number of crucial decisions, including data preparation, input variable selection, network type and design, the transfer function, training methodology, and model validation, assessment, and selection, must be made by the neural network forecaster. Some of these choices can be made while building the model, but others must be carefully thought through before beginning any modeling work. Given that neural networks operate on data-driven principles, data preparation emerges as a pivotal initial step in crafting an effective neural network model. Indeed, the creation of a practical and predictive neural network model heavily relies on the availability of a robust, sufficient, and representative dataset. Consequently, the quality of the data significantly influences the reliability of neural network models. Moreover, ample data are indispensable for the training process of neural networks. Numerous studies have utilized a practical ratio ranging from 70%:30% to 90%:10% for distinguishing between in-samples and out-of-samples [47]. As a result, we use NSINAR(1) to construct a series of size n = 1000. For our estimation (the in-sample), we chose the data from the first observation to the 800th, and for our forecast (the out-of-sample), we used the data from the 801st to the 1000th. In other words, the ratio is 800/1000 : 200/1000 = 80%:20%, which falls within this range.
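The in-sample/out-of-sample split described above can be sketched as follows (a minimal illustration; the random integer series stands in for actual NSINAR(1) realizations, and the function name is ours):

```python
import numpy as np

def train_test_split_series(series, train_frac=0.8):
    """Split a time series into in-sample and out-of-sample parts,
    preserving temporal order (no shuffling)."""
    n = len(series)
    cut = int(n * train_frac)
    return series[:cut], series[cut:]

rng = np.random.default_rng(0)
series = rng.integers(-5, 15, size=1000)  # stand-in for NSINAR(1) realizations

in_sample, out_of_sample = train_test_split_series(series, train_frac=0.8)
print(len(in_sample), len(out_of_sample))  # 800 200
```

Keeping the temporal order intact is essential here: shuffling before splitting would leak future information into the in-sample portion.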

2.2. Setup of a Neural Network

The multilayer feedforward structure is the model used most in forecasting [48]. As a result, backpropagation [49] was selected as the model, and PC Neuron Institutional, a backpropagation software package, was selected as the tool for creating the forecasting model. In the setup, our goal was first to build (or train) the fuzzy associations between each FLR before forecasting. An FLR is a one-to-one relationship; hence, there is one input layer and one output layer, with one node each. Most forecasting applications have employed only one hidden layer and a suitable number of hidden nodes, even though there are several criteria for selecting the number of hidden layers and the number of hidden nodes in each layer [50,51,52]. A minimal neural network was used [53] to avoid over-fitting; accordingly, we employed one hidden layer with two hidden nodes, giving the neural network structure shown in Figure 1. Figure 2 shows the function of each node in the hidden and output layers. Node r of the previous layer provides input X_r to node s, and each connection from node r to node s carries a weight, W_rs, representing its strength. The following formula is used to calculate the output of node s, Y_t [30].
Y_t = f( ∑_r W_rs X_r − θ_s ),
f(z) = 1 / (1 + exp(−z)),
where f(z) is a sigmoid function.

2.3. Model Selection and Evaluation

A pertinent study [54] states that no single approach has proven consistently best for all kinds of applications, including neural network applications. Thus, we used both a simple model and a hybrid one. The observations were divided into an in-sample and an out-of-sample category. The out-of-sample observations were further divided into known and unknown patterns: observations that appeared both outside and inside the sample were regarded as known patterns, in contrast to the unknown patterns that constituted the majority of the out-of-sample data. With respect to the corresponding original functions, we used the mean-squared error (MSE) to compare the performance of the three estimators in each of the three scenarios; for details on the calculation of the MSE, please refer to [32,33]. This results in a separate evaluation for each of the three types of observations.
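The MSE comparison used here can be expressed as a simple routine (a generic sketch; the density values and estimators below are placeholders, not results from the paper):

```python
import numpy as np

def mse(true_vals, est_vals):
    """Mean-squared error between a true function and its estimator
    evaluated on a common frequency grid."""
    true_vals = np.asarray(true_vals, dtype=float)
    est_vals = np.asarray(est_vals, dtype=float)
    return float(np.mean((true_vals - est_vals) ** 2))

# placeholder values of a density function and two competing estimators
true_g = np.array([1.0, 2.0, 3.0])
est_a = np.array([1.1, 2.1, 2.9])
est_b = np.array([1.5, 2.5, 2.5])
print(mse(true_g, est_a) < mse(true_g, est_b))  # the lower-MSE estimator is preferred
```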

3. Fuzzy Time Series Models Utilizing Neural Networks

This study applied a backpropagation neural network to forecast the fuzzy observations with respect to the FTS of NSINAR(1). There are six steps in this study, which can be reported as follows:
  • Defining and partitioning the universe of discourse: According to the problem domain of the NSINAR(1) series, the universe of discourse for the observations, U = [starting, ending], is defined. After the length of the intervals, l, is determined, U can be partitioned into equal-length intervals u_1, u_2, u_3, …, u_b and their corresponding midpoints m_1, m_2, m_3, …, m_b, respectively:
    u_b = [starting + (b − 1) × l, starting + b × l],
    m_b = (1/2) × [(starting + (b − 1) × l) + (starting + b × l)].
  • Defining fuzzy sets for observations: Each linguistic observation, A i , can be defined by the intervals u 1 , u 2 , u 3 , , u b , where A i = f A i ( u 1 ) / u 1 + f A i ( u 2 ) / u 2 + + f A i ( u b ) / u b .
  • Fuzzifying the observations: A fuzzy set is created by fuzzifying each observation. An observation is fuzzified to A i if its greatest degree of membership is in A i , just as in [6,55].
  • Establishing the fuzzy relationship (neural network training): The fuzzy associations in these FLRs were built (or trained) using a backpropagation neural network. Index i served as the input, and index j served as the appropriate output for each FLR, A i A j . These FLRs became the input and output patterns for neural network training.
  • Forecasting: A description of the hybrid and basic models can be found below. The basic model uses a neural network methodology to forecast each piece of data, while the hybrid model uses the same neural network approach to forecast the known patterns together with a simple strategy to anticipate the unknown patterns.
    Model 1 (basic model): Assume F ( t 1 ) = A i . We chose i as the forecast input in order to simplify the calculations. Assume that j is the neural network’s output. We say the fuzzy forecast is A j . In other words,
    F ( t ) = A j .
    Model 2 (hybrid model): Assume F ( t 1 ) = A i . If A i is a known pattern, the basic model is used to obtain the fuzzy forecast. If A i is an unknown pattern, then we simply take A i as the fuzzy forecast for F ( t ) , in accordance with Chen’s model [6]. That is,
    F ( t ) = A i .
  • Defuzzifying: Whichever model is used, the defuzzified forecast is the midpoint of the fuzzy forecast. Assume A k is the fuzzy forecast for F(t); the defuzzified forecast is then the midpoint m k of A k , i.e.,
    f o r e c a s t t = m k .
    For additional details on this methodology, see [36]. Therefore, the forecast observations that were obtained from the generated realizations from the NSINAR(1) process—known as the “input values”—are the “output values” of this approach.
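The six steps above can be sketched end-to-end as follows. This is a simplified illustration: the interval count and stand-in data are arbitrary, and the fuzzy-relationship step is replaced by a last-seen-successor lookup over the FLRs rather than the paper's trained backpropagation network.

```python
import numpy as np

def fuzzify(series, n_intervals):
    """Steps 1-3: partition the universe of discourse into equal-length
    intervals and map each observation to the index of the fuzzy set A_i
    with the highest membership degree (its interval index)."""
    start, end = float(np.min(series)), float(np.max(series))
    l = (end - start) / n_intervals
    edges = start + l * np.arange(n_intervals + 1)
    midpoints = (edges[:-1] + edges[1:]) / 2
    idx = np.clip(((series - start) / l).astype(int), 0, n_intervals - 1)
    return idx, midpoints

def build_flrs(idx):
    """Step 4: collect fuzzy logical relationships A_i -> A_j as
    input/output pairs (a simple lookup standing in for the trained
    neural network)."""
    flr = {}
    for i, j in zip(idx[:-1], idx[1:]):
        flr[i] = j
    return flr

def forecast(idx, flr, midpoints):
    """Steps 5-6 (hybrid model): forecast A_j for known patterns, keep
    A_i for unknown ones, then defuzzify to the interval midpoint."""
    return np.array([midpoints[flr.get(i, i)] for i in idx])

rng = np.random.default_rng(1)
series = rng.integers(-4, 12, size=200).astype(float)  # stand-in observations
idx, mids = fuzzify(series, n_intervals=7)
flr = build_flrs(idx)
output_values = forecast(idx, flr, mids)
```

The defuzzified forecasts (`output_values`) always lie at interval midpoints, which is the source of the smoothing effect exploited later in the paper.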

4. The New Skew INAR(1) Model

The NSINAR(1) process was first defined by [56]. This model, also known as a true integer autoregressive model, has become necessary for modeling count data with positive and negative values. Ref. [56] defines the NSINAR(1) process as
X_t = β ★ X_{t−1} + η_t,
where β ★ X_{t−1} is defined, in distribution, as the difference between the negative binomial thinning and the binomial thinning operators applied to A_{t−1} and B_{t−1}, respectively, whose counting series are independent random variables: the negative binomial thinning of A equals ∑_{i=1}^{A} W_i and the binomial thinning of B equals ∑_{i=1}^{B} U_i, where W_i is a sequence of i.i.d. geometric random variables and U_i is a sequence of i.i.d. Bernoulli random variables independent of W_i. Here, X_t = A_t − B_t, where A_t ∼ geometric(γ/(1 + γ)) and B_t ∼ Poisson(λ), so that {X_t} is a sequence of random variables having a geometric-Poisson(γ, λ) distribution; η_t has the distribution of ϵ_t − ε_t, where {ϵ_t} and {ε_t} are independent r.v.s and X_t and η_{t+l} are independent for all l ≥ 1. The ε_t are i.i.d. r.v.s with a common Poisson(λ(1 − β)) distribution, and ϵ_t is a mixture of two random variables with geometric(γ/(1 + γ)) and geometric(β/(1 + β)) distributions. The process {X_t} is stationary when 0 ≤ β ≤ γ/(1 + γ) and non-stationary when γ/(1 + γ) < β ≤ 1; here, our study is restricted to the stationary case. Some properties of X_t are introduced:
The mean μ_X = γ − λ, the variance σ_X² = γ(γ + 1) + λ, the second-order central moment R_2(s) = β^s(γ² + γ + λ) = β^s R_2(0), and the third-order central moments are R_3(0, 0) = 2γ³ + 3γ² + γ − λ, R_3(0, s) = β^s(2γ³ + 3γ² + γ − λ) = β^s R_3(0, 0), and R_3(s, s) = β^{2s} R_3(0, 0) − β R_2(0)(β^s − β^{2s})/(1 − β). Refer to [23] and [56] for detailed information on the thinning operator ★ and the NSINAR(1) model, respectively.
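The moments above can be evaluated at the parameter values used later in the simulations (γ = 5, λ = 3, β = 0.25); the helper function below is our own sketch:

```python
def nsinar1_moments(gamma, lam, beta, s=1):
    """Closed-form moments of the stationary NSINAR(1) process."""
    mean = gamma - lam                       # mu_X = gamma - lambda
    r2_0 = gamma**2 + gamma + lam            # variance = R_2(0)
    r2_s = beta**s * r2_0                    # R_2(s) = beta^s * R_2(0)
    r3_00 = 2 * gamma**3 + 3 * gamma**2 + gamma - lam
    r3_0s = beta**s * r3_00                  # R_3(0, s) = beta^s * R_3(0, 0)
    return mean, r2_0, r2_s, r3_00, r3_0s

mean, var, r2_1, r3_00, r3_01 = nsinar1_moments(gamma=5, lam=3, beta=0.25)
print(mean, var, r2_1, r3_00)
```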

5. Spectral and Bispectral Density Functions

In time series analysis, statistical spectral analysis plays a number of roles, including estimation, hypothesis testing, data reduction, and description. It is quite helpful to examine a time series in both the time and frequency domains. The Fourier transform, which converts the series from the time domain to the frequency domain, is the fundamental tool of spectral analysis, and the frequency domain frequently contains a wealth of important information that requires additional work to uncover. Second-order spectra have played a very important role in the analysis of linear and Gaussian time series data and in signal processing: when the process is Gaussian, the second-order spectra contain all the necessary and useful information about it, and higher-order spectra need not be considered. Before analyzing time series data, one usually assumes that the series is generated by a linear process, and even by a Gaussian process, to simplify the analysis, because fitting a linear model is much easier than fitting the actual non-linear model; in reality, however, this does not hold for every process. To study non-linear and non-Gaussian processes, one needs to consider higher-order spectra. Theoretically, it is possible to compute the spectrum of any order, but computationally it is very costly; hence, we consider only the bispectrum, the simplest higher-order spectrum. In other words, second-order spectra will not adequately characterize a series unless it is Gaussian, so higher-order spectral analysis is needed, and its simplest form is bispectral analysis. This leads to the bispectral density function, which gives important details regarding the process’s non-linearity. The modulus of the normalized bispectrum of a linear non-Gaussian time series is flat.
The spectral density function “SDF”, bispectral density function “BDF”, and normalized bispectral density function “NBDF” of NSINAR(1) are provided in this section.
Theorem 1.
Let { X t } satisfy (8); then, the SDF, denoted by g ( w ) , is computed as
g(w) = (1 − β²)(γ² + γ + λ) / [2π(1 + β² − 2β cos w)],   −π ≤ w ≤ π.
Proof. 
g(w) = (1/2π) ∑_{t=−∞}^{∞} R_2(t) exp(−iwt) = (1/2π) [ R_2(0) + ∑_{t=1}^{∞} β^t R_2(0) exp(−iwt) + ∑_{t=1}^{∞} β^t R_2(0) exp(iwt) ] = (R_2(0)/2π) [ 1 + β exp(−iw)/(1 − β exp(−iw)) + β exp(iw)/(1 − β exp(iw)) ] = R_2(0)(1 − β²) / [2π(1 + β² − 2β cos w)]. □
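The geometric-series steps of this proof can be checked numerically by truncating the sum (1/2π) ∑_t β^{|t|} R_2(0) exp(−iwt) and comparing it with the closed form of Theorem 1 (a sanity-check sketch with our own function names):

```python
import numpy as np

def sdf_closed_form(w, gamma, lam, beta):
    """g(w) from Theorem 1."""
    r2_0 = gamma**2 + gamma + lam
    return (1 - beta**2) * r2_0 / (2 * np.pi * (1 + beta**2 - 2 * beta * np.cos(w)))

def sdf_truncated_sum(w, gamma, lam, beta, T=200):
    """Direct truncation of (1/2pi) * sum_t R_2(t) exp(-iwt),
    with R_2(t) = beta^|t| * R_2(0)."""
    r2_0 = gamma**2 + gamma + lam
    t = np.arange(-T, T + 1)
    return np.real(np.sum(beta ** np.abs(t) * r2_0 * np.exp(-1j * w * t))) / (2 * np.pi)

w = 0.7
a = sdf_closed_form(w, gamma=5, lam=3, beta=0.25)
b = sdf_truncated_sum(w, gamma=5, lam=3, beta=0.25)
print(abs(a - b) < 1e-10)
```

Since β = 0.25, the truncation error after T = 200 lags is negligible, so the two values agree to machine precision.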
Theorem 2.
If (8) is satisfied by { X t } , then the BDF, denoted by g ( w 1 , w 2 ) , is computed as
g(w_1, w_2) = (1/(2π)²) [ R_3(0,0){1 + ϝ_1(w_1) + ϝ_1(w_2) + ϝ_1(w_1 + w_2)} + (R_3(0,0) + β²R_2(0)/(β(1 − β))){ϝ_2(w_1) + ϝ_2(w_2) + ϝ_2(w_1 + w_2)} − (β²R_2(0)/(β(1 − β))){ϝ_1(w_1) + ϝ_1(w_2) + ϝ_1(w_1 + w_2)} + ϝ_1(w_1){[ϝ_2(w_1 + w_2) + ϝ_2(w_2)](R_3(0,0) + β²R_2(0)/(β(1 − β))) − [ϝ_1(w_1 + w_2) + ϝ_1(w_2)] β²R_2(0)/(β(1 − β))} + ϝ_1(w_2){[ϝ_2(w_1 + w_2) + ϝ_2(w_1)](R_3(0,0) + β²R_2(0)/(β(1 − β))) − [ϝ_1(w_1 + w_2) + ϝ_1(w_1)] β²R_2(0)/(β(1 − β))} + ϝ_1(w_1 + w_2){[ϝ_2(w_1) + ϝ_2(w_2)](R_3(0,0) + β²R_2(0)/(β(1 − β))) − [ϝ_1(w_1) + ϝ_1(w_2)] β²R_2(0)/(β(1 − β))} ],   −π ≤ w_1, w_2 ≤ π,
where ϝ_1(w_k) = β exp(−i w_k)/(1 − β exp(−i w_k)) and ϝ_2(w_k) = β² exp(−i w_k)/(1 − β² exp(−i w_k)), k = 1, 2.
Proof. 
We can write g ( w 1 , w 2 ) (see [17]) in the following form:
g(w_1, w_2) = (1/(2π)²) ∑_{t_1=−∞}^{∞} ∑_{t_2=−∞}^{∞} R_3(t_1, t_2) exp(−i(t_1 w_1 + t_2 w_2)), where the double sum is evaluated by splitting the (t_1, t_2) plane into six regions and rewriting the cumulants in each region in terms of R_3 with non-negative arguments.
Applying the symmetry properties of the third-order cumulants (see [57]) gives
g(w_1, w_2) = (1/(2π)²) [ R_3(0,0) + ∑_{τ=1}^{∞} R_3(0, τ){exp(−iτw_1) + exp(−iτw_2) + exp(−iτ(w_1 + w_2))} + ∑_{τ=1}^{∞} R_3(τ, τ){exp(−iτw_1) + exp(−iτw_2) + exp(−iτ(w_1 + w_2))} + ∑_{t=1}^{∞} ∑_{τ=1}^{∞} R_3(t, t + τ){exp(−itw_1 − i(t + τ)w_2) + exp(−itw_2 − i(t + τ)w_1) + exp(itw_1 − iτw_2) + exp(itw_2 − iτw_1) + exp(−iτw_1 + i(t + τ)w_2) + exp(−iτw_2 + i(t + τ)w_1)} ].
Using the expressions of R_3(0, τ), R_3(τ, τ), and R_3(t, t + τ), we obtain
g(w_1, w_2) = (1/(2π)²) [ R_3(0,0) + ∑_{τ=1}^{∞} β^τ R_3(0,0){exp(−iτw_1) + exp(−iτw_2) + exp(−iτ(w_1 + w_2))} + ∑_{τ=1}^{∞} (β^{2τ} R_3(0,0) − β² R_2(0)(β^τ − β^{2τ})/(β(1 − β))){exp(−iτw_1) + exp(−iτw_2) + exp(−iτ(w_1 + w_2))} + ∑_{t=1}^{∞} ∑_{τ=1}^{∞} β^τ [ β^{2t} R_3(0,0) − β² R_2(0)(β^t − β^{2t})/(β(1 − β)) ] {exp(−itw_1 − i(t + τ)w_2) + exp(−itw_2 − i(t + τ)w_1) + exp(itw_1 − iτw_2) + exp(itw_2 − iτw_1) + exp(−iτw_1 + i(t + τ)w_2) + exp(−iτw_2 + i(t + τ)w_1)} ].
All these summations can be evaluated; for example,
∑_{τ=1}^{∞} β^τ exp(−iτw_1) = ∑_{τ=1}^{∞} (β exp(−iw_1))^τ = β exp(−iw_1)/(1 − β exp(−iw_1)) = ϝ_1(w_1).
Hence, after a few further computations, every remaining sum reduces to the functions ϝ_1(w_k) = β exp(−iw_k)/(1 − β exp(−iw_k)) and ϝ_2(w_k) = β² exp(−iw_k)/(1 − β² exp(−iw_k)), k = 1, 2, evaluated at w_1, w_2, and w_1 + w_2; collecting terms gives the stated expression for g(w_1, w_2), and the proof is complete. The NBDF, represented by f(w_1, w_2), is calculated as
f(w_1, w_2) = g(w_1, w_2) / [ g(w_1) g(w_2) g(w_1 + w_2) ],
where g ( w 1 , w 2 ) and g ( w ) are obtained by (11) and (9), respectively. □
Figure 3 illustrates the generated observations from the NSINAR(1) process at λ = 3 , β = 0.25 , and γ = 5 . Figure 4, Figure 5, and Figure 6 show, respectively, the SDF, BDF, and NBDF of NSINAR(1) at the same parameter values. We infer from Figure 5 and Figure 6 and the results in Table 1 and Table 2 that the model is linear, since the NBDF values, which fall between (0.73, 0.78), are much flatter (nearly constant) than the BDF values, which fall between (5, 20). The simulated series of the forecasted NSINAR(1) observations of the neural network-based FTS, for both the basic model and the hybrid model, are shown in Figure 7, which shows that the forecasted observations preserve the shape and properties of the time series. From Figure 3 and Figure 7, one can deduce that the series is stationary and takes both positive and negative values, in agreement with the definition of the NSINAR(1) model.

6. Estimations of SDF, BDF, and NBDF

The smoothed periodogram technique with a lag window is utilized to estimate the spectral density functions; the lag window is one-dimensional in the case of the SDF and two-dimensional in the case of the BDF. Generally, if X_1, X_2, …, X_N are realizations of a real-valued third-order stationary process {X_t} with mean μ, autocovariance R_2(s), and third-order cumulant R_3(s_1, s_2), the smoothed spectrum, smoothed bispectrum, and smoothed normalized bispectrum are, respectively, given by (see [28])
ĝ(w) = (1/2π) ∑_{s=−(N−1)}^{N−1} Φ(s) R̂_2(s) exp(−isw) = (1/2π) ∑_{s=−(N−1)}^{N−1} Φ(s) R̂_2(s) cos(ws),
ĝ(w_1, w_2) = (1/4π²) ∑_{s_1=−(N−1)}^{N−1} ∑_{s_2=−(N−1)}^{N−1} Φ(s_1, s_2) R̂_3(s_1, s_2) exp(−is_1w_1 − is_2w_2),
f̂(w_1, w_2) = ĝ(w_1, w_2) / [ ĝ(w_1) ĝ(w_2) ĝ(w_1 + w_2) ],
where R̂_2(t) and R̂_3(t_1, t_2), the natural estimators of R_2(t) and R_3(t_1, t_2), are, respectively, given by
R̂_2(s) = (1/(N − |s|)) ∑_{t=1}^{N−|s|} (X_t − X̄)(X_{t+|s|} − X̄),
X̄ = (1/N) ∑_{t=1}^{N} X_t,
R̂_3(s_1, s_2) = (1/N) ∑_{t=1}^{N−β} (X_t − X̄)(X_{t+s_1} − X̄)(X_{t+s_2} − X̄),
where s_1, s_2 ≥ 0, β = max(0, s_1, s_2), s = 0, ±1, ±2, …, ±(N − 1), −π ≤ w_1, w_2 ≤ π, Φ(s) is a one-dimensional lag window, and Φ(s_1, s_2) = Φ(s_1 − s_2) Φ(s_1) Φ(s_2) is the two-dimensional lag window given by [58]. This section calculates estimates of the SDF, BDF, and NBDF by employing the smoothed periodogram approach based on these lag windows. The Daniell [59] window was chosen with two numbers of frequencies, M = 7 and M = 9. These estimates are calculated for three scenarios: (i) the generated realizations from NSINAR(1), which serve as the FTS “input values”, and (ii) and (iii) the forecasted observations, which serve as the output values of the neural network-based FTS (the basic and hybrid models, respectively).
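The lag-window SDF estimator above can be sketched as follows. Assumptions: the sinc form of the Daniell lag window, the truncation lag, and the random stand-in data are ours, not necessarily the paper's exact choices.

```python
import numpy as np

def autocov(x, s):
    """Natural estimator R_hat_2(s) = (1/(N-|s|)) * sum (X_t - Xbar)(X_{t+|s|} - Xbar)."""
    x = np.asarray(x, dtype=float)
    n, s = len(x), abs(s)
    xb = x.mean()
    return np.sum((x[: n - s] - xb) * (x[s:] - xb)) / (n - s)

def daniell_window(s, M):
    """Daniell lag window in sinc form, Phi(s) = sin(pi s / M) / (pi s / M)
    (an assumed parameterization), with Phi(0) = 1."""
    u = np.pi * s / M
    return np.where(s == 0, 1.0, np.sin(u) / np.where(u == 0, 1.0, u))

def smoothed_sdf(x, w, M=7, max_lag=50):
    """Lag-window spectral estimate g_hat(w) = (1/2pi) sum Phi(s) R_hat_2(s) cos(ws)."""
    s = np.arange(-max_lag, max_lag + 1)
    r = np.array([autocov(x, k) for k in s])
    return np.sum(daniell_window(s, M) * r * np.cos(w * s)) / (2 * np.pi)

rng = np.random.default_rng(2)
x = rng.integers(-4, 12, size=800)  # stand-in for the NSINAR(1) input values
grid = np.linspace(-np.pi, np.pi, 9)
g_hat = np.array([smoothed_sdf(x, w, M=7) for w in grid])
```

The estimate inherits the symmetry g(w) = g(−w) of the theoretical SDF, since only the cosine part of the transform survives for a real-valued series.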
Figure 8 and Figure 9 illustrate the estimated SDF for the input and output values of the FTS using Daniell lag windows at M = 7 and M = 9, respectively. Figure 10 and Figure 11 illustrate the estimated BDF for the input and output values of the FTS using Daniell lag windows at M = 7 and M = 9, respectively. Figure 12 and Figure 13 illustrate the estimated NBDF for the input and output values of the FTS using Daniell lag windows at M = 7 and M = 9, respectively. Table 3, Table 4 and Table 5 display the estimated BDF obtained with the Daniell window when M = 7 for the input values for the FTS, as well as the output values of the FTS, which depend on a neural network (in the case of the basic and hybrid models).

6.1. Discussion of Results

Comparing the input values for the FTS with the output values of the neural network-based FTS (for the basic and hybrid models), and judging by the MSE reported at the top of each figure, we find the following:
  • Figure 8, Figure 9, Figure 10, Figure 11, Figure 12 and Figure 13 illustrate the preference for the hybrid model, followed by the basic model, over the input values in estimating the SDF, BDF, and NBDF, as the hybrid model’s outputs had the lowest mean-squared error (refer to [32] and [33] to find out how to compute the MSE). In general, this indicates that the neural network-based FTS (whether used with the basic model or the hybrid model) significantly improved the smoothing of density functions’ estimates.
  • The Daniell window at M = 7 performs better than that at M = 9 in estimating the SDF for all three situations (input values and output values for the basic and hybrid models), according to a comparison of Figure 8 and Figure 9.
  • The Daniell window at M = 7 performs better than that at M = 9 in estimating the BDF for all three situations (input values and output values for the basic and hybrid models), according to a comparison of Figure 10 and Figure 11.
  • The Daniell window at M = 7 performs better than that at M = 9 in estimating the NBDF for all three situations (input values and output values for the basic and hybrid models), according to a comparison of Figure 12 and Figure 13.

6.2. Empirical Analysis

To create a series of size n = 1000, we employed NSINAR(1). Data ranging from the first observation to the 800th were utilized for our estimation (the in-sample), whereas data from the 801st to the 1000th were used for our forecast (the out-of-sample). Such observations were generated from NSINAR(1) fifteen times, so fifteen time series are available, and the proposed method was applied to each of them. The estimators of the spectral density functions were found in the following scenarios: for the observations produced by the NSINAR(1) model and for the observations forecasted by the basic and hybrid models using the suggested methodology. The MSE was used to compare these estimators in the three scenarios, and the performances of the two models were compared with the input values. Based on the mean-squared errors (MSEs) of each of the fifteen series, we found that the hybrid model outperformed the basic model. Comparing all the series, the hybrid model outperformed the input values in 12 of the 15 series, while the basic model outperformed the input values in 9 of the 15 series.

7. Conclusions

More precise smoothing estimates of the SDF, BDF, and NBDF for INAR(1) models were obtained using a novel strategy employing a neural network-based FTS. The SDF, BDF, and NBDF of a recognized process, NSINAR(1), were computed, and the observations produced by this process served as the FTS input values. To forecast the NSINAR(1) observations, two models applying neural networks were employed: the basic and hybrid models. These forecasted observations served as the output values of the neural network-based FTS. Both the input and output values were used to calculate the estimates of the SDF, BDF, and NBDF of NSINAR(1). Comparing all the density functions with their estimates in each scenario revealed a definite improvement in estimation for the hybrid model over the basic model, as well as over the input values. As a result, the neural network-based FTS (via either the basic model or the hybrid model) helped further improve the smoothing of the INAR(1) estimates. Future research will attempt to improve this technique using pi-sigma artificial neural networks, even though achieving the best estimate smoothing with them requires far more work than the method presented here.

Author Contributions

Methodology, M.E.-M., M.S.E. and R.M.E.-S.; Software, M.E.-M. and M.H.E.-M.; Validation, M.S.E. and M.H.E.-M.; Formal analysis, M.S.E. and R.M.E.-S.; Investigation, M.H.E.-M., M.E.-M. and L.A.A.-E.; Resources, M.E.-M. and L.A.A.-E.; Data curation, M.H.E.-M., M.S.E. and M.E.-M.; Writing—original draft, M.E.-M.; Writing—review & editing, M.S.E. and R.M.E.-S.; Visualization, M.H.E.-M., M.S.E., L.A.A.-E. and M.E.-M.; Project administration, M.E.-M. and R.M.E.-S.; Funding acquisition, L.A.A.-E. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2024R443), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. This study was also supported by Prince Sattam bin Abdulaziz University, project number (PSAU/2024/R/1445).

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Song, Q.; Chissom, B.S. Fuzzy time series and its models. Fuzzy Sets Syst. 1993, 54, 269–277. [Google Scholar] [CrossRef]
  2. Huarng, K. Effective lengths of intervals to improve forecasting in fuzzy time series. Fuzzy Sets Syst. 2001, 123, 387–394. [Google Scholar] [CrossRef]
  3. Yu, T.H. A refined fuzzy time-series model for forecasting. Phys. A Stat. Mech. Its Appl. 2005, 346, 657–681. [Google Scholar] [CrossRef]
  4. Huarng, K.; Yu, T.H. Ratio-based lengths of intervals to improve enrollment forecasting. In Proceedings of the Ninth International Conference on Fuzzy Theory and Technology, Cary, NC, USA, 23 September 2003. [Google Scholar]
  5. Huarng, K.; Yu, T.H. A dynamic approach to adjusting lengths of intervals in fuzzy time series forecasting. Intell. Data Anal. 2004, 8, 3–27. [Google Scholar] [CrossRef]
  6. Chen, S.M. Forecasting enrollments based on fuzzy time series. Fuzzy Sets Syst. 1996, 81, 311–319. [Google Scholar] [CrossRef]
  7. Chen, S.M. Forecasting enrollments based on high-order fuzzy time series. Cybern. Syst. 2002, 33, 1–16. [Google Scholar] [CrossRef]
  8. Hwang, J.R.; Chen, S.M.; Lee, C.H. Handling forecasting problems using fuzzy time series. Fuzzy Sets Syst. 1998, 100, 217–228. [Google Scholar] [CrossRef]
  9. Song, Q.; Chissom, B.S. Forecasting enrollments with fuzzy time series-Part II. Fuzzy Sets Syst. 1994, 62, 1–8. [Google Scholar] [CrossRef]
  10. Huarng, K. Heuristic models of fuzzy time series for forecasting. Fuzzy Sets Syst. 2001, 123, 369–386. [Google Scholar] [CrossRef]
  11. Kitchens, F.L.; Johnson, J.D.; Gupta, J.N. Predicting automobile insurance losses using artificial neural networks. In Neural Networks in Business: Techniques and Applications; IGI Global: Hershey, PA, USA, 2002; pp. 167–187. [Google Scholar]
  12. Widrow, B.; Rumelhart, D.E.; Lehr, M.A. Neural networks: Applications in industry, business and science. Commun. ACM 1994, 37, 93–106. [Google Scholar] [CrossRef]
  13. Zhang, G.; Patuwo, B.E.; Hu, M.Y. Forecasting with artificial neural networks: The state of the art. Int. J. Forecast. 1998, 14, 35–62. [Google Scholar] [CrossRef]
  14. McKenzie, E. Autoregressive moving-average processes with negative-binomial and geometric marginal distributions. Adv. Appl. Probab. 1986, 18, 679–705. [Google Scholar] [CrossRef]
  15. Al-Osh, M.A.; Aly, E.A.A. First order autoregressive time series with negative binomial and geometric marginals. Commun. Stat.-Theory Methods 1992, 21, 2483–2492. [Google Scholar] [CrossRef]
  16. Alzaid, A.; Al-Osh, M. First-order integer-valued autoregressive (INAR(1)) process: Distributional and regression properties. Stat. Neerl. 1988, 42, 53–61. [Google Scholar] [CrossRef]
  17. Bakouch, H.S.; Ristić, M.M. Zero truncated Poisson integer-valued AR(1) model. Metrika 2010, 72, 265–280. [Google Scholar] [CrossRef]
  18. Alzaid, A.A.; Al-Osh, M.A. Some autoregressive moving average processes with generalized Poisson marginal distributions. Ann. Inst. Stat. Math. 1993, 45, 223–232. [Google Scholar] [CrossRef]
  19. Jin-Guan, D.; Yuan, L. The integer-valued autoregressive (INAR(p)) model. J. Time Ser. Anal. 1991, 12, 129–142. [Google Scholar] [CrossRef]
  20. Aly, E.A.A.; Bouzar, N. Explicit stationary distributions for some Galton-Watson processes with immigration. Stoch. Model. 1994, 10, 499–517. [Google Scholar] [CrossRef]
  21. Ristić, M.M.; Nastić, A.S.; Miletić, A.V. A geometric time series model with dependent Bernoulli counting series. J. Time Ser. Anal. 2013, 34, 466–476. [Google Scholar] [CrossRef]
  22. Latour, A. Existence and stochastic structure of a non-negative integer-valued autoregressive process. J. Time Ser. Anal. 1998, 19, 439–455. [Google Scholar] [CrossRef]
  23. Ristić, M.M.; Bakouch, H.S.; Nastić, A.S. A new geometric first-order integer-valued autoregressive (NGINAR(1)) process. J. Stat. Plan. Inference 2009, 139, 2218–2226. [Google Scholar] [CrossRef]
  24. Ristić, M.M.; Nastić, A.S.; Bakouch, H.S. Estimation in an integer-valued autoregressive process with negative binomial marginals (NBINAR(1)). Commun. Stat.-Theory Methods 2012, 41, 606–618. [Google Scholar] [CrossRef]
  25. Ristić, M.M.; Nastić, A.S.; Jayakumar, K.; Bakouch, H.S. A bivariate INAR(1) time series model with geometric marginals. Appl. Math. Lett. 2012, 25, 481–485. [Google Scholar] [CrossRef]
  26. Nastić, A.S.; Ristić, M.M. Some geometric mixed integer-valued autoregressive (INAR) models. Stat. Probab. Lett. 2012, 82, 805–811. [Google Scholar] [CrossRef]
  27. Nastić, A.S.; Ristić, M.M.; Bakouch, H.S. A combined geometric INAR(p) model based on negative binomial thinning. Math. Comput. Model. 2012, 55, 1665–1672. [Google Scholar] [CrossRef]
  28. Gabr, M.M.; El-Desouky, B.S.; Shiha, F.A.; El-Hadidy, S.M. Higher Order Moments, Spectral and Bispectral Density Functions for INAR(1). Int. J. Comput. Appl. 2018, 182. [Google Scholar]
  29. Miletić, A.V.; Ristić, M.M.; Nastić, A.S.; Bakouch, H.S. An INAR(1) model based on a mixed dependent and independent counting series. J. Stat. Comput. Simul. 2018, 88, 290–304. [Google Scholar] [CrossRef]
  30. Teamah, A.A.M.; Faied, H.M.; El-Menshawy, M.H. Using the Fuzzy Time Series Technique to Improve the Estimation of the Spectral Density Function. J. Stat. Adv. Theory Appl. 2018, 19, 151–170. [Google Scholar] [CrossRef]
  31. Teamah, A.A.M.; Faied, H.M.; El-Menshawy, M.H. Effect of Fuzzy Time Series Technique on Estimators of Spectral Analysis. Recent Adv. Math. Res. Comput. Sci. 2022, 6, 29–38. [Google Scholar]
  32. El-menshawy, M.H.; Teamah, A.A.M.; Abu-Youssef, S.E.; Faied, H.M. Higher Order Moments, Cumulants, Spectral and Bispectral Density Functions of the ZTPINAR(1) Process. Appl. Math 2022, 16, 213–225. [Google Scholar]
  33. El-Morshedy, M.; El-Menshawy, M.H.; Almazah, M.M.A.; El-Sagheer, R.M.; Eliwa, M.S. Effect of fuzzy time series on smoothing estimation of the INAR(1) process. Axioms 2022, 11, 423. [Google Scholar] [CrossRef]
  34. Alqahtani, K.M.; El-Menshawy, M.H.; Eliwa, M.S.; El-Morshedy, M.; EL-Sagheer, R.M. Fuzzy Time Series Inference for Stationary Linear Processes: Features and Algorithms With Simulation. Appl. Math 2023, 17, 405–416. [Google Scholar]
  35. El-Menshawy, M.H.; Teamah, A.E.M.A.; Eliwa, M.S.; Al-Essa, L.A.; El-Morshedy, M.; EL-Sagheer, R.M. A New Statistical Technique to Enhance MCGINAR (1) Process Estimates under Symmetric and Asymmetric Data: Fuzzy Time Series Markov Chain and Its Characteristics. Symmetry 2023, 15, 1577. [Google Scholar] [CrossRef]
  36. Huarng, K.; Yu, T.H. The application of neural networks to forecast fuzzy time series. Phys. A Stat. Mech. Its Appl. 2006, 363, 481–491. [Google Scholar] [CrossRef]
  37. Egrioglu, E.; Aladag, C.H.; Yolcu, U.; Uslu, V.R.; Basaran, M.A. A new approach based on artificial neural networks for high order multivariate fuzzy time series. Expert Syst. Appl. 2009, 36, 10589–10594. [Google Scholar] [CrossRef]
  38. Yu, T.H.K.; Huarng, K.H. A neural network-based fuzzy time series model to improve forecasting. Expert Syst. Appl. 2010, 37, 3366–3372. [Google Scholar] [CrossRef]
  39. Egrioglu, E.; Aladag, C.H.; Yolcu, U. Fuzzy time series forecasting with a novel hybrid approach combining fuzzy c-means and neural networks. Expert Syst. Appl. 2013, 40, 854–857. [Google Scholar] [CrossRef]
  40. Rahman, N.H.A.; Lee, M.H.; Suhartono; Latif, M.T. Artificial neural networks and fuzzy time series forecasting: An application to air quality. Qual. Quant. 2015, 49, 2633–2647. [Google Scholar] [CrossRef]
  41. Bas, E.; Grosan, C.; Egrioglu, E.; Yolcu, U. High order fuzzy time series method based on pi-sigma neural network. Eng. Appl. Artif. Intell. 2018, 72, 350–356. [Google Scholar] [CrossRef]
  42. Sadaei, H.J.; Silva, P.C.; Guimaraes, F.G.; Lee, M.H. Short-term load forecasting by using a combined method of convolutional neural networks and fuzzy time series. Energy 2019, 175, 365–377. [Google Scholar] [CrossRef]
  43. Koo, J.W.; Wong, S.W.; Selvachandran, G.; Long, H.V.; Son, L.H. Prediction of Air Pollution Index in Kuala Lumpur using fuzzy time series and statistical models. Air Qual. Atmos. Health 2020, 13, 77–88. [Google Scholar] [CrossRef]
  44. Tang, Y.; Yu, F.; Pedrycz, W.; Yang, X.; Wang, J.; Liu, S. Building trend fuzzy granulation-based LSTM recurrent neural network for long-term time-series forecasting. IEEE Trans. Fuzzy Syst. 2021, 30, 1599–1613. [Google Scholar] [CrossRef]
  45. Nasiri, H.; Ebadzadeh, M.M. MFRFNN: Multi-functional recurrent fuzzy neural network for chaotic time series prediction. Neurocomputing 2022, 507, 292–310. [Google Scholar] [CrossRef]
  46. Rathipriya, R.; Abdul Rahman, A.A.; Dhamodharavadhani, S.; Meero, A.; Yoganandan, G. Demand forecasting model for time-series pharmaceutical data using shallow and deep neural network model. Neural Comput. Appl. 2023, 35, 1945–1957. [Google Scholar] [CrossRef] [PubMed]
  47. Zhang, G.P. Business forecasting with artificial neural networks: An overview. In Neural Networks in Business Forecasting; IGI Global: Hershey, PA, USA, 2004; pp. 1–22. [Google Scholar]
  48. Indro, D.C.; Jiang, C.X.; Patuwo, B.; Zhang, G. Predicting mutual fund performance using artificial neural networks. Omega 1999, 27, 373–380. [Google Scholar] [CrossRef]
  49. Rumelhart, D.E.; McClelland, J.L.; Parallel Distributed Processing Research Group. Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1: Foundations; MIT Press: Cambridge, MA, USA, 1986. [Google Scholar]
  50. Cybenko, G. Approximation by superpositions of a sigmoidal function. Math. Control Signals Syst. 1989, 2, 303–314. [Google Scholar] [CrossRef]
  51. Hornik, K. Approximation capabilities of multilayer feedforward networks. Neural Netw. 1991, 4, 251–257. [Google Scholar] [CrossRef]
  52. Hornik, K. Some new results on neural network approximation. Neural Netw. 1993, 6, 1069–1072. [Google Scholar] [CrossRef]
  53. Armstrong, J.S. Principles of Forecasting: A Handbook for Researchers and Practitioners; Springer: Berlin/Heidelberg, Germany, 2001; Volume 30. [Google Scholar]
  54. Zhang, G.P. Neural Networks in Business Forecasting; IGI Global: Hershey, PA, USA, 2004. [Google Scholar]
  55. Song, Q.; Chissom, B.S. Forecasting enrollments with fuzzy time series—Part I. Fuzzy Sets Syst. 1993, 54, 1–9. [Google Scholar] [CrossRef]
  56. Bourguignon, M.; Vasconcellos, K.L. A new skew integer valued time series process. Stat. Methodol. 2016, 31, 8–19. [Google Scholar] [CrossRef]
  57. Eduarda Da Silva, M.; Oliveira, V.L. Difference equations for the higher-order moments and cumulants of the INAR(1) model. J. Time Ser. Anal. 2010, 25, 317–333. [Google Scholar] [CrossRef]
  58. Rao, T.S.; Gabr, M.M. An Introduction to Bispectral Analysis and Bilinear Time Series Models; Springer Science & Business Media: Berlin/Heidelberg, Germany, 1984; Volume 24. [Google Scholar]
  59. Daniell, P.J. Discussion on symposium on autocorrelation in time series. J. R. Stat. Soc. 1946, 8, 88–90. [Google Scholar]
Figure 1. Neural network structure.
Figure 2. A node in a neural network.
Figure 3. NSINAR(1) model simulations using λ = 0.3, β = 0.25, and γ = 5.
Figure 4. SDF of NSINAR(1) using λ = 0.3, β = 0.25, and γ = 5.
Figure 5. BDF of NSINAR(1) using λ = 0.3, β = 0.25, and γ = 5.
Figure 6. NBDF of NSINAR(1) using λ = 0.3, β = 0.25, and γ = 5.
Figure 7. The simulated series of the forecasted NSINAR(1) observations of the neural network-based FTS.
Figure 8. The SDF and estimated SDF obtained with the Daniell window when M = 7, for both the input and output values of the FTS.
Figure 9. The SDF and estimated SDF obtained with the Daniell window when M = 9, for both the input and output values.
Figure 10. Estimated BDF obtained with the Daniell window when M = 7, for both the input and output values.
Figure 11. Estimated BDF obtained with the Daniell window when M = 9, for both the input and output values.
Figure 12. Estimated NBDF obtained with the Daniell window when M = 7, for both the input and output values.
Figure 13. Estimated NBDF obtained with the Daniell window when M = 9, for both the input and output values.
Table 1. The BDF of NSINAR(1) using λ = 0.3, β = 0.25, and γ = 5.
w1 \ w2: 0.00π 0.05π 0.10π 0.15π 0.20π 0.25π 0.30π 0.35π 0.40π 0.45π 0.50π 0.55π 0.60π 0.65π 0.70π 0.75π 0.80π 0.85π 0.90π 0.95π π
0.00π 18.956 18.759 18.196 17.344 16.301 15.167 14.024 12.932 11.925 11.022 10.228 9.541 8.954 8.459 8.049 7.715 7.451 7.252 7.112 7.030 7.002
0.05π 18.759 18.379 17.672 16.726 15.642 14.508 13.397 12.354 11.405 10.562 9.827 9.194 8.657 8.208 7.839 7.542 7.312 7.144 7.034 6.979 6.979
0.10π 18.196 17.672 16.874 15.890 14.814 13.721 12.671 11.697 10.821 10.047 9.376 8.802 8.317 7.915 7.587 7.328 7.132 6.995 6.914 6.887 6.914
0.15π 17.344 16.726 15.890 14.918 13.888 12.864 11.893 11.001 10.203 9.502 8.897 8.382 7.950 7.594 7.308 7.085 6.923 6.816 6.763 6.763 6.816
0.20π 16.301 15.642 14.814 13.888 12.929 11.990 11.106 10.300 9.582 8.954 8.414 7.956 7.574 7.263 7.016 6.829 6.698 6.621 6.595 6.621 6.698
0.25π 15.167 14.508 13.721 12.864 11.990 11.141 10.347 9.625 8.984 8.425 7.946 7.542 7.207 6.938 6.728 6.574 6.473 6.423 6.423 6.473 6.574
0.30π 14.024 13.397 12.671 11.893 11.106 10.347 9.638 8.996 8.427 7.932 7.509 7.154 6.863 6.632 6.457 6.334 6.261 6.237 6.261 6.334 6.457
0.35π 12.932 12.354 11.697 11.001 10.300 9.625 8.996 8.426 7.922 7.485 7.113 6.803 6.552 6.356 6.212 6.118 6.071 6.071 6.118 6.212 6.356
0.40π 11.925 11.405 10.821 10.203 9.582 8.984 8.427 7.922 7.476 7.091 6.764 6.495 6.280 6.116 6.001 5.932 5.909 5.932 6.001 6.116 6.280
0.45π 11.022 10.562 10.047 9.502 8.954 8.425 7.932 7.485 7.091 6.751 6.465 6.232 6.049 5.914 5.825 5.781 5.781 5.825 5.914 6.049 6.232
0.50π 10.228 9.827 9.376 8.897 8.414 7.946 7.509 7.113 6.764 6.465 6.216 6.015 5.861 5.753 5.688 5.667 5.688 5.753 5.861 6.015 6.216
0.55π 9.541 9.194 8.802 8.382 7.956 7.542 7.154 6.803 6.495 6.232 6.015 5.843 5.716 5.632 5.590 5.590 5.632 5.716 5.843 6.015 6.232
0.60π 8.954 8.657 8.317 7.950 7.574 7.207 6.863 6.552 6.280 6.049 5.861 5.716 5.613 5.551 5.531 5.551 5.613 5.716 5.861 6.049 6.280
0.65π 8.459 8.208 7.915 7.594 7.263 6.938 6.632 6.356 6.116 5.914 5.753 5.632 5.551 5.511 5.511 5.551 5.632 5.753 5.914 6.116 6.356
0.70π 8.049 7.839 7.587 7.308 7.016 6.728 6.457 6.212 6.001 5.825 5.688 5.590 5.531 5.511 5.531 5.590 5.688 5.825 6.001 6.212 6.457
0.75π 7.715 7.542 7.328 7.085 6.829 6.574 6.334 6.118 5.932 5.781 5.667 5.590 5.551 5.551 5.590 5.667 5.781 5.932 6.118 6.334 6.574
0.80π 7.451 7.312 7.132 6.923 6.698 6.473 6.261 6.071 5.909 5.781 5.688 5.632 5.613 5.632 5.688 5.781 5.909 6.071 6.261 6.473 6.698
0.85π 7.252 7.144 6.995 6.816 6.621 6.423 6.237 6.071 5.932 5.825 5.753 5.716 5.716 5.753 5.825 5.932 6.071 6.237 6.423 6.621 6.816
0.90π 7.112 7.034 6.914 6.763 6.595 6.423 6.261 6.118 6.001 5.914 5.861 5.843 5.861 5.914 6.001 6.118 6.261 6.423 6.595 6.763 6.914
0.95π 7.030 6.979 6.887 6.763 6.621 6.473 6.334 6.212 6.116 6.049 6.015 6.015 6.049 6.116 6.212 6.334 6.473 6.621 6.763 6.887 6.979
π 7.002 6.979 6.914 6.816 6.698 6.574 6.457 6.356 6.280 6.232 6.216 6.232 6.280 6.356 6.457 6.574 6.698 6.816 6.914 6.979 7.002
Table 2. The NBDF of NSINAR(1) using λ = 0.3, β = 0.25, and γ = 5.
w1 \ w2: 0.00π 0.05π 0.10π 0.15π 0.20π 0.25π 0.30π 0.35π 0.40π 0.45π 0.50π 0.55π 0.60π 0.65π 0.70π 0.75π 0.80π 0.85π 0.90π 0.95π π
0.00π 0.7319 0.7322 0.7332 0.7346 0.7363 0.7381 0.7399 0.7417 0.7433 0.7447 0.7460 0.7471 0.7480 0.7488 0.7494 0.7499 0.7503 0.7507 0.7509 0.7510 0.7510
0.05π 0.7322 0.7329 0.7340 0.7356 0.7373 0.7392 0.7410 0.7426 0.7442 0.7455 0.7467 0.7477 0.7485 0.7492 0.7498 0.7503 0.7507 0.7509 0.7511 0.7512 0.7512
0.10π 0.7332 0.7340 0.7353 0.7369 0.7387 0.7405 0.7422 0.7438 0.7453 0.7465 0.7476 0.7485 0.7493 0.7500 0.7505 0.7509 0.7512 0.7515 0.7516 0.7516 0.7516
0.15π 0.7346 0.7356 0.7369 0.7386 0.7403 0.7420 0.7437 0.7452 0.7465 0.7477 0.7487 0.7496 0.7503 0.7509 0.7514 0.7517 0.7520 0.7522 0.7523 0.7523 0.7522
0.20π 0.7363 0.7373 0.7387 0.7403 0.7419 0.7436 0.7451 0.7466 0.7478 0.7489 0.7499 0.7507 0.7514 0.7519 0.7523 0.7527 0.7529 0.7530 0.7531 0.7530 0.7529
0.25π 0.7381 0.7392 0.7405 0.7420 0.7436 0.7451 0.7466 0.7479 0.7491 0.7502 0.7511 0.7518 0.7524 0.7529 0.7533 0.7536 0.7538 0.7539 0.7539 0.7538 0.7536
0.30π 0.7399 0.7410 0.7422 0.7437 0.7451 0.7466 0.7480 0.7493 0.7504 0.7514 0.7522 0.7529 0.7535 0.7539 0.7543 0.7545 0.7546 0.7547 0.7546 0.7545 0.7543
0.35π 0.7417 0.7426 0.7438 0.7452 0.7466 0.7479 0.7493 0.7505 0.7515 0.7525 0.7532 0.7539 0.7544 0.7548 0.7551 0.7553 0.7554 0.7554 0.7553 0.7551 0.7548
0.40π 0.7433 0.7442 0.7453 0.7465 0.7478 0.7491 0.7504 0.7515 0.7525 0.7534 0.7542 0.7548 0.7552 0.7556 0.7559 0.7560 0.7561 0.7560 0.7559 0.7556 0.7552
0.45π 0.7447 0.7455 0.7465 0.7477 0.7489 0.7502 0.7514 0.7525 0.7534 0.7542 0.7549 0.7555 0.7559 0.7563 0.7565 0.7566 0.7566 0.7565 0.7563 0.7559 0.7555
0.50π 0.7460 0.7467 0.7476 0.7487 0.7499 0.7511 0.7522 0.7532 0.7542 0.7549 0.7556 0.7561 0.7565 0.7568 0.7570 0.7570 0.7570 0.7568 0.7565 0.7561 0.7556
0.55π 0.7471 0.7477 0.7485 0.7496 0.7507 0.7518 0.7529 0.7539 0.7548 0.7555 0.7561 0.7566 0.7569 0.7572 0.7573 0.7573 0.7572 0.7569 0.7566 0.7561 0.7555
0.60π 0.7480 0.7485 0.7493 0.7503 0.7514 0.7524 0.7535 0.7544 0.7552 0.7559 0.7565 0.7569 0.7573 0.7574 0.7575 0.7574 0.7573 0.7569 0.7565 0.7559 0.7552
0.65π 0.7488 0.7492 0.7500 0.7509 0.7519 0.7529 0.7539 0.7548 0.7556 0.7563 0.7568 0.7572 0.7574 0.7576 0.7576 0.7574 0.7572 0.7568 0.7563 0.7556 0.7548
0.70π 0.7494 0.7498 0.7505 0.7514 0.7523 0.7533 0.7543 0.7551 0.7559 0.7565 0.7570 0.7573 0.7575 0.7576 0.7575 0.7573 0.7570 0.7565 0.7559 0.7551 0.7543
0.75π 0.7499 0.7503 0.7509 0.7517 0.7527 0.7536 0.7545 0.7553 0.7560 0.7566 0.7570 0.7573 0.7574 0.7574 0.7573 0.7570 0.7566 0.7560 0.7553 0.7545 0.7536
0.80π 0.7503 0.7507 0.7512 0.7520 0.7529 0.7538 0.7546 0.7554 0.7561 0.7566 0.7570 0.7572 0.7573 0.7572 0.7570 0.7566 0.7561 0.7554 0.7546 0.7538 0.7529
0.85π 0.7507 0.7509 0.7515 0.7522 0.7530 0.7539 0.7547 0.7554 0.7560 0.7565 0.7568 0.7569 0.7569 0.7568 0.7565 0.7560 0.7554 0.7547 0.7539 0.7530 0.7522
0.90π 0.7509 0.7511 0.7516 0.7523 0.7531 0.7539 0.7546 0.7553 0.7559 0.7563 0.7565 0.7566 0.7565 0.7563 0.7559 0.7553 0.7546 0.7539 0.7531 0.7523 0.7516
0.95π 0.7510 0.7512 0.7516 0.7523 0.7530 0.7538 0.7545 0.7551 0.7556 0.7559 0.7561 0.7561 0.7559 0.7556 0.7551 0.7545 0.7538 0.7530 0.7523 0.7516 0.7512
π 0.7510 0.7512 0.7516 0.7522 0.7529 0.7536 0.7543 0.7548 0.7552 0.7555 0.7556 0.7555 0.7552 0.7548 0.7543 0.7536 0.7529 0.7522 0.7516 0.7512 0.7510
Table 3. Estimated BDF of NSINAR(1) using the Daniell window at M = 7 by the input values of the FTS.
w1 \ w2: 0.00π 0.05π 0.10π 0.15π 0.20π 0.25π 0.30π 0.35π 0.40π 0.45π 0.50π 0.55π 0.60π 0.65π 0.70π 0.75π 0.80π 0.85π 0.90π 0.95π π
0.00π 19.583 19.380 18.624 17.139 15.161 13.305 12.036 11.269 10.580 9.750 8.976 8.532 8.359 8.102 7.505 6.668 5.878 5.307 4.935 4.700 4.613
0.05π 19.380 18.903 17.784 16.010 13.980 12.286 11.214 10.534 9.863 9.142 8.610 8.375 8.205 7.779 7.041 6.217 5.544 5.074 4.751 4.568 4.568
0.10π 18.624 17.784 16.357 14.430 12.406 10.813 9.867 9.322 8.859 8.459 8.238 8.106 7.807 7.213 6.457 5.758 5.207 4.770 4.445 4.317 4.445
0.15π 17.139 16.010 14.430 12.495 10.627 9.291 8.593 8.258 8.025 7.872 7.772 7.552 7.083 6.444 5.816 5.277 4.777 4.301 3.980 3.980 4.301
0.20π 15.161 13.980 12.406 10.627 9.132 8.228 7.808 7.575 7.379 7.221 7.012 6.629 6.103 5.587 5.149 4.704 4.175 3.675 3.462 3.675 4.175
0.25π 13.305 12.286 10.813 9.291 8.228 7.688 7.376 7.040 6.670 6.314 5.922 5.473 5.060 4.745 4.439 4.001 3.464 3.076 3.076 3.464 4.001
0.30π 12.036 11.214 9.867 8.593 7.808 7.376 6.947 6.374 5.752 5.187 4.709 4.352 4.135 3.970 3.699 3.270 2.852 2.681 2.852 3.270 3.699
0.35π 11.269 10.534 9.322 8.258 7.575 7.040 6.374 5.551 4.733 4.081 3.671 3.487 3.420 3.303 3.035 2.690 2.457 2.457 2.690 3.035 3.303
0.40π 10.580 9.863 8.859 8.025 7.379 6.670 5.752 4.733 3.842 3.253 2.995 2.948 2.930 2.805 2.561 2.319 2.220 2.319 2.561 2.805 2.930
0.45π 9.750 9.142 8.459 7.872 7.221 6.314 5.187 4.081 3.253 2.811 2.679 2.676 2.638 2.491 2.269 2.099 2.099 2.269 2.491 2.638 2.676
0.50π 8.976 8.610 8.238 7.772 7.012 5.922 4.709 3.671 2.995 2.679 2.587 2.555 2.467 2.288 2.092 2.008 2.092 2.288 2.467 2.555 2.587
0.55π 8.532 8.375 8.106 7.552 6.629 5.473 4.352 3.487 2.948 2.676 2.555 2.463 2.314 2.138 2.032 2.032 2.138 2.314 2.463 2.555 2.676
0.60π 8.359 8.205 7.807 7.083 6.103 5.060 4.135 3.420 2.930 2.638 2.467 2.314 2.162 2.089 2.084 2.089 2.162 2.314 2.467 2.638 2.930
0.65π 8.102 7.779 7.213 6.444 5.587 4.745 3.970 3.303 2.805 2.491 2.288 2.138 2.089 2.130 2.130 2.089 2.138 2.288 2.491 2.805 3.303
0.70π 7.505 7.041 6.457 5.816 5.149 4.439 3.699 3.035 2.561 2.269 2.092 2.032 2.084 2.130 2.084 2.032 2.092 2.269 2.561 3.035 3.699
0.75π 6.668 6.217 5.758 5.277 4.704 4.001 3.270 2.690 2.319 2.099 2.008 2.032 2.089 2.089 2.032 2.008 2.099 2.319 2.690 3.270 4.001
0.80π 5.878 5.544 5.207 4.777 4.175 3.464 2.852 2.457 2.220 2.099 2.092 2.138 2.162 2.138 2.092 2.099 2.220 2.457 2.852 3.464 4.175
0.85π 5.307 5.074 4.770 4.301 3.675 3.076 2.681 2.457 2.319 2.269 2.288 2.314 2.314 2.288 2.269 2.319 2.457 2.681 3.076 3.675 4.301
0.90π 4.935 4.751 4.445 3.980 3.462 3.076 2.852 2.690 2.561 2.491 2.467 2.463 2.467 2.491 2.561 2.690 2.852 3.076 3.462 3.980 4.445
0.95π 4.700 4.568 4.317 3.980 3.675 3.464 3.270 3.035 2.805 2.638 2.555 2.555 2.638 2.805 3.035 3.270 3.464 3.675 3.980 4.317 4.568
π 4.613 4.568 4.445 4.301 4.175 4.001 3.699 3.303 2.930 2.676 2.587 2.676 2.930 3.303 3.699 4.001 4.175 4.301 4.445 4.568 4.613
Table 4. Estimated BDF of NSINAR(1) using the Daniell window at M = 7 by the output values of the neural network-based FTS “Model 1”.
w1 \ w2: 0.00π 0.05π 0.10π 0.15π 0.20π 0.25π 0.30π 0.35π 0.40π 0.45π 0.50π 0.55π 0.60π 0.65π 0.70π 0.75π 0.80π 0.85π 0.90π 0.95π π
0.00π 19.523 19.160 18.115 16.535 14.695 12.941 11.551 10.610 10.005 9.545 9.109 8.679 8.274 7.859 7.350 6.701 5.983 5.344 4.906 4.685 4.624
0.05π 19.160 18.460 17.165 15.475 13.672 12.054 10.835 10.043 9.534 9.133 8.746 8.367 7.987 7.547 6.980 6.295 5.604 5.054 4.721 4.581 4.581
0.10π 18.115 17.165 15.785 14.150 12.455 10.948 9.839 9.162 8.774 8.489 8.206 7.895 7.528 7.060 6.465 5.795 5.171 4.713 4.459 4.382 4.459
0.15π 16.535 15.475 14.150 12.630 11.061 9.701 8.761 8.251 7.995 7.800 7.565 7.263 6.879 6.396 5.820 5.208 4.664 4.280 4.089 4.089 4.280
0.20π 14.695 13.672 12.455 11.061 9.678 8.573 7.884 7.531 7.315 7.082 6.786 6.437 6.042 5.590 5.077 4.544 4.079 3.765 3.655 3.765 4.079
0.25π 12.941 12.054 10.948 9.701 8.573 7.773 7.296 6.975 6.640 6.245 5.835 5.456 5.103 4.730 4.303 3.854 3.476 3.261 3.261 3.476 3.854
0.30π 11.551 10.835 9.839 8.761 7.884 7.296 6.865 6.400 5.838 5.265 4.797 4.465 4.207 3.929 3.586 3.230 2.967 2.873 2.967 3.230 3.586
0.35π 10.610 10.043 9.162 8.251 7.531 6.975 6.400 5.692 4.926 4.279 3.861 3.637 3.483 3.280 3.008 2.751 2.605 2.605 2.751 3.008 3.280
0.40π 10.005 9.534 8.774 7.995 7.315 6.640 5.838 4.926 4.084 3.494 3.194 3.075 2.982 2.813 2.592 2.412 2.346 2.412 2.592 2.813 2.982
0.45π 9.545 9.133 8.489 7.800 7.082 6.245 5.265 4.279 3.494 3.026 2.831 2.760 2.666 2.494 2.294 2.164 2.164 2.294 2.494 2.666 2.760
0.50π 9.109 8.746 8.206 7.565 6.786 5.835 4.797 3.861 3.194 2.831 2.678 2.586 2.450 2.263 2.096 2.028 2.096 2.263 2.450 2.586 2.678
0.55π 8.679 8.367 7.895 7.263 6.437 5.456 4.465 3.637 3.075 2.760 2.586 2.440 2.274 2.120 2.029 2.029 2.120 2.274 2.440 2.586 2.760
0.60π 8.274 7.987 7.528 6.879 6.042 5.103 4.207 3.483 2.982 2.666 2.450 2.274 2.145 2.084 2.070 2.084 2.145 2.274 2.450 2.666 2.982
0.65π 7.859 7.547 7.060 6.396 5.590 4.730 3.929 3.280 2.813 2.494 2.263 2.120 2.084 2.105 2.105 2.084 2.120 2.263 2.494 2.813 3.280
0.70π 7.350 6.980 6.465 5.820 5.077 4.303 3.586 3.008 2.592 2.294 2.096 2.029 2.070 2.105 2.070 2.029 2.096 2.294 2.592 3.008 3.586
0.75π 6.701 6.295 5.795 5.208 4.544 3.854 3.230 2.751 2.412 2.164 2.028 2.029 2.084 2.084 2.029 2.028 2.164 2.412 2.751 3.230 3.854
0.80π 5.983 5.604 5.171 4.664 4.079 3.476 2.967 2.605 2.346 2.164 2.096 2.120 2.145 2.120 2.096 2.164 2.346 2.605 2.967 3.476 4.079
0.85π 5.344 5.054 4.713 4.280 3.765 3.261 2.873 2.605 2.412 2.294 2.263 2.274 2.274 2.263 2.294 2.412 2.605 2.873 3.261 3.765 4.280
0.90π 4.906 4.721 4.459 4.089 3.655 3.261 2.967 2.751 2.592 2.494 2.450 2.440 2.450 2.494 2.592 2.751 2.967 3.261 3.655 4.089 4.459
0.95π 4.685 4.581 4.382 4.089 3.765 3.476 3.230 3.008 2.813 2.666 2.586 2.586 2.666 2.813 3.008 3.230 3.476 3.765 4.089 4.382 4.581
π 4.624 4.581 4.459 4.280 4.079 3.854 3.586 3.280 2.982 2.760 2.678 2.760 2.982 3.280 3.586 3.854 4.079 4.280 4.459 4.581 4.624
Table 5. Estimated BDF of NSINAR(1) using the Daniell window at M = 7 by the output values of the neural network-based FTS “Model 2”.
w1 \ w2: 0.00π 0.05π 0.10π 0.15π 0.20π 0.25π 0.30π 0.35π 0.40π 0.45π 0.50π 0.55π 0.60π 0.65π 0.70π 0.75π 0.80π 0.85π 0.90π 0.95π π
0.00π 18.710 18.588 18.004 16.674 14.807 13.023 11.736 10.844 10.064 9.349 8.834 8.526 8.245 7.834 7.284 6.648 5.975 5.355 4.914 4.701 4.649
0.05π 18.588 18.230 17.235 15.555 13.638 12.054 10.975 10.179 9.473 8.909 8.563 8.322 7.991 7.502 6.907 6.259 5.609 5.064 4.728 4.595 4.595
0.10π 18.004 17.235 15.819 13.965 12.165 10.802 9.884 9.213 8.695 8.369 8.182 7.944 7.532 6.986 6.389 5.775 5.187 4.722 4.453 4.371 4.453
0.15π 16.674 15.555 13.965 12.212 10.688 9.585 8.839 8.313 7.976 7.804 7.643 7.323 6.836 6.294 5.759 5.220 4.698 4.283 4.058 4.058 4.283
0.20π 14.807 13.638 12.165 10.688 9.495 8.648 8.043 7.600 7.321 7.131 6.864 6.444 5.957 5.504 5.071 4.602 4.125 3.757 3.618 3.757 4.125
0.25π 13.023 12.054 10.802 9.585 8.648 7.969 7.422 6.970 6.613 6.277 5.870 5.424 5.032 4.706 4.359 3.935 3.513 3.249 3.249 3.513 3.935
0.30π 11.736 10.975 9.884 8.839 8.043 7.422 6.856 6.316 5.800 5.295 4.821 4.447 4.190 3.963 3.660 3.287 2.972 2.851 2.972 3.287 3.660
0.35π 10.844 10.179 9.213 8.313 7.600 6.970 6.316 5.624 4.940 4.342 3.908 3.655 3.501 3.319 3.048 2.757 2.573 2.573 2.757 3.048 3.319
0.40π 10.064 9.473 8.695 7.976 7.321 6.613 5.800 4.940 4.161 3.585 3.255 3.101 2.991 2.824 2.598 2.398 2.320 2.398 2.598 2.824 2.991
0.45π 9.349 8.909 8.369 7.804 7.131 6.277 5.295 4.342 3.585 3.108 2.873 2.764 2.661 2.503 2.316 2.185 2.185 2.316 2.503 2.661 2.764
0.50π 8.834 8.563 8.182 7.643 6.864 5.870 4.821 3.908 3.255 2.873 2.686 2.583 2.468 2.305 2.143 2.073 2.143 2.305 2.468 2.583 2.686
0.55π 8.526 8.322 7.944 7.323 6.444 5.424 4.447 3.655 3.101 2.764 2.583 2.463 2.324 2.160 2.048 2.048 2.160 2.324 2.463 2.583 2.764
0.60π 8.245 7.991 7.532 6.836 5.957 5.032 4.190 3.501 2.991 2.661 2.468 2.324 2.178 2.072 2.040 2.072 2.178 2.324 2.468 2.661 2.991
0.65π 7.834 7.502 6.986 6.294 5.504 4.706 3.963 3.319 2.824 2.503 2.305 2.160 2.072 2.060 2.060 2.072 2.160 2.305 2.503 2.824 3.319
0.70π 7.284 6.907 6.389 5.759 5.071 4.359 3.660 3.048 2.598 2.316 2.143 2.048 2.040 2.060 2.040 2.048 2.143 2.316 2.598 3.048 3.660
0.75π 6.648 6.259 5.775 5.220 4.602 3.935 3.287 2.757 2.398 2.185 2.073 2.048 2.072 2.072 2.048 2.073 2.185 2.398 2.757 3.287 3.935
0.80π 5.975 5.609 5.187 4.698 4.125 3.513 2.972 2.573 2.320 2.185 2.143 2.160 2.178 2.160 2.143 2.185 2.320 2.573 2.972 3.513 4.125
0.85π 5.355 5.064 4.722 4.283 3.757 3.249 2.851 2.573 2.398 2.316 2.305 2.324 2.324 2.305 2.316 2.398 2.573 2.851 3.249 3.757 4.283
0.90π 4.914 4.728 4.453 4.058 3.618 3.249 2.972 2.757 2.598 2.503 2.468 2.463 2.468 2.503 2.598 2.757 2.972 3.249 3.618 4.058 4.453
0.95π 4.701 4.595 4.371 4.058 3.757 3.513 3.287 3.048 2.824 2.661 2.583 2.583 2.661 2.824 3.048 3.287 3.513 3.757 4.058 4.371 4.595
π 4.649 4.595 4.453 4.283 4.125 3.935 3.660 3.319 2.991 2.764 2.686 2.764 2.991 3.319 3.660 3.935 4.125 4.283 4.453 4.595 4.649