Article

Wavelets in Combination with Stochastic and Machine Learning Models to Predict Agricultural Prices

Sandip Garai, Ranjit Kumar Paul, Debopam Rakshit, Md Yeasin, Walid Emam, Yusra Tashkandy and Christophe Chesneau

1 ICAR-Indian Agricultural Statistics Research Institute, New Delhi 110012, India
2 Department of Statistics and Operations Research, Faculty of Science, King Saud University, P.O. Box 2455, Riyadh 11451, Saudi Arabia
3 Department of Mathematics, University of Caen-Normandie, 14000 Caen, France
* Authors to whom correspondence should be addressed.
Present Address: ICAR-Indian Institute of Agricultural Biotechnology, Ranchi 834003, India.
Mathematics 2023, 11(13), 2896; https://doi.org/10.3390/math11132896
Submission received: 6 June 2023 / Revised: 22 June 2023 / Accepted: 24 June 2023 / Published: 28 June 2023
(This article belongs to the Special Issue Advances in Statistical Modeling)

Abstract

Wavelet decomposition has been widely used in the signal processing literature. The popularity of machine learning (ML) algorithms in agriculture is growing day by day, from irrigation scheduling and yield prediction to price prediction. It is therefore worthwhile to study wavelet-based stochastic and ML models and to identify the most suitable wavelet filters for predicting agricultural commodity prices. In the present study, some popular wavelet filters were considered: Haar, Daubechies (D4), Coiflet (C6), best localized (BL14), and least asymmetric (LA8). Daily wholesale price data of onions from three major Indian markets, namely Bengaluru, Delhi, and Lasalgaon, were used to illustrate the potential of the different filters. The performance of the wavelet-based models was compared with that of benchmark models. In general, the wavelet-based combination models outperformed the other models. Moreover, wavelet decomposition with the Haar filter followed by the random forest (RF) model gave better prediction accuracy than the other combinations as well as the individual models.

1. Introduction

Agricultural datasets are of great importance as they provide information on variables such as the occurrence and intensity of rainfall, daily temperature fluctuations, and commodity price variations. Forecasting future commodity prices is crucial for all stakeholders in the agricultural supply chain, from farmers/producers to consumers and policymakers. Proper knowledge of possible future prices can prevent distress sales by farmers and enable them to make better decisions about their farming activities.
Time series modeling is a crucial tool in data analysis and forecasting, as it allows researchers to uncover hidden patterns and trends within the data. Model selection is an important step in time series modeling, as the choice of model can have a significant impact on the accuracy of the forecast. Ultimately, the selection of the appropriate technique depends on the nature of the data and the specific research question being addressed.
The wavelet transformation helps capture minute events in a signal that may not otherwise be obvious [1]. It represents localized phenomena in the signal at different time scales, so a wavelet representation can identify frequency content as well as temporal variations [2]. A signal can be analyzed through wavelet transformation in two ways: the continuous wavelet transform (CWT) and the discrete wavelet transform (DWT). The CWT operates on a continuous signal, whereas the DWT decomposes the signal on a discrete, dyadic set of scales. The CWT produces a redundant set of sub-signals, which makes reconstruction of the original signal difficult [3], and reconstruction is precisely what is needed for forecasting the original signal [3]. Consequently, the DWT, which produces orthogonal sub-signals amenable to further treatment, is best suited for discrete, multiscale agricultural price datasets.
Stochastic models, such as the autoregressive integrated moving average (ARIMA) and generalized autoregressive conditional heteroscedastic (GARCH) models, impose several data requirements before modeling can be performed [4]. Data-driven ML algorithms, by contrast, need very little human intervention [5]. Several important algorithms are used for forecasting purposes. Multivariate adaptive regression splines (MARS) is a piecewise regression model that can efficiently handle nonlinearity in the data [6]. Principal component regression (PCR) is an artificial intelligence (AI) algorithm based on principal components (PCs), which are linear combinations of the original predictor variables [7,8]. The PCs are mutually orthogonal, which resolves the serious issue of multicollinearity in regression analysis [9,10]. The support vector regression (SVR) algorithm is useful in both classification and regression problems and is based on the risk minimization principle [11,12,13]. Zhang et al. [14] used linear programming to propose a two-phase SVR with multiple kernel functions, which helped them find important features for predicting output variables while reducing the computational burden of solving the underlying convex quadratic program. Random forest (RF) is based on the well-known bagging algorithm, combining several decision trees built from many input variables to give a final prediction [15,16,17,18]. The artificial neural network (ANN) uses a three-layered system (input, hidden, and output layers) for classification and regression tasks [19]. Li et al. [20] showed that an induced ordered weighted averaging (IOWA)-optimized neural network (NN) model outperformed other models in predicting a vegetable price series. Zhou et al. [21] discussed tensor principal component analysis (PCA)-based techniques to recover clean data in the presence of noise. Zhao [22] studied a wavelet-based signal processing technique along with an ML model to predict future prices of agricultural products. Paul and Garai [23] predicted tomato prices using wavelet filter-based decomposition in combination with stochastic and ML models, but they did not address which combination of filter and model is best or how the two relate. Iniyan et al. [24] used ML as a dynamic tool to forecast crop yield, utilizing several ML methods with several variables to help farmers decide which crop to grow and to increase yield.
In the present investigation, onion prices from three major markets in India, namely Bengaluru, Delhi, and Lasalgaon, were used. Onion is the second-most-produced vegetable in India after the potato; as per the 3rd Advance Estimates (2020–21), its production stands at 26.83 million tonnes. The modeling and forecasting of onion price series in various Indian markets has drawn considerable attention from researchers, and many studies have applied stochastic and ML models to this problem [25,26,27,28,29,30]. Here, the price series were first predicted using stochastic models and ML algorithms; thereafter, wavelet-decomposed subseries were fed into these models to obtain better results. The efficacy of the predictions was measured using three commonly used error functions: root mean squared error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE). Several wavelet filters have evolved in the literature for different purposes, and it is necessary to identify which filter performs particularly well with an individual model so that the pair can be used efficiently for future time series modeling. This paper therefore attempts to find the best-performing filter for each method based on several performance metrics.

2. Methodology

2.1. ARIMA

The most popular linear time series model is the autoregressive integrated moving average (ARIMA) model [31]. For a time series $y_t$, the ARMA($p$, $q$) model is given by Equation (1) or, equivalently, Equation (2):

$$y_t = \varphi_1 y_{t-1} + \varphi_2 y_{t-2} + \cdots + \varphi_p y_{t-p} + \varepsilon_t - \theta_1 \varepsilon_{t-1} - \theta_2 \varepsilon_{t-2} - \cdots - \theta_q \varepsilon_{t-q} \quad (1)$$

or

$$\varphi(L)\, y_t = \theta(L)\, \varepsilon_t \quad (2)$$

where $\varphi(L)$ and $\theta(L)$ are the AR and MA polynomials in the lag operator $L$, of orders $p$ and $q$, respectively.

For a wide class of nonstationary time series, the ARMA model is generalized by incorporating a differencing term. The ARIMA($p$, $d$, $q$) model is defined as:

$$\varphi(L)\, \Delta^d y_t = \theta(L)\, \varepsilon_t \quad (3)$$

where $p$, $d$, and $q$ represent the orders of autoregression, integration (differencing), and moving average, respectively.
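As a concrete illustration, the sketch below fits an ARIMA model in Python with statsmodels; the library choice, the order (1, 1, 1), and the synthetic series are assumptions for exposition, not the paper's actual implementation (the study selected orders by AIC).

```python
# Illustrative sketch only: statsmodels and the chosen order are assumptions.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
prices = 2000 + np.cumsum(rng.normal(0, 50, 500))  # stand-in daily price series

fit = ARIMA(prices, order=(1, 1, 1)).fit()  # ARIMA(p=1, d=1, q=1)
print(fit.aic)                              # AIC, used for order selection
forecast = fit.forecast(steps=30)           # 30-step (1-month-ahead) prediction
```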

2.2. GARCH

The ARIMA model is unable to capture the nonlinear structure of a time series. The generalized autoregressive conditional heteroscedastic (GARCH) model was proposed in [32] to capture the conditional heteroscedasticity present in time series data. For a GARCH process, the conditional distribution of the error $\varepsilon_t$, given the information $\psi_{t-1}$ available up to time epoch $t-1$, is assumed to follow a normal distribution, i.e., $\varepsilon_t \mid \psi_{t-1} \sim N(0, h_t)$ with $\varepsilon_t = \sqrt{h_t}\,\nu_t$, where the $\nu_t$ are identically and independently distributed (i.i.d.) innovations with zero mean and unit variance.

Here, the conditional variance of the GARCH($p$, $q$) process is defined as:

$$h_t = \alpha_0 + \sum_{i=1}^{q} \alpha_i \varepsilon_{t-i}^2 + \sum_{j=1}^{p} \beta_j h_{t-j} \quad (4)$$

provided $\alpha_0 > 0$, $\alpha_i \ge 0 \ \forall i$, and $\beta_j \ge 0 \ \forall j$.

The GARCH($p$, $q$) process is weakly stationary if and only if:

$$\sum_{i=1}^{q} \alpha_i + \sum_{j=1}^{p} \beta_j < 1 \quad (5)$$
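A hedged sketch of fitting Equation (4) to ARIMA residuals follows, using the Python `arch` package; the package, the GARCH(1, 1) order, and the synthetic residuals are assumptions, since the paper does not name its software.

```python
# Hedged sketch: the `arch` package and GARCH(1, 1) order are assumptions.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(1)
residuals = rng.normal(0, 1, 500)           # stand-in for ARIMA residuals

res = arch_model(residuals, vol="GARCH", p=1, q=1, mean="Zero").fit(disp="off")
print(res.params)                           # alpha_0, alpha_1, beta_1 estimates
fc = res.forecast(horizon=7)                # 1-week-ahead conditional variance
print(fc.variance.iloc[-1])
```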

2.3. ANN

ANNs are nonlinear, data-driven, and self-adaptive approaches. Like human brains, neural networks also consist of processing units (artificial neurons) and connections (weights) between them.
The output $y$ from a neuron can be expressed as:

$$y = \theta\!\left( \sum_{i=1}^{n} w_i x_i + b \right) \quad (6)$$

where $x_i$ are the inputs to the network, $w_i$ the corresponding weights, $b$ the bias imposed on the neuron's output, and $\theta$ the activation function. The number of hidden layers, the number of neurons in each hidden layer, the learning rate, the activation function, the regularization technique, and the optimization algorithm are all hyperparameters; manual tuning, grid search, random search, Bayesian optimization, and evolutionary algorithms are some of the methods employed to tune them. Once the system is optimized, it can produce outputs from the supplied inputs.
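A minimal sketch of such a network on lagged prices is shown below; scikit-learn's MLPRegressor stands in for the single-hidden-layer network (the paper's exact implementation, e.g., resilient backpropagation, is not reproduced), and the lag count and data are illustrative.

```python
# Sketch under stated assumptions: MLPRegressor is a stand-in implementation.
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_lags(series, n_lags):
    """Stack n_lags lagged copies of the series as predictors."""
    X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
    return X, series[n_lags:]

rng = np.random.default_rng(2)
prices = 2000 + np.cumsum(rng.normal(0, 50, 500))
X, y = make_lags(prices, n_lags=7)

ann = MLPRegressor(hidden_layer_sizes=(4,), activation="logistic",  # 4 hidden units
                   learning_rate_init=0.05, alpha=1e-4, max_iter=2000)
ann.fit(X[:-30], y[:-30])        # hold out the last 30 days for validation
preds = ann.predict(X[-30:])
```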

2.4. SVR

SVR is useful in both classification and regression studies [11]. It is formulated as Equations (7) and (8), subject to the constraints in Equation (9):

$$y = k(z) = v \cdot \phi(z) + c \quad (7)$$

Minimize:

$$\frac{1}{2}\|v\|^2 + P \sum_{b=1}^{l} \left( \zeta_b + \zeta_b^* \right), \quad P > 0 \quad (8)$$

Subject to:

$$y_b - \left( v \cdot \phi(z_b) + c \right) \le \varepsilon + \zeta_b, \qquad \left( v \cdot \phi(z_b) + c \right) - y_b \le \varepsilon + \zeta_b^*, \qquad \zeta_b, \zeta_b^* \ge 0 \quad (9)$$

In the above equations, $z = (z_1, z_2, \ldots, z_l)$ are the input variables; $y_b$ is the predicted value of the output variable; $k$ (through $\phi$) is the kernel function; $v \in \mathbb{R}^l$ and $c \in \mathbb{R}$ are constants; $P$ is the cost factor or regularization parameter; $\zeta_b$ and $\zeta_b^*$ are slack variables; and $\varepsilon$ is a constant. The kernel function (linear, polynomial, or radial basis function), cost factor, and epsilon are the tunable parameters for optimizing the SVR algorithm.
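The following sketch tunes these three parameters with scikit-learn's epsilon-SVR and a radial kernel; the grid mirrors the hyperparameter ranges reported in Section 5, but the library, data, and search setup are otherwise illustrative assumptions.

```python
# Sketch: epsilon-SVR with an RBF kernel; grid values echo Section 5's ranges.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(3)
prices = 2000 + np.cumsum(rng.normal(0, 50, 400))
n_lags = 7
X = np.column_stack([prices[i:len(prices) - n_lags + i] for i in range(n_lags)])
y = prices[n_lags:]

grid = {"epsilon": [0.001, 0.01, 0.1],   # epsilon-tube width
        "C": [0.01, 0.1, 1.0],           # cost factor P
        "gamma": [0.1, 0.33, 0.5]}       # RBF kernel width
search = GridSearchCV(SVR(kernel="rbf"), grid,
                      scoring="neg_mean_squared_error", cv=5)
search.fit(X[:-30], y[:-30])             # select hyperparameters by minimum MSE
print(search.best_params_)
svr_preds = search.best_estimator_.predict(X[-30:])
```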

2.5. RF

RF is a supervised learning algorithm used for both classification and regression. Just as a natural forest consists of trees, the random forest algorithm grows decision trees from the sample information, obtains a prediction from each of them, and finally selects the best solution by voting (for classification) or averaging (for regression). It is based on the bagging (bootstrap aggregation) technique applied over decision trees. The correlation between trees is reduced by applying randomization in two ways: first, each tree is trained on a bootstrapped subset of the data; second, the feature used for splitting at each node is selected not from all possible features but from a random subset of them of size $m$. The algorithm generates all $N$ trees independently, and its efficiency is achieved by growing a full binary tree of maximum depth for each candidate tree. The random forest prediction can be estimated as:

$$y = \frac{1}{k} \sum_{i=1}^{k} y_i \quad (10)$$

Here, $k$ is the total number of candidate regression trees, $y_i$ is the prediction from the $i$th tree, and $y$ is the final random forest prediction.
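A hedged sketch of this averaging with scikit-learn (an assumption; the paper's software is not named) on a lagged price matrix:

```python
# Sketch: 500 trees, the count the study later reports as optimal; data and
# lag count are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
prices = 2000 + np.cumsum(rng.normal(0, 50, 400))
n_lags = 7
X = np.column_stack([prices[i:len(prices) - n_lags + i] for i in range(n_lags)])
y = prices[n_lags:]

rf = RandomForestRegressor(n_estimators=500,     # number of bootstrapped trees
                           max_features="sqrt",  # random feature subset per split
                           random_state=0)
rf.fit(X[:-30], y[:-30])
rf_preds = rf.predict(X[-30:])   # average over trees, as in Equation (10)
```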

2.6. SMLR

Stepwise multiple linear regression (SMLR) builds on the multiple linear regression (MLR) model, which is formulated as:

$$y = b_0 + b_1 x_1 + b_2 x_2 + \cdots + b_n x_n \quad (11)$$

where $y$ denotes the response; $x_1, x_2, \ldots, x_n$ are the explanatory variables (predictors); and $b_0, b_1, \ldots, b_n$ are the constants to be estimated, called regression coefficients. In SMLR, the final model is produced by selectively adding or deleting explanatory variables one by one and iteratively checking their statistical significance. The algorithm proceeds in a stepwise manner, typically using a combination of forward selection, backward elimination, and/or variable updating. Once the stepwise procedure is complete, the algorithm provides a final model containing the subset of predictors considered most relevant and statistically significant for predicting the response variable.
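The sketch below implements one plausible variant of this procedure, forward selection by AIC with statsmodels OLS; the study's SMLR may also include backward passes, so treat this as illustrative rather than the exact algorithm.

```python
# Forward-selection sketch (one variant of stepwise regression), by AIC.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 6))                     # candidate predictors (e.g., lags)
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(size=200)

selected, remaining = [], list(range(X.shape[1]))
best_aic, improved = np.inf, True
while improved and remaining:
    improved = False
    # try adding each remaining predictor; keep the one that lowers AIC most
    trials = []
    for j in remaining:
        design = sm.add_constant(X[:, selected + [j]])
        trials.append((sm.OLS(y, design).fit().aic, j))
    aic, j = min(trials)
    if aic < best_aic:
        best_aic, improved = aic, True
        selected.append(j)
        remaining.remove(j)
print("selected predictors:", selected)
```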

2.7. MARS

The MARS model uses piecewise linear or nonlinear splines as basis functions (BFs) to approximate the nonlinear relationship between the output and input variables. The BF and slope of each subspace can change when moving from one subspace to an adjacent one; the endpoints of each section are called knots. The MARS model can be written as:

$$\hat{f}(x) = b_0 + \sum_{i=1}^{m} w_i\, BF_i(x) \quad (12)$$

where $\hat{f}(x)$ is the expected response, $b_0$ is the bias, and $w_i$ is the unknown coefficient (weight) connecting the $m$ BFs to the response. The unknown coefficients are estimated by the least squares method. The spline BF can be expressed as:

$$BF_i(x) = \prod_{k=1}^{k_i} S_{ki} \left( x_{k,i} - C_{ki} \right) \quad (13)$$

where $k_i$ is the number of knots; $S_{ki}$ denotes the right or left associated linear step function, taking the value +1 or −1; $x_{k,i}$ denotes input variable $i$ at knot $k$; and $C_{ki}$ is the knot location.

An optimal MARS model is created by a two-stage forward and backward technique. In the forward stage, the data are overfitted by taking into account a large number of BFs. To overcome this, duplicate BFs are eliminated from Equation (12) in the backward stage using the generalized cross-validation (GCV) criterion, calculated as:

$$GCV = \frac{\frac{1}{N} \sum_{i=1}^{N} \left( y_i - \hat{f}(x_i) \right)^2}{\left( 1 - \frac{C(B)}{N} \right)^2} \quad (14)$$

where $N$ is the total number of data points; $y_i$ is the observed response; $\hat{f}(x_i)$ is the expected response; and $C(B)$ is a complexity penalty that increases with the number of BFs in the model, defined as $C(B) = (B + 1) + dB$, where $B$ is the number of BFs and $d$ denotes the penalty per BF.
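Equation (14) translates directly into code; the numpy transcription below uses $d = 3$, a common default penalty in the MARS literature, and illustrative inputs.

```python
# Direct numpy transcription of the GCV criterion in Equation (14).
import numpy as np

def gcv(y, y_hat, n_basis, d=3.0):
    """GCV score for a MARS fit with n_basis basis functions (d = 3 is a
    common default penalty, assumed here)."""
    N = len(y)
    c_b = (n_basis + 1) + d * n_basis          # complexity penalty C(B)
    mse = np.mean((y - y_hat) ** 2)
    return mse / (1.0 - c_b / N) ** 2

y = np.array([10.0, 12.0, 11.5, 13.0, 12.2, 11.8, 12.9, 13.4])
y_hat = np.array([10.2, 11.8, 11.6, 12.7, 12.0, 12.0, 12.8, 13.1])
print(gcv(y, y_hat, n_basis=1))
```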

2.8. PCR

PCA is a data dimension reduction technique in which a set of correlated variables is transformed into a set of uncorrelated PCs that retain as much information as possible from the original variables. The PCs are ordered by explained variance, with the first PC explaining the largest share of the variance in the data.

Let $x = (x_1, x_2, x_3, \ldots, x_p)'$ be the vector of variables under study and $\Sigma$ the variance–covariance matrix of the dataset. Let $\lambda_i$ ($i = 1, 2, \ldots, p$) be the eigenvalues of $\Sigma$ and $a_1, a_2, \ldots, a_p$ the corresponding eigenvectors. The PCs are then defined as $z_i = a_i' x$, subject to $a_i' a_i = 1$ and $\mathrm{Cov}(z_i, z_j) = 0$ for $i \ne j = 1, 2, \ldots, p$. The variance of each PC is the corresponding eigenvalue, i.e., $\mathrm{Var}(z_i) = \lambda_i$. Of these $p$ PCs, the first few that together explain around 85% to 90% of the total variability are selected. PCR is a statistical technique that combines PCA and linear regression; it handles multicollinearity in regression models and copes with high-dimensional data. In PCR, the selected PCs are used as regressors in a linear regression model, usually estimated by ordinary least squares (OLS). PCR thereby helps to improve the stability and interpretability of the regression model.
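A hedged scikit-learn sketch of PCR follows, keeping enough components for ~90% explained variance (the 85–90% rule of thumb above); the pipeline construction and data are illustrative assumptions.

```python
# PCR as PCA-plus-OLS: retain components explaining ~90% of the variance.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(6)
X = rng.normal(size=(200, 10))
X[:, 1] = X[:, 0] + 0.01 * rng.normal(size=200)   # deliberately collinear pair
y = X[:, 0] + rng.normal(size=200)

pcr = make_pipeline(PCA(n_components=0.90, svd_solver="full"),  # 90% variance
                    LinearRegression())                          # OLS on PC scores
pcr.fit(X, y)
print(pcr.named_steps["pca"].n_components_)       # components actually retained
```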

2.9. Wavelet

The wavelet transform (WT) uses particular high- and low-pass filters to decompose a signal into several subseries containing information at different resolutions [33]. The number of levels ($L$) of decomposition is fixed according to the number of observations ($N$) in the series ($L \le \log_2 N$). Detailed (Equation (15)) and smooth (Equation (16)) coefficients are generated at the first decomposition; in subsequent steps, the decomposition is applied to the approximate (smooth) series until all levels are completed. The detailed coefficients are given by:

$$D_j(t) = \sum_{k=-\infty}^{\infty} W_{\psi_{j,k}}\, \psi_{j,k}(t) \quad (15)$$

and the smooth coefficients by:

$$A_j(t) = \sum_{k=-\infty}^{\infty} V_{\phi_{j,k}}\, \phi_{j,k}(t) \quad (16)$$

where $\psi_{j,k}(t)$ is the wavelet function, associated with the scaling function $\phi_{j,k}(t)$. The wavelet (mother wavelet) coefficient is denoted $W_{\psi_{j,k}}$; the scaling (father wavelet) coefficient is $V_{\phi_{j,k}}$; $t$ represents time; and $j$ and $k$ are the scale and translation parameters, respectively.
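A three-level decomposition of this kind can be reproduced with PyWavelets (an assumption; the paper does not name its software). Note the naming mapping, also an assumption of this sketch: in pywt, 'db2' carries the 4-tap Daubechies filter (D4), 'coif1' the 6-tap coiflet (C6), and 'sym4' the 8-tap least-asymmetric filter (LA8); the best-localized BL14 filter is not shipped with PyWavelets and is omitted here.

```python
# Three-level DWT and perfect reconstruction with PyWavelets (sketch).
import numpy as np
import pywt

prices = 2000 + np.cumsum(np.random.default_rng(7).normal(0, 50, 512))

for name in ["haar", "db2", "coif1", "sym4"]:     # Haar, D4, C6, LA8
    A3, D3, D2, D1 = pywt.wavedec(prices, wavelet=name, level=3)
    recon = pywt.waverec([A3, D3, D2, D1], wavelet=name)  # inverse DWT
    assert np.allclose(recon, prices)             # perfect reconstruction
```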

2.10. Proposed Methods

The extent of statistical dependence in the onion price series was assessed using the autocorrelation function (ACF) and partial autocorrelation function (PACF), and lag series were prepared accordingly for all series. Each price series was predicted using a suitable ARIMA model selected by the Akaike information criterion (AIC). The residual series were obtained and found not to be white noise, so they were fitted with the GARCH model to obtain predictions. Using the prepared data frame of lagged series, the ANN, SVR, RF, SMLR, MARS, and PCR models were trained, and the predictions from these individual models were stored to determine the accuracy measures. Wavelet decomposition of the original price series was carried out using the Haar, D4, C6, LA8, and BL14 filters at three levels; three levels of decomposition have been suggested in many studies [23,34,35]. Thereafter, the wavelet-decomposed subseries were fitted one by one with the ARIMA, GARCH, ANN, SVR, and RF models, so five filter-specific predictions were obtained for each of these models. All of them were stored to calculate prediction performance. A detailed explanation of the work is given in the following steps.
Step 1: The original series was divided into training and validation sets. Two validation sets were prepared, containing the last 1-week and last 1-month data, respectively.
Step 2: The training set was used to train various models by preparing lag series wherever necessary. The obtained predictions were validated through unseen validation sets, and the performance under different scenarios was recorded.
Step 3: Decomposition of the actual series was performed through five wavelet filters, namely Haar, D4, C6, BL14, and LA8.
Step 4: The decomposed series were divided into training and validation sets.
Step 5: The lag series were prepared for fitting the training set into various models.
Step 6: Predictions from the wavelet-based models were obtained and compared with those from the validation sets.
Step 7: According to the results, the best wavelet filter in combination with the best statistical model was determined.
In this study, stochastic ARIMA and GARCH models and ML models such as ANN, MARS, PCR, RF, SMLR, and SVR were used along with wavelet-based decomposition (Figure 1).
These combination models are denoted WML (Equations (18)–(22)). If $y_t$ is the actual series ($t$ represents time), $W(\cdot)$ is the wavelet transform, and $D_1, D_2, D_3, A_3$ are the decomposed series at three levels of decomposition, then (Equation (17)):

$$W(y) = \{ D_1(t), D_2(t), D_3(t), A_3(t) \} \quad (17)$$
$WARMA$ denotes the wavelet-based ARMA model and is expressed as:

$$WARMA\left( D_1(t), D_2(t), D_3(t), A_3(t) \right): \quad \varphi(L)\, D_j(t)/A_J(t) = \theta(L)\, \mu_j(t)/\vartheta_J(t) \quad (18)$$

Here, $j = 1, \ldots, 3$; $J = 3$; and $\mu$ and $\vartheta$ are the error terms associated with the $D$'s and $A$, respectively. The '/' sign in Equations (18)–(22) denotes 'or'.
The conditional variance of the error term ($c_t$) can be modeled using the wavelet-based GARCH model, as in Equation (19):

$$c_t = \alpha_0 + \sum_{i=1}^{q} \alpha_i \left( \mu_j / \vartheta_J \right)_{t-i}^2 + \sum_{j=1}^{p} \beta_j c_{t-j} \quad (19)$$
The wavelet-based ANN ($WANN$) model is represented as:

$$WANN\left( D_1(t), D_2(t), D_3(t), A_3(t) \right): \quad \hat{D}_j(t)/\hat{A}_J(t)\big|_{ANN} = \theta\!\left( \sum_{i=1}^{n} w_i L_i + b \right) \quad (20)$$

where $\theta$ is the activation function, $L_i$ represents the lags of $D_j(t)/A_J(t)$, and $\hat{D}_j(t)/\hat{A}_J(t)$ denotes the respective predicted values.
The wavelet-based SVR ($WSVR$) model is given as:

$$WSVR\left( D_1(t), D_2(t), D_3(t), A_3(t) \right): \quad \hat{D}_j(t)/\hat{A}_J(t)\big|_{SVR} = v \cdot \phi(\mathbf{L}) + c \quad (21)$$

Here, $\mathbf{L}$ is the matrix of the $L_i$. To implement the wavelet-based RF ($WRF$) model, each $D_j(t)/A_J(t)$ is modeled with the RF algorithm using the $L_i$ as predictor variables, with $\left( \hat{D}_j(t)/\hat{A}_J(t) \right)_i$ the prediction from the $i$th tree:

$$WRF\left( D_1(t), D_2(t), D_3(t), A_3(t) \right): \quad \hat{D}_j(t)/\hat{A}_J(t)\big|_{RF} = \frac{1}{k} \sum_{i=1}^{k} \left( \hat{D}_j(t)/\hat{A}_J(t) \right)_i \quad (22)$$
The inverse wavelet transform is denoted $inv[\,\cdot\,]$, and $WML\{\cdot\}$ is the function (Equations (18)–(22)) that predicts all subseries individually, so the final prediction $P$ of the WML model is:

$$P = inv\left[ WML\left\{ D_1, D_2, D_3, A_3 \right\} \right] \quad (23)$$
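To make the scheme concrete, the following Python sketch walks through a WRF_Haar-style pipeline under stated assumptions: PyWavelets provides the decomposition, each coefficient band is reconstructed separately so that the bands sum back to the original series (by linearity, summing per-band predictions then plays the role of $inv[\cdot]$ in Equation (23)), and one random forest is fitted per band. All names, the Haar choice, and the synthetic data are illustrative.

```python
# End-to-end sketch of a wavelet + RF combination model (assumptions as stated).
import numpy as np
import pywt
from sklearn.ensemble import RandomForestRegressor

def lag_matrix(series, n_lags):
    X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
    return X, series[n_lags:]

prices = 2000 + np.cumsum(np.random.default_rng(8).normal(0, 50, 512))
coeffs = pywt.wavedec(prices, "haar", level=3)        # [A3, D3, D2, D1]

bands = []
for i in range(len(coeffs)):
    # zero out all but one band, then reconstruct: bands sum to the series
    masked = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
    bands.append(pywt.waverec(masked, "haar"))        # A3(t), D3(t), D2(t), D1(t)

n_lags, horizon = 7, 30
final_pred = np.zeros(horizon)
for band in bands:
    X, y = lag_matrix(band[:-horizon], n_lags)        # train on all but last month
    rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
    Xv, _ = lag_matrix(band[-(horizon + n_lags):], n_lags)
    final_pred += rf.predict(Xv)                      # sum of band predictions = P
```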
The ranks of the individual models were determined based on RMSE, MAE, and MAPE values for each of the markets. Then, the wavelet filter that performed the best for an individual model was determined using similar metrics. For a particular wavelet filter, the ranks of the wavelet-based combination models were obtained based on the performance measures. Based on the results, the best wavelet-ML combination model was declared, and poorly performing models were pointed out for particular markets.

3. Data Description

The daily modal prices (Rs./q) of onions for the Bengaluru, Delhi, and Lasalgaon markets were obtained from the Agricultural Marketing Information System (AGMARKNET) website (https://agmarknet.gov.in/, accessed on 28 January 2023) for the time interval of 1 May 2019 to 31 December 2022. The descriptive statistics of these price series are given in Table 1. From this table, it can be seen that the mean prices were in the order of Bengaluru > Lasalgaon > Delhi. The same sequence was seen for the median and maximum prices, standard deviation (SD), and coefficient of variation (CV) percentage. The minimum price for the Bengaluru and Lasalgaon markets was the same, and it was less for the Delhi market. All price series were positively skewed and leptokurtic. For skewness and kurtosis, the sequence was Bengaluru > Delhi > Lasalgaon. The kurtosis of the price series of the Bengaluru market was significantly higher than that of the other two markets. The time plots of the price series are shown in Figure 2. Here, it can be seen that all of them followed almost similar patterns. The highest spike in price was noticeable in December 2019. Again, during the last quarter of 2020, a price spike with relatively less intensity was noticeable.
The other properties of the dataset, such as normality, stationarity, and linearity, were also tested. The normality of the price series was tested using the Shapiro–Wilk test [36]. The null hypothesis of this test was that the series followed normality, and it was seen (Table 2) that none of the price series displayed normality at a 1% level of significance.
The stationarity of the price series was tested using the Kwiatkowski–Phillips–Schmidt–Shin (KPSS) test [37] and the Phillips–Perron (PP) test [38]. The KPSS test had a null hypothesis of the absence of a unit root (stationary data), whereas the PP test assumed that the data had a unit root (non-stationary data). The values of the test statistics for these two tests are given in Table 3, and it was seen that all price series were stationary.
The linearity of the price series was tested using the Broock–Dechert–Scheinkman (BDS) test [39]. Its null hypothesis was that the data in a time series were independently and identically distributed (i.i.d.). The values of the test statistics for two and three embedding dimensions at different values of epsilon are given in Table 4, and all were significant at a 1% level of significance. Hence, it could be concluded that the price series were nonlinear.
The autocorrelation function (ACF) and partial autocorrelation function (PACF) of the onion price series are illustrated in Figure 3. Significant ACF and PACF values indicate statistical dependencies among the lagged realizations of a time series. The ACF values of the price series of all the markets remained significant even at large lags, decaying hyperbolically rather than exponentially; such hyperbolic decay indicates long-term persistence (long memory). Significant PACF values at several lags were also noticeable.
In this research article, wavelet decomposition of the price series helped to address non-normality, nonlinearity, and the presence of long-term persistence.

4. Performance Measures

Three accuracy measures, namely RMSE, MAE, and MAPE, were used to compare the prediction performance of the models. RMSE is the square root of the average of the squared residuals, where a residual is the difference between the actual and predicted values. MAE is the average of the absolute residuals. These two measures depend on the unit and scale of the observations: they can rank two models against each other, but they do not by themselves indicate whether a model performs well in absolute terms. MAPE addresses this: the ratio of each absolute residual to the corresponding actual observation is computed, and the average of these ratios, multiplied by 100, gives the MAPE as a percentage. MAPE is therefore unit-free, and a MAPE value below 10 is generally considered good [23]. All three measures are lower for a better predictor.
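The three measures translate directly into code; below is a minimal numpy transcription (the example arrays are placeholders).

```python
# Direct implementations of the three error functions used in this study.
import numpy as np

def rmse(actual, predicted):
    return np.sqrt(np.mean((actual - predicted) ** 2))

def mae(actual, predicted):
    return np.mean(np.abs(actual - predicted))

def mape(actual, predicted):
    # mean of |residual| / |actual|, expressed as a percentage
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

actual = np.array([1500.0, 1550.0, 1490.0])
predicted = np.array([1480.0, 1575.0, 1500.0])
print(rmse(actual, predicted), mae(actual, predicted), mape(actual, predicted))
```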

5. Results and Discussion

In this study, the onion price series of three markets in India were divided into training and validation sets. The validation set contained 30 observations for each market. Here, the short- (1-week) and long- (1-month) term prediction performances of the models were studied. A total of 33 different models, including wavelet-based combination models, were used to model the series. Among the models used for this purpose, two were stochastic models and six were ML models.
The training process of an ANN involved feeding the training data through the network, adjusting the weights based on the error, and updating the model’s hyperparameters (number of hidden layers, number of neurons in each hidden layer, learning rate, activation function, regularization techniques, and optimization algorithm). The validation set was used to assess the model’s performance and choose the best hyperparameters. There are several methods for tuning the hyperparameters of ANN, including manual tuning, grid search, random search, Bayesian optimization, and evolutionary algorithms. Significant changes were not noticed during the manual tuning phase with different hyperparameter setups. To keep the model simple but efficient, a single hidden layer with four hidden units, a sigmoid activation function, resilient back propagation with a weighted backtracking optimization algorithm, L2 regularization, and a learning rate of 0.05 were set. For the RF-based model, 100, 250, 500, and 1000 trees (bootstrapped subsets) were tried. In the present investigation, 500 trees were found to be optimal for the three markets with the maximum explained variation in the data. To tune the SVR model, the epsilon values were varied from 0.001 to 0.1. The cost factor varied in the range of 0.01 to 1. Gamma values were considered between 0.1 and 0.5. The optimum combination of hyperparameters was selected based on the minimum MSE value. The optimum values of hyperparameters are represented in Tables S1–S4 in the supplementary material.
In the case of the Bengaluru market, the order of the best ARIMA model was (4, 1, 4) with an AIC value of −6516.45. The most efficient SVR model had a radial kernel, an epsilon value of 0.1, a cost factor of 1, a gamma of 0.33, and 269 support vectors. The RF model with 500 trees explained 95.71% of the variation in the data. In the case of the Delhi market, the order of the best ARIMA model was (1, 1, 0) with an AIC value of −6331.73; the most efficient SVR model had 209 support vectors, with the other parameters the same; and the best RF model explained 97.39% of the variation in the Delhi price. In the case of the Lasalgaon market, the order of the best ARIMA model was (0, 1, 1) with an AIC value of −5140.65; the best SVR model used 325 support vectors, with the other parameters the same as those for the Bengaluru market; and the best RF model explained 94.97% of the variation in the Lasalgaon price.
Five wavelet filters were used to decompose the three datasets. Each decomposed set was modeled with the ARMA, GARCH, ANN, SVR, and RF models to obtain final predictions of the original series. These are referred to as the wavelet-based combination models; in total, 25 such models were formulated.
The validation results of the performance metrics for the 1-month and 1-week data for the above-mentioned models for the three markets are provided in Table 5 and Table 6, respectively. The MAPE values in the tables are given as percentages. To denote the different wavelet-based combination models, the term 'Wmodel_filter' is used, where 'W' stands for 'wavelet', 'model' is one of ARMA, GARCH, ANN, SVR, or RF, and 'filter' is one of Haar, D4, C6, BL14, or LA8.
To compare the prediction performance of the different methods, Figure 4 depicts the actual versus predicted values obtained using the best prediction model.
There were eight individual models, including the stochastic and ML models. Their performance was compared for predicting the onion prices of the mentioned markets, and the ranks of these models are presented in Table 7. The PCR was the best-performing model for the Bengaluru and Delhi markets based on the RMSE values. SVR was the best model for the Lasalgaon market. The ARIMA model was the worst performer for the Bengaluru and Delhi markets, and GARCH did not perform well for the Lasalgaon market. Based on the MAE values, MARS was the best-performing model for the first two markets, and ANN was the best for the last market. ARIMA was the worst performer for the first market and GARCH was the worst for the remaining two markets. Nevertheless, ANN was the best-performing model for the three markets based on the MAPE values. ARIMA and GARCH were the worst-performing models for the first and last two markets, respectively.
A filter that performs particularly well with an individual model can be used efficiently for modeling time series with that model in the future. A thorough representation of the ranks of the models under each filter is provided in Table 8 for predicting the onion prices of the selected markets. The three performance measures indicated that, with the Haar filter, the RF model was the best performer for all markets. With the D4 filter, RF performed best for the Bengaluru market, while GARCH was the best model for the Delhi and Lasalgaon markets. The RF and GARCH models performed comparably when used with the C6 filter for the prediction of onion prices. RF, ARMA, and SVR performed best with the BL14 filter for the Bengaluru, Delhi, and Lasalgaon markets, respectively. GARCH and RF were more or less equally efficient with the LA8 filter for the prediction of onion prices in the three markets.
The preceding comparison identified the best model for each wavelet filter; here, the converse question is addressed: the best filter to use with each individual model to predict onion prices. From Table 9, it is noticeable that the Haar and D4 filters could be used almost interchangeably with the stochastic and ML models mentioned above for the best prediction, whereas the BL14 and C6 filters should not be used for decomposing datasets for these models.
With the filter-wise and model-wise comparisons complete, the overall best and worst models for the prediction of onion prices were identified. From Table 10, the Haar filter combined with the RF and SVR models was the best performer for all markets. WGARCH_BL14 and WANN_BL14 were the worst-performing models for the first two markets, with GARCH the poorest performer; WANN_C6 and WARMA_C6 were the least desirable models for the Lasalgaon market, where the ANN-based combination was the worst.

6. Conclusions

In the present study, the wholesale price of onions for three major markets in India was modeled using 33 different models, including wavelet-based combination models. The prediction performance was compared using three different criteria, namely RMSE, MAE, and MAPE. Among the stand-alone models, PCR, ANN, and MARS performed well. An attempt was made to determine the optimum combination of wavelet filter and stochastic or ML model. It was observed that RF performed efficiently with the Haar filter. The D4 filter performed well with GARCH and/or RF. The BL14 filter may be coupled with the ARMA, RF, and (or) SVR model(s) for better prediction. The LA8 filter gave better performance with the GARCH and RF models. The best wavelet-based combination models were WRF_Haar and WSVR_Haar for the present data under consideration. Overall, the Haar and D4 filters coupled with the ML model outperformed other combination models. This study can be helpful for future endeavors by reducing the effort of selecting a specific wavelet filter in combination with a specific model for best performance.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/math11132896/s1, Table S1: Optimum order of ARIMA model for different markets; Table S2: Optimum value of hyperparameters of ANN models for different markets; Table S3: Optimum value of hyperparameters of RF models for different markets; Table S4: Optimum value of hyperparameters of SVR models for different markets.

Author Contributions

Conceptualization, S.G., R.K.P. and M.Y.; Methodology, S.G. and R.K.P.; Validation, R.K.P. and D.R.; Formal analysis, S.G.; Resources, M.Y., W.E., Y.T. and C.C.; Data curation, D.R.; Writing—original draft, S.G.; Writing—review & editing, R.K.P., D.R., M.Y., W.E., Y.T. and C.C.; Funding acquisition, W.E. and Y.T. All authors have read and agreed to the published version of the manuscript.

Funding

The study was funded by Researchers Supporting Project number (RSP2023R488), King Saud University, Riyadh, Saudi Arabia.

Data Availability Statement

All the datasets used in this paper are available from the corresponding author upon request.

Acknowledgments

The authors are grateful to ICAR-Indian Agricultural Statistics Research Institute, New Delhi, for providing the facilities to carry out the study. The authors are thankful to the anonymous reviewers for their fruitful comments that helped to improve the quality of the manuscript.

Conflicts of Interest

The authors declare that they have no conflict of interest.

References

1. Grossmann, A.; Morlet, J. Decomposition of Hardy Functions into Square Integrable Wavelets of Constant Shape. SIAM J. Math. Anal. 1984, 15, 723–736.
2. Heil, C.E.; Walnut, D.F. Continuous and Discrete Wavelet Transforms. SIAM Rev. 1989, 31, 628–666.
3. Fugal, D.L. Conceptual Wavelets in Digital Signal Processing: An In-Depth, Practical Approach for the Non-Mathematician; Space & Signals Technical Pub.: San Diego, CA, USA, 2009.
4. Paul, R.K.; Ghosh, H.; Prajneshu. Development of out-of-sample forecasts formulae for ARIMAX-GARCH model and their application. J. Indian Soc. Agric. Stat. 2014, 68, 85–92.
5. Ramyar, S.; Kianfar, F. Forecasting Crude Oil Prices: A Comparison between Artificial Neural Networks and Vector Autoregressive Models. Comput. Econ. 2019, 53, 743–761.
6. Friedman, J.H. Multivariate adaptive regression splines. Ann. Stat. 1991, 19, 1–67.
7. Agarwal, A.; Shah, D.; Shen, D.; Song, D. On robustness of principal component regression. Adv. Neural Inf. Process. Syst. 2019, 32.
8. Jolliffe, I.T. A Note on the Use of Principal Components in Regression. Appl. Stat. 1982, 31, 300.
9. Abdi, H.; Williams, L.J. Principal component analysis. Wiley Interdiscip. Rev. Comput. Stat. 2010, 2, 433–459.
10. Bro, R.; Smilde, A.K. Principal component analysis. Anal. Methods 2014, 6, 2812–2831.
11. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297.
12. Ebrahimi-Khusfi, Z.; Taghizadeh-Mehrjardi, R.; Kazemi, M.; Nafarzadegan, A.R. Predicting the ground-level pollutants concentrations and identifying the influencing factors using machine learning, wavelet transformation, and remote sensing techniques. Atmos. Pollut. Res. 2021, 12, 101064.
13. Schapire, R.E.; Freund, Y.; Bartlett, P.; Lee, W.S. Boosting the margin: A new explanation for the effectiveness of voting methods. Ann. Stat. 1998, 26, 1651–1686.
14. Zhang, Z.; Gao, G.; Tian, Y.; Yue, J. Two-phase multi-kernel LP-SVR for feature sparsification and forecasting. Neurocomputing 2016, 214, 594–606.
15. Breiman, L. Bagging predictors. Mach. Learn. 1996, 24, 123–140.
16. Liyew, C.M.; Melese, H.A. Machine learning techniques to predict daily rainfall amount. J. Big Data 2021, 8, 153.
17. Palanichamy, N.; Haw, S.-C.; Srikrishna, S.; Murugan, R.; Govindasamy, K. Machine learning methods to predict particulate matter PM2.5. F1000Research 2022, 11, 406.
18. Wang, G.; Ma, J. A hybrid ensemble approach for enterprise credit risk assessment based on Support Vector Machine. Expert Syst. Appl. 2012, 39, 5325–5331.
19. Merdun, H.; Cinar, O. Artificial neural network and regression techniques in modelling surface water quality. Environ. Prot. Eng. 2010, 36, 95–109.
20. Li, B.; Ding, J.; Yin, Z.; Li, K.; Zhao, X.; Zhang, L. Optimized neural network combined model based on the induced ordered weighted averaging operator for vegetable price forecasting. Expert Syst. Appl. 2021, 168, 114232.
21. Zhou, P.; Lu, C.; Lin, Z. Tensor principal component analysis. Tensors Data Process. Theory Methods Appl. 2021, 2, 153–213.
22. Zhao, H. Futures price prediction of agricultural products based on machine learning. Neural Comput. Appl. 2021, 33, 837–850.
23. Paul, R.K.; Garai, S. Performance comparison of wavelets-based machine learning technique for forecasting agricultural commodity prices. Soft Comput. 2021, 25, 12857–12873.
24. Iniyan, S.; Akhil Varma, V.; Teja Naidu, C. Crop yield prediction using machine learning techniques. Adv. Eng. Softw. 2023, 175, 103326.
25. Das, T.; Paul, R.K.; Bhar, L.M.; Paul, A.K. Application of Machine Learning Techniques with GARCH Model for Forecasting Volatility in Agricultural Commodity Prices. J. Indian Soc. Agric. Stat. 2020, 74, 187–194.
26. Paul, R.K.; Yeasin, M.; Kumar, P.; Kumar, P.; Balasubramanian, M.; Roy, H.S.; Paul, A.K.; Gupta, A. Machine learning techniques for forecasting agricultural prices: A case of brinjal in Odisha, India. PLoS ONE 2022, 17, e0270553.
27. Paul, R.K.; Simmi, R.; Raka, S. Effectiveness of price forecasting techniques for capturing asymmetric volatility for onion in selected markets of Delhi. Indian J. Agric. Sci. 2016, 86, 303–309.
28. Paul, R.K.; Yeasin, M.; Kumar, P.; Paul, A.K.; Roy, H.S. Deep Learning Technique for Forecasting Price of Cauliflower. Curr. Sci. 2023, 124, 1065–1073.
29. Rakshit, D.; Paul, R.K.; Panwar, S. Asymmetric Price Volatility of Onion in India. Indian J. Agric. Econ. 2021, 76, 245–260.
30. Rakshit, D.; Paul, R.K.; Yeasin, M.; Emam, W.; Tashkandy, Y.; Chesneau, C. Modeling Asymmetric Volatility: A News Impact Curve Approach. Mathematics 2023, 11, 2793.
31. Box, G.E.P.; Jenkins, G.M. Time Series Analysis: Forecasting and Control; Holden-Day: San Francisco, CA, USA, 1970.
32. Bollerslev, T. Generalized autoregressive conditional heteroskedasticity. J. Econom. 1986, 31, 307–327.
33. Percival, D.B.; Walden, A.T. Wavelet Methods for Time Series Analysis; Cambridge University Press: Cambridge, UK, 2000; Volume 4.
34. Anjoy, P.; Paul, R.K. Comparative performance of wavelet-based neural network approaches. Neural Comput. Appl. 2019, 31, 3443–3453.
35. Paul, R.K.; Garai, S. Wavelets Based Artificial Neural Network Technique for Forecasting Agricultural Prices. J. Indian Soc. Probab. Stat. 2022, 23, 47–61.
36. Shapiro, S.S.; Wilk, M.B. An analysis of variance test for normality (complete samples). Biometrika 1965, 52, 591–611.
37. Kwiatkowski, D.; Phillips, P.C.B.; Schmidt, P.; Shin, Y. Testing the null hypothesis of stationarity against the alternative of a unit root: How sure are we that economic time series have a unit root? J. Econom. 1992, 54, 159–178.
38. Phillips, P.C.B.; Perron, P. Testing for a unit root in time series regression. Biometrika 1988, 75, 335–346.
39. Broock, W.A.; Scheinkman, J.A.; Dechert, W.D.; LeBaron, B. A test for independence based on the correlation dimension. Econom. Rev. 1996, 15, 197–235.
Figure 1. Individual and wavelet-based combination models.
Figure 2. Time plots of onion price series for Bengaluru, Delhi, and Lasalgaon markets.
Figure 3. ACF and PACF of onion price series. Blue lines indicate significance at 5% level of significance.
Figure 4. Plots of actual vs. best predicted values for Bengaluru, Delhi, and Lasalgaon markets.
Table 1. Descriptive statistics of onion price series for Bengaluru, Delhi, and Lasalgaon markets.

| Statistics | Bengaluru | Delhi | Lasalgaon |
|---|---|---|---|
| Mean (Rs./q) | 2100.18 | 1848.27 | 1976.76 |
| Median (Rs./q) | 1775.00 | 1604.00 | 1660.00 |
| Minimum (Rs./q) | 600.00 | 567.00 | 600.00 |
| Maximum (Rs./q) | 12,500.00 | 7650.00 | 8625.00 |
| SD (Rs./q) | 1472.46 | 1021.45 | 1226.93 |
| CV (%) | 70.11 | 55.27 | 62.06 |
| Skewness | 3.03 | 1.89 | 1.83 |
| Kurtosis | 12.93 | 4.90 | 4.30 |
Table 2. Test for normality (Shapiro–Wilk test).

| Market | Bengaluru | Delhi | Lasalgaon |
|---|---|---|---|
| Test statistic | 0.713 *** | 0.831 *** | 0.832 *** |

*** p < 0.01.
Table 3. Test for stationarity.

| Market | Bengaluru | Delhi | Lasalgaon |
|---|---|---|---|
| KPSS | 0.99 | 1.52 | 1.71 |
| PP | −22.42 ** | −17.46 * | −23.35 ** |

** p < 0.05, * p < 0.10.
Table 4. Test for linearity (BDS test).

| Market | Statistic | Embedding Dimension 2 | Embedding Dimension 3 |
|---|---|---|---|
| Bengaluru | eps[1] | 108.51 *** | 146.74 *** |
| | eps[2] | 62.89 *** | 66.00 *** |
| | eps[3] | 52.55 *** | 51.31 *** |
| | eps[4] | 46.08 *** | 43.52 *** |
| Delhi | eps[1] | 160.45 *** | 250.22 *** |
| | eps[2] | 80.21 *** | 90.48 *** |
| | eps[3] | 65.30 *** | 65.99 *** |
| | eps[4] | 61.20 *** | 58.83 *** |
| Lasalgaon | eps[1] | 131.801 *** | 201.19 *** |
| | eps[2] | 79.49 *** | 90.07 *** |
| | eps[3] | 62.93 *** | 64.00 *** |
| | eps[4] | 56.61 *** | 54.61 *** |

*** p < 0.01.
Table 5. Prediction performance of the selected models in the validation set (1-month). Columns 3–5: Bengaluru; columns 6–8: Delhi; columns 9–11: Lasalgaon; MAPE in %.

| SL No. | Model | RMSE | MAE | MAPE | RMSE | MAE | MAPE | RMSE | MAE | MAPE |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | ARIMA | 582.11 | 529.20 | 22.94 | 246.56 | 159.50 | 13.29 | 195.79 | 149.02 | 14.13 |
| 2 | GARCH | 415.22 | 321.55 | 15.68 | 231.17 | 171.72 | 13.59 | 224.71 | 175.54 | 17.37 |
| 3 | ANN | 184.09 | 87.36 | 4.91 | 83.59 | 35.34 | 2.88 | 112.44 | 69.72 | 5.92 |
| 4 | SVR | 185.51 | 100.77 | 5.53 | 88.54 | 44.99 | 3.69 | 111.92 | 71.95 | 6.54 |
| 5 | RF | 205.55 | 123.84 | 6.64 | 100.74 | 64.57 | 5.09 | 140.00 | 96.31 | 8.65 |
| 6 | SMLR | 184.45 | 89.78 | 4.86 | 84.25 | 37.74 | 3.01 | 114.46 | 75.59 | 8.32 |
| 7 | MARS | 183.33 | 86.21 | 4.78 | 83.79 | 34.37 | 2.80 | 112.30 | 70.88 | 6.54 |
| 8 | PCR | 182.37 | 87.08 | 4.86 | 82.92 | 36.87 | 3.01 | 113.91 | 74.61 | 7.16 |
| 9 | WARMA_Haar | 549.05 | 491.95 | 21.67 | 229.89 | 178.67 | 13.94 | 185.77 | 166.32 | 12.65 |
| 10 | WARMA_D4 | 544.48 | 491.15 | 21.69 | 230.20 | 176.17 | 13.82 | 207.29 | 189.39 | 16.35 |
| 11 | WARMA_C6 | 793.29 | 761.62 | 30.05 | 431.30 | 398.51 | 24.09 | 1737.55 | 1726.79 | 59.68 |
| 12 | WARMA_BL14 | 1677.35 | 1635.90 | 47.38 | 250.72 | 222.12 | 15.98 | 1241.32 | 1135.63 | 47.35 |
| 13 | WARMA_LA8 | 711.84 | 675.04 | 27.57 | 394.04 | 368.59 | 22.90 | 821.65 | 681.20 | 34.24 |
| 14 | WGARCH_Haar | 496.67 | 427.44 | 19.33 | 230.78 | 161.93 | 13.04 | 189.33 | 147.24 | 14.36 |
| 15 | WGARCH_D4 | 511.94 | 451.19 | 20.25 | 229.51 | 168.90 | 13.38 | 155.11 | 127.22 | 11.21 |
| 16 | WGARCH_C6 | 601.45 | 543.12 | 23.32 | 281.80 | 255.43 | 17.77 | 1539.23 | 1530.56 | 57.38 |
| 17 | WGARCH_BL14 | 1905.66 | 1821.63 | 49.51 | 1285.94 | 1137.12 | 43.97 | 1205.03 | 1180.06 | 50.21 |
| 18 | WGARCH_LA8 | 460.79 | 396.35 | 18.26 | 449.56 | 369.14 | 40.32 | 537.30 | 499.58 | 70.47 |
| 19 | WANN_Haar | 645.79 | 614.49 | 25.82 | 268.41 | 168.11 | 14.48 | 182.80 | 146.43 | 13.46 |
| 20 | WANN_D4 | 558.61 | 513.03 | 22.50 | 323.79 | 180.78 | 17.37 | 229.54 | 198.24 | 18.32 |
| 21 | WANN_C6 | 727.87 | 706.64 | 28.63 | 388.30 | 307.63 | 30.08 | 1967.42 | 1934.85 | 62.21 |
| 22 | WANN_BL14 | 1872.26 | 1819.97 | 49.84 | 955.73 | 904.56 | 40.45 | 1160.95 | 1094.15 | 47.45 |
| 23 | WANN_LA8 | 1527.81 | 1454.86 | 43.84 | 395.63 | 295.33 | 27.75 | 930.40 | 896.01 | 43.12 |
| 24 | WSVR_Haar | 61.26 | 52.21 | 3.05 | 51.12 | 45.93 | 3.67 | 62.05 | 54.54 | 5.21 |
| 25 | WSVR_D4 | 456.73 | 342.24 | 21.26 | 289.45 | 186.13 | 17.26 | 272.83 | 227.44 | 21.14 |
| 26 | WSVR_C6 | 636.65 | 574.44 | 28.74 | 324.37 | 272.78 | 21.21 | 582.55 | 488.25 | 33.21 |
| 27 | WSVR_BL14 | 1062.50 | 857.66 | 29.87 | 561.77 | 516.55 | 31.37 | 1031.12 | 901.30 | 40.11 |
| 28 | WSVR_LA8 | 614.71 | 519.19 | 27.07 | 318.05 | 265.95 | 21.03 | 665.25 | 518.02 | 33.24 |
| 29 | WRF_Haar | 55.23 | 30.78 | 1.82 | 33.74 | 19.80 | 1.42 | 32.08 | 24.02 | 2.17 |
| 30 | WRF_D4 | 422.49 | 318.33 | 19.48 | 257.64 | 171.78 | 14.84 | 238.21 | 197.41 | 17.25 |
| 31 | WRF_C6 | 599.20 | 541.42 | 27.38 | 300.28 | 260.01 | 19.76 | 558.75 | 462.01 | 30.19 |
| 32 | WRF_BL14 | 1057.85 | 835.42 | 29.05 | 535.39 | 489.75 | 30.26 | 1077.32 | 918.14 | 40.27 |
| 33 | WRF_LA8 | 576.03 | 483.96 | 25.17 | 293.99 | 250.71 | 19.36 | 640.12 | 494.75 | 31.36 |
Table 6. Prediction performance of the selected models in the validation set (1-week). Columns 3–5: Bengaluru; columns 6–8: Delhi; columns 9–11: Lasalgaon; MAPE in %.

| SL No. | Model | RMSE | MAE | MAPE | RMSE | MAE | MAPE | RMSE | MAE | MAPE |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | ARIMA | 752.59 | 698.52 | 30.05 | 18.90 | 14.29 | 1.19 | 105.16 | 83.17 | 7.61 |
| 2 | GARCH | 632.65 | 580.21 | 26.43 | 51.12 | 49.60 | 3.92 | 109.58 | 97.65 | 9.71 |
| 3 | ANN | 346.57 | 227.37 | 12.19 | 10.05 | 5.87 | 0.49 | 120.42 | 83.97 | 7.99 |
| 4 | SVR | 348.04 | 245.42 | 12.85 | 18.39 | 14.79 | 1.22 | 115.07 | 92.1 | 8.65 |
| 5 | RF | 373.17 | 250.90 | 12.96 | 21.23 | 17.75 | 1.47 | 145.22 | 115.83 | 10.97 |
| 6 | SMLR | 344.05 | 220.34 | 11.65 | 9.50 | 8.76 | 0.72 | 121.99 | 89.68 | 8.38 |
| 7 | MARS | 345.81 | 222.05 | 11.67 | 8.78 | 5.60 | 0.46 | 121.58 | 85.09 | 8.07 |
| 8 | PCR | 344.05 | 220.34 | 11.65 | 9.50 | 8.76 | 0.72 | 121.99 | 89.68 | 8.38 |
| 9 | WARMA_Haar | 700.51 | 646.52 | 28.51 | 68.10 | 66.96 | 5.23 | 214.1 | 173.65 | 13.98 |
| 10 | WARMA_D4 | 704.77 | 662.21 | 29.18 | 61.96 | 60.71 | 4.76 | 263.21 | 246.65 | 19.11 |
| 11 | WARMA_C6 | 858.77 | 816.53 | 33.48 | 440.40 | 440.23 | 26.61 | 1643.84 | 1627.87 | 60.61 |
| 12 | WARMA_BL14 | 1351.51 | 1241.29 | 41.80 | 176.04 | 175.60 | 12.63 | 1337.81 | 1135.71 | 48.29 |
| 13 | WARMA_LA8 | 788.01 | 740.20 | 31.32 | 395.55 | 395.35 | 24.56 | 1318.03 | 1309.82 | 55.48 |
| 14 | WGARCH_Haar | 669.88 | 624.04 | 27.93 | 42.05 | 38.27 | 3.05 | 92.21 | 83.48 | 7.88 |
| 15 | WGARCH_D4 | 668.80 | 623.94 | 27.94 | 48.98 | 47.08 | 3.73 | 123.39 | 97.16 | 8.67 |
| 16 | WGARCH_C6 | 742.95 | 683.89 | 29.55 | 206.95 | 206.63 | 14.54 | 1646.12 | 1643.01 | 61.08 |
| 17 | WGARCH_BL14 | 1612.27 | 1548.80 | 48.13 | 826.27 | 809.34 | 39.58 | 1268.92 | 1260.07 | 54.51 |
| 18 | WGARCH_LA8 | 545.38 | 523.13 | 25.00 | 419.27 | 418.27 | 52.72 | 585.25 | 564.65 | 124.26 |
| 19 | WANN_Haar | 726.08 | 669.66 | 29.17 | 15.85 | 13.12 | 1.07 | 136.29 | 98.09 | 8.6 |
| 20 | WANN_D4 | 706.20 | 664.05 | 29.24 | 20.22 | 13.57 | 1.10 | 248.58 | 231.96 | 18.18 |
| 21 | WANN_C6 | 801.57 | 763.52 | 32.09 | 239.51 | 203.42 | 13.71 | 1869.73 | 1787.79 | 61.61 |
| 22 | WANN_BL14 | 1252.41 | 1160.29 | 40.60 | 533.75 | 513.08 | 29.14 | 1351.66 | 1190.36 | 48.8 |
| 23 | WANN_LA8 | 857.53 | 763.46 | 31.20 | 247.35 | 215.08 | 14.50 | 1146.11 | 1140.41 | 52.1 |
| 24 | WSVR_Haar | 95.85 | 91.39 | 5.57 | 23.87 | 23.45 | 1.97 | 21.25 | 18.1 | 1.77 |
| 25 | WSVR_D4 | 648.29 | 582.50 | 26.12 | 54.93 | 49.86 | 4.29 | 226.37 | 201.48 | 15.83 |
| 26 | WSVR_C6 | 842.31 | 788.82 | 32.48 | 349.75 | 339.96 | 21.67 | 976.02 | 945.24 | 46.62 |
| 27 | WSVR_BL14 | 1345.17 | 1234.22 | 41.66 | 587.22 | 555.70 | 30.50 | 1076.81 | 1047.69 | 49.44 |
| 28 | WSVR_LA8 | 628.13 | 557.55 | 25.38 | 257.97 | 211.47 | 14.04 | 1212.17 | 1159.25 | 51.07 |
| 29 | WRF_Haar | 107.08 | 74.45 | 4.49 | 8.66 | 6.53 | 0.54 | 28.97 | 22.13 | 2.23 |
| 30 | WRF_D4 | 611.81 | 552.65 | 25.24 | 35.69 | 28.53 | 2.30 | 224.51 | 206.72 | 16.29 |
| 31 | WRF_C6 | 790.42 | 735.57 | 30.99 | 317.42 | 310.44 | 20.23 | 939.53 | 908.98 | 45.64 |
| 32 | WRF_BL14 | 1346.16 | 1234.50 | 41.68 | 554.43 | 526.28 | 29.45 | 1200.38 | 1095.37 | 48.18 |
| 33 | WRF_LA8 | 586.75 | 517.88 | 24.06 | 227.77 | 184.78 | 12.54 | 1170.48 | 1111.96 | 49.71 |
Table 7. Comparison among different individual models.

| Market | RMSE | MAE | MAPE |
|---|---|---|---|
| Bengaluru | PCR < MARS < ANN = SMLR < SVR < RF < GARCH < ARIMA | MARS < PCR = ANN < SMLR < SVR < RF < GARCH < ARIMA | ANN = MARS = PCR < SVR = SMLR < RF < GARCH < ARIMA |
| Delhi | PCR < ANN = MARS < SMLR < SVR < RF < GARCH < ARIMA | MARS < ANN < PCR < SMLR < SVR < RF < ARIMA < GARCH | ANN = MARS < PCR = SVR < SMLR < RF < ARIMA < GARCH |
| Lasalgaon | SVR < MARS = ANN < PCR < SMLR < RF < ARIMA < GARCH | ANN < MARS < SVR < PCR < SMLR < RF < ARIMA < GARCH | ANN = MARS < SVR = PCR < SMLR < RF < ARIMA < GARCH |
Table 8. Wavelet filter-wise comparison of wavelet-based combination models.

| Filter | Market | RMSE | MAE | MAPE |
|---|---|---|---|---|
| Haar | Bengaluru | RF < SVR < GARCH < ARMA < ANN | RF < SVR < GARCH < ARMA < ANN | RF < SVR < GARCH < ARMA < ANN |
| | Delhi | RF < SVR < ARMA < GARCH < ANN | RF < SVR < GARCH < ARMA < ANN | RF < SVR < GARCH < ARMA = ANN |
| | Lasalgaon | RF < SVR < ANN < ARMA < GARCH | RF < SVR < ANN < GARCH < ARMA | RF < SVR < ARMA = ANN < GARCH |
| D4 | Bengaluru | RF < SVR < GARCH < ARMA < ANN | RF < SVR < GARCH < ARMA < ANN | RF < GARCH < SVR < ARMA = ANN |
| | Delhi | GARCH < ARMA < RF < SVR < ANN | GARCH < RF < ARMA < ANN < SVR | GARCH < ARMA < RF < ANN = SVR |
| | Lasalgaon | GARCH < ARMA < ANN < RF < SVR | GARCH < ARMA < RF < ANN < SVR | GARCH < ARMA < RF < ANN < SVR |
| C6 | Bengaluru | RF < GARCH < SVR < ANN < ARMA | RF < GARCH < SVR < ANN < ARMA | GARCH < RF < ANN = SVR < ARMA |
| | Delhi | GARCH < RF < SVR < ANN < ARMA | GARCH < RF < SVR < ANN < ARMA | GARCH < RF < SVR < ARMA < ANN |
| | Lasalgaon | RF < SVR < GARCH < ARMA < ANN | RF < SVR < GARCH < ARMA < ANN | RF < SVR < GARCH < ARMA < ANN |
| BL14 | Bengaluru | RF < SVR < ARMA < ANN < GARCH | RF < SVR < ARMA < ANN < GARCH | RF < SVR < ARMA < GARCH = ANN |
| | Delhi | ARMA < RF < SVR < ANN < GARCH | ARMA < RF < SVR < ANN < GARCH | ARMA < RF < SVR < ANN < GARCH |
| | Lasalgaon | SVR < RF < ANN < GARCH < ARMA | SVR < RF < ANN < ARMA < GARCH | SVR = RF < ARMA = ANN < GARCH |
| LA8 | Bengaluru | GARCH < RF < SVR < ARMA < ANN | GARCH < RF < SVR < ARMA < ANN | GARCH < RF < SVR < ARMA < ANN |
| | Delhi | RF < SVR < ARMA < ANN < GARCH | RF < SVR < ANN < ARMA < GARCH | RF < SVR < ARMA < ANN < GARCH |
| | Lasalgaon | GARCH < RF < SVR < ARMA < ANN | RF < GARCH < SVR < ARMA < ANN | RF < SVR < ARMA < ANN < GARCH |
Table 9. Performance comparison of wavelet filters for particular models.

| Wavelet Model | Market | RMSE | MAE | MAPE |
|---|---|---|---|---|
| ARMA | Bengaluru | D4 < Haar < LA8 < C6 < BL14 | D4 < Haar < LA8 < C6 < BL14 | Haar = D4 < LA8 < C6 < BL14 |
| | Delhi | Haar < D4 < BL14 < LA8 < C6 | D4 < Haar < BL14 < LA8 < C6 | Haar = D4 < BL14 < LA8 < C6 |
| | Lasalgaon | Haar < D4 < LA8 < BL14 < C6 | Haar < D4 < LA8 < BL14 < C6 | Haar < D4 < LA8 < BL14 < C6 |
| GARCH | Bengaluru | LA8 < Haar < D4 < C6 < BL14 | LA8 < Haar < D4 < C6 < BL14 | LA8 < Haar < D4 < C6 < BL14 |
| | Delhi | D4 < Haar < C6 < LA8 < BL14 | Haar < D4 < C6 < LA8 < BL14 | Haar = D4 < C6 < LA8 < BL14 |
| | Lasalgaon | D4 < Haar < LA8 < BL14 < C6 | D4 < Haar < LA8 < BL14 < C6 | D4 < Haar < BL14 < C6 < LA8 |
| ANN | Bengaluru | D4 < Haar < C6 < LA8 < BL14 | D4 < Haar < C6 < LA8 < BL14 | D4 < Haar < C6 < LA8 < BL14 |
| | Delhi | Haar < D4 < C6 < LA8 < BL14 | Haar < D4 < LA8 < C6 < BL14 | Haar < D4 < LA8 < C6 < BL14 |
| | Lasalgaon | Haar < D4 < LA8 < BL14 < C6 | Haar < D4 < LA8 < BL14 < C6 | Haar < D4 < LA8 < BL14 < C6 |
| SVR | Bengaluru | Haar < D4 < LA8 < C6 < BL14 | Haar < D4 < LA8 < C6 < BL14 | Haar < D4 < LA8 < C6 < BL14 |
| | Delhi | Haar < D4 < LA8 < C6 < BL14 | Haar < D4 < LA8 < C6 < BL14 | Haar < D4 < C6 = LA8 < BL14 |
| | Lasalgaon | Haar < D4 < C6 < LA8 < BL14 | Haar < D4 < C6 < LA8 < BL14 | Haar < D4 < C6 = LA8 < BL14 |
| RF | Bengaluru | Haar < D4 < LA8 < C6 < BL14 | Haar < D4 < LA8 < C6 < BL14 | Haar < D4 < LA8 < C6 < BL14 |
| | Delhi | Haar < D4 < LA8 < C6 < BL14 | Haar < D4 < LA8 < C6 < BL14 | Haar < D4 < LA8 < C6 < BL14 |
| | Lasalgaon | Haar < D4 < C6 < LA8 < BL14 | Haar < D4 < C6 < LA8 < BL14 | Haar < D4 < C6 < LA8 < BL14 |
Table 10. Performance comparison of all models.

Best model(s):

| Market | RMSE | MAE | MAPE |
|---|---|---|---|
| Bengaluru | WRF_Haar < WSVR_Haar | WRF_Haar < WSVR_Haar | WRF_Haar < WSVR_Haar |
| Delhi | WRF_Haar < WSVR_Haar | WRF_Haar < WSVR_Haar | WRF_Haar < WSVR_Haar |
| Lasalgaon | WRF_Haar < WSVR_Haar | WRF_Haar < WSVR_Haar | WRF_Haar < WSVR_Haar |

Poorly performing model(s):

| Market | RMSE | MAE | MAPE |
|---|---|---|---|
| Bengaluru | WGARCH_BL14 > WANN_BL14 | WGARCH_BL14 > WANN_BL14 | WGARCH_BL14 > WANN_BL14 |
| Delhi | WGARCH_BL14 > WANN_BL14 | WGARCH_BL14 > WANN_BL14 | WGARCH_BL14 > WANN_BL14 |
| Lasalgaon | WANN_C6 > WARMA_C6 | WANN_C6 > WARMA_C6 | WANN_C6 > WARMA_C6 |

