Article

Enhanced Linear and Vision Transformer-Based Architectures for Time Series Forecasting

Department of Computer Science and Engineering, University of Bridgeport, Bridgeport, CT 06604, USA
*
Author to whom correspondence should be addressed.
Big Data Cogn. Comput. 2024, 8(5), 48; https://doi.org/10.3390/bdcc8050048
Submission received: 5 April 2024 / Revised: 2 May 2024 / Accepted: 7 May 2024 / Published: 16 May 2024

Abstract

Time series forecasting has been a challenging area in the field of Artificial Intelligence. Various approaches such as linear neural networks, recurrent neural networks, Convolutional Neural Networks, and recently transformers have been attempted for the time series forecasting domain. Although transformer-based architectures have been outstanding in the Natural Language Processing domain, especially in autoregressive language modeling, the initial attempts to use transformers in the time series arena have met mixed success. A recent important work indicates that simple linear networks outperform transformer-based designs. We investigate this paradox in detail, comparing linear neural network- and transformer-based designs and providing insights into why a certain approach may be better for a particular type of problem. We also improve upon the recently proposed simple linear neural network-based architecture by using dual pipelines with batch normalization and reversible instance normalization. Our enhanced architecture outperforms all existing architectures for time series forecasting on a majority of the popular benchmarks.

1. Introduction

The goal of time series forecasting is to predict future values based on patterns observed in historical data. It has been an active area of research with applications in many diverse fields such as weather, financial markets, electricity consumption, health care, and market demand, among others. Over the last few decades, different approaches have been developed for time series prediction involving classical statistics, mathematical regression, machine learning, and deep learning-based models. Both univariate and multivariate models have been developed for different application domains. The classical statistics- and mathematics-based approaches include moving average filters, exponential smoothing, Autoregressive Integrated Moving Average (ARIMA), SARIMA [1], and TBATs [2]. SARIMA improves upon ARIMA by also taking into account any seasonality patterns and usually performs better in forecasting complex data containing cycles. TBATs further refines SARIMA by including multiple seasonal periods.
With the advent of machine learning where the foundational concept is to develop a model that learns from data, several approaches to time series forecasting have been explored including Linear Regression, XGBoost, and random forests. Using random forests or XGBoost for time series forecasting requires the data to be transformed into a supervised learning problem using a sliding window approach. When the training data are relatively small, the statistical approaches tend to yield better results; however, it has been shown that for larger data, machine-learning approaches tend to outperform the classical mathematical techniques of SARIMA and TBATs [2,3].
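To illustrate the sliding-window transformation mentioned above, the following is a minimal sketch (assuming NumPy and a univariate series; the window sizes are arbitrary and only for illustration):

```python
import numpy as np

def sliding_window(series, lookback, horizon):
    """Turn a 1-D series into (X, y) pairs: each X row holds `lookback`
    past values and each y row holds the next `horizon` values."""
    X, y = [], []
    for t in range(len(series) - lookback - horizon + 1):
        X.append(series[t:t + lookback])
        y.append(series[t + lookback:t + lookback + horizon])
    return np.array(X), np.array(y)

# Example: 100 hourly observations, predict 24 steps from the past 48.
series = np.sin(np.arange(100) / 5.0)
X, y = sliding_window(series, lookback=48, horizon=24)
print(X.shape, y.shape)  # (29, 48) (29, 24)
```

Any standard regressor (e.g., XGBoost or a random forest) can then be fit on the resulting (X, y) pairs.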
In the last decade, deep learning-based approaches [4] to time series forecasting have drawn considerable research interest, starting from designs based on Recurrent Neural Networks (RNNs) [5,6]. A detailed study comparing ARIMA-based architectures and RNNs [6] concluded that RNNs can model seasonality patterns directly if the data have homogeneous seasonal patterns; otherwise, a deseasonalization step was recommended. It also concluded that (semi-)automatic RNN models are no silver bullets but can be competitive in some situations. The work in [6] compared different RNN designs and indicated that a Long Short-Term Memory (LSTM) cell with peephole connections performed relatively better, the Elman Recurrent Neural Network (ERNN) cell performed the worst, and the performance of the Gated Recurrent Unit (GRU) was in between.
LSTM and Convolutional Neural Networks (CNNs) [7] have been combined to address the long-term and short-term patterns arising in data. One notable design was proposed in [8], termed by the authors the Long- and Short-term Time-series network (LSTNet). It uses the CNN and RNN to extract short-term local dependency patterns among variables and to discover long-term patterns for time series trends. Recently, RNNs and CNNs are being replaced by transformer-based architectures in many applications, such as Natural Language Processing (NLP) and Computer Vision. Transformers [9], which use an attention mechanism to determine the similarity in the input sequence, are among the best models for NLP applications, as demonstrated by the success of large language models such as ChatGPT. Some time series forecasting implementations using transformers have achieved good performance [10,11,12,13]; however, the transformer has some inherent challenges and limitations with respect to time series forecasting in current implementations due to the following reasons:
  • Temporal dynamics vs. semantic correlations: Transformers excel in identifying semantic correlations but struggle with the complex, non-linear temporal dynamics crucial in time series forecasting [14,15]. To address this, an auto-correlation mechanism is used in Autoformer [11];
  • Order insensitivity: The self-attention mechanism in transformers treats inputs as an unordered collection, which is problematic for time series prediction where order is important. The positional encodings used in transformers partially address this but may not fully capture the temporal information. Some transformer-based models try to solve this problem through architectural enhancements; e.g., Autoformer [11] uses series decomposition blocks that enhance the model's ability to learn from intricate temporal patterns [11,13,15];
  • Complexity trade-offs: The attention mechanism in transformers has a high computational cost for long sequences due to its quadratic complexity $O(L^2)$. Sparse attention mechanisms, e.g., Informer [10], reduce this to $O(L \log L)$ by using a ProbSparse technique. Some models reduce the complexity to $O(L)$, e.g., FEDformer [12], which uses a Fourier-enhanced structure, and Pyraformer [16], which incorporates a pyramidal attention module with inter-scale and intra-scale connections to achieve linear complexity. These reductions in complexity come at the cost of some information loss in the time series prediction;
  • Noise susceptibility: Transformers with many parameters are prone to overfitting noise, a significant issue in volatile data such as financial time series where the actual signal is often subtle [15];
  • Long-term dependency challenge: Transformers, despite their theoretical potential, often find it challenging to handle the very long sequences typical in time series forecasting, largely due to training complexities and gradient dilution. For example, PatchTST [14] addresses this issue by disassembling the time series into smaller segments that are used as patches. This may cause some fragmentation issues at the boundaries of the patches in the input data;
  • Interpretation challenge: Transformers' complex architecture, with layers of self-attention and feed-forward networks, complicates understanding their decision-making, a notable limitation in time series forecasting where clarity of rationale is crucial. An attempt has been made in LTSF-Linear [15] to address this by using a simple linear network instead of a complex architecture; however, such a network may be unable to exploit the intricate multivariate relationships in the data.
In summary, different approaches for time series forecasting have been explored. These include classical approaches based on mathematics and statistics, neural network approaches (including linear networks, LSTMs, and CNNs), and recently the transformer-based approaches. Even though transformer-based models have been claimed to outperform previous approaches, the recent work in [15] questions the use of complex models including transformers and shows that a simple linear neural network yields better results than transformer-based models. It seems counter-intuitive not to utilize the attention capabilities of the transformer, which has revolutionized AI in text generation with large language models. We investigate this paradox further to see if better models for time series can be created using either the linear network- or transformer-based approaches. We review the related work in the next section before elaborating on our enhanced models.

2. Related Work

Some of the recent works related to time series forecasting include models based on simple linear networks, transformers, and state-space models. One of the important works related to Long-Term Time Series Forecasting (LTSF), termed LTSF-Linear, was presented in [15]. It uses the most fundamental Direct Multi-Step (DMS) [17] model through a temporal linear layer. The core approach of LTSF-Linear involves predicting future time series data by directly applying a weighted sum to historical data, as shown in Figure 1.
The output of LTSF-Linear is described as $\hat{X}_i = W X_i$, where $W \in \mathbb{R}^{T \times L}$ is a temporal linear layer and $X_i$ is the input for the $i$th variable. This model applies uniform weights across the different variables without considering spatial correlations between the variates. Besides LTSF-Linear, a few variations termed NLinear and DLinear were also introduced in [15]. NLinear processes the input sequence through a linear layer with normalization by subtracting and re-adding the last sequence value before predicting. DLinear decomposes raw data into trend and seasonal components using a moving average kernel, processes each with a linear layer, and sums the outputs for the final prediction [15]. This concept has been borrowed from the Autoformer and FEDformer models [11,12].
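To make the distinction concrete, the following is a minimal PyTorch-style sketch of the per-channel linear forecasters described above (an illustration of the equations, not the authors' released code); DLinear would apply the same kind of linear map separately to a moving-average trend and the seasonal remainder and sum the two outputs.

```python
import torch
import torch.nn as nn

class LTSFLinear(nn.Module):
    """X_hat = W X: one temporal linear map from L past steps to T future steps,
    shared across all channels (no cross-channel mixing)."""
    def __init__(self, L, T):
        super().__init__()
        self.linear = nn.Linear(L, T)

    def forward(self, x):                       # x: (batch, L, channels)
        x = x.permute(0, 2, 1)                  # (batch, channels, L)
        return self.linear(x).permute(0, 2, 1)  # (batch, T, channels)

class NLinear(LTSFLinear):
    """LTSF-Linear with the last observed value subtracted and re-added,
    which compensates for distribution shift between training and test windows."""
    def forward(self, x):
        last = x[:, -1:, :]                     # last time step of each channel
        return super().forward(x - last) + last
```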
Although some research indicates the success of the transformer-based models for time series forecasting, e.g., [10,11,12,16], the LTSF-Linear work in [15] questions the use of transformers due to the fact that the permutation-invariant self-attention mechanism may result in temporal information loss. The work in [15] also presented better forecasting results than the previous transformer-based approaches. However, important research later presented in [14] proposed a transformer-based architecture called PatchTST, showing better results than [15] in some cases. PatchTST segments the time series into subseries-level patches and maintains channel independence between variates. Each channel contains a single univariate time series that shares the same embedding and transformer weights across all the series. Figure 2 depicts the architecture of PatchTST.
In PatchTST, the $i$th series over $L$ time steps is treated as a univariate series $x^{(i)}_{1:L} = (x^{(i)}_1, \ldots, x^{(i)}_L)$. Each of these is fed independently to the transformer backbone after being converted to patches, which produces the prediction $\hat{x}^{(i)} = (\hat{x}^{(i)}_{L+1}, \ldots, \hat{x}^{(i)}_{L+T}) \in \mathbb{R}^{1 \times T}$ for $T$ future steps. For a patch length $P$ and stride $S$, the patching process generates a sequence of $N$ patches $x_p^{(i)} \in \mathbb{R}^{P \times N}$, where $N = \lfloor (L - P)/S \rfloor + 2$. With the use of patches, the number of input tokens reduces to approximately $L/S$.
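As an illustration of this patching step, here is a small sketch with assumed parameter values (PatchTST pads the sequence with $S$ repeats of the last value, which accounts for the +2 term; this is not the authors' code):

```python
import torch
import torch.nn.functional as F

L, P, S = 512, 16, 8                     # look-back length, patch length, stride
x = torch.randn(32, 7, L)                # (batch, channels, L); channels stay independent

x = F.pad(x, (0, S), mode="replicate")   # append S copies of the last value
patches = x.unfold(2, P, S)              # slide over the time dimension -> (batch, channels, N, P)
print(patches.shape[2])                  # N = (L - P) // S + 2 = 64 tokens per channel
```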
Recently, state-space models (SSMs) have received considerable attention in the NLP and Computer Vision domains [18,19]. For time series forecasting, it has been reported that standard SSM representations cannot express autoregressive processes effectively. An important recent work using SSMs, termed SpaceTimeSSM, is presented in [20]; it enhances the traditional SSM by employing a companion matrix, which enables SpaceTime's SSM layers to learn desirable autoregressive processes. The time series forecasting problem represents the input series over the past $p$ samples as the following autoregressive process:
$u_k = \phi_1 u_{k-1} + \phi_2 u_{k-2} + \cdots + \phi_p u_{k-p}$
Then the state-space formulation is given as follows:
$x_{k+1} = A x_k + B u_k$
$y_{k+1} = C x_{k+1} + D u_k$
$y_{k+1} = u_{k+1} = C (A x_k + B u_k)$
The SpaceTimeSSM composes the companion matrix $A$ as a $d \times d$ square matrix:
$$A = \begin{bmatrix} 0 & 0 & \cdots & 0 & a_0 \\ 1 & 0 & \cdots & 0 & a_1 \\ 0 & 1 & \cdots & 0 & a_2 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & a_{d-1} \end{bmatrix}$$
where $a := [a_0 \; a_1 \; \cdots \; a_{d-1}]^T = \mathbf{0}$, $B = [1 \; 0 \; \cdots \; 0]^T$, and $C = [\phi_1 \; \cdots \; \phi_p]$.
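For intuition, the following is a small numerical sketch of this shift-matrix special case (the coefficient values are made up for illustration): with $a = 0$ and $B = e_1$, the state simply holds the most recent inputs, so $C(Ax_k + Bu_k)$ reproduces the AR($p$) weighted sum above.

```python
import numpy as np

def companion(a):
    """d x d companion matrix: ones on the subdiagonal, vector `a` in the last column."""
    d = len(a)
    A = np.zeros((d, d))
    A[1:, :-1] = np.eye(d - 1)
    A[:, -1] = a
    return A

d = 4
A = companion(np.zeros(d))               # shift-matrix case: a = 0
B = np.eye(d)[:, 0]                      # B = [1, 0, ..., 0]^T
phi = np.array([0.5, 0.2, 0.2, 0.1])     # illustrative AR coefficients, C = [phi_1 ... phi_d]

x = np.zeros(d)                          # state x_k
for u_k in [1.0, 0.8, 0.9, 1.1]:         # feed past inputs one step at a time
    x_next = A @ x + B * u_k             # x_{k+1} = A x_k + B u_k (shifts in the newest input)
    y_next = phi @ x_next                # y_{k+1} = C x_{k+1}: AR prediction of the next value
    x = x_next
print(y_next)                            # weighted sum of the last d inputs
```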
We provide a comparison with the SpaceTimeSSM approach on different time series benchmarks in Section 4 (Results).

3. Methodology

3.1. Proposed Models for Time Series Forecasting

As explained in the previous Related Work section, there are three competing approaches for time series forecasting: the first based on simple linear networks; the second based on transformers, where the input series is converted to patches and channel independence is claimed to be the better scheme; and the third based on state-space models with additional enhancements to incorporate autoregressive behavior. We investigated these approaches further to see if better models for time series can be created in at least the first two categories. In the next subsections, we elaborate our enhancements to the existing linear- and transformer-based approaches.

3.2. Enhanced Linear Models for Time Series Forecasting (ELM)

We enhanced the LTSF-Linear approach presented in [15] by performing batch normalization and reversible instance normalization. We further combined the information in a novel way using a dual pipeline design, as shown in Figure 3. Recent important works, e.g., LTSF-Linear [15], which is based on simple linear networks, and PatchTST [14], which is based on transformers, emphasized that channel independence produces better results. We maintain this attribute but further augment the linear architecture with batch normalization. Batch normalization stabilizes the distribution of input data by normalizing the activations in each layer. It also allows for higher learning rates and reduces the need for strict initialization and some forms of regularization such as dropout. By addressing the internal covariate shift, batch normalization improves network stability and performance across various tasks.
One of the enhancements in [15], termed NLinear, accounted for distribution shift in the dataset by subtracting the last value of the sequence and then adding it back after the linear layer, before making the final prediction. We incorporate a similar idea in our architecture as a separate stream, as shown in Figure 3.
One difference in our implementation of the distribution-shift stream is that we further add batch normalization to combine temporal information more effectively. From Figure 3, it can be seen that two distinct pipelines operate on the input sequence at the beginning. These two streams are then merged together by averaging their values, and after passing through a non-linearity (GELU) and another batch normalization layer, the result passes through a final Reversible Instance Normalization (RevIN) layer. RevIN, originally proposed in [21], operates on each variate (channel) independently. It applies a learnable transformation to normalize the data during training, such that it can be reversed to the original scale during prediction.
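A simplified sketch of this dual-pipeline design follows (layer sizes, ordering details, and the omission of the final RevIN wrapper are simplifications of Figure 3; the exact model is available in the released repository):

```python
import torch
import torch.nn as nn

class ELMSketch(nn.Module):
    """Two parallel per-channel linear streams over the look-back window:
    a plain stream and a last-value-subtracted (distribution-shift) stream,
    each followed by batch normalization; their average passes through GELU,
    another batch norm, and a linear head. A RevIN layer [21] would wrap the
    model in the full design to normalize inputs and de-normalize outputs."""
    def __init__(self, L, T, channels):
        super().__init__()
        self.stream_a, self.stream_b = nn.Linear(L, T), nn.Linear(L, T)
        self.bn_a, self.bn_b = nn.BatchNorm1d(channels), nn.BatchNorm1d(channels)
        self.bn_out = nn.BatchNorm1d(channels)
        self.act = nn.GELU()
        self.head = nn.Linear(T, T)

    def forward(self, x):                                  # x: (batch, L, channels)
        x = x.permute(0, 2, 1)                             # (batch, channels, L)
        last = x[:, :, -1:]
        a = self.bn_a(self.stream_a(x))                    # plain pipeline
        b = self.bn_b(self.stream_b(x - last)) + last      # shift-corrected pipeline
        out = self.bn_out(self.act((a + b) / 2))           # merge by averaging
        return self.head(out).permute(0, 2, 1)             # (batch, T, channels)
```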
We also use a custom loss function that combines the L2 (MSE) and L1 (MAE) losses together in a weighted manner as described below.
$\mathrm{Loss} = \alpha \times \mathrm{MSE}(y, \hat{y}) + (1 - \alpha) \times \mathrm{L1}(y, \hat{y})$
where α is a weighting factor between 0 and 1. MSE (input, target) calculates the mean squared error between the input and target values. L1 (input, target) calculates the mean absolute difference between the input and target values. As demonstrated in our results section, our enhanced linear network-based architecture produces better results than existing approaches in many cases on different benchmarks.
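A minimal sketch of this weighted loss using standard PyTorch functional losses (the value of α shown is only a placeholder):

```python
import torch.nn.functional as F

def weighted_mse_mae_loss(pred, target, alpha=0.5):
    """Loss = alpha * MSE(pred, target) + (1 - alpha) * MAE(pred, target)."""
    return alpha * F.mse_loss(pred, target) + (1 - alpha) * F.l1_loss(pred, target)
```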
To investigate if a different transformer-based architecture may be more suitable for time series forecasting, we adapt the popular Swin transformer [22], which has demonstrated superior results in computer vision. Since the Swin transformer applies attention to local regions, it may have the capability to extract better temporal information. Further, by using shifting windows, it ensures that more tokens are involved in the attention process. We elaborate on this in the next sub-section.

3.3. Adaptation of Vision Transformers to Time Series Forecasting

While one of the recent works on time series forecasting used a simple transformer-based architecture (PatchTST [14]) with channel independence, we explore a more intricate transformer architecture, i.e., the Swin transformer [22]. The Swin transformer presents an innovative and streamlined structure for vision-related tasks through the utilization of shifted windows to compute representations. This method tackles the scalability issues inherent to transformers in vision applications by ensuring a linear computational complexity with respect to the size of the image. It has the additional advantage of overcoming the information loss in the patching process through the use of hierarchical overlapping windows. As a result, it has demonstrated superior results across various computer vision applications. Due to these inherent advantages of the Swin architecture, we adapt it to the time series forecasting domain. We treat the multivariate input series $X \in \mathbb{R}^{L \times d}$, with $L$ past steps and $d$ channels, as an $L \times d$ image and convert it to an appropriate number of patches that are then fed to the Swin model. Due to the use of overlapping, shifted, and hierarchical windows, it has the potential to learn better cross-channel information when predicting future time series data. The architecture of our Swin-based time series model is shown in Figure 4.
For feeding the multivariate time series in $\mathbb{R}^{d \times L}$, with $L$ time steps and $d$ variates, to the Swin transformer, the input data need to be converted to $n^2$ patches, where $n$ is a power of 2. We accomplish this by creating $n^2 = (d \times L - r)/k$ patches, where $r$ and $k$ are integers selected to map the input data to exactly $n^2$ patches. For example, if the input series has 512 time steps with 7 channels, then $k = 14$ and $r = 0$. This results in 256 patches, i.e., $n = 16$. We present the evaluation results on different benchmarks in the next section.
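The patch-count arithmetic for this example can be checked directly (a small sketch; the roles of r and k follow the formula as reconstructed above):

```python
# Patch-count check for the Swin adaptation example (d = 7 channels, L = 512 steps).
d, L = 7, 512
k, r = 14, 0                  # chosen so that (d * L - r) / k is a perfect square
n_sq = (d * L - r) // k       # 3584 / 14 = 256 patches
n = int(n_sq ** 0.5)          # 16, i.e., a 16 x 16 grid of patches fed to the Swin model
print(n_sq, n)                # 256 16
```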

4. Results

We tested our architectures and performed analyses on nine widely used datasets from real-world applications. These datasets consist of the Electricity Transformer Temperature (ETT) series, which include ETTh1 and ETTh2 (hourly intervals), and ETTm1 and ETTm2 (5-minute intervals), along with datasets pertaining to Traffic (hourly), Electricity (hourly), Weather (10-minute intervals), Influenza-like illness (ILI) (weekly), and Exchange rate (daily). The characteristics of the different datasets used are summarized in Table 1.
The architecture types of the models that we compare against our approach are listed in Table 2.
Table 3 shows the detailed results for our Enhanced Linear Model (ELM) on different datasets and compares it with other recent popular models.
As can be seen from Table 3, our ELM model surpasses most established baseline methods in the majority of the test cases (indicated by bold values). The underlined values in Table 3 indicate the second-best results for a given category. Our model is either the best or the second best in most categories. Note that each model in Table 3 follows a consistent experimental setup, with prediction lengths T of {96, 192, 336, 720} for all datasets except the ILI dataset, for which we use prediction lengths of {24, 36, 48, 60}. For our ELM model, the look-back window L is 512 for all datasets except Exchange and Illness, which use L = 96. For the other models that we compare against, we report their best result over the look-back window sizes {96, 192, 336, 720} [14,15]. The metrics used for evaluation are MSE (Mean Squared Error) and MAE (Mean Absolute Error).
Table 4 provides the quantitative improvement over the two recent best-performing time series prediction models, PatchTST [14] and DLinear [15]. The values presented are the averages of the percent improvements over the four prediction lengths of 96, 192, 336, and 720. With respect to PatchTST, our model lags in performance on the Traffic and Illness datasets under the MSE metric but is competitive with or exceeds it on the MSE or MAE metrics for the other benchmarks. The percentage improvement with respect to DLinear is more significant than that with respect to PatchTST, and our ELM model exceeds DLinear in almost all dataset categories.
Figure 5 and Figure 6 show the graphs of predicted vs. actual data for two of the datasets with different prediction lengths using a context length of 512 for our ELM model for the first channel (pressure for the weather dataset, and HUFL—high useful load for the ETTm1 dataset). As can be seen, if the data are more cyclical in nature (e.g., HUFL in ETTm1), our model is able to learn the patterns nicely, as shown in Figure 6. For complex data such as the pressure feature in weather, the prediction is less accurate, as indicated in Figure 5.
Table 5 presents our results for the Swin transformer-based implementation for time series. As explained earlier, we divide the input multivariate time series data into 16 × 16, i.e., 256 patches, before feeding it to a Swin model with three transformer layers. The embedding dimensions used in the three layers are [128, 128, 256]. As can be seen, the Swin transformer-based approach has the inherent capability to combine information between different channels as well as between different time steps, but it does not perform as well as our linear model (ELM); only on the Traffic dataset does it produce the best result. This could be attributed to the fact that this dataset has the largest number of features, which Swin can exploit for more cross-channel information. Comparing our Swin transformer-based model to the PatchTST model [14] (also transformer-based), PatchTST, which uses channel independence, performs better than our Swin-based model. Note that PatchTST performs worse than our ELM model, which is based on a linear network.
We also compare our ELM model to the newly proposed state-space model-based time series predictors [20]. State-space models such as Mamba [18], VMamba [19], Vision Mamba [23], and TimeMachine (Mamba) [24] are drawing significant attention for modeling temporal data such as time series, and we therefore compare our ELM model with the recently published works in [20,24,25], which are based on state-space models. Table 6 compares our ELM model with the works in [20,24]. In one case, the SpaceTime model is better, but in most cases our ELM model performs better than both the state-space models and the previous DLinear model. The context length in Table 6 is 720, and the prediction length is also 720 time steps.

5. Discussion

One of the recent open questions in time series forecasting has been which architecture is best suited for this task. Some earlier research papers have indicated better results with transformer-based models than previous approaches, e.g., Informer [10], Autoformer [11], FEDformer [12], and Pyraformer [16]. Of these models, FEDformer demonstrated much better results, as it uses Fourier-enhanced and Wavelet-enhanced blocks in the transformer structure that can learn important patterns in a series through frequency-domain mapping. A simpler transformer-based architecture yielding even better results was proposed in [14]. This architecture, termed PatchTST, uses independent channels, where each input channel is divided into patches, and all channels share the same embedding and transformer weights. Since PatchTST is a simple transformer design with a simple independent-channel architecture, we explored replacing it with a Swin transformer that patches across channels. The Swin transformer has the capability to combine information across patches due to its hierarchical overlapping window design. Our detailed experiments with the Swin-based design did not produce better results than the channel-independent design of PatchTST; however, compared with other transformer-based designs, it yielded improved results in many cases.
To answer the question of the best architecture for time series forecasting, we improve the recently proposed simple linear network-based model in [15] by creating dual pipelines with batch and reversible instance normalizations. We maintain channel independence, and our results show the best performance obtained so far, as compared to existing approaches, on the majority of the standard time series forecasting benchmarks.

6. Conclusions

We performed a detailed investigation into the best architecture for time series forecasting. We implemented time series forecasting on the Swin transformer to see if aggregated channel information is useful. We also analyzed and improved an existing simpler model based on linear networks. Our study highlights the significant potential of simpler models, challenging the prevailing emphasis on complex transformer-based architectures. The ELM model developed in this work, with its straightforward design, has demonstrated superior performance across various datasets, underscoring the importance of re-evaluating the effectiveness of simpler models in time series analysis. Compared to the recent transformer-based PatchTST model, our ELM model achieves a percentage improvement of approximately 1–5% on most benchmarks. With respect to the recent linear network-based models, the percentage improvement by our model is more significant, ranging between 1 and 25% for different datasets. It is only when the number of variates in the dataset is large that the Swin transformer-based design we adapted for time series prediction appears to be effective.
Future work involves the development of hybrid models that leverage both linear and transformer elements such that each contributes to the effective learning of the time series behavior. For example, a frequency-domain component, as used in FEDformer, could aid a linear model when the past periodicity pattern is more complex. The recent developments in state-space models and their application to time series forecasting, such as TimeMachine [24,25] (based on Mamba), also deserve further research aimed at optimizing these models for better prediction.

Author Contributions

Conceptualization, M.A. and A.M.; methodology, M.A.; software, M.A.; validation, M.A. and A.M.; formal analysis, M.A.; investigation, M.A.; resources, M.A.; data curation, M.A.; writing—original draft preparation, M.A. and A.M.; writing—review and editing, M.A. and A.M.; visualization, M.A.; supervision, A.M.; project administration, A.M. All authors have read and agreed to the published version of the manuscript.

Funding

The authors received no financial support for this research.

Data Availability Statement

All materials related to our study, including the trained models, detailed results reports, source code, and datasets, are publicly accessible via our dedicated GitHub repository: https://github.com/muslehal/Enhanced-Linear-Model-ELM-, Dataset link: https://drive.google.com/drive/folders/1ZOYpTUa82_jCcxIdTmyr0LXQfvaM9vIy (accessed on 1 April 2024).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Box, G.E.; Jenkins, G.M.; Reinsel, G.C.; Ljung, G.M. Time Series Analysis: Forecasting and Control; John Wiley & Sons: Hoboken, NJ, USA, 2015. [Google Scholar]
  2. De Livera, A.M.; Hyndman, R.J.; Snyder, R.D. Forecasting time series with complex seasonal patterns using exponential smoothing. J. Am. Stat. Assoc. 2011, 106, 1513–1527. [Google Scholar] [CrossRef]
  3. Cerqueira, V.; Torgo, L.; Soares, C. Machine Learning vs. Statistical Methods for Time Series Forecasting: Size Matters. arXiv 2019, arXiv:1909.13316v1. [Google Scholar]
  4. Lim, B.; Zohren, S. Time-series forecasting with deep learning: A survey. Philos. Trans. R. Soc. A 2021, 379, 20200209. [Google Scholar] [CrossRef] [PubMed]
  5. Salinas, D.; Flunkert, V.; Gasthaus, J.; Januschowski, T. DeepAR: Probabilistic forecasting with autoregressive recurrent networks. Int. J. Forecast. 2020, 36, 1181–1191. [Google Scholar] [CrossRef]
  6. Hewamalage, H.; Bergmeir, C.; Bandara, K. Recurrent Neural Networks for Time Series Forecasting: Current Status and Future Directions. Int. J. Forecast. 2021, 37, 388–427. [Google Scholar] [CrossRef]
  7. Sen, R.; Yu, H.F.; Dhillon, I.S. Think globally, act locally: A deep neural network approach to high-dimensional time series forecasting. In Advances in Neural Information Processing Systems 32 (NeurIPS 2019); Neural Information Processing Systems Foundation, Inc. (NeurIPS): San Diego, CA, USA, 2019. [Google Scholar]
  8. Lai, G.; Chang, W.-C.; Yang, Y.; Liu, H. Modeling Long- and Short-Term Temporal Patterns with Deep Neural Networks. In Proceedings of the SIGIR ‘18: The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, Ann Arbor, MI, USA, 8–12 July 2018. [Google Scholar]
  9. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. In Advances in Neural Information Processing Systems 30 (NIPS 2017); Neural Information Processing Systems Foundation, Inc. (NeurIPS): San Diego, CA, USA, 2017. [Google Scholar]
  10. Zhou, H.; Zhang, S.; Peng, J.; Zhang, S.; Li, J.; Xiong, H.; Zhang, W. Informer: Beyond efficient transformer for long sequence time-series forecasting. Proc. AAAI Conf. Artif. Intell. 2021, 35, 11106–11115. [Google Scholar] [CrossRef]
  11. Wu, H.; Xu, J.; Wang, J.; Long, M. Autoformer: Decomposition transformers with auto-correlation for long-term series forecasting. In Advances in Neural Information Processing Systems 34 (NeurIPS 2021); Neural Information Processing Systems Foundation, Inc. (NeurIPS): San Diego, CA, USA, 2021; pp. 22419–22430. [Google Scholar]
  12. Zhou, T.; Ma, Z.; Wen, Q.; Wang, X.; Sun, L.; Jin, R. Fedformer: Frequency enhanced decomposed transformer for long-term series forecasting. In Proceedings of the 39th International Conference on Machine Learning PMLR 2022, Baltimore, MD, USA, 17–23 July 2022; pp. 27268–27286. [Google Scholar]
  13. Li, S.; Jin, X.; Xuan, Y.; Zhou, X.; Chen, W.; Wang, Y.X.; Yan, X. Enhancing the locality and breaking the memory bottleneck of transformer on time series forecasting. In Advances in Neural Information Processing Systems 32 (NeurIPS 2019); Neural Information Processing Systems Foundation, Inc. (NeurIPS): San Diego, CA, USA, 2019. [Google Scholar]
  14. Nie, Y.; Nguyen, N.H.; Sinthong, P.; Kalagnanam, J. A Time Series is worth 64 words: Long-term forecasting with Transformers. arXiv 2022, arXiv:2211.14730. [Google Scholar]
  15. Zeng, A.; Chen, M.; Zhang, L.; Xu, Q. Are Transformers effective for Time Series Forecasting? Proc. AAAI Conf. Artif. Intell. 2023, 37, 11121–11128. [Google Scholar] [CrossRef]
  16. Liu, S.; Yu, H.; Liao, C.; Li, J.; Lin, W.; Liu, A.X.; Dustdar, S. Pyraformer: Low-complexity pyramidal attention for long-range time series modeling and forecasting. In Proceedings of the International Conference on Learning Representations 2022, Online, 25–29 April 2022. [Google Scholar]
  17. Chevillon, G. Direct multi-step estimation and forecasting. J. Econ. Surv. 2007, 21, 746–785. [Google Scholar]
  18. Gu, A.; Dao, T. Mamba: Linear-time sequence modeling with selective state spaces. arXiv 2023, arXiv:2312.00752. [Google Scholar]
  19. Liu, Y.; Tian, Y.; Zhao, Y.; Yu, H.; Xie, L.; Wang, Y.; Ye, Q.; Liu, Y. Vmamba: Visual state space model. arXiv 2024, arXiv:2401.10166. [Google Scholar]
  20. Zhang, M.; Saab, K.K.; Poli, M.; Dao, T.; Goel, K.; Ré, C. Effectively Modeling Time Series with Simple Discrete State Spaces. arXiv 2023, arXiv:2303.09489v1. [Google Scholar]
  21. Kim, T.; Kim, J.; Tae, Y.; Park, C.; Choi, J.H.; Choo, J. Reversible instance normalization for accurate time-series forecasting against distribution shift. In Proceedings of the International Conference on Learning Representations 2021, Online, 3–7 May 2021. [Google Scholar]
  22. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 10–17 October 2021; pp. 10012–10022. [Google Scholar]
  23. Zhu, L.; Liao, B.; Zhang, Q.; Wang, X.; Liu, W.; Wang, X. Vision mamba: Efficient visual representation learning with bidirectional state space model. arXiv 2024, arXiv:2401.09417. [Google Scholar]
  24. Wang, Z.; Kong, F.; Feng, S.; Wang, M.; Zhao, H.; Wang, D.; Zhang, Y. Is Mamba Effective for Time Series Forecasting? arXiv 2024, arXiv:2403.11144. [Google Scholar]
  25. Ahamed, M.A.; Cheng, Q. TimeMachine: A Time Series is Worth 4 Mambas for Long-term Forecasting. arXiv 2024, arXiv:2403.09898. [Google Scholar]
Figure 1. Linear network predicting T future time steps based on past L time steps [15].
Figure 2. Architecture of PatchTST [14].
Figure 3. Our Enhanced Linear Model (ELM).
Figure 4. Adaptation of Swin transformer architecture for time series forecasting.
Figure 5. Predicted vs. actual forecasting using ELM model with L = 512 and T = {96, 720} for Weather dataset.
Figure 6. Predicted vs. actual forecasting using ELM model with L = 512 and T = {96, 720} for ETTm1 dataset.
Table 1. Characteristics of the different datasets used.
Datasets | Weather | Traffic | Electricity | ILI | ETTh1/ETTh2 | Exchange Rate | ETTm1/ETTm2
Features | 21 | 862 | 321 | 7 | 7 | 8 | 7
Timesteps | 52,696 | 17,544 | 26,304 | 966 | 17,420 | 7,588 | 69,680
Granularity | 10 min | 1 h | 1 h | 1 week | 1 h | 1 day | 5 min
Table 2. Architecture types of different models used for comparison.
Model | Type
FEDformer | Transformer-based
Autoformer | Transformer-based
Informer | Transformer-based
Pyraformer | Transformer-based
DLinear | Non-transformer
PatchTST | Transformer-based
Table 3. Comparison of our ELM model with other models on the time series datasets.
Dataset | T | ELM (Ours) MSE | ELM (Ours) MAE | PatchTST/64 MSE | PatchTST/64 MAE | DLinear MSE | DLinear MAE | FEDformer MSE | FEDformer MAE | Autoformer MSE | Autoformer MAE | Informer MSE | Informer MAE | Pyraformer MSE | Pyraformer MAE
Weather | 96 | 0.140 | 0.184 | 0.149 | 0.198 | 0.176 | 0.237 | 0.238 | 0.314 | 0.249 | 0.329 | 0.354 | 0.405 | 0.896 | 0.556
Weather | 192 | 0.183 | 0.226 | 0.194 | 0.241 | 0.22 | 0.282 | 0.275 | 0.329 | 0.325 | 0.37 | 0.419 | 0.434 | 0.622 | 0.624
Weather | 336 | 0.233 | 0.266 | 0.245 | 0.282 | 0.265 | 0.319 | 0.339 | 0.377 | 0.351 | 0.391 | 0.583 | 0.543 | 0.739 | 0.753
Weather | 720 | 0.306 | 0.319 | 0.314 | 0.334 | 0.323 | 0.362 | 0.389 | 0.409 | 0.415 | 0.426 | 0.916 | 0.705 | 1.004 | 0.934
Traffic | 96 | 0.398 | 0.265 | 0.360 | 0.249 | 0.41 | 0.282 | 0.576 | 0.359 | 0.597 | 0.371 | 0.733 | 0.41 | 2.085 | 0.468
Traffic | 192 | 0.408 | 0.269 | 0.379 | 0.256 | 0.423 | 0.287 | 0.61 | 0.38 | 0.607 | 0.382 | 0.777 | 0.435 | 0.867 | 0.467
Traffic | 336 | 0.417 | 0.274 | 0.392 | 0.264 | 0.436 | 0.296 | 0.608 | 0.375 | 0.623 | 0.387 | 0.776 | 0.434 | 0.869 | 0.469
Traffic | 720 | 0.456 | 0.299 | 0.432 | 0.286 | 0.466 | 0.315 | 0.621 | 0.375 | 0.639 | 0.395 | 0.827 | 0.466 | 0.881 | 0.473
Electricity | 96 | 0.131 | 0.223 | 0.129 | 0.222 | 0.14 | 0.237 | 0.186 | 0.302 | 0.196 | 0.313 | 0.304 | 0.393 | 0.386 | 0.449
Electricity | 192 | 0.146 | 0.236 | 0.147 | 0.240 | 0.153 | 0.249 | 0.197 | 0.311 | 0.211 | 0.324 | 0.327 | 0.417 | 0.386 | 0.443
Electricity | 336 | 0.162 | 0.253 | 0.163 | 0.259 | 0.169 | 0.267 | 0.213 | 0.328 | 0.214 | 0.327 | 0.333 | 0.422 | 0.378 | 0.443
Electricity | 720 | 0.200 | 0.287 | 0.197 | 0.29 | 0.203 | 0.301 | 0.233 | 0.344 | 0.236 | 0.342 | 0.351 | 0.427 | 0.376 | 0.445
Illness | 24 | 1.820 | 0.809 | 1.319 | 0.754 | 2.215 | 1.081 | 2.624 | 1.095 | 2.906 | 1.182 | 4.657 | 1.449 | 1.42 | 2.012
Illness | 36 | 1.574 | 0.775 | 1.579 | 0.87 | 1.963 | 0.963 | 2.516 | 1.021 | 2.585 | 1.038 | 4.65 | 1.463 | 7.394 | 2.031
Illness | 48 | 1.564 | 0.793 | 1.553 | 0.815 | 2.13 | 1.024 | 2.505 | 1.041 | 3.024 | 1.145 | 5.004 | 1.542 | 7.551 | 2.057
Illness | 60 | 1.512 | 0.803 | 1.470 | 0.788 | 2.368 | 1.096 | 2.742 | 1.122 | 2.761 | 1.114 | 5.071 | 1.543 | 7.662 | 2.1
ETTh1 | 96 | 0.362 | 0.389 | 0.370 | 0.400 | 0.375 | 0.399 | 0.376 | 0.415 | 0.435 | 0.446 | 0.941 | 0.769 | 0.664 | 0.612
ETTh1 | 192 | 0.398 | 0.412 | 0.413 | 0.429 | 0.405 | 0.416 | 0.423 | 0.446 | 0.456 | 0.457 | 1.007 | 0.786 | 0.79 | 0.681
ETTh1 | 336 | 0.421 | 0.427 | 0.422 | 0.440 | 0.439 | 0.443 | 0.444 | 0.462 | 0.486 | 0.487 | 1.038 | 0.784 | 0.891 | 0.738
ETTh1 | 720 | 0.437 | 0.453 | 0.447 | 0.468 | 0.472 | 0.490 | 0.469 | 0.492 | 0.515 | 0.517 | 1.144 | 0.857 | 0.963 | 0.782
ETTh2 | 96 | 0.263 | 0.331 | 0.274 | 0.337 | 0.289 | 0.353 | 0.332 | 0.374 | 0.332 | 0.368 | 1.549 | 0.952 | 0.645 | 0.597
ETTh2 | 192 | 0.318 | 0.369 | 0.341 | 0.382 | 0.383 | 0.418 | 0.407 | 0.446 | 0.426 | 0.434 | 3.792 | 1.542 | 0.788 | 0.683
ETTh2 | 336 | 0.348 | 0.399 | 0.329 | 0.384 | 0.448 | 0.465 | 0.4 | 0.447 | 0.477 | 0.479 | 4.215 | 1.642 | 0.907 | 0.747
ETTh2 | 720 | 0.409 | 0.444 | 0.379 | 0.422 | 0.605 | 0.551 | 0.412 | 0.469 | 0.453 | 0.49 | 3.656 | 1.619 | 0.963 | 0.783
ETTm1 | 96 | 0.291 | 0.338 | 0.293 | 0.346 | 0.299 | 0.343 | 0.326 | 0.39 | 0.51 | 0.492 | 0.626 | 0.56 | 0.543 | 0.51
ETTm1 | 192 | 0.332 | 0.361 | 0.333 | 0.370 | 0.335 | 0.365 | 0.365 | 0.415 | 0.514 | 0.495 | 0.725 | 0.619 | 0.557 | 0.537
ETTm1 | 336 | 0.362 | 0.377 | 0.369 | 0.392 | 0.369 | 0.386 | 0.392 | 0.425 | 0.51 | 0.492 | 1.005 | 0.741 | 0.754 | 0.655
ETTm1 | 720 | 0.418 | 0.409 | 0.416 | 0.420 | 0.425 | 0.421 | 0.446 | 0.458 | 0.527 | 0.493 | 1.133 | 0.845 | 0.908 | 0.724
ETTm2 | 96 | 0.160 | 0.246 | 0.166 | 0.256 | 0.167 | 0.260 | 0.18 | 0.271 | 0.205 | 0.293 | 0.355 | 0.462 | 0.435 | 0.507
ETTm2 | 192 | 0.219 | 0.288 | 0.223 | 0.296 | 0.224 | 0.303 | 0.252 | 0.318 | 0.278 | 0.336 | 0.595 | 0.586 | 0.73 | 0.673
ETTm2 | 336 | 0.271 | 0.321 | 0.274 | 0.329 | 0.281 | 0.342 | 0.324 | 0.364 | 0.343 | 0.379 | 1.27 | 0.871 | 1.201 | 0.845
ETTm2 | 720 | 0.360 | 0.380 | 0.362 | 0.385 | 0.397 | 0.421 | 0.41 | 0.42 | 0.414 | 0.419 | 3.001 | 1.267 | 3.625 | 1.451
Exchange | 96 | 0.084 | 0.201 | – | – | 0.081 | 0.203 | 0.148 | 0.278 | 0.197 | 0.323 | 0.847 | 0.752 | 0.376 | 1.105
Exchange | 192 | 0.156 | 0.296 | – | – | 0.157 | 0.293 | 0.271 | 0.38 | 0.3 | 0.369 | 1.204 | 0.895 | 1.748 | 1.151
Exchange | 336 | 0.266 | 0.403 | – | – | 0.305 | 0.414 | 0.46 | 0.5 | 0.509 | 0.524 | 1.672 | 1.036 | 1.874 | 1.172
Exchange | 720 | 0.665 | 0.649 | – | – | 0.643 | 0.601 | 1.195 | 0.841 | 1.447 | 0.941 | 2.478 | 1.31 | 1.943 | 1.206
Table 4. Quantitative improvements of our ELM model with respect to best-performing existing models.
Dataset | Avg. % Improvement over PatchTST/64 (MSE) | Avg. % Improvement over PatchTST/64 (MAE) | Avg. % Improvement over DLinear (MSE) | Avg. % Improvement over DLinear (MAE)
Weather | 4.79% | 5.86% | 13.65% | 17.68%
Traffic | −7.54% | −4.96% | 3.25% | 6.20%
Electricity | −0.45% | 1.14% | 4.16% | 5.26%
Illness | −10.31% | 1.11% | 25.09% | 23.49%
ETTh1 | 2.07% | 3.22% | 4.18% | 3.66%
ETTh2 | −0.74% | −0.99% | 20.17% | 12.89%
ETTm1 | 0.60% | 2.80% | 1.78% | 1.94%
ETTm2 | 1.76% | 2.59% | 4.83% | 6.55%
Exchange | – | – | 1.58% | −1.34%
Table 5. Comparison of our Swin transformer model with other models on the time series datasets. Results highlighted in bold signify the best performance, while those underlined indicate the second-highest achievement.
Dataset | T | Swin (Ours) MSE | Swin (Ours) MAE | ELM (Ours) MSE | ELM (Ours) MAE | PatchTST/64 MSE | PatchTST/64 MAE | DLinear MSE | DLinear MAE | FEDformer MSE | FEDformer MAE
Weather | 96 | 0.173 | 0.224 | 0.140 | 0.184 | 0.149 | 0.198 | 0.176 | 0.237 | 0.238 | 0.314
Weather | 192 | 0.227 | 0.268 | 0.183 | 0.226 | 0.194 | 0.241 | 0.22 | 0.282 | 0.275 | 0.329
Weather | 336 | 0.277 | 0.305 | 0.233 | 0.266 | 0.245 | 0.282 | 0.265 | 0.319 | 0.339 | 0.377
Weather | 720 | 0.333 | 0.345 | 0.306 | 0.319 | 0.314 | 0.334 | 0.323 | 0.362 | 0.389 | 0.409
Traffic | 96 | 0.621 | 0.342 | 0.398 | 0.265 | 0.360 | 0.249 | 0.41 | 0.282 | 0.576 | 0.359
Traffic | 192 | 0.651 | 0.359 | 0.408 | 0.269 | 0.379 | 0.256 | 0.423 | 0.287 | 0.61 | 0.38
Traffic | 336 | 0.648 | 0.353 | 0.417 | 0.274 | 0.392 | 0.264 | 0.436 | 0.296 | 0.608 | 0.375
Traffic | 720 | 0.384 | 0.4509 | 0.456 | 0.299 | 0.432 | 0.286 | 0.466 | 0.315 | 0.621 | 0.375
Electricity | 96 | 0.189 | 0.296 | 0.131 | 0.223 | 0.129 | 0.222 | 0.14 | 0.237 | 0.186 | 0.302
Electricity | 192 | 0.191 | 0.296 | 0.146 | 0.236 | 0.147 | 0.240 | 0.153 | 0.249 | 0.197 | 0.311
Electricity | 336 | 0.205 | 0.3107 | 0.162 | 0.253 | 0.163 | 0.259 | 0.169 | 0.267 | 0.213 | 0.328
Electricity | 720 | 0.228 | 0.327 | 0.200 | 0.287 | 0.197 | 0.29 | 0.203 | 0.301 | 0.233 | 0.344
ILI | 24 | 5.806 | 1.800 | 1.820 | 0.809 | 1.319 | 0.754 | 2.215 | 1.081 | 2.624 | 1.095
ILI | 36 | 6.931 | 1.968 | 1.574 | 0.775 | 1.579 | 0.87 | 1.963 | 0.963 | 2.516 | 1.021
ILI | 48 | 6.581 | 1.904 | 1.564 | 0.793 | 1.553 | 0.815 | 2.13 | 1.024 | 2.505 | 1.041
ILI | 60 | 6.901 | 1.968 | 1.512 | 0.803 | 1.470 | 0.788 | 2.368 | 1.096 | 2.742 | 1.122
ETTh1 | 96 | 0.592 | 0.488 | 0.362 | 0.389 | 0.370 | 0.400 | 0.375 | 0.399 | 0.376 | 0.415
ETTh1 | 192 | 0.542 | 0.514 | 0.398 | 0.412 | 0.413 | 0.429 | 0.405 | 0.416 | 0.423 | 0.446
ETTh1 | 336 | 0.537 | 0.518 | 0.421 | 0.427 | 0.422 | 0.440 | 0.439 | 0.443 | 0.444 | 0.462
ETTh1 | 720 | 0.614 | 0.571 | 0.437 | 0.453 | 0.447 | 0.468 | 0.472 | 0.490 | 0.469 | 0.492
ETTh2 | 96 | 0.360 | 0.405 | 0.263 | 0.331 | 0.274 | 0.337 | 0.289 | 0.353 | 0.332 | 0.374
ETTh2 | 192 | 0.386 | 0.426 | 0.318 | 0.369 | 0.341 | 0.382 | 0.383 | 0.418 | 0.407 | 0.446
ETTh2 | 336 | 0.372 | 0.421 | 0.348 | 0.399 | 0.329 | 0.384 | 0.448 | 0.465 | 0.4 | 0.447
ETTh2 | 720 | 0.424 | 0.454 | 0.409 | 0.444 | 0.379 | 0.422 | 0.605 | 0.551 | 0.412 | 0.469
ETTm1 | 96 | 0.400 | 0.421 | 0.291 | 0.338 | 0.293 | 0.346 | 0.299 | 0.343 | 0.326 | 0.39
ETTm1 | 192 | 0.429 | 0.443 | 0.332 | 0.361 | 0.333 | 0.370 | 0.335 | 0.365 | 0.365 | 0.415
ETTm1 | 336 | 0.439 | 0.447 | 0.362 | 0.377 | 0.369 | 0.392 | 0.369 | 0.386 | 0.392 | 0.425
ETTm1 | 720 | 0.477 | 0.466 | 0.418 | 0.409 | 0.416 | 0.420 | 0.425 | 0.421 | 0.446 | 0.458
ETTm2 | 96 | 0.210 | 0.292 | 0.160 | 0.246 | 0.166 | 0.256 | 0.167 | 0.260 | 0.18 | 0.271
ETTm2 | 192 | 0.264 | 0.325 | 0.219 | 0.288 | 0.223 | 0.296 | 0.224 | 0.303 | 0.252 | 0.318
ETTm2 | 336 | 0.311 | 0.356 | 0.271 | 0.321 | 0.274 | 0.329 | 0.281 | 0.342 | 0.324 | 0.364
ETTm2 | 720 | 0.408 | 0.412 | 0.360 | 0.380 | 0.362 | 0.385 | 0.397 | 0.421 | 0.41 | 0.42
Table 6. Comparison of our ELM model to other recently published models. Results highlighted in bold signify the best performance, while those underlined indicate the second-highest achievement.
Dataset (T = 720) | ELM (Ours) MSE | ELM (Ours) MAE | SpaceTime MSE | SpaceTime MAE | DLinear MSE | DLinear MAE | FEDformer MSE | FEDformer MAE | Autoformer MSE | Autoformer MAE | Time Machine (Mamba) MSE | Time Machine (Mamba) MAE
ETTh1 | 0.448 | 0.463 | 0.499 | 0.48 | 0.440 | 0.453 | 0.506 | 0.507 | 0.514 | 0.512 | 0.462 | 0.475
ETTh2 | 0.387 | 0.428 | 0.402 | 0.434 | 0.394 | 0.436 | 0.463 | 0.474 | 0.515 | 0.511 | 0.412 | 0.441
ETTm1 | 0.415 | 0.409 | 0.408 | 0.415 | 0.433 | 0.422 | 0.543 | 0.49 | 0.671 | 0.561 | 0.430 | 0.429
ETTm2 | 0.348 | 0.377 | 0.358 | 0.378 | 0.368 | 0.384 | 0.421 | 0.415 | 0.433 | 0.432 | 0.380 | 0.396
