The Time Series Classification of Discrete-Time Chaotic Systems Using Deep Learning Approaches
Abstract
1. Introduction
- A large and unique dataset was created with various initial conditions and control parameters using nine different discrete-time chaotic systems.
- The created dataset was classified using deep learning models such as DenseNet121, VGG16, VGG19, InceptionV3, MobileNetV2, and Xception. The performance of these models was tested by combining them with various classification algorithms (k-NN, SVM, XGBOOST, and RF), and an increase in performance was observed.
- A 95.76% accuracy rate was achieved by integrating the DenseNet121 model and the XGBOOST classification algorithm. This result shows the power of deep learning approaches in classifying chaotic systems.
- This study shows that deep learning models can be used with high accuracy rates on the time series data of chaotic systems and provides an important basis for future studies in this field in the literature.
- The method presented in this study and the results obtained contribute to a better understanding of chaotic systems and encourage the use of deep learning methods in this field in the future.
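The highlights above describe a two-stage pipeline: a pretrained CNN produces feature vectors, and a classical classifier is fit on top of them. The following is a minimal sketch of that idea only; random vectors stand in for DenseNet121 features, scikit-learn's SVC stands in for the SVM/XGBOOST/RF/k-NN options, and none of the values reflect the paper's actual dataset or implementation.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in "CNN features": 64-dim vectors for two synthetic, well-separated
# classes (real features would come from a frozen DenseNet121 backbone).
feats_a = rng.normal(loc=0.0, scale=0.5, size=(50, 64))
feats_b = rng.normal(loc=3.0, scale=0.5, size=(50, 64))
X = np.vstack([feats_a, feats_b])
y = np.array([0] * 50 + [1] * 50)

# Classical classifier head fit on the fixed feature vectors.
clf = SVC(kernel="rbf").fit(X, y)
print(clf.score(X, y))  # well-separated toy clusters classify easily
```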
2. Material and Methods
2.1. The Chaotic Systems Used and the Obtained Dataset
2.2. Procurement of the Dataset
2.3. Deep Learning Models
2.4. Classification Algorithms
2.4.1. K-Nearest Neighbors (k-NN) Algorithm
- 1. The Euclidean distance between a test point $x$ and a training point $y$ in $n$-dimensional feature space is calculated as follows:

  $d(x, y) = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2}$

- 2. In the k-NN algorithm, the class of a test data point is determined by considering the class labels of the k nearest neighbors in the training dataset. The classification rule is expressed as follows:

  $\hat{y} = \arg\max_{c} \sum_{i=1}^{k} I(y_i = c)$

- $\hat{y}$ represents the predicted class.
- $c$ represents a class label.
- $k$ is the number of nearest neighbors considered in the classification.
- $I(y_i = c)$ checks if $y_i$ belongs to class $c$. If the result is true, 1 is returned; otherwise, 0 is returned.
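The distance computation and majority vote can be sketched in a few lines of NumPy; the toy points and the choice of k below are illustrative, not from the paper's dataset.

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training points
    under the Euclidean distance d(x, y) = sqrt(sum (x_i - y_i)^2)."""
    dists = np.sqrt(((X_train - x) ** 2).sum(axis=1))  # Euclidean distances
    nearest = np.argsort(dists)[:k]                    # indices of k closest points
    votes = Counter(y_train[i] for i in nearest)       # tally I(y_i = c) per class
    return votes.most_common(1)[0][0]                  # arg max over classes

# Two toy clusters: class 0 near the origin, class 1 near (5, 5)
X = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9]])
y = np.array([0, 0, 1, 1])
print(knn_predict(X, y, np.array([0.1, 0.2]), k=3))  # -> 0 (near the origin)
```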
2.4.2. Support Vector Machine (SVM) Algorithm
- 1. The hyperplane separating the data is as follows:

  $w \cdot x + b = 0$

  - $w$ is the weight vector (determining the direction of the hyperplane),
  - $x$ is the input feature vector,
  - $b$ is the bias term that determines the shift of the hyperplane from the origin.

- 2. The optimization problem to find the optimal hyperplane is as follows:

  $\min_{w, b} \ \frac{1}{2} \|w\|^2$

- 3. Under the following conditions:

  $y_i (w \cdot x_i + b) \geq 1, \quad i = 1, \ldots, N$

  - $\|w\|^2$ is the squared norm of the weight vector, which is minimized to maximize the margin,
  - $y_i$ is the class label of the i-th training example, where $y_i \in \{-1, 1\}$,
  - $x_i$ is the feature vector of the i-th training example.
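The decision rule and the margin constraint can be checked numerically. In the sketch below, the weight vector w, the bias b, and the four training points are hand-picked for illustration; they are not the output of the optimization problem above.

```python
import numpy as np

# An illustrative separating hyperplane w·x + b = 0 for two toy classes.
w = np.array([1.0, 1.0])
b = -4.0

def svm_decision(x):
    """The sign of w·x + b assigns the class label in {-1, +1}."""
    return 1 if np.dot(w, x) + b >= 0 else -1

X = np.array([[1.0, 1.0], [1.5, 0.5], [3.0, 3.0], [2.5, 3.5]])
y = np.array([-1, -1, 1, 1])

# Margin constraint: y_i (w·x_i + b) >= 1 must hold for all training points.
margins = y * (X @ w + b)
print([svm_decision(x) for x in X])  # -> [-1, -1, 1, 1]
print(bool(np.all(margins >= 1)))    # -> True: all constraints satisfied
```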
2.4.3. Random Forest (RF) Algorithm
- 1. Each decision tree usually uses the Gini index or entropy as its splitting criterion:

  $Gini(D) = 1 - \sum_{i=1}^{C} p_i^2$

  $H(D) = -\sum_{i=1}^{C} p_i \log_2 p_i$

  - $Gini(D)$ is the Gini index for dataset $D$, which measures the impurity or homogeneity of the dataset.
  - $H(D)$ is the entropy of dataset $D$, representing the level of disorder or uncertainty in the data.
  - $p_i$ is the proportion of examples in class $i$.
  - $C$ represents the total number of classes.

- 2. The RF determines the final classification result by using the majority vote of the predictions made independently by each decision tree. The final classification for a data point $x$ is performed as follows:

  $\hat{y}(x) = \mathrm{mode}\{h_1(x), h_2(x), \ldots, h_T(x)\}$

  where $h_t(x)$ is the prediction of the $t$-th tree and $T$ is the number of trees in the forest.
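Both impurity measures and the majority vote are short enough to sketch with the standard library alone; the label sets below are toy examples.

```python
import math
from collections import Counter

def gini(labels):
    """Gini index G(D) = 1 - sum_i p_i^2 over the class proportions p_i."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def entropy(labels):
    """Entropy H(D) = -sum_i p_i log2(p_i) over the class proportions p_i."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def forest_vote(tree_predictions):
    """Majority vote over the independent per-tree predictions for one point."""
    return Counter(tree_predictions).most_common(1)[0][0]

labels = ["a", "a", "b", "b"]        # a perfectly mixed two-class set
print(gini(labels))                  # -> 0.5 (maximum impurity for 2 classes)
print(entropy(labels))               # -> 1.0 bit
print(forest_vote(["a", "b", "a"]))  # -> "a"
```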
2.4.4. Extreme Gradient Boosting (XGBoost) Algorithm
- 1. The prediction function is as follows:

  $\hat{y}_i = \sum_{t=1}^{T} f_t(x_i)$

  where each $f_t$ is a regression tree and $T$ is the number of boosting rounds.

- 2. The loss function is as follows:

  $\mathcal{L} = \sum_{i} l(\hat{y}_i, y_i) + \sum_{t} \Omega(f_t)$

  - $l$ is the loss function and $\Omega$ is the regularization term controlling the model complexity.

- 3. The optimal leaf weights are as follows:

  $w_j^* = -\dfrac{G_j}{H_j + \lambda}$

  where $G_j$ and $H_j$ are the sums of the first- and second-order gradients of the loss over the examples in leaf $j$, and $\lambda$ is the regularization coefficient.
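The additive prediction and the leaf-weight formula can be sketched as plain functions. The gradient values and stub trees below are illustrative, and the leaf-weight formula assumes XGBoost's usual second-order approximation (for squared-error loss, g_i = ŷ_i - y_i and h_i = 1).

```python
def optimal_leaf_weight(grads, hessians, lam=1.0):
    """w* = -G / (H + lambda), with G = sum of g_i and H = sum of h_i
    over the training examples that fall into the leaf."""
    return -sum(grads) / (sum(hessians) + lam)

def boosted_prediction(x, trees):
    """y_hat(x) = sum_t f_t(x): each tree adds its score to the ensemble."""
    return sum(tree(x) for tree in trees)

# Toy leaf with three examples under squared-error loss (all h_i = 1).
grads = [0.5, -0.25, 0.75]
hessians = [1.0, 1.0, 1.0]
print(optimal_leaf_weight(grads, hessians))  # -> -1.0 / (3.0 + 1.0) = -0.25

# Two stub "trees" that each return a constant score.
trees = [lambda x: 0.1, lambda x: 0.2]
print(boosted_prediction(None, trees))  # the two scores summed
```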
3. Proposed Models
4. Results and Discussion
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Umar, T.; Nadeem, M.; Anwer, F. A new modified Skew Tent Map and its application in pseudo-random number generator. Comput. Stand. Interfaces 2024, 89, 103826. [Google Scholar] [CrossRef]
- Murillo-Escobar, D.; Vega-Pérez, K.; Murillo-Escobar, M.A.; Arellano-Delgado, A.; López-Gutiérrez, R.M. Comparison of two new chaos-based pseudorandom number generators implemented in microcontroller. Integration 2024, 96, 102130. [Google Scholar] [CrossRef]
- Emin, B.; Akgul, A.; Horasan, F.; Gokyildirim, A.; Calgan, H.; Volos, C. Secure Encryption of Biomedical Images Based on Arneodo Chaotic System with the Lowest Fractional-Order Value. Electronics 2024, 13, 2122. [Google Scholar] [CrossRef]
- Li, L. A novel chaotic map application in image encryption algorithm. Expert Syst. Appl. 2024, 252, 124316. [Google Scholar] [CrossRef]
- Kıran, H.E. A Novel Chaos-Based Encryption Technique with Parallel Processing Using CUDA for Mobile Powerful GPU Control Center. Chaos Fractals 2024, 1, 6–18. [Google Scholar] [CrossRef]
- Patidar, V.; Kaur, G. Lossless Image Encryption using Robust Chaos-based Dynamic DNA Coding, XORing and Complementing. Chaos Theory Appl. 2023, 5, 178–187. [Google Scholar] [CrossRef]
- Aparna, H.; Madhumitha, J. Combined image encryption and steganography technique for enhanced security using multiple chaotic maps. Comput. Electr. Eng. 2023, 110, 108824. [Google Scholar] [CrossRef]
- Hue, T.T.K.; Linh, N.T.; Nguyen-Duc, M.; Hoang, T.M. Data Hiding in Bit-plane Medical Image Using Chaos-based Steganography. In Proceedings of the 2021 International Conference on Multimedia Analysis and Pattern Recognition (MAPR), Virtual Conference, 15–16 October 2021; pp. 1–6. [Google Scholar]
- Rahman, Z.-A.S.A.; Jasim, B.H. Hidden Dynamics Investigation, Fast Adaptive Synchronization, and Chaos-Based Secure Communication Scheme of a New 3D Fractional-Order Chaotic System. Inventions 2022, 7, 108. [Google Scholar] [CrossRef]
- Nguyen, Q.D.; Giap, V.N.; Tran, V.H.; Pham, D.-H.; Huang, S.-C. A Novel Disturbance Rejection Method Based on Robust Sliding Mode Control for the Secure Communication of Chaos-Based System. Symmetry 2022, 14, 1668. [Google Scholar] [CrossRef]
- Reddy, N.; Sadanandachary, A. A Secure Communication System of Synchronized Chua’s Circuits in LC Parallel Coupling. Chaos Theory Appl. 2023, 5, 167–177. [Google Scholar] [CrossRef]
- Ayubi, P.; Jafari Barani, M.; Yousefi Valandar, M.; Yosefnezhad Irani, B.; Sedagheh Maskan Sadigh, R. A new chaotic complex map for robust video watermarking. Artif. Intell. Rev. 2021, 54, 1237–1280. [Google Scholar] [CrossRef]
- Kumar, S.; Chauhan, A.; Alam, K. Weighted and Well-Balanced Nonlinear TV-Based Time-Dependent Model for Image Denoising. Chaos Theory Appl. 2023, 5, 300–307. [Google Scholar] [CrossRef]
- Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2014; pp. 580–587. [Google Scholar] [CrossRef]
- Dahl, G.E.; Yu, D.; Deng, L.; Acero, A. Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition. IEEE Trans. Audio Speech Lang. Process. 2012, 20, 30–42. [Google Scholar] [CrossRef]
- Mikolov, T.; Chen, K.; Corrado, G.; Dean, J. Efficient estimation of word representations in vector space. arXiv 2013, arXiv:1301.3781. [Google Scholar]
- Ravi, D.; Wong, C.; Deligianni, F.; Berthelot, M.; Andreu-Perez, J.; Lo, B.; Yang, G.Z. Deep Learning for Health Informatics. IEEE J. Biomed. Health Inform. 2017, 21, 4–21. [Google Scholar] [CrossRef]
- Neil, D.; Pfeiffer, M.; Liu, S.-C. Phased LSTM: Accelerating Recurrent Network Training for Long or Event-based Sequences. In Advances in Neural Information Processing Systems; Lee, D., Sugiyama, M., Luxburg, U., Guyon, I., Garnett, R., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2016; Volume 29. [Google Scholar]
- Habib, Z.; Khan, J.S.; Ahmad, J.; Khan, M.A.; Khan, F.A. Secure speech communication algorithm via DCT and TD-ERCS chaotic map. In Proceedings of the 2017 4th International Conference on Electrical and Electronic Engineering (ICEEE), Ankara, Turkey, 8–10 April 2017; pp. 246–250. [Google Scholar] [CrossRef]
- Boullé, N.; Dallas, V.; Nakatsukasa, Y.; Samaddar, D. Classification of Chaotic Time Series with Deep Learning; Elsevier: Amsterdam, The Netherlands, 2020. [Google Scholar]
- Jia, B.; Wu, H.; Guo, K. Chaos theory meets deep learning: A new approach to time series forecasting. Expert Syst. Appl. 2024, 255, 124533. [Google Scholar] [CrossRef]
- Aricioğlu, B.; Uzun, S.; Kaçar, S. Deep learning based classification of time series of Chen and Rössler chaotic systems over their graphic images. Phys. D Nonlinear Phenom. 2022, 435, 133306. [Google Scholar] [CrossRef]
- Uzun, S.; Kaçar, S.; Arıcıoğlu, B. Deep learning based classification of time series of chaotic systems over graphic images. Multimed. Tools Appl. 2024, 83, 8413–8437. [Google Scholar] [CrossRef]
- Pourafzal, A.; Fereidunian, A.; Safarihamid, K. Chaotic Time Series Recognition: A Deep Learning Model Inspired by Complex Systems Characteristics. Int. J. Eng. Trans. A Basics 2023, 36, 1–9. [Google Scholar] [CrossRef]
- Sun, C.; Wu, W.; Zhang, Z.; Li, Z.; Ji, B.; Wang, C. Time series clustering of dynamical systems via deterministic learning. Int. J. Mach. Learn. Cybern. 2024, 15, 2761–2779. [Google Scholar] [CrossRef]
- Huang, W.; Li, Y.; Huang, Y. Deep Hybrid Neural Network and Improved Differential Neuroevolution for Chaotic Time Series Prediction. IEEE Access 2020, 8, 159552–159565. [Google Scholar] [CrossRef]
- Akgöz, B.; Civalek, Ö. Buckling Analysis of Functionally Graded Tapered Microbeams via Rayleigh–Ritz Method. Mathematics 2022, 10, 4429. [Google Scholar] [CrossRef]
- Le Berre, S.; Ramière, I.; Fauque, J.; Ryckelynck, D. Condition Number and Clustering-Based Efficiency Improvement of Reduced-Order Solvers for Contact Problems Using Lagrange Multipliers. Mathematics 2022, 10, 1495. [Google Scholar] [CrossRef]
- May, R.M. Simple mathematical models with very complicated dynamics. Nature 1976, 261, 459–467. [Google Scholar] [CrossRef] [PubMed]
- Pak, C.; An, K.; Jang, P.; Kim, J.; Kim, S. A novel bit-level color image encryption using improved 1D chaotic map. Multimed. Tools Appl. 2019, 78, 12027–12042. [Google Scholar] [CrossRef]
- Strogatz, S.H. Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering, 2nd ed.; Westview Press: Boulder, CO, USA, 2015. [Google Scholar]
- Devaney, R.L. An Introduction to Chaotic Dynamical Systems, 2nd ed.; Addison-Wesley: Redwood City, CA, USA, 1984; ISBN 978-0201156808. [Google Scholar]
- Wan-Zhen, Z.; Ming-Zhou, D.; Jia-Nan, L. Symbolic description of periodic windows in the antisymmetric cubic map. Chin. Phys. Lett. 1985, 2, 293. [Google Scholar] [CrossRef]
- Potapov, A.; Ali, M. Robust chaos in neural networks. Phys. Lett. A 2000, 277, 310–322. [Google Scholar] [CrossRef]
- Shaw, R. Strange Attractors, Chaotic Behavior, and Information Flow. Z. Naturforschung A 1981, 36, 80–112. [Google Scholar] [CrossRef]
- Beck, C.; Schlögl, F. Thermodynamics of Chaotic Systems; Cambridge University Press: New York, NY, USA, 1995. [Google Scholar]
- Ricker, W.E. Stock and Recruitment. J. Fish. Res. Board Canada 1954, 11, 559–623. [Google Scholar] [CrossRef]
- Van Wyk, M.A.; Steeb, W.-H. Chaos in Electronics; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1997. [Google Scholar]
- Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708. [Google Scholar]
- Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception Architecture for Computer Vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826. [Google Scholar] [CrossRef]
- Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A. Mobilenetv2: Inverted residuals and linear bottlenecks. arXiv 2018, arXiv:1801.04381. [Google Scholar]
- Chollet, F. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1251–1258. [Google Scholar]
- Cover, T.; Hart, P. Nearest neighbor pattern classification. IEEE Trans. Inf. Theory 1967, 13, 21–27. [Google Scholar] [CrossRef]
- Boser, B.E.; Guyon, I.M.; Vapnik, V.N. A training algorithm for optimal margin classifiers. In Proceedings of the 5th Annual ACM Workshop on Computational Learning Theory, Pittsburgh, PA, USA, 27–29 July 1992; pp. 144–152. [Google Scholar]
- Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
- Chen, T.; Guestrin, C. XGBoost: A Scalable Tree Boosting System. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016. [Google Scholar] [CrossRef]
- Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2015, arXiv:1412.6980. [Google Scholar]
- Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016. [Google Scholar]
- Bergstra, J.; Bengio, Y. Random search for hyper-parameter optimization. J. Mach. Learn. Res. 2012, 13, 281–305. [Google Scholar]
- Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 618–626. [Google Scholar]
- Shorten, C.; Khoshgoftaar, T.M. A survey on image data augmentation for deep learning. J. Big Data 2019, 6, 1–48. [Google Scholar] [CrossRef]
- Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958. [Google Scholar]
- Pan, S.J.; Yang, Q. A Survey on Transfer Learning. IEEE Trans. Knowl. Data Eng. 2010, 22, 1345–1359. [Google Scholar] [CrossRef]
- Dean, J.; Corrado, G.S.; Monga, R.; Chen, K.; Devin, M.; Le, Q.V.; Mao, M.R.; Senior, A.; Tucker, P.; Yang, K.; et al. Large scale distributed deep networks. In Advances in Neural Information Processing Systems; Curran Associates, Inc.: Red Hook, NY, USA, 2012; pp. 1223–1231. [Google Scholar]
System | Control Parameters (a) | Number of Iterations | The Initial Conditions |
---|---|---|---|
Logistic map | [3.6, 3.626, 3.652, 3.678, 3.704, 3.73, 3.756, 3.782, 3.808, 3.834, 3.86, 3.886, 3.912, 3.938, 3.964, 3.99] | 100, 150, 200, 250, 300 | [0.1, 0.1572, 0.2144, 0.2716, 0.3288, 0.386, 0.4432, 0.5004, 0.5576, 0.6148, 0.672, 0.7292, 0.7864, 0.8436] |
Sine map | [2.0, 2.032, 2.064, 2.096, 2.128, 2.16, 2.192, 2.224, 2.256, 2.288, 2.32, 2.352, 2.384, 2.416, 2.448, 2.48] | 100, 150, 200, 250, 300 | [−1.0, −0.857, −0.714, −0.571, −0.428, −0.285, −0.142, 0.001, 0.144, 0.287, 0.43, 0.573, 0.716, 0.859] |
Tent map | [1.5, 1.526, 1.552, 1.578, 1.604, 1.63, 1.656, 1.682, 1.708, 1.734, 1.76, 1.786, 1.812, 1.838, 1.864, 1.89] | 100, 150, 200, 250, 300 | [0.1, 0.165, 0.23, 0.295, 0.36, 0.425, 0.49, 0.555, 0.62, 0.685, 0.75, 0.815, 0.88, 0.945] |
Cubic map | [2.88, 2.8869, 2.8938, 2.9007, 2.9076, 2.9145, 2.9214, 2.9283, 2.9352, 2.9421, 2.949, 2.9559, 2.9628, 2.9697, 2.9766, 2.9835] | 100, 150, 200, 250, 300 | [0.1, 0.153, 0.206, 0.259, 0.312, 0.365, 0.418, 0.471, 0.524, 0.577, 0.63, 0.683, 0.736, 0.789] |
Pinchers map | [0.49, 0.4938, 0.4976, 0.5014, 0.5052, 0.509, 0.5128, 0.5166, 0.5204, 0.5242, 0.528, 0.5318, 0.5356, 0.5394, 0.5432, 0.547] | 100, 150, 200, 250, 300 | [0.0, 0.072, 0.144, 0.216, 0.288, 0.36, 0.432, 0.504, 0.576, 0.648, 0.72, 0.792, 0.864, 0.936] |
Spence map | [0.9, 0.957, 1.014, 1.071, 1.128, 1.185, 1.242, 1.299, 1.356, 1.413, 1.47, 1.527, 1.584, 1.641, 1.698, 1.755] | 100, 150, 200, 250, 300 | [0.1, 0.308, 0.516, 0.724, 0.932, 1.14, 1.348, 1.556, 1.764, 1.972, 2.18, 2.388, 2.596, 2.804] |
Cusp map | [1.5, 1.5256, 1.5512, 1.5768, 1.6024, 1.628, 1.6536, 1.6792, 1.7048, 1.7304, 1.756, 1.7816, 1.8072, 1.8328, 1.8584, 1.884] | 100, 150, 200, 250, 300 | [0.1, 0.165, 0.23, 0.295, 0.36, 0.425, 0.49, 0.555, 0.62, 0.685, 0.75, 0.815, 0.88, 0.945] |
Ricker’s Population Model | [19.0, 19.126, 19.252, 19.378, 19.504, 19.63, 19.756, 19.882, 20.008, 20.134, 20.26, 20.386, 20.512, 20.638, 20.764, 20.89] | 100, 150, 200, 250, 300 | [1.0, 1.072, 1.144, 1.216, 1.288, 1.36, 1.432, 1.504, 1.576, 1.648, 1.72, 1.792, 1.864, 1.936] |
Gauss map | [1.7, 1.7076, 1.7152, 1.7228, 1.7304, 1.738, 1.7456, 1.7532, 1.7608, 1.7684, 1.776, 1.7836, 1.7912, 1.7988, 1.8064, 1.814] | 100, 150, 200, 250, 300 | [0.15, 0.176, 0.202, 0.228, 0.254, 0.28, 0.306, 0.332, 0.358, 0.384, 0.41, 0.436, 0.462, 0.488] |
[Figure: time series plots and phase portraits for each of the nine maps (Logistic, Sine, Tent, Cubic, Pinchers, Spence, Cusp, Ricker’s population model, and Gauss); images not preserved in this extraction.]
CNN Models | Accuracy | Sensitivity | Specificity | Precision | F1 Score |
---|---|---|---|---|---|
DenseNet121 | 0.9454 | 1.000 | 0.9454 | 0.9454 | 0.9454 |
VGG19 | 0.9409 | 1.000 | 0.9409 | 0.9409 | 0.9409 |
VGG16 | 0.8883 | 1.000 | 0.8883 | 0.9409 | 0.9139 |
InceptionV3 | 0.9370 | 1.000 | 0.9370 | 0.9409 | 0.9389 |
MobileNetV2 | 0.9107 | 1.000 | 0.9107 | 0.9409 | 0.9255 |
Xception | 0.9300 | 1.000 | 0.9300 | 0.9409 | 0.9354 |
CNN Models | Classification | Accuracy | Sensitivity | Specificity | Precision | F1 Score |
---|---|---|---|---|---|---|
DenseNet121 | SVM | 0.9335 | 1.000 | 0.9966 | 1.000 | 0.9983 |
| XGBOOST | 0.9576 | 1.000 | 1.000 | 1.000 | 1.000 |
| RF | 0.9222 | 1.000 | 0.9966 | 1.000 | 0.9983 |
| KNN | 0.8644 | 1.000 | 1.000 | 1.000 | 1.000 |
VGG19 | SVM | 0.8630 | 1.000 | 1.000 | 1.000 | 1.000 |
| XGBOOST | 0.9315 | 1.000 | 1.000 | 1.000 | 1.000 |
| RF | 0.8872 | 0.9967 | 1.000 | 0.9965 | 0.9982 |
| KNN | 0.8617 | 1.000 | 1.000 | 1.000 | 1.000 |
VGG16 | SVM | 0.8634 | 1.000 | 1.000 | 1.000 | 1.000 |
| XGBOOST | 0.9222 | 1.000 | 1.000 | 1.000 | 1.000 |
| RF | 0.8849 | 1.000 | 1.000 | 1.000 | 1.000 |
| KNN | 0.8568 | 1.000 | 1.000 | 1.000 | 1.000 |
InceptionV3 | SVM | 0.8389 | 1.000 | 1.000 | 1.000 | 1.000 |
| XGBOOST | 0.8085 | 1.000 | 0.9959 | 1.000 | 0.9979 |
| RF | 0.7182 | 1.000 | 1.000 | 1.000 | 1.000 |
| KNN | 0.7235 | 0.9967 | 0.9958 | 0.9958 | 0.9958 |
MobileNetV2 | SVM | 0.8885 | 1.000 | 1.000 | 1.000 | 1.000 |
| XGBOOST | 0.8753 | 1.000 | 1.000 | 1.000 | 1.000 |
| RF | 0.7903 | 0.9933 | 0.9961 | 0.9923 | 0.9942 |
| KNN | 0.7480 | 0.9964 | 1.000 | 0.9960 | 0.9980 |
Xception | SVM | 0.9087 | 1.000 | 1.000 | 1.000 | 1.000 |
| XGBOOST | 0.8902 | 1.000 | 1.000 | 1.000 | 1.000 |
| RF | 0.8214 | 1.000 | 1.000 | 1.000 | 1.000 |
| KNN | 0.7814 | 0.9936 | 0.9923 | 0.9923 | 0.9923 |
Authors | Methods | Chaotic Systems | Acc (%) |
---|---|---|---|
2024, Sun et al. [25] | Deterministic learning + k-means | Lorenz, Rössler | 84.00 |
2020, Huang et al. [26] | Hybrid neural network + traditional forecasting | Chaotic time series | 85.50 |
2016, Neil et al. [18] | Phased LSTM (Classifying chaotic systems with time series data) | Various chaotic time series | 83.20 |
1992, Boser et al. [44] | SVM (with a basic chaotic time series) | Chaotic Electronic Systems | 78.30 |
1997, Van Wyk and Steeb [38] | Chaos in Electronics (classic neural network models) | Chaotic Electronic Systems | 75.00 |
2024, this paper | CNN models (DenseNet121, VGG19, VGG16, InceptionV3, MobileNetV2, and Xception) + classification algorithms (SVM, XGBOOST, RF, and KNN) | Logistic map, Sine map, Tent map, etc. | 95.76 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Akmeşe, Ö.F.; Emin, B.; Alaca, Y.; Karaca, Y.; Akgül, A. The Time Series Classification of Discrete-Time Chaotic Systems Using Deep Learning Approaches. Mathematics 2024, 12, 3052. https://doi.org/10.3390/math12193052