Time Series Data Generation Method with High Reliability Based on ACGAN
Abstract
1. Introduction
- First, the ideas of WGAN-GP are combined with ACGAN to optimize the model and improve training stability. The Wasserstein distance replaces the Jensen-Shannon (JS) and Kullback-Leibler (KL) divergences as the measure of the distribution difference between generated and real data, and a gradient penalty replaces weight clipping to satisfy the K-Lipschitz continuity condition, avoiding gradient vanishing or explosion while retaining the advantages of the Wasserstein distance;
- Then, a modified discriminator model is established by incorporating a Bi-LSTM network layer. By combining forward and backward sequence information, the Bi-LSTM fully exploits the temporal characteristics of time series data, addressing the insufficient feature extraction of existing GAN models;
- Finally, the generator’s objective function is improved by adding a center loss term, which reduces overlap between the categories of generated samples and enhances the reliability of the generated data; existing methods fail to adequately consider the potentially high degree of inter-class overlap in generated data in fields such as fault diagnosis and anomaly detection. This improvement achieves two key goals: it reduces intra-class dispersion by pulling samples of the same category closer together, and it preserves inter-class variability to avoid excessive overlap (a minimal sketch of the two added loss terms follows this list).
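As a concrete illustration of the two added loss terms, the following is a minimal PyTorch sketch. It assumes a discriminator that returns a critic score, class logits, and a feature vector; the tensor shapes and variable names are illustrative assumptions, not the paper’s exact implementation.

```python
import torch

def gradient_penalty(discriminator, real, fake):
    """WGAN-GP term: drive the critic's gradient norm toward 1 on random
    interpolations between real and generated samples (K-Lipschitz constraint)."""
    alpha = torch.rand(real.size(0), 1, 1, device=real.device)  # one weight per sample
    interp = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    critic_score = discriminator(interp)[0]  # assumed outputs: (score, logits, features)
    grads = torch.autograd.grad(critic_score, interp,
                                grad_outputs=torch.ones_like(critic_score),
                                create_graph=True)[0]
    return ((grads.flatten(1).norm(2, dim=1) - 1) ** 2).mean()

def center_loss(features, labels, centers):
    """Center loss term: pull each sample's feature vector toward the centroid
    of its own class, reducing intra-class dispersion without collapsing
    different classes onto each other."""
    return ((features - centers[labels]) ** 2).sum(dim=1).mean()
```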
2. The System Model
3. Introduction to Algorithms
3.1. Discriminator Model
3.2. Generator Model
4. The Simulation Analysis
4.1. Data Preprocessing
4.1.1. Basic Dataset
4.1.2. Extended Dataset
4.2. Parameter Settings and Training Process
- Generate data. A batch of generated data is obtained by mixing random noise with class labels and feeding them into the generator; this generated batch is then mixed with real data and input to the discriminator;
- Train the discriminator. The discriminator parameters are updated according to the discriminator’s loss function using the RMSprop algorithm; training the discriminator first accelerates the training process;
- Train the generator. With the discriminator’s parameters held fixed, update the generator’s parameters according to the generator’s loss function using the RMSprop algorithm;
- Train alternately. Repeating the three steps above constitutes one epoch of training; after one epoch ends, a new epoch begins. Training stops when a Nash equilibrium or the set number of epochs is reached (a sketch of this alternating loop follows the list).
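Put together, the alternating procedure might look like the following sketch, a minimal PyTorch loop assuming the generator `G`, discriminator `D`, and the loss helpers sketched earlier; the learning rate and the penalty weights `lambda_gp` and `lambda_c` are illustrative assumptions.

```python
import torch

opt_D = torch.optim.RMSprop(D.parameters(), lr=5e-5)  # RMSprop, as described above
opt_G = torch.optim.RMSprop(G.parameters(), lr=5e-5)
aux_ce = torch.nn.CrossEntropyLoss()                  # auxiliary-classifier loss

for epoch in range(num_epochs):
    for real, labels in loader:
        # (1) Generate a batch from random noise mixed with class labels.
        z = torch.randn(real.size(0), latent_dim)
        fake = G(z, labels)

        # (2) Discriminator step: Wasserstein critic loss + auxiliary
        #     classification loss + gradient penalty.
        score_r, logits_r, _ = D(real)
        score_f, _, _ = D(fake.detach())
        loss_D = (score_f.mean() - score_r.mean()
                  + lambda_gp * gradient_penalty(D, real, fake.detach())
                  + aux_ce(logits_r, labels))
        opt_D.zero_grad(); loss_D.backward(); opt_D.step()

        # (3) Generator step with D's parameters held fixed: adversarial +
        #     auxiliary classification + center loss on D's features.
        score_f, logits_f, feats = D(fake)
        loss_G = (-score_f.mean() + aux_ce(logits_f, labels)
                  + lambda_c * center_loss(feats, labels, centers))
        opt_G.zero_grad(); loss_G.backward(); opt_G.step()
```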
4.3. Ablation Experiment
- HR-ACGAN-woGP model: This model is obtained by removing the gradient penalty term from HR-ACGAN. Without it, the network no longer imposes an additional constraint when optimizing the distribution difference of the generated data.
- HR-ACGAN-woLSTM model: This model is obtained by removing the Bi-LSTM network from HR-ACGAN, reverting to the ordinary ACGAN discriminator structure.
- HR-ACGAN-woCL model: This model is derived from HR-ACGAN by removing the center loss term. Without it, the network no longer optimizes the distinguishability and consistency of the generated data by constraining class centroids.
- HR-ACGAN model: The final model proposed in this paper.
4.3.1. Similarity Analysis
4.3.2. Classification Accuracy Analysis
4.4. Comparative Experiment
4.4.1. Similarity Analysis
4.4.2. Analysis of Classification Accuracy and Scalability
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Data Availability Statement
Conflicts of Interest
Appendix A
Label | Fault Type | Fault Diameter/inch | Load/hp | Number of Samples |
---|---|---|---|---|
0 | Normal | 0 | 0∼3 | 1600 |
1 | Ball | 0.007 | 0∼3 | 400 |
2 | Inner Race | 0.007 | 0∼3 | 400 |
3 | Outer Race | 0.007 | 0∼3 | 400 |
4 | Ball | 0.014 | 0∼3 | 400 |
5 | Inner Race | 0.014 | 0∼3 | 400 |
6 | Outer Race | 0.014 | 0∼3 | 400 |
7 | Ball | 0.021 | 0∼3 | 400 |
8 | Inner Race | 0.021 | 0∼3 | 400 |
9 | Outer Race | 0.021 | 0∼3 | 400 |
Label | Imbalanced Set | Training Set | Testing Set |
---|---|---|---|
0 | 1500 | 150 | 100 |
1 | 249 | 150 | 100 |
2 | 300 | 150 | 100 |
3 | 292 | 150 | 100 |
4 | 195 | 150 | 100 |
5 | 212 | 150 | 100 |
6 | 150 | 150 | 100 |
7 | 170 | 150 | 100 |
8 | 271 | 150 | 100 |
9 | 232 | 150 | 100 |
Network Layer | Kernel Size | Channels | Stride | Activation Function |
---|---|---|---|---|
Input ||||
Upsampling (scale_factor = 2) ||||
Conv1d | 5 | 128 | 1 | ReLU |
Conv1d | 5 | 128 | 1 | ReLU |
BatchNorm1d ||||
Conv1d | 5 | 128 | 1 | ReLU |
Conv1d | 5 | 128 | 1 | ReLU |
BatchNorm1d ||||
Upsampling (scale_factor = 2) ||||
Conv1d | 5 | 64 | 1 | ReLU |
Conv1d | 5 | 64 | 1 | ReLU |
BatchNorm1d ||||
Conv1d | 5 | 64 | 1 | ReLU |
Conv1d | 5 | 64 | 1 | ReLU |
BatchNorm1d ||||
Upsampling (scale_factor = 2) ||||
Conv1d | 5 | 16 | 1 | ReLU |
Conv1d | 5 | 16 | 1 | ReLU |
BatchNorm1d ||||
Conv1d | 5 | 16 | 1 | ReLU |
Conv1d | 5 | 16 | 1 | ReLU |
BatchNorm1d ||||
Conv1d | 5 | 1 | 1 | Tanh |
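Interpreted as PyTorch layers, the generator table corresponds roughly to the sketch below; the initial channel count of the projected noise input and the padding of 2 (needed to keep the sequence length fixed under a kernel of 5) are assumptions the table does not specify.

```python
import torch.nn as nn

def conv_bn(ch_in, ch_out):
    # One repeated unit of the table: two Conv1d(kernel=5, stride=1) + ReLU
    # layers followed by BatchNorm1d.
    return [nn.Conv1d(ch_in, ch_out, kernel_size=5, stride=1, padding=2), nn.ReLU(),
            nn.Conv1d(ch_out, ch_out, kernel_size=5, stride=1, padding=2), nn.ReLU(),
            nn.BatchNorm1d(ch_out)]

generator = nn.Sequential(
    nn.Upsample(scale_factor=2),
    *conv_bn(128, 128), *conv_bn(128, 128),
    nn.Upsample(scale_factor=2),
    *conv_bn(128, 64), *conv_bn(64, 64),
    nn.Upsample(scale_factor=2),
    *conv_bn(64, 16), *conv_bn(16, 16),
    nn.Conv1d(16, 1, kernel_size=5, stride=1, padding=2),  # final layer
    nn.Tanh(),
)
```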
Network Layer | Kernel Size | Channels | Stride | Activation Function |
---|---|---|---|---|
Input ||||
Conv1d | 5 | 64 | 2 | LeakyReLU (0.2) |
Conv1d | 5 | 64 | 1 | LeakyReLU (0.2) |
Dropout (0.5) ||||
Conv1d | 5 | 64 | 2 | LeakyReLU (0.2) |
Conv1d | 5 | 64 | 1 | LeakyReLU (0.2) |
Dropout (0.5) ||||
MaxPool1d | 5 | | 2 | |
Conv1d | 5 | 128 | 2 | LeakyReLU (0.2) |
Conv1d | 5 | 128 | 1 | LeakyReLU (0.2) |
Dropout (0.5) ||||
Conv1d | 5 | 128 | 2 | LeakyReLU (0.2) |
Conv1d | 5 | 128 | 1 | LeakyReLU (0.2) |
Dropout (0.5) ||||
MaxPool1d | 5 | | 2 | |
Conv1d | 5 | 256 | 2 | LeakyReLU (0.2) |
Conv1d | 5 | 256 | 1 | LeakyReLU (0.2) |
Dropout (0.5) ||||
Conv1d | 5 | 256 | 2 | LeakyReLU (0.2) |
Conv1d | 5 | 256 | 1 | LeakyReLU (0.2) |
Dropout (0.5) ||||
AvgPool1d | 5 | | 2 | |
BiLSTM (4) ||||
Linear1 ||||
Linear2 |
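A sketch of this table read as a PyTorch module follows. The two linear heads (critic score and class logits), the returned feature vector used by the center loss, and the padding values are assumptions consistent with the method description, not the authors’ published code.

```python
import torch.nn as nn

class Discriminator(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        def stage(ch_in, ch_out):
            # Conv1d(k=5, stride 2) then Conv1d(k=5, stride 1),
            # each with LeakyReLU(0.2), followed by Dropout(0.5).
            return [nn.Conv1d(ch_in, ch_out, 5, stride=2, padding=2), nn.LeakyReLU(0.2),
                    nn.Conv1d(ch_out, ch_out, 5, stride=1, padding=2), nn.LeakyReLU(0.2),
                    nn.Dropout(0.5)]
        self.conv = nn.Sequential(
            *stage(1, 64), *stage(64, 64), nn.MaxPool1d(5, stride=2),
            *stage(64, 128), *stage(128, 128), nn.MaxPool1d(5, stride=2),
            *stage(128, 256), *stage(256, 256), nn.AvgPool1d(5, stride=2),
        )
        self.bilstm = nn.LSTM(256, 4, batch_first=True, bidirectional=True)
        self.critic = nn.Linear(8, 1)               # Linear1: Wasserstein critic score
        self.classifier = nn.Linear(8, n_classes)   # Linear2: auxiliary class logits

    def forward(self, x):
        h = self.conv(x).permute(0, 2, 1)  # (batch, seq_len, 256) for the Bi-LSTM
        h, _ = self.bilstm(h)              # forward + backward states, width 2 * 4 = 8
        feat = h[:, -1, :]                 # last time step as the feature vector
        return self.critic(feat), self.classifier(feat), feat
```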
(In the balanced sets below, values in parentheses are the numbers of generated samples added to restore each class to 1500.)
Label | Testing Set | Imbalanced Set | ACGAN | WACGAN-GP | LSTM-ACGAN | HR-ACGAN |
---|---|---|---|---|---|---|
0 | 100 | 1500 | 1500 | 1500 | 1500 | 1500 |
1 | 100 | 249 | 249 (1251) | 249 (1251) | 249 (1251) | 249 (1251) |
2 | 100 | 300 | 300 (1200) | 300 (1200) | 300 (1200) | 300 (1200) |
3 | 100 | 292 | 292 (1208) | 292 (1208) | 292 (1208) | 292 (1208) |
4 | 100 | 195 | 195 (1305) | 195 (1305) | 195 (1305) | 195 (1305) |
5 | 100 | 212 | 212 (1288) | 212 (1288) | 212 (1288) | 212 (1288) |
6 | 100 | 150 | 150 (1350) | 150 (1350) | 150 (1350) | 150 (1350) |
7 | 100 | 170 | 170 (1330) | 170 (1330) | 170 (1330) | 170 (1330) |
8 | 100 | 271 | 271 (1229) | 271 (1229) | 271 (1229) | 271 (1229) |
9 | 100 | 232 | 232 (1268) | 232 (1268) | 232 (1268) | 232 (1268) |
Network Layer | Kernel Size | Channels | Stride | Activation Function | Output |
---|---|---|---|---|---|
Input | | | | | 1 × 1024 |
Conv1d | 3 | 16 | 2 | ReLU | 16 × 512 |
MaxPool1d | 2 | | | | 16 × 256 |
Conv1d | 3 | 64 | 2 | ReLU | 64 × 128 |
MaxPool1d | 2 | | | | 64 × 64 |
Conv1d | 3 | 256 | 2 | ReLU | 256 × 32 |
MaxPool1d | 2 | | | | 256 × 16 |
Conv1d | 3 | 512 | 2 | ReLU | 512 × 8 |
AvgPool1d (8) | | | | | 512 × 1 |
Flatten | | | | | 512 × 512 |
Linear | | | | | 512 × 10 |
SoftMax | | | | | 512 × 10 |
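The listed output shapes of this evaluation classifier can be reproduced with the sketch below; the padding of 1 on each convolution is an assumption needed to match the table’s lengths.

```python
import torch.nn as nn

classifier = nn.Sequential(
    nn.Conv1d(1, 16, 3, stride=2, padding=1), nn.ReLU(),    # 16 x 512
    nn.MaxPool1d(2),                                        # 16 x 256
    nn.Conv1d(16, 64, 3, stride=2, padding=1), nn.ReLU(),   # 64 x 128
    nn.MaxPool1d(2),                                        # 64 x 64
    nn.Conv1d(64, 256, 3, stride=2, padding=1), nn.ReLU(),  # 256 x 32
    nn.MaxPool1d(2),                                        # 256 x 16
    nn.Conv1d(256, 512, 3, stride=2, padding=1), nn.ReLU(), # 512 x 8
    nn.AvgPool1d(8),                                        # 512 x 1
    nn.Flatten(),                                           # 512 per sample
    nn.Linear(512, 10),
    nn.Softmax(dim=1),                                      # class probabilities
)
```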
Metrics | Methods | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | Avg. |
---|---|---|---|---|---|---|---|---|---|---|---|---|
Euclidean Distance | ACGAN | 8.16 | 6.84 | 7.93 | 7.72 | 7.85 | 7.81 | 7.43 | 7.74 | 8.19 | 7.54 | 7.72 |
 | WACGAN-GP | 7.12 | 6.72 | 5.74 | 5.00 | 5.92 | 5.18 | 6.22 | 6.10 | 4.90 | 4.73 | 5.76 |
 | LSTM-ACGAN | 7.03 | 6.69 | 5.61 | 5.01 | 5.85 | 5.09 | 6.06 | 6.09 | 4.80 | 4.72 | 5.69 |
 | HR-ACGAN | 6.96 | 6.52 | 5.55 | 4.94 | 5.17 | 4.95 | 6.13 | 6.06 | 4.73 | 4.70 | 5.57 |
Wasserstein Distance | ACGAN | 0.130 | 0.138 | 0.136 | 0.131 | 0.142 | 0.134 | 0.130 | 0.143 | 0.136 | 0.132 | 0.135 |
 | WACGAN-GP | 0.067 | 0.068 | 0.062 | 0.059 | 0.064 | 0.060 | 0.055 | 0.067 | 0.062 | 0.059 | 0.062 |
 | LSTM-ACGAN | 0.062 | 0.063 | 0.055 | 0.049 | 0.065 | 0.058 | 0.051 | 0.062 | 0.059 | 0.051 | 0.058 |
 | HR-ACGAN | 0.061 | 0.060 | 0.051 | 0.047 | 0.061 | 0.053 | 0.053 | 0.058 | 0.055 | 0.047 | 0.055 |
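The two similarity metrics in the table above can be computed along the following lines; using scipy’s 1-D `wasserstein_distance` on pooled sample values is an assumption about the evaluation protocol, not a detail taken from the paper.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def similarity_metrics(real, fake):
    """Compare real and generated samples of one class.
    real, fake: arrays of shape (n_samples, seq_len)."""
    euclid = np.mean(np.linalg.norm(real - fake, axis=1))    # mean pairwise distance
    wass = wasserstein_distance(real.ravel(), fake.ravel())  # 1-D distribution distance
    return euclid, wass
```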
Different Datasets | Metrics | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
---|---|---|---|---|---|---|---|---|---|---|---|
Imbalanced sample | Precision | 1.00 | 0.86 | 0.99 | 0.98 | 0.90 | 0.98 | 1.00 | 0.85 | 0.98 | 1.00 |
 | Recall | 1.00 | 1.00 | 1.00 | 1.00 | 0.94 | 1.00 | 0.81 | 0.77 | 1.00 | 1.00 |
 | F1-score | 1.00 | 0.92 | 0.99 | 0.99 | 0.92 | 0.99 | 0.89 | 0.81 | 0.99 | 1.00 |
 | Accuracy (overall) | 0.952 ||||||||||
ACGAN-balanced sample | Precision | 1.00 | 0.89 | 0.99 | 0.99 | 0.99 | 0.95 | 1.00 | 0.86 | 1.00 | 1.00 |
 | Recall | 1.00 | 0.98 | 1.00 | 1.00 | 0.98 | 0.99 | 0.85 | 0.86 | 1.00 | 1.00 |
 | F1-score | 1.00 | 0.93 | 0.99 | 0.99 | 0.98 | 0.97 | 0.91 | 0.85 | 1.00 | 1.00 |
 | Accuracy (overall) | 0.966 ||||||||||
WACGAN-GP-balanced sample | Precision | 1.00 | 0.96 | 0.98 | 1.00 | 0.99 | 0.99 | 1.00 | 0.92 | 1.00 | 1.00 |
 | Recall | 1.00 | 0.98 | 1.00 | 1.00 | 0.99 | 0.99 | 0.93 | 0.95 | 1.00 | 1.00 |
 | F1-score | 1.00 | 0.97 | 0.99 | 1.00 | 0.99 | 0.99 | 0.96 | 0.94 | 1.00 | 1.00 |
 | Accuracy (overall) | 0.984 ||||||||||
LSTM-ACGAN-balanced sample | Precision | 1.00 | 0.99 | 1.00 | 1.00 | 0.99 | 1.00 | 1.00 | 0.95 | 0.99 | 1.00 |
 | Recall | 1.00 | 0.96 | 1.00 | 1.00 | 1.00 | 1.00 | 0.99 | 0.97 | 0.99 | 1.00 |
 | F1-score | 1.00 | 0.97 | 1.00 | 1.00 | 0.99 | 1.00 | 0.99 | 0.96 | 0.99 | 1.00 |
 | Accuracy (overall) | 0.992 ||||||||||
HR-ACGAN-balanced sample | Precision | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 0.95 | 1.00 | 1.00 |
 | Recall | 1.00 | 0.99 | 1.00 | 1.00 | 0.99 | 1.00 | 0.97 | 1.00 | 1.00 | 1.00 |
 | F1-score | 1.00 | 0.99 | 1.00 | 1.00 | 0.99 | 1.00 | 0.98 | 0.98 | 1.00 | 1.00 |
 | Accuracy (overall) | 0.995 ||||||||||
Different Datasets | Metrics | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
---|---|---|---|---|---|---|---|---|---|---|
Imbalanced sample | Precision | 0.98 | 0.96 | 1.00 | 1.00 | 0.885 | 1.00 | 0.826 | 0.890 | 1.00 |
 | Recall | 1.00 | 0.97 | 0.94 | 1.00 | 1.00 | 0.70 | 0.95 | 0.97 | 0.96 |
 | F1-score | 0.99 | 0.96 | 0.97 | 1.00 | 0.94 | 0.82 | 0.88 | 0.93 | 0.98 |
 | Accuracy (overall) | 0.943 |||||||||
ACGAN-balanced sample | Precision | 1.00 | 0.94 | 0.952 | 1.00 | 0.99 | 0.96 | 0.95 | 0.95 | 0.99 |
 | Recall | 1.00 | 0.95 | 1.00 | 1.00 | 0.97 | 0.97 | 0.93 | 0.95 | 0.97 |
 | F1-score | 1.00 | 0.95 | 0.98 | 1.00 | 0.98 | 0.96 | 0.94 | 0.95 | 0.98 |
 | Accuracy (overall) | 0.971 |||||||||
WACGAN-GP-balanced sample | Precision | 1.00 | 1.00 | 1.00 | 1.00 | 0.97 | 0.90 | 0.98 | 0.98 | 1.00 |
 | Recall | 1.00 | 0.99 | 1.00 | 1.00 | 0.95 | 0.99 | 0.96 | 0.94 | 1.00 |
 | F1-score | 1.00 | 0.99 | 1.00 | 1.00 | 0.96 | 0.94 | 0.97 | 0.96 | 1.00 |
 | Accuracy (overall) | 0.981 |||||||||
LSTM-ACGAN-balanced sample | Precision | 1.00 | 1.00 | 0.99 | 1.00 | 1.00 | 0.95 | 0.97 | 0.99 | 1.00 |
 | Recall | 1.00 | 0.98 | 1.00 | 1.00 | 0.98 | 0.98 | 0.98 | 0.98 | 1.00 |
 | F1-score | 1.00 | 0.99 | 0.99 | 1.00 | 0.99 | 0.97 | 0.97 | 0.98 | 1.00 |
 | Accuracy (overall) | 0.988 |||||||||
HR-ACGAN-balanced sample | Precision | 1.00 | 1.00 | 0.99 | 1.00 | 0.98 | 0.97 | 1.00 | 0.99 | 1.00 |
 | Recall | 1.00 | 0.99 | 1.00 | 1.00 | 0.99 | 0.99 | 0.99 | 0.97 | 1.00 |
 | F1-score | 1.00 | 0.99 | 0.99 | 1.00 | 0.98 | 0.98 | 0.99 | 0.98 | 1.00 |
 | Accuracy (overall) | 0.992 |||||||||
Models\Datasets | Original Dataset | ACGAN | WACGAN-GP | LSTM-ACGAN | HR-ACGAN |
---|---|---|---|---|---|
1D-CNN | 95.2% | 96.6% | 98.4% | 99.2% | 99.5% |
LSTM | 94.9% | 95.8% | 98.0% | 98.8% | 99.2% |
ELM | 95.1% | 96.1% | 97.8% | 98.1% | 98.9% |
SVM | 94.6% | 95.3% | 97.5% | 97.9% | 98.5% |
Models\Datasets | Original Dataset | ACGAN | WACGAN-GP | LSTM-ACGAN | HR-ACGAN |
---|---|---|---|---|---|
1D-CNN | 94.3% | 97.1% | 98.1% | 98.9% | 99.2% |
LSTM | 93.8% | 96.1% | 97.5% | 98.4% | 98.3% |
ELM | 94.2% | 96.3% | 97.1% | 97.5% | 97.9% |
SVM | 92.4% | 93.2% | 96.7% | 97.9% | 98.1% |