Article

Design of a Chamfering Tool Diagnosis System Using Autoencoder Learning Method

1 Department of Electrical Engineering, National Yunlin University of Science and Technology, 123 University Road, Section 3, Douliou, Yunlin 64002, Taiwan
2 Renesas Electronics Taiwan Co. Ltd., Taipei City 105, Taiwan
* Author to whom correspondence should be addressed.
Energies 2019, 12(19), 3708; https://doi.org/10.3390/en12193708
Submission received: 21 August 2019 / Revised: 21 September 2019 / Accepted: 24 September 2019 / Published: 27 September 2019

Abstract:
In this paper, an autoencoder learning method is proposed for diagnosing chamfering tool equipment. The autoencoder uses an unsupervised learning architecture, and its training dataset requires only positive samples, which is well suited to industrial production lines. An abnormal tool is diagnosed by comparing the output and input of the autoencoder neural network, and an adjustable threshold effectively improves the accuracy. The method adapts well to the operating environment even when the data contain multiple signals. In the experimental setup, the main diagnostic signal is the motor current, which reflects the torque change when the tool is abnormal. A four-step conversion is developed to process the current signal: (1) current-to-voltage conversion, (2) analog-to-digital conversion, (3) downsampling, and (4) discrete Fourier transform. The best autoencoder parameters are found by grid search over the dataset. In training, the testing accuracy, true positive rate, and precision reach 87.5%, 83.33%, and 90.91%, respectively. The best autoencoder model is then evaluated online, that is, by loading the diagnosis model on the production line. The results show that the proposed method effectively detects abnormal conditions: the online accuracy, true positive rate, and precision are 75%, 90%, and 69.23% with the original threshold, and after adjusting the threshold the accuracy rises to 90%, with a true positive rate of 80% and a precision of 100%.

1. Introduction

Machine learning has matured with the advancement of technology, and many traditional industries have been transformed into intelligent factories to optimize yield and profit. Typical machining processes such as turning, milling, and planing usually produce data in the time domain, and the machining error is affected by the status of the tool.
The most common time-domain classification technique is to use algorithms with recurrent characteristics. For example, Wenpeng Yin et al. analyzed datasets of different characteristics with the recurrent neural network (RNN), long short-term memory (LSTM), and gated recurrent unit (GRU) [1]. On those datasets, covering sentiment classification, sentence content classification, part-of-speech tagging, and path selection, the algorithms above were compared with convolutional neural networks (CNN). Due to the improvement in computer performance in recent years, time series models can perform more complex operations. For example, H. Dinkel et al. combined convolutional layers with other components, such as LSTM and fully connected deep neural network layers, to diagnose voice spoofing from the raw time-domain waveform [2], and showed that this approach outperforms the compared algorithms.
In fact, many features are hidden in the frequency domain, and data can be quickly converted from the time domain to the frequency domain by the discrete Fourier transform (DFT) [3]. In reference [4], data from a gas sensor were converted to frequency samples, and these samples were fed to an ANN to classify different gases, such as C2H2 and CO2. In addition, frequency-spectrum audio data have been classified into several machine states by the k-nearest neighbor (KNN), support vector machine (SVM), and random forest (RF) algorithms [5]. T. N. Sainath et al. extracted speech features with mel-frequency cepstral coefficients (MFCC), which model the frequency sensitivity of the human ear, and then performed classification with CNN, deep neural networks (DNN), and Gaussian mixture models (GMM), comparing these algorithms [6]. However, when data are converted to the frequency domain, the time-domain features are lost. In speech keyword classification, the short-time Fourier transform (STFT) is used to convert fixed time ranges into spectra, and the two-dimensional data are fed to different CNNs [7]. When a fixed time range is converted to frequency-domain data, the time resolution of the high-frequency signal is low. The discrete wavelet transform (DWT) has been used to analyze the vibration of fixed-speed machines, with fault detection performed by fuzzy neural networks (FNN); the fault is classified as unbalanced rotation or insufficient lubrication [8]. In addition, many other algorithms perform well in diagnostic tasks.
Among unsupervised learning algorithms, the autoencoder (AE) and its convolutional extension (CAE) have been used to diagnose network states [9]. AE and CAE require only positive samples to train the model; they detect the error between the output and the input and then classify it against a judgment threshold. This approach is called reconstruction error. Moreover, the CAE can share weights between compression and decompression, making the model simpler. Reconstruction error is also applied to bearing and gear detection [10]. Because the AE needs only positive samples for training, it is well suited to imbalanced datasets such as those from production lines. The compression characteristics of the AE also perform well in feature extraction. AE feature extraction has been applied to the position information of a permanent magnet synchronous motor, with an ANN and SVM then diagnosing whether the motor is overloaded or lacks lubrication [11], and to diagnosing the system state of a polisher and classifying the cause of the error [12]. The restricted Boltzmann machine (RBM) architecture and the stacked autoencoder (SAE) extract features for handwriting recognition, which are then classified by SVM [13]. Time-domain data can also be compressed via an AE; such an AE has been used to extract features of limb motion, which are then classified by a pattern recognition neural network (PRNN) [14]. Although the AE performs slightly worse than the original CNN, it greatly reduces computing cost. The AE is often compared with principal component analysis (PCA) in feature extraction tasks [15,16]; the nonlinear reconstruction of the AE makes its feature extraction significantly better than PCA, although PCA can still effectively reduce the feature dimension in applications. C. Vununu et al. diagnosed whether a tool is abnormal through the sound of the machine [17].
First, the sound is transformed from the time domain to the frequency domain by DFT. Second, the input is reduced from 250 to 50 dimensions via PCA. Finally, these features are classified by an ANN. This method showed good results, and [17] confirmed that the time domain is less effective. Drilling diagnosis is not limited to those cases: reference [18] also diagnoses whether a tool has failed, and in [19] an expert system is replaced by machine learning for diagnosing the drilling state.
Comparing multiple models is usually necessary to remove the influence of the dataset. For example, D. R. S. Caon et al. used K-fold cross-validation (CV) to compare different models [20]. This method splits the dataset into K groups, takes each group in turn as the test set, and finally evaluates the average of the training scores. In addition to K-fold, grid search is used to traverse and train the parameter combinations of each model [21]; the author of [21] then selects the model with the highest accuracy for comparison with the others. The required number of epochs differs for each model in the grid search, so the early stopping method is used to save the model when the testing accuracy reaches its highest point [22,23], which avoids overfitting problems.
The goal of this paper is to improve the yield and cost of the chamfering task, so diagnosing the tool state is the most direct approach. As the literature above shows, machine learning is more accurate and faster than an expert system, so machine learning is used in this paper. In addition, the material is filtered [24,25] to unify the material size before processing. Although the application is similar to [17,18], the subtle features of the dataset and the unpredictable environmental noise make good performance difficult to achieve. Furthermore, this paper aims to use a low-cost controller for edge computing, and such a controller is less suitable for complex models. The motor current is used for diagnosis in this paper.
The remainder of this article is organized as follows: Section 2 describes the working environment on the actual production line, the design of the dataset, and the algorithm used. Section 3 presents the experimental results, evaluated with data from the actual production line. The last section discusses the results and outlines future work.

2. Materials and Methods

2.1. System Structure

The architecture diagram is shown in Figure 1. The master controller, a personal computer (PC), was used to control the 3-dimensional servo system; it sent control commands to the devices through the controller area network (CAN) bus via a PC-CAN interface. The chamfering process is described in Figure 2. Due to the height differences of the chopstick tubes, the Z-axis stroke of the 3D servo table needed to be adjusted. The phase current of the chamfering motor changes when material is scraped, so a microcontroller unit (MCU), the Renesas RX231, was used to sample the current. When the motor current varied, the MCU notified the master to control the drill stroke and triggered the AI unit to sample the motor current.
The chamfering tool was diagnosed by the AI model based on the current samples during the drilling stroke. The AI unit is an arithmetic unit developed by Renesas, built around a higher-performance MCU. Both the AI unit and the MCU sit behind the processing platform, shown in Figure 3, which is one of the stations in the overall product processing system. The unit can load models trained with TensorFlow. This approach is relatively low in cost and performs edge computing to improve overall system performance.

2.2. Dataset Collection

Data collection is always an important part of machine learning. In this paper, the MCU detected the U-phase current of the chamfering motor through a current sensor. Only one phase of the three-phase motor was sampled, because the three phase currents carry equivalent information. The sensor translates current to voltage as

$$V_{IOUT} = 0.151\, I_p + 0.5\, V_{CC}$$

where $V_{IOUT}$, $V_{CC}$, and $I_p$ are the output voltage, the logic supply voltage, and the sensed current, respectively; $V_{CC}$ is 5 V in this paper [26]. To capture the complete chamfering process, the diagnostic task started when the current rose slightly. The sample labels were defined rigorously at the training stage: Figure 4a shows a normal chamfering tool, and Figure 4b an abnormal one. Although both can complete the processing, the training phase needed a threshold, so the definition was kept strict; the threshold was later adjusted to suit the application during machine evaluation.
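As a concrete illustration, the sensor transfer function above and its inverse can be sketched as follows. The function names are ours; the constants come directly from the equation, with $V_{CC}$ = 5 V.

```python
VCC = 5.0  # logic supply voltage in volts (5 V in this paper)

def current_to_voltage(i_p):
    """Map the sensed primary current (A) to the sensor output voltage (V)."""
    return 0.151 * i_p + 0.5 * VCC

def voltage_to_current(v_iout):
    """Invert the transfer function to recover the current from the voltage."""
    return (v_iout - 0.5 * VCC) / 0.151
```

At zero current the sensor sits at the mid-rail 2.5 V, so positive and negative half-cycles of the phase current remain measurable.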
The data were originally collected at a sampling rate of 20 kHz by the analog-to-digital converter (ADC) of the MCU. The conversion result and the recovered voltage value are

$$D_{output}[n] = \max\left\{ m \in \mathbb{Z} \;\middle|\; m \le \frac{V_{IOUT}[n]}{V_{ref}} \times 2^N \right\}$$

$$D_{output}[n] = \max\left\{ m \in \mathbb{Z} \;\middle|\; m \le 409.6\, V_{IOUT}[n] \right\}$$

$$V_{digital}[n] = \frac{D_{output}[n]}{2^N} \times V_{ref}$$

$$V_{digital}[n] = 1.220703 \times 10^{-3} \times D_{output}[n]$$

where $D_{output}$ is the digital output after the ADC converts $V_{IOUT}$, $V_{ref}$ is the reference voltage, and $N$ is the number of ADC bits. $V_{ref}$ and $N$ were set to 5 V and 12 bits [27], which yields the numeric forms above. To reduce the complexity of the model, the dataset was downsampled before training. When the AI unit was triggered, it sampled at a rate of 2 kHz for 768 ms, giving 1536 sampling points, which were padded to 2048 points with the average value. Finally, 1024 points remained after the DFT and were saved for training and testing. The DFT is defined as
$$X_{fd}[k] = \sum_{n=0}^{N-1} V_{digital}[n]\, e^{-i 2\pi nk/N}, \quad k = 0, 1, 2, \dots, N-1$$
where $X_{fd}$ is the frequency-domain data and $N$ is the data length. In fact, the normal and abnormal signal characteristics look very similar. The time-domain raw data are shown in Figure 5: Figure 5a is the motor current signal for a normal tool, and Figure 5b for an abnormal one. The spectra are shown in Figure 6: Figure 6a for the normal chamfering tool, and Figure 6b for the abnormal one. Finding the key features manually is time-consuming, so machine learning was used to solve the problem in this paper.
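The signal-processing chain described above (downsample, pad with the mean, DFT, keep the magnitudes) can be sketched roughly as follows. The function name and the decimation-by-10 detail (20 kHz to 2 kHz) are our assumptions based on the stated rates; this is an illustration, not the authors' firmware.

```python
import numpy as np

def preprocess(voltage_20khz):
    """Sketch of the four-step conversion: assumes a 20 kHz voltage trace
    whose decimated length is at most 2048 samples (1536 in the paper)."""
    # Downsample 20 kHz -> 2 kHz by keeping every 10th sample.
    v = np.asarray(voltage_20khz, dtype=float)[::10]
    # Pad to 2048 points with the average value, as described in the text.
    padded = np.concatenate([v, np.full(2048 - v.size, v.mean())])
    # DFT of a real signal is Hermitian-symmetric, so keep the
    # first N/2 = 1024 magnitudes as the model input.
    return np.abs(np.fft.rfft(padded))[:1024]
```

A 768 ms window at 20 kHz (15,360 raw samples) decimates to the 1536 points stated in the text and yields a 1024-point spectrum.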

2.3. Methodology

Diagnostic methods fall into three types: (1) statistical methods, (2) neighbor-based methods, and (3) dimensionality-reduction-based methods. The autoencoder belongs to the last type and is implemented here for reconstruction error detection.
An autoencoder is a kind of artificial neural network, as shown in Figure 7. Each neuron computes

$$x_{out} = \sum_{k=1}^{K} x_{pre}[k]\, w[k] + bias$$

where $x_{pre}$ contains the outputs of the previous layer, $K$ is the number of those neurons, $w$ holds the weights, $bias$ is an offset, and $x_{out}$ is the neuron output.
The AE includes an input layer, hidden layers, and an output layer, as in Figure 8, and can be divided into two parts, an encoder and a decoder. In the encoder block, the number of neurons decreases layer by layer; feature extraction is performed on the input data, and the most important information is kept as the hidden layers shrink. The decoder then reconstructs the output layer, with the number of neurons increasing layer by layer until the information is restored from the hidden representation. During training, the objective function is

$$\min \sum_{k=1}^{K} \left( x_{out}[k] - x_{in}[k] \right)^2$$

where $x_{out}$, $x_{in}$, and $K$ are the output of the AE, the input of the AE, and the number of inputs, respectively. The AE offers both data compression and reconstruction. In more complex applications, the AE's compression extracts the important parts of a large input, and the extracted data are sent to another classifier, which effectively reduces the classification complexity. The reconstruction error is calculated from the difference between the original input signal and the decompressed output signal, and this error can then be used to detect defects. Because the model can be trained with positive samples only, this method is well suited to abnormality diagnosis when abnormal data are lacking; its detection performance is good and its model complexity is low, so it is used in many yield detection and device detection applications.
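A minimal NumPy sketch of an AE of this shape follows. It assumes one plausible interpretation of the zoom ratio discussed later (each encoder layer shrinks the neuron count by the ratio, and the decoder mirrors the encoder); it is illustrative only, not the authors' TensorFlow implementation, and the weights here are random rather than trained.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_autoencoder(n_in=1024, zoom=11.5, hidden=3):
    """Build mirrored encoder/decoder layer sizes, e.g. 1024 -> 89 -> 7 -> 89 -> 1024
    for zoom=11.5 and three hidden layers (assumed reading of the zoom ratio)."""
    sizes = [n_in]
    for _ in range(hidden // 2 + 1):
        sizes.append(max(1, int(sizes[-1] / zoom)))
    sizes += sizes[-2::-1]  # mirror back out so the output matches the input size
    return [(rng.standard_normal((a, b)) * 0.01, np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def forward(layers, x):
    """Per-neuron rule x_out = sum_k x_pre[k] w[k] + bias, tanh on hidden layers."""
    for i, (w, b) in enumerate(layers):
        x = x @ w + b
        if i < len(layers) - 1:
            x = np.tanh(x)
    return x
```

The reconstruction error of a batch is then just a comparison of `forward(layers, x)` with `x`, using one of the error measures defined in Section 3.1.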

2.4. Cross-Validation

To develop and evaluate a prediction model, three CV techniques, K-fold CV, leave-one-out CV, and an independent dataset, are frequently used [28,29,30,31]. To reduce the influence of dataset noise, we considered K-fold CV with K = 4, which prevents the grid search from tuning its hyperparameters to any particular split. However, the grid search contained many parameter combinations and the hardware performance was limited, so full cross-validation would have taken too long to compute. We therefore chose to skip the CV techniques in the experiments.
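For reference, the 4-fold split considered here takes only a few lines; this helper is illustrative and was not part of the paper's final pipeline, since CV was skipped.

```python
import numpy as np

def kfold_indices(n_samples, k=4, seed=0):
    """Yield (train, test) index arrays for K-fold CV after a seeded shuffle."""
    idx = np.random.default_rng(seed).permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test
```

Each of the K groups serves as the test set exactly once, and the scores are averaged across folds.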

2.5. Evaluation Metrics

If noise is present in the training and validation sets, the learned threshold can become too large, and an abnormal tool may then go undetected during diagnosis. To find the best threshold during training, this paper traverses candidate values over the training and validation sets; this sacrifices a little precision (PRE) but improves the detection of abnormal tools. PRE is defined as

$$\mathrm{PRE} = \frac{TP}{TP + FP}$$
where true positive (TP) means an abnormal tool correctly identified as abnormal, and false positive (FP) means a normal tool incorrectly identified as abnormal. In addition, the accuracy (ACC) and true positive rate (TPR) are

$$\mathrm{ACC} = \frac{TP + TN}{TP + TN + FP + FN}$$

$$\mathrm{TPR} = \frac{TP}{TP + FN}$$
where true negative (TN) means a normal tool correctly identified as normal, and false negative (FN) means an abnormal tool incorrectly identified as normal.
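The three metrics follow directly from the confusion counts. As a worked example, the counts below (TP = 9, TN = 6, FP = 4, FN = 1) are consistent with the online evaluation reported later (20 samples; ACC 75%, TPR 90%, PRE 69.23%).

```python
def metrics(tp, tn, fp, fn):
    """Accuracy, true positive rate, and precision from confusion counts;
    'abnormal' is the positive class throughout this paper."""
    acc = (tp + tn) / (tp + tn + fp + fn)
    tpr = tp / (tp + fn)
    pre = tp / (tp + fp)
    return acc, tpr, pre

# 9 abnormal tools caught, 1 missed, 4 normal tools falsely flagged.
acc, tpr, pre = metrics(tp=9, tn=6, fp=4, fn=1)  # -> 0.75, 0.90, 0.6923...
```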

3. Results

3.1. Parameter Optimization

The parameter settings vary for different applications, and manual adjustment is quite impractical, so this paper uses a grid search to test the model over different parameter combinations. The traversed AE parameters are as follows: the zoom-in and zoom-out sizes increase from 1.25 to 15 in intervals of 0.25, where both zoom values denote the neuron-count scaling ratio between the previous layer and the current layer; the number of hidden layers is one or three; and the reconstruction error type is the maximum of absolute differences (MAD), the sum of absolute differences (SAD), or the sum of squared differences (SSD), defined as follows:
$$\mathrm{MAD} = \max_{1 \le k \le K} \left| x_{out}[k] - x_{in}[k] \right|$$

$$\mathrm{SAD} = \sum_{k=1}^{K} \left| x_{out}[k] - x_{in}[k] \right|$$

$$\mathrm{SSD} = \sum_{k=1}^{K} \left( x_{out}[k] - x_{in}[k] \right)^2$$
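The three reconstruction error types translate directly into NumPy one-liners:

```python
import numpy as np

def mad(x_out, x_in):
    """Maximum of absolute differences between reconstruction and input."""
    return np.max(np.abs(x_out - x_in))

def sad(x_out, x_in):
    """Sum of absolute differences."""
    return np.sum(np.abs(x_out - x_in))

def ssd(x_out, x_in):
    """Sum of squared differences."""
    return np.sum((x_out - x_in) ** 2)
```

Note that SAD and SSD accumulate over all 1024 spectrum points, so their thresholds are naturally on a much larger scale than MAD's.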
For training, the learning rate is set to 0.001 and the number of epochs to 3000, giving a total of 342 combinations. The input size is 1024:

$$x_{in}[k] = \left| X_{fd}[k] \right| = \left| \sum_{n=0}^{N-1} V_{digital}[n]\, e^{-i 2\pi nk/N} \right|, \quad k = 0, 1, 2, \dots, \frac{N}{2} - 1$$

where $N$ is the number of time-domain points, set to 2048 here. Because the time-domain input is real-valued, its DFT is Hermitian-symmetric, so the maximum $k$ is half of $N$.
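The search space can be enumerated with `itertools.product`. The exact endpoint handling of the zoom range is an assumption: as written below the range yields 336 combinations, slightly fewer than the 342 reported, so the original grid likely included one additional zoom value.

```python
from itertools import product
import numpy as np

# Grid described in the text (endpoints assumed inclusive).
zooms = np.arange(1.25, 15.0 + 1e-9, 0.25)  # neuron scaling ratio per layer
hidden_layers = [1, 3]                       # number of hidden layers
error_types = ["MAD", "SAD", "SSD"]          # reconstruction error measure

grid = list(product(zooms, hidden_layers, error_types))
```

Each tuple in `grid` defines one AE configuration to train for 3000 epochs at a learning rate of 0.001, after which the best models are ranked by testing accuracy.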
After the grid search, the better model parameters are listed in Table 1 and Table 2; each combination is evaluated and the top three models are selected. In Table 1, the maximum error value over the training and validation sets is used as the threshold. In Table 2, an appropriate value found by traversing the samples is used as the threshold. Clearly, the latter method effectively improves the accuracy (ACC) and true positive rate (TPR) while sacrificing only a little PRE.

3.2. Model Evaluation

Figure 9 shows the training curve of the best AE model. The blue line is the testing accuracy, the green line the testing TPR, the red line the testing PRE, and the orange line the MAD convergence process. The model achieves its best accuracy at epoch 1726, where the TPR and PRE are also at their best values; in this paper, the model with the highest accuracy is saved before overfitting begins. In Figure 10, the MAD over the whole dataset is calculated with the best model. As Figure 10 shows, the MAD increases significantly in the abnormal-sample region, while several samples in the normal-sample region still show a large MAD, possibly caused by power or mechanical errors on the production line.

3.3. Online Analysis

In this paper, the best model is loaded into the AI unit and evaluated with 10 normal samples and 10 abnormal samples. Figure 11 shows the MAD values calculated by the AE. The accuracy of the online evaluation is only 75%, the TPR is 90%, and the PRE is 69.23%. The figure shows a significant difference between the normal and abnormal chamfering tools. Because the label definition was strict during training, a chamfering tool is judged abnormal even when it is only slightly worn; the remaining errors may be caused by system noise.
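Since the online decision rule is just a comparison of the reconstruction MAD against the threshold, the adjustment discussed next amounts to one line of code. The MAD values below are invented for illustration and are not the paper's measurements.

```python
import numpy as np

def diagnose(mad_values, threshold):
    """Flag a chamfering pass as abnormal when its reconstruction MAD
    exceeds the judgment threshold."""
    return np.asarray(mad_values, dtype=float) > threshold

# Illustrative values only: raising the threshold stops borderline samples
# from being flagged, trading TPR (recall) for PRE (precision).
mads = [12.0, 18.0, 25.0, 41.0, 55.0]
strict = diagnose(mads, 20.0)   # flags the last three samples
relaxed = diagnose(mads, 30.0)  # flags only the last two samples
```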

4. Discussion

This paper proposes an autoencoder and signal processing scheme for a chamfering tool diagnosis system, implemented and developed on an AI unit. In the online evaluation, the error tolerance is low because training was completed under a strict label definition. In fact, this application does not require high processing precision, so the judgment threshold can be increased. For example, with the MAD threshold set to 30 as shown in Figure 12, the number of misclassified samples drops to two: the accuracy increases to 90% and the PRE rises to 100%, while the TPR decreases to 80%. The adjustable detection makes the application flexible and suitable for our requirements, and it illustrates that the AE can run on a low-cost controller. In the future, we can correct the positioning of the machine and the tubular material to reduce noise, or use multi-sensing technology to add input features and improve diagnostic accuracy. Beyond optimizing the approach, the method can be applied to other diagnoses, such as stamping press fixtures, rail drill tools, and lathe tools.

Author Contributions

Conceptualization, C.-W.H.; methodology, W.-T.L.; software, W.-T.L.; validation, W.-T.L.; investigation, W.-T.L.; resources, C.-W.H.; data curation, C.-W.H.; writing-original draft preparation, W.-L.M.; writing-review & editing, W.-L.M.; visualization, W.-L.M.; funding acquisition, P.-C.L.; project administration, P.-C.L.

Acknowledgments

This research is supported by the Renesas Electronics, and also by Ministry of Science and Technology, Taiwan, under contract MOST 108-2221-E-224-045- and 107-2218-E-150-001.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Yin, W.; Kann, K.; Yu, M.; Schütze, H. Comparative Study of CNN and RNN for Natural Language Processing. arXiv 2017, arXiv:1702.01923. [Google Scholar]
  2. Dinkel, H.; Chen, N.; Qian, Y.; Yu, K. End-to-end spoofing detection with raw waveform CLDNNS. In Proceedings of the 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), New Orleans, LA, USA, 5–9 March 2017. [Google Scholar]
  3. Brigham, E.O.; Morrow, R.E. The fast Fourier transform. IEEE Spectr. 1967, 4, 63–70. [Google Scholar] [CrossRef]
  4. Birlasekaran, S.; Ledwich, G. Use of FFT and ANN techniques in monitoring of transformer fault gases. In Proceedings of the 1998 International Symposium on Electrical Insulating Materials, Toyohashi, Japan, 30 September 1998; pp. 75–78. [Google Scholar]
  5. Liang, J.; Wang, K. Vibration Feature Extraction Using Audio Spectrum Analyzer Based Machine Learning. In Proceedings of the 2017 International Conference on Information, Communication, and Engineering (ICICE), Xiamen, China, 17–20 November 2017. [Google Scholar]
  6. Sainath, T.N.; Mohamed, A.; Kingsbury, B.; Ramabhadran, B. Deep convolutional neural networks for LVCSR. In Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada, 26–31 May 2013. [Google Scholar]
  7. Hershey, S.; Chaudhuri, S.; Ellis, D.P.; Gemmeke, J.F.; Jansen, A.; Moore, R.C.; Plakal, M.; Platt, D.; Saurous, R.A.; Seybold, B.; et al. CNN architectures for large-scale audio classification. In Proceedings of the 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), New Orleans, LA, USA, 5–9 March 2017. [Google Scholar]
  8. Wang, C.; Lee, C.; Ouyang, C. A machine-learning-based fault diagnosis approach for intelligent condition monitoring. In Proceedings of the 2010 International Conference on Machine Learning and Cybernetics, Qingdao, China, 11–14 July 2010. [Google Scholar]
  9. Chen, Z.; Yeo, C.K.; Lee, B.S.; Lau, C.T. Autoencoder-based network anomaly detection. In Proceedings of the 2018 Wireless Telecommunications Symposium (WTS), Phoenix, AZ, USA, 17–20 April 2018. [Google Scholar]
  10. Qi, Y.; Shen, C.; Wang, D.; Shi, J.; Jiang, X.; Zhu, Z. Stacked Sparse Autoencoder-Based Deep Network for Fault Diagnosis of Rotating Machinery. IEEE Access 2017, 5, 15066–15079. [Google Scholar] [CrossRef]
  11. Zhang, Z.; Cao, S.; Cao, J. Fault diagnosis of servo drive system of CNC machine based on deep learning. In Proceedings of the 2018 Chinese Automation Congress (CAC), Xi’an, China, 30 November–2 December 2018. [Google Scholar]
  12. Qu, X.Y.; Zeng, P.; Fu, D.D.; Xu, C.C. Autoencoder-based fault diagnosis for grinding system. In Proceedings of the 2017 29th Chinese Control and Decision Conference (CCDC), Chongqing, China, 28–30 May 2017. [Google Scholar]
  13. Gogoi, M.; Begum, S.A. Image Classification Using Deep Autoencoders. In Proceedings of the 2017 IEEE International Conference on Computational Intelligence and Computing Research (ICICIC), Coimbatore, India, 17–19 August 2017. [Google Scholar]
  14. Xiao, Q.; Si, Y. Human action recognition using autoencoder. In Proceedings of the 2017 3rd IEEE International Conference on Computer and Communications (ICCC), Chengdu, China, 13–16 December 2017. [Google Scholar]
  15. Siwek, K.; Osowski, S. Autoencoder versus PCA in face recognition. In Proceedings of the 2017 18th International Conference on Computational Problems of Electrical Engineering (CPEE), Kutna Hora, Czech Republic, 11–13 September 2017. [Google Scholar]
  16. Almotiri, J.; Elleithy, K.; Elleithy, A. Comparison of an autoencoder and Principal Component Analysis followed by neural network for e-learning using handwritten recognition. In Proceedings of the 2017 IEEE Long Island Systems, Applications, and Technology Conference (LISAT), Farmingdale, NY, USA, 5 May 2017. [Google Scholar]
  17. Vununu, C.; Kwon, K.; Lee, E.; Moon, K.; Lee, S. Automatic Fault Diagnosis of Drills Using Artificial Neural Networks. In Proceedings of the 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), Cancun, Mexico, 18–21 December 2017. [Google Scholar]
  18. Min, Y.; Bin, L. Drilling Tool Failure Diagnosis Based on GA-SVM. In Proceedings of the 2012 Fourth International Conference on Computational and Information Sciences, Chongqing, China, 17–19 August 2012. [Google Scholar]
  19. Liu, Y.; Zhang, W.; Liao, Z. Research on fault diagnosis of HT-60 drilling rig based on neural network expert system. In Proceedings of the 2010 International Conference on Computer Application and System Modeling (ICCASM 2010), Taiyuan, China, 22–24 October 2010. [Google Scholar]
  20. Caon, D.R.S.; Amehraye, A.; Razik, J.; Chollet, G.; Andreäo, R.V.; Mokbel, C. Experiments on acoustic model supervised adaptation and evaluation by K-Fold Cross Validation technique. In Proceedings of the 2010 5th International Symposium On I/V Communications and Mobile Network, Rabat, Morocco, 30 September–2 October 2010. [Google Scholar]
  21. Sun, Y.; Wang, Y.; Guo, L.; Ma, Z.; Jin, S. The comparison of optimizing SVM by GA and grid search. In Proceedings of the 2017 13th IEEE International Conference on Electronic Measurement & Instruments (ICEMI), Yangzhou, China, 20–22 October 2017. [Google Scholar]
  22. Wu, X.; Liu, J. A New Early Stopping Algorithm for Improving Neural Network Generalization. In Proceedings of the 2009 Second International Conference on Intelligent Computation Technology and Automation, Changsha, China, 10–11 October 2009. [Google Scholar]
  23. Shao, Y.; Taff, G.N.; Walsh, S.J. Comparison of Early Stopping Criteria for Neural-Network-Based Subpixel Classification. IEEE Geosci. Remote Sens. Lett. 2010, 8, 113–117. [Google Scholar] [CrossRef]
  24. Hung, C.W.; Jiang, J.G.; Wu, H.H.P.; Mao, W.L. An Automated Optical Inspection system for a tube inner circumference state identification. J. Robot. Netw. Artif. Life 2018, 4, 308–311. [Google Scholar] [CrossRef] [Green Version]
  25. Li, W.T.; Hung, C.W.; Chang, C.Y. Tube Inner Circumference State Classification Using Artificial Neural Networks, Random Forest and Support Vector Machines Algorithms to Optimize. In Proceedings of the International Computer Symposium ICS, Yunlin, Taiwan, 20–22 December 2018. [Google Scholar]
  26. Allegro MicroSystems. 120 kHz Bandwidth, High Voltage Isolation Current Sensor with Integrated Overcurrent Detection; Allegro MicroSystems: Marlborough, MA, USA, 2013; p. 3. [Google Scholar]
  27. Renesas Electronics. RX230 Group, RX231 Group Datasheet; Renesas Electronics: Tokyo, Japan, 2018; p. 1. [Google Scholar]
  28. Wei, L.; Su, R.; Luan, S.; Liao, Z.; Manavalan, B.; Zou, Q.; Shi, X. Iterative feature representations improve N4-methylcytosine site prediction. Bioinformatics 2019. [Google Scholar] [CrossRef]
  29. Boopathi, V.; Subramaniyam, S.; Malik, A.; Lee, G.; Manavalan, B.; Yang, D.C. A Support Vector Machine-Based Meta-Predictor for Identification of Anticancer Peptides. Int. J. Mol. Sci. 2019, 20, 1964. [Google Scholar] [CrossRef]
  30. Manavalan, B.; Basith, S.; Shin, T.H.; Wei, L.; Lee, G. A sequence-based meta-predictor for improving the prediction of anti-hypertensive peptides using effective feature representation. Bioinformatics 2019, 35, 2757–2765. [Google Scholar] [CrossRef]
  31. Basith, S.; Manavalan, B.; Shin, T.H.; Lee, G. Computational identification of growth hormone binding proteins from sequences using extremely randomised tree. Comput. Struct. Biotechnol. J. 2018, 16, 412–420. [Google Scholar] [CrossRef]
Figure 1. System architecture.
Figure 2. Chamfering process flow chart.
Figure 3. Chamfering station.
Figure 4. The examples of chamfering tool states: (a) the normal tool state, and (b) the abnormal tool state.
Figure 5. Current signal of the motor in the time-domain. (a) The motor curve of the normal tool; (b) the motor curve of the abnormal tool.
Figure 6. Motor current in the frequency domain. (a) The motor curve of the normal tool; (b) the motor curve of the abnormal tool.
Figure 7. Neuron network structure.
Figure 8. Autoencoder structure.
Figure 9. Autoencoder (AE) training curve.
Figure 10. Evaluation of AE accuracy for the overall dataset.
Figure 11. Online test result.
Figure 12. Adjustment of the online test maximum absolute difference (MAD) threshold.
Table 1. Grid search based on max threshold.

Item \ Rank      1          2            3
Testing ACC      63.89%     61.11%       55.56%
Testing TPR      45.83%     41.67%       37.50%
Testing PRE      100.0%     100.0%       90.00%
Overall ACC      83.75%     81.25%       80.00%
Overall TPR      45.83%     41.67%       37.50%
Overall PRE      100.0%     90.91%       90.00%
Zoom in/out      14         11.75        15
Hidden layers    3          3            3
Judge type       MAD        SAD          MAD
Threshold        29.9156    26036.3526   20.0945
Table 2. Grid search based on fitting threshold.

Item \ Rank      1          2          3
Testing ACC      87.50%     79.17%     75.00%
Testing TPR      83.33%     100.0%     91.67%
Testing PRE      90.91%     70.59%     68.75%
Overall ACC      88.75%     85.00%     83.75%
Overall TPR      75.00%     91.67%     83.33%
Overall PRE      85.71%     68.75%     68.97%
Zoom in/out      11.5       14.5       14.75
Hidden layers    3          3          3
Judge type       MAD        MAD        MAD
Threshold        20.077     6.9114     584.697
