Author Contributions
Conceptualization, A.B. and W.G.; methodology, A.B., R.K. and M.K.; software, R.K. and M.K.; validation, A.B., M.K. and R.K.; resources, A.B., M.K. and R.K.; data curation, M.K. and R.K.; writing—original draft preparation, A.B.; writing—review and editing, W.G. and A.B.; supervision, W.G. All authors have read and agreed to the published version of the manuscript.
Figure 1.
Samples of rotors used in fault detection experiments and overview of the propulsion test rig.
Figure 2.
Performance comparison of faulty and undamaged rotors: thrust of a single motor–rotor unit (top) and achieved power efficiency (bottom).
Figure 3.
The Falcon V5 UAV used for FDI experiments and a detailed view of the coaxial propulsion unit.
Figure 4.
The stack consisting of the Raspberry Pi 3B+ SBC and the microphone array module.
Figure 5.
Experimental setup to acquire acoustic data.
Figure 6.
Summary of the pretraining signal processing steps.
Figure 7.
Structure of the LSTM-based fault classifier.
Figure 8.
Structure of the CNN-based fault classifier.
Figure 9.
Process of gathering the acoustic dataset samples and evaluating the FDI method.
Figure 10.
Types of fault scenarios considered in experiments: (1) no faults, (2) single damaged rotor, (3) dual fault, adjacent, (4) dual fault, opposite actuators.
Figure 11.
Samples of recorded audio signals along with their PSD estimates: (a) all rotors healthy, (b) single damaged rotor, opposite microphone, (c) single damaged rotor, closest microphone.
Table 1.
Selected properties of the MSM321A3729H9BP microphone.
| Parameter | Value | Unit |
|---|---|---|
| Frequency band | 100–10 k | [Hz] |
| THD | | [%] |
| AOP | 123 | [dB SPL] |
| SNR | 65 | [dB] |
| Sensitivity | | [dB] at 1 kHz, relative to 1 V/Pa |
Table 2.
Summary of flight experiments conducted and the quantities of data for every fault class considered.
| Flight Scenario | | No. Flights | Audio Data Length [s] |
|---|---|---|---|
| Nominal (healthy) condition | | 10 | 2400 |
| Single damaged blade | | 12 | 2400 |
| Two damaged rotors | Adjacent locations | 9 | 1800 |
| Two damaged rotors | Opposite locations | 5 | 1000 |

| Fault Class | Broken Blade Location: A | B | C | D | Fault Type | No. Flights | Audio Data Length [s] |
|---|---|---|---|---|---|---|---|
| H | − | − | − | − | none | 10 | 2400 |
| AF | + | − | − | − | fractured tip | 5 | 1000 |
| BF | − | + | − | − | fractured tip | 5 | 1000 |
| CF | − | − | + | − | fractured tip | 5 | 1000 |
| DF | − | − | − | + | fractured tip | 4 | 800 |
| AE | + | − | − | − | edge distortion | 4 | 800 |
| BE | − | + | − | − | edge distortion | 5 | 1000 |
| CE | − | − | + | − | edge distortion | 3 | 600 |
| DE | − | − | − | + | edge distortion | 6 | 1200 |
Table 3.
Evaluation of the LSTM-based fault classifier.
| Id | LSTM Layer Size | Linear Layer Size | F-Score | Precision | Recall | Accuracy |
|---|---|---|---|---|---|---|
| 1 | 64 | 32 | 0.910 | 0.858 | 0.969 | 0.967 |
| 2 | 128 | 64 | 0.936 | 0.900 | 0.975 | 0.977 |
| 3 | 256 | 128 | 0.982 | 0.987 | 0.978 | 0.994 |
| 4 | 512 | 256 | 0.985 | 0.989 | 0.980 | 0.995 |
Table 4.
Parameters of the selected LSTM-based fault classifier.
| Parameter | Value |
|---|---|
| Number of input layer neurons | 512 |
| Number of hidden layer neurons | 256 |
| Number of output layer neurons | 9 |
| Batch size | 640 |
| Loss function | CrossEntropyLoss |
| Output layer activation function | Sigmoid |
| Optimizer | Adam |
| Compiler metric | accuracy |
| Checkpoint monitor | validation loss |
| Number of epochs | 866 |
| Training loss | 2.19 |
| Training accuracy | 1 |
| Validation loss | 2.36 |
| Validation accuracy | 0.998 |
| Test accuracy | 0.994 |
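Tables 3 and 4 give the layer sizes and training settings but not the input formatting, so the following is only a minimal PyTorch sketch of a classifier with this topology (an LSTM layer of 512 units, a 256-unit linear layer, and 9 output classes). The MFCC feature dimension per time step and the use of the final hidden state are assumptions for illustration; Table 4 also lists a sigmoid output activation, whereas the sketch returns raw logits because PyTorch's CrossEntropyLoss applies its own normalization.

```python
# Hypothetical sketch only: layer sizes follow Tables 3-4; the MFCC input
# dimension (n_mfcc) is an assumption, not a value reported in the paper.
import torch
import torch.nn as nn

class LSTMFaultClassifier(nn.Module):
    def __init__(self, n_mfcc=104, lstm_size=512, linear_size=256, n_classes=9):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_mfcc, hidden_size=lstm_size, batch_first=True)
        self.fc1 = nn.Linear(lstm_size, linear_size)
        self.fc2 = nn.Linear(linear_size, n_classes)

    def forward(self, x):
        # x: (batch, time_steps, n_mfcc) -- one MFCC vector per time step
        _, (h_n, _) = self.lstm(x)                       # final hidden state summarizes the frame
        return self.fc2(torch.relu(self.fc1(h_n[-1])))   # raw logits for CrossEntropyLoss

model = LSTMFaultClassifier()
criterion = nn.CrossEntropyLoss()                  # loss function from Table 4
optimizer = torch.optim.Adam(model.parameters())   # optimizer from Table 4
```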
Table 5.
Confusion matrices for the LSTM-based fault classifier.
| Fault Class | True 0, Pred. 0 (TN) | True 0, Pred. 1 (FP) | True 1, Pred. 0 (FN) | True 1, Pred. 1 (TP) |
|---|---|---|---|---|
| AF | 6727 | 0 | 39 | 1134 |
| AE | 6270 | 66 | 52 | 1512 |
| BF | 6335 | 1 | 10 | 1554 |
| BE | 6317 | 19 | 29 | 1535 |
| CF | 6321 | 15 | 95 | 1469 |
| CE | 6726 | 1 | 2 | 1171 |
| DF | 6336 | 0 | 0 | 1564 |
| DE | 6331 | 5 | 11 | 1553 |
| H | 7404 | 25 | 1 | 470 |
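The nine matrices above are one-vs-rest confusion matrices, one per fault class, with rows indexed by the true binary label and columns by the prediction. The short sketch below shows how such per-class matrices can be derived from ordinary multiclass predictions; scikit-learn and the placeholder label arrays are illustrative assumptions, not part of the paper's toolchain.

```python
# Illustrative only: reproduces the per-class [[TN, FP], [FN, TP]] layout of
# Tables 5 and 8 from multiclass labels; y_true / y_pred are placeholders.
from sklearn.metrics import multilabel_confusion_matrix

classes = ["AF", "AE", "BF", "BE", "CF", "CE", "DF", "DE", "H"]
y_true = ["H", "AF", "AF", "BE"]   # placeholder ground-truth fault classes
y_pred = ["H", "AF", "H",  "BE"]   # placeholder classifier outputs

# One 2x2 matrix per class: rows = true label (0/1), columns = predicted label (0/1).
for name, cm in zip(classes, multilabel_confusion_matrix(y_true, y_pred, labels=classes)):
    print(name, cm.tolist())
```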
Table 6.
Evaluation of the CNN-based fault classifier.
| Kernel Size | No. Kernels | No. Filters | F-Score | Precision | Recall | Accuracy |
|---|---|---|---|---|---|---|
| 3 | 4 | 2 | 0.789 | 0.707 | 0.893 | 0.918 |
| 3 | 4 | 4 | 0.876 | 0.839 | 0.918 | 0.956 |
| 3 | 4 | 8 | 0.889 | 0.850 | 0.932 | 0.960 |
| 3 | 4 | 16 | 0.884 | 0.817 | 0.962 | 0.957 |
| 3 | 4 | 32 | 0.963 | 0.968 | 0.957 | 0.987 |
| 3 | 4 | 64 | 0.964 | 0.973 | 0.956 | 0.988 |
| 3 | 5 | 2 | 0.804 | 0.736 | 0.885 | 0.926 |
| 3 | 5 | 4 | 0.910 | 0.881 | 0.940 | 0.968 |
| 3 | 5 | 8 | 0.939 | 0.930 | 0.948 | 0.979 |
| 3 | 5 | 16 | 0.963 | 0.971 | 0.955 | 0.987 |
| 3 | 5 | 32 | 0.965 | 0.972 | 0.958 | 0.988 |
| 3 | 5 | 64 | 0.969 | 0.978 | 0.961 | 0.990 |
| 3 | 6 | 2 | 0.811 | 0.748 | 0.886 | 0.929 |
| 3 | 6 | 4 | 0.910 | 0.907 | 0.912 | 0.969 |
| 3 | 6 | 8 | 0.957 | 0.973 | 0.942 | 0.986 |
| 3 | 6 | 16 | 0.967 | 0.977 | 0.957 | 0.989 |
| 3 | 6 | 32 | 0.963 | 0.976 | 0.949 | 0.987 |
| 3 | 6 | 64 | 0.980 | 0.986 | 0.975 | 0.993 |
| 5 | 4 | 2 | 0.844 | 0.789 | 0.908 | 0.942 |
| 5 | 4 | 4 | 0.896 | 0.862 | 0.934 | 0.963 |
| 5 | 4 | 8 | 0.949 | 0.943 | 0.956 | 0.983 |
| 5 | 4 | 16 | 0.957 | 0.960 | 0.954 | 0.985 |
| 5 | 4 | 32 | 0.971 | 0.978 | 0.964 | 0.990 |
| 5 | 4 | 64 | 0.973 | 0.983 | 0.964 | 0.991 |
| 5 | 5 | 2 | 0.862 | 0.801 | 0.933 | 0.949 |
| 5 | 5 | 4 | 0.937 | 0.926 | 0.947 | 0.978 |
| 5 | 5 | 8 | 0.948 | 0.941 | 0.955 | 0.982 |
| 5 | 5 | 16 | 0.966 | 0.974 | 0.959 | 0.989 |
| 5 | 5 | 32 | 0.965 | 0.975 | 0.956 | 0.988 |
| 5 | 5 | 64 | 0.970 | 0.978 | 0.961 | 0.990 |
| 5 | 6 | 2 | 0.909 | 0.898 | 0.921 | 0.969 |
| 5 | 6 | 4 | 0.931 | 0.938 | 0.924 | 0.976 |
| 5 | 6 | 8 | 0.940 | 0.958 | 0.924 | 0.980 |
| 5 | 6 | 16 | 0.965 | 0.974 | 0.956 | 0.988 |
| 5 | 6 | 32 | 0.964 | 0.974 | 0.954 | 0.988 |
| 5 | 6 | 64 | 0.971 | 0.975 | 0.967 | 0.990 |
Table 7.
Parameters of the best-performing CNN-based fault classifier.
| Parameter | Value |
|---|---|
| Batch size | 640 |
| Loss function | CrossEntropyLoss |
| Output layer activation function | Sigmoid |
| Between-layer activation function | ReLU |
| Optimizer | Adam |
| Compiler metric | accuracy |
| Checkpoint monitor | validation loss |
| Number of epochs | 38 |
| Training loss | 2.27 |
| Training accuracy | 1 |
| Validation loss | 2.38 |
| Validation accuracy | 0.997 |
| Test accuracy | 0.993 |
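Tables 6 and 7 fix the kernel size, filter counts, and training settings but not the complete convolutional topology, so the following is only a hedged PyTorch sketch in that spirit (kernel size 3 and 64 filters as in the best row of Table 6, ReLU between layers, Adam, CrossEntropyLoss). The input shape, the number of convolutional layers, and the pooling scheme are assumptions for illustration.

```python
# Hypothetical sketch only: kernel size and filter count follow the best row
# of Table 6; input shape, layer count, and pooling are assumptions.
import torch
import torch.nn as nn

class CNNFaultClassifier(nn.Module):
    def __init__(self, n_mfcc=104, n_filters=64, kernel_size=3, n_classes=9):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_mfcc, n_filters, kernel_size, padding=1), nn.ReLU(),
            nn.Conv1d(n_filters, n_filters, kernel_size, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),               # collapse the time axis
        )
        self.classifier = nn.Linear(n_filters, n_classes)

    def forward(self, x):
        # x: (batch, n_mfcc, time_steps) -- MFCC matrix of one audio frame
        return self.classifier(self.features(x).squeeze(-1))  # raw logits

model = CNNFaultClassifier()
criterion = nn.CrossEntropyLoss()                  # loss function from Table 7
optimizer = torch.optim.Adam(model.parameters())   # optimizer from Table 7
```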
Table 8.
Confusion matrices for the CNN-based fault classifier.
| Fault Class | True 0, Pred. 0 (TN) | True 0, Pred. 1 (FP) | True 1, Pred. 0 (FN) | True 1, Pred. 1 (TP) |
|---|---|---|---|---|
| AF | 6725 | 2 | 34 | 1139 |
| AE | 6269 | 67 | 43 | 1521 |
| BF | 6334 | 2 | 19 | 1545 |
| BE | 6310 | 26 | 63 | 1501 |
| CF | 6318 | 18 | 106 | 1458 |
| CE | 6725 | 2 | 23 | 1150 |
| DF | 6330 | 6 | 0 | 1564 |
| DE | 6326 | 10 | 6 | 1558 |
| H | 7394 | 35 | 21 | 450 |
Table 9.
Execution times for the feature extraction, training, and classification steps.
| Step | Measure | LSTM | CNN |
|---|---|---|---|
| MFCC extraction | Avg. per frame [ms] | 2.860 | |
| MFCC extraction | Std dev. [ms] | 0.032 | |
| Model training | Total [min] | 24 | 7 |
| Fault classification | Avg. per frame [ms] | 0.037 | 0.630 |
| Fault classification | Std dev. [ms] | 0.003 | 0.011 |
Table 10.
Effects of the length of the signal frame on the performance of the LSTM-based fault classifier.
| Frame Length [ms] | F-Score | Precision | Recall | Accuracy |
|---|---|---|---|---|
| 600 | 0.980 | 0.986 | 0.975 | 0.993 |
| 500 | 0.985 | 0.989 | 0.980 | 0.995 |
| 400 | 0.975 | 0.980 | 0.969 | 0.991 |
| 300 | 0.964 | 0.972 | 0.955 | 0.988 |
| 200 | 0.947 | 0.956 | 0.938 | 0.982 |
| 100 | 0.907 | 0.896 | 0.919 | 0.968 |
| 75 | 0.868 | 0.842 | 0.896 | 0.953 |
Table 11.
Performance of the LSTM fault classifier with different numbers of cepstral coefficients extracted for every audio channel.
| No. MFCC | F-Score | Precision | Recall | Accuracy |
|---|---|---|---|---|
| 208 | 0.971 | 0.980 | 0.962 | 0.990 |
| 104 | 0.985 | 0.989 | 0.980 | 0.995 |
| 52 | 0.960 | 0.959 | 0.962 | 0.986 |
| 26 | 0.915 | 0.891 | 0.940 | 0.970 |
| 13 | 0.885 | 0.836 | 0.939 | 0.958 |
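To illustrate how the frame length (Table 10) and MFCC count (Table 11) enter the pipeline, the sketch below extracts 104 cepstral coefficients from 500 ms frames of a single audio channel, i.e. the best-performing settings from the two tables. The sampling rate, the use of librosa, and the averaging of the short-time MFCC windows within each frame are assumptions for illustration; the paper's actual preprocessing is summarized in Figure 6.

```python
# Hedged sketch of per-frame MFCC extraction; sampling rate, windowing, and
# averaging are assumptions, only the frame length and MFCC count follow
# Tables 10 and 11.
import numpy as np
import librosa

SR = 16_000            # assumed sampling rate [Hz]
FRAME_LEN_S = 0.5      # best-performing frame length from Table 10
N_MFCC = 104           # best-performing MFCC count from Table 11

def frame_features(channel: np.ndarray) -> np.ndarray:
    """Split one audio channel into 500 ms frames and return one
    104-dimensional MFCC vector per frame."""
    samples_per_frame = int(SR * FRAME_LEN_S)
    n_frames = len(channel) // samples_per_frame
    feats = []
    for i in range(n_frames):
        frame = channel[i * samples_per_frame:(i + 1) * samples_per_frame]
        mfcc = librosa.feature.mfcc(y=frame, sr=SR, n_mfcc=N_MFCC)
        feats.append(mfcc.mean(axis=1))   # average over the short-time windows
    return np.stack(feats)

# Example on synthetic audio: 10 s of noise -> 20 frames x 104 coefficients.
features = frame_features(np.random.randn(10 * SR).astype(np.float32))
print(features.shape)
```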