An Incremental Class-Learning Approach with Acoustic Novelty Detection for Acoustic Event Recognition
Abstract
1. Introduction
- the use and investigation of ICL algorithms on acoustic signals in an AER task,
- the pre-training of F-TDNN and TDNN-LSTM using MFCCs and raw acoustic signals, respectively,
- the extraction of deep audio representations with the pre-trained F-TDNN and TDNN-LSTM,
- the development of a semi-supervised AND method to detect new acoustic events for ICL,
- the augmentation of audio signals to increase the number of features obtained from a detected novel event, used both for ICL and for retraining the AND algorithm (a minimal feature-extraction and augmentation sketch follows this list),
- the comparison of the deep features from the F-TDNN and TDNN-LSTM with those from the state-of-the-art VGG-16 and ResNet-34 networks pre-trained on Mel-spectrograms of the same dataset,
- the integration of ICL and AND in a single framework to achieve ICL without human supervision and
- the collection of an audio dataset in a domestic environment.
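
As a concrete illustration of the feature-extraction and augmentation steps listed above, the following is a minimal sketch assuming librosa and NumPy. The parameter choices (16 kHz sampling, 40 MFCCs, the particular stretch/shift factors and noise level) are illustrative placeholders, not the settings used in the paper.

```python
import numpy as np
import librosa


def extract_mfcc(y, sr, n_mfcc=40):
    """Compute an MFCC matrix (n_mfcc x frames) from a mono signal."""
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)


def augment(y, sr, noise_std=0.005):
    """Return simple augmented variants of a clip: time-stretching,
    pitch-shifting and additive Gaussian noise."""
    return [
        librosa.effects.time_stretch(y, rate=0.9),
        librosa.effects.time_stretch(y, rate=1.1),
        librosa.effects.pitch_shift(y, sr=sr, n_steps=2),
        librosa.effects.pitch_shift(y, sr=sr, n_steps=-2),
        y + noise_std * np.random.randn(len(y)),
    ]


if __name__ == "__main__":
    sr = 16000
    t = np.linspace(0, 1.0, sr, endpoint=False)
    y = 0.1 * np.sin(2 * np.pi * 440 * t)       # synthetic 1 s test tone
    feats = [extract_mfcc(v, sr) for v in [y] + augment(y, sr)]
    print([f.shape for f in feats])             # one MFCC matrix per variant
```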
2. Background
2.1. Acoustic Scene Analysis
2.2. Novelty Detection on Acoustic Signals
2.3. Incremental Class-Learning
3. Proposed Approach
- a novel event detector retrained in a semi-supervised manner and
- an acoustic event recognizer that can learn incrementally from new events detected in the AND step (a simplified sketch of how these two components interact follows this list).
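
To make the interaction of these two components concrete, the sketch below uses simple stand-ins: a Gaussian mixture model with a log-likelihood threshold plays the role of the AND module, and a nearest-class-mean classifier plays the role of the incremental recognizer. The class structure, threshold and buffer size are assumptions for illustration only and do not reflect the models or hyper-parameters of the proposed approach.

```python
import numpy as np
from sklearn.mixture import GaussianMixture


class ToyAndIclLoop:
    """Toy AND + ICL loop: a GMM scores the novelty of embeddings; novel
    clips are buffered, turned into a new class of a nearest-class-mean
    recognizer, and the GMM is refit on the enlarged set of known data."""

    def __init__(self, threshold=-50.0, buffer_size=10, n_components=4):
        self.threshold = threshold       # illustrative log-likelihood cut-off
        self.buffer_size = buffer_size
        self.n_components = n_components
        self.class_means = {}            # label -> mean embedding
        self.known = None                # embeddings of all known classes
        self.gmm = None
        self.buffer = []

    def fit_known(self, X, y):
        for label in np.unique(y):
            self.class_means[int(label)] = X[y == label].mean(axis=0)
        self.known = X.copy()
        self.gmm = GaussianMixture(n_components=self.n_components).fit(X)

    def classify(self, x):
        labels = list(self.class_means)
        dists = [np.linalg.norm(x - self.class_means[l]) for l in labels]
        return labels[int(np.argmin(dists))]

    def process(self, x, next_label):
        """Return a known label, or None while a novel class is being collected."""
        if self.gmm.score_samples(x[None, :])[0] < self.threshold:   # novel?
            self.buffer.append(x)
            if len(self.buffer) >= self.buffer_size:
                new_X = np.stack(self.buffer)
                self.class_means[next_label] = new_X.mean(axis=0)    # ICL step
                self.known = np.vstack([self.known, new_X])
                self.gmm = GaussianMixture(                          # retrain AND
                    n_components=self.n_components).fit(self.known)
                self.buffer = []
            return None
        return self.classify(x)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    loop = ToyAndIclLoop()
    loop.fit_known(rng.normal(size=(120, 16)), rng.integers(0, 3, size=120))
    print(loop.process(rng.normal(size=16), next_label=3))
```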
3.1. Problem Definition
3.2. Pre-Processing
3.3. Feature Extraction
3.4. Acoustic Novelty Detection
3.5. Incremental Class-Learning
4. Results and Discussion
4.1. Experimental Setup
4.2. Experimental Procedure
4.3. Evaluation Metrics
4.4. Results of Novelty Detection
4.5. Results of the Incremental Class-Learning Experiments
4.6. Results of Incremental Class-Learning with Novelty Detection
4.7. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
Algorithm | Mel-Spectrogram | F-TDNN | TDNN-LSTM | ResNet-34 | VGG-16 |
---|---|---|---|---|---|
Stacked AE | 89.7/92.1 | 91.7/94.0 | 86.1/88.3 | 90.1/92.9 | 92.8/95.0 |
VAE | 83.5/86.2 | 84.1/85.2 | 80.7/81.7 | 87.3/90.3 | 88.9/91.1 |
kNN | 86.0/88.2 | 91.1/94.7 | 81.5/83.8 | 88.8/90.4 | 94.7/97.1 |
GMM | 92.7/94.4 | 96.1/97.2 | 86.5/86.7 | 92.3/94.2 | 96.4/97.4 |
OCSVM | 86.3/90.6 | 86.1/91.2 | 80.1/87.5 | 85.1/91.7 | 91.4/94.9 |
iForest | 78.4/81.6 | 77.4/84.1 | 74.9/76.8 | 80.7/86.1 | 83.1/88.1

Algorithm | Mel-Spectrogram | F-TDNN | TDNN-LSTM | ResNet-34 | VGG-16 |
---|---|---|---|---|---|
Stacked AE | 83.1/86.8 | 76.4/83.1 | 70.4/74.3 | 81.5/86.1 | 81.4/88.7 |
VAE | 76.0/83.2 | 76.1/83.4 | 66.4/67.8 | 83.8/86.3 | 81.3/85.2 |
kNN | 81.1/88.2 | 84.8/86.9 | 70.7/73.8 | 83.8/87.2 | 88.4/89.1 |
GMM | 80.5/87.9 | 85.1/88.9 | 77.0/77.7 | 83.0/86.7 | 89.0/89.1 |
OCSVM | 80.5/85.8 | 78.1/81.1 | 76.2/74.1 | 83.1/85.0 | 86.2/88.3 |
iForest | 67.2/71.4 | 62.4/65.2 | 59.4/57.8 | 63.3/66.3 | 71.3/73.0

Algorithm | Mel-Spectrogram | F-TDNN | TDNN-LSTM | ResNet-34 | VGG-16 |
---|---|---|---|---|---|
Stacked AE | 65.7/74.6 | 63.8/68.9 | 59.9/63.3 | 80.8/84.4 | 81.3/84.8 |
VAE | 62.2/68.6 | 60.1/66.6 | 56.7/62.4 | 74.8/77.9 | 74.7/76.7 |
kNN | 72.5/78.7 | 69.2/74.8 | 65.5/72.8 | 83.5/87.1 | 82.0/85.6 |
GMM | 70.8/78.9 | 73.0/78.9 | 71.2/78.5 | 80.1/85.9 | 85.1/87.7 |
OCSVM | 68.9/71.3 | 65.1/68.9 | 66.1/73.5 | 76.4/80.1 | 84.1/87.3 |
iForest | 60.5/62.8 | 59.7/64.0 | 55.8/57.7 | 62.2/66.8 | 63.3/68.2

Algorithm | Mel-Spectrogram | F-TDNN | TDNN-LSTM | ResNet-34 | VGG-16 |
---|---|---|---|---|---|
Stacked AE | 68.9/69.8 | 65.5/67.6 | 58.1/60.4 | 68.9/70.1 | 71.6/72.7 |
VAE | 53.4/58.9 | 60.9/64.4 | 59.9/62.2 | 67.4/68.8 | 66.3/69.7 |
kNN | 67.9/70.7 | 66.6/68.3 | 60.1/63.8 | 70.9/71.8 | 70.4/71.1 |
GMM | 71.0/73.8 | 68.1/69.8 | 59.5/64.7 | 71.2/73.4 | 71.9/73.4 |
OCSVM | 68.9/71.4 | 64.3/66.8 | 58.8/60.0 | 65.7/68.8 | 68.1/69.4 |
iForest | 52.2/54.2 | 56.1/58.2 | 48.1/52.8 | 56.2/58.1 | 59.2/60.1 |
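
The detectors compared in the tables above (kNN, GMM, OCSVM and iForest; the autoencoder baselines are omitted here for brevity) are standard density or one-class models. The sketch below shows, on synthetic data, how such baselines might be fit on embeddings of known events and scored for novelty with scikit-learn; it is a generic illustration and does not reproduce the paper's features, hyper-parameters or evaluation protocol.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.metrics import roc_auc_score
from sklearn.mixture import GaussianMixture
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_known = rng.normal(0.0, 1.0, size=(500, 64))             # embeddings of known events
X_test = np.vstack([rng.normal(0.0, 1.0, size=(100, 64)),  # more known events
                    rng.normal(3.0, 1.0, size=(100, 64))])  # a novel event
y_test = np.concatenate([np.zeros(100), np.ones(100)])     # 1 = novel

scores = {}

gmm = GaussianMixture(n_components=8, random_state=0).fit(X_known)
scores["GMM"] = -gmm.score_samples(X_test)                 # low likelihood -> novel

ocsvm = OneClassSVM(nu=0.1, gamma="scale").fit(X_known)
scores["OCSVM"] = -ocsvm.decision_function(X_test)         # outside margin -> novel

iforest = IsolationForest(random_state=0).fit(X_known)
scores["iForest"] = -iforest.score_samples(X_test)         # easily isolated -> novel

knn = NearestNeighbors(n_neighbors=5).fit(X_known)
dist, _ = knn.kneighbors(X_test)
scores["kNN"] = dist.mean(axis=1)                          # far from known -> novel

for name, s in scores.items():
    print(f"{name}: AUC = {roc_auc_score(y_test, s):.3f}")
```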
Algorithm | Domestic | ESC-10 | US8K | ESC-50 |
---|---|---|---|---|
LwF | 69.4/64.8 | 64.2/60.0 | 57.1/54.4 | 24.1/20.9 |
iCaRL | 78.5/77.6 | 68.1/68.3 | 62.1/59.6 | 28.1/21.1 |
FearNet | 80.7/81.4 | 74.3/71.0 | 63.8/59.5 | 30.8/24.7

Accuracy Values on VGG/F-TDNN and Number of Detected Events:

Algorithm | Domestic | ESC-10 | US8K | ESC-50 |
---|---|---|---|---|
iCaRL | 56.4/51.0/26 | 48.0/44.3/36 | 42.4/36.2/40 | 14.4/9.7/226
FearNet | 59.1/52.6/26 | 53.3/50.3/36 | 43.9/39.3/40 | 17.8/14.7/226
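
Among the ICL baselines in the two tables above, iCaRL keeps a fixed budget of stored exemplars per class and selects them by herding, i.e., greedily picking samples whose running mean best approximates the class mean in feature space. The sketch below implements only that selection step on generic feature vectors; it is a simplified stand-alone illustration, not the code or configuration used in the reported experiments.

```python
import numpy as np


def select_exemplars(features, m):
    """iCaRL-style herding: greedily pick m rows of `features` whose running
    mean stays as close as possible to the true class mean."""
    mu = features.mean(axis=0)
    chosen_idx, running_sum = [], np.zeros(features.shape[1])
    for k in range(1, m + 1):
        # distance of each candidate running mean to the class mean
        dists = np.linalg.norm(mu - (running_sum + features) / k, axis=1)
        dists[chosen_idx] = np.inf            # never pick the same sample twice
        idx = int(np.argmin(dists))
        chosen_idx.append(idx)
        running_sum += features[idx]
    return features[chosen_idx]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(200, 32))        # embeddings of one class
    exemplars = select_exemplars(feats, m=20)
    print(exemplars.shape)                    # (20, 32)
```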