Source Type Classification and Localization of Inter-Floor Noise with a Single Sensor and Knowledge Transfer between Reinforced Concrete Buildings †
Abstract
1. Introduction
1.1. Motivation
1.2. Related Literature
1.3. Approach
1.4. Contributions
2. Apartment Building Inter-Floor Noise Datasets
3. Inter-Floor Noise Classification
3.1. Onset Detection
3.2. Convolutional Neural Network-Based Classifier
3.3. Network Training
3.4. Inter-Floor Noise Source Type Classification and Localization Tasks
3.4.1. Source Type Classification in a Single Apartment Building
- (a) This task cross-validates source type classification with the inter-floor noise recorded on the floors above/below in APT I. It is realized by finding predictive functions over the source-type label space on five folds of labeled training/validation data pairs drawn from the APT I dataset (a minimal cross-validation sketch follows this list).
- (b) This task verifies source type classification against the inter-floor noise generated at unlearned positions in the same apartment building, APT I. The predictive functions obtained in (a) are tested against the test data pairs from these unlearned positions.
- (c) These tasks verify the same properties against the noise samples recorded on the floors above/below in APT II: predictive functions are trained and tested on the corresponding APT II training/validation and test data pairs.
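As an illustration of task (a), the following is a minimal sketch of a five-fold stratified cross-validation loop over onset-aligned noise segments. The array shapes, the placeholder label names, and the DummyClassifier stand-in (used here instead of the VGG16- or SoundNet-style CNNs) are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(0)
# Hypothetical onset-aligned inter-floor noise segments (one row per event)
# and source-type labels; shapes and label names are placeholders.
X = rng.standard_normal((200, 16000))                 # e.g., 1 s of audio at 16 kHz
y = rng.choice(["type_1", "type_2", "type_3", "type_4", "type_5"], size=200)

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
fold_scores = []
for fold, (train_idx, val_idx) in enumerate(skf.split(X, y)):
    clf = DummyClassifier(strategy="most_frequent")   # stand-in for the CNN classifier
    clf.fit(X[train_idx], y[train_idx])
    acc = accuracy_score(y[val_idx], clf.predict(X[val_idx]))
    fold_scores.append(acc)
    print(f"fold {fold}: validation accuracy = {acc:.4f}")

print(f"mean five-fold validation accuracy = {np.mean(fold_scores):.4f}")
```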
3.4.2. Localization in a Single Apartment Building
- (a) This task cross-validates locators against the inter-floor noise recorded on the floors above/below in APT I. It is realized by finding locators over the position label space on five folds of labeled training/validation data pairs drawn from the APT I dataset.
- (b) The locators obtained in (a) are tested against the inter-floor noise generated at unlearned positions in the same apartment building, APT I, i.e., against the corresponding test data pairs (a minimal evaluation sketch follows this list).
- (c) These tasks verify locators with the same approach as (a) and (b), using the inter-floor noise obtained from APT II.
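The following is a hedged sketch of how the fold-wise locators from task (a) could be scored on a disjoint test set recorded at unlearned positions, as in task (b). The function name and arguments are hypothetical placeholders, and the locators are assumed to expose a scikit-learn-style `predict` method.

```python
import numpy as np
from sklearn.metrics import accuracy_score

def test_on_unlearned_positions(fold_locators, X_test, y_test):
    """Evaluate already-trained fold-wise locators on a disjoint test set.

    fold_locators: sequence of fitted estimators exposing predict().
    Returns the per-fold accuracies and their mean.
    """
    per_fold = [accuracy_score(y_test, locator.predict(X_test))
                for locator in fold_locators]
    return per_fold, float(np.mean(per_fold))
```

The same routine applies unchanged to the APT II tasks in (c), only with the APT II test pairs.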
3.4.3. Knowledge Transfer between the Apartment Buildings
- (a) These tasks test the source type classifiers obtained from each building against the data pairs from the other building (APT I to APT II, and APT II to APT I, respectively).
- (b) These tasks test localization knowledge transfer. The XY positions of the data points relative to the receiver position differ between APT I and APT II, so the position label spaces of the two buildings are considered different. Hence, the locators obtained from one building are tested against the data pairs from the other building, and the localized positions are rearranged to their corresponding floors (see the sketch after this list).
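Below is a hedged sketch of the floor-level rearrangement used when a locator is transferred between buildings: since the XY label spaces differ, predicted and true position labels are both collapsed to their floor component before scoring. The "floor-section-point" label format and the parsing rule are assumptions for illustration only.

```python
from sklearn.metrics import accuracy_score

def to_floor(position_label: str) -> str:
    # Assume labels such as "3-B-a", whose leading field encodes the floor.
    return position_label.split("-")[0]

def floor_level_accuracy(true_positions, predicted_positions):
    """Score a transferred locator at floor level only."""
    return accuracy_score([to_floor(t) for t in true_positions],
                          [to_floor(p) for p in predicted_positions])

# Example: a locator trained on APT I predicts APT I-style positions for
# APT II events; only the floor component of the two label spaces is comparable.
print(floor_level_accuracy(["3-C-b", "4-D-a"], ["3-A-a", "5-B-a"]))  # 0.5
```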
4. Performance Evaluation
4.1. Source Type Classification Results in a Single Apartment Building
4.2. Localization Results in a Single Apartment Building
- (a) The position labels 1-A-a and 1-B-a are considered equivalent to their corresponding learned position labels. This realizes the localization of inter-floor noise transmitted through the unlearned floor section (4 F → 3 F in APT I).
- (b) The localization results for 1-C-a, 1-D-a, and 1-E-a are collapsed to floor classification because their XY positions cannot be mapped directly to learned positions (see the sketch after this list).
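A hedged sketch of the two-level scoring implied by (a) and (b): unlearned labels that have a learned counterpart are scored at position level, while the remaining labels are only scored at floor level. The mapping, the label format, and the helper name are hypothetical and do not reproduce the paper's actual label correspondence.

```python
def score_unlearned_positions(predicted, true, learned_equivalent):
    """Count position-level and floor-level hits for unlearned-position events.

    learned_equivalent: dict mapping an unlearned label (e.g., "1-A-a") to the
    learned label it is considered equivalent to; labels absent from the dict
    (e.g., "1-C-a") are approximated by floor classification only.
    """
    position_hits = floor_hits = 0
    for p, t in zip(predicted, true):
        mapped = learned_equivalent.get(t)
        if mapped is not None and p == mapped:
            position_hits += 1
        if p.split("-")[0] == t.split("-")[0]:   # compare the floor field only
            floor_hits += 1
    return position_hits, floor_hits, len(true)
```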
4.3. Results of Knowledge Transfer between the Apartment Buildings
4.4. Input Signal Length Selection
5. Conclusions and Future Study
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Acknowledgments
Conflicts of Interest
References
- Jeon, J.Y.; Ryu, J.K.; Lee, P.J. A quantification model of overall dissatisfaction with indoor noise environment in residential buildings. Appl. Acoust. 2010, 71, 914–921. [Google Scholar] [CrossRef]
- Jeon, J.Y. Subjective evaluation of floor impact noise based on the model of ACF/IACF. J. Sound Vib. 2001, 241, 147–155. [Google Scholar] [CrossRef]
- Maschke, C.; Niemann, H. Health effects of annoyance induced by neighbour noise. Noise Control Eng. J. 2007, 55, 348–356. [Google Scholar] [CrossRef]
- Floor Noise Management Center. Monthly Report on Inter-Floor Noise Complaints. Available online: http://www.noiseinfo.or.kr/about/data_view.jsp?boardNo=199&keyfield=whole&keyword=&pg=2 (accessed on 19 January 2021).
- Park, S.H.; Lee, P.J.; Yang, K.S.; Kim, K.W. Relationships between non-acoustic factors and subjective reactions to floor impact noise in apartment buildings. J. Acoust. Soc. Am. 2016, 139, 1158–1167. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Korean Statistical Information Service. 2019 Housing Units by Type of Housing Units. Available online: https://kosis.kr/eng/statisticsList/statisticsListIndex.do?menuId=M_01_01&vwcd=MT_ETITLE&parmTabId=M_01_01&statId=1962005&themaId=#SelectStatsBoxDiv (accessed on 4 June 2021).
- Korea Environment Corporation. Korea Environment Corporation Main Webpage. Available online: https://www.keco.or.kr/en/main/index.do (accessed on 19 January 2021).
- Floor Noise Management Center. Inter-Floor Noise Complaints Received until the End of Year 2019. Available online: http://www.noiseinfo.or.kr/about/stats/counselServiceSttus_01.jsp (accessed on 19 January 2021).
- Choi, H.; Lee, S.; Yang, H.; Seong, W. Classification of noise between floors in a building using pre-trained deep convolutional neural networks. In Proceedings of the 16th International Workshop on Acoustic Signal Enhancement (IWAENC), Tokyo, Japan, 17–20 September 2018; pp. 535–539. [Google Scholar]
- Choi, H.; Yang, H.; Lee, S.; Seong, W. Classification of inter-floor noise type/position via convolutional neural network-based supervised learning. Appl. Sci. 2019, 9, 3735. [Google Scholar] [CrossRef] [Green Version]
- Bahroun, R.; Michel, O.; Frassati, F.; Carmona, M.; Lacoume, J.L. New algorithm for footstep localization using seismic sensors in an indoor environment. J. Sound Vib. 2014, 333, 1046–1066. [Google Scholar] [CrossRef] [Green Version]
- Poston, J.D.; Buehrer, R.M.; Tarazaga, P.A. Indoor footstep localization from structural dynamics instrumentation. Mech. Syst. Signal Process. 2017, 88, 224–239. [Google Scholar] [CrossRef]
- Mirshekari, M.; Pan, S.; Fagert, J.; Schooler, E.M.; Zhang, P.; Noh, H.Y. Occupant localization using footstep-induced structural vibration. Mech. Syst. Signal Process. 2018, 112, 77–97. [Google Scholar] [CrossRef]
- Barchiesi, D.; Giannoulis, D.; Stowell, D.; Plumbley, M.D. Acoustic scene classification: Classifying environments from the sounds they produce. IEEE Signal Process. Mag. 2015, 32, 16–34. [Google Scholar] [CrossRef]
- Abeßer, J. A review of deep learning based methods for acoustic scene classification. Appl. Sci. 2020, 10, 2020. [Google Scholar] [CrossRef] [Green Version]
- Sawhney, N.; Maes, P. Situational awareness from environmental sounds. Proj. Rep. Pattie Maes 1997, 1–7. [Google Scholar]
- Malkin, R.G.; Waibel, A. Single-channel indoor microphone localization. In Proceedings of the IEEE Conference on Acoustics, Speech and Signal Processing (ICASSP), Philadelphia, PA, USA, 18–23 March 2005; pp. 1434–1438. [Google Scholar]
- Aucouturier, J.J.; Defreville, B.; Pachet, F. The bag-of-frames approach to audio pattern recognition: A sufficient model for urban soundscapes but not for polyphonic music. J. Acoust. Soc. Am. 2007, 122, 881–891. [Google Scholar] [CrossRef] [Green Version]
- Piczak, K.J. Environmental sound classification with convolutional neural networks. In Proceedings of the IEEE 25th International Workshop on Machine Learning for Signal Processing (MLSP), Boston, MA, USA, 17–20 September 2015; pp. 1–6. [Google Scholar]
- Tobias, A. Acoustic-emission source location in two dimensions by an array of three sensors. Non Destruct. Test 1976, 9, 9–12. [Google Scholar] [CrossRef]
- Ciampa, F.; Meo, M. Acoustic emission localization in complex dissipative anisotropic structures using a one-channel reciprocal time reversal method. J. Acoust. Soc. Am. 2011, 130, 168–175. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Kundu, T.; Das, S.; Jata, K.V. Point of impact prediction in isotropic and anisotropic plates from the acoustic emission data. J. Acoust. Soc. Am. 2007, 122, 2057–2066. [Google Scholar] [CrossRef]
- Ciampa, F.; Meo, M. Acoustic emission source localization and velocity determination of the fundamental mode A0 using wavelet analysis and a newton-based optimization technique. Smart Mater. Struct. 2010, 19, 045027. [Google Scholar] [CrossRef] [Green Version]
- Goutaudier, D.; Gendre, D.; Kehr-Candille, V.; Ohayon, R. Single-sensor approach for impact localization and force reconstruction by using discriminating vibration modes. Mech. Syst. Signal Process. 2020, 138, 106534. [Google Scholar] [CrossRef]
- Grabec, I.; Sachse, W. Application of an intelligent signal processing system to acoustic emission analysis. J. Acoust. Soc. Am. 1989, 85, 1226–1235. [Google Scholar] [CrossRef]
- Kosel, T.; Grabec, I.; Mužič, P. Location of acoustic emission sources generated by air flow. Ultrasonics 2000, 38, 824–826. [Google Scholar] [CrossRef]
- Ing, R.K.; Quieffin, N.; Catheline, S.; Fink, M. In solid localization of finger impacts using acoustic time-reversal process. Appl. Phys. Lett. 2005, 87, 204104. [Google Scholar] [CrossRef]
- Ruiz, M.; Mujica, L.; Berjaga, X.; Rodellar, J. Partial least square/projection to latent structures (PLS) regression to estimate impact localization in structures. Smart Mater. Struct. 2013, 22, 025028. [Google Scholar] [CrossRef]
- Sung, D.U.; Oh, J.H.; Kim, C.G.; Hong, C.S. Impact monitoring of smart composite laminates using neural network and wavelet analysis. J. Intell. Mater. Syst. Struct. 2000, 11, 180–190. [Google Scholar] [CrossRef]
- Ebrahimkhanlou, A.; Salamone, S. Single-sensor acoustic emission source localization in plate-like structures using deep learning. Aerospace 2018, 5, 50. [Google Scholar] [CrossRef] [Green Version]
- Parhizkar, R.; Dokmanić, I.; Vetterli, M. Single-channel indoor microphone localization. In Proceedings of the IEEE Conference on Acoustics, Speech and Signal Processing (ICASSP), Florence, Italy, 4–9 May 2014; pp. 1434–1438. [Google Scholar]
- Niu, H.; Gong, Z.; Ozanich, E.; Gerstoft, P.; Wang, H.; Li, Z. Deep-learning source localization using multi-frequency magnitude-only data. J. Acoust. Soc. Am. 2019, 146, 211–222. [Google Scholar] [CrossRef] [Green Version]
- Komen, D.F.; Neilsen, T.B.; Howarth, K.; Knobels, D.P.; Dahl, P.H. Seabed and range estimation of impulsive time series using a convolutional neural network. J. Acoust. Soc. Am. 2020, 147, EL403–EL408. [Google Scholar] [CrossRef]
- Poston, J.D. Toward tracking multiple building occupants by footstep vibrations. In Proceedings of the IEEE Global Conference on Signal and Information Processing (GlobalSIP), Anaheim, CA, USA, 26–28 November 2018; pp. 86–90. [Google Scholar]
- Woolard, A.G. Supplementing Localization Algorithms for Indoor Footsteps. Ph.D. Thesis, Virginia Polytechnic Institute and State University, Blacksburg, VA, USA, 7 July 2017. [Google Scholar]
- The Ministry of Land, Infrastructure and Transport Korea. Statistics of Housing Construction (Construction Consent). Available online: http://kosis.kr/statHtml/statHtml.do?orgId=116&tblId=DT_MLTM_564&conn_path=I2 (accessed on 19 January 2021).
- The Seoul Institute. Construction Consent. Available online: http://data.si.re.kr/node/344 (accessed on 19 January 2021).
- Song, Y.; Choi, G.-R. Flat column dry wall (FcDW) system design for apartment. Mag. Korea Concr. Inst. 2008, 20, 37–42. [Google Scholar]
- Chosun Ilbo. Noise Characteristics of Apartment Buildings in South Korea. Available online: http://realty.chosun.com/site/data/html_dir/2018/08/21/2018082102461.html (accessed on 19 January 2021).
- Samsung Electronics. Galaxy S6. Available online: https://www.samsung.com/global/galaxy/galaxys6/galaxy-s6 (accessed on 11 March 2020).
- Allen, R.V. Automatic earthquake recognition and timing from single traces. Bull. Seismol. Soc. Am. 1978, 68, 1521–1532. [Google Scholar]
- Kurz, J.H.; Grosse, C.U.; Reinhardt, H.W. Strategies for reliable automatic onset time picking of acoustic emissions and of ultrasound signals in concrete. Ultrasonics 2005, 43, 538–546. [Google Scholar] [CrossRef]
- Allen, R.V. Automatic phase pickers: Their present use and future prospects. Bull. Seismol. Soc. Am. 1982, 72, S225–S242. [Google Scholar]
- Hensman, J.; Mills, R.; Pierce, S.G.; Worden, K.; Eaton, M. Locating acoustic emission sources in complex structures using Gaussian processes. Mech. Syst. Signal Process. 2010, 24, 211–223. [Google Scholar] [CrossRef]
- Chung, P.; Jost, M.L.; Böhme, J.F. Estimation of seismic-wave parameters and signal detection using maximum-likelihood methods. Comput. Geosci. 2001, 27, 147–156. [Google Scholar] [CrossRef]
- Saragiotis, C.D.; Hadjileontiadis, L.J.; Panas, S.M. PAI-S/K: A robust automatic seismic P phase arrival identification scheme. IEEE Trans. Geosci. Remote Sens. 2002, 40, 1395–1404. [Google Scholar] [CrossRef]
- Saragiotis, C.D.; Hadjileontiadis, L.J.; Rekanos, I.T.; Panas, S.M. Automatic P phase picking using maximum kurtosis and κ-statistics criteria. IEEE Geosci. Remote Sens. Lett. 2004, 1, 147–151. [Google Scholar] [CrossRef]
- Goodfellow, I.; Bengio, Y.; Courville, A. Representation learning. In Deep learning; Dietterich, T., Bishop, C., Heckerman, D., Jordan, M., Kearns, M., Eds.; The MIT Press: Cambridge, MA, USA; London, UK, 2017; pp. 330–372. ISBN 978-026-203-561-3. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778. [Google Scholar]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. In Proceedings of the 25th International Conference on Neural Information Processing Systems (NIPS), Lake Tahoe, NV, USA, 3–6 December 2012; pp. 1097–1105. [Google Scholar]
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
- Dieleman, S.; Schrauwen, B. End-to-end learning for music audio. In Proceedings of the IEEE Conference on Acoustics, Speech and Signal Processing (ICASSP), Florence, Italy, 4–9 May 2014; pp. 6964–6968. [Google Scholar]
- Tokozume, Y.; Harada, T. Learning environmental sounds with end-to-end convolutional neural network. In Proceedings of the IEEE Conference on Acoustics, Speech and Signal Processing (ICASSP), New Orleans, LA, USA, 5–9 March 2017; pp. 2721–2725. [Google Scholar]
- Lee, J.; Park, J.; Kim, K.L.; Nam, J. End-to-end deep convolutional neural networks using very small filters for music classification. Appl. Sci. 2018, 8, 150. [Google Scholar] [CrossRef] [Green Version]
- Abadi, M.; Agarwal, A.; Barham, P.; Brevdo, E.; Chen, Z.; Citro, C.; Corrado, G.S.; Davis, A.; Dean, J.; Devin, M.; et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. Available online: https://www.tensorflow.org (accessed on 20 March 2020).
- McFee, B.; Raffel, C.; Liang, D.; Ellis, D.P.; McVicar, M.; Battenberg, E.; Nieto, O. librosa: Audio and music signal analysis in python. In Proceedings of the 14th Python in Science Conference (SCIPY), Austin, TX, USA, 6–12 July 2015; pp. 18–25. [Google Scholar]
- Bodlund, K. Alternative reference curves for evaluation of the impact sound insulation between dwellings. J. Sound Vib. 2017, 116, 173–181. [Google Scholar] [CrossRef]
- Park, S.H.; Lee, P.J. Effects of floor impact noise on psychophysiological responses. Build. Environ. 2017, 116, 173–181. [Google Scholar] [CrossRef] [Green Version]
- Deng, J.; Dong, W.; Socher, R.; Li, L.; Li, K.; Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar]
- Pan, S.J.; Yang, Q. A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 2009, 22, 1345–1359. [Google Scholar] [CrossRef]
- Oquab, M.; Bottou, L.; Laptev, I.; Sivic, J. Learning and transferring mid-level image representations using convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 23–28 June 2014; pp. 1717–1724. [Google Scholar]
- Marmanis, D.; Datcu, M.; Esch, T.; Stilla, U. Deep learning earth observation classification using ImageNet pretrained networks. IEEE Geosci. Remote Sens. Lett. 2015, 13, 105–109. [Google Scholar] [CrossRef] [Green Version]
- Zhang, S.; Qin, Y.; Sun, K.; Lin, Y. Few-Shot Audio Classification with Attentional Graph Neural Networks. In Proceedings of the INTERSPEECH, Graz, Austria, 15–19 September 2019; pp. 3649–3653. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1026–1034. [Google Scholar]
- Aytar, Y.; Vondrick, C.; Torralba, A. SoundNet: Learning sound representations from unlabeled video. In Proceedings of the 30th International Conference on Neural Information Processing Systems (NIPS), Barcelona, Spain, 5–10 December 2016; pp. 892–900. [Google Scholar]
- Bergstra, J.; Bengio, Y. Random search for hyper-parameter optimization. J. Mach. Learn. Res. 2012, 13, 281–305. [Google Scholar]
- Larochelle, H.; Erhan, D.; Courville, A.; Bergstra, J.; Bengio, Y. An empirical evaluation of deep architectures on problems with many factors of variation. In Proceedings of the 24th International Conference on Machine Learning (ICML), Corvallis, OR, USA, 20–24 June 2007; pp. 473–480. [Google Scholar]
- Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830. [Google Scholar]
- Van der Maaten, L.; Hinton, G. Visualizing data using t-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605. [Google Scholar]
| Task | Task Type | Training/Validation Dataset | Test Dataset |
|---|---|---|---|
|  | Type |  | None |
|  | Type |  |  |
|  | Type |  | None |
|  | Type |  |  |
|  | Position |  | None |
|  | Position |  |  |
|  | Position |  | None |
|  | Position |  |  |
| Task | Task Type | Training/Validation Dataset | Test Dataset |
|---|---|---|---|
|  | Type |  |  |
|  | Type |  |  |
|  | Position |  |  |
|  | Position |  |  |
| Task | CNN | t = 0.152 s | t = 0.501 s | t = 1.00 s | t = 1.50 s | t = 2.00 s | t = 3.00 s |
|---|---|---|---|---|---|---|---|
|  | VGG16 | 0.9346 (0.9377) | 0.9663 (0.9708) | 0.9663 (0.9742) | 0.9719 (0.9764) | 0.9731 (0.9798) | 0.9662 (0.9730) |
|  | SoundNet | 0.3016 (0.4162) | 0.4102 (0.4913) | 0.9497 (0.9553) | 0.9555 (0.9634) | 0.9545 (0.9601) | 0.9172 (0.9240) |
|  | VGG16 | 0.6801 (0.9606) | 0.8235 (0.9645) | 0.8019 (0.9552) | 0.7873 (0.9220) | 0.8303 (0.9563) | 0.4616 (0.5128) |
|  | SoundNet | 0.2286 (0.3536) | 0.3339 (0.4094) | 0.6862 (0.8229) | 0.7494 (0.8132) | 0.7324 (0.8559) | 0.6976 (0.7688) |
|  | VGG16 | 0.9513 (0.9917) | 0.9582 (0.9938) | 0.9536 (0.9948) | 0.9541 (0.9938) | 0.9551 (0.9953) | 0.9456 (0.9922) |
|  | SoundNet | 0.3491 (0.5523) | 0.3789 (0.5266) | 0.9487 (0.9886) | 0.9507 (0.9876) | 0.9501 (0.9876) | 0.9378 (0.9844) |
|  | VGG16 | 0.7038 (0.9664) | 0.7262 (0.9898) | 0.8553 (0.9896) | 0.7910 (0.9907) | 0.7991 (0.9875) | 0.9102 (0.9900) |
|  | SoundNet | 0.2965 (0.4530) | 0.3204 (0.4686) | 0.8079 (0.9106) | 0.8616 (0.9164) | 0.8064 (0.9145) | 0.7324 (0.9249) |
| Task | CNN | t = 0.152 s | t = 0.501 s | t = 1.00 s | t = 1.50 s | t = 2.00 s | t = 3.00 s |
|---|---|---|---|---|---|---|---|
|  | VGG16 | 0.9047 | 0.9381 | 0.9438 | 0.9516 | 0.9574 | 0.9496 |
|  |  | 0.9607 | 0.9899 | 0.9910 | 1.000 | 0.9955 | 0.9944 |
|  | SoundNet | 0.7526 | 0.6954 | 0.7338 | 0.9426 | 0.9323 | 0.9607 |
|  |  | 0.9786 | 0.9273 | 0.9764 | 0.9910 | 0.9865 | 0.9899 |
|  | VGG16 | 0.8344 | 0.8785 | 0.8968 | 0.9079 | 0.9272 | 0.9328 |
|  |  | 0.9333 | 0.9557 | 0.9682 | 0.9813 | 0.9786 | 0.9807 |
|  | SoundNet | 0.4413 | 0.5150 | 0.8787 | 0.8752 | 0.9017 | 0.9337 |
|  |  | 0.8512 | 0.9318 | 0.9646 | 0.9641 | 0.9750 | 0.9880 |
|  | VGG16 | 0.6794 | 0.7401 | 0.7805 | 0.5960 | 0.7032 | 0.5596 |
|  |  | 0.9709 | 0.9651 | 0.9796 | 0.9818 | 0.9834 | 0.9793 |
|  | SoundNet | 0.1118 | 0.3461 | 0.4336 | 0.5951 | 0.6967 | 0.5093 |
|  |  | 0.2038 | 0.8627 | 0.9564 | 0.9087 | 0.9276 | 0.9333 |
|  | VGG16 | 0.2857 | 0.3454 | 0.3211 | 0.3470 | 0.2818 | 0.3475 |
|  |  | 0.9570 | 0.9635 | 0.9811 | 0.9775 | 0.9704 | 0.9752 |
|  | SoundNet | 0.1554 | 0.1855 | 0.3506 | 0.3557 | 0.3837 | 0.3356 |
|  |  | 0.6381 | 0.7368 | 0.9088 | 0.8348 | 0.9358 | 0.9059 |
| Task | CNN | t = 0.152 s | t = 0.501 s | t = 1.00 s | t = 1.50 s | t = 2.00 s | t = 3.00 s |
|---|---|---|---|---|---|---|---|
|  | VGG16 | 0.5778 (0.8273) | 0.6244 (0.8520) | 0.5865 (0.8178) | 0.5952 (0.8356) | 0.6617 (0.8800) | 0.2238 (0.3044) |
|  | SoundNet | 0.2557 (0.3727) | 0.2318 (0.3274) | 0.5789 (0.7838) | 0.5636 (0.7661) | 0.5421 (0.7848) | 0.4995 (0.7396) |
|  | VGG16 | 0.5296 (0.7285) | 0.7522 (0.8046) | 0.7482 (0.7957) | 0.7682 (0.7970) | 0.6122 (0.7656) | 0.7684 (0.8093) |
|  | SoundNet | 0.2048 (0.3231) | 0.2196 (0.3385) | 0.6009 (0.6537) | 0.5196 (0.5977) | 0.5691 (0.6481) | 0.4876 (0.5674) |
|  | VGG16 | 0.8037 | 0.7842 | 0.7384 | 0.6786 | 0.6976 | 0.5678 |
|  | SoundNet | 0.4528 | 0.4741 | 0.5747 | 0.6941 | 0.7943 | 0.6443 |
|  | VGG16 | 0.8903 | 0.8309 | 0.7654 | 0.7057 | 0.6850 | 0.6006 |
|  | SoundNet | 0.2171 | 0.2969 | 0.7650 | 0.7513 | 0.7094 | 0.7290 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Choi, H.; Seong, W.; Yang, H. Source Type Classification and Localization of Inter-Floor Noise with a Single Sensor and Knowledge Transfer between Reinforced Concrete Buildings. Appl. Sci. 2021, 11, 5399. https://doi.org/10.3390/app11125399