Multi-Threaded Sound Propagation Algorithm to Improve Performance on Mobile Devices
Abstract
1. Introduction
2. Related Work
2.1. Sound Propagation
2.2. Sound Propagation Components
3. Processing Flow and Analysis of Sound Rendering
3.1. Sound Rendering Pipeline
3.2. Single-Threaded Sound Propagation Algorithm
3.3. Performance Analysis of Sound Propagation Modes
4. Proposed Multi-Threaded Sound Propagation Algorithm
4.1. Multi-Threaded Sound Propagation Algorithm
Algorithm 1 G Mode
 1: N = maximum number of guide rays
 2–5: declarations: guide rays; combinations of hit-triangles; combinations buffer
 6: procedure GuideMode
 7:   Step 01: find combinations of hit-triangles
 8:   for each of the N guide rays do
 9:     set the origin position (the position of the listener L) and a random direction
10:     perform ray-tracing processing
11:     if the resulting hit-triangle combination is valid then
12:       add it to the combinations buffer
13:     end if
14:   end for
15:   Step 02: sort and remove duplicate combinations of hit-triangles
16:   for d = 0 to 3 do                        // depth loop
17:     merge-sort the combinations of depth d in the buffer
18:   end for
19:   for i = 0 to N − 1, j = 0 to N − 1 do    // N is the number of combinations
20:     if combination i is equal to combination j then
21:       advance j                            // extend the run of duplicates
22:     else
23:       remove the duplicate combinations from i + 1 to j − 1
24:       i ← j
25:     end if
26:   end for
27: end procedure
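Step 02 of Algorithm 1 can be sketched in a few lines; the function and variable names below are illustrative only (the paper does not name them), and Python stands in for whatever native implementation the mobile runtime would use:

```python
# Illustrative sketch (not the paper's code) of Algorithm 1, Step 02:
# after guide rays record which triangles they hit at each bounce,
# duplicate hit-triangle combinations are sorted and removed so that
# each candidate propagation path is validated only once.

def dedupe_hit_triangle_combinations(combinations):
    """Sort combinations (tuples of triangle indices, one per depth)
    and keep one representative of each duplicate run."""
    unique = []
    for combo in sorted(combinations):         # stands in for the merge sort
        if not unique or combo != unique[-1]:  # equal neighbours are duplicates
            unique.append(combo)
    return unique

rays = [(4, 7), (1, 3), (4, 7), (1, 3), (2, 9)]
print(dedupe_hit_triangle_combinations(rays))  # [(1, 3), (2, 9), (4, 7)]
```

Sorting first means duplicates become adjacent, so removal is a single linear pass instead of the O(N²) pairwise comparison suggested by the listing's nested index loop.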
Algorithm 2 Setup-Hit-Triangles
 1: combinations of hit-triangles produced by G mode
 2: listener L
 3: sound source S
 4: PathCacheBuffer of the sound source
 5: triangle at depth d
 6: triangle type ∈ {Reflection, Diffraction, None}
 7: procedure SetupHitTriangles
 8:   merge-sort the new combinations together with the combinations in the PathCacheBuffer
 9:   for each combination do
10:     for each depth d do
11:       take the triangle at depth d
12:       if the triangle is a reflection type then
13:         calculate image mirror positions based on the triangle
14:         update the image mirror positions
15:       else if the triangle is a diffraction type then
16:         calculate edge information based on the triangle
17:         update the edge information
18:       else                                 // None
19:         continue
20:       end if
21:     end for
22:   end for
23: end procedure
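For reflection-type triangles, Algorithm 2 computes image (mirror) source positions; this is the standard image-source construction, in which the source is reflected across the plane of the hit triangle. A minimal sketch, assuming the plane is given as a point on it plus a unit normal (parameter names are not from the paper):

```python
# Illustrative sketch of the image-source step in Algorithm 2: a
# reflection-type hit triangle mirrors the sound source across the
# triangle's plane, giving a virtual (image) source whose straight
# path to the listener models the specular reflection.

def mirror_point(source, plane_point, unit_normal):
    # Reflect `source` across the plane: S' = S - 2 * dot(S - P, n) * n
    d = sum((s - p) * n for s, p, n in zip(source, plane_point, unit_normal))
    return tuple(s - 2.0 * d * n for s, n in zip(source, unit_normal))

# Source at (1, 2, 3) mirrored across the floor plane z = 0:
print(mirror_point((1.0, 2.0, 3.0), (0.0, 0.0, 0.0), (0.0, 0.0, 1.0)))
# -> (1.0, 2.0, -3.0)
```

Higher-order reflections repeat the same mirroring once per depth, which is why the listing walks each combination depth by depth.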
4.2. Thread Synchronization
5. Experimental Results
5.1. Experimental Setup
5.2. Load-Balancing
5.3. Performance
5.4. Memory Usage and CPU Utilization
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Scene | PC Mode (ms) | DT Mode (ms) | ER Mode (ms) | LR Mode (ms) | Total Time (ms)
---|---|---|---|---|---
Sibenik | 8.15 | 0.00 | 125.19 | 168.90 | 302.6
Concert hall | 10.82 | 0.00 | 142.86 | 161.91 | 315.6
Angrybot | 2.06 | 0.00 | 38.59 | 60.45 | 101.1
RaceLake | 8.04 | 0.00 | 78.91 | 131.44 | 218.4
Scene | Sound Sources | Reflection Paths (max: 4th Order) | Diffraction Paths (max: 2nd Order) | Single-Threaded Frame Time (ms) | Multi-Threaded Frame Time (ms) | Increase Rate (%)
---|---|---|---|---|---|---
Sibenik | 1 | 36 | 7 | 57.2 | 32.4 | 76.54
 | 2 | 77 | 18 | 90.2 | 49.2 | 83.33
 | 4 | 153 | 20 | 162.6 | 82.8 | 96.38
 | 8 | 315 | 53 | 302.6 | 163.6 | 84.96
Concert hall | 1 | 62 | 0 | 57.8 | 35.8 | 61.45
 | 2 | 128 | 2 | 95.0 | 50.4 | 88.49
 | 4 | 266 | 8 | 167.4 | 84.8 | 97.41
 | 8 | 562 | 16 | 315.6 | 154.2 | 104.67
Angrybot | 1 | 11 | 2 | 25.4 | 14.4 | 76.39
 | 2 | 30 | 9 | 37.8 | 19.8 | 90.91
 | 4 | 53 | 15 | 62.2 | 34.6 | 79.77
 | 8 | 84 | 28 | 100.1 | 64.6 | 54.95
RaceLake | 1 | 75 | 0 | 45.8 | 25.2 | 81.75
 | 2 | 180 | 0 | 78.6 | 50.6 | 55.34
 | 4 | 295 | 2 | 139.8 | 97.4 | 43.53
 | 8 | 608 | 5 | 218.4 | 132.8 | 64.46
Average CPU Utilization (%) | |||
---|---|---|---|
Scene | Single-Threaded | Multi-Threaded | Difference |
Sibenik | 16.30 | 17.30 | 1.00 |
Concert hall | 17.80 | 19.20 | 1.40 |
Angrybot | 11.20 | 11.60 | 0.38 |
RaceLake | 14.60 | 15.40 | 0.71 |
Average Memory Usage (MB) | |||
---|---|---|---|
Sound Source | Single-Threaded | Multi-Threaded | Difference |
1 | 378.63 | 380.43 | 1.80 |
2 | 382.04 | 383.52 | 1.49 |
4 | 386.16 | 386.79 | 0.63 |
8 | 395.49 | 395.88 | 0.39 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Kim, E.; Choi, S.; Kim, C.G.; Park, W.-C. Multi-Threaded Sound Propagation Algorithm to Improve Performance on Mobile Devices. Sensors 2023, 23, 973. https://doi.org/10.3390/s23020973