Enhancing 360 Video Streaming through Salient Content in Head-Mounted Displays
Abstract
1. Introduction
- A deep convolutional neural network (DCNN) that detects the unique saliency of 360 videos in HMDs.
- An LSTM-based head-movement predictor leveraging both salient content and users’ head-orientation history (an illustrative sketch follows this list).
- A system that enables saliency-based 360 video streaming and studies the effect of head-movement prediction on the streaming performance.
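As a concrete illustration of the second contribution, the minimal Keras sketch below shows one way an LSTM predictor could combine a short head-orientation history with per-frame saliency features to regress the next head orientation. Keras itself is cited in the references, but the history length, feature dimensions, layer widths, and the (yaw, pitch, roll) encoding are assumptions made for this sketch; the authors’ actual architecture is described in Section 4.3.

```python
# Illustrative sketch only (not the authors' released code): an LSTM that takes a
# history of head orientations plus pooled saliency features and predicts the next
# head orientation. All sizes below are assumptions for demonstration.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

HISTORY_LEN = 10    # past time steps fed to the LSTM (assumed)
ORIENT_DIM = 3      # head orientation as (yaw, pitch, roll) (assumed)
SALIENCY_DIM = 64   # flattened/pooled saliency features per step (assumed)

def build_head_predictor():
    # Two input streams: head-orientation history and saliency features.
    orient_in = keras.Input(shape=(HISTORY_LEN, ORIENT_DIM), name="head_history")
    sal_in = keras.Input(shape=(HISTORY_LEN, SALIENCY_DIM), name="saliency_features")

    # Fuse the streams at each time step, then summarize the sequence with an LSTM.
    x = layers.Concatenate(axis=-1)([orient_in, sal_in])
    x = layers.LSTM(128)(x)

    # Regress the head orientation at the next time step.
    out = layers.Dense(ORIENT_DIM, name="next_orientation")(x)
    return keras.Model(inputs=[orient_in, sal_in], outputs=out)

if __name__ == "__main__":
    model = build_head_predictor()
    model.compile(optimizer="adam", loss="mse")
    # Dummy batch just to show the expected tensor shapes.
    heads = np.random.rand(4, HISTORY_LEN, ORIENT_DIM)
    sal = np.random.rand(4, HISTORY_LEN, SALIENCY_DIM)
    print(model.predict([heads, sal]).shape)  # (4, 3)
```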
2. Related Work, Motivation, and Overview of the Proposed System
2.1. Saliency Detection
2.2. Head Movement Prediction
2.3. 360 Video Streaming Systems
2.4. Overview of the Proposed System
3. Video Streaming Server
3.1. Workflow
3.2. 360 Video Saliency Detection Model
3.2.1. Network Architecture
3.2.2. Model Training
4. Video Streaming Client
4.1. Workflow
4.1.1. Playback Buffer
4.1.2. Head Predictor
4.1.3. Scheduler
4.2. Streaming Scheduler
4.2.1. Tile Planning
4.2.2. Handling Unexpected Head Movement
Algorithm 1: Tile scheduling algorithm
4.2.3. Handling Buffer Starving
4.3. LSTM-Based Head Movement Prediction
4.3.1. Model Architecture
4.3.2. Model Training
5. Evaluation
5.1. Experiment Setup
5.1.1. Training Datasets
5.1.2. System Settings
5.2. Evaluation Results
5.2.1. Buffer Stalling Count
5.2.2. Buffer Stalling Duration
5.2.3. Blank Ratio
5.2.4. Bandwidth Saved
5.2.5. Viewport Perceived Quality
6. Limitations and Discussion
7. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- MarketsandMarkets. Virtual Reality Market. 2021. Available online: www.marketsandmarkets.com/Market-Reports/reality-applications-market-458.html (accessed on 1 October 2021).
- Grand View Research. Virtual Reality Market Size, Share & Trends Analysis Report by Technology (Semi & Fully Immersive, Non-Immersive), by Device (HMD, GTD), by Component (Hardware, Software), by Application, by Region, and Segment Forecasts, 2021–2028. 2021. Available online: www.grandviewresearch.com/industry-analysis/virtual-reality-vr-market (accessed on 1 October 2021).
- Watanabe, K.; Soneda, Y.; Matsuda, Y.; Nakamura, Y.; Arakawa, Y.; Dengel, A.; Ishimaru, S. Discaas: Micro behavior analysis on discussion by camera as a sensor. Sensors 2021, 21, 5719. [Google Scholar] [CrossRef] [PubMed]
- Pavlič, J.; Tomažič, T. The (In)effectiveness of Attention Guidance Methods for Enhancing Brand Memory in 360° Video. Sensors 2022, 22, 8809. [Google Scholar] [CrossRef] [PubMed]
- Škola, F.; Rizvić, S.; Cozza, M.; Barbieri, L.; Bruno, F.; Skarlatos, D.; Liarokapis, F. Virtual reality with 360-video storytelling in cultural heritage: Study of presence, engagement, and immersion. Sensors 2020, 20, 5851. [Google Scholar] [CrossRef]
- Corbillon, X.; Simon, G.; Devlic, A.; Chakareski, J. Viewport-adaptive Navigable 360-degree Video Delivery. In Proceedings of the IEEE International Conference on Communications (ICC), Paris, France, 21–25 May 2017. [Google Scholar]
- Jeong, J.; Jang, D.; Son, J.; Ryu, E.S. 3DoF+ 360 video location-based asymmetric down-sampling for view synthesis to immersive VR video streaming. Sensors 2018, 18, 3148. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Ullah, H.; Zia, O.; Kim, J.H.; Han, K.; Lee, J.W. Automatic 360 mono-stereo panorama generation using a cost-effective multi-camera system. Sensors 2020, 20, 3097. [Google Scholar] [CrossRef]
- Yan, Z.; Yi, J. Dissecting Latency in 360° Video Camera Sensing Systems. Sensors 2022, 22, 6001. [Google Scholar] [CrossRef]
- Qian, F.; Han, B.; Xiao, Q.; Gopalakrishnan, V. Flare: Practical Viewport-Adaptive 360-Degree Video Streaming for Mobile Devices. In Proceedings of the International Conference on Mobile Computing and Networking, New Delhi, India, 29 October 2018–2 November 2018. [Google Scholar]
- He, J.; Qureshi, M.A.; Qiu, L.; Li, J.; Li, F.; Han, L. Rubiks: Practical 360-Degree Streaming for Smartphones. In Proceedings of the 16th Annual International Conference on Mobile Systems, Applications, and Services (MobiSys’18), Munich, Germany, 10–15 June 2018. [Google Scholar]
- Zhang, L.; Suo, Y.; Wu, X.; Wang, F.; Chen, Y.; Cui, L.; Liu, J.; Ming, Z. TBRA: Tiling and Bitrate Adaptation for Mobile 360-Degree Video Streaming. In Proceedings of the 29th ACM International Conference on Multimedia, Virtual Event, China, 20–24 October 2021; pp. 4007–4015. [Google Scholar]
- Nguyen, A.; Yan, Z.; Nahrstedt, K. Your Attention is Unique: Detecting 360-Degree Video Saliency in Head-Mounted Display for Head Movement Prediction. In Proceedings of the 26th ACM International Conference on Multimedia (MM), Seoul, Republic of Korea, 22–26 October 2018. [Google Scholar]
- Fan, C.; Lee, J.; Lo, W.; Huang, C.; Chen, K.; Hsu, C. Fixation Prediction for 360 Video Streaming in Head-Mounted Virtual Reality. In Proceedings of the ACM Workshop on Network and Operating Systems Support for Digital Audio and Video (NOSSDAV), Taipei, Taiwan, 20–23 June 2017. [Google Scholar]
- Li, C.; Zhang, W.; Liu, Y.; Wang, Y. Very long term field of view prediction for 360-degree video streaming. In Proceedings of the IEEE Conference on Multimedia Information Processing and Retrieval (MIPR 2019), San Jose, CA, USA, 28–30 March 2019. [Google Scholar]
- Dange, S.S.; Kumar, S.; Franklin, A. Content-Aware Optimization of Tiled 360° Video Streaming Over Cellular Network. In Proceedings of the 2021 17th International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob), Virtual, 11–13 October 2021; pp. 219–224. [Google Scholar]
- Shen, W.; Ding, L.; Zhai, G.; Cui, Y.; Gao, Z. A QoE-oriented saliency-aware approach for 360-degree video transmission. In Proceedings of the 2019 IEEE Visual Communications and Image Processing (VCIP), Sydney, Australia, 1–4 December 2019; pp. 1–4. [Google Scholar]
- Zhang, X.; Cheung, G.; Zhao, Y.; Le Callet, P.; Lin, C.; Tan, J.Z. Graph learning based head movement prediction for interactive 360 video streaming. IEEE Trans. Image Process. 2021, 30, 4622–4636. [Google Scholar] [CrossRef]
- Park, S.; Hoai, M.; Bhattacharya, A.; Das, S.R. Adaptive streaming of 360-degree videos with reinforcement learning. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Virtual, 5–9 January 2021; pp. 1839–1848. [Google Scholar]
- Huang, X.; Shen, C.; Boix, X.; Zhao, Q. Salicon: Reducing the semantic gap in saliency prediction by adapting deep neural networks. In Proceedings of the ICCV, Santiago, Chile, 7–13 December 2015. [Google Scholar]
- Qian, F.; Han, B.; Ji, L.; Gopalakrishnan, V. Optimizing 360 video delivery over cellular networks. In Proceedings of the 5th Workshop on All Things Cellular Operations, Applications and Challenges—ATC ’16, Denver, CO, USA, 22–24 June 2016. [Google Scholar]
- Duanmu, F.; Kurdoglu, E.; Hosseini, S.A.; Liu, Y.; Wang, Y. Prioritized Buffer Control in Two-tier 360 Video Streaming. In Proceedings of the Workshop on Virtual Reality and Augmented Reality Network (VR/AR Network), Los Angeles, CA, USA, 25 August 2017. [Google Scholar]
- Aladagli, A.D.; Ekmekcioglu, E.; Jarnikov, D.; Kondoz, A. Predicting Head Trajectories in 360° Virtual Reality Videos. In Proceedings of the IEEE International Conference on 3D Immersion (IC3D), Brussels, Belgium, 11–12 December 2017. [Google Scholar]
- Fang, Y.; Lin, W.; Chen, Z.; Tsai, C.; Lin, C. Video Saliency Detection in the Compressed Domain. In Proceedings of the ACM International Conference on Multimedia (MM), Nara, Japan, 29 October 2012–2 November 2012. [Google Scholar]
- Nguyen, T.V.; Xu, M.; Gao, G.; Kankanhalli, M.; Tian, Q.; Yan, S. Static Saliency vs. Dynamic Saliency: A Comparative Study. In Proceedings of the ACM International Conference on Multimedia (MM), Barcelona, Spain, 14–18 November 2013. [Google Scholar]
- Kummerer, M.; Theis, L.; Bethge, M. Deep Gaze I: Boosting Saliency Prediction with Feature Maps Trained on ImageNet. In Proceedings of the International Conference on Learning Representations (ICLR), San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
- Pan, F.; Sayrol, E.; Nieto, X.G.; McGuinness, K.; O’Connor, N.E. Shallow and Deep Convolutional Networks for Saliency Prediction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26–30 June 2016. [Google Scholar]
- Zhang, Y.; Qin, L.; Huang, Q.; Yang, K.; Zhang, J.; Yao, H. From Seed Discovery to Deep Reconstruction: Predicting Saliency in Crowd via Deep Networks. In Proceedings of the ACM International Conference on Multimedia (MM), Amsterdam, The Netherlands, 15–19 October 2016. [Google Scholar]
- Abreu, A.D.; Ozcinar, C.; Smolic, A. Look Around You: Saliency Maps for Omnidirectional Images in VR Applications. In Proceedings of the IEEE International Conference on Quality of Multimedia Experience (QoMEX), Erfurt, Germany, 31 May–2 June 2017. [Google Scholar]
- Sitzmann, V.; Serrano, A.; Pavel, A.; Agrawala, M.; Gutierrez, D.; Masia, B.; Wetzstein, G. Saliency in VR: How do People Explore Virtual Environments? IEEE Trans. Vis. Comput. Graph. 2018, 24, 1633–1642. [Google Scholar] [CrossRef] [Green Version]
- Monroy, R.; Lutz, S.; Chalasani, T.; Smolic, A. SalNet360: Saliency Maps for omni-directional images with CNN. arXiv 2017, arXiv:1709.06505v1. [Google Scholar] [CrossRef] [Green Version]
- Martin, D.; Serrano, A.; Masia, B. Panoramic convolutions for 360 single-image saliency prediction. In Proceedings of the CVPR Workshop on Computer Vision for Augmented and Virtual Reality, Seattle, WA, USA, 14–18 June 2020. [Google Scholar]
- Zhu, Y.; Zhai, G.; Yang, Y.; Duan, H.; Min, X.; Yang, X. Viewing behavior supported visual saliency predictor for 360 degree videos. IEEE Trans. Circuits Syst. Video Technol. 2021, 32, 4188–4201. [Google Scholar] [CrossRef]
- Zhang, Z.; Xu, Y.; Yu, J.; Gao, S. Saliency detection in 360 videos. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 488–503. [Google Scholar]
- Dahou, Y.; Tliba, M.; McGuinness, K.; O’Connor, N. ATSal: An Attention Based Architecture for Saliency Prediction in 360 Videos. In Proceedings of the International Conference on Pattern Recognition, Milan, Italy, 4–8 January 2021; Springer: Berlin/Heidelberg, Germany, 2021; pp. 305–320. [Google Scholar]
- Zhang, Y.; Dai, F.; Ma, Y.; Li, H.; Zhao, Q.; Zhang, Y. Saliency Prediction Network for 360 Videos. IEEE J. Sel. Top. Signal Process. 2019, 14, 27–37. [Google Scholar] [CrossRef]
- Fan, C.L.; Yen, S.C.; Huang, C.Y.; Hsu, C.H. On the optimal encoding ladder of tiled 360° videos for head-mounted virtual reality. IEEE Trans. Circuits Syst. Video Technol. 2020, 31, 1632–1647. [Google Scholar] [CrossRef]
- Xie, L.; Xu, Z.; Ban, Y.; Zhang, X.; Guo, Z. 360ProbDASH: Improving QoE of 360 Video Streaming Using Tile-based HTTP Adaptive Streaming. In Proceedings of the ACM International Conference on Multimedia (MM), Mountain View, CA, USA, 23–27 October 2017; pp. 1618–1626. [Google Scholar]
- Nasrabadi, A.T.; Mahzari, A.; Beshay, J.D.; Prakash, R. Adaptive 360-Degree Video Streaming using Scalable Video Coding. In Proceedings of the ACM International Conference on Multimedia (MM 2017), Mountain View, CA, USA, 23–27 October 2017; pp. 1794–1802. [Google Scholar]
- Zhang, X.; Hu, X.; Zhong, L.; Shirmohammadi, S.; Zhang, L. Cooperative tile-based 360° panoramic streaming in heterogeneous networks using scalable video coding. IEEE Trans. Circuits Syst. Video Technol. 2018, 30, 217–231. [Google Scholar] [CrossRef]
- Petrangeli, S.; Simon, G.; Swaminathan, V. Trajectory-based viewport prediction for 360-degree virtual reality videos. In Proceedings of the 2018 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), Taichung, Taiwan, 10–12 December 2018; pp. 157–160. [Google Scholar]
- Zhang, Y.; Zhao, P.; Bian, K.; Liu, Y.; Song, L.; Li, X. DRL360: 360-degree video streaming with deep reinforcement learning. In Proceedings of the IEEE INFOCOM 2019-IEEE Conference on Computer Communications, Paris, France, 29 April–2 May 2019. [Google Scholar]
- Yu, J.; Liu, Y. Field-of-view prediction in 360-degree videos with attention-based neural encoder-decoder networks. In Proceedings of the 11th ACM Workshop on Immersive Mixed and Virtual Environment Systems, Daejeon, Republic of Korea, 13–17 November 2019; pp. 77–81. [Google Scholar]
- Lee, D.; Choi, M.; Lee, J. Prediction of head movement in 360-degree videos using attention model. Sensors 2021, 21, 3678. [Google Scholar] [CrossRef]
- Zou, J.; Li, C.; Liu, C.; Yang, Q.; Xiong, H.; Steinbach, E. Probabilistic Tile Visibility-Based Server-Side Rate Adaptation for Adaptive 360-Degree Video Streaming. IEEE J. Sel. Top. Signal Process. 2020, 14, 161–176. [Google Scholar] [CrossRef]
- Zhao, P.; Zhang, Y.; Bian, K.; Tuo, H.; Song, L. LadderNet: Knowledge Transfer Based Viewpoint Prediction in 360° Video. In Proceedings of the ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019; pp. 2562–2566. [Google Scholar]
- Chopra, L.; Chakraborty, S.; Mondal, A.; Chakraborty, S. Parima: Viewport adaptive 360-degree video streaming. Proc. Web Conf. 2021, 2021, 2379–2391. [Google Scholar]
- Kundu, R.K.; Rahman, A.; Paul, S. A study on sensor system latency in vr motion sickness. J. Sens. Actuator Netw. 2021, 10, 53. [Google Scholar] [CrossRef]
- Narciso, D.; Bessa, M.; Melo, M.; Coelho, A.; Vasconcelos-Raposo, J. Immersive 360 video user experience: Impact of different variables in the sense of presence and cybersickness. Univers. Access Inf. Soc. 2019, 18, 77–87. [Google Scholar] [CrossRef]
- Ye, Y.; Boyce, J.M.; Hanhart, P. Omnidirectional 360° video coding technology in responses to the joint call for proposals on video compression with capability beyond HEVC. IEEE Trans. Circuits Syst. Video Technol. 2019, 30, 1241–1252. [Google Scholar] [CrossRef]
- Storch, I.; Agostini, L.; Zatt, B.; Bampi, S.; Palomino, D. FastInter360: A fast inter mode decision for HEVC 360 video coding. IEEE Trans. Circuits Syst. Video Technol. 2021, 32, 3235–3249. [Google Scholar] [CrossRef]
- Dasari, M.; Bhattacharya, A.; Vargas, S.; Sahu, P.; Balasubramanian, A.; Das, S.R. Streaming 360-Degree Videos Using Super-Resolution. In Proceedings of the IEEE INFOCOM 2020-IEEE Conference on Computer Communications, Toronto, ON, Canada, 6–9 July 2020. [Google Scholar]
- Park, S.; Bhattacharya, A.; Yang, Z.; Dasari, M.; Das, S.; Samaras, D. Advancing user quality of experience in 360-degree video streaming. In Proceedings of the 2019 IFIP Networking Conference (IFIP Networking), Warsaw, Poland, 20–22 May 2019. [Google Scholar]
- Kan, N.; Zou, J.; Li, C.; Dai, W.; Xiong, H. RAPT360: Reinforcement learning-based rate adaptation for 360-degree video streaming with adaptive prediction and tiling. IEEE Trans. Circuits Syst. Video Technol. 2021, 32, 1607–1623. [Google Scholar] [CrossRef]
- Zhang, H.; Ban, Y.; Guo, Z.; Chen, K.; Zhang, X. RAM360: Robust Adaptive Multi-layer 360 Video Streaming with Lyapunov Optimization. IEEE Trans. Multimed. 2022, 24, 546–558. [Google Scholar] [CrossRef]
- Maniotis, P.; Thomos, N. Tile-based edge caching for 360° live video streaming. IEEE Trans. Circuits Syst. Video Technol. 2021, 31, 4938–4950. [Google Scholar] [CrossRef]
- Xu, M.; Song, Y.; Wang, J.; Qiao, M.; Huo, L.; Wang, Z. Predicting head movement in panoramic video: A deep reinforcement learning approach. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 41, 2693–2708. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Wang, S.; Wu, S.; Duan, L.; Yu, C.; Sun, Y.; Dong, J. Person Re-Identification with Deep Features and Transfer Learning. arXiv 2016, arXiv:1611.05244. [Google Scholar]
- Molchanov, P.; Tyree, S.; Karras, T.; Aila, T.; Kautz, J. Pruning convolutional neural networks for resource efficient transfer learning. In Proceedings of the 5th International Conference on Learning Representations (ICLR), Toulon, France, 24–26 April 2017. [Google Scholar]
- Assens, M.; Giro-i-Nieto, X.; McGuinness, K.; O’Connor, N.E. SaltiNet: Scan-path Prediction on 360 Degree Images using Saliency Volumes. In Proceedings of the IEEE International Conference on Computer Vision Workshop (ICCVW), Venice, Italy, 22–29 October 2017. [Google Scholar]
- Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
- Cornia, M.; Baraldi, L.; Serra, G.; Cucchiara, R. A deep Multi-level Network for Saliency Prediction. In Proceedings of the IEEE International Conference on Pattern Recognition (ICPR), Cancun, Mexico, 4–8 December 2016. [Google Scholar]
- Sak, H.; Senior, A.; Beaufays, F. Long Short-Term Memory Recurrent Neural Network Architectures for Large Scale Acoustic Modeling. In Proceedings of the 15th Annual Conference of the International Speech Communication Association (INTERSPEECH), Singapore, 14–18 September 2014. [Google Scholar]
- Keras. Keras: The Python Deep Learning Library. Available online: https://keras.io (accessed on 1 June 2019).
- Nguyen, A.; Yan, Z. A Saliency Dataset for 360-Degree Videos. In Proceedings of the 10th ACM on Multimedia Systems Conference (MMSys’19), Amherst, MA, USA, 18–21 June 2019. [Google Scholar]
- Wu, C.; Tan, Z.; Wang, Z. A dataset for exploring user behaviors in VR spherical video streaming. In Proceedings of the 8th ACM on Multimedia Systems Conference (MMSys’17), Taipei, Taiwan, 20–23 June 2017. [Google Scholar]
- Tran, H.T.; Ngoc, N.P.; Pham, C.T.; Jung, Y.J.; Thang, T.C. A subjective study on QoE of 360 video for VR communication. In Proceedings of the 2017 IEEE 19th International Workshop on Multimedia Signal Processing (MMSP), Luton, UK, 16–18 October 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1–6. [Google Scholar]
- Hooft, J.V.; Petrangeli, S.; Wauters, T.; Huysegems, R.; Alface, P.R.; Bostoen, T.; Turck, F.D. HTTP/2-Based Adaptive Streaming of HEVC Video over 4G/LTE Networks. IEEE Commun. Lett. 2016, 20, 2177–2180. [Google Scholar] [CrossRef]
- Statista. Market Share of Mobile Telecommunication Technologies Worldwide from 2016 to 2025, by Generation. Available online: www.statista.com/statistics/740442/worldwide-share-of-mobile-telecommunication-technology/ (accessed on 15 March 2023).
- Apostolopoulos, J.G.; Tan, W.T.; Wee, S.J. Video Streaming: Concepts, Algorithms, and Systems; Report HPL-2002-260; HP Laboratories: Palo Alto, CA, USA, 2002. [Google Scholar]
- Corbillon, X.; Simone, F.D.; Simon, G. 360-Degree Video Head Movement Dataset. In Proceedings of the 8th ACM on Multimedia Systems Conference (MMSys’17), Taipei, Taiwan, 20–23 June 2017. [Google Scholar]
- Broeck, M.V.d.; Kawsar, F.; Schöning, J. It’s all around you: Exploring 360 video viewing experiences on mobile devices. In Proceedings of the 25th ACM International Conference on Multimedia, Mountain View, CA, USA, 23–27 October 2017; pp. 762–768. [Google Scholar]
- Chen, J.; Hu, M.; Luo, Z.; Wang, Z.; Wu, D. SR360: Boosting 360-degree video streaming with super-resolution. In Proceedings of the 30th ACM Workshop on Network and Operating Systems Support for Digital Audio and Video, Istanbul, Turkey, 10–11 June 2020; pp. 1–6. [Google Scholar]
- Lo, W.; Fan, C.; Lee, J. 360-degree Video Viewing Dataset in Head-Mounted Virtual Reality. In Proceedings of the 8th ACM on Multimedia Systems Conference (MMSys’17), Taipei, Taiwan, 20–23 June 2017. [Google Scholar]