Three-Dimensional Instance Segmentation Using the Generalized Hough Transform and the Adaptive n-Shifted Shuffle Attention
Abstract
1. Introduction
- Introduced an innovative attention mechanism, the ANSS attention, which improves the model’s capacity to concentrate on relevant parts of the 3D space, thus capturing subtle details better than conventional approaches.
- Reformulated and improved the n-sigmoid activation function so that the model outputs values in the range [−1, 1], which allows the representation of negative relationships between features and the active suppression of conflicting features (values near −1).
- Reformulated the Generalized Hough Transform with deep learning integration and the addition of a new attention mechanism for 3D instance segmentation.
- Improved the 3D object detection performance on S3DIS, a benchmark dataset.
2. Related Works
2.1. Point-Based Networks
2.2. Attention Mechanisms
2.3. Generalized Hough Transform
2.4. Activation Functions
3. The Proposed Adaptive n-Shifted Shuffle (ANSS) Attention Integrated with the Generalized Hough Transform (GHT)
3.1. Adaptive n-Shifted Shuffle (ANSS)
3.1.1. n-Shifted Sigmoid Activation Function
- a. n-sigmoid activation function
- b. n-shifted sigmoid activation function
- c. Benefits of the n-shifted sigmoid function in attention mechanisms (a code sketch follows this list):
  - An enhanced representation of negative relationships: In traditional attention mechanisms, the sigmoid function (Equation (3)) is bounded to [0, 1], which restricts the model to positive relationships. With the n-shifted sigmoid (Equation (4)), the range is [−1, 1], expressing both positive and negative relationships between features. This is particularly useful in occluded spaces, where the presence of one feature may inhibit or contradict another.
  - Improved attention weights: The n-shifted sigmoid's broader range [−1, 1] allows for finer-grained attention weights, whose values can represent both strong positive influence (near 1) and strong negative influence (near −1).
  - Here, the following hold:
    - When ρ ≈ 1, the feature f receives amplified attention.
    - When ρ ≈ −1, the feature f is actively suppressed, giving the model suppressive control over attention weights.
  - Enhanced model stability and training: The n-shifted sigmoid's broader range can improve gradient flow and stability during training by mitigating the vanishing-gradient problem. With the traditional sigmoid, outputs are restricted to [0, 1], and as x → ±∞ the gradient tends to 0, causing gradients to vanish. With the n-shifted sigmoid, this issue is reduced, mainly because the output range [−1, 1] allows for more substantial gradients even for large inputs, and because the shift parameter can be adjusted during training, adapting the activation to changing input distributions.
  - Adaptability to complex tasks: Its ability to handle both positive and negative activations makes the n-shifted sigmoid particularly well suited to complex 3D segmentation tasks. The function enables the model to do the following:
    - Delineate boundaries: Since the output can be negative, boundaries between contrasting regions can be clearly distinguished.
    - Identify complex patterns: The shift allows flexibility in the model's attention distribution, helping it handle intricate and variable spatial arrangements.
    - Mathematical intuition for boundary delineation: When a region with high contrast (a boundary) is encountered, the n-shifted sigmoid can amplify positive features on one side of the boundary while suppressing features on the other side.
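Since Equations (3) and (4) are not reproduced in this outline, the sketch below only illustrates the behaviour described above: a sigmoid rescaled to (−1, 1) with a trainable parameter n. The exact formulation in the paper may differ.

```python
import torch
import torch.nn as nn

class NShiftedSigmoid(nn.Module):
    """Sketch of an n-shifted sigmoid: a sigmoid rescaled to (-1, 1)
    with a learnable parameter n. This is an assumed form chosen to
    match the properties described in Section 3.1.1, not necessarily
    the paper's Equation (4)."""
    def __init__(self, n_init: float = 1.0):
        super().__init__()
        # n is learned jointly with the network weights, so the
        # activation can adapt to changing input distributions.
        self.n = nn.Parameter(torch.tensor(n_init))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # 2*sigmoid(n*x) - 1 lies in (-1, 1): values near -1 suppress
        # a feature, values near +1 amplify it.
        return 2.0 * torch.sigmoid(self.n * x) - 1.0
```

Applied to attention logits, the resulting weights ρ ∈ (−1, 1) can multiply the features directly, so that ρ ≈ −1 actively suppresses a feature instead of merely zeroing it out.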
3.1.2. Adaptive Shuffle Pattern
3.1.3. Adaptive n-Shifted Shuffle Attention (ASA)
- Split the feature map into groups: $X' = \{X_1, X_2, \dots, X_G\}$, where each $X_i \in \mathbb{R}^{C/G \times H \times W}$.
- Shuffle the channels within each group based on a learned pattern (a minimal sketch follows).
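The sketch below shows the mechanics of the grouped channel shuffle. The score-based permutation standing in for the "learned pattern" is an assumption for illustration (note that argsort indices are themselves non-differentiable), not the paper's actual mechanism.

```python
import torch
import torch.nn as nn

class AdaptiveShuffle(nn.Module):
    """Sketch of the grouped channel shuffle used in ANSS. As a
    stand-in for the learned pattern, channels of each group are
    reordered by the rank of learnable per-channel scores."""
    def __init__(self, channels: int, groups: int):
        super().__init__()
        assert channels % groups == 0
        self.groups = groups
        # one learnable score per channel; its ranking defines the shuffle
        self.scores = nn.Parameter(torch.randn(channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) -> split into G groups of C/G channels each
        b, c, h, w = x.shape
        x = x.view(b, self.groups, c // self.groups, h, w)
        scores = self.scores.view(self.groups, c // self.groups)
        # permute channels within each group by score rank
        order = scores.argsort(dim=1, descending=True)
        idx = order.view(1, self.groups, -1, 1, 1).expand(b, -1, -1, h, w)
        x = torch.gather(x, 2, idx)
        return x.view(b, c, h, w)
```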
3.2. Generalized Hough Transform (GHT)
3.2.1. PointNet Backbone for 3D Point Cloud Processing
3.2.2. GHT Module
- a. GHT as a deep learning framework
  - Differentiable Generalized Hough Transform: In its classic form, the GHT maps points in an image or 3D space to a parameter space and relies on discrete operations that are not inherently differentiable, making backpropagation through a neural network impossible. The GHT can, however, be modified to work with continuous probability density functions, so that vote accumulation in the Hough space becomes a continuous, smooth operation. This is achieved with differentiable functions (e.g., SoftMax, the sigmoid, or the n-shifted sigmoid), which smooth the accumulation of votes and allow gradients to propagate (see the sketch after this list).
  - Adaptation of the "voting" process: Instead of hand-crafted features mapping points to the Hough space, a neural network learns the optimal feature representations. In the ANSS GHT, the n-shifted sigmoid function scales the voting intensities based on learned parameters, dynamically adapting to varied spatial patterns and contexts in 3D space.
  - Integration with convolutional and attention mechanisms: The reformulated GHT benefits from convolutional layers that extract hierarchical features from the 3D input. By using a convolutional backbone, the network can efficiently capture both local and global contexts.
- b. GHT for object detection
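A minimal sketch of such a differentiable voting step is shown below, assuming learned per-point offsets (in the spirit of deep Hough voting [29]) and a Gaussian soft accumulator over a coarse 3D grid. The module name, heads, and grid parameters are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn

class SoftHoughVoting(nn.Module):
    """Sketch of a differentiable Hough voting step: each point
    regresses an offset to an object centre, and votes are spread
    over a coarse 3D grid with a Gaussian kernel so the accumulator
    stays differentiable end to end."""
    def __init__(self, feat_dim: int, grid_size: int = 16, sigma: float = 0.1):
        super().__init__()
        self.offset_head = nn.Linear(feat_dim, 3)   # learned vote offsets
        self.weight_head = nn.Linear(feat_dim, 1)   # learned vote intensity
        self.grid_size = grid_size
        self.sigma = sigma

    def forward(self, xyz: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        # xyz: (B, N, 3) in [0, 1]^3, feats: (B, N, F)
        votes = xyz + self.offset_head(feats)                   # (B, N, 3)
        # n-shifted-sigmoid-like weights in (-1, 1): negative votes
        # actively suppress accumulator cells instead of abstaining
        w = 2.0 * torch.sigmoid(self.weight_head(feats)) - 1.0  # (B, N, 1)
        # grid cell centres in the unit cube
        lin = torch.linspace(0, 1, self.grid_size, device=xyz.device)
        gz, gy, gx = torch.meshgrid(lin, lin, lin, indexing="ij")
        cells = torch.stack([gx, gy, gz], dim=-1).view(-1, 3)   # (G^3, 3)
        # soft assignment of each vote to every cell (Gaussian kernel)
        d2 = ((votes.unsqueeze(2) - cells.view(1, 1, -1, 3)) ** 2).sum(-1)
        kernel = torch.exp(-d2 / (2 * self.sigma ** 2))         # (B, N, G^3)
        # differentiable accumulator: weighted sum of soft votes
        return (w * kernel).sum(dim=1)                          # (B, G^3)
```

Peaks in the returned accumulator correspond to candidate object centres, and because every step is smooth, gradients flow back to both heads during training.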
4. Experiments and Results
4.1. Dataset and Evaluation Metrics
4.1.1. Data Preprocessing
4.1.2. Implementation Details
4.2. Results and Analysis
4.2.1. Average Recall (AR)
4.2.2. Accuracy
4.2.3. Robustness to Noise
4.2.4. Inference Time
4.3. Ablation Studies
4.3.1. Impact of n-Shifted Sigmoid
4.3.2. Impact of the ANSS Attention
4.3.3. Impact of the Generalized Hough Transform (GHT)
4.4. Visualization of Segmentation Results
4.4.1. Qualitative Results
4.4.2. Failure Cases
4.5. Discussion
4.5.1. Impact of the n-Shifted Sigmoid Activation Function
4.5.2. Impact of the ANSS Attention Module
4.5.3. Impact of the Generalized Hough Transform (GHT)
4.5.4. Future Research Directions
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Yasir, S.M.; Sadiq, A.M.; Ahn, H. 3D Instance Segmentation Using Deep Learning on RGB-D Indoor Data. Comput. Mater. Contin. 2022, 72, 5777–5791.
- He, Y.; Yu, H.; Liu, X.; Yang, Z.; Sun, W.; Anwar, S.; Mian, A. Deep Learning Based 3D Segmentation: A Survey. arXiv 2021, arXiv:2103.05423v5.
- Rani, A.; Ortiz-Arroyo, D.; Durdevic, P. Advancements in point cloud-based 3D defect classification and segmentation for industrial systems: A comprehensive survey. Inf. Fusion 2024, 112, 102575.
- Zhang, Q.; Peng, Y.; Zhang, Z.; Li, T. Semantic Segmentation of Spectral LiDAR Point Clouds Based on Neural Architecture Search. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5403811.
- Qi, C.R.; Su, H.; Mo, K.; Guibas, L.J. PointNet: Deep learning on point sets for 3D classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 652–660.
- Qi, C.R.; Yi, L.; Su, H.; Guibas, L.J. PointNet++: Deep hierarchical feature learning on point sets in a metric space. In Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; Volume 30.
- Mulindwa, D.B.; Du, S. An n-Sigmoid Activation Function to Improve the Squeeze-and-Excitation for 2D and 3D Deep Networks. Electronics 2023, 12, 911.
- Tsai, D.-M. An improved generalized Hough transform for the recognition of overlapping objects. Image Vis. Comput. 1997, 15, 877–888.
- Zong, C.; Wang, H.; Wan, Z. An improved 3D point cloud instance segmentation method for overhead catenary height detection. Comput. Electr. Eng. 2022, 98, 107685.
- Sun, Y.; Zhang, X.; Miao, Y. A review of point cloud segmentation for understanding 3D indoor scenes. Vis. Intell. 2024, 2, 14.
- Yang, S.; Hou, M.; Li, S. Three-Dimensional Point Cloud Semantic Segmentation for Cultural Heritage: A Comprehensive Review. Remote Sens. 2023, 15, 548.
- Bello, S.A.; Yu, S.; Wang, C.; Adam, J.M.; Li, J. Review: Deep Learning on 3D Point Clouds. Remote Sens. 2020, 12, 1729.
- Zhou, Y.; Tuzel, O. VoxelNet: End-to-end learning for point cloud based 3D object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4490–4499.
- Choy, C.; Gwak, J.; Savarese, S. 4D spatio-temporal ConvNets: Minkowski convolutional neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 3075–3084.
- Qiu, S.; Wu, Y.; Anwar, S.; Liu, C. Investigating Attention Mechanism in 3D Point Cloud Object Detection. In Proceedings of the 2021 International Conference on 3D Vision, London, UK, 1–3 December 2021; pp. 403–412.
- Niu, Z.; Zhong, G.; Yu, H. A review on the attention mechanism of deep learning. Neurocomputing 2021, 452, 48–62.
- Geng, P.; Lu, X.; Hu, C.; Liu, H.; Lyu, L. Focusing Fine-Grained Action by Self-Attention-Enhanced Graph Neural Networks with Contrastive Learning. IEEE Trans. Circuits Syst. Video Technol. 2023, 33, 4754–4768.
- Feng, M.; Zhang, L.; Lin, X.; Gilani, S.Z.; Mian, A. Point attention network for semantic segmentation of 3D point clouds. Pattern Recognit. 2020, 107, 107446.
- Zhao, H.; Jiang, L.; Jia, J.; Torr, P.H.; Koltun, V. Point Transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 16259–16268.
- Zhang, Q.L.; Yang, Y.B. SA-Net: Shuffle attention for deep convolutional neural networks. In Proceedings of the ICASSP 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Toronto, ON, Canada, 6–11 June 2021; pp. 2235–2239.
- Khoshelham, K. Extending generalized Hough transform to detect 3D objects in laser range data. In Proceedings of the ISPRS Workshop on Laser Scanning, Espoo, Finland, 12–14 September 2007; Volume 36, p. 206.
- Fernández, A.; Umpiérrez, J.; Alonso, J.R. Generalized Hough transform for 3D object recognition and visualization in integral imaging. J. Opt. Soc. Am. A 2023, 40, C37–C45.
- Du, S.; van Wyk, B.J.; Tu, C.; Zhang, X. An Improved Hough Transform Neighborhood Map for Straight Line Segments. IEEE Trans. Image Process. 2010, 19, 573–585.
- Tu, C.; van Wyk, B.J.; Djouani, K.; Hamam, Y.; Du, S. A Super Resolution Algorithm to Improve the Hough Transform. In Image Analysis and Recognition. ICIAR 2011; Lecture Notes in Computer Science; Kamel, M., Campilho, A., Eds.; Springer: Berlin/Heidelberg, Germany, 2011; Volume 6753.
- Muench, D.; Huebner, W.; Arens, M. Generalized Hough transform based time invariant action recognition with 3D pose information. In Optics and Photonics for Counterterrorism, Crime Fighting, and Defence X; Optical Materials and Biomaterials in Security and Defence Systems Technology XI; SPIE: Bellingham, WA, USA, 2014; Volume 9253, pp. 165–175.
- Strzodka, R.; Ihrke, I.; Magnor, M. A graphics hardware implementation of the generalized Hough transform for fast object recognition, scale, and 3D pose detection. In Proceedings of the 12th International Conference on Image Analysis and Processing, Mantova, Italy, 17–19 September 2003; pp. 188–193.
- Tu, C.; van Wyk, B.; Hamam, Y.; Djouani, K.; Du, S. Vehicle Position Monitoring Using Hough Transform. IERI Procedia 2013, 4, 316–322.
- Liao, B.; Li, J.; Ju, Z.; Ouyang, G. Hand gesture recognition with generalized Hough transform and DC-CNN using RealSense. In Proceedings of the 2018 Eighth International Conference on Information Science and Technology (ICIST), Cordoba/Granada/Seville, Spain, 30 June–6 July 2018; pp. 84–90.
- Qi, C.R.; Litany, O.; He, K.; Guibas, L.J. Deep Hough voting for 3D object detection in point clouds. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 9277–9286.
- Zhao, Z.; Feng, F.; Tingting, H. FNNS: An Effective Feedforward Neural Network Scheme with Random Weights for Processing Large-Scale Datasets. Appl. Sci. 2022, 12, 12478.
- Agarap, A.F. Deep Learning using Rectified Linear Units (ReLU). arXiv 2018, arXiv:1803.08375.
- Trottier, L.; Giguere, P.; Chaib-draa, B. Parametric Exponential Linear Unit for Deep Convolutional Neural Networks. In Proceedings of the 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), Cancun, Mexico, 18 December 2017.
- LeCun, Y. Generalization and network design strategies. In Connectionism in Perspective; Pfeifer, R., Schreter, Z., Fogelman, F., Steels, L., Eds.; Elsevier: Amsterdam, The Netherlands, 1989.
- Chiu, S.-H.; Liaw, J.-J. An effective voting method for circle detection. Pattern Recognit. Lett. 2005, 26, 121–133.
- Guo, S.; Pridmore, T.; Kong, Y.; Zhang, X. An improved Hough transform voting scheme utilizing surround suppression. Pattern Recognit. Lett. 2009, 30, 1241–1252.
- Singh, C.; Bhatia, N. A Fast Decision Technique for Hierarchical Hough Transform for Line Detection. arXiv 2010, arXiv:1007.0547.
- Jiang, L.; Xiong, H. Coding-based Hough transform for pedestrian detection. In Proceedings of the 2017 IEEE 17th International Conference on Communication Technology (ICCT), Chengdu, China, 27–30 October 2017; pp. 1524–1528.
- Armeni, I.; Sener, O.; Zamir, A.R.; Jiang, H.; Brilakis, I.; Fischer, M.; Savarese, S. 3D Semantic Parsing of Large-Scale Indoor Spaces. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 1534–1543.
- Yang, B.; Wang, J.; Clark, R.; Hu, Q.; Wang, S.; Markham, A.; Trigoni, N. Learning object bounding boxes for 3D instance segmentation on point clouds. In Proceedings of the Advances in Neural Information Processing Systems, Vancouver, BC, Canada, 8–14 December 2019; Volume 32.
- Engelmann, F.; Bokeloh, M.; Fathi, A.; Leibe, B.; Nießner, M. 3D-MPA: Multi-proposal aggregation for 3D semantic instance segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 9031–9040.
- Wang, W.; Yu, R.; Huang, Q.; Neumann, U. SGPN: Similarity group proposal network for 3D point cloud instance segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 2569–2578.
- Wang, X.; Liu, S.; Shen, X.; Shen, C.; Jia, J. Associatively segmenting instances and semantics in point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 4096–4105.
- Mo, K.; Zhu, S.; Chang, A.X.; Yi, L.; Tripathi, S.; Guibas, L.J.; Su, H. PartNet: A large-scale benchmark for fine-grained and hierarchical part-level 3D object understanding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 909–918.
- Elich, C.; Engelmann, F.; Kontogianni, T.; Leibe, B. 3D bird's-eye-view instance segmentation. In German Conference on Pattern Recognition; Springer International Publishing: Cham, Switzerland, 2019; pp. 48–61.
- Deng, Z.; Latecki, L.J. Amodal detection of 3D objects: Inferring 3D bounding boxes from 2D ones in RGB-depth images. In Proceedings of the CVPR, Honolulu, HI, USA, 21–26 July 2017.
| Layer Name | Type | Input Shape | Output Shape | Parameters |
|---|---|---|---|---|
| FC1 | Dense (128 units) | (batch_size, 4096, 3) | (batch_size, 4096, 128) | 512 |
| BN1 | BatchNormalization | (batch_size, 4096, 128) | (batch_size, 4096, 128) | 512 |
| FC2 | Dense (256 units) | (batch_size, 4096, 128) | (batch_size, 4096, 256) | 33,024 |
| BN2 | BatchNormalization | (batch_size, 4096, 256) | (batch_size, 4096, 256) | 1024 |
| ANSS | Attention Layer | (batch_size, 4096, 256) | (batch_size, 4096, 256) | 8576 (approx.) |
| FC3 | Dense (num_classes) | (batch_size, 4096, 256) | (batch_size, 4096, num_classes) | depends on num_classes |
| GHT | GHT Layer | (batch_size, 4096, 256) | (batch_size, 4096, num_classes) | 10,240 (approx.) |
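The table can be read as the following per-point PyTorch stack (a sketch: the ANSS attention layer is replaced by an identity placeholder, the parallel GHT head is omitted, and the ReLU activations are an assumption since the table lists none; only the FC/BN shapes and parameter counts follow the table).

```python
import torch
import torch.nn as nn

class ANSSGHTBackbone(nn.Module):
    """Sketch of the layer stack in the table above
    (Dense -> BN -> Dense -> BN -> ANSS -> Dense)."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.fc1 = nn.Linear(3, 128)      # 3*128 + 128 = 512 params
        self.bn1 = nn.BatchNorm1d(128)
        self.fc2 = nn.Linear(128, 256)    # 128*256 + 256 = 33,024 params
        self.bn2 = nn.BatchNorm1d(256)
        self.anss = nn.Identity()         # placeholder for the ANSS attention
        self.fc3 = nn.Linear(256, num_classes)
        self.relu = nn.ReLU()             # activation assumed, not in the table

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch_size, 4096, 3); BatchNorm1d expects channels at dim 1
        x = self.relu(self.bn1(self.fc1(x).transpose(1, 2)).transpose(1, 2))
        x = self.relu(self.bn2(self.fc2(x).transpose(1, 2)).transpose(1, 2))
        x = self.anss(x)                  # (batch_size, 4096, 256)
        return self.fc3(x)                # (batch_size, 4096, num_classes)

# e.g. 12 classes, matching the S3DIS categories reported below
logits = ANSSGHTBackbone(num_classes=12)(torch.rand(2, 4096, 3))
```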
| S3DIS 6-Fold CV | Ceiling | Floor | Wall | Beam | Column | Window | Door | Table | Chair | Sofa | Bookcase | Board | mAR |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 3D-BoNet [39] | 61.8 | 74.6 | 50.0 | 42.2 | 27.2 | 62.4 | 58.5 | 48.6 | 64.9 | 28.8 | 46.5 | 28.6 | 46.7 |
| 3D-MPA [40] | 68.4 | 96.2 | 51.9 | 58.5 | 77.6 | 79.8 | 69.5 | 32.8 | 75.2 | 71.1 | 68.2 | 38.2 | 64.1 |
| SGPN [41] | 58.42 | 83.6 | 42.2 | 25.6 | 7.15 | 42.73 | 45.2 | 38.25 | 47.0 | 0.00 | 13.5 | 31.68 | 31.68 |
| Ours | 73.15 | 97.8 | 81.82 | 56.5 | 71.1 | 68.7 | 80.4 | 59.65 | 79.8 | 61.75 | 64.65 | 63.75 | 71.61 |
| Improvement | 4.75 | 1.6 | 29.08 | – | – | – | 10.9 | 11.05 | 4.6 | – | – | 22.55 | 7.5 |
| Methods | S3DIS 6-Fold CV | S3DIS Area 5 |
|---|---|---|
| 3D-BoNet [39] | 47.6 | 40.2 |
| ASIS [42] | 47.5 | 42.4 |
| 3D-MPA [40] | 64.1 | 58.0 |
| PartNet [43] | 43.4 | – |
| 3D ANSS GHT (Ours) | 71.61 | 72.54 |
| Methods | S3DIS 6-Fold CV | S3DIS Area 5 |
|---|---|---|
| 3D-BoNet [39] | 65.6 | 57.3 |
| ASIS [42] | 63.6 | 55.3 |
| 3D-MPA [40] | 66.7 | 63.1 |
| 3D ANSS GHT (Ours) | 69.87 | 69.34 |
| S3DIS 6-Fold CV | Ceiling | Floor | Wall | Beam | Column | Window | Door | Table | Chair | Sofa | Bookcase | Board | mAP |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 3D-BoNet [39] | 88.5 | 89.9 | 64.9 | 42.3 | 48.0 | 93.0 | 66.8 | 55.4 | 72.0 | 49.7 | 58.3 | 80.7 | 65.66 |
| 3D-MPA [40] | 95.5 | 99.5 | 59.0 | 44.6 | 57.7 | 89.0 | 78.7 | 34.5 | 83.6 | 55.9 | 51.6 | 71.0 | 66.7 |
| SGPN [41] | 78.15 | 80.27 | 48.90 | 33.6 | 16.9 | 49.6 | 44.48 | 30.33 | 52.22 | 23.12 | 28.50 | 28.62 | 42.9 |
| 3D-BEVIS [44] | 71.00 | 96.70 | 79.37 | 45.10 | 64.38 | 64.3 | 70.15 | 57.22 | 74.22 | 47.92 | 57.97 | 59.27 | 65.66 |
| Ours | 71.5 | 98.75 | 80.25 | 55.6 | 68.3 | 67.8 | 78.3 | 59.7 | 76.9 | 58.5 | 63.34 | 59.5 | 69.87 |
| Improvement | – | – | 0.88 | 10.5 | 3.92 | – | – | 2.48 | – | 2.6 | 5.04 | – | 3.17 |
| Model | mAP@50% (Clean) | mAP@50% (Noise Added) | Degradation |
|---|---|---|---|
| 3D-BoNet [39] | 66.7 | 61.3 | 5.4 |
| 3D-BEVIS [44] | 65.6 | 60.5 | 5.1 |
| SGPN [41] | 42.9 | 35.6 | 7.3 |
| 3D ANSS GHT (Ours) | 69.87 | 66.74 | 3.13 |
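This outline does not specify the noise model used in Section 4.2.3; the sketch below assumes zero-mean Gaussian jitter on the point coordinates purely to illustrate how the Degradation column is obtained (clean mAP minus noisy mAP).

```python
import torch

def add_point_noise(xyz: torch.Tensor, sigma: float = 0.01) -> torch.Tensor:
    """Gaussian jitter for the robustness test; the noise model and
    sigma are assumptions, not taken from the paper."""
    return xyz + sigma * torch.randn_like(xyz)

# Evaluate mAP@50% on clean inputs and on add_point_noise(points);
# the difference gives the Degradation column in the table above.
```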
| Model | Activation Function | Inference Time (ms) |
|---|---|---|
| 3D ANSS GHT | n-Sigmoid | 29.35 |
| 3D ANSS GHT | n-Shifted Sigmoid | 28.17 |
| 3D-MPA [40] | – | 300 |
| SGPN [41] | – | 170 |
| DetectionNet [45] | – | 739 |
| Model | Activation Function | mAP@50% |
|---|---|---|
| 3D ANSS GHT | Original Sigmoid | 41.3 |
| 3D ANSS GHT | ReLU | 43.4 |
| 3D ANSS GHT | ELU | 54.17 |
| 3D ANSS GHT | n-Sigmoid | 63.35 |
| 3D ANSS GHT | n-Shifted Sigmoid | 69.87 |
| Model | Attention Mechanism | mAP@50% |
|---|---|---|
| 3D ANSS GHT | No Attention | 62.53 |
| 3D ANSS GHT | ANSS | 69.87 |
| 3D ANSS GHT | Traditional Shuffle | 66.29 |