Intra-Frame Graph Structure and Inter-Frame Bipartite Graph Matching with ReID-Based Occlusion Resilience for Point Cloud Multi-Object Tracking
Abstract
1. Introduction
- We propose an intra-frame graph structure that leverages adaptive graph convolution to aggregate edge features into central nodes, thereby enhancing the robust representation of each node.
- We view data association as inter-frame bipartite graph matching and define the objective function to minimize the global optimal matching cost. By designing a sophisticated cost matrix and applying a minimum-cost flow optimization algorithm, we achieve globally optimal matching for accurate data association in complex scenarios, thereby reducing ID switches.
- For unmatched objects, we propose a motion-based ReID layer that uses similarity scores and association probabilities to accurately re-associate objects with their previous fragmented trajectory IDs, thereby reducing ID switches and avoiding the wrong initialization of new tracks.
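The edge-to-center aggregation named in the first contribution can be illustrated with a minimal sketch. The kNN graph size, the EdgeConv-style [f_i, f_j − f_i] edge features, and max-pooling aggregation used here are illustrative stand-ins, not the paper's exact adaptive graph convolution:

```python
import numpy as np

def knn_graph(points, k):
    """Indices of the k nearest neighbours of each point (excluding itself)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # a point is not its own neighbour
    return np.argsort(d, axis=1)[:, :k]  # (N, k)

def edge_aggregate(features, neighbors):
    """Aggregate edge features into each central node by max pooling.

    The edge feature for (i, j) is [f_i, f_j - f_i], the usual EdgeConv-style
    pairing; the pooled result gives each node a neighbourhood-aware feature.
    """
    k = neighbors.shape[1]
    center = features[:, None, :].repeat(k, axis=1)            # (N, k, C)
    neigh = features[neighbors]                                # (N, k, C)
    edges = np.concatenate([center, neigh - center], axis=-1)  # (N, k, 2C)
    return edges.max(axis=1)                                   # (N, 2C)

# toy example: 5 object centers with 4-dim features, k = 2 neighbours
pts = np.random.rand(5, 3)
feats = np.random.rand(5, 4)
out = edge_aggregate(feats, knn_graph(pts, k=2))
print(out.shape)  # (5, 8)
```

In a learned layer the concatenated edge features would pass through an MLP before pooling; the sketch keeps only the graph construction and aggregation steps.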
2. Related Works
2.1. Three-Dimensional Multi-Object Tracking
2.2. Data Association in Multi-Object Tracking
3. Methods
3.1. Intra-Frame Graph Structure
3.1.1. Voxel Feature Extraction
3.1.2. Graph Construction and Adaptive Graph Convolution
3.2. Bipartite Graph Matching for Data Association
3.2.1. Objective Function
3.2.2. Association Cost Matrix
- Angle Prediction The angle prediction is derived from the predicted values of sin θ and cos θ. The angle is then determined using the formula θ = atan2(sin θ, cos θ). The atan2 function computes the angle from the sin and cos values, accommodating positive or negative inputs for both and ensuring accurate angle determination.
- Velocity Prediction The velocity prediction provides the two-dimensional speed components, v_x and v_y, of the detected objects.
- Velocity and Angle-based Motion Cost The (i, j)-th entry in the motion cost matrix is formulated as follows: C_ij^motion = (1 − cos(θ_i − θ_j)) / 2 + ‖v_i − v_j‖².
- Angle Similarity The cosine similarity is utilized to measure the similarity between two angles. Its value ranges from −1 to 1, with 1 indicating complete similarity and −1 indicating complete dissimilarity. To ensure that the similarity term falls within [0, 1], we use 1 − cos(θ_i − θ_j), and dividing the entire expression by 2 normalizes the cost to the more intuitive range of 0 to 1.
- Velocity Difference Squaring the velocity differences simplifies the computation and amplifies larger discrepancies while remaining less sensitive to smaller variations.
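The pieces above — atan2 angle recovery, the normalized angle term, and the squared velocity difference — can be combined into a small sketch of the motion cost and its globally optimal one-to-one matching. The equal weighting of the two terms is an assumption, and brute-force enumeration stands in for the paper's minimum-cost-flow solver (adequate only for a handful of objects):

```python
import math
from itertools import permutations

def motion_cost(trk, det):
    """trk, det: (vx, vy, sin_t, cos_t) for one track and one detection."""
    # atan2 recovers the heading from predicted sin/cos, whatever their signs
    th_t = math.atan2(trk[2], trk[3])
    th_d = math.atan2(det[2], det[3])
    angle = (1.0 - math.cos(th_t - th_d)) / 2.0            # normalized to [0, 1]
    vel = (trk[0] - det[0]) ** 2 + (trk[1] - det[1]) ** 2  # squared speed gap
    return angle + vel

def optimal_match(tracks, dets):
    """Globally optimal assignment: the permutation minimizing total cost."""
    n = len(tracks)
    return min(permutations(range(n)),
               key=lambda p: sum(motion_cost(tracks[i], dets[p[i]])
                                 for i in range(n)))

tracks = [(1.0, 0.0, 0.0, 1.0),   # heading 0 rad, moving along +x
          (0.0, 1.0, 1.0, 0.0)]   # heading pi/2, moving along +y
dets = [(0.0, 1.1, 1.0, 0.0),
        (1.1, 0.0, 0.0, 1.0)]
print(optimal_match(tracks, dets))  # (1, 0): track 0 -> det 1, track 1 -> det 0
```

Swapping the detections is cheap under both terms, so the global optimum crosses the naive index order — exactly the kind of case where per-track greedy matching would produce an ID switch.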
3.3. Trajectory Management and Track Update
3.3.1. Association Probability Calculation
3.3.2. Track Creation and Deletion Strategy
- Refinement of Scores: For the matched pairs and new tracks, we refine the confidence scores using the embeddings obtained from the architecture head module. This refinement step, which follows a methodology similar to CenterPoint’s [15] second stage, ensures that the final tracking results are accurate and reliable.
- New Track Creation: In each frame t, if a detection does not match any existing tracklet or unmatched detection and has a high confidence score, a new track is initialized. Each new track is assigned a unique track ID, and the corresponding detection is added to this new tracklet. This step ensures that newly appearing objects are properly tracked from the moment they are first detected.
- Track Deletion: A tracklet is deleted if it has no matching detection for three consecutive frames. This parameter ensures that tracks are not immediately discarded when an object is missed for a few frames, allowing for temporary occlusions or missed detections without losing the track.
Algorithm 1 Trajectory management and track update

Require: detection set, trajectory set, maximum age
Ensure: updated trajectory set
Input: detection set, where each detection carries motion information (velocity and angle); trajectory set, where each tracklet contains a sequence of detections with motion information; maximum age
Output: updated trajectory set
Hyperparameters: similarity score threshold; association probability threshold

Step 1: Refinement of Scores
for each matched pair of detection and tracklet do
  Refine confidence scores using embeddings from the architecture head module
end for

Step 2: Handle Unmatched Detections
for each unmatched detection do
  Compute the similarity score using Equation (14)
  Compute the association probability using Equation (15)
  if both exceed their thresholds then
    Reconnect the detection to the corresponding trajectory ID
    Update the state of the tracklet with the detection
  else
    Store the detection as unmatched for the next frame
  end if
end for

Step 3: New Track Creation
for each detection do
  if the detection matches no existing tracklet or stored unmatched detection and has a high confidence score then
    Create a new tracklet
    Assign a new track ID and add the detection to it
  end if
end for

Step 4: Track Deletion
for each tracklet do
  if the tracklet has no matching detection for a dynamic number of frames, based on confidence and disappearance time, then
    Delete the tracklet
  end if
end for
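Steps 2–4 of the algorithm above can be sketched in a few lines. The threshold values follow the hyperparameter table (0.75 for similarity, 0.8 for association probability, maximum age 3); the `sim` and `prob` callables are placeholders for Equations (14) and (15), and the new-track confidence threshold is an assumption:

```python
SIM_THRESH, PROB_THRESH, MAX_AGE, NEW_TRACK_SCORE = 0.75, 0.8, 3, 0.5

def manage(tracks, unmatched_dets, matched_ids, next_id, sim, prob):
    """One frame of trajectory management: ReID reconnection, creation, deletion."""
    for det in unmatched_dets:
        # Step 2: try to reconnect to a fragmented trajectory, keeping its old ID
        best = max(tracks, key=lambda t: sim(det, t), default=None)
        if (best is not None and sim(det, best) > SIM_THRESH
                and prob(det, best) > PROB_THRESH):
            best["dets"].append(det)
            best["age"] = 0
            matched_ids.add(best["id"])
        # Step 3: confident leftovers start new tracks with fresh IDs
        elif det["score"] > NEW_TRACK_SCORE:
            tracks.append({"id": next_id, "dets": [det], "age": 0})
            matched_ids.add(next_id)
            next_id += 1
        # (low-confidence leftovers would be stored for the next frame)
    # Step 4: age tracklets that found no match and delete stale ones
    for t in tracks:
        if t["id"] not in matched_ids:
            t["age"] += 1
    tracks[:] = [t for t in tracks if t["age"] < MAX_AGE]
    return next_id

# toy run: one nearly-stale track, one unmatched high-confidence detection
tracks = [{"id": 7, "dets": [], "age": 2}]
nid = manage(tracks, [{"score": 0.9}], set(), next_id=8,
             sim=lambda d, t: 0.0, prob=lambda d, t: 0.0)
print([t["id"] for t in tracks], nid)  # stale track 7 deleted, new track 8 kept
```

The detection cannot be reconnected (similarity 0), so it spawns track 8, while track 7 reaches the maximum age and is deleted — the creation and deletion strategies acting in one frame.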
4. Experiments
4.1. Datasets and Evaluation Metrics
4.1.1. Datasets
4.1.2. Evaluation Metrics
4.2. Implementation Details
4.3. Comparison with Different Methods
4.3.1. NuScenes Open Dataset
4.3.2. Waymo Open Dataset
4.3.3. Comparison with Advanced Tracking Methods
- AB3DMOT [40]: This method employs a 3D Kalman filter for state estimation and the Hungarian algorithm for data association, providing robust tracking performance in 3D space.
- SimpleTrack [42]: This method uses non-maximum suppression for detection preprocessing, a Kalman filter for motion modeling, 3D Generalized IoU for association, and trajectory interpolation to achieve object tracking.
- ImmortalTrack [43]: This method uses a simple Kalman filter for trajectory prediction to maintain tracklets when the target is not visible, effectively preventing premature tracklet termination and reducing ID switches and track fragmentation.
- SpOT [44]: SpOT proposes a multi-frame spatio-temporal tracking method that utilizes 4D refinement for frame-by-frame detection data association, achieving efficient object tracking.
- CenterPoint [15]: This method first detects the centers of objects using a keypoint detector and regresses their 3D size, 3D orientation, and velocity. In the second stage, it refines these estimates using additional point features on the object.
4.3.4. Comparative Analysis
- Advantages and Disadvantages of the KF The Kalman filter can utilize information from multiple frames, resulting in smoother outcomes in scenarios with low-quality detections. However, it necessitates careful parameter initialization, and improper parameter settings can significantly impact its robustness.
- Advantages and Disadvantages of the CV Model The Constant Velocity model better handles abrupt and unpredictable motions with explicit speed predictions and is simpler to implement without requiring parameter tuning. Nevertheless, its effectiveness in motion smoothing is limited, and it may not perform as well in scenarios with low-quality detections.
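The trade-off between the two motion models can be made concrete with a one-dimensional toy: the Kalman filter blends its prediction with the measurement, smoothing noisy detections but requiring noise parameters q and r to be tuned, while the constant-velocity model trusts the explicit speed outright with nothing to tune. The state model and all values here are illustrative:

```python
def kalman_1d(x, p, z, q=0.01, r=0.1):
    """One predict/update cycle for a 1-D random-walk state.

    q (process noise) and r (measurement noise) must be initialized with care;
    poor choices make the filter either sluggish or jittery.
    """
    p = p + q                       # predict: uncertainty grows
    k = p / (p + r)                 # gain balances model vs measurement
    return x + k * (z - x), (1.0 - k) * p

def cv_predict(pos, vel, dt=0.1):
    """Constant-velocity model: no tuning, but no smoothing either."""
    return pos + vel * dt

x, p = 0.0, 1.0
x, p = kalman_1d(x, p, z=1.0)       # noisy measurement pulls the estimate
print(round(x, 3), cv_predict(0.0, 10.0))  # Kalman estimate vs raw CV step
```

With a large initial uncertainty the Kalman estimate jumps most of the way to the measurement; as p shrinks over frames, the filter increasingly smooths over detection noise — the behavior the comparison above attributes to it.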
4.3.5. Strengths and Weaknesses of Our Method
4.4. Ablation Studies
4.4.1. Ablation Study for GBRTracker
4.4.2. Influence of Object Detection Module
4.4.3. Effectiveness of Aggregated Pairwise Cost
4.4.4. Evaluation of Similarity Score and Association Probability for Track Update
4.5. Discussion of Failure Cases and Future Challenges
4.5.1. Object Suddenly Appears with Delayed Tracking
4.5.2. Complete Occlusion
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Zhang, C.; Chen, J.; Li, J.; Peng, Y.; Mao, Z. Large language models for human-robot interaction: A review. Biomim. Intell. Robot. 2023, 3, 100131.
- Peng, Y.; Funabora, Y.; Doki, S. An Application of Transformer based Point Cloud Auto-encoder for Fabric-type Actuator. In Proceedings of the JSME Annual Conference on Robotics and Mechatronics (Robomec), Nagoya, Japan, 28 June–1 July 2023; The Japan Society of Mechanical Engineers: Tokyo, Japan, 2023; p. 2P1-E12.
- Zhou, Y.; Tuzel, O. Voxelnet: End-to-end learning for point cloud based 3d object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 4490–4499.
- Zhang, Y.; Hu, Q.; Xu, G.; Ma, Y.; Wan, J.; Guo, Y. Not all points are equal: Learning highly efficient point-based detectors for 3d lidar point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 18953–18962.
- Shi, S.; Guo, C.; Jiang, L.; Wang, Z.; Shi, J.; Wang, X.; Li, H. Pv-rcnn: Point-voxel feature set abstraction for 3d object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 10529–10538.
- Wang, L.; Song, Z.; Zhang, X.; Wang, C.; Zhang, G.; Zhu, L.; Li, J.; Liu, H. SAT-GCN: Self-attention graph convolutional network-based 3D object detection for autonomous driving. Knowl.-Based Syst. 2023, 259, 110080.
- Shi, W.; Rajkumar, R. Point-gnn: Graph neural network for 3d object detection in a point cloud. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 1711–1719.
- Sun, S.; Shi, C.; Wang, C.; Liu, X. A Novel Adaptive Graph Transformer For Point Cloud Object Detection. In Proceedings of the 2023 7th International Conference on Communication and Information Systems (ICCIS), Chongqing, China, 20–22 October 2023; pp. 151–155.
- Kim, A.; Brasó, G.; Ošep, A.; Leal-Taixé, L. Polarmot: How far can geometric relations take us in 3d multi-object tracking? In Proceedings of the European Conference on Computer Vision (ECCV); Springer: Cham, Switzerland, 2022; pp. 41–58.
- Chu, P.; Wang, J.; You, Q.; Ling, H.; Liu, Z. Transmot: Spatial-temporal graph transformer for multiple object tracking. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 2–7 January 2023; pp. 4870–4880.
- Kuhn, H.W. The Hungarian method for the assignment problem. Nav. Res. Logist. Q. 1955, 2, 83–97.
- Xu, Y.; Osep, A.; Ban, Y.; Horaud, R.; Leal-Taixé, L.; Alameda-Pineda, X. How to train your deep multi-object tracker. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 6787–6796.
- Wang, L.; Zhang, X.; Qin, W.; Li, X.; Gao, J.; Yang, L.; Li, Z.; Li, J.; Zhu, L.; Wang, H.; et al. Camo-mot: Combined appearance-motion optimization for 3d multi-object tracking with camera-lidar fusion. IEEE Trans. Intell. Transp. Syst. 2023, 24, 11981–11996.
- Zhang, Y.; Sun, P.; Jiang, Y.; Yu, D.; Weng, F.; Yuan, Z.; Luo, P.; Liu, W.; Wang, X. Bytetrack: Multi-object tracking by associating every detection box. In Proceedings of the European Conference on Computer Vision; Springer: Cham, Switzerland, 2022; pp. 1–21.
- Yin, T.; Zhou, X.; Krahenbuhl, P. Center-based 3d object detection and tracking. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 11784–11793.
- Chiu, H.k.; Wang, C.Y.; Chen, M.H.; Smith, S.F. Probabilistic 3D Multi-Object Cooperative Tracking for Autonomous Driving via Differentiable Multi-Sensor Kalman Filter. arXiv 2023, arXiv:2309.14655.
- Ma, S.; Duan, S.; Hou, Z.; Yu, W.; Pu, L.; Zhao, X. Multi-object tracking algorithm based on interactive attention network and adaptive trajectory reconnection. Expert Syst. Appl. 2024, 249, 123581.
- Liu, H.; Ma, Y.; Hu, Q.; Guo, Y. CenterTube: Tracking multiple 3D objects with 4D tubelets in dynamic point clouds. IEEE Trans. Multimed. 2023, 25, 8793–8804.
- Wang, L.; Zhang, J.; Cai, P.; Li, X. Towards Robust Reference System for Autonomous Driving: Rethinking 3D MOT. In Proceedings of the 2023 IEEE International Conference on Robotics and Automation (ICRA), London, UK, 29 May–2 June 2023; pp. 8319–8325.
- Chen, X.; Shi, S.; Zhang, C.; Zhu, B.; Wang, Q.; Cheung, K.C.; See, S.; Li, H. Trajectoryformer: 3D object tracking transformer with predictive trajectory hypotheses. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–6 October 2023; pp. 18527–18536.
- Chen, S.; Yu, E.; Li, J.; Tao, W. Delving into the Trajectory Long-tail Distribution for Muti-object Tracking. arXiv 2024, arXiv:2403.04700.
- Zhang, Y.; Wang, C.; Wang, X.; Zeng, W.; Liu, W. Fairmot: On the fairness of detection and re-identification in multiple object tracking. Int. J. Comput. Vis. 2021, 129, 3069–3087.
- Ding, G.; Liu, J.; Xia, Y.; Huang, T.; Zhu, B.; Sun, J. LiDAR Point Cloud-based Multiple Vehicle Tracking with Probabilistic Measurement-Region Association. arXiv 2024, arXiv:2403.06423.
- Liu, J.; Bai, L.; Xia, Y.; Huang, T.; Zhu, B.; Han, Q.L. GNN-PMB: A simple but effective online 3D multi-object tracker without bells and whistles. IEEE Trans. Intell. Veh. 2022, 8, 1176–1189.
- Bewley, A.; Ge, Z.; Ott, L.; Ramos, F.; Upcroft, B. Simple online and realtime tracking. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 3464–3468.
- Weng, X.; Wang, J.; Held, D.; Kitani, K. 3d multi-object tracking: A baseline and new evaluation metrics. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 24 October 2020–24 January 2021; pp. 10359–10366.
- Zaech, J.N.; Liniger, A.; Dai, D.; Danelljan, M.; Van Gool, L. Learnable online graph representations for 3d multi-object tracking. IEEE Robot. Autom. Lett. 2022, 7, 5103–5110.
- Zhang, Z.; Liu, J.; Xia, Y.; Huang, T.; Han, Q.L.; Liu, H. LEGO: Learning and graph-optimized modular tracker for online multi-object tracking with point clouds. arXiv 2023, arXiv:2308.09908.
- Meyer, F.; Kropfreiter, T.; Williams, J.L.; Lau, R.; Hlawatsch, F.; Braca, P.; Win, M.Z. Message passing algorithms for scalable multitarget tracking. Proc. IEEE 2018, 106, 221–259.
- Rangesh, A.; Maheshwari, P.; Gebre, M.; Mhatre, S.; Ramezani, V.; Trivedi, M.M. Trackmpnn: A message passing graph neural architecture for multi-object tracking. arXiv 2021, arXiv:2101.04206.
- Sun, S.; Wang, C.; Liu, X.; Shi, C.; Ding, Y.; Xi, G. Spatio-Temporal Bi-directional Cross-frame Memory for Distractor Filtering Point Cloud Single Object Tracking. arXiv 2024, arXiv:2403.15831.
- Zhou, X.; Koltun, V.; Krähenbühl, P. Tracking objects as points. In Proceedings of the European Conference on Computer Vision; Springer: Cham, Switzerland, 2020; pp. 474–490.
- Han, S.; Huang, P.; Wang, H.; Yu, E.; Liu, D.; Pan, X. Mat: Motion-aware multi-object tracking. Neurocomputing 2022, 476, 75–86.
- Wu, H.; Li, Q.; Wen, C.; Li, X.; Fan, X.; Wang, C. Tracklet Proposal Network for Multi-Object Tracking on Point Clouds. In Proceedings of the IJCAI, Virtual Event, 19–26 August 2021; pp. 1165–1171.
- Yu, E.; Li, Z.; Han, S.; Wang, H. Relationtrack: Relation-aware multiple object tracking with decoupled representation. IEEE Trans. Multimed. 2022, 25, 2686–2697.
- Zhang, Y.F.; Ren, W.; Zhang, Z.; Jia, Z.; Wang, L.; Tan, T. Focal and efficient IOU loss for accurate bounding box regression. Neurocomputing 2022, 506, 146–157.
- Caesar, H.; Bankiti, V.; Lang, A.H.; Vora, S.; Liong, V.E.; Xu, Q.; Krishnan, A.; Pan, Y.; Baldan, G.; Beijbom, O. nuScenes: A multimodal dataset for autonomous driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 11621–11631.
- Bernardin, K.; Stiefelhagen, R. Evaluating multiple object tracking performance: The clear mot metrics. EURASIP J. Image Video Process. 2008, 2008, 1–10.
- Luiten, J.; Osep, A.; Dendorfer, P.; Torr, P.; Geiger, A.; Leal-Taixé, L.; Leibe, B. Hota: A higher order metric for evaluating multi-object tracking. Int. J. Comput. Vis. 2021, 129, 548–578.
- Weng, X.; Wang, J.; Held, D.; Kitani, K. Ab3dmot: A baseline for 3d multi-object tracking and new evaluation metrics. arXiv 2020, arXiv:2008.08063.
- Wang, Y.; Chen, S.; Huang, L.; Ge, R.; Hu, Y.; Ding, Z.; Liao, J. 1st Place Solutions for Waymo Open Dataset Challenges–2D and 3D Tracking. arXiv 2020, arXiv:2006.15506.
- Pang, Z.; Li, Z.; Wang, N. Simpletrack: Understanding and rethinking 3d multi-object tracking. In Proceedings of the European Conference on Computer Vision; Springer: Cham, Switzerland, 2022; pp. 680–696.
- Wang, Q.; Chen, Y.; Pang, Z.; Wang, N.; Zhang, Z. Immortal tracker: Tracklet never dies. arXiv 2021, arXiv:2111.13672.
- Stearns, C.; Rempe, D.; Li, J.; Ambruş, R.; Zakharov, S.; Guizilini, V.; Yang, Y.; Guibas, L.J. Spot: Spatiotemporal modeling for 3d object tracking. In Proceedings of the European Conference on Computer Vision; Springer: Cham, Switzerland, 2022; pp. 639–656.
| Hyperparameter | Source | Value |
|---|---|---|
| K | Section 3.1.2 | 20 |
|  | Equation (3) | 256 |
|  | Equation (12) | 0.55 |
|  | Equation (12) | 0.45 |
|  | Equation (13) | 0.5 |
|  | Equation (13) | 0.3 |
|  | Equation (13) | 0.2 |
|  | Equation (14) | 0.75 |
|  | Equation (15) | 0.8 |
| Method | AMOTA | AMOTP | MOTA | IDS |
|---|---|---|---|---|
| AB3DMOT [40] | 57.8 | 80.7 | 51.4 | 1275 |
| Probabilistic [16] | 56.1 | 80.0 | 48.3 | 679 |
| MPN-Baseline [27] | 59.3 | 83.2 | 51.4 | 1079 |
| CenterPoint [15] | 66.5 | 56.7 | 56.2 | 562 |
| Ours | 67.0 | 56.6 | 57.3 | 543 |
| Method | Veh. MOTA↑ | Veh. FP%↓ | Veh. Miss%↓ | Veh. IDS%↓ | Ped. MOTA↑ | Ped. FP%↓ | Ped. Miss%↓ | Ped. IDS%↓ | Cyc. MOTA↑ | Cyc. FP%↓ | Cyc. Miss%↓ | Cyc. IDS%↓ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| AB3DMOT [40] | 55.7 | - | 30.2 | 0.40 | 52.2 | - | - | 2.74 | - | - | - | - |
| CenterPoint [15] | 55.1 | 10.8 | 33.9 | 0.26 | 54.9 | 10.0 | 34.0 | 1.13 | 57.4 | 13.7 | 28.1 | 0.83 |
| SimpleTrack [42] | 56.1 | 10.4 | 33.4 | 0.08 | 57.8 | 10.9 | 30.9 | 0.42 | 56.9 | 11.6 | 30.9 | 0.56 |
| ImmortalTrack [43] | 56.4 | 10.2 | 33.4 | 0.01 | 58.2 | 11.3 | 30.5 | 0.26 | 59.1 | 11.8 | 28.9 | 0.10 |
| SpOT [44] | 55.7 | 11.0 | 33.2 | 0.18 | 60.5 | 11.3 | 27.6 | 0.56 | - | - | - | - |
| Ours | 56.5 | 10.1 | 32.2 | 0.18 | 62.0 | 10.7 | 27.4 | 0.52 | 60.9 | 11.3 | 27.1 | 0.26 |
Method | AMOTA↑ |
---|---|
CenterPoint | 66.5 |
CenterPoint + graph backbone | 66.6 |
CenterPoint + ReID ReTrack | 66.64 |
CenterPoint + graph backbone + ReID ReTrack | 66.68 |
CenterPoint + bipartite graph matching | 66.76 |
CenterPoint + graph backbone + bipartite graph matching | 66.8 |
CenterPoint + bipartite graph matching + ReID ReTrack | 66.83 |
CenterPoint + graph backbone + bipartite graph matching + ReID ReTrack (GBRTracker) | 67.0 |
| Tracker | Detection Module | AMOTA | AMOTP | MOTA |
|---|---|---|---|---|
| CenterPoint | VoxelNet | 0.665 | 0.567 | 0.562 |
| CenterPoint | Ours | 0.667 | 0.565 | 0.563 |
| GBRTracker (ours) | VoxelNet | 0.669 | 0.558 | 0.581 |
| GBRTracker (ours) | Ours | 0.670 | 0.557 | 0.583 |
| IoU | Cen. | Motion | MOTA↑ | IDS↓ | FRAG↓ |
|---|---|---|---|---|---|
| √ | √ | - | 0.53% | 2.14% | 0.94% |
| √ | - | √ | 0.89% | 3.03% | 2.12% |
| - | √ | √ | 1.42% | 3.91% | 3.31% |
| √ | √ | √ | 1.96% | 4.27% | 4.48% |
Share and Cite
Sun, S.; Shi, C.; Wang, C.; Zhou, Q.; Sun, R.; Xiao, B.; Ding, Y.; Xi, G. Intra-Frame Graph Structure and Inter-Frame Bipartite Graph Matching with ReID-Based Occlusion Resilience for Point Cloud Multi-Object Tracking. Electronics 2024, 13, 2968. https://doi.org/10.3390/electronics13152968