Performance Evaluation Metrics and Approaches for Target Tracking: A Survey
Abstract
1. Introduction
2. A Classification of the Comprehensive Evaluation Metrics
2.1. Correctness Measures
- Number of valid tracks (NVT): A track is validated if it is assigned to a target and that target has only this one track; the NVT at time t is denoted $N_{VT}(t)$;
- Number of missed targets (NMT): A target is missed if it is not associated with any track; the NMT at time t is denoted $N_{MT}(t)$ [26];
- Number of false tracks (NFT): A track is false if it is not assigned to any target; the NFT at time t is denoted $N_{FT}(t)$;
- Number of spurious tracks (NST): A track is spurious if it is assigned to more than one target; the NST at time t is denoted $N_{ST}(t)$;
- Average number of swaps in tracks (ANST): Different confirmed tracks may be assigned to a particular truth at different times, which can happen when targets cross or come close to each other; the average number of such swaps at time t is the ANST [26];
- Average number of broken tracks (ANBT): It is also possible that no track is assigned to a truth for several time steps. At each time step, every truth with no assigned track contributes to the count of broken tracks. Reference [26] employed the ANBT to check the track segments associated with the truth;
- Tracks redundancy (TR): TR is the ratio of validated tracks to the total number of assigned tracks, $TR(t) = N_{VT}(t)/N_{AT}(t)$, where $N_{AT}(t)$ denotes the number of assigned tracks at time t.
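The correctness counts above can be computed directly from a track-to-truth assignment. The following is a minimal Python sketch, assuming a hypothetical representation in which the association step yields a list of (track_id, target_id) pairs; the function and variable names are illustrative, not from the surveyed papers:

```python
from collections import Counter

def correctness_measures(assignments, all_tracks, all_targets):
    """Correctness counts at a single time step.

    `assignments`: list of (track_id, target_id) pairs produced by a
    track-to-truth association step (an assumed representation).
    Returns (NVT, NMT, NFT, NST, TR).
    """
    tracks_per_target = Counter(tg for _, tg in assignments)
    targets_per_track = Counter(tr for tr, _ in assignments)

    # A track is valid if it is assigned to exactly one target and
    # that target has only this one track.
    nvt = sum(1 for tr, tg in assignments
              if targets_per_track[tr] == 1 and tracks_per_target[tg] == 1)
    # A target with no associated track is missed.
    nmt = sum(1 for tg in all_targets if tracks_per_target[tg] == 0)
    # A track assigned to no target is false.
    nft = sum(1 for tr in all_tracks if targets_per_track[tr] == 0)
    # A track assigned to more than one target is spurious.
    nst = sum(1 for tr in all_tracks if targets_per_track[tr] > 1)
    # Tracks redundancy: validated tracks over all assigned tracks.
    assigned = len({tr for tr, _ in assignments})
    tr_ratio = nvt / assigned if assigned else 0.0
    return nvt, nmt, nft, nst, tr_ratio
```

For example, with tracks {t1, t2, t3} and truths {g1, ..., g4}, an assignment {(t1, g1), (t2, g2), (t2, g3)} yields one valid track (t1), one missed target (g4), one false track (t3), and one spurious track (t2).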
2.2. Timeliness Measures
- Rate of false alarms (RFA): The RFA [38] is defined as the NFT per time step, i.e., the average of $N_{FT}(t)$ over the T time steps of a run: $RFA = \frac{1}{T}\sum_{t=1}^{T} N_{FT}(t)$;
- Track probability of detection (TPD): In the time interval $[1, T]$, let $t_i^{first}$ and $t_i^{last}$ be the first and last time that the ith target is present, respectively. According to [39], the TPD of each target is the fraction of the interval $[t_i^{first}, t_i^{last}]$ during which the target is detected by the tracker;
- Rate of track fragmentation (RTF): The track obtained by a tracking algorithm may not always be continuous. For the track segment assigned to the ith truth, the number of times the continuous track becomes fragmented is counted. The smaller the RTF, the more persistent the tracking estimated by the algorithm [39];
- Track latency (TL): The TL, the delay from the moment that the target appears in the view of the sensor to the moment that it is first detected by the tracker, is a measure of track timeliness;
- Total execution time (TET): Computational cost is another important factor in the PE of target tracking; the total time taken to run the tracker is the TET of each tracking algorithm.
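The timeliness measures admit a similarly direct computation. The per-run bookkeeping containers below (per-step false-track counts, per-target detection flags over the presence interval, and appearance/first-detection times) are assumptions chosen for illustration, not structures prescribed by the surveyed papers:

```python
def timeliness_measures(false_track_counts, detect_flags,
                        first_seen, first_tracked):
    """Timeliness measures over a run of T time steps (a sketch).

    false_track_counts: list of NFT values, one per time step.
    detect_flags: {target_id: [0/1 flags over its presence interval]}.
    first_seen / first_tracked: {target_id: time step}.
    """
    T = len(false_track_counts)
    # RFA: false tracks per time step, averaged over the run.
    rfa = sum(false_track_counts) / T
    # TPD: fraction of each target's presence interval with a detection.
    tpd = {i: sum(flags) / len(flags) for i, flags in detect_flags.items()}
    # TL: delay between target appearance and first tracker detection.
    tl = {i: first_tracked[i] - first_seen[i] for i in first_seen}
    return rfa, tpd, tl
```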
2.3. Accuracy Measures
- RMSE: The RMSE is defined in terms of the estimation error $\tilde{x}_t = \hat{x}_t - x_t$, the difference between the estimated state $\hat{x}_t$ and the truth state $x_t$, averaged over M Monte Carlo runs: $RMSE(t) = \sqrt{\frac{1}{M}\sum_{m=1}^{M}\left\|\hat{x}_t^{(m)} - x_t\right\|^2}$;
- Hausdorff distance: The Hausdorff distance is a common measure of the distance between two sets of objects and can be used to measure the similarity between tracks. For finite sets X and Y, it is given by $d_H(X,Y) = \max\left\{\max_{x\in X}\min_{y\in Y} d(x,y),\ \max_{y\in Y}\min_{x\in X} d(x,y)\right\}$;
- Wasserstein distance: The Wasserstein metric was initially used to measure the similarity of probability distributions [50] and was proposed for sets of targets in [51]. The Wasserstein distance of order p between the truth set X and the estimated set $\hat{X}$ is the minimum, over all transportation matrices $C = (C_{ij})$ between the two sets, of $\left(\sum_{i}\sum_{j} C_{ij}\, d(x_i,\hat{x}_j)^p\right)^{1/p}$;
- OSPA distance: The OSPA was proposed to overcome this insensitivity shortcoming; its parameters can deal with the case in which the numbers of elements in the two sets do not match [53]. For $|X| = m \le n = |\hat{X}|$, the OSPA metric between X and $\hat{X}$ is $\bar{d}_p^{(c)}(X,\hat{X}) = \left(\frac{1}{n}\left(\min_{\pi\in\Pi_n}\sum_{i=1}^{m} d^{(c)}\left(x_i,\hat{x}_{\pi(i)}\right)^p + c^p(n-m)\right)\right)^{1/p}$, where $d^{(c)}(x,y) = \min\left(c, d(x,y)\right)$ is the base distance cut off at c. In addition, there are various improved methods based on the OSPA [57], which are enumerated as follows:
- Generalized OSPA (GOSPA): In the GOSPA metric, we look for an optimal assignment between the truth targets and the estimated tracks, leaving missed and false targets unassigned [58]. The GOSPA metric penalizes localization errors for properly detected targets, the NMT, and the NFT [59]. For $\alpha = 2$, the GOSPA can be represented as an optimization over assignment sets $\gamma$: $d_p^{(c)}(X,\hat{X}) = \min_{\gamma}\left(\sum_{(i,j)\in\gamma} d(x_i,\hat{x}_j)^p + \frac{c^p}{2}\left(|X| + |\hat{X}| - 2|\gamma|\right)\right)^{1/p}$;
- OSPA-on-OSPA (OSPA$^{(2)}$) metric: The OSPA$^{(2)}$ metric [60,61] is the distance between two sets of tracks. It establishes an assignment between the real and the estimated trajectories that is not allowed to change with time, which enables capturing the tracking errors of fragmentation and track switching. It is also simple to compute and flexible enough to capture many important aspects of tracking performance. The OSPA$^{(2)}$ distance is the OSPA distance between the two sets of tracks, in which the base distance between a pair of tracks is itself a time-averaged OSPA distance over a window of time steps.
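As a concrete illustration of the OSPA metric defined above, the following minimal Python sketch evaluates it by brute force over assignments; this is fine for the small sets used here, while practical implementations solve the optimal assignment with the Hungarian algorithm instead:

```python
import math
from itertools import permutations

def ospa(X, Y, c=10.0, p=2):
    """OSPA distance between two finite sets of points (tuples).

    `c` is the cut-off parameter and `p` the order parameter.
    Brute-force over assignments; intended for small sets only.
    """
    # Order the sets so that |X| = m <= n = |Y| (OSPA is symmetric).
    if len(X) > len(Y):
        X, Y = Y, X
    m, n = len(X), len(Y)
    if n == 0:
        return 0.0

    def dist(a, b):
        # Base distance cut off at c.
        return min(c, math.dist(a, b))

    # Optimal sub-pattern assignment: best matching of X into Y.
    best = min(
        sum(dist(x, y) ** p for x, y in zip(X, perm))
        for perm in permutations(Y, m)
    ) if m else 0.0
    # Cardinality mismatch is penalized at the maximal cost c.
    return ((best + c ** p * (n - m)) / n) ** (1 / p)
```

For instance, `ospa([(0, 0)], [(0, 0), (100, 0)], c=10.0, p=2)` matches the single truth perfectly and pays the cut-off penalty `c` for the one unmatched estimate, giving `sqrt(c**2 / 2)`.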
3. CE Approaches
3.1. The Weight of Each Evaluation Metric Set
- Step 1
- The assessment metric system: According to Section 2, all levels of evaluation metrics are established: $u_i \in U$ is the ith metric of the primary metric set U; $u_{ij} \in u_i$ is the jth sub-metric of $u_i$; $u_{ijk} \in u_{ij}$ is the kth sub-metric of $u_{ij}$. The rest can be established in the same manner;
- Step 2
- The comparison matrix $A = (a_{ij})$: Using the numbers 1-9 as a scale, each pair of metrics in the above metric set U is compared, and $a_{ij}$ is the quantized importance of metric i relative to metric j, with $a_{ii} = 1$ and $a_{ji} = 1/a_{ij}$;
- Step 3
- The maximum eigenvalue $\lambda_{max}$ of A and the corresponding normalized eigenvector W: W is denoted as $W = (w_1, w_2, \ldots, w_n)^T$, where $w_i$ denotes the weight of the ith evaluation metric.
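Steps 1-3 above can be sketched in plain Python; approximating the principal eigenvector of the comparison matrix by power iteration is a common implementation choice, not something mandated by the AHP itself:

```python
def ahp_weights(A, iters=100):
    """Metric weights from a pairwise comparison matrix A (Saaty's
    1-9 scale): the normalized principal eigenvector of A,
    approximated by power iteration."""
    n = len(A)
    w = [1.0 / n] * n  # start from uniform weights
    for _ in range(iters):
        # One power-iteration step: v = A w, then normalize to sum 1.
        v = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]
    return w  # w[i] is the weight of the ith evaluation metric
```

For a consistent 2x2 matrix [[1, 2], [1/2, 1]] (metric 1 judged twice as important as metric 2), the weights converge to (2/3, 1/3).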
3.2. Cloud Barycenter Evaluation
- Step 1
- The cloud model of the comment set: The comment set of metrics is ascertained by experts. For example, we set S = {excellent, good, fair, worse, poor} to denote the comment set of target tracking, which is shown in Table 2. Each comment is mapped onto a corresponding continuous number field interval within [0,1] and is then represented by a cloud model with numerical characteristics $(Ex, En, He)$, i.e., expectation, entropy, and hyper-entropy;
- Step 2
- The quantitative and the qualitative variables for the given metric set;
- (a)
- The cloud model of quantitative metrics: The values of each quantitative metric established by n experts can be aggregated and denoted by a cloud model whose numerical characteristics are computed from the n expert values;
- (b)
- The cloud model of qualitative metrics: In the same way, every qualitative metric, which is represented by a linguistic value, can also be described by a cloud model;
- Step 3
- The weighted departure degree: Let $B = (b_1, b_2, \ldots, b_n)$ be the n-dimensional integrated barycenter vector, each dimension of which is calculated as $b_i = Ex_i \times h_i$, where $Ex_i$ is the cloud barycenter position and $h_i$ is the cloud barycenter height calculated by the AHP. Let $B^0$ denote the ideal cloud vector. The synthesized vector is normalized with respect to the ideal cloud as $b_i^G = (b_i - b_i^0)/b_i^0$. Finally, the weighted departure degree is given by the weighted sum $\theta = \sum_{i=1}^{n} w_i \left|b_i^G\right|$, where $w_i$ is the weight of the ith metric.
- Step 4
- Result analysis: The comment set is placed on a continuous interval, and each comment value is realized by a cloud model. The cloud-generator model can be established as Figure 3 shows. The comment set is divided into five categories: excellent, good, fair, worse, and poor. For a specific case, the assessment result is output by feeding the evaluated value into the cloud-generator model.
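The weighted departure degree of Step 3 can be sketched as follows. The normalization against the ideal cloud follows the usual cloud gravity-center method, and taking the AHP weight as the barycenter height is an assumption made here for illustration:

```python
def weighted_departure_degree(Ex, weights, Ex_ideal):
    """Weighted departure degree of an integrated cloud barycenter
    from the ideal cloud (a sketch; barycenter height is taken to be
    the AHP weight of each metric)."""
    # Per-dimension barycenter value: expectation scaled by its height.
    b = [e * w for e, w in zip(Ex, weights)]
    b_ideal = [e * w for e, w in zip(Ex_ideal, weights)]
    # Normalized deviation of each dimension from the ideal barycenter.
    dev = [(bi - bo) / bo for bi, bo in zip(b, b_ideal)]
    # Weighted departure degree: 0 means the ideal cloud is attained.
    return sum(w * abs(d) for w, d in zip(weights, dev))
```

With two equally weighted metrics whose cloud expectations are 0.9 and 0.7 against an ideal of 1.0, the departure degree is 0.5 * 0.1 + 0.5 * 0.3 = 0.2.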
3.3. Fuzzy CE Method
- Step 1
- The metric set: Analyzing the result of target tracking, we establish the evaluation metric set $U = \{u_1, u_2, \ldots, u_m\}$;
- Step 2
- The evaluation level set: The evaluation level set is given by $V = \{v_1, v_2, \ldots, v_s\}$, where $v_i$ is the ith grey category. V is the remark collection, which is made up of remarks on the research object;
- Step 3
- The evaluation matrix: Starting from a single factor, we determine the degree of membership of the evaluation object with respect to each element of the evaluation level set and make the single-factor fuzzy evaluation. Combining the single-factor evaluation sets then gives the multi-factor evaluation matrix $R = (r_{ij})$, where $r_{ij}$ is the degree of membership of the ith metric to the jth level;
- Step 4
- The fuzzy CE value: According to the principle of maximum membership degree, the comprehensive value of the PE is obtained; thereby, the corresponding performance levels [70] are determined.
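Steps 1-4 can be sketched in Python. The weighted-average composition operator used below is one common choice for combining the weight vector with the membership matrix; other fuzzy operators (e.g., max-min) are also used in the literature:

```python
def fuzzy_ce(weights, R, levels):
    """Fuzzy comprehensive evaluation: combine the metric weights
    with the single-factor membership matrix R (rows: metrics,
    columns: evaluation levels) via the weighted-average operator,
    then pick the level of maximum membership."""
    cols = range(len(R[0]))
    # B[j] = sum_i w_i * r_ij: comprehensive membership of level j.
    B = [sum(w * row[j] for w, row in zip(weights, R)) for j in cols]
    # Principle of maximum membership degree.
    return levels[max(cols, key=lambda j: B[j])], B
```

For example, with weights (0.6, 0.4) and memberships [[0.8, 0.2], [0.3, 0.7]] over levels ("good", "poor"), the comprehensive vector is (0.6, 0.4) and the verdict is "good".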
3.4. Grey Clustering
- Step 1
- Triangular whitenization weight functions are established for each metric and each grey class;
- Step 2
- The clustering coefficient: The clustering coefficient of an object with respect to grey class k is $\sigma^k = \sum_{j=1}^{m} f_j^k(x_j)\,\eta_j$, where $f_j^k$ is the whitenization weight function of metric j for class k, $x_j$ is the observed value of metric j, and $\eta_j$ is the weight of metric j;
- Step 3
- The integrated clustering coefficient of each monitoring point with respect to grey class k is calculated by combining the clustering coefficients, as detailed in [76];
- Step 4
- According to the integrated clustering coefficient, the evaluation result is determined. The value range of the integrated clustering coefficient is divided into s intervals of equal length, and the tracking algorithm is judged to be of the kth grey category when its integrated clustering coefficient falls into the kth interval.
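Steps 1-2 of the grey clustering procedure can be sketched as follows; the triangular endpoint convention used here (membership rising from the left endpoint to 1 at the peak, then falling to the right endpoint) is one common variant, as conventions differ across papers:

```python
def triangular_wwf(x, left, peak, right):
    """Triangular whitenization weight function for one grey class."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)   # rising edge
    return (right - x) / (right - peak)     # falling edge

def clustering_coefficients(values, weights, classes):
    """Clustering coefficient of one object for each grey class k:
    sigma_k = sum_j w_j * f_k(x_j); the object is judged to belong
    to the class with the largest coefficient.

    `classes` is a list of (left, peak, right) triples, one per class.
    """
    return [sum(w * triangular_wwf(x, *cls)
                for x, w in zip(values, weights))
            for cls in classes]
```

For a single metric value 0.5 with unit weight and two classes peaking at 0.5 and 1.0, the coefficients are (1.0, 0.0), so the object falls in the first grey category.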
4. Rating and Overall Performance
4.1. Application of Cloud Theory for Target Tracking
4.2. Application of Fuzzy CE for Target Tracking
4.3. PE in Target Tracking Using Grey Clustering
5. Conclusions and Remaining Challenges
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
CE | Comprehensive evaluation |
PE | Performance evaluation |
TPE | Track position error |
TE | Trajectory error |
TVE | Track velocity error |
NVT | Number of valid tracks |
NMT | Number of missed targets |
NFT | Number of false tracks |
NST | Number of spurious tracks |
ANST | Average number of swaps in tracks |
ANBT | Average number of broken tracks |
TR | Tracks redundancy |
RFA | Rate of false alarms |
TPD | Track probability of detection |
RTF | Rate of track fragmentation |
TL | Track latency |
TET | Total execution time |
OSPA | Optimal subpattern assignment |
AHP | Analytic hierarchy process |
SIAP | Single integrated air picture |
MOPT | Multiple-object-tracking precision |
MOTA | Multiple-object-tracking accuracy |
GUI | Graphical user interface |
PS | Pseudo-observation |
PRO | Projection |
KKT | Karush–Kuhn–Tucker |
KKT_KF | Karush–Kuhn–Tucker–Kalman filter |
UKF | Unconstrained Kalman filter |
T-FoT | Trajectory function of time |
References
- Chong, C.Y. Tracking and Data Fusion: A Handbook of Algorithms (Bar-Shalom, Y. et al.; 2011) [Bookshelf]. IEEE Control Syst. Mag. 2012, 32, 114–116. [Google Scholar]
- Blasch, E. Target Tracking Toolbox lecture notes and software to support EE716. Ph.D. Thesis, Wright State University, Dayton, OH, USA, 2001. [Google Scholar]
- Available online: https://www.pdx.edu/biomedical-signal-processing-lab/signal-point-kalman-filters-and-the-rebel-toolkit (accessed on 10 September 2021).
- Wan, E.A.; Van Der Merwe, R.; Haykin, S. The unscented Kalman filter. Kalman Filter. Neural Net. 2001, 5, 221–280. [Google Scholar]
- Paul, A.S. Sigma-point Kalman Smoothing: Algorithms and Analysis with Applications to Indoor Tracking. Ph.D. Thesis, Oregon Health & Science University, Portland, OR, USA, 2010. [Google Scholar]
- Straka, O.; Flídr, M.; Duník, J.; Ŝimandl, M. A software framework and tool for nonlinear state estimation. IFAC Proc. Vol. 2009, 42, 510–515. [Google Scholar] [CrossRef]
- Straka, O.; Flídr, M.; Duník, J.; Simandl, M.; Blasch, E. Nonlinear estimation framework in target tracking. In Proceedings of the 2010 13th International Conference on Information Fusion, Edinburgh, UK, 26–29 July 2010. [Google Scholar]
- Blasch, E.P.; Straka, O.; Duník, J.; Šimandl, M. Multitarget tracking performance analysis using the non-credibility index in the nonlinear estimation framework (NEF) toolbox. In Proceedings of the IEEE 2010 National Aerospace & Electronics Conference, Dayton, OH, USA, 14–16 July 2010; pp. 107–115. [Google Scholar]
- Crouse, D.F. The tracker component library: Free routines for rapid prototyping. IEEE Aerosp. Electron. Syst. Mag. 2017, 32, 18–27. [Google Scholar] [CrossRef]
- Thomas, P.A.; Barr, J.; Balaji, B.; White, K. An open source framework for tracking and state estimation (’Stone Soup’). In Signal Processing, Sensor/Information Fusion, and Target Recognition XXVI; International Society for Optics and Photonics: Bellingham, WA, USA, 2017; Volume 10200, p. 1020008. [Google Scholar]
- Last, D.; Thomas, P.; Hiscocks, S.; Barr, J.; Kirkland, D.; Rashid, M.; Li, S.B.; Vladimirov, L. Stone Soup: Announcement of beta release of an open-source framework for tracking and state estimation. In Signal Processing, Sensor/Information Fusion, and Target Recognition XXVIII; International Society for Optics and Photonics: Bellingham, WA, USA, 2019; Volume 11018, p. 1101807. [Google Scholar]
- Costa, P.C.; Laskey, K.B.; Blasch, E.; Jousselme, A.L. Towards unbiased evaluation of uncertainty reasoning: The URREF ontology. In Proceedings of the 2012 15th International Conference on Information Fusion, Singapore, 9–12 July 2012; pp. 2301–2308. [Google Scholar]
- Xu, Z. Performance evaluation of business administration training room in application-oriented universities. In Proceedings of the 2020 2nd International Conference on Computer Science Communication and Network Security (CSCNS2020), Sanya, China, 22–23 December 2021; Volume 336, p. 09015. [Google Scholar]
- Zhang, G.; Hui, G.; Zhang, G.; Hu, Y.; Zhao, Z. A Novel Comprehensive Evaluation Method of Forest State Based on Unit Circle. Forests 2019, 10, 5. [Google Scholar] [CrossRef] [Green Version]
- Li, X. Application Research on the Model of the Performance Evaluation of Enterprise Informatization. J. Inf. 2008, 12, 15–17. [Google Scholar]
- Bar-Shalom, Y.; Li, X.R.; Kirubarajan, T. Estimation with Applications to Tracking and Navigation: Theory, Algorithms and Software; John Wiley & Sons: Hoboken, NJ, USA, 2004; p. 584. [Google Scholar]
- Popp, R.L.; Kirubarajan, T.; Pattipati, K.R. Survey of assignment techniques for multitarget tracking. Multitarg.-Multisens. Tracking Appl. Adv. 2000, 3, 77–159. [Google Scholar]
- Colegrove, S.B.; Cheung, B.; Davey, S.J. Tracking system performance assessment. In Proceedings of the Sixth International Conference of Information Fusion, Cairns, Australia, 8–10 July 2003; Volume 2, pp. 926–933. [Google Scholar]
- Sheng, X.; Chen, Y.; Guo, L.; Yin, J.; Han, X. Multitarget Tracking Algorithm Using Multiple GMPHD Filter Data Fusion for Sonar Networks. Sensors 2018, 18, 3193. [Google Scholar] [CrossRef] [Green Version]
- Ristic, B.; Vo, B.N.; Clark, D.; Vo, B.T. A Metric for Performance Evaluation of Multi-Target Tracking Algorithms. IEEE Trans. Signal Process. 2011, 59, 3452–3457. [Google Scholar] [CrossRef]
- Kulmon, P.; Stukovska, P. Assessing Multiple-Target Tracking Performance Of GNN Association Algorithm. In Proceedings of the 2018 19th International Radar Symposium (IRS), Bonn, Germany, 20–22 June 2018. [Google Scholar]
- Evirgen, E.A. Multi sensor track fusion performance metrics. In Proceedings of the 2016 24th Signal Processing and Communication Application Conference (SIU), Zonguldak, Turkey, 16–19 May 2016; pp. 97–100. [Google Scholar]
- Gorji, A.A.; Tharmarasa, R.; Kirubarajan, T. Performance measures for multiple target tracking problems. In Proceedings of the 14th International Conference on Information Fusion, Chicago, IL, USA, 5–8 July 2011. [Google Scholar]
- de Villiers, J.P.; Focke, R.W.; Pavlin, G.; Jousselme, A.L.; Dragos, V.; Laskey, K.B.; Costa, P.; Blasch, E. Evaluation metrics for the practical application of URREF ontology: An illustration on data criteria. In Proceedings of the 2017 20th International Conference on Information Fusion (Fusion), Xi’an, China, 10–13 July 2017. [Google Scholar]
- García-Fernández, Á.F.; Rahmathullah, A.S.; Svensson, L. A time-weighted metric for sets of trajectories to assess multi-object tracking algorithms. arXiv 2021, arXiv:2110.13444. [Google Scholar]
- Rothrock, R.L.; Drummond, O.E. Performance metrics for multiple-sensor multiple-target tracking. In Proceedings of the SPIE Conference on Signal and Data Processing of Small Targets, San Diego, CA, USA, 30 July–2 August 2001; Volume 4048, pp. 521–531. [Google Scholar]
- Drummond, O.; Fridling, B. Ambiguities in evaluating performance of multiple target tracking algorithms. Proc. Spie Int. Soc. Opt. Eng. 1992, 1698, 326–337. [Google Scholar]
- Li, X.; Zhao, Z. Evaluation of estimation algorithms part I: Incomprehensive measures of performance. Aerosp. Electron. Syst. IEEE Trans. 2006, 42, 1340–1358. [Google Scholar] [CrossRef]
- Blackman, S.; Popoli, R. Design and Analysis of Modern Tracking Systems; Artech House Publishers: London, UK, 1999; p. 1015. [Google Scholar]
- Bernardin, K.; Stiefelhagen, R. Evaluating multiple object tracking performance: The clear mot metrics. Eurasip J. Image Video Process. 2008, 2008, 246309. [Google Scholar] [CrossRef] [Green Version]
- Milan, A.; Schindler, K.; Roth, S. Challenges of ground truth evaluation of multitarget tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Portland, OR, USA, 23–28 June 2013; pp. 735–742. [Google Scholar]
- Pelletier, M.; Sivagnanam, S.; Blasch, E.P. A track scoring MOP for perimeter surveillance radar evaluation. In Proceedings of the 2012 15th International Conference on Information Fusion, Singapore, 9–12 July 2012; pp. 2028–2034. [Google Scholar]
- Blasch, E.P.; Straka, O.; Yang, C.; Qiu, D.; Šimandl, M.; Ajgl, J. Distributed tracking fidelity-metric performance analysis using confusion matrices. In Proceedings of the 2012 15th International Conference on Information Fusion, Singapore, 9–12 July 2012. [Google Scholar]
- Evers, C.; Löllmann, H.W.; Mellmann, H.; Schmidt, A.; Barfuss, H.; Naylor, P.A.; Kellermann, W. LOCATA challenge-overview of evaluation measures. Trans. Signal Process. 2008, 56, 3447–3457. [Google Scholar]
- Mori, S.; Chang, K.C.; Chong, C.Y.; Dunn, K.P. Tracking performance evaluation-prediction of track purity. In Signal and Data Processing of Small Targets 1989; International Society for Optics and Photonics: Bellingham, WA, USA, 1989; Volume 1096, pp. 215–223. [Google Scholar]
- Blasch, E.P.; Valin, P. Track purity and current assignment ratio for target tracking and identification evaluation. In Proceedings of the 14th International Conference on Information Fusion, Chicago, IL, USA, 5–8 July 2011. [Google Scholar]
- Blasch, E. Fusion Evaluation Tutorial. In Proceedings of the International Conference on Information Fusion, Stockholm, Sweden, 28 June–1 July 2004. [Google Scholar]
- Coraluppi, S.; Grimmett, D.; de Theije, P. Benchmark Evaluation of Multistatic Trackers. In Proceedings of the 2006 9th International Conference on Information Fusion, Florence, Italy, 10–13 July 2006. [Google Scholar]
- Grimmett, D.; Coraluppi, S.; Cour, B.; Hempel, C.G.; Lang, T.; Theije, P.; Willett, P. MSTWG multistatic tracker evaluation using simulated scenario data sets. In Proceedings of the 2008 11th International Conference on Information Fusion, Cologne, Germany, 30 June–3 July 2008. [Google Scholar]
- Guerriero, M.; Svensson, L.; Svensson, D.; Willett, P. Shooting two birds with two bullets: How to find Minimum Mean OSPA estimates. In Proceedings of the 2010 13th International Conference on Information Fusion, Edinburgh, UK, 26–29 July 2010. [Google Scholar]
- Chang, F.; Chen, Z.; Wang, W.; Wang, L. The Hausdorff distance template matching algorithm based on Kalman filter for target tracking. In Proceedings of the 2009 IEEE International Conference on Automation and Logistics, Shenyang, China, 5–7 August 2009; pp. 836–840. [Google Scholar]
- Da, K.; Li, T.; Zhu, Y.; Fu, Q. A Computationally Efficient Approach for Distributed Sensor Localization and Multitarget Tracking. IEEE Commun. Lett. 2020, 24, 335–338. [Google Scholar] [CrossRef]
- Hoffman, J.R.; Mahler, R. Multitarget miss distance and its applications. In Proceedings of the Fifth International Conference on Information Fusion. FUSION 2002, Annapolis, MD, USA, 8–11 July 2002; Volume 1, pp. 149–155. [Google Scholar]
- Schuhmacher, D.; Vo, B.T.; Vo, B.N. A Consistent Metric for Performance Evaluation of Multi-Object Filters. IEEE Trans. Signal Process. 2008, 56, 3447–3457. [Google Scholar] [CrossRef] [Green Version]
- Ristic, B.; Vo, B.N.; Clark, D. Performance evaluation of multitarget tracking using the OSPA metric. In Proceedings of the 2010 13th International Conference on Information Fusion, Edinburgh, UK, 26–29 July 2010. [Google Scholar]
- Nagappa, S.; Clark, D.E.; Mahler, R. Incorporating track uncertainty into the OSPA metric. In Proceedings of the 14th International Conference on Information Fusion, Chicago, IL, USA, 5–8 July 2011. [Google Scholar]
- El-Fallah, A.I.; Ravichandran, R.B.; Mehra, R.K.; Hoffman, J.R.; Alford, M.G. Scientific performance evaluation for distributed sensor management and adaptive data fusion. In Signal Processing, Sensor Fusion, and Target Recognition X; International Society for Optics and Photonics: Bellingham, WA, USA, 2001; Volume 4380, pp. 328–338. [Google Scholar]
- Hoffman, J.R.; Mahler, R.; Zajic, T. User-defined information and scientific performance evaluation. In Signal Processing, Sensor Fusion, and Target Recognition X; International Society for Optics and Photonics: Bellingham, WA, USA, 2001; Volume 4380, pp. 300–311. [Google Scholar]
- Mahler, R. Scientific performance metrics for data fusion: New results. In Signal Processing, Sensor Fusion, and Target Recognition IX; International Society for Optics and Photonics: Bellingham, WA, USA, 2000; Volume 4052, pp. 172–182. [Google Scholar]
- Villani, C. Optimal Transport. Old and New; Springer: Berlin/Heidelberg, Germany, 2009; p. 976. [Google Scholar]
- Hoffman, J.R.; Mahler, R. Multitarget Miss Distance via Optimal Assignment. IEEE Trans. Syst. Man Cybern. Part Syst. Hum. 2004, 34, 327–336. [Google Scholar] [CrossRef]
- García-Fernández, Á.F.; Svensson, L. Spooky effect in optimal OSPA estimation and how GOSPA solves it. In Proceedings of the 2019 22th International Conference on Information Fusion (FUSION), Ottawa, ON, Canada, 2–5 July 2019. [Google Scholar]
- Vu, T.; Evans, R. A new performance metric for multiple target tracking based on optimal subpattern assignment. In Proceedings of the 17th International Conference on Information Fusion (FUSION), Salamanca, Spain, 7–10 July 2014. [Google Scholar]
- Mei, L.; Li, H.; Zhou, Y.; Li, D.; Long, W.; Xing, F. Output-Only Damage Detection of Shear Building Structures Using an Autoregressive Model-Enhanced Optimal Subpattern Assignment Metric. Sensors 2020, 20, 2050. [Google Scholar] [CrossRef] [Green Version]
- Lian, F.; Zhang, G.H.; Duan, Z.S.; Han, C.Z. Multi-Target Joint Detection and Estimation Error Bound for the Sensor with Clutter and Missed Detection. Sensors 2016, 16, 169. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Li, W.; Han, C. Dual Sensor Control Scheme for Multi-Target Tracking. Sensors 2018, 18, 1653. [Google Scholar] [CrossRef] [Green Version]
- Schubert, R.; Klöden, H.; Wanielik, G.; Kälberer, S. Performance evaluation of Multiple Target Tracking in the absence of reference data. In Proceedings of the 2010 13th International Conference on Information Fusion, Edinburgh, UK, 26–29 July 2010. [Google Scholar]
- Rahmathullah, A.S.; García-Fernández, Á.F.; Svensson, L. Generalized optimal sub-pattern assignment metric. In Proceedings of the 2017 20th International Conference on Information Fusion (Fusion), Xi’an, China, 10–13 July 2017. [Google Scholar]
- Xia, Y.; Granström, K.; Svensson, L.; García-Fernández, A.F. Performance evaluation of multi-Bernoulli conjugate priors for multitarget filtering. In Proceedings of the 2017 20th International Conference on Information Fusion (Fusion), Xi’an, China, 10–13 July 2017. [Google Scholar]
- Beard, M.; Vo, B.T.; Vo, B.N. OSPA(2): Using the OSPA metric to evaluate multitarget tracking performance. In Proceedings of the 2017 International Conference on Control, Automation and Information Sciences (ICCAIS), Chiang Mai, Thailand, 31 October–1 November 2017; pp. 86–91. [Google Scholar]
- Beard, M.; Ba, T.V.; Vo, B.N. A Solution for Large-Scale Multi-Object Tracking. IEEE Trans. Signal Process. 2020, 68, 2754–2769. [Google Scholar] [CrossRef] [Green Version]
- Votruba, P.; Nisley, R.; Rothrock, R.; Zombro, B. Single Integrated Air Picture (SIAP) Metrics Implementation; Technical Report; Single Integrated Air Picture System Engineering Task Force: Arlington, VA, USA, 2001. [Google Scholar]
- Available online: https://stonesoup.readthedocs.io/en/v0.1b7/auto_examples/Metrics.html (accessed on 16 September 2021).
- Shang, J.; Vargas, L. New Concepts and Applications of AHP in the Internet Era. J. Multi-Criteria Decis. Anal. 2012, 19, 1–2. [Google Scholar] [CrossRef]
- Cho, J.; Lee, J. Development of a new technology product evaluation model for assessing commercialization opportunities using Delphi method and fuzzy AHP approach. Expert Syst. Appl. 2013, 40, 5314–5330. [Google Scholar] [CrossRef]
- Song, W.; Wen, W.; Guo, Q.; Chen, H.; Zhao, J. Performance evaluation of sensor information fusion system based on cloud theory and fuzzy pattern recognition. In Proceedings of the 2020 IEEE International Conference on Information Technology, Big Data and Artificial Intelligence (ICIBA), Chongqing, China, 6–8 November 2020; Volume 1, pp. 299–303. [Google Scholar]
- Ma, L.Y.; Shi, Q.S. Method of Safety Assessment about the Electric Power Supply Company Based on Cloud Gravity Center Theory. Adv. Mater. Res. 2012, 354, 1149–1156. [Google Scholar] [CrossRef]
- Zhang, C.; Shi, Q.; Liu, T.L. Air Base Damage Evaluation Using Cloud Barycenter Evaluation Method. Adv. Mater. Res. 2012, 430, 803–807. [Google Scholar] [CrossRef]
- Gu, H.; Cheng, Z.; Quan, S. Equipment maintenance support capability evaluation using cloud barycenter evaluation method. Telkomnika Indones. J. Electr. Eng. 2013, 11, 599–606. [Google Scholar] [CrossRef]
- Liu, H.; Li, B.; Sun, Y.; Dou, X.; Zhang, Y.; Fan, X. Safety Evaluation of Large-size Transportation Bridges Based on Combination Weighting Fuzzy Comprehensive Evaluation Method. Iop Conf. Ser. Earth Environ. Sci. 2021, 787, 012194. [Google Scholar] [CrossRef]
- Zhang, L.; Pan, Z. Fuzzy Comprehensive Evaluation Based on Measure of Medium Truth Scale. In Proceedings of the 2009 International Conference on Artificial Intelligence and Computational Intelligence, Shanghai, China, 7–8 November 2009; Volume 2, pp. 83–87. [Google Scholar]
- Wang, J.; Zhang, Y.; Wang, Y.; Gu, L. Assessment of Building Energy Efficiency Standards Based on Fuzzy Evaluation Algorithm. Eng. Sustain. 2019, 173, 1–14. [Google Scholar] [CrossRef]
- Delgado, A.; Cuadra, D.; Simon, K.; Bonilla, K.; Lee, E. Evaluation of Water Quality in the Lower Huallaga River Watershed using the Grey Clustering Analysis Method. Int. J. Adv. Comput. Sci. Appl. 2021, 12. [Google Scholar] [CrossRef]
- Delgado, A.; Fernandez, A.; Chirinos, B.; Barboza, G.; Lee, E. Impact of the Mining Activity on the Water Quality in Peru Applying the Fuzzy Logic with the Grey Clustering Method. Int. J. Adv. Comput. Sci. Appl. 2021, 12. [Google Scholar] [CrossRef]
- Dang, Y.G.; Liu, S.F.; Liu, B. Study on the Integrated Grey Clustering Method under the Clustering Coefficient with Non-Distinguished Difference. Chin. J. Manag. Sci. 2005, 13, 69–73. [Google Scholar]
- Jiskani, I.M.; Han, S.; Rehman, A.U.; Shahani, N.M.; Brohi, M.A. An Integrated Entropy Weight and Grey Clustering Method-Based Evaluation to Improve Safety in Mines. Min. Metall. Explor. 2021, 38, 1773–1787. [Google Scholar] [CrossRef]
- Rahmathullah, A.S.; García-Fernández, Á.F.; Svensson, L. A metric on the space of finite sets of trajectories for evaluation of multitarget-tracking algorithms. IEEE Trans. Signal Process. 2016, 68, 3908–3917. [Google Scholar]
- Rezatofighi, H.; Nguyen, T.; Vo, B.N.; Vo, B.T.; Reid, I. How trustworthy are the existing performance evaluations for basic vision tasks? arXiv 2020, arXiv:2008.03533. [Google Scholar]
- Zhou, J.; Li, T.; Wang, X. State Estimation with Linear Equality Constraints Based on Trajectory Function of Time and Karush-Kuhn-Tucker Conditions. In Proceedings of the 2021 International Conference on Control, Automation and Information Sciences (ICCAIS), Xi’an, China, 14–17 October 2021; pp. 438–443. [Google Scholar]
- Tahk, M.; Speyer, J.L. Target tracking problems subject to kinematic constraints. IEEE Trans. Autom. Control. 1990, 35, 324–326. [Google Scholar] [CrossRef]
- Ko, S.; Bitmead, R.R. State estimation for linear systems with state equality constraints. Automatica 2007, 43, 1363–1368. [Google Scholar] [CrossRef]
- Simon, D.; Chia, T.L. Kalman filtering with state equality constraints. IEEE Trans. Aerosp. Electron. Syst. 2002, 38, 128–136. [Google Scholar] [CrossRef] [Green Version]
- Boyd, S.; Boyd, S.P.; Vandenberghe, L. Convex Optimization; Cambridge University Press: Cambridge, UK, 2004. [Google Scholar]
- Xu, L.; Li, X.R.; Duan, Z.; Lan, J. Modeling and state estimation for dynamic systems with linear equality constraints. IEEE Trans. Signal Process. 2013, 61, 2927–2939. [Google Scholar] [CrossRef]
- Li, T.; Chen, H.; Sun, S.; Corchado, J.M. Joint Smoothing and Tracking Based on Continuous-Time Target Trajectory Function Fitting. IEEE Trans. Autom. Sci. Eng. 2019, 16, 1476–1483. [Google Scholar] [CrossRef] [Green Version]
- Zhou, J.; Li, T.; Wang, X.; Zheng, L. Target Tracking With Equality/Inequality Constraints Based on Trajectory Function of Time. IEEE Signal Process. Lett. 2021, 28, 1330–1334. [Google Scholar] [CrossRef]
- Li, T. Single-Road-Constrained Positioning Based on Deterministic Trajectory Geometry. IEEE Comm. Lett. 2019, 23, 80–83. [Google Scholar] [CrossRef]
- Li, T.; Fan, H. From Target Tracking to Targeting Track: A Data-Driven Approach to Non-cooperative Target Detection and Tracking. arXiv 2021, arXiv:2104.11122. [Google Scholar]
Metric | Description |
---|---|
Ambiguity | A measure of the number of tracks assigned to each true object |
Completeness | The percentage of live objects with tracks on them |
LS | The percentage of time spent tracking true objects across the dataset |
LT | 1/R, where R is the average number of excess tracks assigned; the higher this value, the better |
Positional Accuracy | The average positional error of the track relative to the truth |
Spuriousness | The percentage of tracks unassigned to any object |
Velocity Accuracy | The average velocity error of the track relative to the truth |
Number of Targets | The total number of targets |
Number of Tracks | The total number of tracks |
Comments | Excellent | Good | Fair | Worse | Poor |
---|---|---|---|---|---|
Number field interval | [1,c1] | [c1,c2] | [c2,c3] | [c3,c4] | [c4,0] |
Comments | Number Field Interval | Numeral Characteristics |
---|---|---|
excellent | [1,0.8] | (0.9,0.033) |
good | [0.8,0.6] | (0.7,0.033) |
fair | [0.6,0.4] | (0.5,0.033) |
worse | [0.4,0.2] | (0.3,0.033) |
poor | [0.2,0] | (0.1,0.033) |
Parameter | Expectations | Entropy |
---|---|---|
C1 | 0.86 | 0.33 |
C2 | 0.66 | 0.33 |
C3 | 0.64 | 0.33 |
C4 | 0.64 | 0.33 |
C5 | 0.5 | 0.33 |
C6 | 0.48 | 0.33 |
C7 | 0.56 | 0.33 |
C8 | 0.46 | 0.33 |
C9 | 0.68 | 0.33 |
C10 | 0.48 | 0.33 |
C11 | 0.5 | 0.33 |
C12 | 0.7 | 0.33 |
C13 | 0.76 | 0.33 |
C14 | 0.76 | 0.33 |
Algorithm | TPE | TL | TVE | TPD | RFA |
---|---|---|---|---|---|
PS | 5.2545 | 1 | 2.1056 | 0.958 | 0.00042 |
PRO | 4.4997 | 1 | 2.8736 | 0.973 | 0.0085 |
KKT | 4.4997 | 2 | 3.0789 | 0.965 | 0.014 |
KKT_KF | 3.9048 | 1 | 2.505 | 0.966 | 0.0007 |
UKF | 6.3551 | 1 | 2.505 | 0.968 | 0.001 |
T-FoT | 5.5309 | 2 | 3.0789 | 0.961 | 0.005 |
(Figure panels: excellent, good, medium, and poor grey categories.)
Share and Cite
Song, Y.; Hu, Z.; Li, T.; Fan, H. Performance Evaluation Metrics and Approaches for Target Tracking: A Survey. Sensors 2022, 22, 793. https://doi.org/10.3390/s22030793