SAFS: Object Tracking Algorithm Based on Self-Adaptive Feature Selection
Abstract
1. Introduction
- We characterize the sub-template of each feature and calculate the similarity value matrix of each sub-template based on the linear property of the maximum posterior probability [21].
- We optimize the calculation of Jeffreys’ entropy [22] and use it as the similarity metric, which measures the difference between categories more efficiently.
- We propose an object tracking method based on a self-adaptive feature selection algorithm, called SAFS, which achieves better tracking performance without manual intervention.
2. Adaptive Multi-Feature Selection Tracking Algorithm
- (1) Description of object features: compute the description of each feature for the target area, the area to be matched, and the search area. This covers the feature description of the target in the first frame, and the descriptions of the search area and the area to be matched in every subsequent frame.
- (2) Flexible selection of the most suitable feature: the optimal feature computed in the previous frame is used as the descriptive feature of the current frame. The object location is then computed iteratively according to the maximum posterior probability similarity criterion.
- (3) Multi-feature selection: based on the central position of the object, divide the search area into foreground and background and compute the similarity of each feature. From the similarity value map, the J-divergence entropy [22] measures how well each feature distinguishes the foreground from the background. The most discriminative feature is selected and used for tracking in the next frame (a sketch of this per-frame loop follows the list).
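To make the three stages concrete, here is a minimal Python sketch of the per-frame loop. It is a hedged illustration, not the paper's MATLAB implementation: `describe`, `locate`, and `separability` are hypothetical callables standing in for the feature description, the maximum-posterior-probability localization, and the J-divergence separability score.

```python
def track_sequence(frames, init_box, describe, locate, separability,
                   features=("color", "texture")):
    """Per-frame SAFS loop sketch (helper callables are hypothetical).

    describe(frame, box, feature)     -> feature description of a region
    locate(frame, description, box)   -> new target box via the maximum
                                         posterior probability criterion
    separability(frame, box, feature) -> J-divergence foreground/background
                                         separability score of one feature
    """
    box = init_box
    best = features[0]  # assumption: start from the first listed feature
    for frame in frames:
        # (1) Describe the target / area to be matched / search area.
        description = describe(frame, box, best)
        # (2) Track with the feature selected in the previous frame.
        box = locate(frame, description, box)
        # (3) Re-score every feature and keep the most discriminative
        #     one for the next frame.
        scores = {f: separability(frame, box, f) for f in features}
        best = max(scores, key=scores.get)
        yield box, best
```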
2.1. Multi-Feature Description and Similarity Measure of Sub-Template
- Decrease the feature vector dimension of multiple features;
- Flexibly select the most proper feature.
- The template of the target area:
- The template of the area to be matched:
- The template of the search area (a code sketch of these sub-templates follows):
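The template formulas above did not survive extraction. As a stand-in, the sketch below shows one plausible way to build normalized histogram sub-templates for the two features the paper uses, color and LBP texture; the bin counts and the plain 8-neighbour LBP variant are our assumptions, not the paper's exact definitions.

```python
import numpy as np

def color_template(patch, bins=16):
    """Per-channel color histogram, L1-normalized (bin count is assumed)."""
    hists = [np.histogram(patch[..., c], bins=bins, range=(0, 256))[0]
             for c in range(patch.shape[-1])]
    h = np.concatenate(hists).astype(float)
    return h / max(h.sum(), 1e-12)

def lbp_template(gray):
    """Histogram of basic 8-neighbour LBP codes as a texture sub-template."""
    g = gray.astype(np.int32)
    center = g[1:-1, 1:-1]
    code = np.zeros_like(center)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        # Compare each neighbour with the center pixel and set one bit.
        neighbour = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= (neighbour >= center).astype(np.int32) << bit
    h = np.bincount(code.ravel(), minlength=256).astype(float)
    return h / max(h.sum(), 1e-12)
```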
2.2. Description of Feature Selection
2.3. Introduction of Relative Entropy and J-Divergence Entropy
- It can be used to measure the similarity between two functions whose values are positive.
- For two functions that are exactly the same, the relative entropy is zero; the larger the difference, the larger the relative entropy.
- If two probability density functions take values greater than zero everywhere, the relative entropy can measure the difference between the two random distributions.
- Relative entropy is not symmetric; it does not obey the commutative law (see the numerical example after this list).
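A small numerical illustration of these properties, with the symmetric J-divergence built from the two one-sided relative entropies; the smoothing constant `eps` is our own guard against zero-valued bins.

```python
import numpy as np

def relative_entropy(p, q, eps=1e-12):
    """D(p || q) = sum_i p_i * log(p_i / q_i), with eps guarding zero bins."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def j_divergence(p, q):
    """Symmetric J-divergence: D(p || q) + D(q || p)."""
    return relative_entropy(p, q) + relative_entropy(q, p)

p = [0.5, 0.3, 0.2]
q = [0.1, 0.4, 0.5]
print(relative_entropy(p, p))   # identical distributions -> 0.0
print(relative_entropy(p, q))   # ~0.535
print(relative_entropy(q, p))   # ~0.412, differs: not symmetric
print(j_divergence(p, q))       # ~0.947, and J(p, q) == J(q, p)
```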
2.4. The Discriminant Algorithm for Distinguishing Ability of Multi-Features
2.5. Selectively Update for Sub-Template
- Adaptively update the target size based on the characteristics of the maximum posterior probability similarity criterion;
- Set two thresholds to control the number and degree of template updates (a threshold-gated sketch follows this list).
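A minimal sketch of such a threshold-gated update. The two threshold values, the blending rate, and the linear blend itself are assumptions for illustration, not the paper's Formulas (7) and (8).

```python
import numpy as np

def update_sub_template(template, candidate, similarity,
                        t_low=0.8, t_high=0.95, rate=0.1):
    """Threshold-gated sub-template update (illustrative values).

    - similarity < t_low : match too unreliable, skip the update so that
      occluders are not absorbed into the template;
    - similarity > t_high: match already near-perfect, skip to limit the
      number of updates;
    - otherwise          : blend a small fraction of the candidate in.
    """
    if similarity < t_low or similarity > t_high:
        return template
    return (1.0 - rate) * np.asarray(template, float) \
        + rate * np.asarray(candidate, float)
```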
2.6. Steps of Adaptive Feature Selection Algorithm
- (1) Set the target position of the previous frame as the initial position. Compute the color and texture feature descriptions of the search area centered on this initial position;
- (2) According to the formula , compute the color and texture similarity contributions of each pixel in the search area of the current frame. Based on the separability result of the previous frame, select the similarity contribution of the corresponding feature for tracking;
- (3) Initialize k = 0, the iteration counter;
- (4) Compute the center position to be matched in the next iteration by Formula (6), and set k = k + 1;
- (5) Repeat step 4 until the convergence condition is met;
- (6) Compute the distinguishing abilities of the color and texture features in the current scene by Formula (5);
- (7) According to Formula (7) [25], adaptively determine whether the target size has changed;
- (8) According to Formula (8) [25], determine whether to update the target template;
- (9) Load the next frame of the sequence and jump to step 1 (a sketch of the localization loop of steps 3–5 follows this list).
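Steps 3–5 form an iterative localization loop. The sketch below approximates it with a mean-shift-style weighted centroid over a window of per-pixel similarity contributions; the window radius, the convergence tolerance, and the centroid rule are our stand-ins for the elided Formula (6) and stopping condition.

```python
import numpy as np

def localize(weights, y0, radius=20, eps=0.5, max_iter=20):
    """Iterate a weighted-centroid update until the center stops moving.

    weights: 2-D array of per-pixel similarity contributions (search area)
    y0:      initial center (row, col), i.e., the previous frame's position
    """
    h, w = weights.shape
    y = np.asarray(y0, dtype=float)
    for _ in range(max_iter):                       # k = 0, 1, 2, ...
        r0, r1 = max(int(y[0]) - radius, 0), min(int(y[0]) + radius + 1, h)
        c0, c1 = max(int(y[1]) - radius, 0), min(int(y[1]) + radius + 1, w)
        window = weights[r0:r1, c0:c1]
        total = window.sum()
        if total <= 0:                              # no support: stop
            break
        rows, cols = np.mgrid[r0:r1, c0:c1]
        y_next = np.array([(rows * window).sum() / total,
                           (cols * window).sum() / total])
        if np.linalg.norm(y_next - y) < eps:        # step 5: convergence
            return y_next
        y = y_next
    return y
```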
3. Experiments
3.1. Experimental Dataset and Evaluation Metrics
3.2. Implementation Description
3.3. Comparison Experiment
3.4. Tracking Process Analysis
- (1) CarScale: The most significant attribute of this sequence is scale variation; the scale of the car increases as it approaches quickly from a distance. In the first 155 frames, all algorithms track the object correctly. Starting from frame 165, KCF gradually loses the target because of occlusion by the tree and only re-acquires it at frame 201, which shows that KCF cannot handle occlusion well. Overall, LBPT and SAFS show better performance. Although most of the algorithms can track the target, the tracking boxes do not grow with the scale of the target; this is a point where our algorithm needs improvement.
- (2) David: At first, the environment is dark. LBPT, DSST, KCF and SAFS obtain better results. From frame 162, LBP&PPM loses the object because the light is too dim, resulting in sparse texture features. RGB&PPM loses the target between frame 451 and frame 536 because of the confusing effect of the poster on the wall. During tracking, LBPT follows the object but with some offset: when the color sub-template was updated earlier in the tracking process, template drift occurred, which caused a matching shift. The performance of LBPT shows that a combined color-texture feature can express the characteristics of different features simultaneously, but the features interfere with each other, leading to inaccurate localization of the target. DSST, KCF and SAFS track the object well throughout.
- (3) Panda: This is a long sequence with 1000 frames. LBP&PPM loses the target from frame 51 and RGB&PPM from frame 138, which indicates that a single feature cannot track this target well. LBPT and DSST lose the target when the panda passes the sign for the second time. KCF loses the target several times due to occlusion by trees or interference from similar objects. Our SAFS algorithm is robust and tracks the target well throughout the sequence.
- (4) FaceOcc1: This is a relatively simple tracking sequence because it has only one attribute, occlusion. From frame 523, LBPT and DSST lose track of the face, which indicates that they cannot handle object occlusion well. The other algorithms perform well on this sequence.
3.5. Speed Analysis
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Conflicts of Interest
References
1. Mangawati, A.; Leesan, M.; Aradhya, H.V.R. Object Tracking Algorithms for video surveillance applications. In Proceedings of the International Conference on Communication and Signal Processing (ICCSP), Chennai, India, 3–5 April 2018; pp. 667–671.
2. Verma, R. A review of object detection and tracking methods. Int. J. Adv. Eng. Res. Dev. 2017, 4, 569–578.
3. Ross, D.A.; Lim, J.; Lin, R.S.; Yang, M.H. Incremental Learning for Robust Visual Tracking. Int. J. Comput. Vis. 2008, 77, 125–141.
4. Mei, X.; Ling, H. Robust visual tracking using L1 minimization. In Proceedings of the IEEE International Conference on Computer Vision, Kyoto, Japan, 29 September–2 October 2009; pp. 1436–1443.
5. Yu, K.; Lin, Y.; Lafferty, J. Learning image representations from the pixel level via hierarchical sparse coding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Colorado Springs, CO, USA, 20–25 June 2011; pp. 1713–1720.
6. Riahi, D.; Bilodeau, G.A. Multiple object tracking based on sparse generative appearance modeling. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Quebec City, QC, Canada, 27–30 September 2015; pp. 4017–4021.
7. Tkach, A.; Tagliasacchi, A.; Remelli, E.; Pauly, M.; Fitzgibbon, A. Online generative model personalization for hand tracking. ACM Trans. Graph. (ToG) 2017, 36, 1–11.
8. Grabner, H.; Grabner, M.; Bischof, H. Real-Time Tracking via On-line Boosting. Br. Mach. Vis. Conf. 2006, 1, 47–56.
9. Wu, Y.; Lim, J.; Yang, M.H. Online object tracking: A benchmark. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 23–28 June 2013; pp. 2411–2418.
10. Hare, S.; Golodetz, S.; Saffari, A.; Vineet, V.; Cheng, M.M.; Hicks, S.L.; Torr, P.H. Struck: Structured output tracking with kernels. In Proceedings of the IEEE International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 263–270.
11. Kalal, Z.; Matas, J.; Mikolajczyk, K. P-N learning: Bootstrapping binary classifiers by structural constraints. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010; pp. 49–56.
12. Bolme, D.S.; Beveridge, J.R.; Draper, B.A.; Lui, Y.M. Visual object tracking using adaptive correlation filters. In Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010; pp. 2544–2550.
13. Danelljan, M.; Häger, G.; Khan, F.; Felsberg, M. Accurate scale estimation for robust visual tracking. In Proceedings of the British Machine Vision Conference, Nottingham, UK, 1–5 September 2014.
14. Zhang, J.; Jin, X.; Sun, J.; Wang, J.; Sangaiah, A.K. Spatial and semantic convolutional features for robust visual object tracking. Multimed. Tools Appl. 2020, 79, 15095–15115.
15. Zhang, T.; Xu, C.; Yang, M.H. Multi-task correlation particle filter for robust object tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4335–4343.
16. Perez-Cham, O.E.; Puente, C.; Soubervielle-Montalvo, C.; Olague, G.; Aguirre-Salado, C.A.; Nuñez-Varela, A.S. Parallelization of the honeybee search algorithm for object tracking. Appl. Sci. 2020, 10, 2122.
17. Bae, S.H.; Yoon, K.J. Confidence-based data association and discriminative deep appearance learning for robust online multi-object tracking. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 40, 595–610.
18. Chen, Y.; Xu, J.; Yu, J.; Wang, Q.; Yoo, B.; Han, J.J. AFOD: Adaptive Focused Discriminative Segmentation Tracker. In European Conference on Computer Vision; Springer: Cham, Switzerland, 2020; pp. 666–682.
19. Varfolomieiev, A. Channel-independent spatially regularized discriminative correlation filter for visual object tracking. J. Real-Time Image Process. 2021, 18, 233–243.
20. Tschannen, M.; Djolonga, J.; Ritter, M.; Mahendran, A.; Houlsby, N.; Gelly, S.; Lucic, M. Self-supervised learning of video-induced visual invariances. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 13806–13815.
21. Feng, Z.; Lu, N.; Jiang, P. Posterior probability measure for image matching. Pattern Recognit. 2008, 41, 2422–2433.
22. Clarke, B.S.; Barron, A.R. Jeffreys’ prior is asymptotically least favorable under entropy risk. J. Stat. Plan. Inference 1994, 41, 37–60.
23. Mika, S.; Ratsch, G.; Weston, J.; Scholkopf, B.; Mullers, K.R. Fisher discriminant analysis with kernels. In Proceedings of the Neural Networks for Signal Processing IX: 1999 IEEE Signal Processing Society Workshop, Madison, WI, USA, 25 August 1999; pp. 41–48.
24. Joe, H. Relative entropy measures of multivariate dependence. J. Am. Stat. Assoc. 1989, 84, 157–164.
25. Guo, W.; Feng, Z.; Ren, X. Object tracking using local multiple features and a posterior probability measure. Sensors 2017, 17, 739.
26. Kim, H.U.; Lee, D.Y.; Sim, J.Y.; Kim, C.S. SOWP: Spatially ordered and weighted patch descriptor for visual tracking. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 3011–3019.
27. Joukhadar, A.; Scheuer, A.; Laugier, C. Fast contact detection between moving deformable polyhedra. In Proceedings of the 1999 IEEE/RSJ International Conference on Intelligent Robots and Systems, Kyongju, Korea, 17–21 October 1999; pp. 1810–1815.
28. Ning, J.; Zhang, L.; Zhang, D.; Wu, C. Robust object tracking using joint color-texture histogram. Int. J. Pattern Recognit. Artif. Intell. 2009, 23, 1245–1263.
29. Henriques, J.F.; Caseiro, R.; Martins, P.; Batista, J. High-speed tracking with kernelized correlation filters. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 37, 583–596.
Experimental environment:

System | CPU | Frequency | RAM | Software
---|---|---|---|---
Windows 10 | Intel(R) Core(TM) i7-7700 | 3.60 GHz | 16.0 GB | MATLAB R2019b
Success rate (SR, %) on each test sequence:

Video Sequence | LBP&PPM [21] | RGB&PPM [21] | LBPT [28] | DSST [13] | KCF [29] | SAFS
---|---|---|---|---|---|---
CarScale | 89 | 100 | 100 | 85 | 88 | 100
David | 71 | 99 | 96 | 100 | 99 | 99
Liquor | 68 | 75 | 28 | 39 | 75 | 98
Panda | 27 | 16 | 58 | 57 | 18 | 100
Skating2 | 71 | 95 | 90 | 83 | 92 | 91
Crossing | 11 | 100 | 80 | 97 | 12 | 96
FaceOcc1 | 100 | 100 | 96 | 91 | 100 | 100
Football1 | 24 | 76 | 24 | 90 | 61 | 95
Average SR | 57.6 | 82.6 | 71.5 | 80.3 | 68.1 | 97.3