The Extraction of Foreground Regions of the Moving Objects Based on Spatio-Temporal Information under a Static Camera
Abstract
1. Introduction
- (1) Use the ViBe method to extract the foreground region of the current frame. This result may contain a "ghosting" region as well as false detections caused by swaying tree branches and leaves.
- (2) Use the frame difference method to obtain the foreground region between adjacent frames, then apply morphological processing to eliminate "holes", thereby obtaining a relatively complete object region.
- (3) Perform a pixel-wise AND operation between the ViBe result and the processed frame-difference result, followed by morphological processing. This eliminates the "ghosting" region and the false detections present in the ViBe result, yielding the final foreground extraction result.
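The three steps above can be sketched as follows. This is a minimal, pure-Python illustration with hypothetical helper names (`frame_difference`, `close_holes`, `fuse`); a real implementation would operate on full-size images via a library such as OpenCV, and the ViBe mask is assumed to come from a separate ViBe implementation. Masks are small binary grids where 1 = foreground.

```python
def frame_difference(prev, curr, thresh=30):
    """Step 2a: threshold the absolute difference of adjacent frames."""
    return [[1 if abs(curr[y][x] - prev[y][x]) > thresh else 0
             for x in range(len(curr[0]))] for y in range(len(curr))]

def dilate(mask):
    """3x3 binary dilation (max over each pixel's neighborhood)."""
    h, w = len(mask), len(mask[0])
    return [[max(mask[ny][nx]
                 for ny in range(max(0, y - 1), min(h, y + 2))
                 for nx in range(max(0, x - 1), min(w, x + 2)))
             for x in range(w)] for y in range(h)]

def erode(mask):
    """3x3 binary erosion (min over each pixel's neighborhood)."""
    h, w = len(mask), len(mask[0])
    return [[min(mask[ny][nx]
                 for ny in range(max(0, y - 1), min(h, y + 2))
                 for nx in range(max(0, x - 1), min(w, x + 2)))
             for x in range(w)] for y in range(h)]

def close_holes(mask):
    """Step 2b: morphological closing (dilate then erode) fills small holes."""
    return erode(dilate(mask))

def fuse(vibe_mask, diff_mask):
    """Step 3: pixel-wise AND keeps only pixels flagged by both detectors,
    suppressing ghosts and swaying-leaf false positives in the ViBe mask."""
    return [[vibe_mask[y][x] & diff_mask[y][x]
             for x in range(len(vibe_mask[0]))] for y in range(len(vibe_mask))]
```

The AND fusion works because a ghost region is static in the current frames, so the frame-difference mask is zero there, while a genuinely moving object is flagged by both detectors.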
2. Related Work
2.1. Frame Difference Method
2.2. Background Modeling Method
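For context, the core of the sample-based ViBe background model [25] can be sketched for a single grayscale pixel as follows. This is an illustrative simplification, not the authors' implementation; the parameter values follow the defaults reported in the ViBe paper (N = 20 samples, matching radius R = 20, #min = 2 matches, update subsampling factor 16), and the spatial neighbor-diffusion step is omitted.

```python
import random

N = 20          # background samples kept per pixel
RADIUS = 20     # intensity distance below which a sample matches
MIN_MATCHES = 2 # samples that must match for a background decision
SUBSAMPLE = 16  # 1-in-16 chance of updating the model on a background hit

def init_model(first_value):
    """ViBe bootstraps from the first frame; here we jitter one value."""
    return [first_value + random.randint(-10, 10) for _ in range(N)]

def classify_and_update(model, value):
    """Return True if `value` is background at this pixel (updates in place)."""
    matches = sum(1 for s in model if abs(s - value) < RADIUS)
    is_background = matches >= MIN_MATCHES
    if is_background and random.randrange(SUBSAMPLE) == 0:
        # Random replacement (not FIFO) gives samples an exponential lifespan.
        model[random.randrange(N)] = value
    return is_background
```

Because the model is seeded entirely from the first frame, an object present at initialization leaves stale samples behind when it moves, which is the origin of the "ghosting" region the proposed method removes.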
3. The Proposed Method
4. Experimental Evaluation
4.1. Evaluation Metrics
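The result tables below report PR (precision), RE (recall), FPR (false positive rate), and F-Measure. Assuming the standard pixel-level definitions, they can be computed from a predicted foreground mask and a ground-truth mask as in this sketch, where `pred` and `truth` are flattened binary masks:

```python
def confusion(pred, truth):
    """Pixel-wise true/false positives and negatives."""
    tp = sum(p and t for p, t in zip(pred, truth))
    fp = sum(p and not t for p, t in zip(pred, truth))
    fn = sum(t and not p for p, t in zip(pred, truth))
    tn = sum(not p and not t for p, t in zip(pred, truth))
    return tp, fp, fn, tn

def metrics(pred, truth):
    tp, fp, fn, tn = confusion(pred, truth)
    pr = tp / (tp + fp)            # precision
    re = tp / (tp + fn)            # recall
    fpr = fp / (fp + tn)           # false positive rate
    f = 2 * pr * re / (pr + re)    # F-measure (harmonic mean of PR and RE)
    return pr, re, fpr, f
```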
4.2. DML Dataset
4.3. CDnet 2014 Dataset
4.4. The Collected Data
4.5. Algorithm Runtime Statistics
5. Conclusions and Future Work
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
1. Li, D.W.; Xu, L.H.; Goodman, E.D. Illumination-robust foreground detection in a video surveillance system. IEEE Trans. Circuits Syst. Video Technol. 2013, 23, 1637–1650.
2. Hu, Y.; Sirlantzis, K.; Howells, G.; Ragot, N.; Rodríguez, P. An online background subtraction algorithm deployed on a NAO humanoid robot based monitoring system. Robot. Auton. Syst. 2016, 85, 37–47.
3. Kalsotra, R.; Arora, S. Background subtraction for moving object detection: Explorations of recent developments and challenges. Vis. Comput. 2022, 38, 4151–4178.
4. Sun, Z.; Hua, Z.; Li, H. Small Moving Object Detection Algorithm Based on Motion Information. arXiv 2023, arXiv:2301.01917.
5. Li, X.; Nabati, R.; Singh, K.; Corona, E.; Metsis, V.; Parchami, A. EMOD: Efficient Moving Object Detection via Image Eccentricity Analysis and Sparse Neural Networks. In Proceedings of the 2023 IEEE/CVF Winter Conference on Applications of Computer Vision Workshops (WACVW), Waikoloa, HI, USA, 3–7 January 2023; pp. 51–59.
6. Liu, H.; Yu, Y.; Liu, S.; Wang, W. A Military Object Detection Model of UAV Reconnaissance Image and Feature Visualization. Appl. Sci. 2022, 12, 12236.
7. Yin, Q.; Hu, Q.; Liu, H.; Zhang, F.; Wang, Y.; Lin, Z.; An, W.; Guo, Y. Detecting and Tracking Small and Dense Moving Objects in Satellite Videos: A Benchmark. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–18.
8. Sultana, M.; Mahmood, A.; Jung, S.K. Unsupervised moving object detection in complex scenes using adversarial regularizations. IEEE Trans. Multimed. 2021, 23, 2005–2018.
9. Hu, Y.; Sirlantzis, K.; Howells, G.; Ragot, N.; Rodriguez, P. An online background subtraction algorithm using a contiguously weighted linear regression model. In Proceedings of the European Signal Processing Conference (EUSIPCO), Nice, France, 31 August–4 September 2015; pp. 1845–1849.
10. Tamulionis, M.; Sledevič, T.; Abromavičius, V.; Kurpytė-Lipnickė, D.; Navakauskas, D.; Serackis, A.; Matuzevičius, D. Finding the Least Motion-Blurred Image by Reusing Early Features of Object Detection Network. Appl. Sci. 2023, 13, 1264.
11. Li, J.; Liu, P.; Huang, X.; Cui, W.; Zhang, T. Learning Motion Constraint-Based Spatio-Temporal Networks for Infrared Dim Target Detections. Appl. Sci. 2022, 12, 11519.
12. Antonio Velázquez, J.A.; Romero Huertas, M.; Alejo Eleuterio, R.; Gutiérrez, E.E.G.; López, F.D.R.; Lara, E.R. Pedestrian Localization in a Video Sequence Using Motion Detection and Active Shape Models. Appl. Sci. 2022, 12, 5371.
13. Chapel, M.N.; Bouwmans, T. Moving objects detection with a moving camera: A comprehensive review. Comput. Sci. Rev. 2020, 38, 100310.
14. Lipton, A.J.; Fujiyoshi, H.; Patil, R.S. Moving target classification and tracking from real-time video. In Proceedings of the Fourth IEEE Workshop on Applications of Computer Vision (WACV'98), Princeton, NJ, USA, 19–21 October 1998; pp. 8–14.
15. Singla, N.S. Motion detection based on frame difference method. Int. J. Inf. Comput. Technol. 2014, 4, 1559–1565.
16. Liu, H.Y.; Meng, W.T.; Liu, Z. Key frame extraction of online video based on optimized frame difference. In Proceedings of the 2012 9th International Conference on Fuzzy Systems and Knowledge Discovery, Chongqing, China, 29–31 May 2012; pp. 1238–1242.
17. Han, X.W.; Gao, Y.; Zheng, L.; Niu, D. Research on moving object detection algorithm based on improved three frame difference method and optical flow. In Proceedings of the 2015 Fifth International Conference on Instrumentation and Measurement, Computer, Communication and Control (IMCCC), Qinhuangdao, China, 18–20 September 2015; pp. 580–584.
18. Lei, M.Y.; Geng, J.P. Fusion of Three-frame Difference Method and Background Difference Method to Achieve Infrared Human Target Detection. In Proceedings of the 2019 IEEE 1st International Conference on Civil Aviation Safety and Information Technology (ICCASIT), Kunming, China, 17–19 October 2019; pp. 381–384.
19. Zang, X.H.; Li, G.; Yang, J.; Wang, W. Adaptive difference modelling for background subtraction. In Proceedings of the 2017 IEEE Visual Communications and Image Processing (VCIP), St. Petersburg, FL, USA, 10–13 December 2017; pp. 1–4.
20. Zhang, Y.; Liu, Q.L. Moving target detection method based on adaptive threshold. Comput. Eng. Appl. 2014, 50, 166–168.
21. Zhang, F.; Zhu, J. Research and Application of Moving Target Detection. In Proceedings of the International Conference on Robots & Intelligent System, Vancouver, BC, Canada, 24–28 September 2017; pp. 239–241.
22. Sobral, A.; Vacavant, A. A comprehensive review of background subtraction algorithms evaluated with synthetic and real videos. Comput. Vis. Image Underst. 2014, 122, 4–21.
23. Bouwmans, T. Traditional and recent approaches in background modeling for foreground detection: An overview. Comput. Sci. Rev. 2014, 11, 31–66.
24. Elgammal, A. Wide Area Surveillance; Springer: Berlin/Heidelberg, Germany, 2014.
25. Barnich, O.; Droogenbroeck, M.V. ViBe: A universal background subtraction algorithm for video sequences. IEEE Trans. Image Process. 2010, 20, 1709–1724.
26. Jodoin, P.M.; Mignotte, M.; Konrad, J. Statistical background subtraction using spatial cues. IEEE Trans. Circuits Syst. Video Technol. 2007, 17, 1758–1763.
27. Shoushtarian, B.; Bez, H.E. A practical adaptive approach for dynamic background subtraction using an invariant colour model and object tracking. Pattern Recognit. Lett. 2005, 26, 5–26.
28. Bouwmans, T.; Javed, S.; Sultana, M.; Jung, S.K. Deep neural network concepts for background subtraction: A systematic review and comparative evaluation. Neural Netw. 2019, 117, 8–66.
29. Kalsotra, R.; Arora, S. A comprehensive survey of video datasets for background subtraction. IEEE Access 2019, 7, 59143–59171.
30. Zheng, W.; Wang, K.; Wang, F.Y. A novel background subtraction algorithm based on parallel vision and Bayesian GANs. Neurocomputing 2020, 394, 178–200.
31. Ru, C.; Wen, W.; Zhong, Y. Raman spectroscopy for on-line monitoring of botanical extraction process using convolutional neural network with background subtraction. Spectrochim. Acta Part A Mol. Biomol. Spectrosc. 2023, 284, 121494.
32. Zhao, C.; Hu, K.; Basu, A. Universal background subtraction based on arithmetic distribution neural network. IEEE Trans. Image Process. 2022, 31, 2934–2949.
33. Elgammal, A.; Harwood, D.; Davis, L. Non-parametric model for background subtraction. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2000; pp. 751–767.
34. Hofmann, M.; Tiefenbacher, P.; Rigoll, G. Background segmentation with feedback: The Pixel-Based Adaptive Segmenter. In Proceedings of the 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Providence, RI, USA, 16–21 June 2012; pp. 38–43.
35. Rodriguez, P.; Wohlberg, B. Incremental principal component pursuit for video background modeling. J. Math. Imaging Vis. 2016, 55, 1–18.
36. Gonzalez, R.C.; Wintz, P. Digital Image Processing; Addison-Wesley Longman Publishing Co., Inc.: Boston, MA, USA, 1987.
37. Wang, X.Y.; Hu, H.M.; Zhang, Y.G. Pedestrian Detection Based on Spatial Attention Module for Outdoor Video Surveillance. In Proceedings of the 2019 IEEE Fifth International Conference on Multimedia Big Data (BigMM), Singapore, 11–13 September 2019; pp. 247–251.
38. Goyette, N.; Jodoin, P.M.; Porikli, F.; Konrad, J.; Ishwar, P. Changedetection.net: A new change detection benchmark dataset. In Proceedings of the 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Providence, RI, USA, 16–21 June 2012; pp. 1–8.
39. Wang, Y.; Jodoin, P.M.; Porikli, F.; Konrad, J.; Benezeth, Y.; Ishwar, P. CDnet 2014: An expanded change detection benchmark dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Columbus, OH, USA, 23–28 June 2014; pp. 387–394.
40. Wang, Y.; Luo, Z.M.; Jodoin, P.M. Interactive deep learning method for segmenting moving objects. Pattern Recognit. Lett. 2017, 96, 66–75.
41. Stauffer, C.; Grimson, W.E.L. Adaptive background mixture models for real-time tracking. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Fort Collins, CO, USA, 23–25 June 1999; pp. 246–252.
| Method | PR | RE | FPR | F-Measure |
|---|---|---|---|---|
| ViBe method [25] | 0.568 | 0.927 | 0.194 | 0.705 |
| Frame difference method [14] | 0.602 | 0.794 | 0.173 | 0.685 |
| GMM method [41] | 0.654 | 0.693 | 0.158 | 0.672 |
| PBAS method [34] | 0.706 | 0.931 | 0.137 | 0.803 |
| incPCP method [35] | 0.539 | 0.935 | 0.241 | 0.684 |
| CSTI method | 0.781 | 0.946 | 0.117 | 0.856 |
| Method | PR | RE | FPR | F-Measure |
|---|---|---|---|---|
| ViBe method [25] | 0.654 | 0.923 | 0.183 | 0.767 |
| Frame difference method [14] | 0.693 | 0.782 | 0.159 | 0.714 |
| GMM method [41] | 0.739 | 0.753 | 0.142 | 0.730 |
| PBAS method [34] | 0.751 | 0.912 | 0.154 | 0.824 |
| incPCP method [35] | 0.679 | 0.931 | 0.217 | 0.785 |
| CSTI method | 0.804 | 0.958 | 0.108 | 0.874 |
| Method | PR | RE | FPR | F-Measure |
|---|---|---|---|---|
| ViBe method [25] | 0.584 | 0.916 | 0.237 | 0.713 |
| Frame difference method [14] | 0.633 | 0.845 | 0.195 | 0.724 |
| GMM method [41] | 0.541 | 0.904 | 0.261 | 0.670 |
| PBAS method [34] | 0.722 | 0.931 | 0.184 | 0.813 |
| incPCP method [35] | 0.559 | 0.927 | 0.251 | 0.697 |
| CSTI method | 0.769 | 0.952 | 0.124 | 0.851 |
| Method | PR | RE | FPR | F-Measure |
|---|---|---|---|---|
| ViBe method [25] | 0.755 | 0.921 | 0.161 | 0.830 |
| Frame difference method [14] | 0.524 | 0.878 | 0.241 | 0.656 |
| GMM method [41] | 0.709 | 0.864 | 0.193 | 0.779 |
| PBAS method [34] | 0.772 | 0.934 | 0.143 | 0.845 |
| incPCP method [35] | 0.714 | 0.939 | 0.182 | 0.811 |
| CSTI method | 0.847 | 0.953 | 0.075 | 0.897 |
| Method | ViBe | Frame Difference | GMM | PBAS | incPCP | CSTI |
|---|---|---|---|---|---|---|
| Algorithm running time | 54 ms | 1 ms | 18 ms | 863 ms | 2.1 s | 62 ms |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Zhang, Y.; Yu, L.; Li, S.; Wang, G.; Jiang, X.; Li, W. The Extraction of Foreground Regions of the Moving Objects Based on Spatio-Temporal Information under a Static Camera. Electronics 2023, 12, 3346. https://doi.org/10.3390/electronics12153346