An Effective Motion-Tracking Scheme for Machine-Learning Applications in Noisy Videos
Abstract
1. Introduction
2. Related Work
3. Proposed Scheme
3.1. Modified Simplest Color Balance (SCB) Algorithm
Algorithm 1: Modified simplest color balance algorithm

```
percent = 10                      // saturate the bottom 10% of the histogram
half_percent = percent / 200.0    // halve the scale for subdivision, 1/100 -> 1/200
Red, Green, Blue = split and save the RGB channels of the image
// The steps below are applied once to each of the Red, Green, and Blue channels.
flat    = reshape the channel matrix into a vector and sort it
n_cols  = length of flat
low_val = flat[int(n_cols * half_percent)]
// Boolean mask comparing each pixel of the channel against low_val.
low_mask    = matrix with True where channel > low_val, else False
thresholded = channel matrix with every value below low_val replaced by low_val
pixel_avg   = average of the thresholded histogram
normalized  = thresholded channel normalized to the range [0, b_cut]
out_channel = recombine the Red, Green, and Blue channels and return the grayscale image
```
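The steps above can be sketched in NumPy as follows. This is a minimal illustration of the percentile-clipping idea, not the authors' implementation: the function name `modified_scb`, the default `b_cut = 255`, and the final grayscale conversion by channel averaging are assumptions.

```python
import numpy as np

def modified_scb(image, percent=10, b_cut=255):
    """Percentile-based color balance sketch: clip the darkest pixels of
    each RGB channel, rescale to [0, b_cut], and return a grayscale image."""
    half_percent = percent / 200.0  # 10% -> 0.05, i.e. 1/100 -> 1/200
    out_channels = []
    for ch in range(3):  # process Red, Green, and Blue independently
        channel = image[..., ch].astype(np.float64)
        flat = np.sort(channel.ravel())
        low_val = flat[int(flat.size * half_percent)]
        # Replace every value below low_val with low_val.
        thresholded = np.maximum(channel, low_val)
        # Normalize the clipped channel to the range [0, b_cut].
        rng = thresholded.max() - thresholded.min()
        normalized = (thresholded - thresholded.min()) / (rng + 1e-12) * b_cut
        out_channels.append(normalized)
    # Recombine the three channels and average them into a grayscale image.
    balanced = np.stack(out_channels, axis=-1)
    return balanced.mean(axis=-1).astype(np.uint8)
```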
3.2. Image Binarization
3.3. Subsection
3.4. Object Tracking
4. Evaluation
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
| Threshold (Fraction of P_max) | Detected Objects |
|---|---|
| 1/4 | 19 |
| 1/2 | 32 |
| 3/4 | 55 |
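The table suggests binarizing the grayscale image at a fraction of the maximum pixel value P_max and counting the resulting regions. A rough illustration follows; the 4-connected flood-fill labeling is an assumption for the object count, not the paper's method.

```python
import numpy as np

def binarize(gray, frac):
    """Threshold a grayscale image at frac * P_max (the maximum pixel value)."""
    return (gray >= frac * gray.max()).astype(np.uint8)

def count_objects(binary):
    """Count 4-connected foreground components with an iterative flood fill."""
    h, w = binary.shape
    seen = np.zeros((h, w), dtype=bool)
    count = 0
    for i in range(h):
        for j in range(w):
            if binary[i, j] and not seen[i, j]:
                count += 1          # a new, unvisited component was found
                stack = [(i, j)]
                seen[i, j] = True
                while stack:
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            stack.append((ny, nx))
    return count
```

A higher fraction of P_max keeps only the brightest pixels, which (as in the table) changes how many distinct objects survive the binarization.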
| Information | Type | Detail |
|---|---|---|
| Tracker number | Integer | |
| Last center point | Tuple (x, y) | |
| Last area | Integer | |
| Set color | Tuple (R, G, B) | R = 0~255, G = 0~255, B = 0~255 |
| Distance traveled | Float | |
| State | Integer | 0 = stop, 1 = move, 2 = hold |
| Labels | Class list | Each label holds: Label number (Integer), Center point (Tuple (x, y)), Area (Integer), Matching frame (Integer) |
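The tracker record in the table maps naturally onto a pair of data classes. This is a minimal sketch of the listed fields; the class and attribute names are illustrative, not taken from the authors' code.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Label:
    """One detection associated with a tracker."""
    label_number: int
    center_point: Tuple[int, int]   # (x, y)
    area: int
    matching_frame: int

@dataclass
class Tracker:
    """Per-object tracking state as described in the table."""
    tracker_number: int
    last_center_point: Tuple[int, int]       # (x, y)
    last_area: int
    set_color: Tuple[int, int, int]          # (R, G, B), each 0~255
    distance_traveled: float = 0.0
    state: int = 0                           # 0 = stop, 1 = move, 2 = hold
    labels: List[Label] = field(default_factory=list)
```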
Video Number | Precision | Recall | Loss |
---|---|---|---|
1 | 0.95 | 0.96 | 0.08 |
2 | 0.96 | 0.98 | 0.09 |
3 | 0.95 | 0.95 | 0.06 |
4 | 0.98 | 0.97 | 0.06 |
5 | 0.97 | 0.96 | 0.01 |
6 | 0.99 | 0.95 | 0.01 |
7 | 0.97 | 0.98 | 0.08 |
8 | 0.96 | 0.97 | 0.08 |
9 | 0.95 | 0.92 | 0.05 |
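The per-video precision and recall above are presumably the standard detection metrics over true positives, false positives, and false negatives; a minimal sketch (the counts in the usage note are illustrative only):

```python
def precision_recall(tp: int, fp: int, fn: int):
    """Standard detection metrics from TP/FP/FN counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0  # correctness of detections
    recall = tp / (tp + fn) if (tp + fn) else 0.0     # coverage of ground truth
    return precision, recall
```

For example, 95 true positives with 5 false positives yields the table's precision of 0.95.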
Video Number | Frame 1 | Frame 2 | Frame 3 | Frame 4 | Frame 5 | Frame 6 | Frame 7 |
---|---|---|---|---|---|---|---|
Video 1 | 115/0.21 | 113/0.03 | 133/0.02 | 141/0.02 | 126/0.02 | 129/0.02 | 133/0.19 |
Video 2 | 183/0.23 | 178/0.04 | 173/0.03 | 185/0.03 | 187/0.01 | 189/0.03 | 194/0.03 |
Video 3 | 179/0.18 | 183/0.04 | 185/0.03 | 184/0.03 | 172/0.03 | 191/0.03 | 190/0.03 |
Video 4 | 112/0.22 | 127/0.03 | 125/0.02 | 134/0.02 | 129/0.02 | 135/0.02 | 129/0.02 |
Video 5 | 28/0.16 | 27/0.03 | 25/0.02 | 27/0.02 | 25/0.02 | 28/0.02 | 23/0.02 |
Video 6 | 47/0.14 | 50/0.02 | 46/0.16 | 48/0.02 | 44/0.02 | 45/0.02 | 44/0.02 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Kim, H.; Lee, H.-W.; Lee, J.; Bae, O.; Hong, C.-P. An Effective Motion-Tracking Scheme for Machine-Learning Applications in Noisy Videos. Appl. Sci. 2023, 13, 3338. https://doi.org/10.3390/app13053338