Spectral-Spatial Feature Enhancement Algorithm for Nighttime Object Detection and Tracking
Abstract
1. Introduction
- A novel algorithmic framework is proposed for nighttime object detection and tracking. A feature-enhancing preprocessing operation is applied to nighttime images so that object features become more prominent, improving detection accuracy.
- The concept of domain adaptation from transfer learning is introduced to build a day-night discriminator, which aligns daytime and nighttime target features and narrows the domain gap between them.
- Low-light enhancement and Gabor filtering are applied to the dataset so that both spectral and spatial features are fully exploited, improving tracking performance (a minimal preprocessing sketch follows this list).
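To make the first preprocessing stage concrete, here is a minimal sketch of the low-light enhancement step. It uses plain gamma correction as a stand-in for the learned zero-reference curve enhancer [57] cited by the paper; the file name and gamma value are illustrative assumptions, not the authors' settings.

```python
import cv2
import numpy as np

def enhance_low_light(img_bgr, gamma=0.4):
    # Gamma-curve brightening: a simple stand-in for the learned
    # zero-reference curve enhancer [57]; gamma = 0.4 is illustrative.
    img = img_bgr.astype(np.float32) / 255.0
    return np.uint8(np.clip(img ** gamma, 0.0, 1.0) * 255.0)

dark = cv2.imread("night_frame.png")   # hypothetical nighttime frame
bright = enhance_low_light(dark)
cv2.imwrite("night_frame_enhanced.png", bright)
```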
2. Related Work
2.1. Target Detection
2.2. Domain Adaptation
3. Proposed Method
Algorithm 1: Spectral-spatial feature enhancement algorithm for nighttime object detection and tracking
Input: Target dataset (datasets for the target domain, the source domain, and the test)
1. The target domain data are preprocessed, including low-light enhancement, object detection, and dynamic programming with (1)–(3).
4. The Siamese network is trained to obtain the tracker head, and the loss function is computed.
5. The test data are preprocessed, including low-light enhancement and Gabor filtering with (1)–(3) and (11)–(12).
6. The feature-enhanced test data are detected and tracked by SFDT with (13).
Output: Object detection maps and location data
3.1. Preprocessing
3.1.1. Low Light Enhancement
3.1.2. Video Object Detection
3.1.3. Dynamic Programming
3.2. Dat-Net
3.2.1. Feature Extractor
3.2.2. Transformer Adaptive Structure
3.2.3. Tracker Head
3.2.4. Feature Discrimination Structure
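The paper's exact feature discrimination structure is not reproduced here, but a common way to realize a day-night discriminator in unsupervised domain adaptation is the gradient reversal layer of Ganin and Lempitsky [63]: the discriminator learns to tell daytime (source) features from nighttime (target) features, while reversed gradients push the feature extractor toward domain-invariant representations. A minimal PyTorch sketch, with the layer sizes as assumptions:

```python
import torch
from torch import nn
from torch.autograd import Function

class GradReverse(Function):
    # Identity in the forward pass; negated, scaled gradient in the
    # backward pass, so the feature extractor is trained adversarially.
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class DayNightDiscriminator(nn.Module):
    # Small MLP that predicts whether a feature vector comes from a
    # daytime (source) or nighttime (target) frame; sizes are illustrative.
    def __init__(self, in_dim=256, lam=1.0):
        super().__init__()
        self.lam = lam
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128),
            nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, feat):
        return self.net(GradReverse.apply(feat, self.lam))
```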
3.2.5. Loss Function
3.2.6. Gabor Filter
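A minimal sketch of a Gabor texture-enhancement step, assuming OpenCV's getGaborKernel; the kernel parameters are illustrative defaults, not the settings of the paper's Equations (11) and (12). Taking the maximum response over several orientations keeps edges of any direction visible, which matches the later observation that Gabor filtering makes image details more obvious.

```python
import cv2
import numpy as np

def gabor_texture_map(gray, ksize=31, sigma=4.0, lambd=10.0,
                      gamma=0.5, n_orient=8):
    # Maximum response over a bank of Gabor kernels at n_orient
    # orientations; highlights oriented texture such as vehicle edges.
    gray = gray.astype(np.float32)
    response = np.zeros_like(gray)
    for theta in np.linspace(0.0, np.pi, n_orient, endpoint=False):
        kern = cv2.getGaborKernel((ksize, ksize), sigma, theta,
                                  lambd, gamma, psi=0)
        response = np.maximum(response, cv2.filter2D(gray, cv2.CV_32F, kern))
    return cv2.normalize(response, None, 0, 255,
                         cv2.NORM_MINMAX).astype(np.uint8)

img = cv2.imread("night_frame_enhanced.png", cv2.IMREAD_GRAYSCALE)
cv2.imwrite("gabor_texture.png", gabor_texture_map(img))
```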
4. Experimental Results
4.1. Datasets
4.2. Evaluation Metrics
- Intersection over Union (IoU): the overlap ratio between the predicted and ground-truth bounding boxes.
- Success rate (SR): the fraction of frames whose IoU with the ground truth exceeds a given overlap threshold.
- Precision: the fraction of frames whose predicted target center lies within a given pixel distance of the ground-truth center (a computational sketch of all three metrics follows this list).
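For concreteness, the sketch below computes all three metrics under the usual tracking-benchmark definitions; the IoU threshold of 0.5 and the 20-pixel center-distance threshold are conventional defaults, not necessarily the thresholds used in the paper.

```python
import numpy as np

def iou(box_a, box_b):
    # Intersection over union of two boxes given as (x, y, w, h).
    xa, ya = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    xb = min(box_a[0] + box_a[2], box_b[0] + box_b[2])
    yb = min(box_a[1] + box_a[3], box_b[1] + box_b[3])
    inter = max(0.0, xb - xa) * max(0.0, yb - ya)
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union > 0 else 0.0

def success_rate(preds, gts, overlap_thresh=0.5):
    # Fraction of frames whose IoU exceeds the overlap threshold.
    return float(np.mean([iou(p, g) > overlap_thresh
                          for p, g in zip(preds, gts)]))

def precision(preds, gts, dist_thresh=20.0):
    # Fraction of frames whose predicted center is within dist_thresh
    # pixels of the ground-truth center.
    hits = []
    for p, g in zip(preds, gts):
        dx = (p[0] + p[2] / 2) - (g[0] + g[2] / 2)
        dy = (p[1] + p[3] / 2) - (g[1] + g[3] / 2)
        hits.append(np.hypot(dx, dy) <= dist_thresh)
    return float(np.mean(hits))
```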
4.3. Overall Performance
4.3.1. Comparison Algorithms
4.3.2. Parameter Settings
4.3.3. Experimental Results
5. Discussion
- The results in Table 1 show that SFDT has the best performance, demonstrating that the algorithm achieves competitive long-term tracking and significantly improves the tracking performance of the tracker. Figure 2 shows the detection and tracking results of the algorithm proposed in this paper; the proposed method tracks the target effectively.
- In particular, compared with the UDAT algorithm, which does not preprocess the test set, SFDT achieved better results on both precision and SR: precision improved by 2% and SR by 3%. This indicates that data enhanced with lighting and texture features are better suited to tracking, allowing the network to locate the target more accurately.
- Paper [60] points out that a low-light-enhanced test set is not conducive to object tracking. As shown in Figure 1, the low-light-enhanced images are too bright and image details are lost; we speculate that this overexposure degrades tracking performance. Adding texture feature enhancement makes image details more apparent. Moreover, Gabor filtering lowers the image brightness, compensating for the overexposure introduced in the previous step, while the result remains brighter than the original image, which benefits detection and tracking.
- As shown in Figure 2, two groups of consecutive frames are detected. In the first group, a single vehicle is the target and is detected successfully; in the second group, multiple vehicles in the background act as interference, yet the vehicle that appears continuously is still tracked successfully by the proposed algorithm. This illustrates the tracking performance of the proposed method.
- Although the proposed algorithm outperforms the comparison algorithms in tracking and monitoring, its computational complexity increases because the preprocessing step is extended with Gabor-filter-based texture feature extraction. The processing time on the same dataset is therefore longer than that of the algorithm in [60], and real-time detection and tracking performance has not yet been evaluated; we will investigate this in future work. A minimal timing sketch follows.
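One quick way to quantify the extra preprocessing cost discussed above is to time the Gabor stage against the brightening stage alone; the frame size and kernel parameters below are assumptions chosen only for illustration.

```python
import time
import cv2
import numpy as np

frame = np.random.randint(0, 50, (720, 1280, 3), dtype=np.uint8)  # synthetic dark frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
kern = cv2.getGaborKernel((31, 31), 4.0, 0.0, 10.0, 0.5, psi=0)

def ms_per_call(fn, n=50):
    # Average wall-clock time of fn over n repetitions, in milliseconds.
    start = time.perf_counter()
    for _ in range(n):
        fn()
    return (time.perf_counter() - start) / n * 1000.0

print("gamma brightening : %.2f ms" %
      ms_per_call(lambda: np.uint8(((frame / 255.0) ** 0.4) * 255)))
print("one Gabor kernel  : %.2f ms" %
      ms_per_call(lambda: cv2.filter2D(gray, cv2.CV_32F, kern)))
```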
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
1. Peng, F.; Xu, Q.; Li, Y.; Zheng, M.; Su, H. Improved Kernel Correlation Filter Based Moving Target Tracking for Robot Grasping. IEEE Trans. Instrum. Meas. 2022, 71, 1–12.
2. Liu, C.; Ibrayim, M.; Hamdulla, A. Multi-Feature Single Target Robust Tracking Fused with Particle Filter. Sensors 2022, 22, 1879.
3. Uzair, M.; Brinkworth, R.S.; Finn, A. Bio-inspired video enhancement for small moving target detection. IEEE Trans. Image Process. 2020, 30, 1232–1244.
4. Abro, G.E.M.; Zulkifli, S.A.B.M.; Masood, R.J.; Asirvadam, V.S.; Laouti, A. Comprehensive Review of UAV Detection, Security, and Communication Advancements to Prevent Threats. Drones 2022, 6, 284.
5. Fan, H.; Bai, H.; Lin, L.; Yang, F.; Chu, P.; Deng, G.; Yu, S.; Huang, M.; Liu, J.; Xu, Y.; et al. LaSOT: A high-quality large-scale single object tracking benchmark. Int. J. Comput. Vis. 2021, 129, 439–461.
6. Huang, L.; Zhao, X.; Huang, K. GOT-10k: A large high-diversity benchmark for generic object tracking in the wild. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 43, 1562–1577.
7. Real, E.; Shlens, J.; Mazzocchi, S.; Pan, X.; Vanhoucke, V. YouTube-BoundingBoxes: A large high-precision human-annotated data set for object detection in video. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 5296–5305.
8. Mahfouz, S.; Mourad-Chehade, F.; Honeine, P.; Farah, J.; Snoussi, H. Target tracking using machine learning and Kalman filter in wireless sensor networks. IEEE Sens. J. 2014, 14, 3715–3725.
9. Zhu, S.; Chen, C.; Li, W.; Yang, B.; Guan, X. Distributed optimal consensus filter for target tracking in heterogeneous sensor networks. IEEE Trans. Cybern. 2013, 43, 1963–1976.
10. Zhan, R.; Wan, J. Iterated unscented Kalman filter for passive target tracking. IEEE Trans. Aerosp. Electron. Syst. 2007, 43, 1155–1163.
11. Hao, J.; Zhou, Y.; Zhang, G.; Lv, Q.; Wu, Q. A review of target tracking algorithm based on UAV. In Proceedings of the 2018 IEEE International Conference on Cyborg and Bionic Systems (CBS), Shenzhen, China, 25–27 October 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 328–333.
12. Guo, H.; Li, W.; Zhou, N.; Sun, H.; Han, Z. Research and Implementation of Robot Vision Scanning Tracking Algorithm Based on Deep Learning. Scanning 2022, 2022, 3330427.
13. Ding, Q.; Ding, Z. Machine learning model for feature recognition of sports competition based on improved TLD algorithm. J. Intell. Fuzzy Syst. 2021, 40, 2697–2708.
14. Hossain, S.; Lee, D.J. Deep learning-based real-time multiple-object detection and tracking from aerial imagery via a flying robot with GPU-based embedded devices. Sensors 2019, 19, 3371.
15. Leclerc, M.; Tharmarasa, R.; Florea, M.C.; Boury-Brisset, A.C.; Kirubarajan, T.; Duclos-Hindié, N. Ship classification using deep learning techniques for maritime target tracking. In Proceedings of the 2018 21st International Conference on Information Fusion (FUSION), Cambridge, UK, 10–13 July 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 737–744.
16. Yang, B.; Cao, X.; Yuen, C.; Qian, L. Offloading optimization in edge computing for deep-learning-enabled target tracking by internet of UAVs. IEEE Internet Things J. 2020, 8, 9878–9893.
17. Peng, Y.; Tang, Z.; Zhao, G.; Cao, G.; Wu, C. Motion Blur Removal for UAV-Based Wind Turbine Blade Images Using Synthetic Datasets. Remote Sens. 2021, 14, 87.
18. Cao, Z.; Fu, C.; Ye, J.; Li, B.; Li, Y. HiFT: Hierarchical feature transformer for aerial tracking. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 15457–15466.
19. Chen, Z.; Zhong, B.; Li, G.; Zhang, S.; Ji, R. Siamese box adaptive network for visual tracking. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 6668–6677.
20. Zhao, B.; Gong, X.; Wang, J.; Zhao, L. Low-Light Image Enhancement Based on Multi-Path Interaction. Sensors 2021, 21, 4986.
21. Feng, W.; Quan, Y.; Dauphin, G. Label noise cleaning with an adaptive ensemble method based on noise detection metric. Sensors 2020, 20, 6718.
22. Ye, J.; Fu, C.; Cao, Z.; An, S.; Zheng, G.; Li, B. Tracker Meets Night: A Transformer Enhancer for UAV Tracking. IEEE Robot. Autom. Lett. 2022, 7, 3866–3873.
23. Ye, J.; Fu, C.; Zheng, G.; Cao, Z.; Li, B. DarkLighter: Light up the darkness for UAV tracking. In Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 27 September–1 October 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 3079–3085.
24. Rakhmatulin, I.; Kamilaris, A.; Andreasen, C. Deep neural networks to detect weeds from crops in agricultural environments in real-time: A review. Remote Sens. 2021, 13, 4486.
25. Zhu, H.; Wei, H.; Li, B.; Yuan, X.; Kehtarnavaz, N. A Review of Video Object Detection: Datasets, Metrics and Methods. Appl. Sci. 2020, 10, 7834.
26. Yang, L.; Liu, S.; Zhao, Y. Deep-Learning Based Algorithm for Detecting Targets in Infrared Images. Appl. Sci. 2022, 12, 3322.
27. Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 2015, 115, 211–252.
28. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90.
29. Carreira, J.; Sminchisescu, C. CPMC: Automatic object segmentation using constrained parametric min-cuts. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 34, 1312–1328.
30. Van de Sande, K.E.; Uijlings, J.R.; Gevers, T.; Smeulders, A.W. Segmentation as selective search for object recognition. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 1879–1886.
31. Pont-Tuset, J.; Arbelaez, P.; Barron, J.T.; Marques, F.; Malik, J. Multiscale combinatorial grouping for image segmentation and object proposal generation. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 39, 128–140.
32. Wang Lin, L.; Liu, S.; Chen, Y.W. Method and Apparatus of Candidate Generation for Single Sample Mode in Video Coding. US Patent 10,021,418, 10 July 2018.
33. Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 580–587.
34. Feng, W.; Dauphin, G.; Huang, W.; Quan, Y.; Liao, W. New margin-based subsampling iterative technique in modified random forests for classification. Knowl.-Based Syst. 2019, 182, 104845.
35. He, K.; Zhang, X.; Ren, S.; Sun, J. Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 1904–1916.
36. Girshick, R. Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1440–1448.
37. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. Adv. Neural Inf. Process. Syst. 2015, 28, 91–99.
38. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788.
39. Feng, W.; Quan, Y.; Dauphin, G.; Li, Q.; Gao, L.; Huang, W.; Xia, J.; Zhu, W.; Xing, M. Semi-supervised rotation forest based on ensemble margin theory for the classification of hyperspectral image with limited training data. Inf. Sci. 2021, 575, 611–638.
40. Kong, L.; Wang, J.; Zhao, P. YOLO-G: A Lightweight Network Model for Improving the Performance of Military Targets Detection. IEEE Access 2022, 10, 55546–55564.
41. Dong, J.; Xia, S.; Zhao, Y.; Cao, Q.; Li, Y.; Liu, L. Indoor target tracking with deep learning-based YOLOv3 model. In Proceedings of the Fourteenth International Conference on Digital Image Processing (ICDIP 2022), Wuhan, China, 20–23 May 2022; SPIE: Bellingham, WA, USA, 2022; Volume 12342, pp. 992–998.
42. Jiang, S.; Xu, B.; Zhao, J.; Shen, F. Faster and simpler Siamese network for single object tracking. arXiv 2021, arXiv:2105.03049.
43. Tao, R.; Gavves, E.; Smeulders, A.W.M. Siamese Instance Search for Tracking. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 1420–1429.
44. Bertinetto, L.; Valmadre, J.; Henriques, J.F.; Vedaldi, A.; Torr, P.H. Fully-convolutional Siamese networks for object tracking. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; Springer: Berlin/Heidelberg, Germany, 2016; pp. 850–865.
45. Li, B.; Wu, W.; Wang, Q.; Zhang, F.; Xing, J.; Yan, J. SiamRPN++: Evolution of Siamese visual tracking with very deep networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 4282–4291.
46. Guo, D.; Wang, J.; Cui, Y.; Wang, Z.; Chen, S. SiamCAR: Siamese fully convolutional classification and regression for visual tracking. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 6269–6277.
47. Xu, Y.; Wang, Z.; Li, Z.; Yuan, Y.; Yu, G. SiamFC++: Towards robust and accurate visual tracking with target estimation guidelines. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–8 February 2020; Volume 34, pp. 12549–12556.
48. Chen, X.; Yan, B.; Zhu, J.; Wang, D.; Yang, X.; Lu, H. Transformer Tracking. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 8126–8135.
49. Wang, N.; Zhou, W.; Wang, J.; Li, H. Transformer meets tracker: Exploiting temporal context for robust visual tracking. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 1571–1580.
50. Liu, Y.; Zhang, S.; Li, Y.; Yang, J. Learning to Adapt via Latent Domains for Adaptive Semantic Segmentation. Adv. Neural Inf. Process. Syst. 2021, 34, 1167–1178.
51. Rakshit, S.; Bandyopadhyay, H.; Bharambe, P.; Desetti, S.N.; Banerjee, B.; Chaudhuri, S. Open-Set Domain Adaptation Under Few Source-Domain Labeled Samples. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 4029–4038.
52. Chen, Y.; Li, W.; Sakaridis, C.; Dai, D.; Van Gool, L. Domain adaptive Faster R-CNN for object detection in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 3339–3348.
53. Yu, Q.; Fan, K.; Wang, Y.; Zheng, Y. Faster MDNet for Visual Object Tracking. Appl. Sci. 2022, 12, 2336.
54. Moon, J.; Das, D.; Lee, C.G. A Multistage Framework With Mean Subspace Computation and Recursive Feedback for Online Unsupervised Domain Adaptation. IEEE Trans. Image Process. 2022, 31, 4622–4636.
55. Acharya, D.; Tennakoon, R.; Muthu, S.; Khoshelham, K.; Hoseinnezhad, R.; Bab-Hadiashar, A. Single-image localisation using 3D models: Combining hierarchical edge maps and semantic segmentation for domain adaptation. Autom. Constr. 2022, 136, 104152.
56. He, L.; Liu, C.; Li, J.; Li, Y.; Li, S.; Yu, Z. Hyperspectral image spectral–spatial-range Gabor filtering. IEEE Trans. Geosci. Remote Sens. 2020, 58, 4818–4836.
57. Li, C.; Guo, C.; Loy, C.C. Learning to enhance low-light image via zero-reference deep curve estimation. arXiv 2021, arXiv:2103.00860.
58. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
59. Zheng, J.; Ma, C.; Peng, H.; Yang, X. Learning to Track Objects from Unlabeled Videos. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 11–17 October 2021; pp. 13526–13535.
60. Ye, J.; Fu, C.; Zheng, G.; Paudel, D.P.; Chen, G. Unsupervised domain adaptation for nighttime aerial tracking. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 8896–8905.
61. Han, K.; Wang, Y.; Chen, H.; Chen, X.; Guo, J.; Liu, Z.; Tang, Y.; Xiao, A.; Xu, C.; Xu, Y.; et al. A survey on vision transformer. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 87–110.
62. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. Adv. Neural Inf. Process. Syst. 2017, 30, 5998–6008.
63. Ganin, Y.; Lempitsky, V. Unsupervised domain adaptation by backpropagation. In Proceedings of the International Conference on Machine Learning, Lille, France, 7–9 July 2015; PMLR, 2015; pp. 1180–1189.
64. Mao, X.; Li, Q.; Xie, H.; Lau, R.Y.; Wang, Z.; Paul Smolley, S. Least squares generative adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2794–2802.
65. Grigorescu, S.E.; Petkov, N.; Kruizinga, P. Comparison of texture features based on Gabor filters. IEEE Trans. Image Process. 2002, 11, 1160–1167.
66. Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 248–255.
67. Rezatofighi, H.; Tsoi, N.; Gwak, J.; Sadeghian, A.; Reid, I.; Savarese, S. Generalized Intersection over Union. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 658–666.
68. Lukezic, A.; Matas, J.; Kristan, M. D3S: A Discriminative Single Shot Segmentation Tracker. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 14–19 June 2020.
69. Zhang, Z.; Peng, H.; Fu, J.; Li, B.; Hu, W. Ocean: Object-Aware Anchor-Free Tracking. In Proceedings of the Computer Vision, ECCV 2020, Glasgow, UK, 23–28 August 2020; Vedaldi, A., Bischof, H., Brox, T., Frahm, J.M., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 771–787.
70. Zhang, L.; Gonzalez-Garcia, A.; Weijer, J.V.D.; Danelljan, M.; Khan, F.S. Learning the Model Update for Siamese Trackers. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Citation
Lv, Y.; Feng, W.; Wang, S.; Dauphin, G.; Zhang, Y.; Xing, M. Spectral-Spatial Feature Enhancement Algorithm for Nighttime Object Detection and Tracking. Symmetry 2023, 15, 546. https://doi.org/10.3390/sym15020546