Adaptive Kalman Filter Fusion Positioning Based on Wi-Fi and Vision
Abstract
1. Introduction
- For the Wi-Fi positioning component, a random forest algorithm incorporating area restriction is proposed to improve both the accuracy of Wi-Fi positioning and its adaptability to environmental changes.
- For the visual positioning component, the YOLOv7 object detection and Deep SORT object tracking algorithms are introduced to improve the efficiency and accuracy of image processing and to stabilize visual positioning.
- An adaptive Kalman filter positioning technique fusing Wi-Fi signals and visual information is proposed. The technique uses Wi-Fi signal strength and visual information for preliminary positioning analysis, and it dynamically adjusts the parameters and weights of the filter by monitoring the residuals of the camera and Wi-Fi measurements in real time. The technique effectively integrates the two types of sensing data and aims to improve the accuracy and stability of the indoor positioning system.
- Experimental verification in a real environment shows that the proposed approach significantly reduces the uncertainty of each standalone technique and maintains good positioning performance in complex indoor environments, achieving more accurate and robust positioning with broad application prospects and practical value.
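The "random forest with area restriction" for Wi-Fi positioning can be pictured as a two-stage pipeline: a coarse classifier first restricts the estimate to a sub-area, then a per-area regressor refines the coordinates. This split, and all names and data below, are illustrative assumptions about how such a restriction might be realized, not the authors' exact design:

```python
# Hypothetical sketch of area-restricted random-forest Wi-Fi positioning:
# stage 1 classifies the sub-area from RSSI; stage 2 regresses (x, y)
# using only that area's fingerprints. Data here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic fingerprints: 4 APs, two sub-areas with shifted RSSI profiles.
n = 200
area = rng.integers(0, 2, n)                       # ground-truth sub-area label
rssi = rng.normal(-60, 5, (n, 4)) - 10 * area[:, None]
xy = np.column_stack([area * 10 + rng.normal(0, 1, n),
                      rng.normal(5, 1, n)])        # positions within each area

# Stage 1: coarse area classifier over RSSI fingerprints.
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(rssi, area)

# Stage 2: one position regressor per area, trained only on that area's data.
regs = {a: RandomForestRegressor(n_estimators=50, random_state=0)
            .fit(rssi[area == a], xy[area == a]) for a in (0, 1)}

def locate(sample):
    """Predict (x, y) restricted to the classified sub-area."""
    a = clf.predict(sample.reshape(1, -1))[0]
    return regs[a].predict(sample.reshape(1, -1))[0]

print(locate(rssi[0]))
```

Restricting the regressor to one area keeps fingerprints from distant, similar-looking regions from polluting the fine estimate.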
2. Related Work
3. System Overview
4. Proposed Methodology
4.1. Wi-Fi-Based Indoor Positioning
4.2. Vision-Based Indoor Positioning
4.2.1. Pedestrian Detection and Tracking Based on Video Images
4.2.2. Coordinate Transformation Models
4.3. Fusion Positioning Based on Adaptive Kalman Filtering
Algorithm 1 FAKF Update Step (Update)
Input: camera and Wi-Fi measurements (z_camera, z_wifi)
Output: updated state estimate x and covariance matrix P

Algorithm 2 FAKF Adaptive Adjustment (Adapt)
Input: residuals from camera and Wi-Fi data (innovation_camera, innovation_wifi)
Output: updated process and measurement noise covariance matrices
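The two steps above can be sketched as follows. The state model (2-D position with an identity motion model), the sequential per-sensor update, and in particular the adaptation rule (inflating a sensor's measurement noise when its innovation grows) are illustrative assumptions, not the authors' exact formulas:

```python
# Minimal sketch of an FAKF-style update (Algorithm 1) and residual-driven
# adaptation (Algorithm 2) for fusing camera and Wi-Fi position fixes.
import numpy as np

class FAKF:
    def __init__(self):
        self.x = np.zeros(2)            # state: (x, y) position
        self.P = np.eye(2)              # state covariance
        self.Q = 0.01 * np.eye(2)       # process noise
        self.R_cam = 0.5 * np.eye(2)    # camera measurement noise
        self.R_wifi = 1.5 * np.eye(2)   # Wi-Fi measurement noise

    def update(self, z_camera, z_wifi):
        """Algorithm 1: predict, then sequentially fuse both measurements."""
        self.P = self.P + self.Q                    # predict (identity motion)
        innovations = []
        for z, R in ((z_camera, self.R_cam), (z_wifi, self.R_wifi)):
            S = self.P + R                          # innovation covariance (H = I)
            K = self.P @ np.linalg.inv(S)           # Kalman gain
            innov = z - self.x                      # measurement residual
            self.x = self.x + K @ innov
            self.P = (np.eye(2) - K) @ self.P
            innovations.append(innov)
        return innovations

    def adapt(self, innovation_camera, innovation_wifi):
        """Algorithm 2: inflate a sensor's R when its residual grows,
        so the filter down-weights the sensor that currently disagrees."""
        for R, innov in ((self.R_cam, innovation_camera),
                         (self.R_wifi, innovation_wifi)):
            scale = 1.0 + np.linalg.norm(innov)     # larger residual -> less trust
            R *= 0.9                                # decay toward a nominal level
            R += 0.1 * scale * np.eye(2)            # in-place exponential update

f = FAKF()
ic, iw = f.update(np.array([1.0, 1.0]), np.array([1.2, 0.8]))
f.adapt(ic, iw)
```

The key design point is that each sensor keeps its own R matrix, so a momentary camera occlusion or Wi-Fi fluctuation only degrades that sensor's weight rather than the whole filter.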
5. Experimental Setup and Experimental Results
5.1. Experimental Setting and Data Collection
5.1.1. Hardware Parameters
5.1.2. Description of the Experimental Environment
5.1.3. Establishment of the Wi-Fi Fingerprint Database
5.1.4. Acquisition of Visual Information
5.2. Comparison and Analysis of Experimental Results
5.2.1. Experimental Methods
5.2.2. Visual Positioning Component Evaluation
- Precision: The precision of the model was 1.0 under all path and threshold conditions, meaning it produced no false positives: every sample it classified as positive was indeed positive, so its positive predictions are highly reliable.
- Recall: Path 1 had low recall at lower thresholds; recall improved gradually as the threshold increased and markedly so at higher thresholds. Paths 2 and 3 showed better recall at all thresholds, reaching 0.880 and 0.846, respectively, at the largest threshold (Th = 2.70 m), indicating that these two paths identified positive instances more effectively.
- F1 score: The F1 scores of paths 2 and 3 rose with the threshold, reaching their best values of 0.936 and 0.917, respectively, at Th = 2.70 m, indicating that these two paths balance precision and recall well and deliver strong overall performance, especially at larger thresholds.
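These per-path metrics follow the standard definitions once a matching rule is fixed; counting a detection as a true positive when it lies within Th metres of its ground-truth point is our assumption here, but it reproduces the table's Path 1 entry at Th = 1.60 m:

```python
# Threshold-based detection metrics: a detection within th metres of its
# matched ground-truth point is a true positive; ground-truth points with
# no matching detection are false negatives.
import numpy as np

def detection_metrics(errors, n_truth, th):
    """errors: distances (m) between each detection and its matched truth point;
    n_truth: number of ground-truth positions; th: distance threshold (m)."""
    errors = np.asarray(errors)
    tp = int(np.sum(errors <= th))          # detections close enough to count
    fp = len(errors) - tp                   # detections beyond the threshold
    fn = n_truth - tp                       # truth points never matched
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / n_truth if n_truth else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# e.g. 11 of 36 test points detected within 1.60 m gives Path 1's row:
p, r, f1 = detection_metrics([0.5] * 11, 36, 1.60)
# -> precision 1.0, recall ~0.3056, F1 ~0.4681
```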
5.2.3. Wi-Fi and Visual Fusion Positioning Evaluation
6. Conclusions
- Optimization of visual positioning methods: The performance of the visual positioning methods used in this study in large-scale environments has the potential to be further improved. Future work should focus on optimizing the visual positioning techniques to improve their accuracy in complex scenes, especially in poor visible light conditions or with large scene variations, so as to enhance the accuracy of the overall data fusion positioning results.
- Upgrading of fused positioning methods: The FAKF used in this study relies on an adaptive adjustment mechanism for data fusion. Future research can further explore and implement more advanced sensor fusion strategies, such as a multi-sensor data fusion framework, while introducing more sophisticated fusion techniques to more effectively cope with nonlinear problems and variable factors in complex environments.
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
| Path | Metric | Th = 1.60 m | Th = 1.80 m | Th = 2.00 m | Th = 2.50 m | Th = 2.70 m |
|---|---|---|---|---|---|---|
| Path 1 | precision | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| Path 1 | recall | 0.3056 | 0.3611 | 0.3611 | 0.5278 | 0.5556 |
| Path 1 | F1 score | 0.4681 | 0.5306 | 0.5306 | 0.6909 | 0.7143 |
| Path 2 | precision | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| Path 2 | recall | 0.5600 | 0.6000 | 0.6000 | 0.7600 | 0.8800 |
| Path 2 | F1 score | 0.7179 | 0.7500 | 0.7500 | 0.8636 | 0.9362 |
| Path 3 | precision | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| Path 3 | recall | 0.5128 | 0.5897 | 0.6410 | 0.7949 | 0.8462 |
| Path 3 | F1 score | 0.6780 | 0.7419 | 0.7813 | 0.8857 | 0.9167 |
| Path | Wi-Fi RMSE (m) | Visual RMSE (m) | FAKF RMSE (m) | FAKF MAE (m) | KF RMSE (m) | KF MAE (m) |
|---|---|---|---|---|---|---|
| Path 1 | 1.30900 | 1.79313 | 0.88482 | 0.68337 | 1.58489 | 1.12138 |
| Path 2 | 1.46400 | 1.04980 | 0.89663 | 0.59074 | 1.44230 | 1.14330 |
| Path 3 | 1.43900 | 1.19865 | 1.05999 | 0.83463 | 1.84172 | 1.53824 |
| Percentile | 50% | 75% | 80% | 90% |
|---|---|---|---|---|
| Wi-Fi positioning error (m) | 1.17 | 1.64 | 1.77 | 1.81 |
| Visual positioning error (m) | 2.42 | 3.67 | 3.93 | 4.32 |
| FAKF positioning error (m) | 1.01 | 1.31 | 1.45 | 1.91 |

| Percentile | 50% | 75% | 80% | 90% |
|---|---|---|---|---|
| Wi-Fi positioning error (m) | 0.93 | 1.60 | 1.62 | 2.78 |
| Visual positioning error (m) | 1.68 | 2.66 | 2.74 | 2.83 |
| FAKF positioning error (m) | 0.47 | 1.15 | 1.32 | 2.67 |

| Percentile | 50% | 75% | 80% | 90% |
|---|---|---|---|---|
| Wi-Fi positioning error (m) | 1.78 | 2.52 | 2.66 | 2.93 |
| Visual positioning error (m) | 1.68 | 2.41 | 2.61 | 3.01 |
| FAKF positioning error (m) | 1.05 | 1.87 | 1.93 | 2.46 |
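For reference, the RMSE, MAE, and percentile-error figures in the tables above follow the standard definitions over per-epoch 2-D position errors; a minimal sketch (the sample numbers here are illustrative, not from the experiment):

```python
# Summary statistics over Euclidean position errors, as tabulated above.
import numpy as np

def error_stats(est, truth, percentiles=(50, 75, 80, 90)):
    """Return (RMSE, MAE, CDF percentiles) of position error in metres."""
    err = np.linalg.norm(np.asarray(est, float) - np.asarray(truth, float),
                         axis=1)                    # per-point error magnitude
    rmse = float(np.sqrt(np.mean(err ** 2)))        # root-mean-square error
    mae = float(np.mean(err))                       # mean absolute error
    return rmse, mae, np.percentile(err, percentiles)

rmse, mae, cdf = error_stats([[0.0, 0.0], [3.0, 4.0]],
                             [[0.0, 1.0], [0.0, 0.0]])
# errors are 1 m and 5 m -> RMSE = sqrt(13) ≈ 3.606, MAE = 3.0
```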
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Zhong, S.; Cheng, L.; Yuan, H.; Li, X. Adaptive Kalman Filter Fusion Positioning Based on Wi-Fi and Vision. Sensors 2025, 25, 671. https://doi.org/10.3390/s25030671