Article

Indoor Location Technology with High Accuracy Using Simple Visual Tags

Feng Gao * and Jie Ma
College of Mechanical and Vehicle Engineering, Chongqing University, Chongqing 400044, China
* Author to whom correspondence should be addressed.
Sensors 2023, 23(3), 1597; https://doi.org/10.3390/s23031597
Submission received: 2 January 2023 / Revised: 19 January 2023 / Accepted: 30 January 2023 / Published: 1 February 2023
(This article belongs to the Special Issue Advanced Sensing and Control Technologies for Autonomous Robots)

Abstract

To achieve low cost and robustness, an indoor location system using simple visual tags is designed by comprehensively considering accuracy and computational complexity. Only color and shape features are used for tag detection, which reduces both the algorithm complexity and the data storage requirement. To handle the ambiguity caused by these simple, nonunique tag features, a fast query and matching method is presented that uses the camera's field of view and the tag azimuth. Then, based on an analysis of the relationship between the spatial distribution of tags and the location error, a pose and position estimation method using the weighted least squares algorithm is designed; it works together with an iterative search algorithm through a designed switching strategy. These techniques achieve a favorable balance between algorithm complexity and location accuracy. Simulation and experimental results show that the proposed method effectively handles the singularity of the overdetermined equations and attenuates the negative effect of unfavorable tag groups. Compared with ultrawideband technology, the location error is reduced by more than 62%.

1. Introduction

With the development of artificial intelligence technologies, intelligent mobile platforms are becoming more widely used in fields such as healthcare, warehousing, logistics, and industrial production [1,2]. Indoor location with high accuracy, as one of the key technologies, is the basis for autonomous functions such as navigation and decision making. Compared with other application scenarios, it imposes special requirements on computational complexity, convenience of deployment, and robustness to environmental disturbances. Outdoors, satellites are a mature technology that can provide accurate location and timing signals at any time. Indoors, however, the satellite signal becomes ineffective because of attenuation by buildings, interference during transmission, and the multipath effect [3].
Indoor location technology can be divided into two types according to the signals used, i.e., wireless signals and visual information. Widely used wireless signals include Wi-Fi [4], Bluetooth [5], and UWB (ultrawideband) [6]; indoor location can be realized by measuring the distance, the angle, or a location fingerprint [3,4,5,6]. However, the cost of deploying base stations is comparatively high. Recently, with the development of Internet-of-Things technologies, some researchers have focused on indoor location using wireless local area networks, which have already been deployed in many indoor environments [7,8]. Alhammadi et al. developed a three-dimensional indoor location system based on the Bayesian graphical model [9], but its average location error exceeds 3 m, so it can hardly be used in the autonomous driving systems of intelligent mobile platforms. Moreover, wireless communication signals suffer from the multipath effect. Visual location mainly includes SLAM (simultaneous localization and mapping) [10] and location by visual tags [11]. SLAM can be applied to unknown environments by extracting and matching texture features automatically, but the algorithm has high computational complexity and its performance is easily degraded by environmental disturbances. These pose great challenges for practical applications, especially in scenarios with high demands on real-time and robust performance. Comparatively, location algorithms using visual tags can achieve low computational complexity by storing the feature and position information of tags in advance. The selection of visual tags and the pose calculation are both important to location performance.
For the selection of visual tags, Ref. [12] uses special objects such as signs, desks, and doors, whose SURF features are extracted to form an offline tag database. This method does not require modifying the environment, but the texture feature data are still voluminous and easily affected by environmental factors such as lighting. To reduce the storage requirement, only roof corners are considered in [13]. In [14], semantic segmentation is combined with the location algorithm for better performance, but the segmentation process requires considerable computing resources. Using existing indoor objects requires no extra effort to modify the environment; for objects of the same type, however, the texture features are similar, which makes it difficult to discriminate different tags. To address this problem, special tags are designed by coding information such as the ID and coordinates into the tag to ensure its uniqueness [11]. However, the complexity of a QR code requires more detection time, which can reach hundreds of milliseconds [15]. Moreover, the texture information of complex QR codes is easily degraded by environmental light and by contaminants such as dust and dirt.
For the pose calculation, image matching and geometrical constraints are the two widely used methods. In [11], the geometrical viewing relationship between the tag image recorded in the database and that acquired in real time is used to calculate the position, and template matching is adopted in [16] to realize location by affine transformation. The fundamental principle of image matching is to find key texture features and establish their correspondence across images, which leads to high algorithm complexity. Comparatively, algorithms using geometrical constraints require much less computation: they realize location through the geometrical relationship between the tags and the sensors. The location precision reaches the centimeter level when a binocular camera is used for distance detection [13], but the computational resources required for distance measurement by stereovision are very high [17]. In [18], the directly measured distances from multiple tags are used to construct overdetermined equations according to geometrical constraints, and the location is estimated by LSM (least squares method). This approach offers a better overall balance of location accuracy and computational complexity. When the redundancy of tags is insufficient or their spatial distribution is special, however, the equations become singular, which leads to a sharp increase in location error [19].
Considering the aforementioned problems, an indoor location system using only simple visual tags is designed for the autonomous driving systems of intelligent mobile platforms. To handle the ambiguity caused by the simple texture features of the tags, a fast query and matching algorithm is designed using the FOV (field of view) of the camera and the tag azimuth. Furthermore, based on an analysis of the relationship between the spatial distribution of tags and the location error, a pose and position estimation algorithm using WLSM (weighted least squares method) is designed; it works together with an iterative search algorithm through a designed switching strategy to adapt to singular conditions. A favorable balance is thus achieved between algorithm complexity and location accuracy. The effectiveness of the designed location system is validated by both simulation and experiments; compared with UWB, the location error is reduced greatly. The main contributions of this study are summarized as follows:
(1) A fast query and matching algorithm is designed to handle the ambiguity of tags that cannot be distinguished directly by their texture features.
(2) A cooperative strategy is proposed to combine the WLSM and the iterative search algorithm to realize high-accuracy estimation of pose and position under both general and singular conditions.
(3) An indoor location system using only simple visual tags is developed, which has the advantages of low consumption of computational resources, high accuracy, easy deployment, and robustness to environmental disturbances.
(4) The effectiveness of the proposed indoor location system is validated by both simulation and experimental tests.
The remainder of this paper is organized as follows: Section 2 describes the indoor location system design. The tag-matching strategy and the position calculation algorithm are introduced in Section 3 and Section 4, respectively; the effectiveness of the designed location system is validated by numerical simulations and comparative tests in Section 5; and Section 6 concludes the paper.

2. Indoor Location System Design

The visual tags and the position calculation algorithm are the two main parts of the designed system.

2.1. Simple Visual Tag Design

The tags shown in Figure 1 are designed for the following reasons [20]:
(1) Natural textures are difficult to control and are easily influenced by environmental factors;
(2) A coded tag is unique, but its detection algorithm is more complex and its sophisticated texture is easily degraded;
(3) Cameras have a strong capability to measure colors, and RGB are the three primary colors;
(4) Circles are easy to detect and offer fine detection robustness and invariance across different views.
In this study, a tag is detected by its color and shape features [21], and the azimuth is measured from the tag's center. In this way, better anti-interference capability is achieved with less computational complexity. Using only these simple features, however, the tags cannot be discriminated directly, so the pose and position cannot be calculated directly from the tag coordinates.
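As a concrete illustration of this color-plus-shape detection, the following sketch uses standard OpenCV primitives to find circular tags of a single color. It is a minimal sketch only: the HSV thresholds and Hough parameters are illustrative placeholders, not values reported in the paper.

```python
import cv2

def detect_red_circular_tags(bgr_image):
    """Sketch: find circular tags of one color; return pixel centers (u, v)."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Placeholder HSV range for red; the paper does not report its thresholds.
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255))
    mask = cv2.medianBlur(mask, 5)  # suppress speckle noise before circle search
    circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, dp=1.2, minDist=30,
                               param1=100, param2=18, minRadius=5, maxRadius=80)
    if circles is None:
        return []
    # Each detection is (u, v, r); only the center feeds the azimuth of Eq. (1).
    return [(int(u), int(v)) for u, v, r in circles[0]]
```

Only the center column $u$ of each detection is used for the azimuth measurement; the radius is discarded.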

2.2. Location Algorithm with High Accuracy

To handle the nonunique features of the simple visual tags, the location algorithm shown in Figure 2 is proposed, building on the advantages of the geometrical constraint method, i.e., high location accuracy and low algorithm complexity.
Considering that most intelligent platforms move indoors within the horizontal plane, the vision-based angle measurement shown in Figure 3 is used to measure the tag azimuth.
The camera's optical center $(x, y)$, azimuth $\theta$, and the tag position $(x_i, y_i)$ satisfy [18,19]:
$$\tan(\theta+\alpha_i)=\frac{y_i-y}{x_i-x},\qquad \alpha_i=\arctan\frac{u_i-u_0}{f},\qquad i\in\Omega \tag{1}$$
where $f$ is the focal length and $\Omega$ is the set of identifiers of the tags observable by the camera. If the tag features were unique and their coordinates known, the pose and position could be solved from (1) directly whenever the number of observed tags is at least three, and information from more tags benefits the location accuracy and robustness [18]. Unfortunately, the designed tags have only color and shape features, which are nonunique. In the next section, a fast query and matching algorithm is designed using the camera's FOV and the tag azimuth, so that the matched tag can be found without searching the entire database. Then, with the coordinates of the observed tags, a hybrid algorithm is presented in Section 4 to estimate the pose and position by combining the WLSM and the iterative search algorithm, striking a favorable balance between algorithm complexity and location accuracy.
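Both parts of (1) translate directly into code. Below is a minimal sketch in plain Python; the function names are illustrative:

```python
import math

def pixel_to_azimuth(u, u0, f):
    """Right part of Eq. (1): azimuth alpha of a tag center at pixel
    column u, given the principal point u0 and focal length f in pixels."""
    return math.atan2(u - u0, f)

def expected_azimuth(tag_x, tag_y, cam_x, cam_y, theta):
    """Left part of Eq. (1) solved for alpha: the azimuth at which a stored
    tag should appear for a camera at (cam_x, cam_y) with pose theta."""
    alpha = math.atan2(tag_y - cam_y, tag_x - cam_x) - theta
    # wrap to (-pi, pi] so measured and expected azimuths are comparable
    return math.atan2(math.sin(alpha), math.cos(alpha))
```

The first function converts a detected tag center into a measured azimuth; the second predicts the azimuth of a stored tag from a pose estimate and is the basis of the matching step in Section 3.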

3. Tag Matching

Considering the following reasons, the matching algorithm shown in Figure 4 is designed:
(1) The camera can only detect tags within its FOV, which can be used to quickly narrow the database search to candidate tags for better efficiency;
(2) When an observed tag is consistent with a candidate tag, their azimuths should be equal. This fact can be used to realize tag matching.

3.1. Determination of Candidate Tag

To determine the candidate tags quickly, the location result from the last cycle is used to estimate the current pose and position by DR (dead reckoning) [22]. Then, according to the FOV model shown in Figure 5, the area where candidate tags may exist can be determined. Conversely, a tag located outside this area cannot be a candidate.
In Figure 5, the coordinate of A is calculated by DR [22], and the coordinates of B and C, i.e., $(x_B, y_B)$ and $(x_C, y_C)$, are calculated from the geometrical relationship:
$$\begin{bmatrix}x_B\\y_B\end{bmatrix}=\begin{bmatrix}x_A\\y_A\end{bmatrix}+\frac{h}{\cos\gamma}\begin{bmatrix}\cos(\theta+\gamma)\\\sin(\theta+\gamma)\end{bmatrix}\quad\text{and}\quad\begin{bmatrix}x_C\\y_C\end{bmatrix}=\begin{bmatrix}x_A\\y_A\end{bmatrix}+\frac{h}{\cos\gamma}\begin{bmatrix}\cos(\theta-\gamma)\\\sin(\theta-\gamma)\end{bmatrix} \tag{2}$$
where $h$ is the detection range and $\gamma$ is half of the horizontal view angle.
To improve efficiency, the whole location area is rasterized so that at most one tag is allowed in each grid cell, and the stored tag information can be indexed directly by the grid number. According to the FOV model shown in Figure 5, the candidate tags are determined by the following steps (a code sketch follows the list):
Step 1: The grid cells covered by the camera FOV are determined from the coordinates of A, B, and C, which are calculated by (2).
Step 2: According to the grid cells covered by the camera FOV, possible tags are roughly selected from the database.
Step 3: For each tag $L_i$ selected in Step 2, whether it lies in the FOV is judged exactly by the signs of the pairwise dot products of the vectors $\overrightarrow{L_iA}$, $\overrightarrow{L_iB}$, and $\overrightarrow{L_iC}$.
Step 4: The set of candidate tags $\Omega_c$ comprises all tags whose pairwise dot products have the same sign. The ideal azimuth $\beta_i$, $i\in\Omega_c$, of each candidate is then calculated by (1) from the camera pose and position derived by DR.
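The following is a minimal sketch of Steps 1–3, assuming the rasterized map is a dictionary from grid cell to tag record; the exact in-FOV test is written here in the standard cross-product form of the sign test described in Step 3:

```python
def inside_fov_triangle(p, a, b, c):
    """Exact in-FOV test (Step 3): p lies inside triangle ABC iff the cross
    products of its edges with p all have a consistent sign."""
    def cross(o, q, r):
        return (q[0] - o[0]) * (r[1] - o[1]) - (q[1] - o[1]) * (r[0] - o[0])
    s1, s2, s3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    return (s1 >= 0 and s2 >= 0 and s3 >= 0) or (s1 <= 0 and s2 <= 0 and s3 <= 0)

def candidate_tags(tag_grid, a, b, c, cell):
    """Steps 1-2: coarse lookup of the grid cells under the bounding box of
    the FOV triangle, then the exact test. tag_grid: (i, j) -> (id, x, y)."""
    xs, ys = zip(a, b, c)  # bounding box of the triangle A, B, C
    found = []
    for i in range(int(min(xs) // cell), int(max(xs) // cell) + 1):
        for j in range(int(min(ys) // cell), int(max(ys) // cell) + 1):
            tag = tag_grid.get((i, j))  # at most one tag per cell
            if tag and inside_fov_triangle((tag[1], tag[2]), a, b, c):
                found.append(tag)
    return found
```

Step 4 then computes the ideal azimuth $\beta_i$ of each surviving tag with Eq. (1) from the DR pose, as in the expected_azimuth sketch of Section 2.2.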

3.2. Tag Matching by Azimuth

Considering that when a candidate tag is consistent with the observed tag their azimuths should be equal, the following strategy is designed for tag matching:
Step 1: For any observed and unmatched tag, the error $e_i$ between its azimuth $\alpha$ and that of each candidate tag with the same color and shape features is calculated by
$$e_i=\left|\alpha-\beta_i\right|,\qquad i\in\Omega_c. \tag{3}$$
Step 2: If the minimum $e_i$ is smaller than a threshold, the candidate tag $L_k$, $k=\arg\min_i e_i$, matches the observed tag. Otherwise, the observation is considered a disturbance and deleted.
Compared with matching all feature information by global optimization, this strategy effectively avoids matching errors caused by the incomplete information of unobserved tags, which may be occluded by other objects. With the aforementioned process, the coordinates of the observed tags can be retrieved from the tag database quickly and accurately.
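A minimal sketch of this rule follows, assuming the candidates have already been filtered to the same color and shape; the 2° rejection threshold is an illustrative placeholder, not a value from the paper:

```python
import math

def match_observed_tag(alpha, candidates, threshold=math.radians(2.0)):
    """Steps 1-2: pair an observed azimuth alpha with the candidate whose
    ideal azimuth beta (Eq. (1) at the DR pose) is nearest; reject the
    observation as a disturbance if even the best error exceeds the
    threshold. candidates: iterable of (tag_id, beta)."""
    best_id, best_err = None, math.inf
    for tag_id, beta in candidates:
        e = abs(math.atan2(math.sin(alpha - beta), math.cos(alpha - beta)))
        if e < best_err:
            best_id, best_err = tag_id, e
    return best_id if best_err < threshold else None
```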

4. Pose and Position Calculation with High Accuracy

Theoretically, the pose and position can be calculated by (1) when the number of effective tags is at least three [18]. In practical applications, the following factors degrade the location accuracy:
(1) The pixel error caused by factors such as environmental disturbances, camera pose, and tag position is unfavorable for location accuracy.
(2) Solving (1) for the pose and position involves a matrix inversion. Under some special conditions, singularity makes the matrix non-invertible.
To address these problems, a WLSM is designed to estimate the pose and position with high accuracy based on an analysis of the error transfer characteristics of (1); furthermore, a cooperation strategy is presented to schedule the WLSM and the iterative search algorithm so as to handle singular conditions.

4.1. Analysis of Error Transfer Characteristic

Considering that the pose and position can be calculated from three effective tags, the following error transfer characteristic is obtained by a sensitivity analysis of (1):
$$\begin{bmatrix}d\alpha_i\\ d\alpha_j\\ d\alpha_k\end{bmatrix}=G\begin{bmatrix}dx\\ dy\\ d\theta\end{bmatrix},\qquad G=\begin{bmatrix}\dfrac{\sin(\alpha_i+\theta)}{r_i} & -\dfrac{\cos(\alpha_i+\theta)}{r_i} & -1\\[4pt] \dfrac{\sin(\alpha_j+\theta)}{r_j} & -\dfrac{\cos(\alpha_j+\theta)}{r_j} & -1\\[4pt] \dfrac{\sin(\alpha_k+\theta)}{r_k} & -\dfrac{\cos(\alpha_k+\theta)}{r_k} & -1\end{bmatrix} \tag{4}$$
where $\alpha_i$ and $r_i$ are the measured azimuth of tag $L_i$ and its distance to the camera, respectively. When the camera and the tags lie on the same circle, as shown in Figure 6, the following equations hold according to Ptolemy's theorem and the law of sines:
$$r_i d_{jk}+r_k d_{ij}=r_j d_{ik},\qquad \frac{d_{ij}}{\sin(\alpha_i-\alpha_j)}=\frac{d_{jk}}{\sin(\alpha_j-\alpha_k)}=\frac{d_{ik}}{\sin(\alpha_i-\alpha_k)}. \tag{5}$$
Under this condition, it follows from (5) that $|G|=0$ and the positioning equations become ill-conditioned, so a small error in the measured azimuth results in a significant deviation of the solved pose and position. Accordingly, the following cooperation strategy is designed: when the condition number of $G$ is smaller than the threshold, the pose and position are calculated by the WLSM given in Section 4.2; otherwise, the following iterative search algorithm is used (a code sketch follows the steps):
Step 1: The possible azimuth range is discretized according to the accuracy requirement.
Step 2: For each discretized azimuth, the camera coordinates are calculated from any two tags by
$$x=\frac{(t_i t_\theta-1)(t_j t_\theta-1)(y_i-y_j)+(t_j t_\theta-1)(t_\theta+t_i)x_i-(t_i t_\theta-1)(t_\theta+t_j)x_j}{(t_\theta^2+1)(t_j-t_i)}$$
$$y=\frac{(t_\theta+t_i)(t_\theta+t_j)(x_i-x_j)+(t_i t_\theta-1)(t_\theta+t_j)y_i-(t_j t_\theta-1)(t_\theta+t_i)y_j}{(t_\theta^2+1)(t_i-t_j)} \tag{6}$$
$$t_i=\tan\alpha_i,\qquad t_j=\tan\alpha_j,\qquad t_\theta=\tan\theta$$
where $(x_i, y_i)$ and $\alpha_i$ are the stored position of tag $L_i$ and its measured azimuth, respectively, and $\theta$ is the discretized azimuth.
Step 3: For each discretized azimuth, the variance of the camera coordinates computed from all tag pairs is calculated. The coordinates with the smallest scatter and the corresponding azimuth are selected as the effective pose and position.
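A minimal sketch of the search in Steps 1–3 follows, mirroring the reconstructed Eq. (6); the azimuth grid is supplied by the caller, and the switch itself would compare the condition number of $G$ (e.g., via numpy.linalg.cond) against the threshold before falling back to this search:

```python
import itertools
import math
import numpy as np

def camera_from_pair(tag_i, tag_j, theta):
    """Eq. (6): camera position implied by two matched tags for one
    hypothesized pose theta. Each tag is (x, y, alpha); assumes the two
    measured azimuths differ so the denominator is nonzero."""
    xi, yi, ti = tag_i[0], tag_i[1], math.tan(tag_i[2])
    xj, yj, tj = tag_j[0], tag_j[1], math.tan(tag_j[2])
    tt = math.tan(theta)
    den = (tt * tt + 1.0) * (tj - ti)
    x = ((ti * tt - 1) * (tj * tt - 1) * (yi - yj)
         + (tj * tt - 1) * (tt + ti) * xi
         - (ti * tt - 1) * (tt + tj) * xj) / den
    y = ((tt + ti) * (tt + tj) * (xi - xj)
         + (ti * tt - 1) * (tt + tj) * yi
         - (tj * tt - 1) * (tt + ti) * yj) / (-den)
    return x, y

def iterative_pose_search(tags, theta_grid):
    """Steps 1-3: for each discretized pose, solve Eq. (6) for every tag
    pair and keep the pose whose solutions scatter least."""
    best = None
    for theta in theta_grid:
        pts = np.array([camera_from_pair(p, q, theta)
                        for p, q in itertools.combinations(tags, 2)])
        scatter = float(pts.var(axis=0).sum())  # spread of pairwise solutions
        if best is None or scatter < best[0]:
            best = (scatter, theta, pts.mean(axis=0))
    return best[1], best[2]  # selected azimuth theta and position (x, y)
```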

4.2. Position Estimation by Weighted Least Square Method

Under noncircular conditions, the pose and position can be solved from every group of three tags, and solving the resulting overdetermined equations by LSM offers fine real-time performance and good robustness to noise [23]. However, the location error from different tag groups is influenced by many factors, such as the environment and the image resolution. To further improve the location accuracy, a weighted least squares estimation algorithm is presented by designing an evaluation index of solution quality.
Taking the estimation of $\theta$ as an example, the following equation is derived for any group of tags $L_i$, $L_j$, and $L_k$ from the geometrical constraint described by (1) [18]:
$$\sigma_{t_\theta}^2=\begin{bmatrix}\dfrac{\partial t_\theta}{\partial t_i} & \dfrac{\partial t_\theta}{\partial t_j} & \dfrac{\partial t_\theta}{\partial t_k}\end{bmatrix}\begin{bmatrix}\dfrac{\partial t_\theta}{\partial t_i} & \dfrac{\partial t_\theta}{\partial t_j} & \dfrac{\partial t_\theta}{\partial t_k}\end{bmatrix}^{\mathsf T}\frac{\sigma_u^2}{f^2},$$
$$t_\theta=\frac{(t_k-t_i)(y_j-t_j x_j-y_i+t_i x_i)-(t_j-t_i)(y_k-t_k x_k-y_i+t_i x_i)}{(t_k-t_i)(x_j+t_j y_j-x_i-t_i y_i)-(t_j-t_i)(x_k+t_k y_k-x_i-t_i y_i)} \tag{7}$$
where $\sigma_{t_\theta}^2$ and $\sigma_u^2$ are the variances of the tangent of $\theta$ and of the pixel coordinate $u$, respectively. Equation (7) establishes the relationship between the pixel error of the tag center and the azimuth error.
When using WLSM, the maximum likelihood solution is obtained by setting the weights to the reciprocals of the error variances [23]. From the error relationship described by (7), the WLSM used to optimally estimate the azimuth is designed as
$$\theta=\arctan t_\theta,\qquad t_\theta=(A^{\mathsf T}WA)^{-1}A^{\mathsf T}WB,$$
$$A=\begin{bmatrix}a_1 & a_2 & \cdots\end{bmatrix}^{\mathsf T},\qquad B=\begin{bmatrix}b_1 & b_2 & \cdots\end{bmatrix}^{\mathsf T},\qquad W=\operatorname{diag}(1/\omega_1,\,1/\omega_2,\,\cdots),$$
$$a_n=(t_k-t_i)(x_i-x_j+t_i y_i-t_j y_j)-(t_j-t_i)(x_i-x_k+t_i y_i-t_k y_k),$$
$$b_n=(t_k-t_i)(y_i-y_j-t_i x_i+t_j x_j)-(t_j-t_i)(y_i-y_k-t_i x_i+t_k x_k),$$
$$\omega_n=\begin{bmatrix}\dfrac{\partial t_\theta}{\partial t_i} & \dfrac{\partial t_\theta}{\partial t_j} & \dfrac{\partial t_\theta}{\partial t_k}\end{bmatrix}\begin{bmatrix}\dfrac{\partial t_\theta}{\partial t_i} & \dfrac{\partial t_\theta}{\partial t_j} & \dfrac{\partial t_\theta}{\partial t_k}\end{bmatrix}^{\mathsf T}. \tag{8}$$
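Because $t_\theta$ is a scalar, the matrix form of (8) collapses to a weighted ratio. Below is a minimal sketch, taking the per-group coefficients $a_n$, $b_n$ and quality indices $\omega_n$ of (8) as inputs:

```python
import numpy as np

def wlsm_azimuth(a, b, omega):
    """Eq. (8): weighted least-squares estimate of theta. a and b hold the
    per-group coefficients a_n and b_n; omega holds the error variances
    omega_n, so the weights 1/omega_n give the ML solution [23]."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    w = 1.0 / np.asarray(omega, dtype=float)         # W = diag(1/omega_n)
    t_theta = np.sum(w * a * b) / np.sum(w * a * a)  # (A^T W A)^-1 A^T W B
    return np.arctan(t_theta)
```

Each group index $n$ corresponds to one tag triple $L_i$, $L_j$, $L_k$; a near-cocircular triple yields a large $\omega_n$ and thus a near-zero weight, which is the behavior observed for group 2 in Section 5.1.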

5. Validation and Analysis

In this section, the effectiveness of the proposed method is first validated by numerical simulations and then compared with UWB through experiments.

5.1. Simulation Validation and Analysis

The location scenarios shown in Figure 7 are designed to demonstrate the effectiveness of the designed cooperation strategy and WLSM. Under condition (a), all tags lie on the same circle, which is used to validate the cooperative strategy. Under condition (b), tags 1, 2, and 3 are on the same circle, but tag 4 is not, and the camera moves along the circle; this condition is used to validate that the designed WLSM can attenuate the negative influence of unfavorable tag groups. The scenario is simulated in PreScan and the designed location algorithm runs in Matlab; PreScan provides integrated interfaces for cosimulation with Matlab [24]. During the simulation, the time step is set to 0.03 s, the radius of the circular trajectory is 5 m, and the camera moves at about 0.5 m/s.
Figure 8 shows the location results when the camera moves from outside the circle along the arrow direction in Figure 7a. At about 1.3–1.9 s, the camera and the tags are on the same circle; the overdetermined Equation (1) becomes singular and the location error of LSM increases quickly to a large value. In contrast, the proposed cooperative algorithm detects the singularity and switches to the iterative search algorithm promptly, so the location accuracy is maintained throughout the simulation.
Since the pose and position can be solved with any three tags, the four tags under condition (b) generate four three-tag combinations: tags 2, 3, and 4 (group 1); tags 1, 2, and 3 (group 2); tags 1, 3, and 4 (group 3); and tags 1, 2, and 4 (group 4). Among them, the tags in group 2 and the camera are always on the same circle, so the solution quality of group 2 is the worst. Figure 9a shows that the contribution of group 2 to the location results is almost zero, which demonstrates that the solution quality index designed in Section 4.2 measures the confidence level of results from different tag groups. Accordingly, the presented WLSM, using this index as the weight, successfully attenuates the negative influence of unfavorable tag groups. Compared with LSM, Figure 9c,d show that the maximum errors of the azimuth, longitudinal, and lateral location results are reduced by 89.47%, 51.61%, and 88.78%, respectively.

5.2. Comparative Test of Location Accuracy

To further test the location accuracy, the designed location system is compared with UWB under the indoor conditions shown in Figure 10, for the following reasons: (1) methods such as ORB-SLAM have high requirements for computational resources; (2) visual methods using complex texture features can hardly work under such disturbed conditions, because main features are occluded or the lighting changes markedly; (3) the average error of methods using wireless-signal location fingerprints is up to several meters; and (4) comparatively, UWB has the advantages of low computational complexity, high accuracy, and robustness to environmental disturbances such as lighting and the dynamic covering of objects. The room size is about 4.1 m × 5.2 m, and the room contains several desks that may degrade the location performance of UWB. About 20 tags are affixed to the walls around the room, and 4 UWB stations are used. To simulate environmental disturbances, the light is turned on and off and the tags are covered randomly by objects during the test.
During the test, the location algorithm runs on a Raspberry Pi 4B board at 30 fps. The board with the camera is held by hand and moved along the defined trajectory as closely as possible at a speed of about 0.5 m/s. The main test equipment and technical parameters are listed in Table 1. The camera is used to detect the tags, and in addition to running the algorithm, the Raspberry Pi 4B records the location results of both the proposed method and UWB.
Figure 11 shows the location results as the camera moves along the defined trajectory.
The average location errors are shown in Table 2. Compared with UWB, the longitudinal and lateral location errors of the designed system are reduced by 65.93% and 69.75%, respectively.
Since it is difficult to move the node along the trajectory accurately and strictly, the statistical errors in Table 2 are larger than the actual values. Ten test points are therefore selected at which the location results are measured statically, with the results shown in Figure 12. Overall, the average location error of UWB is more than 13.5 cm, reaching a maximum of 49 cm at position (1.61 m, −0.50 m). The UWB wireless signal is reflected by objects in the room, which degrades its location performance through the multipath effect. Even with the simulated environmental disturbances of lighting and covering, the average error of the proposed location system is about 5.5 cm and the maximum error is only 9 cm. The location accuracy is clearly improved by the techniques proposed in this study.

6. Conclusions

To realize indoor location with high accuracy, low cost, and robustness, an indoor location system is designed in this study by presenting a fast tag-matching strategy and a cooperative solving algorithm. The system locates using simple visual tags, which are robust to environmental disturbances and easily deployed. The cooperative solving algorithm for pose and position estimation works even under cocircular conditions, and compared with LSM, the maximum errors of the azimuth, longitudinal, and lateral location results are reduced by 89.47%, 51.61%, and 88.78%, respectively. The average location error of the designed system under disturbed environments is about 5.5 cm, and the location accuracy is more than 62% higher than that of UWB.

Author Contributions

Conceptualization and methodology, F.G.; validation, J.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the State Key Laboratory of Automotive Safety and Energy under grant KFY2209.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data are available upon request to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Vladareanu, L.; Vlădareanu, V.; Pop, N.; Migdalovici, M.; Boșcoianu, M.; Pop, S.; Ciocîrlan, A. Robot extenics control developed by versatile, intelligent and portable robot Vipro platform applied on firefighting robots. Int. J. Online Biomed. Eng. 2020, 16, 99–107. [Google Scholar] [CrossRef]
  2. Gao, F.; Dang, D.F.; He, Y.D. Robust coordinated control of nonlinear heterogeneous platoon interacted by uncertain topology. IEEE Trans. Intell. Transp. Syst. 2022, 23, 4982–4992. [Google Scholar]
  3. Li, S.; Rashidzadeh, R. Hybrid indoor location positioning system. IET Wirel. Sens. Syst. 2019, 9, 257–264. [Google Scholar] [CrossRef]
  4. Ding, J.; Wang, Y.; Fu, S.; Si, H.; Zhang, J.; Gao, S. Multiview features fusion and Adaboost based indoor localization on Wifi platform. IEEE Sens. J. 2022, 22, 16607–16616. [Google Scholar] [CrossRef]
  5. Youm, S.; Shin, K.S. Improved method of Bluetooth low energy based location tracking using neural networks. Sens. Mater. 2021, 33, 2083–2093. [Google Scholar] [CrossRef]
  6. Liu, F.; Zhang, J.; Wang, J.; Han, H.; Yang, D. An UWB/vision fusion scheme for determining pedestrians’ indoor location. Sensors 2020, 20, 1139. [Google Scholar] [CrossRef] [PubMed]
  7. Billa, A.; Shayea, I.; Alhammadi, A.; Abdullah, Q.; Roslee, M. An overview of indoor localization technologies: Toward IoT navigation services. In Proceedings of the International Symposium on Telecommunication Technologies, Shah Alam, Malaysia, 9–11 November 2020. [Google Scholar]
  8. Kordi, K.A.; Alhammadi, A.; Roslee, M.; Alias, M.Y.; Abdullah, Q. A review on wireless emerging IoT indoor localization. In Proceedings of the International Symposium on Telecommunication Technologies, Shah Alam, Malaysia, 9–11 November 2020. [Google Scholar]
  9. Alhammadi, A.; Hashim, F.; ARasid, M.F.; Alraih, S. A three-dimensional pattern recognition localization system based on a Bayesian graphical model. Int. J. Distrib. Sens. Netw. 2020, 16, 1–12. [Google Scholar] [CrossRef]
  10. Song, J.; Kook, J. Visual SLAM based spatial recognition and visualization method for mobile AR system. Appl. Syst. Innov. 2022, 5, 11. [Google Scholar] [CrossRef]
  11. Kayhani, N.; Zhao, W.; McCabe, B.; Schoellig, A.P. Tag-based visual-inertial localization of unmanned aerial vehicles in indoor construction environments using an on-manifold extended Kalman filter. Autom. Constr. 2022, 135, 104112. [Google Scholar] [CrossRef]
  12. Li, M.; Chen, R.; Liao, X.; Guo, B.; Zhang, W.; Guo, G. A precise indoor visual positioning approach using a built image feature database and single user image from smartphone. Remote Sens. 2020, 12, 869. [Google Scholar] [CrossRef]
  13. Hong, W.; Xia, H.; An, X.; Liu, X. Natural landmarks based localization algorithm for indoor robot with binocular vision. In Proceedings of the Chinese Control and Decision Conference, Chongqing, China, 28–30 May 2017. [Google Scholar]
  14. Li, N.; Tu, W.; Ai, H.; Deng, H.; Tao, J.; Hu, T.; Sun, X. VISEL: A visual and magnetic fusion-based large scale indoor localization system with improved high-precision sematic maps. Int. J. Intell. Syst. 2022, 37, 7992–8020. [Google Scholar] [CrossRef]
  15. Park, D.; Lee, S.J.; Kwon, J.; Choi, S. Localization of mobile robot based on artificial landmark. J. Korean Inst. Commun. Inf. Sci. 2020, 45, 2150–2153. [Google Scholar] [CrossRef]
  16. Li, M.; Zhao, L.; Tan, D.; Tong, X. BLE fingerprint indoor localization algorithm based on eight-neighborhood template matching. Sensors 2020, 19, 4859. [Google Scholar] [CrossRef]
  17. Shao, F.; Lin, W.; Gu, S.; Jiang, G.; Srikanthan, T. Perceptual full-reference quality assessment of stereoscopic images by considering binocular visual characteristics. IEEE Trans. Image Process. 2013, 22, 1940–1953. [Google Scholar] [CrossRef]
  18. Xie, H.L.; Zhu, Q.D.; Liu, P. Self localization method of panoramic stereo vision robot based on maximum likelihood estimation. Appl. Technol. 2017, 44, 40–45. [Google Scholar]
  19. Jiang, L.Z.; Lei, W.; Wang, J. Radar angle measurement accuracy evaluation system based on machine vision. Comput. Meas. Control. 2014, 22, 2842–2843, 2866. [Google Scholar]
  20. Lee, J.K.; Wang, Y.M.; Lu, C.S.; Wang, H.C.; Chou, T.R. The enhancement of graphic QR code recognition using convolutional neural networks. In Proceedings of the International Conference on Innovation, Communication and Engineering, Zhengzhou, China, 25–30 October 2019. [Google Scholar]
  21. Gao, F.; Wang, C. Hybrid strategy for traffic light detection by combining classical and self-learning detectors. IET Intell. Transp. Syst. 2020, 14, 735–741. [Google Scholar] [CrossRef]
  22. Nazemzadeh, P.; Fontanelli, D.; Macii, D. Optimal placement of landmarks for indoor localization using sensors with a limited range. In Proceedings of the International Conference on Indoor Positioning and Indoor Navigation, Alcala de Henares, Spain, 4–7 October 2016. [Google Scholar]
  23. Chen, Y.; Hu, J.; Wang, J.H. Data driven weighted least square localization algorithm for UAV. Command. Inf. Syst. Technol. 2020, 11, 76–80. [Google Scholar]
  24. Gao, F.; Han, Z.; Yang, Y. Performance Limit Evaluation by Evolution Test with Application to Automatic Parking System. IEEE Trans. Intell. Veh. 2022. Available online: https://ieeexplore.ieee.org/document/9645203 (accessed on 19 January 2023).
Figure 1. Simple visual tags.
Figure 2. Location algorithm with high accuracy.
Figure 3. Angle measurement by vision.
Figure 4. Matching process for observed and stored tags.
Figure 5. FOV model of camera.
Figure 6. Schematic for cocircular condition.
Figure 7. Simulation conditions.
Figure 8. Location results under condition (a).
Figure 9. Location results under condition (b).
Figure 10. Indoor location scenario.
Figure 11. Comparative test results.
Figure 12. Comparative test results under statistical conditions.
Table 1. Test equipment and technical parameters.

  Equipment         Parameter             Value
  Camera            Resolution            1920 × 1080
                    Frame rate            30 fps
                    Field of view         78°
                    Focal length          747.1 pix
  Raspberry Pi 4B   CPU                   BCM2837B0 SoC
                    Number of cores       4
                    Base frequency        1.4 GHz
                    RAM                   1 GB
  UWB               Location precision    10 cm
                    Communication range   ≤130 m
                    Update frequency      ≤50 Hz
Table 2. Statistical location errors.

  Average Error              Long. Location (cm)   Lat. Location (cm)   Azimuth (°)
  UWB                        36.1                  32.4                 /
  Proposed location system   12.3                  9.8                  0.88