Article

Implementation of QR Code Recognition Technology Using Smartphone Camera for Indoor Positioning

Department of Information Communication Engineering, Chosun University, Gwangju 61452, Korea
* Author to whom correspondence should be addressed.
Energies 2021, 14(10), 2759; https://doi.org/10.3390/en14102759
Submission received: 2 April 2021 / Revised: 3 May 2021 / Accepted: 6 May 2021 / Published: 11 May 2021

Abstract: Numerous studies on positioning technology are ongoing to recognize the positions of objects accurately. Vision-, sensor-, and signal-based technologies are combined to recognize the positions of objects outdoors and indoors. While positioning technologies based on wireless communication with sensors and signals are commonly used in outdoor environments, their performance degrades in indoor environments. Therefore, a vision-based indoor positioning method using a QR code is proposed in this study. A user's position is measured by accurately determining the current position of a smartphone based on a QR code recognized with the smartphone camera. The direction, distance, and position are acquired from the relationship between the three-dimensional spatial coordinates of the camera and the center-point coordinates of the two-dimensional planar QR code, obtained through camera calibration.

1. Introduction

Indoor positioning systems (IPS) are used for locating people or objects in an indoor environment. In recent years, interest in IPS technologies has grown because they enable a number of location-based indoor tracking solutions, including applications such as behavior recognition, asset tracking, healthcare, and smart buildings. With increasing popularization, the number of smartphone users has grown considerably. In recent years, location-based services such as geographic information, street view, and satellite maps have been developed for smartphone users using the global positioning system (GPS) [1], which works well outdoors. However, its performance is poor for indoor positioning because an indoor terminal cannot receive satellite signals directly. Various other techniques have been introduced to resolve the problems associated with indoor environments. Radio-frequency identification (RFID) [2] is one of the most commonly used methods for indoor positioning. However, its localization accuracy largely depends on the density of reference tags, and increasing the number of reference tags may generate RF interference. The Wi-Fi-based localization technique is another prominent tool for indoor positioning, and Wi-Fi-based systems [3] have been more widely used in buildings than other localization methods such as RFID. Multiple methods can be employed to track indoor Wi-Fi-enabled terminals; the received signal strength indicator (RSSI) and time of flight (ToF) are the two primary methods in use. However, signal-strength-based approaches suffer from location errors, whereas ToF-based solutions provide good accuracy but are not scalable. Similarly, Bluetooth [4] and beacon [5,6,7,8] based methods are also widely used for recognizing positions in indoor environments.
Recently, vision-based positioning methods have gained popularity because they provide reliable, low-cost service using a camera without extra pre-deployed hardware. A vision-based IPS tracks objects by matching current images with stored frames or pre-constructed models for relative or global pose estimation.
In this study, we aim to track the position of a user using a vision-based system in an indoor environment. We use the smartphone camera to acquire image data and QR codes to store location information. We measure the position of the user by accurately determining the current position of the smartphone based on a QR code recognized with the smartphone camera. The smartphone camera captures images of QR codes attached to the wall at different locations in the room. Each QR code has a unique identifier and is encoded with indoor reference location information. Next, the two-dimensional location of the QR code is converted to three-dimensional coordinates using the camera calibration algorithm. The position information is acquired in real time using the focal length and angle of the camera obtained from this conversion. Similarly, the direction of the user is estimated based on the lengths of the left and right side vertices within the QR code.
The remainder of this paper is organized as follows. Section 2 reviews related works, followed by the overall description of the proposed algorithm and the camera calibration mechanism in Section 3. In Section 4 and Section 5, the simulation environment and the experimental results are presented, respectively. Section 6 concludes this study.

2. Related Works

Numerous vision-based algorithms have been proposed for tracking objects in indoor environments. Fundamentally, they are classified into four categories: matching-based [9], marker-based [10], barcode-based, and infrared-based. First, as a method of image matching, fingerprinting [11,12] creates a database of images captured from various indoor spaces and angles. Next, the image captured from the user's position is matched with the images in the database to determine the actual position. Image matching techniques such as SURF and SIFT are commonly used to match query images. However, an extended period of time is required for determining positions this way; thus, this method has low efficiency and convenience. Second, a marker-based position recognition method can obtain different types of information according to the marker shape and can minimize positioning errors through geometric computations. Markers have the advantage of providing two-dimensional information and a high recognition rate. A marker-based IPS for mobile robots was proposed by Li et al. [13]: the robots capture and process markers attached to the ceiling while moving from one place to the next, obtaining location and direction information from the markers, and a marker map is subsequently constructed from this information. However, an insufficient amount of information may be obtained from markers, and there is a high probability of errors when objects of identical shape are input, as markers generally have a simple geometric shape. Third, a barcode-based method can carry a greater amount of information and has a higher recognition rate than the marker-based method [14]. However, this method is not robust to geometric transformations, such as rotation or tilt, and requires a specific program or hardware device for generating and recognizing barcodes.
Fourth, an infrared-based position recognition method involves installing several infrared sensors at various places within the relevant indoor space, while users carry an infrared generator [15]. Their position is tracked by detecting the signals emitted by the generator. One of the first indoor localization systems, the active badge [16], used IR signals. This system was intended for personnel tracking using a set of tags, each emitting an IR signal with a unique code every 15 s. The signals are picked up by badge sensors installed in various rooms inside the building, providing room-level accuracy. The badge sensors were powered and networked using a special four-wire system based on telephone twisted-pair cables and the RS232 data-transfer format.
Additionally, QR codes have been increasingly used in recent years [17,18] for indoor positioning. QR codes can contain a considerable amount of information and have a higher recognition rate, recognition speed, and restorability than barcodes. Li and Huang [17] used the high-definition RGB camera and depth sensor of a Kinect to detect QR-code-based landmarks for an IPS. The RGB camera captured images of the QR codes attached to the wall at different locations in the room, and the distance between the Kinect sensor and the QR code was measured using the depth sensor. Based on the measured distance and the location information in the QR code, the exact position of the user was estimated.

3. Proposed Algorithm

The proposed vision-based indoor positioning technology aims to comprehensively improve on existing wireless communication methods, such as GPS and beacons [19,20]. In indoor environments, methods relying on communication or on sensors within a smartphone entail a high probability of error depending on the surroundings. Accordingly, to reduce errors and improve accuracy, QR codes, which can carry a large amount of information, are used, and a camera calibration algorithm is used to obtain the position information. The proposed algorithm consists of three steps, as shown in Figure 1.
Step 1
  • A QR code of size 50 cm × 50 cm is created and captured with a smartphone camera in real time. Once the QR code is recognized within an image, the indoor information encoded when the QR code was created is verified, and the basic information of the current space is obtained.
Step 2
  • All the planar data of the QR code recognized in Step 1 are processed by the algorithm to obtain the current position information of the smartphone user.
  • The reference symbols are located at the four corners of the QR code. The center points of the four quadrangular symbols are connected to create a new quadrangle, and the center of gravity is extracted using the area of this quadrangle. If the obtained center point is defined in two-dimensional planar coordinates, the coordinates of the corresponding point in three-dimensional space are computed inversely to create the coordinate information of the current position.
  • Based on the lengths of the top, bottom, left, and right sides of the quadrangle created when the QR code is captured from a distance of 1 m, the current direction can be inferred from the changes in these lengths.
Step 3
  • The position is detected with high accuracy based on the coordinates of the center of gravity obtained from the QR code and on the area, side lengths, and fixed height of the quadrangle created from the reference symbols of the QR code (a code sketch of the recognition and corner-extraction steps follows this list).
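For concreteness, the following is a minimal Python sketch of the recognition and corner-extraction steps, assuming OpenCV's QRCodeDetector; the paper does not name the recognition library used in its smartphone application, so the detector choice and corner ordering here are assumptions.

```python
# A minimal sketch of Steps 1-2, assuming OpenCV's QRCodeDetector.
import cv2
import numpy as np

def locate_qr(image):
    """Return the decoded payload, the four corner points, and the side lengths."""
    detector = cv2.QRCodeDetector()
    # data: decoded payload (the indoor reference information, Step 1)
    # points: the four corner points of the detected code region
    data, points, _ = detector.detectAndDecode(image)
    if points is None or not data:
        return None
    corners = points.reshape(-1, 2)                      # a, b, c, d
    # Lengths of the top, bottom, left, and right sides (used in Step 2
    # to estimate the viewing direction from their relative changes)
    sides = [float(np.linalg.norm(corners[i] - corners[(i + 1) % 4]))
             for i in range(4)]
    return data, corners, sides

# Usage: result = locate_qr(cv2.imread("frame.jpg"))
```

In the actual application, the corner coordinates would then feed the center-of-gravity and direction computations described in Sections 3.1 and 3.2.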

3.1. Camera Calibration

Camera calibration is the process of extracting the position on a two-dimensional plane onto which three-dimensional coordinates are projected, or inversely, extracting the parameters for restoring the three-dimensional spatial coordinates using the two-dimensional planar coordinates. Figure 2 shows the process of converting three-dimensional spatial coordinates to two-dimensional planar coordinates.
The coordinate values of the QR code, a two-dimensional image recognized by the smartphone camera, are used to obtain three-dimensional coordinates. The intrinsic and extrinsic parameters of the camera are extracted to restore the three-dimensional spatial coordinates. The intrinsic parameters describe inherent properties, such as the focal length or center point of the camera, whereas the extrinsic parameters describe the geometric relationship with the external space, such as the current height or direction of the camera. Equation (1) shows the relationship between the three-dimensional coordinates and the two-dimensional coordinates.
$$P_i = K \cdot E \cdot P_w = K \cdot [R \mid t] \cdot P_w \quad (1)$$
In this equation, $P_i$ is the coordinate of a point on the two-dimensional plane, $K$ is the intrinsic parameter matrix of the camera, $E$ is the extrinsic parameter matrix between the camera and the external space, and $P_w$ is the coordinate of a point in three-dimensional space. Here, $E$ consists of a 3 × 3 rotation matrix and a 3 × 1 translation vector, and $K$ is formed from the distance between the center of the camera lens and the image plane (the focal length), the optical center, and the distortion. Accordingly, Equation (2) can be obtained.
$$f \cdot \frac{x_c}{z_c} = S_x \cdot (X_i - O_x), \qquad f \cdot \frac{y_c}{z_c} = S_y \cdot (Y_i - O_y) \quad (2)$$
In this equation, $f$ is the focal length of the camera, $(x_c, y_c, z_c)$ are the coordinates of a point in the three-dimensional space of the camera, $S_x$ and $S_y$ are the scale factors between the camera and image coordinates, and $(O_x, O_y)$ is the optical center. Figure 3 shows the focal length, which is the distance from the camera lens to the image sensor.
The relationship between the two-dimensional planar coordinates of a point that can be obtained from Equation (2) and the three-dimensional spatial coordinates of the point can be defined as in Equation (3).
$$[x, y, z]^{T} = P \cdot [X, Y, Z, 1]^{T} = K \cdot E \cdot [X, Y, Z, 1]^{T} \quad (3)$$
The distance between the center point of the input QR code and the center point of the camera lens can be measured using Equation (3). When converting the coordinates of a point on the two-dimensional plane to those in the three-dimensional space of the camera, $x_c$ and $y_c$ are divided by $z_c$ and then multiplied by the focal length $f$ of the camera.
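As a worked illustration of Equations (1)-(3), the following NumPy sketch projects a hypothetical three-dimensional world point onto the image plane. All numeric values (focal length, optical center, pose) are illustrative assumptions, not the paper's calibration results, and the pixel scales $S_x, S_y$ are folded into the focal length.

```python
# A hedged sketch of Eqs. (1)-(3): projecting a 3-D world point to 2-D pixels.
import numpy as np

f = 800.0                                # focal length in pixels (assumed)
o_x, o_y = 320.0, 240.0                  # optical center (assumed)
K = np.array([[f, 0.0, o_x],
              [0.0, f, o_y],
              [0.0, 0.0, 1.0]])          # intrinsic parameters

R = np.eye(3)                            # 3 x 3 rotation (camera facing the code)
t = np.array([[0.0], [0.0], [0.0]])      # 3 x 1 translation
E = np.hstack([R, t])                    # extrinsic parameters [R | t]

P_w = np.array([0.25, 0.25, 3.0, 1.0])   # homogeneous world point (QR center at 3 m)
x, y, z = K @ E @ P_w                    # Equation (3)
X_i, Y_i = x / z, y / z                  # perspective divide -> pixel coordinates
print(X_i, Y_i)
```

Inverting this relationship with a known QR code size is what allows the smartphone's three-dimensional position to be recovered from the two-dimensional image.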

3.2. Detecting Camera Direction

The angle between the user and the QR code can be obtained by finding the rotation and tilt of the QR code recognized by the smartphone camera. Assuming that a point $P_w$ in three-dimensional space has the coordinates $P_s = (x_s, y_s, z_s)$, that the coordinates of the center of the camera lens are $P_c = (x_c, y_c, z_c)$, that the rotation matrix is $R$, and that the translation vector is $T$, Equation (4) can be obtained.
$$P_c = R \cdot P_s + T, \qquad P_s = R^{-1}(P_c - T) \quad (4)$$
To acquire the direction information, a quadrangle is formed by connecting the center points of the reference symbols obtained when the QR code is recognized, as shown in Figure 4. The lengths of the top, bottom, left, and right sides of the newly created quadrangle are measured.
The area of the quadrangle is calculated using the measured lengths as follows:
$$S_{\mathrm{quadrangle}} = \frac{1}{2} \cdot \overline{ac} \cdot \overline{bd} \cdot \sin\theta \quad (5)$$
The area of the quadrangle is computed using $\overline{ac}$, the diagonal connecting corner points $a$ and $c$, and $\overline{bd}$, the diagonal connecting $b$ and $d$. Then, the center of gravity of the quadrangle is calculated. The diagonal segments $\overline{ac}$ and $\overline{bd}$ divide the quadrangle into four triangles $T_1$, $T_2$, $T_3$, and $T_4$. The center of gravity of each triangle is calculated using Equation (6).
$$G = \left( \frac{a_1 + b_1 + c_1}{3}, \; \frac{a_2 + b_2 + c_2}{3} \right) \quad (6)$$
If the calculated centers of gravity of the four triangles are defined as $G_1$, $G_2$, $G_3$, and $G_4$, respectively, then the intersection of the segment $\overline{G_1 G_2}$ connecting $G_1$ and $G_2$ with the segment $\overline{G_3 G_4}$ connecting $G_3$ and $G_4$ is the center of gravity of the quadrangle.
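The following sketch illustrates Equations (5) and (6) with hypothetical corner coordinates; the corner ordering ($a$, $b$, $c$, $d$ around the quadrangle) and the segment-intersection solver are assumptions made for illustration.

```python
# A sketch of Eqs. (5)-(6): quadrangle area from its diagonals and center of
# gravity from the centroids of the four triangles formed by the diagonals.
import numpy as np

def quad_area(a, b, c, d):
    ac, bd = c - a, d - b
    # |ac| * |bd| * sin(theta) equals the magnitude of the 2-D cross product
    return 0.5 * abs(ac[0] * bd[1] - ac[1] * bd[0])     # Equation (5)

def tri_centroid(p, q, r):
    return (p + q + r) / 3.0                            # Equation (6)

def segment_intersection(p1, p2, p3, p4):
    # Solve p1 + s*(p2 - p1) = p3 + u*(p4 - p3) as a 2 x 2 linear system
    A = np.column_stack([p2 - p1, p3 - p4])
    s, _ = np.linalg.solve(A, p3 - p1)
    return p1 + s * (p2 - p1)

a, b, c, d = (np.array(p) for p in ([0.0, 0.0], [1.0, 0.1], [1.1, 1.0], [0.0, 1.0]))
G1, G2 = tri_centroid(a, b, c), tri_centroid(a, c, d)   # triangles over diagonal ac
G3, G4 = tri_centroid(a, b, d), tri_centroid(b, c, d)   # triangles over diagonal bd
center = segment_intersection(G1, G2, G3, G4)           # center of gravity
print(quad_area(a, b, c, d), center)
```

This center of gravity is the point that is later matched against the camera's center point to decide whether the user is facing the code directly.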

4. Simulation Environment and Method

4.1. Simulation Environment

A smartphone (Galaxy Note 8) was used for the experiments in this study. It has a dual-camera structure: one camera with a 12 MP dual-pixel image sensor and an F1.7 wide-angle lens, and another with a 12 MP image sensor and an F2.4 telephoto lens. The QR code created for recognition was enlarged to a size of 50 cm × 50 cm. The camera height was fixed at 1.5 m to ensure the accuracy of the experiment. The distance between the camera and the QR code ranged from 1 to 9 m and was measured at intervals of 25 cm.

4.2. Simulation Method

In this paper, a method is proposed for obtaining the current position of a user based on QR code recognition, camera calibration, and the center of gravity. A quadrangle is created from the center coordinates of the four reference symbols in the QR code recognized with a smartphone camera. Using the area and side lengths of this quadrangle, the direction of a user standing in front of the QR code is obtained. Four triangles are created by drawing the diagonals of the quadrangle, and the center of gravity is calculated for each triangle. Subsequently, the centers of gravity of the four triangles are connected to find the center of gravity of the quadrangle. The current coordinates of the smartphone are determined by inversely calculating the obtained coordinates, and these coordinates are combined with the coordinates of the center of gravity of the QR code to calculate the distance.
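As a simple illustration of how distance follows from the calibrated focal length and the apparent size of the QR code under the pinhole model of Section 3.1, consider the following sketch; the focal length value is an assumption, not the calibration result of the experimental device.

```python
# A minimal pinhole-model sketch: with the QR code's physical side (0.5 m) and
# the focal length in pixels known from calibration, the camera-to-code
# distance follows from the apparent side length in pixels.
QR_SIDE_M = 0.5        # physical side length of the enlarged QR code
FOCAL_PX = 800.0       # focal length in pixels (assumed calibration value)

def distance_from_side(side_px: float) -> float:
    """Distance (m) at which a 0.5 m side appears side_px pixels long."""
    return FOCAL_PX * QR_SIDE_M / side_px

print(distance_from_side(200.0))   # -> 2.0 (m)
```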
Figure 5 shows the pictures taken for the experiment from the left, right, and center at distances of 1 and 2 m. The picture is determined to be taken from the front when the center point of the camera and the center of gravity are adjacent to each other. Moreover, the direction of movement can be inferred from the increasing distance between the center point of the camera and the center of gravity when moving to the left or right. The performance was measured in five directions with respect to the QR code: front, left (10° and 20°), and right (10° and 20°).
In addition, we experimented with a positioning algorithm that combines QR code positioning data with beacon and pedestrian dead reckoning (PDR) measurement data. It tracks the actual user's location by combining beacon positioning data, PDR-based location measurements, and the distance and coordinate measurements obtained using the QR code.

5. Experimental Results

In the experiments of this study, the distance between the QR code and the smartphone camera was measured from the front, left, and right, and the user's current position and direction could be estimated. The minimum measurement distance was set to 1 m because, with a 50 cm × 50 cm QR code, the camera could not capture the entire code from less than 1 m away. Figure 6 shows the distance values measured from the front.
In Figure 6a, the green bar represents the actual distance, whereas the blue bar represents the average of 10 repeated measurements taken at 25 cm intervals with the camera facing the QR code directly. Using the application implementing the proposed algorithm, the QR code is recognized, and a quadrangle is created by connecting the center coordinates of the four reference symbols. Four triangles are created from the two diagonal segments of the quadrangle, and their centers of gravity are used to calculate the center of gravity of the quadrangle. By matching the coordinates of the center of gravity with those of the camera's center point, the user is determined to be at the front when the difference between the two coordinates is (0, 0). In Figure 6b, the average margin of error between the actual distance and the distance measured with the application is less than 0.02 m over the 1–8 m range. Beyond 8 m, a significant error of 0.08–0.4 m occurs because of the complexity of the QR code recognized by the smartphone camera and the lowered recognition rate at increased distance. Within the reliable range, the user position can be estimated with an error between 0 and 4 cm, with high accuracy.
Figure 7 shows the values of the distance measured from the left (10°).
The graph in Figure 7a shows the actual distance and the distance measured from 10° to the left with respect to the QR code. The average of 10 repeated measurements taken from 10° to the left at 25 cm intervals was used. The shape of the quadrangle created when the QR code is recognized changes as the angle changes. The current direction of the user can be detected from the center of gravity of the changed quadrangle and the side lengths obtained from the reference images taken at 1 m from the front. The shape of the recognized QR code transformed from a square to a trapezoid; because the left-side segment became longer than the right-side segment, the user is determined to be on the left-hand side with respect to the QR code. The graph in Figure 7b shows that the average error increases by 0.03 m when the distance is measured from the left (10°) compared with the front. The error increases in a similar manner for each interval overall, as the recognition rate of the small alignment pattern in the bottom-right corner among the four reference symbols of the QR code decreases as the camera moves toward the left. Therefore, the current position of the user can be estimated with an error range between 0 and 8 cm, and the overall average error is 0.07 m.
Figure 8 shows the values of the distance measured from the left (20°).
The graph in Figure 8a shows the actual distance and the distance measured from 20° to the left with respect to the QR code. The average of 10 repeated measurements taken from 20° to the left at 25 cm intervals was used. As the measurement angle is greater than in the previous condition (10° to the left), the direction of the user can be detected more accurately because the shape of the QR code changes more. In Figure 8b, the recognition rate of the small alignment pattern in the bottom-right corner of the QR code decreases as the viewing angle increases, resulting in a higher average error. Therefore, the current position of the user can be estimated with an error range between 0 and 32 cm, and the overall average error is 0.09 m. A higher error is observed in specific sections compared with the average, most likely due to the recognition error of the alignment pattern mentioned above.
Figure 9 shows the values of the distance measured from the right (10°).
The graph in Figure 9a shows the actual distance and the distance measured from 10° to the right with respect to the QR code. The average of 10 repeated measurements taken from 10° to the right at 25 cm intervals was used. The distance measured from the right shows more accurate results than the distance measured from the left at the same angle; it can be inferred that the recognition error of the alignment pattern that occurred on the left-hand side occurred less frequently on the right-hand side. In Figure 9b, the distance measured from the right (10°) is almost identical to the distance measured from the front and has a higher accuracy than the distance measured from the left (10°). No significant difference from the actual distance is observed, and only a small error occurs. Therefore, the current position of the user can be estimated with an error range between 0 and 8 cm, and the overall average error is 0.05 m.
Figure 10 shows the values of the distance measured from the right (20°).
The graph in Figure 10a shows the actual distance and the distance measured from 20° to the right with respect to the QR code. The average of 10 repeated measurements taken from 20° to the right at 25 cm intervals was used. Unlike on the left side, no recognition error of the small alignment pattern occurred when the camera was moved 20° to the right, so the accuracy was similar to that at 10° to the right. The graph in Figure 10b shows that the distance measured from 20° to the right is almost identical to that measured from 10° to the right, with high accuracy and a very small error. Therefore, the current position of the user can be estimated with an error range between 0 and 11 cm, and the overall average error is 0.06 m.
Table 1 presents the average error between the measurement distance and the actual distance.
The overall average error between the measurement distance and the actual distance is listed in Table 1. From the front, the margin of error was small up to the distance at which the QR code was recognized accurately and then increased with distance, owing to the lowered recognition rate caused by the complexity of the QR code and the hardware limitations of the smartphone. On the left side, the recognition rate was lowered by the failure to recognize the small alignment pattern in the bottom-right corner among the four reference symbols used for finding the center of gravity in the proposed algorithm. On the right side, the recognition rate was almost identical to that from the front, and high accuracy was observed except for the lowered recognition rate when the shape of the recognized QR code changed from a square to a trapezoid. Because the position of the smartphone was fixed in this experiment, consistent values were obtained; although additional errors can easily occur when the smartphone is held by an actual user, high accuracy and performance were still obtained.
To evaluate hybrid positioning performance, measurements were also taken using beacon and PDR positioning data. Table 2 shows the results of measuring distances using only the beacon and PDR positioning data.
Our study is compatible with the IPS based on the hybrid localization technology proposed in [8]. Using this technology, we measured distances with the combined beacon and PDR positioning data, as shown in Table 2. In each set, the distance was measured from 1 to 9 m in each of five directions: front, left 10°, left 20°, right 10°, and right 20°. A total of five sets were performed. When the measured distance exceeds 1 m, the error rate increases rapidly; accordingly, the average error increases with distance. The average error over the total measurement distance is 1.24 m, which is high. These beacon and PDR positioning data were then combined with the positioning data obtained using the QR code; the experimental results are shown in Table 3.
Table 3 shows the average error of the measurement distance when vision positioning is combined with the beacon and PDR data. As before, each set measured the distance from 1 to 9 m in each of the five directions, and a total of five sets were performed. Measurements from sets 1 to 3 resulted in very small errors. In the fourth set, the measurement error of the beacon increased owing to noise generated by the surrounding environment. The total average error of the measurement distance is 1 m, an improvement over the beacon and PDR data alone.
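Since the paper does not detail how the three data sources are weighted, the following sketch illustrates one plausible fusion rule, an inverse-error weighted average; it is an assumption for illustration only, not the method of [8].

```python
# A hedged sketch of combining QR-based vision, beacon, and PDR estimates.
def fuse_estimates(estimates, avg_errors):
    """Weighted average of distance estimates; weight ~ 1 / average error."""
    weights = [1.0 / e for e in avg_errors]
    return sum(w * x for w, x in zip(weights, estimates)) / sum(weights)

# e.g., vision estimate (0.06 m avg error) vs. beacon+PDR (1.24 m, Table 2):
# the fused result stays close to the more reliable vision estimate.
print(fuse_estimates([3.02, 3.60], [0.06, 1.24]))
```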

6. Conclusions

This paper has proposed a vision-based indoor positioning technology using a smartphone camera and a QR code. The proposed algorithm analyzes the current position of a user based on camera calibration and on the center of gravity and side lengths of a quadrangle obtained from a QR code recognized by the smartphone camera. Through camera calibration, the three-dimensional spatial coordinates of the smartphone camera are inversely computed from the two-dimensional planar coordinates of the QR code. A quadrangle is created by connecting the center points of the four reference symbols of the QR code, and the direction is obtained using the area and side lengths of this quadrangle. Moreover, the quadrangle is divided into four triangles to compute their centers of gravity, which are in turn used to calculate the center of gravity of the quadrangle. Accordingly, the distance between the smartphone and the QR code was measured within a range of 1–9 m, and the measured position values showed high accuracy and efficiency.
In future studies, measures to replace the alignment pattern of the reference symbols or to reduce the complexity of QR codes will be examined to address the problem of low recognition rates. Furthermore, studies will be conducted on combining wireless communication technology, such as beacons or PDR, with sensor- or vision-based positioning using QR codes to develop a hybrid positioning technology.

Author Contributions

Conceptualization, G.-R.K.; data curation, J.-I.K.; project administration, J.-I.K. and H.-S.G.; software, J.-I.K. and H.-S.G.; supervision, G.-R.K.; validation, G.-R.K. and J.-Y.P.; writing—original draft, J.-I.K.; writing—review & editing, G.-R.K. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by research funds from Chosun University, 2020.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to the data sharing policy of the hosting institute.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Mahiddin, N.A.; Madi, E.N.; Dhalila, S.; Hasan, E.F.; Safie, S.; Safie, N. User Position Detection in an Indoor Environment. Int. J. Multimed. Ubiquitous Eng. 2013, 8, 303–312. [Google Scholar]
  2. Nakamori, E.; Tsukuda, D.; Fujimoto, M.; Oda, Y.; Wada, T.; Okada, H.; Mutsuura, K. A new indoor position estimation method of RFID tags for continuous moving navigation systems. In Proceedings of the IEEE 2012 International Conference on Indoor Positioning and Indoor Navigation (IPIN), Sydney, Australia, 13–15 November 2012. [Google Scholar]
  3. Nowicki, M.; Rostkowska, M.; Skrzypczyński, P. Indoor navigation using QR codes and WiFi signals with an implementation on mobile platform. In Proceedings of the 2016 Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA), Poznan, Poland, 21–23 September 2016; pp. 156–161. [Google Scholar]
  4. Rida, M.E.; Liu, F.; Jadi, Y.; Algawhari, A.A.A.; Askourih, A. Indoor Location Position Based on Bluetooth Signal Strength. In Proceedings of the IEEE 2015 2nd International Conference on Information Science and Control Engineering, Shanghai, China, 24–26 April 2015. [Google Scholar]
  5. Subedi, S.; Kwon, G.R.; Shin, S.; Hwang, S.S.; Pyun, J.Y. Beacon based indoor positioning system using weighted centroid localization approach. In Proceedings of the 2016 Eighth International Conference on Ubiquitous and Future Networks (ICUFN), Vienna, Austria, 5–8 July 2016. [Google Scholar]
  6. Subedi, S.; Pyun, J.Y. Practical Fingerprinting Localization for Indoor Positioning System by Using Beacons. J. Sens. 2017, 2017, 1–16. [Google Scholar]
  7. Subedi, S.; Gang, H.S.; Ko, N.Y.; Hwang, S.S.; Pyun, J.Y. Improving Indoor Fingerprinting Positioning with Affinity Propagation Clustering and Weighted Centroid Fingerprint. IEEE Access 2019, 7, 31738–31750. [Google Scholar]
  8. Gang, H.-S.; Pyun, J.-Y. A Smartphone Indoor Positioning System Using Hybrid Localization Technology. Energies 2019, 12, 3702. [Google Scholar] [CrossRef] [Green Version]
  9. Jimenez, A.R.; Seco, F.; Prieto, C.; Guevara, J. A Comparison of Pedestrian Dead-Reckoning Algorithms using a Low-Cost MEMS IMU. In Proceedings of the IEEE International Symposium on Intelligent Signal Processing, Budapest, Hungary, 26–28 August 2009; pp. 26–28. [Google Scholar]
  10. Mulloni, A.; Wagner, D.; Barakonyi, I.; Schmalstieg, D. Indoor Positioning and Navigation with Camera Phones. IEEE Pervasive Comput. 2009, 8, 22–31. [Google Scholar] [CrossRef]
  11. Subhan, F.; Hasbullah, H.; Rozyyev, A.; Bakhsh, S.T. Indoor positioning in Bluetooth networks using fingerprinting and lateration approach. In Proceedings of the IEEE 2011 International Conference on Information Science and Applications, Jeju Island, Korea, 26–29 April 2011. [Google Scholar]
  12. Guo, X.; Ansari, N.; Li, L.; Li, H. Indoor Localization by Fusing a Group of Fingerprints Based on Random Forests. IEEE Internet Things J. 2018, 5, 4686–4698. [Google Scholar] [CrossRef] [Green Version]
  13. Li, Y.; Zhu, S.; Yu, Y.; Wang, Z. An improved graph-based visual localization system for indoor mobile robot using newly designed markers. Int. J. Adv. Robot. Syst. 2018, 15, 1–15. [Google Scholar] [CrossRef] [Green Version]
  14. Pearson, J.; Robinson, S.; Jones, M. BookMark: Appropriating existing infrastructure to facilitate scalable indoor navigation. Int. J. Hum.-Comput. Stud. 2017, 103, 22–34. [Google Scholar] [CrossRef] [Green Version]
  15. Gorostiza, E.M.; Lázaro Galilea, J.L.; Meca Meca, F.J.; Salido Monzú, D.; Espinosa Zapata, F.; Pallarés Puerto, L. Infrared sensor system for mobile robot positioning in intelligent spaces. Sensors 2011, 11, 5416–5438. [Google Scholar] [CrossRef] [PubMed]
  16. Want, R.; Hopper, A.; Falcao, V.; Gibbons, J. The active badge location system. ACM Trans. Inf. Syst. 1992, 10, 91–102. [Google Scholar] [CrossRef]
  17. Li, Z.; Huang, J. Study on the use of Q-R codes as landmarks for indoor positioning: Preliminary results. In Proceedings of the 2018 IEEE/ION Position, Location and Navigation Symposium (PLANS), Monterey, CA, USA, 23–26 April 2018. [Google Scholar]
  18. Guo, X.; Ansari, N.; Hu, F.; Shao, Y.; Elikplim, N.R.; Li, L. A Survey on Fusion-Based Indoor Positioning. IEEE Commun. Surv. Tutor. 2020, 22, 566–594. [Google Scholar] [CrossRef]
  19. Shin, B.; Lee, J.H.; Lee, H.; Kim, E.; Kim, J.; Lee, S.; Cho, Y.S.; Park, S.; Lee, T. Indoor 3D Pedestrian Tracking Algorithm Based on PDR using Smartphone. In Proceedings of the 12th International Conference on Control, Automation and Systems, Jeju Island, Korea, 17–21 October 2012; pp. 1442–1445. [Google Scholar]
  20. Faragher, R.M.; Harle, R.K. SmartSLAM—An efficient smartphone indoor positioning system exploiting machine learning and opportunistic sensing. In Proceedings of the 26th International Technical Meeting of the Satellite Division of the Institute of Navigation ION GNSS, Nashville, TN, USA, 16–20 September 2013; Volume 13, pp. 1–4. [Google Scholar]
Figure 1. Flowchart of the proposed algorithm.
Figure 2. Conversion relationship between three-dimensional spatial coordinates and two-dimensional planar coordinates.
Figure 3. Intrinsic parameter of a camera (focal length).
Figure 4. Coordinates of vertices within the QR code.
Figure 5. Pictures taken from the (a) front, (b) left, and (c) right at a distance of 1 m. Pictures taken from the (d) front, (e) left, and (f) right at a distance of 2 m.
Figure 6. (a) Distance measured from the front, (b) distance error value measured from the front.
Figure 7. (a) Distance measured from the left (10°), (b) distance error value measured from the left (10°).
Figure 8. (a) Distance measured from the left (20°), (b) distance error value measured from the left (20°).
Figure 9. (a) Distance measured from the right (10°), (b) distance error value measured from the right (10°).
Figure 10. (a) Distance measured from the right (20°), (b) distance error value measured from the right (20°).
Table 1. Average error between the measurement distance and the actual distance.

Measurement Location | Average Error of Measurement Distance (m)
Front | 0.04
Left (10°) | 0.07
Right (10°) | 0.05
Left (20°) | 0.09
Right (20°) | 0.06
Table 2. Average error of the combined beacon and PDR measurement data.

Measurement Set | Average Error of Measurement Distance (m)
1 | 1.75
2 | 1.07
3 | 1.21
4 | 1.11
5 | 1.06
Total average | 1.24
Table 3. Average error of measurement distance combined with beacon, PDR, and vision positioning data.

Measurement Set | Average Error of Measurement Distance (m)
1 | 0.68
2 | 0.47
3 | 0.61
4 | 2.17
5 | 1.07
Total average | 1.00
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
