Article

Real-Time Self-Positioning with the Zero Moment Point Model and Enhanced Position Accuracy Using Fiducial Markers

Human Augmentation Research Center, National Institute of Advanced Industrial Science and Technology (AIST), 6-2-3 Kashiwanoha, Kashiwa 277-0882, Japan
* Author to whom correspondence should be addressed.
Computers 2024, 13(12), 310; https://doi.org/10.3390/computers13120310
Submission received: 23 October 2024 / Revised: 15 November 2024 / Accepted: 19 November 2024 / Published: 25 November 2024

Abstract

Many companies are turning their attention to digitizing work in large factories and warehouses to improve employee efficiency, and the demand for measuring individual self-location indoors is increasing. Methods combining wireless network technology and Pedestrian Dead Reckoning (PDR) have been developed, but they face challenges such as high infrastructure costs and low accuracy. In this study, we propose a novel approach that combines high-accuracy fiducial markers with the Center of Gravity Zero Moment Point (COG ZMP) model. The fiducial markers enable precise estimation of self-position on a map, and their high accuracy also allows modeling errors in the COG ZMP model to be corrected, enhancing accuracy. The method was evaluated using an optical motion capture system, confirming high accuracy with a relative error of less than 3%. This approach therefore achieves high-accuracy self-position estimation with minimal computational load and standalone operation. Moreover, it offers a cost-effective solution, enabling low-cost, high-performance self-positioning and contributing to the advancement of indoor positioning technology.

1. Introduction

Accurately estimating an individual’s self-position in space is a pressing technological need. Information on the work and movement of employees in large factories and warehouses is necessary to verify work efficiency [1,2]. Likewise, information on the movement of customers in large commercial facilities is valuable to the retail industry. While GPS-based self-positioning is effective outdoors, it is impractical indoors, where GPS signals cannot be received. Consequently, various indoor positioning techniques have been developed to address this challenge [3].
One set of methods utilizes wireless network technology [4]. This approach calculates the location of a moving object by installing a receiver on the object and transmitters within the environment [5]. Examples of such wireless network technologies include Bluetooth Low Energy (BLE), Radio-Frequency Identification (RFID), Ultra-Wide Band (UWB), and WiFi. Although these methods simplify measurements, they necessitate the installation of dedicated devices within the environment, leading to high initial costs. Moreover, some of these devices require a continuous power supply, restricting their installation locations. Conversely, Pedestrian Dead Reckoning (PDR) estimates movement trajectories using only Inertial Measurement Units (IMUs) worn on the body [6,7,8,9,10]. Because PDR obtains position by integration, errors accumulate during continuous use [11]. Methods have been proposed to improve the accuracy of PDR position calculations using large amounts of trajectory data [12], and to correct integration errors using human behavior patterns and acoustic data [13]. PDR calculates relative movement trajectories from the inertial data acquired by IMUs. While it is convenient in that it relies solely on IMUs, it provides no information about the absolute position within the environment, so a separate mechanism for determining absolute position is required. Methods relying on a single IMU have low equipment costs, but since they can only calculate relative positions, they are often combined with methods such as BLE to obtain self-positions relative to the environment. This combination, however, does not resolve the issue of high installation costs. Moreover, because BLE and Wi-Fi have low measurement accuracy in isolation [14], self-location is often calculated probabilistically using techniques such as particle filters (PFs), whose computational cost grows with the number of particles. Simultaneous Localization and Mapping (SLAM), which uses natural features of the environment for self-positioning and map generation, requires no installation in the environment [15,16]; however, it is vulnerable to environmental changes.
In light of these challenges, conventional methods incur high installation and computation costs, making it difficult to achieve standalone, real-time self-positioning with cost-effective wearable devices. This research therefore aims to develop a method capable of real-time self-positioning on a map while minimizing both environmental installation cost and computational overhead.

2. Proposed Method

In this study, we utilize fiducial markers as non-powered equipment to gather information on position and orientation. Additionally, an IMU is employed to calculate a pedestrian’s self-position when marker information is unavailable. We therefore propose a wearable device that integrates a camera and an IMU, as illustrated in Figure 1. The device carries an IMU on the back of the neck and a front-facing camera, enabling simultaneous acquisition of inertial data and images. By processing the camera and inertial data on a compact PC (in this research, a notebook PC was used for convenience), we achieve real-time self-position calculation in a standalone manner. In the future, we plan to adopt “THINKLET” [17], a device integrating a camera, IMU, and computer, as described in reference [18].
Fiducial markers are strategically placed within the environment. By applying image processing to two-dimensional markers such as AR markers, the relative position and orientation between the camera and a marker can be calculated. These markers carry unique ID information, which, when linked to position data in the global coordinate system, enables precise determination of the camera’s position and orientation in global coordinates. Using fiducial markers for position and orientation measurement offers low computational and installation costs, no power requirements, and flexibility in installation location.
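As a rough illustration of this camera-to-marker pipeline (the paper itself uses the LEAG SDK rather than the code below), the following Python sketch estimates a marker's relative pose using OpenCV's ArUco module as a stand-in; the camera intrinsics and marker size are hypothetical placeholder values.

```python
import cv2
import numpy as np

# Hypothetical intrinsics for the wearable camera; real values would come
# from a calibration of the actual device.
CAMERA_MATRIX = np.array([[800.0, 0.0, 640.0],
                          [0.0, 800.0, 360.0],
                          [0.0, 0.0, 1.0]])
DIST_COEFFS = np.zeros(5)      # assume negligible lens distortion
MARKER_LENGTH = 0.1            # marker side length in meters

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

def detect_marker_pose(image):
    """Return (marker_id, rvec, tvec) for the first detected marker, or None."""
    corners, ids, _ = detector.detectMarkers(image)
    if ids is None:
        return None
    # Marker corners in the marker's own plane (z = 0), matching the
    # order in which ArUco reports image corners.
    half = MARKER_LENGTH / 2.0
    object_points = np.array([[-half,  half, 0.0], [half,  half, 0.0],
                              [half, -half, 0.0], [-half, -half, 0.0]],
                             dtype=np.float32)
    ok, rvec, tvec = cv2.solvePnP(object_points,
                                  corners[0].reshape(4, 2).astype(np.float32),
                                  CAMERA_MATRIX, DIST_COEFFS)
    return (int(ids[0][0]), rvec, tvec) if ok else None
```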
A method has been proposed to calculate three-dimensional self-position using multiple markers [19,20]. It can calculate the camera position even when the marker positions are arbitrary, but it requires a large number of markers to be attached. Various methods have been developed to compute the self-location of humans and robots by integrating fiducial markers and IMUs [16,21,22,23]. The authors have also developed a self-position estimation method using IMUs and high-accuracy fiducial markers [24]; by correcting the integration errors of the IMU based on marker information, we achieved estimation of three-dimensional motion trajectories, but real-time self-positioning was not possible. We therefore developed a method combining PDR with high-accuracy fiducial markers [18]. While PDR is more accurate than second-order integration of accelerations, its error still grows with movement.
In this study, our goal is to achieve real-time and accurate motion trajectory estimation using IMUs alone. However, since there are limitations to estimation accuracy with IMUs alone, we aim to develop a method that improves accuracy by applying corrections through high-accuracy fiducial markers in the motion estimation calculations.

3. Walking Path Estimation Using the ZMP Model

The IMU is equipped with an acceleration sensor, an angular velocity sensor, and a geomagnetic sensor. Theoretically, position can be obtained by second-order integration of acceleration, but the acceleration measured by the sensor includes a drift term, so integrating it grows the error and degrades position estimation. Methods that estimate position without integrating acceleration have therefore been studied.
There has been research on PDR as a method for calculating the relative position of pedestrians using IMUs [6]. Among these, the step-and-heading algorithm has been adopted as a method that can be implemented with an inexpensive IMU. It calculates the distance traveled by estimating the stride length from the walking speed. However, even if the speed is inferred with high accuracy, any drift in the speed estimate still produces errors through integration.

3.1. Estimating Walking Trajectories

In controlling the walking motion of humanoid robots, an index called the Zero Moment Point (ZMP) is often used [25]. The ZMP is the representative point of the ground reaction force during walking, defined as the point where the inertial force and the ground reaction force are balanced. The mechanical relationship between the ZMP and the center of gravity (COG) is called the COG ZMP model [26]. The model assumes that a person contacts the environment only with the soles of their feet and that there is sufficient friction with the environment. The COG ZMP model is expressed by Equation (1).
$$ x_z = x_m - \frac{\ddot{x}_m}{\ddot{z}_m + g}\, z_m \qquad (1) $$
where $(x_m, y_m, z_m)$ is the position of the COG, $(x_z, y_z)$ is the position of the ZMP, and $g$ is the gravitational acceleration (see Figure 2). A method for planning walking motion based on this COG ZMP model has been proposed [26]. Its similarity to human walking has also been noted, so the COG ZMP model can be said to capture important elements of human walking.
As can be seen from Equation (1), this is a relational expression between the COG position and its acceleration, so the COG trajectory can be obtained without integration. In this research, we therefore construct a walking path algorithm built on the COG ZMP model.
Since the walking path can be regarded as the trajectory of the COG, we aim to obtain this COG trajectory. When calculating with the IMU alone, the COG height and the ZMP are also unknown quantities. By assuming that the COG height is constant (i.e., the vertical acceleration $\ddot{z}_m$ is zero), the COG height need not be calculated in real time. Since the COG and the ZMP cannot be obtained simultaneously from an IMU alone, the problem becomes finding the COG position relative to the ZMP: substituting $\ddot{z}_m = 0$ and $z_m = z_c$ into Equation (1) yields Equation (2). This ZMP-relative COG position is called the RCOG. Because the RCOG is the relative COG movement during the single-leg support phase, it is calculated step by step.
$$ x_{rcog} = x_m - x_z = \frac{z_c}{g}\, \ddot{x}_m \qquad (2) $$
where $x_{rcog}$ is the RCOG and $z_c$ is the height of the COG. If the ZMP remained fixed during the single-leg support phase, the left side of this equation would equal the walking movement per step; in reality, however, a person’s ZMP moves from heel to toe (see Figure 2b), so Equation (2) underestimates the movement by the ZMP travel within the sole of the foot. Adding the size of the subject’s foot therefore gives the movement per step, shown in Equation (3).
$$ x_{rcog} = \frac{z_c}{g}\, \ddot{x}_m + l_{foot} \qquad (3) $$
where $l_{foot}$ is the foot size. By using Equation (3), the walking distance can be calculated directly from the information from the acceleration sensor.
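The following minimal Python sketch transcribes Equation (3), assuming the forward COG acceleration has already been resolved along the direction of travel; the default $z_c$ and $l_{foot}$ values are those used later in the experiments.

```python
import numpy as np

GRAVITY = 9.81  # m/s^2

def rcog_series(acc_forward, z_c=1.6, l_foot=0.28):
    """Equation (3): RCOG computed sample by sample, with no integration.

    acc_forward: forward COG acceleration samples for one step (m/s^2).
    z_c, l_foot: COG height and foot size in meters (experiment values).
    """
    return (z_c / GRAVITY) * np.asarray(acc_forward, dtype=float) + l_foot
```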

3.2. Extraction of Gait Cycle

From the discussion so far, since the RCOG is a per-step movement trajectory, a person’s walking must be segmented step by step. Methods to extract walking patterns from acceleration have been studied [27], so in this study we also segment walking based on acceleration. When human walking is regarded as an inverted pendulum, the COG height is maximal and the acceleration minimal when the COG and the ankle joint are aligned on the vertical axis (see Figure 3). Conversely, when the support leg is exchanged, the COG height is minimal and the acceleration maximal. Therefore, a support-leg exchange is detected when the measured acceleration reaches a local maximum that is sufficiently large.
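A minimal sketch of this segmentation, using a standard peak detector on the acceleration signal; the peak-height and minimum-interval thresholds are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.signal import find_peaks

def segment_steps(acc_norm, fs=60.0, min_peak=2.0, min_interval_s=0.3):
    """Split a walk into steps at support-leg exchanges (Section 3.2).

    A support-leg exchange is taken where the acceleration reaches a
    local maximum that is sufficiently large; consecutive peak indices
    then delimit single steps.
    """
    peaks, _ = find_peaks(np.asarray(acc_norm, dtype=float),
                          height=min_peak,             # "sufficiently large"
                          distance=int(min_interval_s * fs))
    return [(peaks[i], peaks[i + 1]) for i in range(len(peaks) - 1)]
```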

3.3. Complementing the Data of Double Support Phase

Human walking alternates between a single-leg support phase and a double-leg support phase. As shown in Equation (2), the RCOG, being the ZMP-referenced COG position, is defined only in the single-leg support phase, i.e., as the movement from heel strike (HS) to toe off (TF). Figure 4 shows a schematic of the RCOG calculated continuously from Equation (2). During single-leg support the RCOG increases from negative to positive, whereas during double-leg support it drops rapidly from positive to negative. The two phases must therefore be distinguished. However, since HS and TF are difficult to detect precisely, we consider a simple method for estimating them.
The RCOG decreases during the double-leg support phase and increases during the single-leg support phase. As Figure 4 shows, the interval between the minimum and maximum of the trajectory extracted at each support-leg exchange can be regarded as the single-leg support period. Accordingly, the minimum in the first half and the maximum in the second half of the extracted data are located, and the distance between them is taken as the movement for that step. Samples before the minimum are set equal to the minimum, and samples after the maximum are set equal to the maximum. This calculation is illustrated in Figure 5.
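A sketch of this extraction and clamping for one step, assuming the RCOG samples for the step have already been computed via Equation (2):

```python
import numpy as np

def single_step_displacement(rcog):
    """Single-leg support extraction and clamping (Section 3.3, Figure 5).

    rcog: RCOG samples for one step, delimited by support-leg exchanges.
    Returns the clamped trajectory and the min-to-max travel distance.
    """
    rcog = np.asarray(rcog, dtype=float)
    half = len(rcog) // 2
    i_min = int(np.argmin(rcog[:half]))            # minimum in the first half
    i_max = half + int(np.argmax(rcog[half:]))     # maximum in the second half
    clamped = rcog.copy()
    clamped[:i_min] = rcog[i_min]                  # flatten before the minimum
    clamped[i_max:] = rcog[i_max]                  # flatten after the maximum
    return clamped, float(rcog[i_max] - rcog[i_min])
```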
In the original COG ZMP model, the COG and ZMP are calculated in both the front–back and left–right directions, but in this research the COG trajectory in the sagittal plane alone suffices to estimate the movement path along the direction of travel. The left–right direction (y-axis) is therefore set to a constant (y = 0). Since the wearer actually moves on a two-dimensional plane, the movement locus on the plane is obtained by multiplying the coordinates by the orientation matrix with respect to the reference frame:
$$ \begin{bmatrix} x_c \\ y_c \end{bmatrix} = R_t \begin{bmatrix} x_{rcog} \\ 0 \end{bmatrix} \qquad (4) $$
where $(x_c, y_c)$ is the calculated position and $R_t$ is the rotation matrix. A walking route is calculated by connecting the RCOGs obtained in this way one after another. Since the method involves no iterative calculation, its computational load is small.
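A sketch of Equation (4) applied step by step, assuming per-step displacements and IMU-derived yaw angles are available:

```python
import numpy as np

def accumulate_path(step_lengths, headings):
    """Equation (4): rotate each per-step displacement [x_rcog, 0] into the
    reference frame and chain the steps into a 2D path.

    headings: wearer yaw angle (rad) per step, taken from the IMU.
    """
    position = np.zeros(2)
    path = [position.copy()]
    for x_rcog, theta in zip(step_lengths, headings):
        R_t = np.array([[np.cos(theta), -np.sin(theta)],
                        [np.sin(theta),  np.cos(theta)]])
        position = position + R_t @ np.array([x_rcog, 0.0])
        path.append(position.copy())
    return np.array(path)
```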

4. Adjusting Parameters with Fiducial Markers

The previous chapter estimated the amount of walking movement from an IMU, but this is a relative quantity, not an absolute position on the map. In this study, we therefore use high-accuracy fiducial markers to calculate the wearer’s absolute position on the map. Although the proposed method can estimate walking movement once the COG height and foot size are given, a person’s actual walking is not guaranteed to follow the model, so an error arises between the actual and estimated trajectories. This trajectory is therefore corrected using data obtained from another sensor. BLE tags could serve this role, but this research uses high-accuracy fiducial markers [28,29].

4.1. Identification of Position and Orientation Relative to the Map

High-accuracy fiducial markers achieve an orientation error of less than a few degrees and a position error of less than 1%, so the information obtained from them can be treated as the true value. It is also desirable to measure position and orientation with a small number of devices; methods using reflective markers require multiple cameras and large-scale equipment, making them unsuitable here.
Fiducial markers provide ID, position, and orientation information. The position and orientation of each marker on the map is linked to its ID, making it possible to calculate the position and orientation of the wearer on the map. Figure 6 shows the high-accuracy fiducial markers developed by Tanaka et al. They consist of a 2D barcode and black circles at the four corners, and the position and orientation are calculated from the corner circles [28]. A software library (LEAG SDK 1.3.2) provided by LEAG Solutions was used to calculate the position and orientation of the high-accuracy fiducial markers [30].
The position and orientation of the wearer on the map can be obtained from the relative position and orientation information of the wearer obtained from the marker and the position and orientation information linked to the ID. Every time a marker is recognized, information about the wearer’s position and orientation relative to the map is updated.
$$ p_c^w = R_m^w R_c^m p_c^m + p_m^w \qquad (5) $$
where $p_c^w$ is the position of the wearer with respect to the world coordinate system, $p_c^m$ is the position of the wearer with respect to the marker coordinate system, $p_m^w$ is the position of the marker with respect to the world coordinate system, $R_m^w$ is the orientation of the marker with respect to the world coordinate system, and $R_c^m$ is the orientation of the wearer with respect to the marker coordinate system.
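A direct transcription of Equation (5), with the marker's map pose looked up by ID and the relative pose taken from the marker detection:

```python
import numpy as np

def wearer_world_pose(R_m_w, p_m_w, R_c_m, p_c_m):
    """Equation (5): wearer pose on the map from a single marker sighting.

    R_m_w, p_m_w: marker orientation/position in world coordinates,
    looked up from the ID-to-map database.
    R_c_m, p_c_m: wearer orientation/position relative to the marker,
    from the marker detection.
    """
    p_c_w = R_m_w @ (R_c_m @ p_c_m) + p_m_w
    R_c_w = R_m_w @ R_c_m        # wearer orientation in world coordinates
    return R_c_w, p_c_w
```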
Koide et al. developed a VIO-based self-localization algorithm [20], but their method requires a large number of markers; in practice it is difficult to place many markers in an environment, so markers should be placed sparsely. Elgendy et al. realized self-localization using markers alone [31], so their self-localization cannot be updated in spaces where no marker is detected.
In this study, markers are placed at intervals of several meters up to 10 m, and the gaps between them are covered by the IMU.

4.2. Correction of Gait Trajectory Parameters

We now consider correcting the walking movement estimates using high-accuracy fiducial markers. The proposed method calculates the trajectory for each step and sums the steps to obtain the overall trajectory, so the problem reduces to correcting the single-step movement trajectory with an arbitrary function. In this study, for simplicity, we use a linear function as the correction function, shown in Equation (6).
$$ \hat{x}_{rcog} = \alpha \frac{z_c}{g}\, \ddot{x}_m + l_{foot} + \beta \qquad (6) $$
where $\hat{x}_{rcog}$ is the corrected value, and $\alpha$ and $\beta$ are the correction variables. The problem thus becomes identifying $\alpha$ and $\beta$: $\alpha$ is the correction within one step, and $\beta$ is the correction per step.
The parameter $\alpha$ is identified by comparing the continuous marker information within one step with the movement trajectory obtained by the proposed method. Let $L_m$ be the set of marker and RCOG data acquired within an arbitrary step, and let $L_0$ be the oldest datum among them. Let $p_i$ and $r_i$ be the relative displacements of the marker and the RCOG with respect to $L_0$. Both are relative movements within one step, and the error between them is considered to depend on $z_c$; hence $p_i$ can be written as $p_i = \alpha r_i$. From this, $\alpha$ is obtained by Equation (7).
$$ \alpha = \frac{\sum_{i=1}^{N} (r_i - \bar{r})(p_i - \bar{p})}{\sum_{i=1}^{N} (r_i - \bar{r})^2} \qquad (7) $$
where $N$ is the number of data points, $\bar{r}$ is the average of $r_i$, and $\bar{p}$ is the average of $p_i$.
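Equation (7) is the ordinary least-squares slope of $p$ against $r$; a minimal sketch:

```python
import numpy as np

def estimate_alpha(r, p):
    """Equation (7): least-squares slope between RCOG displacements r_i and
    marker displacements p_i observed within one step (both relative to L_0)."""
    r, p = np.asarray(r, dtype=float), np.asarray(p, dtype=float)
    dr = r - r.mean()
    return float(np.sum(dr * (p - p.mean())) / np.sum(dr * dr))
```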
Next, consider correcting the parameters from the error that accumulates while no marker information is obtained. Let $\Delta P$ be the Euclidean norm of the error between the coordinates calculated by the proposed method and the coordinates obtained from the marker; this error is corrected through $\alpha$ or $\beta$. Since $\alpha$ is already determined by Equation (7), $\Delta P$ is corrected through $\beta$. When $M$ steps are taken during the period in which no marker is observed, the total movement before correction is given by Equation (8) and the total movement after correction by Equation (9).
$$ \sum_{i=1}^{M} x_{i,rcog} = \sum_{i=1}^{M} \left( \frac{z_c}{g}\, \ddot{x}_m + l_{foot} \right) \qquad (8) $$
$$ \sum_{i=1}^{M} \hat{x}_{i,rcog} = \sum_{i=1}^{M} \left( \alpha \frac{z_c}{g}\, \ddot{x}_m + l_{foot} + \beta \right) \qquad (9) $$
Setting $\alpha = 1$ and taking the difference between the two equations gives $\Delta X = \sum_{i=1}^{M} \beta = M\beta$. Setting $\Delta X = \Delta P$ then yields $\beta = \Delta P / M$. By acquiring high-accuracy fiducial marker information in this way, the position and orientation on the map can be identified and the walking movement estimate corrected in real time.
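A sketch of the $\beta$ update, assuming the step count $M$ since the last marker observation is tracked:

```python
import numpy as np

def estimate_beta(p_estimated, p_marker, m_steps):
    """Section 4.2: beta = dP / M, distributing the accumulated position
    error over the M steps walked while no marker was visible."""
    dP = float(np.linalg.norm(np.asarray(p_marker, dtype=float)
                              - np.asarray(p_estimated, dtype=float)))
    return dP / max(m_steps, 1)
```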

4.3. Measurement System

We now describe the system that simultaneously acquires IMU and marker information and calculates the wearer’s movement trajectory in real time using the above algorithm. The IMU acquires data at 60 Hz, from which the RCOG is calculated using Equation (3) while the walking state is monitored. When a support-leg exchange is detected, the movement trajectory for one step is calculated and added to the past history to obtain the current position.
Marker information is acquired continuously in parallel. While no marker is detected, the above calculation proceeds alone. When a marker is detected, the self-position and orientation are updated using Equation (5). Afterwards, when the next support-leg exchange is detected and the one-step trajectory is calculated, the self-position is corrected retroactively to the time the marker was detected; $\alpha$ and $\beta$ are updated at the same time, and the updated coefficients are applied from the following step. The flow of the proposed method is shown in Figure 7.
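The following sketch composes the helper functions above into the loop of Figure 7, under simplifying assumptions: the IMU and marker streams are synchronized generators, only the $\beta$ update is shown (the $\alpha$ update of Equation (7) would be applied in the same marker branch), and $l_{foot}$ is added once per step, which matches the per-sample form of Equation (3) because a constant offset cancels in the min-to-max difference.

```python
import numpy as np

def localization_loop(imu_stream, marker_stream, z_c=1.6, l_foot=0.28):
    """Illustrative composition of the real-time flow of Figure 7."""
    alpha, beta = 1.0, 0.0
    position = np.zeros(2)
    buffer, steps_since_marker = [], 0

    for sample, marker_pose in zip(imu_stream, marker_stream):
        buffer.append(sample["acc_forward"])

        if marker_pose is not None:
            # Marker visible: Equation (5) gives the absolute pose, and the
            # accumulated error retunes beta (Section 4.2).
            if steps_since_marker > 0:
                beta = estimate_beta(position, marker_pose["p"], steps_since_marker)
            position = np.asarray(marker_pose["p"], dtype=float)
            steps_since_marker = 0

        if sample["support_leg_exchanged"]:
            # One step completed: corrected displacement per Equation (6).
            rcog = (z_c / 9.81) * np.asarray(buffer, dtype=float)
            _, travel = single_step_displacement(rcog)
            step = alpha * travel + l_foot + beta
            theta = sample["yaw"]
            position = position + np.array([np.cos(theta), np.sin(theta)]) * step
            steps_since_marker += 1
            buffer.clear()
    return position
```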

5. Experiments

We conducted an experiment to evaluate the accuracy of the proposed method. An optical motion capture system was used to accurately measure a person’s walking trajectory. This research received approval from the Life Science Committee of the National Institute of Advanced Industrial Science and Technology (AIST) (ID: 2022-1247).

5.1. Experimental Conditions

Figure 8 shows a photograph of the experimental environment and the approximate movement trajectory. One high-accuracy fiducial marker (100 mm on a side) was installed in the environment, and four reflective markers were attached around it, as shown in Figure 8a, to determine its position and orientation in the experimental environment. A reflective marker was also attached to the head of the participant, and its trajectory on the two-dimensional plane was taken as the true value. The position and orientation of the fiducial marker in the environment were computed from the surrounding reflective markers, and the participant’s position was obtained by coordinate transformation. The monocular camera was a megapixel USB camera (GS720P02-L100, global shutter, CMOS, 1280 × 720, f = 3.6 mm). The IMU was an MTw (Movella Inc. [32]), which connects to a PC wirelessly and computes orientation data with high accuracy using a real-time Kalman filter. The camera and IMU data were recorded simultaneously on a laptop computer (CPU: Core i5-8265U, memory: 16 GB). The true value was measured with the MAC3D System developed by Motion Analysis Corporation, which tracked position in a laboratory of about 8 m × 6 m using 24 cameras. Walking motion was measured in three trials for each of the three movement routes shown in Figure 8b, with the wearer walking 5 laps per trial. The method was evaluated using the resulting two-dimensional coordinate data. The parameters were $z_c$ = 1600 mm and $l_{foot}$ = 280 mm. $z_c$ is nominally the height of the center of gravity, but since the COG acceleration cannot be measured directly, the height of the IMU (the height of the wearer’s neck) was used as $z_c$; $l_{foot}$ is the size of the wearer’s foot.

5.2. Experimental Results

Experimental results are shown in Figure 9, Figure 10 and Figure 11. The red circles in Figure 9 are the results of the proposed method combining markers and the IMU, the blue circles are the measurement results from markers alone, and the green line is the true value. The movement route is route (2) in Figure 8b.
From Figure 9 and Figure 10, the trajectory of the proposed method is close to the true value. Some discontinuous changes can be seen in the red circles: the RCOG obtained from Equation (2) is a continuous trajectory, but adding the foot size $l_{foot}$ as in Equation (3) makes the trajectory discontinuous by $l_{foot}$. In this experiment the travel path was short, so no large error occurred, $\beta$ was not corrected, and the correction of $\alpha$ was small.
For all measurement results, the Root Mean Square Error (RMSE), the Mean Square Error (MSE), and the correlation coefficient were calculated between the estimates of the proposed method and the true values. Figure 11 shows the results.
The average RMSE is 0.439 m on the x-axis and 0.325 m on the y-axis; since each lap exceeds 12 m, this corresponds to a relative error of about 3%, which is sufficiently small compared to the travel distance. The correlation coefficients are high, at 0.967 and 0.942. This confirms that the method can estimate a person’s walking trajectory with high accuracy.
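For reference, a minimal sketch of how the per-axis RMSE and correlation coefficient can be computed from time-aligned trajectories:

```python
import numpy as np

def evaluate(estimated, truth):
    """Per-axis RMSE and correlation coefficient for (N, 2) trajectories
    sampled at matching times."""
    est = np.asarray(estimated, dtype=float)
    gt = np.asarray(truth, dtype=float)
    rmse = np.sqrt(np.mean((est - gt) ** 2, axis=0))
    corr = np.array([np.corrcoef(est[:, k], gt[:, k])[0, 1] for k in range(2)])
    return rmse, corr
```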
Furthermore, a comparison with a conventional method was conducted: PDR [9], which like our method requires only an IMU, computed offline. Gait detection was performed using our method. PDR involves arbitrary adjustment parameters; here the values $\alpha = 7.35$ and $\beta = 1.24$ reported in the literature [9] were used. Additionally, because our method is corrected by markers, the comparison used only data from intervals where no marker was detected. RMSE and correlation coefficients were computed against the ground truth; the results are presented in Figure 12.
From Figure 12, our method has a smaller RMSE than PDR. Its correlation coefficient is also higher, although the difference along the x-axis is not significant. It can therefore be confirmed that our method is superior to the conventional method. Since PDR involves integration in its calculation process, its larger RMSE is likely due to integration errors.

5.3. Experiments in Large Areas

In the experiment of the previous section, the error between the IMU-based estimate and the position obtained from the fiducial marker was small, so the correction effect of $\alpha$ and $\beta$ was limited. We therefore verified the correction effect through walking experiments in a wide space, conducted in the corridor and open space on the 3rd floor of the AIST Kashiwa Center. The floor plan and dimensions are shown in Figure 13.
The device wearer walked around a rectangle of approximately 11,000 mm × 9000 mm. Fiducial markers were set up at three locations; marker 1 was taken as the origin, and the relative distances to the other two markers were determined by direct measurement. To verify the correction effect, two values of $z_c$ were used, 1000 mm and 2500 mm, with $l_{foot}$ set to 280 mm as before. Making $z_c$ smaller or larger than in the previous experiment is expected to increase the estimation error; we verified whether the proposed method can appropriately correct these errors.
Figure 14 shows the results for $z_c$ = 1000 mm. The red, pink, orange, and yellow lines show the first, second, third, and final laps, respectively. The thick line is the marker-based measurement, and the green line is the approximate movement trajectory.
In the first lap, the estimated movement distance is too small, producing a large error relative to the marker measurements. The error shrinks each time a marker measurement is repeated, eventually reaching $\alpha = 1.54$ and $\beta = 171$. The initial error of approximately 4.84 m was reduced to 1.54 m by the end, and at some points along the way it decreased to 0.389 m.
Figure 15 shows the results for $z_c$ = 2500 mm. The red line is the first lap and the pink line the final segment; the thick line is the marker-based measurement. In this case, accuracy improved after only a few corrections, so a single lap sufficed.
Because $z_c$ is large, the initial estimated trajectory is also too large, and it is gradually improved by correction, ending at $\alpha = 0.651$ and $\beta = 98.7$. The initial error of approximately 2.82 m decreased to 0.62 m in the end. Although this method requires setting $z_c$ and $l_{foot}$, appropriate values are shown to be obtained automatically through marker-based correction. However, without a sufficient number of marker-derived position and orientation data, appropriate corrections cannot be made. Moreover, since the method does not guarantee convergence of the correction, the correction values may diverge during long-term use. A framework that can evaluate the validity of the correction results in real time is required.

5.4. Discussion

By setting appropriate parameters, it is possible to accurately calculate the movement trajectory using only the IMU up to a distance of about 12 m (see the movement result from marker 3 to marker 1 in Figure 15). However, if there is no update by the marker, the error may increase. For this reason, it is desirable to place markers at intervals of about 10 m.
The proposed method is intended for indoor use, where lighting is unlikely to significantly reduce marker detection accuracy. Because the markers are processed by binarization, the calculations are unaffected as long as the contrast is sufficient. Note, however, that light as strong as direct sunlight striking the marker perpendicularly (outdoors, the morning or evening sun) significantly degrades marker quality, and detection accuracy may also drop in unlit, dark places. Such cases require some ingenuity, for example a mechanism that lets the marker itself emit light.
In crowded places, people may obstruct the camera’s view and the marker may be lost. This risk can be minimized by placing markers above head height.
Although the experiments so far have not shown walking speed to significantly affect the results, the IMU’s relatively low sampling rate of 60 Hz means that a high moving speed may affect the trajectory calculation. For robust and accurate calculation, an IMU with a higher sampling rate would be preferable.
Our method can determine the parameters required for calculation from systematic features. Therefore, unlike methods that use machine learning or optimization calculations, the proposed method does not require data to be collected in advance. However, it should be noted that machine learning methods may be more advantageous when a large amount of training data is available.

6. Conclusions

Estimating self-position accurately in indoor environments where GPS is unavailable is a challenging problem, unlike outdoor scenarios. In large-scale warehouses, a method for simultaneously measuring the self-locations of multiple people indoors is required so that employees can immediately know their own location and managers can assess each employee’s work efficiency; to collect large amounts of data, the device must also be inexpensive and portable. In this study, we proposed a method that leverages a wearable device equipped with a camera and an IMU, together with fiducial markers, to enable straightforward, high-accuracy self-position estimation. Integrating accelerations generally accumulates error; in this research, we instead developed an algorithm that estimates the wearer’s movement trajectory without integration by employing the COG ZMP model. Because the COG ZMP model is a simplified representation of human walking, modeling errors were a concern, so we also introduced an algorithm for real-time tuning of the walking-path parameters using fiducial marker information. Evaluating the proposed approach with an optical motion capture system confirmed a relative error of approximately 3% with respect to the distance traveled, indicating a high level of estimation accuracy. Furthermore, by applying real-time corrections using fiducial markers, we demonstrated that an appropriate trajectory can be estimated even when the COG ZMP model parameters are set to arbitrary values. The accuracy achieved in this experiment is sufficient for assessing the work efficiency of employees in warehouses and factories. However, because real-time computation is necessary, our method does not employ convergence calculations or optimization; consequently, the validity of the adjustment parameters cannot be assessed, and parameter divergence is possible. In the future, we plan to explore algorithms that ensure convergence to suitable parameter values and to validate the approach through experiments in real-world environments such as commercial facilities and indoor workspaces. Since this method uses a walking model, it cannot correctly infer running motion; we will also consider a position estimation method using a running model.

Author Contributions

Conceptualization, K.O. and H.T.; methodology, K.O.; software, K.O.; validation, K.O.; formal analysis, K.O.; investigation, K.O. and H.T.; resources, K.O.; data curation, K.O.; writing—original draft preparation, K.O.; writing—review and editing, K.O. and H.T.; visualization, K.O.; supervision, K.O. and H.T.; project administration, H.T.; funding acquisition, H.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ogiso, S.; Mori, I.; Miura, T.; Nakae, S.; Okuma, T.; Haga, Y.; Hatakeyama, S.; Kimura, K.; Kimura, A.; Kurata, T. Integration of BLE-based proximity detection with particle filter for day-long stability in workplaces. In Proceedings of the 2023 IEEE/ION Position, Location and Navigation Symposium (PLANS), Monterey, CA, USA, 24–27 April 2023; pp. 1060–1065. [Google Scholar]
  2. Nakae, S.; Ogiso, S.; Mori, I.; Miura, T.; Okuma, T.; Haga, Y.; Hatakeyama, S.; Kimura, K.; Kimura, A.; Kurata, T. Geospatial intelligence system for evaluating the work environment and physical load of factory workers. In Proceedings of the 2023 45th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Sydney, Australia, 24–28 July 2023. [Google Scholar]
  3. Correa, A.; Barcelo, M.; Morell, A.; Vicario, J.L. A Review of Pedestrian Indoor Positioning Systems for Mass Market Applications. Sensors 2017, 17, 1927. [Google Scholar] [CrossRef] [PubMed]
  4. Patwari, N.; Ash, J.N.; Kyperountas, S.; Hero, A.O.; Moses, R.L.; Correal, N.S. Locating the nodes: Cooperative localization in wireless sensor networks. IEEE Signal Process. Mag. 2005, 22, 54–69. [Google Scholar] [CrossRef]
  5. Banin, L.; Bar-Shalom, O.; Dvorecki, N.; Amizur, Y. Scalable Wi-Fi Client Self-Positioning Using Cooperative FTM-Sensors. IEEE Trans. Instrum. Meas. 2019, 68, 10. [Google Scholar] [CrossRef]
  6. Harle, R. A Survey of Indoor Inertial Positioning Systems for Pedestrians. IEEE Commun. Surv. Tutor. 2013, 15, 1281–1293. [Google Scholar] [CrossRef]
  7. Kawaguchi, N.; Nozaki, J.; Yoshida, T.; Hiroi, K.; Yonezawa, T.; Kaji, K. End-to-End Walking Speed Estimation Method for Smartphone PDR using DualCNN-LSTM. In Proceedings of the Tenth International Conference on Indoor Positioning and Indoor Navigation (IPIN 2019), Pisa, Italy, 30 September–3 October 2019. [Google Scholar]
  8. Ban, R.; Kaji, K.; Hiroi, K.; Kawaguchi, N. Indoor Positioning Method Integrating Pedestrian Dead Reckoning with Magnetic Field and WiFi Fingerprints. In Proceedings of the 8th International Conference on Mobile Computing and Ubiquitous Networking (ICMU), Hakodate, Japan, 20–22 January 2015; IEEE: Piscataway, NJ, USA; pp. 167–172. [Google Scholar]
  9. Kourogi, M.; Kurata, T. Personal Positioning based on Walking Locomotion Analysis with Self-Contained Sensors and a Wearable Camera. In Proceedings of the Second International Symposium on Mixed and Augmented Reality (ISMAR 03), Tokyo, Japan, 7–10 October 2003; pp. 103–112. [Google Scholar]
  10. Zhao, H.; Cheng, W.; Yang, N.; Qiu, S.; Wang, Z.; Wang, J. Smartphone-Based 3D Indoor Pedestrian Positioning through Multi-Modal Data Fusion. Sensors 2019, 19, 4554. [Google Scholar] [CrossRef] [PubMed]
  11. Jackermeier, R.; Ludwig, B. Exploring the Limits of PDR-based Indoor Localisation Systems under Realistic Conditions. J. Locat. Based Serv. 2018, 12, 231–272. [Google Scholar] [CrossRef]
  12. Yotsuya, K.; Naito, K.; Chujo, N.; Mizuno, T.; Kaji, K. Method to Improve Accuracy of Indoor PDR Trajectories Using a Large Number of Trajectories. J. Inf. Process. 2020, 28, 44–54. [Google Scholar] [CrossRef]
  13. Wang, M.; Duan, N.; Zhou, Z.; Zheng, F.; Qiu, H.; Li, X.; Zhang, G. Indoor PDR Positioning Assisted by Acoustic Source Localization, and Pedestrian Movement Behavior Recognition, Using a Dual-Microphone Smartphone. Wirel. Commun. Mob. Comput. 2021, 2021, 9981802. [Google Scholar] [CrossRef]
  14. Wu, Y.; Chen, R.; Fu, W.; Zhou, W.L.H.; Guo, G. Indoor positioning based on tightly coupling of PDR and one single Wi-Fi FTM AP. GEO-Spat. Inf. Sci. 2023, 26, 480–495. [Google Scholar] [CrossRef]
  15. Angermann, M.; Robertson, P. FootSLAM: Pedestrian Simultaneous Localization and Mapping Without Exteroceptive Sensors-Hitchhiking on Human Perception and Cognition. Proc. IEEE 2012, 100, 1840–1848. [Google Scholar] [CrossRef]
  16. Vidal, A.; Rebecq, H.; Horstschaefer, T.; Scaramuzza, D. Ultimate SLAM? Combining Events, Images, and IMU for Robust Visual SLAM in HDR and High Speed Scenarios. IEEE Robot. Autom. Lett. 2018, 3, 994–1001. [Google Scholar] [CrossRef]
  17. Fairly Devices. Available online: https://linklet.ai/en (accessed on 3 April 2023).
  18. Ogata, K.; Tanaka, H.; Kourogi, M. Work Recognition and Movement Trajectory Acquisition using a Multi-Sensing Wearable Device. In Proceedings of the 2023 IEEE Conference on Systems, Man, and Cybernetics (SMC), Oahu, HI, USA, 1–4 October 2023. [Google Scholar]
  19. Koide, K.; Menegatti, E. Non-overlapping RGB-D Camera Network Calibration with Monocular Visual Odometry. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 24 October 2020–24 January 2021; pp. 9005–9011. [Google Scholar]
  20. Koide, K.; Oishi, S.; Yokozuka, M.; Banno, A. Scalable Fiducial Tag Localization on a 3D Prior Map via Graph-Theoretic Global Tag-Map Registration. In Proceedings of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan, 23–27 October 2022; pp. 5347–5353. [Google Scholar]
  21. Foxlin, E.; Naimark, L. VIS-Tracker: A Wearable Vision-Inertial Self-Tracker. In Proceedings of the IEEE Virtual Reality, Los Angeles, CA, USA, 22–26 March 2003. [Google Scholar]
  22. Neges, M.; Koch, C.; König, M.; Abramovici, M. Combining visual natural markers and IMU for improved AR based indoor navigation. Adv. Eng. Inform. 2017, 31, 18–31. [Google Scholar] [CrossRef]
  23. Lynen, S.; Achtelik, M.W.; Weiss, S.; Chli, M.; Siegwart, R. A Robust and Modular Multi-Sensor Fusion Approach Applied to MAV Navigation. In Proceedings of the IEEE/RSJ Conference on Intelligent Robots and Systems (IROS), Tokyo, Japan, 3–7 November 2013; pp. 3923–3929. [Google Scholar]
  24. Ogata, K.; Tanaka, H.; Matsumoto, Y. High Accuracy Three-Dimensional Self-Localization using Visual Markers and Inertia Measurement Unit. In Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 27 September–1 October 2021; pp. 1154–1160. [Google Scholar]
  25. Vukobratovic, M.; Stepanenko, J. On the stability of anthropomorphic systems. Math. Biosci. 1972, 15, 1–37. [Google Scholar] [CrossRef]
  26. Kajita, S. Introduction to Humanoid Robotics; Springer: Berlin/Heidelberg, Germany, 2014. [Google Scholar]
  27. Zijlstra, W. Assessment of spatio-temporal parameters during unconstrained walking. Eur. J. Appl. Physiol. 2004, 92, 39–44. [Google Scholar] [CrossRef] [PubMed]
  28. Tanaka, H.; Ogata, K.; Matsumoto, Y. Improving the accuracy of visual markers by four dots and image interpolation. In Proceedings of the 2016 IEEE International Symposium on Robotics and Intelligent Sensors (IRIS), Tokyo, Japan, 17–20 December 2016; pp. 178–183. [Google Scholar]
  29. Tanaka, H.; Ogata, K.; Matsumoto, Y. Solving Pose Ambiguity of Planar Visual Marker by Wavelike Two-tone Patterns. In Proceedings of the 2017 IEEE International Conference on Intelligent Robots and Systems, Vancouver, BC, Canada, 28 September 2017. [Google Scholar]
  30. LEAG Solutions Corporation. Available online: https://leag.jp/ (accessed on 16 August 2024). (In Japanese).
  31. Elgendy, M.; Sik-Lanyi, C.; Kelemen, A. A Novel Marker Detection System for People with Visual Impairment Using the Improved Tiny-YOLOv3 Model. Comput. Methods Progr. Biomed. 2021, 205, 106112. [Google Scholar] [CrossRef] [PubMed]
  32. Movella Inc. Available online: https://www.movella.com/ (accessed on 22 November 2024).
Figure 1. Appearance and operation image of the measurement device.
Figure 2. Schematic diagram of COG ZMP model. (a) Single leg support phase. (b) Double leg support phase.
Figure 3. Inverted pendulum and support leg switching.
Figure 4. Schematic diagram of gait during the double support phase. (a) Movement of the reference point. (b) Decision to change the supporting leg based on center of gravity acceleration.
Figure 5. Extracting the single-leg support period and complementing the data of the double-leg support period. (a) Obtain the positions of the maximum and minimum values. (b) Complement the RCOG during the double support phase. (c) Final RCOG calculation result.
Figure 6. High-accuracy fiducial markers. (a) Marker with lenticular lens. (b) Marker without lens.
Figure 7. Flow of the proposed algorithm.
Figure 8. Experimental environment. (a) Appearance of reference markers and reflective markers. (b) Overview of the walking route layout.
Figure 9. Trajectory on the xy plane.
Figure 10. Time series trajectories on the x and y axes.
Figure 11. RMSE, MSE and correlation coefficient between estimated and true values.
Figure 12. RMSE and correlation coefficient between estimated and true values.
Figure 13. Corridor and open space layout.
Figure 14. Results of walking movement trajectory in open space ($z_c$ = 1000 mm).
Figure 15. Results of walking movement trajectory in open space ($z_c$ = 2500 mm).