1. Introduction
During the last decade, great progress has been made in the development of Indoor Positioning Systems (IPSs). Both academia and industry are targeting high-accuracy IPSs for a variety of applications and scenarios. It is commonly accepted in both communities that the positioning of emergency responders during their missions is one of the most challenging scenarios for an IPS [
1,
2,
3,
4,
5,
6,
7,
8]. Mission scenarios are unstructured, cover wide operational areas, and the surrounding environment is harsh and highly dynamic. All these application-specific constraints make the use of preinstalled infrastructure, maps, and localization methods that require an offline or calibration phase (e.g., fingerprinting) unfeasible [
2,
5,
7,
8].
Based on their underlying technological principle, current IPSs for emergency responders can be classified as radio signal-based, IMU-based, and hybrid systems [
8]. Radio signal-based IPSs can be designed based on Wi-Fi [
5,
9,
10,
11], ultra-wide band (UWB) [
12,
13], ZigBee [
14,
15], Bluetooth [
9,
16] and RFID [
9,
16]. These technologies can be used alone or combined to improve the IPS accuracy, since each one provides a different ranging granularity. The main advantages of these systems are: radio waves can travel through obstacles, the system performance is not affected by the user’s motion (e.g., walking, running, and crawling) and can be improved by deploying more nodes in the scenario, and the positioning infrastructure can be reused for communication [
8]. Their disadvantages are as follows: at least three different ranging measurements are required to compute the user’s position, the nodes interfere with the emergency responders’ operations (they have to be deployed as the responders enter a building), the performance degrades due to radio propagation phenomena (e.g., non-line-of-sight (NLOS) conditions, high temperatures, thick smoke, and humidity), and some anchor nodes risk being destroyed by the fire or falling debris [
8].
IMU-based IPSs are another area that has attracted many researchers to address the problem of localization for emergency responders [
4,
7,
17,
18,
19,
20,
21,
22,
23,
24]. These systems are based on inertial and motion sensors (e.g., 3D accelerometer, 3D gyroscope, 3D magnetometer, and barometer) that compose an inertial measurement unit (IMU). These IMUs can be mounted on the head [
17,
20], chest [
4], foot [
19,
21], dual foot [
23,
24] and different body segments [
7,
18,
22]. The advantages of such IPSs are: zero radiation signature, low-cost, they do not require additional infrastructure, they provide continuous positioning, and are capable of operating in all indoor environments [
8]. The drawbacks of these systems are that the error grows quickly as the time interval without position correction increases, the performance is affected by the type of movement, and they lack a communication infrastructure to send the computed position to the incident commander [
8].
Finally, the hybrid systems combine both technologies (radio and IMU) to overcome the limitations when each technology is used alone. A data fusion algorithm is used to merge the positioning data obtained from both subsystems. Relate Trails [
25], PPL [
26], Virtual Lifeline [
27], GLANSER [
28], WASP [
29], and the work of Simon et al. [
30] are examples of hybrid systems for emergency responders. The advantages of these systems are: improved accuracy, continuous position estimation, bidirectional communication, adaptability to all environments, and immunity to the user’s motion [
8]. However, these IPSs also have some drawbacks, namely: increased complexity and development time due to the data fusion algorithms, higher cost and energy consumption, and a radiation signature [
8].
In this paper, the UWB-based IPS of the PROTACTICAL Personal Protective Equipment (PPE) is presented and discussed. The PROTACTICAL PPE is a project financed by the Portuguese QREN program (I&IDT-Project in Co-promotion No. 23267), which aims to improve the emergency responder’s performance, resilience, and safety. Besides monitoring the position of the emergency responders, the PROTACTICAL PPE also provides thermal isolation, monitoring of physiological and environmental parameters, and real-time communication between the emergency responder and the incident commander. When compared with other localization technologies, UWB is capable of providing robust signaling and through-wall propagation, and its large bandwidth allows high-resolution ranging even in harsh environments, which makes it an attractive solution for emergency responders’ IPSs [
7,
31,
32,
33,
34]. Typically, the UWB transceivers rely on the Time-of-Flight (ToF) technique for the ranging estimation. However, in indoor environments, these estimates are likely to be affected by NLOS propagation, which leads to positive biases in distance estimation [
35,
36]. Due to the unstructured nature of the environments where the missions of emergency responders take place, it is very difficult to predict which obstruction caused the NLOS propagation. The only assumption that can be made is that the human body can lead to it, since the UWB transceiver is mounted on the user [
31,
32,
33,
37].
Emergency responders’ missions represent a challenging indoor positioning application that imposes strict requirements on the design of the IPS. Therefore, the goal of this paper is to evaluate and compare different positioning algorithms and select the one that best suits such scenarios. So, based on the IPS requirements defined in [
8], the performance assessment of the algorithms is conceived as follows:
High Performance with Low Ranging Measurements—unlike Wireless Sensor Networks (WSNs) applications, where tens of ranging measurements can be available [
38,
39,
40,
41], during emergency responders’ missions the availability of radio signals is generally low. This happens for the following reasons: no reliable infrastructure capable of computing the emergency responders’ position exists in a building, the deployment cannot interfere with the emergency responders’ activities, UWB signals have low penetration capability in indoor environments (up to 40 m in NLOS scenarios), and some anchor nodes risk being destroyed by the fire or falling debris. So, the performance (accuracy, precision, and root mean square error (RMSE)) of the positioning algorithms is assessed with only three ranging measurements, the minimum number required to compute the user’s position;
Information Accessibility—the position update rate of the UWB-based IPS is between 1 and 2 Hz, which clearly complies with the requirements (<40 s) [
8,
11]. Due to the nature of the UWB technology, the information security is also guaranteed;
Immunity to Environment-Related Perturbations—the performance of the system must be independent of the scenario, so the positioning algorithms are tested under different scenarios (atrium and lab), propagation conditions (LOS and NLOS due to a human body) and movements (static and dynamic). In other words, we want to evaluate which positioning algorithm presents higher immunity to noisy measurements and, therefore, is more likely to provide a robust position estimate under the different propagation conditions of indoor environments. The performance of the localization algorithms is also compared after running an NLOS identification and error mitigation algorithm developed for the NLOS condition caused by the human body. This method proved to be an effective way to improve the performance of all algorithms under NLOS conditions.
Although this work is conducted within the scope of an IPS for emergency responders, the work performed here and its conclusions carry over to other indoor positioning applications. The remainder of the paper is organized as follows:
Section 2 presents the materials and methods used to perform and validate this research. In this section, the UWB transceivers used to acquire the ranging measurements, the NLOS identification and error mitigation algorithm for UWB transceivers mounted on the human body, the localization algorithms, the performance metrics, and the experimental setup are described. The results and the discussion of the experiments are presented in
Section 3 and
Section 4, respectively. Finally,
Section 5 presents the conclusions of the work performed.
2. Materials and Methods
2.1. DW1000 UWB Transceiver
The DW1000 chip is a UWB transceiver compliant with the IEEE802.15.4-2011 standard developed by DecaWave (Dublin, Ireland) that allows very accurate ranging measurements [
42]. The main advantage of this UWB transceiver over its competitors (e.g., Ubisense and Time Domain products) is its low cost (approx. €19 per unit). These transceivers transmit pulses that are a few nanoseconds long, with a bandwidth of 500 or 900 MHz and a center frequency that spans from 3.5 to 6.5 GHz. The high temporal resolution required to perform UWB communication allows a ranging accuracy down to a few centimeters in line-of-sight (LOS) conditions [
42]. Due to its high bandwidth and spectrum usage, the transmit power density of UWB transceivers is limited to −41.3 dBm/MHz to avoid inter-system interference. This restriction limits their operational range to 300 m in LOS and 40 m in NLOS [
42].
Ranging in these transceivers is performed using two-way ranging (TWR), which relies on the ToF technique. The tag node is responsible for starting the ranging procedure and the anchor for computing the respective distance. In this work, the DW1000 UWB transceivers are configured to operate on channel 4 (500-MHz bandwidth with a center frequency of 3993.6 MHz), with a preamble length of 1024 and a data rate of 110 kb/s.
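The TWR distance computation described above can be sketched as follows. This is a minimal illustration of the single-sided variant of the scheme, not DecaWave's API: the timestamp names and the function are ours, and the paper does not state which TWR variant the DW1000 firmware uses.

```python
# Sketch of single-sided two-way ranging (SS-TWR): the tag measures the
# round-trip time, the anchor reports its turnaround time, and the ToF is
# half their difference. Names are illustrative, not DW1000 API calls.
C = 299_702_547.0  # approximate speed of light in air, m/s

def ss_twr_distance(t_round: float, t_reply: float) -> float:
    """Estimate the tag-anchor distance in meters.

    t_round: tag-side time between sending the poll and receiving the response.
    t_reply: anchor-side turnaround time between poll reception and response.
    """
    tof = (t_round - t_reply) / 2.0  # one-way time of flight, seconds
    return tof * C
```

In practice, clock drift between the two crystals biases this estimate, which is one of the LOS error sources the paper lists later.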
2.2. NLOS Identification and Error Mitigation
This section is a summary of work conducted by the authors that is to be published separately and is currently under review. Full details of the measurement campaign and the NLOS identification and error mitigation algorithm are detailed in [
43].
A key feature of the DW1000 UWB transceivers is their built-in diagnostic capability. Through the processing of the CIR data obtained from the received waveforms, it is possible to infer the propagation condition (LOS or NLOS). Common metrics to assess the channel condition are: amplitude (e.g., RSS, maximum amplitude of the received signal, power difference and power ratio), temporal (e.g., ToA, RMS delay-spread, peak-to-lead delay, rise time, mean excess delay and maximum excess delay), and CIR data distribution (kurtosis and skewness) [
35,
44,
45]. The amplitude-based statistics are immediately, or with little processing, available from the DW1000 UWB transceivers, whereas the temporal and CIR data distribution statistics require additional processing that can add a delay of 4–5 s [
44]. This additional delay can compromise the real-time requirements of IPS for emergency responders.
A measurement campaign was conducted in a corridor to evaluate the impact that different propagation conditions have on the amplitude-based statistics and to determine which is best for NLOS identification. During this measurement campaign four different propagation conditions were assessed, one LOS and three NLOS (caused by a fire door, a wall, and the human body). For each test, the distance between the two transceivers varies from 1 up to 44 m. Based on the results obtained, it was verified that the highest ranging error occurs when the UWB transceiver is mounted on the human body and that the best metric for NLOS identification is the power difference ($P_{diff}$). $P_{diff}$ is defined as follows:

$$P_{diff} = P_{RX} - P_{FP}$$

where $P_{RX}$ and $P_{FP}$ are, respectively, the estimated received power and the RSS in the first path, which are defined as follows [46]:

$$P_{RX} = 10\log_{10}\!\left(\frac{C \cdot 2^{17}}{N^{2}}\right) - A, \qquad P_{FP} = 10\log_{10}\!\left(\frac{F_{1}^{2} + F_{2}^{2} + F_{3}^{2}}{N^{2}}\right) - A$$

where $C$ is the channel impulse response power, $N$ is the preamble accumulation count, $A$ is a system predefined constant (121.74 dBm for a pulse repetition frequency of 64 MHz), and $F_{1}$, $F_{2}$, and $F_{3}$ are the first path amplitude points. All these parameters are acquired from the registers of the DW1000 UWB transceiver after the reception of a ranging message.
According to the results of the measurement campaign under the human body influence, the $P_{diff}$ metric is uncorrelated with the true distance and has a low overlap between the LOS and NLOS conditions. So, a simple threshold-based algorithm was implemented for NLOS identification:

$$\text{channel} = \begin{cases} \text{LOS}, & P_{diff} \leq P_{th} \\ \text{NLOS}, & P_{diff} > P_{th} \end{cases}$$

where $P_{th}$ is the static threshold that minimizes the misclassification rate. An accuracy of 93% was obtained in the identification of the NLOS condition caused by the human body.
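The power-difference computation and the threshold test can be sketched as below. The register-derived power formulas follow the DW1000 documentation; the threshold value is a placeholder, not the paper's fitted parameter (those are listed in Table 1), and all function names are ours.

```python
# Sketch of the threshold-based NLOS identifier: a large gap between total
# received power and first-path power indicates an attenuated direct path,
# i.e., NLOS. The 6.0 dB threshold is illustrative only.
import math

A_64MHZ = 121.74  # DW1000 constant for a 64 MHz pulse repetition frequency, dBm

def rx_power(cir_power: float, preamble_count: int) -> float:
    """Estimated received power (dBm) from DW1000 diagnostic registers."""
    return 10.0 * math.log10(cir_power * 2**17 / preamble_count**2) - A_64MHZ

def first_path_power(f1: float, f2: float, f3: float, preamble_count: int) -> float:
    """RSS in the first path (dBm) from the three first-path amplitude points."""
    return 10.0 * math.log10((f1**2 + f2**2 + f3**2) / preamble_count**2) - A_64MHZ

def is_nlos(p_rx: float, p_fp: float, threshold_db: float = 6.0) -> bool:
    """Classify the channel from the power difference P_diff = P_RX - P_FP."""
    return (p_rx - p_fp) > threshold_db
```

A static threshold like this keeps the per-measurement cost negligible, which is what makes the amplitude-based metric usable in real time.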
For the NLOS error mitigation, since the error varies with the distance, a second-order polynomial model was proposed:

$$\hat{\varepsilon}(\hat{d}) = a\hat{d}^{2} + b\hat{d} + c$$

where $\hat{\varepsilon}$ is the estimated ranging error, $\hat{d}$ is the estimated distance, and $a$, $b$, and $c$ are the curve-specific parameters. This model was obtained by calculating the median of the estimated distances—for each evaluated distance—and the NLOS error—measured in the NLOS dataset with the human body influence. With the proposed NLOS error mitigation, the standard deviation of the ranging measurements was reduced by 60% (from 3.72 to 1.47 m), and the ranging error was successfully approximated to a white Gaussian distribution.
Table 1 shows the parameter values for the proposed NLOS identification and error mitigation algorithm. These values were obtained with the curve fitting toolbox of Matlab and are used during the experimental evaluation.
2.3. ToA-Based Localization Algorithms
In this subsection, four typical ToA-based localization algorithms are introduced. All the analyzed algorithms rely on ranging measurements, provided by the DW1000 UWB transceiver, to compute the 2D localization of the tag node. To keep the complexity of the localization algorithm low, the tag’s height is not computed by the localization algorithm. As an alternative, it can be obtained from additional sensors like barometers or pressure sensors. However, the extension to 3D is straightforward for all proposed algorithms. For simplicity, the position of the anchor nodes is known and does not change during experiments.
The four ToA-based localization algorithms studied are: the analytical method, the least-squares method, the nonlinear least-squares method based on a first-order Taylor expansion (Taylor series), and the EKF. Each algorithm has a different complexity and is designed to address different localization issues. The first two algorithms are the simplest to implement, and their differences lie in scalability and flexibility. For the analytical method, the number of possible ranging measurement combinations has to be known beforehand, since one equation has to be defined for each tag-anchor pair. On the other hand, the least-squares method allows adding more tag-anchor pairs without having to rewrite the algorithm. Neither algorithm handles the covariance of the ranging measurement error. To deal with the nonlinearity of the localization problem and the covariance of the ranging measurements, both the nonlinear least-squares method based on a first-order Taylor expansion and the EKF are proposed. Although the complexity of these algorithms is higher, an improvement in performance is expected when compared with the first two algorithms. While the Taylor series is an extension of the trilateration-based localization algorithms, the EKF is a predictive algorithm that aims to predict the next state based on a system model and the ranging measurements.
For the trilateration-based localization algorithms, the position of the tag is determined as the intersection of all circles. The center and radius of each imaginary circle are given by the coordinates of the corresponding anchor node and the ranging measurement between that anchor and the tag, respectively. Therefore, the circles can be described as:

$$(x - x_{i})^{2} + (y - y_{i})^{2} = d_{i}^{2}, \quad i = 1, \ldots, N$$

where $(x, y)$ is the position of the tag, $(x_{i}, y_{i})$ is the known position of anchor $i$, $N$ is the number of anchor nodes, and $d_{i}$ is the true distance between anchor $i$ and the tag. The estimate $\hat{d}_{i}$ of $d_{i}$ is obtained by applying the following equation:

$$\hat{d}_{i} = c \cdot \text{ToA} = \begin{cases} d_{i} + e_{LOS}, & \text{LOS} \\ d_{i} + e_{NLOS}, & \text{NLOS} \end{cases}$$

where $c$ is the speed of light, ToA is the reported Time of Arrival, $d_{i}$ is the true distance between the transmitter and the receiver, $e_{LOS}$ is the ranging error in the LOS scenario, which includes all typical sources of error of a UWB ranging system (i.e., finite bandwidth, clock drift, PCB losses, thermal noise, etc.), and $e_{NLOS}$ is the result of the $e_{LOS}$ ranging error combined with the positive and random bias $b_{NLOS}$ caused by multipath propagation in the NLOS scenario.
As demonstrated in the previous subsection, the accuracy of range estimation is affected by several phenomena (e.g., noise, multipath, fading due to ground-bounce, and NLOS). If the ranging error is additive, the circles will not intersect at a single point. On the other hand, if the ranging error is subtractive, the circles may not intersect at all. So, the goal of the localization algorithm is to estimate a tag position as close as possible to the true tag position, even in the presence of noisy measurements.
2.3.1. Analytical Method
The analytical method is the simplest localization algorithm. This method determines the tag position by solving the nonlinear equations directly [
39,
47,
48,
49].
So, for the scenario where only three anchor nodes are available, which is the minimum number of different ranging measurements required, the localization problem is a set of three equations with two unknowns:

$$(x - x_{i})^{2} + (y - y_{i})^{2} = \hat{d}_{i}^{2}, \quad i = 1, 2, 3 \quad (8)$$

Different techniques were proposed to solve the nonlinear equations above. In this work, the linear algorithm proposed in [49] is used as the analytical method. This method computes the position of the tag as the intersection of two virtual lines, each created from the two points where a pair of imaginary circles intersect. So, for an IPS with only three anchor nodes, the line that passes through the intersection of two circles (e.g., the circles centered at anchors 1 and 2) can be found by differencing the corresponding ranges in (8). The resulting equation is:

$$2(x_{2} - x_{1})x + 2(y_{2} - y_{1})y = \hat{d}_{1}^{2} - \hat{d}_{2}^{2} + \lVert p_{2} \rVert^{2} - \lVert p_{1} \rVert^{2} \quad (9)$$

where $\lVert p_{i} \rVert$ is the norm of the position of anchor $i$.

If the same procedure is repeated for anchors 2 and 3, the following line equation is obtained:

$$2(x_{3} - x_{2})x + 2(y_{3} - y_{2})y = \hat{d}_{2}^{2} - \hat{d}_{3}^{2} + \lVert p_{3} \rVert^{2} - \lVert p_{2} \rVert^{2} \quad (10)$$

A new line can be created for anchors 1 and 3. However, this line is not independent of the above lines, i.e., it does not add useful information about the tag position, since the three lines will always intersect at the same point [49]. So, for 2D localization only two lines are needed.

The position of the tag is obtained by solving (9) and (10) in terms of $y$, equating the obtained results, and solving in terms of $x$. Writing (9) and (10) as $a_{1}x + b_{1}y = c_{1}$ and $a_{2}x + b_{2}y = c_{2}$, the resulting equation is:

$$x = \frac{c_{1}b_{2} - c_{2}b_{1}}{a_{1}b_{2} - a_{2}b_{1}} \quad (11)$$

where $a_{k}$, $b_{k}$, and $c_{k}$ are the coefficients of lines (9) and (10). By substituting (11) into either (9) or (10) and solving in terms of $y$, we obtain:

$$y = \frac{a_{1}c_{2} - a_{2}c_{1}}{a_{1}b_{2} - a_{2}b_{1}}$$
An important consideration about this method is that the two lines may not intersect, due to ranging errors or to the geometric distribution of the anchors. In such a scenario, the position of the tag cannot be computed.
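The line-differencing procedure above can be sketched as follows. This is our reconstruction of the two-line intersection for three anchors, including the degenerate case where the lines are parallel; variable names are ours.

```python
# Sketch of the analytical (linear-intersection) method for three anchors:
# differencing pairs of circle equations yields two lines whose intersection
# is the tag position.
def analytical_position(anchors, ranges):
    """anchors: [(x1,y1),(x2,y2),(x3,y3)]; ranges: [d1,d2,d3].
    Returns (x, y), or None when the two lines do not intersect."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = ranges
    # Line from differencing circles 1 and 2: a1*x + b1*y = c1
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + (x2**2 + y2**2) - (x1**2 + y1**2)
    # Line from differencing circles 2 and 3: a2*x + b2*y = c2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = d2**2 - d3**2 + (x3**2 + y3**2) - (x2**2 + y2**2)
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:  # parallel lines: no unique intersection
        return None
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return (x, y)
```

The `det` check makes the failure mode noted above explicit: collinear anchors (or pathological ranging errors) leave the position undefined.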
2.3.2. Least-Squares Method
The nonlinear equations in (8) can be expressed in matrix form after some mathematical manipulation. So, they can be written as [47]:

$$\mathbf{A}\boldsymbol{\theta} = \mathbf{b}$$

where:

$$\mathbf{A} = \begin{bmatrix} -2x_{1} & -2y_{1} & 1 \\ \vdots & \vdots & \vdots \\ -2x_{N} & -2y_{N} & 1 \end{bmatrix}, \qquad \boldsymbol{\theta} = \begin{bmatrix} x \\ y \\ R \end{bmatrix}$$

and:

$$\mathbf{b} = \begin{bmatrix} \hat{d}_{1}^{2} - x_{1}^{2} - y_{1}^{2} \\ \vdots \\ \hat{d}_{N}^{2} - x_{N}^{2} - y_{N}^{2} \end{bmatrix}, \qquad R = x^{2} + y^{2}$$
To avoid the quadratic parameter $R$, Caffery proposed an alternative method for cancelling out the nonlinear terms and producing a linear model [49]. This method works by selecting one of the equations in (8) and subtracting it from the other equations. However, the accuracy of this method is highly dependent on the distance between the selected anchor node and the tag, deteriorating as the distance between these nodes increases [47]. So, to keep the complexity of the localization algorithm low, the traditional least-squares method was selected.
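The traditional least-squares solution can be sketched as below, treating the quadratic term as an extra unknown so that the system stays linear. This is our reconstruction under that standard formulation (the paper's implementation is in MATLAB); names are ours.

```python
# Sketch of the traditional least-squares localization: each circle equation
# is rewritten as a row of the linear system A*theta = b with
# theta = [x, y, R] and R = x^2 + y^2 carried as a third unknown.
import numpy as np

def ls_position(anchors, ranges):
    """anchors: list of (xi, yi); ranges: list of measured distances.
    Returns the (x, y) least-squares estimate."""
    A = np.array([[-2.0 * xi, -2.0 * yi, 1.0] for xi, yi in anchors])
    b = np.array([d**2 - xi**2 - yi**2 for (xi, yi), d in zip(anchors, ranges)])
    theta, *_ = np.linalg.lstsq(A, b, rcond=None)  # solves min ||A*theta - b||
    return theta[0], theta[1]
```

Adding a fourth anchor just appends a row to `A` and `b`, which is the scalability advantage the text attributes to this method over the analytical one.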
2.3.3. Nonlinear Least-Squares Method based on a First-Order Taylor Expansion (Taylor Series)
A common strategy to linearize the nonlinear function $f(\mathbf{x})$ in (8) around a reference point is to use the Taylor series expansion. If an initial estimate of the position $\mathbf{x}_{0} = [x_{0}, y_{0}]^{T}$ is available and the higher-order terms are neglected, the function $f$ can be expressed as [39,47,50]:

$$f(\mathbf{x}) \approx f(\mathbf{x}_{0}) + \mathbf{J}(\mathbf{x}_{0})(\mathbf{x} - \mathbf{x}_{0})$$

where $\mathbf{x}_{0}$ is the vector of the initial estimate, $f_{i}(\mathbf{x}) = \sqrt{(x - x_{i})^{2} + (y - y_{i})^{2}}$ is the range function to anchor $i$, whose coordinate vector is $[x_{i}, y_{i}]^{T}$, and $\mathbf{J}$ represents the Jacobian matrix of $f$ around $\mathbf{x}_{0}$, which can be represented as:

$$\mathbf{J}(\mathbf{x}_{0}) = \begin{bmatrix} \dfrac{x_{0} - x_{1}}{f_{1}(\mathbf{x}_{0})} & \dfrac{y_{0} - y_{1}}{f_{1}(\mathbf{x}_{0})} \\ \vdots & \vdots \\ \dfrac{x_{0} - x_{N}}{f_{N}(\mathbf{x}_{0})} & \dfrac{y_{0} - y_{N}}{f_{N}(\mathbf{x}_{0})} \end{bmatrix}$$
This assumption is only valid if the initial estimate is sufficiently close to the true location of the tag node. The initial position estimate is computed with the analytical method proposed above. The correction vector $\boldsymbol{\delta} = [\delta_{x}, \delta_{y}]^{T}$ is obtained as follows:

$$\hat{d}_{i} - f_{i}(\mathbf{x}_{0}) = \mathbf{J}_{i}(\mathbf{x}_{0})\,\boldsymbol{\delta} + n_{i}, \quad i = 1, \ldots, N \quad (21)$$

Equation (21) can be written in matrix form as:

$$\mathbf{r} = \mathbf{J}\boldsymbol{\delta} + \mathbf{n} \quad (24)$$

where:

$$\mathbf{r} = \begin{bmatrix} \hat{d}_{1} - f_{1}(\mathbf{x}_{0}) \\ \vdots \\ \hat{d}_{N} - f_{N}(\mathbf{x}_{0}) \end{bmatrix}$$

and $\mathbf{n}$ represents the range estimation error. The mean and variance of the range estimation error are defined according to the channel state (LOS or NLOS).

Since the ranging measurements are independent and their errors follow a Gaussian distribution, the weighted least-squares solution of (24), with respect to $\boldsymbol{\delta}$, can be determined based on the maximum likelihood (ML) estimation and is given as [51,52]:

$$\hat{\boldsymbol{\delta}} = \left(\mathbf{J}^{T}\mathbf{C}^{-1}\mathbf{J}\right)^{-1}\mathbf{J}^{T}\mathbf{C}^{-1}\mathbf{r}$$

where $\mathbf{C}$ is the covariance matrix of the estimation error $\mathbf{n}$, whose terms are independent and zero-mean Gaussian random variables, and can be represented as:

$$\mathbf{C} = \operatorname{diag}\!\left(\sigma_{1}^{2}, \ldots, \sigma_{N}^{2}\right)$$

The value of $\sigma$ was experimentally determined during the measurement campaign described in Section 2.2, calculated as the mean of the variances obtained for each measurement point. This value of $\sigma$ is used for the experimental evaluation.
Based on the initial position estimate $\hat{\mathbf{x}}_{0}$ and the computed $\hat{\boldsymbol{\delta}}$, the position estimate can be updated as follows:

$$\hat{\mathbf{x}}_{k+1} = \hat{\mathbf{x}}_{k} + \hat{\boldsymbol{\delta}}_{k}$$

By iterating the above process, the position estimate can be repeatedly refined. The process is repeated until convergence is achieved, i.e., $\delta_{x}$ and $\delta_{y}$ turn out to be satisfactorily small according to some criterion, or the maximum number of iterations is reached [39,47,50]. The final position estimate is the one whose convergence criterion was minimum. In this paper the convergence criterion is:

$$\epsilon_{k} = \sum_{i=1}^{N} \sigma_{i}^{2}\left(f_{i}(\hat{\mathbf{x}}_{k}) - f_{i}(\hat{\mathbf{x}}_{k-1})\right)^{2}$$

This convergence criterion represents the sum of the squared errors between the Euclidean distance estimates of the previous and current positions. Each distance is weighted according to the covariance matrix $\mathbf{C}$. So, if a measurement is taken in NLOS, the uncertainty (covariance) of that measurement is higher and, therefore, it will have a higher impact on the value of $\epsilon_{k}$.
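The iterative refinement can be sketched as a Gauss-Newton loop with a weighted least-squares correction at each step. This is our reconstruction of the scheme (the paper's implementation is in MATLAB); the simple delta-norm stopping rule stands in for the paper's weighted criterion, and the per-range variances stand in for the LOS/NLOS covariance matrix.

```python
# Sketch of the Taylor-series method: iterate from an initial guess,
# solving a weighted least-squares problem on the linearized range model
# for the correction delta at each step.
import numpy as np

def taylor_series_position(anchors, ranges, variances, x0, y0,
                           max_iter=20, tol=1e-6):
    """anchors: N x 2 positions; ranges: N measured distances;
    variances: N per-range noise variances; (x0, y0): initial guess."""
    anchors = np.asarray(anchors, dtype=float)
    r = np.asarray(ranges, dtype=float)
    W = np.diag(1.0 / np.asarray(variances, dtype=float))  # inverse covariance
    p = np.array([x0, y0], dtype=float)
    for _ in range(max_iter):
        diff = p - anchors                      # (N, 2) offsets to anchors
        d = np.linalg.norm(diff, axis=1)        # predicted distances f_i(p)
        J = diff / d[:, None]                   # Jacobian of the range model
        res = r - d                             # ranging residuals
        # Weighted least-squares correction: (J^T W J)^-1 J^T W res
        delta = np.linalg.solve(J.T @ W @ J, J.T @ W @ res)
        p += delta
        if np.linalg.norm(delta) < tol:         # corrections satisfactorily small
            break
    return tuple(p)
```

With noise-free ranges this converges in a handful of iterations; with NLOS-inflated variances the affected ranges simply pull less on the correction.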
2.3.4. Extended Kalman Filter Method
Unlike the previous localization algorithms, the EKF is tailored for tracking mobile nodes. Its main advantage is that the EKF can process a single measurement at a time and provide position estimates in real time. The EKF performance highly depends on the correct definition of the system dynamics [
41]. Based on the information acquired from the motion (e.g., velocity, acceleration, angular velocity), different EKF formulations can be made to model the movement of a person. A commonly used model for pedestrians is to treat the movement of the mobile device as random. This simple model has proven to be more robust than other, more complex models because human movement is unpredictable and, therefore, better modeled as Gaussian noise. Alternatives are models that include the velocity, the velocity and acceleration, or the orientation.
In this work, the random model was selected to describe the pedestrian movement. With this model, the changes in position are given by Gaussian noise. Therefore, the state transition model of the system can be defined as:

$$\mathbf{x}_{k} = \mathbf{F}\mathbf{x}_{k-1} + \mathbf{w}_{k}$$

where $\mathbf{x}_{k}$ and $\mathbf{x}_{k-1}$ represent, respectively, the current and the previous position state vectors, and $\mathbf{w}_{k}$ is the process noise that allows changes in position and orientation, with covariance matrix $\mathbf{Q}$. The values of the covariance matrix $\mathbf{Q}$ were determined empirically. The matrix $\mathbf{F}$ represents the state transition matrix and is modeled as an identity matrix:

$$\mathbf{F} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$$
The measurement model can be represented by:

$$\mathbf{z}_{k} = h_{k}(\mathbf{x}_{k}) + \mathbf{v}_{k}$$

where $\mathbf{z}_{k}$ is the current ranging measurement vector, $h_{k}$ is the observation model, and $\mathbf{v}_{k}$ is the measurement noise, whose covariance is $\mathbf{R}_{k}$. The index $k$ indicates that the parameters can change over time. The observation model $h_{k}$ and the corresponding Jacobian $\mathbf{H}_{k}$ are derived from (8) and are given as follows:

$$h_{k,i}(\mathbf{x}_{k}) = \sqrt{(x - x_{i})^{2} + (y - y_{i})^{2}}, \qquad \mathbf{H}_{k} = \begin{bmatrix} \dfrac{x - x_{1}}{h_{k,1}} & \dfrac{y - y_{1}}{h_{k,1}} \\ \vdots & \vdots \\ \dfrac{x - x_{N}}{h_{k,N}} & \dfrac{y - y_{N}}{h_{k,N}} \end{bmatrix}$$
Based on the models described above, the EKF estimates the tag position based on two different stages: prediction and update:
Prediction Stage:
During the prediction stage, the EKF predicts the state vector ($\hat{\mathbf{x}}_{k|k-1}$) and the error covariance matrix ($\mathbf{P}_{k|k-1}$), which are given as follows:

$$\hat{\mathbf{x}}_{k|k-1} = \mathbf{F}\hat{\mathbf{x}}_{k-1|k-1}$$

$$\mathbf{P}_{k|k-1} = \mathbf{F}\mathbf{P}_{k-1|k-1}\mathbf{F}^{T} + \mathbf{Q} \quad (38)$$
Update Stage:
As soon as new ranging measurements ($\mathbf{z}_{k}$) are available, the update stage can be applied. This stage aims to refine the state vector ($\hat{\mathbf{x}}_{k|k}$) and the error covariance ($\mathbf{P}_{k|k}$) estimates. The first step of the update stage is computing the Kalman gain. The Kalman gain is the ratio between the uncertainty of the prediction and the uncertainty of the measurements, and is computed as follows:

$$\mathbf{K}_{k} = \mathbf{P}_{k|k-1}\mathbf{H}_{k}^{T}\mathbf{S}_{k}^{-1}$$

where $\mathbf{H}_{k}^{T}$ is the transpose of the observation Jacobian $\mathbf{H}_{k}$ and $\mathbf{S}_{k}^{-1}$ is the inverse of the residual covariance matrix $\mathbf{S}_{k}$, which is computed as follows:

$$\mathbf{S}_{k} = \mathbf{H}_{k}\mathbf{P}_{k|k-1}\mathbf{H}_{k}^{T} + \mathbf{R}_{k}$$
Then, the Kalman gain is used to combine the received ranging measurement information with the information from the prediction stage in order to compute the updated state as follows:

$$\hat{\mathbf{x}}_{k|k} = \hat{\mathbf{x}}_{k|k-1} + \mathbf{K}_{k}\mathbf{y}_{k}$$

where:

$$\mathbf{y}_{k} = \mathbf{z}_{k} - h_{k}(\hat{\mathbf{x}}_{k|k-1})$$

$\mathbf{y}_{k}$ is known as the innovation or measurement residual, and $h_{k}(\hat{\mathbf{x}}_{k|k-1})$ represents the predicted measurements. In terms of EKF performance, lower values of the innovation or Kalman gain are desirable, i.e., small values imply small corrections to the predicted state and, therefore, a smoother tracking system. The last step of the EKF is the update of the error covariance as follows:

$$\mathbf{P}_{k|k} = (\mathbf{I} - \mathbf{K}_{k}\mathbf{H}_{k})\mathbf{P}_{k|k-1}$$

where $\mathbf{I}$ is an identity matrix with appropriate dimensions.
For static devices, however, the predicted state vector ($\hat{\mathbf{x}}_{k|k-1}$) and the predicted covariance matrix ($\mathbf{P}_{k|k-1}$) are expected to remain unchanged between measurements. So, for static devices, the process noise covariance matrix $\mathbf{Q}$ is removed from Equation (38). The update stage is the same as for mobile devices.
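One predict-update cycle of the EKF with the random-walk model can be sketched as below. This follows the standard EKF formulation; the state is position only (so F is the identity), and the tuning values in the usage note are ours, not the paper's empirically determined matrices.

```python
# Sketch of one EKF step with the random-walk motion model: prediction adds
# process noise Q; the update linearizes the range model around the
# predicted state and corrects it with the Kalman gain.
import numpy as np

def ekf_step(x, P, z, anchors, Q, R):
    """x: (2,) position state; P: (2,2) covariance; z: (N,) measured ranges;
    anchors: (N,2) anchor positions; Q/R: process/measurement covariances."""
    # Prediction stage (F = I for the random model)
    x_pred = x
    P_pred = P + Q
    # Linearize the range model h(x) around the predicted state
    diff = x_pred - anchors
    d = np.linalg.norm(diff, axis=1)      # predicted measurements h(x_pred)
    H = diff / d[:, None]                 # Jacobian of h
    # Update stage
    S = H @ P_pred @ H.T + R              # residual (innovation) covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    y = z - d                             # innovation
    x_new = x_pred + K @ y
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new
```

For a static device, passing `Q` as a zero matrix reproduces the prediction-stage simplification described above.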
2.4. Performance Evaluation Metrics
Performance metrics provide the basis for comparing localization algorithms [
53]. So, the performance of the above positioning algorithms under the different scenarios is compared based on the following metrics: accuracy, precision, and Root Mean Square Error (RMSE).
Traditionally, the accuracy is represented by the mean distance error and the precision is defined as the success probability of position estimates with respect to the accuracy [
53,
54]. However, this strategy fails to provide useful information about an IPS’s precision, since the precision is always reported relative to the accuracy even though these metrics are independent. So, both accuracy and precision are presented as a cumulative distribution function (CDF) and expressed as a value for a specific percentage (e.g., an accuracy of 1.6 m with 95% probability).
2.4.1. Accuracy
The accuracy is the most used metric to evaluate the performance of a positioning algorithm or IPS. It represents the difference between the true position and the estimated position. This metric is generally measured as the Euclidean distance between the estimated position and the true position, as defined by the following equation:

$$e_{acc} = \sqrt{(\hat{x} - x)^{2} + (\hat{y} - y)^{2}}$$

where $(\hat{x}, \hat{y})$ are the Cartesian coordinates estimated by the localization algorithm, and $(x, y)$ are the true Cartesian coordinates.
2.4.2. Precision
The precision measures the reproducibility of successive position estimates. This metric can be used to assess the robustness of the positioning algorithm as it reveals the variation of position estimates over several trials [
53]. To compute the precision, we first compute the median position of the 200 position estimates for a single test run. Then, the Euclidean distance from the median position to each estimated position is computed. The precision is computed based on the following equation:

$$e_{prec} = \sqrt{(\hat{x} - \tilde{x})^{2} + (\hat{y} - \tilde{y})^{2}}$$

where $(\tilde{x}, \tilde{y})$ is the median of the estimated positions.
2.4.3. Root Mean Square Error (RMSE)
Unlike the accuracy metric, the RMSE metric allows computing the localization error for the X and Y coordinates separately. The RMSE value for each coordinate can be computed by:

$$\text{RMSE}_{j} = \sqrt{\frac{1}{M}\sum_{m=1}^{M}\left(\hat{j}_{m} - j_{m}\right)^{2}}, \quad j \in \{X, Y\}$$

where $j$ represents the coordinate axis and $M$ is the number of position estimates.

The $\text{RMSE}_{X}$ and $\text{RMSE}_{Y}$ values can be combined to compute the Net RMSE, which is the net error of the localization algorithm. The RMSE values are biased towards large errors, i.e., a large error makes a larger contribution to the RMSE than to a simple average. The Net RMSE can be computed as follows:

$$\text{Net RMSE} = \sqrt{\text{RMSE}_{X}^{2} + \text{RMSE}_{Y}^{2}}$$
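The three metrics can be sketched over a batch of 2D estimates as below; accuracy and precision would then be reported at a chosen CDF percentile (e.g., 95%). Function names are ours.

```python
# Sketch of the three evaluation metrics: per-sample accuracy errors,
# per-sample precision errors (distance to the median position), and the
# Net RMSE combining the per-axis RMSE values.
import numpy as np

def accuracy_errors(est, true_pos):
    """Euclidean distance of each estimate to the true position."""
    return np.linalg.norm(np.asarray(est) - np.asarray(true_pos), axis=1)

def precision_errors(est):
    """Euclidean distance of each estimate to the median estimated position."""
    est = np.asarray(est)
    return np.linalg.norm(est - np.median(est, axis=0), axis=1)

def net_rmse(est, true_pos):
    """Combine the per-axis RMSE values into the net error."""
    err = np.asarray(est) - np.asarray(true_pos)
    rmse_xy = np.sqrt(np.mean(err**2, axis=0))  # RMSE_X, RMSE_Y
    return float(np.sqrt(np.sum(rmse_xy**2)))
```

Note how a single large outlier inflates `net_rmse` more than the mean of `accuracy_errors`, which is the bias towards large errors mentioned above.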
2.5. Experimental Setup
In this section, we describe the deployment scenarios used to evaluate the performance of the algorithms described above. In these experiments, the DW1000 UWB transceivers, already described in
Section 2.1, are used to collect the ranging measurements needed to run the localization algorithms. Two types of nodes are considered, anchor and tag nodes, which are identical in terms of hardware. The anchor nodes are placed on tripods at an antenna height of 1.33 m and their positions are known. The tag is responsible for starting the ranging exchange with an anchor node, computing the corresponding distance between the nodes, acquiring the channel propagation parameters necessary for NLOS identification and mitigation, and logging these data to a computer through a USB connection. The tag node repeats this process continuously for all available anchor nodes, from anchor 1 to anchor n, where n is the number of anchor nodes available. This cycle is repeated until all the samples per point are collected, or the user completes the predefined path. All the localization algorithms are implemented in MATLAB, run offline, and use the same data set. In this way, we guarantee that all the algorithms are evaluated under the same conditions and, therefore, a fair comparison can be performed.
Figure 1 illustrates the two test beds considered to evaluate the localization algorithms. The two scenarios are, respectively, an atrium with a 9.4 m × 7 m free-space area (
Figure 1a), and a lab with an area of 10.7 m × 7 m, desks, metallic cabinets, and textile machines (
Figure 1b).
The gray squares represent the locations of the anchor nodes, the blue star represents an example of the tag location, and the black dots are the calibration points evaluated in the static scenarios. The positions of the calibration points were acquired with a digital laser rangefinder. The red line represents the path followed during the dynamic test. The main goal of considering these two environments is to assess how different propagation characteristics affect the performance of each algorithm. In other words, we want to verify which algorithm has higher immunity to environment-related perturbations. For each scenario described above, three sets of measurements were conducted: static without body interference (Case 1), where the tag is placed on top of a tripod at an antenna height of 1.33 m; static under body influence (Case 2), where the tag was mounted next to the right side of the waist of the human body at an antenna height of 1.08 m; and dynamic (Case 3), where the tag was mounted next to the right side of the waist of the human body at an antenna height of 1.08 m and the user walks through a predefined path. For Cases 2 and 3, an additional distance correction is performed before running the NLOS identification and mitigation algorithm. This distance correction compensates for the distance error due to the difference in heights between the anchors and the tag when the tag is mounted on the human body. The calibration points of both scenarios were taken in a cross shape, centered at the middle of the scenario, with a spacing between points of 0.50 m. We chose this configuration because the machines and desks in the lab scenario do not allow us to collect a grid of evenly distributed points. Nevertheless, with this approach the performance of each algorithm in both the x and y directions can be assessed, both scenarios can be easily compared, and the measurement campaign took less time.
For each evaluated point in the static measurements, 200 samples were collected per anchor node, whereas in the dynamic case the experiment was run five times. The goal of these experiments is to distinguish between device-related effects (e.g., clock drift, antenna placement, and radiation pattern) and body effects, as well as between static and dynamic situations. During the experiments, no other people were allowed to stay in or walk through the scenarios.
4. Discussion
UWB is an attractive technology for localization, especially for IPSs that require accurate position information and a high measurement rate. The DW1000 UWB transceiver combines UWB technology with ToA measurements, resulting in highly accurate ranging estimates even in strong multipath environments. Additionally, it provides long-range communication (300 m in LOS and 40 m in NLOS) at an affordable price.
The EKF method consistently showed the best performance across all evaluated metrics, for both the atrium and lab scenarios, with or without human body influence. Although the gain in accuracy is evident, it is the gain in precision that is truly remarkable. At the 99th percentile, the precision reported by the EKF method in the worst scenario (lab under body influence) can be three times lower than that of the second-best-performing algorithm, and at the 95th percentile it is five times lower. Unlike the other methods, the EKF takes previous estimates into account. This has a smoothing effect on the position estimates, making them more stable over time, which can explain the higher performance of the EKF method.
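The smoothing behavior described above can be illustrated with a minimal 2-D constant-velocity filter. For brevity, this sketch treats the position fix as a linear measurement, which reduces the EKF to a standard Kalman filter; the full EKF linearizes the nonlinear range equations at each step. All noise values here are illustrative assumptions, not the tuned parameters used in the evaluation:

```python
import numpy as np

dt = 0.5                              # ~2 Hz UWB update rate
F = np.array([[1, 0, dt, 0],          # state transition for [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],           # we observe position only
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)                  # process noise (assumed)
R = 0.09 * np.eye(2)                  # measurement noise, ~0.3 m std (assumed)

def ekf_step(x, P, z):
    """One predict/update cycle; smooths noisy position fixes over time."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with measured position z = [x_meas, y_meas]
    y = z - H @ x                         # innovation
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P
```

Because each estimate blends the prediction from the previous state with the new measurement, isolated noisy fixes are damped rather than passed straight to the output, which is exactly the stability advantage observed for the EKF.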
Comparing the results of the different metrics for all algorithms in the two evaluated scenarios makes clear the negative effect that furniture and machines have on the performance of the positioning algorithms. For the Analytical and Least Squares methods, the accuracy worsens more than three times (from 0.36 to 1.22 m), the precision more than nine times (from 0.08 to 0.72 m) and the RMSE more than two times (from 0.12 to 0.29 m). The Taylor Series and EKF methods, on the other hand, are more resilient to noisy measurements. Nevertheless, their accuracy degrades more than two times, the precision more than two times for the EKF and eight times for the Taylor Series method, and the RMSE more than two times for the EKF and more than 50% for the Taylor Series. The methods that account for the noise statistics clearly achieve better results, albeit at the cost of increased complexity and computational requirements.
When body influence is considered, the performance of all algorithms worsens drastically. For the best-performing algorithm (EKF) in the best scenario (atrium), the accuracy error grows from 0.26 to 1.18 m when no mitigation algorithm is applied, and to 0.7 m when the proposed mitigation algorithm is used. Another interesting observation is that the error reported in the lab is generally lower than in the atrium. This can be explained by the positive impact of the multipath components created by objects and furniture in the lab scenario. When a human body blocks the direct path, it can completely block the RF propagation, and the signal reaches the receiver through the phenomenon of creeping waves [37]. However, these creeping waves have a lower propagation velocity, inducing an additional delay in the ToA estimation and therefore a higher error in the distance estimate. In such a scenario, a multipath component reflected by a nearby object can be received first, resulting in a lower error.
Regarding the proposed NLOS mitigation algorithm, we can see that the overall performance of all algorithms was improved. The only exception occurred for the EKF method in the lab scenario, which can be explained by the noise statistics used. These statistics were obtained from tests carried out in open free space, which does not correspond to the propagation conditions of the lab scenario. If the noise statistics of the lab scenario were used, a performance improvement would be expected for the EKF method. However, this would make the performance of the IPS highly dependent on the lab scenario, which does not represent all indoor propagation conditions; such a scenario-dependent system would not comply with the emergency responders' requirements. Although the NLOS mitigation algorithm improves the accuracy of all methods, especially those that do not use noise statistics, it is the precision that improves most significantly (up to 50%).
Regarding the dynamic tests, the performance of all algorithms degrades as the number of anchor nodes in NLOS condition increases. As expected, the best trajectory is provided by the EKF method, which is especially visible in the lab scenario. However, when only one anchor node is in NLOS condition, the Analytical and Least Squares methods outperform the EKF method. The lower performance of the EKF compared with the other positioning algorithms can be explained by the low position update rate of the UWB system (between 1 and 2 Hz).
Contrary to what was initially expected, the performance of the Taylor Series method was very unsatisfying, especially considering its increased complexity compared with the Analytical and Least Squares methods. Its poor performance was even more evident when the body influence was accounted for; in these scenarios, the Taylor Series method performed worst of all. This poor performance can be explained by the initial estimate, obtained from the Analytical method. In the presence of noisy measurements, this first estimate can be too far from the true point, so the algorithm fails to converge, resulting in a poorer position estimate. A gain in the accuracy of the Taylor Series method could be obtained by tuning the covariance matrix for each point. However, this would make the proposed IPS highly dependent on the evaluated scenario, which is against the requirements defined for emergency responders.
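The dependence of the Taylor Series method on its initial guess can be seen from a generic textbook formulation of the iteration, sketched below (a Gauss-Newton refinement of a 2-D position from range measurements; this is our own illustrative code, not the exact implementation evaluated). Each iteration linearizes the range equations around the current estimate, so a starting point far from the true position can prevent convergence:

```python
import numpy as np

def taylor_series_position(anchors, ranges, x0, iters=10, tol=1e-4):
    """Iteratively refine a 2-D position estimate from range measurements.

    anchors : (N, 2) anchor coordinates
    ranges  : (N,)   measured distances to each anchor
    x0      : initial guess (e.g., from the Analytical method)
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(anchors - x, axis=1)   # predicted ranges
        J = (x - anchors) / d[:, None]            # Jacobian of the ranges
        r = ranges - d                            # range residuals
        dx, *_ = np.linalg.lstsq(J, r, rcond=None)
        x = x + dx                                # Gauss-Newton step
        if np.linalg.norm(dx) < tol:
            break                                 # converged
    return x
```

With clean measurements and a reasonable initial guess the iteration converges in a few steps; with a noisy initial guess the linearization point is wrong and the correction steps can diverge, matching the behavior observed above.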
The performance of the Analytical and Least Squares methods is the same in all considered scenarios. This was expected, as the only difference between the methods is the mathematical formulation of the localization problem. The Analytical method is designed to locate tags with exactly three ranging measurements, whereas the Least Squares method allows using more than three. When more than three ranging measurements are available, the Least Squares method is expected to outperform the Analytical method.
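The ability of the least-squares formulation to absorb extra anchors can be illustrated with the standard linearized multilateration, shown below as a sketch (a textbook derivation, not necessarily the exact formulation used in the paper). Subtracting the first anchor's range equation from the others cancels the quadratic terms and yields an overdetermined linear system:

```python
import numpy as np

def lls_position(anchors, ranges):
    """Closed-form linearized least-squares multilateration in 2-D.

    Works for three or more anchors: with exactly three it reduces to
    solving a 2x2 system; extra anchors simply add rows to A and b.
    """
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    # Subtract anchor 0's equation from each remaining equation:
    #   2(a_i - a_0) . p = r_0^2 - r_i^2 + |a_i|^2 - |a_0|^2
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (ranges[0] ** 2 - ranges[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p
```

With noisy ranges, each additional anchor adds an equation that is averaged into the solution, which is why the least-squares method is expected to pull ahead once more than three ranging measurements are available.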
Other important issues when selecting the localization algorithm of an IPS are its complexity and computational requirements, owing to the necessary tradeoff between accuracy and energy consumption. However, since the application is to support emergency responders, it can be assumed that the IPS only needs to be operational for a few hours, so the complexity and computational requirements of the localization algorithm are not necessarily a problem in our research, as long as the gateway-PROTACTICAL has enough computational capability to run the positioning algorithm. Among the considered algorithms, the EKF method requires the most computational resources, followed by the Taylor Series, Least Squares and Analytical methods.
To keep the complexity of the localization algorithms as low as possible, only the 2D position is computed in this study. Since the main goal of knowing the 3D position is to identify which floor the emergency responder is on, this information can be obtained from sensors such as barometers and altimeters. Although the accuracy achieved by these sensors is lower than that of the UWB ranging measurements, it is enough to comply with the emergency responders' requirements. Additionally, this design choice eases deployment, as the emergency responders do not have to be concerned with the non-coplanarity of the anchor nodes, a requirement for 3D position computation.
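A barometer-based floor estimate of the kind suggested above can be sketched with the standard-atmosphere altitude formula. The reference pressure, the assumed 3 m floor height and the function itself are illustrative assumptions for this sketch, not part of the evaluated system:

```python
def floor_from_pressure(p_hpa: float, p_ref_hpa: float,
                        floor_height_m: float = 3.0) -> int:
    """Estimate a floor index from barometric pressure (illustrative).

    p_hpa     : current pressure reading (hPa)
    p_ref_hpa : pressure measured at the ground floor (hPa)
    Uses the standard-atmosphere (hypsometric) altitude formula.
    """
    altitude = 44330.0 * (1.0 - (p_hpa / p_ref_hpa) ** (1.0 / 5.255))
    return round(altitude / floor_height_m)
```

Barometric altitude drifts with weather, which is why the reference pressure would have to be taken at deployment time; even so, the roughly 0.12 hPa pressure drop per meter of altitude is enough to resolve floors a few meters apart.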
Anchor placement is a very important factor in the algorithms' performance that has not been addressed here. Although the performance of all positioning algorithms was assessed using the minimum number of anchor nodes (three), the anchor nodes were placed so that there were always three non-collinear points. However, in a real deployment by emergency responders, the anchor nodes are more likely to be deployed in a trail topology. This different topology may affect the performance and, more importantly, the reliability of the evaluated positioning algorithms. Further tests are required to evaluate the impact that this topology may have on the algorithms' performance. A commonly used strategy to increase the accuracy and precision of an IPS is to add more anchor nodes. However, increasing the number of anchor nodes also increases the complexity of the positioning algorithms, as well as the energy consumption and the network latency and overhead.
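The non-collinearity condition mentioned above can be checked cheaply with the area of the triangle formed by three anchors, as in the following sketch (the threshold value is an arbitrary assumption for illustration):

```python
def anchors_non_collinear(a, b, c, min_area: float = 0.5) -> bool:
    """Check that three 2-D anchors form a usable, non-degenerate triangle.

    a, b, c  : (x, y) anchor coordinates
    min_area : minimum acceptable triangle area in m^2 (assumed threshold);
               a near-zero area means the anchors are nearly collinear and
               trilateration becomes numerically unstable.
    """
    area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1])
                     - (c[0] - a[0]) * (b[1] - a[1]))
    return area >= min_area

# Anchors dropped along a corridor in a trail topology tend to fail
# this check, which is exactly the degenerate geometry discussed above.
```

Such a test could flag, at deployment time, anchor triples whose trail-like geometry would make the position estimates unreliable.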