1. Introduction
Localization has gained growing interest among researchers and industry due to technological advancements in wireless sensor networks (WSNs). A WSN is typically made up of a set of small, low-energy devices called sensor nodes, optionally with a base station for monitoring purposes. Localization in a WSN is defined as the task of determining the real-time coordinates of the sensor nodes, given that the exact positions of some static nodes, also called anchor nodes, are known [1].
The past few decades have witnessed a dramatic increase in the development of indoor localization due to its applicability in various real-time location-based services and smart monitoring systems, particularly in buildings and places where the Global Positioning System (GPS) is denied due to poor satellite reception. Techniques for indoor localization can be broadly divided into two categories, namely, radio frequency (RF)-based and non-RF-based. The latter exploits non-RF sensors to estimate the movement of a target node and its distance from other nodes; examples include radar, acoustic, ultrasound, and optical sensors. The RF-based technique, on the other hand, utilizes the received signal strength between two sensor nodes to estimate the distance between them; hence, it is often termed the received signal strength indicator (RSSI) method. This method does not require additional hardware for the estimation and is preferable in many cases owing to its penetrability compared with non-RF approaches [2].
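To make the RSSI-distance relationship concrete, the following minimal sketch inverts the standard log-distance path-loss model. The reference power at 1 m and the path-loss exponent used here are illustrative assumptions for a generic indoor environment, not values measured or calibrated in this work.

```python
import math

def rssi_to_distance(rssi_dbm, rssi_at_1m_dbm=-45.0, path_loss_exponent=2.5):
    """Estimate the distance (in meters) to an anchor node from an RSSI reading.

    Uses the log-distance path-loss model
        RSSI(d) = RSSI(d0) - 10 * n * log10(d / d0),  with d0 = 1 m,
    solved for d. The default reference power (-45 dBm at 1 m) and the
    exponent n = 2.5 are placeholders; in practice both are calibrated
    for the specific radio hardware and environment.
    """
    return 10 ** ((rssi_at_1m_dbm - rssi_dbm) / (10.0 * path_loss_exponent))

# Example: a reading of -67 dBm maps to roughly 7.6 m under these assumptions.
print(round(rssi_to_distance(-67.0), 2))
```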
Several RF technologies have been proposed and tested in the literature for RSSI-based indoor localization, the most popular being WiFi, Bluetooth Low Energy (BLE), Ultra-Wideband (UWB), and Zigbee. Each has its own advantages and disadvantages, and its efficacy greatly depends on the nature of the application and the hardware performance. WiFi, for instance, has become the simplest solution for localization in buildings with a high availability of access points, offering the highest data rate and throughput, but at the cost of high power consumption. Hence, it may not be suitable for indoor tracking, particularly when the sensor nodes depend on batteries to operate. BLE, on the other hand, consumes significantly less power, as its maximum data rate is considerably lower than that of WiFi, but it has been demonstrated to be more accurate than WiFi for localization owing to better estimation of the RSSI-distance model [3]. Both WiFi and BLE signals, however, are easily perturbed by multipath effects, signal reflections, and even small noise perturbations due to the inherent structure of narrowband radio waves, which makes precise distance measurement more difficult to achieve. To address this problem, UWB radio signals have been designed with very short impulse transmissions to preserve the signal's integrity and strength, as they are not easily corrupted by noise or multipath effects. The downsides of UWB are, nonetheless, the high cost of the equipment and the requirement for dedicated infrastructure [4]. Zigbee is another Bluetooth-like technology that operates at about a quarter of Bluetooth's 1 Mbps maximum data rate. The low data rate makes it unsuitable for applications with high-speed data transmission, but it enables multiyear battery life and connection to a large number of nodes. A detailed comparison between these technologies can be found in [2,5].
Localizing a mobile node in an indoor environment is relatively more challenging than localizing a static node, as the former continuously changes its location over time [6]. Moreover, for RSSI-based localization, a moving node may, at some places and time instants, receive only two RSSI values due to occlusions from the environment and limitations of the communication modules, such as sudden transceiver failures or limited energy and bandwidth. This scenario makes localization impossible with traditional triangulation- or multilateration-based methods. To circumvent this problem, several studies have begun to exploit compressive sensing theory [7], which allows sparse and noisy signals to be recovered from only a small number of RSSI measurements. Via this theory, not only can the number of anchor nodes be minimized [8], it is also possible to localize multiple target nodes at the same time [9]. The work in [9] shows how the compressive-sensing technique can be adopted along with ℓ1-minimization programs to localize multiple target nodes, where the results are then validated via a number of simulations. In the case where a sufficiently large number of anchor nodes is available, fingerprinting is one of the approaches that has been extensively adopted by many researchers, as it constructs a radio map from RSSI values recorded during an offline phase [10,11]. Any changes in signal strength during the online localization phase are then cross-checked against the stored fingerprints to estimate the best location of the target node, either deterministically [12] or probabilistically [13]. This method is particularly popular for localizing smartphone or mobile-device users in buildings equipped with many WiFi access points. Nonetheless, the major challenges that are always associated with this approach are the inherent noise and large fluctuations of the signal strength due to the time-varying orientation of smartphones or unexpected obstacles between anchors and users. In this regard, several techniques have been proposed, such as incorporating artificial-intelligence-based methods and using suitable filters to minimize the influence of multipath effects and interference [14,15,16,17,18,19]. Specific examples include a reliability-augmented particle filter, where the uncertainty of a user's true step length can be compensated [11], and fusing fingerprints with uncertain mutual distances that are constrained within specific bounds [10].
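As a concrete illustration of the deterministic fingerprinting idea described above, the sketch below matches an online RSSI vector against a stored radio map using weighted k-nearest neighbours. The anchor count, radio-map entries, and the value of k are illustrative assumptions rather than data from any of the cited works.

```python
import numpy as np

def knn_fingerprint_localize(online_rssi, fingerprint_rssi, fingerprint_xy, k=3):
    """Deterministic fingerprinting via weighted k-nearest neighbours.

    fingerprint_rssi : (N, A) offline RSSI vectors (N reference points, A anchors)
    fingerprint_xy   : (N, 2) coordinates of the reference points
    online_rssi      : (A,) RSSI vector observed during the online phase
    Returns the inverse-distance-weighted average of the k reference positions
    that are closest to the observation in signal space.
    """
    diff = fingerprint_rssi - online_rssi          # signal-space residuals
    dist = np.linalg.norm(diff, axis=1)            # Euclidean distance per fingerprint
    nearest = np.argsort(dist)[:k]                 # indices of the k best matches
    weights = 1.0 / (dist[nearest] + 1e-6)         # closer fingerprints weigh more
    return (weights @ fingerprint_xy[nearest]) / weights.sum()

# Toy radio map with four reference points and three anchors (all values assumed).
rssi_map = np.array([[-40, -70, -65], [-55, -52, -68], [-68, -45, -60], [-72, -66, -42]])
xy_map   = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 2.0], [0.0, 2.0]])
print(knn_fingerprint_localize(np.array([-58, -50, -66]), rssi_map, xy_map))
```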
In the field of mobile robotics, non-RF-based devices are typically utilized as proprioceptive sensors in the odometry approach, which is a common technique for localizing wheeled mobile robots. While these sensors can estimate the path and pose of the robot relatively more accurately than RSSI-based methods, errors due to wheel slippage, sensor drift, and other environmental disturbances inevitably accumulate and eventually deteriorate the tracking performance [20]. To prevent the uncertainties in odometry from growing unbounded, probabilistic state estimation was introduced, which fuses odometry with exteroceptive sensors such as sonar, lidar, laser range finders, and cameras [21]. The most widely used algorithms reported in the literature are Markov [22] and Kalman filter localizations [23], both of which rely on probabilistic motion and measurement models. Markov localization assumes that the map is known and uses a fine-grained, metric discretization of the state space. The configuration space is divided into a number of cells, where each cell contains the probability of the robot being in that cell. During the prediction and update stages, all cells are updated; thus, this method entails high computational costs. The Kalman filter, on the other hand, is computationally less intensive, as the probability distributions of both the robot and sensor models are assumed to be Gaussian; hence, only the mean and variance of the distribution need to be updated at each time step [24].
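To illustrate why the Gaussian assumption keeps the cost low, the minimal one-dimensional Kalman filter below carries only a mean and a variance from step to step. The noise variances, odometry inputs, and measurements are illustrative assumptions, not parameters used elsewhere in this paper.

```python
def kalman_1d(mean, var, u, z, q=0.01, r=0.25):
    """One predict/update cycle of a 1-D Kalman filter.

    mean, var : current Gaussian belief about the position
    u         : odometry displacement since the last step (prediction input)
    z         : noisy position measurement (e.g., an RSSI-derived estimate)
    q, r      : assumed process and measurement noise variances
    """
    # Prediction: shift the belief by the odometry input and inflate the variance.
    mean_pred = mean + u
    var_pred = var + q
    # Update: blend prediction and measurement via the Kalman gain.
    k = var_pred / (var_pred + r)
    mean_new = mean_pred + k * (z - mean_pred)
    var_new = (1.0 - k) * var_pred
    return mean_new, var_new

belief = (0.0, 1.0)                     # start: position 0 m, variance 1 m^2
for u, z in [(0.5, 0.6), (0.5, 1.1), (0.5, 1.4)]:
    belief = kalman_1d(*belief, u, z)
print(belief)                           # only two numbers are ever carried forward
```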
As an alternative to the probabilistic methods, whose optimal solutions heavily rely on map accuracy, sensing, and the computational capabilities of the robot, integrating odometry with WSN-based localization has also become a new trend in recent years [20,25,26,27]. This strategy has recently found a wide variety of applications, such as human tracking [14,28,29,30], navigational assistance [5], warehouse management [2,31], and monitoring underground areas in post-disaster environments [32]. In smart buildings where a WSN is installed for electrical-usage and environmental monitoring purposes [33], this approach keeps operational costs to a minimum, as no additional hardware is required to assist the odometry-based localization. The work in [27] shows an example where the RSSI and phase shift of signals from passive UHF-RFID tags installed in the ceiling of the environment could be combined with odometry data to improve localization performance. Fusing WiFi- or BLE-based RSSI with inertial measurement units (IMUs), which typically consist of an accelerometer and a gyroscope, is also becoming more popular due to the advanced technology of smartphones and mobile devices [23,29,34]. From the WSN perspective, another notable benefit of introducing mobility to the network is that it can enhance the network's capabilities in various aspects, including flexible topology adjustment and automatic node deployment [35]. In addition, precise and fast localization of mobile nodes in GPS-denied environments allows several crucial tasks to be executed, such as search-and-rescue missions and map building during emergencies.
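The odometry component referred to throughout this discussion can be summarized by the standard dead-reckoning update for a differential-drive (nonholonomic) robot sketched below. The wheel displacements, axle length, and the injected per-step slippage are illustrative assumptions; the sketch only shows how a pose estimate is propagated and why its error drifts without an external correction such as RSSI.

```python
import math, random

def odometry_step(x, y, theta, d_left, d_right, axle_length=0.3):
    """Dead-reckoning pose update for a differential-drive robot.

    d_left, d_right : distances travelled by the left/right wheels (m),
                      e.g., obtained from wheel-encoder counts.
    axle_length     : distance between the two wheels (m, assumed value).
    """
    d_center = 0.5 * (d_left + d_right)          # translation of the robot centre
    d_theta = (d_right - d_left) / axle_length   # change in heading
    x += d_center * math.cos(theta + 0.5 * d_theta)
    y += d_center * math.sin(theta + 0.5 * d_theta)
    theta += d_theta
    return x, y, theta

# Simulate a nominally straight run where small, unmodelled wheel slippage accumulates.
random.seed(0)
pose = (0.0, 0.0, 0.0)
for _ in range(100):
    slip = random.gauss(0.0, 0.002)              # per-step slippage error (m)
    pose = odometry_step(*pose, 0.05 + slip, 0.05 - slip)
print(pose)                                      # heading drifts away from 0 over time
```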
While integrating WSN- with odometry-based localization can be advantageous for compensating the deficiencies of the individual methods, suitable fusion techniques must be employed in order to achieve optimal experimental results without sacrificing the desired design requirements. A number of studies suggested adopting the Kalman filter and/or its variants [20,36], while others preferred different approaches due to disputes over the suitability or effectiveness of such filters, particularly for compensating high variability or uncertainty in the signal strength [18,19]. In practice, however, the accuracy, dynamic range, and variation of the signals highly depend on the specific hardware platforms used, the impact of the infrastructure, and spatial or environmental factors, which makes any direct comparison between existing methods less conclusive. In other words, while some techniques work tremendously well in a particular environment, this does not mean that others are less effective in general. Nonetheless, the approaches that have been tested experimentally in the literature often pave the way for future research in and development of WSN technologies.
The focus of this study is on localizing a continuously moving target node using Zigbee-based RSSI measurements and odometry when no knowledge of the map or fingerprint database is available. To the best of the authors' knowledge, little attention has been paid in the literature to date to investigating and optimizing indoor-localization techniques for a continuously moving node within this scope. Moreover, many studies on indoor localization have focused on reducing the steady-state position error; alternatively stated, when target nodes are in continuous motion, only their final positions, or their positions while stationary, have been of interest for performance evaluation. In this work, we introduce mobility to the target node by using a nonholonomic wheeled mobile robot with a specific velocity profile. The main contributions of this paper can be stated as follows:
a novel approach to compensate for uncertainties in the Zigbee-based RSSI and odometry localizations for a continuously moving target node;
a new methodological framework to fuse Zigbee-based RSSI and odometry-based localizations with convex searches for optimal weighting parameters; and
a self-adaptive filtering technique that autonomously optimizes the weighting parameter during the target node's translational and rotational motions, which exhibit different but consistent error patterns.
The methods proposed in this work are also computationally less onerous, as the localization is corrected only by weighting parameters that are updated on the basis of the robot's rotational velocity. Several real-time experiments consisting of four different trajectories with different numbers of straight paths and curves were carried out to validate the proposed techniques. Both temporal and spatial analyses demonstrate that, when odometry data and RSSI values are available, the proposed methods provide significant improvements in localization performance over existing approaches, including the extended Kalman filter. In addition, we also show that the new self-adaptive filtering technique, which optimizes the error minimizer when a rotational velocity within a specified range is detected, considerably outperformed the rest.
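The flavour of such a correction can be conveyed by the short sketch below: a convex combination of the RSSI-based and odometry-based position estimates whose weight is switched according to the measured rotational velocity. It is only a schematic illustration under assumed weights and an assumed velocity threshold; the actual weighting parameters, their selection via convex searches, and the velocity range are defined by Lemma 1, Proposition 1, and Corollary 1 in Section 2, not here.

```python
import numpy as np

def fuse_position(p_rssi, p_odo, omega, w_translate=0.6, w_rotate=0.2, omega_threshold=0.3):
    """Convex (weighted-average) fusion of two position estimates.

    p_rssi, p_odo   : (2,) RSSI-based and odometry-based (x, y) estimates
    omega           : measured rotational velocity of the robot (rad/s)
    w_translate     : assumed weight on the RSSI estimate during straight motion
    w_rotate        : assumed weight on the RSSI estimate during turns,
                      i.e., odometry is prioritized while the robot rotates
    omega_threshold : assumed |omega| level that marks a rotational motion
    """
    w = w_rotate if abs(omega) >= omega_threshold else w_translate
    return w * np.asarray(p_rssi) + (1.0 - w) * np.asarray(p_odo)

# During a turn (|omega| above the threshold), the fused estimate leans on odometry.
print(fuse_position([2.3, 1.1], [2.0, 1.4], omega=0.8))
```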
The remainder of the paper is organized as follows. Section 2 explains the indoor-localization framework and strategies, beginning with a brief description of the system architecture and the path-loss propagation model, followed by the RSSI-based localization method, odometry, and the main proposed strategies involving convex optimization when the RSSI-based method and odometry are hybridized. Section 3 presents the experimental results, the qualitative and quantitative performance evaluations, and discussions. The proposed methods are then concluded and further discussed, together with future work, in Section 4. For the notations and abbreviations that are frequently used in this manuscript, readers are referred to the last section before the references.
3. Experimental Results, Performance Evaluation, and Discussion
For the new set of experiments, localization performance was evaluated on four different trajectories (Trajectories 1–4) with different numbers of straight paths and curves. For each trajectory, two objects (chairs) were randomly placed in the experimental area, as illustrated in Figure 9, and the value of the filter coefficient R was increased to 8 in both cases to compensate for the increased multipath effects. The results obtained by Lemma 1, Proposition 1, and Corollary 1 were then compared with those from the theoretical path-loss model, as well as with the extended Kalman filter approach, an existing sensor-fusion method from the literature. For clarity, the estimates obtained via Proposition 1, Corollary 1, and the extended Kalman filter are hereafter denoted by P1, C1, and EKF, respectively.
For the first trajectory, i.e., Trajectory 1, a path similar to that of the preliminary experiment was selected, and the localizations of Node M in the x-y plane obtained via the aforementioned techniques are plotted in different colors, as indicated by the legend in Figure 10. The corresponding temporal analysis of the localization of Node M in the x- and y-directions is presented in Figure 11, while the corresponding errors, along with the x and y positions of the robot (based on odometry), are shown in Figure 12. The mobile robot was initially placed at the starting position shown in Figure 10, and the chairs were placed at the locations indicated by the maroon boxes.
As seen in Figure 10, the localizations based only on the RSSI methods (i.e., the path-loss model and Lemma 1) were relatively more scattered than those of the other four techniques. The odometry estimate, despite tracing a smooth trajectory (grey line), remained relatively far from the actual positions, which can also be clearly seen in Figure 11. It was also observed that P1 and C1 gave better location estimates for Node M from its initial to its final position, with C1 providing a much closer estimate of the actual position, particularly when the robot steered at each edge of the trajectory.
Referring to the bottom plot of Figure 12, the errors due to odometry and C1 during the first rotational motion were much smaller than those of the other methods. However, in the second, third, and fourth rotational movements, the error from odometry became considerably larger than those of several of the other methods, as a consequence of the errors accumulated through wheel slippage after each turn. In order to provide a better evaluation, the experiment for Trajectory 1 was repeated five times, and the numerical results on the localization error for each method are compared in Table 3.
From the table, the sensor-fusion method via EKF recorded the highest error, while C1 recorded the lowest. Besides that, a positive impact of the increased value of the filter coefficient R from Lemma 1 can be seen in this experiment, with the error from Lemma 1 being marginally lower than that from odometry. We can also conclude that Lemma 1 provides a huge improvement over the theoretical path-loss model in the comparison between the RSSI-based methods, and that fusing the Lemma 1 estimate with odometry as described in Proposition 1 (i.e., P1) significantly reduces the localization error. A further error reduction was accomplished through C1, which prioritizes the odometry-based localization at each turn via the self-adaptive filtering approach.
For Trajectory 2, which follows a triangle-like path, the corresponding localizations and error plots are depicted in Figure 13, Figure 14 and Figure 15, while the numerical results are presented in Table 4. Although only three curves were involved, the same trend could be observed, whereby odometry and C1 showed smaller errors than the rest during the first rotational motion, but the error due to odometry ratcheted up afterwards as it accumulated. In this case, however, the duration of the rotational motion was slightly smaller than that of the first trajectory; hence, competitive performance between P1 and C1 can be seen from the numerical results of Tests 1–5. Comparing the average errors in Table 4, the method with the worst performance appeared to be the same as in the first trajectory, namely EKF, followed by the three standalone methods (the theoretical path-loss model, Lemma 1, and odometry). Both P1 and C1, on the other hand, exhibited the same performance as in the previous experiment.
Another random trajectory, with six linear motions alternating with six rotational motions, was designed for Trajectory 3, and the corresponding localizations and error plots are illustrated in Figure 16, Figure 17 and Figure 18. In this case, the number of curves involved and the duration of the rotational movements were relatively higher than in the previous two trajectories and, as predicted, the odometry-based approach worsened as the error piled up after each turn. This is also shown numerically in Table 5, which presents a slightly different trend, whereby the error due to odometry was constantly higher than that from Lemma 1 in each test. The proposed optimal sensor-fusion methods nevertheless led to the best performance, with the error due to C1 being the lowest.
For the last trajectory, Trajectory 4, a continuous rotational motion was assigned to NSBot2 to form a circle-like path. The corresponding localizations and error plots are depicted in Figure 19, Figure 20 and Figure 21, while the numerical results are presented in Table 6. As can be seen in Figure 19, the localizations based on P1 and C1 were closer to the actual trajectory, whereas those from the other methods clearly drifted. In this case, the trajectory obtained via odometry became much worse than in the previous three paths due to wheel slippage and, since the rotational motion persisted throughout the run, C1 prioritized odometry, which consequently led to a higher localization error. The numerical results for this last experiment are recorded in Table 6, which shows a significantly smaller error from C1 compared with those from the theoretical path-loss model, Lemma 1, odometry, and EKF. Interestingly, although odometry led to a huge localization error in this scenario, the self-adaptive filtering method introduced in Corollary 1 still resulted in a further improvement for each test, as highlighted in the last column of Table 6.
A summary of the average localization errors from the four experiments is presented in Table 7, along with the total duration of rotational motion (in seconds and as a percentage of the total execution time). From the summarized numerical results, the performance via C1 was consistently the highest regardless of the duration of the rotational motion. On the contrary, the approach via EKF was the worst for all trajectories. This signifies that, while the extended Kalman filter usually results in better performance, particularly for sensor fusion, its error is likely to overshoot when localizing a mobile node within a certain speed range. This finding may be explained by the fact that a moving node leads to (possibly highly) nonlinear responses, as exemplified by the previous temporal analyses, which could eventually make this approach unsuitable for such localization. A closer inspection of odometry and Lemma 1 shows that the error via odometry was lower than that via Lemma 1 when the duration of rotational motion was sufficiently small, as observed in Trajectory 2. As this duration became slightly higher, odometry and Lemma 1 became more competitive. Nevertheless, Lemma 1 is advantageous when the mobile node is in full rotational movement, as revealed in the last experiment. From the table, it is also apparent that the proposed sensor-fusion techniques via P1 and C1 led to significant improvements for the first three trajectories, which involved rotational motions with durations of less than 37% of the total execution time. When this duration is sufficiently large, as typified by Trajectory 4, the performance can be further enhanced via C1. This desirable outcome is conclusively due to the optimization of the weighting parameter in Corollary 1, which searches for the best error minimizer during the rotational motion.