
Indoor Localization Based on Integration of Wi-Fi with Geomagnetic and Light Sensors on an Android Device Using a DFF Network

Electronic Engineering Department, Kwangwoon University, Seoul 01897, Republic of Korea
* Author to whom correspondence should be addressed.
Electronics 2023, 12(24), 5032; https://doi.org/10.3390/electronics12245032
Submission received: 20 November 2023 / Revised: 2 December 2023 / Accepted: 14 December 2023 / Published: 16 December 2023
(This article belongs to the Section Networks)

Abstract

Sensor-based indoor localization has attracted considerable attention in recent years. The accuracy of conventional fingerprint solutions based on a single sensor, such as a Wi-Fi sensor, is degraded by multipath interference produced by other electronic devices in complex indoor environments. Light sensors and magnetic (i.e., geomagnetic) field sensors can be used to enhance the accuracy of a system, since they are less vulnerable to such disturbances. In this paper, we propose a deep feedforward (DFF)-neural-network-based method, termed DFF-WGL, which integrates the data from the embedded Wi-Fi sensor, geomagnetic field sensor, and light sensor (WGL) in a smart device to localize the device in an indoor environment. DFF-WGL does not require complex or expensive auxiliary equipment beyond basic fluorescent lamps and low-density Wi-Fi signal coverage, conditions that are easily satisfied in modern office or educational buildings. The proposed system was implemented on a commercial off-the-shelf Android device, and performance was evaluated through an experimental analysis conducted in two different indoor testbeds, one measuring 60.5 m² and the other 38 m², with 242 and 60 reference points, respectively. The results indicate that model prediction with an input combining the light, magnetic field, and two Wi-Fi RSS signals achieved mean localization errors of 0.01 m and 0.04 m in the two testbeds, respectively, outperforming every other subset of sensor combinations and verifying the effectiveness of the proposed DFF-WGL method.

1. Introduction

The indoor localization of smartphones is typically achieved by using embedded sensors to gather information about the surrounding environment [1,2,3,4,5,6,7]. These sensors can detect a variety of signals, including electronic signals from Bluetooth or Wi-Fi sensors; light signals from light or camera sensors; motion or inertial signals from accelerometers, gyroscopes, and magnetometers given in inertial measurement units (IMU); etc. These signals are then processed to estimate the smartphone’s location within an indoor environment. In addition to the schemes based on using the signals acquired from smartphone platforms for indoor localization, wireless sensor networks (WSNs) [8,9] and Frequency-Modulated Continuous Wave radar (FMCW) [10,11,12]-based schemes have garnered research attention in recent years.
On a smartphone platform, Wi-Fi and Bluetooth signals are the two main electronic signals used for indoor localization. Even though channel state information (CSI)-based Wi-Fi [13] can be used to track the location of smartphones with higher accuracy than received signal strength (RSS)-based Wi-Fi localization, this method requires specific hardware and software that are not commonly found in most commercial devices. However, RSS-fingerprint-based radio map methods are commonly researched with regard to both Wi-Fi [14] and Bluetooth localization [15]. These methods require a minimum infrastructure consisting of Wi-Fi access points (AP) and Bluetooth Low-Energy (BLE) beacons.
In contrast, the geomagnetic field is a naturally occurring, ubiquitous, and relatively stable phenomenon. Geomagnetic field fingerprinting is therefore an appealing alternative to Wi-Fi and Bluetooth fingerprinting, as it does not rely on any infrastructure, and several authors have applied it to indoor localization [6,16,17,18]. Whereas electronic signals are less stable and less pervasive, the geomagnetic field has a consistent presence, and its field-strength fingerprint can be distinctive within a confined indoor space, making it a viable option for fingerprint-based indoor localization.
In comparison to electronic signals and geomagnetic field fingerprinting, light signals are more stable and less prone to interference, making them a potentially reliable option for indoor localization. Moreover, the unique fingerprinting of light signals can offer precise localization information within a confined indoor space. Some studies have explored the use of visible light communication signals and smartphones for indoor localization [19,20]. However, light sensor localization schemes suffer from the drawback of being limited by dark or non-line-of-sight (NLOS) environmental conditions.
In conventional RSS-based Wi-Fi localization schemes that do not use fusion, numerous Wi-Fi RSS samples are required during the site surveillance process to maintain high localization accuracy. Owing to hardware limitations, Wi-Fi scanning is usually slow, making the site surveillance process laborious and tedious. Crowdsourcing approaches [21,22,23] have commonly been employed to tackle this issue; however, they are susceptible to a decrease in accuracy due to variations across devices. When crowdsourcing is not feasible, we aim to achieve precise localization and reduce site-surveying time by incorporating data from light and magnetic sensors, whose sampling rates are higher than that of Wi-Fi scanning, while minimizing the number of Wi-Fi samples.
Fingerprint augmentation (FA) is another method used in Wi-Fi-fingerprint-based localization schemes in which the fingerprint of each reference point (RP) in a fingerprint database is used to generate a virtual RSS fingerprint via interpolation [24,25,26]. This technique leverages the known RSS fingerprints at RPs to estimate the signal strengths at locations in between, effectively expanding the coverage of the database and improving localization accuracy. Linear and k-nearest neighbor (KNN) interpolation [27] are commonly employed for this purpose, allowing a system to make accurate predictions even in areas with limited or no direct fingerprint measurements.
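As a rough illustration of FA, a virtual fingerprint between two surveyed RPs can be generated by linear interpolation of their known RSS vectors; the RSS values below are hypothetical, not taken from the paper's dataset:

```python
import numpy as np

# Two measured RSS fingerprints (dBm) at adjacent reference points,
# e.g. 0.5 m apart; the values are illustrative only.
rss_rp_a = np.array([-48.0, -61.0])   # [AP1, AP2] at RP A
rss_rp_b = np.array([-52.0, -57.0])   # [AP1, AP2] at RP B

def virtual_fingerprint(fp_a, fp_b, alpha):
    """Linearly interpolate a virtual fingerprint at fractional
    position alpha (0 = RP A, 1 = RP B) between two RPs."""
    return (1.0 - alpha) * fp_a + alpha * fp_b

# Virtual RP halfway between A and B
fp_mid = virtual_fingerprint(rss_rp_a, rss_rp_b, 0.5)
print(fp_mid)  # [-50. -59.]
```

The same idea extends to a grid of RPs, where each virtual point is interpolated from its nearest surveyed neighbors (as with the KNN interpolation of [27]).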
To construct the training dataset for the deep learning model by combining data from multiple sensors, we employed Sequential Feature Selection [28]. In this technique, there is an initially empty set of features, which are incrementally added through a search across the feature space. Then, the features that contribute to enhancing the overall accuracy of the model are selected. This systematic approach ensures that the most informative sensor data are included in the training dataset, facilitating the development of a robust and accurate model. Furthermore, it allows for the efficient utilization of sensor data, optimizing a model’s performance for various applications in sensor-based tasks.
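The greedy forward-selection loop described above can be sketched as follows; the scoring function here is a simple correlation stand-in for the model-accuracy criterion actually used, and all data are synthetic:

```python
import numpy as np

def forward_select(features, labels, score_fn, n_select):
    """Sequential forward selection: start from an empty feature set and
    repeatedly add the column whose inclusion maximizes score_fn."""
    selected, remaining = [], list(range(features.shape[1]))
    while remaining and len(selected) < n_select:
        best_f, best_score = None, -np.inf
        for f in remaining:
            cols = features[:, selected + [f]]
            s = score_fn(cols, labels)
            if s > best_score:
                best_f, best_score = f, s
        selected.append(best_f)
        remaining.remove(best_f)
    return selected

# Toy example: the label depends only on column 2, so a correlation-based
# score (a stand-in for validation accuracy) should pick it first.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = X[:, 2]
score = lambda cols, t: max(abs(np.corrcoef(cols[:, i], t)[0, 1])
                            for i in range(cols.shape[1]))
picked = forward_select(X, y, score, n_select=2)
print(picked)
```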
The proposed method is designed for localizing robots by mounting a tablet-like device on the top of the robot, rather than for human targets carrying smartphones in their hands or pockets. This is because the orientation of, and gestures applied to, a handheld smartphone are more complex; in such scenarios, the accelerometer and gyroscope measurements in the IMU should be used, and the collected light sensor data may be less related to the target’s actual position. In summary, the main contributions of this study are as follows:
  • We proposed a fusion method to integrate the fingerprint information from the embedded light sensor, magnetometer, and Wi-Fi sensor in a commercial off-the-shelf Android device for indoor localization. The method does not require heavy equipment or expensive infrastructure, so it can be quickly deployed for a new region of interest.
  • We utilized a DFF neural network model as a tool for analyzing and detecting specific signal features within the fingerprint signals collected from the embedded sensors in commercially available mobile devices. By leveraging the DFF model, we achieved significant improvements compared to conventional fingerprint localization schemes. The proposed method demonstrates a potential for practical application in real-world scenarios.
  • We assessed the varying impacts of fingerprint data, such as light-illumination-level signals, magnetic field strengths, and Wi-Fi signals, on localization performance. We conducted a series of comparative experiments to analyze the contribution of each sensor type. Through the results obtained, one can establish criteria to determine sensor combinations adaptively for diverse application scenarios.
The subsequent sections of this paper are structured as follows. Section 2 provides an overview of the relevant literature and distinguishes the proposed approach from the current state-of-the-art systems. Section 3 outlines the localization model that utilizes the DFF network. Section 4 delineates the experimental design, data analysis, and system performance evaluation. Finally, Section 5 summarizes the conclusions drawn from this study and outlines possible future research directions.

2. Related Works

In recent years, various researchers have proposed a multitude of valuable approaches, employing a fusion of diverse sensors embedded within smart devices to achieve reliable indoor localization for location-based services (LBS) in indoor environments. For instance, in [29], a fusion method combining BLE and inertial navigation based on a particle filter was proposed. In [30], the fusion process involves CSI and magnetic field strength (MFS) for smartphones. Additionally, the authors of [31] presented a fusion technique that combines crowdsourced Wi-Fi fingerprinting with micro-electro-mechanical system sensors using an enhanced complementary filter. In [32], the fusion method incorporates accelerometers, gyroscopes, and magnetometers in smartphones along with Wi-Fi fingerprinting for pedestrian dead reckoning (PDR). Magicol [33] introduced a fusion method that combines Wi-Fi and magnetic signals based on a two-pass bidirectional particle-filtering approach to enhance accuracy. In [34], a fusion algorithm based on PDR and Wi-Fi RSS fingerprinting was proposed for achieving high accuracy in indoor positioning. This approach utilizes PDR data for step counting and distance estimation and RSS fingerprinting for correcting position errors. VMag, proposed in [35], is an infrastructure-free indoor-positioning method fusing geomagnetic data and visual images captured using smartphone cameras; it also utilizes particle-filtering and neural network techniques. Moreover, a pedestrian-positioning method [36] fusing smartphone IMU data and surveillance video was proposed, although these data may not always be readily available in certain environments.
Additionally, the combination of magnetic sensor data and deep learning has also been researched. In [37], the researchers proposed a deep learning method for creating a permanent magnet localization model for tongue tracking by training a feedforward neural network. In [38], a deep-learning-based neural network is introduced within the context of DC magnetic cleanliness for space missions entailing the modelling of magnetic dipoles. In [39], a real-time magnetic localization method was introduced, combining a hybrid feedforward neural network and the Levenberg–Marquardt (LM) algorithm.
Recently, machine learning and deep learning have yielded successful results in the realm of indoor localization. In [40], the authors proposed a deep neural network (DNN)-based indoor localization method for smartphones integrating Wi-Fi fine-timing measurement (FTM) and RSS. Convolutional neural network (CNN)-based schemes have also been proposed [41,42]. In [42], a method fusing Wi-Fi and MFS signals into fingerprint images was proposed in order to implement an accurate and orientation-free positioning system. Approaches based on recurrent neural networks (RNNs) and their variants, long short-term memory (LSTM) networks, were proposed in [16,43]. In [43], the authors formulated the indoor localization problem as a recursive function approximation problem, which was solved by using an LSTM network to fuse Wi-Fi and PDR signals obtained from smartphones. The neural networks used in these methods are complex and require careful parameter tuning and long training times. The authors of [44] proposed an LSTM-network-based method fusing Wi-Fi RSS measurements and PDR techniques to estimate the location of a smartphone; the accuracy of this system is influenced by Wi-Fi signal quality, and it requires periodic calibration for accurate PDR estimates. The author of [45] utilized a deep LSTM neural network to integrate data from magnetic and light sensors in smartphones for indoor localization, leveraging the high sampling rates of these sensors to improve accuracy. The drawback of this approach is that it requires a large quantity of labeled training data for the deep LSTM model, which may be time-consuming and costly to obtain.
In summary, most of the aforementioned methods rely on extensive offline fingerprint databases, complex infrastructure, or intricate neural networks that incur long training times or high hardware costs. In this paper, we propose an infrastructure-free method making use of a relatively lightweight DFF neural network, along with the integration of data from light, magnetometer, and Wi-Fi sensors in a smart device, designed to achieve stable and accurate indoor localization. The advantage of sensor fusion lies in the reduction in site-surveying time for building fingerprint radio maps in comparison to relying on a single type of sensor data, such as Wi-Fi RSS signals. Moreover, the use of multiple sensor sources can enhance robustness against environmental interference.

3. Proposed DFF-WGL Framework

We present DFF-WGL, an Android-device-based multiple sensor fusion scheme for localizing a mobile device (MD) in an indoor environment. DFF-WGL employs a deep learning model—the DFF neural network—to extract fingerprint features from a radio map, which is created using signals from the embedded light sensor, magnetometer, and Wi-Fi sensor in an Android device. DFF-WGL provides a real-time indoor localization framework consisting of two phases, namely, offline site surveillance and online localization, as illustrated in Figure 1.
In the offline phase, an MD is used to survey the region of interest (RoI), collecting fingerprint data at predefined reference points. The fingerprints of light-illumination-level signals, MFS signals, and Wi-Fi signals from two preselected access points (APs) are collected as prior information. Data preprocessing, including filtering, augmentation, and normalization, is performed after data collection. The index of each RP in the RoI is stored and used as a label to train the DFF network. After training the model, it is deployed on the Android platform to enable location estimation prediction during the online phase. During the online phase, when an MD is located in an unknown position, it collects new sensor data from the three embedded sensors and inputs them into the trained model. After the preprocessing process, the raw data are then fed to the trained network model, and the location estimate of the MD is calculated using the output probability from the DFF network’s output layer weighted according to the RP’s coordinates.

3.1. System Description

We developed an Android application for collecting fingerprint data. To expedite site surveillance, we implemented a multi-threaded approach for reading data from the light sensor, magnetometer, and Wi-Fi scanner, as depicted in Figure 2. Each sensor-reading process is handled via a separate thread. The light sensor and magnetometer require approximately 200 ms to read each sample of data, whereas the Wi-Fi scanner extracts the RSS data for specific Wi-Fi signals at one sample per second. As all three threads begin executing simultaneously upon a device’s readiness at the RP, the threads for the light sensor and magnetometer complete their respective reading processes before the Wi-Fi-scanning thread. Consequently, the total time spent at each RP depends on the Wi-Fi-scanning time required for 100 samples. Detailed information about the data collection process is given in Section 4.1.
Light-illumination-level data, MFS data, and Wi-Fi RSS data from two APs at k RPs are collected from the embedded sensors in the MD to construct the fingerprinting database, which can be described as
$$
F = \begin{bmatrix}
L_1 & M_1^x & M_1^y & M_1^z & W_1^1 & W_1^2 & S_1 \\
L_2 & M_2^x & M_2^y & M_2^z & W_2^1 & W_2^2 & S_2 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
L_k & M_k^x & M_k^y & M_k^z & W_k^1 & W_k^2 & S_k
\end{bmatrix}
$$
where $L_k$; $M_k^x$, $M_k^y$, $M_k^z$; and $W_k^1$, $W_k^2$ denote the light-illumination-level signal vector, the MFS signal vectors along the x, y, and z axes, and the Wi-Fi RSS signal vectors from two different APs collected at the kth RP, all with the same vector length. In our experiment, we collected 100 samples at each RP; hence, the vector length is 100. $S_k$ denotes the label of the kth RP, for example, ‘RP0’.
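A minimal sketch of how such a database row could be assembled per RP, using randomly generated stand-in readings (the real data come from the device sensors):

```python
import numpy as np

n_samples = 100                     # samples per RP, as in the paper
rp_labels = ["RP0", "RP1", "RP2"]   # toy RoI with k = 3 RPs
rng = np.random.default_rng(1)

rows = []
for label in rp_labels:
    light = rng.integers(0, 300, n_samples)         # lux readings L_k
    mfs   = rng.normal(size=(3, n_samples))         # M_k^x, M_k^y, M_k^z
    wifi  = rng.integers(-90, -30, (2, n_samples))  # W_k^1, W_k^2 (dBm)
    # One database row per RP: six signal vectors plus the label S_k
    rows.append((np.vstack([light, mfs, wifi]), label))

print(len(rows), rows[0][0].shape)  # 3 (6, 100)
```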

3.2. Data Preprocessing

3.2.1. Filtering for Light Illumination Level Data and Wi-Fi RSS Data

To eliminate the initial instability of the data collected from a tablet’s embedded light sensor and Wi-Fi sensor, several low-pass filtering methods can be employed. Notable options include the exponential moving average (EMA) filter [46], the simple moving average (SMA) filter [47], and the moving median (MM) filter [48]. These filters are adept at smoothing erratic or unstable data, resulting in a more consistent and representative signal. As shown in Figure 3, the SMA has a drawback: it initially produces a filtered value of zero. This characteristic does not match the typical behavior of light sensor data, where real-world light illumination levels do not always start at zero. The EMA, on the other hand, tends to generate non-integer values, which may not be suitable for training neural network models that work optimally with integer data. The MM filter, by contrast, maintains the integer data type in its output. This feature makes it particularly well suited for light sensor data, where the original readings are in integer form, and using integer values as inputs is advantageous when training neural network models on light illumination levels. Therefore, the MM filter is a fitting choice for filtering such data. Because MFS is less susceptible to interference, we did not apply any filtering to the MFS data in this scheme.
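A short sketch of an MM filter, assuming a centered window with edge padding (the paper does not state the window size; 5 is an arbitrary choice here). Note how the integer type is preserved and a transient spike is suppressed:

```python
import numpy as np

def moving_median(x, window=5):
    """Moving median (MM) filter: each output sample is the median of the
    surrounding window; odd windows over integer data stay integer-valued."""
    x = np.asarray(x)
    half = window // 2
    padded = np.pad(x, half, mode="edge")
    return np.array([int(np.median(padded[i:i + window]))
                     for i in range(len(x))])

# Integer lux readings with a spurious spike (illustrative values)
lux = [0, 250, 248, 251, 900, 249, 250, 252, 249]
print(moving_median(lux))  # the 900 spike is replaced by the local median
```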

3.2.2. Data Augmentation

Linear interpolation [49] was used as a data augmentation method for enhancing our sensor data for three critical sensor types: light illumination level, MFS, and Wi-Fi RSS data. This method involves filling in data gaps between reference points by estimating intermediate values based on known data points. For example, in the case of light illumination data, we used linear interpolation to predict illumination levels at unmeasured points within the data range. This process expanded our dataset, increasing its granularity and coverage, while preserving the original data’s characteristics, as shown in Figure 4.
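The interpolation step can be illustrated with `np.interp`; the positions and lux values below are hypothetical stand-ins for measured light-illumination data:

```python
import numpy as np

# Hypothetical light readings (lux) measured at five points along a path
measured_pos = np.array([0.0, 0.5, 1.0, 1.5, 2.0])   # metres
measured_lux = np.array([10.0, 80.0, 250.0, 90.0, 12.0])

# Augment: estimate values every 0.25 m by linear interpolation,
# doubling the granularity while preserving the measured points
dense_pos = np.arange(0.0, 2.01, 0.25)
dense_lux = np.interp(dense_pos, measured_pos, measured_lux)
print(dense_lux)  # 9 values; e.g. 45.0 lux at 0.25 m, midway between 10 and 80
```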

3.2.3. Normalization for the MFS Signal

To enhance the practicality of the proposed technique, we utilized pre-installed signal sources, such as ceiling lamps and Wi-Fi access points. As both light illumination and MFS signals can be obtained from any location, we selected the Wi-Fi signals that exhibit stable signal strength and can cover the entire RoI to generate a useful signal fingerprint. Specifically, as the MFS data stem from three different axes, it is necessary to normalize the data from each axis to mitigate potential errors. The values $M = [m_x, m_y, m_z]$ obtained from the magnetometer can be normalized using the $\ell_2$ norm:
$$
M_{norm} = \left[ \frac{m_x}{\|M\|_2}, \frac{m_y}{\|M\|_2}, \frac{m_z}{\|M\|_2} \right]
$$
where $\|M\|_2 = \sqrt{m_x^2 + m_y^2 + m_z^2}$, and $m_x$, $m_y$, and $m_z$ represent the magnetic signals along the x-axis, y-axis, and z-axis, respectively.
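The normalization is a one-liner in numpy; the magnetometer sample below is an illustrative 3-4-12 triple chosen so the norm is exactly 13:

```python
import numpy as np

def l2_normalize(m):
    """Normalize a 3-axis magnetometer sample M = [m_x, m_y, m_z]
    by its l2 norm, yielding a unit-length vector."""
    m = np.asarray(m, dtype=float)
    return m / np.linalg.norm(m)

# Hypothetical MFS sample; norm = sqrt(9 + 16 + 144) = 13
m_norm = l2_normalize([3.0, 4.0, 12.0])
print(m_norm)  # [3/13, 4/13, 12/13]
```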

3.2.4. Feature-Level Fusion for Multiple Sensor Data

To enhance the localization accuracy of the deep learning model, feature-level fusion [50] was employed in the construction of the training dataset. This method entails concatenating feature vectors from three distinct embedded sensors, as illustrated in Figure 4. The dataset matrix includes raw filtered light-illumination-level data, normalized MFS data, and two Wi-Fi signal RSS data, each placed in a separate row. These rows correspond to different RPs and serve as inputs to the deep learning model, directly connecting to the input layer of the DFF model. This approach optimizes the model’s ability to leverage information from multiple sensor sources, ultimately improving its accuracy in localization tasks.
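Feature-level fusion by concatenation can be sketched as follows; the per-sensor values are illustrative, and the result is the six-element vector that feeds the DFF input layer:

```python
import numpy as np

# Stand-in feature values from each sensor at one RP (not real data)
light = np.array([245.0])               # filtered light illumination (lux)
mfs   = np.array([0.23, -0.58, 0.78])   # normalized x, y, z MFS
wifi  = np.array([-47.0, -63.0])        # RSS (dBm) from the two APs

# Feature-level fusion: concatenate into one input vector for the model
fused = np.concatenate([light, mfs, wifi])
print(fused.shape)  # (6,) — matches the DFF input layer's six nodes
```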

3.3. Structure of DFF Networks

A deep feedforward (DFF) network [51] is a type of artificial neural network that is widely used in machine learning and deep learning applications. In a DFF network, the neurons are organized into layers, each with a specific role in the computation of the network’s output. The input layer receives the input data, which are processed by the hidden layers so that complex representations and hierarchical features can be learned. The output layer produces the network’s multi-class classification output. Our proposed scheme, shown in Figure 4, adopts a multi-class classification approach. The network takes the light-illumination-level signal; the MFS signals from the x, y, and z axes; and two Wi-Fi RSS signals as inputs.
For a multi-class classification problem with $K$ classes, the DFF network has $K$ output neurons, each giving the prediction probability for one class. Let $X$ be the input data and $Y$ the output data. We define a DFF network with $L$ layers as follows.
The first layer takes the input $X$ and applies an affine transformation with weights $W_1$ and biases $b_1$, followed by a non-linear activation function (ReLU) $f_1$:
$$
Z_1 = f_1(W_1 \cdot X + b_1)
$$
For each subsequent hidden layer $l = 2, \ldots, L-1$, we apply another affine transformation with weights $W_l$ and biases $b_l$, followed by a non-linear activation function (ReLU) $f_l$:
$$
Z_l = f_l(W_l \cdot Z_{l-1} + b_l)
$$
The output layer applies a final affine transformation with weights $W_L$ and biases $b_L$, followed by a softmax activation function to obtain the predicted class probabilities:
$$
\hat{Y} = \mathrm{softmax}(W_L \cdot Z_{L-1} + b_L)
$$
where $\mathrm{softmax}(z_i) = e^{z_i} / \sum_{j=1}^{K} e^{z_j}$, and $z_i$ denotes the score inferred by the network for class $i$ of the $K$ classes. The loss function used during training is the cross-entropy loss:
$$
\mathrm{loss}(Y, \hat{Y}) = -\sum_{i=1}^{K} Y_i \cdot \log \hat{Y}_i
$$
We train the network using backpropagation via the RMSprop optimization algorithm, where the weights are updated using
$$
v_{dW} = \beta \cdot v_{dW} + (1 - \beta) \cdot (dW)^2
$$
$$
v_{db} = \beta \cdot v_{db} + (1 - \beta) \cdot (db)^2
$$
$$
W = W - \eta \cdot \frac{dW}{\sqrt{v_{dW}} + \varepsilon}
$$
$$
b = b - \eta \cdot \frac{db}{\sqrt{v_{db}} + \varepsilon}
$$
where $v_{dW}$ is the moving average of the squared gradients of the weight parameter $W$, computed from its previous value and the current squared gradient $(dW)^2$, and $v_{db}$ is the analogous moving average for the bias parameter $b$. $dW$ and $db$ are the gradients of the loss function with respect to $W$ and $b$, respectively. $\beta$ is the hyperparameter controlling the decay rate of the moving averages, with a typical value of 0.9; $\eta$ is the learning rate controlling the step size; and $\varepsilon$ is a small constant added to the denominator to avoid division by zero and improve numerical stability. Finally, $W$ and $b$ are the weights and biases being updated.
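A single RMSprop parameter update, written out directly from the equations above (the numeric values are arbitrary):

```python
import numpy as np

def rmsprop_step(w, dw, v_dw, beta=0.9, eta=0.01, eps=1e-8):
    """One RMSprop update: accumulate a moving average of squared
    gradients, then scale the step by its square root."""
    v_dw = beta * v_dw + (1.0 - beta) * dw ** 2
    w = w - eta * dw / (np.sqrt(v_dw) + eps)
    return w, v_dw

w, v = np.array([1.0]), np.array([0.0])
w, v = rmsprop_step(w, np.array([2.0]), v)
print(w, v)  # v = 0.1 * 2^2 = 0.4; step = 0.01 * 2 / sqrt(0.4)
```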
The detailed DFF network structure we adopted in our scheme includes the following layers: the first layer, which is the input layer with six input nodes; the second Dense layer, consisting of 256 nodes with a ReLU activation function; and the third Dense layer, which also includes 256 nodes with a ReLU activation function. The output layer contains either 242 or 60 nodes (depending on the experiment in testbed 1 or testbed 2) with a Softmax activation function.
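The forward pass of this 6-256-256-242 structure can be sketched in plain numpy; the weights below are randomly initialized (a shape and structure check only, not the trained Keras model):

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [6, 256, 256, 242]   # input, two hidden Dense layers, output (testbed 1)

# Randomly initialized parameters, purely for illustration
Ws = [rng.normal(0.0, 0.05, (sizes[i + 1], sizes[i])) for i in range(3)]
bs = [np.zeros(sizes[i + 1]) for i in range(3)]

def forward(x):
    z = x
    for W, b in zip(Ws[:-1], bs[:-1]):
        z = np.maximum(0.0, W @ z + b)     # ReLU hidden layers
    logits = Ws[-1] @ z + bs[-1]
    e = np.exp(logits - logits.max())      # numerically stable softmax
    return e / e.sum()

probs = forward(rng.normal(size=6))
print(probs.shape)  # (242,) — a probability distribution over the RPs
```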

3.4. Generation of Location Estimate

In the online phase, when the MD is located at an unknown position, a set of newly collected sensor sample data is denoted as $F_t = [l_t, m_t^x, m_t^y, m_t^z, w_t^1, w_t^2]$, where $l_t$, $m_t^x$, $m_t^y$, $m_t^z$, $w_t^1$, and $w_t^2$ are the mean values received over time $t$. The output layer of the DFF network, acting as a multi-class classifier in Figure 4, consists of a group of probability values corresponding to the number of RPs in the RoI. $y_i$ represents the probability output by the last layer of the DFF model for the $i$th RP given the current sample data $F_t$ as input. Since the probability values in the output layer sum to one, the final location estimate can be calculated as follows:
$$
\hat{x} = \sum_{i=1}^{K} y_i \cdot l_{rp_i}
$$
where $K$ denotes the number of RPs in the RoI, and $l_{rp_i}$ represents the centroid coordinate of the $i$th RP.
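The probability-weighted centroid estimate is a single vectorized sum; the softmax outputs and RP coordinates below are hypothetical:

```python
import numpy as np

# Hypothetical output probabilities over K = 4 RPs (sum to 1)
y = np.array([0.1, 0.6, 0.2, 0.1])
# Centroid (x, y) coordinates of each RP on a 0.5 m grid (illustrative)
rp_coords = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, 0.5], [0.5, 0.5]])

# Location estimate: probability-weighted sum of the RP centroids
x_hat = (y[:, None] * rp_coords).sum(axis=0)
print(x_hat)  # pulled towards the RP with probability 0.6
```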

4. Experiments and Analysis

4.1. System Description

This study was conducted in two distinct indoor testbeds located on the sixth floor of the Hwado Building at Kwangwoon University in Seoul, South Korea. The first testbed was a wide rectangular corridor area measuring 60.5 m² (11 m × 5.5 m), while the second testbed was a narrow rectangular corridor area measuring 38 m² (19 m × 2 m). A total of 242 and 60 RPs were designated for the two areas, respectively, with Euclidean distances of 0.5 m and 1 m between adjacent points. The experimental setup, including a floor plan and a photographic depiction of the testbeds, can be found in Figure 5 and Figure 6, respectively.
During the site surveillance stage, an Android tablet serving as the MD was placed on a box atop a swivel chair with the monitor oriented vertically towards the ceiling to enable accurate measurements of environmental light illumination from lamps. The swivel chair was only moved to marked RPs, and at each RP, a period of approximately 1 min 40 s was required for the MD to collect 100 samples each of light sensor data; MFS data from the x-axis, y-axis, and z-axis of the magnetic sensor; and RSS data from two selected Wi-Fi signals. The label assigned to each RP was used to annotate the corresponding environment data in the dataset, which was temporarily stored in Excel file format on the tablet. Once the site survey stage was completed, the dataset was uploaded to a computer via UART.
In a single round of data collection at an RP, a total of 100 measurements of the light illumination level; 100 measurements of the MFS along the x-axis, y-axis, and z-axis; and 100 measurements of RSS for two selected Wi-Fi signals were acquired. These measurements were utilized to construct four independent fingerprint datasets. In testbed 1, datasets 1 and 2 each have a length of 145,200 data points (242 × 100 × 6), representing 100 measurements for 242 RPs and six sensor signals. Similarly, in testbed 2, datasets 3 and 4 each have a length of 36,000 data points (60 × 100 × 6), representing 100 measurements for 60 RPs and six sensor signals. The detailed information about the four datasets is summarized in Table 1. A 3D visualization of the four datasets is shown in Figure 7.
The DFF model was trained on Google Colab using a dataset partitioned into training, validation, and test sets in a ratio of 6:2:2. Keras, a deep learning API within the TensorFlow framework, was utilized to develop the model. Specifically, the TensorFlow API version was 2.12.0, and the Keras version was 2.12.0.
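One way to realize a 6:2:2 split is by shuffling indices, sketched below; the paper does not specify the shuffling procedure or seed, so these are assumptions:

```python
import numpy as np

def split_622(n, seed=42):
    """Shuffle n sample indices and partition them 60/20/20 into
    train/validation/test index sets (seed is an arbitrary choice)."""
    idx = np.random.default_rng(seed).permutation(n)
    n_tr, n_va = int(0.6 * n), int(0.2 * n)
    return idx[:n_tr], idx[n_tr:n_tr + n_va], idx[n_tr + n_va:]

tr, va, te = split_622(1000)
print(len(tr), len(va), len(te))  # 600 200 200
```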
During the online localization process, the trained neural network runs on the Android platform, while the MD collects new environment data and then inputs them to the trained model. Finally, the estimated position is calculated and subsequently displayed on the tablet’s monitor.

4.2. Fingerprint Correlation Analysis

To establish the practicability of the proposed scheme, which involves the integration of data from a light sensor, a magnetometer, and a Wi-Fi sensor to achieve localization, Pearson correlation coefficient computation was performed using the four authentic fingerprint datasets. The Pearson correlation coefficient r is calculated as follows:
$$
r = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_i (x_i - \bar{x})^2} \, \sqrt{\sum_i (y_i - \bar{y})^2}}
$$
where $x_i$ and $y_i$ are the values in the first and second datasets, and $\bar{x}$ and $\bar{y}$ are the corresponding mean values.
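The coefficient can be computed directly from its definition and cross-checked against `np.corrcoef`; the two toy vectors below are perfectly correlated by construction:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two fingerprint vectors."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum())

a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([2.0, 4.0, 6.0, 8.0])   # b = 2a, so r should be 1
print(pearson_r(a, b))
```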
Figure 8a,b present the correlation matrices for testbeds 1 and 2, respectively, based on four distinct fingerprint datasets. The figure reveals that the light sensor data demonstrate the highest correlation coefficient, which can be attributed to their relatively stable illumination intensity, which persists unless there is a change in the light source position or damage to the source itself. Conversely, the MFS data exhibit a lower correlation coefficient compared to the light-illumination-level data, indicating that the MFS signal varies over time. Lastly, the Wi-Fi RSS signal exhibits the lowest correlation coefficient across both testbeds, further accentuating the inherent instability of Wi-Fi RSS signals in indoor environments and their proneness to electronic interference.

4.3. Impact of Different Sensor Combinations

To assess the impact of various sensor combinations on the localization error distribution, we devised multiple groups of sensor signal combinations. We conducted a comparative analysis to elucidate the distinct contributions of these sensor signals using a test dataset sourced from the offline Dataset 1. For ease of exposition, the MFS signals from the x, y, and z axes were treated as a single signal type, while the two Wi-Fi RSS signals, despite their intrinsic similarity, were categorized as two distinct signal types. Consequently, cases C1 to C4 encompass a single type of sensor signal, cases C5 to C8 correspond to two types of sensor signals, cases C9 to C11 involve three distinct signal types, and case C12 integrates four diverse signal types. The cumulative distribution function (CDF) of the localization error under the proposed model is presented in Figure 9.
Firstly, we present the experimental results for single-sensor signal-based localization (C1~C4). Beginning with case C1, it is evident that prediction performance remained subpar, and localization accuracy was consistently low across various learning rates. Although the light-illumination-level signal exhibits remarkable stability, its coverage area is limited. As illustrated in Figure 7a,g, the light illumination levels recorded on different dates are nearly identical. However, valid illumination is detected only within specific regions beneath the lamps, distinct from the case for other RPs. In most other areas, the illumination levels are close to zero. Consequently, the network model struggled to acquire sufficient location-related information solely from the light-illumination-level signal. This deficiency is reflected in the notably low prediction accuracy within the test datasets.
Similarly, the performance with a single Wi-Fi signal (C3 and C4) also proved unsatisfactory. The highest accuracy within 2 m did not exceed 20%, and more than 50% of localization errors exceeded 3 m, as evidenced in Figure 9C3,C4. These outcomes underscore the unreliability of single-Wi-Fi-signal fingerprinting in indoor environments: severe multipath effects and interference cause frequent fluctuations in indoor Wi-Fi signals, exacerbating the challenges faced by the localization system. In contrast, the results for the MFS signal (C2) surpass those of the other single-signal experiments. The embedded magnetometer captures three-dimensional data along the x, y, and z axes, providing C2 with more information than the other single-signal cases and consequently yielding superior prediction performance.
In experiments C5 to C8, we investigated the performance of two-signal combinations. A comparison of C8 with C3 and C4 highlights the advantage of employing two Wi-Fi signals over one, resulting in improved prediction accuracy; however, the 21.7% test accuracy achieved in C8 remains insufficient for accurate indoor localization. The second-lowest test accuracy, 54.2%, was observed in C6, which combined the light-illumination-level signal with Wi-Fi signal 1. Comparing C6 (54.2%) and C8 (21.7%) against C3 (6.8%), the light-illumination-level signal raises test accuracy by 47.4%, whereas Wi-Fi signal 2 raises it by only 14.9%. This suggests that the light-illumination-level signal, owing to its stability and strong location-related characteristics, may be preferable to Wi-Fi in certain applications.
The highest test accuracy was achieved in C5, amounting to 92.1%, followed closely by that of 89.1% in C7. The discrepancy in accuracy between C5 and C7, attributed to the use of a light-illumination-level signal versus Wi-Fi signal 1, aligns with the earlier analysis.
Moving on to C9~C11, designed to assess performance with three signal types, the inclusion of the light-illumination-level signal in C9 (96.05%) contributed a 6.95% accuracy increase over C7 (89.1%). Furthermore, comparing C9 (96.05%) with C11 (93.5%), the illumination-level signal provided a 2.55% accuracy boost over Wi-Fi signal 2. Comparing C8 (21.7%), C10 (78.05%), and C11 (93.58%), the MFS signals outperform the light-illumination-level signal, delivering a substantial 71.88% accuracy increase versus the 56.35% boost of the latter. These results align closely with the comparisons between C3 (6.8%), C6 (54.2%), and C7 (89.1%), which show accuracy improvements of approximately 82.3% and 47.4%, respectively.
The final experiment, denoted as C12, employed all available sensor signals, achieving a remarkable 96.85% test accuracy with a learning rate of 0.01 and an even higher accuracy of 97.32% with a learning rate of 0.001. Notably, C12 demonstrated the highest test accuracy among all twelve experiments and exhibited the smallest localization error within the test dataset. C12 (97.32%) and C9 (95.85%) displayed similar performance, with the slight variance observed being attributable to the influence of Wi-Fi signal 2. Despite its relatively modest contribution, Wi-Fi signal 2 still contributed to a 1.47% accuracy improvement.

4.4. Impact of Learning Rate

To assess the influence of the learning rate (lr) on the neural network model's performance, we selected four representative values, namely 0.1, 0.01, 0.001, and 0.0001, for a comprehensive evaluation across all comparative experiments. Underfitting occurred at lr = 0.1: this learning rate proved excessively large for the dataset, hindering the model's ability to capture intricate features and resulting in non-convergence, elevated test loss, and test accuracy below training accuracy. Conversely, overfitting occurred at lr = 0.0001, at which point the model learned an excessive number of fine details, including noise and inaccuracies, leading to the highest training and test losses among the four learning rates.
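The step-size sensitivity described above (divergence when the rate is too large, stagnation when it is too small) can be reproduced on a toy quadratic objective, independent of the DFF model; this is only an illustrative sketch, not the paper's training procedure:

```python
def gd_final(lr, steps=50, w0=1.0):
    """Gradient descent on f(w) = w**2 (gradient 2w), returning |w| after `steps`."""
    w = w0
    for _ in range(steps):
        w -= lr * 2 * w
    return abs(w)

# Too large a step diverges; a tiny step barely moves; a moderate rate converges.
assert gd_final(1.1) > 1e3      # non-convergence at an excessive rate
assert gd_final(1e-4) > 0.9     # near-standstill at a tiny rate
assert gd_final(0.1) < 1e-4     # rapid convergence at a moderate rate
```

The suitable range of lr depends on the loss surface, which is why the paper sweeps 0.1 down to 0.0001 rather than fixing a single value.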
In Figure 9, across all twelve experiments, lr = 0.0001 yields the highest average localization error, with lr = 0.1 ranking second highest; in the specific cases of Figure 9C1,C3,C4, the models additionally fail to converge and attain exceedingly low accuracy. Comparatively, the models trained with lr = 0.01 and lr = 0.001 exhibit better localization accuracy, with lr = 0.001 slightly ahead: its step length in the gradient descent algorithm is large enough for efficient learning on the current dataset, yet small enough to avoid the instability observed at larger rates.

4.5. Performance in Real-Time Localization

Based on previous experimental findings and parameter analyses, we conducted real-time online experiments to assess the practical performance of the proposed DFF-WGL scheme within two distinct testbeds. Separate experimental paths were designed for testbeds 1 and 2. In Figure 10a,b, we present 2D plots depicting twelve sets of localization errors obtained under varying sensor signal combinations while the MD collected real-time environmental data in both testbeds. To facilitate the comparative evaluation of system performance across different signal combinations, we employed box plots in Figure 10, revealing the distribution, spread, and central tendency (median) of the localization errors. Subsequently, in Figure 11, we compare the mean, median, and variance values of the localization errors for each result group.
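The per-combination statistics reported in Figure 11 are standard summaries of each error set; the following sketch uses hypothetical error samples (not the measured values) to show the computation:

```python
import numpy as np

# Hypothetical error samples (m) for two combinations, e.g. MFS-only vs. all sensors.
errors_c2 = np.array([0.10, 0.15, 0.12, 0.60, 0.05])
errors_c12 = np.array([0.01, 0.02, 0.01, 0.03, 0.01])

def summarize(e):
    """Mean, median, and variance, as reported per combination in Figure 11."""
    return float(np.mean(e)), float(np.median(e)), float(np.var(e))

mean2, med2, var2 = summarize(errors_c2)
mean12, med12, var12 = summarize(errors_c12)
# The all-sensor combination is both more accurate (mean) and more stable (variance).
assert mean12 < mean2 and var12 < var2
```

The median and interquartile spread of these same error sets are what the box plots in Figure 10 visualize.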
In Figure 10, showing the box plots, and Figure 11, presenting the histograms, we can observe that in testbed1 (C1, C3, C4, C6, and C8), the light-illumination-level data and the two Wi-Fi RSS signals, alone or in combination, failed to provide sufficient and stable location-related information; consequently, the DFF network struggled to converge and make accurate localization predictions. Even with both light and Wi-Fi signals in C10, the localization error decreased only marginally, to a mean of approximately 0.87 m. In contrast, the MFS signal from the magnetometer alone (C2) achieved a localization error as low as 0.15 m, although this signal is unstable, as indicated by the 2D figure and the variance values in Figure 11. Combining the MFS signal with the light sensor (C5) yields superior results to combining it with Wi-Fi (C7), the former producing lower mean localization errors and variances.
Comparing the results from C9 to C12, C10 performs the poorest, as it relies solely on the light sensor signal and the two Wi-Fi RSS signals. Next is C11, followed by C9, since the light-illumination-level signal offers more stable information than a Wi-Fi signal. The lowest localization error is achieved by the C12 combination, which provides sufficient and stable fingerprint information for the DFF network to make accurate localization predictions.
For testbed2, the majority of outcomes are consistent with those observed in testbed1. Notably, for the C1 combination, testbed2 contains three ceiling lamps, so the MD collected a higher volume of light-illumination-level data than in the testbed1 experiment. Consequently, the mean localization error for C1 in testbed2 is approximately 1.1 m, an improvement over the 2.1 m observed in testbed1. Thanks to this richer light illumination signal, C6 and C10, which failed to achieve satisfactory localization performance in testbed1, exhibited nearly zero median errors in testbed2.
In conclusion, the sensors’ ability to furnish adequate and stable fingerprint information corresponds to the following sequence: magnetometer > light sensor > Wi-Fi sensor. These real-time experimental results align with the analyses conducted using offline datasets (Figure 9).

4.6. Performance Comparison

To evaluate the performance of the proposed scheme, we compared its localization error with that of existing state-of-the-art schemes, including EZ [52], BPNN [53], and Magicol [33], as shown in Figure 12. The proposed DFF-WGL scheme keeps more than 80% of localization errors within 2 m, outperforming Magicol. Magicol, which fuses Wi-Fi signals, MFS signals, and dead reckoning with a particle filter, achieves high accuracy but at a high computational cost, while EZ and BPNN rely on Wi-Fi signals alone to estimate the target's position. By integrating signals from multiple sensors with a deep feedforward network, the proposed scheme achieved the best localization performance.

5. Conclusions

In this study, we proposed a novel approach for indoor active localization that uses a DFF neural network as a classifier, fusing fingerprint data from multiple sensors embedded in a commercial Android device: the light sensor, the magnetometer, and the Wi-Fi sensor. The neural network serves as a multi-class classifier that learns fine-grained features from the signal fingerprints collected at each RP within the region of interest. Different sensor combinations were evaluated, and localization performance was compared in two real indoor environments. Because the magnetometer is sensitive to the direction and position of the device, we kept the mobile device in a stable state, facing a fixed direction, with its screen oriented vertically toward the ceiling; this ensured that the embedded light sensor near the front camera collected stable light-illumination-level data throughout the experimental process.
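The multi-class-classifier formulation can be sketched as a forward pass that maps one fused fingerprint vector to a probability over RP classes. The layer sizes and weights below are illustrative (the paper's actual architecture and trained parameters are not reproduced here); only the input layout (light + three MFS axes + two Wi-Fi RSS) and the 60-class output for testbed2 follow the text:

```python
import numpy as np

rng = np.random.default_rng(42)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(z):
    z = z - z.max()          # numerically stable softmax
    e = np.exp(z)
    return e / e.sum()

# Fused fingerprint: light (1) + MFS x/y/z (3) + two Wi-Fi RSS (2) = 6 inputs.
# Hidden sizes are illustrative; 60 output classes = RPs in testbed2.
sizes = [6, 32, 32, 60]
weights = [rng.normal(0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def predict(x):
    """DFF forward pass; argmax of the output gives the predicted RP index."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(h @ W + b)
    return softmax(h @ weights[-1] + biases[-1])

x = np.array([310.0, 22.1, -5.3, 41.7, -48.0, -55.0])  # one fused sample (lux, uT, dBm)
probs = predict(x)
rp_hat = int(np.argmax(probs))
```

Training such a classifier would minimize cross-entropy between `probs` and the one-hot RP label for each fingerprint; the untrained weights here only demonstrate the data flow.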
Since magnetic measurements can be disturbed by magnetic materials or by 50 Hz power-network signals and their harmonics in indoor environments, we plan to address this problem by implementing more stringent measurement procedures, as described in [54,55], to adapt to more complex application scenarios. We also aim to increase the practicality of the proposed scheme in larger regions of interest under more complex indoor environments, to test its performance at different times of day, and to address the direction-awareness problem. Additionally, the scheme will be further developed to adaptively sense variations in light intensity and generate appropriate localization strategies, enabling its continuous use.

Author Contributions

Conceptualization, methodology, and writing—original draft preparation, C.S.; validation and data curation, J.Z. and K.J.; writing—review and editing and supervision, Y.K. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea Government (MSIT) (NRF-2021R1F1A1049509).

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy and ethical restrictions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Davidson, P.; Piché, R. A survey of selected indoor positioning methods for smartphones. IEEE Commun. Surv. Tutor. 2017, 19, 1347–1370. [Google Scholar] [CrossRef]
  2. Zafari, F.; Gkelias, A.; Leung, K.K. A survey of indoor localization systems and technologies. IEEE Commun. Surv. Tutor. 2019, 21, 2568–2599. [Google Scholar] [CrossRef]
  3. He, S.; Chan, S.H.G. Wi-Fi fingerprint-based indoor positioning: Recent advances and comparisons. IEEE Commun. Surv. Tutor. 2015, 18, 466–490. [Google Scholar] [CrossRef]
  4. Celik, A.; Romdhane, I.; Kaddoum, G.; Eltawil, A.M. A top-down survey on optical wireless communications for the internet of things. IEEE Commun. Surv. Tutor. 2022, 25, 1–45. [Google Scholar] [CrossRef]
  5. Naser, R.S.; Lam, M.C.; Qamar, F.; Zaidan, B.B. Smartphone-based indoor localization systems: A systematic literature review. Electronics 2023, 12, 1814. [Google Scholar] [CrossRef]
  6. He, S.; Kang, G.S. Geomagnetism for smartphone-based indoor localization: Challenges, advances, and comparisons. ACM Comput. Surv. 2017, 50, 1–37. [Google Scholar] [CrossRef]
  7. Wang, Q.; Fu, M.X.; Wang, J.Q.; Luo, H.Y.; Sun, L.; Ma, Z.C.; Li, W.; Zhang, C.Y.; Huang, R.; Li, X.D.; et al. Recent advances in pedestrian inertial navigation based on smartphone: A review. IEEE Sens. J. 2022, 22, 22319–22343. [Google Scholar] [CrossRef]
  8. Sun, C.; Zhou, B.; Yang, S.; Kim, Y. Geometric midpoint algorithm for device-free localization in low-density wireless sensor networks. Electronics 2021, 10, 2924. [Google Scholar] [CrossRef]
  9. Sun, C.; Zhou, J.; Jang, K.-S.; Kim, Y. Intelligent mesh cluster algorithm for device-free localization in wireless sensor networks. Electronics 2023, 12, 3426. [Google Scholar] [CrossRef]
  10. Zhou, J.; Sun, C.; Jang, K.; Yang, S.; Kim, Y. Human activity recognition based on continuous-wave radar and bidirectional gate recurrent unit. Electronics 2023, 12, 4060. [Google Scholar] [CrossRef]
  11. Yang, S.; Kim, Y. Single 24-GHz FMCW radar-based indoor device-free human localization and posture sensing with CNN. IEEE Sens. J. 2023, 23, 3059–3068. [Google Scholar] [CrossRef]
  12. Lee, J.; Park, K.; Kim, Y. Deep Learning-Based Device-Free Localization Scheme for Simultaneous Estimation of Indoor Location and Posture Using FMCW Radars. Sensors 2022, 22, 4447. [Google Scholar] [CrossRef] [PubMed]
  13. Gao, Z.; Gao, Y.; Wang, S.; Li, D.; Xu, Y. CRISLoc: Reconstructable CSI fingerprinting for indoor smartphone localization. IEEE Internet Things J. 2021, 8, 3422–3437. [Google Scholar] [CrossRef]
  14. Wu, Y.; Chen, R.; Li, W.; Yu, Y.; Zhou, H.; Yan, K. Indoor positioning based on walking-surveyed Wi-Fi fingerprint and corner reference trajectory-geomagnetic database. IEEE Sens. J. 2021, 21, 18964–18977. [Google Scholar] [CrossRef]
  15. Luo, R.C.; Hsiao, T.J. Indoor localization system based on hybrid Wi-Fi/BLE and hierarchical topological fingerprinting approach. IEEE Trans. Veh. Technol. 2019, 68, 10791–10806. [Google Scholar] [CrossRef]
  16. Shu, M.; Chen, G.; Zhang, Z.; Xu, L. Indoor geomagnetic positioning using direction-aware multiscale recurrent neural networks. IEEE Sens. J. 2023, 23, 3321–3333. [Google Scholar] [CrossRef]
  17. Hou, L.; Li, Y.; Zhuang, Y.; Zhou, B.; Tsai, G.-J.; Luo, Y.; El-Sheimy, N. Orientation-aided stochastic magnetic matching for indoor localization. IEEE Sens. J. 2020, 20, 1003–1010. [Google Scholar] [CrossRef]
  18. Sun, M.; Wang, Y.; Xu, S.; Yang, H.; Zhang, K. Indoor geomagnetic positioning using the enhanced genetic algorithm-based extreme learning machine. IEEE Trans. Instrum. Meas. 2021, 70, 2508611. [Google Scholar] [CrossRef]
  19. Zhang, C.; Zhang, X. Visible light localization using conventional light fixtures and smartphones. IEEE Trans. Mob. Comput. 2019, 18, 2968–2983. [Google Scholar] [CrossRef]
  20. Hussain, B.; Wang, Y.; Chen, R.; Cheng, H.C.; Yue, C.P. Lidr: Visible-light-communication-assisted dead reckoning for accurate indoor localization. IEEE Internet Things J. 2022, 9, 15742–15755. [Google Scholar] [CrossRef]
  21. Wu, C.; Yang, Z.; Liu, Y. Smartphones based crowdsourcing for indoor localization. IEEE Trans. Mob. Comput. 2015, 14, 444–457. [Google Scholar] [CrossRef]
  22. Zhao, W.; Han, S.; Hu, R.Q.; Meng, W.; Jia, Z. Crowdsourcing and multisource fusion-based fingerprint sensing in smartphone localization. IEEE Sens. J. 2018, 18, 3236–3247. [Google Scholar] [CrossRef]
  23. Rajab, A.M.; Wang, B. Automatic radio map database maintenance and updating based on crowdsourced samples for indoor localization. IEEE Sens. J. 2022, 22, 575–588. [Google Scholar] [CrossRef]
  24. Caso, G.; Nardis, L.D.; Benedetto, M.D. Low-complexity offline and online strategies for Wi-Fi fingerprinting indoor positioning systems. In Geographical and Fingerprinting Data to Create Systems for Indoor Positioning and Indoor/Outdoor Navigation; Academic Press: New York, NY, USA, 2019; pp. 129–145. [Google Scholar]
  25. Hernández, N.; Ocaña, M.; Alonso, J.M.; Kim, E. Continuous space estimation: Increasing WiFi-based indoor localization resolution without increasing the site-survey effort. Sensors 2017, 17, 147. [Google Scholar] [CrossRef] [PubMed]
  26. Lan, T.; Wang, X.; Chen, Z.; Zhu, J.; Zhang, S. Fingerprint augment based on super-resolution for Wi-Fi fingerprint based indoor localization. IEEE Sens. J. 2022, 22, 12152–12162. [Google Scholar] [CrossRef]
  27. Ni, K.S.; Nguyen, T.Q. Adaptable K-nearest neighbor for image interpolation. In Proceedings of the 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, Las Vegas, NV, USA, 31 March–4 April 2008; pp. 1297–1300. [Google Scholar]
  28. Gürbüz, S.Z.; Erol, B.; Çagliyan, B.; Tekeli, B. Operational assessment and adaptive selection of micro-Doppler features. IET Radar Sonar Navigat. 2015, 9, 1196–1204. [Google Scholar] [CrossRef]
  29. Chen, J.; Zhou, B.; Bao, S.; Liu, X.; Gu, Z.; Li, L.; Zhao, Y.; Zhu, J.; Lia, Q. A data-driven inertial navigation/bluetooth fusion algorithm for indoor positioning. IEEE Sens. J. 2021, 22, 5288–5301. [Google Scholar] [CrossRef]
  30. Li, P.; Yang, X.; Yin, Y.; Gao, S.; Niu, Q. Smartphone-based indoor localization with integrated fingerprint signal. IEEE Access 2020, 8, 33178–33187. [Google Scholar] [CrossRef]
  31. Yu, Y.; Chen, R.; Chen, L.; Li, W.; Wu, Y.; Zhou, H. Autonomous 3D indoor localization based on crowdsourced Wi-Fi fingerprinting and MEMS sensors. IEEE Sens. J. 2021, 22, 5248–5259. [Google Scholar] [CrossRef]
  32. Zou, H.; Chen, Z.; Jiang, H.; Xie, L.; Spanos, C. Accurate indoor localization and tracking using mobile phone inertial sensors, WiFi and iBeacon. In Proceedings of the 2017 IEEE International Symposium on Inertial Sensors and Systems (INERTIAL), Kauai, HI, USA, 27–30 March 2017; pp. 1–4. [Google Scholar]
  33. Shu, Y.; Bo, C.; Shen, G.; Zhao, C.; Li, L.; Zhao, F. Magicol: Indoor localization using pervasive magnetic field and opportunistic WiFi sensing. IEEE J. Sel. Areas Commun. 2015, 33, 1443–1457. [Google Scholar] [CrossRef]
  34. Shi, L.F.; Wang, Y.; Liu, G.X.; Chen, S.; Zhao, Y.L.; Shi, Y.F. A fusion algorithm of indoor positioning based on PDR and RSS fingerprint. IEEE Sens. J. 2018, 18, 9691–9698. [Google Scholar] [CrossRef]
  35. Liu, Z.; Zhang, L.; Liu, Q.; Yin, Y.; Cheng, L.; Zimmermann, R. Fusion of magnetic and visual sensors for indoor localization: Infrastructure-free and more effective. IEEE Trans. Multimed. 2017, 19, 874–888. [Google Scholar] [CrossRef]
  36. Yang, F.; Gou, L.; Cai, X. Pedestrian positioning scheme based on the fusion of smartphone IMU sensors and commercially surveillance video. IEEE Sens. J. 2022, 22, 4697–4708. [Google Scholar] [CrossRef]
  37. Sebkhi, N.; Sahadat, N.; Hersek, S.; Bhavsar, A.; Siahpoushan, S.; Ghoovanloo, M.; Inan, O.T. A deep neural network-based permanent magnet localization for tongue tracking. IEEE Sens. J. 2019, 19, 9324–9331. [Google Scholar] [CrossRef]
  38. Spantideas, S.T.; Giannopoulos, A.E.; Kapsalis, N.C.; Capsalis, C.N. A deep learning method for modeling the magnetic signature of spacecraft equipment using multiple magnetic dipoles. IEEE Magn. Lett. 2021, 12, 2100905. [Google Scholar] [CrossRef]
  39. Qin, Y.; Lv, B.; Dai, H.; Han, J. An hFFNN-LM based real-time and high precision magnet localization method. IEEE Trans. Instrum. Meas. 2022, 71, 2509009. [Google Scholar] [CrossRef]
  40. Numan, P.E.; Park, H.; Laoudias, C.; Horsmanheimo, S.; Kim, S. Smartphone-based indoor localization via network learning with fusion of FTM/RSSI measurements. IEEE Netw. Lett. 2023, 5, 21–25. [Google Scholar] [CrossRef]
  41. Lee, N.; Han, D. Magnetic indoor positioning system using deep neural network. In Proceedings of the 8th International Conference on Indoor Positioning and Indoor Navigation (IPIN), Sapporo, Japan, 18–21 September 2017. [Google Scholar]
  42. Shao, W.; Luo, H.; Zhao, F.; Ma, Y.; Zhao, Z.; Crivello, A. Indoor positioning based on fingerprint-image and deep learning. IEEE Access 2018, 6, 74699–74712. [Google Scholar] [CrossRef]
  43. Zhang, M.; Jia, J.; Chen, J.; Deng, Y.; Wang, X.; Aghvami, A.H. Indoor localization fusing WiFi with smartphone inertial sensors using LSTM networks. IEEE Internet Things J. 2021, 8, 13608–13623. [Google Scholar] [CrossRef]
  44. Yu, D.; Li, C.; Xiao, J. Neural networks-based Wi-Fi/PDR indoor navigation fusion methods. IEEE Trans. Instrum. Meas. 2023, 72, 2503514. [Google Scholar] [CrossRef]
  45. Wang, X.; Yu, Z.; Mao, S. DeepML: Deep LSTM for Indoor Localization with Smartphone Magnetic and Light Sensors. In Proceedings of the 2018 IEEE International Conference on Communications (ICC), Kansas City, MO, USA, 20–24 May 2018. [Google Scholar]
  46. Alexander, B.; Ivan, T.; Denis, B. Analysis of noisy signal restoration quality with exponential moving average filter. In Proceedings of the 2016 International Siberian Conference on Control and Communications (SIBCON), Moscow, Russia, 12–14 May 2016; pp. 1–4. [Google Scholar]
  47. Serheiev-Horchynskyi, O. Analysis of Frequency Characteristics of Simple Moving Average Digital Filtering System. In Proceedings of the 2019 IEEE International Scientific Practical Conference Problems of Info Communications Science and Technology (PIC S&T), Kyiv, Ukraine, 8–11 October 2019; pp. 97–100. [Google Scholar]
  48. Pratt, W.K. Digital Image Processing, 4th ed.; John Wiley & Sons: Hoboken, NJ, USA, 2007. [Google Scholar]
  49. Yang, P.; Xu, J.; Wang, S. Position fingerprint localization method based on linear interpolation in robot auditory system. In Proceedings of the Chinese Automation Congress, Jinan, China, 20–22 October 2017; pp. 2766–2771. [Google Scholar]
  50. Tahmoush, D. Review of Micro-Doppler Signatures. IET Radar Sonar Navig. 2015, 9, 1140–1146. [Google Scholar] [CrossRef]
  51. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016. [Google Scholar]
  52. Chintalapudi, K.; Iyer, A.P.; Padmanabhan, V.N. Indoor localization without the pain. In Proceedings of the Sixteenth Annual International Conference on Mobile Computing and Networking-MobiCom ’10, Chicago, IL, USA, 20–24 September 2010; pp. 173–184. [Google Scholar]
  53. Zhou, C.; Wieser, A. Application of backpropagation neural networks to both stages of fingerprinting based WIPS. In Proceedings of the 2016 Fourth International Conference on Ubiquitous Positioning, Indoor Navigation and Location Based Services (UPINLBS), Shanghai, China, 2–4 November 2016; pp. 207–217. [Google Scholar]
  54. Tsatalas, S.; Vergos, D.; Spantideas, S.; Kapsalis, N.; Kakarakis, S.-D.; Livanos, N.; Hammal, S.; Alifragkis, E.; Bougas, A.; Capsalis, C.; et al. A novel multi-magnetometer facility for on-ground characterization of spacecraft equipment. Measurement 2019, 146, 948–960. [Google Scholar] [CrossRef]
  55. Polirpo, A.; Cucca, M. New facility for S/C magnetic cleanliness program. In Proceedings of the 2012 ESA Workshop on Aerospace EMC, Venice, Italy, 21–23 May 2012; IEEE: Piscataway, NJ, USA, 2012. [Google Scholar]
Figure 1. Schematic diagram of the proposed deep-feed-forward-neural-network-based indoor localization scheme integrating data from multiple sensors.
Figure 2. Multiple threads for reading data from light sensor, magnetometer, and Wi-Fi scanner in the offline site surveillance process.
Figure 3. Different low-pass filters—EMA, SMA, and Moving Median filters—used to smooth sensor data. (a) Light-illumination-level data. (b) Wi-Fi RSS data.
Figure 4. Framework of the multiple-sensor-integrated indoor localization method based on DFF network.
Figure 5. (a) Schematic floor plan of testbed1. (b) Experimental setup photograph depicting data collection using mobile device.
Figure 6. (a) Schematic floor plan of testbed2. (b) Experimental setup photograph depicting data collection using mobile device.
Figure 7. Three-dimensional figure depicting four groups of sensor fingerprint datasets collected in different testbeds and on different dates. Dataset1: (af), collected in testbed1 on 12 June 2022; Dataset2: (gl), collected in testbed1 on 22 June 2022; Dataset3: (mr), collected in testbed2 on 13 April 2023; Dataset4: (sx), collected in testbed2 on 20 April 2023.
Figure 8. Visualization of the correlation matrix. The label “D1” represents Dataset 1; the label “D2” represents Dataset 2; the label “D3” represents Dataset 3; the label “D4” represents Dataset 4. (a) testbed1; (b) testbed2.
Figure 9. The CDF for the estimated localization error based on Dataset 1 under different sensor signal combinations.
Figure 10. Illustration of real-time localization experiment results for an Android device under a designed path in (a) Testbed1 and (b) Testbed2 under different sensor signal combinations. Black points indicate the RPs, the blue line indicates the designed path, and the red point indicates the location estimates. The box plot on the right side shows the localization error distribution.
Figure 11. Statistical analysis of localization error: mean, median, and variance across different sensor signal combinations in (a) Testbed1 and (b) Testbed2.
Figure 12. Comparison of the localization performance for different tracking models.
Table 1. Detailed information about datasets.
| Dataset | Location | Samples/Signal | Signal Kinds | RPs | Total Data | Date |
|---|---|---|---|---|---|---|
| 1 | Testbed1 | 100 | 6 | 242 | 145,200 | 12 June 2022 |
| 2 | Testbed1 | 100 | 6 | 242 | 145,200 | 22 June 2022 |
| 3 | Testbed2 | 100 | 6 | 60 | 36,000 | 13 April 2023 |
| 4 | Testbed2 | 100 | 6 | 60 | 36,000 | 20 April 2023 |
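The Total Data column is consistent with the collection parameters (samples per signal × signal kinds × RPs), which a quick check confirms:

```python
samples_per_signal, signal_kinds = 100, 6
assert samples_per_signal * signal_kinds * 242 == 145_200  # testbed1 datasets (1, 2)
assert samples_per_signal * signal_kinds * 60 == 36_000    # testbed2 datasets (3, 4)
```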
