
Artificial Neural Networks for Navigation Sensor Integration Applications

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Remote Sensors".

Deadline for manuscript submissions: closed (15 December 2018) | Viewed by 29898

Special Issue Editors


Prof. Dr. Naser El-Sheimy
Guest Editor
Department of Geomatics Engineering, The University of Calgary, Calgary, AB T2N 1N4, Canada
Interests: intelligent and autonomous systems; navigation & positioning technologies; satellite technologies; multi-sensor systems; wireless positioning; vehicles & transportation systems; driverless cars; technology development; applications

Prof. Dr. Kai-Wei Chiang
Guest Editor
High Definition Maps Research Center, Department of Geomatics, National Cheng Kung University, No.1, Ta-Hsueh Road, Tainan 701, Taiwan
Interests: inertial navigation system; optimal multi-sensor fusion; seamless mapping and navigation applications; artificial intelligence and collaborative mobile mapping technology

Prof. Dr. Cheng Wang
Guest Editor
Department of Computer Science, School of Informatics, Xiamen University, Xiamen 361005, China
Interests: 3D vision; LiDAR; mobile mapping; geospatial big data analysis

Special Issue Information

Dear Colleagues,

This Special Issue is open to both review and original research articles on artificial neural networks for navigation sensor integration applications. In particular, we welcome papers that apply artificial neural networks to develop algorithms for applications such as indoor/outdoor navigation, navigation sensor calibration, multi-sensor fusion schemes, pedestrian navigation, vehicular navigation, airborne navigation, spaceborne navigation, unmanned systems, location-based services, and navigation quality control, as well as innovative approaches to developing intelligent navigators for autonomous vehicles. Original contributions on integrated, sensor-based fusion technologies that combine artificial intelligence with remote sensing platforms (ground vehicle, drone, airborne, and spaceborne) for mobile mapping and unknown-territory exploration applications are also encouraged.

Prof. Dr. Naser El-Sheimy
Prof. Dr. Kai-Wei Chiang
Prof. Dr. Cheng Wang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website; once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Artificial neural networks
  • Artificial intelligence
  • Sensor fusion
  • Multi-sensor systems
  • Sensor calibration
  • Unmanned systems
  • Pedestrian navigation
  • Vehicular navigation
  • Airborne navigation
  • Indoor/outdoor navigation

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (4 papers)


Research

20 pages, 3128 KiB  
Article
A Human Activity Recognition Algorithm Based on Stacking Denoising Autoencoder and LightGBM
by Xile Gao, Haiyong Luo, Qu Wang, Fang Zhao, Langlang Ye and Yuexia Zhang
Sensors 2019, 19(4), 947; https://doi.org/10.3390/s19040947 - 23 Feb 2019
Cited by 70 | Viewed by 5957
Abstract
The demand for human activity recognition has become increasingly urgent; it is widely used in indoor positioning, medical monitoring, safe driving, etc. Existing activity recognition approaches require either the location information of the sensors or specific domain knowledge, which is expensive, intrusive, and inconvenient for pervasive implementation. In this paper, a human activity recognition algorithm based on SDAE (Stacking Denoising Autoencoder) and LightGBM (LGB) is proposed. The SDAE is adopted to sanitize the noise in raw sensor data and to extract the most effective feature representation with unsupervised learning. The LGB then exploits the inherent feature dependencies among categories for accurate human activity recognition. Extensive experiments are conducted on four datasets of distinct sensor combinations, collected by different devices in three typical application scenarios: human movement modes, and users' current static and dynamic behaviors. The experimental results demonstrate that the proposed algorithm achieves an average accuracy of 95.99%, outperforming comparative algorithms based on XGBoost, CNN (Convolutional Neural Network), CNN + statistical features, or a single SDAE.
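As a rough illustration of the pipeline this abstract describes (unsupervised feature learning with a denoising autoencoder, followed by a gradient-boosted classifier), the sketch below wires a single denoising-autoencoder layer into a LightGBM classifier. It is a minimal sketch, not the authors' implementation: the window length, channel count, layer widths, noise level, and training settings are assumptions for illustration, and the published method stacks several autoencoder layers rather than one.

import numpy as np
import lightgbm as lgb
from tensorflow import keras
from tensorflow.keras import layers

WINDOW = 128        # samples per sliding window (assumed)
CHANNELS = 6        # e.g. 3-axis accelerometer + 3-axis gyroscope (assumed)
INPUT_DIM = WINDOW * CHANNELS

def build_denoising_autoencoder(input_dim, code_dim=64):
    # One denoising-autoencoder layer: corrupt the input, learn to reconstruct it.
    inp = keras.Input(shape=(input_dim,))
    noisy = layers.GaussianNoise(0.1)(inp)                # corruption applied during training
    code = layers.Dense(code_dim, activation="relu")(noisy)
    out = layers.Dense(input_dim, activation="linear")(code)
    autoencoder = keras.Model(inp, out)
    encoder = keras.Model(inp, code)
    autoencoder.compile(optimizer="adam", loss="mse")
    return autoencoder, encoder

def train_pipeline(X_windows, y_labels):
    # X_windows: (n_windows, INPUT_DIM) flattened sensor windows; y_labels: activity classes.
    autoencoder, encoder = build_denoising_autoencoder(INPUT_DIM)
    autoencoder.fit(X_windows, X_windows, epochs=20, batch_size=256, verbose=0)  # unsupervised
    features = encoder.predict(X_windows, verbose=0)      # learned feature representation
    clf = lgb.LGBMClassifier(n_estimators=300, learning_rate=0.05)
    clf.fit(features, y_labels)                           # supervised activity classifier
    return encoder, clf

In a stacked version, each successive autoencoder would be trained on the codes produced by the previous one before the boosted classifier is fitted on the final representation.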

23 pages, 7783 KiB  
Article
Pedestrian Stride-Length Estimation Based on LSTM and Denoising Autoencoders
by Qu Wang, Langlang Ye, Haiyong Luo, Aidong Men, Fang Zhao and Yan Huang
Sensors 2019, 19(4), 840; https://doi.org/10.3390/s19040840 - 18 Feb 2019
Cited by 73 | Viewed by 10535
Abstract
Accurate stride-length estimation is a fundamental component in numerous applications, such as pedestrian dead reckoning, gait analysis, and human activity recognition. The existing stride-length estimation algorithms work relatively well in cases of walking a straight line at normal speed, but their error grows rapidly in complex scenes. Inaccurate walking-distance estimation leads to large accumulative positioning errors in pedestrian dead reckoning. This paper proposes TapeLine, an adaptive stride-length estimation algorithm that automatically estimates a pedestrian's stride-length and walking-distance using the low-cost inertial sensors embedded in a smartphone. TapeLine consists of a Long Short-Term Memory module and Denoising Autoencoders that aim to sanitize the noise in raw inertial-sensor data. In addition to the accelerometer and gyroscope readings during the stride interval, higher-level features drawn from earlier studies were also fed to the proposed network model for stride-length estimation. To train the model and evaluate its performance, we designed a platform that simultaneously collects inertial-sensor measurements from a smartphone as training data, and pedestrian step events, actual stride-length, and cumulative walking-distance from a foot-mounted inertial navigation system module as training labels. We conducted elaborate experiments to verify the performance of the proposed algorithm and compared it with state-of-the-art stride-length estimation (SLE) algorithms. The experimental results demonstrated that the proposed algorithm outperformed the existing methods and achieved good estimation accuracy, with a stride-length error rate of 4.63% and a walking-distance error rate of 1.43%, using only the inertial sensors embedded in a smartphone without depending on any additional infrastructure or pre-collected database, when a pedestrian is walking in both indoor and outdoor complex environments (stairs, spiral stairs, escalators and elevators) with natural motion patterns (fast walking, normal walking, slow walking, running, jumping).
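The stride-length regressor sketched below is a hypothetical, simplified reading of the architecture described above: raw inertial readings over a stride interval pass through an LSTM (with noise injection standing in for the denoising stage), are concatenated with hand-crafted features, and regress a single stride length. The sequence length, channel count, feature count, and layer sizes are assumptions, not TapeLine's actual configuration.

from tensorflow import keras
from tensorflow.keras import layers

STEPS = 100           # IMU samples per stride interval (assumed)
CHANNELS = 6          # accelerometer + gyroscope axes (assumed)
N_HANDCRAFTED = 16    # higher-level features per stride (assumed)

def build_stride_model():
    imu_in = keras.Input(shape=(STEPS, CHANNELS), name="imu_window")
    feat_in = keras.Input(shape=(N_HANDCRAFTED,), name="handcrafted_features")
    x = layers.GaussianNoise(0.05)(imu_in)          # denoising-style corruption during training
    x = layers.LSTM(64)(x)                          # temporal encoding of the stride interval
    x = layers.Concatenate()([x, feat_in])          # fuse learned and hand-crafted features
    x = layers.Dense(32, activation="relu")(x)
    stride_len = layers.Dense(1, name="stride_length_m")(x)
    model = keras.Model([imu_in, feat_in], stride_len)
    model.compile(optimizer="adam", loss="mse")
    return model

# Training would pair smartphone IMU windows with foot-mounted-INS stride labels, e.g.:
# model.fit([imu_windows, handcrafted], stride_lengths, epochs=50, batch_size=128)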

18 pages, 18439 KiB  
Article
Wireless Fingerprinting Uncertainty Prediction Based on Machine Learning
by You Li, Zhouzheng Gao, Zhe He, Yuan Zhuang, Ahmed Radi, Ruizhi Chen and Naser El-Sheimy
Sensors 2019, 19(2), 324; https://doi.org/10.3390/s19020324 - 15 Jan 2019
Cited by 38 | Viewed by 5020
Abstract
Although wireless fingerprinting has been well researched and widely used for indoor localization, its performance is difficult to quantify. Therefore, when wireless fingerprinting solutions are used as location updates in multi-sensor integration, it is challenging to set their weight accurately. To alleviate this issue, this paper focuses on predicting wireless fingerprinting location uncertainty from the given received signal strength (RSS) measurements through the use of machine learning (ML). Two ML methods are used: an artificial neural network (ANN)-based approach and a Gaussian distribution (GD)-based method. The predicted location uncertainty is evaluated and further used to set the measurement noises in the dead-reckoning/wireless fingerprinting integrated localization extended Kalman filter (EKF). Indoor walking test results indicated both the possibility of predicting the wireless fingerprinting uncertainty through the ANN and the effectiveness of setting the measurement noises adaptively in the integrated localization EKF.
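The adaptive-weighting idea can be illustrated with a small sketch: a regression network maps an RSS vector to a predicted position-error standard deviation, and that prediction sets the measurement covariance R of the fingerprinting update in the EKF. Everything below (network size, the eight-access-point synthetic data, the 2D position measurement model) is an assumption for illustration; it is not the paper's implementation.

import numpy as np
from sklearn.neural_network import MLPRegressor

# Placeholder training data: in practice these would come from a fingerprinting survey
# (RSS vectors and the position errors observed against ground truth).
rng = np.random.default_rng(0)
rss_train = rng.normal(-70.0, 10.0, size=(500, 8))   # synthetic RSS vectors (dBm), 8 APs assumed
err_train = rng.uniform(0.5, 5.0, size=500)          # synthetic position errors (m)

uncertainty_net = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500)
uncertainty_net.fit(rss_train, err_train)

def fingerprint_update(x, P, z_pos, rss, H):
    # One EKF measurement update using a 2D fingerprinting fix with adaptive noise.
    sigma = float(uncertainty_net.predict(rss.reshape(1, -1))[0])  # predicted std (m)
    R = np.eye(2) * sigma**2                  # adaptive measurement covariance
    y = z_pos - H @ x                         # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

A Gaussian-distribution-based predictor could replace the network by estimating sigma from the spread of the nearest fingerprints instead.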

29 pages, 17384 KiB  
Article
Radar and Visual Odometry Integrated System Aided Navigation for UAVs in GNSS Denied Environment
by Mostafa Mostafa, Shady Zahran, Adel Moussa, Naser El-Sheimy and Abu Sesay
Sensors 2018, 18(9), 2776; https://doi.org/10.3390/s18092776 - 23 Aug 2018
Cited by 56 | Viewed by 7257
Abstract
Drones are becoming increasingly significant for a wide range of applications, such as firefighting and rescue. While flying in challenging environments, reliable Global Navigation Satellite System (GNSS) measurements cannot be guaranteed all the time, and the Inertial Navigation System (INS) navigation solution will deteriorate dramatically. Although different aiding sensors, such as cameras, have been proposed to reduce the effect of these drift errors, the positioning accuracy of these techniques is still affected by challenges such as a lack of observable features, inconsistent matches, illumination, and environmental conditions. This paper presents an integrated navigation system for Unmanned Aerial Vehicles (UAVs) in GNSS-denied environments based on Radar Odometry (RO) and an enhanced Visual Odometry (VO) to handle such challenges, since radar is immune to these issues. The estimated forward velocities of the vehicle from both the RO and the enhanced VO are fused with the Inertial Measurement Unit (IMU), barometer, and magnetometer measurements via an Extended Kalman Filter (EKF) to enhance the navigation accuracy during GNSS signal outages. The RO and VO are combined into a single system to help overcome their individual limitations, since the RO measurements degrade when flying over non-flat terrain; the integration of the VO is therefore important in such scenarios. The experimental results demonstrate the proposed system's ability to significantly enhance the 3D positioning accuracy during GNSS signal outages.
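As a hedged sketch of the fusion step described here, the function below stacks the radar-odometry and visual-odometry forward-velocity measurements into one EKF update, each with its own noise. The state layout, measurement matrix, and noise standard deviations are illustrative assumptions; the paper's filter also incorporates barometer and magnetometer updates and a full INS mechanization, which are omitted here.

import numpy as np

def forward_velocity_update(x, P, v_ro, v_vo, H_v, sigma_ro=0.3, sigma_vo=0.5):
    # Fuse RO and VO forward velocities (m/s) as two scalar observations of the same state.
    z = np.array([v_ro, v_vo])
    H = np.vstack([H_v, H_v])                       # both sensors observe the forward-velocity state
    R = np.diag([sigma_ro**2, sigma_vo**2])         # per-sensor measurement noise (assumed values)
    y = z - H @ x                                   # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                  # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Example: for a toy 3-element state [forward velocity, altitude, heading],
# H_v = np.array([[1.0, 0.0, 0.0]]) maps both measurements onto the velocity component.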
