
Information Fusion and Machine Learning for Sensors

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (30 January 2021) | Viewed by 38603

Special Issue Editors


Guest Editor
Computer Science and Engineering Department, Universidad Carlos III de Madrid, Edificio Sabatini, 28911 Leganes, Spain
Interests: data fusion; machine learning; Internet of Things (IoT); ambient intelligence; AAL; privacy

Guest Editor
Computer Science Department, Universidad Carlos III de Madrid, Avenida Gregorio Peces-Barba Martínez, 22, 28270 Colmenarejo, Madrid, Spain
Interests: machine learning; computer vision; data mining; neural networks; IoT

Special Issue Information

Dear Colleagues,

In today’s digital world, information is the key factor in making decisions. Ubiquitous electronic sources, such as sensors and video, provide a steady stream of data, while text-based data from databases, the Internet, email, chat, VOIP (Voice over Internet Protocol), and social media are growing exponentially. The ability to make sense of data by fusing them into new knowledge would provide clear advantages in making decisions.

Fusion systems aim to integrate sensor data and information in databases, knowledge bases, contextual information, etc., in order to describe situations. In a sense, the goal of information fusion is to attain a global view of a scenario in order to make the best decision.

One of the main goals of future research in data fusion (DF) is the application of machine learning (ML) techniques to this fused information in order to extract knowledge. How ML can be applied to these large data sets, and which techniques are appropriate for the data at hand, are the central questions of this Special Issue.

The key aspect of modern DF applications is the appropriate integration of all types of information or knowledge: observational data, knowledge models (a priori or inductively learned), and contextual information. Each of these categories has a distinctive nature and offers its own support to the result of the fusion process:

Observational Data: Observational data are the fundamental data about a dynamic scenario, as collected from some observational capability (sensors of any type). These data are about the observable entities in the world that are of interest;

Contextual Information: Contextual information has become fundamental to developing models in complex scenarios. Context, and the elements of what could be called contextual information, can be defined as “the set of circumstances surrounding a task that are potentially of relevance to its completion”. Because of this relevance to the task, fusing data or estimating/inferring the task state implies developing the best possible estimate while taking this lateral knowledge into account.

Learned Knowledge: DF systems combine multisource data to provide inferences, exploiting models of the expected behaviors of entities (physical models, such as kinematics, or logical models, such as context-dependent expected behaviors). In cases where a priori knowledge for the DF process cannot be formed, one possibility is to extract knowledge through online machine learning processes operating on observational and other data. These are procedural and algorithmic methods for discovering the relationships among, and the behaviors of, the entities of interest.
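
As a minimal illustration of how these three categories can interact, the Python sketch below fuses a single noisy sensor reading (observational data) with a context-dependent prior (contextual information); the noise statistics, which in practice might be learned from historical data (learned knowledge), and all numbers are assumptions made for the example.

```python
import numpy as np

def fuse(observation, obs_var, prior_mean, prior_var):
    """Precision-weighted fusion of an observation with a contextual prior."""
    w = prior_var / (prior_var + obs_var)            # weight given to the observation
    mean = prior_mean + w * (observation - prior_mean)
    var = (1.0 - w) * prior_var
    return mean, var

# Observational data: a noisy temperature reading from a sensor.
reading, sensor_var = 21.8, 0.5 ** 2
# Contextual information: indoor office in winter -> assumed prior around 22 deg C.
context_prior, context_var = 22.0, 1.0 ** 2
# Learned knowledge could, in practice, refine sensor_var or the prior from history.
print(fuse(reading, sensor_var, context_prior, context_var))
```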

This Special Issue invites contributions on the following topics (but is not limited to them):

Data fusion of distributed sensors
Context definition and management
Machine learning techniques
Reduction complexity in data sets
Recommendation systems
Integration of AI techniques
Reasoning systems in data fusion environments
Integration of data fusion
Ambient intelligence
Data fusion on autonomous systems
Virtual and augmented reality
Human–computer interaction
Visual pattern recognition
Environment modeling and reconstruction from images
Surveillance systems
Visual systems 
Data fusion and ML in UAVs
Big data analytics platforms and tools for data fusion and analytics
Cloud computing technologies and their use for big data, data fusion, and data analytics

Prof. Dr. Jose Manuel Molina López
Dr. Miguel Angel Patricio
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning
  • data fusion
  • data analytics

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (10 papers)


Research


20 pages, 1411 KiB  
Article
Outlier Detection Transilience-Probabilistic Model for Wind Tunnels Based on Sensor Data
by Encarna Quesada, Juan J. Cuadrado-Gallego, Miguel Ángel Patricio and Luis Usero
Sensors 2021, 21(7), 2532; https://doi.org/10.3390/s21072532 - 4 Apr 2021
Viewed by 2225
Abstract
Anomaly Detection research is focused on the development and application of methods that allow for the identification of data that are different enough—compared with the rest of the data set that is being analyzed—and considered anomalies (or, as they are more commonly called, outliers). These values mainly originate from two sources: they may be errors introduced during the collection or handling of the data, or they can be correct, but very different from the rest of the values. It is essential to correctly identify each type as, in the first case, they must be removed from the data set but, in the second case, they must be carefully analyzed and taken into account. The correct selection and use of the model to be applied to a specific problem is fundamental for the success of the anomaly detection study and, in many cases, the use of only one model cannot provide sufficient results, which can be only reached by using a mixture model resulting from the integration of existing and/or ad hoc-developed models. This is the kind of model that is developed and applied to solve the problem presented in this paper. This study deals with the definition and application of an anomaly detection model that combines statistical models and a new method defined by the authors, the Local Transilience Outlier Identification Method, in order to improve the identification of outliers in the sensor-obtained values of variables that affect the operations of wind tunnels. The correct detection of outliers for the variables involved in wind tunnel operations is very important for the industrial ventilation systems industry, especially for vertical wind tunnels, which are used as training facilities for indoor skydiving, as the incorrect performance of such devices may put human lives at risk. In consequence, the use of the presented model for outlier detection may have a high impact in this industrial sector. In this research work, a proof-of-concept is carried out using data from a real installation, in order to test the proposed anomaly analysis method and its application to control the correct performance of wind tunnels. Full article
(This article belongs to the Special Issue Information Fusion and Machine Learning for Sensors)
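
The authors' Local Transilience Outlier Identification Method is not reproduced here; as a generic stand-in for the idea of combining a global statistical test with a local one, the Python sketch below flags values by a modified z-score and by deviation from a sliding neighbourhood, on synthetic data. Thresholds and window sizes are arbitrary.

```python
import numpy as np

def robust_z_outliers(x, z_thresh=3.5):
    """Global statistical check: modified z-score based on median and MAD."""
    med = np.median(x)
    mad = np.median(np.abs(x - med)) or 1e-9
    return np.abs(0.6745 * (x - med) / mad) > z_thresh

def local_jump_outliers(x, window=5, k=4.0):
    """Local check: points that deviate strongly from their neighbourhood mean."""
    flags = np.zeros_like(x, dtype=bool)
    for i in range(len(x)):
        lo, hi = max(0, i - window), min(len(x), i + window + 1)
        neigh = np.delete(x[lo:hi], i - lo)          # neighbourhood without the point itself
        flags[i] = np.abs(x[i] - neigh.mean()) > k * (neigh.std() + 1e-9)
    return flags

# Synthetic sensor series with one injected anomaly.
x = np.r_[np.random.normal(12.0, 0.3, 200), 25.0, np.random.normal(12.0, 0.3, 50)]
combined = robust_z_outliers(x) | local_jump_outliers(x)
print("outlier indices:", np.where(combined)[0])
```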

28 pages, 9192 KiB  
Article
Forecasting Nonlinear Systems with LSTM: Analysis and Comparison with EKF
by Juan Pedro Llerena Caña, Jesús García Herrero and José Manuel Molina López
Sensors 2021, 21(5), 1805; https://doi.org/10.3390/s21051805 - 5 Mar 2021
Cited by 5 | Viewed by 3118
Abstract
Certain difficulties in path forecasting and filtering problems are based in the initial hypothesis of estimation and filtering techniques. Common hypotheses include that the system can be modeled as linear, Markovian, Gaussian, or all at one time. Although, in many cases, there are strategies to tackle problems with approaches that show very good results, the associated engineering process can become highly complex, requiring a great deal of time or even becoming unapproachable. To have tools to tackle complex problems without starting from a previous hypothesis but to continue to solve classic challenges and sharpen the implementation of estimation and filtering systems is of high scientific interest. This paper addresses the forecast–filter problem from deep learning paradigms with a neural network architecture inspired by natural language processing techniques and data structure. Unlike Kalman, this proposal performs the process of prediction and filtering in the same phase, while Kalman requires two phases. We propose three different study cases of incremental conceptual difficulty. The experimentation is divided into five parts: the standardization effect in raw data, proposal validation, filtering, loss of measurements (forecasting), and, finally, robustness. The results are compared with a Kalman filter, showing that the proposal is comparable in terms of the error within the linear case, with improved performance when facing non-linear systems. Full article
(This article belongs to the Special Issue Information Fusion and Machine Learning for Sensors)
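
A minimal sketch of the forecast–filter idea under the assumption that a sequence-to-one LSTM maps a window of standardized noisy measurements directly to the next state estimate, so prediction and filtering happen in a single pass; the layer sizes and dummy input below are illustrative, not the authors' network or data.

```python
import torch
import torch.nn as nn

class ForecastFilter(nn.Module):
    """Sequence-to-one LSTM: window of noisy measurements -> next state estimate."""
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, z):                  # z: (batch, window, 1), standardized measurements
        out, _ = self.lstm(z)
        return self.head(out[:, -1, :])    # prediction for the next time step

model = ForecastFilter()
window = torch.randn(8, 20, 1)             # 8 dummy windows of 20 noisy samples
print(model(window).shape)                 # torch.Size([8, 1])
```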

26 pages, 2809 KiB  
Article
Multi-Sensor Fusion for Underwater Vehicle Localization by Augmentation of RBF Neural Network and Error-State Kalman Filter
by Nabil Shaukat, Ahmed Ali, Muhammad Javed Iqbal, Muhammad Moinuddin and Pablo Otero
Sensors 2021, 21(4), 1149; https://doi.org/10.3390/s21041149 - 6 Feb 2021
Cited by 57 | Viewed by 7154
Abstract
The Kalman filter variants extended Kalman filter (EKF) and error-state Kalman filter (ESKF) are widely used in underwater multi-sensor fusion applications for localization and navigation. Since these filters are designed by employing first-order Taylor series approximation in the error covariance matrix, they result in a decrease in estimation accuracy under high nonlinearity. In order to address this problem, we proposed a novel multi-sensor fusion algorithm for underwater vehicle localization that improves state estimation by augmentation of the radial basis function (RBF) neural network with ESKF. In the proposed algorithm, the RBF neural network is utilized to compensate the lack of ESKF performance by improving the innovation error term. The weights and centers of the RBF neural network are designed by minimizing the estimation mean square error (MSE) using the steepest descent optimization approach. To test the performance, the proposed RBF-augmented ESKF multi-sensor fusion was compared with the conventional ESKF under three different realistic scenarios using Monte Carlo simulations. We found that our proposed method provides better navigation and localization results despite high nonlinearity, modeling uncertainty, and external disturbances. Full article
(This article belongs to the Special Issue Information Fusion and Machine Learning for Sensors)
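
The sketch below isolates the augmentation idea only: a small Gaussian-RBF model is fitted by steepest descent on the mean square error to predict a correction from the filter's innovation term. The non-linear target function, centers, and learning rate are assumptions for illustration, not the paper's underwater vehicle model or ESKF.

```python
import numpy as np

def rbf_features(x, centers, width=1.0):
    """Gaussian RBF activations for scalar inputs x against fixed centers."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

# Hypothetical training data: innovation (residual) -> correction that a plain
# error-state Kalman filter misses because of unmodelled non-linearity.
rng = np.random.default_rng(0)
innovation = rng.uniform(-2, 2, 500)
target_correction = 0.3 * np.sin(2 * innovation)      # assumed non-linear error

centers = np.linspace(-2, 2, 15)
w = np.zeros(len(centers))
lr = 0.05
for _ in range(2000):                                  # steepest descent on the MSE
    phi = rbf_features(innovation, centers)
    err = phi @ w - target_correction
    w -= lr * phi.T @ err / len(innovation)

print("learned correction at innovation = 1.0:",
      rbf_features(np.array([1.0]), centers) @ w)
```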

26 pages, 16821 KiB  
Article
Using Convolutional Neural Networks with Multiple Thermal Sensors for Unobtrusive Pose Recognition
by Matthew Burns, Federico Cruciani, Philip Morrow, Chris Nugent and Sally McClean
Sensors 2020, 20(23), 6932; https://doi.org/10.3390/s20236932 - 4 Dec 2020
Cited by 6 | Viewed by 2450
Abstract
The desire to remain living in one’s own home rather than a care home by those in need of 24/7 care is one that requires a level of understanding for the actions of an environment’s inhabitants. This can potentially be accomplished with the ability to recognise Activities of Daily Living (ADLs); however, this research focuses first on producing an unobtrusive solution for pose recognition where the preservation of privacy is a primary aim. With an accurate manner of predicting an inhabitant’s poses, their interactions with objects within the environment and, therefore, the activities they are performing, can begin to be understood. This research implements a Convolutional Neural Network (CNN), which has been designed with an original architecture derived from the popular AlexNet, to predict poses from thermal imagery that have been captured using thermopile infrared sensors (TISs). Five TISs have been deployed within the smart kitchen in Ulster University where each provides input to a corresponding trained CNN. The approach is evaluated using an original dataset and an F1-score of 0.9920 was achieved with all five TISs. The limitations of utilising a ceiling-based TIS are investigated and each possible permutation of corner-based TISs is evaluated to satisfy a trade-off between the number of TISs, the total sensor cost and the performances. These tests are also promising as F1-scores of 0.9266, 0.9149 and 0.8468 were achieved with the isolated use of four, three, and two corner TISs, respectively. Full article
(This article belongs to the Special Issue Information Fusion and Machine Learning for Sensors)
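
As a rough sketch of per-sensor CNN pose classification on low-resolution thermal frames, the PyTorch model below assumes an arbitrary 32 x 32 single-channel input and four pose classes; it is not the AlexNet-derived architecture, and the random tensors stand in for the Ulster University dataset.

```python
import torch
import torch.nn as nn

class ThermalPoseCNN(nn.Module):
    """Small CNN for low-resolution single-channel thermal frames.
    Input size (32x32) and number of pose classes are placeholders."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, n_classes)

    def forward(self, x):                      # x: (batch, 1, 32, 32)
        return self.classifier(self.features(x).flatten(1))

frames = torch.rand(5, 1, 32, 32)              # one dummy frame per thermal sensor
print(ThermalPoseCNN()(frames).shape)          # torch.Size([5, 4])
```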

17 pages, 1507 KiB  
Article
Fusion of Environmental Sensing on PM2.5 and Deep Learning on Vehicle Detecting for Acquiring Roadside PM2.5 Concentration Increments
by Wen-Cheng Vincent Wang, Tai-Hung Lin, Chun-Hu Liu, Chih-Wen Su and Shih-Chun Candice Lung
Sensors 2020, 20(17), 4679; https://doi.org/10.3390/s20174679 - 19 Aug 2020
Cited by 9 | Viewed by 2747
Abstract
Traffic emission is one of the major contributors to urban PM2.5, an important environmental health hazard. Estimating roadside PM2.5 concentration increments (above background levels) due to vehicles would assist in understanding pedestrians’ actual exposures. This work combines PM2.5 sensing and vehicle detecting to acquire roadside PM2.5 concentration increments due to vehicles. An automatic traffic analysis system (YOLOv3-tiny-3l) was applied to simultaneously detect and track vehicles with deep learning and traditional optical flow techniques, respectively, from governmental cameras that have low resolutions of only 352 × 240 pixels. Evaluation with 20% of the 2439 manually labeled images from 23 cameras showed that this system has 87% and 84% of the precision and recall rates, respectively, for five types of vehicles, namely, sedan, motorcycle, bus, truck, and trailer. By fusing the research-grade observations from PM2.5 sensors installed at two roadside locations with vehicle counts from the nearby governmental cameras analyzed by YOLOv3-tiny-3l, roadside PM2.5 concentration increments due to on-road sedans were estimated to be 0.0027–0.0050 µg/m3. This practical and low-cost method can be further applied in other countries to assess the impacts of vehicles on roadside PM2.5 concentrations. Full article
(This article belongs to the Special Issue Information Fusion and Machine Learning for Sensors)
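
A hedged sketch of the fusion step alone: given synthetic per-interval sedan counts (such as a detector like YOLOv3-tiny-3l might supply) and PM2.5 readings above an assumed background level, an ordinary least-squares fit estimates the concentration increment per sedan. All values are synthetic.

```python
import numpy as np

# Hypothetical per-minute data: roadside PM2.5 (ug/m3), an assumed background level,
# and sedan counts from a camera-based detector.
rng = np.random.default_rng(1)
sedans = rng.poisson(30, 500)
background = 18.0
pm25 = background + 0.004 * sedans + rng.normal(0, 0.05, 500)

# Least-squares fit of the increment above background against sedan counts.
increment = pm25 - background
slope, intercept = np.polyfit(sedans, increment, 1)
print(f"estimated PM2.5 increment per sedan: {slope:.4f} ug/m3")
```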

17 pages, 1508 KiB  
Article
Collaborative Filtering to Predict Sensor Array Values in Large IoT Networks
by Fernando Ortega, Ángel González-Prieto, Jesús Bobadilla and Abraham Gutiérrez
Sensors 2020, 20(16), 4628; https://doi.org/10.3390/s20164628 - 17 Aug 2020
Cited by 4 | Viewed by 2613
Abstract
Internet of Things (IoT) projects are increasing in size over time, and some of them are growing to reach the whole world. Sensor arrays are deployed world-wide and their data is sent to the cloud, making use of the Internet. These huge networks can be used to improve the quality of life of the humanity by continuously monitoring many useful indicators, like the health of the users, the air quality or the population movements. Nevertheless, in this scalable context, a percentage of the sensor data readings can fail due to several reasons like sensor reliabilities, network quality of service or extreme weather conditions, among others. Moreover, sensors are not homogeneously replaced and readings from some areas can be more precise than others. In order to address this problem, in this paper we propose to use collaborative filtering techniques to predict missing readings, by making use of the whole set of collected data from the IoT network. State of the art recommender systems methods have been chosen to accomplish this task, and two real sensor array datasets and a synthetic dataset have been used to test this idea. Experiments have been carried out varying the percentage of failed sensors. Results show a good level of prediction accuracy which, as expected, decreases as the failure rate increases. Results also point out a failure rate threshold below which is better to make use of memory-based approaches, and above which is better to choose model-based methods. Full article
(This article belongs to the Special Issue Information Fusion and Machine Learning for Sensors)
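
A minimal memory-based collaborative-filtering sketch for the missing-reading problem, treating sensors as "users" and time steps as "items": failed readings (NaN) are estimated from correlated sensors. It is a generic illustration, not one of the recommender-system methods benchmarked in the paper.

```python
import numpy as np

def predict_missing(R):
    """Memory-based collaborative filtering on a sensors x timesteps matrix R,
    where NaN marks failed readings; similarity is correlation between sensors."""
    filled = R.copy()
    means = np.nanmean(R, axis=1, keepdims=True)
    centered = np.where(np.isnan(R), 0.0, R - means)
    sim = np.corrcoef(centered)                    # sensor-to-sensor similarity
    for i, t in zip(*np.where(np.isnan(R))):
        others = ~np.isnan(R[:, t])
        others[i] = False
        if others.any():
            w = sim[i, others]
            filled[i, t] = means[i, 0] + w @ (R[others, t] - means[others, 0]) / (np.abs(w).sum() + 1e-9)
    return filled

R = np.array([[1.0, 2.0, 3.0, 4.0],
              [1.1, 2.1, np.nan, 4.2],
              [0.9, 1.9, 2.9, np.nan]])
print(predict_missing(R))
```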

21 pages, 3907 KiB  
Article
Architecture for Trajectory-Based Fishing Ship Classification with AIS Data
by David Sánchez Pedroche, Daniel Amigo, Jesús García and José Manuel Molina
Sensors 2020, 20(13), 3782; https://doi.org/10.3390/s20133782 - 6 Jul 2020
Cited by 37 | Viewed by 5453
Abstract
This paper proposes a data preparation process for managing real-world kinematic data and detecting fishing vessels. The solution is a binary classification that classifies ship trajectories into either fishing or non-fishing ships. The data used are characterized by the typical problems found in classic data mining applications using real-world data, such as noise and inconsistencies. The two classes are also clearly unbalanced in the data, a problem which is addressed using algorithms that resample the instances. For classification, a series of features are extracted from spatiotemporal data that represent the trajectories of the ships, available from sequences of Automatic Identification System (AIS) reports. These features are proposed for the modelling of ship behavior but, because they do not contain context-related information, the classification can be applied in other scenarios. Experimentation shows that the proposed data preparation process is useful for the presented classification problem. In addition, positive results are obtained using minimal information. Full article
(This article belongs to the Special Issue Information Fusion and Machine Learning for Sensors)
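
As an illustration of context-free kinematic features for trajectory classification, the sketch below computes speed and turn-rate statistics from synthetic AIS-like reports and trains a random forest to separate slow, manoeuvring tracks from fast, straight ones; the feature set and data are assumptions, not the paper's.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def trajectory_features(track):
    """track: (n, 3) array of [timestamp_s, speed_knots, course_deg] reports.
    Returns simple kinematic statistics; context-free by design."""
    speed = track[:, 1]
    turn = np.abs(np.diff(track[:, 2]))
    turn = np.minimum(turn, 360 - turn)            # wrap heading differences
    return [speed.mean(), speed.std(), speed.max(),
            turn.mean() if len(turn) else 0.0, turn.std() if len(turn) else 0.0]

rng = np.random.default_rng(2)
tracks, labels = [], []
for fishing in (0, 1) * 50:                        # synthetic stand-in for AIS data
    n = 60
    speed = rng.normal(4 if fishing else 12, 1.5, n).clip(0)
    course = np.cumsum(rng.normal(0, 25 if fishing else 3, n)) % 360
    t = np.arange(n) * 60.0
    tracks.append(trajectory_features(np.c_[t, speed, course]))
    labels.append(fishing)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(tracks, labels)
print("training accuracy:", clf.score(tracks, labels))
```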

20 pages, 5551 KiB  
Article
A Novel Fault Diagnosis Approach for Chillers Based on 1-D Convolutional Neural Network and Gated Recurrent Unit
by Zhuozheng Wang, Yingjie Dong, Wei Liu and Zhuo Ma
Sensors 2020, 20(9), 2458; https://doi.org/10.3390/s20092458 - 26 Apr 2020
Cited by 48 | Viewed by 5098
Abstract
The safety of an Internet Data Center (IDC) is directly determined by the reliability and stability of its chiller system. Thus, combined with deep learning technology, an innovative hybrid fault diagnosis approach (1D-CNN_GRU) based on the time-series sequences is proposed in this study for the chiller system using 1-Dimensional Convolutional Neural Network (1D-CNN) and Gated Recurrent Unit (GRU). Firstly, 1D-CNN is applied to automatically extract the local abstract features of the sensor sequence data. Secondly, GRU with long and short term memory characteristics is applied to capture the global features, as well as the dynamic information of the sequence. Moreover, batch normalization and dropout are introduced to accelerate network training and address the overfitting issue. The effectiveness and reliability of the proposed hybrid algorithm are assessed on the RP-1043 dataset; based on the experimental results, 1D-CNN_GRU displays the best performance compared with the other state-of-the-art algorithms. Further, the experimental results reveal that 1D-CNN_GRU has a superior identification rate for minor faults. Full article
(This article belongs to the Special Issue Information Fusion and Machine Learning for Sensors)
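
A minimal PyTorch sketch of the 1D-CNN plus GRU pattern described in the abstract (local features from Conv1d, sequence dynamics from a GRU, with batch normalization and dropout); the channel count, sequence length, and number of fault classes are placeholders rather than the RP-1043 configuration.

```python
import torch
import torch.nn as nn

class CNNGRU(nn.Module):
    """Hybrid 1D-CNN + GRU classifier for multichannel sensor sequences;
    all sizes below are assumed for illustration."""
    def __init__(self, n_channels=8, n_classes=5):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.BatchNorm1d(32), nn.ReLU(), nn.Dropout(0.2),
        )
        self.gru = nn.GRU(input_size=32, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                      # x: (batch, channels, time)
        local = self.cnn(x).transpose(1, 2)    # -> (batch, time, features) for the GRU
        _, h = self.gru(local)
        return self.head(h[-1])

batch = torch.randn(4, 8, 120)                 # 4 dummy sequences of 120 time steps
print(CNNGRU()(batch).shape)                   # torch.Size([4, 5])
```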

15 pages, 5046 KiB  
Article
Prediction of I–V Characteristic Curve for Photovoltaic Modules Based on Convolutional Neural Network
by Jie Li, Runran Li, Yuanjie Jia and Zhixin Zhang
Sensors 2020, 20(7), 2119; https://doi.org/10.3390/s20072119 - 9 Apr 2020
Cited by 11 | Viewed by 3642
Abstract
Photovoltaic (PV) modules are exposed to the outside, which is affected by radiation, the temperature of the PV module back-surface, relative humidity, atmospheric pressure and other factors, which makes it difficult to test and analyze the performance of photovoltaic modules. Traditionally, the equivalent circuit method is used to analyze the performance of PV modules, but there are large errors. In this paper—based on machine learning methods and large amounts of photovoltaic test data—convolutional neural network (CNN) and multilayer perceptron (MLP) neural network models are established to predict the I–V curve of photovoltaic modules. Furthermore, the accuracy and the fitting degree of these methods for current–voltage (I–V) curve prediction are compared in detail. The results show that the prediction accuracy of the CNN and MLP neural network model is significantly better than that of the traditional equivalent circuit models. Compared with MLP models, the CNN model has better accuracy and fitting degree. In addition, the error distribution concentration of CNN has better robustness and the pre-test curve is smoother and has better nonlinear segment fitting effects. Thus, the CNN is superior to MLP model and the traditional equivalent circuit model in complex climate conditions. CNN is a high-confidence method to predict the performance of PV modules. Full article
(This article belongs to the Special Issue Information Fusion and Machine Learning for Sensors)
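
A hedged baseline sketch of the regression setup: a small MLP maps the environmental factors listed in the abstract to the current at a fixed grid of voltage points. The number of voltage points, layer sizes, and input values are assumptions, and the paper's CNN model is not reproduced.

```python
import torch
import torch.nn as nn

N_POINTS = 50   # number of voltage samples at which the I-V curve is predicted (assumed)

# Minimal MLP baseline: environmental inputs -> current at N fixed voltage points.
# Input features (irradiance, module back-surface temperature, relative humidity,
# atmospheric pressure) follow the factors listed in the abstract; sizes are arbitrary.
mlp = nn.Sequential(
    nn.Linear(4, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, N_POINTS),
)

conditions = torch.tensor([[800.0, 45.0, 0.55, 1013.0]])   # one dummy operating condition
print(mlp(conditions).shape)                                # torch.Size([1, 50]) predicted currents
```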

Other


18 pages, 3761 KiB  
Letter
Application-Oriented Retinal Image Models for Computer Vision
by Ewerton Silva, Ricardo da S. Torres, Allan Pinto, Lin Tzy Li, José Eduardo S. Vianna, Rodolfo Azevedo and Siome Goldenstein
Sensors 2020, 20(13), 3746; https://doi.org/10.3390/s20133746 - 4 Jul 2020
Cited by 1 | Viewed by 2884
Abstract
Energy and storage restrictions are relevant variables that software applications should be concerned about when running in low-power environments. In particular, computer vision (CV) applications exemplify well that concern, since conventional uniform image sensors typically capture large amounts of data to be further handled by the appropriate CV algorithms. Moreover, much of the acquired data are often redundant and outside of the application’s interest, which leads to unnecessary processing and energy spending. In the literature, techniques for sensing and re-sampling images in non-uniform fashions have emerged to cope with these problems. In this study, we propose Application-Oriented Retinal Image Models that define a space-variant configuration of uniform images and contemplate requirements of energy consumption and storage footprints for CV applications. We hypothesize that our models might decrease energy consumption in CV tasks. Moreover, we show how to create the models and validate their use in a face detection/recognition application, evidencing the compromise between storage, energy, and accuracy. Full article
(This article belongs to the Special Issue Information Fusion and Machine Learning for Sensors)
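
A generic foveation sketch in the spirit of space-variant resampling: full resolution is kept in a central region and the periphery is subsampled to reduce storage; the radius and step are arbitrary, and this is not the Application-Oriented Retinal Image Models construction.

```python
import numpy as np

def foveated_sample(img, fovea_radius=32, peripheral_step=4):
    """Space-variant sampling: keep full resolution inside a central 'fovea',
    subsample the periphery; a generic foveation sketch."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.hypot(yy - h / 2, xx - w / 2)
    keep = (dist <= fovea_radius) | ((yy % peripheral_step == 0) & (xx % peripheral_step == 0))
    ys, xs = np.where(keep)
    return ys, xs, img[ys, xs]                 # sparse representation: coordinates + values

img = np.random.rand(128, 128)
ys, xs, vals = foveated_sample(img)
print(f"kept {len(vals)} of {img.size} pixels ({100 * len(vals) / img.size:.1f}%)")
```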
