Recent Advances in Machine Learning-Based Vision and Sensing Integrated into Cloud, and IoT Edge Computing Environments

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (31 May 2023) | Viewed by 37769

Special Issue Editors


Guest Editor
1. CITAB, University of Trás-os-Montes and Alto Douro, Vila Real, Portugal
2. Algoritmi Center, University of Minho, 4800-058 Guimarães, Portugal
Interests: computer vision; machine learning; hyperspectral imaging; image classification; object detection

Guest Editor
Algoritmi Center, University of Minho, 4800-058 Guimarães, Portugal
Interests: deep learning; computer vision; energy-efficient IoT communication mechanisms; field programmable gate arrays; IoT sensor devices; object detection; wireless sensor networks; wireless body area networks

Guest Editor
Algoritmi Center, University of Minho, 4710-057 Braga, Portugal
Interests: artificial intelligence; computer vision; deep learning; object detection; data science

Guest Editor
Algoritmi Center, University of Minho, 4800-058 Guimarães, Portugal
Interests: signal processing; communications; indoor positioning; embedded systems; hardware/software codesign; deep learning; object detection

Special Issue Information

Dear Colleagues,

The purpose of this Special Issue is to showcase cutting-edge developments in solutions that exploit artificial intelligence (AI) for applications with a meaningful impact on our daily lives, on technological development and, ultimately, on the roadmap of industry and the scientific community.

The rapid and successful expansion of machine learning and deep learning across several fields has produced technical advances and data-driven solutions, such as complex neural network architectures, that discover knowledge in vast amounts of structured or unstructured data. These advances are especially noticeable in applications such as audio-visual signal processing, object detection and tracking, pattern recognition, and data science. Machine learning and deep learning techniques are also increasingly expected to run on IoT devices, such as local and remote sensors and imaging systems.

The rapid growth of the IoT makes this technology omnipresent, allowing it to be applied in almost any imaginable application. Empowering this technology with intelligence is a very challenging task, but it is also an interesting and promising interdisciplinary area of research. Tiny machine learning (tinyML) is an emerging field that aims to provide algorithms, hardware, and software capable of performing inference on resource-constrained devices at extremely low power. Combining these topics opens opportunities in multiple application fields, where sensor fusion in IoT devices (IMU, biomedical, audio, etc.) and the growing availability of commercial cameras and scanners targeting battery-operated devices can be explored.

The interdisciplinary scope of this call seeks contributions from the scientific community on a wide range of topics in computer vision and machine/deep learning applied to IoT sensing, including but not limited to the following:

  • Application of machine/deep learning techniques for industrial, medical, and biomedical fields;
  • Machine/Deep learning for active and passive sensors;
  • Real-time signal/image processing algorithms and architectures (e.g., FPGA, DSP, GPU);
  • Machine learning models for sensor networks (SNs);
  • Deep and reinforcement learning for SNs;
  • Intelligent image processing algorithms for SNs;
  • Big data analytics for data processing from SNs;
  • Applications of AI in SN domains: energy, IoT, Industry 4.0, etc.;
  • Interpreters and code generator frameworks for tiny systems;
  • Optimizations for efficient execution using tiny machine learning;
  • Intelligent vehicles;
  • Advanced driver assistant systems;
  • Remote sensing image processing;
  • Biomedical signal/image analysis;
  • Wearable sensor signal processing and its applications;
  • Sensor data fusion and integration;
  • Visual pattern recognition;
  • Image and video processing (e.g., denoising, deblurring, super-resolution, etc.);
  • Image and video understanding (e.g., novel feature extraction, classification, semantic segmentation, object detection and recognition, action recognition, tracking, etc.);
  • Novel tinyML applications across all fields and emerging use cases;
  • In-sensor processing, design, and implementation.

Prof. Dr. Pedro Melo-Pinto
Dr. Duarte Fernandes
Dr. Antonio Silva
Prof. Dr. João L. Monteiro
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Computer vision
  • Data representation, summarization, and visualization
  • Decision algorithms
  • Deep learning
  • Edge AI
  • Internet of Things (IoT)
  • Machine learning
  • Object detection
  • On-server inference
  • Resource-constrained edge devices
  • Sensor fusion
  • Supervised, semi-supervised, and unsupervised learning
  • TinyML

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (10 papers)


Research

25 pages, 2031 KiB  
Article
Towards an Effective Service Allocation in Fog Computing
by Rayan A. Alsemmeari, Mohamed Yehia Dahab, Badraddin Alturki, Abdulaziz A. Alsulami and Raed Alsini
Sensors 2023, 23(17), 7327; https://doi.org/10.3390/s23177327 - 22 Aug 2023
Cited by 1 | Viewed by 1269
Abstract
The Internet of Things (IoT) generates a large volume of data whenever devices are interconnected and exchange data across a network. Consequently, a variety of services with diverse needs arises, including capacity requirements, data quality, and latency demands. These services operate on fog computing devices, which are limited in power and bandwidth compared to the cloud. The primary challenge lies in determining the optimal location for service implementation: in the fog, in the cloud, or in a hybrid setup. This paper introduces an efficient allocation technique that moves processing closer to the network’s fog side. It explores the optimal allocation of devices and services while maintaining resource utilization within an IoT architecture. The paper also examines the significance of allocating services to devices and optimizing resource utilization in fog computing. In IoT scenarios, where a wide range of services and devices coexist, it becomes crucial to assign services to devices effectively. We propose priority-based service allocation (PSA) and sort-based service allocation (SSA) techniques, which determine the optimal order in which devices perform the different services. Experimental results demonstrate that our proposed technique reduces data communication over the network by 88%, which is achieved by allocating most services locally in the fog. We increased the distribution of services to fog devices by 96%, while simultaneously minimizing the wastage of fog resources. Full article
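
For readers outside the fog computing area, the core of a priority-based allocation can be illustrated with a short, hedged sketch (this is not the authors' PSA/SSA implementation; the service and device attributes are hypothetical): higher-priority services are placed on fog devices with spare capacity first, and anything that does not fit falls back to the cloud.

    # Hedged sketch of a priority-based, greedy service allocation pass.
    # Service/device fields are illustrative assumptions, not the paper's data model.
    from dataclasses import dataclass

    @dataclass
    class Service:
        name: str
        priority: int      # higher value = allocate first
        demand: float      # required capacity units

    @dataclass
    class Device:
        name: str
        capacity: float    # remaining capacity units

    def allocate(services, fog_devices):
        """Place high-priority services on fog devices first; overflow goes to the cloud."""
        placement = {}
        for svc in sorted(services, key=lambda s: s.priority, reverse=True):
            target = next((d for d in fog_devices if d.capacity >= svc.demand), None)
            if target is not None:
                target.capacity -= svc.demand
                placement[svc.name] = target.name
            else:
                placement[svc.name] = "cloud"   # fall back when no fog device fits
        return placement

    print(allocate([Service("cam-analytics", 3, 2.0), Service("logging", 1, 1.0)],
                   [Device("fog-node-1", 2.5)]))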

25 pages, 11687 KiB  
Article
A Framework for Representing, Building and Reusing Novel State-of-the-Art Three-Dimensional Object Detection Models in Point Clouds Targeting Self-Driving Applications
by António Linhares Silva, Pedro Oliveira, Dalila Durães, Duarte Fernandes, Rafael Névoa, João Monteiro, Pedro Melo-Pinto, José Machado and Paulo Novais
Sensors 2023, 23(14), 6427; https://doi.org/10.3390/s23146427 - 15 Jul 2023
Cited by 1 | Viewed by 1441
Abstract
The rapid development of deep learning has brought novel methodologies for 3D object detection using LiDAR sensing technology. These improvements in precision and inference speed lead to notably high performance and real-time inference, which is especially important for self-driving purposes. However, the pace of development of these approaches overwhelms the research process in this area, since new methods, technologies and software versions lead to different project necessities, specifications and requirements. Moreover, the improvements brought by new methods may be due to improvements in newer versions of deep learning frameworks and not just the novelty and innovation of the model architecture. Thus, it has become crucial to create a framework with the same software versions, specifications and requirements that accommodates all these methodologies and allows for the easy introduction of new methods and models. A framework is proposed that abstracts the implementation, reuse and building of novel methods and models. The main idea is to facilitate the representation of state-of-the-art (SoA) approaches and simultaneously encourage the implementation of new approaches by reusing, improving and innovating modules in the proposed framework, which has the same software specifications to allow for a fair comparison. This makes it possible to determine whether a key innovation outperforms the current SoA by comparing models in a framework with the same software specifications and requirements. Full article
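
As a purely illustrative sketch of the reuse idea (not the authors' framework or its API), a component registry lets backbones and detection heads be registered once and composed from a configuration, so competing models share identical plumbing and software versions:

    # Hedged sketch of a component registry, a common way to make detector modules
    # (backbones, heads, etc.) reusable and swappable; all names here are illustrative.
    REGISTRY = {}

    def register(name):
        def decorator(cls):
            REGISTRY[name] = cls
            return cls
        return decorator

    @register("pillar_backbone")
    class PillarBackbone:
        def __call__(self, points):
            return f"features({points})"

    @register("anchor_head")
    class AnchorHead:
        def __call__(self, features):
            return f"boxes({features})"

    def build_detector(config):
        """Compose a detector from a config dict so models share identical plumbing."""
        backbone = REGISTRY[config["backbone"]]()
        head = REGISTRY[config["head"]]()
        return lambda points: head(backbone(points))

    detector = build_detector({"backbone": "pillar_backbone", "head": "anchor_head"})
    print(detector("lidar_scan"))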

15 pages, 1669 KiB  
Article
Rain Discrimination with Machine Learning Classifiers for Opportunistic Rain Detection System Using Satellite Micro-Wave Links
by Christian Gianoglio, Ayham Alyosef, Matteo Colli, Sara Zani and Daniele D. Caviglia
Sensors 2023, 23(3), 1202; https://doi.org/10.3390/s23031202 - 20 Jan 2023
Cited by 8 | Viewed by 2331
Abstract
In the climate change scenario the world is facing, extreme weather events can lead to increasingly serious disasters. To improve the management of the consequent risks, there is a pressing need for real-time systems that provide accurate monitoring and possibly forecasting, which could help to warn people in the affected areas ahead of time and save them from hazards. Oblique earth-space links (OELs) have recently been used as a method for real-time rainfall detection. This technique poses two main issues related to its indirect nature. The first is the classification of rainy and non-rainy periods. The second is the determination of the attenuation baseline, which is an essential reference for estimating rainfall intensity along the link. This work focuses mainly on the first issue. Data referring to eighteen rain events were collected by analyzing satellite-to-earth link quality and employing a properly positioned tipping bucket rain gauge (TBRG), used as a reference. It reports a comparison among the results obtained by applying four different machine learning (ML) classifiers, namely the support vector machine (SVM), neural network (NN), random forest (RF), and decision tree (DT). Various data arrangements were explored, using a preprocessed version of the TBRG data and extracting two different sets of characteristics from the microwave link data, containing 6 or 12 different features, respectively. The achieved results demonstrate that the NN classifier outperformed the other classifiers. Full article
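
The comparison itself follows a standard scikit-learn recipe; a minimal sketch is given below, with random features standing in for the real microwave-link attributes (6 features per sample, as in the smaller feature set):

    # Hedged sketch of a four-classifier comparison with scikit-learn;
    # the data are synthetic stand-ins for the microwave-link features.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC
    from sklearn.neural_network import MLPClassifier
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 6))                  # 6 link-derived features per sample
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # 1 = rainy, 0 = dry (synthetic labels)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    models = {
        "SVM": SVC(),
        "NN": MLPClassifier(max_iter=1000),
        "RF": RandomForestClassifier(),
        "DT": DecisionTreeClassifier(),
    }
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        print(name, round(model.score(X_te, y_te), 3))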

20 pages, 5408 KiB  
Article
Lightweight and Energy-Efficient Deep Learning Accelerator for Real-Time Object Detection on Edge Devices
by Kyungho Kim, Sung-Joon Jang, Jonghee Park, Eunchong Lee and Sang-Seol Lee
Sensors 2023, 23(3), 1185; https://doi.org/10.3390/s23031185 - 20 Jan 2023
Cited by 10 | Viewed by 3569
Abstract
Tiny machine learning (TinyML) has become an emerging field owing to the rapid growth in the area of the internet of things (IoT). However, most deep learning algorithms are too complex, require a lot of memory to store data, and consume an enormous amount of energy for calculation/data movement; therefore, they are not suitable for IoT devices such as various sensors and imaging systems. Furthermore, typical hardware accelerators cannot be embedded in these resource-constrained edge devices, and it is difficult for them to drive real-time inference processing. To perform real-time processing on these battery-operated devices, deep learning models should be compact and hardware-optimized, and hardware accelerator designs also have to be lightweight and consume extremely little energy. Therefore, we present an optimized network model, obtained through model simplification and compression for hardware implementation, and propose a hardware architecture for a lightweight and energy-efficient deep learning accelerator. The experimental results demonstrate that our optimized model successfully performs object detection, and the proposed hardware design achieves 1.25× and 4.27× smaller logic and BRAM size, respectively, while its energy consumption is approximately 10.37× lower than that of similar previous works, at 43.95 fps of real-time processing under an operating frequency of 100 MHz on a Xilinx ZC702 FPGA. Full article
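
One common model compression step of the kind mentioned here is post-training quantization; the sketch below uses PyTorch dynamic quantization on a toy model and is only an illustration, not the paper's detector or accelerator flow:

    # Hedged sketch of post-training dynamic quantization in PyTorch;
    # the toy model below is illustrative, not the paper's network.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
    quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

    x = torch.randn(1, 64)
    print(quantized(x))   # int8 weights, float activations
    # Weight storage drops roughly 4x (fp32 -> int8), a typical first step before
    # mapping a compact model onto a lightweight hardware accelerator.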

13 pages, 3182 KiB  
Article
Comparison of Pre-Trained YOLO Models on Steel Surface Defects Detector Based on Transfer Learning with GPU-Based Embedded Devices
by Hoan-Viet Nguyen, Jun-Hee Bae, Yong-Eun Lee, Han-Sung Lee and Ki-Ryong Kwon
Sensors 2022, 22(24), 9926; https://doi.org/10.3390/s22249926 - 16 Dec 2022
Cited by 15 | Viewed by 5384
Abstract
Steel is one of the most basic materials and plays an important role in the machinery industry. However, steel surface defects heavily affect its quality. The demand for surface defect detectors draws much attention from researchers all over the world. However, there are still some drawbacks: datasets are of limited accessibility or only small-scale public ones exist, and related works focus on developing models without deeply considering real-time applications. In this paper, we investigate the feasibility of applying state-of-the-art deep learning methods based on YOLO models as real-time steel surface defect detectors. In particular, we compare the performance of YOLOv5, YOLOX, and YOLOv7 while training them on the small-scale open-source NEU-DET dataset on a GPU RTX 2080. From the experimental results, YOLOX-s achieves the best accuracy of 89.6% mAP on the NEU-DET dataset. Then, we deploy the weights of the trained YOLO models on Nvidia devices to evaluate their real-time performance. Our experimental devices consist of the Nvidia Jetson Nano and Jetson Xavier AGX. We also apply some real-time optimization techniques (i.e., exporting to TensorRT, lowering the precision to FP16 or INT8, and reducing the input image size to 320 × 320) to increase detection speed (fps), at the cost of a slight reduction in mAP accuracy. Full article
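
The two generic optimizations (lower precision and smaller inputs) can be timed with a simple benchmark loop; the hedged sketch below uses a torchvision model as a stand-in for YOLO and assumes a CUDA-capable device is available:

    # Hedged latency check for two generic optimizations: half precision and a
    # smaller input size. A torchvision model stands in for a YOLO detector.
    import time
    import torch
    import torchvision

    def benchmark(model, size, dtype, runs=20):
        x = torch.randn(1, 3, size, size, device="cuda", dtype=dtype)
        model = model.to("cuda", dtype).eval()
        with torch.no_grad():
            for _ in range(runs):        # warm-up passes
                model(x)
            torch.cuda.synchronize()
            start = time.time()
            for _ in range(runs):        # timed passes
                model(x)
            torch.cuda.synchronize()
        return (time.time() - start) / runs

    net = torchvision.models.mobilenet_v3_small(weights=None)
    for size, dtype in [(640, torch.float32), (320, torch.float16)]:
        print(size, dtype, f"{benchmark(net, size, dtype) * 1000:.1f} ms")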

14 pages, 3180 KiB  
Article
Deep Learning-Based Feature Extraction of Acoustic Emission Signals for Monitoring Wear of Grinding Wheels
by D. González, J. Alvarez, J. A. Sánchez, L. Godino and I. Pombo
Sensors 2022, 22(18), 6911; https://doi.org/10.3390/s22186911 - 13 Sep 2022
Cited by 12 | Viewed by 2530
Abstract
Tool wear monitoring is a critical issue in advanced manufacturing systems. In the search for sensing devices that can provide information about the grinding process, Acoustic Emission (AE) appears to be a promising technology. The present paper presents a novel deep learning-based proposal for grinding wheel wear status monitoring using an AE sensor. The most relevant finding is the possibility of efficient feature extraction from frequency plots using CNNs. Feature extraction from FFT plots requires sound domain-expert knowledge, and thus we present a new approach to automated feature extraction using a pre-trained CNN. Using the features extracted for different industrial grinding conditions, t-SNE and PCA clustering algorithms were tested for wheel wear state identification. Results are compared for different industrial grinding conditions. The initial state of the wheel, resulting from the dressing operation, is clearly identified in all the experiments carried out. This is a very important finding, since dressing strongly affects operation performance. When grinding parameters produce acute wear of the wheel, the algorithms show very good clustering performance using the features extracted by the CNN. The performance of t-SNE and PCA was very much the same, thus confirming the excellent efficiency of the pre-trained CNN for automated feature extraction from FFT plots. Full article
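
The general recipe (features from a pre-trained CNN, then PCA/t-SNE) can be sketched as follows; random tensors stand in for the FFT-plot images, and the backbone choice is an assumption rather than the authors' network:

    # Hedged sketch: pre-trained CNN as a feature extractor, then PCA / t-SNE.
    # Random images stand in for the FFT plots; ResNet-18 is an assumed backbone.
    import torch
    import torchvision
    from sklearn.decomposition import PCA
    from sklearn.manifold import TSNE

    backbone = torchvision.models.resnet18(weights="IMAGENET1K_V1")
    backbone.fc = torch.nn.Identity()      # drop the classifier, keep 512-d features
    backbone.eval()

    images = torch.rand(32, 3, 224, 224)   # placeholder batch of FFT-plot images
    with torch.no_grad():
        feats = backbone(images).numpy()

    coords_pca = PCA(n_components=2).fit_transform(feats)
    coords_tsne = TSNE(n_components=2, perplexity=10).fit_transform(feats)
    print(coords_pca.shape, coords_tsne.shape)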

13 pages, 874 KiB  
Article
0-Dimensional Persistent Homology Analysis Implementation in Resource-Scarce Embedded Systems
by Sérgio Branco, João G. Carvalho, Marco S. Reis, Nuno V. Lopes and Jorge Cabral
Sensors 2022, 22(10), 3657; https://doi.org/10.3390/s22103657 - 11 May 2022
Cited by 1 | Viewed by 2681
Abstract
Persistent Homology (PH) analysis is a powerful tool for understanding many relevant topological features of a given dataset. PH allows finding clusters, noise, and relevant connections in the dataset. Therefore, it can provide a better view of the problem and a way of perceiving whether a given dataset is equal to another, whether a given sample is relevant, and how the samples occupy the feature space. However, PH involves reducing the problem to its simplicial complex space, which is computationally expensive, so implementing it in Resource-Scarce Embedded Systems (RSES) is considerably complicated by the lack of memory and processing power, even though such an analysis would be an essential add-on for these devices. The following paper shows the implementation of 0-Dimensional Persistent Homology Analysis in a set of well-known RSES, using a technique that reduces the memory footprint and processing power needs of the 0-Dimensional PH algorithm. The results are positive and show that RSES can be equipped with this real-time data analysis tool. Full article
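
0-dimensional persistent homology can be computed with a union-find over edges sorted by length: every point is born as its own component at filtration value 0 and dies when an edge merges it into another component. A minimal sketch (not the authors' memory-optimized RSES implementation) is given below.

    # Hedged sketch of 0-dimensional persistent homology via union-find:
    # components are born at filtration value 0 and die when edges merge them.
    from itertools import combinations
    import math

    def zero_dim_ph(points):
        parent = list(range(len(points)))

        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]   # path compression
                i = parent[i]
            return i

        edges = sorted(
            (math.dist(points[i], points[j]), i, j)
            for i, j in combinations(range(len(points)), 2)
        )
        deaths = []
        for dist, i, j in edges:
            ri, rj = find(i), find(j)
            if ri != rj:
                parent[ri] = rj
                deaths.append(dist)        # one component dies at this scale
        return [(0.0, d) for d in deaths]  # (birth, death) pairs; one class never dies

    print(zero_dim_ph([(0, 0), (0, 1), (5, 5), (5, 6)]))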

18 pages, 1615 KiB  
Article
Efficient Hardware Design and Implementation of the Voting Scheme-Based Convolution
by Pedro Pereira, João Silva, António Silva, Duarte Fernandes and Rui Machado
Sensors 2022, 22(8), 2943; https://doi.org/10.3390/s22082943 - 12 Apr 2022
Cited by 3 | Viewed by 2262
Abstract
Due to a point cloud’s sparse nature, a sparse convolution block design is necessary to deal with its particularities. Mechanisms adopted in computer vision have recently explored the advantages of data processing in more energy-efficient hardware, such as FPGAs, as a response to the need to run these algorithms on resource-constrained edge devices. However, sparse convolution has not been properly explored in hardware, resulting in a small number of studies aimed at analyzing its potential and efficiency on resource-constrained hardware platforms. This article presents the design of a customizable hardware block for the voting convolution. We carried out an in-depth analysis to determine under which conditions the use of the voting scheme is justified instead of dense convolutions. The proposed hardware design achieves an energy consumption about 8.7 times lower than similar works in the literature by ignoring unnecessary arithmetic operations with null weights and leveraging data dependency. Access to data memory was also reduced to the minimum necessary, leading to improvements of around 55% in processing time. To evaluate both the performance and applicability of the proposed solution, the voting convolution was integrated into the well-known PointPillars model, where it achieves improvements between 23.05% and 80.44% without a significant effect on detection performance. Full article
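
The voting idea can be illustrated in a few lines: instead of sliding a kernel over every (mostly empty) cell, each non-zero input cell scatters, or "votes", its weighted contribution into the output, so null operations are skipped. The NumPy sketch below illustrates the scheme only; it is not the proposed hardware block.

    # Hedged sketch of a voting-style sparse convolution: each non-zero cell scatters
    # its weighted contribution to the output, so empty cells cost nothing.
    import numpy as np

    def voting_conv2d(sparse_input, kernel):
        """2D correlation computed only from non-zero input cells ('votes')."""
        H, W = sparse_input.shape
        kh, kw = kernel.shape
        out = np.zeros((H - kh + 1, W - kw + 1))
        for y, x in zip(*np.nonzero(sparse_input)):
            val = sparse_input[y, x]
            for dy in range(kh):
                for dx in range(kw):
                    oy, ox = y - dy, x - dx    # output cell this vote lands in
                    if 0 <= oy < out.shape[0] and 0 <= ox < out.shape[1]:
                        out[oy, ox] += val * kernel[dy, dx]
        return out

    grid = np.zeros((6, 6))
    grid[2, 3] = 1.0                           # a single occupied cell
    print(voting_conv2d(grid, np.ones((3, 3))))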

24 pages, 7295 KiB  
Article
Resource-Constrained Onboard Inference of 3D Object Detection and Localisation in Point Clouds Targeting Self-Driving Applications
by António Silva, Duarte Fernandes, Rafael Névoa, João Monteiro, Paulo Novais, Pedro Girão, Tiago Afonso and Pedro Melo-Pinto
Sensors 2021, 21(23), 7933; https://doi.org/10.3390/s21237933 - 28 Nov 2021
Cited by 11 | Viewed by 2822
Abstract
Research on deep learning applied to object detection in LiDAR data has spread massively in recent years, achieving notable developments, namely in improving precision and inference speed. These improvements have been facilitated by powerful GPU servers, taking advantage of their capacity to train the networks in reasonable periods and of their parallel architecture, which allows for high performance and real-time inference. However, these features are limited in autonomous driving due to space, power capacity, and inference time constraints, and onboard devices are not as powerful as their counterparts used for training. This paper investigates the use of a deep learning-based method in edge devices for onboard real-time inference that is power-effective and has a low space footprint. A methodology is proposed for deploying high-end GPU-specific models in edge devices for onboard inference, consisting of a two-fold flow: studying the implications of model hyperparameters for meeting application requirements, and compressing the network to meet the board's resource limitations. A hybrid FPGA-CPU board is proposed as an effective onboard inference solution by comparing its performance on the KITTI dataset with that of a desktop computer. The achieved accuracy is comparable to that of the PC-based deep learning method, with the advantage of being more effective for real-time inference and for power-limited and space-constrained purposes. Full article
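
The compression half of such a flow typically combines pruning and quantization; the hedged sketch below shows global magnitude pruning with PyTorch on a toy network and a rough estimate of the remaining weight footprint (illustrative only, not the paper's model or board flow):

    # Hedged sketch of a compression pass: global magnitude pruning plus a rough
    # estimate of the remaining weight footprint. The toy network is illustrative.
    import torch
    import torch.nn as nn
    import torch.nn.utils.prune as prune

    net = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Conv2d(16, 32, 3))
    to_prune = [(m, "weight") for m in net.modules() if isinstance(m, nn.Conv2d)]
    prune.global_unstructured(to_prune, pruning_method=prune.L1Unstructured, amount=0.5)

    nonzero = sum(int(m.weight.ne(0).sum()) for m, _ in to_prune)
    total = sum(m.weight.numel() for m, _ in to_prune)
    print(f"kept {nonzero}/{total} conv weights "
          f"(~{nonzero * 4 / 1024:.1f} KiB at fp32, before any further quantization)")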

23 pages, 1983 KiB  
Article
A Smart Mirror for Emotion Monitoring in Home Environments
by Simone Bianco, Luigi Celona, Gianluigi Ciocca, Davide Marelli, Paolo Napoletano, Stefano Yu and Raimondo Schettini
Sensors 2021, 21(22), 7453; https://doi.org/10.3390/s21227453 - 9 Nov 2021
Cited by 10 | Viewed by 11050
Abstract
Smart mirrors are devices that can display any kind of information and can interact with the user using touch and voice commands. Different kinds of smart mirrors exist: general purpose, medical, fashion, and other task-specific ones. General purpose smart mirrors are suitable for home environments, but the existing ones offer similar, limited functionalities. In this paper, we present a general-purpose smart mirror that integrates several functionalities, standard and advanced, to support users in their everyday life. Among the advanced functionalities are the capability of detecting a person’s emotions, the short- and long-term monitoring and analysis of those emotions, a double authentication protocol to preserve privacy, and the integration of Alexa Skills to extend the applications of the smart mirror. We exploit deep learning techniques to develop most of the smart functionalities. The effectiveness of the device is demonstrated by the performance of the implemented functionalities and by an evaluation of its usability with real users. Full article
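
The emotion-monitoring functionality generally follows a detect-then-classify loop; the sketch below is a hypothetical illustration using an OpenCV Haar-cascade face detector and a placeholder classifier, not the authors' trained models:

    # Hedged sketch of a generic detect-then-classify loop for emotion monitoring;
    # the emotion classifier here is a placeholder, not a trained network.
    import cv2
    import numpy as np

    EMOTIONS = ["happy", "sad", "angry", "neutral"]
    face_detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def classify_emotion(face_img):
        # Stand-in for a trained CNN: deterministic but meaningless label choice.
        return EMOTIONS[int(face_img.mean()) % len(EMOTIONS)]

    def process_frame(frame_bgr):
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        return [classify_emotion(gray[y:y + h, x:x + w]) for (x, y, w, h) in faces]

    print(process_frame(np.zeros((480, 640, 3), dtype=np.uint8)))  # no faces -> []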