
Signal, Image Processing and Computer Vision in Smart Living Applications: Part II

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: closed (20 March 2023) | Viewed by 60235

Special Issue Editor


Dr. Alessandro Leone
Guest Editor
National Research Council of Italy, Institute for Microelectronics and Microsystems, 73100 Lecce, Italy
Interests: ambient assisted living; active & healthy ageing technologies; wearable sensors; signal processing; image processing; computer vision; artificial intelligence

Special Issue Information

Dear Colleagues,

Smart spaces and ubiquitous computing extend pervasive computing capabilities to everyday objects, providing context-aware services in smart living environments. A central challenge is building smart environments by integrating information from independent multisensor systems, including cameras and ranging devices. “Smart Living Technologies” aim to make all the environments in which people spend their time (at home, at work, on the move, etc.) better adapted to people's needs, regardless of their physical condition (frailty, disability, disease, or social exclusion) and regardless of age group (children, adults, older people, etc.).

This Special Issue concerns the use of key enabling technologies and smart system integration to develop advanced technological solutions, both products (sensors, devices, etc.) and services, that embrace ambient assisted living, ambient intelligence, and IoT paradigms, reframing the sense of “Smart Living” to ensure inclusion, safety, comfort, care, healthcare, and environmental sustainability. The creation of smart devices and services depends on innovation in signal processing, image processing, and computer vision techniques. The Special Issue therefore covers technological issues related to integrating these processing aspects into smart living environments. We invite papers on topics that include, but are not limited to, the following:

  • Artificial intelligence
  • Pattern recognition/analysis
  • Biometrics
  • Human analysis
  • Behavior understanding
  • Computer vision
  • Robotics and intelligent systems
  • Document and media analysis
  • Image processing
  • Signal processing
  • Soft computing techniques
  • Ambient intelligence
  • Context-aware computing
  • Machine learning
  • Deep learning
  • Embedded systems and devices
  • Human–computer interfaces
  • Innovative sensing devices and applications
  • Sensor networks and mobile ad hoc networks
  • Security and privacy techniques

Dr. Alessandro Leone
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • signal processing
  • image processing
  • computer vision
  • embedded systems
  • ubiquitous computing
  • multisensor systems
  • artificial intelligence
  • pattern recognition
  • deep learning
  • ambient assisted living
  • active and healthy ageing
  • healthcare applications
  • human–computer interaction

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies is available on the MDPI website.


Published Papers (20 papers)


Research


17 pages, 30331 KiB  
Article
Optimized Placement of Frost-Measuring Sensors in Heat Exchangers via Image Processing of Frost Formation Pattern
by Martim Aguiar, Pedro Dinis Gaspar and Pedro Dinho Silva
Sensors 2023, 23(11), 5253; https://doi.org/10.3390/s23115253 - 1 Jun 2023
Cited by 3 | Viewed by 1662
Abstract
Heat exchangers (HXs) play a critical role in maintaining human thermal comfort and ensuring product safety and quality in various industries. However, the formation of frost on HX surfaces during cooling operations can significantly impact their performance and energy efficiency. Traditional defrosting methods primarily rely on time-based control of heaters or HX operation, overlooking the actual frost formation pattern across the surface. This pattern is influenced by ambient air conditions (humidity and temperature) and surface temperature variations. To address this issue, frost formation sensors can be strategically placed within the HX. However, the non-uniform frost pattern poses challenges in sensor placement. This study proposes an optimized sensor placement approach using computer vision and image processing techniques to analyze the frost formation pattern. Through creating a frost formation map and evaluating various sensor locations, frost detection can be optimized to control defrosting operations with higher accuracy, thereby enhancing the thermal performance and energy efficiency of HXs. The results demonstrate the effectiveness of the proposed method in accurately detecting and monitoring frost formation, providing valuable insights for sensor placement optimization. This approach presents significant potential in enhancing the overall performance and sustainability of the operation of HXs. Full article
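As a rough illustration of the mapping step (not the authors' implementation: the brightness threshold, grid size, and synthetic frames below are all assumptions), a frost-frequency map and a best-cell search could be sketched in Python as follows:

```python
import numpy as np

def frost_formation_map(frames, threshold=0.6):
    """Average binary frost masks over time.

    frames: (T, H, W) grayscale images in [0, 1]; bright pixels are
    treated as frost (a simplifying assumption). Returns an (H, W) map
    with the fraction of frames in which each pixel was frosted.
    """
    masks = (frames > threshold).astype(float)
    return masks.mean(axis=0)

def best_sensor_cell(frost_map, grid=(4, 6)):
    """Split the map into grid cells and return the (row, col) cell with
    the highest mean frost frequency, i.e. where a point sensor would
    most reliably see frost."""
    H, W = frost_map.shape
    gh, gw = H // grid[0], W // grid[1]
    scores = np.array([
        [frost_map[r*gh:(r+1)*gh, c*gw:(c+1)*gw].mean() for c in range(grid[1])]
        for r in range(grid[0])
    ])
    r, c = np.unravel_index(scores.argmax(), scores.shape)
    return (r, c), scores

# Synthetic demo: frost grows from the top-left corner over 50 frames.
T, H, W = 50, 120, 180
yy, xx = np.mgrid[0:H, 0:W]
frames = np.stack([(yy + xx < 40 + 3*t).astype(float) for t in range(T)])
cell, scores = best_sensor_cell(frost_formation_map(frames))
print("suggested sensor cell:", cell)
```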

24 pages, 9607 KiB  
Article
A Serious Game for the Assessment of Visuomotor Adaptation Capabilities during Locomotion Tasks Employing an Embodied Avatar in Virtual Reality
by Vladimiro Suglia, Antonio Brunetti, Guido Pasquini, Mariapia Caputo, Tommaso Maria Marvulli, Elena Sibilano, Sara Della Bella, Paola Carrozza, Chiara Beni, David Naso, Vito Monaco, Giovanna Cristella, Vitoantonio Bevilacqua and Domenico Buongiorno
Sensors 2023, 23(11), 5017; https://doi.org/10.3390/s23115017 - 24 May 2023
Cited by 2 | Viewed by 2989
Abstract
The study of visuomotor adaptation (VMA) capabilities has been encompassed in various experimental protocols aimed at investigating human motor control strategies and/or cognitive functions. VMA-oriented frameworks can have clinical applications, primarily in the investigation and assessment of neuromotor impairments caused by conditions such as Parkinson’s disease or post-stroke, which affect the lives of tens of thousands of people worldwide. Therefore, they can enhance the understanding of the specific mechanisms of such neuromotor disorders, thus being a potential biomarker for recovery, with the aim of being integrated with conventional rehabilitative programs. Virtual Reality (VR) can be entailed in a framework targeting VMA since it allows the development of visual perturbations in a more customizable and realistic way. Moreover, as has been demonstrated in previous works, a serious game (SG) can further increase engagement thanks to the use of full-body embodied avatars. Most studies implementing VMA frameworks have focused on upper limb tasks and have utilized a cursor as visual feedback for the user. Hence, there is a paucity in the literature about VMA-oriented frameworks targeting locomotion tasks. In this article, the authors present the design, development, and testing of an SG-based framework that addresses VMA in a locomotion activity by controlling a full-body moving avatar in a custom VR environment. This workflow includes a set of metrics to quantitatively assess the participants’ performance. Thirteen healthy children were recruited to evaluate the framework. Several quantitative comparisons and analyses were run to validate the different types of introduced visuomotor perturbations and to evaluate the ability of the proposed metrics to describe the difficulty caused by such perturbations. During the experimental sessions, it emerged that the system is safe, easy to use, and practical in a clinical setting. Despite the limited sample size, which represents the main limitation of the study and can be compensated for with future recruitment, the authors claim the potential of this framework as a useful instrument for quantitatively assessing either motor or cognitive impairments. The proposed feature-based approach gives several objective parameters as additional biomarkers that can integrate the conventional clinical scores. Future studies might investigate the relation between the proposed biomarkers and the clinical scores for specific disorders such as Parkinson’s disease and cerebral palsy. Full article

13 pages, 2196 KiB  
Article
Assessment of a UWB Real Time Location System for Dairy Cows’ Monitoring
by Provvidenza Rita D’Urso, Claudia Arcidiacono, Matti Pastell and Giovanni Cascone
Sensors 2023, 23(10), 4873; https://doi.org/10.3390/s23104873 - 18 May 2023
Cited by 5 | Viewed by 1850
Abstract
In the field of precision livestock farming, many systems have been developed to identify the position of each cow of the herd individually in a specific environment. Challenges still exist in assessing the adequacy of the available systems to monitor individual animals in specific environments, and in the design of new systems. The main purpose of this research was to evaluate the performance of the SEWIO ultrawide-band (UWB) real time location system for the identification and localisation of cows during their activity in the barn through preliminary analyses in laboratory conditions. The objectives included the quantification of the errors performed by the system in laboratory conditions, and the assessment of the suitability of the system for real time monitoring of cows in dairy barns. The position of static and dynamic points was monitored in different experimental set-ups in the laboratory by the use of six anchors. Then, the errors related to a specific movement of the points were computed and statistical analyses were carried out. In detail, the one-way analysis of variance (ANOVA) was applied in order to assess the equality of the errors for each group of points in relation to their positions or typology, i.e., static or dynamic. In the post-hoc analysis, the errors were separated by Tukey’s honestly significant difference at p < 0.05. The results of the research quantify the errors related to a specific movement (i.e., static and dynamic points) and the position of the points (i.e., central area, perimeter of the investigated area). Based on the results, specific information is provided for the installation of the SEWIO in dairy barns as well as the monitoring of the animal behaviour in the resting area and the feeding area of the breeding environment. The SEWIO system could be a valuable support for farmers in herd management and for researchers in the analysis of animal behavioural activities. Full article
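The statistical step described here (one-way ANOVA followed by Tukey's HSD) is straightforward to reproduce. A minimal sketch with SciPy and statsmodels, where the group names and error magnitudes are invented for illustration:

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# Synthetic localization errors (cm) for three tag conditions.
errors = {
    "static_center":    rng.normal(12, 3, 60),
    "static_perimeter": rng.normal(20, 4, 60),
    "dynamic":          rng.normal(25, 5, 60),
}

# One-way ANOVA: are the mean errors equal across groups?
F, p = f_oneway(*errors.values())
print(f"ANOVA: F={F:.2f}, p={p:.2g}")

# Post-hoc Tukey HSD at the 0.05 significance level.
values = np.concatenate(list(errors.values()))
labels = np.repeat(list(errors.keys()), [len(v) for v in errors.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```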

19 pages, 3359 KiB  
Article
A Novel Unsupervised Video Anomaly Detection Framework Based on Optical Flow Reconstruction and Erased Frame Prediction
by Heqing Huang, Bing Zhao, Fei Gao, Penghui Chen, Jun Wang and Amir Hussain
Sensors 2023, 23(10), 4828; https://doi.org/10.3390/s23104828 - 17 May 2023
Cited by 5 | Viewed by 2427
Abstract
Reconstruction-based and prediction-based approaches are widely used for video anomaly detection (VAD) in smart city surveillance applications. However, neither of these approaches can effectively utilize the rich contextual information that exists in videos, which makes it difficult to accurately perceive anomalous activities. In this paper, we exploit the idea of a training model based on the “Cloze Test” strategy in natural language processing (NLP) and introduce a novel unsupervised learning framework to encode both motion and appearance information at an object level. Specifically, to store the normal modes of video activity reconstructions, we first design an optical flow memory network with skip connections. Secondly, we build a space–time cube (STC) for use as the basic processing unit of the model and erase a patch in the STC to form the frame to be reconstructed. This enables a so-called “incomplete event” (IE) to be completed. On this basis, a conditional autoencoder is utilized to capture the high correspondence between optical flow and the STC. The model predicts erased patches in IEs based on the context of the preceding and following frames. Finally, we employ a generative adversarial network (GAN)-based training method to improve the performance of VAD. By distinguishing the predicted erased optical flow and the erased video frame, the anomaly detection results are shown to be more reliable with our proposed method, which can help reconstruct the original video in the IE. Comparative experiments conducted on the benchmark UCSD Ped2, CUHK Avenue, and ShanghaiTech datasets demonstrate AUROC scores reaching 97.7%, 89.7%, and 75.8%, respectively. Full article
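To make the "erased frame" idea concrete: a space-time cube stacks object-level patches from consecutive frames, and one temporal slice is zeroed so a model must inpaint it from context. A minimal NumPy sketch (the patch size, cube length, and erasure policy are assumptions, not the paper's settings):

```python
import numpy as np

def make_stc(frames, box, t, length=5, size=32):
    """Stack `length` square patches centred on the same object box
    from consecutive frames into a space-time cube (length, size, size)."""
    x, y = box
    return np.stack([frames[t + i, y:y+size, x:x+size] for i in range(length)])

def erase_patch(cube, erase_idx=2):
    """Zero out one temporal slice (the 'erased frame'); the model's task
    is to predict it from the surrounding context."""
    target = cube[erase_idx].copy()
    incomplete = cube.copy()
    incomplete[erase_idx] = 0.0
    return incomplete, target

frames = np.random.rand(100, 240, 360).astype(np.float32)
cube = make_stc(frames, box=(50, 80), t=10)
incomplete_event, target = erase_patch(cube)
print(incomplete_event.shape, target.shape)  # (5, 32, 32) (32, 32)
```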

14 pages, 3308 KiB  
Article
Using Object Detection Technology to Identify Defects in Clothing for Blind People
by Daniel Rocha, Leandro Pinto, José Machado, Filomena Soares and Vítor Carvalho
Sensors 2023, 23(9), 4381; https://doi.org/10.3390/s23094381 - 28 Apr 2023
Cited by 4 | Viewed by 4148
Abstract
Blind people often encounter challenges in managing their clothing, specifically in identifying defects such as stains or holes. With the progress of the computer vision field, it is crucial to minimize these limitations as much as possible to assist blind people with selecting appropriate clothing. Therefore, the objective of this paper is to use object detection technology to categorize and detect stains on garments. The defect detection system proposed in this study relies on the You Only Look Once (YOLO) architecture, which is a single-stage object detector that is well-suited for automated inspection tasks. The authors collected a dataset of clothing with defects and used it to train and evaluate the proposed system. The methodology used for the optimization of the defect detection system was based on three main components: (i) increasing the dataset with new defects, illumination conditions, and backgrounds, (ii) introducing data augmentation, and (iii) introducing defect classification. The authors compared and evaluated three different YOLOv5 models. The results of this study demonstrate that the proposed approach is effective and suitable for different challenging defect detection conditions, showing high average precision (AP) values, and paving the way for a mobile application to be accessible for the blind community. Full article
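For readers wanting a starting point, YOLOv5 models can be loaded through the public ultralytics/yolov5 torch.hub entry point (internet access is needed on first use). The sketch below uses the generic pretrained yolov5s model; the paper's models were trained on a custom defect dataset, and the image and weight paths shown here are hypothetical:

```python
import torch

# Load a YOLOv5 model from the public ultralytics/yolov5 repository.
# For defect detection you would pass custom-trained weights, e.g.
# torch.hub.load('ultralytics/yolov5', 'custom', path='clothing_defects.pt').
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)
model.conf = 0.4  # confidence threshold

results = model('garment_photo.jpg')     # path, URL, or numpy image
detections = results.pandas().xyxy[0]    # one row per detected object
for _, det in detections.iterrows():
    print(f"{det['name']}: conf={det['confidence']:.2f}, "
          f"box=({det.xmin:.0f},{det.ymin:.0f},{det.xmax:.0f},{det.ymax:.0f})")
```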

18 pages, 10605 KiB  
Article
Assessing Virtual Reality Spaces for Elders Using Image-Based Sentiment Analysis and Stress Level Detection
by Makrina Viola Kosti, Nefeli Georgakopoulou, Sotiris Diplaris, Theodora Pistola, Konstantinos Chatzistavros, Vasileios-Rafail Xefteris, Athina Tsanousa, Stefanos Vrochidis and Ioannis Kompatsiaris
Sensors 2023, 23(8), 4130; https://doi.org/10.3390/s23084130 - 20 Apr 2023
Cited by 2 | Viewed by 2373
Abstract
To fight loneliness, seniors need to communicate with other people and engage in activities that keep their minds active and increase their social capital. There is intensified interest, in both industry and academia, in the development of social virtual reality environments to address the problem of social isolation of older people. Due to the vulnerability of the social group involved in this field of research, the need to apply evaluation methods to the proposed VR environments becomes even more important. The range of techniques that can be exploited in this field is constantly expanding, with visual sentiment analysis being a characteristic example. In this study, we introduce the use of image-based sentiment analysis and behavioural analysis as a technique to assess a social VR space for elders and present some promising preliminary results. Full article

20 pages, 15063 KiB  
Article
Regression-Based Camera Pose Estimation through Multi-Level Local Features and Global Features
by Meng Xu, Zhihuang Zhang, Yuanhao Gong and Stefan Poslad
Sensors 2023, 23(8), 4063; https://doi.org/10.3390/s23084063 - 18 Apr 2023
Cited by 6 | Viewed by 4099
Abstract
Accurate and robust camera pose estimation is essential for high-level applications such as augmented reality and autonomous driving. Despite the development of global feature-based camera pose regression methods and local feature-based matching guided pose estimation methods, challenging conditions, such as illumination changes and viewpoint changes, as well as inaccurate keypoint localization, continue to affect the performance of camera pose estimation. In this paper, we propose a novel relative camera pose regression framework that uses global features with rotation consistency and local features with rotation invariance. First, we apply a multi-level deformable network to detect and describe local features, which can learn appearances and gradient information sensitive to rotation variants. Second, we guide the detection and description processes using the results from pixel correspondences of the input image pairs. Finally, we propose a novel loss that combines relative regression loss and absolute regression loss, incorporating global features with geometric constraints to optimize the pose estimation model. Our extensive experiments report satisfactory accuracy on the 7Scenes dataset with an average mean translation error of 0.18 m and a rotation error of 7.44° using image pairs as input. Ablation studies were also conducted to verify the effectiveness of the proposed method in the tasks of pose estimation and image matching using the 7Scenes and HPatches datasets. Full article
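The combined loss is the most transferable ingredient. A hedged PyTorch sketch of a relative-plus-absolute pose regression loss (the quaternion parametrization, L1 translation term, and weights are assumptions, not the paper's exact formulation):

```python
import torch
import torch.nn.functional as F

def pose_loss(t_pred, q_pred, t_gt, q_gt, beta=10.0):
    """L1 translation error plus weighted quaternion rotation error.
    q and -q represent the same rotation; min() handles that ambiguity."""
    t_err = F.l1_loss(t_pred, t_gt)
    q_pred = F.normalize(q_pred, dim=-1)
    q_err = torch.min((q_pred - q_gt).norm(dim=-1),
                      (q_pred + q_gt).norm(dim=-1)).mean()
    return t_err + beta * q_err

def combined_loss(rel_pred, rel_gt, abs_pred, abs_gt, alpha=0.5):
    """Weighted sum of relative and absolute regression losses.
    Each argument is a (translation, quaternion) tuple of tensors."""
    return (alpha * pose_loss(*rel_pred, *rel_gt)
            + (1 - alpha) * pose_loss(*abs_pred, *abs_gt))

# Toy usage with batch size 4:
t = lambda *s: torch.randn(*s)
rel_pred, rel_gt = (t(4, 3), t(4, 4)), (t(4, 3), F.normalize(t(4, 4), dim=-1))
abs_pred, abs_gt = (t(4, 3), t(4, 4)), (t(4, 3), F.normalize(t(4, 4), dim=-1))
print(combined_loss(rel_pred, rel_gt, abs_pred, abs_gt).item())
```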

14 pages, 1607 KiB  
Article
Recognition Performance Analysis of a Multimodal Biometric System Based on the Fusion of 3D Ultrasound Hand-Geometry and Palmprint
by Monica Micucci and Antonio Iula
Sensors 2023, 23(7), 3653; https://doi.org/10.3390/s23073653 - 31 Mar 2023
Cited by 7 | Viewed by 2026
Abstract
Multimodal biometric systems are often used in a wide variety of applications where high security is required. Such systems show several merits in terms of universality and recognition rate compared to unimodal systems. Among several acquisition technologies, ultrasound bears great potential in high secure access applications because it allows the acquisition of 3D information about the human body and is able to verify liveness of the sample. In this work, recognition performances of a multimodal system obtained by fusing palmprint and hand-geometry 3D features, which are extracted from the same collected volumetric image, are extensively evaluated. Several fusion techniques based on the weighted score sum rule and on a wide variety of possible combinations of palmprint and hand geometry scores are experimented with. Recognition performances of the various methods are evaluated and compared through verification and identification experiments carried out on a homemade database employed in previous works. Verification results demonstrated that the fusion, in most cases, produces a noticeable improvement compared to unimodal systems: an EER value of 0.06% is achieved in at least five cases against values of 1.18% and 0.63% obtained in the best case for unimodal palmprint and hand geometry, respectively. The analysis also revealed that the best fusion results do not include any combination between the best scores of unimodal characteristics. Identification experiments, carried out for the methods that provided the best verification results, consistently demonstrated an identification rate of 100%, against 98% and 91% obtained in the best case for unimodal palmprint and hand geometry, respectively. Full article
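The weighted score sum rule itself is a one-liner: fused = w * s_palm + (1 - w) * s_geometry, with the weight scanned to minimize the equal error rate (EER). A self-contained sketch with synthetic genuine/impostor score distributions (invented for illustration only):

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(1)
n = 500
labels = np.r_[np.ones(n), np.zeros(n)]           # 1 = genuine, 0 = impostor

# Synthetic matcher scores for the two modalities (higher = more similar).
palm = np.r_[rng.normal(0.75, 0.10, n), rng.normal(0.45, 0.10, n)]
geom = np.r_[rng.normal(0.70, 0.12, n), rng.normal(0.50, 0.12, n)]

def eer(scores, labels):
    """Equal error rate: the point where FAR (fpr) equals FRR (1 - tpr)."""
    fpr, tpr, _ = roc_curve(labels, scores)
    frr = 1 - tpr
    i = np.argmin(np.abs(fpr - frr))
    return (fpr[i] + frr[i]) / 2

# Weighted score-sum fusion: scan the weight and keep the best EER.
best = min((eer(w*palm + (1-w)*geom, labels), w) for w in np.linspace(0, 1, 101))
print(f"unimodal EERs: palm={eer(palm, labels):.3%}, geom={eer(geom, labels):.3%}")
print(f"fused: EER={best[0]:.3%} at weight w={best[1]:.2f}")
```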

15 pages, 11431 KiB  
Article
Benchmarking of Contactless Heart Rate Measurement Systems in ARM-Based Embedded Platforms
by Andrea Manni, Andrea Caroppo, Gabriele Rescio, Pietro Siciliano and Alessandro Leone
Sensors 2023, 23(7), 3507; https://doi.org/10.3390/s23073507 - 27 Mar 2023
Cited by 1 | Viewed by 2273
Abstract
Heart rate monitoring is especially important for aging individuals because it is associated with longevity and cardiovascular risk. Typically, this vital parameter can be measured using wearable sensors, which are widely available commercially. However, wearable sensors have some disadvantages in terms of acceptability, especially when used by elderly people. Thus, contactless solutions have increasingly attracted the scientific community in recent years. Camera-based photoplethysmography (also known as remote photoplethysmography) is an emerging method of contactless heart rate monitoring that uses a camera and a processing unit on the hardware side, and appropriate image processing methodologies on the software side. This paper describes the design and implementation of a novel pipeline for heart rate estimation using a commercial and low-cost camera as the input device. The pipeline’s performance was tested and compared on a desktop PC, a laptop, and three different ARM-based embedded platforms (Raspberry Pi 4, Odroid N2+, and Jetson Nano). The results showed that the designed and implemented pipeline achieved an average accuracy of about 96.7% for heart rate estimation, with very low variance (between 1.5% and 2.5%) across processing platforms, user distances from the camera, and frame resolutions. Furthermore, benchmark analysis showed that the Odroid N2+ platform was the most convenient in terms of CPU load, RAM usage, and average execution time of the algorithmic pipeline. Full article
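A typical camera-based PPG chain averages the green channel over a face region, band-pass filters the trace to the plausible heart-rate band, and reads the dominant spectral peak. The following sketch illustrates that generic chain, not the paper's specific pipeline:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_hr(green_trace, fs=30.0):
    """Estimate heart rate (bpm) from the mean green-channel intensity
    of a face ROI sampled at fs frames per second."""
    x = green_trace - np.mean(green_trace)
    # Band-pass 0.7-4.0 Hz (42-240 bpm), the plausible heart-rate band.
    b, a = butter(3, [0.7, 4.0], btype="band", fs=fs)
    x = filtfilt(b, a, x)
    # Dominant frequency via FFT.
    freqs = np.fft.rfftfreq(len(x), d=1/fs)
    spectrum = np.abs(np.fft.rfft(x))
    band = (freqs >= 0.7) & (freqs <= 4.0)
    f_peak = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * f_peak

# Synthetic 20 s trace: a 72 bpm pulse buried in noise.
fs = 30.0
t = np.arange(0, 20, 1/fs)
trace = 0.02*np.sin(2*np.pi*1.2*t) + 0.05*np.random.randn(t.size) + 100
print(f"estimated HR: {estimate_hr(trace, fs):.1f} bpm")  # ~72 expected
```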

16 pages, 3525 KiB  
Article
Patient–Therapist Cooperative Hand Telerehabilitation through a Novel Framework Involving the Virtual Glove System
by Giuseppe Placidi, Alessandro Di Matteo, Daniele Lozzi, Matteo Polsinelli and Eleni Theodoridou
Sensors 2023, 23(7), 3463; https://doi.org/10.3390/s23073463 - 25 Mar 2023
Cited by 7 | Viewed by 2236
Abstract
Telerehabilitation is important for post-stroke or post-surgery rehabilitation because the tasks it uses are reproducible. When combined with assistive technologies, such as robots, virtual reality, tracking systems, or a combination of them, it can also allow the recording of a patient’s progression and rehabilitation monitoring, along with an objective evaluation. In this paper, we present the structure, from actors and functionalities to software and hardware views, of a novel framework that allows cooperation between patients and therapists. The system uses a computer-vision-based system named virtual glove for real-time hand tracking (40 fps), which is translated into a light and precise system. The novelty of this work lies in the fact that it gives the therapist quantitative, not only qualitative, information about the hand’s mobility, for every hand joint separately, while at the same time providing control of the result of the rehabilitation by also quantitatively monitoring the progress of the hand mobility. Finally, it also offers a strategy for patient–therapist interaction and therapist–therapist data sharing. Full article

15 pages, 6727 KiB  
Article
Static Hand Gesture Recognition Using Capacitive Sensing and Machine Learning
by Frazer Noble, Muqing Xu and Fakhrul Alam
Sensors 2023, 23(7), 3419; https://doi.org/10.3390/s23073419 - 24 Mar 2023
Cited by 10 | Viewed by 3142
Abstract
Automated hand gesture recognition is a key enabler of Human-to-Machine Interfaces (HMIs) and smart living. This paper reports the development and testing of a static hand gesture recognition system using capacitive sensing. Our system consists of a 6×18 array of capacitive sensors that captured five gestures—Palm, Fist, Middle, OK, and Index—of five participants to create a dataset of gesture images. The dataset was used to train Decision Tree, Naïve Bayes, Multi-Layer Perceptron (MLP) neural network, and Convolutional Neural Network (CNN) classifiers. Each classifier was trained five times; each time, the classifier was trained using four different participants’ gestures and tested with one different participant’s gestures. The MLP classifier performed the best, achieving an average accuracy of 96.87% and an average F1 score of 92.16%. This demonstrates that the proposed system can accurately recognize hand gestures and that capacitive sensing is a viable method for implementing a non-contact, static hand gesture recognition system. Full article
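The evaluation protocol is leave-one-subject-out cross-validation: train on four participants, test on the held-out fifth, and rotate. A sketch with scikit-learn and synthetic stand-in data shaped like the flattened 6x18 capacitive images (feature values and network sizes are invented):

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(0)
# Synthetic stand-in for the 6x18 capacitive "images":
# 5 participants x 5 gestures x 40 repetitions, flattened to 108 features.
X = rng.normal(size=(5*5*40, 6*18))
y = np.tile(np.repeat(np.arange(5), 40), 5)        # gesture label 0..4
groups = np.repeat(np.arange(5), 5*40)             # participant id 0..4
X += y[:, None] * 0.3                              # make classes separable

accs, f1s = [], []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
    clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                        random_state=0).fit(X[train_idx], y[train_idx])
    pred = clf.predict(X[test_idx])
    accs.append(accuracy_score(y[test_idx], pred))
    f1s.append(f1_score(y[test_idx], pred, average="macro"))

print(f"mean accuracy={np.mean(accs):.2%}, mean macro-F1={np.mean(f1s):.2%}")
```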

28 pages, 8511 KiB  
Article
Behavior and Task Classification Using Wearable Sensor Data: A Study across Different Ages
by Francesca Gasparini, Alessandra Grossi, Marta Giltri, Katsuhiro Nishinari and Stefania Bandini
Sensors 2023, 23(6), 3225; https://doi.org/10.3390/s23063225 - 17 Mar 2023
Cited by 3 | Viewed by 2504
Abstract
In this paper, we face the problem of task classification starting from physiological signals acquired using wearable sensors with experiments in a controlled environment, designed to consider two different age populations: young adults and older adults. Two different scenarios are considered. In the first one, subjects are involved in different cognitive load tasks, while in the second one, space varying conditions are considered, and subjects interact with the environment, changing the walking conditions and avoiding collision with obstacles. Here, we demonstrate that it is possible not only to define classifiers that rely on physiological signals to predict tasks that imply different cognitive loads, but it is also possible to classify both the population group age and the performed task. The whole workflow of data collection and analysis, starting from the experimental protocol, data acquisition, signal denoising, normalization with respect to subject variability, feature extraction and classification is described here. The dataset collected with the experiments together with the codes to extract the features of the physiological signals are made available for the research community. Full article
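Since the authors release their feature-extraction code, readers should consult it directly; purely for orientation, here is a generic sketch of the per-subject normalization and sliding-window statistical features such a workflow typically uses (window length, hop, and feature set are assumptions):

```python
import numpy as np

def zscore_per_subject(signal):
    """Normalize a 1-D physiological signal with respect to the subject's
    own baseline, reducing inter-subject variability."""
    return (signal - signal.mean()) / (signal.std() + 1e-8)

def window_features(signal, fs=64, win_s=10.0, hop_s=5.0):
    """Slide a window over the signal and extract simple statistical
    features per window (mean, std, min, max, RMS)."""
    win, hop = int(win_s * fs), int(hop_s * fs)
    feats = []
    for start in range(0, len(signal) - win + 1, hop):
        w = signal[start:start + win]
        feats.append([w.mean(), w.std(), w.min(), w.max(),
                      np.sqrt(np.mean(w**2))])
    return np.asarray(feats)

# Synthetic 2-minute electrodermal-like recording at 64 Hz.
sig = zscore_per_subject(np.cumsum(np.random.randn(2 * 60 * 64)) * 0.01)
F = window_features(sig)
print(F.shape)  # (n_windows, 5) feature matrix ready for a classifier
```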

20 pages, 7599 KiB  
Article
Cooktop Sensing Based on a YOLO Object Detection Algorithm
by Iker Azurmendi, Ekaitz Zulueta, Jose Manuel Lopez-Guede, Jon Azkarate and Manuel González
Sensors 2023, 23(5), 2780; https://doi.org/10.3390/s23052780 - 3 Mar 2023
Cited by 6 | Viewed by 3768
Abstract
Deep Learning (DL) has provided a significant breakthrough in many areas of research and industry. The development of Convolutional Neural Networks (CNNs) has enabled the improvement of computer vision-based techniques, making the information gathered from cameras more useful. For this reason, recently, studies have been carried out on the use of image-based DL in some areas of people’s daily life. In this paper, an object detection-based algorithm is proposed to modify and improve the user experience in relation to the use of cooking appliances. The algorithm can sense common kitchen objects and identify interesting situations for users. Some of these situations are the detection of utensils on lit hobs, recognition of boiling, smoking and oil in kitchenware, and determination of good cookware size adjustment, among others. In addition, the authors have achieved sensor fusion by using a cooker hob with Bluetooth connectivity, so it is possible to automatically interact with it via an external device such as a computer or a mobile phone. Our main contribution focuses on supporting people when they are cooking, controlling heaters, or alerting them with different types of alarms. To the best of our knowledge, this is the first time a YOLO algorithm has been used to control the cooktop by means of visual sensorization. Moreover, this research paper provides a comparison of the detection performance among different YOLO networks. Additionally, a dataset of more than 7500 images has been generated and multiple data augmentation techniques have been compared. The results show that YOLOv5s can successfully detect common kitchen objects with high accuracy and fast speed, and it can be employed for realistic cooking environment applications. Finally, multiple examples of the identification of interesting situations and how we act on the cooktop are presented. Full article

20 pages, 12569 KiB  
Article
sSLAM: Speeded-Up Visual SLAM Mixing Artificial Markers and Temporary Keypoints
by Francisco J. Romero-Ramirez, Rafael Muñoz-Salinas, Manuel J. Marín-Jiménez, Miguel Cazorla and Rafael Medina-Carnicer
Sensors 2023, 23(4), 2210; https://doi.org/10.3390/s23042210 - 16 Feb 2023
Cited by 5 | Viewed by 2182
Abstract
Environment landmarks are generally employed by visual SLAM (vSLAM) methods in the form of keypoints. However, these landmarks are unstable over time because they belong to areas that tend to change, e.g., shadows or moving objects. To solve this, some other authors have proposed the combination of keypoints and artificial markers distributed in the environment so as to facilitate the tracking process in the long run. Artificial markers are special elements (similar to beacons) that can be permanently placed in the environment to facilitate tracking. In any case, these systems keep a set of keypoints that is not likely to be reused, thus unnecessarily increasing the computing time required for tracking. This paper proposes a novel visual SLAM approach that efficiently combines keypoints and artificial markers, allowing for a substantial reduction in the computing time and memory required without noticeably degrading the tracking accuracy. In the first stage, our system creates a map of the environment using both keypoints and artificial markers, but once the map is created, the keypoints are removed and only the markers are kept. Thus, our map stores only long-lasting features of the environment (i.e., the markers). Then, for localization purposes, our algorithm uses the marker information along with temporary keypoints created just in the time of tracking, which are removed after a while. Since our algorithm keeps only a small subset of recent keypoints, it is faster than the state-of-the-art vSLAM approaches. The experimental results show that our proposed sSLAM compares favorably with ORB-SLAM2, ORB-SLAM3, OpenVSLAM and UcoSLAM in terms of speed, without statistically significant differences in accuracy. Full article
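The abstract does not name the marker system, but the authors' related work (e.g., UcoSLAM) uses ArUco-style fiducials, whose detection is what makes the map's landmarks permanently re-identifiable. A minimal detection sketch with OpenCV's contrib ArUco module (the class-based API shown requires opencv-contrib-python 4.7 or newer; marker id and sizes are arbitrary):

```python
import cv2
import numpy as np

# Artificial markers give stable, permanently re-detectable landmarks.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

# Synthetic test image containing marker id 7 on a white background.
marker = cv2.aruco.generateImageMarker(dictionary, 7, 200)
frame = np.full((480, 640), 255, np.uint8)
frame[140:340, 220:420] = marker

corners, ids, _rejected = detector.detectMarkers(frame)
print("detected ids:", None if ids is None else ids.ravel().tolist())
# Each marker's 4 corners plus its known physical size yield camera pose
# via PnP; keypoints would complement these only temporarily, per sSLAM.
```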

28 pages, 2736 KiB  
Article
Few-Shot User-Adaptable Radar-Based Breath Signal Sensing
by Gianfranco Mauro, Maria De Carlos Diez, Julius Ott, Lorenzo Servadei, Manuel P. Cuellar and Diego P. Morales-Santos
Sensors 2023, 23(2), 804; https://doi.org/10.3390/s23020804 - 10 Jan 2023
Cited by 5 | Viewed by 2587
Abstract
Vital signs estimation provides valuable information about an individual’s overall health status. Gathering such information usually requires wearable devices or privacy-invasive settings. In this work, we propose a radar-based user-adaptable solution for respiratory signal prediction while sitting at an office desk. Such an approach leads to a contact-free, privacy-friendly, and easily adaptable system with little reference training data. Data from 24 subjects are preprocessed to extract respiration information using a 60 GHz frequency-modulated continuous wave radar. With few training examples, episodic optimization-based learning allows for generalization to new individuals. Episodically, a convolutional variational autoencoder learns how to map the processed radar data to a reference signal, generating a latent space constrained to the central respiration frequency. Moreover, autocorrelation over time of the recorded radar data assesses the information corruption due to subject motion. The model learning procedure and breathing prediction are adjusted by exploiting the motion corruption level. Thanks to the episodically acquired knowledge, the model requires an adaptation time of less than one second for one training example and less than two seconds for five. The suggested approach represents a novel, quickly adaptable, non-contact alternative for office settings with little user motion. Full article
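The motion-corruption check is an autocorrelation over the radar-derived respiration trace: clean breathing is strongly periodic, while motion breaks the periodicity. A sketch of such a score (the lag band and synthetic signals are assumptions, not the paper's exact metric):

```python
import numpy as np

def periodicity_score(x, fs=20.0, f_lo=0.1, f_hi=0.5):
    """Height of the strongest normalized-autocorrelation peak at lags
    corresponding to plausible breathing rates (f_lo..f_hi Hz).
    Near 1 = clean periodic breathing; low = likely motion corruption."""
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x)-1:]
    ac /= ac[0] + 1e-12                       # normalize so ac[0] = 1
    lo, hi = int(fs / f_hi), int(fs / f_lo)   # lag range in samples
    return float(ac[lo:hi].max())

fs = 20.0
t = np.arange(0, 30, 1/fs)
clean = np.sin(2*np.pi*0.25*t) + 0.1*np.random.randn(t.size)
corrupt = clean + np.where((t > 10) & (t < 15), 3*np.random.randn(t.size), 0)
print(f"clean: {periodicity_score(clean, fs):.2f}, "
      f"corrupted: {periodicity_score(corrupt, fs):.2f}")
```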

25 pages, 7049 KiB  
Article
The Development of an Energy Efficient Temperature Controller for Residential Use and Its Generalization Based on LSTM
by Tudor George Alexandru, Adriana Alexandru, Florin Dumitru Popescu and Andrei Andraș
Sensors 2023, 23(1), 453; https://doi.org/10.3390/s23010453 - 1 Jan 2023
Cited by 1 | Viewed by 2243
Abstract
Thermostats operate alongside intelligent home automation systems for ensuring both the comfort of the occupants as well as the responsible use of energy. The effectiveness of such solutions relies on the ability of the adopted control methodology to respond to changes in the surrounding environment. In this regard, process disturbances such as severe wind or fluctuating ambient temperatures must be taken into account. The present paper proposes a new approach for estimating the heat transfer of residential buildings by employing a lumped parameter thermal analysis model. Various control strategies are adopted and tuned into a virtual environment. The knowledge gained is generalized by means of a long short-term memory (LSTM) neural network. Laboratory scale experiments are provided to prove the given concepts. The results achieved highlight the efficiency of the implemented temperature controller in terms of overshoot and energy consumption. Full article
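As a minimal illustration of the generalization step, an LSTM can be trained to map sequences of ambient conditions and heater commands to the resulting temperature. The sketch below is a toy PyTorch model on synthetic data; the sizes, inputs, and target relation are invented, not the paper's configuration:

```python
import torch
import torch.nn as nn

class ThermalLSTM(nn.Module):
    """Map a sequence of (ambient temperature, heater duty cycle) pairs
    to the indoor temperature at the next time step."""
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, seq_len, 2)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # prediction from the last step

torch.manual_seed(0)
x = torch.rand(256, 48, 2)                 # 48 past time steps per sample
# Invented target: warmer with recent heating, cooler with low ambient.
y = (20 + 5*x[:, -12:, 1].mean(1) - 2*(1 - x[:, -12:, 0].mean(1))).unsqueeze(1)

model = ThermalLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for step in range(200):
    loss = nn.functional.mse_loss(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
print(f"final training MSE: {loss.item():.4f}")
```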

18 pages, 5122 KiB  
Article
An Embedded Framework for Fully Autonomous Object Manipulation in Robotic-Empowered Assisted Living
by Giovanni Mezzina and Daniela De Venuto
Sensors 2023, 23(1), 103; https://doi.org/10.3390/s23010103 - 22 Dec 2022
Viewed by 2045
Abstract
Most of the humanoid social robots currently available are designed only for verbal and animated interactions with users, and despite being equipped with two upper arms for interactive animation, they lack object manipulation capabilities. In this paper, we propose the MONOCULAR (eMbeddable autONomous ObjeCt manipULAtion Routines) framework, which implements a set of routines to add manipulation functionalities to social robots by exploiting the functional data fusion of two RGB cameras and a 3D depth sensor placed in the head frame. The framework is designed to: (i) localize specific objects to be manipulated via RGB cameras; (ii) define the characteristics of the shelf on which they are placed; and (iii) autonomously adapt approach and manipulation routines to avoid collisions and maximize grabbing accuracy. To localize the item on the shelf, MONOCULAR exploits an embeddable version of the You Only Look Once (YOLO) object detector. The RGB camera outcomes are also used to estimate the height of the shelf using an edge-detecting algorithm. Based on the item’s position and the estimated shelf height, MONOCULAR is designed to select between two possible routines that dynamically optimize the approach and object manipulation parameters according to the real-time analysis of RGB and 3D sensor frames. These two routines are optimized for a central or lateral approach to objects on a shelf. The MONOCULAR procedures are designed to be fully automatic, intrinsically protecting sensitive users’ data and stored home or hospital maps. MONOCULAR was optimized for Pepper by SoftBank Robotics. To characterize the proposed system, a case study in which Pepper is used as a drug delivery operator is proposed. The case study is divided into: (i) pharmaceutical package search; (ii) object approach and manipulation; and (iii) delivery operations. Experimental data showed that the object manipulation routine for laterally placed objects achieves a best grabbing success rate of 96%, while the routine for centrally placed objects can reach 97% for a wide range of different shelf heights. Finally, a proof of concept is proposed here to demonstrate the applicability of the MONOCULAR framework in a real-life scenario. Full article

14 pages, 1798 KiB  
Article
Image Classification Using Multiple Convolutional Neural Networks on the Fashion-MNIST Dataset
by Olivia Nocentini, Jaeseok Kim, Muhammad Zain Bashir and Filippo Cavallo
Sensors 2022, 22(23), 9544; https://doi.org/10.3390/s22239544 - 6 Dec 2022
Cited by 16 | Viewed by 5525
Abstract
As the elderly population grows, so does the need for caregivers, which may become unsustainable for society. In this situation, the demand for automated help increases. One solution is service robotics, in which autonomous robots show significant promise in working with people. In particular, household settings and aged people’s homes will need these robots to perform daily activities. Clothing manipulation is a daily activity and represents a challenging area for a robot. Detection and classification are key steps in the manipulation of clothes. For this reason, in this paper, we study fashion image classification with four different neural network models to improve apparel image classification accuracy on the Fashion-MNIST dataset. The model with the highest accuracy is further tested on a Fashion-Product dataset and a customized dataset. The results show that one of our models, the Multiple Convolutional Neural Network with 15 convolutional layers (MCNN15), boosted the state-of-the-art accuracy, obtaining a classification accuracy of 94.04% on the Fashion-MNIST dataset. Moreover, MCNN15 obtained 60% and 40% accuracy on the Fashion-Product dataset and the household dataset, respectively. Full article
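For orientation, a compact Fashion-MNIST training loop with torchvision is shown below (the dataset downloads on first run). This small two-conv-layer network is a stand-in only; the paper's MCNN15 stacks 15 convolutional layers to reach its 94.04% accuracy:

```python
import torch
import torch.nn as nn
from torchvision import datasets, transforms

# Minimal Fashion-MNIST classifier (a small stand-in for MCNN15).
model = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(64*7*7, 128), nn.ReLU(), nn.Linear(128, 10),
)

tfm = transforms.ToTensor()
train = datasets.FashionMNIST("data", train=True, download=True, transform=tfm)
loader = torch.utils.data.DataLoader(train, batch_size=128, shuffle=True)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(1):                       # one epoch for brevity
    for imgs, labels in loader:
        loss = nn.functional.cross_entropy(model(imgs), labels)
        opt.zero_grad(); loss.backward(); opt.step()
    print(f"epoch {epoch}: last-batch loss {loss.item():.3f}")
```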

18 pages, 2631 KiB  
Article
Focal DETR: Target-Aware Token Design for Transformer-Based Object Detection
by Tianming Xie, Zhonghao Zhang, Jing Tian and Lihong Ma
Sensors 2022, 22(22), 8686; https://doi.org/10.3390/s22228686 - 10 Nov 2022
Cited by 6 | Viewed by 2562
Abstract
In this paper, we propose a novel target-aware token design for transformer-based object detection. To tackle the target attribute diffusion challenge of transformer-based object detection, we propose two key components in the new target-aware token design mechanism. Firstly, we propose a target-aware sampling module, which forces the sampling patterns to converge inside the target region and obtain its representative encoded features. More specifically, a set of four sampling patterns are designed, including small and large patterns, which focus on the detailed and overall characteristics of a target, respectively, as well as the vertical and horizontal patterns, which handle the object’s directional structures. Secondly, we propose a target-aware key-value matrix. This is a unified, learnable, feature-embedding matrix which is directly weighted on the feature map to reduce the interference of non-target regions. With such a new design, we propose a new variant of the transformer-based object-detection model, called Focal DETR, which achieves superior performance over the state-of-the-art transformer-based object-detection models on the COCO object-detection benchmark dataset. Experimental results demonstrate that our Focal DETR achieves a 44.7 AP in the coco2017 test set, which is 2.7 AP and 0.9 AP higher than the DETR and deformable DETR using the same training strategy and the same feature-extraction network. Full article

Review


33 pages, 463 KiB  
Review
Review on Human Action Recognition in Smart Living: Sensing Technology, Multimodality, Real-Time Processing, Interoperability, and Resource-Constrained Processing
by Giovanni Diraco, Gabriele Rescio, Pietro Siciliano and Alessandro Leone
Sensors 2023, 23(11), 5281; https://doi.org/10.3390/s23115281 - 2 Jun 2023
Cited by 26 | Viewed by 5364
Abstract
Smart living, a concept that has gained increasing attention in recent years, revolves around integrating advanced technologies in homes and cities to enhance the quality of life for citizens. Sensing and human action recognition are crucial aspects of this concept. Smart living applications span various domains, such as energy consumption, healthcare, transportation, and education, which greatly benefit from effective human action recognition. This field, originating from computer vision, seeks to recognize human actions and activities using not only visual data but also many other sensor modalities. This paper comprehensively reviews the literature on human action recognition in smart living environments, synthesizing the main contributions, challenges, and future research directions. This review selects five key domains, i.e., Sensing Technology, Multimodality, Real-time Processing, Interoperability, and Resource-Constrained Processing, as they encompass the critical aspects required for successfully deploying human action recognition in smart living. These domains highlight the essential role that sensing and human action recognition play in successfully developing and implementing smart living solutions. This paper serves as a valuable resource for researchers and practitioners seeking to further explore and advance the field of human action recognition in smart living. Full article
