Microsoft Kinect Sensors: Innovative Solutions, Applications, and Validations

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: closed (25 October 2020) | Viewed by 23298

Special Issue Editors


Dr. Antonio E. Uva
Guest Editor
Associate Professor, Department of Mechanics, Mathematics and Management, Polytechnic Institute of Bari, 70126 Bari, Italy
Interests: CAD; human–computer interaction; virtual and augmented reality; bioengineering

Dr. Vito Modesto Manghisi
Guest Editor
Department of Mechanics, Mathematics and Management, Polytechnic Institute of Bari, 70126 Bari, Italy
Interests: ergonomics and human performance evaluation; virtual and augmented reality for health and industrial applications; human–computer interaction; user-centered design; machine learning; biometrics; digital image processing

Special Issue Information

Dear Colleagues,

Ever since the introduction of RGB-D sensor-based cameras into the entertainment market, we have observed a progressive broadening of their applications. Although the Microsoft Kinect failed as a gaming interface, its capacity to provide depth information at low cost has enabled the exploration of novel applications, ranging from environment monitoring and reconstruction to health care. The second-generation Microsoft Kinect sensor, based on time-of-flight technology, further improved the sensor's reliability in tasks such as reconstruction, object tracking and segmentation, and human body detection and "skeletonization". Furthermore, the device provides a useful microphone array that supports user interaction through speech recognition. The same technology underlies the tracking system of the Microsoft HoloLens. These successes pushed the company to develop the technology further, leading to the release of the third-generation Kinect sensor, the Azure Kinect DK.

This Special Issue aims to attract scientific contributions dealing with the prototype development, validation, and field application of innovative solutions based on sensors of the Microsoft Kinect family, including the recently released Azure Kinect DK.

Submitted papers should clearly demonstrate a novel contribution and an innovative application covering, but not limited to, any of the following topics:

  • object tracking and pose estimation
  • object/geometry recognition, measurement, and extraction
  • point clouds and CAD modelling
  • real-time depth data processing
  • HoloLens spatial mapping
  • body tracking and human motion recognition
  • 3D biometrics
  • data fusion and interoperability
  • gesture interaction and interface design
  • AR–VR applications
  • environment monitoring
  • ergonomics and operator’s safety
  • gait analysis
  • health monitoring
  • rehabilitation applications
  • serious games
  • educational systems
  • industrial applications
  • human–robot cooperation
  • cultural heritage applications

Dr. Antonio E. Uva
Dr. Vito Modesto Manghisi
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Microsoft Kinect
  • depth sensor
  • body tracking
  • object reconstruction
  • ergonomics
  • health care
  • human–machine interaction
  • gesture interface

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)

Research

17 pages, 4206 KiB  
Article
A Body Tracking-Based Low-Cost Solution for Monitoring Workers’ Hygiene Best Practices during Pandemics
by Vito M. Manghisi, Michele Fiorentino, Antonio Boccaccio, Michele Gattullo, Giuseppe L. Cascella, Nicola Toschi, Antonio Pietroiusti and Antonio E. Uva
Sensors 2020, 20(21), 6149; https://doi.org/10.3390/s20216149 - 29 Oct 2020
Cited by 12 | Viewed by 3784
Abstract
Since its beginning at the end of 2019, the pandemic spread of the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) caused more than one million deaths in only nine months. Emerging and re-emerging infectious diseases remain an imminent threat to human health. It is essential to implement adequate hygiene best practices to break the contagion chain, enhance society's preparedness for such critical scenarios, and understand the relevance of each disease transmission route. As unconscious hand–face contact constitutes a potential pathway of contagion, in this paper the authors present a prototype system based on low-cost depth sensors that monitors this habit in real time. The system records people's behavior to enhance their awareness by providing real-time warnings, generates statistical reports for designing proper hygiene solutions, and helps clarify the role of this route of contagion. A preliminary validation study measured an overall accuracy of 91%, and a Cohen's kappa of 0.876 supports rejecting the hypothesis that this accuracy is accidental. Low-cost body-tracking technologies can effectively support monitoring compliance with hygiene best practices and training people in real time. By collecting data and analyzing them with respect to people categories and contagion statistics, it could be possible to understand the importance of this contagion pathway and identify the categories of people for whom this behavioral attitude constitutes a significant risk.
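
The monitoring approach described above comes down to checking, frame by frame, whether a tracked hand joint comes close to the head joint. Below is a minimal sketch of that idea in Python, assuming joint positions have already been extracted from a Kinect body-tracking stream; the joint names and distance threshold are illustrative assumptions, not the authors' calibrated parameters.

```python
import numpy as np

# Illustrative proximity threshold in metres (the paper's calibrated value is not given here).
CONTACT_THRESHOLD_M = 0.12

def hand_face_contact(joints, threshold=CONTACT_THRESHOLD_M):
    """Return True if either hand joint is within `threshold` metres of the head joint.

    `joints` maps joint names to 3D positions in the sensor frame, e.g. values
    read from a Kinect body-tracking SDK and converted to a plain dictionary.
    """
    head = np.asarray(joints["head"], dtype=float)
    for hand in ("hand_left", "hand_right"):
        if np.linalg.norm(np.asarray(joints[hand], dtype=float) - head) < threshold:
            return True
    return False

# Example frame with synthetic joint positions (metres).
frame = {
    "head": (0.0, 0.6, 2.0),
    "hand_left": (0.05, 0.55, 1.95),   # close to the face -> should trigger a warning
    "hand_right": (0.4, -0.2, 2.1),
}
if hand_face_contact(frame):
    print("Warning: possible hand-face contact")
```

In a real deployment the decision would be smoothed over several frames and logged, which is what enables the statistical reports mentioned in the abstract.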

22 pages, 19609 KiB  
Article
Evaluation of Full-Body Gestures Performed by Individuals with Down Syndrome: Proposal for Designing User Interfaces for All Based on Kinect Sensor
by Marta Sylvia Del Rio Guerra and Jorge Martin-Gutierrez
Sensors 2020, 20(14), 3930; https://doi.org/10.3390/s20143930 - 15 Jul 2020
Cited by 4 | Viewed by 3452
Abstract
The ever-growing and widespread use of touch, face, full-body, and 3D mid-air gesture recognition sensors in domestic and industrial settings is serving to highlight whether interactive gestures are sufficiently inclusive, and whether or not they can be executed by all users. The purpose of this study was to analyze full-body gestures from the point of view of user experience using the Microsoft Kinect sensor, in order to identify which gestures are easy for individuals living with Down syndrome. With this information, app developers can satisfy Design for All (DfA) requirements by selecting suitable gestures from existing lists of gesture sets. A set of twenty full-body gestures was analyzed in this study; to do so, the research team developed an application to measure the success/failure rates and execution times of each gesture. The results show that the failure rate for gesture execution is greater than the success rate, and that there is no difference between male and female participants in terms of execution times or the successful execution of gestures. Through this study, we conclude that, in general, people living with Down syndrome are not able to perform certain full-body gestures correctly. This is a direct consequence of limitations resulting from characteristic physical and motor impairments. As a consequence, the Microsoft Kinect sensor cannot identify the gestures. It is important to remember this fact when developing gesture-based Human–Computer Interaction (HCI) applications that use the Kinect sensor as an input device, when the apps are going to be used by people who have such disabilities.
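
As a rough illustration of the per-gesture metrics reported in the study (success/failure rate and execution time), the sketch below aggregates a hypothetical trial log; the gesture names, data, and log format are invented for the example and do not come from the study's dataset.

```python
from statistics import mean

# Hypothetical trial log: (gesture, participant, success, execution_time_s).
trials = [
    ("raise_both_arms", "P01", True, 2.4),
    ("raise_both_arms", "P02", False, 4.1),
    ("jump", "P01", False, 3.8),
    ("jump", "P02", False, 5.2),
]

# Group trials by gesture.
by_gesture = {}
for gesture, _participant, success, time_s in trials:
    by_gesture.setdefault(gesture, []).append((success, time_s))

# Report per-gesture success rate and mean execution time.
for gesture, results in by_gesture.items():
    success_rate = sum(ok for ok, _t in results) / len(results)
    avg_time = mean(t for _ok, t in results)
    print(f"{gesture}: success rate {success_rate:.0%}, mean execution time {avg_time:.1f} s")
```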

17 pages, 6004 KiB  
Article
Optimal Use of Titanium Dioxide Colourant to Enable Water Surfaces to Be Measured by Kinect Sensors
by Andrew Nichols, Matteo Rubinato, Yun-Hang Cho and Jiayi Wu
Sensors 2020, 20(12), 3507; https://doi.org/10.3390/s20123507 - 21 Jun 2020
Cited by 5 | Viewed by 2981
Abstract
Recent studies have sought to use Microsoft Kinect sensors to measure water surface shape in steady flows or transient flow processes. They have typically employed a white colourant, usually titanium dioxide (TiO2), in order to make the surface opaque and visible to the infrared-based sensors. However, the ability of Kinect Version 1 (KV1) and Kinect Version 2 (KV2) sensors to measure the deformation of ostensibly smooth reflective surfaces has never been compared, with most previous studies using a V1 sensor with no justification. Furthermore, the TiO2 has so far been used liberally and indeterminately, with no consideration of the type of TiO2 to use, the optimal proportion to use, or the effect it may have on the very fluid properties being measured. This paper examines the use of anatase TiO2 with two generations of the Microsoft Kinect sensor. Assessing their performance for an ideal flat surface, it is shown that surface data obtained using the V2 sensor are substantially more reliable. Further, the minimum quantity of colourant that enables reliable surface recognition is identified (0.01% by mass). A stability test shows that the colourant has a strong tendency to settle over time, meaning the fluid must remain well mixed, which has serious implications for studies at low Reynolds numbers or of transient processes such as dam breaks. Furthermore, the effect of TiO2 concentration on fluid properties is examined. It is shown that previous studies using concentrations in excess of 1% may have significantly affected the viscosity and surface tension, and thus the surface behaviour being measured. It is therefore recommended that future studies employ the V2 sensor with an anatase TiO2 concentration of 0.01%, and that the effects of TiO2 on the fluid properties are properly quantified before any TiO2-Kinect-derived dataset can be of practical use, for example, in the validation of numerical models or in physical models of hydrodynamic processes.
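
One straightforward way to assess depth reliability for an ideal flat surface, as described above, is to fit a plane to the measured point cloud and examine the residuals. The following is a minimal least-squares sketch on synthetic data; it illustrates the flat-surface test only and is not the authors' processing pipeline.

```python
import numpy as np

def plane_fit_rms(points):
    """Fit z = a*x + b*y + c by least squares and return the RMS residual (same units as z)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    residuals = z - A @ coeffs
    return float(np.sqrt(np.mean(residuals ** 2)))

# Synthetic "flat" water surface sampled over a 1 m x 1 m area with 2 mm of sensor noise.
rng = np.random.default_rng(0)
xy = rng.uniform(-0.5, 0.5, size=(5000, 2))
z = 1.0 + 0.002 * rng.standard_normal(5000)
points = np.column_stack([xy, z])
print(f"RMS deviation from the fitted plane: {plane_fit_rms(points) * 1000:.2f} mm")
```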

15 pages, 3033 KiB  
Article
Study of Postural Stability Features by Using Kinect Depth Sensors to Assess Body Joint Coordination Patterns
by Chin-Hsuan Liu, Posen Lee, Yen-Lin Chen, Chen-Wen Yen and Chao-Wei Yu
Sensors 2020, 20(5), 1291; https://doi.org/10.3390/s20051291 - 27 Feb 2020
Cited by 15 | Viewed by 3033
Abstract
Maintaining a stable posture requires the coordination of multiple joints of the body, and how this coordination is achieved remains a subject of research. The number of degrees of freedom (DOFs) of the human motor system is considerably larger than the DOFs required for postural balance, and how the central nervous system manages this redundancy remains unclear. To investigate this phenomenon, in this study, three local inter-joint coordination pattern (IJCP) features were introduced to characterize the strength, changing velocity, and complexity of the inter-joint couplings by computing the correlation coefficients between joint velocity signal pairs. In addition, to quantify the complexity of IJCPs from a global perspective, another set of IJCP features was introduced by performing principal component analysis on all joint velocity signals. A Microsoft Kinect depth sensor was used to acquire the motion of 15 joints of the body. The efficacy of the proposed features was tested using the captured motions of two age groups (18–24 and 65–73 years) during quiet standing. With regard to the redundant DOFs of the joints of the body, the experimental results suggested that the body uses an inter-joint coordination strategy intermediate between the two extreme coordination modes of total joint dependence and total joint independence. In addition, comparative statistical results for the proposed features showed that aging increases the coupling strength, decreases the changing velocity, and reduces the complexity of the IJCPs. These results also suggested that, with aging, the balance strategy tends to become more joint dependent. Because of the simplicity of the proposed features and the affordability of the easy-to-use Kinect depth sensor, such an assembly can be used to collect large amounts of data to explore the potential of the proposed features in assessing the performance of the human balance control system.
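
The local IJCP features are built from correlation coefficients between joint velocity signal pairs, while the global features come from principal component analysis over all joint velocity signals. The sketch below shows both computations on synthetic data; the summary statistics are simplified stand-ins for the paper's exact feature definitions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_frames, n_joints = 600, 15
# Hypothetical joint speed signals (frames x joints), e.g. derived from Kinect joint positions.
velocities = rng.standard_normal((n_frames, n_joints))

# Local view: correlation between every pair of joint velocity signals.
corr = np.corrcoef(velocities, rowvar=False)            # 15 x 15 correlation matrix
pair_corrs = corr[np.triu_indices(n_joints, k=1)]       # unique joint pairs only
coupling_strength = float(np.mean(np.abs(pair_corrs)))  # simplified "coupling strength" summary

# Global view: PCA on all joint velocity signals; fewer dominant components
# indicates lower coordination complexity.
centered = velocities - velocities.mean(axis=0)
eigvals = np.linalg.eigvalsh(np.cov(centered, rowvar=False))[::-1]
explained = eigvals / eigvals.sum()
n_components_90 = int(np.searchsorted(np.cumsum(explained), 0.90) + 1)

print(f"mean |correlation| over joint pairs: {coupling_strength:.2f}")
print(f"principal components needed for 90% of the variance: {n_components_90}")
```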

25 pages, 7951 KiB  
Article
Fast Method of Registration for 3D RGB Point Cloud with Improved Four Initial Point Pairs Algorithm
by Peng Li, Ruisheng Wang, Yanxia Wang and Ge Gao
Sensors 2020, 20(1), 138; https://doi.org/10.3390/s20010138 - 24 Dec 2019
Cited by 10 | Viewed by 4247
Abstract
Three-dimensional (3D) point cloud registration is an important step in 3D model reconstruction and 3D mapping. Currently, there are many methods for point cloud registration, but these methods cannot address efficiency and precision simultaneously. We propose a fast method of global registration based on RGB (Red, Green, Blue) values, using the four initial point pairs (FIPP) algorithm. First, the number of points of each distinct RGB value in a dataset is counted, and colors in the target dataset having too few points are discarded using a color filter. A candidate point set in the source dataset is then generated by comparing the similarity of colors between the two datasets within a color tolerance, and four point pairs are searched for across the two datasets using an improved FIPP algorithm. Finally, a rigid transformation matrix for global registration is calculated with total least squares (TLS), and local registration is performed with the iterative closest point (ICP) algorithm. The proposed method (RGB-FIPP) has been validated with two types of data, and the results show that it can effectively improve the speed of 3D point cloud registration while maintaining high accuracy. The method is suitable for point clouds with RGB values.
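
The colour-filter and candidate-generation steps described above can be illustrated with a short sketch: rare colours in the target cloud are discarded, and each source point is matched to the target points whose RGB values lie within a per-channel tolerance. The tolerance, minimum count, and synthetic data below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def colour_candidates(source_rgb, target_rgb, tol=10, min_count=5):
    """For each source point, return the indices of target points with a similar colour.

    Colours are uint8 RGB triplets. Target colours occurring fewer than
    `min_count` times are removed first (the colour filter).
    """
    colours, counts = np.unique(target_rgb, axis=0, return_counts=True)
    kept = {tuple(c) for c, n in zip(colours, counts) if n >= min_count}
    keep_mask = np.array([tuple(c) in kept for c in target_rgb])
    target_idx = np.flatnonzero(keep_mask)
    target_kept = target_rgb[keep_mask].astype(int)

    # Candidate generation by colour similarity within `tol` per channel.
    candidates = []
    for c in source_rgb.astype(int):
        close = np.all(np.abs(target_kept - c) <= tol, axis=1)
        candidates.append(target_idx[close])
    return candidates

# Tiny synthetic example using a coarse colour palette so that colours repeat.
rng = np.random.default_rng(2)
target = (rng.integers(0, 4, size=(1000, 3)) * 64).astype(np.uint8)
source = (rng.integers(0, 4, size=(5, 3)) * 64).astype(np.uint8)
print([len(c) for c in colour_candidates(source, target)])
```

From candidate sets like these, the FIPP search then selects four point pairs from which the rigid transformation is estimated.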

28 pages, 4648 KiB  
Article
A Novel RGB-D SLAM Algorithm Based on Cloud Robotics
by Yanli Liu, Heng Zhang and Chao Huang
Sensors 2019, 19(23), 5288; https://doi.org/10.3390/s19235288 - 1 Dec 2019
Cited by 13 | Viewed by 4807
Abstract
In this paper, we present a novel red-green-blue-depth simultaneous localization and mapping (RGB-D SLAM) algorithm based on cloud robotics, which combines RGB-D SLAM with a cloud robot and offloads the back-end process of the RGB-D SLAM algorithm to the cloud. This paper analyzes the front-end and back-end parts of the original RGB-D SLAM algorithm and improves the algorithm in three aspects: feature extraction, point cloud registration, and pose optimization. Experiments show the superiority of the improved algorithm. In addition, taking advantage of cloud robotics, the RGB-D SLAM algorithm is combined with the cloud robot and the computationally intensive back-end part of the algorithm is offloaded to the cloud. Experimental validation is provided, comparing the cloud robotics-based RGB-D SLAM algorithm with the local RGB-D SLAM algorithm. The results of the experiments demonstrate the superiority of our framework. The combination of cloud robotics and RGB-D SLAM can not only improve the efficiency of SLAM but also reduce the robot's price and size.
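
The front-end/back-end split described above, with the computationally heavy back end offloaded to the cloud, can be sketched as a simple interface: the robot runs tracking and keyframe selection locally and ships compact keyframe payloads to a remote optimizer. The class and method names below are illustrative assumptions, not the paper's implementation.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class Keyframe:
    """Minimal payload the robot sends to the cloud back end."""
    frame_id: int
    pose: list          # 4x4 local pose estimate, row-major
    descriptors: list   # compact feature descriptors for loop-closure search

class LocalFrontEnd:
    """Runs on the robot: feature extraction, tracking, and keyframe selection."""
    def select_keyframe(self, frame_id, pose, descriptors):
        return Keyframe(frame_id, pose, descriptors)

class CloudBackEnd:
    """Runs in the cloud: stores keyframes and performs global pose optimization."""
    def __init__(self):
        self.keyframes = []

    def receive(self, payload):
        self.keyframes.append(json.loads(payload))

    def optimize(self):
        # Placeholder for loop closure and pose-graph optimization on the server.
        return len(self.keyframes)

front, back = LocalFrontEnd(), CloudBackEnd()
kf = front.select_keyframe(0, [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]], [0.1, 0.9])
back.receive(json.dumps(asdict(kf)))   # in practice this message crosses the network
print("keyframes held by the cloud back end:", back.optimize())
```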
