Intelligent Systems and Sensors for Robotics

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: closed (15 March 2021) | Viewed by 69345

Special Issue Editors


Prof. Paolo Gastaldo
Guest Editor
Electrical, Electronics and Telecommunication Engineering and Naval Architecture Department (DITEN), University of Genoa, 16145 Genova, Italy
Interests: machine learning; embedded systems; edge computing; deep learning for computer vision; machine learning for robotics and prosthetic limbs

Dr. Lin Wang
Guest Editor
School of Electronic Engineering and Computer Science, Queen Mary University of London, London E1 4NS, UK
Interests: signal processing; machine learning; robot perception

Special Issue Information

Dear Colleagues,

The effective performance of advanced robotic systems greatly depends on two components: a sensing system that can provide valuable and accurate information about the environment, and an intelligent processing system that can properly utilize such information to improve the ability of robots to handle ever more complex tasks.

Machine learning (ML) models provide an enabling technology for such intelligent processing systems. The capability of ML to learn an inference function from data is a key strength for developing robots that are expected to become autonomous and make real-time decisions. This capability in turn enhances the role of sensors in empowering robotics, from industrial robotic systems to humanoid robots.

Bringing ML to embedded systems is thus a requirement for building the next generation of robots. On the other hand, given the constraints imposed by robotics in terms of power consumption, latency, size, and cost, deploying an ML model on an embedded system poses major challenges. The main goal is to obtain efficient inference functions that can run on resource-constrained edge devices. Under such a paradigm, training might in principle be delegated to a different, more powerful platform. Nonetheless, a more demanding goal is to complete the training itself on resource-constrained devices.
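
As a minimal illustration of what "efficient inference on resource-constrained edge devices" can mean in practice, the sketch below applies post-training dynamic quantization to a toy PyTorch model; the model and layer choices are illustrative assumptions, not a system from this Special Issue.

```python
import torch
import torch.nn as nn

# Toy stand-in for a robot-perception head (illustrative assumption).
model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)
model.eval()

# Post-training dynamic quantization: weights of the listed layer types
# are stored in int8, shrinking the model and speeding up CPU inference
# on edge devices with no GPU.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

with torch.no_grad():
    print(quantized(torch.randn(1, 128)).shape)  # torch.Size([1, 10])
```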

This Special Issue focuses on machine learning-based models and methodologies for real-time decision making on advanced robotic systems. The aim is to collect the most recent advances in machine learning research for low-resource embedded systems. Accordingly, the Special Issue welcomes methods and ideas that emphasize the impact of embedded machine learning on robotic technologies.

The topics of interest for this special issue include, but are not limited to:

  • embedded machine learning
  • low-power inference engines
  • software/hardware techniques for machine learning
  • online learning on resource-constrained edge devices
  • power-efficient machine learning implementations on FPGAs
  • on-chip training of deep neural networks
  • high-performance, low-power computing for deep learning and computer vision
  • high-performance, low-power computing for deep learning-based audio and speech processing
  • intelligent sensors
  • machine learning for sensing and perception
  • machine learning for intelligent autonomous systems

Prof. Paolo Gastaldo
Dr. Lin Wang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • embedded machine learning
  • intelligent systems
  • robot sensing and perception
  • machine vision
  • autonomous robots
  • edge computing

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (11 papers)

Research

18 pages, 1522 KiB  
Article
T-RexNet—A Hardware-Aware Neural Network for Real-Time Detection of Small Moving Objects
by Alessio Canepa, Edoardo Ragusa, Rodolfo Zunino and Paolo Gastaldo
Sensors 2021, 21(4), 1252; https://doi.org/10.3390/s21041252 - 10 Feb 2021
Cited by 8 | Viewed by 3520
Abstract
This paper presents the T-RexNet approach to detecting small moving objects in videos with a deep neural network. T-RexNet combines the advantages of Single-Shot Detectors with a specific feature-extraction network, thus overcoming the known shortcomings of Single-Shot Detectors in detecting small objects. The deep convolutional neural network includes two parallel paths: the first path processes both the original picture, in gray-scale format, and differences between consecutive frames; the second path handles only differences between a set of three consecutive frames. Compared with generic object detectors, the method limits the depth of the convolutional network to make it less sensitive to high-level features and easier to train on small objects. The simple, hardware-efficient architecture attains its highest accuracy on videos with static framing. Deploying our architecture on the NVIDIA Jetson Nano edge device shows its suitability for embedded systems. To prove the effectiveness and general applicability of the approach, real-world tests assessed the method's performance in different scenarios, namely, aerial surveillance with the WPAFB 2009 dataset, civilian surveillance using the Chinese University of Hong Kong (CUHK) Square dataset, and fast tennis-ball tracking, involving a custom dataset. Experimental results prove that T-RexNet is a valid, general solution for detecting small moving objects, outperforming existing generic object-detection approaches in this task. The method also compares favourably with application-specific approaches in terms of the accuracy vs. speed trade-off.
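
A minimal sketch of the two-path input construction summarized above, assuming grayscale frames as NumPy arrays; the exact channel stacking used by T-RexNet may differ from this illustration.

```python
import numpy as np

def build_inputs(frames):
    """Build the two network inputs suggested by the abstract: path 1
    sees the current grayscale frame plus one frame difference, while
    path 2 sees only differences across three consecutive frames.
    `frames` is a sequence of at least three 2-D grayscale arrays;
    the stacking order is an assumption, not the published spec.
    """
    f0, f1, f2 = (f.astype(np.float32) for f in frames[-3:])
    path1 = np.stack([f2, f2 - f1], axis=0)       # (2, H, W)
    path2 = np.stack([f2 - f1, f1 - f0], axis=0)  # (2, H, W)
    return path1, path2
```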

17 pages, 11132 KiB  
Article
A Prosthetic Socket with Active Volume Compensation for Amputated Lower Limb
by Ji-Hyeon Seo, Hyuk-Jin Lee, Dong-Wook Seo, Dong-Kyu Lee, Oh-Won Kwon, Moon-Kyu Kwak and Kang-Ho Lee
Sensors 2021, 21(2), 407; https://doi.org/10.3390/s21020407 - 8 Jan 2021
Cited by 10 | Viewed by 9273
Abstract
Typically, the volume of a residual limb changes over time. This causes the prosthesis to fit poorly, leading to pain and skin disease. In this study, a prosthetic socket was developed to compensate for the volume change of the residual limb. Using an inflatable air bladder, the proposed socket monitors the pressure in the socket and keeps the pressure distribution uniform and constant while walking. The socket has three air bladders on the anterior and posterior tibia areas, a latching-type 3-way pneumatic valve, and a portable control device. In this paper, the mechanical properties of the air bladder were investigated, and an electromagnetic analysis was performed to design the pneumatic valve. The controller is based on a closed-loop hysteresis control algorithm, which keeps the pressure in the socket close to the initial set point over a long period of time. In experiments, the proposed prosthesis was tested on a gait simulator that imitates the human gait cycle. The active volume compensation of the socket was successfully verified during repetitive gait cycles using weight loads of 50, 70, and 90 kg and a residual limb model with a variety of volumes. It was confirmed that the pressure on the residual limb recovered to the initial state through the active control. The pressure inside the socket had a steady-state error of less than 0.75% even when the volume of the residual limb changed from −7% to +7%.
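
The closed-loop hysteresis control idea lends itself to a compact sketch; the units, set point, and band below are illustrative assumptions rather than the paper's parameters.

```python
def hysteresis_step(pressure_kpa, setpoint_kpa, band_kpa):
    """One step of a closed-loop hysteresis (bang-bang) controller for
    an air-bladder socket: inflate when pressure falls below the lower
    bound, vent when it exceeds the upper bound, and otherwise latch
    the 3-way valve so no energy is spent holding state."""
    if pressure_kpa < setpoint_kpa - band_kpa:
        return "inflate"
    if pressure_kpa > setpoint_kpa + band_kpa:
        return "vent"
    return "latch"

# Example: 40 kPa set point with a +/-2 kPa hysteresis band (assumed).
print(hysteresis_step(37.5, 40.0, 2.0))  # -> inflate
```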

18 pages, 3104 KiB  
Article
EDSSA: An Encoder-Decoder Semantic Segmentation Networks Accelerator on OpenCL-Based FPGA Platform
by Hongzhi Huang, Yakun Wu, Mengqi Yu, Xuesong Shi, Fei Qiao, Li Luo, Qi Wei and Xinjun Liu
Sensors 2020, 20(14), 3969; https://doi.org/10.3390/s20143969 - 17 Jul 2020
Cited by 8 | Viewed by 3872
Abstract
Visual semantic segmentation, represented by semantic segmentation networks, has been widely used in many fields, such as intelligent robots, security, and autonomous driving. However, these Convolutional Neural Network (CNN)-based networks place high demands on the computing resources and programmability of hardware platforms. For embedded platforms and terminal devices in particular, Graphics Processing Unit (GPU)-based computing platforms cannot meet these requirements in terms of size and power consumption. In contrast, Field Programmable Gate Array (FPGA)-based hardware systems not only offer flexible programmability and high embeddability, but can also meet lower power consumption requirements, which makes them an appropriate solution for semantic segmentation on terminal devices. In this paper, we demonstrate EDSSA—an Encoder-Decoder semantic segmentation network accelerator architecture that can be implemented with flexible parameter configurations and hardware resources on FPGA platforms that support Open Computing Language (OpenCL) development. We introduce the related technologies, architecture design, algorithm optimization, and hardware implementation, using the Encoder-Decoder semantic segmentation network SegNet as an example, and undertake a performance evaluation. On an Intel Arria-10 GX1150 platform, our work achieves a throughput higher than 432.8 GOP/s with a power consumption of about 20 W, a 1.2× improvement in energy-efficiency ratio compared to a high-performance GPU.
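
The reported energy-efficiency ratio can be sanity-checked with simple arithmetic; the GPU baseline below is back-derived from the stated 1.2× factor rather than quoted from the paper.

```python
fpga_gops, fpga_watts = 432.8, 20.0
fpga_eff = fpga_gops / fpga_watts   # ~21.6 GOP/s per watt
gpu_eff = fpga_eff / 1.2            # implied GPU baseline, ~18.0 GOP/s/W
print(f"FPGA: {fpga_eff:.1f} GOP/s/W, implied GPU: {gpu_eff:.1f} GOP/s/W")
```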

12 pages, 1571 KiB  
Article
Fast CNN Stereo Depth Estimation through Embedded GPU Devices
by Cristhian A. Aguilera, Cristhian Aguilera, Cristóbal A. Navarro and Angel D. Sappa
Sensors 2020, 20(11), 3249; https://doi.org/10.3390/s20113249 - 7 Jun 2020
Cited by 10 | Viewed by 5173
Abstract
Current CNN-based stereo depth estimation models can barely run under real-time constraints on embedded graphics processing unit (GPU) devices. Moreover, state-of-the-art evaluations usually do not consider model optimization techniques, so the actual potential of embedded GPU devices remains unknown. In this work, we evaluate two state-of-the-art models on three different embedded GPU devices, with and without optimization methods, presenting performance results that illustrate the actual capabilities of embedded GPU devices for stereo depth estimation. More importantly, based on our evaluation, we propose the use of a U-Net-like architecture for postprocessing the cost volume, instead of a typical sequence of 3D convolutions, drastically increasing the runtime speed of current models. In our experiments, we achieve real-time inference speeds, in the range of 5–32 ms, for 1216 × 368 input stereo images on the Jetson TX2, Jetson Xavier, and Jetson Nano embedded devices.
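
For context, the cost volume that the proposed U-Net-like head postprocesses has the generic correlation structure sketched below; the shapes and the correlation choice are assumptions, not the authors' exact implementation.

```python
import numpy as np

def cost_volume(left_feat, right_feat, max_disp):
    """Correlation cost volume from left/right feature maps of shape
    (C, H, W): for each candidate disparity d, correlate left features
    with right features shifted d pixels."""
    c, h, w = left_feat.shape
    vol = np.zeros((max_disp, h, w), dtype=np.float32)
    for d in range(max_disp):
        if d == 0:
            vol[d] = (left_feat * right_feat).mean(axis=0)
        else:
            vol[d, :, d:] = (left_feat[:, :, d:] *
                             right_feat[:, :, :-d]).mean(axis=0)
    return vol
```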

20 pages, 10972 KiB  
Article
SGC-VSLAM: A Semantic and Geometric Constraints VSLAM for Dynamic Indoor Environments
by Shiqiang Yang, Guohao Fan, Lele Bai, Cheng Zhao and Dexin Li
Sensors 2020, 20(8), 2432; https://doi.org/10.3390/s20082432 - 24 Apr 2020
Cited by 19 | Viewed by 4720
Abstract
As one of the core technologies for autonomous mobile robots, Visual Simultaneous Localization and Mapping (VSLAM) has been widely researched in recent years. However, most state-of-the-art VSLAM systems adopt a strong scene-rigidity assumption for analytical convenience, which limits their utility in real-world environments with independently moving dynamic objects. Hence, this paper presents a semantic and geometric constraints VSLAM (SGC-VSLAM), which is built on the RGB-D mode of ORB-SLAM2 with the addition of dynamic detection and static point cloud map construction modules. In detail, a novel improved quadtree-based method was adopted in SGC-VSLAM to enhance the performance of the feature extractor in ORB-SLAM (Oriented FAST and Rotated BRIEF-SLAM). Moreover, a new dynamic feature detection method called semantic and geometric constraints was proposed, which provides a robust and fast way to filter dynamic features. The semantic bounding boxes generated by YOLO v3 (You Only Look Once, v3) were used to calculate a more accurate fundamental matrix between adjacent frames, which was then used to filter out all truly dynamic features. Finally, a static point cloud was estimated using a new key frame selection strategy for map drawing. Experiments on the public TUM RGB-D (Red-Green-Blue Depth) dataset were conducted to evaluate the proposed approach. The evaluation revealed that SGC-VSLAM effectively improves the positioning accuracy of the ORB-SLAM2 system in high-dynamic scenarios and is also able to build a map of the static parts of the real environment, which has long-term application value for autonomous mobile robots.
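
The geometric half of the proposed filter can be sketched with OpenCV primitives: estimate a fundamental matrix between adjacent frames, then flag matches far from their epipolar lines as dynamic. The semantic masking step (YOLO v3 boxes) is omitted here, and the pixel threshold is an assumption.

```python
import cv2
import numpy as np

def flag_dynamic(pts_prev, pts_curr, thresh_px=1.0):
    """pts_prev/pts_curr: matched keypoints as (N, 2) float32 arrays,
    N >= 8. Returns a boolean mask, True where the point-to-epipolar-
    line distance exceeds the threshold (a likely dynamic feature)."""
    F, _ = cv2.findFundamentalMat(pts_prev, pts_curr, cv2.FM_RANSAC)
    ones = np.ones((len(pts_prev), 1), np.float32)
    p1 = np.hstack([pts_prev, ones])   # homogeneous coordinates
    p2 = np.hstack([pts_curr, ones])
    lines = (F @ p1.T).T               # epipolar lines in the new frame
    num = np.abs(np.sum(lines * p2, axis=1))
    den = np.sqrt(lines[:, 0] ** 2 + lines[:, 1] ** 2)
    return (num / den) > thresh_px
```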

28 pages, 2265 KiB  
Article
Proposal of Takagi–Sugeno Fuzzy-PI Controller Hardware
by Sérgio N. Silva, Felipe F. Lopes, Carlos Valderrama and Marcelo A. C. Fernandes
Sensors 2020, 20(7), 1996; https://doi.org/10.3390/s20071996 - 2 Apr 2020
Cited by 6 | Viewed by 3375
Abstract
This work proposes dedicated hardware for an intelligent control system on a Field Programmable Gate Array (FPGA). The intelligent system is a Takagi–Sugeno Fuzzy-PI controller. The implementation uses a fully parallel strategy associated with a hybrid bit-format scheme (fixed-point and floating-point). Two hardware designs are proposed: the first uses a single-clock-cycle processing architecture, and the other uses a pipeline scheme. The bit accuracy was tested by simulation with a nonlinear control system of a robotic manipulator. The area, throughput, and dynamic power consumption of the implemented hardware are used to validate and compare the results of this proposal. The achieved results allow the use of the proposed hardware in applications with high-throughput, low-power, and ultra-low-latency requirements, such as teleoperation of robot manipulators, the tactile internet, or Industry 4.0 automation, among others.
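
Independently of the hardware mapping, the controller's structure is easy to state in code: Gaussian memberships over the error fire local PI consequents, and the output is their normalized weighted sum. The rule count, membership shapes, and gains below are illustrative assumptions.

```python
import numpy as np

def ts_fuzzy_pi(error, ierror, centers, widths, gains):
    """One Takagi-Sugeno Fuzzy-PI step: rule i fires with Gaussian
    weight w_i(error) and proposes u_i = Kp_i*error + Ki_i*ierror;
    the control output is the weight-normalized sum of the u_i."""
    w = np.exp(-((error - centers) ** 2) / (2 * widths ** 2))
    u_local = gains[:, 0] * error + gains[:, 1] * ierror
    return float(np.sum(w * u_local) / (np.sum(w) + 1e-12))

# Three rules covering negative / zero / positive error (assumed).
centers = np.array([-1.0, 0.0, 1.0])
widths = np.array([0.5, 0.5, 0.5])
gains = np.array([[2.0, 0.5], [1.0, 0.2], [2.0, 0.5]])  # [Kp, Ki] rows
print(ts_fuzzy_pi(0.3, 0.05, centers, widths, gains))
```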

24 pages, 2914 KiB  
Article
Event-Based Feature Extraction Using Adaptive Selection Thresholds
by Saeed Afshar, Nicholas Ralph, Ying Xu, Jonathan Tapson, André van Schaik and Gregory Cohen
Sensors 2020, 20(6), 1600; https://doi.org/10.3390/s20061600 - 13 Mar 2020
Cited by 24 | Viewed by 6052
Abstract
Unsupervised feature extraction algorithms form one of the most important building blocks in machine learning systems. These algorithms are often adapted to the event-based domain to perform online learning in neuromorphic hardware. However, not having been designed for this purpose, such algorithms typically require significant simplification during implementation to meet hardware constraints, creating trade-offs with performance. Furthermore, conventional feature extraction algorithms are not designed to generate useful intermediary signals, which are valuable only in the context of neuromorphic hardware limitations. In this work, a novel event-based feature extraction method is proposed that addresses these issues. The algorithm operates via simple adaptive selection thresholds, which allow a simpler implementation of network homeostasis than previous works by trading off a small amount of information loss in the form of missed events that fall outside the selection thresholds. The behavior of the selection thresholds and the output of the network as a whole are shown to provide uniquely useful signals indicating network weight convergence without the need to access network weights. A novel heuristic method for network size selection is proposed that makes use of noise events and their feature representations. The use of selection thresholds is shown to produce network activation patterns that predict classification accuracy, allowing rapid evaluation and optimization of system parameters without the need to run back-end classifiers. The feature extraction method is tested on both the N-MNIST (Neuromorphic-MNIST) benchmarking dataset and a dataset of airplanes passing through the field of view. Multiple configurations with different classifiers are tested, with the results quantifying the performance gains at each processing stage.
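
In the spirit of the adaptive selection thresholds described above, a single update might look like the sketch below: an event patch is matched to its closest feature and learned only if the match clears that feature's threshold, which then adapts up on a hit and down on a miss. The constants and the cosine matching rule are assumptions, not the published algorithm.

```python
import numpy as np

def adaptive_threshold_step(patch, weights, thresholds,
                            eta=0.01, delta=0.05):
    """patch: flattened event patch (D,); weights: (K, D) feature
    bank; thresholds: (K,) selection thresholds. Both arrays are
    updated in place; returns the winning feature index."""
    sims = weights @ patch / (np.linalg.norm(weights, axis=1) *
                              np.linalg.norm(patch) + 1e-12)
    k = int(np.argmax(sims))
    if sims[k] >= thresholds[k]:       # hit: learn and raise the bar
        weights[k] += eta * (patch - weights[k])
        thresholds[k] += delta * (sims[k] - thresholds[k])
    else:                              # miss: make selection easier
        thresholds[k] -= delta * thresholds[k]
    return k

rng = np.random.default_rng(0)
W = rng.random((8, 25))                # 8 features over 5x5 patches
T = np.full(8, 0.5)                    # initial thresholds (assumed)
print(adaptive_threshold_step(rng.random(25), W, T))
```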

22 pages, 1706 KiB  
Article
High-Level Path Planning for an Autonomous Sailboat Robot Using Q-Learning
by Andouglas Gonçalves da Silva Junior, Davi Henrique dos Santos, Alvaro Pinto Fernandes de Negreiros, João Moreno Vilas Boas de Souza Silva and Luiz Marcos Garcia Gonçalves
Sensors 2020, 20(6), 1550; https://doi.org/10.3390/s20061550 - 11 Mar 2020
Cited by 45 | Viewed by 6273
Abstract
Path planning for sailboat robots is a challenging task, particularly due to the kinematics and dynamics modelling of such wind-propelled boats. The problem is divided into two layers. The first is global, where a general trajectory composed of waypoints is planned; this can be done automatically based on variables such as weather conditions, or defined by hand using a human–robot interface (a ground station). In the second, local layer, at execution time, the global route is followed by making the sailboat proceed between each pair of consecutive waypoints. Our proposal in this paper is an algorithm for the global path-generation layer, developed for the N-Boat (the Sailboat Robot project), that computes feasible sailing routes between a start and a target point while avoiding dangerous situations such as obstacles and borders. A reinforcement learning approach (Q-Learning) is used, based on a reward matrix and a set of actions that changes according to wind direction to account for the dead zone, the region against the wind in which the sailboat cannot gain velocity. Our algorithm generates straight and zigzag paths accounting for wind direction. The generated path also guarantees the sailboat's safety and robustness, enabling it to sail for long periods of time, depending only on the start and target points defined for this global planning. The result is a complete path planner that, together with the local planner solved in previous work, allows the final development of the N-Boat into a fully autonomous sailboat.
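
A toy version of the global layer conveys the idea: tabular Q-Learning on a grid where actions pointing into the dead zone are masked out, so upwind progress emerges as zigzag (tacking) paths. Grid size, rewards, and the dead-zone cutoff are assumptions, not the N-Boat parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10                                    # grid size (assumed)
ACTIONS = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
           if (dx, dy) != (0, 0)]         # 8-connected moves
Q = np.zeros((N, N, len(ACTIONS)))
WIND = np.array([0.0, -1.0])              # wind blowing southwards (assumed)
GOAL = (9, 9)

def feasible(a):
    """Reject courses too close to dead upwind (the dead zone):
    normalized heading/wind dot product below -0.8, an assumed cutoff
    standing in for the paper's wind-dependent action set."""
    v = np.array(ACTIONS[a], dtype=float)
    return float(v @ WIND) / np.linalg.norm(v) > -0.8

def step(s, a):
    x = min(max(s[0] + ACTIONS[a][0], 0), N - 1)
    y = min(max(s[1] + ACTIONS[a][1], 0), N - 1)
    return (x, y), (10.0 if (x, y) == GOAL else -1.0)

for _ in range(3000):                     # tabular Q-Learning episodes
    s = (0, 0)
    for _ in range(100):
        acts = [a for a in range(len(ACTIONS)) if feasible(a)]
        if rng.random() < 0.2:            # epsilon-greedy exploration
            a = int(rng.choice(acts))
        else:
            a = max(acts, key=lambda i: Q[s[0], s[1], i])
        s2, r = step(s, a)
        Q[s[0], s[1], a] += 0.5 * (r + 0.95 * Q[s2[0], s2[1]].max()
                                   - Q[s[0], s[1], a])
        s = s2
        if s == GOAL:
            break
```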

32 pages, 5055 KiB  
Article
Analysis and Improvements in AprilTag Based State Estimation
by Syed Muhammad Abbas, Salman Aslam, Karsten Berns and Abubakr Muhammad
Sensors 2019, 19(24), 5480; https://doi.org/10.3390/s19245480 - 12 Dec 2019
Cited by 37 | Viewed by 10949
Abstract
In this paper, we analyze in detail the accuracy and precision of AprilTag as a visual fiducial marker. We analyze error propagation along the two horizontal axes, together with the effect of angular rotation about the vertical axis. We identify the angular rotation of the camera (yaw angle) about its vertical axis as the primary source of error, decreasing precision to the point where the marker system is not viable for sub-decimeter precision tasks. Other factors are the distance and viewing angle of the camera relative to the AprilTag. Based on these observations, three improvements are proposed. The first is a trigonometric correction of the yaw angle to point the camera towards the center of the tag. The second is a custom-built yaw-axis gimbal, which tracks the center of the tag in real time. Third, we present for the first time a pose-indexed probabilistic sensor error model of the AprilTag, using Gaussian Process regression of experimental data, validated by particle filter tracking. Our proposed approach, which can be deployed with all three improvements, increases the system's overall accuracy and precision manyfold, with a slight trade-off in execution time over the commonly available AprilTag library. These improvements make AprilTag suitable for use in precision localization systems for outdoor and indoor applications.
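
The first improvement is simple enough to state directly: given the tag center in the camera frame, the corrective yaw is the bearing of the tag from the optical axis. The frame convention (x right, z forward) is an assumption.

```python
import math

def yaw_correction(t_cam_tag):
    """Yaw (radians) that points the camera's optical axis at the tag
    center, given the tag position (x, y, z) in the camera frame."""
    x, _, z = t_cam_tag
    return math.atan2(x, z)

print(math.degrees(yaw_correction((0.25, 0.0, 2.0))))  # ~7.1 degrees
```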

16 pages, 3333 KiB  
Article
Robust and Accurate Hand–Eye Calibration Method Based on Schur Matric Decomposition
by Jinbo Liu, Jinshui Wu and Xin Li
Sensors 2019, 19(20), 4490; https://doi.org/10.3390/s19204490 - 16 Oct 2019
Cited by 16 | Viewed by 4185
Abstract
To improve the accuracy and robustness of hand–eye calibration, a hand–eye calibration method based on Schur matrix decomposition is proposed in this paper. The accuracy of such methods strongly depends on the quality of the observation data, so preprocessing the observation data is essential. As with traditional two-step hand–eye calibration methods, we first solve for the rotation parameters, after which the translation vector can be immediately determined. A general solution was obtained from one observation through Schur matrix decomposition, reducing the degrees of freedom from three to two. Observation data preprocessing is one of the basic unresolved problems in hand–eye calibration. A discriminant equation for deleting outliers was deduced based on Schur matrix decomposition. Finally, the basic problem of observation data preprocessing was solved using outlier detection, which significantly improved robustness. The proposed method was validated by both simulations and experiments. The results show that the prediction errors of rotation and translation were 0.06 arcmin and 1.01 mm, respectively, and the proposed method performed much better in outlier detection. A minimal configuration for the unique solution was also proven from a new perspective.
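
The paper's Schur-based solver is not part of common libraries, but the AX = XB problem it addresses can be reproduced with synthetic poses and OpenCV's stock hand–eye solver as a baseline; the sketch below is that baseline under stated assumptions, not the authors' method.

```python
import cv2
import numpy as np

rng = np.random.default_rng(1)

def rand_se3():
    """Random rigid transform as a 4x4 homogeneous matrix."""
    q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    q *= np.sign(np.linalg.det(q))
    T = np.eye(4)
    T[:3, :3] = q
    T[:3, 3] = rng.normal(size=3)
    return T

X = rand_se3()       # ground-truth camera-to-gripper transform to recover
T_t2b = rand_se3()   # fixed (unknown) target pose in the robot base frame

R_g2b, t_g2b, R_t2c, t_t2c = [], [], [], []
for _ in range(10):  # noise-free synthetic robot stations
    T_g2b = rand_se3()
    T_t2c = np.linalg.inv(T_g2b @ X) @ T_t2b   # consistent AX = XB data
    R_g2b.append(T_g2b[:3, :3]); t_g2b.append(T_g2b[:3, 3:])
    R_t2c.append(T_t2c[:3, :3]); t_t2c.append(T_t2c[:3, 3:])

# Stock two-step solver (Tsai); the paper's Schur-based solver with
# outlier rejection would replace this call in a like-for-like test.
R_est, t_est = cv2.calibrateHandEye(R_g2b, t_g2b, R_t2c, t_t2c,
                                    method=cv2.CALIB_HAND_EYE_TSAI)
print(np.abs(R_est - X[:3, :3]).max())  # ~0 on noise-free data
```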

16 pages, 3276 KiB  
Article
Methods for Simultaneous Robot-World-Hand–Eye Calibration: A Comparative Study
by Ihtisham Ali, Olli Suominen, Atanas Gotchev and Emilio Ruiz Morales
Sensors 2019, 19(12), 2837; https://doi.org/10.3390/s19122837 - 25 Jun 2019
Cited by 58 | Viewed by 10583
Abstract
In this paper, we propose two novel methods for robot-world-hand–eye calibration and provide a comparative analysis against six state-of-the-art methods. We examine the calibration problem from two alternative geometrical interpretations, called 'hand–eye' and 'robot-world-hand–eye', respectively. The study analyses the effects of formulating the objective function as a pose-error or a reprojection-error minimization problem. We provide three real and three simulated datasets with rendered images as part of the study. In addition, we propose a robotic-arm error modeling approach to be used alongside the simulated datasets for generating realistic responses. The tests on simulated data are performed both in ideal cases and with pseudo-realistic robotic arm pose and visual noise. Our methods show significant improvement and robustness on many metrics in various scenarios compared to state-of-the-art methods.
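
For the 'robot-world-hand–eye' interpretation, OpenCV ships a joint solver that can serve as a reference point; a minimal synthetic sanity check might look like the following, where the data generation and default method choice are assumptions rather than the paper's experimental setup.

```python
import cv2
import numpy as np

rng = np.random.default_rng(2)

def rand_se3():
    """Random rigid transform as a 4x4 homogeneous matrix."""
    q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    q *= np.sign(np.linalg.det(q))
    T = np.eye(4)
    T[:3, :3] = q
    T[:3, 3] = rng.normal(size=3)
    return T

X = rand_se3()   # ground-truth base-to-world transform
Z = rand_se3()   # ground-truth gripper-to-camera transform

R_w2c, t_w2c, R_b2g, t_b2g = [], [], [], []
for _ in range(10):                       # noise-free synthetic stations
    T_b2g = rand_se3()
    T_w2c = Z @ T_b2g @ np.linalg.inv(X)  # chain: cTw = cTg * gTb * bTw
    R_w2c.append(T_w2c[:3, :3]); t_w2c.append(T_w2c[:3, 3:])
    R_b2g.append(T_b2g[:3, :3]); t_b2g.append(T_b2g[:3, 3:])

R_x, t_x, R_z, t_z = cv2.calibrateRobotWorldHandEye(
    R_w2c, t_w2c, R_b2g, t_b2g)           # default method (Shah)
print(np.abs(R_x - X[:3, :3]).max(),      # ~0 on noise-free data
      np.abs(R_z - Z[:3, :3]).max())
```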
