
Intelligent Autonomous System

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (31 March 2024) | Viewed by 6499

Special Issue Editors


Prof. Dr. Soon-Geul Lee
Guest Editor
Mechanical Engineering Department, Kyung Hee University, Yongin 17104, Republic of Korea
Interests: robot dynamics and control

Prof. Dr. Jinung An
Guest Editor
Interdisciplinary Studies, Graduate School, DGIST, Daegu 42988, Republic of Korea
Interests: human augmentation; human mobility interaction; automotive in-car UX; brain machine interface

Dr. Hyun-Joon Chung
Guest Editor
Intelligent Robotics R&D Division, KIRO, Pohang 37666, Republic of Korea
Interests: dynamics; optimization algorithms; artificial intelligence; robotics

Prof. Dr. Sukhan Lee
Guest Editor
Artificial Intelligence Department, Sungkyunkwan University, Suwon 16419, Republic of Korea
Interests: robotics; computer vision; artificial intelligence; MEMS/NEMS

Special Issue Information

Dear Colleagues,

Intelligent autonomous systems are increasingly applied in areas ranging from industrial settings to professional service and household domains. New technologies and application domains drive the need for further research and development, creating new challenges that must be overcome before intelligent autonomous systems can be deployed reliably and independently of the user. Recent advances in artificial intelligence, machine learning, and adaptive control are enabling autonomous systems with improved robustness and flexibility.

This Special Issue will include selected papers from the 18th International Conference on Intelligent Autonomous Systems (IAS-18), to be held in Suwon, Republic of Korea, 26–29 June 2023. The theme of the IAS-18 conference is “Impact and Effect of AI on Intelligent Autonomous Systems”.

The main topics of interest are:

  • Mobile robots;
  • Collaborative robots/cobots;
  • Household robots;
  • Long-term autonomous systems;
  • Humanoid robots;
  • Intelligent machines;
  • Climbing robots;
  • Outdoor and field robots;
  • Autonomous vehicles;
  • Healthcare robots;
  • Applied robots;
  • Flying robots;
  • On-water/underwater robots;
  • Robot swarms;
  • Biomimetic robots;
  • Robot vision;
  • Advanced obstacle avoidance;
  • Robot simulations;
  • Human–robot interaction;
  • Semantic modelling;
  • Intelligent systems proving grounds;
  • Augmented robotics;
  • Data fusion and machine learning;
  • Localization and SLAM;
  • Robots for Industry 4.0;
  • Robotic competitions;
  • Intelligent sensors and systems;
  • Cloud robotics;
  • Intelligent perception;
  • Mechatronics for intelligent systems.

Prof. Dr. Soon-Geul Lee
Prof. Dr. Jinung An
Dr. Hyun-Joon Chung
Prof. Dr. Sukhan Lee
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (5 papers)


Research

26 pages, 12535 KiB  
Article
Integration of Tracking, Re-Identification, and Gesture Recognition for Facilitating Human–Robot Interaction
by Sukhan Lee, Soojin Lee and Hyunwoo Park
Sensors 2024, 24(15), 4850; https://doi.org/10.3390/s24154850 - 25 Jul 2024
Viewed by 720
Abstract
For successful human–robot collaboration, it is crucial to establish and sustain quality interaction between humans and robots, making it essential to facilitate human–robot interaction (HRI) effectively. The evolution of robot intelligence now enables robots to take a proactive role in initiating and sustaining HRI, thereby allowing humans to concentrate more on their primary tasks. In this paper, we introduce a system known as the Robot-Facilitated Interaction System (RFIS), where mobile robots are employed to perform identification, tracking, re-identification, and gesture recognition in an integrated framework to ensure anytime readiness for HRI. We implemented the RFIS on an autonomous mobile robot used for transporting a patient, to demonstrate proactive, real-time, and user-friendly interaction with a caretaker involved in monitoring and nursing the patient. In the implementation, we focused on the efficient and robust integration of various interaction facilitation modules within a real-time HRI system that operates in an edge computing environment. Experimental results show that the RFIS, as a comprehensive system integrating caretaker recognition, tracking, re-identification, and gesture recognition, can provide an overall high quality of interaction in HRI facilitation with average accuracies exceeding 90% during real-time operations at 5 FPS.
(This article belongs to the Special Issue Intelligent Autonomous System)
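
The abstract above describes an integrated per-frame pipeline (detection, tracking, re-identification of the caretaker, and gesture recognition) running in real time at 5 FPS on edge hardware. The Python sketch below only illustrates that control flow; every class and interface in it is a hypothetical stand-in, not the authors' RFIS code.

```python
# Hypothetical sketch of a per-frame HRI facilitation loop in the spirit of RFIS.
# Detector, Tracker, ReIdentifier, and GestureRecognizer are stand-in stubs,
# not the modules used in the paper.
import time
from dataclasses import dataclass

@dataclass
class Track:
    track_id: int
    bbox: tuple  # (x, y, w, h) in image coordinates

class Detector:
    def detect(self, frame):                 # return person bounding boxes
        return [(0, 0, 50, 100)]

class Tracker:
    def update(self, detections):            # associate detections with persistent IDs
        return [Track(i, d) for i, d in enumerate(detections)]

class ReIdentifier:
    def is_caretaker(self, frame, track):    # match appearance against the enrolled caretaker
        return track.track_id == 0

class GestureRecognizer:
    def classify(self, frame, track):        # e.g. "come", "stop", or "none"
        return "none"

def run(frames, target_fps=5.0):
    detector, tracker, reid, gestures = Detector(), Tracker(), ReIdentifier(), GestureRecognizer()
    period = 1.0 / target_fps
    for frame in frames:
        start = time.time()
        tracks = tracker.update(detector.detect(frame))
        caretaker = next((t for t in tracks if reid.is_caretaker(frame, t)), None)
        if caretaker is not None:
            command = gestures.classify(frame, caretaker)
            if command != "none":
                print(f"caretaker {caretaker.track_id}: {command}")
        # keep to the real-time frame budget (5 FPS in the paper's experiments)
        time.sleep(max(0.0, period - (time.time() - start)))

run(frames=[None] * 3)
```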

20 pages, 17703 KiB  
Article
Development of an In-Pipe Inspection Robot for Large-Diameter Water Pipes
by Kwang-Woo Jeon, Eui-Jung Jung, Jong-Ho Bae, Sung-Ho Park, Jung-Jun Kim, Goobong Chung, Hyun-Joon Chung and Hak Yi
Sensors 2024, 24(11), 3470; https://doi.org/10.3390/s24113470 - 28 May 2024
Cited by 2 | Viewed by 1582
Abstract
This paper describes the development of an in-pipe inspection robot system designed for large-diameter water pipes. The robot is equipped with a Magnetic Flux Leakage (MFL) sensor module. The robot system is intended for pipes with diameters ranging from 900 mm to 1200 mm. The structure of the in-pipe inspection robot consists of the front and rear driving parts, with the inspection module located centrally. The robot is powered by 22 motors, including eight wheels with motors positioned at both the bottom and the top for propulsion. To ensure that the robot’s center aligns with that of the pipeline during operation, lifting units have been incorporated. The robot is equipped with cameras and LiDAR sensors at the front and rear to monitor the internal environment of the pipeline. Pipeline inspection is conducted using the MFL inspection modules, and the robot’s driving mechanism is designed to execute spiral maneuvers while maintaining contact with the pipeline surface during rotation. The in-pipe inspection robot is configured with wireless communication modules and batteries, allowing for wireless operation. Following its development, the inspection robot underwent driving experiments in actual pipelines to validate its performance. The field test bed used for these experiments is approximately 1 km in length. Results from the driving experiments on the field test bed confirmed the robot’s ability to navigate various curvatures and obstacles within the pipeline. It is posited that the use of the developed in-pipe inspection robot can reduce economic costs and enhance the safety of inspectors when examining aging pipes.
(This article belongs to the Special Issue Intelligent Autonomous System)

16 pages, 3590 KiB  
Article
Mitigating Trunk Compensatory Movements in Post-Stroke Survivors through Visual Feedback during Robotic-Assisted Arm Reaching Exercises
by Seong-Hoon Lee and Won-Kyung Song
Sensors 2024, 24(11), 3331; https://doi.org/10.3390/s24113331 - 23 May 2024
Viewed by 977
Abstract
Trunk compensatory movements frequently manifest during robotic-assisted arm reaching exercises for upper limb rehabilitation following a stroke, potentially impeding functional recovery. These aberrant movements are prevalent among stroke survivors and can hinder their progress in rehabilitation, making it crucial to address this issue. This study evaluated the efficacy of visual feedback, facilitated by an RGB-D camera, in reducing trunk compensation. In total, 17 able-bodied individuals and 18 stroke survivors performed reaching tasks under unrestricted trunk conditions and visual feedback conditions. In the visual feedback modalities, the target position was synchronized with trunk movement at ratios where the target moved at the same speed, double, and triple the trunk’s motion speed, providing real-time feedback to the participants. Notably, trunk compensatory movements were significantly diminished when the target moved at the same speed and double the trunk’s motion speed. Furthermore, these conditions exhibited an increase in the task completion time and perceived exertion among stroke survivors. This outcome suggests that visual feedback effectively heightened the task difficulty, thereby discouraging unnecessary trunk motion. The findings underscore the pivotal role of customized visual feedback in correcting aberrant upper limb movements among stroke survivors, potentially contributing to the advancement of robotic-assisted rehabilitation strategies. These insights advocate for the integration of visual feedback into rehabilitation exercises, highlighting its potential to foster more effective recovery pathways for post-stroke individuals by minimizing undesired compensatory motions.
(This article belongs to the Special Issue Intelligent Autonomous System)
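
The key mechanism above is coupling the on-screen reaching target to trunk motion measured by the RGB-D camera, with the target moving at the same speed, double, or triple the trunk's speed. The sketch below is a minimal illustration of such a coupling; the variable names, coordinate convention, and the assumption that trunk displacement pushes the target further away are ours, not the authors' implementation.

```python
# Hypothetical sketch: couple the reaching target to trunk displacement measured by an RGB-D camera.
# A gain of 1.0, 2.0, or 3.0 corresponds to the target moving at 1x, 2x, or 3x the trunk's motion.
import numpy as np

def feedback_target(base_target: np.ndarray,
                    trunk_pos: np.ndarray,
                    trunk_ref: np.ndarray,
                    gain: float) -> np.ndarray:
    """Return the displayed target position given the current trunk position.

    base_target: nominal target when the trunk stays at its reference pose
    trunk_pos:   current trunk position (e.g., a sternum keypoint from the RGB-D camera)
    trunk_ref:   trunk position recorded at the start of the trial
    gain:        how strongly trunk compensation displaces the target (1.0, 2.0, 3.0)
    """
    trunk_displacement = trunk_pos - trunk_ref
    return base_target + gain * trunk_displacement

# Example: 3 cm of forward trunk lean pushes the target 6 cm further away at gain 2.0,
# so compensating with the trunk makes the reach harder rather than easier.
print(feedback_target(np.array([0.40, 0.0, 0.0]),
                      np.array([0.03, 0.0, 0.0]),
                      np.array([0.00, 0.0, 0.0]),
                      gain=2.0))
```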

31 pages, 3927 KiB  
Article
TimeTector: A Twin-Branch Approach for Unsupervised Anomaly Detection in Livestock Sensor Noisy Data (TT-TBAD)
by Junaid Khan Kakar, Shahid Hussain, Sang Cheol Kim and Hyongsuk Kim
Sensors 2024, 24(8), 2453; https://doi.org/10.3390/s24082453 - 11 Apr 2024
Cited by 2 | Viewed by 1509
Abstract
Unsupervised anomaly detection in multivariate time series sensor data is a complex task with diverse applications in domains such as livestock farming and agriculture (LF&A), the Internet of Things (IoT), and human activity recognition (HAR). Advanced machine learning techniques are necessary to detect anomalies in multi-sensor time series data. The primary focus of this research is to develop state-of-the-art machine learning methods for detecting anomalies in multi-sensor data. Time series sensors frequently produce multi-sensor data with anomalies, which makes it difficult to establish standard patterns that can capture spatial and temporal correlations. Our approach enables the accurate identification of normal, abnormal, and noisy patterns, minimizing the risk that mixed noisy data encountered during training misleads the model into incorrect conclusions. To address these challenges, we propose a novel approach called the “TimeTector-Twin-Branch Shared LSTM Autoencoder”, which incorporates several Multi-Head Attention mechanisms. Additionally, our system incorporates the Twin-Branch method, which facilitates the simultaneous execution of multiple tasks, such as data reconstruction and prediction-error estimation, allowing for efficient multi-task learning. We also compare the proposed model to several benchmark anomaly detection models on our dataset; the results show lower reconstruction error (MSE, MAE, and RMSE) and higher accuracy scores (precision, recall, and F1) than the baseline models, demonstrating that our approach outperforms these existing models.
(This article belongs to the Special Issue Intelligent Autonomous System)
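
As a rough sketch of the named architecture (a shared LSTM encoder feeding twin reconstruction and prediction branches, with multi-head attention), the PyTorch code below shows one plausible arrangement. Layer sizes, the single attention layer, and the weighted anomaly score are assumptions made for illustration, not the TimeTector reference implementation.

```python
# Hypothetical sketch of a twin-branch shared-LSTM autoencoder with multi-head attention,
# loosely following the architecture named in the abstract.
import torch
import torch.nn as nn

class TwinBranchAE(nn.Module):
    def __init__(self, n_sensors: int, hidden: int = 64, heads: int = 4):
        super().__init__()
        self.encoder = nn.LSTM(n_sensors, hidden, batch_first=True)       # shared encoder
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.recon_head = nn.Linear(hidden, n_sensors)   # branch 1: reconstruct the window
        self.pred_head = nn.Linear(hidden, n_sensors)    # branch 2: predict the next step

    def forward(self, x):                       # x: (batch, time, n_sensors)
        h, _ = self.encoder(x)
        h, _ = self.attn(h, h, h)
        recon = self.recon_head(h)              # (batch, time, n_sensors)
        pred_next = self.pred_head(h[:, -1])    # (batch, n_sensors)
        return recon, pred_next

def anomaly_score(model, window, next_step, w_recon=0.5, w_pred=0.5):
    # Combine reconstruction and prediction error into a single anomaly score.
    recon, pred_next = model(window)
    e_recon = torch.mean((recon - window) ** 2)
    e_pred = torch.mean((pred_next - next_step) ** 2)
    return w_recon * e_recon + w_pred * e_pred

model = TwinBranchAE(n_sensors=6)
x = torch.randn(8, 32, 6)                       # 8 windows, 32 time steps, 6 sensors
print(anomaly_score(model, x, torch.randn(8, 6)).item())
```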

15 pages, 1203 KiB  
Article
The Effects of Speed and Delays on Test-Time Performance of End-to-End Self-Driving
by Ardi Tampuu, Kristjan Roosild and Ilmar Uduste
Sensors 2024, 24(6), 1963; https://doi.org/10.3390/s24061963 - 19 Mar 2024
Viewed by 983
Abstract
This study investigates the effects of speed variations and computational delays on the performance of end-to-end autonomous driving systems (ADS). Utilizing 1:10 scale mini-cars with limited computational resources, we demonstrate that different driving speeds significantly alter the task of the driving model, challenging the generalization capabilities of systems trained at a singular speed profile. Our findings reveal that models trained to drive at high speeds struggle with slower speeds and vice versa. Consequently, testing an ADS at an inappropriate speed can lead to misjudgments about its competence. Additionally, we explore the impact of computational delays, common in real-world deployments, on driving performance. We present a novel approach to counteract the effects of delays by adjusting the target labels in the training data, demonstrating improved resilience in models to handle computational delays effectively. This method, crucially, addresses the effects of delays rather than their causes and complements traditional delay minimization strategies. These insights are valuable for developing robust autonomous driving systems capable of adapting to varying speeds and delays in real-world scenarios.
(This article belongs to the Special Issue Intelligent Autonomous System)
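
The delay-compensation idea above (adjusting target labels in the training data so the model outputs the command that will be appropriate once the delayed actuation takes effect) can be sketched as a simple label shift. The frame rate, delay value, and function below are illustrative assumptions rather than the authors' exact procedure.

```python
# Hypothetical sketch: shift steering labels forward by the expected end-to-end delay so the
# model learns to predict the command that will be correct when it actually reaches the actuators.
import numpy as np

def shift_labels_for_delay(frames: np.ndarray,
                           steering: np.ndarray,
                           delay_s: float,
                           frame_rate_hz: float):
    """Pair each camera frame with the steering command recorded `delay_s` later."""
    shift = int(round(delay_s * frame_rate_hz))
    if shift == 0:
        return frames, steering
    # drop the last `shift` frames, which have no future label to pair with
    return frames[:-shift], steering[shift:]

# Example: at 30 Hz, a 100 ms end-to-end delay corresponds to a 3-frame label shift.
frames = np.zeros((300, 64, 64, 3))
steering = np.linspace(-1.0, 1.0, 300)
f, s = shift_labels_for_delay(frames, steering, delay_s=0.1, frame_rate_hz=30.0)
print(f.shape[0], s.shape[0])   # 297 297
```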
