
Advancements in Intelligent Systems: The Confluence of AI, Machine Learning, and Robotics

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Robotics and Automation".

Deadline for manuscript submissions: 15 April 2025 | Viewed by 3552


Special Issue Information

Dear Colleagues,

The intersection of artificial intelligence (AI), machine learning (ML), and robotics is crafting a new frontier in the realm of intelligent systems. These advancements are not just reshaping our technological landscape but are also redefining the possibilities within a myriad of sectors, including healthcare, manufacturing, autonomous transportation, and beyond. The confluence of these fields is spawning systems that can learn, adapt, and interact in ways that were previously the reserve of science fiction.

In this Special Issue, titled "Advancements in Intelligent Systems: The Confluence of AI, Machine Learning, and Robotics," we aim to showcase the synergy between AI, ML, and robotics, exploring how their integration is propelling the development of intelligent systems that are more autonomous, efficient, and capable of tackling complex challenges. From robotics enhancing AI's physical interaction capabilities to ML algorithms optimizing robotic functions and AI augmenting the decision-making prowess of robotic systems, the potential for innovation is boundless.

We invite contributions that delve into the latest research, methodologies, and applications at the nexus of these dynamic fields. Whether it's robotics powered by AI and ML, ML algorithms specifically designed for robotic perception and decision-making, or AI frameworks that elevate robotic autonomy and adaptability, we are interested in cutting-edge research that pushes the boundaries of what intelligent systems can achieve.

Suitable topics include, but are not limited to:

  • AI-driven robotics for autonomous navigation and decision-making.
  • ML algorithms for robotic perception, learning, and environmental interaction.
  • Integration of AI and ML in robotic system design and optimization.
  • Case studies and real-world applications of AI and ML in robotics.
  • Interdisciplinary approaches that blend AI, ML, and robotics for novel applications.
  • Theoretical and practical challenges in the convergence of AI, ML, and robotics.

We look forward to your contributions, which we believe will inspire a broader understanding and stimulate further advancements in the field of intelligent systems. Your research can pave the way for the next generation of intelligent machines, capable of performing tasks with unprecedented precision, autonomy, and adaptability.

Please feel free to contact us if you have any questions or need further information regarding your submission.

Dr. J. Ernesto Solanes
Prof. Dr. Luis Gracia
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • machine learning
  • robotics
  • autonomous systems
  • intelligent systems
  • decision making
  • perception algorithms
  • system optimization
  • human–robot interaction
  • environmental adaptation

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (3 papers)


Research


17 pages, 1880 KiB  
Article
Polish Speech and Text Emotion Recognition in a Multimodal Emotion Analysis System
by Kamil Skowroński, Adam Gałuszka and Eryka Probierz
Appl. Sci. 2024, 14(22), 10284; https://doi.org/10.3390/app142210284 - 8 Nov 2024
Viewed by 444
Abstract
Emotion recognition by social robots is a serious challenge because sometimes people also do not cope with it. It is important to use information about emotions from all possible sources: facial expression, speech, or reactions occurring in the body. Therefore, a multimodal emotion recognition system was introduced, which includes the indicated sources of information and deep learning algorithms for emotion recognition. An important part of this system includes the speech analysis module, which was decided to be divided into two tracks: speech and text. An additional condition is the target language of communication, Polish, for which the number of datasets and methods is very limited. The work shows that emotion recognition using a single source—text or speech—can lead to low accuracy of the recognized emotion. It was therefore decided to compare English and Polish datasets and the latest deep learning methods in speech emotion recognition using Mel spectrograms. The most accurate LSTM models were evaluated on the English set and the Polish nEMO set, demonstrating high efficiency of emotion recognition in the case of Polish data. The conducted research is a key element in the development of a decision-making algorithm for several emotion recognition modules in a multimodal system. Full article
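The speech track described in this abstract uses Mel spectrograms as input features for LSTM classifiers. As a rough, numpy-only illustration of that front end (this is not the authors' implementation; the sample rate, window size, hop length, and filterbank size are placeholder assumptions), a log-Mel spectrogram can be sketched as:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_spectrogram(signal, sr=16000, n_fft=512, hop=256, n_mels=40):
    """Log-Mel spectrogram of a 1-D waveform (numpy-only sketch)."""
    # Frame the signal and apply a Hann window.
    n_frames = 1 + (len(signal) - n_fft) // hop
    window = np.hanning(n_fft)
    frames = np.stack([signal[i * hop : i * hop + n_fft] * window
                       for i in range(n_frames)])
    # Power spectrum of each frame.
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    # Triangular filterbank with centers equally spaced on the Mel scale.
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fb[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fb[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    return np.log(power @ fb.T + 1e-10)  # shape: (n_frames, n_mels)

# One second of a 440 Hz tone as a stand-in for a speech recording.
sr = 16000
t = np.arange(sr) / sr
spec = mel_spectrogram(np.sin(2 * np.pi * 440 * t), sr=sr)
print(spec.shape)  # (61, 40): 61 frames, 40 Mel bands
```

In a pipeline like the one the abstract describes, such frame sequences would feed an LSTM emotion classifier; in practice a library such as librosa provides an equivalent, better-tested front end.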

17 pages, 1568 KiB  
Article
New Functionality for Moodle E-Learning Platform: Files Communication by Chat Window
by Vasile Baneș, Cristian Ravariu and Avireni Srinivasulu
Appl. Sci. 2024, 14(18), 8569; https://doi.org/10.3390/app14188569 - 23 Sep 2024
Viewed by 573
Abstract
Moodle allows communication between students through the chat window, where text messages and emoticons can be sent. A study of 45 students identified the method they prefer for sending attachments, i.e., the one they find most effective and easiest to use. The challenges motivating this implementation were the absence of any way to transmit files within the Moodle chat and the need to introduce this new method, which benefits users in the communication process. When users request a feature that does not yet exist, such as sending files through the chat window, the IT administrator can provide it by implementing a plugin and importing it into the Moodle platform settings. By writing the necessary parameters, arguments, and command lines in the developed plugin, a new way to send files was created. This paper presents a solution that adds the ability to transmit files through the chat window, supporting extensions such as .pdf, .zip, .docx, .jpg, .xls, and .mp4, among other types; files of various sizes can be sent at any time, with no limit on the number of uploads per transmission.

Review


39 pages, 9734 KiB  
Review
A Survey of Robot Intelligence with Large Language Models
by Hyeongyo Jeong, Haechan Lee, Changwon Kim and Sungtae Shin
Appl. Sci. 2024, 14(19), 8868; https://doi.org/10.3390/app14198868 - 2 Oct 2024
Cited by 1 | Viewed by 2000
Abstract
Since the emergence of ChatGPT, research on large language models (LLMs) has progressed actively across many fields. LLMs, pre-trained on vast text datasets, have exhibited exceptional abilities in natural language understanding and task planning, which makes them promising for robotics. Traditional supervised-learning-based robot intelligence systems generally lack adaptability to dynamically changing environments, whereas LLMs help a robot intelligence system improve its generalization ability in dynamic and complex real-world settings. Indeed, findings from ongoing robotics studies indicate that LLMs can significantly improve robots' behavior planning and execution capabilities. Additionally, vision-language models (VLMs), trained on extensive visual and linguistic data for the visual question answering (VQA) problem, excel at integrating computer vision with natural language processing: they can comprehend visual contexts, execute actions through natural language, and describe scenes in natural language. Several studies have explored enhancing robot intelligence with multimodal data, including object recognition and description by VLMs and the execution of language-driven commands integrated with visual information. This review thoroughly investigates how foundation models such as LLMs and VLMs have been employed to boost robot intelligence. For clarity, the research areas are categorized into five topics: reward design in reinforcement learning, low-level control, high-level planning, manipulation, and scene understanding. The review also summarizes studies showing how foundation models have improved robot intelligence, such as Eureka for automating reward function design in reinforcement learning, RT-2 for integrating visual data, language, and robot actions in a vision-language-action model, and AutoRT for generating feasible tasks and executing robot behavior policies via LLMs.
