

Artificial Intelligence in Internet of Things: Challenges and Opportunities

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (30 November 2022) | Viewed by 38048

Special Issue Editor


Dr. Corrado Santoro
Guest Editor
Department of Mathematics and Informatics, University of Catania, Viale Andrea Doria, 6, 95125 Catania CT, Italy
Interests: multi-agent systems; distributed artificial intelligence; autonomous mobile robots; autonomous flying robots

Special Issue Information

Dear Colleagues,

The Internet of Things is growing fast and is expected to become a key technology of our near future. At the same time, the availability of high-performance computers has led to a rebirth of neural techniques, turning machine learning and neural networks into technologies that can genuinely be used in day-to-day life. In addition, the market today offers a large number of microcontrollers (MCUs) and embedded platforms featuring high performance, making them able to host “intelligence” in a very small space and with low power consumption.

Given these premises, this Special Issue aims to gather contributions on recent research dealing with the challenges and opportunities of using artificial intelligence techniques in the Internet of Things world. Topics include, but are not limited to:

  • Distributed artificial intelligence systems for the IoT
  • Platforms, middleware, and programming languages for intelligent IoT systems
  • Multi-Agent systems in the context of IoT
  • Learning and neural network techniques for embedded platforms or MCUs
  • Logic-based AI for embedded platforms or MCUs
  • Power consumption minimization in the context of AI techniques for embedded platforms or MCUs
  • Low power network protocols for intelligent IoT systems

Dr. Corrado Santoro
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • distributed AI
  • multi-agent system
  • AI in MCUs
  • IoT network protocols

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (12 papers)


Research

17 pages, 2194 KiB  
Article
The Concept of Interactive Dynamic Intelligent Virtual Sensors (IDIVS): Bridging the Gap between Sensors, Services, and Users through Machine Learning
by Jan A. Persson, Joseph Bugeja, Paul Davidsson, Johan Holmberg, Victor R. Kebande, Radu-Casian Mihailescu, Arezoo Sarkheyli-Hägele and Agnes Tegen
Appl. Sci. 2023, 13(11), 6516; https://doi.org/10.3390/app13116516 - 26 May 2023
Cited by 1 | Viewed by 1237
Abstract
This paper concerns the novel concept of an Interactive Dynamic Intelligent Virtual Sensor (IDIVS), which extends virtual/soft sensors towards making use of user input through interactive learning (IML) and transfer learning. In research, many studies can be found on using machine learning in this domain, but not much on using IML. This paper contributes by highlighting how this can be done and the associated positive potential effects and challenges. An IDIVS provides a sensor-like output and achieves the output through the data fusion of sensor values or from the output values of other IDIVSs. We focus on settings where people are present in different roles: from basic service users in the environment being sensed to interactive service users supporting the learning of the IDIVS, as well as configurators of the IDIVS and explicit IDIVS teachers. The IDIVS aims at managing situations where sensors may disappear and reappear and be of heterogeneous types. We refer to and recap the major findings from related experiments and validation in complementing work. Further, we point at several application areas: smart building, smart mobility, smart learning, and smart health. The information properties and capabilities needed in the IDIVS, with extensions towards information security, are introduced and discussed. Full article
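For readers who want a concrete picture of the sensor-fusion behaviour sketched above, the following minimal Python example (not the authors' IDIVS implementation; all names are hypothetical) fuses whatever physical readings are currently available and tolerates sensors disappearing and reappearing:

```python
from statistics import fmean
from typing import Callable, Dict, Optional

class VirtualSensor:
    """Toy virtual/soft sensor: fuses the readings of whichever physical
    sensors are currently reachable (illustrative sketch, not the IDIVS code)."""

    def __init__(self) -> None:
        # name -> callable returning a reading, or None if the sensor is offline
        self.sources: Dict[str, Callable[[], Optional[float]]] = {}

    def register(self, name: str, read: Callable[[], Optional[float]]) -> None:
        self.sources[name] = read

    def value(self) -> Optional[float]:
        # Keep only the sources that answered; sensors may disappear and reappear freely.
        readings = [r for r in (read() for read in self.sources.values()) if r is not None]
        return fmean(readings) if readings else None

# Usage: two temperature sensors, one of which is temporarily offline.
room_temp = VirtualSensor()
room_temp.register("wall_sensor", lambda: 21.4)
room_temp.register("window_sensor", lambda: None)   # offline right now
print(room_temp.value())  # -> 21.4
```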

19 pages, 5127 KiB  
Article
Development of an IoT-Based Precision Irrigation System for Tomato Production from Indoor Seedling Germination to Outdoor Field Production
by Mohammad Hussain Seyar and Tofael Ahamed
Appl. Sci. 2023, 13(9), 5556; https://doi.org/10.3390/app13095556 - 29 Apr 2023
Cited by 10 | Viewed by 3850
Abstract
Proper irrigation management, especially for tomatoes that are sensitive to water, is the key to ensuring sustainable tomato production. Using a low-cost sensor coupled with IoT technology could help to achieve precise control of the moisture content in the plant root-zone soil and apply water on demand with minimum human intervention. An IoT-based precision irrigation system was developed for growing Momotaro tomato seedlings inside a dark chamber. Four irrigation thresholds, 5%, 8%, 12%, and 15%, and two irrigation systems, surface and subsurface drip irrigation, were compared to assess which threshold and irrigation system yielded the ideal tomato seedling growth. As a result, the 12% soil moisture threshold applied through the subsurface drip irrigation system significantly (p < 0.05) increased tomato seedling growth in soil composed of a main blend of peat moss, vermiculite, and perlite. Furthermore, in two repeated experiments, a subsurface drip irrigation system with 0.86 distribution uniformity used 10% less water than the surface drip irrigation system. The produced tomato seedlings were transplanted to open fields for further assessment. A system based on the low-power wide-area Long Range Wide Area Network (LoRaWAN) protocol was developed with remote monitoring and control capability for irrigation management. Two irrigation systems, surface and subsurface drip irrigation, were used to compare which system resulted in higher tomato yields. The results showed that the subsurface drip irrigation system with 0.74 distribution uniformity produced 1243 g/plant, while each plant produced 1061 g in the surface drip irrigation system treatment. The results also indicated that the LoRaWAN-based subsurface drip irrigation system was suitable under outdoor conditions, with easy operation and robust controlling capability, for tomato production. Full article
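To illustrate the on-demand watering logic behind such a threshold-based controller, here is a simplified Python sketch; it is not the system described in the paper, and the simulated plot merely stands in for the real moisture probe and valve hardware:

```python
import random

MOISTURE_THRESHOLD = 12.0   # % volumetric water content (the best-performing threshold above)
PULSE_GAIN = 1.5            # moisture added by one short drip pulse (simulated)

class SimulatedPlot:
    """Stand-in for the real probe/valve hardware so the sketch runs anywhere."""
    def __init__(self, moisture: float = 14.0) -> None:
        self.moisture = moisture
    def read_soil_moisture(self) -> float:
        self.moisture -= random.uniform(0.1, 0.4)   # evapotranspiration drift
        return self.moisture
    def drip_pulse(self) -> None:
        self.moisture += PULSE_GAIN                  # one sub-surface drip pulse

def control_step(plot: SimulatedPlot) -> bool:
    """One controller tick: irrigate only when the root zone drops below the set point."""
    if plot.read_soil_moisture() < MOISTURE_THRESHOLD:
        plot.drip_pulse()
        return True
    return False

plot = SimulatedPlot()
watered = sum(control_step(plot) for _ in range(100))
print(f"pulses applied over 100 ticks: {watered}")
```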

19 pages, 680 KiB  
Article
Cooperative Task Execution for Object Detection in Edge Computing: An Internet of Things Application
by Petros Amanatidis, Dimitris Karampatzakis, George Iosifidis, Thomas Lagkas and Alexandros Nikitas
Appl. Sci. 2023, 13(8), 4982; https://doi.org/10.3390/app13084982 - 15 Apr 2023
Cited by 6 | Viewed by 2415
Abstract
The development of computer hardware and communications has brought with it many exciting applications in the Internet of Things. More and more Single Board Computers (SBC) with high performance and low power consumption are used to infer deep learning models at the edge of the network. In this article, we investigate a cooperative task execution system in an edge computing architecture. In our topology, the edge server offloads different workloads to end devices, which collaboratively execute object detection on the transmitted sets of images. Our proposed system attempts to provide optimization in terms of execution accuracy and execution time for inferencing deep learning models. Furthermore, we focus on implementing new policies to optimize the E2E execution time and the execution accuracy of the system by highlighting the key role of effective image compression and the batch sizes (splitting decisions) received by the end devices from a server at the network edge. In our testbed, we used the You Only Look Once (YOLO) version 5, which is one of the most popular object detectors. In our heterogeneous testbed, an edge server and three different end devices were used with different characteristics like CPU/TPU, different sizes of RAM, and different neural network input sizes to identify sharp trade-offs. Firstly, we implemented the YOLOv5 on our end devices to evaluate the performance of the model using metrics like Precision, Recall, and mAP on the COCO dataset. Finally, we explore optimal trade-offs for different task-splitting strategies and compression decisions to optimize total performance. We demonstrate that offloading workloads on multiple end devices based on different splitting decisions and compression values improves the system’s performance to respond in real-time conditions without needing a server or cloud resources. Full article
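As a rough illustration of the splitting decision (not the authors' optimisation policy), the sketch below assigns each end device a share of an image batch proportional to an assumed throughput estimate, with a per-device JPEG quality standing in for the compression decision; the device names and numbers are hypothetical:

```python
from typing import Dict

def split_batch(n_images: int, throughput_fps: Dict[str, float]) -> Dict[str, int]:
    """Divide a batch across end devices in proportion to their estimated
    inference throughput (a simple proportional policy, not the paper's)."""
    total = sum(throughput_fps.values())
    shares = {dev: int(n_images * fps / total) for dev, fps in throughput_fps.items()}
    # Hand any leftover images (from integer rounding) to the fastest device.
    leftover = n_images - sum(shares.values())
    shares[max(throughput_fps, key=throughput_fps.get)] += leftover
    return shares

# Hypothetical heterogeneous testbed: one TPU-accelerated board, two CPU-only boards.
devices = {"coral_tpu": 25.0, "rpi4_a": 6.0, "rpi4_b": 5.0}
jpeg_quality = {"coral_tpu": 90, "rpi4_a": 70, "rpi4_b": 70}   # per-device compression choice

print(split_batch(120, devices))   # -> {'coral_tpu': 84, 'rpi4_a': 20, 'rpi4_b': 16}
```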

11 pages, 1537 KiB  
Article
Personalized Context-Aware Authentication Protocols in IoT
by Radosław Bułat and Marek R. Ogiela
Appl. Sci. 2023, 13(7), 4216; https://doi.org/10.3390/app13074216 - 27 Mar 2023
Cited by 2 | Viewed by 1661
Abstract
The IoT is a specific type of network with its own communication challenges. There are a multitude of low-power devices monitoring the environment. Thus, the need for authentication may be addressed by many available sensors but should be performed on the fly and use the personal characteristics of the device’s owner. Thus, a review and a study of the available authentication methods were performed for use in such a context, and as a result, a preliminary algorithm was proposed as a solution. The algorithm utilizes a variety of independent factors, including the user’s personal characteristics, knowledge, the context in which the authentication is taking place, and the use of steganography, to authenticate users in the dispersed environment. This algorithm encodes all of these factors into a single data vector, which is then used to verify the user’s identity or as a digital signature. Using this personalized context-aware protocol, it is possible to increase the reliability of authentication, given the emphasis on usability in low-computing-power but highly sensor-infused environments and devices. Although more testing is needed to optimize it as an industry solution, personalized protocols seem to have a future in the IoT world. Full article
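To make the idea of encoding several independent factors into a single verification vector concrete, here is a purely illustrative sketch (not the protocol proposed in the paper): each factor is hashed together with its label, the digests are concatenated into one data vector, and verification compares that vector against the enrolled one in constant time:

```python
import hashlib
import hmac

def factor_digest(label: str, value: str) -> bytes:
    """Hash one authentication factor together with its label."""
    return hashlib.sha256(f"{label}:{value}".encode()).digest()

def build_vector(factors: dict) -> bytes:
    """Concatenate per-factor digests into a single data vector
    (an illustrative stand-in for the paper's encoding, not the real protocol)."""
    return b"".join(factor_digest(k, v) for k, v in sorted(factors.items()))

enrolled = build_vector({
    "knowledge": "correct horse battery staple",   # something the user knows
    "context": "office-wifi/ssid=lab42",           # where the request comes from
    "personal_trait": "gait-template-v1",          # personal characteristic (placeholder)
})

attempt = build_vector({
    "knowledge": "correct horse battery staple",
    "context": "office-wifi/ssid=lab42",
    "personal_trait": "gait-template-v1",
})

# Constant-time comparison avoids leaking where the vectors diverge.
print("authenticated:", hmac.compare_digest(enrolled, attempt))
```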

14 pages, 6287 KiB  
Article
Analysis and Visualization of Production Bottlenecks as Part of a Digital Twin in Industrial IoT
by Benjamin Arff, Julian Haasis, Jochen Thomas, Christopher Bonenberger, Wolfram Höpken and Ralf Stetter
Appl. Sci. 2023, 13(6), 3525; https://doi.org/10.3390/app13063525 - 9 Mar 2023
Cited by 6 | Viewed by 2081
Abstract
In the area of the industrial Internet of Things (IIoT), digital twins (DTs) are a powerful means for process improvement. In this paper, the concept of a DT is explained, and analysis possibilities throughout the life-cycle of a product and its production system are explored. The main part of this paper is focused on an approach to the analysis of manufacturing layouts and their parameters. The approach, which is based on a state-of-the-art bottleneck detection method, allows an intelligent representation of the temporal process characteristics. The presented method is widely applicable to any type of manufacturing layout and time-span. The use of elementary heuristics leads to traceable results that can be used for further analysis or optimization. The results of this analysis method can be integrated into a DT and combined with machine learning and explainable artificial intelligence (XAI). The concept for a self-learning DT is explained, and implementation possibilities are elucidated. Full article
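One widely used heuristic in this family of bottleneck detection methods takes the machine with the longest average uninterrupted active period as the momentary bottleneck; the sketch below illustrates that generic idea on a toy state log and is not the specific method implemented in the paper:

```python
from statistics import fmean

def active_periods(states):
    """Lengths of uninterrupted runs of the 'active' state in a machine's state log."""
    runs, current = [], 0
    for s in states:
        if s == "active":
            current += 1
        elif current:
            runs.append(current)
            current = 0
    if current:
        runs.append(current)
    return runs

def bottleneck(machine_logs):
    """Active-period heuristic: the machine with the longest average
    uninterrupted active period is taken as the bottleneck."""
    return max(machine_logs, key=lambda m: fmean(active_periods(machine_logs[m]) or [0]))

# Hypothetical shift log sampled once per minute.
logs = {
    "press":    ["active"] * 50 + ["starved"] * 10,
    "welder":   (["active"] * 5 + ["blocked"] * 1) * 10,
    "assembly": (["active"] * 3 + ["starved"] * 3) * 10,
}
print(bottleneck(logs))   # -> 'press'
```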

21 pages, 12603 KiB  
Article
Deep Learning-Based Graffiti Detection: A Study Using Images from the Streets of Lisbon
by Joana Fogaça, Tomás Brandão and João C. Ferreira
Appl. Sci. 2023, 13(4), 2249; https://doi.org/10.3390/app13042249 - 9 Feb 2023
Cited by 1 | Viewed by 4283
Abstract
This research work stems from a real problem posed by the Lisbon City Council, which was interested in developing a system that automatically detects, in real time, illegal graffiti present throughout the city of Lisbon using cars equipped with cameras. This system would allow a more efficient and faster identification and clean-up of the illegal graffiti constantly being produced, with a georeferenced position. We also contribute a city graffiti database to be shared with the scientific community. Images were provided and collected from different sources and included illegal graffiti, images with graffiti considered street art, and images without graffiti. A pipeline was then developed that first classifies the image with one of the following labels: illegal graffiti, street art, or no graffiti. Then, if it is illegal graffiti, another model detects the coordinates of the graffiti in the image. Pre-processing, data augmentation, and transfer learning techniques were used to train the models. Regarding the classification model, an overall accuracy of 81.4% and F1-scores of 86%, 81%, and 66% were obtained for the classes of street art, illegal graffiti, and images without graffiti, respectively. As for the graffiti detection model, an Intersection over Union (IoU) of 70.3% was obtained on the test set. Full article
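The transfer-learning step for the three-class classifier can be pictured with the PyTorch sketch below; ResNet-18 is only a stand-in backbone chosen for illustration (the paper's exact architecture and training schedule are not restated here), and the random tensors merely replace the Lisbon images for a smoke test:

```python
import torch
import torch.nn as nn
from torchvision import models

# Three classes in the pipeline's first stage.
CLASSES = ["illegal_graffiti", "street_art", "no_graffiti"]

# Start from an ImageNet-pretrained backbone and replace only the classification head,
# so a limited graffiti dataset fine-tunes a small number of parameters.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                                   # freeze the feature extractor
model.fc = nn.Linear(model.fc.in_features, len(CLASSES))      # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One fine-tuning step on a batch of (3, 224, 224) images."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test with random data in place of the real images.
print(train_step(torch.randn(4, 3, 224, 224), torch.randint(0, 3, (4,))))
```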

35 pages, 3839 KiB  
Article
Machine Intelligence and Autonomous Robotic Technologies in the Corporate Context of SMEs: Deep Learning and Virtual Simulation Algorithms, Cyber-Physical Production Networks, and Industry 4.0-Based Manufacturing Systems
by Marek Nagy, George Lăzăroiu and Katarina Valaskova
Appl. Sci. 2023, 13(3), 1681; https://doi.org/10.3390/app13031681 - 28 Jan 2023
Cited by 76 | Viewed by 6645
Abstract
This study examines Industry 4.0-based technologies, focusing on the barriers to their implementation in European small- and medium-sized enterprises (SMEs). The purpose of this research was to determine the most significant obstacles that prevent SMEs from implementing smart manufacturing, as well as to identify the most important components of such an operationalization and to evaluate whether only large businesses have access to technological opportunities given the financial complexities of such an adoption. The study is premised on the notion that, in the setting of cyber-physical production systems, the gap between massive corporations and SMEs may result in significant disadvantages for the latter, leading to their market exclusion by the former. The research aim was achieved by secondary data analysis, where previously gathered data were assessed and analyzed. The need to investigate this topic originates from the fact that SMEs require more research than large corporations, which are typically the focus of mainstream debates. The findings validated Industry 4.0's critical role in smart process planning provided by deep learning and virtual simulation algorithms, especially for industrial production. The research also discussed the connection options for SMEs as a means of enhancing business efficiency through machine intelligence and autonomous robotic technologies. The interaction between Industry 4.0 and the economic management of organizations is viewed in this study as a possible source of significant added value. Full article

21 pages, 1519 KiB  
Article
Low Rate DDoS Detection Using Weighted Federated Learning in SDN Control Plane in IoT Network
by Muhammad Nadeem Ali, Muhammad Imran, Muhammad Salah ud din and Byung-Seo Kim
Appl. Sci. 2023, 13(3), 1431; https://doi.org/10.3390/app13031431 - 21 Jan 2023
Cited by 35 | Viewed by 3391
Abstract
The Internet of Things (IoT) has opened new dimensions of novel services and computing power for modern living standards by introducing innovative and smart solutions. Due to the extensive usage of these services, the IoT has spanned numerous devices and communication entities, which makes the management of the network a complex challenge. Hence, the management of IoT networks urgently needs to be redefined. Software-defined networking (SDN), with its intrinsic programmability and centralization, simplifies network management, facilitates network abstraction, eases network evolution, and has the potential to manage the IoT network. SDN's centralized control plane promotes efficient network resource management by separating the control and data planes and providing a global picture of the underlying network topology. Apart from these inherent benefits, the centralized SDN architecture also brings serious security threats, such as spoofing, sniffing, brute force, API exploitation, and denial of service, and requires significant attention to guarantee a secured network. Among these security threats, Distributed Denial of Service (DDoS) and its variant, Low-Rate DDoS (LR-DDoS), are among the most challenging, as the fraudulent user generates malicious traffic at a low rate that is extremely difficult to detect and defend against. Machine Learning (ML), especially Federated Learning (FL), has shown remarkable success in detecting and defending against such attacks. In this paper, we adopted Weighted Federated Learning (WFL) to detect Low-Rate DDoS (LR-DDoS) attacks. Extensive MATLAB experimentation and evaluation revealed that the proposed work improves LR-DDoS detection accuracy compared with individual Artificial Neural Network (ANN) training algorithms, existing packet-analysis-based approaches, and machine learning approaches. Full article
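The aggregation step that distinguishes weighted federated learning from plain federated averaging can be sketched in a few lines of NumPy; the per-client weights here are based on local sample counts, a common choice that is not necessarily the weighting scheme used in the paper:

```python
import numpy as np

def weighted_fedavg(client_params, weights):
    """Server-side aggregation: weighted average of the clients' parameter vectors."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()                          # normalise the client weights
    stacked = np.stack(client_params)     # shape: (n_clients, n_params)
    return (w[:, None] * stacked).sum(axis=0)

# Three controller domains acting as FL clients (hypothetical example).
client_params = [np.array([0.2, 1.0]), np.array([0.4, 0.8]), np.array([0.1, 1.2])]
samples_seen = [5000, 2000, 1000]         # weight clients by local traffic volume

global_params = weighted_fedavg(client_params, samples_seen)
print(global_params)   # pulled towards the first client, which saw the most flows
```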

16 pages, 557 KiB  
Article
Mobility Prediction of Mobile Wireless Nodes
by Shatha Abbas, Mohammed J. F. Alenazi and Amani Samha
Appl. Sci. 2022, 12(24), 13041; https://doi.org/10.3390/app122413041 - 19 Dec 2022
Cited by 1 | Viewed by 2112
Abstract
Artificial intelligence (AI) is a fundamental part of improving information technology systems. Essential AI techniques have revolutionized communication technology, such as mobility models and machine learning classification. Mobility models use a virtual testing methodology to evaluate new or updated products at a reasonable cost. Classifiers can be used with these models to achieve acceptable predictive accuracy. In this study, we analyzed the behavior of machine learning classification algorithms—more specifically decision tree (DT), logistic regression (LR), k-nearest neighbors (K-NN), latent Dirichlet allocation (LDA), Gaussian naive Bayes (GNB), and support vector machine (SVM)—when using different mobility models, such as random walk, random direction, Gauss–Markov, and recurrent self-similar Gauss–Markov (RSSGM). Subsequently, classifiers were applied in order to detect the most efficient mobility model over wireless nodes. Random mobility models (i.e., random direction and random walk) provided fluctuating accuracy values when machine learning classifiers were applied—resulting values ranged from 39% to 81%. The Gauss–Markov and RSSGM models achieved good prediction accuracy in scenarios using a different number of access points in a defined area. Gauss–Markov reached 89% with the LDA classifier, whereas RSSGM showed the greatest accuracy with all classifiers and through various samples (i.e., 2000, 5000, and 10,000 steps during the whole experiment). Finally, the decision tree classifier obtained better overall results, achieving 98% predictive accuracy for 5000 steps. Full article
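For readers who want to reproduce this kind of comparison, the sketch below generates speed traces from a Gauss–Markov model (each new speed is a weighted blend of the previous speed, the mean speed, and Gaussian noise) and from a memoryless random walk, then fits a scikit-learn decision tree to tell the two models apart; it illustrates the experimental ingredients rather than the paper's exact setup:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def gauss_markov(steps, alpha=0.75, mean_v=1.0):
    """Gauss-Markov speed trace: v_t = a*v_{t-1} + (1-a)*v_bar + sqrt(1-a^2)*noise."""
    v = np.empty(steps)
    v[0] = mean_v
    for t in range(1, steps):
        v[t] = alpha * v[t - 1] + (1 - alpha) * mean_v + np.sqrt(1 - alpha**2) * rng.normal()
    return v

def random_walk(steps):
    """Memoryless speed trace: each step is drawn independently."""
    return rng.normal(loc=1.0, scale=1.0, size=steps)

# Windows of 20 speed samples, labelled by the mobility model that produced them.
X = np.array([gauss_markov(20) for _ in range(500)] + [random_walk(20) for _ in range(500)])
y = np.array([0] * 500 + [1] * 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
print(f"decision-tree accuracy: {clf.score(X_te, y_te):.2f}")
```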

17 pages, 6236 KiB  
Article
AI Approaches in Computer-Aided Diagnosis and Recognition of Neoplastic Changes in MRI Brain Images
by Jakub Kluk and Marek R. Ogiela
Appl. Sci. 2022, 12(23), 11880; https://doi.org/10.3390/app122311880 - 22 Nov 2022
Cited by 2 | Viewed by 1773
Abstract
Advanced diagnosis systems provide doctors with an abundance of high-quality data, which allows for diagnosing dangerous diseases, such as brain cancers. Unfortunately, humans flooded with such plentiful information might overlook tumor symptoms. Hence, diagnostic devices are becoming more commonly combined with software systems that enhance the decision-making process. This work addresses the design of a neural-network-based system that allows for automatic brain tumor diagnosis from MRI images and points out important areas. The application intends to speed up the diagnosis and lower the risk of overlooking a neoplastic lesion. The study, based on two types of neural networks, Convolutional Neural Networks and Vision Transformers, aimed to assess the capabilities of the innovative ViT and its possible future evolution compared with well-known CNNs. The research reveals a tumor recognition rate as high as 90% with both architectures, while the Vision Transformer turned out to be easier to train and provided more detailed decision reasoning. The results show that computer-aided diagnosis and ViTs might be a significant part of modern medicine development in IoT and healthcare systems. Full article
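To show how the two architecture families line up behind a common interface, the sketch below gives both an ImageNet-pretrained CNN and a Vision Transformer a fresh two-class head; it is only an illustration of the comparison setting, not the models trained in the study:

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 2   # tumour / no tumour (binary setting for illustration)

def cnn_baseline() -> nn.Module:
    """ImageNet-pretrained CNN with a new two-class head."""
    m = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    m.fc = nn.Linear(m.fc.in_features, NUM_CLASSES)
    return m

def vit_baseline() -> nn.Module:
    """ImageNet-pretrained Vision Transformer with a new two-class head."""
    m = models.vit_b_16(weights=models.ViT_B_16_Weights.DEFAULT)
    m.heads.head = nn.Linear(m.heads.head.in_features, NUM_CLASSES)
    return m

# Both models accept the same (3, 224, 224) inputs, so MRI slices can be fed to
# either architecture after identical preprocessing.
x = torch.randn(1, 3, 224, 224)
for name, build in [("CNN", cnn_baseline), ("ViT", vit_baseline)]:
    logits = build()(x)
    print(name, logits.shape)   # torch.Size([1, 2])
```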

23 pages, 6326 KiB  
Article
Forensic Analysis of Blackhole Attack in Wireless Sensor Networks/Internet of Things
by Ahmad Hasan, Muazzam A. Khan, Balawal Shabir, Arslan Munir, Asad Waqar Malik, Zahid Anwar and Jawad Ahmad
Appl. Sci. 2022, 12(22), 11442; https://doi.org/10.3390/app122211442 - 11 Nov 2022
Cited by 9 | Viewed by 2617
Abstract
The Internet of Things (IoT) is prone to various types of denial of service (DoS) attacks due to the resource-constrained nature of its devices. Extensive research efforts have been dedicated to securing these systems, but various vulnerabilities remain. Notably, it is challenging to maintain the confidentiality, integrity, and availability of mobile ad hoc networks due to limited connectivity and dynamic topology. As critical infrastructure, including smart grids, industrial control, and intelligent transportation systems, is reliant on WSNs and the IoT, research efforts that forensically investigate and analyze cybercrimes in IoT and WSNs are imperative. When a security failure occurs, the causes, vulnerabilities, and facts behind the failure need to be revealed and examined to improve the security of these systems. This research forensically investigates the performance of ad hoc IoT networks using the ad hoc on-demand distance vector (AODV) routing protocol under the blackhole attack, which is a type of denial of service attack detrimental to IoT networks. This work also examines the traffic patterns in the network and nodes to assess the attack damage and conducts vulnerability analysis of the protocol to carry out digital forensic (DF) investigations. It further reconstructs the networks under different modes and parameters to verify the analysis and provide suggestions for designing robust routing protocols. Full article
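One coarse forensic signal for spotting a blackhole node in trace data is the per-node ratio of forwarded to received data packets; the toy sketch below illustrates that kind of post-hoc trace analysis and is not the simulation-based methodology used in the paper:

```python
from collections import Counter

# Toy trace: (node, event) pairs; a well-behaved relay forwards most of what it receives.
trace = [
    ("n1", "recv"), ("n1", "fwd"), ("n1", "recv"), ("n1", "fwd"),
    ("n2", "recv"), ("n2", "recv"), ("n2", "recv"), ("n2", "recv"),  # drops everything
    ("n3", "recv"), ("n3", "fwd"),
]

def forwarding_ratio(trace):
    """Forwarded/received packet ratio per node, a coarse blackhole indicator."""
    recv, fwd = Counter(), Counter()
    for node, event in trace:
        (fwd if event == "fwd" else recv)[node] += 1
    return {n: fwd[n] / recv[n] for n in recv}

ratios = forwarding_ratio(trace)
suspects = [n for n, r in ratios.items() if r < 0.2]
print(ratios, "suspected blackhole nodes:", suspects)   # -> ['n2']
```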

22 pages, 1710 KiB  
Article
Secure Smart Communication Efficiency in Federated Learning: Achievements and Challenges
by Seyedamin Pouriyeh, Osama Shahid, Reza M. Parizi, Quan Z. Sheng, Gautam Srivastava, Liang Zhao and Mohammad Nasajpour
Appl. Sci. 2022, 12(18), 8980; https://doi.org/10.3390/app12188980 - 7 Sep 2022
Cited by 20 | Viewed by 3107
Abstract
Federated learning (FL) is known to perform machine learning tasks in a distributed manner. Over the years, it has become an emerging technology, especially with various data protection and privacy policies being imposed. FL allows machine learning tasks to be performed while adhering to these policies. As with the emergence of any new technology, there will be challenges and benefits. One challenge in FL is communication cost: as FL takes place in a distributed environment where devices connected over the network have to constantly share their updates, a communication bottleneck can arise. This paper presents the state of the art on the communication constraints of FL while maintaining the secure and smart properties that federated learning is known for. Overall, current challenges and possible methods for enhancing the FL models' efficiency, with a focus on communication, are discussed. This paper aims to bridge the gap left by existing review papers by focusing solely on communication aspects in FL environments. Full article
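A representative technique from this literature for cutting update traffic is gradient sparsification, in which each client uploads only its largest-magnitude parameter changes; the NumPy sketch below shows top-k sparsification in isolation, as one illustrative method rather than a summary of the survey:

```python
import numpy as np

def top_k_sparsify(update, k):
    """Keep only the k largest-magnitude entries of a model update.
    Returns (indices, values) -- the only payload the client has to upload."""
    idx = np.argpartition(np.abs(update), -k)[-k:]
    return idx, update[idx]

def densify(indices, values, size):
    """Server side: rebuild a dense update from the sparse payload."""
    dense = np.zeros(size)
    dense[indices] = values
    return dense

rng = np.random.default_rng(0)
update = rng.normal(size=10_000)            # a client's local model update
idx, vals = top_k_sparsify(update, k=100)   # transmit ~1% of the entries

ratio = (idx.nbytes + vals.nbytes) / update.nbytes
print(f"payload is {ratio:.1%} of the dense update")
```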
