Featured Papers in the Section Internet of Things

A special issue of Future Internet (ISSN 1999-5903). This special issue belongs to the section "Internet of Things".

Deadline for manuscript submissions: closed (15 September 2024) | Viewed by 24444

Special Issue Editor

Dr. Ping Wang
Department of Electrical Engineering & Computer Science, Lassonde School of Engineering, York University, 4700 Keele St., LAS 2016, Toronto, ON M3J 1P3, Canada
Interests: wireless communications and networking; mobile cloud computing; Internet of Things; network protocol design and modeling; performance analysis and optimization

Special Issue Information

Dear Colleagues,

The Internet of Things (IoT) is an exciting research area due to the innovative advances that connected devices and systems bring to society. This trend fits well with the growing relevance of the broad, systemic, and interdisciplinary approach of the future internet.

Therefore, Future Internet is pleased to announce this new, topical Special Issue, in which we will publish contributed and invited papers on selected foundational and emerging topics related to the Internet of Things (IoT).

This Special Issue particularly solicits conceptual, theoretical, and experimental contributions discussing challenges, the state of the art, and solutions to a set of currently unresolved key questions, including, but not limited to, the following:

  • AI-empowered IoT;
  • Cloud computing;
  • Internet of Things;
  • Wireless communications and networking;
  • Mobile cloud computing;
  • Network protocol design and modeling;
  • Performance analysis and optimization;
  • Industry 4.0;
  • Blockchain for cloud computing and IoT;
  • Network protocols, network design, and architectures;
  • Complex networks;
  • Ad hoc and sensor networks;
  • Software-defined networks.

Dr. Ping Wang
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Future Internet is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (10 papers)

Research

31 pages, 4049 KiB  
Article
A Novel Deep Learning Framework for Intrusion Detection Systems in Wireless Network
by Khoa Dinh Nguyen Dang, Peppino Fazio and Miroslav Voznak
Future Internet 2024, 16(8), 264; https://doi.org/10.3390/fi16080264 - 25 Jul 2024
Viewed by 1087
Abstract
In modern network security setups, Intrusion Detection Systems (IDS) are crucial elements that play a key role in protecting against unauthorized access, malicious actions, and policy breaches. Despite significant progress in IDS technology, two major obstacles remain: how to avoid false alarms due to imbalanced data, and how to accurately forecast the precise type of attack before it happens in order to minimize the damage caused. To address both problems, we propose a two-task regression and classification strategy called Hybrid Regression–Classification (HRC), a deep learning-based strategy for developing an intrusion detection system that minimizes the false alarm rate and detects and predicts potential cyber-attacks before they occur, helping current wireless networks deal with attacks more efficiently and precisely. The experimental results show that our HRC strategy accurately predicts the incoming behavior of the IP data traffic in two different datasets. This can help the IDS detect potential attacks sooner and with high accuracy, leaving enough reaction time to deal with the attack. Furthermore, our proposed strategy can also deal with imbalanced data, even when the imbalance between categories is large, which significantly reduces the false alarm rate of the IDS in practice. These strengths combined make the IDS more active in defense and help it address the intrusion detection problem more effectively. Full article
(This article belongs to the Special Issue Featured Papers in the Section Internet of Things)
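
To make the two-task idea above concrete, the following minimal PyTorch sketch pairs a shared encoder with a regression head and a classification head and combines their losses; the layer sizes, loss weighting, and dummy data are illustrative assumptions, not the architecture reported in the paper.

import torch
import torch.nn as nn

# Shared encoder with two task-specific heads: one regresses a future traffic
# statistic, the other classifies the attack category.
class HybridRegressionClassification(nn.Module):
    def __init__(self, n_features: int, n_classes: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
        )
        self.regressor = nn.Linear(64, 1)
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        h = self.encoder(x)
        return self.regressor(h), self.classifier(h)

model = HybridRegressionClassification(n_features=40, n_classes=5)
x = torch.randn(32, 40)                         # dummy batch of flow features
y_reg, y_cls = model(x)
reg_loss = nn.MSELoss()(y_reg.squeeze(1), torch.randn(32))
# Class weights (or resampling) are one common way to counter imbalanced
# attack categories in the classification task.
cls_loss = nn.CrossEntropyLoss()(y_cls, torch.randint(0, 5, (32,)))
loss = reg_loss + cls_loss                      # joint two-task objective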

46 pages, 691 KiB  
Article
Deterministic K-Identification for Future Communication Networks: The Binary Symmetric Channel Results
by Mohammad Javad Salariseddigh, Ons Dabbabi, Christian Deppe and Holger Boche
Future Internet 2024, 16(3), 78; https://doi.org/10.3390/fi16030078 - 26 Feb 2024
Viewed by 1526
Abstract
Numerous applications of the Internet of Things (IoT) feature an event recognition behavior where the established Shannon capacity is not the appropriate central performance measure. Instead, the identification capacity for such systems is considered an alternative metric and has been developed in the literature. In this paper, we develop deterministic K-identification (DKI) for the binary symmetric channel (BSC), with and without a Hamming weight constraint imposed on the codewords. This channel may be of use for IoT in the context of smart system technologies, where sophisticated communication models can be reduced to a BSC for the purpose of studying basic information-theoretic properties. We derive inner and outer bounds on the DKI capacity of the BSC when the size of the goal message set K may grow in the codeword length n. As a major observation, we find that, for deterministic encoding, if K grows exponentially in n, i.e., K = 2^(nκ), where κ is the identification goal rate, then the number of messages that can be accurately identified also grows exponentially in n, i.e., as 2^(nR), where R is the DKI coding rate. Furthermore, the established inner and outer bound regions reflect the impact of the input constraint (Hamming weight) and the channel statistics, i.e., the crossover probability. Full article
(This article belongs to the Special Issue Featured Papers in the Section Internet of Things)
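
As a compact notational recap of the scaling behaviour stated in the abstract (M(n), R_inner, and R_outer are shorthand introduced here for illustration; the explicit bound expressions are derived in the paper itself):

% kappa: identification goal rate, R: DKI coding rate, p: crossover probability
\begin{align*}
  K &= 2^{n\kappa} && \text{(size of the goal message set)}\\
  M(n) &= 2^{nR} && \text{(number of reliably identifiable messages)}\\
  R_{\mathrm{inner}}(p,\ \text{weight constraint}) &\le C_{\mathrm{DKI}} \le R_{\mathrm{outer}}(p,\ \text{weight constraint})
\end{align*}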

24 pages, 1502 KiB  
Article
Enhancing Energy Efficiency in IoT-NDN via Parameter Optimization
by Dennis Papenfuß, Bennet Gerlach, Stefan Fischer and Mohamed Ahmed Hail
Future Internet 2024, 16(2), 61; https://doi.org/10.3390/fi16020061 - 16 Feb 2024
Cited by 1 | Viewed by 1614
Abstract
The IoT encompasses objects, sensors, and everyday items not typically considered computers. IoT devices are subject to severe energy, memory, and computation power constraints. Employing NDN for the IoT is a recent approach to accommodate these issues. To gain deeper insight into how different network parameters affect energy consumption, this work analyzes a range of parameters using hyperparameter optimization. The experiments from this work’s ndnSIM-based hyperparameter setup indicate that the data packet size has the most significant impact on energy consumption, followed by the caching scheme, caching strategy, and finally, the forwarding strategy. The energy footprints of these parameters differ by orders of magnitude. Surprisingly, the packet request sequence influences the caching parameters’ energy footprint more than the graph size and topology. Regarding energy consumption, the results indicate that data compression may be more relevant than expected, and caching may be more significant than the forwarding strategy. The framework for ndnSIM developed in this work can be used to simulate NDN networks more efficiently. Furthermore, the work presents a valuable basis for further research on the effect of specific parameter combinations not examined before. Full article
(This article belongs to the Special Issue Featured Papers in the Section Internet of Things)
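
The sketch below illustrates the kind of parameter sweep described above: iterate over candidate values of the studied parameters and rank configurations by the energy the simulator reports. The run_ndnsim_experiment wrapper and the candidate values are hypothetical placeholders, not an actual ndnSIM API.

from itertools import product

PARAM_GRID = {
    "payload_size_bytes":  [256, 512, 1024],
    "caching_scheme":      ["LCE", "LCD", "probabilistic"],
    "caching_strategy":    ["LRU", "LFU", "FIFO"],
    "forwarding_strategy": ["best-route", "multicast"],
}

def run_ndnsim_experiment(**params) -> float:
    """Placeholder: run one ndnSIM scenario configured with `params` and
    return the total energy consumption it reports (e.g., in joules)."""
    raise NotImplementedError

def sweep(grid):
    keys = list(grid)
    results = []
    for values in product(*(grid[k] for k in keys)):
        params = dict(zip(keys, values))
        results.append((params, run_ndnsim_experiment(**params)))
    return sorted(results, key=lambda r: r[1])  # lowest energy footprint first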

20 pages, 945 KiB  
Article
Transforming Educational Institutions: Harnessing the Power of Internet of Things, Cloud, and Fog Computing
by Afzal Badshah, Ghani Ur Rehman, Haleem Farman, Anwar Ghani, Shahid Sultan, Muhammad Zubair and Moustafa M. Nasralla
Future Internet 2023, 15(11), 367; https://doi.org/10.3390/fi15110367 - 13 Nov 2023
Cited by 9 | Viewed by 2452
Abstract
The Internet of Things (IoT), cloud, and fog computing are now a reality and have become the vision of the smart world. Self-directed learning approaches, their tools, and smart spaces are transforming traditional institutions into smart institutions. This transition has a positive impact on learner engagement, motivation, attendance, and advanced learning outcomes. In developing countries, there are many barriers to quality education, such as inadequate implementation of standard operating procedures, lack of involvement from learners and parents, and lack of transparent performance measurement for both institutions and students. These issues need to be addressed to ensure further growth and improvement. This study explored the use of smart technologies (IoT, fog, and cloud computing) to address challenges in student learning and administrative tasks. A novel framework (a five-element smart institution framework) is proposed to connect administrators, teachers, parents, and students using smart technologies to improve attendance, pedagogy, and evaluation. The results showed significant increases in student attendance and homework progress, along with improvements in annual results, student discipline, and teacher/parent engagement. Full article
(This article belongs to the Special Issue Featured Papers in the Section Internet of Things)

15 pages, 4099 KiB  
Article
Reinforcement Learning Approach for Adaptive C-V2X Resource Management
by Teguh Indra Bayu, Yung-Fa Huang and Jeang-Kuo Chen
Future Internet 2023, 15(10), 339; https://doi.org/10.3390/fi15100339 - 15 Oct 2023
Cited by 1 | Viewed by 2001
Abstract
The modulation coding scheme (MCS) index is an essential configuration parameter in cellular vehicle-to-everything (C-V2X) communication. As specified by the 3rd Generation Partnership Project (3GPP), the MCS index dictates the transport block size (TBS) index, which in turn affects the size of transport blocks and the number of physical resource blocks. These quantities are crucial in C-V2X resource management since they are also bound to the transmission power used in the system. To the authors’ knowledge, this particular area of research has not been previously investigated. Ultimately, this research establishes fundamental principles for future studies seeking to exploit MCS adaptability in many contexts. In this work, we proposed the application of a reinforcement learning (RL) algorithm, using the Q-learning approach to adaptively change the MCS index according to the current environmental state. The simulation results showed that our proposed RL approach outperformed a static MCS index and attained stability within a small number of events. Full article
(This article belongs to the Special Issue Featured Papers in the Section Internet of Things)
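
A bare-bones tabular Q-learning loop for adaptive MCS selection, in the spirit of the approach described above, might look as follows; the MCS index range, hyperparameters, and state representation are illustrative assumptions rather than the authors' simulation setup.

import random
from collections import defaultdict

MCS_INDICES = list(range(0, 21))           # candidate MCS indices (assumed range)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1      # learning rate, discount, exploration

# Q-table: maps an observed environment state to a value per MCS index.
Q = defaultdict(lambda: {a: 0.0 for a in MCS_INDICES})

def choose_mcs(state):
    if random.random() < EPSILON:               # explore
        return random.choice(MCS_INDICES)
    return max(Q[state], key=Q[state].get)      # exploit the best-known index

def update(state, mcs, reward, next_state):
    # Standard Q-learning update toward the immediate reward plus the
    # discounted value of the best action in the next state.
    best_next = max(Q[next_state].values())
    Q[state][mcs] += ALPHA * (reward + GAMMA * best_next - Q[state][mcs])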

38 pages, 7280 KiB  
Article
SEDIA: A Platform for Semantically Enriched IoT Data Integration and Development of Smart City Applications
by Dimitrios Lymperis and Christos Goumopoulos
Future Internet 2023, 15(8), 276; https://doi.org/10.3390/fi15080276 - 18 Aug 2023
Cited by 7 | Viewed by 2531
Abstract
The development of smart city applications often encounters a variety of challenges. These include the need to address complex requirements such as integrating diverse data sources and incorporating geographical data that reflect the physical urban environment. Platforms designed for smart cities hold a pivotal position in materializing these applications, given that they offer a suite of high-level services, which can be repurposed by developers. Although a variety of platforms are available to aid the creation of smart city applications, most fail to couple their services with geographical data, do not offer the ability to execute semantic queries on the available data, and possess restrictions that could impede the development process. This paper introduces SEDIA, a platform for developing smart applications based on diverse data sources, including geographical information, to support a semantically enriched data model for effective data analysis and integration. It also discusses the efficacy of SEDIA in a proof-of-concept smart city application related to air quality monitoring. The platform utilizes ontology classes and properties to semantically annotate collected data, and the Neo4j graph database facilitates the recognition of patterns and relationships within the data. This research also offers empirical data demonstrating the performance evaluation of SEDIA. These contributions collectively advance our understanding of semantically enriched data integration within the realm of smart city applications. Full article
(This article belongs to the Special Issue Featured Papers in the Section Internet of Things)
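
As a rough illustration of semantically annotated IoT data in a graph database, the sketch below stores observations linked to ontology classes in Neo4j and runs a simple semantic query; the node labels, ontology URI, and connection details are assumptions for illustration and do not reflect SEDIA's actual schema.

from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def store_observation(tx, sensor_id, value, unit, ontology_class):
    # Link a sensor to a new observation and annotate it with an ontology class.
    tx.run(
        "MERGE (s:Sensor {id: $sensor_id}) "
        "CREATE (o:Observation {value: $value, unit: $unit}) "
        "MERGE (c:OntologyClass {uri: $cls}) "
        "CREATE (s)-[:MADE]->(o)-[:IS_A]->(c)",
        sensor_id=sensor_id, value=value, unit=unit, cls=ontology_class,
    )

def sensors_exceeding(tx, ontology_class, threshold):
    # Example semantic query: sensors whose observations of a given ontology
    # class exceed a threshold (e.g., an air-quality limit).
    result = tx.run(
        "MATCH (s:Sensor)-[:MADE]->(o:Observation)-[:IS_A]->"
        "(c:OntologyClass {uri: $cls}) WHERE o.value > $t "
        "RETURN DISTINCT s.id AS id",
        cls=ontology_class, t=threshold,
    )
    return [record["id"] for record in result]

with driver.session() as session:
    session.execute_write(store_observation, "station-42", 37.5, "µg/m³",
                          "ex:PM2_5Observation")
    print(session.execute_read(sensors_exceeding, "ex:PM2_5Observation", 25.0))
driver.close()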

21 pages, 2892 KiB  
Article
Holistic Utility Satisfaction in Cloud Data Centre Network Using Reinforcement Learning
by Pejman Goudarzi, Mehdi Hosseinpour, Roham Goudarzi and Jaime Lloret
Future Internet 2022, 14(12), 368; https://doi.org/10.3390/fi14120368 - 8 Dec 2022
Viewed by 2073
Abstract
Cloud computing leads to efficient resource allocation for network users. In order to achieve efficient allocation, many research activities have been conducted so far. Some researchers focus on classical optimisation theory techniques (such as multi-objective optimisation, evolutionary optimisation, game theory, etc.) to satisfy network providers’ and network users’ service-level agreement (SLA) requirements. Normally, in a cloud data centre network (CDCN), it is difficult to jointly satisfy both the cloud provider’s and the cloud customers’ utilities, and this leads to complex combinatorial problems, which are usually NP-hard. Recently, machine learning and artificial intelligence techniques have received much attention from the networking community because of their capability to solve complicated networking problems. In the current work, at first, the holistic utility satisfaction for the cloud data centre provider and customers is formulated as a reinforcement learning (RL) problem with a specific reward function, which is a convex summation of users’ utility functions and the cloud provider’s utility. The user utility functions are modelled as a function of cloud virtualised resources (such as storage, CPU, RAM), connection bandwidth, and also the network-based expected packet loss and round-trip time factors associated with the cloud users. The cloud provider utility function is modelled as a function of resource prices and energy dissipation costs. Afterwards, a Q-learning implementation of the mentioned RL algorithm is introduced, which is able to converge to the optimal solution in an online and fast manner. The simulation results exhibit the enhanced convergence speed and computational complexity properties of the proposed method in comparison with similar approaches from the joint cloud customer/provider utility satisfaction perspective. To evaluate the scalability property of the proposed method, the results are also repeated for different cloud user population scenarios (small, medium, and large). Full article
(This article belongs to the Special Issue Featured Papers in the Section Internet of Things)
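
The snippet below sketches the reward structure described above: the reward fed to the Q-learning agent is a convex combination of the users' utilities and the cloud provider's utility. The utility shapes, coefficients, and the weight w are illustrative assumptions, not the exact functions used in the paper.

import math
from typing import Dict, List

def user_utility(alloc: Dict[str, float], rtt_ms: float, loss: float) -> float:
    # Assumed diminishing-returns utility in the virtualised resources,
    # penalised by round-trip time and expected packet loss.
    gain = sum(math.log1p(alloc[r]) for r in ("cpu", "ram", "storage", "bandwidth"))
    return gain - 0.01 * rtt_ms - 5.0 * loss

def provider_utility(prices: Dict[str, float],
                     allocations: List[Dict[str, float]],
                     energy_cost: float) -> float:
    # Revenue from the sold resources minus the energy dissipation cost.
    revenue = sum(prices[r] * a[r] for a in allocations for r in prices)
    return revenue - energy_cost

def reward(user_utils: List[float], prov_util: float, w: float = 0.5) -> float:
    # Convex combination (0 <= w <= 1) of the summed user utilities and the
    # provider's utility, mirroring the "convex summation" in the abstract.
    return w * sum(user_utils) + (1.0 - w) * prov_util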

11 pages, 337 KiB  
Article
Minimization of nth Order Rate Matching in Satellite Networks with One to Many Pairings
by Anargyros J. Roumeliotis, Christos N. Efrem and Athanasios D. Panagopoulos
Future Internet 2022, 14(10), 286; https://doi.org/10.3390/fi14100286 - 30 Sep 2022
Cited by 1 | Viewed by 1485
Abstract
This paper studies the minimization of nth (positive integer) order rate matching in high-throughput multi-beam satellite systems, based on one-to-many capacity allocation pairings, for the first time in the literature. The offered and requested capacities of gateways and users’ beams are exploited, respectively. Due to the high complexity of the binary optimization problem, its solution is approached with a two-step heuristic scheme. Firstly, the corresponding continuous (in [0, 1]) pairing problem is solved by applying difference of convex optimization theory, and then a transformation from a continuous to a binary feasible allocation is provided to extract the pairings between gateways and users’ beams. Compared with the exponential-time optimal exhaustive mechanism that investigates all possible pairs to extract the best matching for minimizing the rate matching, extended simulations show that the presented approximation for the solution of the non-convex optimization problem has fast convergence and achieves a generally low relative error for lower values of n. Finally, the simulation results show the importance of n in the examined problem. Specifically, pairings originating from the minimization of rate matching with larger n result in fairer rate matching among users’ beams, which is a valuable result for satellite and, more generally, wireless system operators. Full article
(This article belongs to the Special Issue Featured Papers in the Section Internet of Things)
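
One plausible way to write the binary pairing problem sketched above (an illustrative reading of the abstract, not necessarily the paper's exact model) is:

% x_{gb} = 1 if gateway g is paired with users' beam b, C_{gb} is the capacity
% gateway g can offer to beam b, R_b is the capacity requested by beam b, and
% n is the order of the rate matching.
\begin{align*}
  \min_{x_{gb}\in\{0,1\}}\quad & \sum_{b}\Big|\sum_{g} x_{gb}\,C_{gb} - R_b\Big|^{n}\\
  \text{s.t.}\quad & \sum_{g} x_{gb} = 1 \quad \forall b
  \qquad \text{(one gateway per beam; a gateway may serve many beams)}
\end{align*}

The two-step heuristic then relaxes each x_{gb} to the interval [0, 1], solves the relaxed problem with difference-of-convex programming, and maps the result back to a feasible binary pairing.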

Review

29 pages, 778 KiB  
Review
Large Language Models Meet Next-Generation Networking Technologies: A Review
by Ching-Nam Hang, Pei-Duo Yu, Roberto Morabito and Chee-Wei Tan
Future Internet 2024, 16(10), 365; https://doi.org/10.3390/fi16100365 - 7 Oct 2024
Viewed by 7279
Abstract
The evolution of network technologies has significantly transformed global communication, information sharing, and connectivity. Traditional networks, relying on static configurations and manual interventions, face substantial challenges such as complex management, inefficiency, and susceptibility to human error. The rise of artificial intelligence (AI) has begun to address these issues by automating tasks like network configuration, traffic optimization, and security enhancements. Despite their potential, integrating AI models in network engineering encounters practical obstacles including complex configurations, heterogeneous infrastructure, unstructured data, and dynamic environments. Generative AI, particularly large language models (LLMs), represents a promising advancement in AI, with capabilities extending to natural language processing tasks like translation, summarization, and sentiment analysis. This paper aims to provide a comprehensive review exploring the transformative role of LLMs in modern network engineering. In particular, it addresses gaps in the existing literature by focusing on LLM applications in network design and planning, implementation, analytics, and management. It also discusses current research efforts, challenges, and future opportunities, aiming to provide a comprehensive guide for networking professionals and researchers. The main goal is to facilitate the adoption and advancement of AI and LLMs in networking, promoting more efficient, resilient, and intelligent network systems. Full article
(This article belongs to the Special Issue Featured Papers in the Section Internet of Things)

19 pages, 2263 KiB  
Review
Industry 4.0 and Beyond: The Role of 5G, WiFi 7, and Time-Sensitive Networking (TSN) in Enabling Smart Manufacturing
by Jobish John, Md. Noor-A-Rahim, Aswathi Vijayan, H. Vincent Poor and Dirk Pesch
Future Internet 2024, 16(9), 345; https://doi.org/10.3390/fi16090345 - 21 Sep 2024
Cited by 1 | Viewed by 1317
Abstract
This paper explores the role that 5G, WiFi 7, and Time-Sensitive Networking (TSN) play in driving smart manufacturing as a fundamental part of the Industry 4.0 vision. It provides an in-depth analysis of each technology’s application in industrial communications, with a focus on TSN and its key elements that enable reliable and secure communication in industrial networks. In addition, this paper includes a comparative study of these technologies, analyzing them based on several industrial use cases, supported secondary applications, industry adoption, and current market trends. This paper concludes by highlighting the challenges and future directions for adopting these technologies in industrial networks and emphasizes their importance in realizing the Industry 4.0 vision within the context of smart manufacturing. Full article
(This article belongs to the Special Issue Featured Papers in the Section Internet of Things)
