Intelligent Big Data Analysis for High-Dimensional Internet of Things

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: closed (15 September 2024) | Viewed by 8098

Special Issue Editors


Dr. Wanfu Gao
Guest Editor
College of Computer Science and Technology, Jilin University, Jilin 130012, China
Interests: big data analysis; multi-label learning; feature selection

Dr. Yuzhou Liu
Guest Editor
College of Computer Science and Technology, Jilin University, Jilin 130012, China
Interests: software engineering; natural language processing; data mining

Dr. Ping Zhang
Guest Editor
School of Artificial Intelligence, Hebei University of Technology, Tianjin 300401, China
Interests: feature selection; information theory

Special Issue Information

Dear Colleagues,

Intelligent Big Data Analysis for High-Dimensional Internet of Things (IoT) is a complex and critical area of research that demands interdisciplinary expertise in data science, machine learning, and IoT. This Special Issue addresses this topic; its focus, scope, and purpose are outlined below.

(a) The focus of this Special Issue is to provide a platform for researchers to showcase their latest research and advancements in the field of intelligent big data analysis for high-dimensional IoT. It aims to explore novel algorithms and methodologies that can efficiently analyze, process, and manage the huge volumes of data generated by IoT devices.

(b) The scope of this Special Issue is broad and encompasses various domains related to IoT, such as smart cities, healthcare, transportation, and industrial automation. It includes topics such as big data analytics, machine learning, data mining, natural language processing, and deep learning.

(c) The purpose of this Special Issue is to facilitate knowledge exchange and collaboration among researchers, practitioners, and industry experts. It aims to contribute to the development of intelligent big data analysis techniques that can address the challenges of high-dimensional IoT.

This Special Issue will supplement the existing literature by providing a platform for the latest research and advancements in intelligent big data analysis for high-dimensional IoT. It will also provide insights into the challenges and opportunities of this area of research and suggest future research directions. Furthermore, the Special Issue will be a valuable resource for researchers, practitioners, and industry experts working in the field of IoT and data analytics.

Dr. Wanfu Gao
Dr. Yuzhou Liu
Dr. Ping Zhang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • high-dimensional data
  • big data
  • feature selection
  • Internet of Things

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (5 papers)

Research

29 pages, 13960 KiB  
Article
Few-Shot Image Segmentation Using Generating Mask with Meta-Learning Classifier Weight Transformer Network
by Jian-Hong Wang, Phuong Thi Le, Fong-Ci Jhou, Ming-Hsiang Su, Kuo-Chen Li, Shih-Lun Chen, Tuan Pham, Ji-Long He, Chien-Yao Wang, Jia-Ching Wang and Pao-Chi Chang
Electronics 2024, 13(13), 2634; https://doi.org/10.3390/electronics13132634 - 4 Jul 2024
Cited by 1 | Viewed by 1282
Abstract
With the rapid advancement of modern hardware technology, breakthroughs have been made in many areas of artificial intelligence research, enabling machines to replace or assist humans in a variety of fields. However, most artificial intelligence or deep learning techniques require large amounts of training data and are typically applicable to a single task objective. Acquiring such large training datasets can be particularly challenging, especially in domains like medical imaging. In the field of image processing, few-shot image segmentation is an area of active research. Recent studies have employed deep learning and meta-learning approaches to enable models to segment objects in images with only a small amount of training data, allowing them to quickly adapt to new task objectives. This paper proposes a network architecture for meta-learning few-shot image segmentation, utilizing a meta-learning classification weight transfer network to generate masks for few-shot image segmentation. The architecture leverages pre-trained classification weight transfers to generate informative prior masks and employs a pre-trained feature extraction architecture to extract features from query and support images. Furthermore, it utilizes a Feature Enrichment Module to adaptively propagate information from finer features to coarser features in a top-down manner for query image feature extraction. Finally, a classification module is employed for query image segmentation prediction. Experimental results demonstrate that, compared to the baseline and using the mean Intersection over Union (mIoU) as the evaluation metric, accuracy increases by 1.7% in the one-shot experiment and by 2.6% in the five-shot experiment. Thus, compared to the baseline, the proposed architecture with a meta-learning classification weight transfer network for mask generation exhibits superior performance in few-shot image segmentation.
(This article belongs to the Special Issue Intelligent Big Data Analysis for High-Dimensional Internet of Things)
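
For readers unfamiliar with the evaluation metric cited in this abstract, the following is a minimal sketch of how mean Intersection over Union (mIoU) is commonly computed for semantic segmentation; the label maps, class count, and function name are illustrative assumptions, not material from the paper.

```python
import numpy as np

def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
    """Mean Intersection over Union across classes for integer label maps."""
    ious = []
    for cls in range(num_classes):
        pred_mask = pred == cls
        target_mask = target == cls
        union = np.logical_or(pred_mask, target_mask).sum()
        if union == 0:
            continue  # class absent from both maps; skip it
        intersection = np.logical_and(pred_mask, target_mask).sum()
        ious.append(intersection / union)
    return float(np.mean(ious)) if ious else 0.0

# Toy 2x2 label maps with two classes
pred = np.array([[0, 1], [1, 1]])
target = np.array([[0, 1], [0, 1]])
print(mean_iou(pred, target, num_classes=2))  # ~0.583
```

Because the metric averages per-class IoU, it rewards consistent segmentation quality across all classes rather than only the dominant class.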

17 pages, 1287 KiB  
Article
Two-Level Dynamic Programming-Enabled Non-Metric Data Aggregation Technique for the Internet of Things
by Syed Roohullah Jan, Baraq Ghaleb, Umair Ullah Tariq, Haider Ali, Fariza Sabrina and Lu Liu
Electronics 2024, 13(9), 1651; https://doi.org/10.3390/electronics13091651 - 25 Apr 2024
Viewed by 885
Abstract
The Internet of Things (IoT) has become a transformative technological infrastructure, serving as a benchmark for automating and standardizing various activities across different domains to reduce human effort, especially in hazardous environments. In these networks, devices with embedded sensors capture valuable information about activities and report it to the nearest server. Although IoT networks are exceptionally useful in solving real-life problems, managing duplicate data values, often captured by neighboring devices, remains a challenging issue. Despite various methodologies reported in the literature to minimize the occurrence of duplicate data, it continues to be an open research problem. This paper presents a data aggregation approach designed to minimize the proportion of duplicate data values in the refined set with the least possible information loss in IoT networks. First, at the device level, a local data aggregation process filters out outliers and duplicate data before transmission. Second, at the server level, a dynamic programming-based non-metric method identifies the longest common subsequence (LCS) among data from neighboring devices, which is then shared with the edge module. Simulation results confirm the approach’s strong performance in optimizing bandwidth, energy consumption, and response time while maintaining high accuracy and precision, thus significantly reducing overall network congestion.
(This article belongs to the Special Issue Intelligent Big Data Analysis for High-Dimensional Internet of Things)
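
The server-level step described in this abstract builds on the classical longest common subsequence (LCS) dynamic program. Below is a minimal sketch of that textbook algorithm applied to two reading sequences; the function name and toy sensor values are illustrative assumptions, not the paper's implementation.

```python
def longest_common_subsequence(a, b):
    """Recover one LCS of two sequences via the standard O(len(a)*len(b)) DP."""
    m, n = len(a), len(b)
    # dp[i][j] = LCS length of a[:i] and b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])

    # Backtrack through the table to extract the subsequence itself.
    lcs, i, j = [], m, n
    while i > 0 and j > 0:
        if a[i - 1] == b[j - 1]:
            lcs.append(a[i - 1])
            i, j = i - 1, j - 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return lcs[::-1]

# Toy readings from two neighboring sensors
print(longest_common_subsequence([21.0, 21.5, 22.0, 22.5], [21.5, 22.0, 23.0]))
# -> [21.5, 22.0]
```

Because LCS compares element order rather than distances, it serves as a non-metric notion of overlap between the two data streams.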

15 pages, 4216 KiB  
Article
Keyword Data Analysis Using Generative Models Based on Statistics and Machine Learning Algorithms
by Sunghae Jun
Electronics 2024, 13(4), 798; https://doi.org/10.3390/electronics13040798 - 19 Feb 2024
Cited by 2 | Viewed by 1173
Abstract
For text big data analysis, we preprocessed text data and constructed a document–keyword matrix. The elements of this matrix represent the frequencies with which keywords occur in a document. The matrix suffers from a zero-inflation problem because many of its elements are zero. Moreover, preprocessing reduces the size of the document–keyword matrix. However, various machine learning algorithms require large amounts of data, so to address the problems of data shortage and zero inflation, we propose generative models based on statistics and machine learning. In our experiments, we compared the performance of the models on simulated and practical data sets. Thus, we verified the validity and contribution of our research for keyword data analysis.
(This article belongs to the Special Issue Intelligent Big Data Analysis for High-Dimensional Internet of Things)
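
As a concrete illustration of the data structure this abstract starts from, the snippet below builds a small document–keyword matrix and measures its share of zero entries; the toy corpus and the use of scikit-learn's CountVectorizer are assumptions made for the example, not the paper's pipeline.

```python
from sklearn.feature_extraction.text import CountVectorizer

# Toy corpus standing in for preprocessed documents
docs = [
    "sensor data stream analysis",
    "deep learning for sensor networks",
    "feature selection for high dimensional data",
]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)   # rows = documents, columns = keywords
matrix = X.toarray()

print(vectorizer.get_feature_names_out())
print(matrix)

# Zero inflation: most keyword counts are zero even in this tiny corpus.
print(f"share of zero entries: {(matrix == 0).mean():.2f}")
```

As the vocabulary grows, the share of zero entries typically rises further, which is the zero-inflation problem the generative models are intended to address.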

17 pages, 607 KiB  
Article
A Time-Sensitive Graph Neural Network for Session-Based New Item Recommendation
by Luzhi Wang and Di Jin
Electronics 2024, 13(1), 223; https://doi.org/10.3390/electronics13010223 - 3 Jan 2024
Cited by 3 | Viewed by 1637
Abstract
Session-based recommendation plays an important role in daily life and arises in many scenarios, such as online shopping websites and streaming media platforms. Recently, some works have focused on using graph neural networks (GNNs) to recommend new items in session-based scenarios. However, these methods have several limitations. First, existing methods typically ignore the times at which items were visited when constructing session graphs, resulting in a departure from real-world recommendation dynamics. Second, sessions are often sparse, making it challenging for GNNs to learn valuable item embeddings and user preferences. Third, existing methods usually overemphasize the impact of the last item on user preferences, neglecting users’ interest in multiple items within a session. To address these issues, we introduce a time-sensitive graph neural network for new item recommendation in session-based scenarios, namely, TSGNN. Specifically, TSGNN provides a novel time-sensitive session graph construction technique to solve the first problem. To address the second problem, TSGNN incorporates graph augmentation and contrastive learning. To solve the third problem, TSGNN designs a time-aware attention mechanism to accurately discern user preferences. By evaluating the compatibility between user preferences and candidate new item embeddings, our method recommends items with high relevance scores to users. Comparative experiments demonstrate the superiority of TSGNN over state-of-the-art (SOTA) methods.
(This article belongs to the Special Issue Intelligent Big Data Analysis for High-Dimensional Internet of Things)
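
The time-aware attention idea mentioned in this abstract can be illustrated with a much simpler stand-in: weighting session items by recency instead of relying only on the last click. The sketch below applies an exponential time decay over item embeddings; the shapes, decay rate, and function name are assumptions for illustration and do not reproduce the TSGNN mechanism.

```python
import numpy as np

def time_decayed_preference(item_embs, elapsed, decay=0.1):
    """Aggregate session item embeddings into a single preference vector.

    item_embs: (num_items, dim) embeddings of items visited in the session.
    elapsed:   (num_items,) time since each item was visited (larger = older).
    """
    scores = np.exp(-decay * elapsed)      # recency-based raw scores
    weights = scores / scores.sum()        # normalized attention weights
    return weights @ item_embs             # weighted sum = user preference

# Toy session: three items with 4-dimensional embeddings
rng = np.random.default_rng(0)
item_embs = rng.normal(size=(3, 4))
elapsed = np.array([30.0, 10.0, 1.0])      # minutes since each visit
preference = time_decayed_preference(item_embs, elapsed)
print(preference.shape)                    # (4,)
```

Candidate items can then be ranked, for example, by the dot product between their embeddings and this preference vector.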

15 pages, 2987 KiB  
Article
GRU- and Transformer-Based Periodicity Fusion Network for Traffic Forecasting
by Yazhe Zhang, Shixuan Liu, Ping Zhang and Bo Li
Electronics 2023, 12(24), 4988; https://doi.org/10.3390/electronics12244988 - 13 Dec 2023
Cited by 4 | Viewed by 1829
Abstract
Accurate traffic prediction is vital for traffic management, control, and urban development. Extensive research efforts have focused on capturing the intricate spatio-temporal relationships inherent in traffic data. However, few studies have fully exploited the potential of periodicity, a distinctive and valuable characteristic of transportation systems. In this paper, we propose a novel GRU- and Transformer-Based Periodicity Fusion Network (GTPFN) to distinguish the effects of different types of periodic data and integrate them seamlessly and effectively. Initially, the proposed model captures dynamic spatio-temporal correlations and obtains a candidate prediction by employing a GRU encoder–decoder with spatial attention, focusing on the hourly data. Subsequently, we design the Pattern Induction Block, based on GRU layers, to extract regular traffic patterns from daily and weekly data. Finally, the Pattern Fusion Transformer integrates these patterns, followed by a Feedforward layer, to yield the final prediction. Experiments on the Caltrans Performance Measurement System (PEMS) datasets show that the proposed model outperforms state-of-the-art baseline models on most prediction horizons.
(This article belongs to the Special Issue Intelligent Big Data Analysis for High-Dimensional Internet of Things)
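
To make the hourly, daily, and weekly inputs mentioned in this abstract concrete, the sketch below slices a single traffic series into the three windows such periodicity-aware models typically consume; the 5-minute sampling interval, window lengths, and function name are illustrative assumptions rather than the paper's configuration.

```python
import numpy as np

def periodic_segments(series, t, horizon, steps_per_day=288):
    """Return recent, daily-periodic, and weekly-periodic input windows.

    series:  1-D traffic readings sampled every 5 minutes (288 steps/day).
    t:       index of the first time step to be predicted.
    horizon: number of future steps to predict.
    """
    steps_per_week = 7 * steps_per_day
    recent = series[t - horizon:t]                                    # just before t
    daily = series[t - steps_per_day:t - steps_per_day + horizon]     # same time yesterday
    weekly = series[t - steps_per_week:t - steps_per_week + horizon]  # same time last week
    return recent, daily, weekly

# Toy series covering eight days of 5-minute readings
series = np.sin(np.linspace(0, 60, 288 * 8))
recent, daily, weekly = periodic_segments(series, t=288 * 7 + 100, horizon=12)
print(recent.shape, daily.shape, weekly.shape)   # (12,) (12,) (12,)
```

In the abstract's terms, the recent (hourly) window would feed the encoder–decoder branch, while the daily and weekly windows feed the pattern-extraction branch before fusion.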