Applied AI-Based Platform Technology and Application, Volume II

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: closed (31 January 2024) | Viewed by 17520

Special Issue Editor


Prof. Dr. Taeshik Shon
Guest Editor
Department of Software and Computer Engineering, Ajou University, Suwon 16499, Republic of Korea
Interests: in-vehicle network security; industrial control system security; digital forensics; anomaly detection algorithm

Special Issue Information

Dear Colleagues,

In the midst of the fourth industrial revolution, our society is undergoing rapid and diverse changes. The contactless environment brought about by COVID-19, which began in 2020, has accelerated the adoption of information and communication technologies across society, and this trend is taking the form of a digital transformation: the integration of digital technologies into every area of real life and business. In this era, diverse IoT devices are appearing and evolving, and many of these devices are built on open platforms, which are in turn based on Linux. The enormous amount of data generated by these devices calls for a range of big data technologies grounded in artificial intelligence. Accordingly, our main research subject has expanded from the Internet of Things to the Internet of Everything, and a human-centered dimension has been added, creating ever more diverse devices and services based on the Internet of Behavior. Technologies applying artificial intelligence are also solving a variety of problems in society, such as speech recognition, natural language processing, pattern recognition, regression, estimation, and prediction. Open platforms will continue to evolve on the basis of Linux and other open operating systems, and the services and content built on these platforms will require big data processing powered by artificial intelligence.

Cybersecurity and privacy issues must not be overlooked, of course. As the number of devices and services grows, the demand for personal information protection will increase, and so will attacks by hackers and malware. When legal disputes arise, it is also necessary to investigate and analyze incidents from a digital forensics point of view.

This Special Issue aims to encourage the exposure of the most recent and advanced research applications, experiments, results, and developments in platform technology and applications, such as big data, smart grid, multimedia, mobility, digital forensics, information security and privacy, the Internet of Things, and artificial intelligence, with a special focus on their practical applications to science, engineering, industry, medicine, robotics, manufacturing, entertainment, optimization, business, and other fields. We kindly invite researchers and practitioners to contribute their high-quality original research or review articles concerning these topics to this Special Issue.

Topics appropriate for this Special Issue include, but are not necessarily limited to, the following:

  • Machine-learning-based platform technology and application;
  • Clustering and classification of industrial data processing;
  • Industrial AI-based applications for platform technology and application;
  • AI security, trust, assurance, and resilience;
  • Digital forensics for platform technology and application;
  • Multimedia and distributed systems for platform technology;
  • Smart home and M2M (machine-to-machine) network for platform technology;
  • Scientific and big data visualization platform;
  • Business management and intelligence;
  • Security and privacy issues in emerging platforms.

Prof. Dr. Taeshik Shon
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine-learning-based platform technology and application
  • clustering and classification of industrial data processing
  • industrial AI-based applications for platform technology and application
  • AI security, trust, assurance, and resilience
  • digital forensics for platform technology and application
  • multimedia and distributed systems for platform technology
  • smart home and M2M (machine-to-machine) network for platform technology
  • scientific and big data visualization platform
  • business management and intelligence
  • security and privacy issues in emerging platforms

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (7 papers)


Research

15 pages, 648 KiB  
Article
Data-Driven ICS Network Simulation for Synthetic Data Generation
by Minseo Kim, Seungho Jeon, Jake Cho and Seonghyeon Gong
Electronics 2024, 13(10), 1920; https://doi.org/10.3390/electronics13101920 - 14 May 2024
Viewed by 975
Abstract
Industrial control systems (ICSs) are integral to managing and optimizing processes in various industries, including manufacturing, power generation, and more. However, the scarcity of widely adopted ICS datasets hampers research efforts in areas like optimization and security. This scarcity arises from the substantial cost and technical expertise required to create physical ICS environments. In response to these challenges, this paper presents an approach to generating synthetic ICS data through a data-driven ICS network simulation. We circumvent the need for expensive hardware by recreating the entire ICS environment in software. Moreover, rather than manually replicating the control logic of ICS components, we leverage existing data to autonomously generate control logic. The core of our method involves the stochastic setting of setpoints, which introduces randomness into the generated data. Setpoints serve as target values for controlling the operation of the ICS process. This approach enables us to augment existing ICS datasets and cater to the data requirements of machine learning-based ICS intrusion detection systems and other data-driven applications. Our simulated ICS environment employs virtualized containers to mimic the behavior of real-world PLCs and SCADA systems, while control logic is deduced from publicly available ICS datasets. Setpoints are generated probabilistically to ensure data diversity. Experimental results validate the fidelity of our synthetic data, emphasizing their ability to closely replicate the temporal and statistical characteristics of real-world ICS networks. In conclusion, this data-driven ICS network simulation offers a cost-effective and scalable solution for generating synthetic ICS data, empowering researchers in ICS optimization and security with diverse, realistic datasets. Future work may involve refining the simulation model and exploring additional applications for synthetic ICS data.
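The stochastic-setpoint idea can be illustrated with a minimal sketch. All names and parameters here are hypothetical: in the paper, the control logic is learned from existing ICS data, whereas this sketch substitutes a simple first-order process model.

```python
import random

def generate_setpoints(n_steps, low=10.0, high=90.0, hold=50, seed=0):
    """Draw random setpoints and hold each one for `hold` steps,
    mimicking a stochastic setpoint schedule that drives the simulation."""
    rng = random.Random(seed)
    setpoints = []
    while len(setpoints) < n_steps:
        sp = rng.uniform(low, high)
        setpoints.extend([sp] * hold)
    return setpoints[:n_steps]

def simulate_process(setpoints, gain=0.1, noise=0.5, seed=1):
    """First-order process: the measured value moves a fraction `gain`
    toward the current setpoint each step, plus Gaussian sensor noise."""
    rng = random.Random(seed)
    value, trace = 0.0, []
    for sp in setpoints:
        value += gain * (sp - value) + rng.gauss(0.0, noise)
        trace.append(value)
    return trace

setpoints = generate_setpoints(200)
trace = simulate_process(setpoints)
```

Randomizing the setpoint schedule is what diversifies the synthetic traces; everything downstream (sensor values, network messages) follows from it.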
(This article belongs to the Special Issue Applied AI-Based Platform Technology and Application, Volume II)

18 pages, 6792 KiB  
Article
An Optimal Network-Aware Scheduling Technique for Distributed Deep Learning in Distributed HPC Platforms
by Sangkwon Lee, Syed Asif Raza Shah, Woojin Seok, Jeonghoon Moon, Kihyeon Kim and Syed Hasnain Raza Shah
Electronics 2023, 12(14), 3021; https://doi.org/10.3390/electronics12143021 - 10 Jul 2023
Cited by 2 | Viewed by 1833
Abstract
Deep learning is a growing technique used to solve complex artificial intelligence (AI) problems. Large-scale deep learning has become a significant issue as a result of the expansion of datasets and the complexity of deep learning models. For training large-scale models, the cloud can be used as a distributed HPC (high-performance computing) tool, with benefits in cost and flexibility. However, one of the major performance barriers to distributed deep learning in a distributed HPC environment is the network: performance is often limited by heavy traffic, such as the many stochastic gradient descent transfers required for distributed communication. Many network studies in distributed deep learning address these problems, but most focus on improving communication performance by applying new methods or algorithms, such as overlapping parameter synchronization to minimize communication delay, rather than considering the actual network. In this paper, we focus on the actual network, especially in a distributed HPC environment. In such an environment, if the cluster nodes assigned to a distributed deep learning task are spread across different zones or regions, performance degradation due to network delay may occur. The proposed network optimization algorithm ensures that distributed work is placed in the same zone as much as possible to reduce network delay. Furthermore, scoring based on network monitoring metrics such as loss, delay, and throughput is applied to select the optimal nodes within a zone. Our proposal has been validated on Kubernetes, an open-source orchestrator for the automatic management and deployment of microservices. The performance of distributed deep learning is improved through the proposed scheduler.
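A minimal sketch of the zone-affinity scoring idea, assuming hypothetical metric names and weights; the paper's actual scheduler integrates with Kubernetes and its network monitoring tools.

```python
def score_node(node, job_zone, w_delay=0.5, w_loss=0.3, w_tput=0.2):
    """Score a candidate node from monitored metrics; lower is better.
    Nodes outside the job's zone receive a large penalty so that
    same-zone placement is preferred (the zone-affinity idea)."""
    score = (w_delay * node["delay_ms"]
             + w_loss * node["loss_pct"] * 100   # packet loss weighted heavily
             - w_tput * node["throughput_gbps"])  # more throughput lowers the score
    if node["zone"] != job_zone:
        score += 1000.0                           # cross-zone penalty
    return score

def pick_nodes(nodes, job_zone, k):
    """Choose the k best-scoring nodes for a distributed job."""
    return sorted(nodes, key=lambda n: score_node(n, job_zone))[:k]

nodes = [
    {"name": "a", "zone": "z1", "delay_ms": 2.0, "loss_pct": 0.0, "throughput_gbps": 9.5},
    {"name": "b", "zone": "z1", "delay_ms": 8.0, "loss_pct": 0.1, "throughput_gbps": 9.0},
    {"name": "c", "zone": "z2", "delay_ms": 1.0, "loss_pct": 0.0, "throughput_gbps": 10.0},
]
chosen = pick_nodes(nodes, "z1", 2)  # node "c" loses despite its low delay
```

The large constant penalty means a cross-zone node is chosen only when too few same-zone nodes exist, which mirrors the "same zone as much as possible" placement rule.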
(This article belongs to the Special Issue Applied AI-Based Platform Technology and Application, Volume II)

20 pages, 897 KiB  
Article
Forensic Analysis Laboratory for Sport Devices: A Practical Use Case
by Pablo Donaire-Calleja, Antonio Robles-Gómez, Llanos Tobarra and Rafael Pastor-Vargas
Electronics 2023, 12(12), 2710; https://doi.org/10.3390/electronics12122710 - 17 Jun 2023
Cited by 2 | Viewed by 1525
Abstract
At present, the mobile device sector is experiencing significant growth. In particular, wearable devices have become a common element in society. This fact implies that users unconsciously accept the constant, dynamic collection of private data about their habits and behaviours. Therefore, this work focuses on highlighting and analysing some of the main issues that forensic analysts face in this sector, such as the lack of standard procedures for analysis and the common use of private protocols for data communication. Thus, it is almost impossible for a digital forensic specialist to fully specialize in the context of wearables, such as smartwatches for sports activities. With the aim of highlighting these problems, a complete forensic analysis laboratory for such sports devices is described in this paper. We selected a smartwatch from the Garmin Forerunner series, due to its great popularity. Through an analysis, its strengths and weaknesses in terms of data protection are described. We also analyse how companies are increasingly taking personal data privacy into consideration in order to minimize unwanted information leaks. Finally, a set of initial security recommendations for the use of these kinds of devices is provided to the reader.
(This article belongs to the Special Issue Applied AI-Based Platform Technology and Application, Volume II)

17 pages, 3166 KiB  
Article
A Framework for Understanding Unstructured Financial Documents Using RPA and Multimodal Approach
by Seongkuk Cho, Jihoon Moon, Junhyeok Bae, Jiwon Kang and Sangwook Lee
Electronics 2023, 12(4), 939; https://doi.org/10.3390/electronics12040939 - 13 Feb 2023
Cited by 4 | Viewed by 4035
Abstract
The financial business process worldwide suffers from heavy dependencies on labor and written documents, making it tedious and time-consuming. To address this problem, traditional robotic process automation (RPA) has recently been developed into a hyper-automation solution by combining computer vision (CV) and natural language processing (NLP) methods. These solutions are capable of image analysis, such as key information extraction and document classification. However, they leave room for improvement on text-rich document images and require large amounts of training data to process multilingual documents. This study proposes a multimodal intelligent document processing framework that combines a pre-trained deep learning model with the traditional RPA used in banks to automate business processes on real-world financial document images. The proposed framework can perform classification and key information extraction with a small amount of training data and can analyze multilingual documents. To evaluate the effectiveness of the proposed framework, extensive experiments were conducted using Korean financial document images. The experimental results show the superiority of the multimodal approach for understanding financial documents and demonstrate that adequate labeling can improve performance by up to about 15%.
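The multimodal (late-fusion) idea can be sketched as follows; the keyword lists, field templates, and weights below are illustrative stand-ins for the learned text and vision features the paper actually uses.

```python
def text_score(tokens, keywords):
    """Fraction of a class's keywords found among the OCR tokens."""
    return sum(1 for kw in keywords if kw in tokens) / len(keywords)

def layout_score(boxes, template):
    """Fraction of expected field positions (x, y, tolerance) that have
    a token box nearby -- a toy stand-in for visual/layout features."""
    def near(box, field):
        fx, fy, tol = field
        return abs(box[0] - fx) <= tol and abs(box[1] - fy) <= tol
    return sum(1 for f in template if any(near(b, f) for b in boxes)) / len(template)

def classify(tokens, boxes, classes, w_text=0.6, w_layout=0.4):
    """Late fusion: weighted sum of text and layout evidence per class."""
    scores = {name: w_text * text_score(tokens, spec["keywords"])
                    + w_layout * layout_score(boxes, spec["template"])
              for name, spec in classes.items()}
    return max(scores, key=scores.get)

classes = {
    "invoice": {"keywords": ["invoice", "total", "due"],
                "template": [(80, 10, 15), (80, 90, 15)]},
    "receipt": {"keywords": ["receipt", "cash", "change"],
                "template": [(50, 5, 10)]},
}
label = classify(["invoice", "total", "amount"], [(82, 12), (78, 88)], classes)
```

Even this toy shows why fusing modalities helps: a document missing a keyword can still be classified correctly when its layout matches the expected template, and vice versa.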
(This article belongs to the Special Issue Applied AI-Based Platform Technology and Application, Volume II)

16 pages, 6562 KiB  
Article
Methods of Pre-Clustering and Generating Time Series Images for Detecting Anomalies in Electric Power Usage Data
by Sangwon Oh, Seungmin Oh, Tai-Won Um, Jinsul Kim and Young-Ae Jung
Electronics 2022, 11(20), 3315; https://doi.org/10.3390/electronics11203315 - 14 Oct 2022
Cited by 5 | Viewed by 2002
Abstract
As electricity supply expands, it is essential for providers to predict and analyze consumer electricity patterns in order to plan effective electricity supply policies. Electricity consumption data generally take the form of time series, and before analyzing the data it is necessary to verify that they are free of contamination, so a process for checking that the data contain no abnormalities is essential. For power data in particular, anomalies are often recorded over multiple time units rather than at a single point. In addition, due to various external factors, individual sets of power consumption data do not share consistent features, which highlights the importance of pre-clustering. In this paper, we propose a method for detecting anomalies in time series power usage data using a CNN model on pre-clustering-based time series images. For pre-clustering, performance was compared using the k-means, k-shape clustering, and SOM algorithms. After pre-clustering, anomaly detection using the ARIMA model, a statistical technique, was compared with a CNN-based model that converts the time series data into images. As a result, the pre-clustered data produced more accurate anomaly detection results than the non-clustered data, and the CNN-based binary classification model using time series images achieved higher accuracy than the ARIMA model.
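A minimal sketch of converting a time series into an image for a CNN-style classifier. This amplitude-binning encoding is an illustrative stand-in; the paper's actual time-series-to-image encoding may differ.

```python
def series_to_image(series, height=8):
    """Quantize a 1-D series into `height` amplitude bins, producing a
    height x len(series) binary grid that a 2-D CNN could consume."""
    lo, hi = min(series), max(series)
    span = (hi - lo) or 1.0
    image = [[0] * len(series) for _ in range(height)]
    for t, v in enumerate(series):
        row = min(height - 1, int((v - lo) / span * height))
        image[height - 1 - row][t] = 1   # row 0 = highest amplitude
    return image

series = [0.0, 0.5, 1.0, 0.5, 0.0, 4.0]  # last point is an anomalous spike
img = series_to_image(series, height=4)
```

In an image like this, an anomaly spanning multiple time units shows up as a shape (a run of pixels in unusual rows) rather than a single outlying value, which is what makes a CNN a natural fit.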
(This article belongs to the Special Issue Applied AI-Based Platform Technology and Application, Volume II)

19 pages, 11152 KiB  
Article
Forensic Analysis of IoT File Systems for Linux-Compatible Platforms
by Jino Lee and Taeshik Shon
Electronics 2022, 11(19), 3219; https://doi.org/10.3390/electronics11193219 - 8 Oct 2022
Cited by 5 | Viewed by 3546
Abstract
Due to recent developments in IT technology, various IoT devices have been developed for use in diverse environments, such as cars, smart TVs, and smartphones, and communication between IoT devices has become possible. IoT devices are found in homes and in daily life, and IoT technologies are being combined with vehicles, power systems, and wearables, among others. Although the usage of IoT devices has increased, the level of security technology applied to them is still insufficient. IoT devices store sensitive information, such as personal information and usage history, so security incidents such as data leakage can be very damaging for users. Since research on data storage and acquisition in IoT devices is very important, in this paper we conducted a security analysis, from a forensic perspective, of IoT platform file systems used in various environments. The analysis covered two platforms: Tizen (VDFS) and Linux (JFFS2 and UBIFS). Through file system metadata analysis, we obtained the file system type and size, lists of files and folders, and deleted file information, which allowed us to analyze the file system structure. We also used this information to check the recoverability of deleted data and to investigate recovery plans. In this study, we explain the characteristics of the platforms used in various environments and of the data stored on each platform. By analyzing the security issues of data stored during platform communications, we aim to help solve the problems affecting these devices. In addition, we explain our method of file system forensic analysis so that it can serve as a reference for forensics on other platforms.
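The metadata-driven analysis style described above can be sketched on a toy format. The superblock layout below is purely hypothetical and far simpler than the real VDFS, JFFS2, or UBIFS on-disk structures.

```python
import struct

# A toy 16-byte "superblock": magic (4s), version (H), block size (H),
# block count (I), root inode (I), little-endian. Illustrative only.
SUPERBLOCK = struct.Struct("<4sHHII")

def parse_superblock(raw):
    """Decode the fixed-size header and derive the metadata a forensic
    examiner would record first: type, size, and the root of the tree."""
    magic, version, block_size, block_count, root_inode = SUPERBLOCK.unpack(
        raw[:SUPERBLOCK.size])
    if magic != b"TOYF":
        raise ValueError("not a recognized file system image")
    return {
        "version": version,
        "block_size": block_size,
        "total_bytes": block_size * block_count,  # file system size, derived
        "root_inode": root_inode,                 # entry point for walking files
    }

image = struct.pack("<4sHHII", b"TOYF", 1, 4096, 256, 2)
info = parse_superblock(image)
```

Real forensic tooling proceeds the same way: validate a magic number, decode fixed-offset fields, then walk from the root metadata to enumerate live and deleted entries.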
(This article belongs to the Special Issue Applied AI-Based Platform Technology and Application, Volume II)

15 pages, 4696 KiB  
Article
Selective Layer Tuning and Performance Study of Pre-Trained Models Using Genetic Algorithm
by Jae-Cheol Jeong, Gwang-Hyun Yu, Min-Gyu Song, Dang Thanh Vu, Le Hoang Anh, Young-Ae Jung, Yoon-A Choi, Tai-Won Um and Jin-Young Kim
Electronics 2022, 11(19), 2985; https://doi.org/10.3390/electronics11192985 - 21 Sep 2022
Cited by 3 | Viewed by 2347
Abstract
Utilizing pre-trained models involves fully or partially using pre-trained parameters as initialization. In general, configuring a pre-trained model demands practitioners’ knowledge of the problem or an exhaustive trial-and-error experiment for a given task. In this paper, we propose tuning the trainable layers of a pre-trained model using a genetic algorithm, fine-tuning on single-channel image datasets for a classification task. The single-channel datasets comprise grayscale images and preprocessed audio signals transformed into log-Mel spectrograms. The four deep-learning models used in the experimental evaluation were pre-trained on the ImageNet dataset. The proposed genetic algorithm finds the fittest configuration in every generation to determine the selective layer tuning of the pre-trained models. Compared to the conventional fine-tuning method and random layer search, our proposed selective layer search with a genetic algorithm achieves higher accuracy, on average, by 9.7% and 1.88% (MNIST-Fashion), 1.31% and 1.14% (UrbanSound8k), and 2.2% and 0.29% (HospitalAlarmSound), respectively. In addition, our search method can naturally be applied to various datasets for the same task without prior knowledge of the dataset of interest.
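The selective-layer search can be sketched as a genetic algorithm over a binary trainable-layer mask. The fitness function below is a toy stand-in for the validation accuracy the paper actually optimizes; it merely rewards unfreezing later layers (a common transfer-learning heuristic) while penalizing training many layers.

```python
import random

rng = random.Random(42)
N_LAYERS = 10

def fitness(mask):
    """Toy fitness: later trainable layers help, each trained layer costs.
    In the paper this would be the fine-tuned model's accuracy."""
    return sum(i for i, m in enumerate(mask) if m) - 2 * sum(mask)

def crossover(a, b):
    """Single-point crossover of two parent masks."""
    cut = rng.randrange(1, N_LAYERS)
    return a[:cut] + b[cut:]

def mutate(mask, rate=0.1):
    """Flip each bit with probability `rate`."""
    return [1 - m if rng.random() < rate else m for m in mask]

pop = [[rng.randint(0, 1) for _ in range(N_LAYERS)] for _ in range(20)]
for _ in range(30):                    # generations
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                 # truncation selection
    children = [mutate(crossover(rng.choice(parents), rng.choice(parents)))
                for _ in range(10)]
    pop = parents + children

best = max(pop, key=fitness)           # best trainable-layer mask found
```

Each individual is simply a freeze/unfreeze decision per layer, so the same loop transfers to any backbone: only the fitness evaluation (training and validating the masked model) changes.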
(This article belongs to the Special Issue Applied AI-Based Platform Technology and Application, Volume II)
