Information, Volume 15, Issue 11 (November 2024) – 81 articles

Cover Story: Super-resolution is a technique for generating a high-resolution image or video from a low-resolution counterpart by predicting natural and realistic texture information. However, traditional single-image super-resolution methods can produce a blurry visual effect. Reference-based super-resolution methods have been proposed to recover detailed information accurately; in addition to the input image, a high-resolution image is used as a reference. However, these methods require texture alignment between the low-resolution and reference images, which generally demands considerable time and memory. This paper proposes a lightweight reference-based video super-resolution method using deformable convolution. The proposed method enables reference-based super-resolution even in environments with limited computational resources. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; the PDF is the official version. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
27 pages, 573 KiB  
Article
Machine Learning-Based Methodologies for Cyber-Attacks and Network Traffic Monitoring: A Review and Insights
by Filippo Genuario, Giuseppe Santoro, Michele Giliberti, Stefania Bello, Elvira Zazzera and Donato Impedovo
Information 2024, 15(11), 741; https://doi.org/10.3390/info15110741 - 20 Nov 2024
Abstract
The number of connected IoT devices is increasing significantly due to their many benefits, including automation, improved efficiency and quality of life, and reduced waste. However, these devices have several vulnerabilities that have led to rapid growth in the number of attacks. Therefore, several machine learning-based intrusion detection system (IDS) tools have been developed to detect intrusions and suspicious activity to and from a host (HIDS—Host IDS) or, in general, within the traffic of a network (NIDS—Network IDS). The proposed work performs a comparative analysis and an ablative study of recent machine learning-based NIDSs to develop a benchmark of the different proposed strategies. It compares shallow learning algorithms, such as decision trees, random forests, Naïve Bayes, logistic regression, XGBoost, and support vector machines, with deep learning algorithms, such as DNNs, CNNs, and LSTMs, whose application in this area is relatively new in the literature. Ensemble methods are also tested. The algorithms are evaluated on the KDD-99, NSL-KDD, UNSW-NB15, IoT-23, and UNB-CIC IoT 2023 datasets. The results show that NIDS tools based on deep learning approaches achieve better performance in detecting network anomalies than shallow learning approaches, and ensembles outperform all the other models. Full article
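Naïve Bayes is among the shallow learners this review benchmarks. As a rough, self-contained illustration of how such a classifier separates benign from attack flows, here is a minimal pure-Python Gaussian Naive Bayes sketch; the two flow features (packets per second, mean payload bytes) and all toy values are hypothetical and not drawn from the paper or its datasets.

```python
import math
from collections import defaultdict

def fit_gnb(X, y):
    """Estimate a log prior and per-class mean/variance for each feature."""
    by_class = defaultdict(list)
    for xi, yi in zip(X, y):
        by_class[yi].append(xi)
    n = len(X)
    stats = {}
    for c, rows in by_class.items():
        means = [sum(col) / len(rows) for col in zip(*rows)]
        variances = [
            sum((v - m) ** 2 for v in col) / len(rows) + 1e-9  # smoothing
            for col, m in zip(zip(*rows), means)
        ]
        stats[c] = (math.log(len(rows) / n), means, variances)
    return stats

def predict_gnb(stats, x):
    """Pick the class maximising log prior + sum of Gaussian log-likelihoods."""
    def log_post(c):
        prior, means, variances = stats[c]
        return prior + sum(
            -0.5 * math.log(2 * math.pi * var) - (v - m) ** 2 / (2 * var)
            for v, m, var in zip(x, means, variances)
        )
    return max(stats, key=log_post)

# Hypothetical flow features: [packets per second, mean payload bytes]
X = [[10, 200], [12, 180], [11, 210],    # benign-looking flows
     [900, 40], [950, 60], [870, 50]]    # flood-like flows
y = ["benign", "benign", "benign", "attack", "attack", "attack"]
model = fit_gnb(X, y)
print(predict_gnb(model, [11, 190]))   # → benign
print(predict_gnb(model, [920, 45]))   # → attack
```

The same fit/predict pattern carries over to the other shallow learners in the comparison; only the per-class model changes.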

18 pages, 1819 KiB  
Article
Detecting Adversarial Attacks in IoT-Enabled Predictive Maintenance with Time-Series Data Augmentation
by Flora Amato, Egidia Cirillo, Mattia Fonisto and Alberto Moccardi
Information 2024, 15(11), 740; https://doi.org/10.3390/info15110740 - 20 Nov 2024
Abstract
Despite considerable advancements in integrating the Internet of Things (IoT) and artificial intelligence (AI) within the industrial maintenance framework, the increasing reliance on these innovative technologies introduces significant vulnerabilities due to cybersecurity risks, potentially compromising the integrity of decision-making processes. Accordingly, this study aims to offer comprehensive insights into the cybersecurity challenges associated with predictive maintenance, proposing a novel methodology that leverages generative AI for data augmentation to enhance threat detection capabilities. Experimental evaluations conducted on the NASA Commercial Modular Aero-Propulsion System Simulation (N-CMAPSS) dataset affirm the viability of this approach, which leverages the state-of-the-art TimeGAN model for temporal-aware data generation and builds a recurrent classifier for attack discrimination on a balanced dataset. The classifier’s results demonstrate satisfactory and robust performance in terms of accuracy (between 80% and 90%) and show how the strategic generation of data can effectively bolster the resilience of intelligent maintenance systems against cyber threats. Full article
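The paper's augmentation relies on TimeGAN; as a much simpler stand-in, the general idea of balancing a dataset by generating perturbed copies of minority-class time series can be sketched with plain Gaussian jitter. The function names and parameters below are illustrative assumptions, not the authors' method.

```python
import random

def jitter(series, sigma=0.05, seed=None):
    """Return a noisy copy of a univariate series: a crude stand-in for
    learned generative augmentation (the paper uses TimeGAN instead)."""
    rng = random.Random(seed)
    return [v + rng.gauss(0.0, sigma) for v in series]

def balance_by_augmentation(majority, minority, sigma=0.05, seed=0):
    """Oversample the minority class with jittered copies until class sizes match."""
    rng = random.Random(seed)
    augmented = list(minority)
    while len(augmented) < len(majority):
        base = rng.choice(minority)
        augmented.append(jitter(base, sigma, seed=rng.random()))
    return augmented

# Toy example: four normal sequences, one fault/attack sequence to oversample.
normal = [[0.0, 0.0, 0.0]] * 4
faults = [[1.0, 2.0, 3.0]]
balanced = balance_by_augmentation(normal, faults, sigma=0.05)
print(len(balanced))  # → 4
```

A learned generator replaces `jitter` in the paper's pipeline; the balancing loop itself stays conceptually the same.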

23 pages, 32729 KiB  
Article
PLC-Fusion: Perspective-Based Hierarchical and Deep LiDAR Camera Fusion for 3D Object Detection in Autonomous Vehicles
by Husnain Mushtaq, Xiaoheng Deng, Fizza Azhar, Mubashir Ali and Hafiz Husnain Raza Sherazi
Information 2024, 15(11), 739; https://doi.org/10.3390/info15110739 - 19 Nov 2024
Abstract
Accurate 3D object detection is essential for autonomous driving, yet traditional LiDAR models often struggle with sparse point clouds. To address this, we propose perspective-aware hierarchical vision transformer-based LiDAR-camera fusion (PLC-Fusion) for 3D object detection. This efficient, multi-modal 3D object detection framework integrates LiDAR and camera data for improved performance. First, our method enhances LiDAR data by projecting them onto a 2D plane, enabling the extraction of object perspective features from a probability map via the Object Perspective Sampling (OPS) module. It incorporates a lightweight perspective detector, consisting of interconnected 2D and monocular 3D sub-networks, to extract image features and generate object perspective proposals by predicting and refining top-scored 3D candidates. Second, it leverages two independent transformers—CamViT for 2D image features and LidViT for 3D point cloud features. These ViT-based representations are fused via the Cross-Fusion module for hierarchical and deep representation learning, improving performance and computational efficiency. These mechanisms enhance the utilization of semantic features in a region of interest (ROI) to obtain more representative point features, leading to a more effective fusion of information from both LiDAR and camera sources. PLC-Fusion outperforms existing methods, achieving a mean average precision (mAP) of 83.52% and 90.37% for 3D and BEV detection, respectively. Moreover, PLC-Fusion maintains a competitive inference time of 0.18 s. Our model addresses computational bottlenecks by eliminating the need for dense BEV searches and global attention mechanisms while improving detection range and precision. Full article
(This article belongs to the Special Issue Emerging Research in Object Tracking and Image Segmentation)

33 pages, 658 KiB  
Article
SoK: The Impact of Educational Data Mining on Organisational Administration
by Hamad Almaghrabi, Ben Soh, Alice Li and Idrees Alsolbi
Information 2024, 15(11), 738; https://doi.org/10.3390/info15110738 - 19 Nov 2024
Abstract
Educational Data Mining (EDM) applies advanced data mining techniques to analyse data from educational settings, traditionally aimed at improving student performance. However, EDM’s potential extends to enhancing administrative functions in educational organisations. This systematisation of knowledge (SoK) explores the use of EDM in organisational administration, examining peer-reviewed and non-peer-reviewed studies to provide a comprehensive understanding of its impact. This review highlights how EDM can revolutionise decision-making processes, supporting data-driven strategies that enhance administrative efficiency. It outlines key data mining techniques used in tasks like resource allocation, staff evaluation, and institutional planning. Challenges related to EDM implementation, such as data privacy, system integration, and the need for specialised skills, are also discussed. While EDM offers benefits like increased efficiency and informed decision-making, this review notes potential risks, including over-reliance on data and misinterpretation. The role of EDM in developing robust administrative frameworks that align with organisational goals is also explored. This study provides a critical overview of the existing literature and identifies areas for future research, offering insights to optimise educational administration through effective EDM use and highlighting its growing significance in shaping the future of educational organisations. Full article
(This article belongs to the Section Information Applications)

22 pages, 1318 KiB  
Article
Fractional Intuitionistic Fuzzy Support Vector Machine: Diabetes Tweet Classification
by Hassan Badi, Alina-Mihaela Patriciu and Karim El Moutaouakil
Information 2024, 15(11), 737; https://doi.org/10.3390/info15110737 - 19 Nov 2024
Abstract
Support vector machine (SVM) models apply the Karush–Kuhn–Tucker optimality conditions (KKT-OC) in the ordinary derivative to the primal optimisation problem, which has a major influence on the weights associated with the dissimilarity between the selected support vectors and subsequently on the quality of the model’s predictions. Recognising the capacity of fractional derivatives to provide machine learning models with more memory through more microscopic differentiations, in this paper we generalise KKT-OC based on ordinary derivatives to KKT-OC using fractional derivatives (Frac-KKT-OC). To mitigate the impact of noise and identify support vectors from noise, we apply the Frac-KKT-OC method to the intuitionistic fuzzy version of SVM (IFSVM). The fractional intuitionistic fuzzy SVM model (Frac-IFSVM) is then evaluated on six datasets from the UCI repository and used to predict the sentiments embedded in tweets posted by people with diabetes. Taking into account four performance measures (sensitivity, specificity, F-measure, and G-mean), the Frac-IFSVM version outperforms SVM, FSVM, IFSVM, Frac-SVM, and Frac-FSVM. Full article
(This article belongs to the Section Artificial Intelligence)
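As background for the generalisation described above, the ordinary-derivative KKT optimality conditions for the soft-margin SVM primal can be sketched as follows; this is the standard textbook form, not the fractional Frac-KKT-OC variant proposed in the paper.

```latex
% Soft-margin SVM primal
\min_{w,\,b,\,\xi}\ \tfrac{1}{2}\lVert w\rVert^2 + C\sum_{i=1}^{n}\xi_i
\quad \text{s.t.}\quad y_i(w^\top x_i + b) \ge 1 - \xi_i,\qquad \xi_i \ge 0.

% KKT optimality conditions (Lagrange multipliers \alpha_i, \mu_i \ge 0)
\frac{\partial L}{\partial w} = 0 \;\Rightarrow\; w = \sum_i \alpha_i y_i x_i,
\qquad
\frac{\partial L}{\partial b} = 0 \;\Rightarrow\; \sum_i \alpha_i y_i = 0,
\qquad
\frac{\partial L}{\partial \xi_i} = 0 \;\Rightarrow\; C - \alpha_i - \mu_i = 0,

% Complementary slackness
\alpha_i\bigl[y_i(w^\top x_i + b) - 1 + \xi_i\bigr] = 0,
\qquad \mu_i\,\xi_i = 0.
```

The Frac-KKT-OC approach replaces the ordinary partial derivatives in the stationarity conditions with fractional ones; the structure of the conditions is otherwise analogous.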

21 pages, 3420 KiB  
Article
Benchmarking for a New Railway Accident Classification Methodology and Its Database: A Case Study in Mexico, the United States, Canada, and the European Union
by Tania Elizabeth Sandoval-Valencia, Adriana del Carmen Téllez-Anguiano, Dante Ruiz-Robles, Ivon Alanis-Fuerte, Alexis Vaed Vázquez-Esquivel and Juan C. Jáuregui-Correa
Information 2024, 15(11), 736; https://doi.org/10.3390/info15110736 - 18 Nov 2024
Abstract
Rail accidents have decreased in recent years, although not significantly if measured by train accidents recorded in the last six years. Therefore, it is essential to identify weaknesses in the implementation of security and prevention systems. This research aims to study the trend and classification of railway accidents, as well as analyze public databases. Using the business management method of benchmarking, descriptive statistics, and a novel approach to the Ishikawa diagram, this study demonstrates best practices and strategies to reduce accidents. Unlike previous studies, this research specifically examines public databases and provides a framework for developing the standardization of railway accident causes and recommendations. The main conclusion is that the proposed classification of railway accident causes, and its associated database, ensures that agencies, researchers, and the government have accessible, easily linkable, and usable data references to enhance their analysis and support the continued reduction of accidents. Full article

13 pages, 296 KiB  
Article
LGFA-MTKD: Enhancing Multi-Teacher Knowledge Distillation with Local and Global Frequency Attention
by Xin Cheng and Jinjia Zhou
Information 2024, 15(11), 735; https://doi.org/10.3390/info15110735 - 18 Nov 2024
Abstract
Transferring the extensive and varied knowledge contained within multiple complex models into a more compact student model poses significant challenges in multi-teacher knowledge distillation. Traditional distillation approaches often fall short in this context, as they struggle to fully capture and integrate the wide range of valuable information from each teacher. The variation in the knowledge offered by various teacher models complicates the student model’s ability to learn effectively and generalize well, ultimately resulting in subpar results. To overcome these constraints, we introduce an innovative method that integrates both localized and globalized frequency attention techniques, aiming to substantially enhance the distillation process. By simultaneously focusing on fine-grained local details and broad global patterns, our approach allows the student model to more effectively grasp the complex and diverse information provided by each teacher, thereby enhancing its learning capability. This dual-attention mechanism allows for a more balanced assimilation of specific details and generalized concepts, resulting in a more robust and accurate student model. Extensive experimental evaluations on standard benchmarks demonstrate that our methodology reliably exceeds the performance of current multi-teacher distillation methods, yielding outstanding outcomes in terms of both performance and robustness. Specifically, our approach achieves an average performance improvement of 0.55% over CA-MKD, with a 1.05% gain in optimal conditions. These findings suggest that frequency-based attention mechanisms can unlock new potential in knowledge distillation, model compression, and transfer learning. Full article
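Setting the frequency-attention component aside, a generic multi-teacher distillation objective matches the student's softened predictions to the average of the teachers' softened distributions. The sketch below is a minimal pure-Python version of that baseline; the temperature scaling and T² factor follow standard knowledge distillation practice, and none of it reproduces the paper's LGFA mechanism.

```python
import math

def softmax(logits, T=1.0):
    """Temperature-softened softmax (numerically stabilised)."""
    m = max(l / T for l in logits)
    exps = [math.exp(l / T - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def multi_teacher_kd_loss(student_logits, teacher_logits_list, T=2.0):
    """KL divergence from the averaged softened teacher distribution
    to the student's softened distribution, scaled by T^2 as in standard KD."""
    k = len(student_logits)
    avg_teacher = [0.0] * k
    for tl in teacher_logits_list:
        p = softmax(tl, T)
        avg_teacher = [a + pi / len(teacher_logits_list)
                       for a, pi in zip(avg_teacher, p)]
    q = softmax(student_logits, T)
    return T * T * sum(p * math.log(p / qi)
                       for p, qi in zip(avg_teacher, q) if p > 0)

# A student aligned with its teachers incurs a smaller loss than a flipped one.
teachers = [[2.0, 0.5, 0.1], [1.8, 0.6, 0.2]]
aligned = multi_teacher_kd_loss([2.0, 0.5, 0.1], teachers)
flipped = multi_teacher_kd_loss([0.1, 0.5, 2.0], teachers)
print(aligned < flipped)  # → True
```

The paper's contribution can be read as re-weighting this averaging step with local and global frequency attention rather than the uniform mean used here.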

25 pages, 2987 KiB  
Article
Zero Trust VPN (ZT-VPN): A Systematic Literature Review and Cybersecurity Framework for Hybrid and Remote Work
by Syed Muhammad Zohaib, Syed Muhammad Sajjad, Zafar Iqbal, Muhammad Yousaf, Muhammad Haseeb and Zia Muhammad
Information 2024, 15(11), 734; https://doi.org/10.3390/info15110734 - 17 Nov 2024
Abstract
Modern organizations have migrated from localized physical offices to work-from-home environments. This surge in remote work culture has exponentially increased the demand for and usage of Virtual Private Networks (VPNs), which permit remote employees to access corporate offices effectively. However, the technology raises concerns, including security threats, latency, throughput, and scalability, among others. These newer-generation threats are more complex and frequent, which makes the legacy approach to security ineffective. This research paper gives an overview of contemporary technologies used across enterprises, including VPNs, Zero Trust Network Access (ZTNA), proxy servers, Secure Shell (SSH) tunnels, the software-defined wide area network (SD-WAN), and Secure Access Service Edge (SASE). This paper also presents a comprehensive cybersecurity framework named Zero Trust VPN (ZT-VPN), which is a VPN solution based on Zero Trust principles. The proposed framework aims to enhance IT security and privacy for modern enterprises in remote work environments and address concerns of latency, throughput, scalability, and security. Finally, this paper demonstrates the effectiveness of the proposed framework in various enterprise scenarios, highlighting its ability to prevent data leaks, manage access permissions, and provide seamless security transitions. The findings underscore the importance of adopting ZT-VPN to fortify cybersecurity frameworks, offering an effective protection tool against contemporary cyber threats. This research serves as a valuable reference for organizations aiming to enhance their security posture in an increasingly hostile threat landscape. Full article
(This article belongs to the Special Issue Feature Papers in Information in 2024–2025)

20 pages, 9472 KiB  
Article
Reduced-Order Model of Coal Seam Gas Extraction Pressure Distribution Based on Deep Neural Networks and Convolutional Autoencoders
by Tianxuan Hao, Lizhen Zhao, Yang Du, Yiju Tang, Fan Li, Zehua Wang and Xu Li
Information 2024, 15(11), 733; https://doi.org/10.3390/info15110733 - 16 Nov 2024
Abstract
There has been extensive research on the partial differential equations governing the theory of gas flow in coal mines. However, the traditional Proper Orthogonal Decomposition–Radial Basis Function (POD-RBF) reduced-order algorithm requires significant computational resources and is inefficient when calculating high-dimensional data for coal mine gas pressure fields. To achieve the rapid computation of gas extraction pressure fields, this paper proposes a model reduction method based on deep neural networks (DNNs) and convolutional autoencoders (CAEs). The CAE is used to compress and reconstruct full-order numerical solutions for coal mine gas extraction, while the DNN is employed to establish the nonlinear mapping between the physical parameters of gas extraction and the latent space parameters of the reduced-order model. The DNN-CAE model is applied to the reduced-order modeling of gas extraction flow–solid coupling mathematical models in coal mines. A full-order model pressure field numerical dataset for gas extraction was constructed, and optimal hyperparameters for the pressure field reconstruction model and latent space parameter prediction model were determined through hyperparameter testing. The performance of the DNN-CAE model order reduction algorithm was compared to the POD-RBF model order reduction algorithm. The results indicate that the DNN-CAE method has certain advantages over the traditional POD-RBF method in terms of pressure field reconstruction accuracy, overall structure retention, extremum capture, and computational efficiency. Full article

39 pages, 12256 KiB  
Article
Design Strategies to Minimize Mobile Usability Issues in Navigation Design Patterns
by Muhammad Umar, Ibrar Hussain, Toqeer Mahmood, Hamid Turab Mirza and C. M. Nadeem Faisal
Information 2024, 15(11), 732; https://doi.org/10.3390/info15110732 - 15 Nov 2024
Abstract
Recent developments in mobile technology have significantly improved the quality of life. Everyday life is increasingly dependent on mobile devices, as mobile applications target the needs of end users. However, many end users struggle with navigating mobile applications, leading to frustration, especially with sophisticated and unfamiliar interfaces. This study addresses specific usability issues in mobile applications by investigating the impact of introducing a floating action button (FAB) and named icons at the bottom of popular applications such as YouTube, Plex, and IMDb. The current research includes three studies: Study-1 explores the navigation issues that users face; Study-2 measures users’ experiences with improved navigation designs; and Study-3 compares the results of Study-1 and Study-2 to evaluate user experience with both the existing and improved navigation designs. A total of 147 participants took part, and the System Usability Scale was used to evaluate the navigation designs. The experiments indicated that the existing design patterns are complex and difficult to understand, leading to user frustration, compared with the newly designed and improved navigation patterns. Moreover, the proposed navigation patterns improved effectiveness, learnability, and usability. Consequently, the results highlight the importance of effective navigation design in improving user satisfaction and lowering frustration with mobile applications. Full article
(This article belongs to the Section Information and Communications Technology)
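The System Usability Scale used in this study has a fixed, well-known scoring rule: odd-numbered (positively worded) items contribute (response − 1), even-numbered items contribute (5 − response), and the total is multiplied by 2.5 to yield a 0-100 score. A minimal sketch of that computation:

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.
    Odd-numbered items are positively worded, even-numbered negatively."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses on a 1-5 scale")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # best case → 100.0
print(sus_score([3] * 10))                          # all-neutral → 50.0
```

Note that a SUS score is not a percentage; scores above roughly 68 are conventionally read as above-average usability.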

10 pages, 4055 KiB  
Article
Accurately Identifying Sound vs. Rotten Cranberries Using Convolutional Neural Network
by Sayed Mehedi Azim, Austin Spadaro, Joseph Kawash, James Polashock and Iman Dehzangi
Information 2024, 15(11), 731; https://doi.org/10.3390/info15110731 - 15 Nov 2024
Abstract
Cranberries, native to North America, are known for their nutritional value and human health benefits. One hurdle to commercial production is losses due to fruit rot. Cranberry fruit rot results from a complex of more than ten filamentous fungi, challenging breeding for resistance. Nonetheless, our collaborative breeding program has fruit rot resistance as a significant target. This program currently relies heavily on manual sorting of sound vs. rotten cranberries. This process is labor-intensive and time-consuming, prompting the need for an automated classification (sound vs. rotten) system. Although many studies have focused on classifying different fruits and vegetables, no such approach has been developed for cranberries yet, partly because datasets are lacking for conducting the necessary image analyses. This research addresses this gap by introducing a novel image dataset comprising sound and rotten cranberries to facilitate computational analysis. In addition, we developed CARP (Cranberry Assessment for Rot Prediction), a convolutional neural network (CNN)-based model to distinguish sound cranberries from rotten ones. With an accuracy of 97.4%, a sensitivity of 97.2%, and a specificity of 97.2% on the training dataset and 94.8%, 95.4%, and 92.7% on the independent dataset, respectively, our proposed CNN model shows its effectiveness in accurately differentiating between sound and rotten cranberries. Full article
(This article belongs to the Special Issue Feature Papers in Artificial Intelligence 2024)
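The reported accuracy, sensitivity, and specificity all derive from the four confusion-matrix counts of a binary (sound vs. rotten) classifier. A minimal sketch of that computation, using toy counts rather than the paper's actual confusion matrix:

```python
def binary_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (true-positive rate), and specificity
    (true-negative rate) from binary confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # of all positives, fraction caught
    specificity = tn / (tn + fp)   # of all negatives, fraction correctly rejected
    return accuracy, sensitivity, specificity

# Toy counts chosen for illustration, not the paper's results.
print(binary_metrics(tp=97, fp=3, tn=97, fn=3))  # → (0.97, 0.97, 0.97)
```

Reporting sensitivity and specificity alongside accuracy, as the paper does, guards against a model that looks accurate only because one class dominates.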

19 pages, 2108 KiB  
Article
Innovative Transitions: Exploring Demand for Smart City Development in Novi Sad as a European Capital of Culture
by Minja Bolesnikov, Mario Silić, Dario Silić, Boris Dumnić, Jelena Ćulibrk, Maja Petrović and Tamara Gajić
Information 2024, 15(11), 730; https://doi.org/10.3390/info15110730 - 15 Nov 2024
Abstract
This study investigates the factors influencing the acceptance and implementation of smart city solutions, with a particular focus on smart mobility and digital services in Novi Sad, one of the leading urban centers in Serbia. Employing a quantitative methodology, the research encompasses citizens’ perceptions of the benefits of smart technologies, their level of awareness regarding smart solutions, the degree of engagement in using digital services, and their interest in smart mobility. The results indicate that these factors are crucial for the successful integration of smart technologies. Notably, awareness of smart city initiatives and the perceived benefits, such as improved mobility, reduced traffic congestion, increased energy efficiency, and enhanced quality of life, are highlighted as key prerequisites for the adoption of these solutions. Novi Sad, as the European Capital of Culture in 2022, presents a unique opportunity for the implementation of these technologies. Our findings point to the need for strategic campaigns aimed at educating and raising public awareness. The practical implications of this study could contribute to shaping policies that encourage the development of smart cities, not only in Novi Sad but also in other urban areas across Serbia and the region. This study confirms the importance of citizen engagement and technological literacy in the transformation of urban environments through smart solutions, underscoring the potential of these technologies to improve everyday life and achieve sustainable urban development. Full article
(This article belongs to the Special Issue Artificial Intelligence Methods for Human-Computer Interaction)

19 pages, 851 KiB  
Article
The Ethics and Cybersecurity of Artificial Intelligence and Robotics in Helping The Elderly to Manage at Home
by Jyri Rajamäki and Jaakko Helin
Information 2024, 15(11), 729; https://doi.org/10.3390/info15110729 - 15 Nov 2024
Abstract
The aging population, combined with the scarcity of healthcare resources, presents significant challenges for our society. The use of artificial intelligence (AI) and robotics offers a potential solution to these challenges. However, such technologies also raise ethical and cybersecurity concerns related to the preservation of privacy, autonomy, and human contact. In this case study, we examine these ethical challenges and the opportunities brought by AI and robotics in the care of older individuals at home. This article aims to describe the current fragmented state of legislation related to the development and use of AI-based services and robotics and to reflect on their ethics and cybersecurity. The findings indicate that, guided by ethical principles, we can leverage the best aspects of technology while ensuring that older people can maintain a dignified and valued life at home. The careful handling of ethical issues should be viewed as a competitive advantage and an opportunity, rather than a burden. Full article
(This article belongs to the Special Issue Artificial Intelligence Methods for Human-Computer Interaction)

42 pages, 1544 KiB  
Review
Collaborative Intelligence for Safety-Critical Industries: A Literature Review
by Inês F. Ramos, Gabriele Gianini, Maria Chiara Leva and Ernesto Damiani
Information 2024, 15(11), 728; https://doi.org/10.3390/info15110728 - 12 Nov 2024
Abstract
While AI-driven automation can increase the performance and safety of systems, humans should not be replaced in safety-critical systems but should be integrated to collaborate and mitigate each other’s limitations. The current trend in Industry 5.0 is towards human-centric collaborative paradigms, with an emphasis on collaborative intelligence (CI) or Hybrid Intelligent Systems. In this survey, we search for and review recent work that employs AI methods for collaborative intelligence applications, specifically those that focus on safety and safety-critical industries. We aim to contribute to the research landscape and industry by compiling and analyzing a range of scenarios where AI can be used to achieve more efficient human–machine interactions, improved collaboration, coordination, and safety. We define a domain-focused taxonomy to categorize the diverse CI solutions, based on the type of collaborative interaction between intelligent systems and humans, the AI paradigm used, and the domain of the AI problem, while highlighting safety issues. We investigate 91 articles on CI research published between 2014 and 2023, providing insights into the trends, gaps, and techniques used, to guide recommendations for future research opportunities in the fast-developing collaborative intelligence field. Full article
(This article belongs to the Special Issue Advances in Explainable Artificial Intelligence, 2nd Edition)

21 pages, 4646 KiB  
Article
Analysis of Quantum-Classical Hybrid Deep Learning for 6G Image Processing with Copyright Detection
by Jongho Seol, Hye-Young Kim, Abhilash Kancharla and Jongyeop Kim
Information 2024, 15(11), 727; https://doi.org/10.3390/info15110727 - 12 Nov 2024
Viewed by 607
Abstract
This study investigates the integration of quantum computing, classical methods, and deep learning techniques for enhanced image processing in dynamic 6G networks, while also addressing essential aspects of copyright technology and detection. Our findings indicate that quantum methods excel in rapid edge detection [...] Read more.
This study investigates the integration of quantum computing, classical methods, and deep learning techniques for enhanced image processing in dynamic 6G networks, while also addressing essential aspects of copyright technology and detection. Our findings indicate that quantum methods excel in rapid edge detection and feature extraction but encounter difficulties in maintaining image quality compared to classical approaches. In contrast, classical methods preserve higher image fidelity but struggle to satisfy the real-time processing requirements of 6G applications. Deep learning techniques, particularly CNNs, demonstrate potential in complex image analysis tasks but demand substantial computational resources. To promote the ethical use of AI-generated images, we introduce copyright detection mechanisms that employ advanced algorithms to identify potential infringements in generated content. This integration improves adherence to intellectual property rights and legal standards, supporting the responsible implementation of image processing technologies. We suggest that the future of image processing in 6G networks resides in hybrid systems that effectively utilize the strengths of each approach while incorporating robust copyright detection capabilities. These insights contribute to the development of efficient, high-performance image processing systems in next-generation networks, highlighting the promise of integrated quantum–classical deep learning architectures within 6G environments. Full article
(This article belongs to the Section Information Applications)
Show Figures

Graphical abstract

13 pages, 1246 KiB  
Article
Sprint Management in Agile Approach: Progress and Velocity Evaluation Applying Machine Learning
by Yadira Jazmín Pérez Castillo, Sandra Dinora Orantes Jiménez and Patricio Orlando Letelier Torres
Information 2024, 15(11), 726; https://doi.org/10.3390/info15110726 - 12 Nov 2024
Viewed by 461
Abstract
Nowadays, technology plays a fundamental role in data collection and analysis, which are essential for decision-making in various fields. Agile methodologies have transformed project management by focusing on continuous delivery and adaptation to change. In multiple project management, assessing the progress and pace [...] Read more.
Nowadays, technology plays a fundamental role in data collection and analysis, which are essential for decision-making in various fields. Agile methodologies have transformed project management by focusing on continuous delivery and adaptation to change. In multiple project management, assessing the progress and pace of work in Sprints is particularly important. In this work, a data model was developed to evaluate the progress and pace of work, based on the visual interpretation of numerical data from certain graphs that allow tracking, such as the Burndown chart. Additionally, experiments with machine learning algorithms were carried out to validate the effectiveness and potential improvements facilitated by this dataset development. Full article
Show Figures

Graphical abstract

45 pages, 2381 KiB  
Review
AI for Decision Support: Balancing Accuracy, Transparency, and Trust Across Sectors
by Attila Kovari
Information 2024, 15(11), 725; https://doi.org/10.3390/info15110725 - 11 Nov 2024
Viewed by 950
Abstract
This study seeks to understand the key success factors that underpin efficiency, transparency, and user trust in automated decision support systems (DSS) that leverage AI technologies across industries. The aim of this study is to facilitate more accurate decision-making with such AI-based DSS, [...] Read more.
This study seeks to understand the key success factors that underpin efficiency, transparency, and user trust in automated decision support systems (DSS) that leverage AI technologies across industries. The aim of this study is to facilitate more accurate decision-making with such AI-based DSS, as well as build trust through the need for visibility and explainability by increasing user acceptance. This study primarily examines the nature of AI-based DSS adoption and the challenges of maintaining system transparency and improving accuracy. The results provide practical guidance for professionals and decision-makers to develop AI-driven decision support systems that are not only effective but also trusted by users. The results also offer insight into how artificial intelligence fits into and combines with decision-making, and into how such systems can be embedded within ethical standards. Full article
Show Figures

Graphical abstract

19 pages, 6226 KiB  
Article
Optimization of Business Processes Through BPM Methodology: A Case Study on Data Analysis and Performance Improvement
by António Ricardo Teixeira, José Vasconcelos Ferreira and Ana Luísa Ramos
Information 2024, 15(11), 724; https://doi.org/10.3390/info15110724 - 11 Nov 2024
Viewed by 451
Abstract
This study explores the application of the BPM lifecycle to optimize the market analysis process within the market intelligence department of a major energy company. The semi-structured, virtual nature of the process necessitated careful adaptation of BPM methodology, starting with process discovery through [...] Read more.
This study explores the application of the BPM lifecycle to optimize the market analysis process within the market intelligence department of a major energy company. The semi-structured, virtual nature of the process necessitated careful adaptation of BPM methodology, starting with process discovery through data collection, modeling, and validation. Qualitative analysis, including value-added and root-cause analysis, revealed inefficiencies. The redesign strategy focused on selective automation using Python 3.10 scripts and Power BI dashboards, incorporating techniques such as linear programming and forecasting to improve process efficiency and quality while maintaining flexibility. Post-implementation, monitoring through a questionnaire showed positive results, though ongoing interviews were recommended for sustained performance evaluation. This study highlights the value of BPM methodology in enhancing decision-critical processes and offers a model for adaptable, value-driven process improvements in complex organizational environments. Full article
(This article belongs to the Special Issue Blockchain Applications for Business Process Management)
Show Figures

Figure 1

14 pages, 1898 KiB  
Article
Privacy-Preserving ConvMixer Without Any Accuracy Degradation Using Compressible Encrypted Images
by Haiwei Lin, Shoko Imaizumi and Hitoshi Kiya
Information 2024, 15(11), 723; https://doi.org/10.3390/info15110723 - 11 Nov 2024
Viewed by 382
Abstract
We propose an enhanced privacy-preserving method for image classification using ConvMixer, which is an extremely simple model that is similar in spirit to the Vision Transformer (ViT). Most privacy-preserving methods using encrypted images cause the performance of models to degrade due to the [...] Read more.
We propose an enhanced privacy-preserving method for image classification using ConvMixer, which is an extremely simple model that is similar in spirit to the Vision Transformer (ViT). Most privacy-preserving methods using encrypted images cause the performance of models to degrade due to the influence of encryption, but a state-of-the-art method was demonstrated to have the same classification accuracy as that of models without any encryption under the use of ViT. However, the method, in which a common secret key is assigned to each patch, is not robust enough against ciphertext-only attacks (COAs) including jigsaw puzzle solver attacks if compressible encrypted images are used. In addition, ConvMixer is less robust than ViT because there is no position embedding. To overcome this issue, we propose a novel block-wise encryption method that allows us to assign an independent key to each patch to enhance robustness against attacks. In experiments, the effectiveness of the method is verified in terms of image classification accuracy and robustness, and it is compared with conventional privacy-preserving methods using image encryption. Full article
(This article belongs to the Special Issue Deep Learning for Image, Video and Signal Processing)
Show Figures

Figure 1

22 pages, 3889 KiB  
Article
Malware Classification Using Few-Shot Learning Approach
by Khalid Alfarsi, Saim Rasheed and Iftikhar Ahmad
Information 2024, 15(11), 722; https://doi.org/10.3390/info15110722 - 11 Nov 2024
Viewed by 482
Abstract
Malware detection, targeting the microarchitecture of processors, has recently come to light as a potentially effective way to improve computer system security. Hardware Performance Counter data are used by machine learning algorithms in security mechanisms, such as hardware-based malware detection, to categorize and [...] Read more.
Malware detection, targeting the microarchitecture of processors, has recently come to light as a potentially effective way to improve computer system security. Hardware Performance Counter data are used by machine learning algorithms in security mechanisms, such as hardware-based malware detection, to categorize and detect malware. It is crucial to determine whether or not a file contains malware: the rise in malware has caused businesses to lose vital data and face numerous other problems, and malware can quickly damage a system by slowing it down and encrypting large amounts of data on a personal computer. This study provides extensive details on a flexible framework combining machine learning and deep learning techniques with few-shot learning. Malware detection is possible using DT, RF, LR, SVM, and FSL techniques; these algorithms make it simple to differentiate between files that are malware-free and those that are not, with the goal of reducing the number of false positives. In this research work, we mainly focus on few-shot learning techniques, using two different datasets from an online platform. The proposed model has a 97% accuracy rate, which is much greater than that of other techniques. Full article
Show Figures

Figure 1

20 pages, 4260 KiB  
Review
Advances and Challenges in Automated Drowning Detection and Prevention Systems
by Maad Shatnawi, Frdoos Albreiki, Ashwaq Alkhoori, Mariam Alhebshi and Anas Shatnawi
Information 2024, 15(11), 721; https://doi.org/10.3390/info15110721 - 11 Nov 2024
Viewed by 773
Abstract
Drowning is among the most common causes of death for children aged one to fourteen around the globe, ranking as the third leading cause of unintentional injury death. With rising populations and the growing popularity of swimming pools in hotels and villas, the incidence [...] Read more.
Drowning is among the most common causes of death for children aged one to fourteen around the globe, ranking as the third leading cause of unintentional injury death. With rising populations and the growing popularity of swimming pools in hotels and villas, the incidence of drowning has risen. Accordingly, the development of systems for detecting and preventing drowning has become increasingly critical to provide safe swimming settings. In this paper, we propose a comprehensive review of recent existing advancements in automated drowning detection and prevention systems. The existing approaches can be broadly categorized according to their objectives into two main groups: detection-based systems, which alert lifeguards or parents to perform manual rescues, and detection and rescue-based systems, which integrate detection with automatic rescue mechanisms. Automatic drowning detection approaches could be further categorized into computer vision-based approaches, where camera-captured images are analyzed by machine learning algorithms to detect instances of drowning, and sensing-based approaches, where sensing instruments are attached to swimmers to monitor their physical parameters. We explore the advantages and limitations of each approach. Additionally, we highlight technical challenges and unresolved issues related to this domain, such as data imbalance, accuracy, privacy concerns, and integration with rescue systems. We also identify future research opportunities, emphasizing the need for more advanced AI models, uniform datasets, and better integration of detection with autonomous rescue mechanisms. This study aims to provide a critical resource for researchers and practitioners, facilitating the development of more effective systems to enhance water safety and minimize drowning incidents. Full article
(This article belongs to the Special Issue Computer Vision for Security Applications)
Show Figures

Figure 1

13 pages, 185 KiB  
Article
Deep Learning and Knowledge
by Donald Gillies
Information 2024, 15(11), 720; https://doi.org/10.3390/info15110720 - 11 Nov 2024
Viewed by 321
Abstract
This paper considers the question of what kind of knowledge is produced by deep learning. Ryle’s concept of knowledge how is examined and is contrasted with knowledge with a rationale. It is then argued that deep neural networks do produce knowledge how, [...] Read more.
This paper considers the question of what kind of knowledge is produced by deep learning. Ryle’s concept of knowledge how is examined and is contrasted with knowledge with a rationale. It is then argued that deep neural networks do produce knowledge how, but, because of their opacity, they do not in general, though there may be some special cases to the contrary, produce knowledge with a rationale. It is concluded that the distinction between knowledge how and knowledge with a rationale is a useful one for judging whether a particular application of deep learning AI is appropriate. Full article
(This article belongs to the Special Issue Artificial Intelligence Methods for Human-Computer Interaction)
30 pages, 1536 KiB  
Review
From Data to Diagnosis: Machine Learning Revolutionizes Epidemiological Predictions
by Abdul Aziz Abdul Rahman, Gowri Rajasekaran, Rathipriya Ramalingam, Abdelrhman Meero and Dhamodharavadhani Seetharaman
Information 2024, 15(11), 719; https://doi.org/10.3390/info15110719 - 8 Nov 2024
Viewed by 660
Abstract
Outbreaks of epidemiological diseases have a major impact on humanity as well as on the world’s economy, and the consequences of such infectious diseases threaten the survival of mankind. Governments have to stand up to the negative influence of these epidemiological diseases [...] Read more.
Outbreaks of epidemiological diseases have a major impact on humanity as well as on the world’s economy, and the consequences of such infectious diseases threaten the survival of mankind. Governments have to stand up to the negative influence of these epidemiological diseases and provide society with medical resources and economic support. In recent times, COVID-19 has been one of the epidemiological diseases that caused lethal effects and a deep slump in the economy. Therefore, predicting outbreaks, whether of recurrent or sudden infections in society, is essential. The application of prediction models has risen remarkably in recent years. This article highlights a study of these epidemiological prediction models and their usage from 2018 onwards, and emphasizes and summarizes the popularity of various prediction approaches. Full article
(This article belongs to the Special Issue Health Data Information Retrieval)
Show Figures

Graphical abstract

13 pages, 22601 KiB  
Article
Lightweight Reference-Based Video Super-Resolution Using Deformable Convolution
by Tomo Miyazaki, Zirui Guo and Shinichiro Omachi
Information 2024, 15(11), 718; https://doi.org/10.3390/info15110718 - 8 Nov 2024
Viewed by 405
Abstract
Super-resolution is a technique for generating a high-resolution image or video from a low-resolution counterpart by predicting natural and realistic texture information. It has various applications such as medical image analysis, surveillance, remote sensing, etc. However, traditional single-image super-resolution methods can lead to [...] Read more.
Super-resolution is a technique for generating a high-resolution image or video from a low-resolution counterpart by predicting natural and realistic texture information. It has various applications such as medical image analysis, surveillance, remote sensing, etc. However, traditional single-image super-resolution methods can lead to a blurry visual effect. Reference-based super-resolution methods have been proposed to recover detailed information accurately. In reference-based methods, a high-resolution image is also used as a reference in addition to the low-resolution input image. Reference-based methods aim at transferring high-resolution textures from the reference image to produce visually pleasing results. However, this requires texture alignment between low-resolution and reference images, which generally requires a lot of time and memory. This paper proposes a lightweight reference-based video super-resolution method using deformable convolution. The proposed method makes reference-based super-resolution a technology that can be easily used even in environments with limited computational resources. To verify the effectiveness of the proposed method, we conducted experiments to compare the proposed method with baseline methods in two aspects: runtime and memory usage, in addition to accuracy. The experimental results showed that the proposed method restored a high-quality super-resolved image from a very low-resolution level in 0.0138 s using two NVIDIA RTX 2080 GPUs, much faster than the representative method. Full article
(This article belongs to the Special Issue Deep Learning for Image, Video and Signal Processing)
Show Figures

Figure 1

20 pages, 11655 KiB  
Article
Variational Color Shift and Auto-Encoder Based on Large Separable Kernel Attention for Enhanced Text CAPTCHA Vulnerability Assessment
by Xing Wan, Juliana Johari and Fazlina Ahmat Ruslan
Information 2024, 15(11), 717; https://doi.org/10.3390/info15110717 - 7 Nov 2024
Viewed by 445
Abstract
Text CAPTCHAs are crucial security measures deployed on global websites to deter unauthorized intrusions. The presence of anti-attack features incorporated into text CAPTCHAs limits the effectiveness of evaluating them, despite CAPTCHA recognition being an effective method for assessing their security. This study introduces [...] Read more.
Text CAPTCHAs are crucial security measures deployed on global websites to deter unauthorized intrusions. Although CAPTCHA recognition is an effective method for assessing their security, the anti-attack features incorporated into text CAPTCHAs limit the effectiveness of such evaluations. This study introduces a novel color augmentation technique called Variational Color Shift (VCS) to boost the recognition accuracy of different networks. VCS generates a color shift range for every input image and then resamples the image within that range to generate a new image, thus expanding the number of samples in the original dataset to improve training effectiveness. In contrast to Random Color Shift (RCS), which treats the color offsets as hyperparameters, VCS estimates color shifts by reparametrizing the points sampled from the uniform distribution using offsets predicted for every image, which makes the color shifts learnable. To better balance computation and performance, we also propose two variants of VCS: Sim-VCS and Dilated-VCS. In addition, to solve the overfitting problem caused by disturbances in text CAPTCHAs, we propose an Auto-Encoder (AE) based on Large Separable Kernel Attention (AE-LSKA) to replace the convolutional module with large kernels in the text CAPTCHA recognizer. This new module employs an AE to compress the interference while expanding the receptive field using Large Separable Kernel Attention (LSKA), reducing the impact of local interference on model training and improving the overall perception of characters. The experimental results show that the recognition accuracy of the model after integrating the AE-LSKA module is improved by at least 15 percentage points on both the M-CAPTCHA and P-CAPTCHA datasets. In addition, experimental results demonstrate that color augmentation using VCS is more effective in enhancing recognition, achieving higher accuracy than RCS and PCA Color Shift (PCA-CS). Full article
(This article belongs to the Special Issue Computer Vision for Security Applications)
Show Figures

Figure 1

17 pages, 1369 KiB  
Article
Enabling Parallel Performance and Portability of Solid Mechanics Simulations Across CPU and GPU Architectures
by Nathaniel Morgan, Caleb Yenusah, Adrian Diaz, Daniel Dunning, Jacob Moore, Erin Heilman, Evan Lieberman, Steven Walton, Sarah Brown, Daniel Holladay, Russell Marki, Robert Robey and Marko Knezevic
Information 2024, 15(11), 716; https://doi.org/10.3390/info15110716 - 7 Nov 2024
Viewed by 597
Abstract
Efficiently simulating solid mechanics is vital across various engineering applications. As constitutive models grow more complex and simulations scale up in size, harnessing the capabilities of modern computer architectures has become essential for achieving timely results. This paper presents advancements in running parallel [...] Read more.
Efficiently simulating solid mechanics is vital across various engineering applications. As constitutive models grow more complex and simulations scale up in size, harnessing the capabilities of modern computer architectures has become essential for achieving timely results. This paper presents advancements in running parallel simulations of solid mechanics on multi-core CPUs and GPUs using a single-code implementation. This portability is made possible by the C++ matrix and array (MATAR) library, which interfaces with the C++ Kokkos library, enabling the selection of fine-grained parallelism backends (e.g., CUDA, HIP, OpenMP, pthreads, etc.) at compile time. MATAR simplifies the transition from Fortran to C++ and Kokkos, making it easier to modernize legacy solid mechanics codes. We applied this approach to modernize a suite of constitutive models and to demonstrate substantial performance improvements across different computer architectures. This paper includes comparative performance studies using multi-core CPUs along with AMD and NVIDIA GPUs. Results are presented using a hypoelastic–plastic model, a crystal plasticity model, and the viscoplastic self-consistent generalized material model (VPSC-GMM). The results underscore the potential of using the MATAR library and modern computer architectures to accelerate solid mechanics simulations. Full article
(This article belongs to the Special Issue Advances in High Performance Computing and Scalable Software)
Show Figures

Figure 1

14 pages, 4810 KiB  
Article
Two-Stage Combined Model for Short-Term Electricity Forecasting in Ports
by Wentao Song, Xiaohua Cao, Hanrui Jiang, Zejun Li and Ruobin Gao
Information 2024, 15(11), 715; https://doi.org/10.3390/info15110715 - 7 Nov 2024
Viewed by 399
Abstract
With an increasing emphasis on energy conservation, emission reduction, and power consumption management, port enterprises are focusing on enhancing their electricity load forecasting capabilities. Accurate electricity load forecasting is crucial for understanding power usage and optimizing energy allocation. This study introduces a novel [...] Read more.
With an increasing emphasis on energy conservation, emission reduction, and power consumption management, port enterprises are focusing on enhancing their electricity load forecasting capabilities. Accurate electricity load forecasting is crucial for understanding power usage and optimizing energy allocation. This study introduces a novel approach that transcends the limitations of single prediction models by employing a Binary Fusion Weight Determination Method (BFWDM) to optimize and integrate three distinct prediction models: Temporal Pattern Attention Long Short-Term Memory (TPA-LSTM), Multi-Quantile Recurrent Neural Network (MQ-RNN), and Deep Factors. We propose a two-phase process for constructing an optimal combined forecasting model for port power load prediction. In the initial phase, individual prediction models generate preliminary outcomes. In the subsequent phase, these preliminary predictions are used to construct a combination forecasting model based on the BFWDM. The efficacy of the proposed model is validated using data from two actual ports, demonstrating high prediction accuracy with Mean Absolute Percentage Errors (MAPEs) of only 6.23% and 7.94%. This approach not only enhances the prediction accuracy but also improves the adaptability and stability of the model compared to other existing models. Full article
(This article belongs to the Section Information Applications)
Show Figures

Figure 1
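The weighted-fusion idea and the MAPE metric reported in the abstract above can be illustrated with a small sketch. This is a generic inverse-error weighting, not the paper's BFWDM, and the model outputs (`model_a`, `model_b`) are hypothetical stand-ins for TPA-LSTM and MQ-RNN forecasts:

```python
def mape(actual, predicted):
    """Mean Absolute Percentage Error, in percent."""
    return 100.0 * sum(abs(a - p) / abs(a)
                       for a, p in zip(actual, predicted)) / len(actual)

def fuse(predictions, weights):
    """Combine per-model forecasts point by point with fixed weights."""
    return [sum(w * p[i] for w, p in zip(weights, predictions))
            for i in range(len(predictions[0]))]

actual  = [100.0, 200.0, 150.0]   # validation-window load values (toy data)
model_a = [110.0, 190.0, 160.0]   # hypothetical TPA-LSTM forecast
model_b = [ 90.0, 210.0, 140.0]   # hypothetical MQ-RNN forecast

# Weight each model by its inverse validation MAPE, then normalize so
# the weights sum to one.
inv = [1.0 / mape(actual, m) for m in (model_a, model_b)]
weights = [v / sum(inv) for v in inv]
combined = fuse((model_a, model_b), weights)
```

In this toy case the two models err symmetrically, so the fused forecast recovers the actual series exactly; in practice the weights would be determined on a held-out window and the gain would be partial.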

26 pages, 951 KiB  
Article
Maternal Nutritional Factors Enhance Birthweight Prediction: A Super Learner Ensemble Approach
by Muhammad Mursil, Hatem A. Rashwan, Pere Cavallé-Busquets, Luis A. Santos-Calderón, Michelle M. Murphy and Domenec Puig
Information 2024, 15(11), 714; https://doi.org/10.3390/info15110714 - 6 Nov 2024
Viewed by 412
Abstract
Birthweight (BW) is a widely used indicator of neonatal health, with low birthweight (LBW) being linked to higher risks of morbidity and mortality. Timely and precise prediction of LBW is crucial for ensuring newborn health and well-being. Despite recent machine learning advancements in [...] Read more.
Birthweight (BW) is a widely used indicator of neonatal health, with low birthweight (LBW) being linked to higher risks of morbidity and mortality. Timely and precise prediction of LBW is crucial for ensuring newborn health and well-being. Despite recent machine learning advancements in BW classification based on physiological traits in the mother and ultrasound outcomes, maternal status in essential micronutrients for fetal development is yet to be fully exploited for BW prediction. This study aims to evaluate the impact of maternal nutritional factors, specifically mid-pregnancy plasma concentrations of vitamin B12 and folate, and anemia, on BW prediction. This study analyzed data from 729 pregnant women in Tarragona, Spain, for early BW prediction and assessed each factor’s impact and contribution using partial dependency plots and feature importance. Using a super learner ensemble method with tenfold cross-validation, the model achieved a prediction accuracy of 96.19% and an AUC-ROC of 0.96, outperforming single-model approaches. Vitamin B12 and folate status were identified as significant predictors, underscoring their importance in reducing LBW risk. The findings highlight the critical role of maternal nutritional factors in BW prediction and suggest that monitoring vitamin B12 and folate levels during pregnancy could enhance prenatal care and mitigate neonatal complications associated with LBW. Full article
(This article belongs to the Special Issue Real-World Applications of Machine Learning Techniques)
Show Figures

Graphical abstract
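The cross-validated stacking step behind a super learner, as mentioned in the abstract above, can be sketched in pure Python. The meta-learning step here is a simplified stand-in: each base model is weighted by the inverse of its out-of-fold squared error. The toy learners and the perfectly linear data are hypothetical, not the study's clinical features:

```python
import random

def kfold_indices(n, k):
    """Yield (train, holdout) index lists for k-fold cross-validation."""
    idx = list(range(n))
    random.Random(0).shuffle(idx)          # fixed seed for reproducibility
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        train = [j for f, fold in enumerate(folds) if f != i for j in fold]
        yield train, folds[i]

def super_learner_weights(X, y, learners, k=10):
    """Weight each base learner by the inverse of its out-of-fold MSE.

    A real super learner fits a meta-model on the out-of-fold
    predictions; inverse-error weighting is a minimal substitute.
    """
    errors = [0.0] * len(learners)
    for train, hold in kfold_indices(len(y), k):
        for m, fit in enumerate(learners):
            predict = fit([X[i] for i in train], [y[i] for i in train])
            errors[m] += sum((predict(X[i]) - y[i]) ** 2 for i in hold)
    inv = [1.0 / (e + 1e-9) for e in errors]   # guard against zero error
    total = sum(inv)
    return [v / total for v in inv]

# Two toy base learners: a global-mean predictor and a single-feature
# linear rule (least-squares slope through the origin).
def mean_learner(X, y):
    mu = sum(y) / len(y)
    return lambda x: mu

def slope_learner(X, y):
    num = sum(xi[0] * yi for xi, yi in zip(X, y))
    den = sum(xi[0] ** 2 for xi in X) or 1.0
    b = num / den
    return lambda x: b * x[0]

X = [[i] for i in range(1, 21)]
y = [3.0 * i for i in range(1, 21)]        # exactly linear target
w = super_learner_weights(X, y, [mean_learner, slope_learner], k=10)
```

Because the target is exactly linear, nearly all the weight lands on the slope learner, mirroring how the ensemble favors whichever base model generalizes best out-of-fold.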

4 pages, 162 KiB  
Editorial
Best IDEAS: Special Issue of the International Database Engineered Applications Symposium
by Peter Z. Revesz
Information 2024, 15(11), 713; https://doi.org/10.3390/info15110713 - 6 Nov 2024
Viewed by 358
Abstract
Database engineered applications cover a broad range of topics including various design and maintenance methods, as well as data analytics and data mining algorithms and learning strategies for enterprise, distributed, or federated data stores [...] Full article
(This article belongs to the Special Issue Best IDEAS: International Database Engineered Applications Symposium)
18 pages, 2925 KiB  
Article
Exploring the Features and Trends of Industrial Product E-Commerce in China Using Text-Mining Approaches
by Zhaoyang Sun, Qi Zong, Yuxin Mao and Gongxing Wu
Information 2024, 15(11), 712; https://doi.org/10.3390/info15110712 - 6 Nov 2024
Viewed by 466
Abstract
Industrial product e-commerce refers to the specific application of the e-commerce concept in industrial product transactions. It enables industrial enterprises to conduct transactions via Internet platforms and reduce circulation and operating costs. Industrial literature, such as policies, reports, and standards related to industrial [...] Read more.
Industrial product e-commerce refers to the specific application of the e-commerce concept in industrial product transactions. It enables industrial enterprises to conduct transactions via Internet platforms and reduce circulation and operating costs. Industrial literature, such as policies, reports, and standards related to industrial product e-commerce, contains much crucial information. Through a systematic analysis of this information, we can explore and comprehend the development characteristics and trends of industrial product e-commerce. To this end, 18 policy documents, 10 industrial reports, and five standards are analyzed by employing text-mining methods. Firstly, natural language processing (NLP) technology is utilized to pre-process the text data related to industrial product e-commerce. Then, word frequency statistics and TF-IDF keyword extraction are performed, and the word frequency statistics are visually represented. Subsequently, the feature set is obtained by combining these processes with the manual screening method. The original text corpus is used as the training set by employing the skip-gram model in Word2Vec, and the feature words are transformed into word vectors in a multi-dimensional space. The K-means algorithm is used to cluster the feature words into groups. The latent Dirichlet allocation (LDA) method is then utilized to further group and discover the features. The text-mining results provide evidence for the development characteristics and trends of industrial product e-commerce in China. Full article
Show Figures

Graphical abstract
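The TF-IDF keyword-extraction step described in the abstract above can be sketched in a few lines of pure Python. The toy corpus is hypothetical, standing in for the study's policy, report, and standard texts:

```python
import math
from collections import Counter

def tfidf_scores(docs):
    """Score each term of each tokenized document by TF-IDF.

    TF is the term count normalized by document length; IDF is
    log(N / df), where df counts the documents containing the term.
    """
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))            # count each term once per document
    scores = []
    for doc in docs:
        tf = Counter(doc)
        scores.append({t: (c / len(doc)) * math.log(n / df[t])
                       for t, c in tf.items()})
    return scores

docs = [
    "industrial product ecommerce platform".split(),
    "ecommerce policy report standard".split(),
    "industrial policy trend analysis".split(),
]
scores = tfidf_scores(docs)
# Terms unique to a document outscore terms shared across documents.
top = max(scores[0], key=scores[0].get)
```

Terms such as "product" or "platform", which occur in only one document, receive the highest scores there; in the study, ranked keywords like these would then feed the Word2Vec, K-means, and LDA stages.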
