Artificial Intelligence and Data Science

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Mathematics and Computer Science".

Deadline for manuscript submissions: 30 November 2024 | Viewed by 37916

Special Issue Editors

Dr. Shuo Yu
School of Computer Science and Technology, Dalian University of Technology, Dalian 116024, China
Interests: data science; network science; knowledge science; anomaly detection

Dr. Feng Xia
Institute of Innovation, Science and Sustainability, Federation University Australia, Ballarat, VIC 3353, Australia
Interests: data science; artificial intelligence; graph learning; anomaly detection; systems engineering

Special Issue Information

Dear Colleagues,

Data science provides the fundamental theory and methodology of data mining. The emergence of artificial intelligence (AI) technology has broadened and deepened data science, which in turn benefits a variety of applications, including cyber security, fraud detection, healthcare, and transportation. Hybrid approaches that integrate AI technology, combining analysis, modeling, computation, and learning, have been proposed to study the process from data to information, to knowledge, and to decisions. The development of AI technology will help clarify theoretical boundaries and provide new opportunities for the continuous development of data science. At the same time, the development of data science technology and the emergence of new intelligence paradigms will also facilitate the application of AI in many scenarios.

Although big data and computational intelligence technologies have made great progress in many engineering applications, the theoretical basis and technical mechanisms of AI and data science are still at an early stage. A single-point breakthrough in either AI or data science can hardly provide sustainable support for big data-driven intelligent applications, so the fundamental issues of AI and data science must be considered deeply and urgently. This Special Issue therefore aims to enhance or reconstruct the theoretical cornerstones of AI and data science so as to promote the continuous progress and leapfrog development of real-world applications. Specifically, this Special Issue will try to answer the following questions: (1) How can we break the boundaries among disciplines, methodologies, and theories to further advance AI and data science technologies? (2) What will be the new paradigm of AI and data science? (3) How can AI and data science technologies further benefit real-world applications? The topics of interest for this Special Issue address the application of AI and data science methods, including, but not limited to, the following:

  • Knowledge-driven AI technologies;
  • Advanced deep learning approaches such as fairness learning;
  • Security, trust, and privacy;
  • Few-shot learning, one-shot learning, and zero-shot learning;
  • Data governance strategies and technologies;
  • Intelligent computing such as automated machine learning (AutoML), lifelong learning, etc.;
  • Urgent applications such as anomaly detection;
  • Complexity theory;
  • High-performance computing;
  • Big data technologies and applications;
  • Data analytics and visualization;
  • Real-world AI and data science applications such as healthcare, transportation, etc.

Dr. Shuo Yu
Dr. Feng Xia
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • data science
  • deep learning
  • big data
  • data mining

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (25 papers)


Research

18 pages, 4793 KiB  
Article
Real-Time Run-Off-Road Risk Prediction Based on Deep Learning Sequence Forecasting Approach
by Yunteng Chen, Lijun Wei, Qiong Bao and Huansong Zhang
Mathematics 2024, 12(22), 3456; https://doi.org/10.3390/math12223456 - 5 Nov 2024
Viewed by 505
Abstract
Driving risk prediction is crucial for advanced driving technologies, with deep learning approaches leading the way in driving safety analysis. Current driving risk prediction methods typically establish a mapping between driving features and risk statuses. However, status prediction fails to provide detailed risk sequence information, and existing driving safety analyses seldom focus on run-off-road (ROR) risk. This study extracted 660 near-roadside lane-changing samples from the highD natural driving dataset. The performance of sequence and status prediction for ROR risk was compared across five mainstream deep learning models: LSTM, CNN, LSTM-CNN, CNN-LSTM-MA, and Transformer. The results indicate the following: (1) The deep learning approach effectively predicts ROR risk. The Macro F1 Score of sequence prediction significantly surpasses that of status prediction, with no notable difference in efficiency; (2) Sequence prediction captures risk evolution trends, such as increases, turns, and declines, providing more comprehensive safety information; (3) The presence of surrounding vehicles significantly impacts lane change duration and ROR risk. This study offers new insights into the quantitative research of ROR risk, demonstrating that risk sequence prediction is superior to status prediction in multiple aspects and can provide theoretical support for the development of roadside safety. Full article
(This article belongs to the Special Issue Artificial Intelligence and Data Science)
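
To make the abstract's sequence-versus-status distinction concrete, here is a minimal sketch (not the authors' code; the feature count, sequence length, and number of risk classes are illustrative assumptions) of an LSTM that emits a risk label at every time step, with status prediction recovered from the final step alone:

```python
# Hypothetical sketch: per-step "sequence" prediction vs. final-step "status"
# prediction with one LSTM. Dimensions are placeholders, not the paper's.
import torch
import torch.nn as nn

class RiskSequenceLSTM(nn.Module):
    def __init__(self, n_features=8, hidden=64, n_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)   # per-step risk logits

    def forward(self, x):              # x: (batch, time, n_features)
        out, _ = self.lstm(x)          # (batch, time, hidden)
        return self.head(out)          # (batch, time, n_classes)

model = RiskSequenceLSTM()
x = torch.randn(32, 50, 8)             # 32 lane-change samples, 50 time steps
logits = model(x)                      # sequence prediction: a label per step
status_logits = logits[:, -1, :]       # status prediction: final step only
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 3),
                             torch.randint(0, 3, (32 * 50,)))
```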

29 pages, 688 KiB  
Article
Hybrid Approach to Automated Essay Scoring: Integrating Deep Learning Embeddings with Handcrafted Linguistic Features for Improved Accuracy
by Muhammad Faseeh, Abdul Jaleel, Naeem Iqbal, Anwar Ghani, Akmalbek Abdusalomov, Asif Mehmood and Young-Im Cho
Mathematics 2024, 12(21), 3416; https://doi.org/10.3390/math12213416 - 31 Oct 2024
Viewed by 730
Abstract
Automated Essay Scoring (AES) systems face persistent challenges in delivering accuracy and efficiency in evaluations. This study introduces an approach that combines embeddings generated using RoBERTa with handcrafted linguistic features, leveraging Lightweight XGBoost (LwXGBoost) for enhanced scoring precision. The embeddings capture the contextual and semantic aspects of essay content, while handcrafted features incorporate domain-specific attributes such as grammar errors, readability, and sentence length. This hybrid feature set allows LwXGBoost to handle high-dimensional data and model intricate feature interactions effectively. Our experiments on a diverse AES dataset, consisting of essays from students across various educational levels, yielded a QWK score of 0.941. This result demonstrates the superior scoring accuracy and the model’s robustness against noisy and sparse data. The research underscores the potential for integrating embeddings with traditional handcrafted features to improve automated assessment systems. Full article
(This article belongs to the Special Issue Artificial Intelligence and Data Science)
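
The hybrid feature construction described above can be sketched as follows. This is an illustrative approximation: a stock XGBoost regressor stands in for the paper's LwXGBoost, and the handcrafted features are simplified placeholders for the grammar, readability, and length attributes mentioned in the abstract.

```python
# Hedged sketch: RoBERTa first-token embedding concatenated with simple
# handcrafted features, fed to an XGBoost regressor as a stand-in scorer.
import numpy as np
import torch
import xgboost as xgb
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("roberta-base")
encoder = AutoModel.from_pretrained("roberta-base")

def roberta_embedding(text):
    inputs = tok(text, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = encoder(**inputs)
    return out.last_hidden_state[:, 0, :].squeeze(0).numpy()  # first-token vector

def handcrafted(text):
    words = text.split()
    # placeholders for grammar-error counts, readability scores, etc.
    return np.array([len(words), np.mean([len(w) for w in words]), text.count(".")])

essays = ["A short example essay.", "Another essay, slightly longer than the first."]
scores = np.array([2.0, 3.0])
X = np.vstack([np.hstack([roberta_embedding(e), handcrafted(e)]) for e in essays])
reg = xgb.XGBRegressor(n_estimators=200, max_depth=4).fit(X, scores)
```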

25 pages, 3788 KiB  
Article
Yet Another Discriminant Analysis (YADA): A Probabilistic Model for Machine Learning Applications
by Richard V. Field, Jr., Michael R. Smith, Ellery J. Wuest and Joe B. Ingram
Mathematics 2024, 12(21), 3392; https://doi.org/10.3390/math12213392 - 30 Oct 2024
Viewed by 458
Abstract
This paper presents a probabilistic model for various machine learning (ML) applications. While deep learning (DL) has produced state-of-the-art results in many domains, DL models are complex and over-parameterized, which leads to high uncertainty about what the model has learned, as well as its decision process. Further, DL models are not probabilistic, making reasoning about their output challenging. In contrast, the proposed model, referred to as Yet Another Discriminant Analysis (YADA), is less complex than other methods, is based on a mathematically rigorous foundation, and can be utilized for a wide variety of ML tasks including classification, explainability, and uncertainty quantification. YADA is thus competitive in most cases with many state-of-the-art DL models. Ideally, a probabilistic model would represent the full joint probability distribution of its features, but doing so is often computationally expensive and intractable. Hence, many probabilistic models assume that the features are either normally distributed, mutually independent, or both, which can severely limit their performance. YADA is an intermediate model that (1) captures the marginal distributions of each variable and the pairwise correlations between variables and (2) explicitly maps features to the space of multivariate Gaussian variables. Numerous mathematical properties of the YADA model can be derived, thereby improving the theoretical underpinnings of ML. Validation of the model can be statistically verified on new or held-out data using native properties of YADA. However, there are some engineering and practical challenges that we enumerate to make YADA more useful. Full article
(This article belongs to the Special Issue Artificial Intelligence and Data Science)
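
The intermediate model the abstract describes can be sketched roughly as follows: each feature's marginal is captured through its empirical CDF, features are mapped to Gaussian space, and pairwise correlations are modeled there. Smoothing, priors, and the exact discriminant rule are assumptions here, not the paper's construction.

```python
# Rough Gaussian-copula-style discriminant in the spirit of the abstract:
# empirical-CDF margins -> N(0,1) space -> per-class correlated Gaussians.
import numpy as np
from scipy import stats

def to_gaussian_space(X, X_ref):
    """Map columns of X to N(0,1) margins using empirical CDFs of X_ref."""
    Z = np.empty_like(X, dtype=float)
    n = len(X_ref)
    for j in range(X.shape[1]):
        ranks = np.searchsorted(np.sort(X_ref[:, j]), X[:, j], side="right")
        u = np.clip(ranks / (n + 1), 1e-6, 1 - 1e-6)   # keep quantiles finite
        Z[:, j] = stats.norm.ppf(u)
    return Z

rng = np.random.default_rng(0)
X0, X1 = rng.gamma(2, 1, (200, 3)), rng.gamma(3, 1, (200, 3))   # two toy classes
class_models = []
for Xc in (X0, X1):
    Zc = to_gaussian_space(Xc, Xc)
    class_models.append(stats.multivariate_normal(Zc.mean(0),
                                                  np.cov(Zc.T) + 1e-6 * np.eye(3)))

x_new = rng.gamma(2, 1, (5, 3))
log_liks = np.stack([m.logpdf(to_gaussian_space(x_new, Xc))
                     for m, Xc in zip(class_models, (X0, X1))], axis=1)
pred = log_liks.argmax(1)    # discriminant: class with the higher likelihood
```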

15 pages, 1521 KiB  
Article
Advancing Model Generalization in Continuous Cyclic Test-Time Adaptation with Matrix Perturbation Noise
by Jinshen Jiang, Hao Yang, Lin Yang and Yun Zhou
Mathematics 2024, 12(18), 2800; https://doi.org/10.3390/math12182800 - 10 Sep 2024
Viewed by 687
Abstract
Test-time adaptation (TTA) aims to optimize source-pretrained model parameters to target domains using only unlabeled test data. However, traditional TTA methods often risk overfitting to the specific, localized test domains, leading to compromised generalization. Moreover, these methods generally presume static target domains, neglecting the dynamic and cyclic nature of real-world settings. To alleviate this limitation, this paper explores the continuous cyclic test-time adaptation (CycleTTA) setting. Our unique approach within this setting employs matrix-wise perturbation noise in batch-normalization statistics to enhance the adaptability of source-pretrained models to dynamically changing target domains, without the need for additional parameters. We demonstrated the effectiveness of our method through extensive experiments, where our approach reduced the average error by 39.8% on the CIFAR10-C dataset using the WideResNet-28-10 model, by 38.8% using the WideResNet-40-2 model, and by 33.8% using the PreActResNet-18 model. Additionally, on the CIFAR100-C dataset with the WideResNet-40-2 model, our method reduced the average error by 5.3%, showcasing significant improvements in model generalization in continuous cyclic testing scenarios. Full article
(This article belongs to the Special Issue Artificial Intelligence and Data Science)
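
A guess at what the named mechanism could look like in code, injecting noise into batch-normalization statistics at test time without adding parameters; the noise scale, distribution, and placement are illustrative assumptions rather than the paper's settings.

```python
# Hedged sketch: perturb BN running statistics before adapting to a test batch.
import torch
import torch.nn as nn

def perturb_bn_stats(model, scale=0.01):
    with torch.no_grad():
        for m in model.modules():
            if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)):
                m.running_mean += scale * torch.randn_like(m.running_mean)
                m.running_var *= (1 + scale * torch.randn_like(m.running_var)).clamp(min=0.1)

net = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.ReLU())
net.eval()                                 # use running stats at test time
perturb_bn_stats(net)                      # e.g., before each incoming test batch
out = net(torch.randn(4, 3, 32, 32))
```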

28 pages, 525 KiB  
Article
Evaluating the Effectiveness of Time Series Transformers for Demand Forecasting in Retail
by José Manuel Oliveira and Patrícia Ramos
Mathematics 2024, 12(17), 2728; https://doi.org/10.3390/math12172728 - 31 Aug 2024
Viewed by 1015
Abstract
This study investigates the effectiveness of Transformer-based models for retail demand forecasting. We evaluated vanilla Transformer, Informer, Autoformer, PatchTST, and temporal fusion Transformer (TFT) against traditional baselines like AutoARIMA and AutoETS. Model performance was assessed using mean absolute scaled error (MASE) and weighted quantile loss (WQL). The M5 competition dataset, comprising 30,490 time series from 10 stores, served as the evaluation benchmark. The results demonstrate that Transformer-based models significantly outperform traditional baselines, with Transformer, Informer, and TFT leading the performance metrics. These models achieved MASE improvements of 26% to 29% and WQL reductions of up to 34% compared to the seasonal Naïve method, particularly excelling in short-term forecasts. While Autoformer and PatchTST also surpassed traditional methods, their performance was slightly lower, indicating the potential for further tuning. Additionally, this study highlights a trade-off between model complexity and computational efficiency, with Transformer models, though computationally intensive, offering superior forecasting accuracy compared to the significantly slower traditional models like AutoARIMA. These findings underscore the potential of Transformer-based approaches for enhancing retail demand forecasting, provided the computational demands are managed effectively. Full article
(This article belongs to the Special Issue Artificial Intelligence and Data Science)
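
For reference, the two evaluation metrics named above can be computed as in the sketch below, using their usual M5-style formulations; the paper's exact aggregation across series and horizons may differ.

```python
# Illustrative implementations of MASE and a weighted quantile loss (WQL).
import numpy as np

def mase(y_true, y_pred, y_train, m=1):
    """Mean absolute scaled error; scale = in-sample (seasonal-)naive MAE."""
    scale = np.mean(np.abs(y_train[m:] - y_train[:-m]))
    return np.mean(np.abs(y_true - y_pred)) / scale

def wql(y_true, quantile_forecasts, quantiles):
    """Pinball loss averaged over quantile levels, scaled by mean |y|."""
    losses = []
    for q, y_q in zip(quantiles, quantile_forecasts):
        e = y_true - y_q
        losses.append(np.mean(np.maximum(q * e, (q - 1) * e)))
    return 2 * np.mean(losses) / np.mean(np.abs(y_true))

y_train = np.array([10.0, 12, 11, 13, 12, 14])
y_true = np.array([13.0, 15])
print(mase(y_true, np.array([12.5, 14.0]), y_train))
print(wql(y_true, [y_true * 0.9, y_true, y_true * 1.1], [0.1, 0.5, 0.9]))
```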

22 pages, 7938 KiB  
Article
Short-Term Wind Speed Prediction for Bridge Site Area Based on Wavelet Denoising OOA-Transformer
by Yan Gao, Baifu Cao, Wenhao Yu, Lu Yi and Fengqi Guo
Mathematics 2024, 12(12), 1910; https://doi.org/10.3390/math12121910 - 20 Jun 2024
Viewed by 871
Abstract
Predicting wind speed in advance at bridge sites is essential for ensuring bridge construction safety under high wind conditions. This study proposes a short-term wind speed prediction model based on outlier correction, wavelet denoising, the Osprey Optimization Algorithm (OOA), and the Transformer model. The outliers caused by data entry and measurement errors are processed by the interquartile range (IQR) method. By comparing the performance of four different wavelets, the best-performing wavelet (Bior2.2) was selected to filter out sharp noise from the data processed by the IQR method. The OOA-Transformer model was utilized to forecast short-term wind speeds based on the filtered time series data. In OOA-Transformer, the seven hyperparameters of the Transformer model were optimized by the Osprey Optimization Algorithm to achieve better performance. Given the outstanding performance of LSTM and its variants in wind speed prediction, the OOA-Transformer model was compared with six other models using the actual wind speed data from the Xuefeng Lake Bridge dataset to validate our proposed model. The experimental results show that the mean absolute percentage error (MAPE), root mean square error (RMSE), and coefficient of determination (R2) of this paper's method on the test set were 4.16%, 0.0152, and 0.9955, respectively, which are superior to those of the other six models. The prediction accuracy was found to be high enough to meet the short-term wind speed prediction needs of practical projects. Full article
(This article belongs to the Special Issue Artificial Intelligence and Data Science)
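
The preprocessing chain described above, IQR outlier correction followed by Bior2.2 wavelet denoising, can be sketched with PyWavelets as below; the fence rule and threshold are common defaults, not necessarily the paper's choices.

```python
# Hedged sketch of the IQR + wavelet-denoising front end.
import numpy as np
import pywt

def iqr_correct(x):
    q1, q3 = np.percentile(x, [25, 75])
    lo, hi = q1 - 1.5 * (q3 - q1), q3 + 1.5 * (q3 - q1)
    return np.clip(x, lo, hi)          # clamp outliers to the IQR fences

def wavelet_denoise(x, wavelet="bior2.2", level=3):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # robust noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(x)))           # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

wind = np.sin(np.linspace(0, 20, 512)) + 0.3 * np.random.randn(512)  # toy series
clean = wavelet_denoise(iqr_correct(wind))
```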

25 pages, 752 KiB  
Article
A Machine Learning-Based Framework with Enhanced Feature Selection and Resampling for Improved Intrusion Detection
by Fazila Malik, Qazi Waqas Khan, Atif Rizwan, Rana Alnashwan and Ghada Atteia
Mathematics 2024, 12(12), 1799; https://doi.org/10.3390/math12121799 - 9 Jun 2024
Viewed by 1139
Abstract
Intrusion Detection Systems (IDSs) play a crucial role in safeguarding network infrastructures from cyber threats and ensuring the integrity of highly sensitive data. Conventional IDS technologies, although successful in achieving high levels of accuracy, frequently encounter substantial model bias. This bias is primarily caused by imbalances in the data and the lack of relevance of certain features. This study aims to tackle these challenges by proposing an advanced machine learning (ML) based IDS that minimizes misclassification errors and corrects model bias. As a result, the predictive accuracy and generalizability of the IDS are significantly improved. The proposed system employs advanced feature selection techniques, such as Recursive Feature Elimination (RFE), sequential feature selection (SFS), and statistical feature selection, to refine the input feature set and minimize the impact of non-predictive attributes. In addition, this work incorporates data resampling methods such as Synthetic Minority Oversampling Technique and Edited Nearest Neighbor (SMOTE_ENN), Adaptive Synthetic Sampling (ADASYN), and Synthetic Minority Oversampling Technique–Tomek Links (SMOTE_Tomek) to address class imbalance and improve the accuracy of the model. The experimental results indicate that our proposed model, especially when utilizing the random forest (RF) algorithm, surpasses existing models regarding accuracy, precision, recall, and F Score across different data resampling methods. Using the ADASYN resampling method, the RF model achieves an accuracy of 99.9985% for botnet attacks and 99.9777% for Man-in-the-Middle (MITM) attacks, demonstrating the effectiveness of our approach in dealing with imbalanced data distributions. This research not only improves the abilities of IDS to identify botnet and MITM attacks but also provides a scalable and efficient solution that can be used in other areas where data imbalance is a recurring problem. This work has implications beyond IDS, offering valuable insights into using ML techniques in complex real-world scenarios. Full article
(This article belongs to the Special Issue Artificial Intelligence and Data Science)
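
A compact sketch of the pipeline shape the abstract describes (RFE feature selection, SMOTE-ENN resampling, then a random forest), using scikit-learn and imbalanced-learn with placeholder data and parameters:

```python
# Hedged sketch: feature selection + resampling + random forest for an
# imbalanced detection task; the dataset and all parameters are placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split
from imblearn.combine import SMOTEENN

X, y = make_classification(n_samples=2000, n_features=30,
                           weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

rfe = RFE(RandomForestClassifier(n_estimators=50, random_state=0),
          n_features_to_select=10).fit(X_tr, y_tr)          # prune weak features
X_tr_sel, X_te_sel = rfe.transform(X_tr), rfe.transform(X_te)

X_bal, y_bal = SMOTEENN(random_state=0).fit_resample(X_tr_sel, y_tr)  # rebalance
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_bal, y_bal)
print(clf.score(X_te_sel, y_te))
```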

18 pages, 406 KiB  
Article
Enhancing Security and Efficiency: A Fine-Grained Searchable Scheme for Encryption of Big Data in Cloud-Based Smart Grids
by Jing Wen, Haifeng Li, Liangliang Liu and Caihui Lan
Mathematics 2024, 12(10), 1512; https://doi.org/10.3390/math12101512 - 13 May 2024
Viewed by 1010
Abstract
The smart grid, as a crucial part of modern energy systems, handles extensive and diverse data, including inputs from various sensors, metering devices, and user interactions. Outsourcing data storage to remote cloud servers presents an economical solution for enhancing data management within the smart grid ecosystem. However, ensuring data privacy before transmitting it to the cloud is a critical consideration. Therefore, it is common practice to encrypt the data before uploading them to the cloud. While encryption provides data confidentiality, it may also introduce potential issues such as limiting data owners' ability to query their data. Searchable attribute-based encryption (SABE) not only enables fine-grained access control in a dynamic large-scale environment but also allows for data searches in the ciphertext domain, making it an effective tool for cloud data sharing. Although SABE has become a research hotspot, existing schemes often suffer from limited client-side computing efficiency and weak security of the ciphertext and the trapdoor. To address these issues, we propose an efficient server-aided ciphertext-policy searchable attribute-based encryption scheme (SA-CP-SABE). In SA-CP-SABE, the user's data access authority is consistent with the search authority. During the search process, calculations are performed not only to determine whether the ciphertext matches the keyword in the trapdoor, but also to assist subsequent user ciphertext decryption by reducing computational complexity. Our scheme has been proven under the random oracle model to achieve indistinguishability of the ciphertext and the trapdoor and to resist keyword-guessing attacks. Finally, the performance analysis and simulation of the proposed scheme are provided, and the results show that it performs with high efficiency. Full article
(This article belongs to the Special Issue Artificial Intelligence and Data Science)

21 pages, 24644 KiB  
Article
WaveSegNet: An Efficient Method for Scrap Steel Segmentation Utilizing Wavelet Transform and Multiscale Focusing
by Jiakui Zhong, Yunfeng Xu and Changda Liu
Mathematics 2024, 12(9), 1370; https://doi.org/10.3390/math12091370 - 30 Apr 2024
Viewed by 878
Abstract
Scrap steel represents a sustainable and recyclable resource, instrumental in diminishing carbon footprints and facilitating the eco-friendly evolution of the steel sector. However, current scrap steel recycling faces a series of challenges, such as high labor intensity and occupational risks for inspectors, complex and diverse sources of scrap steel, varying types of materials, and difficulties in quantifying and standardizing manual visual inspection and rating. To address these challenges, we propose WaveSegNet, which is based on wavelet transform and a multiscale focusing structure for scrap steel segmentation. Firstly, we utilize wavelet transform to process images and extract features at different frequencies to capture details and structural information in the images. Secondly, we introduce a mechanism of multiscale focusing to further enhance the accuracy of segmentation by extracting and perceiving features at different scales. Through experiments conducted on the public Cityscapes dataset and scrap steel datasets, we have found that WaveSegNet consistently demonstrates superior performance, achieving the highest scores on the mIoU metric. Particularly notable is its performance on the real-world scrap steel dataset, where it outperforms other segmentation algorithms with an average increase of 3.98% in mIoU(SS), reaching 69.8%, and a significant boost of nearly 5.98% in mIoU(MS), achieving 74.8%. These results underscore WaveSegNet's exceptional capabilities in processing scrap steel images. Additionally, on the publicly available Cityscapes dataset, WaveSegNet shows notable performance enhancements compared with the next best model, Segformer. Moreover, with its modest parameters and computational demands (34.1 M and 322 GFLOPs), WaveSegNet proves to be an ideal choice for resource-constrained environments, demonstrating high computational efficiency and broad applicability. These experimental results attest to the immense potential of WaveSegNet in intelligent scrap steel rating and provide a new solution for the scrap steel recycling industry. Full article
(This article belongs to the Special Issue Artificial Intelligence and Data Science)
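
The wavelet front end implied by the abstract can be illustrated with a 2-D transform that splits an image into one low-frequency and three high-frequency bands usable as multiscale features; the focusing and fusion network itself is not reproduced here.

```python
# Toy illustration of 2-D wavelet decomposition as a multiscale feature source.
import numpy as np
import pywt

img = np.random.rand(256, 256)                 # stand-in for a scrap-steel image
cA, (cH, cV, cD) = pywt.dwt2(img, "haar")      # approximation + 3 detail bands
features = np.stack([cA, cH, cV, cD])          # (4, 128, 128) frequency features
cA2, _ = pywt.dwt2(cA, "haar")                 # second level: coarser structure
```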

19 pages, 2195 KiB  
Article
A Novel Method for Boosting Knowledge Representation Learning in Entity Alignment through Triple Confidence
by Xiaoming Zhang, Tongqing Chen and Huiyong Wang
Mathematics 2024, 12(8), 1214; https://doi.org/10.3390/math12081214 - 18 Apr 2024
Viewed by 841
Abstract
Entity alignment is an important task in knowledge fusion, which aims to link entities that have the same real-world identity in two knowledge graphs. However, in the process of constructing a knowledge graph, some noise may inevitably be introduced, which affects the results of entity alignment tasks. Triple confidence calculation can quantify the correctness of triples to reduce the impact of this noise on entity alignment. Therefore, we designed a method to calculate the confidence of triples and applied it to the knowledge representation learning phase of entity alignment. The method calculates the triple confidence based on the pairing rates of the three angles between the entities and relations. Specifically, the method uses the pairing rates of the three angles as features, which are then fed into a feedforward neural network for training to obtain the triple confidence. Moreover, we introduced the triple confidence into knowledge representation learning methods to improve their performance in entity alignment. For the graph neural network-based method GCN, we considered entity confidence when calculating the adjacency matrix, and for the translation-based method TransE, we proposed a strategy to dynamically adjust the margin value in the loss function based on confidence. These two methods were then applied to entity alignment, and the experimental results demonstrate that compared with knowledge representation learning methods without integrated confidence, the confidence-based knowledge representation learning methods achieved excellent performance in the entity alignment task. Full article
(This article belongs to the Special Issue Artificial Intelligence and Data Science)
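
A minimal sketch of the confidence model as summarized above: three pairing-rate features pass through a small feedforward network that outputs a triple confidence in [0, 1]. The feature construction and training targets below are placeholders, not the paper's data.

```python
# Hedged sketch: pairing-rate features -> MLP -> confidence score per triple.
import torch
import torch.nn as nn

conf_net = nn.Sequential(nn.Linear(3, 16), nn.ReLU(),
                         nn.Linear(16, 1), nn.Sigmoid())
opt = torch.optim.Adam(conf_net.parameters(), lr=1e-3)

pairing_rates = torch.rand(256, 3)            # toy angle pairing rates per triple
labels = (pairing_rates.mean(1) > 0.5).float().unsqueeze(1)   # toy targets
for _ in range(100):
    opt.zero_grad()
    loss = nn.BCELoss()(conf_net(pairing_rates), labels)
    loss.backward()
    opt.step()

confidence = conf_net(torch.rand(1, 3)).item()  # e.g., reweight a TransE margin
```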

16 pages, 1586 KiB  
Article
Semantic-Enhanced Knowledge Graph Completion
by Xu Yuan, Jiaxi Chen, Yingbo Wang, Anni Chen, Yiou Huang, Wenhong Zhao and Shuo Yu
Mathematics 2024, 12(3), 450; https://doi.org/10.3390/math12030450 - 31 Jan 2024
Cited by 3 | Viewed by 1594
Abstract
Knowledge graphs (KGs) serve as structured representations of knowledge, comprising entities and relations. KGs are inherently incomplete, sparse, and have a strong need for completion. Although many knowledge graph embedding models have been designed for knowledge graph completion, they predominantly focus on capturing observable correlations between entities. Due to the sparsity of KGs, potential semantic correlations are challenging to capture. To tackle this problem, we propose a model entitled semantic-enhanced knowledge graph completion (SE-KGC). SE-KGC effectively addresses the issue by incorporating predefined semantic patterns, enabling the capture of semantic correlations between entities and enhancing features for representation learning. To implement this approach, we employ a multi-relational graph convolution network encoder, which effectively encodes the KG. Subsequently, we utilize a scoring decoder to evaluate triplets. Experimental results demonstrate that our SE-KGC model outperforms other state-of-the-art methods in link-prediction tasks across three datasets. Specifically, compared to the baselines, SE-KGC achieved improvements of 11.7%, 1.05%, and 2.30% in terms of MRR on these three datasets. Furthermore, we present a comprehensive analysis of the contributions of different semantic patterns, and find that entities with higher connectivity play a pivotal role in effectively capturing and characterizing semantic information. Full article
(This article belongs to the Special Issue Artificial Intelligence and Data Science)

21 pages, 640 KiB  
Article
Geometric Matrix Completion via Graph-Based Truncated Norm Regularization for Learning Resource Recommendation
by Yazhi Yang, Jiandong Shi, Siwei Zhou and Shasha Yang
Mathematics 2024, 12(2), 320; https://doi.org/10.3390/math12020320 - 18 Jan 2024
Viewed by 1201
Abstract
In the competitive landscape of online learning, developing robust and effective learning resource recommendation systems is paramount, yet the field faces challenges due to high-dimensional, sparse matrices and intricate user–resource interactions. Our study focuses on geometric matrix completion (GMC) and introduces a novel approach, graph-based truncated norm regularization (GBTNR) for problem solving. GBTNR innovatively incorporates truncated Dirichlet norms for both user and item graphs, enhancing the model’s ability to handle complex data structures. This method synergistically combines the benefits of truncated norm regularization with the insightful analysis of user–user and resource–resource graph relationships, leading to a significant improvement in recommendation performance. Our model’s unique application of truncated Dirichlet norms distinctively positions it to address the inherent complexities in user and item data structures more effectively than existing methods. By bridging the gap between theoretical robustness and practical applicability, the GBTNR approach offers a substantial leap forward in the field of learning resource recommendations. This advancement is particularly critical in the realm of online education, where understanding and adapting to diverse and intricate user–resource interactions is key to developing truly personalized learning experiences. Moreover, our work includes a thorough theoretical analysis, complete with proofs, to establish the convergence property of the GMC-GBTNR model, thus reinforcing its reliability and effectiveness in practical applications. Empirical validation through extensive experiments on diverse real-world datasets affirms the model’s superior performance over existing methods, marking a groundbreaking advancement in personalized education and deepening our understanding of the dynamics in learner–resource interactions. Full article
(This article belongs to the Special Issue Artificial Intelligence and Data Science)
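
As a toy illustration of graph-regularized (geometric) matrix completion, the sketch below uses plain graph-Laplacian (Dirichlet) penalties on the factor matrices in place of the paper's truncated norms:

```python
# Toy gradient-descent matrix completion with Laplacian regularization.
import numpy as np

def complete(M, mask, Lu, Lv, rank=5, lam=0.1, lr=0.01, iters=500):
    m, n = M.shape
    rng = np.random.default_rng(0)
    U = 0.1 * rng.standard_normal((m, rank))
    V = 0.1 * rng.standard_normal((n, rank))
    for _ in range(iters):
        R = mask * (U @ V.T - M)          # residual on observed entries only
        gU = R @ V + lam * Lu @ U         # gradient of tr(U^T Lu U) smoothness term
        gV = R.T @ U + lam * Lv @ V
        U, V = U - lr * gU, V - lr * gV
    return U @ V.T

def chain_laplacian(k):                   # toy user/resource similarity graphs
    A = np.diag(np.ones(k - 1), 1)
    A = A + A.T
    return np.diag(A.sum(1)) - A

M = np.outer(np.arange(1, 9), np.arange(1, 7)).astype(float)   # rank-1 ratings
mask = (np.random.default_rng(1).random(M.shape) < 0.5).astype(float)
M_hat = complete(M * mask, mask, chain_laplacian(8), chain_laplacian(6))
```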

20 pages, 3422 KiB  
Article
Progressively Multi-Scale Feature Fusion for Image Inpainting
by Wu Wen, Tianhao Li, Amr Tolba, Ziyi Liu and Kai Shao
Mathematics 2023, 11(24), 4908; https://doi.org/10.3390/math11244908 - 8 Dec 2023
Viewed by 1324
Abstract
The rapid advancement of Wise Information Technology of med (WITMED) has made the integration of traditional Chinese medicine tongue diagnosis and computer technology an increasingly significant area of research. The doctor obtains the patient's tongue images to make a further diagnosis. However, the tongue image may be broken during the process of collecting the tongue image. Due to the extremely complex texture of the tongue and significant individual differences, existing methods fail to fully obtain sufficient feature information, which results in inaccurate inpainted tongue images. To address this problem, we propose a recurrent tongue image inpainting algorithm based on multi-scale feature fusion called the Multi-Scale Fusion Module and Recurrent Attention Mechanism Network (MSFM-RAM-Net). We first propose the Multi-Scale Fusion Module (MSFM), which preserves the feature information of tongue images at different scales and enhances the consistency between structures. To simultaneously accelerate the inpainting process and enhance the quality of the inpainted results, a Recurrent Attention Mechanism (RAM) is proposed. RAM focuses the network's attention on important areas and uses known information to gradually inpaint the image, which can avoid redundant feature information and the problem of texture confusion caused by large missing areas. Finally, we establish a tongue image dataset and use this dataset to qualitatively and quantitatively evaluate the MSFM-RAM-Net. The results show that the MSFM-RAM-Net has a better effect on tongue image inpainting, with PSNR and SSIM increasing by 2.1% and 3.3%, respectively. Full article
(This article belongs to the Special Issue Artificial Intelligence and Data Science)

20 pages, 1332 KiB  
Article
Aggregation Methods Based on Quality Model Assessment for Federated Learning Applications: Overview and Comparative Analysis
by Iuliana Bejenar, Lavinia Ferariu, Carlos Pascal and Constantin-Florin Caruntu
Mathematics 2023, 11(22), 4610; https://doi.org/10.3390/math11224610 - 10 Nov 2023
Cited by 1 | Viewed by 1139
Abstract
Federated learning (FL) offers the possibility of collaboration between multiple devices while maintaining data confidentiality, as required by the General Data Protection Regulation (GDPR). Though FL can keep local data private, it may encounter problems when dealing with non-independent and identically distributed data (non-IID), insufficient local training samples or cyber-attacks. This paper introduces algorithms that can provide a reliable aggregation of the global model by investigating the accuracy of models received from clients. This allows reducing the influence of less confident nodes, who were potentially attacked or unable to perform successful training. The analysis includes the proposed FedAcc and FedAccSize algorithms, together with their new extension based on the Lasso regression, FedLasso. FedAcc and FedAccSize set the confidence in each client based only on local models’ accuracy, while FedLasso exploits additional details related to predictions, like predicted class probabilities, to support a refined aggregation. The ability of the proposed algorithms to protect against intruders or underperforming clients is demonstrated experimentally using testing scenarios involving independent and identically distributed (IID) data as well as non-IID data. The comparison with the established FedAvg and FedAvgM algorithms shows that exploiting the quality of the client models is essential for reliable aggregation, which enables rapid and robust improvement in the global model. Full article
(This article belongs to the Special Issue Artificial Intelligence and Data Science)
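
In the spirit of the FedAcc idea summarized above, the sketch below averages client models with weights derived from their reported accuracy; the exact weighting rule and thresholds in the paper may differ.

```python
# Hedged sketch: accuracy-weighted aggregation of client state dicts.
import torch

def fed_acc_aggregate(client_states, client_accs, acc_floor=0.5):
    """Average parameters, weighting each client by accuracy above a floor."""
    w = torch.tensor([max(a - acc_floor, 0.0) for a in client_accs])
    if w.sum() == 0:
        w = torch.ones_like(w)                  # fall back to plain FedAvg
    w = w / w.sum()
    agg = {k: torch.zeros_like(v, dtype=torch.float32)
           for k, v in client_states[0].items()}
    for wi, state in zip(w, client_states):
        for k, v in state.items():
            agg[k] += wi * v.float()
    return agg

clients = [torch.nn.Linear(4, 2) for _ in range(3)]            # toy local models
global_state = fed_acc_aggregate([c.state_dict() for c in clients],
                                 [0.9, 0.6, 0.4])              # validation accuracies
```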

17 pages, 6778 KiB  
Article
Research on Intelligent Control Method of Launch Vehicle Landing Based on Deep Reinforcement Learning
by Shuai Xue, Hongyang Bai, Daxiang Zhao and Junyan Zhou
Mathematics 2023, 11(20), 4276; https://doi.org/10.3390/math11204276 - 13 Oct 2023
Cited by 1 | Viewed by 1734
Abstract
A launch vehicle needs to adapt to a complex flight environment during flight, and traditional guidance and control algorithms can hardly deal with multi-factor uncertainties due to their high dependency on control models. To solve this problem, this paper designs a new intelligent flight control method for a rocket based on a deep reinforcement learning algorithm driven by knowledge and data. In this process, the Markov decision process of the rocket landing section is established by designing a reward function that combines the return on the terminal constraints of the launch vehicle with the cumulative return of the rocket's flight process. Meanwhile, to improve the training speed of the landing process of the launch vehicle and to enhance the generalization ability of the model, the strategic neural network model is obtained and trained in the form of a long short-term memory (LSTM) network combined with a fully connected layer as the landing guidance strategy network. Proximal policy optimization (PPO) is used as the training algorithm for the reinforcement learning network parameters, combined with behavioral cloning (BC) as the pre-training imitation learning algorithm. Notably, the rocket-borne environment is transplanted to the Nvidia Jetson TX2 embedded platform for comparative testing and verification of this intelligent model, which is then used to generate real-time control commands for guiding the actual flying and landing process of the rocket. Further, comparisons of the results obtained from convex landing optimization and the proposed method are performed to prove the effectiveness of the proposed method. The simulation results show that the intelligent control method in this work can meet the landing accuracy requirements of the launch vehicle with a fast convergence speed of 84 steps, and the decision time is only 2.5 ms. Additionally, it has the ability to make online autonomous decisions when deployed on the embedded platform. Full article
(This article belongs to the Special Issue Artificial Intelligence and Data Science)

24 pages, 2647 KiB  
Article
How Do Citizens View Digital Government Services? Study on Digital Government Service Quality Based on Citizen Feedback
by Xin Ye, Xiaoyan Su, Zhijun Yao, Lu-an Dong, Qiang Lin and Shuo Yu
Mathematics 2023, 11(14), 3122; https://doi.org/10.3390/math11143122 - 14 Jul 2023
Cited by 3 | Viewed by 3241
Abstract
Research on government service quality can help ensure the success of digital government services and has been the focus of numerous studies that proposed different frameworks and approaches. Most of the existing studies are based on traditional researcher-led methods, which struggle to capture the needs of citizens. In this paper, a citizen-feedback-based analysis framework was proposed to explore citizen demands and analyze the service quality of digital government. Citizen feedback data are a direct expression of citizens’ demands, so the citizen-feedback-based framework can help to obtain more targeted management insights and improve citizen satisfaction. Efficient machine learning methods used in the framework make data collection and processing more efficient, especially for large-scale internet data. With the crawled user feedback data from the Q&A e-government portal of Luzhou, Sichuan Province, China, we conducted experiments on the proposed framework to verify its feasibility. From citizens’ online feedback on Q&A services, we extracted five service quality factors: efficiency, quality, attitude, compliance, and execution of response. The analysis of five service quality factors provides some management insights, which can provide a guide for improvements in Q&A services. Full article
(This article belongs to the Special Issue Artificial Intelligence and Data Science)

18 pages, 1440 KiB  
Article
Visual Analytics Using Machine Learning for Transparency Requirements
by Samiha Fadloun, Khadidja Bennamane, Souham Meshoul, Mahmood Hosseini and Kheireddine Choutri
Mathematics 2023, 11(14), 3091; https://doi.org/10.3390/math11143091 - 13 Jul 2023
Viewed by 1477
Abstract
Problem solving applications require users to exercise caution in their data usage practices. Prior to installing these applications, users are encouraged to read and comprehend the terms of service, which address important aspects such as data privacy, processes, and policies (referred to as information elements). However, these terms are often lengthy and complex, making it challenging for users to fully grasp their content. Additionally, existing transparency analytics tools typically rely on the manual extraction of information elements, resulting in a time-consuming process. To address these challenges, this paper proposes a novel approach that combines information visualization and machine learning analyses to automate the retrieval of information elements. The methodology involves the creation and labeling of a dataset derived from multiple software terms of use. Machine learning models, including naïve Bayes, BART, and LSTM, are utilized for the classification of information elements and text summarization. Furthermore, the proposed approach is integrated into our existing visualization tool TranspVis to enable the automatic detection and display of software information elements. The system is thoroughly evaluated using a database-connected tool, incorporating various metrics and expert opinions. The results of our study demonstrate the promising potential of our approach, serving as an initial step in this field. Our solution not only addresses the challenge of extracting information elements from complex terms of service but also provides a foundation for future research in this area. Full article
(This article belongs to the Special Issue Artificial Intelligence and Data Science)
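
The classification step can be illustrated with a naïve Bayes text classifier over terms-of-use sentences, one of the model families the abstract names; the sentences and "information element" labels below are invented examples.

```python
# Minimal sketch: TF-IDF + multinomial naive Bayes over terms-of-use sentences.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

sentences = ["We may share your data with third parties.",
             "You can delete your account at any time.",
             "Usage logs are retained for twelve months."]
labels = ["data privacy", "policy", "process"]    # hypothetical element labels

clf = make_pipeline(TfidfVectorizer(), MultinomialNB()).fit(sentences, labels)
print(clf.predict(["Your data is stored for six months."]))
```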

13 pages, 1167 KiB  
Article
STAB-GCN: A Spatio-Temporal Attention-Based Graph Convolutional Network for Group Activity Recognition
by Fang Liu, Chunhua Tian, Jinzhong Wang, Youwei Jin, Luxiang Cui and Ivan Lee
Mathematics 2023, 11(14), 3074; https://doi.org/10.3390/math11143074 - 12 Jul 2023
Viewed by 1525
Abstract
Group activity recognition is a central theme in many domains, such as sports video analysis, CCTV surveillance, sports tactics, and social scenario understanding. However, there are still challenges in embedding actors’ relations in a multi-person scenario due to occlusion, movement, and light. Current studies mainly focus on collective and individual local features from the spatial and temporal perspectives, which results in inefficiency, low robustness, and low portability. To this end, a Spatio-Temporal Attention-Based Graph Convolution Network (STAB-GCN) model is proposed to effectively embed deep complex relations between actors. Specifically, we leverage the attention mechanism to attentively explore spatio-temporal latent relations between actors. This approach captures spatio-temporal contextual information and improves individual and group embedding. Then, we feed actor relation graphs built from group activity videos into our proposed STAB-GCN for further inference, which selectively attends to the relevant features while ignoring those irrelevant to the relation extraction task. We perform experiments on three available group activity datasets, acquiring better performance than state-of-the-art methods. The results verify the validity of our proposed model and highlight the obstructive impacts of spatio-temporal attention-based graph embedding on group activity recognition. Full article
(This article belongs to the Special Issue Artificial Intelligence and Data Science)

29 pages, 13241 KiB  
Article
Predicting Popularity of Viral Content in Social Media through a Temporal-Spatial Cascade Convolutional Learning Framework
by Zhixuan Xu and Minghui Qian
Mathematics 2023, 11(14), 3059; https://doi.org/10.3390/math11143059 - 11 Jul 2023
Cited by 7 | Viewed by 3805
Abstract
The viral spread of online content can lead to unexpected consequences such as extreme opinions about a brand or consumers’ enthusiasm for a product. This makes the prediction of viral content’s future popularity an important problem, especially for digital marketers, as well as for managers of social platforms. It is not surprising that conventional methods, which heavily rely on either hand-crafted features or unrealistic assumptions, are insufficient in dealing with this challenging problem. Even state-of-art graph-based approaches are either inefficient to work with large-scale cascades or unable to explain what spread mechanisms are learned by the model. This paper presents a temporal-spatial cascade convolutional learning framework called ViralGCN, not only to address the challenges of existing approaches but also to try to provide some insights into actual mechanisms of viral spread from the perspective of artificial intelligence. We conduct experiments on the real-world dataset (i.e., to predict the retweet popularity of micro-blogs on Weibo). Compared to the existing approaches, ViralGCN possesses the following advantages: the flexible size of the input cascade graph, a coherent method for processing both structural and temporal information, and an intuitive and interpretable deep learning architecture. Moreover, the exploration of the learned features also provides valuable clues for managers to understand the elusive mechanisms of viral spread as well as to devise appropriate strategies at early stages. By using the visualization method, our approach finds that both broadcast and structural virality contribute to online content going viral; the cascade with a gradual descent or ascent-then-descent evolving pattern at the early stage is more likely to gain significant eventual popularity, and even the timing of users participating in the cascade has an effect on future popularity growth. Full article
(This article belongs to the Special Issue Artificial Intelligence and Data Science)

20 pages, 3655 KiB  
Article
Advance Landslide Prediction and Warning Model Based on Stacking Fusion Algorithm
by Zian Lin, Yuanfa Ji and Xiyan Sun
Mathematics 2023, 11(13), 2833; https://doi.org/10.3390/math11132833 - 24 Jun 2023
Viewed by 1669
Abstract
In landslide disaster warning, a variety of monitoring and warning methods are commonly adopted. However, most monitoring and warning methods cannot provide information in advance, and serious losses are often caused when landslides occur. To advance the warning time before a landslide, an innovative advance landslide prediction and warning model based on a stacking fusion algorithm using Baishuihe landslide data is proposed in this paper. The Baishuihe landslide area is characterized by unique soil and is in the Three Gorges region of China, with a subtropical monsoon climate. Based on Baishuihe historical data and real-time monitoring of the landslide state, four warning level thresholds and trigger conditions for each warning level are established. The model effectively integrates the results of multiple prediction and warning submodels to provide predictions and advance warnings through the fusion of two stacking learning layers. The possibility that a risk priority strategy can be used as a substitute for the stacking model is also discussed. Finally, an experimental simulation verifies that the proposed improved model can not only provide advance landslide warning but also effectively reduce the frequency of false warnings and mitigate the issues of traditional single models. The stacking model can effectively support disaster prevention and reduction and provide a scientific basis for land use management. Full article
(This article belongs to the Special Issue Artificial Intelligence and Data Science)
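
A minimal two-layer stacking fusion can be sketched with scikit-learn as below; the base learners, meta-learner, and the mapping of predictions to the four warning levels are generic placeholders rather than the paper's configuration.

```python
# Hedged sketch: stacked ensemble whose meta-learner fuses base-model outputs.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=12, n_classes=4,
                           n_informative=6, random_state=0)   # 4 warning levels
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("svc", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000),
    stack_method="predict_proba")
stack.fit(X, y)
warning_level = stack.predict(X[:1])    # fused prediction for one monitoring sample
```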

19 pages, 4183 KiB  
Article
SlowFast Multimodality Compensation Fusion Swin Transformer Networks for RGB-D Action Recognition
by Xiongjiang Xiao, Ziliang Ren, Huan Li, Wenhong Wei, Zhiyong Yang and Huaide Yang
Mathematics 2023, 11(9), 2115; https://doi.org/10.3390/math11092115 - 29 Apr 2023
Cited by 3 | Viewed by 1998
Abstract
RGB-D-based technology combines the advantages of RGB and depth sequences which can effectively recognize human actions in different environments. However, the spatio-temporal information between different modalities is difficult to effectively learn from each other. To enhance the information exchange between different modalities, we introduce a SlowFast multimodality compensation block (SFMCB) which is designed to extract compensation features. Concretely, the SFMCB fuses features from two independent pathways with different frame rates into a single convolutional neural network to achieve performance gains for the model. Furthermore, we explore two fusion schemes to combine the feature from two independent pathways with different frame rates. To facilitate the learning of features from independent multiple pathways, multiple loss functions are utilized for joint optimization. To evaluate the effectiveness of our proposed architecture, we conducted experiments on four challenging datasets: NTU RGB+D 60, NTU RGB+D 120, THU-READ, and PKU-MMD. Experimental results demonstrate the effectiveness of our proposed model, which utilizes the SFMCB mechanism to capture complementary features of multimodal inputs. Full article
(This article belongs to the Special Issue Artificial Intelligence and Data Science)

19 pages, 3544 KiB  
Article
A Novel Link Prediction Method for Social Multiplex Networks Based on Deep Learning
by Jiaping Cao, Tianyang Lei, Jichao Li and Jiang Jiang
Mathematics 2023, 11(7), 1705; https://doi.org/10.3390/math11071705 - 2 Apr 2023
Cited by 2 | Viewed by 1716
Abstract
Due to the great advances in information technology, an increasing number of social platforms have appeared. Friend recommendation is an important task in social media, but newly built social platforms have insufficient information to predict entity relationships. In this case, platforms with sufficient information can help newly built platforms. To address this challenge, a model of link prediction in social multiplex networks (LPSMN) is proposed in this work. Specifically, we first extract graph structure features, latent features and explicit features and then concatenate these features as link representations. Then, with the assistance of external information from a mature platform, an attention mechanism is employed to construct a multiplex and enhanced forecasting model. Additionally, we consider the problem of link prediction to be a binary classification problem. This method utilises three different kinds of features to improve link prediction performance. Finally, we use five synthetic networks with various degree distributions and two real-world social multiplex networks (Weibo–Douban and Facebook–Twitter) to build an experimental scenario for further assessment. The numerical results indicate that the proposed LPSMN model improves the prediction accuracy compared with several baseline methods. We also find that with the decline in network heterogeneity, the performance of LPSMN increases. Full article
(This article belongs to the Special Issue Artificial Intelligence and Data Science)

18 pages, 674 KiB  
Article
Efficient and Privacy-Preserving Categorization for Encrypted EMR
by Zhiliang Zhao, Shengke Zeng, Shuai Cheng and Fei Hao
Mathematics 2023, 11(3), 754; https://doi.org/10.3390/math11030754 - 2 Feb 2023
Cited by 1 | Viewed by 1356
Abstract
Electronic Health Records (EHRs) must be encrypted for patient privacy; however, an encrypted EHR is a challenge for the administrator to categorize. In addition, EHRs are predictable and can possibly be guessed, even though they are encrypted. In this work, we propose a secure scheme to support the categorization of encrypted EHRs according to some keywords. With regard to the predictability of EHRs, we focused on guessing attacks not only from the storage server but also from the group administrator. The experimental results show that our scheme is efficient and practical. Full article
(This article belongs to the Special Issue Artificial Intelligence and Data Science)

28 pages, 557 KiB  
Article
Efficient Associate Rules Mining Based on Topology for Items of Transactional Data
by Bo Li, Zheng Pei, Chao Zhang and Fei Hao
Mathematics 2023, 11(2), 401; https://doi.org/10.3390/math11020401 - 12 Jan 2023
Cited by 1 | Viewed by 1219
Abstract
A challenge in association rule mining is effectively reducing the time and space complexity of mining, with predefined minimum support and confidence thresholds, from huge transaction databases. In this paper, we propose an efficient method based on the topology space of the itemset for mining association rules from transaction databases. To do so, we deduce a binary relation on the itemset, and construct a topology space of the itemset based on the binary relation and the quotient lattice of the topology according to transactions of itemsets. Furthermore, we prove that all closed itemsets are included in the quotient lattice of the topology, and that generators or minimal generators of every closed itemset can be easily obtained from an element of the quotient lattice. Formally, the topology on the itemset represents a more general associative relationship among items of transaction databases, and the quotient lattice of the topology displays the hierarchical structures on all itemsets and provides us with a method to approximate any template of the itemset. Accordingly, we provide efficient algorithms to generate Min-Max association rules or reduce generalized association rules based on the lower approximation and the upper approximation of a template, respectively. The experimental results demonstrate that the proposed method is an alternative and efficient method to generate or reduce association rules from transaction databases. Full article
(This article belongs to the Special Issue Artificial Intelligence and Data Science)
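
For readers unfamiliar with the thresholds involved, the snippet below illustrates plain support/confidence rule mining in pure Python; it is the baseline notion the topology-based method builds on, not the proposed algorithm.

```python
# Brute-force support/confidence rule mining on a toy transaction database.
from itertools import combinations

transactions = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]
min_sup, min_conf = 0.4, 0.7

def support(itemset):
    return sum(itemset <= t for t in transactions) / len(transactions)

items = sorted(set().union(*transactions))
frequent = [frozenset(c) for r in range(1, len(items) + 1)
            for c in combinations(items, r) if support(set(c)) >= min_sup]

rules = []
for fs in frequent:
    for r in range(1, len(fs)):
        for lhs in map(frozenset, combinations(fs, r)):
            conf = support(fs) / support(lhs)     # confidence of lhs -> (fs - lhs)
            if conf >= min_conf:
                rules.append((set(lhs), set(fs - lhs), round(conf, 2)))
print(rules)
```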

18 pages, 618 KiB  
Article
Hierarchical Quantum Information Splitting of an Arbitrary Two-Qubit State Based on a Decision Tree
by Dongfen Li, Yundan Zheng, Xiaofang Liu, Jie Zhou, Yuqiao Tan, Xiaolong Yang and Mingzhe Liu
Mathematics 2022, 10(23), 4571; https://doi.org/10.3390/math10234571 - 2 Dec 2022
Cited by 2 | Viewed by 1419
Abstract
Quantum informatics is a new subject formed by the intersection of quantum mechanics and informatics. Quantum communication is a new way to transmit quantum states through quantum entanglement, quantum teleportation, and quantum information splitting. Building on research into multiparticle-state quantum information splitting, this paper innovatively combines the decision tree algorithm from machine learning with quantum communication to solve the problem of channel particle allocation in quantum communication, and experiments showed that the algorithm can produce the optimal allocation scheme. Based on this scheme, we propose a two-particle-state hierarchical quantum information splitting scheme built on the multi-particle state. First, Alice measures the Bell states of the particles she owns and tells the result to the receiver through the classical channel. If the receiver is a high-level communicator, he only needs the help of one of the low-level communicators and all the high-level communicators. After performing a single-particle measurement on the z-basis, they send the result to the receiver through the classical channel. When the receiver is a low-level communicator, all communicators need to measure the particles they own and tell the receiver the results. Finally, the receiver performs the corresponding unitary operation according to the received results. In this way, a complete hierarchical quantum information splitting operation is completed. On the basis of theoretical research, we also carried out experimental verification, security analysis, and comparative analysis, which show that our scheme is reliable and has high security and efficiency. Full article
(This article belongs to the Special Issue Artificial Intelligence and Data Science)
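
The classical component named in the abstract, a decision tree that chooses a channel-particle allocation from channel features, can be illustrated as follows; the features and labels are entirely hypothetical placeholders.

```python
# Toy sketch: a decision tree picking an allocation scheme from channel features.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 3))                          # e.g., fidelity, noise, distance
y = (X[:, 0] - 0.5 * X[:, 1] > 0.2).astype(int)   # 0/1: which allocation scheme
tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
best_scheme = tree.predict([[0.9, 0.1, 0.4]])     # scheme for a new channel state
```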
