Deep Learning in Parallel and Distributed Data Applications and Systems

A special issue of Information (ISSN 2078-2489). This special issue belongs to the section "Information Processes".

Deadline for manuscript submissions: 31 December 2024 | Viewed by 3773

Special Issue Editor


Guest Editor
School of Control and Computer Engineering, North China Electric Power University, Zhuhai 519099, China
Interests: distributed systems; deep learning; cloud computing

Special Issue Information

Dear Colleagues,

Deep learning (DL) has become an increasingly important area of research in recent years and a critical tool for addressing complex analysis tasks in various data applications, particularly in fields such as computer vision (CV), natural language processing (NLP), and data analytics. This is largely due to the advancement of highly parallel and distributed computing systems, which can support the intensive computations required by deep learning algorithms. However, performing model training and inference efficiently, e.g., for large models and large datasets, remains challenging with current techniques. On the other hand, with the increasing complexity of computing paradigms such as IoT, edge, and cloud computing, deep learning techniques such as deep reinforcement learning have been used to optimize the management of parallel and distributed data applications and systems. Nevertheless, as these applications and systems grow in size and complexity, current DL-aided solutions still encounter problems in ensuring the security, privacy, and scalability of parallel and distributed data applications and systems.

In this Special Issue, we seek to explore the latest advances and challenges in the context of deep learning in parallel and distributed data applications and systems. Specifically, we are interested in original research that explores novel algorithms, architectures, systems and applications for deep learning in parallel and distributed settings, as well as papers that address the challenges and limitations of the existing approaches. Topics to be covered in this Special Issue might include, but are not limited to, the following:

  • The development of new parallel and distributed architectures for DL;
  • Performance optimization for DL using large-scale datasets;
  • Theoretical foundations of parallel and distributed DL;
  • Challenges and limitations of existing approaches to parallel and distributed deep learning;
  • The use of DL in the optimization of parallel and distributed data applications and systems;
  • The integration of DL with IoT, edge, and cloud computing paradigms;
  • DL for security and privacy protection in distributed and parallel data applications and systems;
  • DL for parallel and distributed applications in CV and NLP;
  • The application of DL to big data applications and systems;
  • Novel applications of DL in parallel and distributed environments.

We invite submissions of high-quality, original research papers, as well as review articles that provide a comprehensive overview of the state of the art in the fields of deep learning and parallel and distributed computing. We also welcome papers that present case studies and real-world applications of deep learning in parallel and distributed data applications and systems.

This is a unique opportunity for researchers to share their latest findings and ideas with a broad audience, and to help advance our understanding of the interactions between deep learning and parallel and distributed computing. We look forward to receiving your submissions.

Dr. Long Cheng
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Information is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deep learning
  • parallel computing
  • distributed systems
  • big data

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (2 papers)


Research

19 pages, 2535 KiB  
Article
Elegante: A Machine Learning-Based Threads Configuration Tool for SpMV Computations on Shared Memory Architecture
by Muhammad Ahmad, Usman Sardar, Ildar Batyrshin, Muhammad Hasnain, Khan Sajid and Grigori Sidorov
Information 2024, 15(11), 685; https://doi.org/10.3390/info15110685 - 1 Nov 2024
Viewed by 566
Abstract
The sparse matrix–vector product (SpMV) is a fundamental computational kernel utilized in a diverse range of scientific and engineering applications. It is commonly used to solve linear and partial differential equations. The parallel computation of the SpMV product is a challenging task. Existing solutions often assign a fixed number of threads to rows based on empirical formulas, leading to sub-optimal configurations and significant performance losses. Elegante, our proposed machine learning-powered tool, utilizes a data-driven approach to identify the optimal thread configuration for SpMV computations within a shared memory architecture. It accomplishes this by predicting the best thread configuration based on the unique sparsity pattern of each sparse matrix. Our approach involves training and testing using various base and ensemble machine learning algorithms such as decision tree, random forest, gradient boosting, logistic regression, and support vector machine. We rigorously experimented with a dataset of nearly 1000 real-world matrices. These matrices originated from 46 distinct application domains, spanning fields like robotics, power networks, 2D/3D meshing, and computational fluid dynamics. Our proposed methodology achieved 62% of the highest achievable performance and is 7.33 times faster than the default OpenMP configuration policy and the traditional practice of manually or randomly selecting the number of threads. This work is the first attempt in which the structure of the matrix is used to predict the optimal thread configuration for the optimization of parallel SpMV computation in a shared memory environment.
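As a rough illustration of the idea described in the abstract above, the sketch below predicts a thread count for SpMV from structural features of a matrix in CSR form. The feature set, the toy "training" data, and the nearest-neighbour lookup are all hypothetical stand-ins for the paper's trained models (decision tree, random forest, etc.); treat this as a sketch of the approach, not Elegante's actual implementation.

```python
# Hypothetical sketch: predict a good thread count for SpMV from
# structural features of the sparse matrix, as Elegante does with
# trained ML models. A 1-nearest-neighbour lookup over a made-up
# benchmark table stands in for the learned model here.

def matrix_features(row_ptr):
    """Structural features of a CSR matrix: rows, nnz, mean and max nnz per row."""
    n_rows = len(row_ptr) - 1
    nnz = row_ptr[-1]
    per_row = [row_ptr[i + 1] - row_ptr[i] for i in range(n_rows)]
    return (n_rows, nnz, nnz / max(n_rows, 1), max(per_row, default=0))

# Toy "training set": (features, best thread count) pairs that a real tool
# would collect by benchmarking SpMV at several thread counts.
TRAINED = [
    ((1_000, 5_000, 5.0, 12), 4),
    ((100_000, 2_000_000, 20.0, 900), 16),
    ((10_000, 40_000, 4.0, 8), 8),
]

def predict_threads(features):
    """Pick the thread count of the nearest training point (relative distance)."""
    def dist(a, b):
        return sum(((x - y) / (abs(y) + 1)) ** 2 for x, y in zip(a, b))
    return min(TRAINED, key=lambda t: dist(features, t[0]))[1]

# Example: a small CSR matrix with 4 rows and 9 non-zeros.
feats = matrix_features([0, 2, 5, 7, 9])
print(feats, predict_threads(feats))
```

A real tool would replace the lookup table with a classifier trained on benchmark data and export the prediction as, e.g., an OpenMP thread-count setting.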

26 pages, 10284 KiB  
Article
Real-Time Cost Optimization Approach Based on Deep Reinforcement Learning in Software-Defined Security Middle Platform
by Yuancheng Li and Yongtai Qin
Information 2023, 14(4), 209; https://doi.org/10.3390/info14040209 - 29 Mar 2023
Cited by 8 | Viewed by 1803
Abstract
In today’s business environment, reducing costs is crucial due to the variety of Internet of Things (IoT) devices and security infrastructure. However, applying security measures to complex business scenarios can lead to performance degradation, making it a challenging task. To overcome this problem, we propose a novel algorithm based on deep reinforcement learning (DRL) for optimizing cost in multi-party computation software-defined security middle platforms (MPC-SDSmp) in real time. To accomplish this, we first integrate fragmented security requirements and infrastructure into the MPC-SDSmp cloud model with privacy protection capabilities to reduce deployment costs. By leveraging the power of DRL and cloud computing technology, we enhance the real-time matching and dynamic adaptation capabilities of the security middle platform (Smp). This enables us to generate a real-time scheduling strategy for Smp resources that meets low-cost goals to reduce operating costs. Our experimental results demonstrate that the proposed method not only reduces costs by 13.6% but also ensures load balancing, improves quality-of-service (QoS) satisfaction by 18.7%, and reduces the average response time by 34.2%. Moreover, our solution is highly robust and better suited for real-time environments compared to the existing methods.
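To illustrate the flavour of the approach above, here is a deliberately tiny, hypothetical stand-in for the DRL scheduler: tabular Q-learning over a single state that learns to route tasks to the cheaper of two made-up resource pools. The pool names and costs are invented for the example; the paper's actual method applies deep RL to a much richer MPC-SDSmp state space with QoS and load-balancing objectives.

```python
# Hypothetical sketch of the cost-optimization idea: learn a scheduling
# policy for security-middle-platform (Smp) resources that minimises cost.
# The paper uses deep RL; this toy uses tabular Q-learning over a single
# state with an invented environment of two resource pools.

import random

COSTS = {"cheap_pool": 1.0, "fast_pool": 3.0}   # assumed per-task operating cost
ACTIONS = list(COSTS)

def train_policy(episodes=500, alpha=0.5, seed=0):
    """Q-learning with one state: reward is negative cost, so lower cost wins."""
    rng = random.Random(seed)
    q = {a: 0.0 for a in ACTIONS}
    for _ in range(episodes):
        a = rng.choice(ACTIONS)            # explore uniformly
        reward = -COSTS[a]                 # cheaper pool => higher reward
        q[a] += alpha * (reward - q[a])    # one-step update (no next state)
    return q

q = train_policy()
best = max(q, key=q.get)                   # greedy policy after training
print(q, best)
```

Each Q-value converges to the negative per-task cost of its pool, so the greedy policy routes tasks to the cheaper pool; a deep-RL version replaces the table with a network over the full system state.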
