The Future of Supercomputing

A special issue of Future Internet (ISSN 1999-5903). This special issue belongs to the section "Network Virtualization and Edge/Fog Computing".

Deadline for manuscript submissions: closed (30 June 2021) | Viewed by 3037

Special Issue Editor


Prof. Dr. Michael Resch
Guest Editor
High Performance Computing Center Stuttgart, Universität Stuttgart, Stuttgart, Germany
Interests: supercomputing; parallel programming; artificial intelligence; computational fluid dynamics; simulation; philosophy of science in simulation; digital convergence

Special Issue Information

Dear Colleagues,

The world of supercomputing is constantly changing. The advent of supercomputers with a peak performance of one Exaflop is just one part of this development. The Exaflop milestone has focused the attention of researchers and industry on a single problem over the last several years; however, more changes and opportunities lie ahead that need our attention. While Moore's law may be coming to an end, new approaches are opening new paths in architectures, programming, methods, and applications. New challenges arise from scalability, programmability, reliability, and power consumption.

This Special Issue is dedicated to these future developments and how they will shape the future of supercomputing. Original and innovative contributions addressing these questions are invited. This Special Issue is, however, also open to completely new approaches.

Potential topics include but are not limited to:

  • New architectures;
  • Scalability;
  • Programmability;
  • Algorithms;
  • Methods;
  • Data analytics/machine learning/artificial intelligence;
  • Cloud computing;
  • Edge computing/Edges of the Internet;
  • Fog computing;
  • Quantum computing;
  • Data centers;
  • Power and cooling.

Prof. Dr. Michael Resch
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Future Internet is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • supercomputing
  • architectures
  • power consumption
  • scalability
  • programmability
  • mathematical methods and algorithms
  • new applications
  • data analytics
  • artificial intelligence

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (1 paper)


Research

23 pages, 713 KiB  
Article
Exploiting Machine Learning for Improving In-Memory Execution of Data-Intensive Workflows on Parallel Machines
by Riccardo Cantini, Fabrizio Marozzo, Alessio Orsino, Domenico Talia and Paolo Trunfio
Future Internet 2021, 13(5), 121; https://doi.org/10.3390/fi13050121 - 5 May 2021
Cited by 3 | Viewed by 2645
Abstract
Workflows are largely used to orchestrate complex sets of operations required to handle and process huge amounts of data. Parallel processing is often vital to reduce execution time when complex data-intensive workflows must be run efficiently, and at the same time, in-memory processing can bring important benefits to accelerate execution. However, optimization techniques are necessary to fully exploit in-memory processing, avoiding performance drops due to memory saturation events. This paper proposes a novel solution, called the Intelligent In-memory Workflow Manager (IIWM), for optimizing the in-memory execution of data-intensive workflows on parallel machines. IIWM is based on two complementary strategies: (1) a machine learning strategy for predicting the memory occupancy and execution time of workflow tasks; (2) a scheduling strategy that allocates tasks to a computing node, taking into account the (predicted) memory occupancy and execution time of each task and the memory available on that node. The effectiveness of the machine learning-based predictor and the scheduling strategy was demonstrated experimentally using Spark, a high-performance Big Data processing framework that exploits in-memory computing to speed up the execution of large-scale applications, as a testbed. In particular, two synthetic workflows were prepared for testing the robustness of the IIWM in scenarios characterized by a high level of parallelism and a limited amount of memory reserved for execution. Furthermore, a real data analysis workflow was used as a case study to better assess the benefits of the proposed approach. Thanks to its high accuracy in predicting the resources used at runtime, the IIWM was able to avoid disk writes caused by memory saturation, outperforming a traditional strategy in which only dependencies among tasks are taken into account.
Specifically, the IIWM achieved makespan reductions of up to 31% and 40% and performance improvements of up to 1.45× and 1.66× on the synthetic workflows and the real case study, respectively.
(This article belongs to the Special Issue The Future of Supercomputing)
