Hardware-Friendly Machine Learning and Its Applications, 2nd Edition

A special issue of Micromachines (ISSN 2072-666X). This special issue belongs to the section "E: Engineering and Technology".

Deadline for manuscript submissions: closed (25 June 2023) | Viewed by 8455

Special Issue Editor


Dr. Arman Roohi
Guest Editor
Department of Electrical and Computer Engineering, University of Illinois Chicago, Chicago, IL 60607, USA
Interests: computer architecture; emerging technologies; machine learning; hardware security; neuromorphic computing; bioinformatics

Special Issue Information

Dear Colleagues,

Machine learning algorithms, such as those for object detection, object recognition, multicategory classification, and scene analysis, have shown impressive performance in recent decades across a wide range of applications, approaching human-level perception rates. However, their computational complexity still challenges state-of-the-art computing platforms, especially when the application of interest is tightly constrained by requirements such as low power, high throughput, and real-time response. In recent years, there have been enormous advances in implementing machine learning algorithms with application-specific hardware, and there is a timely need to map the latest learning algorithms to physical hardware to achieve substantial improvements in performance, energy efficiency, and compactness. Recent progress in computational neuroscience and nanoelectronic technology will further help to shed light on future hardware–software platforms for efficient machine learning. This Special Issue aims to explore the potential of efficient machine learning, reveal emerging algorithms and design needs, and promote novel applications. It also collects contributions on methodologies and technologies for the design, evaluation, and optimization of the software, hardware, and emerging applications that currently support the diverse computing scenarios in which machine learning is deployed.

Topics of interest include, but are not limited to, the following:

  • New microarchitecture designs of hardware accelerators for ML;
  • Sparse learning, feature extraction, and personalization;
  • Deep learning with high speed and high power efficiency;
  • Computing models and hardware architecture co-design for machine learning;
  • New microarchitecture designs of hardware accelerators using emerging devices;
  • Tools for the modeling, simulation, and synthesis of hardware accelerators;
  • ML acceleration for edge computing and IoT.

Dr. Arman Roohi
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Micromachines is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning
  • design methodology
  • co-design
  • framework
  • computing methodologies
  • hardware accelerators
  • DNN compression
  • DNN quantization
  • edge AI

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (3 papers)


Research

19 pages, 2203 KiB  
Article
Intermittent-Aware Design Exploration of Systolic Array Using Various Non-Volatile Memory: A Comparative Study
by Nedasadat Taheri, Sepehr Tabrizchi and Arman Roohi
Micromachines 2024, 15(3), 343; https://doi.org/10.3390/mi15030343 - 29 Feb 2024
Cited by 1 | Viewed by 1369
Abstract
This paper conducts a comprehensive study of intermittent computing within IoT environments, emphasizing the interplay between different dataflows (row, weight, and output stationary) and a variety of non-volatile memory technologies. We then delve into the architectural optimization of these systems using a spatial architecture, namely IDEA, with its processing elements efficiently arranged in a rhythmic pattern, providing enhanced performance in the presence of power failures. This exploration aims to highlight the diverse advantages and potential applications of each combination, offering a comparative perspective. In our findings, using IDEA for the row-stationary dataflow with AlexNet on the CIFAR10 dataset, we observe a power efficiency gain of 2.7% and an average reduction of 21% in the required cycles. This study elucidates the potential of different architectural choices in enhancing energy efficiency and performance in IoT systems.
(This article belongs to the Special Issue Hardware-Friendly Machine Learning and Its Applications, 2nd Edition)
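The dataflow and checkpointing ideas summarized in the abstract can be illustrated with a toy cycle-level model. The sketch below is not the paper's IDEA architecture: it simulates a plain output-stationary systolic array in NumPy and periodically snapshots the partial sums as a crude stand-in for backing them up to non-volatile memory before a power failure. All sizes, names, and the checkpoint interval are illustrative assumptions.

```python
import numpy as np

def systolic_matmul(A, B, checkpoint_every=None):
    """Toy cycle-level model of an output-stationary systolic array.

    A: (M, K) activations, B: (K, N) weights; PE(i, j) accumulates C[i, j].
    `checkpoint_every` crudely mimics intermittent operation by snapshotting
    the accumulators every few cycles (a stand-in for an NVM write-back).
    """
    M, K = A.shape
    K2, N = B.shape
    assert K == K2, "inner dimensions must match"

    acc = np.zeros((M, N))      # per-PE accumulators (the outputs stay put)
    a_reg = np.zeros((M, N))    # activations flow left -> right
    b_reg = np.zeros((M, N))    # weights flow top -> bottom
    checkpoints = []

    for t in range(M + N + K - 2):          # cycles until the last PE finishes
        a_reg = np.roll(a_reg, 1, axis=1)   # shift activations one PE right
        b_reg = np.roll(b_reg, 1, axis=0)   # shift weights one PE down
        # Inject skewed inputs at the array edges (zeros once a row/column is done).
        for i in range(M):
            k = t - i
            a_reg[i, 0] = A[i, k] if 0 <= k < K else 0.0
        for j in range(N):
            k = t - j
            b_reg[0, j] = B[k, j] if 0 <= k < K else 0.0
        acc += a_reg * b_reg                # one multiply-accumulate per PE per cycle
        if checkpoint_every and (t + 1) % checkpoint_every == 0:
            checkpoints.append(acc.copy())  # "back up partial sums to NVM"
    return acc, checkpoints

A, B = np.random.rand(4, 6), np.random.rand(6, 5)
C, ckpts = systolic_matmul(A, B, checkpoint_every=4)
print(np.allclose(C, A @ B))  # True
```

The dataflow label refers to which operand stays resident in the processing elements while the others stream through; in this toy model the partial sums stay put (output stationary), whereas weight- and row-stationary arrangements keep weights or filter rows resident instead.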

18 pages, 3299 KiB  
Article
Tree-Based Machine Learning Models with Optuna in Predicting Impedance Values for Circuit Analysis
by Jung-Pin Lai, Ying-Lei Lin, Ho-Chuan Lin, Chih-Yuan Shih, Yu-Po Wang and Ping-Feng Pai
Micromachines 2023, 14(2), 265; https://doi.org/10.3390/mi14020265 - 20 Jan 2023
Cited by 19 | Viewed by 3790
Abstract
The transmission characteristics of the printed circuit board (PCB) ensure signal integrity and support the entire circuit system, with impedance matching being critical in the design of high-speed PCB circuits. Because the factors affecting impedance are closely related to the PCB production process, circuit designers and manufacturers must work together to adjust the target impedance to maintain signal integrity. Five machine learning models, including decision tree (DT), random forest (RF), extreme gradient boosting (XGBoost), categorical boosting (CatBoost), and light gradient boosting machine (LightGBM), were used to forecast target impedance values, and the Optuna algorithm was used to determine the forecasting models' hyperparameters. This study applied tree-based machine learning techniques with Optuna to predict impedance. The results revealed that all five tree-based machine learning models with Optuna can generate satisfactory forecasting accuracy in terms of three measurements: mean absolute percentage error (MAPE), root mean square error (RMSE), and coefficient of determination (R²). The LightGBM model with Optuna outperformed the other models. In addition, tuning the parameters of the machine learning models with Optuna increased the accuracy of impedance matching. Thus, the results of this study suggest that tree-based machine learning techniques with Optuna are a viable and promising alternative for predicting impedance values for circuit analysis.
(This article belongs to the Special Issue Hardware-Friendly Machine Learning and Its Applications, 2nd Edition)
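As a rough sketch of the tuning loop the abstract describes, and not the authors' exact setup, the snippet below lets Optuna search LightGBM hyperparameters for a regression task and then reports MAPE, RMSE, and R² on held-out data; the synthetic data, search ranges, and trial budget are placeholder assumptions.

```python
import optuna
import lightgbm as lgb
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.metrics import (mean_absolute_percentage_error,
                             mean_squared_error, r2_score)

# Synthetic stand-in for the impedance measurements used in the paper.
X, y = make_regression(n_samples=2000, n_features=12, noise=5.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

def objective(trial):
    # Illustrative search ranges only; the paper's ranges may differ.
    params = {
        "num_leaves": trial.suggest_int("num_leaves", 16, 256),
        "learning_rate": trial.suggest_float("learning_rate", 1e-3, 0.3, log=True),
        "n_estimators": trial.suggest_int("n_estimators", 100, 1000),
        "min_child_samples": trial.suggest_int("min_child_samples", 5, 50),
    }
    model = lgb.LGBMRegressor(**params, random_state=0)
    model.fit(X_train, y_train)
    # Tuning against the held-out split for brevity; a separate validation
    # split or cross-validation would be the more careful protocol.
    return mean_squared_error(y_test, model.predict(X_test))

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=50)

best = lgb.LGBMRegressor(**study.best_params, random_state=0).fit(X_train, y_train)
pred = best.predict(X_test)
print("MAPE:", mean_absolute_percentage_error(y_test, pred))
print("RMSE:", mean_squared_error(y_test, pred) ** 0.5)
print("R2:  ", r2_score(y_test, pred))
```

The same objective structure can be reused for the other tree-based models (DT, RF, XGBoost, CatBoost) by swapping the estimator and its parameter space.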

12 pages, 1428 KiB  
Article
A Smart-Anomaly-Detection System for Industrial Machines Based on Feature Autoencoder and Deep Learning
by Imran Ahmed, Misbah Ahmad, Abdellah Chehri and Gwanggil Jeon
Micromachines 2023, 14(1), 154; https://doi.org/10.3390/mi14010154 - 7 Jan 2023
Cited by 12 | Viewed by 2829
Abstract
Machine-health-surveillance systems are gaining popularity in industrial manufacturing due to the widespread availability of low-cost devices, sensors, and internet connectivity. In this regard, artificial intelligence provides valuable assistance in the form of deep learning methods to analyze and process big machine data. In diverse industrial applications, gears are a critical element, and many failures occur due to their unexpected breakdown; anomaly-detection and fault-diagnosis systems for gears have therefore been a major focus of recent research. In this work, we present a smart deep-learning-based system to detect anomalies in an industrial machine. Our system uses vibration analysis as a deciding tool for machinery-maintenance decisions. We first perform a data analysis of the gearbox data set to gain insights into the data. By calculating and examining the machine's vibration, we aim to determine the nature and severity of a defect in the machine and hence detect the anomaly. A gearbox's vibration signal holds the signature of faults in the gears, and earlier fault detection is achievable by examining the vibration signal with a deep learning technique. We therefore propose a six-layer autoencoder-based deep learning framework for anomaly detection and fault analysis using a publicly available data set of wind-turbine components. The gearbox fault-diagnosis data set, including vibration attributes recorded using SpectraQuest's gearbox fault-diagnostics simulator, is used for experimentation. Through comprehensive experiments, we show that the framework achieves good results compared to others, with an overall accuracy of 91%.
(This article belongs to the Special Issue Hardware-Friendly Machine Learning and Its Applications, 2nd Edition)
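For readers unfamiliar with the reconstruction-error idea behind autoencoder anomaly detection, the sketch below shows one common way to set it up. It is not the paper's exact six-layer network; the synthetic "vibration features", layer sizes, and percentile threshold are assumptions for illustration.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Placeholder "healthy" and "faulty" vibration features; real data would come
# from the gearbox fault-diagnosis data set (e.g., SpectraQuest recordings).
rng = np.random.default_rng(0)
n_features = 10
x_healthy = rng.normal(0.0, 1.0, size=(5000, n_features)).astype("float32")
x_faulty = rng.normal(0.0, 3.0, size=(500, n_features)).astype("float32")

# Small symmetric autoencoder (layer count and sizes are illustrative only).
autoencoder = keras.Sequential([
    layers.Input(shape=(n_features,)),
    layers.Dense(8, activation="relu"),
    layers.Dense(4, activation="relu"),
    layers.Dense(2, activation="relu"),        # bottleneck
    layers.Dense(4, activation="relu"),
    layers.Dense(8, activation="relu"),
    layers.Dense(n_features, activation="linear"),
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(x_healthy, x_healthy, epochs=20, batch_size=64, verbose=0)

def reconstruction_error(x):
    # Per-sample mean squared reconstruction error.
    return np.mean((x - autoencoder.predict(x, verbose=0)) ** 2, axis=1)

# Flag samples whose error exceeds a percentile threshold estimated on healthy data.
threshold = np.percentile(reconstruction_error(x_healthy), 99)
print("faulty samples flagged:", np.mean(reconstruction_error(x_faulty) > threshold))
```

Because the model is trained only on healthy samples, faulty vibration signatures reconstruct poorly and their error tends to exceed the threshold, which is what flags them as anomalies.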
