Editorial

Editorial for the Beyond Moore’s Law: Hardware Specialization and Advanced System on Chip
Xiaokun Yang
College of Science and Engineering, University of Houston Clear Lake, Houston, TX 77058, USA
Micromachines 2023, 14(8), 1583; https://doi.org/10.3390/mi14081583
Submission received: 7 August 2023 / Accepted: 7 August 2023 / Published: 11 August 2023
In the absence of a new transistor technology to replace CMOS, design specialization has emerged as one of the most immediate paths to high-performance computing. One notable example of a purpose-built architecture for inference workloads is the Google tensor processing unit (TPU), which has demonstrated significantly higher efficiency than general-purpose chips. In the field of artificial intelligence, industry has produced numerous successful application-specific designs and accelerators, such as Nervana’s AI architecture, Facebook’s “Big Sur”, and various forms of in-network acceleration for large data centers, including Microsoft’s FPGA Configurable Cloud and Project Catapult for FPGA-accelerated search. The objective of this Special Issue is to explore a wide range of research and demonstrations on computation-intensive applications for high-performance computing, focusing on the various specialized designs that have been developed.
Specifically, Sha et al. [1] present a design that integrates a CPU-based control plane with an FPGA-based data plane, aiming to support multiple network functions while sustaining a high performance of 100 Gbps. In the deep learning field, Xu et al. [2] propose a low-power FPGA design for the YOLOv4-tiny model. Their design uses 16-bit fixed-point operators, trading precision for more than 10-fold and 3-fold reductions in power dissipation compared with CPU and GPU implementations, respectively. Also in the neural network design field, Xie et al. [3] demonstrate an efficient accelerator for N:M sparse convolutional neural networks (CNNs) with layer-wise sparse patterns; their FPGA implementation validates the acceleration of classical CNNs such as AlexNet, VGG-16, and ResNet-50. Madineni et al. [4] present a parameterized CNN design using Chisel, an open-source hardware construction language developed at UC Berkeley, offering flexible implementation options that support 16-, 32-, 64-, and 128-bit configurations on FPGA. Popovici et al. [5] introduce a real-time RISC-V-based CAN-FD bus diagnosis tool and make the design publicly available. Wang et al. [6] propose a TCP offload engine (TOE) prototype system on an FPGA that supports a high-performance throughput of 100 Gbps; their design enables concurrent hardware maintenance of up to 250,000 TCP connection states on a single network node, thereby improving the overall performance of the network system.
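Two of the techniques summarized above, 16-bit fixed-point quantization [2] and N:M structured sparsity [3], can be illustrated with a minimal software sketch. The snippet below is illustrative only: the fixed-point format (Q8.8 here) and the magnitude-based pruning rule are common conventions assumed for the example, not details taken from the papers, whose actual hardware implementations differ.

```python
def quantize_fixed16(x, frac_bits=8):
    """Map a float to a signed 16-bit fixed-point integer.

    Q8.8 is an illustrative choice; the papers' exact formats may differ.
    """
    scale = 1 << frac_bits
    return max(-32768, min(32767, round(x * scale)))


def dequantize_fixed16(q, frac_bits=8):
    """Recover the (precision-reduced) float from the fixed-point value."""
    return q / (1 << frac_bits)


def prune_nm(weights, n=2, m=4):
    """Zero all but the n largest-magnitude weights in each group of m.

    This is the generic idea behind an N:M sparse pattern; it is a sketch,
    not the accelerator's actual layer-wise pruning scheme.
    """
    out = []
    for i in range(0, len(weights), m):
        group = weights[i:i + m]
        keep = sorted(range(len(group)),
                      key=lambda j: abs(group[j]), reverse=True)[:n]
        out.extend(w if j in keep else 0.0 for j, w in enumerate(group))
    return out
```

With a 2:4 pattern, exactly two weights survive in every group of four, which is what lets hardware skip half the multiply–accumulate operations with a fixed, predictable layout.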

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Sha, M.; Guo, Z.; Guo, Y.; Zeng, X. A High-Performance and Flexible Architecture for Accelerating SDN on the MPSoC Platform. Micromachines 2022, 13, 1854. [Google Scholar] [CrossRef] [PubMed]
  2. Xu, S.; Zhou, Y.; Huang, Y.; Han, T. YOLOv4-Tiny-Based Coal Gangue Image Recognition and FPGA Implementation. Micromachines 2022, 13, 1983. [Google Scholar] [CrossRef] [PubMed]
  3. Xie, X.; Zhu, M.; Lu, S.; Wang, Z. Efficient Layer-Wise N:M Sparse CNN Accelerator with Flexible SPEC: Sparse Processing Element Clusters. Micromachines 2023, 14, 528. [Google Scholar] [CrossRef] [PubMed]
  4. Madineni, M.; Vega, M.; Yang, X. Parameterizable Design on Convolutional Neural Networks Using Chisel Hardware Construction Language. Micromachines 2023, 14, 531. [Google Scholar] [CrossRef] [PubMed]
  5. Popovici, C.; Stan, A. Real-Time RISC-V-Based CAN-FD Bus Diagnosis Tool. Micromachines 2023, 14, 196. [Google Scholar] [CrossRef] [PubMed]
  6. Wang, K.; Guo, Y.; Guo, Z. Highly Concurrent TCP Session Connection Management System on FPGA Chip. Micromachines 2023, 14, 385. [Google Scholar] [CrossRef] [PubMed]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

