Artificial Intelligence of Things (AIoT)

A special issue of Journal of Low Power Electronics and Applications (ISSN 2079-9268).

Deadline for manuscript submissions: closed (31 December 2020)

Special Issue Editors


Dr. Ivan Miro-Panades
Guest Editor
CEA-List, 38054 Grenoble, France
Interests: low-power electronics; Internet of Things; SRAM chips; microprocessor chips; CMOS integrated circuits

Dr. Andrea Calimera
Guest Editor
Department of Control and Computer Engineering, Politecnico di Torino, 10129 Torino, Italy
Interests: edge AI; embedded deep neural networks; design automation; optimization; low-power design

Dr. Koushik Chakraborty
Guest Editor
Electrical and Computer Engineering, Utah State University, Logan, UT 84322, USA
Interests: VLSI design and automation; computer architecture; robust circuit design

Dr. Amit Kumar Singh
Guest Editor
School of Computer Science and Electronic Engineering, University of Essex, Colchester CO4 3SQ, UK
Interests: embedded systems; MPSoC; NoC; design space exploration; run-time mapping

Special Issue Information

Dear Colleagues,

Artificial intelligence is spreading into every sector of our lives. As connected devices produce ever more data, the traditional approach of collecting raw sensor data and transmitting it to servers puts heavy pressure on communication links and energy budgets. To reduce the overall footprint of distributed AI processing, the sensor itself must become intelligent and process or pre-process data locally: thus the Artificial Intelligence of Things (AIoT) is born.

Collecting and processing data locally in an AIoT node not only reduces the communication payload and processing latency but also improves quality of service and data privacy. However, these AIoT nodes face new challenges. First, energy efficiency is key for AIoT devices, as local data processing has a high energy cost; since the amount of data collected varies over time, an AIoT node should optimize its power consumption according to its task and its current energy level. Second, AIoT nodes require more memory than current IoT nodes. The weights used to process the data incur area and energy penalties, and data quantization, sparse computing, and spiking neural networks are promising approaches to minimizing their impact (see the sketch below for the quantization case).
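To make the memory argument concrete, the following is a minimal sketch of one of the techniques named above, post-training weight quantization: float32 weights are mapped to int8 plus a scale factor, cutting storage fourfold. The function names, shapes, and values are assumptions chosen for illustration, not taken from any paper in this issue.

```python
# Hypothetical sketch: symmetric 8-bit post-training quantization of a
# weight matrix, one memory-reduction technique for AIoT nodes.
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float32 weights to int8 plus a per-tensor scale factor."""
    scale = np.max(np.abs(weights)) / 127.0   # span the int8 range
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original weights."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(64, 64)).astype(np.float32)
q, s = quantize_int8(w)
print(f"storage: {w.nbytes} B -> {q.nbytes} B")   # 16384 B -> 4096 B
print(f"max error: {np.max(np.abs(w - dequantize(q, s))):.5f}")
```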

This Special Issue of JLPEA will cover all the new challenges and opportunities offered by AIoT.

Topics include but are not limited to the following:

  • Hardware designs and architectural templates of AIoT nodes
  • Distributed AIoT systems
  • AI accelerators targeting the AIoT domain
  • In/near-memory computing architectures
  • Non-volatile memories for AIoT devices
  • Optimization techniques targeting the AIoT domain
  • Low-power design methodologies for AIoT nodes
  • Energy harvesting and power management circuits for AIoT devices
  • Emerging technologies and their application to AIoT devices
Dr. Ivan Miro-Panades
Dr. Andrea Calimera
Dr. Koushik Chakraborty
Dr. Amit Kumar Singh

Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Low Power Electronics and Applications is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (4 papers)


Research


24 pages, 2009 KiB  
Article
Low-Power Audio Keyword Spotting Using Tsetlin Machines
by Jie Lei, Tousif Rahman, Rishad Shafik, Adrian Wheeldon, Alex Yakovlev, Ole-Christoffer Granmo, Fahim Kawsar and Akhil Mathur
J. Low Power Electron. Appl. 2021, 11(2), 18; https://doi.org/10.3390/jlpea11020018 - 9 Apr 2021
Abstract
The emergence of artificial intelligence (AI) driven keyword spotting (KWS) technologies has revolutionized human-to-machine interaction. Yet, the challenges of end-to-end energy efficiency, memory footprint, and system complexity in current neural network (NN) powered AI-KWS pipelines have remained ever present. This paper evaluates KWS utilizing a learning-automata-powered machine learning algorithm called the Tsetlin Machine (TM). Through a significant reduction in parameter requirements and by choosing logic over arithmetic-based processing, the TM offers new opportunities for low-power KWS while maintaining high learning efficacy. In this paper, we explore a TM-based KWS pipeline to demonstrate low complexity with a faster rate of convergence compared to NNs. Further, we investigate scalability with an increasing number of keywords and explore the potential for enabling low-power on-chip KWS.
(This article belongs to the Special Issue Artificial Intelligence of Things (AIoT))
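As a hedged illustration of the abstract's point that the TM trades arithmetic for logic: in a Tsetlin Machine, each clause is a conjunction of Boolean literals, and a class score is a signed vote count over clauses, so inference needs no multiplications. The clauses and input below are invented for this sketch and are not the paper's trained model.

```python
# Minimal sketch of Tsetlin Machine inference (illustrative clauses).
import numpy as np

def clause_output(x: np.ndarray, include_pos: np.ndarray,
                  include_neg: np.ndarray) -> int:
    """A clause fires only if every included literal is satisfied."""
    pos_ok = np.all(x[include_pos] == 1) if include_pos.size else True
    neg_ok = np.all(x[include_neg] == 0) if include_neg.size else True
    return int(pos_ok and neg_ok)

def tm_score(x, clauses):
    """Class score: positive-polarity votes minus negative ones."""
    return sum(sign * clause_output(x, inc_p, inc_n)
               for sign, inc_p, inc_n in clauses)

# Two toy clauses over a 4-bit input: (+1) x0 AND NOT x2; (-1) x3.
clauses = [(+1, np.array([0]), np.array([2])),
           (-1, np.array([3]), np.array([], dtype=int))]
x = np.array([1, 0, 0, 1])
print("class score:", tm_score(x, clauses))  # +1 - 1 = 0
```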

18 pages, 3098 KiB  
Article
Highly Adaptive Linear Actor-Critic for Lightweight Energy-Harvesting IoT Applications
by Sota Sawaguchi, Jean-Frédéric Christmann and Suzanne Lesecq
J. Low Power Electron. Appl. 2021, 11(2), 17; https://doi.org/10.3390/jlpea11020017 - 8 Apr 2021
Abstract
Reinforcement learning (RL) has received much attention in recent years due to its adaptability to unpredictable events such as harvested energy and workload, especially in the context of edge computing for Internet-of-Things (IoT) nodes. Due to limited resources in IoT nodes, it is difficult to achieve self-adaptability. This paper studies the online reactivity issues caused by a fixed learning rate in the linear actor-critic (LAC) algorithm for transmission duty-cycle control. We propose the LAC-AB algorithm, which introduces into the LAC algorithm an adaptive learning rate, Adam, for the actor update to achieve better adaptability. We introduce a definition of "convergence" for quantitative analysis of convergence. Simulation results using real-life one-year solar irradiance data indicate that, unlike the conventional setups of the two decay rates β1 and β2 of Adam, smaller β1 values such as 0.2–0.4 are suitable for power-failure-sensitive applications and 0.5–0.7 for latency-sensitive applications, with β2 ∈ [0.1, 0.3]. LAC-AB improves reactivity time by 68.5–88.1% in our application; it also fine-tunes the initial learning rate for the initial state and improves fine-tuning time by 78.2–84.3% compared to the LAC. Moreover, the number of power failures is drastically reduced to zero or a few occurrences over 300 simulations.
(This article belongs to the Special Issue Artificial Intelligence of Things (AIoT))
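For readers unfamiliar with the adaptive learning rate the abstract refers to, below is a minimal sketch of a generic Adam update, whose step size is governed by the decay rates β1 and β2 discussed above. This is not the paper's LAC-AB code; the gradient values are invented, and the β1/β2 defaults are merely picked from the ranges the abstract recommends.

```python
# Hedged sketch of a generic Adam update (illustrative values only).
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.01, beta1=0.3, beta2=0.2,
              eps=1e-8):
    """One Adam step; smaller beta1/beta2 forget history faster and so
    react more quickly to regime changes such as a drop in harvested
    energy."""
    m = beta1 * m + (1 - beta1) * grad        # 1st-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # 2nd-moment estimate
    m_hat = m / (1 - beta1 ** t)              # bias correction
    v_hat = v / (1 - beta2 ** t)
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

theta, m, v = np.zeros(2), np.zeros(2), np.zeros(2)
for t in range(1, 4):                         # three toy iterations
    grad = np.array([0.5, -0.2])              # stand-in policy gradient
    theta, m, v = adam_step(theta, grad, m, v, t)
print(theta)
```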

Review


16 pages, 438 KiB  
Review
A Review of Algorithms and Hardware Implementations for Spiking Neural Networks
by Duy-Anh Nguyen, Xuan-Tu Tran and Francesca Iacopi
J. Low Power Electron. Appl. 2021, 11(2), 23; https://doi.org/10.3390/jlpea11020023 - 24 May 2021
Abstract
Deep Learning (DL) has contributed to the success of many applications in recent years. These applications range from simple ones, such as recognizing tiny images or simple speech patterns, to highly complex ones, such as playing the game of Go. However, this superior performance comes at a high computational cost, which makes porting DL applications to conventional hardware platforms a challenging task. Many approaches have been investigated, and Spiking Neural Networks (SNNs) are one of the promising candidates. SNNs are the third generation of Artificial Neural Networks (ANNs), in which each neuron in the network uses discrete spikes to communicate in an event-based manner. SNNs have the potential advantage of achieving better energy efficiency than their ANN counterparts. While SNN models generally incur some loss of accuracy, new algorithms have helped to close the accuracy gap. For hardware implementations, SNNs have attracted much attention in the neuromorphic hardware research community. In this work, we review the basic background of SNNs, the current state and challenges of SNN training algorithms, and current implementations of SNNs on various hardware platforms.
(This article belongs to the Special Issue Artificial Intelligence of Things (AIoT))
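As a minimal illustration of the event-based neurons the review surveys, the sketch below simulates a leaky integrate-and-fire (LIF) neuron: the state update only accumulates input, compares against a threshold, and emits a binary spike. All constants are assumptions chosen for the example, not values from the review.

```python
# Hypothetical sketch of a leaky integrate-and-fire (LIF) neuron.
def lif_run(in_spikes, tau=0.9, v_th=1.5):
    """Simulate one LIF neuron over a binary input spike train."""
    v, out = 0.0, []
    for s in in_spikes:
        v = tau * v + s            # leaky integration of input spikes
        if v >= v_th:              # threshold crossing -> output spike
            out.append(1)
            v = 0.0                # reset membrane potential
        else:
            out.append(0)
    return out

print(lif_run([1, 0, 1, 1, 0, 1]))  # -> [0, 0, 1, 0, 0, 1]
```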

20 pages, 1242 KiB  
Review
Internet of Things: A Review on Theory Based Impedance Matching Techniques for Energy Efficient RF Systems
by Benoit Couraud, Remy Vauche, Spyridon Nektarios Daskalakis, David Flynn, Thibaut Deleruyelle, Edith Kussener and Stylianos Assimonis
J. Low Power Electron. Appl. 2021, 11(2), 16; https://doi.org/10.3390/jlpea11020016 - 31 Mar 2021
Abstract
Within an increasingly connected world, the exponential growth in the deployment of Internet of Things (IoT) applications presents a significant challenge in power and data transfer optimisation. Currently, the maximization of Radio Frequency (RF) system power gain depends on the design of efficient commercial chips and on the integration of these chips by using complex RF simulations to verify bespoke configurations. However, even if a standard 50 Ω transmitter chip has an efficiency of 90%, the overall power efficiency of the RF system can be reduced by 10% if the chip is coupled with a standard 72 Ω antenna. Hence, scalable IoT networks require an optimal RF system design for every transceiver: for example, impedance mismatching between a transmitter's antenna and chip leads to a significant reduction of the corresponding RF system's overall power efficiency. This work presents a versatile design framework, based on well-known theoretical methods (i.e., transducer gain, the power wave approach, and transmission line theory), for the design, optimal in terms of power delivered to a load, of a typical RF system consisting of an antenna, a matching network, a load (e.g., an integrated circuit), and the transmission lines connecting these parts. The aim of this design framework is not only to reduce the computational effort needed for the design and prototyping of power-efficient RF systems, but also to increase the accuracy of the analysis. Simulated and measured results verify the accuracy of the proposed design framework over the 0–4 GHz spectrum. Finally, a case study based on the design of an RF system for Bluetooth applications demonstrates the benefits of this RF design framework.
(This article belongs to the Special Issue Artificial Intelligence of Things (AIoT))
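To illustrate the power-wave approach the abstract invokes, the sketch below computes the textbook mismatch efficiency 1 − |Γ|², with the power-wave reflection coefficient Γ = (Z_L − Z_S*)/(Z_L + Z_S). The 50 Ω/72 Ω pair echoes the abstract's example, though the system-level loss reported there depends on the full RF chain, not on this single interface alone.

```python
# Hedged sketch of the textbook power-wave mismatch calculation.
def mismatch_efficiency(z_source: complex, z_load: complex) -> float:
    """Fraction of the available source power delivered to the load."""
    gamma = (z_load - z_source.conjugate()) / (z_load + z_source)
    return 1.0 - abs(gamma) ** 2

# 50-ohm transmitter chip driving a 72-ohm antenna:
eta = mismatch_efficiency(50 + 0j, 72 + 0j)
print(f"delivered fraction: {eta:.3f}")  # ~0.967 for this resistive pair
```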
