Editor’s Choice Articles

Editor’s Choice articles are based on recommendations by the scientific editors of MDPI journals from around the world. Editors select a small number of articles recently published in the journal that they believe will be particularly interesting to readers, or important in the respective research area. The aim is to provide a snapshot of some of the most exciting work published in the various research areas of the journal.

19 pages, 3274 KiB  
Article
Advancing Logistics 4.0 with the Implementation of a Big Data Warehouse: A Demonstration Case for the Automotive Industry
by Nuno Silva, Júlio Barros, Maribel Y. Santos, Carlos Costa, Paulo Cortez, M. Sameiro Carvalho and João N. C. Gonçalves
Electronics 2021, 10(18), 2221; https://doi.org/10.3390/electronics10182221 - 10 Sep 2021
Cited by 22 | Viewed by 8747
Abstract
The constant advancements in Information Technology have been the main driver of the Big Data concept’s success. With it, new concepts such as Industry 4.0 and Logistics 4.0 are arising. Due to the increase in data volume, velocity, and variety, organizations are now looking at their data analytics infrastructures and searching for approaches to improve their decision-making capabilities, in order to enhance their results using new approaches such as Big Data and Machine Learning. The implementation of a Big Data Warehouse can be the first step towards improving an organization’s data analysis infrastructure and extracting value from Big Data technologies. Moving to Big Data technologies can provide several opportunities for organizations, such as the capability to analyze an enormous quantity of data from different data sources efficiently. At the same time, however, different challenges can arise, including data quality, data management, and lack of knowledge within the organization, among others. In this work, we propose an approach that can be adopted in the logistics department of any organization in order to promote the Logistics 4.0 movement, while highlighting the main challenges and opportunities associated with the development and implementation of a Big Data Warehouse in a real demonstration case at a multinational automotive organization. Full article
(This article belongs to the Special Issue Big Data and Artificial Intelligence for Industry 4.0)
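To make the Big Data Warehouse idea concrete, here is a minimal PySpark sketch of the kind of analytical query such a warehouse serves; the table, column names, and values are invented for illustration and are not taken from the paper.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("logistics-bdw-sketch").getOrCreate()

# Toy fact table of material movements, standing in for the warehouse's data.
movements = spark.createDataFrame(
    [("P-001", "WH-A", "LINE-1", 120),
     ("P-002", "WH-A", "LINE-2", 80),
     ("P-001", "WH-B", "LINE-1", 60)],
    ["part_id", "warehouse", "destination", "quantity"],
)

# A typical analytical query: total material flow per part and destination.
(movements.groupBy("part_id", "destination")
          .agg(F.sum("quantity").alias("total_qty"))
          .show())
```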

19 pages, 1956 KiB  
Article
A Comprehensive Analysis of Deep Neural-Based Cerebral Microbleeds Detection System
by Maria Anna Ferlin, Michał Grochowski, Arkadiusz Kwasigroch, Agnieszka Mikołajczyk, Edyta Szurowska, Małgorzata Grzywińska and Agnieszka Sabisz
Electronics 2021, 10(18), 2208; https://doi.org/10.3390/electronics10182208 - 9 Sep 2021
Cited by 13 | Viewed by 2737
Abstract
Machine learning-based systems are gaining interest in the field of medicine, mostly in medical imaging and diagnosis. In this paper, we address the problem of automatic cerebral microbleed (CMB) detection in magnetic resonance images. The task is challenging due to the difficulty of distinguishing a true CMB from its mimics; however, if successfully solved, it would streamline radiologists’ work. To deal with this complex three-dimensional problem, we propose a machine learning approach based on a 2D Faster RCNN network. We aimed to achieve a reliable system, i.e., one with balanced sensitivity and precision. Therefore, we researched and analysed, among other factors, the impact of the way the training data are provided to the system, their pre-processing, the choice of model and its structure, and the ways of regularisation. Furthermore, we carefully analysed the network predictions and proposed an algorithm for their post-processing. The proposed approach achieved high precision (89.74%), sensitivity (92.62%), and F1 score (90.84%). The paper presents the main challenges connected with automatic cerebral microbleed detection, a deep analysis of the problem, and the developed system. The conducted research may significantly contribute to automatic medical diagnosis. Full article
(This article belongs to the Special Issue Machine Learning in Electronic and Biomedical Engineering)
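For readers unfamiliar with the detection backbone, the following is a minimal sketch of running a 2D Faster R-CNN over MR slices, assuming a recent torchvision; the backbone, class count, and input size are illustrative assumptions, not the authors’ configuration, and the paper’s post-processing is not reproduced.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Two classes assumed: background and CMB candidate.
model = fasterrcnn_resnet50_fpn(weights=None, num_classes=2)
model.eval()

# One MR slice, replicated to three channels to fit the RGB-shaped input.
slices = [torch.rand(3, 256, 256)]
with torch.no_grad():
    detections = model(slices)  # per-slice dicts of boxes, labels, scores
print(detections[0]["boxes"].shape)
```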

36 pages, 32069 KiB  
Review
A Survey of the Tactile Internet: Design Issues and Challenges, Applications, and Future Directions
by Vaibhav Fanibhare, Nurul I. Sarkar and Adnan Al-Anbuky
Electronics 2021, 10(17), 2171; https://doi.org/10.3390/electronics10172171 - 6 Sep 2021
Cited by 29 | Viewed by 8653
Abstract
The Tactile Internet (TI) is an emerging area of research involving 5G and beyond (B5G) communications to enable real-time interaction of haptic data over the Internet between tactile ends, with audio-visual data as feedback. This emerging TI technology is viewed as the next evolutionary step for the Internet of Things (IoT) and is expected to bring about massive change in Healthcare 4.0, Industry 4.0 and autonomous vehicles to resolve complicated issues in modern society. This vision of the TI aims to turn a dream into reality. This article provides a comprehensive survey of the TI, focussing on design architecture, key application areas, potential enabling technologies, and the current issues and challenges in realising it. To illustrate the novelty of our work, we present a brainstorming mind-map of all the topics discussed in this article. We emphasise the design aspects of the TI and discuss its three main sections, i.e., the master, network, and slave sections, with a focus on the proposed application-centric design architecture. With the help of illustrative use-case diagrams, we discuss and tabulate the possible applications of the TI within a 5G framework, together with their requirements. Then, we extensively address the currently identified issues and challenges, along with promising potential enablers of the TI. Moreover, a comprehensive review of related articles on enabling technologies is presented, including Fifth Generation (5G), Software-Defined Networking (SDN), Network Function Virtualisation (NFV), Cloud/Edge/Fog Computing, Multiple Access, and Network Coding. Finally, we conclude the survey with several research issues that are open for further investigation. The survey thus provides insights into the TI that can help network researchers and engineers contribute further towards developing the next-generation Internet. Full article

14 pages, 34742 KiB  
Article
Implementation of an Award-Winning Invasive Fish Recognition and Separation System
by Jin Chai, Dah-Jye Lee, Beau Tippetts and Kirt Lillywhite
Electronics 2021, 10(17), 2182; https://doi.org/10.3390/electronics10172182 - 6 Sep 2021
Viewed by 2218
Abstract
The state of Michigan, U.S.A., was awarded USD 1 million in March 2018 for the Great Lakes Invasive Carp Challenge. The challenge sought new and novel technologies to function independently of, or in conjunction with, the fish deterrents already in place to prevent the movement of invasive carp species into the Great Lakes from the Illinois River through the Chicago Area Waterway System (CAWS). Our team proposed an environmentally friendly, low-cost, vision-based fish recognition and separation system. The proposed solution won fourth place in the challenge out of 353 participants from 27 countries. It includes an underwater imaging system that captures fish images for processing, a fish species recognition algorithm that identifies invasive carp species, and a mechanical system that guides fish movement and restrains invasive fish species for removal. We used our evolutionary learning-based algorithm to recognize fish species, which is considered the most challenging task of this solution. The algorithm was tested with a fish dataset consisting of four invasive and four non-invasive fish species. It achieved a remarkable 1.58% error rate, which is more than adequate for the proposed system, and required only a small number of images for training. This paper details the design of this unique solution and the implementation and testing accomplished since the challenge. Full article
(This article belongs to the Special Issue Convolutional Neural Networks and Vision Applications)

42 pages, 2364 KiB  
Article
Exploiting the Outcome of Outlier Detection for Novel Attack Pattern Recognition on Streaming Data
by Michael Heigl, Enrico Weigelt, Andreas Urmann, Dalibor Fiala and Martin Schramm
Electronics 2021, 10(17), 2160; https://doi.org/10.3390/electronics10172160 - 4 Sep 2021
Cited by 5 | Viewed by 3592
Abstract
Future-oriented networking infrastructures are characterized by highly dynamic Streaming Data (SD) whose volume, speed and number of dimensions have increased significantly over the past couple of years, energized by trends such as Software-Defined Networking and Artificial Intelligence. As an essential core component of network security, Intrusion Detection Systems (IDS) help to uncover malicious activity. In particular, consecutively applied alert correlation methods can aid in mining attack patterns based on the alerts generated by IDS. However, most of the existing methods lack the functionality to deal with SD affected by the phenomenon called concept drift and are mainly designed to operate on the output of signature-based IDS. Although unsupervised Outlier Detection (OD) methods have the ability to detect yet unknown attacks, most of the alert correlation methods cannot handle the output of such anomaly-based IDS. In this paper, we introduce a novel framework called Streaming Outlier Analysis and Attack Pattern Recognition, denoted as SOAAPR, which is able to process the output of various online unsupervised OD methods in a streaming fashion to extract information about novel attack patterns. Three different privacy-preserving, fingerprint-like signatures are computed from the clustered set of correlated alerts by SOAAPR, which characterize and represent the potential attack scenarios with respect to their communication relations, their manifestation in the data’s features and their temporal behavior. Beyond the recognition of known attacks, the derived signatures can be compared to find similarities between yet unknown and novel attack patterns. The evaluation, which is split into two parts, takes advantage of attack scenarios from the widely used and popular CICIDS2017 and CSE-CIC-IDS2018 datasets. Firstly, the streaming alert correlation capability is evaluated on CICIDS2017 and compared to a state-of-the-art offline algorithm, called Graph-based Alert Correlation (GAC), which has the potential to deal with the output of anomaly-based IDS. Secondly, the three types of signatures are computed from attack scenarios in the datasets and compared to each other. The discussion of results, on the one hand, shows that SOAAPR can compete with GAC in terms of alert correlation capability, leveraging four different metrics, and outperforms it significantly in terms of processing time by an average factor of 70 across 11 attack scenarios. On the other hand, in most cases, all three types of signatures seem to reliably characterize attack scenarios such that similar ones are grouped together, with up to 99.05% similarity between the FTP and SSH Patator attacks. Full article
(This article belongs to the Special Issue Data Security)

16 pages, 1210 KiB  
Article
Performance of Micro-Scale Transmission & Reception Diversity Schemes in High Throughput Satellite Communication Networks
by Apostolos Z. Papafragkakis, Charilaos I. Kouroriorgas and Athanasios D. Panagopoulos
Electronics 2021, 10(17), 2073; https://doi.org/10.3390/electronics10172073 - 27 Aug 2021
Cited by 5 | Viewed by 2197
Abstract
The use of Ka and Q/V bands could be a promising solution to accommodate higher data rate, interactive services; however, at these frequency bands, signal attenuation due to the various atmospheric phenomena, and more particularly due to rain, can constitute a serious limiting factor in system performance and availability. To alleviate this possible barrier, short- and large-scale diversity schemes have been proposed and examined in the past; in this paper, a micro-scale site diversity system is evaluated in terms of capacity gain using rain attenuation time series generated with the Synthetic Storm Technique (SST). Input to the SST was 4 years of experimental rainfall data from two stations with a separation distance of 386 m at the National Technical University of Athens (NTUA) campus in Athens, Greece. Additionally, a novel multi-dimensional synthesizer based on Gaussian Copulas, parameterized for the case of multiple-site micro-scale diversity systems, is presented and evaluated. In all examined scenarios, a significant capacity gain can be observed, showing that micro-scale site diversity systems could be a viable choice for enterprise users to increase the achievable data rates and improve the availability of their links. Full article
(This article belongs to the Special Issue State-of-the-Art in Satellite Communication Networks)

18 pages, 3914 KiB  
Article
A CEEMDAN-Assisted Deep Learning Model for the RUL Estimation of Solenoid Pumps
by Ugochukwu Ejike Akpudo and Jang-Wook Hur
Electronics 2021, 10(17), 2054; https://doi.org/10.3390/electronics10172054 - 25 Aug 2021
Cited by 11 | Viewed by 2647
Abstract
This paper develops a data-driven remaining useful life (RUL) prediction model for solenoid pumps. The model extracts high-level features using stacked autoencoders from pressure signals decomposed by the complementary ensemble empirical mode decomposition with adaptive noise (CEEMDAN) algorithm. These high-level features are then fed to a recurrent neural network with gated recurrent units (GRUs) for the RUL estimation. The case study presented demonstrates the robustness of the proposed RUL estimation model with extensive empirical validations. Results support the validity of using CEEMDAN for non-stationary signal decomposition and the accuracy, ease of use, and superiority of the proposed deep learning-based model for solenoid pump failure prognostics. Full article
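A minimal sketch of the decomposition-plus-recurrence pipeline is given below, assuming the PyEMD package for CEEMDAN and PyTorch for the GRU; the stacked-autoencoder stage is omitted and all sizes are illustrative rather than the paper’s.

```python
import numpy as np
import torch
import torch.nn as nn
from PyEMD import CEEMDAN

# Stand-in pressure signal; a real run would use measured pump pressure.
t = np.linspace(0, 1, 2000)
pressure = np.sin(40 * np.pi * t) + 0.3 * np.random.randn(t.size)
imfs = CEEMDAN().ceemdan(pressure)        # shape: (n_imfs, n_samples)

class RulGru(nn.Module):
    """GRU regressor mapping a window of IMF channels to an RUL estimate."""
    def __init__(self, n_imfs, hidden=16):
        super().__init__()
        self.gru = nn.GRU(n_imfs, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, time, n_imfs)
        _, h = self.gru(x)
        return self.head(h[-1])

model = RulGru(n_imfs=imfs.shape[0])
window = torch.tensor(imfs.T[None, :200], dtype=torch.float32)
print(model(window).shape)                 # torch.Size([1, 1])
```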

20 pages, 46722 KiB  
Article
Design and Optimization of Compact Printed Log-Periodic Dipole Array Antennas with Extended Low-Frequency Response
by Keyur K. Mistry, Pavlos I. Lazaridis, Zaharias D. Zaharis and Tian Hong Loh
Electronics 2021, 10(17), 2044; https://doi.org/10.3390/electronics10172044 - 24 Aug 2021
Cited by 15 | Viewed by 11561
Abstract
This paper initially presents an overview of different miniaturization techniques used for size reduction of printed log-periodic dipole array (PLPDA) antennas, and then presents a conventional PLPDA design that operates from 0.7 to 8 GHz and achieves a realized gain of around 5.5 dBi over most of its bandwidth. This antenna design is then used as a baseline model to implement a novel technique to extend the low-frequency response. This is achieved by replacing the longest straight dipole with a triangular-shaped dipole and by optimizing the four longest dipoles of the antenna using the Trust Region Framework algorithm in CST. The improved antenna with extended low-frequency response operates from 0.4 GHz to 8 GHz with a slightly reduced gain at the lower frequencies. Full article
(This article belongs to the Special Issue Evolutionary Antenna Optimization)

18 pages, 737 KiB  
Article
Miller Plateau Corrected with Displacement Currents and Its Use in Analyzing the Switching Process and Switching Loss
by Sheng Liu, Shuang Song, Ning Xie, Hai Chen, Xiaobo Wu and Menglian Zhao
Electronics 2021, 10(16), 2013; https://doi.org/10.3390/electronics10162013 - 20 Aug 2021
Cited by 4 | Viewed by 5384
Abstract
This paper reveals the relationship between the Miller plateau voltage and the displacement currents through the gate–drain capacitance (CGD) and the drain–source capacitance (CDS) in the switching process of a power transistor. The corrected turn-on and turn-off Miller plateau voltages are different even with a constant current load. Using the proposed new Miller plateau, the turn-on and turn-off sequences can be more accurately analyzed, and the switching power loss can be more accurately predicted accordingly. Switching loss models based on the new Miller plateau are also proposed. Experimental tests of a power MOSFET (NCE2030K) verified the relationship between the Miller plateau voltage and the displacement currents through CGD and CDS. A carefully designed verification test bench featuring a power MOSFET model written in Verilog-A confirmed the prediction accuracy of the switching waveform and switching loss with the newly proposed Miller plateau. The average relative error of the loss model using the new plateau is reduced to 1/2∼1/4 of that of the loss model using the old plateau; the proposed loss model using the new plateau, which also takes the gate current’s variation into account, further reduces the error to around 5%. Full article
(This article belongs to the Special Issue Advanced Analog Circuits for Emerging Applications)
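The core correction can be stated with a first-order gate-charge model; this is a hedged sketch, not the paper’s exact derivation. During the plateau, the load current I_L at the drain node splits between the channel and the displacement currents through CGD and CDS, so for a square-law device with transconductance parameter k and threshold V_th:

```latex
i_{ch} = I_L - \left(C_{GD} + C_{DS}\right)\frac{dv_{DS}}{dt},
\qquad
V_{pl} \approx V_{th} + \sqrt{\frac{2\,i_{ch}}{k}}
```

Since dv_DS/dt is negative at turn-on and positive at turn-off, i_ch, and hence the plateau voltage, differs between the two transitions even with a constant current load, which is the effect described in the abstract.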

20 pages, 3754 KiB  
Review
A Review on 5G Sub-6 GHz Base Station Antenna Design Challenges
by Madiha Farasat, Dushmantha N. Thalakotuna, Zhonghao Hu and Yang Yang
Electronics 2021, 10(16), 2000; https://doi.org/10.3390/electronics10162000 - 19 Aug 2021
Cited by 42 | Viewed by 14865
Abstract
Modern wireless networks such as 5G require multiband MIMO-supported base station antennas (BSAs). As a result, antennas have multiple ports to support a range of frequency bands, leading to multiple arrays within one compact antenna enclosure. The close proximity of the arrays results in significant scattering, which degrades the pattern performance of each band, while coupling between arrays degrades return loss and port-to-port isolation. Different design techniques have been adopted in the literature to overcome such challenges. This paper provides a classification of the challenges in BSA design and a cohesive list of design techniques adopted in the literature to overcome them. Full article
(This article belongs to the Special Issue Antenna Designs for 5G/IoT and Space Applications)

18 pages, 985 KiB  
Article
Congestion Prediction in FPGA Using Regression Based Learning Methods
by Pingakshya Goswami and Dinesh Bhatia
Electronics 2021, 10(16), 1995; https://doi.org/10.3390/electronics10161995 - 18 Aug 2021
Cited by 8 | Viewed by 3275
Abstract
Design closure in general VLSI physical design flows and FPGA physical design flows is an important and time-consuming problem. Routing itself can consume as much as 70% of the total design time. Accurate congestion estimation during the early stages of the design flow can help alleviate last-minute routing-related surprises. This paper describes a methodology for a post-placement, machine learning-based routing congestion prediction model for FPGAs. Routing congestion is modeled as a regression problem. We describe the methods for generating training data, feature extraction, training, regression models, validation, and deployment approaches. We tested our prediction model using the ISPD 2016 FPGA benchmarks. Our prediction method reports a very accurate localized congestion value in each channel around a configurable logic block (CLB). The localized congestion is predicted in both vertical and horizontal directions. We demonstrate the effectiveness of our model on completely unseen designs that were not part of the training data set. The generated results show significant improvement in terms of accuracy, measured as mean absolute error, and prediction time when compared against the latest state-of-the-art works. Full article
(This article belongs to the Special Issue Advanced AI Hardware Designs Based on FPGAs)
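A minimal sketch of congestion prediction posed as regression is shown below using scikit-learn; the synthetic features and the random-forest choice are illustrative assumptions, since the paper’s features are extracted from real post-placement data on the ISPD 2016 benchmarks.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Synthetic stand-in for per-channel features around each CLB
# (e.g., pin density, wirelength estimates); not the paper's real features.
rng = np.random.default_rng(0)
X = rng.random((5000, 6))
y = X @ rng.random(6) + 0.1 * rng.standard_normal(5000)  # congestion proxy

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```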

14 pages, 5235 KiB  
Article
Design and Preliminary Experiment of W-Band Broadband TE02 Mode Gyro-TWT
by Xu Zeng, Chaohai Du, An Li, Shang Gao, Zheyuan Wang, Yichi Zhang, Zhangxiong Zi and Jinjun Feng
Electronics 2021, 10(16), 1950; https://doi.org/10.3390/electronics10161950 - 13 Aug 2021
Cited by 17 | Viewed by 2557
Abstract
The gyrotron travelling wave tube (gyro-TWT) is an ideal high-power, broadband vacuum electron amplifier in the millimeter and sub-millimeter wave bands. It can be applied as the source of an imaging radar to improve its resolution and operating range. To satisfy the requirements of the W-band high-resolution imaging radar, the design and experimental study of a W-band broadband TE02 mode gyro-TWT were carried out. In this paper, the designs of the key components of the vacuum tube are introduced, including the interaction area, the electron optical system, and the transmission system. The experimental results show that, at a duty ratio of 1%, the output power is above 60 kW with a bandwidth of 8 GHz, and the saturated gain is above 32 dB. In addition, parasitic mode oscillations were observed in the experiment, which limited the increase in duty ratio and caused the measured gains to be much lower than the simulation results. The causes of this phenomenon and methods for its suppression are under study. Full article

21 pages, 6785 KiB  
Review
Review of Electric Vehicle Technologies, Charging Methods, Standards and Optimization Techniques
by Syed Muhammad Arif, Tek Tjing Lie, Boon Chong Seet, Soumia Ayyadi and Kristian Jensen
Electronics 2021, 10(16), 1910; https://doi.org/10.3390/electronics10161910 - 9 Aug 2021
Cited by 117 | Viewed by 18683
Abstract
This paper presents a state-of-the-art review of electric vehicle technology, charging methods, standards, and optimization techniques. The essential characteristics of Hybrid Electric Vehicles (HEVs) and Electric Vehicles (EVs) are first discussed. Recent research on EV charging methods such as Battery Swap Station (BSS), Wireless Power Transfer (WPT), and Conductive Charging (CC) is then presented. This is followed by a discussion of EV standards such as charging levels and their configurations. Next, some of the most widely used optimization techniques for the sizing and placement of EV charging stations are analyzed. Finally, based on the insights gained, several recommendations are put forward for future research. Full article

18 pages, 1222 KiB  
Article
Determination of Traffic Characteristics of Elastic Optical Networks Nodes with Reservation Mechanisms
by Maciej Sobieraj, Piotr Zwierzykowski and Erich Leitgeb
Electronics 2021, 10(15), 1853; https://doi.org/10.3390/electronics10151853 - 1 Aug 2021
Cited by 9 | Viewed by 2568
Abstract
With the ever-increasing demand for bandwidth, mechanisms that provide a reliable and optimal service level to designated or specified traffic classes during heavy traffic loads are becoming particularly sought after. One of these is the resource reservation mechanism, in which parts of the resources are available only to selected (pre-defined) services. Considering modern elastic optical networks (EONs), where advanced data transmission techniques are used, we developed a simulation program that makes it possible to determine the traffic characteristics of EON nodes. This article discusses a simulation program whose advantage is the possibility of determining the loss probability for individual service classes in the nodes of an EON in which the resource reservation mechanism has been introduced. The article initially assumes that a Clos optical switching network is used to construct the EON nodes. The results obtained with the simulator developed by the authors make it possible to determine the influence of the introduced reservation mechanism on the loss probability of calls of the individual traffic classes offered to the system under consideration. Full article
(This article belongs to the Special Issue 10th Anniversary of Electronics: Advances in Networks)

17 pages, 6667 KiB  
Article
Analysis of Obstacle Avoidance Strategy for Dual-Arm Robot Based on Speed Field with Improved Artificial Potential Field Algorithm
by Hui Zhang, Yongfei Zhu, Xuefei Liu and Xiangrong Xu
Electronics 2021, 10(15), 1850; https://doi.org/10.3390/electronics10151850 - 31 Jul 2021
Cited by 30 | Viewed by 4747
Abstract
In recent years, dual-arm robots have been favored in various industries due to their excellent coordinated operability. One focus area of study on dual-arm robots is obstacle avoidance, namely path planning. Among the existing path planning methods, the artificial potential field (APF) algorithm is widely applied to obstacle avoidance for its simplicity, practicability, and good real-time performance compared with other planning methods. However, APF was originally proposed to solve the obstacle avoidance problem of mobile robots in the plane, and thus has some limitations, such as being prone to falling into local minima and not being applicable when dynamic obstacles are encountered. Therefore, an obstacle avoidance strategy for a dual-arm robot based on a speed field with an improved artificial potential field algorithm is proposed. In our method, the APF algorithm is used to establish the attraction and repulsion functions of the robotic manipulator, and the concepts of attraction and repulsion speed are then introduced. The attraction and repulsion functions are converted into attraction and repulsion speed functions, which are mapped to the joint space. Collision avoidance is then solved by using the Jacobian matrix and its inverse to establish the differential velocity function of joint motion, and by comparing it with the set collision distance threshold between the robot’s two manipulators. Meanwhile, APF itself is improved by introducing a new repulsion function and adding virtual constraint points to eliminate the existing limitations. The correctness and effectiveness of the proposed method for the self-collision avoidance problem of a dual-arm robot are validated in the MATLAB and Adams simulation environments. Full article
(This article belongs to the Section Systems & Control Engineering)
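For reference, a commonly used baseline form of the APF functions underlying such a strategy is sketched below; the paper’s modified repulsion function and virtual constraint points are not reproduced here. With q the configuration, ρ(q) the distance to the nearest obstacle, and ρ₀ the obstacle influence radius:

```latex
U_{att}(q) = \tfrac{1}{2}\,\zeta\,\lVert q - q_{goal}\rVert^{2},
\qquad
U_{rep}(q) =
\begin{cases}
\tfrac{1}{2}\,\eta\left(\dfrac{1}{\rho(q)} - \dfrac{1}{\rho_0}\right)^{2}, & \rho(q) \le \rho_0,\\[6pt]
0, & \rho(q) > \rho_0.
\end{cases}
```

The negative gradients of these potentials give the attraction and repulsion speeds in task space, which are then mapped to joint space through the inverse Jacobian, \(\dot{q} = J^{-1}(q)\,\dot{x}\).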

20 pages, 813 KiB  
Article
Improving Semi-Supervised Learning for Audio Classification with FixMatch
by Sascha Grollmisch and Estefanía Cano
Electronics 2021, 10(15), 1807; https://doi.org/10.3390/electronics10151807 - 28 Jul 2021
Cited by 17 | Viewed by 5411
Abstract
Including unlabeled data in the training process of neural networks using Semi-Supervised Learning (SSL) has shown impressive results in the image domain, where state-of-the-art results were obtained with only a fraction of the labeled data. The commonality between recent SSL methods is that they strongly rely on the augmentation of unannotated data, which remains largely unexplored for audio data. In this work, SSL using the state-of-the-art FixMatch approach is evaluated on three audio classification tasks, including music, industrial sounds, and acoustic scenes. The performance of FixMatch is compared to Convolutional Neural Networks (CNNs) trained from scratch, Transfer Learning, and SSL using the Mean Teacher approach. Additionally, a simple yet effective approach for selecting suitable augmentation methods for FixMatch is introduced. FixMatch with the proposed modifications always outperformed Mean Teacher and the CNNs trained from scratch. For the industrial sounds and music datasets, the CNN baseline performance using the full dataset was reached with less than 5% of the initial training data, demonstrating the potential of recent SSL methods for audio data. Transfer Learning outperformed FixMatch only for the most challenging dataset, from acoustic scene classification, showing that there is still room for improvement. Full article
(This article belongs to the Special Issue Machine Learning Applied to Music/Audio Signal Processing)
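A minimal PyTorch sketch of the FixMatch objective is given below for orientation; the threshold and weighting values are illustrative defaults, and the paper’s audio-specific augmentation selection is not shown.

```python
import torch
import torch.nn.functional as F

def fixmatch_loss(model, x_labeled, y, x_weak, x_strong, tau=0.95, lambda_u=1.0):
    """Supervised cross-entropy plus a consistency term on unlabeled data
    whose weakly-augmented prediction is confident enough to pseudo-label."""
    loss_sup = F.cross_entropy(model(x_labeled), y)

    with torch.no_grad():
        probs = torch.softmax(model(x_weak), dim=-1)
        conf, pseudo = probs.max(dim=-1)
        mask = (conf >= tau).float()      # keep only confident pseudo-labels

    logits_strong = model(x_strong)
    loss_unsup = (F.cross_entropy(logits_strong, pseudo, reduction="none")
                  * mask).mean()
    return loss_sup + lambda_u * loss_unsup
```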

11 pages, 951 KiB  
Article
Ultralow Voltage FinFET- Versus TFET-Based STT-MRAM Cells for IoT Applications
by Esteban Garzón, Marco Lanuzza, Ramiro Taco and Sebastiano Strangio
Electronics 2021, 10(15), 1756; https://doi.org/10.3390/electronics10151756 - 22 Jul 2021
Cited by 14 | Viewed by 3577
Abstract
Spin-transfer torque magnetic tunnel junctions (STT-MTJs) based on the double-barrier magnetic tunnel junction (DMTJ) have shown promising characteristics for low-power non-volatile memories. Combined with tunnel FET (TFET) technology, this could enable the design of ultralow-power/ultralow-energy STT magnetic RAMs (STT-MRAMs) for future Internet of Things (IoT) applications. This paper presents a comparison between FinFET- and TFET-based STT-MRAM bitcells operating at ultralow voltages. Our study is performed at the bitcell level by considering a DMTJ with two reference layers and exploiting either FinFET or TFET devices as cell selectors. Although ultralow-voltage operation comes at the expense of reduced reading voltage sensing margins, simulation results show that TFET-based solutions are more resilient to process variations and can operate at ultralow voltages (<0.5 V) while offering energy savings of 50% and 60% faster write switching. Full article

18 pages, 1840 KiB  
Article
Recurrent Neural Network for Human Activity Recognition in Embedded Systems Using PPG and Accelerometer Data
by Michele Alessandrini, Giorgio Biagetti, Paolo Crippa, Laura Falaschetti and Claudio Turchetti
Electronics 2021, 10(14), 1715; https://doi.org/10.3390/electronics10141715 - 17 Jul 2021
Cited by 48 | Viewed by 4695
Abstract
Photoplethysmography (PPG) is a common and practical technique to detect human activity and other physiological parameters and is commonly implemented in wearable devices. However, the PPG signal is often severely corrupted by motion artifacts. The aim of this paper is to address the human activity recognition (HAR) task directly on the device, implementing a recurrent neural network (RNN) on a low-cost, low-power microcontroller, while ensuring the required performance in terms of accuracy and low complexity. To reach this goal, (i) we first develop an RNN that integrates PPG and tri-axial accelerometer data, where the latter can be used to compensate for motion artifacts in PPG in order to accurately detect human activity; (ii) we then port the RNN to an embedded device, Cloud-JAM L4, based on an STM32 microcontroller, optimizing it to maintain an accuracy of over 95% while requiring modest computational power and memory resources. The experimental results show that such a system can be effectively implemented on a constrained-resource system, allowing the design of a fully autonomous wearable embedded system for human activity recognition and logging. Full article
(This article belongs to the Section Artificial Intelligence Circuits and Systems (AICAS))
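As a rough sketch of the network side, the following PyTorch module classifies windows of fused PPG plus tri-axial accelerometer samples; the channel count, hidden size, and class count are illustrative assumptions, not the paper’s configuration (which also targets an STM32 port).

```python
import torch
import torch.nn as nn

class HarGru(nn.Module):
    """Small GRU classifier over windows of 4 fused channels:
    1 PPG channel + 3 accelerometer axes (assumed layout)."""
    def __init__(self, n_classes=8, hidden=32):
        super().__init__()
        self.gru = nn.GRU(input_size=4, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):          # x: (batch, time, 4)
        _, h = self.gru(x)         # h: (1, batch, hidden)
        return self.fc(h[-1])

model = HarGru()
logits = model(torch.rand(2, 128, 4))   # two 128-sample windows
print(logits.shape)                      # torch.Size([2, 8])
```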

29 pages, 1934 KiB  
Review
Massive MIMO Techniques for 5G and Beyond—Opportunities and Challenges
by David Borges, Paulo Montezuma, Rui Dinis and Marko Beko
Electronics 2021, 10(14), 1667; https://doi.org/10.3390/electronics10141667 - 13 Jul 2021
Cited by 44 | Viewed by 12644
Abstract
Telecommunications have grown to be a pillar of a functional society, and the demand for reliable, high-throughput systems has become the main objective of researchers and engineers. State-of-the-art work considers massive Multiple-Input Multiple-Output (massive MIMO) the key technology for 5G and beyond. Large spatial multiplexing and diversity gains are some of the major benefits, together with improved energy efficiency. Current works mostly assume the application of well-established techniques in a massive MIMO scenario, although there are still open challenges regarding hardware complexity, computational complexity, and energy efficiency. Fully digital, analog, and hybrid structures are analyzed, and a multi-layer massive MIMO transmission technique is detailed. The purpose of this article is to describe the most acknowledged transmission techniques for massive MIMO systems, to analyze some of the most promising ones, and to identify existing problems and limitations. Full article

26 pages, 5653 KiB  
Article
Deep Learning Techniques for the Classification of Colorectal Cancer Tissue
by Min-Jen Tsai and Yu-Han Tao
Electronics 2021, 10(14), 1662; https://doi.org/10.3390/electronics10141662 - 12 Jul 2021
Cited by 43 | Viewed by 5424
Abstract
It is very important to make an objective evaluation of colorectal cancer histological images. Current approaches are generally based on the use of different combinations of textural features and classifiers to assess the classification performance, or on transfer learning to classify different tissue types. However, since histological images contain multiple tissue types and characteristics, classification is still challenging. In this study, we propose a classification methodology based on a selected optimizer and modified parameters of CNN methods, using deep learning to distinguish between healthy and diseased large intestine tissues. Firstly, we trained a neural network and compared the network architecture optimizers. Secondly, we modified the parameters of the network layers to optimize the superior architecture. Finally, we compared our well-trained deep learning methods on two different open histological image datasets. The first comprised 5000 H&E images of colorectal cancer; the other comprised 100,000 images in nine tissue categories, with an external validation set of 7180 images. The results showed that the accuracy of the recognition of histopathological images was significantly better than that of existing methods. Therefore, this method is expected to have great potential to assist physicians in making clinical diagnoses and to reduce the number of disparate assessments, based on the use of artificial intelligence to classify colorectal cancer tissue. Full article

28 pages, 1136 KiB  
Review
Survey of Millimeter-Wave Propagation Measurements and Models in Indoor Environments
by Ahmed Al-Saman, Michael Cheffena, Olakunle Elijah, Yousef A. Al-Gumaei, Sharul Kamal Abdul Rahim and Tawfik Al-Hadhrami
Electronics 2021, 10(14), 1653; https://doi.org/10.3390/electronics10141653 - 11 Jul 2021
Cited by 35 | Viewed by 6141
Abstract
The millimeter-wave (mmWave) band is expected to deliver a huge bandwidth to address the future demands for higher data rate transmissions. However, one of the major challenges in the mmWave band is the increase in signal loss as the operating frequency increases. This has attracted several research interests, both from academia and industry, for indoor and outdoor mmWave operations. This paper focuses on the work that has been carried out on mmWave channel measurement in indoor environments. A survey of the measurement techniques, prominent path loss models, and analyses of path loss and delay spread for mmWave in different indoor environments is presented. This covers the mmWave frequencies from 28 GHz to 100 GHz that have been considered in the last two decades. In addition, possible future trends for mmWave indoor propagation studies and measurements are discussed. These include the critical indoor environment, the role of artificial intelligence, channel characterization for indoor devices, reconfigurable intelligent surfaces, and mmWave for 6G systems. This survey can help engineers and researchers to plan, design, and optimize reliable 5G wireless indoor networks. It will also motivate the research and engineering communities towards finding better outcomes in the future trends of the mmWave indoor wireless network for 6G systems and beyond. Full article
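Among the prominent indoor path loss models covered by surveys of this kind is the close-in (CI) free-space reference distance model; as a hedged reminder of its standard form (a generic definition, not a result from this paper), for frequency f, distance d, path loss exponent n, and lognormal shadowing term X_σ:

```latex
PL^{CI}(f, d)\,[\mathrm{dB}] = \mathrm{FSPL}(f, 1\,\mathrm{m}) + 10\,n\,\log_{10}\!\left(\frac{d}{1\,\mathrm{m}}\right) + X_{\sigma},
\qquad d \ge 1\,\mathrm{m}
```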

14 pages, 33093 KiB  
Article
Underwater Target Recognition Based on Improved YOLOv4 Neural Network
by Lingyu Chen, Meicheng Zheng, Shunqiang Duan, Weilin Luo and Ligang Yao
Electronics 2021, 10(14), 1634; https://doi.org/10.3390/electronics10141634 - 9 Jul 2021
Cited by 49 | Viewed by 4657
Abstract
The YOLOv4 neural network is employed for underwater target recognition. To improve the accuracy and speed of recognition, the structure of YOLOv4 is modified by replacing the upsampling module with a deconvolution module and by incorporating depthwise separable convolution into the network. Moreover, the training set used in the YOLO network is preprocessed with a modified mosaic augmentation, in which the gray world algorithm is used to derive two images when performing the mosaic augmentation. The recognition results and the comparison with other target detectors demonstrate the effectiveness of the proposed YOLOv4 structure and the data preprocessing method. According to both subjective and objective evaluations, the proposed target recognition strategy can effectively improve the accuracy and speed of underwater target recognition while also reducing the hardware performance requirements. Full article
(This article belongs to the Special Issue Convolutional Neural Networks and Vision Applications)
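The gray world step mentioned above is simple enough to sketch directly; this is a generic implementation for uint8 RGB images, and how the paper derives its two images from it during mosaic augmentation is not reproduced here.

```python
import numpy as np

def gray_world(img):
    """Gray-world white balance: scale each channel so its mean matches
    the global mean across channels (uint8 RGB input assumed)."""
    img = img.astype(np.float64)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    img *= channel_means.mean() / channel_means
    return np.clip(img, 0, 255).astype(np.uint8)

balanced = gray_world(np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8))
print(balanced.shape, balanced.dtype)
```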

21 pages, 11829 KiB  
Article
Implementing a Gaze Tracking Algorithm for Improving Advanced Driver Assistance Systems
by Agapito Ledezma, Víctor Zamora, Óscar Sipele, M. Paz Sesmero and Araceli Sanchis
Electronics 2021, 10(12), 1480; https://doi.org/10.3390/electronics10121480 - 19 Jun 2021
Cited by 18 | Viewed by 3892
Abstract
Car accidents are one of the top ten causes of death and are produced mainly by driver distractions. ADAS (Advanced Driver Assistance Systems) can warn the driver of dangerous scenarios, improving road safety and reducing the number of traffic accidents. However, a system that continuously sounds alarms can be overwhelming or confusing, or both, and can be counterproductive. Using the driver’s attention to build an efficient ADAS is the main contribution of this work. To obtain this “attention value”, the use of gaze tracking is proposed. The driver’s gaze direction is a crucial factor in understanding fatal distractions, as well as in discerning when it is necessary to warn the driver about risks on the road. In this paper, a real-time gaze tracking system is proposed as part of the development of an ADAS that obtains and communicates the driver’s gaze information. The developed ADAS uses gaze information to determine whether the drivers are looking at the road with their full attention. This work takes a step forward in driver-based ADAS, building an ADAS that warns the driver only in case of distraction. The gaze tracking system was implemented as a model-based system using a Kinect v2.0 sensor, adjusted in a set-up environment, and tested in a driving simulation environment with suitable features. The average results obtained are promising, with hit ratios between 81.84% and 96.37%. Full article
(This article belongs to the Special Issue AI-Based Autonomous Driving System)

16 pages, 9144 KiB  
Article
A Sub-6G SP32T Single-Chip Switch with Nanosecond Switching Speed for 5G Applications in 0.25 μm GaAs Technology
by Tianxiang Wu, Jipeng Wei, Hongquan Liu, Shunli Ma, Yong Chen and Junyan Ren
Electronics 2021, 10(12), 1482; https://doi.org/10.3390/electronics10121482 - 19 Jun 2021
Cited by 9 | Viewed by 3984
Abstract
This paper presents a single-pole 32-throw (SP32T) switch with an operating frequency of up to 6 GHz for 5G communication applications. Compared to the traditional SP32T module implemented in a waveguide package, with its large volume and power consumption, the proposed switch can significantly simplify the system with a smaller size and lighter weight. The proposed SP32T scheme, utilizing a tree structure, can dramatically reduce the dc power and enhance isolation between different output ports, which makes it suitable for low-power 5G communication. A novel design methodology based on the transmission (ABCD) matrix is proposed to optimize the switch, which can achieve low insertion loss and high isolation simultaneously. The average insertion loss and the isolation are 1.5 and 35 dB, respectively, at a 6 GHz operating frequency. The measured input return loss of the switch is better than 10 dB at 6 GHz. The 1 dB input compression point of the SP32T is 15 dBm. The prototype is designed in 5 V 0.25 μm GaAs technology and occupies a small area of 12 mm2. Full article
(This article belongs to the Section Circuit and Signal Processing)
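As background for the ABCD-matrix methodology, each two-port stage of such a switch network can be described by the standard transmission matrix, and a cascade is analyzed by multiplying the stage matrices; for a lossless line of characteristic impedance Z₀ and electrical length θ, the textbook form (not the paper’s specific model) is:

```latex
\begin{pmatrix} V_1 \\ I_1 \end{pmatrix}
=
\begin{pmatrix} A & B \\ C & D \end{pmatrix}
\begin{pmatrix} V_2 \\ I_2 \end{pmatrix},
\qquad
\begin{pmatrix} A & B \\ C & D \end{pmatrix}
=
\begin{pmatrix}
\cos\theta & jZ_0\sin\theta\\[4pt]
\dfrac{j\sin\theta}{Z_0} & \cos\theta
\end{pmatrix}
```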

12 pages, 6793 KiB  
Article
Metal-Insulator-Metal Waveguide-Based Racetrack Integrated Circular Cavity for Refractive Index Sensing Application
by Muhammad A. Butt, Andrzej Kaźmierczak, Nikolay L. Kazanskiy and Svetlana N. Khonina
Electronics 2021, 10(12), 1419; https://doi.org/10.3390/electronics10121419 - 12 Jun 2021
Cited by 28 | Viewed by 5609
Abstract
Herein, a novel cavity design of a racetrack-integrated circular cavity based on a metal-insulator-metal (MIM) waveguide is suggested for refractive index sensing applications. Over the past few years, we have witnessed several unique cavity designs to improve the sensing performance of plasmonic sensors based on the MIM waveguide. An optimized cavity design can provide the best sensing performance. In this work, we have numerically analyzed the device design by utilizing the finite element method (FEM). Small variations in the geometric parameters of the device can bring a significant shift in the sensitivity and the figure of merit (FOM) of the device. The best sensitivity and FOM of the proposed device are 1400 nm/RIU and ~12.01, respectively. We believe that the sensor design analyzed in this work can be utilized in the on-chip detection of biochemical analytes. Full article
(This article belongs to the Special Issue Nanophotonics for Next-Generation IoT Sensors)
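The two quoted performance measures follow the standard definitions for resonant refractive index sensors, restated here for convenience (generic definitions, not equations taken from the paper): with Δλ_res the resonance wavelength shift for a refractive index change Δn, and FWHM the resonance linewidth,

```latex
S = \frac{\Delta\lambda_{res}}{\Delta n}\ \left[\mathrm{nm/RIU}\right],
\qquad
\mathrm{FOM} = \frac{S}{\mathrm{FWHM}}
```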

20 pages, 2136 KiB  
Article
Radar-Based Hand Gesture Recognition Using Spiking Neural Networks
by Ing Jyh Tsang, Federico Corradi, Manolis Sifalakis, Werner Van Leekwijck and Steven Latré
Electronics 2021, 10(12), 1405; https://doi.org/10.3390/electronics10121405 - 11 Jun 2021
Cited by 27 | Viewed by 6197
Abstract
We propose a spiking neural network (SNN) approach for radar-based hand gesture recognition (HGR), using frequency modulated continuous wave (FMCW) millimeter-wave radar. After pre-processing the range-Doppler or micro-Doppler radar signal, we use a signal-to-spike conversion scheme that encodes radar Doppler maps into spike trains. The spike trains are fed into a spiking recurrent neural network, a liquid state machine (LSM). The readout spike signal from the SNN is then used as input for different classifiers for comparison, including logistic regression, random forest, and support vector machine (SVM). Using liquid state machines of fewer than 1000 neurons, we achieve better than state-of-the-art results on two publicly available reference datasets, reaching over 98% accuracy on 10-fold cross-validation for both datasets. Full article
(This article belongs to the Special Issue Neuromorphic Sensing and Computing Systems)

19 pages, 662 KiB  
Article
Identification of Plant-Leaf Diseases Using CNN and Transfer-Learning Approach
by Sk Mahmudul Hassan, Arnab Kumar Maji, Michał Jasiński, Zbigniew Leonowicz and Elżbieta Jasińska
Electronics 2021, 10(12), 1388; https://doi.org/10.3390/electronics10121388 - 9 Jun 2021
Cited by 267 | Viewed by 28605
Abstract
The timely identification and early prevention of crop diseases are essential for improving production. In this paper, deep convolutional neural network (CNN) models are implemented to identify and diagnose diseases in plants from their leaves, since CNNs have achieved impressive results in the field of machine vision. Standard CNN models require a large number of parameters and have a higher computation cost. In this paper, we replaced standard convolution with depthwise separable convolution, which reduces the parameter count and computation cost. The implemented models were trained with an open dataset consisting of 14 different plant species, and 38 different categorical disease classes and healthy plant leaves. To evaluate the performance of the models, different parameters such as batch size, dropout, and different numbers of epochs were incorporated. The implemented models achieved disease-classification accuracy rates of 98.42%, 99.11%, 97.02%, and 99.56% using InceptionV3, InceptionResNetV2, MobileNetV2, and EfficientNetB0, respectively, which are greater than those of traditional handcrafted-feature-based approaches. In comparison with other deep-learning models, the implemented models achieved better performance in terms of accuracy and required less training time. Moreover, the MobileNetV2 architecture is compatible with mobile devices using optimized parameters. The accuracy results in the identification of diseases show that the deep CNN model is promising and can greatly impact the efficient identification of diseases, and it may have potential in the detection of diseases in real-time agricultural systems. Full article
(This article belongs to the Special Issue New Technological Advancements and Applications of Deep Learning)
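The depthwise separable substitution mentioned above can be sketched in a few lines of PyTorch; this generic block (a per-channel 3×3 convolution followed by a 1×1 pointwise convolution) illustrates the parameter saving and is not necessarily the authors’ exact block.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise 3x3 conv (groups = in_ch) followed by a 1x1 pointwise conv,
    cutting parameters and FLOPs versus a full 3x3 convolution."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride, 1,
                                   groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

block = DepthwiseSeparableConv(32, 64)
print(block(torch.rand(1, 32, 56, 56)).shape)  # torch.Size([1, 64, 56, 56])
```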

17 pages, 1450 KiB  
Review
Machine Learning-Based Data-Driven Fault Detection/Diagnosis of Lithium-Ion Battery: A Critical Review
by Akash Samanta, Sumana Chowdhuri and Sheldon S. Williamson
Electronics 2021, 10(11), 1309; https://doi.org/10.3390/electronics10111309 - 30 May 2021
Cited by 112 | Viewed by 13241
Abstract
Fault detection/diagnosis has become a crucial function of the battery management system (BMS), due to the increasing application of lithium-ion batteries (LIBs) in highly sophisticated and high-power applications, to ensure the safe and reliable operation of the system. Machine Learning (ML) has long been adopted in the BMS of LIBs for the efficient, reliable, and accurate prediction of several important states of the LIB, such as state of charge, state of health, and remaining useful life. Inspired by some of the promising features of ML-based techniques over conventional LIB fault detection/diagnosis methods, such as model-based, knowledge-based, and signal processing-based techniques, ML-based data-driven methods have been a prime research focus in the last few years. This paper provides a comprehensive review exclusively of the state-of-the-art ML-based data-driven fault detection/diagnosis techniques, to provide a ready reference and direction for the research community aiming to develop an accurate, reliable, adaptive, and easy-to-implement fault diagnosis strategy for the LIB system. Current issues with existing strategies and future challenges of LIB fault diagnosis are also explained for better understanding and guidance. Full article

20 pages, 5508 KiB  
Article
Bearing Fault Classification Using Ensemble Empirical Mode Decomposition and Convolutional Neural Network
by Rafia Nishat Toma, Cheol-Hong Kim and Jong-Myon Kim
Electronics 2021, 10(11), 1248; https://doi.org/10.3390/electronics10111248 - 24 May 2021
Cited by 44 | Viewed by 4431
Abstract
Condition monitoring is used to track the unavoidable degradation phases of rolling element bearings in an induction motor (IM) to ensure reliable operation in domestic and industrial machinery. The convolutional neural network (CNN) has been used in recent times as an effective tool to recognize and classify multiple rolling bearing faults. Due to the nonlinear and nonstationary nature of vibration signals, it is quite difficult to achieve high classification accuracy when directly using the original signal as the input of a convolutional neural network. To evaluate the fault characteristics, in this work, ensemble empirical mode decomposition (EEMD) is implemented to decompose the signal into multiple intrinsic mode functions (IMFs). Then, based on the kurtosis value, insignificant IMFs are filtered out and the original signal is reconstructed with the remaining IMFs so that the reconstructed signal contains the fault characteristics. After that, the 1-D reconstructed vibration signal is converted into a 2-D image using a continuous wavelet transform with information from the damage frequency band. This also transfers the signal into the time-frequency domain and reduces the nonstationary effects of the vibration signal. Finally, the generated images of various fault conditions, which possess a discriminative pattern relative to the types of faults, are used to train an appropriate CNN model. Additionally, with the reconstructed signal, two different image creation methods are used for comparison with our proposed approach. The vibration signal is collected from a self-designed testbed containing multiple bearings with different fault conditions. Two other conventional CNN architectures are compared with our proposed model. Based on the results obtained, it can be concluded that the images generated with fault signatures not only accurately classify multiple faults with CNN but can also be considered a reliable and stable method for the diagnosis of faulty bearings. Full article
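A minimal sketch of the EEMD-plus-kurtosis filtering step is given below, assuming the PyEMD package; the kurtosis threshold, the keep-if-above rule, and the trial count are illustrative assumptions, as the abstract does not fix them.

```python
import numpy as np
from scipy.stats import kurtosis
from PyEMD import EEMD

def reconstruct_by_kurtosis(signal, kurt_threshold=3.0, trials=100):
    """Decompose a vibration signal with EEMD, drop IMFs whose kurtosis is
    low (little impulsive fault content), and rebuild the signal."""
    eemd = EEMD(trials=trials)
    imfs = eemd.eemd(signal)
    keep = [imf for imf in imfs
            if kurtosis(imf, fisher=False) >= kurt_threshold]
    return np.sum(keep, axis=0) if keep else signal

# Stand-in vibration signal with an impulsive component.
t = np.linspace(0, 1, 4096)
vib = np.sin(100 * np.pi * t) + (np.random.rand(t.size) > 0.995) * 5.0
print(reconstruct_by_kurtosis(vib).shape)
```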

21 pages, 2453 KiB  
Article
A Novel Energy-Efficiency Optimization Approach Based on Driving Patterns Styles and Experimental Tests for Electric Vehicles
by Juan Diego Valladolid, Diego Patino, Giambattista Gruosso, Carlos Adrián Correa-Flórez, José Vuelvas and Fabricio Espinoza
Electronics 2021, 10(10), 1199; https://doi.org/10.3390/electronics10101199 - 18 May 2021
Cited by 19 | Viewed by 4942
Abstract
This article proposes an energy-efficiency strategy based on the optimization of driving patterns for an electric vehicle (EV). The EV studied in this paper is a commercial vehicle only driven by a traction motor. The motor drives the front wheels indirectly through the [...] Read more.
This article proposes an energy-efficiency strategy based on the optimization of driving patterns for an electric vehicle (EV). The EV studied in this paper is a commercial vehicle only driven by a traction motor. The motor drives the front wheels indirectly through the differential drive. The electrical inverter model and the power-train efficiency are established by lookup tables determined from power tests on a dynamometer bench. The optimization problem focuses on maximizing the energy efficiency between the wheel power and the battery pack, not only maintaining but also improving its value by modifying the state of charge (SOC). The solution is found by means of a Particle Swarm Optimization (PSO) algorithm. Simulation results of the optimizer validate the efficiency increase obtained with the speed setpoint variations and also show that the battery SOC is improved. The best results are obtained when the speed variation is between 5% and 6%. Full article
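To illustrate the optimization mechanics only, here is a toy PSO loop over a speed-setpoint variation; the efficiency curve is invented (peaking near the 5-6% range the authors report) and is not the paper's vehicle model:

```python
# Toy PSO sketch: find the setpoint variation (in percent) that
# maximizes an assumed efficiency map.
import numpy as np

rng = np.random.default_rng(1)

def efficiency(delta_pct):
    # Hypothetical smooth efficiency curve peaking near 5.5 %.
    return 0.85 + 0.03 * np.exp(-((delta_pct - 5.5) ** 2) / 2.0)

n, iters = 20, 50
x = rng.uniform(0, 10, n)            # particle positions (percent)
v = np.zeros(n)
pbest, pbest_f = x.copy(), efficiency(x)
gbest = pbest[np.argmax(pbest_f)]

for _ in range(iters):
    r1, r2 = rng.random(n), rng.random(n)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, 0, 10)
    f = efficiency(x)
    better = f > pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[np.argmax(pbest_f)]

print(f"best setpoint variation: {gbest:.2f} %")
```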

52 pages, 2321 KiB  
Article
Towards Secure Fog Computing: A Survey on Trust Management, Privacy, Authentication, Threats and Access Control
by Abdullah Al-Noman Patwary, Ranesh Kumar Naha, Saurabh Garg, Sudheer Kumar Battula, Md Anwarul Kaium Patwary, Erfan Aghasian, Muhammad Bilal Amin, Aniket Mahanti and Mingwei Gong
Electronics 2021, 10(10), 1171; https://doi.org/10.3390/electronics10101171 - 14 May 2021
Cited by 43 | Viewed by 7164
Abstract
Fog computing is an emerging computing paradigm that has come into consideration for the deployment of Internet of Things (IoT) applications amongst researchers and technology industries over the last few years. Fog is highly distributed and consists of a large number of autonomous [...] Read more.
Fog computing is an emerging computing paradigm that has come into consideration for the deployment of Internet of Things (IoT) applications amongst researchers and technology industries over the last few years. Fog is highly distributed and consists of a large number of autonomous end devices, which contribute to the processing. However, the variety of devices contributed by different users is not audited. Hence, the security of Fog devices is a major concern that should be taken into consideration. Therefore, to provide the necessary security for Fog devices, there is a need to understand the security concerns with regard to Fog. All aspects of Fog security that have not been covered by other literature works need to be identified and aggregated. On the other hand, the privacy of users' data stored in Fog devices and of application data processed in Fog devices is another concern. To provide the appropriate level of trust and privacy, there is a need to focus on authentication, threats, and access control mechanisms, as well as privacy protection techniques, in Fog computing. In this paper, a survey along with a taxonomy is proposed, which presents an overview of existing security concerns in the context of the Fog computing paradigm. Moreover, Blockchain-based solutions towards a secure Fog computing environment are presented, and various research challenges and directions for future research are discussed. Full article
(This article belongs to the Special Issue Embedded IoT: System Design and Applications)

15 pages, 4552 KiB  
Article
Optimization of Multi-Level Operation in RRAM Arrays for In-Memory Computing
by Eduardo Pérez, Antonio Javier Pérez-Ávila, Rocío Romero-Zaliz, Mamathamba Kalishettyhalli Mahadevaiah, Emilio Pérez-Bosch Quesada, Juan Bautista Roldán, Francisco Jiménez-Molinos and Christian Wenger
Electronics 2021, 10(9), 1084; https://doi.org/10.3390/electronics10091084 - 3 May 2021
Cited by 15 | Viewed by 4116
Abstract
Accomplishing multi-level programming in resistive random access memory (RRAM) arrays with truly discrete and linearly spaced conductive levels is crucial in order to implement synaptic weights in hardware-based neuromorphic systems. In this paper, we implemented this feature on 4-kbit 1T1R RRAM arrays by [...] Read more.
Accomplishing multi-level programming in resistive random access memory (RRAM) arrays with truly discrete and linearly spaced conductive levels is crucial in order to implement synaptic weights in hardware-based neuromorphic systems. In this paper, we implemented this feature on 4-kbit 1T1R RRAM arrays by tuning the programming parameters of the multi-level incremental step pulse with verify algorithm (M-ISPVA). The optimized set of parameters was assessed by comparing its results with a non-optimized one. The optimized parameters proved to be an effective way to define non-overlapping conductive levels, thanks to the strong reduction of both device-to-device and cycle-to-cycle variability, assessed by inter-level switching tests and during 1 k reset-set cycles. In order to evaluate this improvement in real scenarios, the experimental characteristics of the RRAM devices were captured by means of a behavioral model, which was used to simulate two different neuromorphic systems: an 8 × 8 vector-matrix-multiplication (VMM) accelerator and a 4-layer feedforward neural network for MNIST database recognition. The results clearly showed that the optimization of the programming parameters improved both the precision of the VMM results and the recognition accuracy of the neural network by about 6% compared with the use of non-optimized parameters. Full article
(This article belongs to the Special Issue Resistive Memory Characterization, Simulation, and Compact Modeling)
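As a rough illustration of the write-verify idea behind incremental-step-pulse programming (not the actual M-ISPVA algorithm or real device physics), one might loop as follows; the device response and all numbers are invented:

```python
# Illustrative incremental-step-pulse-with-verify loop on a fake device.
import numpy as np

rng = np.random.default_rng(2)

def apply_pulse(g, v):
    # Hypothetical device response: conductance rises with pulse voltage.
    return g + 1e-5 * max(v - 0.6, 0) * (1 + 0.1 * rng.standard_normal())

def program_level(target_uS, v_start=0.6, v_step=0.05, v_max=1.5):
    g, v = 5e-6, v_start                  # start near the low-conductive state
    while v <= v_max:
        g = apply_pulse(g, v)             # program pulse
        if g * 1e6 >= target_uS:          # verify (read) step
            return g
        v += v_step                       # increment pulse amplitude
    return g                              # voltage budget exhausted

for level in (20, 40, 60):                # linearly spaced targets in uS
    print(level, "->", round(program_level(level) * 1e6, 1), "uS")
```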

22 pages, 5068 KiB  
Article
An Autonomous Grape-Harvester Robot: Integrated System Architecture
by Eleni Vrochidou, Konstantinos Tziridis, Alexandros Nikolaou, Theofanis Kalampokas, George A. Papakostas, Theodore P. Pachidis, Spyridon Mamalis, Stefanos Koundouras and Vassilis G. Kaburlasos
Electronics 2021, 10(9), 1056; https://doi.org/10.3390/electronics10091056 - 29 Apr 2021
Cited by 37 | Viewed by 6457
Abstract
This work pursues the potential of extending “Industry 4.0” practices to farming toward achieving “Agriculture 4.0”. Our interest is in fruit harvesting, motivated by the problem of addressing the shortage of seasonal labor. In particular, here we present an integrated system architecture of [...] Read more.
This work pursues the potential of extending “Industry 4.0” practices to farming toward achieving “Agriculture 4.0”. Our interest is in fruit harvesting, motivated by the problem of addressing the shortage of seasonal labor. In particular, here we present an integrated system architecture of an Autonomous Robot for Grape harvesting (ARG). The overall system consists of three interdependent units: (1) an aerial unit, (2) a remote-control unit and (3) the ARG ground unit. Special attention is paid to the ARG; the latter is designed and built to carry out three viticultural operations, namely harvest, green harvest and defoliation. We present an overview of the multi-purpose overall system, the specific design of each unit of the system and the integration of all subsystems. In addition, the full sensor-based sensing system architecture and the underlying vision system are analyzed. Due to its modular design, the proposed system can be extended to a variety of different crops and/or orchards. Full article
(This article belongs to the Special Issue Control of Mobile Robots)

19 pages, 6227 KiB  
Article
An Active/Reactive Power Control Strategy for Renewable Generation Systems
by Iván Andrade, Rubén Pena, Ramón Blasco-Gimenez, Javier Riedemann, Werner Jara and Cristián Pesce
Electronics 2021, 10(9), 1061; https://doi.org/10.3390/electronics10091061 - 29 Apr 2021
Cited by 11 | Viewed by 4489
Abstract
The development of distributed generation, mainly based on renewable energies, requires the design of control strategies to allow the regulation of electrical variables, such as power, voltage (V), and frequency (f), and the coordination of multiple generation units in microgrids or islanded systems. [...] Read more.
The development of distributed generation, mainly based on renewable energies, requires the design of control strategies to allow the regulation of electrical variables, such as power, voltage (V), and frequency (f), and the coordination of multiple generation units in microgrids or islanded systems. This paper presents a strategy to control the active and reactive power flow at the Point of Common Connection (PCC) of a renewable generation system operating in islanded mode. Voltage Source Converters (VSCs) are connected between the individual generation units and the PCC to control the voltage and frequency. The voltage and frequency reference values are obtained from the P–V and Q–f droop characteristic curves, where P and Q are the active and reactive power supplied to the load, respectively. Proportional–Integral (PI) controllers process the voltage and frequency errors and set the reference currents (in the dq frame) to be imposed by each VSC. Simulation results considering high-power solar and wind generation systems are presented to validate the proposed control strategy. Full article
(This article belongs to the Section Systems & Control Engineering)
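A minimal sketch of how droop references and PI controllers could fit together in this scheme; the droop slopes, gains, and measured values are invented for illustration:

```python
# Droop-based reference generation followed by PI tracking (toy numbers).
V_NOM, F_NOM = 400.0, 50.0      # nominal PCC voltage (V) and frequency (Hz)
KP_V, KQ_F = 0.01, 0.001        # assumed droop slopes

def droop_references(p_load, q_load):
    """P-V and Q-f droop characteristics (as named in the abstract)."""
    v_ref = V_NOM - KP_V * p_load      # voltage sags with active power
    f_ref = F_NOM - KQ_F * q_load      # frequency sags with reactive power
    return v_ref, f_ref

class PI:
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt, self.acc = kp, ki, dt, 0.0
    def step(self, error):
        self.acc += error * self.dt
        return self.kp * error + self.ki * self.acc

# One control step: the errors drive dq current references for the VSC.
v_ref, f_ref = droop_references(p_load=10e3, q_load=2e3)
pi_d, pi_q = PI(0.5, 20.0, 1e-4), PI(0.5, 20.0, 1e-4)
id_ref = pi_d.step(v_ref - 395.0)     # measured voltage assumed 395 V
iq_ref = pi_q.step(f_ref - 49.95)     # measured frequency assumed 49.95 Hz
print(v_ref, f_ref, id_ref, iq_ref)
```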

19 pages, 6036 KiB  
Article
Facial Emotion Recognition Using Transfer Learning in the Deep CNN
by M. A. H. Akhand, Shuvendu Roy, Nazmul Siddique, Md Abdus Samad Kamal and Tetsuya Shimamura
Electronics 2021, 10(9), 1036; https://doi.org/10.3390/electronics10091036 - 27 Apr 2021
Cited by 178 | Viewed by 22350
Abstract
Human facial emotion recognition (FER) has attracted the attention of the research community for its promising applications. Mapping different facial expressions to the respective emotional states is the main task in FER. The classical FER consists of two major steps: feature extraction and [...] Read more.
Human facial emotion recognition (FER) has attracted the attention of the research community for its promising applications. Mapping different facial expressions to the respective emotional states is the main task in FER. The classical FER consists of two major steps: feature extraction and emotion recognition. Currently, Deep Neural Networks, especially Convolutional Neural Networks (CNNs), are widely used in FER by virtue of their inherent feature extraction mechanism from images. Several works have been reported on CNNs with only a few layers to resolve FER problems. However, standard shallow CNNs with straightforward learning schemes have limited feature extraction capability to capture emotion information from high-resolution images. A notable drawback of most existing methods is that they consider only frontal images (i.e., they ignore profile views for convenience), although profile views taken from different angles are important for a practical FER system. To develop a highly accurate FER system, this study proposes very Deep CNN (DCNN) modeling through a Transfer Learning (TL) technique, in which a pre-trained DCNN model is adopted by replacing its dense upper layer(s) with layers compatible with FER, and the model is fine-tuned with facial emotion data. A novel pipeline strategy is introduced, in which the training of the dense layer(s) is followed by tuning each of the pre-trained DCNN blocks successively, gradually improving FER accuracy. The proposed FER system is verified on eight different pre-trained DCNN models (VGG-16, VGG-19, ResNet-18, ResNet-34, ResNet-50, ResNet-152, Inception-v3 and DenseNet-161) and the well-known KDEF and JAFFE facial image datasets. FER is very challenging even for frontal views alone, and FER on the KDEF dataset poses further challenges due to the diversity of images with different profile views together with frontal views. The proposed method achieved remarkable accuracy on both datasets with the pre-trained models. Under 10-fold cross-validation, the best FER accuracies achieved with DenseNet-161 on the test sets of KDEF and JAFFE are 96.51% and 99.52%, respectively. The evaluation results reveal the superiority of the proposed FER system over existing ones in terms of emotion detection accuracy. Moreover, the performance achieved on the KDEF dataset with profile views is promising, as it clearly demonstrates the proficiency required for real-life applications. Full article
(This article belongs to the Special Issue Deep Learning Technologies for Machine Vision and Audition)
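A hedged sketch of such a pipeline using torchvision's ResNet-18 (one of the eight backbones listed): the head replacement, staging order, and the string-based weights argument are our assumptions (the latter requires a recent torchvision), and the training loops are elided.

```python
# Replace the classifier head, train it first, then unfreeze pre-trained
# blocks one at a time (layer names follow torchvision's ResNet;
# 7 emotion classes assumed).
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1")   # assumes torchvision >= 0.13
model.fc = nn.Linear(model.fc.in_features, 7)      # new FER head

for p in model.parameters():
    p.requires_grad = False
for p in model.fc.parameters():                    # stage 0: head only
    p.requires_grad = True

# Successive fine-tuning stages, deepest block first.
stages = [model.layer4, model.layer3, model.layer2, model.layer1]
for block in stages:
    for p in block.parameters():
        p.requires_grad = True
    # ... run a few training epochs here before unfreezing the next block
```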

25 pages, 815 KiB  
Article
Accelerating Neural Network Inference on FPGA-Based Platforms—A Survey
by Ran Wu, Xinmin Guo, Jian Du and Junbao Li
Electronics 2021, 10(9), 1025; https://doi.org/10.3390/electronics10091025 - 25 Apr 2021
Cited by 56 | Viewed by 11499
Abstract
The breakthrough of deep learning has started a technological revolution in various areas such as object identification, image/video recognition and semantic segmentation. Neural networks, among the most representative applications of deep learning, have been widely used and many efficient models have been developed. However, [...] Read more.
The breakthrough of deep learning has started a technological revolution in various areas such as object identification, image/video recognition and semantic segmentation. Neural networks, among the most representative applications of deep learning, have been widely used and many efficient models have been developed. However, the edge implementation of neural network inference is restricted by the conflict between the high computation and storage complexity of the models and the resource-limited hardware platforms found in application scenarios. In this paper, we review neural network acceleration on FPGA-based platforms. The architecture of the networks and the characteristics of FPGAs are analyzed, compared and summarized, as well as their influence on acceleration tasks. Based on this analysis, we generalize the acceleration strategies into five aspects: computing complexity, computing parallelism, data reuse, pruning and quantization. Previous works on neural network acceleration are then introduced following these topics, and we summarize how to design a technical route for practical applications based on these strategies. Challenges along the path are discussed to provide guidance for future work. Full article
(This article belongs to the Section Artificial Intelligence Circuits and Systems (AICAS))
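Of the five strategy groups named, quantization is the simplest to illustrate in isolation. The following toy symmetric 8-bit post-training quantizer is our own minimal example, not a scheme taken from the surveyed works:

```python
# Toy symmetric int8 post-training quantization of a weight tensor; an
# FPGA design would then use int8 MACs instead of float32 arithmetic.
import numpy as np

def quantize_int8(w):
    scale = np.abs(w).max() / 127.0          # symmetric, per-tensor scale
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)
q, s = quantize_int8(w)
err = np.abs(w - dequantize(q, s)).mean()
print(f"mean absolute quantization error: {err:.5f}")
```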

30 pages, 1401 KiB  
Review
Drone Deep Reinforcement Learning: A Review
by Ahmad Taher Azar, Anis Koubaa, Nada Ali Mohamed, Habiba A. Ibrahim, Zahra Fathy Ibrahim, Muhammad Kazim, Adel Ammar, Bilel Benjdira, Alaa M. Khamis, Ibrahim A. Hameed and Gabriella Casalino
Electronics 2021, 10(9), 999; https://doi.org/10.3390/electronics10090999 - 22 Apr 2021
Cited by 190 | Viewed by 23896
Abstract
Unmanned Aerial Vehicles (UAVs) are increasingly being used in many challenging and diversified applications. These applications belong to the civilian and the military fields. To name a few: infrastructure inspection, traffic patrolling, remote sensing, mapping, surveillance, rescuing humans and animals, environment monitoring, and [...] Read more.
Unmanned Aerial Vehicles (UAVs) are increasingly being used in many challenging and diversified applications. These applications belong to the civilian and the military fields. To name a few: infrastructure inspection, traffic patrolling, remote sensing, mapping, surveillance, rescuing humans and animals, environment monitoring, and Intelligence, Surveillance, Target Acquisition, and Reconnaissance (ISTAR) operations. However, the use of UAVs in these applications requires a substantial level of autonomy. In other words, UAVs should have the ability to accomplish planned missions in unexpected situations without requiring human intervention. To ensure this level of autonomy, many artificial intelligence algorithms have been designed, targeting the guidance, navigation, and control (GNC) of UAVs. In this paper, we describe the state of the art of one subset of these algorithms: deep reinforcement learning (DRL) techniques. We provide a detailed description of them and deduce the current limitations in this area. We note that most of these DRL methods were designed to ensure stable and smooth UAV navigation by training in computer-simulated environments, and we conclude that further research efforts are needed to address the challenges that restrain their deployment in real-life scenarios. Full article

18 pages, 1006 KiB  
Article
Automated Quantum Hardware Selection for Quantum Workflows
by Benjamin Weder, Johanna Barzen, Frank Leymann and Marie Salm
Electronics 2021, 10(8), 984; https://doi.org/10.3390/electronics10080984 - 20 Apr 2021
Cited by 14 | Viewed by 4198
Abstract
The execution of a quantum algorithm typically requires various classical pre- and post-processing tasks. Hence, workflows are a promising means to orchestrate these tasks, benefiting from their reliability, robustness, and features, such as transactional processing. However, the implementations of the tasks may be [...] Read more.
The execution of a quantum algorithm typically requires various classical pre- and post-processing tasks. Hence, workflows are a promising means to orchestrate these tasks, benefiting from their reliability, robustness, and features, such as transactional processing. However, the implementations of the tasks may be very heterogeneous and they depend on the quantum hardware used to execute the quantum circuits of the algorithm. Additionally, today’s quantum computers are still restricted, which limits the size of the quantum circuits that can be executed. As the circuit size often depends on the input data of the algorithm, the selection of quantum hardware to execute a quantum circuit must be done at workflow runtime. However, modeling all possible alternative tasks would clutter the workflow model and require its adaptation whenever a new quantum computer or software tool is released. To overcome this problem, we introduce an approach to automatically select suitable quantum hardware for the execution of quantum circuits in workflows. Furthermore, it enables the dynamic adaptation of the workflows, depending on the selection at runtime based on reusable workflow fragments. We validate our approach with a prototypical implementation and a case study demonstrating the hardware selection for Simon’s algorithm. Full article
(This article belongs to the Special Issue Quantum Computing System Design and Architecture)
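The runtime-selection idea can be caricatured without any quantum SDK: filter a device registry by the compiled circuit's qubit demand and break ties by a secondary criterion. The registry, field names, and tie-breaking rule below are hypothetical; the actual approach also weighs further device and software properties.

```python
# Hedged sketch of runtime hardware selection for a quantum circuit.
circuit_width = 5                      # qubits needed after compilation

backends = [                           # hypothetical device registry
    {"name": "sim_local", "qubits": 32, "queue": 0},
    {"name": "qpu_a",     "qubits": 5,  "queue": 12},
    {"name": "qpu_b",     "qubits": 27, "queue": 3},
]

# Keep only devices large enough, then prefer the shortest queue.
candidates = [b for b in backends if b["qubits"] >= circuit_width]
best = min(candidates, key=lambda b: b["queue"]) if candidates else None
print("selected backend:", best["name"] if best else "none available")
```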

17 pages, 1603 KiB  
Article
Self-Biased and Supply-Voltage Scalable Inverter-Based Operational Transconductance Amplifier with Improved Composite Transistors
by Luis Henrique Rodovalho, Cesar Ramos Rodrigues and Orazio Aiello
Electronics 2021, 10(8), 935; https://doi.org/10.3390/electronics10080935 - 14 Apr 2021
Cited by 25 | Viewed by 4412
Abstract
This paper deals with a single-stage single-ended inverter-based Operational Transconductance Amplifier (OTA) with improved composite transistors for ultra-low-voltage supplies, while maintaining a small area, high power efficiency and low output signal distortion. The improved composite transistor is a combination of the conventional composite transistor and [...] Read more.
This paper deals with a single-stage single-ended inverter-based Operational Transconductance Amplifier (OTA) with improved composite transistors for ultra-low-voltage supplies, while maintaining a small area, high power efficiency and low output signal distortion. The improved composite transistor is a combination of the conventional composite transistor and forward body biasing to further increase the voltage gain. The impact of the proposed technique on performance is demonstrated through post-layout simulations referring to the TSMC 180 nm technology process. The proposed OTA achieves a 54 dB differential voltage gain and a 210 Hz gain–bandwidth product for a 10 pF capacitive load, with a power consumption of 273 pW from a 0.3 V power supply, and occupies an area of 1026 μm². For a 0.6 V supply voltage, the proposed OTA improves its voltage gain to 73 dB and achieves a 15 kHz gain–bandwidth product with a power consumption of 41 nW. Full article
(This article belongs to the Special Issue Analog Microelectronic Circuit Design and Applications)
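As a quick plausibility check of the quoted figures (our arithmetic, not the paper's), a single-stage OTA's gain-bandwidth product is roughly gm/(2*pi*CL), so 210 Hz into 10 pF implies a transconductance in the low-nanosiemens range:

```python
# Back-of-the-envelope: implied transconductance from GBW and load.
import math

CL = 10e-12                 # load capacitance (F)
GBW = 210.0                 # gain-bandwidth product (Hz)
gm = 2 * math.pi * GBW * CL
print(f"implied gm = {gm * 1e9:.1f} nS")   # ~13.2 nS
```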

33 pages, 8507 KiB  
Review
Embedded Intelligence on FPGA: Survey, Applications and Challenges
by Kah Phooi Seng, Paik Jen Lee and Li Minn Ang
Electronics 2021, 10(8), 895; https://doi.org/10.3390/electronics10080895 - 8 Apr 2021
Cited by 63 | Viewed by 10439
Abstract
Embedded intelligence (EI) is an emerging research field that aims to incorporate machine learning algorithms and intelligent decision-making capabilities into mobile and embedded devices or systems. There are several challenges to be addressed to realize efficient EI implementations in hardware such [...] Read more.
Embedded intelligence (EI) is an emerging research field that aims to incorporate machine learning algorithms and intelligent decision-making capabilities into mobile and embedded devices or systems. There are several challenges to be addressed to realize efficient EI implementations in hardware, such as the need for: (1) high computational processing; (2) low power consumption (or high energy efficiency); and (3) scalability to accommodate different network sizes and topologies. In recent years, an emerging hardware technology that has demonstrated strong potential and capabilities for EI implementations is the field programmable gate array (FPGA). This paper presents an overview and review of embedded intelligence on FPGAs, with a focus on applications, platforms and challenges. Four main classification and thematic descriptors are reviewed and discussed for EI: (1) EI techniques, including machine learning and neural networks, deep learning, expert systems, fuzzy intelligence, swarm intelligence, self-organizing map (SOM) and extreme learning; (2) applications for EI, including object detection and recognition, indoor localization and surveillance monitoring, and other EI applications; (3) hardware and platforms for EI; and (4) challenges for EI. The paper aims to introduce interested researchers to this area and to motivate the development of practical FPGA solutions for EI deployment. Full article

23 pages, 2721 KiB  
Article
An Interpretable Deep Learning Model for Automatic Sound Classification
by Pablo Zinemanas, Martín Rocamora, Marius Miron, Frederic Font and Xavier Serra
Electronics 2021, 10(7), 850; https://doi.org/10.3390/electronics10070850 - 2 Apr 2021
Cited by 30 | Viewed by 8160
Abstract
Deep learning models have improved cutting-edge technologies in many research areas, but their black-box structure makes it difficult to understand their inner workings and the rationale behind their predictions. This may lead to unintended effects, such as being susceptible to adversarial attacks or [...] Read more.
Deep learning models have improved cutting-edge technologies in many research areas, but their black-box structure makes it difficult to understand their inner workings and the rationale behind their predictions. This may lead to unintended effects, such as being susceptible to adversarial attacks or the reinforcement of biases. Despite the increasing interest in developing deep learning models that provide explanations of their decisions, research in the audio domain is still lacking. To reduce this gap, we propose a novel interpretable deep learning model for automatic sound classification, which explains its predictions based on the similarity of the input to a set of learned prototypes in a latent space. We leverage domain knowledge by designing a frequency-dependent similarity measure and by considering different time-frequency resolutions in the feature space. The proposed model achieves results comparable to those of state-of-the-art methods in three different sound classification tasks involving speech, music, and environmental audio. In addition, we present two automatic methods to prune the proposed model that exploit its interpretability. Our system is open source and is accompanied by a web application for the manual editing of the model, which allows for a human-in-the-loop debugging approach. Full article
(This article belongs to the Special Issue Machine Learning Applied to Music/Audio Signal Processing)
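A minimal sketch of prototype-based classification as we read the idea; plain Euclidean distance stands in for the paper's frequency-dependent similarity measure, and all dimensions and values are invented:

```python
# Score an input by its similarity to learned prototypes in latent space;
# the nearest prototype also explains the decision.
import numpy as np

rng = np.random.default_rng(3)
prototypes = rng.standard_normal((8, 16))   # 8 prototypes, 16-D latent
proto_class = np.array([0, 0, 1, 1, 2, 2, 3, 3])

def classify(z):
    d = np.linalg.norm(prototypes - z, axis=1)
    sim = np.exp(-d ** 2)                   # similarity from distance
    k = int(np.argmax(sim))
    return proto_class[k], k, sim[k]

z = rng.standard_normal(16)                 # encoder output for one clip
label, proto_id, score = classify(z)
print(f"class {label} (prototype {proto_id}, similarity {score:.3f})")
```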

16 pages, 5585 KiB  
Article
A Simulated Annealing Algorithm and Grid Map-Based UAV Coverage Path Planning Method for 3D Reconstruction
by Sichen Xiao, Xiaojun Tan and Jinping Wang
Electronics 2021, 10(7), 853; https://doi.org/10.3390/electronics10070853 - 2 Apr 2021
Cited by 60 | Viewed by 5422
Abstract
With the extensive application of 3D maps, acquiring high-quality images with unmanned aerial vehicles (UAVs) for precise 3D reconstruction has become a prominent topic of study. In this research, we proposed a coverage path planning method for UAVs to achieve full coverage of [...] Read more.
With the extensive application of 3D maps, acquiring high-quality images with unmanned aerial vehicles (UAVs) for precise 3D reconstruction has become a prominent topic of study. In this research, we proposed a coverage path planning method for UAVs to achieve full coverage of a target area and to collect high-resolution images, while considering the overlap ratio of the collected images and the energy consumption of clustered UAVs. The overlap ratio of the collected image set is guaranteed through a map decomposition method, which ensures that the reconstruction results are not affected by model breakage. In consideration of the small battery capacity of common commercial quadrotor UAVs, ray-scan-based area division was adopted to segment the target area, and near-optimal paths within the subareas were calculated by a simulated annealing algorithm, achieving balanced task assignment for UAV formations and minimum energy consumption for each UAV. The proposed system was validated through a site experiment and achieved a reduction in path length of approximately 12.6% compared to the traditional zigzag path. Full article
(This article belongs to the Special Issue Advances in SLAM and Data Fusion for UAVs/Drones)
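A toy simulated-annealing pass over a waypoint visiting order shows the mechanism in isolation; the grid, cooling schedule, and 2-opt move are our assumptions, not the paper's exact formulation:

```python
# Shorten a visiting order over grid cells with simulated annealing.
import math, random

random.seed(4)
pts = [(x, y) for x in range(5) for y in range(5)]   # 5x5 grid of cells

def length(order):
    return sum(math.dist(pts[order[i]], pts[order[i + 1]])
               for i in range(len(order) - 1))

order = list(range(len(pts)))
random.shuffle(order)
cur, T = length(order), 10.0
while T > 1e-3:
    i, j = sorted(random.sample(range(len(order)), 2))
    cand = order[:i] + order[i:j + 1][::-1] + order[j + 1:]  # 2-opt reversal
    d = length(cand) - cur
    if d < 0 or random.random() < math.exp(-d / T):   # accept rule
        order, cur = cand, cur + d
    T *= 0.999                                        # cooling step
print(f"path length after annealing: {cur:.2f}")
```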

17 pages, 7946 KiB  
Article
Real-Time Face Mask Detection Method Based on YOLOv3
by Xinbei Jiang, Tianhan Gao, Zichen Zhu and Yukang Zhao
Electronics 2021, 10(7), 837; https://doi.org/10.3390/electronics10070837 - 1 Apr 2021
Cited by 120 | Viewed by 12773
Abstract
The rapid outbreak of COVID-19 has caused serious harm and infected tens of millions of people worldwide. Since there is no specific treatment, wearing masks has become an effective method to prevent the transmission of COVID-19 and is required in most public areas, [...] Read more.
The rapid outbreak of COVID-19 has caused serious harm and infected tens of millions of people worldwide. Since there is no specific treatment, wearing masks has become an effective method to prevent the transmission of COVID-19 and is required in most public areas, which has also led to a growing demand for automatic real-time mask detection services to replace manual reminding. However, few studies on face mask detection have been conducted, and it is urgent to improve the performance of mask detectors. In this paper, we proposed the Properly Wearing Masked Face Detection Dataset (PWMFD), which includes 9205 images of mask-wearing samples in three categories. Moreover, we proposed Squeeze and Excitation (SE)-YOLOv3, a mask detector with relatively balanced effectiveness and efficiency. We integrated the attention mechanism by introducing the SE block into Darknet53 to obtain the relationships among channels, so that the network can focus more on the important features. We adopted the GIoU loss, which better describes the spatial difference between the predicted and ground-truth boxes, to improve the stability of bounding box regression. Focal loss was utilized to address the extreme foreground-background class imbalance. In addition, we applied corresponding image augmentation techniques to further improve the robustness of the model on this specific task. Experimental results showed that SE-YOLOv3 outperformed YOLOv3 and other state-of-the-art detectors on PWMFD, achieving an 8.6% higher mAP than YOLOv3 while maintaining a comparable detection speed. Full article
(This article belongs to the Section Computer Science & Engineering)
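For reference, a squeeze-and-excitation block in minimal PyTorch form; reduction ratio 16 is the common default, and the wiring into Darknet53 is not shown here:

```python
# Squeeze spatial information into per-channel statistics, then
# re-weight the channels (the "excitation" step).
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)           # squeeze
        self.fc = nn.Sequential(                      # excitation
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                  # channel re-weighting

x = torch.randn(2, 64, 13, 13)
print(SEBlock(64)(x).shape)                           # torch.Size([2, 64, 13, 13])
```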

13 pages, 2243 KiB  
Article
Semi-Automatic Guidance vs. Manual Guidance in Agriculture: A Comparison of Work Performance in Wheat Sowing
by Antonio Scarfone, Rodolfo Picchio, Angelo del Giudice, Francesco Latterini, Paolo Mattei, Enrico Santangelo and Alberto Assirelli
Electronics 2021, 10(7), 825; https://doi.org/10.3390/electronics10070825 - 31 Mar 2021
Cited by 13 | Viewed by 4500
Abstract
The use of digital systems in precision agriculture is becoming more and more attractive for farmers at every level. A few years ago, the use of these technologies was limited to large farms, due to the considerable income needed to amortize the large [...] Read more.
The use of digital systems in precision agriculture is becoming more and more attractive for farmers at every level. A few years ago, the use of these technologies was limited to large farms, due to the considerable income needed to amortize the large investment required. Although this technology has now become more affordable, there is a lack of scientific data demonstrating how these systems can deliver quantifiable advantages for farmers. Thus, the transition towards precision agriculture is still very slow. This issue not only negatively affects the agricultural economy, but also slows down the potential environmental benefits that may result from it. The starting point of precision agriculture can be considered the introduction of satellite tractor guidance. For instance, with semi-automatic and automatic tractor guidance, farmers can profit from greater accuracy and higher machine performance during several farm operations such as plowing, harrowing, sowing, and fertilising. The goal of this study is to compare semi-automatic guidance with manual guidance in wheat sowing, evaluating parameters such as machine performance, seed supply and the operational costs of both configurations. Full article

15 pages, 1729 KiB  
Article
On the Sampling of the Fresnel Field Intensity over a Full Angular Sector
by Rocco Pierri and Raffaele Moretta
Electronics 2021, 10(7), 832; https://doi.org/10.3390/electronics10070832 - 31 Mar 2021
Cited by 6 | Viewed by 2435
Abstract
In this article, the question of how to sample the square amplitude of the radiated field in the framework of phaseless antenna diagnostics is addressed. In particular, the goal of the article is to find a discretization scheme that exploits a non-redundant number [...] Read more.
In this article, the question of how to sample the square amplitude of the radiated field in the framework of phaseless antenna diagnostics is addressed. In particular, the goal of the article is to find a discretization scheme that exploits a non-redundant number of samples and returns a discrete model whose mathematical properties are similar to those of the continuous one. To this end, the lifting technique is first used to obtain a linear representation of the square amplitude of the radiated field. Then, a discretization scheme based on the Shannon sampling theorem is exploited to discretize the continuous model. In more detail, the kernel of the related eigenvalue problem is first recast as the Fourier transform of a window function and then evaluated. Finally, the sampling theory approach is applied to obtain a discrete model whose singular values approximate all the relevant singular values of the continuous linear model. The study refers to a strip source whose square magnitude of the radiated field is observed in the Fresnel zone over a 2D observation domain. Full article
(This article belongs to the Special Issue Photonic and Microwave Sensing Developments and Applications)

16 pages, 6257 KiB  
Article
A Gated Oscillator Clock and Data Recovery Circuit for Nanowatt Wake-Up and Data Receivers
by Matteo D’Addato, Alessia M. Elgani, Luca Perilli, Eleonora Franchi Scarselli, Antonio Gnudi, Roberto Canegallo and Giulio Ricotti
Electronics 2021, 10(7), 780; https://doi.org/10.3390/electronics10070780 - 25 Mar 2021
Cited by 3 | Viewed by 3599
Abstract
This article presents a data-startable baseband logic featuring a gated oscillator clock and data recovery (GO-CDR) circuit for nanowatt wake-up and data receivers (WuRxs). At each data transition, the phase misalignment between the data coming from the analog front-end (AFE) and the clock [...] Read more.
This article presents a data-startable baseband logic featuring a gated oscillator clock and data recovery (GO-CDR) circuit for nanowatt wake-up and data receivers (WuRxs). At each data transition, the phase misalignment between the data coming from the analog front-end (AFE) and the clock is cleared by the GO-CDR circuit, thus allowing the reception of long data streams. Any free-running frequency mismatch between the GO and the bitrate does not limit the number of receivable bits, but only the maximum number of equal consecutive bits (Nm). To overcome this limitation, the proposed system includes a frequency calibration circuit, which reduces the frequency mismatch to ±0.5%, thus enabling the WuRx to be used with different encoding techniques up to Nm = 100. A full WuRx prototype, including an always-on clockless AFE operating in subthreshold, was fabricated with STMicroelectronics 90 nm BCD technology. The WuRx is supplied with 0.6 V, and the power consumption, excluding the calibration circuit, is 12.8 nW in the rest state and 17 nW at a 1 kbps data rate. With a 1 kbps On-Off Keying (OOK) modulated input and −35 dBm of input RF power after the input matching network (IMN), a 10⁻³ missed detection rate with zero-bit error tolerance is measured, transmitting 63-bit packets with Nm ranging from 1 to 63. The total sensitivity, including the estimated IMN gain at 100 MHz and 433 MHz, is −59.8 dBm and −52.3 dBm, respectively. In comparison with an ideal CDR, the sensitivity degradation due to the GO-CDR is 1.25 dB. False alarm rate measurements lasting 24 h revealed zero overall false wake-ups. Full article
(This article belongs to the Special Issue Energy Efficient Circuit Design Techniques for Low Power Systems)
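The Nm = 100 figure follows directly from the quoted mismatch: a gated oscillator realigns only on data edges, so the timing error accumulated over a run of equal bits must stay below half a bit period, i.e. Nm * |df/f| < 0.5 (our arithmetic, consistent with the numbers in the abstract):

```python
# Longest tolerable run of equal consecutive bits for a gated-oscillator CDR.
mismatch = 0.005                 # +/-0.5 % free-running frequency error
nm_max = 0.5 / mismatch          # accumulated error reaches 0.5 UI here
print(nm_max)                    # 100.0, matching Nm = 100 in the text
```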

16 pages, 7162 KiB  
Article
Modeling Small UAV Micro-Doppler Signature Using Millimeter-Wave FMCW Radar
by Marco Passafiume, Neda Rojhani, Giovanni Collodi and Alessandro Cidronali
Electronics 2021, 10(6), 747; https://doi.org/10.3390/electronics10060747 - 22 Mar 2021
Cited by 22 | Viewed by 5989
Abstract
With the increase in small unmanned aerial vehicle (UAV) applications in several technology areas, the detection and classification of small UAVs have become of interest. To cope with small radar cross-sections (RCSs), slow flying speeds, and low flying altitudes, the micro-Doppler signature provides some of the [...] Read more.
With the increase in small unmanned aerial vehicle (UAV) applications in several technology areas, the detection and classification of small UAVs have become of interest. To cope with small radar cross-sections (RCSs), slow flying speeds, and low flying altitudes, the micro-Doppler signature provides some of the most distinctive information for identifying and classifying targets in many radar systems. In this paper, we introduce an effective model of the micro-Doppler effect that is suitable for frequency-modulated continuous-wave (FMCW) radar applications and exploit it to investigate UAV signatures. The latter depend on the number of UAV motors, which are considered vibrational sources, and on their rotation speed. To demonstrate the reliability of the proposed model, it is used to build simulated FMCW radar images, which are compared with experimental data acquired by a 77 GHz FMCW multiple-input multiple-output (MIMO) cost-effective automotive radar platform. The experimental results confirm the model's ability to estimate the class of the UAV, namely its number of motors, in different operative scenarios. In addition, the experimental results show that the motors' rotation speed does not imprint a significant signature for the classification of the UAV; thus, the estimated number of motors represents the only viable parameter for small UAV classification using the micro-Doppler effect. Full article
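A rough sketch of the vibrational-source idea, with toy numbers throughout rather than the paper's model: each motor contributes a small periodic phase modulation to the echo, and a spectrogram exposes the motor-related sidebands around the body Doppler.

```python
# Simulate an echo with per-motor phase modulation and inspect its
# time-frequency content (4 rotor tones = 4 motors).
import numpy as np
from scipy.signal import spectrogram

fs, T = 4000, 1.0
t = np.arange(0, T, 1 / fs)
f_body = 200.0                           # bulk Doppler of the airframe (Hz)
rotor_rates = [55.0, 57.0, 60.0, 62.0]   # assumed vibration tones

phase = 2 * np.pi * f_body * t
for fr in rotor_rates:
    phase += 0.8 * np.sin(2 * np.pi * fr * t)   # micro-Doppler modulation
echo = np.exp(1j * phase)

f, tt, S = spectrogram(echo, fs=fs, nperseg=256, return_onesided=False)
print(S.shape)   # time-frequency map where the motor count is readable
```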

13 pages, 4575 KiB  
Article
Objective Assessment of Walking Impairments in Myotonic Dystrophy by Means of a Wearable Technology and a Novel Severity Index
by Giovanni Saggio, Alessandro Manoni, Vito Errico, Erica Frezza, Ivan Mazzetta, Rosario Rota, Roberto Massa and Fernanda Irrera
Electronics 2021, 10(6), 708; https://doi.org/10.3390/electronics10060708 - 17 Mar 2021
Cited by 1 | Viewed by 2229
Abstract
Myotonic dystrophy type 1 (DM1) is a genetically inherited autosomal dominant disease characterized by multisystem involvement, including muscle, heart, brain, eye, and endocrine system. Although several methods are available to evaluate muscle strength, endurance, and dexterity, there are no validated outcome measures aimed [...] Read more.
Myotonic dystrophy type 1 (DM1) is a genetically inherited autosomal dominant disease characterized by multisystem involvement, including muscle, heart, brain, eye, and endocrine system. Although several methods are available to evaluate muscle strength, endurance, and dexterity, there are no validated outcome measures aimed at objectively evaluating qualitative and quantitative gait alterations. Advantageously, wearable sensing technology has been successfully adopted to objectify the assessment of motor disabilities in different medical conditions, and here we consider the adoption of such technology specifically for DM1. In particular, we measured motor tasks through inertial measurement units in a cohort of 13 DM1 patients and 11 healthy control counterparts. The motor tasks consisted of 16 meters of walking, both at a comfortable speed and at a fast pace. The measured data consisted of the plantar-flexion and dorsi-flexion angles assumed by both ankles, so as to objectively evidence the foot-drop behavior of the DM1 disease and to define a novel severity index, termed SI-Norm2, to rate the grade of walking impairment. According to the obtained results, our approach could be useful for a more precise stratification of DM1 patients, providing a new tool for a personalized rehabilitation approach. Full article
(This article belongs to the Special Issue Wearable Electronics for Assessing Human Motor (dis)Abilities)

23 pages, 92419 KiB  
Article
Virtual Scenario Simulation and Modeling Framework in Autonomous Driving Simulators
by Mingyun Wen, Jisun Park, Yunsick Sung, Yong Woon Park and Kyungeun Cho
Electronics 2021, 10(6), 694; https://doi.org/10.3390/electronics10060694 - 16 Mar 2021
Cited by 9 | Viewed by 4399
Abstract
Recently, virtual environment-based techniques to train sensor-based autonomous driving models have been widely employed due to their efficiency. However, a simulated virtual environment is required to be highly similar to its real-world counterpart to ensure the applicability of such models to actual autonomous [...] Read more.
Recently, virtual environment-based techniques to train sensor-based autonomous driving models have been widely employed due to their efficiency. However, a simulated virtual environment is required to be highly similar to its real-world counterpart to ensure the applicability of such models to actual autonomous vehicles. Though advances in hardware and three-dimensional graphics engine technology have enabled the creation of realistic virtual driving environments, the myriad of scenarios occurring in the real world can only be simulated to a limited extent. To address this problem, this study proposes a scenario simulation and modeling framework that simulates the behavior of objects that may be encountered while driving. This framework maximizes the number of scenarios, their types, and the driving experience in a virtual environment. Furthermore, a simulator was implemented and employed to evaluate the performance of the proposed framework. Full article
(This article belongs to the Special Issue AI-Based Autonomous Driving System)

20 pages, 9218 KiB  
Article
Toward an Advanced Human Monitoring System Based on a Smart Body Area Network for Industry Use
by Kento Takabayashi, Hirokazu Tanaka and Katsumi Sakakibara
Electronics 2021, 10(6), 688; https://doi.org/10.3390/electronics10060688 - 15 Mar 2021
Cited by 9 | Viewed by 2513
Abstract
This research provides a study on a smart body area network (SmartBAN) physical layer (PHY), as an Internet of Medical Things (IoMT) technology, for an advanced human monitoring system in industrial use. The SmartBAN provides a new PHY and a medium [...] Read more.
This research provides a study on a smart body area network (SmartBAN) physical layer (PHY), as an Internet of Medical Things (IoMT) technology, for an advanced human monitoring system in industrial use. The SmartBAN provides a new PHY and a medium access control (MAC) layer, improving performance and providing very low-latency transmission of emergency information with low energy consumption compared with other wireless body area network (WBAN) standards. On the other hand, IoMT applications are expected to become more advanced with smarter wearable devices, such as augmented reality-based human monitoring and work support in a factory. Therefore, it is possible to develop more advanced human monitoring systems for industrial use by combining the SmartBAN with multimedia devices. However, the SmartBAN PHY is not designed to transmit multimedia information such as audio and video. To address this issue, multilevel phase shift keying (PSK) modulation is applied to the SmartBAN PHY, and the symbol rate is improved by setting the roll-off factor appropriately. The numerical results show that a sufficient link budget, receiver sensitivity and fade margin are obtained even when these approaches are applied to the SmartBAN PHY. The results indicate that these techniques are required for high-quality audio or video transmission, as well as vital sign data transmission, in a SmartBAN. Full article
(This article belongs to the Special Issue Smart Bioelectronics and Wearable Systems)
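The two levers mentioned, the PSK order and the roll-off factor, trade off as in the sketch below; the bandwidth value and the raised-cosine relation Rs = B / (1 + a) are textbook assumptions for illustration, not SmartBAN specification figures:

```python
# Bit rate vs. roll-off factor and PSK order under a fixed bandwidth.
import math

B = 2e6                               # assumed channel bandwidth (Hz)
for rolloff in (0.5, 0.35, 0.2):
    rs = B / (1 + rolloff)            # achievable symbol rate
    for m in (2, 4, 8):               # BPSK, QPSK, 8-PSK
        bitrate = rs * math.log2(m)   # bits per symbol grow with log2(M)
        print(f"a={rolloff:.2f} M={m}: {bitrate / 1e6:.2f} Mbit/s")
```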
