Electronics, Volume 10, Issue 21 (November-1 2021) – 167 articles

Cover Story: Power grids are rapidly progressing into smart grids, in which renewables, energy storage systems, and electric mobility are increasingly present and distributed, including in the context of smart homes. Since such technologies natively operate in DC, a convergence of the electrical grid toward DC grids can be anticipated. Nevertheless, traditional loads natively operating in AC will continue to be used, highlighting the importance of hybrid AC/DC grids. Considering this new paradigm, innovative control algorithms for the front-end AC/DC converters of hybrid AC/DC smart homes are of utmost importance, including for providing unipolar or bipolar DC grids that interface native DC technologies, as well as for ensuring power quality from a smart grid point of view.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
14 pages, 2315 KiB  
Article
Electric Motors for Variable-Speed Drive of Lock Valves
by Aleksey V. Udovichenko, Dmitri Kaluzhskij, Nikita Uvarov and Ali Mekhtiyev
Electronics 2021, 10(21), 2727; https://doi.org/10.3390/electronics10212727 - 8 Nov 2021
Cited by 2 | Viewed by 2363
Abstract
Improving the operational reliability of nuclear power plants, combined heat and power plants (CHP), as well as oil and gas pipelines is a priority task in the development of a variable-speed drive for lock valves used at these facilities. This paper analyzes the technical requirements for such devices: a motor has been selected, its electrical equilibrium and moment equations have been obtained, and recommendations for the selection of the kinematic drive scheme have been formulated. Based on the theoretical data obtained, a prototype has been developed, manufactured, and tested. Full article
(This article belongs to the Special Issue Advanced Technologies in Energy-Efficient Convertors)

20 pages, 8805 KiB  
Article
A Fully Integrated 64-Channel Recording System for Extracellular Raw Neural Signals
by Xiangwei Zhang, Quan Li, Chengying Chen, Yan Li, Fuqiang Zuo, Xin Liu, Hao Zhang, Xiaosong Wang and Yu Liu
Electronics 2021, 10(21), 2726; https://doi.org/10.3390/electronics10212726 - 8 Nov 2021
Cited by 2 | Viewed by 2333
Abstract
This paper presents a fully integrated 64-channel neural recording system for local field potential and action potential. It mainly includes 64 low-noise amplifiers, 64 programmable amplifiers and filters, 9 switched-capacitor (SC) amplifiers, and a 10-bit successive approximation register analogue-to-digital converter (SAR ADC). Two innovations have been proposed. First, a two-stage amplifier with high-gain, rail-to-rail input and output, and dynamic current enhancement improves the speed of SC amplifiers. The second is a clock logic that can be used to align the switching clock of 64 channels with the sampling clock of ADC. Implemented in an SMIC 0.18 μm Complementary Metal Oxide Semiconductor (CMOS) process, the 64-channel system chip has a die area of 4 × 4 mm2 and is packaged in a QFN−88 of 10 × 10 mm2. Supplied by 1.8 V, the total power is about 8.28 mW. For each channel, rail-to-rail electrode DC offset can be rejected, the referred-to-input noise within 1 Hz–10 kHz is about 5.5 μVrms, the common-mode rejection ratio at 50 Hz is about 69 dB, and the output total harmonic distortion is 0.53%. Measurement results also show that multiple neural signals are able to be simultaneously recorded. Full article
(This article belongs to the Special Issue Brain Machine Interfaces)

40 pages, 1556 KiB  
Review
Overview of Signal Processing and Machine Learning for Smart Grid Condition Monitoring
by Elhoussin Elbouchikhi, Muhammad Fahad Zia, Mohamed Benbouzid and Soumia El Hani
Electronics 2021, 10(21), 2725; https://doi.org/10.3390/electronics10212725 - 8 Nov 2021
Cited by 27 | Viewed by 5476
Abstract
Nowadays, the main grid is facing several challenges related to the integration of renewable energy resources, deployment of grid-level energy storage devices, deployment of new usages such as the electric vehicle, massive usage of power electronic devices at different electric grid stages and the inter-connection with microgrids and prosumers. To deal with these challenges, the concept of a smart, fault-tolerant, and self-healing power grid has emerged in the last few decades to move towards a more resilient and efficient global electrical network. The smart grid concept implies a bi-directional flow of power and information between all key energy players and requires smart information technologies, smart sensors, and low-latency communication devices. Moreover, with the increasing constraints, the power grid is subjected to several disturbances, which can evolve to a fault and, in some rare circumstances, to catastrophic failure. These disturbances include wiring issues, grounding, switching transients, load variations, and harmonics generation. These aspects justify the need for real-time condition monitoring of the power grid and its subsystems and the implementation of predictive maintenance tools. Hence, researchers in industry and academia are developing and implementing power systems monitoring approaches allowing pervasive and effective communication, fault diagnosis, disturbance classification and root cause identification. Specifically, a focus is placed on power quality monitoring using advanced signal processing and machine learning approaches for disturbances characterization. Even though this review paper is not exhaustive, it can be considered as a valuable guide for researchers and engineers who are interested in signal processing approaches and machine learning techniques for power system monitoring and grid-disturbance classification purposes. Full article
(This article belongs to the Special Issue Resilience-Oriented Smart Grid Systems)
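
The review above surveys signal-processing features combined with machine-learning classifiers for characterizing grid disturbances. As a hedged illustration only (not code from the paper), the Python sketch below extracts two features such classifiers commonly consume, RMS and an FFT-based total harmonic distortion estimate, from a synthetic voltage waveform; the 50 Hz fundamental, sampling rate, and injected 5th harmonic are assumptions.

```python
import numpy as np

def waveform_features(v, fs=6400.0, f0=50.0, n_harmonics=10):
    """RMS and THD estimate of one voltage window (illustrative only)."""
    rms = np.sqrt(np.mean(v ** 2))
    spectrum = np.abs(np.fft.rfft(v)) / len(v)
    freqs = np.fft.rfftfreq(len(v), d=1.0 / fs)
    # magnitude of the fundamental and of the first few harmonics
    fund = spectrum[np.argmin(np.abs(freqs - f0))]
    harm = [spectrum[np.argmin(np.abs(freqs - k * f0))] for k in range(2, n_harmonics + 1)]
    thd = np.sqrt(np.sum(np.square(harm))) / fund
    return rms, thd

# Synthetic 50 Hz waveform with a 5th-harmonic disturbance (assumed example).
fs = 6400.0
t = np.arange(0, 0.2, 1 / fs)
clean = 230 * np.sqrt(2) * np.sin(2 * np.pi * 50 * t)
distorted = clean + 25 * np.sin(2 * np.pi * 250 * t)

print("clean    :", waveform_features(clean, fs))
print("distorted:", waveform_features(distorted, fs))
```

Feature vectors of this kind would then feed whatever classifier is chosen for disturbance characterization.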

21 pages, 1340 KiB  
Article
Stable, Low Power and Bit-Interleaving Aware SRAM Memory for Multi-Core Processing Elements
by Nandakishor Yadav, Youngbae Kim, Shuai Li and Kyuwon Ken Choi
Electronics 2021, 10(21), 2724; https://doi.org/10.3390/electronics10212724 - 8 Nov 2021
Cited by 4 | Viewed by 3705
Abstract
The machine learning and convolutional neural network (CNN)-based artificial intelligence accelerator needs significant parallel data processing from the cache memory. A separate read port is mostly used to design built-in computational memory (CRAM) to reduce the data processing bottleneck. This memory uses multi-port read and write operations, which reduces stability and reliability. In this paper, we propose a self-adaptive 12T SRAM cell to increase the read stability for multi-port operation. The self-adaptive technique increases stability and reliability, and the read stability is increased by refreshing the storing node in the read mode of operation. The proposed technique also prevents the bit-interleaving problem. Further, we offer a butterfly-inspired SRAM bank to increase the performance and reduce the power dissipation. The proposed SRAM saves 12% more total power than the state-of-the-art 12T SRAM cell-based SRAM, and the write performance is improved by 28.15% compared with the state-of-the-art 12T SRAM design. The total area overhead of the proposed architecture compared to the conventional 6T SRAM cell-based SRAM is only 1.9 times the area of the 6T SRAM cell. Full article
(This article belongs to the Special Issue Applied AI-Based Platform Technology and Application)

43 pages, 7682 KiB  
Article
A Robust Framework for MADS Based on DL Techniques on the IoT
by Hussah Talal and Rachid Zagrouba
Electronics 2021, 10(21), 2723; https://doi.org/10.3390/electronics10212723 - 8 Nov 2021
Cited by 1 | Viewed by 2521
Abstract
Day after day, new types of malware appear, renew, and continuously develop, which makes them difficult to identify and stop. Some attackers exploit artificial intelligence (AI) to create renewable malware with different signatures that are difficult to detect. As a result, the performance of traditional malware detection systems (MDS) and protection mechanisms is weakened, and malware can easily penetrate them. This poses a great risk to security in the Internet of Things (IoT) environment, which is interconnected and produces large, continuous data streams. Penetrating any of the things in an IoT environment leads to the penetration of the entire IoT network and allows control of the devices on it. Moreover, penetration of the IoT environment leads to a violation of users' privacy, which may result in many risks, such as the theft of a user's credit card information or identity. Therefore, it is necessary to propose a robust framework for an MDS based on DL that has a high ability to detect renewable malware, and to propose malware anomaly detection systems (MADS) that work like a human mind to solve the security problem in IoT environments. The RoMADS model achieves high results: 99.038% accuracy and a 99.997% detection rate. The experimental results outperform eighteen models from previous research works in this field, which proves the effectiveness of the RoMADS framework for detecting malware in the IoT. Full article
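
The abstract reports accuracy and detection-rate figures for RoMADS. The snippet below is only a hedged sketch of how these two metrics are conventionally derived from a binary confusion matrix; the example labels are invented and the function name is hypothetical, not part of the authors' code.

```python
import numpy as np

def accuracy_and_detection_rate(y_true, y_pred):
    """Accuracy = (TP+TN)/all; detection rate (recall for malware) = TP/(TP+FN)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    accuracy = (tp + tn) / len(y_true)
    detection_rate = tp / (tp + fn)
    return accuracy, detection_rate

# Invented labels: 1 = malware, 0 = benign.
acc, dr = accuracy_and_detection_rate([1, 1, 0, 0, 1, 0], [1, 1, 0, 1, 1, 0])
print(f"accuracy={acc:.3f}, detection rate={dr:.3f}")
```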

24 pages, 4707 KiB  
Article
Assessment of the Readiness of Industrial Enterprises for Automation and Digitalization of Business Processes
by Irina Krakovskaya and Julia Korokoshko
Electronics 2021, 10(21), 2722; https://doi.org/10.3390/electronics10212722 - 8 Nov 2021
Cited by 7 | Viewed by 2772
Abstract
The purpose of this article is to identify promising areas of digitalization in the work of industrial enterprises at the national and regional levels. The study was conducted on the basis of industrial enterprises of the Republic of Mordovia using the methods of a systematic approach, comparative and strategic analysis, mathematical statistics, etc. As a result, we assessed the impact of the digital transformation of the economy on the development of industrial enterprises in Russia and the Republic of Mordovia, changes in the efficiency of enterprises associated with the expanded use of IT, the degree of satisfaction of enterprises with specific information and communication technology tools, etc. Spearman's rank correlation demonstrates the positive and negative effects of ICT use by the industrial enterprises. The novelty and practical value of the obtained results lie in the fact that the confirmed research hypotheses reflect both specific regional factors and systemic nationwide problems of the digitalization of Russian industry and the automation of business processes, and allow us to outline the priority areas of the digital transformation of business models not only for the studied enterprises of the region's industry, but also for the non-resource industrial sector in general. Full article
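
The study relies on Spearman's rank correlation to relate ICT use to enterprise performance. A minimal sketch of computing this coefficient with SciPy is shown below; the two data vectors are invented placeholders, not survey data from the paper.

```python
from scipy.stats import spearmanr

# Invented placeholder data: an ICT-usage score and a performance indicator
# for eight hypothetical enterprises.
ict_usage   = [2, 5, 3, 8, 7, 1, 6, 4]
performance = [1, 4, 2, 7, 8, 2, 5, 3]

rho, p_value = spearmanr(ict_usage, performance)
print(f"Spearman rho = {rho:.3f}, p = {p_value:.3f}")
```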

22 pages, 2511 KiB  
Review
Privacy Preservation Models for Third-Party Auditor over Cloud Computing: A Survey
by Abdul Razaque, Mohamed Ben Haj Frej, Bandar Alotaibi and Munif Alotaibi
Electronics 2021, 10(21), 2721; https://doi.org/10.3390/electronics10212721 - 8 Nov 2021
Cited by 17 | Viewed by 3312
Abstract
Cloud computing has become a prominent technology due to its important utility service; this service concentrates on outsourcing data to organizations and individual consumers. Cloud computing has considerably changed the manner in which individuals or organizations store, retrieve, and organize their personal information. Despite the manifest development in cloud computing, there are still some concerns regarding the level of security and issues related to adopting cloud computing that prevent users from fully trusting this useful technology. Hence, for the sake of reinforcing the trust between cloud clients (CC) and cloud service providers (CSP), as well as safeguarding the CC’s data in the cloud, several security paradigms of cloud computing based on a third-party auditor (TPA) have been introduced. The TPA, as a trusted party, is responsible for checking the integrity of the CC’s data and all the critical information associated with it. However, the TPA could become an adversary and could aim to deteriorate the privacy of the CC’s data by playing a malicious role. In this paper, we present the state of the art of cloud computing’s privacy-preserving models (PPM) based on a TPA. Three TPA factors of paramount significance are discussed: TPA involvement, security requirements, and security threats caused by vulnerabilities. Moreover, TPA’s privacy preserving models are comprehensively analyzed and categorized into different classes with an emphasis on their dynamicity. Finally, we discuss the limitations of the models and present our recommendations for their improvement. Full article
(This article belongs to the Special Issue Big Data Privacy-Preservation)

18 pages, 2888 KiB  
Article
MemBox: Shared Memory Device for Memory-Centric Computing Applicable to Deep Learning Problems
by Yongseok Choi, Eunji Lim, Jaekwon Shin and Cheol-Hoon Lee
Electronics 2021, 10(21), 2720; https://doi.org/10.3390/electronics10212720 - 8 Nov 2021
Viewed by 2332
Abstract
Large-scale computational problems that need to be addressed in modern computers, such as deep learning or big data analysis, cannot be solved in a single computer, but can be solved with distributed computer systems. Since most distributed computing systems, consisting of a large number of networked computers, should propagate their computational results to each other, they can suffer the problem of an increasing overhead, resulting in lower computational efficiencies. To solve these problems, we proposed an architecture of a distributed system that used a shared memory that is simultaneously accessible by multiple computers. Our architecture aimed to be implemented in FPGA or ASIC. Using an FPGA board that implemented our architecture, we configured the actual distributed system and showed the feasibility of our system. We compared the results of the deep learning application test using our architecture with that using Google Tensorflow’s parameter server mechanism. We showed improvements in our architecture beyond Google Tensorflow’s parameter server mechanism and we determined the future direction of research by deriving the expected problems. Full article
(This article belongs to the Topic Machine and Deep Learning)

12 pages, 1953 KiB  
Article
IoT and Cloud Computing in Health-Care: A New Wearable Device and Cloud-Based Deep Learning Algorithm for Monitoring of Diabetes
by Ahmed R. Nasser, Ahmed M. Hasan, Amjad J. Humaidi, Ahmed Alkhayyat, Laith Alzubaidi, Mohammed A. Fadhel, José Santamaría and Ye Duan
Electronics 2021, 10(21), 2719; https://doi.org/10.3390/electronics10212719 - 8 Nov 2021
Cited by 62 | Viewed by 5459
Abstract
Diabetes is a chronic disease that can affect human health negatively when the glucose level in the blood is elevated beyond a certain range, a condition called hyperglycemia. Current devices for continuous glucose monitoring (CGM) supervise the blood glucose level and alert users with type-1 diabetes once a certain critical level is surpassed. This can leave the body of the patient working at critical levels until medicine is taken to reduce the glucose level, consequently increasing the risk of considerable health damage if the intake is delayed. To overcome this, a new approach based on cutting-edge software and hardware technologies is proposed in this paper. Specifically, an artificial intelligence deep learning (DL) model is proposed to predict glucose levels over a 30 min horizon. Moreover, cloud computing and IoT technologies are considered to implement the prediction model and combine it with the existing wearable CGM model to provide patients with predictions of future glucose levels. Among the many DL methods in the state of the art (SoTA), a cascaded RNN-RBM DL model based on recurrent neural networks (RNNs) and restricted Boltzmann machines (RBMs) was considered due to its superior properties regarding prediction accuracy. The conducted experiments show that the proposed Cloud&DL-based wearable approach achieves an average error of 15.589 in terms of RMSE and outperforms similar existing blood glucose prediction methods in the SoTA. Full article
(This article belongs to the Special Issue New Technological Advancements and Applications of Deep Learning)
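
The 15.589 figure above is an average RMSE over the 30 min prediction horizon. As a hedged sketch only, the snippet below shows how RMSE between reference and predicted glucose values is computed; the arrays are invented examples, not CGM data from the study.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error between reference and predicted glucose levels."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

# Invented glucose readings (mg/dL) and 30-min-ahead predictions.
reference = [110, 125, 140, 150, 145, 130]
predicted = [105, 130, 150, 160, 140, 128]
print(f"RMSE = {rmse(reference, predicted):.3f} mg/dL")
```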

46 pages, 4167 KiB  
Review
A Survey of Swarm Intelligence Based Load Balancing Techniques in Cloud Computing Environment
by M. A. Elmagzoub, Darakhshan Syed, Asadullah Shaikh, Noman Islam, Abdullah Alghamdi and Syed Rizwan
Electronics 2021, 10(21), 2718; https://doi.org/10.3390/electronics10212718 - 8 Nov 2021
Cited by 24 | Viewed by 6057
Abstract
Cloud computing offers flexible, interactive, and observable access to shared resources on the Internet. It frees users from the requirements of managing computing on their own hardware, and enables them not only to store their data and computing over the internet but also to access them whenever and wherever required. The frequent use of smart devices has reinforced the need for the rapid growth of cloud computing. As more users adapt to the cloud environment, the focus has been placed on load balancing, which allocates tasks or resources to different devices. In cloud computing, load balancing plays a major role in the efficient usage of resources for the highest performance. This requirement results in the development of algorithms that can optimally assign resources while managing load and improving quality of service (QoS). This paper provides a survey of load balancing algorithms inspired by swarm intelligence (SI). The algorithms considered in the discussion are the Genetic Algorithm, BAT Algorithm, Ant Colony, Grey Wolf, Artificial Bee Colony, Particle Swarm, Whale, Social Spider, Dragonfly, and Raven Roosting Optimization. An analysis of the main objectives, areas of application, and targeted issues of each algorithm (with advancements) is presented. In addition, a performance analysis has been carried out based on average response time, data center processing time, and other quality parameters. Full article
(This article belongs to the Special Issue Cloud Computing and Applications, Volume II)
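
Particle Swarm Optimization is one of the SI algorithms surveyed above. The toy sketch below, which is not taken from any surveyed work, uses a plain PSO to assign tasks to virtual machines while minimizing the makespan; the task lengths, VM speeds, and PSO hyper-parameters are invented assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
task_len = rng.uniform(100, 1000, size=30)   # invented task lengths (MI)
vm_speed = np.array([500.0, 750.0, 1000.0])  # invented VM speeds (MIPS)
n_tasks, n_vms = len(task_len), len(vm_speed)

def makespan(position):
    """Decode a continuous particle into a task->VM mapping and score it."""
    assign = np.clip(position.astype(int), 0, n_vms - 1)
    loads = np.zeros(n_vms)
    for t, vm in enumerate(assign):
        loads[vm] += task_len[t] / vm_speed[vm]
    return loads.max()

# Plain PSO: each particle holds one continuous value in [0, n_vms) per task.
n_particles, iters, w, c1, c2 = 20, 100, 0.7, 1.5, 1.5
pos = rng.uniform(0, n_vms, size=(n_particles, n_tasks))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([makespan(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, n_vms - 1e-9)
    vals = np.array([makespan(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print(f"best makespan found: {pbest_val.min():.2f} s")
```

Makespan (the finish time of the most loaded VM) is used here only because it is a common load-balancing objective; the surveyed works also target response time, data center processing time, and other QoS parameters.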

25 pages, 2015 KiB  
Review
Stock Market Prediction Using Machine Learning Techniques: A Decade Survey on Methodologies, Recent Developments, and Future Directions
by Nusrat Rouf, Majid Bashir Malik, Tasleem Arif, Sparsh Sharma, Saurabh Singh, Satyabrata Aich and Hee-Cheol Kim
Electronics 2021, 10(21), 2717; https://doi.org/10.3390/electronics10212717 - 8 Nov 2021
Cited by 86 | Viewed by 47872
Abstract
With the advent of technological marvels like global digitization, the prediction of the stock market has entered a technologically advanced era, revamping the old model of trading. With the ceaseless increase in market capitalization, stock trading has become a center of investment for many financial investors. Many analysts and researchers have developed tools and techniques that predict stock price movements and help investors in proper decision-making. Advanced trading models enable researchers to predict the market using non-traditional textual data from social platforms. The application of advanced machine learning approaches such as text data analytics and ensemble methods has greatly increased prediction accuracy. Meanwhile, the analysis and prediction of stock markets continue to be one of the most challenging research areas due to dynamic, erratic, and chaotic data. This study explains the systematics of machine learning-based approaches for stock market prediction based on the deployment of a generic framework. Findings from the last decade (2011–2021) were critically analyzed, having been retrieved from online digital libraries and databases such as the ACM Digital Library and Scopus. Furthermore, an extensive comparative analysis was carried out to identify the direction of significance. The study would be helpful for emerging researchers to understand the basics and advancements of this emerging area, and thus carry on further research in promising directions. Full article

12 pages, 2828 KiB  
Article
Green’s Functions of Multi-Layered Plane Media with Arbitrary Boundary Conditions and Its Application on the Analysis of the Meander Line Slow-Wave Structure
by Zheng Wen, Jirun Luo and Wenqi Li
Electronics 2021, 10(21), 2716; https://doi.org/10.3390/electronics10212716 - 8 Nov 2021
Cited by 1 | Viewed by 2086
Abstract
In this paper, a method is proposed for solving the dyadic Green's functions (DGF) and scalar Green's functions (SGF) of multi-layered plane media. The DGF and SGF are expressed in matrix form, where the variables of the boundary conditions (BCs) can be separated. The obtained DGF and SGF are in explicit form and suitable for arbitrary boundary conditions, owing to the matrix-form expression and the separable variables of the BCs. The Green's functions with typical BCs are obtained, and the dispersion characteristic of the meander line slow-wave structure (ML-SWS) is analyzed based on the proposed DGF. The relative error between the theoretical results and the simulated ones with different relative permittivities is under 3%, which demonstrates that the proposed DGF is suitable for the electromagnetic analysis of complicated structures, including the ML-SWS. Full article
(This article belongs to the Special Issue High-Frequency Vacuum Electron Devices)

13 pages, 4503 KiB  
Article
Miniaturized Broadband-Multiband Planar Monopole Antenna in Autonomous Vehicles Communication System Device
by Ming-An Chung and Chih-Wei Yang
Electronics 2021, 10(21), 2715; https://doi.org/10.3390/electronics10212715 - 8 Nov 2021
Cited by 11 | Viewed by 2968
Abstract
This article presents a simple antenna structure with only two branches that provides dual-band operation and wide bandwidths. The recommended antenna design is composed of a clockwise spiral shape with a gradual impedance change. Thus, this antenna is well suited for applications based on wireless standards including 5G, B5G, 4G, V2X, the ISM band of WLAN, Bluetooth, the WiFi 6 band, WiMAX, and Sirius/XM Radio for in-vehicle infotainment systems. The proposed antenna, with a dimension of 10 × 5 mm, is simple, easy to fabricate, and well suited to mass production. The operating frequency covers a dual band from 2000 to 2742 MHz and from 4062 to beyond 8000 MHz, and it is also demonstrated that the measured performance results for return loss, radiation, and gain are in good agreement with simulations. The radiation efficiency can reach 91% and 93% at the lower and higher bands, and the antenna gain can achieve 2.7 and 6.75 dBi at the lower and higher bands, respectively. This antenna design has low-profile, low-cost, and small-size features and may be implemented in autonomous vehicles and mobile IoT communication system devices. Full article
(This article belongs to the Special Issue Recent Advances in Antenna Design for 5G Heterogeneous Networks)

11 pages, 3843 KiB  
Article
Analysis and Verification of Traction Motor Iron Loss for Hybrid Electric Vehicles Based on Current Source Analysis Considering Inverter Switching Carrier Frequency
by Jin-Hwan Lee, Woo-Jung Kim and Sang-Yong Jung
Electronics 2021, 10(21), 2714; https://doi.org/10.3390/electronics10212714 - 8 Nov 2021
Cited by 5 | Viewed by 2663
Abstract
In this study, a current source analysis method considering the inverter switching frequency is proposed to improve the precision of loss analysis of a traction motor for a hybrid electric vehicle. Because the iron loss of the traction motor is sensitively influenced by input current fluctuations, current source analysis using the actual current obtained from an inverter is the ideal method for accurate analysis. However, as the traction motor and inverter must be manufactured to obtain the real current, the traction motor is generally designed based on an ideal current source analysis. Our proposed method is an analytic technique that brings the calculated loss of a traction motor close to the actual loss by injecting harmonics of the same order as the inverter switching frequency into the ideal input current. Our method is compared with the analysis of the ideal current source to assess the difference in loss. In addition, a test motor was manufactured, and an efficiency test was conducted to compare the efficiency and verify the effectiveness of our method. Full article
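
As a hedged numerical sketch of the idea described above (not the authors' implementation), the snippet below superimposes switching-frequency ripple components onto an ideal sinusoidal phase current; the fundamental frequency, carrier frequency, and ripple amplitudes are invented placeholders.

```python
import numpy as np

f_fund = 200.0     # electrical fundamental of the traction motor (assumed)
f_carrier = 10e3   # inverter switching carrier frequency (assumed)
i_peak = 100.0     # ideal phase-current amplitude in A (assumed)

t = np.arange(0, 0.01, 1e-6)
i_ideal = i_peak * np.sin(2 * np.pi * f_fund * t)

# Inject small ripple components at the carrier frequency and its sidebands,
# mimicking the current fluctuation produced by PWM switching.
ripple = (2.0 * np.sin(2 * np.pi * f_carrier * t)
          + 1.0 * np.sin(2 * np.pi * (f_carrier + 2 * f_fund) * t)
          + 1.0 * np.sin(2 * np.pi * (f_carrier - 2 * f_fund) * t))
i_realistic = i_ideal + ripple

print("ripple RMS added to the ideal current: %.3f A" % np.sqrt(np.mean(ripple ** 2)))
```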

18 pages, 1672 KiB  
Article
Optimal Sizing and Cost Minimization of Solar Photovoltaic Power System Considering Economical Perspectives and Net Metering Schemes
by Abdul Rauf, Ali T. Al-Awami, Mahmoud Kassas and Muhammad Khalid
Electronics 2021, 10(21), 2713; https://doi.org/10.3390/electronics10212713 - 7 Nov 2021
Cited by 11 | Viewed by 3538
Abstract
In this paper, the economic feasibility of installing small-scale solar photovoltaic (PV) systems at residential and commercial buildings is studied from an end-user perspective. Based on given scenarios, the best sizing methodology for solar PV system installation is proposed, focusing primarily on the minimum payback period for a given (rooftop) area available to the customer for solar PV installation. The strategy is demonstrated with the help of a case study using real-time monthly load profile data of residential as well as commercial customers and current market prices for solar PV modules and inverters. In addition, a sensitivity analysis has been carried out to examine the effectiveness of the net metering scheme for fairly high participation from end users. Since Saudi Arabia's Electricity and Co-generation Regulatory Authority (ECRA) has recently approved and published the net metering scheme for small-scale solar PV systems, allowing end users to generate and export surplus energy to the utility grid, the proposed approach has become vital, and its practical significance is justified with figures and graphs obtained through computer simulations. Full article
(This article belongs to the Special Issue Resilience-Oriented Smart Grid Systems)
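
A minimal simple-payback sketch for a rooftop PV system under a net-metering tariff is given below; it is illustrative only, and every price, yield, and tariff value is an invented assumption rather than data from the case study.

```python
# Invented assumptions for illustration only.
pv_size_kw        = 10.0     # installed PV capacity
capex_per_kw      = 1000.0   # installed cost, $/kW
specific_yield    = 1800.0   # annual yield, kWh per kW (sunny location)
self_use_fraction = 0.6      # share of PV energy consumed on site
retail_tariff     = 0.08     # $/kWh avoided for self-consumed energy
export_tariff     = 0.05     # $/kWh credited for exported energy (net metering)

annual_energy = pv_size_kw * specific_yield
annual_saving = (self_use_fraction * annual_energy * retail_tariff
                 + (1 - self_use_fraction) * annual_energy * export_tariff)
capex = pv_size_kw * capex_per_kw

payback_years = capex / annual_saving
print(f"simple payback period: {payback_years:.1f} years")
```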

14 pages, 4765 KiB  
Article
Octave-Band Four-Beam Antenna Arrays with Stable Beam Direction Fed by Broadband 4 × 4 Butler Matrix
by Andrzej Dudek, Piotr Kanios, Kamil Staszek, Slawomir Gruszczynski and Krzysztof Wincza
Electronics 2021, 10(21), 2712; https://doi.org/10.3390/electronics10212712 - 7 Nov 2021
Cited by 6 | Viewed by 2867
Abstract
A novel concept of four-beam antenna arrays operating in a one-octave frequency range that allows stable beam directions and beamwidths to be achieved is proposed. As shown, such radiation patterns can be obtained when radiating elements are appropriately spaced and fed by a broadband 4 × 4 Butler matrix with directional filters connected to its outputs. In this solution, broadband radiating elements are arranged in such a way that, for the lower and upper frequencies, two separate subarrays can be distinguished, each one consisting of identically arranged radiating elements. The subarrays are fed by a broadband Butler matrix at the output to which an appropriate feeding network based on directional filters is connected. These filters ensure smooth signal switching across the operational bandwidth between elements utilized at lower and higher frequency bands. Therefore, as shown, it is possible to control both beamwidths and beam directions of the resulting multi-beam antenna arrays. Moreover, two different concepts of the feeding network connected in between the Butler matrix and radiating elements for lowering the sidelobes are discussed. The theoretical analyses of the proposed antenna arrays are shown and confirmed by measurements of the developed two-antenna arrays consisting of eight and twelve radiating elements, operating in a 2–4 GHz frequency range. Full article
(This article belongs to the Section Microwave and Wireless Communications)

22 pages, 465 KiB  
Article
Dataset Generation for Development of Multi-Node Cyber Threat Detection Systems
by Jędrzej Bieniasz and Krzysztof Szczypiorski
Electronics 2021, 10(21), 2711; https://doi.org/10.3390/electronics10212711 - 7 Nov 2021
Cited by 3 | Viewed by 3225
Abstract
This paper presents a new approach to generate datasets for cyber threat research in a multi-node system. For this purpose, the proof-of-concept of such a system is implemented. The system will be used to collect unique datasets with examples of information hiding techniques. These techniques are not present in publicly available cyber threat detection datasets, while the cyber threats that use them represent an emerging cyber defense challenge worldwide. The network data were collected thanks to the development of a dedicated application that automatically generates random network configurations and runs scenarios of information hiding techniques. The generated datasets were used in the data-driven research workflow for cyber threat detection, including the generation of data representations (network flows), feature selection based on correlations, data augmentation of training datasets, and preparation of machine learning classifiers based on Random Forest and Multilayer Perceptron architectures. The presented results show the usefulness and correctness of the design process to detect information hiding techniques. The challenges and research directions to detect cyber deception methods are discussed in general in the paper. Full article
(This article belongs to the Special Issue Cybersecurity and Data Science)
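
One of the classifiers used in the workflow above is a Random Forest trained on network-flow features. The hedged sketch below shows such a classifier with scikit-learn on synthetic flow features; the feature set and distributions are invented and do not reproduce the paper's dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)

# Synthetic flow features (packet count, mean inter-arrival time, mean size):
# class 1 mimics flows carrying an information-hiding channel, class 0 is benign.
benign = rng.normal([200, 0.05, 800], [50, 0.01, 150], size=(500, 3))
covert = rng.normal([220, 0.08, 750], [50, 0.02, 150], size=(500, 3))
X = np.vstack([benign, covert])
y = np.array([0] * 500 + [1] * 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```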

19 pages, 7859 KiB  
Article
Method for the Analysis of Three-Phase Networks Containing Nonlinear Circuit Elements in View of an Efficient Power Flow Computation
by Claudiu Tufan and Iosif Vasile Nemoianu
Electronics 2021, 10(21), 2710; https://doi.org/10.3390/electronics10212710 - 7 Nov 2021
Cited by 6 | Viewed by 2036
Abstract
The present paper is devoted to applying the Hănțilă method for solving nonlinear three-phase circuits characterized by different reactance values on the three sequences (positive, negative and zero). Nonlinear elements, which are components of the circuit, are substituted by real voltage or current sources, whose values are an iteratively corrected function of the voltage across or the current through them, respectively. The analysis is carried out in the frequency domain and facilitates an easy evaluation of the power transfer on each harmonic. The paper presents numerical implementations of the method for two case studies. For validation, the results are compared against those obtained using the software LTspice in the time domain. Finally, the power flow on the harmonics and the overall power balance are analyzed. Full article
(This article belongs to the Section Circuit and Signal Processing)

13 pages, 1490 KiB  
Article
Investigating the Experience of Social Engineering Victims: Exploratory and User Testing Study
by Bilikis Banire, Dena Al Thani and Yin Yang
Electronics 2021, 10(21), 2709; https://doi.org/10.3390/electronics10212709 - 6 Nov 2021
Cited by 4 | Viewed by 3140
Abstract
The advent of mobile technologies and social network applications has led to an increase in malicious scams and social engineering (SE) attacks, which cause loss of money and breaches of personal information. Understanding how SE attacks spread can provide useful information for curbing them. Artificial intelligence (AI) has demonstrated efficacy in detecting SE attacks, but the acceptability of such a detection approach is yet to be investigated across users with different levels of SE awareness. This paper conducted two studies: (1) an exploratory study in which qualitative data were collected from 20 victims of SE attacks to inform the development of an AI-based tool for detecting fraudulent messages; and (2) a user testing study with 48 participants of different occupations to determine the detection tool's acceptability. Overall, six major themes emerged from the victims' experiences: reasons for falling for attacks; attack methods; advice on preventing attacks; detection methods; attack context; and victims' actions. The user testing study showed that the AI-based tool was accepted by all users irrespective of their occupation. The categories of users' occupations can be attributed to the level of SE awareness. Information security awareness should not be limited to organizational levels but should extend to social media platforms as public information. Full article
(This article belongs to the Special Issue Hybrid Developments in Cyber Security and Threat Analysis)

16 pages, 2667 KiB  
Article
Human Action Recognition of Spatiotemporal Parameters for Skeleton Sequences Using MTLN Feature Learning Framework
by Faisal Mehmood, Enqing Chen, Muhammad Azeem Akbar and Abeer Abdulaziz Alsanad
Electronics 2021, 10(21), 2708; https://doi.org/10.3390/electronics10212708 - 5 Nov 2021
Cited by 10 | Viewed by 2256
Abstract
Human action recognition (HAR) by skeleton data is considered a potential research aspect in computer vision. Three-dimensional HAR with skeleton data has been used commonly because of its effective and efficient results. Several models have been developed for learning spatiotemporal parameters from skeleton sequences. However, two critical problems exist: (1) previous skeleton sequences were created by connecting different joints with a static order; (2) earlier methods were not efficient enough to focus on valuable joints. Specifically, this study aimed to (1) demonstrate the ability of convolutional neural networks to learn spatiotemporal parameters of skeleton sequences from different frames of human action, and (2) to combine the process of all frames created by different human actions and fit in the spatial structure information necessary for action recognition, using multi-task learning networks (MTLNs). The results were significantly improved compared with existing models by executing the proposed model on an NTU RGB+D dataset, an SYSU dataset, and an SBU Kinetic Interaction dataset. We further implemented our model on noisy expected poses from subgroups of the Kinetics dataset and the UCF101 dataset. The experimental results also showed significant improvement using our proposed model. Full article

17 pages, 2130 KiB  
Article
Securing IoT Data Using Steganography: A Practical Implementation Approach
by Fatiha Djebbar
Electronics 2021, 10(21), 2707; https://doi.org/10.3390/electronics10212707 - 5 Nov 2021
Cited by 6 | Viewed by 3231
Abstract
Adding network connectivity to any “thing” can certainly provide great value, but it also brings along potential cybersecurity risks. To fully benefit from the Internet of Things “IoT” system’s capabilities, the validity and accuracy of transmitted data should be ensured. Due to the constrained environment of IoT devices, practical security implementation presents a great challenge. In this paper, we present a noise-resilient, low-overhead, lightweight steganography solution adequate for use in the IoT environment. The accuracy of hidden data is tested against corruption using multiple modulations and coding schemes (MCSs). Additive white Gaussian noise (AWGN) is added to the modulated data to simulate the noisy channel as well as several wireless technologies such as cellular, WiFi, and vehicular communications that are used between communicating IoT devices. The presented scheme is capable of hiding a high payload in audio signals (e.g., speech and music) with a low bit error rate (BER), high undetectability, low complexity, and low perceptibility. The proposed algorithm is evaluated using well-established performance evaluation techniques and has been demonstrated to be a practical candidate for the mass deployment of IoT devices. Full article
(This article belongs to the Special Issue Cyber Security for Internet of Things)
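
The scheme's robustness is assessed via the bit error rate of hidden data after AWGN is added to the modulated signal. The snippet below is a hedged stand-in that measures the empirical BER of BPSK-modulated bits over AWGN; BPSK and the chosen Eb/N0 values are assumptions, not the paper's set of modulations and coding schemes.

```python
import numpy as np

rng = np.random.default_rng(1)

def ber_bpsk_awgn(n_bits=100_000, ebn0_db=6.0):
    """Empirical bit error rate of BPSK-modulated hidden bits over AWGN."""
    bits = rng.integers(0, 2, n_bits)
    symbols = 2 * bits - 1                  # BPSK mapping: 0 -> -1, 1 -> +1
    ebn0 = 10 ** (ebn0_db / 10)
    noise_std = np.sqrt(1 / (2 * ebn0))     # unit symbol energy assumed
    received = symbols + noise_std * rng.normal(size=n_bits)
    decoded = (received > 0).astype(int)
    return np.mean(decoded != bits)

for ebn0_db in (0, 3, 6, 9):
    print(f"Eb/N0 = {ebn0_db:>2} dB -> BER = {ber_bpsk_awgn(ebn0_db=ebn0_db):.5f}")
```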

14 pages, 1531 KiB  
Article
Improving Text-to-Code Generation with Features of Code Graph on GPT-2
by Incheon Paik and Jun-Wei Wang
Electronics 2021, 10(21), 2706; https://doi.org/10.3390/electronics10212706 - 5 Nov 2021
Cited by 6 | Viewed by 6733
Abstract
Code generation, a very active application area of deep learning models for text, consists of two different fields: code-to-code and text-to-code. A recent approach, GraphCodeBERT, uses a code graph, called data flow, and showed a good performance improvement. Its base model architecture is bidirectional encoder representations from transformers (BERT), which uses the encoder part of a transformer. On the other hand, the generative pre-trained transformer (GPT), another transformer-based architecture, uses the decoder part and shows great performance in the multilayer perceptron model. In this study, we investigate the improvement of code graphs with several variants on GPT-2, referring to the abstract semantic tree used to collect the features of variables in the code. Here, we mainly focus on GPT-2 with additional code graph features that allow the model to learn the effect of the data flow. The experimental phase is divided into two parts: fine-tuning the existing GPT-2 model, and pre-training from scratch using code data. When we pre-train a new model from scratch with enough data, it produces a better result than using the code graph alone. Full article
(This article belongs to the Special Issue Advances in Data Mining and Knowledge Discovery)
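
For readers unfamiliar with the decoder-style generation that GPT-2 performs, the hedged sketch below runs the stock Hugging Face GPT-2 checkpoint on a code-style prompt; the prompt and decoding settings are placeholders, and the code-graph feature injection and pre-training described in the paper are not reproduced.

```python
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Stock GPT-2 checkpoint; the paper fine-tunes/pre-trains on code with extra
# code-graph features, which this sketch does not attempt to reproduce.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "# Python function that returns the factorial of n\ndef factorial(n):"
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(
    **inputs,
    max_new_tokens=48,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```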

25 pages, 7853 KiB  
Article
A New Hybrid Prime Code for OCDMA Network Multimedia Applications
by Morsy A. Morsy and Moustafa H. Aly
Electronics 2021, 10(21), 2705; https://doi.org/10.3390/electronics10212705 - 5 Nov 2021
Cited by 7 | Viewed by 1840
Abstract
This paper presents a new family of spreading code sequences called hybrid prime code (HPC), to be used as source code for the optical code division multiple access (OCDMA) network for large network capacity. The network capacity directly depends on the number of available code sequences provided and their correlation properties. Therefore, the proposed HPC is designed by combining two or more different code words belonging to two or more different prime numbers, which increases the number of code sequences generated. The code construction method allows the generation of different code sets, each with a different code length and weight, according to the number of prime numbers used. In addition, an incoherent pulse position modulation (PPM) OCDMA system based on the HPC is proposed. Furthermore, the bit error rate (BER) performance analysis is presented versus the received optical power and the number of active users, and the error vector magnitude (EVM) is calculated versus the optical signal-to-noise ratio. This work proves that using two prime numbers simultaneously generates far more codes than using prime numbers separately. The proposed code also achieved an OCDMA system capacity higher than systems using the optical orthogonal code (OOC) family, the modified prime code (MPC) family, or two code families based on separate prime numbers, at a BER below 10^−9, which is the optimum level. Full article
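
As background, the sketch below generates a classical prime code, the building block that hybrid prime codes combine across several primes: for a prime p there are p codewords of length p², each placing one pulse per p-chip block at position (i·j) mod p. Combining codewords from two or more primes, as the paper proposes, is not shown, and the choice p = 5 is only an example.

```python
import numpy as np

def prime_code(p):
    """Classical prime code over GF(p): p codewords, length p*p, weight p."""
    codes = np.zeros((p, p * p), dtype=int)
    for i in range(p):                 # codeword index
        for j in range(p):             # block index
            codes[i, j * p + (i * j) % p] = 1
    return codes

C = prime_code(5)
print("code length:", C.shape[1], " weight:", C.sum(axis=1)[0])
# In-phase cross-correlation between two distinct codewords stays low:
print("zero-shift cross-correlation of codes 1 and 2:", int(np.dot(C[1], C[2])))
```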

19 pages, 3726 KiB  
Article
Piecewise Parabolic Approximate Computation Based on an Error-Flattened Segmenter and a Novel Quantizer
by Mengyu An, Yuanyong Luo, Muhan Zheng, Yuxuan Wang, Hongxi Dong, Zhongfeng Wang, Chenglei Peng and Hongbing Pan
Electronics 2021, 10(21), 2704; https://doi.org/10.3390/electronics10212704 - 5 Nov 2021
Cited by 6 | Viewed by 1740
Abstract
This paper proposes a novel Piecewise Parabolic Approximate Computation method for hardware function evaluation, which mainly incorporates an error-flattened segmenter and an implementation quantizer. Under a required software maximum absolute error (MAE), the segmenter adaptively selects a minimum number of parabolas to approximate the objective function. By completely imitating the circuit’s behavior before actual implementation, the quantizer calculates the minimum quantization bit width to ensure a non-redundant fixed-point hardware architecture with an MAE of 1 unit of least precision (ulp), eliminating the iterative design time for the circuits. The method causes the number of segments to reach the theoretical limit, and has great advantages in the number of segments and the size of the look-up table (LUT). To prove the superiority of the proposed method, six common functions were implemented by the proposed method under TSMC-90 nm technology. Compared to the state-of-the-art piecewise quadratic approximation methods, the proposed method has advantages in the area with roughly the same delay. Furthermore, a unified function-evaluation unit was also implemented under TSMC-90 nm technology. Full article
(This article belongs to the Section Computer Science & Engineering)
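
A hedged software sketch of the error-flattened idea is given below: each segment is grown greedily while a least-squares parabola still meets the target maximum absolute error. The target function (1/x on [1, 2]) and the 2^-12 error budget are illustrative assumptions, and the fixed-point quantizer from the paper is not modeled.

```python
import numpy as np

def parabolic_segments(f, lo, hi, mae_target, n_grid=4097):
    """Greedy error-flattened segmentation: grow each segment while a
    least-squares parabola keeps its max absolute error within mae_target."""
    x = np.linspace(lo, hi, n_grid)
    y = f(x)

    def fit(i, j):                       # fit over grid points i..j inclusive
        deg = min(2, j - i)              # degrade gracefully for tiny tails
        c = np.polyfit(x[i:j + 1], y[i:j + 1], deg)
        err = np.max(np.abs(np.polyval(c, x[i:j + 1]) - y[i:j + 1]))
        return c, err

    segments, start = [], 0
    while start < n_grid - 1:
        end = min(start + 2, n_grid - 1)          # 3 points always fit exactly
        best_end, best_c = end, fit(start, end)[0]
        while end < n_grid - 1:                   # grow while the budget holds
            end = min(n_grid - 1, end + max(1, (end - start) // 4))
            c, err = fit(start, end)
            if err > mae_target:
                break
            best_end, best_c = end, c
        segments.append((x[start], x[best_end], best_c))
        start = best_end                          # adjacent segments share a knot
    return segments

# Illustrative target: 1/x on [1, 2] with an assumed 2^-12 error budget.
segs = parabolic_segments(lambda v: 1.0 / v, 1.0, 2.0, mae_target=2 ** -12)
print("number of parabolic segments:", len(segs))
```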

21 pages, 4470 KiB  
Article
Integrating Vehicle Positioning and Path Tracking Practices for an Autonomous Vehicle Prototype in Campus Environment
by Jui-An Yang and Chung-Hsien Kuo
Electronics 2021, 10(21), 2703; https://doi.org/10.3390/electronics10212703 - 5 Nov 2021
Cited by 8 | Viewed by 2762
Abstract
This paper presents the implementation of an autonomous electric vehicle (EV) project on the National Taiwan University of Science and Technology (NTUST) campus in Taiwan. The aim of this work was to integrate two important practices of realizing an autonomous vehicle in a campus environment: vehicle positioning and path tracking. Such a project helps students to learn and practice key technologies of autonomous vehicles conveniently. Therefore, a laboratory-made EV was equipped with real-time kinematic GPS (RTK-GPS) to provide centimeter position accuracy, and model predictive control (MPC) was proposed to perform path tracking. Nevertheless, the RTK-GPS exhibited some positioning robustness concerns in practical application, such as a low update rate, signal obstruction, signal drift, and network instability. To solve this problem, a multisensory fusion approach using an unscented Kalman filter (UKF) was utilized to improve the vehicle positioning performance by further considering an inertial measurement unit (IMU) and wheel odometry. On the other hand, MPC is usually used to control autonomous EVs, but the determination of MPC parameters is a challenging task. Hence, reinforcement learning (RL) was utilized to generalize the pre-trained datum value for the determination of MPC parameters in practice. To evaluate the performance of the RL-based MPC, software simulations using MATLAB and a laboratory-made, full-scale electric vehicle were arranged for experiments and validation. On a 199.27 m campus loop path, the estimated travel distance error was 0.82% with the UKF. The MPC parameters generated by RL achieved a tracking performance of 0.227 m RMSE in the path tracking experiments, better than that of human-tuned MPC parameters. Full article
(This article belongs to the Special Issue Unmanned Vehicles and Intelligent Robotic Alike Systems)
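
The 0.227 m figure above is a root-mean-square path-tracking error. The snippet below is a hedged sketch of one common way to compute such a metric, taking the distance from each driven sample to its nearest reference waypoint; the reference path and the noisy driven trajectory are invented.

```python
import numpy as np

def path_tracking_rmse(reference, driven):
    """RMS of the distance from each driven point to its nearest reference waypoint."""
    reference, driven = np.asarray(reference, float), np.asarray(driven, float)
    d = np.linalg.norm(driven[:, None, :] - reference[None, :, :], axis=2)
    nearest = d.min(axis=1)            # cross-track distance per driven sample
    return np.sqrt(np.mean(nearest ** 2))

# Invented reference waypoints and a slightly offset driven trajectory (metres).
s = np.linspace(0, 20, 200)
reference = np.c_[s, 0.1 * s ** 1.2]
driven = reference + np.random.default_rng(3).normal(0, 0.2, reference.shape)

print(f"path tracking RMSE = {path_tracking_rmse(reference, driven):.3f} m")
```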

26 pages, 1235 KiB  
Article
Cloud Storage Service Architecture Providing the Eventually Consistent Totally Ordered Commit History of Distributed Key-Value Stores for Data Consistency Verification
by Beom-Heyn Kim and Young Yoon
Electronics 2021, 10(21), 2702; https://doi.org/10.3390/electronics10212702 - 5 Nov 2021
Viewed by 2493
Abstract
Cloud storage services are one of the most popular cloud computing service types these days. Various cloud storage services such as Amazon S3, DropBox, Google Drive, and Microsoft OneDrive currently support billions of users. Nevertheless, data consistency of the underlying distributed key-value store of cloud storage services remains a serious concern, making potential customers of cloud services hesitate to migrate their data to the cloud. Researchers have explored how to allow clients to verify the behavior of untrusted cloud storage services with respect to consistency models. However, previous proposals are limited because they rely on a strongly consistent history server to provide a totally ordered history for clients. This work presents Relief, a novel cloud storage service exposing an eventually consistent totally ordered commit history of the underlying distributed key-value store to enable client-side data consistency verification for various consistency models. By empirically evaluating our system, we demonstrate that Relief is an efficient solution to overcome the limitation of previous approaches. Full article
(This article belongs to the Special Issue Cloud Database Systems)

24 pages, 2612 KiB  
Article
Noninvasive Detection of Respiratory Disorder Due to COVID-19 at the Early Stages in Saudi Arabia
by Wadii Boulila, Syed Aziz Shah, Jawad Ahmad, Maha Driss, Hamza Ghandorh, Abdullah Alsaeedi, Mohammed Al-Sarem and Faisal Saeed
Electronics 2021, 10(21), 2701; https://doi.org/10.3390/electronics10212701 - 5 Nov 2021
Cited by 5 | Viewed by 2413
Abstract
The Kingdom of Saudi Arabia has suffered from COVID-19 disease as part of the global pandemic due to severe acute respiratory syndrome coronavirus 2. The economy of Saudi Arabia also suffered a heavy impact. Several measures were taken to help mitigate its impact and stimulate the economy. In this context, we present a safe and secure WiFi-sensing-based COVID-19 monitoring system exploiting commercially available low-cost wireless devices that can be deployed in different indoor settings within Saudi Arabia. We extracted different activities of daily living and respiratory rates from ubiquitous WiFi signals in terms of channel state information (CSI) and secured them from unauthorized access through permutation and diffusion with multiple substitution boxes using chaos theory. The experiments were performed on healthy participants. We used the variances of the amplitude information of the CSI data and evaluated their security using several security parameters such as the correlation coefficient, mean-squared error (MSE), peak-signal-to-noise ratio (PSNR), entropy, number of pixel change rate (NPCR), and unified average change intensity (UACI). These security metrics, for example, lower correlation and higher entropy, indicate stronger security of the proposed encryption method. Moreover, the NPCR and UACI values were higher than 99% and 30, respectively, which also confirmed the security strength of the encrypted information. Full article
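
NPCR and UACI are standard diffusion metrics for encrypted data. The hedged sketch below applies their usual definitions to two 8-bit arrays standing in for encrypted frames; the random arrays are placeholders, not encrypted CSI data from the study.

```python
import numpy as np

def npcr_uaci(c1, c2):
    """NPCR: share of positions whose cipher values differ (in %).
       UACI: mean absolute difference normalised by 255 (in %)."""
    c1, c2 = np.asarray(c1, np.int64), np.asarray(c2, np.int64)
    npcr = 100.0 * np.mean(c1 != c2)
    uaci = 100.0 * np.mean(np.abs(c1 - c2) / 255.0)
    return npcr, uaci

# Placeholder 8-bit "cipher" arrays standing in for encrypted CSI frames.
rng = np.random.default_rng(7)
c1 = rng.integers(0, 256, size=(64, 64))
c2 = rng.integers(0, 256, size=(64, 64))
npcr, uaci = npcr_uaci(c1, c2)
print(f"NPCR = {npcr:.2f}%  UACI = {uaci:.2f}%")
```

For well-diffused ciphers, NPCR close to 99.6% and UACI around 33% are the values usually reported, consistent with the thresholds quoted in the abstract.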

20 pages, 4007 KiB  
Article
RETRACTED: Thyristor Aging-State-Evaluation Method Based on State Information and Tensor Domain Theory
by Zhaoyu Lei, Jianyi Guo, Yingfu Tian, Jiemin Yang, Yinwu Xiong, Jie Zhang, Ben Shang and Youping Fan
Electronics 2021, 10(21), 2700; https://doi.org/10.3390/electronics10212700 - 5 Nov 2021
Cited by 2 | Viewed by 2098 | Retraction
Abstract
The thyristor is the key device for the converter of the ultra-high-voltage DC (UHVDC) project to realize AC–DC conversion. The reliability of thyristors is directly related to the safe operation of the UHVDC transmission system. Due to the complex operating environment of the thyristor, there are many interrelated parameters that may affect the aging state of thyristors. To extract useful information from the massive high-dimensional data and further obtain the aging state of thyristors, a supervised tensor domain classification (STDC) method based on the adaptive synthetic sampling method, the gradient-boosting decision tree, and tensor domain theory is proposed in this paper. Firstly, the algorithm applies the continuous medium theory to analogize the aging state points of the thyristor to the mass points in the continuous medium. Then, the algorithm applies the concept of the tensor domain to identify the aging state of the thyristor and to transform the original state-identification problem into the state classification surface determination of the tensor domain. Secondly, a temporal fuzzy clustering algorithm is applied to realize automatic positioning of the classification surface of each tensor sub-domain. Furthermore, to solve the problem of unbalanced sample size between aging class data and normal class data in the state-identification domain, the improved adaptive synthetic sampling algorithm is applied to preprocess the data. The gradient-boosting decision tree algorithm is applied to solve the multi-classification problem of the thyristor. Finally, the comparison between the algorithm proposed and the conventional algorithm is performed through the field-test data provided by the CSG EHV Power Transmission Company of China's Southern Power Grid. It is verified that the evaluation method proposed has higher recognition accuracy and can effectively classify the thyristor states. Full article
(This article belongs to the Section Microelectronics)

14 pages, 2464 KiB  
Article
Coordinated Control System between Grid–VSC and a DC Microgrid with Hybrid Energy Storage System
by Miguel Montilla-DJesus, Édinson Franco-Mejía, Edwin Rivas Trujillo, José Luis Rodriguez-Amenedo and Santiago Arnaltes
Electronics 2021, 10(21), 2699; https://doi.org/10.3390/electronics10212699 - 4 Nov 2021
Cited by 3 | Viewed by 2179
Abstract
Direct current microgrids (DCMGs) are currently presented as an alternative solution for small systems that feed sensitive electrical loads into DC. According to the scientific literature, DCMG maintains good voltage regulation. However, when the system is in islanded mode, very pronounced voltage variations are presented, compromising the system’s ability to achieve reliable and stable energy management. Therefore, the authors propose a solution, connecting the electrical network through a grid-tied voltage source converter (GVSC) in order to reduce voltage variations. A coordinated control strategy between the DCMG and GVSC is proposed to regulate the DC voltage and find a stable power flow between the various active elements, which feed the load. The results show that the control strategy between the GVSC and DCMG, when tested under different disturbances, improves the performance of the system, making it more reliable and stable. Furthermore, the GVSC supports the AC voltage at the point of common coupling (PCC) without reducing the operating capacity of the DCMG and without exceeding even its most restrictive limit. All simulations were carried out in MATLAB 2020. Full article
(This article belongs to the Special Issue Control of Microgrids)

16 pages, 1879 KiB  
Article
A Novel Low-Area Point Multiplication Architecture for Elliptic-Curve Cryptography
by Muhammad Rashid, Mohammad Mazyad Hazzazi, Sikandar Zulqarnain Khan, Adel R. Alharbi, Asher Sajid and Amer Aljaedi
Electronics 2021, 10(21), 2698; https://doi.org/10.3390/electronics10212698 - 4 Nov 2021
Cited by 8 | Viewed by 2167
Abstract
This paper presents a Point Multiplication (PM) architecture of Elliptic-Curve Cryptography (ECC) over GF(2163) with a focus on the optimization of hardware resources and latency at the same time. The hardware resources are reduced with the use of a bit-serial (traditional schoolbook) multiplication method. Similarly, the latency is optimized with the reduction in a critical path using pipeline registers. To cope with the pipelining, we propose to reschedule point addition and double instructions, required for the computation of a PM operation in ECC. Subsequently, the proposed architecture over GF(2163) is modeled in Verilog Hardware Description Language (HDL) using Vivado Design Suite. To provide a fair performance evaluation, we synthesize our design on various FPGA (field-programmable gate array) devices. These FPGA devices are Virtex-4, Virtex-5, Virtex-6, Virtex-7, Spartan-7, Artix-7, and Kintex-7. The lowest area (433 FPGA slices) is achieved on Spartan-7. The highest speed is realized on Virtex-7, where our design achieves 391 MHz clock frequency and requires 416 μs for one PM computation (latency). For power, the lowest values are achieved on the Artix-7 (56 μW) and Kintex-7 (61 μW) devices. A ratio of throughput over area value of 4.89 is reached for Virtex-7. Our design outperforms most recent state-of-the-art solutions (in terms of area) with an overhead of latency. Full article
(This article belongs to the Section Computer Science & Engineering)
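
The design above builds point multiplication on a bit-serial (schoolbook) field multiplier. As a hedged software illustration only, the snippet below implements MSB-first bit-serial multiplication in GF(2^163) using Python integers and the NIST B-163 reduction polynomial x^163 + x^7 + x^6 + x^3 + 1 (an assumption, since the paper does not state its polynomial here); the pipelined hardware datapath and the rescheduled point addition/doubling instructions are not modeled.

```python
M = 163
# Reduction polynomial x^163 + x^7 + x^6 + x^3 + 1 (NIST B-163/K-163 field; assumed).
R = (1 << 7) | (1 << 6) | (1 << 3) | 1

def gf2m_mul(a, b, m=M, r=R):
    """Bit-serial (schoolbook) multiplication in GF(2^m), MSB of 'a' first.
    Each iteration shifts the accumulator, reduces, and conditionally adds 'b'."""
    acc = 0
    for i in range(m - 1, -1, -1):
        acc <<= 1
        if (acc >> m) & 1:          # overflow bit -> reduce by the field polynomial
            acc = (acc ^ (1 << m)) ^ r
        if (a >> i) & 1:
            acc ^= b
    return acc

# Quick sanity checks (x * 1 = x, and x * x = x^2).
x = 0b10                            # the field element 'x'
assert gf2m_mul(x, 1) == x
assert gf2m_mul(x, x) == 0b100
print(hex(gf2m_mul(0x5A5A5A5A5A5A, 0x123456789ABC)))
```

A hardware bit-serial multiplier follows the same one-bit-per-cycle schedule, which is what keeps the slice count low at the cost of latency.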
