Electronics, Volume 13, Issue 16 (August-2 2024) – 246 articles

Cover Story: Interest in Unmanned Aerial Vehicles (UAVs) has grown considerably in recent years, especially for purposes beyond those for which they were initially used (civil and military). The present work proposes an intelligent automatic charging system (Intelligent Charging Network) built with PC Engines Alix boards and an experimental drone prototype based on a Raspberry Pi 3 and a Navio 2 module. An efficient Intelligent Charging Network–drone communication system and a data transmission system are also proposed, which allow images acquired by the drone to be transferred directly to the storage server for subsequent processing, and the flight plan to be transmitted from the QGroundControl application to the drone.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
24 pages, 3548 KiB  
Article
Adapting CLIP for Action Recognition via Dual Semantic Supervision and Temporal Prompt Reparameterization
by Lujuan Deng, Jieqing Tan and Fangmei Liu
Electronics 2024, 13(16), 3348; https://doi.org/10.3390/electronics13163348 - 22 Aug 2024
Abstract
The contrastive vision–language pre-trained model CLIP, driven by large-scale open-vocabulary image–text pairs, has recently demonstrated remarkable zero-shot generalization capabilities in diverse downstream image tasks, and numerous models following the “image pre-training followed by fine-tuning” paradigm have accordingly exhibited promising results on standard video benchmarks. However, as models scale up, fully fine-tuning them for specific tasks becomes prohibitive in terms of training and storage. In this work, we propose a novel method that adapts CLIP to the video domain for efficient recognition without destroying the original pre-trained parameters. Specifically, we introduce temporal prompts to enable reasoning about the dynamic content of videos for pre-trained models that lack temporal cues. Then, by replacing the direct learning of prompt vectors with a lightweight reparameterization encoder, the model can make domain-specific adjustments and learn more generalizable representations. Furthermore, we predefine a Chinese label dictionary to enhance video representation through the co-supervision of Chinese and English semantics. Extensive experiments on video action recognition benchmarks show that our method achieves competitive or even better performance than most existing methods, with fewer trainable parameters, in both general and few-shot recognition scenarios. Full article

24 pages, 15090 KiB  
Article
Multi-Agent Collaborative Path Planning Algorithm with Multiple Meeting Points
by Jianlin Mao, Zhigang He, Dayan Li, Ruiqi Li, Shufan Zhang and Niya Wang
Electronics 2024, 13(16), 3347; https://doi.org/10.3390/electronics13163347 - 22 Aug 2024
Abstract
Traditional multi-agent path planning algorithms often lead to path overlap and excessive energy consumption when dealing with cooperative tasks due to the single-agent-single-task configuration. For this reason, the “many-to-one” cooperative planning method has been proposed, which, although improved, still faces challenges in the vast search space for meeting points and unreasonable task handover locations. This paper proposes the Cooperative Dynamic Priority Safe Interval Path Planning with a multi-meeting-point and single-meeting-point solving mode switching (Co-DPSIPPms) algorithm to achieve multi-agent path planning with task handovers at multiple or single meeting points. First, the initial priority is set based on the positional relationships among agents within the cooperative group, and the improved Fermat point method is used to locate multiple meeting points quickly. Second, considering that agents must pick up sub-tasks or conduct task handovers midway, a segmented path planning strategy is proposed to ensure that cooperative agents can efficiently and accurately complete task handovers. Finally, an automatic switching strategy between multi-meeting-point and single-meeting-point solving modes is designed to ensure the algorithm’s success rate. Tests show that Co-DPSIPPms outperforms existing algorithms in 1-to-1 and m-to-1 cooperative tasks, demonstrating its efficiency and practicality. Full article
(This article belongs to the Special Issue Path Planning for Mobile Robots, 2nd Edition)
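The Fermat point that the abstract's meeting-point search builds on is, for several agents, the geometric median of their positions: the point minimizing total travel distance. A minimal sketch of the classical computation via Weiszfeld's iteration (not the paper's improved variant; the function name and coordinates are illustrative):

```python
import math

def geometric_median(points, iters=100, eps=1e-9):
    """Weiszfeld iteration for the Fermat point (geometric median):
    the meeting point minimizing total travel distance to all agents."""
    # start from the centroid
    x = sum(p[0] for p in points) / len(points)
    y = sum(p[1] for p in points) / len(points)
    for _ in range(iters):
        num_x = num_y = denom = 0.0
        for px, py in points:
            d = math.hypot(x - px, y - py)
            if d < eps:          # iterate landed on an agent position
                return (x, y)
            w = 1.0 / d          # inverse-distance weighting
            num_x += w * px
            num_y += w * py
            denom += w
        x, y = num_x / denom, num_y / denom
    return (x, y)

# Three agents at the corners of an isosceles triangle; their Fermat
# point lies on the axis of symmetry, below the centroid of the apex.
agents = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]
meet = geometric_median(agents)
```

For this symmetric layout the meeting point sits at x = 2 and y = 2/√3, where the three directions to the agents meet at 120° angles.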

23 pages, 1445 KiB  
Article
Dynamic Edge-Based High-Dimensional Data Aggregation with Differential Privacy
by Qian Chen, Zhiwei Ni, Xuhui Zhu, Moli Lyu, Wentao Liu and Pingfan Xia
Electronics 2024, 13(16), 3346; https://doi.org/10.3390/electronics13163346 - 22 Aug 2024
Abstract
Edge computing enables efficient data aggregation for services like data sharing and analysis in distributed IoT applications. However, uploading dynamic high-dimensional data to an edge server for efficient aggregation is challenging, and directly uploading such data carries a significant risk of privacy leakage. Therefore, we propose an edge-based differential privacy data aggregation method leveraging progressive UMAP with a dynamic time window based on LSTM (EDP-PUDL). Firstly, a model of the dynamic time window based on a long short-term memory (LSTM) network was developed to divide the dynamic data. Then, progressive uniform manifold approximation and projection (UMAP) with differential privacy was performed to reduce the dimension of the window data while preserving privacy. The privacy budget, which determines the added DP noise, was set according to the data volume and each attribute’s Shapley value. Finally, the privacy analysis and experimental comparisons demonstrated that EDP-PUDL ensures user privacy while achieving superior aggregation efficiency and availability compared to other algorithms used for dynamic high-dimensional data aggregation. Full article
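EDP-PUDL's exact noise calibration is not given in the abstract, but the standard way a per-attribute privacy budget becomes DP noise is the Laplace mechanism, with the budget split in proportion to attribute importance (the paper uses Shapley values; any non-negative weights work for this sketch). Function names and numbers below are illustrative:

```python
import random

def split_budget(total_epsilon, weights):
    """Divide a total privacy budget across attributes in proportion
    to importance weights (e.g., Shapley values)."""
    s = sum(weights)
    return [total_epsilon * w / s for w in weights]

def laplace_mechanism(value, sensitivity, epsilon, rng):
    """Add Laplace(0, sensitivity/epsilon) noise to a value.
    The difference of two exponential variates is Laplace-distributed."""
    scale = sensitivity / epsilon
    return value + rng.expovariate(1 / scale) - rng.expovariate(1 / scale)

rng = random.Random(42)
eps_per_attr = split_budget(1.0, [0.5, 0.3, 0.2])  # budget split by weight
noisy = [laplace_mechanism(10.0, 1.0, e, rng) for e in eps_per_attr]
```

A smaller per-attribute epsilon yields a larger noise scale, so less important attributes (smaller weights) are perturbed more.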

15 pages, 270 KiB  
Article
Bi-Level Orthogonal Multi-Teacher Distillation
by Shuyue Gong and Weigang Wen
Electronics 2024, 13(16), 3345; https://doi.org/10.3390/electronics13163345 - 22 Aug 2024
Abstract
Multi-teacher knowledge distillation is a powerful technique that leverages diverse information sources from multiple pre-trained teachers to enhance student model performance. However, existing methods often overlook the challenge of effectively transferring knowledge to weaker student models. To address this limitation, we propose BOMD (Bi-level Optimization for Multi-teacher Distillation), a novel approach that combines bi-level optimization with multiple orthogonal projections. Our method employs orthogonal projections to align teacher feature representations with the student’s feature space while preserving structural properties. This alignment is further reinforced through a dedicated feature alignment loss. Additionally, we utilize bi-level optimization to learn optimal weighting factors for combining knowledge from heterogeneous teachers, treating the weights as upper-level variables and the student’s parameters as lower-level variables. Extensive experiments on multiple benchmark datasets demonstrate the effectiveness and flexibility of BOMD. Our method achieves state-of-the-art performance on the CIFAR-100 benchmark for multi-teacher knowledge distillation across diverse scenarios, consistently outperforming existing approaches. BOMD shows significant improvements for both homogeneous and heterogeneous teacher ensembles, even when distilling to compact student models. Full article
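The key property behind BOMD's orthogonal projections is that an orthogonal map preserves norms and angles, so the teacher's feature structure survives the alignment. A minimal 2-D stand-in (a rotation matrix is the simplest orthogonal map; the alignment loss here is plain MSE, a hypothetical stand-in for the paper's loss):

```python
import math

def rotation(theta):
    """A 2x2 rotation matrix: the simplest orthogonal projection."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def apply(mat, vec):
    """Matrix-vector product for small dense matrices."""
    return [sum(m * v for m, v in zip(row, vec)) for row in mat]

def alignment_loss(teacher_feats, student_feats, proj):
    """Mean squared error between projected teacher features and
    student features -- a stand-in for a feature alignment loss."""
    total, n = 0.0, 0
    for t, s in zip(teacher_feats, student_feats):
        p = apply(proj, t)
        total += sum((pi - si) ** 2 for pi, si in zip(p, s))
        n += len(s)
    return total / n

# Orthogonal maps preserve vector norms, so structural properties
# of the teacher's feature space carry over to the student's space.
proj = rotation(math.pi / 4)
v = [3.0, 4.0]
pv = apply(proj, v)
```

When the student's features exactly match the projected teacher features, the alignment loss vanishes; training drives the student toward that regime.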

21 pages, 3115 KiB  
Article
Phishing Webpage Detection via Multi-Modal Integration of HTML DOM Graphs and URL Features Based on Graph Convolutional and Transformer Networks
by Jun-Ho Yoon, Seok-Jun Buu and Hae-Jung Kim
Electronics 2024, 13(16), 3344; https://doi.org/10.3390/electronics13163344 - 22 Aug 2024
Abstract
Detecting phishing webpages is a critical task in the field of cybersecurity, with significant implications for online safety and data protection. Traditional methods have primarily relied on analyzing URL features, which can be limited in capturing the full context of phishing attacks. In this study, we propose an innovative approach that integrates HTML DOM graph modeling with URL feature analysis using advanced deep learning techniques. The proposed method leverages Graph Convolutional Networks (GCNs) to model the structure of HTML DOM graphs, combined with Convolutional Neural Networks (CNNs) and Transformer Networks to capture the character and word sequence features of URLs, respectively. These multi-modal features are then integrated using a Transformer network, which is adept at selectively capturing the interdependencies and complementary relationships between different feature sets. We evaluated our approach on a real-world dataset comprising URL and HTML DOM graph data collected from 2012 to 2024. This dataset includes over 80 million nodes and edges, providing a robust foundation for testing. Our method demonstrated a significant improvement in performance, achieving a 7.03 percentage point increase in classification accuracy compared to state-of-the-art techniques. Additionally, we conducted ablation tests to further validate the effectiveness of individual features in our model. The results validate the efficacy of integrating HTML DOM structure and URL features using deep learning. Our framework significantly enhances phishing detection capabilities, providing a more accurate and comprehensive solution to identifying malicious webpages. Full article
(This article belongs to the Special Issue Network Security and Cryptography Applications)

35 pages, 1125 KiB  
Review
Review of Smart-Home Security Using the Internet of Things
by George Vardakis, George Hatzivasilis, Eleftheria Koutsaki and Nikos Papadakis
Electronics 2024, 13(16), 3343; https://doi.org/10.3390/electronics13163343 - 22 Aug 2024
Cited by 1
Abstract
As the Internet of Things (IoT) continues to revolutionize the way we interact with our living spaces, the concept of smart homes has become increasingly prevalent. However, along with the convenience and connectivity offered by IoT-enabled devices in smart homes comes a range of security challenges. This paper explores the landscape of smart-home security. In contrast to similar surveys, this study also examines the particularities of popular categories of smart devices, like home assistants, TVs, AR/VR, locks, sensors, etc. It examines various security threats and vulnerabilities inherent in smart-home ecosystems, including unauthorized access, data breaches, and device tampering. Additionally, the paper discusses existing security mechanisms and protocols designed to mitigate these risks, such as encryption, authentication, and intrusion-detection systems. Furthermore, it highlights the importance of user awareness and education in maintaining the security of smart-home environments. Finally, the paper proposes future research directions and recommendations for enhancing smart-home security with IoT, including the development of robust security best practices and standards, improved device authentication methods, and more effective intrusion-detection techniques. By addressing these challenges, the potential of IoT-enabled smart homes to enhance convenience and efficiency while ensuring privacy, security, and cyber-resilience can be realized. Full article

20 pages, 2056 KiB  
Article
A Deep Learning Approach for Fault-Tolerant Data Fusion Applied to UAV Position and Orientation Estimation
by Majd Saied, Abbas Mishi, Clovis Francis and Ziad Noun
Electronics 2024, 13(16), 3342; https://doi.org/10.3390/electronics13163342 - 22 Aug 2024
Abstract
This work introduces a novel fault-tolerance technique for data fusion in Unmanned Aerial Vehicles (UAVs), designed to address sensor faults through a deep learning-based framework. Unlike traditional methods that rely on hardware redundancy, our approach leverages Long Short-Term Memory (LSTM) networks for state estimation and a moving average (MA) algorithm for fault detection. The novelty of our technique lies in its dual strategy: utilizing LSTMs to analyze residuals and detect errors, while the MA algorithm identifies faulty sensors by monitoring variations in sensor data. This method allows for effective error correction and system recovery by replacing faulty measurements with reliable ones, eliminating the need for a fault-free prediction model. The approach has been validated through offline testing on real sensor data from a hexarotor UAV with simulated faults, demonstrating its efficacy in maintaining robust UAV operations without resorting to redundant hardware solutions. Full article
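The detection side of the abstract's dual strategy reduces to thresholding a smoothed residual between each sensor reading and the model's state estimate. A minimal sketch (the window size and threshold are illustrative, and a plain list stands in for the LSTM's estimates):

```python
def moving_average(values, window):
    """Simple moving average over a fixed-size trailing window."""
    out = []
    for i in range(len(values)):
        lo = max(0, i - window + 1)
        chunk = values[lo:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def detect_faults(measurements, predictions, window=5, threshold=1.0):
    """Flag sample indices whose smoothed residual exceeds the threshold.

    `predictions` stands in for the learned state estimates; the
    residual is the gap between the sensor reading and the estimate.
    """
    residuals = [abs(m - p) for m, p in zip(measurements, predictions)]
    smoothed = moving_average(residuals, window)
    return [i for i, r in enumerate(smoothed) if r > threshold]

# A healthy sensor tracks the estimate; a stuck sensor drifts away.
estimates = [float(i) for i in range(20)]
readings = estimates[:10] + [9.0] * 10   # sensor freezes at t = 10
faulty = detect_faults(readings, estimates, window=3, threshold=2.0)
```

Once an index is flagged, the faulty measurement can be replaced by the estimate, which is the recovery step the abstract describes.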

34 pages, 2908 KiB  
Article
A Hybrid Contrast and Texture Masking Model to Boost High Efficiency Video Coding Perceptual Rate-Distortion Performance
by Javier Ruiz Atencia, Otoniel López-Granado, Manuel Pérez Malumbres, Miguel Martínez-Rach, Damian Ruiz Coll, Gerardo Fernández Escribano and Glenn Van Wallendael
Electronics 2024, 13(16), 3341; https://doi.org/10.3390/electronics13163341 - 22 Aug 2024
Abstract
As most videos are destined for human perception, many techniques have been designed to improve video coding based on how the human visual system perceives video quality. In this paper, we propose the use of two perceptual coding techniques, namely contrast masking and texture masking, jointly operating under the High Efficiency Video Coding (HEVC) standard. These techniques aim to improve the subjective quality of the reconstructed video at the same bit rate. For contrast masking, we propose the use of a dedicated weighting matrix for each block size (from 4×4 up to 32×32), unlike the HEVC standard, which defines only an 8×8 weighting matrix that is upscaled to build the 16×16 and 32×32 weighting matrices (a 4×4 weighting matrix is not supported). Our approach achieves average Bjøntegaard Delta-Rate (BD-rate) gains of between 2.5% and 4.48%, depending on the perceptual metric and coding mode used. On the other hand, we propose a novel texture masking scheme based on the classification of each coding unit to provide an over-quantization depending on the coding unit’s texture level. Thus, for each coding unit, its mean directional variance features are computed to feed a support vector machine model that predicts the texture type (plane, edge, or texture). According to this classification, the block’s energy, the type of coding unit, and its size, an over-quantization value is computed as a QP offset (DQP) to be applied to the coding unit. By applying both techniques in the HEVC reference software, an overall average BD-rate gain of 5.79% is achieved, proving their complementarity. Full article
(This article belongs to the Special Issue Recent Advances in Image/Video Compression and Coding)
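The upscaling the abstract contrasts itself with derives the larger HEVC weighting matrices from the 8×8 list by coefficient replication (DC handling and signaling details omitted here). A minimal sketch with an illustrative base matrix:

```python
def upscale_matrix(mat, factor):
    """Upscale a quantization weighting matrix by replicating each
    coefficient into a factor x factor block, as HEVC does to derive
    the 16x16 and 32x32 lists from the 8x8 one."""
    n = len(mat)
    return [[mat[i // factor][j // factor] for j in range(n * factor)]
            for i in range(n * factor)]

# Illustrative 8x8 base weighting matrix (not HEVC's default values).
base8 = [[16 + i + j for j in range(8)] for i in range(8)]
m16 = upscale_matrix(base8, 2)   # each entry covers a 2x2 block
m32 = upscale_matrix(base8, 4)   # each entry covers a 4x4 block
```

The paper's dedicated per-size matrices replace exactly this replication step, which is why a 4×4 matrix (unsupported by the standard's derivation) can be added.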

22 pages, 7018 KiB  
Article
PAN: Improved PointNet++ for Pavement Crack Information Extraction
by Jiakai Fan, Weidong Song, Jinhe Zhang, Shangyu Sun, Guohui Jia and Guang Jin
Electronics 2024, 13(16), 3340; https://doi.org/10.3390/electronics13163340 - 22 Aug 2024
Abstract
Maintenance and repair of expressways are becoming increasingly important due to the growing frequency of their use. Accurate pavement crack information extraction helps with routine maintenance and reduces the risk of traffic accidents. Traditional 2D crack image detection methods have limitations and cannot effectively obtain depth information. Three-dimensional crack extraction from 3D point clouds has become a new solution that can capture pavement crack information more comprehensively and accurately. However, existing algorithms are not effective at extracting crack features due to the varied and irregular shapes and sizes of pavement cracks and interference from the external environment. To solve this, a new method for detecting pavement cracks in point clouds, namely point attention net (PAN), is herein proposed. It uses a two-branch attention fusion module to focus on spatial and feature information in the point cloud and to capture features of crack points at different scales. It also uses the Poly Loss function to address the imbalance between foreground and background points in pavement point cloud data. Experiments on the LNTU-RDD-LiDAR dataset were carried out to verify the effectiveness of the proposed method. Compared with the traditional method and the latest point cloud segmentation techniques, the performance indices of mIoU, Acc, F1, and Rec achieved significant improvements, reaching 75.4%, 91.5%, 75.4%, and 67.1%, respectively. Full article
(This article belongs to the Special Issue Fault Detection Technology Based on Deep Learning)

33 pages, 1322 KiB  
Review
Outlier Detection in Streaming Data for Telecommunications and Industrial Applications: A Survey
by Roland N. Mfondoum, Antoni Ivanov, Pavlina Koleva, Vladimir Poulkov and Agata Manolova
Electronics 2024, 13(16), 3339; https://doi.org/10.3390/electronics13163339 - 22 Aug 2024
Abstract
Streaming data are present all around us. From traditional radio systems streaming audio to today’s connected end-user devices constantly sending information or accessing services, data are flowing constantly between nodes across various networks. The demand for appropriate outlier detection (OD) methods in the fields of fault detection, special events detection, and malicious activities detection and prevention is not only persistent over time but increasing, especially with the recent developments in Telecommunication systems such as Fifth Generation (5G) networks facilitating the expansion of the Internet of Things (IoT). The process of selecting a computationally efficient OD method, adapted for a specific field and accounting for the existence of empirical data, or lack thereof, is non-trivial. This paper presents a thorough survey of OD methods, categorized by the applications they are implemented in, the basic assumptions that they use according to the characteristics of the streaming data, and a summary of the emerging challenges, such as the evolving structure and nature of the data and their dimensionality and temporality. A categorization of commonly used datasets in the context of streaming data is produced to aid data source identification for researchers in this field. Based on this, guidelines for OD method selection are defined, which consider flexibility and sample size requirements and facilitate the design of such algorithms in Telecommunications and other industries. Full article
(This article belongs to the Special Issue Knowledge Engineering and Data Mining, 3rd Edition)
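Among the simplest statistical OD baselines such a survey covers is a streaming z-score test: flag a point whose deviation from the running mean exceeds a few running standard deviations. A self-contained sketch using Welford's online update (the threshold and data are illustrative):

```python
class StreamingZScore:
    """Minimal streaming outlier detector: flag a point whose z-score
    against the running mean/variance exceeds a threshold."""

    def __init__(self, threshold=3.0):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0            # Welford's running sum of squared deviations
        self.threshold = threshold

    def update(self, x):
        """Test x against the statistics so far, then fold it in."""
        is_outlier = False
        if self.n >= 2:
            var = self.m2 / (self.n - 1)
            if var > 0:
                z = abs(x - self.mean) / var ** 0.5
                is_outlier = z > self.threshold
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return is_outlier

det = StreamingZScore(threshold=3.0)
flags = [det.update(x) for x in [1.0, 1.1, 0.9, 1.0, 1.05, 9.0, 1.0]]
```

Note the classic weakness the survey's evolving-structure discussion points at: once the outlier is absorbed, the inflated variance masks later points, which is why windowed or forgetting-factor variants exist.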

18 pages, 3413 KiB  
Review
Green Energy Management in Manufacturing Based on Demand Prediction by Artificial Intelligence—A Review
by Izabela Rojek, Dariusz Mikołajewski, Adam Mroziński and Marek Macko
Electronics 2024, 13(16), 3338; https://doi.org/10.3390/electronics13163338 - 22 Aug 2024
Cited by 1
Abstract
Energy efficiency in production systems and processes is a key global research topic, especially in light of the Green Deal, Industry 4.0/5.0 paradigms, and rising energy prices. Research on improving the energy efficiency of production based on artificial intelligence (AI) analysis brings promising solutions, and the digital transformation of industry towards green energy is slowly becoming a reality. New production planning rules, the optimization of the use of the Industrial Internet of Things (IIoT), industrial cyber-physical systems (ICPSs), and the effective use of production data and their optimization with AI bring further opportunities for sustainable, energy-efficient production. The aim of this study is to systematically evaluate and quantify the research results, trends, and research impact on energy management in production based on AI-based demand forecasting. The value of the research lies in the broader use of AI, which will reduce the impact of the observed environmental and economic problems in the areas of energy consumption, forecasting accuracy, and production efficiency. In addition, the demand for Green AI technologies will increase, in creating sustainable solutions, reducing the impact of AI on the environment, and improving the accuracy of forecasts, including in the area of electricity storage optimization. A key emerging research trend in green energy management in manufacturing is the use of AI-based demand forecasting to optimize energy consumption, reduce waste, and increase sustainability. An innovative perspective that leverages AI’s ability to accurately forecast energy demand allows manufacturers to align energy consumption with production schedules, minimizing excess energy consumption and emissions. Advanced machine learning (ML) algorithms can integrate real-time data from various sources, such as weather patterns and market demand, to improve forecast accuracy. This supports both sustainability and economic efficiency. In addition, AI-based demand forecasting can enable more dynamic and responsive energy management systems, paving the way for smarter, more resilient manufacturing processes. The paper’s contribution goes beyond mere description, making analyses, comparisons, and generalizations based on the leading current literature, logical conclusions from the state of the art, and the authors’ knowledge and experience in renewable energy, AI, and mechatronics. Full article
(This article belongs to the Special Issue Advanced Industry 4.0/5.0: Intelligence and Automation)

20 pages, 4443 KiB  
Article
Transient Synchronous Stability Analysis of Grid-Following Converter Considering Outer-Loop Control with Current Limiting
by Leke Chen, Lin Zhu, Yang Liu, Nan Ye, Yonghao Hu and Lin Guan
Electronics 2024, 13(16), 3337; https://doi.org/10.3390/electronics13163337 - 22 Aug 2024
Abstract
With the evolution of modern power systems, inverter-based resources have become increasingly prevalent. As critical energy conversion interfaces, grid-following converters exhibit dynamic performances, presenting challenges for system security and stability. This paper focuses on the transient synchronization stability of converters after disturbances, highlighting differences in mechanisms compared to synchronous generators. Although previous studies on the transient synchronization stability of converters have been conducted, they primarily concentrate on the dynamics of the phase-locked loop, with limited consideration of the effects of outer-loop control. This has created a cognitive bottleneck in understanding the transient synchronization mechanisms of converters. To address these challenges, this paper models a grid-following voltage source converter system, incorporating detailed converter control strategies and current-limiting control. The stability regions of the stable equilibrium point under various fault severities are first analyzed. Then, the impacts of outer-loop control, including PI control and current-limiting control, on transient synchronization are examined. The study systematically elucidates the influence of outer-loop control on the transient synchronization stability of converters. Finally, the validity of the proposed theory is confirmed through simulations conducted in PSCAD/EMTDC. Full article

15 pages, 4982 KiB  
Article
Research on DC Electric Shock Protection Method Based on Sliding Curvature Accumulation Quantity
by Hongzhang Zhu, Chuanping Wu, Yao Xie, Yang Zhou, Xiujin Liao and Jian Li
Electronics 2024, 13(16), 3336; https://doi.org/10.3390/electronics13163336 - 22 Aug 2024
Abstract
To address the limitations of current DC residual current protection methods, which primarily rely on the amplitude of the DC residual current for fault detection and fail to safeguard against electric shocks at two points on the same side in DC Isolated Terra (IT) systems, this paper introduces a novel protection method based on DC electric shock features. The paper first analyzes the sliding curvature accumulation and peak rise time features of the DC basic residual current, load mutation current, and animal-body electric shock current under multi-factor conditions. The analysis shows that a sliding curvature accumulation in the range of 0.1 ≤ K ≤ 1 and a peak rise time of Δt ≥ 20 ms can effectively distinguish animal-body electric shock. Then, based on these distinctive electric shock characteristics, an approach for identifying types of electric shock is developed. Finally, a DC residual current protective device (DC-RCD) is designed. Prototype test results demonstrate that the DC-RCD has an action time of ts < 70 ms. The proposed method accurately provides protection against electric shocks and effectively addresses the issue of inadequate protection when two fault points occur on the same side within an IT system. This approach holds significant reference value for the development of next-generation DC-RCDs. Full article
(This article belongs to the Section Power Electronics)
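One plausible reading of "sliding curvature accumulation" is the sum of absolute discrete curvature (second differences) of the current waveform over a trailing window; the paper's exact feature definition may differ, so the sketch below only illustrates the sliding-accumulation idea with hypothetical waveforms:

```python
def sliding_curvature_accumulation(signal, window=5):
    """Accumulate absolute discrete curvature (second differences)
    over a trailing window -- an illustrative stand-in for the
    paper's sliding curvature accumulation feature."""
    curv = [abs(signal[i - 1] - 2 * signal[i] + signal[i + 1])
            for i in range(1, len(signal) - 1)]
    out = []
    for i in range(len(curv)):
        lo = max(0, i - window + 1)
        out.append(sum(curv[lo:i + 1]))
    return out

# A gentle ramp (e.g., a slow load change) accumulates almost no
# curvature; a sharply bending waveform accumulates much more.
ramp = [0.1 * i for i in range(12)]
bend = [0.0] * 6 + [0.5, 1.5, 2.0, 2.2, 2.3, 2.35]
k_ramp = sliding_curvature_accumulation(ramp)
k_bend = sliding_curvature_accumulation(bend)
```

A feature like this separates waveform shapes independently of amplitude, which is what lets the method distinguish shock currents from load mutations that a pure amplitude threshold confuses.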

11 pages, 1849 KiB  
Article
Improved Segmentation of Cellular Nuclei Using UNET Architectures for Enhanced Pathology Imaging
by Simão Castro, Vitor Pereira and Rui Silva
Electronics 2024, 13(16), 3335; https://doi.org/10.3390/electronics13163335 - 22 Aug 2024
Abstract
Medical imaging is essential for pathology diagnosis and treatment, enhancing decision making and reducing costs, but despite the various computational methodologies proposed to improve imaging modalities, further optimization is needed for broader acceptance. This study explores deep learning (DL) methodologies for classifying and segmenting pathological imaging data, optimizing models to accurately predict and generalize from training to new data. Different CNN and U-Net architectures are implemented for segmentation tasks, with their performance evaluated on histological image datasets using enhanced pre-processing techniques such as resizing, normalization, and data augmentation. These are trained, parameterized, and optimized using metrics such as accuracy, the DICE coefficient, and intersection over union (IoU). The experimental results show that the proposed method improves the efficiency of cell segmentation compared to networks such as U-Net and W-UNet. The proposed pre-processing improved the IoU from 0.9077 to 0.9675 and the DICE coefficient from 0.9215 to 0.9916, in both cases about 7% better, surpassing the results reported in the literature. Full article
(This article belongs to the Special Issue Real-Time Computer Vision)

17 pages, 1642 KiB  
Article
Leveraging Time-Critical Computation and AI Techniques for Task Offloading in Internet of Vehicles Network Applications
by Peifeng Liang, Wenhe Chen, Honghui Fan and Hongjin Zhu
Electronics 2024, 13(16), 3334; https://doi.org/10.3390/electronics13163334 - 22 Aug 2024
Viewed by 585
Abstract
Vehicular fog computing (VFC) is an innovative computing paradigm with an exceptional ability to improve the capacity of vehicles to manage computation-intensive applications with both low latency and low energy consumption. Moreover, more and more Artificial Intelligence (AI) technologies are being applied to task offloading on the Internet of Vehicles (IoV). Focusing on the problems of computing latency and energy consumption, in this paper we propose an AI-based Vehicle-to-Everything (V2X) task and resource offloading model for an IoV network, which ensures reliable low-latency communication and efficient task offloading by using a Software-Defined Vehicular-based FC (SDV-F) architecture. To suit time-critical data transmission task distribution, the proposed model reduces unnecessary task allocation at the fog computing layer through an AI-based task-allocation algorithm in the IoV layer that implements the task allocation of each vehicle. By applying AI technologies such as reinforcement learning (RL), Markov decision processes, and deep learning (DL), the proposed model intelligently makes decisions that maximize resource utilization at the fog layer and minimize the average end-to-end delay of time-critical IoV applications. The experiments demonstrate that the proposed model efficiently distributes the fog layer tasks while minimizing the delay.
(This article belongs to the Special Issue AI in Information Processing and Real-Time Communication)

17 pages, 8979 KiB  
Article
Determining the Optimal Window Duration to Enhance Emotion Recognition Based on Galvanic Skin Response and Photoplethysmography Signals
by Marcos F. Bamonte, Marcelo Risk and Victor Herrero
Electronics 2024, 13(16), 3333; https://doi.org/10.3390/electronics13163333 - 22 Aug 2024
Viewed by 713
Abstract
Automatic emotion recognition using portable sensors is gaining attention due to its potential use in real-life scenarios. Existing studies have not explored Galvanic Skin Response and Photoplethysmography sensors exclusively for emotion recognition using nonlinear features with machine learning (ML) classifiers such as Random Forest, Support Vector Machine, Gradient Boosting Machine, K-Nearest Neighbor, and Decision Tree. In this study, we proposed a genuine window sensitivity analysis on a continuous annotation dataset to determine the window duration and percentage of overlap that optimize the classification performance using ML algorithms and nonlinear features, namely, the Lyapunov Exponent, Approximate Entropy, and Poincaré indices. We found an optimum window duration of 3 s with 50% overlap and achieved accuracies of 0.75 and 0.74 for arousal and valence, respectively. In addition, we proposed a Strong Labeling Scheme that kept only the extreme values of the labels, which raised the accuracy score to 0.94 for arousal. Under the conditions described, traditional ML models offer a good compromise between performance and low computational cost. Our results suggest that well-known ML algorithms can still contribute to the field of emotion recognition, provided that window duration, overlap percentage, and nonlinear features are carefully selected.
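The window/overlap sweep at the heart of the sensitivity analysis reduces to a segmentation step like the following sketch; the function name, sampling rate, and API are illustrative assumptions, and the nonlinear feature extraction itself is not reproduced here.

```python
import numpy as np

def segment(signal, fs, win_s, overlap):
    """Split a 1-D signal into windows of win_s seconds with fractional
    overlap in [0, 1); e.g. win_s=3, overlap=0.5 gives 3 s windows that
    advance by 1.5 s, matching the optimum reported in the abstract."""
    win = int(win_s * fs)
    step = max(1, int(win * (1.0 - overlap)))
    return [signal[i:i + win] for i in range(0, len(signal) - win + 1, step)]
```

Each returned window would then be fed to the feature extractors (Lyapunov Exponent, Approximate Entropy, Poincaré indices) before classification.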

17 pages, 29032 KiB  
Article
Real-Time Dense Visual SLAM with Neural Factor Representation
by Weifeng Wei, Jie Wang, Xiaolong Xie, Jie Liu and Pengxiang Su
Electronics 2024, 13(16), 3332; https://doi.org/10.3390/electronics13163332 - 22 Aug 2024
Viewed by 936
Abstract
Developing a high-quality, real-time, dense visual SLAM system poses a significant challenge in the field of computer vision. NeRF introduces neural implicit representation, marking a notable advancement in visual SLAM research. However, existing neural implicit SLAM methods suffer from long runtimes and face challenges when modeling complex structures in scenes. In this paper, we propose a neural implicit dense visual SLAM method that enables high-quality real-time reconstruction even on a desktop PC. Firstly, we propose a novel neural scene representation, encoding the geometry and appearance information of the scene as a combination of the basis and coefficient factors. This representation allows for efficient memory usage and the accurate modeling of high-frequency detail regions. Secondly, we introduce feature integration rendering to significantly improve rendering speed while maintaining the quality of color rendering. Extensive experiments on synthetic and real-world datasets demonstrate that our method achieves an average improvement of more than 60% for Depth L1 and ATE RMSE compared to existing state-of-the-art methods when running at 9.8 Hz on a desktop PC with a 3.20 GHz Intel Core i9-12900K CPU and a single NVIDIA RTX 3090 GPU. This remarkable advancement highlights the crucial importance of our approach in the field of dense visual SLAM. Full article
(This article belongs to the Special Issue Advances of Artificial Intelligence and Vision Applications)

13 pages, 7269 KiB  
Article
A High-Quality and Space-Efficient Design for Memristor Emulation
by Atul Kumar and Bhartendu Chaturvedi
Electronics 2024, 13(16), 3331; https://doi.org/10.3390/electronics13163331 - 22 Aug 2024
Cited by 1 | Viewed by 567
Abstract
The paper presents a new design for a compact memristor emulator that uses a single active component and a grounded capacitor. This design incorporates a current backward transconductance amplifier as the active element, enabling the emulation of both grounded and floating memristors in incremental and decremental modes. The paper provides an in-depth analysis of the circuit, covering ideal, non-ideal, and parasitic factors. The theoretical performance of the memristor emulator is confirmed through post-layout simulations with 180 nm generic process design kit (gpdk) technology, demonstrating its capability to operate at low voltages (±1 V) with minimal power consumption. Additionally, the emulator shows strong performance under variations in process, voltage, and temperature (PVT) and functions effectively at a frequency of 2 MHz. Experimental validation using commercially available integrated circuits further supports the proposed design. Full article
(This article belongs to the Section Circuit and Signal Processing)

17 pages, 3683 KiB  
Article
Depth Image Rectification Based on an Effective RGB–Depth Boundary Inconsistency Model
by Hao Cao, Xin Zhao, Ang Li and Meng Yang
Electronics 2024, 13(16), 3330; https://doi.org/10.3390/electronics13163330 - 22 Aug 2024
Viewed by 661
Abstract
Depth images have become widely involved in various tasks of 3D systems with the advancement of depth acquisition sensors in recent years. They suffer from serious distortions near object boundaries due to the limitations of depth sensors or estimation methods. In this paper, a simple method is proposed to rectify the erroneous object boundaries of depth images with the guidance of reference RGB images. First, an RGB–Depth boundary inconsistency model is developed to measure whether collocated pixels in the depth and RGB images belong to the same object. The model extracts the structures of the RGB and depth images, respectively, using Gaussian functions. The inconsistency of two collocated pixels is then statistically determined inside large local windows. In this way, pixels near object boundaries of depth images are identified as erroneous when they are inconsistent with the collocated ones in the RGB images. Second, a depth image rectification method is proposed by embedding the model into a simple weighted mean filter (WMF). Experimental results on two datasets verify that the proposed method improves the RMSE and SSIM of depth images by 2.556 and 0.028, respectively, compared with recent optimization-based and learning-based methods.
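A weighted mean filter of the kind the model is embedded into can be sketched as follows. The weight map is assumed to be something like 1 − inconsistency from the boundary model (which is not reproduced), and the loop-based implementation is illustrative only:

```python
import numpy as np

def weighted_mean_filter(depth, weights, radius=2):
    """Replace each depth pixel by the weighted average of its
    (2*radius+1)^2 neighborhood; low weights (e.g. pixels flagged as
    inconsistent with the RGB image) are suppressed in the average."""
    h, w = depth.shape
    out = np.zeros_like(depth, dtype=float)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            d = depth[y0:y1, x0:x1].astype(float)
            wgt = weights[y0:y1, x0:x1].astype(float)
            s = wgt.sum()
            out[y, x] = (d * wgt).sum() / s if s > 0 else depth[y, x]
    return out
```

With a zero weight on a boundary outlier, the filter fills it from reliable neighbors, which is the rectification effect the paper exploits.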
(This article belongs to the Special Issue Recent Advancements in Signal and Vision Analysis)

24 pages, 1749 KiB  
Article
Improved African Vulture Optimization Algorithm Based on Random Opposition-Based Learning Strategy
by Xingsheng Kuang, Junfa Hou, Xiaotong Liu, Chengming Lin, Zhu Wang and Tianlei Wang
Electronics 2024, 13(16), 3329; https://doi.org/10.3390/electronics13163329 - 22 Aug 2024
Viewed by 687
Abstract
This paper proposes an improved African vulture optimization algorithm (IROAVOA), which integrates the random opposition-based learning strategy and disturbance factor to solve problems such as the relatively weak global search capability and the poor ability to balance exploration and exploitation stages. IROAVOA is divided into two parts. Firstly, the random opposition-based learning strategy is introduced in the population initialization stage to improve the diversity of the population, enabling the algorithm to more comprehensively explore the potential solution space and improve the convergence speed of the algorithm. Secondly, the disturbance factor is introduced at the exploration stage to increase the randomness of the algorithm, effectively avoiding falling into the local optimal solution and allowing a better balance of the exploration and exploitation stages. To verify the effectiveness of the proposed algorithm, comprehensive testing was conducted using the 23 benchmark test functions, the CEC2019 test suite, and two engineering optimization problems. The algorithm was compared with seven state-of-the-art metaheuristic algorithms in benchmark test experiments and compared with five algorithms in engineering optimization experiments. The experimental results indicate that IROAVOA achieved better mean and optimal values in all test functions and achieved significant improvement in convergence speed. It can also solve engineering optimization problems better than the other five algorithms. Full article
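Random opposition-based learning at population initialization is commonly formulated as x̃ = lb + ub − r·x with r drawn uniformly; the sketch below assumes that formulation (the paper's exact variant may differ) and omits the fitness-based selection between each candidate and its opposite:

```python
import numpy as np

def robl_init(pop_size, dim, lb, ub, rng=None):
    """Random opposition-based learning (ROBL) initialization: a uniform
    random population X and its random-opposition counterpart
    X_opp = lb + ub - r * X, with r ~ U(0, 1) elementwise. The caller
    would keep the fitter of each (x, x_opp) pair."""
    rng = np.random.default_rng(rng)
    X = lb + (ub - lb) * rng.random((pop_size, dim))
    r = rng.random((pop_size, dim))
    X_opp = np.clip(lb + ub - r * X, lb, ub)
    return X, X_opp
```

Compared with plain opposition (r fixed at 1), the random factor spreads the opposite points over a region rather than a mirror image, which is the diversity gain the abstract credits for faster convergence.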
(This article belongs to the Section Artificial Intelligence)

9 pages, 4326 KiB  
Communication
A Highly Integrated Millimeter-Wave Circularly Polarized Wide-Angle Scanning Antenna Unit
by Guishan Yuan, Sai Guo, Kan Wang and Jiawen Xu
Electronics 2024, 13(16), 3328; https://doi.org/10.3390/electronics13163328 - 22 Aug 2024
Viewed by 658
Abstract
This paper introduces a novel, small-sized, highly integrated, circularly polarized wide-angle scanning antenna using substrate-integrated waveguide (SIW) technology at millimeter-wave frequencies. The antenna unit addresses requirements for high data transmission rates, wide spatial coverage, and strong interference resistance in communication systems. By integrating radiating square waveguides, circular polarizers, filters, and matching loads, the antenna enhances out-of-band suppression, eliminates cross-polarization, and reduces manufacturing complexity and costs. Utilizing this antenna unit as a component, a 4 × 4 phased array antenna with a two-dimensional ±60° scanning capability is designed and simulated. The simulation and measurement results confirm that the phased array antenna achieves the desired scan range with a gain reduction of less than 3.9 dB. Full article

22 pages, 16164 KiB  
Article
Reducing Noise and Impact of High-Frequency Torque Ripple Caused by Injection Voltages by Using Self-Regulating Random Model Algorithm for SynRMs Sensorless Speed Control
by Yibo Guo, Lingyun Pan, Yang Yang, Yimin Gong and Xiaolei Che
Electronics 2024, 13(16), 3327; https://doi.org/10.3390/electronics13163327 - 22 Aug 2024
Viewed by 732
Abstract
For sensorless control in the low-speed range of synchronous reluctance motors (SynRMs), injecting random high-frequency (HF) square-wave-type voltages has become a widely used and technologically mature method. It can solve the noise problem of traditional injection signal methods. However, all injection signal methods cause problems such as torque ripple, which leads to speed fluctuations. This article proposes a self-regulating random model algorithm for the random injection signal method, which includes a quantity adaptive module for adding additional random processes, an evaluation module for evaluating the degree of torque deviation, and an updated model module that receives signals from the other two modules, completes model changes, and outputs random model elements. The main function of this algorithm is to create a model that updates to suppress the evaluation value deviation based on the evaluation situation and outputs an optimal sequence of random numbers, thereby always keeping the speed bias within a small range; this reduces unnecessary changes in the output value of the speed regulator. The feasibility and effectiveness of the proposed algorithm and control method have been demonstrated in experiments based on a 5-kW synchronous reluctance motor.
(This article belongs to the Special Issue Power Electronics in Renewable Systems)

14 pages, 6513 KiB  
Article
An Improved SPWM Strategy for Effectively Reducing Total Harmonic Distortion
by Shaoru Zhang, Huixian Li, Yang Liu, Xiaoyan Liu, Qing Lv, Xiuju Du and Jielu Zhang
Electronics 2024, 13(16), 3326; https://doi.org/10.3390/electronics13163326 - 21 Aug 2024
Viewed by 836
Abstract
In the inverter circuit, the switching speed of the MOSFET is impacted by the presence of a parasitic inductor within the printed circuit board (PCB), which leads to a delay in the switching process. Furthermore, the parasitic inductor within the circuit can easily form an LC oscillation with the parasitic capacitor of the MOSFET. These two issues result in an inconsistency between the actual output of the MOSFET and the driving signal waveform, leading to distortion in the sinusoidal pulse width modulation (SPWM) waveform and an increase in total harmonic distortion (THD). It is common practice to mitigate gate oscillation by introducing a resistor at the gate of the MOSFET. However, raising the resistance slows the charging of the MOSFET’s parasitic capacitor, which increases the switching delay and thereby increases THD. Therefore, an effective strategy to reduce THD is proposed in this paper: the gate resistance is increased while the MOSFET switching delay is computed and corrective compensation is applied. In this way, the inherent issues of the switch are addressed, resulting in inverter output waveforms that closely resemble sine waves and reduced THD. Through a combination of simulation and empirical experimentation, the efficacy of the proposed approach in significantly reducing THD in the inverter’s output waveform has been substantiated.
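THD, the quantity being minimized, can be computed from the harmonic spectrum of the output waveform. This sketch uses a synthetic signal and an FFT amplitude read-out, not the authors' measurement chain:

```python
import numpy as np

def thd(signal, fs, f0, n_harmonics=10):
    """Total harmonic distortion: RMS of harmonics 2..n over the
    fundamental amplitude. Amplitudes are read off an FFT at integer
    multiples of f0, so the signal should span an integer number of
    fundamental periods to avoid spectral leakage."""
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)

    def amp(f):
        return spectrum[np.argmin(np.abs(freqs - f))]

    fund = amp(f0)
    harm = np.sqrt(sum(amp(k * f0) ** 2 for k in range(2, n_harmonics + 1)))
    return harm / fund
```

A 50 Hz sine with a 10% third harmonic, for instance, should report a THD close to 0.1, which is the kind of figure the compensation scheme aims to drive down.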

19 pages, 674 KiB  
Article
Zero-Shot Proxy with Incorporated-Score for Lightweight Deep Neural Architecture Search
by Thi-Trang Nguyen and Ji-Hyeong Han
Electronics 2024, 13(16), 3325; https://doi.org/10.3390/electronics13163325 - 21 Aug 2024
Viewed by 762
Abstract
Designing a high-performance neural network is a difficult task. Neural architecture search (NAS) methods aim to automate this process. However, the construction of a high-quality accuracy predictor, which is a key component of NAS, usually requires significant computation. Therefore, zero-shot proxy-based NAS methods have been actively and extensively investigated. In this work, we propose a new efficient zero-shot proxy, Incorporated-Score, to rank deep neural network architectures instead of using an accuracy predictor. The proposed Incorporated-Score proxy is generated by incorporating the zen-score and entropy information of the network, and it does not need to train any network. We then introduce an optimal NAS algorithm called Incorporated-NAS that targets the maximization of the Incorporated-Score of the neural network within the specified inference budgets. The experiments show that the network designed by Incorporated-NAS with Incorporated-Score outperforms the previously proposed Zen-NAS and achieves a new SOTA accuracy on the CIFAR-10, CIFAR-100, and ImageNet datasets with a lightweight scale.
(This article belongs to the Special Issue Towards Efficient and Reliable AI at the Edge)

20 pages, 3209 KiB  
Article
IPLog: An Efficient Log Parsing Method Based on Few-Shot Learning
by Shuxian Liu, Libo Yun, Shuaiqi Nie, Guiheng Zhang and Wei Li
Electronics 2024, 13(16), 3324; https://doi.org/10.3390/electronics13163324 - 21 Aug 2024
Viewed by 697
Abstract
Log messages from enterprise-level software systems contain crucial runtime details. Engineers can convert log messages into structured data through log parsing, laying the foundation for downstream tasks such as log anomaly detection. Existing log parsing schemes usually underperform in production environments for several reasons: first, they often ignore the semantics of log messages; second, they are often not adapted to different systems, and their performance varies greatly; and finally, they are difficult to adapt to the complexity and variety of log formats in the real environment. In response to the limitations of current approaches, we introduce IPLog (Intelligent Parse Log), a parsing method designed to address these issues. IPLog samples a limited set of log samples based on the distribution of templates in the system’s historical logs, and allows the model to make full use of the small number of log samples to recognize common patterns of keywords and parameters through few-shot learning, and thus can be easily adapted to different systems. In addition, IPLog can further improve the grouping accuracy of log templates through a novel manual feedback merge query strategy based on the longest common prefix, thus enhancing the model’s adaptability to handle complex log formats in production environments. We conducted experiments on four newly released public log datasets, and the experimental results show that IPLog can achieve an average grouping accuracy (GA) of 0.987 and parsing accuracy (PA) of 0.914 on the four public datasets, which are the best among the mainstream parsing schemes. These results demonstrate that IPLog is effective for log parsing tasks. Full article
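The longest-common-prefix merge query can be sketched at the token level as follows; the 0.8 prefix-coverage threshold and function names are assumptions, and the few-shot parsing model itself is not reproduced:

```python
def lcp_tokens(a, b):
    """Length of the longest common token prefix of two log templates."""
    n = 0
    for x, y in zip(a.split(), b.split()):
        if x != y:
            break
        n += 1
    return n

def merge_candidates(templates, min_ratio=0.8):
    """Pairs of templates whose shared prefix covers at least min_ratio of
    the shorter template -- candidates to surface for manual-feedback
    merging, in the spirit of IPLog's merge query strategy."""
    out = []
    for i in range(len(templates)):
        for j in range(i + 1, len(templates)):
            a, b = templates[i], templates[j]
            shorter = min(len(a.split()), len(b.split()))
            if shorter and lcp_tokens(a, b) / shorter >= min_ratio:
                out.append((a, b))
    return out
```

Surfacing only high-prefix-overlap pairs keeps the human feedback loop cheap: the engineer confirms or rejects a handful of merges instead of reviewing every template.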
(This article belongs to the Section Computer Science & Engineering)

10 pages, 786 KiB  
Article
Design and Implementation of a Printed Circuit Model for a Wideband Circularly Polarized Bow-Tie Antenna
by Matthew J. Dodd and Atef Z. Elsherbeni
Electronics 2024, 13(16), 3323; https://doi.org/10.3390/electronics13163323 - 21 Aug 2024
Viewed by 553
Abstract
A crossed bow-tie antenna design for S- and C-Band (2.44–7.62 GHz) with a peak gain of 7.29 dBi is presented to achieve wideband radiation efficiency greater than 90% and circular polarization with a single feed point. The polarization of the antenna is modeled by the input admittance of crossed bow-ties, and the model predictions are validated by experiments. A wideband matching network is designed to be tightly integrated with the antenna and produce a 103% impedance bandwidth. The matching network is decomposed into an equivalent circuit model, and an analysis is presented to demonstrate the principles of the matching network design. A prototype of the optimized antenna design is fabricated and measured to validate the analysis. Full article
(This article belongs to the Section Microwave and Wireless Communications)

20 pages, 1490 KiB  
Article
Adaptive Clustering and Scheduling for UAV-Enabled Data Aggregation
by Tien-Dung Nguyen, Tien Pham Van, Duc-Tai Le and Hyunseung Choo
Electronics 2024, 13(16), 3322; https://doi.org/10.3390/electronics13163322 - 21 Aug 2024
Viewed by 558
Abstract
Using unmanned aerial vehicles (UAVs) is an effective way to gather data from Internet of Things (IoT) devices. To reduce data gathering time and redundancy, thereby enabling the timely response of state-of-the-art systems, one can partition a network into clusters and perform aggregation within each cluster. Existing works solved the UAV trajectory planning problem, in which the energy consumption and/or flight time of the UAV is the minimization objective. The aggregation scheduling within each cluster was neglected, and they assumed that data must be ready when the UAV arrives at the cluster heads (CHs). This paper addresses the minimum time aggregation scheduling problem in duty-cycled networks with a single UAV. We propose an adaptive clustering method that takes into account the trajectory and speed of the UAV. The transmission schedule of IoT devices and the UAV departure times are jointly computed so that (1) the UAV flies continuously throughout the shortest path among the CHs to minimize the hovering time and energy consumption, and (2) data are aggregated at each CH right before the UAV arrival, to maximize the data freshness. Intensive simulation shows that the proposed scheme reduces up to 35% of the aggregation delay compared to other benchmarking methods. Full article
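The joint schedule (the UAV flies continuously; each cluster finishes aggregating just before arrival) reduces to simple arithmetic once the visiting order is fixed. The sketch below assumes a constant-speed UAV, Euclidean distances, and known per-cluster aggregation durations, none of which are spelled out in the abstract:

```python
import math

def schedule(chs, speed, agg_time, start=(0.0, 0.0)):
    """Arrival time at each cluster head (CH) for a UAV flying the given
    visiting order at constant speed, plus the latest start time for each
    CH's in-cluster aggregation so it finishes exactly on UAV arrival
    (maximizing data freshness, as the paper targets)."""
    t, pos, plan = 0.0, start, []
    for ch, dur in zip(chs, agg_time):
        t += math.dist(pos, ch) / speed      # continuous flight leg
        plan.append((t, t - dur))            # (arrival, aggregation start)
        pos = ch
    return plan
```

Starting each aggregation as late as possible is what keeps the delivered data fresh; starting it any earlier only ages the data while the UAV is still en route.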
(This article belongs to the Special Issue Ubiquitous Sensor Networks, 2nd Edition)

13 pages, 5155 KiB  
Article
A Linear Regression-Based Methodology to Improve the Stability of a Low-Cost GPS Receiver Using the Precision Timing Signals from an Atomic Clock
by Shilpa Manandhar, Sneha Saravanan, Yu Song Meng and Yung Chuen Tan
Electronics 2024, 13(16), 3321; https://doi.org/10.3390/electronics13163321 - 21 Aug 2024
Viewed by 3390
Abstract
The global positioning system (GPS) is widely known for its applications in navigation, timing, and positioning. However, its accuracy can be greatly impacted by the performance of its receiver clocks, especially for a low-cost receiver equipped with lower-grade clocks such as crystal oscillators. The objective of this study is to develop a model to improve the stability of a low-cost receiver. To achieve this, a machine-learning-based linear regression algorithm is proposed to predict the clock differences of a low-cost GPS receiver relative to a precision timing source. Experiments were conducted using low-cost receivers such as Ublox and expensive receivers such as Septentrio. The model was implemented and the clocks of the low-cost receivers were steered. The outcomes demonstrate a notable enhancement in the stability of the low-cost receivers after the corrections were applied. This improvement underscores the efficacy of the proposed model in enhancing the performance of low-cost GPS receivers. Consequently, these low-cost receivers can be cost-effectively utilized for various purposes, particularly in applications requiring the deployment of numerous GPS receivers to achieve extensive spatial coverage.
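The steering step amounts to fitting a linear drift model to the measured clock offsets against the reference and removing the prediction. The sketch below uses a plain least-squares line on synthetic data; the paper's actual feature set and update scheme are not specified here:

```python
import numpy as np

def steer(t, offset):
    """Fit offset ≈ a*t + b by least squares and return the residual after
    removing the predicted drift -- i.e. the 'steered' clock error that
    remains once the linear model's correction is applied."""
    a, b = np.polyfit(t, offset, 1)
    return offset - (a * t + b)
```

For a crystal oscillator, the dominant error over short spans is exactly this linear drift, which is why even a simple regression noticeably improves stability.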

29 pages, 5764 KiB  
Article
Evaluation and Improvement of the Flexibility of Biomass Blended Burning Units in a Virtual Power Plant
by Qiwei Zheng, Heng Chen, Kaijie Gou, Peiyuan Pan, Gang Xu and Guoqiang Zhang
Electronics 2024, 13(16), 3320; https://doi.org/10.3390/electronics13163320 - 21 Aug 2024
Viewed by 616
Abstract
To address the problem that small thermal power units and biomass mixed combustion units, which have small generation loads and insufficient primary frequency modulation capability, cannot be connected to a virtual power plant, this paper applies a variety of flexibility retrofit methods to the units and explores their peak load capability. Multiple units are then coupled, and the coupling scheme with the best economy and environmental performance is screened using comprehensive evaluation indexes. While evaluating the peaking load space of the coupled units, their primary frequency regulation capability and new energy consumption capability are improved. According to the calculation results, the low-pressure cylinder zero-output retrofit offers the largest peaking potential among the different technical paths, with unit #3 gaining 27.55 MW of peaking space. The compression heat pump decoupling retrofit has the best economy, raising the daily profit of unit #3 from 0.93 to 1.02 million CNY, an increase of 0.09 million CNY. After the steam extraction retrofit, the three units can be coupled to meet the national feed-in standards. The coupled units can accommodate up to 203.44 MW of other energy sources while meeting the standard.

21 pages, 2071 KiB  
Article
Trajectory Control of Quadrotors via Spiking Neural Networks
by Yesim Oniz
Electronics 2024, 13(16), 3319; https://doi.org/10.3390/electronics13163319 - 21 Aug 2024
Viewed by 562
Abstract
In this study, a novel control scheme based on spiking neural networks (SNNs) has been proposed to accomplish the trajectory tracking of quadrotor unmanned aerial vehicles (UAVs). The update rules for the network parameters have been derived using the Lyapunov stability theorem. Three different trajectories have been utilized in the simulated and experimental studies to verify the efficacy of the proposed control scheme. The acquired results have been compared with the responses obtained for proportional–integral–derivative (PID) and traditional neural network controllers. Simulated and experimental studies demonstrate that the proposed SNN-based controller is capable of providing better tracking accuracy and robust system response in the presence of disturbing factors. Full article
