Sensors, Volume 24, Issue 21 (November-1 2024) – 328 articles

Cover Story (view full-size image): This study delves into robotic manipulation techniques that leverage environmental contact to achieve higher precision in task execution. By exploring inverse kinematics and computationally efficient quadratic programming, which optimizes movement using forward kinematics, the research addresses the challenge of maintaining control accuracy. Geometrical methods are also examined to facilitate simpler assembly and control. The approaches were implemented on a physical robotic platform, allowing for real-time performance evaluations. The findings offer practical insights into how environmental interaction can be strategically employed to enhance robotic capabilities in dynamic conditions. View this paper
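The cover story's combination of forward kinematics, inverse kinematics, and optimization-based control can be illustrated with a damped least-squares IK step for a planar two-link arm. This is a minimal sketch of the technique family, not the paper's implementation; the link lengths, gains, damping, and target pose are all assumptions.

```python
import math

# Damped least-squares (DLS) inverse kinematics for a planar two-link arm.
# A generic sketch of optimization-based IK; all parameters are illustrative.
L1, L2 = 1.0, 1.0  # link lengths (assumed)

def fk(q):
    """Forward kinematics: joint angles -> end-effector position (x, y)."""
    x = L1 * math.cos(q[0]) + L2 * math.cos(q[0] + q[1])
    y = L1 * math.sin(q[0]) + L2 * math.sin(q[0] + q[1])
    return x, y

def ik_step(q, target, damping=0.1, gain=0.5):
    """One DLS update: dq = J^T (J J^T + lambda^2 I)^-1 * error."""
    x, y = fk(q)
    ex, ey = target[0] - x, target[1] - y
    # Analytic Jacobian of the two-link arm.
    j11 = -L1 * math.sin(q[0]) - L2 * math.sin(q[0] + q[1])
    j12 = -L2 * math.sin(q[0] + q[1])
    j21 = L1 * math.cos(q[0]) + L2 * math.cos(q[0] + q[1])
    j22 = L2 * math.cos(q[0] + q[1])
    # A = J J^T + lambda^2 I is 2x2 and symmetric; invert in closed form.
    lam2 = damping * damping
    a11 = j11 * j11 + j12 * j12 + lam2
    a12 = j11 * j21 + j12 * j22
    a22 = j21 * j21 + j22 * j22 + lam2
    det = a11 * a22 - a12 * a12
    wx = (a22 * ex - a12 * ey) / det
    wy = (-a12 * ex + a11 * ey) / det
    return [q[0] + gain * (j11 * wx + j21 * wy),
            q[1] + gain * (j12 * wx + j22 * wy)]

q = [0.3, 0.6]          # initial joint angles (rad)
target = (1.2, 0.8)     # reachable target inside the workspace
for _ in range(200):
    q = ik_step(q, target)
```

The damping term keeps the update well-conditioned near kinematic singularities, which is one reason DLS is a common baseline before moving to full QP formulations.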
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms, with PDF as the official format. To view a paper in PDF form, click the "PDF Full-text" link and open it with the free Adobe Reader.
15 pages, 12596 KiB  
Article
ARMNet: A Network for Image Dimensional Emotion Prediction Based on Affective Region Extraction and Multi-Channel Fusion
by Jingjing Zhang, Jiaying Sun, Chunxiao Wang, Zui Tao and Fuxiao Zhang
Sensors 2024, 24(21), 7099; https://doi.org/10.3390/s24217099 - 4 Nov 2024
Abstract
Compared with discrete emotion space, image emotion analysis based on dimensional emotion space can more accurately represent fine-grained emotion. At the same time, this high-precision representation of emotion requires dimensional emotion prediction methods to sense and capture emotional information in images as accurately and richly as possible. However, existing methods mainly focus on emotion recognition by extracting the emotional regions where salient objects are located while ignoring the joint influence of objects and background on emotion. Furthermore, when fusing multi-level features, the existing literature gives no consideration to the varying contributions of features from different levels to emotion analysis, making it difficult to distinguish valuable features from useless ones and limiting the utilization of effective features. This paper proposes an image emotion prediction network named ARMNet. In ARMNet, a unified affective region extraction method that integrates eye fixation detection and attention detection is proposed to capture the combined influence of objects and backgrounds. Additionally, multi-level features are fused with consideration of their different contributions through an improved channel attention mechanism. Experiments conducted on the CGnA10766 dataset demonstrate that, in comparison to existing methods, the performance on valence and arousal, as measured by Mean Squared Error (MSE), Mean Absolute Error (MAE), and Coefficient of Determination (R²), improved by 4.74%, 3.53%, 3.62%, 1.93%, 6.29%, and 7.23%, respectively. Furthermore, the interpretability of the network is enhanced through the visualization of attention weights corresponding to emotional regions within the images. Full article
(This article belongs to the Special Issue Recent Advances in Smart Mobile Sensing Technology)
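The contribution-weighted fusion of multi-level features described in the abstract above can be sketched as gated summation. This is a hand-rolled illustration with fixed gate logits standing in for learned channel-attention weights; it is not ARMNet's actual architecture.

```python
import math

# Sketch of channel-attention-weighted fusion of multi-level features.
# In the paper the gates are learned; here they are fixed for illustration.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def fuse(levels, attn_logits):
    """levels: equal-length feature vectors, one per network level.
    attn_logits: one logit per level; a sigmoid gate scales each level's
    contribution before summation, so informative levels dominate."""
    gates = [sigmoid(z) for z in attn_logits]
    fused = [0.0] * len(levels[0])
    for feat, g in zip(levels, gates):
        for i, v in enumerate(feat):
            fused[i] += g * v
    return fused, gates

low  = [0.2, 0.9, 0.1]   # low-level (texture) features, illustrative
mid  = [0.5, 0.4, 0.3]   # mid-level features
high = [0.8, 0.1, 0.7]   # high-level (semantic) features
fused, gates = fuse([low, mid, high], attn_logits=[-2.0, 0.0, 2.0])
```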
21 pages, 18890 KiB  
Article
Experimental and Numerical Studies of the Temperature Field in a Dielectrophoretic Cell Separation Device Subject to Joule Heating
by Yoshinori Seki and Shigeru Tada
Sensors 2024, 24(21), 7098; https://doi.org/10.3390/s24217098 - 4 Nov 2024
Abstract
Technologies for rapid and high-throughput separation of rare cells from large populations of other types of cells have recently attracted much attention in the field of bioengineering. Among the various cell separation technologies proposed in the past, dielectrophoresis has shown particular promise because of its preciseness of manipulation and noninvasiveness to cells. However, one drawback of dielectrophoresis devices is that their application of high voltage generates Joule heat that exposes the cells within the device to high temperatures. To further explore this problem, this study investigated the temperature field in a previously developed cell separation device in detail. The temperature rise at the bottom of the microfluidic channel in the device was measured using a micro-LIF method. Moreover, the thermofluidic behavior of the cell separation device was numerically investigated by adopting a heat generation model that takes the electric-field-dependent heat generation term into account in the energy equation. Under the operating conditions of the previously developed cell separation device, the experimentally obtained temperature rise in the device was approximately 20 °C, and the numerical simulation results generally agreed well. Next, parametric calculations were performed with changes in the flow rate of the cell sample solution and the solution conductivity, and a temperature increase of more than 40 °C was predicted. The results demonstrated that an increase in temperature within the cell separation device may have a significant impact on the physiological functions of the cells, depending on the operating conditions of the device. Full article
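The physics behind the temperature rise studied above can be shown with a back-of-the-envelope estimate: the volumetric Joule heat source is q = σ|E|², and for steady one-dimensional conduction across a slab the rise scales as qL²/(2k). All values below are illustrative assumptions, not the paper's device parameters.

```python
# Order-of-magnitude estimate of Joule heating in a microfluidic channel.
# All numbers are assumed for illustration, not taken from the paper.
sigma = 0.1      # solution electrical conductivity, S/m (assumed)
E = 3.0e4        # applied electric field magnitude, V/m (assumed)
L = 100e-6       # channel depth, m (assumed)
k = 0.6          # thermal conductivity of water, W/(m K)

q = sigma * E * E           # volumetric Joule heat source, W/m^3
dT = q * L * L / (2.0 * k)  # steady conduction temperature rise, K
```

Even this crude estimate shows why the rise grows with solution conductivity and field strength, the two parameters the paper varies in its parametric calculations.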
15 pages, 541 KiB  
Communication
Improving Factuality by Contrastive Decoding with Factual and Hallucination Prompts
by Bojie Lv, Ao Feng and Chenlong Xie
Sensors 2024, 24(21), 7097; https://doi.org/10.3390/s24217097 - 4 Nov 2024
Abstract
Large language models have demonstrated impressive capabilities in many domains. However, they sometimes generate irrelevant or nonsensical text, or produce outputs that deviate from the provided input, an occurrence commonly referred to as hallucination. To mitigate this issue, we introduce a novel decoding method that incorporates both factual and hallucination prompts (DFHP). It applies contrastive decoding to highlight the disparity in output probabilities between factual prompts and hallucination prompts. Experiments on both multiple-choice and text generation tasks show that our approach significantly improves the factual accuracy of large language models without additional training. On the TruthfulQA dataset, the DFHP method significantly improves the factual accuracy of the LLaMA model, with an average improvement of 6.4% across the 7B, 13B, 30B, and 65B versions. Its high factual accuracy makes it an ideal choice for high-reliability tasks such as medical diagnosis and legal cases. Full article
(This article belongs to the Special Issue Advances in Security for Emerging Intelligent Systems)
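The contrastive-decoding idea in the abstract above can be sketched in a few lines: score each candidate token by amplifying log-probability mass that the factual-prompted pass assigns but the hallucination-prompted pass does not. The logits, the weighting form, and alpha below are illustrative assumptions, not DFHP's exact formulation.

```python
import math

# Minimal sketch of contrastive decoding between a "factual-prompted" and a
# "hallucination-prompted" forward pass, in the spirit of DFHP.
def log_softmax(logits):
    m = max(logits)
    z = math.log(sum(math.exp(x - m) for x in logits)) + m
    return [x - z for x in logits]

def contrastive_next_token(factual_logits, halluc_logits, alpha=1.0):
    """Pick the token maximizing (1+a)*logp_factual - a*logp_hallucination."""
    lp_f = log_softmax(factual_logits)
    lp_h = log_softmax(halluc_logits)
    scores = [(1 + alpha) * f - alpha * h for f, h in zip(lp_f, lp_h)]
    return max(range(len(scores)), key=scores.__getitem__)

# Token 1 is favored by the plain factual pass, but token 0 gains the most
# once the hallucination-prompted distribution is subtracted.
factual = [2.0, 2.1, 0.0]
halluc  = [0.0, 2.0, 0.0]
```

Greedy decoding on `factual` alone would pick token 1; the contrastive score flips the choice to token 0 because the hallucination prompt also rates token 1 highly.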
31 pages, 6715 KiB  
Article
Modeling of Static Stress Identification Using Electromechanical Impedance of Embedded Piezoelectric Plate
by Xianfeng Wang, Hui Liu, Guoxiong Liu and Dan Xu
Sensors 2024, 24(21), 7096; https://doi.org/10.3390/s24217096 - 4 Nov 2024
Abstract
Working stress is an important indicator reflecting the health status of structures. Passive-monitoring technology using the piezoelectric effect can effectively monitor the dynamic stress of structures. However, under static loads, the charge generated by the piezoelectric devices can only be preserved when the external circuit impedance is infinitely large, which means passive-monitoring techniques are unable to monitor static and quasi-static stress caused by slow-changing actions. In current studies, experimental observations have shown that the impedance characteristics of piezoelectric devices are affected by external static loads, yet the underlying mechanisms remain inadequately explained. This is because the impedance characteristics of piezoelectric devices are actually dynamic characteristics under alternating voltage. Most existing impedance analysis models are based on linear elastic dynamics. Within this framework, the impact of static stress on dynamic characteristics, including impedance characteristics, cannot be addressed. Accounting for static stress in impedance modeling is a challenging problem. In this study, the static stress applied on an embedded piezoelectric plate is abstracted as the initial stress of the piezoelectric plate. Based on nonlinear elastic dynamic governing equations, using the displacement method, an impedance analysis model of an embedded piezoelectric plate considering initial stress is established and verified through a fundamental experiment and a finite element analysis. Based on this, the explicit analytical relation between initial stress and impedance characterizations is provided, the mechanism of the effect of initial stress on the impedance characterizations is revealed, and procedures to identify static stress using impedance characterizations are proposed. Moreover, the sensitivities of the impedance characterizations in response to the initial stress are thoroughly discussed. This study provides a theoretical basis for monitoring static stress using the electromechanical impedance of an embedded piezoelectric plate, and its results can help with the performance prediction and design optimization of piezoelectric-based static stress sensors. Full article
(This article belongs to the Section Physical Sensors)
42 pages, 2065 KiB  
Review
Passive and Active Exoskeleton Solutions: Sensors, Actuators, Applications, and Recent Trends
by D. M. G. Preethichandra, Lasitha Piyathilaka, Jung-Hoon Sul, Umer Izhar, Rohan Samarasinghe, Sanura Dunu Arachchige and Liyanage C. de Silva
Sensors 2024, 24(21), 7095; https://doi.org/10.3390/s24217095 - 4 Nov 2024
Abstract
Recent advancements in exoskeleton technology, both passive and active, are driven by the need to enhance human capabilities across various industries as well as the need to provide increased safety for the human worker. This review paper examines the sensors, actuators, mechanisms, design, and applications of passive and active exoskeletons, providing an in-depth analysis of various exoskeleton technologies. The main scope of this paper is to examine recent developments in exoskeletons and their applications in different fields, and to identify research opportunities. The paper examines the exoskeletons used in various industries as well as research-level prototypes of both active and passive types. Further, it examines the commonly used sensors and actuators, with their advantages and disadvantages, applicable to different types of exoskeletons. Communication protocols used in different exoskeletons are also discussed, along with the challenges faced. Full article
(This article belongs to the Section Sensors and Robotics)
11 pages, 483 KiB  
Communication
Optimizing the Agricultural Internet of Things (IoT) with Edge Computing and Low-Altitude Platform Stations
by Deshan Yang, Jingwen Wu and Yixin He
Sensors 2024, 24(21), 7094; https://doi.org/10.3390/s24217094 - 4 Nov 2024
Abstract
Using low-altitude platform stations (LAPSs) in the agricultural Internet of Things (IoT) enables the efficient and precise monitoring of vast and hard-to-reach areas, thereby enhancing crop management. By integrating edge computing servers into LAPSs, data can be processed directly at the edge in real time, significantly reducing latency and dependency on remote cloud servers. Motivated by these advancements, this paper explores the application of LAPSs and edge computing in the agricultural IoT. First, we introduce an LAPS-aided edge computing architecture for the agricultural IoT, in which each task is segmented into several interdependent subtasks for processing. Next, we formulate a total task processing delay minimization problem, taking into account constraints related to task dependency and priority, as well as equipment energy consumption. Then, by treating the task dependencies as directed acyclic graphs, a heuristic task processing algorithm with priority selection is developed to solve the formulated problem. Finally, the numerical results show that the proposed edge computing scheme outperforms state-of-the-art works and the local computing scheme in terms of the total task processing delay. Full article
(This article belongs to the Special Issue Wireless Sensor Networks in Industrial/Agricultural Environments)
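The abstract above treats task dependencies as a directed acyclic graph and processes subtasks with a priority-aware heuristic. A minimal list-scheduling sketch of that idea: a subtask becomes ready when all prerequisites finish, and the highest-priority ready subtask runs first. The tasks, priorities, and tie-breaking rule are illustrative assumptions, not the paper's algorithm.

```python
import heapq

# Priority-aware list scheduling over a task DAG (illustrative sketch).
def schedule(deps, priority):
    """deps: task -> set of prerequisite tasks. Returns a processing order
    that respects all dependencies, breaking ties by priority (higher first)."""
    indegree = {t: len(d) for t, d in deps.items()}
    children = {t: [] for t in deps}
    for t, d in deps.items():
        for p in d:
            children[p].append(t)
    # Min-heap keyed on negated priority -> highest priority pops first.
    ready = [(-priority[t], t) for t in deps if indegree[t] == 0]
    heapq.heapify(ready)
    order = []
    while ready:
        _, t = heapq.heappop(ready)
        order.append(t)
        for c in children[t]:
            indegree[c] -= 1
            if indegree[c] == 0:
                heapq.heappush(ready, (-priority[c], c))
    return order

# Hypothetical agricultural-IoT subtasks and priorities.
deps = {"sense": set(), "filter": {"sense"}, "detect": {"filter"},
        "log": {"sense"}, "report": {"detect", "log"}}
priority = {"sense": 1, "filter": 3, "detect": 3, "log": 2, "report": 1}
order = schedule(deps, priority)
```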
28 pages, 2910 KiB  
Review
A Review of Visual Estimation Research on Live Pig Weight
by Zhaoyang Wang, Qifeng Li, Qinyang Yu, Wentai Qian, Ronghua Gao, Rong Wang, Tonghui Wu and Xuwen Li
Sensors 2024, 24(21), 7093; https://doi.org/10.3390/s24217093 - 4 Nov 2024
Abstract
The weight of live pigs is directly related to their health, nutrition management, disease prevention and control, and the overall economic benefits to livestock enterprises. Direct weighing can induce stress responses in pigs, leading to decreased productivity. Therefore, modern livestock industries are increasingly turning to non-contact techniques for estimating pig weight, such as automated monitoring systems based on computer vision. These technologies provide continuous, real-time weight-monitoring data without disrupting the pigs’ normal activities or causing stress, thereby enhancing breeding efficiency and management levels. Two methods of pig weight estimation based on image and point cloud data are comprehensively analyzed in this paper. We first analyze the advantages and disadvantages of the two methods and then discuss the main problems and challenges in the field of pig weight estimation technology. Finally, we predict the key research areas and development directions in the future. Full article
(This article belongs to the Section Smart Agriculture)
10 pages, 2161 KiB  
Article
Evaluating Alternative Registration Planes in Imageless, Computer-Assisted Navigation Systems for Direct Anterior Total Hip Arthroplasty
by John E. Farey, Yuan Chai, Joshua Xu, Vincent Maes, Ameneh Sadeghpour, Neri A. Baker, Jonathan M. Vigdorchik and William L. Walter
Sensors 2024, 24(21), 7092; https://doi.org/10.3390/s24217092 - 4 Nov 2024
Abstract
(1) Background: Imageless computer navigation systems have the potential to improve the accuracy of acetabular cup position in total hip arthroplasty (THA). Popular imageless navigation methods include locating the patient in a three-dimensional space (registration method) while using a baseline to angle the acetabular cup (reference plane). This study aims to compare the accuracy of different methods for determining postoperative acetabular cup positioning in THA via the direct anterior approach. (2) Methods: Fifty-one participants were recruited. Optical and inertial sensor imageless navigation systems were used simultaneously with three combinations of registration methods and reference planes: the anterior pelvic plane (APP), the anterior superior iliac spine (ASIS) and the table tilt (TT) method. Postoperative acetabular cup position, inclination, and anteversion were assessed using CT scans. (3) Results: For inclination, the mean absolute error (MAE) was lower using the TT method (2.4° ± 1.7°) compared to the ASIS (2.8° ± 1.7°, p = 0.17) and APP method (3.7° ± 2.1°, p < 0.001). For anteversion, the MAE was significantly lower for the TT method (2.4° ± 1.8°) in contrast to the ASIS (3.9° ± 3.2°, p = 0.005) and APP method (9.1° ± 6.2°, p < 0.001). (4) Conclusion: A functional reference plane is superior to an anatomic reference plane to accurately measure intraoperative acetabular cup inclination and anteversion in THA using inertial imageless navigation systems. Full article
(This article belongs to the Section Biomedical Sensors)
19 pages, 5545 KiB  
Article
Edge Computing for AI-Based Brain MRI Applications: A Critical Evaluation of Real-Time Classification and Segmentation
by Khuhed Memon, Norashikin Yahya, Mohd Zuki Yusoff, Rabani Remli, Aida-Widure Mustapha Mohd Mustapha, Hilwati Hashim, Syed Saad Azhar Ali and Shahabuddin Siddiqui
Sensors 2024, 24(21), 7091; https://doi.org/10.3390/s24217091 - 4 Nov 2024
Abstract
Medical imaging plays a pivotal role in diagnostic medicine, with technologies like Magnetic Resonance Imaging (MRI), Computed Tomography (CT), Positron Emission Tomography (PET), and ultrasound scans being widely used to assist radiologists and medical experts in reaching a concrete diagnosis. Given the recent massive uplift in the storage and processing capabilities of computers, and the publicly available big data, Artificial Intelligence (AI) has also started contributing to improving diagnostic radiology. Edge computing devices and handheld gadgets can serve as useful tools to process medical data in remote areas with limited network and computational resources. In this research, the capabilities of multiple platforms are evaluated for the real-time deployment of diagnostic tools. MRI classification and segmentation applications developed in previous studies are used for testing the performance using different hardware and software configurations. Cost–benefit analysis is carried out using a workstation with an NVIDIA Graphics Processing Unit (GPU), a Jetson Xavier NX, a Raspberry Pi 4B, and an Android phone, using MATLAB, Python, and Android Studio. The mean computational times for the classification app on the PC, Jetson Xavier NX, and Raspberry Pi are 1.2074, 3.7627, and 3.4747 s, respectively. On the low-cost Android phone, this time is observed to be 0.1068 s using the Dynamic Range Quantized TFLite version of the baseline model, with slight degradation in accuracy. For the segmentation app, the times are 1.8241, 5.2641, 6.2162, and 3.2023 s, respectively, when using JPEG inputs. The Jetson Xavier NX and Android phone stand out as the best platforms due to their compact size, fast inference times, and affordability. Full article
(This article belongs to the Section Biomedical Sensors)
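The dynamic-range quantization mentioned above (applied via TFLite in the paper) rests on a simple idea: store float32 weights as int8 plus a scale factor, trading a small accuracy loss for size and speed. A toy sketch of symmetric per-tensor quantization, with illustrative weight values:

```python
# Symmetric int8 quantization: w ~ scale * q, with q clamped to [-127, 127].
# Toy illustration of the idea behind TFLite dynamic-range quantization;
# the weight values are made up.
def quantize(weights):
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [scale * v for v in q]

weights = [0.52, -1.27, 0.031, 0.8]
q, scale = quantize(weights)
restored = dequantize(q, scale)
max_err = max(abs(w - r) for w, r in zip(weights, restored))
```

The reconstruction error is bounded by half a quantization step (scale/2), which is why the paper observes only slight accuracy degradation on the phone.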
27 pages, 4877 KiB  
Review
A Review of Cutting-Edge Sensor Technologies for Improved Flood Monitoring and Damage Assessment
by Yixin Tao, Bingwei Tian, Basanta Raj Adhikari, Qi Zuo, Xiaolong Luo and Baofeng Di
Sensors 2024, 24(21), 7090; https://doi.org/10.3390/s24217090 - 4 Nov 2024
Abstract
Floods are the most destructive, widespread, and frequent natural hazards. The extent of flood events is accelerating in the context of climate change, and flood management and disaster mitigation remain important long-term issues. Different studies have utilized data and images from various types of sensors for mapping, assessment, forecasting, early warning, rescue, and other disaster prevention and mitigation activities before, during, and after floods, including flash floods, coastal floods, and urban floods. These monitoring processes have evolved from early ground-based observations relying on in situ sensors to high-precision, high-resolution, and high-coverage monitoring by airborne and remote sensing sensors. In this study, we analyze the different kinds of sensors through a literature review, case studies, and other methods to explore the development history of flood sensors and the driving role of floods in different countries. We find a trend towards the integration of flood sensors with artificial intelligence, and that the state of the art of these sensors largely determines the effectiveness of local flood management. By exploring the different types of sensors and their effectiveness, this study helps to improve flood monitoring and flood responses. Full article
(This article belongs to the Section Remote Sensors)
15 pages, 2604 KiB  
Article
A Deep Cryptographic Framework for Securing the Healthcare Network from Penetration
by Arjun Singh, Vijay Shankar Sharma, Shakila Basheer and Chiranji Lal Chowdhary
Sensors 2024, 24(21), 7089; https://doi.org/10.3390/s24217089 - 4 Nov 2024
Abstract
Ensuring the security of picture data on a network presents considerable difficulties because of the requirement for conventional embedding systems, which ultimately leads to subpar performance. It poses a risk of unauthorized data acquisition and misuse. Moreover, previous image-security techniques faced several challenges, including high execution times. As a result, a novel framework called Graph Convolutional-Based Twofish Security (GCbTS) was introduced to secure the images used in healthcare. The medical data are gathered from the Kaggle site and included in the proposed architecture. The inserted data are preprocessed to remove noise, and the hash 1 value is computed. Using the generated key, the separated images are then encrypted. Additionally, to verify the user's identity, a hash 2 value is computed from the encrypted data and compared against the hash 1 value. Once verification is complete, the data are restored to their original condition by decrypting them with the collective key and made accessible to authorized individuals. Additionally, to determine its effectiveness, the calculated results of the suggested model are compared against existing counterparts with respect to picture privacy. Full article
(This article belongs to the Section Internet of Things)
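The hash-then-encrypt-then-verify flow described in the abstract above can be sketched end to end. Twofish is not in the Python standard library, so a simple SHA-256 counter keystream XOR stands in for the cipher here purely for illustration; this is NOT the paper's GCbTS scheme, and a toy keystream like this is not production-grade cryptography.

```python
import hashlib

# Illustrative hash -> encrypt -> decrypt -> re-hash verification flow.
# SHA-256 counter-mode keystream XOR stands in for the Twofish cipher.
def keystream(key, n):
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(data, key):
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

decrypt = encrypt  # an XOR stream cipher is its own inverse

image = b"\x00\x10\x20fake-pixel-data\xff"  # hypothetical image bytes
key = b"shared-secret"                      # hypothetical collective key
hash1 = hashlib.sha256(image).hexdigest()     # integrity tag before sending
cipher = encrypt(image, key)
restored = decrypt(cipher, key)
hash2 = hashlib.sha256(restored).hexdigest()  # recomputed after decryption
```

Matching `hash1` and `hash2` plays the role of the verification step in the abstract: any tampering with the ciphertext changes the recomputed hash.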
12 pages, 977 KiB  
Article
A Study of the Effect of Temperature on the Capacitance Characteristics of a Metal-μhemisphere Resonant Gyroscope
by Xiangxian Yao, Hui Zhao, Zhong Su, Xibing Gu and Sirui Chu
Sensors 2024, 24(21), 7088; https://doi.org/10.3390/s24217088 - 4 Nov 2024
Abstract
Metal-μhemispherical resonant gyros (M-μHRGs) are widely used in highly dynamic navigation systems in extreme environments due to their high accuracy and structural stability. However, the effect of temperature variations on the capacitance characteristics of M-μHRGs has not been fully investigated, which is crucial for optimizing the performance of the gyro. This study aims to systematically analyze the effect of temperature on the static and dynamic capacitances of M-μHRGs. An M-μHRG structure based on a 16-tooth metal oscillator is designed, and simulation experiments are conducted using a non-contact capacitance measurement method and COMSOL Multiphysics 6.2 finite element simulation software over the temperature range of 233.15 K to 343.15 K. The modeling analysis of the static capacitance takes the thermal expansion effect into account, and the results show that the static capacitance remains stable across the measured temperature range, with minimal effect from temperature. The dynamic capacitance exhibits significant nonlinear variations under different temperature conditions, especially in the two end temperature intervals (below 273.15 K and above 313.15 K), where the capacitance values show local extremes and fluctuations. To capture this nonlinear behavior, the experimental data were smoothed and fitted using the LOESS method, revealing a complex trend of capacitance variation with temperature. The results show that the M-μHRG has good capacitance stability in the mid-temperature range, but its dynamic performance is significantly affected at extreme temperatures. This study provides a theoretical reference for the optimal design of M-μHRGs in high- and low-temperature environments. Full article
(This article belongs to the Section Physical Sensors)
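The LOESS smoothing used above is locally weighted linear regression with a tricube kernel. A minimal sketch on synthetic capacitance-vs-temperature data (the span, data values, and noise are illustrative assumptions, not the paper's measurements):

```python
# Minimal LOESS-style smoother: at each query point, fit a straight line by
# weighted least squares using a tricube kernel over the nearest neighbors.
def loess_point(xs, ys, x0, span=0.5):
    """Fit a weighted line around x0 and return its value at x0."""
    n = len(xs)
    k = max(2, int(span * n))                 # neighborhood size
    dists = sorted(abs(x - x0) for x in xs)
    h = dists[k - 1] or 1e-12                 # kernel bandwidth
    w = [(1 - min(1.0, abs(x - x0) / h) ** 3) ** 3 for x in xs]
    # Weighted least squares for y = a + b*(x - x0); the value at x0 is a.
    sw = sum(w)
    sx = sum(wi * (x - x0) for wi, x in zip(w, xs))
    sxx = sum(wi * (x - x0) ** 2 for wi, x in zip(w, xs))
    sy = sum(wi * y for wi, y in zip(w, ys))
    sxy = sum(wi * (x - x0) * y for wi, x, y in zip(w, xs, ys))
    det = sw * sxx - sx * sx
    return (sxx * sy - sx * sxy) / det if det else sy / sw

temps = [233.15 + 10 * i for i in range(12)]   # 233.15 K .. 343.15 K grid
noise = [0.02, -0.03, 0.01, 0.02, -0.01, 0.03,
         -0.02, 0.01, 0.02, -0.03, 0.01, -0.01]
# Synthetic "capacitance" with a linear trend plus noise (illustrative).
caps = [5.0 + 0.01 * (t - 288.15) + e for t, e in zip(temps, noise)]
smooth = [loess_point(temps, caps, t) for t in temps]
```

Because the local fit is linear, the smoother tracks slow trends faithfully while averaging out point-to-point noise, which is what makes LOESS suitable for the nonlinear capacitance curves in the paper.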
15 pages, 4606 KiB  
Article
Lower Limb Motion Recognition Based on sEMG and CNN-TL Fusion Model
by Zhiwei Zhou, Qing Tao, Na Su, Jingxuan Liu, Qingzheng Chen and Bowen Li
Sensors 2024, 24(21), 7087; https://doi.org/10.3390/s24217087 - 4 Nov 2024
Abstract
To enhance the classification accuracy of lower limb movements, a fusion recognition model integrating a surface electromyography (sEMG)-based convolutional neural network, transformer encoder, and long short-term memory network (CNN-Transformer-LSTM, CNN-TL) was proposed in this study. By combining these advanced techniques, significant improvements in movement classification were achieved. Firstly, sEMG data were collected from 20 subjects as they performed four distinct gait movements: walking upstairs, walking downstairs, walking on a level surface, and squatting. Subsequently, the gathered sEMG data underwent preprocessing, with features extracted from both the time domain and frequency domain. These features were then used as inputs for the machine learning recognition model. Finally, based on the preprocessed sEMG data, the CNN-TL lower limb action recognition model was constructed. The performance of CNN-TL was then compared with that of the CNN-LSTM, CNN, and SVM models. The results demonstrated that the accuracy of the CNN-TL model in lower limb action recognition was 3.76%, 5.92%, and 14.92% higher than that of the CNN-LSTM, CNN, and SVM models, respectively, thereby proving its superior classification performance. This provides an effective scheme for improving lower limb motor function in rehabilitation and assistance devices. Full article
(This article belongs to the Section Sensor Networks)
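The time- and frequency-domain feature extraction step described above can be sketched with standard sEMG features: root mean square, mean absolute value, zero crossings, and power-weighted mean frequency. The synthetic signal, sampling rate, and window length are illustrative assumptions, not the paper's acquisition settings.

```python
import math

# Common sEMG window features (time domain + frequency domain), sketched
# on a synthetic 50 Hz tone; a naive DFT suffices for short windows.
def time_features(x):
    n = len(x)
    rms = math.sqrt(sum(v * v for v in x) / n)          # root mean square
    mav = sum(abs(v) for v in x) / n                    # mean absolute value
    zc = sum(1 for a, b in zip(x, x[1:]) if a * b < 0)  # zero crossings
    return rms, mav, zc

def mean_frequency(x, fs):
    """Power-weighted mean frequency from a naive DFT."""
    n = len(x)
    num = den = 0.0
    for k in range(1, n // 2):
        re = sum(v * math.cos(2 * math.pi * k * i / n) for i, v in enumerate(x))
        im = sum(v * math.sin(2 * math.pi * k * i / n) for i, v in enumerate(x))
        p = re * re + im * im
        num += (k * fs / n) * p
        den += p
    return num / den

fs = 1000  # Hz (assumed sampling rate)
sig = [math.sin(2 * math.pi * 50 * i / fs) for i in range(200)]  # 50 Hz tone
rms, mav, zc = time_features(sig)
mf = mean_frequency(sig, fs)
```

For a unit-amplitude sinusoid the RMS is 1/√2 and the mean frequency lands on the tone's bin, a quick sanity check before feeding real windows to a classifier.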
13 pages, 380 KiB  
Article
TEA-GCN: Transformer-Enhanced Adaptive Graph Convolutional Network for Traffic Flow Forecasting
by Xiaxia He, Wenhui Zhang, Xiaoyu Li and Xiaodan Zhang
Sensors 2024, 24(21), 7086; https://doi.org/10.3390/s24217086 - 4 Nov 2024
Abstract
Traffic flow forecasting is crucial for improving urban traffic management and reducing resource consumption. Accurate traffic conditions prediction requires capturing the complex spatial-temporal dependencies inherent in traffic data. Traditional spatial-temporal graph modeling methods often rely on fixed road network structures, failing to account for the dynamic spatial correlations that vary over time. To address this, we propose a Transformer-Enhanced Adaptive Graph Convolutional Network (TEA-GCN) that alternately learns temporal and spatial correlations in traffic data layer-by-layer. Specifically, we design an adaptive graph convolutional module to dynamically capture implicit road dependencies at different time levels and a local-global temporal attention module to simultaneously capture long-term and short-term temporal dependencies. Experimental results on two public traffic datasets demonstrate the effectiveness of the proposed model compared to other state-of-the-art traffic flow prediction methods. Full article
(This article belongs to the Special Issue Data and Network Analytics in Transportation Systems)
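The adaptive graph convolutional module described above replaces the fixed road network with an implicit adjacency learned from node embeddings, commonly formed as softmax(ReLU(E1·E2ᵀ)). A minimal sketch with small fixed embeddings standing in for learned parameters (this is the general pattern of adaptive-adjacency models, not necessarily TEA-GCN's exact module):

```python
import math

# Implicit, row-normalized adjacency from two node-embedding tables:
# A = softmax(relu(E1 @ E2^T)), the standard adaptive-adjacency pattern.
def adaptive_adjacency(E1, E2):
    n, d = len(E1), len(E1[0])
    A = []
    for i in range(n):
        scores = []
        for j in range(n):
            s = sum(E1[i][k] * E2[j][k] for k in range(d))
            scores.append(max(0.0, s))          # ReLU keeps useful links only
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        A.append([e / z for e in exps])         # softmax row-normalization
    return A

E1 = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]       # "source" embeddings (fixed here)
E2 = [[0.9, 0.1], [0.1, 0.9], [0.5, 0.5]]       # "target" embeddings (fixed here)
A = adaptive_adjacency(E1, E2)
row_sums = [sum(row) for row in A]
```

In training, E1 and E2 are learned end to end, so the effective graph can change as traffic patterns shift, which is exactly the dynamic spatial correlation the abstract motivates.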
20 pages, 12482 KiB  
Article
Development and Design of an Online Quality Inspection System for Electric Car Seats
by Fangjie Wei, Dongqiang Wang and Xi Zhang
Sensors 2024, 24(21), 7085; https://doi.org/10.3390/s24217085 - 3 Nov 2024
Viewed by 807
Abstract
As the market share of electric vehicles continues to rise, consumer demands for comfort within the vehicle interior have also increased. The noise generated by electric seats during operation has become one of the primary sources of in-cabin noise. However, the offline detection methods for electric seat noise severely limit production capacity. To address this issue, this paper presents an online quality inspection system for automotive electric seats, developed using LabVIEW. This system is capable of simultaneously detecting both the noise and electrical functions of electric seats, thereby resolving problems associated with multiple detection processes and low integration levels that affect production efficiency on the assembly line. The system employs NI boards (9250 + 9182) to collect noise data, while communication between LabVIEW and the Programmable Logic Controller (PLC) allows for programmed control of the seat motor to gather motor current. Additionally, a supervisory computer was developed to process the collected data, which includes generating frequency and time-domain graphs, conducting data analysis and evaluation, and performing database queries. By being co-located with the production line, the system features a highly integrated hardware and software design that facilitates the online synchronous detection of noise performance and electrical functions in automotive electric seats, effectively streamlining the detection process and enhancing overall integration. Practical verification results indicate that the system improves the production line cycle time by 34.84%, enabling rapid and accurate identification of non-conforming items in the seat motor, with a detection time of less than 86 s, thereby meeting the quality inspection needs for automotive electric seats. Full article
(This article belongs to the Special Issue Signal Processing and Sensing Technologies for Fault Diagnosis)

18 pages, 7087 KiB  
Article
Steady-State Visual Evoked Potential-Based Brain–Computer Interface System for Enhanced Human Activity Monitoring and Assessment
by Yuankun Chen, Xiyu Shi, Varuna De Silva and Safak Dogan
Sensors 2024, 24(21), 7084; https://doi.org/10.3390/s24217084 - 3 Nov 2024
Viewed by 694
Abstract
Advances in brain–computer interfaces (BCIs) have enabled direct and functional connections between human brains and computing systems. Recent developments in artificial intelligence have also significantly improved the ability to detect brain activity patterns. In particular, using steady-state visual evoked potentials (SSVEPs) in BCIs has enabled noticeable advances in human activity monitoring and identification. However, the lack of publicly available electroencephalogram (EEG) datasets has limited the development of SSVEP-based BCI systems (SSVEP-BCIs) for human activity monitoring and assisted living. This study aims to provide an open-access multicategory EEG dataset created under the SSVEP-BCI paradigm, with participants performing forward, backward, left, and right movements to simulate directional control commands in a virtual environment developed in Unity. The purpose of these actions is to explore how the brain responds to visual stimuli of control commands. An SSVEP-BCI system is proposed to enable hands-free control of a virtual target in the virtual environment allowing participants to maneuver the virtual target using only their brain activity. This work demonstrates the feasibility of using SSVEP-BCIs in human activity monitoring and assessment. The preliminary experiment results indicate the effectiveness of the developed system with high accuracy, successfully classifying 89.88% of brainwave activity. Full article
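SSVEP decoding of directional commands is commonly done by correlating the EEG window with sinusoidal references at each stimulus frequency (canonical correlation analysis is the standard choice). The sketch below uses a closely related least-squares projection; the frequencies, window length, and sampling rate are illustrative assumptions, not values from the paper.

```python
import numpy as np

fs = 250.0
t = np.arange(0, 2.0, 1.0 / fs)           # 2 s analysis window
stim_freqs = [8.0, 10.0, 12.0, 15.0]      # one flicker frequency per command

def reference(f, t, harmonics=2):
    """Sine/cosine reference signals at f and its harmonics (as columns)."""
    cols = []
    for h in range(1, harmonics + 1):
        cols += [np.sin(2 * np.pi * h * f * t), np.cos(2 * np.pi * h * f * t)]
    return np.column_stack(cols)

def ssvep_score(x, f, t):
    """Fraction of signal energy explained by the reference subspace
    (least-squares projection; CCA is the usual, closely related choice)."""
    Y = reference(f, t)
    coef, *_ = np.linalg.lstsq(Y, x, rcond=None)
    resid = x - Y @ coef
    return 1.0 - np.sum(resid**2) / np.sum(x**2)

rng = np.random.default_rng(1)
# Synthetic single-channel EEG: 12 Hz SSVEP response plus noise
x = np.sin(2 * np.pi * 12.0 * t) + 0.5 * rng.standard_normal(t.size)

scores = {f: ssvep_score(x, f, t) for f in stim_freqs}
detected = max(scores, key=scores.get)
print(detected)  # 12.0
```

The detected frequency maps directly to a movement command (forward, backward, left, right) for the virtual target.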

16 pages, 5991 KiB  
Article
Advanced Imaging Integration: Multi-Modal Raman Light Sheet Microscopy Combined with Zero-Shot Learning for Denoising and Super-Resolution
by Pooja Kumari, Shaun Keck, Emma Sohn, Johann Kern and Matthias Raedle
Sensors 2024, 24(21), 7083; https://doi.org/10.3390/s24217083 - 3 Nov 2024
Viewed by 918
Abstract
This study presents an advanced integration of Multi-modal Raman Light Sheet Microscopy with zero-shot learning-based computational methods to significantly enhance the resolution and analysis of complex three-dimensional biological structures, such as 3D cell cultures and spheroids. The Multi-modal Raman Light Sheet Microscopy system incorporates Rayleigh scattering, Raman scattering, and fluorescence detection, enabling comprehensive, marker-free imaging of cellular architecture. These diverse modalities offer detailed spatial and molecular insights into cellular organization and interactions, critical for applications in biomedical research, drug discovery, and histological studies. To improve image quality without altering or introducing new biological information, we apply Zero-Shot Deconvolution Networks (ZS-DeconvNet), a deep-learning-based method that enhances resolution in an unsupervised manner. ZS-DeconvNet significantly refines image clarity and sharpness across multiple microscopy modalities without requiring large, labeled datasets, or introducing artifacts. By combining the strengths of multi-modal light sheet microscopy and ZS-DeconvNet, we achieve improved visualization of subcellular structures, offering clearer and more detailed representations of existing data. This approach holds significant potential for advancing high-resolution imaging in biomedical research and other related fields. Full article

23 pages, 3124 KiB  
Article
Quantification of Size-Binned Particulate Matter in Electronic Cigarette Aerosols Using Multi-Spectral Optical Sensing and Machine Learning
by Hao Jiang and Keith Kolaczyk
Sensors 2024, 24(21), 7082; https://doi.org/10.3390/s24217082 - 3 Nov 2024
Viewed by 739
Abstract
To monitor health risks associated with vaping, we introduce a multi-spectral optical sensor powered by machine learning for real-time characterization of electronic cigarette aerosols. The sensor can accurately measure the mass of particulate matter (PM) in specific particle size channels, providing essential information for estimating lung deposition of vaping aerosols. For the sensor’s input, wavelength-specific optical attenuation signals are acquired for three separate wavelengths in the ultraviolet, red, and near-infrared range, and the inhalation pressure is collected from a pressure sensor. The sensor’s outputs are PM mass in three size bins, specified as 100–300 nm, 300–600 nm, and 600–1000 nm. Reference measurements of electronic cigarette aerosols, obtained using a custom vaping machine and a scanning mobility particle sizer, provided the ground truth for size-binned PM mass. A lightweight two-layer feedforward neural network was trained using datasets acquired from a wide range of puffing conditions. The performance of the neural network was tested using unseen data collected using new combinations of puffing conditions. The model-predicted values matched closely with the ground truth, and the accuracy reached 81–87% for PM mass in three size bins. Given the sensor’s straightforward optical configuration and the direct collection of signals from undiluted vaping aerosols, the achieved accuracy is notably significant and sufficiently reliable for point-of-interest sensing of vaping aerosols. To the best of our knowledge, this work represents the first instance where machine learning has been applied to directly characterize high-concentration undiluted electronic cigarette aerosols. Our sensor holds great promise in tracking electronic cigarette users’ puff topography with quantification of size-binned PM mass, to support long-term personalized health and wellness. Full article
(This article belongs to the Special Issue Optical Spectroscopic Sensing and Imaging)
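The sensor's mapping, three wavelength-specific attenuations plus inhalation pressure in, three size-binned PM masses out, runs through a lightweight two-layer feedforward network. The forward pass can be sketched as below; the hidden-layer size and weights are placeholders, not the trained network from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def relu(z):
    return np.maximum(z, 0.0)

def forward(x, W1, b1, W2, b2):
    """Two-layer feedforward pass: 4 inputs (UV, red, NIR attenuation +
    inhalation pressure) -> hidden layer -> 3 size-binned PM masses."""
    h = relu(x @ W1 + b1)
    return h @ W2 + b2          # regression head, no output activation

n_in, n_hidden, n_out = 4, 16, 3     # layer sizes are illustrative
W1 = rng.standard_normal((n_in, n_hidden)) * 0.1
b1 = np.zeros(n_hidden)
W2 = rng.standard_normal((n_hidden, n_out)) * 0.1
b2 = np.zeros(n_out)

# One sample: [UV, red, NIR attenuation, pressure], already normalised
x = np.array([[0.42, 0.31, 0.18, 0.75]])
pm_mass = forward(x, W1, b1, W2, b2)   # 100-300, 300-600, 600-1000 nm bins
print(pm_mass.shape)  # (1, 3)
```

Training against the scanning-mobility-particle-sizer ground truth would fit W1, b1, W2, b2 by ordinary regression loss minimisation.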

23 pages, 3632 KiB  
Article
Towards the Development of an Optical Biosensor for the Detection of Human Blood for Forensic Analysis
by Hayley Costanzo, Maxine den Hartog, James Gooch and Nunzianda Frascione
Sensors 2024, 24(21), 7081; https://doi.org/10.3390/s24217081 - 3 Nov 2024
Viewed by 727
Abstract
Blood is a common biological fluid in forensic investigations, offering significant evidential value. Currently employed presumptive blood tests often lack specificity and are sample destructive, which can compromise downstream analysis. Within this study, the development of an optical biosensor for detecting human red blood cells (RBCs) has been explored to address such limitations. Aptamer-based biosensors, termed aptasensors, offer a promising alternative due to their high specificity and affinity for target analytes. Aptamers are short, single-stranded DNA or RNA sequences that form stable three-dimensional structures, allowing them to bind to specific targets selectively. A nanoflare design has been employed within this work, consisting of a quenching gold nanoparticle (AuNP), DNA aptamer sequences, and complementary fluorophore-labelled flares operating through a fluorescence resonance energy transfer (FRET) mechanism. In the presence of RBCs, the aptamer–flare complex is disrupted, restoring fluorescence and indicating the presence of blood. Two aptamers, N1 and BB1, with a demonstrated binding affinity to RBCs, were selected for inclusion within the nanoflare. This study aimed to optimise three features of the design: aptamer conjugation to AuNPs, aptamer hybridisation to complementary flares, and flare displacement in the presence of RBCs. Fluorescence restoration was achieved with both the N1 and BB1 nanoflares, demonstrating the potential for a functional biosensor to be utilised within the forensic workflow. It is hoped that introducing such an aptasensor could enhance the forensic workflow. This aptasensor could replace current tests with a specific and sensitive reagent that can be used for real-time detection, improving the standard of forensic blood analysis. Full article
(This article belongs to the Special Issue Nanomaterials for Sensor Applications)

19 pages, 3033 KiB  
Article
A Cross-Attention-Based Class Alignment Network for Cross-Subject EEG Classification in a Heterogeneous Space
by Sufan Ma and Dongxiao Zhang
Sensors 2024, 24(21), 7080; https://doi.org/10.3390/s24217080 - 3 Nov 2024
Viewed by 512
Abstract
Background: Domain adaptation (DA) techniques have emerged as a pivotal strategy in addressing the challenges of cross-subject classification. However, traditional DA methods are inherently limited by the assumption of a homogeneous space, requiring that the source and target domains share identical feature dimensions and label sets, which is often impractical in real-world applications. Therefore, effectively addressing the challenge of EEG classification under heterogeneous spaces has emerged as a crucial research topic. Methods: We present a comprehensive framework that addresses the challenges of heterogeneous spaces by implementing a cross-domain class alignment strategy. We innovatively construct a cross-encoder to effectively capture the intricate dependencies between data across domains. We also introduce a tailored class discriminator accompanied by a corresponding loss function. By optimizing the loss function, we facilitate the aggregation of features with corresponding classes between the source and target domains, while ensuring that features from non-corresponding classes are dispersed. Results: Extensive experiments were conducted on two publicly available EEG datasets. Compared to advanced methods that combine label alignment with transfer learning, our method demonstrated superior performance across five heterogeneous space scenarios. Notably, in four heterogeneous label space scenarios, our method outperformed the advanced methods by an average of 7.8%. Moreover, in complex scenarios involving both heterogeneous label spaces and heterogeneous feature spaces, our method outperformed the state-of-the-art methods by an average of 4.1%. Conclusions: This paper presents an efficient model for cross-subject EEG classification under heterogeneous spaces, which significantly addresses the challenges of EEG classification within heterogeneous spaces, thereby opening up new perspectives and avenues for research in related fields. Full article
(This article belongs to the Section Biomedical Sensors)
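The cross-encoder's core operation is cross-attention between domains: queries from one domain attend over keys and values from the other, and separate projections reconcile the heterogeneous feature dimensions. A minimal sketch (all dimensions and names are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(Q_feats, KV_feats, Wq, Wk, Wv):
    """Scaled dot-product cross-attention: queries from one domain,
    keys/values from the other, so each target sample attends over
    the source samples it depends on."""
    Q, K, V = Q_feats @ Wq, KV_feats @ Wk, KV_feats @ Wv
    d = Q.shape[-1]
    attn = softmax(Q @ K.T / np.sqrt(d), axis=-1)
    return attn @ V, attn

rng = np.random.default_rng(3)
d_tgt, d_src, d_model = 6, 8, 4          # heterogeneous feature dimensions
target = rng.standard_normal((5, d_tgt))  # 5 target-domain EEG features
source = rng.standard_normal((7, d_src))  # 7 source-domain EEG features
Wq = rng.standard_normal((d_tgt, d_model))
Wk = rng.standard_normal((d_src, d_model))
Wv = rng.standard_normal((d_src, d_model))

out, attn = cross_attention(target, source, Wq, Wk, Wv)
print(out.shape, attn.shape)  # (5, 4) (5, 7)
```

Note that the separate Wq and Wk/Wv projections are what let the two domains have different feature dimensions, which is exactly the heterogeneous-space setting the abstract targets.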

19 pages, 1482 KiB  
Review
A Comprehensive Evaluation of Iris Segmentation on Benchmarking Datasets
by Mst Rumana Sumi, Priyanka Das, Afzal Hossain, Soumyabrata Dey and Stephanie Schuckers
Sensors 2024, 24(21), 7079; https://doi.org/10.3390/s24217079 - 3 Nov 2024
Viewed by 651
Abstract
The iris is one of the most widely used biometric modalities because of its uniqueness, high matching performance, and inherently secure nature. Iris segmentation is an essential preliminary step for iris-based biometric authentication, and authentication accuracy is directly connected with segmentation accuracy. In the last few years, deep-learning-based iris segmentation methodologies have increasingly been adopted because of their ability to handle challenging segmentation tasks and their advantages over traditional segmentation techniques. However, the biggest challenge to the biometric community is the scarcity of open-source resources for application and reproducibility. This review provides a comprehensive examination of available open-source iris segmentation resources, including datasets, algorithms, and tools. In the process, we designed three U-Net and U-Net++ architecture-influenced segmentation algorithms as standard benchmarks, trained them on a large composite dataset (>45K samples), and created 1K manually segmented ground truth masks. Overall, eleven state-of-the-art algorithms were benchmarked against five datasets encompassing multiple sensors, environmental conditions, demography, and illumination. This assessment highlights the strengths, limitations, and practical implications of each method and identifies gaps that future studies should address to improve segmentation accuracy and robustness. To foster future research, all resources developed during this work will be made publicly available. Full article

20 pages, 2385 KiB  
Article
Age-Related Influence on Static and Dynamic Balance Abilities: An Inertial Measurement Unit-Based Evaluation
by Tzu-Tung Lin, Lin-Yen Cheng, Chien-Cheng Chen, Wei-Ren Pan, Yin-Keat Tan, Szu-Fu Chen and Fu-Cheng Wang
Sensors 2024, 24(21), 7078; https://doi.org/10.3390/s24217078 - 3 Nov 2024
Viewed by 600
Abstract
Balance control, a complex sensorimotor skill, declines with age. Assessing balance is crucial for identifying fall risk and implementing interventions in the older population. This study aimed to measure age-dependent changes in static and dynamic balance using inertial measurement units in a clinical setting. This study included 82 healthy participants aged 20–85 years. For the dynamic balance test, participants stood on a horizontally swaying balance board. For the static balance test, they stood on one leg. Inertial measurement units attached to their bodies recorded kinematic data, with average absolute angular velocities assessing balance capabilities. In the dynamic test, the younger participants had smaller average absolute angular velocities in most body parts than those of the middle-aged and older groups, with no significant differences between the middle-aged and older groups. Conversely, in the single-leg stance tests, the young and middle-aged groups outperformed the older group, with no significant differences between the young and middle-aged groups. Thus, dynamic and static balance decline at different stages with age. These results highlight the complementary role of inertial measurement unit-based evaluation in understanding the effect of age on postural control mechanisms, offering valuable insights for tailoring rehabilitation protocols in clinical settings. Full article
(This article belongs to the Topic Innovation, Communication and Engineering)

20 pages, 1946 KiB  
Article
Two-Stream Modality-Based Deep Learning Approach for Enhanced Two-Person Human Interaction Recognition in Videos
by Hemel Sharker Akash, Md Abdur Rahim, Abu Saleh Musa Miah, Hyoun-Sup Lee, Si-Woong Jang and Jungpil Shin
Sensors 2024, 24(21), 7077; https://doi.org/10.3390/s24217077 - 3 Nov 2024
Viewed by 877
Abstract
Human interaction recognition (HIR) between two people in videos is a critical field in computer vision and pattern recognition, aimed at identifying and understanding human interaction and actions for applications such as healthcare, surveillance, and human–computer interaction. Despite its significance, video-based HIR faces challenges in achieving satisfactory performance due to the complexity of human actions, variations in motion, different viewpoints, and environmental factors. In this study, we propose a two-stream deep learning-based HIR system to address these challenges and improve the accuracy and reliability of HIR systems. The two streams extract hierarchical features based on the skeleton and RGB information, respectively. In the first stream, we utilised YOLOv8-Pose for human pose extraction, then extracted features with three stacked LSTM modules and enhanced them with a dense layer that is considered the final feature of the first stream. In the second stream, we applied the Segment Anything Model (SAM) to the input videos, and after filtering the SAM feature, we employed integrated LSTM and GRU modules to extract the long-range dependency feature and then enhanced it with a dense layer that is considered the final feature of the second stream. Here, SAM was utilised for segmented mesh generation, and ImageNet was used for feature extraction from images or meshes, focusing on extracting relevant features from sequential image data. Moreover, we newly created a custom filter function to enhance computational efficiency and eliminate irrelevant keypoints and mesh components from the dataset. We concatenated the two stream features to produce the final feature that fed into the classification module. In extensive experiments, the proposed model achieved 96.56% and 96.16% accuracy on two benchmark datasets, respectively, demonstrating its superiority. Full article
(This article belongs to the Special Issue Computer Vision and Sensors-Based Application for Intelligent Systems)

20 pages, 9098 KiB  
Article
Local–Global Feature Adaptive Fusion Network for Building Crack Detection
by Yibin He, Zhengrong Yuan, Xinhong Xia, Bo Yang, Huiting Wu, Wei Fu and Wenxuan Yao
Sensors 2024, 24(21), 7076; https://doi.org/10.3390/s24217076 - 3 Nov 2024
Viewed by 607
Abstract
Cracks represent one of the most common types of damage in building structures and it is crucial to detect cracks in a timely manner to maintain the safety of the buildings. In general, tiny cracks require focusing on local detail information while complex long cracks and cracks similar to the background require more global features for detection. Therefore, it is necessary for crack detection to effectively integrate local and global information. Focusing on this, a local–global feature adaptive fusion network (LGFAF-Net) is proposed. Specifically, we introduce the VMamba encoder as the global feature extraction branch to capture global long-range dependencies. To enhance the ability of the network to acquire detailed information, the residual network is added as another local feature extraction branch, forming a dual-encoding network to enhance the performance of crack detection. In addition, a multi-feature adaptive fusion (MFAF) module is proposed to integrate local and global features from different branches and facilitate representative feature learning. Furthermore, we propose a building exterior wall crack dataset (BEWC) captured by unmanned aerial vehicles (UAVs) to evaluate the performance of the proposed method used to identify wall cracks. Other widely used public crack datasets are also utilized to verify the generalization of the method. Extensive experiments performed on three crack datasets demonstrate the effectiveness and superiority of the proposed method. Full article
(This article belongs to the Special Issue Sensor-Fusion-Based Deep Interpretable Networks)

3 pages, 163 KiB  
Editorial
Underwater Wireless Communications
by Hamada Esmaiel and Haixin Sun
Sensors 2024, 24(21), 7075; https://doi.org/10.3390/s24217075 - 3 Nov 2024
Viewed by 640
Abstract
Effective underwater wireless communications (UWCs) are essential for a variety of military and civil applications, such as submarine communication and discovery of new natural resources in the underwater environment [...] Full article
(This article belongs to the Special Issue Underwater Wireless Communications)
23 pages, 23514 KiB  
Article
Deep-Learning-Based Automated Building Construction Progress Monitoring for Prefabricated Prefinished Volumetric Construction
by Wei Png Chua and Chien Chern Cheah
Sensors 2024, 24(21), 7074; https://doi.org/10.3390/s24217074 - 2 Nov 2024
Viewed by 878
Abstract
Prefabricated prefinished volumetric construction (PPVC) is a relatively new technique that has recently gained popularity for its ability to improve flexibility in scheduling and resource management. Given the modular nature of PPVC assembly and the large amounts of visual data amassed throughout a construction project today, PPVC building construction progress monitoring can be conducted by quantifying assembled PPVC modules within images or videos. As manually processing high volumes of visual data can be extremely time consuming and tedious, building construction progress monitoring can be automated to be more efficient and reliable. However, the complex nature of construction sites and the presence of nearby infrastructure could occlude or distort visual data. Furthermore, imaging constraints can also result in incomplete visual data. Therefore, it is hard to apply existing purely data-driven object detectors to automate building progress monitoring at construction sites. In this paper, we propose a novel 2D window-based automated visual building construction progress monitoring (WAVBCPM) system to overcome these issues by mimicking human decision making during manual progress monitoring with a primary focus on PPVC building construction. WAVBCPM is segregated into three modules. A detection module first conducts detection of windows on the target building. This is achieved by detecting windows within the input image at two scales by using YOLOv5 as a backbone network for object detection before using a window detection filtering process to omit irrelevant detections from the surrounding areas. Next, a rectification module is developed to account for missing windows in the mid-section and near-ground regions of the constructed building that may be caused by occlusion and poor detection. Lastly, a progress estimation module checks the processed detections for missing or excess information before performing building construction progress estimation. 
The proposed method is tested on images from actual construction sites, and the experimental results demonstrate that WAVBCPM effectively addresses real-world challenges. By mimicking human inference, it overcomes imperfections in visual data, achieving higher accuracy in progress monitoring compared to purely data-driven object detectors. Full article
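The rectification and progress-estimation logic can be illustrated with a toy window-count routine: interior storeys with fewer detections than expected are assumed occluded and rectified, then progress is the fraction of storeys whose windows are fully present. All counts and thresholds here are hypothetical; the paper's modules operate on filtered YOLOv5 window detections.

```python
def estimate_progress(windows_per_storey, detected_rows, total_storeys):
    """Rectify per-storey window counts (fill interior rows that are
    plausibly occluded) and estimate construction progress as the
    fraction of storeys whose windows are fully detected."""
    rectified = []
    for i, n in enumerate(detected_rows):
        interior = 0 < i < len(detected_rows) - 1
        if interior and n < windows_per_storey:
            n = windows_per_storey      # mid-section gap: assume occlusion
        rectified.append(n)
    completed = sum(n >= windows_per_storey for n in rectified)
    return completed / total_storeys

# Hypothetical site: 4 windows per storey, 10-storey target building,
# bottom-up assembly with the topmost detected row still partial
print(estimate_progress(4, [4, 3, 4, 4, 2], 10))  # 0.4
```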

19 pages, 3445 KiB  
Article
A Novel Diagnostic Feature for a Wind Turbine Imbalance Under Variable Speed Conditions
by Amir R. Askari, Len Gelman, Russell King, Daryl Hickey and Andrew D. Ball
Sensors 2024, 24(21), 7073; https://doi.org/10.3390/s24217073 - 2 Nov 2024
Viewed by 696
Abstract
Dependency between the conventional imbalance diagnostic feature and the shaft rotational speed makes imbalance diagnosis challenging for variable-speed machines. This paper investigates this dependency and proposes a novel imbalance diagnostic feature, together with a novel simplified version of this feature, both of which are independent of shaft rotational speed. An equivalent mass–spring–damper system is investigated to find a closed-form expression describing this dependency. By normalizing the conventional imbalance diagnostic feature by the obtained dependency, a speed-invariant diagnostic feature is obtained. Comprehensive experimental trials on a wind turbine with a permissible imbalance confirm that the proposed simplified version of the imbalance diagnostic feature is speed-invariant. Full article
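The normalization idea, dividing the measured vibration amplitude by the closed-form speed dependency of an equivalent single-degree-of-freedom mass-spring-damper under rotating imbalance, can be sketched as follows. All numerical values are illustrative, not from the paper's wind turbine trials.

```python
import numpy as np

def imbalance_response(omega, omega_n, zeta):
    """Dimensionless steady-state amplitude of a mass-spring-damper
    excited by rotating imbalance:
    r^2 / sqrt((1 - r^2)^2 + (2*zeta*r)^2), with r = omega/omega_n."""
    r = omega / omega_n
    return r**2 / np.sqrt((1 - r**2) ** 2 + (2 * zeta * r) ** 2)

# Illustrative numbers (not from the paper)
omega_n, zeta = 60.0, 0.05        # natural frequency [rad/s], damping ratio
imbalance = 2.5                   # true imbalance severity (arbitrary units)
speeds = np.array([10.0, 20.0, 30.0, 40.0])   # shaft speeds [rad/s]

# Conventional feature: amplitude at shaft speed -> depends on speed
conventional = imbalance * imbalance_response(speeds, omega_n, zeta)

# Proposed idea: divide out the modelled speed dependency -> speed-invariant
invariant = conventional / imbalance_response(speeds, omega_n, zeta)
print(invariant)  # constant across speeds
```

The invariant feature tracks the imbalance severity itself, so a single threshold works across the machine's whole operating speed range.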

18 pages, 3127 KiB  
Article
Precise Geoid Determination in the Eastern Swiss Alps Using Geodetic Astronomy and GNSS/Leveling Methods
by Müge Albayrak, Urs Marti, Daniel Willi, Sébastien Guillaume and Ryan A. Hardy
Sensors 2024, 24(21), 7072; https://doi.org/10.3390/s24217072 - 2 Nov 2024
Viewed by 669
Abstract
Astrogeodetic deflections of the vertical (DoVs) are close indicators of the slope of the geoid. Thus, DoVs observed along horizontal profiles may be integrated to create geoid undulation profiles. In this study, we collected DoV data in the Eastern Swiss Alps using a Swiss Digital Zenith Camera, the COmpact DIgital Astrometric Camera (CODIAC), and two total station-based QDaedalus systems. In the mountainous terrain of the Eastern Swiss Alps, the geoid profile was established at 15 benchmarks over a two-week period in June 2021. The elevation along the profile ranges from 1185 to 1800 m, with benchmark spacing ranging from 0.55 km to 2.10 km. The DoV, gravity, GNSS, and leveling measurements were conducted on these 15 benchmarks. The collected gravity data were primarily used for corrections of the DoV-based geoid profiles, accounting for variations in station height and the geoid-quasigeoid separation. The GNSS/leveling and DoV data were both used to compute geoid heights. These geoid heights are compared with the Swiss Geoid Model 2004 (CHGeo2004) and two global gravity field models (EGM2008 and XGM2019e). Our study demonstrates that absolute geoid heights derived from GNSS/leveling data achieve centimeter-level accuracy, underscoring the precision of this method. Comparisons with CHGeo2004 predictions reveal a strong correlation, closely aligning with both GNSS/leveling and DoV-derived results. Additionally, the differential geoid height analysis highlights localized variations in the geoid surface, further validating the robustness of CHGeo2004 in capturing fine-scale geoid heights. These findings confirm the reliability of both absolute and differential geoid height calculations for precise geoid modeling in complex mountainous terrains. Full article
(This article belongs to the Section State-of-the-Art Sensors Technologies)
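Integrating DoVs into a geoid undulation profile follows dN/ds = -ε, where ε is the DoV component along the profile. A trapezoidal-rule sketch with hypothetical benchmark values (not the Swiss campaign data):

```python
import numpy as np

def geoid_profile(eps_arcsec, dist_m):
    """Integrate deflections of the vertical along a profile:
    dN/ds = -epsilon, so N_k = -sum of mean(eps_i, eps_{i+1}) * ds_i
    (epsilon converted to radians, distances in metres)."""
    eps = np.asarray(eps_arcsec) * np.pi / (180.0 * 3600.0)  # arcsec -> rad
    ds = np.diff(dist_m)
    mean_eps = 0.5 * (eps[:-1] + eps[1:])     # trapezoidal rule per segment
    # Geoid undulation relative to the first benchmark [m]
    return np.concatenate([[0.0], -np.cumsum(mean_eps * ds)])

# Illustrative benchmark data
dist_m = np.array([0.0, 1000.0, 2500.0, 4000.0])   # along-profile distance
eps_arcsec = np.array([5.0, 8.0, 6.0, 4.0])        # DoV along the profile

dN = geoid_profile(eps_arcsec, dist_m)
print(dN.round(3))
```

A few arcseconds of deflection over a kilometre already amount to centimetres of geoid change, which is why gravity corrections for station height and the geoid-quasigeoid separation matter at the accuracy level the study reports.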

19 pages, 9602 KiB  
Article
Forest Aboveground Biomass Estimation Based on Unmanned Aerial Vehicle–Light Detection and Ranging and Machine Learning
by Yan Yan, Jingjing Lei and Yuqing Huang
Sensors 2024, 24(21), 7071; https://doi.org/10.3390/s24217071 - 2 Nov 2024
Viewed by 623
Abstract
Eucalyptus is a widely planted species in plantation forests because of its outstanding characteristics, such as its fast growth rate and high adaptability. Accurate and rapid prediction of Eucalyptus biomass is important for plantation forest management and the prediction of carbon stock in terrestrial ecosystems. In this study, the performance of predictive biomass regression equations and machine learning algorithms, including multivariate linear stepwise regression (MLSR), support vector machine regression (SVR), and k-nearest neighbor (KNN) regression, for constructing a predictive forest aboveground biomass (AGB) model was analyzed and compared at the individual tree and stand scales, based on forest parameters extracted by Unmanned Aerial Vehicle–Light Detection and Ranging (UAV-LiDAR) and variables screened by variable projection importance analysis, to select the best prediction method. The results indicated that the prediction accuracy of the natural transformed regression equations (R2 = 0.873, RMSE = 0.312 t/ha, RRMSE = 0.0091) outperformed that of the machine learning algorithms at the individual tree scale. Among the machine learning models, the SVR prediction model accuracy was the best (R2 = 0.868, RMSE = 7.932 t/ha, RRMSE = 0.231). In this study, UAV-LiDAR-based data had great potential in predicting the AGB of Eucalyptus trees, and the tree height parameter had the strongest correlation with AGB. In summary, the combination of UAV-LiDAR data and machine learning algorithms to construct a predictive forest AGB model has high accuracy and provides a solution for carbon stock assessment and forest ecosystem assessment. Full article
(This article belongs to the Section Radar Sensors)
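The "natural transformed" regression the abstract refers to is commonly a natural-log allometric fit of biomass against LiDAR-derived tree height. A minimal sketch under that assumption, with entirely synthetic coefficients and data (not the paper's values):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical allometric form: ln(AGB) = a + b * ln(H), H from UAV-LiDAR
a_true, b_true = -2.0, 2.4
height = rng.uniform(5.0, 30.0, 200)                 # tree heights [m]
ln_agb = a_true + b_true * np.log(height) + 0.05 * rng.standard_normal(200)

# Fit the natural-log-transformed regression by least squares
X = np.column_stack([np.ones_like(height), np.log(height)])
(a_hat, b_hat), *_ = np.linalg.lstsq(X, ln_agb, rcond=None)

# Back-transform to biomass in original units
agb_pred = np.exp(a_hat + b_hat * np.log(height))
print(float(round(b_hat, 1)))  # close to 2.4
```

Log transformation linearises the power-law height-biomass relationship and stabilises the variance, which is one reason such equations can beat generic machine learning fits at the individual tree scale.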

17 pages, 2627 KiB  
Article
Classification and Identification of Frequency-Hopping Signals Based on Jacobi Salient Map for Adversarial Sample Attack Approach
by Yanhan Zhu, Yong Li and Tianyi Wei
Sensors 2024, 24(21), 7070; https://doi.org/10.3390/s24217070 - 2 Nov 2024
Viewed by 534
Abstract
Frequency-hopping (FH) communication adversarial research is a key area in modern electronic countermeasures. To address the challenge posed by interfering parties that use deep neural networks (DNNs) to classify and identify multiple intercepted FH signals—enabling targeted interference and degrading communication performance—this paper presents a batch feature point targetless adversarial sample generation method based on the Jacobi saliency map (BPNT-JSMA). This method builds on the traditional JSMA to generate feature saliency maps, selects the top 8% of salient feature points in batches for perturbation, and increases the perturbation limit to restrict the extreme values of single-point perturbations. Experimental results in a white-box environment show that, compared with the traditional JSMA method, BPNT-JSMA not only maintains a high attack success rate but also enhances attack efficiency and improves the stealthiness of the adversarial samples. Full article
(This article belongs to the Section Communications)
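The batched saliency-and-perturb step can be sketched with a linear stand-in for the DNN's Jacobian: compute a JSMA-style saliency for the target class, select the top 8% of feature points in one batch, and clip each single-point perturbation. This is an illustrative sketch; BPNT-JSMA's exact saliency definition, batching, and perturbation limits are in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

def saliency_map(W, target):
    """For a linear score model s = W @ x, the Jacobian row ds_c/dx is
    W[c]; a simple JSMA-style saliency for pushing class `target` up is
    its gradient minus the summed gradients of the other classes."""
    other = W.sum(axis=0) - W[target]
    return W[target] - other

def batch_perturb(x, sal, frac=0.08, eps=0.2):
    """Perturb the top `frac` most salient feature points in one batch,
    clipping each single-point perturbation to +/- eps."""
    k = max(1, int(frac * x.size))
    idx = np.argsort(np.abs(sal))[-k:]        # top 8% by saliency magnitude
    x_adv = x.copy()
    x_adv[idx] += np.clip(np.sign(sal[idx]) * eps, -eps, eps)
    return x_adv, idx

n_feat, n_cls = 100, 4
W = rng.standard_normal((n_cls, n_feat))   # stand-in for a DNN's Jacobian
x = rng.standard_normal(n_feat)            # intercepted FH signal features
target = 2

sal = saliency_map(W, target)
x_adv, idx = batch_perturb(x, sal)
print(idx.size)  # 8  (8% of 100 feature points perturbed)
```

Perturbing feature points in batches, rather than one at a time as in classical JSMA, is what buys the efficiency gain the abstract reports, while the per-point clip keeps the adversarial sample stealthy.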
