Technologies, Volume 12, Issue 10 (October 2024) – 36 articles

Cover Story: Practical implications of data-driven fault identification in photovoltaic (PV) systems include timely fault detection for enhanced energy output, extended PV panel lifetimes, cost savings, and safe, scalable inspections. The cover article focuses on the main components of PV systems and their operating principles, together with advances in unmanned aerial vehicles, artificial intelligence, machine learning, and deep learning methods, all of which offer enhanced monitoring opportunities. It reviews the current performance and failure modes of vision-based algorithms for PV fault detection, highlighting their capabilities, limitations, and research gaps. Results indicate that shading anomalies significantly impact the performance of PV units and that the top five fault detection methodologies involve deep learning methods such as CNNs and YOLO variants.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
15 pages, 956 KiB  
Article
Technologies for Increasing the Control Efficiency of Small Spacecraft with Solar Panels by Taking into Account Temperature Shock
by Andrey Sedelinkov, Alexandra Nikolaeva, Valeria Serdakova and Ekaterina Khnyryova
Technologies 2024, 12(10), 207; https://doi.org/10.3390/technologies12100207 - 21 Oct 2024
Viewed by 1163
Abstract
The problem of the effective control of a small spacecraft is very relevant for solving a number of target tasks. Such tasks include, for example, remote sensing of the Earth or the implementation of gravity-sensitive processes. Therefore, it is necessary to develop new technologies for controlling small spacecraft. These technologies must take into account a number of disturbing factors that have not been taken into account previously. Temperature shock is one such factor for small spacecraft with solar panels. Therefore, the goal of the work is to create a new technology for controlling a small spacecraft based on a mathematical model of the stressed/deformed state of a solar panel during a temperature shock. The main methods for solving the problem are mathematical methods for solving initial/boundary value problems, in particular, the initial/boundary value problem of the third kind. As a result, an approximate solution for the deformation of a solar panel during a temperature shock was obtained. This solution is more general than those obtained previously. In particular, it satisfies the symmetrical condition of the solar panel. This could not be achieved by the previous solutions. We also observe an improvement (as compared to the previous solutions) in the fulfillment of the boundary conditions for the whole duration of the temperature shock. Based on this, a new technology for controlling a small spacecraft was created and its effectiveness was demonstrated. Application of the developed technology will improve the performance of the target tasks such as remote sensing of the Earth or the implementation of gravity-sensitive processes. Full article
(This article belongs to the Special Issue Technological Advances in Science, Medicine, and Engineering 2024)
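For readers unfamiliar with the terminology, a boundary condition "of the third kind" (Robin condition) prescribes a linear combination of the unknown and its normal derivative on the boundary; in generic form (illustrative only, not the authors' specific panel model):

\[
\alpha\, u(x,t) + \beta\, \frac{\partial u}{\partial n}(x,t) = g(t), \qquad x \in \partial\Omega,
\]

together with an initial condition \(u(x,0) = u_0(x)\), which is why such problems are called initial/boundary value problems.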
21 pages, 2515 KiB  
Review
Multidisciplinary Review of Induction Stove Technology: Technological Advances, Societal Impacts, and Challenges for Its Widespread Use
by Nestor O. Romero-Arismendi, Juan C. Olivares-Galvan, Rafael Escarela-Perez, Jose L. Hernandez-Avila, Victor M. Jimenez-Mondragon and Felipe Gonzalez-Montañez
Technologies 2024, 12(10), 206; https://doi.org/10.3390/technologies12100206 - 20 Oct 2024
Viewed by 1799
Abstract
Induction stoves are increasingly recognized as the future of cooking technology due to their numerous benefits, including enhanced energy efficiency, improved safety, and precise cooking control. This paper provides a comprehensive review of the key technological advancements in induction stoves, while also examining the societal and health impacts that need to be addressed to support their widespread adoption. Induction stoves operate based on the principle of eddy currents induced in metal cookware, which generate heat directly within the pot, reducing cooking times and increasing energy efficiency compared with conventional gas and electric stoves. Moreover, induction stoves are considered an environmentally sustainable option, as they contribute to improvements in indoor air quality by reducing emissions associated with fuel combustion during cooking. However, ongoing research is essential to ensure the safe and effective use of this technology on a broader scale. Full article
20 pages, 896 KiB  
Article
SWL-LSE: A Dataset of Health-Related Signs in Spanish Sign Language with an ISLR Baseline Method
by Manuel Vázquez-Enríquez, José Luis Alba-Castro, Laura Docío-Fernández and Eduardo Rodríguez-Banga
Technologies 2024, 12(10), 205; https://doi.org/10.3390/technologies12100205 - 18 Oct 2024
Viewed by 1060
Abstract
Progress in automatic sign language recognition and translation has been hindered by the scarcity of datasets available for the training of machine learning algorithms, a challenge that is even more acute for languages with smaller signing communities, such as Spanish. In this paper, we introduce a dataset of 300 isolated signs in Spanish Sign Language, collected online via a web application with contributions from 124 participants, resulting in a total of 8000 instances. This dataset, which is openly available, includes keypoints extracted using MediaPipe Holistic. The goal of this paper is to describe the construction and characteristics of the dataset and to provide a baseline classification method using a spatial–temporal graph convolutional network (ST-GCN) model, encouraging the scientific community to improve upon it. The experimental section offers a comparative analysis of the method’s performance on the new dataset, as well as on two other well-known datasets. The dataset, code, and web app used for data collection are freely available, and the web app can also be used to test classifier performance on-line in real-time. Full article
(This article belongs to the Section Information and Communication Technologies)
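Since the dataset entries are MediaPipe Holistic keypoints, a minimal per-frame extraction sketch may help readers picture the input format (the clip name, the chosen landmark subsets, and the flattening are illustrative assumptions, not the authors' exact pipeline):

```python
import cv2
import mediapipe as mp

mp_holistic = mp.solutions.holistic
cap = cv2.VideoCapture("sign_clip.mp4")  # hypothetical input clip

frames_keypoints = []
with mp_holistic.Holistic(min_detection_confidence=0.5) as holistic:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB images; OpenCV reads BGR.
        results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        frame_kp = []
        for lm_set in (results.pose_landmarks,
                       results.left_hand_landmarks,
                       results.right_hand_landmarks):
            if lm_set:  # a hand may be out of frame, so landmarks can be missing
                frame_kp.extend((lm.x, lm.y, lm.z) for lm in lm_set.landmark)
        frames_keypoints.append(frame_kp)
cap.release()
# frames_keypoints holds one keypoint vector per frame, the kind of sequence an
# ST-GCN baseline consumes after padding and graph construction.
```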
22 pages, 2370 KiB  
Article
A Hierarchical Machine Learning Method for Detection and Visualization of Network Intrusions from Big Data
by Jinrong Wu, Su Nguyen, Thimal Kempitiya and Damminda Alahakoon
Technologies 2024, 12(10), 204; https://doi.org/10.3390/technologies12100204 - 17 Oct 2024
Viewed by 1224
Abstract
Machine learning is regarded as an effective approach in network intrusion detection, and has gained significant attention in recent studies. However, few intrusion detection methods have been successfully applied to detect anomalies in large-scale network traffic data, and low explainability of the complex algorithms has caused concerns about fairness and accountability. A further problem is that many intrusion detection systems need to work with distributed data sources in the cloud. In this paper, we propose an intrusion detection method based on distributed computing to learn the latent representations from large-scale network data with lower computation time while improving the intrusion detection accuracy. Our proposed classifier, based on a novel hierarchical algorithm combining adaptability and visualization ability from a self-structured unsupervised learning algorithm and achieving explainability from self-explainable supervised algorithms, is able to enhance the understanding of the model and data. The experimental results show that our proposed method is effective, efficient, and scalable in capturing the network traffic patterns and detecting detailed network intrusion information such as type of attack with high detection performance, and is an ideal method to be applied in cloud-computing environments. Full article
21 pages, 3725 KiB  
Article
An Efficient CNN-Based Intrusion Detection System for IoT: Use Case Towards Cybersecurity
by Amogh Deshmukh and Kiran Ravulakollu
Technologies 2024, 12(10), 203; https://doi.org/10.3390/technologies12100203 - 17 Oct 2024
Viewed by 1650
Abstract
Today’s environment demands that cybersecurity be given top priority because of the increase in cyberattacks and the development of quantum computing capabilities. Traditional security measures have relied on cryptographic techniques to safeguard information systems and networks. However, with the adoption of artificial intelligence (AI), there is an opportunity to enhance cybersecurity through learning-based methods. IoT environments, in particular, work with lightweight systems that cannot handle the large data communications typically required by traditional intrusion detection systems (IDSs) to find anomalous patterns, making this a challenging problem. A deep learning-based framework is proposed in this study with various optimizations for automatically detecting and classifying cyberattacks. These optimizations involve dimensionality reduction, hyperparameter tuning, and feature engineering. Additionally, the framework utilizes an enhanced Convolutional Neural Network (CNN) variant called the Intelligent Intrusion Detection Network (IIDNet) to detect and classify attacks efficiently. Layer optimization at the architectural level is used to improve detection performance in IIDNet using a Learning-Based Intelligent Intrusion Detection (LBIID) algorithm. The experimental study conducted in this paper uses the benchmark UNSW-NB15 dataset and demonstrates that IIDNet achieves an outstanding accuracy of 95.47% while significantly reducing training time and offering excellent scalability, outperforming many existing intrusion detection models. Full article
(This article belongs to the Special Issue IoT-Enabling Technologies and Applications)
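As a rough illustration of a CNN-based classifier over flow-record features (a generic sketch only; IIDNet's actual architecture, the LBIID layer optimization, and the assumed feature and class counts are not taken from the paper):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

N_FEATURES, N_CLASSES = 42, 10   # assumed UNSW-NB15-like feature count and attack categories

model = models.Sequential([
    layers.Input(shape=(N_FEATURES, 1)),       # each flow record treated as a 1-D signal
    layers.Conv1D(32, 3, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(64, 3, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.3),                       # regularization against overfitting
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training would use scaled features reshaped to (samples, features, 1), e.g.:
# model.fit(X_train[..., None], y_train, validation_split=0.1, epochs=20)
```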
14 pages, 778 KiB  
Article
Particle Size Distribution in Holby–Morgan Degradation Model of Platinum on Carbon Catalyst in Fuel Cell: Normal Distribution
by Victor A. Kovtunenko
Technologies 2024, 12(10), 202; https://doi.org/10.3390/technologies12100202 - 17 Oct 2024
Viewed by 948
Abstract
The influence of particle size distribution in platinum catalysts on the aging of PEM fuel cells described by the Holby–Morgan electrochemical degradation model is investigated. The non-diffusive model simulates mechanisms of particle shrinkage by Pt dissolution and particle growth through Pt ion deposition. Without spatial dependence, the number of differential equations can be reduced using the first integral of the system. For an accelerated stress test, a non-symmetric square-wave potential profile is applied according to the European harmonized protocol. The normal particle size distribution, determined by two probability parameters, the expectation and the standard deviation, is represented within finite groups. Numerical solution of the nonlinear diffusion equation shows dispersion for small distribution means and narrowing for large ones, a decrease or increase in amplitude, and movement of Pt particle diameters towards smaller sizes, which is faster for small particles. Full article
(This article belongs to the Section Environmental Technology)
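The normal particle size distribution referred to above is fully determined by its expectation \(\mu\) and standard deviation \(\sigma\):

\[
f(d) \;=\; \frac{1}{\sigma\sqrt{2\pi}}\,\exp\!\left(-\frac{(d-\mu)^{2}}{2\sigma^{2}}\right),
\]

and the finite-group representation mentioned in the abstract can be read as discretizing \(f\) over a finite set of particle diameters \(d_i\) (the weighting scheme is an assumption here, not a statement of the paper's exact construction).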
20 pages, 4236 KiB  
Article
Enhancing Autonomous Visual Perception in Challenging Environments: Bilateral Models with Vision Transformer and Multilayer Perceptron for Traversable Area Detection
by Claudio Urrea and Maximiliano Vélez
Technologies 2024, 12(10), 201; https://doi.org/10.3390/technologies12100201 - 17 Oct 2024
Viewed by 1217
Abstract
The development of autonomous vehicles has grown significantly recently due to the promise of improving safety and productivity in cities and industries. The scene perception module has benefited from the latest advances in computer vision and deep learning techniques, allowing the creation of more accurate and efficient models. This study develops and evaluates semantic segmentation models based on a bilateral architecture to enhance the detection of traversable areas for autonomous vehicles on unstructured routes, particularly in datasets where the distinction between the traversable area and the surrounding ground is minimal. The proposed hybrid models combine Convolutional Neural Networks (CNNs), Vision Transformer (ViT), and Multilayer Perceptron (MLP) techniques, achieving a balance between precision and computational efficiency. The results demonstrate that these models outperform the base architectures in prediction accuracy, capturing distant details more effectively while maintaining real-time operational capabilities. Full article
21 pages, 13387 KiB  
Article
Eight Element Wideband Antenna with Improved Isolation for 5G Mid Band Applications
by Deepthi Mariam John, Shweta Vincent, Sameena Pathan, Alexandros-Apostolos A. Boulogeorgos, Jaume Anguera, Tanweer Ali and Rajiv Mohan David
Technologies 2024, 12(10), 200; https://doi.org/10.3390/technologies12100200 - 17 Oct 2024
Viewed by 1080
Abstract
Modern wireless communication systems have undergone a radical change with the introduction of multiple-input multiple-output (MIMO) antennas, which provide increased channel capacity, fast data rates, and secure connections. To achieve real-time requirements, such antenna technology needs to have good gains, wider bandwidths, satisfactory radiation characteristics, and high isolation. This article presents an eight-element CPW-fed antenna for the 5G mid-band. The proposed antenna consists of eight symmetrical, modified circular monopole antennas with a connected CPW-fed ground plane that offers 24 dB isolation over the operating range. The antenna is further investigated in terms of the scattering parameters and radiation characteristics under both x- and y-axis bending scenarios. The antenna holds a volume of 83 × 129 × 0.1 mm³ and covers a measured impedance bandwidth of 4.5–5.5 GHz (20%) with an average gain of 4 dBi throughout the operating band. The MIMO diversity performance of the antenna is evaluated, and the antenna exhibits good performance suitable for MIMO applications. Furthermore, the channel capacity (CC) is estimated, and the antenna gives a value of 41.8–42.6 bps/Hz within the operating bandwidth, which is very close to an ideal 8 × 8 MIMO system. The simulated and measured findings show excellent agreement. Full article
(This article belongs to the Special Issue Perpetual Sensor Nodes for Sustainable Wireless Network Applications)
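For context, channel capacity figures of this kind are conventionally obtained from the standard MIMO capacity expression (generic form, not the paper's specific measurement procedure):

\[
C \;=\; \log_{2}\det\!\left(\mathbf{I}_{N_r} + \frac{\rho}{N_t}\,\mathbf{H}\mathbf{H}^{H}\right)\ \text{bps/Hz},
\]

where \(\mathbf{H}\) is the \(N_r \times N_t\) channel matrix, \(\rho\) the signal-to-noise ratio, and \(N_t = N_r = 8\) for an 8 × 8 configuration.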
21 pages, 6252 KiB  
Article
Novel Genetic Optimization Techniques for Accurate Social Media Data Summarization and Classification Using Deep Learning Models
by Fahd A. Ghanem, M. C. Padma, Hudhaifa M. Abdulwahab and Ramez Alkhatib
Technologies 2024, 12(10), 199; https://doi.org/10.3390/technologies12100199 - 15 Oct 2024
Viewed by 1199
Abstract
In the era of big data, effectively processing and understanding the vast quantities of brief texts on social media platforms like Twitter (X) is a significant challenge. This paper introduces a novel approach to automatic text summarization aimed at improving accuracy while minimizing redundancy. The proposed method involves a two-step process: first, feature extraction using term frequency–inverse document frequency (TF–IDF), and second, summary extraction through genetic optimized fully connected convolutional neural networks (GO-FC-CNNs). The approach was evaluated on datasets from the Kaggle collection, focusing on topics like FIFA, farmer demonstrations, and COVID-19, demonstrating its versatility across different domains. Preprocessing steps such as tokenization, stemming, stop-word removal, and keyword identification were employed to handle unprocessed data. The integration of genetic optimization into the neural network significantly improved performance compared to traditional methods. Evaluation using the ROUGE criteria showed that the proposed method achieved higher accuracy (98.00%), precision (98.30%), recall (98.72%), and F1-score (98.61%) than existing approaches. These findings suggest that this method can help create a reliable and effective system for large-scale social media data processing, enhancing data dissemination and decision-making. Full article
(This article belongs to the Section Information and Communication Technologies)
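As a pointer to the first step of such a pipeline, TF–IDF feature extraction can be sketched as follows (toy data and a simple sentence-scoring heuristic for illustration; the GO-FC-CNN summarizer itself is not reproduced):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

tweets = [
    "final match tickets sold out in minutes",
    "farmers gather for a large demonstration in the capital",
    "new covid-19 guidance released for travellers",
]
vectorizer = TfidfVectorizer(stop_words="english")  # tokenization + stop-word removal
tfidf = vectorizer.fit_transform(tweets)            # sparse (n_docs x n_terms) matrix

# Rank each sentence by the sum of its TF-IDF weights, a common extractive heuristic.
scores = tfidf.sum(axis=1).A1
ranked = sorted(zip(scores, tweets), reverse=True)
print(ranked[0][1])  # highest-scoring sentence
```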
38 pages, 2270 KiB  
Article
The Role of Machine Learning in Enhancing Particulate Matter Estimation: A Systematic Literature Review
by Amjad Alkhodaidi, Afraa Attiah, Alaa Mhawish and Abeer Hakeem
Technologies 2024, 12(10), 198; https://doi.org/10.3390/technologies12100198 - 15 Oct 2024
Viewed by 1147
Abstract
As urbanization and industrial activities accelerate globally, air quality has become a pressing concern, particularly due to the harmful effects of particulate matter (PM), notably PM2.5 and PM10. This review paper presents a comprehensive systematic assessment of machine learning (ML) techniques for estimating PM concentrations, drawing on studies published from 2018 to 2024. Traditional statistical methods often fail to account for the complex dynamics of air pollution, leading to inaccurate predictions, especially during peak pollution events. In contrast, ML approaches have emerged as powerful tools that leverage large datasets to capture nonlinear, intricate relationships among various environmental, meteorological, and anthropogenic factors. This review synthesizes findings from 32 studies, demonstrating that ML techniques, particularly ensemble learning models, significantly enhance estimation accuracy. However, challenges remain, including data quality, the need for diverse and balanced datasets, issues related to feature selection, and spatial discontinuity. This paper identifies critical research gaps and proposes future directions to improve model robustness and applicability. By advancing the understanding of ML applications in air quality monitoring, this review seeks to contribute to developing effective strategies for mitigating air pollution and protecting public health. Full article
27 pages, 7320 KiB  
Article
A Real-Time and Online Dynamic Reconfiguration against Cyber-Attacks to Enhance Security and Cost-Efficiency in Smart Power Microgrids Using Deep Learning
by Elnaz Yaghoubi, Elaheh Yaghoubi, Ziyodulla Yusupov and Mohammad Reza Maghami
Technologies 2024, 12(10), 197; https://doi.org/10.3390/technologies12100197 - 14 Oct 2024
Cited by 1 | Viewed by 1550
Abstract
Ensuring the secure and cost-effective operation of smart power microgrids has become a significant concern for managers and operators due to the escalating damage caused by natural phenomena and cyber-attacks. This paper presents a novel framework focused on the dynamic reconfiguration of multi-microgrids to enhance system’s security index, including stability, reliability, and operation costs. The framework incorporates distributed generation (DG) to address cyber-attacks that can lead to line outages or generation failures within the network. Additionally, this work considers the uncertainties and accessibility factors of power networks through a modified point prediction method, which was previously overlooked. To achieve the secure and cost-effective operation of smart power multi-microgrids, an optimization framework is developed as a multi-objective problem, where the states of switches and DG serve as independent parameters, while the dependent parameters consist of the operation cost and techno-security indexes. The multi-objective problem employs deep learning (DL) techniques, specifically based on long short-term memory (LSTM) and prediction intervals, to effectively detect false data injection attacks (FDIAs) on advanced metering infrastructures (AMIs). By incorporating a modified point prediction method, LSTM-based deep learning, and consideration of technical indexes and FDIA cyber-attacks, this framework aims to advance the security and reliability of smart power multi-microgrids. The effectiveness of this method was validated on a network of 118 buses. The results of the proposed approach demonstrate remarkable improvements over PSO, MOGA, ICA, and HHO algorithms in both technical and economic indicators. Full article
31 pages, 5975 KiB  
Article
Introducing Digitized Cultural Heritage to Wider Audiences by Employing Virtual and Augmented Reality Experiences: The Case of the v-Corfu Project
by Vasileios Komianos, Athanasios Tsipis and Katerina Kontopanagou
Technologies 2024, 12(10), 196; https://doi.org/10.3390/technologies12100196 - 13 Oct 2024
Viewed by 2010
Abstract
In recent years, cultural projects utilizing digital applications and immersive technologies (VR, AR, MR) have grown significantly, enhancing cultural heritage experiences. Research emphasizes the importance of usability, user experience, and accessibility, yet holistic approaches remain underexplored and many projects fail to reach their audience. This article aims to bridge this gap by presenting a complete workflow including systematic requirements analysis, design guidelines, and development solutions based on knowledge extracted from previous relevant projects. The article focuses on virtual museums covering key challenges including compatibility, accessibility, usability, navigation, interaction, computational performance and graphics quality, and provides a design schema for integrating virtual museums into such projects. Following this approach, a number of applications are presented. Their performance with respect to the aforementioned key challenges is evaluated. Users are invited to assess them, providing positive results. To assess the virtual museum’s ability to attract a broader audience beyond the usual target group, a group of underserved minorities are also invited to use and evaluate it, generating encouraging outcomes. Concluding, results show that the presented workflow succeeds in yielding high-quality applications for cultural heritage communication and attraction of wider audiences, and outlines directions for further improvements in digitized heritage applications. Full article
(This article belongs to the Special Issue Immersive Technologies and Applications on Arts, Culture and Tourism)
15 pages, 5547 KiB  
Article
Improvement of Sound-Absorbing Wool Material by Laminating Permeable Nonwoven Fabric Sheet and Nonpermeable Membrane
by Shuichi Sakamoto, Kodai Sato and Gaku Muroi
Technologies 2024, 12(10), 195; https://doi.org/10.3390/technologies12100195 - 12 Oct 2024
Viewed by 1290
Abstract
Thin sound-absorbing materials are particularly desired in space-constrained applications, such as in the automotive industry. In this study, we theoretically analyzed the structure of relatively thin glass wool or polyester wool laminated with a nonpermeable polyethylene membrane and a permeable nonwoven fabric sheet. We also measured and compared the sound-absorption coefficients of these samples between experimental and theoretical values. The sound-absorption coefficient was derived using the transfer matrix method. The Rayleigh model was applied to describe the acoustic behavior of glass wool and nonwoven sheet, while the Miki model was used for polyester wool. Mathematical formulas were employed to model an air layer without damping and a vibrating membrane. These acoustic components were integrated into a transfer matrix framework to calculate the sound-absorption coefficient. The sound-absorption coefficients of glass wool and polyester wool were progressively enhanced by sequentially adding suitable nonwoven fabric and PE membranes. A sample approximately 10 mm thick, featuring permeable and nonpermeable membranes as outer layers of porous sound-absorbing material, achieved a sound-absorption coefficient equivalent to that of a sample occupying 20 mm thickness (10 mm of porous sound-absorbing material with a 10 mm back air layer). Full article
(This article belongs to the Section Innovations in Materials Processing)
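In the transfer matrix method referred to above, each layer (porous material, membrane, air gap) is represented by a 2 × 2 matrix relating sound pressure and particle velocity on its two faces, and the layered sample is described by the product of these matrices; for a rigid-backed sample, the normal-incidence absorption coefficient then follows from the surface impedance (a generic outline, not the authors' exact formulation):

\[
\mathbf{T} = \mathbf{T}_1\mathbf{T}_2\cdots\mathbf{T}_n =
\begin{pmatrix} T_{11} & T_{12} \\ T_{21} & T_{22} \end{pmatrix},
\qquad
Z_s = \frac{T_{11}}{T_{21}},
\qquad
\alpha = 1 - \left|\frac{Z_s - \rho_0 c_0}{Z_s + \rho_0 c_0}\right|^{2},
\]

with \(\rho_0 c_0\) the characteristic impedance of air.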
29 pages, 5990 KiB  
Article
A Novel Two-Stage Hybrid Model Optimization with FS-FCRBM-GWDO for Accurate and Stable STLF
by Eustache Uwimana and Yatong Zhou
Technologies 2024, 12(10), 194; https://doi.org/10.3390/technologies12100194 - 10 Oct 2024
Cited by 1 | Viewed by 1539
Abstract
The accurate, rapid, and stable prediction of electrical energy consumption is essential for decision-making, energy management, efficient planning, and reliable power system operation. Errors in forecasting can lead to electricity shortages, wasted resources, power supply interruptions, and even grid failures. Accurate forecasting enables timely decisions for secure energy management. However, predicting future consumption is challenging due to the variable behavior of customers, requiring flexible models that capture random and complex patterns. Forecasting methods, both traditional and modern, often face challenges in achieving the desired level of accuracy. To address these shortcomings, this research presents a novel hybrid approach that combines a robust forecaster with an advanced optimization technique. Specifically, the FS-FCRBM-GWDO model has been developed to enhance the performance of short-term load forecasting (STLF), aiming to improve prediction accuracy and reliability. While some models excel in accuracy and others in convergence rate, both aspects are crucial. The main objective was to create a forecasting model that provides reliable, consistent, and precise predictions for effective energy management. This led to the development of a novel two-stage hybrid model. The first stage predicts electrical energy usage through four modules using deep learning, support vector machines, and optimization algorithms. The second stage optimizes energy management based on predicted consumption, focusing on reducing costs, managing demand surges, and balancing electricity expenses with customer inconvenience. This approach benefits both consumers and utility companies by lowering bills and enhancing power system stability. The simulation results validate the proposed model’s efficacy and efficiency compared to existing benchmark models. Full article
(This article belongs to the Section Information and Communication Technologies)
13 pages, 11283 KiB  
Article
Field Ion Microscopy of Tungsten Nano-Tips Coated with Thin Layer of Epoxy Resin
by Dinara Sobola, Ammar Alsoud, Alexandr Knápek, Safeia M. Hamasha, Marwan S. Mousa, Richard Schubert, Pavla Kočková and Pavel Škarvada
Technologies 2024, 12(10), 193; https://doi.org/10.3390/technologies12100193 - 9 Oct 2024
Viewed by 1392
Abstract
This paper presents an analysis of the field ion emission mechanism of tungsten–epoxy nanocomposite emitters and compares their performance with that of tungsten nano-field emitters. The emission mechanism is described using the theory of induced conductive channels. Tungsten emitters with a radius of 70 nm were fabricated using electrochemical polishing and coated with a 20 nm epoxy resin layer. Characterization of the emitters, both before and after coating, was performed using electron microscopy and energy-dispersive X-ray spectroscopy (EDS). The Tungsten nanocomposite emitter was tested using a field ion microscope (FIM) in the voltage range of 0–15 kV. The FIM analyses revealed differences in the emission ion density distributions between the uncoated and coated emitters. The uncoated tungsten tips exhibited the expected crystalline surface atomic distribution in the FIM images, whereas the coated emitters displayed randomly distributed emission spots, indicating the formation of induced conductive channels within the resin layer. The atom probe results are consistent with the FIM findings, suggesting that the formation of conductive channels is more likely to occur in areas where the resin surface is irregular and exhibits protrusions. These findings highlight the distinct emission mechanisms of both emitter types. Full article
(This article belongs to the Section Manufacturing Technology)
21 pages, 3121 KiB  
Article
Smart PV Monitoring and Maintenance: A Vision Transformer Approach within Urban 4.0
by Mariem Bounabi, Rida Azmi, Jérôme Chenal, El Bachir Diop, Seyid Abdellahi Ebnou Abdem, Meriem Adraoui, Mohammed Hlal and Imane Serbouti
Technologies 2024, 12(10), 192; https://doi.org/10.3390/technologies12100192 - 7 Oct 2024
Viewed by 1781
Abstract
The advancement to Urban 4.0 requires urban digitization and predictive maintenance of infrastructure to improve efficiency, durability, and quality of life. This study aims to integrate intelligent technologies for the predictive maintenance of photovoltaic panel systems, which serve as essential smart city renewable energy sources. In addition, we employ vision transformers (ViT), a deep learning architecture devoted to evolving image analysis, to detect anomalies in PV systems. The ViT model is pre-trained on ImageNet to exploit a comprehensive set of relevant visual features from the PV images and classify the input PV panel. Furthermore, the developed system was integrated into a web application that allows users to upload PV images, automatically detect anomalies, and provide detailed panel information, such as PV panel type, defect probability, and anomaly status. A comparative study using several convolutional neural network architectures (VGG, ResNet, and AlexNet) and the ViT transformer was conducted. Therefore, the adopted ViT model performs excellently in anomaly detection, where the ViT achieves an AUC of 0.96. Finally, the proposed approach excels at the prompt identification of potential defects detection, reducing maintenance costs, advancing equipment lifetime, and optimizing PV system implementation. Full article
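A compact sketch of what fine-tuning an ImageNet-pretrained ViT for PV panel image classification can look like (library choice, model name, and the two-class head are illustrative assumptions, not the authors' implementation):

```python
import timm
import torch
from torch import nn

# Load a ViT pre-trained on ImageNet and replace its head with a 2-class output
# (e.g., 0 = normal panel, 1 = anomalous panel).
model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=2)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One fine-tuning step on a batch of (B, 3, 224, 224) PV images."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```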
20 pages, 4837 KiB  
Article
Optical Particle Tracking in the Pneumatic Conveying of Metal Powders through a Thin Capillary Pipe
by Lorenzo Pedrolli, Luigi Fraccarollo, Beatriz Achiaga and Alejandro Lopez
Technologies 2024, 12(10), 191; https://doi.org/10.3390/technologies12100191 - 3 Oct 2024
Viewed by 3544
Abstract
Directed Energy Deposition (DED) processes necessitate a consistent material flow to the melt pool, typically achieved through pneumatic conveying of metal powder via thin pipes. This study aims to record and analyze the multiphase fluid–solid flow. An experimental setup utilizing a high-speed camera and specialized optics was constructed, and the flow through thin transparent pipes was recorded. The resulting information was analyzed and compared with coupled Computational Fluid Dynamics-Discrete Element Modeling (CFD-DEM) simulations, with special attention to the solids flow fluctuations. The proposed methodology shows a significant improvement in accuracy and reliability over existing approaches, particularly in capturing flow rate fluctuations and particle velocity distributions in small-scale systems. Moreover, it allows for accurately analyzing Particle Size Distribution (PSD) in the same setup. This paper details the experimental design, video analysis using particle tracking, and a novel method for deriving volumetric concentrations and flow rate from flat images. The findings confirm the accuracy of the CFD-DEM simulations and provide insights into the dynamics of pneumatic conveying and individual particle movement, with the potential to improve DED efficiency by reducing variability in material deposition rates. Full article
(This article belongs to the Section Manufacturing Technology)
25 pages, 3089 KiB  
Article
A Hybrid Trio-Deep Feature Fusion Model for Improved Skin Cancer Classification: Merging Dermoscopic and DCT Images
by Omneya Attallah
Technologies 2024, 12(10), 190; https://doi.org/10.3390/technologies12100190 - 3 Oct 2024
Viewed by 1766
Abstract
The precise and prompt identification of skin cancer is essential for efficient treatment. Variations in colour within skin lesions are critical signs of malignancy; however, discrepancies in imaging conditions may inhibit the efficacy of deep learning models. Numerous previous investigations have neglected this problem, frequently depending on deep features from a singular layer of an individual deep learning model. This study presents a new hybrid deep learning model that integrates discrete cosine transform (DCT) with multi-convolutional neural network (CNN) structures to improve the classification of skin cancer. Initially, DCT is applied to dermoscopic images to enhance and correct colour distortions in these images. After that, several CNNs are trained separately with the dermoscopic images and the DCT images. Next, deep features are obtained from two deep layers of each CNN. The proposed hybrid model consists of triple deep feature fusion. The initial phase involves employing the discrete wavelet transform (DWT) to merge multidimensional attributes obtained from the first layer of each CNN, which lowers their dimension and provides time–frequency representation. In addition, for each CNN, the deep features of the second deep layer are concatenated. Afterward, in the subsequent deep feature fusion stage, for each CNN, the merged first-layer features are combined with the second-layer features to create an effective feature vector. Finally, in the third deep feature fusion stage, these bi-layer features of the various CNNs are integrated. Through the process of training multiple CNNs on both the original dermoscopic photos and the DCT-enhanced images, retrieving attributes from two separate layers, and incorporating attributes from the multiple CNNs, a comprehensive representation of attributes is generated. Experimental results showed 96.40% accuracy after trio-deep feature fusion. This shows that merging DCT-enhanced images and dermoscopic photos can improve diagnostic accuracy. The hybrid trio-deep feature fusion model outperforms individual CNN models and most recent studies, thus proving its superiority. Full article
(This article belongs to the Special Issue Medical Imaging & Image Processing III)
19 pages, 6834 KiB  
Article
Advancing Nanopulsed Plasma Bubbles for the Degradation of Organic Pollutants in Water: From Lab to Pilot Scale
by Stauros Meropoulis and Christos A. Aggelopoulos
Technologies 2024, 12(10), 189; https://doi.org/10.3390/technologies12100189 - 3 Oct 2024
Viewed by 1556
Abstract
The transition from lab-scale studies to pilot-scale applications is a critical step in advancing water remediation technologies. While laboratory experiments provide valuable insights into the underlying mechanisms and method effectiveness, pilot-scale studies are essential for evaluating their practical feasibility and scalability. This progression addresses challenges related to operational conditions, effectiveness and energy requirements in real-world scenarios. In this study, the potential of nanopulsed plasma bubbles, when scaled up from a lab environment, was explored by investigating critical experimental parameters, such as plasma gas, pulse voltage, and pulse repetition rate, while also analyzing plasma-treated water composition. To validate the broad effectiveness of this method, various classes of highly toxic organic pollutants were examined in terms of pollutant degradation efficiency and energy requirements. The pilot-scale plasma bubble reactor generated a high concentration of short-lived reactive species with minimal production of long-lived species. Additionally, successful degradation of all pollutants was achieved in both lab- and pilot-scale setups, with even lower electrical energy-per-order (EEO) values at the pilot scale, 2–3 orders of magnitude lower compared to other advanced oxidation processes. This study aimed to bridge the gap between lab-scale plasma bubbles and upscaled systems, supporting the rapid, effective, and energy-efficient destruction of organic pollutants in water. Full article
(This article belongs to the Section Environmental Technology)
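The electrical energy-per-order (EEO) figure of merit cited above is commonly defined as the electrical energy needed to reduce the pollutant concentration by one order of magnitude in a unit volume (standard definition, independent of this particular reactor):

\[
E_{EO} \;=\; \frac{P\,t}{V\,\log_{10}\!\left(C_0/C_t\right)}\ \ \left[\mathrm{kWh\,m^{-3}\,order^{-1}}\right],
\]

where \(P\) is the electrical power drawn by the plasma source, \(t\) the treatment time, \(V\) the treated volume, and \(C_0\), \(C_t\) the initial and final pollutant concentrations.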
18 pages, 12726 KiB  
Article
Quad-Band Rectifier Circuit Design for IoT Applications
by Ioannis D. Bougas, Maria S. Papadopoulou, Achilles D. Boursianis, Sotirios Sotiroudis, Zaharias D. Zaharis and Sotirios K. Goudos
Technologies 2024, 12(10), 188; https://doi.org/10.3390/technologies12100188 - 2 Oct 2024
Viewed by 1527
Abstract
In this work, a novel quad-band rectifier circuit is introduced for RF energy harvesting and Internet of Things (IoT) applications. The proposed rectifier operates in the Wi-Fi frequency band and can supply low-power sensors and systems used in IoT services. The circuit operates at 2.4, 3.5, 5, and 5.8 GHz. The proposed RF-to-DC rectifier is designed based on Delon theory and Greinacher topology on an RT/Duroid 5880 substrate. The results show that our proposed circuit can harvest RF energy from the environment, providing maximum power conversion efficiency (PCE) greater than 81% when the output load is 0.511 kΩ and the input power is 12 dBm. In this work, we provide a comprehensive design framework for an affordable RF-to-DC rectifier. Our circuit performs better than similar designs in the literature. This rectifier could be integrated into an IoT node to harvest RF energy, thereby providing a green energy source. The IoT node can operate at various frequencies. Full article
(This article belongs to the Special Issue IoT-Enabling Technologies and Applications)
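For reference, the power conversion efficiency reported above is the ratio of rectified DC output power to incident RF input power (standard definition):

\[
\mathrm{PCE} \;=\; \frac{P_{\mathrm{DC,out}}}{P_{\mathrm{RF,in}}} \;=\; \frac{V_{\mathrm{out}}^{2}}{R_L\,P_{\mathrm{RF,in}}},
\]

so that, roughly, a 12 dBm (about 15.8 mW) input converted at 81% efficiency corresponds to about 12.8 mW of DC power, i.e., roughly 2.6 V across the 0.511 kΩ load.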
23 pages, 3516 KiB  
Article
Proposed Modbus Extension Protocol and Real-Time Communication Timing Requirements for Distributed Embedded Systems
by Nicoleta Cristina Găitan, Ionel Zagan and Vasile Gheorghiță Găitan
Technologies 2024, 12(10), 187; https://doi.org/10.3390/technologies12100187 - 2 Oct 2024
Viewed by 1652
Abstract
The general evolution of fieldbus systems has been variously affected by both electrical engineering and computer science. First, the main contribution undoubtedly originated from network IT systems, when the Open Systems Interconnection model was presented. This seven-layer reference model was and remains the foundation for the development of numerous advanced communication protocols. In this paper, the conducted research resulted in a major contribution; specifically, it describes the mathematical model for the Modbus protocol and defines the acquisition cycle model that corresponds to incompletely defined protocols, in order to provide a timestamp and achieve temporal consistency for the proposed Modbus Extension. The derived technical contribution of the authors is to exemplify the functionality of a typical industrial protocol that can be decomposed to improve the performance of data acquisition systems. Research results in this area have significant implications for innovations in industrial automation networking because of increasing distributed installations and Industrial Internet of Things (IIoT) applications. Full article
17 pages, 504 KiB  
Article
A Hybrid Deep Learning Approach with Generative Adversarial Network for Credit Card Fraud Detection
by Ibomoiye Domor Mienye and Theo G. Swart
Technologies 2024, 12(10), 186; https://doi.org/10.3390/technologies12100186 - 2 Oct 2024
Cited by 2 | Viewed by 1731
Abstract
Credit card fraud detection is a critical challenge in the financial industry, with substantial economic implications. Conventional machine learning (ML) techniques often fail to adapt to evolving fraud patterns and underperform with imbalanced datasets. This study proposes a hybrid deep learning framework that integrates Generative Adversarial Networks (GANs) with Recurrent Neural Networks (RNNs) to enhance fraud detection capabilities. The GAN component generates realistic synthetic fraudulent transactions, addressing data imbalance and enhancing the training set. The discriminator, implemented using various DL architectures, including Simple RNN, Long Short-Term Memory (LSTM) networks, and Gated Recurrent Units (GRUs), is trained to distinguish between real and synthetic transactions and further fine-tuned to classify transactions as fraudulent or legitimate. Experimental results demonstrate significant improvements over traditional methods, with the GAN-GRU model achieving a sensitivity of 0.992 and specificity of 1.000 on the European credit card dataset. This work highlights the potential of GANs combined with deep learning architectures to provide a more effective and adaptable solution for credit card fraud detection. Full article
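A minimal sketch of the GAN-plus-recurrent-discriminator idea described above (shapes, layer sizes, and the single adversarial update are illustrative assumptions; the paper's exact GAN-GRU architecture is not reproduced):

```python
import numpy as np
from tensorflow.keras import layers, Model

SEQ_LEN, N_FEATURES, LATENT_DIM = 30, 10, 32  # hypothetical transaction-sequence dimensions

def build_generator() -> Model:
    z = layers.Input(shape=(LATENT_DIM,))
    x = layers.Dense(128, activation="relu")(z)
    x = layers.Dense(SEQ_LEN * N_FEATURES, activation="tanh")(x)
    x = layers.Reshape((SEQ_LEN, N_FEATURES))(x)  # synthetic "fraud-like" sequence
    return Model(z, x, name="generator")

def build_discriminator() -> Model:
    seq = layers.Input(shape=(SEQ_LEN, N_FEATURES))
    h = layers.GRU(64)(seq)                         # recurrent feature extractor
    out = layers.Dense(1, activation="sigmoid")(h)  # real/synthetic (later: fraud/legit) score
    return Model(seq, out, name="discriminator")

generator, discriminator = build_generator(), build_discriminator()
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

discriminator.trainable = False                     # freeze the critic while training the generator
z_in = layers.Input(shape=(LATENT_DIM,))
gan = Model(z_in, discriminator(generator(z_in)))
gan.compile(optimizer="adam", loss="binary_crossentropy")

# One adversarial update with stand-in data (real fraud sequences would go here).
real = np.random.rand(64, SEQ_LEN, N_FEATURES).astype("float32")
noise = np.random.normal(size=(64, LATENT_DIM)).astype("float32")
fake = generator.predict(noise, verbose=0)
discriminator.train_on_batch(np.concatenate([real, fake]),
                             np.concatenate([np.ones(64), np.zeros(64)]))
gan.train_on_batch(noise, np.ones(64))              # generator tries to fool the discriminator
```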
19 pages, 3048 KiB  
Review
Integrating Building Information Modelling and Artificial Intelligence in Construction Projects: A Review of Challenges and Mitigation Strategies
by Ayaz Ahmad Khan, Abdulkabir Opeyemi Bello, Mohammad Arqam and Fahim Ullah
Technologies 2024, 12(10), 185; https://doi.org/10.3390/technologies12100185 - 2 Oct 2024
Viewed by 3164
Abstract
Artificial intelligence (AI), including machine learning and decision support systems, can deploy complex algorithms to learn sufficiently from the large corpus of building information modelling (BIM) data. An integrated BIM-AI system can leverage the insights to make smart and informed decisions. Hence, the integration of BIM-AI offers vast opportunities to extend the possibilities of innovations in the design and construction of projects. However, this synergy faces unprecedented challenges. This study conducted a systematic literature review of the challenges and constraints to BIM-AI integration in the construction industry and categorised them into different taxonomies. It used 64 articles published between 2015 and July 2024, retrieved from the Scopus database using the PRISMA protocol. The findings revealed thirty-nine (39) challenges clustered into six taxonomies: technical, knowledge, data, organisational, managerial, and financial. The mean index score analysis revealed that financial (µ = 30.50) challenges are the most significant, followed by organisational (µ = 23.86) and technical (µ = 22.29) challenges. Using Pareto analysis, the study highlighted the twenty (20) most important BIM-AI integration challenges. The study further developed strategic mitigation maps containing strategies and targeted interventions to address the identified challenges to BIM-AI integration. The findings provide insights into the competing issues stifling BIM-AI integration in construction and provide targeted interventions to improve synergy. Full article
17 pages, 7083 KiB  
Article
FPGA Implementation of Sliding Mode Control and Proportional-Integral-Derivative Controllers for a DC–DC Buck Converter
by Sandra Huerta-Moro, Jonathan Daniel Tavizón-Aldama and Esteban Tlelo-Cuautle
Technologies 2024, 12(10), 184; https://doi.org/10.3390/technologies12100184 - 1 Oct 2024
Viewed by 1522
Abstract
DC–DC buck converters have been designed by incorporating different control stages to drive the switches. Among the most commonly used controllers, the sliding mode control (SMC) and proportional-integral-derivative (PID) controller have shown advantages in accomplishing fast slew rate, reducing settling time and mitigating overshoot. The proposed work introduces the implementation of both SMC and PID controllers by using the field-programmable gate array (FPGA) device. The FPGA is chosen to exploit its main advantage for fast verification and prototyping of the controllers. In this manner, a DC–DC buck converter is emulated on an FPGA by applying an explicit multi-step numerical method. The SMC controller is synthesized into the FPGA by using a signum function, and the PID is synthesized by applying the difference quotient method to approximate the derivative action, and the second-order Adams–Bashforth method to approximate the integral action. The FPGA synthesis of the converter and controllers is performed by designing digital blocks using computer arithmetic of 32 and 64 bits, in fixed-point format. The experimental results are shown on an oscilloscope by using a digital-to-analog converter to observe the voltage regulation generated by the SMC and PID controllers on the DC–DC buck converter. Full article
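The discretizations mentioned above can be written compactly (generic form with sampling period \(T_s\) and tracking error \(e_k\); not the authors' fixed-point FPGA code). The derivative term uses the difference quotient, the integral term the second-order Adams–Bashforth update, and the SMC law switches on the sign of the sliding surface \(s_k\):

\[
u_k^{\mathrm{PID}} = K_p e_k + K_i I_k + K_d\,\frac{e_k - e_{k-1}}{T_s},
\qquad
I_k = I_{k-1} + T_s\!\left(\tfrac{3}{2}e_{k-1} - \tfrac{1}{2}e_{k-2}\right),
\qquad
u_k^{\mathrm{SMC}} = -K\,\operatorname{sgn}(s_k).
\]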
20 pages, 2281 KiB  
Article
Brain Tumor Segmentation from Optimal MRI Slices Using a Lightweight U-Net
by Fernando Daniel Hernandez-Gutierrez, Eli Gabriel Avina-Bravo, Daniel F. Zambrano-Gutierrez, Oscar Almanza-Conejo, Mario Alberto Ibarra-Manzano, Jose Ruiz-Pinales, Emmanuel Ovalle-Magallanes and Juan Gabriel Avina-Cervantes
Technologies 2024, 12(10), 183; https://doi.org/10.3390/technologies12100183 - 1 Oct 2024
Viewed by 2494
Abstract
The timely detection and accurate localization of brain tumors is crucial in preserving people’s quality of life. Thankfully, intelligent computational systems have proven invaluable in addressing these challenges. In particular, the UNET model can extract essential pixel-level features to automatically identify the tumor’s location. However, known deep learning-based works usually directly feed the 3D volume into the model, which causes excessive computational complexity. This paper presents an approach to boost the UNET network, reducing computational workload while maintaining superior efficiency in locating brain tumors. This concept could benefit portable or embedded recognition systems with limited resources for operating in real time. This enhancement involves an automatic slice selection from the MRI T2 modality volumetric images containing the most relevant tumor information and implementing an adaptive learning rate to avoid local minima. Compared with the original model (7.7 M parameters), the proposed UNET model uses only 2 M parameters and was tested on the BraTS 2017, 2020, and 2021 datasets. Notably, the BraTS2021 dataset provided outstanding binary metric results: 0.7807 for the Intersection Over the Union (IoU), 0.860 for the Dice Similarity Coefficient (DSC), 0.656 for the Sensitivity, and 0.9964 for the Specificity compared to vanilla UNET. Full article
11 pages, 1039 KiB  
Article
Granular Weighted Fuzzy Approach Applied to Short-Term Load Demand Forecasting
by Cesar Vinicius Züge and Leandro dos Santos Coelho
Technologies 2024, 12(10), 182; https://doi.org/10.3390/technologies12100182 - 1 Oct 2024
Viewed by 1137
Abstract
The development of accurate models to forecast load demand across different time horizons is challenging due to demand patterns and endogenous variables that affect short-term and long-term demand. This paper presents two contributions. First, it addresses the problem of the accuracy of probabilistic forecasting models for short-term time series in which endogenous variables interfere, emphasizing a low-computational-cost and efficient approach such as Granular Weighted Multivariate Fuzzy Time Series (GranularWMFTS), based on the fuzzy information granules method, and a univariate form named Probabilistic Fuzzy Time Series. Secondly, it compares time series forecasting models based on algorithms such as Holt-Winters, Auto-Regressive Integrated Moving Average, High Order Fuzzy Time Series, Weighted High Order Fuzzy Time Series, and Multivariate Fuzzy Time Series (MVFTS), with the comparison based on the Root Mean Squared Error, Symmetric Mean Absolute Percentage Error, and Theil's U Statistic criteria under a 5% error criterion. Finally, it presents the concept and nuances of the forecasting approaches evaluated, highlighting the differences between the fuzzy algorithms in terms of fuzzy logical relationships, fuzzy logical relationship groups, and fuzzification in the training phase. Overall, the GranularWMFTS and weighted MVFTS approaches outperformed the other evaluated forecasting methods on the adopted performance criteria, with a low computational cost. Full article
(This article belongs to the Collection Electrical Technologies)
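For readers unfamiliar with the evaluation criteria named in the abstract, the following sketch computes RMSE, SMAPE, and Theil's U statistic in their common forms (the paper may use slightly different normalizations); the toy hourly load series is hypothetical.

```python
# Minimal sketch of the three error criteria, assuming the usual definitions.
import numpy as np

def rmse(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def smape(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(100.0 * np.mean(2.0 * np.abs(y_pred - y_true) /
                                 (np.abs(y_true) + np.abs(y_pred))))

def theils_u(y_true, y_pred):
    """Theil's U2: model errors relative to a naive 'no-change' forecast."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    num = np.sqrt(np.mean(((y_pred[1:] - y_true[1:]) / y_true[:-1]) ** 2))
    den = np.sqrt(np.mean(((y_true[1:] - y_true[:-1]) / y_true[:-1]) ** 2))
    return float(num / den)

# Example on a toy hourly load series (MW):
actual   = [102.0, 110.0, 121.0, 118.0, 125.0, 130.0]
forecast = [100.0, 112.0, 119.0, 120.0, 124.0, 128.0]
print(rmse(actual, forecast), smape(actual, forecast), theils_u(actual, forecast))
```
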
37 pages, 6728 KiB  
Article
Optimizing Cyber Threat Detection in IoT: A Study of Artificial Bee Colony (ABC)-Based Hyperparameter Tuning for Machine Learning
by Ayoub Alsarhan, Mahmoud AlJamal, Osama Harfoushi, Mohammad Aljaidi, Malek Mahmoud Barhoush, Noureddin Mansour, Saif Okour, Sarah Abu Ghazalah and Dimah Al-Fraihat
Technologies 2024, 12(10), 181; https://doi.org/10.3390/technologies12100181 - 30 Sep 2024
Viewed by 1602
Abstract
In the rapidly evolving landscape of the Internet of Things (IoT), cybersecurity remains a critical challenge due to the diverse and complex nature of network traffic and the increasing sophistication of cyber threats. This study investigates the application of the Artificial Bee Colony (ABC) algorithm for hyperparameter optimization (HPO) in machine learning classifiers, specifically focusing on Decision Trees, Support Vector Machines (SVM), and K-Nearest Neighbors (KNN) for IoT network traffic analysis and malware detection. Initially, the basic machine learning models demonstrated accuracies ranging from 69.68% to 99.07%, reflecting their limitations in fully adapting to the varied IoT environments. Through the employment of the ABC algorithm for HPO, significant improvements were achieved, with optimized classifiers reaching up to 100% accuracy, precision, recall, and F1-scores in both training and testing stages. These results highlight the profound impact of HPO in refining model decision boundaries, reducing overfitting, and enhancing generalization capabilities, thereby contributing to the development of more robust and adaptive security frameworks for IoT environments. This study further demonstrates the ABC algorithm’s generalizability across different IoT networks and threats, positioning it as a valuable tool for advancing cybersecurity in increasingly complex IoT ecosystems. Full article
(This article belongs to the Section Information and Communication Technologies)
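The snippet below sketches how an ABC-style search can tune classifier hyperparameters by cross-validated accuracy, here for an SVM's C and gamma on synthetic data with scikit-learn. Colony size, abandonment limit, and search bounds are illustrative assumptions rather than the study's actual configuration, and the employed and onlooker phases are merged for brevity.

```python
# Simplified Artificial Bee Colony (ABC) hyperparameter search (sketch only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=400, n_features=20, random_state=0)
LOW, HIGH = np.array([1e-2, 1e-4]), np.array([1e2, 1e0])  # bounds for (C, gamma)

def fitness(pos):
    C, gamma = pos
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

n_sources, limit, n_cycles = 10, 5, 20
food = rng.uniform(LOW, HIGH, size=(n_sources, 2))   # candidate hyperparameter sets
fit = np.array([fitness(p) for p in food])
trials = np.zeros(n_sources)

for _ in range(n_cycles):
    # Employed/onlooker phase: perturb each food source toward a random neighbour.
    for i in range(n_sources):
        k = rng.integers(n_sources)
        phi = rng.uniform(-1, 1, size=2)
        candidate = np.clip(food[i] + phi * (food[i] - food[k]), LOW, HIGH)
        f = fitness(candidate)
        if f > fit[i]:
            food[i], fit[i], trials[i] = candidate, f, 0
        else:
            trials[i] += 1
    # Scout phase: abandon sources that stopped improving and re-sample them.
    for i in np.where(trials > limit)[0]:
        food[i] = rng.uniform(LOW, HIGH)
        fit[i], trials[i] = fitness(food[i]), 0

best = food[np.argmax(fit)]
print(f"best C={best[0]:.3g}, gamma={best[1]:.3g}, CV accuracy={fit.max():.3f}")
```
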
22 pages, 3718 KiB  
Article
Comparing Optical and Custom IoT Inertial Motion Capture Systems for Manual Material Handling Risk Assessment Using the NIOSH Lifting Index
by Manuel Gutierrez, Britam Gomez, Gustavo Retamal, Guisella Peña, Enrique Germany, Paulina Ortega-Bastidas and Pablo Aqueveque
Technologies 2024, 12(10), 180; https://doi.org/10.3390/technologies12100180 - 30 Sep 2024
Viewed by 1423
Abstract
Assessing musculoskeletal disorders (MSDs) in the workplace is vital for improving worker health and safety, reducing costs, and increasing productivity. Traditional hazard identification methods are often inefficient, particularly in detecting complex risks, which may compromise risk management. This study introduces a semi-automatic platform using two motion capture systems: an optical system (OptiTrack®) and a Bluetooth Low Energy (BLE)-based system with inertial measurement units (IMUs) developed at the Biomedical Engineering Laboratory, Universidad de Concepción, Chile. These systems, tested on 20 participants (10 women and 10 men, aged 30 ± 9 years, without MSDs), facilitate risk assessments via the digitized NIOSH Lifting Index method. Analysis of the ergonomically significant variables (H, V, A, D) and calculation of the Recommended Weight Limit (RWL) and Lifting Index (LI) showed that both systems aligned with expected ergonomic standards, although significant differences were observed in vertical displacement (V), horizontal displacement (H), and trunk rotation (A), indicating areas for improvement, especially for the BLE system. The BLE inertial MoCap system recorded mean heights of 33.87 cm (SD = 4.46) and vertical displacements of 13.17 cm (SD = 4.75), while OptiTrack® recorded mean heights of 30.12 cm (SD = 2.91) and vertical displacements of 15.67 cm (SD = 2.63). Despite the greater variability observed in the BLE measurements, both systems accurately captured the absolute vertical displacement (D), with means of 32.05 cm (SD = 7.36) for BLE and 31.80 cm (SD = 3.25) for OptiTrack®. Performance analysis showed high precision for both systems, with BLE and OptiTrack® each achieving precision rates of 98.5%. Sensitivity, however, was lower for BLE (97.5%) than for OptiTrack® (98.7%). The BLE system's F1 score was 97.9%, while OptiTrack® scored 98.6%, indicating that both systems can reliably assess ergonomic risk. These findings demonstrate the potential of BLE-based IMUs for workplace ergonomics, though further improvements in measurement accuracy are needed. The user-friendly BLE-based system and semi-automatic platform significantly enhance risk assessment efficiency across various workplace environments. Full article
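As context for the H, V, A, and D variables and the RWL/LI figures mentioned above, the sketch below implements the standard metric form of the revised NIOSH lifting equation; the frequency and coupling multipliers, the load, and the example measurements are hypothetical and do not reproduce the platform's digitized implementation.

```python
# Minimal sketch of the revised NIOSH lifting equation (metric form).
# H, V, D are in cm, A in degrees, load in kg. FM and CM normally come from
# look-up tables, so hypothetical values are passed in directly here.
def niosh_rwl(H, V, D, A, FM=1.0, CM=1.0, LC=23.0):
    HM = 25.0 / H                      # horizontal multiplier
    VM = 1.0 - 0.003 * abs(V - 75.0)   # vertical multiplier
    DM = 0.82 + 4.5 / D                # distance (vertical travel) multiplier
    AM = 1.0 - 0.0032 * A              # asymmetry (trunk rotation) multiplier
    return LC * HM * VM * DM * AM * FM * CM

def lifting_index(load_kg, rwl_kg):
    return load_kg / rwl_kg            # LI > 1 indicates elevated MSD risk

# Example with illustrative measurements (not the study's data):
rwl = niosh_rwl(H=30, V=15, D=32, A=20, FM=0.94, CM=0.95)
print(round(rwl, 2), round(lifting_index(10.0, rwl), 2))
```
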
26 pages, 42050 KiB  
Article
Study of the Nature of the Destruction of Coatings Based on the ZrN System Deposited on a Titanium Alloy Substrate
by Alexander Metel, Alexey Vereschaka, Catherine Sotova, Anton Seleznev, Nikolay Sitnikov, Filipp Milovich, Kirill Makarevich and Sergey Grigoriev
Technologies 2024, 12(10), 179; https://doi.org/10.3390/technologies12100179 - 30 Sep 2024
Viewed by 1233
Abstract
The fracture strength of coatings based on the ZrN system, modified by the introduction of Ti, Nb, and Hf and deposited on a titanium alloy substrate, was compared in a scratch test. The coatings were deposited using Controlled Accelerated Arc (CAA-PVD) technology. In coatings that simultaneously include Zr and Ti, a nanolayer structure is formed, while coatings without Ti form a monolithic single-layer structure. The comparison was carried out according to two parameters: adhesion strength to the substrate and overall coating strength. The (Zr,Hf)N coating showed better resistance to destruction but worse adhesion to the substrate; as a result, although the coating is retained directly in the scribing groove, a large area of delamination and destruction forms around it. The (Ti,Zr,Nb)N coating, despite its somewhat lower strength, has high adhesion to the substrate, and no noticeable delamination is observed along the groove boundary. This paper not only compares the fracture resistance of the various coatings deposited on a titanium alloy substrate but also investigates how the nature of this fracture depends on the composition of the coatings. Full article
12 pages, 12365 KiB  
Article
Comparing Elastocaloric Cooling and Desiccant Wheel Dehumidifiers for Atmospheric Water Harvesting
by John LaRocco, Qudsia Tahmina, John Simonis and Vidhaath Vedati
Technologies 2024, 12(10), 178; https://doi.org/10.3390/technologies12100178 - 30 Sep 2024
Viewed by 3858
Abstract
Approximately two billion people worldwide lack access to clean drinking water, negatively impacting national security, hygiene, and agriculture. Atmospheric water harvesting (AWH) is the conversion of ambient humidity into clean water; however, conventional dehumidification is energy-intensive. Improvement in AWH may be achieved with elastocaloric cooling, using temperature-sensitive materials in active thermoregulation. Potential benefits, compared to conventional desiccant wheel designs, include substantial reductions in energy use, size, and complexity. A nickel–titanium (NiTi) elastocaloric water harvester was designed and compared with a desiccant wheel design under controlled conditions of relative humidity, air volume, and power. In a 30 min interval, the NiTi device harvested more water on average at 0.18 ± 0.027 mL/WH, compared to the 0.1567 ± 0.023 mL/WH of the desiccant wheel harvester. Moreover, the NiTi harvester required half the power input and was thermoregulated more efficiently. Future work will focus on mechanical design parameter optimization. Elastocaloric cooling is a promising advancement in dehumidification, making AWH more economical and feasible. Full article
(This article belongs to the Section Environmental Technology)
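The mL/WH efficiency figures quoted above can be reproduced with simple arithmetic, assuming the unit denotes millilitres of water harvested per watt-hour of input energy over the 30 min run; the volume and power values in the example are hypothetical.

```python
# Back-of-the-envelope helper for harvested-water efficiency (mL per Wh),
# assuming mL/WH means millilitres per watt-hour; inputs below are hypothetical.
def ml_per_wh(volume_ml: float, power_w: float, duration_h: float) -> float:
    return volume_ml / (power_w * duration_h)

# Example: two hypothetical devices run for 0.5 h, the second drawing twice the power.
print(ml_per_wh(volume_ml=2.7, power_w=30.0, duration_h=0.5))   # ~0.18 mL/Wh
print(ml_per_wh(volume_ml=4.7, power_w=60.0, duration_h=0.5))   # ~0.157 mL/Wh
```
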
Previous Issue
Next Issue