Article

Energy Optimization in Ultrasound Tomography Through Sensor Reduction Supported by Machine Learning Algorithms

1 Research & Development Centre Netrix S.A., 20-704 Lublin, Poland
2 Institute of Public Administration and Business, WSEI University, 20-209 Lublin, Poland
3 Faculty of Mathematics and Information Technology, Lublin University of Technology, 20-618 Lublin, Poland
4 Faculty of Management, Lublin University of Technology, 20-618 Lublin, Poland
* Author to whom correspondence should be addressed.
Energies 2024, 17(21), 5406; https://doi.org/10.3390/en17215406
Submission received: 2 October 2024 / Revised: 15 October 2024 / Accepted: 28 October 2024 / Published: 30 October 2024
(This article belongs to the Special Issue Energy Management Systems Based on Industrial Artificial Intelligence)

Abstract

This paper focuses on reducing energy consumption in ultrasound tomography by utilizing machine learning techniques. The core idea is to investigate the feasibility of minimizing the number of measurement sensors without sacrificing prediction accuracy. This article evaluates the quality of reconstructions derived from data collected through two or three measurement channels. In subsequent steps, machine learning models are developed to predict the number, location, and size of the objects. A reliable object detection method is introduced, requiring less information than traditional signal analysis from multiple channels. Various machine learning models were tested and compared to validate the approach, with most demonstrating high accuracy or $R^2$ scores in their respective tasks. By reducing the number of sensors, the goal is to lower energy usage while maintaining high precision in localization. This study contributes to the ongoing research on energy efficiency in sensing and localization, especially in environments where resource optimization is crucial, such as remote or resource-limited settings.

1. Introduction

Ultrasound tomography, by definition, is based on the analysis of sound wave propagation to identify the structure of investigated objects. Wave propagation is affected by many factors, including wave interference and absorption, which makes its analysis a complex task. Such analysis has a wide range of applications, mainly in medicine, nondestructive testing, and industrial tomography. Ultrasound tomography takes into account various physical phenomena, including acoustic pressure and energy transfer, as well as technical aspects such as the use of piezoelectric materials as sources of ultrasonic waves [1,2]. It uses sound waves with frequencies above the range of human hearing (20 kHz), while typical diagnostic scanners operate in the range of 2 to 18 MHz.
In medicine, ultrasonic measurement methods are well developed and provide excellent results for imaging soft human tissue [3]. This imaging technique is used to visualize subcutaneous structures of the body, making it possible to show biological structures such as tendons, joints, muscles, and blood vessels. It also enables the examination of internal organs for possible pathologies or damage. Popular measurement heads are equipped with more than a hundred individual sensors, requiring efficient methods and algorithms for 2D or 3D image reconstruction. Medical ultrasound devices can work in different operation modes [4,5,6], including the A-mode, where a sensor scans a line through the body and records the echoes as a function of depth; the B-mode, where a 2D image is generated from a plane scan through the body by an array of transducers; and the M-mode, where M stands for motion, which uses a sequence of B-mode images to trace an organ boundary over its recorded range of motion. Moreover, these devices can operate in the Doppler mode, which measures and visualizes blood flow from its relative velocity, obtained by calculating the frequency shift within a particular sample volume. These ultrasound methods play a key role in patient diagnostics.
Another field in which ultrasound is used is nondestructive testing, the main purpose of which is to look for defects and discontinuities in a solid material in order to eliminate the weakening of components that they may cause [7,8,9,10]. Nondestructive testing (NDT) is a powerful technique used in various industries to assess the integrity of materials and the homogeneity of structures without causing any damage to the studied objects. In this technique, an ultrasonic transducer generates sound waves that travel through an object, and the echoes produced by discontinuities are analyzed to detect defects and assess material properties. Using this technique, engineering component failures can be detected; these are often caused by a combination of conditions, mainly poor design, material defects, and improper usage. Defects often form at the product manufacturing stage, for example, through lack of penetration and cracks in welds, the formation of porosity in castings, or improper material lamination. Ultrasonic nondestructive testing enables accurate and reliable inspections, ensuring the safety and reliability of critical assets [11].
One of the other fields in which ultrasound tomography is widely used is industrial process tomography (IPT), the primary purpose of which is to ensure the continuity of any industrial process. It allows for visualization and monitoring of closed industrial pipelines and process reactors in various sectors, such as food and drug production, wastewater treatment plants, and others [12,13,14,15,16]. An important feature of IPT is to ensure the continuity of the technological process as well as to detect critical situations in which damage to the equipment may occur. For this purpose, many other types of measurement methods are used, such as X-ray diffraction tomography [17], electrical impedance tomography [18], electrical capacitance tomography [19], and microtomography [20]. Process insights help to optimize work and increase productivity, which entails cost reductions.
Energy efficiency in tomography, especially in the context of sensor-based systems, has become a focal point of recent research due to the growing need for sustainable and resource-optimized technologies. Tomographic systems, such as ultrasound or computed tomography (CT), often rely on large sensor arrays that consume significant amounts of energy, leading to an interest in minimizing sensor use while maintaining image accuracy [21]. Recent developments in energy-efficient tomographic systems leverage techniques such as super-resolution convolutional neural networks (CNNs) to reduce the number of sensors without significantly sacrificing the quality of the reconstructed images. For example, Wójcik et al. demonstrated that reducing the number of sensors in ultrasound tomography and using CNNs can significantly reduce energy consumption while maintaining high accuracy in the reconstruction of images [22]. In addition to machine learning approaches, other techniques for reducing energy consumption include compressed sensing in sensor networks. Du et al. explored energy-efficient sensory data gathering in IoT networks, which could be applied to tomographic systems. By optimizing data acquisition through compressive sensing, the amount of data collected can be reduced, thereby decreasing the number of active sensors required and minimizing energy consumption [23]. The demand for energy-efficient medical imaging solutions has also driven innovations in CT systems. Studies such as those by Hasan et al. and Brown et al. have reported strategies to reduce energy consumption in CT imaging, including the adoption of optimized imaging protocols, machine learning algorithms for data processing, and the power-down of devices during idle periods [24,25]. These methods have the potential to be adapted for other tomographic modalities, including ultrasound tomography, where sensor arrays and imaging protocols can be optimized for lower power consumption. 
Other studies, such as that by Afat et al., focus on optimizing energy usage in MRI and CT by adjusting scan protocols and using deep learning algorithms to shorten scan times, which can further reduce energy demands in tomographic applications. These findings highlight the broad applicability of energy-saving strategies across various imaging modalities [26].
In summary, this paper not only contributes to the ongoing research on energy efficiency in tomography but also aligns with recent studies that explore the reduction of sensors and energy use in imaging technologies. The methods discussed herein, including sensor reduction supported by machine learning, fit within the larger body of literature aiming to make tomographic systems more energy-efficient while maintaining their diagnostic performance.
This article is organized as follows: Section 2 presents the process of data acquisition using ultrasound tomography and the main idea of the algorithm, which uses methods from the field of machine learning. Section 3 discusses the main results of this work, while Section 4 analyzes the energy optimization that can be achieved with the proposed solution and its limitations. Section 5 summarizes the key findings of our study.

2. Materials and Methods

This section includes a description of the measurement system consisting of an ultrasonic tomograph and a measurement tank, as well as a brief description of the hardware of the developed solution. Possible measurement and reconstruction methods are presented, as well as the principles of operation of the measurement device. The data acquisition process is discussed, and the machine learning methods used in the classification and regression tasks are presented.

2.1. Measurement System

The core of the measurement unit is an industrial ultrasonic tomograph (model 4.0, Netrix S.A. (Lublin, Poland)), specifically designed for analyzing multiphase industrial processes and detecting foreign objects via field-of-view reconstruction. The device integrates a motherboard with sixteen ultrasonic measurement cards, each equipped with four channels, offering modular and scalable performance. The hardware is housed in a rugged enclosure for easy transport and operation in various environmental conditions (Figure 1).
The device features a user-friendly touchscreen interface on the front panel, enabling the control of measurement parameters, reconstruction processes, and real-time monitoring. Measurement sensors can be connected via a 64 SMB connector made by DigiKey (Warsaw, Poland) or through the main Zero Insertion Force (ZIF) connector made by Mouser Electronics (Wroclaw, Poland), centralizing the ultrasonic transducer connections. Additionally, the device provides standard USB and RJ45 connectors (Mouser Electronics, Wroclaw, Poland) for flexible data transfer and communication.
The motherboard, built around an Altera Cyclone IV FPGA (Mouser Electronics, Wroclaw, Poland), serves as the data aggregation hub, quickly collecting raw measurements and transmitting them to a connected computer for further processing. A high-voltage converter ensures a continuous and stable power supply, while communication is facilitated by USB 3.0, leveraging the FTDI FT601 chip (FTDI, Glasgow, UK) for high-speed data transfer via a USB type C port.
The measurement boards, equipped with low noise amplifiers and bandpass filters, support three frequencies (40 kHz, 350 kHz, and 1 MHz) to ensure precise data acquisition. These boards are managed by a central STM32-series microcontroller (sourced from Mouser Electronics, Wroclaw, Poland), known for its reliability and processing efficiency. CAN interfaces (Mouser Electronics, Wroclaw, Poland) enable robust communication between the motherboard and the measurement boards, distributing critical parameters such as sampling frequency and operation modes (Figure 2).

2.2. Data Acquisition and Preprocessing

This study investigates how the waveform of an acoustic signal recorded by several transducers allows for the detection and localization of objects within a measurement area. To facilitate a better understanding of the research problem, it is necessary to briefly describe the procedure of signal generation, data aggregation, and the preprocessing steps applied before feeding the data into the machine learning models.
Let $N$ be the total number of transducers (in our case, $N = 32$), positioned equidistantly around the measuring tank, as shown in Figure 3. Each transducer serves both as a signal emitter and receiver, generating ultrasonic waves at a frequency of 400 kHz and recording the resulting waveform. For each transducer, we collect a time series of 8192 samples at a sampling frequency of 4 MHz. The recorded signal is amplified by 32 dB to ensure sufficient signal strength for accurate detection. Let $s_i(t)$ represent the signal emitted by the $i$-th transducer, where $i \in \{1, 2, \ldots, N\}$. The signal recorded at the $j$-th transducer is denoted by $r_j(t)$. For each pair of transducers $(i, j)$, the signal propagation between them can be expressed as follows:
$$r_j(t) = s_i(t) * h_{ij}(t) + n_j(t),$$

where $*$ denotes convolution, $h_{ij}(t)$ is the impulse response of the medium between transducers $i$ and $j$, and $n_j(t)$ represents the noise at the receiving transducer $j$. The goal of data acquisition is to recover $h_{ij}(t)$, which contains information about the object (inclusion) within the medium. Inclusions, which are made from PVC tubes, alter the acoustic properties of the medium, allowing for their detection through the changes in $h_{ij}(t)$.
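As a rough numerical illustration of this propagation model, the sketch below synthesizes a six-pulse 400 kHz burst and convolves it with an invented two-tap impulse response (a direct path plus one echo). The tap positions, echo amplitude, and noise level are illustrative assumptions, not measured values, and the record is shortened from the paper's 8192 samples for brevity.

```python
import numpy as np

rng = np.random.default_rng(42)

fs = 4_000_000          # sampling frequency: 4 MHz, as in the paper
n = 1024                # shortened from the paper's 8192 samples
t = np.arange(n) / fs

# Emitted burst: six pulses at 400 kHz, as described in Section 2.2
f0 = 400_000
burst_len = int(6 * fs / f0)           # six full periods = 60 samples
s = np.zeros(n)
s[:burst_len] = np.sin(2 * np.pi * f0 * t[:burst_len])

# Hypothetical impulse response h_ij: direct path plus a delayed,
# attenuated echo from an inclusion (both taps invented for illustration)
h = np.zeros(n)
h[0] = 1.0              # direct path
h[300] = 0.4            # echo arriving 300 samples (75 us) later

# r_j(t) = (s_i * h_ij)(t) + n_j(t)
r = np.convolve(s, h)[:n] + 0.01 * rng.standard_normal(n)
```

Recovering `h` from `r` and `s` (deconvolution) is what the reconstruction step effectively attempts.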
For each inclusion, the spatial coordinates $\mathbf{x}_{inc}$ are recorded in a coordinate system anchored to the first transducer. Each inclusion's position is represented as follows:

$$\mathbf{x}_{inc} = (r_{inc}, \theta_{inc}),$$

where $r_{inc}$ is the radial distance from the center of the tank, and $\theta_{inc}$ is the angular position relative to the first transducer. For each transducer, the system generates a signal consisting of six pulses, which propagates through the medium, and the corresponding waveforms are recorded across all channels. This process is repeated for every inclusion and for every possible transducer combination.
Once the raw signals are collected, the data undergo preprocessing to prepare them for machine learning. The preprocessing steps include the following:
  • Signal normalization: Each recorded signal is normalized to remove variations in signal amplitude due to differences in transducer sensitivity and amplification. For a given recorded signal $r_j(t)$, the normalized signal is given as follows:
    $$\hat{r}_j(t) = \frac{r_j(t)}{\max_t r_j(t)}.$$
  • Feature extraction: Key features are extracted from the time-domain signals, including the peak amplitude, signal energy, and time of flight (ToF) between transducers. The ToF, which reflects the time delay between signal emission and reception, is a critical feature for detecting inclusions. For each transducer pair $(i, j)$, the ToF is computed as follows:
    $$\mathrm{ToF}_{ij} = \arg\max_t r_j(t) - \arg\max_t s_i(t).$$
  • Rotational invariance: To account for the rotational symmetry of the transducer array, a rotation angle $\theta_i$ is applied to each channel's coordinate system. For a single channel, the rotation angle is calculated as follows:
    $$\theta_i = 2i\pi/N,$$
    where $N = 32$, and $i$ is the channel number. For pairs of sensors, the rotation is modified to $2(i - 0.5)\pi/N$. This preprocessing ensures that the machine learning model is invariant to the rotation of the inclusions within the measurement tank.
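The three preprocessing steps above can be sketched as small helper functions. The function names are ours, the ToF is expressed in seconds rather than samples, and the normalization divides by the absolute peak, a common robust variant of the formula in the text.

```python
import numpy as np

N = 32  # number of transducers, as in the paper

def normalize(r):
    """Peak-normalize a recorded waveform (absolute-peak variant)."""
    return r / np.max(np.abs(r))

def time_of_flight(r, s, fs=4_000_000):
    """ToF estimate: difference of the peak positions of r_j and s_i, in seconds."""
    return (np.argmax(r) - np.argmax(s)) / fs

def rotation_angle(i, n=N):
    """Rotation applied to the i-th channel's coordinate system."""
    return 2 * i * np.pi / n

def rotation_angle_pair(i, n=N):
    """Modified rotation for pairs of sensors."""
    return 2 * (i - 0.5) * np.pi / n
```

In practice, these functions would be applied per transducer pair to build the feature matrix fed to the models.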
The preprocessed data, consisting of normalized signals and extracted features, are then used as input to a machine learning algorithm for classification and regression tasks. The machine learning algorithm used in this study is a supervised learning model, where the inclusion positions serve as the ground truth labels for training. Let $\mathbf{X}$ represent the feature matrix containing the extracted signal features for all transducers and $\mathbf{y}$ be the corresponding labels representing the inclusion positions.
The training process involves minimizing the loss function $L(\mathbf{y}, \hat{\mathbf{y}})$, where $\hat{\mathbf{y}}$ represents the predicted inclusion positions. Depending on the specific algorithm, this could involve optimizing a classification loss (e.g., cross-entropy loss) or a regression loss (e.g., mean squared error). Cross-validation is employed to ensure generalizability and prevent overfitting, while hyperparameter tuning is used to optimize the model's performance.

2.3. Algorithm Structure

The main objective of this work is to develop a machine learning algorithm that is capable of predicting the coordinates of inclusions in a reservoir. The problem can be structured into two sub-tasks: predicting the number of inclusions within the tank and determining the spatial coordinates of these inclusions. Each task is handled using a distinct machine learning approach—classification for the first and regression for the second.
The first sub-task, predicting the number of inclusions, is a classification problem. Given the set of features extracted from the acoustic signals, the objective is to predict an integer value representing the number of inclusions, $n_{inc}$, where $n_{inc} \in \{0, 1, 2, \ldots, N_{max}\}$, and $N_{max}$ is the maximum number of inclusions considered in this study. Let $\mathbf{X}$ represent the feature matrix, where each row corresponds to a set of extracted features from the transducer data, and let $\mathbf{y}_{class}$ denote the vector of true labels representing the number of inclusions:

$$\mathbf{y}_{class} = [n_{inc,1}, n_{inc,2}, \ldots, n_{inc,m}],$$

where $m$ is the number of data samples in the dataset. The classification model aims to learn a mapping $f_{class}: \mathbf{X} \to \mathbf{y}_{class}$.
We evaluate the performance of the classification models using standard metrics such as accuracy and the F1 score. The accuracy is defined as follows:

$$\mathrm{Accuracy} = \frac{1}{m}\sum_{i=1}^{m} \mathbf{1}(y_{class,i} = \hat{y}_{class,i}),$$

where $\mathbf{1}(\cdot)$ is the indicator function, and $\hat{y}_{class,i}$ is the predicted label for the $i$-th sample. The F1 score, the harmonic mean of precision and recall, is computed as follows:

$$F_1 = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}},$$

where:

$$\mathrm{Precision} = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FP}}, \qquad \mathrm{Recall} = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FN}},$$

with TP, FP, and FN denoting true positives, false positives, and false negatives, respectively.
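As a concrete reference, these metrics can be computed directly from predictions. The sketch below assumes binary labels; the paper's task is multi-class, where the F1 score is typically averaged over per-class scores.

```python
import numpy as np

def accuracy(y_true, y_pred):
    """Fraction of samples whose predicted label equals the true label."""
    return np.mean(y_true == y_pred)

def f1_binary(y_true, y_pred):
    """F1 score for binary labels (0/1): harmonic mean of precision and recall."""
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

Library implementations (e.g., scikit-learn's `accuracy_score` and `f1_score`) follow the same definitions.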
Among the tested models, extremely randomized trees (extra trees) performed best in predicting the number of inclusions. Extra trees is an ensemble learning method similar to random forest, but with additional randomness in the tree-building process: at each node, split thresholds are drawn at random for randomly selected features rather than fully optimized. The model learns by growing multiple trees on random subsets of data and features, with predictions aggregated across all trees:

$$\hat{y}_{class} = \frac{1}{T}\sum_{t=1}^{T} \hat{y}_{class,t},$$

where $T$ is the number of trees and $\hat{y}_{class,t}$ is the prediction of the $t$-th tree [27]. The added randomness helps to reduce overfitting and leads to improved generalization, especially for datasets with high variability.
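A minimal sketch of such a classifier using scikit-learn's `ExtraTreesClassifier` on synthetic data, which stands in for the extracted signal features; the dataset shape, class count, and hyperparameters are illustrative, not those of the paper.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in: each class plays the role of an inclusion count (0, 1, 2)
X, y = make_classification(n_samples=1000, n_features=12, n_informative=8,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = ExtraTreesClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
score = clf.score(X_te, y_te)  # mean accuracy on the held-out set
```

The `n_estimators` parameter corresponds to $T$ in the averaging formula above.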
The second sub-task is a regression problem, where the goal is to predict the spatial coordinates of each inclusion, $\mathbf{x}_{inc} = (r_{inc}, \theta_{inc})$, based on the feature matrix $\mathbf{X}$. Here, $r_{inc}$ represents the radial distance of the inclusion from the center of the tank, and $\theta_{inc}$ is the angular position relative to the first transducer.
The regression models aim to learn a mapping $f_{reg}: \mathbf{X} \to \mathbf{y}_{reg}$, where $\mathbf{y}_{reg} = [\mathbf{x}_{inc,1}, \mathbf{x}_{inc,2}, \ldots, \mathbf{x}_{inc,m}]$. We evaluated the regression models using the $R^2$ coefficient and the Mean Squared Error (MSE). The $R^2$ score, which measures the proportion of variance explained by the model, is defined as follows:

$$R^2 = 1 - \frac{\sum_{i=1}^{m} (y_{reg,i} - \hat{y}_{reg,i})^2}{\sum_{i=1}^{m} (y_{reg,i} - \bar{y}_{reg})^2},$$

where $\bar{y}_{reg}$ is the mean of the true coordinates. The MSE, which measures the average squared difference between the predicted and true values, is given as follows:

$$\mathrm{MSE} = \frac{1}{m}\sum_{i=1}^{m} (y_{reg,i} - \hat{y}_{reg,i})^2.$$
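Both regression metrics follow directly from these definitions:

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 minus residual over total sum of squares."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1 - ss_res / ss_tot

def mse(y_true, y_pred):
    """Mean squared error between predictions and true values."""
    return np.mean((y_true - y_pred) ** 2)
```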
The k-neighbors regressor achieved the best performance in predicting the inclusion coordinates. The k-neighbors regressor is a non-parametric method that predicts the output based on the $K$ nearest neighbors in the feature space. In the unweighted case, the predicted value $\hat{y}_{reg}$ is the average of the outputs of the $K$ nearest neighbors:

$$\hat{y}_{reg} = \frac{1}{K}\sum_{j=1}^{K} y_{reg,j},$$

where $y_{reg,j}$ is the output of the $j$-th nearest neighbor in the feature space. The k-neighbors regressor relies on a distance metric (e.g., Euclidean distance) to identify the closest neighbors and predict the output based on their known values. A key parameter is $K$, the number of neighbors, which influences the model's flexibility and smoothness [28].
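A brief sketch of the k-neighbors regressor with scikit-learn, here with distance weighting on a toy 1-D problem; the data, $K$, and weighting scheme are illustrative choices, not those used in the paper.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 2 * np.pi, size=(500, 1))       # toy 1-D feature
y = np.sin(X).ravel() + 0.05 * rng.standard_normal(500)

# K = 5 neighbors, with inverse-distance weighting of their outputs
knn = KNeighborsRegressor(n_neighbors=5, weights="distance")
knn.fit(X, y)
pred = knn.predict([[np.pi / 2]])                  # true value is sin(pi/2) = 1
```

Setting `weights="uniform"` instead would reproduce the unweighted average in the formula above.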
Additionally, Gaussian Process Regression (GPR) was used for the regression tasks. GPR is a non-parametric, probabilistic model based on the assumption that the outputs are distributed according to a multivariate Gaussian distribution. The predicted output is modeled as follows:

$$f(\mathbf{x}) \sim \mathcal{GP}(m(\mathbf{x}), k(\mathbf{x}, \mathbf{x}')),$$

where $m(\mathbf{x})$ is the mean function, and $k(\mathbf{x}, \mathbf{x}')$ is the covariance (kernel) function, which defines the similarity between points $\mathbf{x}$ and $\mathbf{x}'$ [29]. The model predicts both a mean estimate and the uncertainty of that estimate, making it valuable in cases where uncertainty estimates are important.
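A minimal GPR sketch with scikit-learn, using an RBF kernel plus a white-noise term; the kernel choice and toy data are assumptions for illustration, as the paper does not specify the kernel used.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
X = rng.uniform(0, 5, size=(60, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(60)

# RBF covariance k(x, x') plus a WhiteKernel term modeling observation noise
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X, y)

# GPR returns both a mean prediction and its uncertainty (standard deviation)
mean, std = gpr.predict([[2.5]], return_std=True)
```

The returned `std` is the per-point predictive uncertainty mentioned in the text.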
The training phase of the machine learning algorithms involved a well-defined process in which the dataset of 16,000 samples was split into training and testing sets in a 70/30 ratio, yielding 11,200 samples for training and 4800 samples for testing. Training was carried out with 10-fold cross-validation: the training data were split into 10 subsets, and in each iteration one subset was used for validation while the remaining nine were used for training. Evaluating the model on different portions of the data in this way improves robustness and reduces the risk of overfitting. Multiple machine learning models were trained and compared based on key performance metrics, and after the initial evaluation, the hyperparameters of the top-performing models were fine-tuned to optimize their performance. This ensures that the final model, selected after cross-validation, is well-trained and generalizes effectively to unseen data.
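The split-and-validate procedure can be reproduced in outline as follows; the synthetic dataset and the choice of estimator are placeholders for the paper's features and models, and only the 70/30 split and 10-fold scheme are taken from the text.

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import KFold, cross_val_score, train_test_split
from sklearn.neighbors import KNeighborsRegressor

# Synthetic stand-in for the paper's 16,000-sample dataset
X, y = make_regression(n_samples=2000, n_features=10, noise=5.0, random_state=0)

# 70/30 train/test split, as in the paper
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# 10-fold cross-validation on the training portion only;
# the held-out 30% is reserved for the final evaluation
cv = KFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(KNeighborsRegressor(5), X_tr, y_tr, cv=cv, scoring="r2")
```

Each entry of `scores` is the $R^2$ obtained when one of the ten folds serves as the validation set.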
In addition to the main models used in this article, tests were conducted using competing predictive models to compare the levels of fit achieved. Optimization of the models’ hyperparameters was performed using the Bayesian optimization method. Bayesian optimization is a probabilistic model-based approach used to optimize complex, expensive-to-evaluate functions, often encountered in machine learning hyperparameter tuning or experimental design. It leverages Bayesian statistics to construct a surrogate model that estimates the objective function and its uncertainty. The algorithm iteratively selects the next set of parameters to evaluate based on an acquisition function that balances exploration and exploitation. By updating the surrogate model with new observations, Bayesian optimization intelligently focuses on promising regions of the parameter space, efficiently converging to the optimal solution. This methodology is particularly effective when the objective function is noisy, expensive to compute, or lacks a closed-form expression, making it a valuable tool in optimizing various real-world processes and models.
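A compact sketch of the Bayesian optimization loop just described, using a Gaussian-process surrogate and an expected-improvement acquisition function to maximize a toy 1-D objective; the kernel, candidate grid, and iteration counts are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(mu, sigma, best, xi=0.01):
    """EI acquisition for maximization: balances exploration and exploitation."""
    sigma = np.maximum(sigma, 1e-9)
    imp = mu - best - xi
    z = imp / sigma
    return imp * norm.cdf(z) + sigma * norm.pdf(z)

def bayes_opt(f, bounds, n_init=5, n_iter=15, seed=0):
    """Maximize f over a 1-D interval using a GP surrogate model."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(bounds[0], bounds[1], size=(n_init, 1))  # random init
    y = f(X).ravel()
    cand = np.linspace(bounds[0], bounds[1], 200).reshape(-1, 1)
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(n_iter):
        gp.fit(X, y)                                  # update the surrogate
        mu, sigma = gp.predict(cand, return_std=True)
        x_next = cand[np.argmax(expected_improvement(mu, sigma, y.max()))]
        X = np.vstack([X, x_next.reshape(1, 1)])      # evaluate the new point
        y = np.append(y, f(x_next.reshape(1, 1)).ravel())
    return X[np.argmax(y)], y.max()

# Toy objective with a known maximum at x = 2
best_x, best_y = bayes_opt(lambda x: -(x - 2.0) ** 2, bounds=(0.0, 4.0))
```

In hyperparameter tuning, `f` would instead return a cross-validation score and the search space would cover the model's hyperparameters.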

3. Results

The results of this study demonstrate the trade-offs between sensor reduction and model performance, particularly with regard to accuracy, robustness, and computational efficiency. By reducing the number of sensors, the goal was to optimize energy consumption while maintaining an acceptable level of accuracy for object detection and localization in ultrasound tomography. The models were trained and tested using datasets collected from 1, 2, and 3 sensors, and their performance was measured using several metrics: accuracy and the F1 score for classification, and MSE and $R^2$ for regression tasks.

3.1. Performance of Classification Models

Table 1 presents the accuracy and F1 values for five chosen classification models. It compares the performance of various algorithms based on the amount of information being used.
One of the most significant observations from the results is that, as expected, the accuracy of the models generally decreased with the reduction in the number of sensors. For example, the best-performing models showed close to 99% accuracy with three sensors but dropped to approximately 96% when using only one sensor. While the overall accuracy remained high, this reduction reflects the natural trade-off between energy savings and the loss of spatial information due to fewer sensor inputs. This finding is crucial in determining the application domains where such sensor reductions can be applied. In scenarios requiring high precision, such as medical diagnostics or safety-critical industrial inspections, the slight reduction in accuracy may not be acceptable. However, in less critical applications, such as remote monitoring or preliminary screening processes, this trade-off can be justified by the significant gains in energy efficiency.
In addition to accuracy, other metrics such as precision and recall showed similar trends. With three sensors, models such as random forest and gradient boosting achieved precision and recall scores above 0.99, but these scores decreased slightly to around 0.96 with one sensor. These results suggest that while sensor reduction can result in energy savings, it is essential to consider the specific accuracy requirements of the application before implementing this strategy.
The comparison between different models revealed interesting insights into how well the various algorithms handled sensor reduction. Ensemble methods such as random forest and gradient boosting consistently outperformed simpler models like K-Nearest Neighbors (KNN) and Support Vector Machines (SVMs) across all metrics. For instance, random forest achieved an accuracy of 99.3% with three sensors, which dropped to 96.7% with one sensor. Similarly, gradient boosting showed comparable performance, demonstrating its robustness in handling reduced data inputs by averaging the outputs of multiple weak learners. These ensemble methods are particularly suited for this task because of their ability to generalize well, even with limited data and noisy inputs, which become more prominent when fewer sensors are used.
On the other hand, simpler models such as KNN and SVM struggled more with sensor reduction. KNN, which relies heavily on local proximity in the feature space, saw its performance decline more significantly with reduced sensors. For instance, with three sensors, KNN achieved an accuracy of 98.7%, but this dropped to 96.8% with one sensor, highlighting its sensitivity to missing or incomplete spatial information. SVM also exhibited similar behavior, with a larger reduction in accuracy as the number of sensors decreased. This suggests that these simpler models may not be as effective for tasks where sensor reduction is critical, as they are more prone to errors due to the reduced input data.
In contrast, the ensemble models’ ability to combine multiple weak learners and reduce variance made them more resilient to the reduction in data from fewer sensors. These results underline the importance of model selection in the context of sensor reduction, as different algorithms exhibit varying degrees of sensitivity to incomplete or sparse data.

3.2. Performance of Regression Models

The use of regression models in this study aimed to predict continuous outcomes based on data from a reduced number of sensors. By training and testing the regression models using varying sensor configurations (1, 2, and 3 sensors), we were able to assess the impact of sensor reduction on predictive accuracy and overall model performance. Key metrics, namely the coefficient of determination ($R^2$) and the Mean Squared Error (MSE), were used to evaluate the regression models (see Table 2).
As expected, the results reveal that the number of sensors plays a significant role in the accuracy of the regression models. Models with three sensors consistently performed better across all metrics compared to those with only one or two sensors. For instance, the K-neighbors regressor achieved the highest accuracy with three sensors, with an $R^2$ value of 98.09% and an MSE of 0.681, showing minimal error and excellent predictive performance. When reduced to one sensor, the $R^2$ dropped to 87.95% and the MSE increased to 4.24, indicating a significant decline in performance due to the loss of spatial information from the reduced number of sensors.
Similarly, the MLP regressor showed strong performance with three sensors ($R^2$ = 97.68%, MSE = 0.82) but a drop in accuracy with only one sensor ($R^2$ = 84.38%, MSE = 5.50). The trend across all models demonstrates that reducing the number of sensors reduces the model's ability to capture the full complexity of the data, resulting in higher prediction errors.
When comparing the models, the K-neighbors regressor consistently outperformed the others, especially in the three-sensor configuration, with an $R^2$ of 98.09% and the lowest MSE at 0.68. The extra trees regressor and MLP regressor also performed well with three sensors, achieving high $R^2$ values of 94.48% and 97.68%, respectively. These models proved more robust in handling reduced sensor inputs, maintaining relatively good performance even with one or two sensors. In contrast, the Gaussian process regressor showed the largest drop in performance with only one sensor, with an $R^2$ of just 44.99% and a high MSE of 20.46, highlighting its sensitivity to the reduction of input data. This model performed significantly better with three sensors, achieving an $R^2$ of 92.14%, but it still lagged behind the other models in terms of MSE.
The results indicate that sensor reduction can significantly affect model performance, particularly for models like the Gaussian process regressor, which relies heavily on the availability of comprehensive data. However, for models like the k-neighbors and MLP regressors, the performance degradation is less pronounced, suggesting that they can handle reduced sensor data more effectively. These findings are crucial for applications where energy consumption or cost constraints limit the number of sensors, as they provide insight into which models can maintain acceptable levels of accuracy under such conditions. Future work could explore hybrid approaches that balance the number of sensors and model complexity to optimize both performance and resource usage, potentially leveraging techniques such as transfer learning or ensemble methods to mitigate the impact of sensor reduction.

3.3. Examples of Reconstruction by Regression Models

In Figure 4, we show predictions of the number of objects and their coordinates for the test set. In these diagrams, the original positions of the objects are marked with gray dots, while the predicted positions are marked with red circles. It can be noted that the algorithm does not predict the positions perfectly. The limited accuracy may be caused by an insufficiently large training dataset or by the low number of measurement sensors used. It is noticeable that the quality of the obtained reconstructions decreases as the number of sensors is reduced; however, the reconstructions for two and three sensors perform at a similar level. To keep the model robust and flexible with respect to new measurement data, a reconstruction method based on the analysis of signals from three sensors would have to be used.
The novelty of this research lies in its development of a machine learning-based method for object detection within ultrasound tomography systems, specifically focusing on minimizing the number of sensors while maintaining acceptable accuracy levels. Unlike traditional tomographic approaches that require a full array of sensors to achieve high-resolution imaging, this study demonstrates that machine learning algorithms, such as extra trees for classification and the k-neighbors regressor for regression, can effectively predict the number and coordinates of inclusions using significantly fewer sensors. This innovation allows for substantial reductions in energy consumption, making the system more efficient and cost-effective, particularly for industrial and resource-constrained environments. Additionally, the integration of advanced preprocessing techniques, such as rotational invariance for sensor data, ensures that the model remains robust despite sensor reduction. This approach introduces a flexible, scalable framework that bridges the gap between energy efficiency and the precision needed for practical applications, representing a significant advancement in the field of tomographic imaging.
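One way to read the rotational-invariance preprocessing is that channel readings on a sensor ring are canonicalized so that renumbering the sensors (rotating the tank) does not change the feature vector. The max-amplitude rule below is our illustrative choice, not necessarily the paper's exact transform:

```python
# Sketch of a rotational-invariance step for a ring of sensor channels.
# The canonicalization rule (roll so the strongest channel comes first)
# is an assumption made for illustration.
import numpy as np

def canonicalize(frame: np.ndarray) -> np.ndarray:
    """Cyclically shift a ring of channel readings so the max-amplitude
    channel is first; any rotation of the same scene maps to the same
    canonical vector."""
    return np.roll(frame, -int(np.argmax(frame)))

frame = np.array([0.2, 0.9, 0.4, 0.1])
rotated = np.roll(frame, 2)            # same scene, sensors renumbered
assert np.allclose(canonicalize(frame), canonicalize(rotated))
```

A canonicalization like this lets a model trained on one sensor numbering generalize to any rotation of the same measurement ring, which is what makes the reduced-sensor models robust to placement.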

4. Discussion

This section estimates energy consumption as a function of the number of measurement sensors. The industrial ultrasound tomograph used in this study has 16 built-in measurement cards with 4 channels each. Laboratory tests showed an average current consumption of 325 mA per measurement card at a supply voltage of 12 V, which corresponds to approximately 81 mA per measurement channel. The default reconstruction method for a tomograph based on sensitivity-matrix calculations typically requires 8 or 16 measurement channels [22,30,31]. The energy consumed by the control unit is excluded from these estimates, since it may be a mini-computer or any other hardware solution and is common to all configurations. Table 3 summarizes the possible energy savings achievable with the proposed approach.
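The per-channel estimate can be checked in a few lines; the short calculation below reproduces the values reported in Table 3 from the stated 81 mA per channel at 12 V:

```python
# Reproducing the Table 3 estimate: 325 mA per 4-channel card at 12 V
# gives roughly 81 mA (~972 mW) per measurement channel.
V = 12.0                                  # supply voltage [V]
I_CHANNEL = 0.081                         # estimated current per channel [A]

def power_mw(n_sensors: int) -> float:
    """Total measurement-card power draw for a given channel count."""
    return n_sensors * I_CHANNEL * V * 1000

baseline = power_mw(16)                   # full 16-channel configuration
for n in (16, 8, 3, 2, 1):
    p = power_mw(n)
    saving = (baseline / p - 1) * 100     # reduction relative to reduced setup
    print(f"{n:2d} sensors: {p:8.0f} mW, reduction {saving:5.0f}%")
```

The loop prints 15,552 mW for 16 channels down to 972 mW for a single channel, matching the reductions of 100%, 433%, 700%, and 1500% listed in Table 3.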
It is important to note that while our approach demonstrates a reduction in energy consumption through sensor reduction, it is not intended for use in critical medical diagnostics or safety-critical industrial inspections where maximum accuracy is paramount. Instead, our method offers an alternative for applications where some reduction in accuracy is acceptable in exchange for significant energy savings, such as in remote monitoring systems, preliminary detection setups, or environments with limited resources. In such cases, the benefits of reduced energy consumption and system simplicity may outweigh the slight decrease in accuracy.
One of the primary limitations of the proposed method is the potential reduction in accuracy as the number of sensors is decreased. While the results show that energy consumption is significantly lowered, this comes at the cost of some loss in predictive accuracy. In applications where high precision is crucial, such as in medical imaging or detailed industrial defect detection, this trade-off may not be acceptable. However, in applications where energy efficiency is prioritized, such as remote environmental monitoring, pipeline inspections, or early-stage quality control in manufacturing, this trade-off could be justified. Further research should explore how sensor reduction affects accuracy in different contexts and whether hybrid systems, combining high-precision and low-energy modes, could offer a viable alternative. For example, a hybrid system could be employed in smart city infrastructure to monitor environmental parameters with low-energy sensors, switching to high-precision modes only when anomalies are detected.
Another limitation lies in the size and scope of the dataset used for this study. The relatively small dataset may not fully reflect the complexity of real-world environments, potentially limiting the algorithm’s generalizability. In industrial contexts such as chemical process monitoring or oil exploration, the diversity of operating conditions and materials would require much larger datasets to capture the full range of possible scenarios. Expanding the dataset to cover a wider variety of object types, environmental conditions, and noise levels will be necessary to fully validate the robustness of the method for more complex applications such as automotive defect detection or aerospace component inspection.
Object position prediction poses another challenge, particularly in scenarios with fewer sensors. The limited size of the dataset constrains the algorithm’s ability to accurately predict object positions under varying conditions. This limitation suggests the need for further development of dynamic algorithms capable of adjusting to changes in sensor input while maintaining accuracy. Potential applications like automated assembly lines or real-time logistics monitoring, where object positioning is critical, would benefit from such advancements.
Additionally, this study has not thoroughly explored the effects of environmental noise and signal disturbances. In real-world industrial applications—such as factory floor monitoring or field deployments in agriculture or environmental science—signal interference from machinery, temperature variations, or other environmental factors can degrade the quality of measurements. Evaluating the algorithm’s resilience to these disturbances is crucial for its deployment in operational settings. For instance, agricultural systems monitoring soil moisture levels or crop health in large fields would require robustness against various environmental noise sources, including wind or machinery interference.
The computational complexity of the proposed reconstruction method also deserves further investigation. While the reduction in sensors offers energy savings, the associated computational load could offset these benefits in certain real-time applications. For example, in dynamic environments like industrial process control or medical diagnostics, where rapid decisions are essential, the computational efficiency of the system becomes critical. Minimizing computational overhead while ensuring timely reconstruction will be essential to make this method suitable for real-time applications such as non-destructive testing in automotive manufacturing or real-time monitoring of critical infrastructure.
Moreover, the current study focuses primarily on energy reduction, and further work is needed to address the real-time capabilities of the proposed approach. In highly dynamic environments, such as in industrial robotics or surgical navigation systems, real-time performance is critical for effective implementation. Future research should investigate how to adapt the method for real-time applications, examining its speed and efficiency when applied in operational systems. For instance, real-time tomography could be used in robotic surgery, where both precision and speed are paramount.
The method’s adaptability to different ultrasound tomography systems or devices is another consideration. Various systems differ in hardware configurations, transducer types, and operating conditions, all of which could influence the method’s performance. Future studies should explore how this approach can be customized for different types of sensors and environments, such as industrial tomography systems used in the energy sector or portable medical devices for remote diagnostics.
To address these limitations, future research could integrate more advanced machine learning techniques, such as deep learning, which may improve accuracy even with fewer sensors. Combining sensor reduction with data fusion strategies—where multiple data sources are merged—could help compensate for the loss of spatial resolution caused by fewer sensors. This approach would be particularly beneficial in applications such as autonomous vehicles, where multiple sensors (e.g., LIDAR, radar, and cameras) can be used together to improve object detection and navigation efficiency. Additionally, hybrid systems, where sensors dynamically adjust based on real-time operational needs, could provide a flexible solution that balances energy efficiency and accuracy, especially in scenarios like smart grid management or disaster monitoring systems.
In conclusion, while the proposed method offers a promising solution for energy-efficient ultrasound tomography, it comes with specific limitations that must be considered depending on the application. These limitations highlight the need for further research and optimization, particularly regarding dataset size, algorithm robustness, noise resilience, and real-time applicability. As more advanced techniques are integrated and the method is tailored to specific industries, its potential for practical implementation in a wide range of fields will increase.

5. Conclusions

This paper proposed a machine learning-based approach for multichannel ultrasound tomography, demonstrating that, even with a relatively small training set derived from real data, it is possible to localize objects accurately within a selected measurement area. The research conducted on the presented prototype shows that a device utilizing only a few measurement probes, combined with machine learning algorithms, can be effectively developed for object detection in technological processes. Our solution, due to its reduced number of sensors and electronic components, not only decreases complexity but also makes the system more cost-effective.
This study provided a comprehensive comparison of energy consumption in relation to the number of sensors used in the measurement system. It has been demonstrated that reducing the number of sensors significantly lowers energy consumption, with reductions of up to four times with three sensors and as much as seven times with two sensors. This makes the system highly energy-efficient, especially in applications where energy savings are critical, such as remote monitoring or environments with limited resources.
However, the trade-off between sensor reduction and the accuracy of object detection needs to be carefully considered. The results showed that while using fewer sensors leads to substantial energy savings, it can also reduce the precision and robustness of the reconstruction, particularly in challenging environments. Based on our findings, it is recommended to use at least three sensors to maintain a balance between energy efficiency and accuracy, as this configuration provided reliable performance with minimized susceptibility to outliers and noise.
Moreover, depending on the specific requirements of the application—whether it prioritizes energy efficiency or higher reconstruction accuracy—a flexible system can be designed to fit the desired outcome. Future work should explore the integration of advanced techniques, such as dynamic sensor management and deep learning, to further enhance the performance of the system while maintaining low energy consumption. This balance between energy consumption and reconstruction quality is crucial for developing efficient and reliable object detection systems for various industrial and monitoring applications.

Author Contributions

Development of the concept, algorithms, and the image reconstruction, B.B., D.M. and P.S.; development of the measurement methodology and supervision, D.W. and T.R.; preparation of measurements, development of research methodology, and preparation of measurement documentation, M.G. and P.S.; literature review, formal analysis, general review, and editing of the manuscript, T.C., M.M., E.W. and K.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Tole, N.M. Basic Physics of Ultrasonographic Imaging; World Health Organization: Geneva, Switzerland, 2005. [Google Scholar]
  2. Manbachi, A.; Cobbold, R.S.C. Development and Application of Piezoelectric Materials for Ultrasound Generation and Detection. Ultrasound 2011, 19, 187–196. [Google Scholar] [CrossRef]
  3. Carovac, A.; Smajlovic, F.; Junuzovic, D. Application of Ultrasound in Medicine. Acta Inform. Medica 2011, 19, 168. [Google Scholar] [CrossRef] [PubMed]
  4. Cobbold, R.S.C. Foundations of Biomedical Ultrasound; Oxford University Press: New York, NY, USA, 2006. [Google Scholar]
  5. Starkoff, B. Ultrasound physical principles in today’s technology. Australas. J. Ultrasound Med. 2014, 17, 4–10. [Google Scholar] [CrossRef]
  6. Postema, M.; Kotopoulis, S.; Jenderka, K.V. Physical Principles of Medical Ultrasound. In EFSUMB Coursebook on Ultrasound; EFSUMB: London, UK, 2020; pp. 1–23. [Google Scholar] [CrossRef]
  7. Blitz, J.; Simpson, G. Ultrasonic Methods of Non-Destructive Testing; Springer Science & Business Media: Berlin/Heidelberg, Germany, 1995; Volume 2. [Google Scholar]
  8. Chen, C.H. Ultrasonic and Advanced Methods for Nondestructive Testing and Material Characterization; World Scientific: Singapore, 2007. [Google Scholar]
  9. Langenberg, K.J.; Marklein, R.; Mayer, K. Ultrasonic Nondestructive Testing of Materials: Theoretical Foundations; CRC Press: Boca Raton, FL, USA, 2012. [Google Scholar]
  10. Dwivedi, S.K.; Vishwakarma, M.; Soni, A. Advances and researches on non destructive testing: A review. Mater. Today Proc. 2018, 5, 3690–3698. [Google Scholar] [CrossRef]
  11. Boyes, W. (Ed.) Chapter 31-Non-Destructive Testing. In Instrumentation Reference Book, 4th ed.; Butterworth-Heinemann: Boston, MA, USA, 2010; pp. 567–592. [Google Scholar] [CrossRef]
  12. Mann, R.; Stanley, S.J.; Vlaev, D.; Wabo, E.; Primrose, K. Augmented-reality visualization of fluid mixing in stirred chemical reactors using electrical resistance tomography. J. Electron. Imaging 2001, 10, 620–629. [Google Scholar] [CrossRef]
  13. Bolton, G.T.; Primrose, K.M. An overview of electrical tomographic measurements in pharmaceutical and related application areas. AAPS PharmSciTech 2005, 6, E137–E143. [Google Scholar] [CrossRef]
  14. Movafagh, H.; Turcotte, G.; Ein-Mozaffari, F. Using tomography images to study the mixing of wheat straw slurries. Biofuels 2016, 7, 365–375. [Google Scholar] [CrossRef]
  15. Gradov, D.; González, G.; Vauhkonen, M.; Laari, A.; Koiranen, T. Experimental and Numerical Study of Multiphase Mixing Hydrodynamics in Batch Stirred Tank Applied to Ammoniacal Thiosulphate Leaching of Gold. J. Chem. Eng. Process. Technol. 2017, 8, 1–17. [Google Scholar] [CrossRef]
  16. Rymarczyk, T.; Kłosowski, G. Innovative methods of neural reconstruction for tomographic images in maintenance of tank industrial reactors. Eksploat. Niezawodn. Maint. Reliab. 2019, 21, 261–267. [Google Scholar] [CrossRef]
  17. Jacques, S.; Pile, K.; Barnes, P.; Lai, X.; Roberts, K.; Williams, R. An in-situ synchrotron X-ray diffraction tomography study of crystallization and preferred crystal orientation in a stirred reactor. Cryst. Growth Des. 2005, 5, 395–397. [Google Scholar] [CrossRef]
  18. Ricard, F.; Brechtelsbauer, C.; Xu, X.; Lawrence, C. Monitoring of Multiphase Pharmaceutical Processes Using Electrical Resistance Tomography. Chem. Eng. Res. Des. 2005, 83, 794–805. [Google Scholar] [CrossRef]
  19. Wajman, R.; Banasiak, R.; Mazurkiewicz, L.; Dyakowski, T.; Sankowski, D. Spatial Imaging with 3D Capacitance Measurements. Meas. Sci. Technol. 2006, 17, 2113. [Google Scholar] [CrossRef]
  20. Germishuys, Z.; Manley, M. X-ray micro-computed tomography evaluation of bubble structure of freeze-dried dough and foam properties of bread produced from roasted wheat flour. Innov. Food Sci. Emerg. Technol. 2021, 73, 102766. [Google Scholar] [CrossRef]
  21. Maleki, S.; Pandharipande, A.; Leus, G. Energy-Efficient Distributed Spectrum Sensing for Cognitive Sensor Networks. IEEE Sensors J. 2010, 11, 565–573. [Google Scholar] [CrossRef]
  22. Wójcik, D.; Rymarczyk, T.; Przysucha, B.; Gołąbek, M.; Majerek, D.; Warowny, T.; Soleimani, M. Energy Reduction with Super-Resolution Convolutional Neural Network for Ultrasound Tomography. Energies 2023, 16, 1387. [Google Scholar] [CrossRef]
  23. Du, X.; Zhou, Z.; Zhang, Y.; Rahman, T. Energy-Efficient Sensory Data Gathering Based on Compressed Sensing in IoT Networks. J. Cloud Comput. 2020, 9, 19. [Google Scholar] [CrossRef]
  24. Hasan, N.; Rizk, C.; AlKhaja, M.; Babikir, E. Optimisation toward Sustainable Computed Tomography Imaging Practices. Sustain. Futur. 2024, 7, 100176. [Google Scholar] [CrossRef]
  25. Brown, M.; Snelling, E.; De Alba, M.; Ebrahimi, G.; Forster, B.B. Quantitative Assessment of Computed Tomography Energy Use and Cost Savings Through Overnight and Weekend Power Down in a Radiology Department. Can. Assoc. Radiol. J. 2023, 74, 298–304. [Google Scholar] [CrossRef]
  26. Afat, S.; Wohlers, J.; Herrmann, J.; Brendlin, A.S.; Gassenmaier, S.; Almansour, H.; Werner, S.; Brendel, J.M.; Mika, A.; Scherieble, C.; et al. Reducing Energy Consumption in Musculoskeletal MRI Using Shorter Scan Protocols, Optimized Magnet Cooling Patterns, and Deep Learning Sequences. Eur. Radiol. 2024, 43, 1–12. [Google Scholar] [CrossRef]
  27. Geurts, P.; Ernst, D.; Wehenkel, L. Extremely Randomized Trees. Mach. Learn. 2006, 63, 3–42. [Google Scholar] [CrossRef]
  28. Goldberger, J.; Roweis, S.T.; Hinton, G.E.; Salakhutdinov, R. Neighbourhood Components Analysis. Adv. Neural Inf. Process. Syst. 2004, 17, 513–520. [Google Scholar]
  29. Rasmussen, C.E.; Williams, C.K.I. Gaussian Processes for Machine Learning; The MIT Press: Cambridge, MA, USA, 2006. [Google Scholar] [CrossRef]
  30. Koulountzios, P.; Rymarczyk, T.; Soleimani, M. A Triple-Modality Ultrasound Computed Tomography Based on Full-Waveform Data for Industrial Processes. IEEE Sensors J. 2021, 21, 20896–20909. [Google Scholar] [CrossRef]
  31. Przysucha, B.; Wójcik, D.; Rymarczyk, T.; Baran, B.; Król, K. Sensitivity Matrix Reconstruction in Ultrasound Transmission Tomography Using Singular Value Decomposition. In Proceedings of the 2023 International Interdisciplinary PhD Workshop (IIPhDW), Wismar, Germany, 3–5 May 2023; pp. 1–5. [Google Scholar] [CrossRef]
Figure 1. A photo showing a measurement system consisting of an ultrasound industrial tomograph and a measurement tank.
Figure 2. An interior of a measurement device consisting of ultrasound measurement cards and a controlling computer.
Figure 3. A picture showing a measurement set consisting of 32 channels with two artificial inclusions. The picture shows the first (1) and second (2) channels, which are numbered clockwise.
Figure 4. Graphs showing the results calculated for the test data. The gray dots indicate the actual location and size of the inclusions, and the red circles denote the predictions. Rows (ac) represent predictions that were made using data frames with 3, 2, and 1 sensors, respectively.
Table 1. Values of accuracy and F1 score for the classification models for prediction of the number of inclusions.

Model                          1 Sensor          2 Sensors         3 Sensors
                               Accuracy   F1     Accuracy   F1     Accuracy   F1
Extra trees classifier         96.67%     0.97   99.19%     0.99   99.33%     0.99
Gaussian process classifier    77.96%     0.78   82.83%     0.83   84.19%     0.84
K-neighbors classifier         96.85%     0.97   98.73%     0.99   98.71%     0.99
MLP classifier                 94.56%     0.95   98.10%     0.98   99.17%     0.99
SVC                            77.71%     0.78   87.21%     0.87   91.52%     0.92
Table 2. Values of R² and MSE metrics for the regression models.

Model                          1 Sensor          2 Sensors         3 Sensors
                               R²       MSE      R²       MSE      R²       MSE
Extra trees regressor          79.32%   9.42     91.04%   5.00     94.48%   4.02
Gaussian process regressor     44.99%   20.46    87.87%   4.95     92.14%   4.83
K-neighbors regressor          87.95%   4.24     94.70%   1.79     98.09%   0.68
MLP regressor                  84.38%   5.50     92.83%   2.42     97.68%   0.82
SVR                            76.36%   8.30     95.62%   1.60     97.94%   0.83
Table 3. Estimation of the electricity reduction for the measurement cards based on the number of sensors.

Number of Sensors    Power Consumption [mW]    Reduction in Energy Consumption [%]
16                   15,552                    0
8                    7776                      100
3                    2916                      433
2                    1944                      700
1                    972                       1500

Share and Cite

MDPI and ACS Style

Baran, B.; Rymarczyk, T.; Majerek, D.; Szyszka, P.; Wójcik, D.; Cieplak, T.; Gąsior, M.; Marczuk, M.; Wąsik, E.; Gauda, K. Energy Optimization in Ultrasound Tomography Through Sensor Reduction Supported by Machine Learning Algorithms. Energies 2024, 17, 5406. https://doi.org/10.3390/en17215406

