A Freehand 3D Ultrasound Reconstruction Method Based on Deep Learning
Abstract
1. Introduction
- (1) An acousto-optic positioning handheld device based on a linear-array wireless ultrasonic probe is designed and fabricated. The wireless probe outputs raw B-scans at 12 FPS, while the acousto-optic auxiliary positioning device provides 60 sets of pose information per second. If each set of pose information corresponded directly to one B-scan frame, this would not be a challenging problem. In reality, however, continuous acquisition with a handheld probe is impossible. For example, when the probe moves continuously for 1 s at 1 cm/s, only 12 B-scans are obtained over that 1 cm, even though 60 sets of pose data are recorded; 48 sets of pose information therefore have no corresponding B-scan. Accordingly, the algorithm and training model in this paper are designed so that the probe needs as few passes as possible over the 1 cm range while still synthesizing 60 FPS B-scans within it. The 3D ultrasound imaging system not only collects data but also supports the SPRAO algorithm proposed in this paper. As Figure 1 shows, the innovation and advantages of SPRAO depend on the spatial pose information output by the acousto-optic positioning device.
- (2) Because the surface of the measured target deforms under the pressure of the probe, the target position also shifts as the probe sweeps back and forth, so 3D reconstruction directly from the pose information performs poorly; curve fitting and speckle decorrelation are therefore needed for position correction. The probe yields only 12 frames of B-scans on the first acquisition, and another 48 frames must be inserted through subsequent acquisitions. The spatial position of each B-scan plane is determined by three points on different lines in space, so three Bezier curves are needed to provide three points for each plane. The three Bezier curves are fitted to the existing 12 frames of B-scans according to their poses, and four points are inserted between every two B-scans on each curve. Interpolating and synthesizing the original data on the Bezier curves not only ensures the frame rate of the B-scan sequences but also reduces the required output frame rate of the ultrasonic probe.
- (3) An ROI is set and the speckle decorrelation feature of two B-scans is extracted to calibrate the acousto-optic auxiliary positioning information. Note that what the speckle decorrelation between two B-scan frames reduces is not the interpolation error but the deformation error caused by the pressure of the probe. The mean square error (MSE) loss function represents the difference between the calculated result and the true value; the correct MSE between the actual and theoretical poses of the 48 inserted B-scan frames is obtained through Bezier interpolation.
- (4) The calculation of pose deviation and speckle decorrelation is integrated into a 3DCNN, and an LSTM predicts the pose information output by the 3DCNN so that the MSE reaches its minimum value quickly. The 3DCNN-LSTM extracts deep abstract features and builds a model that predicts the spatial pose of the ultrasonic probe. Finally, 3D reconstruction of the B-scan sequence is realized.
- Bezier interpolation and speckle decorrelation are applied to the data acquisition process of freehand 3D ultrasound reconstruction, so that doctors can obtain 60 FPS B-scans of the target through 2–3 continuous acquisitions.
- By extracting the speckle decorrelation feature of two B-scans, the acousto-optic-assisted positioning information is compensated, and the cumulative error of the positioning device during freehand 3D ultrasound reconstruction is corrected.
- The deep learning model 3DCNN-LSTM implements the algorithm. The calculation of pose deviation and speckle decorrelation is integrated into the 3DCNN, and the LSTM predicts the pose information output by the 3DCNN so that the MSE reaches its minimum value in a short time; a minimal sketch of this MSE criterion follows this list.
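To make the MSE criterion concrete, the following is a minimal sketch, not the authors' code, of how the MSE between the poses of the 48 inserted frames and their reference values might be computed. The (N, 6) pose layout (three coordinates plus three angles) and all variable names are illustrative assumptions.

```python
import numpy as np

def pose_mse(predicted, reference):
    """MSE between predicted and reference pose arrays.

    Assumed layout: shape (N, 6) -- x, y, z in mm plus three rotation
    angles, matching the mm/degree error reporting used in the paper.
    """
    predicted = np.asarray(predicted, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return float(np.mean((predicted - reference) ** 2))

# Illustrative example: 48 inserted frames vs. hypothetical ground truth.
rng = np.random.default_rng(0)
reference = rng.uniform(0.0, 10.0, size=(48, 6))
predicted = reference + rng.normal(0.0, 0.5, size=(48, 6))
print(f"pose MSE: {pose_mse(predicted, reference):.3f}")
```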
2. Materials and Methods
2.1. Bezier Interpolation
Algorithm 1: Bezier Interpolation of SPRAO
STEP 1: An ultrasonic probe fitted with the acousto-optic positioning device scans the target in one direction at 1 cm/s. During this pass the probe outputs 12 FPS B-scans, while the acousto-optic positioning device outputs 60 sets of spatial position coordinates and angle information per second in the reconstruction coordinate system.
STEP 2: A control window moves from the first image frame, four frames at a time, until the entire B-scan sequence has been traversed. Three points on different straight lines are extracted from each frame, and three Bezier curves are generated.
STEP 3: Insert four points between every two B-scans on each Bezier curve, i.e., twelve points across the three curves. Calculate the center-point coordinates of the plane containing each insertion point and convert them to the reconstruction-body coordinate system.
STEP 4: Return to STEP 1 and make several further acquisitions along the original path. Repeatedly compare the pose of the acousto-optic positioning device with the coordinates of the insertion points; when the two agree, insert the current B-scan into the original image sequence. The final result is a 60 FPS B-scan sequence, five times the original frame rate. Some error remains between the coordinates of the inserted B-scans and the true values; it is corrected later by the speckle decorrelation stage of SPRAO.
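As an illustration of STEP 2 and STEP 3, the sketch below evaluates a cubic Bezier segment built from one four-frame window for a single tracked point and inserts four points in each gap between consecutive frames. It is a sketch under stated assumptions, not the authors' implementation: the paper does not specify the curve parameterization, so the four frame points are simply treated as the control points of one cubic segment, with the frames assumed to lie at t = 0, 1/3, 2/3, 1.

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve with control points p0..p3 at t in [0, 1]."""
    u = 1.0 - t
    return (u**3) * p0 + 3 * (u**2) * t * p1 + 3 * u * (t**2) * p2 + (t**3) * p3

def insert_points(window, n_insert=4):
    """Insert n_insert points between consecutive frames of a 4-frame window.

    window: (4, 3) array, one 3D point per frame (i.e., one of the three
    curves). Assumption: frames sit at t = 0, 1/3, 2/3, 1 along the segment.
    Returns only the inserted points, shape (3 * n_insert, 3).
    """
    p0, p1, p2, p3 = (np.asarray(p, dtype=float) for p in window)
    inserted = []
    for seg in range(3):                      # three gaps between four frames
        t0, t1 = seg / 3.0, (seg + 1) / 3.0
        for k in range(1, n_insert + 1):      # four interior points per gap
            t = t0 + (t1 - t0) * k / (n_insert + 1)
            inserted.append(cubic_bezier(p0, p1, p2, p3, t))
    return np.array(inserted)

# Example: one tracked point observed in four consecutive B-scans (mm).
window = np.array([[0.0, 0.0, 0.0], [3.3, 0.1, 0.0],
                   [6.6, -0.1, 0.0], [10.0, 0.0, 0.0]])
print(insert_points(window).shape)  # (12, 3)
```

Running this per curve gives four inserted points in each gap; across the three curves, each gap gains the twelve plane-defining points named in STEP 3, which is what raises the 12 FPS sequence toward 60 FPS.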
2.2. Speckle Decorrelation
Algorithm 2: Speckle Decorrelation of SPRAO
STEP 1: Set the ROI for each B-scan frame in the dataset. Take three image blocks that are not on the same line from the ROI, and feed the B-scan image blocks of each frame into the convolution layers of the 3DCNN one by one.
STEP 2: The speckle decorrelation features of the B-scan image blocks are extracted via the convolution operations of the 3DCNN. The estimated distance between the current two B-scan frames is obtained from a lookup table.
STEP 3: On the basis of the distance estimate, extract the normal vectors of the planes containing the three image blocks. Calculate the angles between each normal vector and the three coordinate axes, along with the coordinates of the ROI center point.
STEP 4: The 3DCNN outputs the pose information. During training, the pose information is fed into the LSTM for prediction, which drives the MSE to its minimum value in a short time. Once training is complete, the 3DCNN can run on the test set independently, without the LSTM, as shown in Figures 6 and 7.
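The lookup in STEP 2 follows the classical speckle-decorrelation idea: correlation between corresponding image blocks decays with elevational distance, and a pre-measured calibration curve inverts that relationship. The sketch below is a hand-crafted stand-in for the paper's learned 3DCNN features; the exponential calibration curve, the block size, and all function names are assumptions for illustration only.

```python
import numpy as np

def block_correlation(a, b):
    """Normalized cross-correlation between two image blocks."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def distance_from_correlation(rho, calib_rho, calib_dist):
    """Invert a monotonically decreasing calibration (lookup) table."""
    # np.interp needs ascending x values, so flip the decreasing curve.
    return float(np.interp(rho, calib_rho[::-1], calib_dist[::-1]))

# Hypothetical calibration: correlation falls off with distance (mm).
calib_dist = np.linspace(0.0, 2.0, 21)
calib_rho = np.exp(-2.0 * calib_dist)   # stand-in decorrelation model

rng = np.random.default_rng(1)
block_a = rng.random((64, 64))
block_b = 0.8 * block_a + 0.2 * rng.random((64, 64))  # partially decorrelated
rho = block_correlation(block_a, block_b)
dist = distance_from_correlation(rho, calib_rho, calib_dist)
print(f"rho = {rho:.3f}, estimated distance = {dist:.3f} mm")
```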
2.3. 3DCNN-LSTM
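As summarized in the introduction, the pose deviation and speckle decorrelation computations are folded into the 3DCNN, and the LSTM predicts the pose sequence so that the MSE converges quickly during training. The following PyTorch sketch shows one plausible shape for such a 3DCNN-LSTM pose regressor; the layer sizes, kernel shapes, and six-dimensional pose output are assumptions made for illustration, since the exact architecture is given in the paper's Figures 6 and 7.

```python
import torch
import torch.nn as nn

class CNN3DLSTMPose(nn.Module):
    """Illustrative 3DCNN + LSTM pose regressor (all shapes assumed)."""
    def __init__(self, hidden=128, pose_dim=6):
        super().__init__()
        self.features = nn.Sequential(             # input: (B, 1, D, H, W)
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d((4, 4, 4)),
        )
        self.lstm = nn.LSTM(32 * 4 * 4 * 4, hidden,
                            num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, pose_dim)    # x, y, z + three angles

    def forward(self, clips):
        # clips: (B, T, 1, D, H, W) -- T sliding windows of B-scan blocks
        b, t = clips.shape[:2]
        f = self.features(clips.flatten(0, 1)).flatten(1)  # (B*T, F)
        seq, _ = self.lstm(f.view(b, t, -1))                # (B, T, hidden)
        return self.head(seq)                               # (B, T, pose_dim)

model = CNN3DLSTMPose()
clips = torch.randn(2, 5, 1, 8, 64, 64)                 # toy batch
poses = model(clips)
loss = nn.MSELoss()(poses, torch.zeros_like(poses))      # MSE, as in the paper
print(poses.shape, float(loss))
```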
3. Experimental Setup
3.1. SPRAO System
- Customized handle. It is composed of a fixed base, a wireless ultrasonic probe, and the acousto-optic positioning device, as shown in Figure 9. The wireless probe is a 128-element linear-array probe from Sonostar; it collects B-scan sequences at a 12 FPS frame rate, with a frequency range of 7–10 MHz and a depth range of 2–10 cm. The acousto-optic positioning device sends spatial position coordinates and angle information to the positioning base station, outputting 60 groups per second.
- Positioning base station. The scanning area of the base station must first be determined. The positioning information from the acousto-optic positioning device is transmitted to the positioning information-receiving host over a wire.
- Positioning information-receiving host. Receives the information from the positioning base station and runs the positioning host's visualization software, which tracks the spatial position and angle of the positioning handle within the scanning area.
- Processor platform. Runs the deep learning framework 3DCNN-LSTM. Ubuntu runs on an Intel Xeon E5-2660 CPU with 56 logical cores; the GPUs are four GeForce RTX 2080 cards. The platform has 64 GB of memory and a 256 GB solid-state disk.
- Determine the target scanning area of the probe, including the target and path.
- The positioning base station and the positioning information-receiving host are placed in the specified area.
- Connect the positioning information-receiving host and processor platform.
3.2. Data Acquisition
3.3. Datasets
4. Experimental Results
4.1. Bezier Interpolation
4.2. Speckle Decorrelation Calibration Curve
4.3. SPRAO
5. Discussion
6. Conclusions
- (1) The doctor can obtain 60 FPS B-scans of the target through 2–3 continuous acquisitions. Limited by the speed of the wireless transmission chip, the wireless probe can only output 12 FPS. Without SPRAO, the doctor would need to pause for a few seconds after each movement of the probe to ensure that the target B-scans are collected. With SPRAO, three Bezier curves are fitted to the existing 12 frames of B-scans according to their poses. The Bezier interpolation experiments show that a 60 FPS output can be synthesized by inserting four points between every two B-scans on each Bezier curve and using the speckle decorrelation between B-scans for pose calibration. This not only effectively reduces the required output frame rate of the ultrasonic probe but also allows the wireless probe to move faster.
- (2) The cumulative error can be compensated by the speckle decorrelation between two frames of B-scans. First, the positioning information output by the acousto-optic positioning device is not easily affected by obstacles. Second, setting an ROI to extract the speckle decorrelation features of two B-scans not only helps to construct the 60 FPS B-scan output but also calibrates the cumulative error of the acousto-optic auxiliary positioning information.
- (3) 3DCNN-LSTM reduces the time needed for the MSE to reach the target value, providing a necessary condition for real-time 3D reconstruction. Without deep learning, obtaining more B-scans would require the probe to pause for several seconds after each move to complete acquisition and reconstruction. In this paper, the 3DCNN-LSTM model improves the efficiency of B-scan sequence feature extraction: the calculation of pose deviation and speckle decorrelation is integrated into the 3DCNN, and the LSTM predicts the pose information output by the 3DCNN, driving the MSE to its minimum in a short time. The experimental results show not only that the deep learning model tracks the spatial pose changes of B-scans better than the other methods, but also that the MSE on the test dataset is less than 2.5 mm.
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Dataset Name | Target | Collection Times | Number of Direct Frames | Number of Interpolation Frames | Moving Track | Average Distance (mm) |
---|---|---|---|---|---|---|
DB1 | 1 mm metal sheet | 38 | 1000 | 4000 | Line | 15 |
DB2 | 2 mm metal sheet | 38 | 1000 | 4000 | Line | 16 |
DB3 | Symmetrical metal block | 44 | 2000 | 8000 | Line | 26 |
DB4 | Asymmetric metal block | 58 | 2000 | 8000 | Line | 21 |
DB5 | Right of human arm | 20 | 1500 | 6000 | Line | 100 |
DB6 | Left of human arm | 20 | 1500 | 6000 | Line | 100 |
Number of LSTM Layers | Training Times | MSE |
---|---|---|
2 | 500 | 3.2% |
4 | 500 | 3.1% |
5 | 500 | 2.5% |
6 | 500 | 2.1% |
7 | 500 | 1.8% |
8 | 500 | 2.0% |
DB1 Methods | Average Error (mm/°): X | Y | – | – | – | Final Drift (mm): Min. | Med. | Max.
---|---|---|---|---|---|---|---|---
Linear motion (LM) | 3.94 | 3.87 | 1.69 | 1.81 | 2.38 | 2.23 | 3.92 | 5.19 |
LM + IMU | 2.25 | 2.23 | 0.87 | 0.91 | 1.19 | 1.59 | 2.4 | 3.08 |
Speckle decorrelation (SD) | 3.35 | 3.33 | 1.53 | 1.37 | 1.87 | 2.4 | 3.54 | 4.51 |
CNN | 2.52 | 2.52 | 1.36 | 1.45 | 1.85 | 0.89 | 2.34 | 3.45 |
CNN + SD | 2.23 | 2.23 | 0.79 | 0.85 | 1.05 | 0.29 | 1.88 | 2.9 |
SPRAO | 1.55 | 1.55 | 0.61 | 0.67 | 0.83 | 0.41 | 1.25 | 1.98 |
DB3 Methods | Average Error (mm/°): X | Y | – | – | – | Final Drift (mm): Min. | Med. | Max.
---|---|---|---|---|---|---|---|---
Linear motion (LM) | 3.92 | 3.87 | 2.05 | 2.15 | 2.86 | 2.35 | 4.07 | 5.4 |
LM + IMU | 2.25 | 2.25 | 1.06 | 1.01 | 1.44 | 1.58 | 2.34 | 3.13 |
Speckle decorrelation (SD) | 3.37 | 3.37 | 1.79 | 1.91 | 2.28 | 2.48 | 3.56 | 4.67 |
CNN | 2.54 | 2.54 | 1.66 | 1.8 | 2.05 | 0.83 | 2.33 | 3.39 |
CNN + SD | 2.2 | 2.21 | 0.96 | 0.98 | 1.31 | 0.42 | 1.8 | 2.95 |
SPRAO | 1.53 | 1.55 | 0.75 | 0.75 | 1.03 | 0.26 | 1.33 | 2.14 |
DB5 Methods | Average Error (mm/°): X | Y | – | – | – | Final Drift (mm): Min. | Med. | Max.
---|---|---|---|---|---|---|---|---
Linear motion (LM) | 3.94 | 3.94 | 3.18 | 3 | 3.93 | 2.25 | 3.98 | 5.52 |
LM + IMU | 2.25 | 2.25 | 1.65 | 1.49 | 2.1 | 1.43 | 2.34 | 3.16 |
Speckle decorrelation (SD) | 3.37 | 3.37 | 2.77 | 2.57 | 3.53 | 2.25 | 3.54 | 4.66 |
CNN | 2.54 | 2.54 | 2.6 | 2.45 | 3.09 | 0.87 | 2.32 | 3.48 |
CNN + SD | 2.25 | 2.25 | 1.48 | 1.35 | 1.85 | 0.35 | 1.84 | 3.17 |
SPRAO | 1.55 | 1.55 | 1.16 | 1.05 | 1.44 | 0.26 | 1.3 | 2.15 |
DB6 Methods | Average Error (mm/°): X | Y | – | – | – | Final Drift (mm): Min. | Med. | Max.
---|---|---|---|---|---|---|---|---
Linear motion (LM) | 3.94 | 3.92 | 3.18 | 3.03 | 4.34 | 2.28 | 4.02 | 5.49 |
LM + IMU | 2.25 | 2.25 | 1.57 | 1.55 | 2.03 | 1.46 | 2.37 | 3.14 |
Speckle decorrelation (SD) | 3.37 | 3.37 | 2.8 | 2.7 | 3.48 | 2.24 | 3.53 | 4.68 |
CNN | 2.54 | 2.54 | 2.52 | 2.48 | 3.15 | 0.81 | 2.31 | 3.51 |
CNN + SD | 2.25 | 2.25 | 1.45 | 1.41 | 1.89 | 0.23 | 1.81 | 3.14 |
SPRAO | 1.55 | 1.53 | 1.1 | 1.07 | 1.49 | 0.2 | 1.31 | 2.16 |
Results of Dataset DB1 | PCC | R² | MSE | MAS
---|---|---|---|---
Linear motion (LM) | −0.17 | −0.04 | 14.75 | 3.79 |
LM + IMU | 0.13 | 0.83 | 5.57 | 2.35 |
Speckle decorrelation (SD) | −0.13 | 0.51 | 11.66 | 3.39 |
CNN | 0.23 | 0.84 | 7.75 | 2.29 |
CNN + SD | −0.02 | 0.69 | 3.29 | 1.78 |
SPRAO | 0.25 | 0.94 | 1.55 | 1.24 |
Results of Dataset DB3 | PCC | R² | MSE | MAS
---|---|---|---|---
Linear motion (LM) | 0.14 | 0.02 | 14.72 | 3.79 |
LM + IMU | −0.16 | 0.88 | 5.22 | 2.28 |
Speckle decorrelation (SD) | 0.12 | 0.67 | 12.28 | 3.49 |
CNN | 0.19 | 0.79 | 7.91 | 2.26 |
CNN + SD | 0.22 | 0.76 | 3.09 | 1.74 |
SPRAO | 0.45 | 0.89 | 1.67 | 1.28 |
Results of Dataset DB5 | PCC | R² | MSE | MAS
---|---|---|---|---
Linear motion (LM) | 0.17 | 0.03 | 14.08 | 3.71 |
LM + IMU | 0.29 | 0.75 | 4.99 | 2.21 |
Speckle decorrelation (SD) | 0.14 | 0.46 | 11.28 | 3.33 |
CNN | 0.23 | 0.77 | 7.28 | 2.18 |
CNN + SD | 0.41 | 0.71 | 3.19 | 1.76 |
SPRAO | 0.55 | 0.91 | 1.63 | 1.26 |
Results of Dataset DB6 | PCC | R² | MSE | MAS
---|---|---|---|---
Linear motion (LM) | 0.17 | 0.03 | 2.05 | 2.15 |
LM + IMU | 0.26 | 0.76 | 1.06 | 1.01 |
Speckle decorrelation (SD) | 0.19 | 0.56 | 1.79 | 1.91 |
CNN | 0.27 | 0.81 | 1.36 | 1.45 |
CNN + SD | 0.29 | 0.66 | 0.79 | 0.85 |
SPRAO | 0.53 | 0.91 | 0.61 | 0.67 |