A Data Compression Method for Wellbore Stability Monitoring Based on Deep Autoencoder
Abstract
1. Introduction
- We propose an efficient, real-time compression method for wellbore stability monitoring data that improves the compression efficiency of wellbore inclination and azimuth data while greatly reducing the error between the reconstructed and original data. This addresses the poor real-time performance, low compression efficiency, and large reconstruction error of existing methods, and can effectively improve the performance of wellbore stability monitoring;
- To the best of our knowledge, we are the first to apply deep autoencoders to the compression of inclination and azimuth data, achieving a substantially higher compression ratio than existing methods;
- We propose a mean filtering method based on an optimal standard deviation threshold, applied to the residual-compensated reconstructed data, which further reduces the error with respect to the original data.
2. Proposed Method
2.1. The Overall Framework of the Proposed Method
2.1.1. Block Diagram of Compressed Data Transmission System for LWD
2.1.2. Structural Diagram of the Proposed Method
2.2. Extraction of Source Data Features
2.2.1. Autoencoder
2.2.2. Deep Autoencoder
2.3. Compression of Residual Data
2.3.1. Quantization Coding
2.3.2. Huffman Coding
2.4. Mean Filtering of Compensated Reconstructed Data
2.4.1. RMSProp Optimization Algorithm
2.4.2. Mean Filtering Method Based on Optimal Standard Deviation Threshold
2.5. Performance Evaluation
2.6. Implementation of the Proposed Method
2.6.1. Training Process
Algorithm 1: Training steps for the deep autoencoder
… sampling data in each sample. Then, set shuffle to “True” to shuffle the sample order. Step 3: Use MSE as the loss function and Adam as the optimizer to train the deep autoencoder with the structure shown in Table 1.
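As a concrete illustration of Algorithm 1, the sketch below builds and trains a deep autoencoder with the layer dimensions listed in Table 1 (n → n/2 → n/4 → n/8 → n/4 → n/2 → n) in Keras, using MSE loss, the Adam optimizer, and shuffle=True. The activation functions, sample length n, epoch count, and batch size are illustrative assumptions, not the settings used in the paper.

```python
# Minimal training sketch (assumed Keras implementation; layer sizes follow Table 1,
# activations and hyperparameters are illustrative, not the authors' exact settings).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def build_deep_autoencoder(n):
    """Deep autoencoder with dimensions n -> n/2 -> n/4 -> n/8 -> n/4 -> n/2 -> n."""
    inp = layers.Input(shape=(n,))
    h = layers.Dense(n // 2, activation="tanh")(inp)    # hidden layer 1
    h = layers.Dense(n // 4, activation="tanh")(h)       # hidden layer 2
    code = layers.Dense(n // 8, activation="tanh")(h)    # bottleneck (compressed representation)
    h = layers.Dense(n // 4, activation="tanh")(code)    # hidden layer 3
    h = layers.Dense(n // 2, activation="tanh")(h)       # hidden layer 4
    out = layers.Dense(n, activation="linear")(h)        # hidden layer 5 / output
    autoencoder = models.Model(inp, out)
    encoder = models.Model(inp, code)
    return autoencoder, encoder

n = 64                                                   # assumed number of sampling points per sample
x_train = np.random.rand(1000, n).astype("float32")      # placeholder for normalized training samples

autoencoder, encoder = build_deep_autoencoder(n)
autoencoder.compile(optimizer="adam", loss="mse")        # Adam optimizer, MSE loss (Step 3)
autoencoder.fit(x_train, x_train, epochs=100, batch_size=32, shuffle=True)  # shuffled samples (Step 2)
```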
Algorithm 2: Steps to find the optimal standard deviation threshold
Step 1: Load the deep autoencoder shown in Table 1. Step 2: … Step 3: Set the number of quantization bits for quantization coding, perform quantization coding on the residual data according to Formulas (6) and (7), and then perform Huffman coding on the quantized data according to the method in ref. [13] to obtain the compressed residual data. Step 4: Decode the compressed residual data with Huffman decoding and quantization decoding, following the inverse of the process in Step 3, to obtain the reconstructed residual data. Step 5: Add the reconstructed residual data to the decompressed data to obtain the reconstructed test dataset. Step 6: Use the RMSProp algorithm (Section 2.4.1) to find the optimal standard deviation threshold.
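The detailed sub-steps of Step 6 are not reproduced above. The sketch below shows one plausible way to search a scalar standard deviation threshold with RMSProp, using a finite-difference estimate of the gradient of a filtering objective; the objective, gradient estimate, and hyperparameters are assumptions, not the authors' exact procedure.

```python
import numpy as np

def rmsprop_search_threshold(objective, t0=1.0, lr=0.05, rho=0.9, eps=1e-8,
                             n_iters=200, delta=1e-3):
    """Minimize objective(t) over a scalar threshold t with RMSProp.

    objective: e.g. the MSE between the original data and the mean-filtered
               reconstructed data for a candidate standard deviation threshold t.
    """
    t, s = t0, 0.0
    for _ in range(n_iters):
        # Central finite-difference estimate of d(objective)/dt.
        grad = (objective(t + delta) - objective(t - delta)) / (2.0 * delta)
        s = rho * s + (1.0 - rho) * grad ** 2       # running average of squared gradients
        t -= lr * grad / (np.sqrt(s) + eps)         # RMSProp update
        t = max(t, 1e-6)                            # keep the threshold positive
    return t

# Usage sketch: `filtered_mse(t)` would apply the mean filter of Section 2.4.2 with
# threshold t to the reconstructed test data and return the MSE against the originals.
# t_opt = rmsprop_search_threshold(filtered_mse)
```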
2.6.2. Data Compression Process
Algorithm 3: Steps for compression encoding
Step 1: Load the deep autoencoder model with the trained model parameters. Step 2: Collect a group of data points as one sample and input it into the encoder of the deep autoencoder to obtain the compressed data. Step 3: Input the compressed data into the decoder of the deep autoencoder to obtain the decompressed data, and subtract the decompressed data from the original sample to obtain the residual data. Step 4: Encode the residual data with quantization coding and Huffman coding to obtain the compressed residual data. Step 5: Output the compressed data of the sample and the compressed residual data. Step 6: Repeat Steps 2 to 5 until all the collected data have been processed.
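Formulas (6) and (7) are not reproduced here, so the following sketch of Step 4 assumes a common scheme: uniform quantization of the residual data to a chosen number of bits, followed by a standard frequency-based Huffman code in the spirit of ref. [13]. The function names and the quantization rule are illustrative.

```python
import heapq
import numpy as np
from collections import Counter

def quantize(residual, bits):
    """Uniform quantization of the residual data to 2**bits levels (assumed scheme)."""
    lo, hi = residual.min(), residual.max()
    step = (hi - lo) / (2 ** bits - 1) or 1.0
    q = np.round((residual - lo) / step).astype(int)
    return q, lo, step                        # lo and step are needed for dequantization

def dequantize(q, lo, step):
    return q * step + lo

def huffman_code(symbols):
    """Build a Huffman code book {symbol: bitstring} from symbol frequencies."""
    freq = Counter(symbols)
    if len(freq) == 1:                         # degenerate case: only one symbol value
        return {next(iter(freq)): "0"}
    heap = [[w, i, [s, ""]] for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:
        lo_node = heapq.heappop(heap)
        hi_node = heapq.heappop(heap)
        for pair in lo_node[2:]:
            pair[1] = "0" + pair[1]
        for pair in hi_node[2:]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo_node[0] + hi_node[0], i] + lo_node[2:] + hi_node[2:])
        i += 1
    return {s: code for s, code in heap[0][2:]}

residual = np.random.randn(64)                 # placeholder for residual data
q, lo, step = quantize(residual, bits=6)
book = huffman_code(q.tolist())
bitstream = "".join(book[s] for s in q.tolist())   # compressed residual data
```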
2.6.3. Data Decompression Process
Algorithm 4: Steps for decompression
Step 1: Use the optimal standard deviation threshold obtained from Algorithm 2 as the standard deviation threshold required for the mean filtering method. Steps 2–5: Recover the reconstructed data from the received compressed data (Huffman decoding and quantization decoding of the compressed residual data, decoding of the compressed data through the decoder of the deep autoencoder, and residual compensation, i.e., the inverse of Algorithm 3). Step 6: Take the reconstructed data points in order and fill them into the filtering window. Step 7: Apply the mean filtering method according to Equations (11)–(16) and output the filtered data. Step 8: Repeat Steps 2 to 7 until all the compressed data have been decoded.
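Equations (11)–(16) are likewise not reproduced above, so the sketch below reflects only one plausible reading of the filtering step: slide a window over the reconstructed data and replace a sample with the window mean only when the window's standard deviation is below the optimal threshold, leaving windows with large variation (e.g., genuine trend changes) untouched. The window length, edge handling, and triggering rule are assumptions.

```python
import numpy as np

def mean_filter_with_std_threshold(x, t_opt, window=5):
    """Sliding-window mean filter gated by a standard deviation threshold.

    x:      reconstructed (residual-compensated) data
    t_opt:  optimal standard deviation threshold from Algorithm 2
    window: odd window length (assumed)
    """
    x = np.asarray(x, dtype=float)
    y = x.copy()
    half = window // 2
    for i in range(half, len(x) - half):
        w = x[i - half:i + half + 1]
        # Smooth only where the window looks like noise around a stable value.
        if np.std(w) < t_opt:
            y[i] = np.mean(w)
    return y

# Usage sketch: filtered = mean_filter_with_std_threshold(reconstructed, t_opt, window=5)
```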
3. Experiments
3.1. Experimental Data
3.2. Experimental Setup
4. Analysis and Discussion of the Experimental Results
4.1. Data Feature Extraction Results
4.2. Comparison of Compression Results
4.3. Ablation Study
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Fouda, M.; Taher, A.; Hussein, M.; Al-Hassan, M. Advanced Techniques for Wellbore Stability Evaluation Using Logging-While-Drilling Technologies. In Proceedings of the ARMA/DGS/SEG International Geomechanics Symposium, Al Khobar, Saudi Arabia, 30 October–2 November 2023; p. ARMA–IGS-2023-0087. [Google Scholar]
- Ciuperca, C.-L.; Di Tommaso, D.; Dawber, M.; Tidswell, J. Determining Wellbore Stability Parameters Using a New LWD High Resolution Ultrasonic Imaging Tool. In Proceedings of the SPE/IADC Drilling Conference and Exhibition, The Hague, The Netherlands, 5–7 March 2019; p. D021S009R002. [Google Scholar]
- Stricker, K.; Schimschal, S.; Müller, B.; Wessling, S.; Bender, F.; Kohl, T. Importance of drilling-related processes on the origin of borehole breakouts—Insights from LWD observations. Geomech. Energy Environ. 2023, 34, 100463. [Google Scholar] [CrossRef]
- Greten, A.; Brahim, I.B.; Emmerich, W.; Akimov, O. Reliable Mud-Pulse Telemetry System for High-Resolution Real-Time Logs. In Proceedings of the SPE/IADC Drilling Conference and Exhibition, The Hague, The Netherlands, 14–16 March 2017. [Google Scholar]
- Mwachaka, S.M.; Wu, A.P.; Fu, Q.Q. A review of mud pulse telemetry signal impairments modeling and suppression methods. J. Pet. Explor. Prod. Technol. 2019, 9, 779–792. [Google Scholar] [CrossRef]
- Li, C.; Xu, Z. A Review of Communication Technologies in Mud Pulse Telemetry Systems. Electronics 2023, 12, 3930. [Google Scholar] [CrossRef]
- Siu, S.; Ji, Q.; Wu, W.; Song, G.; Ding, Z. Stress wave communication in concrete: I. Characterization of a smart aggregate based concrete channel. Smart Mater. Struct. 2014, 23, 125030. [Google Scholar] [CrossRef]
- Wu, A.; He, S.; Ren, Y.; Wang, N.; Ho, S.C.M.; Song, G. Design of a new stress wave-based pulse position modulation (PPM) communication system with piezoceramic transducers. Sensors 2019, 19, 558. [Google Scholar] [CrossRef] [PubMed]
- He, S.; Wang, N.; Ho, M.; Zhu, J.; Song, G. Design of a new stress wave communication method for underwater communication. IEEE Trans. Ind. Electron. 2020, 68, 7370–7379. [Google Scholar] [CrossRef]
- Zhang, G.; Yang, P.; He, S.; Zheng, Y.; Song, G. A power waveform design based on OVSF-PPM for stress wave based wireless power transfer. Mech. Syst. Signal Process. 2021, 147, 107111. [Google Scholar] [CrossRef]
- Alkamil, E.H.; Abbood, H.R.; Flori, R.E.; Eckert, A. Case study of wellbore stability evaluation for the Mishrif Formation, Iraq. J. Pet. Sci. Eng. 2018, 164, 663–674. [Google Scholar] [CrossRef]
- Khan, K.; Altwaijri, M.; Taher, A.; Fouda, M.; Hussein, M. Real-Time Wellbore Stability and Hole Quality Evaluation Using LWD Azimuthal Photoelectric Measurements. In Proceedings of the SPE Middle East Oil and Gas Show and Conference, Sanabis, Bahrain, 28 November–1 December 2021; p. D021S008R009. [Google Scholar]
- Huffman, D.A. A method for the construction of minimum-redundancy codes. Proc. IRE 1952, 40, 1098–1101. [Google Scholar] [CrossRef]
- Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306. [Google Scholar] [CrossRef]
- Shensa, M.J. The discrete wavelet transform: Wedding the a trous and Mallat algorithms. IEEE Trans. Signal Process. 1992, 40, 2464–2482. [Google Scholar] [CrossRef]
- Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef]
- O’Neal, J., Jr. Predictive quantizing systems (differential pulse code modulation) for the transmission of television signals. Bell Syst. Tech. J. 1966, 45, 689–721. [Google Scholar] [CrossRef]
- Song, S.; Lian, T.; Liu, W.; Zhang, Z.; Luo, M.; Wu, A. A lossless compression method for logging data while drilling. Syst. Sci. Control Eng. 2021, 9, 689–703. [Google Scholar] [CrossRef]
- Li, J.; Dai, B.; Jones, C.M.; Samson, E.M.; Gascooke, D. Downhole Signal Compression and Surface Reconstruction Based on Dictionary Machine Learning. In Proceedings of the SPWLA Annual Logging Symposium, Virtual Online Webinar, 24 June–29 July 2020; p. D363S025R006. [Google Scholar]
- Jarrot, A.; Gelman, A.; Kusuma, J. Wireless Digital Communication Technologies for Drilling: Communication in the Bits/s Regime. IEEE Signal Process. Mag. 2018, 35, 112–120. [Google Scholar] [CrossRef]
- Yan, Z.D.; Wang, J.F.; Sheng, L.; Yang, Z.Y. An effective compression algorithm for real-time transmission data using predictive coding with mixed models of LSTM and XGBoost. Neurocomputing 2021, 462, 247–259. [Google Scholar] [CrossRef]
- Chen, T.; Guestrin, C. Xgboost: A scalable tree boosting system. In Proceedings of the 22nd ACM Sigkdd International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 785–794. [Google Scholar]
- Zhang, Y.; Xiong, K.; Qiu, Z.; Wang, S.; Sun, D. A new method for real-time LWD data compression. In Proceedings of the 2009 International Symposium on Information Processing (ISIP 2009), Huangshan, China, 21–23 August 2009; p. 163. [Google Scholar]
- Zhang, Y.; Wang, S.; Xiong, K.; Qiu, Z.; Sun, D. DPCM Compression for Real-Time Logging While Drilling Data. J. Softw. 2010, 5, 280–287. [Google Scholar] [CrossRef]
- Kim, H.; Nam, S.; Nam, E. Estimation of Shape Error with Monitoring Signals. Sensors 2023, 23, 9416. [Google Scholar] [CrossRef]
- Zhong, Z.; Li, H. Recognition and prediction of ground vibration signal based on machine learning algorithm. Neural Comput. Appl. 2020, 32, 1937–1947. [Google Scholar] [CrossRef]
- Yi, H. Efficient machine learning algorithm for electroencephalogram modeling in brain–computer interfaces. Neural Comput. Appl. 2022, 34, 9233–9243. [Google Scholar] [CrossRef]
- Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning representations by back-propagating errors. Nature 1986, 323, 533–536. [Google Scholar] [CrossRef]
- Abbaschian, B.J.; Sierra-Sosa, D.; Elmaghraby, A. Deep Learning Techniques for Speech Emotion Recognition, from Databases to Models. Sensors 2021, 21, 1249. [Google Scholar] [CrossRef] [PubMed]
- Eddahmani, I.; Pham, C.-H.; Napoléon, T.; Badoc, I.; Fouefack, J.-R.; El-Bouz, M. Unsupervised Learning of Disentangled Representation via Auto-Encoding: A Survey. Sensors 2023, 23, 2362. [Google Scholar] [CrossRef] [PubMed]
- Al-Ashwal, N.H.; Al Soufy, K.A.M.; Hamza, M.E.; Swillam, M.A. Deep Learning for Optical Sensor Applications: A Review. Sensors 2023, 23, 6486. [Google Scholar] [CrossRef] [PubMed]
- Nuha, H.H.; Balghonaim, A.; Liu, B.; Mohandes, M.; Deriche, M.; Fekri, F. Deep neural networks with extreme learning machine for seismic data compression. Arab. J. Sci. Eng. 2020, 45, 1367–1377. [Google Scholar] [CrossRef]
- Liu, J.Y.; Di, S.; Zhao, K.; Jin, S.; Tao, D.W.; Liang, X.; Chen, Z.Z.; Cappello, F. Exploring Autoencoder-based Error-bounded Compression for Scientific Data. In Proceedings of the IEEE International Conference on Cluster Computing (Cluster), Portland, OR, USA, 7–10 September 2021; pp. 294–306. [Google Scholar]
- Di, S.; Cappello, F. Fast error-bounded lossy HPC data compression with SZ. In Proceedings of the 2016 IEEE International Parallel and Distributed Processing Symposium (IPDPS), Chicago, IL, USA, 23–27 May 2016; pp. 730–739. [Google Scholar]
- Lindstrom, P. Fixed-rate compressed floating-point arrays. IEEE Trans. Vis. Comput. Graph. 2014, 20, 2674–2683. [Google Scholar] [CrossRef] [PubMed]
- Jalilian, E.; Hofbauer, H.; Uhl, A. Iris Image Compression Using Deep Convolutional Neural Networks. Sensors 2022, 22, 2698. [Google Scholar] [CrossRef] [PubMed]
- Candido de Oliveira, D.; Nassu, B.T.; Wehrmeister, M.A. Image-Based Detection of Modifications in Assembled PCBs with Deep Convolutional Autoencoders. Sensors 2023, 23, 1353. [Google Scholar] [CrossRef] [PubMed]
- Shinde, A.B.; Bagade, J.; Bhimanpallewar, R.; Dandawate, Y.H. Image Compression of Handwritten Devanagari Text Documents Using a Convolutional Autoencoder. Int. J. Intell. Syst. Appl. Eng. 2023, 11, 449–457. [Google Scholar]
- Hinton, G.E.; Osindero, S.; Teh, Y.-W. A fast learning algorithm for deep belief nets. Neural Comput. 2006, 18, 1527–1554. [Google Scholar] [CrossRef]
- Yildirim, O.; San Tan, R.; Acharya, U.R. An efficient compression of ECG signals using deep convolutional autoencoders. Cogn. Syst. Res. 2018, 52, 198–211. [Google Scholar] [CrossRef]
- Kuester, J.; Gross, W.; Middelmann, W. An Approach to Near-lossless Hyperspectral Data Compression using Deep Autoencoder. In Proceedings of the Conference on Image and Signal Processing for Remote Sensing XXVI, Online, 21–25 September 2020. [Google Scholar]
- Tieleman, T.; Hinton, G. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Netw. Mach. Learn. 2012, 4, 26–31. [Google Scholar]
- Dhar, J. An adaptive intelligent diagnostic system to predict early stage of Parkinson's disease using two-stage dimension reduction with genetically optimized LightGBM algorithm. Neural Comput. Appl. 2022, 34, 4567–4593. [Google Scholar] [CrossRef]
- Zhao, S.; Tu, K.; Ye, S.; Tang, H.; Hu, Y.; Xie, C. Land Use and Land Cover Classification Meets Deep Learning: A Review. Sensors 2023, 23, 8966. [Google Scholar] [CrossRef] [PubMed]
- Pei, X.L.; Zheng, X.Y.; Wu, J.L. Intelligent bearing fault diagnosis based on Teager energy operator demodulation and multiscale compressed sensing deep autoencoder. Measurement 2021, 179, 15. [Google Scholar] [CrossRef]
- Yang, Z.; Xu, B.B.; Luo, W.; Chen, F. Autoencoder-based representation learning and its application in intelligent fault diagnosis: A review. Measurement 2022, 189, 20. [Google Scholar] [CrossRef]
- Zou, F.; Shen, L.; Jie, Z.; Zhang, W.; Liu, W. A sufficient condition for convergences of adam and rmsprop. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 11127–11135. [Google Scholar]
| Composition | Layer Name | Layer Type | Activation Function | Dimension |
|---|---|---|---|---|
| Encoder | input layer | - | - | n |
| | hidden layer 1 | Dense | | n/2 |
| | hidden layer 2 | Dense | | n/4 |
| | bottleneck | Dense | | n/8 |
| Decoder | hidden layer 3 | Dense | | n/4 |
| | hidden layer 4 | Dense | | n/2 |
| | hidden layer 5 | Dense | | n |
| | output layer | - | - | n |
| Method | Data Set | Quantization Data Bits | Tw | CR | SNR/dB | MSE |
|---|---|---|---|---|---|---|
| DPCM | | 4 | - | 2.75 | 20.42 | 3767.45 |
| | | 5 | - | 2.20 | 25.90 | 1067.62 |
| | | 6 | - | 1.83 | 31.60 | 286.80 |
| | | 7 | - | 1.57 | 36.99 | 83.06 |
| | | 8 | - | 1.38 | 42.90 | 21.29 |
| | | 4 | - | 2.75 | 27.34 | 2704.75 |
| | | 5 | - | 2.20 | 31.15 | 1124.74 |
| | | 6 | - | 1.83 | 36.98 | 293.73 |
| | | 7 | - | 1.57 | 42.13 | 89.66 |
| | | 8 | - | 1.38 | 48.34 | 21.47 |
| | | Average | | 1.94 | 34.37 | 946.06 |
| DPCM-I | | 4 | - | 2.75 | 20.72 | 3512.95 |
| | | 5 | - | 2.20 | 26.39 | 952.75 |
| | | 6 | - | 1.83 | 31.77 | 276.33 |
| | | 7 | - | 1.57 | 38.03 | 65.28 |
| | | 8 | - | 1.38 | 44.16 | 15.92 |
| | | 4 | - | 2.75 | 28.68 | 1986.52 |
| | | 5 | - | 2.20 | 32.50 | 824.54 |
| | | 6 | - | 1.83 | 38.42 | 210.84 |
| | | 7 | - | 1.57 | 43.46 | 66.09 |
| | | 8 | - | 1.38 | 48.94 | 18.70 |
| | | Average | | 1.94 | 35.31 | 792.99 |
| deepAE | | - | - | 7.33 | 27.60 | 721.63 |
| | | - | - | 7.33 | 22.89 | 7523.82 |
| | | Average | | 7.33 | 25.25 | 4122.73 |
| The proposed method | | 4 | 53.25 | 4.38 | 37.59 | 72.26 |
| | | 5 | 45.51 | 4.35 | 43.38 | 19.06 |
| | | 6 | 33.61 | 4.33 | 46.66 | 8.96 |
| | | 7 | 21.81 | 4.28 | 50.50 | 3.70 |
| | | 8 | 11.01 | 4.18 | 53.22 | 1.98 |
| | | 4 | 55.23 | 4.23 | 35.34 | 428.61 |
| | | 5 | 50.05 | 4.08 | 40.10 | 143.27 |
| | | 6 | 24.50 | 3.88 | 43.96 | 58.82 |
| | | 7 | 11.79 | 3.62 | 48.03 | 23.05 |
| | | 8 | 6.18 | 3.22 | 52.10 | 9.02 |
| | | Average | | 4.05 | 45.09 | 76.87 |
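For reference, the CR, SNR, and MSE columns above (and the relative changes ΔCR, ΔSNR, and ΔMSE reported below) are presumed to follow the standard definitions; a minimal sketch:

```python
import numpy as np

def compression_ratio(original_bits, compressed_bits):
    """CR: size of the original data stream divided by the size of the compressed stream."""
    return original_bits / compressed_bits

def mse(x, x_hat):
    """Mean squared error between the original data x and the reconstructed data x_hat."""
    x, x_hat = np.asarray(x, dtype=float), np.asarray(x_hat, dtype=float)
    return float(np.mean((x - x_hat) ** 2))

def snr_db(x, x_hat):
    """SNR in dB: power of the original signal over the power of the reconstruction error."""
    x, x_hat = np.asarray(x, dtype=float), np.asarray(x_hat, dtype=float)
    return float(10.0 * np.log10(np.sum(x ** 2) / np.sum((x - x_hat) ** 2)))

def relative_change(new, baseline):
    """Relative change in percent, e.g. ΔCR = (CR_new − CR_baseline) / CR_baseline × 100%."""
    return (new - baseline) / baseline * 100.0
```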
| Method Used for Comparison | Data Set | Quantization Data Bits | ΔCR | ΔSNR | ΔMSE |
|---|---|---|---|---|---|
| DPCM | | 4 | 59.48% | 84.09% | −98.08% |
| | | 5 | 98.00% | 67.52% | −98.21% |
| | | 6 | 136.08% | 47.64% | −96.88% |
| | | 7 | 172.25% | 36.54% | −95.55% |
| | | 8 | 203.78% | 24.06% | −90.70% |
| | | 4 | 53.80% | 29.28% | −84.15% |
| | | 5 | 85.54% | 28.74% | −87.26% |
| | | 6 | 112.01% | 18.88% | −79.98% |
| | | 7 | 130.11% | 14.00% | −74.29% |
| | | 8 | 134.33% | 7.78% | −57.98% |
| | | Average | 118.54% | 35.85% | −86.31% |
| DPCM-I | | 4 | 59.48% | 81.39% | −97.94% |
| | | 5 | 98.00% | 64.38% | −98.00% |
| | | 6 | 136.08% | 46.89% | −96.76% |
| | | 7 | 172.25% | 32.78% | −94.33% |
| | | 8 | 203.78% | 20.52% | −87.56% |
| | | 4 | 53.80% | 23.23% | −78.42% |
| | | 5 | 85.54% | 23.40% | −82.62% |
| | | 6 | 112.01% | 14.43% | −72.10% |
| | | 7 | 130.11% | 10.53% | −65.12% |
| | | 8 | 134.33% | 6.46% | −51.76% |
| | | Average | 118.54% | 32.40% | −82.46% |
| deepAE | | 4 | −40.23% | 36.20% | −89.99% |
| | | 5 | −40.60% | 57.17% | −97.36% |
| | | 6 | −41.00% | 69.06% | −98.76% |
| | | 7 | −41.65% | 82.97% | −99.49% |
| | | 8 | −43.02% | 92.83% | −99.73% |
| | | 4 | −42.36% | 54.39% | −94.30% |
| | | 5 | −44.34% | 75.19% | −98.10% |
| | | 6 | −47.01% | 92.05% | −99.22% |
| | | 7 | −50.68% | 109.83% | −99.69% |
| | | 8 | −56.04% | 127.61% | −99.88% |
| | | Average | −44.69% | 79.73% | −97.65% |
| deepAE | QC+HC | F | Data Set | Quantization Data Bits | Tw | CR | SNR/dB | MSE |
|---|---|---|---|---|---|---|---|---|
| √ | ✕ | ✕ | | - | - | 7.33 | 27.60 | 721.63 |
| | | | | - | - | 7.33 | 22.89 | 7523.82 |
| | | | | Mean | | 7.33 | 25.25 | 4122.73 |
| √ | √ | ✕ | | 4 | - | 4.38 | 36.98 | 83.25 |
| | | | | 5 | - | 4.35 | 42.17 | 25.15 |
| | | | | 6 | - | 4.33 | 44.90 | 13.42 |
| | | | | 7 | - | 4.28 | 48.08 | 6.45 |
| | | | | 8 | - | 4.18 | 49.93 | 4.22 |
| | | | | 4 | - | 4.23 | 34.86 | 478.87 |
| | | | | 5 | - | 4.08 | 39.22 | 175.22 |
| | | | | 6 | - | 3.88 | 42.75 | 77.85 |
| | | | | 7 | - | 3.62 | 46.49 | 32.86 |
| | | | | 8 | - | 3.22 | 50.19 | 14.01 |
| | | | | Mean | | 4.06 | 43.56 | 91.13 |
| √ | √ | √ | | 4 | 53.25 | 4.38 | 37.59 | 72.26 |
| | | | | 5 | 45.51 | 4.35 | 43.38 | 19.06 |
| | | | | 6 | 33.61 | 4.33 | 46.66 | 8.96 |
| | | | | 7 | 21.81 | 4.28 | 50.50 | 3.70 |
| | | | | 8 | 11.01 | 4.18 | 53.22 | 1.98 |
| | | | | 4 | 55.23 | 4.23 | 35.34 | 428.61 |
| | | | | 5 | 50.05 | 4.08 | 40.10 | 143.27 |
| | | | | 6 | 24.50 | 3.88 | 43.96 | 58.82 |
| | | | | 7 | 11.79 | 3.62 | 48.03 | 23.05 |
| | | | | 8 | 6.18 | 3.22 | 52.10 | 9.02 |
| | | | | Mean | | 4.06 | 45.09 | 76.87 |