Article

Prediction of Casing Collapse Strength Based on Bayesian Neural Network

1 College of Pipeline and Civil Engineering, China University of Petroleum (East China), Qingdao 266580, China
2 CNPC Tubular Goods Research Institute, Xi’an 710077, China
3 School of Electronic Engineering, Xi’an Shiyou University, Xi’an 710077, China
* Author to whom correspondence should be addressed.
Processes 2022, 10(7), 1327; https://doi.org/10.3390/pr10071327
Submission received: 10 May 2022 / Revised: 23 June 2022 / Accepted: 27 June 2022 / Published: 6 July 2022
(This article belongs to the Section Process Control and Monitoring)

Abstract

With the application of complex fracturing and other demanding technologies, external extrusion has become the main cause of casing damage, so non-API high-collapse-resistance casing is increasingly used in the exploitation of unconventional oil and gas resources. Because the collapse strength of such casing is strongly sensitive to string ovality, uneven wall thickness, residual stress, and other factors, the API formula gives large errors when predicting its collapse strength. Therefore, a Bayesian regularization artificial neural network (BRANN) is used to predict the external collapse strength of high-collapse-resistance casing. Full-scale physical test data, including initial defect data, geometric dimensions, and mechanical parameters, were collected and preprocessed to establish a casing collapse strength data set for model training and blind testing. A classical three-layer neural network was trained with the Bayesian regularization algorithm. Using an empirical formula together with trial and error, the best prediction model was obtained with 12 hidden neurons. Predictions on the imported blind-test data show that the coincidence rate of the BRANN collapse strength prediction reaches 96.67%. Error analysis against the API formula and the KT formula improved by least-squares fitting shows that the BRANN-based collapse strength prediction has higher accuracy and stability. Compared with traditional prediction methods, this model can predict casing strength under more complicated working conditions and has certain guiding significance.

1. Introduction

As a key component in the development and production of oil and gas wells, the casing is subjected not only to high axial tensile or compressive loads and internal and external pressure loads, but also to harsh service conditions such as high temperature at the bottom of the well and acidic corrosion. Once damage occurs, it not only reduces oil and gas production but also seriously damages the reservoir and disrupts normal exploration and production [1,2,3]. The economic loss from well damage or scrapping caused by casing failure in China’s oil fields amounts to billions of dollars every year, and casing damage remains a non-negligible problem for the international oil industry. Owing to the long-term complex service environment, the casing is subject to various uniform or non-uniform loads imposed by the formation and by downhole operations, and its full-scale performance changes constantly as the mechanical environment and downhole working conditions evolve. Numerous studies on casing strength show that casing steel grade, diameter-to-thickness ratio, geometric defects (outside-diameter ovality and wall thickness unevenness), yield strength, and residual stress are the main factors affecting casing collapse strength. External factors such as temperature, downhole wear, and the cement sheath also affect it [4]. In recent years, with increasing drilling depth, collapse strength has become a key indicator in casing selection. The collapse strength formulas in the API and ISO standards cannot fully account for the interaction between the inherent defects of the tubular column and the non-uniform external loads of complex downhole conditions, so calculated values deviate from actual values.
To investigate the change law of casing strength performance under the influence of multiple factors, scholars at home and abroad have revised the formula for calculating the casing resistance to collapse with experiments and finite element simulations. At present, the research on collapse strength is still being explored, seeking a more accurate formula for casing collapse strength prediction from a data-driven perspective.
Big data analytics, as a branch of data science, covers artificial intelligence, data mining, machine learning, and pattern recognition. Machine learning studies the ability of computers to learn from data and is used to extract predictive models from data [5,6,7,8,9]. Machine learning is broadly divided into two types: unsupervised learning, which learns by clustering or otherwise structuring unlabeled data, and supervised learning, which analyzes labeled training data to obtain a model capable of predicting new cases from a vector of features [10,11]. Artificial neural networks are among the most widely used machine learning methods in the oil and gas industry, with applications in oilfield production, drilling, fluid processing, etc. They are mathematical structures inspired by biological neural networks for approximating functions, and they rely on large amounts of input data. Neural networks “learn” from samples and identify associations between input and output values in a selected sequence of data [12,13,14,15]. Since there is no explicit analytical expression linking the data, and the physical properties of each parameter in the model are independent, the collapse strength of the casing can be predicted by combining different process parameters. In this paper, the main factors correlated with casing collapse strength are combined with the data obtained from laboratory collapse experiments, and the preprocessed data samples are used to train an artificial neural network into a casing collapse strength prediction model. For algorithm optimization, the Bayesian regularization algorithm is adopted to improve the generalization ability of the model and further ensure its effectiveness.

2. Prediction Model Scheme of Casing Collapse Strength Based on Bayesian Regularization Algorithm

Model Construction Scheme

Casing manufacturing defects and the complex downhole service environment make the casing one of the weak links in the oil and gas industry. In combination with the relevant casing strength design specifications, among the many factors affecting casing strength, the diameter-to-thickness ratio, ovality, wall thickness unevenness, yield strength, and residual stress were selected for the collapse strength prediction study. Figure 1 shows the flow chart of the scheme for predicting casing collapse strength with an artificial neural network, which includes three parts: data acquisition, model development, and comparative analysis of the prediction results.
In the data acquisition part, full-scale physical performance experiments are used to acquire sufficient collapse strength data and establish the data sets. Model development covers the neural network structure, the division of the data set, and the optimization and evaluation of the model. Finally, to ensure that the model can effectively predict the collapse strength, the reserved test data are imported for prediction, and the accuracy of the model’s predicted values is computed and compared against the traditional regression fitting method and the collapse strength formula in the API 5C3 specification, so as to test the validity of the model.

3. Bayesian Regularized Artificial Neural Networks

In artificial neural networks, neural information is stored in the form of weights and biases, and the magnitude of each weight determines the impact of the corresponding information on the whole model. The classical machine learning approach divides a data set into three parts: a training subset, a validation subset, and a test subset [16,17]. Bayesian regularized neural networks are networks that use Bayesian regularization methods to train back-propagation (BP) networks. Bayesian regularization (BR) improves the generalization ability of a neural network by modifying its performance function [10,18,19]. For function approximation, the most commonly used method is the multilayer perceptron (MLP) with backpropagation. The MLP architecture based on the BP algorithm is given in Figure 2 and is the basis for developing the BRANN-based model in this study.
The architecture consists of three key components: the input layer, the hidden layer, and the output layer. The signals (X_i) in the input layer are first weighted by a series of weights (w_xi) and then passed into the hidden layer through a commonly used activation function (e.g., the logistic function or the hyperbolic tangent). These processed signals are then weighted again (w_yi) and summed into an output (Y_i) by a linear transfer function in the output layer. The mean squared error function E_D is then iteratively minimized to determine the optimal weights and, ultimately, the appropriate architecture.
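The signal flow just described can be sketched in a few lines. This is an illustrative sketch, not code from the paper: the function and variable names are ours, and the hyperbolic tangent is assumed as the hidden-layer activation.

```python
import numpy as np

def mlp_forward(x, W1, b1, W2, b2):
    # Input signals are weighted (W1), passed through the hidden-layer
    # activation (tanh assumed here), weighted again (W2), and summed
    # by a linear transfer function at the output, as in Figure 2.
    h = np.tanh(W1 @ x + b1)   # hidden-layer activations
    return W2 @ h + b2         # linear output layer
```

For the 5-12-1 structure used later in this paper, `W1` would be 12x5 and `W2` 1x12.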
However, the traditional BP algorithm may encounter overfitting problems, i.e., small bias and large variance. As an alternative, BRANN has better generalization capability. The objective function F (a combination of the mean squared error function E_D and the weight decay function E_W) is minimized, and the optimal weights and objective function parameters are fitted in a probabilistic manner. The objective function of BRANN is:
F = \beta E_D + \alpha E_W
E_D = \frac{1}{N}\sum_{i=1}^{N}(y_i - t_i)^2 = \frac{1}{N}\sum_{i=1}^{N} e_i^2
E_W = \frac{1}{2}\sum_{i=1}^{m} w_i^2
where α and β are hyperparameters that control the distribution of the other parameters, w_i are the network weights, and m is the number of weights. D = (x_i, t_i), i = 1, 2, …, N, denotes the training data, where N is the total number of training (input-output) pairs, and y_i is the network output corresponding to the ith pair.
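The regularized objective of Equations (1)-(3) can be written directly; this is a sketch with illustrative names (`brann_objective` is not from the paper):

```python
import numpy as np

def brann_objective(y, t, w, alpha, beta):
    """Regularized objective F = beta*E_D + alpha*E_W.
    y: network outputs, t: targets, w: flattened weight vector."""
    y, t, w = (np.asarray(v, dtype=float) for v in (y, t, w))
    E_D = np.mean((y - t) ** 2)   # mean squared error over the N pairs
    E_W = 0.5 * np.sum(w ** 2)    # half the sum of squared weights
    return beta * E_D + alpha * E_W
```

Large α relative to β penalizes weight magnitude more heavily, which is what smooths the network response and improves generalization.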
In BRANN, the initial weights are set randomly. With these initial weights, the density function of the weights can be updated according to Bayes’ rule:
P(w \mid D,\alpha,\beta,M) = \frac{P(D \mid w,\beta,M)\,P(w \mid \alpha,M)}{P(D \mid \alpha,\beta,M)}
where M is the particular neural network architecture used; P ( w | α , M ) is the prior density, which represents the knowledge of the weights before collecting the data; P ( D | w , β , M ) is the likelihood function, which is the probability of the data occurring given a weight w; and P ( D | α , β , M ) is the normalization factor, which can be calculated by the following equation:
P(D \mid \alpha,\beta,M) = \int_{-\infty}^{+\infty} P(D \mid w,\beta,M)\,P(w \mid \alpha,M)\,dw
If the noise in the training data and the weights are assumed to be Gaussian distributed, the probability densities can be written as:
P(D \mid w,\beta,M) = \frac{1}{Z_D(\beta)}\exp(-\beta E_D), \qquad Z_D(\beta) = (\pi/\beta)^{N/2}
P(w \mid \alpha,M) = \frac{1}{Z_W(\alpha)}\exp(-\alpha E_W), \qquad Z_W(\alpha) = (\pi/\alpha)^{m/2}
Substituting these probability densities into Equation (4), the posterior becomes:
P(w \mid D,\alpha,\beta,M) = \frac{\frac{1}{Z_W(\alpha)}\frac{1}{Z_D(\beta)}\exp\!\big(-(\beta E_D + \alpha E_W)\big)}{P(D \mid \alpha,\beta,M)} = \frac{1}{Z_F(\alpha,\beta)}\exp\!\big(-F(w)\big)
In BRANN, determining the optimal weights means maximizing the posterior probability P(w | D, α, β, M), which in this case is equivalent to minimizing the regularized objective function F.
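Taking the negative logarithm of the posterior above makes this equivalence explicit; this one-line derivation is not in the original but follows directly from the preceding equation:

```latex
-\ln P(w \mid D,\alpha,\beta,M) = F(w) + \ln Z_F(\alpha,\beta)
```

Since Z_F(α, β) does not depend on w, maximizing the posterior over the weights is the same as minimizing F(w).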
The joint posterior density of α and β is:
P(\alpha,\beta \mid D,M) = \frac{P(D \mid \alpha,\beta,M)\,P(\alpha,\beta \mid M)}{P(D \mid M)}
The joint posterior density is maximized by maximizing the likelihood function P(D | α, β, M), which is calculated as follows:
P(D \mid \alpha,\beta,M) = \frac{P(D \mid w,\beta,M)\,P(w \mid \alpha,M)}{P(w \mid D,\alpha,\beta,M)} = \frac{Z_F(\alpha,\beta)}{(\pi/\beta)^{n/2}\,(\pi/\alpha)^{m/2}}
where n is the number of observations (input-target pairs) and m is the total number of network parameters. In addition, the parameter Z_F(α, β) depends on the Hessian of the objective function (Foresee and Hagan, 1997) and can be estimated as:
Z_F(\alpha,\beta) \propto |H_{max}|^{-1/2}\, e^{-F(w_{max})}
where the subscript “max” indicates the point of maximum posterior probability. The Hessian matrix (H) is calculated from the Jacobian (J) as follows:
H = J^{T} J
where the Jacobian matrix contains the first-order derivatives of the network errors with respect to the network parameters.
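The Jacobian-based Hessian approximation above is cheap to compute; a minimal sketch (illustrative name, assuming J is errors-by-parameters):

```python
import numpy as np

def approx_hessian(J):
    """Gauss-Newton-style approximation H = J^T J, where J holds the
    first derivatives of the network errors w.r.t. the parameters."""
    J = np.asarray(J, dtype=float)
    return J.T @ J
```

By construction the result is symmetric and positive semidefinite, which is what makes this approximation convenient inside Levenberg-Marquardt-style training updates.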

4. Experimental Data Acquisition

Full-scale collapse performance is a key parameter in ensuring the quality and safe use of the casing [20,21,22]. The full-scale collapse tests were carried out in accordance with the requirements of API RP 5C5 and API TR 5C3. The standard specifies a minimum specimen length of 8 times the nominal outside diameter (D) for pipes with D less than or equal to 9-5/8 in, and 7 times D for pipes with D greater than 9-5/8 in [23]. The collapse test is carried out using a composite collapse test system with full-scale specimen lengths and no radial or axial loads applied. The composite collapse test system ensures that the specimen is depressurized slowly after collapse occurs and that the error does not exceed 1% of the collapse test pressure.
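The specimen-length rule just quoted is a simple threshold on nominal OD; a sketch (function name is ours, units in inches):

```python
def min_specimen_length(nominal_od_in):
    """Minimum collapse-specimen length per the rule quoted above:
    8*OD for nominal OD <= 9-5/8 in, 7*OD for larger pipe."""
    return 8.0 * nominal_od_in if nominal_od_in <= 9.625 else 7.0 * nominal_od_in
```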
The collapse test specimens were geometrically measured before the test and the measurement locations are shown in Figure 3. Five sections were measured for each specimen and 8 points were measured for each section. The results of the geometric measurements for specimen #1 are shown in Table 1. The average outer diameter, average wall thickness, ellipticity, and wall thickness unevenness values were calculated.
Residual stress was measured on each collapse specimen using the stress-ring method, with the residual stress specimen taken from the part adjacent to the collapse specimen. The minimum length of the specimen is two times the outer diameter (L/D ≥ 2); the specimen is shown in Figure 3, and the residual stress measurement results for specimen #1 are shown in Table 2.
The full-scale collapse test was conducted by an external pressure collapse test system shown in Figure 4. The full-scale collapse test specimens are shown in Figure 5. The collapse failure specimens are shown in Figure 6.

5. Establishment of a Bayesian Regularized Neural Network for Predicting Casing Collapse Strength

5.1. Sample Data Pre-Processing

After the collapse tests, the collected experimental data were grouped. Table 3 shows the collapse strength parameters for each pipe diameter (4.5 in, 5.5 in, 7.0 in, 9.5 in, 13.5 in, and 16 in) obtained from the experiments. Before model training, the parameters must be unified in dimension to eliminate useless data and form the casing collapse strength data set [24,25].
To increase the validity of the data, min-max normalization was selected for data consistency processing. This method, also called deviation standardization, linearly transforms the original data so that the result is mapped onto [0, 1]. The conversion function is:
X^{*} = \frac{X - \min}{\max - \min}
where max is the maximum value of the sample data and min is the minimum value of the sample data. The drawback of this method is that once new data are added, it may lead to changes in max and min and therefore needs to be redefined.
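The transform and its caveat can be sketched as follows (illustrative name):

```python
import numpy as np

def min_max_normalize(x):
    """Deviation standardization: map sample values linearly onto [0, 1].
    As noted above, adding new data may change min/max, after which
    the transform must be recomputed."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())
```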

5.2. Define the Network Structure

The number of layers of the neural network model, i.e., the number of hidden layers, must be determined first. In this paper, a three-layer neural network structure (one input layer, one hidden layer, and one output layer) is used. The numbers of nodes in the input and output layers equal the numbers of input and output parameters, and the optimal number of hidden neurons is obtained by comparison after several training sessions. The five main parameters correlated with casing collapse strength, namely the diameter-to-thickness ratio, ellipticity, wall thickness unevenness, yield strength, and residual stress, are the inputs of the neural network, and the output is the casing collapse strength. Under this three-layer structure, the input layer contains five neurons and the output layer one neuron. The structure of this neural network is shown in Figure 7.

5.3. Model Training and Optimization

Of the obtained data, 2/3 were selected for model development, and 1/3 (not involved in model training) were reserved to test the generalization of the developed model; the Bayesian regularization algorithm was selected for training. Within the model development data, 85% were randomly assigned to the training set and 15% to the test set, where each record includes five inputs (diameter-to-thickness ratio, ellipticity, wall thickness unevenness, yield strength, and residual stress) and one output (casing collapse strength).
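The two-stage split described above can be sketched as an index partition; the function name and randomized shuffling are our assumptions, not details from the paper:

```python
import numpy as np

def split_dataset(n_samples, seed=0):
    """2/3 of the data for model development, 1/3 held out for blind
    testing; within development, 85% training / 15% testing."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_dev = int(n_samples * 2 / 3)
    dev, blind = idx[:n_dev], idx[n_dev:]
    n_train = int(round(len(dev) * 0.85))
    return dev[:n_train], dev[n_train:], blind   # train, test, blind
```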
In optimizing the model, the mean squared error (MSE) and the coefficient of determination (R2) were the main references. The MSE is the average squared deviation between the output and target values; R2 measures the correlation between them. When the R2 of the training set is greater than that of the test set, over-fitting has occurred. In the testing stage, the appropriate model is the one with the MSE as low as possible and R2 close to 1; these two indicators show whether the model has extracted all the information or requires further adjustment. The fit of the model is poor when the number of neurons is too small, while too many neurons lead to over-fitting. To find the optimal model, the sample size, the data-set proportions, or the number of hidden neurons can be adjusted. To obtain reliable feedback and eliminate deviations caused by fluctuations in network training, each configuration was trained and blind-tested 10 times under the same conditions and the results averaged. Given the model’s n = 5 input parameters, the number of hidden neurons N was chosen with N ≥ n, the trial-and-error initial value was N0 = 5, and the upper limit was set according to the empirical formula method (Equation (14)).
N = \sqrt{n + m} + a
where n is the number of nodes in the input layer, m is the number of nodes in the output layer, and a is an integer from 1 to 10.
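The candidate hidden-layer sizes from this rule, and the R2 metric used to compare them, can be sketched as follows. This assumes the common square-root form of the empirical rule as reconstructed in Equation (14); the function names are illustrative:

```python
import math
import numpy as np

def hidden_size_candidates(n_in, n_out, a_max=10):
    """Candidate hidden-layer sizes N = sqrt(n + m) + a, a = 1..a_max,
    rounded up to integers."""
    base = math.sqrt(n_in + n_out)
    return sorted({math.ceil(base + a) for a in range(1, a_max + 1)})

def r_squared(y_pred, y_true):
    """Coefficient of determination between output and target values."""
    y_pred, y_true = np.asarray(y_pred, float), np.asarray(y_true, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```

For n = 5 inputs and m = 1 output this rule yields candidates up to about 13, consistent with the 5-15 range swept in Table 4.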
Table 4 shows the average R2 of the neural network model after 10 trainings with different numbers of hidden neurons, and Figure 8 compares the training and prediction R2 for 5 to 15 hidden neurons.
As Table 4 and Figure 8 show, with the data-set proportions fixed, changing the number of hidden neurons affects the prediction accuracy of the model. There is no regular relationship between R2 and the number of hidden neurons, but overall the coefficient of determination R2 tends toward 1. The comparison curves in Figure 8 show that the prediction R2 of the 11 model groups is higher than the training R2, so the models are not over-fitted. The difference between the two R2 values is largest with 9 hidden neurons and smallest with 13. With 12 hidden-layer neurons, the average of the training and prediction R2 is closest to 1, and, as Figure 9 shows, the network error is then the smallest of all the models, best meeting the demands of model optimization. Therefore, the number of hidden-layer neurons N was set to 12 for the prediction study.

5.4. Model Evaluation and Prediction Result Analysis

According to Section 5.3, the number of hidden-layer neurons was set to N = 12 to predict the casing collapse strength. To test the effectiveness and advantages of the model, the casing collapse pressure was also calculated from the measured data according to the API 5C3 specification, and, combined with the improved KT formula in the ISO/TR 10400:2007 specification, the minimum collapse pressure of the casing was predicted by the least-squares regression method. For casing with D/t < 12.53, the collapse strength is governed by the yield collapse pressure Py, calculated by Equation (15).
P_y = 2\sigma_s \left[ \frac{(D/t) - 1}{(D/t)^2} \right]
When 12.53 < D/t < 20.56, the minimum collapse strength is governed by the plastic collapse pressure Pp, calculated by Equation (16).
P_p = \sigma_s \left[ \frac{A}{D/t} - B \right] - C
A = 2.8762 + 0.15489 \times 10^{-3}\,\sigma_s + 0.44809 \times 10^{-6}\,\sigma_s^2 - 0.16211 \times 10^{-9}\,\sigma_s^3
B = 0.026233 + 0.73402 \times 10^{-4}\,\sigma_s
C = -3.2125 + 0.030867\,\sigma_s - 0.15204 \times 10^{-5}\,\sigma_s^2 + 0.7781 \times 10^{-9}\,\sigma_s^3
When D/t > 20.56, the minimum collapse strength is governed by the elastic collapse pressure PE, calculated by the elastic collapse pressure Equation (20).
P_E = \frac{3.237 \times 10^5}{(D/t)\left[(D/t) - 1\right]^2}
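The three regimes of Equations (15)-(20) can be combined into one function. This sketch assumes SI units (yield strength σs in MPa, pressures in MPa) to match the coefficients above, uses an illustrative function name, and covers only the yield, plastic, and elastic regimes quoted in the text (the transition-collapse regime of the full API/ISO standard is not quoted here and is therefore omitted):

```python
def api_collapse_pressure(sigma_s, d_over_t):
    # Yield collapse, Eq. (15): D/t < 12.53
    if d_over_t < 12.53:
        return 2.0 * sigma_s * (d_over_t - 1.0) / d_over_t ** 2
    # Plastic collapse, Eqs. (16)-(19): 12.53 < D/t < 20.56
    if d_over_t < 20.56:
        A = (2.8762 + 0.15489e-3 * sigma_s + 0.44809e-6 * sigma_s ** 2
             - 0.16211e-9 * sigma_s ** 3)
        B = 0.026233 + 0.73402e-4 * sigma_s
        C = (-3.2125 + 0.030867 * sigma_s - 0.15204e-5 * sigma_s ** 2
             + 0.7781e-9 * sigma_s ** 3)
        return sigma_s * (A / d_over_t - B) - C
    # Elastic collapse, Eq. (20): D/t > 20.56
    return 3.237e5 / (d_over_t * (d_over_t - 1.0) ** 2)
```

For example, a P110-grade casing (σs ≈ 758 MPa) with D/t = 13.5 falls in the plastic regime.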
The data set to be predicted includes pipe diameters of 4.5 in, 5.5 in, 7.0 in, 9.5 in, 13.5 in, 16 in, and others. The blind data were imported into the BRANN model for prediction, and regression fitting and formula calculation were also carried out on the same data. Figure 10 compares the model prediction results, regression prediction results, formula-calculated values, and measured collapse strengths, and Figure 11 shows the error distribution curves of the three methods, where the error is the relative error, |predicted value - measured value|/measured value × 100%.
The distribution of the prediction results shows that, across pipe diameters, the results calculated under API specification guidance deviate obviously from the measured values. Both least-squares regression fitting and the Bayesian neural network can predict the casing collapse strength, and their coincidence rates with the measured values are higher than that of the traditional formula calculation. As for the error trends, Table 5 shows the maximum, minimum, and average value of each method’s errors.
Combining Figure 11 and Table 5, for the same blind-sample input the minimum errors of all three methods are below 0.1%. In terms of stability, the least-squares regression results have the largest error span, with a maximum error of 85.56%; comparison of the sample information shows that this error is concentrated in the samples with a diameter of 10.8 in, while for pipe diameters of 5 to 7.8 in the least-squares fit is relatively stable. Figure 11 also shows that the error of the API formula swings obviously between 0.02% and 50%, with an average error of 19.46% across pipe diameters. The error trend of the BRANN model is more stable than that of the other two methods, with a maximum error of 15.09%. Across pipe diameters the model achieves good prediction results, with an average prediction accuracy of 96.67%; only individual samples in the 10.8-14.4 in range have prediction errors above 10%, a significant improvement over the traditional methods.
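The error metric used for this comparison is simple to state in code (illustrative name; accepts scalars or arrays):

```python
import numpy as np

def relative_error_pct(predicted, measured):
    """Relative error |predicted - measured| / measured * 100, the
    metric behind the error-distribution comparison described above."""
    predicted = np.asarray(predicted, dtype=float)
    measured = np.asarray(measured, dtype=float)
    return np.abs(predicted - measured) / np.abs(measured) * 100.0
```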

6. Conclusions

(1) The experimental values of the five parameters affecting casing collapse strength (diameter-to-thickness ratio, ellipticity, wall thickness unevenness, yield strength, and residual stress) were obtained by full-scale tests and constitute the BRANN training data set.
(2) When the number of hidden-layer neurons is 12, the R value is closest to 1, meeting the prediction accuracy requirement.
(3) Based on the established BRANN model, casing collapse strength prediction and supplementary regression fitting were performed. The results show that the BRANN-based prediction model has higher prediction accuracy: its maximum error against the physical experimental test results is 13.11%, and most errors are below 10%.

Author Contributions

Conceptualization, X.Y.; methodology, D.L. and H.F.; validation, S.Y.; formal analysis, Y.Z. and D.L.; data curation, R.W.; writing—original draft preparation, Y.Z.; writing—review and editing, H.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Innovative Talents Promotion Program—Young Science and Technology Nova Project (2021KJXX-63), the Research on key technology of casing damage evaluation and repair in oil and gas wells (2021DJ2705), and the Study on key technology of stimulation and modification for Gulong shale oil (2021ZZ10-04).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable; the study did not involve humans.

Data Availability Statement

The study did not report any data.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Han, J.Z.; Zhang, X.P. Preliminary study on casing collapse strength under non-uniform load. Drill. Technol. 2001, 24, 48–50. [Google Scholar]
  2. Zhang, R.R.; Yan, Y.F.; Wang, P.; Yan, H.; Yan, X. Quantitative failure risk analysis of shale gas well casing deformation based on Bayesian network. Pet. Drill. Prod. Technol. 2018, 40, 736–742. [Google Scholar]
  3. Lou, Q.; Zhang, G.L.; Zhang, D.; Han, X.L.; Yang, P.; Zhang, Y. Experimental study on the main influencing factors of casing collapse strength. Pet. Field Mach. 2012, 41, 38–42. [Google Scholar]
  4. Zhang, X.; Wang, L.; Meng, F.S.; Zheng, Z.C. Research on the application of Bayesian neural network method in casing loss prediction. Prog. Geophys. 2018, 33, 1319–1324. [Google Scholar]
  5. Zhao, K.N. Distributed Photovoltaic Prediction and Application Based on Bayesian Neural Network; China Electric Power Research Institute: Wales, UK, 2020. [Google Scholar] [CrossRef]
  6. Xia, S.Y.; Su, J.H.; Du, Y.; Wang, H.N.; Shi, S. PEMFC stack modeling based on Bayesian regularization BP neural network. J. Hefei Univ. Technol. 2021, 44, 5. [Google Scholar]
  7. Zhao, Y.H.; Jiang, H.Q.; Li, H.Q.; Liu, H.T.; Han, D.W.; Wang, Y.N.; Liu, C.C. Prediction method of single well casing loss based on machine learning. J. China Univ. Pet. 2020. [Google Scholar]
  8. Zhang, S.R.; Li, X.K. Hidden layer node estimation algorithm of BP network based on simulated annealing. J. Hefei Univ. Technol. 2017, 40, 4. [Google Scholar]
  9. Tan, C.D.; He, J.Y.; Zhou, T.; Liu, J.K.; Song, W.C. Optimization of shale gas fracturing construction parameters based on PCA-BNN. J. Southwest Pet. Univ. 2020, 42, 7. [Google Scholar]
  10. Negash, B.M.; Atta, Y.D. Production prediction of waterflooding reservoir based on artificial neural network. Pet. Explor. Dev. 2020, 47, 357–365. [Google Scholar] [CrossRef]
  11. Negash, B.M.; Vasant, P.M.; Jufar, S.R. Application of artificial neural networks for calibration of a reservoir model. Intell. Decis. Technol. 2018, 12, 67–79. [Google Scholar] [CrossRef]
  12. Wang, M. Application Research of Neural Network Method in Casing Loss Prediction; Northeast Petroleum University: Daqing, China, 2007. [Google Scholar]
  13. Li, X.H.; Zhu, H.W.; Chen, G.M.; Lv, H.; Meng, X.K. Bayesian dynamic model for risk analysis of submarine oil and gas pipeline leakage accidents. Chin. Saf. Sci. J. 2015, 25, 75–80. [Google Scholar]
  14. Pan, Y.C.; Shan, W.B.; Zhang, S.H.; Wang, F. Application of Bayesian Normalization Algorithm in Reservoir Parameter Fitting. Internet Things Technol. 2012, 2, 45–47. [Google Scholar]
  15. Deng, K.H. Research on Non-Uniform Casing Collapse and Repair Mechanics; Southwest Petroleum University: Chengdu, China, 2018. [Google Scholar]
  16. Shi, J.; Khan, F.; Zhu, Y.; Li, J.; Chen, G. Robust data-driven model to study dispersion of vapor cloud in offshore facility. Ocean. Eng. 2018, 161, 98–110. [Google Scholar] [CrossRef]
  17. Shi, J.; Zhu, Y.; Khan, F.; Chen, G. Application of Bayesian Regularization Artificial Neural Network in explosion risk analysis of fixed offshore platform. J. Loss Prev. Process Ind. 2019, 57, 131–141. [Google Scholar] [CrossRef]
  18. Lin, Y.-h.; Deng, K.-h.; Zeng, D.-z.; Zhu, H.-j.; Zhu, D.-j.; Qi, X.; Huang, Y. Theoretical and experimental analyses of casing collapsing strength under non-uniform loading. J. Cent. South Univ. 2014, 21, 3470–3478. [Google Scholar] [CrossRef]
  19. Cao, Q. Data driven production forecasting using machine learning. In Proceedings of the SPE Argentina Exploration and Production of Unconventional Resources Symposium, Buenos Aires, Argentina, 1–3 June 2016. [Google Scholar]
  20. Wang, C.Y. Research and Development of Tubing and Casing Quality Assessment System under Intelligent Manufacturing Environment; Xi’an University of Technology: Xi’an, China, 2018. [Google Scholar]
  21. Liu, Q.; Li, N.; Shen, Z.-x.; Zhao, M.-f.; Xie, J.-f.; Zhu, G.-c.; Xu, X.; Yin, C.-x. Calculation and experiment of anti-collapse performance of titanium alloy tubing and casing. Nat. Gas Ind. 2020, 40, 94–101. [Google Scholar]
  22. Xiao, Z.Q.; Jia, L.F.; Wen, C.X.; Zhao, Z.Y.; Jia, S.P. Model and optimization of the collapse strength of casing-cement ring combination under non-uniform ground stress. J. Yangtze Univ. 2020, 17, 39–44. [Google Scholar]
  23. Jiao, W.; Wang, Q.; Tian, X.J.; Yuan, Q.Y.; Li, X.L.; Gao, C.L. P110 steel grade Φ139.7mm × 10.54mm high collapse resistance performance analysis and collapse strength prediction. Welded Pipe 2017, 40, 20–24. [Google Scholar]
  24. Miskuf, M.; Michalik, P.; Zolotova, I. Data mining in cloud usage data with Matlab’s statistics and machine learning toolbox. In Proceedings of the IEEE International Symposium on Applied Machine Intelligence & Informatics, Herl’any, Slovakia, 26–28 January 2017. [Google Scholar]
  25. Mohamadian, N.; Ghorbani, H.; Wood, D.A.; Mehrad, M.; Davoodi, S.; Rashidi, S.; Soleimanian, A.; Shahvand, A.K. A geomechanical approach to casing collapse prediction in oil and gas wells aided by machine learning. J. Pet. Sci. Eng. 2020, 196, 107811. [Google Scholar] [CrossRef]
Figure 1. Flow chart of prediction scheme of casing collapse strength.
Figure 2. MLP architecture based on BP algorithm.
Figure 3. Measured specimen before collapse test. Note: 1 residual stress test specimen; 2 tensile specimen; 3 collapse specimen. L1: minimum length of the collapse specimen. L2: minimum length of the residual stress specimen. Average outside diameter, average wall thickness, and ellipticity are measured at five equally spaced locations, and the wall thickness unevenness is calculated from the wall thickness measurements.
Figure 3. Measured specimen before collapse test. Note: 1 residual stress test specimen; 2 tensile specimens; 3 collapseed specimen. L1 minimum length of the collapsed specimen. L2 minimum length of residual stress specimen. Average outside diameter, average wall thickness, and ellipticity are measured at five equally spaced locations and the wall thickness unevenness is calculated from the wall thickness measurements.
Processes 10 01327 g003
Figure 4. External pressure collapse test system.
Figure 5. The specimens before collapse.
Figure 6. The specimens after collapse.
Figure 7. Neural network structure for predicting the casing collapse strength.
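As a rough illustration of the 5-12-1 topology in Figure 7, the sketch below builds the forward pass in numpy: five inputs (outer diameter, out-of-roundness, wall thickness unevenness, residual stress, yield strength), 12 tanh hidden neurons, and one linear output. The weights are random placeholders, not the paper's trained parameters; in the paper they are fitted by Bayesian-regularized backpropagation (MATLAB trainbr-style), and the inputs would first be normalized.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(12, 5)), np.zeros(12)   # input -> hidden weights/biases
W2, b2 = rng.normal(size=(1, 12)), np.zeros(1)    # hidden -> output weights/biases

def forward(x):
    """One pass through the 5-12-1 network: tanh hidden layer, linear output."""
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2  # predicted collapse strength (untrained, illustrative)

# One specimen's input vector from Table 3 (no normalization applied here)
x = np.array([4.53, 0.471, 3.145, 14.76, 700.53])
y = forward(x)
```

With trained weights in place of the random ones, `forward` would return the collapse-strength prediction for a specimen.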
Figure 8. R2 comparison of the model with 5–15 hidden neurons.
Figure 9. Network training error when the number of hidden layer neurons ranges from 5 to 15.
Figure 10. Prediction result of casing collapse strength.
Figure 11. Comparison of prediction errors of collapse strength.
Table 1. Specimen geometry inspection results (mm).

Specimen 1
Measurement section | M1-N1 | G1-H1 | O1-P1 | E1-F1 | Average outside diameter | Ellipticity (1)
Outside diameter | 141.58 | 141.18 | 141.07 | 141.17 | 141.25 | 0.36
Measurement point | M1 | N1 | G1 | H1 | O1 | P1 | E1 | F1 | Average wall thickness | Wall thickness unevenness (2)
Wall thickness | 13.09 | 13.43 | 12.77 | 13.46 | 12.96 | 12.84 | 13.16 | 13.15 | 13.11 | 5.26

Specimen 2
Measurement section | M2-N2 | G2-H2 | O2-P2 | E2-F2 | Average outside diameter | Ellipticity
Outside diameter | 141.00 | 141.22 | 141.21 | 141.09 | 141.13 | 0.16
Measurement point | M2 | N2 | G2 | H2 | O2 | P2 | E2 | F2 | Average wall thickness | Wall thickness unevenness
Wall thickness | 13.08 | 13.40 | 12.94 | 13.21 | 12.75 | 12.92 | 12.96 | 13.31 | 13.06 | 4.97

Specimen 3
Measurement section | M3-N3 | G3-H3 | O3-P3 | E3-F3 | Average outside diameter | Ellipticity
Outside diameter | 141.10 | 141.56 | 141.14 | 141.33 | 141.28 | 0.33
Measurement point | M3 | N3 | G3 | H3 | O3 | P3 | E3 | F3 | Average wall thickness | Wall thickness unevenness
Wall thickness | 13.00 | 13.34 | 12.71 | 13.22 | 12.60 | 12.86 | 13.15 | 13.03 | 12.99 | 5.71

Specimen 4
Measurement section | M4-N4 | G4-H4 | O4-P4 | E4-F4 | Average outside diameter | Ellipticity
Outside diameter | 141.01 | 141.16 | 141.19 | 141.01 | 141.09 | 0.13
Measurement point | M4 | N4 | G4 | H4 | O4 | P4 | E4 | F4 | Average wall thickness | Wall thickness unevenness
Wall thickness | 13.10 | 13.19 | 13.02 | 12.71 | 12.94 | 12.59 | 13.32 | 12.96 | 12.98 | 5.63

Specimen 5
Measurement section | M5-N5 | G5-H5 | O5-P5 | E5-F5 | Average outside diameter | Ellipticity
Outside diameter | 141.17 | 141.21 | 141.09 | 141.09 | 141.14 | 0.09
Measurement point | M5 | N5 | G5 | H5 | O5 | P5 | E5 | F5 | Average wall thickness | Wall thickness unevenness
Wall thickness | 13.33 | 13.20 | 12.96 | 12.96 | 13.15 | 12.65 | 13.10 | 13.00 | 13.02 | 5.23

Note: (1) Ellipticity = 2(Dmax − Dmin)/(Dmax + Dmin) × 100%, where Dmax and Dmin are the maximum and minimum outer diameters measured on the same cross-section. (2) Wall thickness unevenness = 2(tmax − tmin)/(tmax + tmin) × 100%, where tmax and tmin are the maximum and minimum wall thicknesses measured on the same cross-section.
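The two note formulas can be checked directly against the specimen-1 row of Table 1 (the function names below are illustrative, not from the paper):

```python
def ellipticity(diameters):
    """2*(Dmax - Dmin)/(Dmax + Dmin) * 100, in percent (note (1) of Table 1)."""
    d_max, d_min = max(diameters), min(diameters)
    return 2 * (d_max - d_min) / (d_max + d_min) * 100

def wall_thickness_unevenness(thicknesses):
    """2*(tmax - tmin)/(tmax + tmin) * 100, in percent (note (2) of Table 1)."""
    t_max, t_min = max(thicknesses), min(thicknesses)
    return 2 * (t_max - t_min) / (t_max + t_min) * 100

# Specimen 1 measurements from Table 1 (mm)
diameters = [141.58, 141.18, 141.07, 141.17]
thicknesses = [13.09, 13.43, 12.77, 13.46, 12.96, 12.84, 13.16, 13.15]

print(round(ellipticity(diameters), 2))                  # 0.36, matching Table 1
print(round(wall_thickness_unevenness(thicknesses), 2))  # 5.26, matching Table 1
```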
Table 2. Residual stress measurement results.

Specimen Number | Location of Measurements | Outer Diameter Di, before (mm) | Outer Diameter Df, after (mm) | Wall Thickness t (mm) | Residual Stress (MPa)
1# | 1 | 141.29 | 142.16 | 13.04 | /
1# | 2 | 140.90 | 141.65 | 13.40 | /
1# | 3 | 140.92 | 141.85 | 13.15 | /
1# | 4 | 140.90 | 141.78 | 13.06 | /
1# | Average value | 141.00 | 141.86 | 13.16 | 130.57

Note: Residual stress calculation formula: σ = [E·t/(1 − μ²)]·(1/Di − 1/Df), where E = 2.1 × 10^5 MPa and μ = 0.3.
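Substituting the averaged values from Table 2 into the note's formula reproduces the reported residual stress (the function name below is illustrative):

```python
E = 2.1e5   # Young's modulus, MPa
MU = 0.3    # Poisson's ratio

def residual_stress(d_before, d_after, t):
    """sigma = E*t/(1 - mu^2) * (1/Di - 1/Df), per the note to Table 2."""
    return E * t / (1 - MU**2) * (1 / d_before - 1 / d_after)

# Average values from Table 2: Di = 141.00 mm, Df = 141.86 mm, t = 13.16 mm
sigma = residual_stress(141.00, 141.86, 13.16)
print(round(sigma, 2))  # 130.57 MPa, matching Table 2
```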
Table 3. Experimental data of casing collapse strength (partial).

Outer Diameter (in) | Out-of-Roundness (%) | Wall Thickness Unevenness (%) | Residual Stress (MPa) | Yield Strength (MPa) | Casing Collapse Strength (psi)
4.53 | 0.471 | 3.145 | 14.76 | 700.53 | 12,469
4.51 | 0.416 | 2.356 | 116.92 | 817.06 | 13,550
4.51 | 0.074 | 4.400 | 225.95 | 661.92 | 13,416
5.53 | 0.37 | 0.643 | 156.86 | 464.03 | 4263
5.53 | 0.428 | 0.71 | 45.27 | 458.17 | 4160
5.53 | 0.405 | 0.376 | 153.87 | 468.86 | 4048
7.04 | 0.194 | 2.872 | 62.85 | 498.85 | 5757
7.05 | 0.192 | 2.304 | 130.21 | 480.92 | 5961
7.05 | 0.165 | 6.662 | 108.27 | 490.94 | 5400
9.70 | 0.359 | 3.678 | 105.48 | 452.66 | 2689
9.71 | 0.402 | 1.985 | 35.31 | 645.37 | 5032
9.71 | 0.401 | 2.781 | 34.83 | 649.509 | 4975
13.47 | 0.368 | 1.987 | 213.31 | 900.00 | 3342
13.47 | 0.184 | 1.643 | 254.84 | 890.48 | 3316
13.44 | 0.212 | 1.603 | 172.45 | 886.35 | 3208
16.09 | 0.231 | 2.329 | 31.351 | 721.56 | 2669
16.08 | 0.291 | 4.723 | 37.79 | 718.45 | 2550
16.08 | 0.234 | 4.682 | 29.73 | 712.59 | 2413
Table 4. R² of the model with different numbers of hidden layer neurons.

Number of Neurons in the Hidden Layer | Training R² | Predicting R²
5 | 0.99699 | 0.99715
6 | 0.99707 | 0.99730
7 | 0.99675 | 0.99727
8 | 0.99689 | 0.99701
9 | 0.99537 | 0.99774
10 | 0.99716 | 0.99748
11 | 0.99684 | 0.99750
12 | 0.99746 | 0.99780
13 | 0.99740 | 0.99742
14 | 0.99712 | 0.99775
15 | 0.99685 | 0.99754
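The hidden-layer size can be selected programmatically from the R² values in Table 4; the 12-neuron model maximizes both the training and the predicting R², consistent with the paper's choice:

```python
# (training R^2, predicting R^2) per hidden-layer size, copied from Table 4
r2 = {
    5:  (0.99699, 0.99715),  6: (0.99707, 0.99730),
    7:  (0.99675, 0.99727),  8: (0.99689, 0.99701),
    9:  (0.99537, 0.99774), 10: (0.99716, 0.99748),
    11: (0.99684, 0.99750), 12: (0.99746, 0.99780),
    13: (0.99740, 0.99742), 14: (0.99712, 0.99775),
    15: (0.99685, 0.99754),
}

best = max(r2, key=lambda n: r2[n][1])  # size with the highest predicting R^2
print(best)  # 12
```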
Table 5. Output error of blind test samples.

Type | Max | Min | Average
API formula | 48.84% | 0.02% | 19.46%
KT formula improved by least-squares fitting | 85.56% | 0.06% | 7.41%
BRANN | 15.09% | 0.01% | 3.33%
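The Max/Min/Average columns of Table 5 are statistics of the per-specimen relative error between predicted and measured collapse strength. A minimal sketch of that computation is below; the two data arrays are illustrative placeholders, not the paper's blind-test set:

```python
def error_stats(measured, predicted):
    """Max, min, and mean relative error (%) between predictions and tests."""
    errs = [abs(p - m) / m * 100 for m, p in zip(measured, predicted)]
    return max(errs), min(errs), sum(errs) / len(errs)

# Hypothetical blind-test values (psi), for illustration only
measured  = [12469, 5757, 3342, 2669]
predicted = [12100, 5900, 3350, 2500]

err_max, err_min, err_avg = error_stats(measured, predicted)
```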
Li, D.; Fan, H.; Wang, R.; Yang, S.; Zhao, Y.; Yan, X. Prediction of Casing Collapse Strength Based on Bayesian Neural Network. Processes 2022, 10, 1327. https://doi.org/10.3390/pr10071327
