Article

SlurryNet: Predicting Critical Velocities and Frictional Pressure Drops in Oilfield Suspension Flows

by Alireza Sarraf Shirazi 1 and Ian Frigaard 2,*
1 Department of Mechanical Engineering, University of British Columbia, 2054-6250 Applied Science Lane, Vancouver, BC V6T 1Z4, Canada
2 Departments of Mathematics and Mechanical Engineering, University of British Columbia, 1984 Mathematics Road, Vancouver, BC V6T 1Z2, Canada
* Author to whom correspondence should be addressed.
Energies 2021, 14(5), 1263; https://doi.org/10.3390/en14051263
Submission received: 30 December 2020 / Revised: 11 February 2021 / Accepted: 15 February 2021 / Published: 25 February 2021
(This article belongs to the Special Issue Recent Advances in Petroleum Drilling Engineering)

Abstract
Improving the accuracy of slurry flow predictions across different operating flow regimes remains a major focus for multiphase flow research, especially for industrial applications such as oil and gas. In this paper we develop a robust integrated method consisting of an artificial neural network (ANN) and support vector regression (SVR) to estimate the critical velocity, the slurry flow regime change, and ultimately the frictional pressure drop for a solid–liquid slurry flow in a horizontal pipe, covering wide ranges of flow and geometrical parameters. Three distinct datasets were used to develop the machine learning models, with totals of 100, 325, and 125 data points for the critical velocity and for the frictional pressure drops in the heterogeneous and bed-load regimes, respectively. For each dataset, 80% of the data were used for training and the remaining 20% for evaluating the out of sample performance. The K-fold technique was used for cross-validation. The prediction results of the developed integrated method show that it significantly outperforms the widely used existing correlations and models in the literature. Additionally, the proposed integrated method, with an average absolute relative error (AARE) of 0.084, outperformed the model developed without regime classification, which had an AARE of 0.155. The proposed integrated model not only offers reliable predictions over a wide range of operating conditions and different flow regimes for the first time, but also introduces a general framework for how to utilize prior physical knowledge to achieve more reliable performance from machine learning methods.

1. Introduction

This paper addresses the application of machine learning (ML) methods to making accurate and relevant predictions of slurry flow behavior. Slurries are complex multi-phase systems that have been studied actively from a physical perspective for more than 70 years. Flow regime prediction is inexact, generally relying on semi-empirical correlations that have been fitted to different data sets, which are expensive and non-trivial to gather. These regime predictions are used to make design decisions for pipelines and other transport applications where errors are costly. In applying ML methods in any mature industrial or scientific field one has two choices: (i) start from scratch with no prior knowledge; (ii) incorporate existing knowledge. The second approach is the one used here. Thus, below we review both the relevant slurry flow fundamentals and ML applications in this domain.
Pipe flows of slurries are commonly encountered in the mining industry (slurry transport) and in oil and gas well operations: hole cleaning, hydraulic fracturing and gravel packing. Here we deal with slurry applications in well drilling, where the relevance of multi-phase flow has long been recognized [1]. Horizontal slurry flow of sand and a Newtonian/non-Newtonian carrier liquid in a pipe geometry is encountered widely in horizontal wells, and also in gathering and transition lines. In drilling engineering, a variety of models are used for cuttings transport. At the lowest level, these simply compare the pumping velocity with a typical particle settling velocity. More sophisticated models [2,3,4] consider flows with a layered structure in the well cross-section, consisting of settled beds with mobile suspension flows above.
These mechanistic models are analogous to those developed earlier to predict slurry transport in the mining industry. Starting from the early work of Durand and Condolios [5], the importance of flow regimes was immediately recognized. Turian and Yuan [6] proposed four different semi-empirical correlations, based on more than 2800 experimental data points, to determine the slurry friction factor in the four flow regimes. This is probably the most comprehensive empirical correlation developed to date. However, an underlying criticism of [6] is that the frictional pressure correlation does not represent the underlying physical balance leading to a solids bed. The transitions between the four flow regimes have historically formed one major axis of research on slurry transport. They occur at what are typically known as transition velocities, the most important being the deposition or critical velocity, which marks the onset of a stationary bed at the bottom of the pipe. In applications it is crucial to determine the most efficient pipe size to handle a variety of flow conditions throughout the lifetime of the field/pipeline. Accurate prediction of the key flow parameters, such as the critical velocity, flow regime, and pressure drop, has a significant impact on such design decisions [7,8].
There are many empirical and semi-empirical models and correlations for predicting the critical velocity, e.g., that of Oroskar and Turian [9] and Kokpinar et al. [10]. One of the significant features of the critical velocity is that it corresponds to the minimum frictional pressure drop in the slurry flow, which has even motivated predictions based on this feature, e.g., see [11]. Likewise, many models and correlations have been developed for prediction of frictional pressure drops, e.g., [12,13] for heterogeneous flow, and [14,15] for the bed-load regime. Additionally, many comprehensive layered models were developed more recently to predict both critical velocity and frictional pressure drop in different regimes, e.g., the two layer model of Gillies et al. [3] and the modified three layer model of Sarraf Shirazi and Frigaard [16].
All the above models are to some extent a combination of phenomenological and mechanistic approaches: once the flow regime is phenomenologically defined, a more accurate mechanistic model becomes possible. However, even targeted mechanistic models are limited by the physical complexity of the actual flows. This suggests that a more data-driven (ML) approach might be effective. In this context, Osman and Aggour [17] developed an artificial neural network for prediction of the frictional pressure drop of a slurry flow in horizontal and near-horizontal pipes. The accuracy of their model outperformed that of the existing correlations compared. Ulker and Sorgun [18] used four different machine learning algorithms, including k-nearest neighbors (kNN), support vector regression (SVR), linear regression, and ANN, to estimate the sedimentation bed height inside a wellbore with and without drill pipe rotation. They found that the ANN provided slightly better performance than the other models. Azamathulla et al. [19] used an adaptive neuro-fuzzy inference system (ANFIS) and gene-expression programming (GEP) for prediction of the pressure drop. Their results showed that the ANFIS model performed better than GEP and the existing correlations. Lahiri and Ghanta [20] developed a hybrid SVR and genetic algorithm (GA) technique for prediction of the slurry frictional pressure drop, where the GA was used for efficient tuning of the SVR hyper-parameters. Their model's accuracy outperformed that of all the existing correlations.
While the above ML methods have produced positive results for slurry transport over the past two decades, the picture is incomplete. First, the estimation of pressure drops covers only the heterogeneous regime. This is a practical drawback: not only are the methods limited to prediction in one regime, but one also needs prior knowledge of the flow regime, which is not always available in practice. Secondly, no dimensional analysis was performed before feeding the parameters as inputs to the algorithms, which necessarily introduces significant redundancy into the methodology. In this study, we address both issues and give a complete model. The key novelty of our approach is that we work with the known physical structure of slurry flows. First, we use dimensional analysis to eliminate redundancy in the variables. Second, we integrate two models to mimic the physical studies: (a) a model to predict the regimes and their transition; (b) knowing the regime, a model to predict the pressure drop. This improves the accuracy in a physically consistent way.
An outline of the paper is as follows. In Section 2 we outline the dimensional analysis and the development of the features used as inputs to our models for the critical velocity and frictional pressure drop. Section 3 provides a brief background on the ANN and SVR models and discusses the important hyper-parameters of each model that need to be tuned during training. In Section 4 we introduce our modeling and training approach in detail, especially the development of the integrated model for prediction of the slurry friction factor using our knowledge of the flow regime. Section 5 presents the experimental data acquired from the literature and the detailed results produced by our model, compared against well-known correlations from the literature.

2. Dimensional Analysis and Feature Selection

For a solid–liquid Newtonian slurry flowing through a horizontal pipe, we may assume that the steady flow depends on at least the following parameters: the pipe diameter, $\hat{D}$, the liquid phase density, $\hat{\rho}_l$, the solids phase density, $\hat{\rho}_s$, the liquid phase viscosity, $\hat{\mu}_l$, the particle diameter in the solids phase, $\hat{d}_p$, gravitational acceleration, $\hat{g}$, the mean slurry velocity, $\hat{U}_s$, and the mean volumetric concentration of solids in the pipe cross section, $C_v$. The last-mentioned parameter is dimensionless, whereas the rest are dimensional. Throughout this paper we write all dimensional quantities with a hat ($\hat{\cdot}$) and dimensionless parameters without.

2.1. Critical Velocity

The deposition velocity, also referred to as the critical velocity, $\hat{V}_c$, is one of the key design parameters for most slurry transport systems. It is defined as the velocity below which a stationary bed forms at the bottom of the pipe. Over the past decades, many researchers have developed empirical and/or semi-empirical correlations and models to predict the critical velocity in pipe geometry. Table 1 lists the suggested correlations of Durand [21], Zandi et al. [13], Yufin [22], Oroskar and Turian [9], and Kokpinar et al. [10].
For the prediction of the critical velocity, $\hat{U}_s$ is replaced by $\hat{V}_c$, whose value is to be determined. The critical velocity depends on the following parameters:
$\hat{V}_c = f(\hat{D}, \hat{\rho}_l, \hat{\rho}_s, \hat{\mu}_l, \hat{d}_p, \hat{g}, \hat{\omega}, C_v).$  (1)
Some researchers have proposed predictive correlations for the critical velocity in which the particle settling velocity in the mixture, $\hat{\omega}_m$, and the viscosity of the mixture, $\hat{\mu}_m$, are involved. However, we know that $\hat{\omega}_m$ is a function of $\hat{\mu}_l$, $\hat{\rho}_l$, $C_v$, and $s$, and that $\hat{\mu}_m$ is a function of $\hat{\mu}_l$ and $C_v$ [19]. By performing dimensional analysis on the parameters in (1) we derive the following dimensionless parameters, based on which the critical velocity can be predicted:
$\frac{\hat{V}_c}{\sqrt{\hat{g}\hat{D}}} = f(\delta, s, Re_{pw}, C_v),$  (2)
where $\delta = \hat{d}_p/\hat{D}$, $s = \hat{\rho}_s/\hat{\rho}_l$, and $Re_{pw} = \hat{\rho}_l \hat{\omega} \hat{d}_p/\hat{\mu}_l$ are the diameter ratio, the density ratio, and the particle Reynolds number, respectively. It should be noted that $Re_{pw}$ is based on the settling velocity in clear water, $\hat{\omega}$. The functional relationship (2) among the dimensionless parameters, which has four inputs (features) and one output (target), is used to develop the predictive machine learning algorithms.
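For concreteness, the short Python sketch below shows one way the four features of (2) could be assembled from dimensional inputs. It is illustrative only (function name, units, and example values are our assumptions, not taken from the paper), and it presumes the clear-water settling velocity is supplied.

```python
import numpy as np

def critical_velocity_features(D, rho_l, rho_s, mu_l, d_p, omega_w, C_v):
    """Assemble the four dimensionless features of Eq. (2).

    All inputs are assumed to be in SI units; omega_w is the particle
    settling velocity in clear water (supplied externally).
    """
    delta = d_p / D                        # diameter ratio
    s = rho_s / rho_l                      # density ratio
    Re_pw = rho_l * omega_w * d_p / mu_l   # particle Reynolds number
    return np.array([delta, s, Re_pw, C_v])

# Hypothetical example: 0.5 mm sand in a 0.10 m pipe carrying water
x = critical_velocity_features(D=0.10, rho_l=1000.0, rho_s=2650.0,
                               mu_l=1.0e-3, d_p=0.5e-3,
                               omega_w=0.07, C_v=0.15)
```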

2.2. Frictional Pressure Drop

For the prediction of the frictional pressure drop, $d\hat{p}/d\hat{z}$, of the slurry flow, we can introduce two important dimensionless groups, the Froude number ($Fr$) and the Reynolds number ($Re$), which relate the balances of the representative forces and stresses in a slurry pipe flow:
$Re = \frac{\hat{\rho}_l \hat{D} \hat{U}_s}{\hat{\mu}_l},$  (3)
$Fr = \frac{\hat{U}_s^2}{\hat{g}\hat{D}(s-1)}.$  (4)
It is possible (and necessary) to define and utilize the $Re$ and $Fr$ numbers for prediction of the frictional pressure drop because the mean slurry velocity $\hat{U}_s$ is an input parameter here, in contrast to the critical velocity prediction task where this parameter is unknown. We also define the slurry friction factor, $f_{sl}$, as the dimensionless parameter obtained from the frictional pressure drop:
$f_{sl} = \frac{0.5\,\hat{D}\,(d\hat{p}/d\hat{z})}{\hat{\rho}_l \hat{U}_s^2}.$  (5)
Therefore, the dimensionless parameters governing the slurry friction factor are as follows:
$f_{sl} = f(Re, Fr, s, C_v, \delta).$  (6)
It has also been found that the friction factor $f_w$ of clean water at the same flow parameters is useful for prediction of the slurry friction factor. $f_w$ can be obtained from the Colebrook–White correlation, which gives a Darcy–Weisbach friction factor as a function of the Reynolds number and the relative roughness of the pipe, $f_w = f_{CW}(Re, \epsilon_r)$. Therefore, we also add $f_w$ as an extra feature, which potentially improves the predictive performance of the model. Thus, the functional relationship (6) among the dimensionless parameters, plus $f_w$, gives six features from which the target $f_{sl}$ is to be predicted.
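Since the Colebrook–White relation is implicit in the friction factor, evaluating the extra feature $f_w$ requires some iteration. The sketch below is one common way to do this (not the authors' code): a fixed-point iteration on $1/\sqrt{f}$, with an arbitrary initial guess and tolerance.

```python
import numpy as np

def colebrook_white(Re, eps_r, tol=1e-10, max_iter=100):
    """Darcy-Weisbach friction factor from the implicit Colebrook-White
    equation, solved by fixed-point iteration on x = 1/sqrt(f).

    Re    : pipe Reynolds number (turbulent flow assumed)
    eps_r : relative roughness epsilon/D
    """
    x = 1.0 / np.sqrt(0.02)  # initial guess for 1/sqrt(f)
    for _ in range(max_iter):
        x_new = -2.0 * np.log10(eps_r / 3.7 + 2.51 * x / Re)
        if abs(x_new - x) < tol:
            break
        x = x_new
    return 1.0 / x**2

# Example: smooth-ish pipe at Re = 1e5 gives f_w of roughly 0.018
f_w = colebrook_white(Re=1.0e5, eps_r=1.0e-5)
```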

3. Machine Learning Methodology

We now briefly outline the background to the methods we have used in this study. For more detail on these methods the reader is referred to [23,24] for SVR modeling and [25,26] for ANN.

3.1. ANN Modeling

Artificial neural networks (ANNs) are composed of a large number of interconnected processing elements, called neurons or cells, that are tied together with weighted connections. Neural networks are inspired by systems of biological neurons, whose connections are provided by synapses [25,27]. The learning process in neural networks occurs in a similar way, through training on a dataset of true inputs and outputs, during which the connection weights are iteratively adjusted to solve the specific problem at hand.
The most widely applied feed-forward ANN for supervised regression and classification is the multi-layer perceptron (MLP), which consists of an input layer, an output layer, and one or more hidden layer(s), where each layer has a weight matrix W and a bias vector b [26]. Figure 1 illustrates the architecture of an MLP network. Observe that each node in every layer of the MLP, including the bias node, is fully connected to all the nodes in the subsequent layer. The number of nodes in the input layer is equal to the number of input parameters. The output layer may also contain more than one node, corresponding to the number of predictions the network is responsible for making. The number of hidden layers and the number of their nodes, however, are adjustable hyperparameters, chosen so that the model achieves the desired approximation accuracy and suitable generalization capability.
Considering the feed-forward process of a single data point, to obtain the values of all the $n^{[l]}$ nodes in layer $l$ we first calculate the vector $z^{[l]}$ (of shape $(n^{[l]}, 1)$), which is a linear function of the values of the nodes in the previous layer, $a^{[l-1]}$ (of shape $(n^{[l-1]}, 1)$), i.e., $z_i^{[l]} = \sum_{j=1}^{n^{[l-1]}} w_{ij}^{[l]} a_j^{[l-1]} + b_i^{[l]}$. Subsequently, a nonlinear activation function, $\psi(z)$, is applied element-wise to the vector $z^{[l]}$ to get the final values of all the nodes in layer $l$, contained in the vector $a^{[l]} = \psi(z^{[l]})$. If we have $m$ data points in the training batch, we can write the feed-forward equations in matrix form as:
$Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]},$  (7)
$A^{[l]} = \psi(Z^{[l]}),$  (8)
where the matrix $A^{[l]}$ (of shape $(n^{[l]}, m)$) contains the obtained values of the nodes in layer $l$, and $W^{[l]}$ (of shape $(n^{[l]}, n^{[l-1]})$) and $b^{[l]}$ (of shape $(n^{[l]}, 1)$) are the adjustable parameter (weight) matrix and the bias vector, respectively.
The activation function acts as a mathematical gate between the inputs that are fed to a neuron and its output that is passed to the next layer. Non-linear activation functions allow the model to create complex mappings between each layer's input and output, which are vital for learning and modeling complex data [28]. In contrast, using linear activation functions leads to a model as simple as linear regression, with a significant under-fitting problem. An additional important aspect of activation functions is that they should be computationally inexpensive. The most common activation functions for MLPs are the logistic sigmoid function, $\psi(z) = \frac{1}{1+e^{-z}}$, the hyperbolic tangent function, $\psi(z) = \frac{e^z - e^{-z}}{e^z + e^{-z}}$, the ReLU function, $\psi(z) = \max(0, z)$, and the Leaky ReLU function, $\psi(z) = \max(0.01z, z)$.
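As an illustration of Equations (7) and (8), the following NumPy sketch implements the feed-forward pass of an MLP. It assumes a linear output layer, as is common for regression, and the weight and bias lists are placeholders to be produced by training; it is a sketch, not the authors' implementation.

```python
import numpy as np

def relu(z):
    """ReLU activation, applied element-wise."""
    return np.maximum(0.0, z)

def forward_pass(X, weights, biases, activation=relu):
    """Vectorized MLP feed-forward pass (cf. Eqs. (7)-(8)).

    X       : (n_features, m) input matrix, one column per sample
    weights : list of W[l] matrices, each of shape (n_l, n_{l-1})
    biases  : list of b[l] column vectors, each of shape (n_l, 1)
    Hidden layers use `activation`; the output layer is linear.
    """
    A = X
    for W, b in zip(weights[:-1], biases[:-1]):
        A = activation(W @ A + b)
    return weights[-1] @ A + biases[-1]
```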
Training is an iterative process during which the network tries to "learn" the relationship between the provided input(s) and the corresponding output(s) by altering the weights and biases until it achieves a satisfactory prediction within a reasonable error margin. The weights and biases are slightly adjusted during each iteration through the training set until this is accomplished. At this stage, the learning process on the training set is finished and the model is ready to be examined on unseen data. Iterating once over the entire training set is called one epoch. The optimal number of training epochs depends on many hyper-parameters, such as the learning rate, the optimization algorithm, the complexity of the dataset itself, etc.
The back-propagation algorithm is the most popular method for modifying and adjusting the weights and biases in every iteration: the difference (error) between the ground truth and the obtained outputs is propagated back through each layer and the weights and biases are adjusted accordingly [29]. The goal of the back-propagation process is to minimize a prespecified loss function. The most widely used loss function for training a regression problem is the mean squared error:
$L(y', y) = \frac{1}{m}\sum_{i=1}^{m}(y'_i - y_i)^2,$  (9)
where $y'$ and $y$ are the predicted and true outputs, respectively.
All machine learning models, including neural networks, are considered satisfactory if they perform well on an unseen dataset (test set) that was not used in the training process. In other words, the learning model has a suitable generalization capability if the out of sample error is within an acceptable margin. It is possible that a learning model performs well on the training set but fails to make accurate predictions on the test set. This issue is referred to as overfitting, or the variance problem, in which the generalizability of the model is poor. On the other hand, if the model or MLP structure is too simple, so that the performance on both the training and test sets is poor, the model has a high bias, or underfitting, problem. To examine a specific trained model, one should perform a cross-validation procedure in which the generalizability of the model is monitored on data not seen by the network during training. The most suitable model is the one with the lowest cross-validation error. This can be accomplished either by defining another set (development set) or by performing K-fold cross-validation [30].
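A minimal sketch of K-fold cross-validation as just described is given below, assuming NumPy arrays X and y and a scikit-learn-style estimator; the variable names and the choice of mean squared error as the fold score are illustrative.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.metrics import mean_squared_error

def kfold_validation_loss(model, X, y, k=5, seed=0):
    """Average MSE over K folds; each fold score is computed on the
    held-out part, which the model never sees during fitting."""
    losses = []
    for train_idx, val_idx in KFold(n_splits=k, shuffle=True,
                                    random_state=seed).split(X):
        model.fit(X[train_idx], y[train_idx])
        y_hat = model.predict(X[val_idx])
        losses.append(mean_squared_error(y[val_idx], y_hat))
    return float(np.mean(losses))
```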
Finding the most suitable model for a supervised learning task is essentially a trial-and-error process, which often involves conducting a grid search over the different hyperparameters of the model. For instance, MLP hyperparameters include the learning rate, weight initialization, optimization algorithm, network architecture, regularization parameter, number of training epochs, etc.

3.2. SVR Modeling

Support vector regression (SVR) is the most common application form of support vector machines (SVMs). Suppose our training set contains $\{(x_1, y_1), (x_2, y_2), \ldots, (x_m, y_m)\}$, where $x_i \in \mathbb{R}^n$ are the input variables (features) and $y_i \in \mathbb{R}$ are the corresponding output (target) values. The modeling goal in $\epsilon$-SVR is to propose a function $f(x)$ that deviates by at most $\epsilon$ from the actual target values for all of the input variables in the training set [24,31]. Meanwhile, the obtained function should be as flat as possible, to prevent the high variance issue. For this purpose, the loss function is penalized only if the predicted output deviates from the target by more than $\epsilon$. SVR considers the following linear estimation function:
$f(x) = \langle w, x \rangle + b,$  (10)
where $w \in \mathbb{R}^n$ and $b \in \mathbb{R}$ denote the weight vector and bias respectively, and $\langle \cdot, \cdot \rangle$ represents the dot product in the feature space $\mathbb{R}^n$. A viable way to increase the flatness of $f(x)$ is to minimize the norm of $w$. Therefore, we arrive at the following convex optimization problem:
$\text{minimize } \tfrac{1}{2}\|w\|^2 \quad \text{subject to } \begin{cases} y_i - \langle w, x_i\rangle - b \le \epsilon \\ \langle w, x_i\rangle + b - y_i \le \epsilon \end{cases}$  (11)
The optimization problem (11) assumes there exists a function $f$ such that the prediction errors for all data points are within the $\epsilon$ margin. However, this assumption might not always be satisfied. One can also allow for some errors by defining a "soft margin" loss function, originally adapted to SVMs by Cortes and Vapnik [32], to prevent overfitting and enhance the generalizability of the proposed function/model. To this end, slack variables $\xi$ and $\xi^*$ are introduced to relax the infeasible constraints of the optimization problem (11). The corresponding formulation is as follows:
$\text{minimize } \tfrac{1}{2}\|w\|^2 + C\sum_{i=1}^{m}(\xi_i + \xi_i^*) \quad \text{subject to } \begin{cases} y_i - \langle w, x_i\rangle - b \le \epsilon + \xi_i \\ \langle w, x_i\rangle + b - y_i \le \epsilon + \xi_i^* \\ \xi_i,\ \xi_i^* \ge 0 \end{cases}$  (12)
where the constant $C > 0$ determines the degree to which predictions with errors larger than $\epsilon$ are penalized, and thereby also regulates the flatness of $f$. The corresponding $\epsilon$-insensitive loss function $|\xi|_\epsilon$ is described by:
$|\xi|_\epsilon = \begin{cases} 0 & \text{if } |\xi| \le \epsilon \\ |\xi| - \epsilon & \text{otherwise.} \end{cases}$  (13)
The optimization problem (12) can be solved more easily when converted to its dual formulation. For this purpose, Lagrange multipliers can be used, as described in Fletcher et al.: the idea is to construct a Lagrange function from the primal objective function and the corresponding constraints by introducing a dual set of variables. Considering the Lagrangian function and its properties, after some mathematical manipulation (see details in [24]), we arrive at the following dual optimization problem:
$\text{maximize } -\tfrac{1}{2}\sum_{i,j=1}^{m}(\alpha_i - \alpha_i^*)(\alpha_j - \alpha_j^*)\langle x_i, x_j\rangle - \epsilon\sum_{i=1}^{m}(\alpha_i + \alpha_i^*) + \sum_{i=1}^{m} y_i(\alpha_i - \alpha_i^*) \quad \text{subject to } \sum_{i=1}^{m}(\alpha_i - \alpha_i^*) = 0 \text{ and } \alpha_i, \alpha_i^* \in [0, C],$  (14)
where $\alpha_i$, $\alpha_i^*$ are Lagrange multipliers. In practice, only some of the coefficients $(\alpha_i - \alpha_i^*)$ are non-zero, due to the specific character of the quadratic programming problem (14). The input vectors $x_i$ whose coefficients are non-zero are referred to as support vectors (SVs). The SVs can be considered as the data points that represent the information content of the entire training dataset. The final form of the estimation function, using (14), is:
$f(x) = \sum_{i=1}^{m}(\alpha_i - \alpha_i^*)\langle x_i, x\rangle + b.$  (15)
From Equation (15) it can be deduced that $w = \sum_{i=1}^{m}(\alpha_i - \alpha_i^*)x_i$. Figure 2 shows a schematic of non-linear support vector regression using the $\epsilon$-insensitive loss function. According to (13), if the predicted value of a data point (blue dots) lies within the $\epsilon$-tube, the loss is zero; otherwise (red dots), the loss equals the distance between the predicted value and the boundary of the tube of radius $\epsilon$. As can be observed in Figure 2, the SVR algorithm tries to situate the tube around the data points with the help of the support vectors (green dots).
The true power of SVMs is realized by introducing non-linearity into the original algorithm. This can be accomplished by mapping the training data points to another feature space $\Phi: \mathbb{R}^n \to \mathbb{R}^k$ of higher dimension $k > n$, in which the dot product can be substituted by a kernel function, i.e., $K(x_i, x_j) = \langle\phi(x_i), \phi(x_j)\rangle$. In other words, the kernel function represents the dot product in a higher dimensional feature space. Substituting the kernel function in Equation (15) introduces the mentioned non-linearity into the SVM algorithm:
$f(x) = \sum_{i=1}^{m}(\alpha_i - \alpha_i^*)K(x_i, x) + b.$  (16)
It is important to note that kernel functions ought to have some key specific characteristics so that they correspond to a dot product operation in some other feature space. The two most widely used kernel functions in the SVM algorithm are:
Polynomial function: $K = (u^T v + 1)^P,$  (17)
Gaussian radial basis function: $K = \exp\!\left(-\frac{\|u - v\|^2}{2\sigma^2}\right),$  (18)
where $u$, $v$ are the kernel arguments, $P$ is the degree of the polynomial, and $\sigma$ is the width of the radial basis function (RBF).
Similar to ANNs, it is important to find an optimized set of hyper-parameters for the SVR algorithm, so that the proposed hypothesis function $f$ offers acceptable generalization performance and avoids high bias or high variance. The tunable hyper-parameters of SVR are $\epsilon$, $C$, the kernel type, and the corresponding kernel parameters, e.g., $P$ for the polynomial kernel and $\gamma = \frac{1}{2\sigma^2}$ for the RBF kernel. Choosing a specific kernel type is usually based on domain knowledge of the application and should also reflect the distribution of the dataset. $C$ determines the trade-off between the flatness of the hypothesis function and the degree up to which deviations larger than $\epsilon$ are tolerated. In other words, it also has a regularization effect, such that the smaller the value of $C$, the more strongly the objective function in (12) is regularized. The $\epsilon$ parameter defines the radius of a "tube" zone in which the loss function is zero: a larger $\epsilon$ leads to a flatter hypothesis function and fewer support vectors. Therefore, both $C$ and $\epsilon$ affect the model complexity.
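For illustration, an $\epsilon$-SVR with an RBF kernel and the hyper-parameters discussed above can be set up in a few lines with scikit-learn; the values of C, epsilon, and gamma below are placeholders, not the tuned values reported later in Table 3.

```python
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# epsilon-SVR with an RBF kernel inside a scaling pipeline.
# C, epsilon, and gamma are the hyper-parameters to be tuned.
svr = make_pipeline(StandardScaler(),
                    SVR(kernel="rbf", C=10.0, epsilon=0.05, gamma=0.2))
# Usage (with suitable arrays): svr.fit(X_train, y_train)
#                               y_pred = svr.predict(X_test)
```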

4. Modeling Approach

The purpose of this study is to develop learning models using ANN and SVR algorithms for prediction of the critical velocity and frictional pressure drop of slurry flow in pipe geometry. For the critical velocity, we use the four dimensionless features $\delta$, $s$, $Re_{pw}$, and $C_v$ developed in Section 2.1 as inputs to develop the above-mentioned learning algorithms with satisfactory generalizability. However, for the prediction of the frictional pressure drop we also need to understand the effect of the slurry flow regime on the friction factor.
Figure 3 shows a schematic of the frictional pressure drop as a function of the mean slurry velocity for different flow regimes. The slurry flow regime is governed by the competition between the turbulent eddies and the particle settling tendency due to gravity. The former tends to suspend the solid particles in the carrier liquid while the latter drives the particles to settle at the bottom of the pipe. The frictional pressure drop of a slurry flow depends on the different existing stresses and forces whose nature and strength strongly depend on the flow regime [6,16].
At low flow rates, the turbulent eddies are not strong enough to suspend the solid phase. As a result, a considerable portion of the pipe is occupied by a stationary sedimentation bed, above which there is a heterogeneous layer with a recognizable solids concentration gradient. This regime of the slurry flow is also referred to as the bed-load regime. As observed in Figure 3, the frictional pressure drop decreases with the mean velocity in this regime. This is explained by the fact that at low velocities the stresses and forces are dominated by the solids phase, and these weaken as the slurry velocity increases.
As the mean superficial velocity increases, the turbulent eddies become more capable of suspending the solids, until the static bed layer is fully eroded and there is a moving bed layer at the bottom of the pipe whose concentration is close to maximal packing. As the flow rate is further increased, we reach the heterogeneous or fully suspended regime, in which there is a solids concentration gradient in the direction of gravity. At very high flow rates, the turbulent eddies become significantly more dominant and the solid phase becomes progressively more homogeneously distributed in the carrier liquid. As shown in Figure 3, the frictional pressure drop increases with the mean velocity through the saltation, heterogeneous, and homogeneous regimes. Furthermore, the rate of increase of the pressure drop grows at higher velocities, as the liquid phase plays a more dominant role in the suspension stresses.
As noted above, the frictional pressure drop behavior changes noticeably when the regime switches from bed-load to saltation flow, i.e., at the critical velocity. Therefore, we can introduce this prior knowledge into our predictive modeling approach. Figure 4a,b show the work-flow charts for developing our predictive models. We develop two separate learning models, with satisfactory accuracy and generalization capability, for the bed-load and heterogeneous flow regimes, according to the work-flow chart illustrated in Figure 4a. For this task, we also need to train the two models with separate datasets representing the corresponding regimes. To examine the generalizability of the developed predictive model for the frictional pressure drop, we first determine the flow regime using the developed model for the critical velocity. Subsequently, we feed the six dimensionless parameters (see Section 2.2) as features to the corresponding predictive learning model for frictional pressure drop prediction. This procedure is illustrated in Figure 4b, clarifying our integrated scheme for prediction of the slurry friction factor. Consequently, we have one dataset for the critical velocity and two distinct datasets for the frictional pressure drop: one for the bed-load regime and one for the remaining regimes.
We develop the most suitable ANN and SVR predictive models for each of the three datasets via a grid search over their corresponding hyperparameters (a sketch of this procedure follows below). The ANN hyperparameters chosen for tuning are the architecture of the network, i.e., the number of hidden layer(s) and the number of neurons in each hidden layer, the activation function, the number of training epochs, and the learning rate; those for SVR are $C$, $\epsilon$, the kernel type, and the kernel parameter (the polynomial degree for the polynomial kernel, and $\gamma = \frac{1}{2\sigma^2}$ for the radial basis function). We then pick the model with the best validation score as our final proposed model. For model development we take 80% of each dataset randomly as the training set and the remaining 20% as the test set. We perform 5-fold cross-validation on the training set to examine the generalization capacity of the model on data it was not trained on. The best model, with its specific set of hyperparameters, is chosen based on this validation score.
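A compact sketch of this split-plus-grid-search procedure, illustrated for the SVR branch only, is shown below using scikit-learn. The placeholder data, the grid values, and the random seeds are our illustrative assumptions, not the values used in the study.

```python
import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.svm import SVR

# Placeholder data standing in for the 4 dimensionless features and
# the dimensionless critical velocity target of Section 2.1.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
y = rng.normal(size=100)

# 80/20 split, then a 5-fold cross-validated grid search.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

param_grid = {"C": [1, 10, 100, 1000],
              "epsilon": [0.01, 0.025, 0.05, 0.1],
              "gamma": [0.1, 0.2, 0.5, 1.0]}
search = GridSearchCV(SVR(kernel="rbf"), param_grid, cv=5,
                      scoring="neg_mean_squared_error")
search.fit(X_train, y_train)
best_model = search.best_estimator_   # chosen by validation score
```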

5. Results and Discussion

As the magnitudes of the input features differ significantly, the data should be normalized before being fed to the training algorithms. If the inputs are of different scales, the weights connected to the inputs with larger scales are updated much faster than the others, which can considerably hurt the learning process. There are also a variety of practical reasons why normalizing the inputs can make training faster and reduce the chance of getting stuck in local optima. We use the standard normalization as follows:
$x_{norm,i} = \frac{x_i - u_{train}}{\sigma_{train}},$  (19)
where $x_{norm,i}$ is the normalized input of the $i$-th sample, and $u_{train}$ and $\sigma_{train}$ are the mean and standard deviation of the data points in the training set. The output is also normalized in a similar way, as in (19).
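A minimal sketch of this normalization is given below, emphasizing that the mean and standard deviation are computed on the training set only; the function and variable names are illustrative.

```python
import numpy as np

def fit_standardizer(X_train):
    """Mean and std computed on the training set only, so that no
    information from the test set leaks into the scaling (Eq. (19))."""
    return X_train.mean(axis=0), X_train.std(axis=0)

def standardize(X, mean, std):
    return (X - mean) / std

# Usage (with suitable arrays):
# mean, std = fit_standardizer(X_train)
# X_train_n = standardize(X_train, mean, std)
# X_test_n  = standardize(X_test, mean, std)
```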
Table 2 shows the parameters of the 100 experimental data points collected from the literature, measuring the critical velocity, which we use to train and test our proposed models. Additionally, Figure 5a–e show the estimated probability density function and box plot of all the input features along with the output, which provides insightful information about the distribution and statistical parameters of the dataset. Each data point is the result of an experimental test by the listed authors, performed in different flow loop facilities. As can be observed, these experiments cover a wide range of particle sizes, $\hat{d}_p$ = 0.23–5.34 mm, pipe diameters, $\hat{D}$ = 0.025–0.152 m, mean solids concentrations, $C_v$ = 0.007–0.30, and also different density ratios, $s$ = 1.04–2.68. Most of the data are taken from the measurements conducted by Kokpinar et al. [10], who used coarse particles of different materials to also see the effect of $s$ on the critical velocity. They used sand, coarse sand, coal, blue plastic, black plastic, fine tuff, and coarse tuff, with specific densities of $s$ = 2.60, 2.55, 1.74, 1.20, 1.35, 1.31, and 1.04, respectively.
We train and obtain a validation score (loss) on the randomly chosen 80 data points (training set) and report the out of sample results on the remaining 20 data points. Table 3 lists the optimum hyperparameters for the SVR algorithm, Table 4 lists the parameters of the experimental data points collected from the literature for the frictional pressure drop (discussed below), and Table 5 lists the optimum hyperparameters for the ANN algorithm. Table 3 and Table 5 also report the corresponding validation losses. It should be noted that the validation loss refers to the average mean squared error obtained by 5-fold cross-validation. As observed, the optimum SVR model outperforms the ANN in terms of the validation score and hence the generalization capability. Therefore, the SVR model is chosen as the final prediction model for the critical velocity.
Table 6 shows the performance of the chosen model on the training and test sets, in terms of the average absolute relative error (AARE), the cross-correlation coefficient (R), and the standard deviation of the error (σ). We can compare the proposed model's performance against the most widely used predictive correlations in the literature, listed in Table 1. The out of sample average absolute relative errors are 0.099, 0.153, 0.308, 0.322, 0.412, and 0.447 for the proposed SVR model, Kokpinar et al. [10], Oroskar and Turian [9], Durand [21], Yufin [22], and Zandi et al. [13], respectively. It is evident that the prediction error of the critical velocity is reduced considerably in the present work. Figure 6 shows the parity plot of the experimentally measured and predicted dimensionless critical velocity for the training and test sets, with AAREs of 0.073 and 0.099, respectively.
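For reference, the error metrics reported in Table 6 can be computed as in the sketch below. We assume here that σ denotes the standard deviation of the relative error; this reading, like the function names, is our assumption rather than an explicit definition from the text.

```python
import numpy as np

def error_metrics(y_true, y_pred):
    """AARE, standard deviation of the relative error, and the
    cross-correlation coefficient R between measured and predicted."""
    rel_err = (y_pred - y_true) / y_true
    return {"AARE": float(np.mean(np.abs(rel_err))),
            "sigma": float(np.std(rel_err)),
            "R": float(np.corrcoef(y_true, y_pred)[0, 1])}
```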
We have also directly compared the performance of our model with that of Kokpinar et al. [10] on their own 42 experimental data points. Figure 7 shows the parity plot of the corresponding predictions versus the measured dimensionless critical velocities. The AARE of the estimations is 0.142 for the Kokpinar et al. [10] model and 0.062 for the present model. As observed in Figure 7, the present model performs better in particular where $\hat{V}_c/\sqrt{\hat{g}\hat{D}} > 1.8$.
Figure 8 illustrates the effect of the hyperparameter C on the loss function (mean squared error) of the training, validation, and test sets. As mentioned in Section 3.2, C determines the trade-off between the flatness of the hypothesis function and the degree up to which deviations larger than ϵ are tolerated in the SVR algorithm. In practice it also has a regularization effect, such that the lower the value of C, the more the objective function is regularized. As seen in Figure 8, there is an optimal C at which the loss function is minimized on the validation and test sets; below it the hypothesis function suffers from high bias (under-fitting) and above it from high variance (over-fitting). The values of all hyperparameters, including C, are chosen based on the validation score.
Table 4 shows the parameters of the experimental data points collected from the literature, measuring the frictional pressure drop in the heterogeneous and bed-load regimes, which we use to train and test our proposed models. The total numbers of experimental data points are 365 and 125 for the heterogeneous and bed-load regimes, respectively. As can be observed, the experiments mostly used fine particles, except for Doron et al.'s data [38] and part of Durand's measurements in the bed-load regime [5], where particle sizes of $\hat{d}_p$ = 0.23 mm and 5.34 mm were used respectively. Pipe diameters in the range $\hat{D}$ = 0.051–0.155 m were used in the experiments, with a flow velocity range of $\hat{U}_s$ = 0.24–7.77 m/s and mean delivered solids concentrations of $C_s$ = 0.042–0.40. Most of the experiments were conducted using sand particles with density ratios of $s$ = 2.44–2.87, except for Doron et al.'s work, where General Electric "Black Acetal" with a density ratio of $s$ = 1.24 was used [38]. Figure 9a–g show the kernel density estimation and box plot of all the input features along with the output for both the heterogeneous and bed-load regime datasets. An interesting observation is that the distributions of $Fr$ and $f_{sl}$ differ considerably between the two regimes. The reason is that, according to (4) and (5), both of these dimensionless variables include the term $\hat{U}_s^2$, and the mean slurry velocity in the bed-load regime is lower than in the heterogeneous regime. Therefore, $Fr$ is considerably lower while $f_{sl}$ is larger in the bed-load regime compared to the heterogeneous regime.
Similar to the critical velocity case, we randomly take 80% of each dataset for training and validation, and the remaining 20% as the test set for evaluating the out of sample performance. As can be observed from Table 3 and Table 5, the best ANN models outperform SVR for both the heterogeneous and bed-load regimes. Figure 10a,b show the parity plots comparing the measured and predicted slurry friction factor for both regimes. The corresponding out of sample results are given in Table 6.
For a fair comparison against the existing correlations and models from the literature, we also need to investigate the integrated method's performance in predicting the frictional pressure drop. In other words, we would like to determine the out of sample error when there is no prior knowledge of the flow regime, which is often the case in practice, particularly in industrial applications. To serve this purpose, we feed each data point to the developed SVR algorithm for critical velocity prediction and compare the predicted critical velocity with the mean slurry velocity as a means to identify the regime. The key assumption in this process is that $C_v = C_s$ at the critical velocity, which is reasonable. After the regime identification, we feed the data point to the corresponding model for predicting the frictional pressure drop. The out of sample results for the integrated method are shown in Table 6.
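A sketch of this integrated prediction step is given below with a hypothetical interface: the feature dictionary, the model objects, and the function name are illustrative assumptions. The regime test simply compares the mean slurry velocity with the predicted critical velocity, as in Figure 4b.

```python
import numpy as np

def predict_friction_factor(features, U_s, g, D,
                            crit_vel_model, bed_load_model, hetero_model):
    """Integrated prediction sketch (hypothetical interface).

    features        : dict of the dimensionless groups of Section 2
    crit_vel_model  : regressor returning V_c / sqrt(g D)
    bed_load_model, hetero_model : friction-factor regressors
    """
    # Step 1: predict the critical velocity from the 4 features of Eq. (2).
    x_crit = np.array([[features["delta"], features["s"],
                        features["Re_pw"], features["C_v"]]])
    V_c = crit_vel_model.predict(x_crit)[0] * np.sqrt(g * D)

    # Step 2: pick the regime-specific model and predict f_sl from the
    # six features of Section 2.2.
    x_fric = np.array([[features["Re"], features["Fr"], features["s"],
                        features["C_v"], features["delta"], features["f_w"]]])
    model = bed_load_model if U_s < V_c else hetero_model
    return model.predict(x_fric)[0]
```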
Once again, we can compare the out of sample AARE against that of some recognized correlations and models available in the literature for predicting the pressure drop. For slurry friction factor prediction in the heterogeneous regime, the AAREs of the correlations developed by Zandi and Govatos [13], Durand and Condolios [5], and Turian and Yuan [6] are 0.643, 0.449, and 0.348, respectively, whereas for the bed-load regime the AAREs of the models proposed by Gruesbeck et al. [14], Penberthy et al. [15], and Turian and Yuan [6] are 0.837, 0.769, and 0.529, respectively. It is clear that the prediction performance of the current study, with an AARE of 0.084, significantly outperforms these models.
Figure 11 illustrates the effect of the epoch number, a key hyperparameter for ANNs, on the loss function of the training, validation, and test sets for the heterogeneous regime ANN model. As can be observed, there is an optimal epoch number for training, after which the validation loss starts to increase. In other words, after around 400 training epochs the model is over-fitting on the training dataset.
To investigate whether the proposed integrated method is indeed required for a satisfactory prediction of the frictional pressure drop, we have also performed a batch training using all 490 frictional pressure drop data points, without any supervised or unsupervised classification based on the flow regime. We trained and tested another learning model under these conditions, using the same procedure as for the other developed models. Table 3 and Table 5 show that the SVR model performance is more satisfactory than the ANN in terms of generalization capacity. Figure 12 illustrates the corresponding parity plot of the measured slurry friction factor against the predicted values.
To compare the performance of the batch-trained model with the integrated method, the corresponding parity plots are shown in Figure 13a,b. Figure 13a shows the measured and predicted slurry friction factor for the integrated method. As can be observed, there are four heterogeneous data points whose regime was incorrectly classified as bed-load (blue squares), and three bed-load data points that were falsely classified. The predicted slurry friction factor for the misclassified heterogeneous data points tends to be higher than the measured value, whereas the reverse is true for the misclassified bed-load data points. The reason is that, in general, the slurry friction factor in the bed-load regime is larger than in the heterogeneous regime. Nevertheless, the out of sample results of the integrated method are more satisfactory than those of the batch-trained model, with an AARE of 0.084 for the former and 0.155 for the latter, as shown in Table 6. Consequently, although the integrated method's prediction relies heavily on the performance of the regime classification, i.e., the SVR model for critical velocity prediction, it is more effective in practice to classify the regime first, as done in this work, before feeding the data to the model for prediction of the friction factor.

6. Summary

We have developed a robust integrated method using ANN and SVR algorithms for the prediction of the critical velocity and frictional pressure drop, by identifying and implementing existing knowledge of the main slurry flow regimes. The proposed model clearly outperforms existing well-known and widely used correlations and models for the prediction of critical velocity and frictional pressure drop. Furthermore, it overcomes the limitation of previous machine learning models, which targeted only the estimation of the frictional pressure drop in the heterogeneous regime.
The features have been extracted based on dimensional analysis of the geometrical and flow parameters involved in the governing equations of a solid–liquid slurry flow in a pipe. This ensures that we preserve all the data information with the smallest number of input dimensions, which is one of the main goals in developing machine learning algorithms and other predictive methods. Indeed, this is a relatively simple step that can be taken for any physical/mechanical scenario. Additionally, we have shown that the slurry friction factor estimation improves noticeably with regime classification before feeding the data to the developed model.
One limitation of the proposed integrated method is that its accuracy relies heavily on the regime classification performance. However, the overall prediction accuracy can be improved by ensuring that the data used for training the critical velocity and frictional pressure drop models come from the same distribution. Another limitation of this study is the limited number of data points available for efficient training of the proposed models. Using more complex machine learning methods, together with more data as it becomes available, is a natural direction for future work.
In general, the message of the paper is that one should not discard old methodologies in the assumption that new machine learning algorithms will automatically solve all problems. The challenge in industrial applications, where we need to predict important variables, is to integrate new predictive methodologies with the old and with our prior physical knowledge and know-how. In this respect our results are promising, showing a significant advance in predictive ability for a small investment in dimensional analysis.

Author Contributions

Conceptualization, A.S.S.; methodology, A.S.S.; software, A.S.S. and I.F.; validation, A.S.S. and I.F.; formal analysis, A.S.S.; investigation, A.S.S. and I.F.; data curation, A.S.S.; writing—original draft preparation, A.S.S.; writing—review and editing, A.S.S. and I.F.; visualization, A.S.S.; supervision, I.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research has been carried out at the University of British Columbia, supported financially by NSERC and Schlumberger through CRD project 505549-16, and by UBC through the 4YF programme.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing not applicable.

Conflicts of Interest

The authors declare that there is no conflict of interest.

References

  1. Oliemans, R.V.A. Multiphase Science and Technology for Oil/gas Production and Transport; University of Tulsa Centennial Petroleum Engineering Symposium, Society of Petroleum Engineers: Tulsa, OK, USA, 1994. [Google Scholar]
  2. Li, Y.; Bjorndalen, N.; Kuru, E. Numerical modelling of cuttings transport in horizontal wells using conventional drilling fluids. J. Can. Pet. Technol. 2007, 46. [Google Scholar] [CrossRef]
  3. Gillies, R.G.; Shook, C.A.; Xu, J. Modelling heterogeneous slurry flows at high velocities. Can. J. Chem. Eng. 2004, 82, 1060–1065. [Google Scholar] [CrossRef]
  4. Martins, A.; Santana, M.; Campos, W.; Gaspari, E. Evaluating the transport of solids generated by shale instabilities in ERW drilling. SPE Drill. Complet. 1999, 14, 254–259. [Google Scholar] [CrossRef]
  5. Durand, R.; Condolios, E. Experimental investigation of the transport of solids in pipes. In Proceedings of the Deuxieme Journée de lhydraulique, Societé Hydrotechnique de France, Grenoble, France, 25–29 June 1952. [Google Scholar]
  6. Turian, R.; Yuan, T.F. Flow of slurries in pipelines. AIChE J. 1977, 23, 232–243. [Google Scholar] [CrossRef]
  7. Wilson, K.C.; Addie, G.R.; Sellgren, A.; Clift, R. Slurry Transport Using Centrifugal Pumps; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
  8. Shook, C.A.; Roco, M.C. Slurry Flow: Principles and Practice; Elsevier: Amsterdam, The Netherlands, 2015. [Google Scholar]
  9. Oroskar, A.R.; Turian, R.M. The critical velocity in pipeline flow of slurries. AIChE J. 1980, 26, 550–558. [Google Scholar] [CrossRef]
  10. Kokpinar, M.; Gogus, M. Critical Flow Velocity in Slurry Transporting Horizontal Pipelines. J. Hydr. Eng. 2001, 127, 763–771. [Google Scholar] [CrossRef]
  11. Doron, P.; Barnea, D. A three-layer model for solid-liquid flow in horizontal pipes. Int. J. Multiph. Flow 1993, 19, 1029–1043. [Google Scholar] [CrossRef]
  12. Durand, R.; Condolios, E. The hydraulic transport of coal and solid material in pipes. In Proceedings of the Colloquium on the Hydraulic Transport of Coal, National Coal Board, London, UK, 5–6 November 1952; pp. 39–55. [Google Scholar]
  13. Zandi, I.; Govatos, G. Heterogeneous flow of solids in pipelines. J. Hydraul. Div. 1967, 93, 145–159. [Google Scholar] [CrossRef]
  14. Gruesbeck, C.; Salathiel, W.; Echols, E. Design of Gravel Packs in Deviated Wellbores. J. Pet. Technol. 1979, 31, 109–115. [Google Scholar] [CrossRef]
  15. Penberthy, W.; Bickham, K.; Nguyen, H.; Paulley, T. Gravel Placement in Horizontal Wells. SPE Drill. Complet. 1997, 12, 85–92. [Google Scholar] [CrossRef]
  16. Shirazi, A.S.; Frigaard, I. A three layer model for solids transport in pipes. Chem. Eng. Sci. 2019, 205, 374–390. [Google Scholar] [CrossRef]
  17. Osman, E.S.A.; Aggour, M.A. Artificial neural network model for accurate prediction of pressure drop in horizontal and near-horizontal-multiphase flow. Pet. Sci. Technol. 2002, 20, 1–15. [Google Scholar] [CrossRef]
  18. Ulker, E.; Sorgun, M. Comparison of computational intelligence models for cuttings transport in horizontal and deviated wells. J. Pet. Sci. Eng. 2016, 146, 832–837. [Google Scholar] [CrossRef]
  19. Azamathulla, H.M.; Ahmad, Z. Estimation of critical velocity for slurry transport through pipeline using adaptive neuro-fuzzy interference system and gene-expression programming. J. Pipeline Syst. Eng. Pract. 2013, 4, 131–137. [Google Scholar] [CrossRef]
  20. Lahiri, S.; Ghanta, K. Prediction of pressure drop of slurry flow in pipeline by hybrid support vector regression and genetic algorithm model. Chin. J. Chem. Eng. 2008, 16, 841–848. [Google Scholar] [CrossRef]
  21. Durand, R. Basic relationships of the transportation of solids in pipes experimental research. In Proceedings of the 5th Congress IAHR, Minneapolis, MN, USA, 1–4 September 1953; pp. 89–103. [Google Scholar]
  22. Vanoni, V. Sedimentation Engineering, ASCE Manuals and Reports on Engineering Practice—No. 54; American Society of Civil Engineers: New York, NY, USA, 1975. [Google Scholar]
  23. Awad, M.; Khanna, R. Support vector regression. In Efficient Learning Machines; Springer: Berlin/Heidelberg, Germany, 2015; pp. 67–80. [Google Scholar]
  24. Smola, A.J.; Schölkopf, B. A tutorial on support vector regression. Stat. Comput. 2004, 14, 199–222. [Google Scholar] [CrossRef]
  25. Fausett, L.V. Fundamentals of Neural Networks: Architectures, Algorithms and Applications; Pearson Education India: Noida, India, 2006. [Google Scholar]
  26. Murtagh, F. Multilayer perceptrons for classification and regression. Neurocomputing 1991, 2, 183–197. [Google Scholar] [CrossRef]
  27. Bishop, C.M. Neural Networks for Pattern Recognition; Oxford University Press: Oxford, UK, 1995. [Google Scholar]
  28. Sharma, S. Activation functions in neural networks. Towards Data Sci. 2017, 6, 310–316. [Google Scholar] [CrossRef]
  29. Deosarkar, M.P.; Sathe, V.S. Predicting effective viscosity of magnetite ore slurries by using artificial neural network. Powder Technol. 2012, 219, 264–270. [Google Scholar] [CrossRef]
  30. Fushiki, T. Estimation of prediction error by using K-fold cross-validation. Stat. Comput. 2011, 21, 137–146. [Google Scholar] [CrossRef]
  31. Drucker, H.; Burges, C.J.; Kaufman, L.; Smola, A.; Vapnik, V. Support vector regression machines. Adv. Neural Inf. Process. Syst. 1996, 9, 155–161. [Google Scholar]
  32. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
  33. Graf, W.; Robinson, M.; Yucel, O. Critical Velocity for Solid-Liquid Mixtures: The Lehigh Experiments; Fritz Laboratory Reports, paper 386; Lehigh University: Bethlehem, PA, USA, 1970. [Google Scholar]
  34. Avci, I. Experimentally Determination of Critical Flow Velocity in Sediment Carrying Pipeline Systems; Istanbul Technical University: Istanbul, Turkey, 1981. [Google Scholar]
  35. Yotsukura, N. Some Effects of Bentonite Suspensions on Sand Transport in a Smooth Four-Inch Pipe. Ph.D. Thesis, Colorado State University, Fort Collins, CO, USA, 1961. [Google Scholar]
  36. Wicks, M. Transport of solids at low concentration in horizontal pipes. In Advances in Solid–Liquid Flow in Pipes and Its Application; Elsevier: Amsterdam, The Netherlands, 1971; pp. 101–124. [Google Scholar]
  37. Sinclair, C. The limit deposit-velocity of heterogeneous suspensions. In Proceedings of the Symposium on the Interaction Between Fluids and Particles, Third Congress of the European Federation of Chemical Engineers, London, UK, 20–22 June 1962. [Google Scholar]
  38. Doron, P.; Granica, D.; Barnea, D. Slurry flow in horizontal pipes—Experimental and modeling. Int. J. Multiph. Flow 1987, 13, 535–547. [Google Scholar] [CrossRef]
  39. Schaan, J.; Sumner, R.; Gillies, R.; Shook, C. The effect of particle shape on pipeline friction for newtonian slurries of fine particles. Can. J. Chem. Eng. 2000, 78, 717–725. [Google Scholar] [CrossRef]
  40. Matousek, V. Pressure drops and flow patterns in sand-mixture pipes. Exp. Therm. Fluid Sci. 2002, 26, 693–702. [Google Scholar] [CrossRef]
  41. Clift, R.; Wilson, K.; Addie, G.; Carstens, M. A Mechanistically-Based Method for Scaling Pipeline Tests for Settling Slurries. In Proc. Hydrotransport 8; BHRA Fluid Engineering: Cranfield, UK, 1982; pp. 91–101. [Google Scholar]
  42. Yagi, T.; Okude, T.; Miyazaki, S.; Koreishi, A. An Analysis of the Hydraulic Transport of Solids in Horizontal Pipes; Report of the Port & Harbour Research Institute; Nagase: Yokosuka, Japan, 1972; Volume 11. [Google Scholar]
Figure 1. Multilayer perceptron (MLP) architecture with two hidden layers and one prediction output.
Figure 2. Diagram of non-linear support vector regression with a soft margin using the ϵ-insensitive loss function. The circular dots represent the data points: blue and red dots are the points inside and outside the ϵ-tube, respectively, and green dots are support vectors.
Figure 3. Frictional pressure drop as a function of mean velocity for different slurry flow regimes.
Figure 4. The work-flow charts for (a) obtaining the most generalized model corresponding to each dataset and (b) an integrated method for prediction of the slurry friction factor.
Figure 5. Kernel density distribution and box plot of all the input features ($C_v$, $s$, $\delta$, $Re_{pw}$) and the output ($\hat{V}_c/\sqrt{\hat{g}\hat{D}}$) (a–e, respectively) for the critical velocity prediction model.
Figure 6. Experimentally measured vs. predicted results of the dimensionless critical velocity for training and test sets.
Figure 7. Comparison between the dimensionless critical velocity measured experimentally by Kokpinar et al. and the results predicted by their model and by the present SVR model.
Figure 8. Effect of the hyperparameter C on the loss function for the training, validation, and test sets.
Figure 9. Kernel density distribution and box plot of all the input features ($C_v$, $s$, $Re$, $Fr$, $\delta$, $f_w$) and the output ($f_{sl}$) (a–g, respectively) for the slurry friction factor prediction model in the heterogeneous and bed-load regimes.
Figure 10. Experimentally measured vs. predicted results of the slurry friction factor in (a) the heterogeneous regime and (b) the bed-load regime for training and test sets.
Figure 11. Learning curve showing the effect of the number of training epochs on the loss functions of training, validation, and test sets for the heterogeneous regime ANN model.
Figure 12. Experimentally measured vs. predicted slurry friction factor for the training set, using the SVR model trained without regime classification.
Figure 13. The measured and predicted slurry friction factor parity plot for (a) integrated method and (b) batch-trained model.
Table 1. Proposed correlations for the critical velocity.
Researcher | Proposed Correlation
Kokpinar et al. [10] a | $\hat{V}_c/\sqrt{\hat{g}\hat{D}} = 0.055\,\delta^{-0.60}\,C_v^{0.27}\,(s-1)^{0.07}\,Re_p^{0.30}$
Durand [21] b | $\hat{V}_c = F_L\sqrt{2\hat{g}\hat{D}(s-1)}$
Zandi et al. [13] c | $\hat{V}_c = \{[40\,C_v\,\hat{D}\,\hat{g}\,(s-1)]/C_D\}^{0.5}$
Yufin [22] d | $\hat{V}_c = 14.23\,\hat{d}_p^{\,0.65}\,\hat{D}^{\,0.54}\exp\!\big(1.36\,[C_v(s-1)]^{0.5}\,\hat{d}_p^{\,0.13}\big)$
Oroskar & Turian [9] e | $\hat{V}_c/\sqrt{\hat{g}\hat{d}_p(s-1)} = 1.85\,C_v^{0.1536}\,(1-C_v)^{0.3564}\,\delta^{-0.378}\,Re_p^{0.09}\,x^{0.30}$
a $Re_p = \hat{\rho}_l\hat{\omega}\hat{d}_p/\hat{\mu}_l$. b $F_L$ is a constant. c $C_D$ is the drag coefficient. d Lengths are measured in feet. e $x$ is the fraction of turbulent eddies with velocities exceeding the particle settling velocity, a function of $\gamma = \hat{V}_p/\hat{V}_c$ (see [9] for the full expression).
Table 2. Parameters of the experimental data considered for comparison with critical velocity.
Source | Data Sets | $\hat{D}$ [m] | $\hat{d}_p$ [mm] | $\hat{V}_c$ [m/s] | $C_v$ | s
Kokpinar et al. [10] | 42 | 0.15 | 1.09–5.34 | 1.06–3.00 | 0.011–0.091 | 1.04–2.6
Graf et al. [33] | 12 | 0.102; 0.152 | 0.45–0.88 | 1.55–2.42 | 0.007–0.07 | 2.65
Durand [12] | 7 | 0.15 | 0.44–2.04 | 2.19–2.71 | 0.05–0.15 | 2.6
Avci [34] | 15 | 0.052 | 0.29–3.2 | 0.27–1.58 | 0.05–0.30 | 1.04–2.68
Yotsukura [35] | 11 | 0.108 | 0.23–1.15 | 1.83–2.96 | 0.05–0.25 | 2.6
Wicks [36] | 2 | 0.027; 0.14 | 0.25 | 0.46–0.79 | 0.01 | 2.6
Sinclair [37] | 11 | 0.025 | 2.205 | 0.32–0.52 | 0.03–0.18 | 1.74
Table 3. Optimum hyper-parameters obtained by the SVR algorithm.
Case | C | ϵ | Kernel Type | Kernel Parameter | Validation Loss
Critical Velocity | 40 | 0.05 | RBF | 0.2 | 0.059
Heterogeneous Friction Factor | 800 | 0.025 | RBF | 0.1 | 0.097
Bed-load Friction Factor | 10 | 0.05 | RBF | 0.6 | 0.123
Batch-trained Friction Factor | 50 | 0.025 | RBF | 0.5 | 0.149
Table 4. Parameters of the experimental data considered for comparison for frictional pressure drop.
Source | Regime | $\hat{D}$ [m] | $\hat{d}_p$ [mm] | $\hat{U}_s$ [m/s] | $C_s$ | s | $d\hat{p}/d\hat{z}$ [kPa/m]
Gillies et al. [3] | Heterogeneous | 0.103 | 0.09; 0.27 | 1.49–7.77 | 0.10–0.40 | 2.65 | 0.37–5.32
Schaan et al. [39] | Heterogeneous | 0.053 | 0.085–0.1 | 0.99–5.02 | 0.15–0.40 | 2.44–2.66 | 0.27–7.20
Matousek [40] | Heterogeneous | 0.155 | 0.37 | 4.72–8.98 | 0.12; 0.26 | 2.65 | 0.99–3.53
Doron et al. [38] | Heterogeneous | 0.051 | 3.00 | 0.55–1.63 | 0.042–0.115 | 1.24 | 0.22–0.63
Durand [5] | Bed-load | 0.15 | 0.44; 2.04 | 1.10–2.13 | 0.085–0.26 | 2.65 | 0.52–2.13
Clift [41] | Bed-load | 0.44 | 0.29–0.68 | 1.73–3.81 | 0.11; 0.15 | 2.65; 2.87 | 0.36–1.05
Yagi [42] | Bed-load | 0.08–0.15 | 0.25–1.28 | 1.00–2.81 | 0.15 | 2.63–2.67 | 1.01–4.67
Doron et al. [38] | Bed-load | 0.051 | 3.00 | 0.24–0.55 | 0.042–0.115 | 1.24 | 0.19–0.41
Table 5. Optimum hyper-parameters obtained by the ANN algorithm.
Case | Hidden Layers | Neurons | Activation Function | Epochs | Learning Rate | Validation Loss
Critical Velocity | 1 | 16 | Leaky ReLU | 120 | 0.08 | 0.072
Heterogeneous Friction Factor | 2 | 16 | Leaky ReLU | 350 | 0.02 | 0.090
Bed-load Friction Factor | 2 | 14 | Leaky ReLU | 500 | 0.02 | 0.112
Batch-trained Friction Factor | 2 | 18 | Leaky ReLU | 700 | 0.01 | 0.155
Table 6. Performance of the chosen models on training and test sets.
Case | Chosen Model | Set | AARE | σ | R
Critical Velocity | SVR | training | 0.073 | 0.153 | 0.959
 | | test | 0.099 | 0.207 | 0.920
Heterogeneous Friction Factor | ANN | training | 0.017 | 0.013 | 0.997
 | | test | 0.026 | 0.034 | 0.992
Bed-load Friction Factor | ANN | training | 0.025 | 0.024 | 0.999
 | | test | 0.054 | 0.085 | 0.997
Batch-trained Friction Factor | SVR | training | 0.097 | 0.096 | 0.963
 | | test | 0.155 | 0.178 | 0.926
Integrated Method Friction Factor | SVR-ANN | test | 0.084 | 0.215 | 0.991
