4.1. Simulation Campaign Overview
The simulation campaign consisted of 25 cases, which differed in the amount of upwinding introduced by the convective flux interpolation scheme, the density of the grid, and the VoF methodology employed.
A single simulation, referred to as the benchmark, was run on a grid with the edge of the cubic cells set to 1 mm, which is approximately equal to the resolution used in the DNS. With this grid, the theoretical height of the jump was discretised by 81 cells, and the grid comprised several million cells in total. Algebraic VoF was used, and only 2% upwinding was employed. We note that the initial plan was to use geometric VoF for the benchmark simulation due to its superior accuracy. However, stabilising the simulation proved difficult. Several costly attempts were made, with the amount of upwinding gradually increased, but even with 20% upwinding, instabilities occurred.
The rest of the simulations covered four choices of the grid resolution (cell sizes given in mm) and four amounts of upwinding. We from here on refer to the four grids used in the study in order of decreasing resolution and label the simulations by the amount of upwinding employed. For each configuration, algebraic and geometric VoF were considered; these are referred to by the name of the key underlying algorithm, MULES and isoAdvector, respectively. As mentioned in Section 3, for the simulations on two of the grids, the spanwise extent of the domain was doubled.
All simulations were first run for 1 s of simulation time, after which time-averaging was commenced and continued for 11 s. In nondimensional time units, this is to be compared with the 120 time units across which the reference DNS was averaged. To obtain the final statistical results, spatial averaging along the spanwise direction was also performed; time- and spanwise-averaged quantities are denoted with angular brackets below. Adaptive time stepping based on the maximum CFL number registered in the domain was used. For simulations using MULES, the maximum CFL allowed was 0.75, whereas 0.5 was used with isoAdvector.
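The CFL-based adaptive time stepping used above can be sketched as follows. This is an illustrative reconstruction, not the solver's actual implementation; the function name and the growth limiter are assumptions:

```python
import numpy as np

def adaptive_dt(u_mag, cell_size, dt_old, co_max=0.5, growth_limit=1.2):
    """Pick the next time step so that the maximum Courant number
    max(|u|) * dt / dx stays at or below co_max. Sudden increases of
    dt are damped, mimicking typical solver behaviour. (Hypothetical
    sketch; not the actual OpenFOAM implementation.)"""
    co_now = np.max(u_mag / cell_size) * dt_old    # current max CFL
    dt_new = dt_old * co_max / max(co_now, 1e-12)  # rescale towards target
    return min(dt_new, growth_limit * dt_old)      # limit step growth
```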
Further notation used in the remainder of the paper is now introduced. The mean location of the interface corresponds to the 0.5 isoline in the mean volume fraction field. The triple (u, v, w) is used to denote the three Cartesian components of velocity. The toe of the jump is defined as the streamwise location at which the vertical position of the mean interface reaches a prescribed value; the same definition is used in the reference DNS data [4]. The streamwise and vertical coordinates used below are rescaled with respect to these reference quantities.
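The interface and toe definitions above translate directly into a small post-processing routine. The sketch below is a hypothetical illustration of the stated definitions (helper names, array layout, and the interpolation details are assumptions, not the authors' code); it assumes water lies below air in each wall-normal profile:

```python
import numpy as np

def interface_height(alpha_mean, y, threshold=0.5):
    """Mean interface height at each streamwise station: the y at which
    the mean volume fraction profile crosses `threshold`, with linear
    interpolation between the bracketing cells. alpha_mean has shape
    (nx, ny); y holds the cell-centre heights."""
    heights = np.empty(alpha_mean.shape[0])
    for i, profile in enumerate(alpha_mean):
        k = np.where(profile >= threshold)[0][-1]  # last "wet" cell
        if k == len(profile) - 1:
            heights[i] = y[-1]
        else:
            a0, a1 = profile[k], profile[k + 1]
            w = (a0 - threshold) / (a0 - a1)       # interpolation weight
            heights[i] = y[k] + w * (y[k + 1] - y[k])
    return heights

def toe_location(x, heights, h_toe):
    """First streamwise station where the interface rises above h_toe."""
    return x[np.argmax(heights > h_toe)]
```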
4.2. Benchmark Simulation
Here, the results of the benchmark simulation are compared to the DNS of Mortazavi et al. [4]. The grid resolution in the two simulations was similar, but the setup did not match exactly, as pointed out in Section 3. There were also certain differences in the definitions of the considered quantities of interest, as discussed below. Additionally, for certain quantities, the DNS data were clearly not fully converged. Nevertheless, a qualitative and, in most cases, quantitative comparison of the results was possible. We stress that the primary goal here was not to obtain perfect agreement, but rather to answer the principal question of whether the employed physical and numerical modelling frameworks were capable of capturing the properties of such a complicated flow.
An overview of the distribution of the main flow quantities is given first; see Figure 3. The top-left plot shows the distribution of the mean volume fraction, with the magenta line showing the mean interface location. Close to the toe of the jump, and some distance downstream, the values of the volume fraction were significantly lower than one, indicating air entrainment. The mean streamwise and vertical velocities are shown in the top-right and bottom-left plots, respectively. As expected, the streamwise velocity was significantly lower downstream of the jump. It is also visible how the boundary layer in the gas followed the interface, leading to an increase in the vertical velocity in a region above the toe of the jump. Finally, the mean turbulent kinetic energy per unit mass is shown in the bottom-right plot. High values were observed in the region close to the toe, with the peak directly downstream of it. This reflects the coupling between turbulence and air entrainment.
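By definition, the mean turbulent kinetic energy per unit mass plotted in Figure 3 is half the sum of the variances of the three velocity components; a minimal sketch of computing it from velocity samples:

```python
import numpy as np

def turbulent_kinetic_energy(u, v, w):
    """Mean turbulent kinetic energy per unit mass,
    k = 0.5 * (var(u') + var(v') + var(w')),
    with samples (time/spanwise realisations) along axis 0."""
    return 0.5 * (np.var(u, axis=0) + np.var(v, axis=0) + np.var(w, axis=0))
```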
As discussed in the Introduction, the depth of the water after the jump can be computed a priori. It was therefore possible to examine how the location of the interface approaches this theoretical depth with increasing x. The corresponding graph is shown in Figure 4 along with the reference DNS data. We note that the value at the toe was fixed through the definition of the toe location, which explains why the agreement with the DNS there was perfect. The rate of growth of the water depth continued to be similar in both the LES and DNS over an initial distance downstream of the toe. Further downstream, the DNS values converged towards the theoretical depth at a faster pace, and for the LES, full convergence was in fact not achieved within the limits of the computational domain. The observed discrepancy is likely explained by the difference in the treatment of the outflow boundary.
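For a classical hydraulic jump, the a priori downstream depth follows from the standard Bélanger (conjugate depth) momentum relation; a minimal sketch, assuming the usual definition of the upstream Froude number (that the paper uses exactly this form is an assumption):

```python
import math

def conjugate_depth(d1, fr1):
    """Downstream depth d2 of a hydraulic jump from the upstream depth
    d1 and upstream Froude number Fr1 = u1 / sqrt(g * d1), via the
    Belanger relation: d2 = d1/2 * (sqrt(1 + 8 Fr1^2) - 1)."""
    return 0.5 * d1 * (math.sqrt(1.0 + 8.0 * fr1 ** 2) - 1.0)
```

For the critical case Fr1 = 1, the relation reduces to d2 = d1, as expected.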
Figure 5 shows the obtained profiles of the mean volume fraction. The agreement with the DNS was extremely good, with observable discrepancies only at the streamwise positions closest to the toe. As discussed above, the most intense air entrainment occurred right downstream of the toe, so it is unsurprising that capturing the volume fraction profile in this region was the most difficult.
The mean streamwise and vertical velocity profiles are shown in Figure 6. The horizontal magenta lines show the positions of the mean interface. Excellent agreement with the reference was obtained at all six streamwise positions. A noticeable deviation was only observed in the values of the vertical velocity of the air, which are not of particular interest and can be significantly affected by the boundary condition at the top of the domain.
The profiles of the root-mean-squared values of the three velocity components are shown in Figure 7. We note that inspection of the DNS data clearly showed that these second-order statistical moments had not completely converged; see Figure 8 in [4]. In light of this and the differences in the simulation setup, the obtained agreement was generally very good. All three components were predicted with similar accuracy. It is noteworthy that the disagreement with the DNS was chiefly observed in the air and a short distance below the interface, whereas closer to the bottom, the match was close to perfect.
The analysis continued with the consideration of the temporal energy spectra of the velocity fluctuations. These were computed at two positions downstream of the toe, shown with red dots in the top-left plot in Figure 3. Note that the positions of these points relative to the toe were essentially an outcome of the simulation, since it was not possible to know the toe location a priori. Furthermore, the intention was to use the same x and y values in the whole simulation campaign, and the location of the toe varied slightly from simulation to simulation. The values were therefore chosen in a conservative way to ensure that both locations were to the right of the toe. The DNS data also provided temporal velocity spectra, including positions comparable to the two considered here. Both the DNS and LES data are shown in Figure 8. The LES recovered the correct slope in the inertial range, which was in most cases close to the canonical −5/3 power law. Less energy was contained in the fluctuations in the case of the LES, but this was likely a consequence of the signals being sampled from locations further from the toe. Spectra for all three velocity components were predicted with comparable precision.
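A temporal spectrum of the kind discussed above can be estimated with Welch's method; the sketch below is a generic, NumPy-only illustration (the segment count and windowing choices are assumptions, not the authors' processing pipeline):

```python
import numpy as np

def temporal_spectrum(signal, dt, n_seg=8):
    """One-sided power spectral density of a sampled signal by Welch's
    method: average periodograms of Hann-windowed, 50%-overlapping
    segments. Returns (frequencies, psd). In the inertial range of a
    turbulent velocity signal, the psd is expected to decay roughly
    as f**(-5/3)."""
    seg_len = 2 * len(signal) // (n_seg + 1)
    window = np.hanning(seg_len)
    norm = (window ** 2).sum() / dt          # PSD normalisation
    step = seg_len // 2
    psds = []
    for start in range(0, len(signal) - seg_len + 1, step):
        seg = signal[start:start + seg_len]
        seg = (seg - seg.mean()) * window    # detrend and window
        psds.append(np.abs(np.fft.rfft(seg)) ** 2 / norm)
    freqs = np.fft.rfftfreq(seg_len, dt)
    return freqs, np.mean(psds, axis=0)
```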
Next, the spanwise autocorrelation functions of the three velocity components were considered. These were computed at the same two locations as the temporal spectra, plus an additional location further downstream; see the black dot in Figure 3. The result is shown in Figure 9. Evidently, the autocorrelation functions did not decline to zero for two of the three considered locations. This indicates that the spanwise dimension of the computational domain was somewhat insufficient and prompted the use of a larger domain for the simulations on the coarser meshes. The figure also presents the ratio of the integral length scales to the cell size in the spanwise direction. At the location closest to the toe, the smallest integral scale to be discretised was covered by fewer cells than the eight recommended in [33] for a coarse LES. This may indicate that even with the finest mesh, some turbulent scales were resolved poorly. Alternatively, the integral length scale may be a poor metric to relate the grid resolution to for this particular flow. In any case, further downstream, the integral length scales grew, meaning that the resolution with respect to them improved.
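The spanwise autocorrelation functions and the associated integral length scales can be computed along the periodic spanwise direction as sketched below. This is an illustrative reconstruction: the FFT-based estimator and the first-zero-crossing convention for the integral scale are assumptions, not necessarily the conventions used in the paper:

```python
import numpy as np

def spanwise_autocorrelation(u, dz):
    """Autocorrelation of velocity fluctuations along a periodic
    spanwise direction, averaged over samples. u has shape
    (n_samples, nz). Returns (separations, R) with R[0] = 1."""
    fluct = u - u.mean(axis=1, keepdims=True)   # remove spanwise mean
    spec = np.abs(np.fft.fft(fluct, axis=1)) ** 2
    corr = np.fft.ifft(spec, axis=1).real.mean(axis=0)  # circular ACF
    corr /= corr[0]
    nz = u.shape[1]
    r = dz * np.arange(nz // 2 + 1)
    return r, corr[: nz // 2 + 1]

def integral_length_scale(r, R):
    """Trapezoidal integral of R up to its first zero crossing
    (a common, but not universal, convention)."""
    cut = np.argmax(R <= 0.0) if np.any(R <= 0.0) else len(R)
    rr, RR = r[:cut], R[:cut]
    return 0.5 * np.sum((RR[:-1] + RR[1:]) * np.diff(rr))
```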
Lastly, we analysed the air entrainment by considering the temporal variation of the volume of air passing through a box located downstream of the toe. The box is shown with red lines in the top-left plot in Figure 3. Similarly to the analysis made for the DNS [4], we considered the autocorrelation function of the recorded signal. The result is shown in Figure 10. As expected, strong periodicity was revealed. The DNS data appeared somewhat unconverged, but the location of the first peak was relatively close to that of the LES. The integral time scales corresponding to the two curves were clearly different, but this is explained by the fact that the width of the box used for sampling the signal was larger in the LES.
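The entrainment period implied by such a signal can be estimated as the lag of the first secondary peak of its autocorrelation function. The sketch below is a generic illustration of this idea (the peak-finding heuristic is an assumption, not the authors' procedure):

```python
import numpy as np

def first_peak_lag(signal, dt):
    """Estimate the dominant period of a quasi-periodic signal as the
    lag of the first local maximum of its autocorrelation function."""
    fluct = signal - signal.mean()
    full = np.correlate(fluct, fluct, mode="full")
    acf = full[len(signal) - 1:]        # keep non-negative lags
    acf = acf / acf[0]                  # normalise so acf[0] = 1
    k = 1
    while k < len(acf) - 1 and acf[k] <= acf[k - 1]:
        k += 1                          # descend from the zero-lag peak
    while k < len(acf) - 1 and acf[k + 1] >= acf[k]:
        k += 1                          # climb to the first secondary peak
    return k * dt
```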
The primary conclusion of this section is that OpenFOAM® can be successfully used for scale-resolving simulations of the CHJ. This can probably be extended to include other codes based on the same discretisation and multiphase modelling frameworks. In spite of the slight differences in the simulation setup, the observed overall agreement with the DNS data was good not only for the first- and second-order statistical moments of the considered flow variables, but also for temporal turbulent spectra and the air entrainment properties. This justifies using the produced dataset as an accurate reference for the particular simulation setup selected.
4.3. Influence of Modelling Parameters
In this section, the effects of the grid resolution, amount of upwinding, and interface-capturing method on the cost and accuracy of the results are considered. The cost of the simulations is analysed first, and the associated metric is defined as follows. First, the simulation logs were used to compute the number of physical hours necessary to advance each simulation by 1 s. Since the simulations on different grids were parallelised using different numbers of computational cores, the obtained timings were then multiplied by the corresponding number of cores used. This assumed linear scaling of the computational effort with parallelisation, which is not exact, but provides a very good approximation in the range of core counts used in the study. Recall also that in the simulations using isoAdvector, the time step was adjusted to ensure the maximum Courant number was 0.5, whereas 0.75 was used in the MULES simulations. To account for the cost difference associated with the VoF algorithm as such, the cost metric for the MULES simulations was premultiplied by a factor compensating for the difference in the Courant limit. Note that since a typical desktop computer has around 10 computational cores and the full simulation needs to be run for about 10 s, the metric also gives a rough estimate of how many hours it would take to perform a given simulation on a desktop machine.
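The cost metric described above can be sketched as follows. The function name and the exact form of the Courant correction factor are assumptions for illustration; only the ingredients named in the text (wall hours per simulated second, core count, Courant limits) are taken from the source:

```python
def cost_metric(wall_hours_per_sim_second, n_cores, cfl_max, cfl_ref=0.5):
    """Core-hours needed to advance a simulation by 1 s of physical
    time. Runs with a larger Courant limit take fewer (hence cheaper)
    time steps, so their cost is scaled up by cfl_max / cfl_ref to be
    comparable with runs at the reference limit. (Assumed correction.)"""
    return wall_hours_per_sim_second * n_cores * (cfl_max / cfl_ref)
```

For example, a MULES run (CFL limit 0.75) that needs 2 wall hours per simulated second on 10 cores would be assigned a corrected cost of 30 core-hours per simulated second.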
The obtained values of the cost metric are shown in Table 2. Each entry contains two numbers, corresponding to MULES and isoAdvector. It is evident that the isoAdvector simulations were more expensive, with the cost ratio varying depending on the other simulation parameters. As a general trend, isoAdvector became relatively more expensive with increased mesh resolution. Numerical dissipation sometimes favourably affected the number of iterations necessary to solve the pressure equation. Here, this effect was observed in the transition from 10% to 25% upwinding, with the former always leading to a more expensive simulation. However, for higher amounts of upwinding, the effect of dissipation on the cost was neither particularly strong nor regular. When considering the cost as a function of grid resolution, it is crucial to recall that the simulations on the finer grids were performed on a thinner domain. Since the computational effort did not scale linearly with the number of cells, this could not be directly accounted for in the metric. Based on the data, on a desktop machine, it would be possible to perform the coarsest-grid simulations in about 3 d and the intermediate-grid ones in about 10 d. For the finest grid, the corresponding number ranged from 25 d to 35 d, depending on the simulation settings. Taking into account the increased access of both academia and industry to HPC hardware, it can be said that the simulations on all three grids were relatively inexpensive, at least by LES standards.
Next, the computed profiles of the mean volume fraction were investigated; see Figure 11. The benchmark simulation revealed that the region of the flow that was most difficult to predict was directly downstream of the toe. Therefore, here, we focused on three streamwise positions located in that region. The clear trend overarching all grids and VoF methods was that a higher amount of upwinding led to better results. For the majority of grids and streamwise positions, using isoAdvector and the largest amount of upwinding led to the best predictive accuracy. The fact that using more dissipative schemes improved the results was somewhat unexpected, because typically, the recommendation for scale-resolving simulations is to keep dissipation to a minimum. However, it should be appreciated that in VoF, any parasitic currents arising due to numerical errors of the dispersive type propagate into errors in the advection of the interface. It appears that avoiding these errors is more important than resolving steep velocity gradients. As expected, the quality of the results degraded with the coarsening of the mesh. The most precise result on the finest grid was quite close to the benchmark. On the coarser grids, the accuracy was acceptable considering how inexpensive the corresponding simulations were.
The predictions of the mean velocity are analysed next; see Figure 12. We focused on the streamwise component u only, since the level of accuracy of the vertical component was similar. It was clear that, compared to the volume fraction, the results were more robust with respect to the amount of upwinding. This is rather peculiar: the choice of interpolation scheme for u had little effect on the predicted velocity itself, but a stronger effect on a different quantity, the volume fraction. Nevertheless, the profiles obtained with higher amounts of upwinding were generally slightly more accurate, at least in the water phase. Using isoAdvector led to superior accuracy in the gas phase, whereas in the water phase, no significant advantage over MULES was achieved. The combination of the finest grid, the largest amount of upwinding, and isoAdvector gave the best results, which were close to the benchmark. At coarser resolutions, the accuracy deteriorated, but not as strongly as for the volume fraction.
Figure 13 shows the obtained profiles of the turbulent kinetic energy. The observed error patterns were significantly less regular than for the volume fraction and the mean velocity. Two factors contributed to this. One is that the turbulent kinetic energy lumps together the errors in the variances of the three velocity components. The other is that parasitic oscillations had a direct amplifying effect on the computed fluctuations. Both of the above can lead to either error cancellation or amplification. On the finest grid, the best results were achieved with isoAdvector and 25/50% upwinding. In the latter case, the main peak in the detached shear layer was somewhat under-predicted, but the discrepancy was not very significant. An interesting observation is that at lower grid resolutions, a secondary peak in the turbulent kinetic energy developed right underneath the interface at some of the sampled positions. This unphysical peak was more pronounced when isoAdvector was used and could even be observed on the finest grid when this interface-capturing technique was used. It was present in all three components of the velocity variance, although for the streamwise component, it was less pronounced. The size of the peak grew with a decreasing amount of upwinding, which confirmed its numerical origin. Even apart from this additional peak, the results for the turbulent kinetic energy on the coarser grids were quite inaccurate, although the combination of the higher amount of upwinding and MULES reproduced the main features of the benchmark profiles fairly faithfully.
The analysis of the velocity predictions is now concluded by considering the spanwise energy spectra of the streamwise velocity; see Figure 14. The spectra were computed at the same three locations as the spanwise autocorrelation functions for the benchmark simulation. This entailed that the respective positions relative to the toe were slightly different from simulation to simulation. The reason for considering spanwise spectra instead of temporal ones was that, due to the larger number of samples to average across, the spanwise spectra were much smoother, making it easier to distinguish the profiles from different simulations in the plots. Unsurprisingly, increased upwinding led to heavier dampening of the high-frequency fluctuations. Due to the log–log scale used, it was actually difficult to distinguish any effects of the VoF algorithm or the grid, besides the fact that the frequency band of the spectrum was larger for denser meshes. One could say that for small amounts of upwinding, the spectrum was relatively well predicted even at the coarsest resolution. Therefore, one should exercise caution when making judgements regarding the mesh resolution based on the spectrum predictions.
Finally, the periodicity of air entrainment was analysed. As for the benchmark simulation, the autocorrelation functions of the volume of air passing through a box located some distance downstream of the toe (see the top-left plot in Figure 3) were computed. The results are shown in Figure 15. On the finer grids, the location of the first peak was quite well predicted by all the simulations, whereas on the coarsest grid, the accuracy deteriorated, in particular for some of the simulations using MULES. Animations of the interface isosurface revealed that isoAdvector did a much better job at preserving the sharpness of the interface as the entrained bubbles travelled downstream. Therefore, if tracking the fate of the bubbles is important, using this VoF approach is recommended. It should also be noted that while all the simulations predicted similar entrainment frequencies, other statistical air entrainment properties did not agree equally well. For example, the mean amount of air within the monitored box was strongly affected by the choice of the VoF method, with MULES giving systematically higher values.