1. Introduction
In the maritime industry, simulation-driven design (SDD) has become a widely accepted approach to developing functional surfaces such as ship hulls and appendages as well as turbochargers and engine components [1]. In simulation-driven design, variants of a functional surface are analysed by means of simulations, often resource-intensive simulations of computational fluid dynamics (CFD) and finite element analysis (FEA). The simulation results are also used to come up with new variants which are then analysed in turn. Instead of running the process manually, formal explorations and exploitations, such as Design-of-Experiments (DoE) and local or global search strategies, respectively, are put to use. As a consequence, the simulations—more precisely, the results from the simulations for each of the variants investigated—drive the process, the aim being to improve selected objectives while complying with a set of constraints, see [1].
In SDD, the role of the design team is that of formulating the design task, setting up an efficient variation approach via a suitable computer aided design (CAD) model, selecting and configuring a meaningful simulation approach as well as monitoring and adjusting the process. Ideally, the busy work of changing geometry, pre-processing the simulations, transferring, extracting, post-processing and aggregating data, as well as of managing variants and results, is done by a process integration and design optimisation (PIDO) environment. This frees the design team from cumbersome, repetitive and error-prone work and, naturally, allows and encourages the generation and analysis of a great many variants. Simulation-driven design generally thrives on this abundance, and often the number of variants considered is one to two, sometimes even three, orders of magnitude higher than in the more laborious traditional approach of manually generating a new variant and analysing its performance in a separate step afterwards.
Not surprisingly, affording many variants increases the chance of identifying an exceptionally good design. Still, there are several good reasons why SDD approaches are being investigated and proposed that need as little time and as few simulations as possible:
More and more high-fidelity simulations are used which require considerable computational resources, sometimes several hours or even days per variant.
A number of operational points are considered concurrently which calls for more simulations per variant.
A wider range of objectives from different disciplines are taken into account in parallel, again intensifying the computational burden.
Time pressure to improve a product is continuously rising, rendering faster optimisation campaigns more attractive.
Less expensive products developed with smaller budgets could and should also benefit from SDD.
Hence, one of the challenges is to reduce the number of simulation runs meaningfully, possibly by sacrificing some of the improvement potential for the sake of conducting a faster campaign. The actual number of variants to be analysed quickly scales up with the system’s degrees-of-freedom (DoF), i.e., the number of free variables defining the functional surface or its modifications. Two recent approaches to counteracting this and to reducing turn-around times in SDD were discussed in [2], namely, parametric-adjoint simulation and dimensionality reduction. Both approaches are meant to improve the fluid-dynamic performance of a functional surface with not too many simulation runs, i.e., within less than 100 variations.
Mathematically speaking, parametric-adjoint simulation is a very elegant approach, but it requires the solution of the adjoint equations which, so far, only a few CFD solvers provide. The gradient of the objective with respect to the free variables is numerically approximated by concatenating the CAD model’s design velocities and the CFD code’s adjoint sensitivities, the latter resulting from the solution of the so-called adjoint equations. The additional effort of determining the adjoint sensitivities is similar to that of solving the primal equation system, for instance the Reynolds-averaged Navier–Stokes (RANS) equations, and is independent of the number of free variables. Intrinsically, the approach is confined to a local search unless systematically repeated in various regions of the design space.
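In generic notation (this is a standard formulation, not the specific notation of [2]), the gradient assembly reads

\[ \frac{\mathrm{d}J}{\mathrm{d}\alpha_i} \;=\; \int_{S} \frac{\delta J}{\delta \boldsymbol{x}} \cdot \frac{\partial \boldsymbol{x}}{\partial \alpha_i} \,\mathrm{d}S , \]

where \(J\) is the objective, \(\alpha_i\) a free variable, \(\delta J / \delta \boldsymbol{x}\) the adjoint sensitivity on the surface \(S\) and \(\partial \boldsymbol{x} / \partial \alpha_i\) the design velocity of the CAD model. A single adjoint solve supplies the surface sensitivity for all \(\alpha_i\) at once, which is what makes the cost independent of the number of free variables.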
Dimensionality reduction is also mathematically elegant. It builds on a principal component analysis (PCA) of the design space spanned by the CAD variables, i.e., the geometric parameters that define the functional surface itself or its modifications. The original set of CAD variables is replaced by a (considerably) smaller set of principal parameters (or modes) which capture the variability of the possible shape variations up to a user-specified level. The approach is independent of the chosen simulation code(s), but the room for improvement is somewhat smaller since some variability is deliberately forgone.
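A minimal sketch of such a dimensionality reduction, assuming the shape variability has been sampled as flattened surface-point displacements (all sizes and names below are illustrative, not taken from the paper):

```python
import numpy as np

# Hypothetical input: each row is one design variant, each column the
# displacement of one surface point (flattened), sampled e.g. via a Sobol DoE.
X = np.random.default_rng(0).normal(size=(200, 3000))  # placeholder samples

X_mean = X.mean(axis=0)
Xc = X - X_mean                      # centre the samples
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# Retain the fewest modes that capture, say, 95% of the shape variability.
var_ratio = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(var_ratio, 0.95)) + 1
modes = Vt[:k]                       # principal parameters (orthonormal)

# Any shape is now described by k coefficients instead of the full variable set:
coeffs = Xc @ modes.T                # project samples onto the reduced basis
reconstructed = X_mean + coeffs @ modes
```

The orthonormality of the modes is what avoids the geometric dependencies between CAD variables mentioned below.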
Another option for reducing the number of free variables and, hence, the number of necessary simulations is to conduct an initial DoE from which the most important CAD variables are identified and kept for subsequent optimisation runs while the less important CAD variables are held constant. Quite a few variants need to be investigated before the subset of variables with which to continue can be selected. The potential for tangibly reducing simulations is therefore somewhat limited. Importantly, this approach should not be confused with the dimensionality reduction based on PCA. The principal parameters of the PCA form a new and orthonormal coordinate system, avoiding many inherent geometric dependencies between CAD variables.
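A hedged sketch of such a DoE-based screening, assuming the ranking is done by simple correlation between each CAD variable and the objective over the DoE samples (the paper does not prescribe a particular importance metric):

```python
import numpy as np

def screen_variables(samples, objective, keep=7):
    """Rank CAD variables by |Pearson correlation| with the objective
    over a DoE and return the indices of the most influential ones.
    (One simple screening choice; not quoted from the paper.)"""
    samples = np.asarray(samples, dtype=float)    # shape (n_variants, n_vars)
    objective = np.asarray(objective, dtype=float)
    corr = [abs(np.corrcoef(samples[:, j], objective)[0, 1])
            for j in range(samples.shape[1])]
    ranked = np.argsort(corr)[::-1]
    return ranked[:keep]          # keep these free, freeze the rest
```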
Another challenge that design teams encounter when commencing with an SDD campaign is the time and expertise needed to produce a suitable model of variable geometry. In general, two approaches can be distinguished, as elaborated in [3]: partially-parametric and fully-parametric modelling. In short, partially-parametric modelling builds on an existing geometry, from wherever it may originate, and only defines the modifications parametrically, while fully-parametric modelling brings about a geometry from scratch by means of a self-sufficient hierarchical CAD model. A fully-parametric model is usually dedicated to a specific application, say a specific ship type, and supports different levels of modifications, from changing main dimensions down to fine-tuning specific regions, but it needs time and expertise to be developed. A partially-parametric model is often quicker to set up but rarely supports a wide range of modifications or the same level of sophistication as a fully-parametric model does.
A promising path towards an SDD campaign which brings about good optimisation results within reasonable effort, i.e., a campaign of high efficacy, appears to be a combination of partially-parametric modelling and dimensionality reduction. Ideally, the design team would see some improvements of their design within 24 h. In order to shed light on this, a comprehensive investigation was undertaken of improving the energy efficiency of a ship hull by modifying its geometry via radial basis functions (RBF)—an intuitive and flexible approach to partially-parametric modelling—and by applying a range of optimisation strategies, including dimensionality reduction via PCA. A fast catamaran was chosen as an illustrative application case. A thorough comparison is given between the number of simulation runs and the improvements achieved.
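A minimal sketch of RBF-based shape modification, assuming prescribed displacements at a few control points are interpolated smoothly over the hull surface; control points, kernel choice and values below are illustrative only, not the set-up of the paper:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical set-up: a handful of control points placed on/near the hull,
# each carrying a prescribed displacement driven by the free variables.
ctrl_pts = np.array([[1.0, 0.2, -0.10],
                     [2.5, 0.3, -0.20],
                     [4.0, 0.1,  0.00],
                     [3.0, 0.5, -0.30],
                     [2.0, 0.0, -0.15]])         # (m) illustrative only
ctrl_disp = np.array([[0.00, 0.05, 0.00],
                      [0.00, 0.08, 0.00],
                      [0.00, 0.02, 0.00],
                      [0.00, 0.04, 0.00],
                      [0.00, 0.01, 0.00]])       # free variables set these

# Thin-plate-spline kernel is a common choice for smooth morphing;
# the paper does not state which kernel was used.
rbf = RBFInterpolator(ctrl_pts, ctrl_disp, kernel='thin_plate_spline')

surface_pts = np.random.default_rng(1).uniform(0, 5, size=(1000, 3))  # placeholder hull points
morphed = surface_pts + rbf(surface_pts)   # deform the whole surface at once
```

The free variables of the optimisation then simply scale the control-point displacements, which is what makes the approach partially parametric.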
The paper first describes the design task and its deliberately chosen simplifications. It then explains the RBF approach along with the partially-parametric model realized with it. This is followed by a discussion of dimensionality reduction based on PCA. Both the RBF approach and the PCA were implemented in CAESES, which was also utilized as the PIDO environment. Two different CFD codes were used for the simulations, a potential flow code (SHIPFLOW XPAN from FLOWTECH) and a RANS code (Neptuno from the Technical University Berlin), both of which were coupled to CAESES. The potential flow code is a non-linear free-surface code with free sinkage and trim. It needs only a few minutes of CPU time per variant and was employed to cover the bulk of the investigations, i.e., the systematic testing and comparison of different optimisation approaches. The viscous code is a high-fidelity code that solves the RANS equations, taking into account the free surface and the cat’s sinkage and trim for given weight and centre of gravity. It needs several hours per variant and was therefore employed to test and showcase the initial findings for substantially more expensive simulations. An overview of the two CFD codes and of the selected optimisation strategies is given as needed to appreciate the campaigns. This is followed by an elaboration of the results. The paper ends with conclusions and recommendations for future work.
2. Design Task and Simplifications
2.1. Design Task
A small catamaran ferry shall be optimised for energy efficiency at its design speed of 13 kn; its main particulars, including the length between perpendiculars, the displacement mass and the design draft, are given in Table 1. The cat is meant to be solely battery powered and should serve up to 50 passengers. Since the energy density of available batteries is substantially lower than that of fossil fuels, the battery mass becomes critical for a given operational range. Consequently, highest energy efficiency turns out to be even more important for an electric ferry than it would be for a conventionally powered vessel.
An existing hull shape, from here on referred to as the baseline, see
Figure 1, was provided, which featured a slender round-bilge demi-hull with a slightly submerged transom stern. The term baseline is used here to refer to the initial design, i.e., the parent hull from which variations are derived.
Figure 2 gives an impression of the hull in various views, omitting the deck and superstructure. The blue demi-hull illustrates the cat’s port side up to the deck while the green demi-hull shows the underwater portion of the cat’s starboard side cut at design draft. The baseline’s demi-hulls were symmetric with regard to both their centre planes and, naturally, the cat’s mid plane. They featured neither knuckle lines nor spray rails. No topological changes ought to be introduced during the optimisation campaign, but an asymmetry of the demi-hulls would be acceptable.
Each side of the demi-hull was modelled with just one single B-spline surface that featured a polyhedron of ten vertices in longitudinal direction (u-direction) and six vertices in vertical direction (v-direction). It should be noted that the B-spline surfaces were set up within CAESES, which can also be used as a system for free-form surface modelling. However, any other CAD system could have been used to establish the baseline. For the B-spline surfaces the degrees in u- and v-direction were five and three, respectively. Both the transom and the deck were modelled as ruled surfaces, following the B-spline surface’s aft edge and upper edge, respectively.
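For illustration, a tensor-product B-spline surface with the stated control net (10 × 6 vertices) and degrees (5 in u, 3 in v) can be evaluated as sketched below; the control vertices and knot vectors are placeholders, not the actual hull:

```python
import numpy as np
from scipy.interpolate import BSpline

def clamped_knots(n, p):
    """Clamped uniform knot vector for n control points, degree p."""
    interior = np.linspace(0.0, 1.0, n - p + 1)
    return np.concatenate([np.zeros(p), interior, np.ones(p)])

nu, nv, pu, pv = 10, 6, 5, 3                     # net size and degrees from the text
tu, tv = clamped_knots(nu, pu), clamped_knots(nv, pv)

# Placeholder control net: (nu, nv, 3) vertex coordinates.
P = np.random.default_rng(2).uniform(size=(nu, nv, 3))

def surface_point(u, v):
    Nu = BSpline(tu, np.eye(nu), pu)(u)          # all u-basis values at u
    Nv = BSpline(tv, np.eye(nv), pv)(v)          # all v-basis values at v
    return np.einsum('i,j,ijk->k', Nu, Nv, P)    # tensor-product sum

print(surface_point(0.3, 0.7))
```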
The Froude number at the design speed, based on the length between perpendiculars, falls into a rather unfavourable speed range from a general point of view. A resistance curve was computed for the baseline with the potential flow code and checked with the RANS code at one speed, see
Figure 3. It showed, not surprisingly, that the cat’s total resistance steadily increases with speed. From the wave resistance coefficient it can be seen that the cat would indeed sail close to a hump of the wave resistance. The bare hull’s total resistance (both demi-hulls taken into account) as computed from SHIPFLOW was used to normalize all potential flow results, since any absolute values ought to be looked at with some caution. First of all, appendages and openings were not taken into account. Secondly, even though both potential flow and boundary layer theory were utilized, the form factor was simply set to zero. Thirdly, by definition, the interactions of the free surface and the viscous flow are not accounted for in the potential flow analyses.
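For reference, the length-based Froude number follows the standard definition

\[ F_n \;=\; \frac{V}{\sqrt{g\,L_{PP}}} , \]

with ship speed \(V\), gravitational acceleration \(g\) and length between perpendiculars \(L_{PP}\).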
A pure variation of the distance between the demi-hulls revealed that the cat’s resistance would steadily fall with increasing clearance up to its upper bound, corresponding to the maximum acceptable beam overall. It was therefore decided to set the clearance to its maximum value and to exclude it from the subsequent optimisation campaign. Apart from thus reducing the number of free CAD variables—which is always beneficial—the maximum clearance would also yield the highest stability.
2.2. Simplifications
The distance from the keel K to the metacentre M, KM, was evaluated for the baseline and for all shapes investigated, keeping the clearance constant. The resulting values point towards a sufficiently stable vessel, so that no constraints had to be monitored with regard to stability.
For a propulsion system a standard arrangement with shaft, bracket and propeller plus a spade rudder for each of the demi-hulls was chosen. As this was not subject to any changes during the optimisation campaign, it was assumed—for the sake of simplicity—that no unfavourable effects would be encountered for any of the anticipated variants. Consequently, the propulsive efficiency was assumed to stay constant so that the cat with least total resistance would also yield the design of highest energy efficiency. It needs to be pointed out that this, naturally, neglects an important component of calm-water hydrodynamics which can be justified solely on the grounds of focusing on methodology and not on proposing a final design.
Additional hydrodynamic factors would also have to be considered in practice, such as seakeeping and manoeuvring performance as well as a possible resistance increase due to canal and shallow-water effects, depending on the cat’s final operational profile. It could be argued that the anticipated shape modifications would primarily influence calm-water performance and should only lead to minor effects in seakeeping etc., as the main dimensions, including length, draft, beam and displacement, were all kept constant.
An upfront design-of-experiment (DoE) for two speeds (applying a Sobol sequence [4]), using the potential flow code, showed that an improvement of the hull at its design speed of 13 kn would not necessarily also give an improvement at a lower speed of interest, here 9 kn. The normalized resistance values for the different variants at both speeds are shown in
Figure 4. The general tendency is that variants which perform well at 13 kn may come with slightly higher resistance values at 9 kn. For certain designs, however, this is not necessarily the case, indicating that there would be favourable compromises.
In addition, a similar DoE study (also applying a Sobol sequence) was run at 13 kn for the design draft and for a somewhat higher draft, see
Figure 5. It shows that performance is correlated but that variants do not always yield improvements at both drafts. Both DoEs clearly indicate that a thorough optimisation for calm-water hydrodynamics would have to be multi-objective, including (at least) two speeds, possibly two drafts and the influence of the propulsion system. If a design team has to consider all these aspects, further complemented by non-hydrodynamic aspects, it becomes all the more clear that the number of variants to be investigated needs to be as small as possible when running expensive simulations.
4. CFD Codes and Numerical Set-Ups
For the evaluation of the towing resistance of the cat, which serves as an objective function for the optimisation study, two numerical codes are used. The first code is SHIPFLOW XPAN/XBOUND, a non-linear potential flow code including sinkage and trim, complemented with a boundary layer simulation, developed by FLOWTECH. The advantage of using a potential flow code in numerical optimisation studies is the rather short computation time, which allows the evaluation of the objective function in just a few minutes per variant. Especially for studies where the geometry variation is limited to the bow area, potential flow results provide at least a good ranking of the designs.
In recent years, however, it has become rather common to use more sophisticated methods such as a RANS solver to compute the viscous flow and the free surface around the hull, so as to predict the absolute values with higher accuracy at the price of a significantly higher computation time. In the present study, selected computations have been performed with the RANS code Neptuno, which has proven to yield very accurate results in diverse marine applications, see for example [12].
4.1. Potential Flow Computation
In the SHIPFLOW package (SHF), the non-linear potential flow code XPAN is shipped together with the panel generator XMESH and a module for thin turbulent boundary layer analysis, XBOUND, which solves the momentum equation along the streamlines traced from the potential flow solution. All these programs are tightly integrated into CAESES and their set-up is rather straightforward. The geometry is exported as offsets at each station. The free surface mesh extends upstream of the bow, downstream of the stern and to the side.
For the present application case, some fine-tuning was required in order to obtain converging solutions for all variants. For example, the panel mesh size between the two demi-hulls was adjusted, since some wave breaking between the demi-hulls would occur, leading to divergence unless numerically suppressed.
Figure 12 shows the final panel mesh on the free surface and on the hull. During the course of the computation, sinkage and trim are taken into account, and at the end the results are imported back into CAESES for visualization.
It should be noted that the cat’s hydrodynamics approaches the limits of a potential flow analysis, with possible wave breaking and modifications of the transom. However, XPAN provides a very fast way to evaluate the objective function and since the main goal of the present work is to study the capabilities of the RBF and PCA approach, these restrictions were deliberately accepted.
4.2. Viscous Flow Computation
The RANS code used as a high-fidelity solver is Neptuno (NEP), an in-house CFD code developed by Prof. Cura [13,14]. It solves the RANS equations on a multi-block structured grid using the finite volume method. The pressure–velocity coupling is done using the SIMPLE algorithm of Patankar [15]. Turbulence is modelled using the standard two-equation k-ω model of Wilcox [16]. The free surface is captured using a two-phase level-set method [17,18]. The code has been validated in several workshops, see [19,20,21].
The numerical grid required for the computations is built using the commercial meshing software GridPro. The main advantage of GridPro is the ability to separate topology and geometry. This allows setting up the initial topology (block structure) of the grid prior to the optimisation study and simply re-running the meshing process for each variant. The resulting grids are all of high quality and, due to the identical block structure, the dependency of the objective function on the numerical grid is reduced. By using a symmetry boundary condition at the centre plane between the two demi-hulls, only one side of the domain needed to be meshed. The domain boundaries are located upstream, downstream, beside and below the ship. The selected boundary conditions were slip-wall (symmetry) at the centre plane and the starboard boundary, while at the inlet, top and bottom of the domain the velocity is prescribed. At the outlet, the hydrostatic pressure is given.
Since the code has been widely and successfully validated at model scale, it was decided to carry out all computations at a fictive model scale, with a correspondingly reduced model length and Reynolds number. For the optimisation and comparison with XPAN, the values are extrapolated to full scale using the standard ITTC procedure, see [22].
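As a hedged sketch of that extrapolation, using the ITTC-1957 correlation line (the allowances of the full ITTC-1978 procedure are omitted here, and the form factor value is an assumption, not quoted from the paper):

```python
import numpy as np

def ittc57_cf(re):
    """ITTC-1957 model-ship correlation line."""
    return 0.075 / (np.log10(re) - 2.0) ** 2

def extrapolate_ct(ct_model, re_model, re_ship, k=0.0):
    """Scale a total-resistance coefficient from model to full scale.
    Simplified ITTC-1978 style step without roughness/air allowances;
    the form factor k is an assumption."""
    cf_m, cf_s = ittc57_cf(re_model), ittc57_cf(re_ship)
    cr = ct_model - (1.0 + k) * cf_m      # residuary part, assumed Froude-scaled
    return (1.0 + k) * cf_s + cr

# Illustrative numbers only:
print(extrapolate_ct(ct_model=4.8e-3, re_model=7.0e6, re_ship=1.2e8))
```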
Only for some geometric variants, e.g., the ones with an asymmetric bow, is a slight modification of the topology required.
Figure 13 shows a slice of the resulting mesh in the bow region for two rather different geometries, cut at the same longitudinal position. As can be seen, GridPro allows the grid in the vicinity of the hull to change while leaving the far-field grid untouched. In addition, the topology of the grid is practically identical, so it can be assumed that changes in the grid have a rather small influence when evaluating the resistance.
For the cat simulation with free surface, a constant number of 8000 time steps is computed. The non-dimensional time step is chosen as 0.008 and one SIMPLE iteration per time step is performed. A numerical beach is used for the damping of waves in the far field, for details see [23]. Sinkage and trim are taken into account by a stepwise movement of the free surface relative to the ship, performed at time steps 1500, 2500, 3500 and 4500. As an example,
Figure 14 shows the time trace of the earth-fixed longitudinal component of the hydrodynamic force acting on the hull and the iterative histories of sinkage and trim angle, along with the residuals. As can be seen, even after the force has stabilized, the residuals keep dropping and thus the solution converges. Only very minor changes occur after 8000 steps, so in order to save time, only 8000 iterations are performed during the optimisation study.
In order to assess the dependence of the computed hydrodynamic forces on the grid resolution, a systematic variation has been performed with cell counts ranging from just 460,000 to 3.7 million cells.
Figure 15 shows the force components resulting from the integration of pressure and of shear stresses, as well as the total force. In addition,
Table 5 shows the results of the convergence study, listing the convergence ratio and the uncertainty U of the solution, as proposed by the ITTC [24]. As can be seen, monotonic convergence is achieved for each individual component and the quality of the prediction is quite satisfactory. It should be noted that the grid convergence study was carried out without considering sinkage and trim, since different trim angles would have resulted for each grid, making it impossible to run an uncertainty study for the resistance.
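A minimal sketch of the verification metrics named above, following the generalized Richardson extrapolation underlying the ITTC guideline [24]; the refinement ratio and safety factor below are assumptions, not values from the paper:

```python
import math

def grid_uncertainty(s1, s2, s3, r=math.sqrt(2.0), fs=1.25):
    """s1, s2, s3: solutions on fine, medium, coarse grids;
    r: constant refinement ratio (assumed); fs: safety factor (assumed).
    Returns (convergence ratio R, estimated uncertainty U)."""
    e21, e32 = s2 - s1, s3 - s2
    R = e21 / e32                           # 0 < R < 1: monotonic convergence
    p = math.log(e32 / e21) / math.log(r)   # observed order of accuracy
    delta = e21 / (r**p - 1.0)              # Richardson error estimate for s1
    return R, fs * abs(delta)

R, U = grid_uncertainty(1.000, 1.004, 1.013)   # illustrative normalized forces
print(f"R = {R:.2f}, U = {U:.4f}")
```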
Since the change in total resistance between the fine and the coarse grid is only marginal, while the computation time differs by one order of magnitude, namely 8 h (480 min) for the fine grid in comparison to 48 min for the coarse grid on a state-of-the-art workstation using eight cores, all computations for the optimisation study are carried out on the coarse grid.
Figure 16 shows a comparison of the free surface elevations for the coarse and fine grid. One should keep in mind that parallel computations could be carried out on a cluster for the finer grid. However, that would accelerate the computations for all grids, naturally.
4.3. Exploration and Optimisation Algorithms
For the exploration of the design space, a Sobol algorithm is used, which is based on a quasi-random yet deterministic sequence [4]. The values of the free variables are set such that the sampling points iteratively fill up the design space as evenly as possible, each additional sampling point reducing the region of the design space that has been least populated so far. There are other exploration algorithms available in CAESES, for example Latin Hypercube Sampling (LHS), which is made available through the DAKOTA optimisation package. However, an advantage of using a Sobol sequence is that, contrary to LHS, the number of samples in the design space can be readily extended, whereas with LHS a completely new design set would have to be generated. Furthermore, a Sobol sequence can be easily repeated with the same sampling points, provided that the bounds of the free variables have not been modified.
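For illustration, both properties named above can be seen with SciPy’s Sobol generator (CAESES ships its own implementation; the names below are SciPy’s, and the bounds are placeholders):

```python
from scipy.stats import qmc

sampler = qmc.Sobol(d=7, scramble=False)   # 7 free variables
first_batch = sampler.random(32)           # initial DoE in [0, 1)^7

# Unlike LHS, the sequence extends without discarding earlier samples:
more = sampler.random(32)                  # points 33..64 of the SAME sequence

# Map the unit-cube samples to the variable bounds:
lower, upper = [-1.0] * 7, [1.0] * 7       # placeholder bounds
designs = qmc.scale(first_batch, lower, upper)
```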
The exploitations, i.e., various optimisation runs, have been undertaken on the basis of three purely deterministic strategies, namely, a T-Search, a Nelder–Mead simplex and a one-stop steepest descent. This was done with the aim of fast turn-around times, i.e., improvements within 24 h. All three strategies perform local searches and are typically employed when quickly trying to understand the potential for improvement of a baseline without introducing major modifications and/or when fine-tuning a design.
The T-Search, i.e., the tangent search by Hillary [25], first evaluates the baseline and then starts with a small positive perturbation of the first design variable. If this readily gives an improvement, the second design variable undergoes a small perturbation. Otherwise, a small negative perturbation is introduced to the first design variable. Setting out from the best design so far, the second design variable is slightly changed, first in the positive direction and then, if no improvement was found, in the negative direction. This is repeated until all free variables have been modified at least once, establishing a local search pattern. Using both the best design found during such a local search and its current starting point (e.g., the baseline at the onset), a global search direction can be determined and a global step is then undertaken, hoping to bring home a more substantial improvement. If that is successful, further global steps can follow until a suitable starting point for a new local search pattern has been found. Without going into the strategy’s details any further, it can be appreciated that the algorithm scales linearly with the number of free variables. In the best case, all positive increments readily give improvements during the local search. In the worst case, all free variables have to be perturbed twice, once in the positive and once in the negative direction. Therefore, in practical situations, the objective function needs to be computed once for the baseline and then anything between the number of free variables (best case) and twice that number (worst case) before a global step can be taken.
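A minimal sketch of one local pattern plus one tentative global step as just described; this paraphrases the description above and is not Hillary’s full T-Search:

```python
import numpy as np

def t_search_step(f, x0, delta=0.05):
    """One local pattern + one global step (minimisation assumed)."""
    x_best, f_best = np.asarray(x0, float), f(np.asarray(x0, float))
    for i in range(len(x_best)):
        for sign in (+1.0, -1.0):             # try +delta first, then -delta
            trial = x_best.copy()
            trial[i] += sign * delta
            f_trial = f(trial)
            if f_trial < f_best:
                x_best, f_best = trial, f_trial
                break                          # improvement found, next variable
    direction = x_best - np.asarray(x0, float)  # global search direction
    x_global = x_best + direction               # tentative global step
    f_global = f(x_global)
    return (x_global, f_global) if f_global < f_best else (x_best, f_best)
```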
The Nelder–Mead simplex [26] is also a local search method which flips through the design space with a simplex, i.e., the simplest possible polytope in any given space. In one-dimensional space a simplex is a line segment, while in two-dimensional space it is a triangle. In three-dimensional space it is a tetrahedron and, by abstraction, a simplex represents a cell with n+1 corners in an n-dimensional space. The search, in short, always takes the corner with the least favourable objective value and flips it through the centroid of the n remaining corners that offered better objective values. This step is called a reflection. Depending on the changes found for the objective, an expansion may take place, trying to benefit further. In general, these flips are repeated (with some additional tweaks such as contracting and shrinking the simplex), slowly advancing the simplex through the design space. As can be readily appreciated, the first simplex requires n+1 evaluations of the objective: one evaluation for the baseline and n evaluations for a small perturbation of each free variable (or any independent set of n combinations). Each additional evaluation then brings about an improvement (or information on how to contract or shrink the current simplex).
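In practice such a simplex search rarely needs hand-coding; a sketch with SciPy’s implementation, with a smooth placeholder standing in for a CFD evaluation:

```python
import numpy as np
from scipy.optimize import minimize

def objective(x):
    """Placeholder for a CFD evaluation of one variant."""
    return np.sum((x - 0.3) ** 2)      # illustrative smooth bowl

x0 = np.zeros(7)                       # e.g., 7 principal parameters
res = minimize(objective, x0, method='Nelder-Mead',
               options={'maxfev': 50,  # hard budget on simulation runs
                        'xatol': 1e-3, 'fatol': 1e-4})
print(res.x, res.fun, res.nfev)
```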
The one-stop steepest descent takes a similar approach to the Nelder–Mead simplex for the first n+1 evaluations. Basically, a gradient of the objective function is numerically determined by adding a small delta to each free variable while keeping all other free variables constant, establishing an approximation through forward differencing. As soon as the gradient is known, a line search is undertaken in the direction of anticipated improvement. In case the objective of the new variant is better than the baseline’s, a second step is taken in the gradient direction, e.g., by doubling the step size. Otherwise, the first step has apparently been too large and a bisection (or any other subdivision) of the distance between the baseline and the first new variant takes place. This is repeated a few times, say three to five times, until the local search is stopped, establishing a “one-stop” strategy. Alternatively, a conjugate gradient can be utilized to advance further through the design space. In principle, the one-stop steepest descent allows setting the maximum number of evaluations of the objective function upfront, namely, one evaluation for the baseline, n evaluations for the gradient approximation plus three to five evaluations during the one-dimensional line search. This means that as soon as the time needed to simulate a single variant, say the baseline, and the time available before improvements have to be realized are known, it is possible to select the maximum number of free variables—either CAD variables or principal parameters—to take into consideration. For instance, if a single variant needs one hour to simulate for hydrodynamics, a one-stop steepest descent with seven principal parameters would take eight hours before the gradient is known (one hour for the baseline and one hour for each of the seven perturbations). With another four variants evaluated during the line search, that sums up to 12 h. That would then leave another 12 h to do both the pre- and post-processing. With some eight to ten hours of pre-processing, including parametric modelling (three to four hours), undertaking the PCA (five to six hours) and coming up with a good CFD set-up (several hours while the PCA is running), that would leave about two to four hours to complete the job within a single day of turn-around time.
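A minimal sketch of the one-stop steepest descent as outlined above, assuming minimisation and forward differences; this follows the paragraph’s description, not the authors’ exact implementation:

```python
import numpy as np

def one_stop_steepest_descent(f, x0, h=1e-2, line_steps=4):
    """Budget: 1 baseline + n gradient evaluations + line_steps line-search
    evaluations, exactly as counted in the text above."""
    x0 = np.asarray(x0, float)
    f0 = f(x0)                                         # baseline
    grad = np.array([(f(x0 + h * e) - f0) / h          # forward differences
                     for e in np.eye(len(x0))])
    d = -grad / (np.linalg.norm(grad) + 1e-12)         # descent direction
    step, x_best, f_best = 1.0, x0, f0
    for _ in range(line_steps):
        trial = x0 + step * d
        f_trial = f(trial)
        if f_trial < f_best:
            x_best, f_best = trial, f_trial
            step *= 2.0                                # extend along the gradient
        else:
            step *= 0.5                                # bisect back towards x0
    return x_best, f_best
```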
6. Conclusions and Future Work
With an approximate decrease of 3.6% in resistance at design draft and design speed when compared to the total resistance of the baseline, the improvements found for the catamaran are relatively small. The baseline’s performance can therefore be considered rather good already, at least for the proposed and acceptable geometric variations. This is representative of practical design work, where baselines are often quite mature.
The optimisation performed shows that dimensionality reduction by means of a PCA offers high potential for removing complexity from a model without giving away too much optimisation potential. In the present study, the number of free variables was reduced from 14 to 7, which led to a substantial reduction in computational time without sacrificing much of the improvements.
It turns out that the limits of the potential flow code are reached during the course of the optimisation, requiring higher-fidelity tools to be used. Due to the reduction in the degrees-of-freedom by performing a PCA, it is still feasible to run the whole optimisation using a viscous flow solver for practical application cases. This is an even bigger advantage for projects with more free variables, where gradient-based algorithms take a very long time to find the direction in which to improve the objective.
More research is needed, naturally. The one-stop steepest descent would be worthwhile to test within other optimisation campaigns. In addition, the PCA and its influence on both speed-up and optimisation gains require more application cases to gather evidence. Furthermore, fine-tuning should be undertaken with regard to the sample size for the PCA and the number of surface points collected. While a large set of samples and many points certainly increase the accuracy of the dimensionality reduction, complex parametric models require from a few seconds up to a minute to update per variant. The fewer samples that are acceptable, the less time is spent on the PCA.