The simplest control case for a vessel is the seakeeping scenario, that is, the cruising mode in which heading, height and angle of attack are kept constant. This is the most common way of operating such a vehicle since, assuming no disturbances affect its attitude, it can travel from one point to another along a straight line, i.e., along the shortest path.
A promising control strategy based on the complete non-linear model, namely MPC (Model Predictive Control), is considered first. MPC is a candidate for control tasks more demanding than seakeeping, given its ability to handle the full non-linear dynamics, which makes it particularly suitable for WIG craft control. For instance, non-linear MPC is theoretically capable of obstacle avoidance by flying over the obstacle. Proving so would allow us to use it for broader cases than steady-state keeping, opening the possibility of planning and controlling much more complex manoeuvres. Such tasks are possible using the non-linear model, but would hardly be feasible for simpler linear controllers, whose capabilities are by design limited by the range of applicability of the underlying linear approximation of the actual non-linear model.
The main goal of this study is to evaluate the performance of both controllers for steady-state cruising and fly-over obstacle avoidance. The input expressions introduced in [18], broadly accepted in the modelling of WIG craft, are quite complex, since they introduce more coupling between the inputs and put extra strain on the optimization process, so they are used solely with the non-linear MPC. In the MPC case the motor provides thrust along a single axis, and the control surfaces, i.e., elevators, ailerons and rudders, act as needed and as possible for the demanded configuration, within a bounded range of mobility.
3.1. WIG Craft Equations of Motion
The 6-DOF (degrees of freedom) modelling of WIG craft has a strong empirical component, as the behaviours produced in this operating regime depend on the geometry of the vehicle. The vector form of the classic dynamic equations for a WIG craft, referred to the coordinate systems shown in Figure 2, can be presented in the following form [18].
The first two equations represent, respectively, the force and angular momentum equations in the body frame, and the other two the transformations to the inertial frame, with the inertia tensor appearing in the angular momentum equation. The velocity with respect to the inertial frame and the position vector are expressed in the inertial frame, and the altitude h is contained in the position vector. Two transformation matrices are used to rotate quantities from the body-referred frame to the inertial one. The body-referred linear and angular velocities appear through their components, and the external forces and torques, which also contain the aerodynamic terms, act on the right-hand sides. The Euler angle vector contains, in order, the roll, pitch and yaw angles, and m is the mass of the WIG craft.
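For reference, a standard way of writing these equations (a generic sketch consistent with the description above, not necessarily the exact notation of [18]) is:

$$
\begin{aligned}
m\left(\dot{\mathbf{v}}_b + \boldsymbol{\omega}_b \times \mathbf{v}_b\right) &= \mathbf{F}, \\
\mathbf{I}\,\dot{\boldsymbol{\omega}}_b + \boldsymbol{\omega}_b \times \left(\mathbf{I}\,\boldsymbol{\omega}_b\right) &= \mathbf{M}, \\
\dot{\boldsymbol{\Phi}} &= \mathbf{T}(\boldsymbol{\Phi})\,\boldsymbol{\omega}_b, \\
\dot{\mathbf{x}} &= \mathbf{R}(\boldsymbol{\Phi})\,\mathbf{v}_b,
\end{aligned}
$$

where $\mathbf{v}_b$ and $\boldsymbol{\omega}_b$ are the body-referred linear and angular velocities, $\boldsymbol{\Phi}$ the Euler angles and $\mathbf{x}$ the inertial position.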
The expanded form of the forces equation is the following
where the density of air (the fluid considered here), the airspeed, the reference surface A (usually the wing surface) and the body-referred aerodynamic coefficients appear; the coefficients are in this case strongly dependent on the altitude h above the water surface, and the last term represents the external forces, in this case the thrust.
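A commonly used form for the aerodynamic contribution, consistent with the description above (a generic sketch, not necessarily the exact expression adopted in the paper), is:

$$\mathbf{F}_{\mathrm{aero}} = \tfrac{1}{2}\,\rho\,V^{2} A\,\mathbf{C}(\alpha,\beta,h), \qquad \mathbf{F} = \mathbf{F}_{\mathrm{aero}} + \mathbf{F}_{\mathrm{thrust}},$$

where the coefficient vector $\mathbf{C}$ collects the body-referred aerodynamic coefficients.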
WIG crafts are usually symmetric with respect to a vertical plane so the inertia tensor is of the form
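Under this symmetry with respect to the x–z plane, the cross products of inertia involving the y axis vanish, and the tensor takes the standard structure (in the usual aeronautical sign convention):

$$\mathbf{I} = \begin{bmatrix} I_{xx} & 0 & -I_{xz} \\ 0 & I_{yy} & 0 \\ -I_{xz} & 0 & I_{zz} \end{bmatrix}.$$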
The expanded angular momentum equation in the body frame is
The last two equations of (1) are the transformations between the quantities in the body frame and the vehicle-carried frame. The following represents the relationship between the derivatives of the Euler angles, i.e., roll, pitch and yaw, and the angular velocities in the body frame, p, q and r.
This allows us to find the relative orientation of body frame with respect to the vehicle-carried frame, obtaining the angular configuration of the vehicle.
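For reference, assuming the usual roll–pitch–yaw ($\phi$, $\theta$, $\psi$) convention, this kinematic relationship takes the standard form:

$$\begin{bmatrix} \dot{\phi} \\ \dot{\theta} \\ \dot{\psi} \end{bmatrix} =
\begin{bmatrix} 1 & \sin\phi\tan\theta & \cos\phi\tan\theta \\ 0 & \cos\phi & -\sin\phi \\ 0 & \sin\phi/\cos\theta & \cos\phi/\cos\theta \end{bmatrix}
\begin{bmatrix} p \\ q \\ r \end{bmatrix}.$$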
The rotation matrix in the last equation of (
1) is
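Assuming the standard ZYX (yaw–pitch–roll) Euler sequence, this body-to-inertial rotation matrix can be written as:

$$\mathbf{R} = \begin{bmatrix}
c_\theta c_\psi & s_\phi s_\theta c_\psi - c_\phi s_\psi & c_\phi s_\theta c_\psi + s_\phi s_\psi \\
c_\theta s_\psi & s_\phi s_\theta s_\psi + c_\phi c_\psi & c_\phi s_\theta s_\psi - s_\phi c_\psi \\
-s_\theta & s_\phi c_\theta & c_\phi c_\theta
\end{bmatrix},$$

where $c_{(\cdot)}$ and $s_{(\cdot)}$ denote cosine and sine of the corresponding Euler angle.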
Once the model has been completed, it is worth noting that, no matter how many of the 6 DOFs of the vessel are to be controlled, the underlying nonlinearities cause a loop closed on one of them to influence all the others. For that reason, the complete model is considered throughout our experiments.
For our experiments we have considered a design very similar to the WigetWorks AirFish 8, so for all the following simulations and studies the aerodynamic behaviour has been characterized using its morphology. In
Figure 3, a physical model ready for wind tunnel and motion capture experiments can be seen.
3.2. Non-Linear Model Predictive Control
The first control strategy that has been analyzed is a nonlinear MPC (Model Predictive Control), which allows one to use the previously obtained nonlinear model; MPC is an optimal control strategy that can be applied both to linear and nonlinear dynamics [28]. The environment used is the do-mpc Python library [29], a toolbox for non-linear robust MPC based on CasADi [30] and IPOPT.
Model predictive control considers an objective function, which is used to determine the optimal inputs, and a reference model, which is used to predict the system behaviour. In this case the simulated model coincides with the reference model. The tuning, and therefore the behaviour, of the controller is given by the objective function, which has the general form:
where N is the prediction horizon and the remaining arguments are the states, the inputs and the time-varying parameters. The terms of the cost function J represent, respectively: the stage cost, i.e., the instantaneous expense evaluated at each time instant according to a custom function, which also contains the final state values; and the input cost, which focuses solely on the inputs, quantifying the relative cost of each input through the weight matrix R.
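For reference, an objective of this kind can be written in the form typically used by do-mpc (a generic sketch of the structure described above, not necessarily the exact expression adopted in the paper):

$$J = \sum_{k=0}^{N-1}\Big[\ell\!\left(x_k, u_k, p_k\right) + \Delta u_k^{\top} R\,\Delta u_k\Big] + m\!\left(x_N\right),$$

where $\ell$ is the stage cost, $m$ the terminal cost on the final states and $\Delta u_k$ the input increments weighted by $R$.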
This objective function is utilized by the optimizer to compute the optimal input. This is done by using the model fed to the controller to predict the evolution of the system, stage by stage, from the present instant until the instant N, which is also a parameter of the controller.
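As an illustration of this workflow, a minimal do-mpc setup could look like the sketch below; the single-axis dynamics, variable names and numerical values are placeholders, not the actual WIG craft model.

```python
import do_mpc

# Placeholder vertical-axis model: altitude h driven by a thrust-like input u
model = do_mpc.model.Model('continuous')
h = model.set_variable(var_type='_x', var_name='h')   # altitude
w = model.set_variable(var_type='_x', var_name='w')   # vertical speed
u = model.set_variable(var_type='_u', var_name='u')   # vertical force input
model.set_rhs('h', w)
model.set_rhs('w', -9.81 + u)                          # toy dynamics
model.setup()

mpc = do_mpc.controller.MPC(model)
mpc.set_param(n_horizon=20, t_step=0.1)                # N stages of 0.1 s each

h_ref = 5.0                                            # placeholder target altitude
mterm = (h - h_ref)**2                                 # terminal (final-state) cost
lterm = (h - h_ref)**2 + 0.1 * w**2                    # stage cost
mpc.set_objective(mterm=mterm, lterm=lterm)
mpc.set_rterm(u=1.0)                                   # input penalty
mpc.bounds['lower', '_u', 'u'] = 0.0                   # example input bound
mpc.setup()
```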
A relative equilibrium position is sought for the reference values of altitude and speed. The pitch value is considered as a bounded state rather than a requirement: even though the pitch belongs to the equilibrium states and affects the lift force, it is treated as a free parameter left to be chosen by the optimizer. An equilibrium position at the given altitude and speed is found through the control loop itself, giving the following results:
In the following, we present the results of the simulations carried out on controlling two different maneuvers. The first control scenario is obstacle avoidance by fly over, starting from the equilibrium position. Reaching a final steady state on altitude is the control objective.
3.2.1. Altitude Variation Using MPC
Starting from the initial cruising altitude, the system is given a new reference altitude, which represents a relatively big step (see Figure 4). The parameters selected to express the performance of the controller are the rise time, the settling time and the overshoot.
Given the different orders of magnitude of the two types of inputs, the weights are assigned to even them out. They are chosen empirically, the general principle being to compensate for the different orders of magnitude in order to obtain a smooth behaviour and a steady state for all the states. In this way, a weight of 100 is given to the control surfaces and a weight of 1 to the motor. The cost function is
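In do-mpc terms, one way to express this relative weighting is through the input penalty term; the input names below are hypothetical placeholders for the surface deflections and the motor thrust.

```python
# 100 on the control surfaces, 1 on the motor, mirroring the weighting described above
mpc.set_rterm(elevator=100, aileron=100, rudder=100, thrust=1)
```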
In
Table 1 the characteristic values of the WIG craft response as read from the graphics in
Figure 4 can be seen: the rise time is short, at the cost of significant oscillations in pitch; nevertheless, as this experiment has been conducted to evaluate the system behaviour in altitude, the results are satisfactory, as it is demonstrated that the craft can change its altitude and fly over an obstacle in an agile way.
The yaw angle shows what seems to be a steady state error (see
Figure 4), which however does not generate a big deviation in the inertial speed. This can be explained by the small weight associated with this variable in the objective function, meaning that this error is corrected only slowly by the controller.
In order to correct the fast oscillations in pitch, which could be a major issue as they affect the comfort and safety of the passengers on board, a different tuning has been tested.
The following modifications have been made to the objective function and other parameters. First of all, a slightly different approach is used, consisting in defining a function that relates the equilibrium altitude to a pitch angle for a given speed. In this way the controller can be given a final value for the pitch based on the final altitude, which makes the pitch angle oscillate less and leads to a smoother dynamic.
A function relating these variables can be obtained through the static equation of vertical motion in the inertial frame. The velocity along the first axis of the body-referred frame, which points toward the nose, also influences the lift force; in this case it is considered fixed at the steady-state cruising speed. By fixing the speed, the lift force now depends solely on the pitch angle and the altitude h.
By imposing the condition of vertical equilibrium along the vertical axis, i.e., zero net vertical force, a function of the pitch angle and the altitude h is obtained.
Using these data, a target value can be computed for the final pitch given the final altitude; in this case the final altitude is set as before and the corresponding target pitch is obtained.
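The computation can be sketched as follows; the lift model, masses and coefficients below are illustrative placeholders and not the AirFish 8 data used in the paper.

```python
import numpy as np
from scipy.optimize import brentq

rho, A, m, g, V = 1.225, 30.0, 2500.0, 9.81, 30.0   # placeholder constants

def C_L(alpha, h):
    # Toy ground-effect lift coefficient: grows with the angle, amplified near the surface
    return 0.4 + 4.5 * alpha + 0.3 * np.exp(-h / 2.0)

def vertical_residual(alpha, h):
    # L - m g = 0 expresses static vertical equilibrium at the fixed cruising speed V
    return 0.5 * rho * V**2 * A * C_L(alpha, h) - m * g

def trim_pitch(h):
    # Pitch/angle of attack that balances the weight at altitude h
    return brentq(vertical_residual, -0.2, 0.4, args=(h,))

for h in (1.0, 2.0, 4.0):
    print(f"h = {h:.1f} m -> trim angle ~ {np.degrees(trim_pitch(h)):.2f} deg")
```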
Including this information in the objective function it becomes
In addition to this, the bounds on the state values have been modified with respect to the previous case: the states are now limited to a different interval from the one used in the last experiment.
The dynamics show the foreseen improvement, at the cost of slower performance: in Figure 5 it can be observed that the pitch shows a much smaller oscillation, solving the issue formerly detected, while keeping the rise time still short enough to allow obstacle avoidance in an agile way if needed. However, the effect on the altitude overshoot is much smaller, as it decreases only slightly, although the altitude response becomes smoother as well.
Nevertheless, it must be highlighted that the difference in response between the two cases is not negligible, as the rise time more than doubles compared to the previous case, as can be read in Table 2. This fact would affect the choice of one controller version over the other, depending, for instance, on how far in advance an obstacle can be detected.
Taking into account that this simulation has been run in order to evaluate the performance of this controller in a hopping maneuver, where the craft has to fly over the obstacle, it seems natural to choose the controller that shortens the altitude change time as much as possible, i.e., the one leaving the pitch unbounded. With this idea in mind, new simulations were run assuming different final altitudes, as not all obstacles have the same height and therefore the same maneuver is not always necessary. In
Figure 6 the evolution of pitch and altitude for different altitude changes can be seen.
It is important to highlight that, for evaluating the performance of this controller, the undershoot is even more important than the overshoot, as it reflects how close to the water surface the craft gets during the obstacle avoidance. Needless to say, touching the water can lead to a sudden and critical change in the craft dynamics, jeopardizing its integrity. Therefore, both overshoot and undershoot have been measured and reported in
Table 3 and
Table 4.
From both tables it can be seen that the overshoot and undershoot take acceptable values, as they reflect a smooth behaviour, mostly during the rise, and a safe descent to the cruise altitude, since the undershoot takes the craft to a height that remains far enough from the water surface.
3.2.2. Altitude Change with Constraints on the Control Surfaces Mobility
The previous simulations have been carried out considering a rather permissive operating range for the input surfaces. This has been done in order to be able to compare the MPC controller, which is designed to include bounds on states and inputs, with the other controllers, which can hardly handle saturation of the system variables. For the sake of completeness, the same simulations as those shown in Figure 6, whose evaluation parameters are reported in Table 3 and Table 4, have been repeated enforcing actuator saturation, so that the action of the control surfaces always remains within the allowed range. Results are shown in
Figure 7.
The figure clearly shows that it is harder for the controller to set the output at the right value as the target altitude becomes higher. This behaviour is even clearer in the descent phase, where controlling the WIG craft is more difficult.
Table 5 and
Table 6 show a significant change in the evaluation parameters compared to the previous case, represented by
Table 3 and
Table 4.
3.2.3. Recovering from a Disturbed Attitude
This control scenario using the MPC addresses a borderline feasible case: a significant displacement from the cruising conditions that needs to be corrected as soon as possible. The MPC has shown great performance in the previous cases, so good results are expected here too.
The initial attitude is chosen to be
The objective function is set back to the expression defined in (
10).
The evaluation parameters now include the undershoot, already considered in the previous section, which measures how low, percentage-wise, the altitude drops before reaching the prescribed steady state. This effect can be clearly seen in
Figure 8, in the bottom right graphic, where the evolution of altitude is depicted. The precise values of undershoot and the other response characteristics can be seen in
Table 7.
As a conclusion, it can be stated that in this case it is important that the altitude does not reach values that are too low, which would be a safety concern as it would increase the risk of an unwanted contact between the craft and the water; the altitude appears to be quite well bounded in terms of oscillations. In the top part of the figure it can also be seen that the Euler angles are well bounded and have a smooth transition to the steady-state value.
Moreover, it is worth noting that a combination of undershoot and high roll values could lead the wing tips to touch the water surface. However, given the maximum roll values we have observed, the minimum height and the wingspan of our model, this risky situation is always far from arising. The same can be stated for extreme pitch values: neither the nose nor the tail of the craft would touch the water with the values reached during our experiments.
3.3. Feedback Linearization
Feedback linearization is a control technique designed to transform, totally or partially, a nonlinear system into a linear equivalent by means of a state feedback which cancels the available nonlinearities of the dynamic model [
31].
First of all, let us recall the general form of the dynamical equations
The principle of input-output linearization is to choose the inputs in order to obtain a linear relationship between them and the output. This represents a less complicated and more attainable kind of linearization compared to input-state linearization, which would lead to a fully linear system.
Unfortunately, given the existing relationship between the accelerations in the body frame and the velocities in the inertial frame, it is not possible to obtain an input-state linearization, because of the transformations in the third and fourth equations of (1). Therefore, the inputs are chosen in order to cancel the nonlinearities in the body-referred force and angular momentum equations.
The steps required to achieve this objective are the following.
The body-referred equations can be easily linearized, as shown below for the first angular velocity equation from (14).
Given the expanded form (4) of the first component of the angular velocity in the body frame,
where the input torque from the motor appears. By choosing this torque as
where the last term represents an independent input, equation (15) becomes
Applying the same principle to all the other body-referred acceleration equations, the result is
The resulting equations are perfectly linear and decoupled, with each input acting on one state only, without any coupling effects present.
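Schematically, for a single body-referred rate the principle amounts to the following (a generic sketch of the cancellation, not the exact expressions of the model):

$$\dot{p} = f_p(\mathbf{x}) + b_p\,\tau, \qquad \tau = \frac{1}{b_p}\left(-f_p(\mathbf{x}) + \nu\right) \;\Rightarrow\; \dot{p} = \nu,$$

where $\nu$ is the new independent input.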
However, coupling is still present in the inertial velocities, in particular in the third component, which is the part of the model that remains to be taken into account after the linearization.
Compared to a regular series expansion, the big advantage of feedback linearization is that it is valid over the full range of values of the involved variables, and not only in the neighbourhood of a single chosen point. Of course, the quality of the process depends on the performance of the sensors used in the real case to measure the states to be fed back. Such considerations are not made here and the simulations do not include this effect, given that no detailed information about this process is available beyond a general perspective on the problems it can cause.
3.3.1. LQR Controller on Top of Mixed Linearized Model
Given a linear system of the form
The LQR strategy consists of considering a quadratic cost function
The feedback control law that minimizes this cost is
where the feedback is given by
where the matrix P is found by solving the continuous-time algebraic Riccati equation
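A minimal numerical sketch of this computation, using SciPy's algebraic Riccati solver on placeholder matrices (not the WIG craft model):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Infinite-horizon continuous-time LQR gain for u = -K x."""
    P = solve_continuous_are(A, B, Q, R)   # solves A'P + PA - P B R^-1 B' P + Q = 0
    K = np.linalg.solve(R, B.T @ P)        # K = R^-1 B' P
    return K, P

# Toy double integrator used here only as an example system
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K, P = lqr_gain(A, B, Q=np.diag([10.0, 1.0]), R=np.array([[1.0]]))
```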
As shown above, it is not possible to linearize the model between the inputs and the inertial velocities, and thus the altitude, using feedback, since the non-linear relations do not allow it. Hence, another layer of linearization is needed for a linear controller to be applied.
One of the ways the feedback linearization can be exploited is to apply it to the body-referred Equation (
1), as shown above, and to linearize the remaining part of the model in order to build an LQR controller.
Given the chosen states amongst the available ones
where the last state is the altitude, the matrix
A assumes the form
where the matrices appearing in it are, respectively, the transformation from the angular velocities in the body frame to the Euler angle derivatives and the third row of the rotation matrix for the linear velocities.
The chosen equilibrium position in terms of Euler angles, in radians, is the same as (
9)
To evaluate its performance, a disturbed initial attitude is randomly chosen. The initial position is the same as before for comparison purposes (Equation (
13)).
It is important to highlight that the equilibrium position is valid as long as the Euler angles are close to the linearization point, but the other states do not interfere with the linearization, since the feedback cancels those effects.
This kind of controller acts quite differently from the MPC, because the model is simplified and the full dynamics are not visible to it. While the MPC mainly relies on a change in pitch in order to generate enough lift force for the altitude to change, in conjunction with a proper arrangement of the surfaces so that the resulting force is mainly vertical, the model used for the LQR does not include this part of the dynamics. This suggests that such a controller is suitable for cruising, since the required control actions are limited compared to manoeuvring, but might not be efficient for other, more sophisticated purposes. Additionally, given that the LQR does not allow for constraints as the MPC does, its range of effectiveness has to be evaluated carefully.
Figure 9 shows how the error compared to the cruising configuration is small, but it is still present on every component of the states.
The used weighting matrix
Q is
The weights are ordered as the states in (
24).
As expected, the control action that is mainly used is the vertical force, which generates a variation in altitude that in turn presents a strong overshoot. It has also been observed during the experiments that a damped, non-oscillating lateral force is required to correct the lateral velocity component generated by the initial non-straight direction of the ekranoplan.
As a general conclusion, it can be stated that the performance of the controller on Euler angles is smooth and acceptable.
However, when the LQR was given the task of controlling the altitude during a hopping maneuver, the results were far from satisfactory. Our interpretation is that the controller is not able to cope with the underlying nonlinearities as the variable values move away from the initial equilibrium point used for the linearization. With this in mind, it was decided to move to an adaptive version of the LQR, in which the system dynamics matrix is recomputed over time in order to update the linear approximation as the system variables evolve. In this way, the controller assumes the same working principle as auto-tuning Adaptive Control, described in [
26].
3.3.2. Updating LQR over Feedback Linearized Model for Altitude Change
The LQR controller is built using the matrices of the linearized system, which constitute an approximation. The range of motion can reach points too far away from the linearization point, causing errors and making the behaviour unreliable.
Given that the
A matrix is of the form (
25), its linearization can be recomputed during the simulation loop for the updated position. This allows us to keep the controller close to the linearization point, at the cost of solving the algebraic Riccati equation again at each time instant.
In this way, the issue with the non-linear transformations in (1) can be tackled and reduced, extending the possible use cases for this controller.
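The updating scheme can be summarised by the following sketch, where `relinearize` stands for a hypothetical routine returning the A matrix evaluated at the current state; it is not the actual implementation used in the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def adaptive_lqr_step(x, x_ref, relinearize, B, Q, R):
    A = relinearize(x)                      # A(x): Jacobian evaluated at the current state
    P = solve_continuous_are(A, B, Q, R)    # Riccati equation solved anew at each cycle
    K = np.linalg.solve(R, B.T @ P)
    return -K @ (x - x_ref)                 # feedback on the deviation from the target
```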
This kind of adaptiveness can be tuned according to the requirements. The first and most important could be the computational constraint, which, as already said when referring to the MPC, has to be assessed separately, since another level of simulation is needed.
Then, the simulation for the altitude change was repeated using this adaptive version of the LQR. The initial position is the usual equilibrium position of (9), and the sought position is a higher altitude, which requires a correspondingly higher pitch angle.
This pitch value is computed through Figure 10 and is manually set as the final target for the present case. It could also have been left at its initial value, given that the equations do not require an equilibrium position to be found; instead, it is set manually based on the experience with the MPC, which increased the pitch enough to save input cost.
This could even be a feasibility matter, as the surfaces might not generate enough lift if the pitch were kept at its initial value. Apart from feasibility, it is clear that the pitch change makes the control process more efficient and requires less effort, so the behaviour of the MPC is imitated by manually setting the target values.
The feedback linearization hides the cost of the linearization, so this additional layer of control is implemented to limit it, in the expectation that it will be realistically useful once this particular control strategy is applied to the real craft.
The used weight matrix
Q is
As can be seen in
Figure 11, the controller causes almost no overshoot in pitch and the overall transition is quite smooth. It is worth noting that the weight set for the altitude state is 100, i.e., 1000 times larger than the weight of the velocity components and 10 times larger than that of the angles. The characteristic values of the altitude behaviour can be read in
Table 8, where it can be observed that the smooth evolution of pitch is reflected in the low overshoot of altitude.
Comparing the evaluation parameters, the MPC is better if the objective is obstacle avoidance in the shortest possible time, but the LQR on the feedback-linearized model might provide a smoother transition between two different cruising conditions when speed of response is not crucial.
Since the adaptive LQR has yielded promising results in this initial simulation, the same batch of altitude changes that was used for evaluating the MPC has been tested with this new controller. Results can be seen in
Figure 12.
In
Table 9 and
Table 10 the values of height overshoot and undershoot, respectively, can be observed, as measured from the curves depicted in
Figure 12.
The overshoot and undershoot values reported in these tables confirm what could already be observed in the initial simulation: a smoother and slower behaviour, which makes this controller more suitable when comfort during maneuvering is an issue. Once again, the undershoot reaches safe values, as the craft keeps a safe clearance from the water at all times.
3.3.3. Updating LQR for Recovery from Disturbed Attitude
The initial position is the same as in the previous case using the regular LQR controller, in order to compare the results later. Since the LQR updates itself at every cycle, the distance from the linearization point is, in theory, no longer a problem at all. The weights are the same as (
29).
In
Figure 13 it can be observed, as the most salient feature, that the overshoot in altitude is much bigger than for the rest of the controllers. This comes together with a much longer settling time; putting both facts side by side, it can be stated that the tuning of this particular LQR has caused a much slower reaction of the control system, allowing much bigger oscillations as it tends to react in a smoother way.
3.3.4. PID Baseline Controller
A set of PID controllers is deployed for the sake of having a well known baseline to evaluate the results achieved with the LQR controller. The case under analysis is the same as
Section 3.2.3.
The controllers are applied to the feedback-linearized model shown in the previous section. After the feedback linearization process, the part of the model that refers to the body-referred dynamics is linear and decoupled, as proven by the zero block in the upper-left part of the matrix (24). Therefore the system can be controlled in the body frame by using one controller for each of the body-referred velocities, angular and linear, using the desired values as references. Results are shown in
Figure 14.
All the controllers use the controlled variable as reference, except for the controller on the vertical body-referred velocity, which takes the altitude h as reference, so as to control it directly. Despite having been linearized, the system still has nonlinearities in the relation between the altitude and the input used to control it, as explained in more detail later.
Table 11 shows the evaluation parameters for this case.
Since a PID is meant to directly control a variable in a SISO (Single Input, Single Output) system, in this case, due to the aforementioned underlying nonlinearities and couplings, its behaviour is rather different from the classic scenario. Controlling the altitude h through an input that acts on the vertical body-referred velocity, without having a direct input on the altitude itself, means controlling a state by acting on its second derivative. For this reason the tuning of the PID parameters has been done empirically.
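For completeness, a minimal discrete PID of the kind used here is sketched below; the gains are purely illustrative, since, as stated above, the actual tuning was done empirically.

```python
class PID:
    """Basic discrete PID controller."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, reference, measurement):
        error = reference - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Altitude loop: the output acts on the vertical body-referred velocity channel,
# i.e., roughly on the second derivative of h (placeholder gains).
altitude_pid = PID(kp=0.8, ki=0.05, kd=1.5, dt=0.01)
```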