2.1. Generalized Sediment Conservation Equation for Steep and Gentle Backshore Slopes
From the general analysis of the conservation of sediment over a cross-shore profile z = η[x] at a fixed alongshore position y, using a shoreface control volume s ≤ x ≤ u (Figure 2), the authors of [7] derived a generalized shoreline transgression equation for any coastal profile as:
Each variable in Equation (2) is described in Table 1, and each term can be understood as a cross-sectional area of sediment produced (or consumed) per unit time. The left-hand side is the sediment produced by coastal erosion due to shoreline retreat (for ds/dt < 0). The right-hand terms represent sediment accumulation due to net cross-shore and alongshore sediment flux divergences, and the sediment consumption required to keep pace with a sea-level rise at an effective rate R′.
Table 1 summarizes the variables shown in Figure 2. Note that Equation (2) requires ηs − zsea = const but is insensitive to the particular datum used to define the shoreline s. The authors of [6] provided a full derivation of the shoreline Equation (2) and discussed its significance generally.
The effective sea-level rise rate R′ includes accommodation due to both a relative sea-level rise and changes in profile geometry.
where the bulk profile geometry is described completely by the length L, relief H, average slope, and average shape factor α (Figure 2). These variables are related by
Note that the order-one shape factor α depends only on the fractional sediment fill of the control volume bounding box. For smooth monotonic profiles, α also describes the profile curvature: α = 1/2 for a linear profile, while α < 1/2 for a concave profile, and α > 1/2 for a convex profile.
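As a concrete illustration, the shape factor can be computed numerically as the fractional sediment fill of the profile's bounding box. The sketch below (the helper name and example profiles are our own, not from the paper) recovers α = 1/2 for a linear ramp, α < 1/2 for a profile that sags below the ramp (concave), and α > 1/2 for one that bulges above it (convex):

```python
import numpy as np

def shape_factor(x, eta):
    """Shape factor alpha: fractional sediment fill of the bounding box
    of length L and relief H (illustrative helper, following the text's
    definition of alpha as a fill fraction)."""
    L = x[-1] - x[0]
    H = eta.max() - eta.min()
    f = eta - eta.min()
    dx = x[1] - x[0]                       # uniform grid assumed here
    fill = np.sum(0.5 * (f[1:] + f[:-1])) * dx   # trapezoidal area
    return fill / (L * H)

x = np.linspace(0.0, 100.0, 2001)          # profile length L = 100 m
H = 10.0                                   # relief H = 10 m
u = x / x[-1]
linear  = H * u                            # linear ramp   -> alpha = 1/2
concave = H * u ** 2                       # sags below it -> alpha < 1/2
convex  = H * np.sqrt(u)                   # bulges above  -> alpha > 1/2

print(shape_factor(x, linear))   # ~0.5
print(shape_factor(x, concave))  # ~1/3
print(shape_factor(x, convex))   # ~2/3
```
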
The mathematical expression for the cliff shoreline evolution model shown in Figure 3 is obtained by applying the shoreline Exner Equation (2) to the vertical cliff (i.e., of zero length) and to the nearshore region, and then combining them into a net cliff-nearshore mass balance equation, resulting in
where the slope term is the average slope of the combined cliff and nearshore profile (Figure 3). This simplified model assumes that cliff erosion produces a seaward flux of sediment in concert with shoreline retreat, and that this sediment is instantly available to feed nearshore aggradation. For a rocky coast, cliff retreat occurs only after shoreline retreat undercuts the cliff, and it produces rock fragments, which must be weathered before sand becomes available to the nearshore. In the special case where the cliff and nearshore deposits are homogeneous, c0 = cs, Equation (5) reduces to the cliff Bruun rule of Equation (1) and Figure 1c. However, the average slope will vary as the cliff relief Hc[t] changes through time.
At any time t, the cliff relief Hc[t] is the difference in elevation between the backshore topography η0[s] and the shoreline elevation ηs[s]. If we define the origin of our (x, z) coordinate system to be the shoreline position (s, ηs) at time t = 0, then from the geometry of Figure 3, the cliff relief at time t is
where Hc[0] is the cliff relief at time t = 0. Differentiating Equation (6) with respect to time gives
which can be combined with Equation (5) to eliminate ds/dt, giving an ordinary differential equation (ODE) for the time evolution of the cliff relief Hc[t]
where the equilibrium cliff relief is the relief reached as t becomes large. Equations (8a) and (8b) constitute the model of [7] for shoreline retreat on steep coasts. On steep coasts, shoreline retreat may obey the classic Bruun model in the short term, but in the long run, shoreline retreat will always converge to the passive inundation model. A first-order estimate for the equilibrium transition is the time for the sea level to traverse the effective zone of erosion [8], which is determined by the tidal range in conjunction with wave variability. For example, for zones of erosion with elevations of 1 m and 10 m under a sea-level rise rate of 10 mm/year, it will take 100 and 1000 years, respectively, to fully traverse the zone and reach equilibrium. To determine shoreline behaviour over intermediate timescales, we must solve Equation (8a) for a time-varying cliff relief Hc[t], from which we can compute the shoreline trajectory s[t] using Equation (6). The solution to Equations (8a) and (8b) was given by [7] as
where W[] is Lambert's W function [10].
The mathematical expression for the back-barrier shoreline evolution model shown in Figure 4 is obtained by applying the shoreline Exner Equation (2) to the shoreface, overwash zone, and back-barrier regions, and then combining them into a net backshore-barrier-island and nearshore mass balance equation, resulting in
At any time t, the back-barrier relief Hb[t] is the difference in elevation between the backshore topography η0[b − Lb] and the shoreline elevation ηs[s] = zsea[t]. If we define the origin of our (x, z) coordinate system to be the shoreline position (s, ηs) at time t = 0, then from the geometry of Figure 4, the back-barrier relief at time t is
where Hb[0] is the initial back-barrier relief at time t = 0. Differentiating Equation (11) with respect to time gives
which can be combined with Equation (10) to eliminate ds/dt, giving an ordinary differential equation (ODE) for the time evolution of the back-barrier relief Hb[t]
where the equilibrium back-barrier relief appears together with the effective shoreface relief and effective sea-level rise rate, which are defined by
where the additional term is a shape factor accounting for back-barrier curvature. In typical cases, where the curvature is mild (a value of 1/2) and the back-barrier is much steeper than S0, this shape factor is near one.
Equations (13) and (14) constitute our model for shoreline retreat on gentle coasts. From Equation (13), we can see immediately that, as in the steep-coast case, in the long-term limit of an equilibrium back-barrier, shoreline transgression converges to passive inundation. Similarly, to determine shoreline behavior over intermediate timescales, we must solve Equation (13a) for the time-varying back-barrier relief Hb[t] and then compute the shoreline trajectory s[t] using Equation (11). As shown by [7], the solution to Equations (13a)–(13c) is given by
where W[] is Lambert's W function [10]. In our gentle coast model, the fundamental requirement for barrier island formation is net onshore sediment transport (qs < 0). As shown by [7], for an equilibrium barrier, we can apply Equation (2) to the shoreface shown in Figure 4, together with Equation (12), to express this as
Hence, even if Ss > S0, barrier formation may be impossible on a sand-poor substrate (c0 < cs). On the other hand, onshore sediment transport does not guarantee barrier island formation. An equilibrium back-barrier is only possible if its relief is positive, which requires
The onshore flux must be sufficient to aggrade the island while leaving some sediment left over for back-barrier aggradation [7]; otherwise, a barrier beach will form rather than a true barrier island. More generally, we can interpret this bound as a maximum overwash distance. Physical limitations on overwash processes will always impose a maximum length, as a function of both the storm and wave climate (e.g., surge heights) and sediment supply limitations, such that
When Equation (17) is satisfied, we get a barrier island with a subaqueous back-barrier, but when Equation (17) is violated, we get a barrier beach with no back-barrier. Combining Equations (16) and (17) shows that a true barrier island will form only if
Equations (16) and (19) delineate a continuum of coastal landforms that can occur on gentle coasts experiencing a sea-level rise. If Equation (16) is violated, we get a cliff. Otherwise, we get a barrier beach with a length given by Equation (18) or, if Equation (19) is also satisfied, a barrier island.
The role of backshore topography in shoreline change due to a sea-level rise can be inferred from Equations (9) and (15). We start by noticing that the two real branches of the Lambert W function have the shapes shown in Figure 5. Any initial state value that falls within the domain of the principal branch (k = 0) will evolve from the initial cliff or back-barrier relief towards the corresponding equilibrium relief. Any initial state value that falls within the domain of the other real branch (k = −1) will evolve away from equilibrium and will never reach it. The slope of the W function at t = 0 indicates the rate of change of the relief over time. From Figure 5, it can be seen that the slope of W(x) is maximal and near vertical at x = −1/e (i.e., where the two branches coincide); the slope is approximately unity near x = 0 and smaller than unity for x >> 0. The initial value of x on W(x) can be obtained from the identity W(x = ze^z) = z of the real Lambert W branches as
where the initial values of z in x = ze^z for the cliff and back-barrier models of Equations (9) and (15), respectively, are functions of both the geometry of the coast and the sediment fraction composition. Equations (20) and (21) provide a simple geometrical relationship between the topography and the sediment fraction ratio for these initial values of z to be equal to −1, 0 or > 0, which correspond to the values x = −1/e, x = 0 and x >> 0, with the maximum, median and minimum characteristic rates of change of W with x. From the expected rates of change of the relief and Equations (7) and (12), we can also infer whether the shoreline position will retreat (ds/dt < 0) or advance (ds/dt > 0) as the sea level rises. We first notice that the condition for the shoreline to advance requires a growing relief, dHc/dt > 0 and dHb/dt > 0, for our cliff and back-barrier models, respectively. Hereinafter, we will only consider the main-branch (k = 0) solution, which will continuously evolve towards equilibrium, and therefore the condition for shoreline advance under a sea-level rise can be expressed as
Equipped with Equations (20)–(22), we can now infer whether the shoreline is likely to respond to a sea-level rise by eroding or advancing, and at what characteristic rate of change (i.e., maximum, medium or minimum).
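The mapping from an initial state z to the argument x of W, and the resulting characteristic rates of change, can be checked numerically. A sketch (the sample z values are hypothetical; dW/dx = W/(x(1 + W)) is the standard derivative of the W function, valid for x ≠ 0 and x ≠ −1/e):

```python
import numpy as np
from scipy.special import lambertw

def x_from_z(z):
    # Identity x = z * exp(z), so that W(x) = z on the appropriate branch.
    return z * np.exp(z)

def dWdx(x):
    # Derivative of the principal branch, valid for x != 0 and x != -1/e.
    w = lambertw(x, k=0).real
    return w / (x * (1.0 + w))

# Hypothetical initial states: z = -1, 0 and a value > 0 map to
# x = -1/e, x = 0 and x >> 0, respectively.
for z0 in (-1.0, 0.0, 2.0):
    print(z0, x_from_z(z0))

# The principal branch recovers z from x = z * exp(z).
print(lambertw(x_from_z(2.0), k=0).real)  # -> 2.0 (up to round-off)

# Characteristic rates of change: near vertical just above x = -1/e,
# and well below unity for x >> 0.
print(dWdx(-1.0 / np.e + 1e-6))  # very large slope near the branch point
print(dWdx(10.0))                # smaller than 1
```
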
2.5. Cluster Analysis of Elevation Profiles
Due to the large dimensionality (ca. 4 million points × ca. 100 elevation values per transect), some pre-processing was needed before a cluster analysis of the extracted transects could be performed. The automatic extraction procedure produced a set of transects in which most transects were of the same length (i.e., 500 m for this work) or shorter (i.e., if the normal was shortened by the algorithm) but did not necessarily have the same number of cross-shore points (i.e., this depended on how many raster cells lay underneath each coastline transect). To ensure that all the profiles had the same number of cross-shore points, we normalized the length of all the profiles to the maximum length and resampled at nx = 100 cross-shore locations, obtaining a matrix of nx × np, where np is the number of transects used for this analysis. By normalizing the profile length, we implicitly assume that the variance of the length is non-informative, which seems reasonable given that the transect length is a user-predefined variable. We used the de-trended elevation at each of the nx cross-shore locations to ensure that the elevation at the seaward limit of the transect (x/xmax = 0) and the landward limit of the transect (x/xmax = 1) was the same: this ensured that the data were periodic and easier to fit with a set of orthogonal eigenfunctions.
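The per-transect pre-processing described above can be sketched as follows (function and variable names are our own; the paper's own MATLAB script is in the supplementary material). Each transect is resampled to nx = 100 points on a normalized cross-shore axis, and the straight line joining its endpoints is removed so that the detrended elevation matches at x/xmax = 0 and x/xmax = 1:

```python
import numpy as np

def preprocess_profile(x, z, nx=100):
    """Resample one transect to nx points on a normalized cross-shore
    axis and remove the straight line joining its endpoints, so the
    detrended elevation is zero at both x/xmax = 0 and x/xmax = 1."""
    u = (x - x[0]) / (x[-1] - x[0])        # normalized cross-shore axis
    ui = np.linspace(0.0, 1.0, nx)
    zi = np.interp(ui, u, z)               # resampled elevations
    trend = zi[0] + (zi[-1] - zi[0]) * ui  # endpoint-to-endpoint line
    return zi - trend

# Hypothetical transect: irregular spacing plus a cross-shore trend.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 480.0, 70))
z = 0.02 * x + np.sin(x / 40.0)

p = preprocess_profile(x, z)
print(p.shape)  # (100,)
# Stacking one such row per transect yields the nx x np data matrix.
```
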
The matrix dimension was then reduced to nc × np, where nc ≤ nx is the number of leading Principal Components (PCs), obtained by applying Principal Component Analysis (PCA) [15], that capture a given percentage of the total database variance. PCA is a technique for reducing the dimensionality of large datasets, increasing interpretability while minimizing information loss. The empirically generated PCs have a series of properties that make them suitable for reducing the dimensionality of a dataset: (1) they are a reduced set of variables that are optimal predictors of the original variables in the least-squares sense, and (2) the new variables are obtained as linear combinations of the former ones and are uncorrelated. By reducing the dimensions of the database using the PCs, we avoided including some of the non-informative small profile differences in the cluster analysis. The matrix algebra required is standard in commercially available routines (e.g., the pca function in MATLAB); thus, it is not discussed in more detail here. For the interested reader, the MATLAB script used is provided as supplementary material.
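Although the paper uses MATLAB's pca function, the same reduction can be sketched in Python with scikit-learn, where passing a fraction to n_components keeps just enough leading PCs to explain that share of the variance (the synthetic profile matrix below is purely illustrative):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)

# Illustrative stand-in for the detrended-profile matrix: 500 transects
# of nx = 100 points, built from two dominant cross-shore modes plus
# small noise. Rows are samples, as scikit-learn expects; the paper's
# nx x np matrix is simply the transpose of this.
nx = 100
u = np.linspace(0.0, 1.0, nx)
profiles = (rng.normal(size=(500, 1)) * np.sin(2 * np.pi * u)
            + rng.normal(size=(500, 1)) * np.sin(4 * np.pi * u)
            + 0.01 * rng.normal(size=(500, nx)))

# A float n_components keeps the fewest leading PCs whose cumulative
# explained variance reaches that fraction of the total.
pca = PCA(n_components=0.95)
scores = pca.fit_transform(profiles)

print(scores.shape)                         # (500, nc) with nc << nx
print(pca.explained_variance_ratio_.sum())  # >= 0.95
```
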
A detailed description of cluster analysis is beyond the scope of this paper; the reader is referred to Hennig et al. [16] for a general description and to [17,18] for updated references to coastal applications. In brief, it entails the calculation of distances (interpreted as similarities) between all objects in a data matrix, on the basis that objects closer together are more alike than those further apart. Clustering techniques can be generally classified as hierarchical or partitional approaches [12]. Hierarchical agglomerative clustering is a bottom-up approach in which objects, and then clusters of objects, are progressively combined on the basis of a linkage algorithm (visualized as a dendrogram) that uses the distance measures to determine the proximity of objects, and then clusters, to each other. The number of distance calculations in hierarchical approaches increases geometrically with the number of observations, making this method unsuitable for very large datasets. Partitional methods have advantages in applications involving large datasets for which the construction of a dendrogram is computationally prohibitive. A problem accompanying the use of a partitional algorithm is the choice of the number of desired output clusters. A partitional technique usually produces clusters by optimizing a criterion function defined either locally or globally. A combinatorial search of the set of possible labellings for an optimum value of a criterion is clearly computationally prohibitive. In practice, therefore, the algorithm is typically run multiple times with different starting states, and the best configuration obtained from all of the runs is issued as the output clustering. One such partitional technique is the K-means approach [12]. K-means is one of the simplest unsupervised learning algorithms that solves the well-known clustering problem. The procedure is a simple and easy way to classify a given dataset into a certain number of clusters (say, k clusters) fixed a priori. The main idea is to define k centroids, one for each cluster. The next step is to take each point belonging to the dataset and associate it with the nearest centroid. When no point is pending, the first step is completed and an early grouping is obtained. At this point, it is necessary to re-calculate k new centroids as the centres of the clusters resulting from the previous step. After obtaining these k new centroids, a new binding is performed between the same data points and the nearest new centroid, generating a loop. As a result of this loop, the k centroids change their location step by step until no more changes are made; in other words, the centroids no longer move. We used the Manhattan distance metric because this metric provides more meaningful results, from both theoretical and empirical perspectives, for large datasets [19].
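A minimal NumPy sketch of this clustering step is given below (our own illustration, not the code used in the paper). Because the Manhattan distance is used, the distance-minimizing cluster centre is the component-wise median rather than the mean, so the update step uses np.median; this Lloyd-type variant is often called K-medians:

```python
import numpy as np

def kmeans_manhattan(X, k, n_iter=100, seed=0):
    """Lloyd-type clustering under the Manhattan (cityblock) distance.
    Under this metric the distance-minimizing cluster centre is the
    component-wise median, so the update uses np.median (often called
    K-medians). A sketch, not the code used in the paper."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # Assignment step: nearest centroid in the L1 sense.
        d = np.abs(X[:, None, :] - centroids[None, :, :]).sum(axis=2)
        labels = d.argmin(axis=1)
        # Update step: component-wise median of each cluster.
        new = np.array([np.median(X[labels == j], axis=0) for j in range(k)])
        if np.allclose(new, centroids):  # centroids no longer move
            break
        centroids = new
    return labels, centroids

# Two well-separated synthetic groups standing in for profile PC scores.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 0.3, size=(50, 5)),
               rng.normal(5.0, 0.3, size=(50, 5))])
labels, centroids = kmeans_manhattan(X, k=2)
print(np.bincount(labels))  # two clusters of 50 points each
```
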