The essence of the proposed approach boils down to four elements: (1) tsunami wave modeling is based on the displacement of the water surface at the tsunami source, reconstructed from wave profile measurements; (2) modern (smart) algorithms are used to estimate this displacement from a part of the first wave period recorded at a single point; (3) wave propagation is computed locally for populated coastal areas on a personal computer equipped with an FPGA-based hardware accelerator, using the nested grid method; and (4) the flooded area is assessed from the estimates of the expected wave amplitude.
2.1. Data for Analysis
First, it is necessary to obtain input data for wave modeling, namely the seafloor deformation parameters at the tsunami source. The international network of seismic stations provides the coordinates of the earthquake epicenter and its magnitude (an estimate of the released energy) virtually immediately after the earthquake. An earthquake with magnitude M > 7.5 usually generates a dangerous tsunami. Most modern tsunami warning systems convert this information into an assumption about the shape of the seafloor deformation at the tsunami source. They use information on the geological structure of the focal zone, numerical calculations of residual displacements in an elastic half-space [8,9], analysis of historical data on seismic events in the area, etc.
Since the result is still only an approximation, we consider it promising to take ocean level measurements as the basis for modeling the propagation of an already formed wave. Several approaches are currently used to solve this problem. Without analyzing the strengths and weaknesses of the various methods, we will rely on direct measurements of the parameters of the already generated tsunami wave on the water surface. We limit ourselves to a brief listing of the methods used; a more detailed analysis can be found in [10]. Among the sources of information used we note tide gauges, ultrasonic and pressure sensors, GPS sensors, and satellite altimetry images.
Consider several technologies for obtaining sea level measurements. Tide gauges are abundant, but when installed on the shoreline they provide data that are extremely noisy due to multiple reflections from the shoreline, distorted by bottom friction and wave breaking. In addition, these data are completely unsuitable for tsunami warning because they become available only after the wave arrives at the shore. Usually, they are used for post-event analysis and reconstruction of the actual tsunami source.
Methods based on the analysis of data from GPS sensors installed on land have worked well in Indonesia, for example. However, at this time it is not certain that this technology will work as well in other regions. Sea-based sensors (the so-called GPS buoys) can currently be installed only at relatively shallow depths (up to 300 m, and in the future up to 600 m), which severely limits how far in advance wave data can be acquired. Such buoys installed to the east of the Japanese coast successfully detected the far-field tsunami generated by the 2010 Chile earthquake [11].
Altimetry data from low-orbit satellites are highly accurate and could form the basis of a warning system. However, these satellites are not geostationary and are unlikely to be in the right place at the time of a seismic event.
The most reliable and promising sources at present, in the opinion of the authors, are bottom-mounted pressure sensors, which measure the height of a tsunami wave with centimeter accuracy and transmit these data to a processing center via satellite communication channels [12]. Until the beginning of this century, there were no technical means of recording a moving tsunami wave in the deep ocean (i.e., at depths greater than 100–200 m) that would carry information about the tsunami source free of distortion by bottom friction and multiple reflections from the shoreline. Therefore, attempts have been made to estimate expected wave heights near the shore from seismic records. However, no clear dependence of the height of the initial vertical displacement at the tsunami source on the earthquake magnitude, which seismologists are able to determine quickly enough (usually a few minutes after the seismic event), has yet been found. Therefore, apart from stating that a tsunami has occurred, no quantitative estimates of wave height can yet be made using seismic data alone. With the advent of deep-ocean tsunami recorders integrated into the DART (Deep-ocean Assessment and Reporting of Tsunamis) system, it has become possible to measure actual tsunami heights at some points of the Pacific and other oceans [12].
These sensors are located in the deep ocean opposite the subduction zones [13], in which almost all epicenters of tsunamigenic earthquakes are located. Since the speed of wave propagation (estimated within the well-proven shallow water theory approximation) is proportional to the square root of the depth, the wave reaches at least one of these sensors much faster than it reaches the shore.
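To illustrate the scale of this advantage, here is a minimal Python sketch (with hypothetical depth and distance values) that evaluates the long-wave speed sqrt(gD) and the corresponding travel times.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def long_wave_speed(depth_m: float) -> float:
    """Shallow-water (long-wave) propagation speed c = sqrt(g * D)."""
    return math.sqrt(G * depth_m)

def travel_time_min(distance_km: float, depth_m: float) -> float:
    """Travel time in minutes over a path of roughly constant depth."""
    return distance_km * 1000.0 / long_wave_speed(depth_m) / 60.0

# Hypothetical values: a sensor 100 km from the source over 4000 m depth
# versus a shore 300 km away over an average depth of 1000 m.
print(f"speed at 4000 m depth: {long_wave_speed(4000):.0f} m/s")
print(f"sensor reached in:     {travel_time_min(100, 4000):.1f} min")
print(f"shore reached in:      {travel_time_min(300, 1000):.1f} min")
```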
The approach implemented in Japan should also be noted. To determine the vertical displacement of the ocean surface around and inside the tsunami source, two zones of underwater earthquake epicenters were covered by networks of sensors connected by a bottom cable. Of the rather broad functionality of these units, we are interested in the pressure sensors. One such network, NIED S-net (Sea-floor observation network for earthquakes and tsunamis along the Japan Trench) [14], consists of 150 observation units located from off Hokkaido to off Chiba Prefecture. The configuration of this network (taken from [14]) is presented in Figure 1. Each unit contains seismometers and water pressure gauges to monitor offshore earthquakes and tsunamis. All the data are transmitted to the land stations by fiber-optic cable and arrive at NIED in real time [15,16].
The second system developed in Japan covers a part of the Nankai Trough and is called DONET (Dense Oceanfloor Network System for Earthquakes and Tsunamis). The first phase of this program has been carried out since 2006 with the purpose of monitoring the region of earthquake epicenters close to the Nankai Trough, and the installation of observational equipment at 20 stations at Kumanonada was completed in 2011. The second phase (DONET2), intended to cover a wider region, started in 2010. A total of 29 observatories are planned to be installed off the Kii Peninsula for DONET2, plus 2 more at Kumanonada for DONET [17]. These observation networks have already successfully detected several events: for example, S-net stations recorded the 2016 Fukushima tsunami, whereas DONET stations recorded the 2022 Tonga volcanic tsunami [18].
To conclude, direct measurements of tsunami waves in the deep ocean are available for analysis in real time.
2.2. Smart Sensor Network Design to Identify Source Parameters
Let us consider the possibility of determining the approximate parameters of the wave at the tsunami source in the shortest possible time. We take as a basis the direct measurements of the wave profile by bottom pressure sensors. Apart from the already mentioned S-net and DONET networks, the existing sensors (DART buoys of the US NOAA and similar ones produced in other countries) are located in front of the selected subduction zones rather arbitrarily, without accounting for wave travel times and other parameters that are critical for timely warning of tsunami danger.
At the same time, mathematical modeling methods can easily solve the problem of optimizing the locations of a small number of additional (to the already existing ones) sensors so that the tsunami wave reaches the nearest sensor in the minimum possible time [19]. It should be noted that this is the guaranteed time in the so-called worst-case scenario, when the epicenter of the earthquake within a given subduction zone is as far (in terms of wave propagation time) from the sensor system as possible. Until now, this approach of “smart” [20,21] extension of the observation system has unfortunately not been developed.
In a few words, the essence of the method of reconstructing the initial displacement at the source using data from several deep-water recorders is as follows. The sensor wave record is a sequence of ocean level measurements f(nΔt), n = 1, ..., N, with fixed time step Δt. To reconstruct the source, we first numerically simulate tsunami propagation from each unit source UnS_k, k = 1, ..., K, and store the wave signals (synthetic mareograms) f_k(nΔt), n = 1, ..., N, at the sensor locations with the same time step as in the real recordings. The recovery process consists of finding the set of coefficients b_k that minimizes the discrepancy between the real recorded signal and the linear combination of synthetic mareograms from several UnSs, according to the following discrete version of the mean square difference:
\[
\Phi(b_1,\ldots,b_K)\;=\;\sum_{n=1}^{N}\Bigl(f(n\Delta t)-\sum_{k=1}^{K}b_k\,f_k(n\Delta t)\Bigr)^{2}\;\longrightarrow\;\min .
\]
The result approximates the water surface at the source by a linear combination of the displacements in the UnSs with the found coefficients b_k. The procedure of orthogonalization and normalization of the discrete synthetic mareograms (in fact, vectors) allows us to find the required set of coefficients in a few seconds, regardless of the number of UnSs involved. Detailed (computationally inexpensive) formulae for finding the desired coefficients b_k can be found in [21].
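As a concrete illustration of this recovery step, the following Python sketch solves the same least-squares problem with NumPy's lstsq instead of the explicit orthogonalization formulae of [21]; the array names and the randomly generated test data are illustrative assumptions only.

```python
import numpy as np

def recover_source_coefficients(record: np.ndarray,
                                synthetic: np.ndarray) -> np.ndarray:
    """
    record:    observed mareogram at one sensor, shape (N,)
    synthetic: precomputed synthetic mareograms from K unit sources
               at the same sensor and time step, shape (K, N)
    Returns the coefficients b_k minimizing
        sum_n ( f(n*dt) - sum_k b_k * f_k(n*dt) )^2.
    """
    # Least squares on the design matrix whose columns are the synthetic
    # mareograms (equivalent in result to the orthogonalization approach).
    A = synthetic.T                          # shape (N, K)
    b, *_ = np.linalg.lstsq(A, record, rcond=None)
    return b

# Illustrative use with placeholder data (K = 4 unit sources, N = 300 samples);
# real inputs would come from the sensor and the precomputed unit-source database.
rng = np.random.default_rng(0)
synthetic = rng.standard_normal((4, 300))
true_b = np.array([0.8, -0.2, 0.5, 0.0])
record = true_b @ synthetic
print(recover_source_coefficients(record, synthetic))  # approximately true_b
```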
Preliminary numerical calculations for model sources show that optimizing the location of the sensors opposite a 1000 km long subduction zone makes it possible to register a wave no later than 10 min after the earthquake with only 4 DART buoys. It is believed that only the whole period of the wave carries sufficient information to make a correct judgment about the source parameters. Moreover, since the initial disturbance of the water surface we are interested in is two-dimensional, while the record of the wave profile passing over a sensor is one-dimensional, from the mathematical point of view several (according to different estimates, from 3 to 10) records from different sensors are required as input data. In the works where such recovery is carried out from the data of 3 sensors [22,23], an “ideal” location of the sensors relative to the source is required. Thus, for the practical application of this approach, several dozen sensors must be installed in front of a 1000 km long subduction zone.
In fact, knowledge of the detailed structure of the initial waveform is not necessary to assess the tsunami hazard. The most important parameters are the size of the disturbance zone and the amplitude of the wave at the initial time instance. Model calculations [24] show that this information can be obtained within the concept of pre-computations using about one quarter of the first wave period (if the deep-water recorder is located on the side of the positive wing of the dipole source), recorded at only one point. Already in the case of a 50 km wavelength (which corresponds to 250 s for the wave to pass over the sensor), the amplitude estimate is obtained about 3 min before the wave has completely passed over the sensor. This time gain becomes even more significant for longer waves, typical of catastrophic events. The algorithm itself, based on Fourier series theory, has low computational complexity [20,21]. The recovery takes a few seconds on a personal computer (PC).
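The quoted time gain follows from simple arithmetic, reproduced in the sketch below for the 50 km example and for a longer, hypothetical 100 km wave, assuming a 4 km ocean depth.

```python
import math

G = 9.81  # m/s^2

def time_gain_s(wavelength_km: float, depth_m: float = 4000.0) -> tuple:
    """Full period over the sensor vs. the quarter period actually needed."""
    c = math.sqrt(G * depth_m)                 # long-wave speed, m/s
    full_period = wavelength_km * 1000.0 / c   # time for the whole wave to pass
    quarter = full_period / 4.0                # portion used by the algorithm
    return full_period, quarter, full_period - quarter

for wl in (50.0, 100.0):
    full, quarter, gain = time_gain_s(wl)
    print(f"{wl:.0f} km wave: period {full:.0f} s, "
          f"need {quarter:.0f} s, gain {gain / 60:.1f} min")
```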
For tsunami warning services, the observation system should be located so that after determining the parameters (amplitude) of the tsunami source, as much time as possible remains before the wave arrives at the shore. Model calculations show that in this case the optimal location of the sensors differs from the one that provides the shortest time to determine the parameters [21,24]. So far, the authors are not aware of any practical application of this technology.
Figure 2 shows the digital grid bathymetry off the southeastern coast of Japan (Honshu Island), constructed based on data from the Japanese agency JODC [25]. The white rectangles show the locations of tsunami “unit sources” (UnSs) with the shape typical for this seismic area. In the calculations, we used “composite sources” CS_i, i.e., initial surface displacements obtained as the sum of four neighboring UnSs with certain coefficients. The form of such a composite source CS simulates real historical events well. Green crosses denote the positions of the model sensors L1–L10, the first of which (L1) is located at the top, L2 below it, and so on down to the lowest (L10).
The times in which the proposed algorithm determines the source parameters of CS_i (i = 1, 2, 3), if data from each of the sensors L_j (j = 1, ..., 10) were used, have been obtained by numerical tests. In addition, for each sensor, the time it takes for the tsunami wave to reach the nearest point on the coast after the source parameters have been determined was also computed. Data on the optimal location of the sensors are shown in Table 1.
The optimal location of two sensors in terms of the shortest time to determine the source parameters (for the three considered positions CS_i, i = 1, 2, 3) is shown in Figure 2 by yellow arrows. The location of two sensors at the places marked by red arrows gives the maximum time between determining the source parameters and the wave arrival at the nearest shore (Table 2).
Thus, a correct (smart) placement of the sensor network, together with a mathematically grounded data processing algorithm, allows us to obtain reasonable approximate values of the parameters of the initial water surface displacement at the tsunami source in a comparatively short time.
2.3. Smart Data Processing: Fast Numerical Modeling
After obtaining an estimate of the initial displacement of the water surface at the source, the next stage is the numerical solution of the Cauchy problem for the system of shallow water equations [26]; this determines the expected distribution of maximum wave heights along the shoreline. Note that this model is widely used for modeling tsunami wave propagation [3,27]. As practice shows, the distribution of maximum wave heights along the shoreline is highly irregular and characterized by highly localized peaks. In other words, wave amplitudes can differ many times between neighboring segments of the coast, making particular sites on the coast extremely dangerous. It is known from the theory and practice of difference methods that the results of numerical calculations for partial differential equations can be trusted only if the time step is in a certain relation to the step length of the spatial grid. This is a consequence of the Lax theorem [28], which guarantees the convergence of the numerical solution to the exact one provided that the difference scheme is stable and approximates the original differential equations. The stability condition, in turn, establishes the relation between the time step and the spatial step of the computational grid. For explicit difference schemes, the calculation of wave propagation (or other disturbances) is stable if the Courant condition, which limits the wave advance during one time step to one step of the computational grid, is satisfied. It follows that reducing the spatial step requires a smaller time step. Thus, a calculation on a sufficiently detailed spatial grid can take too much time, even on a powerful computer or supercomputer. In order to determine the amplitude of the wave approaching the shore, it is necessary to perform calculations on sufficiently detailed grids with a step of about 10 m.
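As an illustration, the following sketch evaluates the Courant-limited time step Δt ≤ Δx/sqrt(gD_max) for several grid steps; the maximum depths are assumed values chosen only to show how refining the grid shrinks the admissible time step.

```python
import math

G = 9.81  # m/s^2

def max_time_step(dx_m: float, max_depth_m: float) -> float:
    """Courant-limited time step for an explicit shallow-water scheme:
    the wave must not travel more than one grid step per time step."""
    return dx_m / math.sqrt(G * max_depth_m)

# Assumed grid steps and maximum depths for three nested levels.
for dx, depth in ((300.0, 6000.0), (50.0, 1000.0), (10.0, 50.0)):
    print(f"dx = {dx:5.0f} m, D_max = {depth:5.0f} m "
          f"-> dt <= {max_time_step(dx, depth):.2f} s")
```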
Therefore, solving this standard problem (the numerical solution of the shallow water system) requires a significant amount of time, even with modern supercomputers. Note that, as the events of 11 March 2011 showed, power outages are to be expected in the case of a strong earthquake, and these may hinder the use of supercomputer systems.
It is possible to combine such seemingly mutually exclusive requirements as: (1) a grid step of ≤10 m in the near-shore zone; (2) a total computation time of about 1 min; and (3) obtaining results in the case of a power outage, by applying smart processing algorithms, namely hardware acceleration of the calculations on a modern personal computer. The first requirement is fulfilled by using nested grids [29,30]. “Energy independence” is achieved by using a PC with an uninterruptible power supply. Finally, the required calculation speed is provided by a specialized FPGA-based calculator [31].
The use of an FPGA as the hardware basis of computing acceleration has its own limitations. The required performance is achieved by programming parallel computational pipelines, which allows the calculation of variable values at the (n + 1)-th time step to start as soon as their values at the neighboring grid points at the n-th time step become known [32]. In this case, all necessary data must be stored in the internal memory of each special processor. In the considered case of the shallow water equation system, these are the two components of the velocity vector (u(x,y,t), v(x,y,t)) and the total height of the liquid column H(x,y,t) = h(x,y,t) + D(x,y), where h(x,y,t) stands for the tsunami wave height of interest and D(x,y) is the depth, known at all points of the area. When using the relatively low-cost solutions VC709 (a host computer is required) or ZCU106 (a stand-alone solution), the computational grid size is limited to 3000 × 2500 nodes.
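To show the data locality that such pipelining relies on, here is a simplified, illustrative Python sketch of one explicit time step of the linearized 1D shallow-water equations on a staggered grid; it is a textbook forward-backward scheme, not the authors' production code, and the grid sizes in the usage example are arbitrary.

```python
import numpy as np

G = 9.81  # m/s^2

def step_1d(h, u, D, dx, dt):
    """One explicit time step of the linearized 1D shallow-water equations
    on a staggered grid (forward-backward scheme). Each updated value uses
    only the neighboring values from the previous step; this locality is
    what the FPGA pipelines exploit. Boundaries are simply held fixed.
    h: free-surface displacement at cell centers, shape (M,)
    u: velocity at cell faces, shape (M+1,)
    D: still-water depth at cell faces, shape (M+1,)
    """
    u_new = u.copy()
    u_new[1:-1] = u[1:-1] - G * dt / dx * (h[1:] - h[:-1])
    h_new = h - dt / dx * (D[1:] * u_new[1:] - D[:-1] * u_new[:-1])
    return h_new, u_new

# Example run: 2000 cells, 4000 m depth, 300 m grid step, CFL-limited dt.
M, dx, depth = 2000, 300.0, 4000.0
dt = 0.5 * dx / np.sqrt(G * depth)
h = np.exp(-((np.arange(M) - M // 2) * dx / 20000.0) ** 2)  # initial hump
u = np.zeros(M + 1)
D = np.full(M + 1, depth)
for _ in range(100):
    h, u = step_1d(h, u, D, dx, dt)
```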
The restriction on the grid size still allows us to consider a computational area of about 1000 × 700 km, which is usually sufficient for near-shore events. It is then enough to calculate on a sequence of three refining grids, reducing the step from the initial 250–300 m to 10–15 m in the coastal zone. The disadvantage is that the last calculation stage covers a coastal stretch of only about 30 km. This limitation can be compensated by simultaneous calculations on several fine-resolution nested grids [29], or by delegating the calculations to local emergency warning services interested in a coastal area of a similar size. We emphasize that only a PC is required for the calculation.
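For orientation, the following snippet checks the consistency of these numbers, assuming (as an illustration) a refinement factor of five between successive nested grids.

```python
# Consistency check of the nested-grid hierarchy described above
# (a refinement factor of 5 per level is an assumption for illustration).
nodes = 3000                 # grid nodes along one axis (FPGA limit)
step_m = 250.0               # initial coarse step from the text
for level in range(3):
    extent_km = step_m * nodes / 1000.0
    print(f"level {level}: step {step_m:5.1f} m, covers ~{extent_km:.0f} km")
    step_m /= 5.0            # next, finer nested grid
```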