Article

Data Assimilation and Parameter Identification for Water Waves Using the Nonlinear Schrödinger Equation and Physics-Informed Neural Networks

1 Dynamics Group, Hamburg University of Technology, 21073 Hamburg, Germany
2 Communication Networks Institute, TU Dortmund University, 44227 Dortmund, Germany
3 Department of Mechanical Engineering, Technical University of Munich, 85748 Garching bei München, Germany
4 Ship Performance Department, Institute of Maritime Energy Systems, German Aerospace Center, 21052 Geesthacht, Germany
5 Department of Mechanical Engineering, Imperial College London, London SW7 2AZ, UK
6 Cyber-Physical Systems in Mechanical Engineering, Technische Universität Berlin, 10623 Berlin, Germany
* Author to whom correspondence should be addressed.
Fluids 2024, 9(10), 231; https://doi.org/10.3390/fluids9100231
Submission received: 5 September 2024 / Revised: 13 September 2024 / Accepted: 29 September 2024 / Published: 1 October 2024
(This article belongs to the Special Issue Machine Learning and Artificial Intelligence in Fluid Mechanics)

Abstract:
The measurement of deep water gravity wave elevations using in situ devices, such as wave gauges, typically yields spatially sparse data due to the deployment of a limited number of costly devices. This sparsity complicates the reconstruction of the spatio-temporal extent of surface elevation and presents an ill-posed data assimilation problem, which is challenging to solve with conventional numerical techniques. To address this issue, we propose the application of a physics-informed neural network (PINN) to reconstruct physically consistent wave fields between two elevation time series measured at distinct locations within a numerical wave tank. Our method ensures this physical consistency by integrating residuals of the hydrodynamic nonlinear Schrödinger equation (NLSE) into the PINN’s loss function. We first showcase a data assimilation task by employing constant NLSE coefficients predetermined from spectral wave properties. However, due to the relatively short duration of these measurements and their possible deviation from the narrow-band assumptions inherent in the NLSE, using constant coefficients occasionally leads to poor reconstructions. To enhance this reconstruction quality, we introduce the base variables of frequency and wavenumber, from which the NLSE coefficients are determined, as additional neural network parameters that are fine tuned during PINN training. Overall, the results demonstrate the potential for real-world applications of the PINN method and represent a step toward improving the initialization of deterministic wave prediction methods.

1. Introduction

The field of ocean engineering and experimental water wave research has a strong need for a deterministic description of wave quantities [1], which are usually governed by partial differential equations (PDEs). Unlike statistical methods, deterministic prediction involves a phase-resolved tracing of wave fields, i.e., the wave elevation η as a function of space and time with high resolution. Unfortunately, acquiring such spatio-temporal data from experiments is often impractical: on the one hand, the reconstruction of wave surface elevations from real-world radar data is still an unresolved issue [2]. On the other hand, in situ measurement devices, such as wave gauges, measure the elevation time series at only a few selected locations due to high operational costs. This sparsity of information leads to ill-posed inverse problems [3] when attempting to reconstruct the complete wave elevation η ( x , t ) from gauge measurements η m ( t ) , which might be solved by numerical or machine learning methods.
Despite substantial advancements in common numerical PDE solvers, such as finite element, finite difference, and spectral methods, their application to highly dynamic systems still incurs considerable computational costs. These costs primarily stem from the need for fine-grained discretizations to ensure accurate solution approximations. Furthermore, conventional grid-based numerical solvers still face unresolved challenges, especially when addressing ill-posed inverse problems [4], such as wave reconstructions. In particular, inverse problems become computationally expensive due to the high number of forward evaluations required to estimate the inverse. Additionally, issues related to numerical stability and convergence necessitate the implementation of regularization techniques [5].
In the meantime, machine learning methods have revolutionized many scientific disciplines, including their applications in the fields of fast deterministic ocean wave prediction cf. [6,7,8,9] and reconstruction cf. [2,10]. However, in contrast to classical numerical solvers, these data-based approaches lack the incorporation of inherent knowledge about the physical laws that approximate the underlying system in the form of PDEs. Physical information is solely provided by observational training data for supervised learning, which are sourced from PDE simulations using classical numerical methods or from field measurements of real physical wave systems. Thus, the solutions generated by data-based approaches may not inherently ensure physical consistency, as the quality and quantity of training data limit their accuracy. This can be regarded as a neglect of established knowledge, especially when addressing the reconstruction problem outlined earlier, considering the centuries-long development of various model equations for describing surface gravity waves.
To overcome the limitations of both explicit numerical PDE solvers and neural network approaches, Raissi et al. [11] proposed physics-informed neural networks (PINNs). PINNs integrate observational data with physical laws by parameterizing the PDE solution as a neural network to solve forward, inverse parameter identification, or inverse data assimilation problems. More precisely, the network’s training process is constrained by a loss function that incorporates a PDE residual. To effectively calculate this residual, the PINN algorithm leverages the method of automatic differentiation [12]. In recent years, PINNs have gained attention across diverse scientific domains [4,13], including computational fluid dynamics [14,15,16], acoustic wave propagation [17], heat transfer [18,19], climate modeling [20], nano-optics [21], and the study of hyperelastic materials [22]. Notably, PINNs have also been employed in a few investigations related to water waves: for instance, Wang et al. [23] used a loss function based on the wave energy balance equation and the linear dispersion relation to reconstruct near-shore phase-averaged wave heights. The same equations also allow for solving sea bed depth inversion problems from statistical wave parameters in shallow-water regimes [24]. Furthermore, the Saint-Venant equations within the loss residuals of PINNs enable the downscaling of large-scale river models by assimilating remote sensing data in conjunction with in situ measurement data of the water surface [25]. Additionally, the research of Jagtap et al. [3] showcased the potential of PINNs in resolving ill-posed assimilation problems by leveraging analytical solitary surface measurements and the Serre–Green–Naghdi equations in shallow water.
However, in typical ocean engineering research, i.e., down-scaled testing in wave tanks, scenarios arise where the water depth significantly exceeds the wavelength. In such cases, it becomes essential to characterize the nonlinear behavior of water waves in the intermediate-to-deep water regime [26] rather than in shallow water. Zakharov [27] demonstrated that the amplitude envelope of slowly modulated wave groups approximately satisfies the nonlinear Schrödinger equation (NLSE). Consequently, various variants of the hydrodynamic nonlinear Schrödinger equation were investigated experimentally and numerically for deterministic wave prediction cf. [28,29,30] and rogue wave modeling cf. [31,32].
To the best of the authors’ knowledge, there is no prior documentation of PINNs being used to solve the hydrodynamic form of the NLSE, despite the successful integration of other forms of the NLSE for different physical phenomena into the PINN framework cf. [11,33,34,35,36,37,38]. However, a unifying feature of most of this related work is that the NLSE-PINN methodology is applied to initial or boundary conditions derived from analytical soliton or breather solutions corresponding to the specific NLSE used in the loss function. While these approaches provide a theoretical proof of concept, they cannot inherently assess the practical viability of PINNs in real-world scenarios, as analytical solutions seldom align with the complexity and imperfections encountered in measurements.
Therefore, the objective of this study was to demonstrate the application of an NLSE-PINN framework in the more realistic scenario of irregular deep water gravity waves. Specifically, we aimed to leverage the capability of PINNs in solving data assimilation problems, which is crucial in ocean engineering to infer the dynamic system’s state from partial or noisy surface elevation measurements η m ( t ) cf. [17,25,39,40], thereby reducing the need for costly gauges while still being able to reconstruct the fully resolved wave fields η ( x , t ) in between. For this purpose, we utilized irregular and nonlinear wave group data from gauge locations several meters apart within a numerical wave tank that was simulated using the high-order spectral method (HOSM) [41].
However, as our PINN loss function was constrained by the hydrodynamic NLSE, which is limited in its ability to accommodate arbitrary irregular sea states due to constraints on bandwidth, steepness, and nonlinearity [1], we investigated the utilization of a slightly misspecified PINN for assimilating wave data derived from the more realistic HOSM; the HOSM also accounts for highly nonlinear, broad-banded, or directional sea states in non-breaking scenarios [42,43]. This misspecification acknowledges the common discrepancy between mathematical models (e.g., PDEs) and measurement data in complex, real-world physical systems [44], where perfect alignment between measurements and analytical solutions is uncommon, and exact knowledge of the physical equations that describe all phenomena encountered in measurements is rare. Moreover, an ongoing challenge in applying the NLSE to irregular waves with specific bandwidths lies in accurately determining its coefficients, which are derived from ratios of the base parameters of the carrier wave frequency ω p and wavenumber k p . In particular, relatively short measurement intervals and slight deviations of the measurement data from the narrow-band assumptions of the NLSE can introduce uncertainties in the determination of ω p and k p , which affect, for example, the propagation velocities, and thus cause errors and offsets in the surface elevation reconstructions. Therefore, our hypothesis for this study was as follows:
  • Alongside the assimilation of measurement data, the NLSE-PINN will facilitate fine tuning the base parameters ( ω p and k p ) of the NLSE coefficients. We expect this approach to enhance the reconstruction performance compared with using constant coefficients predetermined from spectral wave properties.
To address this, we first present the wave tank utilized for data generation in Section 2.1. Afterward, the hydrodynamic NLSE is introduced in Section 2.2 to subsequently develop the NLSE-PINN framework and training methodology in Section 2.3. Then, we first performed a pure data assimilation task with predetermined, constant NLSE coefficients to reconstruct wave surface envelopes from only two gauge measurements in Section 3.1. These results served as a benchmark to demonstrate the enhancement of reconstruction by incorporating the NLSE coefficient base parameters as additional tuneable PINN variables in Section 3.2. Finally, Section 4 summarizes the key findings, outlines the method limitations, and suggests potential directions for future research.

2. Method

The following subsection provides a concise overview of the wave tank specifications and the numerical method used to generate the synthetic wave elevation data. Next, the nonlinear Schrödinger equation for deep water gravity waves is introduced. Subsequently, this equation is integrated into a physics-informed neural network, with details on its architecture and training strategy provided in the last subsection.

2.1. Numerical Wave Tank and Measurement Data Generation

The numerical wave tank experiments were based on the dimensions of a real wave tank facility at Hamburg University of Technology. This wave tank possessed a cross-sectional area of 1.5 m × 1.5 m and extended over a length of 15 m (https://www.tuhh.de/mum/en/research/facilities (accessed on 11 September 2024)) and is visualized in Figure 1a. To generate waves, a flap-type board was installed on one side of the tank, while the opposite side had a beach element to minimize wave reflections. To simulate nonlinear wave propagation inside this numerical wave tank, we employed the high-order spectral method (HOSM) of order M = 4, as formulated by West et al. [41]. This method models highly realistic, irregular nonlinear water wave surfaces on a Cartesian coordinate system ( x , z ), with the mean free surface located at z = 0 m and z pointing upward. Assuming a Newtonian fluid that is incompressible, inviscid, and irrotational, the HOSM solves the general initial boundary-value potential flow problem that is given by the Laplace equation:
$$\nabla^2 \Phi = \Phi_{xx} + \Phi_{zz} = 0 \tag{1}$$
and the kinematic surface, dynamic surface, and bed boundary condition:
$$\eta_t + \eta_x \Phi_x - \Phi_z = 0 \quad \text{on} \quad z = \eta(x,t), \tag{2}$$
$$\Phi_t + g\eta + \tfrac{1}{2}\left(\Phi_x^2 + \Phi_z^2\right) = 0 \quad \text{on} \quad z = \eta(x,t), \tag{3}$$
$$\Phi_z = 0 \quad \text{on} \quad z = -d. \tag{4}$$
Therein, Φ ( x , z , t ) is the velocity potential, η ( x , t ) is the free surface elevation, and g is the gravity acceleration. The simulations were initialized using wave surfaces from the JONSWAP spectra [45] in the finite depth form [46].
For the data generation, we maintained a fixed water depth of d = 1 m and employed four virtual wave gauges to provide time-series measurements along the water surface at x_g ∈ {3, 4, 5, 6} m, which constituted a typical depth and measurement point spacing in educational wave tanks [47,48]. The outer gauges of the numerical wave tank were designated as locations of sparse measurements (x_{g,meas} ∈ {3, 6} m) for the PINN training, while measurements at the inner gauges (x_{g,test} ∈ {4, 5} m) were solely used to assess the PINNs’ performance against a ground truth after the training process. We generated a temporal wave surface elevation series η(x = x_g, t) over an interval that spanned t = 0–60 s to ensure that the wave packets fully passed all four gauges, while no significant wave reflections had yet occurred at the channel’s end. A graphical representation of the resulting three-dimensional data structure is exemplified in Figure 1b.
As we desired wave data that covered different wave conditions and significant wave heights H s , the sea state parameters of the peak wave frequency ω p = 2 π / T p (with T p being the peak period), wave steepness ϵ = k p H s / 2 (with k p = 2 π / L p being the peak wavenumber and L p being the peak wavelength), and peak enhancement factor γ were varied as follows:
$$\omega_p \in \{3, 4, 5, 6, 7, 8, 9\}\ \mathrm{rad/s}, \qquad \epsilon \in \{0.0125, 0.0250, 0.0375, 0.0500, 0.0750, 0.1000\}, \qquad \gamma \in \{1, 3, 6\},$$
where a higher γ indicates a narrower-banded spectrum, as shown in Figure 2. By randomly selecting initial phase shifts of the component waves, five different elevations were generated for each ϵ–ω_p–γ combination, which resulted in 630 different wave samples in total, where each individual sample had the data structure shown in Figure 1b. For details on the nonlinear wave data generation in a numerical wave tank facility using the HOSM, the reader is referred to Klein et al. [7] and Lünser et al. [43].
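For orientation, the generation of such irregular surfaces can be sketched as a linear superposition of JONSWAP-distributed wave components with random phases. The snippet below is an illustrative approximation only: the function names and the spectral normalization are ours, and the actual study propagates such initial surfaces nonlinearly with the HOSM, which a linear superposition cannot reproduce.

```python
import numpy as np

def jonswap(omega, omega_p, gamma):
    """Unscaled JONSWAP-type spectral shape (deep-water form, illustrative)."""
    sigma = np.where(omega <= omega_p, 0.07, 0.09)
    r = np.exp(-((omega - omega_p) ** 2) / (2 * (sigma * omega_p) ** 2))
    return (omega_p**4 / omega**5) * np.exp(-1.25 * (omega_p / omega) ** 4) * gamma**r

def sample_surface(t, omega_p=4.5, hs=0.1, gamma=3.0, n_comp=256, seed=0):
    """Linear superposition of components with random phase shifts."""
    rng = np.random.default_rng(seed)
    omega = np.linspace(0.5 * omega_p, 3.0 * omega_p, n_comp)
    s = jonswap(omega, omega_p, gamma)
    d_omega = omega[1] - omega[0]
    s *= (hs / 4) ** 2 / (np.sum(s) * d_omega)   # scale so Hs ~= 4*sqrt(m0)
    amp = np.sqrt(2 * s * d_omega)               # component amplitudes
    phase = rng.uniform(0, 2 * np.pi, n_comp)    # random component phases
    return (amp * np.cos(omega * t[:, None] + phase)).sum(axis=1)

t = np.arange(0.0, 60.0, 0.05)   # 60 s sampled at dt = 0.05 s -> 1200 points
eta = sample_surface(t)
```

The random phases are what distinguish the five realizations per parameter combination mentioned above.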

2.2. Hydrodynamic Nonlinear Schrödinger Equation

In addition to employing numerical methods, such as the HOSM, to approximate the potential flow Equations (1)–(4), an alternative approach is the derivation of simplified solutions. By utilizing perturbation theory around the parameters of wave steepness ϵ = k_p a and relative bandwidth μ = Δk/k_p ≪ 1, small-amplitude waves and a narrow bandwidth are assumed. Here, Δk denotes the width of the wavenumber spectrum around the peak wavenumber k_p. Moreover, the boundary value problem at the unknown free surface z = η(x, t) can be approximated using a Taylor series expansion. By truncating the perturbation expansion at order O(ϵ³), the envelope equation of the nonlinear Schrödinger equation (NLSE) is derived [27,49,50] in terms of a complex wave envelope amplitude A(x, t) = U(x, t) + iV(x, t) that varies slowly compared with the phase ϑ = k_p x − ω_p t + φ of its underlying carrier wave η(x, t), where φ is a phase shift. The hydrodynamic NLSE in the time-like form is as follows:
$$i \left( A_x + \frac{1}{c_g} A_t \right) + \delta A_{tt} + \nu |A|^2 A = 0, \tag{5}$$
which finds common application in boundary value wave tank problems [51], where
$$c_g = \frac{\omega_p}{2 k_p}, \qquad \delta = -\frac{k_p}{\omega_p^2}, \qquad \nu = -k_p^3 \tag{6}$$
are the NLSE coefficients. The peak frequency ω p of the carrier wave is related to the corresponding peak wavenumber k p through the linear dispersion relation for deep water:
$$\omega_p = \sqrt{g \, k_p}. \tag{7}$$
The first term in Equation (5) characterizes the spatial variation of the amplitude, while the second term represents the wave propagation at the group velocity c_g. The third term introduces the dispersive effect, and the last term introduces the nonlinearity. In contrast to the potential flow equations, which require an approximation in the depth direction, the NLSE is regarded as the simplest nonlinear equation for deep-water wave dynamics, as it is uniquely formulated on the water surface [52]. However, while it captures the essential aspects of nonlinear water waves, the NLSE’s restrictions on nonlinearity magnitude and spectral bandwidth limit its general applicability in deterministic wave prediction [1].
In practical cases, where real-valued measurement series η m ( x = x g , t ) are available, the values of ω p and k p are commonly derived from a spectral representation via a discrete Fourier transform F ( ω ) of the time domain surface elevation η m ( t ) . We employed the advanced method of Sobey [53] and Mansard and Funke [54] by defining
$$\omega_p = \frac{\int \omega \cdot F(\omega)^5 \, \mathrm{d}\omega}{\int F(\omega)^5 \, \mathrm{d}\omega}, \tag{8}$$
which enhances the robustness compared with determining ω p as the frequency where F ( ω ) attains its maximum. The corresponding k p can be derived by the dispersion relation.
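A minimal sketch of this weighted spectral estimator and of the deep-water dispersion relation might look as follows; the discretization of the integrals via FFT bins and the function names are our assumptions.

```python
import numpy as np

def peak_frequency(eta_m, dt, power=5):
    """Weighted spectral estimate of the peak frequency omega_p.

    Approximates the F(omega)^5-weighted integrals with sums over FFT bins.
    """
    spec = np.abs(np.fft.rfft(eta_m))                  # one-sided spectrum F(omega)
    omega = 2.0 * np.pi * np.fft.rfftfreq(len(eta_m), d=dt)
    w = spec**power                                    # F(omega)^5 weighting
    return float(np.sum(omega * w) / np.sum(w))

def peak_wavenumber(omega_p, g=9.81):
    """Deep-water linear dispersion omega_p = sqrt(g * k_p), solved for k_p."""
    return omega_p**2 / g

t = np.arange(0.0, 60.0, 0.05)          # 60 s at dt = 0.05 s
eta = 0.05 * np.cos(4.0 * t)            # carrier at 4 rad/s
omega_p = peak_frequency(eta, dt=0.05)  # close to 4.0 rad/s
k_p = peak_wavenumber(omega_p)
```

The fifth-power weighting concentrates the estimate near the spectral peak, which is why it is more robust than simply taking the frequency of the spectral maximum.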
The complex amplitudes for initializing the NLSE are computed as
$$A(x_g, t) = \Big[ \eta(x_g, t) + i\,\mathcal{H}\{\eta(x_g, t)\} \Big] \cdot \exp(i \vartheta), \tag{9}$$
where H denotes the Hilbert transform of the measured signal [55,56] and ϑ = k_p x − ω_p t + φ is the carrier wave’s phase. Conversely, the equation
$$\eta(x_g, t) = \mathrm{Re}\Big[ A(x_g, t) \cdot \exp(-i \vartheta) \Big] \tag{10}$$
allows for the reverse transformation from the complex envelope back to the carrier wave elevation after the reconstruction procedure, although this transformation is not presented in this work. The relationship between the carrier wave η(x_g, t) and the complex envelope A(x_g, t) = U(x_g, t) + iV(x_g, t) is illustrated in Figure 3.
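With SciPy's Hilbert transform, the demodulation and its inverse can be sketched as below. This is our illustrative reconstruction: sign conventions for the carrier phase and the analytic signal vary across references, and the version here assumes a right-running carrier together with SciPy's convention for the analytic signal.

```python
import numpy as np
from scipy.signal import hilbert

def complex_envelope(eta, t, x_g, omega_p, k_p, phi=0.0):
    """Demodulate a gauge signal eta(x_g, t) to the slow envelope A = U + iV."""
    zeta = hilbert(eta)                        # analytic signal eta + i*H[eta]
    theta = k_p * x_g - omega_p * t + phi      # carrier phase
    return zeta * np.exp(1j * theta)           # remove the carrier oscillation

def surface_from_envelope(A, t, x_g, omega_p, k_p, phi=0.0):
    """Reverse transformation back to the surface elevation."""
    theta = k_p * x_g - omega_p * t + phi
    return np.real(A * np.exp(-1j * theta))
```

For a pure carrier wave, the recovered envelope is (up to edge effects of the discrete Hilbert transform) constant and equal to the carrier amplitude, and the round trip reproduces the original elevation exactly.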

2.3. Physics-Informed Neural Network for the NLSE

In light of the simplifications inherent in the hydrodynamic NLSE relative to the generalized water wave problem discussed previously, we investigated the suitability of this equation within a PINN framework for analyzing surface waves that arose from slightly different wave physics, a situation termed model misspecification that is commonly encountered in real-world physical systems. Our selection of this equation was motivated by the NLSE’s simple definition (Equation (5)), which is limited to the ( x , t ) domain only. In contrast, for example, employing a PINN to solve the fully nonlinear potential flow equations (Equations (1)–(4)) would demand substantially greater human and computational effort. Hence, the subsequent sections introduce the architecture, loss function, training methodology, and evaluation metrics adopted for our NLSE-PINN framework.

2.3.1. PINN Architecture

The neural network architecture of the PINN developed in our study is depicted on the lower-left side of Figure 4. It includes two input nodes for collocation points in space x and time t. Given the complex-valued solution A ( x , t ) = U ( x , t ) + i V ( x , t ) of the NLSE, two output nodes are required [11], where U represents the real part and V represents the imaginary part. Moreover, the network consists of four intermediate hidden layers, with each comprising 200 nodes, which results in a total network depth of D = 6 . The variables of each layer l, which are denoted as θ ( l ) = { W ( l ) , b ( l ) } 1 l D , receive their initial values using the Xavier initialization technique for weights W ( l ) [57], while the biases b ( l ) are initialized to zero. The total amount of the network’s weights and biases is denoted as θ = θ ( 1 ) , , θ ( D ) .
To enhance the convergence and accuracy during the training, we incorporated a strategy using layer-wise locally adaptive activation functions [58,59], which has shown promising applications, e.g., in Pu et al. [35], Jagtap et al. [3,60], Shukla et al. [61], and Guo et al. [62]. The input of a hidden layer h ( l + 1 ) is derived from the output of the preceding layer o ( l ) by following the rule
$$o^{(l)} = \left( W^{(l)} \right)^{T} h^{(l)} + b^{(l)} \tag{11}$$
$$h^{(l+1)} = \tanh\left( s \cdot a^{(l)} \cdot o^{(l)} \right) \tag{12}$$
for l = 1 , , D 1 . The last layer D has a linear activation function. In this context, W ( l ) and b ( l ) are the weight and bias matrices of the ( l ) -th layer, and s = 10 is a fixed scaling factor. Furthermore, a = a ( 1 ) , , a ( D 1 ) represent additional variables of the network, which are fine tuned during the training process to modulate the slopes of the activations. In our case, we initialized the system with a ( l ) = 0.2 , which resulted in the initial slope of s · a ( l ) = 2 being slightly steeper than the usual tanh activation. Empirical evidence suggested that this initialization could be considered valid according to [58], as it accelerated the convergence rate in our setup without causing divergence or increasing oscillations in the loss function.
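Using the architecture details above (two inputs, four hidden layers of 200 nodes, two outputs, Xavier-initialized weights, zero biases, s = 10, and a^(l) = 0.2), a PyTorch sketch of such a network could read as follows. This is our illustrative reconstruction, not the authors' code.

```python
import torch
import torch.nn as nn

class AdaptiveTanhNet(nn.Module):
    """MLP with layer-wise locally adaptive tanh activations (sketch).

    Each hidden layer l owns a trainable slope a^(l); the activation is
    tanh(s * a^(l) * o^(l)) with a fixed scaling factor s.
    """
    def __init__(self, layers=(2, 200, 200, 200, 200, 2), s=10.0, a_init=0.2):
        super().__init__()
        self.s = s
        self.linears = nn.ModuleList(
            nn.Linear(n_in, n_out) for n_in, n_out in zip(layers[:-1], layers[1:])
        )
        for lin in self.linears:                 # Xavier weights, zero biases
            nn.init.xavier_normal_(lin.weight)
            nn.init.zeros_(lin.bias)
        # one adaptive slope per hidden layer (the final linear layer has none)
        self.a = nn.Parameter(a_init * torch.ones(len(self.linears) - 1))

    def forward(self, x, t):
        h = torch.cat([x, t], dim=-1)            # collocation point (x, t)
        for l, lin in enumerate(self.linears[:-1]):
            h = torch.tanh(self.s * self.a[l] * lin(h))
        return self.linears[-1](h)               # outputs [U~, V~]
```

Because the slopes a are registered as a `nn.Parameter`, they are updated by the same optimizer as the weights and biases, matching the training strategy described above.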
Moreover, over the course of our study (Section 3.2), we examined how the data-driven fine tuning of the NLSE coefficients (Equation (6)) while assimilating sparse measurement data affected the reconstruction quality. Given that all coefficients in the NLSE are related to the peak wavenumber and frequency, we chose to treat the coefficients’ base parameters ω p and k p as additional PINN variables that share an optimizer with the neural network parameters θ and a . This is visualized in the lower-left box of Figure 4.

2.3.2. PINN Loss Function and Training

To enable the PINN’s capability to approximate an NLSE surrogate model Ã(x, t) = Ũ(x, t) + iṼ(x, t), we used wave elevation measurement data η_m ∈ R^{N_d} from the two outer gauge positions, x_{g,meas} ∈ {3, 6} m, within the numerical wave tank shown in Figure 1. The elevation data at the remaining locations, x_{g,test} ∈ {4, 5} m, were reserved for later evaluation and not incorporated in the PINN’s training process. Each elevation measurement was transferred to a complex envelope A_m ∈ C^{N_d} using Equation (9), as depicted in the top boxes of Figure 4. In this work, N_d = 1200 was the total number of measurement data points at the domain boundaries, denoted as {x_d = x_{g,meas}, t_d^{(j)}}_{j=1…N_d}. These points were obtained by sampling the temporal sequence t = 0–60 s with an increment of Δt = 0.05 s. In addition, we incorporated a set of N_r = 20,000 randomly located collocation points {x_r^{(j)}, t_r^{(j)}}_{j=1…N_r} to enforce the NLSE solution across the entire computational (x, t) domain. Using these measurements and sets of points, the multi-objective PINN loss function was defined by
$$\mathcal{L} = \mathcal{L}_{U,\mathrm{data}} + \mathcal{L}_{V,\mathrm{data}} + \mathcal{L}_{U,\mathrm{res}} + \mathcal{L}_{V,\mathrm{res}}, \tag{13}$$
where the loss components were composed of the mean-squared errors (MSEs):
$$\mathcal{L}_{U,\mathrm{data}} = \underbrace{\frac{1}{N_d} \sum_{j=1}^{N_d} \left( \lambda_d^{(j)} \right)^2 \left( \tilde{U}\big(x_d, t_d^{(j)}\big) - U_m^{(j)} \right)^2}_{\mathrm{MSE}_{U,\mathrm{data}}} \tag{14}$$
$$\mathcal{L}_{V,\mathrm{data}} = \underbrace{\frac{1}{N_d} \sum_{j=1}^{N_d} \left( \mu_d^{(j)} \right)^2 \left( \tilde{V}\big(x_d, t_d^{(j)}\big) - V_m^{(j)} \right)^2}_{\mathrm{MSE}_{V,\mathrm{data}}} \tag{15}$$
$$\mathcal{L}_{U,\mathrm{res}} = \underbrace{\frac{1}{N_r} \sum_{j=1}^{N_r} \left( \lambda_r^{(j)} \right)^2 \left( \tilde{R}_U\big(x_r^{(j)}, t_r^{(j)}\big) \right)^2}_{\mathrm{MSE}_{U,\mathrm{res}}} \tag{16}$$
$$\mathcal{L}_{V,\mathrm{res}} = \underbrace{\frac{1}{N_r} \sum_{j=1}^{N_r} \left( \mu_r^{(j)} \right)^2 \left( \tilde{R}_V\big(x_r^{(j)}, t_r^{(j)}\big) \right)^2}_{\mathrm{MSE}_{V,\mathrm{res}}} \tag{17}$$
The terms MSE_{U,data} and MSE_{V,data} served to quantify the error between the PINN predictions (Ũ, Ṽ) and the measured envelope data (U_m, V_m) at the domain boundaries x_{g,meas} ∈ {3, 6} m. In contrast, MSE_{U,res} and MSE_{V,res} quantified the degree to which the PINN solution conformed to the residuals of the NLSE, which are given by
R ˜ U ( x , t ) : = V ˜ x ( x , t ) 1 c g V ˜ t ( x , t ) + δ U ˜ t t ( x , t ) + ν U ˜ ( x , t ) 2 + V ˜ ( x , t ) 2 U ˜ ( x , t )
R ˜ V ( x , t ) : = U ˜ x ( x , t ) + 1 c g U ˜ t ( x , t ) + δ V ˜ t t ( x , t ) + ν U ˜ ( x , t ) 2 + V ˜ ( x , t ) 2 V ˜ ( x , t )
within the remaining computational domain. Thus, as the optimizer minimizes these residuals during training, the PINN solution increasingly aligns with the physical constraints imposed by the NLSE. The derivative terms in these residuals are calculated using automatic differentiation (AD) [12] of the neural network’s outputs U ˜ and V ˜ with respect to the input variables x or t, as illustrated on the right of Figure 4. Unlike numerical differentiation methods, AD provides exact derivatives without any approximation error [13]. Furthermore, AD enhances the computational efficiency by decomposing complex functions into elementary operations and reusing intermediate results through the application of the chain rule, thus avoiding redundant calculations.
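Such residual evaluations via automatic differentiation can be sketched in PyTorch as follows. The helper names are ours, and the signs correspond to one common form of the time-like NLSE; the paper's exact signs depend on its envelope convention.

```python
import torch

def nlse_residuals(model, x, t, c_g, delta, nu):
    """Evaluate the real/imaginary NLSE residuals at collocation points.

    model(x, t) is expected to return an (N, 2) tensor [U~, V~].
    """
    x = x.clone().requires_grad_(True)
    t = t.clone().requires_grad_(True)
    out = model(x, t)
    U, V = out[:, 0:1], out[:, 1:2]

    def grad(f, wrt):
        # create_graph=True keeps the graph so second derivatives exist
        g = torch.autograd.grad(f, wrt, torch.ones_like(f),
                                create_graph=True, allow_unused=True)[0]
        return torch.zeros_like(wrt) if g is None else g

    U_x, U_t = grad(U, x), grad(U, t)
    V_x, V_t = grad(V, x), grad(V, t)
    U_tt, V_tt = grad(U_t, t), grad(V_t, t)

    mag2 = U**2 + V**2                       # |A|^2 = U^2 + V^2
    R_U = -V_x - V_t / c_g + delta * U_tt + nu * mag2 * U
    R_V = U_x + U_t / c_g + delta * V_tt + nu * mag2 * V
    return R_U, R_V
```

Because the derivatives come from the computational graph itself, no finite-difference stencil or mesh is needed at the randomly located collocation points.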
Furthermore, following the approach proposed by McClenny and Braga-Neto [63], trainable self-adaptation weights λ_d^{(j)}, μ_d^{(j)}, λ_r^{(j)}, and μ_r^{(j)} were introduced in the loss components (Equations (14)–(17)). These variables, initially set to one, are associated with each measurement point {x_d, t_d^{(j)}} or collocation point {x_r^{(j)}, t_r^{(j)}}. The PINN autonomously identifies challenging regions inside the solution domain characterized by high point-specific errors and increases the respective self-adaptation weights to emphasize the penalty, and thus improve the approximation. This behavior is achieved through the concurrent minimization of the loss function L = L(θ, a, λ_d, μ_d, λ_r, μ_r) with respect to the network weights and biases θ, the activation slopes a, and the coefficient base parameters ω̄_p and k̄_p, alongside the maximization of the loss with respect to the self-adaptation weights λ_d, μ_d, λ_r, and μ_r in each training epoch.
In many instances, PINNs are trained using a two-step strategy that involves the Adam optimizer [64] for a defined number of epochs, followed by the L-BFGS optimizer [65]. This strategy is recognized as pivotal [66], with Adam initially preventing convergence to local minima, while L-BFGS refines small-scale solution components [13]. However, this approach, which is beneficial for smooth analytical solutions, proved less suitable for our studies, as the L-BFGS optimizer tends to over-refine noisy elements within the measurements. We found that the AMSGrad modification of the Adam optimizer [67] from the PyTorch library (Version 1.8.1) [68] yielded satisfactory results with a learning rate of α = 0.0001 over 15,000 epochs of training.
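A condensed sketch of one such training step, combining AMSGrad descent on the network parameters with gradient ascent on the self-adaptation weights, might look like the following; the tensor and function names are hypothetical, and the sign flip on the weight gradients is one simple way to implement the maximization.

```python
import torch

def make_optimizers(model, adapt_weights, lr=1e-4):
    """AMSGrad descent on network parameters, ascent on adaptation weights."""
    opt_min = torch.optim.Adam(model.parameters(), lr=lr, amsgrad=True)
    opt_max = torch.optim.Adam(adapt_weights, lr=lr)
    return opt_min, opt_max

def train_step(model, residual_fn, xd, td, Um, Vm, xr, tr,
               lam_d, mu_d, lam_r, mu_r, opt_min, opt_max):
    opt_min.zero_grad()
    opt_max.zero_grad()
    pred = model(xd, td)                                   # [U~, V~] at boundaries
    L_data = (lam_d**2 * (pred[:, 0:1] - Um) ** 2).mean() \
           + (mu_d**2 * (pred[:, 1:2] - Vm) ** 2).mean()
    R_U, R_V = residual_fn(model, xr, tr)                  # PDE residuals
    L_res = (lam_r**2 * R_U**2).mean() + (mu_r**2 * R_V**2).mean()
    loss = L_data + L_res
    loss.backward()
    opt_min.step()                                         # descend on theta, a, ...
    for w in (lam_d, mu_d, lam_r, mu_r):
        w.grad.neg_()                                      # flip sign -> ascent
    opt_max.step()                                         # grow weights at hard points
    return loss.item()
```

The min-max structure means points that are fit poorly accumulate larger weights over the epochs, which concentrates the optimizer's effort on them.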

2.3.3. Evaluation

After assimilating the measurements at the domain boundaries to reconstruct the envelopes in the entire computational domain, the evaluation of the PINNs’ performance necessitated a metric that compared the measurement time series A m = A m ( x = x g , t ) with the reconstruction A ˜ = A ˜ ( x = x g , t ) at all gauge positions x g { 3 , 4 , 5 , 6 } m . While metrics based on Euclidean distances, such as the mean-squared error (MSE), are scale dependent and treat deviations in frequency and phase as amplitude errors [69], the surface similarity parameter (SSP)
$$\mathrm{SSP}(A_m, \tilde{A}) = \frac{\sqrt{\int \big| F_{A_m}(\omega) - F_{\tilde{A}}(\omega) \big|^2 \, \mathrm{d}\omega}}{\sqrt{\int \big| F_{A_m}(\omega) \big|^2 \, \mathrm{d}\omega} + \sqrt{\int \big| F_{\tilde{A}}(\omega) \big|^2 \, \mathrm{d}\omega}} \in [0, 1] \tag{20}$$
proposed by Perlin and Bustamante [70] combines phase, amplitude, and frequency errors into a scalar unified measure. In this metric, ω denotes the wave frequency vector and F A m and F A ˜ denote the discrete Fourier transforms of the time series A m ( x = x g , t ) or A ˜ ( x = x g , t ) . The SSP is a normalized error metric, with SSP = 0 indicating perfect agreement and SSP = 1 implying a comparison against zero or phase-inverted surfaces. To visually illustrate the impact of the SSP metric, we systematically compared an exemplary wave envelope against various signals across a range of SSP and MSE values in the Appendix (Figure A1). Due to its straightforward error assessment and the applicability for comparing signals with differing sampling rates and lengths, the SSP has found application in recent research related to ocean wave prediction and reconstruction by Klein et al. [1,7], Wedler et al. [9,69], Desmars et al. [71,72], Lünser et al. [43], Kim et al. [73], and Ehlers et al. [2].
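A compact implementation of the SSP for equally sampled series could look as follows; approximating the integrals by sums over discrete Fourier coefficients is our assumption.

```python
import numpy as np

def ssp(y1, y2):
    """Surface similarity parameter between two equally sampled series.

    0 indicates perfect agreement; 1 indicates comparison against a zero
    or phase-inverted signal.
    """
    F1, F2 = np.fft.fft(np.asarray(y1)), np.fft.fft(np.asarray(y2))
    num = np.sqrt(np.sum(np.abs(F1 - F2) ** 2))
    den = np.sqrt(np.sum(np.abs(F1) ** 2)) + np.sqrt(np.sum(np.abs(F2) ** 2))
    return float(num / den) if den > 0 else 0.0
```

By the triangle inequality the numerator never exceeds the denominator, which is what confines the metric to [0, 1]; comparing a signal against a scaled copy of itself, for instance, yields an intermediate value.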

3. Results and Discussion

After a successful validation of the NLSE-PINN against an analytic solution of the NLSE (see Appendix A.2), we assessed its applicability to more realistic scenarios involving data that do not entirely conform to the equation in the loss function, such as elevation measurements from the wave tank shown in Figure 1. The NLSE-PINN framework underwent training for all 630 generated wave measurement samples, each characterized by a distinct wave parameter combination and random phases. Our aim was to reconstruct wave envelopes throughout the spatio-temporal domain 3 m ≤ x ≤ 6 m and 0 s ≤ t ≤ 60 s solely based on two time series measured at the gauge locations x_{g,meas} ∈ {3, 6} m spaced 3 m apart. Additional measurements at locations x_{g,test} ∈ {4, 5} m were reserved exclusively for evaluating the PINN solutions post-training. The first subsection (Section 3.1) therefore focuses on the pure data assimilation task, where the NLSE coefficients in the PINN’s loss function remained constant during training and were determined a priori based on spectral wave properties and linear dispersion. In contrast, the second subsection (Section 3.2) explores treating the peak frequency and wavenumber, which form the NLSE coefficients, as identifiable variables that are adapted by the optimizer to best fit the measurements.

3.1. Data Assimilation with Constant NLSE Coefficients

In the following benchmark data assimilation task, the NLSE coefficients (Equation (6)) were kept constant during the PINN training and determined a priori based on the mean of the spectral peak frequencies ω_{p,3} and ω_{p,6} at the domain boundaries using Equation (8). This peak frequency ω̄_p = ½(ω_{p,3} + ω_{p,6}) was calculated for each of the 630 samples, where each individual sample was generated following the procedure outlined in Section 2.1 and followed the data structure shown in Figure 1b. The linear dispersion relation (Equation (7)) yielded the corresponding peak wavenumber k̄_p. Note that the calculated ω̄_p could slightly deviate from the peak frequency ω_p used to generate the JONSWAP spectrum for the HOSM. Each sample underwent individual PINN training for 15,000 epochs, which required approximately 1200 s of computational time on an NVIDIA GeForce RTX 3090 GPU.
Figure 5 depicts a representative training loss curve for the NLSE-PINN framework. During the initial epochs, a marked decrease in the PDE residual error components MSE_{U,res} and MSE_{V,res} was observed, while the data errors MSE_{U,data} and MSE_{V,data} remained high. As the data errors started to decrease and the PINN solution aligned more closely with the prescribed boundary data, the PDE errors experienced a transient increase before reaching a plateau. These observations could have arisen from a near-zero envelope in the early training, which satisfied the PDE residues and thus caused low PDE errors despite high data errors. However, as the envelope progressively matched the actual measurement data at the boundaries, fulfilling the PDE residues in the remaining domain became more challenging, which led to a slight increase in the corresponding error.
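The PDE residual components of such a loss can be computed via automatic differentiation. The sketch below assumes one common deep-water time-like NLSE form, A_x + (1/c_g)A_t + iδA_tt + iν|A|²A = 0 with A = U + iV; the exact signs and coefficients of the paper's Equation (6) are not reproduced here, so this is illustrative rather than the authors' implementation.

```python
import torch

def nlse_residual_loss(model, x, t, c_g, delta, nu):
    """Mean-squared PDE residuals for the (assumed) time-like NLSE
    A_x + (1/c_g) A_t + i*delta*A_tt + i*nu*|A|^2 A = 0, split into
    real (U) and imaginary (V) parts."""
    x = x.clone().requires_grad_(True)
    t = t.clone().requires_grad_(True)
    U, V = model(x, t)  # network outputs: real and imaginary envelope parts

    grad = lambda f, v: torch.autograd.grad(
        f, v, torch.ones_like(f), create_graph=True)[0]
    U_x, U_t = grad(U, x), grad(U, t)
    V_x, V_t = grad(V, x), grad(V, t)
    U_tt, V_tt = grad(U_t, t), grad(V_t, t)

    amp2 = U**2 + V**2
    res_U = U_x + U_t / c_g - delta * V_tt - nu * amp2 * V  # real part
    res_V = V_x + V_t / c_g + delta * U_tt + nu * amp2 * U  # imaginary part
    return (res_U**2).mean(), (res_V**2).mean()
```

Adding these two terms to the boundary-data misfits MSE_{U,data} and MSE_{V,data} yields a composite loss of the kind whose components are tracked in Figure 5.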
Figure 6 presents a reconstruction of the real part Ũ(x,t) and imaginary part Ṽ(x,t) of an envelope above the carrier wave for sample no. 1, with ϵ = 0.075, ω̄_p = 4.45 rad/s, and γ = 6. The PINN reconstructed a wave envelope structure across the entire computational domain, despite relying solely on measurement data from the boundaries marked in violet.
While some wave samples were successfully assimilated, challenges were evident for others. For instance, Figure 7 presents a PINN solution for sample no. 2, characterized by ϵ = 0.025 and ω̄_p = 8.34 rad/s, with a broad-banded nature indicated by γ = 1. The average errors for the real and imaginary parts of the envelope were SSP = 0.194 and SSP = 0.242, respectively. Notably, increased errors were observed within the computational domain at x_test^g ∈ {4, 5} m, despite the satisfactory alignment of the boundary values with the ground truth at x_meas^g ∈ {3, 6} m. These errors appeared to originate from nonphysical envelope peaks that emerged from both boundaries toward the middle of the domain but met with an offset, indicating a small error in the NLSE group velocity coefficient c_g = ω̄_p/(2 k̄_p) for the underlying data. However, as the advanced method of Equation (8) was already utilized for the ω̄_p determination, establishing an approach that determines this parameter robustly across all samples evidently remained a challenge.
The trends observed in the aforementioned examples recurred across all 630 instances of the assimilation task, which is summarized in Figure 8, where the SSP value represents the average error across the real and imaginary parts of each sample. Each cell in the figure corresponds to the mean SSP achieved for all five samples of a specific ϵ–ω̄_p–γ combination, where the determined ω̄_p value was rounded to the nearest integer for this illustration. Notably, the errors tended to decrease as the ω̄_p value increased. This frequency dependence aligns with the inherent preference of PINNs for solving successively from lower to higher frequency components (cf. [13]) when considering that higher frequencies ω_p of the carrier wave η(x,t) result in smoother, lower-frequency envelopes A(x,t), while lower carrier frequencies result in higher-frequency envelopes. Moreover, Figure 8 illustrates a general error increase as the peak enhancement factor γ decreased or the steepness ϵ slightly increased. As shown in Appendix A.2, these observations were not attributable to the PINN method itself but rather aligned with the model misspecification: the NLSE is limited to narrow-band and small-amplitude waves, while the ground truth HOSM data appear to exceed its validity range for increasing ϵ and broader spectra.

3.2. Coefficient Fine Tuning Alongside Data Assimilation

Despite employing an advanced formula (Equation (8)) to determine the peak frequency ω̄_p required for calculating the NLSE coefficients (Equation (6)), uncertainties in the ω̄_p determination persisted due to the short measurement durations and broader-banded signals, as discussed in the context of the envelope offset observed in Figure 7. Given the impracticality of exploring alternative possibilities for the ω̄_p determination or iteratively adjusting it and recalculating the solution, we examined a data-driven opportunity to fine tune this parameter. To this end, we treated the PDE coefficients not as constants but as additional trainable variables, thereby addressing an inverse problem during the assimilation of the measurement data. Our objective was to enhance the reconstruction quality and eliminate the envelope offsets attributed to non-optimal constant coefficients, utilizing the results from Section 3.1 as a benchmark. Since all the NLSE coefficients depend on ω̄_p and k̄_p, we refrained from directly setting c_g, δ, and ν as variables. Instead, we initially determined ω̄_p and k̄_p using Equations (7) and (8), as demonstrated before, but subsequently treated them as additional PINN variables. Hence, the coefficients c_g, δ, and ν emerged from the current ω̄_p and k̄_p in each epoch of training. Moreover, as water waves with increasing nonlinearity are known to deviate from the linear dispersion relation (Equation (7)), we decoupled k̄_p and ω̄_p during training. As in the previous subsection, each of the 630 samples, with the data structure shown in Figure 1b, underwent individual training for 15,000 epochs. However, the training now incorporated the two additional trainable variables ω̄_p and k̄_p, as illustrated in the lower-left box of Figure 4.
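In a framework such as PyTorch, exposing ω̄_p and k̄_p as trainable variables amounts to registering them as parameters and deriving the coefficients from their current values in each forward pass. The sketch below uses the group velocity c_g = ω̄_p/(2 k̄_p) and nonlinear coefficient ν = k̄_p³ named in the text; the form of the dispersion coefficient δ is an assumption, as Equation (6) is not reproduced here.

```python
import torch

class TrainableNLSECoeffs(torch.nn.Module):
    """Peak frequency and wavenumber as trainable variables; the NLSE
    coefficients are re-derived from their current values each epoch."""
    def __init__(self, omega_p_init, k_p_init):
        super().__init__()
        # initialized from Eqs. (7) and (8), then decoupled during training
        self.omega_p = torch.nn.Parameter(torch.tensor(float(omega_p_init)))
        self.k_p = torch.nn.Parameter(torch.tensor(float(k_p_init)))

    def forward(self):
        c_g = self.omega_p / (2.0 * self.k_p)   # group velocity (from the text)
        delta = self.k_p / self.omega_p**2      # dispersion coefficient (assumed form)
        nu = self.k_p**3                        # nonlinear coefficient (from the text)
        return c_g, delta, nu
```

Passing `list(pinn.parameters()) + list(coeffs.parameters())` to a single optimizer then fine tunes ω̄_p and k̄_p concurrently with the data assimilation.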
A visual comparison of the new reconstruction for sample no. 2 employing the trainable variables ω̄_p and k̄_p (Figure 9) with the previous solution using constant coefficients (Figure 7) revealed a strong reduction in the envelope offset. Table 1 compares the initial peak frequency ω̄_p and wavenumber k̄_p with their fine tuned values, along with the resulting NLSE coefficients. For this sample, ω̄_p increased during the training while k̄_p decreased, resulting in a faster group velocity c_g = ω̄_p/(2 k̄_p). Simultaneously, the nonlinear term received slightly less weight due to a decreased coefficient ν = k̄_p³. These adjustments not only visually reduced the nonphysical envelope offsets but also improved the SSP error values. For the real part, the error decreased from SSP = 0.194 to SSP = 0.150, and for the imaginary part, from SSP = 0.242 to SSP = 0.188, both improvements of around 20% for sample no. 2. This improvement is noteworthy considering that while adjustable coefficients can mitigate uncertainties in the ω̄_p determination, they cannot fully overcome the NLSE's assumption of a single ideal carrier frequency, particularly for HOSM data with γ = 1.
Examining the 630 samples collectively revealed that the solutions using the trainable ω̄_p and k̄_p values exhibited fewer envelope offsets. Figure 10 demonstrates an enhancement in the SSP values when comparing these results for fine tuned coefficients with the results obtained with the constant, predetermined coefficients shown in Figure 8. While the trend persisted that lower-frequency, higher-steepness, or broad-banded samples yielded comparatively high individual errors, the average SSP values were reduced from SSP = 0.223 to SSP = 0.209 (γ = 1), from SSP = 0.199 to SSP = 0.185 (γ = 3), and from SSP = 0.181 to SSP = 0.165 (γ = 6). On average, the reduction from SSP = 0.204 with constant coefficients to SSP = 0.186 with trainable coefficients represented an improvement of approximately 8.8%, while complicated instances, such as sample no. 2, experienced considerably higher individual improvements (around 20%). Consequently, allowing the fine tuning of the ω̄_p and k̄_p values during PINN training confirmed the initial hypothesis, as it enhanced the measurable and visible reconstruction quality compared with using constant coefficients in pure data assimilation tasks.
During the coefficient fine tuning of sample no. 2, the peak frequency increased and the wavenumber decreased over the course of training, as shown in Figure 9 and Table 1. In contrast, other samples displayed varied trends, such as concurrently increasing or decreasing ω̄_p and k̄_p, or a decreasing ω̄_p with an increasing k̄_p. Figure 11 illustrates this variability across all 630 samples according to their respective ϵ–γ combination. The initial ω̄_p and k̄_p values lie on the blue graph of the linear dispersion relation, while the fine tuned values, indicated by red crosses, mostly deviate from this relation. The left and middle columns of this figure reveal that medium- and broad-banded sea states (γ = 3 and γ = 1) tended to benefit from ω̄_p–k̄_p combinations above the linear dispersion relation. This suggests that increased group velocities c_g = ω̄_p/(2 k̄_p) enhanced the reconstruction quality. In contrast, narrow-banded samples (γ = 6) showed learned combinations both above and below the linear dispersion curve, reflecting that the neural network effectively reduced the uncertainties in the initial determination of ω̄_p.
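Classifying a fine tuned (ω̄_p, k̄_p) pair relative to the linear dispersion curve, as done for Figure 11, is a direct comparison; a minimal sketch assuming the deep-water relation ω = √(gk):

```python
import numpy as np

def dispersion_position(omega_p, k_p, g=9.81):
    """Return 'above', 'on', or 'below' depending on where a fine tuned
    (omega_p, k_p) pair lies relative to the deep-water linear dispersion
    curve omega = sqrt(g*k). A pair above the curve implies a larger group
    velocity omega_p / (2*k_p) than the linear relation gives for that k_p."""
    omega_lin = np.sqrt(g * k_p)
    if np.isclose(omega_p, omega_lin):
        return "on"
    return "above" if omega_p > omega_lin else "below"
```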
Figure 11 further illustrates a consistent trend: irrespective of the γ value, increasing steepness ϵ resulted in smaller changes in the k̄_p values during training. This is evident from the vertically oriented lines between the initial and final ω̄_p–k̄_p combinations in the upper subplots. This observation implies that as the steepness increases, the PINN prioritizes maintaining the nonlinear term of the NLSE, as represented by the coefficient ν = k̄_p³. This aligns with physical reasoning, as a higher ϵ corresponds to increased nonlinearity, and preserving this coefficient indicates that the PINN identifies the nonlinearity as crucial. Consequently, alterations in the group velocity c_g = ω̄_p/(2 k̄_p) primarily arise from alterations in ω̄_p alone.

4. Conclusions

This study demonstrated realistic application scenarios of physics-informed neural networks (PINNs) in ocean engineering contexts. Specifically, the hydrodynamic nonlinear Schrödinger equation (NLSE) constrained the developed PINN framework to ensure the physical consistency of its solutions. First, the NLSE-PINN was employed to reconstruct synthetic wave elevation measurements between two gauges, a process known as data assimilation and a fundamental capability of PINNs. However, the utilized wave data occasionally deviated from the small-steepness and narrow-band assumptions inherent to the NLSE, which posed a challenge known as model misspecification. This challenge must be anticipated whenever PINNs are employed in real-world scenarios, where analytical solutions to the PDE utilized in a PINN's loss function are typically not available and comprehensive knowledge of physical equations covering all observed phenomena is rare. Moreover, this model misspecification and the short time intervals of the elevation measurements complicated the determination of the required NLSE coefficients based on spectral properties. These uncertainties in the a priori determination of constant NLSE coefficients occasionally led to non-physical envelope offsets in the reconstruction of certain instances.
To enhance the reconstruction quality of such cases, conventional numerical methods would typically require a systematic coefficient adjustment and numerical forward propagation to optimize the data fit between two measurement points. In contrast, our second research phase demonstrated that the NLSE-PINN method allowed for fine tuning the NLSE coefficients’ base parameters ω ¯ p and k ¯ p concurrently with the data assimilation in the same optimization process. This integrated parameter fine tuning effectively mitigated the previously observed non-physical offsets and reduced the mean error across all 630 wave samples from SSP = 0.204 with predetermined, constant coefficients during pure data assimilation to SSP = 0.186 with trainable coefficients. This represented an average improvement of 8.82 % , while some samples experienced even greater improvements of up to 20 % .
However, it is important to note that the reconstruction remains generally more challenging for broader wave spectra and samples with higher steepness. This challenge is not inherent to the PINN methodology itself, but rather associated with the characteristics of the utilized NLSE to constrain the loss function for data stemming from the more realistic HOSM. Therefore, future research can explore the integration of the modified nonlinear Schrödinger equation (MNLSE), which is also referred to as the Dysthe equation [74], into the PINN loss function. Due to its higher order of nonlinearity, the MNLSE may mitigate the steepness limitations inherent in the standard NLSE and provide new insights. Also, the broader-bandwidth variant of the MNLSE [75] might be particularly advantageous, as it additionally alleviates the limitations associated with narrow-band assumptions. Moreover, the development of a PINN capable of directly solving the fully nonlinear potential flow equations represents a promising direction, as it could offer improvements over relying on simplified models, such as the variants of the NLSE.
Moreover, concerning the successful fine tuning of the ω̄_p and k̄_p values in this study, further research can explore the feasibility of deriving a nonlinear dispersion relation as a function of wave parameters, such as the bandwidth, amplitude, or steepness. This research direction holds the potential to deepen our understanding of the interplay between the NLSE coefficients and wave parameters and may contribute to the development of more flexible and accurate models for describing nonlinear water wave phenomena. Furthermore, this study employed synthetic wave data generated at two locations within a numerical wave tank with a fixed measurement distance. Hence, future research should investigate the impact of varying the distance between these two measurement locations. Moreover, applying the potential flow PINN or NLSE-PINN methods to real wave measurement data from wave tanks or open ocean scenarios, particularly for directional two-dimensional wave surfaces, would be valuable.

Author Contributions

Conceptualization, S.E., M.S. and N.H.; methodology, S.E., N.A.W. and A.S.; software, S.E.; validation, S.E. and N.H.; formal analysis, S.E.; investigation, S.E., N.A.W. and A.S.; resources, N.H.; data curation, M.K.; writing—original draft preparation, S.E.; writing—review and editing, N.A.W., A.S., M.K., N.H. and M.S.; visualization, S.E.; supervision, N.H. and M.S.; project administration, M.K. and N.H.; funding acquisition, N.H. All authors read and agreed to the published version of this manuscript.

Funding

This research was funded by the Deutsche Forschungsgemeinschaft (DFG—German Research Foundation) [project number 277972093: Excitability of Ocean Rogue Waves].

Data Availability Statement

The raw data and code supporting the conclusions of this article will be made available by the authors upon request.

Acknowledgments

The authors would like to express their gratitude to the lecturers George Em. Karniadakis and Khemraj Shukla, as well as to all organizers and funders for the Summer School on Physics-Informed Neural Networks and Applications at KTH, Stockholm, in 2023. Furthermore, appreciation is extended to the Studienstiftung des deutschen Volkes for the support provided through the Natural Science Collegium 2022/2023, during which time this work was initiated.
The investigations were conducted and the manuscript was written completely by the authors. Once the manuscript was completed, the authors used ChatGPT 3.5 (https://chat.openai.com/) in order to improve its grammar and readability. After using this tool, the authors reviewed and edited the content as needed and take full responsibility for the content of the publication.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AD	Automatic differentiation
HOSM	High-order spectral method
MSE	Mean-squared error
NLSE	Nonlinear Schrödinger equation
PDE	Partial differential equation
PINN	Physics-informed neural network
SSP	Surface similarity parameter

Appendix A

Appendix A.1. Graphical Representation of SSP Error Metric

Although the surface similarity parameter (SSP, Equation (20)) has been used in several studies to compare surface waves, it remains a relatively new metric. Therefore, Figure A1 presents the SSP for one-dimensional surfaces Ṽ with increasing alignment with the reference surface V_m. To further illustrate the increasing surface alignment, the more established mean-squared error metric
MSE = (1/n) ∑_{i=1}^{n} (V_{m,i} − Ṽ_i)²,
with n being the number of grid points, is also provided for each signal. It is important to note that the MSE values are scale dependent, which poses a disadvantage for our comparative investigation, as the amplitudes of the 630 examined wave samples varied by several orders of magnitude due to the chosen wave parameter ranges for the data generation. Although the MSE could be normalized by the norm or the maximum of the reference signal V_m to mitigate the issue of differing magnitudes between wave samples, this would still treat phase shifts or frequency differences as amplitude errors. In contrast, the SSP distinguishes between amplitude, phase, and frequency errors (cf. [69]) by comparing the signals in the frequency domain and combines the total error into a single value that always ranges between 0 and 1, regardless of the specific sample's amplitude.
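A minimal sketch of such a frequency-domain error metric is given below. It implements the normalized-error form commonly associated with the surface similarity parameter, SSP = ‖F_Ṽ − F_Vm‖ / (‖F_Ṽ‖ + ‖F_Vm‖) with F the Fourier transform; Equation (20) itself is not reproduced here, so treat this as an illustration rather than the paper's exact definition.

```python
import numpy as np

def ssp(y_pred, y_ref):
    """Normalized frequency-domain error: 0 for identical signals and 1 for
    perfectly out-of-phase signals or when one signal vanishes. The triangle
    inequality guarantees the value stays within [0, 1]."""
    F_pred = np.fft.fft(y_pred)
    F_ref = np.fft.fft(y_ref)
    num = np.sqrt(np.sum(np.abs(F_pred - F_ref) ** 2))
    den = np.sqrt(np.sum(np.abs(F_pred) ** 2)) + np.sqrt(np.sum(np.abs(F_ref) ** 2))
    return num / den if den > 0 else 0.0
```

Unlike the MSE, this value is independent of the sample's amplitude scale, which makes errors comparable across the 630 samples.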
Figure A1. Graphical representation for assessing the meaning of different SSP values. The baseline signal V m represents the imaginary part of sample no. 2 (also shown in Figure 7 and Figure 9) at position x = 6 m . It was compared with incrementally improving PINN solutions V ˜ , which resulted in decreasing SSP values. Additionally, the MSE values were calculated to provide further insights.

Appendix A.2. NLSE-PINN Validation Using Analytic Solution

To evaluate the assimilation and coefficient identification capabilities of the NLSE-PINN, distinguishing between errors originating from the neural network and those inherent to the NLSE is imperative. Given the NLSE’s simplifying assumptions, it may not capture all the characteristics of the numerical wave tank data generated by the HOSM. Hence, we initially validated our method using the analytical NLSE solution of a Peregrine breather [76] in time-like form [51], which is defined as
A_per(x, t) = a [ −1 + 4(1 − i ω_p ϵ² x / c_g) / (1 + 8 k_p² ϵ² (x − c_g t)² + ω_p² ϵ⁴ x² / c_g²) ] exp(i ω_p ϵ² x / (2 c_g)).
As the NLSE entirely encompasses this analytic solution, the following analysis allowed for estimating the magnitude of errors attributable to the approximation by the neural network: Given the spatial range of 3 m covered by the numerical wave tank (Figure 1), we derived the analytical solution accordingly. The carrier wave frequency was set to ω_p = 9 rad/s, with amplitude a = 0.02 m. The NLSE coefficients were calculated using Equation (6) and held constant during 15,000 epochs of PINN training. The reconstruction results in the spatio-temporal domain, which were solely based on the boundary data, are shown in Figure A2.
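Generating the analytic boundary data for this validation amounts to evaluating the Peregrine breather (Equation (A2)) on a grid. The sketch below assumes deep-water linear dispersion and one common sign convention for the time-like form; it is an illustration of the validation setup, not the authors' code.

```python
import numpy as np

def peregrine_envelope(x, t, a=0.02, omega_p=9.0, g=9.81):
    """Time-like Peregrine breather envelope (illustrative sign convention).
    At (x, t) = (0, 0) the envelope reaches its characteristic 3*a peak."""
    k_p = omega_p**2 / g          # deep-water linear dispersion
    eps = k_p * a                 # wave steepness
    c_g = omega_p / (2.0 * k_p)   # group velocity
    frac = 4.0 * (1.0 - 1j * omega_p * eps**2 * x / c_g) / (
        1.0
        + 8.0 * (k_p * eps) ** 2 * (x - c_g * t) ** 2
        + (omega_p * eps**2 * x / c_g) ** 2
    )
    return a * (-1.0 + frac) * np.exp(0.5j * omega_p * eps**2 * x / c_g)
```

Sampling this function at the two boundary locations provides the data loss targets, while its values inside the domain serve as the ground truth for the error evaluation.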
Figure A2. Real part and imaginary part of the envelope solution Ã(x,t) = Ũ(x,t) + iṼ(x,t) generated by the PINN using predetermined, constant NLSE coefficients for a Peregrine breather. The analytic envelope points U_per and V_per are provided only at the domain boundaries and are highlighted in violet. This analysis not only validated the efficiency of the developed NLSE-PINN framework but also allowed for assessing the small magnitude of approximation error attributable to the neural network.
Despite providing only two time series of the analytical solution at the outer boundaries x = −1.5 m and x = 1.5 m for computing the data loss (Equation (14)), the PINN effectively reconstructed physically consistent Peregrine envelopes within the remaining computational domain solely with the aid of the PDE residual loss components. This is demonstrated in the lower subplots, where the time series of the PINN solution's real and imaginary parts, Ũ(x,t) and Ṽ(x,t), are compared with the analytical solution, U_per(x,t) and V_per(x,t), at cross-sections of five points in space. Notably, despite the lower envelope amplitudes at the domain boundaries, the PINN accurately reconstructed the higher peak of the real part in the middle of the domain. This underscores the NLSE-PINN's efficacy in capturing the underlying NLSE dynamics. Furthermore, this analysis revealed that the approximation errors attributable to the neural network were very small, averaging around MSE = 5.8 × 10⁻⁹ and SSP = 0.004 in this instance. Hence, any encountered errors were more likely attributable to the model misspecification between the NLSE-PINN and the HOSM wave tank measurement data than to approximation errors by the neural network.

References

1. Klein, M.; Dudek, M.; Clauss, G.F.; Ehlers, S.; Behrendt, J.; Hoffmann, N.; Onorato, M. On the deterministic prediction of water waves. Fluids 2020, 5, 9.
2. Ehlers, S.; Klein, M.; Heinlein, A.; Wedler, M.; Desmars, N.; Hoffmann, N.; Stender, M. Machine learning for phase-resolved reconstruction of nonlinear ocean wave surface elevations from sparse remote sensing data. Ocean Eng. 2023, 288, 116059.
3. Jagtap, A.D.; Mitsotakis, D.; Karniadakis, G.E. Deep learning of inverse water waves problems using multi-fidelity data: Application to Serre–Green–Naghdi equations. Ocean Eng. 2022, 248, 110775.
4. Karniadakis, G.E.; Kevrekidis, I.G.; Lu, L.; Perdikaris, P.; Wang, S.; Yang, L. Physics-informed machine learning. Nat. Rev. Phys. 2021, 3, 422–440.
5. Willard, J.; Jia, X.; Xu, S.; Steinbach, M.; Kumar, V. Integrating physics-based modeling with machine learning: A survey. arXiv 2020, arXiv:2003.04919.
6. Mohaghegh, F.; Murthy, J.; Alam, M.R. Rapid phase-resolved prediction of nonlinear dispersive waves using machine learning. Appl. Ocean Res. 2021, 117, 102920.
7. Klein, M.; Stender, M.; Wedler, M.; Ehlers, S.; Hartmann, M.; Desmars, N.; Pick, M.-A.; Seifried, R.; Hoffmann, N. Application of machine learning for the generation of tailored wave sequences. In Proceedings of the International Conference on Offshore Mechanics and Arctic Engineering, Volume 5B: Ocean Engineering, Honoring Symposium for Professor Günther F. Clauss on Hydrodynamics and Ocean Engineering, Hamburg, Germany, 5–10 June 2022.
8. Liu, Y.; Zhang, X.; Chen, G.; Dong, Q.; Guo, X.; Tian, X.; Lu, W.; Peng, T. Deterministic wave prediction model for irregular long-crested waves with Recurrent Neural Network. J. Ocean Eng. Sci. 2022, 9, 251–263.
9. Wedler, M.; Stender, M.; Klein, M.; Hoffmann, N. Machine learning simulation of one-dimensional deterministic water wave propagation. Ocean Eng. 2023, 284, 115222.
10. Zhao, M.; Zheng, Y.; Lin, Z. Sea surface reconstruction from marine radar images using deep convolutional neural networks. J. Ocean Eng. Sci. 2023, 8, 647–661.
11. Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J. Comput. Phys. 2019, 378, 686–707.
12. Baydin, A.G.; Pearlmutter, B.A.; Radul, A.A.; Siskind, J.M. Automatic differentiation in machine learning: A survey. J. Mach. Learn. Res. 2018, 18, 1–43.
13. Cuomo, S.; Di Cola, V.S.; Giampaolo, F.; Rozza, G.; Raissi, M.; Piccialli, F. Scientific machine learning through physics-informed neural networks: Where we are and what’s next. J. Sci. Comput. 2022, 92, 88.
14. Kissas, G.; Yang, Y.; Hwuang, E.; Witschey, W.R.; Detre, J.A.; Perdikaris, P. Machine learning in cardiovascular flows modeling: Predicting arterial blood pressure from non-invasive 4D flow MRI data using physics-informed neural networks. Comput. Methods Appl. Mech. Eng. 2020, 358, 112623.
15. Raissi, M.; Yazdani, A.; Karniadakis, G.E. Hidden fluid mechanics: Learning velocity and pressure fields from flow visualizations. Science 2020, 367, 1026–1030.
16. Cai, S.; Mao, Z.; Wang, Z.; Yin, M.; Karniadakis, G.E. Physics-informed neural networks (PINNs) for fluid mechanics: A review. Acta Mech. Sin. 2021, 37, 1727–1738.
17. Rasht-Behesht, M.; Huber, C.; Shukla, K.; Karniadakis, G.E. Physics-informed neural networks (PINNs) for wave propagation and full waveform inversions. J. Geophys. Res. Solid Earth 2022, 127, e2021JB023120.
18. Zobeiry, N.; Humfeld, K.D. A physics-informed machine learning approach for solving heat transfer equation in advanced manufacturing and engineering applications. Eng. Appl. Artif. Intell. 2021, 101, 104232.
19. Cai, S.; Wang, Z.; Wang, S.; Perdikaris, P.; Karniadakis, G.E. Physics-informed neural networks for heat transfer problems. J. Heat Transf. 2021, 143, 060801.
20. Kashinath, K.; Mustafa, M.; Albert, A.; Wu, J.-L.; Jiang, C.; Esmaeilzadeh, S.; Azizzadenesheli, K.; Wang, R.; Chattopadhyay, A.; Singh, A.; et al. Physics-informed machine learning: Case studies for weather and climate modelling. Philos. Trans. Ser. Math. Phys. Eng. Sci. 2021, 379, 2194.
21. Chen, Y.; Lu, L.; Karniadakis, G.E.; Dal Negro, L. Physics-informed neural networks for inverse problems in nano-optics and metamaterials. Opt. Express 2020, 28, 11618–11633.
22. Nguyen-Thanh, V.M.; Zhuang, X.; Rabczuk, T. A deep energy method for finite deformation hyperelasticity. Eur. J. Mech. A/Solids 2020, 80, 103874.
23. Wang, N.; Chen, Q.; Chen, Z. Reconstruction of nearshore wave fields based on physics-informed neural networks. Coast. Eng. 2022, 176, 104167.
24. Chen, Q.; Wang, N.; Chen, Z. Simultaneous mapping of nearshore bathymetry and waves based on physics-informed deep learning. Coast. Eng. 2023, 183, 104337.
25. Feng, D.; Tan, Z.; He, Q. Physics-informed neural networks of the Saint-Venant equations for downscaling a large-scale river model. Water Resour. Res. 2023, 59, e2022WR033168.
26. Chiang, C.M.; Stiassnie, M.; Yue, D.K.-P. Theory and Applications of Ocean Surface Waves; World Scientific: Singapore, 2005.
27. Zakharov, V.E. Stability of periodic waves of finite amplitude on the surface of a deep fluid. J. Appl. Mech. Tech. Phys. 1968, 9, 190–194.
28. Shemer, L.; Kit, E.; Jiao, H.; Eitan, O. Experiments on nonlinear wave groups in intermediate water depth. J. Waterw. Port Coast. Ocean Eng. 1998, 124, 320–327.
29. Trulsen, K.; Stansberg, C.T. Spatial evolution of water surface waves: Numerical simulation and experiment of bichromatic waves. In Proceedings of the ISOPE International Ocean and Polar Engineering Conference, ISOPE–I, Kitakyushu, Japan, 25–31 May 2001.
30. Dysthe, K.B.; Trulsen, K.; Krogstad, H.E.; Socquet-Juglard, H. Evolution of a narrow-band spectrum of random surface gravity waves. J. Fluid Mech. 2003, 478, 1–10.
31. Chabchoub, A.; Hoffmann, N.P. Rogue wave observation in a water wave tank. Phys. Rev. Lett. 2011, 106, 204502.
32. Ruban, V.P. Gaussian variational ansatz in the problem of anomalous sea waves: Comparison with direct numerical simulation. J. Exp. Theor. Phys. 2015, 120, 925–932.
33. Li, J.; Li, B. Solving forward and inverse problems of the nonlinear Schrödinger equation with the generalized PT-symmetric Scarf-II potential via PINN deep learning. Commun. Theor. Phys. 2021, 73, 125001.
34. Pu, J.-C.; Li, J.; Chen, Y. Solving localized wave solutions of the derivative nonlinear Schrödinger equation using an improved PINN method. Nonlinear Dyn. 2021, 105, 1723–1739.
35. Pu, J.-C.; Li, J.; Chen, Y. Soliton, breather, and rogue wave solutions for solving the nonlinear Schrödinger equation using a deep learning method with physical constraints. Chin. Phys. B 2021, 30, 060202.
36. Wang, L.; Yan, Z. Data-driven rogue waves and parameter discovery in the defocusing nonlinear Schrödinger equation with a potential using the PINN deep learning. Phys. Lett. A 2021, 404, 127408.
37. Jiang, X.; Wang, D.; Fan, Q.; Zhang, M.; Lu, C.; Lau, A.P.T. Physics-informed neural network for nonlinear dynamics in fiber optics. Laser Photonics Rev. 2022, 16, 2100483.
38. Zhang, S.; Lan, P.; Su, J.-J. Wave-packet behaviors of the defocusing nonlinear Schrödinger equation based on the modified physics-informed neural networks. Chaos Interdiscip. J. Nonlinear Sci. 2021, 31, 113107.
39. He, Q.Z.; Barajas-Solano, D.; Tartakovsky, G.; Tartakovsky, A.M. Physics-informed neural networks for multiphysics data assimilation with application to subsurface transport. Adv. Water Resour. 2020, 141, 103610.
40. Von Saldern, J.G.R.; Reumschüssel, J.M.; Kaiser, T.L.; Sieber, M.; Oberleithner, K. Mean flow data assimilation based on physics-informed neural networks. Phys. Fluids 2022, 34, 115129.
41. West, B.J.; Brueckner, K.A.; Janda, R.S.; Milder, D.M.; Milton, R.L. A new numerical method for surface hydrodynamics. J. Geophys. Res. Oceans 1987, 92, 11803–11824.
42. Ducrozet, G.; Bonnefoy, F.; Perignon, Y. Applicability and limitations of highly non-linear potential flow solvers in the context of water waves. Ocean Eng. 2017, 142, 233–244.
43. Lünser, H.; Hartmann, M.; Desmars, N.; Behrendt, J.; Hoffmann, N.; Klein, M. The influence of characteristic sea state parameters on the accuracy of irregular wave field simulations of different complexity. Fluids 2022, 7, 243.
44. Zou, Z.; Meng, X.; Karniadakis, G.E. Correcting model misspecification in physics-informed neural networks (PINNs). arXiv 2023, arXiv:2310.10776.
45. Hasselmann, K.; Barnett, T.P.; Bouws, E.; Carlson, H.; Cartwright, D.E.; Enke, K.; Ewing, J.A.; Gienapp, H.; Hasselmann, D.E.; Kruseman, P.; et al. Measurements of wind-wave growth and swell decay during the Joint North Sea Wave Project (JONSWAP). Ergänzung zur Dtsch. Hydrogr. Z. Reihe A 1973, 12, 1–95.
46. Bouws, E.; Günther, H.; Rosenthal, W.; Vincent, C.L. Similarity of the wind wave spectrum in finite depth water: 1. Spectral form. J. Geophys. Res. 1985, 90, 975–986.
47. Amir, M.A.U.; Wahid, A.S.A.; Appelbe, H.; Aziz, H.A.; Ahmad, M.A.; Azmir, H.; Jamil, R.A.; Radzi, Z.M. Optimal specification of wave flume in confined space. In Proceedings of the 2016 International Conference on Information and Communication Technology (ICICTM), Kuala Lumpur, Malaysia, 16–17 May 2016; pp. 136–140.
48. Khalilabadi, M.R.; Bidokhti, A.A. Design and Construction of an Optimum Wave Flume. J. Appl. Fluid Mech. 2013, 5, 99–103.
49. Hasimoto, H.; Ono, H. Nonlinear modulation of gravity waves. J. Phys. Soc. Jpn. 1972, 33, 805–811.
50. Yuen, H.C.; Lake, B.M. Nonlinear Dynamics of Deep-Water Gravity Waves. Adv. Appl. Mech. 1982, 22, 67–229.
51. Chabchoub, A.; Grimshaw, R.H.J. The hydrodynamic nonlinear Schrödinger equation: Space and time. Fluids 2016, 1, 23.
52. Osborne, A.R. Nonlinear Ocean Waves and the Inverse Scattering Transform; Academic Press: Amsterdam, The Netherlands, 2010; Volume 97.
53. Sobey, R.J. Real Sea States—Wave Grouping. In Real Sea States: Advanced Short Course Notes; Leichtweiß-Institut für Wasserbau, Technische Universität Braunschweig: Braunschweig, Germany, 1999.
54. Mansard, E.P.D.; Funke, E.R. On the fitting of parametric models to measured wave spectra. In Proceedings of the 2nd International Symposium on Waves and Coastal Engineering, Hannover, Germany, 12–14 October 1988; pp. 12–14.
  55. Huang, N.E.; Shen, Z.; Long, S.R. A new view of nonlinear water waves: The Hilbert spectrum. Annu. Rev. Fluid Mech. 1999, 31, 417–457. [Google Scholar] [CrossRef]
  56. Thrane, N.; Wismer, J.; Konstantin-Hansen, H.; Gade, S. Application Note: Practical Use of the “Hilbert Transform”. Brüel & Kjær. 2011. Available online: https://www.bksv.com/media/doc/bo0437.pdf (accessed on 5 September 2024).
  57. Glorot, X.; Bengio, Y. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics; Sardinia, Italy, 13–15 May 2010, Teh, Y.W., Titterington, M., Eds.; PMLR: Sardinia, Italy, 2010; Volume 9, pp. 249–256. Available online: https://proceedings.mlr.press/v9/glorot10a.html (accessed on 10th September 2024).
  58. Jagtap, A.D.; Kawaguchi, K.; Karniadakis, G.E. Adaptive activation functions accelerate convergence in deep and physics-informed neural networks. J. Comput. Phys. 2020, 404, 109136. [Google Scholar] [CrossRef]
  59. Jagtap, A.D.; Kawaguchi, K.; Karniadakis, G.E. Locally adaptive activation functions with slope recovery for deep and physics-informed neural networks. Proc. R. Soc. A Math. Phys. Eng. Sci. 2020, 476, 20200334. [Google Scholar] [CrossRef]
  60. Jagtap, A.D.; Kharazmi, E.; Karniadakis, G.E. Conservative physics-informed neural networks on discrete domains for conservation laws: Applications to forward and inverse problems. Comput. Methods Appl. Mech. Eng. 2020, 365, 113028. [Google Scholar] [CrossRef]
  61. Shukla, K.; Di Leoni, P.C.; Blackshire, J.; Sparkman, D.; Karniadakis, G.E. Physics-Informed Neural Network for Ultrasound Nondestructive Quantification of Surface Breaking Cracks. J. Nondestruct. Eval. 2020, 39, 1–20. [Google Scholar] [CrossRef]
  62. Guo, Y.; Cao, X.; Peng, K. Solving nonlinear soliton equations using improved physics-informed neural networks with adaptive mechanisms. Commun. Theor. Phys. 2023, 75, 095003. [Google Scholar] [CrossRef]
  63. McClenny, L.; Braga-Neto, U. Self-Adaptive Physics-Informed Neural Networks using a Soft Attention Mechanism. arXiv 2020, arXiv:2009.04544. [Google Scholar]
  64. Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. In Proceedings of the 3rd International Conference on Learning Representations (ICLR 2015), San Diego, CA, USA, 12–15 May 2015; pp. 1–15. [Google Scholar]
  65. Liu, D.C.; Nocedal, J. On the limited memory BFGS method for large scale optimization. Math. Program. 1989, 45, 503–528. [Google Scholar] [CrossRef]
  66. Markidis, S. The Old and the New: Can Physics-Informed Deep-Learning Replace Traditional Linear Solvers? Front. Big Data 2021, 4, 669097. [Google Scholar] [CrossRef]
  67. Reddi, S.J.; Kale, S.; Kumar, S. On the Convergence of Adam and Beyond. In Proceedings of the International Conference on Learning Representations, Vancouver, BC, Canada, 30 April–3 May 2018; Available online: https://openreview.net/pdf?id=ryQu7f-RZ (accessed on 5 September 2024).
  68. Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In Proceedings of the Advances in Neural Information Processing Systems, Vancouver, BC, Canada, 8–14 December 2019; Curran Associates, Inc.: Red Hook, NY, USA, 2019; Volume 32, pp. 8024–8035. Available online: https://proceedings.neurips.cc/paper_files/paper/2019/file/bdbca288fee7f92f2bfa9f7012727740-Paper.pdf (accessed on 5 September 2024).
  69. Wedler, M.; Stender, M.; Klein, M.; Ehlers, S.; Hoffmann, N. Surface Similarity Parameter: A New Machine Learning Loss Metric for Oscillatory Spatio-Temporal Data. Neural Netw. 2022, 156, 123–134. [Google Scholar] [CrossRef]
  70. Perlin, M.; Bustamante, M. A robust quantitative comparison criterion of two signals based on the Sobolev norm of their difference. J. Eng. Math. 2016, 101, 115–124. [Google Scholar] [CrossRef]
  71. Desmars, N.; Hartmann, M.; Behrendt, J.; Klein, M.; Hoffmann, N. Reconstruction of Ocean Surfaces from Randomly Distributed Measurements Using a Grid-Based Method. In Proceedings of the International Conference on Offshore Mechanics and Arctic Engineering, Virtual Conference, 21–30 June 2021; Volume 6: Ocean Engineering. [Google Scholar] [CrossRef]
  72. Desmars, N.; Hartmann, M.; Behrendt, J.; Klein, M.; Hoffmann, N. Nonlinear Reconstruction and Prediction of Regular Waves. In Proceedings of the International Conference on Offshore Mechanics and Arctic Engineering, Hamburg, Germany, 5–10 June 2022. [Google Scholar] [CrossRef]
  73. Kim, I.-C.; Ducrozet, G.; Bonnefoy, F.; Leroy, V.; Perignon, Y. Real-time phase-resolved ocean wave prediction in directional wave fields: Enhanced algorithm and experimental validation. Ocean Eng. 2023, 276, 114212. [Google Scholar] [CrossRef]
  74. Dysthe, K.B. Note on a modification to the nonlinear Schrödinger equation for application to deep water waves. Proc. R. Soc. Lond. A 1979, 369, 105–114. [Google Scholar] [CrossRef]
  75. Trulsen, K.; Dysthe, K.B. A Modified Nonlinear Schrödinger Equation for Broader Bandwidth Gravity Waves on Deep Water. Wave Motion 1996, 24, 281–289. [Google Scholar] [CrossRef]
  76. Peregrine, D.H. Water waves, nonlinear Schrödinger equations and their solutions. J. Aust. Math. Soc. Ser. B Appl. Math. 1983, 25, 16. [Google Scholar] [CrossRef]
Figure 1. In the numerical wave tank setup, wave gauges measured the elevations at four points x_g = {3, 4, 5, 6} m (a), which resulted in a sparse spatio-temporal data structure for each sample (b). During the PINN training, the violet elevation time series were utilized, while the grey series were reserved for the subsequent performance evaluation.
Figure 2. JONSWAP spectra for the different peak enhancement factors γ used to initialize the HOSM wave simulations. A higher value of γ resulted in a narrower spectrum for the generated irregular waves.
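For reference, the JONSWAP spectral density behind Figure 2 can be sketched in a few lines. This is a minimal, hedged implementation of the standard parameterization; the Phillips constant `alpha` and the frequencies in the usage line are illustrative assumptions, not values from the paper.

```python
import math

def jonswap(omega, omega_p, gamma, alpha=0.0081, g=9.81):
    """JONSWAP spectral density S(omega) for angular frequency omega [rad/s].

    alpha (Phillips constant) and g are illustrative defaults; gamma is
    the peak enhancement factor varied in Figure 2.
    """
    sigma = 0.07 if omega <= omega_p else 0.09  # spectral width parameter
    r = math.exp(-((omega - omega_p) ** 2) / (2.0 * sigma**2 * omega_p**2))
    pm = (alpha * g**2 / omega**5) * math.exp(-1.25 * (omega_p / omega) ** 4)
    return pm * gamma**r

# At the spectral peak the enhancement factor applies fully, so the
# gamma = 6 spectrum carries six times the gamma = 1 peak density:
ratio = jonswap(8.34, 8.34, gamma=6) / jonswap(8.34, 8.34, gamma=1)  # ≈ 6.0
```

This mirrors the qualitative statement in the caption: raising γ concentrates energy near ω_p, narrowing the spectrum.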
Figure 3. Example of an irregular carrier wave η(x, t) measured at one location inside the wave tank. The real and imaginary parts of its corresponding complex envelope A(x, t) = U(x, t) + iV(x, t) are visualized, along with its absolute value |A(x, t)|.
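The complex envelope shown in Figure 3 is typically obtained from a measured elevation series via the Hilbert transform (analytic signal) followed by demodulation at the peak frequency. The sketch below illustrates that route with a plain O(n²) DFT to stay self-contained; it is an assumed reconstruction of the procedure, not the authors' code.

```python
import cmath
import math

def analytic_signal(eta):
    """Discrete analytic signal via the DFT (Hilbert-transform route):
    keep DC and Nyquist, double positive frequencies, zero negatives."""
    n = len(eta)
    spec = [sum(eta[k] * cmath.exp(-2j * math.pi * f * k / n) for k in range(n))
            for f in range(n)]
    for f in range(n):
        if f == 0 or (n % 2 == 0 and f == n // 2):
            pass                # keep DC and Nyquist as-is
        elif f < (n + 1) // 2:
            spec[f] *= 2.0      # positive frequencies
        else:
            spec[f] = 0.0       # negative frequencies
    return [sum(spec[f] * cmath.exp(2j * math.pi * f * k / n) for f in range(n)) / n
            for k in range(n)]

def complex_envelope(eta, t, omega_p):
    """Demodulate at the peak frequency: A(t) = z(t) * exp(-i * omega_p * t)."""
    z = analytic_signal(eta)
    return [zi * cmath.exp(-1j * omega_p * ti) for zi, ti in zip(z, t)]
```

For a narrow-band carrier η(t) = a(t) cos(ω̄_p t) with slowly varying a(t) > 0, the magnitude |A| recovers the envelope a(t), matching the |A(x, t)| curve in the figure.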
Figure 4. Schematic framework of the physics-informed neural network developed to solve the hydrodynamic nonlinear Schrödinger equation. The neural network architecture comprises two input nodes to insert points of the computational domain (x, t) and two output nodes to approximate the real and imaginary parts of the complex-valued NLSE solution Ã(x, t) = Ũ(x, t) + iṼ(x, t). Real wave measurement data η_m, which are obtained at the domain boundaries, are transformed into envelope representations A_m to guide the PINN's solution toward approximating these boundary values. This is achieved by the data loss component MSE_data (Equations (14) and (15)). To additionally guide the PINN solution toward ensuring physical consistency inside the entire computational domain, the PDE loss MSE_res incorporates NLSE residuals (Equations (16)–(19)). The PINN's variables θ (weights W and biases b); activation function slopes a; and self-adaption weights λ_d, μ_d, λ_r, and μ_r (and, for the case of additional coefficient fine tuning during assimilation, the parameters ω̄_p and k̄_p) are updated iteratively in the training process to minimize the total loss L, which is composed of data and PDE loss components.
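The PDE loss described in Figure 4 penalizes NLSE residuals evaluated by automatic differentiation at collocation points. The PyTorch sketch below illustrates the mechanism for one common space-like form of the hydrodynamic NLSE, i(A_x + A_t/c_g) + δ A_tt + ν |A|² A = 0, split into real and imaginary residuals with A = U + iV. The network size and the exact equation form are assumptions; the coefficients are borrowed from Table 1 purely for illustration.

```python
import torch

# Assumed coefficients (Table 1, sample no. 2, initial values):
w_p, k_p = 8.344, 7.097
c_g, delta, nu = w_p / (2 * k_p), -k_p / w_p**2, -k_p**3

# Small stand-in network mapping (x, t) -> (U, V):
net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 2),
)

def nlse_residuals(x, t):
    """Real/imaginary NLSE residuals via autograd for the assumed form
    i(A_x + A_t/c_g) + delta*A_tt + nu*|A|^2*A = 0 with A = U + iV."""
    U, V = net(torch.stack([x, t], dim=-1)).unbind(-1)
    grad = lambda f, v: torch.autograd.grad(f.sum(), v, create_graph=True)[0]
    U_x, U_t = grad(U, x), grad(U, t)
    V_x, V_t = grad(V, x), grad(V, t)
    U_tt, V_tt = grad(U_t, t), grad(V_t, t)
    amp2 = U**2 + V**2
    r_real = -V_x - V_t / c_g + delta * U_tt + nu * amp2 * U
    r_imag = U_x + U_t / c_g + delta * V_tt + nu * amp2 * V
    return r_real, r_imag

# Residuals at random collocation points inside the computational domain:
x = torch.rand(64, requires_grad=True)
t = torch.rand(64, requires_grad=True)
r_re, r_im = nlse_residuals(x, t)
mse_res = (r_re**2).mean() + (r_im**2).mean()  # PDE loss component
```

In the paper's setup this `mse_res` would be combined with the boundary data loss `MSE_data` (weighted by the self-adaption weights) to form the total loss L.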
Figure 5. Exemplary NLSE-PINN training loss curve. The PDE residual error components MSE_U,res and MSE_V,res initially exhibited a strong decrease, but slightly increased as the data error components MSE_U,data and MSE_V,data were reduced. After around 10,000 epochs, the PDE residual errors reached a plateau, while the data errors continued to gradually improve.
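Training curves like the one in Figure 5 commonly come from the two-stage schedule used for PINNs: Adam for robust first-order progress, then L-BFGS for fine convergence. The toy sketch below shows only the schedule mechanics on a trivial regression stand-in, not the actual NLSE-PINN loss.

```python
import torch

torch.manual_seed(0)

# Toy stand-in for the PINN: fit y = 3x + 0.5 with a linear model.
model = torch.nn.Linear(1, 1)
x = torch.linspace(-1.0, 1.0, 32).unsqueeze(-1)
y = 3.0 * x + 0.5

def loss_fn():
    return torch.mean((model(x) - y) ** 2)

# Stage 1: Adam epochs.
adam = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(500):
    adam.zero_grad()
    loss = loss_fn()
    loss.backward()
    adam.step()

# Stage 2: L-BFGS refinement (requires a closure that re-evaluates the loss).
lbfgs = torch.optim.LBFGS(model.parameters(), max_iter=50)

def closure():
    lbfgs.zero_grad()
    loss = loss_fn()
    loss.backward()
    return loss

lbfgs.step(closure)
```

The closure pattern is specific to `torch.optim.LBFGS`, which performs multiple internal function evaluations per `step` call.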
Figure 6. Real part and imaginary part of the envelope solution Ã(x, t) = Ũ(x, t) + iṼ(x, t) generated by the PINN using predetermined, constant NLSE coefficients for sample no. 1 (carrier wave with ϵ = 0.075, ω̄_p = 9.45 rad/s, and γ = 6). The measurement envelope points U_m and V_m were provided on the domain boundaries only and are highlighted in violet. While the PINN solution at the boundaries aligned well with the measured data, it exhibited slight inaccuracies within the remaining computational domain.
Figure 7. Real part and imaginary part of the envelope solution Ã(x, t) = Ũ(x, t) + iṼ(x, t) generated by the PINN using predetermined, constant NLSE coefficients for sample no. 2 (carrier wave with ϵ = 0.025, ω̄_p = 8.34 rad/s, and γ = 1). Compared with sample no. 1 in Figure 6, the envelope maxima that developed from the boundaries were met with an offset inside the domain.
Figure 8. Reconstruction errors for all 630 samples from training the PINNs with predetermined, constant NLSE coefficients. In general, the envelopes above lower-frequency carrier waves (lower ω̄_p) were harder to reconstruct than those above higher-frequency waves. Moreover, the errors increased for broader-banded samples (lower γ) and higher steepness samples (higher ϵ).
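The SSP reconstruction error reported in Figure 8 is a normalized metric bounded in [0, 1]: 0 for identical signals, 1 for signals in antiphase. The sketch below is a simplified time-domain version (for these L2 norms it coincides with the spectral-domain definition by Parseval's theorem); it is illustrative, not the authors' implementation.

```python
import math

def ssp(y1, y2):
    """Surface similarity parameter between two sampled signals:
    ||y1 - y2|| / (||y1|| + ||y2||), bounded in [0, 1]."""
    num = math.sqrt(sum((a - b) ** 2 for a, b in zip(y1, y2)))
    den = math.sqrt(sum(a * a for a in y1)) + math.sqrt(sum(b * b for b in y2))
    return num / den if den > 0 else 0.0

y = [math.sin(0.1 * k) for k in range(200)]
ssp(y, y)                    # → 0.0 (perfect agreement)
ssp(y, [-v for v in y])      # → 1.0 (antiphase)
```

Unlike a plain MSE, this normalization makes errors comparable across samples of different steepness, which is why both SSP and MSE values are reported in Table 1.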
Figure 9. Real part and imaginary part of the envelope solution Ã(x, t) = Ũ(x, t) + iṼ(x, t) generated by the PINN using adaptable NLSE coefficients due to the learnable ω̄_p and k̄_p for sample no. 2 (carrier wave with ϵ = 0.025, ω̄_p = 8.34 rad/s, and γ = 1). Compared with the reconstruction using constant NLSE coefficients in Figure 7, a reduction in the envelope offset was evident.
Figure 10. Reconstruction errors for all 630 samples from training the PINNs with tunable ω̄_p and k̄_p values that caused adaptable NLSE coefficients. In general, the SSP errors were slightly reduced compared with the results obtained with constant coefficients in Figure 8. However, the general trend where lower ω̄_p values, higher ϵ values, and lower γ values led to increased errors remained.
Figure 11. Initial and final ω̄_p–k̄_p combinations for all 630 samples from training the PINNs with tunable ω̄_p and k̄_p values that caused adaptable NLSE coefficients. While the initial values were determined following the linear dispersion relation, this constraint was removed during training. We observed that broader-banded samples (lower γ) tended to benefit from ω̄_p–k̄_p combinations above linear dispersion. Furthermore, with increasing steepness ϵ, the PINNs attempted to maintain the nonlinear term of the NLSE associated with ν = −k̄_p³ as far as possible by allowing only minor changes to the k̄_p value.
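The parameter fine tuning behind Figures 9–11 can be realized by registering ω̄_p and k̄_p as trainable tensors, so the NLSE coefficients are recomputed from them at every optimizer step. The sketch below is a hedged PyTorch illustration: the coefficient formulas follow the column headers of Table 1, and the initialization uses the linear deep-water dispersion relation k = ω²/g, as stated in the Figure 11 caption.

```python
import torch

g = 9.81  # gravitational acceleration [m/s^2]

# Expose the peak frequency and wavenumber as trainable parameters,
# initialized on the linear deep-water dispersion relation k = w^2 / g.
omega_p = torch.nn.Parameter(torch.tensor(8.344))
k_p = torch.nn.Parameter(omega_p.detach() ** 2 / g)

def nlse_coefficients():
    """Recompute the NLSE coefficients from the current (learnable) values,
    so gradients of the PDE loss can fine tune them during training."""
    c_g = omega_p / (2.0 * k_p)    # group velocity
    delta = -k_p / omega_p**2      # dispersion coefficient
    nu = -k_p**3                   # nonlinear coefficient
    return c_g, delta, nu

# In practice these two parameters are optimized jointly with the network
# weights; here they are registered alone for brevity.
optimizer = torch.optim.Adam([omega_p, k_p], lr=1e-3)
```

With ω̄_p = 8.344 rad/s, the dispersion-consistent initialization reproduces k̄_p ≈ 7.097 and the "Initial" coefficient row of Table 1; after training, the learned ω̄_p–k̄_p pair is free to leave the dispersion curve, as Figure 11 shows.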
Table 1. NLSE coefficients of sample no. 2 calculated from ω̄_p and k̄_p for the pure data assimilation task (initial) compared with the optimized values learned during the parameter fine tuning. In addition, the corresponding SSP and MSE errors are given.

|         | ω̄_p  | k̄_p  | c_g = ω̄_p/(2k̄_p) | δ = −k̄_p/ω̄_p² | ν = −k̄_p³ | SSP Re. | SSP Im. | MSE Re.   | MSE Im.   |
|---------|-------|-------|--------------------|-----------------|------------|---------|---------|-----------|-----------|
| Initial | 8.344 | 7.097 | 0.588              | −0.102          | −357.4     | 0.194   | 0.242   | 2.72×10⁻⁷ | 2.88×10⁻⁷ |
| Learned | 9.355 | 6.202 | 0.754              | −0.071          | −256.7     | 0.150   | 0.188   | 1.72×10⁻⁷ | 1.73×10⁻⁷ |

Share and Cite

Ehlers, S.; Wagner, N.A.; Scherzl, A.; Klein, M.; Hoffmann, N.; Stender, M. Data Assimilation and Parameter Identification for Water Waves Using the Nonlinear Schrödinger Equation and Physics-Informed Neural Networks. Fluids 2024, 9, 231. https://doi.org/10.3390/fluids9100231