Article

A Novel Underwater Wireless Optical Communication Optical Receiver Decision Unit Strategy Based on a Convolutional Neural Network

1 Physics Department, Faculty of Science, King Abdulaziz University, Jeddah 21589, Saudi Arabia
2 Lithography in Devices Fabrication and Development Research Group, Deanship of Scientific Research, King Abdulaziz University, Jeddah 21589, Saudi Arabia
3 Department of Chemical and Environmental Engineering, University of California, Riverside, CA 92521, USA
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(18), 2805; https://doi.org/10.3390/math12182805
Submission received: 17 July 2024 / Revised: 18 August 2024 / Accepted: 27 August 2024 / Published: 10 September 2024

Abstract:
Underwater wireless optical communication (UWOC) systems face challenges due to the significant temporal dispersion caused by the combined effects of scattering, absorption, refractive index variations, optical turbulence, and bio-optical properties. This collective impairment leads to signal distortion and degrades the optical receiver’s bit error rate (BER). Optimising the receiver filter and equaliser design is crucial to enhance receiver performance. However, having an optimal design may not be sufficient to ensure that the receiver decision unit can estimate BER quickly and accurately. This study introduces a novel BER estimation strategy based on a Convolutional Neural Network (CNN) to improve the accuracy and speed of BER estimation performed by the decision unit’s computational processor compared to traditional methods. Our new CNN algorithm utilises the eye diagram (ED) image processing technique. Despite the incomplete definition of the UWOC channel impulse response (CIR), the CNN model is trained to address the nonlinearity of seawater channels under varying noise conditions and increase the reliability of a given UWOC system. The results demonstrate that our CNN-based BER estimation strategy accurately predicts the corresponding signal-to-noise ratio (SNR) and enables reliable BER estimation.

1. Introduction

Underwater wireless optical communication (UWOC) systems are showing promise as low-cost, high-capacity, energy-efficient ways to transmit data at high speeds of up to multi-gigabits per second (Gbps) over distances of 10 to 20 m [1,2]. Unlike traditional acoustic communication, UWOC offers higher bandwidth and lower latency, making it suitable for applications such as underwater exploration, environmental monitoring, and military operations [3]. However, the performance of UWOC systems is significantly influenced by various impairments, including scattering, absorption, and turbulence, which collectively deteriorate the signal quality and increase the bit error rate (BER) [4]. Moreover, the challenges facing UWOC systems are becoming increasingly complex, necessitating effective solution options. One of these options is to optimise the optical receiver design circuitry [5] to render an optimum performance level for the overall receiver unit.
In an optical receiver, a decision unit (DU) with an accurate BER estimation is crucial to support the performance optimisation steps of the digital receiver systems in UWOC. However, traditional BER estimation strategies, such as Monte Carlo simulations (MCSs) and analytical methods, are computationally intensive [6] and may not adapt well to the dynamics of the underwater environment. Pilot symbols and training sequences provide more real-time estimation but at the cost of reduced data throughput [7]. Error Vector Magnitude (EVM) and noise variance estimation offer alternative approaches but are often limited by their assumptions about the channel conditions [6,7], which makes the estimation strategy highly dependent on full knowledge of the channel impulse response (CIR) temporal profile. Monte Carlo simulations are flexible and can handle complex systems but are computationally expensive. Analytical methods are efficient, but their success depends on the accuracy of the model, a dependence the appropriate decision unit should avoid. Empirical methods provide real-world accuracy but are impractical for initial design, i.e., they supply design-time rather than run-time implementation knowledge. Hence, the choice of BER estimation method depends on the specific numerical needs and constraints of the communication system being analysed.
Recent advancements in machine learning (ML) technology solution options (see Figure 1), particularly CNNs in optical performance monitoring, have introduced a new technology to help design an efficient computational processor (for the DU) that deploys a CNN-oriented strategy. CNNs can learn complex patterns embedded in eye diagrams (composed of various pixels) that are generated in real time (see Figure 2). These patterns are learned from the bit stream received at the input of the DU, making them suitable for dynamically adapting to the varying conditions in UWOC systems. CNNs recognise patterns in visual data, making them suitable for processing eye diagram images, a critical visualisation tool used in digital communication systems to evaluate signal integrity and quality. Eye diagrams encapsulate key performance metrics, including timing jitter and noise levels, providing a comprehensive snapshot of the signal's health. By utilising the capabilities of CNNs, it is possible to develop a more robust and efficient decision unit strategy that improves BER estimation accuracy and overall system performance. This CNN-based DU strategy does not depend on the transmission modulation format, on channel stochastic impairments, or on a fixed threshold set during the design phase to estimate the BER. The CNN training data pool is continuously enhanced without reducing the data throughput. Additionally, it is worth mentioning that the DU implementation should not require knowledge of the CIR during design or run time. This CNN solution approach to building a high-performance DU is the core of this study.
Using CNN technology is not new in optical communications. CNNs have previously been used in optical performance monitoring (OPM) to measure the parameters of optical systems such as chromatic dispersion (CD), modulation format identification (MFI), and signal-to-noise ratio (SNR) [8,9,10,11,12]. Considering these measures to develop an affordable OPM system with strong diagnostic capabilities is crucial. Further investigation and analysis are necessary to address the obstacles and issues that this field faces, including the natural factors in underwater environments and the accompanying phenomena, whether they are inherent optical properties (IOPs), such as absorption, scattering, and scintillation, or apparent optical properties (AOPs), such as reflectance. Section 2 summarises the relevant published studies on ANNs, specifically focusing on CNNs. The purpose is to facilitate navigation and focus on the implementation of our proposed new CNN approach and its architecture and design elements.
This study first applies the CNN model directly on eye diagram images to predict SNR values through regression; subsequently, the BER is extracted from the SNR for UWOC receiver systems. The CNN model can provide accurate predictions at a reasonable cost, regardless of water type, pulse shape, and noise sources. It achieves a Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) in the range of 0.29–0.52 and 0.39–0.73, respectively, rendering it a fast and accurate way to assess the received signal inside the receiver of the UWOC system. Consequently, it becomes a core component in the decision unit, as depicted in Figure 3. The training of the CNN model is based on handling the nonlinearity of water channels under various noise environments, which helps identify and manage the UWOC systems’ reliability even though the impulse response of the water channel is not yet fully characterised. Although the eye diagram images in this study have been generated from simulations, the model has proved that the concept can be conveniently expanded to assess real-time generated eye diagrams.
Our proposed tool is vital for researchers and communication engineers interested in UWOC because of the difficulty of measuring SNR in real-world scenarios. Thus, when a new pulse is received, an image of the corresponding eye diagram is generated at run time. Then, the ML model can learn and deduce the SNR value with high accuracy. This reliable approach deals with the nonlinearity of channels in underwater environments, such as multiple scattering, turbulence, scintillation, propagation time jitter, and the multipath effect, which causes intersymbol interference (ISI), as well as receiver thermal noise. Furthermore, the decision unit will be an ML-based unit using the NN to pick the best-matched image to make a decision. Because training high-performance ML applications takes a long time and requires large processing units, a Microsoft Azure Virtual Machine (VM) was used in this study. This study also used various other resources, including Python 3.9.13, TensorFlow and Keras 2.12.0, cloud computing, the SQLite Database Management System (SQLite DBMS), and eye diagram images. The essential features for signal processing in a UWOC system are seawater type, channel model, pulse shape, pulse width, and the zero-position symbol. Based on these features, the proposed algorithm generates eye diagram images.
This study employed a CNN approach to perform regression analysis on eye diagram images to determine SNR values and, subsequently, the corresponding BER. The CNN contained a flattened feature map as the input, a hidden layer with a ReLU activation function, and an output layer with a linear activation function. The CNN took eye diagram images as inputs and processed them through subsequent layers to obtain feature maps. The first layer is called the convolutional layer. This study conducted 13 trials, each utilising different CNN models with filters ranging from 16 to 64.
Additionally, the feature map from the convolutional layer underwent max pooling. Five iterations of the convolutional and max pooling layers were performed. The last feature map was flattened to obtain the input values entering the Fully Connected layer (FC). Additionally, dropout (a regularisation technique) was used in the FC layer of the NN to tackle the overfitting problem. The dropout rate used was 0.45. The Adam optimiser and a learning rate of 10⁻⁵ were employed in all trials. Finally, the output layer of the CNN yielded predictions in the form of SNR values. In this study, our newly designed CNN instrument is equivalent to a computational processor for the optical receiver electronic circuitry decision unit. The training and validation loss exhibit minimal disparity, and the congruity in the performance metric suggests that the proposed model is more precise and comprehensive.
Additionally, if the neural network's size (the number of parameters) increases, the model's performance also increases. This study demonstrates that CNNs can make decisions using cost-effective functions with limited trainable parameters (ranging from 516,881 to 2,267,201). These decisions apply to various types of waters, even in the presence of ISI noise and fluctuations in the water environment. This study is organised as follows: Section 2 briefly reviews related studies. Section 3 reviews the basics of UWOC systems. Section 4 provides a background tutorial on conventional BER computational strategies for the decision unit. Section 5 presents the foundations of the CNN model solution framework. Section 6 details the CNN model architecture, design, and implementation. Section 7 provides a comprehensive overview of the results of the SNR and BER predictions, the performance metrics, and the statistical summary of the obtained results. Sections 8 and 9 discuss the conclusion and future studies, respectively.

2. Related Studies—A Brief Review

Artificial intelligence (AI) has caused significant reorganisation in many different industrial and scientific sectors as machines learn how to solve specific problems [13]. Computer algorithms acquire knowledge of the fundamental connections within a provided dataset and autonomously detect patterns to make decisions or forecasts [14]. Machine learning (ML) algorithms enable machines to implement intellectual activities by applying complex mathematical and statistical models [15]. Specifically, supervised and unsupervised ML methods have played an essential and effective role in optical communication, especially in detecting impairments and performance monitoring in UWOC systems. ML methods are important interdisciplinary tools that utilise eye diagram images as feature sources in several fields, including computer vision [16], equalisation [17], signal detection, and modulation format identification [18]. Examples of techniques used in this field include the Support Vector Machine (SVM) [19], k-means clustering to mitigate nonlinearity effects [20,21], Principal Component Analysis (PCA) for modulation format identification (MFI) and optical signal-to-noise ratio (OSNR) monitoring [22], and Kalman filtering for OPM [23]. Reference [24] indicated that CNNs could achieve the highest accuracy compared to five other ML algorithms: Decision Tree (DT), K-Nearest Neighbour (KNN), Back Propagation (BP), Artificial Neural Networks (ANNs), and SVM. Figure 1 depicts the various applications of ML algorithms used in optical communication. This study provides a comprehensive review of ML solution applications in UWOC technology.
Neural networks, such as ANNs, CNNs, and recurrent neural networks (RNNs), are highly suitable machine learning tools. They are capable of learning the complex relationships between samples, or features extracted from symbols, and channel parameters such as optical signal-to-noise ratio (OSNR), polarisation mode dispersion (PMD), polarisation-dependent loss (PDL), baud rate, and chromatic dispersion (CD) [10,25,26,27,28,29,30,31,32,33]. The OSNR is a signal parameter that significantly impacts the effectiveness of optical links. The OSNR can be used to predict the bit error rate (BER), which directly gauges receiver performance [34]. Reference [35] proposed and demonstrated a system for compensating for fibre nonlinearity impairment using a simple recurrent neural network (SRNN) with low complexity. This method reduces computational complexity and training costs while maintaining good compensation performance.
Figure 1. ML algorithms in optical performance monitoring.
Several methods based on automatic feature extraction can be used to obtain the features input into the neural network (NN). These methods use constellation diagrams, asynchronous amplitude histograms (AAHs), Asynchronous Delay Tap Plots (ADTPs), Asynchronous Single Channel Sampling (ASCS), and In-phase Quadrature Histograms (IQHs) for SNR and other parameter estimations. Here, we review works that display a range of OPM approaches utilising machine learning techniques to forecast signal-to-noise ratio (SNR) values. In [36], a new machine learning OPM method is proposed that uses support vector regressors (SVRs) and modified In-phase Quadrature Histogram (IQH) features to estimate several optical parameters, including the signal-to-noise ratio (SNR) and chromatic dispersion (CD). The deep learning algorithm in ref. [37] has been successfully applied in wireless communications, where challenging nonlinear problems often arise. An ANN algorithm in [26] was developed to calculate the signal-to-noise ratio (SNR) using On-Off Keying (OOK) and Differential Phase Shift Keying (DPSK) data. The training errors for OOK and DPSK were 0.03 and 0.04, respectively. The ANN is trained by sending a series of well-known symbols before being used as an equaliser. The parameters are modified to reduce discrepancies between the desired and ANN outputs [38]. Improving the ANN on the receiver can bring several benefits, such as reducing training time and complexity, maintaining high performance, achieving high data rates and bandwidth transmission capabilities, improving efficiency, and enhancing multipath delay robustness. Various studies have highlighted these advantages, including references [8,12,31,39,40,41,42,43]. A 10 Gbps NRZ modulation scheme measures the SNR using statistical parameters, such as means and standard deviations, obtained from the ADTP; the RMSE of the ADTP is 0.73. In [31,44,45,46,47,48,49], a DNN was employed to classify SNR from AAHs with 16-QAM and PDM-64QAM, with an accuracy that can reach 100%. OSNR monitoring from 10 to 30 dB was achieved using 10 Gb/s NRZ-OOK and NRZ-DPSK from ASCS. Constellation diagrams were used in [50,51] to estimate SNR with errors less than 0.7 dB by designing CNNs with QPSK, PSK, and QAM modulation formats. The fundamental CNN algorithm for SNR estimation is presented in [52], and methods for preprocessing received signals and selecting optimal parameters are provided. The technique efficiently and accurately identifies the modulation format and estimates the SNR and BER using 3D constellation density matrices in Stokes space.
Eye diagrams have been utilised in the literature to track OSNR, PMD, CD, nonlinearity, and crosstalk via NNs [53,54,55]. An eye pattern captures various optical communication noises simultaneously (e.g., thermal noise, time jitter, and ISI); consequently, the SNR decreases, and the signal declines as the noise levels rise. The SNR is used to investigate the quality of the received signal in communication systems. Reference [56] presents a long short-term memory (LSTM)-based deep learning model to simultaneously estimate the SNR and nonlinear noise power. The test error is less than 1.0 dB, and the modulation types include QPSK, 16QAM, and 64QAM. The SNR monitoring method suggested in ref. [57] uses an LSTM neural network, a classifier, and a low-bandwidth coherent receiver to convert continuous monitoring into a classification problem. It is cost-effective and suitable for multi-purpose OPM systems because it achieves excellent classification accuracy and robustness with minimal processing complexity.
The eye diagram, used to locate optical signal impairments, depicts the amplitude distribution over one or more bit periods by overlapping the symbols. The SNR and BER indicate how well a system performs by assessing the signal quality based on various properties: eye height, eye width, jitter, crossing percentage, and levels 0 and 1 (Figure 2).
Figure 2. Eye diagram essential features.
SVM for classification and NN for regression were studied in refs. [25,26,27,38] using 64-QAM, 40 Gb/s RZ-OOK, 10 Gb/s NRZ-OOK, and DPSK. The input features from the eye diagram are the mean, variance, Q-factor, closure, jitter, and crossing amplitude. The ANN reports a correlation coefficient of 0.97 and 0.96 for OOK and DPSK systems, respectively [58]. NN regression was developed to extract variance from eye diagram images, and SNR in a range from 4 to 30 dB was measured with a mean estimation error range of 0.2 to 1.2 dB over 250 km [25]. Another study used an ANN to extract 24 features from eye diagram images. The RMSE values ranged from 1.5 to 2 for SNRs between 10 and 30 dB using NRZ, RZ, and QPSK for a data rate of 40 Gb/s [59]. Table A1 (in Appendix A) summarises the studies from 2009 to 2024 that used ML to extract features from eye diagram images to obtain signal-to-noise ratios; the table also shows the implementations of the NN algorithms and model performance and compares these studies with ours. References [24,60,61] demonstrated CNN-based algorithms on eye diagram images and discussed the CNN structure and implementations in detail. These studies generated eye diagrams by run-time simulation or experimental setup and used classification techniques to obtain the SNR (see Table A2 in Appendix A). What is crucial to note is that while our study has created and implemented a new CNN structure for UWOC, previous efforts have primarily focused on optical fibre. Our approach of estimating the SNR directly from eye diagrams, which involves 13 regression CNN models, is at the heart of the novelty of our study.

3. UWOC System Model

In the following sections, we introduce our study as an innovative method for rapidly estimating the bit error rate (BER) in UWOC technology. Before doing so, we provide concise explanations of two topics to help the audience understand the underlying challenges this study aims to address. These topics are (1) the digital signal evaluation cycle, in which the digital signal transforms from an optical digital signal on the transmitter (Tx) side to an electronic digital signal at the output of the optical receiver (Rx), and (2) the conventional BER estimation regimes, which include some familiar approaches: modified Monte Carlo (MC)-based estimation methods, the MC prediction method, and the Log-Likelihood Ratio-based BER model. The UWOC system generally consists of three fundamental components, as depicted in Figure 3: the transmitter unit, the water propagation channel, and the receiver section.
Figure 3. The layout of a typical UWOC system [62].
In the underwater communication channel, photons propagate through the water independently of each other, facing different sequences of optical events: transmission, absorption, and scattering (elastic and inelastic). The impact on the transmitted optical signal includes attenuation, temporal and spatial beam spreading, deflection of its geometrical path, and amplitude and phase distortions [63]. Degradations such as absorption and scattering significantly impact the UWOC's performance [59]. Turbulence is another degrading factor that causes beam spreading, beam wander, beam scintillation, and link misalignment. Oceanic water types can be classified as follows [64]: clean ocean water, pure sea water, turbid harbour water, and coastal ocean water. Furthermore, in turbid harbour water, several photons may arrive at the receiver with delays, intersymbol interference (ISI), and signal fading, reducing communication viability [65].

3.1. The Transmitter Unit

The transmitter unit utilises a beam-shaping optical unit to interface with the water propagation channel. The transmitter unit's modulator provides the needed modulation shaping characteristics to generate the information bit stream. Moreover, in UWOC systems, the driver circuit is another crucial part of the transmitter unit [66]. The main job of this device is to convert the electrical signal from the modulator into an optical signal that can be transmitted through the water channel. The driver circuit typically consists of a laser or LED driver that provides the necessary current to the light source, which emits the optical signal s(t). The selection of the light source and driver circuit is determined by the particular system requirements, such as the desired data rate, transmission distance, power consumption [67], and optical characteristics of the water channel [68]. In the UWOC system, the LED setup is more affordable and straightforward, but the connection range is very constrained because of the incoherent optical beam and light spread in all directions [4]. Laser diodes are often used as the light source in UWOC systems due to their long ranges, high intensity of output power, improved collimation characteristics, narrow beam divergence [69], high efficiency, small size [70], high data rates, and low latencies [4]. The high-quality output of the coherent laser beam is quickly degraded by turbulence and underwater scattering. The laser-based UWOC system may reach a link distance of 100 m in clear water and 30 to 50 m in turbid water, while the LED-based UWOC may cover a link span of no more than 50 m [69].

3.2. UWOC Propagation Channel

In UWOC systems, water is the communication channel via which the optical signal s(t) propagates. One of the challenges of the UWOC is that there is no definite mathematical expression for the impulse response function (hc(t)). Hence, hc(t) must be reliably modelled to assess the scope of impacts on the propagated s(t) due to water channel impairments like absorption, single/multiple scattering, scintillation, and turbulence. These impairments degrade the temporal and spatial quality of s(t), reducing the received OSNR [71] at the surface of the photodetector. Many studies (e.g., [70,71,72]) focus on solving the radiative transfer equation (RTE) analytically and numerically, accounting for different sets of inherent optical properties (IOPs) that mainly include absorption and scattering. The analytical solutions of the RTE are based on a wide range of assumptions, or rather simplifications. These solutions are considered benchmark limits for the numerical ones. The simplest and most well-known benchmark is the Beer-Lambert law (BLL) [71,72,73]. The main aim of the numerical solutions is to derive an extrapolated closed-form expression, using double gamma curve-fitting, for the temporal profile of hc(t), which accounts for the impairments' impact limits for different water types and given link configurations. Once the hc(t) format is defined, we can compute the convolution of s(t) with hc(t), the result of which is the received optical signal (ropt(t)).
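For reference, the Beer-Lambert law cited above takes its usual form (stated here from standard usage rather than reproduced from this paper's equations):

P_r(d) = P_t \, e^{-c(\lambda) d}, \qquad c(\lambda) = a(\lambda) + b(\lambda)

where P_t is the transmitted power, d is the link distance, and c(λ) is the extinction coefficient, the sum of the absorption coefficient a(λ) and the scattering coefficient b(λ).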
In this study, we utilised the following hc(t) format versions: (1) double gamma functions (DGFs), (2) weighted double gamma functions (WDGFs), (3) a combination of exponential and arbitrary power functions (CEAPFs), and (4) Beta Prime (BP). The impairment scope of each hc(t) model is shown in Table 1. The CEAPF and BP formats might look different from the foundation DGF but can be reduced back to the DGF.

3.3. The Receiver Unit

An optical detection system, or receiver, is one of the main components of UWOC. On the receiver side, the optical signal passes through an optical filter and a focusing lens, and the photon detector then captures it. Since a photodiode can only transform light intensity variations from an LED or laser diode into corresponding current changes [78,79], a trans-impedance amplifier is cascaded in the following stage to convert current into voltage. The transformed voltage signals then go through a low-pass filter responsible for shaping the voltage pulse to reduce the thermal and ambient noise levels without causing significant inter-symbol interference (ISI) [80,81].
The signal then passes through a signal quality analyser for further processing, demodulation, and decoding [82]. Equalisation is used to reshape the incoming pulse, extract the timing information (sampling), and decide the symbol value. A PC or BER tester finally collects and analyses the recovered original data to evaluate several important performance parameters, such as the BER. Many types of photodetectors can be used in optical receivers; for more details, see ref. [62]. The most functional OWC systems use a PIN or an avalanche photodiode (APD) as the receiver [83]. The UWOC receiver system must meet specific requirements to address the effects of noise and attenuation. The receiver's most significant parameters are a large FOV, high gain, fast response time, low cost, small size, high reliability, high sensitivity and responsivity at the operating wavelength, and high SNR [83]. The APD can provide higher sensitivity, higher gain, and faster response times. It can also be used in longer UWOC links (tens of metres) and wider bandwidths, but at a much higher cost and with more complex circuits. The noise performance of these two devices is the most significant difference: the main source of noise in PIN photodiodes is thermal noise, while in APDs, it is shot noise [79,82,84,85,86].
However, the PIN photodiode appears to be a more favourable technology for shorter wavelengths than the APD for the UWOC system [4]. To process and understand the received data, the decision unit in an optical receiver converts the signal into discrete binary values by comparing the sampled voltage to a reference level or threshold (Dth). Using the received optical signal, this procedure estimates the underlying BER, based on which a decision is made as to whether the bit is a "0" or a "1" for binary s(t) [87,88,89].
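As a minimal sketch of this hard-threshold decision rule (the variable names and the threshold value of 0.5 are illustrative assumptions, not values from this study):

```python
import numpy as np

def hard_decision(samples: np.ndarray, d_th: float) -> np.ndarray:
    """Map sampled voltages to bits: '1' if the sample reaches the threshold Dth."""
    return (samples >= d_th).astype(int)

# Illustrative use: noisy OOK samples around levels 0 and 1, threshold at 0.5
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=10)
samples = bits + rng.normal(0.0, 0.1, size=10)  # signal plus receiver noise
print(hard_decision(samples, d_th=0.5))
```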

3.4. Digital Signal Evaluation Cycle from Optical to Electronic—A Mathematical Viewpoint

The digital signal evaluation cycle is based on the illustration in Figure 4. The main components of a typical optical receiver system (Rx), as explained in Section 3.3, include a photodetector, preamplifier/amplifier, filter/equaliser, and decision unit (DU).
In a typical optical digital communication system, the transmitted optical signal s(t) can be represented as follows [5]:
s(t) = \sum_{k=-\infty}^{\infty} a_k \, h_p(t - kT)        (1)
where T is the signalling period; for a binary (OOK) signal format, if τ is the timespan of each bit within a symbol, then T = τ, so 1/T is the bit rate. ak is the energy received in the kth symbol, with ak ∈ {0, 1} for a binary system, and hp(t) is the transmitted optical pulse. Typically, s(t) experiences temporal and spatial distortions while propagating through a medium channel (air, fibre optic, or water), depending on the profile of the propagation channel impulse response hc(t). The received optical signal ropt(t) is the footprint of the convolutional impact of hc(t) on s(t). Hence, ropt(t) can be expressed as follows:
r_{opt}(t) = s(t) \otimes h_c(t)        (2a)
r_{opt}(t) = a_0 \, h_p(t) \otimes h_c(t) + \sum_{k=-\infty,\, k \neq 0}^{\infty} a_k \, h_p(t - kT) \otimes h_c(t)        (2b)
where ⊗ denotes the convolution operation and ropt(t) is the received optical signal. In this study, we consider a binary direct detection Rx, as depicted in Figure 4. Without loss of generality, we will assume that the Rx includes a PIN photodetector with an internal gain (g) equal to one. The photodetector converts the input photons of ropt(t) into an electronic signal rsig(t), which can be expressed as follows:
r_{sig}(t) = \sum_{j=1}^{N(t)} h_d(t - t_j)        (3a)
where {tj} denotes the photoelectron emission times. Therefore, the filter electronic signal output rf (t) is
r_f(t) = \sum_{j=1}^{N(t)} h_d(t - t_j) \otimes h_f(t) + r_{th}(t)        (3b)
If we assume that hd(t) = δ(t), then Equation (3b) takes the following form:
r_f(t) = \sum_{j=1}^{N(t)} h_f(t - t_j) + r_{th}(t)        (4)
We should note that the assumption of hd(t) = δ(t) is valid for modern fast-response PIN detectors. The first term is the signal component rsig(t), and the second term rth(t) is the additive Gaussian thermal noise (AGTN). In Equation (3), {tj} is the set of photoelectrons' arrival times governed by Poisson statistics. N(t) represents the stochastic counting process, which is an inhomogeneous process with a time-varying rate intensity of {ak} in Equation (1).
It is expected that the DU in Figure 4 will be able to estimate rf(t) as an accurate replica of s(t). The accuracy and speed of the decision-making that recovers s(t) heavily rely on the computational processing strategy of the DU, which must minimise the BER to meet the receiver's performance goals.
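To make Equations (1)-(2b) concrete, the following sketch builds a toy s(t) and convolves it with a placeholder hc(t); the Gaussian pulse and the exponential channel tail are illustrative stand-ins, since the actual hc(t) depends on the models of Section 3.2:

```python
import numpy as np

dt = 1e-2                       # time step (arbitrary units)
T = 1.0                         # signalling period; T = tau for binary OOK
a = np.array([1, 0, 1, 1, 0, 1, 0, 1])   # symbols a_k

# Transmitted pulse h_p(t): illustrative Gaussian confined to one symbol slot
t_sym = np.arange(0, T, dt)
h_p = np.exp(-((t_sym - T / 2) ** 2) / (2 * 0.1 ** 2))

# s(t) = sum_k a_k h_p(t - kT): one pulse per symbol slot (Equation (1))
s = np.concatenate([ak * h_p for ak in a])

# Placeholder h_c(t): a decaying tail that smears pulses and causes ISI
h_c = np.exp(-np.arange(0, 2, dt) / 0.3)
h_c /= h_c.sum()                # normalise the channel energy

# r_opt(t) = s(t) convolved with h_c(t)  (Equation (2a))
r_opt = np.convolve(s, h_c)[: len(s)]
```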

4. BER Computational Strategies for Decision Unit—Background Tutorial

In this section, before presenting our novel technique for CNN-based bit error rate (BER) estimation technology, we need to briefly present tutorial descriptions for the conventional BER estimation regimes, which include some familiar approaches: the Monte Carlo (MC) prediction method, the Log-Likelihood Ratio-based BER model, and the modified MC-based estimation approaches.

4.1. BER Estimation Schemes—A Brief Review

Many techniques may be used to perform bit error rate estimation. This subsection first provides a synopsis of the conventional MC simulation to show that its execution time for low BERs is very long, and then outlines three further techniques: quasi-analytical estimation, importance sampling, and tail extrapolation.
Such solutions demand assumptions regarding the actual system behaviour, and their effectiveness is greatly dependent on the presumed parameters, which likely have to be altered for different communication systems. Predominantly, finding the ideal model or suitable parameters is not easy. Subsequently, a number of new BER estimators based on the LLR distribution were introduced; nevertheless, they have a few shortcomings, such as being dependent on the SNR estimate's uncertainty and the specific channel features. Moreover, all the aforementioned approaches demand awareness of the transmitted bit stream, while in practical situations the estimator certainly does not know the transmitted data. In contrast, our new CNN imaging computational processor requires no prior information.

4.2. Monte Carlo (MC) Method Simulation

The MCS approach is predominantly used for BER estimation in communication systems [90,91]. This estimation approach is implemented by passing N data symbols through a model, which reflects the influencing features of the underlying digital communication system, and counting the errors that take place at the receiver. The simulation run includes noise sources, pseudo-random data, and device models, which process the digital communication signal. Finally, the MC simulation processes a number of symbols, and eventually, the BER is estimated.
Let us assume that we have a standard baseband signal model representation, as shown in Equation (2), and the decision unit is using the Bernoulli decision function I(ak) expressed as follows:
I(a_k) = \begin{cases} 1 & \text{if } \hat{a}_k \neq a_k \\ 0 & \text{otherwise} \end{cases}        (5)
where ak ∈ {0, 1} as defined in Section 3.4, and the ^ sign refers to the receiver's estimate of the variable. Accordingly, the BER can be indicated in terms of the probability of error pe as follows:
p_e = P(\hat{a}_k \neq a_k) = P(I(a_k) = 1) = E[I(a_k)]        (6)
where E[·] is the expectation operator and P(âk ≠ ak) is the probability that the estimate âk does not equal the transmitted ak. If we take into consideration the entire stream of symbols in Equation (2b), then the BER is estimated by utilising the ensemble average of pe:
\hat{p}_e = \frac{1}{K} \sum_{k=1}^{K} I(a_k)        (7)
where K is the maximum number of symbols (the bit stream size) in Equation (2b). Equation (7) helps to determine the estimation error, and its variance will be given as follows:
\varepsilon = p_e - \hat{p}_e = \frac{1}{K} \sum_{k=1}^{K} \left( p_e - I(a_k) \right)        (8a)
This means that the variance of ε can be expressed as:
\sigma_\varepsilon^2 = \frac{p_e (1 - p_e)}{K}
Hence, we can write the normalised estimation error as follows:
\sigma_n = \frac{\sigma_\varepsilon}{p_e} = \sqrt{\frac{1 - p_e}{K \, p_e}}        (8b)
For a small BER, Equation (8b) can be simplified to
\sigma_n \approx \frac{1}{\sqrt{K \, p_e}}        (8c)
Here, σn is an indicator for the target accuracy we must aim at. Consequently, we can determine the required K for a target performance as given below:
K \geq \frac{1}{\sigma_n^2 \, p_e}        (8d)
Equation (8d) indicates that a small BER value requires a large simulated signal bit stream. For example, to configure a system with a BER of 10⁻⁶, we require no less than 10⁸ bits in the signal stream. This numerical requirement ensures that the MC simulation trial size will satisfy the central limit theorem. This operational limitation means the decision unit will take a long time to estimate a trusted value of the BER. Accordingly, MC simulation is impractical for baud rates larger than 100 Mbit/s. It is worth mentioning that in our discussion, we assumed that the bit errors were independent.
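The following sketch illustrates the plain MC estimator of Equations (6)-(7) for a toy OOK link with additive Gaussian noise; the noise scaling and threshold are illustrative assumptions:

```python
import numpy as np

def mc_ber(snr_db: float, K: int, d_th: float = 0.5, seed: int = 0) -> float:
    """Estimate the BER by counting decision errors over K simulated OOK bits."""
    rng = np.random.default_rng(seed)
    a = rng.integers(0, 2, size=K)               # transmitted bits a_k
    sigma = np.sqrt(0.5 / 10 ** (snr_db / 10))   # illustrative noise scale
    x = a + rng.normal(0.0, sigma, size=K)       # received decision samples
    a_hat = (x >= d_th).astype(int)              # hard decisions
    return float(np.mean(a_hat != a))            # p_e estimate (Equation (7))

# Equation (8d): for p_e ~ 1e-4 and a 10% normalised error,
# K >= 1 / (0.1**2 * 1e-4) = 1e6 bits must be simulated.
print(mc_ber(snr_db=12.0, K=1_000_000))
```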

4.3. Importance Sampling Scheme

As earlier concluded, a small BER demands a large K. From a DU point of view, this is considered a fatal limitation of MC implementation, specifically for spread spectrum (SS) systems [92] (such as CDMA systems) in which every transmitted bit must be modulated via the SS code with an abundance of bits.
A modified MC method called the importance sampling (IS) method can be utilised to decrease BER simulation complexity for SS systems [93]. Further, ref. [94] introduces a method for estimating the bit error rate (BER) based on IS applied to trapping sets. Considering the IS approach, the noise source statistics in the system are biased so that bit errors occur with greater pe, thus minimising the needed execution time. For instance, for a BER equal to 10⁻⁵, we artificially degrade the performance of the channel, pushing the BER to 10⁻².
To explain the IS approach, let g(·) be the original noise probability density function (PDF) and let g*(·) be the rising noise PDF utilising an external noise source. Hence, the weighting coefficient can be expressed as follows:
w(x) = \frac{g(x)}{g^*(x)}        (9a)
For a simple threshold-dependent decision element, an error takes place as soon as there is a significant excursion beyond the threshold Dth, as follows:
\text{for } a_k = 0: \quad I(x_k) = \begin{cases} 1 \ (\text{error count}) & \text{if } x_k \geq D_{th} \\ 0 & \text{otherwise} \end{cases}        (9b)
Then, pe is given as follows:
p_e = \int I(x) \, g(x) \, dx        (9c)
I(x) is an indicator function, which equals 1 when an error takes place; otherwise, it equals 0. Hence, we can express that with regard to the natural estimator of the expectation (i.e., sample mean) as follows:
\hat{p}_e = \frac{1}{K} \sum_{k=1}^{K} I(x_k)        (9d)
Hence, concerning the PDF of the noise (i.e., rth(t) in Equation (4)) and using Equations (9c) and (9d), we obtain
p_e = \int I(x) \, \frac{g(x)}{g^*(x)} \, g^*(x) \, dx = \int I(x) \, w(x) \, g^*(x) \, dx = E^*[I(x) \, w(x)]        (9e)
Equation (9e) is not just a mathematical expression; it shows that the statistics of the influencing noise processes are now represented by g*(·), and the expectation is taken with respect to g*(·). As in the preceding subsection, we may attain the estimator using the sample mean:
\hat{p}_e = \frac{1}{K} \sum_{k=1}^{K} w(a_k) \, I(a_k)        (9f)
Relative to Equation (9d), the weight parameter w(x) in Equation (9f) needs to be evaluated at ak. The aim is to reduce σε, which can be accomplished by establishing an external noise source with a biased density.
IS-based BER estimation performance relies crucially on the biasing scheme w(x). An accurate estimate of the BER can be attained with a brief simulation run time if a good biasing scheme is configured for a specified receiver circuitry system. Conversely, the BER estimate might even converge at a slower rate than the conventional MC simulation. This implies the IS technique must not be regarded as a generic approach for estimating every receiving system's bit error rate (BER).
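A compact sketch of the IS estimator of Equation (9f) for the ak = 0 case of Equation (9b), using a variance-inflated Gaussian as the biased density g*(·); the biasing choice here is illustrative:

```python
import numpy as np
from scipy.stats import norm

def is_ber(d_th: float, sigma: float, sigma_star: float, K: int, seed: int = 0) -> float:
    """IS estimate of p_e = P(x >= D_th) with weighting w(x) = g(x) / g*(x)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, sigma_star, size=K)               # samples from biased g*
    I = (x >= d_th)                                       # error indicator
    w = norm.pdf(x, 0.0, sigma) / norm.pdf(x, 0.0, sigma_star)  # Equation (9a)
    return float(np.mean(I * w))                          # Equation (9f)

# A tail probability of ~3.2e-5 is estimated from only 1e5 biased samples;
# plain MC at this p_e would need millions of samples (Equation (8d)).
print(is_ber(d_th=4.0, sigma=1.0, sigma_star=3.0, K=100_000))
print(norm.sf(4.0 / 1.0))   # exact reference value
```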

4.4. Tail Extrapolation Scheme

We should keep in mind that the BER estimation problem is, in essence, a numerical integration problem if we regard the eye diagram (ED) in Figure 5, measured for an experimental system with SNR = 20 dB. It is possible to determine the worst case of the received bit sequence.
When we regard the PDF of the eye section in lines A and B, the lower bound on the PDF (green line) is the worst-case bit sequence, and the small red area contains all of the bit errors. The BER of the given system can be thought of as the area under the tail of the probability density function.
Generally, we cannot determine the sort of distribution to which the slopes of the bathtub curve in the ED belong. However, we may presume that the PDF belongs to a specific class and then accomplish curve-fitting on the obtained data. This technique for estimating the bit error rate (BER) is known as the tail extrapolation (TE) method [95].
When we set multiple thresholds for the lower bound, the number of times the decision metric surpasses every Dth is recorded, and a standard MC simulation can be executed. A wide category of PDFs is then detected. The tail region is typically identified by certain members of the Generalised Exponential Class (GEC) and is identified as follows:
f_{v, \sigma, \mu}(x) = \frac{v}{2 \sqrt{2} \, \sigma \, \Gamma(1/v)} \, e^{-\left| \frac{x - \mu}{\sqrt{2} \, \sigma} \right|^{v}}        (10a)
where Γ(·) is the gamma function, μ is the mean of the distribution, and σ is related to the variance V_v through
V_v = \frac{2 \sigma^2 \, \Gamma(3/v)}{\Gamma(1/v)}        (10b)
where the parameters (v, σ, µ) are then adjusted to find the PDF that best fits the data sample; therefore, the BER can be estimated via the integral evaluation of the PDF beyond Dth. Nevertheless, which class of PDF and which Dth should be selected is frequently unclear. Generally, it is hard to evaluate the estimated BER accuracy [95].
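A sketch of the TE idea under the stated assumptions: fit the GEC parameters (v, σ, µ) of Equation (10a) to a histogram of decision-metric samples and integrate the fitted tail beyond Dth (the Gaussian sample data and the threshold value are placeholders):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import gamma
from scipy.integrate import quad

def gec_pdf(x, v, sigma, mu):
    """Generalised Exponential Class density of Equation (10a)."""
    return (v / (2.0 * np.sqrt(2.0) * sigma * gamma(1.0 / v))) \
        * np.exp(-np.abs((x - mu) / (np.sqrt(2.0) * sigma)) ** v)

# Stand-in decision-metric samples (a Gaussian corresponds to v = 2)
rng = np.random.default_rng(1)
samples = rng.normal(0.0, 1.0, size=50_000)

# Fit (v, sigma, mu) to the sample histogram
counts, edges = np.histogram(samples, bins=100, density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
(v, sigma, mu), _ = curve_fit(gec_pdf, centres, counts, p0=[2.0, 1.0, 0.0])

# BER estimate: area under the fitted tail beyond the threshold D_th
d_th = 4.0
ber, _ = quad(gec_pdf, d_th, np.inf, args=(v, sigma, mu))
print(ber)
```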

4.5. The Method of Quasi-Analytical Estimation

The abovementioned methods analyse the received signal components (data and noise) at the receiver's output. At this point, we consider solving the BER estimation problem utilising the following two stages:
  • One handles the signal component rf(t) in Equation (4);
  • The other handles the noise component rth(t).
First, we presume that the noise is denoted as the Equivalent Noise Source (ENS) and, second, that the ENS probability density function is known and determinable.
Therefore, we can assume that an ENS with an appropriate distribution can closely evaluate the receiver’s performance. This approach is known as quasi-analytical (QA) estimation [96]. We can calculate the BER with ENS statistics using the noiseless waveform. More precisely, we can allow the simulation to calculate the influence of signal changes in the non-existence of rth(t) and superimpose the rth(t) on the noiseless signal component.
The noise statistics assumption results in a significant drop in computation run time. Nevertheless, this may create a risk of complete miscalculation. The appropriateness of the QA estimation will rely on how well the assumption matches actuality [97]. Hence, predicting ENS statistics before they occur for a linear system may be challenging.
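A minimal sketch of the QA idea, assuming a Gaussian ENS of known standard deviation superimposed on a simulated noiseless waveform (the sample values, threshold, and σ are illustrative):

```python
import numpy as np
from scipy.stats import norm

def qa_ber(noiseless: np.ndarray, bits: np.ndarray, sigma: float, d_th: float = 0.5) -> float:
    """Quasi-analytical BER: average the analytic error probability of a
    Gaussian ENS added to each noiseless decision sample."""
    # A transmitted '1' errs if noise drags the sample below D_th;
    # a transmitted '0' errs if noise lifts it above D_th.
    p_err = np.where(bits == 1,
                     norm.cdf((d_th - noiseless) / sigma),
                     norm.sf((d_th - noiseless) / sigma))
    return float(p_err.mean())

# Illustrative ISI-distorted noiseless samples for the bits [1, 0, 1, 1]
print(qa_ber(np.array([0.9, 0.15, 0.8, 0.85]), np.array([1, 0, 1, 1]), sigma=0.1))
```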

4.6. Estimating BER Based on the Log-Likelihood Ratio

A receiver can implement soft-output decoding to reduce the signal stream's BER (e.g., an a posteriori probability (APP) decoder). The APP decoder may output probabilities or Log-Likelihood Ratio (LLR) values. Let (a_k)_{1 \leq k \leq K} ∈ {+1, −1} be the bit stream and let X_k, k = 1, 2, …, K, represent the received values. Hence, the definition of the LLR can be expressed as follows:
LLR_k = LLR(a_k \mid X_k = x_k) = \log \frac{P(a_k = +1 \mid X_k = x_k)}{P(a_k = -1 \mid X_k = x_k)}        (11a)
Hence, when using Bayes' theorem, we obtain the following:
LLR_k = \log \frac{P(a_k = +1)}{P(a_k = -1)} + \log \frac{P(X_k = x_k \mid a_k = +1)}{P(X_k = x_k \mid a_k = -1)}        (11b)
In Equation (11b), the first term on the RHS represents a priori information, and the second represents channel information. The hard decision expression is implemented by computing the LLR sign as follows:
\hat{a}_k = \begin{cases} +1 & \text{if } LLR_k(a_k \mid x_k) > 0 \\ -1 & \text{otherwise} \end{cases}        (11c)
In [98], some basic properties of LLR values are derived, and new BER estimators are proposed based on the statistical moments of the LLR distribution. If we examine the following criterion:
P(X = +1|Y = y) + P(X = −1|Y = y) = 1
Solving Equation (11b) utilising the criterion mentioned above permits us to derive the a posteriori probabilities P(ak = +1|xk) and P(ak = −1|xk); then, we can write the following:
P(a_k = +1 \mid x_k) = \frac{e^{LLR_k}}{1 + e^{LLR_k}} \quad \text{and} \quad P(a_k = -1 \mid x_k) = \frac{1}{1 + e^{LLR_k}}
If LLRk = A, then we can infer the probability that the hard decision of the kth bit is wrong:
p_k = \frac{1}{1 + e^{|A|}}
Now, the BER estimate can be expressed as follows:
\hat{p}_{e,1} = \frac{1}{K} \sum_{k=1}^{K} p_k        (12a)
\hat{p}_{e,2} = \int_y g_A(y) \, \frac{1}{1 + e^{|y|}} \, dy        (12b)
The constraints of the LLR method are as follows:
  • The first estimate of the BER, given by Equation (12a), may not be as efficient as the second BER estimate, given by Equation (12b), since gA(y) is usually Gaussian and smooth.
  • The second estimator is more complicated to implement because an estimate of gA(y) must be computed (for instance, utilising a histogram) prior to the integration.
  • Both methods are sensitive to the channel noise variance, as the LLR distribution strongly relies upon the accuracy of the SNR estimate. We should note that the above estimators implicitly presume that the SNR is well known to the decoder.
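As a sketch of the first estimator, Equation (12a) can be computed directly from a block of LLR values; the BPSK/AWGN LLR formula used to generate test values here is a standard textbook relation, not taken from this paper:

```python
import numpy as np

def llr_ber_estimate(llrs: np.ndarray) -> float:
    """First LLR-based BER estimator (Equation (12a)): average of
    p_k = 1 / (1 + exp(|LLR_k|)) over the received block."""
    return float(np.mean(1.0 / (1.0 + np.exp(np.abs(llrs)))))

# Illustrative LLRs for BPSK over AWGN: LLR_k = 2 * y_k / sigma^2
rng = np.random.default_rng(2)
a = rng.choice([-1.0, 1.0], size=100_000)
sigma = 0.8
y = a + rng.normal(0.0, sigma, size=a.size)
print(llr_ber_estimate(2.0 * y / sigma ** 2))
print(np.mean(np.sign(y) != a))   # hard-decision error rate for comparison
```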

5. CNN Model Solution Framework Foundations

5.1. Receiver Performance Indicators

This study highlights the receiver's performance by modelling a decision unit strategy. The UWOC receiver unit performance is influenced by the water channel signal impairments and various noise sources on the receiver side, such as electronic thermal, optical background, dark current, and shot noise; reference [4] reviews such noise sources. Generally, the leading performance indicators for a digital receiver are the SNR and BER. The SNR is represented by the following:
\mathrm{SNR} = \frac{P_S}{P_N}        (13)
where P_S and P_N represent the signal power and noise power, respectively.
The BER is defined as the probability of incorrect identification of a bit by the decision circuit of the underlying receiver [81]. It is one of the most important metrics for assessing signal quality and estimating communication system performance. If the number of error bits received is N_e and the total number of bits is N_t, then the BER is as follows:
\mathrm{BER} = \frac{N_e}{N_t}        (14)
The relation between SNR and BER is embedded in the following formula:
\mathrm{BER} = \frac{1}{2} \, \mathrm{erfc}\!\left( \frac{\sqrt{2 \, \mathrm{SNR}}}{2} \right)
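This relation is straightforward to evaluate numerically; a short sketch follows (the SNR value of 3.0 is illustrative, close to the mean SNR of the dataset in Section 5.2):

```python
import numpy as np
from scipy.special import erfc

def ber_from_snr(snr_linear: float) -> float:
    """BER from a linear SNR via the erfc relation above."""
    return 0.5 * erfc(np.sqrt(2.0 * snr_linear) / 2.0)

print(ber_from_snr(3.0))   # BER corresponding to SNR = 3.0 (linear)
```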

5.2. Test Data

In this study, eye diagrams and their SNRs are used as the source feeding the CNN with the required testing data. A Python code was written and run on a Microsoft Azure VM to generate 576 received pulses, including some random noise; then, the eye diagram pattern images were drawn, and their related SNRs were calculated based on the received pulses. For eye pattern generation, we used four channel models, including the following:
  • DGFs with distances of 5.47 m and 45.45 m for harbour and coastal waters, respectively, and (20°, 180°) field of view (FOV).
  • WDGFs and CEAPFs with distances of 10.93 m and 45.45 m for harbour and coastal waters, respectively, and 20° FOV.
  • BP with 5 m and 10 m distances for harbour and coastal waters, respectively, and 180° FOV.
We also utilised two transmitted pulse shapes, Gaussian and Rectangular, to implement a binary OOK modulation in our simulation. The range of pulse widths (FWHM) was 0.1 to 0.95. It is worth mentioning that pulse widths beyond 0.6 are unrealistic, but we added these scenarios as a “burn test” for our solution. The ranges of the FOV values across channel models were not similar because we had to use the published double gamma fitting parameters (shown in the last row of Table 1) and their corresponding FOV value ranges.
After that, the names of the images and the SNR values were stored in an SQL database, while the images were stored in one folder. Consequently, the data were ready for the CNNs to be applied to them. These test data have the following properties:
  • The background of the eye diagram images is black, while the diagram itself is white (greyscale) to speed up and simplify the CNN calculations.
  • All the eye diagram images' sizes (height × width) are 2366 × 3125 pixels; this size was taken from the shapes of the images' arrays (it is already an output of the code).
  • The SNR has a normal distribution (which means there is no bias in our data before applying ML), as shown in Figure 6. The minimum SNR value is 0.5723, the maximum is 8.1478, the mean is 3.0004, and the standard deviation is 1.5061.
The data preprocessing steps before applying the CNN were as follows (a code sketch is provided after this list):
  • Loading the images’ names and SNRs from the database.
  • Converting the data into a ‘pandas’ data frame.
  • Shuffling the data frame.
  • Using the TensorFlow library in Python, we conducted the following:
    • Loaded the eye diagram images based on their names and normalised them using max normalisation (dividing each pixel by 255).
    • Split the dataset into training (70%) and validation (30%).
    • Converted the colour mode from RGB into grayscale.
    • Used the images’ original size instead of resizing them to keep the resolution high.
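A sketch of this preprocessing pipeline; the database file, table, column, and folder names are assumptions for illustration, since the paper does not list them:

```python
import sqlite3
import pandas as pd
import tensorflow as tf

# Load image names and SNR labels from the SQLite database, then shuffle
conn = sqlite3.connect("eye_diagrams.db")                 # assumed file name
df = pd.read_sql_query("SELECT image_name, snr FROM eye_patterns", conn)
df = df.sample(frac=1.0, random_state=42).reset_index(drop=True)

def load_image(path, label):
    raw = tf.io.read_file(path)
    img = tf.image.decode_png(raw, channels=1)            # greyscale mode
    img = tf.cast(img, tf.float32) / 255.0                # max normalisation
    return img, label                                     # original size kept

paths = ("images/" + df["image_name"]).tolist()           # assumed folder
labels = df["snr"].values.astype("float32")
ds = tf.data.Dataset.from_tensor_slices((paths, labels))
ds = ds.map(load_image, num_parallel_calls=tf.data.AUTOTUNE)

n_train = int(0.7 * len(df))                              # 70/30 split
train_ds = ds.take(n_train).batch(4)
val_ds = ds.skip(n_train).batch(4)
```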

5.3. Machine Learning—Neural Networks (NNs)

Neural networks (NNs) are computing algorithms that include processing units known as neurons that are organised into layers. These layers are connected via weights; each cell has a different weighted function. Many researchers have investigated neural networks since the 1960s [99,100,101]. NNs were developed based on how biological nerves transmit information and analyse data, and they are mainly used to increase computing performance [102]. NNs can be used for supervised learning in both classification and regression, and they can also be used in unsupervised learning.
A general NN structure consists of at least an input layer and an output layer, which allows NNs to make predictions on new inputs, plus middle-level layers known as hidden layers, which process the outputs of previous layers [26]. The neurons have various coefficients, such as the bias (θ_0) and weights (θ_i), which are modified during the training process to obtain the optimum values that make the loss as low as possible. The correlations between input-output datasets that constitute the attributes of the device or system under study are discovered using NNs. The model outputs are compared to the true desired outputs, and the error is calculated [103]. For the training phase, a sample is represented as (x, y), where the input and output are x and y, respectively. Each node performs calculations on the x values entered into the neural network to obtain the value z^(L). After that, the expected values a^(L) are found by applying the activation function f on z^(L); the process is repeated as represented by the following equations [102]:
z^{(L)} = \theta_0 + \sum_{i=1}^{n} \theta_i x_i        (15)
where z^(L) is the predicted output of each layer, which is the input for the next layer.
a^{(L)} = f(z^{(L)})        (16)
The form of the hypothesis function or activation function (the final output in the last layer) is represented as follows:
y_{pred} = a^{(l)}        (17)
where i is the cell number, L is the layer number, l is the last layer number, and f is the activation function. Note that Equation (15) is similar to linear regression, in which a predictor variable x and a dependent variable y are included in the model and are linearly related to one another. In this case, the output variable y is predicted based on the input variable x. The linear regression model is represented by the equation shown below:
y = m x + c        (18)
Equation (18) is the foundational equation for NNs, where θ_i (or m) is the slope, or rate, of the predicted y based on the best-fit line, and θ_0 (or c) is the y-intercept. Figure 7 represents the functions of neurons in NNs.
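A tiny numeric illustration of Equations (15)-(17) for one hidden layer (all weight and input values are arbitrary examples):

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

x = np.array([0.5, -1.2, 3.0])             # input features x_i
theta = np.array([[0.1, 0.4, -0.2],        # weights theta_i (2 neurons x 3 inputs)
                  [-0.3, 0.2, 0.1]])
theta0 = np.array([0.05, -0.1])            # biases theta_0

z = theta0 + theta @ x                     # Equation (15)
a = relu(z)                                # Equation (16): hidden activation
y_pred = np.array([0.7, 1.3]) @ a + 0.2    # Equation (17): linear output layer
print(y_pred)
```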
The most significant and often utilised component of neural networks is data propagation in both the forward and the reversed (or back) directions. This propagation is crucial for performing quick and efficient weight adjustments. The term "forward propagation", which describes moving information from the input to the output direction, has been the subject of the whole discussion up to this point. However, the neural network did not achieve practical significance until 1986, when the Back Propagation mechanism (BP) was employed [104,105]. Back Propagation is a technique used to train neural networks to adjust the weights and increase the model's generalisation to make it more reliable. The error rate of forward propagation is fed back through the NN layers. It analyses, compares, and evaluates the outcomes before going back in the opposite direction, from the outputs to the inputs, and adjusting the weights' values. This process is performed iteratively until the weights are optimal. A reverse calculation of the weight values is carried out by finding the difference between the predicted and real values, followed by partial derivation, and Back Propagation is used to adjust the assumed weight values. After the output of each layer a^(L) is calculated, the result is passed through a function; the goal is to minimise the cost function J, and the result is then passed through the loss function, as described in Section 6. After reaching the expected value a^(l) (whether for regression or classification), we find the delta error rate δ^(l) as the difference between the predicted values and the actual values y_t as follows:
\delta^{(l)} = a^{(l)} - y_t        (19)
where y_t is not a scalar but a one-column matrix (vector), because there are several cells in each layer. The general equation for Back Propagation is as follows:
\delta^{(L)} = \left( \left( \theta^{(L)} \right)^T \delta^{(L+1)} \right) \odot f'(z^{(L)})        (20)
where f′ is the first derivative of the activation function, T denotes the transpose of θ^(L), and ⊙ is an element-wise (Hadamard) product, not a matrix product.

A convolutional neural network (CNN) is a kind of deep feed-forward neural network and is one of the most effective learning algorithms used in many applications, with significantly higher accuracy [106]. A CNN is the best algorithm for analysing image data [106] and for solving problems in several visual recognition tasks, such as identifying traffic signs, biological image segmentation, image classification [107], speech recognition, natural language processing, and video processing [108]. The power of a CNN is its ability to extract features from samples at a fast speed [88] and to handle high-dimensional inputs. A CNN offers two significant benefits over other ML algorithms [107]: (a) automated feature extraction from images without the requirement for feature engineering or data restoration, and (b) significantly reduced algorithm complexity thanks to a network topology with local connections and weight sharing. The attention mechanism allows it to extract the most important information from an image and store its contextual relationship to other image elements [106]. The main layers of a CNN are the convolutional layer, the pooling layer, and the neural network layer. First, the convolutional layer applies several filters to the input images to extract features (produce feature maps) and decrease their size [93]. The convolutional layer's final output is obtained by merging these feature maps [109]. Decreasing the number of network parameters and computations requires that the feature map size be reduced again in the pooling layer by selecting the essential features and taking the maximum values. The advantages of max pooling are that it decreases training time and controls overfitting [109]. After repeating these layers several times, the output enters a neural network as flattened input values [110]. These values go through FC layers that reach the final output of the CNN. An activation function can be used in a CNN on the convolutional, hidden, and output layers.

6. CNN Model Architecture, Design, and Implementation

6.1. Model Solution Architecture and Design

A Microsoft Azure VM was used to develop a Python code that draws eye diagram images and calculates SNR values via a multiprocessing technique. These images were saved in a folder on the VM, whereas their names and SNRs were stored as reference data in an SQL database. This study used the SQLite DBMS to store information on 576 rows of eye patterns and related SNRs. The meta dataset used to generate the eye diagrams consists of water type, channel model, pulse shape, pulse width, and the signal state (0 or 1), which is the value of position zero on the eye diagrams. Another code was developed using the OOP paradigm to retrieve data from the database and train 13 CNN models via a training set to make decisions for testing images using the validation set. Then, the error between the actual and predicted SNRs was calculated. The errors include the MAE and the RMSE for the training and testing data. The BER values were extracted based on the original and predicted SNRs, and the performance of the CNN models was measured. A schematic representation of the methodology for this study is shown in Figure 8. Consequently, the ML model works as a decision unit in the optical receiver, which is the primary goal of this study.

6.2. Model Dataset

Eye diagram images were generated using a multiprocessing technique on an Azure VM with the following components: Windows 11 Pro operating system, x64-based processor, Intel Xeon Platinum 8171M CPU @ 2.60 GHz, 32 GB RAM, 127 GB Premium SSD LRS storage, and eight virtual CPUs. The following attributes are required to generate the eye diagrams and conclude the SNR: water type, channel model, optical pulse shape, and pulse width. Table 2 shows the details of these attributes. The corresponding SNR values were stored in an SQL database. Some examples of the created eye patterns are shown in Figure 9.

6.3. CNN Algorithm

CNNs are widely used in optical communications and networking. Regarding UWOC, ref. [111] proposed a constellation diagram recognition and evaluation method using deep learning (DL). ML is applied in networking systems to address tasks in the physical layer, including monitoring systems, assessing signal degradation effects, optimising launch power, controlling gain in optical amplifiers, adapting modulation formats, and mitigating nonlinearity [15]. The optical receiver can serve as an optical performance monitor (OPM) in addition to its primary function of receiving data. An eye diagram graphically represents a signal waveform to locate optical signal impairments: the amplitude distribution over one or more bit periods is depicted by overlapping the symbols. Eye diagrams are employed to evaluate the quality of high-speed digital signals [53]. To create them, a data waveform is typically applied to a sampling oscilloscope's input, and all conceivable one-zero combinations are overlapped on the instrument's display to cover three intervals [54]. Pulses spread beyond the period of a single symbol because of ISI, which results from temporal variations between light beams arriving at the receiver along multiple paths. At data rates greater than 10 Mbps, ISI seriously impairs the system's performance. In ref. [112], a groundbreaking application of ML in optical network security was reported: a clustering algorithm identified anomalous attacks without prior knowledge of them, and the findings showed that ANNs have significant potential for detecting out-of-band jamming signals of various intensities with an average accuracy of 93%. In this study, TensorFlow and Keras 2.12.0 were used.
The CNN algorithm is applied to eye diagram images to predict the SNR values in different UWOC cases. To organise the inputs in a particular way, or to convert the relationship into a function that can predict an output, a CNN learns associations between the properties of the input data it receives. In this study, the eye diagrams represent the signals, and the output is the predicted SNR, whose magnitude reflects the severity of the impairment. The total number of samples is 576 eye diagram images, split into 404 for training and 172 for testing.
The structure and implementation of the CNN in this study are as follows:
  • The dimensions of the input eye diagram images are 2366 × 3125 pixels, with a resolution of 600 dpi.
  • The network includes convolutional layers with a filter size of 10 and a stride of 1. The number of filters is varied across the models from 16 to 64, increasing by four at each step. No activation function is applied to the convolutional layers.
  • There are five non-overlapping max-pooling layers, each with a pooling size and stride of 3.
  • Flattened values refer to the input values that will be fed into the NN.
  • This study refers to the hidden layer as FC and uses the ReLU activation function to reduce the CNN calculations by setting negative values to zero.
  • The ultimate output of the CNN is the prediction of the signal-to-noise ratio (SNR) using a linear activation function, which is appropriate for regression tasks.
The Functional API model was used with the Adam optimiser and a learning rate of $1 \times 10^{-5}$. Figure 10 shows the model structure; each circle represents a convolutional and max-pooling layer pair. The architecture contains five convolutional and max-pooling layers, and the output of each layer becomes the input of the next. Note that the connections between the flattened, hidden, and output layers are the weights (small random numbers at the beginning), which affect each layer's output, as seen in Equations (15) and (16). Figure 11 shows the CNN structure and its implementation.
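A minimal Keras Functional API sketch of the architecture described above (five convolutional/max-pooling stages, one ReLU hidden layer, a linear output, MAE loss, RMSE metric, and Adam at $1 \times 10^{-5}$). The hidden-layer width of 5 × n_filters is our inference from the reported 80–320 range of fully connected elements, and the dropout placement and helper name are assumptions, not the authors' exact implementation:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_model(n_filters: int = 28) -> Model:
    """Sketch of the CNN regression model described in Section 6.3."""
    inputs = layers.Input(shape=(2366, 3125, 1))                     # grayscale eye diagram
    x = inputs
    for _ in range(5):                                               # five conv + max-pool stages
        x = layers.Conv2D(n_filters, kernel_size=10, strides=1)(x)  # no activation function
        x = layers.MaxPooling2D(pool_size=3, strides=3)(x)          # non-overlapping pooling
    x = layers.Flatten()(x)
    x = layers.Dropout(0.45)(x)                                     # 0.45 dropout (placement assumed)
    x = layers.Dense(5 * n_filters, activation="relu")(x)           # hidden FC layer (width inferred)
    outputs = layers.Dense(1, activation="linear")(x)               # SNR regression output
    model = Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
                  loss="mae",                                       # MAE loss function
                  metrics=[tf.keras.metrics.RootMeanSquaredError()])  # RMSE metric
    return model

model = build_model(n_filters=28)
# model.fit(x_train, y_train, validation_data=(x_val, y_val), batch_size=15, epochs=250)
```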

6.4. SNR Prediction

This study constructed the CNN layers using the Keras library with the Functional API model. The loss error for each training sample, i.e., the difference between the predicted and actual values, was calculated. The cost function (or cost error function) is the cumulative total of these errors over the training set; it measures the model's accuracy by quantifying how far the predicted values are from the real data. The cost function's minimum value is sought throughout the CNN model learning phase: the task is to identify the model weights for which the cost function attains its minimum. Gradient descent (GD) optimisation, a fundamental approach for CNN model optimisation, is employed to achieve this [113,114,115]. The equation of GD is as follows:
$$\theta_j \leftarrow \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta_0, \theta_i)$$
By substituting the partial derivative of $J(\theta_0, \theta_i)$, we obtain the following:
$$\theta_j \leftarrow \theta_j - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( h_{\theta}(x^{(i)}) - y^{(i)} \right) x_j^{(i)}$$
where $j = 0, 1, 2, \ldots, n$, $m$ is the number of samples, $h_{\theta}$ is the hypothesis function, and $\alpha$ is the learning rate.
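For concreteness, this update rule can be rendered in a few lines of NumPy for a linear hypothesis $h_{\theta}(x) = \theta^T x$; the step size, data, and function name below are illustrative only:

```python
import numpy as np

def gradient_descent_step(theta, X, y, alpha):
    """One batch GD update: theta_j <- theta_j - alpha * (1/m) * sum_i (h(x_i) - y_i) * x_ij."""
    m = len(y)
    predictions = X @ theta                    # h_theta(x^(i)) for all m samples
    gradient = (X.T @ (predictions - y)) / m   # the summed term, one entry per theta_j
    return theta - alpha * gradient

# Illustrative data: 5 samples, 3 parameters (including the bias column x_0 = 1).
X = np.hstack([np.ones((5, 1)), np.random.rand(5, 2)])
y = np.random.rand(5)
theta = np.zeros(3)
theta = gradient_descent_step(theta, X, y, alpha=0.1)
```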
This study used the MAE as the loss function, while the RMSE was used as a metric function to measure the model's performance. The MAE is the mean of the absolute differences between predictions and real results, in which all individual deviations are weighted equally; the RMSE is the square root of the average of the squared differences between predictions and actual outputs. Their mathematical formulas are as follows:
$$\mathrm{MAE}(y_{\mathrm{true}}, y_{\mathrm{pred}}) = \frac{1}{n_{\mathrm{images}}} \sum_{i=1}^{n_{\mathrm{images}}} \left| y_{\mathrm{true}} - y_{\mathrm{pred}} \right|$$
$$\mathrm{RMSE}(y_{\mathrm{true}}, y_{\mathrm{pred}}) = \sqrt{\frac{1}{n_{\mathrm{images}}} \sum_{i=1}^{n_{\mathrm{images}}} \left( y_{\mathrm{true}} - y_{\mathrm{pred}} \right)^2}$$
where $n_{\mathrm{images}}$ is the number of eye diagram images in the testing sample.
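In code, both metrics reduce to a few NumPy lines; the sample SNR values below are illustrative only:

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error between reference and predicted SNRs."""
    return np.mean(np.abs(y_true - y_pred))

def rmse(y_true, y_pred):
    """Root mean square error between reference and predicted SNRs."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

snr_true = np.array([3.2, -1.0, 7.5])   # illustrative SNR values (dB)
snr_pred = np.array([3.0, -0.6, 7.9])
print(mae(snr_true, snr_pred), rmse(snr_true, snr_pred))
```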
This CNN programme retrieves the SNR (True) from the database and computes the predicted SNR value by processing the run-time-generated eye diagram images. The MAE is calculated by comparing SNR (True) and SNR (Predict). Moreover, the BER values are extracted from SNR, as shown in Figure 12.
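The SNR-to-BER extraction itself is shown schematically in Figure 12 rather than as an equation. Assuming the standard relation for a binary receiver, $\mathrm{BER} = Q(\sqrt{\mathrm{SNR}})$ with the SNR on a linear scale (an assumption on our part, though it reproduces the SNR and BER ranges reported for this study in Appendix Table A1), the conversion is a one-liner:

```python
import numpy as np
from scipy.special import erfc

def ber_from_snr_db(snr_db):
    """BER = Q(sqrt(SNR)), SNR in linear scale; Q(x) = 0.5 * erfc(x / sqrt(2))."""
    snr_linear = 10.0 ** (np.asarray(snr_db) / 10.0)
    return 0.5 * erfc(np.sqrt(snr_linear) / np.sqrt(2.0))

print(ber_from_snr_db([-2.42, 9.11]))   # ~[0.2247, 0.0022], matching Appendix Table A1
```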

7. Results and Discussion

The proposed CNN models have successfully predicted the SNR with high performance. Figure 13, Figure 14 and Figure 15 depict the learning curves, which show the training and validation results for both the loss and the RMSE, and their ratios relative to the validation values. We discarded the scatter plot because it created point-overlapping distortion. The graphs in this study display the "standard hyperparameters": the dropout rate, the learning rate, and the number of epochs. The number of filters was varied, as indicated in Table 3 and Table 4. The loss and RMSE of the training and validation curves gradually decrease as the number of epochs increases, eventually converging to similar values. When the number of filters (16, 20, 24, and 28) in the CNN architecture increases, the training and validation loss and RMSE decrease, as seen in Figure 13. The crucial metrics are the loss and RMSE ratios, which are approximately equal to 1; this indicates that the models are highly accurate and efficient in predicting the actual SNR values. Figure 14 and Figure 15 demonstrate a decrease in both the loss and RMSE. However, a slight divergence was observed between the training and validation curves, indicating a minimal gap between the loss and RMSE values for the training and validation datasets.
Table 5 provides a comprehensive summary of the models' performance via the training/validation loss and training/validation RMSE at the last epoch, as well as the number of trainable parameters and the loss and RMSE ratios. The equations for the ratios are as follows:
$$\mathrm{Loss\;Ratio} = \frac{\mathrm{Loss}}{\mathrm{Validation\;Loss}}$$
$$\mathrm{RMSE\;Ratio} = \frac{\mathrm{RMSE}}{\mathrm{Validation\;RMSE}}$$
The nearer the ratio is to 1, the better fitted the model is, the more reliably it can make a correct decision, and the more likely the predicted SNR is to approach the actual value.
The statistical analysis includes the minimum, maximum, and mean of the results. For example, the training time ranges between 8.33 and 10.99 h, whereas the predicting time ranges from 0.1732 to 0.2098 s. We observed no significant fluctuation in time as the number of filters changed. The maximum differences between 1 and the loss and RMSE ratios are 0.3381 and 0.4153, respectively, obtained with 48 and 56 filters in the CNN implementation. In contrast, the minimum differences between 1 and the loss and RMSE ratios are 0.0297 and 0.0183, respectively, obtained with 20 filters. In addition, the average of $1 - \mathrm{Loss\;Ratio}$ is 0.2107, and that of $1 - \mathrm{RMSE\;Ratio}$ is 0.2551, both close to zero, as shown in Table 3.
Table 4 displays the constant hyperparameters used in this study and their corresponding values, including the colour mode of the eye diagram images and the model optimiser. The primary motivation for the set of hyperparameters in Table 4 is to ensure that the CNN engine achieves its optimum model-accuracy fitting and operates safely within a region away from the over-fitting and under-fitting boundaries. Moreover, this stable fitting region is broad enough for optimum processing time.
The Pearson correlation coefficients between the number of parameters and the other recorded results are displayed in Table 6. The loss, validation loss, RMSE, and validation RMSE all show strong inverse correlations, indicating that the cost function decreases as the CNN size increases. Therefore, the performance of the CNN model is enhanced by increasing its size.
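These coefficients can be checked directly from the Table 5 columns. A brief SciPy sketch follows; for compactness it uses only the first four rows of Table 5, so its coefficient differs from the full-column value, while feeding in the complete columns reproduces Table 6:

```python
import numpy as np
from scipy.stats import pearsonr

# First four rows of Table 5: trainable parameters vs. training loss.
n_params = np.array([516_881, 651_301, 787_801, 926_381])
loss = np.array([0.519, 0.481, 0.446, 0.443])

r, p_value = pearsonr(n_params, loss)
print(f"Pearson r = {r:.4f}")   # negative r: the loss falls as the model grows
```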
Table 7 lists the Pearson correlation coefficients between the number of filters and the other recorded results (training and predicting times, loss, RMSE, and their validation counterparts), together with interpretations of these correlations. Figure 16, Figure 17, Figure 18 and Figure 19 show the relationships between the number of filters used in the CNN models and these quantities. Figure 16 shows the weak correlation between the number of filters and the training and predicting times, indicating that the curves are almost constant. While increasing the number of filters in a CNN would be expected to increase the computations and, therefore, the time required to complete them, using a highly capable VM mitigates the impact of the increased computations on time, making it negligible.
The correlation coefficients with the loss, validation loss, RMSE, and validation RMSE range from moderate to very strong. When the number of filters increases, the capacity of the model (its trainable parameters) also increases, which allows it to fit the training data better and improves its effectiveness (see Figure 17). Meanwhile, the loss and RMSE ratios remain close to 1, as seen in Figure 18, which means the model makes good decisions.
Figure 17. Number of filters vs. training and validation for both loss and RMSE for all CNN models.
Figure 18. Number of filters vs. loss and RMSE ratios for all CNN models.
On the other hand, the upward curve in Figure 19 displays a very strong direct linear correlation between the number of filters and the number of parameters, forming a perfectly straight line. The reason is that the total number of values inside all filters grows as filters are added; these values are trainable parameters, so the number of trainable parameters increases.
Figure 19. Number of filters vs. number of trainable parameters for all CNN models.
To assess the performance of the CNN models operating in the optical receiver, which can handle the ISI noise in UWOC, the relationship between the actual and predicted values of the SNR and BER is drawn in Figure 20. This result illustrates the models' outcomes using a 0.45 dropout rate and a learning rate of $10^{-5}$ with 28 filters. The CNN models predict correct results for harbour and coastal waters using Gaussian and Rectangular optical pulse shapes, with pulse widths ranging from 0.1 to 0.95, using the DGF, WDGF, CEAPF, and BP channel models. The trend in the curves follows the identity function, represented by the red line (y = x), which means the actual values are close to the predicted ones for both the SNR (left) and the BER (right). This shows that the CNN model can decide correctly in various situations involving different water types, ISI noise, and water environment variations.
The relation between the SNR and BER, which represents the performance of the optical receiver, is drawn in Figure 21, again using a 0.45 dropout rate and a $10^{-5}$ learning rate with 28 filters. From the graphs, we conclude that the suggested CNN models make accurate decisions across various instances involving different water types, ISI noise, and underwater environment variations.
The high BER values correspond to the small SNRs included in our models, a consequence of the significant noise and channel fluctuations considered in this study. Although these SNRs arise from received pulses that passed through noisy channels, the models predict them accurately.

8. Conclusions

This study successfully demonstrated the implementation of a novel CNN-based decision unit strategy in the optical receiver of UWOC systems. The proposed CNN models predict the SNR effectively with high performance, with the training and validation losses and RMSE converging towards smaller values. The results show a strong inverse correlation between the number of parameters in the model and the cost function, suggesting that increasing the CNN model's size enhances its performance. Even in diverse water types with fluctuating noise levels and environmental variability, employing a CNN model as the decision unit in an optical receiver enables efficient decision-making with a low cost function.
Our innovative CNN tool's architecture and supporting mathematical formulations make it agnostic to the UWOC channel model and the transmission modulation format. Hence, if any or all of the channel models in Table 1 are proven not to satisfy the linear time-invariant system (LTIS) condition requirements, partially or fully, replacing them with compliant models will not affect the CNN tool's computational software algorithm. It might require altering the hyperparameters of the CNN model platform structure shown in Table 4 to ensure optimum model-accuracy fitting, but from a hardware perspective, we expect no necessary change to the hosting maths processor of the DU. It is worth mentioning that the LTIS requirements are expressed in terms of the channel path loss, the mean delay, the root mean square (RMS) delay spread, and the constancy of the frequency bandwidth as the model's temporal profile broadens with linkspan.
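For reference, the mean delay and RMS delay spread mentioned above follow from the normalised power profile of a sampled CIR. The sketch below uses a toy gamma-like profile, not one of the fitted models in Table 1, and the time window is illustrative:

```python
import numpy as np

def delay_statistics(t, h):
    """Mean delay and RMS delay spread of a sampled channel impulse response h(t)."""
    p = h / h.sum()                                  # normalised power profile
    mean_delay = np.sum(p * t)
    rms_spread = np.sqrt(np.sum(p * (t - mean_delay) ** 2))
    return mean_delay, rms_spread

t = np.linspace(0, 50e-9, 500)                       # illustrative 50 ns window
h = t * np.exp(-t / 5e-9)                            # toy single-term gamma-like CIR
print(delay_statistics(t, h))
```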

9. Future Studies

In future studies, we plan to elevate the effectiveness of our ML model for the UWOC system through a two-pronged strategy: dataset expansion and CNN refinement. The first cornerstone of our approach is the extension of our dataset. We aim to generate more eye diagram images, diversifying and enriching the data available for the ML model. This broader dataset will fortify the model’s learning capabilities and enhance its predictive precision. Simultaneously, we propose a strategic refinement of our CNN’s hyperparameters. We contemplate introducing two or three hidden layers into the network’s architecture, which could amplify the model’s ability to detect intricate features and, in turn, boost its accuracy. We are also considering altering activation functions in both the convolutional and hidden layers (e.g., Tanh, Leaky ReLU, ELU, and SELU), aiming to introduce varying degrees of nonlinearity that could potentially enhance the model’s learning and generalisation from the data.
Additionally, we will explore the employment of non-overlapping filters in our convolutional operations. This adjustment could help retain original input data information, minimising information loss during the convolution process and potentially leading to more robust predictions. Aside from these strategies, we also plan to incorporate the capabilities of large language models, like GPT-4. We envision utilising these models for tasks such as automated hyperparameter tuning and predictive modelling based on text mining of recent research trends. Furthermore, they could aid in automatic feature extraction from eye diagram images and augment our dataset by generating synthetic images based on textual descriptions of various UWOC scenarios. These models can also help interpret the CNN model’s results and enhance our understanding of the network’s decisions. Lastly, we could facilitate knowledge transfer by identifying parallels between UWOC and other fields and refining our CNN techniques and approaches. Through this holistic strategy, integrating both traditional methods and advanced AI techniques, we aim to significantly elevate the accuracy and efficiency of our CNN model in UWOC.

Author Contributions

Conceptualisation, I.F.E.R. and Y.A.-H.; Methodology, N.M.B. and S.A.-Z.; Software, I.F.E.R., N.M.B. and S.A.-Z.; Validation, I.F.E.R., N.M.B. and A.Z.B.; Formal analysis, I.F.E.R. and Y.A.-H.; Investigation, I.F.E.R., S.A.-Z., Y.A.-H. and A.Z.B.; Writing—original draft, I.F.E.R.; Writing—review and editing, N.M.B., S.A.-Z., Y.A.-H., A.Z.B. and M.C.; visualisation, I.F.E.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Deanship of Scientific Research (DSR) at King Abdulaziz University (KAU), Jeddah, Saudi Arabia, under grant no. GPIP-1512-130-2024.

Data Availability Statement

The digital image SQL database can be provided upon request through the corresponding author’s email.

Acknowledgments

This study was funded by the Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, Saudi Arabia, under grant no. (GPIP-1512-130-2024). The authors, therefore, acknowledge DSR’s technical and financial support with thanks.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Table A1. Recent studies using the ML approach to obtain the SNR from eye diagrams.

| Ref. | Year | Type of Modulation | Data Rate | Optical Fibre/UWOC | Distance | ML Tec. | Sim./Exp. | Train:Test | Input | Hidden Layers | Hidden Neurons | SNR Range (dB) | BER | Performance |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| [25] | 2017 | PDM-64QAM | 32 Gbaud | Fibre | 250 km | NN, regression | Exp. (to generate an eye diagram) | 1664:832 | Variance from the eye diagram | 1 | 3 | 4–30 | — | Mean estimation error range from 0.2–1.2 dB |
| [58] | 2009 | NRZ-OOK, RZ-DPSK | 10 Gb/s, 40 Gb/s | Fibre | — | ANN | Sim. | — | Q-factor, closure, jitter, crossing-amplitude from eye diagram | 1 | 12 | 16–32 | — | Correlation coefficient = 0.91 (@10G), 0.96 (@40G) |
| [26] | 2009 | RZ-OOK, RZ-DPSK | 40 Gb/s | Fibre | — | ANN | Sim. | 135:32 | Q-factor, closure, jitter, crosspoint, mean, standard deviation from eye diagram | 1 | 12 | 16–32 | — | RMSE = 0.57, 0.77 |
| [116] | 2021 | NRZ-OOK | 10 Gb/s | Fibre | 100 km | ANN | Sim. | 145:41 | Q-factor, noise power, eye amplitude, eye height, eye closure, eye-opening factor, extinction ratio at min BER, and RMS jitter | 1 | 5 | 15–30 | — | MSE = 1.12 |
| [8] | 2009 | NRZ-DPSK | 40 Gb/s | Fibre | 50 km | No | — | 500–1500 for training | Six features | — | — | 25, 35 | — | — |
| [68] | — | NRZ-OOK | 1.25 Gbps | UWOC | 1.5–6 m | No | Exp. | — | — | — | — | — | 1 × 10⁻⁷ | — |
| [59] | 2012 | NRZ, RZ and QPSK | 40 Gb/s | Fibre | — | ANN | Sim. | — | 24 features from the eye diagram | 1 | 10 | 10–30 | — | RMSE = 1.5–2 |
| Our study | 2024 | NRZ-OOK | 2 Gb/s | UWOC | 12 m | CNN | Sim. | 404:172 | Eye diagram | 1 | 80–320 | −2.42–9.11 | 0.0022–0.2247 | MAE = 0.291, RMSE = 0.387 |
Table A2. CNN algorithm structure and implementations that utilise eye diagram images for SNR prediction.

| Attribute | Refs. [24,60] | Ref. [61] | Our Work |
|---|---|---|---|
| ML technique | CNN | CNN | CNN |
| Input data | Eye diagrams | Eye-opening, height, width, closure | Eye diagrams |
| Images format | jpg | No | jpg |
| Approach | Classification | Classification | Regression |
| No. of convolutional and pooling layers | 2 | 3 | 5 |
| No. of filters | C1 = 6, C2 = 12 | C1 = 60, C2 = 80, C3 = 180 | Unified for all convolutional layers; ranges from 16 to 64 |
| Filter size | 5 | 3 | 10 |
| Pooling size | 2 | 2 | 3 |
| Activation functions | Sigmoid in the whole CNN | ReLU for each convolutional layer, and soft-max | ReLU for hidden layer, linear for output layer |
| No. of hidden layers | 0 | 2 | 1 |
| No. of elements of the fully connected feature map | 192 | FC1 = 360, FC2 = 120 | Range from 80 to 320, according to no. of filters |
| Dropout | No | Yes | 45% |
| Backpropagation | Yes | Yes | Yes |
| No. of output nodes | 20 | No | 1 |
| Output | 4 modulation formats, 16 SNR | Modulation formats, OSNR, ROF, and IQ skew | SNR |
| No. of epochs | 35 | No | 250 |
| Modulation format | RZ-OOK, NRZ-OOK, RZ-DPSK, and 4PAM | QAM and QPSK | NRZ-OOK |
| Data rate | 25 Gbaud | 32 Gbaud | 2 Gb/s |
| Collecting eye diagrams way | Simulating signals and displaying eye diagrams using the oscilloscope | Experimental | Simulating everything using Python |
| Colour mode | Coloured, converted into grayscale | Coloured | Black and white in RGB, converted to grayscale |
| Original image size | 900 × 1200 | 224 × 224 | 2366 × 3125 |
| Resized image | 28 × 28 | No | No |
| Resolution | Low | Low | High (600 dpi) |
| No. of models | 1 | 2 | 13 |
| Prediction time | 0.46 s | No | Range from 0.17 to 0.20 s |
| SNR range | (10–25) dB | (15–40) dB | (−2.42–9.11) dB |
| BER range | No | No | 0.0022–0.2247 |
| Total no. of images | 6400 | 1170 | 576 |
| Performance | 100% accuracy | 99.57% accuracy | MAE = 0.29–0.52, RMSE = 0.39–0.73 |
| Learning curves existence | No | No | Yes |
| UWOC/Fibre | Fibre | Fibre | UWOC |
| Year | 2017 | 2019 | 2024 |

References

  1. Aldin, M.B.; Alkareem, R.A.; Ali, M.A. Transmission of 10 Gb/s For Underwater Optical Wireless Communication System. J. Opt. 2024, 1–12. [Google Scholar] [CrossRef]
  2. Tian, R.; Wang, T.; Shen, X.; Zhu, R.; Jiang, L.; Lu, Y.; Lu, H.; Song, Y.; Zhang, P. 108 m Underwater Wireless Optical Communication Using a 490 nm Blue VECSEL and an AOM. Sensors 2024, 24, 2609. [Google Scholar] [CrossRef] [PubMed]
  3. Qu, Z.; Lai, M. A Review on Electromagnetic, Acoustic, and New Emerging Technologies for Submarine Communication. IEEE Access 2024, 12, 12110–12125. [Google Scholar] [CrossRef]
  4. Álvarez-Roa, C.; Álvarez-Roa, M.; Raddo, T.R.; Jurado-Navas, A.; Castillo-Vázquez, M. Cooperative Terrestrial–Underwater FSO System: Design and Performance Analysis. Photonics 2024, 11, 58. [Google Scholar] [CrossRef]
  5. Ramley, I.F.E.; AlZhrani, S.M.; Bedaiwi, N.M.; Al-Hadeethi, Y.; Barasheed, A.Z. Simple Moment Generating Function Optimisation Technique to Design Optimum Electronic Filter for Underwater Wireless Optical Communication Receiver. Mathematics 2024, 12, 861. [Google Scholar] [CrossRef]
  6. Proakis, J.G.; Salehi, M. Digital Communications; McGraw-Hill: New York, NY, USA, 2008. [Google Scholar]
  7. Sklar, B. Digital Communications: Fundamentals and Applications; Pearson: London, UK, 2021. [Google Scholar]
  8. Anderson, T.B.; Kowalczyk, A.; Clarke, K.; Dods, S.D.; Hewitt, D.; Li, J.C. Multi impairment monitoring for optical networks. J. Light. Technol. 2009, 27, 3729–3736. [Google Scholar] [CrossRef]
  9. Khan, F.N.; Zhou, Y.; Lau, A.T.; Lu, C. Modulation format identification in heterogeneous fiber-optic networks using artificial neural networks. Opt. Express 2012, 20, 12422–12431. [Google Scholar] [CrossRef]
  10. Khan, F.N.; Shen, T.S.R.; Zhou, Y.; Lau, A.T.; Lu, C. Optical performance monitoring using artificial neural networks trained with empirical moments of asynchronously sampled signal amplitudes. IEEE Photonics Technol. Lett. 2012, 24, 982–984. [Google Scholar] [CrossRef]
  11. Shen, T.S.R.; Sui, Q.; Lau, A.T. OSNR monitoring for PM-QPSK systems with large inline chromatic dispersion using artificial neural network technique. IEEE Photonics Technol. Lett. 2012, 24, 1564–1567. [Google Scholar] [CrossRef]
  12. Tan, M.C.; Khan, F.N.; Al-Arashi, W.H.; Zhou, Y.; Lau, A.T. Simultaneous optical performance monitoring and modulation format/bit-rate identification using principal component analysis. J. Opt. Commun. Netw. 2014, 6, 441–448. [Google Scholar] [CrossRef]
  13. Marsland, S. Machine Learning: An Algorithmic Perspective; Chapman and Hall/CRC: Boca Raton, FL, USA, 2011. [Google Scholar]
  14. Khan, F.N.; Fan, Q.; Lu, C.; Lau, A.T. An optical communication’s perspective on machine learning and its applications. J. Light. Technol. 2019, 37, 493–516. [Google Scholar] [CrossRef]
  15. Musumeci, F.; Rottondi, C.; Nag, A.; Macaluso, I.; Zibar, D.; Ruffini, M.; Tornatore, M. An overview on application of machine learning techniques in optical networks. IEEE Commun. Surv. Tutor. 2018, 21, 1383–1408. [Google Scholar] [CrossRef]
  16. Li, P.; Yi, L.; Xue, L.; Hu, W. 56 Gbps IM/DD PON based on 10G-class optical devices with 29 dB loss budget enabled by machine learning. In Proceedings of the 2018 Optical Fiber Communications Conference and Exposition (OFC), San Diego, CA, USA, 11–15 March 2018; pp. 1–3. [Google Scholar]
  17. Liao, T.; Xue, L.; Hu, W.; Yi, L. Unsupervised learning for neural network-based blind equalization. IEEE Photonics Technol. Lett. 2020, 32, 569–572. [Google Scholar] [CrossRef]
  18. Zha, X.; Peng, H.; Qin, X.; Li, G.; Yang, S. A deep learning framework for signal detection and modulation classification. Sensors 2019, 19, 4042. [Google Scholar] [CrossRef] [PubMed]
  19. Wang, D.; Zhang, M.; Li, Z.; Cui, Y.; Liu, J.; Yang, Y.; Wang, H. Nonlinear decision boundary created by a machine learning-based classifier to mitigate nonlinear phase noise. In Proceedings of the 2015 European Conference on Optical Communication (ECOC), Valencia, Spain, 27 September–1 October 2015; pp. 1–3. [Google Scholar] [CrossRef]
  20. Zhang, J.; Chen, W.; Gao, M.; Shen, G. K-means-clustering-based fiber nonlinearity equalization techniques for 64-QAM coherent optical communication system. Opt. Express 2017, 25, 27570–27580. [Google Scholar] [CrossRef]
  21. Zhang, L.; Pang, X.; Ozolins, O.; Udalcovs, A.; Popov, S.; Xiao, S.; Hu, W.; Chen, J. Spectrally efficient digitized radio-over-fiber system with k-means clustering-based multidimensional quantization. Opt. Lett. 2018, 43, 1546–1549. [Google Scholar] [CrossRef]
  22. Khan, F.N.; Yu, Y.; Tan, M.C.; Al-Arashi, W.H.; Yu, C.; Lau, A.P.T.; Lu, C. Experimental demonstration of joint OSNR monitoring and modulation format identification using asynchronous single channel sampling. Opt. Express 2015, 23, 30337–30346. [Google Scholar] [CrossRef]
  23. Zibar, D.; Gonzalez, N.G.; de Oliveira, J.C.R.F.; Monroy, I.T.; de Carvalho, L.H.H.; Piels, M.; Doberstein, A.; Diniz, J.; Nebendahl, B.; Franciscangelis, C.; et al. Application of machine learning techniques for amplitude and phase noise characterization. J. Light. Technol. 2015, 33, 1333–1343. [Google Scholar] [CrossRef]
  24. Wang, D.; Zhang, M.; Li, Z.; Li, J.; Fu, M.; Cui, Y.; Chen, X. Modulation format recognition and OSNR estimation using CNN-based deep learning. IEEE Photonics Technol. Lett. 2017, 29, 1667–1670. [Google Scholar] [CrossRef]
  25. Thrane, J.; Wass, J.; Piels, M.; Diniz, J.C.M.; Jones, R.; Zibar, D. Machine learning techniques for optical performance monitoring from directly detected PDM-QAM signals. J. Light. Technol. 2016, 35, 868–875. [Google Scholar] [CrossRef]
  26. Wu, X.; Jargon, J.A.; Skoog, R.A.; Paraschis, L.; Willner, A.E. Applications of Artificial Neural Networks in Optical Performance Monitoring. J. Light. Technol. 2009, 27, 3580–3589. [Google Scholar]
  27. Jargon, J.A.; Wu, X.; Choi, H.Y.; Chung, Y.C.; Willner, A.E. Optical performance monitoring of QPSK data channels by use of neural networks trained with parameters derived from asynchronous constellation diagrams. Opt. Express 2010, 18, 4931–4938. [Google Scholar] [CrossRef] [PubMed]
  28. Shen, T.S.R.; Meng, K.; Lau, A.T.; Dong, Z.Y. Optical performance monitoring using artificial neural network trained with asynchronous amplitude histograms. IEEE Photonics Technol. Lett. 2010, 22, 1665–1667. [Google Scholar] [CrossRef]
  29. Zibar, D.; Thrane, J.; Wass, J.; Jones, R.; Piels, M.; Schaeffer, C. Machine learning techniques applied to system characterization and equalization. In Proceedings of the 2016 Optical Fiber Communications Conference and Exhibition (OFC), Anaheim, CA, USA, 20–24 March 2016; pp. 1–3. [Google Scholar]
  30. Reza, A.G.; Rhee, J.-K.K. Blind nonlinear equalizer using artificial neural networks for PAM-4 signal transmissions with DMLs. Opt. Fiber Technol. 2021, 64, 102582. [Google Scholar] [CrossRef]
  31. Chan, C.C.K. Optical Performance Monitoring: Advanced Techniques for Next-Generation Photonic Networks; Academic Press: Cambridge, MA, USA, 2010. [Google Scholar]
  32. Hauske, F.N.; Kuschnerov, M.; Spinnler, B.; Lankl, B. Optical performance monitoring in digital coherent receivers. J. Light. Technol. 2009, 27, 3623–3631. [Google Scholar] [CrossRef]
  33. Geyer, J.C.; Fludger, C.R.S.; Duthel, T.; Schulien, C.; Schmauss, B. Performance monitoring using coherent receivers. In Proceedings of the 2009 Conference on Optical Fiber Communication, San Diego, CA, USA, 22–26 March 2009. [Google Scholar] [CrossRef]
  34. Szafraniec, B.; Marshall, T.S.; Nebendahl, B. Performance monitoring and measurement techniques for coherent optical systems. J. Light. Technol. 2012, 31, 648–663. [Google Scholar] [CrossRef]
  35. Zhao, Y.; Chen, X.; Yang, T.; Wang, L.; Wang, D.; Zhang, Z.; Shi, S. Low-complexity fiber nonlinearity impairments compensation enabled by simple recurrent neural network with time memory. IEEE Access 2020, 8, 160995–161004. [Google Scholar] [CrossRef]
  36. Saif, W.S.; Ragheb, A.M.; Nebendahl, B.; Alshawi, T.; Marey, M.; Alshebeili, S.A. Machine learning-based optical performance monitoring for super-channel optical networks. Photonics 2022, 9, 299. [Google Scholar] [CrossRef]
  37. Honkala, M.; Korpi, D.; Huttunen, J.M.J. DeepRx: Fully convolutional deep learning receiver. IEEE Trans. Wirel. Commun. 2021, 20, 3925–3940. [Google Scholar] [CrossRef]
  38. Skoog, R.A.; Banwell, T.C.; Gannett, J.W.; Habiby, S.F.; Pang, M.; Rauch, M.E.; Toliver, P. Automatic identification of impairments using support vector machine pattern classification on eye diagrams. IEEE Photonics Technol. Lett. 2006, 18, 2398–2400. [Google Scholar] [CrossRef]
  39. Ziauddin, F. Localization Through Optical Wireless Communication in Underwater by Using Machine Learning Algorithms. J. Glob. Res. Comput. Sci. 2024, 15, 1. [Google Scholar]
  40. Fan, X.; Xie, Y.; Ren, F.; Zhang, Y.; Huang, X.; Chen, W.; Zhangsun, T.; Wang, J. Joint optical performance monitoring and modulation format/bit-rate identification by CNN-based multi-task learning. IEEE Photonics J. 2018, 10, 1–12. [Google Scholar] [CrossRef]
  41. Jargon, J.A.; Wu, X.; Willner, A.E. Optical performance monitoring by use of artificial neural networks trained with parameters derived from delay-tap asynchronous sampling. In Proceedings of the 2009 Conference on Optical Fiber Communication, San Diego, CA, USA, 22–26 March 2009; pp. 1–3. [Google Scholar] [CrossRef]
  42. Wu, X.; Jargon, J.A.; Paraschis, L.; Willner, A.E. ANN-based optical performance monitoring of QPSK signals using parameters derived from balanced-detected asynchronous diagrams. IEEE Photonics Technol. Lett. 2010, 23, 248–250. [Google Scholar] [CrossRef]
  43. Dods, S.D.; Anderson, T.B. Optical performance monitoring technique using delay tap asynchronous waveform sampling. In Proceedings of the Optical Fiber Communication Conference, Anaheim, CA, USA, 5–10 March 2006; p. OThP5. [Google Scholar]
  44. Chen, H.; Poon, A.W.; Cao, X.-R. Transparent monitoring of rise time using asynchronous amplitude histograms in optical transmission systems. J. Light. Technol. 2004, 22, 1661. [Google Scholar] [CrossRef]
  45. Dong, Z.; Khan, F.N.; Sui, Q.; Zhong, K.; Lu, C.; Lau, A.T. Optical performance monitoring: A review of current and future technologies. J. Light. Technol. 2016, 34, 525–543. [Google Scholar] [CrossRef]
  46. Cheng, Y.; Zhang, W.; Fu, S.; Tang, M.; Liu, D. Transfer learning simplified multi-task deep neural network for PDM-64QAM optical performance monitoring. Opt. Express 2020, 28, 7607–7617. [Google Scholar] [CrossRef]
  47. Wan, Z.; Yu, Z.; Shu, L.; Zhao, Y.; Zhang, H.; Xu, K. Intelligent optical performance monitor using multi-task learning based artificial neural network. Opt. Express 2019, 27, 11281–11291. [Google Scholar] [CrossRef]
  48. Khan, F.N.; Zhong, K.; Zhou, X.; Al-Arashi, W.H.; Yu, C.; Lu, C.; Lau, A.P.T. Joint OSNR monitoring and modulation format identification in digital coherent receivers using deep neural networks. Opt. Express 2017, 25, 17767–17776. [Google Scholar] [CrossRef]
  49. Xia, L.; Zhang, J.; Hu, S.; Zhu, M.; Song, Y.; Qiu, K. Transfer learning assisted deep neural network for OSNR estimation. Opt. Express 2019, 27, 19398–19406. [Google Scholar] [CrossRef]
  50. Kashi, A.S.; Zhuge, Q.; Cartledge, J.; Borowiec, A.; Charlton, D.; Laperle, C.; O’Sullivan, M. Artificial neural networks for fiber nonlinear noise estimation. In Proceedings of the 2017 Asia Communications and Photonics Conference, Guangzhou, China, 10–13 November 2017; pp. 1–3. [Google Scholar]
  51. Wang, D.; Zhang, M.; Li, J.; Li, Z.; Li, J.; Song, C.; Chen, X. Intelligent constellation diagram analyzer using convolutional neural network-based deep learning. Opt. Express 2017, 25, 17150–17166. [Google Scholar] [CrossRef]
  52. Cho, H.J. Deep Learning Based Optical Performance Monitoring for Digital Coherent Optical Receivers. Ph.D. Thesis, Georgia Institute of Technology, College of Engineering, Atlanta, GA, USA, 2021. Available online: http://hdl.handle.net/1853/66065 (accessed on 17 July 2024).
  53. Derickson, D. Fiber optic test and measurement. In Fiber Optic Test and Measurement/Edited by Dennis Derickson; Prentice Hall: Upper Saddle River, NJ, USA, 1998. [Google Scholar]
  54. Jargon, J.A.; Wang, C.M.J.; Hale, D. A robust algorithm for eye-diagram analysis. J. Light. Technol. 2008, 26, 3592–3600. [Google Scholar] [CrossRef]
  55. Rajbhandari, S.; Faith, J.; Ghassemlooy, Z.; Angelova, M. Comparative study of classifiers to mitigate intersymbol interference in diffuse indoor optical wireless communication links. Optik 2013, 124, 4192–4196. [Google Scholar] [CrossRef]
  56. Wang, Z.; Yang, A.; Guo, P.; He, P. OSNR and nonlinear noise power estimation for optical fiber communication systems using LSTM based deep learning technique. Opt. Express 2018, 26, 21346–21357. [Google Scholar] [CrossRef] [PubMed]
  57. Ye, H.; Jiang, H.; Liang, G.; Zhan, Q.; Huang, S.; Wang, D.; Di, H.; Li, Z. OSNR monitoring based on a low-bandwidth coherent receiver and LSTM classifier. Opt. Express 2021, 29, 1566–1577. [Google Scholar] [CrossRef]
  58. Jargon, J.A.; Wu, X.; Willner, A.E. Optical performance monitoring using artificial neural networks trained with eye-diagram parameters. IEEE Photonics Technol. Lett. 2008, 21, 54–56. [Google Scholar] [CrossRef]
  59. Ribeiro, V.; Costa, L.; Lima, M.; Teixeira, A.L.J. Optical performance monitoring using the novel parametric asynchronous eye diagram. Opt. Express 2012, 20, 9851–9861. [Google Scholar] [CrossRef]
  60. Wang, D.; Zhang, M.; Li, Z.; Li, J.; Song, C.; Li, J.; Wang, M. Convolutional neural network-based deep learning for intelligent OSNR estimation on eye diagrams. In Proceedings of the 2017 European Conference on Optical Communication (ECOC), Gothenburg, Sweden, 17–21 September 2017; pp. 1–3. [Google Scholar] [CrossRef]
  61. Zhang, Y.; Pan, Z.; Yue, Y.; Ren, Y.; Wang, Z.; Liu, B.; Zhang, H.; Li, S.-A.; Fang, Y.; Huang, H.; et al. Eye diagram measurement-based joint modulation format, OSNR, ROF, and skew monitoring of coherent channel using deep learning. J. Light. Technol. 2019, 37, 5907–5913. [Google Scholar] [CrossRef]
  62. Al-Zhrani, S.; Bedaiwi, N.M.; El-Ramli, I.F.; Barasheed, A.Z.; Abduldaiem, A.; Al-Hadeethi, Y.; Umar, A. Underwater Optical Communications: A Brief Overview and Recent Developments. Eng. Sci. 2021, 16, 146–186. [Google Scholar] [CrossRef]
  63. Oubei, H.M.; Shen, C.; Kammoun, A.; Zedini, E.; Park, K.-H.; Sun, X.; Liu, G.; Kang, C.H.; Ng, T.K.; Alouini, M.-S.; et al. Light based underwater wireless communications. Jpn. J. Appl. Phys. 2018, 57, 08PA06. [Google Scholar] [CrossRef]
  64. Petzold, T.J. Volume Scattering Functions for Selected Ocean Waters; Scripps Institution of Oceanography La Jolla Ca Visibility Lab: La Jolla, CA, USA, 1972. [Google Scholar]
  65. Singh, M.; Singh, M.L.; Singh, G.; Kaur, H.; Kaur, S. Modeling and performance evaluation of underwater wireless optical communication system in the presence of different sized air bubbles. Opt. Quantum Electron. 2020, 52, 1–15. [Google Scholar] [CrossRef]
  66. Zhang, H.; Gao, Y.; Tong, Z.; Yang, X.; Zhang, Y.; Zhang, C.; Xu, J. Omnidirectional optical communication system designed for underwater swarm robotics. Opt. Express 2023, 31, 18630–18644. [Google Scholar] [CrossRef] [PubMed]
  67. Yu, C.; Chen, X.; Zhang, Z.; Song, G.; Lin, J.; Xu, J. Experimental verification of diffused laser beam-based optical wireless communication through air and water channels. Opt. Commun. 2021, 495, 127079. [Google Scholar] [CrossRef]
  68. Li, D.-C.; Chen, C.-C.; Liaw, S.-K.; Afifah, S.; Sung, J.-Y.; Yeh, C.-H. Performance Evaluation of Underwater Wireless Optical Communication System by Varying the Environmental Parameters. Photonics 2021, 8, 74. [Google Scholar] [CrossRef]
  69. Loo, J.; Mauri, J.L.; Ortiz, J.H. Mobile Ad Hoc Networks: Current Status and Future Trends; CRC Press: Boca Raton, FL, USA, 2016. [Google Scholar]
  70. Sun, X.; Kang, C.H.; Kong, M.; Alkhazragi, O.; Guo, Y.; Ouhssain, M.; Weng, Y.; Jones, B.H.; Ng, T.K.; Ooi, B.S. A Review on Practical Considerations and Solutions in Underwater Wireless Optical Communication. J. Light. Technol. 2020, 38, 421–431. [Google Scholar] [CrossRef]
  71. Geldard, C.T.; Thompson, J.; Popoola, W.O. An Overview of Underwater Optical Wireless Channel Modelling Techniques: (Invited Paper). In Proceedings of the 2019 International Symposium on Electronics and Smart Devices (ISESD), Badung, Indonesia, 8–9 October 2019; pp. 1–4. [Google Scholar] [CrossRef]
  72. Johnson, L.; Green, R.; Leeson, M. A survey of channel models for underwater optical wireless communication. In Proceedings of the 2013 2nd International Workshop on Optical Wireless Communications (IWOW), Newcastle Upon Tyne, UK, 21 October 2013; pp. 1–5. [Google Scholar] [CrossRef]
  73. Al-Kinani, A.; Wang, C.; Zhou, L.; Zhang, W. Optical Wireless Communication Channel Measurements and Models. IEEE Commun. Surv. Tutor. 2018, 20, 1939–1962. [Google Scholar] [CrossRef]
  74. Tang, S.; Dong, Y.; Zhang, X. Impulse response modeling for underwater wireless optical communication links. IEEE Trans. Commun. 2013, 62, 226–234. [Google Scholar] [CrossRef]
  75. Dong, Y.; Zhang, H.; Zhang, X. On impulse response modeling for underwater wireless optical MIMO links. In Proceedings of the 2014 IEEE/CIC International Conference on Communications in China (ICCC), Shanghai, China, 13–15 October 2014; pp. 151–155. [Google Scholar] [CrossRef]
  76. Li, Y.; Leeson, M.S.; Li, X. Impulse response modeling for underwater optical wireless channels. Appl. Opt. 2018, 57, 4815–4823. [Google Scholar] [CrossRef]
  77. Boluda-Ruiz, R.; Rico-Pinazo, P.; Castillo-Vázquez, B.; García-Zambrana, A.; Qaraqe, K. Impulse response modeling of underwater optical scattering channels for wireless communication. IEEE Photonics J. 2020, 12, 1–14. [Google Scholar] [CrossRef]
  78. Kodama, T.; Sanusi, M.A.B.A.; Kobori, F.; Kimura, T.; Inoue, Y.; Jinno, M. Comprehensive Analysis of Time-Domain Hybrid PAM for Data-Rate and Distance Adaptive UWOC System. IEEE Access 2021, 9, 57064–57074. [Google Scholar] [CrossRef]
  79. Kaushal, H.; Kaddoum, G. Underwater optical wireless communication. IEEE Access 2016, 4, 1518–1547. [Google Scholar] [CrossRef]
  80. Khalighi, M.A.; Uysal, M. Survey on free space optical communication: A communication theory perspective. IEEE Commun. Surv. Tutor. 2014, 16, 2231–2258. [Google Scholar] [CrossRef]
  81. Agrawal, G.P. Fiber-Optic Communication Systems; John Wiley & Sons: Hoboken, NJ, USA, 2012. [Google Scholar]
  82. Zhu, S.; Chen, X.; Liu, X.; Zhang, G.; Tian, P. Recent progress in and perspectives of underwater wireless optical communication. Prog. Quantum Electron. 2020, 73, 100274. [Google Scholar] [CrossRef]
  83. Ghassemlooy, Z.; Popoola, W.; Rajbhandari, S. Optical Wireless Communications: System and Channel Modelling with Matlab®; CRC Press: Boca Raton, FL, USA, 2019. [Google Scholar]
  84. Kharraz, O.; Forsyth, D. Performance comparisons between PIN and APD photodetectors for use in optical communication systems. Optik 2013, 124, 1493–1498. [Google Scholar] [CrossRef]
  85. Zhao, Z.; Liu, J.; Liu, Y.; Zhu, N. High-speed photodetectors in optical communication system. J. Semicond. 2017, 38, 121001. [Google Scholar] [CrossRef]
  86. Farr, N.; Chave, A.; Freitag, L.; Preisig, J.; White, S.; Yoerger, D.; Sonnichsen, F. Optical Modem Technology for Seafloor Observatories. In Proceedings of the OCEANS 2006, Boston, MA, USA, 18–21 September 2006; pp. 1–6. [Google Scholar] [CrossRef]
  87. Lee, H.-K.; Moon, J.-H.; Mun, S.-G.; Choi, K.-M.; Lee, C.-H. Decision threshold control method for the optical receiver of a WDM-PON. J. Opt. Commun. Netw. 2010, 2, 381–388. [Google Scholar] [CrossRef]
  88. Palermo, S. CMOS Transceiver Circuits for Optical Interconnects. In Encyclopedia of Modern Optics, 2nd ed.; Guenther, B.D., Steel, D.G., Eds.; Elsevier: Oxford, UK, 2018; pp. 254–263. [Google Scholar]
  89. Shieh, W.; Djordjevic, I. Optical Communication Fundamentals. In OFDM for Optical Communications; Shieh, W., Djordjevic, I., Eds.; Academic Press: Oxford, UK, 2010; pp. 53–118. [Google Scholar]
  90. Kryukov, Y.; Pokamestov, D.; Brovkin, A.; Shinkevich, A.; Shalin, G. MCS map for link-level simulation of two-user PD-NOMA system. Proc. Eng. 2024, 6, 151–160. [Google Scholar] [CrossRef]
  91. Tsipi, L.; Karavolos, M.; Papaioannou, G.; Volakaki, M.; Vouyioukas, D. Machine learning-based methods for MCS prediction in 5G networks. Telecommun. Syst. 2024, 86, 705–728. [Google Scholar] [CrossRef]
  92. Qiu, Y.; Gan, Z.; Pan, Y. Research on application of software simulation to spread spectrum communication systems. J. Syst. Simul. 1999, 11, 461–464. [Google Scholar]
  93. Wikipedia. Importance Sampling—Wikipedia, the Free Encyclopedia. Available online: https://en.wikipedia.org/wiki/Importance_sampling (accessed on 28 May 2024).
  94. Cavus, E.; Haymes, C.L.; Daneshrad, B. Low BER performance estimation of LDPC codes via application of importance sampling to trapping sets. IEEE Trans. Commun. 2009, 57, 1886–1888. [Google Scholar] [CrossRef]
  95. Jeruchim, M.C.; Balaban, P.; Shanmugan, K.S. Simulation of Communication Systems: Modeling, Methodology and Techniques; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
  96. Jeruchim, M. Techniques for estimating the bit error rate in the simulation of digital communication systems. IEEE J. Sel. Areas Commun. 1984, 2, 153–170. [Google Scholar] [CrossRef]
  97. Shin, S.K.; Park, S.K.; Kim, J.M.; Ko, S.C. New quasi-analytic ber estimation technique on the nonlinear satellite communication channels. IEE Proc.-Commun. 1999, 146, 68–72. [Google Scholar] [CrossRef]
  98. Land, I.; Hoeher, P.; Sorger, U. Log-likelihood values and Monte Carlo simulation-some fundamental results. In Proceedings of the International Symposium on Turbo Codes and Related Topics, Brest, France, 4–7 September 2000; pp. 43–46. [Google Scholar]
  99. Kabrisky, M. A Proposed Model for Visual Information Processing in the Human Brain; University of Illinois at Urbana-Champaign: Champaign, IL, USA, 1964. [Google Scholar]
  100. Giebel, H. Feature Extraction and Recognition of Handwritten Characters by Homogeneous Layers; Springer: Berlin/Heidelberg, Germany, 1971; pp. 162–169. [Google Scholar] [CrossRef]
  101. Fukushima, K. Cognitron: A self-organizing multilayered neural network. Biol. Cybern. 1975, 20, 121–136. [Google Scholar] [CrossRef] [PubMed]
  102. Cui, N. Applying Gradient Descent in Convolutional Neural Networks, 1st ed.; IOP Publishing: Bristol, UK, 2018; Volume 1004. [Google Scholar] [CrossRef]
  103. Boutaba, R.; Salahuddin, M.A.; Limam, N.; Ayoubi, S.; Shahriar, N.; Estrada-Solano, F.; Caicedo, O.M. A comprehensive survey on machine learning for networking: Evolution, applications and research opportunities. J. Internet Serv. Appl. 2018, 9, 1–99. [Google Scholar] [CrossRef]
  104. Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning Internal Representations by Error Propagation; California Univ San Diego La Jolla Inst for Cognitive Science: La Jolla, CA, USA, 1985. [Google Scholar]
  105. LeCun, Y.; Boser, B.; Denker, J.S.; Henderson, D.; Howard, R.E.; Hubbard, W.; Jackel, L.D. Backpropagation applied to handwritten zip code recognition. Neural Comput. 1989, 1, 541–551. [Google Scholar] [CrossRef]
  106. Khan, A.; Sohail, A.; Zahoora, U.; Qureshi, A.S. A survey of the recent architectures of deep convolutional neural networks. Artif. Intell. Rev. 2020, 53, 5455–5516. [Google Scholar] [CrossRef]
  107. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
  108. Koushik, J. Understanding convolutional neural networks. arXiv 2016, arXiv:1605.09081. [Google Scholar] [CrossRef]
  109. Al Bataineh, A.; Kaur, D.; Al-khassaweneh, M.; Al-sharoa, E. Automated CNN Architectural Design: A Simple and Efficient Methodology for Computer Vision Tasks. Mathematics 2023, 11, 1141. [Google Scholar] [CrossRef]
  110. Bataineh, A.A. A comparative analysis of nonlinear machine learning algorithms for breast cancer detection. Int. J. Mach. Learn. Comput. 2019, 9, 248–254. [Google Scholar] [CrossRef]
  111. Zhou, Z.; Guan, W.; Wen, S. Recognition and evaluation of constellation diagram using deep learning based on underwater wireless optical communication. arXiv 2020, arXiv:2007.05890. [Google Scholar] [CrossRef]
  112. Natalino, C.; Schiano, M.; Di Giglio, A.; Wosinska, L.; Furdek, M. Field demonstration of machine-learning-aided detection and identification of jamming attacks in optical networks. In Proceedings of the 2018 European Conference on Optical Communication (ECOC), Rome, Italy, 23–27 September 2018; pp. 1–3. [Google Scholar] [CrossRef]
  113. Dorronsoro, J.R.; González, A.; Cruz, C.S. Natural Gradient Learning in NLDA Networks. In Connectionist Models of Neurons, Learning Processes, and Artificial Intelligence; Mira, J., Prieto, A., Eds.; Springer: Berlin/Heidelberg, Germany, 2001; pp. 427–434. [Google Scholar] [CrossRef]
  114. Ruder, S. An overview of gradient descent optimization algorithms. arXiv 2016, arXiv:1609.04747. [Google Scholar] [CrossRef]
  115. Amari, S.-I. Natural gradient works efficiently in learning. Neural Comput. 1998, 10, 251–276. [Google Scholar] [CrossRef]
  116. Rai, P.; Kaushik, R. Artificial intelligence based optical performance monitoring. J. Opt. Commun. 2024, 44, s1733–s1737. [Google Scholar] [CrossRef]
Figure 4. Typical direct detection optical receiver model.
Figure 5. Probability density function (PDF) tails obtained from an eye diagram (ED).
Figure 6. Density vs. SNR.
Figure 7. Components and the functions of an artificial neuron.
Figure 8. A schematic representation of predicting SNRs with various numbers of filters.
Figure 9. Examples of eye diagram images.
Figure 10. The CNN architecture.
Figure 11. The CNN architecture and implementation.
Figure 12. Scheme of calculating the MAE of True and Predicted data.
Figure 13. Learning curves of CNN regression models using (a) 16, (b) 20, (c) 24, and (d) 28 filters, measured via loss and RMSE.
Figure 14. Learning curves of CNN regression models using (a) 32, (b) 36, (c) 40, and (d) 44 filters, measured via loss and RMSE.
Figure 15. Learning curves of CNN regression models using (a) 48, (b) 52, (c) 56, (d) 60 and (e) 64 filters, measured via loss and RMSE.
Figure 16. Number of filters vs. training and predicting time for all CNN models.
Figure 20. Performance of the CNN models, the true versus the predicted SNR (left) and BER (right) values. Various channel models are employed to simulate the behaviour of water in harbours (represented by the colour blue) and coastal areas (represented by the colour green) for different pulse widths.
Figure 21. SNR vs. BER for harbour water (left) and coastal water (right). The true (red) and predicted (blue) values are for different pulse widths using different channel models.
Table 1. List of models' channel impulse response functions.

DGF: the closed-form expression of the double gamma functions (DGFs) is given as follows [74]:
$$h(t) = C_1 \Delta t\, e^{-C_2 \Delta t} + C_3 \Delta t\, e^{-C_4 \Delta t}, \quad t \geq t_0, \qquad \Delta t = t - t_0, \quad t_0 = L/v$$

WDGF: the weighted double gamma functions (WDGFs) model is given as follows [75]:
$$h(t) = C_1 \Delta t^{\alpha} e^{-C_2 \Delta t} + C_3 \Delta t^{\beta} e^{-C_4 \Delta t}, \quad t \geq t_0, \qquad \Delta t = t - t_0$$

CEAPF: the combination of exponential and arbitrary power functions (CEAPF) is given as follows [76]:
$$h(t) = C_1 t^{\alpha} (t + C_2)^{\beta} e^{-a v (t + t_0)}, \qquad C_1 > 0, \; C_2 > 0, \; \alpha > -1, \; \mathrm{and} \; \beta > 0$$

BP: the Beta Prime (BP) distribution is given as follows [77]:
$$h_{BP}(t) = \frac{\Gamma(\beta_1 + \beta_2)}{\Gamma(\beta_1)\,\Gamma(\beta_2)} \cdot \frac{t^{\beta_1 - 1}}{(1 + t)^{\beta_1 + \beta_2}}, \quad t > 0$$

$C_1$, $C_2$, $C_3$, $C_4$, $\alpha$, $\beta$, $\beta_1$, and $\beta_2$ are curve-fitting parameters for the respective models. $v$ is the light velocity in the seawater medium under consideration. $L$ is the linkspan distance between the Tx and Rx.
Table 2. The required attributes for generating eye diagram images and calculating SNR.

| Metadata | Values |
|---|---|
| Water types | Harbour and coastal waters |
| Channel models | DGF, WDGF, CEAPF, and BP |
| Pulse shapes | Gaussian and Rectangular |
| Pulse widths | 0.1, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4, 0.45, 0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95 |
| Centre positions on the eye diagrams | 0/1 |
Table 3. Statistical analysis of results information.

| | Min | Max | Mean |
|---|---|---|---|
| Training Time (h) | 8.33 | 10.99 | 9.6915 |
| Predicting Time (s) | 0.1732 | 0.2098 | 0.1855 |
| Loss | 0.2911 | 0.5190 | 0.3739 |
| Validation Loss | 0.3605 | 0.6113 | 0.4730 |
| RMSE | 0.3869 | 0.7253 | 0.5070 |
| Validation RMSE | 0.5178 | 0.8686 | 0.6779 |
| \|1 − Loss Ratio\| | 0.0297 | 0.3381 | 0.2107 |
| \|1 − RMSE Ratio\| | 0.0183 | 0.4153 | 0.2551 |
| Number of Filters | 16 | 64 | 40 |
| Number of Parameters | 516,881 | 2,267,201 | 1,369,161 |
Table 4. Standard hyperparameters for CNN model platform structure.

| Standard Hyperparameters | Values |
|---|---|
| Colour mode | Grayscale |
| Eye diagram image size (height × width) | 2366 × 3125 |
| Model | Functional API |
| Optimizer | Adam |
| Loss function | MAE |
| Metric | RMSE |
| Batch size | 15 |
| No. of convolutional layers | 5 |
| No. of max pooling layers | 5 |
| Filter size | 10 |
| Filter stride | 1 |
| Pooling size | 3 |
| Pooling stride | 3 |
| No. of hidden layers | 1 |
| Activation function in the hidden layer | ReLU |
| Activation function in the output layer | Linear |
| Learning rate | 10⁻⁵ |
| Epochs | 250 |
| Dropout rate | 0.45 |
Table 5. Performance summary of the dataset's model regarding training and validation for loss and RMSE.

| Training Time (h) | Predicting Time (s) | Loss | Validation Loss | RMSE | Validation RMSE | Loss Ratio | RMSE Ratio | No. of Filters | No. of Parameters |
|---|---|---|---|---|---|---|---|---|---|
| 9.82 | 0.18551209 | 0.519 | 0.6113 | 0.7253 | 0.8686 | 0.8490 | 0.8350 | 16 | 516,881 |
| 10.12 | 0.18702742 | 0.481 | 0.4956 | 0.673 | 0.6605 | 0.9703 | 1.0183 | 20 | 651,301 |
| 8.84 | 0.18525898 | 0.446 | 0.4907 | 0.592 | 0.7047 | 0.9097 | 0.8404 | 24 | 787,801 |
| 9.86 | 0.18294497 | 0.443 | 0.5089 | 0.575 | 0.7332 | 0.8701 | 0.7848 | 28 | 926,381 |
| 8.96 | 0.18577732 | 0.363 | 0.475 | 0.502 | 0.6566 | 0.7634 | 0.7638 | 32 | 1,067,041 |
| 10.19 | 0.18203078 | 0.326 | 0.4659 | 0.434 | 0.6689 | 0.7004 | 0.6493 | 36 | 1,209,781 |
| 9.97 | 0.18092463 | 0.355 | 0.4582 | 0.495 | 0.6766 | 0.7737 | 0.7317 | 40 | 1,354,601 |
| 10.62 | 0.18176481 | 0.361 | 0.5051 | 0.486 | 0.7251 | 0.7137 | 0.6707 | 44 | 1,501,501 |
| 10.09 | 0.19424853 | 0.339 | 0.5122 | 0.462 | 0.7133 | 0.6619 | 0.6476 | 48 | 1,650,481 |
| 10.99 | 0.18315452 | 0.314 | 0.4202 | 0.412 | 0.6336 | 0.7475 | 0.6509 | 52 | 1,801,541 |
| 9.08 | 0.18004519 | 0.291 | 0.4383 | 0.387 | 0.6617 | 0.6642 | 0.5847 | 56 | 1,954,681 |
| 9.12 | 0.17316376 | 0.331 | 0.3605 | 0.458 | 0.5178 | 0.9190 | 0.8839 | 60 | 2,109,901 |
| 8.33 | 0.20982659 | 0.293 | 0.4075 | 0.39 | 0.5916 | 0.7183 | 0.6589 | 64 | 2,267,201 |
Table 6. Pearson correlation coefficients between the number of parameters and the results' information, and their interpretation.

| Metric | Correlation with No. of Parameters | Interpretation |
|---|---|---|
| Loss | −0.8977 | Strong linear inverse correlation |
| Val. loss | −0.7891 | Strong linear inverse correlation |
| RMSE | −0.8863 | Strong linear inverse correlation |
| Val. RMSE | −0.7029 | Strong linear inverse correlation |
| No. of filters | 0.9997 | Very strong linear direct correlation |
Table 7. Pearson correlation coefficients between the number of filters and the results' information, and their interpretation.

| Metric | Correlation with No. of Filters | Interpretation |
|---|---|---|
| Training Time | −0.1923 | Very weak linear inverse correlation |
| Predicting Time | 0.1793 | Very weak linear direct correlation |
| Loss | −0.9053 | Very strong linear inverse correlation |
| Validation Loss | −0.7897 | Strong linear inverse correlation |
| RMSE | −0.8429 | Strong linear inverse correlation |
| Validation RMSE | −0.6416 | Moderate linear inverse correlation |

