Article

Neural Modeling of Laviron Treatment for Coating of Electrodes with Mediator

1 Automation Department, Technical University of Cluj-Napoca, 28 Memorandumului Street, 400114 Cluj-Napoca, Romania
2 Physics and Chemistry Department, Technical University of Cluj-Napoca, 28 Memorandumului Street, 400114 Cluj-Napoca, Romania
3 Department of Environmental Analysis and Engineering, Babes-Bolyai University, 30 Fantanele Street, 400294 Cluj-Napoca, Romania
4 National Institute for Research and Development of Isotopic and Molecular Technologies, 65-103 Donath Street, 400293 Cluj-Napoca, Romania
* Author to whom correspondence should be addressed.
Coatings 2019, 9(7), 429; https://doi.org/10.3390/coatings9070429
Submission received: 29 April 2019 / Revised: 20 June 2019 / Accepted: 3 July 2019 / Published: 6 July 2019
(This article belongs to the Special Issue Corrosion and Electrochemical Behavior of Metals Coating)

Abstract:
In this paper, an original solution for modeling and simulating the nonlinear electrochemical process associated with the Laviron treatment is proposed. Graphite electrodes were coated with a mediator by adsorption, and the Laviron treatment was first used to determine the efficiency of the modified electrode coatings. The experimental data were obtained from an electrochemical experiment. The mathematical model of the process is expressed using a neural network with a complex structure, which represents a novel approach in this domain. The main advantages of the proposed model are its accuracy with respect to the experimental data and the fact that it permits numerical simulation of the process, with multiple future applications. Based on the proposed neural model, an original procedure for determining the parameters of the nonlinear Laviron equation is presented. Another interesting element is the proof that the heterogeneous electron-transfer rate constant kS is a function of the potential scan rate. This result is made possible by the original approach of treating the Laviron process as nonlinear over the entire range of input signals, in contrast with the vast majority of studies in the literature, which are based on linearizing the process near particular steady-state working points.

Graphical Abstract

1. Introduction

Phenothiazine derivatives are widely used as mediators for nicotinamide adenine dinucleotide (NADH) oxidation, a subject of great interest because there are around 300 NAD+-dependent dehydrogenases and, consequently, a great number of biosensors can be obtained for different applications [1,2,3,4].
For this purpose, it is important to know the value of the heterogeneous electron-transfer rate constant (kS, s−1) in order to have a high-performing mediator for NADH electrocatalytic oxidation. This value can be determined from the dependency between the peak potentials and the potential scan rate using the Laviron treatment [5]. In a previous work [1], we obtained higher kS values for a graphite electrode modified with a phenothiazine derivative in electrolytes with different pH values.
The mathematical model of a technological process can be determined through analytical or experimental identification [6]. Mathematical models of technological processes can be expressed, for example, using: ordinary differential equations for linear processes (applying the Laplace transform to these equations yields the process transfer functions) [7]; the state-space representation for both linear and nonlinear processes [8]; partial differential equations for distributed-parameter processes [9,10,11,12]; neural models based on neural networks for all types of processes [13,14,15]; and fractional-order models for fractional-order processes [16]. In other cases, when possible, the process model (for any type of process) can be expressed using a direct analytical approximating solution (in practice, the solution that verifies an ordinary or partial differential equation), which mathematically describes the evolution of the output signal (over time and, in some cases, other independent variables) as the effect of applying the input signal to the considered process [17,18].
In this study, since we used experimental data, the mathematical model of the process was determined through experimental identification. After analyzing the experimental datasets, owing to their particularities (relatively few samples, two input signals, two output signals, and small variance of the second input signal, the pH value), we concluded that the most efficient modeling method is to learn the process behavior using a neural network. The main advantage of neural networks is that they can establish a functional input–output relationship even when processing "difficult" sets of experimental data associated with nonlinear processes. This is because a neural network is trained to learn the process behavior through an error-minimization algorithm and because neural networks are universal approximators (according to the universal approximation theorem) [19,20].

2. Materials and Methods

2.1. Materials

Bis-phenothiazin-3-yl methane (BPhM) was a gift from Dr. Castelia Cristea, Department of Organic Chemistry, “Babes-Bolyai” University Cluj-Napoca, Cluj-Napoca, Romania. BPhM was synthesized as previously reported [21].
Dimethyl sulfoxide (DMSO) 99.6% was purchased from Sigma (St. Louis, MO, USA) and K2HPO4·2H2O and KH2PO4·H2O from Merck (Darmstadt, Germany). All other reagents were of analytical grade and used as received. The supporting electrolyte was a 0.1 mol/L phosphate buffer solution.

2.2. Electrode Preparation

A spectrographic graphite rod (Ringsdorff-Werke, GmbH, Bonn-Bad Godesberg, Bonn, Germany) of ~3 mm diameter was wet-polished on fine (grit 400 and 600) emery paper (Buehler, Lake Bluff, IL, USA). A graphite piece of suitable length was then carefully washed with deionized water, dried, and finally press-fitted into a PTFE holder to obtain a graphite electrode with a flat circular surface area of ~0.071 cm2 in contact with the solution. The modified graphite electrodes (G/BPhM) were obtained by spreading 20 μL of 5 mmol/L phenothiazine derivative solution in dimethyl sulfoxide onto the electrode surface and leaving them for 1 h at room temperature to evaporate the solvent. Before immersion in the test solution, the modified electrodes were carefully washed with deionized water. For each graphite electrode coated with the phenothiazine derivative, the surface concentration (Γ, mol/cm2) was estimated from the peak areas recorded during cyclic voltammetric measurements at low scan rate (v < 10 mV/s), corrected for the background current; in all cases, the surface concentration was of the same order of magnitude, confirming the accuracy of the measured data [1]. All presented results are the average of at least three identically prepared electrodes, unless otherwise mentioned [1]. In addition, we previously used SEM micrographs of graphite electrodes coated with the phenothiazine derivative to characterize the surfaces of the coated electrodes [1].
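The surface-concentration estimate described above can be sketched numerically via the standard relation Γ = Q/(nFA), where Q is the background-corrected peak charge. The function name and the numeric example below are illustrative, not values from the original experiment.

```python
F = 96500.0  # Faraday constant, C/mol

def surface_concentration(peak_charge_C, n_electrons, area_cm2):
    """Standard estimate Gamma = Q / (n F A), in mol/cm^2, from the
    background-corrected voltammetric peak charge Q (in coulombs)."""
    return peak_charge_C / (n_electrons * F * area_cm2)

# Hypothetical example: 2e-6 C peak charge, 2-electron process,
# and the ~0.071 cm^2 electrode area quoted in the text.
gamma = surface_concentration(2e-6, 2, 0.071)
print(f"Gamma = {gamma:.3e} mol/cm^2")
```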

2.3. Electrochemical Experiments

Electrochemical experiments were carried out using a typical three-electrode electrochemical cell. The modified electrode was used as working electrode, a platinum ring as counter electrode and an Ag|AgCl|KClsat as reference electrode. Cyclic voltammetry experiments were performed on a PC-controlled electrochemical analyzer (Autolab-PGSTAT 10, EcoChemie, Utrecht, The Netherlands).

2.4. Mathematical Model

An important stage in the research activity is determining a mathematical model that describes the nonlinear process behavior. This model has to connect the two main input signals of the process with its two main output signals. The two input signals are the potential scan rate u1 = v (V/s) and the phosphate buffer solution pH (u2 = pH). The two output signals are the potential differences y1 = Epa − E0′ (V) and y2 = Epc − E0′ (V). To obtain the process mathematical model, we chose an original approach based on neural networks. The main advantage of this procedure is that a single entity provides the mathematical model of the treated Multi-Input–Multi-Output (MIMO) process (the term "MIMO process" is justified by the existence of two input and two output signals). The basic voltammograms (obtained at one scan rate) are published in [1].
The experimental data obtained in laboratory are presented in Table 1.
The two output signals are functions depending on both input signals y1 = y1(u1, u2) and y2 = y2(u1, u2).
The architecture of the neural network that models the mentioned MIMO chemical process is presented in Figure 1; it corresponds to a fully connected feed-forward neural network. To keep the representation in Figure 1 manageable, not all neurons and connections between them are shown. Figure 2 illustrates the term "fully connected feed-forward neural network" by showing the neurons of the last two layers and the connections between them (a weight is associated with each connection, and the weights are stored in the weight matrix W9(2 × 5)). To keep Figure 1 and Figure 2 readable (due to their complexity), the bias values associated with the neurons are not represented; these values are introduced below.
The following notations are used in Figure 1 and Figure 2:
  • Si is a neuron layer ([u2 = pH; u1 = v] is the input layer (practically the input signals; denoted S0 below); S1–S9 are the hidden layers; S10 is the output layer (it contains two neurons, one for each output signal));
  • Nij is the neuron (i) from layer (j);
  • Wa is the matrix of weights that makes the connection between layer (a) and layer (a + 1);
  • w(a + 1; a)ij is the weight associated to the connection between the neuron (i) from layer (a + 1) and the neuron (j) from layer (a); and
  • the thickened arrows highlight the connections between the consecutive layers of the network.
The neural network contains an input layer, nine hidden layers (S1–S9: S1 contains 25 neurons, S2–S8 contain 20 neurons each, and S9 contains 5 neurons), and an output layer S10 with two neurons. The term "last layers" refers to layers S9 and S10 (the first layer being, obviously, the input layer [u1; u2]). The proposed neural network is fully connected, as all output signals from the neurons of layer (a) are transmitted as input signals to all neurons of layer (a + 1). It is also a feed-forward network, as it contains no feedback (the signals flow through the network from input to output, "from left to right" in Figure 1).
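The layer widths described above (2 inputs, S1 = 25, S2–S8 = 20 each, S9 = 5, 2 outputs) can be sketched as a plain feed-forward pass. This is an illustrative NumPy reimplementation with randomly initialized weights, not the authors' trained MATLAB network:

```python
import numpy as np

# Layer widths of the fully connected feed-forward network described in the text:
# 2 inputs, S1 = 25, S2..S8 = 20 each, S9 = 5, S10 = 2 outputs.
LAYER_SIZES = [2, 25] + [20] * 7 + [5, 2]

rng = np.random.default_rng(0)
# One weight matrix W_a (next_dim x prev_dim) and one bias vector per transition.
weights = [rng.standard_normal((m, n)) * 0.1
           for n, m in zip(LAYER_SIZES, LAYER_SIZES[1:])]
biases = [np.zeros(m) for m in LAYER_SIZES[1:]]

def forward(u):
    """Propagate the input vector u = [u1, u2] through all layers
    (linear neurons: the activation is the identity, F(p) = p)."""
    y = np.asarray(u, dtype=float)
    for W, b in zip(weights, biases):
        y = W @ y + b
    return y

y = forward([0.1, 9.0])
print(y.shape)  # two outputs: y1, y2
```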
In our research activity, we also tried to learn the process behavior using two other neural architectures (in different variants): the Elman neural network and the Nonlinear Autoregressive with Exogenous Inputs (NARX) model. We chose the fully connected feed-forward neural network as the architecture for learning the process behavior because we could not obtain better accuracy with either of the other two tested architectures. In general, the NARX model [13] can be trained very efficiently to learn the behavior of dynamic systems (in which the output signals at a given moment depend on previous samples of both the input and output signals). Consequently, delay lines have to be introduced in the model to memorize the previous values of the signals, their number being imposed by the system order. In the case of the approached electrochemical process, we had to learn the behavior of a complex but static function. In this context, the increased complexity of the NARX model due to the delay lines is not justified; moreover, the solution search is made indirectly, making the solution more complicated to find. Using a NARX model with 85 nonlinear neurons in the hidden layer (each with a hyperbolic tangent transfer function), two linear neurons in the output layer, and four delay lines on both the input and the output signal, the best GLOBAL MSE (Mean Square Error) we could obtain for y1 and y2 together was 0.0543 V, which is considerably higher than the GLOBAL MSE presented in Table 2 (more than seven times higher). The Elman neural network [13] has a particular structure that can memorize the previous state of the network. For dynamical systems, its behavior can be approximately assimilated to that of a first-order NARX model (with only one delay line on both the input and the output signal). For this reason, it has a much lower processing capacity than a high-order NARX model and generates poorer approximation results. Using the Elman neural network, we could not obtain a GLOBAL MSE lower than 0.72 V, an error we consider unacceptable for this application.
To determine the proposed neural network structure, the following original procedure was applied:
(1)
The initial structure of the neural network is imposed: the input layer (practically the input signals) as in Figure 1, an S1 layer containing three neurons, and the output layer S10 as in Figure 1.
(2)
The neural network is trained and simulated, which yields the mean square error (considered the quality indicator for this application) between the experimental values (of y1 and y2) and the simulated values (the values of the output signals y1 and y2 generated by the neural network).
(3)
The S1 layer dimension is increased by one neuron.
(4)
Stages (2) and (3) are repeated, until the mean square error increases from one simulation to the next.
(5)
The last dimension obtained for S1 (before the mean square error starts to increase) in Stage (4) is considered for further use.
(6)
The neural network structure of layer S2 (initially containing three neurons) is increased.
(7)
A similar simulation as in Stage (2) is run.
(8)
The S2 layer dimension is increased by one neuron.
(9)
Stages (7) and (8) are repeated until the mean square error increases from one simulation to the next.
(10)
The last dimension obtained for S2 (before the mean square error starts to increase) at Stage (9) is considered for further use.
(11)
Stages (6)–(10) are repeated for the S3 layer.
(12)
Since the dimension of S3 layer is obtained as equal to the dimension of S2 layer, the dimension of the next layers from the right of S3 layer is imposed as the value obtained in Stages (5) and (10) (the dimension of S2 and S3 layers).
(13)
The neural network structure of layer S4 (containing the same number of neurons as S2 and S3) is increased.
(14)
A similar simulation as in the case of Stage (2) is run.
(15)
Stages (13) and (14) are repeated (in Stage (13), we consider layer Si, where i = 5 in the first iteration and i is increased by 1 in each iteration) until the mean square error increases from one simulation to the next.
(16)
The last layer with the dimension equal to the dimension of S2 and S3 is Si, where i is the last iteration from Stage (15) before the mean square error starts to increase (after the procedure application, we obtained i = 8; this structure of the neural network is considered for further use).
(17)
The neural network structure of layer S9 (initially containing three neurons) is increased.
(18)
A similar simulation as in the case of Stage (2) is run.
(19)
The S9 layer dimension is increased by one neuron.
(20)
Stages (18) and (19) are repeated, until the mean square error increases from one simulation to the next.
(21)
The last dimension obtained for S9 (before the mean square error starts to increase) in Stage (20) is considered for further use.
(22)
The influence of other neural network structure variations is studied (both in the case of the layers dimension and in the case of the number of layers) over the global value of the mean square error.
(23)
After the study in Stage (22), the solution obtained in Stage (21) proves to be the best; it is therefore proposed as the final solution, and the algorithm is stopped.
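The structure-determination procedure above amounts to repeatedly enlarging one layer by one neuron until the mean square error stops improving. A minimal sketch of that inner loop (Stages (2)–(5)), with a stand-in `train_and_score` function in place of the actual train-and-simulate step:

```python
def grow_layer(train_and_score, start_size=3, max_size=100):
    """Widen a layer one neuron at a time and keep the last width
    before the mean square error starts to increase (Stages 2-5)."""
    best_size = start_size
    best_mse = train_and_score(start_size)
    for size in range(start_size + 1, max_size + 1):
        mse = train_and_score(size)
        if mse >= best_mse:  # error increased: stop, keep previous width
            break
        best_size, best_mse = size, mse
    return best_size, best_mse

# Hypothetical error-vs-width curve with a minimum at width 25
# (illustrative only; the real values come from training on Table 1).
scores = {s: (s - 25) ** 2 / 1000 + 0.001 for s in range(3, 101)}
size, mse = grow_layer(lambda s: scores[s])
print(size)  # the width at which the error first stops decreasing
```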
The high complexity of the neural network structure (i.e., the network dimensions) is due to the use of a training dataset with a high level of difficulty: two input signals; extremely few values for the u2 input signal (pH 5, 7, and 9); two output signals whose values have different algebraic signs; and a globally small number of samples.
All the neurons in the neural network structure are linear, as shown in Figure 3. In Figure 3, xi are the input signals of the neuron, wi are the weights corresponding to each neuron input (i ∈ {1, 2, ..., n}, where n is the number of inputs of the neuron), b is the bias value, p is the internal potential of the neuron, and F(p) is its activation function (in this case linear, described by F(p) = p). We used the architecture of the linear neural network presented in Figure 1 to learn the behavior of the nonlinear process associated with the Laviron treatment. The possibility of learning the behavior of a nonlinear process with a linear neural network is due to its high complexity, which implies a high computational power able to compensate for the nonlinearity.
The neural network was trained on the data in Table 1 using the Levenberg–Marquardt learning algorithm, with the maximum number of training epochs set to 5000 and the target error set to 10−10 V (the target error is given by a predefined formula in MATLAB [22]). The final result was obtained after 29 training epochs. The error was on the order of 10−6 V (the algorithm stopped early due to the constraints imposed on the evolution of the error gradient from one epoch to the next), and the maximum error over all pairs of experimental and simulated samples was 0.02 V. These very small errors prove the validity of the determined neural model.
The function implemented by the proposed neural network can be represented in matrix form, and its mathematical expression can be determined from the equation describing the operation of each neuron in its structure (presented in Figure 3):
$$ y_{i;j} = \left( \sum_{k=1}^{n} (w_{i;i-1})_{j;k} \, y_{i-1;k} \right) + b_{i;j} \quad (1) $$
In Equation (1), yi;j is the output signal of neuron j from layer i, n is the number of input signals of the neuron (the dimension of layer (i − 1)), and bi;j is the bias value associated with the neuron. The separator ";" is used only to delimit two consecutive numerical indices; the brackets around the w element indicate that the indices "j;k" are associated with the weights.
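Equation (1), written for a whole layer at once, is simply a matrix–vector product plus a bias vector. A small worked sketch (the numeric values are illustrative):

```python
import numpy as np

def layer_output(W, y_prev, b):
    """Equation (1) for an entire layer: y_i = W @ y_{i-1} + b, where row j of W
    holds the weights (w_{i;i-1})_{j;k} and b holds the bias values b_{i;j}."""
    return W @ y_prev + b

# Illustrative 2-neuron layer fed by a 2-neuron layer:
W = np.array([[1.0, 2.0],
              [0.5, -1.0]])
y_prev = np.array([3.0, 4.0])
b = np.array([0.1, 0.2])
# Row 1: 1*3 + 2*4 + 0.1 = 11.1; row 2: 0.5*3 - 1*4 + 0.2 = -2.3
print(layer_output(W, y_prev, b))
```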
The input and the output vectors are presented in Equations (2) and (3):
$$ U = \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} \quad (2) $$
$$ Y = \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} \quad (3) $$
The weight matrix which makes the connection between the layers (i) and (i + 1) of the neural network is:
$$ W_{i\,(m_i \times n_i)} = \begin{bmatrix} (w_{i+1;i})_{1;1} & (w_{i+1;i})_{1;2} & \cdots & (w_{i+1;i})_{1;n_i} \\ (w_{i+1;i})_{2;1} & (w_{i+1;i})_{2;2} & \cdots & (w_{i+1;i})_{2;n_i} \\ \vdots & \vdots & \ddots & \vdots \\ (w_{i+1;i})_{m_i;1} & (w_{i+1;i})_{m_i;2} & \cdots & (w_{i+1;i})_{m_i;n_i} \end{bmatrix} \quad (4) $$
In Equation (4), ni is the dimension of the ith layer and mi is the dimension of the (i + 1)th layer. When the bias values are introduced in the first column, Wi becomes the extended matrix W′i:
$$ W'_{i\,[m_i \times (n_i+1)]} = \begin{bmatrix} b_{i+1;1} & (w_{i+1;i})_{1;1} & \cdots & (w_{i+1;i})_{1;n_i} \\ b_{i+1;2} & (w_{i+1;i})_{2;1} & \cdots & (w_{i+1;i})_{2;n_i} \\ \vdots & \vdots & \ddots & \vdots \\ b_{i+1;m_i} & (w_{i+1;i})_{m_i;1} & \cdots & (w_{i+1;i})_{m_i;n_i} \end{bmatrix} \quad (5) $$
It can be remarked that Equation (5) is the extended form of Equation (4): the number of columns of Wi from Equation (4) is increased by 1 (to ni + 1) in Equation (5). Equations (4) and (5) are also valid for the input weight matrix W0, with i = 0. The output column vector containing the output signals of layer i is given by:
$$ Y_i = \begin{bmatrix} 1 \\ y_{i;1} \\ y_{i;2} \\ \vdots \\ y_{i;n_i} \end{bmatrix} \quad (6) $$
(a column vector of dimension ni + 1)
The constant 1 on the first line of Yi serves to include the bias values in the subsequent computations needed to obtain the equation of the output vector Y. To adapt the form of W′i from Equation (5) to the dimension of Yi from Equation (6), the following matrix is considered:
$$ W''_{i\,[(m_i+1) \times (n_i+1)]} = \begin{bmatrix} 1 & 0 & 0 & \cdots & 0 \\ b_{i+1;1} & (w_{i+1;i})_{1;1} & (w_{i+1;i})_{1;2} & \cdots & (w_{i+1;i})_{1;n_i} \\ b_{i+1;2} & (w_{i+1;i})_{2;1} & (w_{i+1;i})_{2;2} & \cdots & (w_{i+1;i})_{2;n_i} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ b_{i+1;m_i} & (w_{i+1;i})_{m_i;1} & (w_{i+1;i})_{m_i;2} & \cdots & (w_{i+1;i})_{m_i;n_i} \end{bmatrix} \quad (7) $$
Consequently, the output vector from the layer (i + 1) is:
$$ Y_{i+1} = W''_i \, Y_i \quad (8) $$
In this context, the output vector of the neural network proposed in Figure 1 is given by:
$$ Y = A \left[ \left( \prod_{i=r}^{0} W''_i \right) U' \right] \quad (9) $$
In Equation (9), r = 9 is the number of neural network layers without the output layer, and U′ is the adapted (extended) form of U given in Equation (10):
$$ U' = \begin{bmatrix} 1 \\ u_1 \\ u_2 \end{bmatrix} \quad (10) $$
In addition, A is a dimension-adapting matrix of the form
$$ A = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} $$
The matrix A converts the extended form of Y, namely $[1 \;\; y_1 \;\; y_2]^{T}$, generated by the term $\left( \prod_{i=r}^{0} W''_i \right) U'$, into the final form from Equation (3).
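The bias-absorbing construction of Equations (5)–(10) can be checked numerically: augmenting each weight matrix with its bias column plus a leading row [1, 0, …, 0], and prepending the constant 1 to the input vector, collapses the whole network into a single matrix product. A sketch for an illustrative 2–3–2 network:

```python
import numpy as np

def augment(W, b):
    """Build W''_i of Equation (7): a leading row [1, 0, ..., 0], the bias
    values in the first column, and the original weights W to their right."""
    m, n = W.shape
    top = np.zeros((1, n + 1))
    top[0, 0] = 1.0
    return np.vstack([top, np.hstack([b.reshape(-1, 1), W])])

rng = np.random.default_rng(1)
W0, b0 = rng.standard_normal((3, 2)), rng.standard_normal(3)
W1, b1 = rng.standard_normal((2, 3)), rng.standard_normal(2)

u = np.array([0.4, 7.0])            # input U = [u1, u2]
U_ext = np.concatenate([[1.0], u])  # extended input U' of Equation (10)

# Plain layer-by-layer evaluation ...
y_plain = W1 @ (W0 @ u + b0) + b1
# ... equals the single product of augmented matrices (Equation (9)),
# after dropping the leading constant 1 (the role played by matrix A).
y_aug = (augment(W1, b1) @ augment(W0, b0) @ U_ext)[1:]
print(np.allclose(y_plain, y_aug))  # True
```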

3. Results and Discussions

Using the proposed neural model, the process associated with the Laviron treatment can be numerically simulated by implementing the previously presented equations in MATLAB. The fact that we modeled a nonlinear process using a complex linear model has an important practical advantage: a simpler implementation of the process model on a microcontroller, for use as a reference model in automatic control applications.
The neural network response to the input signals in Table 1 is presented in Figure 4 (the variation of the y1 signal) and Figure 5 (the variation of the y2 signal).
Figure 4 and Figure 5 show that the absolute values of the signals y1 and y2 increase as the pH value increases. The inconsistencies occurring at small values of the v signal are due to potential-measurement errors, which are more visible at small potential values.
In Figure 6, Figure 7 and Figure 8, comparative graphs between the experimental responses and the simulated ones (obtained by simulating the proposed neural model) are presented (Figure 6 for pH = 5, Figure 7 for pH = 7, and Figure 8 for pH = 9). These figures show the high accuracy and validity of the proposed model: in all cases, the simulated curve is almost completely superposed on the experimental curve. In Figure 6 and Figure 8, for high values of the v signal, the simulated and experimental responses cannot be visually distinguished on the graph.
The mean square error between the experimental and the simulated responses can be computed using the equation:
$$ MSE = \frac{1}{N} \sum_{i=1}^{N} \left[ y_{exp}(i) - y_{sim}(i) \right]^2 \quad (11) $$
In Equation (11), MSE signifies the Mean Square Error, N is the number of considered samples, yexp refers to the experimental data and ysim refers to the data obtained through simulation (the output of the proposed neural network). The obtained errors between the pairs of the previously presented curves are summarized in Table 2. Table 2 also presents the global mean square error.
The model validity is proved again due to the very small values of the errors in Table 2. The term “GLOBAL” refers to all data associated to all three considered pH values (5, 7 and 9). In addition, the last line in Table 2 is associated to both output signals y1 and y2 (the mean square error was computed for all experimental samples).
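Equation (11) translates directly into code; a minimal sketch with illustrative sample values:

```python
import numpy as np

def mse(y_exp, y_sim):
    """Equation (11): MSE = (1/N) * sum_i [y_exp(i) - y_sim(i)]^2."""
    y_exp, y_sim = np.asarray(y_exp), np.asarray(y_sim)
    return float(np.mean((y_exp - y_sim) ** 2))

# Illustrative experimental vs. simulated samples (not data from Table 1):
print(mse([0.1, 0.2, 0.3], [0.1, 0.25, 0.28]))  # (0 + 0.0025 + 0.0004) / 3
```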
An important advantage of neural networks is that they are universal approximators. In this context, the obtained model can be simulated for any value of the v signal within the domain of the training data and for any number of samples of the v signal. Considering as an example the case pH = 9, Figure 9 presents the comparison between the y1 response in Figure 8 and the y1 response obtained when the sampling step of the v signal is Δv = 0.0008 V/s. The two curves in Figure 9 are almost superposed, confirming the universal-approximator property of the determined neural model.
The proposed neural model preserves its validity when the v signal is increased within some limits, as presented in Figure 10. When the v signal was increased by more than 20%, the model became invalid (shown in the graph as the beginning of the rightward flexion of the red response for log10(v) close to 0).
A disadvantage of the proposed method is the model's inconsistency for pH values different from those used in training. Figure 11 presents the comparative graph between the network responses (for the y2 signal) when the pH reference value is 5; the simulations are made for pH values of 4.5, 4.8, 5, 5.3, and 5.5. Figure 11 shows that, over some particular domains (between −0.5 and 0 of log10(v)), the model validity is not verified. However, the model remains valid for all other values of the v signal, and the neural model could be improved if supplementary measurements were available, i.e., measurements associated with pH values other than those in Table 1 (the accuracy can be improved by performing a larger number of experiments for two or three other pH values and processing the resulting experimental data).
Another important research target is to determine the parameters of the nonlinear Laviron equation, which has the following form:
$$ y_1 = \frac{2.3RT}{(1-\alpha)nF} \lg\!\left( \frac{(1-\alpha)nF}{RT\,k_S} \right) + \frac{2.3RT}{(1-\alpha)nF} \lg(u_1) \quad (12) $$
In Equation (12), the temperature is T = 298 K, the universal gas constant is R = 8.314 J/(K·mol), the number of transferred electrons is n = 1, and the Faraday constant is F = 96500 C/mol. The parameters of the Laviron equation are α, the electron-transfer coefficient (a dimensionless parameter with values in the interval [0, 1]), and kS, the heterogeneous electron-transfer rate constant (expressed in s−1).
Rewriting Equation (12), kS is given by:
$$ k_S = \frac{(1-\alpha)nF}{RT} \, 10^{-\left( \frac{y_1(1-\alpha)nF}{2.3RT} - \lg(u_1) \right)} \quad (13) $$
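Equations (12) and (13) can be evaluated directly with the constants given in the text (T = 298 K, R = 8.314 J/(K·mol), n = 1, F = 96500 C/mol). A sketch verifying that Equation (13) is indeed the algebraic inverse of Equation (12); the parameter values in the round-trip check are illustrative:

```python
import math

R, T, F, n = 8.314, 298.0, 96500.0, 1

def laviron_y1(alpha, ks, v):
    """Equation (12): y1 = (2.3RT/((1-a)nF)) [lg((1-a)nF/(RT kS)) + lg(v)]."""
    c = 2.3 * R * T / ((1 - alpha) * n * F)
    return c * math.log10((1 - alpha) * n * F / (R * T * ks)) + c * math.log10(v)

def laviron_ks(alpha, y1, v):
    """Equation (13): the rearrangement of Equation (12) solved for kS."""
    a = (1 - alpha) * n * F
    return a / (R * T) * 10 ** (-(y1 * a / (2.3 * R * T) - math.log10(v)))

# Round trip: recovering kS from y1 reproduces the original value
# (alpha, ks, v below are illustrative, not the paper's fitted results).
alpha, ks, v = 0.92, 0.25, 0.5
y1 = laviron_y1(alpha, ks, v)
print(abs(laviron_ks(alpha, y1, v) - ks) < 1e-9)  # True
```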
The procedure proposed to determine the values of α and kS is:
(1)
The sampling step for v signal is fixed to the value Δv = 0.0008 V/s.
(2)
The value of the pH is fixed at u2 = 9.
(3)
The neural network proposed as the solution is simulated for the value of u2 set in Stage (2) and for u1 ∈ [0; 0.8] V/s with the sampling step imposed in Stage (1), resulting in the corresponding values of the y1 and y2 signals.
(4)
The α parameter is initialized with the value 0 (α = 0).
(5)
The corresponding value of the kS parameter is computed.
(6)
The alternative value of y1 signal is computed using the Laviron formula from Equation (12).
(7)
The mean square error between the y1 signal generated by the proposed neural network output and the y1 signal generated by the Laviron equation is computed.
(8)
The value of the α parameter is increased by Δα = 0.01.
(9)
Stages (5)–(8) are repeated until α = 0.99 (for α = 1, the Laviron equation is not defined).
(10)
The solution pair for the α and kS parameters is chosen corresponding to the α parameter for which the value of the mean square error resulted the smallest.
(11)
For the resulted solution, the mean square error between the y2 signal generated by the proposed neural network and the −y1 signal generated by the Laviron equation is computed, the obtained value is analyzed, and the two responses are graphically analyzed.
(12)
Stages (2)–(11) are repeated for u2 = 7 and for u2 = 5 (in both cases, in Stage (3), u1 ∈ [0; 1.28] V/s).
The value Δv = 0.0008 V/s is 1000 times smaller than the maximum experimental scan-rate value (0.8 V/s for pH = 9). Owing to this fine discretization, Δv is considered small enough for precise data processing. For higher precision, the value of Δv from Stage (1) can be made even smaller (due to the universal-approximator property of the proposed neural network).
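The α search in Stages (4)–(10) is a simple grid search over α ∈ {0, 0.01, …, 0.99}. A minimal sketch, with a stand-in `mse_for_alpha` function in place of the actual neural-network-vs-Laviron error computation:

```python
def best_alpha(mse_for_alpha, step=0.01):
    """Scan alpha over [0, 0.99] in steps of 0.01 (alpha = 1 is excluded
    because the Laviron equation is not defined there) and return the
    alpha giving the smallest mean square error, as in Stages (4)-(10)."""
    candidates = [round(i * step, 2) for i in range(100)]  # 0.00 .. 0.99
    return min(candidates, key=mse_for_alpha)

# Hypothetical error curve with its minimum at alpha = 0.92 (illustrative only;
# the real curve is computed from the neural model and Equation (12)).
print(best_alpha(lambda a: (a - 0.92) ** 2))  # 0.92
```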
Applying the above algorithm for u2 = 9, the evolution of the mean square error in relation to α has the form presented in Figure 12.
The error value for each α was computed over 1000 samples (the value given by the ratio 0.8 (V/s)/Δv). The minimum value of the mean square error is MSE4 = 0.0641 × 10−14 V, corresponding to the optimum value αopt = 0.92. Figure 13 presents, for each α, the evolution of the maximum error over all samples. It can be remarked that the minimum of the resulting curve, E = 0.0208 × 10−14 V, was also obtained for αopt = 0.92.
Consequently, the solution obtained for the α parameter is αopt = 0.92. Analyzing Equations (12) and (13), if α is set to αopt, the value of the kS parameter changes as the u1 signal varies. Using Equation (13), the evolution of kS in relation to u1 (the v signal) can be represented graphically, as in Figure 14.
The variable value of the kS parameter is introduced so that Equation (13) is satisfied over the entire range of values of the v signal. For very high values of v (in the neighborhood of v = 0.8 V/s), kS becomes steady at kS = 0.2517 s−1. The small monotonicity variations of the kS(v) function are due to both measurement errors and physical disturbances.
To prove the validity of the determined α and kS values (in the case of kS, in practice the function kS(v)), the response of the proposed neural network is compared with the response generated by simulating Equation (12). For u2 = 9, the comparative graph of the y1 signal generated by the two mathematical entities is presented in Figure 15: the y1 signal generated by simulating the Laviron equation is displayed as the blue continuous line, and the y1 signal generated by simulating the proposed neural network as the red dashed line with stars.
In Figure 15, it can be seen that the two responses are practically superposed (MSE4 has an insignificant value), which proves the validity of α and kS parameters.
In Figure 16, the simulation associated with Stage (11) is presented. The red line represents the y2 signal generated by the proposed neural network, and the blue line represents the signal y2 = −y1 (where y1 is the signal generated by simulating the Laviron equation). The mean square error between the two curves in Figure 16, computed over 1000 samples, is MSE5 = 0.9799 V. The two curves are close over the first half of the lg(v) domain and tend to intersect near the end of the domain (the right end, which contains the maximum value of lg(v)).
These aspects prove the correctness of α and kS parameters. Bigger errors can be seen for part of the second half of lg(v) (due to the physical properties of the material), which explains the much higher value of MSE5 in comparison with MSE4.
When the procedure was repeated for u2 = 7 and u2 = 5, according to Stage (12), the following optimal values of the α parameter resulted: α1opt = 0.87 and α2opt = 0.9. The corresponding evolutions of the function kS(v) are presented in Figure 17 and Figure 18.
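The selection of the optimal α described above amounts to a one-dimensional grid search minimizing the mismatch between the neural-model response and the simulated Laviron response (cf. Figures 12 and 13). A sketch, where `laviron_response` is a hypothetical stand-in for the simulation of Equation (12) and the logarithmic response shape is assumed for illustration:

```python
import numpy as np

def best_alpha(alphas, v, y_neural, laviron_response):
    """Return the alpha value minimizing the mean square error between the
    neural-model response and the simulated Laviron response, plus the
    per-alpha error curve (the quantity plotted against alpha)."""
    errors = [float(np.mean((laviron_response(v, a) - y_neural) ** 2))
              for a in alphas]
    return alphas[int(np.argmin(errors))], errors

# Synthetic demonstration: the "true" response below uses alpha = 0.9
v = np.linspace(0.01, 0.8, 50)
model = lambda v, a: a * np.log10(v)     # hypothetical response shape
y_neural = model(v, 0.9)
grid = np.arange(0.5, 1.0, 0.01)
alpha_opt, errs = best_alpha(grid, v, y_neural, model)
```

The search recovers the α value that generated the synthetic data; on real data the minimum of the error curve plays the role of αopt.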
Figure 17 and Figure 18 show that the two curves are steady for values near the maximum value of v (1.28 V/s).
The values obtained for the nonlinear Laviron equation parameters are summarized in Table 3.
The variation of kS in relation to v signal represents an original element proposed by the authors.

4. Conclusions

In this paper, an original method for modeling the considered nonlinear chemical process, based on neural networks, is proposed. The specific contributions of this research are the following:
  • The method for determining the neural network structure (adapted to the use of a very small training data set and to learning the behavior of a MIMO system) is presented.
  • A procedure to determine the values of the α and kS constants was developed.
  • The kS variation was proven through numerical simulation.
  • The procedure for determining the dependency kS(v) was designed.
  • The process was modeled as a nonlinear one over the entire domains of the input signals (in contrast with the large majority of methods presented in the literature, which are based on linearizing the process near a steady state working point).
The proposed method, using a fully connected feed-forward neural network (which contains a substantial number of neurons in its structure), generated high modeling accuracy because an optimized learning algorithm was applied to determine the neural network weights. In addition, by increasing the complexity of the neural network (according to the method proposed in the manuscript) through a larger number of layers and neurons, the behavior of the approached electrochemical process was learned even in the hypothesis of having few experimental samples of the second input signal (the pH of the phosphate buffer solution). As shown in Figure 6, Figure 7 and Figure 8, the model validity was proved by the almost superposed curves.
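The training setup can be illustrated with a minimal fully connected feed-forward network trained by full-batch gradient descent on a toy two-input, two-output mapping. This is a sketch only: the paper's actual architecture, layer count, and optimized learning algorithm are not reproduced here, and the synthetic targets merely stand in for the (v, pH) → (y1, y2) experimental data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a 2-input, 2-output data set;
# 40 samples, matching the global per-output sample count in Table 2.
X = rng.uniform(0.0, 1.0, size=(40, 2))
Y = np.column_stack([X[:, 0] * X[:, 1], X[:, 0] + X[:, 1]])

# One tanh hidden layer feeding two linear output neurons.
W1 = rng.normal(0.0, 0.5, size=(2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, size=(16, 2)); b2 = np.zeros(2)

lr = 0.1
for _ in range(10000):
    H = np.tanh(X @ W1 + b1)        # hidden activations
    Yhat = H @ W2 + b2              # linear outputs
    E = Yhat - Y                    # output error
    # Full-batch backpropagation of the mean-square-error gradient.
    gW2 = H.T @ E / len(X); gb2 = E.mean(axis=0)
    dH = (E @ W2.T) * (1.0 - H ** 2)
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

train_mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2))
```

Even this small network drives the training MSE to a low value on the toy data; the paper's point is that a richer structure with an optimized learning algorithm achieves the same on the real MIMO data despite the very small sample count.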
The computational effort needed to train the proposed neural network architecture was considerable, as the learning algorithm was run for a complex neural structure. However, after training, by implementing the mathematical equations of the neural network, the proposed model could be simulated with practically insignificant effort, as many times as necessary (for different values of the input signals). Simulating the trained model produced results in a few milliseconds. Consequently, taking all these aspects into account, we consider the overall effort reasonable.
The proposed model was determined based on experimental data covering a large variation domain of the scan rate (v) (both small and high values). Our aim was to propose a general model that is valid over the entire v variation domain. In this context, to superpose with high accuracy the responses of the proposed model and of the analytical model, the only degree of freedom available was the variation of kS (the heterogeneous electron transfer rate constant) in relation to v. However, as shown in Figure 14, Figure 17 and Figure 18, the kS variation occurs only for low and medium values of v. In the domain of high values of v, kS does not vary significantly, tending to a constant value (the steady state values of kS for the three considered pH values are summarized in Table 3). Consequently, in a model linearized in the domain of high values of v, the heterogeneous electron transfer rate constant is properly a constant.
The process behavior can also be modeled using linear approximations (through linearization near particular values of the scan rate). The resulting linearized models are accurate only for v values near the particular value considered for linearization, and for each linearized model the heterogeneous electron transfer rate constant kS is properly a constant. To obtain a more accurate mathematical model, a nonlinear model is proposed, which results as a family of linearized models covering the entire domain of v values. In this context, in passing from one linearized model to the next, the kS(v) variation occurs.
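The family-of-linearized-models view can be sketched by fitting Epa − E0′ against ln(v) on short local windows; each window yields its own intercept, and hence its own local kS. This is a hedged illustration assuming the classical Laviron anodic relation with n = 1, T = 298.15 K, the α = 0.92 value identified for pH = 9, an arbitrary 3-point window, and the pH = 9 data points from Table 1.

```python
import numpy as np

R, F, T, n = 8.314, 96485.0, 298.15, 1   # assumed constants
alpha = 0.92                             # value identified for pH = 9
m = (1.0 - alpha) * n * F / (R * T)

# pH = 9 data from Table 1: scan rate (V/s) and Epa - E0' (V)
v = np.array([0.16, 0.20, 0.32, 0.40, 0.64, 0.80])
d_epa = np.array([0.3090, 0.3580, 0.4730, 0.5430, 0.6590, 0.7330])

local_ks = []
for i in range(len(v) - 2):
    lv, y = np.log(v[i:i + 3]), d_epa[i:i + 3]   # 3-point local window
    slope, intercept = np.polyfit(lv, y, 1)
    # Classical relation: Epa - E0' = (1/m) * ln(m*v/ks), so the
    # intercept at ln(v) = 0 gives the local ks = m * exp(-m * intercept).
    local_ks.append(float(m * np.exp(-m * intercept)))
```

Each window behaves like one linearized model with its own constant kS; moving the window along v traces out exactly the kind of kS(v) dependency discussed above.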
We consider that the neural model can be improved using data obtained from graphite electrodes coated with the phenothiazine derivative and tested in supporting electrolytes with additional pH values (future experiments at two or three other pH values). These data would be useful for applying these electrodes as sensors and biosensors working at different pH values.

Author Contributions

Conceptualization, V.M., M.-L.U., D.G. and C.V.; Methodology, V.M., M.-L.U. and D.G.; Software, V.M.; Validation, V.M. and M.-L.U.; Formal Analysis, V.M., M.-L.U., D.G. and C.V.; Investigation, V.M., M.-L.U., D.G. and C.V.; Resources, D.G. and C.V.; Data Curation, V.M., M.-L.U., D.G. and C.V.; Writing—Original Draft Preparation, V.M.; and Writing—Review and Editing, M.-L.U.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gligor, D.; Varodi, C.; Muresan, L. Graphite electrode modified with a new phenothiazine derivative and with carbon nanotubes for NADH electrocatalytic oxidation. Chem. Biochem. Eng. Q. 2010, 24, 159–166.
  2. Meredith, M.T.; Giroud, F.; Minteer, S.D. Azine/hydrogel/nanotube composite-modified electrodes for NADH catalysis and enzyme immobilization. Electrochim. Acta 2012, 72, 207–214.
  3. Hasebe, Y.; Wang, Y.; Fukuoka, K. Electropolymerized poly(Toluidine Blue)-modified carbon felt for highly sensitive amperometric determination of NADH in flow injection analysis. J. Environ. Sci. 2011, 23, 1050–1056.
  4. Doumèche, B.; Blum, L.J. NADH oxidation on screen-printed electrode modified with a new phenothiazine diazonium salt. Electrochem. Commun. 2010, 12, 1398–1402.
  5. Laviron, E. General expression of the linear potential sweep voltammogram in the case of diffusionless electrochemical systems. J. Electroanal. Chem. 1979, 101, 19–28.
  6. Golnaraghi, F.; Kuo, B.C. Automatic Control Systems, 9th ed.; Wiley: Hoboken, NJ, USA, 2009.
  7. Love, J. Process Automation Handbook; Springer: London, UK, 2007.
  8. Khalil, H.K. Nonlinear Systems, 3rd ed.; Prentice Hall: Upper Saddle River, NJ, USA, 2001.
  9. Coloşi, T.; Abrudean, M.; Unguresan, M.L.; Muresan, V. Numerical Simulation of Distributed Parameter Processes; Springer Int.: Basel, Switzerland, 2013.
  10. Li, H.X.; Qi, C. Spatio-Temporal Modeling of Nonlinear Distributed Parameter Systems: A Time/Space Separation Based Approach, 1st ed.; Springer: Dordrecht, The Netherlands, 2011.
  11. Curtain, R.F.; Morris, K.A. Transfer functions of distributed parameter systems: A tutorial. Automatica 2009, 45, 1101–1116.
  12. Smyshlyaev, A.; Krstic, M. On control design for PDEs with space-dependent diffusivity and time-dependent reactivity. Automatica 2005, 41, 1601–1608.
  13. Haykin, S. Neural Networks and Learning Machines, 3rd ed.; Pearson Int.: London, UK, 2009.
  14. Vălean, H. Neural network for system identification and modelling. In Proceedings of the Automatic Control and Testing Conference–AQTR, Cluj-Napoca, Romania, 23–24 May 1996; pp. 263–268.
  15. Borges, R.V.; Garcez, A.D.A.; Lamb, L.C. Learning and representing temporal knowledge in recurrent networks. IEEE Trans. Neural Netw. 2011, 22, 2409–2421.
  16. Monje, C.A.; Chen, Y.Q.; Vinagre, B.M.; Xue, D.; Feliu, V. Fractional-Order Systems and Controls; Springer: London, UK, 2010.
  17. Mureşan, V.; Abrudean, M. Temperature modelling and simulation in the furnace with rotary hearth. In Proceedings of the IEEE AQTR–17th ed., Cluj-Napoca, Romania, 28–30 May 2010; pp. 147–152.
  18. Abrudean, M. Systems Theory and Automatic Control; Mediamira Publishing House: Cluj-Napoca, Romania, 1998.
  19. Schmidhuber, J. Deep learning in neural networks: An overview. Neural Netw. 2015, 61, 85–117.
  20. Samide, A.; Stoean, R.; Stoean, C.; Tutunaru, B.; Grecu, R.; Cioateră, N. Investigation of polymer coatings formed by polyvinyl alcohol and silver nanoparticles on copper surface in acid medium by means of deep convolutional neural networks. Coatings 2019, 9, 105.
  21. Cristea, C.; Cormos, G.; Gligor, D.; Filip, I.; Muresan, L.; Popescu, I.C. Electrochemical characterization of bis-(10Hphenothiazin-3-yl)-methane derivatives obtained by microwave assisted organic synthesis. J. New Mater. Electrochem. Syst. 2009, 12, 233–238.
  22. User Guide. Matlab (R2018a). Available online: https://www.mathworks.com/help/matlab/release-notes-R2018a.html#responsive_offcanvas (accessed on 3 July 2019).
Figure 1. The proposed architecture of the fully connected feed-forward neural network.
Figure 2. The last two layers of the neural network.
Figure 3. The structure of the linear neuron.
Figure 4. The variation of y1 signal.
Figure 5. The variation of y2 signal.
Figure 6. The comparative graph between the experimental and the simulated response for pH = 5.
Figure 7. The comparative graph between the experimental and the simulated response for pH = 7.
Figure 8. The comparative graph between the experimental and the simulated response for pH = 9.
Figure 9. The comparative graph between the simulated responses, for pH = 9 and for different values of the sampling step Δv.
Figure 10. The comparative graph between the simulated responses, for pH = 9, at the increase of the v signal domain.
Figure 11. The comparative graph between the simulated responses for the reference pH equal to 5, at the pH value variation.
Figure 12. The evolution of the mean square error in relation to α.
Figure 13. The evolution of the maximum value of the error (for each α) in relation to α.
Figure 14. The evolution of the kS parameter (heterogeneous electron transfer rate constant) in relation to v (scan rate).
Figure 15. The comparative graph between the evolutions of y1 signal generated by the two mathematical entities.
Figure 16. The comparative graph between the evolutions of y2 signal generated by the two mathematical entities.
Figure 17. The evolution of the kS parameter (heterogeneous electron transfer rate constant) in relation to v (scan rate), if u2 = 7 (corresponding to the obtained α1opt).
Figure 18. The evolution of the kS parameter (heterogeneous electron transfer rate constant) in relation to v (scan rate), if u2 = 5 (corresponding to the obtained α2opt).
Table 1. Summary of the experimental data: anodic and cathodic peak potentials relative to the formal potential (Epa − E0′ and Epc − E0′) for each pH.

| v (V/s) | Epa − E0′ (V), pH = 5 | Epc − E0′ (V), pH = 5 | Epa − E0′ (V), pH = 7 | Epc − E0′ (V), pH = 7 | Epa − E0′ (V), pH = 9 | Epc − E0′ (V), pH = 9 |
|---------|----------------------|----------------------|----------------------|----------------------|----------------------|----------------------|
| 0.005 | 0.0350 | −0.0350 | – | – | – | – |
| 0.01 | 0.0680 | −0.0600 | 0.0430 | −0.0600 | 0.0840 | −0.0650 |
| 0.02 | 0.0800 | −0.0880 | 0.0760 | −0.0760 | 0.1040 | −0.1020 |
| 0.04 | 0.1180 | −0.1170 | 0.1130 | −0.1130 | 0.1410 | −0.1470 |
| 0.05 | 0.1220 | −0.1370 | 0.1210 | −0.1460 | 0.1580 | −0.1550 |
| 0.08 | 0.1790 | −0.1710 | 0.1870 | −0.1670 | 0.2070 | −0.2090 |
| 0.10 | 0.2120 | −0.1830 | 0.1910 | −0.1880 | 0.2240 | −0.2420 |
| 0.16 | 0.2620 | −0.2240 | 0.2620 | −0.2490 | 0.3090 | −0.2970 |
| 0.20 | 0.3070 | −0.2610 | 0.3150 | −0.2780 | 0.3580 | −0.3380 |
| 0.32 | 0.4060 | −0.3150 | 0.4350 | −0.3320 | 0.4730 | −0.3920 |
| 0.40 | 0.4510 | −0.3400 | 0.4760 | −0.3810 | 0.5430 | −0.4170 |
| 0.64 | 0.5580 | −0.4720 | 0.6390 | −0.4780 | 0.6590 | −0.5280 |
| 0.80 | 0.6040 | −0.4960 | 0.6760 | −0.5150 | 0.7330 | −0.7090 |
| 1.28 | 0.6980 | −0.5290 | 0.7620 | −0.5720 | – | – |
| 1.60 | – | – | 0.7860 | −0.6120 | – | – |
Table 2. Summary of the computed mean square error values (N = number of samples).

| pH | N | MSE |
|----|---|-----|
| 5 | 14 | 0.0051 V (for y1) |
| 5 | 14 | 0.0075 V (for y2) |
| 7 | 14 | 0.0106 V (for y1) |
| 7 | 14 | 0.0086 V (for y2) |
| 9 | 12 | 0.0038 V (for y1) |
| 9 | 12 | 0.0054 V (for y2) |
| Global | 40 | 0.0075 V (for y1) |
| Global | 40 | 0.0075 V (for y2) |
| Global | 80 | 0.0075 V (for both y1 and y2) |
Table 3. Summary of the obtained results for the Laviron equation parameters.

| pH | Optimum value of α | Steady-state value of kS (s−1) |
|----|--------------------|--------------------------------|
| 5 | 0.9 | 0.3279 |
| 7 | 0.87 | 0.136 |
| 9 | 0.92 | 0.2517 |
