Article

Two-Level Scheme for Identification of the Relaxation Time Spectrum Using Stress Relaxation Test Data with the Optimal Choice of the Time-Scale Factor

Department of Technology Fundamentals, Faculty of Production Engineering, University of Life Sciences in Lublin, 20-612 Lublin, Poland
Materials 2023, 16(9), 3565; https://doi.org/10.3390/ma16093565
Submission received: 10 March 2023 / Revised: 23 April 2023 / Accepted: 2 May 2023 / Published: 6 May 2023
(This article belongs to the Special Issue Modelling of Viscoelastic Materials and Mechanical Behavior)

Abstract
The viscoelastic relaxation spectrum is vital for constitutive models and for insight into the mechanical properties of materials, since, from the relaxation spectrum, other material functions used to describe rheological properties can be uniquely determined. The spectrum is not directly accessible via measurement and must be recovered from relaxation stress or oscillatory shear data. This paper deals with the problem of the recovery of the relaxation time spectrum of linear viscoelastic material from discrete-time noise-corrupted measurements of the relaxation modulus obtained in the stress relaxation test. A two-level identification scheme is proposed. In the lower level, the regularized least-squares identification combined with generalized cross-validation is used to find the optimal model with an arbitrary time-scale factor. Next, in the upper level, the optimal time-scale factor is determined to provide the best fit of the relaxation modulus to the experimental data. The relaxation time spectrum is approximated by a finite series of power–exponential basis functions. The related model of the relaxation modulus is proved to be given by compact analytical formulas as the products of a power of time and the modified Bessel functions of the second kind. The proposed approach merges the technique of an expansion of a function into a series of independent basis functions with the least-squares regularized identification and the optimal choice of the time-scale factor. Optimality conditions, approximation error, convergence, noise robustness and model smoothness are studied analytically. Applicability ranges are numerically examined. These studies have proved that using the developed model and algorithm, it is possible to determine the relaxation spectrum model for a wide class of viscoelastic materials. The model is smoothed and noise robust; small model errors are obtained for the optimal time-scale factors.
The complete scheme of the hierarchical computations is outlined, which can be easily implemented in available computing environments.

1. Introduction

Numerous mathematical rheological models are used to describe the mechanical properties of viscoelastic materials [1,2,3]. The Maxwell and Kelvin–Voigt models are probably the best-known rheological models. However, deeper insight into the complex behavior of viscoelastic materials is provided by the relaxation spectrum [2,4,5]. The relaxation spectrum is vital for constitutive models and for insight into the properties of a material, since, from the relaxation spectrum, other material functions used to describe rheological properties can be uniquely determined [4,6,7]. It is commonly used to describe, analyze, compare and improve the mechanical properties of polymers [4,8,9], concrete [10], asphalt [11], rubber [12], wood [13], glass [14], dough [15] and polymeric textile materials [16], and finds applications in geophysics [17] and for biological materials [18]. The spectrum is not directly measurable; therefore, it must be recovered from oscillatory shear or stress relaxation data [2,5]. A number of different methods and algorithms have been proposed during the last five decades for the recovery of the relaxation spectrum of a viscoelastic material from oscillatory shear data. These works, using different models, approaches and computational techniques, have included contributions by Baumgaertel and Winter [19], Honerkamp and Weese [20], Malkin [21], Malkin et al. [22], Stadler and Bailly [23], Davis and Goulding [24], Davis et al. [25] and Cho [26]. These and many other studies have opened new directions of research on the identification of discrete and continuous relaxation spectra from dynamic moduli data, a line of research that continues to this day [8,11,27].
However, a classical way of studying viscoelasticity is also the stress relaxation test, in which the time-dependent shear stress is studied for a step increase in strain [1,2,3,28]. For some materials, for example, highly hydrated biological plants, the stress relaxation test is much easier to perform and more suitable than dynamic oscillatory shear tests [3,29]. There are only a few papers that deal with spectrum determination from time measurements of the relaxation modulus, and some of them address only specific materials. The first works date from the turn of the 1940s and 1950s. Alfrey and Doty [30] proposed a simple differential model, being, in fact, the first-order Post–Widder formula [31] for the inverse Laplace transform; for details, see the newer edition [32]. Ter Haar [33] approximated the spectrum of relaxation frequencies by the modulus multiplied by time, evaluated at the inverse of the relaxation frequency, which can be thought of as the Post–Widder inversion formula of zero order. After many years, Bažant and Yunping [34] and Goangseup and Bažant [35] introduced a two-stage approach of approximating the stress (or retardation) data via multiply differentiable models of the relaxation modulus (creep compliance) and, next, applying the Post–Widder formula to determine the related relaxation (retardation) spectrum model. The effectiveness of this approach depends, among other aspects, on the function applied to approximate the relaxation modulus. In [34], a logarithmic–exponential model of the relaxation modulus is proposed, for which the authors state the third-order Post–Widder approximation to be satisfactory.
The relaxation spectrum modeling based on the known pairs of Laplace transforms was initiated by Macey [36], who described the relaxation modulus of viscoelastic ceramic material by the modified Bessel function of the second kind and zero order, which corresponds to the exponential–hyperbolic model of the spectrum. To describe the mechanical properties of polyisobutylene, Sips [37] introduced a simple relaxation spectrum model given by the difference of two exponential functions and a related logarithmic model of the modulus. This model was augmented to consider a long-term modulus by Yamamoto [38] and applied to test the rheological properties of the plant cell wall.
Both the algorithms based on the Post–Widder formula and those using the pairs of Laplace transforms assumed rather narrow classes of models. Thus, the scope of their effective applicability is limited to similar, not much wider classes of ‘real’ relaxation characteristics. A wider range of applicability has been offered by the models based on the expansion of an unknown spectrum into a series of basis functions (or polynomials) forming a complete basis in a function space, for example, in the space of real-valued square-integrable functions. Stankiewicz [39] and Stankiewicz and Gołacki [18] derived algorithms of the optimal regularized identification of relaxation and retardation spectra in the classes of models generated by different special functions, where various rules were applied for the choice of regularization parameters.
However, articles [18,39] were based on a definition of the relaxation spectrum according to which the modulus is directly given by the Laplace integral of the spectrum. This spectrum definition is not often used in the literature. Therefore, computationally efficient algorithms to determine the relaxation spectrum applied to time measurements of the relaxation modulus are still desirable. Recently, for the definition of the relaxation spectrum dominant in the literature, a class of algorithms for relaxation spectrum recovery, which combines the technique of the expansion of a function into a series in an orthonormal basis with the least-squares regularized identification, was derived by Stankiewicz [40]. Legendre, Laguerre, Chebyshev and Hermite functions were used as the basis functions for the spectrum model. In [40], the problem of determining the spectrum of relaxation frequencies was considered. The complementary problem of determining the spectrum of relaxation times from stress relaxation data is both analytically and numerically more difficult than the determination of the spectrum of relaxation frequencies, because the relaxation time occurs in the denominator of the exponent in the definitional formula of the spectrum. This very task is the subject of this article. Previous studies [18,39,40] have suggested that, by selecting an appropriate time-scale factor, a better fit of the model to the measurement data can be obtained.
The objective of the present paper was to develop a model and an identification algorithm for the determination of the continuous relaxation time spectrum based on discrete-time measurements of the relaxation modulus, which, taking into account the ill-posedness of the original problem of the spectrum recovery and the idea of the optimal selection of the time-scale factor, will provide: (a) a good approximation of the relaxation spectrum and modulus, also due to the best choice of the time-scale factor; (b) smoothness of the spectrum fluctuations, even for noise-corrupted measurements; (c) noise robustness; (d) applicability to a wide range of viscoelastic materials; (e) ease of implementation of the model and identification algorithm in available computing packages. The idea of the optimal choice of the time-scale factor is used here for the first time in the context of relaxation spectrum identification. The goal of this work was the synthesis of the respective model and identification scheme and the analysis of their properties. A further purpose was the numerical verification of the model and algorithm for a double-mode Gauss-like distribution used to describe the viscoelastic properties of various materials, in particular different polymers, wood and glass.
A new hierarchical algorithm for the noise-robust approximation of the continuous spectrum of relaxation times by a finite series of power–exponential basis functions is proposed. The components of the relaxation modulus model are given by compact analytical formulas described by the product of a power of time and the modified Bessel function of the second kind. The main properties of the basis functions of the relaxation spectrum and modulus models have been studied; positive definiteness, upper bounds, monotonicity and asymptotic properties have been examined. Ranges of applicability for different time-scale factors are determined. Since the problem of relaxation spectrum identification is an ill-posed inverse problem, the regularized least-squares identification technique is applied, combined with generalized cross-validation to guarantee the stability of the scheme. The quadratic identification index refers to the measured relaxation modulus. The task of determining the best “regularized” model is solved at the lower level of the identification scheme. The appropriate choice of the time-scale factor at the upper level of the scheme ensures the best fit of the relaxation modulus to the experimental data.
The optimality conditions are derived both for the identification problem solved at the lower level and the task of the optimal choice of the time-scale factor at the upper level of the scheme. It is proved that the smoothness of the vector of the optimal model parameters implies smoothness of the fluctuations of the relaxation spectrum model. A direct formula and upper and lower bounds for the square integral norm of the smoothed spectrum model are derived. The accuracy of the spectrum model for noisy measurements of the relaxation modulus is studied, and the linear convergence to the model that we would obtain for the noise-free measurements is proved.
To design a numerical algorithm based on the scheme, the computations should be arranged hierarchically in a two-level scheme, i.e., for each iteration of the minimization procedure at the upper level, the whole numerical procedure must be realized for the lower-level task. The complete computational procedure for determining the best model is described. The singular value decomposition technique is applied to simplify the algebraic computations. The identification scheme can be easily implemented in available computing environments. The numerical studies, especially the strong smoothing of the relaxation spectrum models in the example of a double-mode Gauss-like spectrum, with a simultaneously very good fit of the relaxation modulus models to the experimental data, suggest the need to explore the applicability of the regularized weighted least-squares [41] in the lower-level identification task. This will be the subject of further work.
In Appendix A, the proofs and derivations of some mathematical formulas are given. Some tables have been moved to Appendix B to increase the clarity of the article.

2. Materials and Methods

2.1. Relaxation Time Spectrum

The uniaxial, non-aging and isothermal stress–strain equation for a linear viscoelastic material is represented by a Boltzmann superposition integral [2]:
$$\sigma(t)=\int_{-\infty}^{t}G(t-u)\,\dot{\varepsilon}(u)\,du, \qquad (1)$$
where u is the past time variable, ranging from −∞ to the present time t. In Equation (1), the variables σ(t) and ε(t) denote, respectively, the stress and strain at time t, and G(t) is the linear relaxation modulus. The modulus G(t) is given by [2,5]:
$$G(t)=\int_{0}^{\infty}\frac{H(\tau)}{\tau}\,e^{-t/\tau}\,d\tau, \qquad (2)$$
where H(τ) characterizes the distribution of relaxation times τ. The continuous relaxation spectrum H(τ) is a generalization of the discrete Maxwell spectrum [2,5] to a continuous function of the relaxation times τ.
The problem of relaxation spectrum identification is the problem of solving a system of Fredholm integral equations of the first kind (2) obtained for discrete-time measurement data. In this paper, time measurements of the relaxation modulus are considered. This problem is ill-posed in the Hadamard sense [42], i.e., small changes in measured relaxation modulus can lead to arbitrarily large changes in the determined relaxation spectrum. As a remedy, some reductions in the set of admissible solutions or the appropriate regularization of the original problem are used. Both the techniques are applied here.
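As a quick illustration of the definitional integral (2), the sketch below evaluates G(t) by numerical quadrature for a hypothetical single-mode Gauss-like spectrum (the spectrum, its parameters and the time grid are illustrative assumptions, not data from this study):

```python
import numpy as np
from scipy.integrate import quad

# Hypothetical Gauss-like relaxation time spectrum H(tau) (illustrative only)
def H(tau):
    return 30.0 * np.exp(-((tau - 5.0) ** 2) / 2.0)

# Relaxation modulus from Equation (2): G(t) = int_0^inf H(tau)/tau * exp(-t/tau) dtau
def G(t):
    val, _ = quad(lambda tau: H(tau) / tau * np.exp(-t / tau), 0.0, np.inf, limit=200)
    return val

# Since H(tau) >= 0, the modulus G(t) must decay monotonically with time
ts = [0.1, 1.0, 10.0, 100.0]
Gs = [G(t) for t in ts]
assert all(Gs[i] > Gs[i + 1] for i in range(len(Gs) - 1))
```

The monotonic decay of G(t) is a direct consequence of the non-negativity of the spectrum; the inverse problem of recovering H(τ) from sampled, noisy values of G(t) is the ill-posed task discussed above.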

2.2. Models

Assume that H(τ) ∈ L²(0,∞), where L²(0,∞) is the space of real-valued square-integrable functions on the interval (0,∞). It is known that the set of linearly independent functions {e^{−ατ}, τe^{−ατ}, τ²e^{−ατ}, …} forms a basis of the space L²(0,∞) [43] (p. 125); here, α is a positive time-scaling factor. Since the maximum
$$\max_{\tau\ge 0}\bar{h}_k(\tau,\alpha)=\left(\frac{k}{\alpha}\right)^{k}e^{-k}$$
of the function h̄_k(τ,α) = τᵏe^{−ατ} grows rapidly with k, it is convenient to expand the relaxation spectrum into a series of the scaled basis functions
$$h_k(\tau,\alpha)=\left(\frac{\alpha\tau}{k}\right)^{k}e^{-\alpha\tau+k},\quad k=1,2,\ldots, \qquad (3)$$
with the first function
$$h_0(\tau,\alpha)=e^{-\alpha\tau}, \qquad (4)$$
as follows:
$$H(\tau)=\sum_{k=0}^{\infty}g_k\,h_k(\tau,\alpha), \qquad (5)$$
where g_k are the real model coefficients.
It is practical to replace the infinite summation in Equation (5) with a finite sum of the first K terms, i.e., to approximate the spectrum H(τ) by a model of the form
$$H_K(\tau,\alpha)=\sum_{k=0}^{K-1}g_k\,h_k(\tau,\alpha), \qquad (6)$$
where the lower index K is the number of model summands. Then, according to (2), the respective model of the relaxation modulus is described by:
$$G_K(t,\alpha)=\int_{0}^{\infty}\frac{H_K(\tau,\alpha)}{\tau}\,e^{-t/\tau}\,d\tau=\sum_{k=0}^{K-1}g_k\,\phi_k(t,\alpha), \qquad (7)$$
where the functions
$$\phi_k(t,\alpha)=\int_{0}^{\infty}\frac{h_k(\tau,\alpha)}{\tau}\,e^{-t/\tau}\,d\tau. \qquad (8)$$
For computational purposes, the function h₀(τ,α) is defined by (4). However, since it is commonly agreed that 0⁰ = 1 [44], the general Formula (3) can also be applied in further analysis for k = 0.
The basis functions ϕ k ( t , α ) (8) of the modulus model (7) are given by a compact analytical formula specified by the following theorem proved in Appendix A.1.
Theorem 1.
Let α > 0, k ≥ 0 and t > 0. Then, the basis functions φ_k(t,α) (8) are given by:
$$\phi_k(t,\alpha)=2e^{k}\left(\frac{\sqrt{\alpha t}}{k}\right)^{k}K_k\!\left(2\sqrt{\alpha t}\right), \qquad (9)$$
where  K k ( x )  is the modified Bessel function of the second kind [45,46] of integer order k.
The first of Functions (9) is as follows:
$$\phi_0(t,\alpha)=2K_0\!\left(2\sqrt{\alpha t}\right). \qquad (10)$$
The modified Bessel functions of the second kind, and especially K₀(x) [47], have many applications in science and engineering: for example, in physics, to describe the flow of magneto-hydrodynamic (MHD) viscous fluid in a Darcy-type porous medium [48], and, in engineering, to derive a closed analytical form of the model of an axial-flux permanent magnet machine with a segmented multipole-Halbach PM array [49] and to describe the per-unit-length internal impedance of two-layer cylindrical conductors [50]. Applications of the Bessel functions in the description of the dynamic response of a mono-pile foundation in homogeneous soil and varied layered soil–rock conditions under horizontal dynamic loads [51], in obtaining a fully coupled poroelastic solution for spherical indentation into a half space with an impermeable surface when the indenter is subjected to step displacement loading [52] and in expressing the distribution of the traveling distance in heterogeneous populations [53] come from material science and ecology.
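Theorem 1 can also be checked numerically: the sketch below (an illustrative verification, assuming SciPy's `scipy.special.kv` for K_k) compares the closed form (9)–(10) with direct numerical integration of (8):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

def h(k, tau, alpha):
    # Spectrum basis functions (3)-(4); h_0 = exp(-alpha tau)
    return np.exp(-alpha * tau) if k == 0 else (alpha * tau / k) ** k * np.exp(-alpha * tau + k)

def phi(k, t, alpha):
    # Closed form (9)-(10): phi_k = 2 e^k (sqrt(alpha t)/k)^k K_k(2 sqrt(alpha t))
    x = np.sqrt(alpha * t)
    return 2.0 * kv(0, 2.0 * x) if k == 0 else 2.0 * np.exp(k) * (x / k) ** k * kv(k, 2.0 * x)

alpha, t = 0.5, 2.0
for k in range(5):
    # Direct numerical evaluation of the integral (8)
    num, _ = quad(lambda tau: h(k, tau, alpha) / tau * np.exp(-t / tau), 0.0, np.inf, limit=200)
    assert abs(num - phi(k, t, alpha)) < 1e-6 * max(1.0, abs(num))
```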
The first five basis functions h_k(τ,α) (3) are shown in Figure 1 for two different values of the time-scaling factor α. Figure 2 shows the related functions φ_k(t,α) (9). The basis functions h_k(τ,α) and φ_k(t,α) are dimensionless. It is seen from Figure 1 that the maximum of each scaled basis function h_k(τ,α) (3) and (4) equals one; however, the relaxation time τ_max corresponding to the maximum, for a given parameter α, depends on the index k according to the formula τ_max = k/α, i.e., it grows with k. This means that increasing the number of model components K will allow for the good modeling of multimodal spectra, which is confirmed by the example presented in the final part of the paper. Reducing the time-scale factor α shifts the spectrum maxima towards larger relaxation times. From Figure 2, it is seen that the Debye decay monotonicity of the basis functions for the relaxation modulus model is in good agreement with the courses of the relaxation modulus obtained experimentally for real materials, for example: concrete [10] (Figure 13), rubber [12] (Figure 2), elastic polyacrylamide hydrogels [28] (Figures 2a,b, 4a, A5, A7 and A8a), sugar beet [18] (Figure 1) and several foods [3] (Figures 3–39).

2.2.1. Positive Definiteness of the Basis Functions

The basis functions of the relaxation spectrum and modulus models are positive definite. For the functions h_k(τ,α) (3) and (4), these properties are obvious. Since, according to Property A.1 in [54], the Bessel functions of the second kind K_v(x) are positive for x > 0 and real v, the positive definiteness of the functions φ_k(t,α) (9) and φ₀(t,α) (10) follows directly.

2.2.2. Asymptotic Properties of the Basis Functions

For x → 0, the following asymptotic formula is found in the literature [45]:
$$K_k(x)\sim\frac{1}{2}\,\Gamma(k)\left(\frac{x}{2}\right)^{-k}, \qquad (11)$$
for modified Bessel functions of order k > 0. Thus, for t → 0, Formulas (11) and (9) imply
$$\phi_k(t,\alpha)\sim\Gamma(k)\left(\frac{e}{k}\right)^{k},$$
which means that for t near zero, the values of the basis functions φ_k(t,α) decrease with increasing index k; see Figure 2.
For argument x → ∞, the following asymptotic exponential formula holds [54]:
$$K_k(x)\sim\sqrt{\frac{\pi}{2x}}\,e^{-x}. \qquad (12)$$
From (9) and (12), the next asymptotic formula follows for large t and k ≥ 1:
$$\phi_k(t,\alpha)\sim\sqrt{\pi}\,k^{-k}\,(\alpha t)^{\frac{k}{2}-\frac{1}{4}}\,e^{-2\sqrt{\alpha t}+k}, \qquad (13)$$
while for t → ∞ and the first basis function, we have
$$\phi_0(t,\alpha)\sim\sqrt{\pi}\,(\alpha t)^{-\frac{1}{4}}\,e^{-2\sqrt{\alpha t}}. \qquad (14)$$
The multiplication by the power function (αt)^{k/2−1/4} in (13) means that the greater k is, the slower the basis function φ_k(t,α) decreases, which is seen in Figure 2. The first basis function φ₀(t,α) decreases faster than the exponential function e^{−2√(αt)}, which is also confirmed by the analysis of the course of the basis functions in Figure 2.
By the asymptotic Formulas (13) and (14), the basis functions tend to 0 as t → ∞; i.e., for an arbitrary α > 0 and k = 0, 1, 2, …,
$$\lim_{t\to\infty}\phi_k(t,\alpha)=0. \qquad (15)$$

2.2.3. Upper Bounds for the Basis Functions

In [54], Corollary 3.4, the following inequality is proved:
$$K_0(x)<\sqrt{\frac{\pi}{2x}}\,e^{-x},$$
whence
$$\phi_0(t,\alpha)=2K_0\!\left(2\sqrt{\alpha t}\right)<\sqrt{\pi}\,(\alpha t)^{-\frac{1}{4}}\,e^{-2\sqrt{\alpha t}}. \qquad (16)$$
From Theorem 3.1 in [54], for k ≥ 1 we have
$$K_k(x)<\frac{2^{k-1}\,\Gamma(k)}{x^{k+1}}.$$
This inequality applied to (9) gives the following upper bound:
$$\phi_k(t,\alpha)<2e^{k}\left(\frac{\sqrt{\alpha t}}{k}\right)^{k}\frac{2^{k-1}\,\Gamma(k)}{\left(2\sqrt{\alpha t}\right)^{k+1}}=\frac{e^{k}\,\Gamma(k)}{2\,k^{k}\,\sqrt{\alpha t}}. \qquad (17)$$
Note that, having in mind the positive definiteness of the basis functions φ_k(t,α), the limit (15) for the first function φ₀(t,α) follows from (16), while, for any k ≥ 1, it can be derived from (17).

2.2.4. Monotonicity of the Basis Functions

As indicated above, the basis functions h_k(τ,α) (3) for k ≥ 1 have a global maximum equal to 1 at the relaxation time τ = k/α, while the first function h₀(τ,α) (4) is monotonically decreasing.
Since the basis function φ_k(t,α) (9) is the product of a monotonically decreasing Bessel function K_k(x) [54] of the increasing argument 2√(αt) and a monotonically increasing power function (αt)^{k/2}, its monotonicity is not obvious. Applying the differentiation formula [54] (Equation (A.13)):
$$\frac{d}{dx}\left[x^{v}K_{v}(x)\right]=-x^{v}K_{v-1}(x),$$
which holds for any real-valued order v, to (9) gives, for integer k ≥ 1,
$$\frac{d}{dt}\left[\phi_k(t,\alpha)\right]=-4\alpha\left(\frac{e}{2k}\right)^{k}\left(2\sqrt{\alpha t}\right)^{k-1}K_{k-1}\!\left(2\sqrt{\alpha t}\right), \qquad (18)$$
whence, having in mind the positive definiteness of the Bessel function K_{k−1}(2√(αt)) for all t > 0, we immediately conclude that the function φ_k(t,α) is monotonically decreasing; see Figure 2. For the first basis function φ₀(t,α) (10), the next differentiation formula [54] (Equation (A.14)):
$$\frac{d}{dx}\left[K_{v}(x)\right]=-\frac{1}{2}\left[K_{v-1}(x)+K_{v+1}(x)\right],$$
which is satisfied for any real v, implies
$$\frac{d}{dt}\left[\phi_0(t,\alpha)\right]=-\frac{\alpha}{\sqrt{\alpha t}}\left[K_{-1}\!\left(2\sqrt{\alpha t}\right)+K_{1}\!\left(2\sqrt{\alpha t}\right)\right], \qquad (19)$$
which shows that φ₀(t,α) is also a monotonically decreasing function.

2.2.5. Ranges of Applicability

In the models, the parameter α > 0 is a time-scaling factor. The following rule holds: the lower the parameter α, the greater the relaxation times. The above is illustrated by Figure 1 and Figure 2. Through the optimal choice of the scaling factor, the best fit of the model to the experimental data is achieved, which will be the subject of study in Section 3.2.
Following [40], on the basis of the course of the relaxation modulus, the range of applicability is specified as the time t after which the first K basis functions φ_k(t,α) no longer exceed ε = 0.5% of their maximum values, i.e., stay below this threshold for any θ ≥ t. Specifically,
$$t_{app}(\alpha)=\max_{0\le k\le K-1}\,\min_{t>0}\left\{t:\left|\phi_k(\theta,\alpha)\right|\le 0.005\cdot\phi_k^{max}(\alpha)\ \text{for any}\ \theta\ge t\right\}, \qquad (20)$$
where
$$\phi_k^{max}(\alpha)=\max_{t\ge 0}\left|\phi_k(t,\alpha)\right|.$$
Similarly, as in [40], the range of applicability specified directly for the relaxation times τ is defined on the basis of the variability in the basis functions h_k(τ,α); i.e.,
$$\tau_{app}(\alpha)=\max_{0\le k\le K-1}\,\min_{\tau>0}\left\{\tau:\left|h_k(\vartheta,\alpha)\right|\le 0.005\cdot h_k^{max}(\alpha)\ \text{for any}\ \vartheta\ge\tau\right\}, \qquad (21)$$
with h_k^{max}(α) defined by
$$h_k^{max}(\alpha)=\max_{\tau\ge 0}\left|h_k(\tau,\alpha)\right|.$$
The times t_app(α) (20) and τ_app(α) (21) for different values of α are summarized in Table 1 for K = 5 and K = 12. The same data for K = 6, …, 11 are given in Table A1 in Appendix B.
A review of the data from these tables shows that, for any fixed α, both τ_app(α) and t_app(α) grow almost linearly with the number of model summands K.
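The applicability times (20) and (21) can be approximated on a dense logarithmic grid. The sketch below (an illustrative implementation; the grid limits and resolution are assumptions) computes t_app(α) and reproduces the two trends noted above, i.e., the range grows with K and with decreasing α:

```python
import numpy as np
from scipy.special import kv

def phi(k, t, alpha):
    # Closed-form basis functions (9)-(10); t may be a NumPy array
    x = np.sqrt(alpha * t)
    return 2.0 * kv(0, 2.0 * x) if k == 0 else 2.0 * np.exp(k) * (x / k) ** k * kv(k, 2.0 * x)

def t_app(alpha, K, eps=0.005):
    # Grid-based approximation of Equation (20)
    grid = np.logspace(-4, 6, 20000)
    t_star = 0.0
    for k in range(K):
        vals = np.abs(phi(k, grid, alpha))
        above = np.nonzero(vals > eps * vals.max())[0]
        # first grid instant after which phi_k stays below eps of its maximum
        idx = min(above[-1] + 1, grid.size - 1)
        t_star = max(t_star, grid[idx])
    return t_star

# more summands K -> slower-decaying components -> larger applicability range
assert t_app(0.1, 12) > t_app(0.1, 5)
# phi_k depends on the product alpha*t only, so t_app scales like 1/alpha
assert t_app(0.01, 5) > t_app(1.0, 5)
```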

2.3. Least-Squares Regularized Identification

Identification consists of the selection, within the chosen class of models given by (6) and (7), of a model that ensures the best fit to the measurement results. Suppose a certain identification experiment (stress relaxation test [2,3]) resulted in a set of measurements of the relaxation modulus {Ḡ(t_i) = G(t_i) + z(t_i)} at the sampling instants t_i ≥ 0, i = 1, …, N, where z(t_i) is the measurement noise. It is assumed that the number of measurements N ≥ K. As a measure of the accuracy of the model (7), the quadratic index is taken:
$$Q_N(\boldsymbol{g}_K,\alpha)=\sum_{i=1}^{N}\left[\bar{G}(t_i)-G_K(t_i,\alpha)\right]^{2}, \qquad (22)$$
where g_K = [g₀ … g_{K−1}]ᵀ is a K-element vector of the unknown coefficients of the models (6) and (7). The identification index (22) is rewritten in the compact form as
$$Q_N(\boldsymbol{g}_K,\alpha)=\left\|\bar{\boldsymbol{G}}_N-\boldsymbol{\Phi}_{N,K}(\alpha)\,\boldsymbol{g}_K\right\|_{2}^{2}, \qquad (23)$$
where
$$\boldsymbol{\Phi}_{N,K}(\alpha)=\begin{bmatrix}\phi_0(t_1,\alpha)&\cdots&\phi_{K-1}(t_1,\alpha)\\ \vdots&\ddots&\vdots\\ \phi_0(t_N,\alpha)&\cdots&\phi_{K-1}(t_N,\alpha)\end{bmatrix},\qquad \bar{\boldsymbol{G}}_N=\begin{bmatrix}\bar{G}(t_1)\\ \vdots\\ \bar{G}(t_N)\end{bmatrix}, \qquad (24)$$
and ‖·‖₂ denotes the Euclidean norm in the real space ℝᴺ. Thus, the optimal identification of the relaxation spectrum in the class of models defined by (6) and (7) consists of determining the model parameter g_K by solving the following optimization task:
$$\min_{\boldsymbol{g}_K\in\mathbb{R}^{K}}\left\|\bar{\boldsymbol{G}}_N-\boldsymbol{\Phi}_{N,K}(\alpha)\,\boldsymbol{g}_K\right\|_{2}^{2}. \qquad (25)$$
The matrix Φ_{N,K}(α) is usually ill-conditioned. In consequence, the problem (25) is ill-posed in the sense of Hadamard [42]. The parameter g_K minimizing the identification index (23) is not unique, and even the normal (lowest Euclidean norm) solution of (25) is a non-continuous and unbounded function of the measurement vector Ḡ_N. Therefore, when the data are noisy, even small changes in Ḡ_N can lead to arbitrarily large artefacts in the optimal model parameters g_K. To deal with the ill-posedness, Tikhonov regularization [55] can be used, replacing the original ill-posed problem (25) with a nearby problem with a modified square functional of the form:
$$\min_{\boldsymbol{g}_K\in\mathbb{R}^{K}}\left\|\bar{\boldsymbol{G}}_N-\boldsymbol{\Phi}_{N,K}(\alpha)\,\boldsymbol{g}_K\right\|_{2}^{2}+\lambda\left\|\boldsymbol{g}_K\right\|_{2}^{2}, \qquad (26)$$
where λ > 0 is a regularization parameter. The regularized task (26) is well-posed; that is, the solution always exists, is unique and depends continuously on both the matrix Φ_{N,K} and the measurement data Ḡ_N. The parameter vector solving (26) is given by:
$$\bar{\boldsymbol{g}}_K^{\lambda}(\alpha)=\left(\boldsymbol{\Phi}_{N,K}^{T}(\alpha)\,\boldsymbol{\Phi}_{N,K}(\alpha)+\lambda\,\boldsymbol{I}_{K,K}\right)^{-1}\boldsymbol{\Phi}_{N,K}^{T}(\alpha)\,\bar{\boldsymbol{G}}_N, \qquad (27)$$
where I_{K,K} is the K-dimensional identity matrix.
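For a fixed λ, the regularized estimate (27) reduces to one linear solve. The sketch below (synthetic data; the time grid, noise level, true coefficients and λ are illustrative assumptions) builds the matrix (24) and applies (27):

```python
import numpy as np
from scipy.special import kv

def phi(k, t, alpha):
    # Closed-form basis functions (9)-(10); t may be a NumPy array
    x = np.sqrt(alpha * t)
    return 2.0 * kv(0, 2.0 * x) if k == 0 else 2.0 * np.exp(k) * (x / k) ** k * kv(k, 2.0 * x)

def design_matrix(times, K, alpha):
    # Phi_{N,K}(alpha) of Equation (24)
    return np.column_stack([phi(k, times, alpha) for k in range(K)])

rng = np.random.default_rng(0)
N, K, alpha, lam = 100, 6, 0.2, 1e-3
times = np.linspace(0.01, 50.0, N)
g_true = np.array([0.5, 1.0, 0.0, 0.8, 0.0, 0.3])
Phi = design_matrix(times, K, alpha)
G_bar = Phi @ g_true + 0.01 * rng.standard_normal(N)  # noisy modulus measurements

# Equation (27): g = (Phi^T Phi + lam I)^(-1) Phi^T G_bar
g_hat = np.linalg.solve(Phi.T @ Phi + lam * np.eye(K), Phi.T @ G_bar)

# the regularized model reproduces the noise-free modulus closely
err = np.linalg.norm(Phi @ (g_hat - g_true))
assert err < 0.1 * np.linalg.norm(Phi @ g_true)
```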
Following [40], the generalized cross-validation (GCV) [42,56] is applied, according to which the best regularization parameter is [42,56]:
$$\lambda_{GCV}(\alpha)=\min\left\{\lambda:\ \lambda=\arg\min_{\lambda\ge 0}V_{GCV}(\lambda,\alpha)\right\}, \qquad (28)$$
where the GCV functional is defined by [56]
$$V_{GCV}(\lambda,\alpha)=\frac{\left\|\boldsymbol{\Xi}(\lambda,\alpha)\,\bar{\boldsymbol{G}}_N\right\|_{2}^{2}}{\left\{tr\left[\boldsymbol{\Xi}(\lambda,\alpha)\right]\right\}^{2}}, \qquad (29)$$
with the matrix
$$\boldsymbol{\Xi}(\lambda,\alpha)=\boldsymbol{I}_{N,N}-\boldsymbol{\Phi}_{N,K}(\alpha)\left(\boldsymbol{\Phi}_{N,K}^{T}(\alpha)\,\boldsymbol{\Phi}_{N,K}(\alpha)+\lambda\,\boldsymbol{I}_{K,K}\right)^{-1}\boldsymbol{\Phi}_{N,K}^{T}(\alpha). \qquad (30)$$
Here, Ξ(λ,α)Ḡ_N is the residual vector for the regularized solution (27); tr[Ξ(λ,α)] denotes the trace of the symmetric matrix Ξ(λ,α). The optimization problem in (28) has a unique solution, and the resulting parameter ḡ_K^{λ_GCV(α)}(α) differs the least from the normal solution of the problem (26) that we would obtain for the ideal (not noise-corrupted) measurements of the relaxation modulus [56].
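A minimal numerical sketch of the GCV selection (28)–(30) on synthetic data (the data, the bounds of the λ search and the use of `scipy.optimize.minimize_scalar` over log₁₀λ are illustrative assumptions):

```python
import numpy as np
from scipy.special import kv
from scipy.optimize import minimize_scalar

def phi(k, t, alpha):
    # Closed-form basis functions (9)-(10); t may be a NumPy array
    x = np.sqrt(alpha * t)
    return 2.0 * kv(0, 2.0 * x) if k == 0 else 2.0 * np.exp(k) * (x / k) ** k * kv(k, 2.0 * x)

def V_gcv(lam, Phi, G_bar):
    # GCV functional (29) with Xi(lam, alpha) of Equation (30)
    N, K = Phi.shape
    Xi = np.eye(N) - Phi @ np.linalg.solve(Phi.T @ Phi + lam * np.eye(K), Phi.T)
    r = Xi @ G_bar
    return (r @ r) / np.trace(Xi) ** 2

rng = np.random.default_rng(1)
N, K, alpha = 100, 6, 0.2
times = np.linspace(0.01, 50.0, N)
Phi = np.column_stack([phi(k, times, alpha) for k in range(K)])
g_true = np.array([0.5, 1.0, 0.0, 0.8, 0.0, 0.3])
G_bar = Phi @ g_true + 0.01 * rng.standard_normal(N)

# search over log10(lambda) for numerical convenience
res = minimize_scalar(lambda u: V_gcv(10.0 ** u, Phi, G_bar), bounds=(-10.0, 2.0), method="bounded")
lam_gcv = 10.0 ** res.x

# the GCV-regularized solution (27) still fits the data well
g_hat = np.linalg.solve(Phi.T @ Phi + lam_gcv * np.eye(K), Phi.T @ G_bar)
assert np.linalg.norm(G_bar - Phi @ g_hat) < 0.05 * np.linalg.norm(G_bar)
```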

3. Results and Discussion

In this section, the necessary optimality condition for the regularized identification task with GCV’s choice of regularization parameter is given in the form of an algebraic nonlinear equation. Next, the problem of the choice of the optimal scale factor is stated, and the respective necessary optimality condition is derived. A hierarchical two-level identification scheme with the optimal choice of the scale factor is proposed. The numerical realization and the application of a singular value decomposition technique are discussed. A complete computational procedure is outlined. The analysis of the smoothing of the model and model accuracy for noisy measurements of the relaxation modulus is presented.

3.1. Necessary Optimality Condition

The GCV functional (29) is a differentiable function of the regularization parameter λ. The necessary optimality condition for the minimization task solved in (28) is derived in Appendix A.3.
Theorem 2.
The optimal regularization parameter λ_GCV(α) solves the following equation:
$$\bar{\boldsymbol{G}}_N^{T}\left(\boldsymbol{\Psi}(\alpha)+\lambda\boldsymbol{I}_{N,N}\right)^{-3}\bar{\boldsymbol{G}}_N\ tr\!\left[\left(\boldsymbol{\Psi}(\alpha)+\lambda\boldsymbol{I}_{N,N}\right)^{-1}\right]=\bar{\boldsymbol{G}}_N^{T}\left(\boldsymbol{\Psi}(\alpha)+\lambda\boldsymbol{I}_{N,N}\right)^{-2}\bar{\boldsymbol{G}}_N\ tr\!\left[\left(\boldsymbol{\Psi}(\alpha)+\lambda\boldsymbol{I}_{N,N}\right)^{-2}\right], \qquad (31)$$
where
$$\boldsymbol{\Psi}(\alpha)=\boldsymbol{\Phi}_{N,K}(\alpha)\,\boldsymbol{\Phi}_{N,K}^{T}(\alpha). \qquad (32)$$
In Appendix A.4, the following formula:
$$\frac{\partial V_{GCV}(\lambda,\alpha)}{\partial\lambda}=-\frac{2\,\bar{\boldsymbol{G}}_N^{T}\left(\boldsymbol{\Psi}(\alpha)+\lambda\boldsymbol{I}_{N,N}\right)^{-3}\bar{\boldsymbol{G}}_N}{\left\{tr\!\left[\left(\boldsymbol{\Psi}(\alpha)+\lambda\boldsymbol{I}_{N,N}\right)^{-1}\right]\right\}^{2}}+\frac{2\,\bar{\boldsymbol{G}}_N^{T}\left(\boldsymbol{\Psi}(\alpha)+\lambda\boldsymbol{I}_{N,N}\right)^{-2}\bar{\boldsymbol{G}}_N\ tr\!\left[\left(\boldsymbol{\Psi}(\alpha)+\lambda\boldsymbol{I}_{N,N}\right)^{-2}\right]}{\left\{tr\!\left[\left(\boldsymbol{\Psi}(\alpha)+\lambda\boldsymbol{I}_{N,N}\right)^{-1}\right]\right\}^{3}} \qquad (33)$$
is developed, describing the derivative of the GCV functional (29) minimized in (28) directly as a function of the vector Ḡ_N and the matrix Ψ(α).

3.2. Choice of the Time-Scale Factor

The optimal regularization parameter λ_GCV(α) and the matrix Φ_{N,K}(α) depend on the time-scale factor. Therefore, the optimal model parameter
$$\hat{\boldsymbol{g}}_K^{\lambda_{GCV}}(\alpha)=\left.\bar{\boldsymbol{g}}_K^{\lambda}(\alpha)\right|_{\lambda=\lambda_{GCV}(\alpha)} \qquad (34)$$
and the optimal identification index
$$Q_N^{opt}(\alpha)=Q_N\!\left(\hat{\boldsymbol{g}}_K^{\lambda_{GCV}}(\alpha),\alpha\right)=\left\|\bar{\boldsymbol{G}}_N-\boldsymbol{\Phi}_{N,K}(\alpha)\,\hat{\boldsymbol{g}}_K^{\lambda_{GCV}}(\alpha)\right\|_{2}^{2} \qquad (35)$$
also depend on α. By the optimal choice of the scaling factor, the best fit of the model to the experimental data can be achieved. By (27), (30) and (34), and having (A9) in mind, the index Q_N^{opt}(α) (35) is expressed by the following formula:
$$Q_N^{opt}(\alpha)=\lambda_{GCV}^{2}(\alpha)\,\bar{\boldsymbol{G}}_N^{T}\left(\boldsymbol{\Psi}(\alpha)+\lambda_{GCV}(\alpha)\,\boldsymbol{I}_{N,N}\right)^{-2}\bar{\boldsymbol{G}}_N. \qquad (36)$$
The smaller the index Q_N^{opt}(α), the smaller the resulting model error. Thus, the problem of the choice of the best time-scale factor α_opt takes the form
$$\min_{\alpha>0}Q_N^{opt}(\alpha)=Q_N^{opt}(\alpha_{opt}). \qquad (37)$$
Before we state the necessary optimality condition for the task (37), we prove the following result concerning the basis functions φ_k(t,α) as functions of the time-scale factor. The proof is given in Appendix A.5.
Theorem 3.
Let α > 0 and t > 0. Then, for k ≥ 1, the derivatives of the basis functions φ_k(t,α) (9) with respect to the parameter α are given by:
$$\frac{d}{d\alpha}\left[\phi_k(t,\alpha)\right]=-e\left(\frac{t}{k}\right)\left(\frac{k-1}{k}\right)^{k-1}\phi_{k-1}(t,\alpha). \qquad (38)$$
For k = 0, the following formula holds:
$$\frac{d}{d\alpha}\left[\phi_0(t,\alpha)\right]=-\frac{1}{e\,\alpha}\,\phi_1(t,\alpha). \qquad (39)$$
Thus, the positive definite functions φ_k(t,α) are monotonically decreasing functions of α for any fixed t > 0.
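Formulas (38) and (39) can be confirmed by a central finite-difference check (illustrative, assuming the closed-form basis functions of Theorem 1; note that for k = 1 the convention 0⁰ = 1 applies):

```python
import numpy as np
from scipy.special import kv

def phi(k, t, alpha):
    # Closed-form basis functions (9)-(10)
    x = np.sqrt(alpha * t)
    return 2.0 * kv(0, 2.0 * x) if k == 0 else 2.0 * np.exp(k) * (x / k) ** k * kv(k, 2.0 * x)

t, alpha, dh = 2.0, 0.5, 1e-6
for k in range(5):
    # central finite difference with respect to alpha
    fd = (phi(k, t, alpha + dh) - phi(k, t, alpha - dh)) / (2.0 * dh)
    if k == 0:
        exact = -phi(1, t, alpha) / (np.e * alpha)                                 # (39)
    else:
        exact = -np.e * (t / k) * ((k - 1) / k) ** (k - 1) * phi(k - 1, t, alpha)  # (38)
    assert abs(fd - exact) < 1e-5 * max(1.0, abs(exact))
```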
The following necessary optimality condition is derived in Appendix A.6:
Theorem 4.
The optimal time-scale factor α_opt satisfies the following equation:
$$\lambda_{GCV}(\alpha)\,\bar{\boldsymbol{G}}_N^{T}\,\boldsymbol{\Upsilon}(\alpha)^{-1}\,\boldsymbol{\Omega}_{N,K}(\alpha)\,\boldsymbol{\Upsilon}(\alpha)^{-2}\,\bar{\boldsymbol{G}}_N=\frac{d\lambda_{GCV}(\alpha)}{d\alpha}\,\bar{\boldsymbol{G}}_N^{T}\,\boldsymbol{\Upsilon}(\alpha)^{-2}\,\bar{\boldsymbol{G}}_N-\lambda_{GCV}(\alpha)\,\frac{d\lambda_{GCV}(\alpha)}{d\alpha}\,\bar{\boldsymbol{G}}_N^{T}\,\boldsymbol{\Upsilon}(\alpha)^{-3}\,\bar{\boldsymbol{G}}_N, \qquad (40)$$
where the derivative dλ_GCV(α)/dα is given by the equation
$$\frac{d\lambda_{GCV}(\alpha)}{d\alpha}\left\{3\,\bar{\boldsymbol{G}}_N^{T}\boldsymbol{\Upsilon}(\alpha)^{-4}\bar{\boldsymbol{G}}_N\,tr\!\left[\boldsymbol{\Upsilon}(\alpha)^{-1}\right]-\bar{\boldsymbol{G}}_N^{T}\boldsymbol{\Upsilon}(\alpha)^{-3}\bar{\boldsymbol{G}}_N\,tr\!\left[\boldsymbol{\Upsilon}(\alpha)^{-2}\right]-2\,\bar{\boldsymbol{G}}_N^{T}\boldsymbol{\Upsilon}(\alpha)^{-2}\bar{\boldsymbol{G}}_N\,tr\!\left[\boldsymbol{\Upsilon}(\alpha)^{-3}\right]\right\}$$
$$=2\,\bar{\boldsymbol{G}}_N^{T}\boldsymbol{\Upsilon}(\alpha)^{-2}\boldsymbol{\Omega}_{N,K}(\alpha)\boldsymbol{\Upsilon}(\alpha)^{-1}\bar{\boldsymbol{G}}_N\,tr\!\left[\boldsymbol{\Upsilon}(\alpha)^{-2}\right]+2\,\bar{\boldsymbol{G}}_N^{T}\boldsymbol{\Upsilon}(\alpha)^{-2}\bar{\boldsymbol{G}}_N\,tr\!\left[\boldsymbol{\Omega}_{N,K}(\alpha)\boldsymbol{\Upsilon}(\alpha)^{-3}\right]-2\,\bar{\boldsymbol{G}}_N^{T}\boldsymbol{\Upsilon}(\alpha)^{-3}\boldsymbol{\Omega}_{N,K}(\alpha)\boldsymbol{\Upsilon}(\alpha)^{-1}\bar{\boldsymbol{G}}_N\,tr\!\left[\boldsymbol{\Upsilon}(\alpha)^{-1}\right]-\bar{\boldsymbol{G}}_N^{T}\boldsymbol{\Upsilon}(\alpha)^{-2}\boldsymbol{\Omega}_{N,K}(\alpha)\boldsymbol{\Upsilon}(\alpha)^{-2}\bar{\boldsymbol{G}}_N\,tr\!\left[\boldsymbol{\Upsilon}(\alpha)^{-1}\right]-\bar{\boldsymbol{G}}_N^{T}\boldsymbol{\Upsilon}(\alpha)^{-3}\bar{\boldsymbol{G}}_N\,tr\!\left[\boldsymbol{\Omega}_{N,K}(\alpha)\boldsymbol{\Upsilon}(\alpha)^{-2}\right], \qquad (41)$$
with symmetric matrices
Υ ( α ) = Ψ ( α ) + λ G C V ( α ) I N , N ,
Ω N , K ( α ) = d d α Ψ ( α ) = Θ N , K ( α ) Φ N , K T ( α ) + Φ N , K ( α ) Θ N , K T ( α ) .
where  Ψ ( α )  is defined by (32) and
Θ_{N,K}(α) = dΦ_{N,K}(α)/dα = [ −(1/(e α)) ϕ_1(t_1,α)   −e t_1 ϕ_0(t_1,α)   ⋯   −e (t_1/(K−1)) ((K−2)/(K−1))^(K−2) ϕ_{K−2}(t_1,α) ;  ⋮  ; −(1/(e α)) ϕ_1(t_N,α)   −e t_N ϕ_0(t_N,α)   ⋯   −e (t_N/(K−1)) ((K−2)/(K−1))^(K−2) ϕ_{K−2}(t_N,α) ].

3.3. Two-Level Identification Scheme

To find the optimal time-scale factor α o p t and the optimal model of the relaxation time spectrum, the following two-level scheme is applied:

3.3.1. Lower Level

Given time-scale factor α > 0 , find regularization parameter λ G C V ( α ) solving the GCV minimization task (28).

3.3.2. Upper Level

Find the time-scale factor α o p t > 0 minimizing identification index Q N o p t ( α ) (36), i.e., solving optimization task (37). Take
g ^ K = g ^ K λ G C V ( α o p t ) ,
where g ^ K λ G C V ( α o p t ) is defined by (34) as a parameter of the best model of the relaxation time spectrum.
Having the optimal model parameter g ^ K , the optimal model of the relaxation spectrum is determined according to the formula resulting directly from (6):
H_K^opt(τ) = H_K(τ, α_opt) = Σ_{k=0}^{K−1} ĝ_k h_k(τ, α_opt),
where g ^ k are elements of the vector g ^ K .
To design numerical realization of the scheme, we need:
  • A numerical procedure for solving the lower-level GCV minimization task (28);
  • An iterative scheme for solving the upper-level problem (37) of choosing the best time-scale factor.
For any given parameter α, the GCV function V_GCV(λ,α) (29) is differentiable with respect to λ. The partial derivative ∂V_GCV(λ,α)/∂λ is given by (33) as a function of the experiment data Ḡ_N and the matrix Ψ(α) (32), which depends on the time instants t_i used in the relaxation experiment and on the time-scaling factor. An arbitrary gradient optimization method can be implemented to solve the GCV minimization task (28). Likewise, an arbitrary gradient method can be used to solve optimization problem (37); the derivative of the index Q_N^opt(α) (36) is described by Formula (A24) derived in Appendix A.6. Alternatively, the optimal parameter α_opt can be found by solving the necessary optimality condition from Theorem 4, i.e., in fact, the two scalar algebraic Equations (40) and (41).

3.4. Algebraic Background of the Identification Scheme

Formulas (27)–(30), fundamental for lower-level optimization task (28), are elegant but generally unsuitable for computational purposes. Following [40], the singular value decomposition (SVD) technique [57] will be applied. Let SVD of the N × K dimensional matrix Φ N , K ( α ) take the form [57]:
Φ N , K ( α ) = U ( α ) Σ ( α )   V ( α ) T ,
where Σ(α) = diag(σ_1(α), …, σ_{r(α)}(α), 0, …, 0) ∈ ℝ^{N,K} is a diagonal matrix containing the non-zero singular values σ_1(α), …, σ_{r(α)}(α) of the matrix Φ_{N,K}(α), the matrices V(α) ∈ ℝ^{K,K} and U(α) ∈ ℝ^{N,N} are orthogonal and r(α) = rank[Φ_{N,K}(α)] < N. Taking advantage of the diagonal structure of Σ(α) and the orthogonality of the matrices V(α) and U(α), it may be simply proved that the parameter ḡ_K^λ(α) (27) is given by
g ¯ K λ ( α ) = V ( α ) Λ λ ( α )   U ( α ) T   G ¯ N ,
where K × N diagonal matrix Λ λ ( α ) is as follows:
Λ_λ(α) = diag( σ_1(α)/([σ_1(α)]² + λ), …, σ_{r(α)}(α)/([σ_{r(α)}(α)]² + λ), 0, …, 0 ).
Using SVD (47) and introducing N dimensional vector Y ( α ) = U ( α ) T   G ¯ N , the GCV function (29) is also expressed by a convenient analytical formula:
V_GCV(λ,α) = [ Σ_{i=1}^{r(α)} λ² [y_i(α)]² / ([σ_i(α)]² + λ)² + Σ_{i=r(α)+1}^{N} [y_i(α)]² ] / [ N − r(α) + Σ_{i=1}^{r(α)} λ/([σ_i(α)]² + λ) ]²,
as a function of the singular values σ i ( α ) and elements y i ( α ) of the vector Y ( α ) .
Similarly, the identification index Q_N^opt(α) (36) minimized in the upper level of the scheme is expressed using the SVD (47). Since, by virtue of (47) and (32), we have
(Ψ(α) + λ_GCV(α) I_{N,N})⁻² = U(α) [Σ(α) Σ(α)^T + λ_GCV(α) I_{N,N}]⁻² U(α)^T,
the index Q_N^opt(α) is expressed as
Q_N^opt(α) = λ_GCV²(α) Σ_{i=1}^{r(α)} [y_i(α)]² / ([σ_i(α)]² + λ_GCV(α))² + Σ_{i=r(α)+1}^{N} [y_i(α)]².

3.5. Computational Algorithm for Model Identification

To design a numerical algorithm of the scheme, the communication between the levels should also be resolved. The computations must be arranged hierarchically in a two-level structure, i.e., for each iteration of the minimization procedure at the upper level, the whole numerical procedure must be realized for the lower-level GCV task (28). The complete computational procedure for determining the optimal model is given below.
  • Step 0: Perform the experiment—stress relaxation test [1,2,3,28]—and record the measurements Ḡ(t_i), i = 1, …, N, of the relaxation modulus at times t_i ≥ 0.
  • Step 1: Determine the optimal regularization parameter λ G C V ( α o p t ) in the following two-level computations.
    • Step 1.0: Choose the initial point α 0 for the numerical procedure applied to solve the upper-level task (37).
    • Step 1.1: Let α m be the m -th iterate in the numerical procedure chosen to solve the upper-level task (37). For α = α m , solve the lower-level minimization task (28) according to the chosen numerical optimization procedure and determine the regularization parameter λ G C V ( α m ) . The algebraic formula V G C V ( λ , α ) (50) is applied.
    • Step 1.2: Using λ G C V ( α m ) , compute, according to the numerical procedure selected to solve the upper-level task (37), with the index Q N o p t ( α ) described by (51), the new parameter α m + 1 , which is the next approximation of α o p t . If for α m + 1 the stopping rule of the chosen numerical procedure is satisfied, i.e.,
      |α_{m+1} − α_m| ≤ ε_1
      or
      |Q_N^opt(α_{m+1}) − Q_N^opt(α_m)| ≤ ε_2,
      where ε_1 and ε_2 are preselected small positive numbers, put α_opt = α_m as the optimal time-scale factor, λ_GCV(α_m) as λ_GCV(α_opt), and go to Step 2. Otherwise, return to Step 1.1 and continue the computations for α = α_{m+1}.
  • Step 2: Compute the vector of the optimal model parameters g ^ K according to (45) and the best model of the relaxation spectrum H K o p t ( τ ) given by (46).
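The hierarchical loop of Steps 1.0–1.2 can be sketched compactly once V_GCV and Q_N^opt are written in terms of the singular values, as in (50) and (51). The fragment below is a structural Python illustration only: the toy functions sigma_of and y_of stand in for the SVD quantities σ_i(α) and y_i(α) = [U(α)^T Ḡ_N]_i of a real data set, the λ bracket is arbitrary, and golden-section search replaces whatever gradient procedure is chosen for each level.

```python
import math

def v_gcv(lam, sig, y, N):
    # GCV function (50): sig holds the r nonzero singular values of Phi,
    # y = U^T * G_bar; the last N - r components of y are pure residual
    r = len(sig)
    num = sum(lam ** 2 * yi ** 2 / (s ** 2 + lam) ** 2 for s, yi in zip(sig, y))
    num += sum(yi ** 2 for yi in y[r:N])
    den = (N - r) + sum(lam / (s ** 2 + lam) for s in sig)
    return num / den ** 2

def q_index(lam, sig, y, N):
    # Identification index (51); the tail terms carry no lambda^2 factor,
    # as follows from (36) with Psi = Phi * Phi^T
    r = len(sig)
    q = lam ** 2 * sum(yi ** 2 / (s ** 2 + lam) ** 2 for s, yi in zip(sig, y))
    return q + sum(yi ** 2 for yi in y[r:N])

def golden(f, lo, hi, tol=1e-9):
    # Plain golden-section minimization on [lo, hi]
    g = (math.sqrt(5.0) - 1.0) / 2.0
    c, d = hi - g * (hi - lo), lo + g * (hi - lo)
    fc, fd = f(c), f(d)
    while hi - lo > tol:
        if fc < fd:
            hi, d, fd = d, c, fc
            c = hi - g * (hi - lo)
            fc = f(c)
        else:
            lo, c, fc = c, d, fd
            d = lo + g * (hi - lo)
            fd = f(d)
    return 0.5 * (lo + hi)

# --- toy stand-ins for the SVD quantities of a real experiment ---
N = 6

def sigma_of(alpha):
    # hypothetical singular values sigma_i(alpha), peaked near alpha = 2
    return [3.0 / (1.0 + (alpha - 2.0) ** 2), 1.0, 0.3]

def y_of(alpha):
    # hypothetical projected data y(alpha) = U(alpha)^T * G_bar
    return [5.0, 1.0, 0.2, 0.01, -0.01, 0.01]

def lam_gcv(alpha):
    # lower level (28): minimize the GCV function over lambda
    return golden(lambda lam: v_gcv(lam, sigma_of(alpha), y_of(alpha), N),
                  1e-8, 1.0)

def q_of(alpha):
    # upper level (37): index (51) evaluated at the lower-level optimum
    return q_index(lam_gcv(alpha), sigma_of(alpha), y_of(alpha), N)

alpha_opt = golden(q_of, 0.5, 4.0, tol=1e-6)
lam_opt = lam_gcv(alpha_opt)
```

Note how the upper level needs only the scalar λ_GCV(α_m) from each lower-level solve, in agreement with Remark 1 below.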
Remark 1.
The appealing feature of the scheme is that only the values of  λ_GCV(α_m) , and not the related parameter vectors  ĝ_K^{λ_GCV(α_m)}  (34), are used for  α_m  in successive iterations of the numerical procedure solving the upper-level task (37).
Remark 2.
The regularization parameter  λ G C V ( α m )  resulting from the lower-level minimization task (28) in each iteration of the upper level is the solution of the GCV problem (28) for current  α m . Thus, the respective vector  g ^ K λ G C V ( α m )  (34) can be treated as an approximate solution of the overall identification problem.
Remark 3.
The selection of the initial time-scaling factor  α_0  in Step 1.0 may be based on the data concerning model applicability summarized in Table 1 and Table A1, or  α_0  can be selected by comparing, for different values of  α , the first basis function  ϕ_0(t,α)  (9) with the experiment results  { Ḡ(t_i) } .
Remark 4.
The SVD (47) of the matrix  Φ N , K ( α ) , of computational complexity  O ( N K 2 )  [57], must be computed only once for successive  α m  generated in Step 1.1. SVD is accessible in the form of optimized numerical procedures in most commonly used computational packets.
The above procedure and communication between the levels are illustrated in Figure 3.

3.6. Analysis

Recently, a class of robust algorithms of the approximation of the continuous spectrum of relaxation frequencies by finite series of orthonormal functions has been developed and analyzed in detail in [40]. However, these results were derived using the orthogonality of the basis functions, so they are not applicable here. Therefore, both the analysis of the smoothing of the model and model accuracy for noisy measurements of the relaxation modulus must be carried out anew here.

3.6.1. Smoothness

The purpose of regularization is the stabilization of the resulting vector ḡ_K^λ(α) (27). Since the basis functions h_k(τ,α) (3) and (4) are such that h_k(τ,α) ≤ 1 for any arguments, the following inequality
max_{τ ≥ 0} |H_K(τ,α)| ≤ Σ_{k=0}^{K−1} |g_k|
holds for an arbitrary time-scale factor, which means that the smoothing of the vector of model parameters results in the boundedness of the respective relaxation spectrum. The mechanism of stabilizing the vector ḡ_K^λ(α) (27) via Tikhonov regularization is explained in many papers, for example, [20,40,55]. The following rule holds: the greater the regularization parameter λ, the more strongly bounded the fluctuations of the vector ḡ_K^λ(α); see [40].
The norm ‖H_K(τ,α)‖₂ is a measure of the smoothness of the relaxation spectrum model, where ‖·‖₂ here denotes the norm in L²(0,∞). In Appendix A.7, the following proposition is proved:
Proposition 1.
For an arbitrary time-scale factor  α  and arbitrary vector of model parameters  g K , we have
‖H_K(τ,α)‖₂² = g_K^T Γ(α) g_K = (1/(2α)) g_K^T Γ_1 g_K,
where the K × K symmetric real matrices Γ(α), described by (A39), and Γ_1, whose (k,j)-th element, for k, j = 0, 1, …, K−1 and with the convention 0⁰ = 1, is given by
[Γ_1]_{k,j} = (e/2)^{k+j} k^k j^j / (k+j)!,
are positive definite; the matrix Γ(α) for an arbitrary time-scale factor α > 0.
The matrix Γ_1 is independent of the time-scale factor; only the multiplier 1/(2α) in the last expression of (52) depends on α. Since for any symmetric non-negative definite matrix the eigenvalues and singular values are identical, by virtue of the Rayleigh–Ritz inequalities [58] (Lemma I):
λ_min(X) x^T x ≤ x^T X x ≤ λ_max(X) x^T x,
which hold for any x ∈ ℝ^m and any symmetric matrix X = X^T ∈ ℝ^{m,m}, where λ_min(X) and λ_max(X) are the minimal and maximal eigenvalues of the matrix X, Equation (52) implies the following estimations:
(1/(2α)) σ_min(Γ_1) ‖g_K‖₂² ≤ ‖H_K(τ,α)‖₂² ≤ (1/(2α)) σ_1(Γ_1) ‖g_K‖₂²,
where σ 1 ( Γ 1 ) and σ m i n ( Γ 1 ) denote the largest and the minimal singular values of matrix Γ 1 (53). Thus, the next result is derived.
Proposition 2.
For an arbitrary time-scale factor  α  and arbitrary vector of model parameters  g K , the following inequalities hold:
√(σ_min(Γ_1)/(2α)) ‖g_K‖₂ ≤ ‖H_K(τ,α)‖₂ ≤ √(σ_1(Γ_1)/(2α)) ‖g_K‖₂.
The square roots of the singular values σ_1(Γ_1) and σ_min(Γ_1) for K = 5, 6, …, 12 are summarized in Table 2. However, the lower bound of this norm is useful only for small K and small time-scale factors.
Since σ_1(Γ_1) grows with K, the greater the number of model summands, the greater the time-scaling factor should be to achieve a pre-assumed multiplier √(σ_1(Γ_1)/(2α)) in the estimation (55). In [59], a decreasing sequence of upper bounds for the largest singular value of a non-negative definite square matrix is constructed, given by Equation (19) in [59]. This result applied to the K × K matrix Γ_1 means that
ψ_n(Γ_1) = tr(Γ_1)/K + [ ((K−1)^(2^n − 1) / ((K−1)^(2^n − 1) + 1)) tr[(Γ_1 − (tr(Γ_1)/K) I_{K,K})^(2^n)] ]^(1/2^n),
is a decreasing sequence of upper bounds for σ 1 ( Γ 1 ) . The right inequality in (55) can be weakened to the following:
Proposition 3.
For an arbitrary time-scale factor  α  and arbitrary vector of model parameters  g K , the sequence of inequalities hold:
‖H_K(τ,α)‖₂ ≤ √(ψ_n(Γ_1)/(2α)) ‖g_K‖₂
for  n = 1 , 2 , , with the coefficients  ψ n ( Γ 1 )  (56).
In Table A2 in Appendix B, the sequences ψ_n(Γ_1) are summarized for n = 1, 2, …, 10 and K = 4, 5, …, 12 model summands; for comparison, in the last row, the values σ_1(Γ_1) are given. The sequence ψ_n(Γ_1) quickly decreases to σ_1(Γ_1); already the sixth to eighth estimates equal σ_1(Γ_1). However, it should be remembered that the right inequality in (55) and inequality (57) give only upper bounds of the norm ‖H_K(τ,α)‖₂.
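The sequence (56) is cheap to evaluate, since it needs only traces of powers of Γ_1. A Python sketch, assuming the general element of Γ_1 (53) reads [Γ_1]_{kj} = (e/2)^(k+j) k^k j^j / (k+j)! with 0⁰ = 1 (as read off from (53)); the matrix sizes here are small enough for plain list-based linear algebra.

```python
import math

def gamma1(K):
    # K x K matrix Gamma_1: entry (k, j) = (e/2)^(k+j) * k^k * j^j / (k+j)!
    p = lambda k: 1.0 if k == 0 else float(k) ** k
    return [[(math.e / 2.0) ** (k + j) * p(k) * p(j) / math.factorial(k + j)
             for j in range(K)] for k in range(K)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][t] * B[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

def psi(n, A):
    # n-th upper bound (56) for the largest singular value of A
    K = len(A)
    m = trace(A) / K
    B = [[A[i][j] - (m if i == j else 0.0) for j in range(K)] for i in range(K)]
    P = B
    for _ in range(n):              # P = B^(2^n) by repeated squaring
        P = matmul(P, P)
    p = 2 ** n
    c = (K - 1) ** (p - 1) / ((K - 1) ** (p - 1) + 1.0)
    return m + (c * trace(P)) ** (1.0 / p)

def sigma1(A, iters=300):
    # Power iteration; for the symmetric positive definite Gamma_1 this
    # converges (from below) to the largest singular value
    K = len(A)
    v = [1.0] * K
    nrm = 1.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(K)) for i in range(K)]
        nrm = math.sqrt(sum(x * x for x in w))
        v = [x / nrm for x in w]
    return nrm
```

Running psi(n, gamma1(K)) for n = 1, 2, 3 reproduces the quick decrease toward σ_1(Γ_1) reported in Table A2.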
To summarize, the smoothness of the optimal solution g ¯ K λ ( α ) (27) of discrete regularized problem (26) guarantees that the fluctuations in the respective spectrum of relaxation, in particular the resulting spectrum of relaxation H ^ K ( τ ) (46), are also bounded. The time-scale factor α also affects the smoothness of the spectrum model.

3.6.2. Convergence and Noise Robustness

The relaxation spectrum recovery from experimental data is an inverse ill-posed problem, in which the identification index refers to the measured relaxation modulus, but not directly to the unknown relaxation spectrum H ( τ ) . Therefore, we cannot estimate the model error H ( τ ) H K ( τ , α ) 2 directly. As a reference point for the model H K ( τ , α ) (6) and (27), we will consider the model of the spectrum that we would obtain for the same time-scale factor α and regularization parameter λ on the basis of ideal (undisturbed) measurements of the relaxation modulus:
H̃_K(τ,α) = Σ_{k=0}^{K−1} g_k^λ(α) h_k(τ,α),
where g K λ ( α ) is the vector of the regularized solution of (26) given by (compare (27))
g_K^λ(α) = (Φ_{N,K}^T(α) Φ_{N,K}(α) + λ I_{K,K})⁻¹ Φ_{N,K}^T(α) G_N
for noise-free measurements of relaxation modulus G N = [ G ( t 1 ) G ( t N ) ] T ; c.f., Equation (24). In Appendix A.8, the following estimations are derived:
Proposition 4.
For an arbitrary time-scale factor  α  and arbitrary regularization parameter  λ , the error between the relaxation time spectrum models  H K ( τ , α )  (6) and (27) and  H ˜ K ( τ , α )  (58) and (59) is estimated using the following inequality:
‖H_K(τ,α) − H̃_K(τ,α)‖₂ ≤ √(σ_1(Γ_1)/(2α)) [ Σ_{i=1}^{r(α)} [σ_i(α)]⁴ / {[σ_i(α)]² + λ}⁴ ]^(1/4) ‖z_N‖₂,
where  z N = [ z ( t 1 ) z ( t N ) ] T  is the vector of measurement noises.
According to inequality (60), the accuracy of the spectrum approximation depends both on the measurement noises and the regularization parameter, and on the singular values σ_1(α), …, σ_{r(α)}(α) of the matrix Φ_{N,K}(α) (47), which depend on the time-scale factor. By (60), the spectrum H_K(τ,α) tends to the noise-free spectrum H̃_K(τ,α) at each relaxation time τ at which both are continuous, linearly with respect to the norm ‖z_N‖₂, as ‖z_N‖₂ → 0.
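The right-hand side of (60) is directly computable from the SVD quantities, which makes it usable as an a posteriori error certificate. A small Python helper, assuming the leading factor is √(σ_1(Γ_1)/(2α)) consistently with (55), and that the largest singular value of Γ_1 and the noise norm are supplied by the caller:

```python
import math

def spectrum_error_bound(sigma, lam, alpha, s1_gamma1, z_norm):
    # Right-hand side of (60):
    #   sqrt(s1(Gamma_1)/(2*alpha)) * (sum_i s_i^4/(s_i^2+lam)^4)^(1/4) * ||z||_2
    s = sum(si ** 4 / (si ** 2 + lam) ** 4 for si in sigma)
    return math.sqrt(s1_gamma1 / (2.0 * alpha)) * s ** 0.25 * z_norm
```

As expected, the bound shrinks monotonically as the regularization parameter λ grows, scales linearly with ‖z_N‖₂, and vanishes for noise-free data.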

3.7. Example

Consider a viscoelastic material of the relaxation spectrum described by the double-mode Gauss-like distribution:
H(τ) = [ β_1 e^(−(1/τ − m_1)²/q_1) + β_2 e^(−(1/τ − m_2)²/q_2) ] / τ,
where the parameters are as follows: β_1 = 467 Pa·s, m_1 = 0.0037 s⁻¹, q_1 = 1.124261 × 10⁻⁶ s⁻², β_2 = 39 Pa·s, m_2 = 0.045 s⁻¹ and q_2 = 1.173 × 10⁻³ s⁻². Spectra of this type are tested at the stage of developing new identification methods, for example, in [23] (Figure 2), [24] (Figures 9, 11 and 17) and [25] (Figures 2, 3, 6, 7–11 and 14), because they describe the viscoelastic properties of various materials, in particular: polymers [60] (Figures 4b and 8b), polyacrylamide gels [28] (Figure A4), glass [61] (Figure 2), wood [13] and sugar beet [18] (Figure 2). It is shown in Appendix A.9 that the corresponding ‘real’ relaxation modulus is
G(t) = (√π/2) [ β_1 √q_1 e^(t² q_1/4 − m_1 t) erfc( t √q_1/2 − m_1/√q_1 ) + β_2 √q_2 e^(t² q_2/4 − m_2 t) erfc( t √q_2/2 − m_2/√q_2 ) ],
where the complementary error function e r f c ( x ) is defined by [62]:
erfc(x) = (2/√π) ∫_x^∞ e^(−z²) dz.
In the experiment, N = 5000 sampling instants t_i were generated with a constant period in the time interval T = [0, 1550] seconds, selected in view of the course of the modulus G(t) (62). Additive measurement noises z(t_i) were selected independently by random choice with uniform distribution on the interval [−0.005, 0.005] Pa. The ‘real’ spectrum (61), the modulus (62) and the basis functions h_k(τ,α), ϕ_k(t,α) were simulated in Matlab R2022a using the special functions besselk and erfc. For the singular value decomposition, the procedure svd was applied. New calculation algorithms of the modified Bessel function of the second kind are constantly being developed; recently, an algorithm for parallel calculation was proposed [63].
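The ‘real’ modulus (62) involves only the complementary error function, so synthetic data of the kind used in this example can also be generated with the Python standard library alone (the paper used Matlab). A sketch with the parameter values of the example and the uniform ±0.005 Pa noise model described above; the sample count and seed are illustrative:

```python
import math
import random

B1, M1, Q1 = 467.0, 0.0037, 1.124261e-6   # beta_1 [Pa s], m_1 [1/s], q_1 [1/s^2]
B2, M2, Q2 = 39.0, 0.045, 1.173e-3

def G(t):
    # 'Real' relaxation modulus (62) of the double-mode Gauss-like spectrum (61)
    out = 0.0
    for b, m, q in ((B1, M1, Q1), (B2, M2, Q2)):
        sq = math.sqrt(q)
        out += b * sq * math.exp(t * t * q / 4.0 - m * t) \
                 * math.erfc(t * sq / 2.0 - m / sq)
    return out * math.sqrt(math.pi) / 2.0

def noisy_samples(n=5000, t_max=1550.0, eps=0.005, seed=1):
    # Equidistant sampling of G with additive uniform noise on [-eps, eps]
    rng = random.Random(seed)
    ts = [i * t_max / (n - 1) for i in range(n)]
    return ts, [G(t) + rng.uniform(-eps, eps) for t in ts]
```

Since G(t) is a relaxation modulus, it is positive and monotonically decreasing on the whole sampling interval, which is a useful sanity check on the generated data.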

3.7.1. Optimal Models

For K = 3, 4, …, 12, the optimal time-scaling factors α_opt were determined via the proposed two-level identification scheme and are given in Table 3 together with the related regularization parameters λ_GCV(α_opt). Next, the vectors of optimal model parameters ĝ_K = ĝ_K^{λ_GCV(α_opt)} (45) were computed and are given in Table A3 in Appendix B; the elements of these vectors take both negative and positive values. The square norms ‖ĝ_K‖₂ and ‖H_K^opt(τ)‖₂ are also enclosed in Table 3 as measures of the solution smoothness. The norm ‖H_K^opt(τ)‖₂ of the optimal model H_K^opt(τ) (46) was determined through (52) based on ĝ_K. For the ‘real’ spectrum H(τ) (61), the norm ‖H(τ)‖₂ = 19.2562 Pa. The approximation error between H(τ) (61) and its models H_K^opt(τ) (46) was estimated via the relative integral error ER_1(K), defined by:
ER_1(K)² = ‖H(τ) − H_K^opt(τ)‖₂² / ‖H(τ)‖₂² = ∫_0^∞ [H(τ) − H_K^opt(τ)]² dτ / ∫_0^∞ H(τ)² dτ,
which is expressed in a percentage in the penultimate column of Table 3. For the bimodal spectrum, the values of this error of the order of 33% are not surprising in the context of the ill-conditioned inverse problem. To compare the approximations H K o p t ( τ ) (46) obtained for successive integer K , the square index E R 2 ( K ) defined by the distance:
ER_2(K) = ‖H_{(K+1)}^opt(τ) − H_K^opt(τ)‖₂² = ∫_0^∞ [H_{(K+1)}^opt(τ) − H_K^opt(τ)]² dτ,
is applied; see the last column of Table 3. The optimal indices Q_N^opt(α_opt) (37) are given in Table 3, too. The optimal models H_K^opt(τ) (46) and the ‘real’ spectrum H(τ) (61) are plotted in Figure 4 for K = 3, 4, …, 12. The analysis of the data from Table 3 and, in particular, of Figure 4b–d shows that increasing the number of model components above K = 8 does not significantly affect the course of the model H_K^opt(τ) or its accuracy. This is emphasized by the values of the ER_1(K) (64) and ER_2(K) (65) indices; in particular, the integral square error ER_2(K) between the successive spectrum approximations H_K^opt(τ) and H_{(K+1)}^opt(τ), related to the square norm ‖H(τ)‖₂², does not exceed 0.046% for K ≥ 8, and for K ≥ 10 it is of the order of 0.009%.
In Figure 5, the optimal models of the relaxation modulus G K ( t , α o p t ) computed for g ^ K (45) according to (7) are plotted, where the measurements G ¯ ( t i ) of the ‘real’ modulus G ( t ) (62) are also marked. The optimal models G K ( t , α o p t ) have been well fitted to the experimental data, as indicated by the mean-square model errors Q N o p t ( α o p t ) / N , which for 8 K 12 vary in the range from 4.889 · 10 9   Pa 2 to 5.0482 · 10 9   Pa 2 . Thus, models G K ( t , α o p t ) for different K practically coincide with the measurement points and with each other; see Figure 5.

3.7.2. Optimal and Sub-Optimal Time-Scale Factors

The identification index Q_N^opt(α) (35) minimized in the upper-level task (37) is plotted as a function of the time-scale factor α in Figure 6a–d for K = 6, 8, 10, 12. The optimal parameters α_opt are marked. The analysis of these plots suggests that there is a neighborhood of α_opt, namely a closed interval [α_min, α_max] with α_min ≤ α_opt ≤ α_max, such that for each α ∈ [α_min, α_max] the identification index Q_N^opt(α) differs from the minimal Q_N^opt(α_opt) by not more than ε_α · Q_N^opt(α_opt), i.e.,
Q N o p t ( α ) Q N o p t ( α o p t ) ε α · Q N o p t ( α o p t ) ,
where ε_α is a small positive number. Inequality (66) means that the deterioration of the model error is not greater than the fraction ε_α of the optimal Q_N^opt(α_opt). Any parameter α from the interval [α_min, α_max] is a suboptimal time-scale factor. In Figure 6, ε_α = 0.1, which means a 10% level of sub-optimality, is assumed, and the values of α_min and α_max are marked on the small subfigures. They are also given in Table 3. With the increase in the number of model components, the optimal parameter α_opt increases and the range of sub-optimal time-scale factors shifts. For an exemplary number of model components, K = 9, the optimal model H_K^opt(τ) (46) and the models H_K(τ,α) (6), optimal in the sense of the lower-level task (28) for the suboptimal parameters α_min and α_max, are plotted in Figure 7a; the ‘real’ spectrum H(τ) (61) is presented, too. The corresponding optimal regularization parameters are as follows: λ_GCV(α_min) = 0.0167 and λ_GCV(α_max) = 0.0094. For the respective vectors of optimal model parameters, we have ‖ĝ_K^{λ_GCV(α_min)}‖₂ = 2.5114 Pa and ‖ĝ_K^{λ_GCV(α_max)}‖₂ = 2.5119 Pa. The norms of the relaxation spectra are ‖H_K(τ,α_min)‖₂ = 17.4472 Pa and ‖H_K(τ,α_max)‖₂ = 20.2069 Pa. In Figure 7b, the ‘real’ modulus G(t) (62) and the models G_K(t,α_opt) (7) and (45), together with G_K(t,α_min) and G_K(t,α_max) computed for ĝ_K^{λ_GCV(α_min)} and ĝ_K^{λ_GCV(α_max)} according to (7), are plotted. Although the relaxation moduli are almost identical (Figure 7b), the spectrum models differ (Figure 7a), which emphasizes the importance of the optimal selection of the time-scale factor. The overly strong smoothing of the relaxation spectrum models, combined with a simultaneous very good fit of the relaxation modulus models to the experimental data, indicates the need to modify the quality index in the regularized task (26).
For example, the square term ‖g_K‖₂² in the second component of the objective function can be replaced by the quadratic form g_K^T W g_K with a positive definite weight matrix W, or a regularized weighted least-squares approach [41] can be applied. This will be the subject of further work.

4. Conclusions

In this paper, a new robust hierarchical algorithm for the identification of the relaxation time spectrum, which combines the technique of an expansion of a function into a series of basis functions with the least-squares regularized identification and the optimal choice of the time-scale factor, has been derived. The task of determining the best ‘regularized’ model was solved at the lower level, while the optimal time-scale factor was selected on the upper level of the identification scheme. The continuous spectrum of relaxation times was approximated by finite series of power–exponential basis functions, while the components of the relaxation modulus model were proven to be described by the product of power of time and the modified Bessel function of the second kind. In the present paper, the problem of the optimal choice of the time-scale factor to ensure the best fit of the model to experimental data has been formulated and solved for the first time in the context of the relaxation spectrum identification.
The necessary optimality conditions both for the optimal regularized least-squares identification task and the problem of the optimal selection of the time-scale factor were derived in the form of nonlinear algebraic equations. The main properties of the basis functions of the relaxation spectrum and modulus models, their positive definiteness, convenient upper bounds, monotonicity, asymptotic properties and wide range of applicability for different time-scale factors indicated the possibility of using the proposed model and identification algorithm to determine the spectrum of a wide class of viscoelastic materials.
The overly strong smoothing of the relaxation spectrum models in the example, with a very good fit to the experimental data of the relaxation modulus models, indicates the need to modify the quality index in the lower level identification task. An introduction of a weight matrix in the second component of the objective function or a direct application of regularized weighted least-squares should be investigated. Another solution is the selection of such a spectrum model which guarantees the assumed level of smoothing and optimal adjustment to the relaxation modulus measurement data. These approaches will be the subject of further work.
The presented scheme of the relaxation spectrum identification can be easily modified for retardation spectrum recovery from creep compliance measurements obtained in the standard creep test.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A

Appendix A.1. Proof of Theorem 1

To prove the theorem, we first derive the following integral formula:
ℐ = ∫_0^∞ τ^(v−1) e^(−ατ) e^(−t/τ) dτ = 2 (t/α)^(v/2) K_v(2√(αt)),
where v ∈ ℝ.
By applying in the integral on the left-hand side of (A1) the substitution ατ = √(αt) e^y, we have:
ℐ = (t/α)^(v/2) ∫_{−∞}^{∞} e^(y(v−1)) e^(−√(αt) e^y) e^(−√(αt) e^(−y)) e^y dy,
which can be expressed as
ℐ = (t/α)^(v/2) ∫_{−∞}^{∞} e^(vy) e^(−2√(αt) cosh(y)) dy.
Since cosh(y) is an even function, the integral in (A2) is rewritten as
ℐ = (t/α)^(v/2) [ ∫_0^∞ e^(−vy) e^(−2√(αt) cosh(y)) dy + ∫_0^∞ e^(vy) e^(−2√(αt) cosh(y)) dy ],
or equivalently
ℐ = 2 (t/α)^(v/2) ∫_0^∞ e^(−2√(αt) cosh(y)) cosh(vy) dy,
whence, by the following integral representation formula for the Bessel functions [45,46] (Equation (5) on p. 181 in [46]):
K_v(x) = ∫_0^∞ e^(−x cosh(y)) cosh(vy) dy,
where x > 0 and v ∈ ℝ, Formula (A1) directly follows.
Now, we prove Formula (9). Since, by (3) and (8), the basis functions
ϕ_k(t,α) = (α/k)^k e^k ∫_0^∞ τ^(k−1) e^(−ατ) e^(−t/τ) dτ,
from (A1) we have
ϕ_k(t,α) = 2 (α/k)^k e^k (t/α)^(k/2) K_k(2√(αt)),
whence, Formula (9) follows. □
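Formula (A1) can also be verified numerically: both sides are computed independently, the left by direct quadrature over τ and the right via the representation (A3) for K_v. A Python sketch (quadrature parameters are illustrative, chosen so that the truncated tails are negligible):

```python
import math

def kv(v, x, ymax=12.0, n=4000):
    # K_v(x) from the integral representation (A3), Simpson's rule
    h = ymax / n
    s = 0.0
    for i in range(n + 1):
        y = i * h
        w = 1.0 if i in (0, n) else (4.0 if i % 2 == 1 else 2.0)
        s += w * math.exp(-x * math.cosh(y)) * math.cosh(v * y)
    return s * h / 3.0

def lhs(v, t, a, tau_max=80.0, n=100000):
    # integral_0^inf tau^(v-1) exp(-a*tau - t/tau) dtau, midpoint rule;
    # the integrand vanishes at tau -> 0+ and decays like exp(-a*tau)
    h = tau_max / n
    s = 0.0
    for i in range(n):
        tau = (i + 0.5) * h
        s += tau ** (v - 1.0) * math.exp(-a * tau - t / tau)
    return s * h

def rhs(v, t, a):
    # right-hand side of (A1): 2 * (t/a)^(v/2) * K_v(2*sqrt(a*t))
    return 2.0 * (t / a) ** (v / 2.0) * kv(v, 2.0 * math.sqrt(a * t))
```

Agreement of lhs and rhs for several real orders v (integer and non-integer) illustrates the identity underlying the closed-form model (9).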

Appendix A.2. Algebraic Matrix Properties

In this appendix, a new differential matrix property, used to obtain the optimality conditions and to simplify numerical computations, is derived. First, for convenience, some matrix identities and inequalities known in the literature are also provided.

Appendix A.2.1. Matrix Identities and Inequalities

The following identities hold for arbitrary matrices A , B , C and D [57] (Equations (6.4.2)), (2.1.1) and (2.1.4), in order:
tr[AB] = tr[BA],
(AB)⁻¹ = B⁻¹ A⁻¹,
(BC + A)⁻¹ = A⁻¹ − A⁻¹ B (I + C A⁻¹ B)⁻¹ C A⁻¹,
provided that respective matrices are invertible.
The following differential properties hold for arbitrary differentiable matrix functions A ( x ) and B ( x ) [57] (Equations (P2.1.2a) and (P2.1.2b), respectively):
∂/∂x [A(x) B(x)] = (∂A(x)/∂x) B(x) + A(x) (∂B(x)/∂x),
∂/∂x [A(x)⁻¹] = −A(x)⁻¹ (∂A(x)/∂x) A(x)⁻¹,
assuming that matrix A ( x ) is invertible.

Appendix A.2.2. New Matrix Property

Let us now prove the following result.
Proposition A1.
Let  A ( x )  be a symmetric positive definite  m × m  matrix function differentiable with respect to the real argument  x . Then
∂ tr[A(x)⁻¹]/∂x = −tr[ A(x)⁻¹ (∂A(x)/∂x) A(x)⁻¹ ].
Proof of Proposition A1.
Since the trace of a matrix is a linear function, bearing in mind the definition of the matrix derivative with respect to the scalar argument x [57] (Section 2.1.8), we have
∂ tr[A(x)⁻¹]/∂x = tr[ ∂/∂x [A(x)⁻¹] ],
which, in view of (A7), directly yields (A8); the proposition is proved. □
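Proposition A1 is easy to validate on a small example: take a parameter-dependent symmetric positive definite 2×2 matrix and compare (A8) with a finite-difference derivative of tr[A(x)⁻¹]. A Python sketch (the particular A(x) is an arbitrary test case, positive definite for x > 0):

```python
def inv2(A):
    # Inverse of a 2x2 matrix
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace2(A):
    return A[0][0] + A[1][1]

def A_of(x):
    # hypothetical test matrix: symmetric, positive definite for x > 0
    return [[2.0 + x * x, x], [x, 1.0 + x]]

def dA_of(x):
    # elementwise derivative dA/dx
    return [[2.0 * x, 1.0], [1.0, 1.0]]

def dtrace_inv(x):
    # Right-hand side of (A8): -tr[A^-1 (dA/dx) A^-1]
    Ai = inv2(A_of(x))
    return -trace2(matmul2(matmul2(Ai, dA_of(x)), Ai))
```

The finite-difference quotient of trace2(inv2(A_of(x))) reproduces dtrace_inv(x) to the expected O(h²) accuracy.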

Appendix A.3. Proof of Theorem 2

First, we express Ξ(λ,α) (30) in a more useful equivalent form. By the matrix identity (A5), for λ > 0 we have
λ (Φ_{N,K}(α) Φ_{N,K}^T(α) + λ I_{N,N})⁻¹ = I_{N,N} − Φ_{N,K}(α) (λ I_{K,K} + Φ_{N,K}^T(α) Φ_{N,K}(α))⁻¹ Φ_{N,K}^T(α),
whereas, by (30) and bearing in mind the notation (32), we obtain
Ξ(λ,α) = λ (Ψ(α) + λ I_{N,N})⁻¹.
Since, by (30), the GCV functional V_GCV(λ,α) (29) is expressed as
V_GCV(λ,α) = Ḡ_N^T Ξ(λ,α) Ξ(λ,α) Ḡ_N / tr[Ξ(λ,α)]²,
its derivative with respect to λ is given by
∂V_GCV(λ,α)/∂λ = { ∂/∂λ [Ḡ_N^T Ξ(λ,α) Ξ(λ,α) Ḡ_N] tr[Ξ(λ,α)] − 2 Ḡ_N^T Ξ(λ,α) Ξ(λ,α) Ḡ_N (∂ tr[Ξ(λ,α)]/∂λ) } / tr[Ξ(λ,α)]³.
Now, we determine the partial derivatives which appear in the numerator of the right-hand side of (A10). By Formula (A6), we have
∂/∂λ [Ḡ_N^T Ξ(λ,α) Ξ(λ,α) Ḡ_N] = 2 Ḡ_N^T Ξ(λ,α) (∂Ξ(λ,α)/∂λ) Ḡ_N.
Equation (A9) yields
∂Ξ(λ,α)/∂λ = (Ψ(α) + λ I_{N,N})⁻¹ + λ ∂/∂λ [(Ψ(α) + λ I_{N,N})⁻¹],
where the derivative of the inverse matrix can be found due to property (A7), which, when applied to (A12), results in
∂Ξ(λ,α)/∂λ = (Ψ(α) + λ I_{N,N})⁻¹ − λ (Ψ(α) + λ I_{N,N})⁻¹ (Ψ(α) + λ I_{N,N})⁻¹,
whereas, after algebraic manipulations, we obtain
∂Ξ(λ,α)/∂λ = (Ψ(α) + λ I_{N,N})⁻² Ψ(α).
Combining (A11), (A13) and (A9) yields
∂/∂λ [Ḡ_N^T Ξ(λ,α) Ξ(λ,α) Ḡ_N] = 2 λ Ḡ_N^T (Ψ(α) + λ I_{N,N})⁻³ Ψ(α) Ḡ_N.
To find ∂ tr[Ξ(λ,α)]/∂λ, expression (A9) is used, which implies
tr[Ξ(λ,α)] = λ tr[(Ψ(α) + λ I_{N,N})⁻¹],
whence
∂ tr[Ξ(λ,α)]/∂λ = tr[(Ψ(α) + λ I_{N,N})⁻¹] + λ ∂ tr[(Ψ(α) + λ I_{N,N})⁻¹]/∂λ.
By Proposition A1 and Equation (A8), we have
∂ tr[(Ψ(α) + λ I_{N,N})⁻¹]/∂λ = −tr[(Ψ(α) + λ I_{N,N})⁻²];
therefore, the derivative ∂ tr[Ξ(λ,α)]/∂λ (A16) takes the form
∂ tr[Ξ(λ,α)]/∂λ = tr[(Ψ(α) + λ I_{N,N})⁻¹] − λ tr[(Ψ(α) + λ I_{N,N})⁻²].
Now, we can derive the necessary optimality condition. Since, in view of (A9), the matrix Ξ(λ,α) is positive definite for any λ > 0, the derivative ∂V_GCV(λ,α)/∂λ, given by the fraction on the right-hand side of (A10), is equal to zero if and only if its numerator is equal to zero, i.e.,
∂/∂λ [Ḡ_N^T Ξ(λ,α) Ξ(λ,α) Ḡ_N] tr[Ξ(λ,α)] = 2 Ḡ_N^T Ξ(λ,α) Ξ(λ,α) Ḡ_N (∂ tr[Ξ(λ,α)]/∂λ).
By (A18), (A14), (A9) and (A17), we have
2 λ² Ḡ_N^T (Ψ(α) + λ I_{N,N})⁻³ Ψ(α) Ḡ_N tr[(Ψ(α) + λ I_{N,N})⁻¹] = 2 λ² Ḡ_N^T (Ψ(α) + λ I_{N,N})⁻² Ḡ_N { tr[(Ψ(α) + λ I_{N,N})⁻¹] − λ tr[(Ψ(α) + λ I_{N,N})⁻²] },
which is equivalent to
Ḡ_N^T (Ψ(α) + λ I_{N,N})⁻³ Ψ(α) Ḡ_N tr[(Ψ(α) + λ I_{N,N})⁻¹] = Ḡ_N^T (Ψ(α) + λ I_{N,N})⁻² Ḡ_N { tr[(Ψ(α) + λ I_{N,N})⁻¹] − λ tr[(Ψ(α) + λ I_{N,N})⁻²] },
whence, by standard algebraic manipulations, Equation (31) follows directly. The theorem is proved. □

Appendix A.4. Derivation of the Formula (33)

To derive Formula (33), describing the derivative ∂V_GCV(λ,α)/∂λ directly as a function of the matrices Ψ(α) and Ḡ_N, let us consider the numerator of the right-hand side of (A10), i.e., the function
M(λ,α) = ∂/∂λ [Ḡ_N^T Ξ(λ,α) Ξ(λ,α) Ḡ_N] tr[Ξ(λ,α)] − 2 Ḡ_N^T Ξ(λ,α) Ξ(λ,α) Ḡ_N (∂ tr[Ξ(λ,α)]/∂λ),
which, by (A14), (A15), (A9) and (A17), is expressed as
M(λ,α) = 2 λ² Ḡ_N^T (Ψ(α) + λ I_{N,N})⁻³ Ψ(α) Ḡ_N tr[(Ψ(α) + λ I_{N,N})⁻¹] − 2 λ² Ḡ_N^T (Ψ(α) + λ I_{N,N})⁻² Ḡ_N { tr[(Ψ(α) + λ I_{N,N})⁻¹] − λ tr[(Ψ(α) + λ I_{N,N})⁻²] }.
Through tedious algebraic manipulations, the above expression is rewritten as
M(λ,α) = −2 λ³ Ḡ_N^T (Ψ(α) + λ I_{N,N})⁻³ Ḡ_N tr[(Ψ(α) + λ I_{N,N})⁻¹] + 2 λ³ Ḡ_N^T (Ψ(α) + λ I_{N,N})⁻² Ḡ_N tr[(Ψ(α) + λ I_{N,N})⁻²].
Now, (A10) combined with (A20) and (A15) yields Formula (33).
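The compact form of $M(\lambda,\alpha)$ can also be verified numerically. The sketch below (Python; random stand-ins, not the paper's code) assumes, per (A10), that $V_{GCV}(\lambda,\alpha) = \bar{G}_N^T\,\Xi\,\Xi\,\bar{G}_N / \mathrm{tr}[\Xi]^2$ with $\Xi = \lambda(\Psi+\lambda I)^{-1}$, so that $\partial V_{GCV}/\partial\lambda = M(\lambda,\alpha)/\mathrm{tr}[\Xi]^3$, and compares that against a finite difference.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 6
B = rng.standard_normal((N, N))
Psi = B @ B.T                     # stand-in for Psi(alpha)
G = rng.standard_normal(N)        # stand-in for the data vector G_N
I = np.eye(N)

def V(l):
    # GCV quotient with Xi(lambda) = lambda (Psi + lambda I)^{-1}
    Ainv = np.linalg.inv(Psi + l * I)
    Xi = l * Ainv
    return (G @ Xi @ Xi @ G) / np.trace(Xi) ** 2

lam = 0.5
Ainv = np.linalg.inv(Psi + lam * I)
A2, A3 = Ainv @ Ainv, Ainv @ Ainv @ Ainv
# closed form of the numerator M(lambda) derived above
M = (-2 * lam**3 * (G @ A3 @ G) * np.trace(Ainv)
     + 2 * lam**3 * (G @ A2 @ G) * np.trace(A2))
analytic = M / (lam * np.trace(Ainv)) ** 3   # dV/dlambda = M / tr[Xi]^3

h = 1e-6
numeric = (V(lam + h) - V(lam - h)) / (2 * h)
assert abs(analytic - numeric) < 1e-5
```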

Appendix A.5. Proof of Theorem 3

We first prove Equation (39) for k = 0 ; next, Formula (38) is derived.
Based on (10), property (19) and bearing in mind that $K_{-1}(x) = K_1(x)$ [54], we have
$\frac{d}{d\alpha}\left[ \phi_0(t,\alpha) \right] = -\sqrt{\frac{t}{\alpha}} \left[ K_{-1}\big(2\sqrt{\alpha t}\big) + K_1\big(2\sqrt{\alpha t}\big) \right] = -2\sqrt{\frac{t}{\alpha}}\, K_1\big(2\sqrt{\alpha t}\big),$
whence, by (9) and the last equation in (A21), we immediately obtain Equation (39).
Since, for an arbitrary $k \geq 1$, the basis function $\phi_k(t,\alpha)$ (9) is expressed as
$\phi_k(t,\alpha) = 2 \left( \frac{e}{2k} \right)^k \left( 2\sqrt{\alpha t} \right)^k K_k\big(2\sqrt{\alpha t}\big),$
by Formula (18), we immediately have
$\frac{d}{d\alpha}\left[ \phi_k(t,\alpha) \right] = -2\sqrt{\frac{t}{\alpha}}\; e^k \left( \frac{1}{2k} \right)^k \left( 2\sqrt{\alpha t} \right)^k K_{k-1}\big(2\sqrt{\alpha t}\big),$
which is rewritten as
$\frac{d}{d\alpha}\left[ \phi_k(t,\alpha) \right] = -2e \left( \frac{t}{k} \right) \left( \frac{k-1}{k} \right)^{k-1} e^{k-1} \left( \frac{\sqrt{\alpha t}}{k-1} \right)^{k-1} K_{k-1}\big(2\sqrt{\alpha t}\big),$
whence, in view of (9), Formula (38) follows directly, which completes the proof. □
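The $k = 0$ case proved above, $\frac{d}{d\alpha}\phi_0(t,\alpha) = -2\sqrt{t/\alpha}\,K_1(2\sqrt{\alpha t})$ with $\phi_0(t,\alpha) = 2K_0(2\sqrt{\alpha t})$, can be checked against a finite difference. In the Python sketch below (not from the paper), $K_\nu$ is evaluated through its standard integral representation $K_\nu(x) = \int_0^\infty e^{-x\cosh u}\cosh(\nu u)\,du$, so no special-function library is required.

```python
import numpy as np

def K(nu, x):
    # modified Bessel function of the second kind via its integral representation
    u = np.linspace(0.0, 20.0, 200001)
    f = np.exp(-x * np.cosh(u)) * np.cosh(nu * u)
    return float(np.sum(f[:-1] + f[1:]) * (u[1] - u[0]) / 2.0)  # trapezoid rule

def phi0(t, a):
    # basis function (9) for k = 0: phi_0(t, alpha) = 2 K_0(2 sqrt(alpha t))
    return 2.0 * K(0, 2.0 * np.sqrt(a * t))

t, a = 1.5, 0.4
analytic = -2.0 * np.sqrt(t / a) * K(1, 2.0 * np.sqrt(a * t))  # derivative formula, k = 0

h = 1e-5
numeric = (phi0(t, a + h) - phi0(t, a - h)) / (2.0 * h)
assert abs(analytic - numeric) < 1e-5
```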

Appendix A.6. Proof of Theorem 4

Let us find the derivative of the function $Q_N^{opt}(\alpha)$. Based on (36) and property (A6), and bearing in mind definition (42), we have
$\frac{d Q_N^{opt}(\alpha)}{d\alpha} = 2\lambda_{GCV}(\alpha)\frac{d\lambda_{GCV}(\alpha)}{d\alpha}\, \bar{G}_N^T\, \Upsilon(\alpha)^{-2}\, \bar{G}_N + 2\lambda_{GCV}^2(\alpha)\, \bar{G}_N^T \left[\frac{d}{d\alpha}\Upsilon(\alpha)^{-1}\right] \Upsilon(\alpha)^{-1}\, \bar{G}_N,$
whereas by property (A7), the next formula follows
$\frac{d Q_N^{opt}(\alpha)}{d\alpha} = 2\left[\lambda_{GCV}(\alpha)\frac{d\lambda_{GCV}(\alpha)}{d\alpha}\, \bar{G}_N^T - \lambda_{GCV}^2(\alpha)\, \bar{G}_N^T\, \Upsilon(\alpha)^{-1} \left[\frac{d}{d\alpha}\Upsilon(\alpha)\right]\right] \Upsilon(\alpha)^{-2}\, \bar{G}_N,$
where, by (42):
$\frac{d}{d\alpha}\Upsilon(\alpha) = \frac{d}{d\alpha}\Psi(\alpha) + \frac{d\lambda_{GCV}(\alpha)}{d\alpha}\, I_{N,N} = \Omega_{N,K}(\alpha) + \frac{d\lambda_{GCV}(\alpha)}{d\alpha}\, I_{N,N}.$
Substituting (A23) into (A22), we obtain
$\frac{d Q_N^{opt}(\alpha)}{d\alpha} = 2\lambda_{GCV}(\alpha)\frac{d\lambda_{GCV}(\alpha)}{d\alpha}\, \bar{G}_N^T\, \Upsilon(\alpha)^{-2}\, \bar{G}_N - 2\lambda_{GCV}^2(\alpha)\frac{d\lambda_{GCV}(\alpha)}{d\alpha}\, \bar{G}_N^T\, \Upsilon(\alpha)^{-3}\, \bar{G}_N - 2\lambda_{GCV}^2(\alpha)\, \bar{G}_N^T\, \Upsilon(\alpha)^{-1}\, \Omega_{N,K}(\alpha)\, \Upsilon(\alpha)^{-2}\, \bar{G}_N.$
Whence, if the parameter $\alpha_{opt}$ solves the minimization task (37), then $\lambda_{GCV}(\alpha_{opt})$ is a solution of the algebraic Equation (40), which results directly from equating the derivative $dQ_N^{opt}(\alpha)/d\alpha$ (A24) to zero. Inspection of (40) shows that two derivatives, $\Omega_{N,K}(\alpha) = \frac{d}{d\alpha}\Psi(\alpha)$ introduced in (A23) and $d\lambda_{GCV}(\alpha)/d\alpha$, are necessary to define (40) well.
By virtue of (32) and (A6), we obtain
$\frac{d}{d\alpha}\Psi(\alpha) = \left[\frac{d}{d\alpha}\Phi_{N,K}(\alpha)\right]\Phi_{N,K}^T(\alpha) + \Phi_{N,K}(\alpha)\left[\frac{d}{d\alpha}\Phi_{N,K}^T(\alpha)\right].$
The derivative $\frac{d}{d\alpha}\Phi_{N,K}(\alpha)$ of the matrix $\Phi_{N,K}(\alpha)$ follows from Theorem 3 and the structure of $\Phi_{N,K}(\alpha)$ (24). By (24), (38) and (39), bearing in mind that $0^0 = 1$, the matrix $\Theta_{N,K}(\alpha) = \frac{d}{d\alpha}\Phi_{N,K}(\alpha)$ is given by (44). Thus, $\frac{d}{d\alpha}\Psi(\alpha) = \Omega_{N,K}(\alpha)$ (A25) is expressed by (43).
The regularization parameter $\lambda_{GCV}(\alpha)$, which is defined by the minimization task (28), satisfies the necessary condition (31) from Theorem 2; thus, to evaluate the derivative $\frac{d\lambda_{GCV}(\alpha)}{d\alpha}$, Equation (31) is used. By (31), bearing in mind the notation (42), we have
$\bar{G}_N^T\, \Upsilon(\alpha)^{-3}\, \bar{G}_N\, \mathrm{tr}\big[\Upsilon(\alpha)^{-1}\big] = \bar{G}_N^T\, \Upsilon(\alpha)^{-2}\, \bar{G}_N\, \mathrm{tr}\big[\Upsilon(\alpha)^{-2}\big].$
Differentiating both sides of the above equation with respect to α , and bearing in mind the properties (A7) and (A8), we obtain
$\bar{G}_N^T\, \Upsilon(\alpha)^{-3}\left[\frac{d}{d\alpha}\Upsilon(\alpha)^{3}\right]\Upsilon(\alpha)^{-3}\, \bar{G}_N\, \mathrm{tr}\big[\Upsilon(\alpha)^{-1}\big] + \bar{G}_N^T\, \Upsilon(\alpha)^{-3}\, \bar{G}_N\, \mathrm{tr}\left[\Upsilon(\alpha)^{-1}\frac{d}{d\alpha}\Upsilon(\alpha)\,\Upsilon(\alpha)^{-1}\right] = \bar{G}_N^T\, \Upsilon(\alpha)^{-2}\left[\frac{d}{d\alpha}\Upsilon(\alpha)^{2}\right]\Upsilon(\alpha)^{-2}\, \bar{G}_N\, \mathrm{tr}\big[\Upsilon(\alpha)^{-2}\big] + \bar{G}_N^T\, \Upsilon(\alpha)^{-2}\, \bar{G}_N\, \mathrm{tr}\left[\Upsilon(\alpha)^{-2}\frac{d}{d\alpha}\Upsilon(\alpha)^{2}\,\Upsilon(\alpha)^{-2}\right],$
where the derivative $\frac{d}{d\alpha}\Upsilon(\alpha)$ is given by (A23). By (42) and property (A6), we have
$\frac{d}{d\alpha}\Upsilon(\alpha)^{2} = \left[\frac{d}{d\alpha}\Upsilon(\alpha)\right]\Upsilon(\alpha) + \Upsilon(\alpha)\frac{d}{d\alpha}\Upsilon(\alpha),$
and
$\frac{d}{d\alpha}\Upsilon(\alpha)^{3} = \left[\frac{d}{d\alpha}\Upsilon(\alpha)\right]\Upsilon(\alpha)^{2} + \Upsilon(\alpha)\frac{d}{d\alpha}\Upsilon(\alpha)^{2},$
whence
$\frac{d}{d\alpha}\Upsilon(\alpha)^{3} = \left[\frac{d}{d\alpha}\Upsilon(\alpha)\right]\Upsilon(\alpha)^{2} + \Upsilon(\alpha)\left[\frac{d}{d\alpha}\Upsilon(\alpha)\right]\Upsilon(\alpha) + \Upsilon(\alpha)^{2}\frac{d}{d\alpha}\Upsilon(\alpha).$
By (A28), bearing in mind that the matrices $\Upsilon(\alpha)$ and $\frac{d}{d\alpha}\Upsilon(\alpha)$ are symmetric, we have
$\bar{G}_N^T\, \Upsilon(\alpha)^{-3}\left[\frac{d}{d\alpha}\Upsilon(\alpha)^{3}\right]\Upsilon(\alpha)^{-3}\, \bar{G}_N = 2\, \bar{G}_N^T\, \Upsilon(\alpha)^{-3}\left[\frac{d}{d\alpha}\Upsilon(\alpha)\right]\Upsilon(\alpha)^{-1}\, \bar{G}_N + \bar{G}_N^T\, \Upsilon(\alpha)^{-2}\left[\frac{d}{d\alpha}\Upsilon(\alpha)\right]\Upsilon(\alpha)^{-2}\, \bar{G}_N.$
Similarly, by (A27), we obtain
$\bar{G}_N^T\, \Upsilon(\alpha)^{-2}\left[\frac{d}{d\alpha}\Upsilon(\alpha)^{2}\right]\Upsilon(\alpha)^{-2}\, \bar{G}_N = 2\, \bar{G}_N^T\, \Upsilon(\alpha)^{-2}\left[\frac{d}{d\alpha}\Upsilon(\alpha)\right]\Upsilon(\alpha)^{-1}\, \bar{G}_N.$
Combining (A29), (A30) and (A27), Equation (A26) is, after algebraic manipulations, rewritten as
$2\, \bar{G}_N^T\, \Upsilon(\alpha)^{-3}\left[\frac{d}{d\alpha}\Upsilon(\alpha)\right]\Upsilon(\alpha)^{-1}\, \bar{G}_N\, \mathrm{tr}\big[\Upsilon(\alpha)^{-1}\big] + \bar{G}_N^T\, \Upsilon(\alpha)^{-2}\left[\frac{d}{d\alpha}\Upsilon(\alpha)\right]\Upsilon(\alpha)^{-2}\, \bar{G}_N\, \mathrm{tr}\big[\Upsilon(\alpha)^{-1}\big] + \bar{G}_N^T\, \Upsilon(\alpha)^{-3}\, \bar{G}_N\, \mathrm{tr}\left[\Upsilon(\alpha)^{-1}\frac{d}{d\alpha}\Upsilon(\alpha)\,\Upsilon(\alpha)^{-1}\right] = 2\, \bar{G}_N^T\, \Upsilon(\alpha)^{-2}\left[\frac{d}{d\alpha}\Upsilon(\alpha)\right]\Upsilon(\alpha)^{-1}\, \bar{G}_N\, \mathrm{tr}\big[\Upsilon(\alpha)^{-2}\big] + \bar{G}_N^T\, \Upsilon(\alpha)^{-2}\, \bar{G}_N\, \mathrm{tr}\left[\Upsilon(\alpha)^{-2}\frac{d}{d\alpha}\Upsilon(\alpha)\,\Upsilon(\alpha)^{-1} + \Upsilon(\alpha)^{-1}\frac{d}{d\alpha}\Upsilon(\alpha)\,\Upsilon(\alpha)^{-2}\right].$
By (A23), the last equation can be expressed as follows:
$2\, \bar{G}_N^T\, \Upsilon(\alpha)^{-3}\left[\Omega_{N,K}(\alpha) + \frac{d\lambda_{GCV}(\alpha)}{d\alpha} I_{N,N}\right]\Upsilon(\alpha)^{-1}\, \bar{G}_N\, \mathrm{tr}\big[\Upsilon(\alpha)^{-1}\big] + \bar{G}_N^T\, \Upsilon(\alpha)^{-2}\left[\Omega_{N,K}(\alpha) + \frac{d\lambda_{GCV}(\alpha)}{d\alpha} I_{N,N}\right]\Upsilon(\alpha)^{-2}\, \bar{G}_N\, \mathrm{tr}\big[\Upsilon(\alpha)^{-1}\big] + \bar{G}_N^T\, \Upsilon(\alpha)^{-3}\, \bar{G}_N\, \mathrm{tr}\left[\Upsilon(\alpha)^{-1}\left[\Omega_{N,K}(\alpha) + \frac{d\lambda_{GCV}(\alpha)}{d\alpha} I_{N,N}\right]\Upsilon(\alpha)^{-1}\right] = 2\, \bar{G}_N^T\, \Upsilon(\alpha)^{-2}\left[\Omega_{N,K}(\alpha) + \frac{d\lambda_{GCV}(\alpha)}{d\alpha} I_{N,N}\right]\Upsilon(\alpha)^{-1}\, \bar{G}_N\, \mathrm{tr}\big[\Upsilon(\alpha)^{-2}\big] + \bar{G}_N^T\, \Upsilon(\alpha)^{-2}\, \bar{G}_N\, \mathrm{tr}\left[\Upsilon(\alpha)^{-2}\left[\Omega_{N,K}(\alpha) + \frac{d\lambda_{GCV}(\alpha)}{d\alpha} I_{N,N}\right]\Upsilon(\alpha)^{-1} + \Upsilon(\alpha)^{-1}\left[\Omega_{N,K}(\alpha) + \frac{d\lambda_{GCV}(\alpha)}{d\alpha} I_{N,N}\right]\Upsilon(\alpha)^{-2}\right],$
which, after painstaking algebraic manipulations, yields Equation (41), which completes the proof. □
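The matrix calculus step (A28), on which the proof rests, can be confirmed numerically for any symmetric matrix path. In the Python sketch below (not from the paper), a linear symmetric path stands in for $\Upsilon(\alpha)$, so that $\frac{d}{d\alpha}\Upsilon(\alpha)$ is a known constant matrix.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 5
C = rng.standard_normal((N, N)); C = C + C.T   # symmetric
D = rng.standard_normal((N, N)); D = D + D.T   # symmetric

def Y(a):
    return C + a * D          # symmetric matrix path; dY/da = D

a, h = 0.7, 1e-6
Ya = Y(a)
# expansion (A28): d(Y^3)/da = D Y^2 + Y D Y + Y^2 D
analytic = D @ Ya @ Ya + Ya @ D @ Ya + Ya @ Ya @ D
numeric = (np.linalg.matrix_power(Y(a + h), 3)
           - np.linalg.matrix_power(Y(a - h), 3)) / (2 * h)
assert np.allclose(analytic, numeric, atol=1e-4)
```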

Appendix A.7. Proof of Proposition 1

By (6), for any time-scale factor $\alpha$ and any vector of model parameters $g_K$, we have
$\left\| H_K(\tau,\alpha) \right\|_2^2 = \int_0^\infty \left[ \sum_{k=0}^{K-1} g_k\, h_k(\tau,\alpha) \right]^2 d\tau,$
which is rewritten as
$\left\| H_K(\tau,\alpha) \right\|_2^2 = \sum_{k=0}^{K-1} \sum_{j=0}^{K-1} g_k\, g_j\, \gamma_{kj}(\alpha),$
where the functions
$\gamma_{kj}(\alpha) = \int_0^\infty h_k(\tau,\alpha)\, h_j(\tau,\alpha)\, d\tau$
for $k, j = 0, 1, \ldots, K-1$.
By virtue of (A33), we have $\gamma_{kj}(\alpha) = \gamma_{jk}(\alpha)$ for any $k, j = 0, 1, \ldots, K-1$. The proof is based on the following integral formula [64] (Equation (3.351.3)):
$\int_0^\infty \tau^n e^{-2\alpha\tau}\, d\tau = \frac{n!}{(2\alpha)^{n+1}}.$
For $k = j = 0$, definition (A33) and (4) immediately imply
$\gamma_{00}(\alpha) = \int_0^\infty e^{-2\alpha\tau}\, d\tau = \frac{1}{2\alpha}.$
For $k = 0$ and $j = 1, \ldots, K-1$, by virtue of (A33), (3), (4) and (A34), we have
$\gamma_{0j}(\alpha) = \left( \frac{e\alpha}{j} \right)^j \int_0^\infty \tau^j e^{-2\alpha\tau}\, d\tau = \left( \frac{e\alpha}{j} \right)^j \frac{j!}{(2\alpha)^{j+1}} = \frac{1}{2\alpha} \left( \frac{e}{2} \right)^j \frac{j!}{j^j}.$
For $k, j = 1, \ldots, K-1$, through (A33), (3) and (A34), we obtain
$\gamma_{kj}(\alpha) = e^{k+j} \left( \frac{\alpha}{k} \right)^k \left( \frac{\alpha}{j} \right)^j \int_0^\infty \tau^{k+j} e^{-2\alpha\tau}\, d\tau = e^{k+j} \left( \frac{\alpha}{k} \right)^k \left( \frac{\alpha}{j} \right)^j \frac{(k+j)!}{(2\alpha)^{k+j+1}}.$
The last equation is expressed as
$\gamma_{kj}(\alpha) = \frac{1}{2\alpha} \left( \frac{e}{2} \right)^{k+j} \frac{(k+j)!}{k^k\, j^j}.$
Bearing in mind (A33), Equation (A32) is rewritten as the quadratic form
$\left\| H_K(\tau,\alpha) \right\|_2^2 = g_K^T\, \Gamma(\alpha)\, g_K,$
where the symmetric $K \times K$ matrix $\Gamma(\alpha) = [\gamma_{kj}(\alpha)]$, composed of the functions $\gamma_{kj}(\alpha)$ defined by (A33), is, through (A35)–(A37), as follows:
$\Gamma(\alpha) = \frac{1}{2\alpha} \begin{bmatrix} 1 & \frac{e}{2} & \cdots & \left(\frac{e}{2}\right)^j \frac{j!}{j^j} & \cdots & \left(\frac{e}{2}\right)^{K-1} \frac{(K-1)!}{(K-1)^{K-1}} \\ \frac{e}{2} & 2\left(\frac{e}{2}\right)^2 & \cdots & \left(\frac{e}{2}\right)^{1+j} \frac{(1+j)!}{j^j} & \cdots & \left(\frac{e}{2}\right)^{K} \frac{K!}{(K-1)^{K-1}} \\ \vdots & \vdots & & \vdots & & \vdots \\ \left(\frac{e}{2}\right)^k \frac{k!}{k^k} & \left(\frac{e}{2}\right)^{k+1} \frac{(k+1)!}{k^k} & \cdots & \left(\frac{e}{2}\right)^{k+j} \frac{(k+j)!}{k^k j^j} & \cdots & \left(\frac{e}{2}\right)^{k+K-1} \frac{(k+K-1)!}{k^k (K-1)^{K-1}} \\ \vdots & \vdots & & \vdots & & \vdots \\ \left(\frac{e}{2}\right)^{K-1} \frac{(K-1)!}{(K-1)^{K-1}} & \left(\frac{e}{2}\right)^{K} \frac{K!}{(K-1)^{K-1}} & \cdots & \left(\frac{e}{2}\right)^{K-1+j} \frac{(K-1+j)!}{(K-1)^{K-1} j^j} & \cdots & \left(\frac{e}{2}\right)^{2K-2} \frac{(2K-2)!}{(K-1)^{2(K-1)}} \end{bmatrix}.$
Since matrix Γ ( α ) can be expressed as Γ ( α ) = 1 2 α Γ 1 , where Γ 1 is given by (53), Equation (52) is derived.
According to (A38) and (A31), the quadratic form $g_K^T \Gamma(\alpha) g_K$ is expressed as
$g_K^T\, \Gamma(\alpha)\, g_K = \int_0^\infty \left[ \sum_{k=0}^{K-1} g_k\, h_k(\tau,\alpha) \right]^2 d\tau.$
Thus, $g_K^T \Gamma(\alpha) g_K \geq 0$ for an arbitrary vector $g_K$, and $g_K^T \Gamma(\alpha) g_K = 0$ if and only if $\sum_{k=0}^{K-1} g_k h_k(\tau,\alpha) = 0$ for almost all $\tau > 0$. Since the basis functions $h_k(\tau,\alpha)$ are independent, the last equality holds if and only if $g_k = 0$ for all $k = 0, 1, \ldots, K-1$, i.e., only if the vector $g_K = \mathbf{0}$, which yields the positive definiteness of $\Gamma(\alpha)$. Whence, $\Gamma_1 = 2\alpha\, \Gamma(\alpha)$ is positive definite, too; this finishes the proof. □
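The closed-form entries of $\Gamma(\alpha)$ derived above, and the positive definiteness just proved, can be spot-checked by numerical quadrature. The Python sketch below (not part of the paper) takes $h_0(\tau,\alpha) = e^{-\alpha\tau}$ and $h_k(\tau,\alpha) = (e\alpha\tau/k)^k e^{-\alpha\tau}$, as in (3) and (4).

```python
import numpy as np
from math import e, factorial

def h(k, tau, a):
    # basis functions (3)-(4); each h_k is normalized to a maximum of 1
    if k == 0:
        return np.exp(-a * tau)
    return (e * a * tau / k) ** k * np.exp(-a * tau)

def gamma_closed(k, j, a):
    # gamma_kj(alpha) = (1/(2 alpha)) (e/2)^{k+j} (k+j)! / (k^k j^j), with 0^0 = 1
    kk = k ** k if k > 0 else 1
    jj = j ** j if j > 0 else 1
    return (e / 2) ** (k + j) * factorial(k + j) / (kk * jj) / (2 * a)

a, K = 0.5, 4
tau = np.linspace(0.0, 200.0, 400001)   # the integrand decays exponentially
dt = tau[1] - tau[0]
for k in range(K):
    for j in range(K):
        f = h(k, tau, a) * h(j, tau, a)
        quad = np.sum(f[:-1] + f[1:]) * dt / 2.0   # trapezoid rule
        assert abs(quad - gamma_closed(k, j, a)) < 1e-6

# Gamma_1 = 2 alpha Gamma(alpha) is symmetric positive definite
Gamma1 = np.array([[2 * a * gamma_closed(k, j, a) for j in range(K)] for k in range(K)])
assert np.min(np.linalg.eigvalsh(Gamma1)) > 0
```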

Appendix A.8. Proof of Proposition 4

Since, by (6) and (58), bearing in mind (A32), (A33), (A38) and $\Gamma(\alpha) = \frac{1}{2\alpha}\Gamma_1$, we have
$\left\| H_K(\tau,\alpha) - \tilde{H}_K(\tau,\alpha) \right\|_2^2 = \frac{1}{2\alpha} \left[ \bar{g}_K^{\lambda}(\alpha) - g_K^{\lambda}(\alpha) \right]^T \Gamma_1 \left[ \bar{g}_K^{\lambda}(\alpha) - g_K^{\lambda}(\alpha) \right],$
in view of the right inequality in (54), the following estimation holds:
$\left\| H_K(\tau,\alpha) - \tilde{H}_K(\tau,\alpha) \right\|_2^2 \leq \frac{1}{2\alpha}\, \sigma_1(\Gamma_1) \left\| \bar{g}_K^{\lambda}(\alpha) - g_K^{\lambda}(\alpha) \right\|_2^2.$
By virtue of (27) and (59), and bearing in mind (48), we obtain
$\bar{g}_K^{\lambda}(\alpha) - g_K^{\lambda}(\alpha) = V(\alpha)\, \Lambda_\lambda(\alpha)\, U(\alpha)^T z_N,$
where the orthogonal matrices $V(\alpha)$ and $U(\alpha)$ are defined by the SVD (47), and the diagonal matrix $\Lambda_\lambda(\alpha)$ is given by (49). Thus,
$\left\| \bar{g}_K^{\lambda}(\alpha) - g_K^{\lambda}(\alpha) \right\|_2^2 = \mathrm{tr}\left[ U(\alpha)\, \Lambda_\lambda^T(\alpha)\, \Lambda_\lambda(\alpha)\, U(\alpha)^T z_N z_N^T \right],$
which, using the Schwarz inequality for matrices [57] and the orthogonality of $U(\alpha)$, is estimated as follows:
$\left\| \bar{g}_K^{\lambda}(\alpha) - g_K^{\lambda}(\alpha) \right\|_2^2 \leq \mathrm{tr}\left[ \Lambda_\lambda^T(\alpha) \Lambda_\lambda(\alpha) \Lambda_\lambda^T(\alpha) \Lambda_\lambda(\alpha) \right]^{1/2} \mathrm{tr}\left[ z_N z_N^T z_N z_N^T \right]^{1/2}.$
Since
$\mathrm{tr}\left[ z_N z_N^T z_N z_N^T \right] = z_N^T z_N\, \mathrm{tr}\left[ z_N z_N^T \right] = \left[ z_N^T z_N \right]^2 = \left\| z_N \right\|_2^4,$
and by the diagonal structure of $\Lambda_\lambda(\alpha)$ (49), we have
$\mathrm{tr}\left[ \Lambda_\lambda^T(\alpha) \Lambda_\lambda(\alpha) \Lambda_\lambda^T(\alpha) \Lambda_\lambda(\alpha) \right] = \sum_{i=1}^{r(\alpha)} \frac{[\sigma_i(\alpha)]^4}{\left\{ [\sigma_i(\alpha)]^2 + \lambda \right\}^4},$
inequality (A41) is equivalently rewritten as
$\left\| \bar{g}_K^{\lambda}(\alpha) - g_K^{\lambda}(\alpha) \right\|_2^2 \leq \left[ \sum_{i=1}^{r(\alpha)} \frac{[\sigma_i(\alpha)]^4}{\left\{ [\sigma_i(\alpha)]^2 + \lambda \right\}^4} \right]^{1/2} \left\| z_N \right\|_2^2.$
Now, estimations (A40) and (A42) yield inequality (60), and the proposition is proved. □
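The final inequality can be illustrated numerically. The Python sketch below (not the paper's code) assumes that the diagonal entries of $\Lambda_\lambda(\alpha)$ are $\sigma_i(\alpha)/\{[\sigma_i(\alpha)]^2+\lambda\}$, the form implied by the trace expression above; everything else is a random stand-in.

```python
import numpy as np

rng = np.random.default_rng(3)
N, K, lam = 8, 5, 0.05
Phi = rng.standard_normal((N, K))            # stand-in for Phi_{N,K}(alpha)
U, s, Vt = np.linalg.svd(Phi, full_matrices=False)
z = rng.standard_normal(N)                   # stand-in for the noise vector z_N

d = s / (s**2 + lam)                         # assumed diagonal of Lambda_lambda
diff = Vt.T @ np.diag(d) @ U.T @ z           # g_bar - g

lhs = float(diff @ diff)                     # squared norm of the difference
rhs = float(np.sqrt(np.sum(d**4)) * (z @ z)) # right-hand side of the bound
assert lhs <= rhs + 1e-12
```

The bound follows from the Cauchy–Schwarz inequality, so it holds for any choice of the stand-in matrices.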

Appendix A.9. Derivation of the Formula (62)

To derive Equation (62), note that due to (2), the relaxation modulus induced by the spectrum
$H(\tau) = \frac{1}{\tau}\, e^{-\left(\frac{1}{\tau}-m\right)^2/q}$
is described by
$G(t) = \int_0^\infty e^{-\left(\frac{1}{\tau}-m\right)^2/q}\, \frac{1}{\tau^2}\, e^{-t/\tau}\, d\tau.$
Through (2) and (A43), applying the substitution $1/\tau = x$, the above integral is rewritten as
$G(t) = \int_0^\infty e^{-(x-m)^2/q}\, e^{-tx}\, dx$
and next, as follows:
$G(t) = e^{\frac{1}{4}t^2 q - mt} \int_0^\infty e^{-\left(x + \frac{1}{2}tq - m\right)^2/q}\, dx.$
Applying in the integral (A44) the substitution $\left(x + \frac{1}{2}tq - m\right)/\sqrt{q} = z$, the formula describing the relaxation modulus is expressed as
$G(t) = \sqrt{q}\; e^{\frac{1}{4}t^2 q - mt} \int_{\left(\frac{1}{2}tq - m\right)/\sqrt{q}}^{\infty} e^{-z^2}\, dz = \frac{\sqrt{q\pi}}{2}\; e^{\frac{1}{4}t^2 q - mt}\; \mathrm{erfc}\!\left(\frac{\frac{1}{2}tq - m}{\sqrt{q}}\right),$
where the function $\mathrm{erfc}(x)$ is defined by (63). Formula (62) follows directly from the last expression on the right-hand side of (A45), by comparison of (A43) and (61).
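The closed form just derived agrees with direct numerical integration of the defining integral; a quick Python check (not the paper's code) using the standard library's `erfc`, with arbitrary test values of $m$, $q$ and $t$:

```python
import numpy as np
from math import erfc, exp, sqrt, pi

def G_closed(t, m, q):
    # G(t) = (sqrt(pi q)/2) e^{q t^2/4 - m t} erfc((q t/2 - m)/sqrt(q))
    return sqrt(pi * q) / 2.0 * exp(q * t * t / 4.0 - m * t) \
        * erfc((q * t / 2.0 - m) / sqrt(q))

def G_quad(t, m, q):
    # direct numerical evaluation of int_0^inf e^{-(x-m)^2/q} e^{-t x} dx
    x = np.linspace(0.0, m + 40.0 * sqrt(q), 400001)
    f = np.exp(-((x - m) ** 2) / q - t * x)
    return float(np.sum(f[:-1] + f[1:]) * (x[1] - x[0]) / 2.0)  # trapezoid rule

m, q = 1.0, 0.5     # arbitrary illustrative spectrum parameters
for t in (0.1, 1.0, 5.0):
    assert abs(G_closed(t, m, q) - G_quad(t, m, q)) < 1e-6
```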

Appendix B

Table A1. Ranges of the applicability of the model for various time-scale parameters for K = 6, …, 11.
| Time-Scale Factor α [s] | Range 1 of Relaxation Times τapp(α) [s] | Range 1 of Times tapp(α) [s] | Range 1 of Relaxation Times τapp(α) [s] | Range 1 of Times tapp(α) [s] |
|---|---|---|---|---|
| | K = 6 | K = 6 | K = 7 | K = 7 |
| 0.0001 | 161,655.35 | 337,647.7 | 178,346.044 | 392,372 |
| 0.001 | 16,165.53 | 33,764.77 | 17,834.606 | 39,238.5 |
| 0.01 | 1616.55 | 3376.48 | 1783.462 | 3923.85 |
| 0.1 | 161.65 | 337.65 | 178.348 | 392.5 |
| 1 | 16.165 | 33.765 | 17.836 | 39.26 |
| 10 | 1.616 | 3.376 | 1.784 | 3.95 |
| 100 | 0.1616 | 0.337 | 0.1783 | 0.392 |
| α [s] | K = 8 | K = 8 | K = 9 | K = 9 |
| 0.0001 | 194,529.30 | 446,719.4 | 210,306.21 | 500,804.6 |
| 0.001 | 19,452.93 | 44,671.94 | 21,030.621 | 50,080.46 |
| 0.01 | 1945.29 | 4467.19 | 2103.062 | 5008.046 |
| 0.1 | 194.53 | 446.72 | 210.306 | 500.81 |
| 1 | 19.453 | 44.672 | 21.0306 | 50.08 |
| 10 | 1.945 | 4.467 | 2.1031 | 5.008 |
| 100 | 0.1945 | 0.447 | 0.2103 | 0.501 |
| α [s] | K = 10 | K = 10 | K = 11 | K = 11 |
| 0.0001 | 225,748.07 | 554,697 | 240,907.43 | 608,443.6 |
| 0.001 | 22,574.807 | 55,469.7 | 24,090.743 | 60,844.38 |
| 0.01 | 2257.481 | 5546.97 | 2409.074 | 6084.45 |
| 0.1 | 225.748 | 554.69 | 240.907 | 608.46 |
| 1 | 22.5748 | 55.469 | 24.0907 | 60.86 |
| 10 | 2.2575 | 5.547 | 2.4091 | 6.1 |
| 100 | 0.2257 | 0.555 | 0.2409 | 0.608 |
1 The upper bounds tapp(α) (20) and τapp(α) (21) of the applicability intervals [0, tapp(α)] and [0, τapp(α)] are given.
Table A2. The square roots of the upper bounds ψn(Γ1) (56) and the square roots of the largest σ1(Γ1) and minimal σmin(Γ1) singular values of the matrix Γ1 (53) for K = 4, 5, …, 12 model summands.
Entries in the rows n = 1, …, 10 are the square roots √ψn(Γ1); the last row gives √σ1(Γ1).
| n | K = 4 | K = 5 | K = 6 | K = 7 | K = 8 | K = 9 | K = 10 | K = 11 | K = 12 |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 3.680199 | 4.353830 | 4.974923 | 5.556319 | 6.106199 | 6.630173 | 7.132299 | 7.615642 | 8.082583 |
| 2 | 3.671711 | 4.330308 | 4.931579 | 5.489724 | 6.013827 | 6.510086 | 6.982965 | 7.435811 | 7.871217 |
| 3 | 3.667383 | 4.325690 | 4.927344 | 5.485966 | 6.010365 | 6.506625 | 6.979169 | 7.431342 | 7.865747 |
| 4 | 3.666555 | 4.325232 | 4.927078 | 5.485799 | 6.010251 | 6.506536 | 6.979081 | 7.431227 | 7.865575 |
| 5 | 3.666421 | 4.325190 | 4.927062 | 5.485792 | 6.010247 | 6.506534 | 6.979079 | 7.431224 | 7.865568 |
| 6 | 3.666400 | 4.325186 | 4.927061 | 5.485792 | 6.010247 | 6.506534 | 6.979079 | 7.431223 | 7.865567 |
| 7 | 3.666397 | 4.325186 | 4.927061 | 5.485792 | 6.010247 | 6.506534 | 6.979079 | 7.431223 | 7.865567 |
| 8 | 3.666396 | 4.325186 | 4.927061 | 5.485792 | 6.010247 | 6.506534 | 6.979079 | 7.431223 | 7.865567 |
| 9 | 3.666396 | 4.325186 | 4.927061 | 5.485792 | 6.010247 | 6.506534 | 6.979079 | 7.431223 | 7.865567 |
| 10 | 3.666396 | 4.325186 | 4.927061 | 5.485792 | 6.010247 | 6.506534 | 6.979079 | 7.431223 | 7.865567 |
| √σ1(Γ1) | 3.666396 | 4.325185 | 4.927061 | 5.485792 | 6.010247 | 6.506534 | 6.979079 | 7.431223 | 7.865567 |
Table A3. Optimal parameters $\hat{g}_K = \hat{g}_K^{\lambda_{GCV}}(\hat{\alpha})$ (45) of the relaxation spectrum models from Example 1; the elements of the vectors $\hat{g}_K$ are expressed in [Pa].
| | K = 3 | K = 4 | K = 5 | K = 6 | K = 7 | K = 8 | K = 9 | K = 10 | K = 11 | K = 12 |
|---|---|---|---|---|---|---|---|---|---|---|
| g0 | 0.50819 | −0.32787 | −0.31484 | −0.28238 | −0.26421 | −0.26104 | −0.26344 | −0.25958 | −0.25722 | −0.24893 |
| g1 | 0.48916 | 3.25406 | 2.44705 | 1.86341 | 1.57535 | 1.44701 | 1.38846 | 1.26949 | 1.18396 | 1.042852 |
| g2 | −0.01177 | −5.37283 | −1.74791 | −0.21483 | 0.21735 | 0.33298 | 0.38151 | 0.53024 | 0.62627 | 0.77959 |
| g3 | | 3.79955 | −1.90531 | −1.81625 | −1.22415 | −0.94252 | −0.85336 | −0.72742 | −0.63340 | −0.49513 |
| g4 | | | 3.00974 | −0.42686 | −0.80274 | −0.65867 | −0.57137 | −0.58067 | −0.58715 | −0.58709 |
| g5 | | | | 2.49772 | 0.35990 | 0.00038 | 0.018141 | −0.06714 | −0.11806 | −0.19722 |
| g6 | | | | | 1.85736 | 0.61157 | 0.39343 | 0.27044 | 0.22125 | 0.12879 |
| g7 | | | | | | 1.25669 | 0.58048 | 0.40004 | 0.34372 | 0.26284 |
| g8 | | | | | | | 0.75498 | 0.46114 | 0.34725 | 0.26954 |
| g9 | | | | | | | | 0.59592 | 0.35547 | 0.25643 |
| g10 | | | | | | | | | 0.46177 | 0.31367 |
| g11 | | | | | | | | | | 0.49927 |

References

  1. Malkin, A.I.A.; Malkin, A.Y.; Isayev, A.I. Rheology: Concepts, Methods and Applications; ChemTec: Deerfield Beach, FL, USA, 2006; Available online: https://books.google.pl/books?id=8rGafjhgz-UC (accessed on 3 December 2022).
  2. Ferry, J.D. Viscoelastic Properties of Polymers, 3rd ed.; John Wiley & Sons: New York, NY, USA, 1980. [Google Scholar]
  3. Rao, M.A. Rheology of Fluid, Semisolid, and Solid Foods: Principles and Applications; Springer: New York, NY, USA, 2013; Available online: https://books.google.pl/books?id=9-23BAAAQBAJ (accessed on 9 March 2023).
  4. Ankiewicz, S.; Orbey, N.; Watanabe, H.; Lentzakis, H.; Dealy, J. On the use of continuous relaxation spectra to characterize model polymers. J. Rheol. 2016, 60, 1115–1120. [Google Scholar] [CrossRef]
  5. Anderssen, R.S.; Davies, A.R.; de Hoog, F.R.; Loy, R.J. Derivative based algorithms for continuous relaxation spectrum recovery. J. Non-Newton. Fluid Mech. 2015, 222, 132–140. [Google Scholar] [CrossRef]
  6. Mead, D.W. Numerical interconversion of linear viscoelastic material functions. J. Rheol. 1994, 38, 1769–1795. [Google Scholar] [CrossRef]
  7. Hajikarimi, P.; Moghadas Nejad, F. Chapter 6-Interconversion of constitutive viscoelastic functions. In Applications of Viscoelasticity; Hajikarimi, P., Nejad, F.M., Eds.; Elsevier: Amsterdam, The Netherlands, 2021; pp. 107–139. [Google Scholar] [CrossRef]
  8. Chang, F.-L.; Bin, H.; Huang, W.-T.; Chen, L.; Yin, X.-C.; Cao, X.-W.; He, G.-J. Improvement of rheology and mechanical properties of PLA/PBS blends by in-situ UV-induced reactive extrusion. Polymer 2022, 259, 125336. [Google Scholar] [CrossRef]
  9. Pogreb, R.; Loew, R.; Bormashenko, E.; Whyman, G.; Multanen, V.; Shulzinger, E.; Abramovich, A.; Rozban, A.; Shulzinger, A.; Zussman, E.; et al. Relaxation spectra of polymers and phenomena of electrical and hydrophobic recovery: Interplay between bulk and surface properties of polymers. J. Polym. Sci. Part B Polym. Phys. 2017, 55, 198–205. [Google Scholar] [CrossRef]
  10. Sun, Y.; Huang, B.; Chen, J. A unified procedure for rapidly determining asphalt concrete discrete relaxation and retardation spectra. Constr. Build. Mater. 2015, 93, 35–48. [Google Scholar] [CrossRef]
  11. Luo, L.; Xi, R.; Ma, Q.; Tu, C.; Ibrahim Shah, Y. An improved method to establish continuous relaxation spectrum of asphalt materials. Constr. Build. Mater. 2022, 354, 129182. [Google Scholar] [CrossRef]
  12. Martinez, J.M.; Boukamel, A.; Méo, S.; Lejeunes, S. Statistical approach for a hyper-visco-plastic model for filled rubber: Experimental characterization and numerical modeling. Eur. J. Mech.-A/Solids 2011, 30, 1028–1039. [Google Scholar] [CrossRef]
  13. Bardet, S.; Gril, J. Modelling the transverse viscoelasticity of green wood using a combination of two parabolic elements. C. R. Mécanique 2002, 330, 549–556. [Google Scholar] [CrossRef]
  14. Zhang, N.Z.; Bian, X.L.; Ren, C.; Geng, C.; Mu, Y.K.; Ma, X.D.; Jia, Y.D.; Wang, Q.; Wang, G. Manipulation of relaxation processes in a metallic glass through cryogenic treatment. J. Alloys Compd. 2022, 894, 162407. [Google Scholar] [CrossRef]
  15. Yazar, G.; Demirkesen, I. Linear and Non-Linear Rheological Properties of Gluten-Free Dough Systems Probed by Fundamental Methods. Food Eng. Rev. 2023, 15, 56–85. [Google Scholar] [CrossRef]
  16. Demidov, A.V.; Makarov, A.G. Spectral Simulation of Performance Processes of Polymeric Textile Materials. Fibre Chem. 2023, 54, 222–226. [Google Scholar] [CrossRef]
  17. Lau, H.C.P.; Holtzman, B.K.; Havlin, C. Toward a Self-Consistent Characterization of Lithospheric Plates Using Full-Spectrum Viscoelasticity. AGU Adv. 2020, 1, e2020AV000205. [Google Scholar] [CrossRef]
  18. Stankiewicz, A.; Golacki, K. Approximation of the continuous relaxation spectrum of plant viscoelastic materials using Laguerre functions. Electron. J. Pol. Agric. Univ. Ser. Agric. Eng. 2008, 11. Available online: http://www.ejpau.media.pl/articles/volume11/issue1/art-20.pdf (accessed on 7 March 2023).
  19. Baumgaertel, M.; Winter, H.H. Determination of discrete relaxation and retardation time spectra from dynamic mechanical data. Rheol. Acta 1989, 28, 511–519. [Google Scholar] [CrossRef]
  20. Honerkamp, J.; Weese, J. Determination of the relaxation spectrum by a regularization method. Macromolecules 1989, 22, 4372–4377. [Google Scholar] [CrossRef]
  21. Malkin, A.Y. The use of a continuous relaxation spectrum for describing the viscoelastic properties of polymers. Polym. Sci. Ser. A 2006, 48, 39–45. [Google Scholar] [CrossRef]
  22. Malkin, A.Y.; Vasilyev, G.B.; Andrianov, A.V. On continuous relaxation spectrum. Method of calculation. Polym. Sci. Ser. A 2010, 52, 1137–1141. [Google Scholar] [CrossRef]
  23. Stadler, F.J.; Bailly, C. A new method for the calculation of continuous relaxation spectra from dynamic-mechanical data. Rheol. Acta 2009, 48, 33–49. [Google Scholar] [CrossRef]
  24. Davies, A.R.; Goulding, N.J. Wavelet regularization and the continuous relaxation spectrum. J. Non-Newton. Fluid Mech. 2012, 189–190, 19–30. [Google Scholar] [CrossRef]
  25. Davies, A.R.; Anderssen, R.S.; de Hoog, F.R.; Goulding, N.J. Derivative spectroscopy and the continuous relaxation spectrum. J. Non-Newton. Fluid Mech. 2016, 233, 107–118. [Google Scholar] [CrossRef]
  26. Cho, K.S. Power series approximations of dynamic moduli and relaxation spectrum. J. Rheol. 2013, 57, 679–697. [Google Scholar] [CrossRef]
  27. Takeh, A.; Shanbhag, S. A Computer Program to Extract the Continuous and Discrete Relaxation Spectra from Dynamic Viscoelastic Measurements. Appl. Rheol. 2013, 23, 24628. [Google Scholar] [CrossRef]
  28. Pérez-Calixto, D.; Amat-Shapiro, S.; Zamarrón-Hernández, D.; Vázquez-Victorio, G.; Puech, P.-H.; Hautefeuille, M. Determination by Relaxation Tests of the Mechanical Properties of Soft Polyacrylamide Gels Made for Mechanobiology Studies. Polymers 2021, 13, 629. [Google Scholar] [CrossRef] [PubMed]
  29. Joyner, H.S. Rheology of Semisolid Foods; Springer: Berlin/Heidelberg, Germany, 2019; Available online: https://books.google.pl/books?id=siy-DwAAQBAJ (accessed on 9 March 2023).
  30. Alfrey, T.; Doty, P. The Methods of Specifying the Properties of Viscoelastic Materials. J. Appl. Phys. 1945, 16, 700–713. [Google Scholar] [CrossRef]
  31. Widder, D.V. An Introduction to Transformation Theory; Academic Press: Cambridge, MA, USA, 1971; Available online: https://books.google.pl/books?id=wERoPQAACAAJ (accessed on 9 March 2023).
  32. Alfrey, T. Mechanical Behavior of High Polymers; Interscience Publishers: Geneva, Switzerland, 1965. [Google Scholar]
  33. ter Haar, D. An easy approximate method of determining the relaxation spectrum of a viscoelastic material. J. Polym. Sci. 1951, 6, 247–250. [Google Scholar] [CrossRef]
  34. Bažant, Z.P.; Yunping, X. Continuous Retardation Spectrum for Solidification Theory of Concrete Creep. J. Eng. Mech. 1995, 121, 281–288. [Google Scholar] [CrossRef]
  35. Goangseup, Z.; Bažant, Z.P. Continuous Relaxation Spectrum for Concrete Creep and its Incorporation into Microplane Model M4. J. Eng. Mech. 2002, 128, 1331–1336. [Google Scholar] [CrossRef]
  36. Macey, H.H. On the Application of Laplace Pairs to the Analysis of Relaxation Curves. J. Sci. Instrum. 1948, 25, 251. [Google Scholar] [CrossRef]
  37. Sips, R. Mechanical behavior of viscoelastic substances. J. Polym. Sci. 1950, 5, 69–89. [Google Scholar] [CrossRef]
  38. Yamamoto, R. Stress relaxation property of the cell wall and auxin-induced cell elongation. J. Plant Res. 1996, 109, 75–84. [Google Scholar] [CrossRef]
  39. Stankiewicz, A. A scheme for identification of continuous relaxation time spectrum of biological viscoelastic materials. Acta Sci. Pol. Ser. Tech. Agrar. 2003, 2, 77–91. [Google Scholar]
  40. Stankiewicz, A. A Class of Algorithms for Recovery of Continuous Relaxation Spectrum from Stress Relaxation Test Data Using Orthonormal Functions. Polymers 2023, 15, 958. [Google Scholar] [CrossRef]
  41. Zhang, B.; Makram-Ebeid, S.; Prevost, R.; Pizaine, G. Fast solver for some computational imaging problems: A regularized weighted least-squares approach. Digit. Signal Process. 2014, 27, 107–118. [Google Scholar] [CrossRef]
  42. Hansen, P.C. Rank-Deficient and Discrete Ill-Posed Problems; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 1998. [Google Scholar] [CrossRef]
  43. Szabatin, J. Podstawy Teorii Sygnałów; Wydawnictwa Komunikacji i Łączności: Warszawa, Poland, 1982. (In Polish) [Google Scholar]
  44. Knuth, D.E. Two Notes on Notation. Am. Math. Mon. 1992, 99, 403–422. [Google Scholar] [CrossRef]
  45. Yang, Z.-H.; Chu, Y.-M. On approximating the modified Bessel function of the second kind. J. Inequal. Appl. 2017, 2017, 41. [Google Scholar] [CrossRef]
  46. Watson, G.N. A Treatise on the Theory of Bessel Functions, 3rd ed.; Cambridge University Press: Cambridge, UK, 1995; Available online: https://books.google.nl/books?id=Mlk3FrNoEVoC (accessed on 19 December 2022).
  47. Martin, P.; Maass, F. Accurate analytic approximation to the Modified Bessel function of Second Kind K0(x). Results Phys. 2022, 35, 105283. [Google Scholar] [CrossRef]
  48. Rehman, N.U.; Hussain, S.T.; Aziz, A. Pulsatile Darcy flow of water-based thermally radiative carbon nanotubes between two concentric cylinders. Numer. Methods Partial. Differ. Equ. 2023, 39, 213–230. [Google Scholar] [CrossRef]
  49. Okita, T.; Harada, H. 3-D Analytical Model of Axial-Flux Permanent Magnet Machine with Segmented Multipole-Halbach Array. IEEE Access 2023, 11, 2078–2091. [Google Scholar] [CrossRef]
  50. Lovrić, D.; Vujević, S. Accurate Computation of Internal Impedance of Two-Layer Cylindrical Conductors for Arguments of Arbitrary Magnitude. IEEE Trans. Electromagn. Compat. 2018, 60, 347–353. [Google Scholar] [CrossRef]
  51. Das, R.; Manna, B.; Banerjee, A. Spectral element formulation for rock-socketed mono-pile under horizontal dynamic loads. Soil Dyn. Earthq. Eng. 2023, 169, 107863. [Google Scholar] [CrossRef]
  52. Liu, M.; Huang, H. Poroelastic response of spherical indentation into a half space with an impermeable surface via step displacement. J. Mech. Phys. Solids 2021, 155, 104546. [Google Scholar] [CrossRef]
  53. Yamamura, K. Dispersal distance of heterogeneous populations. Popul. Ecol. 2002, 44, 93–101. [Google Scholar] [CrossRef]
  54. Gaunt, R.E. Inequalities for modified Bessel functions and their integrals. J. Math. Anal. Appl. 2014, 420, 373–386. [Google Scholar] [CrossRef]
  55. Tikhonov, A.N.; Arsenin, V.Y. Solutions of Ill-Posed Problems; John Wiley & Sons: New York, NY, USA, 1977. [Google Scholar]
  56. Wahba, G. Practical Approximate Solutions to Linear Operator Equations When the Data are Noisy. SIAM J. Numer. Anal. 1977, 14, 651–667. [Google Scholar] [CrossRef]
  57. Golub, G.H.; Van Loan, C.F. Matrix Computations; Johns Hopkins University Press: Baltimore, MD, USA, 2013. [Google Scholar]
  58. Lee, C.-H. Upper and lower matrix bounds of the solution for the discrete Lyapunov equation. IEEE Trans. Autom. Control. 1996, 41, 1338–1341. [Google Scholar] [CrossRef]
  59. Rojo, O.; Soto, R.; Rojo, H. Bounds for the spectral radius and the largest singular value. Comput. Math. Appl. 1998, 36, 41–50. [Google Scholar] [CrossRef]
  60. Kwakye-Nimo, S.; Inn, Y.; Yu, Y.; Wood-Adams, P.M. Linear viscoelastic behavior of bimodal polyethylene. Rheol. Acta 2022, 61, 373–386. [Google Scholar] [CrossRef]
  61. Wang, J.; Wang, X.; Ruan, H. On the mechanical β relaxation in glass and its relation to the double-peak phenomenon in impulse excited vibration at high temperatures. J. Non-Cryst. Solids 2020, 533, 119939. [Google Scholar] [CrossRef]
  62. Deaño, A.; Temme, N.M. Analytical and numerical aspects of a generalization of the complementary error function. Appl. Math. Comput. 2010, 216, 3680–3693. [Google Scholar] [CrossRef]
  63. Takekawa, T. Fast parallel calculation of modified Bessel function of the second kind and its derivatives. SoftwareX 2022, 17, 100923. [Google Scholar] [CrossRef]
  64. Gradshteyn, I.S.; Ryzhik, I.M. Table of Integrals, Series, and Products; Academic Press: Boston, MA, USA, 2007. [Google Scholar] [CrossRef]
Figure 1. Basis functions $h_k(\tau,\alpha)$ (3) and (4) of the relaxation spectrum model $H_K(\tau,\alpha)$ (6) for time-scaling factors: (a) α = 0.1 [s⁻¹] and (b) α = 0.01 [s⁻¹]; k = 0, 1, 2, 3, 4.
Figure 2. Basis functions $\phi_k(t,\alpha)$ (9) of the relaxation modulus model for time-scaling factors: (a) α = 0.1 [s⁻¹] and (b) α = 0.01 [s⁻¹]; k = 0, 1, 2, 3, 4.
Figure 3. Hierarchical procedure of the two-level scheme for determination of the optimal spectrum relaxation model.
Figure 4. Relaxation time spectrum H ( τ ) (61) (solid red line) from the example and the corresponding models H K o p t ( τ ) (46) for K summands of the model: (a) K = 3 , 4 , 5 ; (b) K = 6 , 7 , 8 ; (c) K = 9 , 10 ; (d) K = 11 , 12 .
Figure 5. The measurements G ¯ ( t i ) of the ‘real’ relaxation modulus G ( t ) (62) (red points) from the example and the optimal approximated models G K ( t , α o p t ) (7) computed for g ^ K (45) for K summands of the model: (a) K = 3 , 6 , 8 ; (b) K = 9 , 12 .
Figure 6. The identification index Q N o p t ( α ) (35) (solid navy blue line) minimized in the upper level task (37) from the example for: (a) K = 6 , (b) K = 8 , (c) K = 10 , (d) K = 12 summands of the relaxation spectrum model (6); on small subfigures, dashed red line of the value ( 1 + ε α ) Q N o p t ( α o p t ) determines the interval [ α m i n , α m a x ] of suboptimal time-scale factors defined by (66); α o p t is the optimal time-scale factor solving problem (37) for the sub-optimality factor ε α = 10 %.
Figure 7. Relaxation time spectra and moduli from the example and the corresponding models for K = 9 summands of the model: (a) ‘real’ spectrum H ( τ ) (61) (solid red line) and the models: optimal H K o p t ( τ ) (46) and suboptimal H K ( τ , α m i n ) and H K ( τ , α m a x ) ; (b) ‘real’ modulus G ( t ) (62) (red points), the optimal approximated model G K ( t , α o p t ) (7) and (45), the suboptimal models G K ( t , α m i n ) and G K ( t , α m a x ) computed for g ^ K λ G C V ( α m i n ) and g ^ K λ G C V ( α m a x ) according to (7).
Table 1. Ranges of the applicability of the model for various time-scale parameters for K = 5 and K = 12 .
| Time-Scale Factor α [s] | Range 1 of Relaxation Times τapp(α) [s], K = 5 | Range 1 of Times tapp(α) [s], K = 5 | Range 1 of Relaxation Times τapp(α) [s], K = 12 | Range 1 of Times tapp(α) [s], K = 12 |
|---|---|---|---|---|
| 0.0001 | 144,305.22 | 282,360.62 | 255,824.35 | 662,077.1 |
| 0.001 | 14,430.52 | 28,236.06 | 25,582.435 | 66,207.71 |
| 0.01 | 1443.05 | 2823.61 | 2558.243 | 6620.77 |
| 0.1 | 144.305 | 282.36 | 255.824 | 662.08 |
| 1 | 14.43 | 28.236 | 25.5824 | 66.208 |
| 10 | 1.4431 | 2.824 | 2.5582 | 6.621 |
| 100 | 0.1443 | 0.282 | 0.2558 | 0.662 |
1 The upper bounds tapp(α) (20) and τapp(α) (21) of the applicability intervals [0, tapp(α)] and [0, τapp(α)] are given.
Table 2. The square roots of the largest σ1(Γ1) and minimal σmin(Γ1) singular values of the matrix Γ1 (53) for K = 4, 5, …, 12 model summands.
| K | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
|---|---|---|---|---|---|---|---|---|---|
| √σ1(Γ1) | 3.666396 | 4.325185 | 4.927061 | 5.485792 | 6.010247 | 6.506534 | 6.979079 | 7.431223 | 7.865567 |
| √σmin(Γ1) | 0.206481 | 0.087523 | 0.034642 | 0.013243 | 0.004953 | 0.001824 | 6.6358 × 10⁻⁴ | 2.3925 × 10⁻⁴ | 8.5615 × 10⁻⁵ |
Table 3. The parameters of the optimal models in the example: optimal time-scale factors α_opt; lower α_min and upper α_max bounds of the interval [α_min, α_max] of suboptimal time-scale factors defined by inequality (66) for ε_α = 0.1; regularization parameters λ_GCV(α_opt); optimal identification indices Q_N^opt(α_opt) defined in (37); squared norms ‖ĝ_K‖² of the vector of optimal model parameters ĝ_K (45) and ‖H_K^opt(τ)‖² of the optimal relaxation spectrum model H_K^opt(τ) (46); the relaxation spectrum approximation error ER₁(K) defined in (64); and the distance between successive approximations H_K^opt(τ) measured by ER₂(K) (65).
| K | α_opt [s⁻¹] | α_min [s⁻¹] | α_max [s⁻¹] | λ_GCV(α_opt) [–] | Q_N^opt(α_opt) [Pa²] | ‖ĝ_K‖² [Pa] | ‖H_K^opt(τ)‖² [Pa] | ER₁(K) [%] | ER₂(K) [Pa²·s] |
|---|---|---|---|---|---|---|---|---|---|
| 3 | 0.00520 | 0.00205 | 0.0080 | 0.0445 | 8.63505 × 10⁻⁴ | 0.7055 | 13.0248 | 90.459 | 12.876 |
| 4 | 0.01675 | 0.0164 | 0.01745 | 0.0063 | 3.43945 × 10⁻⁵ | 7.3485 | 16.1607 | 39.160 | 3.1049 |
| 5 | 0.02025 | 0.0192 | 0.0213 | 0.0071 | 2.71552 × 10⁻⁵ | 4.6724 | 17.1465 | 35.638 | 2.4592 |
| 6 | 0.02375 | 0.0220 | 0.0255 | 0.0078 | 2.48511 × 10⁻⁵ | 3.6493 | 18.0443 | 33.986 | 0.7563 |
| 7 | 0.02655 | 0.0234 | 0.0290 | 0.0083 | 2.48256 × 10⁻⁵ | 2.8846 | 18.5088 | 33.364 | 0.1432 |
| 8 | 0.02865 | 0.0234 | 0.0318 | 0.0089 | 2.51617 × 10⁻⁵ | 2.3555 | 18.5679 | 32.824 | 0.1692 |
| 9 | 0.03005 | 0.0241 | 0.03425 | 0.0099 | 2.52412 × 10⁻⁵ | 2.0639 | 18.3756 | 32.701 | 0.0111 |
| 10 | 0.03215 | 0.0255 | 0.0367 | 0.0109 | 2.51143 × 10⁻⁵ | 1.9058 | 18.4224 | 32.631 | 0.0336 |
| 11 | 0.03390 | 0.0276 | 0.03915 | 0.0122 | 2.48521 × 10⁻⁵ | 1.8020 | 18.3498 | 32.879 | 0.0327 |
| 12 | 0.03670 | 0.02935 | 0.04195 | 0.0127 | 2.44452 × 10⁻⁵ | 1.7198 | 18.4432 | 32.919 | 0.0328 |
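Table 3 can also be read as a model-order selection aid: the distance ER₂(K) between successive optimal spectra drops sharply and then stabilizes near K = 9, after which additional summands change the recovered spectrum very little. The following sketch illustrates one possible stopping heuristic (a hypothetical rule built on the tabulated values, not the paper's algorithm): pick the smallest K whose successive-approximation distance falls below a tolerance.

```python
# ER2(K) values from Table 3: distance between successive optimal
# relaxation spectrum approximations, for K = 3..12.
ER2 = {3: 12.876, 4: 3.1049, 5: 2.4592, 6: 0.7563, 7: 0.1432,
       8: 0.1692, 9: 0.0111, 10: 0.0336, 11: 0.0327, 12: 0.0328}

def smallest_stable_order(er2, tol=0.05):
    """Smallest K with ER2(K) below tol (hypothetical stopping rule)."""
    for K in sorted(er2):
        if er2[K] < tol:
            return K
    return max(er2)

print(smallest_stable_order(ER2))  # -> 9
```

With the tolerance 0.05 Pa²·s the rule selects K = 9, consistent with the observation that the identification index Q_N^opt(α_opt) is essentially flat for K ≥ 6 while ER₂(K) keeps decreasing until K = 9.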

Stankiewicz, A. Two-Level Scheme for Identification of the Relaxation Time Spectrum Using Stress Relaxation Test Data with the Optimal Choice of the Time-Scale Factor. Materials 2023, 16, 3565. https://doi.org/10.3390/ma16093565