Article

Hyperbolicity of First Order Quasi-Linear Equations

by Vladimir Vasilyev 1,* and Yuri Virchenko 1,2
1 Department of Applied Mathematics and Computer Modeling, Belgorod State National Research University, Pobedy St. 85, 308015 Belgorod, Russia
2 Department of Software and Automated Control Systems, Belgorod Shukhov State Technological University, Kostyukova St. 46, 308012 Belgorod, Russia
* Author to whom correspondence should be addressed.
Symmetry 2022, 14(5), 1024; https://doi.org/10.3390/sym14051024
Submission received: 17 April 2022 / Revised: 5 May 2022 / Accepted: 12 May 2022 / Published: 17 May 2022
(This article belongs to the Topic Dynamical Systems: Theory and Applications)

Abstract: We prove a theorem on the equivalence of the concept of strong hyperbolicity and the concept of Friedrichs hyperbolicity for first-order quasi-linear partial differential equations. On the basis of this theorem, necessary and sufficient conditions of hyperbolicity are found in terms of the matrix of the corresponding linearized system of first-order equations.

1. Introduction

Interest in hyperbolic quasi-linear equations first arose in mathematical physics, and it is still sustained by the need to solve problems of gas dynamics (see, for example, refs. [1,2,3]). Along with the search for exact solutions of the gas dynamics system and of analogous equations of mathematical physics, a significant place in research is occupied by the study of the formation of discontinuous solutions. Such solutions are called weak Hugoniot solutions (see, for example, refs. [1,4,5,6]). Later, interest arose in the appearance, when nonlinearity is present in such systems, of a dynamical regime in their solutions that is commonly called chaotic [7]. Finally, in connection with problems of mathematical physics whose formulation contains no a priori information about the general qualitative properties of solutions of the quasi-linear systems used, there is an urgent need to determine the region of their hyperbolicity (see refs. [8,9,10]). The present work is devoted precisely to this problem.
Consider a quasi-linear evolution system of first-order differential equations. Let $\mathbf{u}(\mathsf{x},t) = \langle u_a(\mathsf{x},t);\ a = 1,\dots,n\rangle \colon \mathbb{R}^m \to \mathbb{R}^n$ be a collection of functions that depend on the coordinates $\mathsf{x} = \langle x_1,\dots,x_m\rangle \in \mathbb{R}^m$ and on the parameter $t \in \mathbb{R}$. Then this system has the form
$$\dot{u}_a(\mathsf{x},t) = \sum_{b=1}^{n}\sum_{k=1}^{m} A_k^{(a,b)}(\mathsf{x},t)\,\frac{\partial u_b(\mathsf{x},t)}{\partial x_k} + H_a(\mathbf{u},\mathsf{x},t), \qquad a = 1,\dots,n. \tag{1}$$
Throughout the remainder of the work, we distinguish vectors from the domain $\mathbb{R}^m$ of the vector function $\mathbf{u}(\mathsf{x},t)$ and vectors from the space $\mathbb{R}^n$ of its values. Therefore, we use different fonts to denote vectors from these spaces: vectors from $\mathbb{R}^m$ are set in sans serif, and vectors from $\mathbb{R}^n$ in bold.
When the coefficients of Equation (1) are fixed numbers, verification of the hyperbolicity of a given system of first-order quasi-linear equations is carried out by a simple algorithm. On the contrary, if the coefficients depend on parameters that can vary widely, or if they are functionals, the study of the hyperbolicity of the system becomes dramatically more complicated, and solving such a problem can be a rather time-consuming procedure (see, for example, refs. [9,10]). The problem is especially difficult when the system of equations has a large dimension $n$. At the same time, solving it is very important, because a quasi-linear system with such differential operators may be of variable type; that is, for some admissible values of the coefficients, the equations may lose the property of “evolutionarity”. This is always essential when a system of first-order quasi-linear equations is intended for modeling physical processes, in particular when it is used to describe processes in continuous media with all physical dissipation mechanisms neglected. The reason is that, for any solution $\mathbf{v}(\mathsf{x},t) \sim \mathbf{v}_0 \exp\big[\mathrm{i}\big(\omega t + (\mathsf{x},\mathsf{q})\big)\big]$ of the corresponding linearized system of equations with matrix $T(\mathsf{q})$, the so-called “dispersion equation”, which connects the wave frequency $\omega$ with the wave vector $\mathsf{q}$, must be satisfied from a physical point of view. In such a situation, all solution branches $\omega_j(\mathsf{q})$, $j = 1,\dots,n$ of this algebraic equation (see the defining Equations (3) and (4)) should be real when $\mathsf{q}$ is real. If some solution branch $\omega_j(\mathsf{q})$ has a non-zero imaginary part for some vectors $\mathsf{q}$, then the realness of the dispersion equation implies the existence of the complex-conjugate solution $\omega_j^{*}(\mathsf{q})$. In this case, the presence of the imaginary part leads not only to solutions $\mathbf{v}(\mathsf{x},t)$ of the system that tend asymptotically to some stationary evolutionary regime, but also to the mandatory existence of solutions $\mathbf{v}(\mathsf{x},t)$ that have no physical sense, growing without bound in some regions of variation of the spatial variable $\mathsf{x}$.
Thus, when selecting physically reasonable systems of equations, it is necessary to require that all solutions $\omega_j(\mathsf{q})$, $j = 1,\dots,n$ be real. In addition, even when all $\omega_j(\mathsf{q})$ are real, a power-law growth in $t$ of the solutions $\mathbf{v}(\mathsf{x},t)$ of the linearized system may occur if the matrix $T(\mathsf{q})$ of the system is not diagonalizable. In this regard, when choosing a physically reasonable evolutionary system of quasi-linear equations, it is necessary to require diagonalizability of this matrix as well. As a result, we come to the conclusion that the hyperbolicity condition (see Definition 1) for a first-order quasi-linear system, also called strong hyperbolicity, is a natural requirement imposed on such systems from the point of view of physics.
It should be emphasized that the study of hyperbolicity conditions for systems of quasi-linear equations in the cited works [9,10] is still limited to systems of small dimension. At the same time, in general, the problem under study is connected with so-called covariant systems of differential equations (see, for example, ref. [11]), whose importance is due to their applicability to mathematical modeling in the physics of complex condensed matter. The coefficients of such systems are arbitrary functions of the invariants of the group $O_3$ of transformations of the system, and the dimension of such systems can vary from a minimum of 3 to about 20 (see, for example, refs. [12,13]). Therefore, the research presented in this paper is aimed at finding necessary and sufficient properties of the characteristic matrix $T(\mathsf{q})$ that guarantee strong hyperbolicity of a first-order quasi-linear system in the general case.

2. Hyperbolic Systems of First Order Equations

We establish the connection between two concepts. The first of them is the strong hyperbolicity of a first-order quasi-linear system of equations; in what follows, we simply call it hyperbolicity. The second is the Friedrichs hyperbolicity, or so-called t-hyperbolicity, whose definition we modify slightly.
With the system (1) one associates a linear system of equations (see refs. [1,2,3]) for the collection $\mathbf{v}(\mathsf{x},t)$, obtained by linearization of the system (1) at the point $\mathbf{u}(\mathsf{x},t)$:
$$\dot{v}_a(\mathsf{x},t) = \sum_{b=1}^{n}\sum_{k=1}^{m} A_k^{(a,b)}(\mathbf{u})\,\frac{\partial v_b(\mathsf{x},t)}{\partial x_k} + \sum_{b=1}^{n}\frac{\partial H_a}{\partial u_b}\, v_b(\mathsf{x},t), \qquad a = 1,\dots,n, \tag{2}$$
where the collection of functions $\mathbf{v}(\mathsf{x},t) = \langle v_a(\mathsf{x},t);\ a = 1,\dots,n\rangle$ corresponds to the fixed set $\mathbf{u} = \langle u_1,\dots,u_n\rangle \in \mathbb{R}^n$ of values of the collection $\langle u_a(\mathsf{x},t);\ a = 1,\dots,n\rangle$ at the point $\mathsf{x}$ and at the time moment $t$.
In order for this system of equations to be solvable, at least locally with respect to $t$, for arbitrarily selected initial conditions in “general position”, it is necessary and sufficient that it possess the hyperbolicity property. This property means that the collection $\{A_k;\ k = 1,\dots,m\}$ of $n\times n$ matrix coefficients of the system (1), defined by the matrix elements $A_k^{(a,b)} = (A_k)_{a,b}$, $a,b = 1,\dots,n$, must have the following special property.
Definition 1.
This definition corresponds to so-called strong hyperbolicity. The system (1) is called hyperbolic if, in the corresponding system (2), the $n\times n$ matrix $T(\mathsf{q})$ with matrix elements
$$T_{a,b}(\mathsf{q}) = \sum_{k=1}^{m} q_k\, A_k^{(a,b)}(\mathsf{x},t) \tag{3}$$
is diagonalizable and has only real eigenvalues $\omega^{(l)}$, $l = 1,\dots,n$ for any set $\mathsf{q} = \langle q_s;\ s = 1,\dots,m\rangle \in \mathbb{R}^m$, at any time $t \in \mathbb{R}$, and for any $\mathbf{u} \in \mathbb{R}^n$, $\mathsf{x} \in \mathbb{R}^m$.
Thus, the hyperbolicity of the system (1) consists in the realness of the roots $\omega^{(l)}$, $l = 1,\dots,n$ of the equation
$$\det\big(\omega - T(\mathsf{q})\big) = 0 \tag{4}$$
with respect to $\omega$, together with the existence of a complete set of eigenvectors for each multiple root. In what follows, we call matrices $T(\mathsf{q})$ satisfying this condition hyperbolic.
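The verification described by Definition 1 is straightforward to carry out numerically for a fixed state and a fixed wave vector. The following sketch (not part of the original text; the coefficient matrices are arbitrary illustrative data) assembles $T(\mathsf{q}) = \sum_k q_k A_k$ and tests whether its spectrum is real and whether its eigenvectors span $\mathbb{R}^n$.

```python
# Minimal numerical sketch of Definition 1 for a single wave vector q.
# The coefficient matrices A_k below are illustrative assumptions only.
import numpy as np

def is_hyperbolic(A_list, q, tol=1e-9):
    """Check whether T(q) = sum_k q_k A_k has a real spectrum and a full eigenbasis."""
    T = sum(qk * Ak for qk, Ak in zip(q, A_list))
    eigvals, eigvecs = np.linalg.eig(T)
    real_spectrum = bool(np.all(np.abs(eigvals.imag) < tol))
    # Diagonalizability: the n computed eigenvectors must span R^n,
    # i.e. the eigenvector matrix must be numerically non-singular.
    full_basis = np.linalg.matrix_rank(eigvecs, tol=tol) == T.shape[0]
    return real_spectrum and full_basis

# Example: symmetric coefficient matrices (a symmetric, hence Friedrichs,
# system) yield a hyperbolic T(q) for every real q.
A1 = np.array([[0.0, 1.0], [1.0, 0.0]])
A2 = np.array([[1.0, 0.0], [0.0, -1.0]])
for q in np.random.default_rng(0).normal(size=(5, 2)):
    assert is_hyperbolic([A1, A2], q)
```

For systems whose coefficients depend on parameters, such a pointwise test has to be repeated over the whole parameter range, which is exactly the difficulty that motivates the algebraic criteria developed below.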
Due to the difficulty of establishing the hyperbolicity of systems of quasi-linear equations in the cases pointed out in the Introduction, one usually resorts to checking a property that is, generally speaking, stronger than hyperbolicity (see Theorem 1, Sufficiency), namely, the so-called t-hyperbolicity (Friedrichs hyperbolicity) (see, for example, refs. [4,14]). We give the following definition of the t-hyperbolicity of the system (1), which is somewhat modernized in comparison with [1].
This will permit us to establish the equivalence of the upgraded definition with Definition 1, which opens the way to studying the hyperbolicity of systems (1) by a simpler method than the original definition of hyperbolicity provides.
Definition 2.
System (1) is called t-hyperbolic if the matrix $T(\mathsf{q})$ is diagonalizable for any sets $\mathsf{q} = \langle q_1,\dots,q_m\rangle \in \mathbb{R}^m$ and $\mathbf{u} = \langle u_1,\dots,u_n\rangle$, and there exists a symmetric positive $n\times n$ matrix $D$ for which the matrix $D\,T(\mathsf{q})$ is symmetric.

3. Coincidence of Hyperbolicity Definitions

An $n\times n$ matrix $B$ is called diagonalizable if it has exactly $n$ linearly independent eigenvectors $\{e_r;\ r = 1,\dots,n\}$ or, in other words, if these vectors form a basis in $\mathbb{R}^n$. In this section, we prove an important auxiliary theorem on the diagonalizability of a matrix, which belongs to the field of matrix analysis and may be of independent interest. It will later allow us to prove the equivalence of Definitions 1 and 2. In formulating and proving all statements, we follow the terminology of the monograph [15].
Theorem 1.
For a real $n\times n$ matrix $B$ to be diagonalizable with all its eigenvalues $\mu_1,\dots,\mu_n$ real, it is necessary and sufficient that there exist a symmetric positive $n\times n$ matrix $D$ for which the matrix $DB$ is symmetric. In this case, the set of eigenvalues of the matrix $B$ coincides with the set of eigenvalues of the matrix $DB$.
Proof. 
Necessity. Let the matrix $B$ be diagonalizable and let all its eigenvalues be real. The proof of the existence of the matrix $D$ consists of the following items 1–5.
1. Let the real matrix $B$ represent a linear operator in $\mathbb{R}^n$ in the standard basis $\{e^{(0)}_a \in \mathbb{R}^n;\ a = 1,\dots,n\}$, $(e^{(0)}_a)_b = \delta_{ab}$, $a,b = 1,\dots,n$. We consider this operator in $\mathbb{C}^n$. Further, let $c = \langle c_1,\dots,c_n\rangle \in \mathbb{C}^n$ be an eigenvector corresponding to a real eigenvalue $\mu$ of the matrix $B$, that is, $Bc = \mu c$. This eigenvector can always be chosen so that all its components are real.
In fact, under the specified condition, at least one of the real vectors $\mathrm{Re}\,c = \langle \mathrm{Re}\,c_1,\dots,\mathrm{Re}\,c_n\rangle$ or $\mathrm{Im}\,c = \langle \mathrm{Im}\,c_1,\dots,\mathrm{Im}\,c_n\rangle$ is not equal to zero. Assuming, for definiteness, that the first of them is non-zero, and taking the real part of both sides of the equality $Bc = \mu c$, we find that $\mathrm{Re}\,c$ is an eigenvector of the matrix $B$ corresponding to the same eigenvalue $\mu$. Therefore, since $\mu$ is an arbitrarily chosen eigenvalue of the matrix $B$, which, by supposition, has a complete set of real eigenvalues $\mu_1,\dots,\mu_n$ with corresponding eigenvectors $\{e_1,\dots,e_n\}$, all of these eigenvectors can be selected with real components.
2. For any basis $\{e_1,\dots,e_n\}$ of the space $\mathbb{R}^n$, there is a real nonsingular $n\times n$ matrix $V$ for which the set of vectors $\{V e_1,\dots,V e_n\}$ is orthonormal. Indeed, we apply the Sonin–Schmidt orthogonalization process to the vector set $\{e_1,\dots,e_n\}$. As a result, we obtain an orthonormal set $\{e'_1,\dots,e'_n\}$, which is complete in the space $\mathbb{R}^n$.
This set is decomposed as
$$e'_a = \sum_{b=1}^{a} V_{ab}\, e_b, \qquad a = 1,\dots,n,$$
over the original vector set $\{e_1,\dots,e_n\}$ to which the process is applied, and the coefficients of this expansion are real. The matrix $V$ is defined by the matrix elements $(V)_{ab} = V_{ab}$. It has a non-zero determinant, since the set $\{e_1,\dots,e_n\}$ is a basis in $\mathbb{R}^n$ and this matrix is triangular.
3. Since $\{e_1,\dots,e_n\}$ are eigenvectors of the matrix $B$ with eigenvalues $\mu_1,\dots,\mu_n$, $B e_r = \mu_r e_r$, we have
$$\big(V B V^{-1}\big)\, V e_r = V B e_r = \mu_r\, V e_r, \qquad r = 1,\dots,n.$$
Thus the vectors $\{V e_1,\dots,V e_n\}$ are orthonormal, and each of them is an eigenvector of the matrix $V B V^{-1}$.
Thus, a consequence of items 1–3 is the fact that, for any diagonalizable real $n\times n$ matrix $B$ with real eigenvalues, there exists a real nonsingular $n\times n$ matrix $V$ for which the matrix $V B V^{-1}$ is diagonalizable and has only real eigenvalues, and all its eigenvectors form an orthonormal basis in $\mathbb{R}^n$.
4. Let us introduce the $n\times n$ matrix $C = V B V^{-1}$. This matrix is symmetric, $C^T = C$, since any matrix with a complete orthonormal set of eigenvectors and real eigenvalues has this property.
5. From item 4 it follows that the matrix $V^T C V$ is symmetric, since $\big(V^T C V\big)^T = V^T C^T V = V^T C V$. In addition, $D = V^T V$ is a positive symmetric matrix. Then
$$D B = \big(V^T V\big) B = V^T C V$$
is also a symmetric matrix.
Sufficiency. Suppose now that, for the real $n\times n$ matrix $B$, there exists a real symmetric positive matrix $D$ such that the matrix $DB$ is real and symmetric. The proof of the diagonalizability of the matrix $B$ and of the realness of its eigenvalues consists of the following items 6–10.
6. Under the given conditions, there exists an orthonormal set $\{e_r;\ r = 1,\dots,n\}$ of eigenvectors of the matrix $D$ with real positive eigenvalues $\mu_r > 0$, $r = 1,\dots,n$. At the same time, according to item 1, all eigenvectors of the matrix $D$ corresponding to these eigenvalues can be chosen with real components in $\mathbb{C}^n$.
There is a unitary and, therefore, nonsingular matrix $U$, $U U^{+} = 1$ (where $+$ denotes Hermitian conjugation), which diagonalizes the matrix $D$, that is, $U e^{(0)}_r = e_r$, $r = 1,\dots,n$, where $\{(e^{(0)}_r)_{r'} = \delta_{r r'};\ r, r' = 1,\dots,n\}$ is the standard basis in $\mathbb{R}^n$ and $U D U^{+} = \mathrm{diag}\langle \mu_r;\ r = 1,\dots,n\rangle$.
From the equalities $U e^{(0)}_r = e_r$, taking into account the realness of the components of the basis vectors $\{e_r;\ r = 1,\dots,n\}$ and of the vectors $\{e^{(0)}_r;\ r = 1,\dots,n\}$, it follows that $(\mathrm{Im}\,U)\, e^{(0)}_r = 0$ for all $r = 1,\dots,n$, that is, for all vectors of the standard basis. Hence, the matrix $\mathrm{Im}\,U$ is zero. Thus, the matrix $U$ is real and, therefore, $U^{+} = U^T$. Then the unitarity condition of the matrix $U$ takes the form $U^T U = 1$; therefore, this matrix is orthogonal.
7. We define the real-valued matrix $V$ by the equality $V^T = U^T \mathrm{diag}\langle \mu_r^{1/2};\ r = 1,\dots,n\rangle$. The matrix $V$ is non-singular, since it is the product of non-singular real matrices. From the equality $U D U^{+} = \mathrm{diag}\langle \mu_r;\ r = 1,\dots,n\rangle$ and the realness of the matrix $U$, it follows that
$$D = U^T\, \mathrm{diag}\langle \mu_r;\ r = 1,\dots,n\rangle\, U = V^T V.$$
8. Since the matrix $V$ is non-singular, we may define $C = \big(V^T\big)^{-1} D B\, V^{-1}$. Since the matrix $DB$ is symmetric, the matrix $C$ is also symmetric: $C^T = \big(V^{-1}\big)^T (DB)^T \big(\big(V^T\big)^{-1}\big)^T = \big(V^T\big)^{-1} D B\, V^{-1} = C$. Additionally,
$$D B = V^T C V = V^T V B.$$
Consequently, $B = V^{-1} C V$.
9. All solutions of the characteristic equation $\det(C - \mu) = 0$ are real, since the matrix $C$ is symmetric. Since
$$\det(B - \mu) = \det V \cdot \det(B - \mu)\cdot \det V^{-1} = \det\big(V B V^{-1} - \mu\big) = \det(C - \mu),$$
all these solutions are eigenvalues of the matrix $B$. Thus, all eigenvalues of the matrix $B$ are real.
10. Let $\{e_r;\ r = 1,\dots,n\}$ be the collection of eigenvectors of the symmetric matrix $C$ and $\{\mu_r;\ r = 1,\dots,n\}$ the collection of corresponding eigenvalues, $C e_r = \mu_r e_r$. Then, substituting the expression $C = V B V^{-1}$ into $C e_r = \mu_r e_r$, we find that
$$B\, V^{-1} e_r = \mu_r\, V^{-1} e_r, \qquad r = 1,\dots,n.$$
Therefore, $\{V^{-1} e_r;\ r = 1,\dots,n\}$ is a collection of exactly $n$ linearly independent eigenvectors of the matrix $B$, which means that the matrix $B$ is diagonalizable.
The statement of the theorem may be reformulated as follows.  □
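The “necessity” construction in items 1–5 is easy to reproduce numerically. The following sketch (with randomly generated illustrative data, not taken from the paper) builds a diagonalizable real matrix $B$ with a real spectrum, takes $V$ so that the images of its eigenvectors are orthonormal, and checks that $D = V^T V$ is symmetric positive and that $DB$ is symmetric.

```python
# Numerical sketch of items 1-5 of Theorem 1 (assumed random test data).
import numpy as np

rng = np.random.default_rng(1)
n = 4
P = rng.normal(size=(n, n))                                 # columns: eigenvectors e_1, ..., e_n
B = P @ np.diag([1.0, 2.0, -3.0, 0.5]) @ np.linalg.inv(P)   # real, diagonalizable, real spectrum

# Choosing V = P^{-1} sends each eigenvector to a standard basis vector,
# so the images {V e_r} are orthonormal (a particular choice of the V in item 2).
V = np.linalg.inv(P)
D = V.T @ V                                                 # symmetric positive, D = V^T V as in item 5
assert np.allclose(D, D.T) and np.all(np.linalg.eigvalsh(D) > 0)
assert np.allclose(D @ B, (D @ B).T)                        # D B is symmetric
```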
Corollary 1.
The matrix B is diagonalizable and its eigenvalues are real if and only if it is representable as the product of two symmetric matrices and one of them is strictly positive.
Proof. 
The statement follows from the equalities $D B = V^T C V$ and $B = D^{-1} V^T C V$.   □
Remark 1.
The matrix $D$ is defined only up to arbitrary positive factors. However, even taking this kind of ambiguity of its choice into account, the class of possible matrices $D$ whose existence is stated in the theorem depends significantly on the form of the matrix $B$, and this is the case even when $B$ is non-degenerate.
The statement about the coincidence of the concepts of hyperbolicity and t-hyperbolicity is a consequence of Theorem 1.
Theorem 2.
In order for the system ( 1 ) of quasi-linear equations to be hyperbolic, it is necessary and sufficient that it should be t-hyperbolic.
Proof. 
Based on the definition of hyperbolicity of first-order quasi-linear equations systems, the proof follows by applying the statement of Theorem 1 to the matrix T ( q ) with elements (3).  □
Thus, the question of the hyperbolicity of the system of Equation (1) reduces to determining whether the matrix $T(\mathsf{q})$ admits a factorization of the form $T(\mathsf{q}) = F(\mathsf{q})\, G(\mathsf{q})$, where $F(\mathsf{q})$ and $G(\mathsf{q})$ are symmetric matrices with $F(\mathsf{q}) > 0$. It is essential that the matrix $T(\mathsf{q})$ depends linearly on the vector $\mathsf{q}$.
The statement just proved simplifies the proof of the hyperbolicity of a given system with a large number of quasi-linear equations, when the set of matrices $\{A_k(\mathsf{x},t);\ k = 1,\dots,m\}$ is fixed, by reducing it to finding a suitable matrix $D$; a direct search for conditions under which the matrix $T(\mathsf{q})$ is hyperbolic appears much more laborious.
For effective application of this method, it is necessary to specify some transparently verifiable criterion of hyperbolicity for an arbitrarily selected $n\times n$ matrix $T(\mathsf{q})$ depending linearly on the vector $\mathsf{q} \in \mathbb{R}^m$. The following sections of the work are devoted to this problem. We focus on a somewhat more general algebraic problem, namely, the hyperbolicity of a fixed matrix $T$ of dimension $n$.
Corollary 2.
In order for the matrix T to be hyperbolic, it is necessary and sufficient that there exists a symmetric positive matrix D that satisfies the equation
$$T^T D = D\,T. \tag{5}$$
Proof. 
The equality (5) follows from the symmetry of the matrices $DT$ and $D$, that is, $(DT)^T = T^T D^T = T^T D$.  □
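Corollary 2 can be read in both directions: a symmetric positive $D$ satisfying (5) certifies hyperbolicity and, conversely, any matrix of the form $T = F G$ with $F$ symmetric positive and $G$ symmetric satisfies (5) with $D = F^{-1}$. The sketch below (random illustrative data, not from the paper) checks both facts numerically.

```python
# Sketch of the factorization criterion: T = F G with F symmetric positive
# definite and G symmetric is diagonalizable with a real spectrum.
import numpy as np

rng = np.random.default_rng(2)
n = 5
M = rng.normal(size=(n, n))
F = M @ M.T + n * np.eye(n)                      # symmetric positive definite
G = rng.normal(size=(n, n)); G = (G + G.T) / 2   # symmetric
T = F @ G

eigvals, eigvecs = np.linalg.eig(T)
assert np.allclose(eigvals.imag, 0.0, atol=1e-8)   # real spectrum
assert np.linalg.matrix_rank(eigvecs) == n         # full eigenbasis, i.e. diagonalizable

D = np.linalg.inv(F)                               # an admissible choice of D for this T
assert np.allclose(T.T @ D, D @ T)                 # Equation (5) holds
```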
In what follows, we call the matrix $D$ the binder matrix. Equation (5) can also conveniently be represented in the following form. Introducing the matrices $T_{+} = (T + T^T)/2$ and $T_{-} = (T - T^T)/2$, we obtain the following conclusion.
Corollary 3.
In order for the matrix T to be hyperbolic, it is necessary and sufficient that there exists such a symmetric positive matrix D that satisfies the equation
$$[\,T_{+},\, D\,] = \{\,T_{-},\, D\,\}, \tag{6}$$
where $[\cdot\,,\cdot]$ is the commutator of a pair of matrices and $\{\cdot\,,\cdot\}$ is their anticommutator.
Proof. 
Equation (6) is valid due to $T = T_{+} + T_{-}$, $T^T = T_{+} - T_{-}$.  □
Corollary 4.
If $[T_{+}, T_{-}] = 0$ and $T_{-} \neq 0$, then a symmetric positive solution $D$ of Equation (6) does not exist.
Proof. 
In the case when these matrices commute, due to the self-adjointness of the matrices $T_{+}$ and $\mathrm{i}\,T_{-}$ in $\mathbb{C}^n$, they have a common orthonormal basis of eigenvectors $\{e_r:\ r = 1,\dots,n\}$. Denote by $\{\lambda_r^{(+)}:\ r = 1,\dots,n\}$ and $\{\mathrm{i}\,\lambda_r^{(-)}:\ r = 1,\dots,n\}$ the sets of eigenvalues of these matrices in this basis, with real values $\lambda_r^{(\pm)}$, $r = 1,\dots,n$. We calculate the matrix elements $(e_a,\ \cdot\ e_b)$ of both sides of the Equation (6):
$$(e_a, [T_{+}, D]\, e_b) = \big(\lambda_a^{(+)} - \lambda_b^{(+)}\big)\,(e_a, D\, e_b), \qquad (e_a, \{T_{-}, D\}\, e_b) = \mathrm{i}\,\big(\lambda_a^{(-)} + \lambda_b^{(-)}\big)\,(e_a, D\, e_b).$$
Then, at $a = b$, the left-hand side of the equality
$$(e_a, [T_{+}, D]\, e_a) = (e_a, \{T_{-}, D\}\, e_a)$$
is equal to zero. However, if $T_{-} \neq 0$, then at least one of the eigenvalues $\mathrm{i}\,\lambda_l^{(-)} \neq 0$, and for this number $l$ it follows that $(e_l, D\, e_l) = 0$, which is impossible for a positive matrix $D$. Consequently, the equality is possible only in the trivial case $T_{-} = 0$.  □

4. Hyperbolicity of Diagonalizable Matrices T

Let $R$ be an $n\times n$ matrix that is hyperbolic in the sense of the definition of this term given in Section 2 and whose eigenvalues are, moreover, all real and simple (not multiple). Next, let $Q$ be an arbitrary $n\times n$ matrix, and consider the one-parameter family of matrices $R + \eta Q$, $\eta \in \mathbb{R}$.
We now prove statements (Theorems 3–5) that show the fundamental possibility of establishing the presence of strong hyperbolicity analytically, by studying the dependence of the matrix $T(\mathsf{q})$ on the parameters of the equations system, starting from the strong hyperbolicity of a reference system.
Theorem 3.
All eigenvalues of each matrix $R + \eta Q$ with $\eta \in (\rho_-, \rho_+)$ are real and simple, where $0 \in (\rho_-, \rho_+)$ and $\rho_-$, $\rho_+$ are the points of the real axis nearest to the point $\eta = 0$ on the left and on the right, respectively, at which the equation $\det(zE - R - \eta Q) = 0$ with respect to $z \in \mathbb{R}$ has multiple roots. These eigenvalues are analytic functions of $\eta$ in a domain of the complex plane containing the interval $(\rho_-, \rho_+)$.
Proof. 
Let $P(z, \eta) = \sum_{r=0}^{n} a_r(\eta)\, z^{n-r}$ be a polynomial in $z \in \mathbb{R}$ of degree $n$ with coefficients $a_r(\eta)$, $r = 0,\dots,n$, which depend on the parameter $\eta$. The polynomial $P(z, 0)$ has $n$ real roots, none of which is multiple, and this remains true for $|\eta| < \varepsilon$ when $\varepsilon > 0$ is sufficiently small. If the coefficients of the polynomial $P(z, \eta)$ depend on $\eta$ analytically, its roots are also analytic functions of $\eta$. They are built by analytic continuation from the circle $\{\eta : |\eta| < \varepsilon\}$ with sufficiently small $\varepsilon > 0$.
We apply this statement to the polynomial
$$P(z, \eta) = \det(zE - R - \eta Q) = z^n + \sum_{r=0}^{n-1} a_r(\eta)\, z^{n-1-r},$$
which has coefficients $a_r(\eta)$ depending on $\eta \in \mathbb{R}$ polynomially (see ref. [16]), and whose roots are the eigenvalues of the matrix $R + \eta Q$.   □
From Theorem 3, we obtain the following sufficient criterion for the hyperbolicity of the matrix $T$.
Theorem 4.
Let the symmetric matrix $T + T^T$ have no multiple eigenvalues and let $1 \in (\rho_-, \rho_+)$, where $\rho_\pm$ are the boundary points of the interval of variation of the parameter $\eta$ specified in Theorem 3 ($\rho_\pm$ may be infinite). Then the matrix $(1+\eta)\,T + (1-\eta)\,T^T$ has no multiple eigenvalues for $\eta$ in this interval and, therefore, the matrix $T$ is hyperbolic.
Proof. 
The statement follows directly from Theorem 3 with $R = T + T^T$ and $Q = T - T^T$.  □
The points $\rho_\pm$ are determined from the coefficients $a_r(\eta)$, $r = 0,\dots,n-1$ by applying the Bézout theorem to the polynomial $P(z;\eta)$. To do this, one applies the Euclidean algorithm to this polynomial and its derivative $P_1(z;\eta) \equiv P'(z;\eta)$ with respect to $z$ (see, for example, ref. [16]). The polynomial $P_n(\eta)$ of degree zero in $z$, obtained as the last remainder of this algorithm, is equal to zero exactly at those points $\eta$ where $P(z;\eta)$ has a multiple root. Thus, the following is valid.
Theorem 5.
If $1 \in (\rho_-, \rho_+)$, where the points $\rho_\pm$ are the roots of the equation $P_n(\eta) = 0$ nearest to the point $\eta = 0$ on the left and on the right, respectively, and $P_n(\eta)$ is the result of applying the Euclidean algorithm to the polynomial
$$P(z;\eta) = \det\big(zE - (1+\eta)\,T - (1-\eta)\,T^T\big)$$
and its derivative $P'(z;\eta)$, then the matrix $T$ is hyperbolic.
In principle, by calculating the polynomial $P_n(\eta)$, one may obtain a verifiable sufficient condition for the hyperbolicity of the matrix $T$ on the basis of Theorem 5. However, its implementation requires a rather routine analysis if the matrix order $n$ is not small. In this regard, we proceed to a deeper study of the hyperbolicity of a given matrix $T$ using Theorem 3.
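For small matrices the elimination of $z$ described above can be carried out by computer algebra. In the sketch below (an assumed 3×3 example, not taken from the paper), the role of the final remainder $P_n(\eta)$ of the Euclidean algorithm is played by the discriminant of $P(z;\eta)$ with respect to $z$, which vanishes at the same points.

```python
# Sketch of the Theorem 5 test with sympy: eliminate z via the discriminant
# and locate the multiple-root points rho_-, rho_+ nearest to eta = 0.
import sympy as sp

z, eta = sp.symbols('z eta', real=True)
T = sp.Matrix([[0, 1, 0],
               [2, 0, 1],
               [0, 3, 0]])                       # illustrative matrix, not from the paper
P = sp.expand((z * sp.eye(3) - (1 + eta) * T - (1 - eta) * T.T).det())
P_n = sp.factor(sp.discriminant(P, z))           # vanishes exactly where P has a multiple root in z

roots = sp.real_roots(sp.Poly(P_n, eta))
rho_minus = max([r for r in roots if r < 0], default=-sp.oo)
rho_plus = min([r for r in roots if r > 0], default=sp.oo)
print('rho_- =', rho_minus, ', rho_+ =', rho_plus)
print('Theorem 5 applies:', bool(rho_minus < 1) and bool(1 < rho_plus))
```

For this example one finds $\rho_\pm = \pm\sqrt{5}$, so $1 \in (\rho_-, \rho_+)$ and the matrix is hyperbolic.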

5. The Case n = 2

Let $n = 2$. This case is important for the subsequent study. Let $S^{(j)}$, $j = 1,2,3$ be the standard Pauli $2\times 2$ matrices,
$$S^{(1)} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad S^{(2)} = \begin{pmatrix} 0 & -\mathrm{i} \\ \mathrm{i} & 0 \end{pmatrix}, \qquad S^{(3)} = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.$$
Together with the unit matrix $E$ of dimension 2, they form a basis in the linear space of all complex $2\times 2$ matrices. We represent the real matrix $T$ of dimension 2 in the form of the expansion according to this basis
$$T = t_0 E + \widetilde{T}, \qquad \widetilde{T} = t_1 S^{(1)} + \mathrm{i}\, t_2 S^{(2)} + t_3 S^{(3)} = \begin{pmatrix} t_3 & t_1 + t_2 \\ t_1 - t_2 & -t_3 \end{pmatrix}, \tag{7}$$
where the coefficients $t_j$, $j = 0,1,2,3$ are real. The eigenvalues $\lambda_\pm$ of the matrix $T$ are the roots of the quadratic trinomial $\lambda^2 - \lambda\, \mathrm{Sp}\, T + \det T$ with respect to $\lambda$. Since the matrices $S^{(j)}$, $j = 1,2,3$ are traceless, $\mathrm{Sp}\, T = 2 t_0$. In this case, $\det T = t_0^2 - t_1^2 + t_2^2 - t_3^2$. Consequently, the eigenvalues are real if and only if $0 \le (\mathrm{Sp}\, T)^2/4 - \det T = t_1^2 + t_3^2 - t_2^2$. On the basis of the analysis carried out, the boundary points of the interval $[\rho_-, \rho_+]$ at which hyperbolicity is violated are defined by the condition $t_1^2 + t_3^2 = t_2^2$.
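The criterion just derived is easy to confirm by a direct numerical experiment. The following sketch (random illustrative samples, not from the paper) compares the realness of the computed spectrum with the inequality $t_1^2 + t_3^2 \ge t_2^2$.

```python
# Monte Carlo check of the n = 2 realness criterion t1^2 + t3^2 >= t2^2.
import numpy as np

E  = np.eye(2, dtype=complex)
S1 = np.array([[0, 1], [1, 0]], dtype=complex)
S2 = np.array([[0, -1j], [1j, 0]])
S3 = np.array([[1, 0], [0, -1]], dtype=complex)

rng = np.random.default_rng(3)
for _ in range(10_000):
    t0, t1, t2, t3 = rng.normal(size=4)
    if abs(t1**2 + t3**2 - t2**2) < 1e-6:
        continue                                  # skip samples too close to the boundary case
    T = (t0 * E + t1 * S1 + 1j * t2 * S2 + t3 * S3).real   # a real 2x2 matrix
    spectrum_real = np.allclose(np.linalg.eigvals(T).imag, 0.0)
    assert spectrum_real == (t1**2 + t3**2 >= t2**2)
```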
Consider the question of the non-diagonalizability of the matrix $T$. This can happen only if the equality $t_1^2 + t_3^2 = t_2^2$ is realized, so that $T$ has the single eigenvalue $t_0$. On the other hand, it is easily verified that under these conditions the matrix $\widetilde{T}$ (see (7)) is nilpotent, $\widetilde{T}^2 = 0$; that is, $T$ is indeed not diagonalizable.
We now show that the obtained condition for the hyperbolicity of the matrix $T$ is consistent with the conclusion of Theorem 2. To do this, we introduce, together with the decomposition (7), the analogous expansion $D = d_0 E + d_1 S^{(1)} + d_3 S^{(3)}$, where $d_2 = 0$ due to the symmetry of the matrix $D$. Since in the case under consideration the matrices $T_{+}$ and $T_{-}$ are represented in the form $T_{+} = t_0 E + t_1 S^{(1)} + t_3 S^{(3)}$, $T_{-} = \mathrm{i}\, t_2 S^{(2)}$, using the well-known commutation relations of the Pauli matrices
$$\{S^{(j)}, S^{(k)}\} = 2\delta_{jk} E, \qquad [S^{(j)}, S^{(k)}] = 2\mathrm{i}\,\varepsilon_{jkl}\, S^{(l)}, \qquad j,k = 1,2,3,$$
where ε j k l is the Levi-Civita symbol, we find that
$$\{T_{-}, D\} = \{\mathrm{i}\, t_2 S^{(2)},\ d_0 E + d_1 S^{(1)} + d_3 S^{(3)}\} = 2\mathrm{i}\, t_2 d_0\, S^{(2)},$$
$$[T_{+}, D] = [\,t_0 E + t_1 S^{(1)} + t_3 S^{(3)},\ d_0 E + d_1 S^{(1)} + d_3 S^{(3)}\,] = t_1 d_3\, [S^{(1)}, S^{(3)}] + t_3 d_1\, [S^{(3)}, S^{(1)}] = 2\mathrm{i}\,(t_3 d_1 - t_1 d_3)\, S^{(2)}.$$
Substituting the obtained expressions into (6), we find the condition for the hyperbolicity of the matrix $T$ in the form of the following restriction on the coefficients $d_0, d_1, d_3$: $t_2 d_0 = t_3 d_1 - t_1 d_3$. Then, in the case when the roots of Equation (4) are multiple, it must hold that $\pm d_0\, (t_1^2 + t_3^2)^{1/2} = t_3 d_1 - t_1 d_3$. Consequently, introducing angles $\varphi$ and $\psi$ such that $\cos\varphi = t_1\,(t_1^2 + t_3^2)^{-1/2}$, $\sin\varphi = t_3\,(t_1^2 + t_3^2)^{-1/2}$, $\cos\psi = d_1\,(d_1^2 + d_3^2)^{-1/2}$, $\sin\psi = d_3\,(d_1^2 + d_3^2)^{-1/2}$, we obtain the condition on the coefficients $d_0, d_1, d_3$ in the form $d_0^2 = (d_1^2 + d_3^2)\cos^2(\varphi - \psi)$. If the roots are multiple, then for any angle $\psi$ there is a choice of the angle $\varphi = \psi$ for which $d_0^2 = d_1^2 + d_3^2$, that is, $\det D = 0$, and, therefore, the matrix $D$ is not positive.
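The Pauli-matrix computation above can be reproduced symbolically. The sketch below (a simple sympy check, not part of the original text) inserts the expansions of $T_+$, $T_-$ and $D$ into Equation (6) and recovers the single scalar constraint $t_2 d_0 = t_3 d_1 - t_1 d_3$.

```python
# Symbolic verification of the n = 2 constraint obtained from Equation (6).
import sympy as sp

t0, t1, t2, t3, d0, d1, d3 = sp.symbols('t0 t1 t2 t3 d0 d1 d3', real=True)
E  = sp.eye(2)
S1 = sp.Matrix([[0, 1], [1, 0]])
S2 = sp.Matrix([[0, -sp.I], [sp.I, 0]])
S3 = sp.Matrix([[1, 0], [0, -1]])

T_plus  = t0 * E + t1 * S1 + t3 * S3
T_minus = sp.I * t2 * S2
D = d0 * E + d1 * S1 + d3 * S3

residual = sp.simplify((T_plus * D - D * T_plus) - (T_minus * D + D * T_minus))
# Both sides of (6) are proportional to S2, so the matrix equation collapses
# to one scalar condition carried by the off-diagonal entry of the residual.
print(sp.Eq(sp.expand(residual[0, 1] / 2), 0))    # the constraint t3*d1 - t1*d3 = t2*d0
```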

6. Investigation of the Equation for the Matrix D

Consider Equation (5) in the general case. If this equation has a degenerate solution $D$ with $\det D = 0$, then the kernel of the matrix $D$ is non-trivial. If the vector $g \in \mathrm{Ker}\, D$, so that $D g = 0$, then it follows from Equation (5) that $D T g = T^T D g = 0$; that is, due to the arbitrariness of the choice of the vector $g \in \mathrm{Ker}\, D$, the kernel of the matrix $D$ is invariant under the transformation by the matrix $T$. For this reason, in what follows we exclude the degenerate solutions of Equation (5) from our consideration.
Since the matrix $T$ has at least one eigenvalue, and since all its eigenvalues coincide with the eigenvalues of the matrix $T^T$, the system of homogeneous equations for the matrix elements $(D)_{ab} \equiv D_{ab}$ that follows from the matrix equation $T^T D - D T = 0$ has at least one nontrivial solution. Furthermore, suppose that there is an antisymmetric solution, $D^T = -D$. It then follows from (5) that the equality $(DT)^T + DT = T^T D^T + DT = 0$ is fulfilled for such a solution. Consequently, $DT = 0$ and, due to the nondegeneracy of the matrix $D$, we have $T = 0$; we have excluded this trivial case from our consideration. Then Equation (5) always has a solution in the form of a symmetric matrix. Indeed, let $D$ be a solution for a nontrivial matrix $T$. If it is not a symmetric matrix then, applying the transposition operation to both sides of the equation, we find that the matrix $D^T$ is also a solution. Since $D^T \neq -D$, the matrix $D + D^T$ is a symmetric solution of Equation (5).
Thus, the main problem is to find the conditions for the existence of a solution $D$ of Equation (5) that is diagonalizable and has only positive eigenvalues. In order to solve this problem, we introduce into consideration the set $\mathcal{T}_{-}$ in the space of all possible antisymmetric $n\times n$ matrices, consisting of all matrices $T_{-}$ for which there is a symmetric positive solution $D$. The following statements, namely, Theorem 6 and the corollaries related to it, clarify the qualitative structure of the set of all hyperbolic matrices. The following is valid.
Theorem 6.
The set $\mathcal{T}_{-}$ of all such matrices $T_{-}$ is centrally symmetric relative to the zero matrix.
Proof. 
Fix the matrix $T_{+}$ and the symmetric positive matrix $D$, and let $T_{-} \in \mathcal{T}_{-}$; the latter matrix obeys Equation (6) for the given $T_{+}$ and $D$. Then, due to its antisymmetry, the matrix $-T_{-} = T_{-}^{T}$ also obeys Equation (5) and, consequently, Equation (6). Due to the arbitrariness of the choice of the matrix $T_{-}$, the set $\mathcal{T}_{-}$ is centrally symmetric.  □
Now we prove an auxiliary statement of a technical nature.
Theorem 7.
Let the matrix $T$ satisfy Equation (6) together with the symmetric positive matrix $D$, which has the set of eigenvalues $\zeta_k > 0$ with eigenvectors $e_k$, $k = 1,\dots,n$. Then the matrix elements $T^{(\pm)}_{ab} = (T_{\pm})_{ab}$, $a,b = 1,\dots,n$ of the matrices $T_{\pm}$, calculated in the orthonormal basis $\{e_k;\ k = 1,\dots,n\}$, satisfy the equation
$$T^{(-)}_{ab} = \frac{\zeta_b - \zeta_a}{\zeta_b + \zeta_a}\; T^{(+)}_{ab}. \tag{8}$$
Proof. 
Let us calculate the matrix elements of the linear operators on both sides of Equation (6), using the scalar product $(\cdot\,,\cdot)$ of vectors in $\mathbb{R}^n$. Since $(e_a, D e_b) = \zeta_a \delta_{ab}$ and $(e_a, T_{\pm} e_b) = T^{(\pm)}_{ab}$, $a,b = 1,\dots,n$, we have
$$(e_a, \{T_{-}, D\}\, e_b) = T^{(-)}_{ab}\,(\zeta_a + \zeta_b), \qquad (e_a, [T_{+}, D]\, e_b) = T^{(+)}_{ab}\,(\zeta_b - \zeta_a).$$
On the basis of (6), we conclude that these expressions must be equal, from which (8) follows, since $\zeta_a + \zeta_b > 0$.  □
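Relation (8) is convenient for numerical experiments, because a matrix $T$ satisfying Equation (5) is easy to manufacture: take any symmetric positive $D$ and any symmetric $S$ and set $T = D^{-1} S$. The sketch below (random illustrative data, not from the paper) checks (8) in the eigenbasis of $D$.

```python
# Numerical check of relation (8) from Theorem 7.
import numpy as np

rng = np.random.default_rng(4)
n = 4
M = rng.normal(size=(n, n))
D = M @ M.T + n * np.eye(n)                      # symmetric positive definite
S = rng.normal(size=(n, n)); S = (S + S.T) / 2   # symmetric
T = np.linalg.solve(D, S)                        # then D T = S is symmetric, i.e. Equation (5) holds

zeta, W = np.linalg.eigh(D)                      # eigenvalues zeta_k > 0 and orthonormal eigenvectors
Tb = W.T @ T @ W                                 # matrix elements of T in the eigenbasis of D
T_plus, T_minus = (Tb + Tb.T) / 2, (Tb - Tb.T) / 2
ratio = (zeta[None, :] - zeta[:, None]) / (zeta[None, :] + zeta[:, None])
assert np.allclose(T_minus, ratio * T_plus)      # relation (8)
```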
Corollary 5.
For a fixed symmetric matrix $T_{+}$, each matrix $T_{-} \in \mathcal{T}_{-}$ is uniquely defined by the matrix elements $T^{(-)}_{ab}$, which are calculated on the basis of equality (8) by means of the orthogonal $n\times n$ matrix $W$ that maps the standard basis $\{e^{(0)}_a;\ a = 1,\dots,n\}$ of $\mathbb{R}^n$ into the basis $\{e_a:\ a = 1,\dots,n\}$, and by means of the collection of eigenvalues $\zeta_a > 0$, $a = 1,\dots,n$.
Proof. 
It follows directly from (8).  □
Corollary 6.
The dimension $\dim \mathcal{T}_{-}$ does not exceed the number of non-zero matrix elements $T^{(+)}_{ab} \neq 0$ with $a > b$.
Proof. 
If $T^{(+)}_{ab} = 0$ in formula (8), then $T^{(-)}_{ab} = 0$ for these values of $a$ and $b$. The dimension of the space of matrices $T_{-}$ in (8) is determined by an arbitrary choice of the non-zero matrix elements with $a > b$, since this matrix is antisymmetric and $T^{(-)}_{aa} = 0$. The number of such elements $T^{(-)}_{ab}$ does not exceed the number of the corresponding non-zero elements of the matrix $T^{(+)}_{ab}$.
The validity of the statement then follows from the fact that, for every solution $D$ of Equation (5) and, consequently, of Equation (6), and for any orthogonal matrix $W$ (the orthogonality of the matrices $W$ is connected with their realness), the matrix $D_W = W D W^T$ is a solution of the equation $\big(W T W^T\big)^T D_W = D_W\, \big(W T W^T\big)$.  □
Corollary 7.
The set $\mathcal{T}_{-}$ is compact, and $\|T_{-}\| \le \|T_{+}\|$ for every $T_{-} \in \mathcal{T}_{-}$.
Proof. 
Since $\big|(\zeta_b - \zeta_a)/(\zeta_b + \zeta_a)\big| \le 1$ for $\zeta_l > 0$, $l = 1,\dots,n$, we have
$$\|T_{-}\| = \max\big\{|T^{(-)}_{ab}|;\ a,b = 1,\dots,n\big\} \le \|T_{+}\|. \qquad \square$$
Corollary 8.
The set $\mathcal{T}_{-}$ is connected.
Proof. 
Changing the parameters $\zeta_l(s)$ continuously in the parameter $s \in [0,1]$, so that $\zeta_l(0) = \zeta_l$ and $\zeta_l(1) = \zeta_0$, $l = 1,\dots,n$ for some common value $\zeta_0 > 0$, and $\mathrm{sgn}\big(\zeta_b(s) - \zeta_a(s)\big) = \mathrm{sgn}\big(\zeta_b - \zeta_a\big)$, $a,b = 1,\dots,n$, we find that, for any solution $T_{-} \in \mathcal{T}_{-}$, there always exists a continuous curve located in the subset $\mathcal{T}_{-}$ of the matrix space which connects the matrix $T_{-}$ with the zero matrix.  □
Let $T_{ab} = (e_a, T e_b)$ be the matrix elements of the matrix $T$ in the basis of eigenvectors of the matrix $D$. Then it follows from Equation (5) that
$$\zeta_a T_{ab} = \zeta_b T_{ba}, \qquad a,b = 1,\dots,n. \tag{9}$$
From this equality we find that the original equation for the matrix $D$ has a solution if and only if the non-zero products $T_{a,j_1} T_{j_1,j_2} \cdots T_{j_{s-1},j_s} T_{j_s,b}$ remain unchanged for any pair $\{a,b\}$ and for any sequence of distinct indices $j_1, j_2, \dots, j_s$, $s \le n-2$. At $n = 2$, since there is only one equality in (9) with $a \neq b$, such restrictions on the matrix elements $T_{ab}$ do not arise; therefore, we studied this case separately in the previous section. However, for example, at $n = 3$ there are already three independent equalities $\zeta_1 T_{12} = \zeta_2 T_{21}$, $\zeta_2 T_{23} = \zeta_3 T_{32}$, and $\zeta_1 T_{13} = \zeta_3 T_{31}$. In this case, we have
$$\zeta_1 T_{12} T_{23} T_{31} = \zeta_2 T_{21} T_{23} T_{31} = \zeta_3 T_{32} T_{21} T_{31} = \zeta_1 T_{13} T_{32} T_{21}$$
and, consequently, since $\zeta_1 \neq 0$, it must hold that $T_{12} T_{23} T_{31} = T_{13} T_{32} T_{21}$. It is precisely the presence of the entire set of such rigid conditions, which the matrix elements $T_{ab}$ must obey in the basis $\{e_r:\ r = 1,\dots,n\}$, that determines the possibility of solvability of Equations (5) and (6).
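These consistency conditions are easy to observe numerically. In the sketch below (random illustrative data, not from the paper), a 3×3 matrix satisfying Equation (5) is generated as in the previous check, and both relation (9) and the cyclic identity $T_{12}T_{23}T_{31} = T_{13}T_{32}T_{21}$ are verified in the eigenbasis of $D$.

```python
# Numerical illustration of relation (9) and the n = 3 cyclic condition.
import numpy as np

rng = np.random.default_rng(5)
M = rng.normal(size=(3, 3))
D = M @ M.T + 3 * np.eye(3)                      # symmetric positive definite
S = rng.normal(size=(3, 3)); S = (S + S.T) / 2   # symmetric
T = np.linalg.solve(D, S)                        # D T symmetric, so Equation (5) holds

zeta, W = np.linalg.eigh(D)
Tb = W.T @ T @ W                                 # T_ab in the eigenbasis of D
assert np.allclose(np.diag(zeta) @ Tb, (np.diag(zeta) @ Tb).T)      # relation (9)
assert np.isclose(Tb[0, 1] * Tb[1, 2] * Tb[2, 0],
                  Tb[0, 2] * Tb[2, 1] * Tb[1, 0])                   # cyclic identity
```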
Let us now prove a statement that gives a sufficient condition for the hyperbolicity of the matrix $T$ and significantly simplifies its verification; thereby it also gives a sufficient indication of the hyperbolicity of the corresponding system of quasi-linear equations. We will show that Equation (6) has a symmetric positive solution $D$, sufficiently close to a positive matrix $sE$, for any matrix $T_{-}$, in the case of a non-degenerate spectrum of the matrix $T_{+}$. It is clear that this result agrees with the conclusions of the previous section in the two-dimensional case.
Theorem 8.
Let $\{\lambda_l:\ l = 1,\dots,n\}$ be the collection of eigenvalues of the matrix $T_{+}$, which are pairwise distinct, and let $\delta = \min\{|\lambda_j - \lambda_k|:\ j \neq k;\ j,k = 1,\dots,n\}$. Then, for $\delta > 2n\varepsilon\|T_{-}\| > 0$, there exists a symmetric positive matrix $D$ satisfying the equation
$$[\,T_{+},\, D\,] = \varepsilon\,\{T_{-},\, D\}.$$
Proof. 
We represent the solution of the equation in the form of the series
$$D = \sum_{l=0}^{\infty} \varepsilon^{l} D^{(l)},$$
where $[T_{+}, D^{(0)}] = 0$. Further, we choose the matrix $D^{(0)}$ in the form $D^{(0)} = sE$, where $s > 0$ is a sufficiently large value such that the matrix $D$ is positive, and we require $[T_{+}, D^{(l)}] = \{T_{-}, D^{(l-1)}\}$, $l \in \mathbb{N}$. Let $\{e_l:\ l = 1,\dots,n\}$ be the orthonormal basis of eigenvectors of the matrix $T_{+}$ with eigenvalues $\{\lambda_l:\ l = 1,\dots,n\}$. We write down the equality of matrix elements in the basis $\{e_l:\ l = 1,\dots,n\}$ that follows from the last recurrence relation:
$$(e_j, [T_{+}, D^{(l)}]\, e_k) = (\lambda_j - \lambda_k)\,(e_j, D^{(l)} e_k) = (e_j, \{T_{-}, D^{(l-1)}\}\, e_k).$$
Since the matrix $\{T_{-}, D^{(l-1)}\}$ is antisymmetric when $D^{(l-1)}$ is symmetric, the right-hand side of the last equality is equal to zero at $j = k$. Without contradicting this equality, we set $(e_k, D^{(l)} e_k) = 0$, $k = 1,\dots,n$. Consequently, considering it as an equation for the non-diagonal matrix elements of the matrix $D^{(l)}$, we find that this equation is solvable and the recurrence relation
$$(e_j, D^{(l)} e_k) = (\lambda_j - \lambda_k)^{-1} \sum_{m=1}^{n} \Big[\,(e_j, T_{-} e_m)(e_m, D^{(l-1)} e_k) + (e_j, D^{(l-1)} e_m)(e_m, T_{-} e_k)\,\Big]$$
holds.
From this, the estimate
$$\big|(e_j, D^{(l)} e_k)\big| \le \|D^{(l)}\| \le \frac{2n}{\delta}\, \|T_{-}\|\, \|D^{(l-1)}\|$$
follows.
Consequently, the series converges in the matrix norm $\|\cdot\|$ when $2n\varepsilon\|T_{-}\|/\delta < 1$, and the following estimate is valid:
$$\|D\| \le \|D^{(0)}\| \sum_{l=0}^{\infty} \varepsilon^{l} \left(\frac{2n\|T_{-}\|}{\delta}\right)^{l} = \frac{s}{1 - 2n\varepsilon\|T_{-}\|/\delta}.$$
  □
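The proof of Theorem 8 is constructive, and the series for $D$ can be summed numerically. The sketch below (random illustrative data and a hypothetical choice of $\varepsilon$ and $s$, not from the paper) builds the partial sums in the eigenbasis of $T_+$ and verifies both the equation of Theorem 8 and the positivity of the resulting $D$.

```python
# Sketch of the constructive series of Theorem 8: D = sum_l eps^l D^(l).
import numpy as np

rng = np.random.default_rng(6)
n = 4
A = rng.normal(size=(n, n)); T_plus = (A + A.T) / 2    # symmetric part, generically simple spectrum
B = rng.normal(size=(n, n)); T_minus = (B - B.T) / 2   # antisymmetric part

lam, U = np.linalg.eigh(T_plus)                        # T_plus = U diag(lam) U^T
gap = lam[:, None] - lam[None, :]                      # gap[j, k] = lambda_j - lambda_k
Tm = U.T @ T_minus @ U                                 # T_- in the eigenbasis of T_+

delta = np.abs(gap[~np.eye(n, dtype=bool)]).min()      # minimal eigenvalue gap of T_+
eps = 0.1 * delta / (2 * n * np.abs(Tm).max())         # enforce delta > 2*n*eps*||T_-||
s, n_terms = 1.0, 30                                   # D^(0) = s*E, number of series terms

D_hat = s * np.eye(n)                                  # D^(0) in the eigenbasis
D_total = D_hat.copy()
for l in range(1, n_terms):
    rhs = Tm @ D_hat + D_hat @ Tm                      # {T_-, D^(l-1)} in the eigenbasis
    D_hat = rhs / np.where(np.eye(n, dtype=bool), 1.0, gap)
    np.fill_diagonal(D_hat, 0.0)                       # diagonal elements are chosen to vanish
    D_total += eps**l * D_hat

D = U @ D_total @ U.T                                  # back to the original basis
assert np.all(np.linalg.eigvalsh((D + D.T) / 2) > 0)   # D is symmetric positive
assert np.allclose(T_plus @ D - D @ T_plus,            # [T_+, D]
                   eps * (T_minus @ D + D @ T_minus),  # eps * {T_-, D}
                   atol=1e-10)
```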

7. Conclusions

We have carried out an investigation devoted to the problem of strong hyperbolicity of first-order quasi-linear equation systems. As a result, a modified formulation of the strong hyperbolicity concept is proposed, and the equivalence of this formulation and the so-called hyperbolicity according to Friedrichs is proved. A necessary and sufficient condition for such an equations system to be hyperbolic is formulated in terms of the matrix $T(\mathsf{q})$, which defines the evolution of the linearized (tangent) system of first-order differential equations with constant coefficients corresponding to the original system. This condition is given in Corollary 3. We have named matrices $T(\mathsf{q})$ satisfying this condition hyperbolic.
We conducted a qualitative study of the set of matrices $T_{-}(\mathsf{q})$ for which the specified necessary and sufficient condition holds for a given matrix $T_{+}(\mathsf{q})$, that is, for which there is a solution of Equation (6). Moreover, we found an effectively verifiable necessary condition on this class of matrices in order for such a solution to actually exist.
However, further study of the problem is necessary, despite the fact that the results obtained already allow one, in some cases (see, for example, refs. [12,13]), to prove the hyperbolicity of quasi-linear equation systems of mathematical physics. Namely, it is necessary to obtain quantitative results that make it possible to estimate accurately, both from above and from below, the location of the boundary of the hyperbolicity region for each pre-defined matrix $T(\mathsf{q})$. It should be noted that a problem closely related to the one studied in this paper is also of interest, namely, to find a criterion for the system (1) to be elliptic. Apparently, in this case the matrix $T(\mathsf{q})$ is diagonalizable and has purely imaginary eigenvalues. Such matrices are naturally called elliptic.

Author Contributions

Conceptualization, V.V. and Y.V.; methodology, V.V. and Y.V.; validation, V.V. and Y.V.; formal analysis, V.V. and Y.V.; investigation, V.V. and Y.V.; data curation, V.V. and Y.V.; writing and original draft preparation, Y.V.; writing and review and editing, V.V. and Y.V. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Rozhdestvenskii, B.L.; Yanenko, N.N. Systems of Quasilinear Equations and Their Applications to Gas Dynamics; AMS: Providence, RI, USA, 1983. [Google Scholar]
  2. Lax, P.D. Hyperbolic Systems of Conservation Laws and the Mathematical Theory of Shock Waves; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 1973. [Google Scholar]
  3. Jeffrey, A. Quasilinear Hyperbolic System and Waves; Pitman: London, UK, 1976. [Google Scholar]
  4. Majda, A. The existence of multi-dimensional shock fronts. In Memoirs of the American Mathematical Society; AMS: Providence, RI, USA, 1983; Volume 43, p. 281. [Google Scholar]
  5. Rykov, V.V.; Filimonov, A.M. Hyperbolic systems with multiple characteristic and applications. Upr. Bol’Shimi Sist. 2020, 85, 72–86. [Google Scholar] [CrossRef]
  6. Rozhdestvenskii, B.L. Discontinuous solutions of hyperbolic systems of quasi-linear equations. Russ. Math. Surv. 1960, 15, 55–111. [Google Scholar] [CrossRef]
  7. Palis, J.L.; Takens, F. Hyperbolicity and Sensitive-Chaotic Dynamics at Homoclinic Bifurcations; Cambridge University Press: Cambridge, UK, 1993. [Google Scholar]
  8. Araujo, V.; Viana, M. Hyperbolic dynamical systems. arXiv 2008, arXiv:0804.3192v1. [Google Scholar]
  9. Kulikovskii, A.G.; Slobodkina, F.A. Equilibrium of arbitrary steady flows at the transonic points. Appl. Math. Mech. 1968, 31, 593–602. [Google Scholar] [CrossRef]
  10. Kulikovskii, A.G.; Pogorelov, N.V.; Semenov, A.Y. Mathematical Aspects of Numerical Solution of Hyperbolic Systems; Chapman and Hall/CRC: New York, NY, USA, 2001. [Google Scholar]
  11. Virchenko, Y.P.; Subbotin, A.V. The class of evolutionary ferrodynamic equations. Math. Methods Appl. Sci. 2021, 44, 11913–11922. [Google Scholar] [CrossRef]
  12. Virchenko, Y.P.; Novosel’tseva, A.E. Class of hyperbolic first order quasi-linear covariant equations of divergent type. In Itogi Nauki. Modern Mathematics. Thematic Reviews; Gamkrelidze, R.V., Ed.; VINITI: Moscow, Russia, 2020; pp. 19–30. (In Russian) [Google Scholar]
  13. Virchenko, Y.P.; Pleskanev, A.A. Hyperbolic spherically symmetric equation of the first order of divergent type for vector field. Belgorod State Univ. Sci. Bull. Math. Phys. 2019, 51, 280–286. (In Russian) [Google Scholar]
  14. Friedrichs, K. Nonlinear hyperbolic differential equations for functions of two independent variables. Am. J. Math. 1948, 70, 555–589. [Google Scholar] [CrossRef]
  15. Gantmacher, F.R. The Theory of Matrices; Springer: Chelsea, NY, USA, 1959. [Google Scholar]
  16. Horn, R.A.; Johnson, C.R. Topics in Matrix Analysis; Cambridge University Press: Cambridge, UK, 1991. [Google Scholar]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
