Article

Efficient Evaluation of Matrix Polynomials beyond the Paterson–Stockmeyer Method

1 Instituto de Telecomunicaciones y Aplicaciones Multimedia, Universitat Politècnica de València, Camino de Vera s/n, 46022 Valencia, Spain
2 Instituto de Instrumentación para Imagen Molecular, Universitat Politècnica de València, Camino de Vera s/n, 46022 Valencia, Spain
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(14), 1600; https://doi.org/10.3390/math9141600
Submission received: 14 May 2021 / Revised: 27 June 2021 / Accepted: 1 July 2021 / Published: 7 July 2021
(This article belongs to the Special Issue Mathematical Models and Methods in Engineering and Social Sciences)

Abstract: Recently, two general methods for evaluating matrix polynomials requiring one matrix product fewer than the Paterson–Stockmeyer method were proposed, where the cost of evaluating a matrix polynomial is given asymptotically by the total number of matrix product evaluations. An analysis of the stability of those methods was given, and the methods were applied to Taylor-based implementations for computing the exponential, the cosine, and the hyperbolic tangent matrix functions. Moreover, a particular example for the evaluation of the matrix exponential Taylor approximation of degree 15 requiring four matrix products was given, whereas the maximum polynomial degree available using the Paterson–Stockmeyer method with four matrix products is 9. Based on this example, a new family of methods for evaluating matrix polynomials more efficiently than the Paterson–Stockmeyer method was proposed, having the potential to achieve a much higher efficiency, i.e., requiring fewer matrix products for evaluating a matrix polynomial of a certain degree, or increasing the available degree for the same cost. However, the difficulty of this family of methods lies in the calculation of the coefficients involved for the evaluation of general matrix polynomials and approximations. In this paper, we provide a general matrix polynomial evaluation method requiring two matrix products fewer than the Paterson–Stockmeyer method for degrees higher than 30. Moreover, we provide general methods for evaluating matrix polynomial approximations of degrees 15 and 21 with four and five matrix product evaluations, respectively, whereas the maximum available degrees for the same cost with the Paterson–Stockmeyer method are 9 and 12, respectively. Finally, practical examples for evaluating Taylor approximations of the matrix cosine and the matrix logarithm accurately and efficiently with these new methods are given.

1. Introduction

The authors of [1] presented a new family of methods for the evaluation of matrix polynomials more efficiently than the state-of-the-art method from [2] by Paterson and Stockmeyer (see [3], Section 4.2). These methods are based on the multiplication of matrix polynomials to get a new matrix polynomial with degree given by the sum of the degrees of the original matrix polynomials. The main difficulty in these methods lies in obtaining the coefficients involved for the evaluation of general matrix polynomials. In this sense, the authors of [1] (Section 3) gave two concrete general methods for evaluating matrix polynomials requiring one matrix product fewer than the Paterson–Stockmeyer method. Regarding the cost of evaluating matrix polynomials, since the cost of a matrix product, denoted by $M$, is $O(n^3)$ for $n \times n$ matrices, and both the cost of the sum of two matrices and the cost of a product of a matrix by a scalar are $O(n^2)$, similarly to [3] (Section 4.2), the overall cost of the evaluation of matrix polynomials will be approximated asymptotically by the total number of matrix products.
The stability of the two methods from [1] (Section 3) was also analyzed and applications to the Taylor approximation of the exponential and cosine matrix functions were given.
The two general polynomial evaluation methods from [1] (Section 3) named above were applied in [4,5] for the evaluation of Taylor polynomial approximations of the matrix exponential and the matrix cosine more efficiently than the state-of-the-art Padé methods from [6,7].
Moreover, the authors of [1] (Example 5.1) provided a polynomial evaluation formula for computing the matrix exponential Taylor approximation of degree 15 with cost $4M$, whereas the maximum available degree for that cost using the two general methods from [1] (Section 3) named above is 12. Based on this example, the authors of [5] (Section 3) proposed other new particular evaluation formulae for computing the matrix exponential Taylor polynomial approximations of degrees 15, 21, 24, and 30 with costs $4M$, $5M$, $6M$, and $7M$, respectively. In [8], the authors proposed a particular method for evaluating the matrix exponential Taylor approximations of the lower degrees 12, 18, and 22 with costs $4M$, $5M$, and $6M$, respectively (see [8], Table 3). Note that general methods for the evaluation of matrix polynomials more efficiently than the Paterson–Stockmeyer method are provided in [1], whereas [8] deals with the concrete case of the matrix exponential Taylor approximation. Moreover, (7) from [9] is equivalent to the particular case of taking $s = 1$ in (62)–(65) from [1]. Reference [1] was submitted in October 2016, and the evaluation Formulas (62)–(65) were first introduced as (25)–(28) of its early unpublished version [10] of February 2016, whereas [8] is an updated version of the unpublished reference [9], released more than one year later, i.e., in October 2017.
In this paper, we generalize the results from [5] (Section 3) given there for the particular case of the matrix exponential Taylor approximation of degrees 15, 21, 24, and 30. These generalizations consist of giving general procedures for:
  • Evaluating polynomial approximations of matrix functions of degrees 15 and 21 with cost $4M$ and $5M$, respectively.
  • Evaluating matrix polynomials of degrees $6s$, with $s = 3, 4, \ldots$, with cost $(s+2)M$.
  • Evaluating matrix polynomials of degrees greater than 30 with two matrix products fewer than the Paterson–Stockmeyer method.
Finally, examples for computing Taylor approximations of the matrix cosine and the matrix logarithm efficiently and accurately using those evaluation formulae are given.
Regarding Taylor approximations, if
$$f(X) = \sum_{i \ge 0} a_i X^i$$
is the Taylor series of the matrix function $f(\cdot)$, where $X \in \mathbb{C}^{n \times n}$, then
$$T_m(X) = \sum_{i=0}^{m} a_i X^i$$
is its Taylor approximation of order $m$ (for the convergence of matrix Taylor series, see Theorem 4.7 of [3], p. 76).
From [11] (Section 1), a matrix $X \in \mathbb{C}^{n \times n}$ is a logarithm of $B \in \mathbb{C}^{n \times n}$ if $e^X = B$. Therefore, any nonsingular matrix has infinitely many logarithms, and we will focus on the principal logarithm, denoted by $\log(B)$. For a matrix $B \in \mathbb{C}^{n \times n}$ with no eigenvalues on $\mathbb{R}^-$ (the closed negative real axis), the principal logarithm is the unique logarithm whose eigenvalues have imaginary parts lying in the interval $(-\pi, \pi)$. Therefore, in the given examples, we will assume that $B$ has no eigenvalues on $\mathbb{R}^-$, and we will take the logarithm Taylor series
$$\log(B) = \log(I - A) = -\sum_{i \ge 1} A^i / i, \quad \text{where } A = I - B. \qquad (1)$$
The matrix exponential has been studied in numerous papers (see [3] (Chap. 10), [5,6,8,12], and the references therein). This matrix function can be defined by
$$\exp(X) = \sum_{i \ge 0} \frac{X^i}{i!}. \qquad (2)$$
The matrix cosine has received attention recently (see [4,7] and the references therein). This matrix function can be defined by
$$\cos(X) = \sum_{i \ge 0} \frac{(-1)^i X^{2i}}{(2i)!} = \sum_{i \ge 0} \frac{(-1)^i Y^i}{(2i)!}, \quad Y = X^2. \qquad (3)$$
Note that if we truncate the Taylor series on the right-hand side of (3) at the term $i = m$, then the order of the corresponding cosine Taylor approximation is $2m$.
Regarding the cost of matrix rational approximations, note that the multiplication by the corresponding matrix inverse is calculated by solving a multiple right-hand side linear system. From [13] (Appendix C), it follows that the cost of the solution of a multiple right-hand side linear system $AX = B$, where matrices $A$ and $B$ are $n \times n$, denoted by $D$ (see [14], p. 11940), is
$$D \approx 4/3\, M. \qquad (4)$$
Therefore, using (4), the cost of computing rational approximations will also be given in terms of $M$; for instance, a rational approximation evaluated with $k$ matrix products plus one multiple right-hand side solve has a cost of approximately $(k + 4/3)M$.
In this article, the following notation will be used: $\lceil x \rceil$ denotes the smallest integer greater than or equal to $x$, and $\lfloor x \rfloor$ the largest integer less than or equal to $x$. $u$ denotes the unit roundoff in IEEE double precision arithmetic (see [15], Section 2.2). The set of positive integers is denoted by $\mathbb{N}$. The sets of real and complex matrices of size $n \times n$ are denoted, respectively, by $\mathbb{R}^{n \times n}$ and $\mathbb{C}^{n \times n}$. The identity matrix for both sets is denoted by $I$. The dependence of a variable $y$ on the variables $x_1, x_2, \ldots, x_n$ is denoted by $y = y(x_1, x_2, \ldots, x_n)$.
In Section 2, we recall some results for computing matrix polynomials using the Paterson–Stockmeyer method and summarize the matrix polynomial evaluation methods from [1]. In Section 3, we describe the general methods for computing polynomial approximations of degrees 15, 21, and $6s$ with $s = 3, 4, \ldots$, and give examples for the Taylor approximation of the cosine and logarithm matrix functions. Finally, in Section 4, we give some conclusions. In this paper, we provide a method to evaluate matrix polynomials with two matrix products fewer than the Paterson–Stockmeyer method and one matrix product fewer than the methods from [1] (Section 3). Moreover, we provide methods to evaluate polynomial approximations of matrix functions of degrees 15 and 21 with costs $4M$ and $5M$, respectively. These methods are interesting because the maximum available degrees for those costs using the other method proposed in this paper are 12 and 18, respectively. All of the new methods proposed can be used in applications for computing approximations of matrix functions or evaluating matrix polynomials more efficiently than with the state-of-the-art methods.

2. Efficient Evaluation of Matrix Polynomials

2.1. Paterson–Stockmeyer Method

The Paterson–Stockmeyer method [2] for computing a matrix polynomial
$$P_m(A) = \sum_{i=0}^{m} c_i A^i, \qquad (5)$$
consists of calculating $P_m(A)$ as
$$
\begin{aligned}
PS_m(A) = \Big( \cdots \Big( \big( & c_m A^s + c_{m-1} A^{s-1} + \cdots + c_{m-s+1} A + c_{m-s} I \big) A^s \\
& + c_{m-s-1} A^{s-1} + c_{m-s-2} A^{s-2} + \cdots + c_{m-2s+1} A + c_{m-2s} I \Big) A^s \\
& + c_{m-2s-1} A^{s-1} + c_{m-2s-2} A^{s-2} + \cdots + c_{m-3s+1} A + c_{m-3s} I \Big) A^s \cdots \\
& + c_{s-1} A^{s-1} + c_{s-2} A^{s-2} + \cdots + c_1 A + c_0 I,
\end{aligned}
\qquad (6)
$$
where $PS_m(A)$ denotes the Paterson–Stockmeyer evaluation Formula (6) and $s > 0$ is an integer that divides $m$. Given a number of matrix products, the maximum degrees of $P_m(A)$ that are available using the Paterson–Stockmeyer method are
$$m = s^2 \quad \text{and} \quad m = s(s+1), \qquad (7)$$
where $s \in \mathbb{N}$; these degrees are denoted by $m^*$, $m^* = \{1, 2, 4, 6, 9, 12, \ldots\}$ [14] (Section 2.1). The costs $C_{PS}$ for computing (6) for the values of $m^*$ are given by [14] (Equation (5)), which appear in [14] (Table 1). In [16], the optimality of the rule $m^* = (C_{PS} - s + 2)s$, where $s = \lfloor C_{PS}/2 \rfloor + 1$, was demonstrated. This rule gives the same results as (7), since if $C_{PS}$ is even then $C_{PS} = 2s - 2$, and in that case $m^* = s^2$, and if $C_{PS}$ is odd then $C_{PS} = 2s - 1$, and then $m^* = s(s+1)$. Note that, for positive integers $m \notin m^*$, $P_m(A) = PS_{m_0}(A)$ can be evaluated using (6) taking $m_0 = \min\{m_1 \in m^* : m_1 > m\}$ and setting some coefficients to zero [1] (Section 2.1).
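As an illustration, the following minimal MATLAB sketch (ours, not from [2]; the helper name ps_eval is our own choice) evaluates (6) for a degree $m$ that is a multiple of $s$, using $s - 1$ matrix products for the powers $A^2, \ldots, A^s$ and $m/s - 1$ products in the Horner-like recurrence, in agreement with the costs discussed above:
% Minimal sketch of the Paterson-Stockmeyer evaluation (6).
% Input: A, coefficient vector c with c(i+1) = c_i, i = 0..m,
% and an integer s > 0 dividing m.
function P = ps_eval(A, c, s)
    m = length(c) - 1;
    r = m/s;                           % number of coefficient blocks
    n = size(A, 1);
    Apow = cell(s, 1);                 % Apow{j} = A^j, j = 1..s
    Apow{1} = A;
    for j = 2:s
        Apow{j} = Apow{j-1}*A;         % s - 1 matrix products
    end
    % Top block: c_m A^s + c_{m-1} A^{s-1} + ... + c_{m-s} I
    P = c(m+1)*Apow{s} + c(m-s+1)*eye(n);
    for j = 1:s-1
        P = P + c(m-s+j+1)*Apow{j};
    end
    % Horner-like recurrence in A^s: m/s - 1 further matrix products
    for k = r-2:-1:0
        B = c(k*s+1)*eye(n);
        for j = 1:s-1
            B = B + c(k*s+j+1)*Apow{j};
        end
        P = P*Apow{s} + B;
    end
end
% Example: ps_eval(A, 1./factorial(0:9), 3) evaluates the exponential
% Taylor polynomial of degree m* = 9 with the 4 products predicted by (7).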

2.2. General Polynomial Evaluation Methods beyond the Paterson–Stockmeyer Method

The authors of [1] (Example 3.1) give a method to compute $P_8(A)$ from (5) with a cost of $3M$ using the following evaluation formulae:
$$y_{02}(A) = A^2 (q_4 A^2 + q_3 A), \qquad (8)$$
$$y_{12}(A) = (y_{02}(A) + r_2 A^2 + r_1 A)(y_{02}(A) + s_2 A^2) + s_0 y_{02}(A) + t_2 A^2 + t_1 A + t_0 I, \qquad (9)$$
where $q_4$, $q_3$, $r_2$, $r_1$, $s_2$, $s_0$, $t_2$, $t_1$, and $t_0$ are complex numbers. In order to compute (5) with $m = 8$, if we equate $y_{12}(A) = P_m(A)$ from (5), then the system of eight equations with eight coefficients given by (16)–(24) from [1] arises. In this system, some coefficients can be obtained directly from the coefficients of the polynomial $P_m(A)$ as
$$q_4 = \pm\sqrt{c_8}, \qquad (10)$$
$$q_3 = c_7/(2 q_4), \qquad (11)$$
$$t_2 = c_2, \qquad (12)$$
$$t_1 = c_1, \qquad (13)$$
$$t_0 = c_0, \qquad (14)$$
and the remaining equations can be reduced by variable substitution to a quadratic equation in $s_2$. This equation gives two solutions for $q_4 = \sqrt{c_8}$ and two more solutions for $q_4 = -\sqrt{c_8}$. The remaining coefficients can be obtained from $s_2$, $q_4$, and $q_3$. From (11), one gets $q_4 \neq 0$, giving the condition
$$c_8 \neq 0 \qquad (15)$$
for coefficient $c_8$ in $P_m(A)$ from (5).
In order to check the stability of the solutions of $q_i$, $r_i$, and $s_i$ rounded to IEEE double precision arithmetic, the authors of [1] (Example 3.1) proposed to compute the relative error for each coefficient $c_i$, for $i = 3, 4, \ldots, 8$, substituting those solutions into the original system of Equations (16)–(24) from [1]. For instance, from (10), it follows that the relative error for $c_8$ using $q_4$ rounded to IEEE double precision arithmetic is
$$|c_8 - \tilde{q}_4^2| / |c_8|,$$
where $\tilde{q}_4$ is the value $q_4 = \pm\sqrt{c_8}$ rounded to IEEE double precision arithmetic. Then, if the relative errors for all the expressions of the coefficients $c_i$ are of the order of the unit roundoff in IEEE double precision arithmetic, i.e., $u = 2^{-53} \approx 1.11 \times 10^{-16}$, then the solution is stable.
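As a minimal illustration of this check (ours, using the exponential Taylor coefficient $c_8 = 1/8!$ as an example input), one can verify in MATLAB that the rounded $q_4$ reproduces $c_8$ to working accuracy:
% Stability check sketch: substitute the rounded solution q4 back
% into (10) and measure the relative error of the recovered c8.
c8  = 1/factorial(8);          % example coefficient (exponential Taylor)
q4  = sqrt(c8);                % solution of (10), rounded to double
err = abs(c8 - q4^2)/abs(c8);  % relative error; stable if O(u)
u   = 2^(-53);                 % unit roundoff in IEEE double precision
is_stable = err < 10*u;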
In [1] (Table 4), one of the solutions rounded to IEEE double precision arithmetic for evaluating the Taylor polynomial of the exponential and cosine functions is shown. These solutions were substituted into the original system of equations to calculate the relative error for $c_i$, for $i = 3, 4, \ldots, 8$ (see [1], Example 3.1), giving relative errors of order $u$, turning out to be stable solutions. Moreover, the numerical tests from [1] (Example 3.2) and [4,5] also show that if the relative error for each coefficient is $O(u)$, then the polynomial evaluation formulae are accurate, and if the relative errors are $O(10u)$ or greater, then the polynomial evaluation formulae are not so accurate.
The authors of [1] (Section 3) also provided a more general method for computing matrix polynomials $P_m(A)$ from (5) of degree $m = 4s$, based on the evaluation formulae
$$y_{0s}(A) = A^s \sum_{i=1}^{s} q_{s+i} A^i, \qquad (16)$$
$$y_{1s}(A) = \left( y_{0s}(A) + \sum_{i=1}^{s} r_i A^i \right) \left( y_{0s}(A) + \sum_{i=2}^{s} s_i A^i \right) + s_0 y_{0s}(A) + \sum_{i=0}^{s} t_i A^i, \qquad (17)$$
where $s \geq 2$, and $q_{s+i}$, $r_i$, $s_i$, and $t_i$ are scalar coefficients, with $q_{2s} = \pm\sqrt{c_{4s}} \neq 0$, and then $c_{4s} \neq 0$ for coefficient $c_{4s}$ from $P_m(A)$. Note that $A^i$, $i = 2, 3, \ldots, s$, are computed only once. The degree and computing cost of $y_{1s}(A)$ are given by (36) of [1], i.e., $d_{y_{1s}} = 4s$ and $C_{y_{1s}} = (s+1)M$, $s = 2, 3, \ldots$, respectively. A general solution for the coefficients in (16) and (17) is given in [1] (Section 3), with the condition
$$c_{4s} \neq 0. \qquad (18)$$
Given a cost $C(M)$, the maximum orders that can be reached when using the Formulae (16) and (17) and the Paterson–Stockmeyer method are shown in [1] (Table 5).
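Once the coefficients of (16) and (17) are known, the evaluation itself is direct; the MATLAB sketch below (ours; the function name and argument layout are our own choices) evaluates (16) and (17) with the predicted cost of $(s+1)M$: $s - 1$ products for the powers $A^2, \ldots, A^s$ and one product for each of $y_{0s}$ and $y_{1s}$.
% Sketch of evaluating (16)-(17) for given coefficient vectors:
% q(i) = q_{s+i} (i = 1..s), r(i) = r_i (i = 1..s),
% sc(1) = s_0 and sc(i) = s_i (i = 2..s), t(i+1) = t_i (i = 0..s).
function y1 = y1s_eval(A, q, r, sc, t)
    s = length(q);  n = size(A, 1);
    Ap = cell(s, 1);  Ap{1} = A;
    for j = 2:s, Ap{j} = Ap{j-1}*A; end     % s - 1 matrix products
    Q = zeros(n); R = zeros(n); S = zeros(n); T = t(1)*eye(n);
    for j = 1:s
        Q = Q + q(j)*Ap{j};                 % sum_{i=1}^s q_{s+i} A^i
        R = R + r(j)*Ap{j};                 % sum_{i=1}^s r_i A^i
        T = T + t(j+1)*Ap{j};               % t_0 I + sum_{i=1}^s t_i A^i
    end
    for j = 2:s, S = S + sc(j)*Ap{j}; end   % sum_{i=2}^s s_i A^i
    y0 = Ap{s}*Q;                           % (16): one matrix product
    y1 = (y0 + R)*(y0 + S) + sc(1)*y0 + T;  % (17): one matrix product
end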
Proposition 1 from [1] (Section 3) shows a method for computing matrix polynomials combining the Paterson–Stockmeyer method with (17) as
$$
\begin{aligned}
z_{kps}(x) = \Big( \cdots \Big( \big( & y_{ks}(x)\, x^s + a_{p-1} x^{s-1} + a_{p-2} x^{s-2} + \cdots + a_{p-s+1} x + a_{p-s} \big) x^s \\
& + a_{p-s-1} x^{s-1} + a_{p-s-2} x^{s-2} + \cdots + a_{p-2s+1} x + a_{p-2s} \Big) x^s \\
& + a_{p-2s-1} x^{s-1} + a_{p-2s-2} x^{s-2} + \cdots + a_{p-3s+1} x + a_{p-3s} \Big) x^s \cdots \\
& + a_{s-1} x^{s-1} + a_{s-2} x^{s-2} + \cdots + a_1 x + a_0,
\end{aligned}
\qquad (19)
$$
where $k = 1$, $p$ is a multiple of $s$, and $y_{ks}(x) = y_{1s}(x)$ is evaluated using (16) and (17). This allows one to increase the degree of the polynomial to be evaluated. The degree of $z_{1ps}(A)$ and its computational cost are given by (53) of [1], i.e., $d_{z_{1ps}} = 4s + p$ and $C_{z_{1ps}} = (1 + s + p/s)M$, respectively. Ref. [1] (Table 6) shows that evaluating a matrix polynomial using (19) requires one product fewer than using the Paterson–Stockmeyer Formula (6).
Proposition 2 from [1] (Section 5) gives general formulae more efficient than the formulae of the previous methods, whenever at least one solution for the coefficients in (62)–(65) from [1] (Prop. 2) exists so that $y_{ks}(x)$ is equal to the polynomial $P_m$ to evaluate. The maximum polynomial degree and the computing cost if $x = A$, $A \in \mathbb{C}^{n \times n}$, are given by (66) of [1], i.e., $d_{y_{ks}} = 2^{k+1} s$ and $C_{y_{ks}} = (s + k)M$, where $d_{y_{ks}}$ increases exponentially while $C_{y_{ks}}$ increases linearly. Formula (17) is the particular case of (65) from [1] where $k = 1$.

3. Three General Expressions for $y_{2s}(A)$

This section gives general procedures to obtain the coefficients of $y_{2s}(A)$ from (65) of [1] with $k = 2$, generalizing the results from [5] (Section 3) for the evaluation of the matrix exponential Taylor approximations of degrees 15, 21, 24, and 30, also giving formulae for evaluating matrix polynomials of orders $6s$, where $s = 2, 3, \ldots$

3.1. Evaluation of Matrix Polynomial Approximations of Order 15 with $y_{2s}(A)$, $s = 2$

The following proposition allows one to compute polynomial approximations of order 15 with cost $4M$. Note that, from [1] (Table 8), the maximum available order with cost $4M$ is 9 for the Paterson–Stockmeyer method and 12 for the method given by (16) and (17).
Proposition 1.
Let $y_{12}(A)$ and $y_{22}(A)$ be
$$y_{12}(A) = \sum_{i=2}^{8} c_i A^i, \qquad (20)$$
$$y_{22}(A) = (y_{12}(A) + d_2 A^2 + d_1 A)(y_{12}(A) + e_0 y_{02}(A) + e_1 A) + f_0 y_{12}(A) + g_0 y_{02}(A) + h_2 A^2 + h_1 A + h_0 I, \qquad (21)$$
and let $P_{15}(A)$ be a polynomial of degree 15 with coefficients $b_i$
$$P_{15}(A) = \sum_{i=0}^{15} b_i A^i. \qquad (22)$$
Then,
$$y_{22}(A) = \sum_{i=0}^{16} a_i A^i, \qquad (23)$$
where the coefficients $a_i$ are functions of the following variables
$$a_i = a_i(c_8, c_7, \ldots, c_2, d_2, d_1, e_1, e_0, f_0, g_0, h_2, h_1, h_0), \quad i = 0, 1, \ldots, 16,$$
and there exists at least one set of values of the 16 coefficients $c_8$, $c_7$, …, $c_2$, $d_2$, $d_1$, $e_1$, $e_0$, $f_0$, $g_0$, $h_2$, $h_1$, $h_0$ so that
$$a_i = b_i, \quad i = 0, 1, \ldots, 15, \qquad (24)$$
and
$$a_{16} = c_8^2, \qquad (25)$$
provided the following conditions are fulfilled:
$$c_8 \neq 0, \qquad (26)$$
$$3 b_{15}^2 \neq 8 b_{14} c_8^2, \qquad (27)$$
$$27 b_{15}^6 c_8^{45/2} + 576 b_{14}^2 b_{15}^2 c_8^{53/2} \neq 512 b_{14}^3 c_8^{57/2} + 216 b_{14} b_{15}^4 c_8^{49/2}. \qquad (28)$$
Proof of Proposition 1.
Note that $y_{12}(A)$ from (20) is a matrix polynomial of degree 8. Therefore, if condition (26) holds, then [1] (Example 3.1) gives four possible solutions for evaluating $y_{12}(A)$ using the evaluation Formulas (8) and (9) with cost $3M$. Similarly to [5] (Section 3.2), we will denote these four solutions as nested solutions.
Using (10) and (11), one gets that $y_{02}(A)$ from (8) can be written as
$$y_{02}(A) = \pm A^2 \left( \sqrt{c_8}\, A^2 + \frac{c_7}{2\sqrt{c_8}}\, A \right). \qquad (29)$$
Then, taking the positive solution in (29), if we equate $y_{22}(A)$ from (23) to $P_{15}(A)$ from (22), we obtain the following nonlinear system with the 16 coefficients $c_i$, $i = 2, 3, \ldots, 8$, $d_2$, $d_1$, $e_1$, $e_0$, $f_0$, $g_0$, $h_2$, $h_1$, $h_0$:
$$
\begin{aligned}
a_{15} &= 2 c_7 c_8 = b_{15},\\
a_{14} &= c_7^2 + 2 c_6 c_8 = b_{14},\\
a_{13} &= 2 c_5 c_8 + 2 c_6 c_7 = b_{13},\\
a_{12} &= c_6^2 + c_8 \big( c_4 + \sqrt{c_8}\, e_0 \big) + c_4 c_8 + 2 c_5 c_7 = b_{12},\\
a_{11} &= c_7 \big( c_4 + \sqrt{c_8}\, e_0 \big) + c_3 c_8 + c_4 c_7 + 2 c_5 c_6 + c_8 \Big( c_3 + \tfrac{c_7 e_0}{2 \sqrt{c_8}} \Big) = b_{11},\\
a_{10} &= c_5^2 + c_6 \big( c_4 + \sqrt{c_8}\, e_0 \big) + \sum_{i=0}^{2} c_{2+i} c_{8-i} + c_7 \Big( c_3 + \tfrac{c_7 e_0}{2 \sqrt{c_8}} \Big) + c_8 (c_2 + d_2) = b_{10},\\
a_9 &= c_5 \big( c_4 + \sqrt{c_8}\, e_0 \big) + \sum_{i=0}^{2} c_{2+i} c_{7-i} + c_6 \Big( c_3 + \tfrac{c_7 e_0}{2 \sqrt{c_8}} \Big) + c_7 (c_2 + d_2) + c_8 (d_1 + e_1) = b_9,\\
a_8 &= c_4 \big( c_4 + \sqrt{c_8}\, e_0 \big) + c_2 c_6 + c_3 c_5 + c_5 \Big( c_3 + \tfrac{c_7 e_0}{2 \sqrt{c_8}} \Big) + c_6 (c_2 + d_2) + c_7 (d_1 + e_1) + c_8 f_0 = b_8,\\
a_7 &= c_3 \big( c_4 + \sqrt{c_8}\, e_0 \big) + c_2 c_5 + c_4 \Big( c_3 + \tfrac{c_7 e_0}{2 \sqrt{c_8}} \Big) + c_5 (c_2 + d_2) + c_6 (d_1 + e_1) + c_7 f_0 = b_7,\\
a_6 &= c_2 c_4 + c_3 \Big( c_3 + \tfrac{c_7 e_0}{2 \sqrt{c_8}} \Big) + (c_2 + d_2) \big( c_4 + \sqrt{c_8}\, e_0 \big) + c_5 (d_1 + e_1) + c_6 f_0 = b_6,\\
a_5 &= d_1 \sqrt{c_8}\, e_0 + c_2 c_3 + (c_2 + d_2) \Big( c_3 + \tfrac{c_7 e_0}{2 \sqrt{c_8}} \Big) + c_4 (d_1 + e_1) + c_5 f_0 = b_5,\\
a_4 &= \tfrac{d_1 c_7 e_0}{2 \sqrt{c_8}} + c_2 (c_2 + d_2) + c_3 (d_1 + e_1) + c_4 f_0 + \sqrt{c_8}\, g_0 = b_4,\\
a_3 &= e_1 d_2 + c_2 (d_1 + e_1) + c_3 f_0 + \tfrac{c_7 g_0}{2 \sqrt{c_8}} = b_3,\\
a_2 &= d_1 e_1 + c_2 f_0 + h_2 = b_2,\\
a_1 &= h_1 = b_1,\\
a_0 &= h_0 = b_0.
\end{aligned}
\qquad (30)
$$
This system of equations can be solved for a set of given coefficients $b_i$, $i = 0, 1, \ldots, 15$, by variable substitution with the MATLAB Symbolic Math Toolbox, using the following MATLAB code fragment (we used MATLAB R2020a in all the computations):
% MATLAB code fragment 4.1: solves coefficient c8 of
% the system of equations (30) for general coefficients bi
1 syms A c2 c3 c4 c5 c6 c7 c8 d1 d2 e0 e1 f0 g0 h2 h1 h0I
2 syms b0 b1 b2 b3 b4 b5 b6 b7 b8 b9 b10 b11 b12 b13 b14 b15 b16
3 c=[c2;c3;c4;c5;c6;c7;c8];
4 b=[b16;b15;b14;b13;b12;b11;b10;b9;b8;b7;b6;b5;b4;b3;b2;b1;b0];
5 y0=A^2*(sqrt(c8)*A^2+c7/(2*sqrt(c8))*A); % y02 from (29)
6 y1=sum(c.*A.^([2:8]'));
7 y2=(y1+d2*A^2+d1*A)*(y1+e0*y0+e1*A)+f0*y1+g0*y0+h2*A^2+h1*A+h0I;
8 [cy2,a1]=coeffs(y2,A);
9 cy2=cy2.';
10 v=[cy2 b a1.'] % v shows the coefficients of each power of A
11 cy2=cy2(2:end)-b(2:end); % system of equations
12 c7s=solve(cy2(1),c7,'ReturnConditions',true); % c7s=f(c8,bi)
13 c7s.conditions % c8 ~= 0: condition for the existence of solutions
14 c7s=c7s.c7;
15 cy2=subs(cy2,c7,c7s);
16 c6s=solve(cy2(2), c6); %c6s depends on c8 bi
17 cy2=subs(cy2,c6,c6s);
18 c5s=solve(cy2(3), c5); %c5s depends on c8 bi
19 cy2=simplify(subs(cy2,c5,c5s));
20 symvar(cy2(4)) %cy2(4) depends on c8, c4, e0 bi
21 e0s=solve(cy2(4), e0);
22 cy2=simplify(subs(cy2,e0,e0s));
23 symvar(cy2(5)) %cy2(5) depends on c8, c3, c4, bi
24 c3s=solve(cy2(5), c3);
25 cy2=simplify(subs(cy2,c3,c3s));
26 symvar(cy2(6)) %cy2(6) depends only on c8, c2, d2, bi
27 d2s=solve(cy2(6), d2);
28 cy2=simplify(subs(cy2,d2,d2s));
29 symvar(cy2(7)) %cy2(7) depends only on c8, d1, e1, bi
30 d1s=solve(cy2(7), d1);
31 cy2=simplify(subs(cy2,d1,d1s));
32 symvar(cy2(8)) %cy2(8) depends only on c8, c4, f0, bi
33 f0s=solve(cy2(8), f0);
34 cy2=simplify(subs(cy2,f0,f0s));
35 symvar(cy2(9)) %cy2(9) depends only on c8, b7, b8,...,b15
36 c8s=solve(cy2(9), c8)
Since cy2(9) from code fragment line 35 depends only on the coefficients $b_i$, for $i = 7, 8, \ldots, 15$, and $c_8$, the solutions for $c_8$ are given by the zeros of equation cy2(9) with the condition given by MATLAB code fragment line 13, i.e., condition (26). The solve function gives 16 solutions for $c_8$. They are the roots of a polynomial whose coefficients depend on the variables $b_i$, for $i = 7, 8, \ldots, 15$.
Once the 16 solutions for c 8 are obtained for concrete values of the coefficients b i , i = 0 , 1 , , 15 , the remaining variables can be obtained with the following MATLAB code fragment:
% MATLAB code fragment 4.2: solves coefficient c2 of the
% system of equations (30) for general coefficients bi by
% using the solutions for coefficient c8 obtained using the
% MATLAB piece of code 4.1
1 symvar(cy2(10)) %cy2(10) depends on c8, c2, c4, bi
2 c4s=solve(cy2(10), c4) % two solutions depending on c8, c2, bi
3 cy2=simplify(subs(cy2,c4,c4s(1))) % replace c4s(1) with c4s(2) for more solutions
4 symvar(cy2(11)) %cy2(11) depends on c8, c2, e1, bi
5 e1s=solve(cy2(11), e1)
6 cy2=simplify(subs(cy2,e1,e1s))
7 symvar(cy2(12)) %cy2(12) depends on c8, c2, g0, bi
8 g0s=solve(cy2(12), g0,'ReturnConditions',true)
9 g0s.conditions %conditions for the existence of solutions:
%3*b15^2 ~= 8*b14*c8^2 &
%27*b15^6*c8^(45/2) + 576*b14^2*b15^2*c8^(53/2) ~=
%512*b14^3*c8^(57/2) + 216*b14*b15^4*c8^(49/2) &
%c8 ~= 0
10 g0s=g0s.g0
11 cy2=simplify(subs(cy2,g0,g0s))
12 symvar(cy2(13)) %cy2(13) depends on c8, c2, bi
Since cy2(13) depends only on the coefficients $b_i$, for $i = 3, 4, \ldots, 15$, $c_8$, and $c_2$, substituting the values obtained previously for $c_8$ into cy2(13), the solutions for $c_2$ are given by the zeros of equation cy2(13), with the conditions arising when solving for $c_7$ and $g_0$ (line 13 of MATLAB code fragment 4.1 and line 9 of MATLAB code fragment 4.2), given by (26)–(28). Both code fragments are available at http://personales.upv.es/jorsasma/Software/coeffspolm15plus.m (accessed on 24 June 2021).
All of the coefficients $c_7$, $c_6$, …, $c_3$, $d_2$, $d_1$, $e_1$, $e_0$, $f_0$, $g_0$ can be obtained from $c_2$, $c_8$, and $b_i$, $i = 0, 1, \ldots, 15$, and then the $h_i$ can be obtained from the three last equations of system (30) as
$$h_2 = b_2 - d_1 e_1 - c_2 f_0, \qquad (31)$$
$$h_1 = b_1, \qquad (32)$$
$$h_0 = b_0. \qquad (33)$$
Finally, using (20) and (21), coefficient $a_{16}$ from (23) is given by (25).
Hence, for any values of the coefficients $b_i$, $i = 0, 1, \ldots, 15$, of the polynomial (22), there exists at least one solution of system (30) giving a set of values of the coefficients from $y_{22}(A)$ in (20) and (21) so that (24) and (25) hold, provided conditions (26)–(28) are fulfilled.  □
Given certain coefficients $b_i$, $i = 0, 1, \ldots, 15$, for $P_{15}(A)$ from (22), using MATLAB code fragments 4.1 and 4.2 one can typically get more than one solution of system (30). Moreover, if we take the negative sign in (29), another set of solutions fulfilling (24) can be obtained. For each of those solutions, there are also different nested solutions for evaluating (20) with the solutions from [1] (Example 3.1).
For each of those solutions, coefficient $a_{16}$ from $y_{22}(A)$ in (23) is given by (25). For the particular case of the matrix exponential Taylor approximation from [5] (p. 209), there were two real solutions of $c_8$, giving
$$|c_8^2 - 1/16!|\; 16! \approx 0.454, \qquad (34)$$
$$|c_8^2 - 1/16!|\; 16! \approx 2.510. \qquad (35)$$
Therefore, we selected the first solution (34): both solutions were stable according to the stability study from Section 2.2 (see [1], p. 243), but (34) had a lower error for $a_{16}$ with respect to the corresponding Taylor coefficient $1/16!$. Then, considering exact arithmetic, one gets that the matrix exponential approximation from $y_{22}(A)$ in evaluation Formulas (10)–(12) from [5] (p. 209), with the coefficients from [5] (Table 3), is more accurate than the exponential Taylor approximation of order 15. For that reason, the corresponding Taylor approximation order was denoted by $m = 15+$ in [5] (Section 4).
Recently, in [17], an evaluation formula of the type given in Proposition 1 was used to evaluate a Taylor polynomial approximation of degree 15+ of the hyperbolic tangent. However, in this case, all the solutions obtained were complex. We tried different configurations of the evaluation formulae giving degree 15+, but all of them gave complex solutions. Then, we proposed the similar evaluation Formula (11) from [17] (p. 6) with degree 14+, which did give real solutions. Similarly to (34), in the case of the hyperbolic tangent, the relative errors of the coefficients $a_i$, $i = 15$ and 16, were also lower than 1, concretely 0.38 and 0.85, respectively (see [17], p. 6). This method was compared to the Paterson–Stockmeyer method, being noticeably more efficient without affecting the accuracy (see [17], Section 3, for details). Proposition 1 allows us to evaluate polynomial approximations of degree 15 not only for the matrix exponential or the hyperbolic tangent but also for other matrix functions. If all the given solutions are complex, we can modify the formula to evaluate approximation formulae of a lower degree, such as 14+, to check if they give real solutions.
Example 1.
In [4] (Section 2), we showed that the solutions for the coefficients of the polynomial evaluation method similar to [5] (Section 3.2) for the matrix cosine Taylor approximation of order $2m = 30+$ were not stable, giving poor accuracy results. Using Proposition 1, this example gives a stable solution for calculating a Taylor-based approximation of the matrix cosine with a combination of Formula (21) with the Paterson–Stockmeyer method from (19). Setting $k = p = s = 2$ in (19) and $y_{ks} = y_{22}$ from (21), one gets
$$z_{222}(B) = y_{22}(B)\, B^2 - B/2 + I = P_{17}(B) + a_{18} B^{18}, \quad P_{17}(B) = \sum_{i=0}^{17} b_i B^i, \qquad (36)$$
where $B = A^2$ and $z_{222}(B)$ is a Taylor-based approximation of the matrix cosine (3) of order $2m = 34+$, i.e., $b_i = (-1)^i/(2i)!$ for $i = 0, 1, \ldots, 17$, and coefficient $a_{18}$ is given by $a_{18} = c_8^2$ (see (25)).
MATLAB code fragment 4.1 was used for obtaining all the real solutions of $c_8$. Then, MATLAB code fragment 4.2 was used with these solutions, taking solution 1 for coefficient $c_4$ in line 3 of MATLAB code fragment 4.2. Then, we obtain the equation cy2(13) from the code fragment in line 12, depending on $c_2$ and $c_8$. This equation was solved for every real solution of $c_8$, using the MATLAB Symbolic Math Toolbox with variable precision arithmetic. Finally, we obtained the nested solutions for computing (20) with (8) and (9) with $q_4 > 0$ from (10).
The real solutions of system (30) rounded to IEEE double precision arithmetic explored in [4] (Section 2) gave errors of order $10^{-14}$, greater than the unit roundoff in IEEE double precision arithmetic $u = 2^{-53} \approx 1.11 \times 10^{-16}$. Using MATLAB code fragments 4.1 and 4.2, we checked that there is no solution with a lower error. Then, according to the stability check from Section 2.2, the solutions are unstable, and we checked in [4] that they gave poor accuracy results. However, using Proposition 1, for $2m = 34+$, we could find two real solutions of system (30) giving a maximum error of order $u$. For those two solutions, $a_{18}$ gave
$$|a_{18} - 1/36!|\; 36! \approx 0.394, \qquad (37)$$
$$|a_{18} - 1/36!|\; 36! \approx 16.591, \qquad (38)$$
respectively. Therefore, the solution (37) giving the lowest error was selected. Table 1 gives the corresponding coefficients in IEEE double precision arithmetic from (8) and (9) for computing (20) with three matrix products, and the rest of the coefficients needed for computing $y_{22}(B)$ from (21) with $s = 2$, given finally by
$$y_{02}(B) = B^2 (q_4 B^2 + q_3 B), \qquad (39)$$
$$y_{12}(B) = (y_{02}(B) + r_2 B^2 + r_1 B)(y_{02}(B) + s_2 B^2) + s_0 y_{02}(B) + t_2 B^2, \qquad (40)$$
$$y_{22}(B) = (y_{12}(B) + d_2 B^2 + d_1 B)(y_{12}(B) + e_0 y_{02}(B) + e_1 B) + f_0 y_{12}(B) + g_0 y_{02}(B) + h_2 B^2 + h_1 B + h_0 I. \qquad (41)$$
Using (39)–(41) with the coefficients from Table 1 and (36), a matrix cosine Taylor approximation of order $2m = 34+$ can be computed in IEEE double precision arithmetic with a cost of six matrix products: $B = A^2$, $B^2$, three products for evaluating (39)–(41), and one more for evaluating (36). The maximum available and stable order given in [4] (Section 2) with six matrix products was $2m = 30$. The coefficients from Table 1 were computed with variable precision arithmetic with precisions of 32 and 250 decimal digits to check their correctness, giving the same results.
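In MATLAB, the complete evaluation reads as follows (our sketch; the scalars q4, q3, r2, r1, s2, s0, t2, d2, d1, e1, e0, f0, g0, h2, h1, and h0 are assumed to hold the corresponding entries of Table 1, not reproduced here):
% Sketch of the order 34+ cosine approximation (36) with (39)-(41):
% six matrix products overall, no scaling, coefficients from Table 1.
n  = size(A, 1);
B  = A*A;                                      % 1st product
B2 = B*B;                                      % 2nd product
y0 = B2*(q4*B2 + q3*B);                        % (39): 3rd product
y1 = (y0 + r2*B2 + r1*B)*(y0 + s2*B2) ...
     + s0*y0 + t2*B2;                          % (40): 4th product
y2 = (y1 + d2*B2 + d1*B)*(y1 + e0*y0 + e1*B) ...
     + f0*y1 + g0*y0 + h2*B2 + h1*B + h0*eye(n);   % (41): 5th product
cosA = y2*B2 - B/2 + eye(n);                   % (36): 6th product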
Taking into account (3) and the selection of the solution in (37), in exact arithmetic, one gets
$$z_{222}(x) = \sum_{i=0}^{17} \frac{(-1)^i x^{2i}}{(2i)!} + a_{18} x^{36}, \qquad (42)$$
where, using (39)–(41), one gets $a_{18} = q_4^4$.
To check if the new evaluation formulae are accurate, we compared the results of computing the matrix cosine with function cosm from [7] with a function using the coefficients from Table 1 in (39)–(41) and (36), with no scaling for simplicity. Since [7] used a relative backward error analysis, we used the values of $\Theta$ from [15] (Table 1) corresponding to the backward relative error analysis of the Taylor approximation of the matrix cosine, whose backward error is denoted by $E_b$. Then, if $\|B\| = \|A^2\| \leq \Theta$, then $\|E_b\| \leq u$ for the corresponding Taylor approximations. In [15] (Table 1), $\Theta$ for the Taylor approximation of order 16 was 9.97 and $\Theta_{20} = 10.18$, showing two decimal digits. Then, for our test with order $2m = 34+$, we used a set of 48 matrices of size $8 \times 8$ from the Matrix Computation Toolbox [18], divided by random numbers to give $\|B\|$ between 9 and 10. We compared the forward error $E_f$ of both functions,
$$E_f = \|\cos(A) - f(A)\|,$$
where $f(A)$ was computed by cosm and by the function using $z_{222}(B)$. The "exact value" of $\cos(A)$ was computed using the method in [19]. The total cost of the new matrix cosine computation function $z_{222}$, summing up the number of matrix products over all the test matrices, is denoted by $\mathrm{Cost}_{z_{222}}$. Taking into account (4), the cost of the cosm Padé function, summing up the number of matrix products and inversions over all the test matrices, is denoted by $\mathrm{Cost}_{\mathrm{cosm}}$. Then, the following cost comparison was obtained for that set of test matrices:
$$100 \times \frac{\mathrm{Cost}_{\mathrm{cosm}} - \mathrm{Cost}_{z_{222}}}{\mathrm{Cost}_{z_{222}}} = 40.78\%,$$
i.e., the cost of $z_{222}$ is $40.78\%$ lower than the cost of cosm. Moreover, the results were more accurate for $76.60\%$ of the matrices. Therefore, the new formulae are efficient and accurate.

3.2. Evaluation of Matrix Polynomial Approximations of Order 21

In this section, we generalize the results from [5] (Section 3.3) for evaluating polynomial approximations of order $m = 21$ with cost $5M$. Note that for that cost, from [1] (Table 8), the maximum available orders using the Paterson–Stockmeyer method and the evaluation Formulas (16) and (17) are 12 and 16, respectively. Applying a procedure similar to that in Section 3.1 to obtain the coefficients for evaluating a matrix polynomial approximation of order 21, in this case a system of 22 equations with 22 unknown variables arises. This system can be reduced to three equations with three unknowns using variable substitution with the MATLAB Symbolic Math Toolbox, provided that two of the variables are not zero. The following proposition summarizes the results.
Proposition 2.
Let $y_{13}(A)$ and $y_{23}(A)$ be
$$y_{13}(A) = \sum_{i=2}^{12} c_i A^i, \qquad (44)$$
$$y_{23}(A) = (y_{13}(A) + d_3 A^3 + d_2 A^2 + d_1 A)(y_{13}(A) + e_0 y_{03}(A) + e_1 A) + f_0 y_{13}(A) + g_0 y_{03}(A) + h_3 A^3 + h_2 A^2 + h_1 A + h_0 I, \qquad (45)$$
and let $P_{21}(A)$ be a polynomial of degree 21 with coefficients $b_i$
$$P_{21}(A) = \sum_{i=0}^{21} b_i A^i. \qquad (46)$$
Then,
$$y_{23}(A) = \sum_{i=0}^{24} a_i A^i, \qquad (47)$$
where the coefficients $a_i = a_i(c_{12}, c_{11}, \ldots, c_2, d_3, d_2, d_1, e_1, e_0, f_0, g_0, h_3, h_2, h_1, h_0)$, $i = 0, 1, \ldots, 24$, and the system of equations arising when equating
$$a_i = b_i, \quad i = 0, 1, \ldots, 21, \qquad (48)$$
can be reduced to a system of three equations in the variables $c_{12}$, $c_{11}$, and $c_{10}$, provided
$$c_{12} \neq 0, \quad e_0 \neq 0, \qquad (49)$$
and then the coefficients $a_i$, $i = 22$, 23, and 24, are
$$a_{24} = c_{12}^2, \quad a_{23} = 2 c_{11} c_{12}, \quad a_{22} = c_{11}^2 + 2 c_{10} c_{12}. \qquad (50)$$
Proof of Proposition 2.
The proof of Proposition 2 is similar to the proof of Proposition 1. Analogously, if condition (18) is fulfilled with $s = 3$, i.e., $c_{12} \neq 0$, then polynomial $y_{13}(A)$ can be evaluated using (16) and (17) with $s = 3$ and cost $4M$, where $y_{03}$ is given by (21) of [5] (Section 3.3), i.e.,
$$y_{03}(A) = \pm A^3 \left( \sqrt{c_{12}}\, A^3 + \frac{c_{11}}{2 \sqrt{c_{12}}}\, A^2 + \frac{4 c_{10} c_{12} - c_{11}^2}{8 c_{12}^{3/2}}\, A \right). \qquad (51)$$
If we apply (48), we obtain a system similar to (30). Using variable substitution with the MATLAB Symbolic Math Toolbox, the MATLAB code coeffspolm21plus.m (http://personales.upv.es/jorsasma/Software/coeffspolm21plus.m (accessed on 24 June 2021)), similar to MATLAB code fragments 4.1 and 4.2, is able to reduce the whole nonlinear system of 22 equations to a nonlinear system of three equations with the three variables $c_{10}$, $c_{11}$, and $c_{12}$. The MATLAB code coeffspolm21plus.m returns conditions (49) (see the actual code for details).  □
If there is at least one solution for $c_{10}$, $c_{11}$, and $c_{12}$ fulfilling condition (49), all of the other coefficients can be obtained using the values of $c_{10}$, $c_{11}$, and $c_{12}$. Then, $y_{13}(A)$ from (44) can be evaluated using (16) and (17), giving several possible solutions. Finally, the solutions are rounded to the required precision and, according to the stability study from Section 2.2 (see [1], p. 243), the solution giving the least error should be selected.
Similarly to (34) and (35), the degree of $y_{23}(A)$ from (45) is 24, but with the proposed method we can only set the polynomial approximation coefficients of (46) up to order $m = 21$. The coefficients $a_i$ of the powers $A^i$, $i = 22$, 23, and 24, are given by (50). The authors of [5] (Section 3.3) give one particular example of this method for calculating a matrix Taylor approximation of the exponential function, where, in exact arithmetic,
$$y_{23}(A) = T_{21}(A) + a_{22} A^{22} + a_{23} A^{23} + a_{24} A^{24}, \qquad (52)$$
where $T_{21}$ is the Taylor approximation of order $m = 21$ of the exponential function and
$$|a_{22} - 1/22!|\; 22! \approx 0.437, \qquad (53)$$
$$|a_{23} - 1/23!|\; 23! \approx 0.270, \qquad (54)$$
$$|a_{24} - 1/24!|\; 24! \approx 0.130, \qquad (55)$$
showing three decimal digits. Again, in exact arithmetic, the approximation $y_{23}(A)$ is more accurate than $T_{21}(A)$. Therefore, the order of that approximation was denoted as $m = 21+$ in [5] (Section 4). The experimental results from [5] showed that this method was more accurate and efficient than the Padé method from [6].
Recently, in [17], an evaluation formula similar to (45) was used to evaluate a Taylor polynomial approximation of the hyperbolic tangent. Similarly to (53), in the case of the hyperbolic tangent, the relative errors of the coefficients $a_i$, $i = 22$, 23, and 24, were also lower than 1, concretely 0.69, 0.69, and 0.70, respectively (see [17], p. 7). This method was compared to the Paterson–Stockmeyer method, being noticeably more efficient without affecting the accuracy (see [17], Section 3, for details).
Proposition 2 allows us to evaluate polynomial approximations of degree 21 not only for the matrix exponential or the hyperbolic tangent but also for other matrix functions. In the following example, we show an application for the evaluation of the Taylor approximation of the matrix logarithm.
Example 2.
In this example, we give real coefficients for computing a Taylor-based approximation of the matrix logarithm of order $m = 21+$ in a stable manner based on the previous results. Evaluating (44) using (16) and (17) with $s = 3$, and using (45), the following formulae can be used to compute the approximation of order $m = 21+$ of the principal logarithm $\log(B)$ for a square matrix $B = I - A$ with no eigenvalues on $\mathbb{R}^-$:
$$y_{03}(A) = A^3 (c_1 A^3 + c_2 A^2 + c_3 A), \qquad (56)$$
$$y_{13}(A) = (y_{03}(A) + c_4 A^3 + c_5 A^2 + c_6 A)(y_{03}(A) + c_7 A^3 + c_8 A^2) + c_9 y_{03}(A) + c_{10} A^3 + c_{11} A^2, \qquad (57)$$
$$y_{23}(A) = (y_{13}(A) + c_{12} A^3 + c_{13} A^2 + c_{14} A)(y_{13}(A) + c_{15} y_{03}(A) + c_{16} A) + c_{17} y_{13}(A) + c_{18} y_{03}(A) + c_{19} A^3 + c_{20} A^2 + A, \qquad (58)$$
where the coefficients are numbered consecutively, and, using (1), we take
$$\log(B) = \log(I - A) = -\sum_{i \ge 1} A^i / i \approx -y_{23}(A). \qquad (59)$$
The coefficients can be obtained by solving first the system of equations arising from (48) with $b_i = 1/i$ for $i = 1, 2, \ldots, 21$, and $b_0 = 0$. We used the vpasolve function (https://es.mathworks.com/help/symbolic/vpasolve.html (accessed on 24 June 2021)) from the MATLAB Symbolic Math Toolbox to solve those equations with variable precision arithmetic. We used the Random option of vpasolve, which allows one to obtain different solutions for the coefficients, running it 100 times. The majority of the solutions were complex, but there were two real stable solutions. Then, we obtained the nested solutions for the coefficients of (16) and (17) with $s = 3$ for computing polynomial (44) with four matrix products (see [1], Section 3), giving also real and complex solutions.
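The workflow just described can be sketched as follows (our illustration on a generic nonpolynomial placeholder system, not the actual system arising from (48)); vpasolve with the Random option starts from random initial points, so repeated calls can return different roots:
% Solution-sampling sketch: collect distinct real solutions returned
% by vpasolve over repeated randomized runs.
syms x y
F = [x^2 + sin(y) - 2; x*y - 1];   % placeholder system, not (48)
sols = [];
for trial = 1:100
    S = vpasolve(F, [x, y], 'Random', true);
    if ~isempty(S.x)
        v = double([S.x, S.y]);
        if isreal(v)
            sols = unique([sols; v], 'rows');  % keep real solutions only
        end
    end
end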
Again, we selected the real stable solution given in Table 2. This solution avoids complex arithmetic if the matrix $A$ is real. The relative errors of the coefficients of $A^{22}$, $A^{23}$, and $A^{24}$ of $y_{23}(A)$ with respect to the corresponding Taylor approximation of order 24 of the $\log(I - A)$ function are:
$$a_{22} = 3.205116205918952 \times 10^{-2}, \quad |a_{22} - 1/22|\; 22 \approx 0.295, \qquad (60)$$
$$a_{23} = 1.480540983455180 \times 10^{-2}, \quad |a_{23} - 1/23|\; 23 \approx 0.659, \qquad (61)$$
$$a_{24} = 3.754613237786792 \times 10^{-3}, \quad |a_{24} - 1/24|\; 24 \approx 0.910, \qquad (62)$$
where $a_{22}$, $a_{23}$, and $a_{24}$ are rounded to double precision arithmetic. Then, considering exact arithmetic, one gets
$$y_{23}(A) = \sum_{i=1}^{21} A^i / i + a_{22} A^{22} + a_{23} A^{23} + a_{24} A^{24}, \qquad (63)$$
which is more accurate than the corresponding Taylor approximation of $\log(B)$ of order $m = 21$. Therefore, similarly to [5] (Section 4), the approximation order of (63) is denoted by $m = 21+$.
The $\theta$ values such that the relative backward errors for the Padé approximations are lower than $u$ are shown in [11] (Table 2.1). The corresponding $\theta$ value for the Taylor approximation of $\log(I - A)$ of order $m = 21+$, denoted by $\theta_{21+}$, can be computed similarly (see [11] for details), giving $\theta_{21+} = 0.211084493690929$, where the value is rounded to IEEE double precision arithmetic.
We compared the results of using (56)–(58) with the coefficient values from Table 2 with the results given by function logm_iss_full from [20]. For that comparison, we used a matrix test set of 43 matrices of size $8 \times 8$ from the Matrix Computation Toolbox [18]. We reduced their norms so that they are random with a uniform distribution in $[0.2, \theta_{21+}]$, in order to compare the Padé approximations of logm_iss_full with the Taylor-based evaluation Formulas (56)–(58), using no inverse scaling in either of the approximations (see [11]).
The "exact" matrix logarithm was computed using the method from [19]. The error of the implementation using Formula (58) was lower than that of logm_iss_full for $100\%$ of the matrices, with a $19.61\%$ lower relative cost in flops. Therefore, evaluation Formulas (56)–(58) are efficient and accurate for a future Taylor-based implementation for computing the matrix logarithm.
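For reference, the evaluation of (56)–(58) costs five matrix products; a minimal MATLAB sketch (ours), assuming the Table 2 values are stored in a vector c with c(i) holding the coefficient $c_i$, is:
% Sketch of the order 21+ logarithm approximation (56)-(59):
% five matrix products, coefficients c(1..20) from Table 2.
n  = size(B, 1);
A  = eye(n) - B;                    % log(B) = log(I - A), see (1)
A2 = A*A;                           % 1st product
A3 = A2*A;                          % 2nd product
y0 = A3*(c(1)*A3 + c(2)*A2 + c(3)*A);                        % (56): 3rd
y1 = (y0 + c(4)*A3 + c(5)*A2 + c(6)*A) ...
     *(y0 + c(7)*A3 + c(8)*A2) ...
     + c(9)*y0 + c(10)*A3 + c(11)*A2;                        % (57): 4th
y2 = (y1 + c(12)*A3 + c(13)*A2 + c(14)*A) ...
     *(y1 + c(15)*y0 + c(16)*A) ...
     + c(17)*y1 + c(18)*y0 + c(19)*A3 + c(20)*A2 + A;        % (58): 5th
logB = -y2;                         % (59): log(B) is approximated by -y23(A)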

3.3. Evaluation of Matrix Polynomials of Degree $m = 6s$

The following proposition generalizes the particular cases of the evaluation of the matrix exponential Taylor approximation with degrees $m = 24$ and 30 from [5] (Section 3.4) for evaluating general matrix polynomials of degree $m = 6s$, $s = 2, 3, \ldots$
Proposition 3.
Let $y_{0s}(A)$, $y_{1s}(A)$, and $y_{2s}(A)$ be the polynomials
$$y_{0s}(A) = A^s \sum_{i=1}^{s} e_{s+i} A^i, \qquad (64)$$
$$y_{1s}(A) = \sum_{i=1}^{4s} c_i A^i, \qquad (65)$$
$$y_{2s}(A) = y_{1s}(A) \left( y_{0s}(A) + \sum_{i=1}^{s} e_i A^i \right) + \sum_{i=0}^{s} f_i A^i, \qquad (66)$$
and let $P_m(A)$ be the polynomial
$$P_m(A) = \sum_{i=0}^{6s} b_i A^i. \qquad (67)$$
Then,
$$y_{2s}(A) = \sum_{i=0}^{6s} a_i A^i, \qquad (68)$$
where the coefficients $a_i = a_i(c_i, e_j, f_k)$, $i = 1, 2, \ldots, 4s$, $j = 1, 2, \ldots, 2s$, $k = 0, 1, \ldots, s$, and if
$$b_{6s} \neq 0, \qquad (69)$$
then, if we equate $y_{2s}(A) = P_m(A)$, i.e.,
$$a_i = b_i, \quad i = 0, 1, \ldots, 6s, \qquad (70)$$
then the following relationships between the coefficients of the polynomials $y_{0s}(A)$, $y_{1s}(A)$, $y_{2s}(A)$, and $P_m(A)$ are fulfilled:
a. 
$$c_{4s-k} = c_{4s-k}(b_{6s}, b_{6s-1}, \ldots, b_{6s-k}), \quad e_{2s-k} = e_{2s-k}(b_{6s}, b_{6s-1}, \ldots, b_{6s-k}), \quad k = 0, 1, \ldots, s-1. \qquad (71)$$
b. 
$$c_{3s-k} = c_{3s-k}(b_{6s}, b_{6s-1}, \ldots, b_{5s-k}, e_s, \ldots, e_{s-k}), \quad k = 0, \ldots, s-1. \qquad (72)$$
c. 
$$c_{2s-k} = c_{2s-k}(b_{6s}, \ldots, b_{4s-k}, e_s, \ldots, e_1), \quad k = 0, \ldots, s-1. \qquad (73)$$
d. 
$$c_{s-k} = c_{s-k}(b_{6s}, \ldots, b_{3s-k}, e_s, \ldots, e_1), \quad k = 0, \ldots, s-1. \qquad (74)$$
Proof of Proposition 3.
Polynomial $y_{1s}(A)$ from (65) can be computed using the general method from [1] (Section 3), reproduced here as (16) and (17), provided condition (18) is fulfilled, i.e., $c_{4s} \neq 0$.
  • In the following, we show that (71) holds. Taking (16) and (17) into account, one gets
$$y_{1s}(A) = y_{0s}^2(A) + q(A), \qquad (75)$$
    where $q(x)$ is a polynomial of degree lower than $3s + 1$; equating the terms of degree $4s$ in (75), we obtain $e_{2s} = \pm\sqrt{c_{4s}}$. On the other hand, equating the terms of degree $6s$ in (66), taking condition (69) into account, we obtain
$$c_{4s} e_{2s} = b_{6s}, \quad c_{4s} \left( \pm\sqrt{c_{4s}} \right) = b_{6s}, \quad c_{4s} = b_{6s}^{2/3} \neq 0, \qquad (76)$$
$$e_{2s} = \sqrt[3]{b_{6s}} \neq 0. \qquad (77)$$
    Since condition (69) is fulfilled, then by (76), one gets that condition (18) is also fulfilled. Then, polynomial $y_{1s}(A)$ from (65) can be effectively computed using (16) and (17), and, by (76) and (77), one gets that $c_{4s}$ and $e_{2s}$ depend on $b_{6s}$, i.e.,
$$c_{4s} = c_{4s}(b_{6s}), \quad e_{2s} = e_{2s}(b_{6s}). \qquad (78)$$
    Equating the terms of degree $4s - 1$ in (75), we obtain
$$c_{4s-1} = 2 e_{2s} e_{2s-1}. \qquad (79)$$
    Therefore,
$$e_{2s-1} = \frac{c_{4s-1}}{2 e_{2s}}. \qquad (80)$$
    Equating the terms of degree $4s - 2$ in (75), we obtain
$$c_{4s-2} = e_{2s} e_{2s-2} + e_{2s-1}^2 + e_{2s-2} e_{2s}, \qquad (81)$$
    then
$$e_{2s-2} = \frac{c_{4s-2} - e_{2s-1}^2}{2 e_{2s}}. \qquad (82)$$
    Equating the terms of degree $4s - 3$ in (75), we obtain
$$c_{4s-3} = e_{2s} e_{2s-3} + e_{2s-1} e_{2s-2} + e_{2s-2} e_{2s-1} + e_{2s-3} e_{2s},$$
    then
$$e_{2s-3} = \frac{c_{4s-3} - (e_{2s-1} e_{2s-2} + e_{2s-2} e_{2s-1})}{2 e_{2s}}.$$
    Equating the terms of degree $4s - 4$ in (75), we obtain
$$c_{4s-4} = e_{2s} e_{2s-4} + e_{2s-1} e_{2s-3} + e_{2s-2} e_{2s-2} + e_{2s-3} e_{2s-1} + e_{2s-4} e_{2s},$$
    then
$$e_{2s-4} = \frac{c_{4s-4} - \sum_{i=1}^{3} e_{2s-i} e_{2s+i-4}}{2 e_{2s}}.$$
    Proceeding in an analogous way with $e_{2s-k}$ for $k = 5, 6, \ldots, s-1$, we obtain
$$e_{2s-k} = \frac{c_{4s-k} - \sum_{i=1}^{k-1} e_{2s-i} e_{2s+i-k}}{2 e_{2s}}, \quad k = 1, 2, \ldots, s-1.$$
    On the other hand, equating the terms of degree $6s - 1$ in (66), and taking (79) into account, we obtain
$$c_{4s} e_{2s-1} + c_{4s-1} e_{2s} = c_{4s-1} \left( \frac{c_{4s}}{2 e_{2s}} + e_{2s} \right) = b_{6s-1}.$$
    Since
$$\frac{c_{4s}}{2 e_{2s}} + e_{2s} = \frac{c_{4s} + 2 e_{2s}^2}{2 e_{2s}} = \frac{b_{6s}^{2/3} + 2 b_{6s}^{2/3}}{2 \sqrt[3]{b_{6s}}} = \frac{3 \sqrt[3]{b_{6s}}}{2} \neq 0,$$
    then
$$c_{4s-1} = \frac{2 b_{6s-1}}{3 \sqrt[3]{b_{6s}}}. \qquad (83)$$
    Taking into account (77), (79), and (83), we obtain that $c_{4s-1}$ and $e_{2s-1}$ depend on $b_{6s}$ and $b_{6s-1}$, i.e.,
$$c_{4s-1} = c_{4s-1}(b_{6s}, b_{6s-1}), \quad e_{2s-1} = e_{2s-1}(b_{6s}, b_{6s-1}). \qquad (84)$$
    Equating the terms of degree $6s - 2$ in (66) and taking (82) into account, we obtain
$$c_{4s} e_{2s-2} + c_{4s-1} e_{2s-1} + c_{4s-2} e_{2s} = b_{6s-2}, \qquad (85)$$
    and, taking into account (80) and (82), it follows that
$$c_{4s-2} = \frac{b_{6s-2} - c_{4s-1} e_{2s-1} + \tfrac{1}{2}\, e_{2s}\, e_{2s-1}^2}{\tfrac{3}{2} \sqrt[3]{b_{6s}}}.$$
    On the other hand, from (77), (84), and (85), one gets that $c_{4s-2}$ and $e_{2s-2}$ can be computed explicitly depending on $b_{6s}$, $b_{6s-1}$, and $b_{6s-2}$, i.e.,
$$c_{4s-2} = c_{4s-2}(b_{6s}, b_{6s-1}, b_{6s-2}), \quad e_{2s-2} = e_{2s-2}(b_{6s}, b_{6s-1}, b_{6s-2}). \qquad (86)$$
    Proceeding similarly when equating the terms of degrees $6s - 3$, $6s - 4$, …, $5s + 1$ in (66), one gets (71).
  • In the following, we show that (72) holds. Equating the terms of degree $5s$ in (66) and taking condition $e_{2s} \neq 0$ from (77) into account, we obtain
$$c_{4s} e_s + c_{4s-1} e_{s+1} + \cdots + c_{3s+1} e_{2s-1} + c_{3s} e_{2s} = b_{5s}, \quad
c_{3s} = \frac{b_{5s} - (c_{4s} e_s + c_{4s-1} e_{s+1} + \cdots + c_{3s+1} e_{2s-1})}{e_{2s}}.$$
    Hence, taking (71) into account, it follows that
$$c_{3s} = c_{3s}(b_{6s}, b_{6s-1}, \ldots, b_{5s+1}, b_{5s}, e_s). \qquad (87)$$
    Equating the terms of degree $5s - 1$ in (66) and taking condition $e_{2s} \neq 0$ from (77) into account, we obtain
$$c_{4s} e_{s-1} + c_{4s-1} e_s + \cdots + c_{3s} e_{2s-1} + c_{3s-1} e_{2s} = b_{5s-1}, \quad
c_{3s-1} = \frac{b_{5s-1} - (c_{4s} e_{s-1} + c_{4s-1} e_s + \cdots + c_{3s} e_{2s-1})}{e_{2s}}.$$
    Hence, using (87), one gets
$$c_{3s-1} = c_{3s-1}(b_{6s}, b_{6s-1}, \ldots, b_{5s}, b_{5s-1}, e_s, e_{s-1}).$$
    Proceeding similarly, equating the terms of degrees $5s - 2$, $5s - 3$, …, $4s + 1$ in (66), one gets (72).
  • In the following, we show that (73) holds. Equating the terms of degree $4s$ in (66) and taking condition $e_{2s} \neq 0$ from (77) into account, it follows that
$$c_{4s-1} e_1 + c_{4s-2} e_2 + \cdots + c_{2s+1} e_{2s-1} + c_{2s} e_{2s} = b_{4s}, \quad
c_{2s} = \frac{b_{4s} - (c_{4s-1} e_1 + c_{4s-2} e_2 + \cdots + c_{2s+1} e_{2s-1})}{e_{2s}}.$$
    Taking (71) and (72) into account, we obtain
$$c_{2s} = c_{2s}(b_{6s}, \ldots, b_{4s}, e_s, \ldots, e_1).$$
    Equating the terms of degree $4s - 1$ in (66) and using condition $e_{2s} \neq 0$, one gets
$$c_{4s-2} e_1 + c_{4s-3} e_2 + \cdots + c_{2s} e_{2s-1} + c_{2s-1} e_{2s} = b_{4s-1}, \quad
c_{2s-1} = \frac{b_{4s-1} - (c_{4s-2} e_1 + c_{4s-3} e_2 + \cdots + c_{2s} e_{2s-1})}{e_{2s}}.$$
    Taking (71) and (72) into account, we obtain
$$c_{2s-1} = c_{2s-1}(b_{6s}, \ldots, b_{4s-1}, e_s, \ldots, e_1).$$
    Proceeding similarly, equating the terms of degrees $4s - 2$, $4s - 3$, …, $3s + 1$ in (66) and taking (71), (72), and condition $e_{2s} \neq 0$ into account, one gets (73).
  • In the following, we show that (74) holds. Equating the terms of degree $3s$ in (66) and taking condition $e_{2s} \neq 0$ into account, it follows that
$$c_{3s-1} e_1 + c_{3s-2} e_2 + \cdots + c_{s+1} e_{2s-1} + c_s e_{2s} = b_{3s}, \quad
c_s = \frac{b_{3s} - (c_{3s-1} e_1 + c_{3s-2} e_2 + \cdots + c_{s+1} e_{2s-1})}{e_{2s}}.$$
    Hence, from (71)–(73), we obtain
$$c_s = c_s(b_{6s}, \ldots, b_{3s}, e_s, \ldots, e_1).$$
    Equating the terms of degree $3s - 1$ in (66) and using condition $e_{2s} \neq 0$, we obtain
$$c_{3s-2} e_1 + c_{3s-3} e_2 + \cdots + c_s e_{2s-1} + c_{s-1} e_{2s} = b_{3s-1}, \quad
c_{s-1} = \frac{b_{3s-1} - (c_{3s-2} e_1 + c_{3s-3} e_2 + \cdots + c_s e_{2s-1})}{e_{2s}}.$$
    Hence, from (71)–(73), we obtain
$$c_{s-1} = c_{s-1}(b_{6s}, \ldots, b_{3s-1}, e_s, \ldots, e_1).$$
    Proceeding similarly, equating the terms of degrees $3s - 2$, $3s - 3$, …, $2s + 1$ in (66), one gets (74).
 □
Corollary 1.
If condition (69) holds, then the system of $6s + 1$ equations with $7s + 1$ variables arising from (70) can be reduced using variable substitution to a system of $s$ equations with $s$ variables, and, if there exists at least one solution for that system, then all the coefficients from (64)–(66) can be calculated using the solution of the system.
Proof of Corollary 1.
If we equate the terms of degrees $2s$, $2s - 1$, …, $s + 1$ in (66), we obtain the following system of equations:
$$
\begin{aligned}
c_{2s-1} e_1 + c_{2s-2} e_2 + \cdots + c_2 e_{2s-2} + c_1 e_{2s-1} &= b_{2s},\\
c_{2s-2} e_1 + c_{2s-3} e_2 + \cdots + c_2 e_{2s-3} + c_1 e_{2s-2} &= b_{2s-1},\\
&\;\;\vdots\\
c_s e_1 + c_{s-1} e_2 + \cdots + c_2 e_{s-1} + c_1 e_s &= b_{s+1}.
\end{aligned}
\qquad (89)
$$
Taking (71), (73), and (74) into account, it follows that system (89) can be written as a system of $s$ equations in the $s$ unknown variables $e_1, e_2, \ldots, e_s$, where, in general, (89) is a nonlinear system, since the $c_k$ coefficients depend on the $e_k$ coefficients.
Equating the terms of degrees $s$, $s - 1$, …, 0 in (66), one gets
$$
\begin{aligned}
f_s &= b_s - c_{s-1} e_1 - c_{s-2} e_2 - \cdots - c_1 e_{s-1},\\
f_{s-1} &= b_{s-1} - c_{s-2} e_1 - c_{s-3} e_2 - \cdots - c_1 e_{s-2},\\
&\;\;\vdots\\
f_2 &= b_2 - c_1 e_1,\\
f_1 &= b_1,\\
f_0 &= b_0.
\end{aligned}
\qquad (90)
$$
Using (71) from Proposition 3, one gets that the values $c_{4s-k}$ and $e_{2s-k}$, for $k = 0, \ldots, s-1$, can be calculated explicitly depending on the polynomial coefficients $b_i$ for $i = 6s, 6s-1, \ldots, 5s+1$.
If there exists at least one solution of system (89), then the values $c_{3s-k}$, $c_{2s-k}$, and $c_{s-k}$ can be calculated for $k = 0, \ldots, s-1$ (see (72)–(74)), and the coefficients $f_{s-k}$ can be calculated for $k = 0, \ldots, s$ using (90), allowing one to obtain all the coefficients from (64)–(66).  □
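As a small illustration of this last step (ours), once vectors c and e solving (89) are available, the back-substitution (90) is a direct double loop; here b(i+1), c(i), e(i), and f(i+1) are assumed to store $b_i$, $c_i$, $e_i$, and $f_i$:
% Back-substitution sketch for (90): recover f_0, ..., f_s from the
% polynomial coefficients b and the already computed c and e values.
f = zeros(s+1, 1);
f(1) = b(1);                          % f_0 = b_0
f(2) = b(2);                          % f_1 = b_1
for k = 2:s                           % f_k = b_k - sum_{j=1}^{k-1} c_{k-j} e_j
    f(k+1) = b(k+1);
    for j = 1:k-1
        f(k+1) = f(k+1) - c(k-j)*e(j);
    end
end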
Using [1] (Table 6), in Table 3, we present the maximum available order for a cost C ( M ) in the following cases:
  • The Paterson–Stockmeyer evaluation formula.
  • $z_{kps}$ from (19) with $k = 1$, denoting the combination of (17) with the Paterson–Stockmeyer formula proposed in [1] (Section 3.1).
  • $z_{kps}$ from (19) with $k = 2$, denoting the combination of (66) with the Paterson–Stockmeyer formula, whenever a solution for the coefficients of $z_{2ps}$ exists.
Table 3 also shows the values of $p$ and $s$ for $z_{2ps}(A)$ such that $s$ is minimum to obtain the required order, giving the minimum size of the system (89) to solve, i.e., $s$ equations with $s$ unknown variables. Note that it makes no sense to use (66) for $s = 1$ and cost $C = 3M$, since the order obtained is $m = 6s = 6$ and, for that cost, the Paterson–Stockmeyer method obtains the same order. Table 3 shows that the evaluation formula $z_{2ps}$ obtains a greater order than $z_{1ps}$ for orders greater than 12. Concretely, for $s \geq 5$, where the available order with $z_{2s}$ is $d_{z_{2s}} = 30, 36, 42, \ldots$, $z_{2ps}$ allows increments of $10, 11, 12, \ldots$ in the available order with respect to using the Paterson–Stockmeyer method, and increments of $5, 6, 6, \ldots$ with respect to using $z_{1ps}$.
In [5], real stable solutions were found for the coefficients of (64)–(66) for the exponential Taylor approximation of degrees $6s$ with $s = 4$ and 5, i.e., 24 and 30. The following example deals with the matrix logarithm Taylor approximation.
Example 3.
In this example, we provide real coefficients for calculating the Taylor approximation of the principal matrix logarithm $\log(B)$ of order $m = 6s = 30$, $s = 5$, in a stable manner, based on the results of Proposition 3 and Corollary 1, with the following expressions:
$$y_{05}(A) = A^5 (c_1 A^5 + c_2 A^4 + c_3 A^3 + c_4 A^2 + c_5 A), \qquad (91)$$
$$
\begin{aligned}
y_{15}(A) = {} & (y_{05}(A) + c_6 A^5 + c_7 A^4 + c_8 A^3 + c_9 A^2 + c_{10} A) \\
& \times (y_{05}(A) + c_{11} A^5 + c_{12} A^4 + c_{13} A^3 + c_{14} A^2) \\
& + c_{15} y_{05}(A) + c_{16} A^5 + c_{17} A^4 + c_{18} A^3 + c_{19} A^2 + c_{20} A,
\end{aligned}
\qquad (92)
$$
$$
\begin{aligned}
y_{25}(A) = {} & y_{15}(A) (y_{05}(A) + c_{21} A^5 + c_{22} A^4 + c_{23} A^3 + c_{24} A^2 + c_{25} A) \\
& + c_{26} A^5 + c_{27} A^4 + c_{28} A^3 + c_{29} A^2 + c_{30} A,
\end{aligned}
\qquad (93)
$$
where the coefficients are numbered consecutively. The coefficients $c_i$, $i = 1, 2, \ldots, 30$, can be obtained following the procedure from Section 3.3, reducing the whole system of 30 equations with 30 unknown variables to the system (89) of $s = 5$ equations with $s = 5$ unknowns $e_i$, $i = 1, 2, \ldots, 5$, corresponding in (93) to $e_1 = c_{25}$, $e_2 = c_{24}$, …, $e_5 = c_{21}$. Once this was done, we checked that $e_1$ and $e_2$ could be easily solved as functions of $e_3$, $e_4$, and $e_5$, reducing the system to a system of three equations with three unknown variables. To obtain a real solution for the three coefficients, we used the MATLAB Symbolic Math Toolbox function vpasolve, giving a range $[-10, 10]$ for the solutions of the three variables and using 32 decimal digits. The resulting coefficients from (91)–(93), rounded to IEEE double precision arithmetic, are given in Table 4.
Note that, using the evaluation Formulas (91)–(93), the Taylor approximation $y_{25}(A)$ of order $m = 30$ can be computed with a cost of $7M$. For the same order, the cost of the Paterson–Stockmeyer method is $9M$, and using $z_{1ps}$ from (19) the cost is $8M$ (see Table 3). Similarly to [11], we computed the $\theta$ value such that the relative backward error is lower than $u$ for the Taylor approximation of $\log(I - A)$ of order $m = 30$, giving $\theta_{30} = 0.329365534847136$.
Similarly to Example 2, to check if $y_{25}(A)$ is competitive, we prepared a new matrix test set with 50 matrices of size $8 \times 8$ from the Matrix Computation Toolbox [18], reducing their norms so that they are random with a uniform distribution in $[0.3, \theta_{30}]$, so that the inverse scaling algorithm is not used in either the Padé or the Taylor algorithm. Then, we compared the results of using (91)–(93) with the results given by function logm_iss_full from [20] for the previous matrix set, computing the "exact" values of the matrix logarithm in the same way. The error of using the evaluation Formulas (91)–(93) was lower than that of logm_iss_full for $97.62\%$ of the matrices, with a $42.40\%$ lower relative cost in flops, being competitive in efficiency and accuracy for future implementations for computing the matrix logarithm.
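A minimal MATLAB sketch of the evaluation (ours), with the Table 4 values assumed stored in a vector c and assuming, as in Example 2, that the approximation targets $\sum_{i \ge 1} A^i / i$ so that $\log(B) \approx -y_{25}(A)$:
% Sketch of the order-30 logarithm approximation (91)-(93):
% seven matrix products, coefficients c(1..30) from Table 4.
A2 = A*A;  A3 = A2*A;  A4 = A3*A;  A5 = A4*A;                    % 4 products
y0 = A5*(c(1)*A5 + c(2)*A4 + c(3)*A3 + c(4)*A2 + c(5)*A);        % (91): 5th
y1 = (y0 + c(6)*A5 + c(7)*A4 + c(8)*A3 + c(9)*A2 + c(10)*A) ...
     *(y0 + c(11)*A5 + c(12)*A4 + c(13)*A3 + c(14)*A2) ...
     + c(15)*y0 + c(16)*A5 + c(17)*A4 + c(18)*A3 + c(19)*A2 + c(20)*A;  % (92): 6th
y2 = y1*(y0 + c(21)*A5 + c(22)*A4 + c(23)*A3 + c(24)*A2 + c(25)*A) ...
     + c(26)*A5 + c(27)*A4 + c(28)*A3 + c(29)*A2 + c(30)*A;      % (93): 7th
logB = -y2;                 % log(B) = log(I-A) is approximated by -y25(A)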
Note that, using the evaluation formulae from Sections 3.1 and 3.2 with costs $4M$ and $5M$, one can get approximation orders $15+$ and $21+$, respectively, whereas, using $z_{2ps}$ from (19) combining (66) with the Paterson–Stockmeyer method, the orders that can be obtained are lower, i.e., 12 and 18, respectively (see Table 3). Note that, for the approximation $15+$, where $s = 2$ (see Section 3.1), one gets order $15+ = (6s + 3)+$ and the total degree of the polynomial obtained is $8s = 16$. For the approximation $21+$, where $s = 3$, one gets order $21+ = (6s + 3)+$ and the total degree of the polynomial obtained is $8s = 24$. The next step in our research is to extend the evaluation formulae from Propositions 1 and 2 to evaluate polynomial approximations of order $(6s + 3)+$ of the type
$$y_{0s}(A) = A^s \sum_{i=1}^{s} c_i A^i, \qquad (94)$$
$$y_{1s}(A) = \sum_{i=s+1}^{4s} a_i A^i = \left( y_{0s}(A) + \sum_{i=1}^{s} d_i A^i \right) \left( y_{0s}(A) + \sum_{i=2}^{s} e_i A^i \right) + f_0 y_{0s}(A) + \sum_{i=3}^{s} f_i A^i, \qquad (95)$$
$$y_{2s}(A) = \left( y_{1s}(A) + \sum_{i=1}^{s} g_i A^i \right) \left( y_{1s}(A) + h_0 y_{0s}(A) + \sum_{i=1}^{s} h_i A^i \right) + j_0 y_{1s}(A) + k_0 y_{0s}(A) + \sum_{i=0}^{s} l_i A^i. \qquad (96)$$
Those formulae correspond to a particular case of Formulas (62)–(65) of [1] (Prop. 2), where $k = 2$. It is easy to show that the degree of $y_{2s}(A)$ is $8s$ and that the total number of coefficients of $y_{2s}$ is $6s + 4$, i.e., $3s$ coefficients $a_i$, $s$ coefficients $g_i$, $s$ coefficients $h_i$, $s + 1$ coefficients $l_i$, and the coefficients $f_0$, $j_0$, and $k_0$. Using vpasolve in a similar way as in Example 2, we could find solutions for the coefficients of (94)–(96) and (19) so that $y_{2s}(A)$ and $z_{2ps}$ allow one to evaluate matrix logarithm Taylor-based approximations of orders from $15+$ up to $75+$. Similarly, we could also find the coefficients for Formulas (94)–(96) to evaluate matrix hyperbolic tangent Taylor approximations of orders higher than 21. Then, our next research step is to show that evaluation Formulas (94)–(96) and their combination with the Paterson–Stockmeyer method from (19) can be used for general polynomial approximations of matrix functions.

4. Conclusions

In this paper, we extended the family of methods for evaluating matrix polynomials from [1], obtaining general solutions for new cases of the general matrix polynomial evaluation Formulas (62)–(65) from Proposition 2 of [1] (Section 5). These cases allow one to compute matrix polynomial approximations of orders 15 and 21 with costs of $4M$ and $5M$, respectively, whenever a stable solution for the coefficients exists. Moreover, a general method for computing matrix polynomials of order $m = 6s$, for $s = 3, 4, \ldots$, more efficiently than the methods provided in [1] was given. Combining this method with the Paterson–Stockmeyer method, polynomials of degree greater than 30 can be evaluated with two matrix products fewer than using the Paterson–Stockmeyer method, as shown in Table 3.
Examples for evaluating Taylor approximations of the matrix cosine and the matrix logarithm were given. The accuracy and efficiency of the proposed evaluation formulae were compared to those of state-of-the-art Padé algorithms, showing that the new formulae are competitive for future implementations for computing both functions.
Future work will deal with the generalization of more efficient evaluation formulae based on the evaluation Formulas (62)–(65) from Proposition 2 of [1] (Section 5), with their combination with the Paterson–Stockmeyer method (19) and, in general, with evaluation formulae based on products of matrix polynomials.

Author Contributions

Conceptualization, J.S.; methodology, J.S. and J.I.; software, J.S.; validation, J.S. and J.I.; formal analysis, J.S. and J.I.; investigation, J.S. and J.I.; resources, J.S. and J.I.; writing—original draft preparation, J.S. and J.I.; writing—review and editing, J.S. and J.I. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by the European Regional Development Fund (ERDF) and the Spanish Ministerio de Economía y Competitividad grant TIN2017-89314-P, and by the Programa de Apoyo a la Investigación y Desarrollo 2018 of the Universitat Politècnica de València grant PAID-06-18-SP20180016.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.


References

1. Sastre, J. Efficient evaluation of matrix polynomials. Linear Algebra Appl. 2018, 539, 229–250.
2. Paterson, M.S.; Stockmeyer, L.J. On the number of nonscalar multiplications necessary to evaluate polynomials. SIAM J. Comput. 1973, 2, 60–66.
3. Higham, N.J. Functions of Matrices: Theory and Computation; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2008.
4. Sastre, J.; Ibáñez, J.; Alonso, P.; Peinado, J.; Defez, E. Fast Taylor polynomial evaluation for the computation of the matrix cosine. J. Comput. Appl. Math. 2019, 354, 641–650.
5. Sastre, J.; Ibáñez, J.; Defez, E. Boosting the computation of the matrix exponential. Appl. Math. Comput. 2019, 340, 206–220.
6. Al-Mohy, A.H.; Higham, N.J. A new scaling and squaring algorithm for the matrix exponential. SIAM J. Matrix Anal. Appl. 2009, 31, 970–989.
7. Al-Mohy, A.H.; Higham, N.J.; Relton, S. New algorithms for computing the matrix sine and cosine separately or simultaneously. SIAM J. Sci. Comput. 2015, 37, A456–A487.
8. Bader, P.; Blanes, S.; Casas, F. Computing the matrix exponential with an optimized Taylor polynomial approximation. Mathematics 2019, 7, 1174.
9. Bader, P.; Blanes, S.; Casas, F. An improved algorithm to compute the exponential of a matrix. arXiv 2017, arXiv:1710.10989.
10. Sastre, J. On the Polynomial Approximation of Matrix Functions. Available online: http://personales.upv.es/~jorsasma/AMC-S-16-00951.pdf (accessed on 20 April 2020).
11. Al-Mohy, A.H.; Higham, N.J. Improved inverse scaling and squaring algorithms for the matrix logarithm. SIAM J. Sci. Comput. 2012, 34, C153–C169.
12. Moler, C.B.; Van Loan, C. Nineteen dubious ways to compute the exponential of a matrix, twenty-five years later. SIAM Rev. 2003, 45, 3–49.
13. Blackford, S.; Dongarra, J. Installation Guide for LAPACK, LAPACK Working Note 41. Available online: http://www.netlib.org/lapack/lawnspdf/lawn41.pdf (accessed on 20 April 2020).
14. Sastre, J. Efficient mixed rational and polynomial approximation of matrix functions. Appl. Math. Comput. 2012, 218, 11938–11946.
15. Sastre, J.; Ibáñez, J.; Alonso, P.; Peinado, J.; Defez, E. Two algorithms for computing the matrix cosine function. Appl. Math. Comput. 2017, 312, 66–77.
16. Fasi, M. Optimality of the Paterson–Stockmeyer method for evaluating matrix polynomials and rational matrix functions. Linear Algebra Appl. 2019, 574, 182–200.
17. Ibáñez, J.; Alonso, J.M.; Sastre, J.; Defez, E.; Alonso-Jordá, P. Advances in the approximation of the matrix hyperbolic tangent. Mathematics 2021, 9, 1219.
18. Higham, N.J. The Matrix Computation Toolbox. Available online: http://www.ma.man.ac.uk/~higham/mctoolbox (accessed on 18 April 2020).
19. Davies, E.B. Approximate diagonalization. SIAM J. Matrix Anal. Appl. 2007, 29, 1051–1064.
20. Higham, N. Matrix Logarithm. 2020. Available online: https://www.mathworks.com/matlabcentral/fileexchange/33393-matrix-logarithm (accessed on 18 April 2020).
Table 1. Coefficients of y_{02}, y_{12}, and y_{22} from (39)–(41) for computing the Taylor-based approximation z_{222}(B) of order 2m = 34+ from (36) of the matrix cosine.

q_4    3.571998478323090 × 10^−11     d_1   −2.645687940516643 × 10^−3
q_3   −1.857982456862233 × 10^−8      e_1    1.049722718717408 × 10^−1
r_2    3.278753597700932 × 10^−5      e_0    8.965376033761624 × 10^−4
r_1   −1.148774768780758 × 10^−2      f_0   −1.859420533601965 × 10^0
s_2   −2.008741312156575 × 10^−5      g_0    1.493008139094410 × 10^−1
s_0    1.737292932136998 × 10^−1      h_2    1.570135323717639 × 10^−4
t_2    6.982819862335600 × 10^−5      h_1   −1/6!
d_2   −5.259287265295055 × 10^−5      h_0    1/4!
Table 2. Coefficients of y_{03}, y_{13}, and y_{23} from (56)–(58) for computing a Taylor-based approximation of the function log(B) = log(I − A) of order m = 21+.

c_1    2.475376717210241 × 10^−1      c_11  −1.035631527011582 × 10^−1
c_2    2.440262449961976 × 10^−1      c_12  −3.416046999733390 × 10^−1
c_3    1.674278428631194 × 10^−1      c_13   4.544910328432021 × 10^−2
c_4   −9.742340743664729 × 10^−2      c_14   2.741820014945195 × 10^−1
c_5   −4.744919764579607 × 10^−2      c_15  −1.601466804001392 × 10^0
c_6    5.071515307996127 × 10^−1      c_16   1.681067607322385 × 10^−1
c_7    2.025389951302878 × 10^−1      c_17   7.526271076306975 × 10^−1
c_8   −4.809463272682823 × 10^−2      c_18   4.282509402345739 × 10^−2
c_9    6.574533191427105 × 10^−1      c_19   1.462562712251202 × 10^−1
c_10   3.236650728737168 × 10^−1      c_20   5.318525879522635 × 10^−1
Table 3. Maximum available approximation order for a cost C using the Paterson–Stockmeyer method, denoted by d_PS; maximum order using z_{1ps} from (19), which combines (16) and (17) with the Paterson–Stockmeyer method, denoted by d_{z1s}; and maximum order using z_{2ps} from (19), which combines (66) with the Paterson–Stockmeyer method, denoted by d_{z2s}, whenever a solution for the coefficients of z_{2ps} exists. The parameters s and p of z_{2ps}(x) are chosen so that s is minimal for the required order, giving a system (89) of s equations of minimum size.

C (M)     3    4    5    6    7    8    9   10   11   12   13
d_PS      6    9   12   16   20   25   30   36   42   49   56
d_z1s     8   12   16   20   25   30   36   42   49   56   64
d_z2s     -   12   18   24   30   36   42   49   56   64   72
s_z2s     -    2    3    4    5    6    6    7    7    8    8
p_z2s     -    0    0    0    0    0    6    7   14   16   24
Table 4. Coefficients of y_{05}, y_{15}, y_{25} from (91)–(93) for computing the Taylor approximation of log(B) = log(I − A) = y_{25}(A) of order m = 30.

c_1    3.218297948685432 × 10^−1      c_16   2.231079274704953 × 10^−1
c_2    1.109757913339804 × 10^−1      c_17   3.891001336083639 × 10^−1
c_3    7.667169819995447 × 10^−2      c_18   6.539646241763075 × 10^−1
c_4    6.192062222365700 × 10^−2      c_19   8.543283349051067 × 10^−1
c_5    5.369406358130299 × 10^−2      c_20  −1.642222074981266 × 10^−2
c_6    2.156719633283115 × 10^−1      c_21   6.179507508449100 × 10^−2
c_7   −2.827270631646985 × 10^−2      c_22   3.176715034213954 × 10^−2
c_8   −1.299375958233227 × 10^−1      c_23   8.655952402393143 × 10^−2
c_9   −3.345609833413695 × 10^−1      c_24   3.035900161106295 × 10^−1
c_10  −8.193390302418316 × 10^−1      c_25   9.404049154527467 × 10^−1
c_11  −1.318571680058333 × 10^−1      c_26  −2.182842624594848 × 10^−1
c_12   1.318536866523954 × 10^−1      c_27  −5.036471128390267 × 10^−1
c_13   1.718006767617093 × 10^−1      c_28  −4.650956099599815 × 10^−1
c_14   1.548174815648151 × 10^−1      c_29   5.154435371157740 × 10^−1
c_15   2.139947460365092 × 10^−1      c_30   1
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
