Article

Applying Computer Algebra Systems in Approximating the Trigonometric Functions

Le Phuong Quan 1 and Thái Anh Nhan 2,*
1 Department of Mathematics, College of Natural Sciences, Cantho University, 3/2 Street, Cantho City, Vietnam
2 Department of Mathematics and Science, Holy Names University, 3500 Mountain Blvd., Oakland, CA 94619, USA
* Author to whom correspondence should be addressed.
Math. Comput. Appl. 2018, 23(3), 37; https://doi.org/10.3390/mca23030037
Submission received: 8 June 2018 / Revised: 9 July 2018 / Accepted: 12 July 2018 / Published: 14 July 2018

Abstract

We propose numerical algorithms, easily implemented in modern computer algebra systems, to approximate the sine and cosine functions with arbitrary accuracy. Our approach is based on Taylor expansion about a point of the form $kp$, $k \in \mathbb{Z}$, $p = \pi/2$, chosen to be closest to the argument. A full error analysis of the proposed methods is provided; it takes advantage of the ability of current computer algebra systems to approximate $\pi$ with very high accuracy. A numerical integration application is performed to demonstrate the use of the algorithms. Numerical and graphical results are produced with MAPLE.

1. Introduction

The design of more reliable and accurate algorithms to compute approximate values of the sine and cosine functions is one of the most fascinating topics in the constructive approximation of functions. This is clearly motivated by the numerous applications of these trigonometric functions, ranging from mathematics and physics to engineering.
The use of polynomial approximations plays a key role in computing values of the sine and cosine functions, as well as of other elementary functions. The most commonly used approximations are Taylor and Chebyshev polynomials, together with minimax (Remez) approximations. Let us briefly mention an early historical fact related to approximating the trigonometric functions: the formula proposed by Bhaskara in the 7th century,
$$\sin(\theta) \approx \frac{4\theta(180-\theta)}{40500 - \theta(180-\theta)}, \qquad 0 \le \theta \le 180,$$
which can now be explained clearly and logically (see [1]); the explanation is based on interpolation by polynomials and rational functions.
Nowadays, with the rapid development of technology and computer science, great effort has gone into building hardware and software algorithms for evaluating the elementary functions, especially the trigonometric functions, to meet essential needs in scientific computation, signal processing, telecommunications and computer graphics [2]. Among these algorithms, the most important and fundamental one is the CORDIC algorithm. The name CORDIC is an acronym for COordinate Rotation DIgital Computer. It is a computing technique for evaluating trigonometric relations that arise from plane coordinate rotations and conversions between Cartesian and polar coordinates. Details on CORDIC and further developments of algorithms for evaluating elementary functions can be found in the excellent monograph by Muller [3] (Chapter 7), and also in [4,5,6,7,8].
It is worth mentioning that the desired accuracy of computed transcendental functions depends mainly on the demands of users and of practical scientific use. On the one hand, it usually suffices to solve real-world problems numerically with an accuracy at the level of machine precision; this is what the majority of present computing software, such as MATLAB and its Chebfun package developed by Trefethen and his team [9], aims to support. On the other hand, there is always a need for simpler and more efficient algorithms with higher accuracy, whether out of interest in improving scientific calculation methods themselves or from other practical demands. This is precisely our intention in the present article. The design and development of our algorithms stem from the fact that we can now easily work with polynomials of high degree (even degrees in the thousands or millions [10] (Chapter 1)), thanks to the increasing power of computer applications.
The goals of this paper are twofold. Firstly, we present an algorithm that gives pointwise approximate values of the sine and cosine functions at any rational angle with a desired precision. Secondly, our main novel contribution is a piecewise approximation procedure, with an arbitrarily small absolute error, for these trigonometric functions. To this end, we introduce a special partition that divides an interval $[a, b]$ into subintervals appropriate for the approximation process. Our construction relies only on lookup table-like storage of approximate values of the number $\pi$. In some computer algebra systems, this is available through built-in commands such as evalf[n](Pi) in MAPLE, vpa(pi,n) in MATLAB and N[Pi,n] in MATHEMATICA.
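For instance, a short MAPLE session retrieving $\pi$ to a prescribed number of significant digits (with the analogous MATLAB and MATHEMATICA calls noted in comments) is simply:
> p50 := evalf[50](Pi);   # Pi to 50 significant digits
> # MATLAB:      vpa(pi, 50)
> # MATHEMATICA: N[Pi, 50]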
In principle, approximating the value $f(x) = \sin x$ by the Taylor polynomial of order $n$ at a point $x_0$,
$$f(x) \approx P_n(x) = \sum_{m=0}^{n} \frac{f^{(m)}(x_0)}{m!}\,(x - x_0)^m,$$
is meaningful when the point $x_0$ satisfies $|x - x_0| < 1$ and all the coefficients $f^{(m)}(x_0)$ can be calculated explicitly. Such a point can be chosen as $kp$, where $k \in \mathbb{Z}$ and $p = \pi/2$; we will choose $k$ so as to obtain the even better inequality $|x - x_0| < 0.8$. Because we only have rational approximate values of $p$, several practical questions must be solved: how to evaluate the errors arising from replacing $p$ with an approximate value $p'$, and how to keep the coefficients $f^{(m)}(x_0)$ unchanged under this change of value? In addition, we need an efficient mechanism to access rational approximations of $p$ accurate enough to reach a desired precision. Fortunately, more and more precise values of $\pi$ have been produced by projects devoted to the computation of special numbers (see [11], for instance), and these values are now supplied conveniently and efficiently by present computer algebra systems. At the time of writing, the record for the number of computed decimal digits of $\pi$ is 13.3 trillion (announced on 8 October 2014, by a programmer known online as "houkouonchi"), and it will certainly be broken again.
The paper is organized as follows. In Section 2, we define the steps to obtain Taylor polynomials for approximating values of the sine function with an absolute error less than a given tolerance. Then, in Section 3, we provide appropriate regulations showing that it is possible to keep the coefficients of the approximate Taylor polynomials unchanged when $p = \pi/2$ is replaced with its approximate value $p'$. The error analysis is presented in the same section, where properties of the Gamma and modified Bessel functions are used. We arrive at our algorithms for pointwise approximation of values of the sine and cosine functions, and for piecewise approximation on any interval $[a, b]$, in Section 4 and Section 5. To demonstrate the performance of our methods, Section 6 is devoted to an application in numerical integration. Finally, some concluding remarks are given in the last section. For our purposes, we use MAPLE to implement the algorithms and to produce the numerical and graphical results.

2. Formulation

Recall that the function $f(x) = \sin x$ is periodic with period $2\pi$ and differentiable of any order, and that for each $k_0 \in \mathbb{Z}$ and $m = 0, 1, \dots$, we have
$$f^{(m)}(k_0\pi/2) = \begin{cases} 0, & m + k_0 = 2k, \\ (-1)^{k}, & m + k_0 = 2k + 1 \end{cases} \qquad (k \in \mathbb{Z}). \tag{1}$$
Given a number $x$, we can choose an integer $k$ such that $k\pi/2 \le x < (k+1)\pi/2$, namely $k = \lfloor x/p \rfloor$ (here we denote $\pi/2$ by $p$ for brevity, and $\lfloor z \rfloor$ is the integral part of a number $z$). Now, we determine $k_0 \in \mathbb{Z}$ by an if-then statement: if $x/p - \lfloor x/p \rfloor < 1/2$, then $k_0 = \lfloor x/p \rfloor$, else $k_0 = \lfloor x/p \rfloor + 1$. In both cases, we have $|x - x_0| \le \pi/4$ with $x_0 = k_0 p$. From this choice of $x_0$, if $P_n$ is the Taylor polynomial at this point, we always have
$$|f(x) - P_n(x)| \le \Big( \frac{\pi}{4} \Big)^{n+1} \frac{1}{(n+1)!} < (0.8)^{n+1} \frac{1}{(n+1)!}.$$
Therefore, for a given number $r \in \mathbb{N}$, to find the degree of $P_n$ such that
$$\delta = \sup\{\, |f(x) - P_n(x)| : x \in \mathbb{R} \,\} < \frac{1}{10^{r}},$$
we only need to determine $n \in \mathbb{N}$ satisfying
$$(0.8)^{n+1}\,10^{r} - (n+1)! \le 0. \tag{2}$$
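As a small illustration (the procedure name findDegree is ours, not part of the algorithms below), the least degree $n$ satisfying (2) can be found in MAPLE with a simple loop:
> findDegree := proc(r::posint)
>   local n;
>   n := 0;
>   # increase n until (0.8)^(n+1)*10^r - (n+1)! <= 0, which is condition (2)
>   while evalf((0.8)^(n+1)*10^r - (n+1)!) > 0 do
>     n := n + 1;
>   end do;
>   return n;
> end proc:
> findDegree(10);   # degree guaranteeing an absolute error below 1/10^10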
Note that the Taylor polynomial
$$P_n(y) = \sum_{m=0}^{n} \frac{f^{(m)}(x_0)}{m!}\,(y - x_0)^m, \qquad x_0 = k_0 p,$$
which depends on $x$, only contains the terms with indices $m$ such that $m = 2k + 1 - k_0$, due to (1). Since $0 \le m \le n$, the integer $k$ takes all values satisfying
$$\frac{k_0 - 1}{2} \le k \le \frac{n + k_0 - 1}{2}.$$
Then we derive $P_n(x)$ for the approximation $f(x) \approx P_n(x)$ from the following cases:
  • If $k_0$ is odd, we put $k_1 = (k_0 - 1)/2$, hence $k_1 \le k \le k_1 + \lfloor n/2 \rfloor$. Thus, we can write
$$P_n(x) = \sum_{k=k_1}^{k_1 + \lfloor n/2 \rfloor} \frac{(-1)^{k}}{[2(k - k_1)]!}\,(x - k_0 p)^{2(k-k_1)} = (-1)^{k_1} \sum_{m=0}^{\lfloor n/2 \rfloor} \frac{(-1)^{m}}{(2m)!}\,(x - k_0 p)^{2m}. \tag{4}$$
  • If $k_0$ is even, we put $k_1 = k_0/2$, hence $k_1 \le k \le k_1 + \lfloor (n-1)/2 \rfloor$, and we have
$$P_n(x) = (-1)^{k_1} \sum_{m=0}^{\lfloor (n-1)/2 \rfloor} \frac{(-1)^{m}}{(2m+1)!}\,(x - k_0 p)^{2m+1}. \tag{5}$$
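The two cases above translate directly into code. A minimal MAPLE sketch (the name taylorAtNode is ours; it assumes the integer $k_0$, the node $x_0 = k_0 p$ and the degree $n$ have already been determined) is:
> taylorAtNode := proc(k0::integer, x0, n::posint, x)
>   local m;
>   if type(k0, odd) then
>     # case (4): k0 odd, only even powers of (x - x0) appear
>     return (-1)^((k0-1)/2)*add((-1)^m/(2*m)!*(x - x0)^(2*m), m = 0 .. floor(n/2));
>   else
>     # case (5): k0 even, only odd powers of (x - x0) appear
>     return (-1)^(k0/2)*add((-1)^m/(2*m+1)!*(x - x0)^(2*m+1), m = 0 .. floor((n-1)/2));
>   end if;
> end proc:
Called with a symbolic fourth argument, taylorAtNode returns the polynomial itself; called with a number, it returns the corresponding approximate value.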

3. Regulation

In practice, we are only provided with approximate values of $p = \pi/2$. Assume that such a value is $p'$ and let $\sigma = |p - p'|$. If $|x| < 0.8$, we use formula (5) with $k_0 = k_1 = 0$ and obtain the approximation $\sin x \approx P_n(x)$ with accuracy $1/10^{r}$, where $n$ is determined from (2). Therefore, from this point on, we consider any rational number $x$ with $|x| \ge 0.8$, and we then have
$$0 < \frac{x}{p} - \Big\lfloor \frac{x}{p} \Big\rfloor < 1, \qquad \frac{x}{p} - \Big\lfloor \frac{x}{p} \Big\rfloor \ne \frac{1}{2}.$$
For an initial setting, we assume that $\sigma$ satisfies
$$\sigma < \frac{1}{|x|\,10^{r+2}} \tag{6}$$
for a given $r \in \mathbb{N}$.
Because formulas (4) and (5) depend on the distance between $x/p$ and $\lfloor x/p \rfloor$ through the choice of $k_0$, we need to choose $p'$ so that $k_0$ does not change its value. This is attained when $\lfloor x/p' \rfloor = \lfloor x/p \rfloor$, or equivalently
$$\Big\lfloor \frac{x}{p} \Big\rfloor < \frac{x}{p'} < \Big\lfloor \frac{x}{p} \Big\rfloor + 1. \tag{7}$$
Since
$$\Big| \frac{x}{p'} - \frac{x}{p} \Big| = \frac{\sigma\,|x|}{p\,p'},$$
(7) is satisfied if and only if
$$\Big\lfloor \frac{x}{p} \Big\rfloor - \frac{x}{p} < \pm\frac{\sigma\,|x|}{p\,p'} < \Big\lfloor \frac{x}{p} \Big\rfloor + 1 - \frac{x}{p}. \tag{8}$$
Since $p\,p' > 2.4$, we then have (8) if $\sigma$ simultaneously satisfies the following inequalities:
$$\sigma < \frac{2.4}{|x|}\Big( \frac{x}{p} - \Big\lfloor \frac{x}{p} \Big\rfloor \Big), \tag{9}$$
$$\sigma < \frac{2.4}{|x|}\Big( \Big\lfloor \frac{x}{p} \Big\rfloor + 1 - \frac{x}{p} \Big). \tag{10}$$
Before proceeding any further, we emphasize that we set $p'$ as the approximate value of $p$ with accuracy up to $m + 3$ significant digits whenever $\sigma < 1/10^{m}$ is required, to prevent inexactness coming from rounding-off rules. Such an approximate value of $p$ can be declared by, for instance, evalf[m + 3](Pi/2) in MAPLE. As a convention, once the output $m$ has been found so that $\sigma$ must satisfy $\sigma < 1/10^{m}$, we set $p'$ with evalf[m + 3](p).
Finally, we recall the cases of choosing $k_0$ and impose some supplementary requirements on $\sigma$ so that $k_0$ remains unchanged when $p$ is replaced with $p'$. If $x/p - \lfloor x/p \rfloor < 1/2$, then, due to
$$0 < \frac{x}{p'} - \Big\lfloor \frac{x}{p'} \Big\rfloor = \frac{x}{p} \pm \frac{\sigma\,|x|}{p\,p'} - \Big\lfloor \frac{x}{p} \Big\rfloor,$$
we also have $x/p' - \lfloor x/p' \rfloor < 1/2$ when
$$\sigma < \frac{2.4}{|x|}\Big( \frac{1}{2} + \Big\lfloor \frac{x}{p} \Big\rfloor - \frac{x}{p} \Big), \tag{11}$$
and we then choose $k_0 = \lfloor x/p \rfloor$ in this case. Similarly, if $x/p - \lfloor x/p \rfloor > 1/2$, then we also have $x/p' - \lfloor x/p' \rfloor > 1/2$ when
$$\sigma < \frac{2.4}{|x|}\Big( \frac{x}{p} - \Big\lfloor \frac{x}{p} \Big\rfloor - \frac{1}{2} \Big), \tag{12}$$
and we choose $k_0 = \lfloor x/p \rfloor + 1$. In particular, if $x/p' - \lfloor x/p' \rfloor = 1/2$, then we also choose $k_0 = \lfloor x/p \rfloor$, and note that (9) and (10) are obviously satisfied in this case.
Let us take important note that (11) implies (10), and (12) implies (9). Thus, to find an appropriate value of $p'$ and then choose $k_0$, we first require $\sigma < 1/10^{m}$ by determining a number $m \in \mathbb{N}$ such that
$$\frac{1}{10^{m}} \le \frac{1}{|x|\,10^{r+2}} < \frac{1}{10^{m-1}}. \tag{13}$$
Then, $p'$ can be chosen to satisfy (9) and (11) simultaneously, which holds if
$$\min\Big\{ 2.4 \times 10^{m}\Big( \frac{x}{p} - \Big\lfloor \frac{x}{p} \Big\rfloor \Big),\; 2.4 \times 10^{m}\Big( \frac{1}{2} + \Big\lfloor \frac{x}{p} \Big\rfloor - \frac{x}{p} \Big) \Big\} \ge |x|, \tag{14}$$
or (10) and (12) simultaneously, which holds if
$$\min\Big\{ 2.4 \times 10^{m}\Big( \Big\lfloor \frac{x}{p} \Big\rfloor + 1 - \frac{x}{p} \Big),\; 2.4 \times 10^{m}\Big( \frac{x}{p} - \Big\lfloor \frac{x}{p} \Big\rfloor - \frac{1}{2} \Big) \Big\} \ge |x|, \tag{15}$$
depending on whether $x/p - \lfloor x/p \rfloor < 1/2$ or $x/p - \lfloor x/p \rfloor > 1/2$, respectively. To prove the existence of such a $p'$, we need some results based on the notion of the limit of a sequence. This result is an important basis of our approximation algorithm.
Suppose we have found $m = m_0$ satisfying (13); then we take $p_0'$ = evalf[m0 + 3](p) and let $t_0 = x/p_0' - \lfloor x/p_0' \rfloor$. Let an index $i$ take the initial value $i := 0$. If $t_i = 1/2$, then we choose $k_0 = \lfloor x/p_i' \rfloor$; otherwise, we consider the following process. To choose $\sigma$ satisfying (6), (9), (11) simultaneously, or (6), (10), (12) simultaneously, in view of (14) or (15) we check whether
$$2.4 \times 10^{m_i} \min\{\, t_i,\; 0.5 - t_i \,\} \ge |x|, \tag{16}$$
or
$$2.4 \times 10^{m_i} \min\{\, 1 - t_i,\; t_i - 0.5 \,\} \ge |x|. \tag{17}$$
If neither (16) nor (17) holds, we go to the next step by setting $m_{i+1} = m_i + 1$, $p_{i+1}'$ = evalf[mi+1 + 3](p) and $t_{i+1} = x/p_{i+1}' - \lfloor x/p_{i+1}' \rfloor$. Then, we check (16) and (17) again, this time with $m_{i+1}$ and $t_{i+1}$. Continuing this process, we arrive at the following conclusion:
  • One of (16) and (17) first occurs at some step $k$; assume it is (16). Then $0 < t_k = x/p_k' - \lfloor x/p_k' \rfloor < 1/2$ (and also $x/p - \lfloor x/p \rfloor < 1/2$), and we choose $k_0 = \lfloor x/p_k' \rfloor\;(= \lfloor x/p \rfloor)$. Similarly, if (17) occurs first, we have $1/2 < t_k = x/p_k' - \lfloor x/p_k' \rfloor$ (and also $x/p - \lfloor x/p \rfloor > 1/2$), and we choose $k_0 = \lfloor x/p_k' \rfloor + 1 = \lfloor x/p \rfloor + 1$.
  • Neither (16) nor (17) occurs at any step. Then, we obtain the infinite sequences $m_i = m_0 + i$, $p_i'$ = evalf[m0 + 3 + i](p) and $t_i = x/p_i' - \lfloor x/p_i' \rfloor$, $i = 0, 1, \dots$, such that
$$2.4 \times 10^{m_i} \min\{\, t_i,\; 0.5 - t_i \,\} < |x| \quad \text{for all } i = 0, 1, \dots, \tag{18}$$
and
$$2.4 \times 10^{m_i} \min\{\, 1 - t_i,\; t_i - 0.5 \,\} < |x| \quad \text{for all } i = 0, 1, \dots. \tag{19}$$
Now, we show that the second item of the conclusion cannot hold. Because $p_i' \to p$, we have $x/p_i' \to x/p$ as $i \to \infty$. Since $\lfloor x/p \rfloor < x/p < \lfloor x/p \rfloor + 1$, there exists an integer $i_0$ such that $\lfloor x/p \rfloor < x/p_i' < \lfloor x/p \rfloor + 1$ for all $i \ge i_0$, hence $\lfloor x/p_i' \rfloor = \lfloor x/p \rfloor$ for all $i \ge i_0$. Therefore,
$$\lim_{i \to \infty} t_i = \lim_{i \to \infty} \Big( \frac{x}{p_i'} - \Big\lfloor \frac{x}{p} \Big\rfloor \Big) = \frac{x}{p} - \Big\lfloor \frac{x}{p} \Big\rfloor.$$
If $x/p - \lfloor x/p \rfloor < 0.5$, then
$$\min\{\, t_i,\; 0.5 - t_i \,\} \to \min\Big\{ \frac{x}{p} - \Big\lfloor \frac{x}{p} \Big\rfloor,\; 0.5 - \frac{x}{p} + \Big\lfloor \frac{x}{p} \Big\rfloor \Big\} > 0,$$
hence the left-hand side of inequality (18) tends to $\infty$ as $i \to \infty$, a contradiction. If $x/p - \lfloor x/p \rfloor > 0.5$, a similar argument yields a contradiction from (19).
One more question should be considered explicitly: what is the maximum value of $k$ that we may need in the calling sequence evalf[m0 + 3 + k](p)? We provide here an analysis of this question. On the intervals $(0, 0.5)$ and $(0.5, 1)$, we define the functions $f(t)$ and $g(t)$, respectively, by
$$f(t) = \min\{\, t,\; 0.5 - t \,\} = \begin{cases} t, & 0 < t \le 0.25, \\ 0.5 - t, & 0.25 < t < 0.5, \end{cases}$$
and
$$g(t) = \min\{\, 1 - t,\; t - 0.5 \,\} = \begin{cases} t - 0.5, & 0.5 < t \le 0.75, \\ 1 - t, & 0.75 < t < 1. \end{cases}$$
Then, we define
$$M(t) = \begin{cases} f(t), & 0 < t < 0.5, \\ g(t), & 0.5 < t < 1, \end{cases}$$
and give its graph in Figure 1.
Reaching (16) or (17) at step $k$ can now be expressed as
$$\frac{|x|}{2.4 \times 10^{m_0}} \le 10^{k} M(t_k). \tag{20}$$
Because $|x|/10^{m_0} \le 1/10^{r+2}$, we can attain this when
$$\frac{1}{2.4 \times 10^{r+2}} \le 10^{k} M(t_k).$$
Taking integral values $0, 1, \dots$, the variable $k$ makes the right-hand side of (20) increase very fast, whereas the positive values of $M(t_k)$ change only slightly as $k$ increases; indeed, we have the estimate $|M(t) - M(t')| \le |t - t'|$ for all $t, t' \in (0, 0.5) \cup (0.5, 1)$. Thus, the minimum value of $m_k = m_0 + k$ satisfying (16) or (17) is governed by $m_0$. In practice, since $|x|$ and $r$ are not too large, we may always determine such a value of $m_k$.
On the other hand, if we can find a sequence of rational numbers $\{q_n\}$ converging to $p = \pi/2$ and a strictly increasing sequence of natural numbers $\{m_n\}$ such that
$$|p - q_n| < \frac{1}{10^{m_n}} < \frac{1}{|x|\,10^{r+2}}, \qquad n = 0, 1, \dots,$$
then we can determine $k_0$ following the above steps without using the command evalf (or other equivalent commands).
According to the choice of $k_0$, we always have $|x - k_0 p| \le p/2 < 0.8$. Now, we check that
$$|x - k_0 p'| < 0.8 \tag{21}$$
for all choices of $k_0$. In the cases $x/p' - \lfloor x/p' \rfloor < 1/2$ or $x/p' - \lfloor x/p' \rfloor > 1/2$, we also have $x/p - \lfloor x/p \rfloor < 1/2$ or $x/p - \lfloor x/p \rfloor > 1/2$, respectively, and we easily check that $|x/p' - k_0| < 1/2$, or $|x - k_0 p'| < p'/2 < 0.8$. In the case $x/p' - \lfloor x/p' \rfloor = 1/2$, since (6), (9) and (10) are all satisfied, we have $\lfloor x/p' \rfloor = \lfloor x/p \rfloor$ and we need to check (21) directly. Indeed, we can write
$$0 < \frac{x}{p'} - k_0 = \frac{x}{p'} - \Big\lfloor \frac{x}{p'} \Big\rfloor = \frac{x}{p} \pm \frac{\sigma\,|x|}{p\,p'} - \Big\lfloor \frac{x}{p} \Big\rfloor = \frac{1}{2} \pm \frac{\sigma\,|x|}{p\,p'},$$
hence
$$|x - k_0 p'| < \frac{p'}{2} + \frac{\sigma\,|x|}{p} < \frac{p'}{2} + \frac{1}{p\,10^{r+2}} < \frac{3.1416}{4} + \frac{1}{1.5 \times 10^{3}} < 0.79 + 0.0067 < 0.8.$$
We formalize the above discussion by the following lemma.
Lemma 1.
For every rational number $x$, there exists an integer $k_0$ and a rational approximation $p'$ of $p = \pi/2$ such that $\lfloor x/p' \rfloor = \lfloor x/p \rfloor$, and $k_0 = \lfloor x/p' \rfloor$ or $k_0 = \lfloor x/p' \rfloor + 1$, depending on whether $x/p' - \lfloor x/p' \rfloor \le 1/2$ or $x/p' - \lfloor x/p' \rfloor > 1/2$, respectively. Moreover, the Taylor polynomial $P_n$ of the sine function at $x_0 = k_0 p$ gives the approximation $\sin x \approx P_n(x)$ with accuracy up to $1/10^{r}$, where $r$ is a given positive integer and the degree of $P_n$ is determined by (2).
In practice, since we cannot use $P_n$ from (4) or (5) to approximate the value of the sine function at $x$, we replace $P_n$ with $P_n^{c}$ instead. Such a changed polynomial is either
$$P_n^{c}(x) = (-1)^{(k_0-1)/2} \sum_{m=0}^{\lfloor n/2 \rfloor} \frac{(-1)^{m}}{(2m)!}\,(x - k_0 p')^{2m}, \tag{22}$$
or
$$P_n^{c}(x) = (-1)^{k_0/2} \sum_{m=0}^{\lfloor (n-1)/2 \rfloor} \frac{(-1)^{m}}{(2m+1)!}\,(x - k_0 p')^{2m+1}, \tag{23}$$
depending on whether $k_0$ is odd or even, respectively. Because $|x - k_0 p| < 0.8$, we have the estimate $|\sin x - P_n(x)| \le \delta$ from (2). Now, we need to estimate $\delta' = |\sin x - P_n^{c}(x)|$, which we shall call the practical error.
Firstly, we choose the polynomial (22) and estimate
$$\delta' = \Big| \sin x - (-1)^{k_1} \sum_{m=0}^{\lfloor n/2 \rfloor} \frac{(-1)^{m}}{(2m)!}\,(x - k_0 p')^{2m} \Big| = \Big| \sin x - (-1)^{k_1} \sum_{m=0}^{\lfloor n/2 \rfloor} \frac{(-1)^{m}}{(2m)!}\,(x - k_0 p \pm k_0\sigma)^{2m} \Big|,$$
or
$$\delta' = \Big| \sin x - (-1)^{k_1} \sum_{m=0}^{\lfloor n/2 \rfloor} \frac{(-1)^{m}}{(2m)!} \sum_{k=0}^{2m} \binom{2m}{k} (x - k_0 p)^{2m-k} (\pm k_0\sigma)^{k} \Big|. \tag{24}$$
Since
$$\sum_{k=0}^{2m} \binom{2m}{k} (x - k_0 p)^{2m-k} (\pm k_0\sigma)^{k} = (x - k_0 p)^{2m} + \sum_{k=1}^{2m} \frac{(2m)!}{k!\,(2m-k)!} (x - k_0 p)^{2m-k} (\pm k_0\sigma)^{k},$$
(24) can be written as
$$\delta' = \Big| \sin x - (-1)^{k_1} \sum_{m=0}^{\lfloor n/2 \rfloor} \frac{(-1)^{m}}{(2m)!}\,(x - k_0 p)^{2m} - (-1)^{k_1} \sum_{m=1}^{\lfloor n/2 \rfloor} (-1)^{m} \sum_{k=1}^{2m} \frac{(x - k_0 p)^{2m-k} (\pm k_0\sigma)^{k}}{k!\,(2m-k)!} \Big|,$$
hence we obtain the estimate
$$\delta' \le \delta + \Big| \sum_{m=1}^{\lfloor n/2 \rfloor} (-1)^{m} \sum_{k=1}^{2m} \frac{(x - k_0 p)^{2m-k} (\pm k_0\sigma)^{k}}{k!\,(2m-k)!} \Big|. \tag{25}$$
Letting $|k_0|\,\sigma = \beta$, from (21) and (25) we derive
$$\delta' \le \delta + \sum_{m=1}^{\lfloor n/2 \rfloor} (0.8)^{2m} \sum_{k=1}^{2m} \Big( \frac{\beta}{0.8} \Big)^{k} \frac{1}{k!\,(2m-k)!}. \tag{26}$$
To estimate the sum in (26), we will use inequalities related to factorials, so we need some properties of the generalized factorial function, the Gamma function:
$$\Gamma(z) = \int_{0}^{\infty} e^{-t} t^{z-1}\, dt.$$
$\Gamma(z)$ is known to be a log-convex function on the interval $(0, \infty)$; thus, for $y, z \in (0, \infty)$ we have
$$\Gamma\Big( \frac{y + z}{2} \Big) \le \sqrt{\Gamma(y)\,\Gamma(z)} \tag{27}$$
(see [12] Section 2). In addition, for a non-negative integer $n$ we have $\Gamma(n+1) = n!$, and for $y = k + 1$, $z = 2m - k + 1$ with integers $m, k$ satisfying $1 \le k \le 2m$, we derive from (27) the inequality
$$m! \le \sqrt{k!\,(2m-k)!}, \quad \text{or} \quad \frac{1}{k!\,(2m-k)!} \le \frac{1}{(m!)^{2}}. \tag{28}$$
From (26) and (28), we can write
$$\delta' \le \delta + \sum_{m=1}^{\lfloor n/2 \rfloor} \frac{(0.8)^{2m}}{(m!)^{2}} \sum_{k=1}^{2m} \Big( \frac{\beta}{0.8} \Big)^{k} < \delta + \frac{\beta}{0.8 - \beta} \sum_{m=1}^{\lfloor n/2 \rfloor} \frac{(0.8)^{2m}}{(m!)^{2}},$$
because $1 - (\beta/0.8)^{2m} < 1$.
To obtain the desired estimate of $\delta'$, we use an approximate value of the modified Bessel function of the first kind,
$$I_{\nu}(x) = \sum_{m=0}^{\infty} \frac{(x/2)^{2m+\nu}}{m!\,\Gamma(m + 1 + \nu)},$$
namely
$$I_{0}(1.6) = \sum_{m=0}^{\infty} \frac{(0.8)^{2m}}{(m!)^{2}} < 1.75,$$
according to the approximation formula in ([13] Section 3). Moreover, we have the estimate
$$\beta < \frac{2}{10^{r+2}},$$
which we now check only for $x > 0.8$. Indeed, if $k_0 = \lfloor x/p' \rfloor$, then $k_0 \le x/p'$, hence $|k_0/x| \le 1/p'$; if $k_0 = \lfloor x/p' \rfloor + 1$, we have $k_0 - 1 \le x/p'$, or $|k_0/x| = k_0/x \le 1/p' + 1/x < 1/p' + 1/0.8 < 2$. These estimates for $|k_0/x|$ lead to
$$\beta = |k_0|\,\sigma \le \frac{|k_0|}{|x|\,10^{r+2}} < \frac{2}{10^{r+2}}.$$
Finally, from (28), we derive
$$\delta' \le \delta + \frac{\beta}{0.8 - \beta}\,(1.75 - 1) < \frac{40}{40 \times 10^{r}} + \frac{0.75}{40 \times 10^{r}} = \frac{40.75}{40 \times 10^{r}} < \frac{1.1}{10^{r}}.$$
Similarly, if the polynomial (23) has been chosen, we are led to the estimate
$$\delta' \le \delta + \sum_{m=1}^{\lfloor (n-1)/2 \rfloor} (0.8)^{2m+1} \sum_{k=1}^{2m+1} \Big( \frac{\beta}{0.8} \Big)^{k} \frac{1}{k!\,(2m+1-k)!}. \tag{30}$$
In this case, applying (27), we obtain the inequality
$$\frac{1}{k!\,(2m+1-k)!} \le \frac{1}{\Gamma(m + 1 + 1/2)^{2}}. \tag{31}$$
To estimate the right-hand side of (31), we use an appropriate inequality of Lazarević and Lupaş ([12] pp. 95–96), stated in the form
$$\Big( z - \frac{1 - \mu}{2} \Big)^{\mu} \le \frac{\Gamma(z + \mu)}{\Gamma(z)}, \tag{32}$$
where $z > (1 - \mu)/2$ and $\mu \in [0, 1]$. Applying (32) with $z = m + 1$ ($m = 0, 1, \dots$) and $\mu = 1/2$, we easily obtain the inequality
$$\frac{1}{\Gamma(m + 1 + 1/2)^{2}} \le \frac{1}{(m!)^{2}\,(m + 3/4)} \le \frac{4}{7\,(m!)^{2}}, \qquad m = 1, 2, \dots. \tag{33}$$
Combining (30), (31) and (33), we finally get the desired estimate
$$\delta' \le \delta + \frac{\beta}{0.8 - \beta} \times \frac{4 \times 0.8 \times (1.75 - 1)}{7} < \frac{40}{40 \times 10^{r}} + \frac{0.35}{40 \times 10^{r}} < \frac{1.1}{10^{r}}.$$
Thus, both approximate polynomials (22) and (23) satisfy the same inequality for the practical error; that is,
$$|\sin x - P_n^{c}(x)| < \frac{1.1}{10^{r}}.$$
Finally, for a real number $x$ and a rational approximation $x'$ of it, we have the estimate
$$|\sin x - P_n^{c}(x')| \le |\sin x - \sin x'| + |\sin x' - P_n^{c}(x')| < |x - x'| + \frac{1.1}{10^{r}},$$
where $P_n^{c}$ is obtained from $P_n$ as indicated in Lemma 1, but for $x'$ instead. If $r$ is replaced with $r + 1$ in (2) and (6), and $|x - x'| < 1/10^{r+1}$, then $|\sin x - P_n^{c}(x')| < 1/10^{r}$.
To sum up, we state our main result in this section by the following theorem.
Theorem 1.
Let $x$ be a real number and $x'$ a rational approximation of $x$ such that $|x - x'| < 1/10^{r+1}$, where $r$ is a given positive integer. Then, from Lemma 1 applied to $x'$, there exists a polynomial $P_n^{c}$ of the form (22) or (23) that gives the approximation $|\sin x - P_n^{c}(x')| < 1/10^{r}$.

4. Algorithm for Pointwise Polynomial Approximation

Given a rational number $x$ and an integer $h \ge 2$, we will construct an algorithm to evaluate an approximate value $a$ of $\sin x$ such that $\delta = |\sin x - a|$ is less than an arbitrarily small tolerance, and to display the result with the correct number $h$ of significant digits. The first task is accomplished by the steps of Algorithm 1, which follows the steps indicated in Section 3, above Lemma 1. This algorithm can be converted into a MAPLE procedure called ApproxSine that takes the two arguments x and r. The output of ApproxSine(x,r) is $a$ with $\delta < 1/10^{r}$, $r \ge 1$; this estimate for $\delta$ is obtained when $r$ is replaced with $r + 1$ in (2) and (6). Before completing the second task, note that accuracy up to the $r$-th digit after the decimal point is, in general, quite different from accuracy to $r$ significant digits. For instance, if $a = 0.123437$ and $b = 0.123415$, then $|a - b| < 1/10^{4}$ and evalf[4](a) = evalf[4](b); but if $a = 0.0024217$ and $b = 0.002421$, then evalf[6](a) ≠ evalf[6](b), whereas $|a - b| < 1/10^{6}$. Therefore, we should first count how many consecutive zero digits appear right after the decimal point of the number $a$ = ApproxSine(x,r); assume that the number of such digits is $k$, so that $a = 0.\underbrace{00\cdots 0}_{k\ \text{digits}}xyz\ldots$ ($x \ne 0$). In fact, the number $k$ can be determined easily, for instance, by a simple while-do loop in MAPLE as follows:
> a:=ApproxSine(x,h):
> k:=0:
> while (abs(a)*10^k-1<0) do
> k:=k+1:
> end do:
Then, the second task can be completed by displaying the output of ApproxSine(x,k + h + 1) with h significant digits, i.e., by the calling sequence evalf[h](ApproxSine(x,k + h + 1)). We can then build another MAPLE procedure named Sine that performs the second task, using ApproxSine as a local procedure. The calling sequence Sine(x,h) returns an approximate value $a$ of $\sin x$ whose number of correct significant digits is $h$. In the following, we present Algorithm 1 in pseudo-code form.
Algorithm 1
Input: x (rational number), r (positive integer)
Output: $a \approx \sin x$ with $|\sin x - a| < 1/10^{r}$
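Since the steps of Algorithm 1 follow Section 3 directly, a condensed MAPLE sketch along these lines may be helpful; it is only a compact reading of the steps above (working precision is handled crudely via Digits), not the full ApproxSine procedure:
> ApproxSineSketch := proc(x::rational, r::posint)
>   local n, m, j, pa, t, k0, x0;
>   # degree n from (2), with r replaced by r + 1
>   n := 0;
>   while evalf((0.8)^(n+1)*10^(r+1) - (n+1)!) > 0 do n := n + 1; end do;
>   Digits := r + 15;   # environment variable, restored on exit
>   if abs(x) < 0.8 then   # formula (5) with k0 = 0
>     return evalf(add((-1)^j/(2*j+1)!*x^(2*j+1), j = 0 .. floor((n-1)/2)));
>   end if;
>   # find the approximation pa of p = Pi/2 and the integer k0, as in Section 3
>   m := r + 3 + max(0, ceil(evalf(log10(abs(x)))));   # rough value satisfying (13)
>   do
>     pa := evalf[m + 3](Pi/2);
>     t := x/pa - floor(x/pa);
>     if t = 0.5 then k0 := floor(x/pa); break;
>     elif 2.4*10^m*min(t, 0.5 - t) >= abs(x) then k0 := floor(x/pa); break;
>     elif 2.4*10^m*min(1 - t, t - 0.5) >= abs(x) then k0 := floor(x/pa) + 1; break;
>     end if;
>     m := m + 1;
>   end do;
>   x0 := k0*pa;
>   if type(k0, odd) then   # changed polynomial (22)
>     return evalf((-1)^((k0-1)/2)*add((-1)^j/(2*j)!*(x - x0)^(2*j), j = 0 .. floor(n/2)));
>   else                    # changed polynomial (23)
>     return evalf((-1)^(k0/2)*add((-1)^j/(2*j+1)!*(x - x0)^(2*j+1), j = 0 .. floor((n-1)/2)));
>   end if;
> end proc: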
Similarly, for approximating values of the cosine function, we use the Taylor polynomial of $g(y) = \cos y$,
$$Q_n(y) = \sum_{m=0}^{n} \frac{g^{(m)}(x_0)}{m!}\,(y - x_0)^m,$$
where $x_0 = k_0 p$ and
$$g^{(m)}(k_0 p) = \begin{cases} 0, & m + k_0 = 2k + 1, \\ (-1)^{k}, & m + k_0 = 2k \end{cases} \qquad (k \in \mathbb{Z}).$$
The integer $k_0$ is determined in the same way as in the previous sections, and the index $k$ satisfies
$$\frac{k_0}{2} \le k \le \frac{k_0 + n}{2},$$
due to $0 \le m \le n$. Hence, depending on whether $k_0$ is even or odd, $g(x) = \cos x$ is approximated by
$$Q_n(x) = (-1)^{k_0/2} \sum_{m=0}^{\lfloor n/2 \rfloor} \frac{(-1)^{m}}{(2m)!}\,(x - k_0 p)^{2m}$$
or
$$Q_n(x) = (-1)^{(k_0+1)/2} \sum_{m=0}^{\lfloor (n-1)/2 \rfloor} \frac{(-1)^{m}}{(2m+1)!}\,(x - k_0 p)^{2m+1},$$
respectively. Then, we have a similar algorithm to evaluate approximate values of $\cos x$, and a corresponding MAPLE procedure Cosine taking the same arguments x and h. In the body of Cosine, the local procedure ApproxSine becomes ApproxCosine, in which the variable a of Algorithm 1 is assigned the corresponding values of $Q_n$. Cosine also returns approximate values of the cosine function with a given correct number of significant digits. In practice, $Q_n(x)$ becomes $Q_n^{c}(x)$ when $p'$ takes the place of $p$, and again one can check the estimate $|\cos x - Q_n^{c}(x)| < 1.1/10^{r}$. As a convention, we will present all algorithms from now on in pseudo-code form and use the corresponding named MAPLE procedures for calculations. We now run our algorithms with some values of x and h; the results of the calling sequences Sine(x,h) and Cosine(x,h) are given in Table 1.

5. Algorithm for Piecewise Polynomial Approximation

We recall here what is needed as the initial setting for the piecewise approximation process. For a given number $\alpha$, we find an integer $k_0$ such that the Taylor polynomial $P_n$ at $k_0 p$, $p = \pi/2$, can be used to approximate $\sin\alpha$ with accuracy $1/10^{r}$, where $r$ is a given positive integer. Because we only have rational approximate values of $p$, we have found a way (presented in the previous sections) to determine such a value $p'$ while retaining the approximation $\sin\alpha \approx P_n^{c}(\alpha)$ with accuracy $1/10^{r}$, where $P_n^{c}$ is obtained from $P_n$ by replacing $p$ with $p'$. In addition, we have proved that $|\alpha - k_0 p| < 0.8$, and likewise $|\alpha - k_0 p'| < 0.8$, with
$$\sigma = |p - p'| < \frac{1}{\gamma\,10^{r+3}},$$
where $|k_0| \le \gamma$. In fact, more precisely, we have $|\alpha - k_0 p'| < 0.788$ and $0.8 - p'/2 > 0.012$ for such a chosen $p'$. Moreover, for every $x \in [k_0 p' - p'/2,\, k_0 p' + p'/2]$, we can use $P_n^{c}$ to approximate $\sin x$, because $|x - k_0 p| < 0.8$ (note that $|x - k_0 p'| \le p'/2 < 0.8$). Indeed, we have the estimate
$$|x - k_0 p| = |x - k_0 p' + k_0 p' - k_0 p| \le \frac{p'}{2} + |k_0|\,|p' - p| = \frac{p'}{2} + |k_0|\,\sigma,$$
and get the desired inequality when $|k_0|\,\sigma < 0.8 - p'/2$ or, more strictly, $|k_0|\,\sigma < 0.012$. This is obviously satisfied because
$$|k_0|\,\sigma < \frac{\gamma}{\gamma\,10^{r+3}} \le \frac{1}{10^{r+3}} < 0.012.$$
Therefore, we have the approximation
$$\sin \approx P_n^{c} \quad \text{on } [k_0 p' - p'/2,\; k_0 p' + p'/2],$$
with accuracy $1/10^{r}$ if we replace $r$ with $r + 1$. This extension of our pointwise approximation result can serve as a starting point for constructing a piecewise function $F$ that approximates the sine function on an interval $[a, b]$.
From most of Algorithm 1 (only omitting Step 2), we can derive a simpler algorithm that returns three objects $k$, $p'$ and $P_n^{c}$ (respectively, $k_0$, $p'$ and $a$ in Algorithm 1) from an input number $\alpha$. For convenience, we will use the name FindPoint for a MAPLE procedure performing this algorithm with one real argument, so its usage can be given in one of the forms FindPoint(α) = [$k$, $p'$, $P_n^{c}$], FindPoint(α)[1] = $k$, FindPoint(α)[2] = $p'$ and FindPoint(α)[3] = $P_n^{c}$.
Because the sine function is odd, we only need to find the function $F$ on $[a, b]$ with $a \ge 0$. In the cases $0 \le a < b \le 0.8$ and $0 \le a < 0.8 < b$, we may set $F = F_0$ on $[a, b]$ and $F = F_0$ on $[a, 0.8]$, respectively, where
$$F_0(x) = \sum_{m=0}^{\lfloor (n-1)/2 \rfloor} \frac{(-1)^{m}}{(2m+1)!}\,x^{2m+1}$$
and $n$ is determined by Algorithm 1. Therefore, it is sufficient to construct a piecewise approximate function $F$ on $[a, b]$ when $0.8 \le a < b$.
On the other hand, from the choice of $\sigma = |p - p'| < 1/(|x|\,10^{r+3})$ in Algorithm 1 to approximate $\sin x$, we now change it to
$$\sigma = |p - p'| < \frac{1}{(b + 3.2)\,10^{r+3}} \quad \text{for all } x \in [a, b].$$
This choice guarantees the precision of our later construction of $F$. Moreover, it is easy to check that if $\alpha, \beta \in [a, b]$ are numbers such that
$$\text{FindPoint}(\alpha)[1] = \text{FindPoint}(\beta)[1] = k,$$
with $p_{\alpha}'$ = FindPoint($\alpha$)[2] and $p_{\beta}'$ = FindPoint($\beta$)[2], then $|\alpha - k\,p_{\beta}'| < 0.8$ and $|\beta - k\,p_{\alpha}'| < 0.8$.
Firstly, we put
$$n_0 = \text{FindPoint}(b)[1], \quad q_0 = \text{FindPoint}(b)[2] \quad \text{and} \quad B_0 = \text{FindPoint}(b)[3],$$
and similarly,
$$k_0 = \text{FindPoint}(a)[1], \quad p_0 = \text{FindPoint}(a)[2] \quad \text{and} \quad A_0 = \text{FindPoint}(a)[3].$$
The cases $k_0 = \lfloor a/p_0 \rfloor$ and $k_0 = \lfloor a/p_0 \rfloor + 1$ are illustrated in Figure 2.
Now, we consider the case $k_0 = n_0$ (or even $k_0 = n_0 + 1$); then, we choose $F = A_0$ on $[a, b]$, because $|b - k_0 p_0| < 0.8$ as shown above for $\alpha, \beta$. Next, we consider the case $k_0 < n_0$. We extend the approximation process to determine the next integer by setting $k_1 = \text{FindPoint}((k_0+1)p_0)[1]$, and also $p_1 = \text{FindPoint}((k_0+1)p_0)[2]$, $A_1 = \text{FindPoint}((k_0+1)p_0)[3]$. Because $|(k_0+1)p_0 - k_1 p_1| < 0.8$ by the choice of $k_1$, we must have $k_1 = k_0 + 1$. Note that, from (36), we also have $|(k_0+\tfrac{1}{2})p_0 - k_1 p_1| < 0.8$. If $k_1 = n_0$, the possible cases are illustrated in Figure 3. It is obvious that we can set $F = A_0$ on $[a, (k_0+\tfrac{1}{2})p_0]$, regardless of whether $k_0 = \lfloor a/p_0 \rfloor$ or $k_0 = \lfloor a/p_0 \rfloor + 1$. However, we take different settings on the other intervals, depending on the position of $b$ relative to $(k_0+1)p_0$. These settings are given in the following:
  If $b \le (k_0+\tfrac{1}{2})p_0$: $F = A_0$ on $[a, b]$.
  If $(k_0+\tfrac{1}{2})p_0 < b \le (k_0+1)p_0$: $F = A_0$ on $[a, (k_0+\tfrac{1}{2})p_0)$, $A_1$ on $[(k_0+\tfrac{1}{2})p_0, b]$.
  If $(k_0+1)p_0 < b$: $F = A_0$ on $[a, (k_0+\tfrac{1}{2})p_0)$, $A_1$ on $[(k_0+\tfrac{1}{2})p_0, (k_0+1)p_0)$, $B_0$ on $[(k_0+1)p_0, b]$.
If $k_1 < n_0$, we continue by putting
$$k_2 = \text{FindPoint}((k_1+1)p_1)[1], \quad p_2 = \text{FindPoint}((k_1+1)p_1)[2], \quad A_2 = \text{FindPoint}((k_1+1)p_1)[3].$$
In the case $k_2 = n_0$, we have the following choice of $F$, according to the position of $b$ relative to $(k_1+1)p_1$:
  If $b \le (k_1+\tfrac{1}{2})p_1$: $F = A_0$ on $[a, (k_0+\tfrac{1}{2})p_0)$, $A_1$ on $[(k_0+\tfrac{1}{2})p_0, b]$.
  If $(k_1+\tfrac{1}{2})p_1 < b \le (k_1+1)p_1$: $F = A_0$ on $[a, (k_0+\tfrac{1}{2})p_0)$, $A_1$ on $[(k_0+\tfrac{1}{2})p_0, (k_1+\tfrac{1}{2})p_1)$, $A_2$ on $[(k_1+\tfrac{1}{2})p_1, b]$.
  If $(k_1+1)p_1 < b$: $F = A_0$ on $[a, (k_0+\tfrac{1}{2})p_0)$, $A_1$ on $[(k_0+\tfrac{1}{2})p_0, (k_1+\tfrac{1}{2})p_1)$, $A_2$ on $[(k_1+\tfrac{1}{2})p_1, (k_1+1)p_1)$, $B_0$ on $[(k_1+1)p_1, b]$.
Finally, continuing the above process, we obtain a finite sequence $k_0, k_1, \dots, k_n$ such that $k_i = k_{i-1} + 1 = \text{FindPoint}((k_{i-1}+1)p_{i-1})[1]$, $i = 1, \dots, n-1$, and $k_n = n_0$. Then, we also obtain two sequences $p_i = \text{FindPoint}((k_{i-1}+1)p_{i-1})[2]$ and $A_i = \text{FindPoint}((k_{i-1}+1)p_{i-1})[3]$, $i = 1, \dots, n$. Before giving our settings for this general case, let us consider the endpoints of the approximation intervals by taking two consecutive nodes $k_i p_i$, $k_{i+1} p_{i+1}$. If we let $x_i = k_i p_i + p_i/2$, then we have $|x_i - k_{i+1} p_{i+1}| = k_i(p_{i+1} - p_i) + p_{i+1} - p_i/2$, so that
$$|x_i - k_{i+1} p_{i+1}| < 0.8 \quad \text{if and only if} \quad (k_i + 1)(p_{i+1} - p_i) < 0.8 - p_i/2. \tag{39}$$
To show the assertion of (39), we refer to (36) again and use the same arguments as above. The inequality (39) explains why we can choose $F$ on $[a, b]$ so as to guarantee the accuracy of $1/10^{r}$.
In short, by the setting (36), when $k_n = n_0$ with $n \ge 3$ (see Figure 4), we have constructed the piecewise function $F$ that approximates the sine function on $[a, b]$ ($0.8 \le a < b$) with accuracy $1/10^{r}$. After obtaining the finite sequences $\{k_i\}_{i=0}^{n}$, $\{p_i\}_{i=0}^{n}$, $\{A_i\}_{i=0}^{n}$ and the function $B_0$ defined as above, we choose $F$ as follows:
  • If $b \le (k_{n-1}+\tfrac{1}{2})p_{n-1}$:
    $F = A_0$ on $[a, (k_0+\tfrac{1}{2})p_0)$; $A_i$ on $[(k_{i-1}+\tfrac{1}{2})p_{i-1}, (k_i+\tfrac{1}{2})p_i)$, $i = 1, \dots, n-2$; $A_{n-1}$ on $[(k_{n-2}+\tfrac{1}{2})p_{n-2}, b]$.
  • If $(k_{n-1}+\tfrac{1}{2})p_{n-1} < b \le k_n p_{n-1}$:
    $F = A_0$ on $[a, (k_0+\tfrac{1}{2})p_0)$; $A_i$ on $[(k_{i-1}+\tfrac{1}{2})p_{i-1}, (k_i+\tfrac{1}{2})p_i)$, $i = 1, \dots, n-1$; $A_n$ on $[(k_{n-1}+\tfrac{1}{2})p_{n-1}, b]$.
  • If $k_n p_{n-1} < b$:
    $F = A_0$ on $[a, (k_0+\tfrac{1}{2})p_0)$; $A_i$ on $[(k_{i-1}+\tfrac{1}{2})p_{i-1}, (k_i+\tfrac{1}{2})p_i)$, $i = 1, \dots, n-1$; $A_n$ on $[(k_{n-1}+\tfrac{1}{2})p_{n-1}, k_n p_{n-1})$; $B_0$ on $[k_n p_{n-1}, b]$.
Although steps for construction of F have been presented, we give here an abbreviated algorithm, Algorithm 2, to specifically determine this function.
In Algorithm 2, the approximate Taylor polynomials $A_i$ are derived from Algorithm 1 and given here as
$$A_i(x) = (-1)^{(k_i-1)/2} \sum_{m=0}^{\lfloor n/2 \rfloor} \frac{(-1)^{m}}{(2m)!}\,(x - k_i p_i)^{2m},$$
or
$$A_i(x) = (-1)^{k_i/2} \sum_{m=0}^{\lfloor (n-1)/2 \rfloor} \frac{(-1)^{m}}{(2m+1)!}\,(x - k_i p_i)^{2m+1}, \qquad i = 0, \dots, n,$$
depending on whether $k_i$ is odd or even. The polynomial $B_0$ takes a similar expression, with $k_i p_i$ replaced by $n_0 q_0$.
In Figure 5, the graphs of F on intervals [ a , b ] with different values of r are depicted by a MAPLE procedure corresponding to Algorithm 2.
Algorithm 2
Input: FindPoint (the MAPLE procedure mentioned above); a, b (real numbers); r (positive integer)
Output: $F(x)$ on $[a, b]$ with $\|\sin - F\|_{\infty} = \sup\{\, |\sin x - F(x)| : x \in [a, b] \,\} < 1/10^{r}$
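Once the breakpoints $(k_i+\tfrac{1}{2})p_i$ and the polynomials $A_i$ (and, when needed, $B_0$) have been produced, the resulting function $F$ can be represented directly with MAPLE's piecewise construct. The following small sketch of this last assembly step is only illustrative (the name assembleF is ours):
> # c = [c1 < c2 < ... < cn] are the breakpoints; A = [A1, ..., A(n+1)] are operators,
> # with A[i] used on [c[i-1], c[i]) and A[n+1] used to the right of c[n]
> assembleF := proc(c::list, A::list, x)
>   local i, pw;
>   pw := [];
>   for i to nops(c) do
>     pw := [op(pw), x < c[i], A[i](x)];
>   end do;
>   pw := [op(pw), A[nops(A)](x)];
>   return piecewise(op(pw));
> end proc:
For the construction above, one would pass the breakpoints $(k_0+\tfrac{1}{2})p_0, \dots, (k_{n-1}+\tfrac{1}{2})p_{n-1}$ (and, in the third case, also $k_n p_{n-1}$) together with the corresponding list of polynomials.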
To obtain a similar algorithm for approximating the cosine function, we just replace the functions $A_i$ with
$$Q_i(x) = (-1)^{k_i/2} \sum_{m=0}^{\lfloor n/2 \rfloor} \frac{(-1)^{m}}{(2m)!}\,(x - k_i p_i)^{2m}$$
or
$$Q_i(x) = (-1)^{(k_i+1)/2} \sum_{m=0}^{\lfloor (n-1)/2 \rfloor} \frac{(-1)^{m}}{(2m+1)!}\,(x - k_i p_i)^{2m+1}, \qquad i = 0, \dots, n,$$
depending on whether $k_i$ is even or odd, and $B_0$ with
$$P_0(x) = \sum_{m=0}^{\lfloor n/2 \rfloor} \frac{(-1)^{m}}{(2m)!}\,x^{2m}.$$
Note that the cosine function is even, so we should regulate the function $F$ in Algorithm 2 appropriately.

6. An Application of Numerical Integration

Suppose $\sin x$ is approximated by a polynomial $P(x)$ with accuracy up to $1/10^{r}$ on an interval $[\alpha, \beta]$. We will then estimate the absolute error between $\sin^{s} x$ and $[P(x)]^{s}$ for a natural number $s$. One possibility for computing $\sin^{s} x$ via $\sin(n-k)x$ is given in ([14] Lemma 1.3), but here we use our previously established results, as follows. Firstly, we can write
$$\sin x = P(x) + \varepsilon,$$
where $|\sin x - P(x)| = |\varepsilon| = \delta < 1/10^{r}$, and we also have $|P(x)| < 1 + (1/10^{r}) =: d$ for all $x \in [\alpha, \beta]$. Hence, we have
$$\sin^{s} x = \big( P(x) + \varepsilon \big)^{s} = \sum_{k=0}^{s} \binom{s}{k} [P(x)]^{s-k} \varepsilon^{k} = [P(x)]^{s} + \sum_{k=1}^{s} \binom{s}{k} [P(x)]^{s-k} \varepsilon^{k}.$$
From this relation, we easily derive the estimate
$$|\sin^{s} x - [P(x)]^{s}| \le d^{s} \sum_{k=1}^{s} \binom{s}{k} (\delta/d)^{k}. \tag{43}$$
Now, applying (28), we obtain
$$\binom{s}{k} = \frac{s!}{k!\,(s-k)!} \le \frac{\Gamma(s+1)}{\Gamma(1 + s/2)^{2}}, \tag{44}$$
and, relying on the inequality in ([15] p. 93), we have
$$\Gamma\Big( \frac{s}{2} + \frac{1}{2} \Big) \le \sqrt{\frac{s}{2}}\;\Gamma\Big( \frac{s}{2} \Big). \tag{45}$$
Using (44), (45) and the formulas in [15], such as
$$\Gamma(z+1) = z\,\Gamma(z), \qquad 2^{z-1}\,\Gamma\Big( \frac{z}{2} \Big)\Gamma\Big( \frac{z}{2} + \frac{1}{2} \Big) = \sqrt{\pi}\,\Gamma(z),$$
we attain the estimate
$$\frac{\Gamma(s+1)}{\Gamma(1 + s/2)^{2}} = \frac{s\,\Gamma(s)}{\big( \tfrac{s}{2} \big)^{2}\,\Gamma\big( \tfrac{s}{2} \big)^{2}} = \frac{4\,\Gamma(s)}{s\,\Gamma\big( \tfrac{s}{2} \big)^{2}} \le \sqrt{\frac{2}{\pi}}\; \frac{2^{s}}{\sqrt{s}}. \tag{46}$$
Finally, from (43), (46) and the formula for the sum of a geometric progression, we derive
$$|\sin^{s} x - [P(x)]^{s}| \le d^{s}\,\sqrt{\frac{2}{\pi}}\; \frac{2^{s}}{\sqrt{s}}\; \frac{\delta}{d}\; \frac{1 - (\delta/d)^{s}}{1 - \delta/d} < \sqrt{\frac{2}{\pi}}\; \frac{(2d)^{s}}{\sqrt{s}}\; \delta,$$
also because $d - \delta > d - (1/10^{r}) = 1$.
Thus, we have proven the result stated in the following lemma.
Lemma 2.
Assume that $f(x) = \sin x$ is approximated by the polynomial $P_n^{c}(x)$ on an interval $[\alpha, \beta]$ with accuracy up to $1/10^{r}$, where $P_n^{c}$ is derived from the Taylor polynomial $P_n(x)$ about $kp$, $k \in \mathbb{Z}$, $p = \pi/2$, by replacing $p$ with its rational approximation $p'$, as constructed in the previous sections. Then, if
$$I = \int_{\alpha}^{\beta} x^{t} \sin^{s} x\, dx \qquad (t, s \in \mathbb{N})$$
is approximated by
$$J = \int_{\alpha}^{\beta} x^{t}\, [P_n^{c}(x)]^{s}\, dx,$$
the error $|I - J|$ has the following upper bound:
$$|I - J| \le \int_{\alpha}^{\beta} |\sin^{s} x - [P_n^{c}(x)]^{s}|\, |x|^{t}\, dx \le \sqrt{\frac{2}{\pi}}\; \frac{(2 + 2/10^{r})^{s}}{\sqrt{s}\; 10^{r}} \int_{\alpha}^{\beta} |x|^{t}\, dx. \tag{47}$$
Because the upper bound of $|I - J|$ in (47) depends only on $r$ and $s$, apart from the integral on the right-hand side, we easily obtain the following theorem.
Theorem 2.
Suppose an interval $[a, b]$ can be expressed as a finite union $[a, \alpha_1] \cup [\alpha_1, \alpha_2] \cup \dots \cup [\alpha_n, b]$, and $f(x) = \sin x$ is approximated by the polynomials $P_{m_i}^{c}(x)$ on the intervals $[\alpha_{i-1}, \alpha_i]$, $i = 1, \dots, n+1$, with accuracy up to $1/10^{r}$, where $\alpha_0 = a$ and $\alpha_{n+1} = b$. Then, if $Q$ is an arbitrary polynomial and
$$I = \int_{a}^{b} Q(x) \sin^{s} x\, dx \qquad (s \in \mathbb{N})$$
is approximated by
$$J = \int_{a}^{\alpha_1} Q(x)[P_{m_1}^{c}(x)]^{s}\, dx + \dots + \int_{\alpha_n}^{b} Q(x)[P_{m_{n+1}}^{c}(x)]^{s}\, dx = \sum_{i=1}^{n+1} \int_{\alpha_{i-1}}^{\alpha_i} Q(x)[P_{m_i}^{c}(x)]^{s}\, dx,$$
the error $|I - J|$ has the following upper bound:
$$|I - J| \le \sqrt{\frac{2}{\pi}}\; \frac{(2 + 2/10^{r})^{s}}{\sqrt{s}\; 10^{r}} \sum_{i=1}^{n+1} \int_{\alpha_{i-1}}^{\alpha_i} |Q(x)|\, dx = \sqrt{\frac{2}{\pi}}\; \frac{(2 + 2/10^{r})^{s}}{\sqrt{s}\; 10^{r}} \int_{a}^{b} |Q(x)|\, dx.$$
Based on Algorithm 2, we have established a procedure, say AppIntSin, to approximately compute the integral given in Theorem 2 with the desired accuracy. AppIntSin's arguments take input values, in order, for a, b, r, s and Q, and we can even choose the value of r so that the result satisfies a given tolerance. For instance, to obtain the estimate $|I - J| < \varepsilon$, a positive integral value of r may be chosen to satisfy
$$10^{r} > \frac{0.8\,(2.2)^{s}}{\varepsilon\,\sqrt{s}} \int_{a}^{b} |Q(x)|\, dx, \tag{49}$$
because $\sqrt{2/\pi} < 0.8$ and $2 + 2/10^{r} < 2.2$.
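A one-line MAPLE illustration of this choice of $r$ (the name chooseR is ours; the integral of $|Q|$ is evaluated numerically) might be:
> chooseR := proc(a, b, s::posint, Q, eps)
>   local x, bound;
>   bound := evalf(0.8*(2.2)^s/(eps*sqrt(s))*Int(abs(Q(x)), x = a .. b));
>   return max(1, floor(evalf(log10(bound))) + 1);   # least r with 10^r > bound
> end proc:
> chooseR(1, 100, 20, x -> 2*x^5 + x^3, 1e-30);   # should return 48, matching the worked example below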
Let us consider the following example: find the approximate value of
$$I = \int_{1}^{100} (2x^{5} + x^{3}) \sin^{20} x\, dx$$
with accuracy up to $\varepsilon = 10^{-30}$. We first derive $r = 48$ from (49) with the settings $a = 1$, $b = 100$, $s = 20$ and $Q(x) = 2x^{5} + x^{3}$. Performing Algorithm 2 with $a = 1$, $b = 100$ and $r = 48$, we determine $n = 63$, $(k_{62}+\tfrac{1}{2})p_{62} = 99.74556674$, $k_{63}p_{62} = 100.5309649$, and find that the appropriate case is (41). Then, the calling sequence AppIntSin(1, 100, 48, 20, Q(x)) gives us the desired approximation to $I$, first written in symbolic form as
$$J = \int_{a}^{(k_0+\frac{1}{2})p_0} Q(x)[A_0(x)]^{s}\, dx + \sum_{i=1}^{n-1} \int_{(k_{i-1}+\frac{1}{2})p_{i-1}}^{(k_i+\frac{1}{2})p_i} Q(x)[A_i(x)]^{s}\, dx + \int_{(k_{n-1}+\frac{1}{2})p_{n-1}}^{b} Q(x)[A_n(x)]^{s}\, dx,$$
and then in numerical value,
$$J = 6.055874915602438876766631353985011986803 \times 10^{10}.$$

7. Conclusions

Motivated by the power and popularity of modern computer algebra systems, their ease of programming, and their access to the value of $\pi$ with very high accuracy, we have designed simple and easily implemented algorithms to approximate the trigonometric functions with arbitrary accuracy by means of Taylor expansion. We have provided a careful analysis of the proposed approach with numerical illustrations. Nevertheless, our intention is not to compare with or surpass other well-developed and well-established algorithms in the research literature; rather, we emphasize the simplicity of our methods as a computational application of CAS, which we highlight below:
  • Approximating values of the trigonometric functions by Taylor polynomials with an arbitrary accuracy of the form $1/10^{r}$, $r \in \mathbb{N}$.
  • Using a so-called spreading technique to switch from pointwise approximation to piecewise approximation over an arbitrary interval.
  • Performing arithmetic only on finite rational numbers of small magnitude.
  • Using only one CAS command to access approximate values of $\pi/2$.
Our approximation method in Algorithm 2, utilizing the ongoing computation of $\pi$ to ever more exact significant digits and the power of current computer algebra systems, presents a special partition of an interval $[a, b]$ with nodes $k_i p_i$, where $k_i \in \mathbb{Z}$ and $p_i$ is an approximate value of $\pi/2$. This is a great advantage for local approximation of values of the sine (or cosine) function $f$ on each subinterval, because the $p_i$'s are all finite rational numbers. These nodes also play the role of adjoining centers of approximation, whereas the functional coefficients $f^{(m)}$ still take their exact values $0, \pm 1$ at $k_i\pi/2$.
Our procedure corresponding to Algorithm 1 gives not only numerical results of great precision but also a remedy for mistakes in some existing mathematical software programs, which may display wrong results when a small number $n$ of significant digits is requested (e.g., $n < 9$). In such cases, the arguments $x$ for which $\sin x$ is approximated are commonly close to the points $k\pi/2$, $k \in \mathbb{Z}$.
In addition, Theorem 2 can be applied to obtain desired estimates for the best $L^{2}$-approximation of the sine function in the vector space of polynomials of degree at most a given number, a subspace of $L^{2}(a, b)$.

Author Contributions

Both authors conceived the ideas and designed the paper. L.P.Q. formulated the approximation algorithms; T.A.N. chose suitable inequalities and derived the error estimates. L.P.Q. and T.A.N. wrote the MAPLE code for the calculations.

Acknowledgments

The authors are grateful to the referees for their careful reading and their valuable corrections and suggestions, which improved the manuscript. The work of Thái Anh Nhan was supported by the Irish Research Council under Grant No. RS/2011/179.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Stroethoff, K. Bhaskara's approximation for the sine. Math. Enthus. 2014, 11, 485–492.
  2. Cody, W.; Waite, W. Software Manual for the Elementary Functions; Prentice-Hall: Englewood Cliffs, NJ, USA, 1980.
  3. Muller, J.M. Elementary Functions—Algorithms and Implementation, 2nd ed.; Birkhäuser: Basel, Switzerland, 2005.
  4. Gisuthan, B.; Srikanthan, T. Flat CORDIC: A unified architecture for high-speed generation of trigonometric and hyperbolic functions. In Proceedings of the IEEE International Midwest Symposium on Circuits and Systems, Lansing, MI, USA, 8–11 August 2000; pp. 798–801.
  5. Kantabutra, V. On hardware for computing exponential and trigonometric functions. IEEE Trans. Comput. 1996, 45, 328–339.
  6. Paliouras, V.; Karagianni, K.; Stouraitis, T. A floating-point processor for fast and accurate sine/cosine evaluation. IEEE Trans. Circuits Syst. II Analog Digit. Signal Process. 2000, 47, 441–451.
  7. Takagi, N.; Asada, T.; Yajima, S. Redundant CORDIC methods with a constant scale factor for sine and cosine computation. IEEE Trans. Comput. 1991, 40, 989–994.
  8. Tang, P.T.P. Table-lookup algorithms for elementary functions and their error analysis. In Proceedings of the 10th Symposium on Computer Arithmetic, Grenoble, France, 26–28 June 1991; pp. 232–236.
  9. Driscoll, T.A.; Hale, N.; Trefethen, L.N. Chebfun Guide; Pafnuty Publications: Oxford, UK, 2014.
  10. Trefethen, L.N. Approximation Theory and Approximation Practice; SIAM: Philadelphia, PA, USA, 2013.
  11. Posamentier, A.S.; Lehmann, I. Pi: A Biography of the World's Most Mysterious Number; Prometheus Books: New York, NY, USA, 2004.
  12. Cerone, P.; Dragomir, S.S. Advances in Inequalities for Special Functions; Nova Science Publishers, Inc.: New York, NY, USA, 2008.
  13. Abramowitz, M.; Stegun, I.A. Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, 10th ed.; Applied Mathematics Series; National Bureau of Standards: Washington, DC, USA, 1972.
  14. Malešević, B.; Makragić, M. A method for proving some inequalities on mixed trigonometric polynomial functions. J. Math. Inequal. 2016, 10, 849–876.
  15. Beals, R.; Wong, R. Special Functions: A Graduate Text; Cambridge University Press: Cambridge, UK, 2010.
Figure 1. The graph of $y = M(t)$ on $(0, 0.5) \cup (0.5, 1)$.
Figure 2. The positions of $a$ corresponding to $k_0 = \lfloor a/p_0 \rfloor$ (left) or $k_0 = \lfloor a/p_0 \rfloor + 1$ (right).
Figure 3. The positions of $a$ corresponding to $k_0 = \lfloor a/p_0 \rfloor$ (left) or $k_0 = \lfloor a/p_0 \rfloor + 1$ (right).
Figure 4. The approximation intervals and functions chosen according to the position of $b$ relative to $(k_{n-1}+\tfrac{1}{2})p_{n-1}$.
Figure 5. The graphs of $F$ on $[-150, -90]$ with $r = 100$ (right) and on $[3, 15]$ with $r = 20$ (left).
Table 1. Some results from using the procedures Sine and Cosine.

Calling Sequence | Result
Sine(123,25) | -0.4599034906895912512924357
Sine(−1500.024,35) | 0.99626189049405202369399458834570209
Cosine(578.99,40) | 0.5922469285742675816988406427238119209980
Cosine(3.1415926535,45) | -0.999999999999999999995968587163195430469041636
