How do Trigonometric functions work?

Solution 1

First, you have to do some sort of range reduction. Trig functions are periodic, so you need to reduce arguments down to a standard interval. For starters, you could reduce angles to be between 0 and 360 degrees. But by using a few identities, you realize you could get by with less. If you calculate sines and cosines for angles between 0 and 45 degrees, you can bootstrap your way to calculating all trig functions for all angles.
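
As a very rough illustration of what that reduction looks like in code (a minimal sketch, not any particular library's implementation; sin_kernel and cos_kernel below are hypothetical stand-ins for whatever core approximation is used on the reduced range):

```c
#include <math.h>

/* Minimal sketch of range reduction for sin(). sin_kernel()/cos_kernel()
 * are hypothetical stand-ins: short polynomials trusted only on |r| <= pi/4. */
static const double PI_ = 3.14159265358979323846;

static double sin_kernel(double r)        /* Taylor-style terms, fine on |r| <= pi/4 */
{
    double r2 = r * r;
    return r * (1.0 + r2 * (-1.0/6 + r2 * (1.0/120 - r2/5040)));
}

static double cos_kernel(double r)
{
    double r2 = r * r;
    return 1.0 + r2 * (-0.5 + r2 * (1.0/24 - r2/720));
}

double my_sin(double x)
{
    /* naive reduction into [-pi, pi]; real libraries use a much more careful
     * (Payne-Hanek style) reduction so huge arguments don't lose precision */
    x = remainder(x, 2.0 * PI_);

    int k = (int)lround(x / (PI_ / 2.0));   /* nearest multiple of pi/2 */
    double r = x - k * (PI_ / 2.0);         /* now |r| <= pi/4 */

    switch (((k % 4) + 4) % 4) {            /* which quadrant we landed in */
    case 0:  return  sin_kernel(r);
    case 1:  return  cos_kernel(r);         /* sin(r + pi/2) =  cos(r) */
    case 2:  return -sin_kernel(r);         /* sin(r + pi)   = -sin(r) */
    default: return -cos_kernel(r);         /* sin(r - pi/2) = -cos(r) */
    }
}
```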

Once you've reduced your argument, most chips use a CORDIC algorithm to compute the sines and cosines. You may hear people say that computers use Taylor series. That sounds reasonable, but it's not true. The CORDIC algorithms are much better suited to efficient hardware implementation. (Software libraries may use Taylor series, say on hardware that doesn't support trig functions.) There may be some additional processing, using the CORDIC algorithm to get fairly good answers but then doing something else to improve accuracy.
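
For a feel for what the hardware is doing, here is a minimal rotation-mode CORDIC sketch in floating point. Real implementations work in fixed point, where the multiplications by 2^-i become shifts and the atan(2^-i) values come from a small precomputed table; this just shows the bare iteration:

```c
#include <math.h>
#include <stdio.h>

/* Rotation-mode CORDIC sketch, valid for angles roughly in [-pi/2, +pi/2]. */
#define ITERS 40

void cordic_sincos(double angle, double *s, double *c)
{
    double x = 0.6072529350088813;   /* 1/gain: prescale so the final vector has length 1 */
    double y = 0.0;
    double z = angle;                /* residual angle still to rotate through */
    double pow2 = 1.0;               /* 2^-i */

    for (int i = 0; i < ITERS; i++) {
        double d  = (z >= 0.0) ? 1.0 : -1.0;   /* rotate toward z = 0 */
        double xn = x - d * y * pow2;
        double yn = y + d * x * pow2;
        z -= d * atan(pow2);         /* table entry atan(2^-i) in real hardware */
        x = xn;
        y = yn;
        pow2 *= 0.5;
    }
    *c = x;                          /* ~cos(angle) */
    *s = y;                          /* ~sin(angle) */
}

int main(void)
{
    double s, c;
    cordic_sincos(0.5, &s, &c);
    printf("sin ~ %.9f   cos ~ %.9f\n", s, c);   /* compare with sin(0.5), cos(0.5) */
    return 0;
}
```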

There are some refinements to the above. For example, for very small angles theta (in radians), sin(theta) = theta to all the precision you have, so it's more efficient to simply return theta than to use some other algorithm. So in practice there is a lot of special case logic to squeeze out all the performance and accuracy possible. Chips with smaller markets may not go to as much optimization effort.
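
The small-angle shortcut is literally an early return; the cutoff below is illustrative, not taken from any particular library:

```c
#include <math.h>

/* Sketch of the small-angle shortcut: below roughly 2^-26, the x^3/6 term
 * falls beneath double-precision resolution, so sin(x) rounds to x itself. */
double sin_with_small_angle_fastpath(double x)
{
    if (fabs(x) < 1.49e-8)   /* ~2^-26; illustrative cutoff, real libms choose it carefully */
        return x;            /* already correct to all the precision you have */
    return sin(x);           /* otherwise fall through to the full algorithm */
}
```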

Solution 2

edit: Jack Ganssle has a decent discussion in his book on embedded systems, "The Firmware Handbook".

FYI: If you have accuracy and performance constraints, Taylor series should not be used to approximate functions for numerical purposes. (Save them for your calculus courses.) They make use of the analyticity of a function at a single point, i.e. the fact that all of its derivatives exist at that point. They don't necessarily converge in the interval of interest. Often they do a lousy job of distributing the approximation's accuracy: they are "perfect" right near the evaluation point, and the error generally zooms upwards as you move away from it. And if you have a function with a discontinuity in it or in its derivatives (e.g. square waves, triangle waves, and their integrals), a Taylor series will give you the wrong answer.
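
You can see the "error zooms upwards" behaviour with a few lines of C: a degree-7 Taylor polynomial for sin about 0 is excellent near the expansion point and degrades rapidly as you move away from it (a quick illustration only):

```c
#include <math.h>
#include <stdio.h>

/* Degree-7 Taylor polynomial for sin about 0, evaluated farther and farther
 * from the expansion point to show how the error grows. */
static double taylor_sin7(double x)
{
    double x2 = x * x;
    return x * (1.0 + x2 * (-1.0/6 + x2 * (1.0/120 - x2/5040)));
}

int main(void)
{
    for (double x = 0.5; x <= 4.0; x += 0.5)
        printf("x=%.1f  taylor=% .6f  sin=% .6f  err=%.2e\n",
               x, taylor_sin7(x), sin(x), fabs(taylor_sin7(x) - sin(x)));
    return 0;
}
```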

The best "easy" solution, when using a polynomial of maximum degree N to approximate a given function f(x) over an interval x0 < x < x1, is from Chebyshev approximation; see Numerical Recipes for a good discussion. Note that the Tj(x) and Tk(x) in the Wolfram article I linked to used the cos and inverse cosine, these are polynomials and in practice you use a recurrence formula to get the coefficients. Again, see Numerical Recipes.

edit: Wikipedia has a semi-decent article on approximation theory. One of the sources they cite (Hart, "Computer Approximations") is out of print (& used copies tend to be expensive) but goes into a lot of detail about stuff like this. (Jack Ganssle mentions this in issue 39 of his newsletter The Embedded Muse.)

edit 2: Here are some tangible error numbers (see below) for Taylor vs. Chebyshev approximations of sin(x). Some important points to note:

  1. The maximum error of a Taylor series approximation over a given range is much larger than the maximum error of a Chebyshev approximation of the same degree. (For about the same error, you can get away with one fewer term with Chebyshev, which means faster performance.)
  2. Range reduction is a huge win, because the contribution of the higher-order terms shrinks when the interval of approximation is smaller.
  3. If you can't get away with range reduction, your coefficients need to be stored with more precision.

Don't get me wrong: Taylor series will work properly for sine/cosine (with reasonable precision for the range -pi/2 to +pi/2; technically, with enough terms, you can reach any desired precision for all real inputs, but try to calculate cos(100) using a Taylor series and you can't do it unless you use arbitrary-precision arithmetic). If I were stuck on a desert island with a nonscientific calculator and needed to calculate sine and cosine, I would probably use Taylor series, since the coefficients are easy to remember. But the real-world applications where you have to write your own sin() or cos() functions are rare enough that you'd be best off using an efficient implementation that reaches the desired accuracy -- which the Taylor series is not.

Range = -pi/2 to +pi/2, degree 5 (3 terms)

  • Taylor: max error around 4.5e-3, f(x) = x - x^3/6 + x^5/120
  • Chebyshev: max error around 7e-5, f(x) = 0.9996949x - 0.1656700x^3 + 0.0075134x^5

Range = -pi/2 to +pi/2, degree 7 (4 terms)

  • Taylor: max error around 1.5e-4, f(x) = x - x^3/6 + x^5/120 - x^7/5040
  • Chebyshev: max error around 6e-7, f(x) = 0.99999660x - 0.16664824x^3 + 0.00830629x^5 - 0.00018363x^7

Range = -pi/4 to +pi/4, degree 3 (2 terms)

  • Taylor: max error around 2.5e-3, f(x) = x - x^3/6
  • Chebyshev: max error around 1.5e-4, f(x) = 0.999x - 0.1603x^3

Range = -pi/4 to +pi/4, degree 5 (3 terms)

  • Taylor: max error around 3.5e-5, f(x) = x - x^3/6 + x^5/120
  • Chebyshev: max error around 6e-7, f(x) = 0.999995x - 0.1666016x^3 + 0.0081215x^5

Range = -pi/4 to +pi/4, degree 7 (4 terms)

  • Taylor: max error around 3e-7, f(x) = x - x^3/6 + x^5/120 - x^7/5040
  • Chebyshev: max error around 1.2e-9, f(x) = 0.999999986x - 0.166666367x^3 + 0.008331584x^5 - 0.000194621x^7
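
If you want to check the degree-7 numbers above, a brute-force scan like the following (coefficients copied straight from the lists above, evaluated in Horner form; the step size is arbitrary) should report maximum errors of roughly 1.5e-4 for Taylor and 6e-7 for Chebyshev:

```c
#include <math.h>
#include <stdio.h>

#define PI 3.14159265358979323846

/* Degree-7 Taylor and Chebyshev fits for sin on [-pi/2, +pi/2], as listed above. */
static double taylor7(double x)
{
    double x2 = x * x;
    return x * (1.0 + x2 * (-1.0/6 + x2 * (1.0/120 - x2/5040)));
}

static double cheby7(double x)
{
    double x2 = x * x;
    return x * (0.99999660 + x2 * (-0.16664824 + x2 * (0.00830629 - 0.00018363 * x2)));
}

int main(void)
{
    double worst_taylor = 0.0, worst_cheby = 0.0;
    for (double x = -PI / 2; x <= PI / 2; x += 1e-4) {
        double s  = sin(x);
        double et = fabs(taylor7(x) - s), ec = fabs(cheby7(x) - s);
        if (et > worst_taylor) worst_taylor = et;
        if (ec > worst_cheby)  worst_cheby  = ec;
    }
    printf("max error  Taylor: %.2e   Chebyshev: %.2e\n", worst_taylor, worst_cheby);
    return 0;
}
```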

Solution 3

I believe they're calculated using Taylor Series or CORDIC. Some applications which make heavy use of trig functions (games, graphics) construct trig tables when they start up so they can just look up values rather than recalculating them over and over.
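
A startup-built trig table can be as simple as the sketch below (nearest-entry lookup only; real game code often interpolates between neighbouring entries, and the table size here is an arbitrary choice):

```c
#include <math.h>

#define PI 3.14159265358979323846
#define TABLE_SIZE 4096                 /* power of two so we can wrap with a mask */

/* Precompute sin for one full turn at startup, then answer queries by lookup. */
static float sin_table[TABLE_SIZE];

void init_trig_table(void)
{
    for (int i = 0; i < TABLE_SIZE; i++)
        sin_table[i] = (float)sin(2.0 * PI * i / TABLE_SIZE);
}

float fast_sin(float angle)             /* angle in radians, any sign */
{
    unsigned i = (unsigned)lrintf(angle * (float)(TABLE_SIZE / (2.0 * PI)));
    return sin_table[i & (TABLE_SIZE - 1)];   /* wrap modulo one full turn */
}

float fast_cos(float angle)
{
    return fast_sin(angle + (float)(PI / 2.0));
}
```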

Solution 4

Check out the Wikipedia article on trig functions. A good place to learn about actually implementing them in code is Numerical Recipes.

I'm not much of a mathematician, but my understanding of where sin, cos, and tan "come from" is that they are, in a sense, observed when you're working with right-angle triangles. If you take measurements of the lengths of sides of a bunch of different right-angle triangles and plot the points on a graph, you can get sin, cos, and tan out of that. As Harper Shelby points out, the functions are simply defined as properties of right-angle triangles.

A more sophisticated understanding comes from seeing how these ratios relate to the geometry of the circle, which leads to radians and all of that goodness. It's all there in the Wikipedia entry.

Solution 5

I would like to extend the answer provided by @Jason S. Using a domain-subdivision method similar to the one @Jason S describes, together with Maclaurin series approximations, I achieved an average 2-3x speedup over the tan(), sin(), cos(), atan(), asin(), and acos() functions built into the gcc compiler with -O3 optimization. The best Maclaurin series approximating functions described below achieved double-precision accuracy.

For the tan(), sin(), and cos() functions, and for simplicity, an overlapping 0 to 2pi+pi/80 domain was divided into 81 equal intervals with "anchor points" at pi/80, 3pi/80, ..., 161pi/80. Then tan(), sin(), and cos() of these 81 anchor points were evaluated and stored. With the help of trig identities, a single Maclaurin series function was developed for each trig function. Any finite angle may be passed to the trig approximating functions, because each function first translates its input angle into the 0 to 2pi domain. This translation overhead is included in the approximation overhead.
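
This is not the author's actual code, but the idea reads roughly like the following sketch: store sin and cos at the anchor points, then expand about the nearest anchor using the angle-addition identity and short Maclaurin series in the small offset (the anchor spacing and series lengths below are my own illustrative choices):

```c
#include <math.h>

#define PI 3.14159265358979323846
#define N_ANCHOR 81                     /* 80 intervals over one turn (illustrative) */

/* Anchor-point sketch:
 *   sin(a + d) = sin(a)*cos(d) + cos(a)*sin(d)
 * with a the nearest anchor and cos(d), sin(d) from short Maclaurin series
 * in the small offset d. */
static double anchor_sin[N_ANCHOR], anchor_cos[N_ANCHOR];
static const double step = 2.0 * PI / (N_ANCHOR - 1);

void init_anchors(void)
{
    for (int i = 0; i < N_ANCHOR; i++) {
        anchor_sin[i] = sin(i * step);
        anchor_cos[i] = cos(i * step);
    }
}

double approx_sin(double x)
{
    x -= 2.0 * PI * floor(x / (2.0 * PI));        /* translate into [0, 2pi) */
    int i = (int)lround(x / step);                /* nearest anchor */
    double d  = x - i * step;                     /* |d| <= step/2, small */
    double d2 = d * d;
    double sd = d * (1.0 - d2 / 6.0 + d2 * d2 / 120.0);   /* Maclaurin sin(d) */
    double cd = 1.0 - d2 / 2.0 + d2 * d2 / 24.0;          /* Maclaurin cos(d) */
    return anchor_sin[i] * cd + anchor_cos[i] * sd;
}
```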

Similar methods were developed for the atan(), asin(), and acos() functions, where an overlapping -1.0 to 1.1 domain was divided into 21 equal intervals with anchor points at -19/20, -17/20, ..., 19/20, 21/20. Then only atan() of these 21 anchor points was stored. Again, with the help of inverse trig identities, a single Maclaurin series function was developed for the atan() function. Results of the atan() function were then used to approximate asin() and acos().

Since all inverse trig approximating functions are based on the atan() approximating function, any double-precision argument input value is allowed. However the argument input to the asin() and acos() approximating functions is truncated to the ±1 domain because any value outside it is meaningless.
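
Again only an illustration of the approach described above, not the author's code (the anchor handling, series length, and out-of-range folding are my choices): store atan at the anchor points, use the identity atan(x) = atan(a) + atan((x - a)/(1 + a*x)) so the remaining argument is small enough for a short Maclaurin series, and derive asin and acos from atan:

```c
#include <math.h>

#define PI 3.14159265358979323846
#define N_A 21                          /* anchors at -19/20, -17/20, ..., 21/20 */

static double anchor_x[N_A], anchor_atan[N_A];

void init_atan_anchors(void)
{
    for (int i = 0; i < N_A; i++) {
        anchor_x[i]    = -0.95 + 0.1 * i;
        anchor_atan[i] = atan(anchor_x[i]);
    }
}

static double atan_small(double u)      /* Maclaurin series, small |u| only */
{
    double u2 = u * u;
    return u * (1.0 - u2 / 3.0 + u2 * u2 / 5.0);
}

double approx_atan(double x)
{
    if (x >  1.0) return  PI / 2 - approx_atan(1.0 / x);  /* fold |x| > 1 back in */
    if (x < -1.0) return -PI / 2 - approx_atan(1.0 / x);

    int i = (int)lround((x + 0.95) / 0.1);                /* nearest anchor */
    if (i < 0) i = 0;
    if (i > N_A - 1) i = N_A - 1;
    double a = anchor_x[i];
    double u = (x - a) / (1.0 + a * x);                   /* small by construction */
    return anchor_atan[i] + atan_small(u);
}

double approx_asin(double x)
{
    if (x >=  1.0) return  PI / 2;                        /* clamp, as the answer notes */
    if (x <= -1.0) return -PI / 2;
    return approx_atan(x / sqrt(1.0 - x * x));
}

double approx_acos(double x)
{
    return PI / 2 - approx_asin(x);
}
```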

To test the approximating functions, one billion random function evaluations were forced to actually be carried out (that is, the -O3 optimizing compiler was not allowed to skip an evaluation because its result would go unused). To remove the bias of generating a billion random numbers and processing the results, a run without any trig or inverse trig evaluation was timed first. This bias was then subtracted from each test to obtain a more representative estimate of actual function evaluation time.

Table 2. Time spent in seconds executing the indicated function or functions one billion times. The estimates are obtained by subtracting the time cost of evaluating one billion random numbers shown in the first row of Table 1 from the remaining rows in Table 1.

Function              Run 1 (s)   Run 2 (s)
tan()                 18.0515     18.2545
TAN3()                 5.93853     6.02349
TAN4()                 6.72216     6.99134
sin() and cos()       19.4052     19.4311
SINCOS3()              7.85564     7.92844
SINCOS4()              9.36672     9.57946
atan()                15.7160     15.6599
ATAN1()                6.47800     6.55230
ATAN2()                7.26730     7.24885
ATAN3()                8.15299     8.21284
asin() and acos()     36.8833     36.9496
ASINCOS1()            10.1655      9.78479
ASINCOS2()            10.6236     10.6000
ASINCOS3()            12.8430     12.0707

(In the interest of saving space, Table 1 is not shown.) Table 2 shows the results of two separate runs of a billion evaluations of each approximating function. The first column is the first run and the second column is the second run. The numbers '1', '2', '3' or '4' in the function names indicate the number of terms used in the Maclaurin series function to evaluate the particular trig or inverse trig approximation. SINCOS#() means that both sin and cos were evaluated at the same time. Likewise, ASINCOS#() means both asin and acos were evaluated at the same time. There is little extra overhead in evaluating both quantities at the same time.

The results show that increasing the number of terms slightly increases execution time, as expected. Even the smallest number of terms gave around 12-14 digit accuracy everywhere except for the tan() approximation near the points where its value approaches ±infinity; one would expect even the built-in tan() function to have problems there.

Similar results were obtained on a high-end MacBook Pro laptop in Unix and on a high-end desktop computer in Linux.

Comments

  • Jurassic_C
    Jurassic_C over 3 years

    So in high school math, and probably college, we are taught how to use trig functions, what they do, and what kinds of problems they solve. But they have always been presented to me as a black box. If you need the Sine or Cosine of something, you hit the sin or cos button on your calculator and you're set. Which is fine.

    What I'm wondering is how trigonometric functions are typically implemented.

    • Kyle Cronin
      Kyle Cronin over 15 years
      Are you confused about what trig functions are or how they are implemented?
    • Jurassic_C
      Jurassic_C over 15 years
      I know what they are. I know what they do. I know how to determine which I need for what purpose. I can tell you all about the relationship between angles and distances. What I was looking for was more along the lines of John D. Cook's answer. And everybody else who mentioned actual algorithms
    • BBQ.
      BBQ. over 15 years
      This is a good question. For example, sine, cosine, and tangent are transcendental functions and those are hard to solve... On the other hand, they can be defined using a simple Taylor series expansion which will give you the correct answer up to any finite degree of accuracy required.
  • kquinn
    kquinn over 15 years
    This comment is wrong. There is a time and a place for every approximation. If you do not know enough analysis to determine the region of convergence for ANY series approximation, you should NOT be using it. That goes for Taylor, Chebyshev, Padé, etc. series. Taylor series are often Good Enough.
  • Fredrik Johansson
    Fredrik Johansson over 15 years
    Downvoting for inappropriate use of "never". Yes, Taylor series are worse than minimax polynomials on intervals, but sometimes you are interested in asymptotic accuracy around a single point. Further, Taylor series are usually the way to go for arbitrary-precision arithmetic.
  • Jason S
    Jason S over 15 years
    :shrug: I don't know about you but I've never been interested in evaluating a function in a small neighborhood around just one point. Even a quick least-squares fit over an interval is pretty damn easy to do. Anyone who's using Taylor series is just missing the point.
  • Jason S
    Jason S over 15 years
    Great answer -- though the CORDIC doesn't really need range reduction per se (in fact it is essentially a range reduction algorithm in its own right); it works fine for angles between -pi/2 and +pi/2, so you just have to do a 180 degree vector rotation for angles outside that range.
  • Admin
    Admin about 15 years
    Upvoting because the responder knew Hart exists. :smile: Hart is the classic reference here, even if it was difficult to find when I bought a copy (in print) 25 years ago. It is worth every penny. Range reduction wherever possible, coupled with an appropriate approximation, be it Pade, Chebychev, even Taylor series as appropriate, is a good approach. Pade or Chebychev approximants are usually the better choice over a Taylor series though.
  • dotancohen
    dotancohen almost 12 years
    Actually, you might be surprised at just how accurate a Taylor series can be: Taylor series for sine, illustrated.
  • Jason S
    Jason S almost 12 years
    @dotancohen: That misses the point: Chebyshev series require a reduced order polynomial compared to Taylor series for a given accuracy.
  • dotancohen
    dotancohen almost 12 years
    I don't dispute that. I was addressing the statement "If you have accuracy and performance constraints, Taylor series should not be used to approximate functions for numerical purposes.".
  • Jason S
    Jason S almost 12 years
    ??? How is that any different? Taylor series out to 17th degree to calculate sin(x) from -2pi to +2pi can probably be beat by Chebyshev with a 7th or 9th degree polynomial. I wouldn't have any problem making the statement, "If you have time constraints when cutting down trees, you should not use a hand saw. Use a chainsaw." Perhaps I should reword from "should not" to something like "I would not recommend using Taylor series". Sure, you could use Taylor series in some cases, but your accuracy and performance are going to be problematic. By performance I mean CPU execution time.
  • Pascal Cuoq
    Pascal Cuoq about 11 years
    Implementations that use a polynomial approximation may often use Taylor series, but they typically should use coefficients that have been determined with the Remez algorithm. lolengine.net/blog/2011/12/21/better-function-approximations
  • Rhubbarb
    Rhubbarb almost 11 years
    Note that the table of values used by CORDIC must be pre-calculated. So, Taylor might still be used at "compile time".
  • supercat
    supercat almost 10 years
    Is there anything wrong with using a Taylor series approximation to compute function values if the pivot point is chosen to be near the point of interest? For example, if one picks one of 128 pivot points around a circle, a fourth-order Taylor approximation of sine/cosine will suffice for roughly half-part-per-billion accuracy on sine and cosine.
  • Jason S
    Jason S almost 10 years
    Wrong? no. But Chebyshev can beat it for accuracy anyday.
  • Perry
    Perry over 9 years
    It seems that this answer contradicts the high-rated accepted answer to this similar question: stackoverflow.com/questions/2284860/…. This answer says that the sin() function is mostly implemented at the hardware level, while the other says it is implemented in C.
  • MattHusz
    MattHusz over 3 years
    Isn't CORDIC only more efficient when you don't have access to fast hardware multiplies? Embedded devices with FPUs and FPGAs with DSP elements (which is quite a few of them) can perform multiplies very efficiently. Maybe the answer to this question has changed since this was posted, or maybe I've misunderstood the tradeoff?
  • Fortran
    Fortran over 3 years
    @JasonS can you explain "CORDIC is generally faster than other approaches when a hardware multiplier is not available (e.g., a microcontroller), or when the number of gates required to implement the functions it supports should be minimized (e.g., in an FPGA or ASIC). On the other hand, when a hardware multiplier is available (e.g., in a DSP microprocessor), table-lookup methods and power series are generally faster than CORDIC." from en.wikipedia.org/w/…
  • Jason S
    Jason S over 3 years
    @Fortran not sure why you're addressing that comment to me; someone else wrote both this answer and the Wikipedia comment. If your question is about the Wikipedia content, you could ask on the Wikipedia Talk page.
  • Jason S
    Jason S about 3 years
    @Wane Anyday. You must be doing something questionable if you make that claim. Chebyshev approximations are relative to the range of interest. For n=1 and -0.5 ≤ x ≤ 0.5 , for example, Taylor = x with max error of ≈0.02057, Chebyshev ≈ 0.974734x with max error of ≈0.00794
  • Wane
    Wane about 3 years
    @JasonS Sorry, I misunderstood. I see what you mean now.