Stable Cotangent

Solution 1

cot(x) = cos(x)/sin(x) should be more numerically stable close to π/2 than cot(x) = 1/tan(x). You can implement that efficiently using sincos on platforms that have it.
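Here is a minimal sketch of that approach, assuming a glibc-style sincos() extension (it is not part of standard C or C++, so its availability and the _GNU_SOURCE guard are assumptions):

#define _GNU_SOURCE          // sincos() is a GNU extension, not standard
#include <math.h>

double cot(double x) {
    double s, c;
    sincos(x, &s, &c);       // one call yields both sin(x) and cos(x)
    return c / s;            // cot(x) = cos(x) / sin(x)
}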

Another possibility is cot(x) = tan(M_PI_2 - x). This should be faster than the above (even if sincos is available), but it may also be less accurate, because M_PI_2 is of course only an approximation of the transcendental number π/2, so the difference M_PI_2 - x will not be accurate to the full width of a double mantissa -- in fact, if you get unlucky, it may have only a few meaningful bits.

Solution 2

TL;DR No.

As a rule of thumb, when looking for sources of inaccuracy one should be concerned first and foremost about additions and subtractions, which can lead to the issue of subtractive cancellation. Multiplications and divisions are typically harmless for accuracy, other than contributing additional rounding error, but they may cause problems through overflow and underflow in intermediate computations.

No machine number x can get close enough to an odd multiple of π/2 to cause tan(x) to overflow; therefore tan(x) is well-defined and finite for every finite operand in any of the IEEE-754 floating-point formats, and by extension, so is cot(x) = 1.0 / tan(x).
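As an illustration (a hedged snippet; M_PI_2 is the POSIX/glibc macro for the double closest to π/2, which is still about 6.1e-17 away from the exact value):

#include <cmath>
#include <cstdio>

int main() {
    double x = M_PI_2;                                   // nearest double to pi/2
    std::printf("tan(x)   = %g\n", std::tan(x));         // ~1.6e16: huge, but finite
    std::printf("1/tan(x) = %g\n", 1.0 / std::tan(x));   // ~6.1e-17
    return 0;
}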

This is easily demonstrated by an exhaustive test over all numerical float encodings; an exhaustive test over all double encodings is not feasible, except perhaps on the largest supercomputers in existence today.
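A sketch of such a test, under the assumptions that float is 32-bit IEEE-754 binary32 and that double-precision cos()/sin() is accurate enough to serve as the reference; the same loop also tracks the maximum error in ulps, which is how figures like those quoted below can be obtained:

#include <cfloat>
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <cstring>

int main() {
    double max_ulp_err = 0.0;
    for (uint64_t i = 0; i <= 0xFFFFFFFFull; ++i) {       // every float bit pattern
        uint32_t bits = (uint32_t)i;
        float x;
        std::memcpy(&x, &bits, sizeof x);
        if (!std::isfinite(x) || x == 0.0f) continue;      // skip NaN, infinities, zero
        float t = std::tan(x);
        if (std::isinf(t)) {                               // would contradict the claim above
            std::printf("tan() overflowed at x = %g\n", x);
            return 1;
        }
        float got = 1.0f / t;                              // cot via 1/tan
        double ref = std::cos((double)x) / std::sin((double)x);
        if (!std::isfinite(got) || std::fabs(ref) > FLT_MAX) continue;  // outside float range
        float a = std::fabs(got);
        double ulp = (double)(std::nextafterf(a, INFINITY) - a);  // one ulp at that magnitude
        max_ulp_err = std::fmax(max_ulp_err, std::fabs((double)got - ref) / ulp);
    }
    std::printf("max error of 1.0f / tan(x): %.2f ulp\n", max_ulp_err);
    return 0;
}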

Using a math library whose tan() implementation has a maximum error of ~= 0.5 ulp, we find that computing cot(x) = 1.0 / tan(x) incurs a maximum error of less than 1.5 ulp, where the additional error compared to tan() itself is contributed by the rounding error of the division.

Repeating this exhaustive test over all float values with cot(x) = cos(x) / sin(x), where sin() and cos() are computed with a maximum error of ~= 0.5 ulp, we find that the maximum error in cot() is less than 2.0 ulps, so slightly larger. This is easily explained by having three sources of error instead of two in the previous formula.

Lastly, cot(x) = tan(M_PI_2 - x) suffers from the issue of subtractive cancellation mentioned earlier when x is near M_PI_2, and from the issue that in finite-precision floating-point arithmetic, M_PI_2 - x == M_PI_2 when x is sufficiently small in magnitude. This can lead to very large errors that leave us with no valid bits in the result.
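A hedged illustration of both failure modes (exact outputs depend on the math library; M_PI_2 is assumed to be available):

#include <cmath>
#include <cstdio>

int main() {
    // Near pi/2: M_PI_2 - x cancels completely, leaving no meaningful bits.
    double x1 = M_PI_2;                 // true cot(x1) is about 6.1e-17
    std::printf("%g vs %g\n", std::tan(M_PI_2 - x1), 1.0 / std::tan(x1));  // 0 vs ~6.1e-17

    // Tiny x: M_PI_2 - x rounds back to M_PI_2, so the result stops tracking x.
    double x2 = 1e-20;                  // true cot(x2) is about 1e20
    std::printf("%g vs %g\n", std::tan(M_PI_2 - x2), 1.0 / std::tan(x2));  // ~1.6e16 vs 1e20
    return 0;
}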

Solution 3

If you consider the angle between two vectors v and w, you can also obtain the cotangent as follows (using Eigen::Vector3d):

#include <Eigen/Dense>

inline double cot(const Eigen::Vector3d& v, const Eigen::Vector3d& w) {
    return v.dot(w) / v.cross(w).norm();   // cot(theta) = (v . w) / |v x w|
}

With theta the angle between v and w, the above function is correct because of the identities below; a quick usage check follows the list.

  • |v x w| = |v| |w| sin(theta)
  • v . w = |v| |w| cos(theta)
  • cot(theta) = cos(theta) / sin(theta) = (v . w) / |v x w|
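For instance (a quick, hedged sanity check with the cot() above in scope): the angle between (1, 0, 0) and (1, 1, 0) is 45 degrees, whose cotangent is 1.

Eigen::Vector3d v(1, 0, 0), w(1, 1, 0);   // 45-degree angle between v and w
double c = cot(v, w);                     // ~1.0, since cot(45 deg) = 1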

Comments

  • Meh. over 1 year

    Is there a more stable implementation for the cotangent function than return 1.0/tan(x);?

  • Meh. over 13 years

    OK, I guess this is the best I can do. Also, I learnt about sincos, which I must admit I had never come across before!