numpy arbitrary precision linear algebra


Solution 1

SymPy can calculate with arbitrary precision:

from sympy import exp, N, S
from sympy.matrices import Matrix

# Exact (rational) entries, so nothing is rounded before we ask for digits
data = [[S("-800.21"), S("-600.00")], [S("-600.00"), S("-1000.48")]]
m = Matrix(data)
# Element-wise exp, then evaluate every entry to 100 significant digits
ex = m.applyfunc(exp).applyfunc(lambda x: N(x, 100))
vecs = ex.eigenvects()
print(vecs[0][0])  # first eigenvalue
print(vecs[1][0])  # second eigenvalue
print(vecs[0][2])  # first eigenvector
print(vecs[1][2])  # second eigenvector

output:

-2.650396553004310816338679447269582701529092549943247237903254759946483528035516341807463648841185335e-261
2.650396553004310816338679447269582701529092549943247237903254759946483528035516341807466621962539464e-261
[[-0.9999999999999999999999999999999999999999999999999999999999999999999999999999999999999994391176386872]
[                                                                                                      1]]
[[1.000000000000000000000000000000000000000000000000000000000000000000000000000000000000000560882361313]
[                                                                                                    1]]

You can change the 100 in N(x, 100) to a different precision, but when I tried 1000, the eigenvector calculation failed.
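
If you want to sanity-check the result, you can substitute an eigenpair back into ex*v = lambda*v. A minimal sketch, reusing ex and vecs from the snippet above (eigenvects() returns (eigenvalue, multiplicity, [vectors]) tuples):

# Pull out the first eigenpair
lam, mult, vec_list = vecs[0]
v = vec_list[0]
# The residual ex*v - lam*v should be ~0 at the working precision
residual = (ex * v - lam * v).applyfunc(lambda x: N(x, 30))
print(residual)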

Solution 2

On 64-bit systems, there's a numpy.float128 dtype (I believe there's a float96 dtype on 32-bit systems as well). While numpy.linalg.eig doesn't support 128-bit floats, scipy.linalg.eig (sort of) does.

However, none of this is going to matter in the long run. Any general eigenvalue solver is iterative rather than exact, so you're not gaining anything by keeping the extra precision: np.linalg.eig works for matrices of any size, but it never returns an exact solution.

If you're always solving 2x2 matrices, it's trivial to write your own solver that should be more exact. I'll show an example of this at the end...

Regardless, forging ahead into pointlessly precise memory containers:

import numpy as np
import scipy as sp
import scipy.linalg

a = np.array([[-800.21, -600.00], [-600.00, -1000.48]], dtype=np.float128)
ex = np.exp(a)
print(ex)

eigvals, eigvecs = sp.linalg.eig(ex)

# And to test...
check1 = ex.dot(eigvecs[:, 0])
check2 = eigvals[0] * eigvecs[:, 0]
print('Checking accuracy..')
print(check1, check2)
print((check1 - check2).dot(check1 - check2), '<-- Should be zero')

However, you'll notice that what you get is identical to just doing np.linalg.eig(ex.astype(np.float64)). In fact, I'm fairly sure that's what scipy is doing internally, while numpy raises an error rather than downcasting silently. I could be quite wrong, though...
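
If you want to test that hypothesis, a quick sketch (reusing ex, np, and sp from above) is to compare the two results directly:

vals_sp, _ = sp.linalg.eig(ex)
vals_np, _ = np.linalg.eig(ex.astype(np.float64))
# If scipy silently downcasts to float64, these should agree exactly
print(np.allclose(np.sort_complex(vals_sp), np.sort_complex(vals_np)))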

If you don't want to use scipy, one workaround is to rescale things after the exponentiation but before solving for the eigenvalues: cast down to "normal" float64s, solve, then recast the eigenvalues as float128s and undo the scaling. This works because multiplying a matrix by a scalar multiplies its eigenvalues by the same scalar while leaving the eigenvectors unchanged.

E.g.

import numpy as np

a = np.array([[-800.21, -600.00], [-600.00, -1000.48]], dtype=np.float128)
ex = np.exp(a)
factor = 1e300
# Scale up so the tiny entries become representable in float64
ex_rescaled = (ex * factor).astype(np.float64)

eigvals, eigvecs = np.linalg.eig(ex_rescaled)
# Undo the scaling; the eigenvectors are unaffected by a scalar rescale
eigvals = eigvals.astype(np.float128) / factor

# And to test...
check1 = ex.dot(eigvecs[:, 0])
check2 = eigvals[0] * eigvecs[:, 0]
print('Checking accuracy..')
print(check1, check2)
print((check1 - check2).dot(check1 - check2), '<-- Should be zero')

Finally, if you're only solving 2x2 (or 3x3) matrices, you can write your own solver, which will return an exact value for those shapes by solving the characteristic polynomial directly. For a 2x2 matrix [[a, b], [c, d]], the eigenvalues are the roots of lambda**2 - (a + d)*lambda + (a*d - b*c) = 0.

import numpy as np

def quadratic(a, b, c):
    # scimath.sqrt returns complex roots when the discriminant is negative
    sqrt_part = np.lib.scimath.sqrt(b**2 - 4*a*c)
    root1 = (-b + sqrt_part) / (2 * a)
    root2 = (-b - sqrt_part) / (2 * a)
    return root1, root2

def eigvals(matrix_2x2):
    vals = np.zeros(2, dtype=matrix_2x2.dtype)
    a, b, c, d = matrix_2x2.flatten()
    # Roots of the characteristic polynomial x**2 - (a+d)x + (a*d - b*c)
    vals[:] = quadratic(1.0, -(a+d), (a*d - b*c))
    return vals

def eigvecs(matrix_2x2, vals):
    a, b, c, d = matrix_2x2.flatten()
    vecs = np.zeros_like(matrix_2x2)
    if (b == 0.0) and (c == 0.0):
        # Already diagonal: the standard basis vectors are the eigenvectors
        vecs[0, 0], vecs[1, 1] = 1.0, 1.0
    elif c != 0.0:
        vecs[0, :] = vals - d
        vecs[1, :] = c
    elif b != 0:
        vecs[0, :] = b
        vecs[1, :] = vals - a
    return vecs

def eig_2x2(matrix_2x2):
    vals = eigvals(matrix_2x2)
    vecs = eigvecs(matrix_2x2, vals)
    return vals, vecs

a = np.array([[-800.21, -600.00], [-600.00, -1000.48]], dtype=np.float128)
ex = np.exp(a)
# Use fresh names so we don't shadow the eigvals/eigvecs functions above
vals, vecs = eig_2x2(ex)

# And to test...
check1 = ex.dot(vecs[:, 0])
check2 = vals[0] * vecs[:, 0]
print('Checking accuracy..')
print(check1, check2)
print((check1 - check2).dot(check1 - check2), '<-- Should be zero')

This one returns a truly exact solution, but it will only work for 2x2 matrices. However, it's the only solution here that actually benefits from the extra precision!
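
To see that benefit, you can compare the closed-form float128 eigenvalues against what LAPACK gives back after the unavoidable float64 downcast. A sketch, reusing ex and eig_2x2 from above:

vals_exact, _ = eig_2x2(ex)
vals_lapack, _ = np.linalg.eig(ex.astype(np.float64))
# The float64 cast has already rounded (and partly underflowed) the input,
# so the LAPACK eigenvalues can't match the float128 closed form exactly
print(vals_exact)
print(vals_lapack)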

Solution 3

As far as I know, numpy does not support higher than double precision (float64), which is the default if not specified.

Try using this: http://code.google.com/p/mpmath/

List of features (among others):

Arithmetic:

  • Real and complex numbers with arbitrary precision
  • Unlimited exponent sizes / magnitudes
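
For what it's worth, later mpmath releases also grew an arbitrary-precision eigenvalue solver, mpmath.eig, which didn't exist when this was written. A minimal sketch, assuming a reasonably recent mpmath:

import mpmath

mpmath.mp.dps = 50  # work with 50 significant decimal digits

A = mpmath.matrix([["-800.21", "-600.00"], ["-600.00", "-1000.48"]])
ex = A.apply(mpmath.exp)  # element-wise exp; no underflow at this precision
E, ER = mpmath.eig(ex)    # eigenvalues in E, right eigenvectors in the columns of ER
print(E[0])
print(E[1])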

Comments

  • jarondl (almost 2 years)

    I have a numpy 2d array [medium/large sized - say 500x500]. I want to find the eigenvalues of the element-wise exponent of it. The problem is that some of the values are quite negative (-800, -1000, etc.), and their exponents underflow (meaning they are so close to zero that numpy treats them as zero). Is there any way to use arbitrary precision in numpy?

    The way I dream it:

    import numpy as np
    
    np.set_precision('arbitrary') # <--- Missing part
    a = np.array([[-800.21,-600.00],[-600.00,-1000.48]])
    ex = np.exp(a)  ## Currently warns about underflow
    eigvals, eigvecs = np.linalg.eig(ex)
    

    I have searched for a solution with gmpy and mpmath to no avail. Any ideas will be welcome.

  • jarondl (almost 13 years)
    Thanks @Sinthet, but that does not solve the problem. The numbers are already floats; it's just that exp(-800) is about 3.7E-348, which is smaller than the smallest positive float numpy (or python) can represent (see the short demonstration at the end of the page).
  • jarondl (almost 13 years)
    That is certainly the most interesting answer so far, but I could not find an eigenvalue solver in mpmath. Do you know if it includes one, or do I have to construct it myself?
  • milancurcic (almost 13 years)
    Yes, I did not find it either. I don't think numpy itself can help you here, either combined with mpmath or bc (Spike's answer), so you would need to work around it to get your result. You can take a look at the numpy source to see what linalg.eig() looks like. It might not be too hard to implement in your program.
  • Joe Kington (almost 13 years)
    There is a numpy.float128 for what it's worth. I don't think it's supported by numpy.linalg, but it is for basic operations.
  • Joe Kington (almost 13 years)
    @milancurcic - For what it's worth, numpy.linalg.eig calls LAPACK routines (which is why numpy.linalg doesn't support 128-bit precision). Regardless, looking at numpy.linalg.eig's source is unlikely to help you implement a basic eigenvalue solver.
  • milancurcic (almost 13 years)
    @Joe Kington - is it the newer version of numpy (1.6.*) that has numpy.float128? The version I have (1.5.1) does not have such a dtype.
  • Joe Kington (almost 13 years)
    @milancurcic - No, numpy has had float128 for a very long time (at least since 1.1, and I'm pretty sure it was there a long time before that). However, it's only present on a 64-bit system. Otherwise, I think there's a float96 on 32-bit systems? I could be quite wrong about the float96 part, though...
  • asmeurer (almost 12 years)
    By the way, SymPy uses mpmath for its arbitrary precision floating point numbers.
  • Tomasz Gandor (over 4 years)
    SymPy is the place to go for many mathematical problems. Meanwhile, if you need arbitrary-precision ints, which don't overflow on simple matrix multiplications even when they have dozens of digits, you can use dtype=object. Of course, this won't help the OP, who is dealing with arbitrary-precision floats.
  • BR123 (about 2 years)
    I think you also need a special treatment for c != 0 and b != 0 to determine the correct eigenvectors, or am I missing something?
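
Finally, a short demonstration of the underflow discussed in the comments above (assuming a platform where np.float128 exists, e.g. 64-bit Linux on x86):

import numpy as np

# float64 bottoms out around 5e-324, so exp(-800) underflows to zero...
print(np.exp(np.float64(-800.0)))   # 0.0
# ...while np.float128 (80-bit extended precision on x86) has a much
# larger exponent range and can still represent it
print(np.exp(np.float128(-800.0)))  # ~3.7e-348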