What's the fastest way to find eigenvalues/vectors in Python?


Solution 1

  • If your matrix is sparse, instantiate it using a constructor from scipy.sparse, then use the analogous eigenvalue/eigenvector methods in scipy.sparse.linalg. From a performance point of view, this has two advantages:

    • your matrix, built from the scipy.sparse constructor, is smaller in proportion to how sparse it is.

    • the eigenvalue/eigenvector methods for sparse matrices (eigs, eigsh) accept an optional argument, k, which is the number of eigenvector/eigenvalue pairs you want returned. Nearly always, the number required to account for >99% of the variance is far less than the number of columns, which you can verify ex post. In other words, you can tell the method not to calculate and return all of the eigenvector/eigenvalue pairs: beyond the (usually) small subset required to account for the variance, it's unlikely you need the rest.

  • Use the linear algebra library in SciPy, scipy.linalg, instead of the NumPy library of the same name. The two libraries have the same name and the same method names, yet there is a difference in performance. It arises because numpy.linalg is a less faithful wrapper on the analogous LAPACK routines, sacrificing some performance for portability and convenience (i.e., to comply with the NumPy design goal that the entire NumPy library should be buildable without a Fortran compiler). scipy.linalg, on the other hand, is a much more complete wrapper on LAPACK, built with f2py.

  • Select the function appropriate for your use case; in other words, don't use a function that does more than you need. In scipy.linalg there are several functions to calculate eigenvalues; the differences are not large, but by careful choice of the function you should see a performance boost. For instance:

    • scipy.linalg.eig returns both the eigenvalues and the eigenvectors
    • scipy.linalg.eigvals returns only the eigenvalues. So if you only need the eigenvalues of a matrix, do not use linalg.eig; use linalg.eigvals instead.
    • if you have a real-valued symmetric matrix (equal to its transpose), use scipy.linalg.eigh (scipy.sparse.linalg.eigsh is the sparse counterpart)
  • Optimize your SciPy build. Preparing your SciPy build environment is done largely in SciPy's setup.py script. Perhaps the most significant option performance-wise is identifying optimized LAPACK libraries, such as ATLAS or the Accelerate/vecLib framework (OS X only?), so that SciPy can detect them and build against them. Depending on the rig you have at the moment, optimizing your SciPy build and then re-installing can give you a substantial performance increase. Additional notes from the SciPy core team are here.
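As a minimal sketch of the sparse approach above: the random matrix, its size, and its density below are illustrative assumptions, not data from the original answer. The point is that eigsh takes a k argument so only a handful of eigenpairs are computed, rather than all n.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Build a random sparse matrix (illustrative data), then symmetrize
# it so that eigsh (which requires a symmetric matrix) applies.
rng = np.random.default_rng(0)
n = 1000
M = sp.random(n, n, density=0.01, random_state=rng, format="csr")
M = (M + M.T) / 2

# Ask for only the k largest-magnitude eigenpairs instead of all n.
k = 5
vals, vecs = eigsh(M, k=k)

print(vals.shape)  # k eigenvalues
print(vecs.shape)  # one length-n eigenvector per eigenvalue
```

The same call with k close to n would be far slower; for a handful of dominant eigenpairs of a large sparse matrix, eigsh is usually the right tool.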

Will these functions work for large matrices?

I should think so. These are industrial-strength matrix decomposition methods, which are just thin wrappers over the analogous Fortran LAPACK routines.

I have used most of the methods in the linalg library to decompose matrices in which the number of columns is usually between about 5 and 50, and in which the number of rows usually exceeds 500,000. Neither the SVD nor the eigenvalue methods seem to have any problem handling matrices of this size.

Using the SciPy library linalg, you can calculate eigenvectors and eigenvalues with a single call, using any of several methods from this library: eig, eigvalsh, and eigh.

>>> import numpy as NP
>>> from scipy import linalg as LA

>>> A = NP.random.randint(0, 10, 25).reshape(5, 5)
>>> A
    array([[9, 5, 4, 3, 7],
           [3, 3, 2, 9, 7],
           [6, 5, 3, 4, 0],
           [7, 3, 5, 5, 5],
           [2, 5, 4, 7, 8]])

>>> e_vals, e_vecs = LA.eig(A)
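To illustrate the "pick the narrower function" advice, here is a small sketch contrasting eigvals and eigh; the matrix is regenerated (as floats) so the snippet is self-contained, and the symmetrized matrix S is an assumption for the sake of the eigh example.

```python
import numpy as np
from scipy import linalg as LA

A = np.random.randint(0, 10, 25).reshape(5, 5).astype(float)

# Only eigenvalues needed: eigvals skips computing eigenvectors.
vals_only = LA.eigvals(A)

# Real symmetric matrix: eigh exploits symmetry and returns
# real eigenvalues in ascending order.
S = (A + A.T) / 2
w, v = LA.eigh(S)

# Sanity check: each eigenpair satisfies S @ v_j = w_j * v_j.
assert np.allclose(S @ v, v * w)
```

If your matrix is symmetric (or Hermitian), eigh is both faster and numerically better behaved than the general eig.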

Solution 2

If your matrices are sparse, you can try using scipy's sparse eigenvalue function, which should be faster:

http://docs.scipy.org/doc/scipy/reference/sparse.linalg.html

You might also check out specialized packages like SLEPc, which has python bindings and can do calculations in parallel using mpi:

http://code.google.com/p/slepc4py/



Author: yurib

Updated on July 09, 2022

Comments

  • yurib
    yurib almost 2 years

    Currently I'm using NumPy, which does the job. But as I'm dealing with matrices with several thousands of rows/columns, and later this figure will go up to tens of thousands, I was wondering if there is a package that can perform this kind of calculation faster?

    • JAB
      JAB almost 13 years
      Does numpy not scale well? I thought it was designed for things like that. Isn't that the whole point of vectorized operations?
    • Michael Tamillow
      Michael Tamillow over 5 years
      tens of thousands is not a lot and numpy will crush it quickly.
  • David Ketcheson
    David Ketcheson about 9 years
    On my machine, numpy's eigvals is actually faster than scipy's.
  • Vision
    Vision over 6 years
    I am using scipy.sparse.linalg.eigsh on a 40,000 by 40,000 symmetric sparse matrix. It takes me nearly 30 mins to find the 125 smallest eigenvectors. So I am also wondering what the most efficient eigenvector solver in Python is.
  • Sideshow Bob
    Sideshow Bob about 6 years
    Is MKL, which comes by default in Anaconda these days, as fast as it's going to get in terms of linalg optimization? docs.anaconda.com/mkl-optimizations