Speed up python code for computing matrix cofactors
Solution 1
If your matrix is invertible, the cofactor is related to the inverse:

    def matrix_cofactor(matrix):
        return np.linalg.inv(matrix).T * np.linalg.det(matrix)
This gives large speedups (~1000x for 50x50 matrices). The main reason is fundamental: this is an O(n^3) algorithm, whereas the minor-det-based one is O(n^5).
This probably means that also for non-invertible matrices there is some clever way to calculate the cofactor (i.e., not the minor-determinant formula you use above, but some other equivalent definition).
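As a quick sanity check, the one-liner can be compared against a minor-based reference on a small invertible matrix (a sketch; `np.delete` is used here to build the minors instead of the index-array construction from the question):

```python
import numpy as np

def cofactor_fast(matrix):
    # O(n^3): cofactor matrix via C = det(A) * inv(A)^T.
    return np.linalg.inv(matrix).T * np.linalg.det(matrix)

def cofactor_minors(matrix):
    # O(n^5) reference: delete row i and column j, take the signed minor det.
    nrows, ncols = matrix.shape
    C = np.zeros((nrows, ncols))
    for row in range(nrows):
        for col in range(ncols):
            minor = np.delete(np.delete(matrix, row, axis=0), col, axis=1)
            C[row, col] = (-1) ** (row + col) * np.linalg.det(minor)
    return C

A = np.array([[2.0, 1.0], [5.0, 3.0]])  # det(A) = 1, so C equals inv(A)^T
print(cofactor_fast(A))
```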
If you stick with the det-based approach, what you can do is the following:
The majority of the time seems to be spent inside det. (Check out line_profiler to find this out yourself.) You can try to speed that part up by linking Numpy with the Intel MKL, but other than that, there is not much that can be done.
You can speed up the other part of the code like this:
    minor = np.zeros([nrows-1, ncols-1])
    for row in xrange(nrows):
        for col in xrange(ncols):
            minor[:row,:col] = matrix[:row,:col]
            minor[row:,:col] = matrix[row+1:,:col]
            minor[:row,col:] = matrix[:row,col+1:]
            minor[row:,col:] = matrix[row+1:,col+1:]
            ...
This gains some 10-50% of total runtime, depending on the size of your matrices. The original code uses Python range and list manipulations, which are slower than direct slice indexing. You could also try to be more clever and copy only the parts of the minor that actually change. However, after the above change close to 100% of the time is spent inside numpy.linalg.det, so further optimization of the other parts does not make much sense.
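To measure the gap on your own machine, a rough timing sketch along these lines can be used (the 30x30 size and the repeat count are arbitrary choices; the minor construction is the slice-based version from this answer):

```python
import timeit
import numpy as np

def cofactor_inv(matrix):
    # O(n^3) path via the inverse.
    return np.linalg.inv(matrix).T * np.linalg.det(matrix)

def cofactor_det(matrix):
    # O(n^5) path: minors built with slice assignments into a reused buffer.
    nrows, ncols = matrix.shape
    C = np.zeros((nrows, ncols))
    minor = np.zeros((nrows - 1, ncols - 1))
    for row in range(nrows):
        for col in range(ncols):
            minor[:row, :col] = matrix[:row, :col]
            minor[row:, :col] = matrix[row + 1:, :col]
            minor[:row, col:] = matrix[:row, col + 1:]
            minor[row:, col:] = matrix[row + 1:, col + 1:]
            C[row, col] = (-1) ** (row + col) * np.linalg.det(minor)
    return C

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 30))
t_inv = timeit.timeit(lambda: cofactor_inv(A), number=10)
t_det = timeit.timeit(lambda: cofactor_det(A), number=10)
print(f"inverse-based: {t_inv:.4f}s, det-based: {t_det:.4f}s")
```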
Solution 2
Instead of using the inverse and determinant, I'd suggest using the SVD:

    def cofactors(A):
        U, sigma, Vt = np.linalg.svd(A)
        N = len(sigma)
        g = np.tile(sigma, N)
        g[::(N+1)] = 1
        # Each diagonal entry of G is the product of all singular values
        # except the i-th; det(U)*det(Vt) supplies the +-1 scale factor.
        scale = np.linalg.det(U) * np.linalg.det(Vt)
        G = np.diag(scale * np.prod(np.reshape(g, (N, N)), axis=1))
        return U @ G @ Vt
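A self-contained way to sanity-check an SVD-based cofactor routine is to compare it with det(A) * inv(A)^T on an invertible matrix (a sketch; the ±1 factor det(U)·det(Vt) accounts for the sign conventions of the singular vectors, and `np.prod` is used since `np.product` is deprecated):

```python
import numpy as np

def cofactors_svd(A):
    # A = U @ diag(sigma) @ Vt. Row i of the reshaped g is sigma with the
    # i-th entry replaced by 1, so its product is prod of sigma_j for j != i.
    U, sigma, Vt = np.linalg.svd(A)
    N = len(sigma)
    g = np.tile(sigma, N)
    g[::(N + 1)] = 1
    scale = np.linalg.det(U) * np.linalg.det(Vt)  # +1 or -1
    G = np.diag(scale * np.prod(np.reshape(g, (N, N)), axis=1))
    return U @ G @ Vt

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
expected = np.linalg.det(A) * np.linalg.inv(A).T
print(np.allclose(cofactors_svd(A), expected))
```

Unlike the inverse-based formula, this route also works when A is singular, since no division by det(A) is involved.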
Solution 3
The calculation of np.array(range(row)+range(row+1,nrows))[:,np.newaxis]
does not depend on col, so you could move it outside the inner loop and cache the value. Depending on the number of columns you have, this might give a small optimization.
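A minimal sketch of that caching (written for Python 3, with `np.r_` standing in for the Python 2 `range` concatenation; the row-index array is hoisted out of the inner loop):

```python
import numpy as np

def cofactor_cached(matrix):
    nrows, ncols = matrix.shape
    C = np.zeros((nrows, ncols))
    for row in range(nrows):
        # Depends only on `row`, so build it once per outer iteration.
        row_idx = np.r_[0:row, row + 1:nrows][:, np.newaxis]
        for col in range(ncols):
            col_idx = np.r_[0:col, col + 1:ncols]
            C[row, col] = (-1) ** (row + col) * np.linalg.det(matrix[row_idx, col_idx])
    return C

A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(cofactor_cached(A))  # cofactor matrix [[4, -3], [-2, 1]]
```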
Updated on June 01, 2022
Comments
-
new name almost 2 years
As part of a complex task, I need to compute matrix cofactors. I did this in a straightforward way using this nice code for computing matrix minors. Here is my code:
    def matrix_cofactor(matrix):
        C = np.zeros(matrix.shape)
        nrows, ncols = C.shape
        for row in xrange(nrows):
            for col in xrange(ncols):
                minor = matrix[np.array(range(row)+range(row+1,nrows))[:,np.newaxis],
                               np.array(range(col)+range(col+1,ncols))]
                C[row, col] = (-1)**(row+col) * np.linalg.det(minor)
        return C
It turns out that this matrix cofactor code is the bottleneck, and I would like to optimize the code snippet above. Any ideas as to how to do this?
-
new name almost 13 years: Excellent answer! My matrices are invertible, so that one-liner is a huge time saver.
-
avmohan over 10 years: This calculates the adjoint matrix, not the cofactor matrix. det(A) * inverse(A) = adjoint(A)
-
pv. over 10 years: @v3ga: please actually read the answer. It computes det(A) * inverse(A)^T. Cofactor is the transpose of the adjugate.
-
user2393987 over 3 years: Could you please provide a reference for this algorithm? I am especially interested in what is happening with g and G. I have seen algorithms based on SVD, but they also used the determinants. E.g.: scicomp.stackexchange.com/questions/33028/…
-
AlphaBetaGamma96 over 3 years: This derivation comes from applying a Singular Value Decomposition to the general definition of the cofactor matrix, C = det(A) A^{-T}, which is equal to det(U) det(S) det(V) U S^{-1} V^T. det(U) and det(V) are just +1 or -1 as they're orthonormal matrices, and G is the reduction of det(S) S^{-1} (sometimes labelled Gamma). This all comes together as det(U)det(V) U G V^{T} (as shown above).