Sparse matrix multiplication complexity and CPU time

Cem Gormezano on 9 Sep 2020
Commented: Cem Gormezano on 10 Sep 2020
I am multiplying two sparse matrices, $A$ and its transpose $A^T$, to form the product $A^T A$. From what I know, the complexity of this operation depends on nnz(A). Yet when I generate a random sparse matrix with fixed nnz(A) but an increasing number of rows and compute $A^T A$, the CPU time the operation takes increases. Am I missing something?
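For reference, a minimal sketch of the kind of test I mean (the particular sizes, and the use of tic/toc, are just illustrative choices):

  n = 1000;                           % fixed number of columns
  k = 1e5;                            % target number of nonzeros
  for m = [1e4 1e5 1e6 1e7]           % increasing number of rows
      % duplicate (i,j) pairs are summed, so nnz(A) can be slightly below k
      A = sparse(randi(m,k,1), randi(n,k,1), rand(k,1), m, n);
      tic;
      B = A'*A;
      fprintf('m = %8d, nnz(A) = %6d, time = %.4f s\n', m, nnz(A), toc);
  end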

Accepted Answer

Dana on 9 Sep 2020
Assuming by A^T you mean the transpose of A, and assuming you already have both A and A^T stored, then yes, the complexity of A^T*A should depend only on nnz(A) and on the number of rows of A^T (which equals the number of columns of A). So if you increase the number of rows m of A but keep the number of columns fixed, computing time should eventually stop increasing with m. There are some important caveats here, though:
  • This again assumes you have already performed the transpose of A. If that step is included in your timed code, then the computation is no longer independent of m, since the transpose operation itself has complexity linear in m (see the sketch after this list).
  • Complexity is a limiting characteristic, i.e., it characterizes how the computational burden grows with m when m is already large. It need not hold for small m.
  • On top of that, complexity describes a sort of "average growth" rate of computation time. Actual computation times are subject to randomness from several different sources, so we shouldn't expect identical times as m increases, even when m is large to begin with.
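To illustrate the first caveat, here is a rough sketch that separates the two cases (the sizes and the use of timeit are just illustrative choices, not a definitive benchmark):

  n = 1000;  k = 1e5;                     % fixed columns and nonzeros
  for m = [1e5 1e6 1e7]                   % increasing number of rows
      A  = sparse(randi(m,k,1), randi(n,k,1), rand(k,1), m, n);
      At = A';                            % transpose computed outside the timed code
      tProductOnly   = timeit(@() At*A);  % should eventually level off as m grows
      tWithTranspose = timeit(@() A'*A);  % if the transpose is formed here, this also pays an m-dependent cost
      fprintf('m = %8d: product only %.4f s, with transpose %.4f s\n', ...
              m, tProductOnly, tWithTranspose);
  end

If the product-only timings still grow noticeably with m, the other two caveats (m not yet large enough, and run-to-run noise) are the likely explanation.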

More Answers (0)
