Efficiently calculating the trace of a matrix product
Henry Shackleton
on 9 May 2019
Commented: James Tursa
on 10 May 2019
I have two NxN square matrices, A and B, and I would like to calculate the trace of AB. Since the trace of AB depends only on its diagonal elements, it should in principle not be necessary to compute all of AB, which would reduce the number of operations from N^3 to N^2. My question is twofold:
- Does calling trace(A*B) in MATLAB automatically exploit this fact?
- If not, is there an efficient way of doing this that doesn't involve for loops?
Thanks!
0 comments
Accepted Answer
Matt J
on 9 May 2019
Edited: Matt J
on 9 May 2019
Bt = B.';                     % row-major copy of B, so Bt(:) pairs B(j,i) with A(i,j)
traceProduct = A(:).'*Bt(:);  % trace(A*B) as a single dot product, O(N^2)
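This works because trace(A*B) equals the sum over i and j of A(i,j)*B(j,i), so the whole computation is one dot product over N^2 element pairs. A minimal check (the size and variable names below are purely illustrative):
N = 4;
A = rand(N); B = rand(N);
Bt = B.';
slow = trace(A*B);       % forms the full N-by-N product first
fast = A(:).'*Bt(:);     % dot product only, no matrix product
abs(slow - fast)         % agrees to floating-point round-off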
4 comments
Matt J
on 10 May 2019
Another way to test whether trace(A*B) is optimized is to time both implementations on a large matrix:
A=rand(3000); B=A;
tic;
version1 = trace(A*B);
toc;
%Elapsed time is 0.956904 seconds.
tic;
version2 = A(:).'*reshape(B.',[],1);
toc;
%Elapsed time is 0.068032 seconds.
It is pretty clear that the direct implementation is not optimized.
James Tursa
on 10 May 2019
I get the same results as Matt on various versions. And even if some (perhaps future) version of MATLAB does perform this optimization, the only way it could beat the implementation above is by avoiding the physical transpose, i.e., compiled code that never explicitly forms B.'.
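For reference, the transpose copy can also be avoided in plain MATLAB with a loop over rows, at the cost of interpreted-loop overhead (a minimal sketch, not benchmarked here):
traceProduct = 0;
for i = 1:size(A,1)
    traceProduct = traceProduct + A(i,:)*B(:,i);  % i-th diagonal entry of A*B
end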
More Answers (0)