How to calculate trace(A' * B) using SVD vectors of A and B?
5 views (last 30 days)
Xiaohan Du
on 26 Jun 2018
Edited: Jan
on 29 Jun 2018
Hi all,
Imagine I have two large matrices with more rows than columns, and I need to calculate trace(A' * B) N times. I have two options: 1. calculate trace(A' * B) directly; 2. calculate only the column products that form the diagonal of A' * B, then sum them. I tested with the following minimal example, and it turns out the second option is faster:
clear; clc;
nd = 100000;
nt = 500;
A = rand(nd, nt);
B = rand(nd, nt);
N = 100;
%% no SVD
% option 1: form the full nt-by-nt matrix A'*B and take its trace.
tic
for i = 1:N
    abTr = trace(A' * B);
end
toc
% option 2: accumulate only the diagonal entries, column by column.
tic
abSum = 0;
for i = 1:N
    for j = 1:nt
        abSum = abSum + A(:, j)' * B(:, j);
    end
end
toc
Elapsed time is 58.320345 seconds.
Elapsed time is 13.558039 seconds.
Now I need to apply a truncated SVD to A and B to optimise storage. The following code keeps only the first 10 singular vectors:
% apply svd
[ua, sa, va] = svd(A, 'econ');
[ub, sb, vb] = svd(B, 'econ');
% group in cells
nRem = 10;
ucell = {ua va ub vb};
scell = {sa sb};
% truncate
ucell = cellfun(@(v) v(:, 1:nRem), ucell, 'un', 0);
scell = cellfun(@(v) v(1:nRem, 1:nRem), scell, 'un', 0);
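For reference, a rough storage comparison between the full matrices and the truncated factors (a sketch; the figures follow directly from the sizes above):
% Doubles stored: full A is nd*nt; its truncated SVD is nd*nRem + nRem^2 + nt*nRem.
fullBytes = nd * nt * 8;                         % ~381 MB for nd = 100000, nt = 500
svdBytes  = (nd*nRem + nRem^2 + nt*nRem) * 8;    % ~7.7 MB for nRem = 10
fprintf('full: %.1f MB, truncated: %.1f MB\n', fullBytes/2^20, svdBytes/2^20);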
My question is: is it possible to calculate the trace (an approximation, due to the truncation) using option 2 with these SVD vectors in ucell and singular values in scell? I can do it with option 1 using
trace(vb' * va * sa' * (ua' * ub) * sb)
But I'd like it to be faster by computing only the diagonal elements.
Many thanks!
0 comments
Accepted Answer
Anton Semechko
on 26 Jun 2018
Edited: Anton Semechko
on 26 Jun 2018
I am not aware of any identity that expresses trace(A'*B) in terms of the SVDs of A and B, and I don't think one exists. But there is an identity that does not require explicit calculation of the matrix A'*B: since the (j,j) entry of A'*B is the dot product of the j-th columns of A and B, the trace is just the sum of all elementwise products, i.e.
trace(A'*B) = sum(A(:).*B(:))
For example,
nd = 100000;
nt = 500;
A = rand(nd, nt);
B = rand(nd, nt);
tic
tr1 = trace(A'*B);
toc
% Elapsed time is 0.857053 seconds.
tic
tr2 = sum(A(:).*B(:));
toc
% Elapsed time is 0.210807 seconds.
As you can see, you obtain the result about 4 times faster when using the above identity. This identity is especially useful when nt is 'large', because the machine may not have sufficient memory to hold the nt-by-nt matrix A'*B (for nt = 10^5, that matrix alone would need nt^2 * 8 bytes, i.e. about 80 GB).
10 comments
Anton Semechko
on 28 Jun 2018
I thought the problem was resolved because you had found an efficient way to evaluate tr(A'*B) using the SVDs of A and B.
Nevertheless, it is of course possible to recover ONLY the diagonal elements of A'*B from the SVDs of A and B while staying within the memory constraints of the machine. However, I am not sure that computing these diagonal elements first and then summing them would speed up the evaluation of tr(A'*B).
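A minimal sketch of one way to do that, assuming the truncated ucell = {ua va ub vb} and scell = {sa sb} layout from the question:
% Diagonal of the truncated A'*B, never forming an nt-by-nt matrix.
% A ~ ua*sa*va', B ~ ub*sb*vb'  =>  A'*B ~ va*sa*(ua'*ub)*sb*vb'
M   = ucell{1}' * ucell{3};                 % ua' * ub, nRem-by-nRem
X   = ucell{2} * (scell{1} * M * scell{2}); % va*sa*(ua'*ub)*sb, nt-by-nRem
dAB = sum(X .* ucell{4}, 2);                % diag(X * vb'), computed row-wise
trApprox = sum(dAB);                        % approximate trace(A'*B)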
More Answers (1)
Jan
on 26 Jun 2018
Edited: Jan
on 26 Jun 2018
I assume this is faster:
nd = 100000;
nt = 500;
A = rand(nd, nt);
B = rand(nd, nt);
tic;
abTr = trace(A' * B);
toc % Elapsed time is 0.497233 seconds.
tic;
abSum = 0;
for j = 1:nt
    abSum = abSum + A(:, j)' * B(:, j);
end
toc; % Elapsed time is 0.141059 seconds.
tic
abTr2 = sum(A(:).*B(:));
toc % Elapsed time is 0.182408 seconds.
tic;
abTr3 = A(:).' * B(:);
toc % Elapsed time is 0.054548 seconds.
tic;
[ua, sa, va] = svd(A, 'econ');
[ub, sb, vb] = svd(B, 'econ');
toc % Elapsed time is 10.629806 seconds.
% And this does not even include the further computations
% relative error:
abs(abTr - abTr3) / abTr
The reshaping by (:) and the transposition create at most shared data copies, so no additional memory is used. Maybe the JIT acceleration can even call the dot product of the BLAS library directly, with the transposition provided as a flag.
Of course, for a more accurate measurement, running each variant in a loop would be smarter, or better, use timeit. But a factor of 3 (abTr3 versus abSum) is significant enough.
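For example, a timeit-based measurement could look like this (a sketch, reusing the A and B defined above):
% timeit runs each function handle several times and returns a robust estimate.
f1 = @() trace(A' * B);
f2 = @() A(:).' * B(:);
t1 = timeit(f1);
t2 = timeit(f2);
fprintf('trace: %.3f s, dot product: %.3f s, speedup: %.1fx\n', t1, t2, t1/t2);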
5 comments
Jan
on 29 Jun 2018
Edited: Jan
on 29 Jun 2018
A(:) creates a shared data copy: only the header of the variable is changed, but the data are not copied. The same happens for the transposition. But Matlab's JIT acceleration might be smart enough to avoid even the shared data copies by calling the underlying BLAS function DDOT directly. This function is multi-threaded: you can see this in the task manager, where all cores are loaded during the processing.
So A(:).' * B(:) starts as many threads as there are cores, and no memory has to be copied around in the slow RAM.
abSum = abSum + A(:, j)' * B(:, j) also calls DDOT and starts multiple threads. Starting a thread is very costly, and here we need 4 starts for each column. I'm not sure whether Matlab's JIT is smart enough to avoid creating the temporary vectors A(:,j) and B(:,j) in each iteration. But if it is performed explicitly, two variables have to be created and the elements of the columns copied. Calling the library function DDOT also costs some overhead.
Summary:
- Case 1: DDOT is called once and 4 threads are started.
- Case 2: For nt = 500, 1000 columns are copied, DDOT is called 500 times, and 2000 threads are started (on a 4-core CPU).
So for me the speedup is expected.
"the first has to calculate the product between every 2 vectors in A and B"
This might be a misunderstanding: A(:) and B(:) are only two vectors, not "every two vectors".