How to avoid error accumulation when doing vector transformation?
113 views (last 30 days)
Show older comments
serige
on 5 Nov 2024 at 7:56
Answered: Shashi Kiran
on 7 Nov 2024 at 9:02
I am working on an eigenvalue problem. For simplicity, let's say I have a matrix M which has an eigenvalue 1 and an associated left eigenvector v (assuming the eigenspace is one-dimensional), so
v*M = v
Now, if I want v*(M - diag(p)) = ones(1,n), I can take p = (v - 1) ./ v, since
v*(M - diag(p)) = v*M - v.*p = v - (v - 1) = ones(1,n)
So if I have a vector that is really close to v, I would expect the above expression to be very close to the all-ones vector as well, but this is not the case. Here I have a (row) stochastic matrix of size 1000-by-1000, and v is a good approximation of the stationary vector, which I just computed:
>> norm(v*M1000-v, 1)
ans =
1.0300e-17
>> p = (v - 1) ./ v;
>> norm(v*(M1000 - diag(p)) - ones(1,1000),1)
ans =
4.0290e-13
I am trying to understand what makes these two errors so different from each other. The gap gets worse as the matrix size grows. Is there a better way to compute the expression so that the errors are comparable?
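For comparison, the same computation on a tiny matrix stays near machine precision. A minimal sketch (the eig-based construction of v here is just one way to get a stationary vector, not how I computed it above):

```matlab
% Small-scale sanity check of the identity v*(M - diag(p)) = ones(1, n)
n = 4;
M = rand(n);
M = M ./ sum(M, 2);                    % random row-stochastic matrix
[V, D] = eig(M');                      % left eigenvectors of M
[~, k] = max(real(diag(D)));           % pick the eigenvalue closest to 1
v = real(V(:, k)).';                   % stationary row vector
v = v / sum(v);                        % normalize so the entries sum to 1
p = (v - 1) ./ v;
norm(v * (M - diag(p)) - ones(1, n), 1)   % near machine precision for small n
```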
2 Comments
Shashi Kiran
on 5 Nov 2024 at 9:33
As specified, M1000 is a 1000x1000 matrix and v is a 1000x1 eigenvector. How is the multiplication v*M1000 carried out, or am I misunderstanding something?
Could you provide M1000 and v so I can examine the issue further?
Accepted Answer
Shashi Kiran
on 7 Nov 2024 at 9:02
The difference in error between the two norm calculations arises from the nature of the operations being performed.
This is because:
- The first calculation, norm(v * M1000 - v, 1), directly checks the eigenvector residual. Every quantity involved (the entries of v and of v*M1000) has magnitude around 1/n, so the absolute rounding errors are correspondingly tiny.
- The second calculation, norm(v * (M1000 - diag(p)) - ones(1, 1000), 1), involves diag(p), whose entries have magnitude around n (since p = (v - 1) ./ v and the entries of v are around 1/n). The intermediate quantities are therefore about n times larger, and rounding errors of relative size eps in them accumulate into a much larger absolute error than in the first calculation.
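To make this concrete, here is a rough back-of-the-envelope sketch, using an idealized v whose entries are all exactly 1/n (purely for illustration):

```matlab
% Rough estimate of the rounding error in the second expression
n = 1000;
v = ones(1, n) / n;     % idealized stationary vector: every entry is 1/n
p = (v - 1) ./ v;       % entries of p have magnitude about n - 1
max(abs(p))             % ~ n
% The products v .* p have magnitude ~ 1, so each entry of the residual
% carries an absolute rounding error on the order of eps; the 1-norm
% sums n such errors:
n * eps                 % same order of magnitude as the observed 4.0e-13
```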
To reduce these errors, you can use MATLAB's vpa (variable-precision arithmetic) function, which performs the calculations at higher precision. For more details, refer to the documentation: https://www.mathworks.com/help/symbolic/sym.vpa.html. Using higher precision (32 or more decimal digits) can significantly reduce floating-point error, especially for larger matrices. This approach, however, can be computationally expensive.
Here is how you can implement this:
% Initialize data
n = 1000;
M1000 = abs(randn(n, n));
M1000 = M1000 ./ sum(M1000, 2);   % normalize rows to get a row-stochastic matrix
v = null(eye(n) - M1000', 'r');   % stationary vector: v*M1000 = v
v = v';
v = v ./ sum(v);                  % normalize so the entries sum to 1
% Convert M1000 and v to symbolic with higher precision
M1000_sym = vpa(M1000, 32);       % set precision to 32 digits
v_sym = vpa(v, 32);
p_sym = (v_sym - 1) ./ v_sym;
% Compute norms in higher precision
norm1_sym = norm(v_sym * M1000_sym - v_sym, 1);
fprintf('norm(v_sym * M1000_sym - v_sym, 1) = %.4e\n', double(norm1_sym));
expression_sym = v_sym * (M1000_sym - diag(p_sym)) - ones(1, n);
norm2_sym = norm(expression_sym, 1);
fprintf('norm(v_sym * (M1000_sym - diag(p_sym)) - ones(1, n), 1) = %.4e\n', double(norm2_sym));
Hope this helps!
0 Comments
More Answers (0)