Eigenvalue and eigenvector computation on extremely ill-conditioned matrices
Hello everyone, I have a question about the eig function for computing eigenvalues and eigenvectors. I have a pair of matrices A and P for which I want to solve the generalized eigenproblem A v = lambda P v. These matrices are 160x160, singular, non-symmetric and extremely ill-conditioned (cond(A) = 1e18, cond(P) = 1e28, rcond(A) = rcond(P) = 0), and of low rank (rank(A) = 150, rank(P) = 120). Since P is singular, I clearly cannot invert it and reduce to the standard eigenproblem inv(P)*A v = lambda v, so I have to call eig on the generalized problem:
eig(A,P), which automatically chooses the QZ algorithm to compute the eigenvalues.
Unfortunately, roughly 40 of these eigenvalues are Inf + 0i, and in some cases A v is not equal to lambda P v, so the problem is not solved accurately. The eigenvectors corresponding to the Inf eigenvalues suggest that in those cases a finite eigenvalue, of the order of 10^15, could exist: so why does eig not compute them?
I have tried balancing and normalizing each row of A and P to decrease the condition number, but the result does not change.
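For reference, the row normalization mentioned above can be sketched as follows; left-scaling both matrices by the same diagonal leaves the generalized eigenvalues and eigenvectors of the pencil unchanged:

```matlab
% Sketch: row scaling of the pencil (A, P). Left-multiplying both
% matrices by the same nonsingular diagonal D changes neither the
% generalized eigenvalues nor the eigenvectors of A*v = lambda*P*v.
r  = max(abs(A), [], 2) + max(abs(P), [], 2);  % row magnitudes
D  = diag(1 ./ max(r, eps));                   % guard against zero rows
As = D * A;
Ps = D * P;
[V, E] = eig(As, Ps);   % same spectrum as eig(A, P)
```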
I have also tried using the pseudoinverse of P to solve the standard problem eig(pinv(P)*A), and in this case I get no Inf eigenvalues. However, I am not sure whether the eigenvalues computed with this method are a solution of my original problem: can I solve it this way? To be consistent I should solve eig(pinv(P)*A, pinv(P)*P), but the second matrix remains singular, so Inf eigenvalues are computed in this case as well.
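Whether the eigenpairs from the pinv-based approach also solve the original pencil can be checked directly through the residual norm of A v - lambda P v; a sketch, assuming A and P are already in the workspace:

```matlab
% Check whether eigenpairs of pinv(P)*A also satisfy A*v = lambda*P*v.
[V, D]  = eig(pinv(P)*A);
lambda  = diag(D);
for k = 1:size(A, 1)
    v   = V(:, k);
    res = norm(A*v - lambda(k)*P*v) / max(norm(A*v), eps);  % relative residual
    if res > 1e-8
        fprintf('eigenpair %d (lambda = %g): residual %g\n', k, lambda(k), res);
    end
end
```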
Does anyone have an idea of how to solve this ill-conditioned problem?
Thanks in advance
0 comments
Answers (2)
Sam Chak
on 5 Oct 2023
Hi @Andrea
You may consider using the balance() function to apply diagonal scaling to the ill-conditioned matrix, thereby improving the conditioning of the eigenvalue problem. If this approach proves ineffective, you may need to resort to specialized algorithms such as the Implicitly Restarted Arnoldi Method (IRAM), which handle ill-conditioned matrices well.
Radke's Master's thesis, which presents a MATLAB implementation of IRAM for tackling large-scale eigenvalue problems, is accessible via the following link:
% Ill-conditioned matrix
A = [-1    1e-2  1e-4;
      1e+2 -1    1e-2;
      1e+4  1e+2 -1];
[Va, Ea] = eig(A)    % eigenvectors/eigenvalues of the original matrix
cond(Va)             % conditioning of the eigenvector matrix
[T, B] = balance(A)  % diagonal similarity scaling: B = T\A*T
[Vb, Eb] = eig(B)    % eigenproblem of the balanced matrix
cond(Vb)             % compare against cond(Va)
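As an alternative to implementing IRAM from scratch, MATLAB's built-in eigs is based on Krylov (Arnoldi-type) methods and accepts a second matrix for generalized problems. A minimal sketch using a nonzero shift, assuming A - sigma*P is nonsingular for the chosen sigma:

```matlab
% Sketch: a few generalized eigenvalues via eigs (Krylov/Arnoldi-based).
% A nonzero shift sigma keeps the shift-invert factorization away from
% the singularities of A and P; A - sigma*P must be nonsingular.
k     = 20;                     % number of eigenvalues requested
sigma = 1.0;                    % shift near the region of interest
[V, D] = eigs(A, P, k, sigma);  % eigenvalues of A*v = lambda*P*v near sigma
diag(D)
```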
7 comments
Sam Chak
on 6 Oct 2023
Hi @Andrea
Previously, I mentioned the Arnoldi method. The Lanczos algorithm is quite well-known for the computation of eigenvalues. However, it appears that your interest lies not in developing the algorithm from scratch in MATLAB but in utilizing eigenvalue-finding tools from algorithm libraries capable of efficiently handling sparse matrices.
My colleague recommends using SuiteSparse, as it should prove effective for relatively small to moderately sized sparse matrices such as yours. You can find more information here:
Christine Tobler
on 9 Oct 2023
If the matrix P has rank 120 and its size is 160, you should expect 40 eigenvalues to be Inf; this is how singularities in the second input matrix are represented by the generalized eig.
For the simple case where A and P are both diagonal, each eigenvalue is just A(i, i) / P(i, i). So if a diagonal element of P is zero, the corresponding eigenvalue is Inf. This is usually fine; in many practical problems the Inf eigenvalues can simply be ignored.
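The diagonal case above can be reproduced in a few lines; note that the ordering of the output eigenvalues may differ:

```matlab
% Diagonal pencil: a zero on P's diagonal produces an Inf eigenvalue.
A = diag([2 3 5]);
P = diag([1 1 0]);    % last diagonal entry is zero, so P is singular
e = eig(A, P)         % contains 2, 3 and Inf (in some order)
e = e(isfinite(e))    % keep only the finite eigenvalues
```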
So my question would be, is it really a problem that some of the computed eigenvalues are Inf? That probably depends on what your next steps are going to be with those computed results.
0 comments