EIGS: Why is 'smallestabs' faster than 'largestabs'?

Hi! Let L be the Laplacian matrix of a grid graph with n nodes, D a diagonal matrix with positive entries, and s a vector of dimension n. I want to find x such that L*x = lambda*(D + s*s')*x for the second smallest generalized eigenvalue lambda. I've tried running (1) "eigs(L, S, 2, "smallestabs")", (2) "eigs(S*L, 2, "smallestabs")" and (3) "eigs(speye(n) - S*L, 2, "largestabs")", where S is an analytical form for inv(D + s*s'), so I don't need to compute the inverse explicitly. It turns out that (1) and (2) are faster than (3), and (1) is faster than (2). I thought that computing "largestabs" should be faster. Am I wrong about that?
Beyond this question, I'd like to use a function handle instead of an explicit definition of S or L, since S has a lot of structure and S*L*x can be computed very efficiently. However, it seems that I can't accomplish that when using "smallestabs", since in that case I have to supply a function that computes A\x, not A*x. Can someone help me work around this problem?
Many thanks!

1 Comment

By the way, please correct if I am wrong, but I am assuming that (1), (2) and (3) output the same eigenvectors.


Accepted Answer

Christine Tobler on 19 Feb 2021
Edited: Christine Tobler on 19 Feb 2021


Whether 'largestabs' or 'smallestabs' is faster depends on the problem: 'smallestabs' needs to factorize the first input, which can make it slower than 'largestabs'. However, the speed of convergence of the iteration that follows depends on the spectrum of the matrix used, and sometimes that spectrum is better in the 'smallestabs' case (where we're using the inverse of the original problem) than if the matrix is just shifted. Roughly speaking, this comes down to how large the ratios between the eigenvalues to be computed are.
It's usually a good idea to keep a generalized eigenvalue problem in generalized form, because computing S*L results in a nonsymmetric matrix, which prevents us from using symmetric methods, which are usually faster and more accurate.
The problem here is that D+s*s' will be a dense matrix, so whenever that's computed it will slow down all the computation by a lot.
I'd first try solving the reverse eigenproblem instead, (D+s*s')*x = mu*L*x, where lambda = 1/mu and now you'd want to compute the second largest eigenvalue:
eigs(D+s*s', L, 2, "largestabs")
Here you can then use a function handle to apply D + s*s' instead of forming the matrix explicitly.
Note that I'm assuming here that L is symmetric positive definite. The approach will still work otherwise, but I'm not sure how well convergence will turn out in that case - generalized eigenvalue solvers are usually optimized for the case where the right-hand side matrix is s.p.d., since it is often a mass matrix.
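A minimal sketch of that function-handle call, with stand-in data (the real L, D, s come from the application; here L is a nonsingular s.p.d. tridiagonal matrix so the Cholesky factorization inside eigs succeeds):

```matlab
rng(0);
n = 400;
e = ones(n, 1);
L = spdiags([-e 2*e -e], -1:1, n, n);   % sparse s.p.d. stand-in for the Laplacian
D = spdiags(1 + rand(n, 1), 0, n, n);   % diagonal matrix with positive entries
s = randn(n, 1);

% Apply (D + s*s')*x in O(n) without ever forming the dense matrix:
Afun = @(x) D*x + s*(s'*x);

mu = eigs(Afun, n, L, 2, 'largestabs'); % solves (D+s*s')*x = mu*L*x
lambda = 1 ./ mu;                       % eigenvalues of L*x = lambda*(D+s*s')*x
```

For 'largestabs' the function handle only needs to compute A*x, which is exactly what the rank-one structure makes cheap.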

5 Comments

Hi! Thanks for the info! It was very helpful! I was afraid of using (D+s*s')*x = mu*L*x because L is positive semidefinite (it's a graph Laplacian matrix). How does eigs convergence usually work in that case? Do you also know the effect on the results if I add something like 1e-10 * eye(length(L)) to L and then proceed as you suggested?
In any case, thanks a lot!
So L being semidefinite is a bit of a problem in either formulation, because it needs to be inverted to find the smallest eigenvalues (either this is done internally by EIGS, or externally in the function handle). If you pass in an exactly singular matrix, EIGS will actually just error:
>> n = 1e3; S = speye(n); L = speye(n); L(end) = 0;
>> eigs(L, S, 2, 'smallestabs')
Error using eigs>WarnIfIllConditioned (line 1259)
First input matrix is singular. The shift 0 is an eigenvalue. Consider using a nonzero
numeric value for sigma.
>> eigs(S, L, 2, 'largestabs');
Error using eigs>WarnIfIllConditioned (line 1259)
Singular second input matrix is only supported when sigma is 'SM' or a scalar double.
So the thing to do here is apply a shift to L, yes. It doesn't necessarily have to be as small as 1e-10, as long as you reverse the shift in the eigenvalues that are computed.
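For illustration, a minimal sketch of the shift-and-reverse arithmetic with stand-in data (hypothetical sigma; this version forms the second matrix explicitly, which is exactly the dense-matrix drawback discussed next):

```matlab
rng(0);
n = 400;
e = ones(n, 1);
L = spdiags([-e 2*e -e], -1:1, n, n);
L(1,1) = 1; L(n,n) = 1;               % path-graph Laplacian: singular, positive semidefinite
D = spdiags(1 + rand(n, 1), 0, n, n);
s = randn(n, 1);

sigma = 1;                            % the shift; need not be tiny
S = D + s*s';                         % formed explicitly only for illustration (dense!)
% (L + sigma*S)*x = (lambda + sigma)*S*x, so subtract sigma afterwards:
lambda = eigs(L + sigma*S, S, 2, 'smallestabs') - sigma;
```

Since the eigenvalues lambda of the original pencil are nonnegative, the two smallest values of lambda + sigma still correspond to the two smallest lambda.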
So you can apply any shift sigma, as long as you shift with the matrix S here. But unfortunately that's a problem, since S is a dense matrix. (Note: the reason the right matrix can't be passed in as a function handle is that we need to compute its Cholesky decomposition - and unfortunately that also comes out dense for D + s*s'.)
So maybe we should go back to using the original 'smallestabs' formulation, where you can use Sherman-Morrison to split L + sigma*S up into (L + sigma*D) + sigma*s*s', use backslash for the sparse matrix, and update for the rank-one addition. This won't know about the symmetry, but it will avoid constructing any dense matrix.
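A sketch of that Sherman-Morrison approach (stand-in data, hypothetical sigma). To avoid passing a dense second matrix, the operator (L + sigma*S)^(-1) * S with S = D + s*s' is applied as a standard nonsymmetric problem via a function handle:

```matlab
rng(0);
n = 400;
e = ones(n, 1);
L = spdiags([-e 2*e -e], -1:1, n, n);
L(1,1) = 1; L(n,n) = 1;                % path-graph Laplacian (singular)
D = spdiags(1 + rand(n, 1), 0, n, n);
s = randn(n, 1);

sigma = 1;
K  = L + sigma*D;                      % sparse s.p.d. part of L + sigma*(D + s*s')
dK = decomposition(K, 'chol');         % factorize once, reuse for every solve
Ks = dK \ s;                           % precompute K^(-1)*s
alpha = sigma / (1 + sigma*(s'*Ks));   % Sherman-Morrison scalar

% y = (K + sigma*s*s') \ b via Sherman-Morrison (solves with K twice, for brevity):
smSolve = @(b) (dK\b) - Ks * (alpha*(s'*(dK\b)));

% Standard-form operator: Afun(x) = (L + sigma*S) \ (S*x), S never formed:
Afun = @(x) smSolve(D*x + s*(s'*x));

nu = eigs(Afun, n, 2, 'largestabs');   % nu = 1/(lambda + sigma)
lambda = 1 ./ nu - sigma;              % two smallest lambda of L*x = lambda*(D+s*s')*x
```

Only the sparse matrix K is ever factorized; the rank-one term enters through the precomputed vector Ks and the scalar alpha.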
Ok, that's awesome. That makes a lot of sense. Thank you so much!
Hi Mrs Tobler,
Also, if you don't mind me asking one last question, the problem that I need to solve is , so I thought that computing
eigs(L - s*s', D, 2, 'smallestabs')
and
eigs(L, D + s*s', 2, 'smallestabs')
should give the same result, but they don't. Would you know why? Thanks!
So the first here will return the smallest (lambda, x) s.t.
lambda*D*x - L*x + s*s'*x = 0,
while the second will return the smallest (lambda, x) s.t.
lambda*D*x - L* x + lambda*s*s'*x = 0,
so on the face of it these aren't solving the same problem. However, if you're looking to solve (L - s*s' - D) * x = 0, that's a different question again, and I might try to just compute
eigs(-L + D + s*s', 1, 'smallestabs')
in that case.
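A quick sketch with stand-in data, checking that the two calls from the question generally return different values (no expected output is asserted, since the values depend on the random data):

```matlab
rng(0);
n = 200;
e = ones(n, 1);
L = spdiags([-e 2*e -e], -1:1, n, n);      % nonsingular s.p.d. stand-in
D = spdiags(1 + rand(n, 1), 0, n, n);
s = randn(n, 1) / sqrt(n);

d1 = eigs(L - s*s', D, 2, 'smallestabs');  % lambda*D*x - L*x + s*s'*x = 0
d2 = eigs(L, D + s*s', 2, 'smallestabs');  % lambda*D*x - L*x + lambda*s*s'*x = 0
% d1 and d2 generally differ: the s*s' term is scaled by lambda in one case only
```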


More Answers (0)
