lobpcg returning incorrect results for large sparse symmetric matrices

Hi,
I need to solve symmetric generalized eigenvalue problems with large, sparse stiffness and mass matrices, say A and B. The problem is of the form A*v = lambda*B*v.
They are both Hermitian, and B is positive definite. Both are approximately of size 70 000 x 70 000; A has about 5 million nonzero values and B about 1.6 million. I'm using them both in MATLAB's sparse matrix format. A has a condition number on the order of 10^9 and B on the order of 10^6.
I had no problems getting the correct values/vectors with eigs. However, I would like to solve this problem using the lobpcg function available from MATLAB Central's File Exchange. This is partly because I'm interested in solving this problem in Python, where I would have to use a function identical to this lobpcg.
I tried this first on a scaled-down eigenvalue problem, using a = A(1:10000, 1:10000) and b = B(1:10000, 1:10000), finding the 10 smallest eigenvalues. eigs and lobpcg returned similar results for these values.
Here is the syntax I used for the two:
X = rand(68394, 10);                          % initial guess: 68394 = size(A,1), 10 columns -> 10 eigenpairs
[v, lambda] = lobpcg(X, A, B, 1e-5, 20, 0);   % tolerance 1e-5, at most 20 iterations, verbosity 0
[V, D] = eigs(A, B, 10, 'sm');                % 10 smallest-magnitude eigenvalues
I could not get lobpcg to work for the full-sized A and B, no matter how many iterations I used. lobpcg requires an initial "guess" for the eigenvectors, X, a matrix whose number of columns determines the number of eigenvalues desired. I made a plot (link below) of an eigenvalue computed correctly with eigs ("Matlab" in the legend), and of how the number of iterations affects the result from lobpcg. For lobpcg, I tried 4 different guesses for X, where X1 to X3 are random matrices and Xorthonormal is an orthonormalized random matrix (a sketch of such a guess follows below).
The plot shows that an unreasonable number of iterations would be necessary to get the correct values from lobpcg.
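For reference, here is a minimal sketch of how an orthonormalized random guess can be constructed; the use of orth is illustrative and not necessarily the exact code behind the Xorthonormal curve:

% One way to build an orthonormalized random initial guess:
% orth returns an orthonormal basis for the range of its argument,
% so the result still has 10 columns, one per desired eigenpair.
Xorthonormal = orth(rand(68394, 10));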
Thanks

Accepted Answer

Andrew Knyazev on 8 Jun 2011
Very slow convergence is expected, normal behavior of lobpcg for such a problem without preconditioning. Unfortunately, I am not aware of any good preconditioning code in MATLAB. In http://www.mathworks.com/help/techdoc/ref/pcg.html , they suggest using http://www.mathworks.com/help/techdoc/ref/ichol.html for preconditioning. Last time I checked, it worked very poorly for large matrices relative to http://www.mathworks.com/help/techdoc/ref/mldivide.html , so I do not recommend it.
One way around this is to follow MATLAB's eigs approach and use the function T(x) = (A - sigma*B)\x for preconditioning, where you need to choose sigma small enough to make A - sigma*B positive definite. That should make the convergence speed of lobpcg similar to that of eigs.
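A minimal sketch of this idea follows. It assumes the File Exchange lobpcg accepts the preconditioner as its fourth argument, applied as a function handle to a block of residuals, and the value sigma = -0.1 is only a placeholder; you must pick a shift that keeps A - sigma*B positive definite for your matrices:

sigma = -0.1;                  % placeholder shift: must keep A - sigma*B positive definite
R = chol(A - sigma*B);         % sparse Cholesky factor, R'*R = A - sigma*B, computed once
precond = @(x) R \ (R' \ x);   % T(x) = (A - sigma*B)\x via two triangular solves
% (for very large problems, [R,flag,P] = chol(A - sigma*B) with a
% fill-reducing permutation P may be needed; then T(x) = P*(R\(R'\(P'*x))))
X = rand(size(A, 1), 10);      % random initial block for 10 eigenpairs
[v, lambda] = lobpcg(X, A, B, precond, 1e-5, 200, 0);

Factoring once and reusing R keeps each preconditioner application down to two sparse triangular solves, instead of a full mldivide on every iteration.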
In your case, since you want to do this in Python anyway, you might simply want to try http://www.scipy.org/ First, it already has the function lobpcg coded in Python. Second, SciPy can use http://code.google.com/p/petsc4py/ to bind to the PETSc libraries http://www.mcs.anl.gov/petsc/ In turn, PETSc provides high-quality preconditioning, which can be used in lobpcg. See http://code.google.com/p/blopex/ for more details on lobpcg in PETSc with preconditioning.
I have helped the SciPy developers a bit to port MATLAB's version of lobpcg to Python, but please contact the developers directly if you have any questions about their code.

More Answers (0)
