How can I quickly find the eigenvalues and eigenvectors of a matrix with a special structure using high-precision calculations?

I have a large sparse matrix with the following structure:
It’s 5000×5000 in size. I want to find the 1000 eigenvalues and eigenvectors whose absolute values are closest to zero. The challenge is that the matrix has a large condition number, so to ensure accurate results, I have to use high-precision calculations—which, as you’d expect, makes things extremely slow. Luckily, I came across the following identity:
This identity cuts down on costly high-precision computations (since the matrix on the right is much smaller). So I created a function handle, Afun, that computes M*v for a given input vector v.
But unfortunately, the eigs function only works correctly with a function handle like Afun (where input v returns M*v) when you're looking for the eigenvalues with the largest magnitude—that is, 'largestabs'. I need the ones whose absolute values are closest to zero. Is there a way around this?
Or, setting aside this identity, is there any way to speed up high-precision calculations for large sparse matrices? The current speed is unbearably slow.

18 comments

Most probably, "smallestabs" uses "largestabs", applied to M^(-1). Thus if you manage to find a representation of M^(-1) for your application, you can use "largestabs" together with M^(-1) * x.
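The reciprocal relationship behind this suggestion is easy to check numerically. A quick NumPy sketch (for illustration only; the thread itself is MATLAB): the eigenvalues of inv(M) are the reciprocals of the eigenvalues of M, so "smallest |lambda| of M" is the same problem as "largest |lambda| of inv(M)".

```python
import numpy as np

# Eigenvalues of inv(M) are reciprocals of the eigenvalues of M, so the
# eigenvalue of M closest to zero is 1/(largest eigenvalue of inv(M)).
rng = np.random.default_rng(1)
M = rng.standard_normal((6, 6))
ev = np.linalg.eigvals(M)
inv_ev = np.linalg.eigvals(np.linalg.inv(M))

# |eigenvalues of M|, ascending, equals 1/|eigenvalues of inv(M)|, descending
lhs = np.sort(np.abs(ev))
rhs = 1 / np.sort(np.abs(inv_ev))[::-1]
assert np.allclose(lhs, rhs)
```

This is exactly what eigs does internally for 'smallestabs' (shift-invert with shift 0), which is why it asks for M\x rather than M*x.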
What do you mean when you say you are using high-precision calculations? Since MATLAB has only one common choice for precision, i.e. double precision, then high precision can only imply you are using symbolic tools to do the work. And yes, that would be immensely slow for that large of a matrix. Face the fact that this is a big problem, and big problems take big time. Were it common to solve this specific problem much more quickly, you would have chosen to solve larger problems yet.
Please explain the notation. What is the operator in this context? Kronecker product? And what are the sizes of Ak, Bk, and V individually?
Thank you very much for all the comments under this question.
Here \otimes denotes the Kronecker product. Bk is 50×50, Ak is 100×100, and V is a matrix of size 100×50. The relationship between V and v (where v is the input vector in M*v) is given by V = reshape(v, 100, 50).
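The identity itself isn't shown in the post, but given these sizes it is presumably the standard "vec trick", (Bk ⊗ Ak)·v = vec(Ak·V·Bkᵀ) with V = reshape(v, 100, 50), possibly summed over k. A small NumPy check of that assumed form, with stand-in sizes 4 and 3 instead of 100 and 50:

```python
import numpy as np

# Assumed identity (the standard "vec trick", column-major vec):
#   kron(Bk, Ak) * v == vec(Ak * V * Bk.'),  V = reshape(v, rows(Ak), rows(Bk))
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))          # plays the role of Ak (100x100)
B = rng.standard_normal((3, 3))          # plays the role of Bk (50x50)
v = rng.standard_normal(4 * 3)

direct = np.kron(B, A) @ v               # expensive: builds the big matrix
V = v.reshape((4, 3), order='F')         # column-major, like MATLAB's reshape(v,100,50)
cheap = (A @ V @ B.T).ravel(order='F')   # vec(A*V*B.') in column-major order

assert np.allclose(direct, cheap)
```

The right-hand side only ever multiplies by the small factors, which is the claimed saving in high precision.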
I’m currently working on a task that involves finding the eigenvalues and eigenvectors of a 5000×5000 sparse matrix. Because the matrix has a large condition number, it’s very sensitive to rounding errors—and unfortunately, MATLAB’s double precision only supports about 16 decimal digits. To obtain reliable results, I need to perform high-precision computations (at least 30 decimal digits). I’m not using MATLAB’s built-in vpa function; instead, I’m using the AdvanpixMCT toolbox, which is significantly faster than vpa—but still too slow for my current problem.
To speed things up, I’ve been trying to use the identity I mentioned earlier. Since M is much larger than Bk and Ak, directly applying M in matrix multiplications involves a large number of expensive high-precision operations. Unfortunately, I’ve run into an issue with the eigs function. When searching for eigenvalues closest to zero (i.e., using 'smallestabs'), eigs expects Afun to represent M\x rather than M*x. This means I can’t directly take advantage of the identity in this setting.
I would really appreciate any advice or suggestions on how to work around this limitation. Thank you in advance for your help.
I need to perform high-precision computations (at least 30 decimal digits)
I for one don't see the rationale behind pursuing 30-digit precision. As far as I'm aware, it isn't generally true that eigendecomposition is harder for ill-conditioned matrices. If it were, you could never reliably decompose a singular matrix. The conditioning is a problem for the particular solution approach you've been pursuing, but that is because eigs, as you mentioned, expects Afun to represent M\x which cannot be calculated stably if M is ill-conditioned. However, any other solution path that doesn't require M\x should be fine in double floating point.
It seems using "eigs" in double precision was not sufficiently exact for the OP's problem:
Possible reasons are discussed in the answers, but no really fast and at the same time reliable method could be found.
This sounds like an XY problem to me. An eigendecomposition is ill-conditioned when the eigenvalues are repeating, or nearly so. What is the use case for distinguishing between eigenvalues that are similar to within double float precision??
Anything you might subsequently do with such a high-precision decomposition would probably require 30 digit precision as well. Are you committed to working at such high precision for all the computations you might want to do downstream?
What is the use case for distinguishing between eigenvalues that are similar to within double float precision??
The eigenvalues are not similar to within double float precision - they can differ greatly depending on what method is used (at least for the application I gave a link to) (see below). But maybe the actual application is different - I don't know why @ma Jack didn't give an example together with his/her new question here. Maybe because his/her question is in principle a duplicate.
format long
vnum = linspace(-3,3,100);
N = 30;
unum = 2.5;
wnum = 1;
Lnum = eig(H(unum,vnum(10),wnum,N))
Lnum =
  -0.000000000000004 + 0.000000637176401i  -0.000000000000004 - 0.000000637176401i
   0.117913226667825 + 2.815209265412365i   0.117913226667825 - 2.815209265412365i
   0.058263159695574 + 2.847351613723787i   0.058263159695574 - 2.847351613723787i
  -0.003527084064689 + 2.857496369370038i  -0.003527084064689 - 2.857496369370038i
  -0.065451748175237 + 2.845500743757776i  -0.065451748175237 - 2.845500743757776i
  -0.125551134748330 + 2.811591954120779i  -0.125551134748330 - 2.811591954120779i
   0.173282561441625 + 2.761452452122647i   0.173282561441625 - 2.761452452122647i
  -0.181781668336172 + 2.756321445319460i  -0.181781668336172 - 2.756321445319460i
   0.222183382929459 + 2.686234285694165i   0.222183382929459 - 2.686234285694165i
  -0.232082229146546 + 2.680545724956049i  -0.232082229146546 - 2.680545724956049i
   0.263746680442496 + 2.588555362681767i   0.263746680442496 - 2.588555362681767i
  -0.274862100741577 + 2.585453886527108i  -0.274862100741577 - 2.585453886527108i
   0.302890771046344 + 2.471955990603175i   0.302890771046344 - 2.471955990603175i
   0.145849001350266 + 2.510232327098566i   0.145849001350266 - 2.510232327098566i
  -0.309905715522997 + 2.473283236090422i  -0.309905715522997 - 2.473283236090422i
   0.334249947896254 + 2.348418957711169i   0.334249947896254 - 2.348418957711169i
  -0.087485083148853 + 2.430947123540911i  -0.087485083148853 - 2.430947123540911i
  -0.336706094025641 + 2.349493259761486i  -0.336706094025641 - 2.349493259761486i
   0.350119758606805 + 2.219641791931203i   0.350119758606805 - 2.219641791931203i
  -0.350876110431463 + 2.219949033114284i  -0.350876110431463 - 2.219949033114284i
   0.347483747230019 + 2.088634094777779i   0.347483747230019 - 2.088634094777779i
  -0.347693883780828 + 2.088684401654675i  -0.347693883780828 - 2.088684401654675i
   0.323575519219198 + 1.960727392120438i   0.323575519219198 - 1.960727392120438i
  -0.323624238534635 + 1.960730968148195i  -0.323624238534635 - 1.960730968148195i
   0.275937695315164 + 1.843213722793182i   0.275937695315164 - 1.843213722793182i
  -0.275946931262473 + 1.843214126147148i  -0.275946931262473 - 1.843214126147148i
  -0.203496648292608 + 1.745683177169336i  -0.203496648292608 - 1.745683177169336i
   0.203495211261072 + 1.745682622497439i   0.203495211261072 - 1.745682622497439i
  -0.108701766122748 + 1.679727642760095i  -0.108701766122748 - 1.679727642760095i
   0.108701703664981 + 1.679727352055979i   0.108701703664981 - 1.679727352055979i
   0.000000069567721 + 1.656185300634534i   0.000000069567721 - 1.656185300634534i
%%%%%%%%%%=========%%%%%%%%%%%%%%
syms u v w real
H(u,v,w,N)
ans = 
Lsym=vpa(eig(subs(H(u,v,w,N),[u v w],[unum,vnum(10),wnum])))
Lsym = 
function [H0]=H(u,v,w,N)
% Builds the (2N+2)x(2N+2) tridiagonal test matrix: the superdiagonal
% alternates v+u, w+u and the subdiagonal alternates v-u, w-u.
T1=[];
T2=[];
for hh=1:N
    T1=[T1;v+u;w+u];
    T2=[T2;v-u;w-u];
end
T1=[T1;v+u];
T2=[T2;v-u];
H0=diag(T1,1)+diag(T2,-1);
end
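For reference, a NumPy port of this H (my translation, not from the thread) confirms the structure: tridiagonal, size 2N+2, so 62×62 for the N=30 case above. The value -2.4545 below approximates vnum(10) from linspace(-3,3,100).

```python
import numpy as np

def H(u, v, w, N):
    # Port of the forum's H(u,v,w,N): tridiagonal, size (2N+2)x(2N+2),
    # with alternating v+u, w+u above the diagonal and v-u, w-u below.
    t1 = np.append(np.tile([v + u, w + u], N), v + u)
    t2 = np.append(np.tile([v - u, w - u], N), v - u)
    return np.diag(t1, 1) + np.diag(t2, -1)

M = H(2.5, -2.4545, 1.0, 30)   # unum, ~vnum(10), wnum, N from the post
print(M.shape)                 # (62, 62)
```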
If so, then the problem is not actually ill-posed though, right? The symbolic eig does badly only because it uses a bad algorithm (presumably, it tries to directly solve the characteristic polynomial). That wouldn't be a good argument for going to higher precision. It would be an argument for going to a better eigendecomposition algorithm.
Why do you think symbolic "eig" does badly in the above example ? I think it solves the problem, but it is too slow.
Lsym=eig(vpa(subs(H(u,v,w,N),[u v w],[unum,vnum(10),wnum])))
succeeds as well and is a bit faster (but still too slow for larger matrices, as far as @ma Jack reports).
Symbolic eig tries to factor the characteristic polynomial of H, in other words a degree-62 polynomial. That is a numerically unstable operation. The numeric decomposition gives the correct eigenvalues; the symbolic result is wrong.
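The instability of going through the characteristic polynomial shows up even in double precision. A classic NumPy illustration with diag(1..20), whose characteristic polynomial is Wilkinson's: the matrix eigenvalues are exact, but the polynomial coefficients (up to 20! ≈ 2.4e18) cannot even be stored exactly in a double, and root-finding on them drifts badly.

```python
import numpy as np

# Root-finding on the characteristic polynomial vs. direct eigendecomposition.
A = np.diag(np.arange(1.0, 21.0))
lam_eig = np.linalg.eigvals(A)       # exactly 1..20 for a diagonal matrix
lam_poly = np.roots(np.poly(A))      # via the characteristic polynomial
err = max(np.min(np.abs(lam_poly - l)) for l in lam_eig)
print(err)   # orders of magnitude above machine epsilon
```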
The roots appear as +/- sqrt(zeros of a polynomial of degree (N+1) ). Thus eigenvalues with equal real parts (as in the numerical decomposition) won't happen to come out - the eigenvalues will appear as signed pairs.
I strongly believe the symbolic result of purely imaginary eigenvalues is correct.
syms u v w real
N = 10;
size(H(u,v,w,N))
ans = 1×2
22 22
charpoly(H(u,v,w,N))
ans = 
eig(H(u,v,w,N))
ans = 
function [H0]=H(u,v,w,N)
T1=[];
T2=[];
for hh=1:N
    T1=[T1;v+u;w+u];
    T2=[T2;v-u;w-u];
end
T1=[T1;v+u];
T2=[T2;v-u];
H0=diag(T1,1)+diag(T2,-1);
end
OK, after a bit more research, I located literature on condition numbers for eigenvalue stability. It's pretty bad in this case for N=30,
vnum = linspace(-3,3,100);
N = 30;
unum = 2.5;
wnum = 1;
Hnum=H(unum,vnum(10),wnum,N);
[W,~,V]=eig(Hnum);
condition=max(vecnorm(V).*vecnorm(W)./abs(diag(W'*V))') %Condition number for eigenvalue stability
condition = 1.5747e+16
which explains why the two decomposition approaches give significantly different sets of eigenvalues.
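The quantity computed above is the standard per-eigenvalue condition number ||x_i||·||y_i|| / |y_iᴴx_i| for right/left eigenvector pairs. A compact NumPy equivalent: with unit-norm right eigenvectors in the columns of X and the left eigenvectors taken as the rows of inv(X) (so y_iᴴx_i = 1), the condition numbers reduce to the row norms of inv(X). A nearly defective 2×2 example:

```python
import numpy as np

A = np.array([[1.0, 1e6],
              [0.0, 1.0001]])        # nearly defective 2x2 test matrix
lam, X = np.linalg.eig(A)            # columns of X are unit right eigenvectors
# Rows of inv(X) are scaled left eigenvectors with y_i^H x_i = 1, so the
# eigenvalue condition number ||x_i||*||y_i||/|y_i^H x_i| is just ||row i of inv(X)||.
cond = np.linalg.norm(np.linalg.inv(X), axis=1)
print(cond)   # both entries are huge (~1e10): the eigenvalues are very sensitive
```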
Either way @Catalytic's comment from before still stands, as far as I can tell. Why is it useful to pursue this decomposition if it is unstable? It is rarely a good idea to try to combat ill-conditioning with higher precision calculation. The better way is to reformulate the problem with better conditioning.
function H0 = H(u,v,w,N)
% Vectorized rewrite of the loop version above; builds the same
% (2N+2)x(2N+2) tridiagonal matrix.
T1 = [repmat([v+u; w+u],N,1); v+u];
T2 = [repmat([v-u; w-u],N,1); v-u];
H0 = diag(T1,1) + diag(T2,-1);
end
I think @ma Jack 's main problem is the reliable computation of the eigenvalues. And if both MATLAB and Advanpix are too slow in extended precision for a reliable solution, I don't think that we can help in this case:
Advanpix's homepage
states:
At the moment the only reliable way of dealing with ill-conditioned eigenproblems is to use the extended-precision computations. In fact, there is no other alternative in contrast to solving systems of linear equations where various pre-conditioners can be used to improve the conditioning.
I doubt that the above stated identity for computing M*v can improve speed significantly.
The statement on the Advanpix homepage applies to the situation where you accept the problem as given. Usually, if you are facing an atrociously ill-conditioned problem, it means something has gone wrong in the problem-definition stage, e.g., you've written down too few equations for the number of unknowns you are solving for. It means you need to scrap the problem and try to define a better one.
Yes, if numerical methods reach their limits, it's always a good idea to think about the model based on which the eigenvalue problem is derived and look for alternative approaches. But usually, if you already invested much time, it takes a great deal of willpower to start all over again. In this situation, it often seems easier first to try a different numerical method.
I didn't test it, but maybe a comparison of Advanpix and
could be interesting.
Not sure whether MATLAB uses this multiprecision LAPACK and BLAS for vpa eigenvalue computations.
But after looking into it more closely, installing and running "mplapack" won't be an easy task (at least not for me).


Answers (2)

Does Advanpix support eig / eigs for their class via an overload? If not, I think eigs may have some issues, because the tolerances used internally (based on eps(A), for example) have only been tested for double and single precision.
Unfortunately eigs requires a function that applies inv(A)*b to compute the smallest eigenvalue by absolute value. Generally, all "faster" algorithms (like eigs instead of eig) require some level of approximation, which might conflict with your goal of getting higher accuracy than the direct solve in double precision.
Perhaps there is some way of improving the condition of the matrix M? For example by applying some scaling to the rows and columns (see https://www.mathworks.com/help/matlab/ref/balance.html and option "nobalance" in eig (the default is to balance, but in some rare cases it is better to avoid balancing)).
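Balancing is just a diagonal similarity transform, so it changes the entries' dynamic range without changing the spectrum. A hand-picked NumPy illustration of the idea (eig/balance chooses powers of 2 automatically; the matrix and scaling here are made-up examples):

```python
import numpy as np

# Diagonal similarity scaling: B = inv(D) A D has the same eigenvalues as A,
# but the off-diagonal entries are brought to a comparable magnitude.
A = np.array([[1.0, 1e6],
              [4e-6, 1.0]])
D = np.diag([1.0, 2e-6])           # hand-picked so that |B12| == |B21|
B = np.linalg.inv(D) @ A @ D       # B = [[1, 2], [2, 1]]
print(np.linalg.eigvals(A))        # eigenvalues 3 and -1 (order may vary)
print(np.linalg.eigvals(B))        # same spectrum, from a balanced matrix
```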
@Torsten has hinted, based on your earlier posts, that the eigenvalues of M may occur in distinct and purely imaginary conjugate pairs. If that is the case, it is possible to design a quadratic polynomial p() such that the eigenvalues of p(M) are also distinct, but the small eigenvalues become large ones and vice versa. Recall that such an operation is eigenvector-preserving.
You can therefore apply eigs(Afun,5000,1000, 'largestabs') as before but with an Afun that implements p(M)*x instead of M*x. Such an Afun can be accelerated with Kronecker product identities as before.
As an illustration of the transformation technique,
clear, clc
%Toy input data
n=5;
E=rand(1,n)*10i; E=[-E,E];
P=normalize(rand(2*n),'n');
M=P*diag(E)/P;
Eigs0=sort(eig(full(M)), ComparisonMethod='abs').'; %Compute eigenvalues without eigenvectors (a fast operation).
p=getPoly(Eigs0); %Design the polynomial
Solving problem using linprog.
Optimal solution found.
qopt = 0.6471
exitflag = OptimalSolution
output = struct with fields:
         iterations: 2
          algorithm: 'dual-simplex-highs'
    constrviolation: 1.1102e-16
            message: 'Optimal solution found.'
      firstorderopt: 8.8818e-16
             solver: 'linprog'
Eigs1=polyval(p,Eigs0); %Transformed eigenvalues
Note that the smallest 2*N eigenvalues transform to the largest 2*N eigenvalues for every N,
disp(abs(Eigs0))
1.8736 1.8736 3.3685 3.3685 3.8471 3.8471 4.1012 4.1012 9.4781 9.4781
disp(abs(Eigs1))
87.6369 88.2840 80.7058 79.5425 77.3353 76.0067 75.3596 73.9433 0.0000 3.2732
function [p,pE]=getPoly(E0)
    E0=sort(imag(E0(:)));
    E=E0(E0>0);
    tmax=min(E);
    a=optimvar('a',Lower=0, Upper=1);
    b=optimvar('b',Lower=0);
    c=optimvar('c');
    q=optimvar('q',Lower=0);
    Q=E.^[2,1,0];
    pval1=Q*[a;-b;c];
    pval2=Q*[a;b;c];
    Con.b= b<=2*a*tmax;
    Con.c= c>=b*tmax-a*tmax^2;
    Con.con1=pval1<=pval2;
    Con.con2=pval1(2:end) >= pval2(1:end-1);
    expr=[pval1,pval2].'; expr=diff(expr(:));
    Con.con3= q<=expr;
    prob=optimproblem('Objective',q,'ObjectiveSense','max','Constraints',Con);
    [sol,~,exitflag,output]=solve(prob);
    p0=[sol.a,sol.b,sol.c];
    qopt=min(evaluate(expr,sol))
    exitflag,output
    %flip the magnitudes
    Emax=max(abs(polyval(p0,E0)));
    p=p0.*[1,-1i,-1];
    p(3)=p(3)+Emax;
    p=p*1i;
    if nargout>1, pE=polyval(p,E0.'); end
end
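A stripped-down NumPy toy version of the same spectral-transformation idea (not this answer's actual getPoly polynomial): for purely imaginary eigenvalue pairs ±iy, the even polynomial p(M) = c·I + M² has eigenvalues c − y², so the eigenvalues closest to zero become the largest while the eigenvectors are untouched. Unlike getPoly, this simple even p maps +iy and −iy to the same value; the answer's linear −1i term exists precisely to keep those pairs distinct.

```python
import numpy as np

# Build a test matrix with a purely imaginary, paired spectrum +/- i*y.
rng = np.random.default_rng(0)
y = rng.uniform(1, 10, 5)
E = np.concatenate([1j * y, -1j * y])
P = rng.standard_normal((10, 10))
M = P @ np.diag(E) @ np.linalg.inv(P)

# p(M) = c*I + M^2 has eigenvalues c - y^2 (c chosen above max(y)^2),
# reversing the magnitude ordering while preserving eigenvectors.
c = y.max() ** 2 + 1
pM = c * np.eye(10) + M @ M
lam = np.linalg.eigvals(pM).real
print(np.sort(lam))   # each c - y^2 appears twice, smallest y giving the largest value
```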


Version: R2023b
Asked: 10 Mar 2026
Edited: 15 Mar 2026
