# Matlab code problem (calculate eigenvalues and eigenvectors)

Yang Yu on 7 May 2015
Edited: John D'Errico on 7 May 2015
c0 = 2.4*1000;
c1 = 67*1000;
c2 = 58*1000;
c3 = 57*1000;
c4 = 50*1000;
c5 = 38*1000;
k0 = 1200*1000;
k1 = 33732*1000;
k2 = 29093*1000;
k3 = 28621*1000;
k4 = 24954*1000;
k5 = 19059*1000;
m1 = 6800;
m2 = 5897;
m3 = 5897;
m4 = 5897;
m5 = 5897;
m6 = 5897;
Mmat = [m1 0 0 0 0 0;
0 m2 0 0 0 0;
0 0 m3 0 0 0;
0 0 0 m4 0 0;
0 0 0 0 m5 0;
0 0 0 0 0 m6];
Kmat = [(k0+k1) -k1 0 0 0 0
-k1 (k1+k2) -k2 0 0 0
0 -k2 (k2+k3) -k3 0 0
0 0 -k3 (k3+k4) -k4 0
0 0 0 -k4 (k4+k5) -k5
0 0 0 0 -k5 k5];
A = Kmat/Mmat;
%Calculate eigenvalues and eigenvectors of A
[V,D] = eig(A);
%Check using det
det(A-D(1,1)*eye(6))
det(A-D(2,2)*eye(6))
det(A-D(3,3)*eye(6))
det(A-D(4,4)*eye(6))
det(A-D(5,5)*eye(6))
det(A-D(6,6)*eye(6))
Because the det values are much larger than 0, it looks like something is wrong with MATLAB. Please help me check what the problem is. Thanks.
John D'Errico on 7 May 2015
Well, for the det test it is easy to show what happens.
E = eig(A)
E =
17941
13379
8573.2
4213.6
31.232
1220.7
>> eig(A - E(5)*eye(6))
ans =
17910
13347
8542
4182.4
1.5323e-12
1189.5
>> prod(ans)
ans =
1.5566e+07
Again, this is why one should NEVER use det for ANY numerical test.
rank(A - E(5)*eye(6))
ans =
5
As I have said many times, floating point arithmetic is not truly mathematics. The differences are subtle. One looks a lot like the other, and the two have many similarities. But results that are true in mathematics will often fail to work in floating point arithmetic. And anything that involves a matrix determinant is often in the category of something to expect to fail miserably.
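A tiny sketch of this point (not from the thread, just an illustration): identities that hold in exact arithmetic routinely fail in double precision, which is why tests should compare against a tolerance rather than exact zero.

```matlab
% Exact-math identities fail in IEEE double precision:
0.1 + 0.2 == 0.3                  % logical 0 (false)
(0.1 + 0.2) - 0.3                 % about 5.5511e-17, not 0
% The robust test uses a tolerance instead of exact equality:
abs((0.1 + 0.2) - 0.3) < 1e-12    % logical 1 (true)
```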

John D'Errico on 7 May 2015
Edited: John D'Errico on 7 May 2015
As has been pointed out several times, this matrix is not diagonalizable. eig can go as far as to find matrices V and D such that A*V-V*D is effectively zero, to within reasonable tolerances. But there is no matrix V such that V is an orthogonal matrix, where D=V'*A*V, and D is diagonal.
A*V - V*D
ans =
-4.5475e-12 -2.7285e-12 9.0949e-13 2.7285e-12 1.263e-12 6.8212e-13
9.0949e-12 5.457e-12 2.2737e-12 -1.7053e-13 -1.8066e-12 -1.08e-12
-5.457e-12 3.4106e-13 9.0949e-13 2.2737e-13 -7.6739e-13 -3.4106e-13
0 -5.457e-12 -3.1832e-12 -1.819e-12 4.9738e-14 -4.8317e-13
4.5475e-13 -9.0949e-13 0 -1.4779e-12 1.4388e-13 1.3642e-12
0 -1.3642e-12 -4.5475e-13 2.2737e-12 -2.5757e-13 -1.1369e-12
These numbers are all as close to zero as one can expect. But the eigenvector matrix so produced is not orthogonal, and it is not a complete orthonormal set of eigenvectors. In the sense that an eigenvalue/vector pair satisfies A*v = lambda*v, we can check a few eigenvalues just to convince you of that fact...
[A*V(:,1),D(1,1)*V(:,1)]
ans =
4739.4 4739.4
-10608 -10608
10904 10904
-7536.4 -7536.4
3265.1 3265.1
-717.44 -717.44
[A*V(:,2),D(2,2)*V(:,2)]
ans =
4756.5 4756.5
-6853.2 -6853.2
-997.38 -997.38
7704.3 7704.3
-6673.1 -6673.1
2125.5 2125.5
This is what it means to say that A*V == V*D.
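One quick way to see the non-orthogonality (a sketch, assuming the V from eig(A) above): if V were orthogonal, V'*V would be the identity, which it clearly is not here, whereas the eigenvector matrix of the symmetric Kmat is orthogonal to round-off.

```matlab
% V from [V,D] = eig(A): not orthogonal, so V'*V is far from the identity.
norm(V'*V - eye(6))        % order-1 result, not round-off level
% Contrast with a symmetric matrix, whose eigenvectors ARE orthonormal:
[Vs,Ds] = eig(Kmat);
norm(Vs'*Vs - eye(6))      % round-off level, effectively zero
```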
Anyway, it appears that the determinant test is what may have been bothering you. So if we look at the computed eigenvalues...
E = eig(A)
E =
17941
13379
8573.2
4213.6
31.232
1220.7
See that we can shift the eigenvalues.
eig(A - E(5)*eye(6))
ans =
17910
13347
8542
4182.4
1.5323e-12
1189.5
which leaves one of those eigenvalues essentially zero. HOWEVER, the determinant, which could be computed using the product of the eigenvalues, is not zero. Of course, this is to be expected, since even though that one element is tiny, it is NOT sufficiently small when you take the product of those numbers.
prod(ans)
ans =
1.5566e+07
Again, this is why one should NEVER use det for ANY numerical test. See that rank recognizes the matrix as being singular.
rank(A - E(5)*eye(6))
ans =
5
In fact, det in MATLAB is computed using an LU factorization, which gives a slightly different value for the determinant.
[L,U,P] = lu(A - E(5)*eye(6))
L =
1 0 0 0 0 0
-0.97155 1 0 0 0 0
0 -0.97404 1 0 0 0
0 0 -0.98044 1 0 0
0 0 0 -0.98517 1 0
0 0 0 0 -0.99034 1
U =
5105.8 -5720.2 0 0 0 0
0 5065 -4933.5 0 0 0
0 0 4950.3 -4853.5 0 0
0 0 0 4295.4 -4231.6 0
0 0 0 0 3263.5 -3232
0 0 0 0 0 9.0949e-13
P =
1 0 0 0 0 0
0 1 0 0 0 0
0 0 1 0 0 0
0 0 0 1 0 0
0 0 0 0 1 0
0 0 0 0 0 1
prod(diag(U))
ans =
1.6322e+06
Compare that result to what det gives:
det(A - E(5)*eye(6))
ans =
1.6322e+06
Again, DON'T USE DET! The factor of roughly 10 difference between that and what we get from the product of the eigenvalues just emphasizes the reason why you should never use det for these things. That number IS zero, as close as we can come to zero in terms of floating point arithmetic.
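A scale-aware alternative to det (a sketch, not from the original answer, using the A and E above): look at the smallest singular value relative to the largest. This is essentially what rank does internally, which is why it gets the right answer where det does not.

```matlab
B = A - E(5)*eye(6);
s = svd(B);
s(end)        % smallest singular value: tiny compared to s(1)
s(end)/s(1)   % relative size, down at round-off level
% rank applies a tolerance of max(size(B))*eps(norm(B)) to the
% singular values, so it correctly reports the numerical rank:
rank(B)       % returns 5, as shown above
```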
For the perfectionists among you, let's try it in symbolic form.
[V,D] = eig(sym(A));
det(A - D(5,5)*eye(6))
ans =
-0.000000000000000062420635190136154950824971272636
Still not zero, but closer to that value.
eig(sym(A) - D(5,5)*eye(6))
ans =
4562.3474811932686073791772754019
-1.2794464375961745433981454406958e-36
-4805.4337708041558234015199714421
-9165.0606437451593865984338324246
-12157.955591367554507466929334176
-13347.448098888493727903150725422

the cyclist on 7 May 2015
Edited: the cyclist on 7 May 2015
Maybe I don't remember my matrix algebra well enough, but I don't understand what you are checking. Are you trying to prove that the eigenvectors are really eigenvectors? If so, then I would do
A*V(:,1) - D(1,1)*V(:,1)
A*V(:,2) - D(2,2)*V(:,2)
% etc
each of which is equal to zero (to within the expected floating-point rounding error).
You can also do those six calculations in one line as follows:
A*V - repmat(diag(D)',6,1).*V
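In recent MATLAB releases (R2016b and later), implicit expansion makes the repmat unnecessary. Assuming the same V and D from eig, and that D is diagonal as returned here, these residuals are all equivalent:

```matlab
A*V - V*D                        % direct matrix form
A*V - V.*diag(D).'               % implicit expansion: scales column j by D(j,j)
A*V - repmat(diag(D)',6,1).*V    % the repmat form above
```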

Alfonso Nieto-Castanon on 7 May 2015
Edited: Alfonso Nieto-Castanon on 7 May 2015
In your example the matrix A is not normal (check that A*A'-A'*A is not zero), hence it does not have a proper eigenvalue/eigenvector decomposition (it is not diagonalizable by a unitary matrix).
Depending on what exactly you are trying to accomplish I would suggest to try instead a singular value decomposition:
[Q,D,R] = svd(A);
which provides a somewhat similar decomposition (A = Q*D*R' with D diagonal, and Q and R orthonormal) for any matrix.
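A quick sanity check on that factorization (a sketch, reusing the Q, D, R names above):

```matlab
[Q,D,R] = svd(A);
norm(A - Q*D*R')        % reconstruction error: round-off level
norm(Q'*Q - eye(6))     % Q is orthonormal
norm(R'*R - eye(6))     % R is orthonormal
```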
EDIT: also, Kmat is symmetric (and hence normal), so it is the division by the diagonal matrix Mmat (a column-wise division of Kmat by the diagonal elements of Mmat) that breaks this symmetry and makes the result non-normal. I would therefore suggest: a) checking where the Kmat/Mmat formula comes from, to make sure you have it right; and b) checking why you would expect the resulting A matrix to be diagonalizable, to make sure you have that right as well.
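On point a): if the underlying problem is the structural system M*x'' + K*x = 0, one option (a suggestion of mine, not from the original answers) is to pose it as a generalized eigenproblem, which eig supports directly and which preserves the symmetry of Kmat:

```matlab
% Generalized symmetric-definite eigenproblem K*v = lambda*M*v.
% Kmat is symmetric and Mmat is diagonal positive definite, so the
% eigenvalues are guaranteed real and the eigenvectors are M-orthogonal.
[V,D]  = eig(Kmat, Mmat);
lambda = diag(D);         % squared natural frequencies, (rad/s)^2
omega  = sqrt(lambda);    % natural frequencies in rad/s
% M-orthogonality check: V'*Mmat*V is diagonal up to round-off.
```

The eigenvalues of this problem agree with those of Mmat\Kmat (and of Kmat/Mmat, which is similar to it), but the symmetric formulation avoids the non-normality issue entirely.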