Representing a matrix as a product of two matrices in MATLAB

14 views (last 30 days)
rihab on 17 Nov 2015
Commented: Torsten on 19 Nov 2015
I have a 4x4 matrix of complex numbers, say
X = [0.4079 + 0.0000i 0.7532 + 0.0030i 0.9791 + 0.0272i 0.9335 - 0.0036i;
0.7532 - 0.0030i 1.2288 + 0.0000i 1.3074 + 0.0052i 0.9791 + 0.0272i;
0.9791 - 0.0272i 1.3074 - 0.0052i 1.2288 + 0.0000i 0.7532 + 0.0030i;
0.9335 + 0.0036i 0.9791 - 0.0272i 0.7532 - 0.0030i 0.4079 + 0.0000i]
I want to represent it as a product of a 4x1 and a 1x4 vector, say x, such that X = xx^H, where H denotes the Hermitian transpose of x. Does anyone have a hint or suggestion for solving this matrix decomposition problem? Any suggestion would be appreciated.
  5 Comments
rihab on 18 Nov 2015
I see. I used the SVD to decompose this matrix X. However, the matrix UV^H (I obtained U and V from MATLAB's svd command) does not have rank 1. Could you elaborate a bit on what you meant by "It could be done with a rank 1 matrix"? I would appreciate suggestions.
rihab on 18 Nov 2015
@stefan The matrix X is the solution to a least-squares problem (AX - B). X is a 16x1 vector, and for my purposes I have reshaped it into a 4x4 matrix; now I want to decompose this matrix X as a product of two vectors such that X = xx^H (I am basically interested in extracting the four elements of the vector x). Since X is a rank-4 matrix and cannot be decomposed in such a way, I am looking for an approximate solution. Any suggestions on obtaining this vector x would be appreciated.
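A minimal sketch of what I am doing (A and B are placeholders for my least-squares data, which are not included here):
Xvec = A \ B;               % 16x1 least-squares solution
X    = reshape(Xvec, 4, 4); % rearrange into the 4x4 matrix shown above
rank(X)                     % anything above 1 means X = x*x' can only hold approximately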


Accepted Answer

John D'Errico on 18 Nov 2015
Edited: John D'Errico on 18 Nov 2015
I'm sorry, but this is flat out impossible.
X = [0.4079 + 0.0000i 0.7532 + 0.0030i 0.9791 + 0.0272i 0.9335 - 0.0036i;
0.7532 - 0.0030i 1.2288 + 0.0000i 1.3074 + 0.0052i 0.9791 + 0.0272i;
0.9791 - 0.0272i 1.3074 - 0.0052i 1.2288 + 0.0000i 0.7532 + 0.0030i;
0.9335 + 0.0036i 0.9791 - 0.0272i 0.7532 - 0.0030i 0.4079 + 0.0000i];
Not even close to being possible. Let's see why.
rank(X)
ans =
4
svd(X)
ans =
3.7714
0.62027
0.11221
0.010048
Only if the rank of X were 1, i.e., if it had one non-zero singular value and three essentially zero ones, could you do this.
The point is that representing the matrix as an outer product of two vectors, i.e., x'*y where x and y are row vectors, forces the result to have rank 1. That is a fundamental fact of linear algebra.
As you can see, that is clearly not true. Simply wanting to do the impossible is not an option. Case closed, IF you want an exact solution.
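As a quick illustration of that fact (the vectors below are arbitrary, not taken from the problem):
x = rand(1,4) + 1i*rand(1,4);   % any nonzero complex row vector
y = rand(1,4) + 1i*rand(1,4);
rank(x' * y)                    % an outer product of nonzero vectors always has rank 1
This returns 1 no matter how x and y are chosen, as long as neither is zero.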
If you wish to find the closest approximation of that form (based on reading your comments), then it is possible. Use the SVD.
[U,S,V] = svd(X);
u = U(:,1)
u =
-0.41048 - 4.0811e-18i
-0.57576 + 0.001811i
-0.57571 + 0.0081111i
-0.41042 + 0.0070737i
s = S(1,1)
s =
3.7714
v = V(:,1)
v =
-0.41048 + 0i
-0.57576 + 0.001811i
-0.57571 + 0.0081111i
-0.41042 + 0.0070737i
Since X is Hermitian, we see that u and v will be the same. If the goal is to write Xhat as the product w*w', then we just use
w = u*sqrt(s)
w =
-0.79716 - 7.9255e-18i
-1.1181 + 0.003517i
-1.118 + 0.015752i
-0.79705 + 0.013737i
w*w'
ans =
0.63547 + 0i 0.89134 + 0.0028036i 0.89125 + 0.012557i 0.63538 + 0.010951i
0.89134 - 0.0028036i 1.2502 + 0i 1.2502 + 0.013681i 0.89125 + 0.012557i
0.89125 - 0.012557i 1.2502 - 0.013681i 1.2502 + 0i 0.89134 + 0.0028036i
0.63538 - 0.010951i 0.89125 - 0.012557i 0.89134 - 0.0028036i 0.63547 + 0i
As you can see, this must be rank 1.
rank(w*w')
ans =
1
The norm of the error of approximation is as small as possible.
norm(w*w' - X)
ans =
0.62027
So this is the best way, in the least squares sense, to produce a rank 1 approximation to X. By way of comparison,
norm(X)
ans =
3.7714
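If you need this repeatedly, the steps above can be collected into a small helper along these lines (the function name rank1_fit is arbitrary, not part of MATLAB):
function [w, err] = rank1_fit(X)
% Best rank-1 approximation X ~ w*w', assuming X is (essentially) Hermitian as here.
[U, S, ~] = svd(X);
w   = U(:,1) * sqrt(S(1,1));   % scale the leading left singular vector
err = norm(w*w' - X);          % residual norm, 0.62027 for the matrix above
end
Calling [w, err] = rank1_fit(X) reproduces the w and the error shown above.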
  2 Comments
Torsten on 18 Nov 2015
Since the OP wants to use w*w^H as approximation, he can use w=u*sqrt(s).
Best wishes
Torsten.
John D'Errico on 18 Nov 2015
Edited: John D'Errico on 18 Nov 2015
Fixed


More Answers (1)

Torsten on 18 Nov 2015
Edited: Walter Roberson on 18 Nov 2015
In your case, X=U*Sigma*U^H.
Consequently, u1*sigma1*u1^H, where u1 is the eigenvector corresponding to the largest eigenvalue sigma1, is the best rank-1 approximation to X in the Frobenius norm.
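A sketch of that construction (variable names are arbitrary; eig is used instead of svd):
[V, D] = eig((X + X')/2);      % symmetrize to guard against round-off
[~, k] = max(abs(diag(D)));    % index of the largest-magnitude eigenvalue
sigma1 = D(k,k);
u1     = V(:,k);
Xhat   = sigma1 * (u1 * u1');  % rank-1 approximation of X
Since sigma1 is positive for this X (3.7714), w = sqrt(sigma1)*u1 gives the same w*w' as in the accepted answer.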
Best wishes
Torsten.
  2 Comments
Stefan Raab on 18 Nov 2015
Hi Torsten, don't you mean singular value? ;)
Kind regards, Stefan
Torsten on 19 Nov 2015
For Hermitian matrices, the sigmas are eigenvalues, too (up to sign).
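A quick way to check this for the matrix in question (for a Hermitian matrix, the singular values are the absolute values of the eigenvalues):
lambda = eig((X + X')/2);                % eigenvalues of the Hermitian part of X
sigma  = svd(X);                         % singular values of X
[sort(abs(lambda), 'descend'), sigma]    % the two columns agree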
Best wishes
Torsten.

