MATLAB Answers

Solve equation system (A*V).'*B*(A*V) = C for matrix A

David Kriebel on 8 Jan 2020
Edited: Matt J on 9 Jan 2020
Hello all,
I have been wondering for a while how to solve this equation system:
(A*V).'*B*(A*V) = C
where V, B and C are known.
V is a vector basis, and B and C are symmetric square matrices. I want to find a matrix A, which initially is simply an identity matrix; A should act like a scaling matrix for the vector basis.
Is there a way to solve this equation system for A, perhaps iteratively?
There may be more than one solution, but please share your guesses or ideas about what might work (algorithms, approaches, whatever).
In effect I have to solve the minimization: minimize ||(A*V).'*B*(A*V) - C|| over A.
Thanks in advance, and I hope for some answers :)
best regards


Accepted Answer

John D'Errico on 8 Jan 2020
Edited: John D'Errico on 8 Jan 2020
As it turns out, your problem is actually quite simple.
Assume that V is full rank. As long as that is true, we can make the substitution
X = A*V
V is known and of full rank, so if we can solve the simpler quadratic form
X'*B*X = C
then we can trivially recover A from X.
I'll note there is much to be found in the literature on solving what seems to be a similar equation, X*A*X = B.
Of course, your problem is different because of the transpose, and that is what makes the solution trivial. Let's take this a step further. B is symmetric and square. Is it positive definite? If so, then we can write B as
B = L'*L
So a Cholesky decomposition of B. Now make a further transformation, with
Y = L*X
Your problem now becomes to solve for Y in the problem
Y'*Y = C
Again, is C positive definite? If so, then we can write Y in the form of a Cholesky decomposition of C. Once you have Y, recover X. Once you have X, recover A.
So the problem becomes trivial, if B and C are positive definite, and V is any full rank matrix.
Edit: Since I see by your response to Matt that V is non-square, the problem is still relatively easy, although the solution now becomes far less unique. If
X = A*V
then once X is known, recover a solution for A as
A = X*pinv(V)
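The whole recipe can be sketched numerically. Here is an illustrative NumPy version with made-up random data (all names and sizes here are mine, chosen just to show the mechanics):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 10, 2

# Made-up data: positive definite B (n x n) and C (k x k),
# and a full-column-rank basis V (n x k)
Mb = rng.standard_normal((n, n))
B = Mb @ Mb.T + n * np.eye(n)
Mc = rng.standard_normal((k, k))
C = Mc @ Mc.T + np.eye(k)
V = rng.standard_normal((n, k))

R = np.linalg.cholesky(B).T    # upper triangular, B = R.'*R
Rc = np.linalg.cholesky(C).T   # upper triangular, C = Rc.'*Rc

# One solution of Y.'*Y = C: the Cholesky factor of C, zero-padded to n x k
Y = np.vstack([Rc, np.zeros((n - k, k))])

X = np.linalg.solve(R, Y)      # undo Y = R*X, so X.'*B*X = Y.'*Y = C
A = X @ np.linalg.pinv(V)      # recover A from X = A*V (non-square V)

residual = (A @ V).T @ B @ (A @ V) - C
```

Any Y with Y'*Y = C works here; the zero-padding just picks one member of the solution family, which is exactly why A is not unique.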

  7 Comments
David Kriebel on 9 Jan 2020
Yes, it's true that we can simply extend Y in dimensions. And yes, the problem is that Y is not unique.
Another idea for improvement:
Suppose I had many different B-C combinations, i.e. different B and C matrices, but the vector basis V and the scaling matrix A were the same for all these combinations. Could I use that to make my solution more unique?
John D'Errico on 9 Jan 2020
It is not just that Y is not unique. The extraction of A, given X is also not unique. So there is a set of infinitely many matrices A that solve the problem.
Anyway, this is now a completely different problem. For example, suppose you had matrices B1,C1, B2,C2, with a fixed basis V. The method I showed, which generates one non-unique solution A, is not directly extensible to two sets of matrices. Could you generate two solutions A1 and A2 (or A1, A2, ..., An) and then take the average? It is not clear that the average of those matrices, for example, would also be a solution, or would be meaningful at all.
The set of 10x2 matrix solutions Y, for any given 2x2 matrix C, such that Y'*Y == C is not related linearly to the matrix C. Those are essentially quadratic equations.
That leaves you with a problem where you have a 10x10 matrix of unknowns, so 100 unknowns, but still possibly not enough pieces of information to generate a unique solution.
You might decide to try a nonlinear optimizer, perhaps lsqnonlin.
David Kriebel on 9 Jan 2020
Yes, it should be a fitting procedure (with lsqnonlin, as you said), if I decide to use a training set of C and B matrices.
EDIT:
"It is not just that Y is not unique. The extraction of A, given X, is also not unique."
Maybe I can get Y from a known solution of the product (A*V).'*B*(A*V) = V.'*D*V.
I have the matrix D (known), and the projection of D onto the subspace spanned by the vector basis V is the correct solution, which I know. This projection should be equal to the matrix B projected onto the subspace spanned by the modified vector basis X = A*V. So it is enough for me to get a solution for X, which is the modified new vector basis.


More Answers (3)

Matt J on 8 Jan 2020
Edited: Matt J on 8 Jan 2020
Looks like it would just be,
A = sqrtm(B)\sqrtm(C)/V;
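For the square, invertible-V case (and symmetric positive definite B and C; as the later comments note, the actual V here is non-square) this one-liner can be checked directly. An illustrative NumPy/SciPy translation with made-up data:

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(1)
n = 4

# Made-up symmetric positive definite B and C, invertible (square) V
Mb = rng.standard_normal((n, n))
B = Mb @ Mb.T + n * np.eye(n)
Mc = rng.standard_normal((n, n))
C = Mc @ Mc.T + n * np.eye(n)
V = rng.standard_normal((n, n))

sB = sqrtm(B).real             # real symmetric square roots for SPD input
sC = sqrtm(C).real

# MATLAB's  A = sqrtm(B)\sqrtm(C)/V  translated literally
A = np.linalg.solve(sB, sC) @ np.linalg.inv(V)

# Then A*V = inv(sB)*sC, so (A*V).'*B*(A*V) = sC*inv(sB)*B*inv(sB)*sC = sC*sC = C
residual = (A @ V).T @ B @ (A @ V) - C
```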

  5 Comments
David Kriebel on 8 Jan 2020
It's 2 equations and 100 unknowns... or, if A is diagonal, then 10 unknowns.
Matt J on 8 Jan 2020
"It's 2 equations and 100 unknowns."
I don't see how it's 2 equations. Both sides of the equation are 2x2 matrices, so that gives 4 conditions.
or if A is diagonal then 10 unknowns
Is A diagonal? Do you want us to assume that?
David Kriebel on 8 Jan 2020
For a first solution we could assume A is diagonal, but maybe there is also a solution for a full matrix A.



Matt J on 8 Jan 2020
Edited: Matt J on 8 Jan 2020
for a first solution we could assume A is diagonal.
An iterative solution:
fun=@(a) objective(a,B,C,V);
a=lsqnonlin(fun, ones(length(B),1) );
A=diag(a);
function F=objective(a,B,C,V)
A=diag(a);
F=C-(A*V).'*B*(A*V);
end
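The same diagonal-A parameterization in SciPy's least_squares, for anyone following along outside MATLAB (random made-up data; the C here is constructed from a known diagonal A so a zero-residual fit exists):

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)
n, k = 10, 2

# Made-up data: symmetric B, basis V, and C built to be exactly reachable
Mb = rng.standard_normal((n, n))
B = Mb @ Mb.T
V = rng.standard_normal((n, k))
a_true = rng.uniform(0.5, 1.5, n)
At = np.diag(a_true)
C = (At @ V).T @ B @ (At @ V)

def objective(a):
    A = np.diag(a)
    F = C - (A @ V).T @ B @ (A @ V)
    return F.ravel()           # least_squares wants a flat residual vector

sol = least_squares(objective, np.ones(n))
```

Because there are only 3 independent equations (C is symmetric 2x2) against 10 unknowns, many vectors a drive the residual to zero; the solver just returns one of them.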

  8 Comments
David Kriebel on 9 Jan 2020
Yes, but the principle should be the same, even for a first try.
Matt J on 9 Jan 2020
With the modified code below, and after suitable normalization of your B,C,V data (note that this doesn't change the solutions A), I obtain a solution a that indeed is not very different from the initial guess, but it is different and measurably improves the error. As I told you, the initial guess of a=ones(N,1) was almost optimal to begin with.
                                         First-Order                 Norm of
 Iteration  Func-count     Residual      optimality     Lambda         step
     0         3331      0.000683151        0.37          0.01
     1         6664      0.000681193        0.074            1     0.00211418
     2         9999      0.000671141        0.0379       10000    2.57346e-05
C =
    0.0042      0.0000
    0.0000    227.3884
>> Error(a)
ans =
   -0.0259    0.0000
    0.0000   -0.0000
Modified code:
load matlab.mat
B=sparse(B/1e4);
C=C/1e14;
V=V/1e5;
Error=@(A) objective(A,B,C,V);
opts=optimoptions(@lsqnonlin,'MaxIterations',10);
a=lsqnonlin(Error, ones(length(B),1) ,[],[],opts );
function F=objective(a,B,C,V)
N=length(B);
A=spdiags(a(:),0,N,N);
F=C-(A*V).'*B*(A*V);
end
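A side note on the normalization: scaling B by s_B, V by s_V and C by s_C leaves the solution set for A unchanged exactly when s_C = s_B*s_V^2, and the factors above satisfy this, since 1e4*(1e5)^2 = 1e14. A quick check with made-up data:

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 6, 2

# Made-up data and an arbitrary candidate A
Mb = rng.standard_normal((n, n))
B = Mb @ Mb.T
V = rng.standard_normal((n, k))
A = rng.standard_normal((n, n))

sB, sV = 1e4, 1e5
sC = sB * sV**2                # consistent scaling: 1e4 * (1e5)^2 = 1e14

F = (A @ V).T @ B @ (A @ V)                           # original quadratic form
Fs = (A @ (V / sV)).T @ (B / sB) @ (A @ (V / sV))     # with normalized data

# Fs equals F/sC, so F == C holds iff Fs == C/sC: same solution set for A
```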
David Kriebel on 9 Jan 2020
OK, thanks for that.
So that means that restricting A to a diagonal matrix is not optimal for this specific problem.



David Kriebel on 9 Jan 2020
Edited: David Kriebel on 9 Jan 2020
If I had many different B-C combinations, i.e. different B and C matrices, but the vector basis V and the scaling matrix A were the same for all these combinations, could I improve my solution to make it more unique?
EDIT:
I can make the solution unique if I have a known solution for Y that can be expressed through another known matrix D:
Y.'*Y = V.'*D*V = V.'*D^(1/2)*D^(1/2)*V
Then Y becomes:
Y = D^(1/2)*V
From there I can use the Cholesky decomposition as described above to recover A in the original subspace spanned by the basis V.
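An illustrative NumPy sketch of this final recipe, with made-up positive definite B and D and a random full-column-rank V (names are mine):

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(4)
n, k = 10, 2

# Made-up positive definite B and D, full-column-rank basis V
Mb = rng.standard_normal((n, n))
B = Mb @ Mb.T + n * np.eye(n)
Md = rng.standard_normal((n, n))
D = Md @ Md.T + n * np.eye(n)
V = rng.standard_normal((n, k))

Y = sqrtm(D).real @ V          # Y.'*Y = V.'*D*V since sqrtm(D) is symmetric
R = np.linalg.cholesky(B).T    # upper triangular, B = R.'*R
X = np.linalg.solve(R, Y)      # undo Y = R*X, so X.'*B*X = Y.'*Y
A = X @ np.linalg.pinv(V)      # then A*V = X because V has full column rank

lhs = (A @ V).T @ B @ (A @ V)  # should match the target V.'*D*V
rhs = V.T @ D @ V
```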
Thanks for all your ideas, guesses and help!
I got it to work now and obtain a unique solution!
