Reverse lookup table from vector multiplication with unknown vectors
Hello everyone,
I want to find a way to decompose a matrix into a product of two vectors without knowing the vectors. The idea is that I will get a lookup table and can make a good guess for the vectors this lookup table comes from. As you can imagine, the problem is that the lookup table can be built from infinitely many different vector pairs, but I just want a way to get a good guess.
So what I did was to start by building my own vector-product matrix M2D (coming from 'ax' and 'ay'):
M2D = bsxfun(@times, ax', ay);
From that I started with for and if loops:
ax_forif = zeros(1,6);
ay_forif = zeros(1,6);
for R = 1:20
    for T = 1:20
        for i = 1:length(M2D)
            ax_forif(:,i) = R;
            ay_forif(:,i) = T;
            Optimize_ay_ax = lsqr(M2D, ax_forif');   % least-squares method
            Optimize_ax_ay = lsqr(M2D', ay_forif');  % least-squares method
            Optimize = [Optimize_ay_ax, Optimize_ax_ay];
            ax_test = M2D*Optimize_ay_ax;
            ay_test = M2D'*Optimize_ax_ay;
            ax_test = ax_test';
            ay_test = ay_test';
            M2D_test = bsxfun(@times, ax_test', ay_test);
            if M2D_test(i,i) - M2D(i,i) < 0.01 && M2D_test(i,i) - M2D(i,i) > 0
                clear A B C D E F G H I J i j m ans Optimize Optimize_ax_ay Optimize_ay_ax
                return;
            end
        end
    end
end
The problem is that you have to iterate over the T parameter to get a good guess for 'ax' and 'ay'. I can see that here because I know 'ax' and 'ay' in this case, but normally that wouldn't be the case.
I want the script to work automatically and give me a good guess with a small enough error. There is also the possibility of giving intervals for 'ax_test' and 'ay_test', but I haven't found a way to implement this.
3 Comments
Torsten
on 13 Oct 2022
I'm not sure what you are trying to achieve.
Do you want to determine two column vectors a and b such that
norm(a*b'-M,'fro')
is minimized for a given matrix M?
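For instance (made-up numbers, just to illustrate the objective): if M really is an outer product, that norm is zero for the true pair and positive for any other guess.
a0 = [2; 4; 6];
b0 = [-2; 9; 16];
M  = a0*b0';                          % example matrix built from known vectors
norm(a0*b0' - M,'fro')                % 0 for the true pair
norm(ones(3,1)*ones(1,3) - M,'fro')   % > 0 for a wrong guess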
Answers (3)
David Goodmanson
on 15 Oct 2022
Edited: David Goodmanson on 15 Oct 2022
Hello Henning,
Assuming that you want column vectors a and b such that their outer product C = a*b' is as close as possible to M in the sense that
sum_{i,j} |M(i,j) - C(i,j)|^2
is minimized (as discussed), then
[u s v] = svd(M);        % singular value decomposition of M
n = 1;                   % initially
a = n*u(:,1);            % first left singular vector, scaled by n
b = (s(1,1)/n)*v(:,1);   % first right singular vector times the largest singular value
C = a*b'                 % best rank-1 approximation of M
Here n is an arbitrary scale factor. Since C is the product of a and b, you can multiply a by n, divide b by n and get the same result.
This works since the svd characterizes M as a sum of outer products of column vectors ui, row vectors vi', and positive coefficients si, i = 1, 2, ...
M = s1*u1*v1' + s2*u2*v2' + ...
s1 is the largest singular value, and in this sum of terms it most closely characterizes the matrix.
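As a quick check (just a sketch with an arbitrary example matrix Mex), the leftover error of this rank-1 fit is exactly the norm of the remaining singular values:
Mex = rand(4,5);                    % an arbitrary example matrix
[u s v] = svd(Mex);
C1 = u(:,1)*s(1,1)*v(:,1)';         % rank-1 approximation from the largest term
norm(Mex - C1,'fro')                % Frobenius error of the approximation ...
norm(diag(s(2:end,2:end)))          % ... equals the norm of the remaining singular values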
1 Comment
Bjorn Gustavsson
on 13 Oct 2022
This problem will be very "variably difficult" depending on the size of the vectors. It also might not have an easily found global optimum. If I understand your problem correctly, you can knock up a simple optimization-based solution like this:
% Sum-of-squared-residuals function. Here we create the 2 arrays by
% partitioning the first input argument (which will be our
% optimization variable) into 2 parts. To be used with FMINSEARCH
errf = @(a1a2,M,idxPart) sum((M-(a1a2(1:idxPart)*a1a2((idxPart+1):end).')).^2,'all');
% Or a function returning just the residuals. To be used with LSQNONLIN
resf = @(a1a2,M,idxPart) (M-(a1a2(1:idxPart)*a1a2((idxPart+1):end).'));
%% Test example.
% generate a matrix as the outer product of 2 arrays:
a1 = randn(3,1);
a2 = randn(4,1);
M = a1*a2.';
% Search for the arrays giving the best fit to M:
a1a2Best = fminsearch(@(a1a2) errf(a1a2,M,3),ones(7,1));
a1a2Best2 = lsqnonlin(@(a1a2) resf(a1a2,M,3),a1a2Best);
% Display the results. This is what I got for my normal-distributed
% pseudo-random arrays:
[a1a2Best,[a1;a2],a1a2Best2]
% ans =
% -0.17987 0.27207 -0.17987
% 0.77254 -1.1685 0.77253
% 0.80936 -1.2242 0.80936
% 3.1757 -2.0995 3.1757
% 0.59029 -0.39024 0.59028
% -1.0048 0.66428 -1.0048
% 1.0622 -0.70226 1.0622
M2 = a1a2Best2(1:3)*a1a2Best2(4:end).'
%M2 =
% -0.5712 -0.10617 0.18073 -0.19106
% 2.4533 0.45601 -0.77623 0.82062
% 2.5703 0.47775 -0.81324 0.85974
M
%M =
% -0.5712 -0.10617 0.18073 -0.19106
% 2.4533 0.45601 -0.77623 0.82062
% 2.5703 0.47775 -0.81324 0.85974
M2 = a1a2Best(1:3)*a1a2Best(4:end).'
% M2 =
% -0.57122 -0.10618 0.18073 -0.19107
% 2.4533 0.45602 -0.77623 0.82063
% 2.5703 0.47775 -0.81322 0.85974
It might help you to find better solutions if you use multi-start optimization searches, and if you have some known range for the elements of the arrays, for example all elements positive.
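A rough sketch of what I mean, reusing resf and M from above (the bounds lb/ub are placeholders; set them to whatever range you actually know, e.g. lb = zeros(7,1) if all elements are non-negative):
% Multi-start: repeat LSQNONLIN from several random starting points and
% keep the best fit.
lb = -Inf(7,1);                     % element-wise lower bounds (placeholder)
ub =  Inf(7,1);                     % element-wise upper bounds (placeholder)
bestRes = Inf;
for iStart = 1:20
  x0 = randn(7,1);                  % random starting guess
  [xHat,resNorm] = lsqnonlin(@(a1a2) resf(a1a2,M,3),x0,lb,ub);
  if resNorm < bestRes
    bestRes = resNorm;
    a1a2MultiStart = xHat;          % best stacked pair of arrays found so far
  end
end
a1a2MultiStart, bestRes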
HTH
0 Comments
Torsten
on 15 Oct 2022
Edited: Torsten on 15 Oct 2022
"No, I got a matrix M which is the product of two unknown vectors a and b. I want to estimate those two vectors. Sorry if I wrote it in a complicated way, I am just a beginner in programming / MATLAB."
And what is the difference to what I wrote?
% Test example
a = [2 4 6];
b = [-2 9 16];
M = a.'*b
%M = rand(13,45);
% Initial condition: x = [scal; a; b], with a and b initialized to unit vectors
x0 = [1;[1;zeros(size(M,1)-1,1)];[1;zeros(size(M,2)-1,1)]];
% Objective value for initial condition
obj0 = fun(x0,M)
options = optimset('MaxFunEvals',100000,'MaxIter',100000);
% Call solver
x = fmincon(@(x)fun(x,M),x0,[],[],[],[],[],[],@(x)nonlcon(x,M),options)
obj = fun(x,M)
n = size(M,1);
m = size(M,2);
scal = x(1);
a = x(1+1:n+1);
b = x(1+n+1:1+n+m);
Mslash = scal*a.*b.'   % scal times the outer product of a and b (implicit expansion)
function obj = fun(x,M)
n = size(M,1);
m = size(M,2);
scal = x(1);
a = x(1+1:n+1);
b = x(1+n+1:1+n+m);
Mslash = scal*a.*b.';
obj = norm(M-Mslash,'fro');
end
function [c,ceq] = nonlcon(x,M)
n = size(M,1);
m = size(M,2);
scal = x(1);
a = x(1+1:n+1);
b = x(1+n+1:1+n+m);
ceq(1) = norm(a)^2 - 1.0;
ceq(2) = norm(b)^2 - 1.0;
c = [];
end
0 Comments