Fixing Value in lsqnonlin

Shawn Z
Shawn Z on 27 Aug 2019
Edited: Matt J on 27 Aug 2019
I have noticed that, based on the lsqnonlin algorithm, the upper and lower bounds are not strictly respected at every iteration of lsqnonlin. I have a case where I am attempting to optimize a general N-by-N square array in which some of the values should occasionally be fixed to 0. The issue is illustrated by the objective function shown below.
An example K can have the form [a, 0; b, c]. If I don't include the 0 in the optimization, the code runs with no problem. However, I would like to make this function general to any N-by-N array K. The issue arises when lsqnonlin changes the 0 to something like 1e-8 even though my upper and lower bounds are both 0; after the 1./K inversion inside the function this becomes 1e8, which blows up inside expm. Is there a good way to fix the value of the 0s?
Secondary question: I only inverted K inside the optimization function because the fit didn't work very well when I was optimizing the small decimal values. It seems that lsqnonlin treats a change from, let's say, 1/500 to 1/600 as less significant than a change from 500 to 600. If this is something I can tune in the options and have simply not discovered it, I would happily optimize the non-inverted numbers.
Thanks!
function xout = Global_SAS_1(T,Y,K)
% if Y is n1 x n2, T needs to be size 1 x n2
% K can be an arbitrary matrix size. Will have units of T^-1.
Tm = zeros(numel(K),length(T));
Kinv = 1./K;
Kinv(isinf(Kinv)) = 0; % in MATLAB, 1/0 = Inf
for i = 1:length(T)
    A = expm(Kinv*T(i));
    Tm(:,i) = A(:);
end
Gm = real(Y)*pinv(Tm);
xout = Gm*Tm - Y;
end
  5 comments
Shawn Z
Shawn Z on 27 Aug 2019
Hi Matt,
Thanks for taking the time. I am running R2019a.
I wrote a quick script that illustrates the issue. If I throw an assignin('base','K',K); into the Global_SAS_1.m function, I can see that K(1,2) is no longer 0. If I instead add the line K(1,2) = 0 in Global_SAS_1.m, I get a result roughly corresponding to the K I input in the example.
% generate example dataset of the same form as SAS would generate
clear all; clc;
K = [-1/1500, 0;
      1/500, -1/3000];
T = 1:100:1e5;
for i = 1:length(T)
    A = expm(K*T(i));
    Tm(:,i) = A(:);
end
for j = 1:numel(K)
    Gm(:,j) = normpdf(1:100,rand*100,10);
end
Y = Gm * Tm;
% Run regression
options = optimoptions('lsqnonlin','Display','iter');
options.TolFun = 1e-15;
options.MaxIter = 1e4;
options.MaxFunEvals = 100000;
lb = [-1e5,  0;
       0,   -1e5];
ub = [0,    0;
      1e5,  0];
K0 = [-500,  0;
       500, -2000];
fun = @(K) Global_SAS_1(T,Y,K);
Kout = lsqnonlin(fun,K0,lb,ub,options);
Matt J
Matt J on 27 Aug 2019
Edited: Matt J on 27 Aug 2019
Note also this related thread.
You may have to explicitly reshape K to its original N-by-N dimensions in your model function if you set one or more lb(i) = ub(i).


Accepted Answer

Matt J
Matt J on 27 Aug 2019
Edited: Matt J on 27 Aug 2019
OK, I think I see what's happening. In i_sfdnls, I see this line:
% Pass in user's objective function and not the wrapped function as
% we want the Jacobian with respect to all the variables
[A,findiffevals] = sfdnls(xcurr,fvec,Jstr,group,alpha,funfcn,l,u,...
options,sizes,finDiffFlags,varargin{:});
Apparently, the code thinks it needs to do finite-difference Jacobian calculations on the original, non-reduced objective. I have no idea why that is considered necessary, but the point is that the bounds are being ignored deliberately as part of the finite differencing.
The best fix is to formulate the problem in terms of [a,b,c] (see my comment below); that is what MATLAB is attempting to automate for you anyway. However, the following modification also works, as a way of tricking the Jacobian calculation into doing the right thing:
function xout = Global_SAS_1(T,Y,K)
% if Y is n1 x n2, T needs to be size 1 x n2
% K can be an arbitrary matrix size. Will have units of T^-1.
Tm = zeros(numel(K),length(T));
Kinv = reshape(1./K,[2,2]);
Kinv(3) = 0; % hard-code the fixed entry (linear index 3, i.e. K(1,2))
for i = 1:length(T)
    A = expm(Kinv*T(i));
    Tm(:,i) = A(:);
end
Gm = real(Y)*pinv(Tm);
xout = Gm*Tm - Y;
end
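The reformulation in terms of the free entries [a,b,c] might look roughly like the following sketch. The helper name buildK and the index vector freeIdx are illustrative, not from the thread; the idea is simply that lsqnonlin only ever sees the unknowns, and the fixed zeros are rebuilt inside the objective:

```matlab
% Sketch: optimize only the free entries of K; K(1,2) is rebuilt as an
% exact 0 inside the objective, so finite differencing never perturbs it.
freeIdx = [1 2 4];            % linear indices of the free entries of a 2x2 K
x0 = [-500; 500; -2000];      % initial [a; b; c]
lb = [-1e5; 0; -1e5];
ub = [0; 1e5; 0];

fun = @(x) Global_SAS_1(T, Y, buildK(x, freeIdx, [2 2]));
xout = lsqnonlin(fun, x0, lb, ub, options);

function K = buildK(x, freeIdx, sz)
    K = zeros(sz);            % fixed entries stay exactly 0
    K(freeIdx) = x;
end
```

This also removes the need for lb(i) = ub(i) bounds on the fixed entries altogether.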
  3 comments
Shawn Z
Shawn Z on 27 Aug 2019
Hi Matt,
Thanks again for your input and digging into the underlying issue. Your idea of inputting a map of which variables to optimize will work fine for me and is a simple workaround.
While you're here, might I ask for your expertise on my second question? In general is there a reason why the optimization does worse when I try to directly optimize the K matrix in the form 1/x?
Matt J
Matt J on 27 Aug 2019
Edited: Matt J on 27 Aug 2019
If you are expressing the K(i,j) parameters in unnaturally large units (so that the values in the neighborhood of the optimum are small in magnitude), then the values of lsqnonlin's default tolerance parameters (e.g. StepTolerance and FiniteDifferenceStepSize) may be inappropriately scaled in relation to your unknowns.
You could experiment with smaller values of these tolerance parameters, but I believe it is always better just to choose judicious units for your unknown variables. Just as you wouldn't optimize the dimensions of a house with length variables measured in kilometers, you shouldn't be choosing units such that your K(i,j) are on the order of 1e-5.
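If you do want to keep the unknowns in their small-magnitude form, one knob worth experimenting with (a sketch, not something stated in the thread) is lsqnonlin's TypicalX option, which tells the solver the expected magnitude of each unknown so that finite-difference steps and tolerances are scaled accordingly:

```matlab
% Sketch: tell lsqnonlin the unknowns are on the order of 1e-3, so
% finite-difference perturbations are scaled to that magnitude.
options = optimoptions('lsqnonlin', ...
    'TypicalX', 1e-3*ones(3,1), ...   % expected magnitude of each unknown
    'StepTolerance', 1e-12);          % tighten stopping tolerance to match
```

That said, rescaling the variables themselves, as suggested above, is usually the more robust choice.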


More Answers (0)


