How can I write a self-scaling function for fmincon?

Hey,
I use fmincon and want to maximize this function:
fun = @(x) -(x(1)*x(2)*x(3))
and I do not want to change this function every time I increase or decrease the size of my optimization problem.
For example:
If I am optimizing over 6 variables, my function should look like this:
fun = @(x) -(x(1)*x(2)*x(3)*x(4)*x(5)*x(6))
Is there a way to do this automatically?
Thank you so much!

Accepted Answer

Matt J on 28 Nov 2018
Edited: Matt J on 28 Nov 2018
fun = @(x) -sum(log(x))
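The same idea can be made to scale automatically with the problem size. As a sketch (assuming, as in the original question, that only the first n variables enter the product; n is a variable introduced here for illustration):

```matlab
n = 6;                             % number of variables in the product
fun_prod = @(x) -prod(x(1:n));     % direct product form
fun_log  = @(x) -sum(log(x(1:n))); % log form; requires x(1:n) > 0
```

Changing n is then the only edit needed when the problem grows or shrinks.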

11 Comments

This would work without difficulty for positive x, but not if any components were negative.
Matt J on 28 Nov 2018
Edited: Matt J on 28 Nov 2018
Yes, that is true, but if the x(i) are all constrained to be positive, it can be important to implement it this way to avoid overflow. It also makes the objective function convex, which can be a good thing.
Since there are indefinitely many duplicate solutions (any one x(i) can be multiplied by an arbitrary non-zero real factor if another is divided by that factor), I figure the original poster must be distinguishing the valid solutions by some constraints, possibly including nonlinear and/or equality constraints. However, I am not confident at the moment that all of the variables are positive.
.... Actually, I am even less confident that the product is the real objective function; I think it is just an example.
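The overflow point is easy to see with a quick check (an illustrative example, not from the thread; the specific numbers are made up):

```matlab
x = 100*ones(1,400);   % product would be 1e800, beyond realmax (~1.8e308)
prod(x)                % returns Inf: the product overflows double precision
sum(log(x))            % finite: 400*log(100), roughly 1842.1
```

The log form stays well within range, which is why it can matter when the variables are positively constrained.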
I am adding some code for better understanding. Here you can see the fun I want to optimize. In this case it's a minimization. Thank you for all your comments.
Bd=[3 3 6 2 4 2];
Anz_Var = 18;
PV = 6;
% lb ub
lb = zeros(Anz_Var,1);
ub = zeros(Anz_Var,1);
ub = ub+10;
% Aeq beq
E_start = 4*3;
beq = zeros(8,1);
Aeq = zeros(8,Anz_Var);
%%%
Aeq(1,13:15) = 1; beq(1) = E_start;
Aeq(2,7) = 1; Aeq(2,13) = -1; Aeq(2,1) = -1; beq(2) = -Bd(1,1);
Aeq(3,8) = 1; Aeq(3,14) = -1; Aeq(3,2) = -1; beq(3) = -Bd(1,2);
Aeq(4,9) = 1; Aeq(4,15) = -1; Aeq(4,3) = -1; beq(4) = -Bd(1,3);
%%%
Aeq(5,16:18) = 1; Aeq(5,7:9) = -1; beq(5) = 0;
Aeq(6,10) = 1; Aeq(6,16) = -1; Aeq(6,4) = -1; beq(6) = -Bd(1,4);
Aeq(7,11) = 1; Aeq(7,17) = -1; Aeq(7,5) = -1; beq(7) = -Bd(1,5);
Aeq(8,12) = 1; Aeq(8,18) = -1; Aeq(8,6) = -1; beq(8) = -Bd(1,6);
%test=@(X) -prod(6)
fun = @(x) x(1)*x(2)*x(3)*x(4)*x(5)*x(6);
% x0
x0 = zeros(1,Anz_Var);
[x, fval] = fmincon(fun,x0,[],[],Aeq,beq,lb,ub)
Matt J on 29 Nov 2018
Edited: Matt J on 29 Nov 2018
Well then you have several options. Minimize the product,
[x1 fval1] = fmincon(@(x) prod(x(1:6)),x0,[],[],Aeq,beq,lb,ub);
or equivalently minimize its log,
[x2 fval2] = fmincon(@(x) sum(log(x(1:6))),x0,[],[],Aeq,beq,lb,ub);
fval2=exp(fval2);
I like version #2 a bit better, because I find it gives better convergence:
fval1 =
1.9901e-04
fval2 =
1.8983e-18
With lb = 0 and x0 = 0, the log version would involve a sum of negative infinities, which might present difficulties for the algorithm.
Matt J on 29 Nov 2018
Edited: Matt J on 29 Nov 2018
That's a legitimate concern, but I think it works out because the interior-point algorithm is used: the singularities on the boundary can only be approached asymptotically. The bigger problem is that the optimization is ill-posed. Both formulations give me lots of different solutions when I randomize the initial guess,
x0 = 0.5+rand(1,Anz_Var);
In any case, the non-log version is problematic for some reason. It always takes many more iterations to converge, often terminating because MaxIter is reached, and it always gets stuck in a local minimum. Here is an expanded version of my test code, in which I supply analytical gradients:
x0 = ones(1,Anz_Var);
opts=optimoptions(@fmincon,'SpecifyObjectiveGradient',true,'MaxIter',1e4,'MaxFunEvals',3e4);
[x1, ~,ef1,out1] = fmincon(@fun1,x0,[],[],Aeq,beq,lb,ub,[],opts); fval1=fun1(x1);
[x2, ~,ef2,out2] = fmincon(@fun2,x0,[],[],Aeq,beq,lb,ub,[],opts); fval2=fun1(x2);
fval1,
fval2
function [f,g]=fun1(x)
f=prod(x(1:6)) ;
g(1:18)=0;
g(1:6)=f./x(1:6);
end
function [f,g]=fun2(x)
f=sum(log( x(1:6) )) ;
g(1:18)=0;
g(1:6)=1./x(1:6);
end
Tim on 29 Nov 2018
Thank you for the detailed answer! I will try the fun2 solution, because it seems to provide the best minimum. I still have to work through some of the discussion around my problem.
Matt J on 29 Nov 2018
Edited: Matt J on 29 Nov 2018
Just as a small follow-up, I am finding that the first version, with prod(x), performs much better when the 'HessianFcn' option is used, but I still generally see lower objective values reached by the logged version.
Tim on 30 Nov 2018
Ah okay. I will try this as well. One additional question came to my mind: Is my code a good way to minimize each of the objective values individually or would you suggest something else?
Matt J on 30 Nov 2018
What is "each of the objective values"?


More Answers (1)

Walter Roberson on 28 Nov 2018

1 Vote

@(X) -prod(X)

2 Comments

Matt J on 28 Nov 2018
Care is needed here to avoid overflow/underflow.
Tim on 29 Nov 2018
Thank you for your answer! I appreciate that you are so passionate about solving my problem.



Asked: Tim on 28 Nov 2018
Last commented: 30 Nov 2018
