Unconstrained optimization: providing a good initial guess does not lead to lower calculation times

Dear all,
I have a simple optimization problem in which I approximate a set of points by a B-spline. This is done by optimizing the B-spline coefficients such that the distance between the functions, evaluated at a set of collocation points, is minimized.
For this typical unconstrained optimization problem the preferred method is obviously lsqnonlin, but fminunc can be used as well. I tried both. The thing that surprises me is the following:
When I provide a good initial guess to the optimization problem, on average it does not seem to reduce the calculation time significantly. In some cases it even increases the calculation time.
I also noticed similar behaviour using IPOPT.
Does anyone have a clue about what causes this? I can think of e.g. scaling as something that can have an effect.
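For reference, a minimal sketch of the kind of setup I mean (assuming the Curve Fitting Toolbox functions augknt, spmak and fnval; all names and values here are just illustrative), timing a cold start against a warm start:
t = linspace(0, 1, 200).';               % collocation points
y = sin(2*pi*t) + 0.01*randn(size(t));   % data to approximate
knots = augknt(linspace(0, 1, 12), 4);   % cubic B-spline knot vector
nCoef = numel(knots) - 4;                % number of spline coefficients
resid = @(c) fnval(spmak(knots, c.'), t) - y;  % residuals at the collocation points
tic; cFit = lsqnonlin(resid, zeros(nCoef, 1)); tCold = toc;  % cold start
tic; lsqnonlin(resid, cFit + 1e-3);            tWarm = toc;  % warm start near the solution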

Answers (2)

Bjorn Gustavsson on 22 Mar 2012
Maybe it is so that your error-function is "nice"? By nice I mean that it doesn't have a large number of valleys and gorges (in the simple-to-imagine 2-parameter/2-D case) meandering down towards your optimal point. In that case the optimizer might be very efficient in the first few steps when starting from far away.
  2 Comments
Martijn on 22 Mar 2012
You mean convex, I guess.
It is true that, of course, it might converge very fast when starting from further away. However, this does not explain why it sometimes takes more time when using a close initial guess!
Bjorn Gustavsson on 22 Mar 2012
Well, not necessarily convex - but something like that, maybe "not too concave" or "not badly buckled". I was toying with a 2-D function that might be tricky for optimizers:
fTough = @(x,y) sin(sqrt(x.^2 + y.^2)).^2 .* sin(atan2(y,x) + x/3).^2 ...
    + x.^2/1000 + y.^2/1000 + y/100;
I haven't checked whether this has local minima, but an optimizer would have to twist and turn to get to the global minimum.
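As a quick, hypothetical way to watch an optimizer struggle with it (the starting points are arbitrary):
obj = @(p) fTough(p(1), p(2));            % wrap for a single vector argument
opts = optimset('Display', 'off');
tic; pFar = fminunc(obj, [20; 20], opts); tFar = toc;   % start far away
tic; fminunc(obj, pFar + 0.1, opts);      tNear = toc;  % start near the found minimum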



Martijn on 22 Mar 2012
I can also guess that it might have something to do with the estimation of the gradient. Close to the optimum, the cost function might be very flat, making the finite-difference estimate very poor.
As a result, when providing the solution from a previous optimization as the initial guess, it still takes some iterations to determine that it is indeed optimal.
  1 Comment
Bjorn Gustavsson on 22 Mar 2012
Yes, at the very least it has to make sure that the function increases in every direction, and that should take at least 2 function evaluations per parameter.
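A minimal sketch of that point (the flat cost function here is purely hypothetical): with central finite differences, merely confirming stationarity at a perfect initial guess already costs about 2 function evaluations per parameter, which the solver's funcCount makes visible:
n = 10;
flat = @(x) 1e-8 * sum(x.^4);            % very flat, minimum at x = 0
opts = optimset('Display', 'off', 'FinDiffType', 'central');
[~, ~, ~, output] = fminunc(flat, zeros(n, 1), opts);
disp(output.funcCount)                   % >= 2*n although x0 is already optimal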

