Different minimum results for a convex function using fminsearch

The function I am using has already been proven to be convex.
1. initial point = [0,0,0,0,0,0]
myfun = @(x)(......)
[x, fval] = fminsearch(myfun, [0,0,0,0,0,0])
2. initial point = [0.2,0.2,0.2,0.2,0.2,0.2]
myfun = @(x)(......)
[x, fval] = fminsearch(myfun, [0.2,0.2,0.2,0.2,0.2,0.2])
The "fval" results are quite different.
I expected that if "myfun" is a convex function, the results should be essentially the same. Even though fminsearch only finds a local minimum, a convex function has a single minimum, so the result should not depend on the initial point.
My guess: is the iteration limit too low?
Help me!
  5 Comments
Hyeyun Yang on 20 Feb 2017
% Fixed anchor points
sx = [0.4994 0.4994 0.2967];
sy = [0.8971 0.6985 0.3978];
tx = [0.1008 0.9003 0.6976];
ty = [0.2051 0.2051 0.4036];
% Euclidean distances; the unknowns are three points
% p12 = (x(1),x(2)), p13 = (x(3),x(4)), p23 = (x(5),x(6)).
s1p12 = @(x) hypot(sx(1)-x(1), sy(1)-x(2));
s1p13 = @(x) hypot(sx(1)-x(3), sy(1)-x(4));
p12p13 = @(x) hypot(x(1)-x(3), x(2)-x(4));
t1p12 = @(x) hypot(tx(1)-x(1), ty(1)-x(2));
t1p13 = @(x) hypot(tx(1)-x(3), ty(1)-x(4));
s2p12 = @(x) hypot(sx(2)-x(1), sy(2)-x(2));
s2p23 = @(x) hypot(sx(2)-x(5), sy(2)-x(6));
p12p23 = @(x) hypot(x(1)-x(5), x(2)-x(6));
t2p12 = @(x) hypot(tx(2)-x(1), ty(2)-x(2));
t2p23 = @(x) hypot(tx(2)-x(5), ty(2)-x(6));
s3p13 = @(x) hypot(sx(3)-x(3), sy(3)-x(4));
s3p23 = @(x) hypot(sx(3)-x(5), sy(3)-x(6));
p13p23 = @(x) hypot(x(3)-x(5), x(4)-x(6));
t3p13 = @(x) hypot(tx(3)-x(3), ty(3)-x(4));
t3p23 = @(x) hypot(tx(3)-x(5), ty(3)-x(6));
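The objective itself is elided in the question ("myfun = @(x)(......)"). One plausible reconstruction, an assumption on my part and not confirmed anywhere in the thread, is the total length of the network, i.e. the sum of the distance terms defined in this comment:

```matlab
% Hypothetical reconstruction of the elided objective (an assumption):
% minimize the total length of all the distances defined above.
myfun = @(x) s1p12(x) + s1p13(x) + p12p13(x) + t1p12(x) + t1p13(x) + ...
             s2p12(x) + s2p23(x) + p12p23(x) + t2p12(x) + t2p23(x) + ...
             s3p13(x) + s3p23(x) + p13p23(x) + t3p13(x) + t3p23(x);
% Reproduce the experiment: two starting points, compare fval.
[x1, fval1] = fminsearch(myfun, zeros(1,6));
[x2, fval2] = fminsearch(myfun, 0.2*ones(1,6));
[fval1, fval2]   % a noticeable gap here means the solver stopped early
```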


Accepted Answer

John D'Errico on 20 Feb 2017 (edited 20 Feb 2017)
fminsearch is a very simple optimizer, so I am not at all surprised to see this happen.
1. A 6 dimensional problem is near the upper limit of where I would recommend using fminsearch.
2. ANY optimizer will converge to slightly different solutions, stopping when it decides it is close enough. Close enough is determined by when the objective function is not changing by a significant amount. If the solver hits the iteration or function evaluation limit, then it will also stop looking.
3. If your objective function is flat enough, then an optimizer can also easily terminate far away from the true minimum.
Solution? Start by using a better optimizer. Even then, don't be surprised if some other numerical optimizer does exactly the same thing.
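As a concrete illustration of points 2 and 3, fminsearch's stopping behavior can be tuned through optimset. This is a sketch with a placeholder convex objective, since the real one is not shown in the question:

```matlab
% Tighter tolerances and higher limits make independent runs agree more
% closely (at the cost of more function evaluations). TolX, TolFun,
% MaxIter, and MaxFunEvals are documented fminsearch options.
opts = optimset('TolX', 1e-10, 'TolFun', 1e-10, ...
                'MaxIter', 1e5, 'MaxFunEvals', 1e5);
myfun = @(x) sum((x - 0.1).^2);   % placeholder convex objective
[x1, fval1] = fminsearch(myfun, zeros(1,6), opts);
[x2, fval2] = fminsearch(myfun, 0.2*ones(1,6), opts);
abs(fval1 - fval2)                % should now be very small
```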
  2 Comments
Hyeyun Yang on 20 Feb 2017 (edited by Walter Roberson on 20 Feb 2017)
Thank you. Then I will look for a better optimizer.
One more question: what does "flat enough" mean here?
Walter Roberson on 20 Feb 2017
A function with a slope of (for example) 1e-15 might have all of the probe values within the current tolerance, so the solver cannot detect that the slope is real and worth chasing for a long way.
You can construct functions of the form
-1/(delta + (x-c)^p)
for large even integer p and positive delta. These functions have a sharp valley close to x == c, but the delta term, together with the fact that (x-c)^p is non-negative for even p, prevents a singularity; instead the function has a minimum of -1/delta at x == c, and the "effective" catch basin of that minimum can be made arbitrarily tight by increasing p. Calculus has no problem dealing with this situation, but numeric algorithms have finite tolerance and give up.
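An extreme version of this construction can be demonstrated directly (the parameter values below are illustrative choices, not from the thread). Far from c, the function is numerically indistinguishable from zero, so every vertex of fminsearch's simplex sees the same value and the search never leaves the plateau:

```matlab
c = 0.3; delta = 1e-3; p = 100;      % illustrative choices
f = @(x) -1 ./ (delta + (x - c).^p);
% True minimum: f(c) = -1/delta = -1000.
% Started far away, (x-c)^p overflows to Inf, so f evaluates to exactly 0
% at every probe point; with no detectable change, the simplex just
% shrinks in place and fminsearch stops at the start point.
[xmin, fval] = fminsearch(f, 2000)
```

Here fval comes back as 0, nowhere near the true minimum of -1000, even though the function is smooth and has a genuine minimum at x == c.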


More Answers (0)
