# fminsearch

Find minimum of unconstrained multivariable function using derivative-free method

## Syntax

```
x = fminsearch(fun,x0)
x = fminsearch(fun,x0,options)
x = fminsearch(problem)
[x,fval] = fminsearch(...)
[x,fval,exitflag] = fminsearch(...)
[x,fval,exitflag,output] = fminsearch(...)
```

## Description

`fminsearch` finds the minimum of a scalar function of several variables, starting at an initial estimate. This is generally referred to as unconstrained nonlinear optimization.

`x = fminsearch(fun,x0)` starts at the point `x0` and returns a value `x` that is a local minimizer of the function described in `fun`. `x0` can be a scalar, vector, or matrix. `fun` is a `function_handle`.

Parameterizing Functions in the MATLAB® Mathematics documentation explains how to pass additional parameters to your objective function `fun`. See also Example 2 and Example 3 below.

`x = fminsearch(fun,x0,options)` minimizes with the optimization parameters specified in the structure `options`. You can define these parameters using the `optimset` function. `fminsearch` uses these `options` structure fields:

- `Display`: Level of display. `'off'` displays no output; `'iter'` displays output at each iteration; `'final'` displays just the final output; `'notify'` (the default) displays output only if the function does not converge. See Iterative Display in MATLAB Mathematics for more information.
- `FunValCheck`: Check whether objective function values are valid. `'on'` displays an error when the objective function returns a value that is `complex`, `Inf`, or `NaN`. `'off'` (the default) displays no error.
- `MaxFunEvals`: Maximum number of function evaluations allowed.
- `MaxIter`: Maximum number of iterations allowed.
- `OutputFcn`: User-defined function that is called at each iteration. See Output Functions in MATLAB Mathematics for more information.
- `PlotFcns`: Plots various measures of progress while the algorithm executes. Select from predefined plots or write your own. Pass a function handle or a cell array of function handles. The default is none (`[]`).
  - `@optimplotx` plots the current point.
  - `@optimplotfval` plots the function value.
  - `@optimplotfunccount` plots the function count.

  See Plot Functions in MATLAB Mathematics for more information.
- `TolFun`: Termination tolerance on the function value.
- `TolX`: Termination tolerance on `x`.
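As a sketch of how these fields fit together (the objective and tolerance values here are illustrative, not defaults):

```
% Request iterative display and tighter-than-default tolerances,
% then minimize a simple quadratic (illustrative objective)
options = optimset('Display','iter','TolX',1e-8,'TolFun',1e-8);
x = fminsearch(@(x) x(1)^2 + 3*x(2)^2, [1, 2], options);
```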

`x = fminsearch(problem)` finds the minimum for `problem`, where `problem` is a structure with the following fields:

- `objective`: Objective function
- `x0`: Initial point for `x`
- `solver`: `'fminsearch'`
- `options`: Options structure created using `optimset`
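For illustration, a `problem` structure for a simple quadratic objective (the objective and starting point here are placeholders) might look like:

```
problem.objective = @(x) x(1)^2 + 3*x(2)^2;   % illustrative objective
problem.x0 = [1, 2];                          % initial point
problem.solver = 'fminsearch';
problem.options = optimset('Display','final');
x = fminsearch(problem);
```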

`[x,fval] = fminsearch(...)` returns in `fval` the value of the objective function `fun` at the solution `x`.

`[x,fval,exitflag] = fminsearch(...)` returns a value `exitflag` that describes the exit condition of `fminsearch`:

- `1`: `fminsearch` converged to a solution `x`.
- `0`: Maximum number of function evaluations or iterations was reached.
- `-1`: The algorithm was terminated by the output function.

`[x,fval,exitflag,output] = fminsearch(...)` returns a structure `output` that contains information about the optimization in the following fields:

- `algorithm`: `'Nelder-Mead simplex direct search'`
- `funcCount`: Number of function evaluations
- `iterations`: Number of iterations
- `message`: Exit message
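For example, capturing all four outputs on a one-variable quadratic (an illustrative objective) lets you check convergence programmatically:

```
[x,fval,exitflag,output] = fminsearch(@(x) (x-3)^2, 0);
if exitflag == 1
    fprintf('Converged in %d iterations (%d evaluations)\n', ...
        output.iterations, output.funcCount);
end
```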

## Arguments

`fun` is the function to be minimized. It accepts an input `x` and returns a scalar `f`, the objective function evaluated at `x`. The function `fun` can be specified as a function handle for a function file

`x = fminsearch(@myfun, x0)`

where `myfun` is a function file such as

```
function f = myfun(x)
f = ...     % Compute function value at x
```

or as a function handle for an anonymous function, such as

`x = fminsearch(@(x)sin(x^2), x0);`

Other arguments are described in the syntax descriptions above.

## Examples

### Example 1

The Rosenbrock banana function is a classic test example for multidimensional minimization:

$f\left(x\right)=100{\left({x}_{2}-{x}_{1}^{2}\right)}^{2}+{\left(1-{x}_{1}\right)}^{2}.$

The minimum is at `(1,1)` and has the value `0`. The traditional starting point is `(-1.2,1)`. The anonymous function shown here defines the function and returns a function handle called `banana`:

`banana = @(x)100*(x(2)-x(1)^2)^2+(1-x(1))^2;`

Pass the function handle to `fminsearch`:

`[x,fval] = fminsearch(banana,[-1.2, 1])`

This produces

```
x =
    1.0000    1.0000

fval =
  8.1777e-010
```

This indicates that the minimizer was found to at least four decimal places with a value near zero.

### Example 2

If `fun` is parameterized, you can use anonymous functions to capture the problem-dependent parameters. For example, suppose you want to minimize the objective function `myfun` defined by the following function file:

```
function f = myfun(x,a)
f = x(1)^2 + a*x(2)^2;
```

Note that `myfun` has an extra parameter `a`, so you cannot pass it directly to `fminsearch`. To optimize for a specific value of `a`, such as `a = 1.5`, follow these steps:

1. Assign the value to `a`.

`a = 1.5; % define parameter first`
2. Call `fminsearch` with a one-argument anonymous function that captures that value of `a` and calls `myfun` with two arguments:

`x = fminsearch(@(x) myfun(x,a),[0,1])`
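Note that the anonymous function captures the value that `a` has when the handle is created. To solve for a different parameter value, assign the new value and re-create the handle:

```
a = 3.0;                                 % new parameter value
x = fminsearch(@(x) myfun(x,a), [0,1]);  % this handle captures a = 3.0
```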

### Example 3

You can modify the first example by adding a parameter `a` to the second term of the banana function:

$f\left(x\right)=100{\left({x}_{2}-{x}_{1}^{2}\right)}^{2}+{\left(a-{x}_{1}\right)}^{2}.$

This changes the location of the minimum to the point `[a,a^2]`. To minimize this function for a specific value of `a`, for example `a = sqrt(2)`, create a one-argument anonymous function that captures the value of `a`.

```
a = sqrt(2);
banana = @(x)100*(x(2)-x(1)^2)^2+(a-x(1))^2;
```

Then the statement

```
[x,fval] = fminsearch(banana, [-1.2, 1], ...
    optimset('TolX',1e-8));
```

seeks the minimum `[sqrt(2), 2]` to an accuracy higher than the default on `x`.

## Limitations

`fminsearch` can often handle discontinuities, particularly if they do not occur near the solution. `fminsearch` might give only local solutions.

`fminsearch` minimizes only over the real numbers; that is, `x` must consist only of real numbers, and `f(x)` must return only real numbers. When `x` has complex values, split `x` into real and imaginary parts.
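A common pattern, sketched here for a hypothetical complex objective, is to optimize over a real vector holding the real and imaginary parts:

```
fz  = @(z) abs(z^2 - (1 + 2i));       % hypothetical objective of complex z
obj = @(v) fz(complex(v(1), v(2)));   % v = [real part, imaginary part]
v = fminsearch(obj, [0, 0]);
z = complex(v(1), v(2))               % recover the complex minimizer
```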


## Algorithms

`fminsearch` uses the simplex search method of Lagarias et al. [1]. This is a direct search method that does not use numerical or analytic gradients.

If `n` is the length of `x`, a simplex in `n`-dimensional space is characterized by the `n+1` distinct vectors that are its vertices. In two-space, a simplex is a triangle; in three-space, it is a pyramid. At each step of the search, a new point in or near the current simplex is generated. The function value at the new point is compared with the function's values at the vertices of the simplex and, usually, one of the vertices is replaced by the new point, giving a new simplex. This step is repeated until the diameter of the simplex is less than the specified tolerance.
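To make the step concrete, here is a simplified sketch of a Nelder-Mead update for `n = 2`, applied to the banana function from Example 1. It keeps only reflection, expansion, inside contraction, and shrink, so it is an illustration of how the simplex moves, not the production algorithm in `fminsearch` (which follows Lagarias et al. [1]):

```
% Simplified Nelder-Mead sketch in two dimensions
f = @(x) 100*(x(2)-x(1)^2)^2 + (1-x(1))^2;   % banana function
V = [-1.2 1; -1.1 1; -1.2 1.1];              % simplex: 3 vertices (rows)
for k = 1:200
    fv = [f(V(1,:)); f(V(2,:)); f(V(3,:))];
    [fv, idx] = sort(fv);                    % order vertices best to worst
    V = V(idx,:);
    c  = mean(V(1:2,:));                     % centroid of the best two
    xr = c + (c - V(3,:));                   % reflect the worst vertex
    if f(xr) < fv(1)
        xe = c + 2*(c - V(3,:));             % better than best: try expansion
        if f(xe) < f(xr), V(3,:) = xe; else, V(3,:) = xr; end
    elseif f(xr) < fv(2)
        V(3,:) = xr;                         % accept the reflection
    else
        xc = c + 0.5*(V(3,:) - c);           % contract toward the centroid
        if f(xc) < fv(3)
            V(3,:) = xc;
        else                                 % shrink toward the best vertex
            V(2,:) = V(1,:) + 0.5*(V(2,:) - V(1,:));
            V(3,:) = V(1,:) + 0.5*(V(3,:) - V(1,:));
        end
    end
end
```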

## References

[1] Lagarias, J. C., J. A. Reeds, M. H. Wright, and P. E. Wright. "Convergence Properties of the Nelder-Mead Simplex Method in Low Dimensions." *SIAM Journal on Optimization*, Vol. 9, No. 1, 1998, pp. 112-147.