# fminimax

Solve minimax constraint problem

## Syntax

``x = fminimax(fun,x0)``
``x = fminimax(fun,x0,A,b)``
``x = fminimax(fun,x0,A,b,Aeq,beq)``
``x = fminimax(fun,x0,A,b,Aeq,beq,lb,ub)``
``x = fminimax(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon)``
``x = fminimax(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon,options)``
``x = fminimax(problem)``
``[x,fval] = fminimax(___)``
``[x,fval,maxfval,exitflag,output] = fminimax(___)``
``[x,fval,maxfval,exitflag,output,lambda] = fminimax(___)``

## Description

`fminimax` seeks a point that minimizes the maximum of a set of objective functions.

The problem can include any type of constraint. In detail, `fminimax` seeks the minimum of a problem specified by

`$\underset{x}{\mathrm{min}}\underset{i}{\mathrm{max}}{F}_{i}\left(x\right)\text{ subject to }c\left(x\right)\le 0,\text{ }ceq\left(x\right)=0,\text{ }A\cdot x\le b,\text{ }Aeq\cdot x=beq,\text{ }lb\le x\le ub$`

where b and beq are vectors, A and Aeq are matrices, and c(x), ceq(x), and F(x) are functions that return vectors. F(x), c(x), and ceq(x) can be nonlinear functions.

x, lb, and ub can be passed as vectors or matrices; see Matrix Arguments.

You can also solve max-min problems with `fminimax`, using the identity

`$\underset{x}{\mathrm{max}}\underset{i}{\mathrm{min}}{F}_{i}\left(x\right)=-\underset{x}{\mathrm{min}}\underset{i}{\mathrm{max}}\left(-{F}_{i}\left(x\right)\right).$`
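For illustration, here is a minimal sketch of using this identity. The objective handles below are hypothetical, not part of this reference page:

```
% Solve max_x min_i F_i(x) by minimizing the maximum of the negated objectives.
F = @(x)[sin(x); cos(x)];   % example objectives F_i(x)
negF = @(x)-F(x);           % -F_i(x)
x0 = 1;
x = fminimax(negF,x0);      % minimizes max_i(-F_i(x))
maxminValue = min(F(x))     % attained value of max_x min_i F_i(x)
```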

You can solve problems of the form

`$\underset{x}{\mathrm{min}}\underset{i}{\mathrm{max}}|{F}_{i}\left(x\right)|$`

by using the `AbsoluteMaxObjectiveCount` option; see Solve Minimax Problem Using Absolute Value of One Objective.


`x = fminimax(fun,x0)` starts at `x0` and finds a minimax solution `x` to the functions described in `fun`.

### Note

Passing Extra Parameters explains how to pass extra parameters to the objective functions and nonlinear constraint functions, if necessary.


`x = fminimax(fun,x0,A,b)` solves the minimax problem subject to the linear inequalities `A*x ≤ b`.

`x = fminimax(fun,x0,A,b,Aeq,beq)` solves the minimax problem subject to the linear equalities `Aeq*x = beq` as well. If no inequalities exist, set `A = []` and `b = []`.


`x = fminimax(fun,x0,A,b,Aeq,beq,lb,ub)` solves the minimax problem subject to the bounds `lb ≤ x ≤ ub`. If no equalities exist, set `Aeq = []` and `beq = []`. If `x(i)` is unbounded below, set `lb(i) = -Inf`; if `x(i)` is unbounded above, set `ub(i) = Inf`.

### Note

See Iterations Can Violate Constraints.

### Note

If the specified input bounds for a problem are inconsistent, the output `x` is `x0` and the output `fval` is `[]`.


`x = fminimax(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon)` solves the minimax problem subject to the nonlinear inequalities `c(x)` or equalities `ceq(x)` defined in `nonlcon`. The function optimizes such that `c(x) ≤ 0` and `ceq(x) = 0`. If no bounds exist, set `lb = []` or `ub = []`, or both.


`x = fminimax(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon,options)` solves the minimax problem with the optimization options specified in `options`. Use `optimoptions` to set these options.

`x = fminimax(problem)` solves the minimax problem for `problem`, where `problem` is a structure described in `problem`. Create the `problem` structure by exporting a problem from the Optimization app, as described in Exporting Your Work.


`[x,fval] = fminimax(___)`, for any syntax, returns the values of the objective functions computed in `fun` at the solution `x`.


`[x,fval,maxfval,exitflag,output] = fminimax(___)` additionally returns the maximum value of the objective functions at the solution `x`, a value `exitflag` that describes the exit condition of `fminimax`, and a structure `output` with information about the optimization process.


`[x,fval,maxfval,exitflag,output,lambda] = fminimax(___)` additionally returns a structure `lambda` whose fields contain the Lagrange multipliers at the solution `x`.

## Examples

### Minimize Maximum of sin and cos

Create a plot of the `sin` and `cos` functions and their maximum over the interval `[-pi,pi]`.

```
t = linspace(-pi,pi);
plot(t,sin(t),'r-')
hold on
plot(t,cos(t),'b-');
plot(t,max(sin(t),cos(t)),'ko')
legend('sin(t)','cos(t)','max(sin(t),cos(t))','Location','NorthWest')
```

The plot shows two local minima of the maximum, one near 1, and the other near –2. Find the minimum near 1.

```
fun = @(x)[sin(x);cos(x)];
x0 = 1;
x1 = fminimax(fun,x0)
```
```
Local minimum possible. Constraints satisfied.

fminimax stopped because the size of the current search direction is less than
twice the value of the step size tolerance and constraints are
satisfied to within the value of the constraint tolerance.
```
```
x1 = 0.7854
```

Find the minimum near –2.

```
x0 = -2;
x2 = fminimax(fun,x0)
```
```
Local minimum possible. Constraints satisfied.

fminimax stopped because the size of the current search direction is less than
twice the value of the step size tolerance and constraints are
satisfied to within the value of the constraint tolerance.
```
```
x2 = -2.3562
```

### Solve Minimax Problem with Linear Constraint

The objective functions for this example are linear plus constants. For a description and plot of the objective functions, see Compare fminimax and fminunc.

Set the objective functions as three linear functions of the form $\mathrm{dot}\left(x,v\right)+{v}_{0}$ for three vectors $v$ and three constants ${v}_{0}$.

```
a = [1;1];
b = [-1;1];
c = [0;-1];
a0 = 2;
b0 = -3;
c0 = 4;
fun = @(x)[x*a+a0,x*b+b0,x*c+c0];
```

Find the minimax point subject to the inequality `x(1) + 3*x(2) <= -4`.

```
A = [1,3];
b = -4;
x0 = [-1,-2];
x = fminimax(fun,x0,A,b)
```
```
Local minimum possible. Constraints satisfied.

fminimax stopped because the size of the current search direction is less than
twice the value of the step size tolerance and constraints are
satisfied to within the value of the constraint tolerance.
```
```
x = 1×2

   -5.8000    0.6000
```

### Solve Minimax Problem with Bound Constraints

The objective functions for this example are linear plus constants. For a description and plot of the objective functions, see Compare fminimax and fminunc.

Set the objective functions as three linear functions of the form $\mathrm{dot}\left(x,v\right)+{v}_{0}$ for three vectors $v$ and three constants ${v}_{0}$.

```
a = [1;1];
b = [-1;1];
c = [0;-1];
a0 = 2;
b0 = -3;
c0 = 4;
fun = @(x)[x*a+a0,x*b+b0,x*c+c0];
```

Set the bounds `-2 <= x(1) <= 2` and `-1 <= x(2) <= 1`, and solve the minimax problem starting from `[0,0]`.

```
lb = [-2,-1];
ub = [2,1];
x0 = [0,0];
A = []; % No linear constraints
b = [];
Aeq = [];
beq = [];
[x,fval] = fminimax(fun,x0,A,b,Aeq,beq,lb,ub)
```
```
Local minimum possible. Constraints satisfied.

fminimax stopped because the size of the current search direction is less than
twice the value of the step size tolerance and constraints are
satisfied to within the value of the constraint tolerance.
```
```
x = 1×2

   -0.0000    1.0000
```
```
fval = 1×3

    3.0000   -2.0000    3.0000
```

In this case, the solution is not unique. Many points satisfy the constraints and have the same minimax value. Plot the surface representing the maximum of the three objective functions, and plot a red line showing the points that have the same minimax value.

```
[X,Y] = meshgrid(linspace(-2,2),linspace(-1,1));
Z = max(fun([X(:),Y(:)]),[],2);
Z = reshape(Z,size(X));
surf(X,Y,Z,'LineStyle','none')
view(-118,28)
hold on
line([-2,0],[1,1],[3,3],'Color','r','LineWidth',8)
hold off
```

### Solve Minimax Problem with Nonlinear Constraint

The objective functions for this example are linear plus constants. For a description and plot of the objective functions, see Compare fminimax and fminunc.

Set the objective functions as three linear functions of the form $\mathrm{dot}\left(x,v\right)+{v}_{0}$ for three vectors $v$ and three constants ${v}_{0}$.

```
a = [1;1];
b = [-1;1];
c = [0;-1];
a0 = 2;
b0 = -3;
c0 = 4;
fun = @(x)[x*a+a0,x*b+b0,x*c+c0];
```

The `unitdisk` function represents the nonlinear inequality constraint $‖x‖^{2}\le 1$.

`type unitdisk`
```
function [c,ceq] = unitdisk(x)
c = x(1)^2 + x(2)^2 - 1;
ceq = [];
```

Solve the minimax problem subject to the `unitdisk` constraint, starting from `x0 = [0,0]`.

```
x0 = [0,0];
A = []; % No other constraints
b = [];
Aeq = [];
beq = [];
lb = [];
ub = [];
nonlcon = @unitdisk;
x = fminimax(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon)
```
```
Local minimum possible. Constraints satisfied.

fminimax stopped because the size of the current search direction is less than
twice the value of the step size tolerance and constraints are
satisfied to within the value of the constraint tolerance.
```
```
x = 1×2

   -0.0000    1.0000
```

### Solve Minimax Problem Using Absolute Value of One Objective

`fminimax` can minimize the maximum of either ${F}_{i}\left(x\right)$ or $|{F}_{i}\left(x\right)|$ for the first several values of $i$ by using the `AbsoluteMaxObjectiveCount` option. To minimize the absolute values of $k$ of the objectives, arrange the objective function values so that ${F}_{1}\left(x\right)$ through ${F}_{k}\left(x\right)$ are the objectives for absolute minimization, and set the `AbsoluteMaxObjectiveCount` option to `k`.

In this example, minimize the maximum of `sin` and `cos`, specify `sin` as the first objective, and set `AbsoluteMaxObjectiveCount` to 1.

```
fun = @(x)[sin(x),cos(x)];
options = optimoptions('fminimax','AbsoluteMaxObjectiveCount',1);
x0 = 1;
A = []; % No constraints
b = [];
Aeq = [];
beq = [];
lb = [];
ub = [];
nonlcon = [];
x1 = fminimax(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon,options)
```
```
Local minimum possible. Constraints satisfied.

fminimax stopped because the size of the current search direction is less than
twice the value of the step size tolerance and constraints are
satisfied to within the value of the constraint tolerance.
```
```
x1 = 0.7854
```

Try starting from `x0 = -2`.

```
x0 = -2;
x2 = fminimax(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon,options)
```
```
Local minimum possible. Constraints satisfied.

fminimax stopped because the size of the current search direction is less than
twice the value of the step size tolerance and constraints are
satisfied to within the value of the constraint tolerance.
```
```
x2 = -3.1416
```

Plot the function.

```
t = linspace(-pi,pi);
plot(t,max(abs(sin(t)),cos(t)))
```

To see the effect of the `AbsoluteMaxObjectiveCount` option, compare this plot to the plot in the example Minimize Maximum of sin and cos.

### Obtain Minimax Value

Obtain both the location of the minimax point and the value of the objective functions. For a description and plot of the objective functions, see Compare fminimax and fminunc.

Set the objective functions as three linear functions of the form $\mathrm{dot}\left(x,v\right)+{v}_{0}$ for three vectors $v$ and three constants ${v}_{0}$.

```
a = [1;1];
b = [-1;1];
c = [0;-1];
a0 = 2;
b0 = -3;
c0 = 4;
fun = @(x)[x*a+a0,x*b+b0,x*c+c0];
```

Set the initial point to `[0,0]` and find the minimax point and value.

```
x0 = [0,0];
[x,fval] = fminimax(fun,x0)
```
```
Local minimum possible. Constraints satisfied.

fminimax stopped because the size of the current search direction is less than
twice the value of the step size tolerance and constraints are
satisfied to within the value of the constraint tolerance.
```
```
x = 1×2

   -2.5000    2.2500
```
```
fval = 1×3

    1.7500    1.7500    1.7500
```

All three objective functions have the same value at the minimax point. Unconstrained problems typically have at least two objectives that are equal at the solution, because if a point is not a local minimum for any objective and only one objective has the maximum value, then the maximum objective can be lowered.

### Obtain All Minimax Outputs

The objective functions for this example are linear plus constants. For a description and plot of the objective functions, see Compare fminimax and fminunc.

Set the objective functions as three linear functions of the form $\mathrm{dot}\left(x,v\right)+{v}_{0}$ for three vectors $v$ and three constants ${v}_{0}$.

```
a = [1;1];
b = [-1;1];
c = [0;-1];
a0 = 2;
b0 = -3;
c0 = 4;
fun = @(x)[x*a+a0,x*b+b0,x*c+c0];
```

Find the minimax point subject to the inequality `x(1) + 3*x(2) <= -4`.

```
A = [1,3];
b = -4;
x0 = [-1,-2];
```

Set options for iterative display, and obtain all solver outputs.

```
options = optimoptions('fminimax','Display','iter');
Aeq = []; % No other constraints
beq = [];
lb = [];
ub = [];
nonlcon = [];
[x,fval,maxfval,exitflag,output,lambda] = ...
    fminimax(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon,options)
```
```
                  Objective        Max     Line search     Directional
 Iter F-count         value    constraint   steplength      derivative   Procedure
    0      4              0             6
    1      9              5             0            1           0.981
    2     14          4.889             0            1          -0.302   Hessian modified twice
    3     19            3.4     8.132e-09            1          -0.302   Hessian modified twice

Local minimum possible. Constraints satisfied.

fminimax stopped because the size of the current search direction is less than
twice the value of the step size tolerance and constraints are
satisfied to within the value of the constraint tolerance.
```
```
x = 1×2

   -5.8000    0.6000
```
```
fval = 1×3

   -3.2000    3.4000    3.4000
```
```
maxfval = 3.4000
```
```
exitflag = 4
```
```
output = struct with fields:
         iterations: 4
          funcCount: 19
       lssteplength: 1
           stepsize: 6.0684e-10
          algorithm: 'active-set'
      firstorderopt: []
    constrviolation: 8.1323e-09
            message: '...'
```
```
lambda = struct with fields:
         lower: [2x1 double]
         upper: [2x1 double]
         eqlin: [0x1 double]
      eqnonlin: [0x1 double]
       ineqlin: 0.2000
    ineqnonlin: [0x1 double]
```

Examine the returned information:

• Two objective function values are equal at the solution.

• The solver converges in 4 iterations and 19 function evaluations.

• The `lambda.ineqlin` value is nonzero, indicating that the linear constraint is active at the solution.

## Input Arguments

### `fun`

Objective functions, specified as a function handle or function name. `fun` is a function that accepts a vector `x` and returns a vector `F`, the objective functions evaluated at `x`. You can specify the function `fun` as a function handle for a function file:

`x = fminimax(@myfun,x0)`

where `myfun` is a MATLAB® function such as

```
function F = myfun(x)
F = ...     % Compute function values at x.
```

`fun` can also be a function handle for an anonymous function:

`x = fminimax(@(x)sin(x.*x),x0);`

If the user-defined values for `x` and `F` are arrays, `fminimax` converts them to vectors using linear indexing (see Array Indexing (MATLAB)).

To minimize the worst-case absolute values of some elements of the vector F(x) (that is, to solve $\underset{x}{\mathrm{min}}\underset{i}{\mathrm{max}}|{F}_{i}\left(x\right)|$ for those elements), partition those objectives into the first elements of the vector `F` returned by `fun`, and use `optimoptions` to set the `AbsoluteMaxObjectiveCount` option to the number of these objectives. For an example, see Solve Minimax Problem Using Absolute Value of One Objective.

Assume that the gradients of the objective functions can also be computed and the `SpecifyObjectiveGradient` option is `true`, as set by:

`options = optimoptions('fminimax','SpecifyObjectiveGradient',true)`

In this case, the function `fun` must return, in the second output argument, the gradient values `G` (a matrix) at `x`. The gradient consists of the partial derivative dF/dx of each `F` at the point `x`. If `F` is a vector of length `m` and `x` has length `n`, where `n` is the length of `x0`, then the gradient `G` of `F(x)` is an `n`-by-`m` matrix where `G(i,j)` is the partial derivative of `F(j)` with respect to `x(i)` (that is, the `j`th column of `G` is the gradient of the `j`th objective function `F(j)`). If you define `F` as an array, then the preceding discussion applies to `F(:)`, the linear ordering of the `F` array. In any case, `G` is a 2-D matrix.
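For illustration, here is a minimal sketch of an objective function that also returns its gradients, using the three linear-plus-constant objectives from the examples above. The function name `myObjWithGrad` is hypothetical:

```
% Sketch: objective function returning F (1-by-3) and its gradient G (2-by-3),
% where column j of G is the gradient of objective F(j). For use with
% optimoptions('fminimax','SpecifyObjectiveGradient',true).
function [F,G] = myObjWithGrad(x)
a = [1;1]; b = [-1;1]; c = [0;-1];   % objective vectors
a0 = 2; b0 = -3; c0 = 4;             % objective constants
F = [x*a+a0, x*b+b0, x*c+c0];        % objective values at the row vector x
if nargout > 1                       % gradients requested
    G = [a, b, c];                   % G(i,j) = dF(j)/dx(i)
end
end
```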

### Note

Setting `SpecifyObjectiveGradient` to `true` is effective only when the problem has no nonlinear constraint, or when the problem has a nonlinear constraint with `SpecifyConstraintGradient` set to `true`. Internally, the objective is folded into the constraints, so the solver needs both gradients (objective and constraint) supplied in order to avoid estimating a gradient.

Data Types: `char` | `string` | `function_handle`

### `x0`

Initial point, specified as a real vector or real array. Solvers use the number of elements in `x0` and the size of `x0` to determine the number and size of variables that `fun` accepts.

Example: `x0 = [1,2,3,4]`

Data Types: `double`

### `A`

Linear inequality constraints, specified as a real matrix. `A` is an `M`-by-`N` matrix, where `M` is the number of inequalities, and `N` is the number of variables (number of elements in `x0`). For large problems, pass `A` as a sparse matrix.

`A` encodes the `M` linear inequalities

`A*x <= b`,

where `x` is the column vector of `N` variables `x(:)`, and `b` is a column vector with `M` elements.

For example, to specify

x1 + 2x2 ≤ 10
3x1 + 4x2 ≤ 20
5x1 + 6x2 ≤ 30,

enter these constraints:

```
A = [1,2;3,4;5,6];
b = [10;20;30];
```

Example: To specify that the x components sum to 1 or less, use `A = ones(1,N)` and `b = 1`.

Data Types: `double`

### `b`

Linear inequality constraints, specified as a real vector. `b` is an `M`-element vector related to the `A` matrix. If you pass `b` as a row vector, solvers internally convert `b` to the column vector `b(:)`. For large problems, pass `b` as a sparse vector.

`b` encodes the `M` linear inequalities

`A*x <= b`,

where `x` is the column vector of `N` variables `x(:)`, and `A` is a matrix of size `M`-by-`N`.

For example, to specify

x1 + 2x2 ≤ 10
3x1 + 4x2 ≤ 20
5x1 + 6x2 ≤ 30,

enter these constraints:

```
A = [1,2;3,4;5,6];
b = [10;20;30];
```

Example: To specify that the x components sum to 1 or less, use `A = ones(1,N)` and `b = 1`.

Data Types: `double`

### `Aeq`

Linear equality constraints, specified as a real matrix. `Aeq` is an `Me`-by-`N` matrix, where `Me` is the number of equalities, and `N` is the number of variables (number of elements in `x0`). For large problems, pass `Aeq` as a sparse matrix.

`Aeq` encodes the `Me` linear equalities

`Aeq*x = beq`,

where `x` is the column vector of `N` variables `x(:)`, and `beq` is a column vector with `Me` elements.

For example, to specify

x1 + 2x2 + 3x3 = 10
2x1 + 4x2 + x3 = 20,

enter these constraints:

```
Aeq = [1,2,3;2,4,1];
beq = [10;20];
```

Example: To specify that the x components sum to 1, use `Aeq = ones(1,N)` and `beq = 1`.

Data Types: `double`

### `beq`

Linear equality constraints, specified as a real vector. `beq` is an `Me`-element vector related to the `Aeq` matrix. If you pass `beq` as a row vector, solvers internally convert `beq` to the column vector `beq(:)`. For large problems, pass `beq` as a sparse vector.

`beq` encodes the `Me` linear equalities

`Aeq*x = beq`,

where `x` is the column vector of `N` variables `x(:)`, and `Aeq` is a matrix of size `Me`-by-`N`.

For example, to specify

x1 + 2x2 + 3x3 = 10
2x1 + 4x2 + x3 = 20,

enter these constraints:

```
Aeq = [1,2,3;2,4,1];
beq = [10;20];
```

Example: To specify that the x components sum to 1, use `Aeq = ones(1,N)` and `beq = 1`.

Data Types: `double`

### `lb`

Lower bounds, specified as a real vector or real array. If the number of elements in `x0` is equal to the number of elements in `lb`, then `lb` specifies that

`x(i) >= lb(i)` for all `i`.

If `numel(lb) < numel(x0)`, then `lb` specifies that

`x(i) >= lb(i)` for `1 <= i <= numel(lb)`.

If there are fewer elements in `lb` than in `x0`, solvers issue a warning.

Example: To specify that all x components are positive, use `lb = zeros(size(x0))`.

Data Types: `double`

### `ub`

Upper bounds, specified as a real vector or real array. If the number of elements in `x0` is equal to the number of elements in `ub`, then `ub` specifies that

`x(i) <= ub(i)` for all `i`.

If `numel(ub) < numel(x0)`, then `ub` specifies that

`x(i) <= ub(i)` for `1 <= i <= numel(ub)`.

If there are fewer elements in `ub` than in `x0`, solvers issue a warning.

Example: To specify that all x components are less than 1, use `ub = ones(size(x0))`.

Data Types: `double`

### `nonlcon`

Nonlinear constraints, specified as a function handle or function name. `nonlcon` is a function that accepts a vector or array `x` and returns two arrays, `c(x)` and `ceq(x)`.

• `c(x)` is the array of nonlinear inequality constraints at `x`. `fminimax` attempts to satisfy

`c(x) <= 0` for all entries of `c`.

• `ceq(x)` is the array of nonlinear equality constraints at `x`. `fminimax` attempts to satisfy

`ceq(x) = 0` for all entries of `ceq`.

For example,

`x = fminimax(@myfun,x0,...,@mycon)`

where `mycon` is a MATLAB function such as the following:

```
function [c,ceq] = mycon(x)
c = ...      % Compute nonlinear inequalities at x.
ceq = ...    % Compute nonlinear equalities at x.
```

Suppose that the gradients of the constraints can also be computed and the `SpecifyConstraintGradient` option is `true`, as set by:

`options = optimoptions('fminimax','SpecifyConstraintGradient',true)`

In this case, the function `nonlcon` must also return, in the third and fourth output arguments, `GC`, the gradient of `c(x)`, and `GCeq`, the gradient of `ceq(x)`. See Nonlinear Constraints for an explanation of how to “conditionalize” the gradients for use in solvers that do not accept supplied gradients.

If `nonlcon` returns a vector `c` of `m` components and `x` has length `n`, where `n` is the length of `x0`, then the gradient `GC` of `c(x)` is an `n`-by-`m` matrix, where `GC(i,j)` is the partial derivative of `c(j)` with respect to `x(i)` (that is, the `j`th column of `GC` is the gradient of the `j`th inequality constraint `c(j)`). Likewise, if `ceq` has `p` components, the gradient `GCeq` of `ceq(x)` is an `n`-by-`p` matrix, where `GCeq(i,j)` is the partial derivative of `ceq(j)` with respect to `x(i)` (that is, the `j`th column of `GCeq` is the gradient of the `j`th equality constraint `ceq(j)`).
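For illustration, here is a minimal sketch extending the `unitdisk` constraint from the examples above to return gradients. The name `unitdiskgrad` is hypothetical:

```
% Sketch: nonlinear constraint returning gradients GC and GCeq when requested.
% For use with optimoptions('fminimax','SpecifyConstraintGradient',true).
function [c,ceq,GC,GCeq] = unitdiskgrad(x)
c = x(1)^2 + x(2)^2 - 1;    % nonlinear inequality: ||x||^2 <= 1
ceq = [];                   % no nonlinear equalities
if nargout > 2              % gradients requested
    GC = [2*x(1); 2*x(2)];  % column j of GC is the gradient of c(j)
    GCeq = [];              % no equality constraints, so empty gradient
end
end
```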

### Note

Setting `SpecifyConstraintGradient` to `true` is effective only when `SpecifyObjectiveGradient` is set to `true`. Internally, the objective is folded into the constraint, so the solver needs both gradients (objective and constraint) supplied in order to avoid estimating a gradient.

### Note

Because Optimization Toolbox™ functions accept only inputs of type `double`, user-supplied objective and nonlinear constraint functions must return outputs of type `double`.

See Passing Extra Parameters for an explanation of how to parameterize the nonlinear constraint function `nonlcon`, if necessary.

Data Types: `char` | `function_handle` | `string`

### `options`

Optimization options, specified as the output of `optimoptions` or a structure such as `optimset` returns.

Some options are absent from the `optimoptions` display. These options appear in italics in the following table. For details, see View Options.

For details about options that have different names for `optimset`, see Current and Legacy Option Name Tables.

`AbsoluteMaxObjectiveCount`

Number of elements of ${F}_{i}\left(x\right)$ for which to minimize the absolute value of ${F}_{i}$. See Solve Minimax Problem Using Absolute Value of One Objective.

For `optimset`, the name is `MinAbsMax`.

`ConstraintTolerance`

Termination tolerance on the constraint violation (a positive scalar). The default is `1e-6`. See Tolerances and Stopping Criteria.

For `optimset`, the name is `TolCon`.

*Diagnostics*

Display of diagnostic information about the function to be minimized or solved. The choices are `'on'` or `'off'` (the default).

*DiffMaxChange*

Maximum change in variables for finite-difference gradients (a positive scalar). The default is `Inf`.

*DiffMinChange*

Minimum change in variables for finite-difference gradients (a positive scalar). The default is `0`.

`Display`

Level of display (see Iterative Display):

• `'off'` or `'none'` displays no output.

• `'iter'` displays output at each iteration, and gives the default exit message.

• `'iter-detailed'` displays output at each iteration, and gives the technical exit message.

• `'notify'` displays output only if the function does not converge, and gives the default exit message.

• `'notify-detailed'` displays output only if the function does not converge, and gives the technical exit message.

• `'final'` (default) displays only the final output, and gives the default exit message.

• `'final-detailed'` displays only the final output, and gives the technical exit message.

`FiniteDifferenceStepSize`

Scalar or vector step size factor for finite differences. When you set `FiniteDifferenceStepSize` to a vector `v`, the forward finite differences `delta` are

`delta = v.*sign′(x).*max(abs(x),TypicalX);`

where `sign′(x) = sign(x)` except `sign′(0) = 1`. Central finite differences are

`delta = v.*max(abs(x),TypicalX);`

Scalar `FiniteDifferenceStepSize` expands to a vector. The default is `sqrt(eps)` for forward finite differences, and `eps^(1/3)` for central finite differences.

For `optimset`, the name is `FinDiffRelStep`.

`FiniteDifferenceType`

Type of finite differences used to estimate gradients, either `'forward'` (default) or `'central'` (centered). `'central'` takes twice as many function evaluations, but is generally more accurate.

The algorithm is careful to obey bounds when estimating both types of finite differences. For example, it might take a backward difference, rather than a forward difference, to avoid evaluating at a point outside the bounds.

For `optimset`, the name is `FinDiffType`.

`FunctionTolerance`

Termination tolerance on the function value (a positive scalar). The default is `1e-6`. See Tolerances and Stopping Criteria.

For `optimset`, the name is `TolFun`.

*FunValCheck*

Check that signifies whether the objective function and constraint values are valid. `'on'` displays an error when the objective function or constraints return a value that is `complex`, `Inf`, or `NaN`. The default `'off'` displays no error.

`MaxFunctionEvaluations`

Maximum number of function evaluations allowed (a positive integer). The default is `100*numberOfVariables`. See Tolerances and Stopping Criteria and Iterations and Function Counts.

For `optimset`, the name is `MaxFunEvals`.

`MaxIterations`

Maximum number of iterations allowed (a positive integer). The default is `400`. See Tolerances and Stopping Criteria and Iterations and Function Counts.

For `optimset`, the name is `MaxIter`.

*MaxSQPIter*

Maximum number of SQP iterations allowed (a positive integer). The default is `10*max(numberOfVariables, numberOfInequalities + numberOfBounds)`.

*MeritFunction*

If this option is set to `'multiobj'` (the default), use the goal attainment or minimax merit function. If this option is set to `'singleobj'`, use the `fmincon` merit function.

`OptimalityTolerance`

Termination tolerance on the first-order optimality (a positive scalar). The default is `1e-6`. See First-Order Optimality Measure.

For `optimset`, the name is `TolFun`.

`OutputFcn`

One or more user-defined functions that an optimization function calls at each iteration. Pass a function handle or a cell array of function handles. The default is none (`[]`). See Output Function Syntax.

`PlotFcn`

Plots showing various measures of progress while the algorithm executes. Select from predefined plots or write your own. Pass a name, function handle, or cell array of names or function handles. For custom plot functions, pass function handles. The default is none (`[]`).

• `'optimplotx'` plots the current point.

• `'optimplotfunccount'` plots the function count.

• `'optimplotfval'` plots the objective function values.

• `'optimplotconstrviolation'` plots the maximum constraint violation.

• `'optimplotstepsize'` plots the step size.

Custom plot functions use the same syntax as output functions. See Output Functions and Output Function Syntax, and the sketch after this table.

For `optimset`, the name is `PlotFcns`.

*RelLineSrchBnd*

Relative bound (a real nonnegative scalar value) on the line search step length such that the total displacement in `x` satisfies |Δx(i)| ≤ relLineSrchBnd·max(|x(i)|,|typicalx(i)|). This option provides control over the magnitude of the displacements in `x` when the solver takes steps that are too large. The default is none (`[]`).

*RelLineSrchBndDuration*

Number of iterations for which the bound specified in `RelLineSrchBnd` should be active. The default is `1`.

`SpecifyConstraintGradient`

Gradient for nonlinear constraint functions defined by the user. When this option is set to `true`, `fminimax` expects the constraint function to have four outputs, as described in `nonlcon`. When this option is set to `false` (the default), `fminimax` estimates gradients of the nonlinear constraints using finite differences.

For `optimset`, the name is `GradConstr` and the values are `'on'` or `'off'`.

`SpecifyObjectiveGradient`

Gradient for the objective function defined by the user. Refer to the description of `fun` to see how to define the gradient. Set this option to `true` to have `fminimax` use a user-defined gradient of the objective function. The default, `false`, causes `fminimax` to estimate gradients using finite differences.

For `optimset`, the name is `GradObj` and the values are `'on'` or `'off'`.

`StepTolerance`

Termination tolerance on `x` (a positive scalar). The default is `1e-6`. See Tolerances and Stopping Criteria.

For `optimset`, the name is `TolX`.

*TolConSQP*

Termination tolerance on the inner iteration SQP constraint violation (a positive scalar). The default is `1e-6`.

`TypicalX`

Typical `x` values. The number of elements in `TypicalX` is equal to the number of elements in `x0`, the starting point. The default value is `ones(numberofvariables,1)`. The `fminimax` function uses `TypicalX` for scaling finite differences for gradient estimation.

`UseParallel`

Option for using parallel computing. When this option is set to `true`, `fminimax` estimates gradients in parallel. The default is `false`. See Parallel Computing.

Example: `optimoptions('fminimax','PlotFcn','optimplotfval')`
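As an illustration of the `OutputFcn` and `PlotFcn` entries above, here is a minimal sketch of a custom plot function; the name `myPlotFcn` is hypothetical:

```
% Sketch: custom plot function using the output function syntax.
% Plots the maximum objective value at each iteration.
function stop = myPlotFcn(x,optimValues,state)
stop = false;                  % return true to halt the solver
switch state
    case 'iter'                % called after each iteration
        plot(optimValues.iteration,max(optimValues.fval),'bo')
        hold on
        xlabel('Iteration')
        ylabel('Max objective value')
end
end
```

Pass the handle in options, for example `optimoptions('fminimax','PlotFcn',@myPlotFcn)`.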

### `problem`

Problem structure, specified as a structure with the fields in this table.

| Field Name | Entry |
| --- | --- |
| `objective` | Objective function `fun` |
| `x0` | Initial point for `x` |
| `Aineq` | Matrix for linear inequality constraints |
| `bineq` | Vector for linear inequality constraints |
| `Aeq` | Matrix for linear equality constraints |
| `beq` | Vector for linear equality constraints |
| `lb` | Vector of lower bounds |
| `ub` | Vector of upper bounds |
| `nonlcon` | Nonlinear constraint function |
| `solver` | `'fminimax'` |
| `options` | Options created with `optimoptions` |

You must supply at least the `objective`, `x0`, `solver`, and `options` fields in the `problem` structure.

The simplest way to obtain a `problem` structure is to export the problem from the Optimization app.

Data Types: `struct`
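For illustration, here is a minimal sketch that assembles the structure by hand for the linear-constraint example above, rather than exporting it from the app:

```
% Sketch: building a problem structure manually (equivalent to exporting
% from the Optimization app). Objective and constraint values are taken
% from the linear-constraint example above.
a = [1;1]; b = [-1;1]; c = [0;-1];
problem.objective = @(x)[x*a+2, x*b-3, x*c+4];
problem.x0 = [-1,-2];
problem.Aineq = [1,3];   % linear inequality: x(1) + 3*x(2) <= -4
problem.bineq = -4;
problem.solver = 'fminimax';
problem.options = optimoptions('fminimax');
x = fminimax(problem)
```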

## Output Arguments

### `x`

Solution, returned as a real vector or real array. The size of `x` is the same as the size of `x0`. Typically, `x` is a local solution to the problem when `exitflag` is positive. For information on the quality of the solution, see When the Solver Succeeds.

### `fval`

Objective function values at the solution, returned as a real array. Generally, `fval` = `fun(x)`.

### `maxfval`

Maximum of the objective function values at the solution, returned as a real scalar. `maxfval = max(fval(:))`.

### `exitflag`

Reason `fminimax` stopped, returned as an integer.

| Exit flag | Description |
| --- | --- |
| `1` | Function converged to a solution `x` |
| `4` | Magnitude of the search direction was less than the specified tolerance, and the constraint violation was less than `options.ConstraintTolerance` |
| `5` | Magnitude of the directional derivative was less than the specified tolerance, and the constraint violation was less than `options.ConstraintTolerance` |
| `0` | Number of iterations exceeded `options.MaxIterations` or the number of function evaluations exceeded `options.MaxFunctionEvaluations` |
| `-1` | Stopped by an output function or plot function |
| `-2` | No feasible point was found |

### `output`

Information about the optimization process, returned as a structure with the fields in this table.

| Field | Description |
| --- | --- |
| `iterations` | Number of iterations taken |
| `funcCount` | Number of function evaluations |
| `lssteplength` | Size of the line search step relative to the search direction |
| `constrviolation` | Maximum of the constraint functions |
| `stepsize` | Length of the last displacement in `x` |
| `algorithm` | Optimization algorithm used |
| `firstorderopt` | Measure of first-order optimality |
| `message` | Exit message |

### `lambda`

Lagrange multipliers at the solution, returned as a structure with the fields in this table.

| Field | Description |
| --- | --- |
| `lower` | Lower bounds corresponding to `lb` |
| `upper` | Upper bounds corresponding to `ub` |
| `ineqlin` | Linear inequalities corresponding to `A` and `b` |
| `eqlin` | Linear equalities corresponding to `Aeq` and `beq` |
| `ineqnonlin` | Nonlinear inequalities corresponding to the `c` in `nonlcon` |
| `eqnonlin` | Nonlinear equalities corresponding to the `ceq` in `nonlcon` |

## Algorithms

`fminimax` solves a minimax problem by converting it into a goal attainment problem, and then solving the converted goal attainment problem using `fgoalattain`. The conversion sets all goals to 0 and all weights to 1. See Equation 1 in Multiobjective Optimization Algorithms.
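For intuition, here is a minimal sketch of the equivalent `fgoalattain` call (all goals 0, all weights 1), using the sin/cos objectives from the first example:

```
% Sketch: the goal attainment problem that fminimax constructs internally.
% With goal = 0 and weight = 1, fgoalattain minimizes gamma subject to
% F_i(x) <= gamma, which is exactly the minimax problem.
fun = @(x)[sin(x);cos(x)];
x0 = 1;
goal = [0;0];      % all goals set to 0
weight = [1;1];    % all weights set to 1
x = fgoalattain(fun,x0,goal,weight)   % comparable to fminimax(fun,x0)
```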