fmincon with handle function input
Hynod
on 9 Sep 2023
Commented: Walter Roberson
on 22 Oct 2023
Hello, I have a question about the usage of the fmincon routine. I've read the documentation, and the call requires an explicit function with a certain number of variables and coefficients that determine the equality and inequality constraints. I wanted to ask whether, instead of an explicit function, it's possible to pass a function handle, such as a numerical integrator (for example, one that varies initial conditions to minimize the integration output). In the documentation, the only example that shows the use of a function handle still includes an explicit function within the handle.
20 comments
Yousef
on 22 Oct 2023
Edited: Walter Roberson
on 22 Oct 2023
Yes, you can definitely use a function handle as an input to `fmincon`, and the function that the handle refers to can be quite complex. In fact, many optimization problems involve non-trivial functions, including numerical integrators.
Let's break down your question a bit:
1. **Function Handle for Objective Function**: Your objective function can be a numerical integrator or any other complex routine. Here's a simple way to set it up:
```matlab
function cost = myObjectiveFunction(x)
% Here, x is the vector of variables you're optimizing.
% You can use x as initial conditions or parameters for your numerical integrator.
% ... Your complex routine, e.g., numerical integration
result = myIntegrator(x); % This is just a hypothetical function
% The objective is to minimize 'result'
cost = result;
end
% Call fmincon
x0 = [initial_guesses]; % replace with your initial guesses
[x_optimal, fval] = fmincon(@myObjectiveFunction, x0, [], []); % A = [], b = [] when there are no linear constraints
```
2. **Equality and Inequality Constraints**: If you have constraints that involve complex operations or other integrations, you can use function handles for those as well.
```matlab
function [c, ceq] = myConstraints(x)
% c(x) <= 0 constraints
c = ...; % your inequality constraints
% ceq(x) = 0 constraints
ceq = ...; % your equality constraints
end
% Call fmincon
options = optimoptions('fmincon','SpecifyConstraintGradient',false);
[x_optimal, fval] = fmincon(@myObjectiveFunction, x0, [], [], [], [], [], [], @myConstraints, options);
```
3. **Using Integrators within the Optimization Routine**: If you're using MATLAB's built-in numerical integrators (like `ode45`, etc.), you can easily embed them inside your objective function or constraints:
```matlab
function cost = myObjectiveFunction(x)
[~, Y] = ode45(@(t,y) myODE(t, y, x), tspan, initial_conditions);
% Process Y to get the cost, if necessary
cost = ...; % some function of Y
end
```
Here, `myODE` would be another function that defines your differential equations. The point is, `fmincon` doesn't care how the function value is computed. It just needs a function handle to obtain values for given inputs.
In essence, as long as you can define your problem in terms of functions (handles) that `fmincon` can call to get objective values, constraints, and optionally gradients, you can use any complex logic within those functions, including numerical integrators.
Walter Roberson
on 22 Oct 2023
In essence, as long as you can define your problem in terms of functions (handles) that `fmincon` can call to get objective values, constraints, and optionally gradients, you can use any complex logic within those functions, including numerical integrators.
Both fmincon() and ode45() require that the functions they invoke have continuous first derivatives (and, in some cases, continuous second derivatives as well). Therefore you cannot use arbitrary logic in those functions.
The particular system being modeled has four phases. If there is a smooth transition (continuous through the second derivative) between the phases, then fmincon() and ode45() should be able to handle it, but if there is an abrupt transition anywhere in the mix then you would need to break the logic up into multiple parts.
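One way to get such a smooth transition is to blend the two phase dynamics with a C^2 weighting function instead of switching abruptly. This is a minimal sketch, not part of the thread: `phase1`, `phase2`, `t_switch`, and `width` are all hypothetical placeholders you would replace with your own dynamics and transition parameters.

```matlab
% Sketch: blend two phase dynamics smoothly around a hypothetical
% transition time t_switch using a quintic smoothstep weight, so the
% combined right-hand side is continuous through the second derivative.
function dxdt = blendedODE(t, x)
    t_switch = 5;     % assumed transition time
    width    = 0.5;   % assumed blend width
    s = min(max((t - t_switch)/width + 0.5, 0), 1); % clamp to [0, 1]
    w = 6*s^5 - 15*s^4 + 10*s^3;                    % quintic smoothstep (C^2)
    dxdt = (1 - w)*phase1(t, x) + w*phase2(t, x);   % hypothetical phase dynamics
end
```

Outside the blend window the weight is exactly 0 or 1, so each phase's original dynamics are recovered away from the transition.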
Accepted Answer
Sam Chak
on 11 Sep 2023
I'm unsure if this is what you want to minimize using fmincon(). This is just a simple example without constraints. It returns a solution that is very close to the true solution (the equilibrium point).
% search the initial states that minimize the norm using fmincon
initial_guess = [10 -10];
[x0sol, fval] = fmincon(@objfun, initial_guess)
% objective function <-- solve the Lander ODE with ode45 and find the norm
function J = objfun(x0)
tspan = [0 10];
init = [x0(1) x0(2)]; % initial states (equilibrium)
[t, x] = ode45(@odefcn, tspan, init);
J = norm(x);
end
% system <-- you already have this one (the Lander ODE)
function dxdt = odefcn(t, x)
A = [0 1; -1 -sqrt(3)];
dxdt = A*x;
end
0 comments
More Answers (3)
Matt J
on 12 Sep 2023
Edited: Matt J
on 12 Sep 2023
The problem is that it is not an explicit function, so it seems that fmincon never stops finding the minimum
It is possible that fmincon is struggling to find the minimum, but that shouldn't be related to the "explicitness" of the function. There certainly is no requirement that fun(p) be implemented in a single-line formula, if that's what worries you.
Note however, that the solution to an ODE can be a discontinuous (and therefore non-differentiable) function of the initial state. That can be a problem, since fmincon assumes fun(p) to be continuous and differentiable. Note also that numerical ODE solvers like ode113() have certain fragilities. This is why MathWorks has posted some relevant guidelines about problems involving ODEs, which you should read.
so it seems that fmincon never stops finding the minimum, how can I limit the iterations?
opts=optimoptions('fmincon','MaxIterations',100);
p=fmincon(fun,____,opts)
Generally, though, if you have to cut fmincon off after some number of iterations, it means the optimization is failing.
13 comments
Sam Chak
on 14 Sep 2023
As you can see in my example below, we can work on a vector (state x) and return a scalar representing the Euclidean norm. This scalar can be used as the objective function value. However, in your code, you take the norm of the final state value, which also returns a scalar. Nevertheless, there is a significant difference in the physical meaning between the vector norm and the end-state norm.
% solving the 1st-order ODE
[t, x] = ode45(@(t, x) - x, [0 10], 1);
% check the size of vector, x(t)
Sx = size(x)
% vector-based norm
Nx = norm(x)
% end-state norm
Ne = norm(x(end))
% exponential decay solution
plot(t, x, 'linewidth', 1.5), grid on
Walter Roberson
on 16 Sep 2023
Unfortunately at the moment I do not really understand the difference between gamultiobj and paretosearch . It looks like they use different algorithms, with gamultiobj() using genetic algorithm approaches, and paretosearch using pattern searching; https://www.mathworks.com/help/gads/paretosearch-algorithm.html
Hynod
on 16 Sep 2023
2 comments
Walter Roberson
on 22 Oct 2023
gamultiobj() for one internally generates a number of starting points (number == "population size") based upon the upper and lower bound. You can use the options for gamultiobj to supply a particular starting population.
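Supplying your own starting population works through the 'InitialPopulationMatrix' option (rows are individuals, columns are variables). This is a minimal sketch with made-up bounds and a toy two-objective fitness function, not code from the thread:

```matlab
% Sketch: pass a custom starting population to gamultiobj via
% 'InitialPopulationMatrix' (each row is one candidate point).
nvars = 2;
lb = [-5 -5];  ub = [5 5];
P0 = lb + rand(20, nvars).*(ub - lb);   % 20 starting points within the bounds
opts = optimoptions('gamultiobj', 'InitialPopulationMatrix', P0);
fitness = @(x) [x(1)^2 + x(2)^2, (x(1) - 1)^2 + x(2)^2];  % toy objectives
[xp, fp] = gamultiobj(fitness, nvars, [], [], [], [], lb, ub, opts);
```

If the supplied matrix has fewer rows than the population size, gamultiobj fills in the rest of the population itself.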