Understanding why GA's penalty profile is so similar for any run

8 views (last 30 days)
Mitsu
Mitsu on 8 Jul 2020
Answered: Alan Weiss on 9 Jul 2020
I am running a genetic algorithm with only nonlinear inequality constraints. The solution is an array of "nvars" integers that are either 0 or 1.
The setup looks like this:
ObjectiveFunction = @(x) myfunc_objective(x,par);   % function to be minimized
ConstraintFunction = @(x) myfunc_constraint(x,par); % nonlinear inequality constraints
lb = zeros(1,nvars);
ub = ones(1,nvars);
IntCon = 1:nvars;                                   % all variables are integer (binary given the bounds)
options = gaoptimset('PlotFcns', {@gaplotbestf});
options.Generations = 150;
options.PopulationSize = 1e3; % I have tried up to 1e4 for now
options.Display = 'iter';
x = ga(ObjectiveFunction,nvars,[],[],[],[],lb,ub,ConstraintFunction,IntCon,options); % single objective
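For reference, ga expects the nonlinear constraint function to return inequality and equality constraint values as two outputs, where feasibility means c <= 0 and ceq = 0. A minimal sketch of a constraint function with only inequality constraints is shown below; the fields par.A and par.b and the linear form are placeholders for illustration, not the actual myfunc_constraint:
function [c, ceq] = myfunc_constraint(x, par)
% c   - vector of nonlinear inequality constraint values, feasible when c <= 0
% ceq - nonlinear equality constraints (none in this problem)
c   = par.A * x(:) - par.b;  % placeholder for the ~500 inequality constraints
ceq = [];
end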
When I look at the plot of penalty value vs. generation, a similar pattern appears in every run:
  • For the first 35-45 generations, the penalty value decreases from about 100 to 0 or about -0.5.
  • These solutions do not satisfy all the constraints (they violate 1 to 5 of the roughly 500 constraints).
  • Then, there is a jump in best and mean penalty from around 0 to 1e4.
  • If given enough generations, the plot sometimes returns to values similar to before the jump, but not always.
I think this corresponds either to a way of escaping a local optimum, or to a way of discarding the entire population and applying a drastic change to try to find solutions that actually satisfy the constraints.
To help me understand the method a bit better, here are my three questions:
  1. If ga() prepares an initial population that is presumably different in each run, why is the plot pattern the same for every run? Shouldn't any jumps happen at different generations in different runs? And for some runs there could be more than one jump, or none at all.
  2. When I check the generations before the jump, they don't seem to satisfy all the constraints (only a few violated out of hundreds). However, the first generation after the jump does seem to satisfy the constraints. So why is the penalty value so much higher after the jump, if the solution is actually more feasible than before it?
  3. With elite counts between 5% and 50% I still get the same pattern. I thought that fixing a 50% elite count would mean keeping a large portion of the previous generation untouched at each iteration. If that is the case, how come the best and mean penalty values are not affected by this restriction?
Thank you so much for your help!
  2 Comments
Walter Roberson
Walter Roberson on 8 Jul 2020
Is it the penalty value that is high, or the mean penalty? The mean penalty would include population members that do not satisfy the constraint, and relative to the ones that do satisfy the constraint, their penalty should be high.
A configuration that satisfies more constraints could in theory end up with a higher penalty, I suspect... but I would need to re-examine how they calculate penalty.
Mitsu
Mitsu on 8 Jul 2020
Edited: Mitsu on 8 Jul 2020
Both the mean penalty and the best penalty become high, although the mean does indeed go higher (which makes sense, as far as I understand).
"A configuration that satisfies more constraints could in theory end up with a higher penalty, I suspect... but I would need to re-examine how they calculate penalty."
I see. So a solution that satisfies all constraints but has a lower fitness can end up with a higher penalty than a solution that does not satisfy all constraints but has a higher fitness. That would explain what I see in the plots.
This would answer questions 2 and 3.
---
Question 1 remains, though, regarding the randomness of the initial population and why the big jump always seems to happen around the same generation number.


Answers (1)

Alan Weiss
Alan Weiss on 9 Jul 2020
Look at the definition of penalty value in the Integer ga Algorithm:
  • If the member is feasible, the penalty function is the fitness function.
  • If the member is infeasible, the penalty function is the maximum fitness function among feasible members of the population, plus a sum of the constraint violations of the (infeasible) point.
Until there is a feasible member, every member's penalty value is simply its fitness function value, without taking feasibility into account. However, once a feasible member appears, all infeasible members' penalty values jump by at least the fitness of that feasible member. After there is at least one feasible member, the optimization proceeds as best it can.
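As a minimal sketch (not ga's internal code) of the rule quoted above, the penalty values for one generation could be computed as follows; the function name is illustrative, and violation is assumed to already hold each member's summed nonlinear constraint violation:
function p = penaltyValues(fitness, violation)
% fitness   - column vector of fitness function values, one per member
% violation - column vector of summed constraint violations (0 = feasible)
feasible = (violation == 0);
p = fitness;                              % feasible members: penalty = fitness
if any(feasible)
    worstFeasible = max(fitness(feasible));
    % infeasible members: worst feasible fitness plus their own violation
    p(~feasible) = worstFeasible + violation(~feasible);
end
% if no member is feasible, every penalty stays equal to the raw fitness,
% so the plotted best and mean penalty can jump as soon as the first
% feasible member appears
end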
Alan Weiss
MATLAB mathematical toolbox documentation

Version: R2018b
