How to pass multiple values of x to the objective function in GA / Patternsearch

I'm looking for ways to speed up the computation of my optimisation process. I've already used the PROFILE command to pinpoint bottlenecks in my code and have optimised the code to a point that I am happy with.
Unfortunately, the biggest chunk of time in the optimisation process occurs within the objective function which runs Finite Element solvers.
After discussing this at length with the support team of my FE package, I learned that their software has built-in efficiencies for running solvers and is capable of running multiple solves at once.
Is there some way that I can force the GA and Patternsearch to pass an array of values for x into the objective function and accept a vector of objective function values?
For example, say x is a vector of length 20 (representing FE model parameters) and my FE solver is able to solve 8 FE models simultaneously. Can I get the GA and Patternsearch to chunk their outputs of x into an array of 8 rows and 20 columns, then accept a vector of length 8 as the objective function values?
I am guessing that, if this is possible, I would need to ensure that my GA population size was a multiple of 8 and, likewise, that the number of polling points in the Patternsearch was also a multiple of 8.
Any idea if this is possible? Btw, I'm already using Parallel Computing, so please don't suggest that I use Parallel Computing.

Accepted Answer

Matt J on 14 Jun 2013
Edited: Matt J on 14 Jun 2013
"Can I get the GA and Patternsearch to chunk their outputs of x into an array of 8 rows and 20 columns, then accept a vector of length 8 as the objective function values?"
I think the practical approach might be to increase the problem dimension from 20 to 160 and, inside your objective function, split the vector X of length 160 into 8 smaller vectors of length 20 to form your batch of FE jobs. You would then calculate your 8 original objective function values as a batch, but return only the minimum among those 8 values as the final output of this new, redefined objective function.
To compensate for the increased computational expense stemming from the increased dimension of X, however, you would reduce the size of your population by a factor of 8.
You would of course have to reformulate any constraints you have in this higher-dimensional space as well, along with the bounds of your initial population, if you are specifying them manually.
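Here is a minimal sketch of that setup, assuming a hypothetical batch interface solveFEBatch that accepts an 8-by-20 matrix (one FE parameter set per row) and returns an 8-by-1 vector of objective values; the PopulationSize shown is just an illustrative value (your original population divided by 8):

% batchObjective.m -- redefined objective over R^160
function fmin = batchObjective(x)
    X = reshape(x, 20, 8)';       % 8 rows of 20: one FE parameter set per row
    f = solveFEBatch(X);          % hypothetical: one call solves all 8 FE models
    fmin = min(f);                % the solver only sees the best of the batch
end

% Driver: lb20/ub20 are your original length-20 bound vectors
lb = repmat(lb20, 1, 8);                      % replicate bounds 8 times
ub = repmat(ub20, 1, 8);
opts = gaoptimset('PopulationSize', 25);      % e.g. 200/8
xbest160 = ga(@batchObjective, 160, [], [], [], [], lb, ub, [], opts);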
3 Comments
Matt J on 14 Jun 2013
Edited: Matt J on 14 Jun 2013
"The problem I foresee is that the value of the objective function would relate to 20 out of the 160 parameters. From a practical point of view, there would be no way of knowing which group of 20 parameters was the correct one."
Remember, your objective function is allowed to return more than one output argument. The first output has to be the minimum over the 8 subgroups, and that is the one GA/patternsearch will use, but additional output arguments can give you the subgroup in R^20 that attains this minimum.
When GA/patternsearch terminates with a solution in R^160, you would feed that solution back in one last call to the objective function and, by requesting the additional output arguments, extract the minimizing subgroup. That will be the solution in R^20 that you are looking for.
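A sketch of that pattern, reusing the hypothetical solveFEBatch from above; ga and patternsearch only ever request the first output, so the extra outputs cost nothing during the run:

function [fmin, xbest20, kbest] = batchObjective(x)
    X = reshape(x, 20, 8)';
    f = solveFEBatch(X);
    [fmin, kbest] = min(f);       % kbest = index of the winning subgroup
    xbest20 = X(kbest, :);        % the minimizing solution in R^20
end

% One last call after the solver returns xbest160 recovers the R^20 answer:
[fval, xSolution20] = batchObjective(xbest160);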
An alternative would be to use OutputFcn to extract the subgroup that you are actually interested in. That would save you that one redundant final function call. My only concern with that is that I don't know how OutputFcn works in conjunction with parallelization.
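For reference, an untested sketch of a ga output function (patternsearch uses a different output-function signature). Note that this version still spends one objective call at termination; capturing the subgroup with no extra call would require caching inside the objective, which is exactly where parallel workers could complicate matters:

function [state, options, optchanged] = trackBest(options, state, flag)
    optchanged = false;
    if strcmp(flag, 'done')                     % only act at termination
        [~, i] = min(state.Score);              % best individual in R^160
        [~, xbest20] = batchObjective(state.Population(i, :));
        assignin('base', 'xSolution20', xbest20);
    end
end

% Enable with: opts = gaoptimset(opts, 'OutputFcns', @trackBest);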
"I also suspect that the objective function (as a function of the 160 parameters) would not be smooth (although this shouldn't be a problem for the GA or Patternsearch)."
That's what I'm banking on as well.
"The other issue I see with this approach is that, by increasing the dimensions of the search space 8-fold, the GA will have to work a lot longer in order to poll enough regions within the search space."
Well, success isn't completely certain, but I don't think failure is certain either. Each of your function calls is polling and eliminating an additional 7 out of 8 solutions in R^20 compared to what it was doing before. In theory, you should be able to cover the same search space with 1/8 the number of seed points in R^160.
The only thing that's a little unclear is how the solution vectors mutate in R^160, and whether it's possible to get each of the 8 subgroups to mutate as part of this larger vector with the same randomness as when they were mutating separately in R^20. But I don't see why not...
Matt J on 14 Jun 2013
Edited: Matt J on 14 Jun 2013
Well, no, that's the point. Each of your function calls is polling and eliminating an additional 7 out of 8 solutions in R^20 compared to what it was doing before.
You may have to introduce additional constraints among the 8 subgroups so that the solver doesn't test redundant solutions. For example,
x1>=x2
x2>=x3
...
x7>=x8
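One way to impose such an ordering as linear inequality constraints A*x <= b is to order the subgroups by, say, their first component alone, which is already enough to break the 8-fold permutation symmetry (the choice of component is an assumption here):

% Row k enforces: first component of subgroup k >= first component of subgroup k+1
A = zeros(7, 160);
for k = 1:7
    A(k, (k-1)*20 + 1) = -1;      % -x_k(1)
    A(k,  k   *20 + 1) = +1;      % +x_{k+1}(1)
end
b = zeros(7, 1);
% Then: xbest160 = ga(@batchObjective, 160, A, b, [], [], lb, ub, [], opts)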
Also, instead of returning the minimum of f(x1),...,f(x8), it may be worthwhile to return the average of f(x1),...,f(x8). That way, the fitness function will be penalized if any of x1,...,x8 is a bad choice. That would also restore smoothness to the fitness function.
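In the sketch above, that is a one-line change, and the subgroup-extraction outputs can stay exactly as they were:

function [favg, xbest20, kbest] = batchObjective(x)
    X = reshape(x, 20, 8)';
    f = solveFEBatch(X);           % hypothetical batch interface, as before
    favg = mean(f);                % every subgroup now influences the fitness
    [~, kbest] = min(f);           % still track the best subgroup
    xbest20 = X(kbest, :);
end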


More Answers (0)
