Optimization of matrices with random initialization
Jing Xie
on 19 Oct 2021
Commented: Jing Xie
on 21 Oct 2021
Hi everyone,
I want to optimize a function
where the pi are the optimization variables, matrices of different sizes (each pi_bar has the same size as the corresponding pi).
I reshape all the pi into one single column vector so that I can use fminunc to solve the problem.
The problem is unconstrained, but yi is updated using pi. The algorithm also involves some random initialization (using randn) at the beginning for some variables.
pi_bar and yi_bar are already known.
Case 1: When I run the optimization, it returns different values each time, which is understandable, since there is random initialization in the algorithm.
Case 2: I fix the random initialization, using rng for example. The solver then stops with the message "maximum number of function evaluations has been exceeded", even if I set the limit very high (500000).
It seems that the algorithm only finds a better point when the random initialization happens to be better. Is there a better way to cope with random initialization in an optimization problem? And what could be the reason for case 2?
Thanks a lot for any suggestion in advance!
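For reference, packing several matrices of different sizes into one column vector for fminunc can be sketched like this. The sizes and the quadratic objective are placeholders, not the actual problem from the question:

```matlab
% Placeholder sizes for p1, p2, ... (not the real problem sizes)
szs = {[3 2], [4 4]};

% Known pi_bar with matching sizes (placeholder data)
p_bar = cellfun(@(s) randn(s), szs, 'UniformOutput', false);

% pack: stack every matrix into one column vector for fminunc
pack = @(P) cell2mat(cellfun(@(M) M(:), P(:), 'UniformOutput', false));

% Random initial guess, packed the same way
x0 = pack(cellfun(@(s) randn(s), szs, 'UniformOutput', false));

fun = @(x) objfun(x, szs, p_bar);
x   = fminunc(fun, x0, optimoptions('fminunc', 'Display', 'off'));

function f = objfun(x, szs, p_bar)
% Rebuild the matrices from the single column, then evaluate.
f = 0; k = 0;
for i = 1:numel(szs)
    n  = prod(szs{i});
    Pi = reshape(x(k+1:k+n), szs{i});   % recover pi in its original shape
    k  = k + n;
    f  = f + norm(Pi - p_bar{i}, 'fro')^2;   % placeholder objective
end
end
```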
0 comments
Accepted Answer
Matt J
on 20 Oct 2021
Make sure your objective function code does not contain any randomization steps. Your initial guess can be random, but the objective function itself needs to be deterministic. Aside from that, nothing can be diagnosed without seeing your code.
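To illustrate the distinction with a minimal sketch (the objective here is a placeholder): seed the generator once for the initial guess, but keep the objective itself free of any randn/rand calls, because fminunc's finite-difference gradients are meaningless on a noisy objective:

```matlab
rng(0)                        % fix only the random initial guess
x0 = randn(10, 1);

% Deterministic objective: no randn/rand inside
fun = @(x) sum((x - 1).^2);

x = fminunc(fun, x0);

% By contrast, an objective like this changes value on every call,
% so the solver can never satisfy its stopping tests:
% bad = @(x) sum((x - 1).^2) + randn;   % do NOT do this
```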
8 comments
Matt J
on 20 Oct 2021
Edited: Matt J
on 20 Oct 2021
That doesn't clarify why you are not optimizing xp. Surely the accuracy of the prediction depends jointly on xp and the other parameters: if you change x1...x8, your prediction can become worse if you don't change xp as well.
Also, you say that your initial guess of x1...x8 came from a previously trained LSTM. Why not use the xp from that network as well (regardless of whether xp is treated as an unknown or not)?
More Answers (1)
Alan Weiss
on 19 Oct 2021
You would probably do well to use the Problem-Based Optimization Workflow. But you can just as easily change your current solution method to use a more efficient algorithm. The point is that lsqnonlin is the solver of choice for sum-of-squares problems: your objective function should return the vector of residuals, and lsqnonlin implicitly sums the squares and minimizes.
That said, I might be misunderstanding your problem. You said that your yi are functions of the pi, and I do not see that connection in your problem formulation. So I might have it wrong somehow.
In any case, see whether the problem-based formulation makes sense for you and whether it chooses a more efficient solver.
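As a sketch of the lsqnonlin calling convention (with a placeholder residual, since the actual model is not shown): the objective returns the vector of residuals, not their sum of squares:

```matlab
% Placeholder known data
p_bar = randn(5, 1);

% Residual function: return the vector r(x); lsqnonlin minimizes sum(r.^2)
res = @(x) x - p_bar;

x0 = randn(5, 1);
x  = lsqnonlin(res, x0);   % no need to square or sum yourself
```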
Alan Weiss
MATLAB mathematical toolbox documentation