How to refine a feedforward neural network after training?
I'm trying to establish a surrogate-based solver using the NN toolbox together with the local and global solver toolboxes. I want a method that does something similar to this:
Given a fine function f(x).
Sample f(x) in a domain.
Train a feedforward network to approximate f(x).
While condition, do:
    Search for the solution (minimum) of the approximated function.
    Evaluate f(x) at that solution.
    Refine the network by including that point.
end while.
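The loop above can be sketched in MATLAB roughly as follows. This is only a minimal sketch: `feedforwardnet`, `train`, and `fmincon` are real toolbox functions, but the objective `f`, the 2-D box bounds, the sample count, and the fixed iteration count standing in for "while condition" are all placeholder assumptions.

```matlab
% Hypothetical expensive objective; replace with the real fine function.
f = @(x) sum(x.^2, 1);                   % column-wise, R^2 -> R

lb = -5*ones(2,1);  ub = 5*ones(2,1);    % assumed 2-D box domain
X  = lb + (ub - lb).*rand(2, 50);        % initial random samples
Y  = f(X);

net = feedforwardnet(10);                % one hidden layer, 10 neurons
net.trainParam.showWindow = false;       % suppress the training GUI
net = train(net, X, Y);                  % initial surrogate fit

for iter = 1:20                          % stand-in for "while condition"
    % Minimize the surrogate with a local solver (Optimization Toolbox).
    x0   = lb + (ub - lb).*rand(2,1);
    xmin = fmincon(@(x) net(x), x0, [], [], [], [], lb, ub, [], ...
                   optimoptions('fmincon', 'Display', 'off'));
    % Evaluate the true function there and add the new point.
    X = [X, xmin];
    Y = [Y, f(xmin)];
    % Note: train() on an already-trained net continues from the
    % current weights, so this is a warm start, not a fresh fit.
    net = train(net, X, Y);
end
```

For a global search over the surrogate, `fmincon` with a random start could be swapped for a multistart or `ga`-style solver; the loop structure stays the same.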
My problem comes when I try to refine the network using adapt(). I've read the documentation and some other threads, and it seems possible to initialize the network for adapt with the structure I previously obtained with train, but I get lost in the naming conventions, which use weights and delays in similar ways.
My questions are:
- Can I actually refine the network using adapt() on top of a network already trained with train()?
- If so, how should it be done? That is, which variables should I store between calls and pass to adapt() for this to work correctly? (Actual code would be nice.)
- If I use adapt() this way, will the new points carry more weight than the previous ones? Can I control this in any way, so the original network doesn't get completely destroyed?
- If adapt() cannot be used this way, which other approaches could I try? I'm currently retraining the network from scratch every iteration with train(), adding the new points to the list of initial samples, but this is not very desirable, as it would scale poorly for bigger problems.
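On the first two questions: a network object returned by train() can be passed directly to adapt(), since the trained weights live inside the network object itself; nothing needs to be stored separately between calls. A hedged sketch follows, assuming `X`/`Y` are the original samples and `xNew`/`yNew` the new point, and that the default weight/bias learning function (learngdm for feedforwardnet) is in use; the learning-rate value and the number of adapt passes are made-up tuning knobs, not recommendations.

```matlab
net = feedforwardnet(10);
net.trainParam.showWindow = false;
net = train(net, X, Y);                      % initial batch training

% adapt() keeps the trained weights and makes incremental updates.
% Shrinking the learning rate of the (default) learngdm learning
% function limits how far a single new point can pull the fit.
lr = 1e-3;                                   % assumed value; tune it
net.inputWeights{1,1}.learnParam.lr = lr;
net.layerWeights{2,1}.learnParam.lr = lr;
net.biases{1}.learnParam.lr = lr;
net.biases{2}.learnParam.lr = lr;

% Several passes over the new point, since each update is small.
for pass = 1:10
    [net, yhat, err] = adapt(net, xNew, yNew);
end
```

Regarding the weighting question: adapt() only sees the data you pass it, so updating on the new point alone does bias the network toward it. Mixing a few of the old samples into each adapt() call, or keeping the learning rate small as above, are the usual ways to keep the original fit from being destroyed.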
Thank you all in advance for your time.
Edit: I would like to point out that I'm using the NN only to find the global minimum of a problem. So the function approximation doesn't have to be very good initially; I then want to improve it around the possible minima by adding points. I'm not too concerned with the effectiveness of the process; I just want to know if it can be done, and test it to see for myself whether it suits what I want.
Answers (1)
Greg Heath
on 8 Oct 2016
My previous post shows that the assumption that combining TRAIN and ADAPT guarantees success is not necessarily a good one:
http://www.mathworks.com/matlabcentral/answers/302930-effect-by-splitting-data-in-training-neural-network
Hope this helps.
Thank you for formally accepting my answer
Greg
1 comment
Javier Tapia
on 10 Oct 2016