How can I refine a feedforward neural network after training?

I'm trying to build a surrogate-based solver using the Neural Network Toolbox and the local and global solver toolboxes. I want a method that does something similar to this:
Given the expensive ("fine") function f(x).
Sample f(x) over a domain.
Train a feedforward network to approximate f(x).
While condition, do:
Search for the solution (minimum) of the approximated function.
Evaluate f(x) at that solution.
Refine the network to include that point.
end while.
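The loop above can be sketched in MATLAB roughly as follows. This is only a minimal sketch: the choice of f, the bounds, the network size, and the use of fminsearch as a stand-in for a Global Optimization Toolbox solver (e.g. ga or patternsearch) are all illustrative assumptions, and the refinement step shown is the simple full-retrain variant that question 4 below asks about replacing.

```matlab
% Sketch of the surrogate loop described above (illustrative setup:
% f stands in for the expensive function; the domain is 1-D).
f  = @(x) x.^2 + sin(5*x);          % placeholder for the expensive f(x)
lb = -2; ub = 2;

X = linspace(lb, ub, 20);           % initial samples over the domain
Y = f(X);

net = feedforwardnet(10);           % 10 hidden neurons (arbitrary choice)
net = train(net, X, Y);

for k = 1:5                         % "while condition" simplified to a fixed count
    % Minimize the surrogate. fminsearch is an unconstrained local
    % stand-in for a global solver (ga, patternsearch, ...).
    xNew = fminsearch(@(x) net(x), 0);
    yNew = f(xNew);                 % evaluate the true function there

    % Refine: append the new point and retrain. Note that train()
    % continues from the current weights of an already trained network
    % unless the network is explicitly re-initialized with init().
    X = [X, xNew];  Y = [Y, yNew];
    net = train(net, X, Y);
end
```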
My problem comes when I try to refine the network using adapt(). I've read the documentation and some other threads, and it seems possible to initialize the network so that adapt can be used with the structure I previously obtained with train, but I get lost in the naming conventions, which use weights and delays in similar ways.
My questions are:
  1. Can I actually refine the network by calling adapt() on a network already trained with train()?
  2. If so, how should it be done? That is, which variables should I store between calls and pass to adapt for this to work correctly? (Actual code would be nice.)
  3. If I use adapt this way, will the new points carry more weight than the previous ones? Can I control this in any way, so the original network doesn't get completely destroyed?
  4. If adapt cannot be used this way, which other approaches could I try? I'm currently retraining the network from scratch every iteration with train(), adding the new points to the list of initial samples, but this is undesirable because it would scale poorly for bigger problems.
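For questions 1–3, here is a hedged sketch of what combining the two calls might look like. Caveats: the field names below (adaptFcn = 'adaptwb', the per-weight learngdm learnFcn/learnParam fields) reflect my reading of the documented defaults and should be checked against your toolbox version; a small learning rate is the natural knob for question 3, since it limits how far a single new point can pull the already trained weights.

```matlab
% Hypothetical sketch: adapt() applied to a network already trained
% with train(). The network object itself carries the weights and the
% adaption setup, so it is the only state to keep between calls.
f  = @(x) x.^2 + sin(5*x);          % placeholder expensive function
X  = linspace(-2, 2, 20);
net = feedforwardnet(10);
net = train(net, X, f(X));          % initial batch training

% Assumed adaption defaults; set explicitly so they can be verified:
net.adaptFcn = 'adaptwb';           % weight/bias adaption
net.inputWeights{1,1}.learnFcn = 'learngdm';
net.layerWeights{2,1}.learnFcn = 'learngdm';
net.biases{1}.learnFcn = 'learngdm';
net.biases{2}.learnFcn = 'learngdm';

% Small learning rate so one new point cannot wreck the fit (question 3).
% The exact field path is an assumption from the learngdm documentation:
net.inputWeights{1,1}.learnParam.lr = 0.01;

% New sample proposed by the optimizer:
xNew = 0.3;  yNew = f(xNew);
[net, yHat, err] = adapt(net, xNew, yNew);    % one incremental update
```

Regarding question 4: as far as I know, train() does not re-initialize a network that already has weights (that requires an explicit init(net) call), so even the full-retrain approach at least warm-starts from the previous surrogate.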
Thank you all in advance for your time.
Edit: I would like to point out that I'm using the NN only to find the global minimum of a problem. So the function approximation doesn't have to be very good initially; I would then like to improve it around the possible minima by adding points. So I'm not too concerned with the effectiveness of the process; I just want to know whether it can be done and test it to see for myself whether it suits what I want.

Answers (1)

Greg Heath
Greg Heath on 8 Oct 2016
My previous post shows that assuming the combination of TRAIN and ADAPT guarantees success is not necessarily a good assumption.
http://www.mathworks.com/matlabcentral/answers/302930-
effect-by-splitting-data-in-training-neural-network
Hope this helps.
Thank you for formally accepting my answer
Greg

1 Comment

First of all, thank you for taking the time to read and answer, Greg.
After reading the link you posted, I understand it only shows how the results change with the initial weights, both for adapt and train, and how the best ones might be different. I'm still not sure whether it's even possible to combine them, how it should be done, or whether they are meant to be used separately.
I would like to point out that I'm using the NN only to find the global minimum of a problem. So the function approximation doesn't have to be very good initially; I would then like to improve it around the possible minima by adding points. So I'm not too concerned with the effectiveness of the process; I just want to know whether it can be done and test it to see for myself whether it suits what I want.
Thank you for your time again. I will edit the main question with this for more clarity.

Sign in to comment.


Asked: on 7 Oct 2016
Edited: on 10 Oct 2016
