The danger is OVERTRAINING an OVERFIT NET. There are several approaches to avoid this.
1. PREVENT OVERFITTING the I-H-O net by having the number of training equations
Ntrneq = Ntrn*O
be no smaller than the number of unknown weights,
Nw = (I+1)*H+(H+1)*O
i.e., Ntrneq >= Nw
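For concreteness, a minimal sketch of the counting in 1 (the dimensions I, H, O and Ntrn below are assumed example values, not from the original; the upper bound Hub follows from solving Nw <= Ntrneq for H):

```python
# Weight/equation counting for an I-H-O feedforward net.
# Example dimensions are ASSUMPTIONS for illustration only.
I, H, O = 4, 10, 2   # inputs, hidden nodes, outputs
Ntrn = 100           # number of training cases

Ntrneq = Ntrn * O                   # number of training equations
Nw = (I + 1) * H + (H + 1) * O      # number of unknown weights (incl. biases)

print(Ntrneq, Nw, Ntrneq >= Nw)     # 200 72 True

# Corollary: largest H for which Nw <= Ntrneq
Hub = (Ntrneq - O) // (I + O + 1)
print(Hub)                          # 28
```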
2. PREVENT OVERTRAINING by using a reasonable training goal
mse(target-output) <= 0.01 * mse(target-mean(target')' )
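The reference term in 2 is the MSE of the naive model that always predicts the per-output target mean, which is what the MATLAB expression mse(target-mean(target')') computes. A sketch in Python/NumPy (the toy target matrix is an assumption):

```python
import numpy as np

# Toy O-by-N target matrix (rows = outputs, columns = cases); values ASSUMED.
target = np.array([[1.0, 2.0, 3.0, 4.0],
                   [0.5, 1.5, 2.5, 3.5]])

# MSE of the constant-mean model: subtract each row's mean
# (MATLAB equivalent: mse(target - mean(target')') ).
mse_ref = np.mean((target - target.mean(axis=1, keepdims=True)) ** 2)

# Training goal: 1% of the reference MSE
goal = 0.01 * mse_ref
print(mse_ref, goal)
```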
3. PREVENT OVERTRAINING by using a validation subset to implement EARLY STOPPING
4. PREVENT OVERTRAINING by using TRAINBR to implement BAYESIAN REGULARIZATION
EARLY STOPPING (3) is automatic with the default TRAINLM.
Typically I try to implement 1 and 2 by minimizing H, in addition to using EARLY STOPPING (3) with the default training algorithm TRAINLM.
On rare occasions I will use TRAINBR (4) when 1-3 do not yield satisfactory results.
Searching BOTH NEWSGROUP and ANSWERS using
Greg Ntrneq Nw
should yield zillions of examples
Hope this helps
THANK YOU FOR FORMALLY ACCEPTING MY ANSWER