R2a vs R2 in neural network MSE
I've read quite a few posts regarding the adjusted coefficient of determination (R2a) and using it to derive an MSE goal for training. I've applied those posts to the training case below, where I evaluate varying the number of hidden nodes in my net. My objective is to find the "best" trained net for each of a given set of starting weights (H = ?), and then apply that trained net to some held-back test data to evaluate generalisation.
- Does the table below reflect the correct application of adjusted R2?
- Does reverting to (unadjusted) R2 make any sense when the Ndof adjustment is negative?
- How do I determine a training goal when Ndof is negative? Setting MSEgoal = 0 doesn't seem realistic.
- Assuming I have static/given starting weights for each of my nets (H = ?), is it better to abandon the MSE goal, set the epoch limit high (5000/10000), use k-fold cross-validation to find an optimal MSE(training) vs MSE(validation) ratio, and then retrain the net based on that fold to the appropriate epoch? (Is this even a valid approach?)
(N = training cases, I = inputs, H = hidden nodes, O = outputs, Nw = number of weights, Neq = N*O training equations, Ndof = Neq - Nw, Ntrneq = Neq)

  N    I   H   O   Nw   Neq   Ndof   Ndof/Ntrneq
 200  16    5  1    91  200    109     0.545
 200  16   15  1   271  200    -71    -0.355
 200  16   25  1   451  200   -251    -1.255
 200  16   35  1   631  200   -431    -2.155
 200  16   45  1   811  200   -611    -3.055
 200  16   55  1   991  200   -791    -3.955
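For reference, the degrees-of-freedom columns in the table follow from the standard MLP weight count, Nw = (I+1)*H + (H+1)*O (weights plus bias terms). A minimal sketch in Python (rather than MATLAB) that reproduces the table:

```python
# Reproduce the question's table.
# Nw  = (I+1)*H + (H+1)*O  weights, including bias terms
# Neq = N*O                training equations
# Ndof = Neq - Nw          degrees of freedom (negative => over-parameterised)
N, I, O = 200, 16, 1

rows = []
for H in (5, 15, 25, 35, 45, 55):
    Nw = (I + 1) * H + (H + 1) * O
    Neq = N * O
    Ndof = Neq - Nw
    rows.append((N, I, H, O, Nw, Neq, Ndof, round(Ndof / Neq, 3)))

for r in rows:
    print(*r)
```

The H = 5 row (Nw = 91, Ndof = 109) is the only one with positive degrees of freedom, which is exactly why the larger nets produce the negative Ndof values asked about above.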
Comment from Greg Heath, 17 Apr 2013:
Hub = -1 + ceil( (Ntrn*O - O)/(I + O + 1) )
    = -1 + ceil(199/18) = -1 + 12 = 11
MSEgoal = max(0, 0.01*Ndof*mean(var(ttrn'))/Ntrneq)
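Greg's two formulas translate directly into code. A sketch in Python (not MATLAB); here ttrn is a stand-in random O-by-Ntrn target matrix, and var/mean mirror MATLAB's conventions (column-wise, normalised by N-1):

```python
import math
import numpy as np

Ntrn, I, O = 200, 16, 1

# Upper bound on hidden nodes that keeps Ndof positive:
# Hub = -1 + ceil((Ntrn*O - O)/(I + O + 1))
Hub = -1 + math.ceil((Ntrn * O - O) / (I + O + 1))
print(Hub)  # 11

# MSEgoal = max(0, 0.01*Ndof*mean(var(ttrn'))/Ntrneq), clipped at 0
# when Ndof is negative.  ttrn is a stand-in target matrix; MATLAB's
# var(ttrn') is the per-output-row sample variance (ddof=1).
ttrn = np.random.default_rng(0).normal(size=(O, Ntrn))
H = 5                                   # the one table row with Ndof > 0
Nw = (I + 1) * H + (H + 1) * O
Ntrneq = Ntrn * O
Ndof = Ntrneq - Nw
MSEgoal = max(0.0, 0.01 * Ndof * np.mean(np.var(ttrn, axis=1, ddof=1)) / Ntrneq)
print(MSEgoal)
```

Note that for every row in the table with H > 11, Ndof is negative, so the max(0, ...) clip drives MSEgoal to 0 — which is the "doesn't seem realistic" situation the third question asks about.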