DIFFERENT RESULTS IN A NEURAL NETWORK PROBLEM
primrose khaleed
on 19 Jun 2014
Edited: Cedric on 21 Jun 2014
Hi
I hope someone can help me with my question.
When I run the backprop neural network more than once on the same data set, I get a different set of results (the predicted results are different each time). Is there a way to train the neural network so that it outputs the same (lowest-error) predictions when the code is run more than once on the same data set? When I enter the test image the first time it is classified as the first class, but when I rerun the program with the same test image it is classified as the second class. How can I solve this problem? This is my file. Thanks
0 Comments
Accepted Answer
Greg Heath
on 19 Jun 2014
If you are using a saved net that has been previously trained, this should not happen.
If you are setting the RNG to the same initial state before retraining, this should not happen. (A sketch of both approaches is shown below.)
Hope this helps.
Thank you for formally accepting my answer
Greg
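A minimal sketch of the two approaches described in this answer, assuming a feature matrix x, a target matrix t, and a test input xTest are already in the workspace; the variable names and the hidden-layer size of 10 are illustrative, not taken from the original post.

% Approach 1: fix the random number generator before (re)training, so the
% initial weights and the random train/val/test division are the same on
% every run.
rng(0);                              % any fixed seed works
net = patternnet(10);                % 10 hidden nodes, illustrative value
[net, tr] = train(net, x, t);

% Approach 2: train once, save the net, and reuse it for every test input
% instead of retraining each time.
save('trainedNet.mat', 'net');

% Later session: load and apply the saved net; the output is identical
% for the same xTest every time.
load('trainedNet.mat', 'net');
y = net(xTest);

Either approach removes the run-to-run variation: fixing the seed makes retraining reproducible, while saving the net avoids retraining altogether.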
6 Comments
Greg Heath
on 21 Jun 2014
What are the sizes of your training, validation and test sets?
What range of hidden node values are you searching over?
How many random initial weight initializations for each hidden node value?
What are the trn/val/test R-squared values for the "best" (i.e. max(R2val)) design?
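For reference, a hypothetical sketch of the kind of search these questions point to: loop over candidate numbers of hidden nodes and, for each candidate, over several random weight initializations, keeping the design with the largest validation R-squared. The use of fitnet and the search ranges are assumptions for illustration; x and t are the inputs and targets.

Hmin = 2;  Hmax = 20;  Ntrials = 10;          % illustrative search ranges
bestR2val = -Inf;
for H = Hmin:Hmax
    for trial = 1:Ntrials
        rng(trial);                           % reproducible initial weights
        net = fitnet(H);
        [net, tr] = train(net, x, t);
        y = net(x);
        tval = t(:, tr.valInd);               % validation subset
        yval = y(:, tr.valInd);
        e = tval - yval;
        R2val = 1 - sum(e(:).^2) / sum(sum((tval - mean(tval, 2)).^2));
        if R2val > bestR2val
            bestR2val = R2val;  bestNet = net;  bestH = H;
        end
    end
end

Reporting bestH, bestR2val, and the corresponding training and test R-squared values would answer the four questions above.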
More Answers (0)