Why is there a difference in performance error using 'nntool' and 'nftool' when the properties assumed are same?

I created a neural network in 'nftool' with 10 inputs and 20 outputs using 5283 samples, and found that the architecture with 16 hidden-layer neurons gives the lowest performance error, 0.012. I then tried the same properties in 'nntool', expecting to get the same error. These were the network properties:
Network Type: Feed-forward backprop
Input Ranges: Got from Input
Adaptation Function: LEARNGDM
Number of Layers: 2
Properties :
Layer 1: 16 neurons, LOGSIG
Layer 2: 20 neurons, PURELIN
However, when I train this network I get a performance error of around 0.5. Why is the error so much larger?
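For reference, the nftool-style network above can also be built programmatically, which makes its settings explicit and comparable to nntool's. This is only a sketch under assumptions: `x` is a 10x5283 input matrix, `t` a 20x5283 target matrix, and the variable names are illustrative.

```matlab
% Sketch: rebuild the nftool network in code (assumed data: x is 10x5283, t is 20x5283)
rng(0)                                % fix random initialization and data division
net = fitnet(16);                     % fitting net with 16 hidden neurons
net.layers{1}.transferFcn = 'logsig'; % match the LOGSIG hidden layer
net.layers{2}.transferFcn = 'purelin';% match the PURELIN output layer
[net, tr] = train(net, x, t);         % train with default trn/val/tst division
tr.best_perf                          % training performance, compare with nftool's 0.012
```

With the RNG fixed before training, repeated runs of this script give the same weights, the same data split, and therefore the same performance number.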

Accepted Answer

Greg Heath
Greg Heath on 24 Apr 2013
The nets are initialized with random weights and, by default, a random trn/val/tst data division. Sometimes these lead to good performance and sometimes they do not.
1. Always initialize your RNG to a specified state before using the function CONFIGURE or, if that is not used, the function TRAIN. Then you can always reproduce what you have done.
2. Make Ntrials (e.g., 10) different designs in a loop. Initialize the RNG before the loop and record the state of the RNG at the beginning of each pass through the loop. Record the 10 trn/val/tst performances. Choose the design with the best validation performance. Then get an unbiased estimate of performance on nondesign data from the test subset.
3. Sometimes the summary statistics (e.g., min, median, std, max) are the desired result.
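The multi-trial procedure above can be sketched as follows. This is a hedged illustration, not Greg's exact code: it assumes `x` (10x5283 inputs) and `t` (20x5283 targets) are already loaded, and all variable names are made up for the example.

```matlab
% Sketch of steps 1-3: Ntrials designs, RNG states recorded, best picked by val error
Ntrials = 10;
rng(0)                                   % 1. put the RNG in a known state
valPerf = zeros(1, Ntrials);
states  = cell(1, Ntrials);              % RNG state per trial, for reproducing any design
nets    = cell(1, Ntrials);
trs     = cell(1, Ntrials);
for i = 1:Ntrials
    states{i} = rng;                     % 2. record the RNG state for this design
    net = fitnet(16);                    % fresh random weights and random data division
    [nets{i}, trs{i}] = train(net, x, t);
    valPerf(i) = trs{i}.best_vperf;      % validation performance of this design
end
[~, best] = min(valPerf);                % choose the design with the best val performance
testPerf = trs{best}.best_tperf          % unbiased estimate from the test subset
% 3. summary statistics over the trials, if those are the desired result:
stats = [min(valPerf) median(valPerf) std(valPerf) max(valPerf)]
```

Restoring a recorded state with `rng(states{i})` and repeating that pass reproduces the corresponding design exactly.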
Hope this helps.
Thank you for formally accepting my answer
Greg

More Answers (0)

