Trial-and-error approach to find the optimal number of hidden neurons

I have split the data into three parts: 70% for training, 10% for validation, and 20% for testing. Using a trial-and-error approach, I found the smallest training MSE (0.53088525) with 15 hidden nodes, but looking at the validation MSE, the smallest value (0.27098756) was achieved with only one node! Does that make sense?
We started with 1 hidden node and added one at a time up to 20, with 10 trials for each value of H.
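A minimal sketch of this kind of search loop is below; fitnet with dividerand and the default MSE performance are assumptions here, not necessarily the exact setup that was used.

% x: I-by-N input matrix, t: O-by-N target matrix (assumed already loaded)
rng(0)                                   % repeatable random initializations
Hvec    = 1:20;                          % candidate numbers of hidden nodes
Ntrials = 10;                            % random restarts per value of H
msetrn  = zeros(numel(Hvec), Ntrials);   % training MSE of each design
mseval  = zeros(numel(Hvec), Ntrials);   % validation MSE of each design
for i = 1:numel(Hvec)
    for j = 1:Ntrials
        net = fitnet(Hvec(i));                   % one hidden layer with H nodes
        net.divideFcn              = 'dividerand';
        net.divideParam.trainRatio = 0.70;
        net.divideParam.valRatio   = 0.10;
        net.divideParam.testRatio  = 0.20;
        [net, tr] = train(net, x, t);
        y = net(x);
        msetrn(i,j) = perform(net, t(:,tr.trainInd), y(:,tr.trainInd));
        mseval(i,j) = perform(net, t(:,tr.valInd),   y(:,tr.valInd));
    end
end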
Is 15 the optimal number of hidden neurons?
Thanks in advance
  2 Comments
John D'Errico on 11 May 2015
NO!
If there were one truly optimal value, don't you think it would simply be set as the default? Every problem is different.
coqui on 11 May 2015
Thank you for your quick answer. So how can we decide the optimal number?


Accepted Answer

Greg Heath on 14 May 2015
What are the sizes of the input [I N] and target [O N] matrices?
What is Hub, the upper bound on H for which Ntrneq = Ntrn*O = 0.7*N*O >= Nw = (I+1)*H + (H+1)*O? (A small sketch of this bound appears after these questions.)
You designed 200 nets, i.e., 10 for each of 20 values of H << Hub? (Typically, I only look at 10 values of H.)
Were the random initial weights and data divisions different for each net?
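Solving Ntrneq >= Nw for H gives Hub explicitly. A small sketch, assuming x and t hold the I-by-N input and O-by-N target matrices:

[I, N] = size(x);                        % input matrix is I-by-N
O      = size(t, 1);                     % target matrix is O-by-N
Ntrn   = floor(0.7*N);                   % number of training cases
Ntrneq = Ntrn*O;                         % number of training equations
% Nw = (I+1)*H + (H+1)*O = H*(I+O+1) + O, so Ntrneq >= Nw implies
Hub = floor((Ntrneq - O)/(I + O + 1))    % upper bound on H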
If you use the degree-of-freedom adjustment for the training set performance
MSEtrna = SSEtrn/(Ntrneq-Nw)
MSEtrn00a = mean(var(target',0))
R2trna = 1 - MSEtrna/MSEtrn00a
you can plot the R-squared summary statistics (min, median, mean, and max) vs. H for the trna, val, and tst sets, i.e., four plots with three curves each.
I typically use the median and mean plots to determine the smallest acceptable value for H.
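As a sketch of the adjustment and the summary statistics, reusing Hvec, msetrn, Ntrneq, I, O, and t from the earlier sketches (SSEtrn is recovered as Ntrneq*MSEtrn; the variable names are illustrative):

MSEtrna = zeros(size(msetrn));
for i = 1:numel(Hvec)
    Nw = (I+1)*Hvec(i) + (Hvec(i)+1)*O;                % number of weights for this H
    MSEtrna(i,:) = Ntrneq*msetrn(i,:)/(Ntrneq - Nw);   % SSEtrn/(Ntrneq-Nw)
end
MSEtrn00a = mean(var(t', 0));                          % reference variance of the targets
R2trna = 1 - MSEtrna/MSEtrn00a;                        % adjusted R-squared of every design
% summary statistics across the 10 trials, one row per value of H
stats = [min(R2trna,[],2), median(R2trna,2), mean(R2trna,2), max(R2trna,[],2)];
plot(Hvec, stats), xlabel('H'), ylabel('adjusted R^2 (training)')
legend('min','median','mean','max')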
Of course, if N isn't large enough to ensure relatively stable trn/val/tst data-division estimates, you might want to use Bayesian regularization via TRAINBR. I am not that familiar with regularization; however, it tends to make the results much less sensitive to the value of H. Then the question of an "optimal value" tends to become moot.
Greg
PS: The normalized MSE, NMSE = MSE/mean(var(target',1)), is scale independent and therefore easier to use.
  4 Comments
coqui on 17 Jan 2016
Thank you, Greg. I want to know whether, in this case, we can use trial and error based on the minimum training MSE?
Greg Heath on 24 Jan 2016
1. Trial and error on MSEval: the minimum H that satisfies MSEval < 0.01*mean(var(targetval',1)).
2. Set net.divideFcn = '' and use trial and error on MSEtrna = SSEtrn/(Ntrneq-Nw): the minimum H that satisfies MSEtrna < 0.01*mean(var(targettrn',0)).
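A sketch of option 1, with illustrative names (tval stands for the validation-set targets; mseval and Hvec come from the search loop in the question; option 2 is analogous with MSEtrna and the training targets):

MSEval00 = mean(var(tval', 1));               % biased variance of the validation targets
bestval  = min(mseval, [], 2);                % best validation MSE over the 10 trials per H
Hopt     = Hvec(find(bestval < 0.01*MSEval00, 1, 'first'))   % smallest H meeting the goal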
