Neural Network Toolbox - Backpropagation stopping criteria
Haider Ali
on 21 Mar 2015
Commented: Greg Heath
on 25 Apr 2015
I am using the Neural Network Toolbox to classify data of 12 alarms into 9 classes, with one hidden layer containing 8 neurons. I wanted to know:
1. What equations does the training algorithm traingdm use to update the weights and biases? Are they the same as those given below, where eta is the learning rate (0.7) and alpha is the momentum coefficient (0.9)?
delta_w_ij(n) = eta * delta_j * x_i + alpha * delta_w_ij(n-1)
where delta_j for the output layer is:
delta_j = (t_j - y_j) * y_j * (1 - y_j)
while for a hidden layer it is:
delta_j = y_j * (1 - y_j) * sum_k( delta_k * w_kj )
These equations are taken directly from the paper attached.
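For reference, a minimal plain-MATLAB sketch of one such update step, as I read the equations (the layer sizes, initialization, and variable names are illustrative assumptions, not taken from the paper or the toolbox source):
% Toy single-pattern gradient-descent-with-momentum update
% for one logsig output layer (illustrative only)
eta   = 0.7;                               % learning rate
alpha = 0.9;                               % momentum coefficient
x  = rand(12,1);                           % one 12-alarm input pattern
t  = [1; zeros(8,1)];                      % one-of-9 target vector
W  = 0.1*randn(9,12); b = 0.1*randn(9,1);  % weights and biases
dW = zeros(size(W));  db = zeros(size(b)); % previous changes
y     = 1 ./ (1 + exp(-(W*x + b)));        % logsig forward pass
delta = (t - y) .* y .* (1 - y);           % output-layer delta
dW = eta*delta*x' + alpha*dW;              % momentum weight update
db = eta*delta    + alpha*db;              % momentum bias update
W  = W + dW;
b  = b + db;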
2. What does the stopping criterion net.trainParam.goal mean? Which field should I update if I want my stopping criterion to be a mean square error of 0.0001? Do I need to set net.trainParam.min_grad to 0.0001 for this?
3. How are the weights updated in traingdm? Is it a batch update (after every epoch) or an update after every input pattern within each epoch?
4. I have 41 training input patterns. How many of those are used for the training process and how many for the recall (validation/test) process? What if I want all 41 of them to be used only for training?
5. I have tried the following code, but the outputs are not being classified accurately.
clear all; close all; clc;
p = [
1 0 0 0 0 0 0 0 0 0 0 0; ... %c1
1 0 1 0 0 0 0 0 0 0 0 0; ...
1 0 1 1 0 0 0 0 0 0 0 0; ...
1 0 1 0 1 0 0 0 0 0 0 0; ...
1 0 1 0 0 0 0 0 0 1 0 0; ...
1 0 1 1 1 0 0 0 0 0 0 0; ...
1 0 1 0 1 1 0 0 0 1 0 0; ...
1 0 1 0 1 0 0 0 0 1 0 0; ...
1 0 1 1 0 0 0 0 0 1 0 0; ...
1 0 1 0 1 1 1 0 0 0 0 0; ...
1 0 1 0 1 1 0 1 0 0 0 0; ...
1 0 1 1 1 0 0 0 0 1 0 0; ...
0 1 0 0 0 0 0 0 0 0 0 0; ... %c2
0 0 0 0 0 0 0 0 0 0 0 0; ...
0 0 0 1 0 0 0 0 0 0 0 0; ...
0 0 0 0 1 0 0 0 0 0 0 0; ...
0 0 0 0 0 0 0 0 0 1 0 0; ...
0 0 0 1 1 0 0 0 0 0 0 0; ...
0 0 0 0 1 1 0 0 0 1 0 0; ...
0 0 0 0 1 0 0 0 0 1 0 0; ...
0 0 0 1 0 0 0 0 0 1 0 0; ...
0 0 0 0 1 1 1 0 0 0 0 0; ...
0 0 0 0 1 1 0 1 0 0 0 0; ...
0 0 0 1 1 0 0 0 0 1 0 0; ...
0 0 0 1 0 0 0 0 0 0 0 0; ... %c3
0 0 0 0 1 0 0 0 0 0 0 0; ... %c4 or c5
0 0 0 0 1 1 0 0 0 0 0 0; ...
0 0 0 0 1 1 1 0 0 0 0 0; ...
0 0 0 0 1 1 0 1 0 0 0 0; ...
0 0 0 0 0 1 0 0 0 0 0 0; ... %c6
0 0 0 0 0 1 1 0 0 0 0 0; ...
0 0 0 0 0 1 0 1 0 0 0 0; ...
0 0 0 0 0 0 0 1 0 0 0 0; ... %c7
0 0 0 0 0 0 0 0 1 0 0 0; ... %c8
0 0 0 0 0 0 0 0 0 0 1 0; ...
0 0 0 0 0 0 0 0 1 1 0 0; ...
0 0 0 0 0 0 0 0 0 0 1 1; ...
0 0 0 0 0 0 0 0 1 0 1 0; ...
0 0 0 0 0 0 0 0 0 0 0 1; ... %c9
0 0 1 0 0 0 0 0 0 0 0 0; ... %c1 or c2
0 0 0 0 0 0 0 0 0 1 0 0; ... %c1 or c2 or c3
]';
t = [
1 0 0 0 0 0 0 0 0; ...
1 0 0 0 0 0 0 0 0; ...
1 0 0 0 0 0 0 0 0; ...
1 0 0 0 0 0 0 0 0; ...
1 0 0 0 0 0 0 0 0; ...
1 0 0 0 0 0 0 0 0; ...
1 0 0 0 0 0 0 0 0; ...
1 0 0 0 0 0 0 0 0; ...
1 0 0 0 0 0 0 0 0; ...
1 0 0 0 0 0 0 0 0;...
1 0 0 0 0 0 0 0 0; ...
1 0 0 0 0 0 0 0 0; ...
0 1 0 0 0 0 0 0 0; ... %c2
0 1 0 0 0 0 0 0 0; ...
0 1 0 0 0 0 0 0 0; ...
0 1 0 0 0 0 0 0 0; ...
0 1 0 0 0 0 0 0 0; ...
0 1 0 0 0 0 0 0 0; ...
0 1 0 0 0 0 0 0 0; ...
0 1 0 0 0 0 0 0 0; ...
0 1 0 0 0 0 0 0 0; ...
0 1 0 0 0 0 0 0 0; ...
0 1 0 0 0 0 0 0 0; ...
0 1 0 0 0 0 0 0 0; ...
0 0 1 0 0 0 0 0 0; ... %c3
0 0 0 1 1 0 0 0 0; ... %c4 or c5
0 0 0 1 1 0 0 0 0; ...
0 0 0 1 1 0 0 0 0; ...
0 0 0 1 1 0 0 0 0; ...
0 0 0 0 0 1 0 0 0; ... %c6
0 0 0 0 0 1 0 0 0; ...
0 0 0 0 0 1 0 0 0; ...
0 0 0 0 0 0 1 0 0; ... %c7
0 0 0 0 0 0 0 1 0; ... %c8
0 0 0 0 0 0 0 1 0; ...
0 0 0 0 0 0 0 1 0; ...
0 0 0 0 0 0 0 1 0; ...
0 0 0 0 0 0 0 1 0; ...
0 0 0 0 0 0 0 0 1; ... %c9
1 1 0 0 0 0 0 0 0; ... %c1 or c2
1 1 1 0 0 0 0 0 0; ... %c1 or c2 or c3
]';
net = feedforwardnet(8,'traingdm'); %one hidden layer with 8 neurons; gradient descent with momentum
net = configure(net,p,t);
net.layers{2}.transferFcn = 'logsig'; %sigmoid function in output layer
net.layers{1}.transferFcn = 'logsig'; %sigmoid function in hidden layer
net.performFcn = 'mse';
net = init(net);
net.trainParam.epochs = 100000; %the number of epochs is not my concern, hence a large value
net.trainParam.lr = 0.7; %obtained from the paper attached
net.trainParam.mc = 0.9; %obtained from the paper attached
net.trainParam.max_fail = 100000;
net.trainParam.min_grad = 0.00015; %is this stopping criterion the same as the mse goal?
net = train(net,p,t);
view(net);
Let me know if anything else needs to be specified. Regards.
1 Comment
Greg Heath
on 25 Apr 2015
% Target columns should sum to 1
% If targets are mutually exclusive there is only one "1"
% init(net) unnecessary because of configure
NO MITIGATION FOR OVERTRAINING AN OVERFIT NET
1. max_epoch is HUGE
2. msegoal not specified ==> default of 0
3. no validation stopping
4. no regularization (trainbr)
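For illustration, the points above might translate into settings roughly like these (a sketch; the goal and epoch values are placeholders, not prescribed):
% Sketch: mitigate overtraining/overfitting (illustrative values)
net = patternnet(8);               % classification net, 8 hidden neurons
net.trainParam.epochs = 1000;      % modest epoch limit (vs. 100000)
net.trainParam.goal   = 0.001;     % nonzero performance goal
net.trainParam.max_fail = 6;       % default validation-stopping limit
% or use regularization instead of validation stopping:
% net = feedforwardnet(8,'trainbr');
[net, tr] = train(net, p, t);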
Hope this helps.
Greg
Accepted Answer
Greg Heath
on 21 Mar 2015
If you are going to use MATLAB, I suggest using as many defaults as possible.
1. Use PATTERNNET for classification
2. To see the default settings, type into the command line WITHOUT AN ENDING SEMICOLON
net = patternnet % default H = 10
3. If a vector can belong to m of c classes,
a. The c-dimensional unit target vector should contain
i. m positive components that sum to 1
ii. c-m components of value 0
4. Typically, the only things that need to be varied are
a. H, the number of hidden nodes
b. The initial weights
5. The best way to do this is
a. Initialize the RNG
b. Use an outer loop over number of hidden nodes
c. Use an inner loop over random weight initializations
d. For example (a filled-in sketch follows after this list)
Ntrials = 10
rng('default')
j=0
for h = Hmin:dH:Hmax
j=j+1
...
for i = 1:Ntrials
...
end
end
6. See the NEWSGROUP and ANSWERS for examples. Search with
greg patternnet Ntrials
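A filled-in version of the skeleton in 5d might look like this (a sketch; the H range and the performance bookkeeping are illustrative choices, not prescribed values):
Ntrials = 10;
Hmin = 2; dH = 2; Hmax = 10;                % assumed search range
perf = zeros(Ntrials, numel(Hmin:dH:Hmax)); % performance record
rng('default')
j = 0;
for h = Hmin:dH:Hmax
    j = j + 1;
    for i = 1:Ntrials
        net = patternnet(h);                % toolbox defaults elsewhere
        [net, tr] = train(net, p, t);
        y = net(p);
        perf(i,j) = perform(net, t, y);     % smaller is better
    end
end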
Hope this helps
Thank you for formally accepting my answer
Greg
8 Comments
Greg Heath
on 25 Apr 2015
%GEH1: LOUSY TARGET CODING
%GEH2: traingd instead of traingdm
% GEH3: Logsig output INVALID for default mapminmax [-1 1] scaling
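One possible fix for GEH3 (my suggestion, not posted in the thread) is to drop the default mapminmax output processing so that the logsig range [0,1] matches the 0/1 targets:
% Sketch: make a logsig output compatible with 0/1 targets
net = feedforwardnet(8,'traingdm');
net = configure(net,p,t);
net.layers{2}.transferFcn = 'logsig';
net.outputs{2}.processFcns = {'removeconstantrows'}; % drop mapminmax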
Hope this helps
Greg
More Answers (0)