Can somebody explain to me how to use "divideind"?
I am using the Neural Network Toolbox for pattern recognition. The toolbox randomly splits the columns of the input matrix into training, validation, and test sets of a defined percentage, which results in a different performance graph every time I train the network.
I read that if I generate the advanced script and use divideind, I can fix the training, validation, and test matrices. But I'm not sure how to use it or what amendments should be made to the advanced script. Kindly help.
P.S. Don't tell me to read help and doc; it's useless (at least for me).
Accepted Answer
More Answers (5)
Greg Heath
on 17 Nov 2013
[ inputs, targets ] = simpleclass_dataset;
[ I N ] = size(inputs) % [ 2 1000 ]
[ O N ] = size(targets) % [ 4 1000 ]
hiddenLayerSize = 10;
net = patternnet(hiddenLayerSize);
view(net)
net.divideFcn = 'divideind';
net.divideParam.trainInd = 151:850;
net.divideParam.valInd = 1:150;
net.divideParam.testInd = 851:1000;
[net,tr] = train(net,inputs,targets);
view(net)
outputs = net(inputs);
errors = gsubtract(targets,outputs);
performance = perform(net,targets,outputs)
trainTargets = targets .* tr.trainMask{1};
valTargets = targets .* tr.valMask{1};
testTargets = targets .* tr.testMask{1};
trainPerformance = perform(net,trainTargets,outputs)
valPerformance = perform(net,valTargets,outputs)
testPerformance = perform(net,testTargets,outputs)
3 Comments
Ihtisham
on 20 Nov 2013
Greg,
Thank you! Your code demystified trainMask!
I changed this code in two places:
hiddenLayerSize = 4;
and
net.divideFcn = 'divideind';
net.divideParam.trainInd = 1:6;
net.divideParam.valInd = 7:8;
net.divideParam.testInd = 9:12;
With the above changes and XOR as the dataset, I got:
>> outputs
outputs =
0.0495 0.2594 0.9486 0.9044 0.0495 0.2594 0.9486 0.9044 0.0495 0.2594 0.9486 0.9044
The dataset was:
>> inputs
inputs =
0 0 1 1 0 0 1 1 0 0 1 1
0 1 0 1 0 1 0 1 0 1 0 1
>> targets
targets =
0 1 1 0 0 1 1 0 0 1 1 0
Questions:
1. perform() would be comparing integer targets with real-valued outputs. Shouldn't we first threshold the NN's output to [0 1] (e.g., with a unit-step function with its transition at 0.5)?
2. Many elements of trainTargets, valTargets, and testTargets are NaN. Can this affect the performance calculation in any way? Are NaN values ignored by perform() when calculating the MSE?
Greg Heath
on 21 Nov 2013
1. It depends upon the application. For classification or pattern-recognition, VEC2IND is the most common.
2. NaNs are ignored.
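For the 4-class simpleclass_dataset used in the answer above, a minimal sketch of that VEC2IND conversion (assuming the `outputs` and `targets` variables from the earlier code) could look like:

```matlab
% vec2ind picks the row with the largest activation in each column,
% so no explicit 0.5 threshold is needed for 1-of-N (one-hot) targets.
assignedclass = vec2ind(outputs);   % 1 x N predicted class indices
trueclass     = vec2ind(targets);   % 1 x N true class indices
PctErr = 100*mean(assignedclass ~= trueclass) % classification error in percent
```

For a single-row target like XOR, vec2ind does not apply; thresholding at 0.5 (e.g., `round(outputs)`) is the usual equivalent.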
Greg Heath
on 28 Jan 2014
I forgot to apply the mask to the outputs when calculating trn/val/tst performance!
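A sketch of the corrected lines, assuming the variables from the answer above. Since perform() ignores NaN targets, the masked and unmasked versions should yield the same numbers, but masking the outputs as well makes the intent explicit:

```matlab
% Apply the same subset mask to the outputs as to the targets
trainPerformance = perform(net, trainTargets, outputs .* tr.trainMask{1})
valPerformance   = perform(net, valTargets,   outputs .* tr.valMask{1})
testPerformance  = perform(net, testTargets,  outputs .* tr.testMask{1})
```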
Greg Heath
on 28 Jan 2014
Your original problem of nonrepeatability is easily solved by initializing the RNG before it is used to divide the data or initialize the weights. If you search using
greg Ntrials
you will see the command
rng(0)
However, you can use any positive integer, e.g., rng(4151941). This tends to be preferable because random division eliminates any bias in the way the data was collected.
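A minimal sketch of that fix, assuming the `inputs` and `targets` variables from the earlier answer:

```matlab
% Seeding the RNG before network creation and training makes both the
% random data division (the default 'dividerand') and the random weight
% initialization repeat exactly across runs.
rng(4151941)              % any fixed nonnegative integer seed
net = patternnet(10);     % default divideFcn = 'dividerand'
[net,tr] = train(net,inputs,targets);
% Re-running the three lines above reproduces the same data division
% (tr.trainInd, tr.valInd, tr.testInd) and the same trained weights.
```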
However, I will find one of my examples that uses divideind and post the URL.
Greg
Greg Heath
on 28 Jan 2014
close all, clear all, clc
[ x, t ] = simpleclass_dataset;
[ I N ] = size(x) % [ 2 1000]
[ O N ] = size(t) % [ 4 1000]
trueclassind = vec2ind(t);
ind1 = find(trueclassind == 1);
ind2 = find(trueclassind == 2);
ind3 = find(trueclassind == 3);
ind4 = find(trueclassind == 4);
N1 = length(ind1) % 243
N2 = length(ind2) % 247
N3 = length(ind3) % 233
N4 = length(ind4) % 277
minmax1 = minmax(ind1) % [ 5 993 ]
minmax2 = minmax(ind2) % [ 1 1000 ]
minmax3 = minmax(ind3) % [ 4 996 ]
minmax4 = minmax(ind4) % [ 6 985 ]
mean(diff(trueclassind)) % 0 Classes completely mixed up
trnind = 1:700;
valind = 701:850;
tstind = 851:1000;
Ntrn = 700
Nval = 150
Ntst = 150
Ntrneq = Ntrn*O
MSEtrn00 = mean(var(t(:,trnind)',1)) % 0.1875
MSEtrn00a = mean(var(t(:,trnind)',0)) % 0.1878
MSEval00 = mean(var(t(:,valind)',1)) % 0.1892
MSEtst00 = mean(var(t(:,tstind)',1)) % 0.1858
% Create a Pattern Recognition Network
H = 10;
net = patternnet(H);
Nw = (I+1)*H+(H+1)*O % 74
Ndof = Ntrneq-Nw % 2726
net.divideFcn = 'divideind';
net.divideParam.trainInd = trnind;
net.divideParam.valInd = valind;
net.divideParam.testInd = tstind;
[net,tr,y,e] = train(net,x,t); % e = t-y
% Test the Network
MSEtrn = mse(e(:,trnind)) % 1.5629e-7
MSEtrna = Ntrneq*MSEtrn/Ndof % 1.6053e-7
R2trn = 1-MSEtrn/MSEtrn00 % 1
R2trna = 1-MSEtrna/MSEtrn00a % 1
R2val = 1-mse(e(:,valind))/MSEval00 % 1
R2tst = 1-mse(e(:,tstind))/MSEtst00 % 1
Greg Heath
on 11 Nov 2013
Try something and post it. If it is wrong maybe someone can help.
1 Comment
Greg Heath
on 17 Nov 2013
Still waiting
Mehrukh Kamal
on 17 Nov 2013
4 Comments
Greg Heath
on 17 Nov 2013
You posted in the answers box instead of the comments box.
Please move it. If you can, move it into the original question.
amit patil
on 28 Jan 2014
Did you figure out how to change the code? I am also facing the same problem. I'm new to this NN stuff.
Greg Heath
on 28 Jan 2014
1. Please remove all statements that are covered by defaults
2. Test on one of MATLAB's example data sets for classification/pattern-recognition
help nndatasets
doc nndatasets
Their example for patternnet is the iris_dataset. However, that is a multidimensional input set. Try one of the single dimensional sets.
Greg Heath
on 28 Jan 2014
Sorry, there are no single-dimensional input examples. Just use
simpleclass_dataset