Neural Network: how can I get the correct output without using the function "sim"? (neural network function "sim" vs. my own calculation with the trained network's weights and biases)
2 views (last 30 days)
oneclock
on 11 Nov 2016
Commented: Mohamed El Ibrahimi on 25 Jun 2020
I want to calculate the neural network output using the weights produced by the Neural Network Toolbox, but my calculated output is different from sim(net,X).
1. I created the input data and target data:
M = [1:1:10];
M = [M,M,M,M,M].*rand();
M = [M,M].*rand();
M = [M,M,M,M,M].*10;
M = [M,M].*10;
a=M.*rand().*2^rand()+5*rand()-5*rand();
b=M.*rand().*2^rand()+5*rand()-5*rand();
c=M.*rand().*2^rand()+5*rand()-5*rand();
n=rand(1,1000)*0.05;
y = 5*a + b.*c + 7*c + n;
x=[a; b; c];
t=y;
and set up the feedforward (FF) neural network:
hiddenLayerSize = 4;
net = feedforwardnet(hiddenLayerSize);
net.divideFcn = 'dividerand'; % randomly divide the data
net.divideMode = 'sample';
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;
net = train(net,x,t);
And the output of the trained network for the input data X = [22,25,21]' is:
X = [22,25,21]'
y_sim=sim(net,X)
This is the result obtained with the function "sim".
Next, I will calculate the output from the trained network's weight parameters, which are:
b1 = net.b{1};
b2 = net.b{2};
IW = net.IW{1,1};
LW = net.LW{2,1};
and calculate the output for the input X = [22,25,21]':
X = [22,25,21]'
y_my = b2 + LW * tanh(b1 + (IW * X))
I really don't know why these two outputs, y_my and y_sim, are different.
Here is the full code:
clc
clear all
rng(4151945);
M = [1:1:10];                              % base ramp; the next four lines tile and rescale it into a 1x1000 vector
M = [M,M,M,M,M].*rand();
M = [M,M].*rand();
M = [M,M,M,M,M].*10;
M = [M,M].*10;
a=M.*rand().*2^rand()+5*rand()-5*rand();   % three randomly scaled and shifted copies of M
b=M.*rand().*2^rand()+5*rand()-5*rand();
c=M.*rand().*2^rand()+5*rand()-5*rand();
n=rand(1,1000)*0.05;                       % small additive noise
y = 5*a + b.*c + 7*c + n;                  % target: 5a + b*c + 7c plus noise
x=[a; b; c];
t=y;
% Set the hidden layer size
hiddenLayerSize = 4;
net = feedforwardnet(hiddenLayerSize);
net.divideFcn = 'dividerand'; % randomly divide the data
net.divideMode = 'sample';
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;
net = train(net,x,t);
% syms p q r real
% X = [p,q,r]';
X = [22,25,21]'
b1 = net.b{1};
b2 = net.b{2};
IW = net.IW{1,1};
LW = net.LW{2,1};
y_my = b2 + LW * tanh(b1 + (IW * X))
y_sim = sim(net,X)
y1compare = 5*X(1) + X(2)*X(3) + 7*X(3)   % the underlying target function, for reference
Is there a calculation step inside the function "sim" that I do not know about? What am I missing? Please let me know.
2. This question is different from the one above. I thought the output would be more accurate if the number of neurons in the hidden layer was large, but in my case, the more hidden neurons, the worse the performance. How do I find the appropriate number of hidden neurons? Please give me some tips.
Thanks.
0 comments
Accepted Answer
Brendan Hamm
on 11 Nov 2016
There is a scaling of the data that happens for the inputs and outputs which is not being considered in the example above. By default, all of the inputs are first mapped to the range [-1 1], and so are the targets.
These mappings are held in the following locations:
net.inputs{1}
net.outputs{2}
If you wish to take this into account, you need to apply the mapminmax function yourself, using the processSettings stored in the inputs/outputs:
X1 = mapminmax('apply',X,net.inputs{1}.processSettings{1})        % scale the input to [-1 1]
y_my = purelin(b2 + LW * tansig(b1 + (IW * X1)))                  % manual forward pass
yMY = mapminmax('reverse',y_my,net.outputs{2}.processSettings{1}) % undo the target scaling
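As a quick check (a minimal sketch, assuming net, X, and the yMY computed above are still in the workspace), the rescaled manual result should now agree with the toolbox output to within numerical precision:
y_sim = sim(net,X)      % toolbox forward pass
abs(yMY - y_sim)        % difference should be on the order of eps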
5 comments
Brendan Hamm
on 14 Nov 2016
It is not generally true that having more neurons in a hidden layer will produce a more accurate model.
The most common recommendation is to have a number of neurons somewhere between the number of inputs and the number of outputs. Others may have different guidelines for selecting the number of neurons; I have also seen suggestions of no more than 2x the number of inputs.
The above are of course only guidelines. You can try several models within that range and compare their predictive performance; I would recommend this, as the number of neurons that is best for your specific model is unknown in advance. Luckily, this network is fast to train, so you can start at 2 neurons, ratchet it up 1 extra neuron at a time up to 6, and choose the model with the best results, as in the sketch below.
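For instance, a minimal sketch of such a sweep (assuming the x and t training data from the question; the variable names bestNet and bestPerf are illustrative, and results will vary from run to run because of the random initialization and data division):
bestPerf = Inf;
for h = 2:6
    netH = feedforwardnet(h);
    netH.trainParam.showWindow = false;                % suppress the training GUI
    [netH,tr] = train(netH,x,t);                       % tr records the train/val/test division
    perf = perform(netH, t(tr.testInd), netH(x(:,tr.testInd)));   % test-set MSE
    fprintf('%d neurons: test MSE = %g\n', h, perf);
    if perf < bestPerf
        bestPerf = perf;
        bestNet = netH;                                % keep the best-performing network
    end
end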
Mohamed El Ibrahimi
on 25 Jun 2020
I like your advice. It was helpful for me too. Thank you, Mr. Brendan.
More Answers (0)