
How to calculate the NN outputs manually?

Darshana Abeyrathna on 6 May 2016
Answered: hassan khatir on 19 Jul 2023
Can anyone help me by explaining the manual calculation for testing outputs with trained weights and biases? It does not seem to give the correct answers when I directly substitute my inputs into the equations (transfer function equations). The answers are different from what I get with the MATLAB NN toolbox. How is it possible to get a large number as an output (e.g. 100) when the output node has a transfer function, given that the output of the "logistic" transfer function, for example, is always between 0 and 1?

Accepted Answer

Matthew Eicholtz on 6 May 2016
If you use a squashing function on the output, then yes, it is impossible to get a result of 100 at an output. If you need to have outputs outside [0,1] or [-1,1], which are typical ranges for many squashing functions, I suggest using a linear transfer function on the output (or a rectified linear unit).
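For instance, a quick check (assuming the toolbox functions logsig and purelin are available on your path):
logsig(100)  % essentially 1; the logistic output can never exceed 1
purelin(100) % returns 100; a linear output node is unbounded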
As for your main question, here is an example of how to calculate outputs manually if you have trained weights and biases. Suppose you had an input x that is 100-by-1 and 1000 hidden layer neurons (so a weight matrix w1 that is 100-by-1000 and bias b1 that is 1000-by-1).
Then, the input to the hidden layer is
z1 = w1'*x+b1;
and the output of the hidden layer is
h1 = f(z1); %where f is the hidden activation function (e.g. logistic, tanh, ReLU)
Next, if you have a single neuron in the output layer, you would have a second weight matrix w2 that is 1000-by-1 and a scalar bias b2. The output of the whole network is then given by
z2 = w2'*h1+b2;
h2 = g(z2); %where g is the output activation function, not necessarily the same as f()
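Putting those steps together, here is a minimal self-contained sketch with random placeholder weights, using tanh hidden units and a linear output purely for illustration (your trained network may use different activations):
x  = rand(100,1);       % example input, 100-by-1
w1 = randn(100,1000);   % input-to-hidden weights, 100-by-1000
b1 = randn(1000,1);     % hidden biases, 1000-by-1
w2 = randn(1000,1);     % hidden-to-output weights, 1000-by-1
b2 = randn(1);          % scalar output bias
z1 = w1'*x + b1;        % input to the hidden layer, 1000-by-1
h1 = tanh(z1);          % hidden layer output (f = tanh here)
z2 = w2'*h1 + b2;       % input to the output layer, scalar
h2 = z2;                % network output (g = linear here)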
Hope this helps!
1 Comment
Darshana Abeyrathna on 6 May 2016
Thanks for the answer :) Really appreciated. I think I should try again with a linear transfer function for the output nodes. Thanks again :)


More Answers (2)

Greg Heath on 8 May 2016
By default,
1. The hidden node transfer function is TANSIG (TANH)
2. The output node transfer function is PURELIN (LINEAR)
3. Inputs and targets will be AUTOMATICALLY transformed to [-1,1] for calculating purposes
4. The outputs will be AUTOMATICALLY transformed from [-1,1] back to the original target scale.
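For example, here is a minimal sketch of what those defaults mean when reproducing a one-hidden-layer network by hand for a single input column x. It assumes mapminmax is the processing function whose settings sit in processSettings{1}; with the default processFcns (which also include removeconstantrows) the mapminmax settings may be at a different index.
ps_in  = net.inputs{1}.processSettings{1};    % input mapminmax settings (assumed index)
ps_out = net.outputs{end}.processSettings{1}; % output mapminmax settings (assumed index)
xn = mapminmax('apply', x, ps_in);            % inputs mapped to [-1,1]
hn = tansig(net.IW{1}*xn + net.b{1});         % hidden layer, TANSIG by default
yn = net.LW{2,1}*hn + net.b{2};               % output layer, PURELIN by default
y  = mapminmax('reverse', yn, ps_out);        % back to the original target scale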
Hope this helps.
Greg
2 Comments
Darshana Abeyrathna on 8 May 2016
Hi Greg Heath,
Thank you very much for the comment :) What I understand is that during training, "inputs and targets will be transformed to [-1,1]", and during testing, "the outputs will be transformed from [-1,1] to the original target scale".
I would like to know whether we can do those transformations manually. If yes, how can we do that?
Thanks again :)
Amir Qolami on 12 Apr 2020
To apply mapminmax to the inputs:
xoffset = net.inputs{1}.processSettings{1}.xoffset;
gain = net.inputs{1}.processSettings{1}.gain;
ymin = net.inputs{1}.processSettings{1}.ymin;
In0 = bsxfun(@plus,bsxfun(@times,bsxfun(@minus,inputs,xoffset),gain),ymin);
And to apply the reverse mapminmax to the outputs:
gain = net.outputs{end}.processSettings{:}.gain;
ymin = net.outputs{end}.processSettings{:}.ymin;
xoffset = net.outputs{end}.processSettings{:}.xoffset;
output = bsxfun(@plus,bsxfun(@rdivide,bsxfun(@minus,outputs,ymin),gain),xoffset);
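These are the same transformations mapminmax performs itself, so (assuming the mapminmax settings really are stored at those processSettings indices) the manual versions can be cross-checked against the built-in calls:
In0_check    = mapminmax('apply', inputs, net.inputs{1}.processSettings{1});
output_check = mapminmax('reverse', outputs, net.outputs{end}.processSettings{1});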



hassan khatir on 19 Jul 2023
use this function:
function y2=sim2(net,x)
xoffset=net.inputs{1}.processSettings{1}.xoffset;
gain=net.inputs{1}.processSettings{1}.gain;
ymin=net.inputs{1}.processSettings{1}.ymin;
w1 = net.IW{1};   % input-to-hidden weights
w2 = net.LW{2};   % hidden-to-output weights
b1 = net.b{1};    % hidden layer biases
b2 = net.b{2};    % output layer bias
% Apply the input mapminmax transformation
y1 = (x-xoffset).*gain+ymin;
% Hidden layer (tansig written out explicitly)
a1 = 2 ./ (1 + exp(-2*(repmat(b1,1,size(x,2)) + w1*y1))) - 1;
% Output layer (purelin), before reversing the output mapminmax
outputs=repmat(b2,1,size(x,2)) + w2*a1;
gain = net.outputs{2}.processSettings{:}.gain;
ymin = net.outputs{2}.processSettings{:}.ymin;
xoffset = net.outputs{2}.processSettings{:}.xoffset;
y2 = (outputs-ymin)./gain + xoffset;
end
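As a quick sanity check (assuming net is a trained two-layer network with mapminmax processing on both inputs and outputs, as the function above expects), the manual result should match the toolbox forward pass to within round-off:
y_ref = net(x);                  % toolbox forward pass
y_man = sim2(net,x);             % manual forward pass using the function above
max(abs(y_ref(:) - y_man(:)))    % should be on the order of floating-point round-off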
