MATLAB Answers


Do Multiple Output Neural Networks share the same weights and biases?

Asked by Andrea Parizzi on 25 Jan 2019
Latest activity Commented on by Andrea Parizzi on 3 Feb 2019
Hello, this may be a stupid question, but when training a multiple-output NN in MATLAB, are the weights and biases shared across all the outputs, or does MATLAB internally create one NN for each output?
In other words: is training one NN for each component of the output (like netx and nety below) the same as training a single NN for both components (like netb)?
Context: total_force is a matrix of 20000 samples x 2 directions;
total_emg is a matrix of 8 features x 20000 samples.
netx = fitnet(i); % MLN for x (the first component of total_force); i is the hidden layer size
nety = fitnet(i); % MLN for y
netb = fitnet(i); % MLN for both x and y (i.e. the whole vector total_force)
[netx, xtr] = train(netx, total_emg, total_force(:,1)');
% Use the same train/val/test division for the other two MLNs
nety.divideFcn = 'divideind';
netb.divideFcn = 'divideind';
nety.divideParam.trainInd = xtr.trainInd;
netb.divideParam.trainInd = xtr.trainInd;
nety.divideParam.valInd = xtr.valInd;
netb.divideParam.valInd = xtr.valInd;
nety.divideParam.testInd = xtr.testInd;
netb.divideParam.testInd = xtr.testInd;
[nety, ytr] = train(nety, total_emg, total_force(:,2)');
[netb, btr] = train(netb, total_emg, total_force');
Running the code below gives different performance, so I suspect that the single net for both x and y is trained with shared biases and weights (and it is usually the one that performs worst). Can someone tell me if I'm right?
x = netx(total_emg(:, xtr.testInd));
y = nety(total_emg(:, xtr.testInd));
x_y = netb(total_emg(:, xtr.testInd));
% correlation coefficients (R)
t_1 = corrcoef(total_force(xtr.testInd,:), x_y');
t_2 = corrcoef(total_force(xtr.testInd,:), [x', y']);
MSE_1net= immse(total_force(xtr.testInd,:), x_y');
MSE_2net = immse(total_force(xtr.testInd,:), [x', y']);
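Since immse pools both directions into a single number, a per-direction breakdown can show which component the joint net struggles with. A minimal sketch, reusing x, y, x_y, and xtr from the code above:

```matlab
% Sketch: per-direction MSE comparison (assumes netx, nety, netb and
% xtr from the code above already exist in the workspace).
tgt   = total_force(xtr.testInd, :);   % Ntest x 2 test targets
pred1 = x_y';                          % predictions from the single two-output net
pred2 = [x', y'];                      % predictions from the two single-output nets
mse_per_dir_1net = mean((tgt - pred1).^2, 1)   % [MSE_x MSE_y] for netb
mse_per_dir_2net = mean((tgt - pred2).^2, 1)   % [MSE_x MSE_y] for netx/nety
```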


1 Answer

Answer by Greg Heath on 26 Jan 2019 (Accepted Answer)

The similarity or orthogonality of the outputs tends to be irrelevant.
If two sets of outputs are not caused by a significant number of shared inputs, then it makes no sense to use the same net.
Thank you for formally accepting my answer


I don't understand your answer:
The outputs are generated from the same inputs, i.e. total_emg (so of course they "share" it).
Also I noticed I forgot a question mark in my question and edited it (so maybe my question wasn't clear).
What I'm trying to understand is whether the two outputs share all of the net prior to the output layer, differing only in the biases and weights of the last layer (as in the image), or whether MATLAB automagically creates two separate nets (one for each output).
The former. The net does not separate contributions to each output.
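This can be checked directly in MATLAB by configuring a two-output fitnet and inspecting its weight matrices. A minimal sketch; the hidden layer size of 10 and the random data are illustrative assumptions, not values from the question:

```matlab
% Sketch: inspect the weight matrices of a two-output fitnet to see
% that both outputs share the same hidden layer.
net = fitnet(10);                                % one hidden layer of 10 neurons
net = configure(net, rand(8,100), rand(2,100));  % 8 inputs, 2 outputs
size(net.IW{1,1})   % 10 x 8  : input-to-hidden weights, shared by both outputs
size(net.LW{2,1})   % 2 x 10  : hidden-to-output weights, one row per output
size(net.b{1})      % 10 x 1  : hidden biases, shared
size(net.b{2})      % 2 x 1   : output biases, one per output
```

The single hidden-to-output matrix net.LW{2,1} holds one row per output, so both outputs are computed from the same hidden-layer activations; only the final-layer rows and biases differ per output.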
In addition, the smaller the hidden layer, the more stable the output with respect to random perturbations in the input.
Hope this helps.
