
Jacobian matrix of neural network

Rita on 25 Feb 2016
Edited: MAHSA YOUSEFI on 5 Feb 2022
What is inside the Jacobian matrix? I know that for a trained network with data samples 1, 2, ..., n, the number n equals the number of columns in the Jacobian matrix. What are the rows?

Accepted Answer

Cam Salzberger on 29 Feb 2016
Hello Rita,
The number of rows in the Jacobian output by "defaultderiv" is the sum of the number of weights and biases for the network. For example, if you do this to create the network:
[x,t] = simplefit_dataset;               % 1 input, 1 target, 94 samples
net = feedforwardnet(10);                % one hidden layer with 10 neurons
net = train(net,x,t);
y = net(x);
perf = perform(net,t,y);
dwb = defaultderiv('de_dwb',net,x,t);    % Jacobian of errors w.r.t. weights and biases
Now "dwb" is the Jacobian of errors with respect to the net's weights and biases. It is a 31x94 matrix. If you check out the following properties in the network:
net.IW % Input weight matrices
net.LW % Layer weight matrices
net.b % Bias vectors
you can see that "net.IW" contains a 10x1 matrix, "net.LW" contains a 1x10 matrix, and "net.b" contains a 10-element vector and a 1-element vector. The number of elements adds up to 31.
I hope this helps clarify the Jacobian.
-Cam
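A quick way to cross-check that count is sketched below, building on the example above; getwb is the helper that stacks all of a shallow network's weights and biases into a single vector, so its length should match the number of Jacobian rows.
wb = getwb(net);   % all weights and biases stacked into one column vector
numel(wb)          % 31 -- matches the number of rows of dwb
size(dwb)          % [31 94] -- one row per weight/bias, one column per sample
numel(t)           % 94 -- number of samples in simplefit_dataset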
  1 Comment
MAHSA YOUSEFI on 5 Feb 2022
Edited: MAHSA YOUSEFI on 5 Feb 2022
Hi Cam.
I am following this question's answer with regard to the Hessian in a deep network, and I have now seen this answer. However, I would like to ask you about a different way of computing the Hessian, if there is one.
I am using a training loop for my simple model in which the gradients are computed by dlgradient. As you know, dlgradient (through dlfeval) returns a table in which the layers, parameters (weights and biases), and gradient values are stored. We also know that dlgradient accepts the loss as a scalar, together with dlnet.Learnables, the data samples dlX, and the targets dlY for these computations. I am interested in computing the Hessian for a small network using dlX and dlY. In fact, I am going to compute a sub-sampled Hessian if I use a mini-batch dlX (so I do not have a problem storing this matrix!). Could you please let me know how this would be possible? (I have also put this question on the Community, titled "Computing Hessian by dllgradient".) Thanks...
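One possible direction, as a rough sketch rather than official guidance: dlgradient supports 'EnableHigherDerivatives', so a Hessian-vector product can be formed by differentiating the inner product of the gradient with a fixed vector a second time inside the same dlfeval trace. The function name modelHVP and the assumption that v is a table laid out like net.Learnables are illustrative choices, not something from this thread.
function [loss,grad,hvp] = modelHVP(net,dlX,dlT,v)
    % v is assumed to be a table shaped like net.Learnables (hypothetical layout)
    dlY  = forward(net,dlX);
    loss = mse(dlY,dlT);                                       % scalar loss
    grad = dlgradient(loss,net.Learnables,'EnableHigherDerivatives',true);
    gv = dlarray(0);                                           % inner product g'*v
    for i = 1:height(grad)
        gv = gv + sum(grad.Value{i} .* v.Value{i},'all');
    end
    hvp = dlgradient(gv,net.Learnables);                       % Hessian-vector product H*v
end
Calling [loss,grad,hvp] = dlfeval(@modelHVP,net,dlX,dlT,v) on a mini-batch gives a sub-sampled Hessian-vector product; building the full (sub-sampled) Hessian would mean looping v over unit directions, which is only practical for a small network.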


More Answers (2)

Greg Heath on 27 Feb 2016
The number of input variables.
Hope this helps.
Thank you for formally accepting my answer
Greg
  1 Comment
Rita on 29 Feb 2016
Thanks Greg. I used this function and the number of columns is more than the number of inputs.
b=defaultderiv('de_dwb',y.net,y.inputs,y.targets);



Monsij Biswal on 19 Jun 2019
In which order are the derivatives present? I am unable to figure out the exact order column-wise. Is it layer-wise, starting from the first layer and then weights -> biases for each layer, or something else?
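One way to inspect the ordering empirically is sketched below. It assumes the rows of the 'de_dwb' Jacobian follow the same weight/bias ordering that getwb and setwb use; separatewb then maps each position in that vector back to the bias vector or weight matrix it belongs to (shown here for the feedforwardnet(10) from the accepted answer).
wb  = getwb(net);                   % all weights and biases as one column vector
idx = (1:numel(wb))';               % position index of every entry in that vector
[b,IW,LW] = separatewb(net,idx);    % map each position back to b / IW / LW
b{1}        % indices occupied by the hidden layer's biases
b{2}        % index occupied by the output layer's bias
IW{1,1}     % indices occupied by the input weights
LW{2,1}     % indices occupied by the hidden-to-output layer weights
Whatever index pattern prints out reveals how the weight/bias derivatives are ordered in the Jacobian, answering the layer-by-layer versus weights-then-biases question directly.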
