num2deriv
Numeric two-point network derivative function
Syntax
num2deriv('dperf_dwb',net,X,T,Xi,Ai,EW)
num2deriv('de_dwb',net,X,T,Xi,Ai,EW)
Description
This function calculates derivatives using the two-point numeric derivative rule.
This function is much slower than the analytical (non-numerical) derivative functions, but
is provided as a means of checking them. The other numerical
function, num5deriv, is slower but more accurate.
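As a rough illustration of the rule (a sketch, not the toolbox implementation), a two-point forward difference approximates a derivative by perturbing the argument with a small step dx; here a simple scalar function and a hypothetical perturbation step stand in for the network performance and a weight:

y  = @(x) sin(x);                  % hypothetical scalar function standing in for performance
x0 = 0.3;                          % point (weight value) at which to approximate the derivative
dx = 1e-7;                         % small perturbation step
dnum = (y(x0 + dx) - y(x0))/dx;    % two-point numeric estimate
dana = cos(x0);                    % analytical derivative for comparison
abs(dnum - dana)                   % small, but nonzero, approximation error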
num2deriv('dperf_dwb',net,X,T,Xi,Ai,EW) takes these arguments,
net - Neural network
X   - Inputs, an RxQ matrix (or NxTS cell array of RixQ matrices)
T   - Targets, an SxQ matrix (or MxTS cell array of SixQ matrices)
Xi  - Initial input delay states (optional)
Ai  - Initial layer delay states (optional)
EW  - Error weights (optional)
and returns the gradient of performance with respect to the network’s weights and biases, where R and S are the numbers of input and output elements and Q is the number of samples (and N and M are the numbers of input and output signals, Ri and Si are the numbers of elements of each input and output signal, and TS is the number of timesteps).
num2deriv('de_dwb',net,X,T,Xi,Ai,EW) returns the Jacobian of errors
with respect to the network’s weights and biases.
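Because the numeric rule is intended mainly for verifying the analytical routines, a quick check is to compare its gradient with the one returned by defaultderiv (listed under See Also). This is a sketch that assumes defaultderiv accepts the same argument list as num2deriv:

[x,t] = simplefit_dataset;
net = feedforwardnet(10);
net = train(net,x,t);
gnum = num2deriv('dperf_dwb',net,x,t);     % two-point numeric gradient
gana = defaultderiv('dperf_dwb',net,x,t);  % analytical gradient (assumed same calling syntax)
max(abs(gnum - gana))                      % should be near zero if the analytical code is correct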
Examples
Here a feedforward network is trained and both the gradient and Jacobian are calculated.
[x,t] = simplefit_dataset;
net = feedforwardnet(20);
net = train(net,x,t);
y = net(x);
perf = perform(net,t,y);
dwb = num2deriv('dperf_dwb',net,x,t)
jwb = num2deriv('de_dwb',net,x,t)
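For a tighter cross-check of the numeric gradient itself, the same call can be repeated with num5deriv, which as noted above is slower but more accurate; this sketch assumes num5deriv takes the same arguments as num2deriv:

dwb5 = num5deriv('dperf_dwb',net,x,t);   % five-point numeric gradient (assumed same calling syntax)
max(abs(dwb - dwb5))                     % the two numeric estimates should agree closely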
Version History
Introduced in R2010b
See Also
bttderiv | defaultderiv | fpderiv | num5deriv | staticderiv