What is the quickest way to find a gradient or finite difference in MATLAB of a real function in high dimensions?

8 views (last 30 days)
I have a function handle (a black box) that returns the value of f, a real number that depends on multiple variables. What is the most efficient way to compute the gradient or finite differences of f?
Currently I am doing it as follows, but I would like to avoid looping in MATLAB if that helps speed up my code:
function [ dU_dW ] = numerical_gradient(W, f, eps)
%NUMERICAL_GRADIENT  Central-difference gradient of f at W.
[D1, D2] = size(W);
dU_dW = zeros(D1, D2);
for d1 = 1:D1
    for d2 = 1:D2
        % perturb one entry of W at a time
        e = zeros(D1, D2);
        e(d1, d2) = eps;
        f_e1 = f(W + e);
        f_e2 = f(W - e);
        % central difference approximation of dU/dW(d1,d2)
        dU_dW(d1, d2) = (f_e1 - f_e2) / (2*eps);
    end
end
end
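For what it's worth, the double loop above can be flattened with linear indexing and arrayfun. This is only a sketch under the same assumptions as the original code (f accepts a matrix the size of W and returns a scalar), and it does not change the fundamental cost: f is still evaluated 2*numel(W) times, which is what dominates the runtime, not the loop overhead itself.

```matlab
function dU_dW = numerical_gradient_vec(W, f, eps)
%NUMERICAL_GRADIENT_VEC  Central-difference gradient without an explicit
% double loop. arrayfun hides the iteration; f is still called
% 2*numel(W) times, so don't expect a large speedup from this alone.
g = arrayfun(@(k) central_diff(W, f, eps, k), 1:numel(W));
dU_dW = reshape(g, size(W));
end

function d = central_diff(W, f, eps, k)
% Central difference in the k-th entry of W (linear index).
e = zeros(size(W));
e(k) = eps;
d = (f(W + e) - f(W - e)) / (2*eps);
end
```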
----
I did see the following:
https://www.mathworks.com/help/matlab/ref/gradient.html
but it doesn't seem to take function handles as input.

Answers (1)

John D'Errico on 28 Mar 2017
Edited: John D'Errico on 28 Mar 2017
There is my derivest code on the File Exchange. It can compute the gradient of a function, doing so adaptively, and it even returns an estimate of the error in each of its derivative estimates.
I'm in the process of providing an update on the file, but the changes are entirely in the documentation.
3 Comments
Brando Miranda on 28 Mar 2017
Actually, I think finite distance is more important to me than the derivative. Does the software you gave do that?
John D'Errico on 28 Mar 2017
Edited: John D'Errico on 29 Mar 2017
Huh? First of all, the toolbox has a tool in it to compute a gradient (gradest). derivest itself applies only to functions of ONE variable, so trying to use derivest on a multivariate function is not meaningful. Just use gradest. Read the examples provided. I'm pretty sure I put an example in there to compute a gradient. Several, in fact.
Example:
[grad,err] = gradest(@(x) sum(x.^2),[1 2 3])
grad =
2 4 6
err =
5.8899e-15 1.178e-14 0
Example:
At [x,y] = [1,1], compute the numerical gradient
of the function sin(y-x) + y*exp(x)
z = @(xy) sin(diff(xy)) + xy(2)*exp(xy(1))
[grad,err ] = gradest(z,[1 1])
grad =
1.7183 3.7183
err =
7.537e-14 1.1846e-13
Example:
At the global minimizer (1,1) of the Rosenbrock function,
compute the gradient. It should be essentially zero.
rosen = @(x) (1-x(1)).^2 + 105*(x(2)-x(1).^2).^2;
[g,err] = gradest(rosen,[1 1])
g =
1.0843e-20 0
err =
1.9075e-18 0
Next, you ask about something called "finite distance". I cannot guess what you mean by that, unless you think you were writing the words "finite difference". Even then, finite differences are indeed ONE simple scheme to estimate a derivative, but just to say you need a finite difference has no meaning, since one can compute lots (infinitely many) of possible finite differences.
I THINK you are asking if you can use the derivest tools to use finite differences to compute a gradient vector. The answer is of course you can, since that is what I use internally in the code. You never need to think about it at that level though, as that is the virtue of a tool to do something like this.
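To illustrate the point that there are many possible finite differences and that the choice matters, here is a small sketch (my own example, not from the toolbox) comparing a forward and a central difference for f(x) = exp(x) at x = 0, where the true derivative is exactly 1:

```matlab
% Forward vs. central finite differences for f(x) = exp(x) at x = 0.
% f'(0) = 1 exactly, so the printed values are the absolute errors.
f = @exp; x = 0; h = 1e-3;
fwd = (f(x+h) - f(x)) / h;        % O(h) accurate:  error ~ 5e-4
ctr = (f(x+h) - f(x-h)) / (2*h);  % O(h^2) accurate: error ~ 1.7e-7
[abs(fwd - 1), abs(ctr - 1)]
```

The central difference is orders of magnitude more accurate at the same step size, which is one reason an adaptive tool like gradest is preferable to a single hand-picked difference formula.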
