How to implement PyTorch's detach or TensorFlow's stop_gradient in MATLAB's Deep Learning Toolbox?
John Smith
on 19 Aug 2021
Answered: Damien T on 7 Nov 2022
Hello!
PyTorch has a facility to detach a tensor so that it will never require a gradient, i.e. (from here):
In order to enable automatic differentiation, PyTorch keeps track of all operations involving tensors for which the gradient may need to be computed (i.e., requires_grad is True). The operations are recorded as a directed graph. The detach() method constructs a new view on a tensor which is declared not to need gradients, i.e., it is to be excluded from further tracking of operations, and therefore the subgraph involving this view is not recorded.
This is useful when one needs a copy of an expression that is treated as a constant and whose gradient should not be computed during learning.
How does one implement such a thing in MATLAB's Deep Learning Toolbox? (Possibly in a custom training loop.)
Thx,
D
Accepted Answer
Damien T
on 7 Nov 2022
You just need to call the following. It turns a traced dlarray into a standard MATLAB variable, so the autodiff engine treats the new ("detached") variable as a constant:
myDetachedVar = extractdata(myVar);
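For context, here is a minimal sketch of how this plays out inside a gradient function evaluated with dlfeval, as one would use in a custom training loop. The function and variable names (detachDemo, w, wConst) are illustrative, not from the original answer; it assumes Deep Learning Toolbox R2019b or later, which introduced dlarray and dlgradient. The detached copy contributes nothing to the gradient, mirroring PyTorch's y.detach():

% Compare gradients with and without detaching, using dlfeval/dlgradient.
x = dlarray(2.0);
[gradTraced, gradDetached] = dlfeval(@detachDemo, x);
% gradTraced   = 12  (d/dx of x^3, since w = x^2 stays in the trace)
% gradDetached = 4   (w detached, so it is treated as the constant 4)

function [gradTraced, gradDetached] = detachDemo(x)
    w = x.^2;                  % traced intermediate value
    wConst = extractdata(w);   % "detached" copy: a plain double, untraced

    yTraced   = w      .* x;   % x^3 -> gradient 3*x^2
    yDetached = wConst .* x;   % treated as wConst * x -> gradient wConst

    % 'RetainData' keeps the trace alive so dlgradient can be called twice.
    gradTraced   = dlgradient(yTraced, x, 'RetainData', true);
    gradDetached = dlgradient(yDetached, x);
end

Note that extractdata returns the underlying numeric data (a gpuArray stays a gpuArray). If downstream code expects a dlarray, you can re-wrap the detached value with dlarray(extractdata(myVar)); the resulting dlarray is still excluded from gradient tracing.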