Speed up 'dlgradient' with parallelism?

Evan Scope Crafts on 11 Apr 2021
Answered: Jon Cherrie on 12 Apr 2021
Hi all,
I am wondering if there is a way to speed up the 'dlgradient' function evaluation using parallelism or GPUs.

Answers (1)

Jon Cherrie on 12 Apr 2021
You can use a GPU for the dlgradient computation by using a gpuArray with dlarray.
For example, a minibatchqueue can put the data onto the GPU, and the GPU is then used for the rest of the computation: both the "forward" pass and the "backward" (gradient) pass.
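A minimal sketch of the idea (not from the original answer; it assumes you have Parallel Computing Toolbox, a supported GPU, and Deep Learning Toolbox, and uses a toy sum-of-squares "loss" rather than a real network):

```matlab
% Put the data on the GPU before wrapping it in a dlarray.
x = dlarray(gpuArray(rand(10,1,"single")));

% dlgradient must be called inside a function evaluated via dlfeval;
% because x is GPU-backed, both passes run on the GPU.
[y,grad] = dlfeval(@modelGradients,x);

function [y,grad] = modelGradients(x)
    y = sum(x.^2,"all");      % forward pass: scalar "loss"
    grad = dlgradient(y,x);   % backward pass: gradient of y w.r.t. x
end
```

Here y and grad come back as GPU-backed dlarray objects; use extractdata (and gather, if needed) to bring the results back to the CPU.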
