Does MATLAB use NVIDIA Tensor Cores for GPU Computing?
21 views (last 30 days)
MathWorks Support Team on 19 Apr 2023
Edited: MathWorks Support Team on 19 Apr 2023
I have an NVIDIA GPU with Tensor Cores; can MATLAB make use of these during GPU Computing?
Accepted Answer
MathWorks Support Team on 19 Apr 2023
Edited: MathWorks Support Team on 19 Apr 2023
Modern NVIDIA® GPUs use Tensor Cores to accelerate matrix multiply-accumulate operations. These operations appear in many common workflows, such as linear algebra and deep learning. Many functions in MATLAB® automatically use Tensor Cores without you having to change your code.
Wherever appropriate, MATLAB automatically uses Tensor Cores when you have a supported NVIDIA GPU. Ampere architectures or later support double-precision (FP64) arithmetic using Tensor Cores (this currently includes Ampere, Ada Lovelace, and Hopper architecture GPUs). To check whether MATLAB supports your GPU, see GPU Computing Requirements. GPU Computing in MATLAB requires a Parallel Computing Toolbox™ license.
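As a minimal sketch (assuming a supported GPU, a Parallel Computing Toolbox license, and an arbitrary matrix size chosen for illustration), ordinary gpuArray linear algebra is all you write; MATLAB decides internally whether Tensor Cores are used:
```matlab
% Query the currently selected GPU (Parallel Computing Toolbox)
gpu = gpuDevice;

% Plain FP64 matrix multiplication on the GPU. On Ampere, Ada Lovelace,
% or Hopper GPUs, MATLAB can route this through Tensor Cores automatically.
A = rand(4096, 'gpuArray');
B = rand(4096, 'gpuArray');
C = A*B;

wait(gpu)   % block until the asynchronous GPU computation has finished
```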
Deep Learning Toolbox™ automatically uses Tensor Cores wherever possible when you have a supported NVIDIA GPU. For example, Tensor Cores are used when you train a network or run inference on the GPU in MATLAB, such as when you train using the `auto` or `gpu` execution environment options. You do not need to make any changes to your network.
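For instance, a minimal training sketch (using the digit sample data that ships with Deep Learning Toolbox; the layers and options here are illustrative assumptions) only needs to select the `auto` or `gpu` execution environment:
```matlab
% Sample image data shipped with Deep Learning Toolbox
[XTrain, YTrain] = digitTrain4DArrayData;

layers = [
    imageInputLayer([28 28 1])
    convolution2dLayer(3, 16, 'Padding', 'same')
    batchNormalizationLayer
    reluLayer
    fullyConnectedLayer(10)
    softmaxLayer
    classificationLayer];

% 'auto' uses a supported GPU when one is available; 'gpu' requires one.
% No network changes are needed for Tensor Cores to be used.
options = trainingOptions('adam', ...
    'ExecutionEnvironment', 'auto', ...
    'MaxEpochs', 2, ...
    'Verbose', false);

net = trainNetwork(XTrain, YTrain, layers, options);
```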
Code that you generate in MATLAB using GPU Coder™ for deployment to a GPU can also make use of Tensor Cores. In the generated code, BLAS operations with half-precision (FP16) data types run on Tensor Cores with Volta architectures or later. BLAS operations with double-precision (FP64) data types run on Tensor Cores with Ampere architectures or later. Additionally, code you generate for deep learning networks can use Tensor Cores for inference. Specifically, Tensor Cores are enabled for FP16 inference on Volta architectures or later, INT8 inference on Turing architectures or later, and FP32 inference on Ampere architectures or later. For an example that shows how to generate deep learning code that uses GPU Tensor Cores, see Code Generation for Deep Learning Networks by Using cuDNN.
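As a hedged sketch of the code generation side (the entry-point function `myPredict` and its input size are hypothetical, and the right configuration depends on your target), a cuDNN deep learning configuration might look like the following; setting the `DataType` property to `'fp16'` requests half-precision inference, which can run on Tensor Cores on Volta architectures or later:
```matlab
% GPU Coder configuration targeting a MEX function with the cuDNN library
cfg = coder.gpuConfig('mex');
cfg.TargetLang = 'C++';
cfg.DeepLearningConfig = coder.DeepLearningConfig('cudnn');

% Request FP16 inference so that Tensor Cores can be used (Volta or later)
cfg.DeepLearningConfig.DataType = 'fp16';

% Generate code for a hypothetical entry-point function myPredict that
% loads a network and calls predict on a 224-by-224-by-3 single image.
codegen -config cfg myPredict -args {ones(224,224,3,'single')}
```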
0 Comments
More Answers (0)
See Also
Categories
Find more on GPU Computing in Help Center and File Exchange