MATLAB Answers

GPU Coder vs. ONNXRuntime, is there a difference in inference speed?

David on 1 Apr 2021
Answered: Joss Knight on 2 Apr 2021
Since I can export from MATLAB to ONNX format, why can't I just import my model into TensorRT etc.? Will I get significant speed increases, or is the benefit of GPU Coder more about being able to compile all my other MATLAB code into optimized CUDA?
Thanks in advance.
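
For reference, the export step mentioned above is a one-liner in MATLAB (sketch only; `squeezenet` stands in for whatever trained network you have, and the function requires the ONNX converter support package):

```matlab
% Sketch of the ONNX export workflow referred to in the question.
net = squeezenet;                      % placeholder: any trained network
exportONNXNetwork(net, "model.onnx");  % writes an .onnx file for use in other runtimes
```

The resulting .onnx file is what you would hand to ONNX Runtime or TensorRT's ONNX parser.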

Answers (1)

Joss Knight on 2 Apr 2021
You can compile your network for TensorRT using GPU Coder if that's your intended target; there's no need to go through ONNX.
I don't believe MathWorks has published any benchmarks against ONNX Runtime specifically. On the whole GPU Coder outperforms other frameworks, although the result does depend on the network.
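
Targeting TensorRT directly from GPU Coder can be sketched roughly as follows. This assumes a hypothetical entry-point function `myPredict.m` that loads your trained network and calls `predict`, and a 224x224x3 single-precision input; adjust both to your model:

```matlab
% Hedged sketch: generate CUDA code with TensorRT as the deep learning backend.
cfg = coder.gpuConfig('mex');                              % build a MEX target for testing
cfg.DeepLearningConfig = coder.DeepLearningConfig('tensorrt');
codegen -config cfg myPredict -args {ones(224,224,3,'single')}
```

The generated MEX can then be timed against your ONNX Runtime deployment of the same network to answer the speed question empirically.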

Release

R2021a
