GPU Coder vs. ONNX Runtime: is there a difference in inference speed?
Since I can export from MATLAB to ONNX format, why can't I just import my model into TensorRT etc.? Would I get significant speed increases that way, or is the benefit of GPU Coder more about being able to compile all my other MATLAB code into optimized CUDA?
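For context, the export step I mean is along these lines (a minimal sketch; squeezenet just stands in for my actual trained network, and the filename is arbitrary):

net = squeezenet;                    % placeholder for my trained network
exportONNXNetwork(net, "mynet.onnx") % requires the Deep Learning Toolbox Converter for ONNX Model Format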
Thanks in advance.
Answers (1)
Joss Knight
on 2 Apr 2021
0 votes
You can compile your network for TensorRT using GPU Coder if that's your intended target; there's no need to go through ONNX.
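As a rough sketch of that workflow (the names mynet_predict and mynet.mat and the 224x224x3 input size are placeholders, not from your setup): you wrap the network in an entry-point function and generate code with a TensorRT deep learning configuration.

function out = mynet_predict(in)
% Entry-point function: load the network once, then run inference
persistent net;
if isempty(net)
    % placeholder MAT-file containing the trained network
    net = coder.loadDeepLearningNetwork('mynet.mat');
end
out = predict(net, in);
end

% Generate a MEX target whose network layers run through TensorRT
cfg = coder.gpuConfig('mex');
cfg.TargetLang = 'C++';
cfg.DeepLearningConfig = coder.DeepLearningConfig('tensorrt');
codegen -config cfg mynet_predict -args {ones(224,224,3,'single')} -report

The same configuration with coder.gpuConfig('dll') or coder.gpuConfig('lib') produces a deployable library instead of a MEX function.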
I don't believe MathWorks has published any benchmarks against ONNX Runtime specifically. GPU Coder on the whole outperforms other frameworks, although it does depend on the network.
2 Comments
Matti Kaupenjohann
on 7 Jan 2022
Could you show or link to the benchmark that compares GPU Coder's performance against other frameworks (and which frameworks were they)?
Joss Knight
on 7 Jan 2022
Edited: Joss Knight
on 7 Jan 2022
We don't publish the competitive benchmarks; you'll have to make a request through your sales agent. We can provide some numbers for MATLAB.
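You can of course measure it yourself in MATLAB. A minimal timing sketch, assuming the mynet_predict_mex generated as in the answer above, a matching input size, and the network net already in the workspace:

x = ones(224,224,3,'single');             % dummy input matching the codegen size
tBase = timeit(@() predict(net, x));      % baseline inference in MATLAB
tTRT  = timeit(@() mynet_predict_mex(x)); % generated TensorRT MEX
fprintf('MATLAB: %.2f ms, GPU Coder + TensorRT: %.2f ms\n', 1e3*tBase, 1e3*tTRT);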