Community Profile

Joss Knight


Last seen: Today

MathWorks

370 total contributions since 2013

Although I cannot be contacted directly, if you would like to ask me a question all you have to do is mention "GPU" somewhere in your MATLAB Answers question.

Joss Knight's Badges

  • 36 Month Streak
  • Knowledgeable Level 4
  • Pro
  • Revival Level 2
  • First Answer

Answered
Arrayfun GPU in "Game of Life" works slower than CPU
Check out this Answer. The arrayfun version is rather dependent on good memory performance since the kernel is accessing global ...

3 days ago | 0

Answered
Deep Learning - Distributed GPU Memory
No, there is nothing like what you are after, to distribute the weights of a fully connected layer across multiple GPUs. You cou...

3 days ago | 0

Answered
Multiple GPUs perform slower than single GPU to train a semantic segmentation network
On Windows, due to GPU communication issues on that platform, it is difficult to get any benefit from multi-GPU training. This w...

22 days ago | 0

| accepted

Answered
The utilization of GPU is low in deep learning
Try following some of the advice in the following MATLAB Answer: https://uk.mathworks.com/matlabcentral/answers/463367-gpu-utili...

29 days ago | 0

Answered
Assigning gpuArrays to different graphics cards
There is no way to do what you ask. Selecting a GPU is the only way to move data there, and selecting a GPU resets all GPU data....

about 1 month ago | 0

| accepted

Answered
GPU out of memory
In your example code you are using the default mini-batch size of 128. Reduce the MiniBatchSize training option until you stop g...

about 2 months ago | 0
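
A minimal MATLAB sketch of the advice above (the layers, the training data XTrain and YTrain, and the batch size of 32 are placeholders, not from the original question):

% Halve MiniBatchSize from the default of 128 until the out-of-memory errors stop
opts = trainingOptions('sgdm', ...
    'MiniBatchSize', 32, ...
    'ExecutionEnvironment', 'gpu');
net = trainNetwork(XTrain, YTrain, layers, opts);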

Answered
incorrect memory copy when using property size validator
I was able to reproduce this in R2018b but not in R2019a or R2019b. It looks like property validators used to trigger a deep cop...

3 months ago | 0

| accepted

Answered
Solution of large sparse matrix systems using GPU MLDIVIDE
The general advice is that Sparse MLDIVIDE may be convenient, but it is 'usually' slower than use of an iterative solver with an...

3 months ago | 0

| accepted
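
A minimal sketch of that approach, using a stand-in sparse system (the tridiagonal test matrix, tolerance, and iteration count are assumptions, not from the original question): solve on the GPU with an iterative solver such as pcg rather than mldivide.

n = 1e5;
% Diagonally dominant tridiagonal matrix as a stand-in SPD system
A = gpuArray(spdiags([-ones(n,1) 4*ones(n,1) -ones(n,1)], -1:1, n, n));
b = rand(n, 1, 'gpuArray');
x = pcg(A, b, 1e-6, 500);   % conjugate gradients, tolerance 1e-6, at most 500 iterations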

Answered
Deep Learning: Training Network with "parallel" option using only CPUs
Even with a weak graphics card you will usually see better performance than on multiple CPUs. However, to try it out, after you ...

4 months ago | 0

| accepted

Answered
How to use Levenberg-Marquardt backprop with GPU?
This isn't supported out of the box yet. You could convert your network to use dlarray and train it with a custom training loop....

4 months ago | 0

Answered
hardware requirements for MATLAB
Partial answer: GPU computing. You can't have MATLAB without CPU computing, so obviously both are better. No. Mostly using the ...

4 months ago | 1

Answered
Which Visual Studio 2019 package should I install to work with CUDA?
To accelerate your MATLAB code with an NVIDIA GPU, you do not need to install a C++ Compiler.

4 months ago | 0

Answered
Why would the file size of a deep learning gradient become much bigger after saving as a .mat file?
The difference is that whos is unable to account for the fact that the data is all stored on the GPU, and is only showing CPU me...

4 months ago | 0

| accepted

Answered
Training a Variational Autoencoder (VAE) on sine waves
It looks like your input data size is wrong. Your formatting says that the 4th dimension is the batch dimension, but actually it...

5 months ago | 0

| accepted

Answered
Is lhsdesign (latin hypercube sampling) supported by gpuArray?
It is not supported. You can tell whether or not a function supports gpuArray, more reliably than from the list of gpuArray meth...

5 months ago | 1

| accepted

Answered
DOES GEFORCE GTX1080 GPU WORK WELL FOR DEEP LEARNING TRAINING?
Yes.

5 months ago | 1

| accepted

Answered
Feed data into Neural Networks file-by-file
Datastores are designed for precisely this purpose. It may be that you're after an imageDatastore processed by a transform.

5 months ago | 0
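
A minimal sketch of that pattern (the folder name and the preprocessing function are assumptions): read files one at a time with an imageDatastore and apply per-file preprocessing with transform.

% Each image is read from disk only when the datastore is read during training
imds = imageDatastore('myImageFolder', 'IncludeSubfolders', true, ...
    'LabelSource', 'foldernames');
tds = transform(imds, @(img) single(img)/255);   % per-image preprocessing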

Answered
Gather cell array from GPU to CPU
A_cpu = cellfun(@gather, A_gpu, 'UniformOutput', false);

5 months ago | 1

| accepted
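
A small usage example for the one-liner above (the contents of A_gpu are made up):

A_gpu = {gpuArray.rand(3), gpuArray.rand(4, 2)};          % cell array holding gpuArray data
A_cpu = cellfun(@gather, A_gpu, 'UniformOutput', false);  % gather each cell back to host memory
isa(A_cpu{1}, 'gpuArray')                                 % false: the data now lives on the CPU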

Answered
Error in matlab included deep learning example
There is a bug in this example, which will be rectified. Thanks for reporting. As a workaround, initialize the loss variable in the...

6 months ago | 2

| accepted

Answered
movsum slower than conv2 in GPU
One might theorize, perhaps, that movsum literally uses the same kernels as conv2, but first has to construct the filter of ones...

6 months ago | 2

Answered
Does MATLAB require dedicated graphic card
If you want hardware-rendered plots and 3-D visualizations, you need a GPU of some kind. Without it, these things will be a bit ...

7 months ago | 0

Answered
Deep learning with a GPU that supports fp16
You can take advantage of FP16 when generating code for prediction on a deep neural network. Follow the pattern of the Deep Lear...

7 months ago | 1

| accepted
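
A minimal sketch of that workflow, assuming GPU Coder with the TensorRT support package (the entry-point function myPredict and the input size are hypothetical):

cfg = coder.gpuConfig('mex');
cfg.DeepLearningConfig = coder.DeepLearningConfig('tensorrt');
cfg.DeepLearningConfig.DataType = 'fp16';   % run inference in half precision
codegen -config cfg myPredict -args {ones(224,224,3,'single')}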

Answered
Select a GPU to be used by a function running in parallel(parfeval)
I'd have to know what kind of postprocessing you're doing - please post some code. On the face of it, the answer is simply to us...

8 months ago | 0

| accepted

Answered
'radix_sort: failed to get memory buffer' when executing accumarray on gpuArrays of certain size
There is an issue in an NVIDIA library that is not functioning correctly when memory is limited. This is fixed in CUDA 10 / MATL...

8 months ago | 0

Answered
Why does gpuArray() error out?
Make sure you have read this: https://uk.mathworks.com/matlabcentral/answers/442324-can-i-use-matlab-with-an-nvidia-gpu-on-macos...

8 months ago | 1

Answered
GPU recommendation for Deep Learning and AI
The Tesla V100 is a passively cooled device only suitable for servers. Is that available to you? The Quadro card you indicate is...

8 months ago | 0

Answered
.CU Files for MATLAB
Hi Oli. You don't run nvcc in MATLAB, since it isn't a MATLAB feature. You run it at a Windows Command Prompt (or Powershell). U...

8 months ago | 0

Answered
Error using nnet.internal.cnngpu.convolveBiasReluForward2D
If you want to go back to using your CPU, add the 'ExecutionEnvironment','cpu' name-value pair to your call to semanticseg. C = semanticseg(Img...

8 months ago | 0
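
A minimal sketch of that call (Img and net stand in for the image and trained network from the question):

C = semanticseg(Img, net, 'ExecutionEnvironment', 'cpu');   % force inference onto the CPU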

Answered
Fast 2D distance calculation
pdist2 is the usual way to do this, if you have Statistics and Machine Learning Toolbox.

8 months ago | 0
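
A minimal sketch (X and Y are placeholder point sets, one point per row):

X = rand(1000, 2);   % 1000 points in 2-D
Y = rand(500, 2);    % 500 points in 2-D
D = pdist2(X, Y);    % D(i,j) is the Euclidean distance from X(i,:) to Y(j,:)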

Answered
Unexpected speed decrease of 2D Fourier Transform on GPU when iFFTed
I modified your code, inserting wait(gpuDevice) before each tic and toc, and got a much more sensible graph: The GPU runs async...

10 months ago | 0

| accepted
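
A minimal sketch of that timing pattern (the FFT round trip stands in for the operation being measured): synchronize with wait(gpuDevice) so tic/toc measures the GPU work itself, not just the asynchronous launch.

A = rand(4096, 'gpuArray');
wait(gpuDevice); tic;
B = ifft2(fft2(A));
wait(gpuDevice); toc;
% gputimeit(@() ifft2(fft2(A))) is another reliable way to time GPU code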
