Answered
Can I process 'fit' with a GPU?
No, you can't, but other options may be adaptable to your problem. https://uk.mathworks.com/matlabcentral/answers/431697-make...

8 months ago | 1

| accepted

Answered
how can I output different array size than input by GPU arrayfun?
You can't produce a variable-sized output from GPU arrayfun; that would require atomic operations. You're going to have to compu...

8 months ago | 0

| accepted

Answered
How can I fix the CUDNN errors when I'm running train with RTX 2080?
Regarding issues with memory, the Titan XP has 12GB of memory while the RTX 2080 has only 8GB. You'll need to reduce your MiniBa...
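
The fix above comes down to lowering |MiniBatchSize| in your training options. A minimal sketch (the solver and the value 64 are illustrative assumptions, not from the original answer):

```matlab
% Hypothetical sketch: shrink MiniBatchSize until training fits in 8GB.
opts = trainingOptions('sgdm', ...   % solver chosen for illustration
    'MiniBatchSize', 64, ...         % halve this again if memory errors persist
    'MaxEpochs', 10);
% net = trainNetwork(XTrain, YTrain, layers, opts);
```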

9 months ago | 0

Answered
How can I fix the CUDNN errors when I'm running train with RTX 2080?
This is a known issue. Before you start anything else, run |try nnet.internal.cnngpu.reluForward(1); catch ME end| That shoul...
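
Laid out as a script, the warm-up workaround quoted above is:

```matlab
% Trigger the cuDNN library load before any other GPU work;
% the first call may error, which we deliberately ignore.
try
    nnet.internal.cnngpu.reluForward(1);
catch ME
end
```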

9 months ago | 5

| accepted

Answered
Reduction variables on the GPU II and arrayfun: cannot assign to parent function variable?
No, you can only read from uplevel variables, and then only one element at a time. You cannot write to them. That is not the int...

9 months ago | 0

| accepted

Answered
Using reduction variables on the GPU: arrayfun or other options
I suppose it depends on what f is: is it a scalar operation for each element of x? If so, you can move your loop over I inside yo...

9 months ago | 0

| accepted

Answered
MATLAB & Cuda 10
It works, although it will erroneously warn that JIT compilation is required (it isn't). Also, there is a known bug with Deep Le...

9 months ago | 1

Answered
Sensible difference between computation on GPU single type variable and CPU single type
Jan's answer is of course correct; but perhaps the succinct point is to ask: which answer is right? You've been ass...

9 months ago | 1

Answered
Make curve fitting faster
It does rather depend on what you're doing. The functions polyfit and interp1 work with gpuArray inputs.
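
For instance, both functions accept gpuArray inputs directly (the data below is synthetic, for illustration only):

```matlab
% Fit and interpolate on the GPU without leaving gpuArray.
x = gpuArray.linspace(0, 2*pi, 1e5)';
y = sin(x) + 0.01*randn(size(x), 'gpuArray');
p  = polyfit(x, y, 5);                             % degree-5 polynomial fit
yi = interp1(x, y, gpuArray.linspace(0, 1, 10)');  % query points on the GPU
```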

9 months ago | 0

Answered
How to reassign values for sparse GPU arrays?
You can use FIND to retrieve the nonzeros and their row and column indices. Then replace the values you want and construct a new...
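
A sketch of that approach, assuming sparse gpuArray support in your release (the matrix and threshold are illustrative):

```matlab
% Rebuild a sparse gpuArray with modified nonzero values.
A = gpuArray(sprand(1000, 1000, 0.01));
[i, j, v] = find(A);            % nonzeros plus their row/column indices
v(v < 0.5) = 0.5;               % replace the values you want
A = sparse(i, j, v, size(A, 1), size(A, 2));
```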

9 months ago | 1

| accepted

Answered
(Temporary) Memory requirements of conv2/convn and fft2/fftn computations in GPU and CPU computing
FFT requires a workspace size dependent on the radix of the signal, and it can be pretty huge. A rule of thumb says you'll alway...

9 months ago | 0

| accepted

Answered
Undefined function or variable 'optIdx'. Error in sh (line 28) dev = gpuDevice( optIdx )? what is this
Sorry if this seems like a facetious answer - I may be missing the point. The error means you haven't defined the variable |optI...
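
For completeness, the fix is just to assign |optIdx| before it is used, for example:

```matlab
optIdx = 1;                % index of the GPU you want (1-based)
dev = gpuDevice(optIdx);
```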

9 months ago | 0

Answered
Matlab 2018b GPU Training
Do you mean you switched to using hard-sigmoid or softsign activations? This is supported in 18b, but is a non-optimized version...

9 months ago | 3

| accepted

Answered
Efficiently run matrix or vector-valued function in element-wise fashion on GPU?
It's difficult to say what the best solution is without seeing what |fun| does. Typically you can address it using a series of c...

9 months ago | 0

Answered
CUDA 10 supported with MATLAB 2017b
There are a number of possible explanations for this. One is that you are using a Volta or Turing card with an older MATLAB v...

10 months ago | 1

Answered
gpuArray of variable size for codegen
Support for gpuArray input comes with R2018b. I don't know about the other thing, sorry.

10 months ago | 0

| accepted

Answered
Calculation of integrals and summation with an error "The following error occurred converting from gpuArray to double: Conversion to double from gpuArray is not possible"
On the face of it, this code is not well suited to running on the GPU, since I don't think it is well vectorized. Still, for a poi...

10 months ago | 1

Answered
Cuda with Turing GPU and NeuralNetworkToolbox in 2017b
This is a bit distressing to discover. However, if the option is available to you, you should upgrade MATLAB to 18a or 18b...

10 months ago | 1

| accepted

Answered
Support for NVlink with multi GPU
MATLAB supports NVLink in Deep Learning applications (calling |trainNetwork| and similar) and explicitly through the GOP functio...

10 months ago | 0

| accepted

Answered
Is it possible to use GPU coder in macos system?
Yes, that is correct.

10 months ago | 1

| accepted

Answered
nvcc fatal : Unsupported gpu architecture 'compute_20'
Strictly speaking, for R2017b you need to be using an older version of the CUDA toolkit. MATLAB is expecting you to be using a v...

11 months ago | 0

Answered
GPU computing Monte Carlo
It's pretty hard to tell from your description. The normal way for Monte Carlo would be to use |arrayfun|, following the documen...
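
As an illustration of that pattern (this example is mine, not from the original answer), estimating pi with one trial per array element:

```matlab
% Monte Carlo with gpuArray and arrayfun: each element is one trial.
N = 1e6;
x = rand(N, 1, 'gpuArray');
y = rand(N, 1, 'gpuArray');
hits = arrayfun(@(a, b) (a^2 + b^2) <= 1, x, y);
piEst = 4 * mean(hits);      % approaches pi as N grows
```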

11 months ago | 1

Answered
Sparse Matrix (gpuArray) LU Decomposition
It looks like you're just after the backslash operator, so why not use that and forget about looping over the RHS? result ...
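
The pattern being suggested looks like this (dense placeholder matrices, for illustration):

```matlab
% Solve for every right-hand side in one backslash call, no loop needed.
A = gpuArray(rand(500));       % system matrix
B = gpuArray(rand(500, 20));   % 20 right-hand sides as columns
result = A \ B;                % one column of result per column of B
```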

11 months ago | 1

Answered
Parallel computing Monte Carlo
You seem to have the right idea. For highly vectorized code, parallelization should be done in batches. However, it does depend ...

11 months ago | 0

| accepted

Answered
Does Nvidia GTX 2080 TI support MATLAB?
While we have not yet had the opportunity to confirm it with this new hardware (the Ti is released on Thursday), there is no rea...

11 months ago | 0

| accepted

Answered
GPU performance significantly slow when running multiple MATLAB
I'd be very surprised if memory isn't the issue here, because contention between your two MATLABs will be forcing continual sync...

11 months ago | 0

Answered
Reproducibility convolutional neural network training with gpu
Use of the GPU has non-deterministic behaviour. You cannot guarantee identical results when training your network, because it de...

11 months ago | 0

| accepted

Answered
Training network with a large validation set running out of memory
The validation uses the same |MiniBatchSize| as you are using for training to break your data up into chunks. So you might have ...

11 months ago | 0

Answered
How Can I change ComputeMode in gpuDevice specs?
Compute mode is modified using the <https://developer.nvidia.com/nvidia-system-management-interface |nvidia-smi|> utility.
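
For example, from an administrator shell (not a MATLAB command; mode names follow the |nvidia-smi| documentation):

```shell
# Set GPU 0 to exclusive-process compute mode (requires admin/root).
nvidia-smi -i 0 -c EXCLUSIVE_PROCESS
# Other modes include DEFAULT and PROHIBITED.
nvidia-smi -i 0 -c DEFAULT
```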

11 months ago | 0

Answered
GTX-1080ti Shows 9 GB Memory Available
NVIDIA have responded to confirm that this is expected behaviour. In summary:

* WDDM2 releases 90% of available memory to CUD...

11 months ago | 1
