I have access to a cluster with several Nvidia A100 40GB GPUs. I am training a deep learning network on these GPUs; however, trainNetwork() only makes use of around 10GB of the GPU's vRAM. I believe this is a limitation of Nvidia CUDA, see here.
I have two related questions:
  1. Other cluster users are writing in Python with the 'DistributedDataParallel' module in PyTorch and are able to load 40GB of data (over the CUDA limitation) onto the GPUs; is there a similar workaround for MATLAB?
  2. If not, is there any way to use Multi-Instance GPU (MIG), i.e. essentially split the physical card into several smaller virtual GPUs and compute in parallel?
Ideally I would like to speed up computation, so having three quarters of the vRAM sitting empty which could otherwise be used for mini-batches is a little heartbreaking.

Accepted Answer

Joss Knight
Joss Knight on 14 Mar 2023

2 votes

Just increase the MiniBatchSize and it'll use more memory.
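For reference, MiniBatchSize is set through trainingOptions. A minimal sketch, reusing the dstrain and layers variables from the question below; the solver, batch size and other option values here are placeholders to adjust for your own setup:

```matlab
% Larger mini-batches mean larger activation arrays held on the GPU,
% which increases vRAM usage. Values below are illustrative only.
options = trainingOptions('sgdm', ...
    'MiniBatchSize', 64, ...               % increase until memory is exhausted
    'ExecutionEnvironment', 'gpu', ...
    'Plots', 'training-progress');

net = trainNetwork(dstrain, layers, options);
```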

6 comments

Hi Joss,
Thank you for the answer. This was my initial port of call.
My initial MiniBatchSize was 30, which used about 10GB of vRAM. As this is a 40GB card I then doubled MiniBatchSize to 60, expecting around 20GB of vRAM usage. At that stage I get the dreaded:
Maximum variable size allowed on the device is exceeded.
However, when I monitor vRAM usage in real time I seem to max out at 12GB. (Right-hand graph, red plot. A second idle A100 can be seen in the background in blue.)
I can see GPU utilisation is around 90% for this GPU, which is good (left-hand graph, green plot), but also that there are regular dips, which I assume are pauses to load in more data. (There could also be spikes of vRAM usage which I don't see due to a slow polling rate.)
I am using the inbuilt trainNetwork with a datastore; I can share this code if needed, however it is all very vanilla.
net = trainNetwork(dstrain,layers,options);
@Joss Knight, am I correct in saying that the expected behaviour is that increasing MiniBatchSize should allow me to use most of the vRAM on the card, say ~36GB (with some vRAM reserved for Nvidia tasks etc.)?
Christopher
Joss Knight
Joss Knight on 14 Mar 2023
To get that error requires intermediate arrays with more than 2 billion elements. Which is a lot. Your network is probably too shallow, or your inputs are too high resolution, or both.
The point really is that your GPU's memory capacity and its capacity for simultaneous threads are not the same thing. Your GPU can only do so many things at once. Putting more stuff in memory isn't going to change that, and neither is dividing up the GPU between processes. But if your memory is full because you have a lot of arrays in flight, because your network is very deep, you could potentially have a better model. Or just a slower one. It depends.
If you are simply concerned that your GPU is having to wait for file i/o, you could try the DispatchInBackground option.
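The DispatchInBackground option mentioned above is also set through trainingOptions. A minimal sketch (solver and batch size are placeholders); note that this option requires the datastore to be partitionable, which becomes relevant later in the thread:

```matlab
% DispatchInBackground preprocesses and queues mini-batches on parallel
% workers, so the GPU is not left idle waiting on file I/O between batches.
options = trainingOptions('sgdm', ...
    'MiniBatchSize', 30, ...
    'DispatchInBackground', true);
```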
Christopher McCausland
Christopher McCausland on 14 Mar 2023
Edited: Christopher McCausland on 14 Mar 2023
Hi Joss,
Thank you for the reply.
From my end, as this was test data, the network being shallow and the inputs being high resolution are both true. Would you happen to have reading recommendations on this topic so I can make a bit more sense of what is going on?
If I understand you correctly, the vRAM usage (and therefore MiniBatchSize) is limited by the physical memory on the card and/or the processing throughput of the card, whichever is exceeded first?
I will try DispatchInBackground; my only issue is that I am also facing issues with isPartitionable(), as detailed here. I don't know if that is your speciality, but it would be fantastic if you could take a look. I am the first to admit I am still learning this topic!
Christopher
Joss Knight
Joss Knight on 14 Mar 2023
You'll have to ask a specific question about DispatchInBackground but I can certainly help.
GPU memory usage is determined by the number and size of GPU arrays that are allocated on the card (plus some other stuff). But the allowed size of each of those arrays is also limited by the number of elements that can be indexed with an int32, so you can't have more than 2147483647 elements.
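The 2147483647-element ceiling is easy to check against your own layer sizes. A sketch with hypothetical dimensions (a 512x512x256 single-precision activation per observation is an assumption for illustration, not taken from the thread):

```matlab
% Element count of one intermediate activation array across a mini-batch.
% The per-observation activation size below is hypothetical.
elements = 512 * 512 * 256 * 60;        % 4,026,531,840 elements at batch 60
limit    = double(intmax('int32'));     % 2,147,483,647

% With these sizes, batch 30 fits under the limit but batch 60 does not,
% which would trigger "Maximum variable size allowed on the device is
% exceeded" even with plenty of free vRAM.
exceeds = elements > limit;
```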
Most people never hit the array size limit, so they can just increase their MiniBatchSize until they run out of memory. This increases the size of the arrays stored on the GPU during training.
Once your card is running something on every available thread, it can't do more work in parallel, so it has to schedule to do it in chunks. So if you're using every available thread and passing it data as fast as you can, it's neither here nor there whether you're processing huge arrays in a single go or smaller arrays in multiple goes. You should worry less about whether your memory is full and more about whether your GPU is working flat out.
Hi Joss,
That makes much more sense, thank you for the explanation too.
I should be able to eke out the final 10% of GPU utilisation by finding the exact MiniBatchSize that causes the failure. Regardless, as you mentioned, down-sampling the data should allow for a larger MiniBatchSize too.
I will wait for an answer to https://uk.mathworks.com/matlabcentral/answers/1926685-deep-learning-with-partitionable-datastores-on-a-cluster?s_tid=srchtitle as I'll need partition to be true before I can use DispatchInBackground. Ideally, I would like to distribute this over multiple GPU workers, so hopefully I can get partition working.
In the meantime I will mark this question as answered and will @ you in the next one if I can get partition behaving.
Thank you!
Christopher
Joss Knight
Joss Knight on 14 Mar 2023
You may never get that 10% so don't get your hopes up! Also, the best utilization is not necessarily at the highest batch size.
Why not ask a new question where you show the code for your datastore, and one of us can help you make it partitionable.


More Answers (0)

