Why does matlab.io.datastore.MiniBatchable not support parallel processing (multi-GPU training)?

For my application, the data must be fed to the CNN in a specific order during training. To do so, I implemented a custom mini-batchable datastore called CmapDatastore that inherits from the matlab.io.datastore.MiniBatchable class. It works fine when training a CNN model on a single GPU. However, when I try to train on multiple GPUs by setting trainingOptions as:
trainingOptions( ...
    'ExecutionEnvironment', 'multi-gpu', ...)
The error message is:
The MiniBatchable Datastore CmapDatastore does not support parallel operations.
The code of CmapDatastore.m is attached to this message.
I would appreciate your help greatly.
Many thanks,
Yong

Accepted Answer

Yoann Roth on 14 Mar 2022
Hello Yong,
To support parallel training with your custom datastore, you need to choose one of the following combinations:
  • Implement MiniBatchable + PartitionableByIndex (see here)
  • Implement Partitionable only. This is what is documented here.
Unfortunately, you implemented MiniBatchable + Partitionable, and that is not a supported combination.
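For illustration, a minimal sketch of the first supported combination's class declaration (only the inheritance list is shown; the abstract members still have to be implemented):
% Hedged sketch: only the mixin list changes. The abstract members
% (MiniBatchSize, NumObservations, read, partitionByIndex, ...)
% must still be implemented as before.
classdef CmapDatastore < matlab.io.Datastore & ...
        matlab.io.datastore.MiniBatchable & ...
        matlab.io.datastore.PartitionableByIndex
    % properties and methods as before, plus partitionByIndex
end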
Usually, the recommendation is to stick to the datastores that we ship (e.g. fileDatastore) and to use the transform function to modify them appropriately.
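For example (a hedged sketch; the file pattern, the variable name cmap, and the label helper are illustrative assumptions, not from your code):
% Hedged sketch: adapt a shipped datastore with transform instead of
% writing a custom class. 'data/*.mat', 'cmap', and labelFor are
% illustrative assumptions.
fds = fileDatastore('data/*.mat', ...
    'ReadFcn', @(f) getfield(load(f), 'cmap'));
tds = transform(fds, @(x) {x, labelFor(x)});  % labelFor is a hypothetical helper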
In your case, the choice of a custom datastore seems justified, because the data has a specific structure and shuffle and partition must behave in a specific way.
To support parallel training, you could:
  • Implement PartitionableByIndex if it is not too much effort (a sketch of partitionByIndex follows the code below). Given the structure of your data, though, it might not be possible to index it directly.
  • Otherwise, remove the MiniBatchable interface from your datastore and modify read so that it returns not a table but just one row of data, like so:
function [data, info] = read(ds)
    % Return a single observation as a cell row: {predictor, response}
    info = struct;
    data = {read(ds.Datastore), ds.Labels(ds.CurrentFileIndex)};
    ds.CurrentFileIndex = ds.CurrentFileIndex + 1;
end
This puts your datastore in the Partitionable-only case and should support the 'multi-gpu' option.
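As referenced above, here is a minimal sketch of what partitionByIndex could look like for the first option, modeled on the usual custom mini-batch datastore pattern (it assumes the underlying datastore exposes a settable Files property, as imageDatastore does):
function subds = partitionByIndex(ds, indices)
    % Hedged sketch: copy the datastore and keep only the requested
    % observations. Assumes ds.Datastore.Files is settable.
    subds = copy(ds);                  % datastores are Copyable handles
    subds.Datastore.Files = ds.Datastore.Files(indices);
    subds.Labels = ds.Labels(indices);
    subds.CurrentFileIndex = 1;
    reset(subds.Datastore);
end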
  4 comments
Yoann Roth on 15 Mar 2022
Hi Yong,
Your custom datastore does not have to be "MiniBatchable" to be supported.
A custom datastore that is only "Partitionable" will work and supports parallel training.
Yoann
Yong on 16 Mar 2022
Hi Yoann,
Thank you very much for your response. Yes, you are correct that a Partitionable datastore will work with trainNetwork.
I have another question. Because the read function in a Partitionable datastore reads only one file at a time, it seems that when multiple GPUs are used in training, the datastore might be partitioned into segments that violate my specific requirement: the data files must be read in groups of at least 4 files, in the order x, x_rotated_90, x_rotated_180, and x_rotated_270, and I arrange the files in that order in the datastore. Any suggestions for a workaround? (One possible group-preserving partition is sketched after this comment.)
Many thanks,
Yong
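A minimal sketch of one possible workaround, assuming the files are stored in the grouped order described above and that the underlying datastore's Files property is settable: override partition so that whole groups of 4 files, rather than individual files, are dealt out to the workers, and cap maxpartitions at the number of groups.
% Hedged sketch: partition on group boundaries so every worker
% receives whole groups of 4 files in their original order
% (x, x_rotated_90, x_rotated_180, x_rotated_270).
function subds = partition(ds, n, index)
    GROUPSIZE = 4;
    files     = ds.Datastore.Files;
    numGroups = numel(files) / GROUPSIZE;

    % Deal out whole groups round-robin to the n partitions.
    myGroups = index:n:numGroups;

    % Expand the selected groups back to file indices, keeping order.
    fileIdx = zeros(1, numel(myGroups) * GROUPSIZE);
    for k = 1:numel(myGroups)
        first = (myGroups(k) - 1) * GROUPSIZE;
        fileIdx((k - 1) * GROUPSIZE + (1:GROUPSIZE)) = first + (1:GROUPSIZE);
    end

    subds = copy(ds);
    subds.Datastore.Files  = files(fileIdx);
    subds.Labels           = ds.Labels(fileIdx);
    subds.CurrentFileIndex = 1;
    reset(subds.Datastore);
end

function n = maxpartitions(ds)
    % At most one partition per group, so a group is never split.
    n = numel(ds.Datastore.Files) / 4;
end
The round-robin assignment of groups is just one choice; handing each worker a contiguous chunk of groups would preserve the global order more closely.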
