Not able to use multiple GPUs when training a DDPG agent
Hi there,
I am having problems training a DDPG agent on a local machine with multiple (4) GPUs. Only one GPU ever does any work, and I don't know what I'm doing wrong.
I am using a parpool with 4 workers:
parpool('Processes',4);
With
spmd
gpuDevice
end
I can see that each worker is using its own GPU.
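For reference, a minimal sketch that explicitly pins one GPU per worker and prints the assignment (assumes 4 workers with at least 4 visible GPUs; spmdIndex requires R2022b or later, older releases use labindex instead):

```matlab
% Sketch: select one GPU per parallel worker and report the assignment.
spmd
    d = gpuDevice(spmdIndex);  % pin this worker to the GPU matching its index
    fprintf("Worker %d -> GPU %d (%s)\n", spmdIndex, d.Index, d.Name);
end
```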
The critic uses the UseDevice option:
critic = rlQValueFunction(criticNet,obsInfo,actInfo, ...
    UseDevice="gpu", ...
    ObservationInputNames="obsInLyr",ActionInputNames="actInLyr");
As well as the actor:
actor = rlContinuousDeterministicActor(actorNet,obsInfo,actInfo, ...
    UseDevice="gpu");
The training uses the following training options:
trainingOpts = rlTrainingOptions( ...
    MaxEpisodes=maxepisodes, ...
    MaxStepsPerEpisode=maxsteps, ...
    Verbose=true, ...
    Plots="none", ...
    StopTrainingCriteria="AverageReward", ...
    StopTrainingValue=500000, ...
    ScoreAveragingWindowLength=5, ...
    SaveAgentCriteria="AverageReward", ...
    SaveAgentValue=70000);
trainingOpts.UseParallel = true;
trainingOpts.ParallelizationOptions.Mode = "async";
and training is started using:
agent = rlDDPGAgent(actor,critic,agentOptions);
trainingStats = train(agent,env,trainingOpts);
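(For context, the agentOptions object referenced above is created separately and not shown in the question; a hypothetical example with placeholder values might look like the following.)

```matlab
% Hypothetical agent options -- placeholder values, not the actual ones used:
agentOptions = rlDDPGAgentOptions( ...
    SampleTime=0.01, ...
    MiniBatchSize=256, ...
    ExperienceBufferLength=1e6, ...
    DiscountFactor=0.99);
```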
Have I forgotten anything else?
I would be very happy to receive help.
Answers (1)
Emmanouil Tzorakoleftherakis
on 24 Jan 2024
Can you share your agent options and the architecture of the actor and critic networks? As mentioned here, "Using GPUs is likely to be beneficial when you have a deep neural network in the actor or critic which has large batch sizes or needs to perform operations such as multiple convolutional layers on input images". So it could be that there is no need to use more than one GPU in your case.
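One way to confirm which GPUs are actually busy during training is to query their utilization directly (a sketch, assuming an NVIDIA driver with nvidia-smi available on the system path):

```matlab
% Sketch: report per-GPU utilization while training runs.
[status, out] = system("nvidia-smi --query-gpu=index,utilization.gpu,memory.used --format=csv");
if status == 0
    disp(out)   % one line per GPU: index, utilization %, memory used
else
    warning("nvidia-smi is not available on this system");
end
```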