
PPO | RL | A single policy controlled many agents during training

4 views (last 30 days)
Hi,
I am currently working on a PPO agent using the RL and Parallel Computing toolboxes. I read about sharing a policy to control 20 agents (as quoted below).
"During training, a single policy controlled 20 agents that interact with the environment. Though the 20 agents shared a single policy and the same measured dataset, the actions of each agent varied during a training session because of entropy regularization, simulation samples, and converging speed."
I wonder how to set up this condition using the RL Toolbox.
Thank you in advance.

Accepted Answer

Shivansh on 27 Dec 2023
Hi Muhammad,
I understand that you want to implement a model where 20 agents share a single policy. This amounts to training one agent with multiple parallel environments, each an instance in which the agent can interact and learn.
This approach can help with improved efficiency and stabilized training in policy gradient methods like Proximal Policy Optimization (PPO).
You can set this up in the RL Toolbox by following the steps below:
  1. Create or define an environment for your problem statement. If you're using a custom environment, make sure it is compatible with the RL Toolbox. You can read more about RL environments here.
  2. Define the PPO agent with the desired policy representation.
  3. Use the 'parpool' function to create a parallel pool with the desired number of workers (in your case, 20). You can read more about the 'parpool' function here.
  4. Use the 'rlTrainingOptions' function to set up your training options. Set the 'UseParallel' option to true and configure the 'ParallelizationOptions' to use 'async' updates. You can read more about 'rlTrainingOptions' here.
  5. Call the 'train' function with your agent, environment, and training options. You can read more about 'train' here.
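The action diversity described in the quoted passage comes from PPO's entropy regularization, which you can tune through the agent options when building the agent in step 2. A minimal sketch (the weight value is only illustrative, not a recommendation; 'actor' and 'critic' are assumed to be already defined):

```matlab
% Sketch: raise the entropy bonus so the 20 workers explore differently
% even though they share one policy. 0.02 is an illustrative value.
agentOpts = rlPPOAgentOptions( ...
    'EntropyLossWeight',0.02, ... % entropy regularization weight
    'ClipFactor',0.2);            % standard PPO clipping factor
agent = rlPPOAgent(actor,critic,agentOpts); % actor/critic assumed defined
```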
You can refer to the below example code for the implementation:
% Assuming you have already created your custom environment 'myEnv'
env = myEnv();
% Create the PPO agent from your actor and critic representations
agent = rlPPOAgent(actor,critic,agentOpts);
% Create a parallel pool with the desired number of workers
numWorkers = 20; % number of parallel simulation workers
parpool(numWorkers);
% Set up training options for asynchronous parallel training
trainOpts = rlTrainingOptions('UseParallel',true);
trainOpts.ParallelizationOptions.Mode = 'async';
% Train the agent
trainingStats = train(agent,env,trainOpts);
You can refer to the Reinforcement Learning Toolbox documentation for more information.
Hope it helps!
  3 Comments
Shivansh on 3 Jan 2024
Hi Muhammad!
  1. GPU and Parallel Environments: The 'parpool' function does not automatically distribute environments across all available GPUs. Distribution of computations to GPUs has to be managed manually within your environment setup code. You can refer to the following link for more information.
  2. DataToSendFromWorker and StepsUntilDataIsSent: These options are no longer available in MATLAB. You can refer to the 'rlTrainingOptions' documentation for more information.
  3. WorkerRandomSeeds: The default value of -1 for WorkerRandomSeeds assigns seeds based on the worker ID, ensuring each worker uses a different seed for diversity in exploration. This can benefit training stability and exploration.
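Regarding point 3, the worker seeds live on the 'ParallelizationOptions' of 'rlTrainingOptions'; a hedged sketch contrasting the default with explicit per-worker seeds (the seed values are only illustrative):

```matlab
trainOpts = rlTrainingOptions('UseParallel',true);
trainOpts.ParallelizationOptions.Mode = 'async';
% Default: -1 seeds each worker from its worker ID
trainOpts.ParallelizationOptions.WorkerRandomSeeds = -1;
% Or pin one seed per worker (20 workers here) for reproducible runs
trainOpts.ParallelizationOptions.WorkerRandomSeeds = 1:20;
```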
Muhammad Fairuz Abdul Jalal
Thanks a lot for your explanation. It's been a big help to me.


More Answers (0)

