Understanding Entropy Loss for PPO Agent Exploration

Hello,
I have been experimenting with a PPO agent training on a continous action space. I am a little confused with how the exploration works when using entopy loss. I have mostly used epsilon greedy exploration in the past which seems easier to understand in terms of how the agent explores (taking random actions with probability epsilon, and epsilon decay is easy to calculate knowing the decay rate). This means I know exactly the number of training iterations where the agent should start relying on the trained policy instead of exploring. Im not able to understand how the entropy term controls exploration in the same sense.

Answers (1)

0 Votes

Hi,
In PPO, the goal of training is to strike a balance between the entropy term and fine tuning the probabilities for all available action. This happens throughout training, as, unlike epsilon greedy approach, exploration in PPO does not diminish over time. This page and references therein should be helpful.
Also, don't forget that PPO is stochastic, so there is always some exploration happening when sampling from the action distribution. If after training you want to just use the action mean (i.e., not sample to get the policy output), you can set this option to 0.
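To make the balance concrete, here is a minimal Python sketch of how an entropy bonus enters a PPO-style objective (generic pseudocode-style Python, not Reinforcement Learning Toolbox code; the function names are illustrative, and the policy is assumed to be a 1-D Gaussian head):

```python
import math

def gaussian_entropy(sigma):
    """Differential entropy of a 1-D Gaussian policy, H = 0.5*ln(2*pi*e*sigma^2)."""
    return 0.5 * math.log(2 * math.pi * math.e * sigma**2)

def ppo_loss(surrogate, sigma, entropy_weight):
    """Clipped-surrogate objective plus entropy bonus, written as a loss to minimize.

    A larger entropy_weight makes shrinking sigma (i.e., becoming deterministic)
    more costly, which is what keeps the agent exploring throughout training.
    """
    return -(surrogate + entropy_weight * gaussian_entropy(sigma))
```

Because the entropy bonus rewards a wide distribution at every update, there is no built-in iteration after which exploration "switches off" the way a decayed epsilon does.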
Hope this helps

4 Comments

Mike Jadwin on 26 Oct 2023
Edited: Mike Jadwin on 26 Oct 2023
Thank you for the reply. Doesn't this exploration strategy prevent the agent from converging on a solution while training? Is there any way to reduce the entropy loss as training progresses? I only have experience with the epsilon-greedy exploration strategy, which lets you explicitly control how much exploration the agent does while also allowing you to reduce the exploration over time so the agent can hopefully settle on a solution. Any other material you can provide on this subject would be helpful.
Did you find any way to achieve this? I am interested in applying a similar technique for training an agent. I want my agent to explore more during the initial phase of training, gradually decrease the exploration, and rely on the inherent stochasticity of the PPO agent in the later stages of training to ensure the policy converges.
Yeah, I don't think what I tried was very ideal, but here's what I did: set a specific number of training epochs you want to complete for each entropy weight. For example, you can start with high entropy for maybe 1000 epochs, then take that trained agent and start a new training run with that agent as the initialization and a lower entropy term. It's not ideal because every time you kick off a new training session it opens a new training history window, so depending on how many times you do this it can get pretty cluttered. Especially if you want to do a linear decay where the entropy changes frequently; I had to just turn off the plotter so it doesn't refresh every time. It might be worth finding another agent, since there is no built-in way to anneal the exploration in the default PPO agent. It seems designed to explore throughout the entire training time, which to me seemed to result in unstable or suboptimal results.
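The staged/linear decay described above can be written as a simple schedule that maps the current epoch to an entropy weight. A minimal Python sketch, assuming a linear anneal (the function name and default values are illustrative, not toolbox settings):

```python
def entropy_weight_schedule(epoch, start=0.05, end=0.001, decay_epochs=1000):
    """Linearly anneal the entropy-loss weight from `start` to `end`
    over the first `decay_epochs` epochs, then hold it at `end`."""
    frac = min(epoch / decay_epochs, 1.0)
    return start + frac * (end - start)

# Weight at the start, halfway through the decay (~0.0255), and after the
# decay window, where it stays clamped at the final value.
weights = [entropy_weight_schedule(e) for e in (0, 500, 1000, 2000)]
```

In a staged workflow like the one above, you would look up the schedule before each training session and pass the resulting weight to the agent options for that stage.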
Thank you for your suggestion. I tried this approach and it seemed to work, but as you said it is not a very efficient approach.


Categories

More about Reinforcement Learning Toolbox in the Help Center and File Exchange

Products

Version

R2022a

Asked on 10 Oct 2023
