
Issue with Q0 Convergence during Training using PPO Agent

8 views (last 30 days)
Hi guys,
I have developed my model and trained it using a PPO agent. Overall, the training process has been successful. However, I have encountered an issue with the Q0 values. The maximum achievable reward is 6000, and I set the training to stop at 98.5% of that maximum (5910).
During training, I noticed that the Q0 values did not converge as expected; in fact, they appear to be capped at 100, as shown in the figures. I am looking for an explanation of this behavior and trying to understand why Q0 is not reaching the expected convergence.
My agent options are as follows:
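(The options themselves were shared as an image; for readers without it, the sketch below shows the typical shape of such a configuration in Reinforcement Learning Toolbox. The values, and the variable name Ts, are placeholders, not my exact settings.)

% Placeholder sketch only -- see the attached snapshot for the real options.
agentOpts = rlPPOAgentOptions( ...
    'SampleTime',        Ts, ...      % Ts: model sample time (assumed name)
    'DiscountFactor',    0.99, ...
    'ExperienceHorizon', 512, ...
    'MiniBatchSize',     128, ...
    'ClipFactor',        0.2, ...
    'EntropyLossWeight', 0.01);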
If anyone has any insights or explanations regarding the behavior of Q0 during training with the PPO agent, I would greatly appreciate your input. Your expertise and guidance would be invaluable in helping me understand and address this issue.
Thank you.
  2 comments
Emmanouil Tzorakoleftherakis
Can you share the code with the training options?
Muhammad Fairuz Abdul Jalal on 11 Jul 2023
Thanks @Emmanouil Tzorakoleftherakis for the reply.
As requested, here are snapshots of the code.
The action is set between -1 and 1. However, in the model, each action has its own gain (a rough code sketch of this setup follows the snapshots below).
[Snapshot: the critic]
[Snapshot: the actor]
[Snapshot: training options]
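(Rough sketch of the setup in the snapshots above, for readers who cannot see the images. The variable names numActions, Tf and Ts, and all numeric values other than the stop threshold expression, are assumptions rather than my exact code.)

% Rough sketch only -- the snapshots above are the authoritative source.
actInfo = rlNumericSpec([numActions 1], 'LowerLimit', -1, 'UpperLimit', 1);  % agent output in [-1, 1]
% Inside the model, each normalized action is then multiplied by its own gain.

trainOpts = rlTrainingOptions( ...
    'MaxEpisodes',          5000, ...               % placeholder
    'MaxStepsPerEpisode',   floor(Tf/Ts), ...
    'StopTrainingCriteria', 'EpisodeReward', ...
    'StopTrainingValue',    0.985*(Tf/Ts)*3, ...    % 98.5% of the 6000 maximum, i.e. 5910
    'Plots',                'training-progress');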
Thank you in advance. I really appreciate your help and support.


Accepted Answer

Emmanouil Tzorakoleftherakis
Edited: Emmanouil Tzorakoleftherakis on 12 Jul 2023
It seems you set the training to stop when the episode reward reaches the value of 0.985*(Tf/Ts)*3. I cannot comment on the value itself, but it is usually better to use the average reward as the indicator of when to stop training, because it helps filter out outlier episodes.
Aside from that, in case it wasn't clear, the stopping criterion is not based on Q0 but on the light blue value (the individual episode reward) that you see in the plots you shared above. The value of Q0 will improve as the critic gets better trained, but it does not necessarily need to "converge" in order to stop training. A better critic means more stable training, but at the end of the day you only care about your actor. This is usually why it takes a few trials to see which stopping criteria make sense.
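As a concrete illustration, assuming a training options object along the lines of the one sketched in the comment above, the change could look like this (the window length is just a placeholder):

% Stop on the averaged episode reward instead of a single episode's reward.
trainOpts.ScoreAveragingWindowLength = 20;              % episodes to average over (placeholder)
trainOpts.StopTrainingCriteria       = 'AverageReward';
trainOpts.StopTrainingValue          = 0.985*(Tf/Ts)*3; % same 98.5% target, now applied to the average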
  1 comment
Muhammad Fairuz Abdul Jalal on 11 Jul 2023
Thank you for pointing out the better way to set the stopping criterion. I will make the changes accordingly and will update here soon.


More Answers (0)

Products


Version

R2022b

