Modify the weight update rule of DDPG training

2 views (last 30 days)
Vu Thang
Vu Thang on 20 Sep 2024
Commented: Vu Thang on 24 Sep 2024
I am currently studying the DDPG algorithm for my control project. I want to modify the weight update rule (gradient descent) slightly to reduce the steady-state error of the system. How can I do this?

Answers (1)

Shubham
Shubham on 22 Sep 2024
Hey Vu,
To modify the gradient-descent updates, you can change the settings of the "rlOptimizerOptions" object, as described in the documentation for DDPG agents: https://www.mathworks.com/help/releases/R2023b/reinforcement-learning/ug/ddpg-agents.html
To adjust the weight update rule, you can:
  • modify "LearnRate" to change how quickly the model converges.
  • modify "L2RegularizationFactor" to reduce overfitting.
Have a look at "rlOptimizerOptions" for more details; a minimal example of setting these options is sketched below:
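For instance (the option values here are only illustrative, and the actor and critic networks are assumed to be created elsewhere):

actorOpts  = rlOptimizerOptions( ...
    LearnRate=1e-4, ...               % step size of each gradient update
    L2RegularizationFactor=1e-4, ...  % weight decay to curb overfitting
    GradientThreshold=1);             % clip gradients for stability
criticOpts = rlOptimizerOptions( ...
    LearnRate=1e-3, ...
    L2RegularizationFactor=1e-4, ...
    GradientThreshold=1);
agentOpts = rlDDPGAgentOptions( ...
    ActorOptimizerOptions=actorOpts, ...
    CriticOptimizerOptions=criticOpts);
% agent = rlDDPGAgent(actor, critic, agentOpts);  % actor/critic defined elsewhere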
Happy coding!
  1 Comment
Vu Thang
Vu Thang on 24 Sep 2024
Thank you for your comment. I have tried modifying 'LearnRate', but it does not seem to work; the steady-state error is still present. I want to add an integral component to the weight update rule, as some papers suggest. Is there any way to do this?
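For reference, the kind of rule I have in mind adds an accumulated (integral) term to the plain gradient step. A toy sketch is below; the gains and the quadratic loss are placeholder assumptions, not the actual DDPG losses, and as far as I can tell "rlOptimizerOptions" does not expose such a term directly, so a custom training loop would be needed:

w = randn(5,1);                      % toy weight vector
target = ones(5,1);                  % toy optimum the loss pulls w toward
dlossdW = @(w) 2*(w - target);       % gradient of a toy quadratic loss
learnRate    = 1e-2;                 % gain on the current gradient
integralGain = 1e-3;                 % gain on the accumulated (integral) term
gradSum = zeros(size(w));            % running integral of past gradients
for k = 1:500
    g = dlossdW(w);                  % current gradient
    gradSum = gradSum + g;           % accumulate the integral term
    w = w - learnRate*g - integralGain*gradSum;  % PI-style weight update
end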

Version

R2023b
