Action Clipping and Scaling in TD3 in Reinforcement Learning

Hello,
I am trying to tune my TD3 agent to solve my custom environment. The environment has two actions with the following ranges: the first in [0, 10] and the second in [0, 2*pi) (specified with rlNumericSpec).
I am following the architecture from this example:
https://in.mathworks.com/help/reinforcement-learning/ug/train-td3-agent-for-pmsm-control.html
Now I have the following questions.
  1. Since tanh outputs values in [-1, 1], should I use a scalingLayer at the end of the actor network, maybe with the following values?
scalingLayer('Name','ActorScaling1','Scale',[5;pi],'Bias',[5;pi])
2. How do I set up the exploration noise and the target policy noise? I mean, what should their variance values be? Not precisely tuned, but a reasonable range, given that I have more than one action and the action ranges are not within [-1, 1]?
3. How do I clip those values to fit inside the action bounds? I don't see any such option in rlTD3AgentOptions.
In all the TD3 examples (and most RL examples in general), the action range is between [-1, 1]. I am confused about how to modify the parameters when the action space is not within [-1, 1], as in my case.
Thanks.

Accepted Answer

Emmanouil Tzorakoleftherakis
Hello,
In general, for DDPG and TD3, it is good practice to include a scalingLayer as the last layer of the actor to scale/shift the actions into the desired range.
To your questions:
1) Yes, you should use the scalingLayer. To specify different scale/bias values for your two outputs, have a look at this example.
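For reference, the tail of the actor network could look something like this (just a sketch; the fullyConnectedLayer and the layer names are placeholders, and the scale/bias values are the half-widths and midpoints of your two ranges):
% Sketch of the final actor layers (preceding layers omitted)
actorTail = [
    fullyConnectedLayer(2,'Name','ActorFC_out')  % one output per action
    tanhLayer('Name','ActorTanh')                % squashes outputs to [-1,1]
    scalingLayer('Name','ActorScaling', ...
        'Scale',[5;pi], ...                      % half-widths of [0,10] and [0,2*pi)
        'Bias',[5;pi])];                         % midpoints of the two ranges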
2) This section provides some tips on how to set up exploration variance, e.g. "It is common to have Variance*sqrt(Ts) be between 1% and 10% of your action range".
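As a concrete starting point (the sample time Ts and the 5% figure are assumptions to tune for your problem; also note that newer releases name this property StandardDeviation instead of Variance):
Ts = 0.1;               % agent sample time (assumed; use your own)
actRange = [10; 2*pi];  % widths of the two action ranges
agentOpts = rlTD3AgentOptions('SampleTime',Ts);
% Exploration noise: choose Variance so Variance*sqrt(Ts) is ~5% of each range
agentOpts.ExplorationModel.Variance = 0.05*actRange/sqrt(Ts);
% Target policy smoothing noise, also set per action
agentOpts.TargetPolicySmoothModel.Variance = 0.05*actRange;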
3) The upper and lower limit options in rlNumericSpec, as well as the scalingLayer, will ensure your actions are within the desired range before exploration noise is added. After adding noise, however, it is possible that your actions will go out of range, which is why it's often necessary to account for that on the environment side. If you are using Simulink, add, for example, a Saturation block. In MATLAB, add an if statement and clip the actions if they are out of range.
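For a MATLAB environment, the clipping could look like the following sketch inside your step function (the signature follows the custom-environment step template; the bounds match your action spec):
function [NextObs,Reward,IsDone,LoggedSignals] = myStep(Action,LoggedSignals)
    % Clip each action to its valid range before using it
    lowerLim = [0; 0];
    upperLim = [10; 2*pi];
    Action = min(max(Action,lowerLim),upperLim);  % elementwise clip
    % ... compute NextObs, Reward, IsDone from the clipped Action ...
end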
Hope that helps
  3 Comments
Emmanouil Tzorakoleftherakis
In the step function, yes. You can just add an if statement, or use "max" or "min".

