Action value exceeds the boundary of the final-layer activation function of the actor
Hi,
I'm using a DDPG agent for my RL application in MATLAB R2022a.
I want the action to take values between 0 and 1. To do this, I use a sigmoidLayer as the final layer of the actor. However, the action exceeds the 0-1 boundary. I also tried tanh with
scalingLayer(Scale=0.5,Bias=0.5);
, but it exceeded the boundary again. How is this possible?
Meanwhile, I also tried
actInfo = rlNumericSpec([1 1],LowerLimit=0,UpperLimit=1);
to limit the action. It does limit the action value, but it doesn't scale it; it just acts as a saturation (like putting a Saturation block in Simulink in front of the action output), so the RL training works incorrectly this way.
How can I get the action to stay between 0 and 1?
3 comments
Kautuk Raj
on 18 Jun 2023
It is unexpected that the sigmoid and tanh functions would produce values outside their respective ranges of [0, 1] and [-1, 1]. However, if you are experiencing this issue, you can enforce the bounds on the output of your actor network by applying a custom layer with element-wise clipping.
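Such a clipping layer can be written as a custom deep learning layer. A minimal sketch (the class name is illustrative, not from the original thread); note that hard clipping has zero gradient outside the bounds, which is the gradient-distortion concern raised in the answer below:

```matlab
% ClipLayer.m - hypothetical custom layer that clips its input
% element-wise to [0, 1] so the actor output cannot leave the range.
classdef ClipLayer < nnet.layer.Layer
    methods
        function layer = ClipLayer(name)
            layer.Name = name;
            layer.Description = "Element-wise clip to [0, 1]";
        end
        function Z = predict(~, X)
            % min/max support dlarray inputs, so this single method
            % covers both the training and inference passes.
            Z = min(max(X, 0), 1);
        end
    end
end
```

Save this in its own file and append `ClipLayer("clip")` as the last layer of the actor network.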
awcii
on 18 Jun 2023
awcii
on 19 Jun 2023
Answers (1)
Harsh
on 16 Jul 2025
0 votes
Hi @awcii
I understand that you're seeing action values exceed the [0, 1] range even when using "sigmoidLayer" or "tanhLayer" with a "scalingLayer". The most probable reason is that the DDPG agent adds exploration noise after the actor network's output. This noise bypasses the bounding effect of the final activation layer, so the actual actions can fall outside the desired range. Additionally, using "rlNumericSpec" with "LowerLimit" and "UpperLimit" only clips the final action values; it does not scale or constrain the network's internal outputs, which can interfere with learning by distorting gradients.
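The exploration noise in question lives in the agent options, not in the network. A minimal sketch of where it is configured (the standard-deviation value is illustrative):

```matlab
% The DDPG agent's Ornstein-Uhlenbeck exploration noise is added AFTER
% the actor output, which is why a bounded final activation can still
% produce out-of-range actions: action = actorOutput + noise.
opts = rlDDPGAgentOptions;
opts.NoiseOptions.StandardDeviation = 0.05;  % illustrative value
```

Shrinking the noise only reduces how far the action overshoots; it does not remove the problem, which is why the fix below moves the noise inside the network.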
To fix this, you should create a custom noise layer that adds Gaussian noise during training and passes data unchanged during inference. Place this layer just before the final "sigmoidLayer" in your actor network. This ensures that the noise is applied to the pre-activation values, and the "sigmoidLayer" guarantees the final output remains strictly within (0, 1), preserving both proper exploration and stable gradient flow.
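A minimal sketch of such a noise layer (class name and sigma value are illustrative): it perturbs the pre-activation values during training via `forward` and passes data through unchanged during inference via `predict`.

```matlab
% GaussianNoiseLayer.m - hypothetical custom layer that injects
% Gaussian exploration noise during training only, placed just before
% the final sigmoidLayer so actions stay strictly within (0, 1).
classdef GaussianNoiseLayer < nnet.layer.Layer
    properties
        Sigma  % standard deviation of the exploration noise
    end
    methods
        function layer = GaussianNoiseLayer(sigma, name)
            layer.Sigma = sigma;
            layer.Name = name;
            layer.Description = "Pre-activation Gaussian exploration noise";
        end
        function Z = predict(~, X)
            % Inference: pass data through unchanged.
            Z = X;
        end
        function Z = forward(layer, X)
            % Training: perturb the pre-activation values; the
            % sigmoidLayer that follows bounds the final action.
            Z = X + layer.Sigma * randn(size(X), 'like', X);
        end
    end
end
```

Save this in its own file and make it the penultimate layer of the actor, e.g. `... ; GaussianNoiseLayer(0.1, "noise"); sigmoidLayer(Name="action")]`.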
Please refer to the MATLAB documentation on defining custom deep learning layers and on DDPG agents for more details on these topics.