Why does the SAC training stop at the first episode? What can trigger it?
I am training an SAC agent for a path-following mobile robot in MATLAB, using two different PI controllers: one for the linear velocity control and the other for the angular velocity control. I connected the gains Kp and Ki of both controllers to the SAC agent. I defined the reward as Reward = -0.1*(abs(Error_Linear)+abs(Error_Angular)) and the stopping condition as Is_done = (abs(Error_Linear)+abs(Error_Angular)) < 1. I do not understand what triggers the training process to stop at the first episode.
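For concreteness, here is a minimal sketch of how the two signals described above might be computed, e.g. in a MATLAB Function block feeding the RL Agent block. Error_Linear and Error_Angular are the tracking errors named in the question; the function name and everything else are assumptions.

% Sketch of the reward/termination signals described in the question.
% The threshold of 1 and the -0.1 scaling are taken from the question text.
function [Reward, Is_done] = rewardAndIsDone(Error_Linear, Error_Angular)
    totalError = abs(Error_Linear) + abs(Error_Angular);
    Reward  = -0.1 * totalError;   % penalize the combined tracking error
    Is_done = totalError < 1;      % episode terminates once the error is small
end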
Answers (1)
Ayush Aniket
on 14 Nov 2024
Edited: Ayush Aniket on 14 Nov 2024
Hi Renaldo,
The reason the agent training stops after the first episode could be the training termination condition specified by the StopTrainingCriteria argument of the rlTrainingOptions function. Refer to the rlTrainingOptions documentation to read about this argument.
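As a minimal sketch (the variable names agent and env, and the numeric values, are assumptions for illustration), the stopping criterion can be set so that a single episode cannot end training:

% Use a running-average criterion so one (lucky) episode cannot stop training.
trainOpts = rlTrainingOptions( ...
    "MaxEpisodes", 1000, ...
    "MaxStepsPerEpisode", 500, ...
    "ScoreAveragingWindowLength", 20, ...
    "StopTrainingCriteria", "AverageReward", ...  % evaluated over the averaging window
    "StopTrainingValue", -5);                     % the reward is negative by design, so choose a reachable target

trainingStats = train(agent, env, trainOpts);

Because AverageReward is computed over a window of episodes, the criterion can only be met after several episodes, unlike an EpisodeReward criterion whose threshold a single episode may already satisfy.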
One similar example can be found here: https://www.mathworks.com/matlabcentral/answers/1779640-reinforcement-learning-agent-stops-training-unexpectedly
If this is not the issue, please share the script you are using.