Plotting a custom reinforcement learning environment template
Hi there,
I am trying to create a custom reinforcement learning environment similar to the predefined CartPole-Discrete environment in the Reinforcement Learning Toolbox, except for a few tweaks to parameters such as the maximum force, the threshold angle, etc.
I typed
rlCreateEnvTemplate("CartPole_Environment")
and (supposedly) changed the value of MaxForce to 20 in the generated class. I then saved the file under the same name, CartPole_Environment.
Now when I run the following commands in the Command Window, I do not get a plot of the environment as I did with CartPole-Discrete:
env = CartPole_Environment;
validateEnvironment(env);
plot(env);
I would really be grateful if someone could help me with this!
Answers (1)
Akshat
on 12 Jan 2024
Hi Arjun,
I understand you want to plot the "CartPole_Environment" that you have modified at your end.
When I generate the template file with the following command,
rlCreateEnvTemplate("CartPole_Environment");
the "plot" method in the generated file looks like this:
% (optional) Visualization method
function plot(this)
    % Initiate the visualization

    % Update the visualization
    envUpdatedCallback(this)
end
As you can see, the method body is essentially a placeholder: you need to specify yourself what should be plotted. I would suggest filling in this plot method (and the envUpdatedCallback method it calls) with whatever you want to visualise; a sketch is given below. If the problem still persists, feel free to reach out here with the error or issue you are facing.
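For reference, here is a minimal sketch of how the visualization could be filled in for a cart-pole. It assumes the state vector is ordered as [x; dx; theta; dtheta], that the class has a HalfPoleLength property, and that a protected Figure property holds the figure handle (add one if your template does not already declare it); adjust the names to match your modified class.
% (optional) Visualization method
function plot(this)
    % Initiate the visualization: one figure with fixed axes limits
    this.Figure = figure('Visible','on','HandleVisibility','off');
    ha = gca(this.Figure);
    ha.XLimMode = 'manual';
    ha.YLimMode = 'manual';
    ha.XLim = [-3 3];
    ha.YLim = [-1 2];
    hold(ha,'on');
    % Update the visualization
    envUpdatedCallback(this)
end
% (optional) called whenever the environment is updated
function envUpdatedCallback(this)
    if ~isempty(this.Figure) && isvalid(this.Figure)
        % Assumed state ordering: [x; dx; theta; dtheta]
        x     = this.State(1);
        theta = this.State(3);
        L     = this.HalfPoleLength;   % assumed property name
        ha = gca(this.Figure);
        cla(ha);
        % Cart: a rectangle centred on the cart position
        rectangle(ha,'Position',[x-0.25 -0.125 0.5 0.25],'FaceColor',[0.3 0.3 0.6]);
        % Pole: a line from the cart pivot, tilted by theta
        line(ha,[x x+2*L*sin(theta)],[0 2*L*cos(theta)],'LineWidth',3,'Color',[0.8 0.3 0.1]);
        drawnow();   % flush graphics so the plot animates during simulation
    end
end
With these two methods filled in, calling plot(env) opens the figure, and, assuming your step and reset methods still call notifyEnvUpdated as in the generated template, envUpdatedCallback is invoked after every step so the cart and pole are redrawn as the simulation runs.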
Hope this helps!