How to actively see signals in scopes while an RL agent is training?
Hello everyone,
I'm working on a project where an RL agent in a Simulink model is trained to track a signal. During training, I'm only able to see the episode reward in the Episode Manager window. After training has ended, all the scopes run with what I assume is the agent from the last episode. I would like to see the scopes with the signals during training, so that I can pinpoint why some errors occur.
At the moment, my script for the training looks like this:
doTraining = true;
if doTraining
    % Train the agent.
    trainingOpts = rlTrainingOptions(...
        "MaxEpisodes",100,...
        "MaxStepsPerEpisode",500,...
        "StopTrainingCriteria","EpisodeCount",...
        "StopTrainingValue",50,...
        "Verbose",true,...
        "Plots","training-progress",...
        "SaveAgentCriteria","EpisodeReward",...
        "SaveAgentValue",4500,...
        "SaveAgentDirectory","savedAgents\shield1",...
        "UseParallel",true);
    trainingStats = train(agent,env,trainingOpts);
else
    % Load a pretrained agent.
    load('savedAgents/shield/Agent70.mat', 'agent');
end
sim(mdl)
Does anyone have a suggestion on how to make this happen?
Thanks in advance!
Answers (1)
Aiswarya
on 4 Sep 2023
Hi,
I understand that you are working on a project involving RL agents and you would like to visualize the signals during each episode of training. There are two ways to visualize and log data during training:
- Output of the train function, i.e., the trainingStats structure. For this you need to enable the Signal Logging checkbox under Model Settings > Data Import/Export. Then you can specify the signals that you wish to log in the Simulink model by right-clicking each signal and selecting Log Selected Signals. After training is complete, you can access this data from the SimulationInfo field of the trainingStats structure (a short sketch of reading it back follows this answer). The attributes of the train function's output are documented here: https://www.mathworks.com/help/reinforcement-learning/ref/rl.agent.rlqagent.train.html#mw_e64fb86d-43b6-4ef2-821d-f6a3086cf8cd
- Simulation Data Inspector (SDI). This option lets you visualize the data as simulations are completed during training. For this you need to enable the Record logged workspace data in Simulation Data Inspector option under Model Settings > Data Import/Export, and then specify the signals that you wish to log in the Simulink model. In your code I can see that you are training in parallel, so you need to execute the following command before starting training to make the SDI compatible with parallel simulations:
>> Simulink.sdi.enablePCTSupport('local') % if the parallel workers are from the local cluster
Parallel workers will send logged data to the SDI as the simulations are completed. The SDI also provides options to export logged data to the MATLAB workspace. A programmatic setup sketch follows as well.
You can find both options in the Model Settings window under Data Import/Export.
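As a minimal sketch of the first option, assuming your model's logging variable keeps the default name logsout and that a signal named trackingError was marked for logging (both names are placeholders for whatever you actually log), the per-episode data could be read back like this:
% Inspect a signal logged during one training episode. For Simulink
% environments, trainingStats.SimulationInfo holds one
% Simulink.SimulationOutput per episode.
epIdx  = 1;                                 % episode to inspect
simOut = trainingStats.SimulationInfo(epIdx);
logs   = simOut.logsout;                    % default signal logging variable
sig    = getElement(logs, 'trackingError'); % placeholder signal name
ts     = sig.Values;                        % timeseries of the logged signal
plot(ts.Time, ts.Data), grid on
title('trackingError, episode 1')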
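And here is a sketch of setting up the second option from the command line instead of the Model Settings dialog. The set_param parameter names below (SignalLogging, InspectSignalLogs, DataLogging) are my understanding of the parameters behind those checkboxes, and Plant is a hypothetical block in your model, so adapt them to your setup:
% Mark a signal for logging (the programmatic equivalent of
% right-click > Log Selected Signals); 'Plant' is a placeholder block.
ph = get_param([mdl '/Plant'], 'PortHandles');
set_param(ph.Outport(1), 'DataLogging', 'on');
% Turn on signal logging and SDI recording for the model, then enable
% SDI support for parallel workers before calling train.
set_param(mdl, 'SignalLogging', 'on');     % Signal Logging checkbox
set_param(mdl, 'InspectSignalLogs', 'on'); % Record logged workspace data in SDI
Simulink.sdi.enablePCTSupport('local');
trainingStats = train(agent, env, trainingOpts);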