How to extract the trained actor network from the trained agent in Matlab environment? (Reinforcement Learning Toolbox)
wujianfa93
on 2 Jun 2020
Answered: Anh Tran
on 5 Jun 2020
After an agent has been successfully trained with DDPG in the MATLAB environment, the MathWorks tutorial says the agent should be verified by executing the following code:
simOptions = rlSimulationOptions('MaxSteps',50);
experience = sim(env,agent,simOptions);
Unfortunately, this is not flexible enough for my program. I would like to extract the trained actor network from the trained agent so that I can obtain actions by feeding the observation vector directly into the actor network at each sampling step of my robot program, for more complex tasks. However, I can't seem to find the trained actor network among the variables in the workspace.
Is there a way to extract the trained actor network? If so, how to call the extracted actor network (e.g., what are the I/O formats of the network)?
0 Comments
Accepted Answer
Anh Tran
on 5 Jun 2020
You can retrieve the actor (i.e., the policy) from the trained agent with getActor. Then you can use the actor to predict the best action for a given observation with getAction.
% get actor representation
actor = getActor(agent);
% actor predicts an action given an observation
action = getAction(actor, observation);
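Regarding the I/O formats: for an actor representation, getAction typically expects the observations wrapped in a cell array (one cell per observation channel, matching the environment's observation info), and it returns the action in a cell array as well. A minimal sketch of a per-step control loop, where getMyObservation and applyAction are hypothetical placeholders for your own robot interface:

```matlab
% get the trained actor from the agent
actor = getActor(agent);

numSteps = 50;
for k = 1:numSteps
    % obtain the current observation as a column vector
    % (getMyObservation is a placeholder for your own sensor/state code)
    obsVector = getMyObservation();

    % observations are passed as a cell array; the action comes back
    % in a cell array as well
    action = getAction(actor, {obsVector});

    % applyAction is a placeholder for sending the command to the robot
    applyAction(action{1});
end
```

The exact dimensions of obsVector and action{1} are defined by the observation and action specifications the agent was created with (see getObservationInfo and getActionInfo on the environment), so it is worth checking those against your robot's signal layout.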
0 Comments
More Answers (0)