Reinforcement Learning multi-agent validation: Can I have a Simulink model host TWO agents and test them?
Rajesh Siraskar
on 6 Apr 2021
Commented: Rajesh Siraskar on 8 Apr 2021
Hi,
I am conducting research on how PPO performs versus DDPG for non-linear plants, and have trained the two agents.
Can I have a Simulink model host TWO agents and test them? Basically I am trying to create a unified validation bench. Please see the image below.
I went through the documentation and tried the implementation below, but am getting errors.
Code:
% Set Simulink model pointers
VALVE_SIMULATION_MODEL = 'sm_Experimental_Setup'; % Simulink experimentation model
DDPG_AGENT = '/DDPG Sub-System/DDPG_Agent';
PPO_AGENT = '/PPO Sub-System/PPO_Agent';
% Load the pre-trained agents from their MAT-files
DDPG_agent = load(DDPG_MODEL_FILE,'agent');
PPO_agent = load(PPO_MODEL_FILE,'agent');
% Code here for setting (1) obsInfo and (2) actionInfo_DDPG and (3) actionInfo_PPO
% .... ...
% Initialise the environment with the serialised agents and run the test
env = rlSimulinkEnv(VALVE_SIMULATION_MODEL, [DDPG_AGENT PPO_AGENT], [obsInfo obsInfo], [actionInfo_DDPG actionInfo_PPO]);
simOpts = rlSimulationOptions('MaxSteps', 2000);
xpr = sim(env,[DDPG_agent.agent, PPO_agent.agent]);
ERROR message:
Error using rlSimulinkEnv (line 108)
No block diagram name specified.
Error in code_DDPG_PPO_Experimental_Setup (line 97)
env = rlSimulinkEnv(VALVE_SIMULATION_MODEL, [DDPG_AGENT PPO_AGENT], [obsInfo obsInfo], [actionInfo_DDPG actionInfo_PPO]);
Screen capture of Simulink model:
0 comments
Accepted Answer
Emmanouil Tzorakoleftherakis
on 6 Apr 2021
That should be possible. Did you follow the multi-agent examples? Since the agents are already trained, you may want to check the last part in the links below, where the agents are simulated.
By the way, the error you are getting makes me think that the paths you provide below are not complete:
DDPG_AGENT = '/DDPG Sub-System/DDPG_Agent';
PPO_AGENT = '/PPO Sub-System/PPO_Agent';
Try adding the model name too.
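For example, a minimal sketch assuming the model name 'sm_Experimental_Setup' from your script (adjust the subsystem names to match your diagram):
MDL = 'sm_Experimental_Setup';
DDPG_AGENT = [MDL '/DDPG Sub-System/DDPG_Agent']; % block path now starts with the model name
PPO_AGENT  = [MDL '/PPO Sub-System/PPO_Agent'];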
Hope this helps
3 comments
Emmanouil Tzorakoleftherakis
on 7 Apr 2021
Based on what I see in the screenshot, it should be something along the lines of 'sm_DDPG_PPO_Experimental_Setup/DDPG_Sub_System/DDPG_Agent'.
I suggest taking a look at any of the Simulink examples in Reinforcement Learning Toolbox to see how the path is defined; the links above should work too. The first element in the path should be the model name.
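To put the pieces together, here is a minimal sketch of the two-agent validation bench, assuming the model and block names from the screenshot, with placeholder MAT-file names and signal dimensions. Note that for a multi-agent environment, rlSimulinkEnv takes the block paths together with cell arrays of observation and action specifications:
% Block paths start with the model name shown in the screenshot
VALVE_SIMULATION_MODEL = 'sm_DDPG_PPO_Experimental_Setup';
DDPG_AGENT = [VALVE_SIMULATION_MODEL '/DDPG_Sub_System/DDPG_Agent'];
PPO_AGENT  = [VALVE_SIMULATION_MODEL '/PPO_Sub_System/PPO_Agent'];
% Load the pre-trained agents (file names are placeholders)
DDPG_agent = load('trained_DDPG_agent.mat', 'agent');
PPO_agent  = load('trained_PPO_agent.mat', 'agent');
% Observation/action specifications -- dimensions and limits are placeholders
obsInfo         = rlNumericSpec([3 1]); % e.g. error, error rate, integral error
actionInfo_DDPG = rlNumericSpec([1 1], 'LowerLimit', 0, 'UpperLimit', 100);
actionInfo_PPO  = rlNumericSpec([1 1], 'LowerLimit', 0, 'UpperLimit', 100);
% One environment hosting both agent blocks (note the cell arrays)
env = rlSimulinkEnv(VALVE_SIMULATION_MODEL, ...
    {DDPG_AGENT, PPO_AGENT}, ...
    {obsInfo, obsInfo}, ...
    {actionInfo_DDPG, actionInfo_PPO});
% Simulate both trained agents on the shared bench, in block-path order
simOpts = rlSimulationOptions('MaxSteps', 2000);
xpr = sim(env, [DDPG_agent.agent, PPO_agent.agent], simOpts);
sim should then return one experience structure per agent, so the DDPG and PPO trajectories can be compared directly.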
More Answers (0)