Using getValue in a MATLAB Fcn block in Simulink

2 views (last 30 days)
Sam Chen
Sam Chen on 13 Jul 2020
I've trained a DDPG agent and I am trying to test the critic in Simulink. I tried using a MATLAB Fcn block, but the error shown in the figure below appeared when I ran it. Is there any way to get a value from the critic network in Simulink?

Answers (2)

Emmanouil Tzorakoleftherakis
Edited: Emmanouil Tzorakoleftherakis on 6 Mar 2021
Hi Sam,
Before R2020b, the easiest way to bring the critic into Simulink without using the Agent block is to call generatePolicyFunction to generate a script that does inference, and then use a MATLAB Fcn block to call the generated script. You may need to add coder.extrinsic statements at the beginning of the script for anything that does not support code generation, but that should work.
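For reference, that workflow looks roughly like the sketch below. The MAT-file name and the variable name agent are assumptions for illustration; generatePolicyFunction writes evaluatePolicy.m and agentData.mat by default.

% Sketch of the generatePolicyFunction workflow described above.
% 'trainedAgent.mat' and the variable 'agent' are placeholder names.
load('trainedAgent.mat','agent');   % load your trained DDPG agent
generatePolicyFunction(agent);      % writes evaluatePolicy.m and agentData.mat
% evaluatePolicy can then be called from a MATLAB Fcn block; if parts of
% it do not support code generation, declare it extrinsic in the block:
%   coder.extrinsic('evaluatePolicy')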
In R2020b, Deep Learning Toolbox ships a couple of blocks that allow you to bring deep neural networks into Simulink. This would be a faster way to do the same.
To do inference on the critic in Simulink before R2020b, create a function that does inference like the following:
function q = evaluateCritic(observation1)
    q = localEvaluate(observation1);
end
%% Local Functions
function q = localEvaluate(observation1)
    persistent policy
    if isempty(policy)
        % load the network generated by generatePolicyFunction
        policy = coder.loadDeepLearningNetwork('agentData.mat','policy');
    end
    q = predict(policy,observation1);
end
and in the MATLAB Fcn block in Simulink put the following:
function q = MATLABFcn(observation1)
    coder.extrinsic('evaluateCritic')
    q = single(zeros(1,2)); % dimensions should match the number of actions
    q = evaluateCritic(observation1);
end
Note that it is preferable to use a critic architecture that has a single input channel and outputs multiple values, one per action. I tested the above and it works in R2020a.
Hope that helps
4 Comments
Sam Chen
Sam Chen on 15 Jul 2020
Thanks for the detailed explanation!
You said that "Note that it's preferrable to use the critic architecture that has a single input channel and outputs multiple values based on the number of actions. I tested the above and works in R2020a."
However, a DDPG critic takes both the observation and the action as inputs and outputs a scalar. Could you tell me if there is a way to modify it?
Emmanouil Tzorakoleftherakis
Sorry, to be clear, I was referring to cases where you have discrete action spaces, as in DQN (this link shows how to do this).
Another option, using getValue, is to create the following inference function:
function q = evaluateCritic(observation, action)
    persistent critic
    if isempty(critic)
        temp = load('SimulinkPendulumDDPG.mat','agent'); % load your agent
        critic = getCritic(temp.agent);                  % extract the critic
    end
    q = getValue(critic, {observation}, {action});
end
and put the following in the MATLAB Fcn block:
function q = MATLABFcn(observation, action)
    % the block function needs a different name than the extrinsic function,
    % otherwise the call would resolve to the block function itself
    coder.extrinsic('evaluateCritic');
    q = single(0); % initialize the output
    q = evaluateCritic(observation, action);
end
Also tested in R2020a. Hope that helps!



Sam Chen
Sam Chen on 20 Jul 2020
Edited: Sam Chen on 20 Jul 2020
Sorry for the late reply. It works very well in my case. I really appreciate your detailed and clear explanation. I remember there is a button to accept an answer, but I can't find it, so I voted for your answer instead.

Version

R2020a
