How do I create neural networks for multiple agents with discrete and continuous actions?

Hi all,
I am trying to create an RL model with two agents in my environment.
Both agents' observations are continuous, but Agent 1's action is discrete and Agent 2's action is continuous. How do I specify them when building the actor networks?
% Create action specifications
numActions = 3;              % Agent 1: three binary pulse channels
numActions2 = 1;             % Agent 2: one continuous action
numActionCombinations = 8;   % 2^3 possible pulse patterns
S0 = [0 0 0];
S1 = [0 0 1];
S2 = [0 1 1];
S3 = [0 1 0];
S4 = [1 1 0];
S5 = [1 0 1];
S6 = [1 0 0];
S7 = [1 1 1];
actionInfo = rlFiniteSetSpec({S0,S1,S2,S3,S4,S5,S6,S7});
actionInfo2 = rlNumericSpec([numActions2 1],'LowerLimit',0.05,'UpperLimit',30);
actionInfo.Name = 'Pulse';
actionInfo2.Name = 'cRef';
% Discrete actor network: the softmax output needs one element per
% discrete action combination (8), not numActions + numActions2.
% obsSizes/obsInfo come from the (continuous) observation spec defined elsewhere.
net = [ featureInputLayer(obsSizes,'Normalization','none','Name','state')
    fullyConnectedLayer(numActionCombinations,'Name','fc')
    softmaxLayer('Name','actionProb') ];
actor = rlStochasticActorRepresentation(net,obsInfo,actionInfo,'Observation','state');

Accepted Answer

Emmanouil Tzorakoleftherakis
If you want to specify the neural network structures yourself, there is nothing special you need to do: simply create two actors and two critics, one for each action space, and you are all set.
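For the setup in the question, that could look like the sketch below (R2020b API). It assumes obsInfo is the rlNumericSpec for the shared continuous observations; the hidden-layer width and the choice of a deterministic actor on the continuous side (as used by DDPG or TD3) are illustrative assumptions, not from the original post.
% Agent 1: stochastic actor, softmax over the 8 discrete pulse combinations
obsDim = obsInfo.Dimension(1);
discreteNet = [ featureInputLayer(obsDim,'Normalization','none','Name','state')
    fullyConnectedLayer(64,'Name','fc1')   % 64 hidden units: placeholder size
    reluLayer('Name','relu1')
    fullyConnectedLayer(numel(actionInfo.Elements),'Name','fc2')
    softmaxLayer('Name','actionProb') ];
actor1 = rlStochasticActorRepresentation(discreteNet,obsInfo,actionInfo, ...
    'Observation',{'state'});
% Agent 2: deterministic actor, tanh output rescaled to [0.05, 30]
continuousNet = [ featureInputLayer(obsDim,'Normalization','none','Name','state')
    fullyConnectedLayer(64,'Name','fc1')
    reluLayer('Name','relu1')
    fullyConnectedLayer(1,'Name','fc2')
    tanhLayer('Name','tanh')
    scalingLayer('Name','action','Scale',14.975,'Bias',15.025) ];
actor2 = rlDeterministicActorRepresentation(continuousNet,obsInfo,actionInfo2, ...
    'Observation',{'state'},'Action',{'action'});
Each actor then gets its own critic and agent: the stochastic discrete actor fits agents such as rlACAgent or rlPPOAgent, while the deterministic actor fits rlDDPGAgent or rlTD3Agent.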
There is also the option to use the default agent feature, where the neural networks are created automatically for you from just the observation and action specifications. See the default agent examples in the documentation.
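A minimal sketch of the default-agent route under the same specs; the particular agent types (PPO for the discrete agent, DDPG for the continuous one) are illustrative choices, not a recommendation from the answer.
% Default agents: networks are generated automatically from the specs
initOpts = rlAgentInitializationOptions('NumHiddenUnit',64);
agent1 = rlPPOAgent(obsInfo,actionInfo,initOpts);    % discrete action space
agent2 = rlDDPGAgent(obsInfo,actionInfo2,initOpts);  % continuous action space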

More Answers (0)

Version: R2020b
