How do I solve this error?
Apoorv Pandey
on 24 Mar 2023
Commented: Cris LaPierre
on 27 Mar 2023
![Screenshot of the error message](https://www.mathworks.com/matlabcentral/answers/uploaded_files/1334999/image.png)
I am getting this error when I try to train a TD3 RL agent.
Thank you,
Apoorv Pandey
1 Comment
Emmanouil Tzorakoleftherakis
on 24 Mar 2023
If you share a reproduction model, it would be easier to debug.
Accepted Answer
Cris LaPierre
on 24 Mar 2023
When defining your rlQValueFunction, include the ObservationInputNames and ActionInputNames name-value pairs.
See this example: https://www.mathworks.com/help/reinforcement-learning/ref/rl.function.rlqvaluefunction.html#mw_da4065e4-5b9a-41c6-b11b-6692d8698a76
% Observation path layers
obsPath = [featureInputLayer( ...
        prod(obsInfo.Dimension), ...
        Name="netObsInput")
    fullyConnectedLayer(16)
    reluLayer
    fullyConnectedLayer(5,Name="obsout")];

% Action path layers
actPath = [featureInputLayer( ...
        prod(actInfo.Dimension), ...
        Name="netActInput")
    fullyConnectedLayer(16)
    reluLayer
    fullyConnectedLayer(5,Name="actout")];

%<snip>

critic = rlQValueFunction(net, ...
    obsInfo,actInfo, ...
    ObservationInputNames="netObsInput", ...
    ActionInputNames="netActInput")
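
The %<snip> above omits the step that joins the two paths into the single network net. A minimal sketch of that step, adapted from the linked documentation example (the layer names "add" and "QValue" are illustrative, not from the original post):

% Common path: add the two 5-element path outputs, then map them to a scalar Q-value
comPath = [additionLayer(2,Name="add")
    fullyConnectedLayer(1,Name="QValue")];

% Assemble the layer graph and wire each path output into the addition layer
lgraph = layerGraph(obsPath);
lgraph = addLayers(lgraph,actPath);
lgraph = addLayers(lgraph,comPath);
lgraph = connectLayers(lgraph,"obsout","add/in1");
lgraph = connectLayers(lgraph,"actout","add/in2");

% Convert to a dlnetwork for use with rlQValueFunction
net = dlnetwork(lgraph);

With net assembled this way, the rlQValueFunction call above runs as written, and the critic knows which network input corresponds to observations and which to actions.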
2 Comments
Cris LaPierre
on 27 Mar 2023
Please share your data and your code. You can attach files using the paperclip icon. If it's easier, save your workspace variables to a MAT-file and attach that.
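For example (a minimal sketch; the file name is arbitrary):

% Save all current workspace variables to a MAT-file
save("myWorkspace.mat")

You can then attach myWorkspace.mat to a comment.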
More Answers (0)