How can I extract a trained RL Agent's network's weights and biases?
My network is:
statePath = [
    imageInputLayer([numObservations 1 1], 'Normalization', 'none', 'Name', 'state')
    fullyConnectedLayer(NumNeuron, 'Name', 'CriticStateFC1')
    reluLayer('Name', 'CriticRelu1')
    fullyConnectedLayer(NumNeuron, 'Name', 'CriticStateFC2')];
actionPath = [
    imageInputLayer([1 1 1], 'Normalization', 'none', 'Name', 'action')
    fullyConnectedLayer(NumNeuron, 'Name', 'CriticActionFC1')
    reluLayer('Name', 'CriticActionRelu1')
    fullyConnectedLayer(NumNeuron, 'Name', 'CriticActionFC2')];
commonPath = [
    additionLayer(2, 'Name', 'add')
    reluLayer('Name', 'CriticCommonRelu')
    fullyConnectedLayer(1, 'Name', 'output')];
% assemble the critic network: state and action paths merge at the addition layer
criticNetwork = layerGraph(statePath);
criticNetwork = addLayers(criticNetwork, actionPath);
criticNetwork = addLayers(criticNetwork, commonPath);
criticNetwork = connectLayers(criticNetwork, 'CriticStateFC2', 'add/in1');
criticNetwork = connectLayers(criticNetwork, 'CriticActionFC2', 'add/in2');
% set some options for the critic
criticOpts = rlRepresentationOptions('LearnRate', learning_rate, ...
    'GradientThreshold', 1);
% create the critic based on the network approximator
critic = rlQValueRepresentation(criticNetwork, obsInfo, actInfo, ...
    'Observation', {'state'}, 'Action', {'action'}, criticOpts);
agent = rlDQNAgent(critic, agentOpts);
trainingStats = train(agent, env, trainOpts);
After training, I'd like to get the network's trained weights and biases.
Accepted Answer
Anh Tran on 27 Mar 2020 (edited 27 Mar 2020)
You can get the parameters from the trained critic representation of a DQN agent. In MATLAB R2020a, see the getCritic and getLearnableParameters functions (the function names have changed slightly since R2019b). You can follow similar steps to get the actor's parameters from an actor-based agent such as DDPG or PPO, using getActor.
critic = getCritic(agent);
criticParams = getLearnableParameters(critic);
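getLearnableParameters returns the values as a cell array, so a quick way to check what you extracted is to print the size of each entry. A minimal sketch (the loop below is illustrative, not part of the original answer; the ordering of the cell entries follows the network's learnable layers):
% inspect each extracted parameter array (weights and biases)
for k = 1:numel(criticParams)
    fprintf('Parameter %d: %s\n', k, mat2str(size(criticParams{k})));
end
% for an actor-based agent (e.g. DDPG or PPO) the steps are analogous:
% actor = getActor(agent);
% actorParams = getLearnableParameters(actor);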
6 Comments
Francisco Serra on 14 Dec 2023
轩 on 5 Jan 2024
@Francisco Serra I have the same need. I found a crude workaround: save the agent after each episode, then use "getLearnableParameters" to print the parameters of each saved agent.
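A minimal sketch of that workaround, assuming training was run with agent saving enabled and that the saved MAT-files follow the default Agent<N>.mat naming with a saved_agent variable (both are assumptions that may vary by release):
% assumption: trainOpts was configured to save an agent every episode, e.g.
%   trainOpts.SaveAgentCriteria  = "EpisodeCount";
%   trainOpts.SaveAgentValue     = 1;
%   trainOpts.SaveAgentDirectory = "savedAgents";
files = dir(fullfile('savedAgents', 'Agent*.mat'));
for k = 1:numel(files)
    s = load(fullfile(files(k).folder, files(k).name)); % assumed to contain saved_agent
    params = getLearnableParameters(getCritic(s.saved_agent));
    ep = sscanf(files(k).name, 'Agent%d.mat'); % episode number from the file name
    fprintf('Episode %d: %d parameter arrays\n', ep, numel(params));
end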
More Answers (0)