
Train DDPG Agent to Swing Up and Balance Pendulum

This example shows how to train a deep deterministic policy gradient (DDPG) agent to swing up and balance a pendulum modeled in Simulink®.

For more information on DDPG agents, see Deep Deterministic Policy Gradient (DDPG) Agent. For an example that trains a DDPG agent in MATLAB®, see Compare DDPG Agent to LQR Controller.

Fix Random Seed Generator to Improve Reproducibility

The example code involves the computation of random numbers at various stages, such as initialization of the agent, creation of the actor and critic, resetting the environment during simulations, generating observations (for stochastic environments), generating exploration actions, and sampling mini-batches of experiences for learning. Fixing the random number stream preserves the sequence of random numbers every time you run the code and improves the reproducibility of results. You will fix the random number stream at various locations in the example.

Fix the random number stream with seed 0 and the Mersenne Twister random number algorithm. For more information on random number generation, see rng.

previousRngState = rng(0,"twister")
previousRngState = struct with fields:
     Type: 'twister'
     Seed: 0
    State: [625x1 uint32]

The output previousRngState is a structure that contains information about the previous state of the stream. You will restore the state at the end of the example.

Pendulum Swing-Up Model

The reinforcement learning environment for this example is a simple frictionless pendulum that initially hangs in a downward position. The training goal is to make the pendulum stand upright without falling over using minimal control effort.

Open the model.

mdl = "rlSimplePendulumModel";
open_system(mdl)

For this model:

  • The upward balanced pendulum position is 0 radians, and the downward hanging position is pi radians.

  • The torque action signal from the agent to the environment ranges from –2 to 2 N·m.

  • The observations from the environment are the sine of the pendulum angle, the cosine of the pendulum angle, and the pendulum angle derivative.

  • The reward $r_t$, provided at every time step, is

$$r_t = -\left(\theta_t^2 + 0.1\dot{\theta}_t^2 + 0.001u_{t-1}^2\right)$$

Here:

  • $\theta_t$ is the angle of displacement from the upright position.

  • $\dot{\theta}_t$ is the derivative of the displacement angle.

  • $u_{t-1}$ is the control effort from the previous time step.
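
For example, near the upright position with a small angular velocity and a modest previous control effort, the reward is close to zero. The following quick check evaluates the reward for illustrative values (the variable names here are not part of the model):

theta = 0.1;     % displacement from upright (rad)
thetaDot = 0.5;  % angular velocity (rad/s)
uPrev = 1.0;     % previous control effort (N·m)
r = -(theta^2 + 0.1*thetaDot^2 + 0.001*uPrev^2)
r = 
   -0.0360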

For more information on this model, see Load Predefined Control System Environments.

Create Environment Interface

Create a predefined environment interface for the pendulum.

env = rlPredefinedEnv("SimplePendulumModel-Continuous")
env = 
SimulinkEnvWithAgent with properties:

           Model : rlSimplePendulumModel
      AgentBlock : rlSimplePendulumModel/RL Agent
        ResetFcn : []
  UseFastRestart : on

Get the observation specification from the environment interface.

obsInfo = getObservationInfo(env);

The interface has a continuous action space where the agent can apply torque values between –2 and 2 N·m to the pendulum.

actInfo = getActionInfo(env)
actInfo = 
  rlNumericSpec with properties:

     LowerLimit: -2
     UpperLimit: 2
           Name: "torque"
    Description: [0x0 string]
      Dimension: [1 1]
       DataType: "double"

Set the observations of the environment to be the sine of the pendulum angle, the cosine of the pendulum angle, and the pendulum angle derivative.

set_param( ...
    "rlSimplePendulumModel/create observations", ...
    "ThetaObservationHandling","sincos");

To define the initial condition of the pendulum as hanging downward, specify an environment reset function using an anonymous function handle. This reset function sets the model workspace variable theta0 to pi.

env.ResetFcn = @(in)setVariable(in,"theta0",pi,"Workspace",mdl);
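
A reset function can also randomize the initial condition. As a sketch (not applied in this example, which always starts with the pendulum hanging downward), the following commented-out line would start each episode from a random angle instead:

% Alternative (not used here): start each episode from a random angle.
% env.ResetFcn = @(in) setVariable(in,"theta0",pi*(2*rand-1),"Workspace",mdl);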

Specify the simulation time Tf and the agent sample time Ts in seconds.

Ts = 0.05;
Tf = 20;

Create DDPG Agent

When you create the agent, the actor and critic networks are initialized randomly. To ensure reproducibility, fix the random number stream.

rng(0,"twister")

Create a default rlDDPGAgent object using the environment specification objects.

agent = rlDDPGAgent(obsInfo,actInfo);
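
Optionally, you can inspect the randomly initialized actor and critic. This quick check is not required for training; getModel returns the underlying deep neural network of each approximator object:

% Extract the actor and critic from the agent.
actor = getActor(agent);
critic = getCritic(agent);
% Display a summary of the actor network.
summary(getModel(actor))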

To ensure that the RL Agent block in the environment executes every Ts seconds (instead of the default one second), set the SampleTime property of the agent.

agent.AgentOptions.SampleTime = Ts;

Set a lower learning rate and gradient thresholds to avoid instability.

agent.AgentOptions.CriticOptimizerOptions.LearnRate = 1e-3;
agent.AgentOptions.ActorOptimizerOptions.LearnRate = 1e-3;
agent.AgentOptions.CriticOptimizerOptions.GradientThreshold = 1;
agent.AgentOptions.ActorOptimizerOptions.GradientThreshold = 1;

Increase the experience buffer length and the mini-batch size.

agent.AgentOptions.ExperienceBufferLength = 1e5;
agent.AgentOptions.MiniBatchSize = 128;
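
DDPG agents explore using Ornstein-Uhlenbeck action noise. You can display the noise settings through the NoiseOptions property and tune them if needed; this example keeps the default values:

agent.AgentOptions.NoiseOptions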

Train Agent

To train the agent, first specify the training options. For this example, use the following options.

  • Run training for at most 5000 episodes, with each episode lasting at most ceil(Tf/Ts) time steps.

  • Display the training progress in the Reinforcement Learning Training Monitor dialog box (set the Plots option) and disable the command line display (set the Verbose option to false).

  • Stop training when the agent receives an average cumulative reward greater than –740 when evaluating the deterministic policy. At this point, the agent can quickly balance the pendulum in the upright position using minimal control effort.

  • Save a copy of the agent for each episode where the cumulative reward is greater than –740.

For more information, see rlTrainingOptions.

maxepisodes = 5000;
maxsteps = ceil(Tf/Ts);
trainOpts = rlTrainingOptions(...
    MaxEpisodes=maxepisodes,...
    MaxStepsPerEpisode=maxsteps,...
    ScoreAveragingWindowLength=5,...
    Verbose=false,...
    Plots="training-progress",...
    StopTrainingCriteria="EvaluationStatistic",...
    StopTrainingValue=-740,...
    SaveAgentCriteria="EvaluationStatistic",...
    SaveAgentValue=-740);

Train the agent using the train function. Training this agent is a computationally intensive process that takes several hours to complete. To save time while running this example, load a pretrained agent by setting doTraining to false. To train the agent yourself, set doTraining to true.

doTraining = false;
if doTraining
    % Use the rlEvaluator object to measure policy performance every 10
    % episodes
    evaluator = rlEvaluator(...
        NumEpisodes=1,...
        EvaluationFrequency=10);
    % Train the agent.
    trainingResults = train(agent,env,trainOpts,Evaluator=evaluator);
else
    % Load the pretrained agent for the example.
    load("SimulinkPendulumDDPG.mat","agent")
end
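
If you train the agent yourself, you can save the result for later reuse. As a sketch, use a new file name so that you do not overwrite the pretrained agent shipped with the example:

% save("myTrainedPendulumDDPG.mat","agent","trainingResults")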

Simulate DDPG Agent

Fix the random stream for reproducibility.

rng(0,"twister");

To validate the performance of the trained agent, simulate it within the pendulum environment. For more information on agent simulation, see rlSimulationOptions and sim.

simOptions = rlSimulationOptions(MaxSteps=500);
experience = sim(env,agent,simOptions);

(The simulation opens the Simple Pendulum Visualizer figure, which animates the pendulum.)

totalReward = sum(experience.Reward)
totalReward = 
-731.2350

The simulation shows that the agent is able to stabilize the pendulum in the vertical position.
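
You can also examine the signals logged in the experience output. For example, assuming the action channel is named "torque" (as shown in the action specification above), the following sketch plots the applied torque over time:

% Extract the torque trajectory from the simulation output.
tq = squeeze(experience.Action.torque.Data);
t = experience.Action.torque.Time;
plot(t,tq)
xlabel("Time (s)")
ylabel("Torque (N·m)")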

Restore the random number stream using the information stored in previousRngState.

rng(previousRngState);
