rlTable

Value table or Q table

Description

Value tables and Q-tables are approximation models that you can use within value functions and Q-value functions, respectively. A value table stores one value for each element of a finite set of observations. A Q-table stores one value for each pair from corresponding finite sets of observations and actions.

To create a value function or Q-value function approximator that uses an rlTable object as its model, pass the table to an rlValueFunction, rlQValueFunction, or rlVectorQValueFunction object.
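
For example, the following minimal sketch wraps a value table in a value function critic. The observation values here are arbitrary placeholders, not taken from this page.

% Hypothetical finite observation space with four values (placeholder data).
obsInfo = rlFiniteSetSpec([1 2 3 4]);

% Create the value table. Its Table property is a 4-by-1 array of zeros.
vTable = rlTable(obsInfo);

% Use the table as the approximation model of a value function critic.
critic = rlValueFunction(vTable,obsInfo);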

Creation

Description

T = rlTable(obsinfo) creates a value table for the given discrete observations.

T = rlTable(obsinfo,actinfo) creates a Q table for the given discrete observations and actions.
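
As a sketch of this syntax (the element values below are arbitrary assumptions, not from this page):

% Hypothetical finite observation and action specifications (placeholder data).
obsinfo = rlFiniteSetSpec([1 2 3 4]);   % four observation values
actinfo = rlFiniteSetSpec([-1 0 1]);    % three action values

% Create the Q table. Its Table property is a 4-by-3 array of zeros.
T = rlTable(obsinfo,actinfo);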

Input Arguments

obsinfo

Observation specification, specified as an rlFiniteSetSpec object.

actinfo

Action specification, specified as an rlFiniteSetSpec object.
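
Both specifications can come from an environment, as in the examples below, or be constructed directly. A minimal sketch of direct construction (the element values are arbitrary):

obsinfo = rlFiniteSetSpec([10 20 30]);  % finite set of three observation values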

Properties

Table

Value table or Q table, returned as an array. When Table is a:

  • Value table, it contains NO rows, where NO is the number of finite observation values.

  • Q table, it contains NO rows and NA columns, where NA is the number of possible finite actions.

The sketch after this list shows how to query and set entries of this property.
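
For instance, a sketch of working with the Table property (the specifications are hypothetical, and dot-notation assignment is assumed to behave as for other MATLAB object properties):

T = rlTable(rlFiniteSetSpec([1 2 3 4]),rlFiniteSetSpec([-1 0 1]));
size(T.Table)        % returns [4 3], that is, NO-by-NA
T.Table(2,1) = 0.5;  % value for the second observation and the first action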

Object Functions

rlValueFunction          Value function approximator object for reinforcement learning agents
rlQValueFunction         Q-value function approximator with a continuous or discrete action space for reinforcement learning agents
rlVectorQValueFunction   Vector Q-value function approximator with a hybrid or discrete action space for reinforcement learning agents

Examples

This example shows how to use rlTable to create a value table. Such a table could be used to represent the critic of an agent with a finite observation space.

Create an environment interface, and obtain its observation specifications.

env = rlPredefinedEnv("BasicGridWorld");
obsInfo = getObservationInfo(env)
obsInfo = 
  rlFiniteSetSpec with properties:

       Elements: [25x1 double]
           Name: "MDP Observations"
    Description: [0x0 string]
      Dimension: [1 1]
       DataType: "double"

Create the value table using the observation specification.

vTable = rlTable(obsInfo)
vTable = 
  rlTable with properties:

    Table: [25x1 double]

You can now use the table as the approximation model of a value function for an agent with a discrete observation space. For more information, see rlValueFunction.
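
As a possible next step (a sketch, not part of the original example), you could initialize the table entries and wrap the table in a critic; the random values here are purely illustrative:

rng(0)                       % for reproducibility
vTable.Table = rand(25,1);   % one value per grid-world observation
critic = rlValueFunction(vTable,obsInfo);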

This example shows how to use rlTable to create a Q table. Such a table could be used to represent the critic of an agent with finite observation and action spaces.

Create an environment interface, and obtain its observation and action specifications.

env = rlMDPEnv(createMDP(8,["up";"down"]));
obsInfo = getObservationInfo(env)
obsInfo = 
  rlFiniteSetSpec with properties:

       Elements: [8x1 double]
           Name: "MDP Observations"
    Description: [0x0 string]
      Dimension: [1 1]
       DataType: "double"

actInfo = getActionInfo(env)
actInfo = 
  rlFiniteSetSpec with properties:

       Elements: [2x1 double]
           Name: "MDP Actions"
    Description: [0x0 string]
      Dimension: [1 1]
       DataType: "double"

Create the Q table using the observation and action specifications.

qTable = rlTable(obsInfo,actInfo)
qTable = 
  rlTable with properties:

    Table: [8x2 double]

You can now use the table as the approximation model of a Q-value function for an agent with discrete observation and action spaces. For more information, see rlQValueFunction.
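
As a possible next step (again a sketch, with an arbitrary value), you could set individual Q values and wrap the table in a critic:

qTable.Table(1,2) = 1;   % value for observation 1 and the second action ("down")
critic = rlQValueFunction(qTable,obsInfo,actInfo);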

Version History

Introduced in R2019a