rlContinuousGaussianRewardFunction
Stochastic Gaussian reward function approximator object for neural network-based environment
Since R2022a
Description
When creating a neural network-based environment using rlNeuralNetworkEnvironment, you can specify the reward function approximator using an rlContinuousGaussianRewardFunction object. Do so when you do not know a ground-truth reward signal for your environment and you expect the reward signal to be stochastic.
The reward function object uses a deep neural network as its internal approximation model to predict the reward signal for the environment, given one of the following input combinations.
Observations, actions, and next observations
Observations and actions
Actions and next observations
Next observations
To specify a deterministic reward function approximator, use an rlContinuousDeterministicRewardFunction
object.
Creation
Description
rwdFcnAppx = rlContinuousGaussianRewardFunction(net,observationInfo,actionInfo,Name=Value) creates a stochastic reward function approximator using the deep neural network net and sets the ObservationInfo and ActionInfo properties.
When creating a reward function you must specify the names of the deep neural network inputs using one of the following combinations of name-value pair arguments.
ObservationInputNames, ActionInputNames, and NextObservationInputNames
ObservationInputNames and ActionInputNames
ActionInputNames and NextObservationInputNames
NextObservationInputNames
You must also specify the names of the deep neural network outputs using the RewardMeanOutputName and RewardStandardDeviationOutputName name-value pair arguments.
You can also specify the UseDevice
property using an optional
name-value pair argument. For example, to use a GPU for prediction, specify
UseDevice="gpu"
.
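As a rough sketch, the following example shows one way to construct such an approximator. The layer names, layer sizes, and channel specifications are hypothetical; substitute the dimensions and names used by your own environment. The network takes observation, action, and next-observation inputs and returns the reward mean and standard deviation through separately named output layers.

% Hypothetical observation and action channel specifications.
obsInfo = rlNumericSpec([4 1]);
actInfo = rlNumericSpec([1 1]);

% Input paths for observation, action, and next observation.
obsPath     = featureInputLayer(4,Name="obsIn");
actPath     = featureInputLayer(1,Name="actIn");
nextObsPath = featureInputLayer(4,Name="nextObsIn");

% Common body of the network.
commonPath = [
    concatenationLayer(1,3,Name="concat")
    fullyConnectedLayer(32,Name="fc1")
    reluLayer(Name="relu1")
    fullyConnectedLayer(32,Name="fcCommon")
    ];

% Output paths for the reward mean and standard deviation.
meanPath = fullyConnectedLayer(1,Name="rwdMean");
stdPath  = [
    fullyConnectedLayer(1,Name="stdFc")
    softplusLayer(Name="rwdStd")   % keeps the standard deviation positive
    ];

% Assemble the network and convert it to a dlnetwork object.
lg = layerGraph(obsPath);
lg = addLayers(lg,actPath);
lg = addLayers(lg,nextObsPath);
lg = addLayers(lg,commonPath);
lg = addLayers(lg,meanPath);
lg = addLayers(lg,stdPath);
lg = connectLayers(lg,"obsIn","concat/in1");
lg = connectLayers(lg,"actIn","concat/in2");
lg = connectLayers(lg,"nextObsIn","concat/in3");
lg = connectLayers(lg,"fcCommon","rwdMean");
lg = connectLayers(lg,"fcCommon","stdFc");
net = dlnetwork(lg);

% Create the stochastic reward function approximator, naming the
% network inputs and outputs with the name-value arguments above.
rwdFcnAppx = rlContinuousGaussianRewardFunction(net,obsInfo,actInfo, ...
    ObservationInputNames="obsIn", ...
    ActionInputNames="actIn", ...
    NextObservationInputNames="nextObsIn", ...
    RewardMeanOutputName="rwdMean", ...
    RewardStandardDeviationOutputName="rwdStd");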
Input Arguments
Properties
Object Functions
rlNeuralNetworkEnvironment | Environment model with deep neural network transition models
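A minimal usage sketch of where the reward approximator fits, assuming that a transition function approximator tsnFcnAppx (for example, an rlContinuousDeterministicTransitionFunction object) and an is-done function approximator isDoneFcnAppx (an rlIsDoneFunction object) have already been created for the same observation and action specifications, and assuming the five-argument rlNeuralNetworkEnvironment syntax:

% Hypothetical environment assembly; tsnFcnAppx and isDoneFcnAppx are assumed
% to exist and to use the same obsInfo and actInfo as rwdFcnAppx.
env = rlNeuralNetworkEnvironment(obsInfo,actInfo,tsnFcnAppx,rwdFcnAppx,isDoneFcnAppx);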
Examples
Version History
Introduced in R2022a