MBPO with Simulink env: will the reward defined in the Simulink model overwrite the rewardFcn handle defined in the .m file?
I am currently using MATLAB R2023a. In the MBPO for cart-pole example, the reward function and isDone function are defined in .m files. This is the code from the example:

generativeEnv = rlNeuralNetworkEnvironment(obsInfo,actInfo, ...
    [transitionFcn,transitionFcn2,transitionFcn3], ...
    @myRewardFunction,@myIsDoneFunction);

Now I want to use a Simulink model. Will the reward defined in the Simulink model overwrite the rewardFcn handle defined in the .m file?
Answers (1)
Yatharth
on 11 Oct 2023
Hi Bin,
I understand that you have custom "Reward" and "IsDone" functions defined in MATLAB, and that you have created an environment using "rlNeuralNetworkEnvironment".
Since you mention that you have also defined a reward function in the Simulink model, I am curious how you achieved that.
In any case, the reward defined in the Simulink model will not overwrite the reward function defined in the .m file. In the code you provided, the .m-file reward function is explicitly passed as an argument to the "rlNeuralNetworkEnvironment" constructor.
Because the reward is computed inside the environment itself, the "rlNeuralNetworkEnvironment" object will call the .m-file reward function whenever it computes the reward during training or simulation.
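To make this concrete, here is a minimal sketch of what such a reward function handle can look like, following the signature used in the MBPO cart-pole example. The observation indexing and the penalty weights below are hypothetical illustrations, not taken from the actual example:

```matlab
function reward = myRewardFunction(obs, action, nextObs)
% Called by the rlNeuralNetworkEnvironment object at every model step;
% nothing inside a Simulink model participates in this computation.
%
% Assumed (hypothetical) layout: 3rd observation channel is the pole angle.
    angle  = nextObs(3,:);
    % Reward staying upright, penalize angle deviation and control effort
    % (weights 0.5 and 0.01 are illustrative choices, not from the example).
    reward = 1 - 0.5*abs(angle) - 0.01*abs(action);
end
```

Because this handle is what you pass to the constructor, swapping in a Simulink-based environment for data collection does not change which function the neural-network environment uses when it generates synthetic experiences.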
You can refer to the documentation to check your reward function in the simulation.
I hope this helps.