
Bay Jay


Active since 2022

Followers: 0   Following: 0

Statistics

MATLAB Answers

15 Questions
1 Answer

RANK
11,611
of 300,369

REPUTATION
4

CONTRIBUTIONS
15 Questions
1 Answer

ANSWER ACCEPTANCE
20.0%

VOTES RECEIVED
0

RANK
of 20,936

REPUTATION
N/A

AVERAGE RATING
0.00

CONTRIBUTIONS
0 Files

DOWNLOADS
0

ALL TIME DOWNLOADS
0

RANK

of 168,436

CONTRIBUTIONS
0 Problems
0 Solutions

SCORE
0

NUMBER OF BADGES
0

CONTRIBUTIONS
0 Posts

CONTRIBUTIONS
0 Public Channels

AVERAGE RATING

CONTRIBUTIONS
0 Highlights

AVERAGE NUMBER OF LIKES

  • Knowledgeable Level 1
  • First Answer
  • Thankful Level 2


Feeds


Question


How can one introduce the previous control inputs (manipulated variables sequence) from the solved OCP to define a custom cost function in NLMPC.
I am trying to create a custom cost function that penalizes the rate of change of the input. I would appreciate some help. How can you i...

3 months ago | 1 Answer | 0

1

Answer

Answered
How to fix NLMPC validateFcn() error in input arguments
You stated 4 parameters. MATLAB expects you to pass the parameters, comma separated, as the last entry to validateFcns: {par...

3 months ago | 0

| accepted
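The accepted answer above refers to the Model Predictive Control Toolbox convention of passing optional parameters to `validateFcns` as a single cell array in the last argument. A minimal sketch, assuming a hypothetical controller with four scalar parameters (the state-function name, dimensions, and all values here are illustrative, not from the original thread):

```matlab
% Nonlinear MPC object: 4 states, 2 outputs, 1 manipulated variable (illustrative)
nlobj = nlmpc(4, 2, 1);
nlobj.Model.StateFcn = "myStateFcn";   % hypothetical state-function name
nlobj.Model.NumberOfParameters = 4;    % controller expects 4 optional parameters

x0 = zeros(4, 1);   % nominal state used for validation
u0 = 0;             % nominal manipulated variable

% The four parameters go into ONE cell array, passed as the last argument;
% the empty [] stands in for measured disturbances (none here).
params = {1.0, 2.5, 0.1, 9.81};        % illustrative parameter values
validateFcns(nlobj, x0, u0, [], params);
```

If the state or cost functions use these parameters, their signatures must accept the same trailing arguments, e.g. `dxdt = myStateFcn(x, u, p1, p2, p3, p4)`.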

Question


How can one integrate LSTM into the Neural State Space deep learning network. Also, what is the exact difference between NSS and NeuralODE
Hello, I have looked at the examples on using the neural state space and the associated neural network used: MLPs. I am intereste...

8 months ago | 0 Answers | 0

0

Answers

Question


nlssest function help: How do I extract neural state space training losses to plot. MATLAB generates an automatic plot (neural state space); I want it in a different way
I am using the nlssest function to train my neural network. MATLAB generates an automatic plot, which is okay; however, I want t...

12 months ago | 1 Answer | 0

1

Answer

Question


Does MATLAB have a four in-wheel motor electric vehicle model, where one motor controls one wheel independently. Sometimes called a 4-Independent Drive EV, or 4 in-wheel motor EV
Hello, I am looking for a model of a battery electric vehicle with four motors. In this configuration, each motor controls one of th...

about a year ago | 2 Answers | 0

2

Answers

Question


TD3 agent fails to explore again after hitting the max action and gets stuck at the max action value. Additionally, the Q0 value exploded to a large value.
The range of a single action is 0.01 to 5. During learning with TD3, the learning is consistent. However, if the agent applie...

more than a year ago | 0 Answers | 0

0

Answers

Question


Could you help clarify the terminology and usage of Exploratory Policy and Exploratory Model in TD3 Reinforcement Learning
The TD3 agent has the exploratory model whose noise parameters we set. In the default PMSM Control example, the UseExploratorypolic...

almost 2 years ago | 2 Answers | 0

2

Answers

Question


I get an error when I try to test an agent trained on a PC (with GPU) on a second computer which has no GPU (Reinforcement Learning)
Hello, I trained a DDPG agent on PC where I placed the critic and actor network on a GPU. After training, I am able to run and ...

more than 2 years ago | 1 Answer | 0

1

Answer

Question


How do I find the objective/cost function for the example Valet parking using multistage NLMPC. (https://www.mathworks.com/help/mpc/ug/parking-valet-using-nonlinear-model-pred
Hello Sir/Madam, I am trying to understand the cost/objective function for NLMPC in the valet parking example, but am not able to accu...

more than 2 years ago | 1 Answer | 0

1

Answer

Question


How do you temporarily disable Fast Restart in Reinforcement Learning.
Hello, I want to temporarily disable Fast Restart. I wish to monitor the actions of the RL environment, but I think the simulation is...

more than 2 years ago | 1 Answer | 0

1

Answer

Question


Training a DDPG, and observation values are zero. How do I initialize the first episode to have initial values for the action?
Hello, I am training a DDPG agent with four actions. My observations are zero for more than 1000 episodes. I suspect because ...

more than 2 years ago | 0 Answers | 0

0

Answers

Question


How do you change the vehiclecostmap to a specific x and y axis dimension. Eg: [0 735] x [0 814] meters to [-30 3] x [-30 1] meters
Hello, I saved a MATLAB figure with axis dimensions xlim = [-30 3] and ylim = [-30 1] as image.png to create an occupancy map....

more than 2 years ago | 0 Answers | 0

0

Answers

Question


How to send values to the workspace during reinforcement agent validation for further plotting and analysis. Using the "RUN" button in Simulink produces some differences from validation.
I want to export specific values to the workspace during agent validation to plot. I do not want to use the Simulink "RUN" but...

more than 2 years ago | 1 Answer | 0

1

Answer

Question


How do we specify a polygon (5-sided, 3-sided, 6-sided...) as an obstacle in the costmap RRTplanner.
I am trying to specify a polygon (pentagon and triangle) as a costmap for the vehicle costmap. How do I implement that in the c...

more than 2 years ago | 0 Answers | 0

0

Answers

Question


Applying reinforcement learning with two continuous actions. During training one varies but the other is virtually static.
Hello, I am trying to train a DDPG agent to control the vehicle's (model: Kinematic) steering angle and velocity. The purpose...

almost 3 years ago | 1 Answer | 0

1

Answer