Simulink interruption during RL training

6 views (last 30 days)
gerway
gerway on 29 Mar 2025
Commented: gerway on 3 Apr 2025
Hey everyone,
Anyone who has used reinforcement learning (RL) to train on physical models in Simulink knows that during the initial training phase, random exploration often triggers assertions or other instabilities that can cause Simulink to crash or diverge. This makes it very difficult to use the official train function provided by MathWorks, because once Simulink crashes, all the RL experience (replay buffer) is lost—essentially forcing you to start training from scratch each time.
So far, the only workaround I’ve found is to wrap the training process in an external try-catch block. When a failure occurs, I save the current agent parameters and load them again at the start of the next training run. But as many of you know, this slows down training by 100x or more.
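For reference, the wrapper is roughly the following (a simplified sketch; env, agent, and trainOpts are assumed to already exist, e.g. an rlSimulinkEnv environment and an off-policy agent, and the checkpoint file name is just a placeholder):

    % Outer wrapper: restart training whenever the Simulink model errors out.
    checkpointFile = "agentCheckpoint.mat";
    keepTraining = true;
    while keepTraining
        if isfile(checkpointFile)
            s = load(checkpointFile);          % reload the last saved agent
            agent = s.agent;
        end
        try
            trainingStats = train(agent, env, trainOpts);
            keepTraining = false;              % finished without a crash
        catch ME
            warning("Training aborted (%s); saving agent and restarting.", ME.message);
            save(checkpointFile, "agent");     % parameters survive, but the replay buffer does not
        end
    end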
Alternatively, one could pre-train on a simpler case and then fine-tune on the full model, but that’s not always feasible.
Has anyone discovered a better way to handle this?

Accepted Answer

Jaimin
Jaimin on 1 Apr 2025
To address instabilities that may lead to crashes or divergence in Simulink, I can recommend a few strategies.
  1. Consider implementing custom error handling directly within the Simulink model, in addition to the external try-catch block. Incorporating blocks that detect when the model is approaching an unstable state allows you to dynamically reset or adjust parameters and help prevent crashes.
  2. Use Simulink Test to create test cases that identify and rectify the scenarios that lead to instability before you run RL training.
  3. Start training with a simplified version of your model and gradually increase its complexity. This helps the agent learn stable behaviours before tackling the full model.
  4. Regularly save checkpoints of the agent's parameters and replay buffer during training, so you can resume from the last stable state rather than starting over (see the sketch after this list).
  5. Use surrogate models or simplified representations of your Simulink model to perform initial training. Once the agent has learned a stable policy, transfer it to the full model.
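Regarding point 4, the built-in train function can already write periodic checkpoints to disk through rlTrainingOptions; a minimal sketch follows (the directory, threshold, and file name are placeholders, and the variable name inside the saved .mat file may differ slightly between releases):

    % Ask train() to save a copy of the agent whenever the episode reward exceeds a threshold.
    trainOpts = rlTrainingOptions( ...
        MaxEpisodes = 2000, ...
        MaxStepsPerEpisode = 500, ...
        SaveAgentCriteria = "EpisodeReward", ...   % criterion for writing a checkpoint
        SaveAgentValue = 100, ...                  % threshold (placeholder value)
        SaveAgentDirectory = "savedAgents");
    trainingStats = train(agent, env, trainOpts);

    % To resume later, load one of the saved agents and call train() again with it.
    s = load(fullfile("savedAgents", "Agent500.mat"));   % file name depends on the episode number
    agent = s.saved_agent;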
I recommend implementing a custom training loop, as it gives you complete control over the training process. This includes managing episodes, determining how often to save checkpoints, and handling errors. This approach offers the flexibility to incorporate custom logic for stability checks, dynamically adjust exploration parameters, and integrate domain-specific knowledge.
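As a rough illustration of such a loop, one pattern is to train in short chunks inside your own loop, so checkpointing, error handling, and exploration adjustments stay under explicit control. This is a sketch that assumes an off-policy agent such as DDPG or TD3; the SaveExperienceBufferWithAgent option, which keeps the replay buffer inside saved copies of the agent, may not be available for every agent type or release:

    % Train in short chunks inside our own loop.
    agent.AgentOptions.SaveExperienceBufferWithAgent = true;   % keep the buffer in saved copies

    chunkOpts = rlTrainingOptions( ...
        MaxEpisodes = 50, ...                   % episodes per chunk (placeholder value)
        StopTrainingCriteria = "EpisodeCount", ...
        StopTrainingValue = 50, ...
        Verbose = false, Plots = "none");

    for chunk = 1:40                            % 40 chunks of 50 episodes (placeholder values)
        try
            train(agent, env, chunkOpts);       % agent is a handle object and is updated in place
        catch ME
            warning("Chunk %d aborted (%s); continuing with the current agent.", chunk, ME.message);
        end
        save(sprintf("chunk_%03d.mat", chunk), "agent");   % checkpoint after every chunk

        % Custom logic between chunks goes here, e.g. decaying exploration noise
        % (property names depend on the agent type):
        % agent.AgentOptions.NoiseOptions.StandardDeviation = ...
    end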
Kindly refer to the following links for additional information.
I hope you find this helpful.
  1 Comment
gerway
gerway on 3 Apr 2025
The main issue is that the .ssc files in the Rankine Cycle example can't be modified, so I can only passively try to adapt to them. Analyzing all the assertions is far beyond my capabilities, so ending the episode before an assertion occurs or creating a simplified version of the model is nearly impossible for me. Still, thank you.


More Answers (0)
