Markov Decision Processes (MDP) Toolbox

Functions for solving discrete-time Markov Decision Processes.
15.5K Downloads
Updated 20 Jan 2015


The MDP toolbox provides functions for solving discrete-time Markov Decision Processes: backwards induction, value iteration, policy iteration, and linear programming algorithms, with several variants.
The functions were developed in MATLAB (note that one of the functions requires the MathWorks Optimization Toolbox) by Iadine Chadès, Marie-Josée Cros, Frédérick Garcia, and Régis Sabbadin of the Biometry and Artificial Intelligence Unit of INRA Toulouse (France).
Toolbox page: http://www.inra.fr/mia/T/MDPtoolbox
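As a rough illustration of what the value iteration algorithm computes, here is a minimal, self-contained MATLAB sketch for a tiny two-state, two-action MDP. It deliberately does not call the toolbox's own functions (their exact signatures are not documented on this page); the transition matrices P, rewards R, and discount factor are made-up example values.

% Minimal value iteration sketch for a 2-state, 2-action MDP (illustrative only).
% P(:,:,a) is the transition matrix for action a; R(s,a) is the expected reward.
P = zeros(2, 2, 2);
P(:,:,1) = [0.9 0.1; 0.4 0.6];   % transitions under action 1
P(:,:,2) = [0.2 0.8; 0.7 0.3];   % transitions under action 2
R = [1 0; 0 2];                  % R(s,a): reward in state s for action a
discount = 0.95;
epsilon  = 1e-6;

V = zeros(2, 1);                 % value function, initialised to zero
while true
    Q = zeros(2, 2);             % Q(s,a) = R(s,a) + discount * sum_s' P(s,s',a) * V(s')
    for a = 1:2
        Q(:, a) = R(:, a) + discount * P(:,:,a) * V;
    end
    [Vnew, policy] = max(Q, [], 2);   % greedy improvement over actions
    if max(abs(Vnew - V)) < epsilon * (1 - discount) / (2 * discount)
        V = Vnew;
        break                    % value function has converged
    end
    V = Vnew;
end
disp(policy')                    % greedy policy: best action in each state

The toolbox implements this kind of iteration (and the other listed algorithms) for general P, R, and discount inputs; see the toolbox page above for the documented interface.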

Cite As

Marie-Josee Cros (2026). Markov Decision Processes (MDP) Toolbox (https://de.mathworks.com/matlabcentral/fileexchange/25786-markov-decision-processes-mdp-toolbox), MATLAB Central File Exchange. Retrieved.

MATLAB Release Compatibility
Created with R2014b
Compatible with any release
Platform Compatibility
Windows macOS Linux
Categories
Find more on Markov Chain Models in Help Center and MATLAB Answers
Acknowledgements

Inspired: Betavol(x,R,fig)

Version    Published    Release Notes
1.6

Add the possibility to download as a toolbox (.mltbx file).

1.5.0.0

Complete Other Requirements.

1.4.0.0

Mainly improve documentation (Jan. 2014)

1.3.0.0

Update the zip file!

1.2.0.0

Version 4.0 (October 2012) is fully compatible with GNU Octave (version 3.6); the output of several functions (mdp_relative_value_iteration, mdp_value_iteration, and mdp_eval_policy_iterative) was modified.

1.1.0.0

Add all authors' names

1.0.0.0