How can I get the symbolic steady state vector of a Markov Chain?
172 views (last 30 days)
Hello, does anyone know how to obtain the symbolic steady state vector (i.e. the long-term probability of each state) of this Markov Chain example in MATLAB?
The demonstration stops short at the end and does not show how to go further and obtain the steady-state vector.
Any help with this problem would be much appreciated.
0 comments
Answers (1)
John D'Errico
on 7 Aug 2022
Edited: John D'Errico
on 7 Aug 2022
Easy, peasy. For example, given a simple Markov process, described by the 3x3 transition matrix T.
T = [.5 .2 .3;.1 .4 .5;.1 .1 .8]
There are no absorbing states. We can check that this is indeed a valid transition matrix of a Markov chain: every row sums to 1, and every element lies between 0 and 1.
sum(T,2)
What are the steady-state probabilities?
[V,D] = eig(T')
Take the eigenvector that corresponds to the unit eigenvalue (eigenvalue 1). In this case, it is the first eigenvector.
P = V(:,1)';
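Note that MATLAB does not guarantee where the unit eigenvalue lands in D, so hard-coding column 1 only works for this particular T. A sketch of locating it automatically (the tolerance-free `min` approach below is an assumption, not part of the original answer):

```matlab
% Pick out the left eigenvector of T whose eigenvalue is closest to 1
[V,D] = eig(T');
[~,idx] = min(abs(diag(D) - 1));   % column index of the eigenvalue nearest 1
P = real(V(:,idx))';               % the corresponding stationary direction
```

This avoids inspecting D by eye when T changes.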
Normalize so the elements sum to 1.
format long g
P = P/sum(P)
Those are the steady state probabilities for this system. We can verify that multiplying by T does not change P.
P*T
I won't do your homework for you, but you can easily enough see how to proceed from here.
4 comments
Walter Roberson
on 8 Aug 2022
John, with symbolic coefficients, is it going to be possible to find the entry with eigenvalue 1?
Bruno Luong
on 8 Aug 2022
You don't need to compute the eigenvalues; you can compute this instead, which is possibly easier to do symbolically:
ss = null(T.'-eye(size(T))).';
ss = ss/sum(ss)
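To connect this back to the original symbolic question: with the Symbolic Math Toolbox, the same null-space computation works on a symbolic transition matrix. A minimal sketch, assuming the toolbox is installed; the matrix T below, with parameters a and b, is a made-up example, not the one from the linked demonstration:

```matlab
% Symbolic steady state via the null space (requires Symbolic Math Toolbox)
syms a b real
T = [1-a, a, 0; 0, 1-b, b; b, 0, 1-b];   % hypothetical symbolic transition matrix
ss = null(T.' - eye(size(T))).';          % left null vector of (T - I)
ss = simplify(ss/sum(ss))                 % normalize so the probabilities sum to 1
```

The result is the steady-state vector as a symbolic expression in a and b; substituting numeric values with `subs` should recover the numeric answer.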