I have a very simple Markov chain and I was wondering if there is a simplified equation for the time spent in each state

I have the following transition matrix of a Markov chain.
M=[p, 1-p, 0, 0, ...;
p, 0, 1-p, 0, ....;
p, 0, 0, 1-p ....;
...]
i.e. M(i,1)=p for all i, and M(i,i+1)=1-p. M(i,j) is the transition probability from state i to state j. The chain is potentially infinite, but numerically I guess it would be fine to say there are only 100 states, since each state is harder and harder to reach: the probability of getting to state i+1 starting from state 1 without falling back is (1-p)^i, and at every step I might fall back to state 1.
It seems obvious that the times for i=2:n lie on an exponential decay curve, but I have no intuition for i=1.
Any ideas how to derive a formula (no simulation) for the time spent in each state as a function of p?
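For a quick numeric sanity check of this setup (not a closed form), one can truncate the chain to a modest number of states, close the last row somehow, and iterate the row-vector update x*M until it stops changing. In the sketch below the state count, the value of p, and the choice of sending the last state straight back to state 1 are all arbitrary assumptions:
p = 0.3; n = 100;                        % arbitrary truncation size and fall-back probability
M = diag(repmat(1-p,n-1,1),1);           % superdiagonal: move one state up with probability 1-p
M(:,1) = p;                              % first column: fall back to state 1 with probability p
M(n,:) = [1, zeros(1,n-1)];              % close the truncated chain: last state returns to state 1
x = [1, zeros(1,n-1)];                   % start in state 1
for k = 1:1e4                            % iterate x <- x*M until (effectively) converged
    x = x*M;
end
semilogy(x,'.')                          % long-run occupancy decays geometrically in the state index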
4 Comments
Rik
Rik on 19 Aug 2022
Well, then you need help from someone else. I suggested a simulation because that could give you an idea of what formula you would need. I already mentioned a simulation would be slow.
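For illustration, a minimal simulation along those lines might look like the sketch below; the truncation size, p, and the step count are arbitrary, and from the truncated last state the walk simply returns to state 1:
p = 0.3; nStates = 100; nSteps = 1e6;    % arbitrary example values
counts = zeros(1,nStates);
s = 1;                                   % start in state 1
for k = 1:nSteps
    counts(s) = counts(s) + 1;
    if rand < p || s == nStates
        s = 1;                           % fall back to state 1
    else
        s = s + 1;                       % advance to the next state
    end
end
timeFraction = counts/nSteps             % estimated fraction of time spent in each state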
John D'Errico
John D'Errico on 22 Aug 2022
See my answer, where I give enough of a solution to allow you to deduce what the final formula would be. For a transition matrix of infinite size, you can see how it would work. I've stopped short of providing a complete solution, since I consider this far too likely to be homework in some form.


Accepted Answer

John D'Errico
John D'Errico on 22 Aug 2022
Edited: John D'Errico on 22 Aug 2022
This is not even a question about MATLAB. And worse, this is very likely a homework problem. However, since you have gotten an answer, I'll suggest how to approach the problem.
You state that you have a valid transition matrix, essentially as a function of the variable p. It is of the form
M=[p, 1-p, 0, 0, ...;
p, 0, 1-p, 0, ....;
p, 0, 0, 1-p ....;
...]
But you state this is an INFINITE transition matrix, so we never really have a last row. Then you point out that, as long as p is non-zero, the probability is high that you would never make it down to the last state anyway if the matrix were finite, since the probability of getting to the last state is something like (1-p)^(n-1). So in a finite Markov chain transition matrix, we might write the matrix to have a fairly simple last row. For example, we might set the final row to ALWAYS return the process to state 1, IF it ever enters that final state. (We don't want the final state to be an absorbing one, though.)
M = @(p,n) diag(repmat(1-p,n-1,1),1) + [[repmat(p,n-1,1);1],zeros(n,n-1)];
Now we can build a simple matrix, to verify I built it correctly.
syms p
M(p,5)
ans =
[p, 1 - p,     0,     0,     0]
[p,     0, 1 - p,     0,     0]
[p,     0,     0, 1 - p,     0]
[p,     0,     0,     0, 1 - p]
[1,     0,     0,     0,     0]
So a small matrix, just to show you what it looks like, and that I generated it properly. As I have written it, if the process ever gets to state 5, it always drops back to state 1. How do we find the long-term state probabilities? We could use an eigenvalue decomposition for this, but for any value of n that is reasonably large, we know the matrix will be too large for an analytical eigenvalue decomposition. Regardless, there is a simpler solution. We already know that the eigenvalue we care about is 1, and we just want the associated eigenvector. Consider an example, for a 10x10 matrix. I'll do this so you can gain some intuition about the solution.
steadyStateTimes = null(M(p,10).' - eye(10))
steadyStateTimes =
 -1/(p - 1)^9
  1/(p - 1)^8
 -1/(p - 1)^7
  1/(p - 1)^6
 -1/(p - 1)^5
  1/(p - 1)^4
 -1/(p - 1)^3
  1/(p - 1)^2
 -1/(p - 1)
  1
You can spend some time thinking about why that works. (Hey, not my homework, so you need to do some thinking.) Don't worry about what seem to be negative numbers there: since p is less than 1, (p-1) raised to an odd power is negative, so those entries actually evaluate to positive values. However, that vector is not scaled to sum to 1. We need to do that before it makes sense. What does this look like for p==0.5?
subs(steadyStateTimes/sum(steadyStateTimes),p,.5)
ans =
 512/1023
 256/1023
 128/1023
  64/1023
  32/1023
  16/1023
   8/1023
   4/1023
   2/1023
   1/1023
That does look exactly as I expect it would. Do you get the general idea of what it looks like? You should also be able to extrapolate what the solution is for a general matrix of size n, as a function of p. Again, this is surely a homework assignment of some sort, and I have already done too much.
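For reference, the pattern in that normalized eigenvector (confirmed by the follow-up comment below) suggests that the i-th steady-state time is proportional to (1-p)^(i-1). A minimal sketch of that closed form as a function of p and n, assuming the same truncation with last row [1 0 ... 0]; the name steadyState is just an illustrative label:
% Sketch of the closed form for the truncated chain above:
%   pi_i = p*(1-p)^(i-1) / (1 - (1-p)^n),  i = 1..n
% which tends to the geometric form p*(1-p)^(i-1) as n grows.
steadyState = @(p,n) p*(1-p).^(0:n-1) / (1 - (1-p)^n);
steadyState(0.5,10)   % matches 512/1023, 256/1023, ..., 1/1023 from above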
4 Comments
nobody
nobody on 31 Aug 2022
Thank you John, for your (somewhat grudging) help :D
You can rest assured, it was not homework. You are helping out a balding mid-thirties PhD student with a master's in psychology who thought it a good idea to do research in evolutionary game theory ^^ (not sure if that makes it any better).
I usually try to figure out such things myself (I can't stand not really understanding why things are done the way they are), but time was of the essence and I was somewhat overworked. I am sorry if I misused the forum, especially with a math question rather than a MATLAB question. If I ever need help in this regard again, do you know a good forum where such questions are more appropriate?
I will post another answer where I show my final implementation based on your suggestion to conclude the topic. Thanks again!
nobody
nobody on 31 Aug 2022
Edited: nobody on 31 Aug 2022
Thank you everybody for your support!
John and William are right, I failed to define the last row of the matrix. [1 0 0 .... 0] is a good solution.
To speed things up, I use John's solution but define the eigenvector directly:
n=10; %number of states
p=0.5; %probability to go back to state 1
V=(-1).^(n-1:-1:0)./((p-1).^(n-1:-1:0)); %concrete Eigenvector, not symbolic
TS=V/sum(V); %final solution for steady state times
TS(end)
Of course, the smaller p is, the larger n must be to give accurate results. I have settled on checking this by looking at the last state, TS(end) (there are probably better solutions; see the sketch after the numbers below).
I guess it depends on how much we value accuracy over computational speed. For me the code is actually fast enough to add an order of magnitude to n, for example with p = 0.001:
n = 1e3: TS(end) = 5.8210e-04, takes about 0.1 ms
n = 1e4: TS(end) = 4.5221e-08
n = 1e5: TS(end) = 3.5421e-47, takes about 10 ms
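Since TS(end) works out to p*(1-p)^(n-1)/(1 - (1-p)^n), which is roughly p*(1-p)^(n-1), one alternative to eyeballing TS(end) is to choose n up front from a tolerance. A sketch along those lines; the tolerance value is an arbitrary choice:
p   = 0.001;                           % probability of falling back to state 1
tol = 1e-8;                            % arbitrary accuracy target for the neglected tail
n   = ceil(1 + log(tol/p)/log(1-p));   % smallest n with p*(1-p)^(n-1) <= tol
V   = (1-p).^(0:n-1);                  % same stationary eigenvector as above, up to a positive scale factor
TS  = V/sum(V);
[n, TS(end)]                           % chosen n and the probability mass left in the last state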


More Answers (1)

William Rose
William Rose on 22 Aug 2022
Edited: William Rose on 22 Aug 2022
[Edit: added a note about the possibility of more than one equilibrium state.]
Note that your matrix acts on a row vector representing the state distribution, with the matrix on the right. Some people write a Markov matrix as the transpose of yours, in which case the matrix sits to the left of a column vector.
A Markov matrix is square. I don't understand what you intend for the last row of your matrix. Your formula does not work for the last row. For example, with three states, we have
M=[p,1-p, 0;
p, 0 ,1-p;
p, 0 , ? ];
The probabilities in each row must sum to unity, so the last row cannot follow your formula.
A Markov matrix has one eigenvalue that is unity.* The eigenvector associated with that eigenvalue is the distribution that the system converges to after many iterations. This gives you a way to determine, without simulation, the equilibrium distribution. But we cannot demonstrate it until we answer the question about the last row.
*There can be more than one eigenvalue of unity. For example, if the Markov matrix is the identity matrix, there are multiple eigenvalues of unity, and any initial distribution of probability is an equilibrium distribution.
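To make that concrete, here is a small numeric sketch; the values of p and n and the choice of last row [1 0 ... 0] (one way to resolve the last-row question) are my assumptions. Because the matrix multiplies a row vector from the right, the relevant eigenvector comes from the transpose:
p = 0.3; n = 5;                         % arbitrary example values
M = diag(repmat(1-p,n-1,1),1) + [[repmat(p,n-1,1);1], zeros(n,n-1)];
[V,D] = eig(M.');                       % left eigenvectors of M
[~,k] = min(abs(diag(D) - 1));          % locate the eigenvalue equal to 1
piEq  = V(:,k)/sum(V(:,k));             % scale so the entries sum to 1
piEq.'                                  % proportional to (1-p).^(0:n-1)
piEq.'*M - piEq.'                       % essentially zero: piEq*M = piEq, so piEq is the equilibrium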
