4.1. Introduction

A stochastic process $X = \{X(t), t \in T\}$ is a collection of random variables. That is, for each $t$ in the index set $T$, $X(t)$ is a random variable. We often interpret $t$ as time and call $X(t)$ the state of the process at time $t$. If the index set $T$ is a countable set, say $T = \{0, 1, 2, \ldots\}$, we say that $X$ is a discrete time stochastic process, whereas if $T$ consists of a continuum of possible values, we say that $X$ is a continuous time stochastic process.

In this chapter we consider a discrete time stochastic process $X_n$, $n = 0, 1, 2, \ldots$ that takes on a finite or countable number of possible values. Unless otherwise mentioned, this set of possible values will be denoted by the set of nonnegative integers $0, 1, 2, \ldots$. If $X_n = i$, then the process is said to be in state $i$ at time $n$. We suppose that whenever the process is in state $i$, there is a fixed probability $P_{i,j}$ that it will next be in state $j$. That is, we suppose that

\[ P\{X_{n+1} = j \mid X_n = i, X_{n-1} = i_{n-1}, \ldots, X_0 = i_0\} = P_{i,j} \tag{4.1} \]

for all states $i_0, i_1, \ldots, i_{n-1}, i, j$ and all $n \ge 0$. Such a stochastic process is known as a Markov chain. Equation (4.1) may be interpreted as stating that, for a Markov chain, the conditional distribution of any future state $X_{n+1}$, given the past states $X_0, X_1, \ldots, X_{n-1}$ and the present state $X_n$, is independent of the past states and depends only on the present state. That is, given the present state, the past and future states of a Markov chain are independent.

The value $P_{i,j}$ represents the probability that the process will, when in state $i$, next make a transition into state $j$. As probabilities are nonnegative and the process must make a transition into some state, we have

\[ P_{i,j} \ge 0, \qquad \sum_j P_{i,j} = 1 \]

Let $P$ denote the matrix of one-step transition probabilities $P_{i,j}$:

\[ P = \begin{pmatrix} P_{0,0} & P_{0,1} & \cdots & P_{0,j} & \cdots \\ P_{1,0} & P_{1,1} & \cdots & P_{1,j} & \cdots \\ \vdots & \vdots & & \vdots & \\ P_{i,0} & P_{i,1} & \cdots & P_{i,j} & \cdots \\ \vdots & \vdots & & \vdots & \end{pmatrix} \]

Example 4.1a Consider a communications system that transmits the digits 0 and 1. Each digit transmitted must pass through several stages, at each of which there is a probability $p$ that the digit entered will be unchanged when it leaves. Letting $X_n$ denote the digit entering the $n$th stage, then $\{X_n, n \ge 0\}$ is a two-state Markov chain having a transition probability matrix

\[ P = \begin{pmatrix} p & 1-p \\ 1-p & p \end{pmatrix} \]
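To make the transition mechanism concrete, here is a minimal simulation sketch (our own illustration, not part of the text): at each step, the next state is drawn from the row of the transition matrix indexed by the current state. The value $p = 0.9$ and the function name simulate_chain are assumptions made for the example.

```python
import random

def simulate_chain(P, start, n_steps, seed=0):
    """Simulate n_steps transitions of a Markov chain with one-step matrix P."""
    rng = random.Random(seed)
    state = start
    path = [state]
    for _ in range(n_steps):
        u = rng.random()
        cumulative = 0.0
        # Choose the next state j with probability P[state][j].
        for j, prob in enumerate(P[state]):
            cumulative += prob
            if u < cumulative:
                state = j
                break
        else:
            state = len(P[state]) - 1  # guard against floating-point rounding
        path.append(state)
    return path

p = 0.9  # assumed value for illustration; the text leaves p general
P = [[p, 1 - p],
     [1 - p, p]]
print(simulate_chain(P, start=0, n_steps=10))
```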
Example 4.1b Suppose that whether it rains today depends on previous weather conditions only through the last two days. Specifically, suppose that if it has rained for the past two days, then it will rain tomorrow with probability 0.7; if it rained today but not yesterday, then it will rain tomorrow with probability 0.5; if it rained yesterday but not today, then it will rain tomorrow with probability 0.4; and if it has not rained in the past two days, then it will rain tomorrow with probability 0.2.

If we let the state at time $n$ depend only on whether it is raining on day $n$, then the preceding would not be a Markov chain (why not?). However, we can transform it into a Markov chain by letting the state on any day be determined by the weather conditions during both that day and the preceding one. For instance, we can say that the process is in

state 0 if it rained both today and yesterday
state 1 if it rained today but not yesterday
state 2 if it rained yesterday but not today
state 3 if it rained neither today nor yesterday

The preceding would then represent a four-state Markov chain whose transition probability matrix is easily shown to be as follows:

\[ P = \begin{pmatrix} 0.7 & 0 & 0.3 & 0 \\ 0.5 & 0 & 0.5 & 0 \\ 0 & 0.4 & 0 & 0.6 \\ 0 & 0.2 & 0 & 0.8 \end{pmatrix} \]

4.2. Chapman-Kolmogorov Equations

The n-step transition probability $P^n_{i,j}$ of the Markov chain is defined as the conditional probability, given that the chain is currently in state $i$, that it will be in state $j$ after $n$ additional transitions. That is,

\[ P^n_{i,j} = P\{X_{n+m} = j \mid X_m = i\}, \qquad n \ge 0,\; i, j \ge 0 \]

Of course $P^1_{i,j} = P_{i,j}$. The Chapman-Kolmogorov equations provide a method of computing these n-step probabilities. These equations are

\[ P^{n+m}_{i,j} = \sum_{k=0}^{\infty} P^n_{i,k} P^m_{k,j} \tag{4.2} \]

and are derived by noting that $P^n_{i,k} P^m_{k,j}$ is the probability that the chain, currently in state $i$, will go to state $j$ after $n + m$ transitions through a path that takes it into state $k$ at the $n$th transition. Hence, summing these probabilities over all intermediate states $k$ yields the probability that the process will be in state $j$ after $n + m$ transitions. Formally, we have

\[
\begin{aligned}
P^{n+m}_{i,j} &= P\{X_{n+m} = j \mid X_0 = i\} \\
&= \sum_{k=0}^{\infty} P\{X_{n+m} = j, X_n = k \mid X_0 = i\} \\
&= \sum_{k=0}^{\infty} P\{X_{n+m} = j \mid X_n = k, X_0 = i\} P\{X_n = k \mid X_0 = i\} \\
&= \sum_{k=0}^{\infty} P^m_{k,j} P^n_{i,k}
\end{aligned}
\]

If we let $P^{(n)}$ denote the matrix of n-step transition probabilities $P^n_{i,j}$, then the Chapman-Kolmogorov equations assert that

\[ P^{(n+m)} = P^{(n)} \cdot P^{(m)} \]

where the dot represents matrix multiplication. Hence,

\[ P^{(2)} = P^{(1+1)} = P \cdot P = P^2 \]

and, by induction,

\[ P^{(n)} = P^{(n-1+1)} = P^{(n-1)} \cdot P = P^n \]

That is, the n-step transition probability matrix may be obtained by multiplying the matrix $P$ by itself $n$ times.

Example 4.2a Suppose, in Example 4.1b, that it rained on both Monday and Tuesday. What is the probability that it will rain on Thursday?

Solution: Because the transition probability matrix is

\[ P = \begin{pmatrix} 0.7 & 0 & 0.3 & 0 \\ 0.5 & 0 & 0.5 & 0 \\ 0 & 0.4 & 0 & 0.6 \\ 0 & 0.2 & 0 & 0.8 \end{pmatrix} \]

the two-step transition probability matrix is

\[ P^2 = \begin{pmatrix} 0.49 & 0.12 & 0.21 & 0.18 \\ 0.35 & 0.20 & 0.15 & 0.30 \\ 0.20 & 0.12 & 0.20 & 0.48 \\ 0.10 & 0.16 & 0.10 & 0.64 \end{pmatrix} \]

Because the chain is in state 0 on Tuesday, and because it will rain on Thursday if the chain is in either state 0 or state 1 on that day, the desired probability is

\[ P^2_{0,0} + P^2_{0,1} = 0.49 + 0.12 = 0.61 \] ✷
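The identity $P^{(n)} = P^n$ makes the computation in Example 4.2a mechanical. The following sketch (our own illustration, using NumPy) reproduces the numbers:

```python
import numpy as np

# Transition matrix of the four-state weather chain from Example 4.1b.
P = np.array([
    [0.7, 0.0, 0.3, 0.0],
    [0.5, 0.0, 0.5, 0.0],
    [0.0, 0.4, 0.0, 0.6],
    [0.0, 0.2, 0.0, 0.8],
])

P2 = np.linalg.matrix_power(P, 2)  # two-step transition probabilities
print(P2)

# Starting in state 0 (rain Monday and Tuesday), the probability of rain
# on Thursday is the chance of being in state 0 or 1 two steps later.
print(P2[0, 0] + P2[0, 1])  # 0.61
```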
4.3. Classification of States

State $j$ is said to be accessible from state $i$ if $P^n_{i,j} > 0$ for some $n \ge 0$. Note that this implies that state $j$ is accessible from state $i$ if and only if, starting in state $i$, it is possible that the process will ever be in state $j$. This is true because if $j$ is not accessible from $i$, then

\[ P\{\text{ever enter } j \mid \text{start in } i\} = P\left( \bigcup_{n=0}^{\infty} \{X_n = j\} \,\Big|\, X_0 = i \right) \le \sum_{n=0}^{\infty} P\{X_n = j \mid X_0 = i\} = 0 \]

Because

\[ P^0_{i,i} = P\{X_0 = i \mid X_0 = i\} = 1 \]

it follows that any state is accessible from itself. If state $j$ is accessible from state $i$, and state $i$ is accessible from state $j$, then we say that states $i$ and $j$ communicate. Communication between states $i$ and $j$ is expressed symbolically by $i \leftrightarrow j$.

The communication relation satisfies the following three properties:

1. $i \leftrightarrow i$
2. if $i \leftrightarrow j$ then $j \leftrightarrow i$
3. if $i \leftrightarrow j$ and $j \leftrightarrow k$ then $i \leftrightarrow k$

Properties 1 and 2 follow immediately from the definition of communication. To prove 3, suppose that $i$ communicates with $j$, and $j$ communicates with $k$. Then there exist integers $n$ and $m$ such that $P^n_{i,j} P^m_{j,k} > 0$. By the Chapman-Kolmogorov equations,

\[ P^{n+m}_{i,k} = \sum_r P^n_{i,r} P^m_{r,k} \ge P^n_{i,j} P^m_{j,k} > 0 \]

Hence state $k$ is accessible from state $i$. By the same argument we can show that state $i$ is accessible from state $k$, completing the verification of Property 3.

Two states that communicate are said to be in the same class. It is an easy consequence of Properties 1, 2, and 3 that any two classes of states are either identical or disjoint. In other words, the concept of communication divides the state space up into a number of separate classes. The Markov chain is said to be irreducible if there is only one class, that is, if all states communicate with each other.
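Accessibility can also be tested numerically: for an $s$-state chain, if $j$ is accessible from $i$ at all, it is reachable by a path of at most $s - 1$ steps, so only the powers $P^0, \ldots, P^{s-1}$ need to be examined. A minimal sketch (our own, with assumed function names):

```python
import numpy as np

def accessible(P, i, j):
    """True if P^n[i, j] > 0 for some n >= 0 (paths of length <= s-1 suffice)."""
    s = P.shape[0]
    Pn = np.eye(s)  # P^0: every state is accessible from itself
    for _ in range(s):
        if Pn[i, j] > 0:
            return True
        Pn = Pn @ P
    return False

def communicate(P, i, j):
    """States i and j communicate if each is accessible from the other."""
    return accessible(P, i, j) and accessible(P, j, i)

# The four-state weather chain of Example 4.1b is irreducible, so any
# two of its states communicate:
P = np.array([
    [0.7, 0.0, 0.3, 0.0],
    [0.5, 0.0, 0.5, 0.0],
    [0.0, 0.4, 0.0, 0.6],
    [0.0, 0.2, 0.0, 0.8],
])
print(communicate(P, 0, 3))  # True
```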
Example 4.3a Consider the Markov chain consisting of the three states 0, 1, 2 and having transition probability matrix

\[ P = \begin{pmatrix} \frac{1}{2} & \frac{1}{2} & 0 \\ \frac{1}{2} & \frac{1}{4} & \frac{1}{4} \\ 0 & \frac{1}{3} & \frac{2}{3} \end{pmatrix} \]

It is easy to verify that this Markov chain is irreducible. For example, it is possible to go from state 0 to state 2 because

\[ 0 \to 1 \to 2 \]

That is, one way of getting from state 0 to state 2 is to go from state 0 to state 1 (with probability 1/2) and then go from state 1 to state 2 (with probability 1/4). ✷
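The irreducibility claim can be checked mechanically: an $s$-state chain is irreducible exactly when every entry of $(I + P)^{s-1}$ is positive, since that matrix has a positive $(i, j)$ entry precisely when $j$ is reachable from $i$ in at most $s - 1$ steps. A brief sketch of this check (our own illustration):

```python
import numpy as np

def is_irreducible(P):
    """Check irreducibility: every entry of (I + P)^(s-1) must be positive."""
    s = P.shape[0]
    M = np.linalg.matrix_power(np.eye(s) + P, s - 1)
    return bool(np.all(M > 0))

# Three-state chain of Example 4.3a.
P = np.array([
    [1/2, 1/2, 0],
    [1/2, 1/4, 1/4],
    [0,   1/3, 2/3],
])
print(is_irreducible(P))  # True
```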
Example 4.3b Consider a Markov chain consisting of the four states 0, 1, 2, 3 and having transition probability matrix

\[ P = \begin{pmatrix} \frac{1}{2} & \frac{1}{2} & 0 & 0 \\ \frac{1}{2} & \frac{1}{2} & 0 & 0 \\ \frac{1}{4} & \frac{1}{4} & \frac{1}{4} & \frac{1}{4} \\ 0 & 0 & 0 & 1 \end{pmatrix} \]

The classes of this Markov chain are $\{0, 1\}$, $\{2\}$, and $\{3\}$. Note that while state 0 (or 1) is accessible from state 2, the reverse is not true. As state 3 is an absorbing state (i.e., $P_{3,3} = 1$), no other state is accessible from it. ✷
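Along the same lines, the class decomposition can be recovered by grouping states by mutual accessibility. The sketch below (our own; the helper name communicating_classes is assumed) reproduces the classes of Example 4.3b:

```python
import numpy as np

def communicating_classes(P):
    """Group the states of the chain with matrix P into communicating classes."""
    s = P.shape[0]
    # reach[i, j] is True iff state j is accessible from i (within s-1 steps).
    reach = np.linalg.matrix_power(np.eye(s) + P, s - 1) > 0
    classes = []
    assigned = set()
    for i in range(s):
        if i in assigned:
            continue
        # States that communicate with i: mutual accessibility.
        cls = {j for j in range(s) if reach[i, j] and reach[j, i]}
        classes.append(sorted(cls))
        assigned |= cls
    return classes

P = np.array([
    [1/2, 1/2, 0,   0],
    [1/2, 1/2, 0,   0],
    [1/4, 1/4, 1/4, 1/4],
    [0,   0,   0,   1],
])
print(communicating_classes(P))  # [[0, 1], [2], [3]]
```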
For any state $i$, let $f_i$ denote the probability that, starting in state $i$, the process will ever reenter that state. State $i$ is said to be recurrent if $f_i = 1$, and transient if $f_i < 1$.

Suppose now that the process starts in state $i$, and $i$ is recurrent. Then, with probability 1, the process will eventually reenter state $i$. However, by the definition of a Markov chain, it follows that the process will be probabilistically starting over again when it reenters state $i$ and, therefore, state $i$ will eventually be visited a second time. Continual repetition of this argument leads to the conclusion that if state $i$ is recurrent then, starting in state $i$, the process will reenter state $i$ again and again and again; in fact, infinitely often.

On the other hand, suppose that state $i$ is transient. In this case, each time the process enters state $i$ there will be a positive probability, namely, $1 - f_i$, that it will never again enter that state. Therefore, starting in state $i$, the probability that the process will be in state $i$ for exactly $n$ time periods equals $f_i^{\,n-1}(1 - f_i)$, $n \ge 1$. In other words, if state $i$ is transient then, starting in state $i$, the number of time periods that the process will be in state $i$ has a geometric distribution with finite mean $1/(1 - f_i)$.
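The geometric conclusion is easy to check by simulation. In Example 4.3b, state 2 is transient with $f_2 = 1/4$: starting from state 2, the chain can return to 2 only via the immediate self-transition, since the classes $\{0, 1\}$ and $\{3\}$ are closed. The sketch below (our own illustration) estimates the mean number of visits, which should be $1/(1 - f_2) = 4/3$:

```python
import random
from collections import Counter

rng = random.Random(1)

# Four-state chain of Example 4.3b; state 2 is transient with f_2 = 1/4.
P = [
    [1/2, 1/2, 0,   0],
    [1/2, 1/2, 0,   0],
    [1/4, 1/4, 1/4, 1/4],
    [0,   0,   0,   1],
]

def step(state):
    """One transition of the chain, sampled from row `state` of P."""
    u = rng.random()
    cumulative = 0.0
    for j, prob in enumerate(P[state]):
        cumulative += prob
        if u < cumulative:
            return j
    return len(P) - 1  # guard against floating-point rounding

def visits_to_2():
    """Number of time periods spent in state 2, starting in state 2."""
    state, visits = 2, 1
    while True:
        state = step(state)
        if state != 2:
            return visits  # once the chain leaves state 2 it never returns
        visits += 1

samples = [visits_to_2() for _ in range(100_000)]
print(sum(samples) / len(samples))      # close to 1/(1 - f_2) = 4/3
print(Counter(samples).most_common(3))  # geometric decay: (1/4)^(n-1) * (3/4)
```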