Introduction

A stochastic process $X = \{X(t), t \in T\}$ is a collection of random variables. That is, for each $t$ in the index set $T$, $X(t)$ is a random variable. We often interpret $t$ as time and call $X(t)$ the state of the process at time $t$. If the index set $T$ is a countable set, say $T = \{0, 1, 2, \ldots\}$, we say that $X$ is a discrete time stochastic process, whereas if $T$ consists of a continuum of possible values, we say that $X$ is a continuous time stochastic process.
In this chapter we consider a discrete time stochastic process $X_n$, $n = 0, 1, 2, \ldots$ that takes on a finite or countable number of possible values. Unless otherwise mentioned, this set of possible values will be denoted by the set of nonnegative integers $0, 1, 2, \ldots$. If $X_n = i$, then the process is said to be in state $i$ at time $n$. We suppose that whenever the process is in state $i$, there is a fixed probability $P_{i,j}$ that it will next be in state $j$. That is, we suppose that
\[
P\{X_{n+1} = j \mid X_n = i, X_{n-1} = i_{n-1}, \ldots, X_0 = i_0\} = P_{i,j} \tag{4.1}
\]
for all states $i_0, i_1, \ldots, i_{n-1}, i, j$ and all $n \geq 0$. Such a stochastic process is known as a Markov chain. Equation (4.1) may be interpreted as stating that, for a Markov chain, the conditional distribution of any future state $X_{n+1}$, given the past states $X_0, X_1, \ldots, X_{n-1}$ and the present state $X_n$, is independent of the past states and depends only on the present state. That is, given the present state, the past and future states of a Markov chain are independent.
The value $P_{i,j}$ represents the probability that the process will, when in state $i$, next make a transition into state $j$. As probabilities are nonnegative and the process must make a transition into some state, we have
\[
P_{i,j} \geq 0, \qquad \sum_j P_{i,j} = 1
\]
Let $P$ denote the matrix of one-step transition probabilities $P_{i,j}$:
\[
P =
\begin{pmatrix}
P_{0,0} & P_{0,1} & \cdots & P_{0,j} & \cdots \\
P_{1,0} & P_{1,1} & \cdots & P_{1,j} & \cdots \\
\vdots & \vdots & & \vdots & \\
P_{i,0} & P_{i,1} & \cdots & P_{i,j} & \cdots \\
\vdots & \vdots & & \vdots &
\end{pmatrix}
\]
Example 4.1a Consider a communications system that transmits the digits 0 and 1. Each digit transmitted must pass through several stages, at each of which there is a probability $p$ that the digit entered will be unchanged when it leaves. Letting $X_n$ denote the digit entering the $n$th stage, then $\{X_n, n \geq 0\}$ is a two-state Markov chain having a transition probability matrix
\[
P =
\begin{pmatrix}
p & 1-p \\
1-p & p
\end{pmatrix}
\]
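A transition probability matrix is all that is needed to simulate such a chain, since each step depends only on the current state. Below is a minimal Python sketch of this, assuming NumPy; the function name simulate_chain and the choice p = 0.9 are illustrative, not from the text.

```python
import numpy as np

def simulate_chain(P, start, n_steps, rng):
    """Sample a trajectory of a Markov chain with one-step transition matrix P."""
    # Each row of P must be a probability distribution over the next state.
    assert np.allclose(P.sum(axis=1), 1.0)
    states = [start]
    for _ in range(n_steps):
        # The next state depends only on the current one (the Markov property).
        states.append(rng.choice(len(P), p=P[states[-1]]))
    return states

p = 0.9  # illustrative per-stage probability that a digit is unchanged
P = np.array([[p, 1 - p],
              [1 - p, p]])
print(simulate_chain(P, start=0, n_steps=10, rng=np.random.default_rng(0)))
```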
Example 4.1b Suppose that whether it rains today depends on previous weather conditions only through the last two days. Specifically, suppose that if it has rained for the past two days, then it will rain tomorrow with probability 0.7; if it rained today but not yesterday, then it will rain tomorrow with probability 0.5; if it rained yesterday but not today, then it will rain tomorrow with probability 0.4; and if it has not rained in the past two days, then it will rain tomorrow with probability 0.2.

If we let the state at time $n$ depend only on whether it is raining on day $n$, then the preceding would not be a Markov chain (why not?). However, we can transform it into a Markov chain by letting the state on any day be determined by the weather conditions during both that day and the preceding one. For instance, we can say that the process is in

state 0  if it rained both today and yesterday
state 1  if it rained today but not yesterday
state 2  if it rained yesterday but not today
state 3  if it rained neither today nor yesterday

The preceding would then represent a four-state Markov chain whose transition probability matrix is easily shown to be as follows:
\[
P =
\begin{pmatrix}
0.7 & 0 & 0.3 & 0 \\
0.5 & 0 & 0.5 & 0 \\
0 & 0.4 & 0 & 0.6 \\
0 & 0.2 & 0 & 0.8
\end{pmatrix}
\]
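The augmentation used here is mechanical: the new state records (rain today, rain yesterday), and tomorrow's state is (rain tomorrow, rain today). As a sketch of that bookkeeping in Python (the encoding and the helper name build_rain_matrix are our own, not from the text), the matrix above can be rebuilt from the four conditional probabilities:

```python
import numpy as np

# States encode (rained today, rained yesterday):
# 0 = (True, True), 1 = (True, False), 2 = (False, True), 3 = (False, False)
RAIN_PROB = {0: 0.7, 1: 0.5, 2: 0.4, 3: 0.2}  # P(rain tomorrow | state)

def build_rain_matrix():
    encode = {(True, True): 0, (True, False): 1,
              (False, True): 2, (False, False): 3}
    P = np.zeros((4, 4))
    for (today, yesterday), s in encode.items():
        q = RAIN_PROB[s]
        # Tomorrow's state is (rain tomorrow, rained today).
        P[s, encode[(True, today)]] = q       # it rains tomorrow
        P[s, encode[(False, today)]] = 1 - q  # it does not rain tomorrow
    return P

print(build_rain_matrix())  # reproduces the matrix of Example 4.1b
```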
4.2. Chapman-Kolmogorov Equations
The n-step transition probability $P^n_{i,j}$ of the Markov chain is defined as the conditional probability, given that the chain is currently in state $i$, that it will be in state $j$ after $n$ additional transitions. That is,
\[
P^n_{i,j} = P\{X_{n+m} = j \mid X_m = i\}, \qquad n \geq 0, \; i, j \geq 0
\]
Of course, $P^1_{i,j} = P_{i,j}$. The Chapman-Kolmogorov equations provide a method of computing these n-step probabilities. These equations are
\[
P^{n+m}_{i,j} = \sum_{k=0}^{\infty} P^n_{i,k} P^m_{k,j} \tag{4.2}
\]
and are derived by noting that $P^n_{i,k} P^m_{k,j}$ is the probability that the chain, currently in state $i$, will go to state $j$ after $n + m$ transitions through a path that takes it into state $k$ at the $n$th transition. Hence, summing these probabilities over all intermediate states $k$ yields the probability that the process will be in state $j$ after $n + m$ transitions. Formally, we have
\begin{align*}
P^{n+m}_{i,j} &= P\{X_{n+m} = j \mid X_0 = i\} \\
&= \sum_{k=0}^{\infty} P\{X_{n+m} = j, X_n = k \mid X_0 = i\} \\
&= \sum_{k=0}^{\infty} P\{X_{n+m} = j \mid X_n = k, X_0 = i\} P\{X_n = k \mid X_0 = i\} \\
&= \sum_{k=0}^{\infty} P^m_{k,j} P^n_{i,k}
\end{align*}
If we let $P^{(n)}$ denote the matrix of n-step transition probabilities $P^n_{i,j}$, then the Chapman-Kolmogorov equations assert that
\[
P^{(n+m)} = P^{(n)} \cdot P^{(m)}
\]
where the dot represents matrix multiplication. Hence,
\[
P^{(2)} = P^{(1+1)} = P \cdot P = P^2
\]
and, by induction,
\[
P^{(n)} = P^{(n-1+1)} = P^{(n-1)} \cdot P = P^n
\]
That is, the n-step transition probability matrix may be obtained by multiplying the matrix $P$ by itself $n$ times.
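In code, this identity reduces n-step probabilities to a matrix power. A minimal sketch, assuming NumPy (np.linalg.matrix_power performs the repeated multiplication; the helper name and the choice p = 0.9 are ours), which also checks the Chapman-Kolmogorov identity for one choice of $n$ and $m$:

```python
import numpy as np

def n_step_matrix(P, n):
    """The matrix of n-step transition probabilities, P^(n) = P^n."""
    return np.linalg.matrix_power(P, n)

# Chapman-Kolmogorov in matrix form: P^(n+m) = P^(n) . P^(m)
P = np.array([[0.9, 0.1],
              [0.1, 0.9]])  # the channel of Example 4.1a with p = 0.9
assert np.allclose(n_step_matrix(P, 5),
                   n_step_matrix(P, 2) @ n_step_matrix(P, 3))
```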
Example 4.2a Suppose, in Example 4.1b, that it rained on both Monday and Tuesday. What is the probability that it will rain on Thursday?

Solution: Because the transition probability matrix is
\[
P =
\begin{pmatrix}
0.7 & 0 & 0.3 & 0 \\
0.5 & 0 & 0.5 & 0 \\
0 & 0.4 & 0 & 0.6 \\
0 & 0.2 & 0 & 0.8
\end{pmatrix}
\]
the two-step transition probability matrix is
\[
P^2 =
\begin{pmatrix}
0.49 & 0.12 & 0.21 & 0.18 \\
0.35 & 0.20 & 0.15 & 0.30 \\
0.20 & 0.12 & 0.20 & 0.48 \\
0.10 & 0.16 & 0.10 & 0.64
\end{pmatrix}
\]
Because the chain is in state 0 on Tuesday, and because it will rain on Thursday if the chain is in either state 0 or state 1 on that day, the desired probability is
\[
P^2_{0,0} + P^2_{0,1} = 0.49 + 0.12 = 0.61 \qquad ✷
\]
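The same arithmetic can be checked numerically; a short sketch, assuming NumPy:

```python
import numpy as np

P = np.array([[0.7, 0.0, 0.3, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.4, 0.0, 0.6],
              [0.0, 0.2, 0.0, 0.8]])
P2 = np.linalg.matrix_power(P, 2)
# Rain on Thursday given rain on Monday and Tuesday (state 0, two steps later):
print(P2[0, 0] + P2[0, 1])  # 0.61
```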
4.3. Classification of States
State $j$ is said to be accessible from state $i$ if $P^n_{i,j} > 0$ for some $n \geq 0$. Note that this implies that state $j$ is accessible from state $i$ if and only if, starting in state $i$, it is possible that the process will ever be in state $j$. This is true because if $j$ is not accessible from $i$, then
\[
P\{\text{ever enter } j \mid \text{start in } i\} = P\left( \bigcup_{n=0}^{\infty} \{X_n = j\} \,\Big|\, X_0 = i \right) \leq \sum_{n=0}^{\infty} P\{X_n = j \mid X_0 = i\} = 0
\]
Because
\[
P^0_{i,i} = P\{X_0 = i \mid X_0 = i\} = 1
\]
it follows that any state is accessible from itself. If state $j$ is accessible from state $i$, and state $i$ is accessible from state $j$, then we say that states $i$ and $j$ communicate. Communication between states $i$ and $j$ is expressed symbolically by $i \leftrightarrow j$.
The communication relation satisfies the following three properties:

1. $i \leftrightarrow i$
2. if $i \leftrightarrow j$ then $j \leftrightarrow i$
3. if $i \leftrightarrow j$ and $j \leftrightarrow k$ then $i \leftrightarrow k$
Properties 1 and 2 follow immediately from the definition of communication. To prove 3, suppose that $i$ communicates with $j$, and $j$ communicates with $k$. Then, there exist integers $n$ and $m$ such that $P^n_{i,j} P^m_{j,k} > 0$. By the Chapman-Kolmogorov equations,
\[
P^{n+m}_{i,k} = \sum_r P^n_{i,r} P^m_{r,k} \geq P^n_{i,j} P^m_{j,k} > 0
\]
Hence state $k$ is accessible from state $i$. By the same argument we can show that state $i$ is accessible from state $k$, completing the verification of Property 3.
Two states that communicate are said to be in the same class. It is an easy
consequence of Properties 1, 2, and 3 that any two classes of states are either
identical or disjoint. In other words, the concept of communication divides the
state space up into a number of separate classes. The Markov chain is said to be
irreducible if there is only one class, that is, if all states communicate with each
other.
Example 4.3a Consider the Markov chain consisting of the three states 0, 1, 2, and having transition probability matrix
\[
P =
\begin{pmatrix}
\frac{1}{2} & \frac{1}{2} & 0 \\
\frac{1}{2} & \frac{1}{4} & \frac{1}{4} \\
0 & \frac{1}{3} & \frac{2}{3}
\end{pmatrix}
\]
It is easy to verify that this Markov chain is irreducible. For example, it is possible to go from state 0 to state 2 because
\[
0 \to 1 \to 2
\]
That is, one way of getting from state 0 to state 2 is to go from state 0 to state 1 (with probability 1/2) and then go from state 1 to state 2 (with probability 1/4). ✷
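Irreducibility can also be verified mechanically: in an S-state chain, state $j$ is accessible from state $i$ exactly when entry $(i, j)$ of $(I + P)^{S-1}$ is positive, because any state that can be reached at all can be reached by a path of at most $S - 1$ steps. A sketch, assuming NumPy (the helper name accessible is ours, not the text's):

```python
import numpy as np

def accessible(P):
    """A[i, j] is True iff state j is accessible from state i."""
    S = len(P)
    # (I + P)^(S-1) has a positive (i, j) entry iff some path of
    # length <= S-1 leads from i to j.
    return np.linalg.matrix_power(np.eye(S) + P, S - 1) > 0

P = np.array([[1/2, 1/2, 0],
              [1/2, 1/4, 1/4],
              [0, 1/3, 2/3]])
print(accessible(P).all())  # True: all states communicate, so the chain is irreducible
```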
Example 4.3b Consider a Markov chain consisting of the four states 0, 1, 2, 3 and having transition probability matrix
\[
P =
\begin{pmatrix}
\frac{1}{2} & \frac{1}{2} & 0 & 0 \\
\frac{1}{2} & \frac{1}{2} & 0 & 0 \\
\frac{1}{4} & \frac{1}{4} & \frac{1}{4} & \frac{1}{4} \\
0 & 0 & 0 & 1
\end{pmatrix}
\]
The classes of this Markov chain are $\{0, 1\}$, $\{2\}$, and $\{3\}$. Note that while state 0 (or 1) is accessible from state 2, the reverse is not true. As state 3 is an absorbing state (i.e., $P_{3,3} = 1$), no other state is accessible from it. ✷
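The accessibility test above also yields the communicating classes, since $i$ and $j$ communicate exactly when each is accessible from the other. A sketch (the helper names are ours; accessible repeats the earlier definition so the block stands alone):

```python
import numpy as np

def accessible(P):
    """A[i, j] is True iff state j is accessible from state i."""
    S = len(P)
    return np.linalg.matrix_power(np.eye(S) + P, S - 1) > 0

def communicating_classes(P):
    """Partition the states into communicating classes."""
    A = accessible(P)
    C = A & A.T  # i <-> j iff each is accessible from the other
    seen, classes = set(), []
    for i in range(len(P)):
        if i not in seen:
            cls = {j for j in range(len(P)) if C[i, j]}
            classes.append(cls)
            seen |= cls
    return classes

P = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.5, 0.5, 0.0, 0.0],
              [0.25, 0.25, 0.25, 0.25],
              [0.0, 0.0, 0.0, 1.0]])
print(communicating_classes(P))  # [{0, 1}, {2}, {3}], matching Example 4.3b
```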
For any state $i$, let $f_i$ denote the probability that, starting in state $i$, the process will ever reenter that state. State $i$ is said to be recurrent if $f_i = 1$, and transient if $f_i < 1$. Suppose now that the process starts in state $i$, and $i$ is recurrent. Then, with probability 1, the process will eventually reenter state $i$. However, by the definition of a Markov chain, it follows that the process will be probabilistically starting over again when it reenters state $i$ and, therefore, state $i$ will eventually be visited a second time. Continual repetition of this argument leads to the conclusion that if state $i$ is recurrent then, starting in state $i$, the process will reenter state $i$ again and again and again; in fact, infinitely often. On the other hand, suppose that state $i$ is transient. In this case, each time the process enters state $i$ there will be a positive probability, namely, $1 - f_i$, that it will never again enter that state. Therefore, starting in state $i$, the probability that the process will be in state $i$ for exactly $n$ time periods equals $f_i^{\,n-1}(1 - f_i)$, $n \geq 1$. In other words, if state $i$ is transient then, starting in state $i$, the number of time periods that the process will be in state $i$ has a geometric distribution with finite mean $1/(1 - f_i)$.
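For a finite chain, $f_i$ can be estimated by simulation: start in state $i$, take one transition, and record whether the chain ever returns. The sketch below, assuming NumPy, truncates "ever" at a finite horizon max_steps (an illustrative cutoff, so this only approximates $f_i$); applied to state 2 of Example 4.3b it should print roughly 0.25, confirming that the state is transient:

```python
import numpy as np

def estimate_return_prob(P, i, trials=5_000, max_steps=200, seed=0):
    """Monte Carlo estimate of f_i, the probability of ever returning to state i."""
    rng = np.random.default_rng(seed)
    returns = 0
    for _ in range(trials):
        state = rng.choice(len(P), p=P[i])  # first transition out of i
        for _ in range(max_steps):
            if state == i:
                returns += 1
                break
            state = rng.choice(len(P), p=P[state])
    return returns / trials

P = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.5, 0.5, 0.0, 0.0],
              [0.25, 0.25, 0.25, 0.25],
              [0.0, 0.0, 0.0, 1.0]])
print(estimate_return_prob(P, 2))  # about 0.25: state 2 is transient (f_2 < 1)
```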