A Markov chain on a measurable state space is a discrete-time-homogeneous Markov chain with a measurable space as state space.
The definition of Markov chains has evolved during the 20th century. In 1953 the term Markov chain was used for stochastic processes with a discrete or continuous index set, living on a countable or finite state space; see Doob[1] or Chung.[2] Since the late 20th century it has become more common to consider a Markov chain as a stochastic process with discrete index set, living on a measurable state space.[3][4][5]
Denote with $(E, \Sigma)$ a measurable space and with $p$ a Markov kernel with source and target $(E, \Sigma)$. A stochastic process $(X_n)_{n \in \mathbb{N}}$ on $(\Omega, \mathcal{F}, \mathbb{P})$ is called a time-homogeneous Markov chain with Markov kernel $p$ and start distribution $\mu$ if

$$\mathbb{P}[X_0 \in A_0, X_1 \in A_1, \dots, X_n \in A_n] = \int_{A_0} \dots \int_{A_{n-1}} p(y_{n-1}, A_n) \, p(y_{n-2}, dy_{n-1}) \dots p(y_0, dy_1) \, \mu(dy_0)$$

is satisfied for any $n \in \mathbb{N}$ and $A_0, \dots, A_n \in \Sigma$.
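As a concrete illustration of the definition, consider the special case of a finite state space, where a Markov kernel reduces to a row-stochastic matrix and the chain can be sampled step by step. The matrix `P` and start distribution `mu` below are hypothetical examples, not taken from the article:

```python
import random

# Hypothetical Markov kernel on the finite state space E = {0, 1, 2},
# represented as a row-stochastic matrix: P[x][y] = p(x, {y}).
P = [
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.2, 0.2, 0.6],
]
mu = [1.0, 0.0, 0.0]  # start distribution (here a Dirac measure at state 0)

def sample_step(weights):
    """Draw an index according to the probability weights."""
    r, acc = random.random(), 0.0
    for i, w in enumerate(weights):
        acc += w
        if r < acc:
            return i
    return len(weights) - 1

def sample_chain(P, mu, n):
    """Sample X_0, ..., X_n of the chain with kernel P and start distribution mu."""
    x = sample_step(mu)
    path = [x]
    for _ in range(n):
        x = sample_step(P[x])  # next state distributed as p(x, .)
        path.append(x)
    return path

random.seed(0)
print(sample_chain(P, mu, 10))
```

Each transition depends only on the current state, which is exactly the Markov property encoded by the kernel $p$.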
Remark on Markov kernel integration: For any measure $\mu \colon \Sigma \to [0, \infty]$ we denote the Lebesgue integral of a $\mu$-integrable function $f \colon E \to \mathbb{R} \cup \{\infty, -\infty\}$ as $\int_E f(x) \, \mu(dx)$. For the measure $\nu_x \colon \Sigma \to [0, \infty]$ defined by $\nu_x(A) := p(x, A)$ we use the notation

$$\int_E f(y) \, p(x, dy) := \int_E f(y) \, \nu_x(dy).$$
Starting in a single point: If $\mu$ is a Dirac measure in $x$, we denote for a Markov kernel $p$ with start distribution $\mu$ the associated Markov chain as $(X_n)_{n \in \mathbb{N}}$ on $(\Omega, \mathcal{F}, \mathbb{P}_x)$, and the expectation value as

$$\mathbb{E}_x[X] = \int_\Omega X(\omega) \, \mathbb{P}_x(d\omega)$$

for a $\mathbb{P}_x$-integrable function $X$. By definition, we then have $\mathbb{P}_x[X_0 = x] = 1$.

We have for any measurable function $f \colon E \to [0, \infty]$ the law of total expectation:

$$\int_E f(y) \, p(x, dy) = \mathbb{E}_x[f(X_1)].$$
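On a finite state space the identity above can be checked numerically: the kernel integral becomes a finite sum, and the expectation $\mathbb{E}_x[f(X_1)]$ can be estimated by Monte Carlo. The kernel `P` and function `f` are hypothetical examples:

```python
import random

# Hypothetical kernel on E = {0, 1, 2} and an arbitrary nonnegative f on E.
P = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.2, 0.2, 0.6]]
f = [1.0, 4.0, 9.0]
x = 0  # start in the single point x (Dirac start distribution)

# Kernel integral of f against p(x, .): here a finite sum. Exact value is 3.5.
exact = sum(f[y] * P[x][y] for y in range(3))

# Monte Carlo estimate of E_x[f(X_1)]: sample X_1 from p(x, .) many times.
random.seed(1)
trials = 200_000
total = 0.0
for _ in range(trials):
    r, acc = random.random(), 0.0
    for y, w in enumerate(P[x]):
        acc += w
        if r < acc:
            total += f[y]  # accumulate f(X_1) for this sampled step
            break
estimate = total / trials

print(exact, estimate)  # the two values agree up to sampling error
```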
Family of Markov kernels: For a Markov kernel $p$ with start distribution $\mu$ one can introduce a family of Markov kernels $(p_n)_{n \in \mathbb{N}}$ by

$$p_{n+1}(x, A) := \int_E p_n(y, A) \, p(x, dy)$$

for $n \in \mathbb{N}$, $n \geq 1$, and $p_1 := p$. For the associated Markov chain $(X_n)_{n \in \mathbb{N}}$ according to $p$ and $\mu$ one obtains

$$\mathbb{P}[X_0 \in A, X_n \in B] = \int_A p_n(x, B) \, \mu(dx).$$
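For a finite state space the recursion defining $(p_n)_{n \in \mathbb{N}}$ is just repeated matrix multiplication: $p_n$ is the $n$-th power of the transition matrix. A minimal sketch with a hypothetical kernel `P`:

```python
# Hypothetical kernel on E = {0, 1, 2} as a row-stochastic matrix.
P = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.2, 0.2, 0.6]]

def compose(p, q):
    """Kernel composition: (p q)(x, {a}) = sum_y p(x, {y}) * q(y, {a})."""
    n = len(p)
    return [[sum(p[x][y] * q[y][a] for y in range(n)) for a in range(n)]
            for x in range(n)]

def n_step(P, n):
    """Return p_n via the recursion p_{n+1} = P composed with p_n, p_1 = P."""
    pn = P
    for _ in range(n - 1):
        pn = compose(P, pn)
    return pn

P2 = n_step(P, 2)
# each row of p_n is still a probability distribution
print([sum(row) for row in P2])
```

The entry `P2[x][a]` is the probability of being in state `a` two steps after starting in `x`, matching $p_2(x, \{a\})$.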
Stationary measure: A probability measure $\mu$ is called a stationary measure of a Markov kernel $p$ if

$$\int_A \mu(dx) = \int_E p(x, A) \, \mu(dx)$$

holds for any $A \in \Sigma$. If $(X_n)_{n \in \mathbb{N}}$ on $(\Omega, \mathcal{F}, \mathbb{P})$ denotes the Markov chain according to a Markov kernel $p$ with stationary measure $\mu$, and the distribution of $X_0$ is $\mu$, then all $X_n$ have the same probability distribution, namely

$$\mathbb{P}[X_n \in A] = \mu(A)$$

for any $A \in \Sigma$.
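On a finite state space, the stationarity condition reduces to the row-vector equation $\mu P = \mu$ for the transition matrix. A minimal check with a hypothetical two-state kernel `P` and candidate measure `mu`:

```python
# Hypothetical two-state kernel and a candidate stationary measure.
P = [[0.5, 0.5],
     [0.25, 0.75]]
mu = [1 / 3, 2 / 3]

# Stationarity: mu(A) = sum_x p(x, A) mu({x}), i.e. mu P = mu as a row vector.
mu_P = [sum(mu[x] * P[x][a] for x in range(2)) for a in range(2)]
print(mu_P)  # equals mu, so mu is stationary for P
```

Starting the chain with distribution `mu` therefore keeps every $X_n$ distributed according to `mu`.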
Reversibility: A Markov kernel $p$ is called reversible according to a probability measure $\mu$ if

$$\int_A p(x, B) \, \mu(dx) = \int_B p(x, A) \, \mu(dx)$$

holds for any $A, B \in \Sigma$. Setting $A = E$ shows that if $p$ is reversible according to $\mu$, then $\mu$ must be a stationary measure of $p$.
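On a finite state space, reversibility with respect to $\mu$ reduces to the detailed balance equations $\mu(\{x\}) \, p(x, \{y\}) = \mu(\{y\}) \, p(y, \{x\})$ for all states $x, y$. A sketch with the same hypothetical two-state kernel used above:

```python
# Hypothetical two-state kernel and a probability measure mu.
P = [[0.5, 0.5],
     [0.25, 0.75]]
mu = [1 / 3, 2 / 3]

# Detailed balance: mu({x}) p(x, {y}) == mu({y}) p(y, {x}) for all x, y.
reversible = all(
    abs(mu[x] * P[x][y] - mu[y] * P[y][x]) < 1e-12
    for x in range(2) for y in range(2)
)
print(reversible)
```

Summing the detailed balance equations over $x$ recovers $\mu P = \mu$, which is the finite-state form of the remark that reversibility implies stationarity.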