Markov chain central limit theorem

In the mathematical theory of random processes, the Markov chain central limit theorem has a conclusion somewhat similar in form to that of the classic central limit theorem (CLT) of probability theory, but the quantity playing the role of the variance in the classic CLT has a more complicated definition. See also the general form of Bienaymé's identity.

Statement

Suppose that:

  • the sequence $X_1, X_2, X_3, \ldots$ of random elements of some set is a Markov chain that has a stationary probability distribution; and
  • the initial distribution of the process, i.e. the distribution of $X_1$, is the stationary distribution, so that $X_1, X_2, X_3, \ldots$ are identically distributed. In the classic central limit theorem these random variables would be assumed to be independent, but here we have only the weaker assumption that the process has the Markov property; and
  • $g$ is some (measurable) real-valued function for which $\operatorname{var}(g(X_1)) < +\infty$.

Now let[1][2][3]

$$
\begin{aligned}
\mu &= \operatorname{E}(g(X_1)), \\
\widehat{\mu}_n &= \frac{1}{n}\sum_{k=1}^{n} g(X_k), \\
\sigma^2 &:= \lim_{n\to\infty} \operatorname{var}\left(\sqrt{n}\,\widehat{\mu}_n\right) = \lim_{n\to\infty} n \operatorname{var}(\widehat{\mu}_n) = \operatorname{var}(g(X_1)) + 2\sum_{k=1}^{\infty} \operatorname{cov}(g(X_1), g(X_{1+k})).
\end{aligned}
$$

Then as $n \to \infty$, we have[4]

$$
\sqrt{n}\,(\widehat{\mu}_n - \mu) \ \xrightarrow{\mathcal{D}} \ \operatorname{Normal}(0, \sigma^2),
$$

where the decorated arrow indicates convergence in distribution.
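The series defining $\sigma^2$ can be checked numerically on a chain whose autocovariances are known in closed form. The sketch below uses an illustrative two-state stationary chain (all parameter values are choices made for this example, not part of the theorem) and compares $\sigma^2$ with a Monte Carlo estimate of $n \operatorname{var}(\widehat{\mu}_n)$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-state stationary chain on {0, 1} with g the identity.
# Transition probabilities: P(0 -> 1) = a, P(1 -> 0) = b.
a, b = 0.3, 0.2
pi1 = a / (a + b)               # stationary P(X = 1)
var_g = pi1 * (1 - pi1)         # var(g(X_1))
lam = 1 - a - b                 # second eigenvalue: cov(g(X_1), g(X_{1+k})) = var_g * lam**k

# sigma^2 from the theorem's series; the geometric sum has a closed form
sigma2 = var_g + 2 * var_g * lam / (1 - lam)

# Monte Carlo check: estimate n * var(mu_hat_n) over many stationary replications
n, reps = 1000, 1000
means = np.empty(reps)
for r in range(reps):
    x = 1 if rng.random() < pi1 else 0      # X_1 drawn from the stationary law
    total = 0
    for _ in range(n):
        total += x
        if x == 0:
            x = 1 if rng.random() < a else 0
        else:
            x = 0 if rng.random() < b else 1
    means[r] = total / n

empirical = n * means.var()
print(sigma2, empirical)        # the two values should be close
```

Note that the naive i.i.d. variance here would be $\operatorname{var}(g(X_1)) = 0.24$, while the correlation terms inflate $\sigma^2$ to $0.72$.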

Monte Carlo setting

The Markov chain central limit theorem can be guaranteed for functionals of general state space Markov chains under certain conditions, which makes it particularly useful in Monte Carlo settings. An example of its application in a Markov chain Monte Carlo (MCMC) setting is the following:

Consider a simple hard spheres model on a grid. Suppose $X = \{1, \ldots, n_1\} \times \{1, \ldots, n_2\} \subseteq \mathbb{Z}^2$. A proper configuration on $X$ consists of coloring each point either black or white in such a way that no two adjacent points are white. Let $\chi$ denote the set of all proper configurations on $X$, let $N_\chi(n_1, n_2)$ be the total number of proper configurations, and let $\pi$ be the uniform distribution on $\chi$, so that each proper configuration is equally likely. Suppose our goal is to calculate the typical number of white points in a proper configuration; that is, if $W(x)$ is the number of white points in $x \in \chi$, then we want the value of

$$
E_\pi W = \sum_{x \in \chi} \frac{W(x)}{N_\chi(n_1, n_2)}.
$$
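For very small grids this expectation can be evaluated exactly by enumerating $\chi$. A brute-force sketch (the $3 \times 3$ grid size is chosen purely for illustration):

```python
from itertools import product

# Exact E_pi W for a tiny n1 x n2 grid by brute-force enumeration.
# A configuration is "proper" if no two edge-adjacent points are both white.
n1, n2 = 3, 3

def is_proper(cfg):
    # cfg[i][j] == 1 means point (i, j) is white
    for i in range(n1):
        for j in range(n2):
            if cfg[i][j]:
                if i + 1 < n1 and cfg[i + 1][j]:
                    return False
                if j + 1 < n2 and cfg[i][j + 1]:
                    return False
    return True

total_white = 0
n_proper = 0
for bits in product((0, 1), repeat=n1 * n2):
    cfg = [bits[i * n2:(i + 1) * n2] for i in range(n1)]
    if is_proper(cfg):
        n_proper += 1
        total_white += sum(bits)

exact_mean_white = total_white / n_proper   # N_chi(3, 3) = 63 proper configurations
print(n_proper, exact_mean_white)
```

This kind of exhaustive check is only feasible for tiny grids, since the loop visits all $2^{n_1 n_2}$ colorings.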

If $n_1$ and $n_2$ are even moderately large, then we will have to resort to an approximation to $E_\pi W$. Consider the following Markov chain on $\chi$. Fix $p \in (0,1)$ and set $X_1 = x_1$, where $x_1 \in \chi$ is an arbitrary proper configuration. Randomly choose a point $(x, y) \in X$ and independently draw $U \sim \operatorname{Uniform}(0,1)$. If $U \leq p$ and all of the adjacent points are black, then color $(x, y)$ white, leaving all other points alone. Otherwise, color $(x, y)$ black and leave all other points alone. Call the resulting configuration $X_2$. Continuing in this fashion yields a Harris ergodic Markov chain $\{X_1, X_2, X_3, \ldots\}$ having $\pi$ as its invariant distribution. It is now a simple matter to estimate $E_\pi W$ with $\overline{w}_n = \sum_{i=1}^{n} W(X_i)/n$. Also, since $\chi$ is finite (albeit potentially large), it is well known that the chain will converge exponentially fast to $\pi$, which implies that a CLT holds for $\overline{w}_n$.
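A minimal sketch of this single-site update chain (again on an illustrative $3 \times 3$ grid, with $p = 1/2$, the choice for which each proper configuration is equally likely under the invariant distribution):

```python
import random

random.seed(1)

# Single-site update chain from the text, on a small 3x3 grid with p = 1/2
# (p = 1/2 targets the uniform distribution on proper configurations).
n1, n2, p = 3, 3, 0.5
white = [[0] * n2 for _ in range(n1)]   # all-black start is a proper configuration

def neighbors_black(i, j):
    # True if every edge-adjacent point of (i, j) is black
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        ni, nj = i + di, j + dj
        if 0 <= ni < n1 and 0 <= nj < n2 and white[ni][nj]:
            return False
    return True

n_steps, total_white = 200_000, 0
for _ in range(n_steps):
    i, j = random.randrange(n1), random.randrange(n2)
    u = random.random()
    # Color the chosen point white only if u <= p and all neighbors are black
    white[i][j] = 1 if (u <= p and neighbors_black(i, j)) else 0
    total_white += sum(map(sum, white))   # accumulate W(X_i)

w_bar = total_white / n_steps             # estimate of E_pi W
print(w_bar)
```

Because each update leaves the configuration proper, the chain never leaves $\chi$, and $\overline{w}_n$ converges to $E_\pi W$ as the number of steps grows.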

Implications

Not taking into account the additional terms in the variance that stem from correlations (e.g. serial correlation in Markov chain Monte Carlo simulations) can result in the problem of pseudoreplication when computing, for example, confidence intervals for the sample mean.
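This effect can be illustrated with a sketch in which an AR(1)-style process stands in for correlated MCMC output (the autoregression coefficient and batch length are arbitrary choices for the example). The naive i.i.d. standard error understates the Monte Carlo error, while a batch-means estimate accounts for the serial correlation:

```python
import numpy as np

rng = np.random.default_rng(0)

# AR(1)-style chain: X_t = phi * X_{t-1} + noise, with stationary N(0, 1) marginals.
phi, n = 0.9, 100_000
x = np.empty(n)
x[0] = rng.normal()
innovations = rng.normal(scale=np.sqrt(1 - phi**2), size=n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + innovations[t]

# Naive i.i.d. standard error ignores serial correlation (pseudoreplication)
se_naive = x.std(ddof=1) / np.sqrt(n)

# Batch-means standard error: the variance of means over long batches
# approximates sigma^2 / batch_length, capturing the correlation terms
b = 1000                                  # batch length
batch_means = x[: n // b * b].reshape(-1, b).mean(axis=1)
se_batch = batch_means.std(ddof=1) / np.sqrt(len(batch_means))

print(se_naive, se_batch)   # se_batch is roughly sqrt((1+phi)/(1-phi)) times larger
```

A confidence interval built from `se_naive` would be far too narrow; the batch-means interval reflects the chain's actual asymptotic variance.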

References

  1. ^ Jones, Galin L. "On the Markov Chain Central Limit Theorem." https://arxiv.org/pdf/math/0409112.pdf
  2. ^ Geyer, Charles J. Markov Chain Monte Carlo Lecture Notes. https://www.stat.umn.edu/geyer/f05/8931/n1998.pdf, p. 9.
  3. ^ Note that the equation for $\sigma^2$ starts from Bienaymé's identity and then assumes that
$$
\lim_{n\to\infty} \sum_{k=1}^{n} \frac{n-k}{n} \operatorname{cov}(g(X_1), g(X_{1+k})) = \sum_{k=1}^{\infty} \operatorname{cov}(g(X_1), g(X_{1+k})),
$$
which follows by a Cesàro summation argument; see Geyer, Markov Chain Monte Carlo Lecture Notes, https://www.stat.umn.edu/geyer/f05/8931/n1998.pdf, p. 9.
  4. ^ Geyer, Charles J. (2011). "Introduction to Markov Chain Monte Carlo." In Handbook of Markov Chain Monte Carlo, edited by S. P. Brooks, A. E. Gelman, G. L. Jones, and X. L. Meng. Chapman & Hall/CRC, Boca Raton, FL, Section 1.8. http://www.mcmchandbook.net/HandbookChapter1.pdf

Sources

  • Gordin, M. I. and Lifšic, B. A. (1978). "Central limit theorem for stationary Markov processes." Soviet Mathematics, Doklady, 19, 392–394. (English translation of Russian original).
  • Geyer, Charles J. (2011). "Introduction to MCMC." In Handbook of Markov Chain Monte Carlo, edited by S. P. Brooks, A. E. Gelman, G. L. Jones, and X. L. Meng. Chapman & Hall/CRC, Boca Raton, pp. 3–48.