$$x^{(0)} \rightarrow x^{(1)} \rightarrow x^{(2)} \rightarrow \dots \rightarrow x^{(t)} \rightarrow \dots$$
$$\begin{array}{ll} \hline 1 & \text{state space: } \color{blue}{x}\\\\ 2 & \text{transition operator: } \color{blue}{p(x^{(t+1)} | x^{(t)})}\\\\ 3 & \text{initial condition distribution: } \color{blue}{\pi^{(0)}}\\\\ \hline \end{array}$$
---
## Markov Chains

#### Weather of Berkeley, CA:

$$ \begin{array}{|c|c|c|c|} \hline \text{today} \rightarrow \text{next week} & \text{sunny} & \text{foggy} & \text{rainy}\\\\ \hline \text{sunny} & 0.8 & 0.15 & 0.05\\\\ \hline \text{foggy} & 0.4 & 0.5 & 0.1\\\\ \hline \text{rainy} & 0.1 & 0.3 & 0.6\\\\ \hline \end{array} $$

.center[
### weather next month, next year?
]

---
## Markov Chains
1. state space: {sunny, foggy, rainy}
2. transition operator: $ P = \begin{bmatrix} 0.8 & 0.15 & 0.05\\ 0.4 & 0.5 & 0.1\\ 0.1 & 0.3 & 0.6 \end{bmatrix} $
3. initial condition distribution: $\pi^{(0)} = (0, 0, 1)$ over {sunny, foggy, rainy}, i.e. it is rainy today
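
To answer the question on the previous slide, apply the transition operator repeatedly: the distribution after $n$ steps is $\pi^{(n)} = \pi^{(0)} P^n$. Below is a minimal NumPy sketch (not from the slides; the variable names are illustrative) using the matrix $P$ and the initial condition above:

```python
# Evolve the Berkeley weather chain: pi^(t+1) = pi^(t) P
import numpy as np

states = ["sunny", "foggy", "rainy"]
P = np.array([[0.8, 0.15, 0.05],
              [0.4, 0.5,  0.1 ],
              [0.1, 0.3,  0.6 ]])      # row = today's weather, columns = next week's
pi = np.array([0.0, 0.0, 1.0])          # initial condition: rainy today

for week in range(1, 53):
    pi = pi @ P                          # one application of the transition operator
    if week in (4, 52):                  # roughly next month and next year
        print(f"week {week:2d}:", dict(zip(states, np.round(pi, 3).tolist())))
# After enough steps the forecast settles to the stationary distribution
# pi* = pi* P, regardless of the initial condition.
```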
1. state space: depends on the target distribution, e.g. $(-\infty, \infty)$
2. proposal distribution: $q(x|x^{(t-1)})$
3. initial state: $x^{(0)} \sim \pi^{(0)}$
1. sample a proposal from $q(x|x^{(t-1)})$
2. accept or reject the proposal according to an acceptance rule
1. $\quad\pi^{(0)} = \mathcal{N}(0,1)$
2. $\quad q(x|x^{(t-1)}) = \mathcal{N}(x^{(t-1)},1)$
$$\begin{array}{|l|} \hline \text{1. set } t = 0\\ \text{2. generate an initial state }x^{(0)}\text{ from a prior distribution }\pi^{(0)}\text{ over initial states}\\ \text{3. repeat until }t = M\\ \qquad\text{set }t = t+1\\ \qquad\text{generate a proposal state }x^*\text{ from }q(x | x^{(t-1)})\\ \qquad\text{calculate the acceptance probability }\alpha = \min \left(1, \frac{p(x^*)}{p(x^{(t-1)})}\right)\\ \qquad\text{draw a random number u from }\text{Unif}(0,1)\\ \qquad\qquad\text{if }u \leq \alpha\text{, accept the proposal and set }x^{(t)} = x^*\\ \qquad\qquad\text{else set }x^{(t)} = x^{(t-1)}\\ \hline \end{array} $$
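
A minimal Python sketch of the boxed procedure, using the example settings above: $\pi^{(0)} = \mathcal{N}(0,1)$ and a symmetric Gaussian proposal, so the $q$ ratio cancels from the acceptance probability. The target density `p` and the helper name `metropolis` are illustrative assumptions, not part of the slides:

```python
import numpy as np

def metropolis(p, M=10_000, rng=None):
    """Sample from an unnormalized density p with a N(x, 1) random-walk proposal."""
    rng = rng or np.random.default_rng(0)
    x = rng.normal(0.0, 1.0)                # x^(0) ~ pi^(0) = N(0, 1)
    samples = [x]
    for _ in range(M):
        x_star = rng.normal(x, 1.0)          # proposal from q(x | x^(t-1))
        alpha = min(1.0, p(x_star) / p(x))   # acceptance probability
        if rng.uniform() <= alpha:           # accept: x^(t) = x*
            x = x_star
        samples.append(x)                    # else keep x^(t) = x^(t-1)
    return np.array(samples)

# Example target: unnormalized standard normal density
chain = metropolis(lambda x: np.exp(-0.5 * x**2))
print(chain.mean(), chain.std())             # approximately 0 and 1 after burn-in
```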
$$ \begin{aligned} p(z_i = k|\overrightarrow{\mathbf{z}}_{\neg i}, \overrightarrow{\mathbf{w}}) & \propto p(z_i = k, w_i = t |\overrightarrow{\mathbf{z}}_{\neg i}, \overrightarrow{\mathbf{w}}_{\neg i}) \\ &= \int p(z_i = k, w_i = t, \overrightarrow{\theta}_m,\overrightarrow{\varphi}_k | \overrightarrow{\mathbf{z}}_{\neg i}, \overrightarrow{\mathbf{w}}_{\neg i}) d \overrightarrow{\theta}_m d \overrightarrow{\varphi}_k \\ &= \int p(z_i = k, \overrightarrow{\theta}_m|\overrightarrow{\mathbf{z}}_{\neg i}, \overrightarrow{\mathbf{w}}_{\neg i}) \cdot p(w_i = t, \overrightarrow{\varphi}_k | \overrightarrow{\mathbf{z}}_{\neg i}, \overrightarrow{\mathbf{w}}_{\neg i}) d \overrightarrow{\theta}_m d \overrightarrow{\varphi}_k \\ &= \int p(z_i = k |\overrightarrow{\theta}_m) p(\overrightarrow{\theta}_m|\overrightarrow{\mathbf{z}}_{\neg i}, \overrightarrow{\mathbf{w}}_{\neg i}) \cdot p(w_i = t |\overrightarrow{\varphi}_k) p(\overrightarrow{\varphi}_k|\overrightarrow{\mathbf{z}}_{\neg i}, \overrightarrow{\mathbf{w}}_{\neg i}) d \overrightarrow{\theta}_m d \overrightarrow{\varphi}_k \\ &= \int p(z_i = k |\overrightarrow{\theta}_m) \text{Dir}(\overrightarrow{\theta}_m| \overrightarrow{n}_{m,\neg i} + \overrightarrow{\alpha}) d \overrightarrow{\theta}_m \\ & \hspace{0.2cm} \cdot \int p(w_i = t |\overrightarrow{\varphi}_k) \text{Dir}( \overrightarrow{\varphi}_k| \overrightarrow{n}_{k,\neg i} + \overrightarrow{\beta}) d \overrightarrow{\varphi}_k \\ &= \int \theta_{mk} \text{Dir}(\overrightarrow{\theta}_m| \overrightarrow{n}_{m,\neg i} + \overrightarrow{\alpha}) d \overrightarrow{\theta}_m \cdot \int \varphi_{kt} \text{Dir}( \overrightarrow{\varphi}_k| \overrightarrow{n}_{k,\neg i} + \overrightarrow{\beta}) d \overrightarrow{\varphi}_k \\ &= \color{red}{E(\theta_{mk}) \cdot E(\varphi_{kt})} \\ \end{aligned} $$
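
The two expectations are the means of the Dirichlet posteriors, $E(\theta_{mk}) = \frac{n_{m,\neg i}^{(k)} + \alpha_k}{\sum_{k'} (n_{m,\neg i}^{(k')} + \alpha_{k'})}$ and $E(\varphi_{kt}) = \frac{n_{k,\neg i}^{(t)} + \beta_t}{\sum_{t'} (n_{k,\neg i}^{(t')} + \beta_{t'})}$, which yields the collapsed Gibbs update. Below is a minimal Python sketch of one sweep, assuming scalar symmetric priors `alpha`, `beta` and count arrays maintained alongside the topic assignments `z`; all names and the data layout are illustrative assumptions:

```python
# One collapsed-Gibbs sweep over all tokens, sampling z_i from
#   p(z_i = k | ...) ∝ (n_mk_noti + alpha) * (n_kt_noti + beta) / (n_k_noti + V*beta)
# The document-length denominator of E(theta_mk) is constant in k, so it is dropped.
import numpy as np

def gibbs_sweep(docs, z, n_mk, n_kt, n_k, alpha, beta, rng):
    """docs[m]: list of word ids; z[m]: current topic of each token in doc m.
    n_mk: M x K doc-topic counts, n_kt: K x V topic-word counts, n_k: K topic totals."""
    K, V = n_kt.shape
    for m, doc in enumerate(docs):
        for i, t in enumerate(doc):
            k_old = z[m][i]
            # remove token i from the counts to obtain the "not i" statistics
            n_mk[m, k_old] -= 1; n_kt[k_old, t] -= 1; n_k[k_old] -= 1
            # unnormalized p(z_i = k | z_not_i, w) = E(theta_mk) * E(phi_kt)
            p = (n_mk[m] + alpha) * (n_kt[:, t] + beta) / (n_k + V * beta)
            k_new = rng.choice(K, p=p / p.sum())
            # add the token back with its newly sampled topic
            z[m][i] = k_new
            n_mk[m, k_new] += 1; n_kt[k_new, t] += 1; n_k[k_new] += 1
    return z
```

Repeating such sweeps until the chain mixes, then estimating $\theta$ and $\varphi$ from the final counts, is the usual collapsed Gibbs procedure for LDA.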