5. Variational Inference (2)

( Assume that we all know Jensen’s Inequality, KL-Divergence, Variational Transform - look at the previous three posts )

Example Question

( figure : the Bayesian network of the model, with hidden variables \(\mu\), \(\tau\) and observations \(x_1, \dots, x_N\) )

(1) Question Introduction

How can we find the posterior distribution of the hidden variables \(\mu\) and \(\tau\) from the network above, given the observations \(x_i\)?

  • \(\mu\) follows a Normal Distribution
  • \(\tau\) follows a Gamma Distribution
  • \(x_i\) follows a Normal Distribution


From these assumptions, we can write out each density :

\(P(\mu \mid \tau) = \sqrt{\frac{\lambda_0 \tau}{2\pi}}e^{-\frac{\lambda_0 \tau(\mu-\mu_0)^2}{2}}\)

\(P(\tau) = \frac{b_0^{a_0}}{\Gamma(a_0)}\tau^{a_0-1}e^{-b_0 \tau}\)

\(P(x_i \mid \mu, \tau) = \sqrt{\frac{\tau}{2\pi}}e^{-\frac{\tau(x_i-\mu)^2}{2}}\)

We will use MFVI (Mean-Field Variational Inference) to solve this problem.

( remember \(\ln q_j^{*}(H_j \mid E, \lambda_j)\) = \(\ln\widetilde{P}(H,E\mid \theta)\) = \(E_{q_{i\neq j}}[\ln P(H,E\mid \theta)]+C\) ,

H : Hypothesis, E : Evidence )


First, we can express \(P(H,E \mid \theta)\) as below :

\(P(H,E\mid \theta) = P(X,\mu,\tau \mid \mu_0, \lambda_0, a_0, b_0)\)
\(=P(X\mid \mu, \tau) P(\mu \mid \tau, \mu_0, \lambda_0)P(\tau \mid a_0, b_0)\)


\(= \left[\prod_{i\leq N}P(x_i\mid \mu, \tau)\right]P(\mu \mid \tau, \mu_0, \lambda_0)P(\tau \mid a_0, b_0)\)
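
For the updates below, it is convenient to also write down the log of this joint ( only the terms that depend on \(\mu\) or \(\tau\) are kept ; the rest is absorbed into \(C\) ) :

\(\ln P(H,E\mid \theta) = \frac{N}{2}\ln\tau - \frac{\tau}{2}\sum_{i\leq N}(x_i-\mu)^2 + \frac{1}{2}\ln\tau - \frac{\lambda_0\tau(\mu-\mu_0)^2}{2} + (a_0-1)\ln\tau - b_0\tau + C\)

This is the expression whose expectation we will take in each update.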

Second, we can express \(Q(H\mid E,\lambda)\) as below :

\(Q(H\mid E,\lambda) = Q(\mu,\tau \mid X, \mu^{*},\tau^{*}) =q(\mu \mid X, \mu^{*})q(\tau \mid X, \tau^{*})\)

We have introduced two new variational parameters, \(\mu^{*}\) and \(\tau^{*}\).

We will optimize these variational distributions so that \(Q(H \mid E, \lambda)\) approximates the true posterior \(P(H \mid E, \theta)\).

(2) Optimize (each) Variational Distribution

a. with respect to \(\mu\)

Since we want the optimal factor for \(\mu\), we absorb every term that does not depend on \(\mu\) into a constant. Then \(\ln q_{\mu}^{*}(\mu)\) becomes :

( figure : derivation of \(\ln q_\mu^{*}(\mu)\) )
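
Written out, the update is the expectation of the log-joint over \(q_\tau\), keeping only the terms that contain \(\mu\) :

\(\ln q_\mu^{*}(\mu) = E_{\tau}[\ln P(H,E\mid \theta)] + C\)

\(= -\frac{E_{\tau}[\tau]}{2}\{\sum_{i\leq N}(x_i-\mu)^2 + (\mu-\mu_0)^2\lambda_0\} + C'\)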

Completing the square, the final expression we get is a quadratic function of \(\mu\).

\(\ln q_{\mu}^{*}(\mu) = -\frac{1}{2}\{(\lambda_0 +N)E_{\tau}[\tau](\mu-\frac{\lambda_0 \mu_0 + \sum_{i\leq N}x_i}{\lambda_0 + N})^2\} + C''\)

What distribution will we set for \(q_\mu^{*}(\mu)\)?

Did you notice something from the equation above? It is exactly the log of a normal distribution with

  • mean : \(\frac{\lambda_0 \mu_0 + \sum_{i\leq N}x_i}{\lambda_0 + N}\)
  • variance : \(\frac{1}{(\lambda_0 +N)E_{\tau}[\tau]}\)

So we get the following distribution :

\(q_\mu^{*}(\mu) \sim\) \(N(\mu \mid \frac{\lambda_0 \mu_0 + \sum_{i\leq N}x_i}{\lambda_0 + N},\frac{1}{(\lambda_0 +N)E_{\tau}[\tau]} )\)

From the distribution above, (from mean & var), what we

  • already know are : \(\mu_0, \lambda_0, \sum_{i\leq N}x_i\)
  • don’t know is : \(E_{\tau}[\tau]\)

So we have to do the same thing with respect to \(\tau\) this time.

b. with respect to \(\tau\)

Since we now want the optimal factor for \(\tau\), we absorb every term that does not depend on \(\tau\) into a constant. Then \(\ln q_\tau^{*}(\tau)\) becomes :

( figure : derivation of \(\ln q_\tau^{*}(\tau)\) )
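
Written out, we take the expectation over \(q_\mu\) this time, keeping only the terms that contain \(\tau\) :

\(\ln q_\tau^{*}(\tau) = E_{\mu}[\ln P(H,E\mid \theta)] + C\)

\(= (a_0 + \frac{N+1}{2} - 1)\ln\tau - (b_0 + \frac{1}{2}E_\mu[\sum_{i\leq N}(x_i-\mu)^2 + (\mu-\mu_0)^2\lambda_0])\tau + C'\)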

What distribution will we set for \(q_\tau^{*}(\tau)\)?

Did you notice something from the equation above? Maybe it is hard to see at first glance, but if you look closely, it is the log of a gamma distribution with

  • shape \(a\) : \(a_0 + \frac{N+1}{2}\)
  • rate \(b\) : \(b_0 + \frac{1}{2}E_\mu[\sum_{i\leq N}(x_i-\mu)^2 + (\mu-\mu_0)^2\lambda_0]\)

( Gamma Distribution in shape-rate form, the same form as \(P(\tau)\) above : \(P(x) = \frac{b^{a}x^{a-1}e^{-bx}}{\Gamma(a)}\), when \(x \sim Gamma(a,b)\) )

So we get the following distribution :

\(q_\tau^{*}(\tau) \sim\) \(Gamma(\tau \mid a_0 + \frac{N+1}{2} ,b_0 + \frac{1}{2}E_\mu[\sum_{i\leq N}(x_i-\mu)^2 + (\mu-\mu_0)^2\lambda_0])\)

( let this \(q_\tau^{*}(\tau) \sim\) \(Gamma(\tau \mid a^{*},b^{*})\) )

From the distribution above, (from \(a^{*}\) & \(b^{*}\)), what we

  • already know are : \(a_0, b_0, N, x_i, \mu_0, \lambda_0\)
  • don’t know is : the expectation over \(\mu\) ( i.e. \(E_\mu[\mu]\) and \(E_\mu[\mu^2]\) )

c. Coordinated Optimization

From a) and b), we have found out that ..

  • 1 ) To find out \(q_\mu^{*}(\mu)\), we have to know \(E_{\tau}[\tau]\)
  • 2 ) To find out \(q_\tau^{*}(\tau)\), we have to know \(\mu\)

They need each other to get optimized. Can you guess how we should solve this problem? ( If you can’t, go and check the EM Algorithm first. )

Yes. It looks very similar to the E-step and M-step of the EM Algorithm. The difference is that in the EM algorithm we alternated between the hidden variables (the posterior in the E-step) and the model parameters (the M-step), but in this case we are alternating between the hidden variables themselves.

This is how the coordinated optimization works :

( figure : the alternating update cycle between \(q_\mu^{*}(\mu)\) and \(q_\tau^{*}(\tau)\) )

(3) Conclusion

Inference (\(X, a_0, b_0, \mu_0, \lambda_0\))

\(a^{*} = a_0 + \frac{N+1}{2}\)

\(\mu^{*} = \frac{\lambda_0 \mu_0 + \sum_{i\leq N}x_i}{\lambda_0 + N}\)

\(\lambda^{*}\) = (an arbitrary positive number, used as initialization)

  • Iterate until convergence :

    • \(b^{*} = b_0 + \frac{1}{2}E_\mu[\sum_{i\leq N}(x_i-\mu)^2 + (\mu-\mu_0)^2\lambda_0]\) ( with \(E_\mu[\mu] = \mu^{*}\), \(E_\mu[\mu^2]=\lambda^{*-1}+(\mu^{*})^2\) )

    • \(\lambda^{*} = (\lambda_0 + N)E_{\tau}[\tau]\) ( with \(E_{\tau}[\tau] = \frac{a^{*}}{b^{*}}\) )

  • After convergence, return the approximated posteriors :

    • \(\mu \sim N(\mu \mid \mu^{*},\lambda^{*-1})\)

    • \(\tau \sim Gamma(\tau \mid a^{*}, b^{*})\)
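
To make the loop above concrete, here is a minimal sketch of the whole inference in Python with NumPy. It is not from any reference implementation; the function name, the synthetic data, the tolerance and the initial value of `lam_star` are my own choices for illustration, while the update formulas are exactly the ones in the Conclusion.

```python
import numpy as np

def mfvi_normal_gamma(X, mu0, lam0, a0, b0, tol=1e-10, max_iter=1000):
    """Mean-field VI for the Normal-Gamma model above.

    Returns (mu_star, lam_star, a_star, b_star) so that
    q(mu) = N(mu | mu_star, 1/lam_star) and q(tau) = Gamma(tau | a_star, b_star).
    """
    N = len(X)
    x_sum = np.sum(X)
    x_sq_sum = np.sum(X ** 2)

    # These two never change inside the loop
    a_star = a0 + (N + 1) / 2
    mu_star = (lam0 * mu0 + x_sum) / (lam0 + N)

    lam_star = 1.0  # arbitrary positive initialization (an assumption)

    for _ in range(max_iter):
        # Moments of mu under the current q(mu)
        E_mu = mu_star
        E_mu2 = 1.0 / lam_star + mu_star ** 2

        # b* = b0 + 0.5 * E_mu[ sum_i (x_i - mu)^2 + (mu - mu0)^2 * lam0 ],
        # expanded using E_mu[mu] and E_mu[mu^2]
        b_star = b0 + 0.5 * (
            x_sq_sum - 2 * x_sum * E_mu + N * E_mu2
            + lam0 * (E_mu2 - 2 * mu0 * E_mu + mu0 ** 2)
        )

        # lambda* = (lam0 + N) * E_tau[tau], with E_tau[tau] = a*/b*
        new_lam_star = (lam0 + N) * a_star / b_star

        if abs(new_lam_star - lam_star) < tol:
            lam_star = new_lam_star
            break
        lam_star = new_lam_star

    return mu_star, lam_star, a_star, b_star


# Example usage on synthetic data (illustration only)
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(loc=2.0, scale=0.5, size=200)
    print(mfvi_normal_gamma(X, mu0=0.0, lam0=1.0, a0=1.0, b0=1.0))
```

Note that only \(b^{*}\) and \(\lambda^{*}\) are updated inside the loop : \(a^{*}\) and \(\mu^{*}\) depend only on the data and the hyperparameters, so they are computed once.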