CS229 Notes9 Lagrange Multipliers and Factor Analysis

Lagrange Multiplier

Goal: show that the critical points of the Lagrangian are also critical points of the original constrained function.

Consider maximizing a function subject to linear equality constraints: \(\max_x f(x) \;\; \text{s.t. } Ax=b\)

\(A: \mathbb{R}^{n\times d}, x: \mathbb{R}^d, b: \mathbb{R}^n, f: \mathbb{R}^d \to \mathbb{R}\)

Finding the space of solutions

If \(A\) has full column rank, the solution \(x_0\) is unique and there is nothing left to optimize. Assume instead \(\mathrm{rank}(A) = r < d\) and fix a particular solution \(x_0\) of \(Ax=b\). The null space of \(A\) is then a \(k\)-dimensional subspace of \(\mathbb{R}^d\), \(k = d - r > 0\), with basis

\[U = [u_1, ..., u_k] \in \mathbb{R}^{d\times k}\]

Every solution to \(Ax=b\) can be written with free parameters \(y \in \mathbb{R}^k\): \[x = x_0 + Uy\]
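As a numerical sanity check (a minimal numpy sketch; the matrix \(A\) and vector \(b\) are arbitrary illustrative data), we can take a particular solution \(x_0\) from `lstsq`, build a null-space basis \(U\) from the SVD, and verify that \(x_0 + Uy\) solves \(Ax=b\) for any \(y\):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 2, 5                      # fewer constraints than variables
A = rng.standard_normal((n, d))  # full row rank (generically), r = n
b = rng.standard_normal(n)

# Particular solution x_0 (minimum-norm solution of Ax = b).
x0, *_ = np.linalg.lstsq(A, b, rcond=None)

# Null-space basis U: right singular vectors for (numerically) zero singular values.
_, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))       # rank of A
U = Vt[r:].T                     # d x k basis matrix, k = d - r

# Any x = x0 + U y satisfies Ax = b.
y = rng.standard_normal(d - r)
x = x0 + U @ y
assert np.allclose(A @ x, b)
```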

Consider the function \(g\)

\[g(y) = f(x_0+Uy), g:\mathbb{R}^k \to \mathbb{R}\]

By the chain rule, \[\nabla_y g(y) = U^T\nabla f(x_0+Uy) = U^T \nabla f(x) = 0\]

Since the columns of \(U\) span the null space of \(A\), \(U^T\nabla f(x) = 0\) says that \(\nabla f(x)\) is orthogonal to the null space of \(A\), i.e. it lies in the row space of \(A\).

\[\therefore \nabla f(x) + A^T \mu = 0, \mu \in \mathbb{R}^n\]

for some \(x\) satisfying \(Ax=b\).

The Clever Lagrangian (show that the critical points of the constrained problem for \(f\) are critical points of \(L(x, \mu)\))

\[L(x,\mu) = f(x) + \mu^T(Ax-b)\]

First-order conditions:

\[\nabla_xL(x,\mu) = \nabla f(x) + A^T\mu = 0\]

\[\nabla_\mu L(x,\mu) = Ax-b = 0\]

The first equation is equivalent to the first-order condition for \(g\); the second simply says the constraint is satisfied.

So if \(x\) is a critical point of the constrained \(f\), then \((x,\mu)\) is a critical point of \(L\).
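As a concrete check (a minimal numpy sketch; the quadratic objective \(f(x)=\tfrac12\|x-c\|^2\) is an illustrative choice, not from the notes), the two stationarity conditions of \(L\) form a linear KKT system that can be solved directly:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 2, 4
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)
c = rng.standard_normal(d)

# f(x) = 0.5 * ||x - c||^2, so grad f(x) = x - c.
# Stationarity of L(x, mu) = f(x) + mu^T (Ax - b) gives the linear system:
#   [ I   A^T ] [ x  ]   [ c ]
#   [ A   0   ] [ mu ] = [ b ]
K = np.block([[np.eye(d), A.T],
              [A, np.zeros((n, n))]])
sol = np.linalg.solve(K, np.concatenate([c, b]))
x, mu = sol[:d], sol[d:]

assert np.allclose((x - c) + A.T @ mu, 0)   # grad_x L = 0
assert np.allclose(A @ x, b)                # grad_mu L = 0
```

The same \((x, \mu)\) solves both conditions at once, which is exactly the claim above.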

Factor analysis

Density estimation when data are scarce; EM.

Marginals and conditionals of a partitioned Gaussian

A random variable

\[x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}\]

where \(x_1 \in \mathbb{R}^r, x_2 \in \mathbb{R}^s, x \in \mathbb{R}^{r+s}\)

Suppose \(x \sim N(\mu, \Sigma)\), where

\[\mu = \begin{bmatrix} \mu_1 \\ \mu_2 \end{bmatrix}, \Sigma = \begin{bmatrix} \Sigma_{11} \quad \Sigma_{12} \\ \Sigma_{21} \quad \Sigma_{22}\end{bmatrix}\]

The conditional distribution \(x_1|x_2\) is also Gaussian, \(x_1|x_2 \sim N(\mu_{1|2}, \Sigma_{1|2})\), with

\[\mu_{1|2} = \mu_1 + \Sigma_{12}\Sigma_{22}^{-1}(x_2-\mu_2)\]

\[\Sigma_{1|2} = \Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}\]
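These two formulas translate directly into code (a numpy sketch; the helper name `gaussian_conditional` is mine, and the bivariate case at the end is a classical closed-form check):

```python
import numpy as np

def gaussian_conditional(mu, Sigma, r, x2):
    """Parameters of x1 | x2 for x ~ N(mu, Sigma), with x1 the first r coords."""
    mu1, mu2 = mu[:r], mu[r:]
    S11, S12 = Sigma[:r, :r], Sigma[:r, r:]
    S21, S22 = Sigma[r:, :r], Sigma[r:, r:]
    S22_inv = np.linalg.inv(S22)
    mu_cond = mu1 + S12 @ S22_inv @ (x2 - mu2)
    Sigma_cond = S11 - S12 @ S22_inv @ S21       # Schur complement of S22
    return mu_cond, Sigma_cond

# Bivariate check: for mu = 0 and Sigma = [[1, rho], [rho, 1]], it is classical
# that x1 | x2 ~ N(rho * x2, 1 - rho^2).
rho = 0.6
mu_c, Sig_c = gaussian_conditional(np.zeros(2),
                                   np.array([[1.0, rho], [rho, 1.0]]),
                                   r=1, x2=np.array([2.0]))
assert np.allclose(mu_c, rho * 2.0)
assert np.allclose(Sig_c, 1 - rho**2)
```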

The Factor analysis model

Posit a joint distribution \((x,z)\) with \(z\in \mathbb{R}^k\) being the latent random variable

\[z \sim N(0,I)\]

\[x|z \sim N(\mu+\Lambda z, \Psi)\]

Here, \(\mu \in \mathbb{R}^{d}\), \(\Lambda \in \mathbb{R}^{d\times k}\), and \(\Psi \in \mathbb{R}^{d\times d}\) is a diagonal matrix. Usually \(k<d\), so \(z\) is a low-dimensional latent representation of \(x\).

Equivalently, define the factor analysis model by:

\[z \sim N(0, I)\]

\[\epsilon \sim N(0, \Psi)\]

\[x = \mu + \Lambda z + \epsilon\]

Remark: the model maps the k-dimensional Gaussian \(z\) affinely onto a k-dimensional affine subspace of \(\mathbb{R}^d\), then adds the noise \(\epsilon\).
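The three defining equations translate directly into a sampling sketch (numpy, with arbitrary illustrative parameters); with enough samples, the empirical moments should match \(E[x]=\mu\) and \(\mathrm{cov}(x) = \Lambda\Lambda^T + \Psi\):

```python
import numpy as np

rng = np.random.default_rng(2)
d, k = 5, 2
mu = rng.standard_normal(d)
Lam = rng.standard_normal((d, k))
Psi = np.diag(rng.uniform(0.1, 0.5, size=d))     # diagonal noise covariance

# Generative process: z ~ N(0, I), eps ~ N(0, Psi), x = mu + Lam z + eps.
m = 200_000
z = rng.standard_normal((m, k))
eps = rng.standard_normal((m, d)) * np.sqrt(np.diag(Psi))  # Psi is diagonal
x = mu + z @ Lam.T + eps

# The implied marginal is x ~ N(mu, Lam Lam^T + Psi).
assert np.allclose(x.mean(axis=0), mu, atol=0.05)
assert np.allclose(np.cov(x, rowvar=False), Lam @ Lam.T + Psi, atol=0.1)
```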

Assertion: \((z,x)\) have a joint Gaussian distribution:

\[ \begin{bmatrix} z \\ x \end{bmatrix} \sim N(\mu_{zx}, \Sigma )\]

Then find \(\mu_{zx}\) and \(\Sigma\)

\[E[x] = \mu\]

\[E[z] = 0\]

\[\therefore \mu_{zx} = \begin{bmatrix} \vec{0} \\ \mu \end{bmatrix}\]

To find \(\Sigma\), calculate its blocks \(\Sigma_{zz}, \Sigma_{zx}, \Sigma_{xx}\)

\[\Sigma_{zz} = cov(z) = I\]

\[\begin{aligned} \Sigma_{zx} &= E[(z-E[z])(x-E[x])^T]\\ &= E[z(\Lambda z + \epsilon)^T]\\ &= E[zz^T]\Lambda^T + E[z\epsilon^T]\\ &= \Lambda^T \end{aligned}\]

using \(E[zz^T] = I\) and \(E[z\epsilon^T] = 0\) (\(z\) and \(\epsilon\) are independent). Similarly,

\[\Sigma_{xx} = E[(\Lambda z+\epsilon)(\Lambda z + \epsilon)^T] = \Lambda E[zz^T]\Lambda^T + E[\epsilon\epsilon^T] = \Lambda\Lambda^T + \Psi \]

Putting everything together,

\[ \begin{bmatrix} z \\ x \end{bmatrix} \sim N\left(\begin{bmatrix} \vec{0} \\ \mu \end{bmatrix}, \begin{bmatrix} I & \Lambda^T \\ \Lambda & \Lambda\Lambda^T + \Psi \end{bmatrix} \right)\]

The marginal distribution of x is \(x\sim N(\mu, \Lambda\Lambda^T+\Psi)\).

We can write out the log-likelihood of this marginal. Its MLE has no closed form, so we optimize it with EM.

\[l(\mu, \Lambda, \Psi) = \log \prod_{i=1}^{n} \frac1{(2\pi)^{d/2} |\Lambda\Lambda^T + \Psi|^{1/2} } \exp \bigg( -\frac12 (x^{(i)}-\mu)^T(\Lambda\Lambda^T+\Psi)^{-1}(x^{(i)}-\mu) \bigg) \]
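This log-likelihood can be evaluated directly (a numpy sketch; `slogdet` and `solve` are used for numerical stability, and the degenerate \(d=1\) case at the end is a closed-form check):

```python
import numpy as np

def fa_log_likelihood(X, mu, Lam, Psi):
    """Marginal log-likelihood of X (n x d) under x ~ N(mu, Lam Lam^T + Psi)."""
    n, d = X.shape
    Sigma = Lam @ Lam.T + Psi
    _, logdet = np.linalg.slogdet(Sigma)
    diff = X - mu
    # sum_i diff_i^T Sigma^{-1} diff_i, computed without forming the inverse
    quad = np.sum(diff * np.linalg.solve(Sigma, diff.T).T)
    return -0.5 * (n * d * np.log(2 * np.pi) + n * logdet + quad)

# Check: one point at the mean of a standard normal (d = 1, Lam = 0, Psi = 1)
# gives log N(0; 0, 1) = -0.5 * log(2*pi).
ll = fa_log_likelihood(np.zeros((1, 1)), np.zeros(1), np.zeros((1, 1)), np.eye(1))
assert np.isclose(ll, -0.5 * np.log(2 * np.pi))
```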

EM for factor analysis

The E-step

\[Q_i(z^{(i)}) = p(z^{(i)}|x^{(i)}; \mu, \Lambda, \Psi)\]

Given the joint distribution of \((z,x)\) and the conditional distribution formula of a Gaussian, this posterior is itself Gaussian.

The full derivation is long and omitted here.
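Although the derivation is long, the standard result of conditioning the joint of \((z,x)\) is \(\mu_{z|x} = \Lambda^T(\Lambda\Lambda^T+\Psi)^{-1}(x-\mu)\) and \(\Sigma_{z|x} = I - \Lambda^T(\Lambda\Lambda^T+\Psi)^{-1}\Lambda\). A numpy sketch with illustrative random parameters (the Woodbury-style identity at the end is a nontrivial consistency check):

```python
import numpy as np

rng = np.random.default_rng(3)
d, k = 4, 2
mu = rng.standard_normal(d)
Lam = rng.standard_normal((d, k))
Psi = np.diag(rng.uniform(0.1, 0.5, size=d))
x = rng.standard_normal(d)

# E-step posterior of z | x, from the Gaussian conditioning formulas applied
# to the joint covariance [[I, Lam^T], [Lam, Lam Lam^T + Psi]].
S = Lam @ Lam.T + Psi
S_inv = np.linalg.inv(S)
mu_post = Lam.T @ S_inv @ (x - mu)
Sigma_post = np.eye(k) - Lam.T @ S_inv @ Lam

# Equivalent form via the Woodbury identity: Sigma_{z|x} = (I + Lam^T Psi^{-1} Lam)^{-1}.
alt = np.linalg.inv(np.eye(k) + Lam.T @ np.linalg.inv(Psi) @ Lam)
assert np.allclose(Sigma_post, alt)
```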

Discussion

The notes open with the \(n<d\) regime: the dataset is then too small to estimate the covariance matrix \(\Sigma\) of a multivariate Gaussian, i.e. the empirical covariance is singular.

The first fix restricts the form of \(\Sigma\) to a diagonal matrix. Since there is not enough data to fill in the whole covariance matrix, we only estimate each coordinate's own variance; a non-singular diagonal \(\Sigma\) then needs far fewer examples than the \(n \geq d+1\) required for the full covariance. The price is that this completely ignores correlations between coordinates.

The second approach, factor analysis, assumes the high-dimensional (\(d\)) Gaussian variable \(x\) arises from a low-dimensional (\(k\)) Gaussian variable \(z\) mapped up to \(d\) dimensions, plus the mean of \(x\) and the random noise \(\epsilon\). The vertical stack of \(z\) and \(x\) is itself a \((k+d)\)-dimensional Gaussian. Note that this form resembles OLS; the parameters here are the loading matrix \(\Lambda\) and the noise covariance \(\Psi\) (plus \(\mu\)).

The marginal distribution of \(x\) is then \(x\sim N(\mu, \Lambda \Lambda^T + \Psi)\). Compared with the full \(d\times d\) covariance matrix this is a dimensionality reduction, so the distribution can potentially be estimated from less data.

An important piece of this chapter: the mean/covariance of the higher-dimensional Gaussian formed by vertically stacking two Gaussian variables, and the fact that the conditional distribution \(x_1|x_2\) is also Gaussian, with

\[\mu_{1|2} = \mu_1 + \Sigma_{12}\Sigma_{22}^{-1}(x_2-\mu_2)\]

\[\Sigma_{1|2} = \Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}\]

In EM for factor analysis, the E-step distribution of \(z|x\) is obtained exactly this way.

The M-step derivation is quite tricky (too long; skipped here). The most interesting result: the loading matrix \(\Lambda\) can be solved for in a single normal-equation-like step, with an expression similar in form to the OLS parameter \(\theta\).
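The full procedure can be sketched end to end (a numpy implementation of the standard EM updates for factor analysis, not the notes' derivation; the OLS-like \(\Lambda\) update is the marked line in the M-step, and monotone log-likelihood is EM's guarantee):

```python
import numpy as np

def fa_em(X, k, iters=25, seed=0):
    """EM for factor analysis (a sketch following the standard updates)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    mu = X.mean(axis=0)                        # MLE of mu, fixed throughout
    Xc = X - mu
    Lam = rng.standard_normal((d, k)) * 0.1
    Psi = np.diag(np.var(Xc, axis=0))
    lls = []
    for _ in range(iters):
        S = Lam @ Lam.T + Psi
        # E-step: posterior moments of z | x.
        G = np.linalg.solve(S, Lam).T          # k x d, equals Lam^T S^{-1}
        Ez = Xc @ G.T                          # n x k rows of E[z_i | x_i]
        Szz = np.eye(k) - G @ Lam              # posterior cov (same for all i)
        Ezz_sum = Ez.T @ Ez + n * Szz          # sum_i E[z_i z_i^T | x_i]
        # M-step: normal-equation-like Lambda update, same shape as OLS theta.
        Lam = (Xc.T @ Ez) @ np.linalg.inv(Ezz_sum)
        Phi = (Xc.T @ Xc - Lam @ (Ez.T @ Xc)) / n
        Psi = np.diag(np.diag(Phi))            # keep only the diagonal
        # Track the marginal log-likelihood; EM never decreases it.
        S = Lam @ Lam.T + Psi
        _, logdet = np.linalg.slogdet(S)
        quad = np.sum(Xc * np.linalg.solve(S, Xc.T).T)
        lls.append(-0.5 * (n * d * np.log(2 * np.pi) + n * logdet + quad))
    return mu, Lam, Psi, lls

# Synthetic low-rank-plus-noise data.
rng = np.random.default_rng(4)
Z = rng.standard_normal((500, 2))
X = Z @ rng.standard_normal((2, 6)) + 0.3 * rng.standard_normal((500, 6))
mu, Lam, Psi, lls = fa_em(X, k=2)
assert np.all(np.diff(lls) >= -1e-6)   # monotone (up to round-off)
```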

Written in the early hours of November 3.