
Probabilistic principal component analysis (PPCA) formulates principal component analysis (PCA) as the maximum likelihood solution of a probabilistic latent variable model. When only the first few leading dimensions are needed, an EM algorithm can be used for efficiency instead of evaluating the sample covariance matrix. A Bayesian method can be used to determine the dimensionality of the principal subspace automatically in situations where it is not clearly defined. Because the probabilistic PCA model generates an observed variable in the original d dimensions from a linear transformation of a lower-dimensional latent variable plus Gaussian noise, it can also be used to sample from the resulting distribution [1]. Probabilistic PCA overcomes several problems of standard approaches to principal component analysis: it can handle missing data, whereas normally incomplete points must be discarded; it defines a proper density model, so that we can assess how well new data points are fit by the model; and it provides an efficient way to deal with high-dimensional data.[2]

Description of the model


Probabilistic PCA is a generative latent variable model. It describes an observed d-dimensional data vector t as produced by a linear mapping of a q-dimensional latent (hidden) variable x, shifted by a mean vector μ, plus additive Gaussian noise ε:

\mathbf{t} = \mathbf{W}\mathbf{x} + \boldsymbol{\mu} + \boldsymbol{\epsilon}

Generally, q < d. Thus, the d × q matrix W denotes a linear transformation from the lower-dimensional latent space to the higher-dimensional data space. The parameter μ allows the data model to have a mean different from zero. The latent variable x is distributed according to a normal distribution with zero mean and an isotropic (identity) covariance matrix:

\mathbf{x} \sim \mathcal{N}(\mathbf{0}, \mathbf{I}_q)

Here, the identity matrix \mathbf{I}_q is q-dimensional (i.e. the latent variable has q dimensions), and q is a design parameter that needs to be specified according to the application. The noise ε is assumed to be independent of x and is also distributed according to a Gaussian distribution:

\boldsymbol{\epsilon} \sim \mathcal{N}(\mathbf{0}, \boldsymbol{\Psi})

The covariance matrix Ψ is diagonal, which means the observed variables t are conditionally independent given the latent variable x. For a general diagonal matrix Ψ it is not possible to obtain a closed-form analytic solution. This is possible only if Ψ is isotropic, i.e. of the form:

\boldsymbol{\Psi} = \sigma^{2}\mathbf{I}_d
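With W, μ and σ² specified, the generative process can be simulated directly. The following is a minimal Python/NumPy sketch, not taken from the cited sources; the dimensions, parameter values, and the helper name sample_ppca are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(0)

    d, q = 5, 2                          # data and latent dimensionalities
    W = rng.standard_normal((d, q))      # linear map from latent to data space
    mu = rng.standard_normal(d)          # data mean
    sigma2 = 0.1                         # isotropic noise variance sigma^2

    def sample_ppca(n):
        """Draw n observations t = W x + mu + eps from the generative model."""
        x = rng.standard_normal((n, q))                      # x ~ N(0, I_q)
        eps = np.sqrt(sigma2) * rng.standard_normal((n, d))  # eps ~ N(0, sigma^2 I_d)
        return x @ W.T + mu + eps

    T = sample_ppca(1000)                # 1000 samples, one d-vector per row

Each row of T is one draw of t; sampling x first and then adding noise mirrors the two-stage definition of the model.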

Given the above formulation, and since a linear transformation of a normally distributed random variable is still normally distributed, the observed vectors t are distributed as follows:

\mathbf{t} \sim \mathcal{N}(\boldsymbol{\mu}, \mathbf{C})

where

\mathbf{C} = \mathbf{W}\mathbf{W}^{\top} + \sigma^{2}\mathbf{I}_d

The term \mathbf{W}\mathbf{W}^{\top} appears because the covariance of the linear transformation of x is:

\operatorname{cov}(\mathbf{W}\mathbf{x}) = \mathbf{W}\operatorname{cov}(\mathbf{x})\mathbf{W}^{\top} = \mathbf{W}\mathbf{I}_q\mathbf{W}^{\top} = \mathbf{W}\mathbf{W}^{\top}
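This identity is easy to check numerically. A small sketch, with an arbitrary randomly drawn W (all values here are illustrative assumptions), compares the empirical covariance of Wx against W Wᵀ:

    import numpy as np

    rng = np.random.default_rng(0)
    d, q = 5, 2
    W = rng.standard_normal((d, q))

    # cov(W x) = W cov(x) W^T = W W^T, since cov(x) = I_q
    x = rng.standard_normal((200_000, q))            # x ~ N(0, I_q)
    emp_cov = np.cov(x @ W.T, rowvar=False)          # empirical covariance of W x
    print(np.allclose(emp_cov, W @ W.T, atol=0.05))  # True, up to sampling error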

Therefore, we obtain a probability distribution over the data space given a latent variable x:

\mathbf{t} \mid \mathbf{x} \sim \mathcal{N}(\mathbf{W}\mathbf{x} + \boldsymbol{\mu},\, \sigma^{2}\mathbf{I}_d)

Then we integrate over the latent variable x to get the marginal distribution of the observed variable t:

p(\mathbf{t}) = \int p(\mathbf{t} \mid \mathbf{x})\, p(\mathbf{x})\, d\mathbf{x} = \mathcal{N}(\mathbf{t} \mid \boldsymbol{\mu},\, \mathbf{W}\mathbf{W}^{\top} + \sigma^{2}\mathbf{I}_d)
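Because the marginal is an ordinary multivariate Gaussian, the density of any data point can be evaluated in closed form, which is what makes PPCA a proper density model. A minimal sketch, assuming SciPy is available and using arbitrary illustrative parameters:

    import numpy as np
    from scipy.stats import multivariate_normal

    rng = np.random.default_rng(0)
    d, q, sigma2 = 5, 2, 0.1
    W = rng.standard_normal((d, q))
    mu = rng.standard_normal(d)

    # Marginal: t ~ N(mu, C) with C = W W^T + sigma^2 I_d
    C = W @ W.T + sigma2 * np.eye(d)
    marginal = multivariate_normal(mean=mu, cov=C)
    t = mu + 0.5 * rng.standard_normal(d)   # an arbitrary test point
    print(marginal.logpdf(t))               # log-density of t under the model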

Using Bayes' rule we can also obtain the posterior distribution of the latent variable x given an observation t:[3]

\mathbf{x} \mid \mathbf{t} \sim \mathcal{N}\left(\mathbf{M}^{-1}\mathbf{W}^{\top}(\mathbf{t} - \boldsymbol{\mu}),\, \sigma^{2}\mathbf{M}^{-1}\right), \qquad \mathbf{M} = \mathbf{W}^{\top}\mathbf{W} + \sigma^{2}\mathbf{I}_q
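The posterior mean and covariance follow from a few matrix operations; note that the posterior covariance σ²M⁻¹ is the same for every observation, and only the mean depends on t. A minimal sketch with arbitrary illustrative parameters:

    import numpy as np

    rng = np.random.default_rng(0)
    d, q, sigma2 = 5, 2, 0.1
    W = rng.standard_normal((d, q))
    mu = rng.standard_normal(d)
    t = mu + W @ rng.standard_normal(q)   # an observed data vector

    # Posterior p(x | t) = N(M^{-1} W^T (t - mu), sigma^2 M^{-1}),
    # where M = W^T W + sigma^2 I_q (M is symmetric and q x q).
    M = W.T @ W + sigma2 * np.eye(q)
    M_inv = np.linalg.inv(M)
    post_mean = M_inv @ W.T @ (t - mu)
    post_cov = sigma2 * M_inv
    print(post_mean)
    print(post_cov)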

References

  1. ^ Tipping, M. E.; Bishop, C. M. (1999). "Probabilistic Principal Component Analysis" (PDF). Journal of the Royal Statistical Society: Series B. 61 (3): 611–622.
  2. ^ Roweis, S. (1998). "EM Algorithms for PCA and SPCA". Advances in Neural Information Processing Systems. 10: 626–632.
  3. ^ Tipping, M. E.; Bishop, C. M. (1999). "Mixtures of Probabilistic Principal Component Analysers" (PDF). Neural Computation. 11 (2): 443–482.

Category:Image processing
Category:Artificial intelligence