Dimensionality reduction: Principal Components Analysis

Before going into more detail about nonlinear manifold learning, I’ll present the linear technique that is used in most applications.

PCA, short for Principal Components Analysis, is another name for the Karhunen-Loève transform. It aims at describing the data with a single linear model. The reduced space is the space of that linear model; a new point can be projected onto the model, which makes it possible to test whether the point belongs to the manifold.
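As a minimal sketch of this idea (the post shows no code, so the data and the use of scikit-learn’s PCA are my own assumptions): fit a linear model, then project a new point onto it and use the reconstruction error to test whether the point lies on the model.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Hypothetical data: points near a 2D plane embedded in 3D, plus slight noise.
X = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 3)) \
    + 0.01 * rng.normal(size=(200, 3))

pca = PCA(n_components=2)
Z = pca.fit_transform(X)  # coordinates in the reduced (linear) space

# Project a new point onto the linear model; a small reconstruction error
# suggests the point belongs to the manifold described by the model.
x_new = np.zeros((1, 3))
x_proj = pca.inverse_transform(pca.transform(x_new))
error = float(np.linalg.norm(x_new - x_proj))
print(error)
```

The projection/reconstruction round trip is what makes the membership test possible: the error is exactly the distance from the point to the fitted plane.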

The problem with PCA is that it cannot handle nonlinear manifolds, such as the SwissRoll presented in my last post.

Here is the SwissRoll manifold:
Original SwissRoll
When compressing it with PCA into 2 dimensions, this is the result:
PCA compression of the SwissRoll
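The experiment above can be reproduced with a short script (a sketch using scikit-learn’s SwissRoll generator and PCA, which the original post does not name):

```python
from sklearn.datasets import make_swiss_roll
from sklearn.decomposition import PCA

# 3D SwissRoll points; t is the "unrolled" coordinate along the spiral.
X, t = make_swiss_roll(n_samples=1000, random_state=0)

pca = PCA(n_components=2)
X2 = pca.fit_transform(X)

# The third principal direction carries real variance, so dropping it
# flattens the spiral onto itself: points far apart along the roll
# end up close together in the 2D projection.
ratio = float(pca.explained_variance_ratio_.sum())
print(ratio)
```

The retained-variance ratio stays well below 1, which is the quantitative face of the problem: a linear projection cannot discard a dimension without losing part of the roll’s structure.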

It is obvious that PCA does not respect the manifold structure. Three dimensions are needed to describe this manifold, so no compression is achieved at all. Besides, projecting into this 3D space does not project onto the manifold: an additional regularization term must be added so that the “reduced” space carries a probability model. This is where Maximum A Posteriori estimation (or the modified Mean-Shift algorithm from Vik et al.) comes into play.

The goal of nonlinear manifold learning is to obtain the most reduced space possible, one that is densely filled with points and has a good structure. Isomap achieves the first step quite well: it can reduce the SwissRoll to a full 2-dimensional space.
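For comparison, here is a hedged sketch of the Isomap result, again assuming scikit-learn (the post does not specify an implementation, and the neighborhood size is my own choice):

```python
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

X, t = make_swiss_roll(n_samples=1000, random_state=0)

# Isomap builds a k-nearest-neighbor graph and preserves geodesic
# (along-the-roll) distances instead of straight-line 3D distances.
iso = Isomap(n_neighbors=10, n_components=2)
Y = iso.fit_transform(X)

# If the roll is unrolled correctly, the first embedding coordinate
# should track the intrinsic parameter t almost monotonically.
corr = abs(float(np.corrcoef(Y[:, 0], t)[0, 1]))
print(corr)
```

A correlation close to 1 between the first Isomap coordinate and the roll’s intrinsic parameter is what “reducing the SwissRoll to a full 2-dimensional space” means in practice.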
