I’ve already given some answers in one of my first tickets on manifold learning. Here I will give more complete results on the quality of the dimensionality reduction performed by the best-known techniques.
First of all, my test checks whether the geodesic distances are respected in the reduced space. This is not possible for every manifold, for instance a 2D Gaussian surface. I used the SCurve to build the test, because the curve has unit speed, so the distances in the coordinate space (the one I used to create the SCurve) are equal to the geodesic distances on the manifold. My test measures the matrix (Frobenius) norm between the original coordinates and the computed ones, up to an affine transform of the latter.
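To make this error measure concrete, here is a minimal sketch of how it could be computed with NumPy; the function name and the plain least-squares fit of the affine transform are illustrative choices, not necessarily the exact code behind these tests.

```python
import numpy as np

def affine_alignment_error(original, embedded):
    """Frobenius norm between the original coordinates and the embedding,
    after fitting the best affine transform of the embedding onto the
    original coordinates by least squares."""
    n = embedded.shape[0]
    # Augment the embedding with a constant column so the least-squares
    # solution contains a translation as well as a linear map.
    design = np.hstack([embedded, np.ones((n, 1))])
    coeffs, *_ = np.linalg.lstsq(design, original, rcond=None)
    residual = original - design @ coeffs
    # For a 2D array, np.linalg.norm defaults to the Frobenius norm.
    return np.linalg.norm(residual)
```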
I tested several noise levels:
- no noise
- 5% of Gaussian noise
- 2% of Laplacian noise (only 2% because the Laplacian distribution produces many outliers)
- Impulsive noise on 12.5% of the elements of the distance matrix, with 200% Laplacian noise (quantified with respect to the variance of the noise-free d1 - d2, where d2 was estimated with my robust cost function); see the sketch after this list
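As an illustration of these four settings, here is a rough sketch of how the noisy distance matrices could be generated; scaling the noise by the standard deviation of the distances is an assumption made for this sketch, whereas the actual impulsive noise was quantified with respect to the variance of the noise-free d1 - d2.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_noise(dists, level=0.05):
    """5% Gaussian noise on the distance matrix."""
    return dists + level * dists.std() * rng.standard_normal(dists.shape)

def laplacian_noise(dists, level=0.02):
    """2% Laplacian noise, kept small because of its heavy tails."""
    return dists + level * dists.std() * rng.laplace(size=dists.shape)

def impulsive_noise(dists, fraction=0.125, level=2.0):
    """200% Laplacian noise on 12.5% of the entries of the distance matrix."""
    noisy = dists.copy()
    mask = rng.random(dists.shape) < fraction
    noisy[mask] += level * dists.std() * rng.laplace(size=int(mask.sum()))
    return noisy
```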
Here are the results:
| Method | No noise | Gaussian noise 5% | Laplacian noise 2% | Impulsive noise |
| --- | --- | --- | --- | --- |
As expected, the geodesic-based algorithms perform better than every other algorithm. This is exactly what I want, because my goal is to estimate a mapping function between the reduced space and the original space. This estimation, and the effect of the reduction algorithm on it, will be the subject of future tickets.
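For completeness, here is a rough sketch of how such a comparison could be set up with scikit-learn's make_s_curve, Isomap and LocallyLinearEmbedding; these are stand-ins, not necessarily the implementations tested here, and the error helper is the one sketched after the test description.

```python
import numpy as np
from sklearn.datasets import make_s_curve
from sklearn.manifold import Isomap, LocallyLinearEmbedding

def affine_alignment_error(original, embedded):
    """Same helper as in the earlier sketch, repeated so this block runs alone."""
    design = np.hstack([embedded, np.ones((embedded.shape[0], 1))])
    coeffs, *_ = np.linalg.lstsq(design, original, rcond=None)
    return np.linalg.norm(original - design @ coeffs)

# The S-curve has unit speed, so the parameter t and the height y are the
# original coordinates whose Euclidean distances equal the geodesic distances.
points, t = make_s_curve(n_samples=1000, random_state=0)
original = np.column_stack([t, points[:, 1]])

methods = {
    "Isomap (geodesic-based)": Isomap(n_neighbors=10, n_components=2),
    "LLE": LocallyLinearEmbedding(n_neighbors=10, n_components=2, random_state=0),
}

for name, method in methods.items():
    embedded = method.fit_transform(points)
    print(name, affine_alignment_error(original, embedded))
```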