Tag Archives: Manifold learning

Dimensionality reduction: Refactoring the manifold module

It’s been a while since I last blogged about manifold learning. I don’t think I’ll add much in terms of algorithms to the scikit, but now that a clear API is being defined (http://sourceforge.net/apps/trac/scikit-learn/wiki/ApiDiscussion), it’s time for the manifold module to comply with it. Documentation will also be enhanced and some dependencies will be removed.

I’ve started a branch, available on github.com, and I will add some examples to the scikit as well. I may explain them here, but I won’t rewrite what is already published. A future post will explain the changes, and I hope that interested people will understand the modifications and apply them to my former posts. It’s just that I don’t have much time to change everything…

Dimensionality reduction: projection and classification

It has been a while since my last post on manifold learning, and I still have some things to talk about (unfortunately, this will be the last post of the dimensionality reduction series on my blog, as my current job is no longer about this topic). Once the multidimensional regression is learned, it can be used to project new samples onto the modeled manifold, and to classify data.
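The full post relies on the multidimensional regression described in the earlier posts; as that code is not shown here, the following is only a stand-in sketch using today's scikit-learn API (Isomap and a nearest-neighbour classifier are illustrative choices, not the module discussed in the post). It shows the same workflow: project new samples onto the learned manifold, then classify them in the reduced space.

```python
import numpy as np
from sklearn.datasets import make_s_curve
from sklearn.manifold import Isomap
from sklearn.neighbors import KNeighborsClassifier

# Toy data: an S-curve with illustrative binary labels
X, t = make_s_curve(n_samples=1000, random_state=0)
y = (t > t.mean()).astype(int)

# Learn the embedding on training samples only
embedder = Isomap(n_neighbors=10, n_components=2).fit(X[:800])
clf = KNeighborsClassifier().fit(embedder.transform(X[:800]), y[:800])

# New samples are projected onto the modeled manifold, then classified there
X_new = embedder.transform(X[800:])
print(clf.score(X_new, y[800:]))
```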
Continue reading Dimensionality reduction: projection and classification

Finally over!

For four years I’ve been researching the interesting field of manifold learning. I defended my PhD thesis last Friday, and I now have a PhD in electronics, electrical engineering and automation from the Université Louis Pasteur in Strasbourg, France. The years after my engineering school allowed me to read and to learn a lot, but I’m still glad they are over.

During these years, I enjoyed looking for new solutions to my problems, and it helped me discover some technologies I didn’t really master before (like Python, but also C++, parallel computing, … now I can use some of them safely). So this is my advice to every new PhD student: read a lot, and not only articles or books in your field, but also in other fields, especially the tools you may use (mainly computer science), so that you can make the best of them (it takes some time, but you’ll be rewarded in the end).

And before I forget: read http://phdcomics.com/ (my favourite character being Mike, although since he’s doing a post-doc, we don’t see him much :().

Dimensionality reduction: the scikit is available!

My manifold learning code was a technology preview in the learn scikit for some time. Now I can say that it is available (BSD license) and there should not be any obvious bugs left.

I’ve written a small tutorial. It is not a usual tutorial (there is a user tutorial and then what developers should know to enhance it), and some results of the techniques are shown on my blog. It provides the basic commands to start using the scikit yourself (reducing some data, projecting new points, …) as well as the exposed interface to enhance the scikit.

If you have any questions, feel free to ask me; I will add the answers to the tutorial page so that everyone can benefit from them.

Feel free to contribute new techniques and additional tools as well, I cannot write them all! For instance, the scikit lacks a robust neighbors selection to avoid short-cuts in the manifold…

Tutorial and the learn scikit main page.

A Metric Multidimensional Scaling-Based Nonlinear Manifold Learning Approach for Unsupervised Data Reduction

At last, my article on manifold learning has been published and is accessible via doi.org (it was not the case last week, which is why I waited before publishing this post).
The journal is free, so you won’t have to pay to read it: Access to the EURASIP JASP article

I will publish additional figures here shortly. The scikit is almost complete as well; I’m finishing the online tutorial for those who are interested in using it and/or enhancing it.

Dimensionality reduction: comparison of different methods

I’ve already given some answers in one of my first posts on manifold learning. Here I will give more complete results on the quality of the dimensionality reduction performed by the best-known techniques.

First of all, my test is about respecting the geodesic distances in the reduced space. This is not possible for some manifolds, like a 2D Gaussian surface. I used the SCurve to create the test, as the curve is parameterized at unit speed and thus the distances in the coordinate space (the one I used to create the SCurve) are the same as the geodesic distances on the manifold. My test measures the matrix (Frobenius) norm between the original coordinates and the computed ones, up to an affine transform of the latter.
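As an illustration of the measure, here is a minimal sketch (the function name is an illustrative choice) that fits the best affine transform of the embedded coordinates onto the original ones by least squares and reports the Frobenius norm of the residual:

```python
import numpy as np

def affine_residual_norm(original, embedded):
    """Frobenius norm between the original coordinates and the best
    affine transform (least-squares fit) of the embedded coordinates."""
    n = embedded.shape[0]
    Z = np.hstack([embedded, np.ones((n, 1))])   # column of ones for the offset
    A, *_ = np.linalg.lstsq(Z, original, rcond=None)
    return np.linalg.norm(original - Z @ A)
```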
Continue reading Dimensionality reduction: comparison of different methods

Dimensionality reduction: similarities graph and its use

Some of the widely used methods are based on a similarity graph built from the local structure. For instance, LLE uses relative distances, which are related to similarities. Using similarities allows the use of sparse techniques: indeed, most pairs of points are not similar, so the similarity matrix is sparse. This also means that a lot of manifolds can be reduced with these techniques that cannot be handled by Isomap or the other geodesic-based techniques.

It is worth mentioning that Laplacian Eigenmaps is the only technique I did not implement with a sparse matrix, due to the lack of a generalized eigensolver for sparse matrices, but I hope it will be available shortly.
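For the record, here is a minimal Laplacian Eigenmaps sketch with sparse matrices, assuming a SciPy version whose eigsh handles the generalized sparse eigenproblem (which was missing when this post was written); the neighbour count and heat-kernel width are illustrative choices.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh
from sklearn.neighbors import kneighbors_graph

def laplacian_eigenmaps(X, n_components=2, n_neighbors=10, t=1.0):
    # Sparse k-nearest-neighbour graph with heat-kernel similarities
    W = kneighbors_graph(X, n_neighbors, mode='distance')
    W.data = np.exp(-W.data ** 2 / t)
    W = 0.5 * (W + W.T)                      # symmetrize the similarity graph
    d = np.asarray(W.sum(axis=1)).ravel()
    D = sp.diags(d)
    L = D - W                                # unnormalized graph Laplacian
    # Smallest eigenvectors of L y = lambda D y; drop the constant one
    vals, vecs = eigsh(L, k=n_components + 1, M=D, which='SM')
    return vecs[:, 1:]
```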

Continue reading Dimensionality reduction: similarities graph and its use

Dimensionality reduction: explicit optimization of a cost function

Analytical solutions to the dimensionality reduction problem are only possible for quadratic cost functions, as in Isomap, LLE, Laplacian Eigenmaps, … All these solutions are sensitive to outliers. The issue is that the quadratic hypothesis assumes there are no outliers, but on real manifolds, noise is always present.

Some cost functions have been proposed, also known as stress functions, as they measure the difference between the estimated geodesic distance and the computed Euclidean distance in the “feature” space. Every metric MDS cost can be used as a stress function; here are some of them.
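As a toy illustration of this explicit optimization, here is a sketch minimizing the plainest least-squares stress between estimated geodesic distances and Euclidean distances in the reduced space (the stress functions discussed in the full post weight the terms differently); the optimizer and the function names are illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import pdist

def stress(y_flat, geodesic, n_points, n_components):
    """Sum of squared differences between geodesic and embedded distances."""
    Y = y_flat.reshape(n_points, n_components)
    return np.sum((geodesic - pdist(Y)) ** 2)

def reduce_with_stress(geodesic, n_points, n_components=2, seed=0):
    # 'geodesic' is a condensed distance vector, as returned by pdist
    y0 = np.random.default_rng(seed).normal(size=n_points * n_components)
    res = minimize(stress, y0, args=(geodesic, n_points, n_components),
                   method='L-BFGS-B')   # gradient estimated numerically
    return res.x.reshape(n_points, n_components)
```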

Continue reading Dimensionality reduction: explicit optimization of a cost function

Some news about the manifold learning scikit

I got word today that my paper was accepted, so I can now focus on delivering the code.

I’m in the process of refactoring it so that it depends less on some of our in-house libraries. In two weeks, there is a nipy sprint in Paris that I will attend, and machine learning is one of the topics we will discuss, so this may indicate where and how I’ll contribute the code. I will keep showing some results next week.

Dimensionality reduction: Locally Linear Embedding

One of the most cited algorithms in nonlinear manifold learning, along with Isomap, is LLE. Contrary to Isomap, LLE tries to retain the local structure of the sampled manifold: whereas Isomap preserves absolute distances, LLE preserves local relative distances (it preserves barycenter weights).

This means that LLE is not suitable for every dimensionality reduction task. For visualization purposes, it can lead to very different solutions if the manifold is noisy.
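To make the “barycenter weights” concrete, here is a small sketch (the regularization value is an illustrative choice) of the quantity LLE preserves for each sample, namely the barycentric weights that best reconstruct a point from its nearest neighbours:

```python
import numpy as np

def barycenter_weights(x, neighbors, reg=1e-3):
    """Weights, summing to one, that best reconstruct x from its neighbors."""
    Z = neighbors - x                            # center the neighbourhood on x
    G = Z @ Z.T                                  # local Gram matrix
    G += reg * np.trace(G) * np.eye(len(G))      # regularize a nearly singular G
    w = np.linalg.solve(G, np.ones(len(G)))
    return w / w.sum()
```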
Continue reading Dimensionality reduction: Locally Linear Embedding