Monitoring embedded space quality for classification

This entry is part 2 of 2 in the series Deep adventures

A few weeks ago on Stack Overflow, a user asked for an accuracy measure on the embedded space of an autoencoder. The question was about Keras, but I thought it would be a nice exercise for TensorFlow as well.

The idea is to add a few layers on top of the embedded space to create a classifier, and to measure its accuracy while we optimize the autoencoder.

We will train the autoencoder and the classifier in alternation: while one is updated, the other is frozen. This way, we can monitor classification accuracy and reconstruction loss concurrently in TensorBoard.
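The original TensorFlow code lived in an embedded gist, but the alternating scheme itself can be sketched with a toy linear model in plain NumPy (everything here is illustrative: synthetic two-cluster data, a linear encoder/decoder, and a linear classifier head, not the networks from the post):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the real data: two Gaussian clusters in 4-D,
# labelled 0 and 1.
n = 200
X = np.vstack([rng.normal(-1.0, 0.3, size=(n // 2, 4)),
               rng.normal(+1.0, 0.3, size=(n // 2, 4))])
y = np.array([0] * (n // 2) + [1] * (n // 2))

d, k, c = 4, 2, 2                          # input dim, embedded dim, classes
W_e = rng.normal(scale=0.1, size=(d, k))   # encoder weights
W_d = rng.normal(scale=0.1, size=(k, d))   # decoder weights
W_c = rng.normal(scale=0.1, size=(k, c))   # classifier head on the code
lr = 0.02

def recon_loss():
    R = X @ W_e @ W_d - X
    return float((R * R).mean())

def accuracy():
    return float(((X @ W_e @ W_c).argmax(axis=1) == y).mean())

loss_start = recon_loss()
for epoch in range(300):
    # Phase 1: update the autoencoder; the classifier head is frozen.
    Z = X @ W_e
    R = Z @ W_d - X                        # reconstruction residual
    g_d = (2.0 / n) * Z.T @ R              # gradient of the MSE w.r.t. W_d
    g_e = (2.0 / n) * X.T @ R @ W_d.T      # gradient of the MSE w.r.t. W_e
    W_d -= lr * g_d
    W_e -= lr * g_e

    # Phase 2: update the classifier head; encoder and decoder are frozen.
    Z = X @ W_e
    logits = Z @ W_c
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)      # softmax probabilities
    P[np.arange(n), y] -= 1.0              # softmax minus one-hot target
    W_c -= lr * Z.T @ P / n                # cross-entropy gradient step

print(recon_loss() < loss_start, accuracy())
```

In a real TensorFlow graph, the two phases would be two optimizer ops, each given only its own variable list, so that running one leaves the other network's weights untouched.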

Let’s see all this in action.

(I forgot to mention that I’m also reducing the number of nodes in each layer in this example, compared to what I used in the book.)


We should also have a look at the loss and the accuracy over time in TensorBoard.

TensorBoard cost and accuracy reports

The curve for the first network is shorter than the other two, as the alternating training doubles the number of epochs.

The interesting bit is that the autoencoders have trouble reaching a low cost but get optimized quickly, whereas the classifier requires more time. This makes sense: the autoencoder doesn’t have enough capacity (the embedded space is not descriptive enough) to recreate proper images, and the classifier has to work with an embedded space that is still being built.

Of course, with a 3D space, our accuracy skyrockets (not perfect, but an order of magnitude better than in 2D), while the reconstruction loss decreases only a little. What we can say, and I think it’s true in general, is that an autoencoder needs more capacity than a classifier: it must retain information that a classifier can discard but that is essential to make the reconstructed data look real.

We can also check that our accuracy at the beginning is 10%, which is what a random classifier would score on ten balanced classes, so that’s a good sanity check as well.
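That 10% baseline is easy to confirm: a uniformly random classifier over ten balanced classes lands near 1/10. A quick illustrative simulation:

```python
import numpy as np

rng = np.random.default_rng(42)
y = np.repeat(np.arange(10), 1000)         # 10 balanced classes, 10000 samples
preds = rng.integers(0, 10, size=y.size)   # uniformly random predictions
acc = (preds == y).mean()
print(acc)  # close to 0.10
```

So any accuracy curve that starts well above 0.10 at epoch zero would be a sign that something is wrong with the evaluation.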

Just for reference, here is the graph for the last network:

TensorBoard graph for the third network


Here, I output the embedded space only after training, and I used the classifier accuracy as a single number to check that the embedded space was meaningful. I could have created an image at each epoch instead of only at the end, but the accuracy alone gave me confidence in the quality of the embedded space.

I will probably try to port this exercise to Keras at some point, but that is for the future.

