Audio ToolKit is finally updated to 3.2.0. The main changes in this release are a cleanup of the API and better C++17 support with variable defaulting.
While I was reading an article on Google's latest Deep Learning achievement, I was reminded of a previous discussion with former colleagues about replacing reservoir simulations with neural networks. At the time, I dismissed the idea as ridiculous because of the complexity of the task and the amount of training it would require.
But now, Google seems to have done it. Or have they?
On my quest for a good Flask book, I saw this book from Tarek Ziade. We are more or less of the same generation, both from France, and he wrote a far better introductory Python book in French than mine. He also founded the French Python community (AFPY), so I have always had huge respect for the guy. And the book looked appetizing.
I’m thinking of writing a Web service for a project of mine. For this purpose, I wanted to learn Flask (and a bunch of other technologies), as Flask seems well established and well documented. This is a book from Packt that agglomerates three previously released books. One of the main questions is how relevant they remain as the Flask API evolves.
ATK is updated to 3.1.0 with heavy code refactoring. Support for old C++ standards has been dropped, and ATK now requires a fully C++17-compliant compiler.
The main difference in filter support is that the explicit SIMD filters using libsimdpp have been dropped while waiting for tr2::simd to become standard and supported by gcc, clang and Visual Studio.
All major cloud providers offer some support for Machine Learning algorithms, and these services evolve all the time. There are not many books on the subject, precisely because the services change so quickly, so let’s have a look at this one.
A few weeks ago, on StackOverflow, a user asked for an accuracy measure on the embedded space of an autoencoder. The question was about Keras, but I thought it would be a nice exercise for TensorFlow as well.
The idea in this case is to add a few layers on top of the embedded space to create a classifier and measure its accuracy while we optimize the autoencoder.
We will train the autoencoder in alternation with the classifier: when one is updated, the other is frozen, so we can track classification accuracy and reconstruction loss concurrently in TensorBoard.
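Here is a minimal TensorFlow 2 sketch of that alternating scheme; the layer sizes, optimizers and the flattened MNIST-like input shape are my own illustrative choices, not the exact code from the original answer.

```python
import tensorflow as tf

# Illustrative shapes: flattened 28x28 inputs, a 32-dimensional embedded space, 10 classes.
encoder = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(32, activation="relu"),
])
decoder = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(32,)),
    tf.keras.layers.Dense(784, activation="sigmoid"),
])
classifier = tf.keras.Sequential([
    tf.keras.layers.Dense(10, input_shape=(32,)),  # small head probing the embedded space
])

ae_opt = tf.keras.optimizers.Adam()
clf_opt = tf.keras.optimizers.Adam()
accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
writer = tf.summary.create_file_writer("logs/ae_probe")

def train_step(x, y, step):
    # 1) Update the autoencoder; the classifier head is left untouched (frozen).
    with tf.GradientTape() as tape:
        recon = decoder(encoder(x))
        rec_loss = tf.reduce_mean(tf.square(recon - x))
    ae_vars = encoder.trainable_variables + decoder.trainable_variables
    ae_opt.apply_gradients(zip(tape.gradient(rec_loss, ae_vars), ae_vars))

    # 2) Update only the classifier head on the (frozen) embedded space.
    with tf.GradientTape() as tape:
        z = tf.stop_gradient(encoder(x))  # no gradient flows back into the encoder
        logits = classifier(z)
        clf_loss = tf.reduce_mean(
            tf.keras.losses.sparse_categorical_crossentropy(y, logits, from_logits=True))
    clf_vars = classifier.trainable_variables
    clf_opt.apply_gradients(zip(tape.gradient(clf_loss, clf_vars), clf_vars))

    # Log both quantities so they show up side by side in TensorBoard.
    accuracy.update_state(y, logits)
    with writer.as_default():
        tf.summary.scalar("reconstruction_loss", rec_loss, step=step)
        tf.summary.scalar("embedding_accuracy", accuracy.result(), step=step)
```

A training loop then just feeds batches, e.g. `for step, (x, y) in enumerate(dataset): train_step(x, y, step)`, with `x` flattened to float vectors in [0, 1].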
Today, I’m presenting at ADC the work I have done on analog modelling over the past year.
I will write a more detailed post later this year, but I’d like to put a few teasers here. SPICE netlists are an efficient way of representing electronic circuits, and there are several very good free and commercial simulators. Unfortunately, they are not easy to integrate into a VST plugin.
Audio ToolKit now has a sister project around this topic. The lite version is also licensed under the BSD license and can generate a dynamic filter from a netlist. The full project can now also generate a static filter, with a source file (compiled in memory) that can be manually tuned.
Future work on this project includes different solvers for the static filter, as well as a tuner that can drop entries in the Jacobian (full entries or component contributions for a given pin) of the Newton-Raphson solver.
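To give an idea of what such a filter does under the hood, here is a toy Newton-Raphson solve for a single-node circuit (a series resistor feeding a diode to ground); the component values and helper names are mine, purely for illustration, and not taken from the actual generated code.

```python
import numpy as np

# Illustrative component values for a series resistor into a diode to ground.
R = 2.2e3       # series resistance (ohm)
I_S = 1e-14     # diode saturation current (A)
V_T = 25.85e-3  # thermal voltage (V)

def solve_node(v_in, v_guess=0.0, tol=1e-9, max_iter=50):
    """Newton-Raphson solve of the nodal equation (v_in - v)/R = I_S*(exp(v/V_T) - 1)."""
    v = v_guess
    for _ in range(max_iter):
        f = (v_in - v) / R - I_S * (np.exp(v / V_T) - 1.0)  # residual (KCL at the node)
        jac = -1.0 / R - I_S / V_T * np.exp(v / V_T)        # df/dv, the 1x1 Jacobian
        dv = f / jac
        v -= dv
        if abs(dv) < tol:
            break
    return v

def process(buffer):
    """Process a buffer sample by sample, warm-starting each solve with the previous voltage."""
    out = np.empty_like(buffer)
    v = 0.0
    for i, v_in in enumerate(buffer):
        v = solve_node(v_in, v_guess=v)
        out[i] = v
    return out
```

In the real multi-node case the Jacobian is a matrix, so dropping entries or per-pin component contributions, as mentioned above, sparsifies it and makes each Newton-Raphson iteration cheaper.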