My last blog post on optimization helped me generate orthogonal sequences. Now I will use those sequences to separate two signals. The basic use case is a linear system with two inputs and one output: instead of recording the response to each input separately, one plays both inputs simultaneously with specific sequences, so that the two responses can be separated afterwards.
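To make the idea concrete, here is a minimal sketch in Python/NumPy. Everything in it is made up for illustration (sequence length, toy impulse responses, variable names); the point is only that if each excitation sequence has an impulse-like autocorrelation and the two are orthogonal, circular cross-correlation of the single output with each sequence recovers the corresponding impulse response:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1024
# Stand-ins for the two orthogonal excitation sequences; in practice
# the combs generated in the previous post would be used here.
s1 = rng.choice([-1.0, 1.0], n)
s2 = rng.choice([-1.0, 1.0], n)

# Toy impulse responses of the two input paths of the linear system.
h1 = np.array([1.0, 0.5, 0.25])
h2 = np.array([0.8, -0.3, 0.1])

# The single recorded output is the sum of both (circular) convolutions.
S1, S2 = np.fft.fft(s1), np.fft.fft(s2)
y = np.fft.ifft(S1 * np.fft.fft(h1, n) + S2 * np.fft.fft(h2, n)).real

# Circular cross-correlation with each sequence separates the responses:
# if |S1|^2 is close to n (impulse-like autocorrelation), the first term
# is close to h1, and the leakage from s2 stays small when s1 and s2 are
# nearly orthogonal.
est_h1 = np.fft.ifft(np.fft.fft(y) * np.conj(S1)).real / n
est_h2 = np.fft.ifft(np.fft.fft(y) * np.conj(S2)).real / n

print(est_h1[:4])  # close to [1.0, 0.5, 0.25, 0.0], up to correlation noise
print(est_h2[:4])  # close to [0.8, -0.3, 0.1, 0.0]
```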
Year: 2010
When twenty or so language creators are brought together to make a book, it can only be interesting. It’s a good revealer of character, as they tend to open their hearts. In fact, I think that’s exactly what happened in this book.
Although I’m fond of numerical optimization through gradients… there are times when a global optimization is much more powerful. For instance, I have to generate two sequences/combs that are orthogonal and whose autocorrelation is almost an impulse. The two combs have a fixed number of impulses, so it’s a perfect job for genetic algorithms.
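For flavor, here is a minimal evolutionary sketch in Python/NumPy. It is not the actual implementation: the comb length, impulse count and population parameters are assumed, and it uses mutation-only selection rather than a full GA with crossover. Each individual is a pair of combs with a fixed number of impulses, and the fitness penalizes autocorrelation sidelobes as well as cross-correlation between the two combs:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 64, 8                 # comb length and impulse count (assumed values)
POP, GENERATIONS = 100, 500  # population parameters (assumed values)

def comb(positions):
    c = np.zeros(N)
    c[list(positions)] = 1.0
    return c

def fitness(ind):
    # ind is a pair of impulse-position sets; lower correlation peaks are better
    c1, c2 = comb(ind[0]), comb(ind[1])
    ac1 = np.correlate(c1, c1, "full"); ac1[N - 1] = 0.0  # ignore zero lag
    ac2 = np.correlate(c2, c2, "full"); ac2[N - 1] = 0.0
    cc = np.correlate(c1, c2, "full")
    return -(ac1.max() + ac2.max() + cc.max())

def random_individual():
    return tuple(frozenset(rng.choice(N, K, replace=False)) for _ in range(2))

def mutate(ind):
    # move one impulse of one comb to a currently free position
    which = int(rng.integers(2))
    positions = set(ind[which])
    positions.remove(rng.choice(sorted(positions)))
    positions.add(int(rng.choice(sorted(set(range(N)) - positions))))
    new = list(ind)
    new[which] = frozenset(positions)
    return tuple(new)

population = [random_individual() for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP // 2]           # keep the best half
    population = survivors + [mutate(s) for s in survivors]

best = max(population, key=fitness)
print(sorted(best[0]), sorted(best[1]), fitness(best))
```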
It’s been a while since I last blogged about manifold learning. I don’t think I’ll add much in terms of algorithms to the scikit, but now that a clear API is being defined (http://sourceforge.net/apps/trac/scikit-learn/wiki/ApiDiscussion), it’s time for the manifold module to comply with it. Documentation will also be enhanced, and some dependencies will be removed.
I’ve started a branch available on github.com, and I will add some examples to the scikit as well. I may explain them here, but I won’t rewrite what is already published. A future post will explain the changes, and I hope that interested people will understand the modifications and apply them to my former posts. It’s just that I don’t have much time to change everything…
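As a taste of what compliance might look like, here is a hypothetical sketch based on my reading of the API discussion linked above; the class, hyperparameters and attribute names are illustrative, not the final interface:

```python
import numpy as np

class ManifoldEstimator:
    # Hypothetical estimator: hyperparameters go in the constructor...
    def __init__(self, n_neighbors=5, n_components=2):
        self.n_neighbors = n_neighbors
        self.n_components = n_components

    # ...and fit(X) does the actual work, storing its results on the
    # instance and returning self so calls can be chained.
    def fit(self, X):
        X = np.asarray(X)
        self.embedding_ = self._embed(X)
        return self

    def _embed(self, X):
        # stand-in so the sketch runs: keep the first n_components axes
        return X[:, : self.n_components]

embedding = ManifoldEstimator(n_neighbors=8).fit(np.random.rand(100, 10)).embedding_
```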
After my last post on QtAgain, I’ve decided to test a few simple digital filters. I’ve tried to make them as generic as possible, and to give them a VST interface.
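The plugins themselves presumably live in C++ behind the VST interface, but the core of such a simple digital filter fits in a few lines. Here is a generic Direct Form I biquad sketched in Python for illustration; the coefficients below are made up (though stable and low-pass-ish), not those of any actual plugin:

```python
import numpy as np

def biquad(x, b0, b1, b2, a1, a2):
    # Direct Form I:
    # y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]
    y = np.zeros(len(x))
    x1 = x2 = y1 = y2 = 0.0
    for n, xn in enumerate(x):
        yn = b0 * xn + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1 = x1, xn
        y2, y1 = y1, yn
        y[n] = yn
    return y

# filter one second of noise at 48 kHz
signal = np.random.randn(48000)
filtered = biquad(signal, b0=0.2, b1=0.4, b2=0.2, a1=-0.1, a2=0.3)
```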
In my last post about optimization, I derived my function analytically. Sometimes it’s not that easy; sometimes, too, a simple gradient optimization is not enough.
scikits.optimization has a special class for handling numerical differentiation, and several tools for conjugate gradients.
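I won’t reproduce the package’s exact classes here, but the idea behind numerical differentiation is simple enough to sketch with central finite differences; the test function and step size below are just an example:

```python
import numpy as np

def numerical_gradient(f, x, eps=1e-6):
    # central differences: df/dx_i ~ (f(x + eps*e_i) - f(x - eps*e_i)) / (2*eps)
    x = np.asarray(x, dtype=float)
    grad = np.empty_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        grad[i] = (f(x + e) - f(x - e)) / (2.0 * eps)
    return grad

# example: the Rosenbrock function, without deriving its gradient by hand
def rosenbrock(x):
    return (1.0 - x[0]) ** 2 + 100.0 * (x[1] - x[0] ** 2) ** 2

print(numerical_gradient(rosenbrock, [0.5, 0.5]))
# the analytic gradient at (0.5, 0.5) is [-51.0, 50.0] for comparison
```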
There are now several petaflop clusters in the Top500. Of course, we are trying to get the most out of their peak computational power, but I think we should sometimes also look at optimal resource allocation.
I’ve been thinking about this for several months now, for work that involves thousands of tasks, each task being massively data parallel. Traditionally, one launches a job through one’s favorite batch scheduler (favorite or mandatory…) with fixed resources and for an estimated amount of time. This may work well in research, but in the industrial world, a new job often arises that needs part of your scarce resources. You may have to stop your work, lose your current progress, and/or restart the job with fewer resources. And then the cycle goes on.
After last week’s review, I’ve decided to try another book from a publisher with much higher standards, Springer. The price is also far higher, but it covers what I think are currently the best options for building automation.
These last few days, I was looking for building automation tools (I’m investigating the technology I may use in my future home), so I borrowed this book. It seemed to be on a par with my ideal of home automation: Linux as the foundation for steering the automation. Let’s see if it kept its promises.