Category Archives: Generic optimizers

All about the generic optimization framework available in scikits.openopt.solvers.optimizers

Announcement: scikits.optimization 0.3

I’m pleased to announce a new version of scikits.optimization. The main focus of this iteration was to finish the usual unconstrained optimization algorithms.

Changelog

  • Fixes on the Simplex state implementation
  • Added several Quasi-Newton steps (BFGS, rank 1 update…); see the sketch below
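
For reference, the formulas behind those new steps are short. Here is a standalone sketch of the BFGS and symmetric rank-1 updates of the inverse Hessian approximation, in plain NumPy rather than the scikit's own classes:

```python
import numpy as np

def bfgs_update(Hinv, s, y):
    """BFGS update of the inverse Hessian approximation.

    s = x_{k+1} - x_k, y = grad_{k+1} - grad_k.
    """
    rho = 1.0 / (y @ s)
    V = np.eye(len(s)) - rho * np.outer(s, y)
    return V @ Hinv @ V.T + rho * np.outer(s, s)

def sr1_update(Hinv, s, y):
    """Symmetric rank-1 (SR1) update of the inverse Hessian approximation."""
    d = s - Hinv @ y
    return Hinv + np.outer(d, d) / (d @ y)
```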

The scikit can be installed with pip/easy_install or downloaded from PyPI


Optimization scikit: Polytope (Simplex/Nelder-Mead) optimization

Now that version 0.2 of scikits.optimization is out, here is a tutorial on the gradient-free optimizer based on the simplex algorithm.

When the only thing you have is the cost function, and you don’t have dozens of parameters, the first thing to try is a simplex algorithm.
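
As a quick taste of what a simplex run looks like before diving into the tutorial, here is the same idea with SciPy’s Nelder-Mead implementation; the names below are SciPy’s, not the scikit’s:

```python
import numpy as np
from scipy.optimize import minimize

def cost(x):
    """A cost function we can only evaluate: no gradient available."""
    return (x[0] - 2.0) ** 2 + (2.0 * x[1] + 4.0) ** 2

res = minimize(cost, x0=np.zeros(2), method="Nelder-Mead",
               options={"xatol": 1e-6, "fatol": 1e-6})
print(res.x)  # close to [2, -2]
```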

Continue reading Optimization scikit: Polytope (Simplex/Nelder-Mead) optimization

Announcement: scikits.optimization 0.2

It has been a while, too long for sure, since my last update on this scikit. I’m pleased to announce that several algorithms, as well as their tests, are finally fixed.

Changelog:

  • Fixed Polytope/Simplex/Nelder-Mead
  • Fixed the Quadratic Hessian helper class; a sketch of such a helper follows below
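
To give an idea of what that helper describes: the scikit drives its optimizers with plain function objects, and a quadratic one can expose the exact cost, gradient and Hessian. A minimal standalone sketch (the __call__/gradient/hessian method names are an assumption here, not necessarily the scikit’s exact interface):

```python
import numpy as np

class Quadratic(object):
    """f(x) = 0.5 * x' A x - b' x, with exact gradient and Hessian."""
    def __init__(self, A, b):
        self.A = np.asarray(A)
        self.b = np.asarray(b)

    def __call__(self, x):
        return 0.5 * x @ self.A @ x - self.b @ x

    def gradient(self, x):
        return self.A @ x - self.b

    def hessian(self, x):
        return self.A
```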

Additional tutorials will be available in the coming weeks.


Just a small example of numerical optimization in C++

I had to port to C++ a simplex/Nelder-Mead optimizer that I already had in Python. As with the Python version, I tried to be as generic as possible while staying efficient, so the state is no longer a dictionary but a simple structure.
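
To make the remark about the state concrete, here is a small Python-side illustration of the contrast; the field names are only illustrative:

```python
from dataclasses import dataclass
import numpy as np

# Python version: an open dictionary, flexible but stringly-typed.
state = {"x": np.zeros(2), "value": 0.0, "iteration": 0}

# C++-style version: a fixed structure, mirrored here as a dataclass.
@dataclass
class State:
    x: np.ndarray
    value: float
    iteration: int
```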

I could have used the Numerical Recipes version, but the license cost is not worth it, and the code is neither generic enough nor well enough explained. There are also some questionable design decisions (the one method = one responsibility principle is not followed).

Continue reading Just a small example of numerical optimization in C++

Optimization scikit: separation of orthogonally convoluted signals

My last blog post on optimization helped me generate orthogonal sequences. Now, I will use those sequences to separate two signals. The basic use case is a linear system with two inputs and one output: instead of recording the response to one input at a time, one plays both inputs simultaneously with specific sequences, so that the two responses can be separated afterwards.
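
Here is a small NumPy sketch of the idea, with random ±1 sequences standing in for the combs (their autocorrelation is almost an impulse and their cross-correlation almost zero); every name and value is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096
# Stand-ins for the orthogonal sequences.
s1 = rng.choice([-1.0, 1.0], n)
s2 = rng.choice([-1.0, 1.0], n)

# Unknown impulse responses of the two input paths.
h1 = np.array([1.0, 0.5, 0.25])
h2 = np.array([0.0, -1.0, 0.3])

# Single recorded output: both inputs are played at the same time.
y = np.convolve(s1, h1) + np.convolve(s2, h2)

def estimate(y, s, m):
    """Correlate the output with one sequence to isolate the matching path."""
    c = np.correlate(y, s, mode="full")
    lag0 = len(s) - 1  # index of zero lag
    return c[lag0:lag0 + m] / len(s)

print(estimate(y, s1, 3))  # approximately h1
print(estimate(y, s2, 3))  # approximately h2
```
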
Continue reading Optimization scikit: separation of orthogonally convoluted signals

Genetic algorithms in Python

Although I’m fond of gradient-based numerical optimization, there are times when global optimization is much more powerful. For instance, I have to generate two sequences/combs that are orthogonal to each other and whose autocorrelation is almost an impulse. The two combs have a fixed number of impulses, so it’s a perfect job for genetic algorithms.
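
As a compact, self-contained sketch of such a genetic algorithm (everything below, from the fitness function to the operators, is an illustrative assumption, not the code from the post): each individual holds the impulse positions of both combs, and the fitness penalizes off-peak autocorrelation and cross-correlation energy.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 128, 8            # comb length, impulses per comb
POP, GEN, PMUT = 60, 200, 0.2

def make_comb(pos):
    c = np.zeros(N)
    c[pos] = 1.0
    return c

def fitness(ind):
    """Lower is better: off-peak autocorrelation plus cross-correlation energy."""
    c1, c2 = make_comb(ind[:K]), make_comb(ind[K:])
    a1 = np.correlate(c1, c1, "full"); a1[N - 1] = 0.0  # remove the main peak
    a2 = np.correlate(c2, c2, "full"); a2[N - 1] = 0.0
    x = np.correlate(c1, c2, "full")
    return (a1 ** 2).sum() + (a2 ** 2).sum() + (x ** 2).sum()

def repair(pos):
    """Keep K distinct impulse positions."""
    pos = list(dict.fromkeys(int(p) for p in pos))
    while len(pos) < K:
        p = int(rng.integers(N))
        if p not in pos:
            pos.append(p)
    return np.array(pos[:K])

def crossover(a, b):
    mask = rng.random(2 * K) < 0.5
    child = np.where(mask, a, b)
    return np.concatenate([repair(child[:K]), repair(child[K:])])

def mutate(ind):
    ind = ind.copy()
    if rng.random() < PMUT:
        half = int(rng.integers(2)) * K          # pick one comb
        ind[half + rng.integers(K)] = rng.integers(N)
        ind[half:half + K] = repair(ind[half:half + K])
    return ind

pop = [np.concatenate([rng.choice(N, K, replace=False),
                       rng.choice(N, K, replace=False)]) for _ in range(POP)]
for _ in range(GEN):
    pop.sort(key=fitness)                        # truncation selection
    elite = pop[:POP // 2]
    pop = elite + [mutate(crossover(elite[rng.integers(len(elite))],
                                    elite[rng.integers(len(elite))]))
                   for _ in range(POP - len(elite))]
print("best fitness:", fitness(min(pop, key=fitness)))
```
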
Continue reading Genetic algorithms in Python

Optimization scikit: a conjugate-gradient optimization

In my last post about optimization, I differentiated my function analytically. Sometimes it’s not that easy, and sometimes a simple gradient optimization is not enough.

scikits.optimization has a special class for handling numerical differentiation, and several tools for conjugate gradients.
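
As a standalone illustration of both ingredients (a sketch, not the scikit’s API): a centered finite-differences gradient and a nonlinear conjugate-gradient loop with Polak-Ribière directions and a simple backtracking line search.

```python
import numpy as np

def numerical_gradient(f, x, eps=1e-6):
    """Centered finite differences: the usual stand-in for an analytical gradient."""
    g = np.empty_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

def conjugate_gradient(f, x, iters=100, tol=1e-8):
    """Nonlinear CG with Polak-Ribiere+ directions and backtracking line search."""
    g = numerical_gradient(f, x)
    d = -g
    for _ in range(iters):
        alpha = 1.0
        while f(x + alpha * d) > f(x) + 1e-4 * alpha * (g @ d) and alpha > 1e-12:
            alpha *= 0.5
        x = x + alpha * d
        g_new = numerical_gradient(f, x)
        if np.linalg.norm(g_new) < tol:
            break
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))  # Polak-Ribiere+
        d = -g_new + beta * d
        g = g_new
    return x

print(conjugate_gradient(lambda x: (x[0] - 1.0) ** 2 + 10.0 * (x[1] + 2.0) ** 2,
                         np.array([0.0, 0.0])))
```
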
Continue reading Optimization scikit: a conjugate-gradient optimization

Optimization scikit: a gradient-based optimization

Last time, I showed a simple example of gradient-free optimization. Now, I’d like to use the gradient of my function (an analytical gradient I’ve computed by hand) to reach the minimum in fewer iterations.
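
A minimal sketch of that kind of run, with a hand-computed gradient and a basic backtracking line search (illustrative code, not the scikit’s API):

```python
import numpy as np

def cost(x):
    return (x[0] - 2.0) ** 2 + (2.0 * x[1] + 4.0) ** 2

def gradient(x):
    """Analytical gradient of cost, computed by hand."""
    return np.array([2.0 * (x[0] - 2.0), 4.0 * (2.0 * x[1] + 4.0)])

x = np.array([0.0, 0.0])
for _ in range(100):
    g = gradient(x)
    alpha = 1.0
    # Backtracking (Armijo) line search along the steepest-descent direction.
    while cost(x - alpha * g) > cost(x) - 1e-4 * alpha * (g @ g):
        alpha *= 0.5
    x = x - alpha * g
print(x)  # close to [2, -2]
```
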
Continue reading Optimization scikit: a gradient-based optimization