When I worked on the common reflection surface stack, one of our biggest issues was selecting the proper optimization algorithm. There are so many for global problems! The book tries to survey several classical algorithms.
Content and opinions
First things first: I read the first edition. I couldn’t get the latest version, as the authors switched publishers, so I only had access to the old one (French ranting: why are scientific papers/books/… so expensive in digital format???). I could still check the first chapter and see how it was improved, and based on the former content and the new content, I think I can also give an opinion on the second edition.
The book’s main format is to have all the mathematical explanations in dedicated chapters, and then, in the last chapters, to use the methods to solve different geophysical problems. The authors tried to use as much geoscientific vocabulary as possible. That vocabulary is the only real geoscientific material in the first chapters, as they are mainly presentations of methods that could be found in any book.
So the first chapter presents all the general statistical tools you need. Although the rest of the book uses all of them, the central point is random number generation, as all the following methods use it one way or another. My biggest concern is that the authors think random numbers can be generated with a simple modulo-based generator. When you are doing statistics on the resulting numbers, that sounds really odd. In the ’90s this could be excused: there were not that many good random number generators, and the Mersenne Twister appeared a few years after the first edition, so OK. But how come this is still the case in the second edition? We can now generate random numbers of almost cryptographic quality (see Random123) with no additional complexity, and some people are still advocating an outdated practice? This mainly means one thing: content was added to the book, but nothing was updated. So when reading the book, you need to keep this in mind.
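To show what I mean (my own sketch, not code from the book), NumPy ships the Philox generator from the Random123 family, and it is just as easy to call as the old modulo-based recipe:

```python
import numpy as np

# A simple modulo-based generator (linear congruential), the kind of
# recipe the book still presents: short period, correlated low bits.
def lcg(seed, n, a=1103515245, c=12345, m=2**31):
    x = seed
    out = []
    for _ in range(n):
        x = (a * x + c) % m
        out.append(x / m)
    return np.array(out)

old_school = lcg(42, 1000)

# A counter-based generator from the Random123 family (Philox),
# available directly in NumPy: same ease of use, far better quality.
rng = np.random.Generator(np.random.Philox(seed=42))
modern = rng.random(1000)
```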
The next chapter tackles direct, linear and iterative linear inverse methods. Some of these methods are specific to the geophysical inverse problem: contrary to CT, you can’t turn the object you want to “image”, you only get one aspect, one view. So this means that you need to use specific algorithms. But it still boils down to the same usual cost functions, so even though the algorithms themselves can be specific, there is a general aspect to the inverse problem. The last part of the chapter is about the probabilistic formulation and the usual maximum likelihood and maximum a posteriori.
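To make the link between the cost-function view and the probabilistic view concrete (a minimal sketch of my own on a made-up toy linear problem, not an example from the book): with Gaussian noise, maximum likelihood is plain least squares, and adding a Gaussian prior turns maximum a posteriori into a damped least-squares solve.

```python
import numpy as np

# Hypothetical toy linear forward problem d = G m + noise.
rng = np.random.default_rng(0)
G = rng.normal(size=(20, 5))                      # forward operator
m_true = np.array([1.0, -2.0, 0.5, 3.0, -1.0])    # true model
d = G @ m_true + 0.1 * rng.normal(size=20)        # noisy data

# Maximum likelihood with Gaussian noise = ordinary least squares.
m_ml, *_ = np.linalg.lstsq(G, d, rcond=None)

# Maximum a posteriori with a Gaussian prior of precision alpha
# = Tikhonov-damped least squares.
alpha = 0.1
m_map = np.linalg.solve(G.T @ G + alpha * np.eye(5), G.T @ d)
```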
The third chapter starts on the issue of solving a problem on a search space with several local minima, where the previous methods would fail. After the simple grid search (always useful when you want a broad picture of the problem) come the Monte Carlo methods. They are usually very costly, but they are a simple statistical tool to sample a problem after a grid search.
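A minimal sketch of that workflow (my own illustration on a made-up 1-D misfit, not the book’s code): a coarse grid to locate a promising region, then plain Monte Carlo sampling inside it.

```python
import numpy as np

# Hypothetical 1-D misfit with several local minima.
def misfit(x):
    return np.sin(3.0 * x) + 0.1 * x ** 2

# Coarse grid search: cheap, gives a broad picture of the landscape.
grid = np.linspace(-10.0, 10.0, 201)
center = grid[np.argmin(misfit(grid))]

# Monte Carlo sampling around the promising region: costly but simple,
# and it starts giving a statistical picture of the misfit.
rng = np.random.default_rng(1)
samples = rng.uniform(center - 1.0, center + 1.0, size=5000)
x_best = samples[np.argmin(misfit(samples))]
```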
Then Simulated Annealing is introduced. It is logical to have it after the MC methods, as it can be seen as a better way of sampling the search space for the best solution. I appreciated the fact that several different ways of doing SA are explained, with their issues and their qualities. The following chapter covers Genetic Algorithms. There are different ways to introduce them, and I have another book review on the subject that I will publish soon. The explanation here is really basic, but the main points are there. You could do better, but you could do worse.
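For reference, all the SA variants in the chapter build on the same bare-bones Metropolis-style loop; this is my own sketch of that textbook skeleton, with an arbitrary geometric cooling schedule, not the authors’ implementation.

```python
import numpy as np

# Same kind of hypothetical multimodal misfit as above.
def misfit(x):
    return np.sin(3.0 * x) + 0.1 * x ** 2

rng = np.random.default_rng(2)
x = 0.0
temperature = 1.0
for _ in range(5000):
    candidate = x + rng.normal(scale=0.5)
    delta = misfit(candidate) - misfit(x)
    # Metropolis rule: always accept improvements, sometimes accept
    # uphill moves, with a probability that shrinks as T cools down.
    if delta < 0 or rng.random() < np.exp(-delta / temperature):
        x = candidate
    temperature *= 0.999  # geometric cooling schedule
```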
There is an additional chapter in the second edition on other evolutionary algorithms. I don’t think that in 6 pages you can tackle everything, but these algorithms are a little bit more trendy than SA and the simple GA, so it is a welcome addition.
The last, but not least, chapter presents several applications of SA and GA. They don’t tackle real field-scale tests, only one group of traces, but at least you can see something, and it is also reproducible if you want. There are (too?) many images, seismograms with everything that happens, explanations on the number of iterations… I’m still missing the total run time for the examples, as it is an important aspect when doing field-scale experiments, as well as some additional robustness tests. There is one additional part in the second edition about joint inversion.
The book finishes with uncertainties on the solutions. All methods draw some kind of statistical model, so it is possible to get uncertainties thanks to the thousands of drawn models. One warning is missing, though: samplers have a starting period before they are stable, and this section doesn’t mention it.
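In practice that warning boils down to something like this (my own self-contained sketch; the chain is faked and the burn-in length is arbitrary): drop the first part of the chain before computing means and standard deviations.

```python
import numpy as np

# Hypothetical chain of sampled models (n_iterations x n_params), faked
# here with an early drift to mimic a sampler that is not yet stable.
rng = np.random.default_rng(3)
chain = rng.normal(size=(10000, 4))
chain[:2000] += np.linspace(5.0, 0.0, 2000)[:, None]  # burn-in drift

# Discard the burn-in period before computing uncertainties, otherwise
# the early, unstable samples bias the statistics.
burn_in = 2000
stable = chain[burn_in:]
mean_model = stable.mean(axis=0)
std_model = stable.std(axis=0)
```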
Conclusion
The book tries to bridge a rather large gap, between new global optimization methods (even now, some of them can still be considered new) and the oil and gas industry. I have always seen that industry as being late on new algorithms on the geophysics side (reservoir teams tend to be less late, perhaps because of the far smaller scale of their data), so it is interesting to see a book, with all its missed opportunities, trying to bridge that gap.