Tag Archives: Grid computing

On the importance of not reinventing the wheel in distributed applications

Sometimes it’s all too easy to rewrite existing code because it doesn’t exactly fit the bill.

I just saw an example of this with an all-to-all communication written by hand. The goal was to share how many elements would be sent from one MPI process to another, and these elements were stored on each process in separate structure instances, one for each MPI process. So in the end, you had n structures on each of the n MPI processes.

MPI_Alltoall cannot map directly onto this scattered structure, so it sounds fair to assume that using MPI_Isend and MPI_Irecv would be simpler to implement. The issue is that this pattern uses a buffer on each process for every other process it sends values to or receives values from. Many MPI libraries allocate these buffers when needed but never release the memory until the end, so you end up with memory consumption that doesn’t scale. In my case, when using more than 1000 cores, the MPI library used more than 1 GB per MPI process when it hit these calls, just for these additional hidden buffers. This is just not manageable.

Now, if you use MPI_Alltoall, two things happen:

  • no additional buffers are allocated, so memory use scales nicely as you increase the number of cores
  • it is actually faster than your custom implementation (a short sketch of the call follows this list)
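
To make this concrete, here is a minimal sketch of the count exchange using mpi4py, the Python MPI bindings. This is an illustration, not the original code: the array names and the way send_counts gets filled are assumptions on my part.

    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    size = comm.Get_size()

    # send_counts[j] = number of elements this rank will send to rank j
    # (hypothetical layout; in the original code these counts lived in
    # n per-destination structures, so flattening them into one array
    # is the only extra step).
    send_counts = np.zeros(size, dtype='i')
    # ... fill send_counts from the per-destination structures ...

    recv_counts = np.empty(size, dtype='i')
    # A single collective call exchanges every count with every rank,
    # without hidden per-peer buffers piling up inside the MPI library.
    comm.Alltoall(send_counts, recv_counts)
    # recv_counts[i] now holds how many elements rank i will send us.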

Now that the MPI 3 standard includes non-blocking collective operations, there is absolutely no reason to try to outsmart the library when you need a collective operation. The library applies its own heuristics when it knows it is performing a collective call, so let them work. You won’t be smarter if you try to beat them, but you will be if you use them.
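
As an illustration, mpi4py also wraps these MPI 3 non-blocking collectives when built against an MPI 3 implementation; here is a hedged sketch of overlapping the count exchange with other work, using the same hypothetical arrays as above:

    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    size = comm.Get_size()
    send_counts = np.zeros(size, dtype='i')
    recv_counts = np.empty(size, dtype='i')

    # Start the collective, do useful work (packing the actual payload,
    # for instance), then wait for completion instead of blocking.
    request = comm.Ialltoall(send_counts, recv_counts)
    # ... overlapped work goes here ...
    request.Wait()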

In my case, the code that retrieved all the values and stored them in an intermediate buffer was even smaller than the Isend/Irecv version.

Redirecting Python processes out and err streams to several streams

Today, I encountered an issue with subprocess calls: I needed to redirect the output streams of a subprocess both to the standard streams and, simultaneously, to log files.
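
The full post has the details; as a rough sketch of one way to do it, you can pipe the child’s streams and copy each line to both the standard stream and a log file from small “tee” threads. The command and log file names below are made up for the example.

    import subprocess
    import sys
    import threading

    def tee(pipe, *sinks):
        # Copy every line coming out of the child pipe to each sink.
        for line in iter(pipe.readline, ''):
            for sink in sinks:
                sink.write(line)
                sink.flush()
        pipe.close()

    cmd = ['my_program', '--some-flag']  # hypothetical command
    with open('out.log', 'w') as out_log, open('err.log', 'w') as err_log:
        proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE, text=True)
        threads = [
            threading.Thread(target=tee, args=(proc.stdout, sys.stdout, out_log)),
            threading.Thread(target=tee, args=(proc.stderr, sys.stderr, err_log)),
        ]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        proc.wait()

Reading the two pipes from separate threads avoids the deadlock you can get when one pipe’s OS buffer fills up while you are blocked reading the other.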
Continue reading Redirecting Python processes out and err streams to several streams

Thinking of good practices when developing with accelerators

Due to the end of the free lunch, manufacturers started to provide different processing units and developers started to go parallel. It’s kind of back to the future, as accelerators existed before (the x87 FPU started as a coprocessor, for instance). And when those accelerators were integrated into the CPU, their instruction sets were integrated as well.

Today’s accelerators are not there yet. The tools (code translators, for instance) are not ready, and usual programming practices may not be adequate. The whole ecosystem will evolve and accelerators will change (GPUs are the main trend today, but they will be different in a few years), so what you do today needs to be shaped with these changes in mind. How can you do that? Is it even possible?
Continue reading Thinking of good practices when developing with accelerators

Book review: From P2P to Web Services and Grids: Peers In A Client/Server World

I was looking for an introductory book on peer-to-peer (P2P) applications and their use in grid computing. Web services were a bonus, as they are something I don’t usually play with.
Continue reading Book review: From P2P to Web Services and Grids: Peers In A Client/Server World

Book review: Tools and environments for parallel and distributed computing

After Advanced Computer Architecture and Parallel Processing, I’m going to review another book from the same series. As the title hints, the goal of this book is to introduce the tools that may be used in parallel, grid, and distributed computing. This is the layer above the architectures that the last book presented.
Continue reading Book review: Tools and environments for parallel and distributed computing

Book review: Advanced Computer Architecture and Parallel Processing

This is my first review. I read this book some time ago but I still want to write about it because the topic is very interesting.
Continue reading Book review: Advanced Computer Architecture and Parallel Processing

Parallel computing in large-scale applications

In its March 2008 issue, IEEE Computer published a case study on large-scale parallel scientific code development. I’d like to comment on this article, a very good one in my opinion.

Five research centers were analyzed, or more precisely their development tools and processes. Each center does research in a particular domain, but they all seem to share some Computational Fluid Dynamics foundations.

Continue reading Parallel computing in large-scale applications

Grid computing for Python

In my lab, we frequently process huge amounts of data, and each job can take hours or days. The problem is that we don’t have a usable tool to do this.

Our legacy software is in C, and we plan on moving to Python in the coming weeks. We could use some commercial software, but that would not be optimal.

This is where P2P comes into the game. We have a lot of unused computers, or dual cores that are not even used at 50%, because we are not trained in parallel computing (and we won’t be in the near future). By “we”, I mainly mean PhD students. Our background is signal or image processing, not Computer Science, and even less parallel computing. Those unused computers could be used for our computations, but this implies that a computer is only used when nobody is working on it, that we only use what is available at a given moment, and that some computers may become busy during the computations. That’s why P2P seems an elegant idea for a grid computing tool.

P2P computation is not new in the lab (we developed P2P-MPI in Java, for instance), but it is for our team. For the time being, I haven’t found much about the tools we could use, but the JXTA protocol seems like a good start. I hope I will be able to say more about this subject in the near future.