Saturday, August 10, 2013

Space-filling experiment

I remember reading in a paper once that contour advection simulations are not space-filling.  I thought this was interesting, but my usual reaction to such claims is, first, to be skeptical and, second, to try it out myself and see.  There are actually practical applications for this result--I once did a derivation attempting to link Lyapunov exponents to diffusivity, in which I assumed that the fractal dimension of an advected contour stays constant, i.e. that the contour is not space-filling.

And so was born the space-filling experiment.  My first attempt used a swap file to hold the advected contour as it grew to (hopefully) epic proportions. I realized much later that this was a dumb approach, especially in a world of multiple cores and distributed memory.  Instead of working on one massive contour--which required a specialized executable--why not just string together a whole bunch of smaller contours?  Each contour is advected as a separate process.  Once a contour gets too big, it is halved and each half is given its own process.  Later on, the contours can be spliced back together.  To facilitate this, we use an aggregate naming system: every time a contour is split, the output file names are updated by appending a '0' to the first file and a '1' to the second.  Simple lexical ordering then tells you which contour goes where.
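
To make the naming scheme concrete, here is a minimal sketch of how the suffixes might be generated and later sorted for splicing.  This isn't the actual ctraj code; the file names and the split_suffix helper are invented for illustration.

    // Sketch of the aggregate naming scheme: every split appends '0' to the
    // first half and '1' to the second half.  Hypothetical, not ctraj code.
    #include <algorithm>
    #include <iostream>
    #include <string>
    #include <utility>
    #include <vector>

    // Given the suffix of a contour that has grown too large, return the
    // suffixes of the two child contours.
    std::pair<std::string, std::string> split_suffix(const std::string &suffix) {
      return {suffix + "0", suffix + "1"};
    }

    int main() {
      // Start with a single contour and split it a couple of times (in
      // practice each child would be advected by its own process).
      std::vector<std::string> suffixes = {""};
      for (int gen = 0; gen < 2; gen++) {
        std::vector<std::string> next;
        for (const auto &s : suffixes) {
          auto children = split_suffix(s);
          next.push_back(children.first);
          next.push_back(children.second);
        }
        suffixes = next;
      }

      // Simple lexical ordering recovers the order in which the pieces
      // must be spliced back together.
      std::sort(suffixes.begin(), suffixes.end());
      for (const auto &s : suffixes)
        std::cout << "contour." << s << "\n";   // contour.00, contour.01, ...
      return 0;
    }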

The main issue with this exercise is the recursive, exponential process allocation.  If things go wrong and start to run away, it can eat your computer rather quickly.  It is possible of course to delete the whole process tree. Someday I might even write that script.  The other issue, unfortunately, was that it didn't work.  Even at 60 million nodes, the fractal dimension was still well under two and showed no signs of flattening out.

What I find most interesting about this exercise, however, is how it unlocked for me a big part of the philosophy of Unix programming.  I had already realized how much better small, specialized programs strung together work for most applications than monolithic, general ones, but I hadn't realized how well the philosophy 'scales' to parallel programming. Here, instead of running one program which subsequently spreads itself over several processors, we split the program into multiple processes, each of which is run separately.  It works especially well for "trivially parallel" problems such as this one; for problems that aren't, Unix provides a plethora of message-passing facilities that allow processes to communicate with one another.

The main point is the division of labour: processor-intensive problems are solved by compiled executables, while non-processor-intensive problems are solved by shell scripts calling those executables.  Each represents a distinct "level", and there often seems to be a point at which the problem handled by an executable--normally viewed as a discrete granule by the operating system--is about the right size. The real irony is that I spent quite a long time devising an extremely clever algorithm for parallel processing of contour advection. It involved splitting the contour over a fixed number of processors and passing points between them to keep the load balanced, so that each processor would finish in roughly the same amount of time.  Presumably it would've been written using MPI. Screw that.  Just divide the problem into a larger number of processes than there are processors and let the operating system balance the load for you.
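
Here is a rough sketch, at the system-call level, of what "let the operating system balance the load" amounts to.  The worker executable advect_piece is hypothetical; the point is simply that the driver forks more worker processes than there are processors and lets the kernel scheduler do the rest.

    // Launch one worker process per contour piece and let the OS scheduler
    // balance them across the available cores.  Sketch only: the worker
    // executable "advect_piece" and its arguments are hypothetical.
    #include <cstdio>
    #include <string>
    #include <sys/wait.h>
    #include <unistd.h>

    int main() {
      const int npieces = 16;            // typically more pieces than processors
      for (int i = 0; i < npieces; i++) {
        pid_t pid = fork();
        if (pid < 0) { perror("fork"); return 1; }
        if (pid == 0) {
          // Child: replace ourselves with the worker for piece i.
          std::string arg = std::to_string(i);
          execlp("advect_piece", "advect_piece", arg.c_str(), (char *)nullptr);
          perror("execlp");              // only reached if exec fails
          _exit(1);
        }
      }
      // Parent: wait for all the workers to finish before splicing results.
      int status;
      while (wait(&status) > 0) {}
      return 0;
    }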

One of the reasons the methods I'm discussing here are less known and used than in the past is the increase in computer power.  In the past, to perform a series of operations on matrices it would have made sense to output one result to a file and then call another executable to operate on that result. In today's world, where operations on even large matrices are trivial and the storage requirements even more so, you can do everything in a single interactive program, storing the results in RAM as you go along. Nonetheless, it's interesting to examine the parallels between a programming language and the Unix operating system--Unix is in fact a programming language designed to operate at the file-system level. Files are akin to variables, executables are subroutines, and pipes are like nested function calls.

Wednesday, August 7, 2013

Miscellaneous ramblings: ctraj and higher-order functions in C

Paul Graham is a well-known personality in the tech world and he has a number of outspoken opinions about computer programming. While I enjoy reading his essays, I don't often agree with him. A constant claim of his is that some programming languages are more powerful than others because they are more expressive. For instance, in Lisp you can create a higher-order function--a function that returns another function--whereas in C this is quite a lot more difficult. You can also perform similar kinds of meta-programming--programs that write programs--very easily in Lisp, whereas in C this is a bit trickier.

I would argue that this is, in fact, a deliberate design decision in C, and a very wise one at that. The chief reason for it is efficiency: the style of programming possible in Lisp is often very difficult to optimize. Suppose we have a function that returns a function. What if that one returns yet another function that returns a function? It's impossible for a compiler to predict how far this recursion will unwind, so such higher-order functions would almost by necessity be left uncompiled until run time.

In fact, a C program can extend itself at run time--with higher-order functions and whatnot--and it's actually quite easy to implement. The latest version of ctraj demonstrates, more or less, how it can be done. All the main algorithms in ctraj--trajectory integration, tracer simulation and contour advection--have been refactored to make them more general. All of them call a base class that represents a velocity field and that, at least in theory, could be implemented in just about any way.
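
A stripped-down version of the idea looks something like the following.  The class and method names are invented for illustration; the actual ctraj class hierarchy is more elaborate.

    #include <cstdio>

    // Minimal sketch of an abstract velocity-field base class that the
    // trajectory integrator, tracer simulation and contour advection could
    // all call.  The names here are invented, not the actual ctraj ones.
    class velocity_field {
    public:
      virtual ~velocity_field() {}
      // Fill v[2] with the velocity at time t and position x[2].
      virtual void eval(double t, const double *x, double *v) const = 0;
    };

    // One possible implementation: a simple analytic field.
    class solid_body_rotation : public velocity_field {
    public:
      void eval(double, const double *x, double *v) const override {
        v[0] = -x[1];
        v[1] =  x[0];
      }
    };

    int main() {
      solid_body_rotation field;
      double x[2] = {1.0, 0.0}, v[2];
      field.eval(0.0, x, v);
      printf("v = (%g, %g)\n", v[0], v[1]);
      return 0;
    }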

I was most interested in integrating velocity fields with simple, analytic representations, such as the blinking vortex. An obvious, but naive, way of doing this would be to write a new class for each velocity field. Instead, I wrote a class that is just a wrapper for a function that, from the program's point of view, doesn't even exist yet. When the program is called, the user must supply the name of the object file containing the function and its entry point. The function is then dynamically linked in, allowing the main routine to integrate an arbitrary function without recompiling the main "engine."
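
The mechanism behind this, on a POSIX system, is dlopen and dlsym.  The sketch below shows the general idea; the entry-point signature and command-line interface are assumptions for illustration, not the actual ctraj interface.

    // Sketch of dynamically linking a user-supplied velocity field at run
    // time.  The entry-point name and signature are illustrative only; they
    // are not the actual ctraj interface.
    #include <dlfcn.h>
    #include <cstdio>

    // Assumed signature: given time t and position x[2], fill in velocity v[2].
    typedef void (*vfield_t)(double t, const double *x, double *v);

    int main(int argc, char **argv) {
      if (argc < 3) {
        fprintf(stderr, "usage: %s object_file entry_point\n", argv[0]);
        return 1;
      }
      // Load the user's compiled code (typically a shared object).
      void *handle = dlopen(argv[1], RTLD_NOW);
      if (handle == nullptr) { fprintf(stderr, "%s\n", dlerror()); return 1; }

      // Look up the entry point by name and cast it to the assumed signature.
      vfield_t vfield = (vfield_t)dlsym(handle, argv[2]);
      if (vfield == nullptr) { fprintf(stderr, "%s\n", dlerror()); return 1; }

      // The integration "engine" can now call the function as if it had been
      // compiled in; e.g. evaluate the field at one point:
      double x[2] = {0.5, 0.5}, v[2];
      vfield(0.0, x, v);
      printf("v = (%g, %g)\n", v[0], v[1]);

      dlclose(handle);
      return 0;
    }

The user would compile the velocity field into a shared object with something like c++ -shared -fPIC blinking_vortex.cc -o blinking_vortex.so and pass the file name and symbol name on the command line.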

In this example, the user must both write and compile the function to be linked in--the software is, after all, designed for scientific researchers--but it's easy to see how both steps could be done automatically by the program itself. In other words, once a programming language has access to the operating system and a dynamic-linking library, writing higher-order functions is always possible.
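
As a sketch of what that would look like (this is not something ctraj actually does, and it assumes a c++ compiler on the path): the program writes the source for a new function, invokes the compiler on it, and links in the result.

    // Sketch: a program "writing" a function, compiling it, and linking it
    // in at run time.  Assumes a c++ compiler on the path and a POSIX
    // system; purely illustrative.
    #include <dlfcn.h>
    #include <cstdio>
    #include <cstdlib>

    typedef double (*unary_fn)(double);

    int main() {
      // 1. Write the source for the new function.
      FILE *src = fopen("generated.cc", "w");
      if (src == nullptr) return 1;
      fprintf(src, "extern \"C\" double f(double x) { return 3*x*x + 2*x + 1; }\n");
      fclose(src);

      // 2. Compile it into a shared object.
      if (std::system("c++ -shared -fPIC generated.cc -o generated.so") != 0)
        return 1;

      // 3. Link it in and call it.
      void *handle = dlopen("./generated.so", RTLD_NOW);
      if (handle == nullptr) { fprintf(stderr, "%s\n", dlerror()); return 1; }
      unary_fn f = (unary_fn)dlsym(handle, "f");
      if (f == nullptr) { fprintf(stderr, "%s\n", dlerror()); return 1; }
      printf("f(2) = %g\n", f(2.0));
      dlclose(handle);
      return 0;
    }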

To download the latest version of ctraj, click here.

Another puzzling claim that Graham makes, right at the beginning of "On Lisp," is that Lisp somehow makes bottom-up design--writing a language on top of the language that better solves your problem--easier. Every good programmer is very familiar with bottom-up design and uses it extensively, and any 3GL allows it. What are subroutines other than user-defined commands? Commands that extend the language and allow you to implement a new language on top of the old one. Within limits, obviously: the syntax is fixed, after all. Relatively few languages actually allow you to extend the syntax, and Lisp is no different; unlike most languages, though, it has only one syntactical construct, which is perhaps both its greatest advantage and its greatest limitation. Perhaps it is easier to write a compiler or interpreter in Lisp--it probably is; I've never tried it, so I don't know--but until those nuts and bolts are sorted out, you're stuck with the usual prefix operator form--commonly called a function--just like in every other language. I am actually working on a language that allows the syntax to be arbitrarily extended. It's not much of a priority, so I don't know when I'll finish it. The main issue is what the base language should look like.

It's not just 3GLs that allow abstracting larger pieces of code into a single command or a few commands. Any software tool that contains something like functions, procedures or macros will allow this--even a gosub command will do. Practically the whole business of computer programming is the act of cleverly abstracting a problem into a set of more convenient sub-problems. The very nature of programming languages means that any language can be rewritten in any other language. One of the reasons I haven't bothered to learn Lisp yet is that it would take a lot less time to write a Lisp-like interpreter than to learn all of the commands and syntactic sugar of the actual language.

In the past I thought that using C and C++ instead of Fortran 77 (still popular to this day) was a great boon for scientific programming, since it allowed for dynamic memory allocation, and pointers made it easy to create powerful and flexible data structures such as linked lists. I thought switching to Lisp or even Prolog might offer similar advantages. It's easy, however, to exaggerate such advantages. In practice, I find that I rarely use specialized data structures except within algorithms that are well encapsulated to begin with (in a sort routine, for instance).

Going back once again to ctraj, a good example is the contour advection algorithm.  My first attempt at contour advection, written in IDL, used a linked list.  The version in ctraj, however, uses cubic spline interpolation to calculate the curvature, so the coordinate list either has to be converted to an array or must already be in the form of an array.  It therefore uses a pair of arrays that are constantly re-allocated.
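
The storage then looks something like the toy sketch below: a pair of coordinate arrays that are simply grown (re-allocated) as new points are inserted between existing ones.  This is not the actual ctraj data structure, just an illustration of the array-based approach.

    // Toy sketch of array-based contour storage: a pair of coordinate arrays
    // that grow as points are inserted.  Not the actual ctraj data structure.
    #include <cstdio>
    #include <vector>

    int main() {
      std::vector<double> x = {0.0, 1.0, 2.0};
      std::vector<double> y = {0.0, 1.0, 0.0};

      // Insert a new point between nodes i and i+1 wherever the contour
      // needs refining; here we just bisect every segment once.  The
      // underlying arrays are re-allocated as needed by the vectors.
      for (size_t i = 0; i + 1 < x.size(); i += 2) {
        x.insert(x.begin() + i + 1, 0.5 * (x[i] + x[i + 1]));
        y.insert(y.begin() + i + 1, 0.5 * (y[i] + y[i + 1]));
      }

      for (size_t i = 0; i < x.size(); i++)
        printf("%g %g\n", x[i], y[i]);
      return 0;
    }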

In Fortran, I would solve the problem by just using fixed-length arrays.  Since the program bums out at about a million points, there's our length.  Of course, arbitrary limits are another pet peeve of real computer programmers.  It's not as bad as all that--in real life there is always a limit, whether it's the maximum size of executable allowed in RAM or the size of the RAM itself.  The fact that the contour advection simulation bums out at a million points or so isn't so bad either--it's quite easy to split the simulation into two or more separate processes which can then be farmed out to different processors.  More on this in the next article.

As another example, it's quite easy to write a linked list in Fortran 77. It takes a bit more thought and requires passing a "work-space" variable to the subroutines (common practice in Fortran 77 programming), but once it's done it can be re-used as needed, just like any suite of subroutines. Maybe if I get around to it I'll post my version of a linked list written in Fortran 77. I guarantee that, at the interface level, it won't look much different from the equivalent code written in C.
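
The basic trick--sketched below in C-style code rather than Fortran 77, and with an invented interface--is to simulate pointers with integer indices into a pre-allocated work-space array.

    // Sketch of a linked list built on a pre-allocated "work-space" array,
    // the way it might be done in Fortran 77, written here in C style.
    // Indices play the role of pointers; index -1 marks the end of the list.
    #include <cstdio>

    #define NMAX 100

    static double val[NMAX];   // payload
    static int    nxt[NMAX];   // "pointer" to the next element (-1 = end)
    static int    nused = 0;   // next free slot in the work space
    static int    head  = -1;  // start of the list

    // Insert a value at the front of the list; returns 0 on success.
    int push_front(double v) {
      if (nused >= NMAX) return 1;   // work space exhausted
      val[nused] = v;
      nxt[nused] = head;
      head = nused++;
      return 0;
    }

    int main() {
      for (int i = 0; i < 5; i++) push_front(i * 1.5);
      // Traverse exactly as one would with real pointers.
      for (int i = head; i != -1; i = nxt[i]) printf("%g\n", val[i]);
      return 0;
    }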