Thursday, November 7, 2013

Why conspiracy theorists are always raving lunatics

There are three elements to the definition of a conspiracy.  In order for something to be a conspiracy it must:

1. be conducted by a group of individuals
2. who have been communicating in secret
3. and be immoral or illegal

By this definition, we can certainly rule out Guantanamo, the invasion of Iraq, the drone assassinations in Pakistan and the NSA wire-tapping as being conspiratorial in nature. In fact we can rule out just about all of the activities of law enforcement, intelligence and government (not to mention the military), which, as we all know, are always conducted completely out in the open with results that are without a doubt beyond reproach.

Tuesday, November 5, 2013

A complex restatement of the Golden Rule

I've been interested in ethical philosophy for a long time now, in particular, how ethics interfaces with reason and logic.  These are just some off-the-cuff thoughts I jotted down in a restaurant after attending a blues workshop.  It starts from the following question:

How did humans become self-aware?

My basic answer: I am the only being that I know with certainty to be aware.  I am not always self-aware.  I only assume, through symmetry, that others are aware as I am.

The belief (faith?) that others are aware, as I am, and that because they are aware, they can suffer, is the birth of morality.  Because I see the other not as self, but as equivalent to self, I regret the suffering of others.

Once you come to value the happiness of others, you realize a responsibility to help those others and to alleviate their suffering.  This is the birth of sin.  Too often, you will fail in your responsibility.  You will hurt the ones to whom you have an obligation.  You will cause suffering and author evil.

This is a long-winded explanation of the Golden Rule.

[Did awareness grow as an "emergent" property or has it always existed?  At what point does the evil in the world become so great that you cannot stand against it?  At such a point, surely the only logical course of action is self-immolation.  (And evil often seems like a disease: it travels from person to person.)

If I am right about 9/11, I'm pretty sure that point has long since come and gone.]

My observation is that certain people cannot comprehend the Golden Rule.  There is something that does not "click."  Can such people be held responsible for the evil that they author?  (I have also long suspected that there is a huge "disconnect" or dissociation within most human brains (especially those of Europeans) such that we believe that we are doing good and that we are moral beings, even as we commit the most heinous crimes.  The right hand truly does not see what the left hand is doing.)

I have come to believe that statements of fact are a subset of moral or ethical statements.  Because a factual statement is a true statement and telling the truth is a moral act.  Those who do not value the truth have no reason to differentiate true from false statements. [This is questionable.  I'm trying to create an association between an ethical outlook and the production of "true beliefs" but it may well be that they are in fact two distinct classes of statement.]

However, (and to finally answer the question posed at the beginning) our understanding of natural selection makes our position as moral beings more comprehensible.  Acting morally requires tremendous application of intelligence and self-discipline because it is only through intelligence that we come to understand the moral imperative.

It is evolution through natural selection--in other words competition--that gave us our intelligence.  Those who can empathize--to see from the point of view of others--have a competitive advantage.  Those who can empathize also have the potential for moral agency.

Sunday, October 20, 2013

My Philosophy

I like to be in places that allow me to get lost in thought.  Sometimes it's a luxury we can ill afford.

I do not want to be a professional philosopher.  In my philosophy, we each take turns contributing to the greater good.  This should allow leisure time for thinking.

Saturday, August 10, 2013

Space-filling experiment

I remember reading in a paper once that contour advection simulations are not space-filling.  I thought this was interesting, but my normal reaction is, first, to be skeptical and, second, well, why not try it out myself and see?  There are actually practical applications for this result--I once did a derivation attempting to link Lyapunov exponents to the diffusivity in which I assumed that the fractal dimension of an advected contour was constant, thus not space-filling.

And so was born the space-filling experiment.  My first attempt used a swap file to hold the advected contour as it grew to (hopefully) epic proportions. I realized much later that this was a dumb approach, especially in the world of multiple cores and distributed memory.  Instead of working on one massive contour--which required a specialized executable--why not just string together a whole bunch of smaller contours?  Each contour is advected as a separate process.  Once a contour gets too big, it is halved and each half given its own process.  Later on, the contours can be spliced back together.  To facilitate this, we use an aggregate naming system: every time a contour is split, the output file names are updated by appending a '0' to the first file and a '1' to the second file.  Simple lexical ordering can tell you which contour goes where.
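To make the naming scheme concrete, here is a minimal sketch in C++.  The point structure, the plain-text file format and the function names are all invented for illustration; the real ctraj code is organized quite differently.

  #include <cstdio>
  #include <string>
  #include <vector>

  struct point {double x, y;};

  // write a contour as a list of x-y pairs, one point per line:
  void write_contour(const std::string &fname, const std::vector<point> &c) {
    FILE *fs = fopen(fname.c_str(), "w");
    if (fs == NULL) return;
    for (const point &p : c) fprintf(fs, "%g %g\n", p.x, p.y);
    fclose(fs);
  }

  // split a contour in two, appending '0' and '1' to the parent file name,
  // e.g. "contour.01" becomes "contour.010" and "contour.011";
  // each half would then be handed to its own advection process:
  void split_contour(const std::string &fname, const std::vector<point> &c) {
    size_t half = c.size()/2;
    write_contour(fname + "0", std::vector<point>(c.begin(), c.begin() + half));
    write_contour(fname + "1", std::vector<point>(c.begin() + half, c.end()));
  }

Since no leaf name is a prefix of another, sorting the final file names lexically recovers the order in which the pieces should be spliced back together.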

The main issue with this exercise is the recursive, exponential process allocation.  If things go wrong and start to run away, it can eat your computer rather quickly.  It is possible of course to delete the whole process tree. Someday I might even write that script.  The other issue, unfortunately, was that it didn't work.  Even at 60 million nodes, the fractal dimension was still well under two and showed no signs of flattening out.

What I find most interesting about this exercise, however, was how it unlocked for me a big part of the philosophy of Unix programming.  I had already realized how much better small, specialized programs strung together work for most applications than monolithic, general programs, but hadn't realized how well the philosophy 'scales' to parallel programming.  Here, instead of running one program which subsequently spreads itself over several processors, we split the program into multiple processes, each of which is run separately.  It works especially well for "trivially parallel" problems, such as this one; Unix does, however, provide a plethora of message-passing facilities that allow processes to communicate with one another.

The main point is the division of labour into processor-intensive problems which are solved by compiled executables and non-processor-intensive problems which are solved by shell scripts calling the executables.  Each represents a distinct "level" and there often seems to be a point at which the problem handled by an executable--normally viewed as a discrete granule by the operating system--is about the right size.  The real irony is that I spent quite a long time devising an extremely clever algorithm for parallel processing of contour advection.  It involved splitting the contour over a fixed number of processors and passing points between them to ensure a balanced load such that each processor would finish in roughly the same amount of time.  Presumably it would've been written using MPI.  Screw that.  Just divide the problem into a larger number of processes than there are processors and let the operating system balance out the load for you.
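In a shell script this amounts to nothing more than launching each job in the background and calling wait.  For illustration, here is the same idea as a minimal C++/POSIX sketch; the executable name "advect" and the one-file-per-contour convention are stand-ins, not the actual ctraj interface.

  #include <sys/wait.h>
  #include <unistd.h>
  #include <cstdio>
  #include <string>
  #include <vector>

  int main(int argc, char **argv) {
    std::vector<std::string> files(argv + 1, argv + argc);  // one contour file per job
    for (const std::string &f : files) {
      pid_t pid = fork();
      if (pid == 0) {                       // child process: run the worker
        execlp("./advect", "advect", f.c_str(), (char *) NULL);
        perror("execlp");                   // only reached if the exec fails
        _exit(1);
      }
    }
    while (wait(NULL) > 0);                 // parent: reap all the children
    return 0;
  }

Launch more jobs than there are cores and the kernel's scheduler does the load balancing for you.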

One of the reasons the methods I'm discussing here are less well known and used than in the past is the increase in computer power.  In the past, to perform a series of operations on matrices it would have made sense to output one result to a file then call another executable to operate on that result.  In today's world, where operations on even large matrices are trivial and the storage requirements even more so, you can do everything in a single interactive program, storing the results in RAM as you go along.  Nonetheless, it's interesting to examine the parallels between a programming language and the Unix operating system--Unix is in fact a programming language designed to operate at the file system level.  Files are akin to variables, executables are subroutines and pipes are like nested function calls.

Wednesday, August 7, 2013

Miscellaneous ramblings: ctraj and higher-order functions in C

Paul Graham is a well known personality in the tech world and he has a number of outspoken opinions about computer programming. While I enjoy reading his essays, I don't often agree with him. A constant claim of his is that some programming languages are more powerful than others because they are more expressive. For instance, in Lisp, you can create a function (a higher-order function) that returns another function, whereas in C this is quite a lot more difficult. You can also perform similar types of meta-programming--programs that write programs--very easily in Lisp, whereas in C this is a bit more tricky.

I would argue that this is, in fact, a deliberate design decision in C, and a very wise one at that. The chief reason for it is efficiency--the style of programming possible in Lisp is often very difficult to optimize. Suppose we have a function that returns a function. What if this returns another function that returns a function? It's impossible for a compiler to predict how far this recursion will unroll, so such higher-order functions would almost by necessity be left uncompiled until run time.

In fact, a C program can extend itself at run-time--with higher-order functions and whatnot--and it's actually quite easy to implement. The latest version of ctraj demonstrates how it can be done, more or less. All the main algorithms--trajectory integration, tracer simulation and contour advection--in ctraj have been refactored to make them more general. All of them call a base class that represents a velocity field that, at least in theory, could be implemented in just about any way.

I was most interested in integrating velocity fields with simple, analytic representations, such as the blinking vortex. An obvious, but naive, way of doing this would be to write a new class for each velocity field. Instead, I wrote a class that is just a wrapper for a function that, from the program's point of view, doesn't even exist yet. When the program is called, the user must supply the name of the object file containing the function, and its entry point. The function is then dynamically linked in, allowing the main routine to integrate any arbitrary function without recompiling the main "engine."

In this example, the user must both write and compile the function to be linked in--the software is after all designed for scientific researchers--but it's easy to see how both could be done automatically by the program itself. In other words, once the programming language has access to the operating system and a dynamic linking library, writing higher order functions is always possible.
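To show that there is no magic involved, here is a minimal sketch of the dynamic-linking trick using the POSIX dlopen/dlsym interface.  The velocity-field signature and the file names are invented for illustration and do not match the actual ctraj classes.

  // compile a plug-in with:    g++ -shared -fPIC -o vfield.so vfield.cc
  // compile this driver with:  g++ -o driver driver.cc -ldl
  #include <dlfcn.h>
  #include <cstdio>

  // assumed signature for a 2-D, time-dependent velocity field:
  typedef void (*vfield_t)(double x, double y, double t, double *u, double *v);

  int main(int argc, char **argv) {
    if (argc < 3) {
      fprintf(stderr, "usage: %s object_file entry_point\n", argv[0]);
      return 1;
    }
    void *handle = dlopen(argv[1], RTLD_NOW);             // load the object file
    if (handle == NULL) {fprintf(stderr, "%s\n", dlerror()); return 1;}
    vfield_t vfield = (vfield_t) dlsym(handle, argv[2]);  // look up the entry point
    if (vfield == NULL) {fprintf(stderr, "%s\n", dlerror()); return 1;}

    double u, v;
    vfield(0.5, 0.5, 0., &u, &v);           // call it like any other function
    printf("v(0.5, 0.5, 0) = (%g, %g)\n", u, v);

    dlclose(handle);
    return 0;
  }

The main "engine" never needs to be recompiled; only the small plug-in containing the velocity function does.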

To download the latest version of ctraj, click here.

Another puzzling claim that Graham makes, right at the beginning of "On Lisp," is that Lisp somehow makes bottom-up design--writing a language on top of the language that better solves your problem--easier. Every good programmer is very familiar with bottom-up design and uses it extensively, and any 3GL allows it. What are subroutines other than user-defined commands? Commands that extend the language and allow you to implement a new language on top of the old language. Within limits, obviously: the syntax is fixed after all. Relatively few languages actually allow you to extend the syntax. Lisp is no different: unlike most languages, there is in fact only one syntactical construct!--which is perhaps both its greatest advantage and its greatest limitation. Perhaps it is easier to write a compiler or interpreter in Lisp--it probably is, though I've never tried it so I don't know--but until those nuts and bolts are sorted out, you're stuck with the usual prefix operator form--commonly called a function--just like every other language. I am actually working on a language that allows the syntax to be arbitrarily extended. It's not much of a priority, so I don't know when I'll finish it. The main issue is what the base language should look like.

It's not just 3GLs that allow abstracting larger pieces of code into a single command or a few commands. Any software tool that contains something like functions, procedures or macros will allow this--even a gosub command will do. Practically the whole business of computer programming is the act of cleverly abstracting a problem into a set of more convenient sub-problems. The very nature of programming languages means that any language can be re-written into any other language. One of the reasons that I haven't bothered to learn Lisp yet is that it would take a lot less time to write a Lisp-like interpreter than to learn all of the commands and syntactic sugar of the actual language.

In the past I thought that using C and C++ instead of Fortran 77 (still popular to this day) was a great boon for scientific programming since it allowed for dynamic memory allocation, and pointers made it easy to create powerful and flexible data structures such as linked lists. I thought switching to Lisp or even Prolog might offer up similar advantages. It's easy, however, to exaggerate such advantages. In practice, I find that I rarely use specialized data structures except within algorithms that are well encapsulated to begin with (in a sort routine, for instance).

Going back once again to ctraj, a good example is the contour advection algorithm.  My first attempt at contour advection, written in IDL, used a linked list.  The version in ctraj, however, uses cubic spline interpolation to calculate the curvature, so the coordinate list either has to be converted to an array or must already be in the form of an array.  Therefore it uses a pair of arrays that are constantly re-allocated.

In Fortran, I would solve the problem by just using fixed-length arrays.  Since the program bums out at about a million points, there's our length.  Of course, arbitrary limits are another pet peeve of real computer programmers.  It's not as bad as all that--in real life, there is always a limit, whether it be the maximum size of the executable allowed in RAM, or the RAM itself.  The fact that the contour advection simulation bums out at a million points or so is not so bad either--it's quite easy to split the simulation into two or more separate processes which can then be farmed out to different processors.  More on this in the next article.

As another example, it's quite easy to write a linked list in Fortran 77. It takes a bit more thought and requires passing a "work-space" variable to the subroutines (common practice in Fortran 77 programming) but once it's done it can be re-used as needed, just like any suite of subroutines. Maybe if I get around to it I'll post my version of a linked list written in Fortran 77. I guarantee that from the interface level it won't look much different from the equivalent code written in C.

Monday, June 24, 2013

All men are rapists

Lately I've become interested in debates between feminists and men's rights activists.  In particular, there is an increasing amount of pressure by feminists to address "epidemic" levels of domestic violence and rape.  Much of it borders on hate speech.  There is one aspect of this I would like to take issue with: the idea that all men are capable of rape.  The story, at least as told by feminists, is that we are living in a "rape culture" and that men must be conditioned not to rape.  Some feminists go so far as to assume that all men are rapists until they can (somehow) prove otherwise.  By that logic I should assume that all women are liars.  Statistically I'd have a much better chance of being right.  Or perhaps assume that the black teenagers hanging around the street corner are potential muggers.

My question is simple: are all men even capable of rape?  And by capable I mean actually physically capable.  Can they achieve an erection under circumstances akin to rape?  Human males are quite different from just about any other mammal in that they have no bone inside their penis.  An erection is maintained by blood-flow alone.  This also means that they cannot simply have sex on demand.  Something must arouse them.

I hear a lot of talk about how men must impress women in order to gain access to them--by becoming an "alpha" or whatever.  But the implication of male physiology is that the female must also impress the male by sexually stimulating him.  It's an entrenched position that men rape far more frequently than women, at least when it's even acknowledged that women rape at all.  The typical argument against women raping men is that it is impossible to force a man to have an erection.  This same argument can also be used to show that not all men can rape.  Just as they cannot be aroused by being forced into submission, they are not necessarily aroused by forcing another.

I find the notion that "all men are potential rapists" offensive in the extreme because it implies that all men are violent psychopaths who are sexually aroused not only by violence, but by their own violence.

Thursday, June 6, 2013

Thoughts on Turing-completeness

Writing this sparse matrix calculator has gotten me thinking about software tools and Turing-completeness.  The sparse calculator, at least as originally conceived, wasn't meant to be a general-purpose computing tool.  It was simply meant to do some simple computations with sparse matrices: mainly find the eigenvalues and solve matrix equations.

It seems to be the fate, however, of just about every software tool (especially mini-languages) to acquire, at some point, the trappings of a full-featured programming language.  Suppose you want to write your own solver or eigenvalue routines instead of just using built-in functions?  At the very least this will require iteration capability and the ability to exit from that iteration based on some criterion.

The complexity required for Turing-completeness is surprisingly low.  I'm wondering at what point the sparse calculator will acquire it--or perhaps it already has, with the addition of vector subscripting and subscript assignment?  In the course of the minimal research I've performed, one thing that comes up repeatedly is this idea of non-halting programs.  After all, one of the properties of a Turing-complete machine is the halting theorem: no general algorithm exists that can determine with certainty whether an arbitrary program on such a machine halts or not.

First of all, I've always thought that the halting theorem was over-emphasized.  After all, human programmers write programs that halt all the time and they can frequently determine whether a program halts or, if it does not, why.  Just because you cannot prove whether or not some programs halt does not mean that you should not do the work for those that are less pathological, i.e., that are easy to classify.  Typically these less pathological cases will be the ones of interest anyway, that is, more real-world as opposed to "of theoretical interest."  The problem is equivalent to that of automated theorem proving.  There is plenty of work being done and plenty of software in that area, even though theorems exist that are unprovable.

It's obvious that being capable of non-halting behaviour is a necessary, not sufficient, condition of being Turing-complete.  But I question this as well: couldn't we have languages that are, for all intents and purposes, Turing-complete, yet still always halt?  In array-based languages such as APL, MatLab, Interactive Data Language and many others, you can do a lot of both the iteration and the conditionals using array operations.  And at least in IDL, you ought to, because it is more efficient: array operations are built in, while loops and other control-flow operations are interpreted from byte code.  Since arrays always have a finite length, this implies that any array operation will ultimately halt.  I find I like the idea of element-by-element comparison, array element selection and similar mechanisms since these have more of a declarative feel.  I remember being disappointed by the control structures in Awk, because while in concept the language had a strongly declarative feel, the control structures simply aped those of common imperative languages.

What's wrong with science: academe

One of the things I hated most in elementary school was reading comprehension. It went like this: read a passage or short story and then answer a bunch of questions about it.  But there was a rub: not only could you not quote from the story, you had to explain your answer.  This irked me no end because the correct answer was almost invariably sitting right there, stated baldly and directly within the text. And being in third grade, try as I might to paraphrase it, there was no way I was going to come up with a piece of prose that stated the same thing quite so well.

I find a lot of life is like this: trying to explain to people things that really should be self-evident.  And part of the reason that people are so unreasonable is the mental gymnastics they have to go through in order to earn marks in school.  This of course continues well past college: college just sets it like concrete.

Are you searching for that virtuoso, three page proof that will win you accolades among your peers and hopefully win you a professorship?  The purpose of science is not to make things complicated, rather it is to simplify them.  The real beauty is the one-liner that makes you snap your fingers and say, "Aha!  It's so simple..."  I went into physics because it is simple, unlike psychology: because understanding a particle in a box is simpler (one hopes) than understanding a human brain.

I realized early on in my science career that it's probably far easier for an intelligent person to forget about jumping through all the hoops they set for you on the way to becoming a verified, card-carrying, official, professional, bona fide, authentic Scientist (TM) (C) B.S. M.S. M.D. Ph.D. and just focus on making some really big discovery. Once you do that, qualifications mean nothing.

Of course, if you are doing this, it means you are actually working on difficult, interesting problems. As a scientist, even a scientist in training, isn't this what you should be doing? I'm not saying that the two are mutually exclusive--you could be working on difficult, interesting problems at the same time as you are working towards a degree--but in practice, they often are.

Tuesday, June 4, 2013

Women in science

When I was a youth, I considered myself a feminist. I even wrote stories about a swash-buckling woman pirate (there were actually real women pirates--look it up). As I grew into adulthood, that began to unravel--about the time, in fact, that I took a course in women's studies at university. A friend of mine from the physics department was the AV guy for the course and so had to sit in on the lectures. After one of them he said to me, "I find her arguments speculative and inflammatory." After some thought, I realized that he was quite right.

One of the essays I wrote for the course was about discrimination in science. Increasingly, I'm beginning to question whether the lack of women in the "hard" sciences, math and physics in particular, has anything to do with discrimination, or whether it might rather be caused by other factors. To tease out the answer to this question is quite difficult and there are no doubt multiple factors.

Instead, I'm going to look at a closely related phenomenon: the lack of female editors on Wikipedia (estimated at around 16%) and the lack of females creating open source software (probably less than 5%). Based on personal experience, it's virtually impossible that this discrepancy is caused by any kind of overt discrimination. The reason I say that is simple. I have been both a Wikipedian and an open source software developer for several years now. Neither activity requires any qualifications, nor do they require permission. All you need are a few technical skills or the patience to learn.

In the case of Wikipedia, you don't even need to be the stereotypical "tech geek"--the only technical skill you need is the very simple mark-up language, and you can write about anything you like. So if you aren't interested in math or physics or computer programming, you can write about ladies' handbags or your favourite soap operas or whatever. The articles on popular culture, in fact, far outnumber the articles on technical subjects.

In all the time I've been working at these things the amount of human interaction I've experienced has been minimal, which is the way I like it. So I can only conclude that the real reason for the lack of females comes down to one factor and one factor only: interest. Women are simply not interested in doing this kind of work. After all, unless your project hits it big like Linux, you get no rewards other than personal satisfaction.

If you enjoy programming, you already know how easy it is to engage your passion, but in case you don't, let me tell you.  Most of my programming is done on a netbook that cost me $350, which is not a large sum of money, and I've seen similar models go for as little as $200.  My computer has a dual-core, 1.66 GHz processor and is more powerful than many super-computers from 20 years ago.  In addition to a computer, you will also need software: a programming language in the form of a compiler or interpreter.  I use the GNU suite of compilers, which includes C, C++ and Fortran and can be downloaded for free.  It will even run on Windows using another free program called Cygwin.

To turn the software you write into open source, you can start an account on Sourceforge and upload it so that others can download and use your software, or if they choose, modify it or use it as a starting point for their own work.  The amount of human-to-human interaction up to this point has been all but nil.  In other words, ladies, if you want to become free software developers, get coding because nobody and nothing is stopping you.

Suppose that you are a geek (which in my case is not supposition). You have virtually no romantic prospects, maybe not much of a social life and you are wary of the bullshit that might get you there. But you know what you like. You enjoy tinkering: writing computer programs and learning about science. And you are honest. So you take this passion and this integrity and you pour it into a hobby--developing free software. At first, nobody notices because you yourself are also a nobody. But because of your passion and because of your integrity, everybody starts using it because it's just so damn good.

Seems like the dream scenario right? Geek loser makes good. Except now come the wannabes and coat-tail hangers who want in on a piece of the action, even though they may not have half the passion or ability. It's hard to view feminist complaints about this issue in any other way--they see what men are up to and they want a piece of the pie. Nothing is stopping a determined woman--any woman--from opening an account on Sourceforge and starting up her own software project or from starting an article on Wikipedia.

One of the complaints that gets bandied about from feminist quarters is this nebulous concept of "harassment"--that women who try to do so are routinely harassed. In many instances, I suspect they are confusing standards and normal "turf wars" with harassment. Wikipedia has certain rules and guidelines that must be adhered to in order for it to be what it is. If you fail to abide by these rules, yes you will be warned and your edits removed if you do not address the issues.

Turf wars are a bit more tricky. If you invade or otherwise get involved in a project or article that is already quite advanced, it's quite common to be harshly criticized for your work. The men (we know already that it's mostly men) who work on these things invest a lot of personal passion in them, and the fact that they don't want some newbie to step in and change the direction of a project is a sign of this passion. So in some ways, it is a very positive thing.

Speaking personally, I know that I too am very particular this way and also that I don't have much of a stomach for the type of turf wars I was just discussing. The solution in my case was simple: I work solo on all my free software and I only work on Wikipedia articles that few others are interested in but in which I have expertise.

There is one final issue I have not yet touched on and that is the idea that women have been socialized to be less interested in things like math and science. This is possible, but it may also be something that is inborn--this was almost taken for granted less than 50 years ago. It's hard to come to an objective determination in the matter, especially since feminists will try to undermine any evidence that suggests it is, as the whole debacle with former Harvard president Lawrence Summers shows.

On the other hand, girls now lead boys in just about every school subject and I believe that includes math and science. Also, as I already pointed out, editing Wikipedia requires little technical ability.

Attempts to redress the balance through "affirmative action" programs are damaging to both men and women.  Damaging to men because they no longer compete on a level field.  Damaging to women because the abilities of any woman who "makes it" are now suspect.

One of the reasons levelled by feminists for the lack of females in this or that field is the "lack of role models."  This, unfortunately, does not fly: if you look back through history, there have always been plenty of notable women doing just about everything, from pirates to mathematicians to composers, even if they have been in the minority.  Think of the courage and determination it required for these people to get to where they were, given the reigning social prejudices and strictures of the time.  If modern women cannot emulate these matriarchs, despite these barriers having been knocked down, despite a plethora of affirmative action programs such as grants, scholarships and mentorship programs available only to women, perhaps the problem is not with society at all but rather with the modern human female.

A tale of seven swimming holes

This afternoon I took a walk in one of the ritzier areas in my neighbourhood.  In particular, I remember walking along a ridge and noticing that each house had a spectacular (and private) view of the Ottawa River Valley.  But then I realized that the lovely view was marred by a large pulp mill belching foul-smelling smoke.  I noted, with some satisfaction, that the same system that allows for the commodification of something as ephemeral as a view also produces its destruction through "industry."  I also found myself wondering, "Am I the only one who gets this?"

Let's say you live in a town with seven swimming holes and you can visit any one of these swimming holes any time you want.  Now these beaches vary a lot in quality depending on the time of day and year and your personal tastes. One is east-facing and so is ideal for early morning swims while another is west-facing and so is ideal for evening swims.  Another has large, imposing oak trees to the south making it perfect for afternoon swims during the dog-days of summer.  Of course, sometimes you are in the mood for crowds because maybe you want to rock a new swimsuit or search for old friends and so head to one of the most popular swimming holes.  Other days you might prefer to be alone, either because you're depressed or have a romantic engagement, and so head to one of the less popular beaches.  And then, you never know when there might be an algae bloom which will make one of the lakes unpleasant for swimming.

Now let's say that a bunch of development companies buy up all the land around these lakes--as they are wont to do in "free" countries like Canada--and sub-divide it into residential lots. You are lucky enough to score one of them and it is on the east-facing lake.  Now you have an entire lake that you only have to share with a few other neighbours! One month later you are re-assigned at work and have to work early mornings.  Two months later a giant algae bloom invades the lake.  Despite chlorine treatments and other interventions, it does not clear up.  You never go swimming in your town again and instead drive 250 km to an out-of-town resort.

So, were you richer before or after?



Thursday, May 16, 2013

A call for private currency

Modern governments are gung ho to privatize just about everything they can lay their hands on these days.  In the '80s, the Thatcher government was busy selling off the utilities.  Her successor sold the railways. Here in Canada we've begun to open up the phone services to competition and have sold the national airline.  And in the States there are experiments to privatize the prisons, the police force and much of the military.

One thing that has remained stubbornly in the public domain is the currency. And by this, I don't mean the creation of money, which, too, is now largely in the private sector.  I mean the currency itself.  If we are not happy with our dollar and what it buys and its value (which, after all, is nothing--it's just a piece of paper, or in these times a blip in some computer data bank somewhere), why shouldn't we exchange it for another one, or for that matter for a Canadian Franc, or Drachma or Shekel or Pound or Ob or Guinea or whatever it is that the backer decides to call it?

Let's face it, competition for the bank conglomerates is sorely needed.  These days, because of fees, abysmally low interest rates and other factors, if your net worth isn't close to a cool million, you're better off keeping your money in your shoe.  And as the WikiLeaks debacle proves (in addition to the fact that freedom of the press is a convenient myth), the banks now wield an excessive amount of power.

What we need is a new standard of value.  If the "Occupy" Movement is really serious about achieving its goals, two things need to be done.

The first thing that we need to do is organize a general strike.  Nobody does any work, and by work I mean any revenue-generating activity. I'm not saying we need to go hungry.  You can do anything you want to, as long as it doesn't involve cash.  So for instance, you can bring your neighbours food or fix their cars for them and so on.

It is now well understood how fiat currency created primarily through debt helps concentrate wealth in the hands of the few--primarily those who already have control over a considerable amount of wealth.  The more the wealth is concentrated, the more control the global elite ultimately possess.  This paper shows just how concentrated that power has now become. So the second thing the Occupy Movement needs--and this complements the first--is a new standard of value. Not necessarily a currency, since any new currency will be in imminent danger of being usurped for the purposes of control by an elite few, just like what has happened to the existing currency.

My own vision for a better society and a better system of distribution ("market") is simple. We just need a clearing-house or bulletin board for work that needs to be done and goods that need to be distributed, whether for ourselves or for our neighbours. Sort of like Craigslist except everything in the want-ads is free. If I am free and I possess the skills, then I can do one of the tasks. Likewise, if my neighbour is free, he can come do one of the tasks I've posted on the board.  If I have some spare bread that I need to get rid of, I just post it on the board.

Here is a book that discusses strategies for alternative currency.  I think it is one of the most important books written in recent times.

Friday, May 10, 2013

Celebrations are in order

I stated in the previous post that all of the work from the past year-and-a-half was done for free.  This is not entirely true.  First, I did receive 1500 Euros for a stale contract.  Two of the papers on arxiv are offshoots of that work.  But more importantly, I received a $10 donation quite recently.  This is actually my first donation, and it's still sitting in my Paypal account--or at least what's left of it after the fees.  Once a few more donations start rolling in I intend to divert them to a dedicated account.

Why did I not announce it sooner?  First of all, and I don't know if this says something about me or about society in general or perhaps both, but when I first received the e-mail, I assumed it was a scammer.  I even filled out an electronic report to the RCMP about it!  It turns out my Paypal account hadn't been verified.  Second, after almost two years of dedicated work as an independent research scientist this is all I have to show for it, at least monetarily.  I realize the necessity, passed down to us from all the great moral teachers, of gratitude.  But I am somewhat embarrassed about this.

Anyway, thanks very much to William Oquendo for your generous donation to the libAGF project!

Jumping on the crowd-funding bandwagon--again

Recently I've become interested in "men's issues" because I believe that modern feminism has gone too far.  This is particularly true in academia where intellectual freedom (assuming it ever existed at all) has practically disappeared.  A Harvard president is asked to resign after he merely suggests that men and women may have different abilities, meanwhile, academic feminists can write the most blatant hate-speech against men and other, so-called, "privileged" groups without fear of repercussion.

One of the characters I've encountered through men's rights activists is Anita Sarkeesian.  She is a feminist "vlogger" who submitted a project to Kickstarter.com for the purpose of examining female stereotypes in video games.  She collected over $158 000.

That's a lot of money for something so trivial.  Women are stereotyped in videogames?  Get over yourself.  As if men aren't stereotyped as well.  It seems inevitable in a medium where the characters are composed of pixels that a certain amount of cartoonishness is going to creep in.  Yes, your female avatar may have an unnaturally large chest and be rather scantily dressed, but so too, in all probability, will your male avatar.  Big deal.

Anyway, this has inspired me to try my hand once again at collecting donations using crowdfunding.  This time, however, I'm going to use generic sites, rather than science specific ones.  I'm also going to focus on those that use a "keep-it-all" funding model.

In the past year-and-a-half alone, I have made over 250 code commits to free software projects, have submitted a total of five articles to the arxiv.org database, as well as dozens of contributions to Wikipedia.  These activities represent advances in diverse fields including statistics, machine learning, numerical computation, atmospheric physics, remote sensing and environmental science, little of which has yet been paid for.  If something as frivolous as tropes in video games can raise over six figures, surely real science deserves at least a small nugget.

So give me money.  You owe it to me.  Or at least somebody does...

Friday, April 12, 2013

A sparse matrix calculator

For the past few months I've been working quite hard on both the ctraj and "libpetey" software.  I hope to have new releases of both within the next month.

One of the most significant additions is a sparse matrix calculator.  This tool allows you to perform operations on sparse matrices much as one might do to simple floating point numbers on a pocket calculator or a calculator app.  This addition is so significant, in fact, that I'm considering creating a whole new project to contain it.

Two of the "peteysoft" projects already output sparse matrices.  The Gaspard-Rice simulator, gr_scat, can output a sparse matrix representing a discretization of the Laplace or Helmholtz equation which models the wave scattering behaviour of the system.  Also, the ctraj tracer simulation outputs a series of sparse matrices that represent the tracer dynamics.  To model a tracer, the initial tracer configuration is multiplied through with these matrices.  Storing them as an array is both more space and time efficient than multiplying them through.

I have already used the matrices output from ctraj in a quite sophisticated analysis: by correlating the singular vectors with a series of sparse measurements (e.g. from a satellite), they can be used in a type of dynamical tracer reconstruction I call "PC proxy."  I submitted a paper about PC proxy analysis to Atmospheric Environment--it can be found here.  The genesis of the sparse calculator project comes about because the reviewers were interested in learning in more detail how the method works, what makes it better than existing methods and how better to parametrize it.  Since I don't really understand the method all that well myself, I decided to write this software tool to conduct a series of numerical experiments.

Sparse matrices can be used to model a large class of partial differential equations.  In the past, the process of matrix multiplication would be coded directly in Fortran.  A similar method is used to compute the tangent model for generating singular vectors from numerical weather models: each time a matrix multiplication is desired, to be fed back to the eigenvalue subroutines, the model is re-run from scratch.  Porting the matrix to a data structure instead is not only faster, it is far more versatile.  Since the matrix is typically quite sparse, the extra storage is not particularly burdensome.
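As a rough illustration of why it pays to keep the matrix as a data structure, here is a minimal coordinate-list (COO) sparse matrix with a matrix-vector product; the libpetey sparse classes are considerably more elaborate than this sketch.

  #include <vector>

  struct sparse_el {int i, j; double value;};

  struct sparse_matrix {
    int m, n;                                // dimensions
    std::vector<sparse_el> el;               // non-zero elements only

    // y = A*x; the cost is proportional to the number of non-zeros:
    std::vector<double> vect_mult(const std::vector<double> &x) const {
      std::vector<double> y(m, 0.);
      for (const sparse_el &e : el) y[e.i] += e.value*x[e.j];
      return y;
    }
  };

Once the dynamics are stored this way, a tracer run is just a loop of vect_mult calls, and the very same stored matrices can be handed to an eigenvalue or SVD routine without re-running the model.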

The basic idea behind the tool is that files generated from other programs are imported and used as variables.  Because these files can become quite large, the matrices are only loaded into RAM when they are used in a calculation.  Except for scalars, variable names always correspond to a file name.

Using the ctraj package, we can generate a series of sparse matrices which represent the tracer dynamics for a given range of dates, say May 1, 2000 to May 5, 2000, and a given vertical level, say 500 K.  First, you will need to download the NetCDF files containing NCEP temperature and wind data: air.2000.nc, uwnd.2000.nc, vwnd.2000.nc. Next, extract and convert the desired velocity data using the nc2ds command:

  > nc2ds -i 2000/5/1 -f 2000/5/6 -T 500 vfield_n.ds
  > nc2ds -- -i 2000/5/1 -f 2000/5/6 -T 500 vfield_s.ds


Next, we run the tracer model on this data:

  > ctraj_tracer vfield_s.ds vfield_n.ds tracer.dat

Now we can find the top 10 singular vectors of the tracer dynamics using sparse_calc and put them in a file called sv.vec:

  > echo "sv.vec=svd(sparse_array(tracer.dat)*transpose(tracer.dat), 10)"\
      | sparse_calc -A 50


Finally, plot and display the first singular vector:

  > plot_frame.sh 0 sv.vec sv0.ps

  > gv sv0.ps


Now let's say you want to do the same thing, but only for the first day's worth of data:

  > echo \
"sv_1day.vec=svd(sparse_array(tracer.dat(0:4))*transpose(tracer.dat(0:4)), 10)"\
| sparse_calc -A 50


The sparse matrix calculator will be available in the next release of libpetey, currently a part of the msci project.  The tool can handle generalized sparse matrices, sparse matrix arrays, full matrices and simple floating point calculations.  There are routines for matrix solution, inversion and eigenvalue calculation.  In addition, there is a generous online help function as well as a man page to quickly get you up to speed using it.

Thursday, February 14, 2013

Additional thoughts on software design: keeping it simple

There are some aspects of software design that are quite difficult to teach and even more difficult to learn.  Like elegance in design.  I remember helping a colleague with a subroutine to randomly permute a list--just like shuffling a deck of cards.  He had this enormous subroutine that generated random numbers then checked them against other numbers.  I just shook my head.  It's counter-intuitive--at least until you figure it out--but the best way I've found is to generate a list of random numbers, sort them, and then use the resulting sort indices to permute the list.  In the language we were working with (Interactive Data Language) it's a one-liner.
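For what it's worth, here is a sketch of the same trick in C++ rather than IDL; the function name is my own and the use of a separate index list (rather than sorting the keys and the list together) is just one way of doing it.

  #include <algorithm>
  #include <numeric>
  #include <random>
  #include <vector>

  // randomly permute a list by attaching one random key to each element,
  // sorting the indices by key, and applying the resulting order:
  template <class T>
  void shuffle_by_sort(std::vector<T> &list, std::mt19937 &rng) {
    std::uniform_real_distribution<double> uni(0., 1.);
    std::vector<double> key(list.size());
    std::vector<size_t> ind(list.size());
    for (double &k : key) k = uni(rng);
    std::iota(ind.begin(), ind.end(), 0);
    std::sort(ind.begin(), ind.end(),
              [&key](size_t a, size_t b) {return key[a] < key[b];});
    std::vector<T> out(list.size());
    for (size_t i = 0; i < list.size(); i++) out[i] = list[ind[i]];
    list = out;
  }

In IDL the whole thing collapses to something like list = list[sort(randomu(seed, n))].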

I don't remember having trouble coming up with this simply because I've worked with simulated random numbers for so long.  I've also used sorting for a unique application: calculation of equivalent latitude.  Check out the Wikipedia page (which I wrote).  I think the algorithm for calculating equivalent latitude is the epitome of elegance and I've discovered a number of very similar tricks for quite different applications.

Not to say that I haven't created my share of bloated code.  I remember once writing a piece of code to find local maxima in a gridded, two-dimensional field.  What I came up with involved dividing the data values into several ranges and using a domain-filling algorithm (not of my own design).  Not very accurate and even less fast.  A simple gradient-ascent algorithm is faster by far (I know because I eventually rewrote it)--or what about just checking for a local maximum in the eight nearest neighbours?  I recall having tried both at the time, but couldn't get either working.  That's the other lesson: sometimes it takes patience to ferret out all the bugs in a piece of code.  It also takes confidence in believing that a particular approach will work.
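Since the simple approach deserves at least to be written down, here is a short sketch of the eight-neighbour check; it assumes a rectangular grid and, for simplicity, excludes ties and boundary points.

  #include <vector>

  struct gridpoint {int i, j;};

  // find interior points that are strictly greater than all eight neighbours:
  std::vector<gridpoint> local_maxima(const std::vector<std::vector<double> > &f) {
    std::vector<gridpoint> maxima;
    for (int i = 1; i + 1 < (int) f.size(); i++) {
      for (int j = 1; j + 1 < (int) f[i].size(); j++) {
        bool is_max = true;
        for (int di = -1; di <= 1 && is_max; di++) {
          for (int dj = -1; dj <= 1; dj++) {
            if (di == 0 && dj == 0) continue;
            if (f[i+di][j+dj] >= f[i][j]) {is_max = false; break;}
          }
        }
        if (is_max) maxima.push_back({i, j});
      }
    }
    return maxima;
  }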

Another particularly embarrassing piece of bloatware I've written involved binning measurement locations from satellite data in the process of interpolating them to a regular grid.  The program did work quite well, but I'm pretty sure the binning actually made the algorithm slower.  I'm not certain of that since I haven't rewritten it yet.  Meanwhile, the indices it generated were gigantic.  (I'm still convinced that some type of binning can produce significant speed gains in nearest-neighbour type problems, but I haven't yet been able to come up with the goods.)

Thursday, February 7, 2013

Stuff that I'm working on

I started this article some time ago, so some of the stuff is already out of date. It's quite long as well, so be warned...

New software release and planned improvements

The Peteysoft Foundation has released a new version of the libagf software and the libpetey scientific library required by it. Changes are fairly minor, but nonetheless important. The Runge-Kutta integrator now has a thread-safe version. The function pointer passed to this routine is compatible with the GSL. New routines have been introduced, including a symbol table class and a "smart" pointer class. More significant improvements are detailed in the next two sections, while many exciting planned new developments follow.

k-least algorithm

The kleast subroutine selects the k smallest elements in an array. Past versions of this algorithm were based on a binary tree and have a theoretical average time efficiency of O(n log k), although tests suggest that the actual running time is slightly better, closer to n + k log k [1]. This is the theoretical best efficiency for a k-least algorithm that sorts the resulting output, which a binary tree necessarily does. According to Knuth, if the output is not sorted, the efficiency can be as good as O(n) [2]. This can be achieved through a modified quicksort algorithm, but not through one based on a heap, which again sorts the output. A new, faster k-least algorithm based on a quicksort has been implemented and applied to the libagf library to pre-select the data and to find the k-nearest neighbours.
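This is not the libpetey implementation, but the idea can be illustrated in a couple of lines with the standard library's quickselect-style partial sort, which runs in O(n) on average and, crucially, does not sort the k selected elements:

  #include <algorithm>
  #include <vector>

  // return the k smallest elements, unsorted; assumes 0 < k <= data.size()
  std::vector<float> kleast(std::vector<float> data, int k) {
    std::nth_element(data.begin(), data.begin() + k - 1, data.end());
    return std::vector<float>(data.begin(), data.begin() + k);
  }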

Solving for the filter width

The old algorithm for solving for the filter width while holding the total of the weights, W, fixed was based on repeated squaring of the weights until the total passed the target value. The final result was found by exponential interpolation between the final value and the previous value. The purpose of this excessively clever method was to reduce the number of computed exponentials--the number is fixed at two per weight. Unfortunately, it was quite finicky, and improper initialization or certain sample distributions (such as duplicate samples) would cause the method to hang. To address these issues, it was replaced with nothing more than a root-finding algorithm, although a fairly sophisticated one. This is the same as used for searching out the class borders: a third-order interpolation method which brackets the roots. I call this root-finding method "supernewton" [1].

While these changes should make the software faster and more robust, they are rather routine and somewhat boring improvements. What follows are some more interesting ideas, in order of interest and complexity. Of course, what makes an idea interesting often also makes it complex and difficult to implement.

The inverse error function and the superNewton algorithm

The inverse error function is quite useful since it tells you, for a given integrated probability and Gaussian statistics, how far from the centre of the distribution you need to be. So for instance, if you have n normally distributed random samples, you can use the inverse error function to figure out approximately where the most distant sample should lie. Unfortunately, most math libraries don't include a numerical implementation of it. I figured, no problem, I'll use my powerful numerical inverse method "superNewton" to implement my own. Rather than giving me a simple, efficient math function in record time, it revealed to me some rather striking limitations in the algorithm as it is currently implemented. Most importantly, it tends to get stuck always re-bracketing the root on the same side. This could probably be fixed quite easily, but at the cost of some extra book-keeping. Simply track how many times the estimated root has landed on the same side of the actual root as well as the relative distance between the previous left and right brackets. If these values are too high, the estimated root can be shifted so that it's now on the opposite side of the actual root. I would suggest doubling the difference between the estimated root and the closest bracket.
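As an aside, if all you need is the inverse error function itself and speed is not critical, a plain bisection over std::erf avoids the re-bracketing problem entirely; this is only a fallback sketch, not the superNewton algorithm.

  #include <cmath>

  // solve erf(x) = y by bisection; assumes -1 < y < 1
  double inverse_erf(double y, double tol = 1e-12) {
    // erf is monotonic and erf(+-6) rounds to +-1 in double precision,
    // so [-6, 6] safely brackets the root for any representable y in (-1, 1)
    double x1 = -6., x2 = 6.;
    while (x2 - x1 > tol) {
      double xm = 0.5*(x1 + x2);
      if (std::erf(xm) < y) x1 = xm; else x2 = xm;
    }
    return 0.5*(x1 + x2);
  }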

Another issue is tolerance of the root. This is exacerbated by the problem just discussed: very low x-tolerances can't be used. If we are using y-tolerances, then for the error function, clearly they must be varied depending on the value of x. For a root-finding algorithm, the idea of "relative tolerance" doesn't have much meaning [3].

Cross-validation of PDF estimates

Cross-validation or n-fold cross-validation are simple and reliable ways of testing the accuracy of a supervised statistical classification algorithm. Something similar would be nice to have for probability density function (PDF) estimates as well. Unfortunately, unlike that for a classification problem, actual values of the PDF are not contained in the training data for a probability estimation problem and the underlying PDF is frequently not known. This can be solved quite readily by adding an extra step. A new dataset with an equivalent distribution to the existing training data is first simulated using the same method as a Metropolis Monte Carlo [3]. This dataset is not used for PDF estimation; rather, the original dataset is randomly split up, PDFs are calculated for each portion at the locations in the simulated dataset and then compared, e.g. through cross-correlation. Note that the parameters used (k for KNN, W for AGF) will need to be scaled up in proportion (i.e. by n if this is considered a form of n-fold validation) when using the full dataset.

This has now been implemented in version 0.9.6.

Binning

There are so many applications for an efficient, multi-dimensional binning algorithm, I don't even know where to begin. For instance, in the surgery step of contour advection with surgery, a nearest-neighbour search must be performed for every pair of points to check whether they are too close or not. Kernel estimators are more efficient if the samples used are restricted to a subset near the centre of the kernel function. Binning can also be used for collocation routines, used for instance for creating satellite inverse methods.

In the absence of binning, the time efficiency of finding a set of k nearest neighbours is of order n, where n is the number of samples. With binning, we can do one of two things: we can either reduce the coefficient, or we can get it down to order k log n.

In the first case, the standard, rectangular binning method, in which each dimension is evenly divided into several blocks, would apply. Ideally, the bins should be large enough that most of them contain several samples, but small enough that the number of samples in each bin is significantly smaller than the total. As the number of dimensions increases, the number of bins becomes unmanageably large, while the fraction occupied also gets smaller. I have designed binning routines of this type for application over the surface of the globe. Oddly, they don't provide any speed advantage over simply running through all the samples. Either there is too much book-keeping and overhead, or there are optimizations implemented in the processor, the compiler or both that already provide a similar advantage.

An interesting solution to the multiplication of bins at higher dimensions and the presence of empty bins is to discretize each of the coordinate samples based on the bin size. Each dimension is assigned a rank and the resulting list of integer numbers sorted as you would sort any other integer. The storage requirement is typically no more than double the original. If the bin size is set so that the average number of samples per bin is very low, on the order of one, then we can build up the number of bins to check in a concentrically expanding hypersphere until we have the required number of samples. While this idea looks good on paper, the required amount of book-keeping and overhead is far more than simply searching through each of the samples one-by-one, especially since finding a given bin is now a log n operation.
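A bare-bones version of the discretize-and-sort idea might look like the following; the names are made up and the expanding search over neighbouring bins is left out.

  #include <algorithm>
  #include <cmath>
  #include <vector>

  typedef std::vector<double> point;       // D-dimensional coordinates
  typedef std::vector<long> binkey;        // D integer bin indices

  // quantize each coordinate by the bin size:
  binkey bin_index(const point &x, double binsize) {
    binkey key(x.size());
    for (size_t d = 0; d < x.size(); d++) key[d] = (long) std::floor(x[d]/binsize);
    return key;
  }

  // sort the samples on the (lexicographically compared) integer keys so that
  // members of the same bin are contiguous and any bin can be located by
  // binary search in log n time; recomputing the keys inside the comparator
  // is wasteful but keeps the sketch short:
  void sort_into_bins(std::vector<point> &samples, double binsize) {
    std::sort(samples.begin(), samples.end(),
              [binsize](const point &a, const point &b) {
                return bin_index(a, binsize) < bin_index(b, binsize);
              });
  }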

Both these examples use a somewhat naive rectangular binning strategy. A more sophisticated technique would use an adaptive binning strategy that depends on the actual distribution of the data--something like a KD-tree. I still have yet to try these out: coding one up would be far from trivial, while I find I have a distinct aversion to black boxes, so I tend to use pre-compiled libraries rather sparingly. The KD-tree also suffers from an excess of book-keeping, as even searching for a single nearest neighbour requires checking every adjacent bin, the number of which grows with the number of dimensions, and these bins often lie on very different branches of the tree. Searching for k nearest neighbours or all neighbours within a radius would be even more problematic.

Optimal AGF

The current version of Adaptive Gaussian filtering is quite useful and has a reasonable justification. We can, however, write an expression for the error of a kernel density estimator as a function of the bandwidth. There are two terms, a variance term and a bias term:

e^2 = (P/(n h^D))^2 + (h^2 \nabla^2 P / n)^2

calculated respectively in the limit as the filter width becomes very small and in the limit as it becomes very large. Here, P is the actual density function, n is the number of samples, h is the bandwidth and D is the number of dimensions. The actual density function, P, is of course not known. But there is no reason why we cannot fit the function through repeated estimates using different bandwidths. Since e = P - f, where f is the estimated density, we can rearrange the above equation as follows:

n^2 (f - P) = (1/h^D - n^2) P^2/f + h^4 (\nabla^2 P)^2/f

Since we have several different samples, we can construct a system of equations by substituting values for f and h. The constant term, P, on the left-hand-side can be eliminated by subtracting different equations in the system:

(1/h^D - n^2)(1/f_i - 1/f_j) P^2 + (h_i^4/f_i - h_j^4/f_j) (\nabla^2 P)^2 = n^2 (f_i - f_j)

so in theory we need a minimum of three samples to solve for P and \nabla^2 P. In practice, the method should be more stable if we take more than the minimum number of samples and perform a least-squares fit.

I recently took a second look at this project--I've had the method implemented for some time but it didn't work. I've now got it returning reasonable results for the optimal filter width, at least when it works at all, though not if you take the P^2 factor from the above equation. Part of the problem is that the matrix is very ill-conditioned--the two variables we are solving for are separated by many orders of magnitude, as are the different elements of the matrix.

Sparse matrix calculator

I have recently put out a paper that describes a powerful dynamical tracer reconstruction method called principal component proxy analysis [4]. The method is based on decomposing a matrix representation of the tracer dynamics using principal component analysis (PCA). Each PC is then regressed against a series of (typically sparse) tracer measurements and the entire field is thus reconstructed. The manuscript was sent to Atmospheric Environment but was unfortunately rejected.
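In very rough terms, and with invented names rather than the notation from the paper, the regression-and-reconstruction step looks something like this, assuming the leading PCs of the matrix representation are already in hand:

import numpy as np

def reconstruct_field(V, k, obs_idx, y):
    """Keep the k leading principal components (the columns of V), regress
    them against the sparse measurements y taken at the grid points listed
    in obs_idx, and rebuild the whole field from the fitted coefficients."""
    # the columns of V might come from, e.g., a singular value
    # decomposition of the matrix representation of the tracer dynamics
    Vk = V[:, :k]
    c, *_ = np.linalg.lstsq(Vk[obs_idx, :], y, rcond=None)
    return Vk @ c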

The reviewers wanted, first, the manuscript fleshed out somewhat, and second, a better way of determining the parameters of the method: how many principal components to use and how long a lead time to use. I am attempting something more general here: to build a simple sparse matrix calculator so that I can perform a series of numerical experiments with the matrices generated by the tracer simulation. I'm hoping that this will help me better understand just how the method works: for instance, how is it possible to correlate measurements from before the matrix was integrated and still get good results?

The sparse matrix calculator has already been implemented and has even passed a few very basic tests. I still need to test it with actual, practical matrices like those from the tracer simulation, and of course it needs many refinements and features to make it complete, but the basic idea is there and it's running. At the moment I'm working on refactoring the entire trajectory suite and I'd like to finish that first before continuing with this.

Multi-class classification with AGF borders

A discrimination border is only good for discriminating two classes. If you want to be more discriminating, you need more than one border. This is something I've been planning for a long time, but have made very little progress on. The problem is that there is no one-size-fits-all solution: you can divide up the classes in many different ways and still get usable results, and the best way to divide them may be highly problem-dependent. For instance, in the case of a discretized continuum variable, you'd probably want to make sure that all the borders only separate "adjacent" classes, that is, ranges that are next to each other in the discretization scheme. On the other hand, if there are four classes but they can be reduced to two overlapping sets of two classes, then that right there tells you how to divide them up.

There are two major ways you can divide up the classes: in the first case so that all of the conditional probabilities can be solved for, and in the second case so that only the conditional probability of the "winning" class is returned. In the first case, you need at least (n-1)/2 borders--that is, for each border you can write two equations relating the conditional probabilities, plus one more for the entire system (they sum to unity). In the second case, the method is hierarchical: divide the set of classes into two groups, divide each of those groups into two more, and so on. Consider the case of dividing up four classes:

x | o
-----
+ | *

The four classes are x, o, + and *. In the first case, we think of this diagram as representing two borders: one horizontal, and one vertical. In the second case, consider first a horizontal border, and then two vertical borders. It stands to reason that the two methods can also be combined.
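As a sketch of the second, hierarchical scheme applied to these four classes, with each two-class border standing in as a function that returns a winner and its conditional probability (all the names here are placeholders):

def classify_hierarchical(p, top_vs_bottom, x_vs_o, plus_vs_star):
    """Hierarchical multi-class classification built from two-class
    borders: one 'horizontal' border separating {x, o} from {+, *},
    followed by one of two 'vertical' borders.  Each border function is
    assumed to return (winner, conditional probability); only the winning
    class's probability comes back, here as the product along the path."""
    group, p1 = top_vs_bottom(p)
    if group == 'top':
        cls, p2 = x_vs_o(p)
    else:
        cls, p2 = plus_vs_star(p)
    return cls, p1 * p2

With n classes, a scheme like this uses n-1 borders in total, but only about log2 n of them need to be evaluated for any given test point.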

References

[1] Peter Mills (2011). "Efficient statistical classification of satellite measurements." International Journal of Remote Sensing, 32 (21): 6109-6132.

[2] Donald Knuth (1998). The Art of Computer Programming, Volume 3: Sorting and Searching, 2nd edition. Addison-Wesley.

[3] William Press, Saul Teukolsky, William Vetterling and Brian Flannery (1992). Numerical Recipes in C, 2nd edition. Cambridge University Press, Cambridge.

[4] Peter Mills (2012). "Principal component proxy tracer analysis." arXiv:1202.2194.

[5] Peter Mills (2006). "The influence of noise on a classical chaotic scatterer." Communications in Nonlinear Science and Numerical Simulation, 11 (8): 899-906.

Thursday, January 31, 2013

Bias in drug trials

I just came across this talk and I think it is very important. It deals with the tendency for journals and researchers to publish only positive results while ignoring or rejecting negative ones. The problem exists not just in clinical medicine, but in every area of science. Science is a very competitive business and positive results often herald a "new discovery." By contrast, we tend to think that a negative result says very little, so it's in scientists' own best interest to publish only positive results--assuming the journal will even accept the negative ones.

From my own experience, I have seen scientists not only picking the best exemplar from a series of experiments and publishing only that one, but also essentially cherry-picking the data by using some creative averaging. I also know that I am not immune from this myself.

Some music for your enjoyment




Sunday, January 13, 2013

The known and the unknowable

While I was still an undergraduate at the University of Waterloo, I attended a lecture by the Canadian philosopher William Hatcher detailing a logical proof of the existence of God.  Having recently gone through something of an existential crisis, I had thoughts of God and spirituality very much on my mind.  Here was something that I could really sink my teeth into: in contrast to the blind dogma fed to me by the fundamentalist Christians I was hanging out with at the time, Hatcher took more of a scientific approach.

At the same time, I was taking a philosophy course on science and religion. As soon as I heard the talk, I knew I had to write an essay on it. What I came up with, which can be found here, turned out to be rather critical: at first I was extremely impressed with the simplicity of the argument, but I soon discovered the hole--detailed in the essay.

Hatcher has written other essays that attempt to apply the tools of mathematics to metaphysical problems, such as "A Logical Solution to the Problem of Evil," which can be found in "Logic and Logos: Essays in Science, Religion and Philosophy." While I don't believe either example is entirely successful, the approach is nonetheless greatly appreciated.  Sadly, Prof. Hatcher is no longer with us; he passed away in 2004.  His legacy, however, is alive and well, and his estate is attempting to compile his work online: http://william.hatcher.org/.

The point I wanted to make with this essay was not about God, nor about the problem of evil; it was rather about knowledge itself.  I have looked at a number of other proofs of God, and while they are all interesting and ultimately quite audacious, none of them is really commensurate with the more useful proofs in science and mathematics.  What if certain questions, such as "Is there a God?", simply cannot be answered?

A more obvious example would be certain of the more extreme claims in the interpretation of quantum mechanics, such as, "The moon truly isn't there when we aren't looking at it." That is, when something is not being observed, it no longer exists. Such an assertion seems clearly absurd--the only way we ascertain existence is through observation--and for the same reason it is unanswerable.

Multi-valued logics have a long and somewhat chequered history, culminating in the current, slightly controversial extreme of "fuzzy logic."  In fuzzy logic it's not that there are more than two truth values; rather, truth exists on a continuum between 100% false and 100% true.  It is nonetheless in the same spirit.

My own contribution is somewhat more modest (and likely not original).  I propose adding a single extra truth value, namely that something cannot be known.  Here are the truth tables:

AND

   0 1 X

0  0 0 0
1  0 1 X
X  0 X X


OR

   0 1 X

0  0 1 X
1  1 1 1
X  X 1 X


NOT

0  1
1  0
X  X


Here, X denotes that a proposition "cannot be known" and functions somewhat like a NaN or Inf in floating-point arithmetic, in the sense that once one creeps into a calculation, the tendency is for it to propagate itself throughout.
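The tables translate directly into a few lines of code (using the string 'X' for the third value; the function names are my own):

# truth values: 0 = false, 1 = true, 'X' = cannot be known

def not3(a):
    return a if a == 'X' else 1 - a

def and3(a, b):
    if a == 0 or b == 0:        # a single definite false settles an AND
        return 0
    if a == 'X' or b == 'X':    # otherwise unknowability propagates
        return 'X'
    return 1

def or3(a, b):
    if a == 1 or b == 1:        # a single definite true settles an OR
        return 1
    if a == 'X' or b == 'X':
        return 'X'
    return 0

print(and3(1, 'X'), or3(0, 'X'), not3('X'))    # X X X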

As I've gotten older, I haven't lost my interest in questions of God, metaphysics and spirituality, although I try now to take them a bit less seriously. If you want to waste a lot of time, log onto one of the discussion boards debating the existence of God.  The same tired arguments are brought up again and again and the same tired counter-arguments brought up in response.

One development that interests me is the current crop of fervently atheist authors such as Richard Dawkins. Part of their argument against God and religion is that science has become so powerful in its explanatory ability that faith has lost any purchase it might once have had. My own belief is the exact opposite: science and faith actually complement one another perfectly.  That being said, we should guard against a "God of the gaps," wherein we attribute to God and spirituality whatever it is that science has so far failed to conquer, so that the domain of spirituality becomes smaller and smaller.  Once we acknowledge, however, that certain things cannot be known, we have an enormous zone wherein faith and spirituality can thrive.  I like to think of the boundary between the known and the unknowable as a "jumping off point."