I guess I've succumbed to pressure and moved over to GitHub. SourceForge was down for over a week and there has been a lot of bad press about it lately. I've rolled all of my software projects of any importance into a single scientific mega-library which you can find here:

libmsci

Not sure if this is a good approach. Probably not. I was just getting sick of dependency issues especially during release time. None of the libraries are all that big, so I thought, why not just roll them into one?

Also, be sure to check out the new homepage:

Peteysoft.github.io

## Sunday, July 26, 2015

## Monday, June 1, 2015

### Test driven development

Although I don't consider myself a computer programmer, no matter how much code I churn out, I nonetheless feel I ought to keep up with some of the latest trends in software development. Don't get me wrong: I suspect that many of these trends are a lot of hot air and that they go in and out of fashion. Certainly object-oriented programming is over-rated.

One idea that's caught my attention is so-called test-driven development. I know that I don't write enough tests for my computer programs.

Unlike some other current fashions such as functional programming, this is something I can put into practice right away. Again, I have my doubts about it so I will describe my initial attempts here. I guess this is a somewhat boring topic, delving once again into the minutiae of scientific programming--hopefully I will have the wherewithal to write about more far-reaching and interesting topics at some later point.

To start with, I've picked an object class that's easy to test. I have a family of classes that select out the k-least elements in a list. Because some methods, such as a tree or heap, don't require storing the whole list, the object works by adding each item element-by-element and then extracting the k-least. The base class looks like this:

```cpp
template <class type>
class kiselect_base {
  protected:
    long ncur;
    long k;
  public:
    kiselect_base();
    kiselect_base(long k1);
    virtual ~kiselect_base();

    //add an element, set associated index to element count;
    //return current size of data structure:
    virtual long add(type val)=0;

    //add an element and set the index; returns current size of data structure:
    virtual long add(type val, long ind)=0;

    //marshall out the k-least and associated indices:
    virtual void get(type *kleast, long *ind)=0;

    //test the selection algo:
    int test(long n);      //n = number of elements
};
```
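The base class is abstract, so nothing here runs on its own. Purely as an illustration (this is not the libmsci code, and `kiselect_naive` is a made-up name), a minimal concrete class following the same add/get protocol might look something like this, using simple insertion to keep the k least elements seen so far:

```cpp
#include <algorithm>
#include <vector>

//hypothetical sketch of the add/get protocol described above;
//not the libmsci implementation:
template <class type>
class kiselect_naive {
    long k;
    std::vector<type> val;      //current k (or fewer) least values, sorted
    std::vector<long> ind;      //indices of those values in the input stream
    long ncur;                  //elements added so far
  public:
    kiselect_naive(long k1) : k(k1), ncur(0) {}
    long add(type v) {
      //insert in sorted order; drop the largest once we exceed k elements:
      long pos=std::lower_bound(val.begin(), val.end(), v)-val.begin();
      val.insert(val.begin()+pos, v);
      ind.insert(ind.begin()+pos, ncur++);
      if ((long) val.size()>k) {val.pop_back(); ind.pop_back();}
      return val.size();
    }
    void get(type *kleast, long *index) {
      for (long i=0; i<(long) val.size(); i++) {
        kleast[i]=val[i];
        index[i]=ind[i];
      }
    }
};
```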

I wrote a family of these things because I wanted to test which version was fastest. It turns out that it makes very little difference: even a naive method based on insertion or selection, with supposed O(nk) performance, is almost as good as a more sophisticated method based on quick-sort with supposed O(n) performance. In addition to the k-least elements of the list, this version returns a set of numbers that index into the list in case there is auxiliary data that also needs to be selected. This also makes it slightly easier to test.
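As an aside, the quick-sort-style selection mentioned above is essentially what the C++ standard library provides as std::nth_element (quickselect, O(n) on average). A sketch, not the libmsci implementation:

```cpp
#include <algorithm>
#include <vector>

//illustrative only: select the k least elements via the standard
//library's quickselect (average O(n)):
std::vector<double> kleast_quick(std::vector<double> list, long k) {
  //partition so the k smallest elements come first:
  std::nth_element(list.begin(), list.begin()+k, list.end());
  list.resize(k);
  return list;      //the k least, in no particular order
}
```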

As you can see I've already added the test method. Sticking it in the base class means that all of the children can be tested without any new code. This is precisely in keeping with my thoughts about test routines: they should be as general as possible. Forget having a small selection of specific test cases: that's a recipe for disaster. Your code may just happen to pass all of them while still being incorrect; indeed, it's trivial to write code that passes every specific test case without being anywhere close to correct.

Rather, we need to be able to generate as many random test cases as we need. Five test cases not enough? How about 100? How about one million? Here is my first crack at the problem:

```cpp
//trying to move towards more of a test-driven development:
template <class type>
int kiselect_base<type>::test(long n) {
  int err=0;
  type list[n];
  type kleast[k];
  long ind[k];
  long lind;      //index of largest of k-least
  int flag;

  //generate a list of random numbers and apply k-least algo to it:
  for (long i=0; i<n; i++) {
    list[i]=ranu();
    add(list[i]);
  }
  get(kleast, ind);

  //find the largest of the k-least:
  lind=0;
  for (long i=1; i<k; i++) {
    if (kleast[i]>kleast[lind]) lind=i;
  }

  //largest of k-least must be smaller than all others in the list:
  for (long i=0; i<n; i++) {
    //exclude the k-least themselves from the comparison
    //(this is not efficient):
    flag=1;
    for (long j=0; j<k; j++) {
      if (ind[j]==i) {
        flag=0;
        break;
      }
    }
    if (flag && kleast[lind]>list[i]) {
      err=-1;
      break;
    }
  }
  return err;
}
```

Notice how we first generate a random list of any size so that we have effectively an infinite number of test cases. But wait, this is kind of inefficient: it runs in O(kn) time. Here I'm kind of torn: on the one hand the test algorithm should be as simple as possible so that it is easy to verify, but on the other, there seems to be no excuse for verification to take longer than the algorithm itself! No question that in this case, it should take at most O(n) time, as this version demonstrates:

```cpp
//trying to move towards more of a test-driven development:
template <class type>
int kiselect_base<type>::test(long n) {
  int err=0;
  type list[n];
  type kleast[k];
  long ind[k];
  long lind;      //index of largest of k-least
  int flag[n];

  //generate a list of random numbers and apply k-least algo to it:
  for (long i=0; i<n; i++) {
    list[i]=ranu();
    add(list[i]);
  }
  get(kleast, ind);

  //find the largest of the k-least:
  lind=0;
  for (long i=1; i<k; i++) {
    if (kleast[i]>kleast[lind]) lind=i;
  }

  //set flags to exclude all k-least from the comparison:
  for (long i=0; i<n; i++) flag[i]=1;
  for (long i=0; i<k; i++) flag[ind[i]]=0;

  //largest of k-least must be smaller than all others in the list:
  for (long i=0; i<n; i++) {
    if (flag[i] && kleast[lind]>list[i]) {
      err=-1;
      break;
    }
  }
  return err;
}
```

The problem we run into here is that the test algorithm is becoming quite complicated: almost as complicated as the original algorithm itself! In some cases, such as parsing, it may need to be just as complex. How do we test the test code?

Well, obviously there's a boot-strapping problem here! At some point, we need human discretion and judgement. My preferred test engine, and I have a small number of these lying around, is one that allows you to manually input any desired test case and then display the result.

Probably the best example of this approach is the date calculator I wrote to test a time class. The time class (as well as the calculator that wraps it) allows you to make arithmetic calculations with dates and times and print them out in a pretty format. Here is an example session that calculates the number of days between today and Christmas:

```
$ date_calc.exe
%d%c>(2015/12/25_2015/06/02)|1-0:0:0
206
%d%c>
```


Note that the minus sign (-) and the forward slash are already used in the date format so we substitute an underscore (_) and a vertical line (|) respectively for the equivalent arithmetic operations. Another example is the test wrapper for the following option parsing routine:

```cpp
//returns number of options found;
//if there is a non-fatal error, returns -(# of found options)
int parse_command_opts(int argc,      // number of command line args
        char **argv,                  // arguments passed to command line
        const char *code,             // code for each option
        const char *format,           // format code for each option
        void **parm,                  // returned parameters
        int *flag,                    // found flags
        int opts=0);                  // option flags (optional)
```

This subroutine is a lot more code-efficient than getopt. There is a brief set-up phase in which you set each element of the void parameter list to point to the variable in which you want to store the option parameter. Options without arguments can be left null and use a null parameter code, "%". As a simple example, suppose you want to return the parameter of the -d option to the integer variable, d:

```cpp
int main(int argc, char **argv) {
  int d;
  void *parm[1];
  int flag[1];
  int err;

  parm[0]=&d;
  err=parse_command_opts(argc, argv, "d", "%d", parm, flag, 1);
  ...
```

The test program scans options for all possible format codes using an option flag that's (usually) the same as the code and prints out the parameters to standard out. We can do it with whitespace (-b option):

```
$ ./test_parse_opts.exe -b -g 0.2 -i 20 -c t -s teststring
./test_parse_opts -g 0.2 -i 20 -c t -s teststring -b
number=0.2
integer=20
string=teststring
char=t
Arguments: ./test_parse_opts
```

or without:

```
$ ./test_parse_opts.exe -g0.2 -i20 -ct -steststring
./test_parse_opts -g0.2 -i20 -ct -steststring
number=0.2
integer=20
string=teststring
char=t
Arguments: ./test_parse_opts
```

Of course, this is one reason why interactive interpreters are so great for rapid development. You don't have to write all this (sometimes very complex) wrapper code to test functions and classes. Just type out your test cases on the command line.

*UPDATE: I realize that the test_parse_opts wrapper is a bad example since it's quite limited in the number of test cases you can generate. Therefore I've expanded it to accept an arbitrary list of option letters with corresponding format codes to pass to the function:

```
$ ./test_parse_opts -b -p adc -f %d%g%s -a 25 -d 0.2 -c hello
./test_parse_opts -a 25 -d 0.2 -c hello -b -p adc -f %d%g%s
-a (integer)=25
-d (float)=0.2
-c (string)=hello
Arguments: ./test_parse_opts
```


## Sunday, May 3, 2015

### Is agnosticism a reasonable belief?

There is an argument against God that I've lately been seeing repeatedly. It goes something like this: "just because you can't prove there's a tiny teapot orbiting Venus, doesn't mean it's not there" or "just because you can't prove that the Easter Bunny is real doesn't mean that he isn't." The implication is that both these situations are so unlikely that we might as well assume that they are false, and that such statements are therefore logical fallacies.

A God or gods, unfortunately, are not the equivalent of the Easter Bunny or a tiny teapot orbiting Venus or a dragon flying around Saturn's rings or whatever other absurd object you might be able to dream up.

Consider the meteoric advance in computer simulation technology: there's little doubt that virtual reality is just around the corner. A related practice is the design of artificial life simulations. It is possible that we are all a simulation inside of a giant computer. Would it not be reasonable to call the creator(s) and/or master(s) of this simulation a god or gods?

The principle of sufficient reason seems to dominate our perception of reality. It is reasonable to suggest that there might be a first cause or prime mover. Might this not be God?

The very nature of God is that He is a "higher power" and therefore quite beyond our ken. It is arrogant and naive to assume that we are the highest level of understanding that this universe has attained.


## Saturday, March 28, 2015

### For the love of solitude

Last week I spent some time in a small cabin ("chalet") in the woods. At the time it was completely deserted: the silence was delicious. Finally I could breathe. Finally my thoughts were my own. I am always struck in these moments, first, how essential they are to someone of my temperament, and second, how it clears the mind so that the real thinking may finally begin.

Such solitude is increasingly hard to find. I point out in another article how the world is becoming a panopticon. Soon there will be satellites with sufficient resolution and coverage that they can observe us in real time. Not even walls will be enough to conceal us: the combination of penetrating microwaves, tomography and synthetic apertures will soon allow us (or rather our nosy leaders) to map the insides of buildings in 3-D from space.

I think I finally understand what's going on here; why I am driven to find more and more extreme isolation, so much so that loneliness and anxiety frequently overcome any peace-of-mind that might be gained from the endeavour.

When you observe something, it changes. Many interpretations of quantum mechanics have been attempted, some of them rather flaky:

*something is not there when you are not looking at it; there is a mystic union between the observed and the observer such that they cannot be distinguished*. Let's not even get into all the different "many-worlds" hypotheses.

No, it is much simpler than that, and from the mathematics, undeniable. Chances are you can walk into any toy store today and buy a simple device consisting of an array of needles free to slide within a series of holes. Using this device, you can take a temporary cast of your face (or any other object for that matter) by simply pressing into it. Now at the same time your face is pushing these needles outwards, the needles are creating pock-mark depressions on your skin. Granted, because skin is elastic, it will almost certainly spring back to its previous form, although you might feel it for some time afterwards.

Rays of light are just like those needles. When you take leave to walk in the woods or cycle in the mountains, you are reclaiming your thoughts. You are becoming fully yourself again, because as long as others watch you, your thoughts are not your own.

It might well be my epitaph. Such solitude is no longer supportable. I guess I consider myself a dying breed.

### How to deal with objects

In my coding standards, I state that I strive for a predominantly functional programming model, even when using an imperative, procedural language such as C++. All functions as well as main routines should, in the ideal case, take a set of inputs and return a set of outputs while avoiding any kind of side effect. This leaves open the question of how to deal with objects which I use quite often. Object-oriented programming doesn't quite fit the functional paradigm, so here is my attempt to lay out an approach consistent with my earlier standards.

Class data ("fields") can be divided into two, broad categories: those that define the object and workspace variables. As an example of the first type, consider a matrix class. A matrix object would be defined by its dimensions and the values of each individual element. These can be modified by the class methods such that the method can be considered an operator on the object. So for instance the matrix class might have a "transpose" or "invert" method. More basic operators might include subscript assignment, permuting rows or columns, and in particular, reading from and writing to a file.

The other type of class data are workspace variables. Using the matrix example: internally the inverse method might comprise two steps--the first to decompose the matrix and the second to perform some other operation on the decomposition. The decomposition is stored inside of the object. This approach gives a minor performance advantage: if we know in advance the size of the workspace variable, it can be pre-allocated as soon as the object has been defined. It also simplifies memory management.

My current view on object use and modification is rather extreme: data fields should be filled completely at or near the moment of creation and from that moment on the object should be considered more-or-less immutable. Methods may take other objects as arguments and return newly-created objects, but modifications to an existing object should be avoided in favour of deleting it and creating a new one in its place.

Workspace variables in particular should be avoided--they are, in fact, global variables in disguise. This is especially so if they are used in a context that doesn't otherwise modify the more essential object data, where we would like to keep the methods re-entrant and the code thread-safe. Operators that modify the object, especially simple ones such as subscript assignment, can always be made atomic. By contrast, workspace variables by their very nature frequently span multiple methods. Computers are now so fast and memory management so efficient that the old justification of improving efficiency is nullified: enabling the program to run on multiple processors will dwarf any marginal gains that could be had from workspace variables.
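To make the distinction concrete, here is a hypothetical sketch (not taken from any of my libraries; `simple_matrix` is a made-up name) of a class in the style advocated above: defining data set at construction, no workspace variables, and methods that return new objects rather than mutating:

```cpp
#include <vector>

//illustrative sketch only: defining data filled at construction and
//treated as immutable thereafter; no workspace variables:
class simple_matrix {
    long m, n;                   //dimensions
    std::vector<double> data;    //element values; defining data only
  public:
    simple_matrix(long m1, long n1, const std::vector<double> &d)
        : m(m1), n(n1), data(d) {}
    double el(long i, long j) const {return data[i*n+j];}
    //transpose returns a fresh object; *this is untouched:
    simple_matrix transpose() const {
      std::vector<double> t(m*n);
      for (long i=0; i<m; i++) {
        for (long j=0; j<n; j++) t[j*m+i]=el(i, j);
      }
      return simple_matrix(n, m, t);
    }
};
```

Because nothing is modified in place and no scratch state persists between calls, `transpose` stays re-entrant: two threads can call it on the same object without locking.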

I admit that most of my current code isn't anywhere close to conforming to these standards although I rarely use workspace variables and have started to actively remove them from my O-O code.


**Addendum**: I've just re-read this post and realized that the example I've used for a workspace variable is not a very good one. The matrix decomposition depends on the data that define the object and in fact is simply another representation of it. Assuming there is space, the decomposition can be computed once and then kept until the death of the object. A better example would be the multi-class classification objects in the *libAGF* project. These build up a multi-class classifier from a set of binary classifiers. You feed in a test point and the object returns the conditional probabilities of each of the classes. These probabilities are calculated from the probabilities returned by each of the binary classifiers. In earlier versions of the object classes, the probabilities from the binary classifiers were stored first in a workspace variable before being passed to the next stage of the computation.

## Tuesday, February 17, 2015

### Quine

A quine is a program that prints out its own listing. In other words, if it's a C program, you can do something like this:

```
$ gcc quine.c
$ ./a.out > quine2.c
$ gcc quine2.c
$ ./a.out > quine3.c
$ diff quine3.c quine.c
$
```

Writing a quine is not as trivial as it sounds. Here is the quine I wrote in C which takes advantage of the format codes in the printf statement to insert a partial listing of the program into itself:


```c
#include <stdio.h> 
int main() {char listing[200]="#include <stdio.h> %cint main() {char listing[200]=%c%s%c; char delim=10; char delim2=34; printf(listing, delim, delim2, listing, delim2);}"; char delim=10; char delim2=34; printf(listing, delim, delim2, listing, delim2);}
```

## Sunday, January 18, 2015

### The Overseer

One of the more interesting things I did this summer was visit an old-growth forest. While the scale of these trees was truly astounding, what struck me most was just how little of the tree was canopy and how much was trunk. Most of the mass of these trees is there to hold up other parts of the tree! Only a tiny fraction is actually devoted to gathering sunlight.

Moreover, there will be an equilibrium density for the trees. If they are clustered together too tightly, then one or more of their number will fall. If they are too sparse, new trees will grow up in the gaps. In other words, in their natural state, they live a marginal existence: clinging to the edge of life.

That is, unless there is someone tending the forest who culls the trees as necessary. Then those remaining can grow up healthier and stronger, with thicker trunks relative to their height and a larger, broader canopy.

## Monday, January 12, 2015

### What is spirit?

In two or more posts, I've talked a little bit about "spirit". If you're a skeptic and you've been reading this, you probably think: that's nonsense, there's no such thing as spirit, it doesn't exist. Probably not, but then again, there isn't any such thing as energy either, at least not in the sense that you can measure it directly. It is a *derived quantity*: it always has to be calculated from two or more other quantities. Nonetheless it's still an extremely useful quantity in physics.

In particular, I wanted to clarify some statements I made in the post called My Dream. I thought about simply writing an addendum, but why not create a brand new post? In that post I talk about negative spirit flowing out of Washington D.C. through a network of trails leading, ultimately, to long-distance scenic trails like the Appalachian Trail. How does that work?

*What is the mechanism behind this?* Simple. The people walking (or cycling) towards the city are smiling. Those moving away are not.

I experienced this directly. I picked up two hikers from the A.T. and let them sleep over in my apartment. The next day, we made arrangements to meet after I had finished work and they had finished sight-seeing. Later, I began to worry that I would have trouble finding them since we had agreed to meet at one of the busiest subway stations during rush hour. It turns out they were easy to spot: they were the only ones smiling.

## Sunday, January 11, 2015

### Are miracles possible?

What about the double-slit experiment? That's kind of a miracle isn't it? Point an electron gun at a pair of slits and then look at where they land on a screen behind. It doesn't matter how fast you fire the electrons, they will, over time, form an interference pattern. In other words, they are behaving like waves. OK, so now you're wondering, if they are behaving like waves, yet they're being fired one at a time, what is interfering with what? Does each electron interfere with itself, or do they interfere with each other, somehow "seeing" the other electrons both forward and backward in time?

Let's design an experiment to test these ideas: flash a strobe light every time you fire an electron, so you can figure out which slit it passes through. The interference pattern disappears.

I've heard any number of interpretations of quantum mechanics:

*An object isn't really there when you aren't observing it. There is a mystical connection between observer and observed. The distinction between subject and object is an illusion.* And so on. But at least in this case, I think the interpretation is much, much simpler. When you observe something, it changes. Suppose instead of using the term

*observe*, we substitute the word *probe*. To probe is almost by definition to interfere with, since a probe (as a noun) is almost always something material and solid. Granted, if you drop a sounder into the ocean to measure depth, for instance, the ocean floor won't change much in response, certainly less than measurement error or the local variance. But if you fire a photon at a sub-atomic particle, it will change the velocity of that particle, sometimes by a large amount.

The act of observation is the act of touching. When something is touched, it will always move or change in response.

The analogy to investigations of so-called "psi" phenomena is (or should be) obvious. Let's use prayer as an example. I have, at various times in my life, prayed with considerable regularity. While I no longer identify as Christian, I still pray, though less often than I used to. It was a great sense of relief when I stopped attending church because I could finally admit to myself that I am completely ignorant about spiritual matters. But then again, the whole point of a Supreme Being is that He or She or It is completely beyond us.

I don't know how many times I've heard of studies purporting to demonstrate the effectiveness of prayer. I don't know how many times I've heard skeptics try to debunk such claims and reference their own studies. But after all, if spiritual configurations (for lack of a better word) are as delicate as sub-atomic particles, should we be surprised if close observation destroys them? Should we be surprised if they behave in the same way as the double slit experiment and lose their magic as soon as they are seriously scrutinized?

I don't know the answer to these questions, because there is no answer. I guess it just comes down to faith.

## Saturday, January 10, 2015

### Fun with matrix operators

Now that I have more time, I can turn over ideas more carefully. I'm not so focused on results. Take matrix multiplication. It is an associative operator:

(**AB**)**C** = **A**(**BC**)

It's a property I use all the time, but typically never consciously think about. For instance, in my sparse matrix array classes, the sparse matrices are stored in reverse order because they are multiplied in reverse order. This is for efficiency reasons. If you have a set of matrices, {**A**_{i}}, and you want to multiply them through and then multiply the result with a vector, as follows:

**A**_{1}**A**_{2}**A**_{3} ... **A**_{n-1}**A**_{n}**v**   (1)

then this:

**A**_{1}(**A**_{2}(**A**_{3}( ... (**A**_{n-1}(**A**_{n}**v**)) ... )))   (2)

is more efficient than multiplying them through in the order in which they are written:

(( ... ((**A**_{1}**A**_{2})**A**_{3}) ... **A**_{n-1})**A**_{n}**v**   (3)

because each intermediate result is a vector, rather than a matrix. Similarly, if

**v** is a full matrix instead of a vector, (2) is still more efficient than (3) because multiplying a sparse matrix with a full one is more efficient than multiplying two sparse matrices.

Is associativity related to linearity? No, because we can define a new, linear operator, let's call it #:

**A** # **B** = **AB**^{T}, i.e. (**A** # **B**)_{ij} = Σ_{k} a_{ik}b_{jk}

that loses the associative property. Now let's rewrite (1) in terms of #:

**A**_{1} # (**A**_{2} # (**A**_{3} # ( ... (**A**_{n-1} # (**A**_{n} # **v**^{T})^{T})^{T} ... )^{T})^{T})^{T} = (**v**^{T} # **A**_{n} # **A**_{n-1} # ... # **A**_{2} # **A**_{1})^{T}   (4)

(the chain on the right-hand side is evaluated left to right, since # is not associative) since:

(**A** # **B**)^{T} = **B** # **A**

So why is this useful? From a strictly mathematical standpoint, the new operator is quite useless: it loses the useful and important property of associativity, without producing any new advantages. Numerically, however, it is extremely useful: multiplying a sparse matrix with the transpose of a sparse matrix is much more straightforward than straight multiplication since you move through the non-zero elements of both matrices in order. This is assuming that both are sorted by subscript in row-major order.

Suppose, once again, that **v** is a matrix. Even if **v** is sparse, it's usually more efficient to convert it to a full matrix and multiply through using (2), since the result will become full after only a few iterations. Not always: if all the matrices are tri-diagonal, for instance, then the result will also be tri-diagonal.

So let's assume that not only is **v** a sparse matrix, but that the result will also be sparse. Clearly, (4) is now more efficient than (2), since we now need only two matrix transpositions instead of *n*, where *n* is the number of **A**'s. Since matrix multiplication is O(*n*^{2}) (where *n* is now the number of non-zero elements in each matrix, assuming it is about the same for each) while matrix transposition is O(*n* log *n*) (for a generalized sparse matrix: swap the subscripts, then re-sort in row-major order), the gains will be quite modest.

When constructing parallel algorithms, it might be better to stick with (2), since we can multiply through in just about any order we want: e.g., multiply, in order, every second pair of consecutive matrices, then take the results and again multiply by consecutive pairs, and so on.

I mentioned multiplying an array of sparse matrices with a full matrix. This is also an interesting case, although mostly from a numerical perspective. The intuitive approach would be to keep two full matrices: one for the result, and one for the multiplicand. We multiply by each of the **A**'s in turn, swapping the result with the multiplicand in between. A second approach would be to decompose the multiplicand into columns and multiply each of the columns in turn, all the way through.

It turns out that for large problems, the latter approach is much, much faster, even though it violates at least one optimization principle: if we have two loops, both with the same limits, convert them to one loop. For example:

for (int i=0; i < n; i++) a[i]=b[i]*b[i];
for (int i=0; i < n; i++) r+=a[i];

should be converted to:

for (int i=0; i < n; i++) {
  a[i]=b[i]*b[i];
  r+=a[i];
}

The column-by-column method is better because it uses less memory and because it operates on a smaller chunk of memory for longer. Whereas the first method accesses a whole full matrix at each multiplication step, the second method accesses only one column of the matrix. Therefore there is less paging, either between virtual memory and RAM or between core memory and cache. Like multiplication between two sparse matrices, this method is also more efficient when multiplying with the transpose, assuming row-major storage order once again for both the sparse matrices and the full matrices.

This does well to remind us that while high-level languages like C and C++ do a good job of isolating the programmer from the hardware, you don't always end up with the best code by treating the computer as a perfect, abstract machine instead of a real one with real limitations and real, hacked-together solutions and work-arounds.

It is worth comparing the code for both methods, so I will post it in full. Here is method #1:

template <class index_t, class real>
void sparse_array<index_t, real>::mat_mult(real **cand,
    real **result) {
  real **med;
  real **swap;

  med=allocate_matrix(m, m);
  sparse_a[0]->mat_mult(cand, result, m);
  for (long i=1; i<nsparse; i++) {
    sparse_a[i]->mat_mult(result, med, m);
    //swap the buffers so that result always holds the latest product:
    swap=med;
    med=result;
    result=swap;
  }
  if (nsparse % 2 == 0) {
    //an odd number of swaps: copy the values back into the caller's
    //buffer, then point med back at the scratch space:
    copy_matrix(result, med, m, m);
    swap=med;
    med=result;
    result=swap;
  }
  delete_matrix(med);
}

Here is method #2, applied, however, to the transpose of the multiplicand:

template <class index_t, class real>
void sparse_array<index_t, real>::mat_mult_tt(real **cand,
    real **result) {
  printf("%6.1f%%", 0.);
  for (index_t i=0; i<m; i++) {
    printf("\b\b\b\b\b\b\b%6.1f%%", 100.*i/m);
    fflush(stdout);
    vect_mult(cand[i], result[i]);
  }
  printf("\b\b\b\b\b\b\b%6.1f%%\n", 100.);
}

There are a few things worth noting. In the first method, if the number of sparse matrices is even, the result and the multiplicand are swapped an odd number of times. For efficiency, only the memory locations are swapped, not the actual values, so the memory locations for the multiplicand and the result end up swapped. Therefore we need to copy the values back to the correct location in memory before the function can return. In the second method, we print out a crude progress meter in the form of the percentage of rows that have been multiplied through.

The second version is not only much simpler, it is also easier to parallelize since each row multiplication is completely independent.
