One idea that's caught my attention is so-called test-driven development. I know that I don't write enough tests for my computer programs.
Unlike some other current fashions, such as functional programming, this is something I can put into practice right away. Again, I have my doubts about it, so I will describe my initial attempts here. I guess this is a somewhat boring topic, delving once again into the minutiae of scientific programming; hopefully I will have the wherewithal to write about more far-reaching and interesting topics at some point later on.
To start with, I've picked an object class that's easy to test. I have a family of classes that select out the k-least elements in a list. Because some of the methods, such as a tree or a heap, don't require storing the whole list, the object works by adding the items one at a time and then extracting the k-least. The base class looks like this:
template <class type>
class kiselect_base {
  protected:
    long ncur;
    long k;
  public:
    kiselect_base();
    kiselect_base(long k1);
    virtual ~kiselect_base();
    //add an element, set associated index to element count;
    //return current size of data structure:
    virtual long add(type val)=0;
    //add an element and set the index; returns current size of data structure:
    virtual long add(type val, long ind)=0;
    //marshall out the k-least and associated indices:
    virtual void get(type * kleast, long *ind)=0;
    //test the selection algo:
    int test(long n);          //number of elements
};
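Usage follows directly from the interface: feed the elements in one at a time, then marshall out the k-least along with their positions in the original list. Here is a quick sketch, where the derived class name kiselect_heap is just a stand-in for whichever implementation you pick:
//hypothetical usage fragment; kiselect_heap stands in for any of the derived classes:
long n=10000;
long k=10;
float *data=new float[n];
float *kleast=new float[k];
long *ind=new long[k];
kiselect_heap<float> sel(k);
for (long i=0; i<n; i++) {
  data[i]=ranu();                      //ranu() returns a uniform random deviate
  sel.add(data[i], i);                 //store the list position along with each value
}
sel.get(kleast, ind);                  //marshall out the k-least and their indices
for (long j=0; j<k; j++) printf("%g %ld\n", kleast[j], ind[j]);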
I wrote a family of these things because I wanted to test which version was fastest. It turns out that it makes very little difference: even a naive method based on insertion or selection, with supposed O(nk) performance, is almost as good as a more sophisticated method based on quicksort, with supposed O(n) performance. In addition to the k-least elements of the list, this version returns a set of indices into the original list, in case there is auxiliary data that also needs to be selected. This also makes it slightly easier to test.
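To give a flavour of the naive approach, here is roughly what an insertion-based child class might look like. This is my own sketch rather than the actual implementation, so treat the class name and the details as illustrative only:
//hypothetical insertion-based selector: keeps the k smallest values seen so far
//in ascending order; each add is O(k) in the worst case, so a full list costs O(nk)
template <class type>
class kiselect_insert: public kiselect_base<type> {
  protected:
    type *data;            //the k (or fewer) smallest values, in ascending order
    long *idx;             //their positions in the original list
  public:
    kiselect_insert(long k1): kiselect_base<type>(k1) {
      data=new type[k1];
      idx=new long[k1];
    }
    virtual ~kiselect_insert() {
      delete [] data;
      delete [] idx;
    }
    virtual long add(type val, long ind) {
      long nstore=this->ncur<this->k ? this->ncur : this->k;
      this->ncur++;
      //already full and larger than everything stored: discard
      if (nstore==this->k && val>=data[nstore-1]) return nstore;
      long i=(nstore<this->k) ? nstore : this->k-1;
      //shift larger stored values up to make room:
      for ( ; i>0 && data[i-1]>val; i--) {
        data[i]=data[i-1];
        idx[i]=idx[i-1];
      }
      data[i]=val;
      idx[i]=ind;
      return (nstore<this->k) ? nstore+1 : this->k;
    }
    virtual long add(type val) {
      return add(val, this->ncur);      //index defaults to the element count
    }
    virtual void get(type *kleast, long *ind) {
      for (long i=0; i<this->k; i++) {
        kleast[i]=data[i];
        ind[i]=idx[i];
      }
    }
};
The quicksort-based versions instead store the whole list and partition it around pivots, which is where the claimed O(n) behaviour comes from.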
As you can see, I've already added a test method to the base class. Sticking it there means that all of the children can be tested without writing any new code. This is precisely in keeping with my thoughts about test routines: they should be as general as possible. Forget having a small selection of specific test cases: this is a recipe for disaster. It may not be likely, but your code could just happen to pass all of them while still being incorrect; worse, it's trivial to write code that passes a fixed set of test cases without being anywhere close to correct.
Rather, we should be able to generate as many random test cases as we want. Five test cases not enough? How about 100? How about one million? Here is my first crack at the problem:
//trying to move towards more of a test-driven development:
template <class type>
int kiselect_base<type>::test(long n) {
  int err=0;
  type list[n];
  type kleast[k];
  long ind[k];
  long lind;            //index of largest of the k-least
  int flag;
  //generate a list of random numbers and apply the k-least algo to it:
  for (long i=0; i<n; i++) {
    list[i]=ranu();
    add(list[i]);
  }
  get(kleast, ind);
  //find the largest of the k-least:
  lind=0;
  for (long i=1; i<k; i++) {
    if (kleast[i]>kleast[lind]) lind=i;
  }
  //largest of the k-least must be smaller than all others in the list:
  for (long i=0; i<n; i++) {
    //scan the returned indices to check whether the current element
    //is one of the k-least (this inner loop makes the check O(nk)):
    flag=1;
    for (long j=0; j<k; j++) {
      if (ind[j]==i) {
        flag=0;
        break;
      }
    }
    if (flag && kleast[lind]>list[i]) {
      err=-1;
      break;
    }
  }
  return err;
}
Scanning the returned indices for every element makes the check O(nk), so here is a second version that pre-computes a flag array marking which elements belong to the k-least:
template <class type>
int kiselect_base<type>::test(long n) {
  int err=0;
  type list[n];
  type kleast[k];
  long ind[k];
  long lind;            //index of largest of the k-least
  int flag[n];
  //generate a list of random numbers and apply the k-least algo to it:
  for (long i=0; i<n; i++) {
    list[i]=ranu();
    add(list[i]);
  }
  get(kleast, ind);
  //find the largest of the k-least:
  lind=0;
  for (long i=1; i<k; i++) {
    if (kleast[i]>kleast[lind]) lind=i;
  }
  //set flags to exclude all the k-least from the comparison:
  for (long i=0; i<n; i++) flag[i]=1;
  for (long i=0; i<k; i++) flag[ind[i]]=0;
  //largest of the k-least must be smaller than all others in the list:
  for (long i=0; i<n; i++) {
    if (flag[i] && kleast[lind]>list[i]) {
      err=-1;
      break;
    }
  }
  return err;
}
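With the test sitting in the base class, exercising any of the children is just a matter of constructing one and calling test in a loop. A minimal driver might look something like this (the derived class name kiselect_heap is again just a placeholder):
//hypothetical test driver: run any number of random trials against one derived class
#include <cstdio>

int main() {
  long ntrial=10000;                   //as many random test cases as we want
  for (long i=0; i<ntrial; i++) {
    kiselect_heap<float> sel(10);      //select the 10 least from each list
    if (sel.test(1000)!=0) {           //each trial draws a fresh list of 1000 random elements
      fprintf(stderr, "k-least selection failed on trial %ld\n", i);
      return 1;
    }
  }
  printf("%ld trials passed\n", ntrial);
  return 0;
}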
Well, obviously there's a boot-strapping problem here: the test routine is itself a piece of code that could contain bugs. At some point, we need human discretion and judgement. My preferred kind of test engine, and I have a small number of these lying around, is one that lets you manually input any desired test case and then displays the result.
Probably the best example of this approach is the date calculator I wrote to test a time class. The time class (as well as the calculator that wraps it) allows you to make arithmetic calculations with dates and times and print them out in a pretty format. Here is an example session that calculates the number of days between today and Christmas:
$ date_calc.exe
%d%c>(2015/12/25_2015/06/02)|1-0:0:0
206
%d%c>
Note that the minus sign (-) and the forward slash (/) are already used in the date format, so we substitute an underscore (_) for subtraction and a vertical bar (|) for division.
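Just to make the arithmetic concrete, here is a rough standard-library equivalent of the session above. This is not the time class itself, only an illustration of the day count it produces:
//rough equivalent of the date_calc session using only the C standard library:
#include <cstdio>
#include <ctime>

int main() {
  struct tm t1={0}, t2={0};
  t1.tm_year=2015-1900; t1.tm_mon=5;  t1.tm_mday=2;        //2015/06/02
  t2.tm_year=2015-1900; t2.tm_mon=11; t2.tm_mday=25;       //2015/12/25
  double secs=difftime(mktime(&t2), mktime(&t1));
  printf("%d\n", (int) (secs/86400+0.5));                  //prints 206
  return 0;
}
Another example is the test wrapper for the following option-parsing routine: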
//returns number of options found
//if there is a non-fatal error, returns -(# of found options)
int parse_command_opts(int argc,        // number of command line args
        char **argv,                    // arguments passed to command line
        const char *code,               // code for each option
        const char *format,             // format code for each option
        void **parm,                    // returned parameters
        int *flag,                      // found flags
        int opts=0);                    // option flags (optional)
This subroutine is a lot more code-efficient than getopt. There is a brief set-up phase in which you point each element of the void parameter list at the variable where you want the corresponding option argument stored. Options that take no argument can have their parameter left null and use the null parameter code, "%". As a simple example, suppose you want to return the argument of the -d option in the integer variable d:
int main(int argc, char **argv) {
  int d;
  void *parm[1];
  int flag[1];
  int err;
  parm[0]=&d;
  err=parse_command_opts(argc, argv, "d", "%d", parm, flag, 1);
  ...
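Extending the set-up to several options, including one that takes no argument, might look like the following. The prototype is the one given above, but the details of this fragment are my own sketch, not code from the library:
//hypothetical set-up for three options: -d (integer), -g (float) and -b (no argument)
#include <cstdio>

int main(int argc, char **argv) {
  int d;
  float g;
  void *parm[3];
  int flag[3];
  int err;
  parm[0]=&d;
  parm[1]=&g;
  parm[2]=NULL;           //no argument: null parameter and the "%" code
  err=parse_command_opts(argc, argv, "dgb", "%d%g%", parm, flag, 1);
  if (err<0) fprintf(stderr, "Warning: error parsing command line\n");
  if (flag[0]) printf("d=%d\n", d);
  if (flag[1]) printf("g=%g\n", g);
  if (flag[2]) printf("-b option set\n");
  return 0;
}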
The test program scans for options with all the possible format codes, using an option letter that's (usually) the same as the code, and prints the parameters to standard out. We can separate the option letters from their arguments with whitespace (the -b option here takes no argument):
$ ./test_parse_opts.exe -b -g 0.2 -i 20 -c t -s teststring
./test_parse_opts -g 0.2 -i 20 -c t -s teststring -b
number=0.2
integer=20
string=teststring
char=t
Arguments: ./test_parse_opts
or without:
$ ./test_parse_opts.exe -g0.2 -i20 -ct -steststring
./test_parse_opts -g0.2 -i20 -ct -steststring
number=0.2
integer=20
string=teststring
char=t
Arguments: ./test_parse_opts
Of course, this is one reason why interactive interpreters are so great for rapid development. You don't have to write all this (sometimes very complex) wrapper code to test functions and classes. Just type out your test cases on the command line.
*UPDATE: I realize that the test_parse_opts wrapper is a bad example since it's quite limited in the number of test cases you can generate. Therefore I've expanded it to accept an arbitrary list of option letters with corresponding format codes to pass to the function:
$ ./test_parse_opts -b -p adc -f %d%g%s -a 25 -d 0.2 -c hello
./test_parse_opts -a 25 -d 0.2 -c hello -b -p adc -f %d%g%s
-a (integer)=25
-d (float)=0.2
-c (string)=hello
Arguments: ./test_parse_opts