Friday, November 23, 2012

Super-efficient motor-assisted bicycle: first baby steps

The photo above shows the beginnings of a project that has held my interest for a long time: building super-efficient vehicles.  As you can see, there is a tiny motor mounted behind the saddle that I hope at some point to have driving the rear wheel.  Because this engine displaces a mere 23 cc, it should be very efficient.  In spite of its small size, it generates a very reasonable (for this application) 2 horsepower (1491 W) which could potentially be good for a top speed of up to 80 km/h.

But before I get into the details of how this thing will eventually work, I want to discuss a more pressing issue.  When I get this thing completed, it will be highly illegal on Ontario roads, a situation which concerns me and which should concern anyone interested in improving the environment.  Politicians say that they want to improve the environment, but the current state of the Ontario Highway Traffic Act proves that they are not serious.  The definition of a power-assisted bicycle or "e-bike" is an electrically assisted bicycle (it must have pedals) limited to 500 W of power and a top speed of 32 km/h.  This is a joke.  They are legal for anyone over the age of 16 to ride (without a license), but why anyone would want to is beyond me.  I regularly pass these things on my (unassisted) pedal bicycle despite lungs blackened by years of abuse with cigarettes.

The standard argument is that the absurd speed and power limitations are for safety.  This is easily debunked.  If that were the case, why are Porsches and other super-cars that are mostly engine still allowed on public roads?  For that matter, why is any car?  My parents' bottom-end sedan could easily be driven at two to three times the posted limit on most roads.  It is up to the operator to exercise discretion in the conduct of their vehicle.  In fact, increasing the top speed of these things would actually make them safer, since they could then keep up with traffic.  As a long-time cyclist, I have always felt safer on tight and busy roads when I am moving at the same pace as (or faster than) the surrounding traffic.

The definition of a motor-assisted bicycle (or moped) is a gasoline-assisted bicycle (again, it must have pedals) whose engine has a maximum displacement of 50 cc and whose top speed is limited to 60 km/h.  This is a bit more reasonable.  Ontario, however, has recently changed the rules regarding who can drive these vehicles; a driver's license used to be sufficient.  Now you need either a full-on motorcycle license or an emasculated motorcycle license good for only this type of vehicle.  Way to make driving efficient vehicles sexy!

I remember once boarding a ferry in Schleswig-Holstein.  Shortly after me came two scruffy motorcyclists riding custom-style bikes with engines big enough to power a small car.  A woman in the passenger seat of a mini-van right next to me was making googly eyes at them the whole trip.  I was riding a bicycle, was dressed in skin-tight spandex and was quite buff at the time, but received nary a glance.  The point of this story?  That large, over-powered vehicles are sexy.  Bicycles and mopeds are not.  If we want people to ride these things, we have to lower the bar for entry, not raise it.

The final death knell to the moped as a useful and efficient vehicle (at least in Ontario) comes from this further limitation: "It must not have a hand- or foot-operated clutch or gearbox."  Anyone who's ridden a bicycle knows that under-powered vehicles need lots of gears to be efficient.  This means that in order for the bike to have gears at all, it must be equipped with either a continuously variable transmission (CVT) or some kind of torque-converter, both of which are notorious for robbing power.  On a vehicle that is 1. under-powered to begin with and 2. supposed to be efficient, this makes no sense whatsoever.

Let's be honest here.  Any competent engineer could design a motor-assisted bicycle at least as good as what I am building.  In fact, the technology to create such a device has existed for a lot longer.  A similar situation, it could be argued, existed in the eighteenth century with regard to bicycles.  The technology was sufficient to create them, but there was no demand because of prevailing social conditions.  For that, we had to wait until the nineteenth century and the rise of the middle class, who demanded cheap and efficient transportation but could not afford horses.  Today, I would argue that the reasons good motor-assisted bicycles are all but forbidden are two-fold:
1. to restrict the poor from having access to cheap and efficient transportation
2. because of the power wielded by the oil and automotive companies
The first, at least, is held only semi-consciously.  The second may well be due to overt lobbying by powerful corporate interests.

I'll step off my soap-box for now.  Stay tuned for the next installment which will discuss the more technical aspects of this creation: design considerations and how I intend to actually build it. The next installment can be found here.

Wednesday, September 5, 2012

Peteysoft coding standards

Since, despite all my efforts, I have not received a flood of donations (to donate to Peteysoft, click here), I have been applying for jobs in order to support myself and my Foundation.  In a number of applications, the potential employers were looking for experience coding to a standard.  It occurs to me that I have never coded to any explicit standard.  That should not be taken to mean, however, that my coding is done haphazardly.  I have always had in mind a certain method that, at least until now, has remained implicit.  Complementary to my design philosophy, here is a first draft of my personal coding standards.  What I have adopted is essentially a functional programming model.

- Functions as well as main routines should take a set of inputs and generate a set of outputs while avoiding side effects.  Global variables and similar constructs should be avoided.  In the ideal case, both functions and main routines should be thread-safe, reentrant and idempotent.

- File names used in main routines should, as much as possible, be explicit and passed in the form of arguments.  In this way, main routines are path-independent.

- Temporary files should be avoided but when they are used, they should be named in such a way as to prevent conflicts with other running instances of the program.  This can be done by modifying input or output file names, by appending random numbers and by appending the current date and time. The user should have complete control over the location of temporary files.
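
As an illustration, here is a minimal sketch of the kind of temporary file naming I have in mind; it assumes a POSIX system (for getpid) and uses a placeholder base name:

    /* Sketch: build a collision-resistant temporary file name by appending
       the process ID, the current time and a random number to the output
       file name.  "output.txt" and the ".tmp" suffix are placeholders. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>
    #include <unistd.h>

    int main(void) {
      char tmpname[FILENAME_MAX];
      srand((unsigned) time(NULL));
      snprintf(tmpname, sizeof(tmpname), "%s.%ld.%ld.%d.tmp",
          "output.txt", (long) getpid(), (long) time(NULL), rand());
      printf("temporary file: %s\n", tmpname);
      return 0;
    }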

- All code should be self-documenting.  In functions, this can be accomplished in several ways:
  • function parameters are vertical with comments beside each one
  • a block of text just before or just after the function declaration with the following contents: purpose, syntax, arguments (input/output), optional arguments, authors, dependencies, list of revisions
  • descriptive symbol names
  • comments beside all variable declarations
  • comments describing each major task
In main routines, the executable should produce a brief summary of its operation (roughly following the second point, above) when the command name is typed either with no arguments or with a reserved option such as -h, -H or -?, as in the sketch below.  When inventing variable names, the programmer should strive for a balance between descriptiveness and length, as variable names that are too long tend to decrease rather than increase the readability of the code.
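
Here is a minimal sketch of such a main routine; the program name "interpolate" and its options are made up for illustration:

    /* Sketch: print a usage summary when the command is typed with no
       arguments or with -h/-?.  The program and options are hypothetical. */
    #include <stdio.h>
    #include <string.h>

    void usage(void) {
      fprintf(stderr, "Usage: interpolate [-n nsample] [-o outfile] infile\n");
      fprintf(stderr, "  Interpolates the gridded field contained in infile.\n");
      fprintf(stderr, "  -n nsample   number of output samples [100]\n");
      fprintf(stderr, "  -o outfile   output file [stdout]\n");
    }

    int main(int argc, char **argv) {
      if (argc < 2 || strcmp(argv[1], "-h") == 0 || strcmp(argv[1], "-?") == 0) {
        usage();
        return 1;
      }
      /* ... parse the remaining options and do the real work ... */
      return 0;
    }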

- Indentation: code within blocks should be indented by two or four spaces (two preferred, to save space) relative to the next higher block.  If there is a branch statement or label (goto etc.), the code should be indented in the closest possible analogue to block-style coding.

- Defaults: all main level routines should be supplied with a set of useful defaults so that the program can be called with as few arguments as possible. Defaults should be contained as constants in a single, top-level include file. In languages with optional subroutine parameters (such as IDL) all optional parameters should be supplied with defaults.
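
Something like the following hypothetical top-level include file is what I have in mind; the constant names and values are purely illustrative:

    /* defaults.h -- sketch of a single include file holding all defaults */
    #ifndef DEFAULTS_H
    #define DEFAULTS_H

    #define DEFAULT_NSAMPLE  100     /* number of output samples */
    #define DEFAULT_TOL      1e-6    /* convergence tolerance */
    #define DEFAULT_TMPDIR   "./"    /* location of temporary files */

    #endif

Main routines then initialize their parameters from these constants and override them with whatever the user supplies on the command line.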

- Physical parameters: physical parameters should be collected in a top-level include file.  When possible, physical parameters in functions should be passed as arguments.

- IO: input and output stages should be contained in a separate module from the process module, i.e. that which does the "work" or the "engine."

- GUI: similarly, graphical or text-based interfaces should be separated from the main engine.

- The main routine should do very little work.  In general, it should:
1. initialize data structures
2. call the input routines
3. call subroutines that process the data
4. call the output routines
5. clean up
That way, the software can be operated in at least two different modes: from the command line, or from another compiled program.
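
A minimal sketch of such a thin main routine might look like this, where the parameter and data types and the five routines are stubs standing in for real library code:

    /* Sketch of a thin main routine; the types and routines are stubs. */
    #include <stdio.h>

    typedef struct { int n; } params_t;       /* run parameters (stub) */
    typedef struct { double *x; } dataset_t;  /* working data (stub) */

    int init_params(params_t *p, int argc, char **argv) { p->n = 0; return 0; }
    int read_input(params_t *p, dataset_t *d) { d->x = NULL; return 0; }
    int process(params_t *p, dataset_t *d) { return 0; }
    int write_output(params_t *p, dataset_t *d) { return 0; }
    void cleanup(params_t *p, dataset_t *d) { }

    int main(int argc, char **argv) {
      params_t p;
      dataset_t data;
      if (init_params(&p, argc, argv) != 0) return 1;  /* 1. initialize */
      if (read_input(&p, &data) != 0) return 2;        /* 2. input */
      if (process(&p, &data) != 0) return 3;           /* 3. process */
      if (write_output(&p, &data) != 0) return 4;      /* 4. output */
      cleanup(&p, &data);                              /* 5. clean up */
      return 0;
    }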

- Arbitrary limitations on the sizes of data structures, such as those containing symbols, names, lists or lines of text, should be avoided.  When they are used, the structure size should be controlled by a single, easily modifiable macro as high up in the dependency chain as possible.
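
A short sketch of what I mean, with a hypothetical macro MAXLL controlling the maximum line length:

    /* Sketch: a single macro, defined high in the dependency chain,
       controls the size limit; MAXLL and read_label are hypothetical. */
    #include <stdio.h>
    #include <string.h>

    #define MAXLL 500     /* maximum length of a line of text */

    int read_label(const char *src, char *dest) {
      if (strlen(src) >= MAXLL) return -1;  /* refuse rather than overflow */
      strcpy(dest, src);
      return 0;
    }

    int main(void) {
      char label[MAXLL];
      if (read_label("surface temperature", label) == 0) printf("%s\n", label);
      return 0;
    }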

- Interoperability: main routines should read and output data in formats that are easily parsed by and/or compatible with other programs.  If a routine uses a native format highly specific to the application, other routines should be supplied that easily convert to and from more generic formats. If a main routine takes as input a single text file, the option should exist for it to read from standard in.  Likewise, if it outputs a single text file, it should be able to write to standard out.
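
As a sketch of the standard-in/standard-out behaviour, here is a trivial "engine" that simply copies its input to its output, reading from a named file if one is given and from standard in otherwise:

    /* Sketch: read from a named file or from stdin; write to stdout. */
    #include <stdio.h>

    int main(int argc, char **argv) {
      FILE *in = stdin;
      char line[1000];

      if (argc > 1) {
        in = fopen(argv[1], "r");
        if (in == NULL) {
          fprintf(stderr, "Error: cannot open %s\n", argv[1]);
          return 1;
        }
      }
      while (fgets(line, sizeof(line), in) != NULL) fputs(line, stdout);
      if (in != stdin) fclose(in);
      return 0;
    }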

- In a similar fashion, lower-level routines should avoid, in as much as possible, specialized data structures and extended set-up and clean-up phases.  In the ideal case, they should require only one call and take as arguments data types native to the language.  This keeps the lower-level routines interoperable as well, particularly with other languages (e.g. calling C from Fortran or vice versa).

- Error handling: if a routine has the possibility of failing, the error should be caught and the routine should pass back an error code describing its status.  On the other hand, range-checking anywhere but the main routine should be avoided, especially in production code; this is the job of the calling routine.  Error codes should be consistent within libraries.  Excessive error checking should be avoided as this tends to clutter the code.  If there are many points in the program where errors can occur, the programmer should figure out a way to handle them in a single block of code, such as an error-handling routine.  I have still not figured out a way to do this that is both code-efficient and general, trapping both fatal and recoverable errors.
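
A minimal sketch of what I mean by passing error codes back to the caller, with made-up codes and a toy routine:

    /* Sketch: the low-level routine reports failure through a code that is
       consistent across the library; the caller decides what to do with it.
       The codes and the routine are hypothetical. */
    #include <stdio.h>
    #include <math.h>

    #define ERR_NONE     0
    #define ERR_DOMAIN  -1    /* argument outside the routine's domain */
    #define ERR_NOCONV  -2    /* iteration failed to converge */

    int safe_sqrt(double x, double *result) {
      if (x < 0) return ERR_DOMAIN;   /* pass the error back; don't abort here */
      *result = sqrt(x);
      return ERR_NONE;
    }

    int main(void) {
      double r;
      int err = safe_sqrt(-1.0, &r);
      if (err != ERR_NONE) fprintf(stderr, "safe_sqrt failed: code %d\n", err);
      return 0;
    }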

- Command-line parameters: command line options within a single library should be consistent across all executables and should not be repeated.  Likewise, command-line syntax should be as consistent as possible.

- Duplicate code should be avoided.  As a rule of thumb, if a piece of code is duplicated more than three times, the program should be re-factored.

- Atomicity: in compiled languages such as C, low-level subroutines should be reduced to their most atomic forms.  That is, if a routine takes as input two variables, but the operations performed on the first variable do not affect the operations on the second variable and vice versa, the function call should be split in two: either as two calls to the same function or as calls to two different functions.  Note that this rule is not applicable to vector-based languages such as IDL, where efficiency depends upon the use of as many vector operations as possible.  Here we want to keep everything in the form of a vector, including arguments passed to low-level subroutines.
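
To make the idea concrete, here is a deliberately trivial sketch showing a combined routine that operates on two independent arguments and the atomic routine that replaces it with two calls:

    /* Sketch of atomicity: square_pair operates on two independent values,
       so it can be replaced by two calls to the atomic square_one. */
    #include <stdio.h>

    double square_one(double x) { return x * x; }

    void square_pair(double x, double y, double *x2, double *y2) {
      *x2 = x * x;    /* the work on x ... */
      *y2 = y * y;    /* ... never touches the work on y */
    }

    int main(void) {
      double a2 = square_one(3.0);    /* two atomic calls ... */
      double b2 = square_one(4.0);    /* ... instead of one combined call */
      printf("%g %g\n", a2, b2);
      return 0;
    }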

- Parameters for physical simulations, such as grid sizes, should be modifiable at runtime through dynamic memory allocation.  Fixed grid sizes modifiable only at compile-time are unacceptable.
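
A minimal sketch, with made-up default dimensions, of a grid sized at run time rather than at compile time:

    /* Sketch: the grid dimensions are read at run time and the grid is
       allocated dynamically, rather than fixed by a compile-time array. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv) {
      int nx = 100, ny = 50;          /* default grid dimensions */
      double *field;

      if (argc > 2) {                 /* override at run time */
        nx = atoi(argv[1]);
        ny = atoi(argv[2]);
      }
      field = malloc(sizeof(double) * nx * ny);
      if (field == NULL) return 1;
      /* ... run the simulation on an nx-by-ny grid ... */
      printf("allocated a %d x %d grid\n", nx, ny);
      free(field);
      return 0;
    }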

- Machine independence: portability should be enforced through simplicity and transparency, not complex configure scripts.  If a section of code is either machine- or compiler-dependent, it should be fixed by adding extra indirection (typedefs etc.) and moving the different versions into another module or other modules. Which version to use is determined at compile time through preprocessor directives or a similar mechanism.  Machine- or compiler-dependent code should be kept as brief as possible.  An excellent example of this type of mechanism is the "stdint.h" header in C.
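
In the spirit of "stdint.h", here is a small sketch of hiding a compiler-dependent detail behind a typedef selected by the preprocessor; the type name int32 is my own invention for the example:

    /* Sketch: the machine-dependent choice is confined to one short block;
       the rest of the code uses int32 only. */
    #include <stdio.h>
    #include <limits.h>

    #if INT_MAX == 2147483647
    typedef int int32;        /* int is 32 bits with this compiler */
    #else
    typedef long int32;       /* otherwise fall back to long */
    #endif

    int main(void) {
      int32 count = 42;
      printf("count = %ld, sizeof(int32) = %lu\n",
          (long) count, (unsigned long) sizeof(int32));
      return 0;
    }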

- Modules: one class definition should occupy one file (plus header, if applicable), while short functions should be arranged so that closely related functions are contained in a single file (plus header, if applicable). Long functions should occupy a single file (plus header).  In general, modules (code contained in single file) should be roughly 200 lines or less.

Download these guidelines as an ASCII text file.

Wednesday, August 15, 2012

Updated arXiv articles

Three of my articles posted to arXiv.org have been updated.  Mostly these are fairly minor corrections; however, for the first article I have added a whole new appendix showing how one of the equations is derived.  When I first wrote the article, I had thought that the derivation was trivial enough that it could be omitted.  I now realize that this is somewhat presumptuous and arrogant and that not everyone has the time or inclination to sit down and try to derive complex formulas.  In a piece of writing whose job it is to convey information, it is better to include too much than too little.

Friday, July 20, 2012

My design philosophy

I've generally found that the software I write seems to work better in so many ways than other free software I've found on the 'net.  Part of this is just familiarity: if something goes wrong, I know exactly where to go to fix it, and if something's missing, I know exactly how to extend it.  But it's also that I know how to code.  I remember once writing a subroutine for another scientist to help him process his data.  When we compared it to what he had done, we found that my code was better in just about every way: faster, smaller and more general.

Here are a bunch of thoughts on software design and development.  At this point it could hardly be called a unified or integrated philosophy, just a roughed-out, loosely connected set of ideas.  For instance, #6 says, "Test your algorithms with the problem at hand."  Well, this is obvious, and to properly test a program you usually need a lot more test cases.

In the process of laying this down, I realized my ideas on program development most closely match those of the Unix/Linux communities.  For more information, I would recommend that interested readers read up on the Unix Philosophy, which is now quite mature and has many adherents.
Not that I'm advocating this, but I picked up both The Art of Unix Programming and The Unix Programming Environment as free PDF e-books.  Wikipedia states:
This is the Unix philosophy: Write programs that do one thing and do it well. Write programs to work together. Write programs to handle text streams, because that is a universal interface. (Doug McIlroy)
I tend to write first a suite of libraries.  I then encapsulate those libraries in simple, stand-alone executables which I string together using a makefile that defines a set of data dependencies.

Without further ado, here is what I came up with:


1. Primitive types exist for a reason.  Do not use an object class or defined type if a primitive type will do.  Using primitive types makes the code easier to understand, produces less overhead when calling subroutines and makes it easier to call from different languages (e.g. calling C from Fortran).

2. By the same token, if an algorithm can work with primitive types, write it for primitive types.  When using the algorithm with defined types, translate from those to the primitive types rather than adding unnecessary indirection.

 E.g. when working with dates from climate data, I almost always perform Runge-Kutta integrations using a floating point value for the times even though the templated routines will work with a more complex date type.
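
For the curious, here is a toy sketch of a fourth-order Runge-Kutta step that keeps the time as a plain double; the equation dy/dt = -y stands in for the real velocity fields:

    /* Sketch: RK4 step with time as a primitive double; dy/dt = -y is a toy
       right-hand side standing in for interpolated wind fields. */
    #include <stdio.h>
    #include <math.h>

    double deriv(double t, double y) { return -y; }

    double rk4_step(double t, double y, double h) {
      double k1 = deriv(t, y);
      double k2 = deriv(t + 0.5*h, y + 0.5*h*k1);
      double k3 = deriv(t + 0.5*h, y + 0.5*h*k2);
      double k4 = deriv(t + h, y + h*k3);
      return y + h*(k1 + 2*k2 + 2*k3 + k4)/6.0;
    }

    int main(void) {
      double t = 0, y = 1.0, h = 0.1;
      for (int i = 0; i < 10; i++) { y = rk4_step(t, y, h); t += h; }
      printf("y(%.1f) = %.6f (exact %.6f)\n", t, y, exp(-t));
      return 0;
    }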

3. Do not use an object or class hierarchy when a function or subroutine will do.  Again, the tendency is to reduce overhead, e.g. associated with the class definition itself and with initializing and setting up each object instantiation.

4. Build highly general tools and use those tools as building blocks for more specific algorithms.

5. Form re-useable libraries from the general tools.

 E.g. (4. & 5.), my trajectory libraries have very little in them; they are mostly pieced together from bits and pieces taken from several other libraries.  In the semi-Lagrangian tracer scheme, the wind fields are interpolated using generalized data structures from another library, they are integrated using a 4th-order Runge-Kutta subroutine from still another library, intermediate results are output using a general sparse matrix class, and the final fields are integrated using sparse matrix multiplication, an entirely separate piece of software.

6. Test your algorithms with the problem at hand.

7. Use makefiles for your test cases.

8. In the beginning, write a single main routine that solves the simplest case.  Later, once it is working, you can form it into a subroutine or object class and flesh it out with extra parameters and other features.

9. If a parameter usually takes on the same or similar values, make it default to that value/one of those values.  This is easy if the syntactical structure is a main routine called from the command line or an object class, but can be difficult if it is a subroutine and the language does not support keyword parameters.

10. Do not use more indirection than absolutely necessary.  The call stack should rarely be more than three or four levels high (excluding system calls or recursive algorithms).

11. Enforced data hiding is rarely necessary.  If you need to access the fields in your object class directly from unrelated classes or subroutines, maybe the data shouldn't be inside of a class in the first place.

12. Generally, the closer the structural match of the syntactical structures (data, subroutines and classes) to the problem at hand, the more transparent the program and the better it works.

E.g., simulation software for the Gaspard-Rice scattering system has two classes: one for the individual discs and one for the scattering system as a whole.

13. Given the choice between a simple solution and a complex solution, always choose the simpler one.  The gains in speed or functionality are rarely worth it.

E.g., to overcome the small time step required by the presence of gravity waves in an ocean GCM, one can use an inverse method to solve for surface pressure for a "flat-top" ocean, or one can have a separate simulation for the surface-height with a smaller time step, or one can have layers with variable thickness in which the gravity waves move independently (and more slowly because the layers are thin).  I would choose the last of the three options because it is the simplest and most symmetric, that is, elegant.

14. It is not always possible or desirable to strive for the most general solution.  Sometimes you have to pick a design from amongst many possible different and even divergent designs and stick with it.  Once you have more information, the code can be refactored later.

E.g. in my libagf software, there are only two choices of kernel function (the kernel function is not that critical) and with the Gaussian kernel, there is only one way to solve for the bandwidth.  These choices were not easy to make but the alternative (make it more general) was just too fiddly and had too many divergent paths for the user to easily choose amongst.

15. Unless the old code is written by an expert (and I will let the reader decide what I mean by that) and well documented, it is usually quicker and easier to rewrite it from scratch rather than work with someone else's code.  This is especially true in relation to scientific software which is not usually written by professional programmers and tends to be poorly documented.

16. If you do decide to use someone else's code, it is usually better to encapsulate it using function and system calls rather than delve deep into the bowels.

17. Do not be afraid to re-invent the wheel.  Re. 4. and 16., there is now a plethora of standard libraries available for just about every language, but something you write yourself will often fulfill your needs better, provide fewer surprises and give you a greater understanding of your own program as there will be no "black boxes".  Obviously, it will also improve your understanding of computer programming in general.  If you are a good programmer, your implementation may also be better in every way.

18. Avoid side effects: write subroutines and executables that take a set of inputs and return a set of outputs and that's it.

19. Try to separate the major components of the program into modules: IO should be separated from the data processing or "engine."  By the same token, the GUI interface should also be separated.

20. Despite recent improvements in memory management, core memory that creeps into the swap space is a sure way to destroy performance.  Customized paging algorithms that operate directly on the input/output files can significantly alleviate this problem.


Download these guidelines as an ASCII text file.

Wednesday, July 4, 2012

Support free software, support free science

For those of you who have been following the Peteysoft sites and software, and I know there are quite a few of you, you know that I have been dutifully posting free software and free scientific content at least since 2007.  Recently I've taken the step of monetizing one of my software websites, http://libagf.sourceforge.net, by including advertising, in an attempt to recoup some of the costs of my time and effort.  If you want your free software to remain truly free, please do us all a favour by clicking on the donation button.  Even if all you can spare is $5, that may make all the difference in helping to keep the Peteysoft projects free for all and free of advertising.

Monday, July 2, 2012

Thoughts on music


When I was in Vancouver, living in its notorious East Side, I spent a lot of time hanging out with an aspiring musician.  He said he was going to teach me to play bass and wanted me to manage his band.  He even restrung his guitar left-handed for me.  At the time I didn't take this too seriously, as I'd hardly picked up an instrument and didn't think I had any talent to speak of.  A couple of years later my sister and her husband gave me a guitar for my birthday and I've been playing ever since.

One of the things I remember about my time with Adam was a silly argument we got into.  I was willing to go along with his plans as long as he was willing to go along with mine--being at loose ends with no prior commitments, I wanted to go travelling.


"We can take it to the road," I would tell him.  "Bring guitars and advertise our open air concerts on the internet."

"Will we bring amps?" he would ask.  "We've gotta have amps..."

I thought this was silly.  Like Tony Hawks' fridge, I figured it would be difficult to hitch-hike with a pair of 25-pound amplifiers.

"C'mon," I would reply, "Musicians have been playing instruments for thousands of years and they didn't need amps..."

Later on I realized there was more to this argument than meets the eye.  What is it that makes good music?  A fundamental idea in classical music theory is that of consonant versus dissonant intervals.  That is, to produce a pleasant-sounding chord, the ratio between the fundamental frequencies of the notes must be simple, rational-fraction intervals, say 3:2.

To understand what I mean by this, we must go back to our basic physics: the mechanics of standing waves.  If we have a vibrating string (such as on a guitar) that is fixed at both ends, the fundamental will be a wave with a wavelength twice the length of the string.  Of course the string won't vibrate at just this frequency; there will also be standing waves with wavelengths equal to the length of the string, 2/3 the length of the string, 1/2 the length of the string and so on.  Thus all the frequencies (or harmonics) can be predicted, to first order, by a simple arithmetic sequence.
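
To put some numbers to this, here is a small sketch listing the first few harmonics of a string fixed at both ends; the string length is a typical guitar scale and the wave speed is a made-up round number that happens to give the 110 Hz of an open A string:

    /* Sketch: for a string fixed at both ends the allowed wavelengths are
       2L/n, so the frequencies n*v/(2L) form a simple arithmetic sequence. */
    #include <stdio.h>

    int main(void) {
      double L = 0.65;     /* string length in metres (typical guitar scale) */
      double v = 143.0;    /* wave speed on the string in m/s (illustrative) */
      for (int n = 1; n <= 5; n++) {
        printf("harmonic %d: wavelength %.3f m, frequency %.1f Hz\n",
            n, 2.0*L/n, n*v/(2.0*L));
      }
      return 0;
    }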

If two strings are vibrating at a rational-fraction interval, say 3:2, then every second harmonic of the first string will constructively interfere with every third harmonic of the second string.  To demonstrate this effect, try taking a guitar and fretting the low 'E'  string (topmost, thickest string) on the fifth fret.  It is now at the simplest rational fraction interval with the 'A' string (the one below it), 1:1.  If you pluck one of the two strings, the other will start to vibrate in sympathy, assuming your guitar is well tuned.

The question this leads me to is simply this: is good music just louder than bad music?  This makes a certain brutal and obvious sense: louder music will shout down quieter music.  Hence Adam's desire for amplification.

Ever since the sixties, rock'n rollers have been on a quest for ever more volume.  This has led to some interesting developments.  First, when you try to amplify a standard guitar, you frequently get feedback, that squealing noise often heard from microphone PA systems, as the sound from the amplifiers gets picked up again by the guitar and re-amplified.  This led to the development of solid-body electric guitars which don't suffer from this problem as much.  Also, when you try to drive an amplifier too hard, it goes outside of its linear range, resulting in a distortion of the signal as the wave-forms get clipped.  Rock'n rollers decided that they liked this sound, resulting in the development of devices, such as this effects pedal, to produce the effect artificially at much lower volumes.


With the development of equally-tempered tunings, much of the preceding discussion about consonance and dissonance is fairly moot.  In the past, it was common to use a just tuning; that is, every note in a scale is at a rational-fraction interval from every other note, with consonant intervals being simple fractions and dissonant intervals more complex ones.  Older music is based on a seven-note scale which defines the key of the piece--music is still written in this way.  When we switch keys, a consonance in one key may become a dissonance in another.  This led to the development of equal temperament: we take the number of notes in the scale and divide the octave into that number of equal intervals.  Modern Western music uses a chromatic, or twelve-note, scale, meaning that each note is the twelfth root of two times the frequency of the one below it.  If we now go back to our basic maths, the twelfth root of two is not a rational fraction.  The older diatonic, or seven-note, scale is now picked out from the chromatic scale.  All keys sound the same, just sharpened or flattened by a certain interval.
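
A quick numerical check of the difference: in twelve-tone equal temperament a perfect fifth spans seven semitones, a frequency ratio of 2^(7/12), or roughly 1.4983, which only approximates the just ratio of 3:2.  Starting from concert A at 440 Hz:

    /* Sketch: compare an equal-tempered fifth with a just fifth above A440. */
    #include <stdio.h>
    #include <math.h>

    int main(void) {
      double a = 440.0;                              /* concert A, in Hz */
      double equal_fifth = a * pow(2.0, 7.0/12.0);   /* seven semitones up */
      double just_fifth  = a * 3.0/2.0;              /* just 3:2 ratio */
      printf("equal-tempered fifth: %.3f Hz\n", equal_fifth);
      printf("just fifth:           %.3f Hz\n", just_fifth);
      printf("difference:           %.3f Hz\n", just_fifth - equal_fifth);
      return 0;
    }

That slight mistuning is what you hear as slow beating between the nearly coinciding harmonics when a fifth is sustained on an equal-tempered instrument.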

A fretted, stringed instrument such as the guitar is almost by necessity tuned in equal temperament.  Most pianos are tuned somewhere between just and equal temperament.  The implication is that, except for perfect octaves and their multiples, no two notes are ever perfectly consonant, since they only ever approximate a rational-fraction interval.

All chords on a guitar are somewhat dissonant.  Hence modern musicians' reliance on electronic amplifiers and the feedback they produce for generating volume.  Heavy metal musicians in particular are fond of what was traditionally considered the most dissonant interval: the tri-tone or one-half octave.

Friday, May 11, 2012

I AM scientist...

The wave of the future is for collaboration and communication in science to be done completely online.  With the internet, print journals are completely obsolete.  iamscientist.com takes it a step further with crowd-sourced funding.  Check out my project.

Thursday, April 12, 2012

The Global Panopticon and the Hive Mind

In Discipline and Punish, Michel Foucault describes a specific type of prison in which the inmates can be observed at all times but do not know when they are being observed. Cells are internally lit with large windows or transparent walls and are arranged in a circle. In the centre of the circle of cells sits an observation tower which is not lit. This type of prison is called a "panopticon."

Privacy is a thing of the past. The newer generations, for the most part, seem to accept this, posting photos of themselves online in compromising positions and "tweeting" their every thought and movement. Privacy watchdogs and rights groups are increasingly up in arms about the growing discretion of federal authorities to monitor and control our online presence and about the misuse of private data by large corporations.

To see how far our privacy has been eroded, you only have to visit Google Earth. It will soon be possible to track our every movement using nothing but remote sensing satellites. As a remote sensing specialist, I am acutely aware of this. There are literally thousands of satellites sending terabytes of information down to Earth, mapping the globe a thousand times over in a single day. I call this phenomenon the "Global Panopticon."

Another thing I notice about today's young people (myself included, even though I'm not that young anymore) is that they seem to spend relatively little time living in the real world. Most of their living is done online. Step onto any bus and you will see people staring into their cell phones, not looking out the window, not interacting with their fellow passengers. At some point, the internet becomes an extension of the brain. Suppose you're in a strange city and you get a sudden craving for coffee. Open up your smart phone and Google Maps can figure out where you are and where the nearest coffee house is in relation to you, printing off step-by-step directions.

Soon, even the intermediary in this procedure--the cell phone--will be unnecessary. The technology exists to do this directly from thought through a brain-computer interface. I estimate that it will be fully commercially viable in less than 40 years. I also predict that the vast majority of people will opt to have such a device implanted in their skulls. Make no mistake, there will be no coercion: people will be lining up for these things.

How does it work? Since the human brain operates primarily through electrical impulses sent along neurons, all you need is an array of electrodes wired to a transceiver. A cross-wise incision is made in the gray matter (perpendicular to the striations of the neurons), and then it is just a matter of training the brain to interpret the incoming signals and to produce its own outgoing signals, which are picked up and decoded by the implant. This is the birth of the global consciousness or, if we are more pessimistic, the hive mind.

If such a radical shift is going to take place in the nature of human consciousness, it seems we ought to prepare for it. Unfortunately, the young people, who, by-and-large, will lead the shift, strike me as anything but prepared. They will be acculturated in a way that will allow the shift to take place, but they will not be prepared.

Most multi-cellular organisms are arranged in such a way that each cell is differentiated and performs a highly specialized function. The analogy of individual human beings as cells, however, may not be a fruitful or desirable one. I would suggest, rather, that we should strive to keep the same democratic ideals of freedom and equality that, at least nominally, exist today.

Suppose for the sake of argument we say that we need a "brain centre," a person or group to coordinate this great mass of bodies, a "ruling elite." I cannot think of a single person who would be qualified for the job.

Monday, April 9, 2012

Vernier clock

Tools, people and elephants

In the past, one of the things that has been cited as setting humans above the other animals is our ability to make tools. I think it's more accurate to say, rather, that we have a tool. Before you get the wrong idea, let me clarify that: we have, in fact, two tools. Highly sensitive and versatile tools. Tools we can use to make music, write poetry, design buildings and many other things. One of the most important functions of these tools is to make other tools, so you could call them meta-tools.

Of course, other animals have tools as well, but most of them are not as versatile. Consider, for instance, the elephant. This is another animal with a highly versatile tool and a similarly large brain. How is it that humans got so much farther and became so much more dominant than elephants? Unfortunately, the elephant has only one such tool, and although it is prehensile, it has only two digits instead of five. Because a human has two tools, one can be used to hold the object being worked on, say a piece of flint for an arrowhead, while the other holds the object doing the work, say a strong, hard piece of rock to chip the arrowhead into shape. The extra digits on human tools serve a similar purpose, whereas the bare minimum would be only two opposable digits, as on the elephant's tool.

People argue about which of the cluster of traits that supposedly distinguish us from the other animals--high intelligence, tool-making, language, walking upright--came first and what drove the changes. I would argue that our sensitive hands not only preceded the other traits, they are what drove them. Because we have these highly sensitive tools on our upper bodies, it is better to walk upright so that they are not damaged. Their use requires high intelligence, but more importantly, instruction from an early age, which requires language.