Sunday, March 29, 2015

A Connection Machine in your pocket

In the '90s I read the paper "Evolution as a Theme in Artificial Life: The Genesys/Tracker System", in which the researchers evolved an ant program that could traverse a winding, broken trail they dubbed the John Muir Trail.  As the ant walked the trail it first had to figure out how to turn and then how to step over gaps in the trail, which grew wider the further the ant walked.

Executing a single generation took about 30 seconds, and the experiment typically used a population size of 65,536 and ran for 100 generations, so a full run took about an hour on their Connection Machine (CM2) with 16K processors.  For those not familiar with the CM2, it was an amazing machine, a cube 1.5 meters on a side with up to 65,536 processors, and to the eyes of a teenage boy the image below would have been the equivalent of the Ferrari poster hung on the wall.  The machines were so cool that the CM-5, with its red-light design, was the featured supercomputer seen in Jurassic Park.  For a teenager it was an insanely awesome and powerful machine that, had I had access to one, I wouldn't have had a clue what to do with.


The story of these ants was one of several things that sparked my interest in genetic algorithms and my continued interest in programming in general.  At the time I had a Pentium 133 and could only dream about the computational power the CM2 had.  I would sketch out ideas for things I would do, but of course they never left the paper, which was okay because I was still figuring out the basics of programming.

Last weekend I was cleaning out my bookshelf and came across a book that mentioned the Genesys/Tracker system, which made me look up the paper again.  Reading the paper in bed I discovered that, unlike when I was a teenager, I actually understood the mechanics of what the researchers were doing.  Looking around the next evening I didn't find any source code anywhere and thought it would be fun to try to reproduce their program and see how long it took to run.  I coded up a simple little C program with the finite-state machine design.  It isn't multi-threaded, it is wasteful of memory, and the little bit I did optimize was clearly overkill considering the runtime.  I tried not to pre-optimize (other than replacing rand()) and implemented a simple design.  After fixing two annoying bugs I watched it on my screen rapidly finding better and better ants.  On my five-year-old laptop running inside a VM it took around a minute to run the same experiment with a population of 65K for 100 generations.
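To give a sense of how simple the core is: evaluating an ant is just running its state machine against the trail for a fixed number of steps and counting the food it eats.  The sketch below is roughly that shape rather than a copy of what is in the repository; the grid size, starting corner, facing direction, and action encoding here are simplifications on my part.

    /* Sketch of the finite-state-machine ant and its fitness evaluation,
     * assuming a 32x32 toroidal grid and 200 time steps. */
    #include <string.h>

    #define GRID      32
    #define MAX_STEPS 200
    #define STATES    32            /* maximum number of FSM states */

    enum action { MOVE, LEFT, RIGHT, NOP };

    struct fsm {
        /* For each state and each input (0 = nothing ahead, 1 = food
         * ahead): what to do and which state to go to next. */
        unsigned char action[STATES][2];
        unsigned char next[STATES][2];
    };

    /* Directions: 0 = north, 1 = east, 2 = south, 3 = west */
    static const int dx[4] = { 0, 1, 0, -1 };
    static const int dy[4] = { -1, 0, 1, 0 };

    /* Walk the trail and return the number of food cells eaten. */
    int fitness(const struct fsm *a, const unsigned char trail[GRID][GRID])
    {
        unsigned char grid[GRID][GRID];
        memcpy(grid, trail, sizeof(grid));

        int x = 0, y = 0, dir = 1;   /* start in a corner, facing east */
        int state = 0, eaten = 0;

        for (int step = 0; step < MAX_STEPS; step++) {
            int ax = (x + dx[dir] + GRID) % GRID;   /* cell ahead (torus) */
            int ay = (y + dy[dir] + GRID) % GRID;
            int input = grid[ay][ax] ? 1 : 0;

            switch (a->action[state][input]) {
            case MOVE:
                x = ax; y = ay;
                if (grid[y][x]) { grid[y][x] = 0; eaten++; }
                break;
            case LEFT:  dir = (dir + 3) % 4; break;
            case RIGHT: dir = (dir + 1) % 4; break;
            case NOP:   break;
            }
            state = a->next[state][input];
        }
        return eaten;
    }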

The phone I carry around in my pocket isn't that much slower than the laptop, and it is weird to think that a machine I will soon consider obsolete is already faster than a CM2.  It is one thing to have heard of Moore's law, but every once in a while you get reminded of what it really means.

A quick profile shows that 50% of the time is spent not executing the fitness function, but simply in the crossover function used to generate the next generation.  I even started rewriting it to be less brute force, but then I stopped myself, because at the end of the day 100 generations only took a minute or two, and when I let it go crazy and run for 500 generations it took no longer than going to make a cup of coffee.  Making it faster is more a challenge of how fast you could make it than any actual need.  Moore's law has enabled me to write simpler code and get results faster than if I had needed to spend a week making it fast enough first.  If anyone is up for nothing more than a coding challenge, grab the source and see how much faster you can make the execution time.
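For reference, the crossover the profile points at is nothing fancier than copying bytes from one parent up to a random cut point and from the other parent after it.  The genome length and the little xorshift generator below are stand-ins, not the actual values or the rand() replacement from my code.

    /* Sketch of a single-point crossover over flat genomes. */
    #include <stdint.h>
    #include <string.h>

    #define GENOME_BYTES 128

    static uint32_t rng_state = 0x12345678u;

    /* Tiny xorshift PRNG, standing in for whatever replaced rand(). */
    static uint32_t rand_u32(void)
    {
        uint32_t x = rng_state;
        x ^= x << 13;
        x ^= x >> 17;
        x ^= x << 5;
        return rng_state = x;
    }

    /* Child takes mom's genes up to the cut point, dad's after it. */
    void crossover(const uint8_t *mom, const uint8_t *dad, uint8_t *child)
    {
        size_t cut = rand_u32() % GENOME_BYTES;
        memcpy(child, mom, cut);
        memcpy(child + cut, dad + cut, GENOME_BYTES - cut);
    }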

A nice aspect of having such fast hardware today is that I can reproduce the results published in the paper if I want.  While not every run found a solution within 100 generations, it often came close, so at a minimum I can confirm those results.  The score distribution and results shown in figure 7 and figure 8 wouldn't be too much effort to reproduce either.  I don't know if it would be worth a letter to the journal, but putting these notes and the source up online where they can be searched is probably just as good.

In the paper they mention that they didn't select for the minimum number of steps or the minimum number of states, only for a solution that could walk the whole trail.  The best solution presented in the paper required 12 states and took all 200 steps to complete.  With a quick tweak to the code, ants that first accomplished walking the full trail were awarded bonus points for using the fewest states.  After ten minutes or so it had found a state machine that only required 9 states.  I let the application run overnight before stopping it at 28,340 generations, which would have been the equivalent of around 12 straight days on a CM2; given the cost of the machine, that was probably an unthinkably long period of time to have it running a single program.  But it was unable to find a smaller program that would run in fewer than 9 states.  I then tweaked it to find a fast solution.  Very quickly it found one, highly evolved to this specific trail, that could finish in only 159 steps.
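The tweak itself was just a change to the scoring, roughly along these lines (building on the fitness() sketch above); count_states() is a simple reachability count rather than anything from the actual source, and the bonus only kicks in once the whole trail has been eaten.

    #define TRAIL_CELLS 89   /* food cells on the John Muir Trail */

    /* Count the states reachable from state 0 by following the transition
     * table; a simple stand-in for "how many states the machine uses". */
    int count_states(const struct fsm *a)
    {
        unsigned char seen[STATES] = { 0 };
        int stack[STATES], top = 0, count = 0;

        stack[top++] = 0;
        seen[0] = 1;
        while (top > 0) {
            int s = stack[--top];
            count++;
            for (int input = 0; input < 2; input++) {
                int n = a->next[s][input];
                if (!seen[n]) { seen[n] = 1; stack[top++] = n; }
            }
        }
        return count;
    }

    int score(const struct fsm *a, const unsigned char trail[GRID][GRID])
    {
        int eaten = fitness(a, trail);
        if (eaten < TRAIL_CELLS)
            return eaten;                          /* didn't finish the trail */
        return eaten + (STATES - count_states(a)); /* bonus for unused states */
    }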

Last but not least, I made a basic Genesys simulator webpage where you can watch various ants walk their solutions live.  I included two 9-state solutions, the paper's 12-state solution*, and the "fast" 159-step solution, and when you first load the page it shows the basic hand-generated state machine mentioned in the paper, which is unable to complete the trail in less than 200 steps.

If you're curious about running the genesys/tracker program I wrote, or up to the challenge of making it faster, the source code can be found at https://github.com/icefox/genesys

* In Figure 11 of the paper, the Champ-100 12-state solution shows 0/S for State 11, which is an invalid action and presumably a typo for 0/N.
