Evolution as machine learning

This week I had the chance to attend the Turing award lecture of the 2010 winner, Leslie Valiant. His research focuses on computational complexity, which is of only marginal interest to me, so I had no great expectations – nevertheless, this being the computer science equivalent of a Nobel prize acceptance speech, I felt compelled to attend.

How wrong I was: instead of a lecture on his research he delivered one of those speeches that really make you think. First – after making clear that he believes strongly in the theory of evolution – he laid out the basic problem with it: we don’t have a way to prove that evolution was indeed possible, given the huge space of potential variation. Is it possible to reach the current living ecosystem without ‘external guidance’? Is it at all possible that 4.5 billion years are sufficient?

His thesis was that evolution can be modeled as a machine learning process. The ‘machine’ is the living ecosystem itself; the training samples are the variations in the DNA; the outcome of learning is labeled by survival (positive) or extinction (negative). In this context, one of the most intriguing questions is what the machine is trained for – or, simply put, what the meaning of the living world, and of evolution, is. Valiant’s answer: a living ecosystem that always reacts in an optimal way to the surrounding world. A pretty interesting idea, I must admit.
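Valiant’s formal treatment (his work on “evolvability” within the PAC learning framework) is far more subtle, but the basic analogy can be sketched as a toy loop. Everything here is my own illustration, not Valiant’s model: the bitstring genomes, the hypothetical fixed `TARGET` standing in for the environment, and the survival rule are all simplifying assumptions.

```python
import random

random.seed(42)

GENOME_LEN = 20
# Hypothetical "environment": the responses that maximize survival.
# In reality the environment is neither fixed nor known in advance.
TARGET = [1] * GENOME_LEN

def fitness(genome):
    """Survival score: how well the genome's responses match the environment."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    """DNA variation: each locus flips with a small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=50, generations=100):
    """Evolution as learning: mutation proposes training samples,
    survival of the fittest half acts as the (implicit) label."""
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        variants = [mutate(g) for g in population]      # new "samples"
        candidates = population + variants
        candidates.sort(key=fitness, reverse=True)      # selection
        population = candidates[:pop_size]              # survivors "learned"
    return population[0]

best = evolve()
print(fitness(best))  # approaches GENOME_LEN as the population adapts
```

The point of the sketch is only the mapping: no individual ever computes the target, yet the population as a whole converges toward it purely through variation and differential survival – which is what lets the process be analyzed as a learning algorithm at all.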

Unfortunately Valiant was unable to provide the actual “formula of life” modeled as a machine learning process and, consequently, no proof that evolution could happen autonomously, without that ‘external guidance’ (God?). However, he made a very compelling argument that this kind of model can actually be useful, and the whole line of thought was intriguingly fresh, visionary and challenging.

As a Turing award winner’s speech should be.
