That talk looks fascinating... bookmarked it for a later read. Quite a few people in all kinds of areas would benefit from a study of machine learning theory -- I think some of these findings are philosophically profound.
The reason that simple descent (in this case a modified Newton's method) works is itself very simple... the curvatures of the fitness landscape across dimensions at each point are mostly uncorrelated, and a local minimum occurs only when the curvature is positive in every dimension, which is exponentially unlikely in the dimension. I hadn't heard such a simple and useful insight in a while... this is exactly what theory is for! Anyway, that talk is very interesting.
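That "exponentially unlikely" claim is easy to check numerically under the stated assumption. A minimal sketch (the uniform ±curvature model and the function name are my own, not from the talk): treat a critical point's d principal curvatures as independent draws that are equally likely to be positive or negative, and estimate how often all d come out positive, i.e. how often the point is a local minimum rather than a saddle.

```python
import numpy as np

rng = np.random.default_rng(0)

def frac_local_minima(d, trials=100_000):
    # Model each critical point's d principal curvatures as independent
    # and symmetric about zero (the "mostly uncorrelated" assumption).
    curvatures = rng.standard_normal((trials, d))
    # A local minimum needs positive curvature in *every* dimension.
    return np.mean(np.all(curvatures > 0, axis=1))

for d in (1, 2, 5, 10):
    # Under this model the true fraction is exactly 2**-d,
    # so it halves with every added dimension.
    print(d, frac_local_minima(d))
```

Under this toy model the fraction of critical points that are minima is exactly 2^-d, so in high dimensions essentially every critical point a descent method encounters is a saddle it can escape.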