Since the renowned Dartmouth College conference of 1956, AI research has experienced many crests and troughs of progress. Of the many lessons learned during this time, some have had to be re-learned repeatedly, and the most important of them has also been the most difficult for many researchers to accept.

The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin. The ultimate reason for this is Moore's law, or rather its generalization: the continued, exponentially falling cost per unit of computation. Most AI research has been conducted as if the computation available to the agent were constant (in which case leveraging human knowledge would be one of the only ways to improve performance), but over a slightly longer time than a typical research project, massively more computation inevitably becomes available.

Seeking an improvement that makes a difference in the shorter term, researchers try to leverage their human knowledge of the domain, yet the only thing that matters in the long run is the leveraging of computation. These two need not run counter to each other, but in practice they tend to. Time spent on one is time not spent on the other. There are psychological commitments to investment in one approach or the other. And the human-knowledge approach tends to complicate methods in ways that make them less suited to taking advantage of general methods that leverage computation. There have been many examples of AI researchers belatedly learning this bitter lesson, and it is instructive to review some of the most prominent.
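
To make the timescale argument concrete, here is a minimal back-of-the-envelope sketch in Python. It assumes a roughly two-year doubling time for computation per unit cost, which is an illustrative round number rather than a figure from the essay; the point is simply that over a decade or two the available computation grows by orders of magnitude, far outpacing the gains a short project can extract from hand-engineered domain knowledge.

```python
# Back-of-the-envelope illustration of exponentially falling cost per unit
# of computation. The two-year doubling time is an assumed round number,
# not a figure from the essay.
DOUBLING_TIME_YEARS = 2.0

def compute_growth(years: float, doubling_time: float = DOUBLING_TIME_YEARS) -> float:
    """Factor by which affordable computation grows over `years`."""
    return 2.0 ** (years / doubling_time)

if __name__ == "__main__":
    # Typical project horizon vs. research-career timescales.
    for horizon in (2, 5, 10, 20):
        factor = compute_growth(horizon)
        print(f"{horizon:>2} years -> ~{factor:,.0f}x more computation per dollar")
```

Under that assumed doubling time, a 10-year horizon yields roughly 32 times more computation per dollar and a 20-year horizon roughly 1,000 times more, which is the scale of change that general, computation-hungry methods are positioned to exploit.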
