Deep artificial neural networks, especially recurrent ones, have recently won many machine learning and pattern recognition competitions. This historical survey provides a succinct summary of relevant work, much of which dates back to the previous millennium.
Shallow and deep learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable causal links between actions and effects. The survey reviews deep supervised learning (including a recapitulation of the history of backpropagation), unsupervised learning, reinforcement learning and evolutionary computation, and indirect search for short programs encoding deep and large networks.
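The overview itself is historical, but as a quick illustration of the kind of deep supervised learning it surveys, the sketch below shows backpropagation for a two-layer network on toy data. The network size, data, and hyperparameters are illustrative assumptions, not taken from the overview.

```python
# Minimal sketch of backpropagation (deep supervised learning) for a
# two-layer network on toy data. All sizes, data, and hyperparameters
# are illustrative assumptions, not from the overview itself.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x) on [-pi, pi].
X = rng.uniform(-np.pi, np.pi, size=(256, 1))
y = np.sin(X)

# Two learnable layers -> credit assignment paths of depth 2.
W1 = rng.normal(scale=0.5, size=(1, 16))
b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1))
b2 = np.zeros(1)

lr = 0.05
for step in range(2000):
    # Forward pass.
    h_pre = X @ W1 + b1           # hidden pre-activation
    h = np.tanh(h_pre)            # nonlinearity
    y_hat = h @ W2 + b2           # linear output layer
    loss = np.mean((y_hat - y) ** 2)

    # Backward pass: propagate error derivatives along the causal
    # chain from output back to each learnable weight.
    n = X.shape[0]
    d_yhat = 2.0 * (y_hat - y) / n        # dLoss/dy_hat
    dW2 = h.T @ d_yhat
    db2 = d_yhat.sum(axis=0)
    d_h = d_yhat @ W2.T
    d_hpre = d_h * (1.0 - h ** 2)         # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ d_hpre
    db1 = d_hpre.sum(axis=0)

    # Gradient descent update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final mean squared error: {loss:.4f}")
```

In this sketch the number of learnable, nonlinear stages between input and output corresponds to the depth of the credit assignment paths mentioned above; deeper networks simply extend this backward pass through more such stages.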
This is the preprint of an invited Deep Learning (DL) overview. One of its goals is to credit the individuals who contributed to the present state of the art. It acknowledges the limitations of trying to achieve this goal. The DL research community itself may be viewed as a continually evolving, deep network of researchers who have influenced one another in complex ways. Starting from recent DL results, the overview traces the history of relevant concepts back fifty years and further, sometimes using "local search" to follow chains of citations of citations backward in time. Since not all DL publications properly cite earlier relevant work, additional global search strategies were used, aided by consulting numerous neural network experts. As a consequence, the present preprint consists mostly of references.