All of the papers below are free to access and cover a range of topics, from hypergradients to modelling yield response with CNNs. Each expert also gave a reason why the paper was picked, along with a short bio.

Jeff Clune, Research Team Lead at OpenAI

We spoke to Jeff back in January, and at that time, he couldn’t pick just one paper as a must-read, so we let him pick two. Both papers are listed below:

Learning to Reinforcement Learn (2016) - Jane X Wang et al.

This paper unpacks two key questions: the limitations imposed by sparse training data, and whether recurrent networks can support meta-learning in a fully supervised context. These questions are addressed in seven proof-of-concept experiments, each examining a key aspect of deep meta-RL. The authors also consider prospects for extending and scaling up the approach, and point out some potentially important implications for neuroscience.
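The central mechanic of deep meta-RL is that the recurrent agent receives not just the current observation but also its previous action and reward, so that "learning" can happen inside the hidden state rather than in the weights. The sketch below is a hypothetical minimal illustration of that input convention, not the paper's architecture; the sizes, weight matrices, and the name `rnn_step` are all assumptions for illustration.

```python
import numpy as np

# Illustrative sizes and randomly initialised weights (not from the paper).
rng = np.random.default_rng(0)
n_obs, n_actions, n_hidden = 4, 2, 8
Wx = rng.normal(size=(n_hidden, n_obs + n_actions + 1)) * 0.1
Wh = rng.normal(size=(n_hidden, n_hidden)) * 0.1

def rnn_step(h, obs, prev_action, prev_reward):
    """One recurrent step: the input concatenates the observation with the
    previous action (one-hot) and the previous reward, so the hidden state
    can accumulate task statistics across the episode."""
    onehot = np.zeros(n_actions)
    onehot[prev_action] = 1.0
    x = np.concatenate([obs, onehot, [prev_reward]])
    return np.tanh(Wx @ x + Wh @ h)
```

Because reward and action feed back into the state, the same fixed weights can behave differently on different tasks within a single rollout, which is the sense in which the RNN "learns to learn".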

Gradient-based Hyperparameter Optimization through Reversible Learning (2015) - Dougal Maclaurin, David Duvenaud, and Ryan P. Adams.
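The paper's core idea is to treat the training procedure itself as differentiable, so you can take exact gradients of validation loss with respect to hyperparameters ("hypergradients"). The sketch below illustrates this for a 1-D quadratic and plain SGD, propagating the sensitivity of the weight to the learning rate alongside the forward pass; the setup and the name `hypergradient` are assumptions for illustration, not the authors' code, which uses a reversible momentum-SGD to recover the trajectory without storing it.

```python
def hypergradient(lr, w0=0.0, a=3.0, b=2.5, steps=10):
    """Train loss: (w - a)^2; validation loss: (w - b)^2.
    Returns (validation loss, d validation loss / d learning rate)."""
    w, dw_dlr = w0, 0.0               # weight and its sensitivity to lr
    for _ in range(steps):
        grad = 2.0 * (w - a)          # d/dw of the training loss
        # Differentiate the update  w <- w - lr * grad  with respect to lr:
        # d(grad)/dlr = 2 * dw_dlr, so
        # dw_dlr <- dw_dlr - grad - lr * 2 * dw_dlr
        dw_dlr = dw_dlr * (1.0 - 2.0 * lr) - grad
        w = w - lr * grad
    val_loss = (w - b) ** 2
    dval_dlr = 2.0 * (w - b) * dw_dlr  # chain rule through the final w
    return val_loss, dval_dlr
```

With this gradient in hand, the learning rate itself can be tuned by gradient descent, e.g. `lr -= meta_lr * hypergradient(lr)[1]` in an outer loop, which is the sense in which hyperparameter optimization becomes just another gradient-based problem.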


13 must-read papers from AI experts