Neural networks are the models responsible for the deep learning revolution that began in 2006, but their foundations go back as far as the 1960s. In this lecture, DeepMind Research Scientist Wojciech Czarnecki goes through the basics of how these models operate, learn and solve problems. He also introduces terminology and naming conventions to prepare attendees for further, more advanced talks. Finally, he briefly touches on more research-oriented directions of neural network design and development.

Speaker Bio:

Wojciech Czarnecki is a Research Scientist at DeepMind. He obtained his PhD from the Jagiellonian University in Cracow, during which he worked at the intersection of machine learning, information theory and cheminformatics. Since joining DeepMind in 2016, Wojciech has mainly worked on deep reinforcement learning, with a focus on multi-agent systems such as the recent Capture the Flag project and AlphaStar, the first AI to reach the highest league of human players in a widespread professional esport without any simplification of the game.

About the lecture series:

The Deep Learning Lecture Series is a collaboration between DeepMind and the UCL Centre for Artificial Intelligence. Over the past decade, Deep Learning has evolved into the leading artificial intelligence paradigm, providing us with the ability to learn complex functions from raw data with unprecedented accuracy and scale. Deep Learning has been applied to problems in object recognition, speech recognition, speech synthesis, forecasting, scientific computing, control and many more. The resulting applications touch all of our lives, in areas such as healthcare and medical research, human-computer interaction, communication, transport, conservation, manufacturing and many other fields of human endeavour. In recognition of this huge impact, the 2019 Turing Award, the highest honour in computing, was awarded to pioneers of Deep Learning.

In this lecture series, research scientists from the leading AI research lab DeepMind deliver 12 lectures on an exciting selection of topics in Deep Learning, ranging from the fundamentals of training neural networks, via advanced ideas around memory, attention and generative modelling, to the important topic of responsible innovation.

#deeplearning

Neural Networks Foundations