MIT’s official motto is “[*Mens et Manus*](https://libraries.mit.edu/mithistory/institute/seal-of-the-massachusetts-institute-of-technology/)” — Mind and Hand — so it’s no coincidence that we, too, are big believers in this philosophy. As the organizers and lecturers for MIT’s Introduction to Deep Learning, we wanted to develop a course that focused on both the **conceptual foundation** and **the practical skills** it takes to understand and implement deep learning algorithms. And we’re beyond excited to share what we’ve put together with you, here and now: a series of nine technical lectures and three TensorFlow software labs, designed to be accessible to a variety of technical backgrounds, free and open to all.

**Mens: Technical Lectures.** We start from the very fundamentals of neural networks — the Perceptron, fully connected networks, and the backpropagation algorithm; journey through recurrent and convolutional neural networks, generative models, and deep reinforcement learning; and explore the expanding frontiers of modern deep learning research, concluding with a series of guest lectures from leading industry researchers. All lectures are free and open to all, with links in the thumbnails below.

**Manus: TensorFlow Software Labs.** We’ve designed three open-source TensorFlow software labs that put the lecture material into practice.

This blog post highlights each of these three software labs and their accompanying lectures.

Designing the course and the labs to be accessible to as many people as possible was a big priority for us. So Lecture 1 focuses on neural network fundamentals, and the first module of Lab 1 provides a clean introduction to TensorFlow, written in preparation for the upcoming release of TensorFlow 2.0.

Our introductory TensorFlow exercises highlight a few key concepts in particular: how to **execute computations** using math operators, how to **define neural network models**, and how to use **automatic differentiation** to train networks with backpropagation.
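As a rough sketch of what those three concepts look like in TensorFlow's eager style (the exact lab exercises differ), consider:

```python
import tensorflow as tf

# 1. Execute computations using math operators
a = tf.constant(2.0)
b = tf.constant(3.0)
c = tf.add(a, b)  # equivalently: a + b

# 2. Define a neural network model with the Keras API
model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation='relu'),
    tf.keras.layers.Dense(1)
])

# 3. Use automatic differentiation to compute gradients for backpropagation
x = tf.random.normal((1, 2))
with tf.GradientTape() as tape:
    y = model(x)
    loss = tf.reduce_mean(tf.square(y))
grads = tape.gradient(loss, model.trainable_variables)
```

In practice, you would pass `grads` to an optimizer (e.g. `tf.keras.optimizers.Adam`) via `apply_gradients` to complete one training step.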

Following the Intro to TensorFlow module, Lab 1’s second module dives right into building and applying a **recurrent neural network (RNN) for music generation**, designed to accompany Lecture 2 on deep sequence modeling. You’ll build an AI algorithm that can generate brand new, never-heard-before Irish folk music. Why Irish folk music, you may ask? Well, we think these cute dancing clovers (courtesy of Google) are reason enough.

You’ll fill in code blocks to define the RNN model, train the model using a dataset of Irish folk songs (in the ABC notation), use the learned model to generate a new song, and then play back what’s generated to hear how well your model performs. Check out this example song we generated:
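A minimal sketch of such a character-level RNN, with hypothetical hyperparameters (the vocabulary size and layer widths here are assumptions, not the lab's exact values):

```python
import tensorflow as tf

vocab_size = 83      # assumed: number of unique characters in the ABC-notation corpus
embedding_dim = 256
rnn_units = 1024

# Embed each character of the ABC text, process the sequence with an LSTM,
# and output logits over the next character at every timestep.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, embedding_dim),
    tf.keras.layers.LSTM(rnn_units, return_sequences=True),
    tf.keras.layers.Dense(vocab_size)   # unnormalized next-character logits
])

# Training minimizes cross-entropy between predicted and true next characters
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
```

To generate a new song, you repeatedly sample a character from the model's output distribution (e.g. with `tf.random.categorical`) and feed it back in as the next input.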

Lab 2 accompanies lectures on deep computer vision and deep generative models. Part 1 provides continued practice with the implementation of fundamental neural network architectures through an example of convolutional neural networks (CNNs) for classification of handwritten digits in the famous MNIST dataset.
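A CNN for MNIST classification might look like the following sketch (not necessarily the lab's exact architecture): stacked convolution and pooling layers that extract visual features, followed by dense layers that classify them into the ten digit classes.

```python
import tensorflow as tf

# A minimal CNN for classifying 28x28 grayscale MNIST digits
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(32, 3, activation='relu'),
    tf.keras.layers.MaxPool2D(),
    tf.keras.layers.Conv2D(64, 3, activation='relu'),
    tf.keras.layers.MaxPool2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')  # one probability per digit class
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
```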

The second portion of this lab takes things a step further and explores two prominent examples of applied deep learning: facial detection and algorithmic bias. Though it may be no surprise that neural networks perform really well at recognizing faces in images, there has been a lot of recent attention on how such AI systems may suffer from hidden algorithmic bias. It turns out that **deep learning itself** can help fight this bias.

In recent work, we trained a model, based on a variational autoencoder (VAE), that learns both a specific task, like face detection, and the **underlying structure** of the training data. In turn, the algorithm uses this learned latent structure to **uncover and minimize** hidden biases. When applied to the facial detection task, our algorithm **decreased categorical bias** and maintained high overall accuracy compared to state-of-the-art models.
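One way learned latent structure can drive debiasing is through adaptive resampling: estimate how common each training example's latent features are, and show the model rare examples more often. The sketch below is hypothetical — the function name, per-dimension histogram density estimate, and bin count are our assumptions, not the paper's exact implementation.

```python
import numpy as np

def debiasing_sample_probabilities(z_mu, n_bins=10, smoothing=0.001):
    """Given latent means z_mu (one row per image, from the VAE encoder),
    return sampling probabilities that upweight rare latent features."""
    n, latent_dim = z_mu.shape
    density = np.ones(n)
    # Estimate a histogram density independently along each latent dimension
    for d in range(latent_dim):
        hist, bin_edges = np.histogram(z_mu[:, d], bins=n_bins, density=True)
        bin_idx = np.clip(np.digitize(z_mu[:, d], bin_edges[:-1]) - 1, 0, n_bins - 1)
        density *= hist[bin_idx] + smoothing
    # Sample inversely proportional to estimated density, so examples with
    # rare latent features are seen more often during training
    weights = 1.0 / density
    return weights / weights.sum()
```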

This software lab is inspired by this work: **you’ll actually build this debiasing model and evaluate its efficacy in debiasing the facial detection task**.

In addition to thinking about issues of algorithmic bias — and how to combat them — you’ll gain practical experience **working with VAEs**, an architecture that is often not highlighted in deep learning implementation tutorials. For example, you’ll fill out this code block that defines the loss function for the VAE used in the debiasing model:

```
def debiasing_loss_func(x, x_pred, y_label, y_logit, z_mu, z_logsigma, kl_weight=0.005):
    # compute the three loss components
    reconstruction_loss = tf.reduce_mean(tf.keras.losses.MSE(x, x_pred), axis=(1, 2))
    classification_loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=y_label, logits=y_logit)
    kl_loss = 0.5 * tf.reduce_sum(tf.exp(z_logsigma) + tf.square(z_mu) - 1.0 - z_logsigma, axis=1)
    # propagate debiasing gradients only on relevant datapoints (y_label == 1)
    gradient_mask = tf.cast(tf.equal(y_label, 1), tf.float32)
    # define the total debiasing loss as a combination of the three losses
    vae_loss = kl_weight * kl_loss + reconstruction_loss
    total_loss = tf.reduce_mean(classification_loss + gradient_mask * vae_loss)
    return total_loss
```

**debiasing_loss.py**

Importantly, this approach can be applied **beyond facial detection to any setting** where we may want to debias against imbalances that may exist within the data.

**If you’re excited to learn more, check out the paper:** Amini, Soleimany, et al., “Uncovering and Mitigating Algorithmic Bias through Learned Latent Structure”. *AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society*, 2019.

In the final lab, students explore a different class of learning problems than in the previous two labs. Lecture 5 introduces foundational techniques in deep reinforcement learning.

In contrast to the supervised and unsupervised settings of the previous labs, reinforcement learning seeks to teach an agent how to act in the world to maximize its own reward. TensorFlow’s imperative execution provides a streamlined way to implement RL algorithms, which students program from the ground up in Lab 3.

We focus on two tasks: one in control (Cart-Pole) and one in games (Pong). Students are tasked with building a modular RL framework that learns these two very different environments using only a single “RL brain”.

Working with these baseline environments gives students a fast way to prototype new algorithms, a concrete understanding of how to implement RL training procedures, and templates to build on in their final projects.
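One core piece of such an environment-agnostic training procedure can be sketched as follows: computing discounted, normalized returns for a policy-gradient (REINFORCE-style) update. The function name and normalization choice are our assumptions, not the lab's exact code.

```python
import numpy as np

def discount_rewards(rewards, gamma=0.99):
    discounted = np.zeros(len(rewards))
    running = 0.0
    # Accumulate rewards backwards in time, so each step's return includes
    # the discounted rewards that follow it
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        discounted[t] = running
    # Normalizing the returns stabilizes training across environments with
    # very different reward scales (e.g. Cart-Pole vs. Pong)
    return (discounted - discounted.mean()) / (discounted.std() + 1e-8)
```

These returns then weight the log-probabilities of the actions taken, so the policy network learns to make rewarding actions more likely.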

We would like to thank the TensorFlow team and our incredible group of TAs for their continued support in making this MIT course possible. The MIT 6.S191 software labs provide a great way to quickly gain practical experience with TensorFlow by implementing state-of-the-art techniques and algorithms introduced in lecture.
