
# Lecture Notes in Deep Learning: Introduction — Part 4

These are the lecture notes for FAU’s YouTube Lecture “Deep Learning”. This is a full transcript of the lecture video and matching slides. We hope you enjoy this as much as the videos. Of course, this transcript was created largely automatically with deep learning techniques, and only minor manual modifications were performed. If you spot mistakes, please let us know!

Welcome to our deep learning lecture. We are now in Part 4 of the introduction. In this fourth part, we want to talk about machine learning and pattern recognition, and first of all we have to introduce a bit of terminology and notation.

Notation in this lecture. Image under CC BY 4.0 from the Deep Learning Lecture.

So, throughout this entire lecture series, we will use the following notation: Matrices are bold and uppercase; examples are M and A. Vectors are bold and lowercase; examples are v and x. Scalars are italic and lowercase: y, w, α. For the gradient of a function, we use the gradient symbol ∇; for partial derivatives, we use the partial notation ∂. Furthermore, we have some specifics for deep learning: the trainable weights will generally be called w, and the features or inputs are x. They are typically vectors. Then, we have the ground truth label y and an estimated output ŷ. If we have some iterations going on, we typically put the iteration index in superscript brackets: iteration i for variable x. Of course, this is a very coarse notation and we will develop it further throughout the lecture.
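As a quick illustration, these conventions could be written in LaTeX as follows (the macros are a sketch for illustration, not the slides’ actual source):

```latex
% Notation conventions used throughout the lecture (illustrative sketch)
\mathbf{M}, \mathbf{A}            % matrices: bold, uppercase
\mathbf{v}, \mathbf{x}            % vectors: bold, lowercase
y, \; w, \; \alpha                % scalars: italic, lowercase
\nabla f(\mathbf{x})              % gradient of a function
\frac{\partial f}{\partial x}     % partial derivative
\hat{y}                           % estimated output
\mathbf{x}^{(i)}                  % iteration index i in superscript brackets
```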

Classical image processing pipeline. Image under CC BY 4.0 from the Deep Learning Lecture.

If you have attended previous lectures of our group, then you should know the classical image processing pipeline of pattern recognition. It starts with recording, i.e., sampling followed by analog-to-digital conversion. Then, you have pre-processing and feature extraction, followed by classification. Of course, for the classification step, you have to do training. The first part of the pattern recognition pipeline is covered in our lecture Introduction to Pattern Recognition. The main part, classification, is covered in Pattern Recognition.

Classical feature extraction. Image under CC BY 4.0 from the Deep Learning Lecture.

Now, what you see in this image is a classical image recognition problem. Let’s say you want to differentiate apples from pears. One idea is to fit an ellipse around each of them and then measure the lengths of the major and minor axes. You will find that apples are round while pears are elongated, so their ellipses differ in the ratio of the major to the minor axis.

Apples and pears in vector space. Image under CC BY 4.0 from the Deep Learning Lecture.

Now, you could take those two numbers and represent them as a vector. Then, you enter a two-dimensional space, basically a vector space representation, in which you will find that all of the apples are located along the diagonal: if their diameter in one direction increases, the diameter in the other direction also increases. Your pears lie off this straight line because they have a difference between their minor and major axes. Now, you can find a line that separates those two, and there you have your first classification system. What many people think about how big data processing works is shown in this small figure:
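This toy classifier can be sketched in a few lines of NumPy. The axis measurements and the decision threshold below are made up for illustration:

```python
import numpy as np

# Hypothetical (major_axis, minor_axis) measurements in cm.
apples = np.array([[7.0, 6.8], [8.1, 7.9], [6.5, 6.4]])  # roughly round
pears  = np.array([[9.0, 6.0], [8.5, 5.5], [9.5, 6.2]])  # elongated

def classify(feature):
    """Decide via the difference between major and minor axis.
    The 1.0 cm threshold is an arbitrary choice for this sketch."""
    major, minor = feature
    return "pear" if major - minor > 1.0 else "apple"

# Apples lie near the diagonal (major ≈ minor), pears are off it.
print([classify(f) for f in apples])  # all "apple"
print([classify(f) for f in pears])   # all "pear"
```

Any line separating the two clusters would work; normally the separating line would be learned from data rather than hand-tuned.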

“So, is this your machine learning system?”
“Yep, pour the data into this big pile of linear algebra, then collect the answers on the other side.”
“And what if the answers are wrong?”
“Just stir the pile until they start looking right!”
XKCD Machine Learning

So, what you can see in this picture is, of course, how many people think they can approach deep learning: you just pour the data in, in the end you stir a bit, and then you get the right results.

Pipeline in deep learning. Image under CC BY 4.0 from the Deep Learning Lecture.

But that’s not actually how it works. Remember, what you want to do is build a system that learns a classification. This means that from your measurement you first have to do some pre-processing, such as noise reduction. You have to get a meaningful image, then do feature extraction, and from that you can do a classification. Now, the difference in deep learning is that you put everything into a single kind of engine. So, this does the pre-processing, the feature extraction, and the classification in a single step. You just use the training data and the measurement to produce those systems. Now, this has been shown to work in a lot of applications, but as we’ve already discussed in the last video, you have to have the right data. You cannot just pour some data in and then stir until it starts looking right. You have to have a proper data set and a proper data collection, and if you don’t do that in an appropriate way, you just get nonsense.
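Schematically, the difference between the two designs can be sketched as follows; all functions and numbers here are placeholders for illustration, not a real system:

```python
import numpy as np

def classical_system(measurement):
    """Classical pipeline: each stage is hand-engineered separately."""
    denoised = measurement - measurement.mean()            # stand-in pre-processing
    features = np.array([denoised.std(), denoised.max()])  # hand-crafted features
    return int(features[0] > 1.0)                          # hand-tuned classifier

def deep_system(measurement, weights):
    """End-to-end: one trainable engine maps raw input to a decision.
    Pre-processing, feature extraction, and classification are implicit
    in the learned weights."""
    return int(measurement.flatten() @ weights > 0.0)

x = np.random.default_rng(0).normal(size=(4, 4))
w = np.zeros(16)  # weights would come from training data, not hand-tuning
print(classical_system(x), deep_system(x, w))
```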

#lecture-notes #deep-learning



## Lecture Notes in Deep Learning: Introduction — Part 5


A fast-forward to layer-wise back-propagation. Don’t worry, we will explain all the details. Image under CC BY 4.0 from the Deep Learning Lecture.

Thanks for tuning in again and welcome to deep learning! In this short video, we will look into organizational matters and conclude the introduction. Now, the module that you can obtain here at FAU consists of a total of five ECTS. This is the lecture plus the exercises. So, it’s not sufficient to just watch all of these videos; you have to pass the exercises. In the exercises, you will implement everything that we’re talking about here in Python. We’ll start from scratch, so you will implement perceptrons and neural networks up to deep learning. In the very end, we will even move ahead towards GPU implementations and large deep learning frameworks. So, this is a mandatory part; it’s not sufficient to only pass the oral exam.
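As a small taste of the from-scratch part of the exercises, here is a minimal perceptron sketch in NumPy (the actual exercise interface will look different):

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=1.0):
    """Classic perceptron learning rule for labels y in {-1, +1}.
    X has one sample per row; a bias column is appended internally."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias term
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            if yi * (xi @ w) <= 0:  # misclassified -> update
                w += lr * yi * xi
    return w

def predict(X, w):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.sign(Xb @ w)

# Linearly separable toy data: +1 above the line x1 = x0, -1 below.
X = np.array([[0.0, 1.0], [1.0, 2.0], [1.0, 0.0], [2.0, 1.0]])
y = np.array([1, 1, -1, -1])
w = train_perceptron(X, y)
print(predict(X, w))  # reproduces y on this separable set
```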

We will also implement max pooling in the exercise. Image under CC BY 4.0 from the Deep Learning Lecture.

The content of the exercise is Python. You’ll get an introduction to Python if you have never used it, because Python is one of the main languages that deep learning implementations use today. You will really develop a neural network from scratch. There will be feed-forward neural networks and convolutional neural networks. You will look into regularization techniques and how you can adjust weights such that they have specific properties. You will see how you can beat overfitting with certain regularization techniques. Of course, we will also implement recurrent networks. Later, we will use a Python deep learning framework and apply it to large-scale classification. For the exercises, you should bring basic knowledge of Python and NumPy. You should know about linear algebra, such as matrix multiplication. Image processing experience is a definite plus. Requirements for this class are, of course, pattern recognition fundamentals, and ideally you have already attended our other pattern recognition lectures. If you haven’t, you might have to consult additional references to follow this class.
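Max pooling, mentioned in the caption above, is one of the operations implemented in the exercises. It can be sketched in plain NumPy for non-overlapping 2×2 windows; the exercise version will be more general:

```python
import numpy as np

def max_pool_2x2(img):
    """Non-overlapping 2x2 max pooling on a 2D array.
    Height and width are assumed to be even for this sketch."""
    h, w = img.shape
    # Reshape into (h/2, 2, w/2, 2) blocks and take the max over each block.
    return img.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

x = np.array([[1, 2, 5, 6],
              [3, 4, 7, 8],
              [0, 0, 1, 0],
              [0, 9, 0, 1]])
print(max_pool_2x2(x))
# [[4 8]
#  [9 1]]
```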

You should be passionate about coding for this class’ exercises. Photo by Markus Spiske from Pexels.

You should bring a passion for coding, and you will have to code quite a bit, but you can also learn it during the exercises. If you have not done a lot of programming before this class, you will spend a lot of time on the exercises. But if you complete those exercises, you will be able to implement things in deep learning frameworks, and this is very good training. After this course, you will not just be able to download code from GitHub and run it on your own data:

• you will also understand the inner workings of the networks,
• how to write your own layers, and
• how to extend deep learning algorithms on a very low level.

In the exercises of the course, you will also have the opportunity to work with really big data sets. Image courtesy of Marc Aubreville. Access full video here.

So, pay attention to detail, and if you are not very used to programming, it will cost a bit of time. There will be five exercises throughout the semester. There are unit tests for all but the last exercise. These unit tests should help you with the implementations. In the last exercise, there will be a PyTorch implementation, and you will be facing a challenge: you have to solve an image recognition task in order to pass the exercise. Deadlines are announced in the respective exercise sessions, and you have to register for them in StudOn. What we’ve seen in the lecture so far is that deep learning is more and more present in daily life. So, it’s not just a technique used in research. We’ve seen it emerging into many, many different applications, from speech recognition and image processing to autonomous driving. It’s a very active area of research. If you’re taking this lecture, you have very good preparation for a research project with our lab, industry, or other partners.

More exciting things coming up in this deep learning lecture. Image under CC BY 4.0 from the Deep Learning Lecture.

So far, we looked into the perceptron and its relation to biological neurons. So, next time in deep learning, we will actually start with the next lecture block, which means we will extend the perceptron to a universal function approximator. We will look into gradient-based training algorithms for these models and then also into the efficient computation of gradients. Now, if you want to prepare for the oral exam, it’s good to think of a couple of comprehensive questions. Questions may be:

• “What are the six postulates of pattern recognition?”
• “What is the perceptron objective function?”
• “Can you name three applications successfully tackled by deep learning?”

and of course, we have a lot of further reading. You can find the links on the slides, and we will also post the links and references below this post. If you have any questions,

• you can ask tutors in your exercise,
• you can email me, or
• if you’re watching this on YouTube, you can actually use the comment function and ask your questions.

So, there are many options to get in contact, and of course we have quite a few references for these first five videos. They go by too quickly to read them all now, but you can pause the video and review them, and we will also post those references below this post. I hope you like this video and see you next time in deep learning!

#machine-learning #artificial-intelligence #introduction #lecture-notes #deep-learning


## Lecture Notes in Deep Learning: Introduction — Part 2


So, thank you very much for tuning in again and welcome to the second part of the deep learning lecture, in particular the introduction. In this second part of the introduction, I want to show you some research that we are doing here at the Pattern Recognition Lab at FAU in Germany.

Today’s cars are huge sensor systems. Image under CC BY 4.0 from the Deep Learning Lecture.

One first example that I want to highlight is a cooperation with Audi and here we are working with assisted and automated driving. We are working on smart sensors in the car. You can see that the Audi A8 today is essentially a huge sensor system that has cameras and different sensors attached. Data is processed in real-time in order to figure out things about the environment. It has functionalities like parking assistance. There are also functionalities to support you during driving and traffic jams and this is all done using sensor systems. So of course, there’s a lot of detection and segmentation tasks involved.

Online road scene analysis. Full video can be found here. Image generated using gifify.

What you can see here is an example where we show some output of what is recorded by the car. This is a frontal view, where we are actually looking at the surroundings and have to detect cars. You also have to detect, here shown in green, the free space where you can actually drive, and all of this has to be detected; many of these things are done with deep learning. Today, of course, there’s a huge challenge because we need to test the algorithms. Often this is done in simulated environments, and then a lot of data is produced in order to make them reliable. But in the very end, this has to run on the road, which means that you have to consider different periods of the year and different daylight conditions. This makes it all extremely hard. What you’ve seen in research is that many of the detection results work with nice day scenes where the sun is shining. Everything is nice, so the algorithms work really well. But the true challenge is actually to go towards rainy weather conditions, night, winter, and snow, and you still want to be able to detect not just cars but also traffic signs and landmarks. Then you analyze the scenes around you such that you have a reliable prediction for your autonomous driving system.

#lecture-notes #deep-learning


## Lecture Notes in Deep Learning: Introduction — Part 3


Thanks for tuning in to the next video of deep learning. What I want to show you in this video are a couple of limitations of deep learning. So, you may wonder: are there any limitations? Are we done yet? Aren’t we learning something here that will solve all of the problems?

Positive examples for image captioning. Image under CC BY 4.0 from the Deep Learning Lecture.

Well, of course, there are some limitations. For example, tasks like image captioning yield impressive results. You can see that the networks are able to identify the baseball player, the girl in a pink dress jumping in the air, or even people playing the guitar.

Errors in image captioning. Image under CC BY 4.0 from the Deep Learning Lecture.

So, let’s look at some errors. Here on the left, you can see that this is clearly not a baseball bat. Also, this isn’t a cat in the center image, and there are also slight errors like the one on the right-hand side: the cat on top of the suitcases isn’t black.

Clear errors in image captioning: Image under CC BY 4.0 from the Deep Learning Lecture.

Sometimes there are even plain errors, like here in the left image: there, I don’t see a horse in the middle of the road, and also in the right image there is no woman holding a teddy bear in front of a mirror.

So, the reason for this is that there are a couple of challenges, and one major challenge is training data. Deep learning applications require huge manually annotated data sets, and these are hard to obtain. Annotation is time-consuming, expensive, and often ambiguous. As you’ve already seen in the ImageNet challenge, sometimes it’s not clear which label to assign, and obviously you would have to assign a distribution of labels. Also, we see that even in the human annotations there are typical errors. To get a really good representation of the labels, you actually have to ask two or even more experts to do the entire labeling process. Then, you can find the instances where you have a very sharp distribution of labels. These are typical prototypes, while broad distributions of labels indicate images where people are not sure. If we have such problems, then we typically get a significant drop in performance. So the question is how far we can get with simulations, for example, to expand training data.

Of course, there are also challenges with trust and reliability. Verification is mandatory for high-risk applications, and regulators can be very strict about those. They really want to understand what’s happening in those high-risk systems. End-to-end learning essentially prevents identifying how the individual parts work. So, it’s very hard for regulators to tell which part does what and why the system actually works. We must admit at this point that this is still largely unsolved. It’s difficult to tell which part of the network is doing what. Modular approaches that are based on classical algorithms may be one way to solve these problems in the future.
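The idea of sharp versus broad label distributions can be made concrete: given votes from several annotators, the entropy of the empirical label distribution flags ambiguous images. The annotations below are made up for illustration:

```python
import numpy as np

def label_entropy(votes, n_classes):
    """Shannon entropy (in bits) of the empirical label distribution.
    Zero means all annotators agree; higher means more ambiguity."""
    counts = np.bincount(votes, minlength=n_classes)
    p = counts / counts.sum()
    p = p[p > 0]  # drop zero entries so log2 is well-defined
    return float(-(p * np.log2(p)).sum())

# Hypothetical votes of five annotators on two images, classes 0..2.
prototype = np.array([1, 1, 1, 1, 1])  # sharp distribution: clear case
ambiguous = np.array([0, 1, 2, 1, 0])  # broad distribution: unclear case

print(label_entropy(prototype, 3))  # 0.0
print(label_entropy(ambiguous, 3))  # more than 1 bit
```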

#lecture-notes #deep-learning


## Lecture Notes in Deep Learning: Introduction — Part 1


Welcome everybody to this semester’s deep learning lecture! As you can see, I’m not in the lecture hall. Like many of you, I am in my home office, and we have to work from home in order to stop the current pandemic. Therefore, I decided to record these lectures and make them available on the Internet such that everybody can use them freely. You will see that we made a couple of changes to this format. First of all, we reduced the length of the lectures. We no longer go for 90 minutes in a row. Instead, we decided to split the lectures into smaller parts such that you can watch them in 15 to 30 minutes in one go, then stop and then continue with the next lecture. This means that we had to introduce a couple of changes. Of course, as every semester, we also updated all of the contents such that we really present the state of the art of current research.

Deep Learning Buzzwords. Image under CC BY 4.0 from the Deep Learning Lecture.

This first lecture will be about the introduction to deep learning. We will deal with a broad variety of topics in this lecture; first and foremost, of course, deep learning. We summarized some of the buzzwords here that you may have already heard. We cover topics from supervised to unsupervised learning. Of course, we talk about neural networks, feature representation, feature learning, big data, artificial intelligence, machine learning, and representation learning, but also different tasks such as classification, segmentation, regression, and generation.

Outline of the Introduction. Image under CC BY 4.0 from the Deep Learning Lecture.

Let’s have a short look at the outline. First, we’ll start with the motivation for why we are interested in deep learning. We have seen tremendous progress over the last couple of years, so it will be very interesting to look into some applications and some breakthroughs that have been achieved. Then, in the next videos, we want to talk about machine learning and pattern recognition and how they are related to deep learning. Of course, in the first set of lectures, we also want to start from the very basics. We will talk about the perceptron, and we also have to talk about a couple of organizational matters, which you will see in video number five.

Nvidia stock market value. Image under CC BY 4.0 from the Deep Learning Lecture.

So, let’s look into the motivation and what interesting things are happening right now. First and foremost, I want to show you this little graph of the stock market value of Nvidia shares. You can see that over the last couple of years, in particular since 2016, the market value has been growing. One reason why it has increased so tremendously is that the deep learning boom started in 2012 and really took off around 2016. Many people needed additional compute hardware, and Nvidia manufactures general-purpose graphics processing units that allow arbitrary computation on their boards. In contrast to traditional hardware, which doubles its computing capability roughly every two years, graphics boards double their compute power approximately every 14 to 16 months, which means that they offer an extraordinary amount of computing power. This enables us to train really deep networks and state-of-the-art machine learning approaches.
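A quick back-of-the-envelope calculation shows how much faster this doubling rate compounds; 15 months is taken here as a representative value from the 14 to 16 month range:

```python
# Compute capability growth over 5 years under two doubling periods.
months = 5 * 12

cpu_growth = 2 ** (months / 24)  # doubling every 2 years
gpu_growth = 2 ** (months / 15)  # doubling every ~15 months

print(f"CPU-style growth: {cpu_growth:.1f}x")  # about 5.7x
print(f"GPU-style growth: {gpu_growth:.1f}x")  # 16x
```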

#deep-learning #lecture-notes