The field of Machine Learning is huge, and it is easy to get overwhelmed by the amount of information out there. To keep you from getting lost, the following list helps you estimate where you stand. It outlines the vast Deep Learning space without emphasizing particular resources. Where appropriate, I have included clues to help you orient yourself.
Since the list has gotten rather long, I have included an excerpt above; the full list is at the bottom of this post.
The entry level is split into five categories:
At the entry level, the datasets used are small; often, they fit easily into main memory. If they don’t already come pre-processed, applying the necessary operations takes only a few lines of code. You’ll mainly work within the major domains: Audio, Image, Time-series, and Text.
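To give a feel for how little code such pre-processing takes, here is a minimal sketch of min-max normalization on a made-up list of sensor readings (the data and the choice of normalization are illustrative assumptions, not from a specific dataset):

```python
# Hypothetical in-memory dataset: a handful of sensor readings (time-series domain).
samples = [3.0, 7.5, 1.5, 9.0, 6.0]

# Min-max normalization, a common pre-processing step, in a few lines:
lo, hi = min(samples), max(samples)
scaled = [(x - lo) / (hi - lo) for x in samples]

print(scaled)  # every value now lies in the range [0, 1]
```

Real libraries wrap steps like this in one call, but at this stage the data is small enough that writing it yourself is instructive.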
Before diving into the large field of Deep Learning, it’s a good idea to study the basic techniques. These include regression, clustering, and SVMs. Of the listed algorithms, only the SVM might be a bit trickier. Don’t let that overwhelm you: give it a try, and then move on.
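As a taste of the simplest of these techniques, here is a sketch of ordinary least squares for a one-variable linear regression, using the closed-form solution on a tiny, made-up dataset (the numbers are chosen so the fit is exact):

```python
# Tiny illustrative dataset, generated from y = 2x + 1.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form least squares: slope = covariance(x, y) / variance(x)
w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - w * mean_x

print(w, b)  # recovers the generating line: w = 2.0, b = 1.0
```

Clustering and SVMs don’t have closed-form solutions like this, which is part of why the SVM feels trickier, but the workflow of fitting parameters to data is the same.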
No Deep Learning without its most important ingredient: neural networks in all their variants, including GANs, AEs, VAEs, Transformers, DNNs, CNNs, RNNs, and many more. But there’s no need to cover everything yet. At this stage, it’s sufficient to look at the last three: DNNs, CNNs, and RNNs.
No Deep Learning without neural networks, and no neural networks without (mathematical) theory. You can begin by getting to know mathematical notation. It’s a bit scary at first, but you’ll soon come to embrace its concise brevity. Once you grasp it, look at matrix operations, a central concept behind neural networks.
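To make the matrix-operations point concrete, here is a small sketch of a matrix product in plain Python; this is the operation at the heart of a neural network’s dense layer, where an input vector is multiplied by a weight matrix (the numbers here are arbitrary):

```python
# Two small matrices for illustration.
A = [[1, 2],
     [3, 4]]
B = [[5, 6],
     [7, 8]]

# C[i][j] is the inner product of row i of A with column j of B.
C = [[sum(A[i][k] * B[k][j] for k in range(len(B)))
      for j in range(len(B[0]))]
     for i in range(len(A))]

print(C)  # [[19, 22], [43, 50]]
```

In practice a library like NumPy does this in one call, but writing it out once makes the notation A·B much less abstract.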
And with this knowledge, you can then proceed to the convolution operations, another central concept.
Put simply, you slide one matrix over another and compute the inner product between the overlapping areas. There are many variants; keep learning, and you’ll naturally come to use them, too!
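The sliding-window description above can be sketched in a few lines. This is a minimal "valid" convolution on made-up numbers: the kernel visits every position where it fully overlaps the input, and each output entry is the inner product of the kernel with the patch beneath it (as is common in Deep Learning frameworks, the kernel is not flipped):

```python
# Illustrative 3x3 "image" and 2x2 kernel.
image = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]
kernel = [[1, 0],
          [0, 1]]

kh, kw = len(kernel), len(kernel[0])

# Slide the kernel over every fully-overlapping position and take
# the inner product of kernel and patch at each stop.
out = [[sum(kernel[i][j] * image[r + i][c + j]
            for i in range(kh) for j in range(kw))
        for c in range(len(image[0]) - kw + 1)]
       for r in range(len(image) - kh + 1)]

print(out)  # [[6, 8], [12, 14]]
```

The variants you will meet later (stride, padding, dilation) all change how the window moves, not the inner product at its core.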
#data-science #deep-learning #machine-learning