I’ve wanted to learn more about neural nets, and in particular TensorFlow, for a while now. I recently had a bit more time to dedicate to it, so I began to write about what I had learned and some of the basic examples I had run through.
What is a Neural Network? The advantages and disadvantages of the standard “vanilla” neural network versus recurrent neural networks (RNNs). A neural network is a type of machine learning model loosely patterned after the human brain.
In this article, I will discuss what I think are the three most important architectures to be aware of for NLP.
A Recurrent Neural Network (RNN) is a well-known supervised deep learning architecture; other commonly used deep learning networks are Convolutional Neural Networks and plain Artificial Neural Networks. The broad goal of deep learning is to replicate aspects of how the brain functions in a machine, and as a result, loosely speaking, each network architecture mirrors some part of the brain. Text Classification with RNN
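The recurrence at the heart of an RNN can be sketched in a few lines of plain Python; the scalar weights below are made-up illustrative values, not taken from any real model.

```python
import math

def rnn_step(x_t, h_prev, w_x, w_h, b):
    """One step of a simple (Elman) RNN with a scalar input and hidden
    state: h_t = tanh(w_x * x_t + w_h * h_prev + b)."""
    return math.tanh(w_x * x_t + w_h * h_prev + b)

def run_rnn(xs, w_x=0.5, w_h=0.8, b=0.0):
    """Scan the recurrence over a whole sequence, starting from h_0 = 0.
    The final hidden state summarizes the sequence and is what a text
    classifier would feed into its output layer."""
    h = 0.0
    for x_t in xs:
        h = rnn_step(x_t, h, w_x, w_h, b)
    return h

final_state = run_rnn([1.0, 0.0, 1.0])
```

The key point is that the same weights are reused at every timestep, which is what lets an RNN handle sequences of any length.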
Image capture takes a snapshot in time of a person, place, or object. Many everyday devices include cameras for taking pictures, and when a picture is taken there is often recognition and automatic correction. Taking that further, Optical Character Recognition (OCR) can take a picture of text and produce a usable file that matches the original document. Building a description of a picture, that is, understanding its content, is a complex task. OCR addresses part of this: extracting knowledge from images. Text Extraction in Python with Neural Networks
What is AI? What is image recognition? What does a CNN do? How is a CNN implemented? How many images do I need? What exactly is a CNN? How do I train and test on a dataset? These might be your questions when you first approach image recognition or CNNs. In this tutorial, we'll discuss Things to Know About Convolutional Neural Networks (CNN)
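To give a taste of the "what does a CNN do" question: at its core, the convolution a CNN applies is just a dot product slid across the image. A minimal pure-Python sketch (the tiny 2x2 edge-detector kernel is a made-up example):

```python
def conv2d(image, kernel):
    """'Valid' 2-D convolution (really cross-correlation, as most deep
    learning libraries implement it): at each position, multiply the
    kernel elementwise with the patch under it and sum."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            s = sum(image[i + a][j + b] * kernel[a][b]
                    for a in range(kh) for b in range(kw))
            row.append(s)
        out.append(row)
    return out

# A vertical-edge detector on a tiny image: left half dark, right half bright.
img = [[0, 0, 1, 1]] * 4
edge = [[1, -1]] * 2          # hypothetical 2x2 kernel
features = conv2d(img, edge)  # responds only at the dark/bright boundary
```

A real CNN learns many such kernels from data instead of hand-writing them, and stacks convolution layers with pooling and nonlinearities.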
We will loop through batches of data points and let TensorFlow update the slope and y-intercept. Instead of generated data, we will use the iris dataset that is built into scikit-learn. Specifically, we will find an optimal line through data points where the x-value is the petal width and the y-value is the sepal length. We choose these two because there appears to be a linear relationship between them, as we will see in the graphs at the end. We will also talk more about the effects of different loss functions in the next section, but for now we will use the L2 loss function. Learning The TensorFlow Way of Linear Regression
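The update TensorFlow's optimizer would perform on the slope and intercept can be sketched dependency-free. The data below is a noise-free synthetic stand-in for the iris columns, and the learning rate and epoch count are arbitrary illustrative choices:

```python
def fit_line(xs, ys, lr=0.05, epochs=2000):
    """Minimize the L2 loss, mean((m*x + c - y)^2), by gradient descent;
    this is the same kind of update a TensorFlow optimizer applies on
    each batch."""
    m, c = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        grad_m = sum(2 * (m * x + c - y) * x for x, y in zip(xs, ys)) / n
        grad_c = sum(2 * (m * x + c - y) for x, y in zip(xs, ys)) / n
        m -= lr * grad_m
        c -= lr * grad_c
    return m, c

# Synthetic stand-in for the iris columns (petal width -> sepal length):
# points exactly on y = 2x + 1, so the fit should recover roughly (2, 1).
xs = [0.0, 1.0, 2.0, 3.0]
ys = [2 * x + 1 for x in xs]
m, c = fit_line(xs, ys)
```

With the real iris data the recovered slope and intercept would of course differ, but the loop structure is the same.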
The Keras deep learning library makes it fast and easy to develop neural network models. There are two ways to build a model in Keras: Sequential and Functional. Keras Model Sequential API vs Functional API
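As a rough illustration of the two styles, here is the same small model written both ways (the layer sizes are arbitrary choices for the example):

```python
import tensorflow as tf

# Sequential API: a plain linear stack of layers, declared as a list.
seq_model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1),
])

# Functional API: layers are called on tensors like functions, which
# makes branches, shared layers, and multiple inputs/outputs possible.
inputs = tf.keras.Input(shape=(4,))
hidden = tf.keras.layers.Dense(8, activation="relu")(inputs)
outputs = tf.keras.layers.Dense(1)(hidden)
fn_model = tf.keras.Model(inputs=inputs, outputs=outputs)
```

For a straight stack like this the two are equivalent; the Functional API only pays off once the architecture stops being a single chain.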
Neural networks are one type of deep learning, which is itself a branch of machine learning…
So… data engineering again! Last week I participated in a Kaggle competition on Mechanisms of Action Prediction. (The competition is still ongoing, so try it if you want!) Basically, it asks you to train an algorithm to classify drugs based on their biological activity, and I want to share some quite useful (and simple!) techniques for improving accuracy on tabular data that I learned in this competition. Hope it helps! Some useful ways to engineer your data for better performance! Simple Data Engineering to Improve Your Machine Learning Results
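Two of the simplest tabular tricks of this kind can be sketched in plain Python; the example values are made up, and real pipelines would use pandas or scikit-learn equivalents:

```python
import math

def standardize(col):
    """Zero-mean, unit-variance scaling: a near-universal first step for
    numeric tabular features."""
    mean = sum(col) / len(col)
    var = sum((v - mean) ** 2 for v in col) / len(col)
    std = math.sqrt(var) or 1.0  # guard against a constant column
    return [(v - mean) / std for v in col]

def log1p_transform(col):
    """log(1 + x) compresses heavy-tailed non-negative features such as
    counts, so a few huge values stop dominating the model."""
    return [math.log1p(v) for v in col]

scaled = standardize([1.0, 2.0, 3.0, 4.0])
```

Neither transform adds information, but both can make the optimization landscape much friendlier for the model.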
In this article, I evaluate the main approaches to weight initialization and the current best practices.
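As a taste of what such schemes look like, here is a minimal sketch of the Glorot and He formulas in plain Python (the fan sizes in the usage line are arbitrary):

```python
import math
import random

def init_matrix(fan_in, fan_out, scheme="glorot", rng=random):
    """Sample a fan_out x fan_in weight matrix from N(0, std^2), where
    std follows the named scheme:
      glorot: sqrt(2 / (fan_in + fan_out))  (common default for tanh/sigmoid)
      he:     sqrt(2 / fan_in)              (standard choice for ReLU)"""
    if scheme == "glorot":
        std = math.sqrt(2.0 / (fan_in + fan_out))
    elif scheme == "he":
        std = math.sqrt(2.0 / fan_in)
    else:
        raise ValueError(scheme)
    return [[rng.gauss(0.0, std) for _ in range(fan_in)]
            for _ in range(fan_out)]

random.seed(0)
w = init_matrix(fan_in=256, fan_out=128, scheme="he")
```

The point of both formulas is the same: scale the weights so that activation and gradient variances stay roughly constant from layer to layer.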
In this article, I’m using Keras (https://keras.io/) for exploring layer implementation and source code, but in general, most types of layers are quite generic and the main principles don’t depend that much on the actual library implementing them.
In this article, I will attempt to explain how this algorithm works, and then build a simple neural network from scratch and test it on the regression problem from my previous post.
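A from-scratch network of that kind can be sketched as follows; the architecture (one tanh hidden layer), the toy target y = x², and all hyperparameters are illustrative assumptions, not the setup from the post:

```python
import math
import random

def train(xs, ys, hidden=8, lr=0.05, epochs=2000, seed=0):
    """A 1-input, 1-output net with one tanh hidden layer, trained by
    full-batch gradient descent on the mean squared error.
    Returns the per-epoch loss history."""
    rng = random.Random(seed)
    w1 = [rng.uniform(-1, 1) for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [rng.uniform(-1, 1) for _ in range(hidden)]
    b2 = 0.0
    n = len(xs)
    losses = []
    for _ in range(epochs):
        g_w1 = [0.0] * hidden; g_b1 = [0.0] * hidden
        g_w2 = [0.0] * hidden; g_b2 = 0.0
        loss = 0.0
        for x, y in zip(xs, ys):
            # Forward pass.
            h = [math.tanh(w1[j] * x + b1[j]) for j in range(hidden)]
            pred = sum(w2[j] * h[j] for j in range(hidden)) + b2
            err = pred - y
            loss += err * err / n
            # Backward pass (hand-derived gradients of the MSE).
            for j in range(hidden):
                g_w2[j] += 2 * err * h[j] / n
                dh = 2 * err * w2[j] * (1 - h[j] * h[j]) / n
                g_w1[j] += dh * x
                g_b1[j] += dh
            g_b2 += 2 * err / n
        # Gradient descent step.
        for j in range(hidden):
            w1[j] -= lr * g_w1[j]; b1[j] -= lr * g_b1[j]; w2[j] -= lr * g_w2[j]
        b2 -= lr * g_b2
        losses.append(loss)
    return losses

xs = [i / 4 - 1 for i in range(9)]  # 9 points on [-1, 1]
ys = [x * x for x in xs]            # toy target curve y = x^2
losses = train(xs, ys)
```

Everything a framework would automate, the forward pass, backpropagation, and the parameter update, is written out explicitly here.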
Significant Wave Prediction Using Neural Networks: a Test Drive Using PyTorch and a Comparison of Models
Neurotransmitters, learning, memory, pathfinding, and how all of that connects to AI algorithms. Today you’ll learn about different kinds of neurotransmitters, regulation in our bodies, experiments on frogs, and axons’ ability to regrow.
How Transfer Learning works. “Transfer Learning will be the next driver of ML success” — Andrew Ng. Transfer Learning is the process of taking a pre-trained neural network and adapting it to a new, different dataset by transferring or repurposing the learned features.
In this article, we will implement a deep learning model using the CIFAR-10 dataset, which is commonly used in deep learning for testing image-classification models.
In this post, we will cover the differences between a Fully connected neural network and a Convolutional neural network. We will focus on understanding the differences in terms of the model architecture and results obtained on the MNIST dataset.
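One concrete architectural difference is parameter count, which simple arithmetic makes vivid; the layer sizes below (128 hidden units, 32 filters) are illustrative choices, not the networks from the post:

```python
def dense_params(n_in, n_out):
    """Weights plus biases for a fully connected layer."""
    return n_in * n_out + n_out

def conv_params(k, c_in, c_out):
    """Weights plus biases for a k x k convolution: the kernel is shared
    across every spatial position, so the image size never appears."""
    return k * k * c_in * c_out + c_out

# MNIST input flattened: 28*28 = 784 pixels into 128 hidden units ...
fc = dense_params(784, 128)
# ... versus 32 filters of size 3x3 over the same single-channel image.
conv = conv_params(3, 1, 32)
```

The fully connected layer needs over a hundred thousand parameters where the convolutional layer needs a few hundred, because convolution shares weights and exploits the spatial structure of the image.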
In this blog post, I’m going to present to you the ResNet architecture and summarize its paper, “Deep Residual Learning for Image Recognition” (PDF). I’ll explain where it comes from and the ideas behind this architecture, so let’s get into it!
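The core idea of ResNet can be sketched in a few lines; here f stands in for the block's small stack of layers, abstracted as an arbitrary function:

```python
def relu(v):
    return [max(0.0, x) for x in v]

def residual_block(x, f):
    """A residual block computes relu(f(x) + x): the layer stack f only
    has to learn the *residual* on top of the input, and the identity
    shortcut lets gradients flow straight through deep stacks."""
    fx = f(x)
    return relu([a + b for a, b in zip(fx, x)])

# If the residual branch outputs zero, the block reduces to the identity
# (after relu), which is why very deep stacks of these stay trainable.
out = residual_block([1.0, 2.0, 3.0], lambda v: [0.0] * len(v))
```

In the real architecture f is typically two or three convolution layers with batch normalization, but the shortcut-plus-addition structure is exactly this.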
Foundational concepts in the fields of Machine Learning and Deep Neural Networks. Now we will learn how to use one of the most exciting tools in self-driving car development: deep neural networks. A deep neural network is just a term that describes a big multi-layer neural network.