Deep Learning

Deep learning (also known as deep structured learning or hierarchical learning) is part of a broader family of machine learning methods based on learning data representations, as opposed to task-specific algorithms. Learning can be supervised, semi-supervised or unsupervised...

Named Entity Recognition for Turkish with BERT

Fine-tune a BERT model for the NER task using the HuggingFace Trainer class. In this article, I assume that readers already have background knowledge of the following subjects: Named Entity Recognition (NER), Bidirectional Encoder Representations from Transformers (BERT), and the HuggingFace (transformers) Python library.

Introduction to Pytorch and Tensors

I’ve taken the six-week course “Deep Learning with PyTorch: Zero to GANs” taught by Aakash N S. PyTorch is an open-source machine learning and deep learning framework developed by Facebook. It is designed for large-scale tasks such as image analysis, including object detection, recognition, and classification, among others. It is written in Python and C++, and can be used alongside other frameworks, such as Keras, to implement complex algorithms.
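As a quick taste of what the article covers, here is a minimal, illustrative sketch of PyTorch tensors (assuming torch is installed; the values are made up for demonstration):

```python
import torch

# Create tensors from Python data and with factory functions
x = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
ones = torch.ones(2, 2)

# Element-wise arithmetic and matrix multiplication
y = x + ones       # element-wise add
z = x @ ones       # matrix multiply

# Tensors support autograd when gradients are requested
w = torch.tensor(3.0, requires_grad=True)
loss = (w * 2) ** 2   # f(w) = 4w^2, so df/dw = 8w
loss.backward()
print(y.shape, z.shape, w.grad)  # w.grad is 8 * 3 = 24
```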

NAF: Normalized Advantage Function — DQN for Continuous Control Tasks

Q-learning and the difficulties of continuous action spaces. In standard Q-learning, the agent has a discrete set of possible actions: each action is either taken or not taken. This certainly limits the scope of applicability, because a wide range of problems, arguably the majority, involve continuous action spaces!
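To make the limitation concrete, here is a tiny tabular Q-learning sketch (sizes and rewards are invented for illustration). The greedy step is an argmax over a finite action set, which is exactly what breaks down when actions are continuous:

```python
import numpy as np

n_states, n_actions = 5, 3           # finite, discrete action set
Q = np.zeros((n_states, n_actions))  # tabular Q-function
alpha, gamma = 0.1, 0.9

def greedy_action(state):
    # Only possible because we can enumerate every action
    return int(np.argmax(Q[state]))

def q_update(s, a, reward, s_next):
    # Standard Q-learning target: r + gamma * max_a' Q(s', a')
    target = reward + gamma * Q[s_next].max()
    Q[s, a] += alpha * (target - Q[s, a])

q_update(0, 1, reward=1.0, s_next=2)
print(greedy_action(0))  # action 1 now has the highest value
```

With a continuous action space, the `max` over actions becomes an optimization problem of its own — which is the gap NAF addresses.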

Getting started with PyTorch

A practical walkthrough on how to use PyTorch for data analysis and inference. PyTorch is currently one of the fastest-growing Python frameworks for deep learning.

Firing up the neurons: All about Activation Functions

Everything you need to know about activation functions in deep learning! I’ll try to answer all such questions below. Still, if you have any other doubts related to activation functions, post them in the comment section.
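For reference, here is a minimal NumPy sketch of three of the most common activation functions the article discusses (the inputs are arbitrary sample values):

```python
import numpy as np

def sigmoid(x):
    # Squashes inputs into (0, 1); historically popular, saturates for large |x|
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    # max(0, x): the default choice in most modern deep networks
    return np.maximum(0.0, x)

def tanh(x):
    # Zero-centred squashing into (-1, 1)
    return np.tanh(x)

x = np.array([-2.0, 0.0, 2.0])
print(sigmoid(x), relu(x), tanh(x))
```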

Volumetric Medical Image Segmentation with Vox2Vox

How to implement a 3D volumetric generative adversarial network for CT/MRI segmentation, assuming familiarity with generative adversarial networks (GANs).

Encoder-Decoder Model for Multistep time series forecasting

Learn how to use an encoder-decoder model for multi-step time series forecasting.
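As a rough illustration of the architecture (not the article’s exact model — the layer sizes, GRU choice, and class name here are all invented for the sketch), an encoder summarizes the history into a hidden state, and a decoder rolls forward one step at a time, feeding each prediction back in:

```python
import torch
import torch.nn as nn

class Seq2SeqForecaster(nn.Module):
    """Minimal GRU encoder-decoder for multi-step forecasting (illustrative)."""
    def __init__(self, hidden=32, horizon=5):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.decoder = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        # x: (batch, history_len, 1) -> summary hidden state h
        _, h = self.encoder(x)
        step = x[:, -1:, :]               # seed the decoder with the last observation
        outputs = []
        for _ in range(self.horizon):     # roll the decoder forward step by step
            out, h = self.decoder(step, h)
            step = self.head(out)         # each prediction feeds back as the next input
            outputs.append(step)
        return torch.cat(outputs, dim=1)  # (batch, horizon, 1)

model = Seq2SeqForecaster()
history = torch.randn(8, 20, 1)           # batch of 8 series, 20 past steps each
forecast = model(history)
print(forecast.shape)  # (8, 5, 1): 5 future steps per series
```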

Understanding Gradient Descent and Adam Optimization

An intuitive view of Adam and why it is so commonly used in deep learning. It is remarkable how much artificial intelligence has influenced our daily lives over the past decade.
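To make the intuition concrete, here is a plain-NumPy sketch of the Adam update rule applied to a toy quadratic (the function, learning rate, and step count are chosen purely for illustration):

```python
import numpy as np

def adam_minimize(grad_fn, x0, lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8, steps=500):
    """Adam: adaptive per-parameter steps from first/second moment estimates."""
    x = np.asarray(x0, dtype=float)
    m = np.zeros_like(x)  # first moment (running mean of gradients)
    v = np.zeros_like(x)  # second moment (running mean of squared gradients)
    for t in range(1, steps + 1):
        g = grad_fn(x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g ** 2
        m_hat = m / (1 - beta1 ** t)   # bias correction for the warm-up phase
        v_hat = v / (1 - beta2 ** t)
        x -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2(x - 3)
x_min = adam_minimize(lambda x: 2 * (x - 3), x0=[0.0])
print(x_min)  # ends up close to the minimum at 3
```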

Deploy models and create custom handlers in Torchserve 🚀

Let’s put a model into production. Recently, PyTorch introduced its new production framework for properly serving models, called TorchServe. So, without further ado, let’s go over today’s roadmap.

From LeNet to EfficientNet: The evolution of CNNs

An easy-to-follow journey through mainstream CNN variations and novelties. Convolutional Neural Networks: The building blocks

Artificial Intelligence, Machine Learning, and Deep Learning. What’s the Real Difference?

The twenty-first century brought tremendous technological advancement that we could not have dreamed of a couple of decades earlier. Today, people benefit from Google’s AI-powered predictions, ride-sharing apps such as Uber and Lyft, commercial flights with AI autopilots, and everyday music recommender systems — all of which involve artificial intelligence.

Support Vector Regression and Its Mathematical Implementation

What is Support Vector Regression (SVR)? Support Vector Regression (SVR) is quite different from other regression models.
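Much of what makes SVR different is its epsilon-insensitive loss: errors inside a tube of width epsilon around the prediction cost nothing, and only points outside the tube (the support vectors) contribute. A minimal NumPy sketch, with invented sample values:

```python
import numpy as np

def epsilon_insensitive_loss(y_true, y_pred, epsilon=0.1):
    # Residuals smaller than epsilon are ignored; only points outside
    # the epsilon-tube contribute to the loss (these become support vectors).
    residual = np.abs(y_true - y_pred)
    return np.maximum(0.0, residual - epsilon)

y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.05, 2.5, 3.0])
print(epsilon_insensitive_loss(y_true, y_pred))  # [0.  0.4  0.]
```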

Neural Networks and the Universal Approximation Theorem

And the boom of deep neural networks in recent times. The concept of neural networks has been around for a few decades. Why did it take so long for them to take off?

Neural Networks Part 1: Inside a Single Neuron

The perceptron, or single neuron, is the fundamental building block of a neural network. The idea of a neuron is basic but essential. Let’s start by understanding the forward propagation of information through a single neuron.
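The forward pass the article describes can be sketched in a few lines of NumPy: a weighted sum of the inputs plus a bias, passed through an activation (the weights and inputs below are arbitrary illustrative values):

```python
import numpy as np

def neuron_forward(x, w, b):
    # Forward propagation through a single neuron:
    # weighted sum of inputs plus bias, then a sigmoid activation.
    z = np.dot(w, x) + b
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([1.0, 0.5])   # inputs
w = np.array([0.4, -0.6])  # one weight per input
b = 0.1                    # bias
print(neuron_forward(x, w, b))  # sigmoid(0.2), roughly 0.55
```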

Interview with a Head of AI: Pawel Godula

A few weeks ago I had a chance to interview an amazing person and total rockstar when it comes to modeling and understanding customer data.

Fake News Classification Using LSTM And Word Embedding layers in Keras

Here we will use word embeddings and the long short-term memory (LSTM) technique for fake news classification.

Hypothesis Testing | Inferential Statistics

Hypothesis testing allows statisticians and data scientists to make decisions about the real world based on the results of their statistical analysis. This is done by quantifying the probability that the results are due to error; to explain how to calculate whether a statistic represents the real-world data, we first need to introduce some probability terminology.
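As a self-contained illustration of the decision procedure, here is a one-sample z-test in plain Python (the sample numbers are invented for the example):

```python
import math

def one_sample_z_test(sample_mean, pop_mean, pop_std, n):
    # z statistic: how many standard errors the sample mean lies
    # from the hypothesized population mean.
    z = (sample_mean - pop_mean) / (pop_std / math.sqrt(n))
    # Two-sided p-value from the standard normal CDF (via erf).
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

z, p = one_sample_z_test(sample_mean=103, pop_mean=100, pop_std=15, n=100)
print(round(z, 2), round(p, 4))  # z = 2.0, p ~ 0.0455 -> reject H0 at alpha = 0.05
```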

Tensorflow Extended, ML Metadata and Apache Beam on the Cloud

A practical and self-contained example using GCP Dataflow. The fully end-to-end example that TensorFlow Extended provides by running tfx template copy taxi $target-dir produces 17 files scattered across 5 directories. If you are looking for a smaller, simpler, self-contained example that actually runs on the cloud rather than locally, this is what you are looking for. Cloud service setup is also covered here.

GPT-3: The New Mighty Language Model from OpenAI

Pushing deep learning to the limit with 175B parameters. OpenAI recently released a pre-print describing its new mighty language model, GPT-3. It’s a much bigger and better version of its predecessor, GPT-2.

Is bigger also smarter? — OpenAI releases GPT-3 language model

The race for larger language models is entering the next round. Progress in NLP applications is driven by ever-larger language models built from neural networks using the Transformer architecture.