Facial Recognition on Video with Python

Introduction to LSTM Autoencoder using Keras

A simple neural network is feed-forward: information travels in only one direction, from the input layer through the hidden layers to the output layer. A Recurrent Neural Network (RNN) extends the traditional neural network by making use of sequential information; unlike a conventional network, its output depends on previous inputs. RNNs are called recurrent because they perform the same task for each element of a sequence, with the output depending on the previous computations. LSTMs, or Long Short-Term Memory networks, are a type of RNN useful for learning order dependence in sequence prediction problems. In this article, we will build a simple Long Short-Term Memory autoencoder with the help of Keras and Python.
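As a taste of what the article builds, here is a minimal sketch of an LSTM autoencoder in Keras. The toy sine-wave windows are an assumption standing in for real sequence data; the encoder compresses each window into a latent vector, which the decoder expands back into a sequence:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

timesteps, n_features = 10, 1
# toy data: 100 short sine-wave windows (illustrative stand-in for real sequences)
x = np.array([np.sin(np.linspace(i, i + 1, timesteps)) for i in range(100)])
x = x.reshape((100, timesteps, n_features))

model = keras.Sequential([
    keras.Input(shape=(timesteps, n_features)),
    layers.LSTM(16, activation="relu"),                    # encoder -> latent vector
    layers.RepeatVector(timesteps),                        # repeat latent vector per timestep
    layers.LSTM(16, activation="relu", return_sequences=True),  # decoder
    layers.TimeDistributed(layers.Dense(n_features)),      # per-step reconstruction
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, x, epochs=2, verbose=0)   # autoencoder: the target is the input itself
recon = model.predict(x, verbose=0)
```

The reconstruction error between `x` and `recon` is what such a model is typically used for, e.g. flagging anomalous sequences.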

Building Neural Networks with PyTorch in Google Colab

Combining PyTorch and Google's cloud-based Colab notebook environment can be a good solution for building neural networks with free access to GPUs. This article demonstrates how to do just that.

All About Artificial Neural Networks – What, Why, and How?

Artificial Neural Network is a learning system that seeks to emulate the human brain. Here is an overview of ANN, critical to the discipline of Artificial Intelligence.

Getting Started with PyTorch

A practical walkthrough on how to use PyTorch for data analysis and inference. In this article, I will walk you through a practical example to get you started with PyTorch. All the code used throughout this article (and more!) is available on my GitHub and Kaggle accounts. For this example, we are going to use the Kaggle Rain in Australia dataset to predict whether it will rain tomorrow.
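The article works through the Kaggle dataset step by step. As a minimal sketch of the same idea, here is a small feed-forward binary classifier in PyTorch; the synthetic features and labels below are an assumption standing in for the actual weather data:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
# synthetic stand-in for tabular weather features (NOT the actual Kaggle data)
X = torch.randn(200, 5)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).float().unsqueeze(1)  # toy binary target

model = nn.Sequential(nn.Linear(5, 16), nn.ReLU(), nn.Linear(16, 1))
loss_fn = nn.BCEWithLogitsLoss()   # combines sigmoid + binary cross-entropy
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for _ in range(100):               # full-batch training loop
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

acc = ((model(X) > 0).float() == y).float().mean().item()
```

The same loop structure (forward pass, loss, `backward()`, optimizer step) carries over unchanged to the real dataset once it is loaded into tensors.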

Challenges of Human Pose Estimation in AI-Powered Fitness Apps

In this article, the author discusses a human pose estimation solution powered by AI technologies and the challenges faced by online fitness apps, which use pose estimation to predict the position of the human body from an image or a video containing a person.

Understanding Transformers, the Data Science Way

Read this accessible and conversational article about understanding transformers the data science way: by asking a lot of questions.

AI Training Method Exceeds GPT-3 Performance with 99.9% Fewer Parameters

A team of scientists at LMU Munich have developed Pattern-Exploiting Training (PET), a deep-learning training technique for natural language processing (NLP) models.

Deep Learning Explained in Layman's Terms

In this post, you will learn about deep learning through a simple explanation in layman's terms, with examples.

Looking Inside The Blackbox: How To Trick A Neural Network

In this tutorial, I'll show you how to use gradient ascent to figure out how to misclassify an input.
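The technique can be sketched in a few lines of PyTorch: pick a target class and take gradient-ascent steps on its log-probability with respect to the input. The tiny untrained network below is an assumption standing in for a real classifier:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
# tiny stand-in classifier (untrained; purely illustrative)
net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))

x = torch.randn(1, 4)
orig = net(x).argmax(dim=1).item()   # class the network currently predicts
target = (orig + 1) % 3              # some other class we want to force

def target_logp(inp):
    # log-probability the network assigns to the target class
    return torch.log_softmax(net(inp), dim=1)[0, target]

x_adv = x.clone().requires_grad_(True)
before = target_logp(x_adv).item()
for _ in range(200):
    logp = target_logp(x_adv)
    grad, = torch.autograd.grad(logp, x_adv)   # d log p(target) / d input
    # gradient ASCENT: nudge the input toward the target class
    x_adv = (x_adv + 0.05 * grad).detach().requires_grad_(True)
after = target_logp(x_adv).item()
```

Note the gradient is taken with respect to the input, not the weights; the network itself is frozen while the input drifts toward the target class.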

The Most Complete Guide to PyTorch for Data Scientists

All the PyTorch functionality you will ever need while doing deep learning, from an experimentation/research perspective.

Facebook Releases AI Model for Protein Sequence Processing

A team of scientists at Facebook AI Research have released a deep-learning model for processing protein sequence data.

Why neural networks struggle with the Game of Life

This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence.

Implementing a Deep Learning Library from Scratch in Python

A beginner's guide to understanding the fundamental building blocks of deep learning platforms.

Can Neural Networks Show Imagination? DeepMind Thinks They Can

DeepMind has done some of the most relevant work in the area of simulating imagination in deep learning systems.

Autograd: The Best Machine Learning Library You’re Not Using?

If there is a Python library that is emblematic of the simplicity, flexibility, and utility of differentiable programming it has to be Autograd.
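To illustrate why differentiable programming can be this simple, here is a toy automatic-differentiation sketch in plain Python. It uses forward-mode dual numbers for brevity; the real Autograd library uses reverse mode and differentiates through most of NumPy, so this is only a conceptual sketch:

```python
# minimal forward-mode autodiff via dual numbers: each value carries
# (value, derivative) and arithmetic propagates both by the chain rule
class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)
    __rmul__ = __mul__

def grad(f):
    # derivative of a scalar function f at a point x
    return lambda x: f(Dual(x, 1.0)).dot

f = lambda x: 3 * x * x + 2 * x + 1   # f'(x) = 6x + 2
df = grad(f)
```

Calling `df(2.0)` propagates the seed derivative through every operation, recovering `f'(2) = 14` without any symbolic manipulation.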

Salesforce Releases Photon Natural Language Interface for Databases

A team of scientists from Salesforce Research and the Chinese University of Hong Kong have released Photon, a natural language interface to databases (NLIDB). The team used deep learning to construct a parser that achieves 63% accuracy on a common benchmark, plus an error-detecting module that prompts users to clarify ambiguous questions.

PyTorch-Ignite: training and evaluating neural networks flexibly and transparently

We will introduce the basic concepts of PyTorch-Ignite through the training and evaluation of an MNIST classifier as a beginner application case. We assume that the reader is familiar with PyTorch.

Artificial Intelligence Can Create Sound Tracks for Silent Videos

Researchers Ghose and Prevost created a deep learning algorithm which, given a silent video, can generate a realistic-sounding, synchronized soundtrack.

What are deepfakes?

This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI.