The Numpy, Scipy, Pandas, and Matplotlib stack: prep for deep learning, machine learning, and artificial intelligence

Welcome! This is Deep Learning, Machine Learning, and Data Science Prerequisites: The Numpy Stack in Python. One concern I hear a lot is that people want to learn deep learning and data science, so they take these courses, but they get left behind because they don’t know enough about the Numpy stack to turn those concepts into code.
This course is designed to remove that obstacle - to show you how to do things in the Numpy stack that are frequently needed in deep learning and data science.
So what are those things?
Numpy. This forms the basis for everything else. The central object in Numpy is the Numpy array, on which you can do various operations.
The key is that a Numpy array isn’t just a regular array like you’d see in a language like Java or C++; it behaves like a mathematical object such as a vector or a matrix.
That means you can do vector and matrix operations like addition, subtraction, and multiplication.
The most important aspect of Numpy arrays is that they are optimized for speed. So we’re going to do a demo where I prove to you that using a Numpy vectorized operation is faster than using a Python list.
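As a rough sketch of that kind of demo (the array size here is arbitrary), summing a million numbers both ways shows the vectorized version winning by a wide margin:

```python
import time
import numpy as np

n = 1_000_000
py_list = list(range(n))
np_array = np.arange(n)

# Sum with plain Python
t0 = time.perf_counter()
total_list = sum(py_list)
t_list = time.perf_counter() - t0

# Sum with a vectorized NumPy operation
t0 = time.perf_counter()
total_array = np_array.sum()
t_array = time.perf_counter() - t0

print(f"list: {t_list:.4f}s  numpy: {t_array:.4f}s")
```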
Then we’ll look at some more complicated matrix operations, like products, inverses, determinants, and solving linear systems.
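For a taste of those matrix operations, here is a minimal sketch (the matrix and vector values are made up for illustration):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

product = A @ A             # matrix product
inverse = np.linalg.inv(A)  # matrix inverse
det = np.linalg.det(A)      # determinant
x = np.linalg.solve(A, b)   # solve the linear system A x = b
```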
Pandas. Pandas is great because it does a lot of things under the hood, which makes your life easier because you then don’t need to code those things manually.
Pandas makes working with datasets a lot like R, if you’re familiar with R.
The central object in R and Pandas is the DataFrame.
We’ll look at how much easier it is to load a dataset using Pandas vs. trying to do it manually.
Then we’ll look at some dataframe operations, like filtering by column, filtering by row, the apply function, and joins, which look a lot like SQL joins.
So if you have an SQL background and you like working with tables then Pandas will be a great next thing to learn about.
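As a tiny sketch of those DataFrame operations (the data here is invented purely for illustration):

```python
import pandas as pd

users = pd.DataFrame({
    "user_id": [1, 2, 3],
    "age": [25, 32, 47],
})
purchases = pd.DataFrame({
    "user_id": [1, 1, 3],
    "amount": [9.99, 4.50, 20.00],
})

# Filtering by column (select a column) and by row (boolean mask)
ages = users["age"]
over_30 = users[users["age"] > 30]

# apply: run a function over a column
users["age_next_year"] = users["age"].apply(lambda a: a + 1)

# merge: works a lot like a SQL JOIN
joined = users.merge(purchases, on="user_id", how="inner")
```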
Since Pandas teaches us how to load data, the next step will be looking at the data. For that we will use Matplotlib.
In this section we’ll go over some common plots, namely the line chart, scatter plot, and histogram.
We’ll also look at how to show images using Matplotlib.
99% of the time, you’ll be using some form of the above plots.
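The four plot types above can be sketched in one figure (random data, and the Agg backend so it runs without a display):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend; no display needed
import matplotlib.pyplot as plt

x = np.linspace(0, 10, 100)

fig, axes = plt.subplots(2, 2, figsize=(8, 6))
axes[0, 0].plot(x, np.sin(x))                                   # line chart
axes[0, 1].scatter(x, np.sin(x) + 0.1 * np.random.randn(100))   # scatter plot
axes[1, 0].hist(np.random.randn(1000), bins=30)                 # histogram
axes[1, 1].imshow(np.random.rand(16, 16), cmap="gray")          # show an image
fig.savefig("plots.png")
```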
I like to think of Scipy as an addon library to Numpy.
Whereas Numpy provides basic building blocks, like vectors, matrices, and operations on them, Scipy uses those general building blocks to do specific things.
For example, Scipy can do many common statistics calculations, including getting the PDF value, the CDF value, sampling from a distribution, and statistical testing.
It has signal processing tools so it can do things like convolution and the Fourier transform.
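A quick sketch of both sides of Scipy (the numbers are illustrative): the stats module gives PDF/CDF values, sampling, and tests, and the signal module does convolution:

```python
import numpy as np
from scipy import stats, signal

# Statistics: standard normal PDF/CDF, sampling, and a one-sample t-test
pdf_at_zero = stats.norm.pdf(0)
cdf_at_zero = stats.norm.cdf(0)
sample = stats.norm.rvs(size=1000, random_state=0)
t_stat, p_value = stats.ttest_1samp(sample, popmean=0)

# Signal processing: a moving-average convolution
smoothed = signal.convolve([1, 2, 3], [0.5, 0.5], mode="valid")
```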
If you’ve taken a deep learning or machine learning course, and you understand the theory, and you can see the code, but you can’t make the connection between how to turn those algorithms into actual running code, this course is for you.
All the code for this course can be downloaded from my GitHub: /lazyprogrammer/machine_learning_examples
In the directory: numpy_class
Make sure you always “git pull” so you have the latest version!
HARD PREREQUISITES / KNOWLEDGE YOU ARE ASSUMED TO HAVE:
TIPS (for getting through the course):
A step-by-step guide to setting up Python for Deep Learning and Data Science for a complete beginner
You can code your own Data Science or Deep Learning project in just a couple of lines of code these days. This is not an exaggeration; many programmers out there have done the hard work of writing tons of code for us to use, so that all we need to do is plug-and-play rather than write code from scratch.
You may have seen some of this code on Data Science / Deep Learning blog posts. Perhaps you might have thought: “Well, if it’s really that easy, then why don’t I try it out myself?”
If you’re a beginner to Python and you want to embark on this journey, then this post will guide you through your first steps. A common complaint I hear from complete beginners is that it’s pretty difficult to set up Python. How do we get everything started in the first place so that we can plug-and-play Data Science or Deep Learning code?
This post will guide you through in a step-by-step manner how to set up Python for your Data Science and Deep Learning projects. We will:
Once you’ve set up the above, you can build your first neural network to predict house prices in this tutorial here:
The main programming language we are going to use is called Python, which is the most common programming language used by Deep Learning practitioners.
The first step is to download Anaconda, which you can think of as a platform for you to use Python “out of the box”.
Visit this page: https://www.anaconda.com/distribution/ and scroll down to see this:
This tutorial is written specifically for Windows users, but the instructions for users of other Operating Systems are not all that different. Be sure to click on “Windows” as your Operating System (or whatever OS that you are on) to make sure that you are downloading the correct version.
This tutorial will be using Python 3, so click the green Download button under “Python 3.7 version”. A pop up should appear for you to click “Save” into whatever directory you wish.
Once it has finished downloading, just go through the setup step by step as follows:
Click “I Agree”
Choose a destination folder and click Next
Click Install with the default options, and wait for a few moments as Anaconda installs
Click Skip as we will not be using Microsoft VSCode in our tutorials
Click Finish, and the installation is done!
Once the installation is done, go to your Start Menu and you should see some newly installed software:
You should see this on your start menu
Click on Anaconda Navigator, which is a one-stop hub to navigate the apps we need. You should see a front page like this:
Anaconda Navigator Home Screen
Click on ‘Launch’ under Jupyter Notebook, which is the second panel on my screen above. Jupyter Notebook allows us to run Python code interactively on the web browser, and it’s where we will be writing most of our code.
A browser window should open up with your directory listing. I’m going to create a folder on my Desktop called “Intuitive Deep Learning Tutorial”. If you navigate to the folder, your browser should look something like this:
Navigating to a folder called Intuitive Deep Learning Tutorial on my Desktop
On the top right, click on New and select “Python 3”:
Click on New and select Python 3
A new browser window should pop up like this.
Browser window pop-up
Congratulations — you’ve created your first Jupyter notebook! Now it’s time to write some code. Jupyter notebooks allow us to write snippets of code and then run those snippets without running the full program. This helps us perhaps look at any intermediate output from our program.
To begin, let’s write code that will display some words when we run it. This function is called print. Copy and paste the code below into the grey box on your Jupyter notebook:
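Based on the output described below, the snippet is a single line:

```python
print("Hello World!")
```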
Your notebook should look like this:
Entering in code into our Jupyter Notebook
Now, press Alt-Enter on your keyboard to run that snippet of code:
Press Alt-Enter to run that snippet of code
You can see that Jupyter notebook has displayed the words “Hello World!” on the display panel below the code snippet! The number 1 has also filled in the square brackets, meaning that this is the first code snippet that we’ve run thus far. This will help us to track the order in which we have run our code snippets.
Instead of Alt-Enter, note that you can also click Run when the code snippet is highlighted:
Click Run on the panel
If you wish to create new grey blocks to write more snippets of code, you can do so under Insert.
Jupyter Notebook also allows you to write normal text instead of code. Click on the drop-down menu that currently says “Code” and select “Markdown”:
Now, our grey box that is tagged as Markdown will not have square brackets beside it. If you write some text in this grey box now and press Alt-Enter, the text will render as plain text like this:
If we write text in our grey box tagged as markdown, pressing Alt-Enter will render it as plain text.
There are some other features that you can explore. But now we’ve got Jupyter notebook set up for us to start writing some code!
Now we’ve got our coding platform set up. But are we going to write Deep Learning code from scratch? That seems like an extremely difficult thing to do!
The good news is that many others have written code and made it available to us! With the contribution of others’ code, we can play around with Deep Learning models at a very high level without having to worry about implementing all of it from scratch. This makes it extremely easy for us to get started with coding Deep Learning models.
For this tutorial, we will be downloading five packages that Deep Learning practitioners commonly use:
The first thing we will do is to create a Python environment. An environment is like an isolated working copy of Python, so that whatever you do in your environment (such as installing new packages) will not affect other environments. It’s good practice to create an environment for your projects.
Click on Environments on the left panel and you should see a screen like this:
Click on the button “Create” at the bottom of the list. A pop-up like this should appear:
A pop-up like this should appear.
Name your environment and select Python 3.7 and then click Create. This might take a few moments.
Once that is done, your screen should look something like this:
Notice that we have created an environment ‘intuitive-deep-learning’. We can see what packages we have installed in this environment and their respective versions.
Now let’s install some packages we need into our environment!
The first two packages we will install are called Tensorflow and Keras, which help us plug-and-play code for Deep Learning.
On Anaconda Navigator, click on the drop down menu where it currently says “Installed” and select “Not Installed”:
A whole list of packages that you have not installed will appear like this:
Search for “tensorflow”, and click the checkbox for both “keras” and “tensorflow”. Then, click “Apply” on the bottom right of your screen:
A pop up should appear like this:
Click Apply and wait for a few moments. Once that’s done, we will have Keras and Tensorflow installed in our environment!
Using the same method, let’s install the packages ‘pandas’, ‘scikit-learn’ and ‘matplotlib’. These are common packages that data scientists use to process the data as well as to visualize nice graphs in Jupyter notebook.
This is what you should see on your Anaconda Navigator for each of the packages.
Installing pandas into your environment
Installing scikit-learn into your environment
Installing matplotlib into your environment
Once it’s done, go back to “Home” on the left panel of Anaconda Navigator. You should see a screen like this, where it says “Applications on intuitive-deep-learning” at the top:
Now, we have to install Jupyter notebook in this environment. So click the green button “Install” under the Jupyter notebook logo. It will take a few moments (again). Once it’s done installing, the Jupyter notebook panel should look like this:
Click on Launch, and the Jupyter notebook app should open.
Create a notebook, type in these five lines of code, and press Alt-Enter. This code tells the notebook that we will be using the five packages that you installed with Anaconda Navigator earlier in the tutorial.
import tensorflow as tf
import keras
import pandas
import sklearn
import matplotlib
If there are no errors, then congratulations — you’ve got everything installed correctly:
A sign that everything works!
If you have had any trouble with any of the steps above, please feel free to comment below and I’ll help you out!
*Originally published by Joseph Lee Wei En at medium.freecodecamp.org*
An overview of using Python for data science including Numpy, Scipy, pandas, Scikit-Learn, XGBoost, TensorFlow and Keras.
So you’ve heard of data science and you’ve heard of Python.
You want to explore both but have no idea where to start — data science is pretty complicated, after all.
If you look at the contents of this article, you may think there’s a lot to master, but this article has been designed to gently increase the difficulty as we go along.
One article obviously can’t teach you everything you need to know about data science with python, but once you’ve followed along you’ll know exactly where to look to take the next steps in your data science journey.
Python, as a language, has a lot of features that make it an excellent choice for data science projects.
It’s easy to learn, simple to install (in fact, if you use a Mac you probably already have it installed), and it has a lot of extensions that make it great for doing data science.
Just because Python is easy to learn doesn’t mean it’s a toy programming language — huge companies like Google use Python for their data science projects, too. They even contribute packages back to the community, so you can use the same tools in your projects!
You can use Python to do way more than just data science — you can write helpful scripts, build APIs, build websites, and much much more. Learning it for data science means you can easily pick up all these other things as well.
There are a few important things to note about Python.
Right now, there are two versions of Python that are in common use. They are versions 2 and 3.
Most tutorials, and the rest of this article, will assume that you’re using the latest version of Python 3. It’s just good to be aware that sometimes you can come across books or articles that use Python 2.
The difference between the versions isn’t huge, but sometimes copying and pasting version 2 code when you’re running version 3 won’t work — you’ll have to do some light editing.
The second important thing to note is that Python really cares about whitespace (that’s spaces and return characters). If you put whitespace in the wrong place, your programme will very likely throw an error.
There are tools out there to help you avoid doing this, but with practice you’ll get the hang of it.
If you’ve come from programming in other languages, Python might feel like a bit of a relief: there’s no need to manage memory and the community is very supportive.
If Python is your first programming language you’ve made an excellent choice. I really hope you enjoy your time using it to build awesome things.
The best way to install Python for data science is to use the Anaconda distribution (you’ll notice a fair amount of snake-related words in the community).
It has everything you need to get started using Python for data science including a lot of the packages that we’ll be covering in the article.
If you click on Products -> Distribution and scroll down, you’ll see installers available for Mac, Windows and Linux.
Even if you have Python available on your Mac already, you should consider installing the Anaconda distribution as it makes installing other packages easier.
If you prefer to do things yourself, you can go to the official Python website and download an installer there.
Packages are pieces of Python code that aren’t a part of the language but are really helpful for doing certain tasks. We’ll be talking a lot about packages throughout this article so it’s important that we’re set up to use them.
Because the packages are just pieces of Python code, we could copy and paste the code and put it somewhere the Python interpreter (the thing that runs your code) can find it.
But that’s a hassle — it means that you’ll have to copy and paste stuff every time you start a new project or if the package gets updated.
To sidestep all of that, we’ll instead use a package manager.
If you chose to use the Anaconda distribution, congratulations — you already have a package manager installed. If you didn’t, I’d recommend installing pip.
No matter which one you choose, you’ll be able to use commands at the terminal (or command prompt) to install and update packages easily.
Now that you’ve got Python installed, you’re ready to start doing data science.
But how do you start?
Because Python caters to so many different requirements (web developers, data analysts, data scientists) there are lots of different ways to work with the language.
Python is an interpreted language which means that you don’t have to compile your code into an executable file, you can just pass text documents containing code to the interpreter!
Let’s take a quick look at the different ways you can interact with the Python interpreter.
If you open up the terminal (or command prompt) and type the word ‘python’, you’ll start a shell session. You can type any valid Python commands in there and they’d work just like you’d expect.
This can be a good way to quickly debug something but working in a terminal is difficult over the course of even a small project.
If you write a series of Python commands in a text file and save it with a .py extension, you can navigate to the file using the terminal and run the programme by typing python YOUR_FILE_NAME.py.
This is essentially the same as typing the commands one-by-one into the terminal, it’s just much easier to fix mistakes and change what your program does.
An IDE is a professional-grade piece of software that helps you manage software projects.
One of the benefits of an IDE is that you can use debugging features which tell you where you’ve made a mistake before you try to run your programme.
Some IDEs come with project templates (for specific tasks) that you can use to set your project out according to best practices.
None of these ways are the best for doing data science with python — that particular honour belongs to Jupyter notebooks.
Jupyter notebooks give you the capability to run your code one ‘block’ at a time, meaning that you can see the output before you decide what to do next — that’s really crucial in data science projects where we often need to see charts before taking the next step.
If you’re using Anaconda, you’ll already have Jupyter lab installed. To start it you’ll just need to type ‘jupyter lab’ into the terminal.
If you’re using pip, you’ll have to install Jupyter lab with the command ‘pip install jupyterlab’.
It probably won’t surprise you to learn that data science is mostly about numbers.
The NumPy package includes lots of helpful functions for performing the kind of mathematical operations you’ll need to do data science work.
It comes installed as part of the Anaconda distribution, and installing it with pip is just as easy as installing Jupyter notebooks (‘pip install numpy’).
The most common mathematical operations we’ll need to do in data science are things like matrix multiplication, computing the dot product of vectors, changing the data types of arrays and creating the arrays in the first place!
Here’s how you can make a list into a NumPy array:
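A minimal sketch:

```python
import numpy as np

my_list = [1, 2, 3]
my_array = np.array(my_list)  # converts the Python list to a NumPy array
```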
Here’s how you can do array multiplication and calculate dot products in NumPy:
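A minimal sketch (the values are arbitrary):

```python
import numpy as np

a = np.array([1, 2, 3])
b = np.array([4, 5, 6])

elementwise = a * b     # element-by-element multiplication
dot_product = a.dot(b)  # also: a @ b or np.dot(a, b)
```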
And here’s how you can do matrix multiplication in NumPy:
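A minimal sketch:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

C = A @ B  # matrix product; equivalently np.matmul(A, B) or A.dot(B)
```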
With mathematics out of the way, we must move forward to statistics.
The Scipy package contains a module (a subsection of a package’s code) specifically for statistics.
You can import it (make its functions available in your programme) into your notebook using the command ‘from scipy import stats’.
This package contains everything you’ll need to calculate statistical measurements on your data, perform statistical tests, calculate correlations, summarise your data and investigate various probability distributions.
Here’s how to quickly access summary statistics (minimum, maximum, mean, variance, skew, and kurtosis) of an array using Scipy:
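A minimal sketch using scipy.stats.describe (the data values are made up):

```python
import numpy as np
from scipy import stats

data = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
summary = stats.describe(data)  # nobs, minmax, mean, variance, skewness, kurtosis

print(summary.minmax, summary.mean, summary.variance,
      summary.skewness, summary.kurtosis)
```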
Data scientists have to spend an unfortunate amount of time cleaning and wrangling data. Luckily, the Pandas package helps us do this with code rather than by hand.
The most common tasks that I use Pandas for are reading data from CSV files and databases.
It also has a powerful syntax for combining different datasets together (datasets are called DataFrames in Pandas) and performing data manipulation.
You can see the first few rows of a DataFrame using the .head method:
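A minimal sketch (the DataFrame contents are invented for illustration):

```python
import pandas as pd

df = pd.DataFrame({"name": ["Ann", "Bob", "Cal", "Dee", "Eli", "Flo"],
                   "score": [90, 85, 77, 92, 64, 88]})

first_rows = df.head()  # first 5 rows by default; df.head(3) for the first 3
```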
You can select just one column using square brackets:
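A minimal sketch:

```python
import pandas as pd

df = pd.DataFrame({"name": ["Ann", "Bob"], "score": [90, 85]})

scores = df["score"]            # a single column (a Series)
subset = df[["name", "score"]]  # a list of columns (a DataFrame)
```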
And you can create new columns by combining others:
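A minimal sketch (column names are illustrative):

```python
import pandas as pd

df = pd.DataFrame({"price": [10.0, 20.0], "quantity": [3, 2]})

# A new column computed from two existing ones
df["revenue"] = df["price"] * df["quantity"]
```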
In order to use the pandas read_sql method, you’ll have to establish a connection to a database.
The most bulletproof method of connecting to a database is by using the SQLAlchemy package for Python.
Because SQL is a language of its own and connecting to a database depends on which database you’re using, I’ll leave you to read the documentation if you’re interested in learning more.
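As a self-contained sketch, an in-memory SQLite database (from the standard library) stands in for a real database; for Postgres or MySQL you would pass a SQLAlchemy engine to read_sql instead of a raw connection:

```python
import sqlite3
import pandas as pd

# In-memory SQLite database used purely as a stand-in
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "Ann"), (2, "Bob")])

df = pd.read_sql("SELECT * FROM users", conn)
```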
Sometimes we’d prefer to do some calculations on our data before they arrive in our projects as a Pandas DataFrame.
If you’re working with databases or scraping data from the web (and storing it somewhere), this process of moving data and transforming it is called ETL (Extract, transform, load).
You extract the data from one place, do some transformations to it (summarise the data by adding it up, finding the mean, changing data types, and so on) and then load it to a place where you can access it.
There’s a really cool tool called Airflow which is very good at helping you manage ETL workflows. Even better, it’s written in Python.
It was developed by Airbnb when they had to move incredible amounts of data around, you can find out more about it here.
Sometimes ETL processes can be really slow. If you have billions of rows of data (or if they’re a strange data type like text), you can recruit lots of different computers to work on the transformation separately and pull everything back together at the last second.
This architecture pattern is called MapReduce and it was made popular by Hadoop.
Nowadays, lots of people use Spark to do this kind of data transformation / retrieval work and there’s a Python interface to Spark called (surprise, surprise) PySpark.
Both the MapReduce architecture and Spark are very complex tools, so I’m not going to go into detail here. Just know that they exist and that if you find yourself dealing with a very slow ETL process, PySpark might help. Here’s a link to the official site.
We already know that we can run statistical tests, calculate descriptive statistics, p-values, and things like skew and kurtosis using the stats module from Scipy, but what else can Python do with statistics?
One particular package that I think you should know about is the lifelines package.
Using the lifelines package, you can calculate a variety of functions from a subfield of statistics called survival analysis.
Survival analysis has a lot of applications. I’ve used it to predict churn (when a customer will cancel a subscription) and when a retail store might be burglarised.
These are totally different to the applications the creators of the package imagined it would be used for (survival analysis is traditionally a medical statistics tool). But that just shows how many different ways there are to frame data science problems!
The documentation for the package is really good, check it out here.

Machine Learning in Python
Now this is a major topic — machine learning is taking the world by storm and is a crucial part of a data scientist’s work.
Simply put, machine learning is a set of techniques that allows a computer to map input data to output data. There are a few instances where this isn’t the case but they’re in the minority and it’s generally helpful to think of ML this way.
There are two really good machine learning packages for Python, let’s talk about them both.
Most of the time you spend doing machine learning in Python will be spent using the Scikit-Learn package (sometimes abbreviated sklearn).
This package implements a whole heap of machine learning algorithms and exposes them all through a consistent syntax. This makes it really easy for data scientists to take full advantage of every algorithm.
The general framework for using Scikit-Learn goes something like this –
You split your dataset into train and test datasets:
Then you instantiate and train a model:
And then you use the metrics module to test how well your model works:
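The three steps above can be sketched end-to-end; the built-in iris dataset and a logistic regression are used here purely as an illustration, not as a recommendation:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# 1. Split the dataset into train and test sets
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

# 2. Instantiate and train a model
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# 3. Evaluate it with the metrics module
predictions = model.predict(X_test)
accuracy = accuracy_score(y_test, predictions)
```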
The second package that is commonly used for machine learning in Python is XGBoost.
Where Scikit-Learn implements a whole range of algorithms, XGBoost implements only a single one — gradient boosted decision trees.
This package (and algorithm) has become very popular recently due to its success at Kaggle competitions (online data science competitions that anyone can participate in).
Training the model works in much the same way as a Scikit-Learn algorithm.

Deep Learning in Python
The machine learning algorithms available in Scikit-Learn are sufficient for nearly any problem. That being said, sometimes you need to use the most advanced thing available.
Deep neural networks have skyrocketed in popularity due to the fact that systems using them have outperformed nearly every other class of algorithm.
There’s a problem though — it’s very hard to say what a neural net is doing and why it’s making the decisions that it is. Because of this, their use in finance, medicine, the law and related professions isn’t widely endorsed.
The two major classes of neural network are convolutional neural networks (which are used to classify images and complete a host of other tasks in computer vision) and recurrent neural nets (which are used to understand and generate text).
Exploring how neural nets work is outside the scope of this article, but just know that the packages you’ll need to look for if you want to do this kind of work are TensorFlow (a Google contribution!) and Keras.
Keras is essentially a wrapper for TensorFlow that makes it easier to work with.
Once you’ve trained a model, you’d like to be able to access predictions from it in other software. The way you do this is by creating an API.
An API allows your model to receive data one row at a time from an external source and return a prediction.
Because Python is a general purpose programming language that can also be used to create web services, it’s easy to use Python to serve your model via API.
If you need to build an API you should look into the pickle module and Flask. Pickle allows you to save trained models on your hard drive so that you can use them later. And Flask is the simplest way to create web services.
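A minimal sketch of the pickle half (the model and filename are illustrative; a Flask route would load the saved file the same way and call predict on incoming request data):

```python
import pickle

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Save the trained model to disk...
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)

# ...and load it back later (e.g. inside a Flask view) to serve predictions
with open("model.pkl", "rb") as f:
    restored = pickle.load(f)

prediction = restored.predict(X[:1])
```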
Finally, if you’d like to build a full-featured web application around your data science project, you should use the Django framework.
Django is immensely popular in the web development community and was used to build the first version of Instagram and Pinterest (among many others).
And with that we’ve concluded our whirlwind tour of data science with Python.
We’ve covered everything you’d need to learn to become a full-fledged data scientist. If it still seems intimidating, you should know that nobody knows all of this stuff and that even the best of us still Google the basics from time to time.
All the basics to start using the Python library NumPy. In this course I'll cover the fundamentals of NumPy and include several interactive course videos that will challenge you to learn how to use it.
Learn NumPy Fundamentals - Python Library for Data Science
What you'll learn