Conquer the basics of multiple linear regression (and backward elimination!) and use your data to predict the future!
Being able to predict the future is awesome.
You might want to predict how well a stock will do based on some other information that you just happen to have.
It might help you to know if how often you bathe and how many cats you have relates to how long you’ll live.
You might want to figure out if there’s a relationship between above-average divorce rates and a man who 1) calls his mom more than three times a day, 2) refers to another man as “bro,” and 3) has never done his own laundry.
Multiple linear regression might be for you!
Multiple linear regression is fun because it looks at the relationships within a bunch of information. Instead of just looking at how one thing relates to another thing (simple linear regression), you can look at the relationship between a lot of different things and the thing you want to predict.
A linear regression model is a statistical model that’s frequently used in data science. It’s also one of the basic building blocks of machine learning! Multiple linear regression (MLR, or simply multiple regression) is a statistical technique that uses several independent variables to predict the outcome of a dependent variable. The goal of multiple regression is to model the linear relationship between your independent variables and your dependent variable.
I’m going to assume that you know a little bit about simple linear regression. If you don’t, check out this article on building a simple linear regressor. It will give you a quick (and fun) walk-through of the basics.
Simple linear regression is what you can use when you have one independent variable and one dependent variable. Multiple linear regression is what you can use when you have a bunch of different independent variables!
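Under the hood, the model is just an equation: y = b0 + b1x1 + b2x2 + ... + bnxn, where b0 is a constant and each other b is a coefficient for one independent variable. Here’s a tiny sketch of what a prediction looks like; every number below is made up purely for illustration:

```python
import numpy as np

# Hypothetical coefficients for a profit model; nothing here is fitted to real data.
b0 = 50_000.0                                     # intercept (constant term)
b = np.array([0.8, -0.05, 0.25])                  # one coefficient per independent variable
x = np.array([160_000.0, 130_000.0, 300_000.0])   # e.g. R&D, admin, and marketing spend

# The prediction is the intercept plus the weighted sum of the inputs.
y = b0 + b @ x
print(y)  # about 246500.0
```

Fitting a model is just the process of finding the b values that make predictions like this match your data as closely as possible.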
Multiple regression analysis has three main uses: measuring the strength of the relationship between the independent variables and the dependent variable, estimating how much the dependent variable changes as the independent variables change, and predicting trends and future values.
Let’s do that last one!
We’re going to keep things super simple here so that multiple linear regression as a whole makes sense. I do want you to know that things can get a lot more complex than this in the real world.
For the purposes of this post, you are now working for a venture capitalist.
So here’s the thing: you have a dataset in front of you with information on 50 companies. You have five columns that contain information about how much those companies spend on admin, research and development (R&D), and marketing, their location by state, and their profit for the most recent year. This dataset is anonymized, which means we don’t know the names of these companies or any other identifying information.
You’ve been hired to analyze this information and create a model. You need to inform the guy who hired you what kind of companies will make the most sense in the future to invest in. To keep things simple, let’s say that your employer wants to make this decision based on last year’s profit. This means that the profits column is your dependent variable. The other columns are the independent variables.
So you want to learn about the dependent variable (profit) based on the other categories of information you have.
The guy who hired you doesn’t want to invest in these specific companies. He wants to use the information in this dataset as a sample. This sample will help him understand which of the companies he looks at in the future will perform better based on the same information.
Does he want to invest in companies that spend a lot on R&D? Marketing? Does he want to invest in companies that are based in Illinois? You need to help him create a set of guidelines. You’re going to help him be able to say something along the lines of, “I’m interested in a company that’s based in New York that spends very little on admin expenses but a lot on R&D.”
You’re going to come up with a model that will allow him to assess where and into which companies he wants to invest to maximize his profit.
Linear regression is great for finding correlations, but remember that correlation and causation are not the same thing! You are not saying that one thing causes the other; you’re finding which independent variables are strongly correlated with the dependent variable.
There are some assumptions that absolutely have to be true:
1) Linearity: the relationship between the independent variables and the dependent variable is linear.
2) Homoscedasticity: the variance of the errors is constant across all values of the independent variables.
3) Multivariate normality: the errors are normally distributed.
4) Independence of errors: the observations are independent of each other.
5) Lack of multicollinearity: the independent variables aren’t highly correlated with one another.
You need to check that these assumptions hold before you proceed and build your model. We’re totally skipping past that here. Make sure that if you’re doing this in the real world, you aren’t just blindly following this tutorial. Those assumptions need to be verified when you’re building your regression!
If you aren’t familiar with the concept of dummy variables, check out this article on data cleaning and preprocessing. It has some simple code that we can go ahead and copy and paste here.
So we’ve already decided that “profit” is our dependent variable (y) and the others are our independent variables (X). We’ve also decided that what we want is a linear regression model. What about that column of states? “State” is a categorical variable, not a numerical variable. We need our independent variables to be numbers, not words. What do we do?
If you looked at the information in the locations column, you’d see that all of the companies being examined are based in just a few states. For the purposes of this explanation, let’s say our companies are all located in New York, Minnesota, or California. That means we’ll want to turn this one column of information into columns of 1s and 0s, one per state. (If you want to learn more about why we’re doing that, check out that article on simple linear regression. It explains why this is the best way to arrange our data.)
So how do we populate those columns? Basically, we’ll turn each state into its own column. If a company is located in New York, it will have a 1 in the “New York” column and a 0 in the “Minnesota” and “California” columns. If you were using more states, you’d also have a 0 in the “Illinois” column, a 0 in the “Arkansas” column, and so on. We won’t be using the original “locations” column anymore because we won’t need it!
These 1s and 0s are basically working as a light switch. 1 is “on” or “yes” and 0 is “off” or “nope.”
You never want to include every dummy variable in your model at the same time.
Why is that?
You’d be duplicating a variable. With two dummy variables, for example, the first (d1) is always equal to one minus the second: d1 = 1 - d2. When one variable predicts another like this, it’s called multicollinearity, and the model wouldn’t be able to distinguish the effects of d1 from the effects of d2. You can’t have the constant and the full set of dummy variables at the same time, so always leave one dummy out: if you have nine dummy variables, include eight of them. (If you have two sets of dummy variables, you have to do this for each set.)
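If you’re working in pandas, there’s a shortcut that handles both the encoding and the trap in one go. This is a small sketch with a made-up two-state column, not the article’s dataset:

```python
import pandas as pd

# A made-up location column like the one described above.
df = pd.DataFrame({"State": ["New York", "Minnesota", "New York", "Minnesota"]})

# drop_first=True keeps one fewer dummy column than there are categories,
# which sidesteps the dummy variable trap automatically.
dummies = pd.get_dummies(df["State"], drop_first=True)
print(list(dummies.columns))  # ['New York'] -- Minnesota becomes the baseline
```

The dropped category becomes the baseline: a row with 0 in every dummy column is implicitly “Minnesota.”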
You’re going to want to be familiar with the concept of a P-value. That’s definitely going to come up.
The P-value is the probability of getting a sample like ours (or more extreme than ours) if the null hypothesis is true.
It gives a value to the weirdness of your sample. If you have a large P-value, then you probably won’t change your mind about the null hypothesis. A large value means that it wouldn’t be at all surprising to get a sample like yours if the null hypothesis is true. As the P-value gets smaller, you should probably start to ask yourself some questions. You might want to change your mind and maybe even reject the null hypothesis.
The null hypothesis is the official way to refer to the claim (hypothesis) that’s on trial here. It’s the default position where there’s just no association among the groups that are being tested. In every experiment, you’re looking for an effect among the groups that are being tested. Unfortunately, there’s always the possibility that there’s no effect (or no difference) between the groups. That lack of difference is called the null hypothesis.
It’s like if you were doing a trial of a drug that doesn’t work. In that trial, there just wouldn’t be a difference between the group that took the drug and the rest of the population. The difference would be null.
You always assume that the null hypothesis is true until you have evidence that it isn’t.
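You can get a feel for P-values with a quick simulation. Suppose the null hypothesis is “this coin is fair” and we observed 60 heads in 100 flips. The P-value asks: how often would a genuinely fair coin look at least that extreme? (The numbers here are purely illustrative.)

```python
import numpy as np

rng = np.random.default_rng(0)

observed_heads = 60                                # what we saw in our 100 flips
flips = rng.binomial(n=100, p=0.5, size=100_000)   # 100,000 simulated fair-coin experiments

# Two-sided p-value: the fraction of fair-coin runs at least as far from 50 as ours.
p_value = np.mean(np.abs(flips - 50) >= abs(observed_heads - 50))
print(round(p_value, 3))  # somewhere around 0.057
```

A result near 0.057 sits just above the conventional 0.05 cutoff: surprising, but not quite surprising enough to reject the fair-coin hypothesis at that level.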
We need to figure out which columns we want to keep and which we want to toss. If you just chuck a bunch of stuff into your model, it won’t be a good one. It definitely won’t be reliable! (Also, at the end of the day, you need to be able to explain your model to the guy who hired you to create this thing. You’re only going to want to explain the variables that actually predict something!)
There are essentially five methods of building a multiple linear regression model.
You’ll almost certainly hear about stepwise regression as well. Stepwise regression is most commonly used as another way of saying bidirectional elimination (method 4). Sometimes when people use the phrase they’re referring to a combination of methods 2 and 3, which is exactly the idea behind bidirectional elimination.
Method 1 (Chuck Everything In): Okay. That isn’t the official name for this method (but it should be). Occasionally you’ll need to build a model where you just throw in all your variables. You might have some kind of prior knowledge. You might have a particular framework you need to use. You might have been hired by someone who’s insisting that you do that. You might want to prepare for backward elimination. It’s a real option, so I’m including it here.
Method 2 (backward elimination): This has a few basic steps:
1) Select a significance level for a variable to stay in the model (for example, SL = 0.05).
2) Fit the model with all possible predictors.
3) Consider the predictor with the highest P-value. If P > SL, go to step 4; otherwise, you’re done.
4) Remove that predictor.
5) Refit the model without it, then go back to step 3.
(After we go through these concepts, I’ll walk you through an example of backward elimination so you can see it in action! It’s definitely confusing, but if you really look at what’s going on, you’ll get the hang of it.)
Method 3 (forward selection): This is way more complex than just reversing backward elimination:
1) Select a significance level for a variable to enter the model (for example, SL = 0.05).
2) Fit a simple regression model for each variable and keep the one with the lowest P-value.
3) Fit all possible models with one extra predictor added to the one(s) you’re already keeping.
4) Consider the new predictor with the lowest P-value. If P < SL, keep it and go back to step 3; otherwise, you’re done.
We stop when P < SL is no longer true, that is, when no candidate variable has a P-value below the significance level anymore. At that point the newest variable isn’t significant, so you won’t keep the current model; you’ll keep the previous one, because in the current model that last variable is insignificant.
Method 4 (bidirectional elimination): This method combines the previous two!
Method 5 (score comparison): Here, you’re going to build every possible model and compare their scores using some criterion of goodness of fit. This is definitely the most resource-consuming approach!
Fun fact: if you have 10 columns of data, you’ll wind up with 1,023 models here. You’d better be ready to commit if you’re going to go this route!
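That 1,023 isn’t magic; it’s every non-empty subset of 10 candidate columns, which you can verify in a couple of lines:

```python
from math import comb

# Count every model you could build from 10 candidate columns:
# choose 1 of them, or 2 of them, ... up to all 10.
n_models = sum(comb(10, k) for k in range(1, 11))
print(n_models)  # 1023, the same as 2**10 - 1
```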
If you’re just getting started with machine learning, statistics, or data science, that all looks like it will be an insane amount of code. It’s not!
So much of what you need to do with a machine learning model is all ready to go with the amazing libraries out there. You’ll need to do the tough parts where you decide what information is important and what kind of models you’ll want to use. It’s also up to you to interpret the results and be able to communicate what you’ve built. However, the code itself is very doable.
Backward elimination is the fastest and the best method to start with, so that’s what I’m going to walk you through after we build the quick and easy multiple linear regression model.
First, let’s prepare our dataset. Let’s say we have a .csv file called “startups.csv” that contains the information we talked about earlier. We’ll say it has 50 companies and columns for R&D spending, admin spending, marketing spending, what state the company is located in (let’s say, New York, Minnesota, and California), and one column for last year’s profit.
It’s a good idea to import your libraries right away.
# Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
Now we can go ahead and copy and paste the code from that data cleaning and preparation article! We’ll change the name of the dataset to ours; I’m calling it ‘startups.csv.’ We’ll adjust a couple of other tiny details as well. Profit is still our last column, so we’ll grab our independent variables with [:, :-1], which takes everything except that last column. We’ll grab the dependent variable with [:, 4], the index of the profit column. Now we have a vector of the dependent variable (y) and a matrix of independent variables that contains everything except the profits (X). We want to see if there is a linear dependency between the two!
dataset = pd.read_csv('startups.csv')
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, 4].values
Now we need to encode the categorical variable. We can use LabelEncoder and OneHotEncoder to create dummy variables. (We can copy and paste this from that other article too! Make sure you’re grabbing the right information and you don’t encode the dependent variable.) You’ll change the column index to [:, 3] in both spots where it appears, and use the same index in the OneHotEncoder. (Note that recent versions of scikit-learn have removed the categorical_features argument in favor of ColumnTransformer.)
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
labelencoder = LabelEncoder()
X[:, 3] = labelencoder.fit_transform(X[:, 3])
onehotencoder = OneHotEncoder(categorical_features = [3])
X = onehotencoder.fit_transform(X).toarray()
You’re ready to go! Our one column of information is now three columns, each of which corresponds to one state!
What about avoiding the dummy variable trap? You don’t actually need to handle it yourself here! It’s all taken care of for you by the libraries that we’re choosing to use. However, if you ever want or need to run that code, it’s simple! You can do it with one line right after you encode your data:
X = X[:, 1:]
What does that do? It removes the first column from X. Putting the 1 there means that we want to take all of the columns starting at index 1 to the end. You won’t take the first column. For some libraries, you’ll need to take one column away manually to be sure your dataset won’t contain redundancies.
Now let’s split our training and testing data. The most common split is an 80/20 split, which means 80% of our data would go to training our model and 20% would go to testing it. Let’s do that here!
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)
We don’t need to do feature scaling here! The library will take care of that for us.
We’ll import linear regression from Scikit-Learn. (That makes a little sense, doesn’t it?)
from sklearn.linear_model import LinearRegression
Now we’ll introduce our regressor. We’ll create an object of the class LinearRegression and we’ll fit the object to our training set. We want to apply this to both our X_train and y_train.
regressor = LinearRegression()
regressor.fit(X_train, y_train)
Now let’s test the performance of our multiple linear regressor!
(We won’t plot a graph here because we’d need five dimensions to do that. If you’re interested in plotting a graph with a simple linear regressor, check out this article on building a simple linear regressor.)
We’ll create the vector of predictions (y_pred). We can use the regressor with the predict method to predict the observations of the test set (X_test).
y_pred = regressor.predict(X_test)
That’s it! Four lines of code and you’ve built a multiple linear regressor!
Now we can see the ten predicted profits! You can print them any time with a simple print(y_pred). We can easily evaluate them by putting the predictions side by side with the actual results from the test set. If you take a look, you’ll see that some are incredibly accurate and the rest are pretty darn good. Nice work!
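Since we don’t have startups.csv to hand here, here’s a self-contained sketch of that same predict-and-compare step on synthetic data. The column meanings and numbers are made up, and it assumes scikit-learn is installed:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((50, 3))                                      # stand-ins for three spending columns
y = 4.0 * X[:, 0] + 1.5 * X[:, 2] + rng.normal(0, 0.05, 50)  # profit-like target with a little noise

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
regressor = LinearRegression().fit(X_train, y_train)
y_pred = regressor.predict(X_test)

# Line the predictions up against the actual test-set values.
for actual, predicted in zip(y_test, y_pred):
    print(f"actual: {actual:6.2f}   predicted: {predicted:6.2f}")
```

Because the synthetic target really is linear in the inputs, the predicted and actual values land very close together, which is exactly the comparison you’d make with the real dataset.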
There is definitely some linear dependency between our dependent and independent variables; the quality of the predictions makes that linear relationship clear.
Congratulations!! You now know how to make a multiple linear regressor in Python!
Things are about to get more challenging!
What if some of the variables have a lot of impact on our dependent variable and some are statistically insignificant? We can definitely find out which are the variables that have the highest impact on the dependent variable. We’ll want to find a team of variables that all have a definite effect, positive or negative.
Let’s use backward elimination!
We need to prepare something specific for backward elimination: the statsmodels library. Let’s import statsmodels.api. That’s a little long to keep retyping, so we’ll make a shortcut by calling it sm. (Older tutorials sometimes import statsmodels.formula.api here instead, but the OLS class we’re about to use lives in statsmodels.api.)
import statsmodels.api as sm
We need to add a column of ones to our matrix of independent variables because of the way statsmodels handles the constant. (Our model needs to take into account the constant b0. Most libraries include it automatically, but the statsmodels OLS class we’re using doesn’t, so we’ll add a column of ones so it understands the formula correctly.)
This starts pretty simply. We’ll use .append because we want to append.
(Love Python ❤️)
We have our matrix of features X. The values argument is perfect for us because it’s an array. We’ll input a matrix of 50 rows and one column of 1s, which we can create with NumPy’s np.ones. We need to specify the numbers of rows and columns we want: (50, 1). We’ll convert the array to the integer type with .astype(int). Then we need to say whether we’re appending rows or columns (rows are axis = 0, columns are axis = 1), so we’ll use axis = 1 to append a column!
We want this column to be located at the beginning of our dataset. What do we do? Let’s add matrix X to the column of 50 ones, rather than the other way around. We can do that with values = X.
X = np.append(arr = np.ones((50, 1)).astype(int), values = X, axis = 1)
We want to create a new matrix of our optimal features (X_opt): the features that are statistically significant and have a high impact on the profit.
We’ll need to initialize it, and then we’ll remove the variables that are not statistically significant one by one by removing an index at each step. First, take all the indexes of the columns in X, separated by commas: [0, 1, 2, 3, 4, 5].
If you look back at the methods earlier, you’ll see that we first need to select our significance level, which we talked about earlier. Then we need to fit the model!
We aren’t going to reuse the regressor we built earlier. We’re using a new library, so we need a new fit for our future optimal matrix. We’ll create a new regressor (the last one came from the LinearRegression class). Our new class is ordinary least squares (OLS). We’ll call the class and specify a couple of arguments. (You can check out the official documentation here.) We’ll need an endog (our dependent variable) and an exog (our X_opt, which is just our matrix of features (X) with the intercept column, since the intercept isn’t included by default). To fit it, we just call .fit()!
X_opt = X[:, [0, 1, 2, 3, 4, 5]]
regressor_OLS = sm.OLS(endog = y, exog = X_opt).fit()
Now we’ve initialized X_opt!
Now let’s look at our P-values! How do we find the predictor with the highest P-value? We’ll take our regressor object and call its .summary() method:
regressor_OLS.summary()
Now we can see a table with some very useful information about our model, including the adjusted R-squared value and our P-values. The lower the P-value, the more significant the independent variable is with respect to your dependent variable. Here, though, we’re looking for the highest one, because that’s the variable we’ll remove first. It’s easy to see.
Now let’s remove it!
We can copy and paste our code from above and remove index 2. That will look like this:
X_opt = X[:, [0, 1, 3, 4, 5]]
regressor_OLS = sm.OLS(endog = y, exog = X_opt).fit()
regressor_OLS.summary()
Just keep going until you don’t have any P-values that are higher than the SL value you chose. Remember that you always want to look at the original matrix in order to choose the correct index! You’re using the columns in your original matrix (X), not in X_opt.
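If you’d rather not copy-paste that loop by hand, here’s a hedged, numpy-only sketch of the same idea: fit, find the highest P-value, drop that column, repeat. It approximates the t distribution with a normal (reasonable for the roughly 45 degrees of freedom in a 50-row dataset) and never drops the intercept column of ones. All the demo data is invented.

```python
import numpy as np
from math import erfc, sqrt

def backward_elimination(X, y, sl=0.05):
    """Drop the least significant column until every p-value is below sl.

    X is assumed to already contain a leading column of ones (the intercept).
    p-values come from a normal approximation to the t distribution.
    """
    cols = list(range(X.shape[1]))
    while len(cols) > 1:
        Xc = X[:, cols]
        beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
        residuals = y - Xc @ beta
        dof = Xc.shape[0] - Xc.shape[1]
        sigma2 = residuals @ residuals / dof                      # residual variance
        se = np.sqrt(sigma2 * np.diag(np.linalg.inv(Xc.T @ Xc)))  # coefficient standard errors
        pvals = np.array([erfc(abs(b / s) / sqrt(2)) for b, s in zip(beta, se)])
        worst = 1 + int(np.argmax(pvals[1:]))                     # skip the intercept at index 0
        if pvals[worst] <= sl:
            break
        cols.pop(worst)
    return cols

# Made-up demo: only column 1 truly drives y; columns 2 and 3 are pure noise.
rng = np.random.default_rng(0)
x1, x2, x3 = rng.random(50), rng.random(50), rng.random(50)
y = 3.0 + 5.0 * x1 + rng.normal(0, 0.01, 50)
X = np.column_stack([np.ones(50), x1, x2, x3])

kept = backward_elimination(X, y)
print(kept)  # column 1, the real predictor, should survive the cull
```

This mirrors what the statsmodels summary tables are doing for us; in practice you’d still eyeball the summaries rather than trust a fully automatic loop.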
You might get to the point where you have a P-value that’s incredibly close to the SL value that you chose. For example, we chose 0.050 and here’s 0.060.
That’s a tough situation, because the value you chose could have been anything. If you want to follow your framework to the letter, you’ll need to remove that index. But there are other metrics that can help you make sense of whether you really want to do that; you could bring in an additional criterion, and there’s also a lot of information right in the summary, like the adjusted R-squared value, that can help you make your decision.
So let’s say we ran backward elimination until the end and we’re left with only the index for the R&D spending column.
X_opt = X[:, [0, 1, 3, 4, 5]]
regressor_OLS = sm.OLS(endog = y, exog = X_opt).fit()
regressor_OLS.summary()

X_opt = X[:, [0, 1, 3, 5]]
regressor_OLS = sm.OLS(endog = y, exog = X_opt).fit()
regressor_OLS.summary()

X_opt = X[:, [0, 3, 5]]
regressor_OLS = sm.OLS(endog = y, exog = X_opt).fit()
regressor_OLS.summary()

X_opt = X[:, [0, 3]]
regressor_OLS = sm.OLS(endog = y, exog = X_opt).fit()
regressor_OLS.summary()
If we’ve been following our model carefully, that means that we now know that R&D spending is a powerful predictor for our dependent variable! The conclusion here is that the data that can predict profits with the highest impact is composed of only one category: R&D spending!
You did it! You used multiple linear regression and backward elimination! You figured out that looking at R&D spending will give you the best sense of what a company’s profits will be!
Thanks for reading ❤
If you liked this post, share it with all of your programming buddies!
Learn Data Science | How to Learn Data Science for Free
In this post, I describe a learning path of free online courses and tutorials that will enable you to learn data science for free.
The average cost of obtaining a master’s degree at a traditional brick-and-mortar institution will set you back anywhere between $30,000 and $120,000. Even online data science degree programs don’t come cheap, costing a minimum of $9,000. So what do you do if you want to learn data science but can’t afford to pay this?
I trained for a career as a data scientist without taking any formal education in the subject. In this article, I am going to share my own personal curriculum for learning data science if you can’t, or don’t want to, pay thousands of dollars for more formal study.
The curriculum consists of three main parts: technical skills, theory and practical experience. I will include links to free resources for every element of the learning path, as well as some links to additional ‘low cost’ options with estimated costs for each, so if you want to spend a little money to accelerate your learning you can add these resources to the curriculum.
The first part of the curriculum focuses on technical skills. I recommend learning these first so that you can take a practical-first approach, rather than, say, learning the mathematical theory first. Python is by far the most widely used programming language for data science: in the Kaggle Machine Learning and Data Science survey carried out in 2018, 83% of respondents said that they used Python on a daily basis. I would, therefore, recommend focusing on this language, while also spending a little time on other languages such as R.
Before you can start to use Python for data science you need a basic grasp of the fundamentals of the language, so you will want to take a Python introductory course. There are lots of free ones out there, but I like the Codecademy ones best as they include hands-on in-browser coding throughout.
I would suggest taking the introductory course to learn Python. This covers basic syntax, functions, control flow, loops, modules and classes.
Next, you will want to get a good understanding of using Python for data analysis. There are a number of good resources for this.
To start with I suggest taking at least the free parts of the data analyst learning path on dataquest.io. Dataquest offers complete learning paths for data analyst, data scientist and data engineer. Quite a lot of the content, particularly on the data analyst path is available for free. If you do have some money to put towards learning then I strongly suggest putting it towards paying for a few months of the premium subscription. I took this course and it provided a fantastic grounding in the fundamentals of data science. It took me 6 months to complete the data scientist path. The price varies from $24.50 to $49 per month depending on whether you pay annually or not. It is better value to purchase the annual subscription if you can afford it.
If you have chosen to pay for the full data science course on Dataquest then you will have a good grasp of the fundamentals of machine learning with Python. If not then there are plenty of other free resources. I would focus to start with on scikit-learn which is by far the most commonly used Python library for machine learning.
When I was learning, I was lucky enough to attend a two-day workshop run by Andreas Mueller, one of the core developers of scikit-learn. He has, however, published all the material from this course, and others, on this GitHub repo. The material consists of slides, course notes and notebooks that you can work through, and I would definitely recommend doing so.
Then I would suggest taking some of the tutorials in the scikit-learn documentation. After that, I would suggest building some practical machine learning applications and learning the theory behind how the models work — which I will cover a bit later on.
SQL is a vital skill to learn if you want to become a data scientist as one of the fundamental processes in data modelling is extracting data in the first place. This will more often than not involve running SQL queries against a database. Again if you haven’t opted to take the full Dataquest course then here are a few free resources to learn this skill.
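If you want to practise that extract step without setting anything up, Python’s built-in sqlite3 module gives you a throwaway database. The table and figures below are invented purely for illustration:

```python
import sqlite3

# A tiny in-memory database to practise the extract step described above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE companies (name TEXT, profit REAL)")
conn.executemany("INSERT INTO companies VALUES (?, ?)",
                 [("A", 192_261.83), ("B", 191_792.06), ("C", 14_681.4)])

# A typical extraction query: filter and order the rows you care about.
rows = conn.execute(
    "SELECT name, profit FROM companies WHERE profit > 100000 ORDER BY profit DESC"
).fetchall()
print(rows)  # [('A', 192261.83), ('B', 191792.06)]
```

The same SELECT / WHERE / ORDER BY pattern carries over directly to production databases and cloud warehouses like BigQuery.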
Codecademy has a free introduction to SQL course. Again, it is very practical, with in-browser coding all the way through. If you also want to learn about cloud-based database querying, then Google Cloud BigQuery is very accessible: there is a free tier so you can try queries for free, an extensive range of public datasets to try, and very good documentation.
To be a well-rounded data scientist it is a good idea to diversify a little beyond Python, so I would also suggest taking an introductory course in R. Codecademy has one on its free plan. It is probably worth noting here that, similar to Dataquest, Codecademy also offers a complete data science learning plan as part of its pro account (this costs from $15.99 to $31.99 per month depending on how many months you pay for up front). I personally found the Dataquest course to be much more comprehensive, but this may work out a little cheaper if you are looking to follow a learning path on a single platform.
It is a good idea to get a grasp of software engineering skills and best practices. This will help your code to be more readable and extensible both for yourself and others. Additionally, when you start to put models into production you will need to be able to write good quality well-tested code and work with tools like version control.
There are two great free resources for this. Python Like You Mean It covers things like the PEP8 style guide and documentation, and also covers object-oriented programming really well.
The scikit-learn contribution guidelines, although written to facilitate contributions to the library, actually cover the best practices really well. This covers topics such as Github, unit testing and debugging and is all written in the context of a data science application.
For a comprehensive introduction to deep learning, I don’t think that you can get any better than the totally free and totally ad-free fast.ai. This course includes an introduction to machine learning, practical deep learning, computational linear algebra and a code-first introduction to natural language processing. All their courses have a practical first approach and I highly recommend them.
Whilst you are learning the technical elements of the curriculum you will encounter some of the theory behind the code you are implementing, and I recommend that you learn the theoretical elements alongside the practical. The way I do this is to first learn the code needed to implement a technique (let’s take k-means as an example); once I have something working, I then look deeper into concepts such as inertia. Again, the scikit-learn documentation contains all the mathematical concepts behind the algorithms.
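Taking that k-means example: inertia sounds abstract until you compute it by hand. It is just the sum of squared distances from each point to its assigned cluster centre. The points and centres below are made up for illustration:

```python
import numpy as np

# Inertia: sum of squared distances from each point to its assigned centre,
# computed by hand here for two fixed centres.
points = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
centres = np.array([[0.0, 0.5], [10.0, 10.5]])
labels = np.array([0, 0, 1, 1])  # which centre each point belongs to

inertia = sum(np.sum((points[labels == k] - c) ** 2) for k, c in enumerate(centres))
print(inertia)  # 1.0 -- four points, each 0.5 away from its centre
```

This is exactly the quantity that k-means tries to minimise when it moves the centres around.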
In this section, I will introduce the key foundational elements of theory that you should learn alongside the more practical elements.
Khan Academy covers almost all of the concepts listed below for free. You can tailor the subjects you would like to study when you sign up, which gives you a nicely tailored curriculum for this part of the learning path; checking all of the relevant boxes will give you an overview of most of the elements covered here.
Calculus is defined by Wikipedia as “the mathematical study of continuous change.” In other words, calculus finds patterns between functions; for example, in the case of derivatives, it can help you to understand how a function changes over time.
Many machine learning algorithms utilise calculus to optimise the performance of models. If you have studied even a little machine learning you will probably have heard of gradient descent, which works by iteratively adjusting the parameter values of a model to find the values that minimise the cost function. Gradient descent is a good example of how calculus is used in machine learning.
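Here is gradient descent in miniature: fitting a single slope w for the toy model y = w * x by repeatedly stepping against the gradient of the mean squared error. This is a teaching sketch, not a production optimiser, and all the data is made up:

```python
import numpy as np

# Toy data generated from y = 2 * x, so the true slope is 2.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * x

w = 0.0    # start from a deliberately wrong parameter value
lr = 0.01  # learning rate: how big each corrective step is
for _ in range(500):
    grad = np.mean(2 * (w * x - y) * x)  # derivative of mean squared error w.r.t. w
    w -= lr * grad                       # step downhill against the gradient

print(round(w, 3))  # 2.0 -- the estimate converges to the true slope
```

The derivative tells you which direction (and roughly how far) to nudge w at every step, which is precisely the calculus at work inside model training.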
What you need to know:
Many popular machine learning methods, including XGBoost, use matrices to store inputs and process data. Matrices, alongside vector spaces and linear equations, form the mathematical branch known as linear algebra. In order to understand how many machine learning methods work, it is essential to get a good understanding of this field.
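As a tiny concrete example of why this matters, here is a linear system solved with numpy; under the hood, fitting a linear regression boils down to solving systems of equations just like this one (the numbers are made up):

```python
import numpy as np

# Two equations, two unknowns:  2x + y = 5  and  x + 3y = 10.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

solution = np.linalg.solve(A, b)
print(solution)  # x = 1, y = 3
```

Swap the 2x2 matrix for a tall data matrix and you are most of the way to the normal equations that least-squares fitting solves.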
What you need to learn:
Vectors and spaces
Here is a list of the key concepts you need to know:
The third section of the curriculum is all about practice. In order to truly master the concepts above you will need to use the skills in some projects that ideally closely resemble a real-world application. By doing this you will encounter problems to work through such as missing and erroneous data and develop a deep level of expertise in the subject. In this last section, I will list some good places you can get this practical experience from for free.
“With deliberate practice, however, the goal is not just to reach your potential but to build it, to make things possible that were not possible before. This requires challenging homeostasis — getting out of your comfort zone — and forcing your brain or your body to adapt.”, Anders Ericsson, Peak: Secrets from the New Science of Expertise
Machine learning competitions are a good place to get practice with building machine learning models. They give access to a wide range of data sets, each with a specific problem to solve, and have a leaderboard. The leaderboard is a good way to benchmark how good you actually are at developing models and where you may need to improve further.
The UCI machine learning repository is a large source of publicly available data sets. You can use these data sets to put together your own data projects; these could include data analysis and machine learning models, and you could even try building a deployed model with a web front end. It is a good idea to store your projects somewhere public, such as GitHub, as this creates a portfolio showcasing your skills to use for future job applications.
One other option to consider is contributing to open source projects. Many Python libraries rely on the community to maintain them, and there are often hackathons held at meetups and conferences where even beginners can join in. Attending one of these events would certainly give you some practical experience and an environment where you can learn from others whilst giving something back at the same time. NumFOCUS is a good example of a project like this.
In this post, I have described a learning path and free online courses and tutorials that will enable you to learn data science for free. Showcasing what you are able to do in the form of a portfolio is a great tool for future job applications in lieu of formal qualifications and certificates. I really believe that education should be accessible to everyone and, certainly, for data science at least, the internet provides that opportunity. In addition to the resources listed here, I have previously published a recommended reading list for learning data science available here. These are also all freely available online and are a great way to complement the more practical resources covered above.
Thanks for reading!
Cheat Sheets for AI, Neural Networks, Machine Learning, Deep Learning & Big Data
Downloadable PDF of the best AI cheat sheets in super high definition.
Neural Networks Cheat Sheets
Neural Networks Basics Cheat Sheet
An Artificial Neural Network (ANN), popularly known as a neural network, is a computational model based on the structure and functions of biological neural networks. In computer-science terms, it is like an artificial human nervous system for receiving, processing, and transmitting information.
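As a hedged illustration of that idea, a single artificial neuron can be sketched in a few lines of plain Python: it takes a weighted sum of its inputs, adds a bias, and passes the result through an activation function (a sigmoid here). Real networks stack many of these and learn the weights from data.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum + bias, then sigmoid activation."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid squashes the output into (0, 1)

# With zero weights and zero bias, the sigmoid of 0 is exactly 0.5.
print(neuron([1.0, 2.0], [0.0, 0.0], 0.0))  # -> 0.5
```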
Neural Networks Graphs Cheat Sheet
Graph data appears in many learning tasks that involve rich relational information among elements. For example, modeling physical systems, predicting protein interfaces, and classifying diseases all require a model that learns from graph inputs. Graph reasoning models can also learn from non-structural data, such as text and images, by reasoning over extracted structures.
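A very rough sketch of the core idea behind such models, under the simplifying assumption of a single scalar feature per node: represent the graph as an adjacency list, then update each node's feature by aggregating its neighbours' features (one "message passing" step).

```python
# Toy graph: adjacency list plus one scalar feature per node.
graph = {"A": ["B", "C"], "B": ["A"], "C": ["A"]}
features = {"A": 1.0, "B": 2.0, "C": 4.0}

def propagate(graph, features):
    """One neighbour-averaging ("message passing") step over the graph."""
    return {
        node: sum(features[nb] for nb in neighbours) / len(neighbours)
        for node, neighbours in graph.items()
    }

updated = propagate(graph, features)
print(updated)  # A averages its neighbours B and C -> 3.0
```

Graph neural networks replace the plain average with learned, parameterised aggregation functions, but the information flow is the same.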
Machine Learning Cheat Sheets
Machine Learning with Emojis Cheat Sheet
Scikit Learn Cheat Sheet
Scikit-learn is a free machine learning library for the Python programming language. It features various classification, regression, and clustering algorithms, including support vector machines, and provides simple and efficient tools for data mining and data analysis. It is built on NumPy, SciPy, and matplotlib, and is open source and commercially usable under the BSD license.
Scikit-learn Algorithm Cheat Sheet
This machine learning cheat sheet will help you find the right estimator for the job, which is often the most difficult part. The flowchart also points you to the documentation for each estimator, which will help you learn more about each kind of problem and how to solve it.
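Whichever estimator the flowchart leads you to, the workflow it plugs into is the same. Here is a minimal sketch using `LinearRegression` as the chosen estimator on a tiny, perfectly linear toy data set:

```python
# The standard scikit-learn pattern: pick an estimator, fit, predict.
from sklearn.linear_model import LinearRegression

X = [[1.0], [2.0], [3.0], [4.0]]   # one feature per sample
y = [2.0, 4.0, 6.0, 8.0]           # a perfectly linear target: y = 2x

model = LinearRegression()
model.fit(X, y)
print(model.predict([[5.0]]))      # close to 10.0
```

Swapping in a classifier or clustering algorithm from the cheat sheet changes only the import and the estimator line; `fit` and `predict` stay the same.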
If you like these cheat sheets, you can let me know here.

Machine Learning: Scikit-Learn Algorithm for Azure Machine Learning Studio

Scikit-Learn Algorithm for Azure Machine Learning Studio Cheat Sheet
Data Science with Python Cheat Sheets
TensorFlow Cheat Sheet
TensorFlow is a free and open-source software library for dataflow and differentiable programming across a range of tasks. It is a symbolic math library, and is also used for machine learning applications such as neural networks.
If you like these cheat sheets, you can let me know here.

Data Science: Python Basics Cheat Sheet
Python Basics Cheat Sheet
Python is one of the most popular data science tools due to its low and gradual learning curve and the fact that it is a fully-fledged programming language.
PySpark RDD Basics Cheat Sheet
“At a high level, every Spark application consists of a driver program that runs the user's main function and executes various parallel operations on a cluster. The main abstraction Spark provides is a resilient distributed dataset (RDD), which is a collection of elements partitioned across the nodes of the cluster that can be operated on in parallel. RDDs are created by starting with a file in the Hadoop file system (or any other Hadoop-supported file system), or an existing Scala collection in the driver program, and transforming it. Users may also ask Spark to persist an RDD in memory, allowing it to be reused efficiently across parallel operations. Finally, RDDs automatically recover from node failures.” via spark.apache.org
NumPy Basics Cheat Sheet
NumPy is a library for the Python programming language, adding support for large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions that operate on these arrays.
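A short sketch of the kind of operations the NumPy cheat sheet summarises: creating an array, then applying vectorised, element-wise operations and axis-wise reductions without writing Python loops.

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])  # a 2x2 array

print(a.shape)        # (2, 2)
print(a * 2)          # element-wise multiplication: [[2 4], [6 8]]
print(a.sum(axis=0))  # column sums: [4 6]
```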
If you like these cheat sheets, you can let me know here.
Bokeh Cheat Sheet
“Bokeh is an interactive visualization library that targets modern web browsers for presentation. Its goal is to provide elegant, concise construction of versatile graphics, and to extend this capability with high-performance interactivity over very large or streaming datasets. Bokeh can help anyone who would like to quickly and easily create interactive plots, dashboards, and data applications.” from Bokeh.Pydata.com
Keras Cheat Sheet
Keras is an open-source neural-network library written in Python. It is capable of running on top of TensorFlow, Microsoft Cognitive Toolkit, Theano, or PlaidML. Designed to enable fast experimentation with deep neural networks, it focuses on being user-friendly, modular, and extensible.
Pandas Basics Cheat Sheet
Pandas is a software library written for the Python programming language for data manipulation and analysis. In particular, it offers data structures and operations for manipulating numerical tables and time series. It is free software released under the three-clause BSD license.
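A minimal sketch of the operations the pandas cheat sheets cover: build a DataFrame from a dictionary, filter rows with a boolean condition, and compute a grouped aggregate. The column names here are made up for illustration.

```python
import pandas as pd

# A tiny table: two columns, three rows.
df = pd.DataFrame({
    "city": ["Oslo", "Oslo", "Bergen"],
    "temp": [20, 22, 18],
})

warm = df[df["temp"] > 19]                  # boolean row filtering
means = df.groupby("city")["temp"].mean()   # grouped aggregation
print(means["Oslo"])  # 21.0
```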
If you like these cheat sheets, you can let me know here.
Pandas Cheat Sheet: Data Wrangling in Python
The term “data wrangler” is starting to infiltrate pop culture. In the 2017 movie Kong: Skull Island, one of the characters, played by actor Marc Evan Jackson, is introduced as “Steve Woodward, our data wrangler”.
Data Wrangling with Pandas Cheat Sheet
Data Wrangling with dplyr and tidyr Cheat Sheet
If you like these cheat sheets, you can let me know here.

Data Science: SciPy Linear Algebra

SciPy Linear Algebra Cheat Sheet
SciPy builds on the NumPy array object and is part of the NumPy stack which includes tools like Matplotlib, pandas and SymPy, and an expanding set of scientific computing libraries. This NumPy stack has similar users to other applications such as MATLAB, GNU Octave, and Scilab. The NumPy stack is also sometimes referred to as the SciPy stack.
Matplotlib Cheat Sheet
Matplotlib is a plotting library for the Python programming language and its numerical mathematics extension NumPy. It provides an object-oriented API for embedding plots into applications using general-purpose GUI toolkits like Tkinter, wxPython, Qt, or GTK+. There is also a procedural “pylab” interface based on a state machine (like OpenGL), designed to closely resemble that of MATLAB, though its use is discouraged. SciPy makes use of matplotlib.
Pyplot is a matplotlib module that provides a MATLAB-like interface. Matplotlib is designed to be as usable as MATLAB, with the ability to use Python, and with the advantage that it is free.
Data Visualization with ggplot2 Cheat Sheet
Big-O Cheat Sheet
Big-O Algorithm Cheat Sheet: http://bigocheatsheet.com/
Data Science Cheat Sheet: https://www.datacamp.com/community/tutorials/python-data-science-cheat-sheet-basics
Data Wrangling Cheat Sheet: https://www.rstudio.com/wp-content/uploads/2015/02/data-wrangling-cheatsheet.pdf
Data Wrangling: https://en.wikipedia.org/wiki/Data_wrangling
Ggplot Cheat Sheet: https://www.rstudio.com/wp-content/uploads/2015/03/ggplot2-cheatsheet.pdf
Keras Cheat Sheet: https://www.datacamp.com/community/blog/keras-cheat-sheet#gs.DRKeNMs
Machine Learning Cheat Sheet: https://ai.icymi.email/new-machinelearning-cheat-sheet-by-emily-barry-abdsc/
Machine Learning Cheat Sheet: https://docs.microsoft.com/en-in/azure/machine-learning/machine-learning-algorithm-cheat-sheet
Matplotlib Cheat Sheet: https://www.datacamp.com/community/blog/python-matplotlib-cheat-sheet#gs.uEKySpY
Neural Networks Cheat Sheet: http://www.asimovinstitute.org/neural-network-zoo/
Neural Networks Graph Cheat Sheet: http://www.asimovinstitute.org/blog/
Numpy Cheat Sheet: https://www.datacamp.com/community/blog/python-numpy-cheat-sheet#gs.AK5ZBgE
Pandas Cheat Sheet: https://www.datacamp.com/community/blog/python-pandas-cheat-sheet#gs.oundfxM
Pandas Cheat Sheet: https://www.datacamp.com/community/blog/pandas-cheat-sheet-python#gs.HPFoRIc
Pyspark Cheat Sheet: https://www.datacamp.com/community/blog/pyspark-cheat-sheet-python#gs.L=J1zxQ
Scikit Cheat Sheet: https://www.datacamp.com/community/blog/scikit-learn-cheat-sheet
Scikit-learn Cheat Sheet: http://peekaboo-vision.blogspot.com/2013/01/machine-learning-cheat-sheet-for-scikit.html
Scipy Cheat Sheet: https://www.datacamp.com/community/blog/python-scipy-cheat-sheet#gs.JDSg3OI
TensorFlow Cheat Sheet: https://www.altoros.com/tensorflow-cheat-sheet.html
Data Science, Machine Learning, Deep Learning, and Artificial intelligence are really hot at this moment and offering a lucrative career to programmers with high pay and exciting work.
It's a great opportunity for programmers who are willing to learn these new skills and upgrade themselves and want to solve some of the most interesting real-world problems.
It's also important from the job perspective because Robots and Bots are getting smarter day by day, thanks to these technologies and most likely will take over some of the jobs which many programmers do today.
Hence, it's important for software engineers and developers to upgrade themselves with these skills. Programmers with these skills are also commanding significantly higher salaries as data science is revolutionizing the world around us.
You might already know that the Machine learning specialist is one of the top paid technical jobs in the world. However, most developers and IT professionals are yet to learn this valuable set of skills.
For those who don't know what Data Science, machine learning, or deep learning are: they are closely related terms, all pointing toward machines doing jobs that until now only humans could do, and analyzing the huge sets of data collected by modern applications.
Data Science, in particular, is a combination of concepts such as machine learning, visualization, data mining, programming, data munging, etc.
There are a lot of popular scientific Python libraries, such as NumPy, SciPy, scikit-learn, and Pandas, which are used by data scientists for analyzing data.
To be honest with you, I am also quite new to the Data Science and Machine Learning world, but I have been spending time since last year trying to understand this field and have done some research into the best resources to learn machine learning, data science, etc.
I am sharing all those resources in a series of blog posts like this. Earlier, I shared some courses to learn TensorFlow, one of the most popular machine learning libraries, and today I'll share some more to learn these technologies.
These are a combination of both free and paid resources which will help you understand key data science concepts and become a Data Scientist. Btw, I'll get paid if you happen to buy a course which is not free.
Here is my list of some of the best courses to learn Data Science, Machine Learning, and Deep Learning using the Python and R programming languages. As I have said, Data Science and machine learning work very closely together, hence some of these courses also cover machine learning.
If you are still on the fence about choosing Python or R for machine learning, let me tell you that both Python and R are great languages for data analysis and have good APIs and libraries; hence, I have included courses in both Python and R, and you can choose the one you like.
I personally like Python because of its versatile usage; it's the next best language on my list after Java. I am already using it for writing scripts and other web stuff, so it was an easy choice for me. It has also got some excellent libraries like scikit-learn and TensorFlow.
Data Science is also a combination of many skills, e.g. visualization, data cleaning, data mining, etc., and these courses provide a good overview of all these concepts and also present a lot of useful tools which can help you in the real world.
Machine Learning by Andrew Ng
This is probably the most popular course to learn machine learning provided by Stanford University and Coursera, which also provides certification. You'll be tested on each and every topic that you learn in this course, and based on the completion and the final score that you get, you'll also be awarded the certificate.
This course is free, but you need to pay for the certificate if you want it. It does, however, provide value to you as a developer and gives you a good understanding of the mathematics behind all the machine learning algorithms that you come up with.
I personally really like this one. Andrew Ng takes you through the course using Octave, which is a good tool to test your algorithm before making it go live on your project.
1. Machine Learning A-Z: Hands-On Python and R in Data Science
This is probably the best hands on course on Data Science and machine learning online. In this course, you will learn to create Machine Learning Algorithms in Python and R from two Data Science experts.
This is a great course for students and programmers who want to make a career in Data Science and also Data Analysts who want to level up in machine learning.
It's also good for any intermediate level programmers who know the basics of machine learning, including the classical algorithms like linear regression or logistic regression, but who want to learn more about it and explore all the different fields of Machine Learning.
Data science is the practice of transforming data into knowledge, and R is one of the most popular programming language used by data scientists.
In this course, you'll learn first learn about the practice of data science, the R programming language, and how they can be used to transform data into actionable insight.
Next, you'll learn how to transform and clean your data, create and interpret descriptive statistics, data visualizations, and statistical models.
Finally, you'll learn how to handle Big Data, make predictions using machine learning algorithms, and deploy R to production.
Btw, you would need a Pluralsight membership to get access this course, but if you don't have one you can still check out this course by taking their 10-day free Pass, which provides 200 minutes of access to all of their courses for free.
3. Harvard Data Science Course
The course is a combination of various data science concepts, such as machine learning, visualization, data mining, programming, data munging, etc.
I suggest you complete the machine learning course on Coursera before taking this course, as machine learning concepts such as PCA (dimensionality reduction), k-means, and logistic regression are not covered in depth.
But remember, you have to invest a lot of time to complete this course; the homework exercises, in particular, are very challenging.
In short, if you are looking for an online course in data science (using Python), there is no better course than Harvard's CS 109. You need some background in programming and knowledge of statistics to complete this course.
4. Want to be a Data Scientist? (FREE)
This is a great introductory course on what Data Scientist do and how you can become a data science professional. It's also free and you can get it on Udemy.
If you have just heard about Data Science and are excited about it but don't know what it really means, then this is the course you should attend first.
It's a small course but packed with big punches. You will understand what Data Science is, appreciate the work Data Scientists do on a daily basis, and differentiate the various roles in Data Science and the skills needed to perform them.
You will also learn about the challenges Data Scientists face. In short, this course will give you all the knowledge to make a decision on whether Data Science is the right path for you or not.
5. Intro to Data Science by Udacity
This is another good Introductory course on Data science which is available for free on Udacity, another popular online course website.
In this course, you will learn about essential data science concepts, e.g. data manipulation, data analysis with statistics and machine learning, data communication with information visualization, and data at scale while working with Big Data.
This is a free course and it's also the first step towards a new career with the Data Analyst Nanodegree Program offered by Udacity.
6. Data Science Certification Training --- R Programming
This is another good course to learn Data Science with R. In this course, you will not only learn the R programming language but also get some hands-on experience with statistical modeling techniques.
The course has real-world examples of how analytics have been used to significantly improve a business or industry.
If you are interested in learning some practical analytic methods that don't require a ton of maths background to understand, this is the course for you.
7. Intro To Data Science Course by Coursera
This course provides a broad introduction to various concepts of data science. The first programming exercise, "Twitter Sentiment Analysis in Python", is both fun and challenging; you analyze tons of Twitter messages to find out their sentiment, e.g. negative or positive.
Btw, it's not so good for beginners, especially if you don't know Python and SQL, but if you do and have a basic understanding of Data Science, then this is a great course.
8. Python for Data Science and Machine Learning Bootcamp
There is no doubt that Python is probably the best language, apart from R for Data Analysis and that's why it's hugely popular among Data Scientists.
This course will teach you how to use all the important Python scientific and machine learning libraries: TensorFlow, NumPy, Pandas, Seaborn, Matplotlib, Plotly, scikit-learn, and many more libraries which I have explained earlier in my list of useful machine learning libraries.
It's a very comprehensive course, and you will learn how to use the power of Python to analyze data, create beautiful visualizations, and use powerful machine learning algorithms!
9. Data Science A-Z: Real-Life Data Science Exercises Included
This is another great hands-on course on Data Science from Udemy. It promises to teach you Data Science step by step through real analytics examples: data mining, modeling, Tableau visualization, and more.
This course will give you so many practical exercises that the real world will seem like a piece of cake when you complete this course.
The homework exercises are also very thought-provoking and challenging. In short, If you love doing stuff then this is a course for you.
10. Data Science, Deep Learning and Machine Learning with Python
If you've got some programming or scripting experience, this course will teach you the techniques used by real data scientists and machine learning practitioners in the tech industry --- and help you to become a data scientist.
The topics in this course come from an analysis of real requirements in data scientist job listings from the biggest tech employers, which makes it even more special and useful.
That's all about some of the popular courses to learn Data Science. As I said, there is a lot of demand for good data analysts, and there are not many developers out there to fulfill that demand.
It's a great chance for programmers, especially those who have a good knowledge of maths and statistics, to make a career in machine learning and data analytics. You will be rewarded with exciting work and incredible pay.
Other useful Data Science and Machine Learning resources
Thanks, you made it to the end of the article... Good luck with your Data Science and Machine Learning journey! It's certainly not going to be easy, but by following these courses, you are one step closer to becoming the Machine Learning Specialist you always wanted to be.