Learn NumPy Arrays With Examples

In this Python NumPy tutorial, you will understand each aspect of NumPy, with examples along the way.

So, let’s get started!

What is Python NumPy?

NumPy is a Python package whose name stands for 'Numerical Python'. It is the core library for scientific computing: it contains a powerful N-dimensional array object and provides tools for integrating with languages such as C and C++. It is also useful for linear algebra, random number generation, and more. A NumPy array can also be used as an efficient multi-dimensional container for generic data. Now, let me tell you what exactly a Python NumPy array is.

**NumPy Array:** A NumPy array is a powerful N-dimensional array object arranged in rows and columns. We can initialize NumPy arrays from nested Python lists and access their elements.
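For example, here is a quick sketch of building an array from a nested list and reading a single element:

import numpy as np

# initialize a 2-D array from a nested Python list
a = np.array([[1, 2], [3, 4]])
print(a[1, 0]) # element in row 1, column 0 -> 3

Before running code like this, though, the next question that will come to your mind is: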

How do I install NumPy?

To install Python NumPy, go to your command prompt and type "pip install numpy". Once the installation is complete, go to your IDE (for example, PyCharm) and simply import it by typing "import numpy as np".

Moving ahead in this **Python NumPy tutorial**, let us understand what exactly a multi-dimensional NumPy array is.

Picture a grid of elements, each stored in its own memory location. Such an array is said to be two-dimensional because it has rows as well as columns; for example, a grid with 4 rows and 3 columns.

Let us see how it is implemented in PyCharm:

Single-dimensional NumPy Array:

import numpy as np
a=np.array([1,2,3])
print(a)

Output – [1 2 3]

Multi-dimensional Array:

import numpy as np
a = np.array([(1,2,3),(4,5,6)])
print(a)

Output – [[1 2 3]
 [4 5 6]]

Many of you must be wondering why we use Python NumPy when we already have Python lists. So, let us understand this with some examples in this Python NumPy tutorial.

Python NumPy Array v/s List

We use a Python NumPy array instead of a list for three main reasons:

  • Less memory
  • Fast
  • Convenient

The very first reason to choose a NumPy array is that it occupies less memory than a list. It is also fast in terms of execution, and at the same time it is very convenient to work with. These are the major advantages a NumPy array has over a list. Don't worry, I am going to prove each of these points practically in PyCharm. Consider the below example:

import numpy as np
import sys

S = range(1000)
print(sys.getsizeof(5)*len(S))   # approx. bytes used by a list of 1000 ints

D = np.arange(1000)
print(D.size*D.itemsize)         # bytes used by the array's elements

Output – 14000
4000

The above output shows that the memory consumed by the list (denoted by S) is 14000 bytes, whereas the memory consumed by the NumPy array is just 4000 bytes (the exact numbers depend on your Python build and platform). From this, you can conclude that there is a major difference between the two, and this is what makes the NumPy array the preferred choice over a list.

Next, let's talk about how a NumPy array is faster and more convenient than a list.

import numpy as np
import time

SIZE = 1000000

L1 = range(SIZE)
L2 = range(SIZE)
A1 = np.arange(SIZE)
A2 = np.arange(SIZE)

start = time.time()
result = [(x+y) for x,y in zip(L1,L2)]   # element-wise sum of the two lists
print((time.time()-start)*1000)          # elapsed time in milliseconds

start = time.time()
result = A1+A2                           # element-wise sum of the two arrays
print((time.time()-start)*1000)

Output – 380.9998035430908
49.99995231628418

In the above code, we have defined two lists and two NumPy arrays, and compared the time taken to compute their element-wise sums. If you look at the output, there is a significant difference between the two values: the list version took about 381 ms, whereas the NumPy version took about 50 ms. Hence, the NumPy array is faster than the list. Also notice the convenience: for the lists we had to write a loop over zip(L1, L2), whereas for the arrays we simply wrote A1+A2. That's why working with NumPy is much easier and more convenient than working with lists.

Therefore, the above examples prove the point as to why you should go for a Python NumPy array rather than a list!

Moving forward in this Python NumPy tutorial, let's focus on some of its operations.

Python NumPy Operations

  • ndim:

You can find the dimension of an array, i.e. whether it is a one-dimensional or a two-dimensional array. Let us see this practically. In the below code, the 'ndim' attribute tells us the number of dimensions of the array.

import numpy as np
a = np.array([(1,2,3),(4,5,6)])
print(a.ndim)

Output – 2

Since the output is 2, it is a two-dimensional (multi-dimensional) array.

  • itemsize:

You can find the byte size of each element. In the below code, I have defined a single-dimensional array, and the 'itemsize' attribute gives the size of each element.

import numpy as np
a = np.array([(1,2,3)])
print(a.itemsize)

Output – 4

So every element occupies 4 bytes in the above NumPy array. (The default integer is 32-bit on some platforms, such as Windows; on most 64-bit Linux and macOS builds you will see 8 instead.)
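The itemsize follows the array's dtype, which you can also set explicitly. For example:

import numpy as np

# force 64-bit integers regardless of the platform default
a = np.array([(1,2,3)], dtype=np.int64)
print(a.itemsize) # 8 bytes per element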

  • dtype:

You can find the data type of the elements stored in an array using the 'dtype' attribute, which reports the data type along with its size in bits. In the below code, I have defined an array and used this attribute.

import numpy as np
a = np.array([(1,2,3)])
print(a.dtype)

Output – int32

As you can see, the data type of the array is a 32-bit integer. Similarly, you can find the number of elements and the shape of the array using the 'size' and 'shape' attributes respectively.

import numpy as np
a = np.array([(1,2,3,4,5,6)])
print(a.size)
print(a.shape)

Output – 6
(1, 6)

Next, let us move forward and see what other operations you can perform with the Python NumPy module. We can also perform reshape and slicing operations using **Python NumPy**. But what exactly are reshape and slicing? Let me explain them one by one in this Python NumPy tutorial.
  • reshape:

Reshape changes the number of rows and columns, which gives a new view of the object. Now, let us take an example and reshape the below array:

Here we have 2 rows and 3 columns, which we will convert into 3 rows and 2 columns. Let me show you practically how it's done.

import numpy as np
a = np.array([(8,9,10),(11,12,13)])
print(a)
a=a.reshape(3,2)
print(a)

Output – [[ 8  9 10]
 [11 12 13]]
[[ 8  9]
 [10 11]
 [12 13]]

  • slicing:

As you can see, the 'reshape' function has worked its magic. Now, let's take another operation, i.e. slicing. Slicing is basically extracting a particular set of elements from an array; it works much like slicing a Python list. Let's start with a simple case: we have an array and we need a particular element (say 3) out of it. Consider the below example:

import numpy as np
a=np.array([(1,2,3,4),(3,4,5,6)])
print(a[0,2])

Output – 3

Here, the row (1,2,3,4) is index 0 and (3,4,5,6) is index 1 of the NumPy array. Therefore, we have printed the element at column index 2 of the zeroth row.

Taking one step forward, let's say we need the element at index 2 from both the zeroth and the first row. Let's see how you can perform this operation:

import numpy as np
a=np.array([(1,2,3,4),(3,4,5,6)])
print(a[0:,2])

Output – [3 5]

Here the colon selects all rows starting from index 0. Then, index 2 picks the third column from each of those rows, which gives us the values 3 and 5 respectively.

Next, just to remove any confusion, let's say we add one more row but do not want its element included in the result. What can we do in such a case?

Consider the below code:

import numpy as np
a=np.array([(8,9),(10,11),(12,13)])
print(a[0:2,1])

Output – [9 11]

As you can see in the above code, only 9 and 11 get printed. The slice 0:2 selects rows 0 and 1 but excludes row 2, so the element of the third row is left out. Had we sliced all rows instead, we would get all the elements, i.e. [ 9 11 13].
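Slices can also take a step. For example, to pick column 1 of every other row of the same array:

import numpy as np
a = np.array([(8,9),(10,11),(12,13)])
print(a[::2, 1]) # rows 0 and 2, column 1 -> [ 9 13]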
  • linspace:
This is another operation in python numpy which returns evenly spaced numbers over a specified interval. Consider the below example:

import numpy as np
a=np.linspace(1,3,10)
print(a)

Output – [ 1. 1.22222222 1.44444444 1.66666667 1.88888889 2.11111111 2.33333333 2.55555556 2.77777778 3. ]

As you can see in the result, it has printed 10 evenly spaced values between 1 and 3, including both endpoints.
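linspace also takes optional parameters; for instance, retstep=True returns the spacing between consecutive values alongside the array:

import numpy as np

# ask linspace for the step size along with the values
a, step = np.linspace(1, 3, 10, retstep=True)
print(step) # (3-1)/9, roughly 0.2222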
  • max/min:
Next, we have some more operations in numpy such as to find the minimum, maximum as well the sum of the numpy array. Let’s go ahead in python numpy tutorial and execute it practically.

import numpy as np
 
a= np.array([1,2,3])
print(a.min())
print(a.max())
print(a.sum())

Output – 1
3
6

You must be finding these pretty basic, but with the help of this knowledge you can perform much bigger tasks as well. Now, let's understand the concept of axis in NumPy.

Consider a 2×3 NumPy array. In NumPy, axis 0 runs down the rows and axis 1 runs across the columns, so summing along axis 0 adds up each column, while summing along axis 1 adds up each row. Now you must be wondering what the use of these axes is.

Suppose you want to calculate the sum of all the columns, then you can make use of axis. Let me show you practically, how you can implement axis in your PyCharm:

import numpy as np
a = np.array([(1,2,3),(3,4,5)])
print(a.sum(axis=0))

Output – [4 6 8]

Therefore, each column is summed: 1+3=4, 2+4=6 and 3+5=8. Similarly, if you replace the axis by 1, it will print [ 6 12], where each row is summed.
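A quick check of the axis=1 case:

import numpy as np
a = np.array([(1,2,3),(3,4,5)])
print(a.sum(axis=1)) # row sums: 1+2+3=6 and 3+4+5=12 -> [ 6 12]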
  • Square Root & Standard Deviation:
There are various mathematical functions that can be performed using python numpy. You can find the square root, standard deviation of the array. So, let’s implement these operations:

import numpy as np
a = np.array([(1,2,3),(3,4,5)])
print(np.sqrt(a))
print(np.std(a))

Output – [[ 1. 1.41421356 1.73205081]

[ 1.73205081 2. 2.23606798]]

1.29099444874

As you can see in the output above, the square roots of all the elements are printed. The standard deviation is also printed for the above array, i.e. how much, on average, the elements deviate from the mean of the array.
  • Addition Operation:
You can perform more operations on NumPy arrays, such as element-wise addition, subtraction, multiplication and division of two matrices. Let me go ahead in this Python NumPy tutorial and show it to you practically:

import numpy as np
x= np.array([(1,2,3),(3,4,5)])
y= np.array([(1,2,3),(3,4,5)])
print(x+y)

Output – [[ 2  4  6]
 [ 6  8 10]]

This is extremely simple, right? Similarly, we can perform other operations such as subtraction, multiplication and division; all of them work element-wise. Consider the below example:

import numpy as np
x= np.array([(1,2,3),(3,4,5)])
y= np.array([(1,2,3),(3,4,5)])
print(x-y)
print(x*y)
print(x/y)

Output – [[0 0 0]
 [0 0 0]]

[[ 1  4  9]
 [ 9 16 25]]

[[1. 1. 1.]
 [1. 1. 1.]]
  • Vertical & Horizontal Stacking:
Next, if you want to concatenate two arrays rather than add them, you can do it in two ways: vertical stacking and horizontal stacking. Let me show them one by one in this Python NumPy tutorial.

import numpy as np
x= np.array([(1,2,3),(3,4,5)])
y= np.array([(1,2,3),(3,4,5)])
print(np.vstack((x,y)))
print(np.hstack((x,y)))

Output – [[1 2 3]
 [3 4 5]
 [1 2 3]
 [3 4 5]]

[[1 2 3 1 2 3]
 [3 4 5 3 4 5]]
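Both are shorthands for np.concatenate with a particular axis; the sketch below produces the same results:

import numpy as np
x = np.array([(1,2,3),(3,4,5)])
y = np.array([(1,2,3),(3,4,5)])
print(np.concatenate((x, y), axis=0)) # same as np.vstack((x, y))
print(np.concatenate((x, y), axis=1)) # same as np.hstack((x, y))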
  • ravel:

There is one more operation, ravel, which flattens a NumPy array into a single dimension. Let me show you how it is implemented practically:

import numpy as np
x= np.array([(1,2,3),(3,4,5)])
print(x.ravel())

Output – [ 1 2 3 3 4 5]
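ravel returns a flattened view of the data where possible; x.reshape(-1) is an equivalent spelling, while x.flatten() always returns a copy:

import numpy as np
x = np.array([(1,2,3),(3,4,5)])
print(x.reshape(-1)) # same result as x.ravel(): [1 2 3 3 4 5]
print(x.flatten())   # also flattens, but always copies the data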

Let’s move forward in python numpy tutorial, and look at some of its special functions.

Python Numpy Special Functions

There are various special functions available in NumPy, such as sine, cosine, tan, log, etc. First, let's begin with the sine function, where we will learn to plot its graph. For that, we need to import a module called matplotlib. To understand the basics and practical implementation of this module, you can refer to the Matplotlib Tutorial. Moving ahead with this Python NumPy tutorial, let's see how these graphs are plotted.

import numpy as np
import matplotlib.pyplot as plt
x= np.arange(0,3*np.pi,0.1)
y=np.sin(x)
plt.plot(x,y)
plt.show()

Output – (plot of the sine wave for x from 0 to 3π)

Similarly, you can plot a graph for any trigonometric function such as cos, tan, etc. Let me show you one more example where you can plot the graph of another function, say tan:

import numpy as np
import matplotlib.pyplot as plt
x= np.arange(0,3*np.pi,0.1)
y=np.tan(x)
plt.plot(x,y)
plt.show()

Output – (plot of the tan function for x from 0 to 3π)

Moving forward with this Python NumPy tutorial, let's see some other special functions in NumPy, such as the exponential and logarithmic functions. In the exponential, the value e is approximately 2.718. For logarithms, note that np.log computes the natural log, i.e. log base e (often written ln), while np.log10 computes log base 10. So let's see how they are implemented practically:

import numpy as np

a = np.array([1,2,3])
print(np.exp(a))

Output – [ 2.71828183 7.3890561 20.08553692]

As you can see in the above output, the exponential values are printed: e raised to the power 1 is e, which gives 2.718…; similarly, e raised to the power 2 gives a value near 7.389, and so on. Next, in order to calculate the log, let's see how you can implement it:

import numpy as np
a= np.array([1,2,3])
print(np.log(a))

Output – [ 0. 0.69314718 1.09861229]

Here, we have calculated the natural log, which gives the values displayed above. Now, if you want log base 10 instead of the natural log, you can follow the below code:

import numpy as np
a= np.array([1,2,3])
print(np.log10(a))

Output – [ 0. 0.30103 0.47712125]

By this, we come to the end of this Python NumPy tutorial. We have covered all the basics, so you can start practicing now. The more you practice, the more you will learn.

Neural Network Using Python and Numpy


Understanding neural networks using Python and Numpy by coding

Motivation

If you are a junior data scientist who sort of understands how neural nets work, or a machine learning enthusiast who only knows a little about deep learning, this is the article that you cannot miss. Here is how you can build a neural net from scratch using NumPy in 9 steps — from data pre-processing to back-propagation — a must-do practice.

Basic understanding of machine learning, artificial neural network, Python syntax, and programming logic is preferred (but not necessary as you can learn on the go).

Codes are available on Github.

1. Initialization

Step one. Import NumPy. Seriously.

import numpy as np 
np.random.seed(42) # for reproducibility
2. Data Generation

Deep learning is data-hungry. Although there are many clean datasets available online, we will generate our own for simplicity: for inputs a and b, we have outputs a+b, a-b, and |a-b|. 10,000 data points are generated.

X_num_row, X_num_col = [2, 10000] # Row is no. of feature, col is no. of datum points
X_raw = np.random.rand(X_num_row,X_num_col) * 100
y_raw = np.concatenate(([(X_raw[0,:] + X_raw[1,:])], [(X_raw[0,:] - X_raw[1,:])], np.abs([(X_raw[0,:] - X_raw[1,:])])))
# for input a and b, output is a+b; a-b and |a-b|
y_num_row, y_num_col = y_raw.shape
3. Train-test Splitting

Our dataset is split into a training set (70%) and a testing set (30%). Only the training set is used for tuning the neural network; the testing set is used solely for performance evaluation once training is complete.

train_ratio = 0.7
num_train_datum = int(train_ratio*X_num_col)
X_raw_train = X_raw[:,0:num_train_datum]
X_raw_test = X_raw[:,num_train_datum:]
y_raw_train = y_raw[:,0:num_train_datum]
y_raw_test = y_raw[:,num_train_datum:]
4. Data Standardization

Data in the training set is standardized so that each standardized feature has zero mean and unit variance. The scalers computed from the training set are then applied to the testing set.

class scaler:
    def __init__(self, mean, std):
        self.mean = mean
        self.std = std

def get_scaler(row):
    mean = np.mean(row)
    std = np.std(row)
    return scaler(mean, std)

def standardize(data, scaler):
    return (data - scaler.mean) / scaler.std

def unstandardize(data, scaler):
    return (data * scaler.std) + scaler.mean

# Construct scalers from the training set

X_scalers = [get_scaler(X_raw_train[row,:]) for row in range(X_num_row)]
X_train = np.array([standardize(X_raw_train[row,:], X_scalers[row]) for row in range(X_num_row)])

y_scalers = [get_scaler(y_raw_train[row,:]) for row in range(y_num_row)]
y_train = np.array([standardize(y_raw_train[row,:], y_scalers[row]) for row in range(y_num_row)])

# Apply those scalers to the testing set

X_test = np.array([standardize(X_raw_test[row,:], X_scalers[row]) for row in range(X_num_row)])
y_test = np.array([standardize(y_raw_test[row,:], y_scalers[row]) for row in range(y_num_row)])

# Check that the data has been standardized

print([X_train[row,:].mean() for row in range(X_num_row)]) # should be close to zero
print([X_train[row,:].std() for row in range(X_num_row)]) # should be close to one

print([y_train[row,:].mean() for row in range(y_num_row)]) # should be close to zero
print([y_train[row,:].std() for row in range(y_num_row)]) # should be close to one

The scalers therefore do not contain any information from our testing set. We do not want our neural net to gain any information about the testing set before network tuning.

We have now completed the data pre-processing procedures in 4 steps.

5. Neural Net Construction



We represent a 'layer' as a class in Python. Every layer (except the input layer) has a weight matrix W, a bias vector b, and an activation function. Each layer is appended to a list called neural_net; that list is then a representation of your fully connected neural network.

class layer:
    def __init__(self, layer_index, is_output, input_dim, output_dim, activation):
        self.layer_index = layer_index # zero indicates input layer
        self.is_output = is_output # True indicates output layer, False otherwise
        self.input_dim = input_dim
        self.output_dim = output_dim
        self.activation = activation

        # the multiplication constant is sorta arbitrary
        if layer_index != 0:
            self.W = np.random.randn(output_dim, input_dim) * np.sqrt(2/input_dim)
            self.b = np.random.randn(output_dim, 1) * np.sqrt(2/input_dim)
# Change layers_dim to configure your own neural net!

layers_dim = [X_num_row, 4, 4, y_num_row] # input layer --- hidden layers --- output layers
neural_net = []

# Construct the net layer by layer

for layer_index in range(len(layers_dim)):
    if layer_index == 0: # input layer
        neural_net.append(layer(layer_index, False, 0, layers_dim[layer_index], 'irrelevant'))
    elif layer_index+1 == len(layers_dim): # output layer
        neural_net.append(layer(layer_index, True, layers_dim[layer_index-1], layers_dim[layer_index], activation='linear'))
    else: # hidden layers
        neural_net.append(layer(layer_index, False, layers_dim[layer_index-1], layers_dim[layer_index], activation='relu'))

# Simple check on overfitting

pred_n_param = sum([(layers_dim[layer_index]+1)*layers_dim[layer_index+1] for layer_index in range(len(layers_dim)-1)])
act_n_param = sum([neural_net[layer_index].W.size + neural_net[layer_index].b.size for layer_index in range(1,len(layers_dim))])
print(f'Predicted number of parameters: {pred_n_param}')
print(f'Actual number of parameters: {act_n_param}')
print(f'Number of data: {X_num_col}')

if act_n_param >= X_num_col:
    raise Exception('It will overfit.')

Finally, we do a sanity check on the number of parameters (the weights and biases; these are parameters, not hyperparameters) using the following formula, and by counting. The number of data points available should exceed the number of parameters, otherwise the model will definitely overfit.


    number of parameters = Σ_{l=1}^{L} (N^{l-1} + 1) · N^l

where N^l is the number of neurons at the l-th layer (N^0 being the input dimension) and L is the number of layers excluding the input layer; the +1 in each term accounts for the layer's bias vector.
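For the default layers_dim = [2, 4, 4, 3], this gives (2+1)·4 + (4+1)·4 + (4+1)·3 = 12 + 20 + 15 = 47 parameters, matching the count obtained from the W and b sizes.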

6. Forward Propagation

We define a function for forward propagation given a certain set of weights and biases. The connection between layers is defined in matrix form as:


    Z^l = W^l A^{l-1} + b^l,    A^l = σ(Z^l)

where σ is the element-wise activation function and A^0 is the input vector.

Activation functions are defined one by one. ReLU is implemented as a → max(a,0), whereas sigmoid function should return a → 1/(1+e^(-a)), and its implementation is left as an exercise to the reader.

def activation(input_, act_func):
    if act_func == 'relu':
        return np.maximum(input_, np.zeros(input_.shape))
    elif act_func == 'linear':
        return input_
    else:
        raise Exception('Activation function is not defined.')

def forward_prop(input_vec, layers_dim=layers_dim, neural_net=neural_net):
    neural_net[0].A = input_vec # define A in the input layer for for-loop convenience
    for layer_index in range(1,len(layers_dim)): # W,b,Z,A are undefined in the input layer
        neural_net[layer_index].Z = np.add(np.dot(neural_net[layer_index].W, neural_net[layer_index-1].A), neural_net[layer_index].b)
        neural_net[layer_index].A = activation(neural_net[layer_index].Z, neural_net[layer_index].activation)
    return neural_net[layer_index].A
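As a sketch of that sigmoid exercise (not part of the original code), the two hypothetical helpers below could back a 'sigmoid' branch in activation and in get_dactivation from step 7; note that sigmoid's derivative, written in terms of its output A, is A(1 - A):

def sigmoid(a):
    # element-wise 1 / (1 + e^(-a))
    return 1 / (1 + np.exp(-a))

def dsigmoid_from_output(A):
    # derivative of sigmoid expressed via its output A
    return A * (1 - A)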



7. Back-propagation

This is the trickiest part, and the one many of us struggle to understand. Once we have defined a loss metric e for evaluating performance, we would like to know how the loss metric changes when we perturb each weight or bias.

We want to know how sensitive each weight and bias is with respect to the loss metric.

This is represented by partial derivatives ∂e/∂W (denoted dW in code) and ∂e/∂b (denoted db in code) respectively, and can be calculated analytically.


For the MSE loss used below, the output layer gets dZ^L = ŷ - y, and each earlier layer gets

    dZ^l = ((W^{l+1})^T dZ^{l+1}) ⊙ σ′(A^l)
    ∂e/∂W^l = dZ^l (A^{l-1})^T
    ∂e/∂b^l = dZ^l

where ⊙ represents element-wise multiplication.

These back-propagation equations assume only one datum point y is compared at a time. The gradient update process would then be very noisy, as the performance of each iteration is subject to one datum point only. Multiple datum points can be used to reduce the noise, where ∂W(y_1, y_2, …) would be the mean of ∂W(y_1), ∂W(y_2), …, and likewise for ∂b. This is not shown in the equations above, but it is implemented in the code below (the division by num_train_datum).

def get_loss(y, y_hat, metric='mse'):
    if metric == 'mse':
        individual_loss = 0.5 * (y_hat - y) ** 2
        return np.mean([np.linalg.norm(individual_loss[:,col], 2) for col in range(individual_loss.shape[1])])
    else:
        raise Exception('Loss metric is not defined.')

def get_dZ_from_loss(y, y_hat, metric):
    if metric == 'mse':
        return y_hat - y
    else:
        raise Exception('Loss metric is not defined.')

def get_dactivation(A, act_func):
    if act_func == 'relu':
        return np.maximum(np.sign(A), np.zeros(A.shape)) # 1 if input > 0, 0 otherwise
    elif act_func == 'linear':
        return np.ones(A.shape)
    else:
        raise Exception('Activation function is not defined.')

def backward_prop(y, y_hat, metric='mse', layers_dim=layers_dim, neural_net=neural_net, num_train_datum=num_train_datum):
    for layer_index in range(len(layers_dim)-1,0,-1):
        if layer_index+1 == len(layers_dim): # output layer
            dZ = get_dZ_from_loss(y, y_hat, metric)
        else: # hidden layers
            dZ = np.multiply(np.dot(neural_net[layer_index+1].W.T, dZ),
                             get_dactivation(neural_net[layer_index].A, neural_net[layer_index].activation))
        dW = np.dot(dZ, neural_net[layer_index-1].A.T) / num_train_datum
        db = np.sum(dZ, axis=1, keepdims=True) / num_train_datum

        neural_net[layer_index].dW = dW
        neural_net[layer_index].db = db

8. Iterative Optimization

We now have every building block for training a neural network.

Once we know the sensitivities of the weights and biases, we iteratively minimize (hence the minus sign) the loss metric by gradient descent, using the following update rule:

W = W - learning_rate * ∂W
b = b - learning_rate * ∂b



learning_rate = 0.01
max_epoch = 1000000

for epoch in range(1,max_epoch+1):
    y_hat_train = forward_prop(X_train) # update y_hat
    backward_prop(y_train, y_hat_train) # update (dW,db)

    for layer_index in range(1,len(layers_dim)): # update (W,b)
        neural_net[layer_index].W = neural_net[layer_index].W - learning_rate * neural_net[layer_index].dW
        neural_net[layer_index].b = neural_net[layer_index].b - learning_rate * neural_net[layer_index].db

    if epoch % 100000 == 0:
        print(f'{get_loss(y_train, y_hat_train):.4f}')

The training loss printed every 100,000 epochs should keep going down as the optimization iterates.

9. Testing

The model generalizes well if the testing loss is not much higher than the training loss. We also make some test cases to see how the model performs.

# test loss
print(get_loss(y_test, forward_prop(X_test)))

def predict(X_raw_any):
    # standardize the raw input with the training-set scalers, run the net,
    # then map the standardized prediction back to the original units
    X_any = np.array([standardize(X_raw_any[row,:], X_scalers[row]) for row in range(X_num_row)])
    y_hat = forward_prop(X_any)
    y_hat_any = np.array([unstandardize(y_hat[row,:], y_scalers[row]) for row in range(y_num_row)])
    return y_hat_any

predict(np.array([[30,70],[70,30],[3,5],[888,122]]).T)
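For instance, the first test case [30, 70] should come out close to [100, -40, 40], i.e. a+b, a-b and |a-b|, while the last case [888, 122] will likely be far off, since it lies well outside the 0 to 100 range the network was trained on.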


The Takeaway

This is how you can build a neural net from scratch using NumPy in 9 steps. Some of you might have already built neural nets using high-level frameworks such as TensorFlow, PyTorch, or Keras. However, building a neural net using only low-level libraries enables us to truly understand the mathematics behind the mystery.

My implementation by no means is the most efficient way to build and train a neural net. There is so much room for improvement but that is a story for another day. Codes are available on Github. Happy coding!


Learn NumPy Fundamentals - Python Library for Data Science


All the basics to start using the Python library NumPy. In this course I'll cover the basics of using NumPy, with several interactive course videos that will challenge you to learn how to use it.


We'll cover:

  • Why use NumPy?
  • NumPy Arrays
  • Array Math
  • Array Indexing
  • Advanced Indexing
  • Broadcasting
  • & much more!

What you'll learn

  • Python
  • NumPy

Python NumPy Tutorial for Beginners

Learn the basics of the NumPy library in this tutorial for beginners. It provides background information on how NumPy works and how it compares to Python's built-in lists. The video goes through how to write code with NumPy: it starts with the basics of creating arrays and then gets into more advanced material, covering creating arrays, indexing, math, statistics, reshaping, and more.

Code: https://github.com/KeithGalli/NumPy

Course Contents

⌨️ (01:15) What is NumPy

⌨️ (01:35) NumPy vs Lists (speed, functionality)

⌨️ (09:17) Applications of NumPy

⌨️ (11:08) The Basics (creating arrays, shape, size, data type)

⌨️ (16:08) Accessing/Changing Specific Elements, Rows, Columns, etc (slicing)

⌨️ (23:14) Initializing Different Arrays (1s, 0s, full, random, etc...)

⌨️ (31:34) Problem #1 (How do you initialize this array?)

⌨️ (33:42) Be careful when copying variables!

⌨️ (35:45) Basic Mathematics (arithmetic, trigonometry, etc.)

⌨️ (38:20) Linear Algebra

⌨️ (42:19) Statistics

⌨️ (43:57) Reorganizing Arrays (reshape, vstack, hstack)

⌨️ (47:29) Load data in from a file

⌨️ (50:20) Advanced Indexing and Boolean Masking

⌨️ (55:59) Problem #2 (How do you index these values?)

Thanks for reading

If you liked this post, share it with all of your programming buddies!


