Machine learning using TensorFlow for Absolute Beginners

Welcome to this article, where you will learn how to train your first machine learning model using TensorFlow and use it for predictions! As the title suggests, this tutorial is aimed at someone with no prior experience of training a machine learning model.

The only prerequisite for this course is a basic understanding of the Python programming language. I have tried to keep things simple here, and only introduce basic concepts of machine learning and neural networks.

What is TensorFlow: TensorFlow is an end-to-end open-source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries, and community resources that lets researchers push the state of the art in ML and developers easily build and deploy ML-powered applications.

We will be using Google Colab for this demo. Google Colab is a free cloud service that can be used to develop deep learning applications using popular libraries such as Keras, TensorFlow, PyTorch, and OpenCV.

Let’s jump straight to a sample problem we are going to solve using TensorFlow and Machine-Learning concepts.

Let’s assume that we are given the marketing budget spent (in thousands of dollars) by a media-services provider in the last 8 months, along with the number of new subscribers (also in thousands) for the same period, in the table given below:

[Table: marketing budget spent vs. new subscribers gained over the 8 months]

As you can see, there is a trend, or relationship, between the amount spent and new subscribers gained: as the amount increases, the number of new subscribers also increases.

If you work out the math using the theory of linear equations, you will find:

Subscribers gained = 2 × Amount Spent + 40

Our goal is to find this relationship between the amount spent on marketing and the number of subscribers gained using machine-learning techniques.

Using the relationship found above, we can also predict how many new subscribers can be expected if the organization spends some amount ‘x’ on marketing.

Let’s learn about some basic Machine Learning terminology before jumping into our modeling process:

  • Feature: The input(s) to our model. In this case, a single value: the marketing budget.
  • Labels: The output our model predicts. In this case, a single value: the number of new subscribers gained.
  • Example: A pair of inputs/outputs used during training. In our case, a pair of values from mar_budget and subs_gained at a specific index, such as (80, 200).
  • Model: A mathematical representation of a real-world process. In machine learning, a model is an artifact created by taking a class of algorithm and training it with features and labels.

Now let’s start with our Modeling Process.

Step 1: Creating a new Notebook

Click on the link below to visit Colab, then click on File, then New Python 3 notebook.

https://colab.research.google.com/notebooks/welcome.ipynb

Step 2: Import the dependencies/libraries we are going to use in this demo

Import NumPy, Matplotlib, and TensorFlow as given in the snippet below:
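A minimal import cell might look like this (assuming TensorFlow 2.x, where Keras is bundled as `tf.keras`):

```python
import numpy as np               # numerical arrays
import matplotlib.pyplot as plt  # plotting
import tensorflow as tf          # the ML framework itself
```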

Step 3: Generate/import the set of data points

Let’s generate the set of data points we will use to train our model, in the form of two arrays: mar_budget, and subs_gained containing the subscribers gained for each value in mar_budget.
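The article’s exact table values aren’t reproduced here, so the sketch below generates data points consistent with the relationship Subscribers gained = 2 × Amount Spent + 40 (the specific values are assumptions):

```python
# Marketing budget in thousands of dollars (8 months; values assumed)
mar_budget = np.array([10, 20, 30, 40, 50, 60, 70, 80], dtype=float)

# New subscribers gained, in thousands, following subs = 2 * budget + 40
subs_gained = np.array([60, 80, 100, 120, 140, 160, 180, 200], dtype=float)

for budget, subs in zip(mar_budget, subs_gained):
    print(f"Budget = {budget}, Subscribers gained = {subs}")
```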

We will also plot our arrays to understand the relationship between mar_budget and subs_gained.

We’ll use Matplotlib to visualize this (you could use another plotting tool).

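A minimal plotting sketch (axis labels and styling are a matter of taste):

```python
plt.scatter(mar_budget, subs_gained)
plt.xlabel('Marketing Budget (thousands of dollars)')
plt.ylabel('New Subscribers Gained (thousands)')
plt.title('Marketing Budget vs. Subscribers Gained')
plt.show()
```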

As you can see, there is a linear relationship between the marketing budget spent and new subscribers gained. Our goal will be to find the equation of the line that fits all the points, i.e. explains this linear relationship, and that can later be used to predict labels for unseen data points/predictors.

(Note: The relationship between data points need not always be perfectly linear. In this example I am using perfectly linear data points, but in real-life scenarios there is hardly any dataset that is perfectly linear. Our goal is to find the most approximate line/curve, also called the model, that can be used to explain the relationship between predictors and labels. You can also generate or use non-linear data points for this case study.)

I will take this opportunity to state the principal assumption of linear regression:

Principal assumption of linear regression: there must be a linear relationship between the labels and the coefficients of the fitted equation.

Step 4: The next step is to separate our data into training and testing data. Training data is used to train our model, while testing data is kept aside and later used to verify the performance of our model, by comparing the actual labels of the test data with the labels our model predicts for it.
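One simple way to do this is to hold out the last two months for testing, leaving six pairs for training. (The split choice is an assumption; the article only tells us that the budget value 80 ends up outside the training data.)

```python
# First 6 pairs for training, last 2 pairs (including budget = 80) for testing
train_budget, test_budget = mar_budget[:6], mar_budget[6:]
train_subs, test_subs = subs_gained[:6], subs_gained[6:]
```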

Step 5: Creating the model

We will use the simplest possible model we can, a Dense network. Since the problem is straightforward, this network will require only a single layer, with a single neuron.

Build a layer:

We’ll call the layer layer_0 and create it by instantiating tf.keras.layers.Dense with the following configuration:

  • input_shape=[1]: This specifies that the input to this layer is a single value. That is, the shape is a one-dimensional array with one member. Since this is the first (and only) layer, that input shape is the input shape of the entire model. The single value is a floating point number, representing marketing_budget.
  • units=1: This specifies the number of neurons in the layer. The number of neurons defines how many internal variables the layer has to try to learn how to solve the problem. Since this is the final layer, it is also the size of the model’s output — a single float value representing new subscribers gained. (In a multi-layered network, the size and shape of the layer would need to match the input_shape of the next layer.)
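Following that configuration, the layer can be built like this:

```python
layer_0 = tf.keras.layers.Dense(units=1, input_shape=[1])
```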

Assemble layers into the model:

Once layers are defined, they need to be assembled into a model. The Sequential model definition takes a list of layers as its argument, specifying the calculation order from input to output.

This model has just a single layer, layer_0.
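A sketch of the assembly step:

```python
model = tf.keras.Sequential([layer_0])
```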

Note: You will often see the layers defined inside the model definition itself, rather than beforehand, as below:
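```python
model = tf.keras.Sequential([
    tf.keras.layers.Dense(units=1, input_shape=[1])
])
```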

Step 6: Compile the model, with loss and optimizer functions

Before training, the model has to be compiled. When compiled for training, the model is given:

  • Loss function: A way of measuring how far off predictions are from the desired outcome. (The measured difference is called the “loss”.)
  • Optimizer function: A way of adjusting internal values in order to reduce the loss.
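A compile call matching the choices discussed below (mean squared error for the loss, and the Adam optimizer with the 0.1 learning rate mentioned later):

```python
model.compile(loss='mean_squared_error',
              optimizer=tf.keras.optimizers.Adam(0.1))
```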

These parameters are used during training (model.fit(), below) to first calculate the loss at each point, and then improve it. In fact, the act of calculating the current loss of a model and then improving it is precisely what training is.

During training, the optimizer function is used to calculate adjustments to the model’s internal variables. The goal is to adjust the internal variables until the model (which is really a math function) mirrors the actual equation for converting marketing budget to new subscribers gained.

TensorFlow uses numerical analysis to perform this tuning, and all this complexity is hidden from you so we will not go into the details here. What is useful to know about these parameters are:

The loss function (mean squared error) and the optimizer (Adam) used here are standard for simple models like this one, but many others are available. It is not important to know how these specific functions work at this point.

One part of the optimizer you may need to think about when building your own models is the learning rate (0.1 in the code above). This is the step size taken when adjusting values in the model. If the value is too small, it will take too many iterations to train the model; too large, and accuracy goes down. Finding a good value often involves some trial and error, but it usually lies between 0.001 (the default) and 0.1.

Step 7: Train the model by calling the fit method.
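A fit call consistent with the training setup described later in this article (six training pairs, 500 epochs), reusing the train arrays from the split sketched in Step 4:

```python
history = model.fit(train_budget, train_subs, epochs=500, verbose=False)
print("Finished training the model")
```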

During training, the model takes in marketing budget values, performs a calculation using the current internal variables (called “weights”), and outputs values which are meant to be the new subs gained. Since the weights are initially set randomly, the output will not be close to the correct value. The difference between the actual output and the desired output is calculated using the loss function, and the optimizer function directs how the weights should be adjusted.

This cycle of calculating, comparing, adjusting is controlled by the fit method. The first argument is the inputs, the second argument is the desired outputs.

The epochs argument specifies how many times this cycle should be run, and the verbose argument controls how much output the method produces.

Optional Step: Display training statistics

The fit method returns a history object. We can use this object to plot how the loss of our model goes down after each training epoch. A high loss means that the values of new subs gained that the model predicts are far from the corresponding actual values.
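A minimal sketch of that plot:

```python
plt.xlabel('Epoch Number')
plt.ylabel('Loss Magnitude')
plt.plot(history.history['loss'])
plt.show()
```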

As you can see, our model improves very quickly at first and then has a steady, slow improvement until it is very near “perfect” towards the end:

Figure: Epochs vs. Loss

Step 8: Use the model to predict values

Now you have a model that has been trained to learn the relationship between marketing_budget and new_subs_gained. You can use the predict method to have it calculate the new_subs_gained for a previously unseen marketing_budget.

So, for example, if the marketing_budget value is 80 thousand dollars, what do you think the new_subs_gained result will be?

Take a guess before you run this code or refer to your train_data:
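A sketch of the prediction call (with a well-trained model, the output should be close to 2 × 80 + 40 = 200):

```python
print(model.predict(np.array([80.0])))
```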

Next, we will predict labels for all test data points and compare them with their actual data points:
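For example (variable names follow the assumed split above):

```python
predicted_subs = model.predict(test_budget)
print("Actual subs gained:   ", test_subs)
print("Predicted subs gained:", predicted_subs.flatten())
```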

Step 9: Verifying the Model accuracy using Performance Metric

Let’s check the goodness of fit of our model using the r2_score (R-squared value).

R^2 is a statistic that will give some information about the goodness of fit of a model. In regression, the R^2 coefficient of determination is a statistical measure of how well the regression predictions approximate the real data points.

An R² of 1 indicates that the regression predictions perfectly fit the data.
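A sketch using scikit-learn’s implementation (assuming scikit-learn is available, as it is on Colab):

```python
from sklearn.metrics import r2_score

print(r2_score(test_subs, predicted_subs.flatten()))
```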

So this was our final step in the modeling process.

Here we have chosen the R² score as our performance metric; you can pick any other suitable metric to measure the performance of the model.

Let’s review what we have done during our modeling process:

  • We created a model with a Dense layer.
  • We trained it with 3000 examples (6 pairs, over 500 epochs).
  • Our model tuned the variables (weights) in the Dense layer until it was able to return the correct new_subs_gained value for any marketing_budget value.
  • We verified this using our test data. (Remember, 80 was not part of our training data.)
  • We also measured the goodness of prediction for our model using r2_score.

A little Thought experiment

Just for fun, what if we create a new model with 3 more Dense layers with different units, which therefore also has more variables?
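One possible version of such a model, with three Dense layers (the layer sizes here are arbitrary choices for illustration):

```python
layer_0 = tf.keras.layers.Dense(units=4, input_shape=[1])
layer_1 = tf.keras.layers.Dense(units=4)
layer_2 = tf.keras.layers.Dense(units=1)
model = tf.keras.Sequential([layer_0, layer_1, layer_2])
model.compile(loss='mean_squared_error',
              optimizer=tf.keras.optimizers.Adam(0.1))
model.fit(train_budget, train_subs, epochs=500, verbose=False)
print(model.predict(np.array([80.0])))
```

Try it and see whether the extra layers and variables still recover the same simple linear relationship.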

In this case study, we used a simple linear regression problem with one predictor and one label. The same steps and concepts can be extended to more complex multiple linear regression problems (with n predictors and one label), as well as classification problems.

I am also sharing my notebook for your reference. In case you get stuck somewhere, feel free to access it from this link:

https://colab.research.google.com/drive/1mDakjr9yQDzc3MDPFimMzqFeC9yvSU4D#scrollTo=u_qOSObBO4H7

TO ADD: Assignment Data for Multiple Linear Regression

I will end this article with a definition of machine learning methodology which I think is easily understandable and generalizable:

In machine learning, instead of writing an algorithm to solve a problem, we use a class of learning methodology (linear regression in this case study) and pass it historical data to generate the algorithm, which can then be verified and used to solve the problem.

What is your definition of machine learning? Write it in the comments.

This was an introduction to machine learning and TensorFlow. If you found this article interesting and want to explore more of this field, you can follow and connect with me.

I will also be adding a few sample assignments and questions to this article later, so keep an eye out and bookmark this post. Please share it with others too.

Top Machine Learning Framework: 5 Machine Learning Frameworks of 2019

Machine Learning (ML) is one of the fastest-growing technologies today. ML has a lot of frameworks for building a successful app, so as a developer you might be confused about choosing the right framework. Here we have curated the top 5 machine learning frameworks that put cutting-edge technology in your hands.

Thanks to machine learning frameworks, mobile phones and tablets are now powerful enough to run software that can learn and react in real time. Machine learning is a complex discipline, but implementing ML models is far less daunting and difficult than it used to be: a model now automatically improves its performance over time through interactions, experience, and, most importantly, the acquisition of useful data pertaining to the tasks allocated to it.

As we know, ML is considered a subset of Artificial Intelligence (AI): the scientific study of statistical models and algorithms that help a computing system accomplish designated tasks efficiently. Now, as a mobile app developer, when you plan to choose a machine learning framework you must keep the following things in mind.

The framework should be performance-oriented
Grasping and coding should be quick
It should allow distributing the computational process, i.e. the framework must support parallelization
It should provide a facility to create models and offer developer-friendly tooling
Let’s learn about the top five machine learning frameworks so you can make the right choice for your next ML application development project. Before we dive deeper into these frameworks, here are the different types of ML frameworks available:

Mathematically oriented
Neural network-based
Linear algebra tools
Statistical tools
Now, let’s have an insight into ML frameworks that will help you in selecting the right framework for your ML application.

Don’t Miss Out on These 5 Machine Learning Frameworks of 2019
#1 TensorFlow
TensorFlow is an open-source software library for dataflow programming across a range of tasks. The framework is based on computational graphs, which are essentially networks of code. Each node represents a mathematical operation that runs some function, as simple or as complex as multivariate analysis. This framework is said to be the best among all the ML libraries, as it supports complicated tasks and algorithms such as regression, classification, and neural networks.

This machine learning library demands additional effort while you are learning the TensorFlow Python framework; working with its n-dimensional arrays becomes easy once you have grasped Python’s frameworks and libraries.

The main benefit of this framework is flexibility. TensorFlow allows non-automatic migration to newer versions; it runs on GPUs, CPUs, servers, desktops, and mobile devices; and it provides auto-differentiation and good performance. A few goliaths like Airbus, Twitter, and IBM have used the TensorFlow framework innovatively.

#2 Firebase ML Kit
The Firebase machine learning framework is a library that offers, with minimal code, highly accurate pre-trained deep models. We at Space-O Technologies use this machine learning technology for image classification and object detection. The Firebase framework offers models both locally and on the Google Cloud.

Here is one of our ML experiments to help you understand the Firebase framework. First of all, we collected photos of an empty glass, a half-filled glass, and a full glass, and fed them into the machine learning algorithms. This helped the machine search and analyze according to the nature, behavior, and patterns of the object placed in front of it.

The first photo we targeted through the machine learning algorithms was of an empty glass. For the app to do its analysis and find the correct answer, we provided it with a number of empty-glass images prior to the experiment.
The next photo we targeted was of a half-filled glass. The core of a machine learning app is to assemble data and manage it as per its analysis; the app was able to recognize the image accurately because of the bits and pieces of glass data given to it beforehand.
The last one was a full-glass recognition image.
Note: For correct recognition, each label has to carry at least 100 images of a particular object.

#3 CAFFE (Convolutional Architecture for Fast Feature Embedding)
The CAFFE framework is the fastest way to apply deep neural networks. It is the machine learning framework best known for its Model Zoo: pre-trained ML models capable of performing a great variety of tasks. Image classification, machine vision, and recommender systems are some of the tasks performed easily through this ML library.

This framework is mostly written in C++. It can run on multiple hardware platforms and can switch between CPU and GPU with a single flag. It offers systematically organized MATLAB and Python interfaces.

For machine learning app development, this framework is mainly used in academic research projects and for designing startup prototypes. It is an apt machine learning technology for research experiments and industry deployment; it can process 60 million images per day with a single Nvidia K40 GPU.

#4 Apache Spark
Apache Spark is a cluster-computing framework with APIs in different languages such as Java, Scala, R, and Python. Spark’s machine learning library, MLlib, is considered foundational to Spark’s success. Building MLlib on top of Spark makes it possible to tackle distinct needs with a single tool instead of many disjointed ones.

The advantages of such an ML library include lower learning curves and less complex development and production environments, which ultimately result in a shorter time to deliver high-performing models. The key benefit of MLlib is that it allows data scientists to solve multiple data problems in addition to their machine learning problems.

It can easily handle graph computations (via GraphX), streaming (real-time calculations), and real-time interactive query processing with Spark SQL and DataFrames. Data professionals can focus on solving data problems instead of learning and maintaining a different tool for each scenario.

#5 Scikit-Learn
Scikit-learn is said to be one of the greatest feats of the Python community. This machine learning framework efficiently handles data mining and supports multiple practical tasks. It is built on foundations like SciPy, NumPy, and Matplotlib. This framework is known for supervised and unsupervised learning algorithms as well as cross-validation. Scikit-learn is largely written in Python, with some core algorithms written in Cython to achieve performance.

This machine learning framework can work on multiple tasks without compromising on speed. There are some remarkable machine learning apps built with this framework, from the likes of Spotify, Evernote, AWeber, and Inria.

With the help of machine learning, building ML-powered iOS and Android apps has become quite an easy process. With this emerging technology trend, varieties of data are available, computational processing has become cheaper and more powerful, and data storage is affordable. So if you are an app developer or have an idea for a machine learning app, you should definitely dive into this niche.

Conclusion
Do you still have queries or confusion regarding ML frameworks, machine learning app development, the difference between Artificial Intelligence and machine learning, building ML algorithms from scratch, or how this technology can help your business? Just fill in our contact form. Our sales representatives will get back to you shortly and resolve your queries. The consultation is absolutely free of cost.

Author Bio: This blog was written with the help of Jigar Mistry, who has over 13 years of experience in the web and mobile app development industry. He has guided the development of over 200 mobile apps and has special expertise in different mobile app categories like Uber-like apps, health and fitness apps, on-demand apps, and machine learning apps. We took his help to write this complete guide on machine learning technology and machine-learning app development.

Introduction to Machine Learning with TensorFlow.js

Learn how to build and train neural networks using the most popular Machine Learning framework for JavaScript, TensorFlow.js.

This is a practical workshop where you'll learn "hands-on" by building several different applications from scratch using TensorFlow.js.

If you have ever been interested in Machine Learning, if you want to get a taste for what this exciting field has to offer, if you want to be able to talk to other Machine Learning/AI specialists in a language they understand, then this workshop is for you.

Thanks for reading

If you liked this post, share it with all of your programming buddies!

Further reading about Machine Learning and TensorFlow.js

Machine Learning A-Z™: Hands-On Python & R In Data Science

Machine Learning In Node.js With TensorFlow.js

Machine Learning in JavaScript with TensorFlow.js

A Complete Machine Learning Project Walk-Through in Python

Top 10 Machine Learning Algorithms You Should Know to Become a Data Scientist

TensorFlow Vs PyTorch: Comparison of the Machine Learning Libraries

Libraries play an important role when developers decide to work on Machine Learning or Deep Learning research. In this article, we list 10 comparisons between TensorFlow and PyTorch, two popular Machine Learning libraries.

According to a survey based on a sample of 1,616 ML developers and data scientists, for every developer using PyTorch, there are 3.4 developers using TensorFlow. Below, we list 10 comparisons between these two Machine Learning libraries.

1 - Origin

PyTorch, developed by Facebook, is based on Torch, while TensorFlow, an open-source Machine Learning library developed by Google Brain, is based on the idea of data-flow graphs for building models.

2 - Features

TensorFlow has some attractive features, such as TensorBoard, which serves as a great option for visualising a Machine Learning model, and TensorFlow Serving, a dedicated gRPC server used for deploying models in production. On the other hand, PyTorch has several distinguishing features too, such as dynamic computation graphs, native support for Python, and support for CUDA, which ensures less time running the code and an increase in performance.

3 - Community

TensorFlow has been adopted by many researchers across various fields, from academia to business organisations. It has a much bigger community than PyTorch, which means it is easier to find resources and solutions for TensorFlow: there is a vast amount of tutorials, code, and support available. PyTorch, being the newcomer compared to TensorFlow, lacks these benefits.

4 - Visualisation

Visualisation plays a central role when presenting any project in an organisation. TensorFlow has TensorBoard for visualising Machine Learning models, which helps during training to spot errors quickly. It gives a real-time view of a model’s graph, depicting not only the graph’s structure but also accuracy curves in real time. PyTorch lacks this eye-catching feature.

5 - Defining Computational Graphs

In TensorFlow, defining a computational graph is a lengthy process, as you have to build and run the computations within sessions. You also have to use other constructs such as placeholders, variable scoping, etc. PyTorch, on the other hand, wins this point with its dynamic computation graphs, which help in building graphs dynamically: the graph is built at every point of execution, and you can manipulate it at run time.
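A rough illustration of the difference (the TensorFlow 1.x session style is shown via the `tf.compat.v1` shim, best run in a fresh runtime; both snippets compute the same thing):

```python
# TensorFlow 1.x style: define a static graph, then run it in a session
import tensorflow.compat.v1 as tf1
tf1.disable_eager_execution()

x = tf1.placeholder(tf1.float32)
y = x * 2
with tf1.Session() as sess:
    print(sess.run(y, feed_dict={x: 3.0}))   # 6.0

# PyTorch: the graph is built dynamically as each line executes
import torch
a = torch.tensor(3.0)
b = a * 2
print(b)                                      # tensor(6.)
```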

6 - Debugging

Since PyTorch uses dynamic computation, debugging is painless. You can easily use Python debugging tools like pdb or ipdb; for instance, you can put pdb.set_trace() at any line of code, proceed with execution of further computations, and pinpoint the cause of errors. For TensorFlow, you have to use the TensorFlow debugger tool, tfdbg, which lets you view the internal structure and state of running TensorFlow graphs during training and inference.
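For instance, a minimal sketch of dropping into the debugger inside a PyTorch computation:

```python
import pdb
import torch

def forward(x):
    y = x * 2
    pdb.set_trace()  # execution pauses here; inspect x and y interactively
    return y + 1

forward(torch.tensor(3.0))
```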

7 - Deployment

For now, deployment in TensorFlow is much better supported than in PyTorch. TensorFlow has the advantage of TensorFlow Serving, a flexible, high-performance serving system for deploying Machine Learning models, designed for production environments. In PyTorch, you can instead use the Python microframework Flask to deploy models.

8 - Documentation

The documentation of both frameworks is broadly available, with examples and tutorials in abundance for both libraries. You could say it is a tie between the two frameworks.

9 - Serialisation

Serialisation in TensorFlow can be counted as one of the advantages for this framework’s users: you can save your entire graph as a protocol buffer, and it can later be loaded in other supported languages. PyTorch lacks this feature.
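For example, a sketch assuming a TF2 Keras model like the one from the tutorial above (the path is purely illustrative):

```python
import tensorflow as tf

# Save the trained model's graph and weights in TensorFlow's
# protocol-buffer-based SavedModel format:
tf.saved_model.save(model, 'my_model')

# Later, the SavedModel can be loaded back, or served from
# other supported languages/runtimes:
loaded = tf.saved_model.load('my_model')
```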

10 - Device Management

By default, TensorFlow maps nearly all of the GPU memory of all GPUs visible to the process, which is a drawback; but because of its well-set defaults it automatically presumes that you want to run your code on the GPU, resulting in fair device management. PyTorch, on the other hand, keeps track of the currently selected GPU, and all the CUDA tensors you allocate will be created on that device.