How To Install and Use TensorFlow on Ubuntu 18.04?

How to set up TensorFlow on Ubuntu

This tutorial will help you set up TensorFlow 1.12 with a GPU using Docker and nvidia-docker. The container image itself is based on Ubuntu 16.04, so you can follow along on an Ubuntu 18.04 host as well.

TensorFlow is one of the most popular deep-learning libraries. It was created by Google and released as an open-source project in 2015. TensorFlow is used in both research and production environments. Installing it, however, can be cumbersome: the difficulty varies with your environment's constraints, and even more so when you're a data scientist who just wants to build neural networks.

Setting up TensorFlow to run on a GPU requires a few extra steps. In the following tutorial, we will go over the whole process.




Requirements: an Ubuntu machine with an NVIDIA GPU, plus Docker and nvidia-docker (covered in Step 1).

Step 1 — Prepare your environment with Docker and Nvidia-Docker

Docker is a tool designed to make it easier to create, deploy, and run applications by using containers. What exactly is a container? Containers allow data scientists and developers to wrap up an environment with all of the parts it needs — such as libraries and other dependencies — and ship it all out in one package.

To use Docker with GPUs and be able to use TensorFlow in your application, you'll need to install Docker with nvidia-docker. If you already have both installed, move on to the next step. Otherwise, you can follow our previous guide to installing nvidia-docker.
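If you want to sanity-check that setup, a common smoke test (a sketch assuming nvidia-docker2 and the nvidia/cuda base image) is to run nvidia-smi from inside a container; it should print your GPU's details:

$ docker run --runtime=nvidia --rm nvidia/cuda:9.0-base nvidia-smi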

Step 2 — Dockerfile

Docker can build images (environments) automatically by reading the instructions from a Dockerfile. A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image.

In our case, those commands will describe the installation of Python 3.6, CUDA 9.0, and cuDNN 7.2.1 — and of course the installation of TensorFlow 1.12 from source.

For this environment, we will use the following Dockerfile:

FROM nvidia/cuda:9.0-base-ubuntu16.04

RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential \
        cuda-command-line-tools-9-0 \
        cuda-cublas-dev-9-0 \
        cuda-cudart-dev-9-0 \
        cuda-cufft-dev-9-0 \
        cuda-curand-dev-9-0 \
        cuda-cusolver-dev-9-0 \
        cuda-cusparse-dev-9-0 \
        curl \
        git \
        libcudnn7=7.2.1.38-1+cuda9.0 \
        libcudnn7-dev=7.2.1.38-1+cuda9.0 \
        libnccl2=2.4.2-1+cuda9.0 \
        libnccl-dev=2.4.2-1+cuda9.0 \
        libcurl3-dev \
        libfreetype6-dev \
        libhdf5-serial-dev \
        libpng12-dev \
        libzmq3-dev \
        pkg-config \
        rsync \
        software-properties-common \
        unzip \
        zip \
        zlib1g-dev \
        wget \
        && \
    rm -rf /var/lib/apt/lists/* && \
    find /usr/local/cuda-9.0/lib64/ -type f -name 'lib*_static.a' -not -name 'libcudart_static.a' -delete && \
    rm /usr/lib/x86_64-linux-gnu/libcudnn_static_v7.a

# install python 3.6 and pip

RUN apt-get update
RUN apt-get install -y software-properties-common vim
RUN add-apt-repository ppa:jonathonf/python-3.6
RUN apt-get update

RUN apt-get install -y build-essential python3.6 python3.6-dev python3-pip python3.6-venv
RUN apt-get install -y git

RUN apt-get update && \
        apt-get install -y nvinfer-runtime-trt-repo-ubuntu1604-4.0.1-ga-cuda9.0 && \
        apt-get update && \
        apt-get install -y libnvinfer4=4.1.2-1+cuda9.0 && \
        apt-get install -y libnvinfer-dev=4.1.2-1+cuda9.0

RUN python3.6 -m pip install pip --upgrade
RUN python3.6 -m pip install wheel 
RUN python3.6 -m pip install six numpy wheel mock
RUN python3.6 -m pip install keras_applications
RUN python3.6 -m pip install keras_preprocessing

RUN ln -s /usr/bin/python3.6 /usr/bin/python


# Set up Bazel.

# Running bazel inside a `docker build` command causes trouble, cf:
#   https://github.com/bazelbuild/bazel/issues/134
# The easiest solution is to set up a bazelrc file forcing --batch.
RUN echo "startup --batch" >>/etc/bazel.bazelrc
# Similarly, we need to workaround sandboxing issues:
#   https://github.com/bazelbuild/bazel/issues/418
RUN echo "build --spawn_strategy=standalone --genrule_strategy=standalone" \
    >>/etc/bazel.bazelrc
# Install the most recent bazel release.
ENV BAZEL_VERSION 0.15.0
WORKDIR /
RUN mkdir /bazel && \
    cd /bazel && \
    curl -H "User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36" -fSsL -O https://github.com/bazelbuild/bazel/releases/download/$BAZEL_VERSION/bazel-$BAZEL_VERSION-installer-linux-x86_64.sh && \
    curl -H "User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36" -fSsL -o /bazel/LICENSE.txt https://raw.githubusercontent.com/bazelbuild/bazel/master/LICENSE && \
    chmod +x bazel-*.sh && \
    ./bazel-$BAZEL_VERSION-installer-linux-x86_64.sh && \
    cd / && \
    rm -f /bazel/bazel-$BAZEL_VERSION-installer-linux-x86_64.sh




# Download and build TensorFlow.
WORKDIR /tensorflow
RUN git clone --branch=r1.12 --depth=1 https://github.com/tensorflow/tensorflow.git .

# Configure the build for our CUDA configuration.
ENV CI_BUILD_PYTHON python3.6
ENV LD_LIBRARY_PATH /usr/local/cuda/extras/CUPTI/lib64:$LD_LIBRARY_PATH
ENV TF_NEED_CUDA 1
ENV TF_NEED_TENSORRT 1
ENV TF_CUDA_COMPUTE_CAPABILITIES=3.5,5.2,6.0,6.1,7.0
ENV TF_CUDA_VERSION=9.0
ENV TF_CUDNN_VERSION=7

RUN ln -s /usr/local/cuda/lib64/stubs/libcuda.so /usr/local/cuda/lib64/stubs/libcuda.so.1 && \
    LD_LIBRARY_PATH=/usr/local/cuda/lib64/stubs:${LD_LIBRARY_PATH} \
    tensorflow/tools/ci_build/builds/configured GPU \
    bazel build -c opt --copt=-mavx --config=cuda \
	--cxxopt="-D_GLIBCXX_USE_CXX11_ABI=0" \
        tensorflow/tools/pip_package:build_pip_package && \
    rm /usr/local/cuda/lib64/stubs/libcuda.so.1 && \
    bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/pip && \
    pip --no-cache-dir install --upgrade /tmp/pip/tensorflow-*.whl && \
    rm -rf /tmp/pip && \
    rm -rf /root/.cache
# Clean up pip wheel and Bazel cache when done.

WORKDIR /root

# TensorBoard
EXPOSE 6006
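The final EXPOSE line declares TensorBoard's default port. If you later want to reach TensorBoard from your host, publish the port when starting the container. For example (a sketch; the deeplearning tag is assigned in the next step):

$ docker run --runtime=nvidia -p 6006:6006 -it deeplearning /bin/bash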

Step 3 — Building the image from the Dockerfile

To build the image from the Dockerfile, simply run the docker build command. Keep in mind that this build process may take a few hours to complete. We recommend using the nohup utility so that the build keeps running even if your terminal session disconnects.

$ docker build -t deeplearning -f Dockerfile .

Note the trailing dot: it tells Docker to use the current directory as the build context.
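If you'd rather detach from the build entirely, one way to use nohup (a sketch; the log file name is arbitrary) is:

$ nohup docker build -t deeplearning -f Dockerfile . > build.log 2>&1 &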

This should output the setup process and should end with something similar to:

>> Successfully built <image ID>
>> Successfully tagged deeplearning:latest

Your image is ready to use. To start the environment, simply run the command below. Because we tagged the image deeplearning, you can refer to it by that tag rather than looking up its image ID:

$ docker run --runtime=nvidia -it deeplearning /bin/bash

Step 4 — Validating TensorFlow & start building!

Validate that TensorFlow is indeed running inside your container:

$ python
import tensorflow as tf
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
2019-02-23 07:34:14.592926: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-02-23 07:34:17.452780: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:964] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-02-23 07:34:17.453267: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties:
name: Tesla K80 major: 3 minor: 7 memoryClockRate(GHz): 0.8235
pciBusID: 0000:00:1e.0
totalMemory: 11.17GiB freeMemory: 11.10GiB
2019-02-23 07:34:17.453306: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
2019-02-23 07:34:17.772969: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-02-23 07:34:17.773032: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988]      0
2019-02-23 07:34:17.773054: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0:   N
2019-02-23 07:34:17.773403: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10757 MB memory) -> physical GPU (device: 0, name: Tesla K80, pci bus id: 0000:00:1e.0, compute capability: 3.7)
Device mapping:
/job:localhost/replica:0/task:0/device:XLA_CPU:0 -> device: XLA_CPU device
/job:localhost/replica:0/task:0/device:XLA_GPU:0 -> device: XLA_GPU device
/job:localhost/replica:0/task:0/device:GPU:0 -> device: 0, name: Tesla K80, pci bus id: 0000:00:1e.0, compute capability: 3.7
2019-02-23 07:34:17.774289: I tensorflow/core/common_runtime/direct_session.cc:307] Device mapping:
/job:localhost/replica:0/task:0/device:XLA_CPU:0 -> device: XLA_CPU device
/job:localhost/replica:0/task:0/device:XLA_GPU:0 -> device: XLA_GPU device
/job:localhost/replica:0/task:0/device:GPU:0 -> device: 0, name: Tesla K80, pci bus id: 0000:00:1e.0, compute capability: 3.7
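As one last sanity check, you can run a small computation in the same session (a sketch; the values are arbitrary). With log_device_placement=True, you should see the MatMul op assigned to device:GPU:0:

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[2.0, 0.0], [0.0, 2.0]])
print(sess.run(tf.matmul(a, b)))  # [[2. 4.] [6. 8.]]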

Congrats! Your new TensorFlow environment is set up and ready to start training, testing and deploying your deep learning models!

Conclusion

TensorFlow has truly disrupted the machine learning world by offering a way to build production-ready models at scale. But TensorFlow is not always the most user-friendly, and it can be difficult to incorporate smoothly into your machine learning pipeline. The cnvrg.io data science platform leverages TensorFlow and other open-source tools so that data scientists can focus on the magic — the algorithms. You can find more tutorials on how to easily leverage open-source tools like TensorFlow, such as How to set up Kubernetes for your machine learning workflows and How to run Spark on Kubernetes. Finding simple ways to integrate these useful tools will get your models closer to production.

Further Reading

How To Set Up Django with Postgres, Nginx, and Gunicorn on Ubuntu 16.04

How To Install Python 3 and Set Up a Programming Environment on Ubuntu 18.04

How To Set Up Jupyter Notebook with Python 3 on Ubuntu 18.04

Originally published by yochze at towardsdatascience.com



TensorFlow is dead, long live TensorFlow!

TensorFlow is an open source machine learning library for research and production. TensorFlow is dead, long live TensorFlow.

If you’re an AI enthusiast and you didn’t see the big news this month, you might have just snoozed through an off-the-charts earthquake. Everything is about to change!

Last year I wrote 9 Things You Need To Know About TensorFlow… but there’s one thing you need to know above all others: TensorFlow 2.0 is here!

The revolution is here! Welcome to TensorFlow 2.0.
It’s a radical makeover. The consequences of what just happened are going to have major ripple effects on every industry, just you wait. If you’re a TF beginner in mid-2019, you’re extra lucky because you picked the best possible time to enter AI (though you might want to start from scratch if your old tutorials have the word “session” in them).

In a nutshell: TensorFlow has just gone full Keras. Those of you who know those words just fell out of your chairs. Boom!

A prickly experience

I doubt that many people have accused TensorFlow 1.x of being easy to love. It’s the industrial lathe of AI… and about as user-friendly. At best, you might feel grateful for being able to accomplish your AI mission at mind-boggling scale.

You’d also attract some raised eyebrows if you claimed that TensorFlow 1.x was easy to get the hang of. Its steep learning curve made it mostly inaccessible to the casual user, but mastering it meant you could talk about it the way you’d brag about that toe you lost while climbing Everest. Was it fun? No, c’mon, really: was it fun?

You‘re not the only one — it’s what TensorFlow 1.x tutorials used to feel like for everybody.

TensorFlow’s core strength is performance. It was built for taking models from research to production at massive scale and it delivers, but TF 1.x made you sweat for it. Persevere and you’d be able to join the ranks of ML practitioners who use it for incredible things, like finding new planets and pioneering medicine.

What a pity that such a powerful tool was in the hands of so few… until now.

Don't worry about what tensors are. We just called them (generalized) matrices where I grew up. The name TensorFlow is a nod to the fact that TF's very good at performing distributed computations involving multidimensional arrays (er, matrices), which you'll find handy for AI at scale. (Image source: http://karlstratos.com/drawings/drawings.html)

Cute and cuddly Keras

Now that we’ve covered cactuses, let’s talk about something you’d actually want to hug. Overheard at my place of work: “I think I have an actual crush on Keras.”

Keras is a specification for building models layer-by-layer that works with multiple machine learning frameworks (so it’s not a TF thing), but you might know it as a high level API accessed from within TensorFlow as tf.keras.

Incidentally, I’m writing this section on Keras’ 4th birthday (Mar 27, 2019) for an extra dose of warm fuzzies.

Keras was built from the ground up to be Pythonic and always put people first — it was designed to be inviting, flexible, and simple to learn.

Why don’t we have both?

Why must we choose between Keras's cuddliness and traditional TensorFlow's mighty performance? Why don't we have both?

Great idea! Let’s have both! That’s TensorFlow 2.0 in a nutshell.

This is TensorFlow 2.0. You can mash those orange buttons yourself at http://bit.ly/tfoview.

The usability revolution

Going forward, Keras will be the high-level API for TensorFlow, and it has been extended so that you can use all the advanced features of TensorFlow directly from tf.keras.
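Here's a minimal sketch of what that looks like (a toy classifier; the layer sizes and the 10-feature input are arbitrary choices, not anything from the announcement):

import tensorflow as tf  # TensorFlow 2.0

# Build a model layer-by-layer with the Keras Sequential API.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])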


In the new version, everything you’ve hated most about TensorFlow 1.x gets the guillotine. Having to perform a dark ritual just to add two numbers together? Dead. TensorFlow Sessions? Dead. A million ways to do the exact same thing? Dead. Rewriting code if you switch hardware or scale? Dead. Reams of boilerplate to write? Dead. Horrible unactionable error messages? Dead. Steep learning curve? Dead.

You’re expecting the obvious catch, aren’t you? Worse performance? Guess again! We’re not giving up performance.

TensorFlow is now cuddly and this is a game-changer, because it means that one of the most potent tools of our time just dropped the bulk of its barriers to entry. Tech enthusiasts from all walks of life are finally empowered to join in because the new version opens access beyond researchers and other highly-motivated folks with an impressive pain threshold.

Everyone is welcome. Want to play? Then come play!

Eager to please

In TensorFlow 2.0, eager execution is now the default. You can take advantage of graphs even in eager context, which makes your debugging and prototyping easy, while the TensorFlow runtime takes care of performance and scaling under the hood.

Wrangling graphs in TensorFlow 1.x (declarative programming) was disorienting for many, but it’s all just a bad dream now with eager execution (imperative programming). If you skipped learning it before, so much the better. TF 2.0 is a fresh start for everyone.
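For instance, here's a minimal eager computation in 2.0 (a sketch; the values are arbitrary). The matrix multiply runs the moment you call it, with no Session and no graph-building ritual:

import tensorflow as tf  # TensorFlow 2.0, eager by default

x = tf.constant([[1.0, 2.0],
                 [3.0, 4.0]])
y = tf.matmul(x, x)  # executes immediately
print(y.numpy())     # [[ 7. 10.]
                     #  [15. 22.]]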

As easy as one… one… one…

Many APIs got consolidated across TensorFlow under Keras, so now it’s easier to know what you should use when. For example, now you only need to work with one set of optimizers and one set of metrics. How many sets of layers? You guessed it! One! Keras-style, naturally.
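To make that concrete, here's a quick sketch of the consolidated Keras-style namespaces (the specific choices are arbitrary examples):

import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)  # the one set of optimizers
loss = tf.keras.losses.SparseCategoricalCrossentropy()     # ...of losses
metric = tf.keras.metrics.SparseCategoricalAccuracy()      # ...of metrics
layer = tf.keras.layers.Dense(64, activation="relu")       # ...of layers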

In fact, the whole ecosystem of tools got a spring cleaning, from data processing pipelines to easy model exporting to TensorBoard integration with Keras, which is now a… one-liner!
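That TensorBoard hookup really is about one line. A sketch, assuming a model, x_train, and y_train from your own training setup:

# The one-liner: attach TensorBoard logging as a Keras callback.
# (`model`, `x_train`, `y_train` are assumed from your own code.)
tensorboard_cb = tf.keras.callbacks.TensorBoard(log_dir="./logs")
model.fit(x_train, y_train, epochs=5, callbacks=[tensorboard_cb])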

There are also great tools that let you switch and optimize distribution strategies for amazing scaling efficiency without losing any of the convenience of Keras.
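For example, here's a sketch with MirroredStrategy, which replicates training across the GPUs on a single machine (the one-layer model is just a placeholder):

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    # Build and compile inside the strategy scope; training then
    # runs data-parallel across all available GPUs.
    model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
    model.compile(optimizer="sgd", loss="mse")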

Those distribution strategies are pretty, aren’t they?

The catch!

If the catch isn’t performance, what is it? There has to be a catch, right?

Actually, the catch was your suffering up to now. TensorFlow demanded quite a lot of patience from its users while a friendly version was brewing. This wasn’t a matter of sadism. Making tools for deep learning is new territory, and we’re all charting it as we go along. Wrong turns were inevitable, but we learned a lot along the way.

The TensorFlow community put in a lot of elbow grease to make the initial magic happen, and then more effort again to polish the best gems while scraping out less fortunate designs. The plan was never to force you to use a rough draft forever, but perhaps you habituated so well to the discomfort that you didn’t realize it was temporary. Thank you for your patience!

The reward is everything you appreciate about TensorFlow 1.x made friendly under a consistent API with tons of duplicate functionality removed so it’s cleaner to use. Even the errors are cleaned up to be concise, simple to understand, and actionable. Mighty performance stays!

What’s the big deal?

Haters (who’re gonna hate) might say that much of v2.0 could be cobbled together in v1.x if you searched hard enough, so what’s all the fuss about? Well, not everyone wants to spend their days digging around in clutter for buried treasure. The makeover and clean-up are worth a standing ovation. But that’s not the biggest big deal.

The point not to miss is this: TensorFlow just announced an uncompromising focus on usability.

AI lets you automate tasks you can’t come up with instructions for. It lets you automate the ineffable. Democratization means that AI at scale will no longer be the province of a tiny tech elite.

Imagine a future where "I know how to make things with Python" and "I know how to make things with AI" are equally commonplace statements… Exactly! I’m almost tempted to use that buzzword "disruptive" here.

The great migration

We know it’s hard work to upgrade to a new version, especially when the changes are so dramatic. If you’re about to embark on migrating your codebase to 2.0, you’re not alone — we’ll be doing the same here at Google with one of the largest codebases in the world. As we go along, we’ll be sharing migration guides to help you out.

If you rely on specific functionality, you won’t be left in the lurch — except for contrib, all TF 1.x functions will live on in the compat.v1 compatibility module. We’re also giving you a script which automatically updates your code so it runs on TensorFlow 2.0. Learn more in the video below.

This video is a great resource if you’re eager to dig deeper into TF 2.0 and geek out on code snippets.
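In the meantime, here's a sketch of both escape hatches (the file names are hypothetical):

# Run unmodified 1.x code on TensorFlow 2.0 via the compatibility module:
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

# Or convert a file with the bundled upgrade script
# (old_model.py / new_model.py are hypothetical names):
#   $ tf_upgrade_v2 --infile old_model.py --outfile new_model.py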

Your clean slate

TF 2.0 is a beginner’s paradise, so it will be a downer for those who’ve been looking forward to watching newbies suffer the way you once suffered. If you were hoping to use TensorFlow for hazing new recruits, you might need to search for some other way to inflict existential horror.

Sitting out might have been the smartest move, because now’s the best time to arrive on the scene. As of March 2019, TensorFlow 2.0 is available in alpha (that’s a preview, you hipster you), so learning it now gets you ready in time for the full release that the community is gearing up for over the next quarter.

Following the dramatic changes, you won’t be as much of a beginner as you imagined. The playing field got leveled, the game got easier, and there’s a seat saved just for you. Welcome! I’m glad you’re finally here and I hope you’re as excited about this new world of possibilities as I am.

Dive in!

Check out the shiny redesigned tensorflow.org for tutorials, examples, documentation, and tools to get you started… or dive straight in with:

pip install tensorflow==2.0.0-alpha0
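Then a quick check that the alpha is live and eager by default:

$ python -c "import tensorflow as tf; print(tf.__version__); print(tf.executing_eagerly())"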

You’ll find detailed installation instructions at tensorflow.org/install.

TensorFlow Full Course - TensorFlow Tutorial For Beginners

This "TensorFlow Full Course - TensorFlow Tutorial For Beginners" video is a complete guide to deep learning using TensorFlow. It covers in-depth knowledge of deep learning, TensorFlow, and neural networks.

Below are the topics covered in this TensorFlow tutorial:

2:07 Artificial Intelligence

2:21 Why Artificial Intelligence?

5:27 What is Artificial Intelligence?

5:55 Artificial Intelligence Domains

6:14 Artificial Intelligence Subsets

11:17 Machine Learning

12:32 Types of Machine Learning

12:39 Machine Learning Use Case

15:55 Supervised Learning

18:50 Types of Supervised Learning

20:17 Use Case 2

21:28 Linear Regression

26:34 Linear Regression Demo

38:39 Regression Application

40:14 Building Logistic Regression Model

40:24 Logistic Regression Use Case

46:55 Analysing Performance Of The Model
	
49:40 Calculating The Accuracy
	
51:31 Logistic Regression Demo

1:01:38 Clustering Use Case

1:05:12 How Clustering works?

1:05:12 Initialization
	
1:06:07 Cluster Assignment
	
1:07:37 Move Centroid
	
1:08:27 Optimization
	
1:08:32 Convergence
	
1:09:22 How to find optimal solution?
	
1:09:30 Choosing the number of cluster

1:16:35 Reinforcement Learning

1:17:35 Limitation of Machine Learning

1:22:00 How Deep Learning Solves the Issue?

1:25:05 What is Deep Learning?

1:26:35 Applications of Deep Learning

1:29:14 What is a Tensor?

1:29:48 Rank of Tensors

1:32:13 Shape of a Tensor

1:33:58 What is TensorFlow?

1:35:38 TensorFlow Code Basics

1:36:09 TensorFlow Basic Demo

2:00:33 Activation or Transformation Function

2:01:28 Linear
	
2:02:18 Unit Step
	
2:03:23 Sigmoid
	
2:04:23 Tanh
	
2:05:18 ReLU
	
2:05:53 Softmax

2:07:03 Activation Function Demo

2:10:43 How Neuron Works?

2:13:08 What is a Perceptron?

2:15:53 Role of Weights & Bias

2:16:18 Perceptron Example

2:22:23 Training a Perceptron

2:22:48 Perceptron Learning Algorithm

2:26:08 Training Network Weights

2:39:43 Reducing The Loss

2:43:18 Perceptron Learning Algorithm Demo