Lenora Hauck

How to Install, Run, and Connect to Jupyter Notebook

Jupyter Notebook is an open-source web application that lets you create and share interactive code, visualizations, and more. This tool can be used with several programming languages, including Python, Julia, R, Haskell, and Ruby. It is often used for working with data, statistical modeling, and machine learning.

Jupyter Notebooks (or just “Notebooks”) are documents produced by the Jupyter Notebook app which contain both computer code and rich text elements (paragraphs, equations, figures, links, etc.) that aid in presenting and sharing reproducible research. They can therefore be an excellent tool for data-driven or programming-based presentations, or as a teaching tool.

This tutorial will walk you through setting up Jupyter Notebook to run from an Ubuntu 20.04 server, as well as demonstrate how to connect to and use the notebook from a local machine via tunnelling. By the end of this guide, you will be able to run Python 3 code using Jupyter Notebook running on a remote server.
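
As a rough preview of that workflow, the commands below are a minimal sketch: install and start Jupyter Notebook on the server, then forward its port to your local machine over SSH. The username, server address, and port are placeholders, not values taken from this guide.

# On the server: install and start Jupyter Notebook (assumes Python 3 and pip are already present)
pip3 install notebook
jupyter notebook --no-browser --port 8888

# On your local machine: forward local port 8888 to port 8888 on the server
ssh -L 8888:localhost:8888 your_user@your_server_ip

With the tunnel open, you can browse to http://localhost:8888 locally and the traffic is forwarded to the notebook running on the server.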

Prerequisites

In order to complete this guide, you should have a fresh Ubuntu 20.04 server instance with a basic firewall and a non-root user with sudo privileges configured. You can learn how to set this up by running through our initial server setup tutorial.

#jupyter notebook #ubuntu


Rodrigo Senra - Jupyter Notebooks

Today's guest is a technical director at Work & Co and holds a PhD in Computer Science. He has contributed to numerous open source Python projects, helped found the Associação Python Brasil, and has received the Prêmio Dorneles Tremea for his contributions to the Python Brasil community.

#alexandre oliva #anaconda #apache zeppelin #associação python brasil #azure notebooks #beakerx #binder #c++ #closure #colaboratory #donald knuth #fernando pérez #fortran #graphql #guido van rossum #ipython #java #javascript #json #jupyter kernels #jupyter notebooks #jupyterhub #jupyterlab #latex #lisp #literate programming #lua #matlab #perl #cinerdia #prêmio dorneles tremea #python #r #rodrigo senra #scala #spark notebook #tcl #typescript #zope

PostgreSQL Connection Pooling: Part 4 – PgBouncer vs. Pgpool-II

In our previous posts in this series, we spoke at length about using PgBouncer and Pgpool-II, the connection pool architecture, and the pros and cons of leveraging one for your PostgreSQL deployment. In our final post, we will put them head-to-head in a detailed feature comparison and compare the results of PgBouncer vs. Pgpool-II performance for your PostgreSQL hosting!

The bottom line – Pgpool-II is a great tool if you need load-balancing and high availability. Connection pooling is almost a bonus you get alongside. PgBouncer does only one thing, but does it really well. If the objective is to limit the number of connections and reduce resource consumption, PgBouncer wins hands down.

It is also perfectly fine to use both PgBouncer and Pgpool-II in a chain – you can have a PgBouncer to provide connection pooling, which talks to a Pgpool-II instance that provides high availability and load balancing. This gives you the best of both worlds!
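
To make the chaining idea concrete, here is a minimal, hypothetical pgbouncer.ini fragment that points PgBouncer at a Pgpool-II instance instead of directly at PostgreSQL. The host, port, and database name are illustrative assumptions, not a configuration taken from this series.

; pgbouncer.ini (sketch): PgBouncer in front, Pgpool-II behind it
[databases]
; route the "appdb" database through Pgpool-II listening on its default port 9999
appdb = host=127.0.0.1 port=9999 dbname=appdb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
pool_mode = transaction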

Using PgBouncer with Pgpool-II - Connection Pooling Diagram


Performance Testing

While PgBouncer may seem to be the better option in theory, theory can often be misleading. So, we pitted the two connection poolers head-to-head, using the standard pgbench tool, to see which one provides better transactions per second throughput through a benchmark test. For good measure, we ran the same tests without a connection pooler too.

Testing Conditions

All of the PostgreSQL benchmark tests were run under the following conditions:

  1. Initialized pgbench using a scale factor of 100.
  2. Disabled auto-vacuuming on the PostgreSQL instance to prevent interference.
  3. No other workload was running at the time.
  4. Used the default pgbench script to run the tests.
  5. Used default settings for both PgBouncer and Pgpool-II, except max_children*. All PostgreSQL limits were also set to their defaults.
  6. All tests ran as a single thread, on a single-CPU, 2-core machine, for a duration of 5 minutes.
  7. Forced pgbench to create a new connection for each transaction using the -C option. This emulates modern web application workloads and is the whole reason to use a pooler! (A sketch of the corresponding pgbench commands follows this list.)
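
The commands below are an illustrative sketch of such a run, not the exact commands used in the benchmark; the database name, client count, and pooler port (6432 is PgBouncer's default, 9999 is Pgpool-II's) are assumptions.

# Initialize the pgbench tables at scale factor 100 (database name "bench" is a placeholder)
pgbench -i -s 100 bench

# Run for 5 minutes (-T 300) with a new connection per transaction (-C),
# pointed at the pooler's port instead of PostgreSQL directly
pgbench -c 16 -j 1 -C -T 300 -p 6432 bench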

We ran each iteration for 5 minutes to ensure any noise averaged out. Here is how the middleware was installed:

  • For PgBouncer, we installed it on the same box as the PostgreSQL server(s). This is the configuration we use in our managed PostgreSQL clusters. Since PgBouncer is a very light-weight process, installing it on the box has no impact on overall performance.
  • For Pgpool-II, we tested both when the Pgpool-II instance was installed on the same machine as PostgreSQL (on box column), and when it was installed on a different machine (off box column). As expected, the performance is much better when Pgpool-II is off the box as it doesn’t have to compete with the PostgreSQL server for resources.

Throughput Benchmark

Here are the transactions per second (TPS) results for each scenario across a range of client counts:

#database #developer #performance #postgresql #connection control #connection pooler #connection pooler performance #connection queue #high availability #load balancing #number of connections #performance testing #pgbench #pgbouncer #pgbouncer and pgpool-ii #pgbouncer vs pgpool #pgpool-ii #pooling modes #postgresql connection pooling #postgresql limits #resource consumption #throughput benchmark #transactions per second #without pooling


Deploy Your First Jupyter Notebook to Docker

There are only two ways to live your life. One is as though nothing is a miracle. The other is as though everything is a miracle. -Albert Einstein

In this article, we will talk about what Docker is, how it works and how to deploy a Jupyter notebook to a Docker Container.

#What is Docker?

 


 

According to the Docker Website, Docker is an open platform for developing, shipping, and running applications. Docker enables you to separate your applications from your infrastructure so you can deliver software quickly.

In other words, Docker is a platform that lets you host and run your applications without worrying about things like platform dependence: it provides a piece of infrastructure called a container in which your applications can be packaged and run.

#What makes up Docker (In a Nutshell)

Here we will provide an overview of what makes up Docker. If you want a comprehensive overview of how Docker works, check out a dedicated article on the subject.

The Docker architecture is divided into three (3) sections:

  • Docker Engine (dockerd)
  • docker-containerd (containerd)
  • docker-runc (runc)

 


Docker Engine(dockerd)

The Docker Engine comprises the Docker daemon, an API interface, and the Docker CLI. The Docker daemon (dockerd) runs continuously as the dockerd system service and is responsible for building the Docker images.

Docker-containerd

containerd is another system daemon service that is responsible for downloading the Docker images and running them as containers. It exposes its API to receive instructions from the dockerd service.

Docker-runc

runc is the container runtime responsible for creating the namespaces and cgroups required for a container. It then runs the container commands inside those namespaces. runc runtime is implemented as per the OCI specification.
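
As a quick, optional sanity check on a machine where Docker is installed (not part of the original article), you can see these components from the command line:

# Show the dockerd and containerd daemon processes running on the host
ps -e | grep -E 'dockerd|containerd'

# Show client and server component versions, including containerd and runc
docker version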

How to Deploy a Colab Jupyter Notebook to a Docker Container

 


In this part, we are going to build a simple classifier model using the Iris dataset. After that, we will download the code from Colab, and finally we will deploy the script containing the model into a Docker container.

#Building Model

In this section, we will build the classifier model using sklearn's built-in Iris dataset.

STEP 1: Create a new notebook in Google Colab.


STEP 2 Import the dependencies.

import matplotlib.pyplot as plt
from sklearn import metrics
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

STEP 3 Here we are going to load the iris dataset, split the data into the training set and test set, and build our classification model.

iris = load_iris()
X = iris.data
y = iris.target

Above we loaded the Iris dataset.

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=4)

Above we used the train_test_split function in sklearn to split the iris dataset into a training set and test set.

knn = KNeighborsClassifier(n_neighbors=10)
knn.fit(X_train, y_train)

Above we instantiated the KNeighborsClassifier model with the n_neighbors hyperparameter set to ten (10) neighbors, and fit it on the training set.
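
Since metrics was imported in STEP 2, an optional check (a sketch, not part of the original notebook) is to evaluate the model on the held-out test set:

# Predict on the held-out test set and print the accuracy
y_pred = knn.predict(X_test)
print(metrics.accuracy_score(y_test, y_pred))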

Installing and deploying to Docker

This is the final chapter. Here we are going to install the Docker desktop application and write the scripts that will deploy our model script to a Docker container.

STEP 1 First, download the Python script containing your trained model from Colab.


STEP 2 Now we are going to install and set up Docker. You can install Docker by following the official installation guide for your platform.

STEP 3 Now create a directory called iris-classifier where we are going to keep our model and Docker scripts. Move the Python file containing the iris classification model into the iris-classifier folder you just created.

In the same folder, create a text file called requirements.txt; its contents are below.

sklearn==0.0
matplotlib==3.2.2

STEP 4 Here we will create the Dockerfile. Go to your main directory and create a file called Dockerfile, without any extension. A Dockerfile is a script that is used to build a container image. Below are the items that your Dockerfile will contain.

FROM python:3.8

ADD requirements.txt /

RUN pip install -r /requirements.txt

ADD iris-classifier.py /

ENV PYTHONUNBUFFERED=1

CMD [ "python", "./iris-classifier.py" ]

Above we told Docker how to build the image: install the requirements, copy in the model script, and run it with Python each time the container starts.
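
If you want to try the image before adding Compose, a plain build-and-run works too; the image tag here is just an illustrative choice:

# Build the image from the Dockerfile in the current directory
docker build -t iris-classifier .

# Run the script once in a throwaway container
docker run --rm iris-classifier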

STEP 5 Here we are going to create our Docker Compose file. docker-compose files are simply configuration files that make it easy to manage one or more Docker containers.

In your project directory, create a file called docker-compose.yml, below are the contents to be contained in the file.

version: "3"
services:
  iris-classifier-uplink:
    # if failure  or server restarts, container will restart
    restart: always 
    container_name: iris-classifier-uplink
    image: iris-classifier-uplink
    build: 
      # build classifier image from the Dockerfile in the current directory
      context: .

Now your iris-classifier directory should contain four (4) files: iris-classifier.py, requirements.txt, Dockerfile, and docker-compose.yml.

#Running Docker Container

This is the final step. Here we will run our Docker container using the commands below.

docker compose build

docker compose up -d

That's it: our Python model is now running in a Docker container!
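
To confirm the container actually started and to watch the script's output, you can use the standard commands below; the service name matches the docker-compose.yml above.

# List running containers and confirm iris-classifier-uplink is up
docker ps

# Follow the logs of the Compose service
docker compose logs -f iris-classifier-uplink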


EndNote

Jupyter Notebooks are great places for building models, and you can even use them as back ends for applications; unfortunately, they don't run forever.

Docker helps you fix that by re-running your notebook's code whenever it fails, which makes it a tool worth knowing.

Source: https://hackernoon.com/deploy-your-first-jupyter-notebook-to-docker

#jupyter #notebook #docker 

Kennith Blick

How to Install and Important Concepts - Jupyter Notebook Tutorial #1

In this Python tutorial you will start learning how to work with Jupyter Notebooks. This video covers how to install the Jupyter Notebook library and the basic concepts you need to know to get started and take full advantage of this great tool.

Jupyter Notebook is a web application where you can combine code, text, and media of any kind to document your work, assignments, and demonstrations, or even prepare classes for your students. It is widely used by scientists, researchers, and the broader scientific community. This is a great tool to know. Let's get started!
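
For reference, these are the typical ways to install Jupyter Notebook with pip or conda and then start the server; the video may use slightly different commands, so treat this as a sketch:

# Install with pip
pip install notebook

# Or install with conda
conda install -c conda-forge notebook

# Start the notebook server
jupyter notebook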

Playlist: Jupyter Notebook | Video #1
Access the code here: https://github.com/rscorrea1/youtube.git

Scientific Python Lectures by J.R Johansson
Access the material here: https://github.com/jrjohansson/scientific-python-lectures

Timestamp:
0:00 - Start of the video
1:14 - How to install (conda and pip)
1:51 - How to start the notebook server
2:23 - Notebook Dashboard
3:28 - How to run a cell (and shortcut)
4:02 - Important Concepts
6:12 - Command and Edit Modes
8:14 - Code and Markdown Cells
10:46 - Wrapping up

#jupyter #jupyter notebook tutorial