Royce Reinger

Kubeflow: Machine Learning toolkit for Kubernetes

Kubeflow is the cloud-native platform for machine learning operations: pipelines, training, and deployment.


Documentation

Please refer to the official docs at kubeflow.org.

Working Groups

The Kubeflow community is organized into working groups (WGs), each with associated repositories, focused on specific pieces of the ML platform.

Quick Links

Get Involved

Please refer to the Community page.

Download Details:

Author: Kubeflow
Source Code: https://github.com/kubeflow/kubeflow 
License: Apache-2.0 license

#machinelearning #kubernetes #jupyter #notebook #tensorflow 


Devel IPerl: Perl5 Language Kernel for Jupyter

Devel-IPerl

Installation

Dependencies

Devel::IPerl depends upon the ZeroMQ library (ZMQ) and Project Jupyter in order to work.

ZeroMQ

Debian

On Debian-based systems, you can install ZeroMQ using apt.

sudo apt install libzmq3-dev 

macOS

If you use Homebrew on macOS, you can install ZeroMQ by using

brew install zmq

You may also need to install cpanm:

brew install cpanm

Installing ZeroMQ without a package manager

Some systems may not have a package manager (e.g., Windows), or you may want to avoid using one.

Make sure you have Perl, a C/C++ compiler, and a CPAN client (cpanm) on your system.

Then run the following command

cpanm --notest Alien::ZMQ::latest

Alien::ZMQ::latest has been tested on GNU/Linux, macOS, and Windows (Strawberry Perl 5.26.1.1).

Note: There are currently issues with installing on Windows using ActivePerl and older versions of Strawberry Perl. These are mostly due to having an older toolchain which causes builds of the native libraries to fail.

Jupyter

See the Jupyter install page to see how to install Jupyter.

On Debian, you can install using apt:

sudo apt install jupyter-console jupyter-notebook

If you know how to use pip, this may be as easy as

pip install -U jupyter
# or use pip3 (for Python 3) instead of pip

Make sure Jupyter is in the path by running

jupyter --version

Install from CPAN

cpanm Devel::IPerl

Running

iperl console  # start the console

iperl notebook # start the notebook

See the wiki for more information and example notebooks!


Download Details:

Author: EntropyOrg
Source Code: https://github.com/EntropyOrg/p5-Devel-IPerl

#perl #jupyter 

Royce Reinger

Neural Network That Transforms A Design Mock-up into A Static Website

Screenshot to code

A detailed tutorial covering the code in this repository: Turning design mockups into code with deep learning.

Plug: 👉 Check out my 60-page guide, No ML Degree, on how to land a machine learning job without a degree.

The neural network is built in three iterations: a Hello World version, followed by the main neural network layers, and ending with training it to generalize.

The models are based on Tony Beltramelli's pix2code, and inspired by Airbnb's sketching interfaces, and Harvard's im2markup.

Note: only the Bootstrap version can generalize on new design mock-ups. It uses 16 domain-specific tokens which are translated into HTML/CSS. It has a 97% accuracy. The best model uses a GRU instead of an LSTM. This version can be trained on a few GPUs. The raw HTML version has potential to generalize, but is still unproven and requires a significant amount of GPUs to train. The current model is also trained on a homogeneous and small dataset, thus it's hard to tell how well it behaves on more complex layouts.

A quick overview of the process:

1) Give a design image to the trained neural network


2) The neural network converts the image into HTML markup


3) Rendered output


Installation

FloydHub

Click this button to open a Workspace on FloydHub where you will find the same environment and dataset used for the Bootstrap version. You can also find the trained models for testing.

Local

pip install keras tensorflow pillow h5py jupyter
git clone https://github.com/emilwallner/Screenshot-to-code.git
cd Screenshot-to-code/
jupyter notebook

Open the desired notebook (files ending in '.ipynb'). To run the model, open the menu and select Cell > Run All.

The final version, the Bootstrap version, ships with a small dataset so you can test-run the model. If you want to try it with all the data, download the dataset here: https://www.floydhub.com/emilwallner/datasets/imagetocode, and specify the correct dir_name.

Folder structure

  |  |-Bootstrap                           #The Bootstrap version
  |  |  |-compiler                         #A compiler to turn the tokens to HTML/CSS (by pix2code)
  |  |  |-resources                                            
  |  |  |  |-eval_light                    #10 test images and markup
  |  |-Hello_world                         #The Hello World version
  |  |-HTML                                #The HTML version
  |  |  |-Resources_for_index_file         #CSS,images and scripts to test index.html file
  |  |  |-html                             #HTML files to train it on
  |  |  |-images                           #Screenshots for training
  |-readme_images                          #Images for the readme page

Hello World

HTML

Bootstrap

Model weights

Acknowledgments

  • Thanks to IBM for donating computing power through their PowerAI platform
  • The code is largely influenced by Tony Beltramelli's pix2code paper. Code Paper
  • The structure and some of the functions are from Jason Brownlee's excellent tutorial

Download Details:

Author: Emilwallner
Source Code: https://github.com/emilwallner/Screenshot-to-code 
License: View license

#machinelearning #deeplearning #jupyter 

Nat Grady

Jupyterlab-lsp: Language Server Protocol integration for Jupyter(Lab)

Language Server Protocol integration for Jupyter(Lab)

Features

Examples show Python code, but most features also work in R, bash, typescript, and many other languages.

Hover

Hover over any piece of code; if an underline appears, press Ctrl to get a tooltip with the function/class signature, module documentation, or any other information the language server provides

hover

Diagnostics

Critical errors have a red underline, warnings are orange, etc. Hover over the underlined code to see a more detailed message

inspections

Jump to Definition and References

Use the context menu entry, or Alt + click, to jump to definitions/references (you can change the modifier to Ctrl/⌘ in settings); use Alt + o to jump back.

jump

Highlight References

Place your cursor on a variable, function, etc., and all of its usages will be highlighted

Automatic Completion and Continuous Hinting

  • Certain characters, for example '.' (dot) in Python, will automatically trigger completion.
  • You can choose to receive the completion suggestions as you type by enabling continuousHinting setting.

invoke

Automatic Signature Suggestions

Function signatures will automatically be displayed

signature

Kernel-less Autocompletion

Advanced static-analysis autocompletion without a running kernel

autocompletion

The runtime kernel suggestions are still there

When a kernel is available the suggestions from the kernel (such as keys of a dict and columns of a DataFrame) are merged with the suggestions from the Language Server (in notebook).

If the kernel is too slow to respond promptly only the Language Server suggestions will be shown (default threshold: 0.6s). You can configure the completer to not attempt to fetch the kernel completions if the kernel is busy (skipping the 0.6s timeout).

You can deactivate the kernel suggestions by adding "Kernel" to disableCompletionsFrom in the completion section of Advanced Settings. Alternatively, if you only want kernel completions, add "LSP" to the same setting; or add both if you like to code in hardcore mode and get no completions, or if another provider has been added.

Rename

Rename variables, functions and more, in both: notebooks and the file editor. Use the context menu option or the F2 shortcut to invoke.

rename

Diagnostics panel

Sort and jump between the diagnostics using the diagnostics panel. Open it searching for "Show diagnostics panel" in JupyterLab commands palette or from the context menu. Use context menu on rows in the panel to filter out diagnostics or copy their message.

panel

Prerequisites

You will need to have both of the following installed:

  • JupyterLab >=3.3.0,<4.0.0a0
  • Python 3.7+

In addition, if you wish to use javascript, html, markdown or any other NodeJS-based language server, you will need to have an appropriate NodeJS version installed.

Note: Installation for JupyterLab 2.x requires a different procedure, please consult the documentation for the extension version 2.x.

Installation

For more extensive installation instructions, see the documentation.

For the current stable version, the following steps are recommended. Use of a python virtualenv or a conda env is also recommended.

install python 3

conda install -c conda-forge python=3

install JupyterLab and the extensions

conda install -c conda-forge 'jupyterlab>=3.0.0,<4.0.0a0' jupyterlab-lsp
# or
pip install 'jupyterlab>=3.0.0,<4.0.0a0' jupyterlab-lsp

Note: jupyterlab-lsp provides both the server extension and the lab extension.

Note: With conda, you could take advantage of the bundles: jupyter-lsp-python or jupyter-lsp-r to install both the server extension and the language server.

install LSP servers for languages of your choice; for example, for Python (pylsp) and R (languageserver) servers:

pip install 'python-lsp-server[all]'
R -e 'install.packages("languageserver")'

or from conda-forge

conda install -c conda-forge python-lsp-server r-languageserver

Please see our full list of supported language servers which includes installation hints for the common package managers (npm/pip/conda). In general, any LSP server from the Microsoft list should work after some additional configuration.

Note: it is worth visiting the repository of each server you install as many provide additional configuration options.

Restart JupyterLab

If JupyterLab is running when you installed the extension, a restart is required for the server extension and any language servers to be recognized by JupyterLab.

(Optional, IPython users only) to improve the performance of autocompletion, disable Jedi in IPython (the LSP servers for Python use Jedi too). You can do that temporarily with:

%config Completer.use_jedi = False

or permanently by setting c.Completer.use_jedi = False in your ipython_config.py file.

(Optional, Linux/macOS only) As a security measure, by default the Jupyter server only allows access to files under the Jupyter root directory (the place where you launch the Jupyter server). To allow jupyterlab-lsp to navigate to external files, such as packages installed system-wide or libraries inside a virtual environment (conda, pip, ...), this access-control mechanism needs to be circumvented: inside your Jupyter root directory, create a symlink named .lsp_symlink pointing to your system root /.

ln -s / .lsp_symlink

As this symlink is a hidden file the Jupyter server must be instructed to serve hidden files. Either use the appropriate command line flag:

jupyter lab --ContentsManager.allow_hidden=True

or, alternatively, set the corresponding setting inside your jupyter_server_config.py.

Help in implementing a custom ContentsManager which will enable navigating to external files without the symlink is welcome.

Configuring the servers

Server configurations can be edited using the Advanced Settings editor in JupyterLab (Settings > Advanced Settings Editor). For settings specific to each server, please see the table of language servers. Example settings might include:

Note: for the new (currently recommended) python-lsp-server replace pyls occurrences with pylsp

{
  "language_servers": {
    "pyls": {
      "serverSettings": {
        "pyls.plugins.pydocstyle.enabled": true,
        "pyls.plugins.pyflakes.enabled": false,
        "pyls.plugins.flake8.enabled": true
      }
    },
    "r-languageserver": {
      "serverSettings": {
        "r.lsp.debug": false,
        "r.lsp.diagnostics": false
      }
    }
  }
}

The serverSettings key specifies the configurations sent to the language servers. These can be written using stringified dot accessors like above (in the VSCode style), or as nested JSON objects, e.g.:

{
  "language_servers": {
    "pyls": {
      "serverSettings": {
        "pyls": {
          "plugins": {
            "pydocstyle": {
              "enabled": true
            },
            "pyflakes": {
              "enabled": false
            },
            "flake8": {
              "enabled": true
            }
          }
        }
      }
    }
  }
}
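The two spellings are equivalent. How dotted keys expand into nested objects can be sketched with a small helper (illustrative only, not part of jupyterlab-lsp):

```python
# Illustrative helper (not part of jupyterlab-lsp): expand
# VSCode-style dotted keys into nested JSON-like objects.
def expand(flat: dict) -> dict:
    nested = {}
    for dotted, value in flat.items():
        node = nested
        *parents, leaf = dotted.split(".")
        for key in parents:
            # Descend, creating intermediate objects as needed.
            node = node.setdefault(key, {})
        node[leaf] = value
    return nested
```

Both forms are accepted, so pick whichever is easier to maintain.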

Other configuration methods

Some language servers, such as pyls, provide other configuration methods in addition to language-server configuration messages (accessed using the Advanced Settings Editor). For example, pyls allows users to configure the server using a local configuration file. You can change the inspection/diagnostics for server plugins like pycodestyle there.

The exact configuration details will vary between operating systems (please see the configuration section of the pycodestyle documentation), but as an example, on Linux you would simply need to create a file called ~/.config/pycodestyle, which may look like this:

[pycodestyle]
ignore = E402, E703
max-line-length = 120

In the example above:

  • ignoring E402 allows imports which are not on the very top of the file,
  • ignoring E703 allows a terminating semicolon (useful for matplotlib plots),
  • the maximal allowed line length is increased to 120.

After changing the configuration you may need to restart JupyterLab; please be advised that errors in the configuration may prevent the servers from functioning properly.

Again, please do check the pycodestyle documentation for specific error codes, and check the configuration of other feature providers and language servers as needed.

Acknowledgements

This would not be possible without the fantastic initial work at wylieconlon/lsp-editor-adapter.

Download Details:

Author: jupyter-lsp
Source Code: https://github.com/jupyter-lsp/jupyterlab-lsp 
License: BSD-3-Clause license

#r #jupyter #notebook 

Nat Grady

Papermill: Parameterize, Execute, and analyze Notebooks

Papermill

papermill is a tool for parameterizing, executing, and analyzing Jupyter Notebooks.

Papermill lets you:

  • parameterize notebooks
  • execute notebooks

This opens up new opportunities for how notebooks can be used. For example:

  • Perhaps you have a financial report that you wish to run with different values on the first or last day of a month, or at the beginning or end of the year; parameters make this task easier.
  • Do you want to run a notebook and depending on its results, choose a particular notebook to run next? You can now programmatically execute a workflow without having to copy and paste from notebook to notebook manually.
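The branching workflow in the second bullet can be sketched in plain Python around papermill; choose_next and the notebook paths here are hypothetical:

```python
# Hypothetical sketch: pick the next notebook to execute based on a
# metric produced by the previous run. In practice, each branch would
# then be executed with papermill's execute_notebook.
def choose_next(error_rate: float) -> str:
    """Return the follow-up notebook path for a given error rate."""
    return "deep_dive.ipynb" if error_rate > 0.05 else "summary.ipynb"
```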

Papermill takes an opinionated approach to notebook parameterization and execution based on our experiences using notebooks at scale in data pipelines.

Installation

From the command line:

pip install papermill

For all optional IO dependencies, you can specify individual bundles like s3 or azure, or use all. To use Black to format parameters, add the black extra.

pip install papermill[all]

Python Version Support

This library currently supports Python 3.7+. As minor Python versions are officially sunset by the Python organization, papermill will similarly drop support for them in the future.

Usage

Parameterizing a Notebook

To parameterize your notebook designate a cell with the tag parameters.

enable parameters in Jupyter

Papermill looks for the parameters cell and treats it as the defaults for the parameters passed in at execution time. Papermill will add a new cell tagged injected-parameters containing the input parameters, which override the values in parameters. If no cell is tagged with parameters, the injected cell will be inserted at the top of the notebook.

Additionally, if you rerun notebooks through papermill, it will reuse the injected-parameters cell from the prior run; in this case papermill replaces the old injected-parameters cell with the new run's inputs.
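As a sketch (the names and values are illustrative), the two cells are just ordinary assignments:

```python
# --- cell tagged "parameters": the notebook's defaults ---
alpha = 0.1
ratio = 0.5

# --- cell papermill injects, tagged "injected-parameters";
#     it runs after the defaults and overrides them ---
alpha = 0.6
ratio = 0.1
```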


Executing a Notebook

The two ways to execute the notebook with parameters are: (1) through the Python API and (2) through the command line interface.

Execute via the Python API

import papermill as pm

pm.execute_notebook(
   'path/to/input.ipynb',
   'path/to/output.ipynb',
   parameters = dict(alpha=0.6, ratio=0.1)
)

Execute via CLI

Here's an example of a local notebook being executed and output to an Amazon S3 account:

$ papermill local/input.ipynb s3://bkt/output.ipynb -p alpha 0.6 -p l1_ratio 0.1

NOTE: If you use multiple AWS accounts, and you have properly configured your AWS credentials, then you can specify which account to use by setting the AWS_PROFILE environment variable at the command-line. For example:

$ AWS_PROFILE=dev_account papermill local/input.ipynb s3://bkt/output.ipynb -p alpha 0.6 -p l1_ratio 0.1

In the above example, two parameters are set: alpha and l1_ratio using -p (--parameters also works). Parameter values that look like booleans or numbers will be interpreted as such. Here are the different ways users may set parameters:

$ papermill local/input.ipynb s3://bkt/output.ipynb -r version 1.0

Using -r or --parameters_raw, users can set parameters one by one. However, unlike -p, the parameter will remain a string, even if it may be interpreted as a number or boolean.

$ papermill local/input.ipynb s3://bkt/output.ipynb -f parameters.yaml

Using -f or --parameters_file, users can provide a YAML file from which parameter values should be read.

$ papermill local/input.ipynb s3://bkt/output.ipynb -y "
alpha: 0.6
l1_ratio: 0.1"

Using -y or --parameters_yaml, users can directly provide a YAML string containing parameter values.

$ papermill local/input.ipynb s3://bkt/output.ipynb -b YWxwaGE6IDAuNgpsMV9yYXRpbzogMC4xCg==

Using -b or --parameters_base64, users can provide a YAML string, base64-encoded, containing parameter values.

When using YAML to pass arguments, through -y, -b or -f, parameter values can be arrays or dictionaries:

$ papermill local/input.ipynb s3://bkt/output.ipynb -y "
x:
    - 0.0
    - 1.0
    - 2.0
    - 3.0
linear_function:
    slope: 3.0
    intercept: 1.0"
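The difference between -p and -r above can be illustrated with a simplified sketch of value interpretation (interpret is a hypothetical helper, not papermill's actual parser, which follows YAML rules):

```python
import ast

def interpret(value: str):
    """Mimic -p: parse numbers/booleans where possible (simplified)."""
    try:
        return ast.literal_eval(value)
    except (ValueError, SyntaxError):
        # Not a literal: keep it as a plain string.
        return value

# -p would yield typed values like these; -r always keeps the raw
# string, so `-r version 1.0` stays "1.0" rather than becoming a float.
interpret("0.6")     # a float
interpret("True")    # a boolean
interpret("v1.0.2")  # not a literal, stays a string
```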

Supported Name Handlers

Papermill supports the following name handlers for input and output paths during execution:

Local file system: local

HTTP, HTTPS protocol: http://, https://

Amazon Web Services: AWS S3 s3://

Azure: Azure DataLake Store, Azure Blob Store adl://, abs://

Google Cloud: Google Cloud Storage gs://

Development Guide

Read CONTRIBUTING.md for guidelines on how to setup a local development environment and make code changes back to Papermill.

For development guidelines look in the DEVELOPMENT_GUIDE.md file. This should inform you on how to make particular additions to the code base.

Documentation

We host the Papermill documentation on ReadTheDocs.

Download Details:

Author: nteract
Source Code: https://github.com/nteract/papermill 
License: BSD-3-Clause license

#r #python #scala #jupyter 

Sean Robertson

Installing Jupyter Notebooks / Anaconda

In this Python tutorial for Beginners, we are going to set up our environment using Anaconda and Jupyter Notebooks. Installing Jupyter Notebooks/Anaconda | Python for Beginners

Link to Download Anaconda/Jupyter Notebooks: https://www.anaconda.com/ 

0:00 Intro
0:52 Downloading Anaconda
3:52 Jupyter Notebooks
7:41 Outro

#jupyter #anaconda #python #programming 

Franz Becker

7 Best Jupyter Testing Libraries

In this Jupyter and python article, let's learn about Testing: 7 Best Jupyter Testing Libraries

Table of contents:

  • ipytest - Test runner for running unit tests from within a notebook.
  • nbcelltests - Cell-by-cell testing for notebooks in Jupyter.
  • nbval - Py.test plugin for validating Jupyter notebooks.
  • nosebook - Nose plugin for finding and running IPython notebooks as nose tests.
  • sphinxcontrib-jupyter - Sphinx extension for generating Jupyter notebooks.
  • treebeard - GitHub Action for testing/scheduling Jupyter notebooks.
  • treon - Easy-to-use test framework for Jupyter Notebooks.

What is Jupyter used for?

Jupyter Notebook allows users to compile all aspects of a data project in one place, making it easier to show the entire process of a project to your intended audience. Through the web-based application, users can create data visualizations and other components of a project to share with others via the platform.


7 Best Jupyter Testing Libraries

  1. ipytest

ipytest allows you to run Pytest in Jupyter notebooks. ipytest aims to give access to the full pytest experience to make it easy to transfer tests out of notebooks into separate test files.

Installation: pip install ipytest

Usage

For usage see the example notebook or the documentation for the core API below. The suggested way to import ipytest is:

import ipytest
ipytest.autoconfig()

Afterwards in a new cell, tests can be executed as in:

%%ipytest -qq

def test_example():
    assert [1, 2, 3] == [1, 2, 3]

This command will first delete any previously defined tests, execute the cell and then run pytest. For further details on how to use ipytest see Extended usage.

NOTE: Some notebook implementations modify the core IPython package and magics may not work correctly (see here or here). In this case, using ipytest.run and ipytest.clean_tests directly should still work as expected.

View on GitHub


2.  nbcelltests

Cell-by-cell testing for production Jupyter notebooks in JupyterLab

nbcelltests is designed for writing tests for linearly executed notebooks. Its primary use is for unit testing reports.

Installation

Python package installation: pip install nbcelltests

To use in JupyterLab, you will also need the lab and server extensions. Typically, these are automatically installed alongside nbcelltests, so you should not need to do anything special to use them. The lab extension will require a rebuild of JupyterLab, which you'll be prompted to do on starting JupyterLab the first time after installing celltests (or you can do manually with jupyter lab build). Note that you must have node.js installed (as for any lab extension).

To see what extensions you have, check the output of jupyter labextension list (look for jupyterlab_celltests), and jupyter serverextension list (look for nbcelltests). If for some reason you need to manually install the extensions, you can do so as follows:

jupyter labextension install jupyterlab_celltests
jupyter serverextension enable --py nbcelltests

(Note: if using in an environment, you might wish to add --sys-prefix to the serverextension command.)

"Linearly executed notebooks?"

When converting notebooks into html/pdf/email reports, they are executed top-to-bottom one time, and are expected to contain as little code as reasonably possible, focusing primarily on the plotting and markdown bits. Libraries for this type of thing include Papermill, JupyterLab Emails, etc.

View on GitHub


3.  nbval

A py.test plugin to validate Jupyter notebooks


The plugin adds functionality to py.test to recognise and collect Jupyter notebooks. The intended purpose of the tests is to determine whether execution of the stored inputs matches the stored outputs of the .ipynb file, whilst also ensuring that the notebooks run without errors.

The tests were designed to ensure that Jupyter notebooks (especially those for reference and documentation), are executing consistently.

Each cell is taken as a test; a cell that doesn't reproduce the expected output will fail.

See docs/source/index.ipynb for the full documentation.

Installation

Available on PyPi:

pip install nbval

or install the latest version by cloning the repository and running:

pip install .

from the main directory. To uninstall:

pip uninstall nbval

View on GitHub


4.  nosebook

a nose plugin for finding and running IPython 2/3 notebooks as nose tests.

What it can't do in terms of setup and tearDown, nosebook makes up for in simplicity: there is no %%nose magic, no metadata required: the notebook on disk is the "gold master".

This makes it ideal for decreasing the burden of keeping documentation up to date with tests by making a single set of notebooks into both rich, multi-format documentation and a simple part of your test suite.

How does it work?

Each notebook found according to nosebook-match is started with a fresh kernel, based on the kernel specified in the notebook. If the kernel is not installed, no tests will be run and the error will be logged.

Each code cell that matches nosebook-match-cell will be executed against the kernel in the order in which it appears in the notebook; other cells (e.g. markdown, raw) are ignored.

The number and content of outputs has to match exactly, with the following parts of each output stripped:

  • execution/prompt numbers, i.e. [1]:
  • tracebacks

Non-deterministic output, such as with __repr__ methods that include the memory location of the instance, will obviously not match every time. You can use nosebook-scrub to rewrite or remove offending content.
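A scrub rule of the kind nosebook-scrub applies can be sketched as a regex rewrite (the exact pattern here is an illustrative assumption):

```python
import re

def scrub(output: str) -> str:
    """Replace memory addresses so reprs compare deterministically."""
    return re.sub(r"0x[0-9a-fA-F]+", "0xXXXXXX", output)

scrub("<Foo object at 0x7f3a2b9d>")  # "<Foo object at 0xXXXXXX>"
```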

View on GitHub


5.  sphinxcontrib.jupyter

A Sphinx Extension for Generating Jupyter Notebooks

Summary

This sphinx extension can be used to:

  1. build a collection of jupyter notebooks,
  2. execute the Jupyter notebooks,
  3. convert the executed notebooks to html using nbconvert with template support.

Note: It has mainly been written to support the use case of scientific publishing and hasn't been well tested outside of this domain. Please provide feedback as an issue to this repository.

Requires: Sphinx >= 1.7.2 (for running tests).

Examples

Installation

pip install sphinxcontrib-jupyter

To get the latest version, it is best to install directly from a clone of the repository with

python setup.py install

If you wish to make changes to the project, it is best to install using

python setup.py develop

View on GitHub


6.  nbmake-action

(repo renamed from 'treebeard').

What? A GitHub Action for testing notebooks; it runs them from top to bottom

Why? To raise the quality of scientific material through better automation

Who is this for? Scientists/Developers who have written docs in notebooks and want to CI test them after every commit

Functionality

Tests notebooks using nbmake via pytest.

Note: If you have some experience setting up GitHub actions already you will probably prefer the flexibility of using the nbmake pip package directly.

Quick Start

      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
      - uses: "treebeardtech/nbmake-action@v0.2"
        with:
          path: "./examples"
          path-output: .
          notebooks: |
            nb1.ipynb
            'sub dir/*.ipynb'

See action.yml for the parameters you can pass to this action, and see unit tests and integ tests for example invocations.

Developing

Install local package

npm install

Run checks and build

npm run all

View on GitHub


7.  treon

Easy to use test framework for Jupyter Notebooks

Easy to use test framework for Jupyter Notebooks.

  • Runs notebook top to bottom and flags execution errors if any
  • Runs unittest present in your notebook code cells
  • Runs doctest present in your notebook code cells

Why should you use it?

  • Start testing notebooks without writing a single line of test code
  • Multithreaded execution for quickly testing a set of notebooks
  • Executes every Notebook in a fresh kernel to avoid hidden state problems
  • Primarily a command line tool that can be used easily in any Continuous Integration (CI) system

Installation

pip install treon

Usage

Treon executes a notebook from top to bottom, and the test fails if any code cell returns an error. Additionally, one can write unittest and doctest cases to test specific behaviour (examples shown below).
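For instance, a notebook code cell like the following would be exercised by treon; the doctest in the docstring runs as part of the notebook's test pass (a minimal sketch):

```python
# A notebook code cell: treon runs the doctest below automatically
# when it executes the notebook top to bottom.
def total(values):
    """Sum a list of numbers.

    >>> total([1, 2, 3])
    6
    """
    return sum(values)
```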

$ treon
Executing treon version 0.1.4
Recursively scanning /workspace/treon/tmp/docs/site/ru/guide for Notebooks...

-----------------------------------------------------------------------
Collected following Notebooks for testing
-----------------------------------------------------------------------
/workspace/treon/tmp/docs/site/ru/guide/keras.ipynb
/workspace/treon/tmp/docs/site/ru/guide/eager.ipynb
-----------------------------------------------------------------------

Triggered test for /workspace/treon/tmp/docs/site/ru/guide/keras.ipynb
Triggered test for /workspace/treon/tmp/docs/site/ru/guide/eager.ipynb

test_sum (__main__.TestNotebook) ...
ok
test_sum (__main__.TestNotebook2) ...
ok
test_sum (__main__.TestNotebook3) ...
ok

----------------------------------------------------------------------
Ran 3 tests in 0.004s

OK

-----------------------------------------------------------------------
TEST RESULT
-----------------------------------------------------------------------
/workspace/treon/tmp/docs/site/ru/guide/keras.ipynb       -- PASSED
/workspace/treon/tmp/docs/site/ru/guide/eager.ipynb       -- PASSED
-----------------------------------------------------------------------
2 succeeded, 0 failed, out of 2 notebooks tested.
-----------------------------------------------------------------------

View on GitHub


Frequently asked questions about Testing Jupyter

  • How do you test a Python code Jupyter Notebook?

Features of testbook

  1. Write conventional unit tests for Jupyter Notebooks.
  2. Execute all or some specific cells before unit test.
  3. Share kernel context across multiple tests (using pytest fixtures)
  4. Inject code into Jupyter notebooks.
  5. Works with any unit testing library - unittest, pytest or nose.
  • What is a test notebook?

testbook is a unit testing framework for testing code in Jupyter Notebooks. Previous attempts at unit testing notebooks involved writing the tests in the notebook itself. However, testbook allows unit tests to be run against notebooks in separate test files, hence treating .ipynb files as .py files.

  • Can you use Unittest in Jupyter Notebook?
  • Unit testing with unittest

    We can basically do the same thing in our Jupyter notebook: create a unittest.TestCase class, define the tests we want, and then just execute the unit tests in any cell.
  • Is JupyterLab free?

Jupyter will always be 100% open-source software, free for all to use and released under the liberal terms of the modified BSD license. Jupyter is developed in the open on GitHub, through the consensus of the Jupyter community.

  • Can we do pytest in Jupyter notebook?

This small pytest plugin allows pytest to discover and run tests written inside IPython notebook cells. It works by examining the notebook's global scope, putting it into a module object, and passing it to pytest for further processing. No temporary files or bytecode hacks.
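The unittest approach described above can be sketched as a single cell; loading the suite explicitly keeps the notebook kernel alive after the run:

```python
import unittest

class TestNotebook(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(1 + 1, 2)

# Run the case directly rather than via unittest.main(), which would
# try to exit the interpreter (pass exit=False if you do use it).
suite = unittest.TestLoader().loadTestsFromTestCase(TestNotebook)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```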


Related videos:

Unit Testing Jupyter Notebooks - testbook


Related posts:

#jupyter 

Layne Fadel

Best Visualization Libraries Plugins for Jupyter and Python

In this Jupyter and python article, let's learn about Visualization: Best Visualization Libraries Plugins for Jupyter and Python

Table of contents:

  • jupyter-manim - Display manim (Mathematical Animation Engine) videos or GIFs in Jupyter notebooks.
  • lux - Recommends a set of visualizations whenever a dataframe is printed in a notebook.
  • mpld3 - Combining Matplotlib and D3js for interactive data visualizations.
  • pd-replicator - Copy a pandas DataFrame to the clipboard with one click.
  • Perspective - Data visualization and analytics component, especially for large/streaming datasets.
  • pyecharts - Python interface for the ECharts visualization library.
  • pythreejs - Python / ThreeJS bridge utilizing the Jupyter widget infrastructure.
  • Qgrid - Interactive grid for sorting, filtering, and editing DataFrames in Jupyter notebooks.
  • tqdm - Fast, extensible progress bar for loops and iterables.
  • tributary - Python data streams with Jupyter support.
  • xleaflet - C++ Backend for ipyleaflet.
  • xwebrtc - C++ Backend for ipywebrtc.
  • xwidgets - C++ Backend for ipywidgets.

What is Jupyter?

JupyterLab is the latest web-based interactive development environment for notebooks, code, and data. Its flexible interface allows users to configure and arrange workflows in data science, scientific computing, computational journalism, and machine learning. A modular design invites extensions to expand and enrich functionality.

Jupyter Notebooks provide a data visualization framework called Qviz that enables you to visualize dataframes with improved charting options and Python plots on the Spark driver.


Best Visualization Libraries Plugins for Jupyter and Python

  1. jupyter-manim

Integrates 3b1b's ManimCairo (the cairo-backend branch) with Jupyter, displaying the resulting video when the %%manim cell magic is used to wrap a scene definition.

WARNING: This library only works for ManimCairo (the cairo-backend branch in 3b1b's version). It does not work for ManimCE (which already has Jupyter support by default) or ManimGL (which does not support Jupyter at all, as of time of writing).

Installation

pip3 install jupyter-manim

Usage

To enable the manim magic, first run import jupyter_manim. Then you can use the magic as if it were the manim command: your arguments will be passed to manim exactly as if they were command-line options.

For example, to render scene defined with class Shapes(Scene) use

%%manim Shapes
from manimlib.scene.scene import Scene
from manimlib.mobject.geometry import Circle
from manimlib.animation.creation import ShowCreation

class Shapes(Scene):

    def construct(self):
        circle = Circle()
        self.play(ShowCreation(circle))

Since version 1.0, the code is no longer required to be self-contained: jupyter_manim will attempt to export your variables (and imported objects) from the notebook into the manim script.

Most variables can be exported easily, but there are limitations; in short, anything that can be pickled can be exported. Additionally, variables whose names start with an underscore will be omitted.
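The pickling rule can be checked directly. Here is a small sketch that mirrors it (the exportable helper is hypothetical, not part of jupyter_manim):

```python
import pickle

def exportable(value):
    """Hypothetical helper mirroring jupyter_manim's rule of thumb:
    a notebook variable can be exported into the manim script only
    if it can be pickled."""
    try:
        pickle.dumps(value)
        return True
    except (pickle.PicklingError, AttributeError, TypeError):
        return False

print(exportable(42))           # plain data pickles fine
print(exportable(lambda x: x))  # lambdas do not pickle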

View on GitHub


2.  Lux

Lux is a Python library that facilitates fast and easy data exploration by automating the visualization and data analysis process. By simply printing out a dataframe in a Jupyter notebook, Lux recommends a set of visualizations highlighting interesting trends and patterns in the dataset. Visualizations are displayed via an interactive widget that enables users to quickly browse through large collections of visualizations and make sense of their data.

Getting Started

To start using Lux, simply add an extra import statement along with your Pandas import.

import lux
import pandas as pd

Lux can be used without modifying any existing Pandas code. Here, we use Pandas's read_csv command to load in a dataset of colleges and their properties.

df = pd.read_csv("https://raw.githubusercontent.com/lux-org/lux-datasets/master/data/college.csv")
df

When the dataframe is printed out, Lux automatically recommends a set of visualizations highlighting interesting trends and patterns in the dataset.

Basic recommendations in Lux

Voila! Here's a set of visualizations that you can now use to explore your dataset further!

View on GitHub


3.  mpld3

mpld3 provides a custom stand-alone JavaScript library built on D3, which parses JSON representations of plots. The mpld3 Python module provides a set of routines that parse matplotlib plots (using the mplexporter framework) and output a JSON description readable by mpld3.js.

Installation

mpld3 is compatible with python 2.6-2.7 and 3.3-3.4. It requires matplotlib version 2.2.2 and jinja2 version 2.7+.

Optionally, mpld3 can be used with IPython notebook, and requires IPython version 1.x or (preferably) version 2.0+.

This package is based on the mplexporter framework for crawling and exporting matplotlib images. mplexporter is bundled with the source distribution via git submodule.

Within the git source directory, you can download the mplexporter dependency and copy it into the mpld3 source directory using the following command:

$ python setup.py submodule

The submodule command is not necessary if you are installing from a distribution rather than from the git source.

Once the submodule command has been run, you can build the package locally using

$ python setup.py build

or install the package to the standard Python path using:

$ python setup.py install

Or, to install to another location, use

$ python setup.py install --prefix=/path/to/location/

Then make sure your PYTHONPATH environment variable points to this location.

View on GitHub


4.  pd-replicator

Copy a pandas DataFrame to the clipboard with one click

Installation

Installation can be done through pip:

> pip install pd-replicator

ipywidgets must be set up in order for the button/dropdown to display correctly:

> pip install ipywidgets 
> jupyter nbextension enable --py widgetsnbextension

To use with JupyterLab, an additional step is required:

> jupyter labextension install @jupyter-widgets/jupyterlab-manager

View on GitHub


5.  Perspective

Perspective is an interactive analytics and data visualization component, which is especially well-suited for large and/or streaming datasets. Use it to create user-configurable reports, dashboards, notebooks and applications, then deploy stand-alone in the browser, or in concert with Python and/or Jupyterlab.

Features

  • A fast, memory efficient streaming query engine, written in C++ and compiled for both WebAssembly and Python, with read/write/streaming for Apache Arrow, and a high-performance columnar expression language based on ExprTK.
  • A framework-agnostic User Interface packaged as a Custom Element, powered either in-browser via WebAssembly or virtually via WebSocket server (Python/Node).
  • A JupyterLab widget and Python client library, for interactive data analysis in a notebook, as well as scalable production Voila applications.

View on GitHub


6.  pyecharts

Python ECharts Plotting Library


Introduction

Apache ECharts is an easy-to-use, highly interactive, and highly performant JavaScript visualization library under the Apache license. Since its first public release in 2013, it has come to dominate over 74% of the Chinese web front-end market. Python, meanwhile, is an expressive language loved by the data science community. pyecharts was born to combine the strengths of both technologies.

Feature highlights

  • Simple, sleek API with method chaining
  • Supports 30+ popular chart types
  • Supports data science tools: Jupyter Notebook, JupyterLab, nteract
  • Integrates with Flask and Django with ease
  • Easy to use and highly configurable
  • Detailed documentation and examples
  • More than 400 geomap assets for geographic information processing

Installation

pip install

$ pip install pyecharts

Install from source

$ git clone https://github.com/pyecharts/pyecharts.git
$ cd pyecharts
$ pip install -r requirements.txt
$ python setup.py install

View on GitHub


7.  pythreejs

A Python / ThreeJS bridge for Jupyter Widgets.

Installation

Using pip:

pip install pythreejs

or conda:

conda install -c conda-forge pythreejs

For a development install, see the contributing guide.

The extension should then be installed automatically for your Jupyter client.

For JupyterLab <3, you may also need to ensure nodejs is installed, and rebuild the application:

conda install -c conda-forge 'nodejs>=12'
jupyter lab build

View on GitHub


8.  qgrid

Qgrid is a Jupyter notebook widget which uses SlickGrid to render pandas DataFrames within a Jupyter notebook. This allows you to explore your DataFrames with intuitive scrolling, sorting, and filtering controls, as well as edit your DataFrames by double clicking cells.

Qgrid was developed for use in Quantopian's hosted research environment and is available for use in that environment as of June 2018. Quantopian also offers a fully managed service for professionals that includes Qgrid, Zipline, Alphalens, Pyfolio, FactSet data, and more.

Announcements: Qgrid Webinar

Qgrid author Tim Shawver recently did a live webinar about Qgrid, and the recording of the webinar is now available on YouTube.

This talk will be interesting both for people that are new to Qgrid, as well as longtime fans that are interested in learning more about the project.

View on GitHub


9.  tqdm

tqdm derives from the Arabic word taqaddum (تقدّم), which can mean "progress", and is an abbreviation for "I love you so much" in Spanish (te quiero demasiado).

Instantly make your loops show a smart progress meter - just wrap any iterable with tqdm(iterable), and you're done!

from tqdm import tqdm
for i in tqdm(range(10000)):
    ...

76%|████████████████████████        | 7568/10000 [00:33<00:10, 229.00it/s]

trange(N) can also be used as a convenient shortcut for tqdm(range(N)).
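Beyond wrapping iterables, a tqdm bar can also be driven manually with update(), which is handy when progress does not map onto a simple loop. A short sketch, assuming tqdm is installed:

```python
from tqdm import tqdm

# Manual mode: update() advances the bar by an arbitrary amount,
# e.g. once per processed batch rather than once per item.
with tqdm(total=100) as pbar:
    for _ in range(10):
        pbar.update(10)   # 10 batches of 10 units each
    processed = pbar.n    # units counted so far
print(processed)
```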


It can also be executed as a module with pipes:

$ seq 9999999 | tqdm --bytes | wc -l
75.2MB [00:00, 217MB/s]
9999999

$ tar -zcf - docs/ | tqdm --bytes --total `du -sb docs/ | cut -f1` \
    > backup.tgz
 32%|██████████▋                      | 8.89G/27.9G [00:42<01:31, 223MB/s]

Overhead is low -- about 60ns per iteration (80ns with tqdm.gui), and is unit tested against performance regression. By comparison, the well-established ProgressBar has an 800ns/iter overhead.

View on GitHub


10.  Tributary

Tributary is a library for constructing dataflow graphs in python. Unlike many other DAG libraries in python (airflow, luigi, prefect, dagster, dask, kedro, etc), tributary is not designed with data/etl pipelines or scheduling in mind. Instead, tributary is more similar to libraries like mdf, pyungo, streamz, or pyfunctional, in that it is designed to be used as the implementation for a data model. One such example is the greeks library, which leverages tributary to build data models for options pricing.

Installation

Install with pip:

pip install tributary

or with conda:

conda install -c conda-forge tributary

or from source:

python setup.py install

Note: If installing from source or with pip, you'll also need Graphviz itself if you want to visualize the graph using the .graphviz() method.

View on GitHub


11.  xleaflet

C++ backend for the jupyter-leaflet map visualization library

Usage

Selecting a base layer for a map:

Basemap Screencast

Loading a geojson dataset:

GeoJSON Screencast

View on GitHub


12.  xwebrtc

C++ backend for WebRTC in the Jupyter notebook/lab

xwebrtc is an early developer preview, and is not suitable for general usage yet. Features and implementation are subject to change.

Installation

We provide a package for the mamba (or conda) package manager.

  • Installing xwebrtc and the C++ kernel
mamba install xeus-cling xwebrtc -c conda-forge

Then, the front-end extension must be installed for either the classic notebook or JupyterLab.

  • Installing the extensions for the classic notebook
mamba install widgetsnbextension -c conda-forge
mamba install ipywebrtc -c conda-forge

View on GitHub


13.  xwidgets

A C++ backend for Jupyter interactive widgets.

Introduction

xwidgets is a C++ implementation of the Jupyter interactive widgets protocol. The Python reference implementation is available in the ipywidgets project.

xwidgets enables the use of the Jupyter interactive widgets in the C++ notebook, powered by the xeus-cling kernel and the cling C++ interpreter from CERN. xwidgets can also be used to create applications making use of the Jupyter interactive widgets without the C++ kernel per se.

Installation

We provide a package for the mamba (or conda) package manager.

  • Installing xwidgets and the C++ kernel
mamba install xeus-cling xwidgets -c conda-forge

Then, the front-end extension must be installed for either the classic notebook or JupyterLab.

  • Installing the extension for the classic notebook
mamba install widgetsnbextension -c conda-forge
  • Installing the JupyterLab extension
jupyter labextension install @jupyter-widgets/jupyterlab-manager

This command defaults to installing the latest version of the JupyterLab extension. Depending on the version of xwidgets and jupyterlab you have installed you may need an older version.

View on GitHub


Frequently Asked Questions About Visualization Jupyter

  • Is Jupyter Notebook a visualization tool?

Jupyter Notebooks provide a data visualization framework called Qviz that enables you to visualize dataframes with improved charting options and Python plots on the Spark driver.

  • How do you display data in a Jupyter Notebook?

In a Jupyter notebook, simply typing the name of a data frame results in a neatly formatted output. This is an excellent way to preview data; note, however, that by default only 100 rows and 20 columns will print.

  • What is visualization in Python?

The process of finding trends and correlations in our data by representing it pictorially is called data visualization. To perform data visualization in Python, we can use various Python data visualization modules such as Matplotlib, Seaborn, Plotly, etc.

  • Is Jupyter notebook good for data analysis?

Having proper data analytics and visualizations tools has become more important than ever. Jupyter Notebooks is one of the leading open-source tools for developing and managing data analytics.

  • How do you plot a graph in Python Jupyter?

Simple Plot

The first line imports the pyplot graphing library from the matplotlib API. The third and fourth lines define the x and y axes respectively. The plot() method is called to plot the graph. The show() method is then used to display the graph.
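The description above corresponds to a snippet along these lines (the Agg backend is used here only so the example runs headless; in a notebook the figure renders inline):

```python
import matplotlib
matplotlib.use("Agg")            # headless backend for non-notebook runs
import matplotlib.pyplot as plt  # import the pyplot graphing library

x = [1, 2, 3, 4]                 # define the x axis values
y = [1, 4, 9, 16]                # define the y axis values
line, = plt.plot(x, y)           # plot() draws the graph
plt.show()                       # show() displays it
```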


Related videos:

How to Create a Data Visualization in Jupyter Notebook Using atoti


Related posts:

#jupyter 

Best Visualization Libraries Plugins for Jupyter and Python
Layne Fadel

1664004480

Discover 8 Visualization Libraries for Jupyter You Must Know

In this Jupyter article, let's learn about visualization: eight visualization libraries for Jupyter you should know.

Table of contents:

  • ipytree - Tree UI element for Jupyter.
  • ipyvizzu - Animated data storytelling tool.
  • ipyvolume - 3D plotting for Python in Jupyter based on widgets and WebGL.
  • ipywebrtc - Video/Audio streaming in Jupyter.
  • ipywidgets - UI widgets for Jupyter.
  • itk-jupyter-widgets - Interactive widgets to visualize images in 2D and 3D.
  • jp_doodle - Infrastructure for building special purpose interactive diagrams in 2D and 3D.
  • jupyter-gmaps - Interactive visualization library for Google Maps in Jupyter notebooks.

What is Jupyter?

JupyterLab is the latest web-based interactive development environment for notebooks, code, and data. Its flexible interface allows users to configure and arrange workflows in data science, scientific computing, computational journalism, and machine learning. A modular design invites extensions to expand and enrich functionality.

Jupyter Notebooks provide a data visualization framework called Qviz that enables you to visualize dataframes with improved charting options and Python plots on the Spark driver.


Discover 8 Visualization Libraries for Jupyter You Must Know

  1. ipytree

A Tree Widget using Jupyter-widgets protocol and jsTree

Installation

With conda:

$ conda install -c conda-forge ipytree

With pip:

$ pip install ipytree

If you use JupyterLab<=2:

$ jupyter labextension install @jupyter-widgets/jupyterlab-manager ipytree

If you have notebook 5.2 or below, you also need to execute:

$ jupyter nbextension enable --py --sys-prefix ipytree

View on GitHub


2.  ipyvizzu

Build animated charts in Jupyter notebooks with a simple Python syntax.


ipyvizzu is an animated charting tool for Jupyter, Google Colab, Databricks, Kaggle, and Deepnote notebooks, among other platforms. ipyvizzu enables data scientists and analysts to use animation for storytelling with data in Python. It's built on Vizzu, the open-source JavaScript/C++ charting library.

There is a new extension of ipyvizzu, ipyvizzu-story, with which the animated charts can be presented right from the notebooks. Since ipyvizzu-story's syntax is slightly different from ipyvizzu's, we suggest starting from the ipyvizzu-story repo if you're interested in using animated charts to present your findings live or to share your presentation as an HTML file.

Similarly to Vizzu, ipyvizzu utilizes a generic dataviz engine that generates many types of charts and seamlessly animates between them. It is designed for building animated data stories as it enables showing different perspectives of the data that the viewers can easily follow.

Main features:

  • Designed with animation in focus;
  • Defaults based on data visualization guidelines;
  • Works with Pandas dataframes, while JSON and inline data input are also available;
  • Auto-scrolling feature to keep the actual chart in position while executing multiple cells.

Installation

ipyvizzu requires the IPython, jsonschema and pandas packages.

pip install ipyvizzu

You can also use ipyvizzu by locally installing Vizzu, you can find more info about this in the documentation

View on GitHub


3.  ipyvolume

3d plotting for Python in the Jupyter notebook based on IPython widgets using WebGL.

Ipyvolume currently can:

  • Do (multi) volume rendering.
  • Create scatter plots (up to ~1 million glyphs).
  • Create quiver plots (like scatter, but with an arrow pointing in a particular direction).
  • Render isosurfaces.
  • Do lasso mouse selections.
  • Render in the Jupyter notebook, or create a standalone html page (or snippet to embed in your page).
  • Render in stereo, for virtual reality with Google Cardboard.
  • Animate in d3 style, for instance if the x coordinates or color of a scatter plots changes.
  • Animations / sequences, all scatter/quiver plot properties can be a list of arrays, which can represent time snapshots.
  • Stylable (although still basic)
  • Integrates with

Ipyvolume will probably (but does not yet):

  • Render labels in latex.
  • Show a custom popup on hovering over a glyph.

View on GitHub


4.  ipywebrtc

WebRTC and MediaStream API exposed in the Jupyter notebook/lab.

Installation

To install:

$ pip install ipywebrtc                             # will auto enable for notebook >= 5.3

For a development installation (requires npm),

$ git clone https://github.com/maartenbreddels/ipywebrtc
$ cd ipywebrtc
$ pip install -e .
$ jupyter nbextension install --py --symlink --sys-prefix ipywebrtc
$ jupyter nbextension enable --py --sys-prefix ipywebrtc
$ jupyter labextension develop . --overwrite

View on GitHub


5.  ipywidgets

ipywidgets, also known as jupyter-widgets or simply widgets, are interactive HTML widgets for Jupyter notebooks and the IPython kernel.

Notebooks come alive when interactive widgets are used. Users gain control of their data and can visualize changes in the data.

Learning becomes an immersive, fun experience. Researchers can easily see how changing inputs to a model impact the results. We hope you will add ipywidgets to your notebooks, and we're here to help you get started.

Core Interactive Widgets

The fundamental widgets provided by this library are called core interactive widgets. A demonstration notebook provides an overview of the core interactive widgets, including:

  • sliders
  • progress bars
  • text boxes
  • toggle buttons and checkboxes
  • display areas
  • and more
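Assuming ipywidgets is installed, a tiny sketch of two core widgets linked on the kernel side (widget classes are from the public API; the specific values are illustrative):

```python
import ipywidgets as widgets

# A slider and a text box, linked on the Python side so their values
# stay in sync; displaying them in a notebook shows live controls.
slider = widgets.IntSlider(value=5, min=0, max=10, description="n:")
text = widgets.IntText()
widgets.link((slider, "value"), (text, "value"))

slider.value = 7   # moving the slider also updates the text box
print(text.value)
```

widgets.link synchronizes the traits inside the kernel; jslink does the same in the browser without a kernel round-trip.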

Jupyter Interactive Widgets as a Framework

Besides the widgets already provided with the library, the framework can be extended with the development of custom widget libraries. For detailed information, please refer to the ipywidgets documentation.

Cookiecutter template for custom widget development

A template project for building custom widgets is available as a cookiecutter. This cookiecutter project helps custom widget authors get started with the packaging and the distribution of their custom Jupyter interactive widgets. The cookiecutter produces a project for a Jupyter interactive widget library following the current best practices for using interactive widgets. An implementation for a placeholder "Hello World" widget is provided as an example.

Popular widget libraries such as bqplot, pythreejs and ipyleaflet follow exactly the same template and directory structure. They serve as more advanced examples of usage of the Jupyter widget infrastructure.

View on GitHub


6.  itkwidgets

Interactive widgets to visualize images, point sets, and 3D geometry on the web.

Getting Started

Installation

Jupyter Notebook

To install the widgets for the Jupyter Notebook with pip:

pip install 'itkwidgets[notebook]>=1.0a8'

Then look for the ImJoy icon at the top in the Jupyter Notebook:

ImJoy Icon in Jupyter Notebook

Jupyter Lab

For Jupyter Lab 3 run:

pip install 'itkwidgets[lab]>=1.0a8'

Then look for the ImJoy icon at the top in Jupyter Lab:

ImJoy Icon in Jupyter Lab

View on GitHub


7.  jp_doodle

Tools for drawing 2d and 3d interactive visualizations using Jupyter proxy widgets


jp_doodle makes implementing special purpose interactive visualizations easy. It is designed to facilitate the development of bespoke scientific data presentation and interactive exploration tools.

Quick references: Please see the Javascript quick reference or the Python/Jupyter quick reference for an introduction to building visualizations using `jp_doodle`.

Installation

To install the package for use with Jupyter notebooks:

python -m pip install https://github.com/AaronWatters/jp_doodle/zipball/master

To use the package with Jupyter Lab you also need to build the Jupyterlab Javascript resources with widget support and jp_proxy_widget:

jupyter labextension install @jupyter-widgets/jupyterlab-manager  --no-build
jupyter labextension install jp_proxy_widget

View on GitHub


8.  gmaps

gmaps is a plugin for including interactive Google maps in the IPython Notebook.

Let's plot a heatmap of taxi pickups in San Francisco:

import gmaps
import gmaps.datasets
gmaps.configure(api_key="AI...") # Your Google API key

# load a Numpy array of (latitude, longitude) pairs
locations = gmaps.datasets.load_dataset("taxi_rides")

fig = gmaps.figure()
fig.add_layer(gmaps.heatmap_layer(locations))
fig

docs/source/_images/taxi_example.png

View on GitHub


Frequently Asked Questions About Visualization Jupyter

  • Is Jupyter Notebook a visualization tool?

Jupyter Notebooks provide a data visualization framework called Qviz that enables you to visualize dataframes with improved charting options and Python plots on the Spark driver.

  • How do you display data in a Jupyter Notebook?

In a Jupyter notebook, simply typing the name of a data frame results in a neatly formatted output. This is an excellent way to preview data; note, however, that by default only 100 rows and 20 columns will print.

  • What is visualization in Python?

The process of finding trends and correlations in our data by representing it pictorially is called data visualization. To perform data visualization in Python, we can use various Python data visualization modules such as Matplotlib, Seaborn, Plotly, etc.

  • Is Jupyter notebook good for data analysis?

Having proper data analytics and visualizations tools has become more important than ever. Jupyter Notebooks is one of the leading open-source tools for developing and managing data analytics.

  • How do you plot a graph in Python Jupyter?

Simple Plot

The first line imports the pyplot graphing library from the matplotlib API. The third and fourth lines define the x and y axes respectively. The plot() method is called to plot the graph. The show() method is then used to display the graph.


Related videos:

How to Create a Data Visualization in Jupyter Notebook Using atoti


Related posts:

#jupyter 

Discover 8 Visualization Libraries for Jupyter You Must Know
Layne Fadel

1663996500

Revealing 6 Best Visualization Libraries for Jupyter

In this Jupyter article, we will learn about visualization: six of the best visualization libraries for Jupyter.

Table of contents:

  • ipycytoscape - Widget for interactive graph visualization in Jupyter using cytoscape.js.
  • ipydagred3 - ipywidgets library for drawing directed acyclic graphs in jupyterlab using dagre-d3.
  • ipyleaflet - Interactive visualization library for Leaflet.js maps in Jupyter notebooks.
  • ipyregulartable - High performance, editable, stylable datagrids in Jupyter.
  • ipysheet - Interactive spreadsheets in Jupyter.
  • IPySigma - Prototype network visualization frontend for Jupyter notebooks.

What is Jupyter?

JupyterLab is the latest web-based interactive development environment for notebooks, code, and data. Its flexible interface allows users to configure and arrange workflows in data science, scientific computing, computational journalism, and machine learning. A modular design invites extensions to expand and enrich functionality.

Jupyter Notebooks provide a data visualization framework called Qviz that enables you to visualize dataframes with improved charting options and Python plots on the Spark driver.


Revealing 6 Best Visualization Libraries for Jupyter

  1. ipycytoscape

A widget enabling interactive graph visualization with cytoscape.js in JupyterLab and the Jupyter notebook.

Installation

With mamba:

mamba install -c conda-forge ipycytoscape

With conda:

conda install -c conda-forge ipycytoscape

With pip:

pip install ipycytoscape

Pandas installation

You can install the Pandas dependencies for ipycytoscape with pip:

pip install pandas

Or conda-forge:

mamba install pandas

View on GitHub


2.  ipydagred3

ipywidgets library for drawing directed acyclic graphs in jupyterlab using dagre-d3


You can install using pip:

pip install ipydagred3

Or if you use jupyterlab:

pip install ipydagred3
jupyter labextension install @jupyter-widgets/jupyterlab-manager

If you are using Jupyter Notebook 5.2 or earlier, you may also need to enable the nbextension:

jupyter nbextension enable --py [--sys-prefix|--user|--system] ipydagred3

Features

  • Dynamically create and modify graphs from python
  • Change color, shape, tooltip, etc
  • Click events (click on node or edge and get event in ipywidget indicating source, good for node inspector tools)

View on GitHub


3.  ipyleaflet

A Jupyter / Leaflet bridge enabling interactive maps in the Jupyter notebook.

Installation

Using conda:

conda install -c conda-forge ipyleaflet

Using pip:

pip install ipyleaflet

If you are using the classic Jupyter Notebook < 5.3 you need to run this extra command:

jupyter nbextension enable --py --sys-prefix ipyleaflet

If you are using JupyterLab <=2, you will need to install the JupyterLab extension:

jupyter labextension install @jupyter-widgets/jupyterlab-manager jupyter-leaflet

Installation from sources

For a development installation (requires yarn, you can install it with conda install -c conda-forge yarn):

git clone https://github.com/jupyter-widgets/ipyleaflet.git
cd ipyleaflet
pip install -e .

If you are using the classic Jupyter Notebook you need to install the nbextension:

jupyter nbextension install --py --symlink --sys-prefix --overwrite ipyleaflet
jupyter nbextension enable --py --sys-prefix --overwrite ipyleaflet

View on GitHub


4.  ipyregulartable

High performance, editable, stylable datagrids in jupyter and jupyterlab


An ipywidgets wrapper of regular-table for Jupyter.

Installation

You can install using pip:

pip install ipyregulartable

Or if you use jupyterlab:

pip install ipyregulartable
jupyter labextension install @jupyter-widgets/jupyterlab-manager

If you are using Jupyter Notebook 5.2 or earlier, you may also need to enable the nbextension:

jupyter nbextension enable --py [--sys-prefix|--user|--system] ipyregulartable

Data Model

It is very easy to construct a custom data model. Just implement the abstract methods on the base DataModel class.

class DataModel(with_metaclass(ABCMeta)):
    @abstractmethod
    def editable(self, x, y):
        '''Given an (x, y) coordinate, return whether it is editable'''

    @abstractmethod
    def rows(self):
        '''return total number of rows'''

    @abstractmethod
    def columns(self):
        '''return total number of columns'''

    @abstractmethod
    def dataslice(self, x0, y0, x1, y1):
        '''get slice of data from (x0, y0) to (x1, y1) inclusive'''

Any DataModel object can be provided as the argument to RegularTableWidget. Note that regular-table may make probing calls of the form (0, 0, 0, 0) to assess data limits.
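For illustration, here is a minimal in-memory model implementing the same four methods (a sketch using Python's abc module rather than ipyregulartable's own base class; ListModel is hypothetical):

```python
from abc import ABC, abstractmethod

class DataModel(ABC):
    """Same interface as above, restated with abc for a runnable sketch."""
    @abstractmethod
    def editable(self, x, y): ...
    @abstractmethod
    def rows(self): ...
    @abstractmethod
    def columns(self): ...
    @abstractmethod
    def dataslice(self, x0, y0, x1, y1): ...

class ListModel(DataModel):
    """Hypothetical read-only model backed by a list of rows."""
    def __init__(self, data):
        self._data = data
    def editable(self, x, y):
        return False
    def rows(self):
        return len(self._data)
    def columns(self):
        return len(self._data[0]) if self._data else 0
    def dataslice(self, x0, y0, x1, y1):
        # Inclusive bounds; tolerates regular-table's (0, 0, 0, 0) probes.
        return [row[x0:x1 + 1] for row in self._data[y0:y1 + 1]]

m = ListModel([[1, 2, 3], [4, 5, 6]])
print(m.rows(), m.columns())
print(m.dataslice(0, 0, 1, 1))
```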

View on GitHub


5.  ipysheet

Jupyter handsontable integration


Installation

With conda:

$ conda install -c conda-forge ipysheet

With pip:

$ pip install ipysheet

Development install

Note: You will need NodeJS to build the extension package.

The jlpm command is JupyterLab's pinned version of yarn that is installed with JupyterLab. You may use yarn or npm in lieu of jlpm below.

# Clone the repo to your local environment
# Change directory to the ipysheet directory
# Install package in development mode
pip install -e .
# Link your development version of the extension with JupyterLab
jupyter labextension develop . --overwrite
# Rebuild extension Typescript source after making changes
jlpm run build

You can watch the source directory and run JupyterLab at the same time in different terminals to watch for changes in the extension's source and automatically rebuild the extension.

# Watch the source directory in one terminal, automatically rebuilding when needed
jlpm run watch
# Run JupyterLab in another terminal
jupyter lab

With the watch command running, every saved change will immediately be built locally and available in your running JupyterLab. Refresh JupyterLab to load the change in your browser (you may need to wait several seconds for the extension to be rebuilt).

View on GitHub


6.  IPySigma

IPySigma is a lightweight Python package coupled with a node-express/socket.io app. It is designed to support a seamless workflow for graph visualization in Jupyter notebooks by using the jupyterlab/services JavaScript library to handle communication between networkx objects and sigma.js.

Manual Install:

git clone this repo and install both the python and node components.

Python

The prototype python package is contained in the ipysig folder.

From the root directory: build and activate a clean Python environment (>=2.7.10) with requirements.txt using virtualenv.

Run pip install -r requirements.txt to get the required packages.

Node.js

The node-express application is contained in the app folder.

Make sure your node version is >= v6.9.4 and that both npm and bower are installed globally.

From the root directory: cd ./app

Type npm install to install the node modules locally in the app top-level folder.

From app: cd ./browser

Type bower install to install the bower_components folder. (Note: these steps might change in future versions with browserify.)

View on GitHub


Frequently Asked Questions About Visualization Jupyter

  • Is Jupyter Notebook a visualization tool?

Jupyter Notebooks provide a data visualization framework called Qviz that enables you to visualize dataframes with improved charting options and Python plots on the Spark driver.

  • How do you display data in a Jupyter Notebook?

In a Jupyter notebook, simply typing the name of a DataFrame as the last expression in a cell results in a neatly formatted output. This is an excellent way to preview data; note, however, that by default only 100 rows and 20 columns are printed.
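In pandas terms, the preview and its display limits look like this (the exact defaults vary by pandas version, so the limits are set explicitly here):

```python
import pandas as pd

# Build a small DataFrame to preview
df = pd.DataFrame({"x": range(250), "y": range(250)})

# Display limits are configurable; defaults differ between pandas versions
pd.set_option("display.max_rows", 100)
pd.set_option("display.max_columns", 20)

# In a notebook, a bare `df` as the last line of a cell renders an HTML table;
# in a script, print() shows the same truncated text representation.
print(df)
```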

  • What is visualization in Python?

The process of finding trends and correlations in our data by representing it pictorially is called data visualization. To perform data visualization in Python, we can use various modules such as Matplotlib, Seaborn, and Plotly.

  • Is Jupyter notebook good for data analysis?

Having proper data analytics and visualization tools has become more important than ever. Jupyter Notebook is one of the leading open-source tools for developing and managing data analytics.

  • How do you plot a graph in Python Jupyter?

Simple Plot

The first line imports the pyplot graphing library from the matplotlib API. The third and fourth lines define the x and y axes respectively. The plot() method is called to plot the graph. The show() method is then used to display the graph.
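Put together, the simple plot described above is only a few lines of Matplotlib (the Agg backend line is added here so the snippet also runs headless outside a notebook):

```python
import matplotlib
matplotlib.use("Agg")            # headless backend; not needed inside a notebook
import matplotlib.pyplot as plt  # import the pyplot graphing library

x = [1, 2, 3, 4, 5]              # define the x axis
y = [1, 4, 9, 16, 25]            # define the y axis

plt.plot(x, y)                   # plot the graph
plt.show()                       # display it (renders inline in a notebook)
```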


Related videos:

#jupyter 

Revealing 6 Best Visualization Libraries for Jupyter
Franz Becker

1663993200

7 Useful Version Control Libraries in Jupyter

In this Jupyter and python article, let's learn about Version Control: 7 useful Version Control libraries in Jupyter

Table of contents:

  • databooks - A command-line utility that eases versioning and sharing of notebooks.
  • git - Extension for git integration.
  • jupyter-nbrequirements - Dependency management and optimization in Jupyter Notebooks.
  • nbdime - Tools for diffing and merging of Jupyter notebooks.
  • nbQA - Run any standard Python code quality tool on a Jupyter Notebook, from the command-line or via pre-commit.
  • Neptune - Version, manage and share notebook checkpoints in your projects.
  • ReviewNB - Code reviews for Jupyter Notebooks.

What is Jupyter used for?

Jupyter Notebook allows users to compile all aspects of a data project in one place making it easier to show the entire process of a project to your intended audience. Through the web-based application, users can create data visualizations and other components of a project to share with others via the platform.

Version control, also known as source code control, is used to track changes in code and other artifacts in software development and data science work.


7 Useful Version Control Libraries in Jupyter

  1. databooks

databooks is a package that eases collaboration between data scientists using Jupyter notebooks, by reducing the number of Git conflicts between notebooks and simplifying their resolution when they do occur.

The key features include:

  • CLI tool
    • Clear notebook metadata
    • Resolve git conflicts
  • Simple to use
  • Simple API for modelling and comparing notebooks using Pydantic

Installation

pip install databooks

Usage

Clear metadata

Simply specify the paths for notebook files to remove metadata. By doing so, we can already avoid many of the conflicts.

$ databooks meta [OPTIONS] PATHS...

databooks meta demo
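This is not databooks' own implementation, but conceptually, clearing metadata amounts to something like the following nbformat sketch (clear_metadata is a hypothetical helper; the fields cleared are illustrative):

```python
import nbformat

def clear_metadata(path: str) -> None:
    """Strip notebook- and cell-level metadata that tends to cause Git conflicts."""
    nb = nbformat.read(path, as_version=4)
    nb.metadata = {}                     # notebook-level metadata
    for cell in nb.cells:
        cell.metadata = {}               # cell-level metadata
        if cell.cell_type == "code":
            cell.execution_count = None  # execution counts churn on every run
    nbformat.write(nb, path)
```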

Fix git conflicts for notebooks

Specify the paths of the notebook files with conflicts to be fixed. databooks then finds the source notebooks that caused the conflicts and compares them (so no manual JSON manipulation!).

$ databooks fix [OPTIONS] PATHS...

View on GitHub


2.   jupyterlab-git

A JupyterLab extension for version control using Git

To see the extension in action, open the example notebook included in the Binder demo.

Requirements

  • JupyterLab >= 3.0 (older version available for 2.x)
  • Git (version >=2.x)

Usage

Install

To install, perform the following steps with pip:

pip install --upgrade jupyterlab jupyterlab-git

or with conda:

conda install -c conda-forge jupyterlab jupyterlab-git

For JupyterLab < 3, you will need to run the following command after installing the package:

jupyter lab build

Uninstall

pip uninstall jupyterlab-git

or with conda:

conda remove jupyterlab-git

For JupyterLab < 3, you will also need to run the following command after removing the Python package:

jupyter labextension uninstall @jupyterlab/git

View on GitHub


3.  jupyter-nbrequirements

Dependency management and optimization in Jupyter Notebooks.

About

This extension provides control over the notebook dependencies.

The main goals of the project are the following:

  • manage notebook requirements without leaving the notebook
  • provide a unique and optimized* environment for each notebook

*The requirements are optimized using the Thoth resolution engine

Installation

pip install jupyter-nbrequirements

Then enable the required extension (this might not be needed with the latest version, but to be sure):

jupyter nbextension install --user --py jupyter_nbrequirements

Usage

NBRequirements UI

Since v0.4.0, we've introduced a new UI! Check it out, interact with it and see what it can offer you!

NBRequirements UI

Our development efforts will from now on focus primarily on improving the UI.

View on GitHub


4.  nbdime

Tools for diffing and merging of Jupyter notebooks.

nbdime provides tools for diffing and merging of Jupyter Notebooks.

  • nbdiff compares notebooks in a terminal-friendly way
  • nbmerge performs a three-way merge of notebooks with automatic conflict resolution
  • nbdiff-web shows a rich rendered diff of notebooks
  • nbmerge-web gives you a web-based three-way merge tool for notebooks
  • nbshow presents a single notebook in a terminal-friendly way

Installation

Install nbdime with pip:

pip install nbdime

See the installation docs for more installation details and development installation instructions.

View on GitHub


5.  nbQA

Run isort, pyupgrade, mypy, pylint, flake8, and more on Jupyter Notebooks


🎉 Installation

In your virtual environment, run (note: the $ is not part of the command):

$ python -m pip install -U nbqa

To also install all supported linters/formatters:

$ python -m pip install -U "nbqa[toolchain]"

Or, if you are using conda:

$ conda install -c conda-forge nbqa

🚀 Examples

Command-line

Reformat your notebooks with black:

$ nbqa black my_notebook.ipynb
reformatted my_notebook.ipynb
All done! ✨ 🍰 ✨
1 files reformatted.

Sort your imports with isort:

$ nbqa isort my_notebook.ipynb --float-to-top
Fixing my_notebook.ipynb

Upgrade your syntax with pyupgrade:

$ nbqa pyupgrade my_notebook.ipynb --py37-plus
Rewriting my_notebook.ipynb

View on GitHub


6.  Neptune

Neptune is a lightweight solution designed for:

  • Experiment tracking: log, display, organize, and compare ML experiments in a single place.
  • Model registry: version, store, manage, and query trained models and model-building metadata.
  • Monitoring ML runs live: record and monitor model training, evaluation, or production runs live.  

Getting started

Step 1: Sign up for a free account

Step 2: Install the Neptune client library

pip install neptune-client

Step 3: Connect Neptune to your code

import neptune.new as neptune

run = neptune.init(project="common/quickstarts", api_token="ANONYMOUS")

run["parameters"] = {
    "batch_size": 64,
    "dropout": 0.2,
    "optim": {"learning_rate": 0.001, "optimizer": "Adam"},
}

for epoch in range(100):
    run["train/accuracy"].log(epoch * 0.6)
    run["train/loss"].log(epoch * 0.4)

run["f1_score"] = 0.66

View on GitHub


Frequently asked questions about Version Control for jupyter

  • Can you version control Jupyter Notebook?

Version Control the Python Script

Add the .py file to version control. Every saved change to a Python cell in this Jupyter notebook will now be reflected in the .py file.

  • How do I manage a Jupyter Notebook in git?

Steps

  1. Open the required Jupyter notebook and save the changes.
  2. From the left sidebar, click on the GitHub Versions icon.
  3. Click the Push icon to commit. A dialog opens to push commits.
  4. Add a commit message and click Save to push the commit to the GitHub repository.
  • How do I check my Jupyter Notebook version?

To check the Python version in your Jupyter notebook, first import the python_version function with "from platform import python_version". Then call python_version(), which returns a string with the version number running in your Jupyter notebook, such as "3.7.11".
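In full, the check is two lines of standard-library Python:

```python
from platform import python_version

version = python_version()  # e.g. "3.7.11"; the exact value depends on your kernel
print(version)
```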

  • What version of Python is Jupyter using?

Jupyter installation requires Python 3.3 or greater, or Python 2.7. IPython 1.x, which included the parts that later became Jupyter, was the last version to support Python 3.2 and 2.6. As an existing Python user, you may wish to install Jupyter using Python's package manager, pip, instead of Anaconda.


Related videos:

#jupyter 

7 Useful Version Control Libraries in Jupyter
Layne Fadel

1663991220

Suggest 5 Best Visualization Libraries for Jupyter

In this Jupyter article, let's learn about Visualization: Suggest 5 Best Visualization Libraries for Jupyter

Table of contents:

  • Altair - Declarative visualization library for Python, based on Vega and Vega-Lite.
  • Bokeh - Interactive visualization library that targets modern web browsers for presentation.
  • bqplot - Grammar of Graphics-based interactive plotting framework for Jupyter.
  • Evidently - Interactive reports to analyze machine learning models during validation or production monitoring.
  • ipychart - Interactive Chart.js plots in Jupyter.

what is Jupyter?

JupyterLab is the latest web-based interactive development environment for notebooks, code, and data. Its flexible interface allows users to configure and arrange workflows in data science, scientific computing, computational journalism, and machine learning. A modular design invites extensions to expand and enrich functionality.

Jupyter Notebooks provide a data visualization framework called Qviz that enables you to visualize dataframes with improved charting options and Python plots on the Spark driver.


Suggest 5 Best Visualization Libraries for Jupyter

  1. Altair

The Vega-Altair open source project is not affiliated with Altair Engineering, Inc.

Vega-Altair is a declarative statistical visualization library for Python. With Vega-Altair, you can spend more time understanding your data and its meaning. Vega-Altair's API is simple, friendly and consistent and built on top of the powerful Vega-Lite JSON specification. This elegant simplicity produces beautiful and effective visualizations with a minimal amount of code. Vega-Altair was originally developed by Jake Vanderplas and Brian Granger in close collaboration with the UW Interactive Data Lab.

Example

Here is an example using Vega-Altair to quickly visualize and display a dataset with the native Vega-Lite renderer in the JupyterLab:

import altair as alt

# load a simple dataset as a pandas DataFrame
from vega_datasets import data
cars = data.cars()

alt.Chart(cars).mark_point().encode(
    x='Horsepower',
    y='Miles_per_Gallon',
    color='Origin',
)

Vega-Altair Visualization

One of the unique features of Vega-Altair, inherited from Vega-Lite, is a declarative grammar of not just visualization, but interaction. With a few modifications to the example above we can create a linked histogram that is filtered based on a selection of the scatter plot.

import altair as alt
from vega_datasets import data

source = data.cars()

brush = alt.selection(type='interval')

points = alt.Chart(source).mark_point().encode(
    x='Horsepower',
    y='Miles_per_Gallon',
    color=alt.condition(brush, 'Origin', alt.value('lightgray'))
).add_selection(
    brush
)

bars = alt.Chart(source).mark_bar().encode(
    y='Origin',
    color='Origin',
    x='count(Origin)'
).transform_filter(
    brush
)

points & bars


Vega-Altair Visualization Gif

View on GitHub


2. Bokeh

Bokeh is an interactive visualization library for modern web browsers. It provides elegant, concise construction of versatile graphics and affords high-performance interactivity across large or streaming datasets. Bokeh can help anyone who wants to create interactive plots, dashboards, and data applications quickly and easily.

Installation

To install Bokeh and its required dependencies using conda, enter the following command at a Bash or Windows command prompt:

conda install bokeh

To install using pip, enter the following command at a Bash or Windows command prompt:

pip install bokeh

Refer to the installation documentation for more details.

Resources

Once Bokeh is installed, check out the first steps guides.

Visit the full documentation site to view the User's Guide or launch the Bokeh tutorial to learn about Bokeh in live Jupyter Notebooks.

View on GitHub


3.  bqplot

2-D plotting library for Project Jupyter

Introduction

bqplot is a 2-D visualization system for Jupyter, based on the constructs of the Grammar of Graphics.

Usage

Wealth of Nations

In bqplot, every component of a plot is an interactive widget. This allows the user to integrate visualizations with other Jupyter interactive widgets to create integrated GUIs with a few lines of Python code.

Goals

  • Provide a unified framework for 2-D visualizations with a pythonic API
  • Provide a sensible API for adding user interactions (panning, zooming, selection, etc)

Two APIs are provided

  • Object Model, which is inspired by the constructs of the Grammar of Graphics (figure, marks, axes, scales). This API is verbose but is fully customizable
  • pyplot, which is a context-based API similar to Matplotlib's pyplot. pyplot provides sensible default choices for most parameters

View on GitHub


4.  Evidently

An open-source framework to evaluate, test and monitor ML models in production.

Evidently helps analyze and track data and ML model quality throughout the model lifecycle. You can think of it as an evaluation layer that fits into the existing ML stack.

Evidently has a modular approach with 3 interfaces on top of the shared analyzer functionality.

Installing from PyPI

macOS and Linux

Evidently is available as a PyPI package. To install it using pip package manager, run:

$ pip install evidently

If you want to generate reports as HTML files or export as JSON profiles, the installation is now complete.

If you want to display the dashboards directly in a Jupyter notebook, you should install the Jupyter nbextension. After installing evidently, run the following two commands in the terminal from the evidently directory.

To install jupyter nbextension, run:

$ jupyter nbextension install --sys-prefix --symlink --overwrite --py evidently

To enable it, run:

$ jupyter nbextension enable evidently --py --sys-prefix

That's it! A single run after the installation is enough.

Note: if you use Jupyter Lab, the dashboard might not display in the notebook. However, the report generation in a separate HTML file will work correctly.

View on GitHub


5.  ipychart

The power of Chart.js with Python

Installation

You can install ipychart from your terminal using pip or conda:

# using pip
$ pip install ipychart

# using conda
$ conda install -c conda-forge ipychart

Usage

Create charts with Python in a very similar way to creating charts with Chart.js. The charts created are fully configurable, interactive, and modular, and are displayed directly in the output of the cells of your Jupyter notebook environment:

ipychart-demo.gif

You can also create charts directly from a pandas dataframe. See the Pandas Interface section of the documentation for more details.

View on GitHub




Related videos:

Data Visualization using Python on Jupyter Notebook


Related posts:

#jupyter 

Suggest 5 Best Visualization Libraries for Jupyter
Franz Becker

1663983402

Useful Rendering/Publishing/Conversion Libraries Plugins in Jupyter

In this Jupyter and python article, let's learn about Rendering/Publishing/Conversion: Useful Rendering/Publishing/Conversion Libraries Plugins in Jupyter

Table of contents:

  • Bookbook - Bookbook converts a set of notebooks in a directory to HTML or PDF, preserving cross references within and between notebooks.
  • ContainDS Dashboards - JupyterHub extension to host authenticated scripts or notebooks in any framework (Voilà, Streamlit, Plotly Dash etc).
  • Ganimede - Store, version, edit and execute notebooks in sandboxes and integrate them directly via REST interfaces.
  • Jupyter Book - Build publication-quality books and documents from computational material.
  • jupyterlab_nbconvert_nocode - NBConvert exporters for PDF/HTML export without code cells.
  • Jupytext - Convert and synchronize notebooks with text formats (e.g. Python or Markdown files) that work well under version control.
  • jut - CLI to nicely display notebooks in the terminal.
  • Kapitsa - CLI to search local Jupyter notebooks.
  • Mercury - Convert notebooks into web applications.
  • nbconvert - Convert notebooks to other formats.


What is Jupyter used for?

Jupyter Notebook allows users to compile all aspects of a data project in one place making it easier to show the entire process of a project to your intended audience. Through the web-based application, users can create data visualizations and other components of a project to share with others via the platform.


Useful Rendering/Publishing/Conversion Libraries Plugins in Jupyter

  1. Bookbook

Bookbook converts a set of notebooks in a directory to HTML or PDF, preserving cross references within and between notebooks.

This code is in early development, so use it at your own risk.

Installation

Bookbook requires Python 3.5.

pip install bookbook

To install locally as an editable install, run:

pip install flit
git clone https://github.com/takluyver/bookbook.git
cd bookbook
flit install --symlink

Running bookbook

bookbook expects a directory of notebooks whose names indicate their order. Specifically, the file names must have the form ``x-y.ipynb``, where typically x is a number indicating the order and y is a chapter title; e.g.: 01-introduction.ipynb.

To run bookbook:

python3 -m bookbook.html           # HTML output under html/
python3 -m bookbook.latex [--pdf]  # Latex/PDF output as combined.(tex|pdf)

Add --help to either command for more options.

View on GitHub


2.  cdsdashboards

A Dashboard publishing solution for Data Science teams to share results with decision makers.

Run a private on-premise or cloud-based JupyterHub with extensions to instantly publish apps and notebooks as user-friendly interactive dashboards to share with non-technical colleagues.

Currently supported frameworks include Voilà, Streamlit, Plotly Dash, Bokeh, Panel, and R Shiny.

This open source package allows data scientists to instantly and reliably publish interactive notebooks or other scripts as secure interactive web apps.

Source files can be pulled from a Git repo or from the user's Jupyter tree.

Any authorised JupyterHub user can view the dashboard, or the publisher can choose to give permission only to named users.

How it works

  • Data scientist creates a Jupyter Notebook as normal or uploads Python/R files etc
  • Data scientist creates a new Dashboard to clone their Jupyter server
  • Other logged-in JupyterHub users see the dashboard in their list
  • Click to launch as a server, using OAuth to gain access
  • User sees a safe user-friendly version of the original notebook - served by Voilà, Streamlit, Dash, Bokeh, Panel, R Shiny etc.

All of this works through a new Dashboards menu item added to JupyterHub's header.

View on GitHub


3.  Ganimede

Store, version, edit and execute notebooks in sandboxes and integrate them directly via REST interfaces.

Use cases

  • Ability to write machine learning logic and expose it to other systems as a REST API
  • Write Jupyter notebooks locally and run them on a centralised, more powerful machine to reduce cost
  • Create a framework to connect Jupyter notebooks directly to other systems

Requirements

  • docker
  • redis

Stack

  • Redis
  • FastAPI
  • Papermill
  • Jupyter
  • Poetry
  • Docker

Build

  • Clone the repo
  • Run poetry install
  • Run run.py, scripts/launch.sh, or cd docker && docker-compose up -d

Deployment

  • Clone the repo and, in the docker folder, run docker-compose build. The Docker image will be built
  • Push to registry or use your custom publishing method to publish the image

View on GitHub


4.  Jupyter Book

Jupyter Book is an open-source tool for building publication-quality books and documents from computational material.

Jupyter Book allows users to

  • write their content in markdown files or Jupyter notebooks,
  • include computational elements (e.g., code cells) in either type,
  • include rich syntax such as citations, cross-references, and numbered equations, and
  • run the embedded code cells with a simple command, cache the outputs, and convert this content into:
    • a web-based interactive book and
    • a publication-quality PDF.

Governance of this project

Jupyter Book is still developing relatively rapidly, so please be patient if things change or features iterate and change quickly. Once Jupyter Book hits 1.0, it will slow down considerably!

View on GitHub


5.  jupyterlab_nbconvert_nocode

A simple helper library with 2 NBConvert exporters for PDF/HTML export with no code cells

View on GitHub


6.  jupytext

Jupyter Notebooks as Markdown Documents, Julia, Python or R scripts


Have you always wished Jupyter notebooks were plain text documents? Wished you could edit them in your favorite IDE? And get clear and meaningful diffs when doing version control? Then... Jupytext may well be the tool you're looking for!

Jupytext is a plugin for Jupyter that can save Jupyter notebooks as either Markdown documents or Julia, Python or R scripts.

Use cases

Common use cases for Jupytext are:

  • Doing version control on Jupyter Notebooks
  • Editing, merging or refactoring notebooks in your favorite text editor
  • Applying quality-assurance checks on notebooks.

Install

You can install Jupytext with

  • pip install jupytext
  • or conda install jupytext -c conda-forge.

Please note that Jupytext includes an extension for Jupyter Lab. In the latest version of Jupytext, this extension is compatible with Jupyter Lab >= 3.0 only. If you use Jupyter Lab 2.x, please either stay with Jupytext 1.8.2, or install, on top of the latest pip or conda version of Jupytext, a version of the extension that is compatible with Jupyter Lab 2.x:

jupyter labextension install jupyterlab-jupytext@1.2.2  # For Jupyter Lab 2.x

Then, restart your Jupyter server (for more installation details, see the install section in the documentation).

View on GitHub


7.  jut

jut - JUpyter notebook Terminal viewer.

A command-line tool to view IPython/Jupyter notebooks in the terminal.

Install

pip install jut

Usage

$ jut --help
Usage: cli.py [OPTIONS] PATH

Options:
  -he, --head INTEGER RANGE  Display first n cells. Default is 10
  -t, --tail INTEGER RANGE   Display last n cells
  -p, --single-page          Should the result be in a single page?
  -f, --full-display         Should all the contents in the file displayed?
  --force-colors             Force colored output even if stdout is not a
                             terminal

  -s, --start INTEGER RANGE  Display the cells starting from the cell number
  -e, --end INTEGER RANGE    Display the cells till the cell number
  --exclude-output-cells     Exclude the notebook output cells from the output
  --no-cell-border           Don't display the result in a cell with border
  --help                     Show this message and exit.

View on GitHub


8.  kapitsa

Search your Jupyter notebook files (.ipynb) from bash.


Motivation

As the number of projects grows, it becomes difficult to keep notebooks organized and searchable on your local machine.

See the blog post for how this project got started and the tools behind it.

Solution

Kapitsa is a simple, minimally configured command-line program that provides a centralized way to search and keep track of your notebooks. Users simply configure the paths where they keep their notebooks. Kapitsa provides convenience methods to do the following:

  1. Search Code - Query your notebooks' source.
  2. Search Tags - Query your notebooks' cell tags.
  3. List Recent - List notebooks you have worked on recently.
  4. List Directories - View all directories on your system that contain notebooks.
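Kapitsa itself works from the shell, but the "list directories" idea can be sketched in a few lines of standard-library Python (an illustration, not Kapitsa's implementation):

```python
from pathlib import Path

def notebook_dirs(root: str) -> set:
    """Return the set of directories under `root` that contain .ipynb files."""
    return {nb.parent for nb in Path(root).rglob("*.ipynb")}
```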

View on GitHub


9.  Mercury

Mercury is a tool for converting Python notebooks into interactive web applications and sharing them with non-programmers.

  • You define interactive widgets for your notebook with a YAML header.
  • Your users can change the widget values, execute the notebook, and save the result (as a PDF or HTML file).
  • You can hide your code so as not to scare your (non-coding) collaborators.
  • Easily deploy to any server.

Mercury is dual-licensed. Looking for dedicated support, a commercial-friendly license, and more features? Mercury Pro is for you. Please see the details on our website.

Installation

Compatible with Python 3.7 and higher.

Install with pip:

pip install mljar-mercury

Or with conda:

conda install -c conda-forge mljar-mercury

View on GitHub


10.  nbconvert

Using nbconvert enables:

  • presentation of information in familiar formats, such as PDF.
  • publishing of research using LaTeX, opening the door to embedding notebooks in papers.
  • collaboration with others who may not use the notebook in their work.
  • sharing contents with many people via the web using HTML.

Overall, notebook conversion and the nbconvert tool give scientists and researchers the flexibility to deliver information in a timely way across different formats.

Primarily, the nbconvert tool allows you to convert a Jupyter .ipynb notebook document file into another static format including HTML, LaTeX, PDF, Markdown, reStructuredText, and more. nbconvert can also add productivity to your workflow when used to execute notebooks programmatically.

If used as a Python library (import nbconvert), nbconvert adds notebook conversion within a project. For example, nbconvert is used to implement the "Download as" feature within the Jupyter Notebook web application. When used as a command line tool (invoked as jupyter nbconvert ...), users can conveniently convert just one or a batch of notebook files to another format.
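As a library sketch (assuming nbconvert and nbformat are installed), converting an in-memory notebook to HTML looks roughly like this:

```python
import nbformat
from nbconvert import HTMLExporter

# Build a tiny notebook in memory
nb = nbformat.v4.new_notebook()
nb.cells.append(nbformat.v4.new_markdown_cell("# Hello, nbconvert"))

# Convert it to a standalone HTML document
body, resources = HTMLExporter().from_notebook_node(nb)
print(len(body))  # `body` holds the HTML source as a string
```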


View on GitHub


Related posts:

#jupyter 

Useful Rendering/Publishing/Conversion Libraries Plugins in Jupyter
Layne Fadel

1663982820

6 Useful Collaboration/Education Libraries for Jupyter

In this Jupyter article, let's learn about Collaboration/Education: 6 Useful Collaboration/Education Libraries for Jupyter

Table of contents:

  • callgraph - Magic to display a function call graph.
  • IllumiDesk - Docker-based JupyterHub + LTI + nbgrader distribution for education.
  • IPythonBlocks - Practice Python with colored grids in Jupyter.
  • jupyter-drive - Google drive for Jupyter.
  • jupyter-edx-grader-xblock - Auto-grade a student assignment created as a Jupyter notebook and write the score in the Open edX gradebook.
  • jupyter-viewer-xblock - Fetch and display part of, or an entire Jupyter Notebook in an Open edX XBlock.

what is Jupyter?

JupyterLab is the latest web-based interactive development environment for notebooks, code, and data. Its flexible interface allows users to configure and arrange workflows in data science, scientific computing, computational journalism, and machine learning. A modular design invites extensions to expand and enrich functionality.


6 Useful Collaboration/Education Libraries for Jupyter

  1. Callgraph

Callgraph is a Python package that defines a decorator, and a Jupyter magic, to draw dynamic call graphs of Python function calls.

Itโ€™s intended for classroom use, but may also be useful for self-guided exploration.

The package defines a Jupyter IPython magic, %callgraph, that displays a call graph within a Jupyter cell:

from functools import lru_cache

@lru_cache()
def lev(a, b):
    if "" in (a, b):
        return len(a) + len(b)

    candidates = []
    if a[0] == b[0]:
        candidates.append(lev(a[1:], b[1:]))
    else:
        candidates.append(lev(a[1:], b[1:]) + 1)
    candidates.append(lev(a, b[1:]) + 1)
    candidates.append(lev(a[1:], b) + 1)
    return min(candidates)

%callgraph -w10 lev("big", "dog"); lev("dig", "dog")

image0

It also provides a Python decorator, callgraph.decorator, that instruments a function to collect call graph information and render the result.

View on GitHub


2.  IllumiDesk

This monorepo is used to maintain IllumiDesk's authenticators, spawners, and microservices. This setup assumes that all services are running with Kubernetes. Please refer to our help guides for more information.

Overview

Jupyter Notebooks are a great education tool for a variety of subjects since it offers instructors and learners a unified document standard to combine markdown, code, and rich visualizations. With the proper setup, Jupyter Notebooks allow organizations to enhance their learning experiences.

When combined with the nbgrader package, instructors are able to automate many of the tasks associated with grading and providing feedback to their users.

Why?

Running a multi-user setup using JupyterHub and nbgrader with containers requires some additional setup. Some of the questions this distribution attempts to answer are:

  • How do we manage authentication when the user isn't a system user within the JupyterHub or Jupyter Notebook container?
  • How do we manage permissions for student and instructor folders?
  • How do we securely synchronize information with the Learning Management System (LMS) using the LTI 1.1 and LTI 1.3 standards?
  • How do we improve the developer experience to provide more consistency with versions used in production, such as with Kubernetes?
  • How should deployment tools reflect these container-based requirements and also (to the extent possible) offer users an option that is cloud-vendor agnostic?

Our goal is to remove these obstacles so that you can get on with the teaching!

View on GitHub


3.  ipythonblocks

ipythonblocks is a teaching tool for use with the IPython Notebook. It provides a BlockGrid object whose representation is an HTML table. Individual table cells are represented by Block objects that have .red, .green, and .blue attributes by which the color of that cell can be specified.

ipythonblocks allows students to experiment with Python flow control concepts and immediately see the effects of their code represented in a colorful, attractive way. BlockGrid objects can be indexed and sliced like 2D NumPy arrays making them good practice for learning how to access arrays.
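Outside a notebook the HTML rendering is unavailable, but the interaction pattern is easy to sketch with toy stand-ins (the Block and BlockGrid classes below are hypothetical illustrations, not the package's own):

```python
class Block:
    """Toy stand-in for ipythonblocks.Block: one cell with RGB attributes."""
    def __init__(self, red=0, green=0, blue=0):
        self.red, self.green, self.blue = red, green, blue

class BlockGrid:
    """Toy stand-in for ipythonblocks.BlockGrid with [row, col] indexing."""
    def __init__(self, width, height):
        self.width, self.height = width, height
        self._rows = [[Block() for _ in range(width)] for _ in range(height)]

    def __getitem__(self, index):
        row, col = index
        return self._rows[row][col]

grid = BlockGrid(8, 5)
for col in range(grid.width):   # a typical student exercise: paint the top row
    grid[0, col].red = 255
```

With the real package, the same loop runs in a notebook cell and the grid immediately re-renders with a red top row, giving students instant visual feedback on their flow-control code.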

Install

ipythonblocks can be installed with pip:

pip install ipythonblocks

However, the package is contained in a single .py file, so if you prefer, you can just grab ipythonblocks.py and copy it to wherever you want to use it (useful for packaging with other teaching materials).

View on GitHub


4.  jupyter-drive

Google drive for jupyter notebooks

The jupyter-drive project is no longer supported due to the deprecation of the Google Realtime API.

You might instead consider jupyterlab/jupyterlab-google-drive, which adds a Google Drive file browser to the left sidebar of JupyterLab.

Installation

This repository contains custom Contents classes that allow IPython to use Google Drive for file management. The code is organized as a Python package that contains functions to install a Jupyter Notebook JavaScript extension, and to activate/deactivate different IPython profiles to be used with Google Drive.

To install this package, run

git clone git://github.com/jupyter/jupyter-drive.git
pip install -e jupyter-drive

This will install the package in development mode with pip, which means that any change you make to the repository will be reflected in the importable version immediately.

To install the notebook extension and activate your configuration with Google Drive, run

python -m jupyterdrive

To deactivate, run

python -m jupyterdrive --deactivate

Note on Jupyter/IPython

We aim to support IPython 3.x and later versions, though configuration changes between IPython 3.x and later releases mean the exact configuration path may vary from system to system.

View on GitHub


5.  jupyter-edx-grader-xblock

Grade Jupyter Notebooks in Open edX


Auto-grade a student assignment created as a Jupyter notebook, using the nbgrader Jupyter extension, and write the score in the Open edX gradebook

See also the Jupyter Notebook Viewer XBlock to populate course content from publicly available Jupyter notebooks.

This XBlock uses Docker and nbgrader to create a Python environment and auto-grade a Jupyter Notebook, and it tracks the resulting score as a problem within an Open edX graded sub-section. It allows an instructor to upload an assignment created with nbgrader, upload a requirements.txt file to configure the environment, set the maximum number of tries for the student, and set a deadline for the submission. The student downloads the assignment file, answers the questions (executing all cells), and uploads the solution, which is immediately auto-graded. The student gets a visual score report, and the score is added to their progress in the Open edX gradebook.
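The grading loop at the heart of this flow is straightforward to sketch: execute the student's code cells, then run the instructor's test cells and award points for each one that passes. The cell structure and autograde function below are hypothetical stand-ins for illustration only; the real work is done by nbgrader inside a Docker container.

```python
def autograde(student_cells, test_cells):
    """Run student code, then award points per passing instructor test cell.
    (Illustrative sketch only -- not nbgrader's actual API.)"""
    namespace = {}
    for cell in student_cells:          # execute the submission top to bottom
        exec(cell, namespace)
    score = 0
    for points, test in test_cells:     # each test cell is worth some points
        try:
            exec(test, namespace)
            score += points
        except AssertionError:
            pass                        # failed test: no points awarded
    return score

student_cells = ["def square(x):\n    return x + x"]  # buggy submission
test_cells = [(1, "assert square(2) == 4"),           # passes by accident
              (1, "assert square(3) == 9")]           # catches the bug
print(autograde(student_cells, test_cells))           # 1 out of 2 points
```

Running the submission in an isolated per-student container, as this XBlock does, keeps arbitrary student code from touching the grading host.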

Features and Support

  • Integrated into EdX Grading System
  • Maximum point values are pulled from the instructor version of the notebook
  • A separate Python3 virtual environment is kept for each course
  • Each student's notebook is run within its own Docker container
  • Several Other Configuration Options
  • Only supports auto-graded cells - Does not support manually graded cells.

View on GitHub


6.  jupyter-viewer-xblock

Fetch and display part of, or an entire Jupyter Notebook in an XBlock.

Jupyter is a "killer app" for education, said Prof. Lorena Barba in her keynote at the 2014 Scientific Python Conference (video available). Many people are writing lessons, tutorials, whole courses and even books using Jupyter. It is a new genre of open educational resource (OER). What if you want to create an online course on Open edX using content originally written as Jupyter notebook? You certainly don't want to duplicate the content, much less copy-and-paste. This XBlock allows you to embed the content dynamically from a notebook available on a public URL.
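Under the hood, "embed the content dynamically" reduces to fetching the notebook's JSON and rendering its cells as HTML. A rough stdlib-only sketch of that reduction (the render_cells helper is hypothetical; the XBlock has its own renderer and fetches the JSON over HTTP):

```python
import json

def render_cells(raw_notebook_json):
    """Turn notebook cells into crude HTML fragments (illustration only)."""
    html = []
    for cell in json.loads(raw_notebook_json)["cells"]:
        body = "".join(cell["source"])
        if cell["cell_type"] == "markdown":
            html.append("<div class='markdown'>{}</div>".format(body))
        elif cell["cell_type"] == "code":
            html.append("<pre><code>{}</code></pre>".format(body))
    return "\n".join(html)

# In real use this JSON would be downloaded from the notebook's public URL.
raw = json.dumps({"cells": [
    {"cell_type": "markdown", "source": ["# Lesson 1\n"]},
    {"cell_type": "code", "source": ["print(1 + 1)\n"]},
]})
print(render_cells(raw))
```

Because the notebook stays at its public URL, updating the source notebook updates the embedded course content with no copy-and-paste.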

Prof. Barba used the XBlock in the second half of her course module, Get Data Off The Ground with Python. Check it out!

Installation

XBlock

  • Log in as the root user: sudo -i
  • New Installation:
    • /edx/bin/pip.edxapp install git+https://github.com/ibleducation/jupyter-viewer-xblock.git
  • Re-Installation:
    • /edx/bin/pip.edxapp install --upgrade --no-deps --force-reinstall git+https://github.com/ibleducation/jupyter-viewer-xblock.git
  • Restart the edxapp via /edx/bin/supervisorctl restart edxapp

Edx Server Setup

  • In the studio, go to the course you would like to implement this XBlock in
  • Under Settings at the top, select Advanced Settings
  • In the Advanced Module List, add: "xblock_jupyter_viewer"
    • Ensure there is a comma after the second-to-last entry, and no comma after the last entry
  • Select Save

The viewer can now be added to a unit by selecting Jupyter Notebook Viewer from the Advanced component menu.

View on GitHub


Related videos:

Collaborative Editing in Jupyter Notebook


Related posts:

#jupyter 

6 Useful Collaboration/Education Libraries for Jupyter
Layne Fadel

1663974660

Revealing 7 Useful Runtimes/Frontends Libraries for Jupyter

In this Jupyter article, let's learn about Runtimes/Frontends: Revealing 7 Useful Runtimes/Frontends Libraries for Jupyter

Table of contents:

  • JupyterWith - Nix-based framework for the definition of declarative and reproducible Jupyter environments.
  • kaggle/docker-python - Kaggle Python docker image that includes datasets and packages.
  • ML Workspace - Docker image that includes Jupyter(Lab) and various packages for data science/machine learning.
  • nteract - Native desktop notebook frontend.
  • Stencila - Native desktop notebook frontend.
  • Visual Studio Code - Native desktop notebook frontend.
  • voila - Notebooks as interactive standalone web applications.

What is Jupyter?

JupyterLab is the latest web-based interactive development environment for notebooks, code, and data. Its flexible interface allows users to configure and arrange workflows in data science, scientific computing, computational journalism, and machine learning. A modular design invites extensions to expand and enrich functionality.


Revealing 7 Useful Runtimes/Frontends Libraries for Jupyter

  1. JupyterWith

This repository provides a Nix-based framework for the definition of declarative and reproducible Jupyter environments. These environments include JupyterLab - configurable with extensions - the classic notebook, and configurable Jupyter kernels.

In practice, a Jupyter environment is defined in a single shell.nix file which can be distributed together with a notebook as a self-contained reproducible package.

Getting started

Using Nix-Shell

Nix must be installed in order to use JupyterWith. A simple JupyterLab environment with kernels can be defined in a shell.nix file such as:

let
  jupyter = import (builtins.fetchGit {
    url = "https://github.com/tweag/jupyterWith";
    # Example working revision, check out the latest one.
    rev = "45f9a774e981d3a3fb6a1e1269e33b4624f9740e";
  }) {};

  iPython = jupyter.kernels.iPythonWith {
    name = "python";
    packages = p: with p; [ numpy ];
  };

  iHaskell = jupyter.kernels.iHaskellWith {
    name = "haskell";
    packages = p: with p; [ hvega formatting ];
  };

  jupyterEnvironment =
    jupyter.jupyterlabWith {
      kernels = [ iPython iHaskell ];
    };
in
  jupyterEnvironment.env

JupyterLab can then be started by running:

nix-shell --command "jupyter lab"

This can take a while the first time it is run, because all of JupyterLab's dependencies have to be downloaded, built, and installed. Subsequent runs of the same environment are nearly instantaneous, and even runs with changed packages or kernels are much faster, since most dependencies will already be cached.

This process can be largely accelerated by using cachix:

cachix use jupyterwith

View on GitHub


2.  docker-python

Kaggle Notebooks allow users to run a Python Notebook in the cloud against our competitions and datasets without having to download data or set up their environment.

This repository includes the Dockerfile for building the CPU-only and GPU images that run Python Notebooks on Kaggle.

Requesting new packages

First, evaluate whether installing the package yourself in your own notebooks suits your needs. See guide.

If the first step above doesn't work for your use case, open an issue or a pull request.

Opening a pull request

  1. Edit the Dockerfile.
  2. Follow the instructions below to build a new image.
  3. Add tests for your new package. See this example.
  4. Follow the instructions below to test the new image.
  5. Open a PR on this repo and you are all set!

Building a new image

./build

Flags:

  • --gpu to build an image for GPU.
  • --use-cache for faster iterative builds.

Testing a new image

A suite of tests can be found under the /tests folder. You can run the tests using this command:

./test

Flags:

  • --gpu to test the GPU image.

View on GitHub


3.  ml-workspace

All-in-one web-based development environment for machine learning

The ML Workspace is an all-in-one web-based IDE specialized for machine learning and data science. It is simple to deploy and gets you started within minutes building ML solutions productively on your own machines. This workspace is the ultimate tool for developers, preloaded with a variety of popular data science libraries (e.g., Tensorflow, PyTorch, Keras, Sklearn) and dev tools (e.g., Jupyter, VS Code, Tensorboard), all perfectly configured, optimized, and integrated.

Getting Started

Prerequisites

The workspace requires Docker to be installed on your machine (📖 Installation Guide).

Start single instance

Deploying a single workspace instance is as simple as:

docker run -p 8080:8080 mltooling/ml-workspace:0.13.2

Voilà, that was easy! Now, Docker will pull the latest workspace image to your machine. This may take a few minutes, depending on your internet speed. Once the workspace is started, you can access it via http://localhost:8080.

If started on another machine or with a different port, make sure to use the machine's IP/DNS and/or the exposed port.

To deploy a single instance for productive usage, we recommend applying at least the following options:

docker run -d \
    -p 8080:8080 \
    --name "ml-workspace" \
    -v "${PWD}:/workspace" \
    --env AUTHENTICATE_VIA_JUPYTER="mytoken" \
    --shm-size 512m \
    --restart always \
    mltooling/ml-workspace:0.13.2

This command runs the container in background (-d), mounts your current working directory into the /workspace folder (-v), secures the workspace via a provided token (--env AUTHENTICATE_VIA_JUPYTER), provides 512MB of shared memory (--shm-size) to prevent unexpected crashes (see known issues section), and keeps the container running even on system restarts (--restart always). You can find additional options for docker run here and workspace configuration options in the section below.

View on GitHub


4.  nteract

nteract is an open-source organization committed to creating fantastic interactive computing experiences that allow people to collaborate with ease.

We build SDKs, applications, and libraries that help you and your team make the most of interactive (particularly Jupyter) notebooks and REPLs.

To learn more about the nteract open source organization and the rest of our projects, please visit our website.

What's in this repo?

This repo is a monorepo. It contains the code for the nteract core SDK and nteract's desktop and web applications. It also contains the documentation for the SDK and the applications. Here's a quick guide to the contents of the monorepo.

  • applications/desktop: Source code for the nteract desktop application. The desktop application is a cross-platform app built using Electron.
  • applications/jupyter-extension: Source code for the nteract Jupyter extension. This extension can be installed alongside Jupyter classic and JupyterLab in your Jupyter deployments or personal Jupyter server.
  • packages: JavaScript packages that are part of the nteract core SDK.
  • changelogs: Changelogs for each release of the nteract core SDK and applications.

How do I contribute to this repo?

If you are interested in contributing to nteract, please read the contribution guidelines for information on how to set up your nteract repo for development, how to write tests and validate changes, how to update documentation, and how to submit your code changes for review on GitHub.

View on GitHub


5.  stencila

Stencila is composed of several open source packages, written in a variety of programming languages. This repo acts as an entry point to these other packages, as well as hosting code for our desktop and CLI tools.

We 💕 contributions! All types of contributions: ideas 🤔, examples 💡, bug reports 🐛, documentation 📖, code 💻, questions 💬. If you are unsure of where to make a contribution, feel free to open a new issue or discussion in this repository (we can always move them elsewhere if need be).

📜 Help

For documentation, including demos and reference guides, please go to our Help site https://help.stenci.la/. That site is developed in the help folder of this repository and contributions are always welcome.

๐ŸŽ Hub

If you don't want to install anything, or just want to try out Stencila, https://hub.stenci.la is the best place to start. It's a web application that makes all our software available via intuitive browser-based interfaces. You can contribute to Stencila Hub at stencila/hub.

๐Ÿ–ฅ๏ธ Desktop

If you'd prefer to use Stencila on your own computer, the Stencila Desktop is a great place to start. It is still in the early stages of (re)development but please see the desktop folder for its current status and how you can help out!

View on GitHub


6.  voila

Rendering of live Jupyter notebooks with interactive widgets.

Introduction

Voilà turns Jupyter notebooks into standalone web applications.

Unlike the usual HTML-converted notebooks, each user connecting to the Voilà tornado application gets a dedicated Jupyter kernel which can execute the callbacks to changes in Jupyter interactive widgets.

  • By default, Voilà disallows execute requests from the front-end, preventing execution of arbitrary code.
  • By default, Voilà runs with the strip_source option, which strips out the input cells from the rendered notebook.

Installation

Voilà can be installed with the mamba (or conda) package manager from conda-forge

mamba install -c conda-forge voila

or from PyPI

pip install voila

JupyterLab preview extension

Voilà provides a JupyterLab extension that displays a Voilà preview of your notebook in a side pane.

Starting with JupyterLab 3.0, the extension is automatically installed after installing voila with pip install voila.

If you would like to install the extension from source, run the following command.

jupyter labextension install @voila-dashboards/jupyterlab-preview

View on GitHub


Related videos:

From Jupyter Notebook to Production Web App, with Anvil and (only) Python


Related posts:

#jupyter 

Revealing 7 Useful Runtimes/Frontends Libraries for Jupyter