Ruth Gleason

April 20, 2022

Metakernel: Jupyter/IPython Kernel tools

A Jupyter kernel base class in Python that provides core magic functions, including help, command and file-path completion, parallel and distributed processing, downloads, and much more.

See Jupyter's docs on wrapper kernels.

Additional magics can be installed within the new kernel package under a magics subpackage.
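
To give a feel for the base class, here is a minimal echo kernel, sketched after the project's own echo example (treat the details as illustrative rather than canonical):

from metakernel import MetaKernel

class EchoKernel(MetaKernel):
    implementation = 'Echo'
    implementation_version = '1.0'
    language = 'echo'
    language_version = '0.1'
    banner = "Echo kernel - repeats whatever you type"
    language_info = {'name': 'echo', 'mimetype': 'text/plain',
                     'file_extension': '.txt'}

    def do_execute_direct(self, code):
        # The returned value is displayed as the cell's result;
        # MetaKernel handles the messaging, magics, and completion.
        return code.rstrip()

if __name__ == '__main__':
    EchoKernel.run_as_main()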

Features

  • Tab completion for magics and file paths.
  • Help for magics using ? or Shift+Tab.
  • Plot magic for setting default plot behavior.

Kernels based on Metakernel

These include the Octave kernel (octave_kernel) and Calysto Scheme (calysto_scheme), both of which appear in the examples below, and many others.

Installation

You can install Metakernel through pip:
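
pip install metakernel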

Installing metakernel from the conda-forge channel can be achieved by adding conda-forge to your channels with:
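
conda config --add channels conda-forge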

Once the conda-forge channel has been enabled, metakernel can be installed with:
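
conda install metakernel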

It is possible to list all of the versions of metakernel available on your platform with:
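
conda search metakernel --channel conda-forge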

Use MetaKernel Magics in IPython

Although MetaKernel is a system for building new kernels, you can use a subset of its magics in the IPython kernel. To enable them in a running session:

from metakernel import register_ipython_magics
register_ipython_magics()
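
Once registered, MetaKernel magics can be used directly in IPython; for example, the download magic mentioned above (the URL is a placeholder):

%download https://example.com/data.csv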

To enable them permanently, put the following in your (or a system-wide) ipython_config.py file:

# /etc/ipython/ipython_config.py
c = get_config()
startup = [
   'from metakernel import register_ipython_magics',
   'register_ipython_magics()',
]
c.InteractiveShellApp.exec_lines = startup

Use MetaKernel Languages in Parallel

To use a MetaKernel language in parallel, do the following:

  1. Make sure that the Python module ipyparallel is installed. In the shell, type:
pip install ipyparallel
  2. To enable the extension in the notebook, in the shell, type:
ipcluster nbextension enable
  3. To start up a cluster with 10 nodes on a local IP address, in the shell, type:
ipcluster start --n=10 --ip=192.168.1.108
  4. Initialize the 10 nodes inside the notebook by giving %parallel the MODULE and CLASSNAME of a host kernel (this can be any MetaKernel kernel):
%parallel MODULE CLASSNAME

For example:

%parallel calysto_scheme CalystoScheme
  5. Run code in parallel inside the notebook.

Execute a single line, in parallel:

%px (+ 1 1)

Or execute the entire cell, in parallel:

%%px
(* cluster_rank cluster_rank)

Results come back as a Python list (here, a Scheme vector), in cluster_rank order. (This will be a JSON representation in the future.)

Therefore, the above would produce the result:

#10(0 1 4 9 16 25 36 49 64 81)

You can get the results back in any of the parallel magics (%px, %%px, or %pmap) in the host kernel by accessing the variable _ (single underscore), or by using the --set_variable VARIABLE flag, like so:

%%px --set_variable results
(* cluster_rank cluster_rank)

Then, in the next cell, you can access results.

Notice that you can use the variable cluster_rank to partition parts of a problem so that each node is working on something different.

In the examples above, use -e to evaluate the code in the host kernel as well. Note that cluster_rank is not defined on the host machine, and that this assumes the host kernel is the same as the parallel machines.
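
For example, to define a value on the cluster nodes and in the host kernel at once (a sketch, assuming the host kernel is also calysto_scheme as above):

%px -e (define n 10)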

Configuration

Metakernel subclasses can be configured by the user. The configuration file name is determined by the app_name property of the subclass. For example, in the Octave kernel, it is octave_kernel. The user of the kernel can add an octave_kernel_config.py file to their jupyter config path. The base MetaKernel class offers plot_settings as a configurable trait. Subclasses can define other traits that they wish to make configurable.

As an example:

cat ~/.jupyter/octave_kernel_config.py
# use Qt as the default backend for plots
c.OctaveKernel.plot_settings = dict(backend='qt')
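
Kernel authors can expose additional options the same way. The sketch below is hypothetical (MyKernel and greeting are illustrative names, not part of MetaKernel), using the traitlets machinery that Jupyter kernels build on:

from metakernel import MetaKernel
from traitlets import Unicode

class MyKernel(MetaKernel):
    app_name = 'my_kernel'  # users configure via my_kernel_config.py

    # Configurable trait: a user could set c.MyKernel.greeting
    # in their my_kernel_config.py file.
    greeting = Unicode('Hello!', help='Banner text shown at startup').tag(config=True)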

Documentation

Example notebooks can be viewed here.

Documentation is available online, and the magics also have interactive help.

For version information, see the Changelog.

Basic set of line and cell magics for all kernels.

  • Python magic for accessing the Python interpreter.
  • Run kernels in parallel.
  • Shell magics.
  • Classroom management magics.

Author: Calysto
Source Code: https://github.com/Calysto/metakernel
License: BSD-3-Clause License

#python #jupyter 


50+ Useful DevOps Tools

The article comprises both well-established tools for those who are new to the DevOps methodology and more recent releases to the market.

What Is DevOps?

The DevOps methodology, a software and team management approach whose name is a portmanteau of Development and Operations, was first coined in 2009 and has since become a buzzword in the IT field.

DevOps has come to mean many things to those who use the term, since DevOps is not a singularly defined standard, software, or process but more of a culture. Gartner defines DevOps as:

“DevOps represents a change in IT culture, focusing on rapid IT service delivery through the adoption of agile, lean practices in the context of a system-oriented approach. DevOps emphasizes people (and culture), and seeks to improve collaboration between operations and development teams. DevOps implementations utilize technology — especially automation tools that can leverage an increasingly programmable and dynamic infrastructure from a life cycle perspective.”

As you can see from the above definition, DevOps is a multi-faceted approach to the Software Development Life Cycle (SDLC), but its main underlying strength is how it leverages technology and software to streamline this process. With the right approach to DevOps, notably adopting its philosophies of co-operation and implementing the right tools, your business can increase deployment frequency by a factor of 30 and shorten lead times by a factor of 8,000 over traditional methods, according to a CapGemini survey.

The Right Tools for the Job

This list is designed to be as comprehensive as possible. It comprises both well-established tools for those who are new to the DevOps methodology and tools that are more recent releases to the market; either way, there is bound to be a tool here that can be an asset for you and your business. For those who already live and breathe DevOps, we hope you find something that will assist you in your growing enterprise.

With such a litany of tools to choose from, there is no "right" answer to what tools you should adopt. No single tool will cover all your needs, and tools will be deployed across a variety of development and operational teams, so let's break down what you need to consider before choosing what might work for you.

  • Plan and collaborate: Before you even begin the SDLC, your business needs to have a cohesive idea of what tools they’ll need to implement across your teams. There are even DevOps tools that can assist you with this first crucial step.
  • Build: Here you want tools that create identically provisioned environments. The last thing you need is to hear "But it works for me on my computer."
  • Automation: This has quickly become a given in DevOps, and automation will always drastically increase productivity over manual methods.
  • Continuous Integration: Tools need to provide constant and immediate feedback several times a day, but not all integrations are implemented equally. Will the tool you select be right for the job?
  • Deployment: Deployments need to be kept predictable, smooth, and reliable with minimal risk; automation will play a big part in this process as well.

With all that in mind, I hope this selection of tools will aid you as your business continues to expand into the DevOps lifestyle.

Tools Categories List:

  • Infrastructure As Code
  • Continuous Integration and Delivery
  • Development Automation
  • Usability Testing
  • Database and Big Data
  • Monitoring
  • Testing
  • Security
  • Helpful CLI Tools
  • Development
  • Visualization

Infrastructure As Code

1. AWS CloudFormation

AWS CloudFormation is an absolute must if you are currently working, or planning to work, in the AWS Cloud. CloudFormation allows you to model your AWS infrastructure and provision all your AWS resources swiftly and easily. All of this is done within a JSON or YAML template file and the service comes with a variety of automation features ensuring your deployments will be predictable, reliable, and manageable.

Link: https://aws.amazon.com/cloudformation/

2. Azure Resource Manager

Azure Resource Manager (ARM) is Microsoft’s answer to an all-encompassing IAC tool. With its ARM templates, described within JSON files, Azure Resource Manager will provision your infrastructure, handle dependencies, and declare multiple resources via a single template.

Link: https://azure.microsoft.com/en-us/features/resource-manager/

3. Google Cloud Deployment Manager

Much like the tools mentioned above, Google Cloud Deployment Manager is Google's IAC tool for the Google Cloud Platform. This tool utilizes YAML for its config files and Jinja2 or Python for its templates. Some of its notable features are synchronistic deployment and 'preview', which allows you an overhead view of changes before they are committed.

Link: https://cloud.google.com/deployment-manager/

4. Terraform

Terraform is brought to you by HashiCorp, the makers of Vault and Nomad. Terraform is vastly different from the above-mentioned tools in that it is not restricted to a specific cloud environment; this brings increased benefits for tackling complex distributed applications without being tied to a single platform. And much like Google Cloud Deployment Manager, Terraform also has a preview feature.

Link: https://www.terraform.io/

5. Chef

Chef is an ideal choice for those who favor CI/CD. At its heart, Chef utilizes self-described recipes, templates, and cookbooks, which are collections of ready-made templates. Cookbooks allow for consistent configuration even as your infrastructure rapidly scales. All of this is wrapped up in a beautiful Ruby-based DSL pie.

Link: https://www.chef.io/products/chef-infra/

#tools #devops #devops 2020 #tech tools #tool selection #tool comparison

Poetry Kernel: Python Jupyter Kernel Using Poetry for Reproducible Notebooks

Poetry Kernel

Use per-directory Poetry environments to run Jupyter kernels. No need to install a Jupyter kernel per Python virtual environment!

The idea behind this project is to allow you to capture the exact state of your environment. This means you can email your work to your peers, and they'll have exactly the same set of packages that you do! Reproducibility!

Why not virtual environments (venvs)?

Virtual environments were (and are) an important advancement to Python's package management story, but they have a few shortcomings:

  • They are not great for reproducibility. Usually, you'll create a new virtual environment using a requirements.txt which includes all the direct dependencies (numpy, pandas, etc.), but not transitive dependencies (pandas depends on pytz for timezone support, for example). And usually, even the direct dependencies are specified only as minimum (or semver) ranges (e.g., numpy>=1.21), which can make it hard or impossible to accurately recreate the venv later.
  • With Jupyter, they usually require that the kernels be installed globally. This means you'll need to have a separate kernelspec for every venv you want to use with Jupyter.

Poetry uses venvs transparently under the hood by constructing them from the pyproject.toml and poetry.lock files. The poetry.lock file records the exact state of dependencies (including transitive dependencies) and can be used to more accurately reproduce the environment.

Additionally, Poetry Kernel means you only have to install one kernelspec. It then uses the pyproject.toml file from the directory of the notebook (or any parent directory) to choose which environment to run the notebook in.

Shameless plug

The reason we created this package was to make sure that the code environments created for running student code on Pathbird exactly match your development environment. Interested in developing interactive, engaging, inquiry-based lessons for your students? Check out Pathbird for more information!

Usage

  • Install Poetry Kernel at the system or user level:
# NOTE: Do NOT install this package in your Poetry project; it should be
# installed at the system or user level.
pip3 install --user poetry-kernel
  • Initialize a Poetry project (only required if you do not have an existing Poetry project ready to use):
poetry init -n
  • IMPORTANT: Add ipykernel to your project's dependencies:
# In the directory of your Poetry project
poetry add ipykernel
  • Start a "Poetry" Jupyter kernel and see it in action!

(Screenshot: the Jupyter launcher with the Poetry kernel available.)

Troubleshooting

Kernel isn't starting ("No Kernel" message)

Pro-tip: Check the output of the terminal window where you launched Jupyter. It will usually explain why the kernel is failing to start.

  • Make sure that you are launching a notebook in a directory/folder that contains a Poetry project (pyproject.toml and poetry.lock files). You can turn a directory into a Poetry project by running:
poetry init -n
  • Make sure that you've installed ipykernel into your project:
poetry add ipykernel

Make sure the Poetry project is installed! This is especially important for projects that you have downloaded from others (warning: installing a Poetry project could run arbitrary code on your computer, make sure you trust your download first!):
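
poetry install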

Still can't figure it out? Open an issue!

A package I added won't import properly

If you added the package after starting the kernel, you might need to restart the kernel for it to see the new package.

 

Download Details: 
Author: pathbird
Source Code: https://github.com/pathbird/poetry-kernel 
License: MIT
 

#python #poetry #jupyter #kernel

 

Sunny Kunde

August 19, 2020

Top 12 Most Used Tools By Developers In 2020

Frameworks and libraries can be said to be the fundamental building blocks developers use when they build software or applications. These tools help cut out repetitive tasks as well as reduce the amount of code that developers need to write for a particular piece of software.

Recently, the Stack Overflow Developer Survey 2020 surveyed nearly 65,000 developers, who voted for their go-to tools and libraries. Here, we list the top 12 frameworks and libraries from the survey that are most used by developers around the globe in 2020.

(The libraries are listed according to their number of stars on GitHub.)

1| TensorFlow

GitHub Stars: 147k

Rank: 5

About: Originally developed by researchers on the Google Brain team, TensorFlow is an end-to-end open-source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries, and community resources that lets researchers push the state of the art in ML. It allows developers to easily build and deploy ML-powered applications.

Know more here.

2| Flutter

GitHub Stars: 98.3k

Rank: 9

About: Created by Google, Flutter is a free and open-source software development kit (SDK) which enables fast user experiences for mobile, web and desktop from a single codebase. The SDK works with existing code and is used by developers and organisations around the world.


#opinions #developer tools #frameworks #java tools #libraries #most used tools by developers #python tools

Ruth Gleason

April 20, 2022

IPython Kernel: This Package Provides The IPython Kernel for Jupyter.

IPython Kernel for Jupyter

This package provides the IPython kernel for Jupyter.

Installation from source

  1. git clone https://github.com/ipython/ipykernel
  2. cd ipykernel
  3. pip install -e ".[test]"

After that, all normal ipython commands will use this newly-installed version of the kernel.

Running tests

Follow the instructions from Installation from source, and then from the root directory run:

pytest ipykernel

Running tests with coverage

Follow the instructions from Installation from source, and then from the root directory run:

pytest ipykernel -vv -s --cov ipykernel --cov-branch --cov-report term-missing:skip-covered --durations 10

Author: ipython
Source Code: https://github.com/ipython/ipykernel
License: BSD 3-Clause License

#python #jupyter