A Jupyter kernel base class in Python which includes core magic functions (including help, command and file path completion, parallel and distributed processing, downloads, and much more).
See Jupyter's docs on wrapper kernels.
Additional magics can be installed within the new kernel package under a magics subpackage.
MetaKernel is the base class for a number of Jupyter kernels, including the Octave and Calysto Scheme kernels discussed below, and many others.
You can install MetaKernel through pip:
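For example (the package is published on PyPI as metakernel):

pip install metakernel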
Installing metakernel from the conda-forge channel can be achieved by adding conda-forge to your channels with:
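The standard command for this is:

conda config --add channels conda-forge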
Once the conda-forge channel has been enabled, metakernel can be installed with:
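For example:

conda install metakernel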
It is possible to list all of the versions of metakernel available on your platform with:
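Using conda's search command:

conda search metakernel --channel conda-forge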
Although MetaKernel is a system for building new kernels, you can use a subset of the magics in the IPython kernel.
from metakernel import register_ipython_magics
register_ipython_magics()
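Once registered, the MetaKernel magics can be used from a regular IPython session. For example, a sketch using the %download magic (one of the core magics referenced above; the URL is purely illustrative):

%download https://example.com/data.csv

This should fetch the file into the current working directory.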
Put the following in your (or a system-wide) ipython_config.py file:
# /etc/ipython/ipython_config.py
c = get_config()
startup = [
    'from metakernel import register_ipython_magics',
    'register_ipython_magics()',
]
c.InteractiveShellApp.exec_lines = startup
Use MetaKernel Languages in Parallel
To use a MetaKernel language in parallel, do the following:
pip install ipyparallel
ipcluster nbextension enable
ipcluster start --n=10 --ip=192.168.1.108
Then initialize the cluster from the host kernel with the %parallel magic, giving the MODULE and CLASSNAME of the kernel to run (can be any MetaKernel kernel):

%parallel MODULE CLASSNAME
For example:
%parallel calysto_scheme CalystoScheme
Execute a single line, in parallel:
%px (+ 1 1)
Or execute the entire cell, in parallel:
%%px
(* cluster_rank cluster_rank)
Results come back in a Python list (a Scheme vector, in this example), in cluster_rank order. (This will be a JSON representation in the future.)
Therefore, the above would produce the result:
#10(0 1 4 9 16 25 36 49 64 81)
You can get the results back in any of the parallel magics (%px, %%px, or %pmap) in the host kernel by accessing the variable _ (single underscore), or by using the --set_variable VARIABLE flag, like so:
%%px --set_variable results
(* cluster_rank cluster_rank)
Then, in the next cell, you can access results.
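Continuing the Scheme example above, evaluating results in the next cell simply displays the collected vector shown earlier:

results

which would output #10(0 1 4 9 16 25 36 49 64 81).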
Notice that you can use the variable cluster_rank to partition parts of a problem so that each node is working on something different.

In the examples above, use -e to evaluate the code in the host kernel as well. Note that cluster_rank is not defined on the host machine, and that this assumes the host kernel is the same as that of the parallel machines.
MetaKernel subclasses can be configured by the user. The configuration file name is determined by the app_name property of the subclass. For example, in the Octave kernel it is octave_kernel, so the user of the kernel can add an octave_kernel_config.py file to their Jupyter config path. The base MetaKernel class offers plot_settings as a configurable trait. Subclasses can define other traits that they wish to make configurable.
As an example:
cat ~/.jupyter/octave_kernel_config.py
# use Qt as the default backend for plots
c.OctaveKernel.plot_settings = dict(backend='qt')
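As a sketch of how a subclass might expose its own configurable trait (the class name, app_name, and greeting trait below are hypothetical; this assumes the traitlets machinery that Jupyter kernels are built on):

from metakernel import MetaKernel
from traitlets import Unicode

class MyKernel(MetaKernel):
    implementation = 'My Kernel'
    language = 'text'
    app_name = 'my_kernel'  # users would configure via my_kernel_config.py

    # Exposed as c.MyKernel.greeting in the config file.
    greeting = Unicode('hello', help='Greeting shown at startup').tag(config=True)

A user could then set c.MyKernel.greeting = 'hi there' in ~/.jupyter/my_kernel_config.py, just as the Octave example above sets plot_settings.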
Example notebooks can be viewed in the repository. Documentation is available online, and each magic has interactive help (also available online).
For version information, see the Changelog.
Basic set of line and cell magics for all kernels.
Author: Calysto
Source Code: https://github.com/Calysto/metakernel
License: BSD-3-Clause License
The DevOps methodology, a software and team management approach defined by the portmanteau of Development and Operations, was first coined in 2009 and has since become a buzzword concept in the IT field.
DevOps has come to mean many things to each individual who uses the term as DevOps is not a singularly defined standard, software, or process but more of a culture. Gartner defines DevOps as:
“DevOps represents a change in IT culture, focusing on rapid IT service delivery through the adoption of agile, lean practices in the context of a system-oriented approach. DevOps emphasizes people (and culture), and seeks to improve collaboration between operations and development teams. DevOps implementations utilize technology — especially automation tools that can leverage an increasingly programmable and dynamic infrastructure from a life cycle perspective.”
As you can see from the above definition, DevOps is a multi-faceted approach to the Software Development Life Cycle (SDLC), but its main underlying strength is how it leverages technology and software to streamline this process. With the right approach to DevOps, notably adopting its philosophies of cooperation and implementing the right tools, your business can increase deployment frequency by a factor of 30 and shorten lead times by a factor of 8,000 over traditional methods, according to a CapGemini survey.
This list is designed to be as comprehensive as possible. The article covers both well-established tools, for those who are new to the DevOps methodology, and more recent releases to the market; either way, there is bound to be a tool here that can be an asset for you and your business. For those who already live and breathe DevOps, we hope you find something that will assist you in your growing enterprise.
With such a litany of tools to choose from, there is no “right” answer to which tools you should adopt. No single tool will cover all your needs, and your stack will be deployed across a variety of development and operations teams, so let’s break down what you need to consider before choosing what tool might work for you.
With all that in mind, I hope this selection of tools will aid you as your business continues to expand into the DevOps lifestyle.
Infrastructure as Code
AWS CloudFormation is an absolute must if you are currently working, or planning to work, in the AWS Cloud. CloudFormation allows you to model your AWS infrastructure and provision all your AWS resources swiftly and easily. All of this is done within a JSON or YAML template file and the service comes with a variety of automation features ensuring your deployments will be predictable, reliable, and manageable.
Link: https://aws.amazon.com/cloudformation/
Azure Resource Manager (ARM) is Microsoft’s answer to an all-encompassing IaC tool. With its ARM templates, described within JSON files, Azure Resource Manager will provision your infrastructure, handle dependencies, and declare multiple resources via a single template.
Link: https://azure.microsoft.com/en-us/features/resource-manager/
Much like the tools mentioned above, Google Cloud Deployment Manager is Google’s IaC tool for the Google Cloud Platform. This tool utilizes YAML for its config files and Jinja2 or Python for its templates. Some of its notable features are synchronized deployment and ‘preview’, allowing you an overhead view of changes before they are committed.
Link: https://cloud.google.com/deployment-manager/
Terraform is brought to you by HashiCorp, the makers of Vault and Nomad. Terraform is vastly different from the above-mentioned tools in that it is not restricted to a specific cloud environment; this brings increased benefits for tackling complex distributed applications without being tied to a single platform. And much like Google Cloud Deployment Manager, Terraform also has a preview feature.
Link: https://www.terraform.io/
Chef is an ideal choice for those who favor CI/CD. At its heart, Chef utilizes self-described recipes, templates, and cookbooks (collections of ready-made templates). Cookbooks allow for consistent configuration even as your infrastructure rapidly scales. All of this is wrapped up in a beautiful Ruby-based DSL pie.
Link: https://www.chef.io/products/chef-infra/
Use per-directory Poetry environments to run Jupyter kernels. No need to install a Jupyter kernel per Python virtual environment!
The idea behind this project is to allow you to capture the exact state of your environment. This means you can email your work to your peers, and they'll have exactly the same set of packages that you do! Reproducibility!
Virtual environments were (and are) an important advancement in Python's package management story, but they have a few shortcomings. Chief among them, they are hard to reproduce: a project usually ships a requirements.txt which includes all the direct dependencies (numpy, pandas, etc.), but not transitive dependencies (pandas depends on pytz for timezone support, for example). And usually, even the direct dependencies are specified only as minimum (or semver) ranges (e.g., numpy>=1.21), which can make it hard or impossible to accurately recreate the venv later.

Poetry uses venvs transparently under the hood by constructing them from the pyproject.toml and poetry.lock files. The poetry.lock file records the exact state of dependencies (including transitive dependencies) and can be used to more accurately reproduce the environment.
Additionally, Poetry Kernel means you only have to install one kernelspec. It then uses the pyproject.toml file from the directory of the notebook (or any parent directory) to choose which environment to run the notebook in.
The reason we created this package was to make sure that the code environments created for running student code on Pathbird exactly match your development environment. Interested in developing interactive, engaging, inquiry-based lessons for your students? Check out Pathbird for more information!
Usage

Install Poetry Kernel at the system or user level:

# NOTE: Do **NOT** install this package in your Poetry project, it should be
# installed at the system or user level.
pip3 install --user poetry-kernel

Initialize a Poetry project if you don't already have one:

poetry init -n

Add ipykernel to your project's dependencies:

# In the directory of your Poetry project
poetry add ipykernel
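After these steps, the Poetry kernelspec should be visible to Jupyter; one way to check is the standard Jupyter CLI:

jupyter kernelspec list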
Pro-tip: Check the output of the terminal window where you launched Jupyter. It will usually explain why a kernel is failing to start.

Make sure the directory containing your notebook is a Poetry project (i.e., it has pyproject.toml and poetry.lock files). You can turn a directory into a Poetry project by running:

poetry init -n

Make sure you have added ipykernel to your project:

poetry add ipykernel
Make sure the Poetry project is installed! This is especially important for projects that you have downloaded from others (warning: installing a Poetry project could run arbitrary code on your computer, make sure you trust your download first!):
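# Install the project's dependencies from poetry.lock (standard Poetry command):
poetry install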
Still can't figure it out? Open an issue!
If you added the package after starting the kernel, you might need to restart the kernel for it to see the new package.
Download Details:
Author: pathbird
Source Code: https://github.com/pathbird/poetry-kernel
License: MIT
Frameworks and libraries can be described as the fundamental building blocks when developers build software or applications. These tools help cut out repetitive tasks and reduce the amount of code that developers need to write for a particular piece of software.
Recently, the Stack Overflow Developer Survey 2020 surveyed nearly 65,000 developers, where they voted their go-to tools and libraries. Here, we list down the top 12 frameworks and libraries from the survey that are most used by developers around the globe in 2020.
(The libraries are listed according to their number of Stars in GitHub)
**GitHub Stars:** 147k

**Rank:** 5

**About:** Originally developed by researchers of the Google Brain team, TensorFlow is an end-to-end open-source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries, and community resources that lets researchers push the state of the art in ML. It allows developers to easily build and deploy ML-powered applications.
Know more here.
**GitHub Stars:** 98.3k

**Rank:** 9

**About:** Created by Google, Flutter is a free and open-source software development kit (SDK) which enables fast user experiences for mobile, web, and desktop from a single codebase. The SDK works with existing code and is used by developers and organisations around the world.
This package provides the IPython kernel for Jupyter.
Installation from source

git clone https://github.com/ipython/ipykernel.git
cd ipykernel
pip install -e ".[test]"
After that, all normal ipython commands will use this newly-installed version of the kernel.
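One quick way to confirm which version is active (not part of the official instructions, just a sanity check):

python -c "import ipykernel; print(ipykernel.__version__)"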
To run the tests, follow the instructions from Installation from source, and then run from the root directory:

pytest ipykernel
To run the tests with coverage, follow the instructions from Installation from source, and then run from the root directory:

pytest ipykernel -vv -s --cov ipykernel --cov-branch --cov-report term-missing:skip-covered --durations 10
Author: ipython
Source Code: https://github.com/ipython/ipykernel
License: BSD-3-Clause