Lawson Wehner

Loyalty_foop: A Loyalty Management Tool


A loyalty management tool.

Use this package as a library

Depend on it

Run this command:

With Flutter:

 $ flutter pub add loyalty_foop

This will add a line like this to your package's pubspec.yaml (and run an implicit flutter pub get):

  loyalty_foop: ^0.5.1

Alternatively, your editor might support flutter pub get. Check the docs for your editor to learn more.

Import it

Now in your Dart code, you can use:

import 'package:loyalty_foop/loyalty_foop.dart';

Getting Started

This project is a starting point for a Flutter plug-in package, a specialized package that includes platform-specific implementation code for Android and/or iOS.

For help getting started with Flutter, view our online documentation, which offers tutorials, samples, guidance on mobile development, and a full API reference.

#flutter #dart #tool 

Lawson Wehner


Flutter_toolbox: Common Flutter Widgets and Helper Methods


Common Flutter widgets and helper methods.


Use this package as a library

Depend on it

Run this command:

With Flutter:

 $ flutter pub add flutter_toolbox

This will add a line like this to your package's pubspec.yaml (and run an implicit flutter pub get):

  flutter_toolbox: ^8.0.3

Alternatively, your editor might support flutter pub get. Check the docs for your editor to learn more.

Import it

Now in your Dart code, you can use:

import 'package:flutter_toolbox/flutter_toolbox.dart';


import 'package:flutter/material.dart';
import 'package:flutter_localizations/flutter_localizations.dart';
import 'package:flutter_toolbox/flutter_toolbox.dart';
import 'package:flutter_toolbox/generated/l10n.dart' as toolbox;
import 'package:flutter_toolbox_example/auth_navigation.dart';
import 'package:provider/provider.dart';

import 'auth_provider.dart';
import 'paginated_list_view_example.dart';

void main() => runApp(MyApp());

class MyApp extends StatefulWidget {
  @override
  _MyAppState createState() => _MyAppState();
}

class _MyAppState extends State<MyApp> {
  @override
  Widget build(BuildContext context) {
    return MultiProvider(
      providers: [
        ChangeNotifierProvider(create: (_) => AuthProvider()),
      ],
      child: Consumer<AuthProvider>(
        builder: (context, value, child) {
          return ToolboxApp(
            toolboxConfig: ToolboxConfig(
              useWeservResizer: true,
              noItemsFoundWidget: Icon(Icons.subject),
              unAuthenticatedPages: const [/* elided in the original listing */],
              isAuthenticated: () {
                final isLoggedIn = value.getUserCashed() != null;
                d("isLoggedIn = $isLoggedIn");

                return isLoggedIn;
              },
              onAuthorizedNavigation: (BuildContext context, Type pageType) {
                d("onAuthorizedNavigation#pageType = $pageType");
                return push(context, LoginPage());
              },
            ),
            child: MaterialApp(
              localizationsDelegates: const [/* elided in the original listing */],
              supportedLocales: const <Locale>[
                Locale("en", ""),
                Locale("ar", ""),
              ],
              theme: ThemeData(
                tabBarTheme: TabBarTheme(
                  indicator: TabRoundedLineIndicator(
                    indicatorSize: TabRoundedLineIndicatorSize.normal,
                    indicatorHeight: 3,
                  ),
                ),
              ),
              home: HomePage(),
            ),
          );
        },
      ),
    );
  }
}

class HomePage extends StatefulWidget {
  @override
  HomePageState createState() => HomePageState();
}

class HomePageState extends State<HomePage> {
  @override
  void initState() {
    super.initState();
  }

  @override
  Widget build(BuildContext context) {
    var themeData = Theme.of(context);

    return DefaultTabController(
      length: 3,
      child: Scaffold(
        appBar: AppBar(
          centerTitle: true,
          title: Text(
            'Plugin example app',
            style: TextStyle(color: Colors.black87),
          ),
          backgroundColor: Colors.white,
          bottom: TabBar(
            labelStyle: TextStyle(fontWeight: FontWeight.w700),
            indicatorSize: TabBarIndicatorSize.label,
            labelColor: themeData.primaryColor,
            unselectedLabelColor: Color(0xff5f6368),
            isScrollable: true,
            indicator: TabRoundedLineIndicator(
              indicatorSize: TabRoundedLineIndicatorSize.normal,
              indicatorHeight: 3,
              indicatorColor: Theme.of(context).primaryColor,
            ),
            tabs: <Widget>[
              Tab(text: "Home"),
              Tab(text: "Personal info"),
              Tab(text: "Data & personalization"),
            ],
          ),
          actions: <Widget>[
            // Button widget type elided in the original listing; TextButton assumed.
            TextButton(
              onPressed: () => push(context, PaginatedListViewPage()),
              child: Text('PaginatedList page'),
            ),
          ],
        ),
        body: Builder(builder: (context) {
          return Column(
            children: <Widget>[
              Text(toolbox.S.of(context)?.please_check_your_connection ??
                  'Please check your connection'),
              // Image widget examples elided in the original listing
              // (only width, fullScreen and borderRadius parameters remained).
              Row(
                children: <Widget>[
                  TextButton(
                    child: Text('Error toast'),
                    onPressed: () => errorToast('Error'),
                  ),
                  TextButton(
                    child: Text('Success toast'),
                    onPressed: () => successToast('Success'),
                  ),
                  TextButton(
                    child: Text('toast'),
                    onPressed: () => toast('أهلا بكم'),
                  ),
                ],
              ),
              TextButton(
                child: Text('Auth navigation'),
                onPressed: () => push(context, AuthNavHomePage()),
              ),
            ],
          );
        }),
      ),
    );
  }
}

Getting Started

This project is a starting point for a Flutter plug-in package, a specialized package that includes platform-specific implementation code for Android and/or iOS.

For help getting started with Flutter, view our online documentation, which offers tutorials, samples, guidance on mobile development, and a full API reference.

Download Details:

Author: Humazed
Source Code: 
License: Apache-2.0 license

#flutter #dart #tool 


Qlab.jl: Generic Lab tools in Julia


Data manipulation and analysis tools tailored for quantum computing experiments in conjunction with Auspex. Currently working with Julia v1.0.


(v1.3) pkg> add

The code base also uses some system tools and python libraries for building libraries and plotting data with PyPlot.jl. You'll want to make sure your system has these.

In CentOS:

yum -y install epel-release
yum install gcc gcc-c++ make bzip2 hdf5 libaec libgfortran libquadmath

In Ubuntu/Debian:

apt-get install gcc g++ gcc-7-base make libaec0 libgfortran4 libhdf5-100 libquadmath0 libsz2


You'll need a working version of PyPlot. In some cases the package manager has trouble getting this right on all systems/OSs. If you run into issues, we recommend using Conda.jl manually:

using Pkg
ENV["PYTHON"] = ""
Pkg.build("PyCall")
using Conda

In most cases, Julia should take care of this for you.

Other dependencies

Qlab.jl depends on several other Julia packages that have binary dependencies. These should mostly be taken care of by the package manager. One important exception is HDF5 and its libhdf5 dependency. This library manages the handling of HDF5 files and is currently maintained for backwards compatibility. The version of libhdf5 which produced any data files you want to analyze must match the library version used to create the files. You may need to add the path to the right version of libhdf5 to the Libdl path in Julia and rebuild HDF5:

push!(Libdl.DL_LOAD_PATH, "/opt/local/lib")
Pkg.build("HDF5")

where /opt/local/lib is the path to the correct version of libhdf5. See the documentation from HDF5.jl for more details. Currently only versions of hdf5 1.8.2 - 1.8.17 are supported. If you're not planning to use HDF5 files, you shouldn't have to worry about the library versions matching.


Raytheon BBN Technologies.

Download Details:

Author: BBN-Q
Source Code: 
License: View license

#julia #tool 


Bolter: Command-line App for Viewing BoltDB File in Your Terminal


View BoltDB file in your terminal

List all items


$ go get -u


$ bolter [global options]

  --file FILE, -f FILE        boltdb FILE to view
  --bucket BUCKET, -b BUCKET  boltdb BUCKET to view
  --machine, -m               key=value format
  --help, -h                  show help
  --version, -v               print the version

List all buckets

$ bolter -f emails.db
|          BUCKETS          |
|              |
|              |
|        |
|             |

List all items in bucket

$ bolter -f emails.db -b
|      KEY      |        VALUE        |
| emailLastSent |                     |
| subLocation   |                     |
| subTag        |                     |
| userActive    | true                |
| userCreatedOn | 2016-10-28 07:21:49 |
| userEmail     |        |
| userFirstName | John                |
| userLastName  | Doe                 |

Nested buckets

You can easily list all items in a nested bucket:

$ bolter -f my.db
|   root    |

$ bolter -f my.db -b root
Bucket: root
|   KEY   |  VALUE  |
| nested* |         |

* means the key ('nested' in this case) is a bucket.

$ bolter -f my.db -b root.nested
Bucket: root.nested
|   KEY   |  VALUE  |
|  mykey  | myvalue |
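
The dotted bucket path (root.nested) can be pictured as a walk down nested maps. Below is an illustrative Python sketch of that lookup, not bolter's actual Go code; the nested-dict stand-in for BoltDB buckets is an assumption for illustration only:

```python
def resolve_bucket(db_root, path):
    """Walk a dotted bucket path like 'root.nested' down nested dicts
    standing in for BoltDB buckets, returning the innermost bucket."""
    bucket = db_root
    for name in path.split("."):
        bucket = bucket[name]
    return bucket

# Example: a toy database shaped like the tables above.
db = {"root": {"nested": {"mykey": "myvalue"}}}
```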

Machine friendly output

$ bolter -f emails.db -m

$ bolter -f emails.db -b -m
userCreatedOn=2016-10-28 07:21:49
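
The key=value format is convenient to consume from scripts. Here is a minimal Python sketch (not part of bolter itself) of parsing that output; it splits each line on the first '=' only, since values such as timestamps may themselves contain '=':

```python
def parse_machine_output(text):
    """Parse key=value lines (as printed by `bolter -m`) into a dict."""
    items = {}
    for line in text.splitlines():
        if not line.strip():
            continue  # skip blank lines
        key, _, value = line.partition("=")  # split on the FIRST '=' only
        items[key] = value
    return items
```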


Feel free to ask questions, post issues and open pull requests. My only requirement is that you run gofmt on your code before you send in a PR.

Download Details:

Author: Hasit
Source Code: 
License: MIT license

#go #golang #command #tool 

Nat Grady


Pkgdown: Generate Static Html Documentation for an R Package


pkgdown is designed to make it quick and easy to build a website for your package. You can see pkgdown in action at the pkgdown website, which is the output of pkgdown applied to the latest version of pkgdown. Learn more in vignette("pkgdown") or ?build_site.


# Install released version from CRAN
install.packages("pkgdown")

# Install development version from GitHub
devtools::install_github("r-lib/pkgdown")


Get started with usethis:

# Run once to configure your package to use pkgdown
usethis::use_pkgdown()

Then use pkgdown to build your website:

pkgdown::build_site()

This generates a docs/ directory containing a website. Your README.md becomes the homepage, documentation in man/ generates a function reference, and vignettes will be rendered into articles/. Read vignette("pkgdown") for more details, and to learn how to deploy your site to GitHub Pages.

pkgdown 2.0.0 and Bootstrap 5

pkgdown 2.0.0 includes an upgrade from Bootstrap 3 to Bootstrap 5, which is accompanied by a whole bunch of minor UI improvements. If you’ve heavily customised your site, there’s a small chance that this will break your site, so everyone needs to explicitly opt-in to the upgrade by adding the following to _pkgdown.yml:

template:
  bootstrap: 5

Then learn about the many new ways to customise your site in vignette("customise").

In the wild

At last count, pkgdown is used by over 6,000 packages. Here are a few examples created by contributors to pkgdown:

bayesplot (source): plotting functions for posterior analysis, model checking, and MCMC diagnostics.

valr (source): read and manipulate genome intervals and signals.

mkin (source): calculation routines based on the FOCUS Kinetics Report

NMF (source): a framework to perform non-negative matrix factorization (NMF).

Comparing the source and output of these sites is a great way to learn new pkgdown techniques.

Code of conduct

Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms.

Download Details:

Author: r-lib
Source Code: 
License: Unknown, MIT licenses found

#r #documentation #tool #html 


PkgUtils.jl: Tools for analyzing Julia Packages


This package contains tools for analyzing Julia packages.

For now, it provides tools to build a highly simplified package search engine that can be queried as a service:


Build a simplified search engine service:

using PkgUtils

Run a search website:

cd .julia/PkgUtils
julia scripts/server.jl
open site/index.html

Download Details:

Author: johnmyleswhite
Source Code: 

#julia #utils #tool 


PkgDev.jl: Tools for Julia Package Developers



PkgDev provides tools for Julia package developers. The package is currently being rewritten for Julia 1.x and only for brave early adopters.


PkgDev.tag(package_name, version=nothing; registry=nothing, release_notes=nothing)

Tag a new release for package package_name. The package you want to tag must be deved in the current Julia environment. You pass the package name package_name as a String. The git commit that is the HEAD in the package folder will form the basis for the version to be tagged.

If you don't specify a version, then the version field in the Project.toml must have the format x.y.z-DEV, and the command will tag version x.y.z as the next release. Alternatively you can specify one of :major, :minor or :patch for the version parameter. In that case PkgDev.tag will increase that part of the version number by 1 and tag that version. Finally, you can also specify a full VersionNumber as the value for the version parameter, in which case that version will be tagged.
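
To make those rules concrete, here is a minimal Python sketch of the version selection just described; it is not PkgDev's Julia implementation, and it assumes the conventional semver behaviour of resetting lower parts on a :major or :minor bump:

```python
import re

def next_version(project_version, spec=None):
    """Pick the version to tag, following PkgDev.tag's documented rules:
    no spec -> an 'x.y.z-DEV' Project.toml version tags x.y.z;
    'major'/'minor'/'patch' -> bump that part (lower parts reset, assumed);
    anything else -> used as an explicit version."""
    if spec is None:
        m = re.fullmatch(r"(\d+)\.(\d+)\.(\d+)-DEV", project_version)
        if not m:
            raise ValueError("Project.toml version must look like x.y.z-DEV")
        return "{}.{}.{}".format(*m.groups())
    major, minor, patch = map(int, project_version.split("-")[0].split("."))
    if spec == "major":
        return f"{major + 1}.0.0"
    if spec == "minor":
        return f"{major}.{minor + 1}.0"
    if spec == "patch":
        return f"{major}.{minor}.{patch + 1}"
    return spec  # explicit version, tagged as-is
```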

The only situation where you would specify a value for registry is when you want to register a new package for the first time in a registry that is not General. In all other situations, PkgDev.tag will automatically figure out in which registry your package is registered. When you do pass a value for registry, it should simply be the short name of a registry that is one of the registries your local system is connected with.

If you want to add custom release notes for TagBot, do so with the release_notes keyword.

PkgDev.tag runs through the following process when it tags a new version:

  1. Create a new release branch called release-x.y.z.
  2. Change the version field in Project.toml and commit that change on the release branch.
  3. Change the version field in Project.toml to x.y.z+1-DEV and commit that change also to the release branch.
  4a. For packages in the General registry: add a comment that triggers Registrator.
  4b. For packages in other registries: open a pull request against the registry that tags the first new commit on the release branch as a new version x.y.z.
  5. Open a pull request against the package repository to merge the release branch into master.

If you have TagBot installed for your package with the branches: true setting, it will automatically merge the release-x.y.z branch into master once the pull request for the registry has been merged. If you use the package butler (described below) it auto-configures your repository for this workflow.

PkgDev.enable_pkgbutler(package_name; channel=:auto, template=:auto)

Enables the Julia Package Butler for package package_name. The package must be deved in the current Julia environment. The command will make various modifications to the files in the deved folder of the package. You then need to commit these changes and push them to GitHub. The command will also add a deploy key to the GitHub repository of the package and show instructions on how to add two GitHub Actions secrets to the repository.

The channel argument can be :auto, :stable or :dev. There are two channels of updates: stable and dev. The dev channel will run the Julia Package Butler workflow every 5 minutes and it will use the master branch of the Julia Package Butler engine, i.e. it will get new features more quickly. The stable channel runs the Julia Package Butler workflow every hour, and new features in the Julia Package Butler engine are only pushed out to the stable channel once they have been tested for a while on the dev channel. If you specify :auto as the argument, any existing channel choice you have previously made for the package will be retained; otherwise the package will be configured for the stable channel.

The template argument can be :auto, :default or :bach. Different templates will configure different aspects of your package. At this point everyone should use the :default template (or :auto template), everything else is considered experimental.

PkgDev.switch_pkgbutler_channel(package_name, channel)

Switch the Julia Package Butler channel for package package_name. The package you want to tag must be deved in the current Julia environment and the Julia Package Butler must already be enabled for the package. The channel argument can be :auto, :stable or :dev, see the documentation for PkgDev.enable_pkgbutler for an explanation of the different channels.

PkgDev.switch_pkgbutler_template(package_name, template)

Switch the Julia Package Butler template for package package_name. The package you want to tag must be deved in the current Julia environment and the Julia Package Butler must already be enabled for the package. The template argument can be :auto, :default or :bach.

PkgDev.format(package_name)

Format all the Julia source code files for the package with name package_name. The package you want to format must be deved in the current Julia environment. This function uses DocumentFormat.jl.

Download Details:

Author: JuliaLang
Source Code: 
License: View license

#julia #tool #dev 


PackageEvaluator.jl: A tool To Evaluate The Quality Of Julia Packages


The purpose of PackageEvaluator is to attempt to test every Julia package nightly, and to provide the information required to generate the Julia package listing.

This is currently done for Julia 0.6 and nightly, and the tests are run in Ubuntu 14.04 LTS ("Trusty Tahr") virtual machines managed with Vagrant. This allows users to debug why their tests are failing, and allows PackageEvaluator to be run almost anywhere.

The code itself, in particular scripts/, is heavily commented, so check that out for more information.

"My package is failing tests!"

Possible reasons include:

  • Your package is out of date. PackageEvaluator tests the last released version of your package, not master. Make sure you've tagged a version with your bug fixes included.
  • You have a binary dependency that BinDeps can't handle.
    • If the binary dependency is a commercial package, or does not work on Ubuntu (e.g. OSX only), then the package should be excluded from testing. Please submit a pull request adding a line to src/constants.jl.
    • If the binary dependency is something that is not installable (or shouldn't be installed) through BinDeps, like a Python package or R package, then it should be added to the provisioning script. Please submit a pull request adding a line to scripts/
  • You have a testing-only dependency that you haven't declared. Create (or check) your package's test/REQUIRE file.
  • Your package only works on Windows/OSX/one particular *nix. Your package might need to be excluded from testing. Please submit a pull request adding a line to src/constants.jl saying your package shouldn't be run.
  • Your testing process relies on random numbers. Please make sure you set a seed or use appropriate tolerances if you rely on random numbers in your tests.
  • Your package relies on X running. It may be possible to get your package working through the magic of xvfb. Please submit a pull request adding a line to src/constants.jl that specifies that your package needs to be run with xvfb active.
  • Your package's tests or installation take too long. There is a time limit of 30 minutes for installation, and a separate 10 minute time limit for testing. You can either reduce your testing time, or exclude your package from testing.
  • Your package requires too much memory. The VMs only have 2 GB of RAM. You can either reduce your test memory usage, or exclude your package from testing.
  • Your tests aren't being found / wrong test file is being run. Your package needs a test/runtests.jl file. PackageEvaluator will execute it with Pkg.test.
  • Something else. You'll probably need to check manually on the testing VM. See next section.

(Licenses are searched for in the files listed in src/constants.jl. The goal is to support a variety of licenses. If your license isn't detected, please file a pull request with detection logic.)

Using Vagrant and PackageEvaluator

  • Vagrant is a tool for creating and managing virtual machines.
  • The configuration of the virtual machine, including the operating system to use, live in the Vagrantfile.
  • When the virtual machine(s) are launched with vagrant up, a provisioning script is run.
  • This script takes two arguments. The first is the version of Julia to use (0.6 or 0.7)
  • The second determines the mode to operate in:
    • setup: set up the machine with Julia and the same dependencies that are used for a full PackageEvaluator run, but do not do any testing.
    • all: do setup and evaluate all the packages.
    • AF, GO, PZ: evaluate only packages with names beginning with those letters.
  • Each combination of settings corresponds to a named virtual machine - see scripts/Vagrantfile for the list of the VMs.

Download Details:

Author: JuliaCI
Source Code: 
License: MIT license

#julia #tool 

Nat Grady


A Colour Picker tool for Shiny & For Selecting Colours in Plots (in R)

{colourpicker} - A Colour Picker Tool for Shiny and for Selecting Colours in Plots 

Demo · Created by Dean Attali

{colourpicker} gives you a colour picker widget that can be used in different contexts in R.

colour input image

The most common uses of {colourpicker} are to use the colourInput() function to create a colour input in Shiny, or to use the plotHelper() function/RStudio Addin to easily select colours for a plot. 

This package is part of a larger ecosystem of packages with a shared vision: solving common Shiny issues and improving Shiny apps with minimal effort, minimal code changes, and straightforward documentation. Other packages for your Shiny apps:

shinyjs 💡 Easily improve the user experience of your Shiny apps in seconds
shinyalert 🗯️ Easily create pretty popup messages (modals) in Shiny
shinyscreenshot 📷 Capture screenshots of entire pages or parts of pages in Shiny apps
timevis 📅 Create interactive timeline visualizations in R
shinycssloaders ⌛ Add loading animations to a Shiny output while it's recalculating
shinybrowser 🌐 Find out information about a user's web browser in Shiny apps
shinydisconnect 🔌 Show a nice message when a Shiny app disconnects or errors
shinyforms 📝 Easily create questionnaire-type forms with Shiny (WIP)


As mentioned above, the most useful functions are colourInput() and plotHelper().

  • Click here to view a live interactive demo of the colour input.
  • The GIF below shows what the Plot Colour Helper looks like (the GIF is from an old version that did not support opacity/transparency for colours, which is now supported).

Plot Colour Helper demo


To install the stable CRAN version:

install.packages("colourpicker")

To install the latest development version from GitHub:

remotes::install_github("daattali/colourpicker")

Colour input for Shiny apps (or R markdown): `colourInput()`

You can use colourInput() to include a colour picker input in Shiny apps (or in R markdown documents). It works just like any other native Shiny input:

library(shiny)
library(colourpicker)

shinyApp(
    ui = fluidPage(
        colourInput("col", "Select colour", "purple"),
        plotOutput("plot")
    ),
    server = function(input, output) {
        output$plot <- renderPlot({
            plot(rnorm(50), bg = input$col, col = input$col, pch = 21)
        })
    }
)

Demo of colourInput

Scroll down for more information about colourInput().

Select colours to use in your plot: `plotHelper()`

If you've ever had to spend a long time perfecting the colour scheme of a plot, you'd find the Plot Colour Helper handy. It's an RStudio addin that lets you interactively choose colours for your plot while updating your plot in real-time, so you can see the colour changes immediately.

To use this tool, either highlight code for a plot and select the addin through the RStudio Addins menu, or call the plotHelper() function. The colours selected will be available as a variable named CPCOLS.

Demo of Plot Colour Helper

Scroll down for more information about the Plot Colour Helper.

Select colours to use in your R code: `colourPicker()`

{colourpicker} also provides a more generic RStudio addin that can be used to select colours and save them as a variable in R. You can either access this tool using the Addins menu or with colourPicker(). You can also watch a short GIF of it in action.

Demo of colour picker addin

Colour input as an 'htmlwidgets' widget

The colour picker input is also available as an 'htmlwidgets' widget using the colourWidget() function. This may not be terribly useful right now since you can use the more powerful colourInput in Shiny apps and Rmarkdown documents, but it may come in handy if you need a widget.

Features of 'colourInput()'

Simple and familiar

Using colourInput is extremely trivial if you've used Shiny, and it's as easy to use as any other input control. It was implemented to very closely mimic all other Shiny inputs so that using it will feel very familiar. You can add a simple colour input to your Shiny app with colourInput("col", "Select colour", value = "red"). The return value from a colourInput is an uppercase HEX colour, so in the previous example the value of input$col would be #FF0000 (#FF0000 is the HEX value of the colour red). The default value at initialization is white (#FFFFFF).

Retrieving the colour names

If you use the returnName = TRUE parameter, then the return value will be a colour name instead of a HEX value, when possible. For example, if the chosen colour is red, the return value will be red instead of #FF0000. For any colour that does not have a standard name, its HEX value will be returned.

Allowing transparent colours

A simple colour input allows you to choose any opaque colour. If you use the allowTransparent = TRUE parameter, the input will display an additional slider that lets you choose a transparency (alpha) value. Using this slider allows you to select semi-transparent colours, or even the fully transparent colour, which is sometimes useful.

When using transparent colours, the return value will be an 8-digit HEX code instead of 6 digits (the last 2 digits are the transparency value). For example, if you select a 50% transparent red, the return value would be #FF000080. Most R plotting functions can accept colours in this format.
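
The 8-digit encoding is easy to reproduce by hand. Here is an illustrative Python sketch (not part of {colourpicker}) that builds such a value, where the alpha fraction is scaled to a byte and appended as the last two HEX digits:

```python
def to_hex8(r, g, b, alpha):
    """Encode an RGB colour plus an alpha fraction (0.0-1.0) as an
    8-digit HEX string; the final two digits are the alpha byte."""
    a = round(alpha * 255)  # 50% -> 128 -> '80'
    return "#{:02X}{:02X}{:02X}{:02X}".format(r, g, b, a)
```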

Limited colour selection

If you want to only allow the user to select a colour from a specific list of colours, rather than any possible colour, you can use the palette = "limited" parameter. By default, the limited palette will contain 40 common colours, but you can supply your own list of colours using the allowedCols parameter. Here is an image of the default limited colour palette.

colourInput demo

Flexible colour specification

Specifying a colour to the colour input is very flexible to allow for easier use. When providing a colour as the value parameter of the input, there are a few ways to specify a colour:

  • Using a name of an R colour, such as red, gold, blue3, or any other name that R supports (for a full list of R colours, type colours())
  • Using a 6-character HEX value, either with or without the leading #. For example, initializing a colourInput with any of the following values will all result in the colour red: ff0000, FF0000, #ff0000. If transparency is allowed, you can use an 8-character HEX value.
  • Using a 3-character HEX value, either with or without the leading #. These values will be converted to full HEX values by automatically doubling every character. For example, all the following values would result in the same colour: 1ac, #1Ac, 11aacc. If transparency is allowed, you can use a 4-character HEX value.
  • Using RGB specification, such as rgb(0, 0, 255). If transparency is allowed, you can use an rgba() specification.
  • Using HSL specification, such as hsl(240, 100, 50). If transparency is allowed, you can use an hsla() specification.
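
The 3-character shorthand rule (doubling every character) is simple to express in code. This is an illustrative Python sketch of the conversion described above, not {colourpicker}'s own implementation:

```python
def expand_shorthand(hex_colour):
    """Expand a 3-character HEX shorthand by doubling every character,
    so '1ac' becomes '#11aacc' (a leading '#' is optional)."""
    h = hex_colour.lstrip("#")
    return "#" + "".join(c * 2 for c in h)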

Protip: You can also type in any of these values directly into the input box to select that colour, instead of selecting it from the colour palette with your mouse. For example, you can click on the colour input and literally type the word "blue", and the colour blue will get selected.

How the chosen colour is shown inside the input box

By default, the colour input's background will match the selected colour and the text inside the input field will be the colour's HEX value. If that's too much for you, you can customize the input with the showColour parameter to either only show the text or only show the background colour.

Here is what a colour input with each of the possible values for showColour looks like

showColour demo

Updating a colourInput

As with all other Shiny inputs, colourInput can be updated with the updateColourInput function. Any parameter that can be used in colourInput can be used in updateColourInput. This means that you can start with a basic colour input such as colourInput("col", "Select colour") and completely redesign it with

updateColourInput(session, "col", label = "COLOUR:", value = "orange",
  showColour = "background", allowTransparent = TRUE)

Works on any device

If you're worried that maybe someone viewing your Shiny app on a phone won't be able to use this input properly - don't you worry. I haven't quite checked every single device out there, but I did spend extra time making sure the colour selection JavaScript works in most devices I could think of. colourInput will work fine in Shiny apps that are viewed on Android cell phones, iPhones, iPads, and even Internet Explorer 8+.

Features of 'plotHelper()'

Addin vs gadget

The Plot Colour Helper is available as both a gadget and an RStudio addin. This means that it can be invoked in one of two ways:

  • Highlight code for a plot and select the addin through the Addins menu, or
  • Call the plotHelper(code) function with plot code as the first parameter.

There is a small difference between the two: invoking the addin via plotHelper() will merely return the final colour list as a vector, while using the Addins menu will result in the entire plot code and colour list getting inserted into the document.

Most important to understand: Use CPCOLS in your plot code

The Plot Colour Helper lets you run code for a plot, and select a list of colours. But how does the list of colours get linked to the plot? The colour list is available as a variable called CPCOLS. This means that in order to refer to the colour list, you need to use that variable in your plot code. You can even refer to it more than once if you want to select colours for multiple purposes in the plot:

plotHelper(ggplot(iris, aes(Sepal.Length, Petal.Length)) +
    geom_point(aes(col = Species)) +
    scale_colour_manual(values = CPCOLS[1:3]) +
    theme(panel.background = element_rect(CPCOLS[4])),
    colours = 4)

Default plot if no code is provided

To more easily access the tool, you can call plotHelper() with no parameters or select the addin without highlighting any code. In that case, the default code in the tool will be initialized as

ggplot(iris, aes(Sepal.Length, Petal.Length)) +
      geom_point(aes(col = Species)) +
      scale_colour_manual(values = CPCOLS)

You can always change the plot code from within the tool.

Initial list of colours

You can set the initial colour list by providing a vector of colours as the colours parameter to plotHelper() (eg. plotHelper(colours = c("red", "#123ABC"))).

Alternatively, if you don't want to initialize to any particular set of colours, but you want to initialize with a specific number of colours in the list, you can provide an integer as the colours parameter (eg. plotHelper(colours = 2)).

If no colour values are provided, a default palette of colours will be used for the initial colours. This palette has 12 colours; if more than 12 colours are needed, they will get recycled.

Plot Colour Helper tries to guess how many colours are needed

If you don't provide the colours parameter, or if you invoke the tool as an addin, it will attempt to guess how many colours are needed. For example, using the following plot code

ggplot(mtcars, aes(wt, mpg)) +
    geom_point(aes(col = as.factor(am))) +
    scale_colour_manual(values = CPCOLS)

will initialize the tool with 2 colours (because there are 2 am levels), while the following code

ggplot(mtcars, aes(wt, mpg)) +
    geom_point(aes(col = as.factor(cyl))) +
    scale_colour_manual(values = CPCOLS)

will use 3 colours.

Keyboard shortcuts

There are several keyboard shortcuts available, to make the selection process even simpler. Spacebar to add another colour, Delete to remove the currently selected colour, Left/Right to navigate the colours, and more. You can view the full list of shortcuts by clicking on Show keyboard shortcuts.

Return value of Plot Colour Helper

When the tool is run as an addin, the final colour list and the code get inserted into the currently selected RStudio document (either the Source panel or the Console panel).

If the tool is called with plotHelper(), then the return value is simply the vector of selected colours. You can assign it into a variable directly - running cols <- plotHelper() will assign the selected colours into cols.

Since the plot code requires you to use the variable name CPCOLS, after closing the plot helper tool, a variable named CPCOLS will be available in the global environment.

The colours returned can either be in HEX format (e.g. "#0000FF") or be named (e.g. "blue") - you can choose this option inside the tool.

Download Details:

Author: Daattali
Source Code: 
License: View license

#r #picker #tool 

A Colour Picker tool for Shiny & For Selecting Colours in Plots (in R)
Nat Grady


SeaClass: an interactive R tool for Classification Problems

The SeaClass R Package

The Advanced Analytics group at Seagate Technology has decided to share an internal project which helps accelerate development for classification problems. The interactive SeaClass tool is contained in an R-based package built using R Shiny and other CRAN packages commonly used for binary classification. The package is free to use and develop further, but any analysis mistakes are the sole responsibility of the user. Check out the demo video here.


The SeaClass R package provides tools for analyzing classification problems. In particular, specialized tools are available for addressing the problem of imbalanced data sets. The SeaClass application provides an easy-to-use interface which requires only minimal R programming knowledge to get started, and can be launched using the RStudio Addins menu. The application allows the user to explore numerous methods by simply clicking on the available options and interacting with the generated results. The user can choose to download the code for any procedures they wish to explore further. SeaClass was designed to jump-start the analysis process for both novice and advanced R users. See the screenshots below for a demonstration.


Install Instructions

The SeaClass application depends on numerous R packages. To install SeaClass and its dependencies run:

### The repository path is inferred from the author name listed in the details below:
install.packages("devtools")
devtools::install_github("ChrisDienes/SeaClass")

Usage Instructions

Step 1. Begin by loading and preparing your data in R. Some general advice:

  • Your data set must be saved as an R data frame object.
  • The data set must contain a binary response variable (0/1, PASS/FAIL, A/B, etc.)
  • All other variables must be predictor variables.
  • Predictor variables can be numeric, categorical, or factors.
  • Including too many predictors may slow down the application and weaken performance.
  • Categorical predictors are often ignored when the number of levels exceeds 10, since they tend to have an improper influence.
  • Missing values are not allowed and will throw a flag. Please remove or impute NAs prior to starting the app.
  • Keep the number of observations (rows) to a medium or small size.
  • Data sets with many rows (>10,000) or many columns (>30) may slow down the app's interactive responses.

Step 2. After data preparation, start the application by either loading SeaClass from the RStudio Addins dropdown menu or by loading the SeaClass function from the command line. For example:


### Make some fake data:
X <- matrix(rnorm(10000,0,1),ncol=10,nrow=1000)
X[1:100,1:2] <- X[1:100,1:2] + 3
Y <- c(rep(1,100), rep(0,900))
Fake_Data <- data.frame(Y = Y , X)

### Load the SeaClass rare failure data:
data(rareFailData)

### Start the interactive GUI:
SeaClass()

If the application fails to load, you may need to first specify your favorite browser path. For example:

options(browser = "C:/Program Files (x86)/Google/Chrome/Application/chrome.exe")

Step 3. The user has various options for configuring their analysis within the GUI. Once the analysis runs, the user can view the results, interact with the results (module dependent), save the underlying R script, or start over. Additional help is provided within the application. See above screenshots for one depiction of these steps.

Step 4. Besides the SeaClass function, several other functions are contained within the library. For example:

### List available functions:
ls("package:SeaClass")
### Note this is a sample data set:
# data(rareFailData)
### Note code_output is a support function for SeaClass, not for general use.

### View help:
?accuracy_threshold

### Run example from help file:
### General Use: ###
x <- c(rnorm(100,0,1),rnorm(100,2,1))
group <- c(rep(0,100),rep(2,100))
accuracy_threshold(x=x, group=group, pos_class=2)
accuracy_threshold(x=x, group=group, pos_class=0)
### Bagged Example ###
replicate_function = function(index){accuracy_threshold(x=x[index], group=group[index], pos_class=2)[[2]]}
sample_cuts <- replicate(100, {
  sample_index = sample.int(n = length(x), replace = TRUE)  # draw a bootstrap sample of indices
  replicate_function(index = sample_index)
})
bagged_scores <- sapply(x, function(x) mean(x > sample_cuts))
unbagged_cut    <- accuracy_threshold(x=x, group=group, pos_class=2)[[2]]
unbagged_scores <- ifelse(x > unbagged_cut, 1, 0)
# Compare AUC:
PRROC::roc.curve(scores.class0 = bagged_scores,weights.class0 = ifelse(group==2,1,0))[[2]]
PRROC::roc.curve(scores.class0 = unbagged_scores,weights.class0 = ifelse(group==2,1,0))[[2]]
bagged_prediction <- ifelse(bagged_scores > 0.50, 2, 0)
unbagged_prediction <- ifelse(x > unbagged_cut, 2, 0)
# Compare Confusion Matrix:
table(bagged_prediction, group)
table(unbagged_prediction, group)

Download Details:

Author: ChrisDienes
Source Code: 

#r #tool #classification 

Reid Rohan


Voyager (2): Visualization Tool for Data Exploration

Voyager 2

Voyager 2 is a data exploration tool that blends manual and automated chart specification. Voyager 2 combines PoleStar, a traditional chart specification tool inspired by Tableau and Polaris (the research project that led to the birth of Tableau), with two partial chart specification interfaces: (1) wildcards let users specify multiple charts in parallel, and (2) related views suggest visualizations relevant to the currently specified chart. With Voyager 2, we aim to help analysts engage in both breadth-oriented exploration and depth-oriented question answering.

For a quick overview of Voyager, see our preview video, or a 4-minute demo in our Vega-Lite talk at OpenVisConf, or watch our research talk at CHI 2017. For more information about our design, please read our CHI paper and other related papers (1, 2, 3).

Voyager 2 can be used from JupyterLab via the JupyterLab extension for Voyager. The DataVoyager.jl package integrates Voyager 2 into the Julia programming language.


This repository now hosts an alpha version of the migration of Voyager 2 to a React/Redux application. Older versions of Voyager, built in AngularJS, are available at the following URL.

Basic Setup

For basic setup for local development or installation, we use yarn for package management. Installing dependencies can be done with:

yarn

Once the installation is complete, use yarn test to run the included tests.

To build a deployable version of the code, run yarn build.

Please see our contributing documentation for more info about setup and coding conventions if you are interested in contributing to this project.

Build Outputs

There are 3 artifacts built by yarn build:

  • A standalone version of Voyager in dist/. This distribution can be hosted on a web server to deploy Voyager.
  • Compiled JavaScript and .d.ts declaration files for a subset of the Voyager source code in build/src/. These declarations and sources can be included in other packages that use Voyager as a dependency. See voyager-server for an example.
  • An embeddable Voyager build in build/. See below for more details on embedding Voyager in other applications.

Embed Voyager (datavoyager library)

Voyager can be embedded in another web application. The following sections document how to use it.


Using npm or yarn? Add the following to your package.json then run npm install datavoyager or yarn add datavoyager.

If you want to use the latest development version, you may want to clone and link Voyager.

Example Use


const libVoyager = require('voyager');

const container = document.getElementById("voyager-embed");
const config = undefined;
const data = undefined;
const voyagerInstance = libVoyager.CreateVoyager(container, config, data)

Initializing with data

const data: any = {
  "values": [
    {"fieldA": "A", "fieldB": 28}, {"fieldA": "B", "fieldB": 55}, {"fieldA": "C", "fieldB": 43},
    {"fieldA": "D", "fieldB": 91}, {"fieldA": "E", "fieldB": 81}, {"fieldA": "F", "fieldB": 53},
    {"fieldA": "G", "fieldB": 19}, {"fieldA": "H", "fieldB": 87}, {"fieldA": "I", "fieldB": 52}
  ]
};

const voyagerInstance = libVoyager.CreateVoyager(container, undefined, data)

Updating Data

const voyagerInstance = libVoyager.CreateVoyager(container, undefined, undefined)

const data: any = {
  "values": [
    {"fieldA": "A", "fieldB": 28}, {"fieldA": "B", "fieldB": 55}, {"fieldA": "C", "fieldB": 43},
    {"fieldA": "D", "fieldB": 91}, {"fieldA": "E", "fieldB": 81}, {"fieldA": "F", "fieldB": 53},
    {"fieldA": "G", "fieldB": 19}, {"fieldA": "H", "fieldB": 87}, {"fieldA": "I", "fieldB": 52}
  ]
};

voyagerInstance.updateData(data);


You currently also need to include the CSS. Note that this has not yet been optimized for embedding (it will take over the whole screen)

<link rel="stylesheet" type="text/css" href="./node_modules/voyager/lib/style.css">


The voyager module exposes 1 function.

CreateVoyager(container, config, data)

 * Create an instance of the voyager application and return it.
 * @param {Container} container css selector or HTMLElement that will be the parent
 *                              element of the application
 * @param {Object|undefined}    config    Optional: configuration options
 * @param {Array|undefined}     data      Optional: data object. Can be a string or an array of objects.

Please see src/lib-voyager.tsx to see the exposed public methods.

For information regarding the config parameter, please see src/models/config.ts

The data parameter must follow the inline data format as seen in the Vega-Lite documentation.

Voyager-server Mode

Computationally expensive portions of the Voyager process can be configured to run on a server.

To get this running in a local development environment, first clone and install the dependencies of the voyager-server project.

In the voyager-server directory, yarn start will start the server running on port 3000.

With voyager-server now running, we can start voyager in server mode by running:

yarn start:server

This will run Voyager in "server mode", sending requests to voyager-server, which it expects, by default, to be at http://localhost:3000.

The server url is controlled by the SERVER environment variable.
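For example, assuming a voyager-server instance is reachable at a non-default address (the URL below is a placeholder), the environment variable and the start script can be combined like this:

```shell
# Point Voyager at a specific voyager-server instance; the URL is a placeholder.
SERVER=http://my-voyager-server.example.com:3000 yarn start:server
```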

See voyager-server for more information on what portions of the functionality the server handles.


You can find Voyager documentation on our GitBook.

This documentation is divided into several sections:

Download Details:

Author: Vega
Source Code: 
License: View license

#javascript #typescript #visualization #tool 

Monty Boehm


CellFishing.jl: Fast and Scalable Cell Search tool

CellFishing.jl 🎣

CellFishing.jl (cell finder via hashing) is a tool for finding cells that are similar to query cells, based on their transcriptome expression profiles.

Kenta Sato, Koki Tsuyuzaki, Kentaro Shimizu, and Itoshi Nikaido. "CellFishing.jl: an ultrafast and scalable cell search method for single-cell RNA sequencing." Genome Biology, 2019 20:31.

# Import packages.
using CellFishing
using TableReader

# Load expression profiles of database cells.
# Note: We highly recommend using the Loom format to load expression
# data, because loading a large matrix from plain text takes an
# extremely long time.
data = readtsv("database.txt")  # use readcsv if your file is comma-separated
cellnames = string.(names(data))
featurenames = string.(data[:,1])
counts = Matrix{Int}(data[:,2:end])

# Select features and create an index (or a database).
features = CellFishing.selectfeatures(counts, featurenames)
database = CellFishing.CellIndex(counts, features, metadata=cellnames)

# Save/load the database to/from a file (optional).
#"", database)
# database = CellFishing.load("")

# Load expression profiles of query cells.
data = readtsv("query.txt")
cellnames = string.(names(data))
featurenames = string.(data[:,1])
counts = Matrix{Int}(data[:,2:end])

# Search the database for similar cells; k cells will be returned per query.
k = 10
neighbors = CellFishing.findneighbors(k, counts, featurenames, database)

# Write the neighboring cells to a file.
open("neighbors.tsv", "w") do file
    println(file, join(["cell"; string.("n", 1:k)], '\t'))
    for j in 1:length(cellnames)
        print(file, cellnames[j])
        for i in 1:k
            print(file, '\t', database.metadata[neighbors.indexes[i,j]])
        end
        println(file)
    end
end

First of all, you need to install a Julia compiler. A recommended way is to download a pre-built binary of Julia; pre-built binaries for several major platforms are distributed on the official Julia website. Currently, CellFishing.jl supports Julia 1.0 or later.

Then, install CellFishing.jl with the following command:

$ julia -e 'using Pkg; Pkg.add(PackageSpec(url="git://"))'

Alternatively, you can use the add command in the package management mode of Julia:

(v1.0) pkg> add

To check the installation, you can try using CellFishing in your REPL:

$ julia
   _       _ _(_)_     |  Documentation:
  (_)     | (_) (_)    |
   _ _   _| |_  __ _   |  Type "?" for help, "]?" for Pkg help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 1.0.0 (2018-08-08)
 _/ |\__'_|_|_|\__'_|  |  Official release
|__/                   |

julia> using CellFishing  # load the package
[ Info: Precompiling CellFishing [5ab3512e-c64d-48f6-b1c0-509c1121fdda]


If you see no error messages, you have successfully installed CellFishing.jl.

To run unit tests, execute the following command:

$ julia -e 'using Pkg; Pkg.test("CellFishing")'

Command-line interface (WIP)

The bin/cellfishing script is a command-line interface to CellFishing.jl.

$ ./bin/cellfishing build Plass2018.dge.loom
Build a search database from Plass2018.dge.loom.
  Loading data ―――――――――――― 13 seconds, 173 milliseconds
  Selecting features ―――――― 1 second, 376 milliseconds
  Creating a database ――――― 16 seconds, 418 milliseconds
  Writing the database ―――― 659 milliseconds
The serialized database is in
$ ./bin/cellfishing search Plass2018.dge.loom >neighbors.tsv
Search for 10 neighbors.
  Loading the database ―――― 512 milliseconds
  Loading query data ―――――― 12 seconds, 960 milliseconds
  Searching the database ―― 31 seconds, 821 milliseconds
  Writing neighbors ――――――― 64 milliseconds
$ head -5 neighbors.tsv | cut -f1-3

Download Details:

Author: Bicycle1885
Source Code: 
License: MIT license

#julia #search #tool 


EMIRT.jl: Electron Microscopy Image Reconstruction Toolbox


Electron Microscopy Image Reconstruction Toolbox, written in the Julia language


  • julia -e 'Pkg.add("EMIRT")'
  • update to latest code in master branch: julia -e 'Pkg.checkout("EMIRT")'


  • Run using EMIRT, then check the functions in each file

Download Details:

Author: Seung-lab
Source Code: 
License: MIT license

#julia #image #tool #box 


A tool To Bring Existing Azure Resources Under Terraform's Management

Azure Terrafy

A tool to bring your existing Azure resources under the management of Terraform.


Azure Terrafy imports the resources that are supported by the Terraform AzureRM provider within a resource group into the Terraform state, and generates the corresponding Terraform configuration. Both the Terraform state and configuration are expected to be consistent with the resources' remote state, i.e., terraform plan shows no diff. The user is then able to use Terraform to manage these resources.

Non Goal

The Terraform configurations generated by aztfy are not meant to be comprehensive. This means it is not guaranteed that the infrastructure can be reproduced via the generated configurations. For details, please refer to the limitations.


From Release

Precompiled binaries are available at Releases.

From Homebrew

brew update && brew install aztfy

From Go toolchain

go install


There is no special precondition needed for running aztfy, except that you have access to Azure.

Although aztfy depends on terraform, it is not required to have terraform pre-installed and configured in the PATH before running aztfy. aztfy will locate (or install) a terraform executable in the following order:

  • If there is already a terraform discovered in the PATH whose version >= v0.12, then use it
  • Otherwise, if there is already a terraform installed at the aztfy cache directory, then use it
  • Otherwise, install the latest terraform from Hashicorp's release to the aztfy cache directory

(The aztfy cache directory is at: "<UserCacheDir>/aztfy")
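The lookup order above can be sketched roughly as follows (a hypothetical helper for illustration, not aztfy's actual code; the real tool additionally checks that the binary found in PATH is version >= v0.12):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

// findTerraform illustrates the documented fallback order:
// 1. a terraform already on the PATH, 2. one previously installed
// into the aztfy cache directory, 3. otherwise report that a fresh
// install into the cache directory would be needed.
func findTerraform(cacheDir string) (string, error) {
	if p, err := exec.LookPath("terraform"); err == nil {
		return p, nil // found in PATH
	}
	cached := filepath.Join(cacheDir, "terraform")
	if _, err := os.Stat(cached); err == nil {
		return cached, nil // found in the aztfy cache directory
	}
	// Here the real tool would download the latest release from Hashicorp.
	return "", fmt.Errorf("terraform not found; would install into %s", cacheDir)
}

func main() {
	userCache, err := os.UserCacheDir()
	if err != nil {
		userCache = os.TempDir()
	}
	path, err := findTerraform(filepath.Join(userCache, "aztfy"))
	fmt.Println(path, err)
}
```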


Follow the authentication guide from the Terraform AzureRM provider to authenticate to Azure.

Then you can go ahead and run aztfy resource [option] <resource id> or aztfy resource-group [option] <resource group name> to import either a single resource, or a resource group and the resources it contains.

Terrafy a Single Resource

aztfy resource [option] <resource id> terrafies a single resource by its Azure control plane ID.


aztfy resource /subscriptions/0000/resourceGroups/rg1/providers/Microsoft.Compute/virtualMachines/vm1

The command will automatically identify the Terraform resource type (e.g. it correctly identifies the above resource as azurerm_linux_virtual_machine), import it into the state file, and generate the Terraform configuration.

❗For data plane only resources (e.g. azurerm_key_vault_certificate), the resource ID uses a pseudo format, as defined here.

Terrafy a Resource Group

aztfy resource-group [option] <resource group name> terrafies a resource group and the resources it contains, identified by the group's name. Depending on whether --batch is used, it can work in either interactive mode or batch mode.

Interactive Mode

In interactive mode, aztfy lists all the resources residing in the specified resource group. For each resource, the user is expected to input the Terraform resource address in the form of <resource type>.<resource name> (e.g. azurerm_linux_virtual_machine.test). Users can press r to see the possible resource type(s) for the selected import item. In case there is exactly one resource type matching the import item, that resource type will be automatically filled in the text input for the user, with a 💡 line prefix as an indication.

In some cases, there are Azure resources that have no corresponding Terraform resource (e.g. due to lack of Terraform support), or some resources might be created as a side effect of provisioning another resource (e.g. the OS Disk resource is created automatically when provisioning a VM). In these cases, you can skip these resources without typing anything.

💡 Option --resource-mapping/-m can be used to specify a resource mapping file, either constructed manually or from other runs of aztfy (generated in the output directory with name: .aztfyResourceMapping.json).

After going through all the resources to be imported, users press w to instruct aztfy to proceed importing resources into Terraform state and generating the Terraform configuration.

As the last step, aztfy will leverage the ARM template to inject dependencies between the resources. This makes the generated Terraform template actually usable.

Batch Mode

In batch mode, instead of interactively specifying the mapping from Azure resource ID to Terraform resource address, users are expected to provide that mapping via the resource mapping file, in the following format:

{
    "<azure resource id1>": "<terraform resource type1>.<terraform resource name>",
    "<azure resource id2>": "<terraform resource type2>.<terraform resource name>",
    ...
}


  "/subscriptions/0-0-0-0/resourceGroups/tfy-vm/providers/Microsoft.Network/virtualNetworks/example-network": "azurerm_virtual_network.res-0",
  "/subscriptions/0-0-0-0/resourceGroups/tfy-vm/providers/Microsoft.Compute/virtualMachines/example-machine": "azurerm_linux_virtual_machine.res-1",
  "/subscriptions/0-0-0-0/resourceGroups/tfy-vm/providers/Microsoft.Network/networkInterfaces/example-nic": "azurerm_network_interface.res-2",
  "/subscriptions/0-0-0-0/resourceGroups/tfy-vm/providers/Microsoft.Network/networkInterfaces/example-nic1": "azurerm_network_interface.res-3",
  "/subscriptions/0-0-0-0/resourceGroups/tfy-vm/providers/Microsoft.Network/virtualNetworks/example-network/subnets/internal": "azurerm_subnet.res-4"

Then the tool will import each resource specified in the mapping file (if it exists) and skip the others.

In particular, if no resource mapping file is specified, aztfy will only import the "recognized" resources for you, based on its limited knowledge of the ARM and Terraform resource mappings.

In batch import mode, users can further specify the --continue/-k option to make the tool continue even when it hits an import error on some resource.
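Putting the batch-mode pieces together, a run might look like this (the resource group name and mapping file name are placeholders; --batch, --resource-mapping and --continue are the options described above):

```shell
# Batch import: use a prepared mapping file, skip unmapped resources,
# and keep going past individual import errors. Names are placeholders.
aztfy resource-group --batch \
  --resource-mapping .aztfyResourceMapping.json \
  --continue \
  tfy-vm
```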

Remote Backend

By default aztfy uses the local backend to store the state file. It is also possible to use a remote backend, via the --backend-type and --backend-config options.

E.g. to use the azurerm backend, users can invoke aztfy as follows:

aztfy --backend-type=azurerm --backend-config=resource_group_name=<resource group name> --backend-config=storage_account_name=<account name> --backend-config=container_name=<container name> --backend-config=key=terraform.tfstate <importing resource group name>

Import Into Existing Local State

For the local backend, aztfy will by default ensure the output directory is empty at the very beginning. This is to avoid conflicts with existing user files, including the Terraform configuration, provider configuration, the state file, etc. As a result, aztfy generates a brand new workspace for users.

One limitation of doing so is that users can't import resources into an existing state file via aztfy. To support this scenario, you can use the --append option. This option makes aztfy skip the empty check on the output directory. If the output directory is empty, the option has no effect. Otherwise, aztfy will ensure the provider setting (creating a file for it if one does not exist), and then proceed with the following steps.

This means that if the output directory has an active Terraform workspace, i.e. there exists a state file, any resource imported by aztfy will be imported into that state file. Notably, the files generated by aztfy in this case are named differently than normal, where each file has an .aztfy suffix before the extension (e.g., to avoid potential file name conflicts. If you run aztfy --append multiple times, the generated configuration will be appended in each run.

How it Works

aztfy leverages aztft to identify the Terraform resource type from an Azure resource ID. Then it runs terraform import under the hood to import each resource. Afterwards, it runs tfadd to generate the Terraform template for each imported resource.




There are several limitations that prevent aztfy from generating reproducible Terraform configurations.

N:M Model Mappings

Azure resources are modeled differently in the AzureRM provider.

For example, the azurerm_lb_backend_address_pool_address is actually a property of azurerm_lb_backend_address_pool in the Azure platform. In the AzureRM provider, however, it has its own resource and a synthetic resource ID, such as /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/group1/providers/Microsoft.Network/loadBalancers/loadBalancer1/backendAddressPools/backendAddressPool1/addresses/address1.

Another popular case is that in the AzureRM provider, there are a bunch of "association" resources, e.g. the azurerm_network_interface_security_group_association. These "association" resources represent the association relationship between two Terraform resources (in this case they are azurerm_network_interface and azurerm_network_security_group). They also have some synthetic resource ID, e.g. /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/mygroup1/providers/|/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/group1/providers/Microsoft.Network/networkSecurityGroups/group1.

Currently, this tool only works on the assumption that there is 1:1 mapping between Azure resources and the Terraform resources. For those property-like Terraform resources, aztfy will just ignore them.

AzureRM Provider Validation

When generating the Terraform configuration, not all properties of the resource are exported for different reasons.

One reason is that there are flexible cross-property constraints defined in the AzureRM Terraform provider, e.g. property_a conflicts with property_b. This might be due to the nature of the API, or due to some deprecation process in the provider (e.g. property_a is deprecated in favor of property_b, but kept for backwards compatibility). These constraints require that some properties be absent from the Terraform configuration; otherwise, the configuration is not valid and will fail during terraform validate.

Another reason is that an Azure resource can be a property of its parent resource (e.g. azurerm_subnet can be its own resource, or be a property of azurerm_virtual_network). Per Terraform's best practice, users should only use one of the forms, not both. aztfy chooses to always generate all the resources, but omit the property in the parent resource that represents the child resource.

Additional Resources

  • The aztfy GitHub Page: Everything about aztfy, including comparisons with other existing import solutions.
  • Kyle Ruddy's Blog about aztfy: A live use of aztfy, explaining the pros and cons.
  • aztft: A Go program and library for identifying the correct Terraform AzureRM provider resource type on the Azure resource id.
  • tfadd: A Go program and library for generating Terraform configuration from Terraform state.

Download Details:

Author: Azure
Source Code: 
License: MPL-2.0 license

#go #golang #devops #tool 

Nat Grady


Extrafont: Tools for using Fonts in R Graphics


The extrafont package makes it easier to use fonts other than the basic PostScript fonts that R uses. Fonts that are imported into extrafont can be used with PDF or PostScript output files. On Windows, extrafont will also make system fonts available for bitmap output.

There are two hurdles for using fonts in PDF (or Postscript) output files:

  • Making R aware of the font and the dimensions of the characters.
  • Embedding the fonts in the PDF file so that the PDF can be displayed properly on a device that doesn't have the font. This is usually needed if you want to print the PDF file or share it with others.

The extrafont package makes both of these things easier.

Presently it allows the use of TrueType fonts with R, and the installation of special font packages. Support for other kinds of fonts will be added in the future. It has been tested on Mac OS X 10.7, Ubuntu Linux 12.04, and Windows XP.

The instructions below are written for PDF files, although the information also applies to PostScript files.

If you want to use the TeX Computer Modern fonts in PDF files, also see the fontcm package.

Using extrafont


You must have Ghostscript installed on your system for embedding fonts into PDF files.

Extrafont requires the extrafontdb package to be installed. extrafontdb contains the font database, while this package contains the code to install fonts and register them in the database.

It also requires the Rttf2pt1 package to be installed. Rttf2pt1 contains the ttf2pt1 program which is used to read and manipulate TrueType fonts. It is in a separate package for licensing reasons.

Installing extrafont from CRAN will automatically install extrafontdb and Rttf2pt1:

install.packages("extrafont")

To use extrafont in making graphs, you'll need to do the following:

  • Import fonts into the extrafont database. (Needs to be done once)
  • Register the fonts from the extrafont database with R's PDF (or PostScript) output device. (Needs to be done once per R session)
  • Create the graphics that use the fonts.
  • Embed the fonts into the PDF file. (Needs to be done for each file)

Import fonts into the extrafont database

First, import the fonts installed on the system. (This only works with TrueType fonts right now.)

# This tries to autodetect the directory containing the TrueType fonts.
# If it fails on your system, please let me know.
font_import()

This does the following:

  • Finds the fonts on your system.
  • Extracts the FontName (like ArialNarrow-BoldItalic).
  • Extracts/converts a PostScript .afm file for each font. This file contains the font metrics, which are the rectangular dimensions of each character that are needed for placement of the characters. These are not the glyphs, which are the curves defining the visual shape of each character. The glyphs are only in the .ttf file.
  • Scans all the resulting .afm files, and saves a table with information about them. This table will be used when making plots with R.
  • Creates a file Fontmap, which contains the mapping from FontName to the .ttf file. This is required by Ghostscript for embedding fonts.

You can view the resulting table of font information with:

# Vector of font family names
fonts()

# Show entire table
fonttable()

If you install new fonts on your computer, you'll have to run font_import() again.

Register the fonts with the PDF output device

The next step is to register the fonts in the afm table with R's PDF (or PostScript) output device. This is needed to create PDF files with the fonts. As of extrafont version 0.13, this step is needed only in the session in which you import your fonts. In sessions started after the fonts have been imported, this step isn't necessary: simply loading the package with library(extrafont) will automatically register the fonts with R.

# Only necessary in the session where you ran font_import()
loadfonts()
# For PostScript output, use loadfonts(device="postscript")
# Suppress output with loadfonts(quiet=TRUE)

Create figures with the fonts

Here's an example of PDFs made with base graphics and with ggplot2. These examples use the font Impact, which should be available on Windows and Mac. (Use fonts() to see what fonts are available on your system)

pdf("font_plot.pdf", family="Impact", width=4, height=4)
plot(mtcars$mpg, mtcars$wt, 
     main = "Fuel Efficiency of 32 Cars",
     xlab = "Weight (x1000 lb)",
     ylab = "Miles per Gallon")

library(ggplot2)
p <- ggplot(mtcars, aes(x=wt, y=mpg)) + geom_point() +
  ggtitle("Fuel Efficiency of 32 Cars") +
  xlab("Weight (x1000 lb)") + ylab("Miles per Gallon") +
  theme(text=element_text(size=16, family="Impact"))

ggsave("font_ggplot.pdf", plot=p,  width=4, height=4)

The first time you use a font, it may throw some warnings about unknown characters. This should be harmless, but if it causes any problems, please report them.

Embed the fonts

After you create a PDF output file, you should embed the fonts into the file. The 14 PostScript base fonts never need to be embedded, because they are included with every PDF/PostScript renderer. All other fonts should be embedded into the PDF files.

First, if you are running Windows, you may need to tell it where the Ghostscript program is, for embedding fonts. (See Windows installation notes below.)

# Needed only on Windows - run once per R session
# Adjust the path to match your installation of Ghostscript
Sys.setenv(R_GSCMD = "C:/Program Files/gs/gs9.05/bin/gswin32c.exe")

As the name suggests, embed_fonts() will embed the fonts:

embed_fonts("font_plot.pdf", outfile="font_plot_embed.pdf")
embed_fonts("font_ggplot.pdf", outfile="font_ggplot_embed.pdf")
# If outfile is not specified, it will overwrite the original file

To check if the fonts have been properly embedded, open each of the PDF files with Adobe Reader, and go to File->Properties->Fonts. If a font is embedded, it will say "Embedded Subset" by the font's name; otherwise it will say nothing next to the name.

With Adobe Reader, if a font is not embedded, it will be substituted by another font. This provides a way to see what your PDF will look like on a printer or computer that doesn't have the font installed. Other PDF viewers may behave differently. For example, the Preview application on Mac OS X will automatically use system fonts to display non-embedded fonts -- this makes it impossible to tell whether the font is embedded in the PDF.

On Linux you can also use evince (the default PDF viewer) to view embedded fonts. Open the file and go to File->Properties->Fonts. If a font is embedded, it will say "Embedded subset"; otherwise it will say "Not embedded".

If you are putting multiple PDF figures into a single document, it is more space-efficient to not embed fonts in each figure, but instead embed the font in the final PDF document.
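For example, if several un-embedded figures are assembled into a report, you can embed the fonts once in the final PDF. A sketch, where report.pdf is a hypothetical assembled document:

```r
library(extrafont)

# Embed fonts in the final assembled document,
# rather than in each individual figure
embed_fonts("report.pdf", outfile="report_embed.pdf")
```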

Windows bitmap output

extrafont also makes it easier to use fonts in Windows for on-screen or bitmap output.

# Register fonts for Windows bitmap output
loadfonts(device="win")

library(ggplot2)
ggplot(mtcars, aes(x=wt, y=mpg)) + geom_point() +
  ggtitle("Title text goes here") +
  theme(plot.title = element_text(size = 16, family="Georgia", face="italic"))


Since the output is a bitmap file, there's no need to embed the fonts.

Font packages

Extrafont supports font packages, which contain fonts that are packaged in a particular way so that they can be imported into extrafont. These fonts are installed as R packages; they are not installed for the computer operating system. Fonts that are installed this way will be available only for PDF or PostScript output. They will not be available for on-screen or bitmap output, which requires that the font be installed for the operating system, not just with R and extrafont.

Presently extrafont supports only font packages with PostScript Type 1 fonts.

See the fontcm package containing Computer Modern fonts for an example.
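As a sketch, installing and importing such a font package looks like this; font_install() downloads the package and imports its fonts into the extrafont database in one step (the "CM Roman" family name is taken from the fontcm documentation):

```r
library(extrafont)

# Install the fontcm package and import its Computer Modern fonts
font_install("fontcm")

# Register the fonts with the PDF device, then use family="CM Roman"
loadfonts()
```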

Installation notes


The source code for the utility program ttf2pt1 is in the package Rttf2pt1. CRAN has pre-compiled Windows and Mac OS X binaries. For other platforms, and when installing from source, it will be compiled on installation, so you need a build environment on your system.

Windows installation notes

In Windows, you need to make sure that Ghostscript is installed.

In each R session where you embed fonts, you will need to tell R where Ghostscript is installed. For example, when Ghostscript 9.05 is installed to the default location, running this command will do it (adjust the path for your installation):

Sys.setenv(R_GSCMD="C:/Program Files/gs/gs9.05/bin/gswin32c.exe")

Resetting the font database

To reset the extrafont database, reinstall the extrafontdb package:

install.packages("extrafontdb")

Original article source at:

#r #graphic #tool 

Extrafont: Tools for using Fonts in R Graphics