Monty Boehm

Qira: QEMU interactive Runtime Analyser


  • QIRA is a competitor to strace and gdb
  • See for high level usage information
  • All QIRA code is released under MIT license
  • Other code in this repo released under its respective license

Supported OS

Ubuntu 14.04 and 16.04 are supported out of the box.
18.04 has a problem building QEMU; see the forked QEMU source at to fix it.

Non-Linux hosts may run the rest of QIRA, but cannot run the QEMU tracer.
Native support for Mac OS X and Windows is very limited.
The Docker image in docker should work everywhere.

Installing release

See the instructions on to install v1.3.

Installing trunk

cd ~/
git clone
cd qira/

Installation Extras

  • ./ will fetch the libraries for i386, armhf, armel, aarch64, mips, mipsel, and ppc
  • ./tracers/ will install the QIRA PIN plugin, allowing --pin to work

Release History

  • v1.3 -- Update using pinned python packages
  • v1.2 -- Many many changes. Forced release due to v1.0 not working anymore.
  • v1.1 -- Support for names and comments. Static stuff added. Register colors.
  • v1.0 -- Perf is good! Tons of bugfixes. Quality software.
  • v0.9 -- Function indentation. haddrline added (look familiar?). Register highlighting in hexdump.
  • v0.8 -- Intel syntax! Shipping CDA (cda a.out) and experimental PIN backend. Bugfixes. Windows support?
  • v0.7 -- DWARF support. Builds QEMU if distributed binaries don't work. Windows IDA plugin.
  • v0.6 -- Added changes before webforking. Highlight strace addresses. Default on analysis.
  • v0.5 -- Fixed regression in C++ database causing wrong values. Added PowerPC support. Added "A" button.
  • v0.4 -- Using 50x faster C++ database. strace support. argv and envp are there.
  • v0.3 -- Built in socat, multiple traces, forks (experimental). Somewhat working x86-64 and ARM support
  • v0.2 -- Removed dependency on mongodb, much faster. IDA plugin fixes, Mac version.
  • v0.1 -- Initial release

The UI

At the top, you have 4 boxes, called the controls.
  Blue = change number, grey = fork number
  red = instruction address (iaddr), yellow = data address (daddr).

On the left you have the vtimeline, this is the full trace of the program.
  The top is the start of the program, the bottom is the end/current state.
  More green = deeper into a function.
  The currently selected change is blue; red marks every pass through the current iaddr.
  Bright yellow is a write to the daddr, dark yellow is a read from the daddr.
  This color scheme is followed everywhere.

Below the controls, you have the idump, showing instructions near the current change.
Under that are the regviewer, datachanges, hexeditor, and strace, all self-explanatory.

Mouse Actions

Click on vtimeline to navigate around. Right-click forks to delete them. Click on data (or doubleclick if highlightable) to follow in data. Right-click on instruction address to follow in instruction.

Keyboard Shortcuts in web/client/controls.js

j -- next invocation of instruction
k -- prev invocation of instruction

shift-j -- next toucher of data
shift-k -- prev toucher of data

m -- go to return from current function
, -- go to start of current function

z -- zoom out max on vtimeline

left  -- -1 fork
right -- +1 fork
up    -- -1 clnum
down  -- +1 clnum

esc -- back

shift-c -- clear all forks

n -- rename instruction
shift-n -- rename data
: -- add comment at instruction
shift-: -- add comment at data

g -- go to change, address, or name
space -- toggle flat/function view

p -- analyze function at iaddr
c -- make code at iaddr, one instruction
a -- make ascii at iaddr
d -- make data at iaddr
u -- make undefined at iaddr

Installation on Windows (experimental)

  • Install git and python 2.7.9
  • Run install.bat

Session state

clnum -- selected changelist number
forknum -- selected fork number
iaddr -- selected instruction address
daddr -- selected data address

cview -- viewed changelists in the vtimeline
dview -- viewed window into data in the hexeditor
iview -- viewed address in the static view

max_clnum -- max changelist number for each fork
dirtyiaddr -- whether we should update the clnum based on the iaddr or not
flat -- if we are in flat view

Static

QIRA static has historically been such a trash heap it's gated behind -S. QIRA should not be trying to compete with IDA.

User input and the actual traces of the program should drive creation of the static database. Don't try to recover all CFGs, only what ran.

The basic idea of static is that it exists at change -1 and doesn't change ever. Each address has a set of tags, including things like name.

Download Details:

Author: Geohot
Source Code: 
License: MIT license

#c #python #interactive #runtime 

Royce Reinger

Interactive Tools for Machine Learning, Deep Learning & Math

Interactive Tools

Interactive Tools for machine learning, deep learning, and math

Deep Learning

exBERT

"exBERT is a tool to help humans conduct flexible, interactive investigations and formulate hypotheses for the model-internal reasoning process, supporting analysis for a wide variety of Hugging Face Transformer models. exBERT provides insights into the meaning of the contextual representations and attention by matching a human-specified input to similar contexts in large annotated datasets."


BertViz

"BertViz is a tool for visualizing attention in the Transformer model, supporting most models from the transformers library (BERT, GPT-2, XLNet, RoBERTa, XLM, CTRL, MarianMT, etc.). It extends the Tensor2Tensor visualization tool by Llion Jones and the transformers library from HuggingFace."




CNN Explainer

An interactive visualization system designed to help non-experts learn about Convolutional Neural Networks (CNNs). It runs a pre-trained CNN in the browser and lets you explore its layers and operations.




Play with GANs in the Browser

Explore Generative Adversarial Networks directly in the browser with GAN Lab. There are many cool features that support interactive experimentation.

  • Interactive hyperparameter adjustment
  • User-defined data distribution
  • Slow-motion mode
  • Manual step-by-step execution



ConvNet Playground

ConvNet Playground is an interactive visualization tool for exploring Convolutional Neural Networks applied to the task of semantic image search.



Distill: Exploring Neural Networks with Activation Atlases

Feature inversion to visualize millions of activations from an image classification network leads to an explorable activation atlas of features the network has learned. This can reveal how the network typically represents some concepts.



A visual introduction to Machine Learning

Available in many different languages.



Interactive Deep Learning Playground

New to Deep Learning? Tinker with a Neural Network in your browser.



Initializing neural networks

Initialization can have a significant impact on convergence when training deep neural networks. Simple initialization schemes can accelerate training, but they require care to avoid common pitfalls. In this post, the authors explain how to initialize neural network parameters effectively.



Embedding Projector

It's increasingly important to understand how data is being interpreted by machine learning models. To translate the things we understand naturally (e.g. words, sounds, or videos) to a form that algorithms can process, we often use embeddings: mathematical vector representations that capture different facets (dimensions) of the data. In this interactive, you can explore these embeddings in your browser using multiple algorithms (PCA, t-SNE, UMAP).
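As a rough sketch of the kind of dimensionality reduction the Embedding Projector performs, here is a minimal PCA projection of some made-up embedding vectors down to 2-D. The data and dimensions here are hypothetical; the Projector itself also offers t-SNE and UMAP.

```python
import numpy as np

def pca_2d(embeddings: np.ndarray) -> np.ndarray:
    """Project rows of `embeddings` onto their top two principal components."""
    centered = embeddings - embeddings.mean(axis=0)
    # The right singular vectors of the centered data are the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T

rng = np.random.default_rng(0)
emb = rng.normal(size=(100, 50))  # 100 items with 50-dimensional embeddings
coords = pca_2d(emb)              # (100, 2) points you could scatter-plot
```

Each output row is a 2-D coordinate you could feed to any scatter-plot widget for interactive exploration.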




OpenAI Microscope

The OpenAI Microscope is a collection of visualizations of every significant layer and neuron of eight important vision models.



Interpretability, Fairness

The Language Interpretability Tool

The Language Interpretability Tool (LIT) is an open-source platform for visualization and understanding of NLP models.

You can use LIT to ask and answer questions like:

  • What kind of examples does my model perform poorly on?
  • Why did my model make this prediction? Can the prediction be attributed to adversarial behavior, or to undesirable priors in the training set?
  • Does my model behave consistently if I change things like textual style, verb tense, or pronoun gender?




What-If Tool

The What-If Tool lets you visually probe the behavior of trained machine learning models, with minimal coding.


Measuring diversity

PAIR Explorables around measuring diversity.

"Search, ranking and recommendation systems can help find useful documents in large datasets. However, these datasets reflect the biases of the society in which they were created and the systems risk re-entrenching those biases. For example, if someone who is not a white man searches for “CEO pictures” and sees a page of white men, they may feel that only white men can be CEOs, further perpetuating lack of representation at companies’ executive levels."

Mitchell et al. (2020) Diversity and Inclusion Metrics in Subset Selection

Interactive explorables

Source: PAIR




Sage Interactions

This is a collection of pages demonstrating the use of the interact command in Sage. It should be easy to just scroll through and copy/paste examples into Sage notebooks.

Examples include Algebra, Bioinformatics, Calculus, Cryptography, Differential Equations, Drawing Graphics, Dynamical Systems, Fractals, Games and Diversions, Geometry, Graph Theory, Linear Algebra, Loop Quantum Gravity, Number Theory, Statistics/Probability, Topology, Web Applications.



Probability Distributions

by Simon Ward-Jones. A visual 👀 tour of probability distributions.

  • Bernoulli Distribution
  • Binomial Distribution
  • Normal Distribution
  • Beta Distribution
  • LogNormal Distribution



Bayesian Inference

by Simon Ward-Jones. Explaining the basics of Bayesian inference with the example of flipping a coin.
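For a flavor of what that coin-flip example computes, here is a minimal sketch of the conjugate Beta-Binomial update behind it (the prior and data values are made up):

```python
# Bayesian inference for a coin: start from a Beta(a, b) prior on the
# heads probability; after observing `heads` heads in `flips` flips,
# the posterior is Beta(a + heads, b + flips - heads).
def update(a, b, heads, flips):
    return a + heads, b + flips - heads

def posterior_mean(a, b):
    return a / (a + b)

a, b = update(1, 1, heads=7, flips=10)  # uniform prior, then 7 heads out of 10
print(posterior_mean(a, b))             # 8 / 12, about 0.667
```

The posterior mean sits between the prior mean (0.5) and the observed frequency (0.7), which is exactly the pull-toward-the-prior behavior such visualizations illustrate.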



Seeing Theory: Probability and Stats

A visual introduction to probability and statistics.



Interactive Gaussian Process Visualization

"A Gaussian process can be thought of as an extension of the multivariate normal distribution to an infinite number of random variables covering each point on the input domain. The covariance between function values at any two points is given by the evaluation of the kernel of the Gaussian process. For an in-depth explanation, read this excellent article and then come back to this interactive visualisation!"

Download Details:

Author: Machine-Learning-Tokyo
Source Code: 

#machinelearning #deeplearning #interactive #tools 

Rupert Beatty

SaveTheDot: A game developed using UIViewPropertyAnimator

Save the Dot

Apple introduced UIViewPropertyAnimator in iOS 10. We can use this new API to control interactive animations. To experiment with UIViewPropertyAnimator, we developed this game using UIKit only 😉 (no SpriteKit at all 😬). As you can see, the animations are very smooth; we look forward to seeing more interactive animations in iOS 10.


How to build

  • Clone the repository
$ git clone
  • Open the project in Xcode 8
$ cd SaveTheDot
$ open "SaveTheDot.xcodeproj"

Requirements

  • Xcode 8.0 (8A218a)
  • iOS 10
  • Swift 3

Download Details:

Author: JakeLin
Source Code: 
License: MIT license

#swift #game #ios #animation #interactive 

Sheldon Grant

Transforming Ways of Interactive Data Analytics with Apache Druid

What is Apache Druid?

Apache Druid is a real-time analytics database designed for rapid analytics on large datasets. It is most often used to power use cases where real-time ingestion, high uptime, and fast query performance are needed. Druid can be used to analyze billions of rows not only in batch but also in real time. It offers many integrations with different technologies like Apache Kafka, Cloud Storage, S3, Hive, HDFS, DataSketches, Redis, etc. It also follows an immutable-past, append-only-future model: past events happen once and never change, so they are immutable, and appends only take place for new events. It provides users with fast and deep exploration of large-scale transaction data.

Characteristics of Apache Druid

Some of the exciting characteristics of Apache Druid are:

    • Cloud-native, making horizontal scaling easy
    • Supports SQL for analyzing data
    • REST API enabled for querying or uploading data
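To illustrate the REST/SQL point, here is a minimal Python sketch of querying Druid's SQL endpoint. The address and the example datasource name are placeholders for your own deployment:

```python
import json
import urllib.request

DRUID_SQL_URL = "http://localhost:8888/druid/v2/sql"  # placeholder address

def build_sql_payload(sql: str) -> bytes:
    # Druid's SQL endpoint accepts a JSON object with a "query" field.
    return json.dumps({"query": sql}).encode("utf-8")

def query_druid(sql: str):
    req = urllib.request.Request(
        DRUID_SQL_URL,
        data=build_sql_payload(sql),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # a list of result rows

# Example (requires a running cluster):
# rows = query_druid('SELECT COUNT(*) AS n FROM "my_datasource"')
```

The same payload shape works from any HTTP client, which is what makes Druid easy to query from dashboards and scripts alike.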

What are its use cases?

Some of the common use cases of Druid are:

  • Clickstream analytics
  • Server metrics storage
  • OLAP/Business intelligence
  • Digital Marketing/Advertising analytics
  • Network Telemetry analytics
  • Supply chain analytics
  • Application performance metrics

What are its key features?

Druid’s core architecture combines ideas from data warehouses, log search systems, and time-series databases.

Columnar Storage Format

Druid uses column-oriented storage, so it loads only the columns needed for a particular query. This enables fast scans and aggregations.

Parallel Processing

Druid can process a query in parallel across the entire cluster; this is also termed massively parallel processing (MPP).

Scalable Distributed System

Druid is typically deployed in clusters of tens to hundreds of servers, offering an ingest rate of millions of records per second, query latencies of sub-second to a few seconds, and retention of trillions of records.

Real-time or Batch Ingestion

Druid can ingest data either in real-time (Ingested data can be queried immediately) or in batches.

Fault-tolerant Architecture

Druid has a fault-tolerant architecture that won’t lose data. Once Druid ingests data, a copy is safely stored in deep storage (Cloud Storage, Amazon S3, Redis, HDFS, and many more). Users' data can be recovered from this deep storage even if all of Druid’s servers fail. This replication also ensures that queries can still be served while the system recovers.

Indexes for Quick Filtering

Druid uses CONCISE- and Roaring-compressed bitmap indexes, which enable faster filtering.

Timestamp Partitioning

All data in Druid must have a timestamp column, as the data is always partitioned by time and every query has a time filter.

Easy Integration with Existing Pipelines

Users can natively stream data into Druid from message buses like Kafka, Kinesis, and many more. It can also load batch files from data lakes like HDFS and Amazon S3.

General Architecture of Apache Druid

Druid is mainly composed of the following processes:

  • Coordinator – This process manages data availability on the cluster.
  • Overlord – This process controls the assignment of data ingestion workloads.
  • Broker – This helps handle queries from external clients.
  • Historical – This process stores queryable data.
  • Middle manager – This process is responsible for ingesting the data.
  • Router – These processes are used to route requests to Brokers, Coordinators, and Overlords. These processes are optional.

Apache Druid Architecture

The processes described above are organized into 3 types of servers: Master, Query, and Data.

Master Server

It runs the Coordinator and Overlord processes. Basically, it manages data ingestion and availability: it is responsible for ingestion jobs and for coordinating the availability of data on the “Data Servers”.

Query Server

It runs the Broker and optional Router processes. Basically, it handles queries from external clients, providing the application endpoints that users and clients interact with and routing queries to Data servers or other Query servers.

Data Server

It runs the Middle Manager and Historical processes, executing ingestion jobs and storing queryable data. Beyond these three server types and six processes, Druid also requires storage for metadata and deep storage.

Metadata Storage

It stores the metadata of the system (audit, datasource, schemas, and so on). Apache Derby is the default metadata store for Druid and is suggested for experimental use, but it is not suitable for production because it does not support a multi-node cluster with high availability. For production, MySQL or PostgreSQL is the better choice. The metadata store holds all the metadata that the Druid cluster needs to work. MySQL as a metadata storage database is used to acquire:

  • Long term flexibility
  • Scaling on budget
  • Good with large datasets
  • Good high read speed

PostgreSQL, as a metadata storage database, is used to acquire:

  • Complex database designs
  • Performing customized procedures
  • Diverse indexing technique
  • Variety of replication methods
  • High read and write speed.

Deep Storage

Apache Druid uses separate storage for any data ingested, which makes it fault-tolerant. Some deep storage technologies are Cloud Storage, Amazon S3, HDFS, Redis, and many more.


Data Ingestion in Druid

Data in Druid is organized into segments that generally contain up to a few million rows. Loading data into Druid is known as ingestion or indexing. Druid fully supports both batch and streaming ingestion. Some of the technologies supported by Druid are Kinesis, Cloud Storage, Apache Kafka, and local storage. Druid requires some structure in the data it ingests; in general, data should consist of a timestamp, dimensions, and metrics.
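The timestamp/dimensions/metrics structure above shows up directly in an ingestion spec's dataSchema. Below is a trimmed sketch of that fragment; the datasource and column names are hypothetical:

```python
# Sketch of the dataSchema portion of a Druid ingestion spec: one timestamp
# column, some dimension columns, and aggregated metrics. Names are made up.
data_schema = {
    "dataSource": "clickstream",
    "timestampSpec": {"column": "ts", "format": "iso"},
    "dimensionsSpec": {"dimensions": ["channel", "country"]},
    "metricsSpec": [
        {"type": "count", "name": "count"},
        {"type": "longSum", "name": "clicks", "fieldName": "clicks"},
    ],
}

def has_required_parts(schema: dict) -> bool:
    """Every ingested row needs a timestamp; dimensions/metrics describe the rest."""
    return all(k in schema for k in ("timestampSpec", "dimensionsSpec", "metricsSpec"))

print(has_required_parts(data_schema))  # True
```

The same dataSchema shape is used whether the ioConfig points at a Kafka topic, a Kinesis stream, or batch files.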

Zookeeper for Apache Druid

Apache Druid uses Apache Zookeeper to integrate all of its services. Users can use the Zookeeper that comes bundled with Druid for experiments, but Zookeeper must be installed separately for production. A Druid cluster can only be as stable as its Zookeeper. Zookeeper is responsible for most of the communication that keeps the Druid cluster functioning, since Druid nodes are prevented from talking to each other directly.

Duties of a Zookeeper

Zookeeper is responsible for the following operations:

  • Segment “publishing” protocol from Historical
  • Coordinator leader election
  • Overlord and MiddleManager task management
  • Segment load/drop protocol between Coordinator and Historical
  • Overlord leader election

How to Keep a Zookeeper Stable?

For maximum Zookeeper stability, the user has to follow the following practices:

  • There should be a Zookeeper dedicated to Druid; avoid sharing it with any other products/applications.
  • Maintain an odd number of Zookeepers for increased reliability.
  • For highly available Zookeeper, 3-5 Zookeeper nodes are recommended. Users can either install Zookeeper on their own system or run 3 or 5 master servers and configure Zookeeper on them appropriately.
  • Share Zookeeper’s location with a master server rather than with data or query servers. This is because query and data servers are far more work-intensive than the master node (Coordinator and Overlord).
  • To fully achieve high availability, never put Zookeeper behind a load balancer.

If Zookeeper goes down, the cluster will continue to operate, but it can neither add new data segments nor react effectively to the loss of a node. So a Zookeeper failure leaves the cluster in a degraded state.


How to Monitor Apache Druid?

Users can monitor Druid by using the metrics it generates. Druid generates metrics related to queries, coordination, and ingestion. Each metric is emitted as a JSON object, either to a runtime log file or over HTTP (to a service like Kafka). Metric emission is disabled by default.

Fields of Metrics Emitted

Metrics emitted by Druid share a common set of fields.

  • Timestamp – the time at which the metric was created
  • Metric – the name given to the metric
  • Service – the name of the service that emitted the metric
  • Host – the name of the host that emitted the metric
  • Value – the numeric value that is associated with the metric emitted
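A single emitted metric event carrying those common fields might look like the following; the values are illustrative, with `query/time` standing in for any metric name:

```python
import json

# One hypothetical metric event, in the JSON shape Druid emits.
raw = """{
  "timestamp": "2023-01-01T00:01:00.000Z",
  "metric": "query/time",
  "service": "druid/broker",
  "host": "localhost:8082",
  "value": 42
}"""

event = json.loads(raw)
print(event["metric"], "=", event["value"], "from", event["service"])
```

Because every event shares these fields, a log pipeline can route and aggregate them without per-metric parsing logic.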

Briefing About Available Metrics

An emitted metric may have dimensions beyond those listed. Druid's emission period is 1 minute by default; one can set `druid.monitoring.emissionPeriod` to change it. Metrics available are:

  • Query Metrics, mainly categorized as Broker, Historical, Real-time, Jetty and Cache
  • SQL Metrics (Only if SQL is enabled)
  • Ingestion Metrics (Kafka Indexing Service)
  • Real-time Metrics (Real-time process, available if Real-time Metrics Monitor is included)
  • Indexing Service
  • Coordination
  • JVM (Available if JVM Monitor module is included)
  • Event Receiver Firehose (available if Event Receiver Firehose Monitor module is included)
  • Sys (Available if Sys Monitor module is included)
  • General Health, mainly Historical

Conclusion

Apache Druid is the best in the market when it comes to analyzing data in clusters and providing brief insight into all the data processed. With Zookeeper by its side, one can ease up working with it and rule the DataOps market. There are also many libraries for interacting with it. To validate that services are running, one can use the `jps` command: as Druid nodes are Java processes, they show up when `jps -m` is used. With that much ease in monitoring Druid, and working with such a vast architecture, it is really the last bite of an ice cream for a DataOps engineer.

Original article source at:

#dataanalytics #interactive 

Oral Brekke

How to Build an interactive Timeline in React

Planby is a JavaScript component to help create schedules, timelines, and electronic program guides (EPG) for streaming services, music and sporting events, and more.

For several years, I worked in the TV online and video-on-demand (VOD) industry. While working on a scheduler web application, I realized that there were no good solutions for electronic program guides (EPG) and scheduling. Admittedly, this is a niche feature for most web developers, but it's a common requirement for TV applications. I've seen and analyzed a number of websites that have implemented their own EPG or timeline, and I often wondered why everyone seemed to be inventing their own solutions instead of working on a shared solution everyone could use. And that's when I started developing Planby.

Planby is a React (JavaScript) component to help you create schedules, timelines, and electronic program guides (EPG) for online TV and video-on-demand (VOD) services, music and sporting events, and more. Planby uses a custom virtual view, allowing you to operate on a lot of data, and present it to your viewers in a friendly and useful way.

Planby has a simple API that you can integrate with third party UI libraries. The component theme is customised to the needs of the application design.

Timeline performance

The most significant consideration when implementing a timeline feature is performance. You're potentially handling an endless stream of data across many different channels. Applications can struggle with refreshing, moving, and scrolling. You want the user's interactions with the content to be fluid.

There's also the potential for poor design. Sometimes, an app implements an EPG timeline in the form of a list that you must scroll vertically, meaning you must click on buttons to move left and right through time, which quickly becomes tiring. What's more, sometimes customization for interacting with an EPG (such as rating, choosing your favorite channels, reading right-to-left (RTL), and so on) aren't available at all, or when they are, they cause performance issues.

Another problem I often face is that an app is too verbose in its data transfer. When an app requests data while you scroll through the EPG, the timeline feels slow and can even crash.

What is Planby?

This is where Planby comes in. Planby is built from scratch, using React and Typescript and a minimal amount of resources. It uses a custom virtual view, allowing you to operate on vast amounts of data. It displays programs and channels to the user, and automatically positions all elements according to hours and assigned channels. When a resource contains no content, Planby calculates the positioning so the time slots are properly aligned.

Planby has a simple interface and includes all necessary features, such as a sidebar, the timeline itself, a pleasant layout, and live program refreshing. In addition, there's an optional feature allowing you to hide any element you don't want to include in the layout.

Planby has a simple API that allows you as the developer to implement your own items along with your user preferences. You can use Planby's theme to develop new features, or you can make custom styles to fit in with your chosen design. You can easily integrate with other features, like a calendar, rating options, a list of user favorites, scroll, "now" buttons, a recording schedule, catch-up content, and much more. What's more, you can add custom global styles, including right-to-left (RTL) functionality.

And best of all, it's open source under the MIT license.

Try Planby

If you would like to try Planby, or just to learn more about it, visit the Git repository. There, I've got some examples of what's possible, and you can read the documentation for details. The package is also available on npm.

Original article source at:

#react #timeline #interactive 

Rupert Beatty

Interpolate: Swift Interpolation for Gesture-driven animations


Interpolate is a powerful Swift interpolation framework for creating interactive gesture-driven animations.


The 🔑 idea of Interpolate is that all animation is the interpolation of values over time.

To use Interpolate:

Import Interpolate at the top of your Swift file.

import Interpolate

Create an Interpolate object with a from value, a to value and an apply closure that applies the interpolation's result to the target object.

let colorChange = Interpolate(from: UIColor.white,
to: UIColor.red,
apply: { [weak self] (color) in
    self?.view.backgroundColor = color
})
Alternatively, you can specify multiple values for the interpolation in an array. The Swift compiler might have trouble inferring the type of the array, so it's best to be explicit.

let colors: [UIColor] = [UIColor.white, UIColor.red, UIColor.green]
let colorChange = Interpolate(values: colors,
apply: { [weak self] (color) in
    self?.view.backgroundColor = color
})

Next, you will need to define a way to translate your chosen gesture's progress to a percentage value (i.e. a CGFloat between 0.0 and 1.0).

For a gesture recognizer or delegate that reports every step of its progress (e.g. UIPanGestureRecognizer or a ScrollViewDidScroll) you can just apply the percentage directly to the Interpolate object:

@IBAction func handlePan(recognizer: UIPanGestureRecognizer) {
    let translation = recognizer.translation(in: self.view)
    let translatedCenterY = view.center.y + translation.y
    let progress = translatedCenterY / self.view.bounds.size.height
    colorChange.progress = progress
}

For other types of gesture recognizers that only report a beginning and an end (e.g. a UILongPressGestureRecognizer), you can animate directly to a target progress value with a given duration. For example:

@IBAction func handleLongPress(recognizer: UILongPressGestureRecognizer) {
    switch recognizer.state {
        case .began:
            colorChange.animate(1.0, duration: 0.3)
        case .cancelled, .ended, .failed:
            colorChange.animate(0.0, duration: 0.3)
        default: break
    }
}

To stop an animation:

colorChange.stopAnimation()

When you are done with the interpolation altogether:

colorChange.invalidate()


What can I interpolate?

Interpolate currently supports the interpolation of:

  • CGPoint
  • CGRect
  • CGSize
  • Double
  • CGFloat
  • Int
  • NSNumber
  • UIColor
  • CGAffineTransform
  • CATransform3D
  • UIEdgeInsets

More types will be added over time.

Advanced usage

Interpolate is not just for dull linear interpolations.

For smoother animations, consider using any of the following functions: easeIn, easeOut, easeInOut and Spring.

// Spring interpolation
let shadowPosition = Interpolate(from: -shadowView.frame.size.width,
to: (self.view.bounds.size.width - shadowView.frame.size.width)/2,
function: SpringInterpolation(damping: 30.0, velocity: 0.0, mass: 1.0, stiffness: 100.0),
apply: { [weak self] (originX) in
    self?.shadowView.frame.origin.x = originX
})

// Ease out interpolation
let groundPosition = Interpolate(from: CGPoint(x: 0, y: self.view.bounds.size.height),
to: CGPoint(x: 0, y: self.view.bounds.size.height - 150),
function: BasicInterpolation.easeOut,
apply: { [weak self] (origin) in
    self?.groundView.frame.origin = origin
})

In fact, you can easily create and use your own interpolation function - all you need is an object that conforms to the InterpolationFunction protocol.

Setting up with CocoaPods

source ''
pod 'Interpolate', '~> 1.3.0'

Setting up with Carthage

Carthage is a decentralized dependency manager that automates the process of adding frameworks to your Cocoa application.

You can install Carthage with Homebrew using the following command:

$ brew update
$ brew install carthage

To integrate Interpolate into your Xcode project using Carthage, specify it in your Cartfile:

github "marmelroy/Interpolate"


Download Details:

Author: Marmelroy
Source Code: 
License: MIT license

#swift #ios #interactive #animation 


Create Beautiful, Interactive Visualizations for Graphs using Vega-Lite


VegaGraphs implements graph visualization with Vega-Lite.

This library is built on top of the JuliaGraphs project.

Example of Usage

The use of VegaGraphs is very straightforward. At the moment, the package has one main function, called graphplot(), which ships with all the possible modifications one can make to the graph visualization.

# Creating a Random Graph with SimpleWeightedGraphs
using VegaGraphs, SimpleWeightedGraphs, LinearAlgebra
incidence = rand([0,1],10,20)
m = incidence'*incidence
m[diagind(m)] .= 0.0
g = SimpleWeightedGraph(m)
random_nodecolor = rand([1,2,3],20)
random_nodesize  = rand(20)
# Using VegaGraphs to create the Plot
p = VegaGraphs.graphplot(g,
    tooltip=true,  # Interactive tooltips
    ew=true        # Edge weights based on the number of times the pair appears in the graph
)

Graph Plot

Desired Features

  •  Graph visualization with interactivity;
  •  Generate graph from DataFrame and generate graph from provided nodes and edges;
  •  Allow to tweak node size, node color, node shape, edge size;
  •  Plot degree distribution;
  •  Interactivity between related graphs (e.g. papers vs authors networks);
  •  Interactivity between Plot and Graph (e.g. Degree distribution and Graph);

Download Details:

Author: JuliaGraphs
Source Code: 
License: MIT license

#julia #interactive #visualization 

Nat Grady

Htmlwidget R Bindings for UpSet.js For Rendering UpSet Plots, Euler

UpSet.js as R HTMLWidget

This is an HTMLWidget and Dash wrapper around the JavaScript library UpSet.js and an alternative implementation of UpSetR.

This package is part of the UpSet.js ecosystem located at the main Github Monorepo.

Installation

# CRAN version
# or



listInput <- list(one = c(1, 2, 3, 5, 7, 8, 11, 12, 13), two = c(1, 2, 4, 5, 10), three = c(1, 5, 6, 7, 8, 9, 10, 12, 13))
upsetjs() %>% fromList(listInput) %>% interactiveChart()

List Input Example

see also UpSetJS.Rmd

Shiny Example

library(shiny)
library(upsetjs)

library(shiny)
library(upsetjs)

listInput <- list(one = c(1, 2, 3, 5, 7, 8, 11, 12, 13),
                  two = c(1, 2, 4, 5, 10),
                  three = c(1, 5, 6, 7, 8, 9, 10, 12, 13))

ui <- fluidPage(
  titlePanel("UpSet.js Shiny Example"),
  upsetjsOutput("upsetjs1") # htmlwidgets output slot for the plot
)

server <- function(input, output, session) {
  # render upsetjs as interactive plot
  output$upsetjs1 <- renderUpsetjs({
    upsetjs() %>% fromList(listInput) %>% interactiveChart()
  })
}

# Run the application
shinyApp(ui = ui, server = server)


see also Shiny Examples

Dash Example


library(dash)
library(dashHtmlComponents)
library(upsetjs)

app <- Dash$new()

# Layout and callback wiring follow the standard Dash for R API
app$layout(htmlDiv(list(
    htmlH1("Hello UpSet.js + Dash"),
    upsetjsDash(id = "upset") %>% fromList(list(a = c(1, 2, 3), b = c(2, 3))) %>%
        interactiveChart(),
    htmlDiv(id = "output")
)))

app$callback(
    output = list(id = "output", property = "children"),
    params = list(input(id = "upset", property = "selection")),
    function(selection) {
        sprintf("You selected \"%s\"", selection$name)
    }
)



see also Dash Examples


The package documentation is located at Open Docs. An introduction vignette is at Open Vignette.

Venn Diagram

Besides the main UpSet.js plot, Venn diagrams for up to five sets are also supported. They use the same input formats and have similar interaction functionality.

listInput <- list(one = c(1, 2, 3, 5, 7, 8, 11, 12, 13), two = c(1, 2, 4, 5, 10), three = c(1, 5, 6, 7, 8, 9, 10, 12, 13))
upsetjsVennDiagram() %>% fromList(listInput) %>% interactiveChart()


see also Venn.Rmd

Karnaugh Maps Diagram

Besides the main UpSet.js plot, a variant of a Karnaugh Map is also supported. It uses the same input formats and has similar interaction functionality.

listInput <- list(one = c(1, 2, 3, 5, 7, 8, 11, 12, 13), two = c(1, 2, 4, 5, 10), three = c(1, 5, 6, 7, 8, 9, 10, 12, 13))
upsetjsKarnaughMap() %>% fromList(listInput) %>% interactiveChart()


see also KMap.Rmd

Dev Environment


  • R with packages: devtools, pkgdown
  • pandoc
npm i -g yarn
yarn install
yarn sdks vscode


yarn lint
yarn build

R Package

yarn style:r
yarn lint:r
yarn check:r
yarn build:r

or in R


R Package Website

will be automatically updated upon push

yarn docs:r

or in R



use release-it

yarn release
Rscript -e "devtools::release()"

Privacy Policy

UpSet.js is a client-only library. Neither the library nor any of its integrations tracks you or transfers your data to any server. The uploaded data in the app are stored in your browser only, using IndexedDB. The Tableau extension can run in a sandbox environment prohibiting any server requests. However, as soon as you export your session within the app to an external service (e.g. …), your data will be transferred.

Download Details:

Author: upsetjs
Source Code: 
License: View license

#r #shiny #interactive 

Htmlwidget R Bindings for UpSet.js For Rendering UpSet Plots, Euler
Lawrence  Lesch

Lawrence Lesch


Tween.js: JavaScript/TypeScript Animation Engine


JavaScript tweening engine for easy animations, incorporating optimised versions of Robert Penner's easing equations.

Update note: in v18 the script you should include has moved from src/Tween.js to dist/tween.umd.js. See the installation section below.

const box = document.createElement('div')
box.style.setProperty('background-color', '#008800')
box.style.setProperty('width', '100px')
box.style.setProperty('height', '100px')
document.body.appendChild(box)

// Setup the animation loop.
function animate(time) {
    requestAnimationFrame(animate)
    TWEEN.update(time)
}
requestAnimationFrame(animate)

const coords = {x: 0, y: 0} // Start at (0, 0)
const tween = new TWEEN.Tween(coords) // Create a new tween that modifies 'coords'.
    .to({x: 300, y: 200}, 1000) // Move to (300, 200) in 1 second.
    .easing(TWEEN.Easing.Quadratic.Out) // Use an easing function to make the animation smooth.
    .onUpdate(() => {
        // Called after tween.js updates 'coords'.
        // Move 'box' to the position described by 'coords' with a CSS translation.
        box.style.setProperty('transform', `translate(${coords.x}px, ${coords.y}px)`)
    })
    .start() // Start the tween immediately.

Test it with CodePen


Currently npm is required to build the project.

git clone
cd tween.js
npm i .
npm run build

This will create some builds in the dist directory. There are currently four different builds of the library:

  • UMD : tween.umd.js
  • AMD : tween.amd.js
  • CommonJS : tween.cjs.js
  • ES6 Module :

You are now able to copy tween.umd.js into your project, then include it with a script tag. This will add TWEEN to the global scope.

<script src="js/tween.umd.js"></script>

With require('@tweenjs/tween.js')

You can add tween.js as an npm dependency:

npm i @tweenjs/tween.js@^18

If you are using Node, Webpack, or Browserify, then you can now use the following to include tween.js:

const TWEEN = require('@tweenjs/tween.js')


  • Does one thing and one thing only: tween properties
  • Doesn't take care of CSS units (e.g. appending px)
  • Doesn't interpolate colours
  • Easing functions are reusable outside of Tween
  • Can also use custom easing functions
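To illustrate what an easing function does, independent of any tweening library, here is a hypothetical Python sketch: quadratic-out maps normalized time t in [0, 1] to eased progress, which is then used to interpolate a property (tween.js's Quadratic.Out uses the same t * (2 - t) curve):

```python
def quadratic_out(t):
    """Quadratic ease-out: fast start, slow finish."""
    return t * (2 - t)

def tween_value(start, end, t, easing=quadratic_out):
    """Interpolate one numeric property at normalized time t in [0, 1]."""
    return start + (end - start) * easing(t)
```

At t = 0.5, quadratic-out has already covered 75% of the distance, which is what makes the motion feel like it decelerates.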



Custom functions
Stop all chained tweens
Yoyo
Relative values
Repeat
Dynamic to
Array interpolation
Video and time
Simplest possible example
Graphs
Black and red
Bars
hello world


You need to install npm first; it comes with Node.js, so install that first. Then cd to tween.js's directory and run:

npm install

To run the tests run:

npm test

If you want to add any feature or change existing features, you must run the tests to make sure you didn't break anything else. Any pull request (PR) needs to have updated passing tests for feature changes (or new passing tests for new features) in src/tests.ts, otherwise the PR won't be accepted. See contributing for more information.


Maintainers: mikebolt, sole, Joe Pea (@trusktr).

All contributors.

Projects using tween.js

A-Frame VR, MOMA Inventing Abstraction 1910-1925, Web Lab, MACCHINA I, Minesweeper 3D, ROME, WebGL Globe, Androidify, The Wilderness Downtown, Linechart

Download Details:

Author: Tweenjs
Source Code: 
License: View license

#javascript #typescript #interactive #animation 

Tween.js: JavaScript/TypeScript Animation Engine
Nat  Grady

Nat Grady


VivagRaph: R Package for interactive Network Plots using VivaGraph js


R package for interactive network plots using VivaGraph js. Built with htmlwidgets.

Example Use

library(vivagRaph)

Nodes = data.frame(nodeName = c('Homer','Bart','Lisa','Milhouse','Lenny'), group = c(1,1,1,2,3))
Edges = data.frame(source = c(0,1,0,1,0), target = c(1,2,2,3,4))

vivagRaph(nodes = Nodes, edges = Edges) # render the interactive network widget

Example Network



Download Details:

Author: Keeganhines/
Source Code: 

#r #network #interactive #javascript 

VivagRaph: R Package for interactive Network Plots using VivaGraph js
Nat  Grady

Nat Grady


Interactive PCA Explorer: Shiny App for Exploring A PCA

Interactive PCA Explorer

This Shiny application takes a CSV file of clean data, allows you to inspect the data and compute a Principal Components Analysis, and will return several diagnostic plots and tables. The plots include a tableplot, a correlation matrix, a scree plot, and a biplot of Principal Components.

You can choose which columns to include in the PCA, and which column to use as a grouping variable. You can choose to center and/or scale the data, or not. You can choose which PCs to include on the biplot.

The biplot of PCs is interactive, so you can click on points or select points and inspect the details of those points in a table.

How to run or install

There are two ways to run/install this app.

First, you can run it on your computer like so:

runGitHub("interactive_pca_explorer", "benmarwick")

Second, you can clone this repo to have the code on your computer, and run the app from there, like so:

# First clone the repository with git. If you have cloned it into
# ~/interactive_pca_explorer, first change your working directory to ~/interactive_pca_explorer, then use runApp() to start the app.
setwd("~/interactive_pca_explorer") # change to match where you downloaded this repo to
runApp() # runs the app 

This app depends on several R packages (ggplot2, DT, GGally, psych, Hmisc, MASS, tabplot). The app will check to see if you have them installed, and if you don't, it will try to download and install them for you.

How to use

Start on the first (left-most) tab to upload your CSV file, then click on each tab, in order from left to right, to see the results.


Here's what it looks like. Here we have input a CSV file that contains the iris data (included with this app).

Then we can see some simple descriptions of the data, and the raw data at the bottom of the page.

Below we see how we can choose the variables to explore in a correlation matrix. We also have a table that summarizes the correlations and gives p-values.

Below we have a few popular diagnostic tests that many people like to do before doing a PCA. They're not very informative and can be skipped, but people coming from SPSS might feel more comfortable if they can see them here also.

Below are the options for computing the PCA. We can choose which columns to include, and a few details about the PCA function. We are using the prcomp function to compute the PCA.
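R's prcomp centers (and optionally scales) the data and computes the components via SVD. A minimal Python sketch of the same computation (illustrative only, not the app's code):

```python
import numpy as np

def pca(X, center=True, scale=False):
    """PCA via SVD, mirroring R's prcomp(x, center=, scale.=)."""
    X = np.asarray(X, dtype=float)
    if center:
        X = X - X.mean(axis=0)
    if scale:
        X = X / X.std(axis=0, ddof=1)
    # Rows of Vt are the principal directions (loadings);
    # U * S gives the scores plotted in the biplot.
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    scores = U * S
    sdev = S / np.sqrt(X.shape[0] - 1)  # prcomp's $sdev
    return scores, Vt.T, sdev
```

The scree plot is simply sdev squared (variance explained per component) plotted in decreasing order.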

Here are the classic PCA plots. First is the scree plot summarizing how important the first few PCs are. Second is the interactive PC biplot. You can see that I've used my mouse to draw a rectangle around a few of the points in the biplot (this is called 'brushing') and in the table below we can see the details of those points in the selected area. We can choose which column to use for grouping (this only affects the colouring of the plot, it doesn't change the PCA results), and we can choose which PCs to show on the plot.

Finally we have some of the raw output from the PCA.

Feedback, contributing, etc.

Please open an issue if you find something that doesn't work as expected. Note that this project is released with a Guide to Contributing and a Contributor Code of Conduct. By participating in this project you agree to abide by its terms.

Download Details:

Author: Benmarwick
Source Code: 
License: MIT license

#r #interactive #pca 

Interactive PCA Explorer: Shiny App for Exploring A PCA
Nat  Grady

Nat Grady


Ballr: Interactive NBA and NCAA Shot Charts with R and Shiny

BallR: Interactive NBA Shot Charts with R and Shiny

BallR uses the NBA Stats API to visualize every shot taken by a player during an NBA season dating back to 1996.

Run your own local version

You can run BallR on your own machine by pasting the following code into the R console (you'll have to install R first):

packages = c("shiny", "tidyverse", "hexbin")
install.packages(packages, repos = "")
library(shiny)
runGitHub("ballr", "toddwschneider")



There are three chart types to choose from: hexagonal, scatter, and heat map


Hexagonal charts, which are influenced by the work of Kirk Goldsberry at Grantland, use R's hexbin package to bin shots into hexagonal regions. The size and opacity of each hexagon are proportional to the number of shots taken within that region, and the color of each hexagon represents your choice of metric, which can be one of:

  • FG% vs. league average
  • FG%
  • Points per shot

There are two sliders to adjust the maximum hexagon sizes, and also the variability of sizes across hexagons, e.g. here's the same Stephen Curry chart but with larger hexagons, and plotting points per shot as the color metric.

Note that the color metrics are not plotted at the individual hexagon level, but at the court region level, e.g. all hexagons on the left side of the court that are 16-24 feet from the basket will have the same color. If BallR were extended to, say, chart all shots for an entire team, then it might make sense to assign colors at the hexagon-level, but for single players that tends to produce excessive noise.
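The hexagonal binning idea, mapping each shot's (x, y) location to the nearest cell of a hexagonal lattice and counting shots per cell, can be sketched in Python (a simplified illustration, not the hexbin package's algorithm):

```python
import math
from collections import Counter

def hex_cell(x, y, size=1.0):
    """Nearest cell of a pointy-top hexagonal lattice, in axial coordinates."""
    q = (math.sqrt(3) / 3 * x - y / 3) / size
    r = (2 / 3 * y) / size
    # Cube-coordinate rounding picks the nearest hexagon center.
    cx, cz = q, r
    cy = -cx - cz
    rx, ry, rz = round(cx), round(cy), round(cz)
    dx, dy, dz = abs(rx - cx), abs(ry - cy), abs(rz - cz)
    if dx > dy and dx > dz:
        rx = -ry - rz
    elif dy > dz:
        ry = -rx - rz
    else:
        rz = -rx - ry
    return (rx, rz)

def hexbin_counts(points, size=1.0):
    """Count points per hexagonal cell; counts drive hexagon size/opacity."""
    return Counter(hex_cell(x, y, size) for x, y in points)
```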


Scatter charts are the most straightforward option: they show the location of each individual shot, with color-coding for makes and misses


Heat map

Heat map charts use two-dimensional kernel density estimation to show the distribution of shot attempts across the court.
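A 2-D kernel density estimate places a small Gaussian bump at every shot location and sums them over a grid; sketched in Python (an illustration of the technique, not BallR's code):

```python
import numpy as np

def kde2d(points, grid_x, grid_y, bandwidth=1.0):
    """Evaluate a 2-D Gaussian kernel density estimate on a grid."""
    pts = np.asarray(points, dtype=float)  # shape (n, 2)
    gx, gy = np.meshgrid(grid_x, grid_y)
    density = np.zeros_like(gx, dtype=float)
    for px, py in pts:
        density += np.exp(-((gx - px) ** 2 + (gy - py) ** 2)
                          / (2 * bandwidth ** 2))
    # Normalize so the surface integrates to ~1 over the plane.
    density /= len(pts) * 2 * np.pi * bandwidth ** 2
    return density
```

The bandwidth controls smoothing: small values produce sharp peaks at individual shots, large values blur them into broad regions.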

Anecdotally I've found that heat maps often show, unsurprisingly, that most shot attempts are taken in the restricted area near the basket. It might be more interesting to filter out restricted area shots when generating heat maps, for example here's the heat map of Stephen Curry's shot attempts excluding shots from within the restricted area:

heat map excluding restricted area


BallR lets you filter shots along a few dimensions (zone, angle, distance, made/missed) by adjusting the inputs in the sidebar. When you apply filters, the shot chart and summary stats update automatically to reflect whatever subset of shots you have chosen.

Color themes

BallR comes with light and dark color themes, and you can define your own theme in court_themes.R


The data comes directly from the NBA Stats API via the shotchartdetail endpoint. See fetch_shots.R for the API call itself. The player select input lets you choose any player and season back to 1996, so you can compare, for example, Michael Jordan of 1996 to LeBron James of 2012.

See also: NBA Shots DB

NBA Shots DB is a Rails app that populates a PostgreSQL database with every NBA shot attempt since 1996 (4.5 million shots and growing).

BallR does not interact with NBA Shots DB yet, but that might change in the future.


Posts by Savvas Tjortjoglou and Eduardo Maia about making NBA shot charts in Python and R, respectively, served as useful resources

Questions/issues/contact: open a GitHub issue

See this post for more info

See also the college branch of this repo for men's college basketball shot charts.

Download Details:

Author: Toddwschneider
Source Code: 
License: MIT license

#r #interactive #charts 

Ballr: Interactive NBA and NCAA Shot Charts with R and Shiny
Nat  Grady

Nat Grady


An interactive Web Application for Demonstrating & using Phyloseq


Shiny-phyloseq is an interactive web application that provides a graphical user interface to the microbiome analysis package for R, called phyloseq. For details about using the phyloseq package directly, see The phyloseq Homepage.


Shiny-phyloseq is provided under a free-of-charge, open-source license (A-GPL3). All we require is that you cite/attribute the following in any work that benefits from this code or application.

The App

McMurdie and Holmes (2014) Shiny-phyloseq: Web Application for Interactive Microbiome Analysis with Provenance Tracking.

Bioinformatics (Oxford, England), 31(2), 282–283. DOI 10.1093/bioinformatics/btu616

"Under the Hood"

McMurdie and Holmes (2013) phyloseq: An R package for reproducible interactive analysis and graphics of microbiome census data.

PLoS ONE 8(4):e61217.

Launching Shiny-phyloseq Local Session

While it is possible to host the server "back end" somewhere so that users only need to point their web browser to a link, it is also possible to launch both the back and front "ends" on your local machine. The server back end will be an R session on your own machine, while the front end is your web browser, pointed to the appropriate local URL.

Quick install/launch instructions

Simply launching Shiny-phyloseq should also install missing/old packages. Make sure that you first have installed the latest version of R.

The following R code will launch Shiny-phyloseq on most systems.


See the Shiny-phyloseq installation instructions, for further details.

Download Details:

Author: joey711
Source Code: 
License: GPL-3.0 license

#r #interactive #web 

An interactive Web Application for Demonstrating & using Phyloseq
Nat  Grady

Nat Grady


QRAGadget: A Shiny Gadget for interactive QRA Visualizations


  • Easily create Quantitative Risk Analysis (QRA) visualizations
  • Choose from numerous color palettes, basemaps, and different configurations


QRAGadget is a Shiny Gadget for creating interactive QRA visualizations. QRAGadget is powered by the excellent leaflet and raster packages. While this gadget was initially intended for those interested in creating QRA visualizations, it may also be more generally applicable to anyone interested in visualizing raster data in an interactive map.

Getting Started

To install QRAGadget in R:


Or to install the latest developmental version:


After installation, and if using RStudio (v0.99.878 or later), the gadget will appear in the Addins dropdown menu. Otherwise, to launch the gadget, simply type:




QRAGadget currently accepts two primary types of raster data: (1) a file upload (in csv format) or (2) an R data.frame object. In order to explore the gadget, create some dummy data:

sample <- matrix(runif(36 * 36), ncol = 36, nrow = 36) %>%
  as.data.frame() # pipe target missing in the original; a data.frame is one accepted input type

Then launch the app:


Launching the app brings up the Input/Output page. To find the dummy data, click R Object under Data Type, and then select sample from the dropdown menu.

Choose a name for the output html file. After customizing the map, clicking Done will create a standalone html file in the current working directory (Be sure not to save over a previously created map file!). Click Cancel any time to start over.

To bookmark the app at any time, click the Bookmark button, which will create a unique url for the current state of the app.

Input/Output Page


To format the raster image, click the Raster icon. Here are a number of options for specifying the extents of the raster image (XMIN, XMAX, YMIN, and YMAX) as well as the projection of the raster layer. It is very important that the raster layer be tagged with the correct projected coordinate reference system.

To specify the bins for the color palette, click Number to select the total number of bins or Cuts to select both the number and the actual cut values for each bin.

Finally, there is an option to disaggregate the raster layer and create a new one with a higher resolution (smaller cells) while also locally interpolating between the new cell values (smoothed cells). To disaggregate the raster layer, enter the number of cells to disaggregate.
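Disaggregation by a factor n splits each cell into n x n smaller cells; with interpolation, the new values are smoothed between neighboring cell centers. A minimal NumPy sketch of both modes (illustrative only):

```python
import numpy as np

def disaggregate(grid, factor):
    """Split each cell into factor x factor cells (values simply repeated)."""
    return np.repeat(np.repeat(grid, factor, axis=0), factor, axis=1)

def disaggregate_smooth(grid, factor):
    """Disaggregate with bilinear interpolation between cell centers."""
    ny, nx = grid.shape
    y = np.linspace(0, ny - 1, ny * factor)
    x = np.linspace(0, nx - 1, nx * factor)
    # Interpolate along rows first, then along columns.
    rows = np.array([np.interp(x, np.arange(nx), row) for row in grid])
    return np.array([np.interp(y, np.arange(ny), col) for col in rows.T]).T
```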

For this example, use the default values for XMIN, XMAX, YMIN, and YMAX as well as the given projection, but enter 5 as the number of cells to disaggregate:

Raster Page


To view the interactive map, click the Map icon. Click the Reset button at any time in order to reset the extents of the map.

Map Page


The Preferences tab has a number of options for customizing the map:

To try out some of these options, select the PuOr Color Palette, the Esri.WorldImagery Map Tile, and move the Control Position over to the bottomleft:

Preferences Page

This should result in the following interactive map:

Map Page 2

Source Code

QRAGadget is an open source project, and the source code is available at


This project is in its very early stages. Please let us know if there are things you would like to see (or things you don't like!) by opening up an issue using the GitHub issue tracker at


Contributions are welcome by sending a pull request

Download Details:

Author: Paulgovan
Source Code: 
License: Apache-2.0 license

#r #visualization #interactive 

QRAGadget: A Shiny Gadget for interactive QRA Visualizations

IPython.jl: Run IPython inside Julia to Exchange Data Interactively

Launch IPython in Julia 

Example REPL session


Run using IPython and then type . at an empty julia> prompt, or run IPython.start_ipython(). You can switch back to the Julia REPL with backspace or the ctrl-h key (like other REPL modes). Re-entering IPython keeps the previous state. Use the pre-defined Main object to access the Julia namespace from IPython, and the py"..." string macro to access the Python namespace from Julia.

Note: First launch of IPython may be slow.



  • PyCall


  • Python 3.7 or above
  • IPython 7.0 or above

Accessing Julia from Python

If the simple Main.eval("...") and Main.<name> accessors are not enough, PyJulia is a nice way to access Julia objects from Python. For example, you can import any Julia package from Python:

>>> from julia import Base
>>> Base.banner()

For more advanced/experimental Julia-(I)Python integration, see ipyjulia_hacks.


Julia-mode like prompt

If you want the IPython prompt to look like part of the Julia prompt, add the following snippet in ~/.ipython/profile_default/

try:
    from ... import JuliaModePrompt  # import path missing from the original text
except ImportError:
    pass
else:
    c.TerminalInteractiveShell.prompts_class = JuliaModePrompt

The prompt would then look like ipy 1> instead of In [1]:. It also removes Out[1]. Note that the above setting does not change your normal IPython prompts.

Download Details:

Author: tkf
Source Code: 
License: View license

#julia #python #interactive 

IPython.jl: Run IPython inside Julia to Exchange Data Interactively