Ubuntu 14.04 and 16.04 are supported out of the box.
Ubuntu 18.04 has a problem building QEMU; see the forked QEMU source at https://github.com/geohot/qemu/tree/qira for the fix.
Non-Linux hosts may run the rest of QIRA, but cannot run the QEMU tracer.
There is very limited native support for Mac OS X and Windows.
The Docker image in docker should work everywhere.
See the instructions at qira.me to install 1.3.
cd ~/
git clone https://github.com/geohot/qira.git
cd qira/
./install.sh
At the top, you have 4 boxes, called the controls.
Blue = change number, grey = fork number
red = instruction address (iaddr), yellow = data address (daddr).
On the left you have the vtimeline, this is the full trace of the program.
The top is the start of the program, the bottom is the end/current state.
More green = deeper into a function.
The currently selected change is blue; red marks every pass through the current iaddr.
Bright yellow is a write to the daddr; dark yellow is a read from the daddr.
This color scheme is followed everywhere.
Below the controls, you have the idump, showing instructions near the current change.
Under that are the regviewer, datachanges, hexeditor, and strace, all self-explanatory.
Click on vtimeline to navigate around. Right-click forks to delete them. Click on data (or doubleclick if highlightable) to follow in data. Right-click on instruction address to follow in instruction.
j -- next invocation of instruction
k -- prev invocation of instruction
shift-j -- next toucher of data
shift-k -- prev toucher of data
m -- go to return from current function
, -- go to start of current function
z -- zoom out max on vtimeline
left -- -1 fork
right -- +1 fork
up -- -1 clnum
down -- +1 clnum
esc -- back
shift-c -- clear all forks
n -- rename instruction
shift-n -- rename data
: -- add comment at instruction
shift-: -- add comment at data
g -- go to change, address, or name
space -- toggle flat/function view
p -- analyze function at iaddr
c -- make code at iaddr, one instruction
a -- make ascii at iaddr
d -- make data at iaddr
u -- make undefined at iaddr
clnum -- selected changelist number
forknum -- selected fork number
iaddr -- selected instruction address
daddr -- selected data address
cview -- viewed changelists in the vtimeline
dview -- viewed window into data in the hexeditor
iview -- viewed address in the static view
max_clnum -- max changelist number for each fork
dirtyiaddr -- whether we should update the clnum based on the iaddr or not
flat -- if we are in flat view
QIRA static has historically been such a trash heap it's gated behind -S. QIRA should not be trying to compete with IDA.
User input and the actual traces of the program should drive creation of the static database. Don't try to recover all CFGs, only what ran.
The basic idea of static is that it exists at change -1 and doesn't change ever. Each address has a set of tags, including things like name.
Author: Geohot
Source Code: https://github.com/geohot/qira
License: MIT license
Interactive Tools for machine learning, deep learning, and math
"exBERT is a tool to help humans conduct flexible, interactive investigations and formulate hypotheses for the model-internal reasoning process, supporting analysis for a wide variety of Hugging Face Transformer models. exBERT provides insights into the meaning of the contextual representations and attention by matching a human-specified input to similar contexts in large annotated datasets."
"BertViz is a tool for visualizing attention in the Transformer model, supporting most models from the transformers library (BERT, GPT-2, XLNet, RoBERTa, XLM, CTRL, MarianMT, etc.). It extends the Tensor2Tensor visualization tool by Llion Jones and the transformers library from HuggingFace."
An interactive visualization system designed to help non-experts learn about Convolutional Neural Networks (CNNs). It runs a pre-trained CNN in the browser and lets you explore the layers and operations.
Explore Generative Adversarial Networks directly in the browser with GAN Lab. There are many cool features that support interactive experimentation.
ConvNet Playground
ConvNet Playground is an interactive visualization tool for exploring Convolutional Neural Networks applied to the task of semantic image search.
Feature inversion to visualize millions of activations from an image classification network leads to an explorable activation atlas of features the network has learned. This can reveal how the network typically represents some concepts.
Available in many different languages.
New to Deep Learning? Tinker with a Neural Network in your browser.
Initialization can have a significant impact on convergence in training deep neural networks. Simple initialization schemes can accelerate training, but they require care to avoid common pitfalls. In this post, deeplearning.ai folks explain how to initialize neural network parameters effectively.
It's increasingly important to understand how data is being interpreted by machine learning models. To translate the things we understand naturally (e.g. words, sounds, or videos) to a form that the algorithms can process, we often use embeddings, a mathematical vector representation that captures different facets (dimensions) of the data. In this interactive, you can use several different algorithms (PCA, t-SNE, UMAP) to explore these embeddings in your browser.
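As a toy illustration of the projection idea behind PCA (this sketch is mine, not part of the article, and uses only the standard library), here is a minimal 2-D PCA: center the points, build the 2x2 covariance matrix, and take the eigenvector of its largest eigenvalue as the direction of maximum variance.

```python
import math

def principal_direction(points):
    """Return the unit vector along the first principal component of 2-D points."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    centered = [(x - mx, y - my) for x, y in points]
    # Entries of the 2x2 covariance matrix [[sxx, sxy], [sxy, syy]]
    sxx = sum(x * x for x, _ in centered) / n
    syy = sum(y * y for _, y in centered) / n
    sxy = sum(x * y for x, y in centered) / n
    # Largest eigenvalue via the quadratic formula on the characteristic polynomial
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    lam = tr / 2 + math.sqrt(max(tr * tr / 4 - det, 0.0))
    # Corresponding eigenvector (fall back to an axis when the matrix is diagonal)
    vx, vy = (lam - syy, sxy) if abs(sxy) > 1e-12 else ((1.0, 0.0) if sxx >= syy else (0.0, 1.0))
    norm = math.hypot(vx, vy)
    return (vx / norm, vy / norm)

# Points lying roughly along the line y = 2x
pts = [(0, 0), (1, 2.1), (2, 3.9), (3, 6.2), (4, 8.0)]
dx, dy = principal_direction(pts)
print(dx, dy)  # direction close to (1, 2) / sqrt(5)
```

Real embeddings have hundreds of dimensions rather than two, but the tools above do conceptually the same thing: find low-dimensional directions that preserve as much structure as possible.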
The OpenAI Microscope is a collection of visualizations of every significant layer and neuron of eight important vision models.
Interpretability, Fairness
The Language Interpretability Tool (LIT) is an open-source platform for visualization and understanding of NLP models.
You can use LIT to ask and answer questions like:
The What-If Tool lets you visually probe the behavior of trained machine learning models, with minimal coding.
PAIR Explorables around measuring diversity.
"Search, ranking and recommendation systems can help find useful documents in large datasets. However, these datasets reflect the biases of the society in which they were created and the systems risk re-entrenching those biases. For example, if someone who is not a white man searches for “CEO pictures” and sees a page of white men, they may feel that only white men can be CEOs, further perpetuating lack of representation at companies’ executive levels."
Mitchell et al. (2020), Diversity and Inclusion Metrics in Subset Selection
Math
This is a collection of pages demonstrating the use of the interact command in Sage. It should be easy to just scroll through and copy/paste examples into Sage notebooks.
Examples include Algebra, Bioinformatics, Calculus, Cryptography, Differential Equations, Drawing Graphics, Dynamical Systems, Fractals, Games and Diversions, Geometry, Graph Theory, Linear Algebra, Loop Quantum Gravity, Number Theory, Statistics/Probability, Topology, Web Applications.
by Simon Ward-Jones. A visual 👀 tour of probability distributions.
by Simon Ward-Jones. Explaining the basics of Bayesian inference with the example of flipping a coin.
A visual introduction to probability and statistics.
"A Gaussian process can be thought of as an extension of the multivariate normal distribution to an infinite number of random variables covering each point on the input domain. The covariance between function values at any two points is given by the evaluation of the kernel of the Gaussian process. For an in-depth explanation, read this excellent distill.pub article and then come back to this interactive visualisation!"
Author: Machine-Learning-Tokyo
Source Code: https://github.com/Machine-Learning-Tokyo/Interactive_Tools
Apple introduced UIViewPropertyAnimator in iOS 10. We can use this new API to control interactive animations. To experiment with UIViewPropertyAnimator, we developed this game using UIKit only 😉 (no SpriteKit at all 😬). As you can see, the animations are very smooth; we look forward to seeing more interactive animations in iOS 10.
$ git clone https://github.com/JakeLin/SaveTheDot.git
$ cd SaveTheDot
$ open "SaveTheDot.xcodeproj"
Requirements
Author: JakeLin
Source Code: https://github.com/JakeLin/SaveTheDot
License: MIT license
Apache Druid is a real-time analytics database designed for rapid analytics on large datasets. It is most often used to power use cases where real-time ingestion, high uptime, and fast query performance are needed. Druid can analyze billions of rows not only in batch but also in real time. It offers many integrations with different technologies like Apache Kafka, Cloud Storage, S3, Hive, HDFS, DataSketches, Redis, etc. It also follows the principle of an immutable past and an append-only future: past events happen once and never change, while new events are only appended. It provides users with fast, deep exploration of large-scale transaction data.
Some of the exciting characteristics of Apache Druid are:
Some of the common use cases of Druid are:
Druid’s core architecture is made by combining the ideas of different data warehouses, log search systems, and time-series databases.
It uses column-oriented storage, so it loads only the columns needed for a particular query. This helps with fast scans and aggregations.
It can process a query in parallel across the entire cluster, which is also termed massively parallel processing.
Druid is mostly deployed in clusters of tens to hundreds of servers, offering ingest rates of millions of records/sec, query latencies of sub-second to a few seconds, and retention of trillions of records.
Druid can ingest data either in real-time (Ingested data can be queried immediately) or in batches.
It is a fault-tolerant architecture that won't lose data. Once Druid ingests data, a copy is safely stored in deep storage (Cloud Storage, Amazon S3, Redis, HDFS, and many more). User data can easily be recovered from this deep storage even if all of Druid's servers fail. This replication ensures that queries are still possible while the system recovers.
Druid uses Concise and Roaring compressed bitmap indexes to create indexes that enable faster filtering.
All data in Druid must have a timestamp column, as data is always partitioned by time and every query has a time filter.
Users can easily stream data natively into Druid from message buses like Kafka, Kinesis, and many more. It can also load batch files from data lakes like HDFS and Amazon S3.
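The bitmap-index idea mentioned above can be sketched in a few lines. This is illustrative Python of my own, not Druid internals: each dimension value gets a bitset of the rows containing it, and an AND filter becomes a single bitwise operation.

```python
rows = [
    {"city": "SF", "browser": "Chrome"},
    {"city": "NY", "browser": "Safari"},
    {"city": "SF", "browser": "Safari"},
    {"city": "LA", "browser": "Chrome"},
]

# Build one bitmap per (dimension, value) pair; here an int serves as the bitset
bitmaps = {}
for i, row in enumerate(rows):
    for dim, value in row.items():
        key = (dim, value)
        bitmaps[key] = bitmaps.get(key, 0) | (1 << i)

# Filter city == "SF" AND browser == "Safari" with one bitwise AND
matches = bitmaps[("city", "SF")] & bitmaps[("browser", "Safari")]
matching_rows = [i for i in range(len(rows)) if matches >> i & 1]
print(matching_rows)  # [2]
```

Compressed formats like Concise and Roaring store these bitsets far more compactly than a raw int per value, but the filtering principle is the same.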
Druid is mainly composed of the following processes:
The processes described above are organized into three types of servers: Master, Query, and Data.
It runs the Coordinator and Overlord. Basically, it manages big data ingestion and availability. Master is responsible for the ingestion of jobs and coordinating the availability of data on the “Data Servers”.
It runs Brokers and Optional Router processes. Basically, it handles queries and external clients by providing the endpoints of applications that users and clients interact with, routing queries to Data servers or other Query servers.
It runs Middle Managers and Historical processes. This helps execute jobs and store the queryable data. Other than these three servers and six processes, Druid also requires storage for metadata and deep storage.
It is used to store the metadata of the system (audit, datasource, schemas, and so on). For experimentation, Apache Derby is suggested; Derby is the default metadata store for Druid, but it is not suitable for production because it does not support a multi-node cluster with high availability. For production, MySQL or PostgreSQL is the best choice. The metadata store holds all the metadata the Druid cluster needs to work. MySQL as a metadata storage database is used to acquire:
PostgreSQL, as a metadata storage database, is used to acquire:
Apache Druid uses separate storage for any ingested data, which makes it fault-tolerant. Some deep storage technologies are Cloud Storage, Amazon S3, HDFS, Redis, and many more.
Data in Druid is organized into segments that generally have up to a few million rows. Loading data into Druid is known as ingestion or indexing. Druid fully supports both batch and streaming ingestion. Some of the sources supported by Druid are Kinesis, Cloud Storage, Apache Kafka, and local storage. Druid requires some structure in the data it ingests; in general, data should consist of a timestamp, dimensions, and metrics.
Apache Druid uses Apache Zookeeper to integrate all its services. Users can use the Zookeeper that comes with Druid for experiments, but for production one has to install Zookeeper separately. A Druid cluster can only be as stable as its Zookeeper. Zookeeper is responsible for most of the communication that keeps the Druid cluster functioning, since Druid nodes are prevented from talking to each other directly.
Zookeeper is responsible for the following operations:
For maximum Zookeeper stability, the user has to follow the following practices:
If Zookeeper goes down, the cluster will continue to operate, but it can neither add new data segments nor react effectively to the loss of a node. So a Zookeeper failure leaves the cluster in a degraded state.
Users can monitor Druid using the metrics it generates. Druid generates metrics related to queries, coordination, and ingestion. These metrics are emitted as JSON objects, either to a runtime log file or over HTTP (to a service like Kafka). Metric emission is disabled by default.
Metrics emitted by Druid share a common set of fields.
Emitted metrics may have dimensions beyond those listed. The emission period is 1 minute by default; use `druid.monitoring.emissionPeriod` to change it. Metrics are available for queries, ingestion, and coordination, among others.
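For illustration, a single emitted metric event might look roughly like the following. The values here are invented, but the shared fields (timestamp, metric, service, host, value) are the common fields Druid metrics carry, with extra dimensions such as the datasource:

```json
{
  "timestamp": "2022-12-09T10:00:00.000Z",
  "service": "druid/broker",
  "host": "broker01.example.com:8082",
  "metric": "query/time",
  "value": 142,
  "dataSource": "wikipedia"
}
```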
Apache Druid is the best in the market when it comes to analyzing data in clusters and providing quick insight into all the data processed. And with Zookeeper at its side, one can ease their work with it and rule the DataOps market. There are also many libraries for interacting with it. To validate that services are running, one can use the jps command: as Druid nodes are Java processes, they show up when `jps -m` is run. With that much ease in monitoring Druid and working with such a vast architecture, it really is the last bite of ice cream for a DataOps engineer.
Original article source at: https://www.xenonstack.com/
Planby is a JavaScript component to help create schedules, timelines, and electronic program guides (EPG) for streaming services, music and sporting events, and more.
For several years, I worked in the TV online and video-on-demand (VOD) industry. While working on a scheduler web application, I realized that there were no good solutions for electronic program guides (EPG) and scheduling. Admittedly, this is a niche feature for most web developers, but it's a common requirement for TV applications. I've seen and analyzed a number of websites that have implemented their own EPG or timeline, and I often wondered why everyone seemed to be inventing their own solutions instead of working on a shared solution everyone could use. And that's when I started developing Planby.
Planby is a React (JavaScript) component to help you create schedules, timelines, and electronic program guides (EPG) for online TV and video-on-demand (VOD) services, music and sporting events, and more. Planby uses a custom virtual view, allowing you to operate on a lot of data, and present it to your viewers in a friendly and useful way.
Planby has a simple API that you can integrate with third party UI libraries. The component theme is customised to the needs of the application design.
The most significant consideration when implementing a timeline feature is performance. You're potentially handling a basically endless stream of data across many different channels. Applications can struggle with refreshing, moving, and scrolling. You want the user's interactions with the content to be fluid.
There's also the potential for poor design. Sometimes, an app implements an EPG timeline in the form of a list that you must scroll vertically, meaning you must click on buttons to move left and right through time, which quickly becomes tiring. What's more, sometimes customization for interacting with an EPG (such as rating, choosing your favorite channels, reading right-to-left (RTL), and so on) aren't available at all, or when they are, they cause performance issues.
Another problem I often face is that an app is too verbose in its data transfer. When an app requests data while you scroll through the EPG, the timeline feels slow and can even crash.
This is where Planby comes in. Planby is built from scratch, using React and TypeScript and a minimal amount of resources. It uses a custom virtual view, allowing you to operate on vast amounts of data. It displays programs and channels to the user, and automatically positions all elements according to hours and assigned channels. When a resource contains no content, Planby calculates the positioning so the time slots are properly aligned.
Planby has a simple interface and includes all necessary features, such as a sidebar, the timeline itself, a pleasant layout, and live program refreshing. In addition, there's an optional feature allowing you to hide any element you don't want to include in the layout.
Planby has a simple API that allows you as the developer to implement your own items along with your user preferences. You can use Planby's theme to develop new features, or you can make custom styles to fit in with your chosen design. You can easily integrate with other features, like a calendar, rating options, a list of user favorites, scroll, "now" buttons, a recording schedule, catch-up content, and much more. What's more, you can add custom global styles, including right-to-left (RTL) functionality.
And best of all, it's open source under the MIT license.
If you would like to try Planby, or just learn more about it, visit the Git repository. There, I've got some examples of what's possible, and you can read the documentation for the details. The package is also available on npm.
Original article source at: https://opensource.com/
Interpolate is a powerful Swift interpolation framework for creating interactive gesture-driven animations.
The 🔑 idea of Interpolate: all animation is the interpolation of values over time.
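That idea is language-agnostic. As a rough sketch (in Python rather than Swift, and independent of the library's actual API), interpolating a value by a progress fraction, optionally through an easing curve, looks like:

```python
def lerp(start, stop, progress):
    """Linearly interpolate between start and stop for progress in [0, 1]."""
    return start + (stop - start) * progress

def ease_in_out(progress):
    """A smoothstep-style easing curve: slow at both ends, fast in the middle."""
    return progress * progress * (3 - 2 * progress)

# A gesture at 25% progress, eased, applied to an alpha value from 0.0 to 1.0
p = ease_in_out(0.25)
print(lerp(0.0, 1.0, p))  # 0.15625
```

Interpolate does the equivalent for UIColor, CGRect, and other Swift types, driving the progress from gestures instead of time.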
To use Interpolate:
Import Interpolate at the top of your Swift file.
import Interpolate
Create an Interpolate object with a from value, a to value and an apply closure that applies the interpolation's result to the target object.
let colorChange = Interpolate(from: UIColor.white,
to: UIColor.red,
apply: { [weak self] (color) in
self?.view.backgroundColor = color
})
Alternatively, you can specify multiple values for the interpolation in an array. The Swift compiler might have trouble inferring the type of the array, so it's best to be explicit.
let colors: [UIColor] = [UIColor.white, UIColor.red, UIColor.green]
let colorChange = Interpolate(values: colors,
apply: { [weak self] (color) in
self?.view.backgroundColor = color
})
Next, you will need to define a way to translate your chosen gesture's progress to a percentage value (i.e. a CGFloat between 0.0 and 1.0).
For a gesture recognizer or delegate callback that reports every step of its progress (e.g. UIPanGestureRecognizer or scrollViewDidScroll), you can just apply the percentage directly to the Interpolate object:
@IBAction func handlePan(recognizer: UIPanGestureRecognizer) {
let translation = recognizer.translation(in: self.view)
let translatedCenterY = view.center.y + translation.y
let progress = translatedCenterY / self.view.bounds.size.height
colorChange.progress = progress
}
For other types of gesture recognizers that only report a beginning and an end (e.g. a UILongPressGestureRecognizer), you can animate directly to a target progress value with a given duration. For example:
@IBAction func handleLongPress(recognizer: UILongPressGestureRecognizer) {
switch recognizer.state {
case .began:
colorChange.animate(1.0, duration: 0.3)
case .cancelled, .ended, .failed:
colorChange.animate(0.0, duration: 0.3)
default: break
}
}
To stop an animation:
colorChange.stopAnimation()
When you are done with the interpolation altogether:
colorChange.invalidate()
Voila!
Interpolate currently supports the interpolation of:
More types will be added over time.
Interpolate is not just for dull linear interpolations.
For smoother animations, consider using any of the following functions: easeIn, easeOut, easeInOut and Spring.
// Spring interpolation
let shadowPosition = Interpolate(from: -shadowView.frame.size.width,
to: (self.view.bounds.size.width - shadowView.frame.size.width)/2,
function: SpringInterpolation(damping: 30.0, velocity: 0.0, mass: 1.0, stiffness: 100.0),
apply: { [weak self] (originX) in
self?.shadowView.frame.origin.x = originX
})
// Ease out interpolation
let groundPosition = Interpolate(from: CGPoint(x: 0, y: self.view.bounds.size.height),
to: CGPoint(x: 0, y: self.view.bounds.size.height - 150),
function: BasicInterpolation.easeOut,
apply: { [weak self] (origin) in
self?.groundView.frame.origin = origin
})
In fact, you can easily create and use your own interpolation function - all you need is an object that conforms to the InterpolationFunction protocol.
source 'https://github.com/CocoaPods/Specs.git'
pod 'Interpolate', '~> 1.3.0'
Carthage is a decentralized dependency manager that automates the process of adding frameworks to your Cocoa application.
You can install Carthage with Homebrew using the following command:
$ brew update
$ brew install carthage
To integrate Interpolate into your Xcode project using Carthage, specify it in your Cartfile
:
github "marmelroy/Interpolate"
Author: Marmelroy
Source Code: https://github.com/marmelroy/Interpolate
License: MIT license
VegaGraphs implements graph visualization with Vega-Lite.
This library is built on top of the JuliaGraphs project.
The use of VegaGraphs is very straightforward. At the moment, the package has one main function, graphplot(), which ships with all the possible modifications one can make to the graph visualization.
# Creating a Random Graph with SimpleWeightedGraphs
incidence = rand([0,1],10,20)
m = incidence'*incidence
m[diagind(m)] .= 0.0
g = SimpleWeightedGraph(m)
random_nodecolor = rand([1,2,3],20)
random_nodesize = rand(20)
# Using VegaGraphs to create the Plot
p = VegaGraphs.graphplot(g,
tooltip=true, # Interactive tooltips
ew=true, # VegaGraphs calculates edge weights based on the number of times the pair appears in the graph
node_label=false,
node_colorfield=random_nodecolor,
node_sizefield=random_nodesize,
node_sizefieldtype="q",
node_colorfieldtype="o"
)
Author: JuliaGraphs
Source Code: https://github.com/JuliaGraphs/VegaGraphs.jl
License: MIT license
This is an HTMLWidget and Plotly Dash wrapper around the JavaScript library UpSet.js, and an alternative implementation of UpSetR.
This package is part of the UpSet.js ecosystem located at the main Github Monorepo.
# CRAN version
install.packages('upsetjs')
# or
devtools::install_url("https://github.com/upsetjs/upsetjs_r/releases/latest/download/upsetjs.tar.gz")
library(upsetjs)
listInput <- list(one = c(1, 2, 3, 5, 7, 8, 11, 12, 13), two = c(1, 2, 4, 5, 10), three = c(1, 5, 6, 7, 8, 9, 10, 12, 13))
upsetjs() %>% fromList(listInput) %>% interactiveChart()
see also UpSetJS.Rmd
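To make concrete what the widget visualizes, here is a sketch (in Python, not part of the R API) of the computation behind an UpSet plot: counting elements by the exact combination of sets they belong to, using the same three sets as the example above.

```python
list_input = {
    "one":   {1, 2, 3, 5, 7, 8, 11, 12, 13},
    "two":   {1, 2, 4, 5, 10},
    "three": {1, 5, 6, 7, 8, 9, 10, 12, 13},
}

# For each element, record the exact combination of sets containing it
membership = {}
for name, members in list_input.items():
    for element in members:
        membership.setdefault(element, set()).add(name)

# Count elements per distinct combination (these counts are the UpSet bars)
counts = {}
for element, names in membership.items():
    key = tuple(sorted(names))
    counts[key] = counts.get(key, 0) + 1

for combo, count in sorted(counts.items()):
    print(combo, count)
```

For instance, only elements 1 and 5 appear in all three sets, so the "one & two & three" bar has height 2.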
library(shiny)
library(upsetjs)
listInput <- list(one = c(1, 2, 3, 5, 7, 8, 11, 12, 13),
two = c(1, 2, 4, 5, 10),
three = c(1, 5, 6, 7, 8, 9, 10, 12, 13))
ui <- fluidPage(
titlePanel("UpSet.js Shiny Example"),
upsetjsOutput("upsetjs1")
)
server <- function(input, output, session) {
# render upsetjs as interactive plot
output$upsetjs1 <- renderUpsetjs({
upsetjs() %>% fromList(listInput) %>% interactiveChart()
})
}
# Run the application
shinyApp(ui = ui, server = server)
see also Shiny Examples
library(dash)
library(dashHtmlComponents)
library(upsetjs)
app <- Dash$new()
app$layout(
htmlDiv(
list(
htmlH1("Hello UpSet.js + Dash"),
upsetjsDash(id = "upset") %>% fromList(list(a = c(1, 2, 3), b = c(2, 3)))
%>% interactiveChart(),
htmlDiv(id = "output")
)
)
)
app$callback(
output = list(id = "output", property = "children"),
params = list(input(id = "upset", property = "selection")),
function(selection) {
sprintf("You selected \"%s\"", selection$name)
}
)
app$run_server()
TODO
see also Dash Examples
The package documentation is located at . An introduction vignette is at .
Besides the main UpSet.js plot also Venn Diagrams for up to five sets are supported. It uses the same input formats and has similar functionality in terms of interaction.
listInput <- list(one = c(1, 2, 3, 5, 7, 8, 11, 12, 13), two = c(1, 2, 4, 5, 10), three = c(1, 5, 6, 7, 8, 9, 10, 12, 13))
upsetjsVennDiagram() %>% fromList(listInput) %>% interactiveChart()
see also Venn.Rmd
Besides the main UpSet.js plot also a variant of a Karnaugh Map is supported. It uses the same input formats and has similar functionality in terms of interaction.
listInput <- list(one = c(1, 2, 3, 5, 7, 8, 11, 12, 13), two = c(1, 2, 4, 5, 10), three = c(1, 5, 6, 7, 8, 9, 10, 12, 13))
upsetjsKarnaughMap() %>% fromList(listInput) %>% interactiveChart()
see also KMap.Rmd
requirements:
npm i -g yarn
yarn install
yarn sdks vscode
yarn lint
yarn build
yarn style:r
yarn lint:r
yarn check:r
yarn build:r
or in R
devtools::load_all()
styler::style_pkg()
lintr::lint_pkg()
devtools::check()
devtools::document()
devtools::build()
The R package website will be automatically updated upon push.
yarn docs:r
or in R
devtools::build_site()
use release-it
yarn release
Rscript -e "devtools::release()"
UpSet.js is a client-only library. Neither the library nor any of its integrations tracks you or transfers your data to any server. Data uploaded in the app is stored only in your browser, using IndexedDB. The Tableau extension can run in a sandbox environment prohibiting any server requests. However, as soon as you export your session within the app to an external service (e.g., Codepen.io), your data will be transferred.
Author: upsetjs
Source Code: https://github.com/upsetjs/upsetjs_r
License: View license
JavaScript tweening engine for easy animations, incorporating Robert Penner's optimised equations.
Update note: in v18 the script you should include has moved from src/Tween.js to dist/tween.umd.js. See the installation section below.
const box = document.createElement('div')
box.style.setProperty('background-color', '#008800')
box.style.setProperty('width', '100px')
box.style.setProperty('height', '100px')
document.body.appendChild(box)
// Setup the animation loop.
function animate(time) {
requestAnimationFrame(animate)
TWEEN.update(time)
}
requestAnimationFrame(animate)
const coords = {x: 0, y: 0} // Start at (0, 0)
const tween = new TWEEN.Tween(coords) // Create a new tween that modifies 'coords'.
.to({x: 300, y: 200}, 1000) // Move to (300, 200) in 1 second.
.easing(TWEEN.Easing.Quadratic.Out) // Use an easing function to make the animation smooth.
.onUpdate(() => {
// Called after tween.js updates 'coords'.
// Move 'box' to the position described by 'coords' with a CSS translation.
box.style.setProperty('transform', `translate(${coords.x}px, ${coords.y}px)`)
})
.start() // Start the tween immediately.
Currently npm is required to build the project.
git clone https://github.com/tweenjs/tween.js
cd tween.js
npm i .
npm run build
This will create some builds in the dist
directory. There are currently four different builds of the library:
You are now able to copy tween.umd.js into your project, then include it with a script tag. This will add TWEEN to the global scope.
<script src="js/tween.umd.js"></script>
require('@tweenjs/tween.js')
You can add tween.js as an npm dependency:
npm i @tweenjs/tween.js@^18
If you are using Node, Webpack, or Browserify, then you can now use the following to include tween.js:
const TWEEN = require('@tweenjs/tween.js')
Examples (each with linked source in the repository): Custom functions, Stop all chained tweens, Yoyo, Relative values, Repeat, Dynamic to, Array interpolation, Video and time, Simplest possible example, Graphs, Black and red, Bars, and hello world.
You need to install npm first (it comes with node.js, so install that first). Then cd to tween.js's directory and run:
npm install
To run the tests run:
npm test
If you want to add any feature or change existing features, you must run the tests to make sure you didn't break anything else. Any pull request (PR) needs to have updated passing tests for feature changes (or new passing tests for new features) in src/tests.ts, otherwise the PR won't be accepted. See contributing for more information.
Maintainers: mikebolt, sole, Joe Pea (@trusktr).
Author: Tweenjs
Source Code: https://github.com/tweenjs/tween.js/
License: View license
R package for interactive network plots using VivaGraph.js. Built with htmlwidgets.
Example Use
Nodes = data.frame(nodeName=c('Homer','Bart','Lisa','Milhouse','Lenny'), group=c(1,1,1,2,3))
Edges = data.frame(source=c(0,1,0,1,0),target=c(1,2,2,3,4))
vivagRaph(nodes=Nodes,edges=Edges)
Installation
devtools::install_github('keeganhines/vivagRaph')
library(vivagRaph)
Author: keeganhines
Source Code: https://github.com/keeganhines/vivagRaph
This Shiny application takes a CSV file of clean data, allows you to inspect the data and compute a Principal Components Analysis, and will return several diagnostic plots and tables. The plots include a tableplot, a correlation matrix, a scree plot, and a biplot of Principal Components.
You can choose which columns to include in the PCA, and which column to use as a grouping variable. You can choose to center and/or scale the data, or not. You can choose which PCs to include on the biplot.
The biplot of PCs is interactive, so you can click on points or select points and inspect the details of those points in a table.
There are two ways to run/install this app.
First, you can run it on your computer like so:
library(shiny)
runGitHub("interactive_pca_explorer", "benmarwick")
Second, you can clone this repo to have the code on your computer, and run the app from there, like so:
# First clone the repository with git. If you have cloned it into
# ~/interactive_pca_explorer, first change your working directory to ~/interactive_pca_explorer, then use runApp() to start the app.
setwd("~/interactive_pca_explorer") # change to match where you downloaded this repo to
runApp() # runs the app
This app depends on several R packages (ggplot2, DT, GGally, psych, Hmisc, MASS, tabplot). The app will check to see if you have them installed, and if you don't, it will try to download and install them for you.
Start on the first (left-most) tab to upload your CSV file, then click on each tab, in order from left to right, to see the results.
Here's what it looks like. Here we have input a CSV file that contains the iris data (included with this app).
Then we can see some simple descriptions of the data, and the raw data at the bottom of the page.
Below we see how we can choose the variables to explore in a correlation matrix. We also have a table that summarizes the correlations and gives p-values.
Below we have a few popular diagnostic tests that many people like to do before doing a PCA. They're not very informative and can be skipped, but people coming from SPSS might feel more comfortable if they can see them here also.
Below are the options for computing the PCA. We can choose which columns to include, and a few details about the PCA function. We are using the prcomp function to compute the PCA.
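Since the app uses prcomp, the underlying computation can be reproduced directly in base R. A minimal sketch on the iris data mentioned above, with centering and scaling turned on:

```r
# PCA on the four numeric iris columns, centered and scaled
pca <- prcomp(iris[, 1:4], center = TRUE, scale. = TRUE)

summary(pca)  # proportion of variance explained by each PC (feeds the scree plot)
head(pca$x)   # PC scores, i.e. the point coordinates drawn on the biplot
```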
Here are the classic PCA plots. First is the scree plot summarizing how important the first few PCs are. Second is the interactive PC biplot. You can see that I've used my mouse to draw a rectangle around a few of the points in the biplot (this is called 'brushing') and in the table below we can see the details of those points in the selected area. We can choose which column to use for grouping (this only affects the colouring of the plot, it doesn't change the PCA results), and we can choose which PCs to show on the plot.
Finally we have some of the raw output from the PCA.
Please open an issue if you find something that doesn't work as expected. Note that this project is released with a Guide to Contributing and a Contributor Code of Conduct. By participating in this project you agree to abide by its terms.
Author: Ben Marwick
Source Code: https://github.com/benmarwick/Interactive_PCA_Explorer
License: MIT license
1661604720
BallR uses the NBA Stats API to visualize every shot taken by a player during an NBA season dating back to 1996.
You can run BallR on your own machine by pasting the following code into the R console (you'll have to install R first):
packages = c("shiny", "tidyverse", "hexbin")
install.packages(packages, repos = "https://cran.rstudio.com/")
library(shiny)
runGitHub("ballr", "toddwschneider")
There are three chart types to choose from: hexagonal, scatter, and heat map.
Hexagonal charts, which are influenced by the work of Kirk Goldsberry at Grantland, use R's hexbin package to bin shots into hexagonal regions. The size and opacity of each hexagon are proportional to the number of shots taken within that region, and the color of each hexagon represents your choice of metric, which can be one of:
There are two sliders to adjust the maximum hexagon sizes, and also the variability of sizes across hexagons, e.g. here's the same Stephen Curry chart but with larger hexagons, and plotting points per shot as the color metric.
Note that the color metrics are not plotted at the individual hexagon level, but at the court region level, e.g. all hexagons on the left side of the court that are 16-24 feet from the basket will have the same color. If BallR were extended to, say, chart all shots for an entire team, then it might make sense to assign colors at the hexagon-level, but for single players that tends to produce excessive noise.
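The binning step described above can be sketched with the hexbin package directly; the coordinates here are simulated stand-ins for real shot locations:

```r
library(hexbin)

set.seed(42)
x <- rnorm(1000)  # simulated shot x-coordinates
y <- rnorm(1000)  # simulated shot y-coordinates

# Bin the 1000 shots into hexagonal cells
hb <- hexbin(x, y, xbins = 30)

# hb@count holds the number of shots per hexagon; in a chart like BallR's,
# hexagon size and opacity would scale with these counts
head(hb@count)
</imports>
```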
Scatter charts are the most straightforward option: they show the location of each individual shot, with color-coding for makes and misses.
Heat map charts use two-dimensional kernel density estimation to show the distribution of shot attempts across the court.
Anecdotally I've found that heat maps often show, unsurprisingly, that most shot attempts are taken in the restricted area near the basket. It might be more interesting to filter out restricted area shots when generating heat maps, for example here's the heat map of Stephen Curry's shot attempts excluding shots from within the restricted area:
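Two-dimensional kernel density estimation of this kind is available in base R's MASS package; a minimal sketch on simulated shot locations:

```r
library(MASS)

set.seed(1)
x <- rnorm(500)  # simulated shot x-coordinates
y <- rnorm(500)  # simulated shot y-coordinates

# Estimate the 2-D density on a 50x50 grid
dens <- kde2d(x, y, n = 50)

# image(dens$x, dens$y, dens$z) would render the heat-map layer
```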
BallR lets you filter shots along a few dimensions (zone, angle, distance, made/missed) by adjusting the inputs in the sidebar. When you apply filters, the shot chart and summary stats update automatically to reflect whatever subset of shots you have chosen.
BallR comes with light and dark color themes, and you can define your own theme in court_themes.R.
The data comes directly from the NBA Stats API via the shotchartdetail endpoint. See fetch_shots.R for the API call itself. The player select input lets you choose any player and season back to 1996, so you can compare, for example, Michael Jordan of 1996 to LeBron James of 2012.
NBA Shots DB is a Rails app that populates a PostgreSQL database with every NBA shot attempt since 1996 (4.5 million shots and growing).
https://github.com/toddwschneider/nba-shots-db
BallR does not interact with NBA Shots DB yet, but that might change in the future.
Posts by Savvas Tjortjoglou and Eduardo Maia about making NBA shot charts in Python and R, respectively, served as useful resources.
todd@toddwschneider.com, or open a GitHub issue
See also the college branch of this repo for men's college basketball shot charts.
Author: Todd W. Schneider
Source Code: https://github.com/toddwschneider/ballr
License: MIT license
1661593080
Shiny-phyloseq is an interactive web application that provides a graphical user interface to the microbiome analysis package for R, called phyloseq. For details about using the phyloseq package directly, see The phyloseq Homepage.
Shiny-phyloseq is provided under a free-of-charge, open-source license (A-GPL3). All we require is that you cite/attribute the following in any work that benefits from this code or application.
McMurdie and Holmes (2014) Shiny-phyloseq: Web Application for Interactive Microbiome Analysis with Provenance Tracking.
Bioinformatics (Oxford, England), 31(2), 282–283. DOI 10.1093/bioinformatics/btu616
McMurdie and Holmes (2013) phyloseq: An R package for reproducible interactive analysis and graphics of microbiome census data.
PLoS ONE 8(4):e61217.
While it is possible to host the server "back end" somewhere so that users only need to point their web browser to a link, it is also possible to launch both the back and front "ends" on your local machine. The server back end will be an R session on your own machine, while the front end is your web browser, pointed to the appropriate local URL.
Simply launching Shiny-phyloseq should also install any missing or outdated packages. Make sure that you have first installed the latest version of R.
The following R code will launch Shiny-phyloseq on most systems.
install.packages("shiny")
shiny::runGitHub("shiny-phyloseq","joey711")
See the Shiny-phyloseq installation instructions, for further details.
Author: joey711
Source Code: https://github.com/joey711/shiny-phyloseq
License: GPL-3.0 license
1660928880
Features
Overview
QRAGadget is a Shiny Gadget for creating interactive QRA (quantitative risk analysis) visualizations. QRAGadget is powered by the excellent leaflet and raster packages. While this gadget was initially intended for those interested in creating QRA visualizations, it may also be more generally applicable to anyone interested in visualizing raster data in an interactive map.
Getting Started
To install QRAGadget in R:
install.packages("QRAGadget")
Or to install the latest developmental version:
devtools::install_github('paulgovan/QRAGadget')
After installation, and if using RStudio (v0.99.878 or later), the gadget will appear in the Addins
dropdown menu. Otherwise, to launch the gadget, simply type:
QRAGadget::QRAGadget()
Example
QRAGadget currently accepts two primary types of raster data: (1) a file upload (in csv format) or (2) an R data.frame
object. In order to explore the gadget, create some dummy data:
library(magrittr)  # provides the %>% pipe
sample <- matrix(runif(36*36), ncol = 36, nrow = 36) %>%
  data.frame()
Then launch the app:
QRAGadget::QRAGadget()
Launching the app brings up the Input/Output page. To find the dummy data, click R Object under Data Type, and then select sample from the dropdown menu.
Choose a name for the output html file. After customizing the map, clicking Done will create a standalone html file in the current working directory (Be sure not to save over a previously created map file!). Click Cancel any time to start over.
To bookmark the app at any time, click the Bookmark button, which will create a unique url for the current state of the app.
To format the raster image, click the Raster icon. Here there are a number of options for specifying the extents of the raster image (XMIN, XMAX, YMIN, and YMAX) as well as the projection of the raster layer. It is very important that the raster layer be tagged with the correct projected coordinate reference system.
To specify the bins for the color palette, click Number to select the total number of bins or Cuts to select both the number and the actual cut values for each bin.
Finally, there is an option to disaggregate the raster layer and create a new one with a higher resolution (smaller cells) while also locally interpolating between the new cell values (smoothed cells). To disaggregate the raster layer, enter the number of cells to disaggregate.
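The disaggregation step described above maps onto the raster package's disaggregate function; a minimal sketch on a small dummy grid (the 6x6 size is an assumption for illustration):

```r
library(raster)

# A small 6x6 raster of random values standing in for QRA output
r <- raster(matrix(runif(36), nrow = 6, ncol = 6))

# Split each cell into 5x5 smaller cells, with bilinear interpolation
# between the new cell values (the "smoothed cells" the text describes)
r5 <- disaggregate(r, fact = 5, method = "bilinear")
```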
For this example, use the default values for XMIN, XMAX, YMIN, and YMAX as well as the given projection, but enter 5 as the number of cells to disaggregate:
To view the interactive map, click the Map icon. Click the Reset button at any time in order to reset the extents of the map.
The Preferences tab has a number of options for customizing the map:
To try out some of these options, select the PuOr Color Palette, the Esri.WorldImagery Map Tile, and move the Control Position over to the bottomleft:
This should result in the following interactive map:
Source Code
QRAGadget is an open source project, and the source code is available at https://github.com/paulgovan/QRAGadget
Issues
This project is in its very early stages. Please let us know if there are things you would like to see (or things you don't like!) by opening up an issue using the GitHub issue tracker at https://github.com/paulgovan/QRAGadget/issues
Contributions
Contributions are welcome; please send a pull request.
Author: Paul Govan
Source Code: https://github.com/paulgovan/QRAGadget
License: Apache-2.0 license
1660847100
Run using IPython and then type . at an empty julia> prompt, or run IPython.start_ipython(). You can switch back to the Julia REPL with the backspace or ctrl-h key (like other REPL modes). Re-entering IPython keeps the previous state. Use the pre-defined Main object to access the Julia namespace from IPython. Use the py"..." string macro to access the Python namespace from Julia.
Note: the first launch of IPython may be slow.
If the simple Main.eval("...") and Main.<name> accessors are not enough, PyJulia is a nice way to access Julia objects from Python. For example, you can import any Julia package from Python:
>>> from julia import Base
>>> Base.banner()
For more advanced/experimental Julia-(I)Python integration, see ipyjulia_hacks.
If you want the IPython prompt to look like a part of the Julia prompt, add the following snippet to ~/.ipython/profile_default/ipython_config.py:
try:
from ipython_jl.tools import JuliaModePrompt
except ImportError:
pass
else:
c.TerminalInteractiveShell.prompts_class = JuliaModePrompt
The prompt would then look like ipy 1> instead of In [1]:. It also removes Out[1]. Note that the above setting does not change your normal IPython prompts.
Author: tkf
Source Code: https://github.com/tkf/IPython.jl
License: View license