Julia interface to WebGL using Three-js Custom Elements & Patchwork.jl


A Julia module to render graphical objects, especially 3-D objects, using the ThreeJS abstraction over WebGL. Outputs Patchwork Elems of three-js custom elements. Meant to be used to help packages like Compose3D render 3D output.



Where can these be used?

This can be used in IJulia and Escher to embed 3D graphics.


WebGL lets you interact with the GPU from a browser. As long as you have a modern browser that supports WebGL (check this link to see if yours does!), the output of this package will just work.




Running Pkg.build("ThreeJS") fetches and installs the three-js webcomponents. This will be done automatically if you install ThreeJS.jl using Pkg.add("ThreeJS").

However, if you clone ThreeJS.jl (with Pkg.clone or otherwise), then these webcomponents must be installed manually into assets/bower_components. This is done to allow simultaneous development of both repositories.


API documentation can be found here.


For use in IJulia notebooks, using ThreeJS will set up everything including static files.

NOTE: If you are restarting the kernel and doing using ThreeJS again, please delete the cell where you ran using ThreeJS and then reload the page.


Adding push!(window.assets,("ThreeJS","threejs")) to your Escher code will set up the static files, and you can do 3D graphics in Escher!

General web servers

To use in a web server, you will need to serve the asset files found in the assets/ directory. Then add an HTML import of the three-js.html file in the assets/bower_components/three-js directory. This is done by adding the following line to your HTML file.

<link rel="import" href="assets/bower_components/three-js/three-js.html">

How to create a scene?

For rendering Three-JS elements, all tags should be nested in a three-js tag. This can be done by using the initscene function. An outer div to put this in is also required and can be created by using the outerdiv function.

The code snippet below should get a scene initialized.

using ThreeJS
outerdiv() << initscene()

By default, a scene of 1000px x 562px is created. Support to change this will be added soon.

Creating meshes

In Three-JS, meshes are objects that can be drawn in the scene. Creating one requires a geometry and a material. The mesh determines properties such as the position at which the object is drawn.

A mesh can be created using the mesh function taking the coordinates (x,y,z) as its arguments.

A geometry and a material element should be nested inside this mesh.


Geometries hold all details necessary to describe a 3D model. These can be thought of as the shapes we want to display.

ThreeJS.jl provides support to render the following geometry primitives:

  • Boxes - box(width, height, depth)
  • Spheres - sphere(radius)
  • Pyramids - pyramid(base, height)
  • Cylinders - cylinder(topradius, bottomradius, height)
  • Tori - torus(radius, tuberadius)
  • Parametric Surfaces - parametric(slices, stacks, xrange, yrange, function)
  • Dodecahedron - dodecahedron(radius)
  • Icosahedron - icosahedron(radius)
  • Octahedron - octahedron(radius)
  • Tetrahedron - tetrahedron(radius)
  • Planes - plane(width, height)

These functions will return the appropriate geometry tags that are to be nested inside a mesh along with a material to render.

Custom Geometries

The geometry function is able to render custom geometries, which are specified by the vertices and the faces.


Materials decide how the model responds to light, what color it is, and similar surface properties.

A material tag is created by using the material function. Properties are to be passed as a Dict to this function.

Available properties are:

  • color - Can be any CSS color value.
  • kind - Can be lambert, basic, phong, normal, or texture (for texture mapping)
  • texture - URL of image to be mapped as texture. Will be applied only if kind is set to texture.
  • wireframe - true or false
  • hidden - true or false
  • transparent - true or false. Set to true to get proper rendering for transparent objects.
  • opacity - Number between 0.0 and 1.0 (fully opaque).

Some helper functions to generate these key-value pairs are provided in src/properties.jl.

Putting them together

mesh(0.0, 0.0, 0.0) <<
    [box(1.0, 1.0, 1.0), material(Dict(:kind => "basic", :color => "red"))]

will create a cube of size 1.0 of red color and with the basic material.


Lines can be drawn by specifying the vertices of the line in the order to be joined. Lines can either be of "strip" or "pieces" kinds, which decide how the vertices should be joined. "strip" lines join all vertices, while "pieces" only joins the first and second, third and fourth and so on. Colors for the vertices of the lines can also be specified.
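The two kinds differ only in how vertices are paired into segments. A small JavaScript sketch (a hypothetical helper, not part of ThreeJS.jl) makes the distinction concrete:

```javascript
// Return the segments (as vertex index pairs) a line kind produces.
function lineSegments(kind, vertexCount) {
  const segments = [];
  if (kind === "strip") {
    // "strip" joins every consecutive pair: 0-1, 1-2, 2-3, ...
    for (let i = 0; i + 1 < vertexCount; i++) segments.push([i, i + 1]);
  } else if (kind === "pieces") {
    // "pieces" joins disjoint pairs: 0-1, 2-3, 4-5, ...
    for (let i = 0; i + 1 < vertexCount; i += 2) segments.push([i, i + 1]);
  }
  return segments;
}
```

So four vertices yield three segments as a "strip" but only two as "pieces".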

Lines are also meshes and have the properties of a mesh too, like position and rotation. Like meshes, they are children of the scene.

Line Materials

Lines also require a material to decide properties of a line. The linematerial function can be used to do this and specify some properties for the line. The linematerial should be a child of the line element.

Drawing lines

The line function can be used to draw lines.

line([(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)]) <<
    [linematerial(Dict(:color => "red"))]  # a linematerial child specifying line properties

Mesh grids

Drawing mesh grids can be achieved by using the meshlines function. It creates a set of lines to form the grid and assigns colors to the vertices based on the z values.

If you are looking for a 2D grid, use the grid function. It creates a grid on the XY plane which can then be rotated as required.


No 3D scene can be properly displayed without a camera to view from. ThreeJS.jl provides support for a Perspective Camera view using the camera function.

This sets the position of the camera, along with properties like the near plane, far plane, fov (field of view, in degrees), and aspect ratio.

The camera tag should be a child of the scene.


ThreeJS.jl provides support for 3 kinds of lighting.

  • Ambient - ambientlight(color)
  • Point - pointlight(x, y, z; color, intensity, distance)
  • Spot - spotlight(x, y, z; color, intensity, distance, angle, exponent, shadow)

These tags should also be a child of the scene.


By default, ThreeJS adds TrackballControls to every scene drawn. This lets you interact with the scene by using the trackpad or mouse to rotate, pan and zoom.


You can use the reactive functionality provided by Escher to create Signals of the 3D graphic elements produced. These let you create graphics that can be manipulated with UI elements like sliders. Try launching escher --serve (if you have Escher installed) in the examples/ directory and heading to localhost:5555/box.jl in the browser. You can see a box whose width, depth, height and rotation about each axis can be set, and the box will update accordingly!

Currently, this functionality does not work in IJulia notebooks. Hopefully, this will be fixed soon so that you can use Interact.jl (https://github.com/JuliaLang/Interact.jl) to do the same in IJulia notebooks.


You can also do animations by using Reactive signals. See examples/rotatingcube.jl as an example. It is implemented in Escher, so running an Escher server from that directory and heading to localhost:5555/rotatingcube.jl should give you a cube which is rotating!

NOTE: Adding new objects to a scene will force a redraw of the scene, resetting the camera.


using ThreeJS
outerdiv() <<
    (initscene() <<
        [
            mesh(0.0, 0.0, 0.0) <<
                [box(1.0, 1.0, 1.0), material(Dict(:kind => "lambert", :color => "red"))],
            pointlight(3.0, 3.0, 3.0),
            camera(0.0, 0.0, 10.0)
        ])

Running the above in an IJulia notebook should draw a red cube, which is illuminated by a light from a corner.

For Escher, after the script above is run, the following code should give the same result.

using ThreeJS
using Compat

main(window) = begin
    push!(window.assets, ("ThreeJS", "threejs"))
    outerdiv() <<
        (initscene() <<
            [
                mesh(0.0, 0.0, 0.0) <<
                    [ThreeJS.box(1.0, 1.0, 1.0), material(Dict(:kind => "lambert", :color => "red"))],
                pointlight(3.0, 3.0, 3.0),
                camera(0.0, 0.0, 10.0)
            ])
end

Download Details:

Author: Rohitvarkey
Source Code: https://github.com/rohitvarkey/ThreeJS.jl 
License: View license

#julia #interface #webgl 



Pix-plot: A WebGL Viewer for UMAP Or TSNE-clustered Images


This repository contains code that can be used to visualize tens of thousands of images in a two-dimensional projection within which similar images are clustered together. The image analysis uses Tensorflow's Inception bindings, and the visualization layer uses a custom WebGL viewer.

See the change log for recent updates.

App preview


To install the Python dependencies, we recommend you install Anaconda and then create a conda environment with a Python 3.7 runtime:

conda create --name=3.7 python=3.7
source activate 3.7

Then you can install the dependencies by running:

pip uninstall pixplot
pip install https://github.com/yaledhlab/pix-plot/archive/master.zip

Please note that you will need to use Python 3.6 or Python 3.7 to install and use this package. The HTML viewer also requires a WebGL-enabled browser.


If you have a WebGL-enabled browser and a directory full of images to process, you can prepare the data for the viewer by installing the dependencies above then running:

pixplot --images "path/to/images/*.jpg"

To see the results of this process, you can start a web server by running:

# for python 3.x
python -m http.server 5000

# for python 2.x
python -m SimpleHTTPServer 5000

The visualization will then be available at http://localhost:5000/output.

Sample Data

To acquire some sample data with which to build a plot, feel free to use some data prepared by Yale's DHLab:

pip install image_datasets

Then in a Python script:

import image_datasets
image_datasets.oslomini.download()

The .download() command will make a directory named datasets in your current working directory. That datasets directory will contain a subdirectory named 'oslomini', which contains a directory of images and another directory with a CSV file of image metadata. Using that data, we can next build a plot:

pixplot --images "datasets/oslomini/images/*" --metadata "datasets/oslomini/metadata/metadata.csv"

Creating Massive Plots

If you need to plot more than 100,000 images but don't have an expensive graphics card with which to visualize huge WebGL displays, you might want to specify a smaller "cell_size" parameter when building your plot. The "cell_size" argument controls how large each image is in the atlas files; smaller values require fewer textures to be rendered, which decreases the GPU RAM required to view a plot:

pixplot --images "path/to/images/*.jpg" --cell_size 10

Controlling UMAP Layout

The UMAP algorithm is particularly sensitive to three hyperparameters:

  • --min_dist: determines the minimum distance between points in the embedding
  • --n_neighbors: determines the tradeoff between local and global clusters
  • --metric: determines the distance metric to use when positioning points

UMAP's creator, Leland McInnes, has written up a helpful overview of these hyperparameters. To specify the value for one or more of these hyperparameters when building a plot, one may use the flags above, e.g.:

pixplot --images "path/to/images/*.jpg" --n_neighbors 2

Curating Automatic Hotspots

PixPlot uses Hierarchical density-based spatial clustering of applications with noise, a refinement of the earlier DBSCAN algorithm, to find hotspots in the visualization. You may be interested in consulting this explanation of how HDBSCAN works.

Adding Metadata

If you have metadata associated with each of your images, you can pass in that metadata when running the data processing script. Doing so will allow the PixPlot viewer to display the metadata associated with an image when a user clicks on that image.

To specify the metadata for your image collection, you can add --metadata=path/to/metadata.csv to the command you use to call the processing script. For example, you might specify:

pixplot --images "path/to/images/*.jpg" --metadata "path/to/metadata.csv"

Metadata should be in a comma-separated value file, should contain one row for each input image, and should contain headers specifying the column order. Here is a sample metadata file:

filename,category,tags,description,permalink,year
bees.jpg,yellow,a|b|c,bees' knees,https://...,1776
cats.jpg,dangerous,b|c|d,cats' pajamas,https://...,1972

The following column labels are accepted:

  • filename - the filename of the image
  • category - a categorical label for the image
  • tags - a pipe-delimited list of categorical tags for the image
  • description - a plaintext description of the image's contents
  • permalink - a link to the image hosted on another domain
  • year - a year timestamp for the image (should be an integer)
  • label - a categorical label used for supervised UMAP projection
  • lat - the latitudinal position of the image
  • lng - the longitudinal position of the image
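Putting the columns together, a metadata file is just a header row plus one comma-separated row per image. As a quick illustration, here is a small JavaScript sketch (a hypothetical helper, not part of PixPlot) that assembles such a file from row objects:

```javascript
// Build a PixPlot-style metadata CSV from an array of row objects.
const HEADERS = ["filename", "category", "tags", "description", "permalink", "year"];

function toMetadataCsv(rows) {
  const lines = [HEADERS.join(",")];
  for (const row of rows) {
    // Missing fields become empty cells; tags are pipe-delimited per the spec.
    lines.push(HEADERS.map((h) => row[h] ?? "").join(","));
  }
  return lines.join("\n");
}

const csv = toMetadataCsv([
  { filename: "bees.jpg", category: "yellow", tags: "a|b|c", year: 1776 },
]);
```

Note that fields containing commas would need CSV quoting, which this sketch omits for brevity.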

IIIF Images

If you would like to process images that are hosted on an IIIF server, you can specify a newline-delimited list of IIIF image manifests as the --images argument. For example, the following could be saved as manifest.txt:


One could then specify these images as input by running pixplot --images manifest.txt --n_clusters 2

Demonstrations (Developed with PixPlot 2.0 codebase)

Link | Image Count | Collection Info | Browse Images | Download for PixPlot
NewsPlot: 1910-1912 | 24,026 | George Grantham Bain Collection | News in the 1910s | Images, Metadata
Bildefelt i Oslo | 31,097 | oslobilder | Advanced search, 1860-1924 | Images, Metadata


The DHLab would like to thank Cyril Diagne and Nicolas Barradeau, lead developers of the spectacular Google Arts Experiments TSNE viewer, for generously sharing ideas on optimization techniques used in this viewer, and Lillianna Marie for naming this viewer PixPlot.

Download Details:

Author: YaleDHLab
Source Code: https://github.com/YaleDHLab/pix-plot 
License: MIT license

#javascript #webgl #datavisualization 



Genome-spy: A GPU-accelerated toolkit & Visualization Grammar


GenomeSpy is a visualization toolkit for genomic (and other) data. It has a Vega-Lite inspired visualization grammar and high-performance, WebGL-powered graphics rendering.

The software is still work in progress. Documentation and examples for the current version can be found at https://genomespy.app/



GenomeSpy is split into several packages, two of which are the most important:


The core library provides the visualization grammar and a WebGL-powered rendering engine.


The app builds upon the core, extending the visualization grammar with support for faceting multiple (up to thousands of) patient samples. It provides a user interface for interactive analysis of the samples, which can be filtered, sorted, and grouped flexibly. Session handling with provenance, URL hashes, and bookmarks is included.


Bootstrapping and running

  1. git clone git@github.com:genome-spy/genome-spy.git
  2. cd genome-spy
  3. npm install (use npm7!)
  4. npm start (starts the App)

The packages/core/examples directory contains some random view specifications that can be accessed through urls like http://localhost:8080/?spec=examples/first.json.

The packages/core/private/ directory is in .gitignore and served by the development server: http://localhost:8080/?spec=private/foo.json. Use it for experiments that should not go into version control.

If you want to use or develop the core library, launch a single-page app using: npm -w @genome-spy/core run dev

Download Details:

Author: Genome-spy
Source Code: https://github.com/genome-spy/genome-spy 
License: BSD-2-Clause license

#javascript #visualization #webgl #datavisualization 



TWGL.js: A Tiny WebGL Helper Library

TWGL: A Tiny WebGL helper Library [rhymes with wiggle]

This library's sole purpose is to make using the WebGL API less verbose.


If you want to get stuff done use three.js. If you want to do stuff low-level with WebGL consider using TWGL.

The tiniest example

Not including the shaders (a simple quad shader), here's the entire code:

<canvas id="c"></canvas>
<script src="../dist/5.x/twgl-full.min.js"></script>
<script>
  const gl = document.getElementById("c").getContext("webgl");
  const programInfo = twgl.createProgramInfo(gl, ["vs", "fs"]);

  const arrays = {
    position: [-1, -1, 0, 1, -1, 0, -1, 1, 0, -1, 1, 0, 1, -1, 0, 1, 1, 0],
  };
  const bufferInfo = twgl.createBufferInfoFromArrays(gl, arrays);

  function render(time) {
    twgl.resizeCanvasToDisplaySize(gl.canvas);
    gl.viewport(0, 0, gl.canvas.width, gl.canvas.height);

    const uniforms = {
      time: time * 0.001,
      resolution: [gl.canvas.width, gl.canvas.height],
    };

    gl.useProgram(programInfo.program);
    twgl.setBuffersAndAttributes(gl, programInfo, bufferInfo);
    twgl.setUniforms(programInfo, uniforms);
    twgl.drawBufferInfo(gl, bufferInfo);

    requestAnimationFrame(render);
  }
  requestAnimationFrame(render);
</script>


And here it is live.

Why? What? How?

WebGL is a very verbose API. Setting up shaders, buffers, attributes and uniforms takes a lot of code. A simple lit cube in WebGL might easily take over 60 calls into WebGL.

At its core there are really only a few main functions

  • twgl.createProgramInfo compiles a shader and creates setters for attribs and uniforms
  • twgl.createBufferInfoFromArrays creates buffers and attribute settings
  • twgl.setBuffersAndAttributes binds buffers and sets attributes
  • twgl.setUniforms sets the uniforms
  • twgl.createTextures creates textures of various sorts
  • twgl.createFramebufferInfo creates a framebuffer and attachments.

There are a few extra helpers and lower-level functions if you need them, but those six functions are the core of TWGL.

Compare the TWGL vs WebGL code for a point lit cube.

Compiling a Shader and looking up locations


const programInfo = twgl.createProgramInfo(gl, ["vs", "fs"]);


// Note: I'm conceding that you'll likely already have the 30 lines of
// code for compiling GLSL
const program = twgl.createProgramFromScripts(gl, ["vs", "fs"]);

const u_lightWorldPosLoc = gl.getUniformLocation(program, "u_lightWorldPos");
const u_lightColorLoc = gl.getUniformLocation(program, "u_lightColor");
const u_ambientLoc = gl.getUniformLocation(program, "u_ambient");
const u_specularLoc = gl.getUniformLocation(program, "u_specular");
const u_shininessLoc = gl.getUniformLocation(program, "u_shininess");
const u_specularFactorLoc = gl.getUniformLocation(program, "u_specularFactor");
const u_diffuseLoc = gl.getUniformLocation(program, "u_diffuse");
const u_worldLoc = gl.getUniformLocation(program, "u_world");
const u_worldInverseTransposeLoc = gl.getUniformLocation(program, "u_worldInverseTranspose");
const u_worldViewProjectionLoc = gl.getUniformLocation(program, "u_worldViewProjection");
const u_viewInverseLoc = gl.getUniformLocation(program, "u_viewInverse");

const positionLoc = gl.getAttribLocation(program, "a_position");
const normalLoc = gl.getAttribLocation(program, "a_normal");
const texcoordLoc = gl.getAttribLocation(program, "a_texcoord");

Creating Buffers for a Cube


const arrays = {
  position: [1,1,-1,1,1,1,1,-1,1,1,-1,-1,-1,1,1,-1,1,-1,-1,-1,-1,-1,-1,1,-1,1,1,1,1,1,1,1,-1,-1,1,-1,-1,-1,-1,1,-1,-1,1,-1,1,-1,-1,1,1,1,1,-1,1,1,-1,-1,1,1,-1,1,-1,1,-1,1,1,-1,1,-1,-1,-1,-1,-1],
  normal:   [1,0,0,1,0,0,1,0,0,1,0,0,-1,0,0,-1,0,0,-1,0,0,-1,0,0,0,1,0,0,1,0,0,1,0,0,1,0,0,-1,0,0,-1,0,0,-1,0,0,-1,0,0,0,1,0,0,1,0,0,1,0,0,1,0,0,-1,0,0,-1,0,0,-1,0,0,-1],
  texcoord: [1,0,0,0,0,1,1,1,1,0,0,0,0,1,1,1,1,0,0,0,0,1,1,1,1,0,0,0,0,1,1,1,1,0,0,0,0,1,1,1,1,0,0,0,0,1,1,1],
  indices:  [0,1,2,0,2,3,4,5,6,4,6,7,8,9,10,8,10,11,12,13,14,12,14,15,16,17,18,16,18,19,20,21,22,20,22,23],
};
const bufferInfo = twgl.createBufferInfoFromArrays(gl, arrays);


const positions = [1,1,-1,1,1,1,1,-1,1,1,-1,-1,-1,1,1,-1,1,-1,-1,-1,-1,-1,-1,1,-1,1,1,1,1,1,1,1,-1,-1,1,-1,-1,-1,-1,1,-1,-1,1,-1,1,-1,-1,1,1,1,1,-1,1,1,-1,-1,1,1,-1,1,-1,1,-1,1,1,-1,1,-1,-1,-1,-1,-1];
const normals   = [1,0,0,1,0,0,1,0,0,1,0,0,-1,0,0,-1,0,0,-1,0,0,-1,0,0,0,1,0,0,1,0,0,1,0,0,1,0,0,-1,0,0,-1,0,0,-1,0,0,-1,0,0,0,1,0,0,1,0,0,1,0,0,1,0,0,-1,0,0,-1,0,0,-1,0,0,-1];
const texcoords = [1,0,0,0,0,1,1,1,1,0,0,0,0,1,1,1,1,0,0,0,0,1,1,1,1,0,0,0,0,1,1,1,1,0,0,0,0,1,1,1,1,0,0,0,0,1,1,1];
const indices   = [0,1,2,0,2,3,4,5,6,4,6,7,8,9,10,8,10,11,12,13,14,12,14,15,16,17,18,16,18,19,20,21,22,20,22,23];

const positionBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(positions), gl.STATIC_DRAW);
const normalBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, normalBuffer);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(normals), gl.STATIC_DRAW);
const texcoordBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, texcoordBuffer);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(texcoords), gl.STATIC_DRAW);
const indicesBuffer = gl.createBuffer();
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indicesBuffer);
gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, new Uint16Array(indices), gl.STATIC_DRAW);
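As an aside, both listings describe the cube with 24 vertices (four per face, so each face can carry its own normals and texture coordinates) and 36 indices (12 triangles). A small, hypothetical JavaScript check of that layout invariant:

```javascript
// Sanity-check flat attribute arrays describing an indexed triangle mesh.
function checkMesh({ positions, normals, texcoords, indices }) {
  const vertexCount = positions.length / 3;                    // 3 floats per position
  if (normals.length !== vertexCount * 3) throw new Error("normal count mismatch");
  if (texcoords.length !== vertexCount * 2) throw new Error("texcoord count mismatch");
  if (indices.length % 3 !== 0) throw new Error("indices must form whole triangles");
  if (Math.max(...indices) >= vertexCount) throw new Error("index out of range");
  return { vertexCount, triangleCount: indices.length / 3 };
}
```

For the cube arrays above this gives 72 / 3 = 24 vertices and 36 / 3 = 12 triangles; a cube needs 24 vertices rather than 8 because the three faces meeting at each corner require different normals.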

Setting Attributes and Indices for a Cube


twgl.setBuffersAndAttributes(gl, programInfo, bufferInfo);


gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
gl.vertexAttribPointer(positionLoc, 3, gl.FLOAT, false, 0, 0);
gl.bindBuffer(gl.ARRAY_BUFFER, normalBuffer);
gl.vertexAttribPointer(normalLoc, 3, gl.FLOAT, false, 0, 0);
gl.bindBuffer(gl.ARRAY_BUFFER, texcoordBuffer);
gl.vertexAttribPointer(texcoordLoc, 2, gl.FLOAT, false, 0, 0);
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indicesBuffer);

Setting Uniforms for a Lit Cube


// At Init time
const uniforms = {
  u_lightWorldPos: [1, 8, -10],
  u_lightColor: [1, 0.8, 0.8, 1],
  u_ambient: [0, 0, 0, 1],
  u_specular: [1, 1, 1, 1],
  u_shininess: 50,
  u_specularFactor: 1,
  u_diffuse: tex,
};

// At render time
uniforms.u_viewInverse = camera;
uniforms.u_world = world;
uniforms.u_worldInverseTranspose = m4.transpose(m4.inverse(world));
uniforms.u_worldViewProjection = m4.multiply(viewProjection, world);

twgl.setUniforms(programInfo, uniforms);


// At Init time
const u_lightWorldPos = [1, 8, -10];
const u_lightColor = [1, 0.8, 0.8, 1];
const u_ambient = [0, 0, 0, 1];
const u_specular = [1, 1, 1, 1];
const u_shininess = 50;
const u_specularFactor = 1;
const u_diffuse = 0;

// At render time
gl.uniform3fv(u_lightWorldPosLoc, u_lightWorldPos);
gl.uniform4fv(u_lightColorLoc, u_lightColor);
gl.uniform4fv(u_ambientLoc, u_ambient);
gl.uniform4fv(u_specularLoc, u_specular);
gl.uniform1f(u_shininessLoc, u_shininess);
gl.uniform1f(u_specularFactorLoc, u_specularFactor);
gl.uniform1i(u_diffuseLoc, u_diffuse);
gl.uniformMatrix4fv(u_viewInverseLoc, false, camera);
gl.uniformMatrix4fv(u_worldLoc, false, world);
gl.uniformMatrix4fv(u_worldInverseTransposeLoc, false, m4.transpose(m4.inverse(world)));
gl.uniformMatrix4fv(u_worldViewProjectionLoc, false, m4.multiply(viewProjection, world));

Loading / Setting up textures


const textures = twgl.createTextures(gl, {
  // a power of 2 image
  hftIcon: { src: "images/hft-icon-16.png", mag: gl.NEAREST },
  // a non-power of 2 image
  clover: { src: "images/clover.jpg" },
  // From a canvas
  fromCanvas: { src: ctx.canvas },
  // A cubemap from 6 images
  yokohama: {
    target: gl.TEXTURE_CUBE_MAP,
    src: [ /* six face image URLs */ ],
  },
  // A cubemap from 1 image (can be 1x6, 2x3, 3x2, 6x1)
  goldengate: {
    target: gl.TEXTURE_CUBE_MAP,
    src: 'images/goldengate.jpg',
  },
  // A 2x2 pixel texture from a JavaScript array
  checker: {
    mag: gl.NEAREST,
    min: gl.LINEAR,
    src: [ /* 16 RGBA byte values */ ],
  },
  // a 1x8 pixel texture from a typed array.
  stripe: {
    mag: gl.NEAREST,
    min: gl.LINEAR,
    format: gl.LUMINANCE,
    src: new Uint8Array([ /* 8 luminance values */ ]),
    width: 1,
  },
});


// Let's assume I already loaded all the images

// a power of 2 image
const hftIconTex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, hftIconTex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, hftIconImg);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
// a non-power of 2 image
const cloverTex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, cloverTex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, cloverImg);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
// From a canvas
const fromCanvasTex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, fromCanvasTex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, ctx.canvas);
// A cubemap from 6 images
const yokohamaTex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_CUBE_MAP, yokohamaTex);
gl.texImage2D(gl.TEXTURE_CUBE_MAP_POSITIVE_X, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, posXImg);
gl.texImage2D(gl.TEXTURE_CUBE_MAP_NEGATIVE_X, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, negXImg);
gl.texImage2D(gl.TEXTURE_CUBE_MAP_POSITIVE_Y, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, posYImg);
gl.texImage2D(gl.TEXTURE_CUBE_MAP_NEGATIVE_Y, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, negYImg);
gl.texImage2D(gl.TEXTURE_CUBE_MAP_POSITIVE_Z, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, posZImg);
gl.texImage2D(gl.TEXTURE_CUBE_MAP_NEGATIVE_Z, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, negZImg);
// A cubemap from 1 image (can be 1x6, 2x3, 3x2, 6x1)
const goldengateTex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_CUBE_MAP, goldengateTex);
const size = goldengateImg.width / 3;  // assume it's a 3x2 texture
const slices = [0, 0, 1, 0, 2, 0, 0, 1, 1, 1, 2, 1];
const tempCtx = document.createElement("canvas").getContext("2d");
tempCtx.canvas.width = size;
tempCtx.canvas.height = size;
for (let ii = 0; ii < 6; ++ii) {
  const xOffset = slices[ii * 2 + 0] * size;
  const yOffset = slices[ii * 2 + 1] * size;
  tempCtx.drawImage(goldengateImg, xOffset, yOffset, size, size, 0, 0, size, size);
  gl.texImage2D(gl.TEXTURE_CUBE_MAP_POSITIVE_X + ii, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, tempCtx.canvas);
}
// A 2x2 pixel texture from a JavaScript array
const checkerTex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, checkerTex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 2, 2, 0, gl.RGBA, gl.UNSIGNED_BYTE, new Uint8Array([
  /* 16 RGBA byte values */
]));
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
// a 1x8 pixel texture from a typed array.
const stripeTex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, stripeTex);
gl.pixelStorei(gl.UNPACK_ALIGNMENT, 1);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.LUMINANCE, 1, 8, 0, gl.LUMINANCE, gl.UNSIGNED_BYTE, new Uint8Array([
  /* 8 luminance values */
]));
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);

Creating Framebuffers and attachments


const attachments = [
  { format: RGBA, type: UNSIGNED_BYTE, min: LINEAR, wrap: CLAMP_TO_EDGE },
  { format: DEPTH_STENCIL, },
];
const fbi = twgl.createFramebufferInfo(gl, attachments);


const fb = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
const tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.drawingBufferWidth, gl.drawingBufferHeight, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, tex, 0);
const rb = gl.createRenderbuffer();
gl.bindRenderbuffer(gl.RENDERBUFFER, rb);
gl.renderbufferStorage(gl.RENDERBUFFER, gl.DEPTH_STENCIL, gl.drawingBufferWidth, gl.drawingBufferHeight);
gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_STENCIL_ATTACHMENT, gl.RENDERBUFFER, rb);

Setting uniform and uniformblock structures and arrays

Given an array of GLSL structures like this

struct Light {
  float intensity;
  float shininess;
  vec4 color;
};
uniform Light lights[2];


const progInfo = twgl.createProgramInfo(gl, [vs, fs]);
twgl.setUniforms(progInfo, {
  lights: [
    { intensity: 5.0, shininess: 100, color: [1, 0, 0, 1] },
    { intensity: 2.0, shininess:  50, color: [0, 0, 1, 1] },
  ],
});


// assuming we already compiled and linked the program
const light0IntensityLoc = gl.getUniformLocation(program, 'lights[0].intensity');
const light0ShininessLoc = gl.getUniformLocation(program, 'lights[0].shininess');
const light0ColorLoc = gl.getUniformLocation(program, 'lights[0].color');
const light1IntensityLoc = gl.getUniformLocation(program, 'lights[1].intensity');
const light1ShininessLoc = gl.getUniformLocation(program, 'lights[1].shininess');
const light1ColorLoc = gl.getUniformLocation(program, 'lights[1].color');
gl.uniform1f(light0IntensityLoc, 5.0);
gl.uniform1f(light0ShininessLoc, 100);
gl.uniform4fv(light0ColorLoc, [1, 0, 0, 1]);
gl.uniform1f(light1IntensityLoc, 2.0);
gl.uniform1f(light1ShininessLoc, 50);
gl.uniform4fv(light1ColorLoc, [0, 0, 1, 1]);

If you just want to set the 2nd light in TWGL you can do this

const progInfo = twgl.createProgramInfo(gl, [vs, fs]);
twgl.setUniforms(progInfo, {
  'lights[1]': { intensity: 5.0, shininess: 100, color: [1, 0, 0, 1] },
});


TWGL example vs WebGL example


WebGL 2 Examples

OffscreenCanvas Example

ES6 module support

AMD support

CommonJS / Browserify support

Other Features

Includes some optional 3d math functions (full version)

You are welcome to use any math library as long as it stores matrices as flat Float32Array or JavaScript arrays.
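A compatible matrix is simply 16 numbers in a flat array, column-major as WebGL's uniformMatrix4fv expects. As a minimal sketch (hypothetical helpers, not TWGL's actual m4 implementation), transpose and multiply over that layout look like:

```javascript
// 4x4 matrices as flat Float32Arrays of 16 numbers, column-major:
// element (row, col) lives at index col * 4 + row.
function transpose(m) {
  const t = new Float32Array(16);
  for (let col = 0; col < 4; col++)
    for (let row = 0; row < 4; row++)
      t[row * 4 + col] = m[col * 4 + row];
  return t;
}

// c = a * b
function multiply(a, b) {
  const c = new Float32Array(16);
  for (let col = 0; col < 4; col++) {
    for (let row = 0; row < 4; row++) {
      let sum = 0;
      for (let k = 0; k < 4; k++) sum += a[k * 4 + row] * b[col * 4 + k];
      c[col * 4 + row] = sum;
    }
  }
  return c;
}

const identity = new Float32Array([1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1]);
```

Any library using this storage convention can feed its matrices straight into TWGL's uniform setters.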

Includes some optional primitive generators (full version)

planes, cubes, spheres, ... Just to help get started


See the examples. Otherwise there's a few different versions

  • twgl-full.module.js the es6 module version
  • twgl-full.min.js the minified full version
  • twgl-full.js the concatenated full version
  • twgl.min.js the minimum version (no 3d math, no primitives)
  • twgl.js the concatenated minimum version (no 3d math, no primitives)


from github


from bower

bower install twgl.js

from npm

npm install twgl.js


npm install twgl-base.js

from git

git clone https://github.com/greggman/twgl.js.git

Rationale and other chit-chat

TWGL is an attempt to make WebGL simpler by providing a few tiny helper functions that make it much less verbose and remove the tedium. TWGL is NOT trying to help with the complexity of managing shaders and writing GLSL. Nor is it a 3D library like three.js. It's just trying to make WebGL less verbose.

TWGL can be considered a spiritual successor to TDL. Whereas TDL created several classes that wrapped WebGL, TWGL tries not to wrap anything. In fact, you can manually create nearly all TWGL data structures.

For example the function setAttributes takes an object of attributes. In WebGL you might write code like this

gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
gl.vertexAttribPointer(positionLoc, 3, gl.FLOAT, false, 0, 0);
gl.bindBuffer(gl.ARRAY_BUFFER, normalBuffer);
gl.vertexAttribPointer(normalLoc, 3, gl.FLOAT, false, 0, 0);
gl.bindBuffer(gl.ARRAY_BUFFER, texcoordBuffer);
gl.vertexAttribPointer(texcoordLoc, 2, gl.FLOAT, false, 0, 0);
gl.bindBuffer(gl.ARRAY_BUFFER, colorsBuffer);
gl.vertexAttribPointer(colorLoc, 4, gl.UNSIGNED_BYTE, true, 0, 0);

setAttributes is just the simplest code to do that for you.

// make attributes for TWGL manually
const attribs = {
  a_position: { buffer: positionBuffer, size: 3, },
  a_normal:   { buffer: normalBuffer,   size: 3, },
  a_texcoord: { buffer: texcoordBuffer, size: 2, },
  a_color:    { buffer: colorBuffer,    size: 4, type: gl.UNSIGNED_BYTE, normalize: true, },
};
twgl.setAttributes(attribSetters, attribs);

The point of the example above is that TWGL is a thin wrapper. All it's doing is trying to make common WebGL operations easier and less verbose. Feel free to mix it with raw WebGL.

API Docs

API Docs are here.

Want to learn WebGL?

Try webglfundamentals.org

Download Details:

Author: Greggman
Source Code: https://github.com/greggman/twgl.js 
License: MIT license

#javascript #webgl 



OGL: Minimal WebGL Library


Minimal WebGL library.

⚠️ Note: currently in alpha, so expect breaking changes.

See the Examples!

OGL is a small, effective WebGL library aimed at developers who like minimal layers of abstraction, and are interested in creating their own shaders.

Written in es6 modules with zero dependencies, the API shares many similarities with ThreeJS; however, it is tightly coupled with WebGL and comes with much fewer features.

In its design, the library does the minimum abstraction necessary, so devs should still feel comfortable using it in conjunction with native WebGL commands.

Keeping the level of abstraction low helps to make the library easier to understand, extend, and also makes it more practical as a WebGL learning resource.

⚠️ Note: Typescript users may be interested in using a TS fork of the library, kindly maintained by nshen.




npm i ogl


yarn add ogl


Show me what you got! - Explore a comprehensive list of examples, with comments in the source code.

Inspired by the effectiveness of ThreeJS' examples, they will hopefully serve as reference for how to use the library, and to achieve a wide range of techniques.


Even though the source is modular, as a guide, below are the complete component download sizes.

Component | Size (minzipped)

With tree-shaking applied in a build step, one can expect the final size to be much lighter than the values above.


If installed amongst your project files, importing can be done from one single entry point.

import { ... } from './path/to/src/index.mjs';

Otherwise, if using a bundler or import maps with node modules, then import directly from the installed node module.

import { ... } from 'ogl';

By default, the ES source modules are loaded (src/index.mjs).

As another alternative, you could load from a CDN, using either the jsdelivr, unpkg or skypack services.

import { ... } from 'https://cdn.jsdelivr.net/npm/ogl';
import { ... } from 'https://unpkg.com/ogl';
import { ... } from 'https://cdn.skypack.dev/ogl';

If you take this route, I would highly recommend defining a specific version (append @x.x.x) to avoid code breaking, rather than fetching the latest version, as per the above links.

As a basic API example, below renders a spinning white cube.

import { Renderer, Camera, Transform, Box, Program, Mesh } from 'ogl';

const renderer = new Renderer();
const gl = renderer.gl;
document.body.appendChild(gl.canvas);

const camera = new Camera(gl);
camera.position.z = 5;

function resize() {
    renderer.setSize(window.innerWidth, window.innerHeight);
    camera.perspective({
        aspect: gl.canvas.width / gl.canvas.height,
    });
}
window.addEventListener('resize', resize, false);
resize();

const scene = new Transform();

const geometry = new Box(gl);

const program = new Program(gl, {
    vertex: /* glsl */ `
        attribute vec3 position;

        uniform mat4 modelViewMatrix;
        uniform mat4 projectionMatrix;

        void main() {
            gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
        }
    `,
    fragment: /* glsl */ `
        void main() {
            gl_FragColor = vec4(1.0);
        }
    `,
});

const mesh = new Mesh(gl, { geometry, program });
mesh.setParent(scene);

requestAnimationFrame(update);
function update(t) {
    requestAnimationFrame(update);

    mesh.rotation.y -= 0.04;
    mesh.rotation.x += 0.03;

    renderer.render({ scene, camera });
}

Here you can play with the above template live in a codesandbox https://codesandbox.io/s/ogl-5i69p

For a simpler use, such as a full-screen shader, more of the core can be omitted as a scene graph (Transform) and projection matrices (Camera) are not necessary. We'll also show how to easily create custom geometry.

import { Renderer, Geometry, Program, Mesh } from 'ogl';

const renderer = new Renderer({
    width: window.innerWidth,
    height: window.innerHeight,
});
const gl = renderer.gl;
document.body.appendChild(gl.canvas);

// Triangle that covers viewport, with UVs that still span 0 > 1 across viewport
const geometry = new Geometry(gl, {
    position: { size: 2, data: new Float32Array([-1, -1, 3, -1, -1, 3]) },
    uv: { size: 2, data: new Float32Array([0, 0, 2, 0, 0, 2]) },
});
// Alternatively, you could use the Triangle class.

const program = new Program(gl, {
    vertex: /* glsl */ `
        attribute vec2 uv;
        attribute vec2 position;

        varying vec2 vUv;

        void main() {
            vUv = uv;
            gl_Position = vec4(position, 0, 1);
        }
    `,
    fragment: /* glsl */ `
        precision highp float;

        uniform float uTime;

        varying vec2 vUv;

        void main() {
            gl_FragColor.rgb = vec3(0.8, 0.7, 1.0) + 0.3 * cos(vUv.xyx + uTime);
            gl_FragColor.a = 1.0;
        }
    `,
    uniforms: {
        uTime: { value: 0 },
    },
});

const mesh = new Mesh(gl, { geometry, program });

requestAnimationFrame(update);
function update(t) {
    requestAnimationFrame(update);

    program.uniforms.uTime.value = t * 0.001;

    // Don't need a camera if camera uniforms aren't required
    renderer.render({ scene: mesh });
}
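The single oversized triangle used for the full-screen pass is a standard trick: clip-space vertices (-1,-1), (3,-1) and (-1,3) fully contain the visible [-1, 1] square, so one triangle covers the screen without the diagonal seam of a two-triangle quad. A quick standalone check in plain JavaScript (independent of OGL, using the vertices from the example above):

```javascript
// Verify the triangle (-1,-1), (3,-1), (-1,3) covers every corner of the
// clip-space square. A point is inside when the three edge tests agree in sign.
function inTriangle([px, py], [ax, ay], [bx, by], [cx, cy]) {
  const sign = (x1, y1, x2, y2, x3, y3) =>
    (x1 - x3) * (y2 - y3) - (x2 - x3) * (y1 - y3);
  const d1 = sign(px, py, ax, ay, bx, by);
  const d2 = sign(px, py, bx, by, cx, cy);
  const d3 = sign(px, py, cx, cy, ax, ay);
  const hasNeg = d1 < 0 || d2 < 0 || d3 < 0;
  const hasPos = d1 > 0 || d2 > 0 || d3 > 0;
  return !(hasNeg && hasPos); // boundary counts as inside
}

const tri = [[-1, -1], [3, -1], [-1, 3]];
const corners = [[-1, -1], [1, -1], [-1, 1], [1, 1]];
console.log(corners.every((p) => inTriangle(p, ...tri))); // true
```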


In an attempt to keep things light and modular, the library is split up into three components: Math, Core, and Extras.

The Math component is an extension of gl-matrix, providing instantiable classes that extend Array for each of the module types. 8kb when gzipped, it has no dependencies and can be used separately.
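To illustrate the "class extending Array" pattern described above, here is a hypothetical Vec3 sketch (not OGL's actual implementation): instances behave like plain arrays, so they can be passed directly to APIs expecting array-like data, while still carrying vector methods.

```javascript
// Minimal illustration of a math class that extends Array (hypothetical,
// not OGL's source): the instance IS an array of its components.
class Vec3 extends Array {
  constructor(x = 0, y = x, z = x) {
    super(x, y, z);
  }
  add(v) {
    this[0] += v[0]; this[1] += v[1]; this[2] += v[2];
    return this;
  }
}

const v = new Vec3(1, 2, 3).add(new Vec3(1, 1, 1));
console.log([...v]); // [2, 3, 4]
```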

The Core is made up of the following:

  • Geometry.js
  • Program.js
  • Renderer.js
  • Camera.js
  • Transform.js
  • Mesh.js
  • Texture.js
  • RenderTarget.js

Any additional layers of abstraction will be included as Extras, and not part of the core, so as to reduce bloat. These provide a wide breadth of functionality, ranging from simple to advanced.


This is free and unencumbered software released into the public domain.

Anyone is free to copy, modify, publish, use, compile, sell, or distribute this software, either in source code form or as a compiled binary, for any purpose, commercial or non-commercial, and by any means.

In jurisdictions that recognize copyright laws, the author or authors of this software dedicate any and all copyright interest in the software to the public domain. We make this dedication for the benefit of the public at large and to the detriment of our heirs and successors. We intend this dedication to be an overt act of relinquishment in perpetuity of all present and future rights to this software under copyright law.


For more information, please refer to https://unlicense.org

Download Details:

Author: oframe
Source Code: https://github.com/oframe/ogl 
License: Unlicense

#javascript #webgl 

OGL: Minimal WebGL Library
Lawrence Lesch


Regl: Fast Functional WebGL



Fast functional WebGL


regl simplifies WebGL programming by removing as much shared state as it can get away with. To do this, it replaces the WebGL API with two fundamental abstractions, resources and commands:

  • A resource is a handle to a GPU resident object, like a texture, FBO or buffer.
  • A command is a complete representation of the WebGL state required to perform some draw call.

To define a command you specify a mixture of static and dynamic data for the object. Once this is done, regl takes this description and then compiles it into optimized JavaScript code. For example, here is a simple regl program to draw a triangle:

// Calling the regl module with no arguments creates a full screen canvas and
// WebGL context, and then uses this context to initialize a new REGL instance
// Calling the regl module with no arguments creates a full screen canvas and
// WebGL context, and then uses this context to initialize a new REGL instance
const regl = require('regl')()

// Calling regl() creates a new partially evaluated draw command
const drawTriangle = regl({

  // Shaders in regl are just strings.  You can use glslify or whatever you want
  // to define them.  No need to manually create shader objects.
  frag: `
    precision mediump float;
    uniform vec4 color;
    void main() {
      gl_FragColor = color;
    }`,

  vert: `
    precision mediump float;
    attribute vec2 position;
    void main() {
      gl_Position = vec4(position, 0, 1);
    }`,

  // Here we define the vertex attributes for the above shader
  attributes: {
    // regl.buffer creates a new array buffer object
    position: regl.buffer([
      [-2, -2],   // no need to flatten nested arrays, regl automatically
      [4, -2],    // unrolls them into a typedarray (default Float32)
      [4,  4]
    ])
    // regl automatically infers sane defaults for the vertex attribute pointers
  },

  uniforms: {
    // This defines the color of the triangle to be a dynamic variable
    color: regl.prop('color')
  },

  // This tells regl the number of vertices to draw in this command
  count: 3
})

// regl.frame() wraps requestAnimationFrame and also handles viewport changes
regl.frame(({time}) => {
  // clear contents of the drawing buffer
  regl.clear({
    color: [0, 0, 0, 0],
    depth: 1
  })

  // draw a triangle using the command defined above
  drawTriangle({
    color: [
      Math.cos(time * 0.001),
      Math.sin(time * 0.0008),
      Math.cos(time * 0.003),
      1
    ]
  })
})
See this example live
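The split between static values and regl.prop dynamic values can be modeled in plain JavaScript. This is only a sketch of the idea, not regl's implementation (regl compiles commands to optimized code): a command closes over its static state and resolves prop placeholders from the props object at call time.

```javascript
// Sketch of "partial evaluation" of a draw command (hypothetical, not regl's code).
// prop(name) returns a placeholder that is resolved when the command is called.
function prop(name) {
  return (props) => props[name];
}

function createCommand({ uniforms, count }) {
  return (props) => {
    // resolve each uniform: functions are dynamic, everything else is static
    const resolved = {};
    for (const [key, value] of Object.entries(uniforms)) {
      resolved[key] = typeof value === 'function' ? value(props) : value;
    }
    return { uniforms: resolved, count };
  };
}

const drawTriangle = createCommand({
  uniforms: { color: prop('color'), scale: 2 },
  count: 3,
});

console.log(drawTriangle({ color: [1, 0, 0, 1] }));
// { uniforms: { color: [ 1, 0, 0, 1 ], scale: 2 }, count: 3 }
```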

More examples

Check out the gallery. The source code of all the gallery examples can be found here.


regl has no dependencies, so setting it up is pretty easy. There are 3 basic ways to do this:

Live editing

To try out regl right away, you can use the live editor in the gallery.


The easiest way to use regl in a project is via npm. Once you have node set up, you can install and use regl in your project using the following command:

npm i -S regl

For more info on how to use npm, check out the official docs.

If you are using npm, you may also want to try budo which is a live development server.

Run time error checking and browserify

By default if you compile regl with browserify then all error messages and run time checks are removed. This is done to reduce the size of the final bundle. If you are developing an application, you should run browserify using the --debug flag in order to enable error messages. This will also generate source maps which make reading the source code of your application easier.

Standalone script tag

You can also use regl as a standalone script if you are really stubborn. The most recent versions can be found in the dist/ folder and are also available from the npm CDN in both minified and unminified versions.

There are some differences when using regl standalone. Because script tags don't assume any sort of module system, the standalone scripts inject a global constructor function which is equivalent to the module.exports of regl:

<!DOCTYPE html>
<html>
  <head>
    <meta content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=0" name="viewport" />
    <meta charset=utf-8>
  </head>
  <body></body>
  <script language="javascript" src="https://npmcdn.com/regl/dist/regl.js"></script>
  <script language="javascript">
    var regl = createREGL()

    regl.frame(function () {
      regl.clear({
        color: [0, 0, 0, 1]
      })
    })
  </script>
</html>

Why regl

regl just removes shared state from WebGL. You can do anything you could in regular WebGL with little overhead and way less debugging. regl emphasizes the following values:

  • Simplicity The interface is concise and emphasizes separation of concerns. Removing shared state helps localize the effects and interactions of code, making it easier to reason about.
  • Correctness regl has more than 30,000 unit tests and above 95% code coverage. In development mode, regl performs strong validation and sanity checks on all input data to help you catch errors faster.
  • Performance regl uses dynamic code generation and partial evaluation to remove almost all overhead.
  • Minimalism regl just wraps WebGL. It is not a game engine and doesn't have opinions about scene graphs or vector math libraries. Any feature in WebGL is accessible, including advanced extensions like multiple render targets or instancing.
  • Stability regl takes interface compatibility and semantic versioning seriously, making it well suited for long-lived applications that must be supported for months or years down the road. It also has no dependencies, limiting exposure to risky or unplanned updates.


While regl is lower level than many 3D engines, code written in it tends to be highly compact and flexible. A comparison of regl to various other WebGL libraries across several tasks can be found here.


In order to prevent performance regressions, regl is continuously benchmarked. You can run benchmarks locally using npm run bench or check them out online. The results for the last few days can be found here.

These measurements were taken using our custom scripts bench-history and bench-graph. You can read more about them in the development guide.

Projects using regl

The following is an incomplete list of projects using regl:

If you have a project using regl that isn't on this list that you would like to see added, please send us a pull request!

Help Wanted

regl is still under active development, and anyone willing to contribute is very much welcome to do so. Right now, what we need the most is for people to write examples and demos with the framework. This will allow us to find bugs and deficiencies in the API. We have a list of examples we would like to be implemented here, but you are of course welcome to come up with your own examples. To add an example to our gallery of examples, please send us a pull request!

API docs

regl has extensive API documentation. You can browse the docs online here.


The latest changes in regl can be found in the CHANGELOG.

For info on how to build and test headless, see the contributing guide here


Download Details:

Author: Regl-project
Source Code: https://github.com/regl-project/regl 
License: MIT license

#javascript #webgl #functional 

Regl: Fast Functional WebGL
Lawrence Lesch


Phenomenon: A Fast 2kB Low-level WebGL API


Phenomenon is a very small, low-level WebGL library that provides the essentials to deliver a high performance experience. Its core functionality is built around the idea of moving millions of particles around using the power of the GPU.


  • Small in size, no dependencies
  • GPU based for high performance
  • Low-level & highly configurable
  • Helper functions with options
  • Add & destroy instances dynamically
  • Dynamic attribute switching

Want to see some magic right away? Have a look here!


$ npm install --save phenomenon


// Import the library
import Phenomenon from 'phenomenon';

// Create a renderer
const phenomenon = new Phenomenon(options);

// Add an instance
phenomenon.add("particles", options);

For a better understanding of how to use the library, read along or have a look at the demo!



Returns an instance of Phenomenon.

Throughout this documentation we'll refer to an instance of this as renderer.


Type: HTMLElement 
Default: document.querySelector('canvas') 

The element where the scene, with all of its instances, will be rendered to. The provided element has to be a <canvas>, otherwise it won't work.


Type: Object
Default: {}

Overrides that are used when getting the WebGL context from the canvas. The library overrides two settings by default.

  • alpha (default: false): Setting this property to true will result in the canvas having a transparent background. By default clearColor is used instead.
  • antialias (default: false): Setting this property to true will make the edges sharper, but could negatively impact performance. See for yourself if it's worth it!

Read more about all the possible overrides on MDN.


Type: String 
Default: webgl

The context identifier defining the drawing context associated to the canvas. For WebGL 2.0 use webgl2.


Type: Object
Default: {}

Overrides that can be used to alter the behaviour of the experience.

  • devicePixelRatio (number, default: 1): The resolution multiplier by which the scene is rendered relative to the canvas' resolution. Use window.devicePixelRatio for the highest possible quality, 1 for the best performance.
  • clearColor (array, default: [1, 1, 1, 1]): The color in rgba that is used as the background of the scene.
  • clip (array, default: [0.001, 100]): The near and far clip plane in 3D space.
  • position (object, default: {x: 0, y: 0, z: 2}): The distance in 3D space between the center of the scene and the camera.
  • shouldRender (boolean, default: true): A boolean indicating whether the scene should start rendering automatically.
  • uniforms (object, default: {}): Shared values between all instances that can be updated at any given moment. By default this feature is used to render all the instances with the same uProjectionMatrix, uModelMatrix and uViewMatrix. It's also useful for moving everything around with the same progress value: uProgress.
  • onSetup(gl) (function, default: undefined): A setup hook that is called before the first render, which can be used for gl context changes.
  • onRender(renderer) (function, default: undefined): A render hook that is invoked after every rendered frame. Use this to update renderer.uniforms.
  • debug (boolean, default: false): Whether or not the console should log shader compilation warnings.


Update all values that are based on the dimensions of the canvas to make it look good on all screen sizes.


Toggle the rendering state of the scene. When shouldRender is false requestAnimationFrame is disabled so no resources are used.


Type: Boolean 
Default: undefined 

An optional boolean to set the rendering state to a specific value. Leaving this value empty will result in a regular boolean switch.
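The described toggle behavior (explicit boolean forces the state, no argument flips it) can be sketched as follows. This is a hypothetical model of the logic, not Phenomenon's source:

```javascript
// Sketch of the toggle semantics: pass a boolean to force the state,
// pass nothing to flip the current state.
function makeRenderer() {
  let shouldRender = true;
  return {
    get shouldRender() { return shouldRender; },
    toggle(value) {
      shouldRender = typeof value === 'boolean' ? value : !shouldRender;
    },
  };
}

const r = makeRenderer();
r.toggle();      // no argument: flips true -> false
r.toggle(false); // explicit boolean: stays false
console.log(r.shouldRender); // false
```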

.add(key, settings)

This function is used to add instances to the renderer. These instances can be as simple or complex as you'd like them to be. There's no limit to how many of these you can add. Make sure they all have a different key!


Type: String
Default: undefined

Every instance should have a unique name. This name can also be used to destroy the instance specifically later.


Type: Object
Default: {}

An object containing the settings that define the instance, using the parameters below.

  • attributes (array, default: []): Values used in the program that are stored once, directly on the GPU.
  • uniforms (object, default: {}): Values used in the program that can be updated on the fly.
  • vertex (string): The vertex shader is used to position the geometry in 3D space.
  • fragment (string): The fragment shader is used to provide the geometry with color or texture.
  • multiplier (number, default: 1): The amount of duplicates that will be created for the same instance.
  • mode (number, default: 0): The way the instance will be rendered. Particles = 0, triangles = 4.
  • geometry (object, default: {}): Vertices (and optional normals) of a model.
  • modifiers (object, default: {}): Modifiers to alter the attributes data on initialize.
  • onRender (function, default: undefined): A render hook that is invoked after every rendered frame.

Note: Less instances with a higher multiplier will be faster than more instances with a lower multiplier!


Remove an instance from the scene (and from memory) by its key.


Remove all instances and the renderer itself. The canvas element will remain in the DOM.


Dynamically override an attribute with the same logic that is used during initial creation of the instance. The function requires an object with a name, size and data attribute.

Note: The calculation of the data function is done on the CPU. Be sure to check for dropped frames with a lot of particles.

Attributes can also be switched. In the demo this is used to continue with a new start position identical to the end position. This can be achieved with .prepareBuffer(attribute) in which the data function is replaced with the final array.
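As a mental model for how per-duplicate attribute data is laid out, the following sketch (hypothetical, not Phenomenon's internals) packs the result of a data function for each duplicate into one flat typed array, which is what ultimately gets uploaded to the GPU:

```javascript
// Hypothetical sketch: expand an attribute's data function once per duplicate
// (the "multiplier") into a single flat Float32Array. Runs on the CPU,
// which is why heavy data functions can drop frames.
function buildAttributeData(multiplier, size, dataFn) {
  const out = new Float32Array(multiplier * size);
  for (let i = 0; i < multiplier; i++) {
    const values = dataFn(i); // e.g. a random start position per duplicate
    for (let j = 0; j < size; j++) out[i * size + j] = values[j];
  }
  return out;
}

// Two duplicates, each with an [x, y, z] value
const data = buildAttributeData(2, 3, (i) => [i, i * 2, i * 3]);
console.log([...data]); // [0, 0, 0, 1, 2, 3]
```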


  1. Particles
  2. Types
  3. Transition
  4. Easing
  5. Shapes
  6. Instances
  7. Movement
  8. Particle cube
  9. Dynamic instances


Are you excited about this library and have interesting ideas on how to improve it? Please tell me or contribute directly! 🙌

npm install > npm start > http://localhost:8080

Download Details:

Author: Vaneenige
Source Code: https://github.com/vaneenige/phenomenon 
License: MIT license

#javascript #webgl #shader #gpu 

Phenomenon: A Fast 2kB Low-level WebGL API
Lawrence Lesch


Seriously.js: A Real-time, Node-based Video Effects Compositor


Seriously.js is a real-time, node-based video compositor for the web. Inspired by professional software such as After Effects and Nuke, Seriously.js renders high-quality video effects, but allows them to be dynamic and interactive.
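The node-based design means effects are wired into a graph of sources, effects, and targets, and results are pulled through the chain. As a rough illustration of that idea only (a hypothetical sketch, not Seriously.js's API):

```javascript
// Hypothetical pull-based effect graph, mirroring the source -> effect -> target
// chain of a node compositor. Each node recomputes from its inputs when pulled.
class Node {
  constructor(fn, inputs = []) {
    this.fn = fn;
    this.inputs = inputs;
  }
  render() {
    const values = this.inputs.map((n) => n.render());
    return this.fn(...values);
  }
}

// A source producing an RGB pixel, fed through an "invert" effect node
const source = new Node(() => [200, 100, 50]);
const invert = new Node((rgb) => rgb.map((c) => 255 - c), [source]);
console.log(invert.render()); // [55, 155, 205]
```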

Getting Started

Full documentation is in progress at the wiki. Start with the Tutorial and FAQ.


  • Optimized rendering path and GPU accelerated up to 60 frames per second
  • Accept image input from varied sources: video, image, canvas, array, webcam, Three.js
  • Effect parameters accept multiple formats and can monitor HTML form inputs
  • Basic 2D transforms (translate, rotate, scale, skew) on effect nodes
  • Plugin architecture for adding new effects, sources and targets
  • Read pixel array from any node
  • Load with AMD/RequireJS

Included Effects

  • Accumulator
  • Ascii Text
  • Bleach Bypass
  • Blend
  • Brightness/Contrast
  • Channel Mapping
  • Checkerboard Generator
  • Chroma Key
  • Color Complements
  • Color Cube
  • Color Generator
  • Color Look-Up Table
  • Color Select
  • Color Temperature
  • Crop
  • Daltonize
  • Directional Blur
  • Displacement Map
  • Dither
  • Edge Detect
  • Emboss
  • Exposure Adjust
  • Expressions
  • Fader
  • False Color
  • Fast Approximate Anti-Aliasing
  • Film Grain
  • Freeze Frame
  • Gaussian Blur
  • Hex Tiles
  • Highlights/Shadows
  • Hue/Saturation Adjust
  • Invert
  • Kaleidoscope
  • Layers
  • Linear Transfer
  • Luma Key
  • Mirror
  • Night Vision
  • Optical Flow
  • Panorama
  • Pixelate
  • Polar Coordinates
  • Ripple
  • Scanlines
  • Sepia tone
  • Simplex Noise
  • Sketch
  • Split
  • Throttle Frame Rate
  • Tone Adjust
  • TV Glitch
  • Vibrance
  • Vignette
  • White Balance



Seriously.js requires a browser that supports WebGL. Development is targeted to and tested in Firefox (4.0+), Google Chrome (9+), Internet Explorer (11+) and Opera (18+). Safari is expected to support WebGL in the near future.

Even though a browser may support WebGL, the ability to run it depends on the system's graphics card. Seriously.js is heavily optimized, so most modern desktops and notebooks should be sufficient. Older systems may run slower, especially when using high-resolution videos.

Mobile browser support for WebGL has improved. Mobile Firefox, Chrome and Safari have decent support, but they can be slower than desktop versions due to limited system resources.

Seriously.js provides a method to detect browser support and offer descriptive error messages wherever possible.

Cross-Origin Videos and Images

Due to security limitations of WebGL, Seriously.js can only process video or images that are served from the same domain, unless they are served with CORS headers. Firefox, Chrome and Opera support CORS for video, but Safari and Internet Explorer do not, and videos served with CORS are rare. So for now, it is best to host your own video files.


Bug fixes, new features, effects and examples are welcome and appreciated. Please follow the Contributing Guidelines.


Seriously.js is created and maintained by Brian Chirls

Download Details:

Author: Brianchirls
Source Code: https://github.com/brianchirls/Seriously.js 
License: MIT license

#javascript #html5 #webgl 

Seriously.js: A Real-time, Node-based Video Effects Compositor
Lawrence Lesch


Pixijs: The HTML5 Creation Engine


The aim of this project is to provide a fast lightweight 2D library that works across all devices. The PixiJS renderer allows everyone to enjoy the power of hardware acceleration without prior knowledge of WebGL. Also, it's fast. Really fast.

If you want to keep up to date with the latest PixiJS news then feel free to follow us on Twitter @PixiJS and we will keep you posted! You can also check back on our site as any breakthroughs will be posted up there too!

What to Use PixiJS for and When to Use It

PixiJS is a rendering library that will allow you to create rich, interactive graphics, cross-platform applications, and games without having to dive into the WebGL API or deal with browser and device compatibility.

PixiJS has full WebGL support and seamlessly falls back to HTML5's canvas if needed. As a framework, PixiJS is a fantastic tool for authoring interactive content, especially with the move away from Adobe Flash in recent years. Use it for your graphics rich, interactive websites, applications, and HTML5 games. Out of the box, cross-platform compatibility and graceful degradation mean you have less work to do and have more fun doing it! If you want to create polished and refined experiences relatively quickly, without delving into dense, low-level code, all while avoiding the headaches of browser inconsistencies, then sprinkle your next project with some PixiJS magic!

Boost your development and feel free to use your imagination!


  • Website: Find out more about PixiJS on the official website.
  • Getting started:
    • Check out @kittykatattack's comprehensive tutorial.
    • Also check out @miltoncandelero's PixiJS tutorials aimed toward videogames with recipes, best practices and TypeScript / npm / webpack setup here
  • Examples: Get stuck right in and play around with PixiJS code and features right here!
  • Docs: Get to know the PixiJS API by checking out the docs.
  • Guide: Supplementary guide to the API documentation here.
  • Wiki: Other misc tutorials and resources are on the Wiki.


  • Forums: Check out the forum and Stackoverflow, both friendly places to ask your PixiJS questions.
  • Inspiration: Check out the gallery to see some of the amazing things people have created!
  • Chat: You can join us on Discord to chat about PixiJS.


It's easy to get started with PixiJS! Simply download a prebuilt build!

Alternatively, PixiJS can be installed with npm or simply using a content delivery network (CDN) URL to embed PixiJS directly on your HTML page.

Note: After v4.5.0, support for the Bower package manager has been dropped. Please see the release notes for more information.

NPM Install

npm install pixi.js

There is no default export. The correct way to import PixiJS is:

import * as PIXI from 'pixi.js'

CDN Install (via cdnjs)

<script src="https://cdnjs.cloudflare.com/ajax/libs/pixi.js/5.1.3/pixi.min.js"></script>

Note: 5.1.3 can be replaced by any released version.


Thanks to @photonstorm for providing those last 2 examples and allowing us to share the source code :)


Want to be part of the PixiJS project? Great! All are welcome! We will get there quicker together :) Whether you find a bug, have a great feature request or you fancy owning a task from the road map above feel free to get in touch.

Make sure to read the Contributing Guide before submitting changes.

Current features

  • WebGL renderer (with automatic smart batching allowing for REALLY fast performance)
  • Canvas renderer (Fastest in town!)
  • Full scene graph
  • Super easy to use API (similar to the flash display list API)
  • Support for texture atlases
  • Asset loader / sprite sheet loader
  • Auto-detect which renderer should be used
  • Full Mouse and Multi-touch Interaction
  • Text
  • BitmapFont text
  • Multiline Text
  • Render Texture
  • Primitive Drawing
  • Masking
  • Filters
  • User Plugins

Basic Usage Example

import { Application, Sprite, Assets } from 'pixi.js';

// The application will create a renderer using WebGL, if possible,
// with a fallback to a canvas render. It will also setup the ticker
// and the root stage PIXI.Container
const app = new Application();

// The application will create a canvas element for you that you
// can then insert into the DOM
document.body.appendChild(app.view);

// load the texture we need
const texture = await Assets.load('bunny.png');

// This creates a texture from a 'bunny.png' image
const bunny = new Sprite(texture);

// Setup the position of the bunny
bunny.x = app.renderer.width / 2;
bunny.y = app.renderer.height / 2;

// Rotate around the center
bunny.anchor.x = 0.5;
bunny.anchor.y = 0.5;

// Add the bunny to the scene we are building
app.stage.addChild(bunny);

// Listen for frame updates
app.ticker.add(() => {
    // each frame we spin the bunny around a bit
    bunny.rotation += 0.01;
});

How to build

Note that for most users you don't need to build this project. If all you want is to use PixiJS, then just download one of our prebuilt releases. Really the only time you should need to build PixiJS is if you are developing it.

If you don't already have Node.js and NPM, go install them. Then, in the folder where you have cloned the repository, install the build dependencies using npm:

npm install

Then, to build the source, run:

npm run build

Error installing gl package

In most cases installing gl from npm should just work. However, if you run into problems, you might need to adjust your system configuration and make sure all your dependencies are up to date.

Please refer to the gl installation guide for more information.

Error installing canvas package

The canvas library currently being used does not have a pre-built version for every environment. When the package detects an unsupported environment, it will try to build from source.

To build from source you will need to make sure you have the following dependencies installed and then reinstall:

brew install pkg-config cairo pango libpng jpeg giflib librsvg

For non-mac users, please refer to the canvas installation guide for more information.

How to generate the documentation

The docs can be generated using npm:

npm run docs

The documentation uses webdoc in combination with this template pixi-webdoc-template. The configuration file can be found at webdoc.conf.json

We are now a part of the Open Collective and with your support you can help us make PixiJS even better. To make a donation, simply click the button below and we'll love you forever!


Download Details:

Author: Pixijs
Source Code: https://github.com/pixijs/pixijs 
License: MIT license

#javascript #game #webgl #canvas 

Pixijs: The HTML5 Creation Engine
Oral Brekke


4 Favorite Node.js WebGL Libraries

In today's post we will learn about 4 Favorite Node.js WebGL Libraries.

WebGL is a JavaScript API, based on OpenGL, that allows web browsers to render 3D and 2D graphics in the browser without the need to install desktop apps, third-party plug-ins, or browser extensions. WebGL lets a page use the machine's GPU through the browser to render 3D graphics into HTML pages. WebGL is currently supported in most modern web browsers, such as Google Chrome, Mozilla Firefox, and Safari. WebGL can be disabled or enabled through browser settings or the use of special plugins.
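Since WebGL availability depends on the browser and settings, a page can feature-detect it before using any of the libraries below. A minimal sketch of the common detection pattern (the canvas is passed in so the check stays testable):

```javascript
// Feature-detect WebGL support on a canvas-like object.
// getContext returns null (or throws, in some locked-down browsers)
// when the context is unavailable.
function supportsWebGL(canvas) {
  try {
    return Boolean(
      canvas.getContext('webgl') || canvas.getContext('experimental-webgl')
    );
  } catch (e) {
    return false;
  }
}

// In a browser you would call:
// supportsWebGL(document.createElement('canvas'));
```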

1 - Polygonjs

Polygonjs is a node-based 3D WebGL design tool.


npm install @polygonjs/polygonjs


yarn add @polygonjs/polygonjs

You can also load it from the CDN:


The API is designed to be very simple. Here is how you create a minimal scene with a box:

<script type="module">
	// import from the CDN with all nodes
	import {PolyScene, AllRegister} from 'https://unpkg.com/@polygonjs/polygonjs@latest/dist/all.js';
	// or import from the npm module. This is the recommended method,
	// since you can then import only what you need, which will create a much smaller bundle.
	// import {PolyScene} from '@polygonjs/polygonjs/dist/src/engine/scene/PolyScene';
	// import {Poly} from '@polygonjs/polygonjs/dist/src/engine/Poly';
	// import the nodes you need, one by one (NOTE: this will be auto generated when using the visual editor. See more on https://polygonjs.com )
	// import {GeoObjNode} from '@polygonjs/polygonjs/dist/src/engine/nodes/obj/Geo'
	// Poly.registerNode(GeoObjNode);
	// import {BoxSopNode} from '@polygonjs/polygonjs/dist/src/engine/nodes/sop/Box'
	// Poly.registerNode(BoxSopNode);
	// import {HemisphereLightObjNode} from '@polygonjs/polygonjs/dist/src/engine/nodes/obj/HemisphereLight'
	// Poly.registerNode(HemisphereLightObjNode);
	// import {PerspectiveCameraObjNode} from '@polygonjs/polygonjs/dist/src/engine/nodes/obj/PerspectiveCamera'
	// Poly.registerNode(PerspectiveCameraObjNode);
	// import {EventsNetworkSopNode} from '@polygonjs/polygonjs/dist/src/engine/nodes/sop/EventsNetwork'
	// Poly.registerNode(EventsNetworkSopNode);
	// import {CameraOrbitControlsEventNode} from '@polygonjs/polygonjs/dist/src/engine/nodes/event/CameraOrbitControls'
	// Poly.registerNode(CameraOrbitControlsEventNode);

	// create a scene
	const scene = new PolyScene();
	const rootNode = scene.root();

	// create a box
	const geo = rootNode.createNode('geo');
	const box = geo.createNode('box');

	// add a light
	const hemisphereLight1 = rootNode.createNode('hemisphereLight');

	// create a camera
	const perspectiveCamera1 = rootNode.createNode('perspectiveCamera');
	perspectiveCamera1.p.t.set([5, 5, 5]);
	// add OrbitControls
	const events1 = perspectiveCamera1.createNode('eventsNetwork');
	const orbitsControls = events1.createNode('cameraOrbitControls');

	// mount a viewer on a DOM element
	// (note: the exact viewer-creation call may vary between Polygonjs versions)
	const element = document.getElementById('app');
	perspectiveCamera1.createViewer(element);
</script>
View on Github

2 - Headless-gl

gl lets you create a WebGL context in Node.js without making a window or loading a full browser environment.


Installing headless-gl on a supported platform is a snap using one of the prebuilt binaries. Using npm, run the command:

npm install gl


// Create context
var width   = 64
var height  = 64
var gl = require('gl')(width, height, { preserveDrawingBuffer: true })

//Clear screen to red
gl.clearColor(1, 0, 0, 1)
gl.clear(gl.COLOR_BUFFER_BIT)

//Write output as a PPM formatted image
var pixels = new Uint8Array(width * height * 4)
gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, pixels)
process.stdout.write(['P3\n# gl.ppm\n', width, " ", height, '\n255\n'].join(''))

for(var i = 0; i < pixels.length; i += 4) {
  for(var j = 0; j < 3; ++j) {
    process.stdout.write(pixels[i + j] + ' ')
  }
}

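The output above is the ASCII PPM ("P3") format: a short header followed by space-separated RGB triples. As a minimal, browser-free sketch of the same formatting logic (a hypothetical helper, not part of headless-gl), it can be written as a pure function:

```javascript
// Hypothetical helper: build an ASCII (P3) PPM string from flat RGBA pixel data,
// mirroring the header and per-pixel writes in the headless-gl example above.
function toPPM(width, height, pixels) {
  let out = ['P3\n# gl.ppm\n', width, ' ', height, '\n255\n'].join('');
  for (let i = 0; i < pixels.length; i += 4) {
    // Keep R, G, B and drop the alpha channel, since PPM has no alpha.
    out += pixels[i] + ' ' + pixels[i + 1] + ' ' + pixels[i + 2] + ' ';
  }
  return out;
}

// A 1x1 image cleared to red, i.e. gl.clearColor(1, 0, 0, 1):
const ppm = toPPM(1, 1, new Uint8Array([255, 0, 0, 255]));
```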
View on Github

3 - Hihat

Local Node/Browser development with Chrome DevTools


This project is currently best suited as a global install. Use npm to install it like so:

npm install hihat -g

Basic Examples

The simplest case is to run hihat on any source file that can be browserified (Node/CommonJS).

hihat index.js

Any options after -- will be passed to browserify. For example:

# transpile ES6 files
hihat tests/*.js -- --transform babelify

You can use --print to redirect console logging into your terminal:

hihat test.js --print | tap-spec

The process will stay open until you call window.close() from the client code. Also see the --quit and --timeout options in Usage.
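A test entry point therefore typically prints its results and then closes the window so the hihat process can exit. A sketch of that pattern (the logging and close calls are passed in as parameters here only so the snippet can run outside the browser hihat launches):

```javascript
// Sketch of a hihat test entry. In the browser, you would call console.log and
// window.close() directly; they are injected here so the pattern is testable
// outside the browser.
function runTests(log, closeWindow) {
  log('TAP version 13');
  log('ok 1 - example assertion passed');
  log('1..1');
  closeWindow(); // in hihat: window.close(), which lets the process exit
}

// Browser usage: runTests(console.log, () => window.close());
```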



Usage

hihat [entries] [options] -- [browserifyOptions]

View on Github

4 - Node-webgl

This is a set of WebGL-like bindings to OpenGL for Node.JS on desktops: Windows, Linux, and Mac.


npm install node-webgl

Installation Notes for Windows 7

Beware of the Node.JS distribution you use. The default Node.JS is 32-bit and this means that modules will be compiled by node-gyp with 32-bit settings, which often leads to compilation errors especially on 64-bit systems.

So for Windows 7 64-bit, instead of downloading the default Node.JS Windows installer, select 'Other release files'. This will show you an FTP site for the latest release. Go into the x64 folder and download that distribution.

Installation Notes for OSX

brew install anttweakbar freeimage


examples/ contains examples from around the web. test/ contains lessons from www.learningwebgl.com and other tests.

To run one, simply type: node test/lesson02.js

View on Github

Thank you for following this article. 

#node #webgl 

4 Favorite Node.js WebGL Libraries
Lawrence Lesch


OpenSC2K: An Open Source Remake Of Sim City 2000 By Maxis


OpenSC2K - An Open Source remake of SimCity 2000 written in JavaScript, using WebGL Canvas and Phaser 3.


Currently a lot remains to be implemented but the basic framework is there for importing and viewing cities. Lots of stuff remains completely unimplemented such as the actual simulation, rendering of many special case tiles and buildings and anything else that exists outside of importing and viewing.

Along with implementing the original functionality and features, I plan to add additional capabilities beyond the original such as larger city/map sizes, additional network types, adding buildings beyond the initial tileset limitations, action/history tracking along with replays and more.

I've only tested using Chrome / Firefox on macOS, but it should run fairly well on any modern browser/platform that supports WebGL. Performance should be acceptable but there is still a LOT of room for optimizations and improvements.

Due to copyrights, the original graphics and assets from SimCity 2000 cannot be provided here. I've developed and tested using the assets from SimCity 2000 Special Edition for Windows 95. Once I've got the basic engine stabilized I plan to add support for multiple versions of SimCity 2000 in the future.

Update: I've been working on refactoring considerable portions of the code for clarity and performance. Due to the changes a lot of existing functionality is now completely broken and will be fixed in upcoming commits.


You can use yarn (recommended) or npm to install and run. Once installed and started, open a browser to http://localhost:3000 to start the game.

OS X / Linux

  1. git clone https://github.com/rage8885/OpenSC2K or download this repository
  2. cd OpenSC2K
  3. yarn install downloads and installs the dependencies
  4. yarn dev to run


By default, a test city included in the /assets/cities/ folder will load. Currently you must modify the /src/city/load.js file to load different cities.

Requires two files from the Windows 95 Special Edition version of SimCity 2000: LARGE.DAT and PAL_MSTR.BMP. These must be placed in the /assets/import/ directory prior to starting the game. The files will be automatically parsed and used for all in game graphics.


  • WASD to move the camera viewport
  • Q or E to adjust camera zoom  


Based on the work of Dale Floer

Based on the work of David Moews

Portions of the SC2 import logic are based on sc2kparser created by Objelisks and distributed under the terms of the ISC license. https://github.com/Objelisks/sc2kparser

Includes work adapted from the Graham Scan polygon union JavaScript implementation by Lovasoa and distributed under the terms of the MIT license https://github.com/lovasoa/graham-fast

Download Details:

Author: Nicholas-ochoa
Source Code: https://github.com/nicholas-ochoa/OpenSC2K 
License: GPL-3.0 license

#javascript #electron #game #webgl 



A JavaScript API for Drawing Unconventional Text Effects on The Web


A JavaScript API for drawing unconventional text effects on the web.


When applying effects to text on the web, designers have traditionally been constrained to those provided by CSS. In the majority of cases this is entirely suitable – text is text, right? Yet still, there exist numerous examples of designers combining CSS properties, gifs, and images to create effects that evoke something more playful. Precisely here, Blotter exists to provide an alternative.

GLSL Backed Text Effects with Ease

Blotter provides a simple interface for building and manipulating text effects that utilize GLSL shaders without requiring that the designer write GLSL. Blotter has a growing library of configurable effects while also providing ways for student or experienced GLSL programmers to quickly bootstrap new ones.

Atlasing Effects in a Single WebGL Back Buffer

Blotter renders all texts in a single WebGL context and limits the number of draw calls it makes by using atlases. When multiple texts share the same effect they are mapped into a single texture and rendered together. The resulting image data is then output to individual 2d contexts for each element.

Animation Loop

Rather than executing on a time-based interval, Blotter's internal animation loop uses requestAnimationFrame to match the browser's display refresh rate and to pause when the user navigates to other browser tabs, improving performance and preserving battery life on the user's device.

What Blotter Isn't

Any texts you pass to Blotter can be individually configured using familiar style properties. You can use custom font faces through the @font-face spec. However, Blotter ultimately renders the texts passed to it into canvas elements. This means rendered text won't be selectable. Blotter is great for elements like titles, headings, and texts used for graphic purposes. It's not recommended that Blotter be used for lengthy bodies of text, and should in most cases be applied to words individually.


Download the minified version.

To apply text effects, you'll also want to include at least one material, so download one of Blotter's ready-made effects, such as the ChannelSplitMaterial.

Include both in your HTML.

<script src="path/to/blotter.min.js"></script>
<script src="path/to/channelSplitMaterial.js"></script>

The following illustrates how to render Blotter's ChannelSplitMaterial in the body of your page with default settings.

<!doctype html>
<html>
  <head>
    <script src="path/to/blotter.min.js"></script>
    <script src="path/to/channelSplitMaterial.js"></script>
  </head>
  <body>
    <script>
      var text = new Blotter.Text("Hello", {
        family : "serif",
        size : 120,
        fill : "#171717"
      });

      var material = new Blotter.ChannelSplitMaterial();

      var blotter = new Blotter(material, { texts : text });

      var scope = blotter.forText(text);

      scope.appendTo(document.body);
    </script>
  </body>
</html>

Making Changes / Custom Builds

Firstly, install Blotter's build dependencies (OSX):

$ cd ~/path/to/blotter
$ npm install

The blotter.js and blotter.min.js files are built from source files in the /src directory. Do not edit these built files directly. Instead, edit the source files within the /src directory and then run the following to build the generated files:

$ npm run build

You will find the updated build files at /build/blotter.js and /build/blotter.min.js.

Without Three.js / Without Underscore.js

Blotter.js requires Three.js and Underscore.js. If you're already including these files in your project, you should remove them from the defFiles array in the Gruntfile and re-run the build script.
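As an illustration of that edit (the defFiles array is named by the project, but the file paths below are assumptions), removing the vendored copies amounts to filtering them out of the array:

```javascript
// Hypothetical sketch of trimming the Gruntfile's defFiles array so the build
// no longer bundles Three.js and Underscore.js. The vendor paths are assumed.
const defFiles = [
  'src/extras/vendor/three.custom.js', // assumed: Blotter's custom Three.js build
  'src/extras/vendor/underscore.js',   // assumed: bundled Underscore.js
  'src/blotter.js'
];

// Keep only Blotter's own sources when Three.js/Underscore.js come from elsewhere.
const filtered = defFiles.filter((f) => !/three|underscore/i.test(f));
```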

Note: In order to decrease the total build size, Blotter uses a custom build of Three.js that only includes modules Blotter.js relies on. For more information view the build-custom-three script in package.json.

Custom Materials

The documentation for creating custom materials can be found in the Wiki.


Blotter is not possible without these contributions to JavaScript. 

Some projects and people who have helped inspire along the way. 

  • Two.js
    Jono Brandel's Two.js has provided much inspiration for Blotter's documentation and API design.
  • Reza Ali
    Reza's Fragment was a fundamental part of the development process for writing Blotter's Fragment shaders, and Reza kindly allowed Blotter to include an array of shader helper functions from Fragment.
  • Mitch Paone
    I was introduced to Mitch's work in computational typography while working on Blotter, and the work Mitch has done with DIA has been hugely motivational.
  • Stan Haanappel
    Stan Haanappel is a designer whose work with type has been inspirational to Blotter.
  • The Book of Shaders
    The Book of Shaders by Patricio Gonzalez Vivo and Jen Lowe is where anyone looking to learn more about writing shaders should begin.
  • Shadertoy
    Shadertoy has been a critical part of my personal learning experience while working on Blotter.

✌️ - Bradley Griffith

Download Details:

Author: Bradley
Source Code: https://github.com/bradley/Blotter 
License: View license

#javascript #css #design #webgl 

Nat Grady


Plotly.R: an interactive Graphing Library for R

An R package for creating interactive web graphics via the open source JavaScript graphing library plotly.js.


Install from CRAN:

install.packages("plotly")


Or install the latest development version (on GitHub) via {remotes}:

remotes::install_github("plotly/plotly.R")


Getting started

Web-based ggplot2 graphics

If you use ggplot2, ggplotly() converts your static plots to an interactive web-based version!

library(plotly)

g <- ggplot(faithful, aes(x = eruptions, y = waiting)) +
  stat_density_2d(aes(fill = ..level..), geom = "polygon") + 
  xlim(1, 6) + ylim(40, 100)

ggplotly(g)


By default, ggplotly() tries to replicate the static ggplot2 version exactly (before any interaction occurs), but sometimes you need greater control over the interactive behavior. The ggplotly() function itself has some convenient “high-level” arguments, such as dynamicTicks, which tells plotly.js to dynamically recompute axes, when appropriate. The style() function also comes in handy for modifying the underlying trace attributes (e.g. hoveron) used to generate the plot:

gg <- ggplotly(g, dynamicTicks = "y")
style(gg, hoveron = "points", hoverinfo = "x+y+text", hoverlabel = list(bgcolor = "white"))


Moreover, since ggplotly() returns a plotly object, you can apply essentially any function from the R package on that object. Some useful ones include layout() (for customizing the layout), add_traces() (and its higher-level add_*() siblings, for example add_polygons(), for adding new traces/data), subplot() (for combining multiple plotly objects), and plotly_json() (for inspecting the underlying JSON sent to plotly.js).

The ggplotly() function will also respect some “unofficial” ggplot2 aesthetics, namely text (for customizing the tooltip), frame (for creating animations), and ids (for ensuring sensible smooth transitions).

Using plotly without ggplot2

The plot_ly() function provides a more direct interface to plotly.js so you can leverage more specialized chart types (e.g., parallel coordinates or maps) or even some visualization that the ggplot2 API won’t ever support (e.g., surface, mesh, trisurf, etc).

plot_ly(z = ~volcano, type = "surface")


Learn more

To learn more about special features that the plotly R package provides (e.g., client-side linking, shiny integration, editing and generating static images, custom events in JavaScript, and more), see https://plotly-r.com. You may already be familiar with the existing plotly documentation (e.g., https://plotly.com/r/), which is essentially a language-agnostic how-to guide for learning plotly.js; https://plotly-r.com, by contrast, is meant to be a more holistic tutorial written by and for the R user. The package itself ships with a number of demos (list them by running demo(package = "plotly")) and shiny/rmarkdown examples (list them by running plotly_example("shiny") or plotly_example("rmd")). Carson also keeps numerous slide decks with useful examples and concepts.


Please read through our contributing guidelines. Included are directions for opening issues, asking questions, contributing changes to plotly, and our code of conduct.

Download Details:

Author: Plotly
Source Code: https://github.com/plotly/plotly.R 
License: Unknown, MIT licenses found

#r #webgl #datavisualization #plotly 



Rthreejs: Three.js Widgets for R and Shiny

Three.js and R

Three.js widgets for R and shiny. The package includes

  • graphjs: an interactive network visualization widget
  • scatterplot3js: a 3-d scatterplot widget similar to, but more limited than, the scatterplot3d function
  • globejs: a somewhat silly widget that plots data and images on a 3-d globe

The widgets are easy to use and render directly in RStudio, in R markdown, in Shiny applications, and from command-line R via a web browser. They produce high-quality interactive visualizations with just a few lines of R code.

Visualizations optionally use accelerated WebGL graphics, falling back to non-accelerated graphics for systems without WebGL when possible.

See https://threejs.org for details on three.js.

See https://bwlewis.github.io/rthreejs for R examples.

This project is based on the htmlwidgets package. See http://htmlwidgets.org for details and links to many other visualization widgets for R.

Changes in version 0.3.4 (August, 2021)

Added a JavaScript 'program' function argument to run extra user-supplied JavaScript initialization code, see the graphjs help for examples.

New in version 0.3.0 (June, 2017)

The new 0.3.0 package version introduces major changes. The scatterplot3js() function generally works as before but with more capabilities. The graphjs() function is very different with a new API more closely tied to the igraph package.

The threejs package now depends on igraph. If you're doing serious network analysis, you're probably already using igraph (or you should be). Threejs now uses external graph layouts (either from igraph or elsewhere). This gives much greater graph layout flexibility, something I was looking for, but also removes the cute (but slow and crude) force-directed JavaScript animation previously used. To partially make up for that, several new graph animation and interaction schemes are newly available.

See https://bwlewis.github.io/rthreejs/animation/animation.html and https://bwlewis.github.io/rthreejs/advanced/advanced.html for short tutorials on the new graph animation capabilities.

Performance of graphjs() is generally much improved using extensive buffering and custom WebGL shaders where needed. See https://bwlewis.github.io/rthreejs/ego/index.html for an example.

Summary of changes

The scatterplot3js() function was substantially improved and updated.

  • The new pch option supports many point styles with size control.
  • Interactive rotation and zooming are greatly improved and panning is now supported: press and hold the right mouse button (or touch equivalent) and move the mouse to pan.
  • Mouse over labels are supported in WebGL renderings.
  • The points3d() interface has changed to support pipelining.
  • Lines are supported too, see lines3d().
  • Support for crosstalk selection handles (see demo("crosstalk", package="threejs")).
  • Set the experimental use.orbitcontrols=TRUE option for more CPU-efficient (but less fluid) rendering (good for laptops), also applies to graphjs().

The graphjs() function is completely new.

  • Greater variety of WebGL vertex rendering ("pch") options, including spheres and much higher-performance options for large graphs.
  • Graph layout is now external; for instance use one of the many superb igraph package graph layout options.
  • Graph animation is supported, see the examples.
  • Interactive (click-able) graph animation is supported, see demo(package="threejs") for examples.
  • Limited brushing is available to highlight portions of the graph, see the brush=TRUE option.
  • Support for crosstalk selection handles.

Known issues

  • RStudio on Windows systems may not be able to render the WebGL graphics emitted by threejs. RStudio users running on Windows systems may need to use the plot "pop out" button to see visualizations in an external browser. We expect this to be a temporary problem until the underlying graphics rendering system used by RStudio is updated later in 2017.
  • The fallback Canvas rendering code has diverged too much from the baseline WebGL code and no longer works. We have temporarily disabled Canvas rendering with an error message. See https://github.com/bwlewis/rthreejs/issues/67 for details.
  • Crosstalk filter handles are used in a non-standard and experimental way to control graph animation. Don't rely on this experimental feature.


Use the devtools package to install threejs directly from GitHub on any R platform (Mac, Windows, Linux, ...):

if(!require("devtools")) install.packages("devtools")
devtools::install_github("bwlewis/rthreejs")


See ?scatterplot3js for more examples and detailed help.

z <- seq(-10, 10, 0.1)
x <- cos(z)
y <- sin(z)
scatterplot3js(x, y, z, color=rainbow(length(z)))

The following example plots an undirected graph with 4039 vertices and 88234 edges from the Stanford SNAP network repository http://snap.stanford.edu/data/facebook_combined.txt.gz.

data(ego)
graphjs(ego, bg="black")

The next example illustrates the globe widget by plotting the relative population of some cities using data from the R maps package on a globe. It's based on the JavaScript WebGL Globe Toolkit (https://github.com/dataarts) by the Google Creative Lab Data Arts Team.

runApp(system.file("examples/globe", package="threejs"))

For detailed help on the widgets and additional examples, see


Download Details:

Author: Bwlewis
Source Code: https://github.com/bwlewis/rthreejs 
License: View license

#r #webgl #threejs 

Monty Boehm


MeshCat.jl: WebGL-based 3D Visualizer in Julia

MeshCat.jl: Julia bindings to the MeshCat WebGL viewer

MeshCat is a remotely-controllable 3D viewer, built on top of three.js. The viewer contains a tree of objects and transformations (i.e. a scene graph) and allows those objects and transformations to be added and manipulated with simple commands. This makes it easy to create 3D visualizations of geometries, mechanisms, and robots. MeshCat.jl runs on macOS, Linux, and Windows.

The MeshCat viewer runs entirely in the browser, with no external dependencies. All files are served locally, so no internet connection is required. Communication between the browser and your Julia code is managed by HTTP.jl. That means that MeshCat should work:

As much as possible, MeshCat.jl tries to use existing implementations of its fundamental types. In particular, we use:

That means that MeshCat should play well with other tools in the JuliaGeometry ecosystem like MeshIO.jl, Meshing.jl, etc.


Basic Usage

For detailed examples of usage, check out demo.ipynb.


To learn about the animation system (introduced in MeshCat.jl v0.2.0), see animation.ipynb.

Related Projects

MeshCat.jl is a successor to DrakeVisualizer.jl, and the interface is quite similar (with the exception that we use setobject! instead of setgeometry!). The primary difference is that DrakeVisualizer required Director, LCM, and VTK, all of which could be difficult to install, while MeshCat just needs a web browser. MeshCat also has better support for materials, textures, point clouds, and complex meshes.

You may also want to check out:


Create a visualizer and open it

using MeshCat
vis = Visualizer()

## In an IJulia/Jupyter notebook, you can also do:
# IJuliaCell(vis)


using GeometryBasics
using CoordinateTransformations

setobject!(vis, HyperRectangle(Vec(0., 0, 0), Vec(1., 1, 1)))
settransform!(vis, Translation(-0.5, -0.5, 0))


Point Clouds

using ColorTypes
verts = rand(Point3f, 100_000)
colors = [RGB(p...) for p in verts]
setobject!(vis, PointCloud(verts, colors))



# Visualize a mesh from the level set of a function
using Meshing
f = x -> sum(sin, 5 * x)
sdf = SignedDistanceField(f, HyperRectangle(Vec(-1, -1, -1), Vec(2, 2, 2)))
mesh = HomogenousMesh(sdf, MarchingTetrahedra())
setobject!(vis, mesh,
           MeshPhongMaterial(color=RGBA{Float32}(1, 0, 0, 0.5)))



See here for a notebook with the example.

# Visualize the permutahedron of order 4 using Polyhedra.jl
using Combinatorics, Polyhedra
v = vrep(collect(permutations([0, 1, 2, 3])))
using CDDLib
p4 = polyhedron(v, CDDLib.Library())

# Project that polyhedron down to 3 dimensions for visualization
v1 = [1, -1,  0,  0]
v2 = [1,  1, -2,  0]
v3 = [1,  1,  1, -3]
p3 = project(p4, [v1 v2 v3])

# Show the result
setobject!(vis, Polyhedra.Mesh(p3))



Using https://github.com/rdeits/MeshCatMechanisms.jl


Author: rdeits
Source Code: https://github.com/rdeits/MeshCat.jl 
License: MIT license

#julia #3d #webgl 
