Lawson Wehner


Crystaledge: A Pure Crystal Vector Math Library


Pure Crystal vector math library (WIP)


Add this to your application's shard.yml:

    dependencies:
      crystaledge:
        github: unn4m3d/crystaledge


TODO List:

  •  Vector2 math
    •  Vector2 * Matrix3
    •  Vector2 tests
    •  Vector2 docs
    •  "Dangerous" versions of Vector2 methods
  •  Vector3 math
    •  Vector3 * Matrix4
    •  Vector3 rotation
    •  Vector3 reflection
    •  Vector3 tests
    •  Vector3 docs
    •  "Dangerous" versions of Vector3 methods
  •  Vector4 math
    •  Vector4 tests
    •  Vector4 docs
    •  "Dangerous" versions of Vector4 methods
  •  Matrix3 math
    •  Matrix3 tests
    •  Matrix3 docs
  •  Matrix4 math
    •  Matrix4 tests
    •  Matrix4 docs
  •  Quaternion
    •  Quaternion tests
    •  Quaternion docs


  1. Fork it ( )
  2. Create your feature branch (git checkout -b my-new-feature)
  3. Commit your changes (git commit -am 'Add some feature')
  4. Push to the branch (git push origin my-new-feature)
  5. Create a new Pull Request


Download Details:

Author: unn4m3d
Source Code: 
License: MIT license

#crystal #vector 


LoopVectorization.jl: Macro(s) for Vectorizing Loops



using Pkg
Pkg.add("LoopVectorization")

LoopVectorization is supported on Julia 1.1 and later. It is tested on Julia 1.5 and nightly.


Misusing LoopVectorization can have serious consequences. Like @inbounds, misusing it can lead to segfaults and memory corruption. We expect that any time you use the @turbo macro with a given block of code that you:

  1. Are not indexing an array out of bounds. @turbo does not perform any bounds checking.
  2. Are not iterating over an empty collection. Iterating over an empty loop such as for i ∈ eachindex(Float64[]) is undefined behavior, and will likely result in out-of-bounds memory accesses. Ensure that loops behave correctly.
  3. Are not relying on a specific execution order. @turbo can and will re-order operations and loops inside its scope, so the correctness cannot depend on a particular order. You cannot implement cumsum with @turbo.
  4. Are not using multiple loops at the same level in nested loops.
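
To make point 3 concrete, here is a sketch of a loop whose correctness depends on execution order (the function name is illustrative). Each iteration reads the running total written by the previous one, so it must stay serial:

```julia
# Each iteration reads the running total produced by the previous one,
# so the loop has a cross-iteration dependence and cannot be reordered.
function mycumsum!(out, x)
    s = zero(eltype(x))
    @inbounds for i in eachindex(x, out)  # keep serial; do NOT use @turbo here
        s += x[i]
        out[i] = s
    end
    return out
end

mycumsum!(zeros(3), [1.0, 2.0, 3.0])  # [1.0, 3.0, 6.0]
```

Prefixing this loop with @turbo would allow the accumulation order to change, producing wrong results.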


This library provides the @turbo macro, which may be used to prefix a for loop or broadcast statement. It then tries to vectorize the loop to improve runtime performance.

The macro assumes that loop iterations can be reordered. It also currently supports simple nested loops, where loop bounds of inner loops are constant across iterations of the outer loop, and only a single loop at each level of loop nest. These limitations should be removed in a future version.


Please see the documentation for benchmarks versus base Julia, Clang, icc, ifort, gfortran, and Eigen. If you believe any code or compiler flags can be improved, would like to submit your own benchmarks, or have Julia code using LoopVectorization that you would like to be tested for performance regressions on a semi-regular basis, please file an issue or PR with the code sample.


Dot Product

LLVM/Julia by default generate essentially optimal code for a primary vectorized part of this loop. In many cases -- such as the dot product -- this vectorized part of the loop computes 4*SIMD-vector-width iterations at a time. On the CPU I'm running these benchmarks on with Float64 data, the SIMD-vector-width is 8, meaning it will compute 32 iterations at a time. However, LLVM is very slow at handling the tails (the remaining length(iterations) % 32 iterations). For this reason, in benchmark plots you can see performance drop as the size of the remainder increases.

For simple loops like a dot product, LoopVectorization.jl's most important optimization is to handle these tails more efficiently:

julia> using LoopVectorization, BenchmarkTools

julia> function mydot(a, b)
          s = 0.0
          @inbounds @simd for i ∈ eachindex(a,b)
              s += a[i]*b[i]
          end
          s
       end
mydot (generic function with 1 method)

julia> function mydotavx(a, b)
          s = 0.0
          @turbo for i ∈ eachindex(a,b)
              s += a[i]*b[i]
          end
          s
       end
mydotavx (generic function with 1 method)

julia> a = rand(256); b = rand(256);

julia> @btime mydot($a, $b)
 12.220 ns (0 allocations: 0 bytes)

julia> @btime mydotavx($a, $b) # performance is similar
 12.104 ns (0 allocations: 0 bytes)

julia> a = rand(255); b = rand(255);

julia> @btime mydot($a, $b) # with loops shorter by 1, the remainder is now 31, and it is slow
 36.530 ns (0 allocations: 0 bytes)

julia> @btime mydotavx($a, $b) # performance remains mostly unchanged.
 12.226 ns (0 allocations: 0 bytes)

Matrix Multiply

We can also vectorize fancier loops. A likely familiar example to dive into:

julia> function mygemm!(C, A, B)
           @inbounds @fastmath for m ∈ axes(A,1), n ∈ axes(B,2)
               Cmn = zero(eltype(C))
               for k ∈ axes(A,2)
                   Cmn += A[m,k] * B[k,n]
               end
               C[m,n] = Cmn
           end
       end
mygemm! (generic function with 1 method)

julia> function mygemmavx!(C, A, B)
           @turbo for m ∈ axes(A,1), n ∈ axes(B,2)
               Cmn = zero(eltype(C))
               for k ∈ axes(A,2)
                   Cmn += A[m,k] * B[k,n]
               end
               C[m,n] = Cmn
           end
       end
mygemmavx! (generic function with 1 method)

julia> M, K, N = 191, 189, 171;

julia> C1 = Matrix{Float64}(undef, M, N); A = randn(M, K); B = randn(K, N);

julia> C2 = similar(C1); C3 = similar(C1);

julia> @benchmark mygemmavx!($C1, $A, $B)
  memory estimate:  0 bytes
  allocs estimate:  0
  minimum time:     111.722 μs (0.00% GC)
  median time:      112.528 μs (0.00% GC)
  mean time:        112.673 μs (0.00% GC)
  maximum time:     189.400 μs (0.00% GC)
  samples:          10000
  evals/sample:     1

julia> @benchmark mygemm!($C2, $A, $B)
  memory estimate:  0 bytes
  allocs estimate:  0
  minimum time:     4.891 ms (0.00% GC)
  median time:      4.899 ms (0.00% GC)
  mean time:        4.899 ms (0.00% GC)
  maximum time:     5.049 ms (0.00% GC)
  samples:          1021
  evals/sample:     1

julia> using LinearAlgebra, Test

julia> @test all(C1 .≈ C2)
Test Passed

julia> BLAS.set_num_threads(1); BLAS.vendor()

julia> @benchmark mul!($C3, $A, $B)
  memory estimate:  0 bytes
  allocs estimate:  0
  minimum time:     117.221 μs (0.00% GC)
  median time:      118.745 μs (0.00% GC)
  mean time:        118.892 μs (0.00% GC)
  maximum time:     193.826 μs (0.00% GC)
  samples:          10000
  evals/sample:     1

julia> @test all(C1 .≈ C3)
Test Passed

julia> 2e-9M*K*N ./ (111.722e-6, 4.891e-3, 117.221e-6)
(110.50516460500171, 2.524199141279902, 105.32121377568868)

It can produce a good macro kernel. An implementation of matrix multiplication able to handle large matrices would need to perform blocking and packing of arrays to prevent the operations from being memory bottle-necked. Some day, LoopVectorization may itself try to model the costs of memory movement in the L1 and L2 cache, and use these to generate loops around the macro kernel following the work of Low, et al. (2016).

But for now, you should view it as a tool for generating efficient computational kernels, leaving tasks of parallelization and cache efficiency to you.





Another example, a straightforward operation expressed well via broadcasting and *ˡ (which is typed *\^l), the lazy matrix multiplication operator:

julia> using LoopVectorization, LinearAlgebra, BenchmarkTools, Test; BLAS.set_num_threads(1)

julia> A = rand(5,77); B = rand(77, 51); C = rand(51,49); D = rand(49,51);

julia> X1 =      view(A,1,:) .+ B  *  (C .+ D');

julia> X2 = @turbo view(A,1,:) .+ B .*ˡ (C .+ D');

julia> @test X1 ≈ X2
Test Passed

julia> buf1 = Matrix{Float64}(undef, size(C,1), size(C,2));

julia> buf2 = similar(X1);

julia> @btime $X1 .= view($A,1,:) .+ mul!($buf2, $B, ($buf1 .= $C .+ $D'));
  9.188 μs (0 allocations: 0 bytes)

julia> @btime @turbo $X2 .= view($A,1,:) .+ $B .*ˡ ($C .+ $D');
  6.751 μs (0 allocations: 0 bytes)

julia> @test X1 ≈ X2
Test Passed


The lazy matrix multiplication operator escapes broadcasts and fuses, making it easy to write code that avoids intermediates. However, I would recommend always checking if splitting the operation into pieces, or at least isolating the matrix multiplication, increases performance. That will often be the case, especially if the matrices are large, where a separate multiplication can leverage BLAS (and perhaps take advantage of threads). This may improve as the optimizations within LoopVectorization improve.

Note that loops will generally be faster than broadcasting. This is because the behavior of broadcasts is determined by runtime information (i.e., dimensions other than a leading dimension of size 1 will be broadcast; which of these they are is not known at compile time).

julia> function AmulBtest!(C,A,Bk,Bn,d)
          @turbo for m ∈ axes(A,1), n ∈ axes(Bk,2)
             ΔCmn = zero(eltype(C))
             for k ∈ axes(A,2)
                ΔCmn += A[m,k] * (Bk[k,n] + Bn[n,k])
             end
             C[m,n] = ΔCmn + d[m]
          end
       end
AmulBtest! (generic function with 1 method)

julia> AmulBtest!(X2, B, C, D, view(A,1,:))

julia> @test X1 ≈ X2
Test Passed

julia> @benchmark AmulBtest!($X2, $B, $C, $D, view($A,1,:))
  memory estimate:  0 bytes
  allocs estimate:  0
  minimum time:     5.793 μs (0.00% GC)
  median time:      5.816 μs (0.00% GC)
  mean time:        5.824 μs (0.00% GC)
  maximum time:     14.234 μs (0.00% GC)
  samples:          10000
  evals/sample:     6


Dealing with structs


The key to the @turbo macro's performance gains is leveraging knowledge of exactly how data like Float64s and Ints are handled by a CPU. As such, it is not straightforward to generalize the @turbo macro to work on arrays containing structs such as Matrix{Complex{Float64}}. Instead, it is currently recommended that users wishing to apply @turbo to arrays of structs use packages such as StructArrays.jl, which transform an array where each element is a struct into a struct where each element is an array. Using StructArrays.jl, we can write a matrix multiply (gemm) kernel that works on matrices of Complex{Float64}s and Complex{Int}s:

using LoopVectorization, LinearAlgebra, StructArrays, BenchmarkTools, Test

BLAS.set_num_threads(1); @show BLAS.vendor()

const MatrixFInt64 = Union{Matrix{Float64}, Matrix{Int}}

function mul_avx!(C::MatrixFInt64, A::MatrixFInt64, B::MatrixFInt64)
    @turbo for m ∈ 1:size(A,1), n ∈ 1:size(B,2)
        Cmn = zero(eltype(C))
        for k ∈ 1:size(A,2)
            Cmn += A[m,k] * B[k,n]
        end
        C[m,n] = Cmn
    end
end

function mul_add_avx!(C::MatrixFInt64, A::MatrixFInt64, B::MatrixFInt64, factor=1)
    @turbo for m ∈ 1:size(A,1), n ∈ 1:size(B,2)
        ΔCmn = zero(eltype(C))
        for k ∈ 1:size(A,2)
            ΔCmn += A[m,k] * B[k,n]
        end
        C[m,n] += factor * ΔCmn
    end
end

const StructMatrixComplexFInt64 = Union{StructArray{ComplexF64,2}, StructArray{Complex{Int},2}}

function mul_avx!(C::StructMatrixComplexFInt64, A::StructMatrixComplexFInt64, B::StructMatrixComplexFInt64)
    mul_avx!(C.re, A.re, B.re)         # C.re = A.re * B.re
    mul_add_avx!(C.re, A.im, B.im, -1) # C.re -= A.im * B.im
    mul_avx!(C.im, A.re, B.im)         # C.im = A.re * B.im
    mul_add_avx!(C.im, A.im, B.re)     # C.im += A.im * B.re
end

This mul_avx! kernel can now accept StructArray matrices of complex numbers and multiply them efficiently:

julia> M, K, N = 56, 57, 58
(56, 57, 58)

julia> A  = StructArray(randn(ComplexF64, M, K));

julia> B  = StructArray(randn(ComplexF64, K, N));

julia> C1 = StructArray(Matrix{ComplexF64}(undef, M, N));

julia> C2 = collect(similar(C1));

julia> @btime mul_avx!($C1, $A, $B)
  13.525 μs (0 allocations: 0 bytes)

julia> @btime mul!(    $C2, $(collect(A)), $(collect(B))); # collect turns the StructArray into a regular Array
  14.003 μs (0 allocations: 0 bytes)

julia> @test C1 ≈ C2
Test Passed

Similar approaches can be taken to make kernels work with a variety of numeric struct types such as dual numbers, DoubleFloats, etc.
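
As a sketch of that idea, here is a hypothetical minimal dual-number type (illustrative, not an API from any package besides StructArrays). StructArrays exposes each field as a plain array, so an elementwise kernel touches only Float64 data and could, under the same caveats as above, be prefixed with @turbo:

```julia
using StructArrays

# Minimal dual number: a value and one partial derivative.
struct Dual
    val::Float64
    der::Float64
end

# A StructArray{Dual} stores all `val`s and all `der`s contiguously,
# so the kernel below loops over plain Float64 arrays only.
function dualmul!(c::StructArray{Dual}, a::StructArray{Dual}, b::StructArray{Dual})
    for i in eachindex(a, b)  # candidate for @turbo: only Float64 loads/stores
        c.val[i] = a.val[i] * b.val[i]
        # product rule: (ab)' = a'b + ab'
        c.der[i] = a.der[i] * b.val[i] + a.val[i] * b.der[i]
    end
    return c
end
```

Here dualmul! multiplies two vectors of dual numbers elementwise, propagating derivatives via the product rule, field by field.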

Packages using LoopVectorization

If you're using LoopVectorization, please feel free to file a PR adding yours to the list!

Download Details:

Author: JuliaSIMD
Source Code: 
License: MIT license

#julia #loops #vector

Nat Grady


Terra: R Package for Spatial Data Handling


terra is an R package for spatial data analysis. There are tutorials at

Stack Overflow is the best place to ask questions if you get stuck. Make sure to include a simple reproducible example. But if you think you have found a bug, please file an issue.

terra replaces the raster package. The interfaces of terra and raster are similar, but terra is simpler, faster and can do more.


terra is available from CRAN, so you can use install.packages("terra") to get the current released version.

The easiest way to use the development version on Windows or macOS is to install it from the R-universe, like this:

install.packages('terra', repos='')

From source-code

To install from source-code, first install the Rcpp package that terra depends on:

install.packages("Rcpp")
And then continue based on the OS you are using.


On Windows, you need to first install Rtools to get a C++ compiler that R can use. You need a recent version of Rtools42 (rtools42-5355-5357).

Then, in R, install the package.



On macOS, first install gdal and proj with homebrew

brew install pkg-config
brew install gdal

Followed by (note the additional configure argument needed for the current homebrew version of proj, 9.1.0):

remotes::install_github("rspatial/terra", configure.args = "--with-proj-lib=/opt/homebrew/Cellar/proj/9.1.0/lib/")

To install the CRAN version from source you would do

install.packages("terra", configure.args = "--with-proj-lib=/opt/homebrew/Cellar/proj/9.1.0/lib/")


The easy way to install terra on linux is with r2u.

The harder way: C++11, GDAL (>= 2.2.3), GEOS (>= 3.4.0), PROJ (>= 4.9.3), sqlite3 are required, but more recent versions highly recommended.

To install these system requirements on Ubuntu you can do:

sudo add-apt-repository ppa:ubuntugis/ubuntugis-unstable
sudo apt-get update
sudo apt-get install libgdal-dev libgeos-dev libproj-dev 

And now, in R, install the package:

remotes::install_github("rspatial/terra")
See the sf instructions for installation on other linux systems --- and for possible updates/improvements on the above instructions.

Download Details:

Author: rspatial
Source Code: 
License: GPL-3.0 license

#r #vector #geospatial #raster 

Nat Grady


Geocompr: Open Source Book: Geocomputation with R

Geocomputation with R


This repository hosts the code underlying Geocomputation with R, a book by Robin Lovelace, Jakub Nowosad, and Jannes Muenchow. If you find the contents useful, please cite it as follows:

Lovelace, Robin, Jakub Nowosad and Jannes Muenchow (2019). Geocomputation with R. The R Series. CRC Press.

The first version of the book has been published by CRC Press in the R Series and can be viewed online, where the latest version is also available.

Note: we are actively working on the Second Edition 🏗

Since commencing work on the Second Edition in September 2021 much has changed, including:

  • Replacement of raster with terra in Chapters 1 to 7 (see commits related to this update here)
  • Update of Chapter 7 to include mention of alternative ways of reading in OSM data in #656
  • Refactor build settings so the book builds on Docker images in the geocompr/docker repo
  • Improve the experience of using the book in Binder (ideal for trying out the code before installing or updating the necessary R packages), as documented in issue #691 (thanks to yuvipanda)
  • Improved communication of binary spatial predicates in Chapter 4 (see #675)
  • New section on the links between subsetting and clipping (see #698) in Chapter 5
  • New section on the dimensionally extended 9 intersection model (DE-9IM)
  • New chapter on raster-vector interactions split out from Chapter 5
  • New section on the sfheaders package
  • New section in Chapter 2 on spherical geometry engines and the s2 package
  • Replacement of code based on the old mlr package with code based on the new mlr3 package, as described in a huge pull request

See…main for a continuously updated summary of the changes to date. At the time of writing (April 2022) there have been more than 10k lines of code/prose added, lots of refactoring!

Contributions at this stage are very welcome.


We encourage contributions on any part of the book, including:

  • improvements to the text, e.g. clarifying unclear sentences, fixing typos (see guidance from Yihui Xie);
  • changes to the code, e.g. to do things in a more efficient way;
  • suggestions on content (see the project’s issue tracker);
  • improvements to and alternative approaches in the Geocompr solutions booklet hosted at (see a blog post on how to update solutions in files such as _01-ex.Rmd here)

See for the book’s style.

Many thanks to all contributors to the book so far via GitHub (this list will update automatically): prosoitos, florisvdh, katygregg, Lvulis, rsbivand, iod-ine, KiranmayiV, babayoshihiko, cuixueqin, defuneste, zmbc, erstearns, FlorentBedecarratsNM, dcooley, marcosci, appelmar, MikeJohnPage, eyesofbambi, darrellcarvalho, nickbearman, tyluRp, giocomai, KHwong12, LaurieLBaker, MarHer90, mdsumner, pat-s, e-clin, gisma, ateucher, annakrystalli, andtheWings, kant, gavinsimpson, Himanshuteli, yutannihilation, jimr1603, jbixon13, olyerickson, yvkschaefer, katiejolly, kwhkim, layik, mpaulacaldas, mtennekes, mvl22, ganes1410, richfitz, wdearden, yihui, adambhouston, chihinl, cshancock, ec-nebi, gregor-d, jasongrahn, p-kono, pokyah, schuetzingit, sdesabbata, tim-salabim, tszberkowitz.

During the project we aim to contribute ‘upstream’ to the packages that make geocomputation with R possible. This impact is recorded in our-impact.csv.

Downloading the source code

The recommended way to get the source code underlying Geocomputation with R on your computer is by cloning the repo. You can do that on any computer with Git installed with the following command:

git clone

An alternative approach, which we recommend for people who want to contribute to open source projects hosted on GitHub, is to install the gh CLI tool. From there cloning a fork of the source code, that you can change and share (including with Pull Requests to improve the book), can be done with the following command:

gh repo fork robinlovelace/geocompr # (gh repo clone robinlovelace/geocompr also works)

Both of those methods require you to have Git installed. If not, you can download the book’s source code from the URL . Download/unzip the source code from the R command line to increase reproducibility and reduce time spent clicking around:

u = ""
f = basename(u)
download.file(u, f)        # download the file
unzip(f)                   # unzip it
file.rename(f, "geocompr") # rename the directory
rstudioapi::openProject("geocompr") # or open the folder in vscode / other IDE

Reproducing the book in R/RStudio/VS Code

To ease reproducibility, we created the geocompkg package. Install it with the following commands:

# To reproduce the first Part (chapters 1 to 8):
remotes::install_github("geocompr/geocompkg")

Installing geocompkg will also install core packages required for reproducing Part 1 of the book (chapters 1 to 8). Note: you may also need to install system dependencies if you’re running Linux (recommended) or Mac operating systems. You also need to have the remotes package installed:

To reproduce the book in its entirety, run the following command (which installs additional ‘Suggests’ packages; this may take some time to run!):

# To reproduce all chapters (install lots of packages, may take some time!)
remotes::install_github("geocompr/geocompkg", dependencies = TRUE)

You need a recent version of the GDAL, GEOS, PROJ and udunits libraries installed for this to work on Mac and Linux. See the sf package’s README for information on that. After the dependencies have been installed you should be able to build and view a local version the book with:

# Change this depending on where you have the book code stored:
setwd("/location/of/geocompr")
 # or code /location/of/geocompr in the system terminal
 # or cd /location/of/geocompr then R in the system terminal, then:
bookdown::render_book("index.Rmd") # to build the book
browseURL("_book/index.html")      # to view it

Geocompr in a devcontainer

A great feature of VS Code is devcontainers, which allow you to develop in an isolated Docker container. If you have VS Code and the necessary dependencies installed on your computer, you can build Geocomputation with R in a devcontainer as shown below (see #873 for details):

Geocompr in Binder

For many people the quickest way to get started with Geocomputation with R is in your web browser via Binder. To see an interactive RStudio Server instance click on the following button, which will open with an R installation that has all the dependencies needed to reproduce the book:

Launch Rstudio

You can also have a play with the repo in RStudio Cloud by clicking on this link (requires log-in):

Launch Rstudio Cloud

Geocomputation with R in a Docker container

To ease reproducibility we have made Docker images available, at geocompr/geocompr on DockerHub. These images allow you to explore Geocomputation with R in a virtual machine that has up-to-date dependencies.

After you have installed docker and set it up on your computer, you can start RStudio Server without a password (see the Rocker project for info on how to add a password and other security steps for public-facing servers):

docker run -p 8787:8787 -e DISABLE_AUTH=TRUE geocompr/geocompr

If it worked you should be able to open-up RStudio server by opening a browser and navigating to http://localhost:8787/ resulting in an up-to-date version of R and RStudio running in a container.

To start a plain R session, run:

docker run -it geocompr/geocompr R

If you see something like this after following the steps above, congratulations: it worked! See for more info.

If you want to call QGIS from R, you can use the qgis tag, by running the following command for example (which also shows how to set a password and use a different port on localhost):

docker run -d -p 8799:8787 -e USERID=$UID -e PASSWORD=strongpass -v $(pwd):/home/rstudio/geocompr robinlovelace/geocompr:qgis

From this point, to build the book you can open projects in the geocompr directory from the project box in the top-right hand corner, and knit index.Rmd with the little knit button above the RStudio script panel (Ctrl+Shift+B should do the same job).

See the geocompr/docker repo for details, including how to share volumes between your computer and the Docker image, for using geographic R packages on your own data and for information on available tags.

Reproducing this README

To reduce the book’s dependencies, scripts to be run infrequently to generate input for the book are run on creation of this README.

The additional packages required for this can be installed as follows:


With these additional dependencies installed, you should be able to run the following scripts, which create content for the book, that we’ve removed from the main book build to reduce package dependencies and the book’s build time:


Note: the .Rproj file is configured to build a website not a single page. To reproduce this README use the following command:

rmarkdown::render("README.Rmd", output_format = "github_document", output_file = "")


To cite packages used in this book we use code from Efficient R Programming:

# geocompkg:::generate_citations()

This generates .bib and .csv files containing the packages. The current list of packages used can be read in as follows:

pkg_df = readr::read_csv("extdata/package_list.csv")

Other citations are stored online using Zotero.

If you would like to add to the references, please use Zotero, join the open group, and add your citation to the open geocompr library.

We use the following citation key format:


This can be set from inside Zotero desktop with the Better Bibtex plugin installed, by selecting the following menu options (with the shortcut Alt+E followed by N), as illustrated in the figure below:

Edit > Preferences > Better Bibtex

Zotero settings: these are useful if you want to add references.

We use Zotero because it is a powerful open source reference manager that integrates well with the citr package, as described in the GitHub repo Robinlovelace/rmarkdown-citr-demo.

Download Details:

Author: Robinlovelace
Source Code: 
License: View license

#r #education #book #maps #vector 


Vec.jl: 2D and 3D Vectors and Their Operations for Julia


Provides 2D and 3D vector types for vector operations in Julia.


Run one of these commands in the Julia REPL:

Through the SISL registry:

] registry add
add Vec

Through Pkg

import Pkg


Vec.jl provides several vector types, named after their groups. All types are immutable and are subtypes of StaticArrays' FieldVector, so they can be indexed and used as vectors in many contexts.

  • VecE2 provides an (x,y) type of the Euclidean-2 group.
  • VecE3 provides an (x,y,z) type of the Euclidean-3 group.
  • VecSE2 provides an (x,y,theta) type of the special-Euclidean 2 group.
v = VecE2(0, 1)
v = VecSE2(0,1,0.5)
v = VecE3(0, 1, 2)
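
Because these are FieldVector subtypes, generic vector code just works on them. A minimal sketch of the same pattern using StaticArrays directly (MyVec2 is an illustrative stand-in, not part of Vec.jl):

```julia
using StaticArrays, LinearAlgebra

# A FieldVector subtype behaves like a length-2 vector with named fields.
struct MyVec2 <: FieldVector{2, Float64}
    x::Float64
    y::Float64
end

v = MyVec2(3.0, 4.0)
v.x          # named field access: 3.0
v[2]         # index access: 4.0
norm(v)      # Euclidean norm: 5.0
v + v        # arithmetic stays in the same type: MyVec2(6.0, 8.0)
```

This is why the Vec.jl types can be passed to generic linear-algebra code without any conversion.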

Additional geometry types include Quat for quaternions, Line, LineSegment, and Projectile.

The switch to StaticArrays brings several breaking changes. If you need a backwards-compatible version, please check out the v0.1.0 tag with cd(Pkg.dir("Vec")); run(`git checkout v0.1.0`).

Download Details:

Author: sisl
Source Code: 
License: View license

#julia #vector #3d 


SparseVectorMatrix.jl: SparseMatrices As A Vector Of SparseVectors


This package provides an alternative implementation of SparseMatrices that maintains a vector of SparseVectors. Such an implementation is best used when all matrix operations require access to just one column each.


using SparseVectorMatrix

# Random Generation
a = svmrand(100, 100, 0.1)

# Getindex
a[:, 1]                      # Returns an entire column quickly
a[1, :]                      # Returns an entire row, but slowly.

# SetIndex
a[:, 1] = 1:100              # Assign an entire column quickly.
a[1, :] = 1:100              # Assign an entire row, but slowly.

b = svmrand(100, 100, 0.1)
hcat(a, b)                   # Concatenates horizontally. Very fast.
vcat(a, b)                   # Concatenates vertically. Not as fast.

arr = [svmrand(100, 100, 0.1) for i in 1:4]
hvcat((2,2), arr...)         # Grid Concatenation. Quite fast.

What's supported?

  • svmrand (Similar to sprand)
  • getindex
  • setindex
  • hcat
  • vcat
  • hvcat
  • A bunch of other basic methods like nnz, size, full, etc.



Download Details:

Author: Pranavtbhat
Source Code: 
License: View license

#julia #vector #matrix 


A Set Of tools for Dealing with Recursive Arrays Like Arrays Of Arrays


RecursiveArrayTools.jl is a set of tools for dealing with recursive arrays like arrays of arrays.

Tutorials and Documentation

For information on using the package, see the stable documentation. Use the in-development documentation for the version of the package that includes unreleased features.


using RecursiveArrayTools
a = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
b = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
vA = VectorOfArray(a)
vB = VectorOfArray(b)

vA .* vB # Now all standard array stuff works!

a = (rand(5),rand(5))
b = (rand(5),rand(5))
pA = ArrayPartition(a)
pB = ArrayPartition(b)

pA .* pB # Now all standard array stuff works!

Download Details:

Author: SciML
Source Code: 
License: View license

 #julia #array #tools #vector 


VSL.jl: Julia Bindings to The intel Vector Statistics Library


This package provides bindings to the Intel Vector Statistics Library.

Using VSL.jl

You must have the Intel® Math Kernel Library installed to use VSL.jl, and the shared library must be in a directory known to the linker.

VSL.jl provides several basic random number generators (BRNGs) and distributions, and each distribution has at least one method to generate random numbers. After VSL.jl is loaded, you can use the distributions as follows:

julia> using VSL

julia> brng = BasicRandomNumberGenerator(VSL_BRNG_MT19937, 12345);
# A BRNG created, in which 12345 is the random seed.

julia> u = Uniform(brng, 0.0, 1.0); # Create a uniform distribution between 0.0 and 1.0.

julia> rand(u) # Generate one random number.

julia> rand(u, 2, 3) # Generate an random 2*3 array.
2×3 Array{Float64,2}:
 0.732685   0.820175  0.802848
 0.0101692  0.825207  0.29864 

julia> A = Array{Float64}(3, 4);

julia> rand!(u, A) # Fill an array with random numbers.
3×4 Array{Float64,2}:
 0.855138  0.193661  0.436228  0.124267
 0.368412  0.270245  0.161688  0.874174
 0.931785  0.566008  0.373064  0.432936

Basic random number generators

Use the Enum BRNGType to set the type of BRNG.

BRNGType Enum

Supported distributions

Continuous: Uniform, Gaussian, GaussianMV, Exponential, Laplace, Weibull, Cauchy, Rayleigh, Lognormal, Gumbel, Gamma, Beta

Discrete: UniformDiscrete, UniformBits, UniformBits32, UniformBits64, Bernoulli, Geometric, Binomial, Hypergeometric, Poisson, PoissonV, NegBinomial


Most of the discrete distributions return 32-bit integer values. Please be careful when using those distributions.

For more information, please refer to the Intel® Math Kernel Library Developer Reference

Download Details:

Author: Sunoru
Source Code: 
License: MIT license

#julia #cryptography #vector 

Reid Rohan


Fast 2d Geometry Math: Vector2, Rectangle, Circle, Matrix2x3


Fast 2d geometry math: Vector2, Rectangle, Circle, Matrix2x3 (2D transformation), BoundingBox, Line2, Segment2, Intersections, Distances, Transitions (animation/tween), Noise, Random numbers.

So the objective is to "be fast".

Help needed / TODO LIST

  • API completeness
  • Testing
  • Use falafel/esprima to create an asm.js build
  • More Numerical integrators
  • AI: Path-finding, Steer, Backtracking
  • Minkowski distance, euclidean, Manhattan
  • Bézier math
  • Serialization / De-serialization
  • did I miss anything useful?


Performance is based on good practices.

  • Avoid new
  • Use arrays instead of objects; this is a huge performance boost!
  • Avoid creating unnecessary variables (reuse intermediate variables); only create & clone methods should create new variables.
  • Cache every function call in a single variable, for example: Vec2.add => vec2_add, even Math.*
  • If accessing a multi-dimensional array in a loop, cache the array access: for(i...) { carr = arr[i]; carr[X]; }
  • Do not use forEach, map, every, or other looping methods that require apply/call usage; both are costly.

See some performance tests that prove it.

funlinify is a library that does function inline expansion for JavaScript. It's at an early stage, but it works perfectly for our usage here.

Obviously I ignore some of these rules myself in parts of this library. Feel free to file an issue :)


npm install -g grunt
npm install -g grunt-cli

grunt dist

Create distribution packages using browserify and documentation.

debug: debug/js-2dmath-browser-debug.js

  • argumentify Assert on invalid arguments to properly debug your app.

dist: dist/js-2dmath-browser.js

dist.min: js-2dmath-browser.min.js

grunt watch

Watch every change and rebuild the distribution code.

What can you do with js-2dmath?

See some examples.


The documentation is autogenerated with falafel; see dist.js for more fun! :)


How do I know a variable's type?

You can't; there is no instanceof or anything like that: everything is numbers/arrays.

I chose to keep track of all types using meaningful naming, or by enclosing the variable in an object like:

var movable = {
    body: Polygon.create(/*...*/), // could be a circle; change the type accordingly
    type: "polygon"
};

Download Details:

Author: llafuente
Source Code: 

#javascript #node #vector 

Fast 2d Geometry Math: Vector2, Rectangle, Circle, Matrix2x3

ChainedVectors.jl: Few Utility Types Over Julia Vector Type

ChainedVectors consist of a bunch of types that:

  • chain multiple Vectors and make it appear like a single Vector
  • give a window into a portion of the chained vector that appears like a single Vector. The window may straddle across boundaries of multiple elements in the chain.


Chains multiple vectors. Only index translation is done and the constituent Vectors are not copied. This can be efficient in situations where avoiding allocation and copying of data is important. For example, during sequential file reading, ChainedVectors can be used to store file blocks progressively as the file is read. As it grows beyond a certain size, buffers from the head of the chain can be removed and reused to read further data at the tail.

julia> v1 = [1, 2, 3]
3-element Int64 Array:
 1
 2
 3

julia> v2 = [4, 5, 6]
3-element Int64 Array:
 4
 5
 6

julia> cv = ChainedVector{Int}(v1, v2)
6-element Int64 ChainedVector:
[1, 2, 3, 4, 5, ...]

julia> cv[1]
1

julia> cv[5]
5

ChainedVector{Uint8} has specialized methods for search, beginswith, and beginswithat that help in working with textual data.

julia> cv = ChainedVector{Uint8}(b"Hello World ", b"Goodbye World ")
26-element Uint8 ChainedVector:
[0x48, 0x65, 0x6c, 0x6c, 0x6f, ...]

julia> search(cv, 'W')
7

julia> search(cv, 'W', 8)
21

julia> search(cv, 'W', 22)
0

julia> beginswith(cv, b"Hello")
true

julia> beginswith(cv, b"ello")
false

julia> beginswithat(cv, 2, b"ello")
true

julia> beginswithat(cv, 7, b"World Goodbye")
true

Window view of a ChainedVector

Using the sub method, a portion of the data in the ChainedVector can be accessed as a view:

sub(cv::ChainedVector, r::Range1{Int})


julia> v1 = [1, 2, 3, 4, 5, 6];

julia> v2 = [7, 8, 9, 10, 11, 12];

julia> cv = ChainedVector{Int}(v1, v2);

julia> sv = sub(cv, 3:10)
8-element Int64 SubVector:
[3, 4, 5, 6, 7, ...]

julia> sv[1]
3

julia> # sv[7] is the same as cv[9] and v2[3]

julia> println("sv[7]=$(sv[7]), v2[3]=$(v2[3]), cv[9]=$(cv[9])")
sv[7]=9, v2[3]=9, cv[9]=9


julia> # changing values through sv will be visible at cv and v2

julia> sv[7] = 71

julia> println("sv[7]=$(sv[7]), v2[3]=$(v2[3]), cv[9]=$(cv[9])")
sv[7]=71, v2[3]=71, cv[9]=71

The sub method returns a Vector that indexes into the chained vector at the given range. The returned Vector is not a copy, and any modifications affect the ChainedVector and consequently its constituent vectors as well. The returned vector can be an instance of either a SubVector or a Vector obtained through the method fast_sub_vec.


Provides index translations for abstract vectors. Example:

julia> v1 = [1, 2, 3, 4, 5, 6];
julia> sv = SubVector(v1, 2:5)
4-element Int64 SubVector:
[2, 3, 4, 5]

julia> sv[1]
2

julia> sv[1] = 20
20

julia> v1[2]
20



Provides an optimized way of creating a Vector that points within another Vector and uses the same underlying data. Since it reuses the same memory locations, it works only on concrete Vectors that give contiguous memory locations. Internally, the instance of the view vector is maintained in a WeakKeyDict along with a reference to the larger vector, to prevent the GC from releasing the parent vector while the view is in use. Example:

julia> v1 = [1, 2, 3, 4, 5, 6];
julia> sv = fast_sub_vec(v1, 2:5)
4-element Int64 Array:
 2
 3
 4
 5

julia> println("sv[1]=$(sv[1]), v1[2]=$(v1[2])")
sv[1]=2, v1[2]=2

julia> sv[1] = 20

julia> println("sv[1]=$(sv[1]), v1[2]=$(v1[2])")
sv[1]=20, v1[2]=20

Tests and Benchmarks

Below is the output of some benchmarks done using time_tests.jl located in the test folder.

Times for getindex across all elements of vectors of 33554432 integers.
Split into two 16777216 buffers for ChainedVectors.

Vector: 0.041909848
ChainedVector: 0.261795721
SubVector: 0.172702399
FastSubVector: 0.041579312
SubArray: 3.848813439
SubVector of ChainedVector: 0.418898455

Download Details:

Author: Tanmaykm
Source Code: 
License: MIT license

#julia #vector 

ChainedVectors.jl: Few Utility Types Over Julia Vector Type

Shapefile.jl: Parsing .shp Files in Julia


This library supports reading ESRI Shapefiles in pure Julia.

Quick Start

Basic example of reading a shapefile from test cases:

using Shapefile

path = joinpath(dirname(pathof(Shapefile)),"..","test","shapelib_testcases","test.shp")
table = Shapefile.Table(path)

# if you only want the geometries and not the metadata in the DBF file
geoms = Shapefile.shapes(table)

# whole columns can be retrieved by their name
table.Descriptio  # => Union{String, Missing}["Square with triangle missing", "Smaller triangle", missing]

# example function that iterates over the rows and gathers shapes that meet specific criteria
function selectshapes(table)
    geoms = empty(Shapefile.shapes(table))
    for row in table
        if !ismissing(row.TestDouble) && row.TestDouble < 2000.0
            push!(geoms, Shapefile.shape(row))
        end
    end
    return geoms
end

# the metadata can be converted to other Tables such as DataFrame
using DataFrames
df = DataFrame(table)

Shapefiles can contain multiple parts for each shape entity. Use GeoInterface.coordinates to fully decompose the shape data into parts.

# Example of converting the 1st shape of the file into parts (array of coordinates)
julia> GeoInterface.coordinates(Shapefile.shape(first(table)))
2-element Vector{Vector{Vector{Vector{Float64}}}}:
 [[[20.0, 20.0], [20.0, 30.0], [30.0, 30.0], [20.0, 20.0]]]
 [[[0.0, 0.0], [100.0, 0.0], [100.0, 100.0], [0.0, 100.0], [0.0, 0.0]]]

Alternative packages

If you want another lightweight pure Julia package for reading feature files, consider also GeoJSON.jl.

For much more fully featured support for reading and writing geospatial data, at the cost of a larger binary dependency, look at GDAL.jl or ArchGDAL.jl packages. The latter builds a higher level API on top of GDAL.jl.

Download Details:

Author: JuliaGeo
Source Code: 
License: View license

#julia #vector #geospatial 

Shapefile.jl: Parsing .shp Files in Julia

GeoJSON.jl: Utilities for Working with GeoJSON Data in Julia


Read GeoJSON files using JSON3.jl, and provide the Tables.jl interface.

This package is heavily inspired by, and borrows code from, JSONTables.jl, which does the same thing for the general JSON format. GeoJSON puts the geometry in a geometry column, and adds all properties in the columns individually.


julia> using GeoJSON, DataFrames

julia> jsonbytes = read("path/to/a.geojson");

julia> fc = GeoJSON.read(jsonbytes)
FeatureCollection with 171 Features

julia> first(fc)
Feature with geometry type Polygon and properties Symbol[:geometry, :timestamp, :version, :changeset, :user, :uid, :area, :highway, :type, :id]

# use the Tables interface to convert the format, extract data, or iterate over the rows
julia> df = DataFrame(fc)

Download Details:

Author: JuliaGeo
Source Code: 
License: MIT license

#julia #json #vector 

GeoJSON.jl: Utilities for Working with GeoJSON Data in Julia

Thin Julia Wrapper for GDAL - Geospatial Data Abstraction Library


Julia wrapper for GDAL - Geospatial Data Abstraction Library. This package is a binding to the C API of GDAL/OGR. It provides only a C style usage, where resources must be closed manually, and datasets are pointers.

Other packages can build on top of this to provide a more Julian user experience. See for example ArchGDAL.jl.

Most users will want to use ArchGDAL.jl instead of using GDAL.jl directly.


This package is registered, so add it using Pkg. This will also download GDAL binaries created in Yggdrasil.

pkg> add GDAL

To check if it is installed correctly, you could run the test suite with:

pkg> test GDAL


Docstrings are automatically inserted from the GDAL documentation. Note that these are written for the C API, so function names and argument type names will differ.

julia> using GDAL

help?> GDAL.ogr_g_creategeometry
  OGR_G_CreateGeometry(OGRwkbGeometryType eGeometryType) -> OGRGeometryH

  Create an empty geometry of desired type.


    •    eGeometryType: the type code of the geometry to be created.


  handle to the newly created geometry or NULL on failure. Should be freed with OGR_G_DestroyGeometry() after use.

Further usage documentation is not yet available, but the files test/tutorial_raster.jl and test/tutorial_vector.jl should provide a good hint, based on the upstream API tutorials.

The bulk of this package is generated automatically by the scripts under gen/.

Using the GDAL and OGR utilities

The provided GDAL installation also contains the commonly used utilities such as gdal_translate and ogr2ogr. They can be called from Julia like so:

using GDAL

# list information about a raster dataset
GDAL.gdalinfo_path() do gdalinfo
    run(`$gdalinfo path/to/raster-file`)
end

# convert raster data between different formats
GDAL.gdal_translate_path() do gdal_translate
    run(`$gdal_translate -of COG input.asc output.tif`)
end

# list information about an OGR-supported data source
GDAL.ogrinfo_path() do ogrinfo
    run(`$ogrinfo path/to/vector-file`)
end

# convert simple features data between file formats
GDAL.ogr2ogr_path() do ogr2ogr
    run(`$ogr2ogr -f FlatGeobuf output.fgb input.shp`)
end

The GDAL.<util>_path functions are defined in the GDAL_jll package. If you only wish to run the utilities, that package has all you need. A list of the available utilities can be found here; documentation for them is available upstream. Note that programs implemented in Python (ending in .py) are not available, since those would require a Python installation.

Since GDAL 2.1's RFC 59.1, most utilities are also available as functions in the library; they are implemented and tested in this package. Using these avoids the need to call the binaries.

If you want to use these utilities from outside Julia, note that this will not work unless you set two things:

  1. The environment variable GDAL_DATA must be set to the value returned in julia by GDAL.GDAL_DATA[].
  2. Julia's Sys.BINDIR must be in your path.

Inside Julia, (2) is always the case, and (1) happens on loading the GDAL module, in its __init__ function.
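The two steps above might look like this in a shell session (a hypothetical sketch; the actual values depend on your installation):

```shell
# Query the required values from Julia, then export them for external use.
export GDAL_DATA="$(julia -e 'using GDAL; print(GDAL.GDAL_DATA[])')"
export PATH="$(julia -e 'print(Sys.BINDIR)'):$PATH"

# The utilities shipped with the Julia installation can now run directly.
gdalinfo --version
```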

Missing driver to support a format

If you get an error such as the one below:

GDALError (CE_Failure, code 6):
    The <...> driver needs to be compiled to support <...>

This means that the GDAL binaries you are using, which normally come from the Yggdrasil community build tree, are not compiled with support for the format or feature you need. GDAL is a large library with many optional dependencies which allow support for more formats. Currently the number of supported formats is still limited, but it will grow over time. Lists of available formats can be found here for rasters and here for vectors. If you need support for another format, consider making an issue in this repository. Many formats need external libraries as added dependencies. This means a Yggdrasil build also needs to be available for that library, and added as a dependency. See issue #65 for a discussion on which new drivers should be prioritized.

Download Details:

Author: JuliaGeo
Source Code: 
License: View license

#julia #vector #geospatial 

Thin Julia Wrapper for GDAL - Geospatial Data Abstraction Library

A High Level API for GDAL - Geospatial Data Abstraction Library


GDAL is a translator library for raster and vector geospatial data formats that is released under an X/MIT license by the Open Source Geospatial Foundation. As a library, it presents an abstract data model to drivers for various raster and vector formats.

This package aims to be a complete solution for working with GDAL in Julia, similar in scope to the SWIG bindings for Python and the user-friendliness of Fiona and Rasterio. It builds on top of GDAL.jl, and provides a high level API for GDAL, espousing the following principles.

Principles (The Arch Way)

(adapted from:

  • simplicity: ArchGDAL tries to avoid unnecessary additions or modifications. It preserves the GDAL Data Model and requires minimal dependencies.
  • modernity: ArchGDAL strives to maintain the latest stable release versions of GDAL as long as systemic package breakage can be reasonably avoided.
  • pragmatism: The principles here are only useful guidelines. Ultimately, design decisions are made on a case-by-case basis through developer consensus. Evidence-based technical analysis and debate are what matter, not politics or popular opinion.
  • user-centrality: Whereas other libraries attempt to be more user-friendly, ArchGDAL shall be user-centric. It is intended to fill the needs of those contributing to it, rather than trying to appeal to as many users as possible.
  • versatility: ArchGDAL will strive to remain small in its assumptions about the range of user-needs, and to make it easy for users to build their own extensions/conveniences.


To install this package, run the following command in the Pkg REPL-mode,

pkg> add ArchGDAL

To test if it is installed correctly,

pkg> test ArchGDAL

Getting Involved


This package would not be possible without JuliaLang, GDAL and GDAL.jl, each maintained by their respective communities. In case of any contention for support and involvement, we encourage participation and contributions to those projects and communities over this package.

Style Guide

ArchGDAL.jl uses JuliaFormatter.jl as an autoformatting tool, and uses the options in .JuliaFormatter.toml.

If you wish to format code, cd to the ArchGDAL.jl directory, then run:

] add JuliaFormatter
using JuliaFormatter
format(".")


To manage the dependencies of this package, we work with environments:

Navigate to the directory corresponding to the package:

$ cd /Users/yeesian/.julia/dev/ArchGDAL

Start a session:

$ julia --project

Activate the environment corresponding to the package's Project.toml:

(@v1.6) pkg> activate .
  Activating environment at `~/.julia/dev/ArchGDAL/Project.toml`

Manage the dependencies using Pkg in, e.g.

(ArchGDAL) pkg> st
     Project ArchGDAL v0.6.0
      Status `~/.julia/dev/ArchGDAL/Project.toml`
  [3c3547ce] DiskArrays
  [add2ef01] GDAL
  [68eda718] GeoFormatTypes
  [cf35fbd7] GeoInterface
  [bd369af6] Tables
  [ade2ca70] Dates

(ArchGDAL) pkg> add CEnum
   Resolving package versions...
    Updating `~/.julia/dev/ArchGDAL/Project.toml`
  [fa961155] + CEnum v0.4.1
  [3c3547ce] + DiskArrays v0.2.7
  [add2ef01] + GDAL v1.2.1
  [68eda718] + GeoFormatTypes v0.3.0
  [cf35fbd7] + GeoInterface v0.5.5
  [bd369af6] + Tables v1.4.2

Update the [compat] section of Project.toml so that Julia can resolve the versions, e.g.

CEnum = "0.4"

Download Details:

Author: Yeesian
Source Code: 
License: View license

#julia #vector #geospatial 

A High Level API for GDAL - Geospatial Data Abstraction Library
Monty Boehm


GloVe.jl: Implements Global Word Vectors


Implements Global Word Vectors.

using Pkg

See benchmark/perf.jl for a usage example.

Here's the rough idea:

Take text and make a LookupTable. This is a dictionary that has a map from words -> ids and vice-versa. Preprocessing steps should be taken prior to this.

Use weightedsums to get the weighted co-occurrence sum totals. This returns a CooccurenceDict.

Convert the CooccurenceDict to a CooccurenceVector. The reasoning for this is faster indexing when we train the model.

Initialize a Model and train the model with the CooccurenceVector using the adagrad! method.
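The workflow above might be sketched roughly as follows (a hypothetical sketch: the exact constructor and function signatures are assumptions, not GloVe.jl's documented API — see benchmark/perf.jl for real usage):

```julia
# Hypothetical sketch; signatures are assumptions, not the documented API.
using GloVe

corpus = split("the quick brown fox jumps over the lazy dog")

# 1. Build a word <-> id lookup table from the (preprocessed) text.
table = LookupTable(corpus)

# 2. Accumulate weighted co-occurrence sums, returning a CooccurenceDict.
codict = weightedsums(Float64, table, corpus)

# 3. Convert the dict to a CooccurenceVector for faster training-time indexing.
covec = CooccurenceVector{Float64}(collect(codict))

# 4. Initialize a Model and train it with adagrad!.
model = Model(Float64, length(table))
adagrad!(model, covec)
```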

It's pretty fast at this point. On a single core it's roughly 3x slower than the optimized C version.


[ ] More docs.

[ ] See if precompile(args...) does anything

[ ] Notebook example (has to have emojis)

[ ] Multi-threading

Author: Domluna
Source Code: 
License: Apache-2.0 license

#julia #vector 

GloVe.jl: Implements Global Word Vectors