Desmond Gerber

Clean Node_modules with PowerShell

As we run many experiments with UI-based frameworks, node_modules folders occupy a large portion of the disk, and after a few years the free space starts to diminish.

I have a 500 GB SSD in my laptop and recently ran out of space, so I started to analyze what was occupying so much of it. I do not store personal images and documents on my laptop, at least not more than 1-2 GB.

When I analyzed my code folder, I realized that experiments I did years ago were still on my machine and needed some cleanup. The easiest way to reclaim space without any kind of loss was to delete the huge node_modules folder in every project I was not currently using.

For this purpose, I wrote the PowerShell one-liner below to clean up all the node_modules folders in my code folder:

Get-ChildItem -Path . -Filter node_modules -Recurse -Directory | Remove-Item -Force -Recurse

Get-ChildItem here lists everything under the path "." (the current folder the script runs in), with a filter for the folder name "node_modules"; the -Directory switch restricts the matches to folders. The results are piped into the Remove-Item command, which recursively deletes each matched folder and everything inside it. (The -Force parameter is added to avoid a yes/no confirmation before deleting each folder.)
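If you want to see what would be deleted before committing to it, PowerShell's standard -WhatIf switch turns the same pipeline into a dry run (a cautious sketch; no files are touched):

```powershell
# Preview only: -WhatIf makes Remove-Item report each folder it *would* delete.
Get-ChildItem -Path . -Filter node_modules -Recurse -Directory |
    Remove-Item -Force -Recurse -WhatIf
```

Once the preview looks right, drop -WhatIf to perform the actual deletion.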

After executing this command I was able to recover more than 20 GB of disk space.

#powershell #node #modules
Desmond Gerber


How to Fix Missing Write Access To Node_modules Folder

Here's how you can fix the missing write access to /usr/local/lib/node_modules error.

When you try to install an npm package globally using the npm install -g <package name> command, you may find the following error:

npm WARN checkPermissions Missing write access to /usr/local/lib/node_modules

The warning above means that your current terminal user doesn’t have “write access” to the /usr/local/lib/node_modules folder.

Because you can’t write any new file and folder to the node_modules folder, npm won’t be able to complete the installation.

You should see other errors below the warning as a consequence of the warning:

npm ERR! code EACCES
npm ERR! syscall mkdir
npm ERR! path /usr/local/lib/node_modules/rimraf
npm ERR! errno -13
npm ERR! Error: EACCES: permission denied, ...
npm ERR! 
npm ERR! The operation was rejected by your operating system.

Without the write access, npm will not be able to create folders and write the files for the package you are trying to install.

There are three ways to fix this error: run the install with sudo, change the owner of the node_modules folder with chown, or install NodeJS through NVM.

This tutorial will help you resolve the missing write access error.

Installing npm modules globally with the sudo command

You can run the npm install command as root user by adding sudo before the command:

sudo npm install -g @angular/cli

The sudo command allows you to execute a terminal command as the "root" user. You will be asked for your password when you invoke this command.

By calling the command with sudo, the error with code EACCES should be resolved, because the root user has access to everything on your computer.

The drawback of this method is that you also need to add sudo when you want to uninstall the package later.

As an alternative, you can change the owner of the node_modules folder instead.

Fix missing write access using chown command

Another fix is to change the owner of the node_modules folder, which is owned by the "root" user by default.

The following command will change the owner of the folder to your current user. I will explain the command below:

sudo chown -R $USER /usr/local/lib/node_modules

The chown command is used to change the owner of the folder.

The -R option means that the change owner command will be executed recursively, changing not only the node_modules folder owner, but also the rest of the files and folders inside it.

Then, the $USER is an environment variable that will be replaced with the current username you used to login to your computer.

Finally, the folder path /usr/local/lib/node_modules is included to tell the terminal to change the owner of that folder.

By running the command above, you will be able to install npm packages again because the node_modules folder now belongs to your current user.
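To get a feel for what the -R flag does without touching system folders, you can rehearse the same command on a scratch folder you already own (an illustrative sketch; the paths are made up):

```shell
# Illustrative only: chown -R walks the entire tree, not just the top folder.
demo=$(mktemp -d)/node_modules          # hypothetical scratch path
mkdir -p "$demo/some-package"
touch "$demo/some-package/index.js"
chown -R "${USER:-$(id -un)}" "$demo"   # recursively (re)assign ownership
stat -c '%U' "$demo/some-package/index.js"   # prints the owning user
```

The real command differs only in its target, /usr/local/lib/node_modules, which is why it needs sudo.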

But rather than running the command and changing the owner of the folder, I’d recommend you install NVM instead.

Fixing missing write access using NVM

NVM, or Node Version Manager, is software designed to be installed on a per-user basis.

It allows you to install multiple versions of NodeJS on your computer so that you can upgrade or downgrade your NodeJS version as needed.

The reason why using NVM fixes the missing write access error is that, by default, NVM installs NodeJS versions to a folder under your current user.

For example, my NodeJS version is currently installed under the /Users/nsebhastian/ folder:

nvm which current

When you install global packages using NVM, the package will be installed under the version’s lib/ folder:


By using NVM, you will have the ability to install different versions of NodeJS on your computer and you will automatically fix the missing write access error. I’m currently using it for my computer, and I’d recommend you to use it too 😉

You can learn how to install NVM here

Now you’ve learned how to resolve the permission denied error message caused by missing write access. Nice work! 👍

#node #modules
Bongani Ngema


How to Exclude node_modules folder with .gitignore file

Learn about .gitignore file and how to use it to exclude node_modules from Git tracking

The node_modules/ folder is where all packages required by your JavaScript project are downloaded and installed. This folder is commonly excluded from a remote repository because of its large size, and because you shouldn't commit code that you didn't write to the repository.

Rather than including the node_modules/ folder, you should have the packages required by your project listed in package.json file and ignore the node_modules/ folder using the .gitignore file.

A .gitignore file is a plain text file where you can write a list of patterns for files or folders that Git must not track from your project. It’s commonly used to exclude auto-generated files in your project.

To ignore the node_modules/ folder, you simply need to write the folder name inside the .gitignore file:

node_modules/
And with that, your node_modules/ folder will be ignored by Git. This works even when you have multiple node_modules/ folders located inside other subfolders.

If you already have the node_modules/ folder committed and tracked by Git, you can make Git forget about the folder by removing it from Git cache with the following command:

git rm -r --cached .

The command above will remove all tracked files from Git cache, so you need to add them back using git add . where the dot (.) symbol will add all files in the folder, but still exclude those in the .gitignore file.
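The whole flow can be rehearsed safely in a throwaway repository (a sketch; the file and package names are made up):

```shell
# A node_modules/ entry in .gitignore keeps the folder out of `git add .`.
repo=$(mktemp -d)
cd "$repo" && git init -q
mkdir -p node_modules/lodash && echo "code" > node_modules/lodash/index.js
echo "app" > app.js
echo "node_modules/" > .gitignore
git add .
git status --porcelain   # stages .gitignore and app.js, but not node_modules
```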

Next, you just need to commit the changes and push them into your remote repository. Here’s the full git command:

git rm -r --cached .
git add .
git commit -m "Remove node_modules folder"
git push

And that’s how you ignore the node_modules/ folder using .gitignore file. Feel free to modify the commands above as you require 😉

#node #modules #folder

How to Remove Node_modules Folder

Learn how to remove the entire folder or just specific packages from your node_modules folder

The node_modules folder is where all packages downloaded from npm are saved on your computer for a JavaScript project. Developers are always recommended to do a fresh install with npm install each time they download a JavaScript project onto their computer.

Still, there might be cases when you want to remove the folder from your computer, such as when copying the project to a hard drive for backup, or when removing unused packages. This tutorial will show you how to remove specific npm packages and how to remove the whole node_modules folder from your local computer.

First, let’s see how to remove the node_modules folder entirely.

How to remove node_modules folder

To delete the node_modules folder from your JavaScript project, you can use the following command for Mac / Linux OS:

rm -rf node_modules

The command above will first delete the content of node_modules recursively until all of it is deleted, then it will remove the node_modules folder too.

If you have multiple node_modules folders across many different projects, you can first locate them all and see how much space they occupy.

Go to the parent folder of all your node_modules folders and run the following command:

find . -name "node_modules" -type d -prune | xargs du -chs

It will find all node_modules located inside the folder and all its subfolders. Here’s an example output from my computer:

$ find . -name "node_modules" -type d -prune | xargs du -chs
130M	./server-components-demo/node_modules
244M	./single-spa-basic/node_modules
244M	./react-boilerplate/node_modules

As you can see from the output above, I have three node_modules folders inside three different subfolders. To delete them all, I need to use the following command:

find . -name "node_modules" -type d -prune -exec rm -rf '{}' +

The command will find all node_modules directories inside the folder and its subfolders and execute the rm -rf command on each of them.
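To rehearse the find/-prune/-exec combination before pointing it at real projects, you can try it on scratch folders (a sketch; the project names are made up):

```shell
# Two fake projects, each with a node_modules folder, removed in one pass.
base=$(mktemp -d)
mkdir -p "$base/proj-a/node_modules/x" "$base/proj-b/node_modules/y"
find "$base" -name node_modules -type d -prune -exec rm -rf '{}' +
find "$base" -name node_modules   # prints nothing: both copies are gone
```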

On Windows, removing the node_modules folder may cause an error saying the source file names are larger than what the file system supports:

The source file name(s) are larger than is supported by the file system. 
Try moving to a location which has a shorter path name, 
or try renaming to shorter name(s) before attempting this operation

When you see this error, you need to remove the node_modules folder by using the rimraf npm package.

If you’re using npm version 5.2.0 or above, then you can use the npm package runner npx to run rimraf without installing it, as follows:

npx rimraf node_modules

If your npm version is lower than 5.2.0, then you need to install the package globally using npm as follows:

npm install -g rimraf

Then remove the node_modules folder with the following command:

rimraf node_modules

If you have many node_modules folders, then you can go to the parent folder that contains all the node_modules folder and execute rimraf with the following pattern:

rimraf ./**/node_modules

The double asterisk ** pattern makes rimraf find all node_modules folders recursively and delete them all.

Deleting specific packages from node_modules folder

At times, you may want to remove specific npm packages from your node_modules folder. You can do so by using the npm uninstall command as shown below:

npm uninstall <package name>

Or you can remove the package name manually from the package.json file and run another npm install command. The npm install command will check your node_modules folder and remove packages that are not listed as dependencies in package.json (running npm prune performs the same cleanup explicitly).

Now you know how to remove the node_modules folder and how to remove specific packages from it 😉

#node #remove #modules

Merge Functions with Identical Names From Distinct Modules


This package is not currently registered, but you can use the REPL to clone it for yourself with

julia> Pkg.clone("git://")

What's new (as of v0.3)

  1. Changed 'merge!()' to 'fmerge!()' in order to avoid potential name conflicts with base (wouldn't that be ironic?). I don't plan on doing this ever again.
  2. Added support for arbitrary number of (::Module, ::Function) arguments for fmerge!().
  3. Added "tracking mechanism" of sorts for methods added via fmerge!() (see below).

Motivation & example usage

Suppose we create a function f in Main:

julia> f() = nothing
f (generic function with 1 method)

Suppose also that we also intend to use the following modules A and B:

module A

export f
immutable Foo end
f(::Foo) = print("This is Foo.")
f(x::Int64) = x

end

module B

export f
immutable Bar end
f(::Bar) = print("This is Bar.")
f(x::Int64) = 2x

end

As of Julia 0.3.7, unqualified use of a name common to both modules -- say, the name 'f' -- will elicit behavior that depends on the order in which we declare to be using the modules:

julia> using A, B
Warning: using A.f in module Main conflicts with an existing identifier.
Warning: using B.f in module Main conflicts with an existing identifier.

julia> methods(f)
# 1 method for generic function "f":
f() at none:1

julia> f(A.Foo())
ERROR: `f` has no method matching f(::Foo)

julia> A.f(A.Foo())
This is Foo.

But suppose we want unqualified use of 'f' to refer to the correct object --- either f, A.f or B.f --- depending on the signature of the argument on which f is called. The present "package" offers this functionality through the fmerge!() function, which "merges" the methods of A.f and B.f into our original function f as defined in Main. (At its core, this is just extending the f defined in Main.) This allows unqualified use of the name 'f' to dispatch on signatures for which methods are defined in other modules:

julia> fmerge!(f, (A,f), (B,f))

julia> methods(f)
# 3 methods for generic function "f":
f() at none:1

julia> f(A.Foo())
This is Foo.
julia> f(B.Bar())
This is Bar.

For merged methods with at least one argument, the name of the module from which the method originates is appended to the first argument in the method definition, as can be seen above. This can help one keep track of which methods come from which modules. However, this machinery only keeps track of the most recent module from which the method originates. If a method has been merged multiple times through multiple modules, its ultimate origin will be obscured.

Note that no method for the signature (x::Int64,) was merged since both A.f and B.f have methods for this signature. To choose one to merge, use the optional priority keyword argument, which takes an array of (::Module, ::Function) tuples in the order of priority rank:

julia> fmerge!(f, (A,f), (B,f), priority=[(A,f)])

julia> methods(f)
# 4 methods for generic function "f":
f() at none:1

julia> f(3)

If, for a given signature, a method exists in both Module1.f and Module2.f, then the method from whichever of (Module1, f), (Module2, f) with the greater rank (so lower numerical rank, e.g. 1 is greatest) will be merged. (::Module, ::Function) arguments passed to fmerge!() but omitted from priority are by default given the lowest possible rank. If (Module1, f), (Module2, f) have the same rank (which will only occur if they are not specified in priority) then neither method will be merged. This means that if one omits the priority argument, then only those methods whose signatures unambiguously specify precisely one of the (::Module, ::Function) arguments passed to fmerge!() will be merged.

WARNING: As of yet I haven't figured out how to use reflection to distinguish between otherwise identical signatures with user-defined types of the same name. Thus if module B above also defined a Foo type and defined a method for f(::Foo), these two methods would be seen to conflict by fmerge!().

One can call fmerge!() in modules other than Main.

module C

export f
using MetaMerge, A, B
f(::Union()) = nothing
fmerge!(f, (A,f), (B,f), conflicts_favor=A)
h(x::Int64) = f(x)

end

The result is that unqualified use of f in the module C will dispatch across methods defined for A.f and B.f. We can check this in the REPL:

julia> methods(C.f)
# 4 methods for generic function "f":
f(::None) at none:5

julia> C.h(2)

I hope that this versatility makes fmerge!() suitable for more general use outside the REPL.

One is also free to fmerge!() functions of different names, as well as functions from the same module.

To do:

  1. Currently, merge!() only handles two (Module, Function) tuples in its argument. In the future, one should be able to call merge!() on any number of such arguments, e.g. merge!(f, (A,f)) or merge!(f, (A,f), (B,f), (C,f)). (Done: featured in v0.3.)
  2. Currently, if one wants to merge multiple functions from two+ modules, one has to merge!() each set of names individually. In the future, there should be a mergeall() function that automatically merges all commonly named functions between two modules, e.g. mergeall(A, B, conflicts_favor=A) generates a list of function names common to A and B and merge!s them.
  3. Find a way to handle name clashes of user defined types from different modules (WARNING above).

Download Details:

Author: Davidagold
Source Code: 
License: View license

#julia #functions #modules 

Lawrence Lesch


Super-fast Alternative to Babel for When You Can Target Modern JS


Quick usage

yarn add --dev sucrase  # Or npm install --save-dev sucrase
node -r sucrase/register main.ts

Using the ts-node integration:

yarn add --dev sucrase ts-node typescript
./node_modules/.bin/ts-node --transpiler sucrase/ts-node-plugin main.ts

Project overview

Sucrase is an alternative to Babel that allows super-fast development builds. Instead of compiling a large range of JS features to be able to work in Internet Explorer, Sucrase assumes that you're developing with a recent browser or recent Node.js version, so it focuses on compiling non-standard language extensions: JSX, TypeScript, and Flow. Because of this smaller scope, Sucrase can get away with an architecture that is much more performant but less extensible and maintainable. Sucrase's parser is forked from Babel's parser (so Sucrase is indebted to Babel and wouldn't be possible without it) and trims it down to a focused subset of what Babel solves. If it fits your use case, hopefully Sucrase can speed up your development experience!

Sucrase has been extensively tested. It can successfully build the Benchling frontend code, Babel, React, TSLint, Apollo client, and decaffeinate with all tests passing, about 1 million lines of code total.

Sucrase is about 20x faster than Babel. Here's one measurement of how Sucrase compares with other tools when compiling the Jest codebase 3 times, about 360k lines of code total:

            Time            Speed
Sucrase     0.57 seconds    636975 lines per second
swc         1.19 seconds    304526 lines per second
esbuild     1.45 seconds    248692 lines per second
TypeScript  8.98 seconds    40240 lines per second
Babel       9.18 seconds    39366 lines per second

Details: Measured on July 2022. Tools run in single-threaded mode without warm-up. See the benchmark code for methodology and caveats.


The main configuration option in Sucrase is an array of transform names. These transforms are available:

  • jsx: Transforms JSX syntax to React.createElement, e.g. <div a={b} /> becomes React.createElement('div', {a: b}). Behaves like Babel 7's React preset, including adding createReactClass display names and JSX context information.
  • typescript: Compiles TypeScript code to JavaScript, removing type annotations and handling features like enums. Does not check types. Sucrase transforms each file independently, so you should enable the isolatedModules TypeScript flag so that the typechecker will disallow the few features like const enums that need cross-file compilation.
  • flow: Removes Flow type annotations. Does not check types.
  • imports: Transforms ES Modules (import/export) to CommonJS (require/module.exports) using the same approach as Babel and TypeScript with --esModuleInterop. If preserveDynamicImport is specified in the Sucrase options, then dynamic import expressions are left alone, which is particularly useful in Node to load ESM-only libraries. If preserveDynamicImport is not specified, import expressions are transformed into a promise-wrapped call to require.
  • react-hot-loader: Performs the equivalent of the react-hot-loader/babel transform in the react-hot-loader project. This enables advanced hot reloading use cases such as editing of bound methods.
  • jest: Hoists desired jest method calls above imports in the same way as babel-plugin-jest-hoist. Does not validate the arguments passed to jest.mock, but the same rules still apply.

When the imports transform is not specified (i.e. when targeting ESM), the injectCreateRequireForImportRequire option can be specified to transform TS import foo = require("foo"); in a way that matches the TypeScript 4.7 behavior with module: nodenext.

These newer JS features are transformed by default:

If your target runtime supports these features, you can specify disableESTransforms: true so that Sucrase preserves the syntax rather than trying to transform it. Note that transpiled and standard class fields behave slightly differently; see the TypeScript 3.7 release notes for details. If you use TypeScript, you can enable the TypeScript option useDefineForClassFields to enable error checking related to these differences.

Unsupported syntax

All JS syntax not mentioned above will "pass through" and needs to be supported by your JS runtime. For example:

  • Decorators, private fields, throw expressions, generator arrow functions, and do expressions are all unsupported in browsers and Node (as of this writing), and Sucrase doesn't make an attempt to transpile them.
  • Object rest/spread, async functions, and async iterators are all recent features that should work fine, but might cause issues if you use older versions of tools like webpack. BigInt and newer regex features may or may not work, based on your tooling.

JSX Options

By default, JSX is compiled to React functions in development mode. This can be configured with a few options:

  • jsxRuntime: A string specifying the transform mode, which can be one of two values:
    • "classic" (default): The original JSX transform that calls React.createElement by default. To configure for non-React use cases, specify:
      • jsxPragma: Element creation function, defaults to React.createElement.
      • jsxFragmentPragma: Fragment component, defaults to React.Fragment.
    • "automatic": The new JSX transform introduced with React 17, which calls jsx functions and auto-adds import statements. To configure for non-React use cases, specify:
      • jsxImportSource: Package name for auto-generated import statements, defaults to react.
  • production: If true, use production version of functions and don't include debugging information. When using React in production mode with the automatic transform, this must be set to true to avoid an error about jsxDEV being missing.

Legacy CommonJS interop

Two legacy modes can be used with the imports transform:

  • enableLegacyTypeScriptModuleInterop: Use the default TypeScript approach to CommonJS interop instead of assuming that TypeScript's --esModuleInterop flag is enabled. For example, if a CJS module exports a function, legacy TypeScript interop requires you to write import * as add from './add';, while Babel, Webpack, Node.js, and TypeScript with --esModuleInterop require you to write import add from './add';. As mentioned in the docs, the TypeScript team recommends you always use --esModuleInterop.
  • enableLegacyBabel5ModuleInterop: Use the Babel 5 approach to CommonJS interop, so that you can run require('./MyModule') instead of require('./MyModule').default. Analogous to babel-plugin-add-module-exports.


Tool integrations

Usage in Node

The most robust way is to use the Sucrase plugin for ts-node, which has various Node integrations and configures Sucrase via tsconfig.json:

ts-node --transpiler sucrase/ts-node-plugin

For projects that don't target ESM, Sucrase also has a require hook with some reasonable defaults that can be accessed in a few ways:

  • From code: require("sucrase/register");
  • When invoking Node: node -r sucrase/register main.ts
  • As a separate binary: sucrase-node main.ts

Compiling a project to JS

For simple use cases, Sucrase comes with a sucrase CLI that mirrors your directory structure to an output directory:

sucrase ./srcDir -d ./outDir --transforms typescript,imports

Usage from code

For any advanced use cases, Sucrase can be called from JS directly:

import {transform} from "sucrase";
const compiledCode = transform(code, {transforms: ["typescript", "imports"]}).code;

What Sucrase is not

Sucrase is intended to be useful for the most common cases, but it does not aim to have nearly the scope and versatility of Babel. Some specific examples:

  • Sucrase does not check your code for errors. Sucrase's contract is that if you give it valid code, it will produce valid JS code. If you give it invalid code, it might produce invalid code, it might produce valid code, or it might give an error. Always use Sucrase with a linter or typechecker, which is more suited for error-checking.
  • Sucrase is not pluginizable. With the current architecture, transforms need to be explicitly written to cooperate with each other, so each additional transform takes significant extra work.
  • Sucrase is not good for prototyping language extensions and upcoming language features. Its faster architecture makes new transforms more difficult to write and more fragile.
  • Sucrase will never produce code for old browsers like IE. Compiling code down to ES5 is much more complicated than any transformation that Sucrase needs to do.
  • Sucrase is hesitant to implement upcoming JS features, although some of them make sense to implement for pragmatic reasons. Its main focus is on language extensions (JSX, TypeScript, Flow) that will never be supported by JS runtimes.
  • Like Babel, Sucrase is not a typechecker, and must process each file in isolation. For example, TypeScript const enums are treated as regular enums rather than inlining across files.
  • You should think carefully before using Sucrase in production. Sucrase is mostly beneficial in development, and in many cases, Babel or tsc will be more suitable for production builds.

See the Project Vision document for more details on the philosophy behind Sucrase.


As JavaScript implementations mature, it becomes more and more reasonable to disable Babel transforms, especially in development when you know that you're targeting a modern runtime. You might hope that you could simplify and speed up the build step by eventually disabling Babel entirely, but this isn't possible if you're using a non-standard language extension like JSX, TypeScript, or Flow. Unfortunately, disabling most transforms in Babel doesn't speed it up as much as you might expect. To understand, let's take a look at how Babel works:

  1. Tokenize the input source code into a token stream.
  2. Parse the token stream into an AST.
  3. Walk the AST to compute the scope information for each variable.
  4. Apply all transform plugins in a single traversal, resulting in a new AST.
  5. Print the resulting AST.

Only step 4 gets faster when disabling plugins, so there's always a fixed cost to running Babel regardless of how many transforms are enabled.

Sucrase bypasses most of these steps, and works like this:

  1. Tokenize the input source code into a token stream using a trimmed-down fork of the Babel parser. This fork does not produce a full AST, but still produces meaningful token metadata specifically designed for the later transforms.
  2. Scan through the tokens, computing preliminary information like all imported/exported names.
  3. Run the transform by doing a pass through the tokens and performing a number of careful find-and-replace operations, like replacing <Foo with React.createElement(Foo.

Because Sucrase works on a lower level and uses a custom parser for its use case, it is much faster than Babel.


Contributions are welcome, whether they be bug reports, PRs, docs, tests, or anything else! Please take a look through the Contributing Guide to learn how to get started.

Why the name?

Sucrase is an enzyme that processes sugar. Get it?


Download Details:

Author: Alangpierce
Source Code: 
License: MIT license

#typescript #javascript #compiler #jsx #modules 

Reid Rohan


8 Best Libraries for Module Or Loading System with JavaScript

In today's post we will learn about the 8 best libraries for module and loading systems in JavaScript.

Module Systems provide a way to manage dependencies in JavaScript.

In vanilla client-side JavaScript development, dependencies are implicit: they need to be defined manually, and sometimes they also need to be defined in a certain order.

Node.js Modules is an extension to CommonJS Modules (1.1).

Asynchronous Module Definition (AMD) is the most popular for client-side code.

Universal Module Definition (UMD) is a set of boilerplate recipes that attempt to bridge the differences between AMD and Node.js.

Table of contents:

  • RequireJS - A file and module loader for JavaScript.
  • Browserify - Browser-side require() the node.js way.
  • SeaJS - A Module Loader for the Web.
  • HeadJS - The only script in your HEAD.
  • Lazyload - Tiny, dependency-free async JavaScript and CSS loader.
  • Script.js - Asynchronous JavaScript loader and dependency manager.
  • Systemjs - AMD, CJS & ES6 spec-compliant module loader.
  • LodJS - Module loader based on AMD.

1 - RequireJS: A file and module loader for JavaScript.


  • dist: Scripts and assets to generate the docs, and for generating a require.js release.
  • docs: The raw HTML files for the docs. Only includes the body of each page. Files in dist are used to generate a complete HTML page.
  • tests: Tests for require.js.
  • testBaseUrl.js: A file used in the tests inside tests. Purposely placed outside the tests directory for testing paths that go outside a baseUrl.
  • Updates projects that depend on require.js. Assumes the projects are siblings to this directory and have specific names. Useful to copy require.js to dependent projects easily while in development.


This repo assumes some other repos are checked out as siblings to this repo:

git clone
git clone
git clone
git clone

So when the above clones are done, the directory structure should look like:

  • domReady
  • i18n
  • text
  • requirejs (this repo)

You will need to be connected to the internet because the JSONP and remoteUrls tests access the internet to complete their tests.

Serve the directory with these 4 siblings from a web server. It can be a local web server.

Open requirejs/tests/index.html in all the browsers, click the arrow button to run all the tests.

View on Github

2 - Browserify: Browser-side require() the node.js way.

getting started

If you're new to browserify, check out the browserify handbook and the resources on


Whip up a file, main.js with some require()s in it. You can use relative paths like './foo.js' and '../lib/bar.js' or module paths like 'gamma' that will search node_modules/ using node's module lookup algorithm.

var foo = require('./foo.js');
var bar = require('../lib/bar.js');
var gamma = require('gamma');

var elem = document.getElementById('result');
var x = foo(100) + bar('baz');
elem.textContent = gamma(x);

Export functionality by assigning onto module.exports or exports:

module.exports = function (n) { return n * 111 }

Now just use the browserify command to build a bundle starting at main.js:

$ browserify main.js > bundle.js

All of the modules that main.js needs are included in the bundle.js from a recursive walk of the require() graph using required.

To use this bundle, just toss a <script src="bundle.js"></script> into your html!


With npm do:

npm install browserify

View on Github

3 - SeaJS: A Module Loader for the Web.

Sea.js is a module loader for the web. It is designed to change the way you organize JavaScript. With Sea.js, it is a pleasure to build scalable web applications.

The official site:


If you have any questions, please feel free to ask through New Issue.

Reporting an Issue

Make sure the problem you're addressing is reproducible. Use or to provide a test page. Indicate which browsers the issue can be reproduced in. Which version of Sea.js is the issue reproducible in? Is it reproducible after updating to the latest version?

View on Github

4 - HeadJS: The only script in your HEAD.

Responsive Design, Feature Detections, and Resource Loading

  • Speed up your apps: Load JS & CSS asynchronously and in parallel, but execute them in order
  • Load one asset if a condition is met, else fallback and load a different one
  • Manage script dependencies, and execute callbacks once they are loaded
  • Cross-browser compatible « pseudo media-queries » let you code against different resolutions & devices
  • Fix quirks in specific browsers by quickly applying dedicated CSS/JS logic
  • Detect various browsers & their versions
  • Check if the client supports a certain Browser, HTML5, or CSS3 feature
  • Automatically generates JS and CSS classes for browsers & features that were detected
  • Automatically generates CSS classes, to know what page or section a user is viewing
  • Know if the user is in landscape or portrait mode
  • Or whether the client is using a mobile or desktop device
  • Get old browsers to support HTML5 elements like nav, sidebar, header, footer, ...
  • ...
  • Make it, The only script in your <HEAD>
    • A concise solution to universal problems

View on Github

5 - Lazyload: Tiny, dependency-free async JavaScript and CSS loader.

LazyLoad is a tiny (only 966 bytes minified and gzipped), dependency-free JavaScript utility that makes it super easy to load external JavaScript and CSS files on demand.

Whenever possible, LazyLoad will automatically load resources in parallel while ensuring execution order when you specify an array of URLs to load. In browsers that don't preserve the execution order of asynchronously-loaded scripts, LazyLoad will safely load the scripts sequentially.


Using LazyLoad is simple. Just call the appropriate method -- css() to load CSS, js() to load JavaScript -- and pass in a URL or array of URLs to load. You can also provide a callback function if you'd like to be notified when the resources have finished loading, as well as an argument to pass to the callback and a context in which to execute the callback.

// Load a single JavaScript file and execute a callback when it finishes.
LazyLoad.js('', function () {
  alert('foo.js has been loaded');
});

// Load multiple JS files and execute a callback when they've all finished.
LazyLoad.js(['foo.js', 'bar.js', 'baz.js'], function () {
  alert('all files have been loaded');
});

// Load a CSS file and pass an argument to the callback function.
LazyLoad.css('foo.css', function (arg) {
  alert(arg); // displays 'foo.css has been loaded'
}, 'foo.css has been loaded');

// Load a CSS file and execute the callback in a different scope.
LazyLoad.css('foo.css', function () {
  alert(this.foo); // displays 'bar'
}, null, {foo: 'bar'});

Supported Browsers

  • Firefox 2+
  • Google Chrome
  • Internet Explorer 6+
  • Opera 9+
  • Safari 3+
  • Mobile Safari
  • Android

Other browsers may work, but haven't been tested. It's a safe bet that anything based on a recent version of Gecko or WebKit will probably work.


View on Github

6 - Script.js: Asynchronous JavaScript loader and dependency manager.

Browser Support

  • IE 6+
  • Opera 10+
  • Safari 3+
  • Chrome 1+
  • Firefox 2+


old school - blocks CSS, Images, AND JS!

<script src="jquery.js"></script>
<script src="my-jquery-plugin.js"></script>
<script src="my-app-that-uses-plugin.js"></script>

middle school - loads as non-blocking, but has multiple dependents

$script('jquery.js', function () {
  $script('my-jquery-plugin.js', function () {
    $script('my-app-that-uses-plugin.js')
  })
})

new school - loads as non-blocking, and ALL js files load asynchronously

// load jquery and plugin at the same time. name it 'bundle'
$script(['jquery.js', 'my-jquery-plugin.js'], 'bundle')

// load your usage
$script('my-app-that-uses-plugin.js')

/*--- in my-jquery-plugin.js ---*/
$script.ready('bundle', function() {
  // jquery & plugin (this file) are both ready
  // plugin code...
})

/*--- in my-app-that-uses-plugin.js ---*/
$script.ready('bundle', function() {
  // use your plugin :)
})

View on Github

7 - Systemjs: AMD, CJS & ES6 spec-compliant module loader.

Getting Started

Introduction video.

The systemjs-examples repo contains a variety of examples demonstrating how to use SystemJS.


npm install systemjs

Example Usage

Loading a System.register module

You can load System.register modules with a script element in your HTML:

<script src="system.js"></script>
<script type="systemjs-module" src="/js/main.js"></script>
<script type="systemjs-module" src="import:name-of-module"></script>

Loading with System.import

You can also dynamically load modules at any time with System.import():

System.import('/js/main.js');
where main.js is a module available in the System.register module format.

Bundling workflow

For an example of a bundling workflow, see the Rollup Code Splitting starter project -

Note that when building System modules you typically want to ensure anonymous System.register statements like:

System.register([], function () { ... });

are emitted, as these can be loaded in a way that behaves the same as normal ES modules, and not named register statements like:

System.register('name', [], function () { ... });

While these can be supported with the named register extension, this approach is typically not recommended for modern modules workflows.

View on Github

8 - LodJS: Module loader based on AMD.

How to Use?

Traditional Usage

<script src="lodjs.js"></script>


$ bower install lodjs
$ bower install git://

Quick Start

Define Module

We use lodJS's global function define to define a module, for example, in mod.js, we have the following code:

define(function () {
	return 123;
});
Module Usage

The use method in lodJS uses a module. The following code can use the module defined above:

lodjs.use('mod', function (mod) {
	console.log(mod); // Outputs 123
});
For more examples, please see the directory of demo.

View on Github

Thank you for following this article.

#javascript #loading #modules 


OdsIO.jl: (ODS) I/O for Julia using The Python Ezodf Module


Open Document Format Spreadsheet (ODS) I/O for Julia using the python ezodf module.

It allows to export (import) data from (to) Julia to (from) LibreOffice, OpenOffice and any other spreadsheet software that implements the OpenDocument specifications. 



This package provides the following functions:

ODS reading:


ods_readall(filename_or_stream; <keyword arguments>)

Return a dictionary of tables|dictionaries|dataframes indexed by position or name in the original OpenDocument Spreadsheet (.ods) file or stream.


  • filename_or_stream: file name or stream as Vector{UInt8}
  • sheetsNames=[]: the list of sheet names from which to import data.
  • sheetsPos=[]: the list of sheet positions (starting from 1) from which to import data.
  • ranges=[]: a list of pairs of tuples defining the ranges in each sheet from which to import data, in the format ((tlr,tlc),(brr,brc))
  • innerType="Matrix": the type of the inner container returned. Either "Matrix", "Dict" or "DataFrame"


  • sheetsNames and sheetsPos cannot be given together
  • ranges is defined using integer positions for both rows and columns
  • individual dictionaries or dataframes are keyed by the values of the cells in the first row specified in the range, or the first row if the range is not given
  • innerType="Matrix", differently from innerType="Dict", preserves the original column order, is faster and requires less memory
  • using innerType="DataFrame" also preserves the original column order and tries to auto-convert column types (working for Int64, Float64, String, in that order)


julia> outDic  = ods_readall("spreadsheet.ods";sheetsPos=[1,3],ranges=[((1,1),(3,3)),((2,2),(6,4))], innerType="Dict")
Dict{Any,Any} with 2 entries:
  3 => Dict{Any,Any}(Pair{Any,Any}("c",Any[33.0,43.0,53.0,63.0]),Pair{Any,Any}("b",Any[32.0,42.0,52.0,62.0]),Pair{Any,Any}("d",Any[34.0,44.0,54.…
  1 => Dict{Any,Any}(Pair{Any,Any}("c",Any[23.0,33.0]),Pair{Any,Any}("b",Any[22.0,32.0]),Pair{Any,Any}("a",Any[21.0,31.0]))
julia> data = @pipe HTTP.get("").body |> ods_readall(_)
Dict{Any, Any} with 3 entries:
  "Sheet1" => Any["h1" "h2" "h3"; "a" "b" "c"; "aa" "bb" "cc"]
  "Sheet2" => Any["a" "b" "c"; 21 22 23; 31 32 33]
  "Sheet3" => Any[nothing nothing nothing nothing; nothing "b" "c" "d"; … ; nothing 52 53 54; nothing 62 63 64]


ods_read(filename_or_stream; <keyword arguments>)

Return a table|dictionary|dataframe from a sheet (or range within a sheet) in a OpenDocument Spreadsheet (.ods) file or stream.


  • filename_or_stream: file name or stream as Vector{UInt8}
  • sheetName=nothing: the sheet name from which to import data.
  • sheetPos=nothing: the position of the sheet (starting from 1) from which to import data.
  • ranges=[]: a pair of tuples defining the range in the sheet from which to import data, in the format ((tlr,tlc),(brr,brc))
  • retType="Matrix": the type of container returned. Either "Matrix", "Dict" or "DataFrame"


  • sheetName and sheetPos cannot be given together
  • if neither sheetName nor sheetPos is specified, data from the first sheet is returned
  • ranges is defined using integer positions for both rows and columns
  • the dictionary or dataframe is keyed by the values of the cells in the first row specified in the range, or the first row if the range is not given
  • retType="Matrix", differently from retType="Dict", preserves the original column order, is faster and requires less memory
  • using retType="DataFrame" also preserves the original column order and tries to auto-convert column types (working for Int64, Float64, String, in that order)


julia> df = ods_read("spreadsheet.ods";sheetName="Sheet2",retType="DataFrame")
3×3 DataFrames.DataFrame
│ Row │ x1   │ x2   │ x3   │
│ 1   │ "a"  │ "b"  │ "c"  │
│ 2   │ 21.0 │ 22.0 │ 23.0 │
│ 3   │ 31.0 │ 32.0 │ 33.0 │
julia> data = @pipe HTTP.get("").body |> ods_read(_)
3×3 Matrix{Any}:
 "h1"  "h2"  "h3"
 "a"   "b"   "c"
 "aa"  "bb"  "cc"

ODS writing



Write tabular data (2D Array, DataFrame or Dictionary) to OpenDocument spreadsheet format.


  • filename: an existing ods file or the one to create.
  • data=Dict(): a dictionary of locations in the files where to export the data => the actual data (see notes).


  • The locations where to save the data (the keys in the dictionary) are a tuple of three elements: the first one is the sheet name or sheet position, the other two are the row and column indexes of the top left corner where to export the data. If using sheet positions, these must be within current file sheets boundaries. If you want to create new sheets, use names.
  • The actual data exported are either a Matrix (2D Array), a DataFrame or an OrderedDict. In case of DataFrame or OrderedDict the headers ARE exported, so if you don't want them, first convert the DataFrame (or Dictionary) to a Matrix. In case of OrderedDict, the inner data must all have the same length.
  • Some spreadsheet software may not automatically recalculate cells that depend on exported cells (e.g. if we export some data to cell A1 and cell A2 depends on A1, the content of cell A2 may not be updated after the export). In such cases most spreadsheet software has a command to force recalculation of cells (e.g. in LibreOffice/OpenOffice use CTRL+Shift+F9)


julia> ods_write("TestSpreadsheet.ods",Dict(("TestSheet",3,2)=>[[1,2,3,4,5] [6,7,8,9,10]]))



Provide tests to check that both the Julia 'OdsIO' and Python 'ezodf' modules are correctly installed. It may return an error if the file system is not writeable.


This package requires:

  • the PyCall package to call Python
  • a working local installation of Python with the python ezodf module already installed (if the ezodf module is not available and you have no access to the local python installation, you can use PyCall to try to install ezodf using pip... see here)
  • the DataFrames package in order to return DataFrames.

Known limitations

  • In reading, as the data is saved in a dictionary, the order of the columns is not maintained.
  • It is relatively slow with very large data.
  • If the data has many columns, the conversion from Dictionary to DataFrame made in the ods2dfs and ods2df functions may not work. In that case call the ods2dics or ods2dic functions and perform the conversion manually choosing the columns you need.

Download Details:

Author: Sylvaticus
Source Code: 
License: View license

#julia #python #modules 


BDF.jl: Module to Read Biosemi BDF Files with Julia

BDF.jl is a Julia module to read/write BIOSEMI 24-bit BDF files (used for storing electroencephalographic recordings)


bdfHeader = readBDFHeader("res1.bdf") #read the bdf header
sampRate = bdfHeader["sampRate"][1] #get the sampling rate
#read the data, the event table, the trigger channel and the status channel
dats, evtTab, trigs, statusChan = readBDF("res1.bdf")


Load the module

using BDF

To read an entire BDF recording

dats, evtTab, trigChan, sysCodeChan = readBDF("res1.bdf")

dats is the nChannels-by-nSamples matrix containing the data. Note that the triggers are not contained in the dats matrix. The triggers can be retrieved either through the event table (evtTab) or the raw trigger channel (trigChan). The event table is a dictionary containing the trigger codes evtTab["code"], the trigger indexes evtTab["idx"] (i.e. the sample numbers at which triggers occurred in the recording), and the trigger durations evtTab["dur"] (in seconds). The raw trigger channel returned in trigChan contains the trigger code for each recording sample. Additional Biosemi status codes (like CM in/out-of range, battery low/OK) are returned in sysCodeChan.

You can also read only part of a recording, the following code will read the first 10 seconds of the recording:

dats, evtTab, trigChan, statChan = readBDF("res1.bdf", from=0, to=10)

The readBDFHeader function can be used to get information on the BDF recording:

bdfInfo = readBDFHeader("res1.bdf")

Get the duration of the recording:


Get the sampling rate of each channel:


Get the channel labels:


To read the information stored in the status channel you can use the decodeStatusChannel function

statusChanInfo = decodeStatusChannel(sysCodeChan)

this will return a dictionary with several arrays that indicate for each sample of the recordings whether CMS was in range, whether the battery charge was low, the speedmode of the system, and other information stored in the status channel.

Beware that BDF.jl does not check that you have sufficient RAM to read all the data in a BDF file. If you try to read a file that is too big for your hardware, your system may become slow or unresponsive. Initially try reading only a small amount of data, and check how much RAM that uses.

Documentation is available here:

Download Details:

Author: Sam81
Source Code: 
License: MIT license

#julia #modules 

Gordon Taylor


Babel-plugin-module-resolver: Custom Module Resolver Plugin for Babel


A Babel plugin to add a new resolver for your modules when compiling your code using Babel. This plugin allows you to add new "root" directories that contain your modules. It also allows you to setup a custom alias for directories, specific files, or even other npm modules.


This plugin can simplify the require/import paths in your project. For example, instead of using complex relative paths like ../../../../utils/my-utils, you can write utils/my-utils. It will allow you to work faster since you won't need to calculate how many levels of directory you have to go up before accessing the file.

// Use this:
import MyUtilFn from 'utils/MyUtilFn';
// Instead of that:
import MyUtilFn from '../../../../utils/MyUtilFn';

// And it also works with require calls
// Use this:
const MyUtilFn = require('utils/MyUtilFn');
// Instead of that:
const MyUtilFn = require('../../../../utils/MyUtilFn');
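The rewriting the plugin performs can be sketched roughly as follows. This is a simplified illustration with a hypothetical helper name, not the plugin's real implementation (which also checks that the resolved file actually exists):

```javascript
// Hypothetical sketch of root/alias rewriting, not the plugin's actual code.
// Aliases are checked first; otherwise the import is rewritten relative to
// the first configured root directory.
function resolveImport(source, config) {
  for (const [alias, target] of Object.entries(config.alias || {})) {
    // Match the alias exactly, or as a path prefix followed by '/'.
    if (source === alias || source.startsWith(alias + '/')) {
      return target + source.slice(alias.length);
    }
  }
  for (const root of config.root || []) {
    // Real resolution would test each root for an existing file.
    return root.replace(/\/$/, '') + '/' + source;
  }
  return source;
}
```

With the example config above, resolveImport('test/helpers', {alias: {test: './test'}}) yields './test/helpers', and resolveImport('underscore', {alias: {underscore: 'lodash'}}) yields 'lodash'.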

Getting started

Install the plugin

npm install --save-dev babel-plugin-module-resolver


yarn add --dev babel-plugin-module-resolver

Specify the plugin in your .babelrc with the custom root or alias. Here's an example:

{
  "plugins": [
    ["module-resolver", {
      "root": ["./src"],
      "alias": {
        "test": "./test",
        "underscore": "lodash"
      }
    }]
  ]
}

.babelrc.js version: specify the plugin in your .babelrc.js file with the custom root or alias. Here's an example:

const plugins = [
  [
    require.resolve('babel-plugin-module-resolver'),
    {
      root: ["./src/"],
      alias: {
        "test": "./test"
      }
    }
  ]
];



Good example: //


babel-plugin-module-resolver can be configured and controlled easily, check the documentation for more details

Are you a plugin author (e.g. IDE integration)? We have documented the exposed functions for use in your plugins!

ESLint plugin

If you're using ESLint, you should use eslint-plugin-import, and eslint-import-resolver-babel-module to remove falsy unresolved modules. If you want to have warnings when aliased modules are being imported by their relative paths, you can use eslint-plugin-module-resolver.

Editors autocompletion

  "compilerOptions": {
    "baseUrl": ".",
    "paths": {
      "*": ["src/*"],
      "test/*": ["test/*"],
      "underscore": ["lodash"]
  • IntelliJ/WebStorm: You can mark your module directories as "resources root", e.g. if you have ../../../utils/MyUtilFn you can mark ../../../utils as "resources root". This has the problem that your alias also has to be named utils. The second option is to add a webpack.config.js to your project and use it under File->Settings->Languages&Frameworks->JavaScript->Webpack. This will trick WebStorm into resolving the paths and you can use any alias you want, e.g.:
var path = require('path');

module.exports = {
  resolve: {
    extensions: ['.js', '.json', '.vue'],
    alias: {
      utils: path.resolve(__dirname, '../../../utils/MyUtilFn')
    }
  }
};

Who is using babel-plugin-module-resolver ?

Are you also using it? Send a PR!

Download Details:

Author: Tleunen
Source Code: 
License: MIT license

#javascript #babel #modules 

Reid Rohan


Detective: Find All Calls to Require() No Matter How Deeply Nested


find all calls to require() by walking the AST




var a = require('a');
var b = require('b');
var c = require('c');


var detective = require('detective');
var fs = require('fs');

var src = fs.readFileSync(__dirname + '/strings_src.js');
var requires = detective(src);
console.dir(requires);


$ node examples/strings.js
[ 'a', 'b', 'c' ]


var detective = require('detective');

detective(src, opts)

Given some source body src, return an array of all the require() calls with string arguments.

The options parameter opts is passed along to detective.find().
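As a rough mental model only, pulling out string-literal require() arguments could look like the regex sketch below. This is not what detective does: it walks the AST, which correctly handles comments, strings, and nested expressions that a regex cannot.

```javascript
// Regex sketch for illustration only; detective walks the AST instead.
// Finds require('...') / require("...") calls with a string literal argument.
function findRequireStrings(src) {
  const re = /\brequire\(\s*(['"])([^'"]+)\1\s*\)/g;
  const found = [];
  let match;
  while ((match = re.exec(src)) !== null) {
    found.push(match[2]); // the string inside the quotes
  }
  return found;
}
```

Running it over the strings_src.js example above would produce ['a', 'b', 'c'], matching detective's output for that simple case.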

var found = detective.find(src, opts)

Given some source body src, return found with:

  • found.strings - an array of each string found in a require()
  • found.expressions - an array of each stringified expression found in a require() call
  • found.nodes (when opts.nodes === true) - an array of AST nodes for each argument found in a require() call


  • opts.word - specify a different function name instead of "require"
  • opts.nodes - when true, populate found.nodes
  • opts.isRequire(node) - a function returning whether an AST CallExpression node is a require call
  • opts.parse - supply options directly to acorn with some support for esprima-style options range and loc
  • opts.ecmaVersion - default: 9


With npm do:

npm install detective

Download Details:

Author: Browserify
Source Code: 
License: View license

#javascript #modules #commonjs 


D4M.jl: A D4M Module for Julia


A Dynamic Distributed Dimensional Data Model (D4M) module for Julia. D4M was developed in MATLAB by Dr. Jeremy Kepner and his team at Lincoln Labs. The goal is to implement D4M natively in Julia. As a course project in Numeric Computation with Julia, various parts of this implementation have been completed and compared with the original MATLAB version in performance. In the matrix performance example folder (testing performance in matrix-like operations such as add and multiply), this implementation has achieved on-par performance, if not a significant speedup (10x). This is thanks to the effectiveness of Julia Base in comparison to MATLAB.

The D4M Project Page:

Current Status: (v0.5) - End of course project

  • Read and Write CSV
  • Printtable tabular
  • Basic and advanced Assoc method of indexing
  • All methods of construction Assoc implemented
  • Implemented 1Intro/1AssocIntro and 3Scaling/3MatrixPerformance example folders and dependency.

Next Version: (v0.6) [Mid February]

  • Implement 1Intro/2EdgeArt and 2Apps/1EntityAnalysis example folders and dependency.
  • Add interfaces to Julia's native DataFrame, allowing user to transfer data back and forth from JuliaStats

Download Details:

Author: Achen12
Source Code: 
License: Apache-2.0 license

#julia #matlab #modules 


Boilerplate for Npm Modules with ES6 Features & All The Best Practices

NPM Module Boilerplate 

NOTE: This setup is pretty old and outdated for 2022. I need to update it to use Microbundle. In the meanwhile, do yourself a favour and setup your lib with Microbundle directly (it's pretty simple and straightforward) instead of using the boilerplate code.

Start developing your NPM module in seconds

Readymade boilerplate setup with all the best practices to kick start your npm/node module development.

Happy hacking =)


  • ES6/ESNext - Write ES6 code and Babel will transpile it to ES5 for backwards compatibility
  • Test - Mocha with Istanbul coverage
  • Lint - Preconfigured ESlint with Airbnb config
  • CI - TravisCI configuration setup
  • Minify - Built code will be minified for performance


  • npm run clean - Remove lib/ directory
  • npm test - Run tests with linting and coverage results.
  • npm run test:only - Run tests without linting or coverage.
  • npm run test:watch - You can even re-run tests on file changes!
  • npm run test:prod - Run tests with minified code.
  • npm run test:examples - Test written examples on pure JS for better understanding module usage.
  • npm run lint - Run ESlint with airbnb-config
  • npm run cover - Get coverage report for your code.
  • npm run build - Babel will transpile ES6 => ES5 and minify the code.
  • npm run prepublish - Hook for npm. Do all the checks before publishing your module.


Just clone this repo and remove .git folder.

Download Details:

Author: flexdinesh
Source Code: 
License: MIT license

#javascript #npm #modules #boilerplate 

Nat Grady


Supreme: Generate UML Diagrams Of Shiny Modules


As a ‘Shiny application’, developed with ‘Shiny modules’, gets bigger, it becomes more difficult to track the relationships and to have a clear overview of the module hierarchy. supreme is a tool to help developers visualize the structure of their ‘Shiny applications’ developed with modules.

Therefore, you are able to:

Visualize relationship of modules in existing applications

Design new applications from scratch

⚠️ supreme isn't yet compatible with the new moduleServer syntax introduced in the Shiny version 1.5.0 ⚠️


0. The model language

A graph consists of five main fields:

Module name (always required)

Module inputs (except the defaults: input, output, session)

Module outputs

Module returns

Calling modules, which are modules called within the module

1. Model graph for existing applications

path <- example_app_path()
obj <- supreme(src_file(path))


2. Model new applications

- name: server
    - items_tab_module_server: ItemsTab
    - customers_tab_module_server: CustomersTab
    - transactions_tab_module_server: TransactionsTab
  src: app.R

- name: customers_tab_module_server
  input: customers_list
    - paid_customers_table
    - free_customers_table
  src: module-customers.R

- name: items_tab_module_server
    - items_list
    - is_fired
    - module_modal_dialog: ~
  src: module-items.R

- name: transactions_tab_module_server
    - table
    - button_clicked
  output: transactions_table
  return: transactions_keys
  src: module-transactions.R

- name: module_modal_dialog
    - text
  src: module-utils.R

There are some special rules when creating model objects with YAML:

Each entity in the model must have a name field.

The entities can have optional fields, which are defined in the getOption("SUPREME_MODEL_OPTIONAL_FIELDS")

The fields defined in the getOption("SUPREME_MODEL_MULTI_VAR_FIELDS") can have multiple elements. It means that these fields can be expressed as an array.

model_yaml <- src_yaml(text = model)
obj <- supreme(model_yaml)

Known limitations

supreme will not properly parse the source code of your application if the server-side component is created with shinyServer(), which has been soft-deprecated since the very early Shiny version 0.10.

Similarly, although it's possible to create a Shiny application by only providing input and output arguments on the server side, supreme will not read any Shiny server-side component missing a session argument. That's a reasonable decision because modules cannot work without a session argument.

For the module returns, all return values in a module should explicitly be wrapped in return() calls.

All the possible limitations come from the fact that supreme is designed to perform static analysis on your code. Thus, some idiosyncratic Shiny application code may not be parsed as intended. For such cases, it would be great if you open an issue describing the situation with a reproducible example.


You can install the released version from CRAN:


Or get the development version from GitHub:

# install.packages("devtools")


R Core Team: the supreme package is brought to life thanks to R's support for abstract syntax trees (ASTs), which are used to perform static analysis on the code.

datamodelr: Inspiring work for creating modeling language

shinypod: Interesting thoughts regarding the implementation of Shiny modules

Download Details:

Author: strboul
Source Code: 
License: View license

#r #modules 


Microcoverage: Module Computes Code Coverage for A Julia Program


This module computes code coverage for a Julia program at a more fine-grained level than the built-in coverage feature. Specifically, it provides coverage counts for each branch of the ||, && and ?: operators where they occur. It also counts the number of invocations to statement-functions.


Install the software in some directory and then include it at the REPL level (the outermost level, either in the interpreter or in a file included by the interpreter, not inside a module):


In order to refer to the functions in the module without prefixing them with the module name, use the following declaration:

using microcoverage

Next, instruct the module to instrument your source code:


Now run your code as you normally would. Suppose, for example, that including mytestruns.jl invokes many routines whose source code sits inside mysourcecode.jl and myothersourcecode.jl, so you generate these invocations:


Finally, generate the reports:


Under the hood

The microcoverage module works at the source-code level (as opposed to the standard library coverage feature, which operates close to the machine). The begintrack function copies your source code to a backup file (in the above example, the backup files would be called mysourcecode.jl.orig and myothersourcecode.jl.orig). Then it generates a new source code file (in the above case, named mysourcecode.jl and myothersourcecode.jl) that is peppered with calls to a routine to increment counters. The method used to generate the new source file is as follows. First, the entire file is passed through the parse function. Then the expressions generated by the parse function are fathomed by a routine that inserts a call to increment a counter each time a new source line is encountered and each time one of the aforementioned operators is encountered.

This rewritten source code consists of opaque eval statements and is not meant to be human-readable. The endtrack function restores your original file and generates the report, which shows the source code line and the corresponding counter. The reports have the extension mcov appended; in the above example, the reports would be named mysourcecode.jl.mcov and myothersourcecode.jl.mcov.

Interpreting the report

Here are some examples of lines from the .mcov file and what they mean:

                * function cmp3(o::Ordering,
                *               treenode::TreeNode,
                *               k,
                *               isleaf::Bool)
L167      70360  ? ( 1640 ) : ( 68720 ( 68720 ( 68720 ) && ( 34623 )) || ( 68688 ) ? ( 716 ) : ( 68004 ))
                *     (lt(o, k, treenode.splitkey1))? 1 :
                *     (((isleaf && treenode.child3 == 2) ||
                *       lt(o, k, treenode.splitkey2))? 2 : 3)
                * end

All of these lines are copies of source lines (source lines are preceded with an asterisk) except for the line marked L167. This line is interpreted as follows: L167 means source line number 167. The line was executed 70360 times. The line has a ?: operator. The first branch of the operator was executed 1640 times while the second was executed 68720 times. Meanwhile, the second branch involves an || operator; the first argument of this || operator was executed 68720 times while the second was executed 68688 times. These branches have further nested calling inside of them.

For statement functions, the coverage routine tells how many times they were invoked:

L195      10 ( 10 ) && ( 6 )(called 10 time(s))
                * eq(o::Ordering, a, b) = !lt(o, a, b) && !lt(o, b, a)

This statement function was invoked 10 times. It has an internal branch; the first branch was invoked 10 times, while the second was invoked 6 times.


The microcoverage module uses several undocumented aspects of the Expr type and the parse function. These aspects were discovered via trial and error. This means that they may change in a future version of Julia, so the module is rather fragile.

The module must be loaded at the REPL level, not inside any other module. The reason is that the invocations to the counter-incrementing routine that are scattered through the instrumented code are of the following form: Main.microcoverage.incrtrackarray(nn). Therefore, if the microcoverage module is nested inside of some other module, then the incrtrackarray function won't be found.

The package does not work if the instrumented code is run in a forked process. This is because the global variable associated with the incrtrackarray routine will not be known to the other process. In particular, this means that the microcoverage module does not work if the instrumented code is run via Julia's package-testing mechanism: Pkg.test("mymodule"). Instead, it is necessary to run the test within the same process using a statement like this:

include(joinpath(Pkg.dir("mymodule"), "test", "runtests.jl"))

Once a begintrack instruction is executed, the microcoverage module should not be reloaded until after the corresponding call to endtrack because the global variables keeping track of the instrumented code are lost during the reloading process. If it is necessary to reload microcoverage after a begintrack instruction, then the source code should be restored using the restore function provided in the module, as in the following snippet:

using microcoverage
include("microcoverage.jl")   # oops, global variables reset
                              # knowledge of mysourcecode lost!
using microcoverage
restore("mysourcecode.jl")    # restore the original version
begintrack("mysourcecode.jl") # should be good to go now

Download Details:

Author: StephenVavasis
Source Code: 
License: MIT license

#julia #cover #modules 
