As we experiment with many UI-based frameworks, node_modules folders come to occupy a large portion of the disk, and after a few years free disk space starts to run low.
My laptop has a 500GB SSD; I recently ran out of space and started to analyze what was occupying so much of it. I do not store personal images and documents on my laptop, at least not more than 1-2 GB.
When I analyzed my code folder, I realized that experiments I did years ago were still on my machine and needed some cleanup. The easiest way to remove unused data without any kind of loss was to delete the huge node_modules folder in every project I was not currently using.
For this purpose, I wrote the PowerShell one-liner below to clean up all the node_modules folders in my code folder:
Get-ChildItem -Path . -Filter node_modules -Recurse | Remove-Item -Force -Recurse
The "Get-Childitem" here gets all the folders with path "." (which is the current folder I am running the script in) with filter as folder name "node_module" and pipes in into "Remove-Item" command. This command recursively finds all the files and folders inside and deletes them without confirmation. ("Force" parameter is added to avoid asking yes/no before deleting each folder)
After executing this command I was able to recover more than 20 GB of disk space.
Original article source at: https://www.c-sharpcorner.com/
Here's how you can fix the missing write access to /usr/local/lib/node_modules error.
When you try to install an npm package globally using the npm install -g <package name> command, you may find the following error:
npm WARN checkPermissions Missing write access to /usr/local/lib/node_modules
The warning above means that your current terminal user doesn't have "write access" to the /usr/local/lib/node_modules folder. Because you can't write any new files or folders to the node_modules folder, npm won't be able to complete the installation.
You should see other errors below the warning as a consequence of the warning:
npm ERR! code EACCES
npm ERR! syscall mkdir
npm ERR! path /usr/local/lib/node_modules/rimraf
npm ERR! errno -13
npm ERR! Error: EACCES: permission denied, ...
npm ERR!
npm ERR! The operation was rejected by your operating system.
Without the write access, npm will not be able to create folders and write the files for the package you are trying to install.
There are three ways to fix this error: running npm with sudo, changing the owner of the node_modules folder, or installing NodeJS through NVM. This tutorial will walk you through each of them.
You can run the npm install command as root user by adding sudo before the command:
sudo npm install -g @angular/cli
The sudo command allows you to execute a terminal command as the "root" user. You will be asked for your password when you invoke this command.
By calling the sudo command, the error with code EACCES should be resolved, because the root user has access to everything on your computer.
The drawback of this method is that you also need to add sudo when you want to uninstall the package later. As an alternative, you can change the owner of the node_modules folder instead.
The second way is to change the owner of the node_modules folder, which is owned by the user "root" by default. The following command will change the owner of the folder to your current user; I will explain the command below:
sudo chown -R $USER /usr/local/lib/node_modules
The chown command is used to change the owner of the folder. The -R option means that the change owner command will be executed recursively, changing not only the node_modules folder's owner but also the owners of the files and folders inside it. Then, $USER is an environment variable that will be replaced with the username you used to log in to your computer. Finally, the folder path /usr/local/lib/node_modules is included to tell the terminal to change the owner of that folder.
By running the command above, you will be able to install npm packages again because the node_modules folder now belongs to your current user.
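To confirm the change took effect, you can check the folder's owner before installing again; this quick check is my addition, not part of the original article:

# The owner column should now show your username instead of root.
ls -ld /usr/local/lib/node_modules
# A global install should now succeed without sudo.
npm install -g @angular/cli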
But rather than running the command and changing the owner of the folder, I’d recommend you install NVM instead.
NVM, or Node Version Manager, is software that's designed to be installed on a per-user basis.
It allows you to install multiple versions of NodeJS on your computer so that you can upgrade or downgrade your NodeJS version as needed.
The reason why using NVM fixes the missing write access error is that, by default, NVM installs NodeJS versions to a folder under your current user.
For example, my NodeJS version is currently installed under the /Users/nsebhastian/ folder:
nvm which current
/Users/nsebhastian/.nvm/versions/node/v10.19.0/bin/node
When you install global packages using NVM, the package will be installed under the version's lib/ folder:
/Users/nsebhastian/.nvm/versions/node/v10.19.0/lib/node_modules
By using NVM, you will have the ability to install different versions of NodeJS on your computer and you will automatically fix the missing write access error. I’m currently using it for my computer, and I’d recommend you to use it too 😉
You can learn how to install NVM from the NVM documentation.
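Once NVM is installed, a typical flow looks like the sketch below. The nvm install and nvm use commands are real NVM commands; the LTS version and the Angular CLI package are just example choices:

# Install and activate the latest long-term-support NodeJS version.
nvm install --lts
nvm use --lts
# Global installs now go under ~/.nvm in your home folder, so no sudo is needed.
npm install -g @angular/cli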
Now you’ve learned how to resolve the permission denied error message caused by missing write access. Nice work! 👍
Original article source at: https://sebhastian.com/
Learn about the .gitignore file and how to use it to exclude node_modules from Git tracking.
The node_modules/ folder is the folder where all packages required by your JavaScript project are downloaded and installed. This folder is commonly excluded from a remote repository because it has a large size and you shouldn't add code that you didn't write to the repository.
Rather than including the node_modules/ folder, you should list the packages required by your project in the package.json file and ignore the node_modules/ folder using the .gitignore file.
A .gitignore file is a plain text file where you can write a list of patterns for files or folders that Git must not track in your project. It's commonly used to exclude auto-generated files in your project.
To ignore the node_modules/ folder, you simply need to write the folder name inside the .gitignore file:
node_modules/
And with that, your node_modules/ folder will be ignored by Git. This works even when you have multiple node_modules/ folders located inside other subfolders.
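If you want to verify the rule, git check-ignore reports which .gitignore pattern matches a given path; the paths below are hypothetical examples:

git check-ignore -v node_modules/lodash/package.json
git check-ignore -v packages/app/node_modules/react/index.js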
If you already have the node_modules/ folder committed and tracked by Git, you can make Git forget about the folder by removing it from the Git cache with the following command:
git rm -r --cached .
The command above will remove all tracked files from the Git cache, so you need to add them back using git add ., where the dot (.) will add all files in the folder while still excluding those listed in the .gitignore file.
Next, you just need to commit the changes and push them to your remote repository. Here's the full sequence of git commands:
git rm -r --cached .
git add .
git commit -m "Remove node_modules folder"
git push
And that’s how you ignore the node_modules/
folder using .gitignore
file. Feel free to modify the commands above as you require 😉
Original article source at: https://sebhastian.com/
Learn how to remove the entire folder or just specific packages from your node_modules folder
The node_modules folder is where all the packages downloaded from npm for your JavaScript project are saved on your computer. Developers are always recommended to do a fresh install with npm install each time they download a JavaScript project to their computer.
Still, there might be cases when you want to remove the folder from your computer, such as when copying the project to a hard drive for backup or when removing unused packages. This tutorial will show you how to remove specific npm packages and how to remove the whole node_modules folder from your local computer.
First, let’s see how to remove the node_modules
folder entirely.
To delete the node_modules folder from your JavaScript project, you can use the following command on Mac / Linux:
rm -rf node_modules
The command above will first delete the contents of node_modules recursively until all of it is deleted, then remove the node_modules folder itself.
If you have multiple node_modules folders in many different projects, you can first use the following command to find the folders and see how much space they occupy.
Go to the parent folder of all your node_modules folders and run the following command:
find . -name "node_modules" -type d -prune | xargs du -chs
It will find all node_modules folders located inside the folder and all its subfolders. Here's an example output from my computer:
$ find . -name "node_modules" -type d -prune | xargs du -chs
130M ./server-components-demo/node_modules
244M ./single-spa-basic/node_modules
244M ./react-boilerplate/node_modules
As you can see from the output above, I have three node_modules folders inside three different subfolders. To delete them all, I need to use the following command:
find . -name "node_modules" -type d -prune -exec rm -rf '{}' +
The command will find every node_modules directory inside the folder and its subfolders and execute the rm -rf command on each match.
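If you want to double-check before deleting anything, the same find expression with -print instead of -exec simply lists the folders that would be removed; this preview step is my suggestion, not part of the original article:

find . -name "node_modules" -type d -prune -print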
On Windows, removing the node_modules folder may cause an error saying the source file names are larger than what is supported by the file system:
The source file name(s) are larger than is supported by the file system.
Try moving to a location which has a shorter path name,
or try renaming to shorter name(s) before attempting this operation
When you see this error, you need to remove the node_modules folder by using the rimraf npm package.
If you’re using npm version 5.2.0 or above, then you can use npm package runner called npx
to run rimraf
without installing as follows:
npx rimraf node_modules
If your npm version is lower than 5.2.0, then you need to install the package globally using npm as follows:
npm install -g rimraf
Then remove the node_modules folder with the following command:
rimraf node_modules
If you have many node_modules folders, you can go to the parent folder that contains all the node_modules folders and execute rimraf with the following pattern:
rimraf ./**/node_modules
The double asterisk (**) pattern makes rimraf find all node_modules folders recursively and delete them all.
At times, you may want to remove specific npm packages from your node_modules folder. You can do so by using the npm uninstall command as shown below:
npm uninstall <package name>
Or you can remove the package name manually from the package.json file and run another npm install command. The npm install command will check your node_modules folder and remove packages that are not listed as a dependency in the package.json file (you can also run npm prune to remove extraneous packages explicitly).
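For example, to drop a single dependency and then clean out anything no longer listed (the package name is just an illustration):

npm uninstall lodash   # removes the package and updates package.json
npm prune              # removes packages not listed in package.json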
Now you know how to remove the node_modules folder and how to remove specific packages from it 😉
Original article source at: https://sebhastian.com/
This package is not currently registered, but you can use the REPL to clone it for yourself with
julia> Pkg.clone("git://github.com/davidagold/MetaMerge.jl.git")
The package exports fmerge!(), which takes a function along with (::Module, ::Function) arguments naming the functions whose methods to merge into it (see below). Suppose we create a function f in Main:
julia> f() = nothing
f (generic function with 1 method)
Suppose also that we intend to use the following modules A and B:
module A
export f
immutable Foo end
f(::Foo) = print("This is Foo.")
f(x::Int64) = x
end
module B
export f
immutable Bar end
f(::Bar) = print("This is Bar.")
f(x::Int64) = 2x
end
As of Julia 0.3.7, unqualified use of a name common to both modules -- say, the name 'f' -- will elicit behavior that depends on the order in which we declare to be using the modules:
julia> using A, B
Warning: using A.f in module Main conflicts with an existing identifier.
Warning: using B.f in module Main conflicts with an existing identifier.
julia> methods(f)
# 1 method for generic function "f":
f() at none:1
julia> f(A.Foo())
ERROR: `f` has no method matching f(::Foo)
julia> A.f(A.Foo())
This is Foo.
But suppose we want unqualified use of 'f' to refer to the correct object --- either f, A.f or B.f --- depending on the signature of the argument on which f is called. The present "package" offers this functionality through the fmerge!() function, which "merges" the methods of A.f and B.f into our original function f as defined in Main. (At its core, this is just extending the f defined in Main.) This allows unqualified use of the name 'f' to dispatch on signatures for which methods are defined in other modules:
julia> fmerge!(f, (A,f), (B,f))
julia> methods(f)
# 3 methods for generic function "f":
f() at none:1
f(x1_A::Foo)
f(x1_B::Bar)
julia> f(A.Foo())
This is Foo.
julia> f(B.Bar())
This is Bar.
For merged methods with at least one argument, the name of the module from which the method originates is appended to the first argument in the method definition, as can be seen above. This can help one keep track of which methods come from which modules. However, this machinery only keeps track of the most recent module from which the method originates. If a method has been merged multiple times through multiple modules, its ultimate origin will be obscured.
Note that no method for the signature (x::Int64,) was merged, since both A.f and B.f have methods for this signature. To choose one to merge, use the optional priority keyword argument, which takes an array of (::Module, ::Function) tuples in order of priority rank:
julia> fmerge!(f, (A,f), (B,f), priority=[(A,f)])
julia> methods(f)
# 4 methods for generic function "f":
f() at none:1
f(x1_A::Foo)
f(x1_B::Bar)
f(x1_A::Int64)
julia> f(3)
3
If, for a given signature, a method exists in both Module1.f and Module2.f, then the method from whichever of (Module1, f), (Module2, f) has the greater rank (so lower numerical rank, e.g. 1 is greatest) will be merged. (::Module, ::Function) arguments passed to fmerge!() but omitted from priority are by default given the lowest possible rank. If (Module1, f) and (Module2, f) have the same rank (which will only occur if they are not specified in priority), then neither method will be merged. This means that if one omits the priority argument, then only those methods whose signatures unambiguously specify precisely one of the (::Module, ::Function) arguments passed to fmerge!() will be merged.
WARNING: As of yet I haven't figured out how to use reflection to distinguish between otherwise identical signatures with user-defined types of the same name. Thus if module B above also defined a Foo type and defined a method for f(::Foo), these two methods would be seen to conflict by fmerge!().
One can call fmerge!() in modules other than Main:
module C
export f
using MetaMerge, A, B
f(::Union()) = nothing
fmerge!(f, (A,f), (B,f), conflicts_favor=A)
h(x::Int64) = f(x)
end
The result is that unqualified use of f in the module C will dispatch across methods defined for A.f and B.f. We can check this in the REPL:
julia> methods(C.f)
# 4 methods for generic function "f":
f(::None) at none:5
f(x1_A::Foo)
f(x1_A::Int64)
f(x1_B::Bar)
julia> C.h(2)
2
I hope that this versatility makes fmerge!() suitable for more general use outside the REPL.
One is also free to fmerge!() functions of different names, as well as functions from the same module. merge!() accepts any number of (Module, Function) arguments, so one can call merge!(f, (A,f)) or merge!(f, (A,f), (B,f), (C,f)), merging each set of names individually. In the future, there should be a mergeall() function that automatically merges all commonly named functions between two modules, e.g. mergeall(A, B, conflicts_favor=A) generates a list of function names common to A and B and merge!s them.

Author: Davidagold
Source Code: https://github.com/davidagold/MetaMerge.jl
License: View license
yarn add --dev sucrase # Or npm install --save-dev sucrase
node -r sucrase/register main.ts
Using the ts-node integration:
yarn add --dev sucrase ts-node typescript
./node_modules/.bin/ts-node --transpiler sucrase/ts-node-plugin main.ts
Sucrase is an alternative to Babel that allows super-fast development builds. Instead of compiling a large range of JS features to be able to work in Internet Explorer, Sucrase assumes that you're developing with a recent browser or recent Node.js version, so it focuses on compiling non-standard language extensions: JSX, TypeScript, and Flow. Because of this smaller scope, Sucrase can get away with an architecture that is much more performant but less extensible and maintainable. Sucrase's parser is forked from Babel's parser (so Sucrase is indebted to Babel and wouldn't be possible without it) and trims it down to a focused subset of what Babel solves. If it fits your use case, hopefully Sucrase can speed up your development experience!
Sucrase has been extensively tested. It can successfully build the Benchling frontend code, Babel, React, TSLint, Apollo client, and decaffeinate with all tests passing, about 1 million lines of code total.
Sucrase is about 20x faster than Babel. Here's one measurement of how Sucrase compares with other tools when compiling the Jest codebase 3 times, about 360k lines of code total:
            Time          Speed
Sucrase     0.57 seconds  636975 lines per second
swc         1.19 seconds  304526 lines per second
esbuild     1.45 seconds  248692 lines per second
TypeScript  8.98 seconds  40240 lines per second
Babel       9.18 seconds  39366 lines per second
Details: Measured on July 2022. Tools run in single-threaded mode without warm-up. See the benchmark code for methodology and caveats.
The main configuration option in Sucrase is an array of transform names. These transforms are available:
- jsx: Transforms JSX syntax to React.createElement calls, e.g. <div a={b} /> becomes React.createElement('div', {a: b}). Behaves like Babel 7's React preset, including adding createReactClass display names and JSX context information.
- typescript: Compiles TypeScript code to JavaScript, removing type annotations. Because Sucrase compiles each file in isolation, you should enable the isolatedModules TypeScript flag so that the typechecker will disallow the few features like const enums that need cross-file compilation.
- flow: Removes Flow type annotations. Does not check types.
- imports: Transforms ES Modules (import/export) to CommonJS (require/module.exports) using the same approach as Babel and TypeScript with --esModuleInterop. If preserveDynamicImport is specified in the Sucrase options, then dynamic import expressions are left alone, which is particularly useful in Node to load ESM-only libraries. If preserveDynamicImport is not specified, import expressions are transformed into a promise-wrapped call to require.
- react-hot-loader: Performs the equivalent of the react-hot-loader/babel transform in the react-hot-loader project. This enables advanced hot reloading use cases such as editing of bound methods.
- jest: Hoists desired jest method calls above imports. Does not validate the arguments passed to jest.mock, but the same rules still apply.

When the imports transform is not specified (i.e. when targeting ESM), the injectCreateRequireForImportRequire option can be specified to transform TS import foo = require("foo"); in a way that matches the TypeScript 4.7 behavior with module: nodenext.
These newer JS features are transformed by default:
- Optional chaining: a?.b
- Nullish coalescing: a ?? b
- Class fields: class C { x = 1; }. This includes static fields but not the #x private field syntax.
- Numeric separators: const n = 1_234;
- Optional catch binding: try { doThing(); } catch { }.

If your target runtime supports these features, you can specify disableESTransforms: true so that Sucrase preserves the syntax rather than trying to transform it. Note that transpiled and standard class fields behave slightly differently; see the TypeScript 3.7 release notes for details. If you use TypeScript, you can enable the TypeScript option useDefineForClassFields to enable error checking related to these differences.
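As a quick sketch of how these options fit together (the input snippet is arbitrary; this uses the same transform API shown later in this README):

import {transform} from "sucrase";

const code = "const greet = (name?: string) => name ?? 'world';";
// Strip TypeScript types only; disableESTransforms keeps ?? as-is for modern runtimes.
const result = transform(code, {
  transforms: ["typescript"],
  disableESTransforms: true,
});
console.log(result.code);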
All JS syntax not mentioned above will "pass through" and needs to be supported by your JS runtime. For example, throw expressions, generator arrow functions, and do expressions are all unsupported in browsers and Node (as of this writing), and Sucrase doesn't make an attempt to transpile them.

By default, JSX is compiled to React functions in development mode. This can be configured with a few options:
"classic"
(default): The original JSX transform that calls React.createElement
by default. To configure for non-React use cases, specify:React.createElement
.React.Fragment
."automatic"
: The new JSX transform introduced with React 17, which calls jsx
functions and auto-adds import statements. To configure for non-React use cases, specify:react
.true
, use production version of functions and don't include debugging information. When using React in production mode with the automatic transform, this must be set to true to avoid an error about jsxDEV
being missing.Two legacy modes can be used with the imports
transform:
- Legacy TypeScript interop: use the default TypeScript approach to CommonJS interop instead of assuming that TypeScript's --esModuleInterop flag is enabled. For example, if a CJS module exports a function, legacy TypeScript interop requires you to write import * as add from './add';, while Babel, Webpack, Node.js, and TypeScript with --esModuleInterop require you to write import add from './add';. As mentioned in the docs, the TypeScript team recommends you always use --esModuleInterop.
- Legacy Babel 5 interop: use the Babel 5 approach to CommonJS interop, so that you can run require('./MyModule') instead of require('./MyModule').default. Analogous to babel-plugin-add-module-exports.

The most robust way is to use the Sucrase plugin for ts-node, which has various Node integrations and configures Sucrase via tsconfig.json:
ts-node --transpiler sucrase/ts-node-plugin
For projects that don't target ESM, Sucrase also has a require hook with some reasonable defaults that can be accessed in a few ways:
require("sucrase/register");
node -r sucrase/register main.ts
sucrase-node main.ts
For simple use cases, Sucrase comes with a sucrase CLI that mirrors your directory structure to an output directory:
sucrase ./srcDir -d ./outDir --transforms typescript,imports
For any advanced use cases, Sucrase can be called from JS directly:
import {transform} from "sucrase";
const compiledCode = transform(code, {transforms: ["typescript", "imports"]}).code;
Sucrase is intended to be useful for the most common cases, but it does not aim to have nearly the scope and versatility of Babel. Some specific examples:
- const enums are treated as regular enums rather than inlining across files.

See the Project Vision document for more details on the philosophy behind Sucrase.
As JavaScript implementations mature, it becomes more and more reasonable to disable Babel transforms, especially in development when you know that you're targeting a modern runtime. You might hope that you could simplify and speed up the build step by eventually disabling Babel entirely, but this isn't possible if you're using a non-standard language extension like JSX, TypeScript, or Flow. Unfortunately, disabling most transforms in Babel doesn't speed it up as much as you might expect. To understand, let's take a look at how Babel works:

1. Tokenize the input source code into a token stream.
2. Parse the token stream into an AST.
3. Walk the AST to compute the scope information for each variable.
4. Apply all transform plugins in a single traversal, resulting in a new AST.
5. Print the resulting AST.

Only step 4 gets faster when disabling plugins, so there's always a fixed cost to running Babel regardless of how many transforms are enabled.
Sucrase bypasses most of these steps, and works like this:

1. Tokenize the input source code into a token stream using a fork of Babel's tokenizer.
2. Scan through the tokens, computing preliminary information like all imported/exported names.
3. Run the transform by doing a pass through the tokens and performing a number of careful string replacements, like replacing <Foo with React.createElement(Foo.

Because Sucrase works on a lower level and uses a custom parser for its use case, it is much faster than Babel.
Contributions are welcome, whether they be bug reports, PRs, docs, tests, or anything else! Please take a look through the Contributing Guide to learn how to get started.
Sucrase is an enzyme that processes sugar. Get it?
Author: Alangpierce
Source Code: https://github.com/alangpierce/sucrase
License: MIT license
In today's post we will learn about 8 of the best libraries for module definition and loading in JavaScript.
Module Systems provide a way to manage dependencies in JavaScript.
In vanilla client-side JavaScript development, dependencies are implicit: they have to be managed manually, and sometimes they must also be included in a certain order.
Node.js Modules is an extension to CommonJS Modules (1.1).
Asynchronous Module Definition (AMD) is the most popular for client-side code.
Universal Module Definition (UMD) is a set of boilerplate recipes that attempt to bridge the differences between AMD and Node.js.
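To make the difference concrete, here is the same trivial module consumed in the CommonJS style used by Node.js and in the AMD style; the module name and function are placeholders:

// double.js, a CommonJS module: exports via module.exports
module.exports = function (n) { return n * 2; };

// CommonJS consumer (Node.js): synchronous require
var double = require('./double.js');
console.log(double(21)); // 42

// AMD consumer (e.g. RequireJS): asynchronous define with a dependency list
define(['double'], function (double) {
  console.log(double(21)); // 42
});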
First up is RequireJS. Its test setup assumes some other repos are checked out as siblings to the requirejs repo:
git clone https://github.com/requirejs/text.git
git clone https://github.com/requirejs/i18n.git
git clone https://github.com/requirejs/domReady.git
git clone https://github.com/requirejs/requirejs.git
When the above clones are done, the four repo directories should sit side by side.
You will need to be connected to the internet because the JSONP and remoteUrls tests access the internet to complete their tests.
Serve the directory with these 4 siblings from a web server. It can be a local web server.
Open requirejs/tests/index.html in all the browsers, click the arrow button to run all the tests.
If you're new to browserify, check out the browserify handbook and the resources on browserify.org.
example

Whip up a file, main.js, with some require()s in it. You can use relative paths like './foo.js' and '../lib/bar.js', or module paths like 'gamma' that will search node_modules/ using node's module lookup algorithm.
var foo = require('./foo.js');
var bar = require('../lib/bar.js');
var gamma = require('gamma');
var elem = document.getElementById('result');
var x = foo(100) + bar('baz');
elem.textContent = gamma(x);
Export functionality by assigning onto module.exports or exports:
module.exports = function (n) { return n * 111 }
Now just use the browserify command to build a bundle starting at main.js:
$ browserify main.js > bundle.js
All of the modules that main.js needs are included in bundle.js from a recursive walk of the require() graph using required.
To use this bundle, just toss a <script src="bundle.js"></script> into your html!
install
With npm do:
npm install browserify
Sea.js is a module loader for the web. It is designed to change the way that you organize JavaScript. With Sea.js, it is a pleasure to build scalable web applications.
The official site: https://seajs.github.io/seajs/
If you have any questions, please feel free to ask through New Issue.
Make sure the problem you're addressing is reproducible. Use http://jsbin.com/ or http://jsfiddle.net/ to provide a test page. Indicate what browsers the issue can be reproduced in. What version of Sea.js is the issue reproducible in. Is it reproducible after updating to the latest version?
Responsive Design, Feature Detections, and Resource Loading
LazyLoad is a tiny (only 966 bytes minified and gzipped), dependency-free JavaScript utility that makes it super easy to load external JavaScript and CSS files on demand.
Whenever possible, LazyLoad will automatically load resources in parallel while ensuring execution order when you specify an array of URLs to load. In browsers that don't preserve the execution order of asynchronously-loaded scripts, LazyLoad will safely load the scripts sequentially.
Using LazyLoad is simple. Just call the appropriate method -- css() to load CSS, js() to load JavaScript -- and pass in a URL or array of URLs to load. You can also provide a callback function if you'd like to be notified when the resources have finished loading, as well as an argument to pass to the callback and a context in which to execute the callback.
// Load a single JavaScript file and execute a callback when it finishes.
LazyLoad.js('http://example.com/foo.js', function () {
alert('foo.js has been loaded');
});
// Load multiple JS files and execute a callback when they've all finished.
LazyLoad.js(['foo.js', 'bar.js', 'baz.js'], function () {
alert('all files have been loaded');
});
// Load a CSS file and pass an argument to the callback function.
LazyLoad.css('foo.css', function (arg) {
alert(arg);
}, 'foo.css has been loaded');
// Load a CSS file and execute the callback in a different scope.
LazyLoad.css('foo.css', function () {
alert(this.foo); // displays 'bar'
}, null, {foo: 'bar'});
Other browsers may work, but haven't been tested. It's a safe bet that anything based on a recent version of Gecko or WebKit will probably work.
old school - blocks CSS, Images, AND JS!
<script src="jquery.js"></script>
<script src="my-jquery-plugin.js"></script>
<script src="my-app-that-uses-plugin.js"></script>
middle school - loads as non-blocking, but has multiple dependents
$script('jquery.js', function () {
$script('my-jquery-plugin.js', function () {
$script('my-app-that-uses-plugin.js')
})
})
new school - loads as non-blocking, and ALL js files load asynchronously
// load jquery and plugin at the same time. name it 'bundle'
$script(['jquery.js', 'my-jquery-plugin.js'], 'bundle')
// load your usage
$script('my-app-that-uses-plugin.js')
/*--- in my-jquery-plugin.js ---*/
$script.ready('bundle', function() {
// jquery & plugin (this file) are both ready
// plugin code...
})
/*--- in my-app-that-uses-plugin.js ---*/
$script.ready('bundle', function() {
// use your plugin :)
})
The systemjs-examples repo contains a variety of examples demonstrating how to use SystemJS.
npm install systemjs
You can load System.register modules with a script element in your HTML:
<script src="system.js"></script>
<script type="systemjs-module" src="/js/main.js"></script>
<script type="systemjs-module" src="import:name-of-module"></script>
You can also dynamically load modules at any time with System.import():
System.import('/js/main.js');
where main.js is a module available in the System.register module format.
For an example of a bundling workflow, see the Rollup Code Splitting starter project - https://github.com/rollup/rollup-starter-code-splitting.
Note that when building System modules you typically want to ensure anonymous System.register statements like:
System.register([], function () { ... });
are emitted, as these can be loaded in a way that behaves the same as normal ES modules, and not named register statements like:
System.register('name', [], function () { ... });
While these can be supported with the named register extension, this approach is typically not recommended for modern modules workflows.
<script src="lodjs.js"></script>
$ bower install lodjs
$ bower install git://github.com/yanhaijing/lodjs.git
We use lodJS's global function define to define a module. For example, in mod.js we have the following code:
define(function () {
return 123;
});
The use method in lodJS uses a module. The following code can use the module defined above:
lodjs.use('mod', function (mod) {
console.log(mod);// Outputs 123
});
For more examples, please see the directory of demo.
Thank you for following this article.
Open Document Format Spreadsheet (ODS) I/O for Julia using the python ezodf module.
It allows to export (import) data from (to) Julia to (from) LibreOffice, OpenOffice and any other spreadsheet software that implements the OpenDocument specifications.
Pkg.add("OdsIO")
This package provides the following functions:
ods_readall(filename_or_stream; <keyword arguments>)
Return a dictionary of tables|dictionaries|dataframes indexed by position or name in the original OpenDocument Spreadsheet (.ods) file or stream.
- filename_or_stream: file name, or stream as Vector{UInt8}
- sheetsNames=[]: the list of sheet names from which to import data.
- sheetsPos=[]: the list of sheet positions (starting from 1) from which to import data.
- ranges=[]: a list of pairs of tuples defining the ranges in each sheet from which to import data, in the format ((tlr,tlc),(brr,brc))
- innerType="Matrix": the type of the inner container returned. Either "Matrix", "Dict" or "DataFrame".

The whole sheet is imported where a range is not given.
is not givenjulia> outDic = ods_readall("spreadsheet.ods";sheetsPos=[1,3],ranges=[((1,1),(3,3)),((2,2),(6,4))], innerType="Dict")
Dict{Any,Any} with 2 entries:
3 => Dict{Any,Any}(Pair{Any,Any}("c",Any[33.0,43.0,53.0,63.0]),Pair{Any,Any}("b",Any[32.0,42.0,52.0,62.0]),Pair{Any,Any}("d",Any[34.0,44.0,54.…
1 => Dict{Any,Any}(Pair{Any,Any}("c",Any[23.0,33.0]),Pair{Any,Any}("b",Any[22.0,32.0]),Pair{Any,Any}("a",Any[21.0,31.0]))
julia> data = @pipe HTTP.get("https://github.com/sylvaticus/OdsIO.jl/raw/master/test/spreadsheet.ods").body |> ods_readall(_)
Dict{Any, Any} with 3 entries:
"Sheet1" => Any["h1" "h2" "h3"; "a" "b" "c"; "aa" "bb" "cc"]
"Sheet2" => Any["a" "b" "c"; 21 22 23; 31 32 33]
"Sheet3" => Any[nothing nothing nothing nothing; nothing "b" "c" "d"; … ; nothing 52 53 54; nothing 62 63 64]
ods_read(filename_or_stream; <keyword arguments>)
Return a table|dictionary|dataframe from a sheet (or range within a sheet) in a OpenDocument Spreadsheet (.ods) file or stream.
- filename_or_stream: file name, or stream as Vector{UInt8}
- sheetName=nothing: the sheet name from which to import data.
- sheetPos=nothing: the position of the sheet (starting from 1) from which to import data.
- ranges=[]: a pair of tuples defining the range in the sheet from which to import data, in the format ((tlr,tlc),(brr,brc))
- retType="Matrix": the type of container returned. Either "Matrix", "Dict" or "DataFrame".

The whole sheet is imported where a range is not given.
is not givenjulia> df = ods_read("spreadsheet.ods";sheetName="Sheet2",retType="DataFrame")
3×3 DataFrames.DataFrame
│ Row │ x1 │ x2 │ x3 │
├─────┼──────┼──────┼──────┤
│ 1 │ "a" │ "b" │ "c" │
│ 2 │ 21.0 │ 22.0 │ 23.0 │
│ 3 │ 31.0 │ 32.0 │ 33.0 │
julia> data = @pipe HTTP.get("https://github.com/sylvaticus/OdsIO.jl/raw/master/test/spreadsheet.ods").body |> ods_read(_)
3×3 Matrix{Any}:
"h1" "h2" "h3"
"a" "b" "c"
"aa" "bb" "cc"
ods_write(filename,data)
Write tabular data (2D Array, DataFrame or Dictionary) to OpenDocument spreadsheet format.
- filename: an existing ods file, or the one to create.
- data=Dict(): a dictionary of locations in the file where to export the data => the actual data (see notes).

Note that formulas are not recalculated on export: if you write to cell A1 and cell A2 depends on A1, the content of cell A2 may not be updated after the export. In such a case most spreadsheet software has a command to force recalculation of cells (e.g. in LibreOffice/OpenOffice use CTRL+Shift+F9).

julia> ods_write("TestSpreadsheet.ods",Dict(("TestSheet",3,2)=>[[1,2,3,4,5] [6,7,8,9,10]]))
Pkg.test("OdsIO")
This provides tests to check that both the Julia 'OdsIO' and Python 'ezodf' modules are correctly installed. It may return an error if the file system is not writeable.
This package requires Python and the Python ezodf module (if the ezodf module is not available and you have no access to the local python installation, you can use PyCall to try to install ezodf using pip; see here).

Author: Sylvaticus
Source Code: https://github.com/sylvaticus/OdsIO.jl
License: View license
BDF.jl is a Julia module to read/write BIOSEMI 24-bit BDF files (used for storing electroencephalographic recordings).
Usage:
bdfHeader = readBDFHeader("res1.bdf") #read the bdf header
sampRate = bdfHeader["sampRate"][1] #get the sampling rate
#read the data, the event table, the trigger channel and the status channel
dats, evtTab, trigs, statusChan = readBDF("res1.bdf")
Load the module
using BDF
To read an entire BDF recording
dats, evtTab, trigChan, sysCodeChan = readBDF("res1.bdf")
dats is the nChannels X nSamples matrix containing the data. Note that the triggers are not contained in the dats matrix. The triggers can be retrieved either through the event table (evtTab), or through the raw trigger channel (trigChan). The event table is a dictionary containing the trigger codes evtTab["code"], the trigger indexes evtTab["idx"] (i.e. the sample numbers at which triggers occurred in the recording), and the trigger durations evtTab["dur"] (in seconds). The raw trigger channel returned in trigChan contains the trigger code for each recording sample. Additional Biosemi status codes (like CM in/out-of range, battery low/OK) are returned in sysCodeChan.
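For instance, a common first step is to locate all triggers with a given code. A minimal sketch using the fields documented above (the trigger code 1 is an arbitrary placeholder):

# sample indexes at which triggers with code 1 occurred
idxs = evtTab["idx"][evtTab["code"] .== 1]
# convert sample indexes to seconds using the sampling rate read from the header
times = (idxs .- 1) ./ sampRate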
You can also read only part of a recording, the following code will read the first 10 seconds of the recording:
dats, evtTab, trigChan, statChan = readBDF("res1.bdf", from=0, to=10)
The readBDFHeader function can be used to get information on the BDF recording:
bdfInfo = readBDFHeader("res1.bdf")
Get the duration of the recording:
bdfInfo["duration"]
Get the sampling rate of each channel:
bdfInfo["sampRate"]
Get the channel labels:
bdfInfo["chanLabels"]
To read the information stored in the status channel you can use the decodeStatusChannel function:
statusChanInfo = decodeStatusChannel(sysCodeChan)
this will return a dictionary with several arrays that indicate for each sample of the recordings whether CMS was in range, whether the battery charge was low, the speedmode of the system, and other information stored in the status channel.
Beware that BDF.jl does not check that you have sufficient RAM to read all the data in a BDF file. If you try to read a file that is too big for your hardware, your system may become slow or unresponsive. Initially try reading only a small amount of data, and check how much RAM that uses.
Documentation is available here:
http://samcarcagno.altervista.org/BDF/index.html
Author: Sam81
Source Code: https://github.com/sam81/BDF.jl
License: MIT license
A Babel plugin to add a new resolver for your modules when compiling your code using Babel. This plugin allows you to add new "root" directories that contain your modules. It also allows you to setup a custom alias for directories, specific files, or even other npm modules.
This plugin can simplify the require/import paths in your project. For example, instead of using complex relative paths like ../../../../utils/my-utils, you can write utils/my-utils. It will allow you to work faster since you won't need to calculate how many levels of directory you have to go up before accessing a file.
// Use this:
import MyUtilFn from 'utils/MyUtilFn';
// Instead of that:
import MyUtilFn from '../../../../utils/MyUtilFn';
// And it also work with require calls
// Use this:
const MyUtilFn = require('utils/MyUtilFn');
// Instead of that:
const MyUtilFn = require('../../../../utils/MyUtilFn');
Install the plugin
npm install --save-dev babel-plugin-module-resolver
or
yarn add --dev babel-plugin-module-resolver
Specify the plugin in your .babelrc with the custom root or alias. Here's an example:
{
"plugins": [
["module-resolver", {
"root": ["./src"],
"alias": {
"test": "./test",
"underscore": "lodash"
}
}]
]
}
.babelrc.js version: specify the plugin in your .babelrc.js file with the custom root or alias. Here's an example:
const plugins = [
[
require.resolve('babel-plugin-module-resolver'),
{
root: ["./src/"],
alias: {
"test": "./test"
}
}
]
];
A good example: https://gist.github.com/nodkz/41e189ff22325a27fe6a5ca81df2cb91
babel-plugin-module-resolver can be configured and controlled easily, check the documentation for more details
Are you a plugin author (e.g. IDE integration)? We have documented the exposed functions for use in your plugins!
If you're using ESLint, you should use eslint-plugin-import, and eslint-import-resolver-babel-module to remove falsy unresolved modules. If you want to have warnings when aliased modules are being imported by their relative paths, you can use eslint-plugin-module-resolver.
Editor autocompletion: in Atom, the atom-autocomplete-modules package can resolve aliases when its babel-plugin-module-resolver option is enabled. In VS Code, configure the path mapping in jsconfig.json (tsconfig.json for TypeScript), e.g.:

{
"compilerOptions": {
"baseUrl": ".",
"paths": {
"*": ["src/*"],
"test/*": ["test/*"],
"underscore": ["lodash"]
}
}
}
In WebStorm, there are two workarounds. Instead of writing ../../../utils/MyUtilFn, you can mark ../../../utils as a "resources root"; this has the problem that your alias also has to be named utils. The second option is to add a webpack.config.js to your project and use it under File->Settings->Languages&Frameworks->JavaScript->Webpack. This will trick WebStorm into resolving the paths, and you can use any alias you want, e.g.:

var path = require('path');
module.exports = {
resolve: {
extensions: ['.js', '.json', '.vue'],
alias: {
utils: path.resolve(__dirname, '../../../utils/MyUtilFn'),
},
},
};
Are you also using it? Send a PR!
Author: Tleunen
Source Code: https://github.com/tleunen/babel-plugin-module-resolver
License: MIT license
Find all calls to require() by walking the AST.
example
strings_src.js:
var a = require('a');
var b = require('b');
var c = require('c');
strings.js:
var detective = require('detective');
var fs = require('fs');
var src = fs.readFileSync(__dirname + '/strings_src.js');
var requires = detective(src);
console.dir(requires);
output:
$ node examples/strings.js
[ 'a', 'b', 'c' ]
methods
var detective = require('detective');
detective(src, opts)

Give some source body src, return an array of all the require() calls with string arguments. The options parameter opts is passed along to detective.find().
detective.find(src, opts)

Give some source body src, return found with:

- found.strings - an array of each string found in a require() call
- found.expressions - an array of each stringified expression found in a require() call
- found.nodes (when opts.nodes === true) - an array of AST nodes for each argument found in a require() call

Optionally:
- opts.word - specify a different function name instead of "require"
- opts.nodes - when true, populate found.nodes
- opts.isRequire(node) - a function returning whether an AST CallExpression node is a require call
- opts.parse - supply options directly to acorn with some support for esprima-style options range and loc
- opts.ecmaVersion - default: 9

install
With npm do:
npm install detective
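For example, the opts.word option lets you scan for a custom loader function instead of require; the load() name below is just an illustration:

var detective = require('detective');

var src = "var a = load('a'); var b = load('b');";
var found = detective(src, { word: 'load' });
console.dir(found); // [ 'a', 'b' ]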
Author: Browserify
Source Code: https://github.com/browserify/detective
License: View license
A Dynamic Distributed Dimensional Data Model (D4M) module for Julia. D4M was developed in MATLAB by Dr. Jeremy Kepner and his team at Lincoln Labs. The goal is to implement D4M natively in Julia. As a course project in Numeric Computation with Julia, various parts of this implementation have been completed and compared with the original MATLAB version for performance. In the matrix performance example folder (testing performance in matrix-like operations such as add and multiply), this implementation has achieved on-par if not significantly better (10x) speed. This is thanks to the effectiveness of Julia's base library in comparison to MATLAB.
The D4M Project Page: http://www.mit.edu/~kepner/D4M/
Current Status: (v0.5) - End of course project
Next Version: (v0.6) [Mid February]
Author: Achen12
Source Code: https://github.com/achen12/D4M.jl
License: Apache-2.0 license
Start developing your NPM module in seconds ✨
Readymade boilerplate setup with all the best practices to kick start your npm/node module development.
Happy hacking =)
Features
Commands
- npm run clean - Remove the lib/ directory
- npm test - Run tests with linting and coverage results.
- npm test:only - Run tests without linting or coverage.
- npm test:watch - You can even re-run tests on file changes!
- npm test:prod - Run tests with minified code.
- npm run test:examples - Test written examples on pure JS for better understanding module usage.
- npm run lint - Run ESlint with airbnb-config
- npm run cover - Get coverage report for your code.
- npm run build - Babel will transpile ES6 => ES5 and minify the code.
- npm run prepublish - Hook for npm. Do all the checks before publishing your module.

Installation
Just clone this repo and remove the .git folder.
Author: flexdinesh
Source Code: https://github.com/flexdinesh/npm-module-boilerplate
License: MIT license
As a ‘Shiny application’, developed with ‘Shiny modules’, gets bigger, it becomes more difficult to track the relationships and to have a clear overview of the module hierarchy. supreme is a tool to help developers visualize the structure of their ‘Shiny applications’ developed with modules.
Therefore, you are able to:
Visualize relationship of modules in existing applications
Design new applications from scratch
⚠️ supreme isn't yet compatible with the new moduleServer syntax introduced in Shiny version 1.5.0 ⚠️
A graph consists of five main fields:
Module name (always required)
Module inputs (except the defaults: input, output, session)
Module outputs
Module returns
Calling modules, which are modules called within the module
library(supreme)
path <- example_app_path()
obj <- supreme(src_file(path))
graph(obj)
- name: server
calling_modules:
- items_tab_module_server: ItemsTab
- customers_tab_module_server: CustomersTab
- transactions_tab_module_server: TransactionsTab
src: app.R
- name: customers_tab_module_server
input: customers_list
output:
- paid_customers_table
- free_customers_table
src: module-customers.R
- name: items_tab_module_server
input:
- items_list
- is_fired
calling_modules:
- module_modal_dialog: ~
src: module-items.R
- name: transactions_tab_module_server
input:
- table
- button_clicked
output: transactions_table
return: transactions_keys
src: module-transactions.R
- name: module_modal_dialog
input:
- text
src: module-utils.R
There are some special rules when creating model objects with YAML:
Each entity in the model must have a name field.
The entities can have optional fields, which are defined in getOption("SUPREME_MODEL_OPTIONAL_FIELDS").

The fields defined in getOption("SUPREME_MODEL_MULTI_VAR_FIELDS") can have multiple elements, meaning that these fields can be expressed as an array.
model_yaml <- src_yaml(text = model)
obj <- supreme(model_yaml)
supreme will not properly parse the source code of your application if the server-side component is created with shinyServer(), which is soft-deprecated after a very early Shiny version of 0.10.
Similarly, although it's possible to create a Shiny application by only providing input and output arguments on the server side, supreme will not read any Shiny server-side component missing a session argument. That's a reasonable decision because modules cannot work without a session argument.
For the module returns, all return values in a module should explicitly be wrapped in return() calls.
All the possible limitations come from the fact that supreme is designed to perform static analysis on your code. Thus, some idiosyncratic Shiny application code may not be parsed as intended. For such cases, it would be great if you open an issue describing the situation with a reproducible example.
You can install the released version from CRAN:
install.packages("supreme")
Or get the development version from GitHub:
# install.packages("devtools")
devtools::install_github("strboul/supreme")
R Core Team: supreme package is brought to life thanks to R allowing abstract syntax trees (AST) that is used to practice static analysis on the code.
datamodelr: Inspiring work for creating modeling language
shinypod: Interesting thoughts regarding the implementation of Shiny modules
Author: strboul
Source Code: https://github.com/strboul/supreme
License: View license
This module computes code coverage for a Julia program at a more fine-grained level than the built-in coverage feature. Specifically, it provides coverage counts for each branch of the ||, && and ?: operators where they occur. It also counts the number of invocations to statement-functions.
Install the software in some directory and then include it at the REPL level (outermost level, either in the interpreter or in a file included by the interpreter not inside a module):
include("microcoverage.jl")
In order to refer to the functions in the module without prefixing them with the module name, use the following declaration:
using microcoverage
Next, instruct the module to instrument your source code:
begintrack("mysourcecode.jl")
begintrack("myothersourcecode.jl")
Now run your code as you normally would. Suppose, for example, that including mytestruns.jl invokes many routines whose source code sits inside mysourcecode.jl and myothersourcecode.jl, so you generate these invocations:
include("mytestruns.jl")
Finally, generate the reports:
endtrack("mysourcecode.jl")
endtrack("myothersourcecode.jl")
The microcoverage module works at the source-code level (as opposed to the standard library coverage feature, which operates close to the machine). The begintrack function copies your source code to a backup file (in the above example, the backup files would be called mysourcecode.jl.orig and myothersourcecode.jl.orig). Then it generates a new source code file (in the above case, named mysourcecode.jl and myothersourcecode.jl) that is peppered with calls to a routine to increment counters. The method used to generate the new source file is as follows. First, the entire file is passed through the parse function. Then the expressions generated by the parse function are fathomed by a routine that inserts a call to increment a counter each time a new source line is encountered and each time one of the aforementioned operators is encountered.
This rewritten source code consists of opaque eval statements and is not meant to be human-readable. The endtrack function restores your original file and generates the report, which shows the source code line and the corresponding counter. The reports have the extension mcov appended; in the above example, the reports would be named mysourcecode.jl.mcov and myothersourcecode.jl.mcov.
Here are some examples of lines from the .mcov file and what they mean:
* function cmp3(o::Ordering,
* treenode::TreeNode,
* k,
* isleaf::Bool)
L167 70360 ? ( 1640 ) : ( 68720 ( 68720 ( 68720 ) && ( 34623 )) || ( 68688 ) ? ( 716 ) : ( 68004 ))
* (lt(o, k, treenode.splitkey1))? 1 :
* (((isleaf && treenode.child3 == 2) ||
* lt(o, k, treenode.splitkey2))? 2 : 3)
* end
All of these lines are copies of source lines (source lines are preceded with an asterisk) except for the line marked L167
. This line is interpreted as follows: L167
means source line number 167. The line was executed 70360 times. The line has a ?: operator. The first branch of the operator was executed 1640 times while the second was executed 68720 times. Meanwhile, the second branch involves an || operator; the first argument of this || operator was executed 68720 times while the second was executed 68688 times. These branches have further nested calling inside of them.
For statement functions, the coverage routine tells how many times they were invoked:
L195 10 ( 10 ) && ( 6 )(called 10 time(s))
* eq(o::Ordering, a, b) = !lt(o, a, b) && !lt(o, b, a)
This statement function was invoked 10 times. It has an internal branch; the first branch was invoked 10 times, while the second was invoked 6 times.
The microcoverage module uses several undocumented aspects of the Expr type and the parse function. These aspects were discovered via trial and error. This means that they may change in a future version of Julia, so the module is rather fragile.
The module must be loaded at the REPL level, not inside any other module. The reason is that the invocations to the counter-incrementing routine that are scattered through the instrumented code are of the following form: Main.microcoverage.incrtrackarray(nn). Therefore, if the microcoverage module is nested inside of some other module, then the incrtrackarray function won't be found.
The package does not work if the instrumented code is run in a forked process. This is because the global variable associated with the incrtrackarray routine will not be known to the other process. In particular, this means that the microcoverage module does not work if the instrumented code is run via Julia's package-testing mechanism, Pkg.test("mymodule"). Instead, it is necessary to run the test within the same process using a statement like this:
include(joinpath(Pkg.dir("mymodule"), "test", "runtests.jl"))
Once a begintrack instruction is executed, the microcoverage module should not be reloaded until after the corresponding call to endtrack, because the global variables keeping track of the instrumented code are lost during the reloading process. If it is necessary to reload microcoverage after a begintrack instruction, then the source code should be restored using the restore function provided in the module, as in the following snippet:
include("microcoverage.jl")
using microcoverage
begintrack("mysourcecode.jl")
include("microcoverage.jl") # oops, global variables reset
# knowledge of mysourcecode lost!
using microcoverage
restore("mysourcecode.jl") # restore the original version
begintrack("mysourcecode.jl") # should be good to go now
Author: StephenVavasis
Source Code: https://github.com/StephenVavasis/microcoverage
License: MIT license