1665143400
Node style SHA on pure JavaScript.
var shajs = require('sha.js')
console.log(shajs('sha256').update('42').digest('hex'))
// => 73475cb40a568e8da8a045ced110137e159f890ac4da883b6b17dc651b3a8049
console.log(new shajs.sha256().update('42').digest('hex'))
// => 73475cb40a568e8da8a045ced110137e159f890ac4da883b6b17dc651b3a8049
var sha256stream = shajs('sha256')
sha256stream.end('42')
console.log(sha256stream.read().toString('hex'))
// => 73475cb40a568e8da8a045ced110137e159f890ac4da883b6b17dc651b3a8049
sha.js currently implements the SHA family of hashes: SHA (SHA-0), SHA-1, SHA-224, SHA-256, SHA-384, and SHA-512.
Note, this doesn't actually implement a stream, but wrapping this in a stream is trivial. It does update incrementally, so you can hash things larger than RAM, as it uses a constant amount of memory (except when using base64 or utf8 encoding, see code comments).
This work is derived from Paul Johnston's A JavaScript implementation of the Secure Hash Algorithm.
Author: Crypto-browserify
Source Code: https://github.com/crypto-browserify/sha.js
License: View license
1664391000
In case you didn't know, the HTML5 drag and drop API is a total disaster! This is an attempt to make the API usable by mere mortals.
Adds a drag class to the drop target on hover, for easy styling!

npm install drag-drop

This package works in the browser with browserify. If you do not use a bundler, you can use the standalone script directly in a <script> tag.
const dragDrop = require('drag-drop')
dragDrop('#dropTarget', (files, pos, fileList, directories) => {
console.log('Here are the dropped files', files) // Array of File objects
console.log('Dropped at coordinates', pos.x, pos.y)
console.log('Here is the raw FileList object if you need it:', fileList)
console.log('Here is the list of directories:', directories)
})
Another handy thing this does is add a drag
class to the drop target when the user is dragging a file over the drop target. Useful for styling the drop target to make it obvious that this is a drop target!
const dragDrop = require('drag-drop')
// You can pass in a DOM node or a selector string!
dragDrop('#dropTarget', (files, pos, fileList, directories) => {
console.log('Here are the dropped files', files)
console.log('Dropped at coordinates', pos.x, pos.y)
console.log('Here is the raw FileList object if you need it:', fileList)
console.log('Here is the list of directories:', directories)
// `files` is an Array!
files.forEach(file => {
console.log(file.name)
console.log(file.size)
console.log(file.type)
console.log(file.lastModifiedDate)
console.log(file.fullPath) // not real full path due to browser security restrictions
console.log(file.path) // in Electron, this contains the actual full path
// convert the file to a Buffer that we can use!
const reader = new FileReader()
reader.addEventListener('load', e => {
// e.target.result is an ArrayBuffer
const arr = new Uint8Array(e.target.result)
const buffer = Buffer.from(arr) // Buffer.from() replaces the deprecated new Buffer()
// do something with the buffer!
})
reader.addEventListener('error', err => {
console.error('FileReader error: ' + err)
})
reader.readAsArrayBuffer(file)
})
})
If you prefer to access file data as Buffers, then just require drag-drop like this:
const dragDrop = require('drag-drop/buffer')
dragDrop('#dropTarget', files => {
files.forEach(file => {
// file is actually a buffer!
console.log(file.readUInt32LE(0))
console.log(file.toJSON())
console.log(file.toString('hex')) // etc...
// but it still has all the normal file properties!
console.log(file.name)
console.log(file.size)
console.log(file.type)
console.log(file.lastModifiedDate)
})
})
If the user highlights text and drags it, we capture that as a separate event. Listen for it like this:
const dragDrop = require('drag-drop')
dragDrop('#dropTarget', {
onDropText: (text, pos) => {
console.log('Here is the dropped text:', text)
console.log('Dropped at coordinates', pos.x, pos.y)
}
})
dragenter, dragover and dragleave events

Instead of passing just an ondrop function as the second argument, pass an object with all the events you want to listen for:
const dragDrop = require('drag-drop')
dragDrop('#dropTarget', {
onDrop: (files, pos, fileList, directories) => {
console.log('Here are the dropped files', files)
console.log('Dropped at coordinates', pos.x, pos.y)
console.log('Here is the raw FileList object if you need it:', fileList)
console.log('Here is the list of directories:', directories)
},
onDropText: (text, pos) => {
console.log('Here is the dropped text:', text)
console.log('Dropped at coordinates', pos.x, pos.y)
},
onDragEnter: (event) => {},
onDragOver: (event) => {},
onDragLeave: (event) => {}
})
You can rely on the onDragEnter and onDragLeave events to fire only for the drop target you specified. Events which bubble up from child nodes are ignored, so you can expect a single onDragEnter and then a single onDragLeave event to fire.
Furthermore, none of onDragEnter, onDragLeave, or onDragOver will fire for drags which cannot be handled by the registered drop listeners. For example, if you only listen for onDrop (files) but not onDropText (text) and the user is dragging text over the drop target, then none of the listed events will fire.
To stop listening for drag & drop events and remove the event listeners, just use the cleanup
function returned by the dragDrop
function.
const dragDrop = require('drag-drop')
const cleanup = dragDrop('#dropTarget', files => {
// ...
})
// ... at some point in the future, stop listening for drag & drop events
cleanup()
To support users pasting files from their clipboard, use the provided processItems()
function to process the DataTransferItemList
from the browser's native 'paste'
event.
document.addEventListener('paste', event => {
dragDrop.processItems(event.clipboardData.items, (err, files) => {
// ...
})
})
A note about file:// urls

Don't run your app from file://. For security reasons, browsers do not allow you to run your app from file://. In fact, many of the powerful storage APIs throw errors if you run the app locally from file://.

Instead, start a local server and visit your site at http://localhost:port.
See https://instant.io.
Author: Feross
Source Code: https://github.com/feross/drag-drop
License: MIT license
1661455020
Fetch for node and Browserify. Built on top of GitHub's WHATWG Fetch polyfill.
Adds fetch as a global so that its API is consistent between client and server.

For ease-of-maintenance and backward-compatibility reasons, this library will always be a polyfill. As a "safe" alternative, which does not modify the global, consider fetch-ponyfill.
The Fetch API is currently not implemented consistently across browsers. This module will enable you to use fetch
in your Node code in a cross-browser compliant fashion. The Fetch API is part of the Web platform API defined by the standards bodies WHATWG and W3C.
npm install --save isomorphic-fetch
bower install --save isomorphic-fetch
require('isomorphic-fetch');
fetch('//offline-news-api.herokuapp.com/stories')
.then(function(response) {
if (response.status >= 400) {
throw new Error("Bad response from server");
}
return response.json();
})
.then(function(stories) {
console.log(stories);
});
Author: Matthew-andrews
Source Code: https://github.com/matthew-andrews/isomorphic-fetch
License: MIT license
1660682340
proxyquireify
A browserify >= v2 version of proxyquire.
Proxies browserify's require in order to make overriding dependencies during testing easy while staying totally unobstrusive. To run your tests in both Node and the browser, use proxyquire-universal.
It injects require calls to ensure the module you are testing gets bundled.

npm install proxyquireify
To use with browserify < 5.1
please npm install proxyquireify@0.5
instead. To run your tests in PhantomJS, you may need to use a shim.
foo.js:
var bar = require('./bar');
module.exports = function () {
return bar.kinder() + ' ist ' + bar.wunder();
};
foo.test.js:
var proxyquire = require('proxyquireify')(require);
var stubs = {
'./bar': {
wunder: function () { return 'wirklich wunderbar'; }
, kinder: function () { return 'schokolade'; }
}
};
var foo = proxyquire('./src/foo', stubs);
console.log(foo());
browserify.build.js:
var fs = require('fs');
var browserify = require('browserify');
var proxyquire = require('proxyquireify');
browserify()
.plugin(proxyquire.plugin)
.require(require.resolve('./foo.test'), { entry: true })
.bundle()
.pipe(fs.createWriteStream(__dirname + '/bundle.js'));
load it in the browser and see:
schokolade ist wirklich wunderbar
If you're transforming your source code to JavaScript, you must apply those transforms before applying the proxyquireify plugin:
browserify()
.transform('coffeeify')
.plugin(proxyquire.plugin)
.require(require.resolve('./test.coffee'), { entry: true })
.bundle()
.pipe(fs.createWriteStream(__dirname + '/bundle.js'));
proxyquireify needs to parse your code looking for require
statements. If you require
anything that's not valid JavaScript that acorn can parse (e.g. CoffeeScript, TypeScript), you need to make sure the relevant transform runs before proxyquireify.
proxyquireify functions as a browserify plugin and needs to be registered with browserify like so:
var fs = require('fs');
var browserify = require('browserify');
var proxyquire = require('proxyquireify');
browserify()
.plugin(proxyquire.plugin)
.require(require.resolve('./test'), { entry: true })
.bundle()
.pipe(fs.createWriteStream(__dirname + '/bundle.js'));
Alternatively you can register proxyquireify as a plugin from the command line like so:
browserify -p proxyquireify/plugin test.js > bundle.js
This API to setup proxyquireify was used prior to browserify plugin support.
It has not been removed yet to make upgrading proxyquireify easier for now, but it will be deprecated in future versions. Please consider using the plugin API (above) instead.
To be used in a build script instead of browserify(), it automatically adapts browserify to work for tests and injects require overrides into all modules via a browserify transform.
proxyquire.browserify()
.require(require.resolve('./test'), { entry: true })
.bundle()
.pipe(fs.createWriteStream(__dirname + '/bundle.js'));
proxyquire(request, stubs)

- request: path to the module to be tested, e.g. ../lib/foo
- stubs: key/value pairs of the form { modulePath: stub, ... }
var proxyquire = require('proxyquireify')(require);
var barStub = { wunder: function () { return 'really wonderful'; } };
var foo = proxyquire('./foo', { './bar': barStub })
In order for browserify to include the module you are testing in the bundle, proxyquireify will inject a require()
call for every module you are proxyquireing. So in the above example require('./foo')
will be injected at the top of your test file.
By default proxyquireify calls the function defined on the original dependency whenever it is not found on the stub.
If you prefer a more strict behavior you can prevent callThru on a per module or per stub basis.
If callThru is disabled, you can stub out modules that weren't even included in the bundle. Note that, unlike in proxyquire, there is no option to prevent callThru globally.
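The callThru behavior described above can be sketched as a lookup that falls back from the stub to the original module. This is a hypothetical helper for illustration, not proxyquireify's actual internals:

```javascript
// Hypothetical illustration of callThru resolution; not proxyquireify internals.
function resolveDep (original, stub) {
  if (stub && stub['@noCallThru']) return stub // strict: only what the stub defines
  // default (callThru): stub wins where defined, otherwise fall through to the original
  return Object.assign({}, original, stub)
}

var original = { extname: function () { return '.js' }, basename: function () { return 'file' } }

// callThru (default): extname is stubbed, basename falls through to the original
var withCallThru = resolveDep(original, { extname: function () { return '.coffee' } })
console.log(withCallThru.extname(), withCallThru.basename()) // '.coffee' 'file'

// noCallThru: properties missing from the stub simply don't exist
var strict = resolveDep(original, { extname: function () { return '.coffee' }, '@noCallThru': true })
console.log(typeof strict.basename) // 'undefined'
```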
// Prevent callThru for path module only
var foo = proxyquire('./foo', {
  path: {
    extname: function (file) { /* ... */ }
  , '@noCallThru': true
  }
, fs: { readdir: function (dir, cb) { /* ... */ } }
});

// Prevent callThru for all contained stubs (path and fs)
var foo = proxyquire('./foo', {
  path: {
    extname: function (file) { /* ... */ }
  }
, fs: { readdir: function (dir, cb) { /* ... */ } }
, '@noCallThru': true
});

// Prevent callThru for all stubs except path
var foo = proxyquire('./foo', {
  path: {
    extname: function (file) { /* ... */ }
  , '@noCallThru': false
  }
, fs: { readdir: function (dir, cb) { /* ... */ } }
, '@noCallThru': true
});
Author: Thlorenz
Source Code: https://github.com/thlorenz/proxyquireify
License: MIT license
1657873620
Well, in this case, since someone has visited this link before you, the file was cached with leveldb. But if you were to try and grab a bundle that nobody else has tried to grab before, what would happen is this:
API
There are a few API endpoints:
Get the latest version of :module.

Get a version of :module which satisfies the given :version semver range. Defaults to latest.

The same as the prior two, except with --debug passed to browserify.

In this case, --standalone is passed to browserify.

Both --debug and --standalone are passed to browserify!
POST a body that looks something like this:
{
"options": {
"debug": true
},
"dependencies": {
"concat-stream": "0.1.x",
"hyperstream": "0.2.x"
}
}
"options" is where you get to set "debug", "standalone", and "fullPaths". Usually, in this case, you'll probably only really care about debug. If you don't define "options", it will default to { "debug": false, "standalone": false, "fullPaths": false }
.
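The body and its defaults can be sketched as a small helper that merges user options over the documented defaults before POSTing (buildMultiBody is a hypothetical name, not part of the service):

```javascript
// Build a POST body for the multi-bundle endpoint; omitted options
// fall back to the documented defaults.
var defaults = { debug: false, standalone: false, fullPaths: false }

function buildMultiBody (dependencies, options) {
  return JSON.stringify({
    options: Object.assign({}, defaults, options || {}),
    dependencies: dependencies
  })
}

var body = buildMultiBody(
  { 'concat-stream': '0.1.x', 'hyperstream': '0.2.x' },
  { debug: true }
)
console.log(body)
```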
What you get in return looks something like this:
HTTP/1.1 200 OK
X-Powered-By: Express
Location: /multi/48GOmL0XvnRZn32bkpz75A==
content-type: application/json
Date: Sat, 22 Jun 2013 22:36:32 GMT
Connection: keep-alive
Transfer-Encoding: chunked
{
"concat-stream": {
"package": /* the concat-stream package.json */,
"bundle": /* the concat-stream bundle */
},
"hyperstream": {
"package": /* the hyperstream package.json */,
"bundle": /* the hyperstream bundle */
}
}
The bundle gets permanently cached at /multi/48GOmL0XvnRZn32bkpz75A==
for future GETs.
If you saved the Location url from the POST earlier, you can just GET it instead of POSTing again.
Get information on the build status of a module. Returns build information for all versions which satisfy the given semver (or latest in the event of a missing semver).
Blobs generally look something like this:
HTTP/1.1 200 OK
X-Powered-By: Express
Access-Control-Allow-Origin: *
Content-Type: application/json; charset=utf-8
Content-Length: 109
ETag: "-9450086"
Date: Sun, 26 Jan 2014 08:05:59 GMT
Connection: keep-alive
{
"module": "concat-stream",
"builds": {
"1.4.1": {
"ok": true
}
}
}
The "module" and "builds" fields should both exist. Keys for "builds" are the versions. Properties:
Versions which have not been built will not be keyed onto "builds".
browserify-cdn is ready to run on Heroku:
heroku create my-browserify-cdn
git push heroku master
heroku ps:scale web=1
You can build and run an image doing the following:
docker build -t "wzrd.in" /path/to/wzrd.in
docker run -p 8080:8080 wzrd.in
Keep in mind that a new deploy will wipe the cache.
Quick Start
Try visiting this link:
/standalone/concat-stream@latest
Also, wzrd.in has a nice url generating form.
Author: Browserify
Source Code: https://github.com/browserify/wzrd.in
License: MIT license
1657614308
browserify-fs
fs for the browser using level-filesystem, level.js and browserify
npm install browserify-fs
To use, simply require it and use it as you would fs:
var fs = require('browserify-fs');
fs.mkdir('/home', function() {
fs.writeFile('/home/hello-world.txt', 'Hello world!\n', function() {
fs.readFile('/home/hello-world.txt', 'utf-8', function(err, data) {
console.log(data);
});
});
});
You can also make browserify replace require('fs')
with browserify-fs using
browserify -r fs:browserify-fs
Using the replacement you can browserify modules like tar-fs and mkdirp!
Check out level-filesystem and level.js to see which browsers are supported.
Author: Mafintosh
Source Code: https://github.com/mafintosh/browserify-fs
License: MIT
1656103020
Serverless Optimize Plugin
Automatically bundle with Browserify, then transpile and minify with Babel into JavaScript compatible with your NodeJS runtime.
This plugin is a child of the great serverless-optimizer-plugin. Kudos!
Requirements:
Install via npm in the root of your Serverless service:
npm install serverless-plugin-optimize --save-dev
Add the plugin to the plugins array in your Serverless serverless.yml:

plugins:
  - serverless-plugin-optimize

package:
  individually: true

Optimize runs automatically on the deploy and invoke local commands.

Configuration options can be set globally in the custom property and inside each function in the optimize property. Function options overwrite global options.
- debug (default false) - When debug is set to true it won't remove the prefix folder and will generate debug output at the end of package creation.
- exclude (default ['aws-sdk']) - Array of modules or paths that will be excluded.
- extensions (default ['.js', '.json']) - Array of optional extra extensions modules that will be included.
- external - Array of modules that will be loaded from node_modules instead of being bundled by browserify. Note that external modules require that their dependencies are within their own directory, and this plugin will not do this for you; e.g. you should execute the following: (cd external_modules/some-module && npm i --prod)
- externalPaths - Object with the paths where external modules are located, outside node_modules (see example below).
- global (default false) - When global is set to true, transforms will run inside node_modules.
- minify (default true) - When minify is set to false, the Babili preset won't be added.
- prefix (default _optimize) - Folder to output the bundle.
- presets (default ['env']) - Array of Babel presets.

custom:
  optimize:
    debug: true
    exclude: ['ajv']
    extensions: ['.extension']
    external: ['sharp']
    externalPaths:
      sharp: 'external_modules/sharp'
    global: true
    ignore: ['ajv']
    includePaths: ['bin/some-binary-file']
    minify: false
    prefix: 'dist'
    plugins: ['transform-decorators-legacy']
    presets: ['es2017']
- optimize (default true) - When optimize is set to false the function won't be optimized.

functions:
  hello:
    optimize: false
- external - Array of modules that will be loaded from node_modules instead of being bundled by browserify. Note that external modules require their dependencies within their own directory: (cd external_modules/some-module && npm i --prod)
- externalPaths - Object with the paths where external modules are located, outside node_modules.
- global - When set to true, transforms will run inside node_modules.
- minify - When set to false, the Babili preset won't be added.

functions:
  hello:
    optimize:
      exclude: ['ajv']
      extensions: ['.extension']
      external: ['sharp']
      externalPaths:
        sharp: 'external_modules/sharp'
      global: false
      ignore: ['ajv']
      includePaths: ['bin/some-binary-file']
      minify: false
      plugins: ['transform-decorators-legacy']
      presets: ['es2017']
There is a difference you must know between calling files locally and after optimization with includePaths
.
When Optimize packages your functions, it bundles them inside /${prefix}/${functionName}/...
and when your lambda function runs in AWS it will run from root /var/task/${prefix}/${functionName}/...
and your CWD
will be /var/task/
.
Solution in #32 by @hlegendre. path.resolve(process.env.LAMBDA_TASK_ROOT, ${prefix}, process.env.AWS_LAMBDA_FUNCTION_NAME, ${includePathFile})
.
Help us make this plugin better and future proof.
npm install
git checkout -b new_feature
npm run lint
Author: FidelLimited
Source Code: https://github.com/FidelLimited/serverless-plugin-optimize
License: MIT license
1654917540
beefy
a local development server designed to work with browserify.
it:

- works with the browserify installed in your local node_modules/
- generates a default index.html for missing routes so you don't need to even muck about with HTML to get started

Install with npm install -g beefy; and if you want to always have a browserify available for beefy to use, npm install -g browserify.
$ cd directory/you/want/served
$ beefy path/to/thing/you/want/browserified.js [PORT] [-- browserify args]
Beefy searches for bundlers in the following order:
path/to/file.js: the path to the file you want browserified. Can be just a normal node module. You can also alias it (path/to/file.js:bundle.js) if you want, so all requests to bundle.js will browserify path/to/file.js. This is helpful when you're writing gh-pages-style sites that already have an index.html and expect the bundle to be pregenerated and available at a certain path.
You may provide multiple entry points, if you desire!
--browserify command, --bundler command

use command instead of browserify or ./node_modules/.bin/browserify.
in theory, you could even get this working with r.js, but that would probably be scary and bats would fly out of it. but it's there if you need it! if you want to use r.js
with beefy, you'll need a config that can write the resulting bundle to stdout, and you can run beefy with beefy :output-url.js --bundler r.js -- -o config.js
.
NB: This will not work in Windows.
--live
Enable live reloading. this'll start up a sideband server and an fs
watch on the current working directory -- if you save a file, your browser will refresh.
if you're not using the generated index file, beefy has your back -- it'll still automatically inject the appropriate script tag.
<script src="/-/live-reload.js"></script>
--cwd dir
serve files as if running from dir
.
--debug=false
turn off browserify source map output. by default, beefy automatically inserts -d
into the browserify args -- this turns that behavior off.
--open
automatically discover a port and open it using your default browser.
--index=path/to/file
Provide your own default index! This works great for single page apps, as every URL on your site will be redirected to the same HTML file. Every instance of {{entry}}
will be replaced with the entry point of your app.
var beefy = require('beefy')
, http = require('http')
var handler = beefy('entry.js')
http.createServer(handler).listen(8124)
Beefy defaults the cwd
to the directory of the file requiring it, so it's easy to switch from CLI mode to building a server.
As your server grows, you may want to expand on the information you're giving beefy:
var beefy = require('beefy')
, http = require('http')
http.createServer(beefy({
entries: ['entry.js']
, cwd: __dirname
, live: true
, quiet: false
, bundlerFlags: ['-t', 'brfs']
, unhandled: on404
})).listen(8124)
function on404(req, resp) {
resp.writeHead(404, {})
resp.end('sorry folks!')
}
Create a request handler suitable for providing to http.createServer
. Calls ready
once the appropriate bundler has been located. If ready
is not provided and a bundler isn't located, an error is thrown.
Beefy's options are a simple object, which may contain the following attributes:
- cwd: String. The base directory that beefy is serving. Defaults to the directory of the module that first required beefy.
- quiet: Boolean. Whether or not to output request information to the console. Defaults to true.
- live: Boolean. Whether to enable live reloading. Defaults to false.
- bundler: null, String, or Function. If a string is given, beefy will attempt to run that string as a child process whenever the path is given. If a function is given, it is expected to accept a path and return an object comprised of {stdout: ReadableStream, stderr: ReadableStream}. If not given, beefy will search for an appropriate bundler.
- bundlerFlags: Flags to be passed to the bundler. Ignored if bundler is a function.
- entries: String, Array, or Object. The canonical form is that of an object mapping URL pathnames to paths on disk relative to cwd. If given as an array or string, entries will be mapped like so: index.js will map /index.js to <cwd>/index.js.
- unhandled: Function accepting req and resp. Called for 404s. If not given, a default 404 handler will be used.
- watchify: defaults to true. When true, beefy will prefer using watchify to browserify. If false, beefy will prefer browserify.

Beefy may accept, as a shorthand, beefy("file.js") or beefy(["file.js"]).
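The entries shorthand can be sketched as a small normalization step; normalizeEntries is a hypothetical helper for illustration, not beefy's actual code:

```javascript
// Hypothetical sketch of the entries shorthand: strings and arrays are
// normalized into an object mapping URL pathnames to on-disk paths.
function normalizeEntries (entries) {
  if (typeof entries === 'string') entries = [entries]
  if (Array.isArray(entries)) {
    var map = {}
    entries.forEach(function (file) { map['/' + file] = file })
    return map
  }
  return entries // already the canonical object form
}

console.log(normalizeEntries('index.js')) // { '/index.js': 'index.js' }
console.log(normalizeEntries(['a.js', 'b.js'])) // { '/a.js': 'a.js', '/b.js': 'b.js' }
```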
Author: Chrisdickinson
Source Code: https://github.com/chrisdickinson/beefy
License: MIT license
1653823260
BitcoinJS (bitcoinjs-lib)
A javascript Bitcoin library for node.js and browsers. Written in TypeScript, but committing the JS files to verify.
If you are thinking of using the master branch of this library in production, stop. Master is not stable; it is our development branch, and only tagged releases may be classified as stable.
Don't trust. Verify.
We recommend every user of this library and the bitcoinjs ecosystem audit and verify any underlying code for its validity and suitability, including reviewing any and all of your project's dependencies.
Mistakes and bugs happen, but with your help in resolving and reporting issues, together we can produce open source software that is reliable: it uses Buffers throughout, and more.

Presently, we do not have any formal documentation other than our examples. Please ask for help if our examples aren't enough to guide you.
You can find a Web UI that covers most of the psbt.ts
, transaction.ts
and p2*.ts
APIs here.
npm install bitcoinjs-lib
# optionally, install a key derivation library as well
npm install ecpair bip32
# ecpair is the ECPair class for single keys
# bip32 is for generating HD keys
Previous versions of the library included classes for key management (ECPair, HDNode(->"bip32")) but now these have been separated into different libraries. This lowers the bundle size significantly if you don't need to perform any crypto functions (converting private to public keys and deriving HD keys).
Typically we support the Node Maintenance LTS version. TypeScript target will be set to the ECMAScript version in which all features are fully supported by current Active Node LTS. However, depending on adoption among other environments (browsers etc.) we may keep the target back a year or two. If in doubt, see the main_ci.yml for what versions are used by our continuous integration tests.
WARNING: We presently don't provide any tooling to verify that the release on npm
matches GitHub. As such, you should verify anything downloaded by npm
against your own verified copy.
Crypto is hard.
When working with private keys, the random number generator is fundamentally one of the most important parts of any software you write. For random number generation, we default to the randombytes
module, which uses window.crypto.getRandomValues
in the browser, or Node js' crypto.randomBytes
, depending on your build system. Although this default is ~OK, there is no simple way to detect if the underlying RNG provided is good enough, or if it is catastrophically bad. You should always verify this yourself to your own standards.
This library uses tiny-secp256k1, which uses RFC6979 to help prevent k
re-use and exploitation. Unfortunately, this isn't a silver bullet. Often, Javascript itself is working against us by bypassing these counter-measures.
Problems in Buffer (UInt8Array)
, for example, can trivially result in catastrophic fund loss without any warning. It can do this through undermining your random number generation, accidentally producing a duplicate k
value, sending Bitcoin to a malformed output script, or any of a million different ways. Running tests in your target environment is important and a recommended step to verify continuously.
Finally, adhere to best practice. We are not an authoritative source of best practice, but, at the very least: if you're using Math.random in any way, don't.

The recommended method of using bitcoinjs-lib in your browser is through Browserify. If you're familiar with how to use browserify, ignore this and carry on; otherwise, it is recommended to read the tutorial at https://browserify.org/.
NOTE: We use Node Maintenance LTS features, if you need strict ES5, use --transform babelify
in conjunction with your browserify
step (using an es2015
preset).
WARNING: iOS devices have problems, use at least buffer@5.0.5 or greater, and enforce the test suites (for Buffer
, and any other dependency) pass before use.
Type declarations for Typescript are included in this library. Normal installation should include all the needed type information.
The below examples are implemented as integration tests, they should be very easy to understand. Otherwise, pull requests are appreciated. Some examples interact (via HTTPS) with a 3rd Party Blockchain Provider (3PBP).
Generate a 2-of-3 P2SH multisig address
Generate a SegWit P2SH address
Generate a SegWit 3-of-4 multisig address
Generate a SegWit 2-of-2 P2SH multisig address
Support the retrieval of transactions for an address (3rd party blockchain)
Create (and broadcast via 3PBP) a typical Transaction
Create (and broadcast via 3PBP) a Transaction with an OP_RETURN output
Create (and broadcast via 3PBP) a Transaction with a 2-of-4 P2SH(multisig) input
Create (and broadcast via 3PBP) a Transaction with a SegWit P2SH(P2WPKH) input
Create (and broadcast via 3PBP) a Transaction with a SegWit P2WPKH input
Create (and broadcast via 3PBP) a Transaction with a SegWit P2PK input
Create (and broadcast via 3PBP) a Transaction with a SegWit 3-of-4 P2SH(P2WSH(multisig)) input
Create (and broadcast via 3PBP) a Transaction and sign with an HDSigner interface (bip32)
Import a BIP32 testnet xpriv and export to WIF
Export a BIP32 xpriv, then import it
Create a BIP32, bitcoin, account 0, external address
Create a BIP44, bitcoin, account 0, external address
Create a BIP49, bitcoin testnet, account 0, external address
Use BIP39 to generate BIP32 addresses
Create (and broadcast via 3PBP) a Transaction where Alice and Bob can redeem the output at any time
If you have a use case that you feel could be listed here, please ask for it!
See CONTRIBUTING.md.
npm test
npm run-script coverage
Author: Bitcoinjs
Source Code: https://github.com/bitcoinjs/bitcoinjs-lib
License: MIT license
1652952060
Serverless Browserify Plugin
A Serverless v1.0 plugin that uses Browserify to bundle your NodeJS Lambda functions.
Why? Lambdas with smaller code start and run faster. Lambda also has an account-wide deployment package size limit.
aws-sdk-js now officially supports browserify. Read more about why this kicks ass on my blog.
With the example package.json
and javascript code below, the default packaging for NodeJs lambdas in Serverless produces a zip file that is 11.3 MB, because it blindly includes all of node_modules
in the zip.
This plugin with 2 lines of configuration produces a zip file that is 400KB!
...
"dependencies": {
"aws-sdk": "^2.6.12",
"moment": "^2.15.2",
"request": "^2.75.0",
"rxjs": "^5.0.0-rc.1"
},
...
const Rx = require('rxjs/Rx');
const request = require('request');
...
From your serverless project run:
npm install serverless-plugin-browserify --save-dev
Add the plugin to your serverless.yml
file and set package.individually
to true
:
plugins:
- serverless-plugin-browserify
package:
individually: true
package.individually is required because it makes configuration more straightforward, and if you are not packaging individually, size is not a concern of yours in the first place.
For most use cases you should NOT need to do any configuration. If you are a code ninja, read on.
The base config for browserify is read from the custom.browserify
section of serverless.yml
. All browserify options are supported (most are auto configured by this plugin). This plugin adds one special option disable
which if true
will bypass this plugin.
The base config can be overridden on a function-by-function basis. Again, custom.browserify is not required and should not even need to be defined in most cases.
custom:
browserify:
#any option defined in https://github.com/substack/node-browserify#browserifyfiles--opts
functions:
usersGet:
name: ${self:provider.stage}-${self:service}-pageGet
description: get user
handler: users/handler.hello
browserify:
noParse:
- ./someBig.json #browserify can't optimize json, will take long time to parse for nothing
Note: package.include
can be used with this plugin. All other options can be handled by leveraging browserify options in your serverless.yml
custom browserify
section.
When this plugin is enabled, and package.individually
is true
, running serverless deploy
and serverless deploy -f <funcName>
will automatically browserify your node lambda code.
If you want to see the output of the bundled file or zip, simply set SLS_DEBUG. Ex (using Fish Shell): env SLS_DEBUG=true sls deploy function -v -f usersGet
Also check out the examples directory
Run serverless browserify -f <functionName>
. You can optionally dictate where the bundling output dir is by using the -o
flag. Ex: sls browserify -o /tmp/test -f pageUpdate
.
Set SLS_DEBUG=true, then re-run your command to output the bundling directory. Fish Shell ex: env SLS_DEBUG=true sls browserify
Author: Doapp-ryanp
Source Code: https://github.com/doapp-ryanp/serverless-plugin-browserify
License: MIT license