level-codec
Encode keys, values and range options, with built-in or custom encodings.
📌 This module will soon be deprecated, because it is superseded by level-transcoder.
If you are upgrading: please see UPGRADING.md.
const Codec = require('level-codec')
const codec = Codec({ keyEncoding: 'json' })
const key = codec.encodeKey({ foo: 'bar' })
console.log(key) // -> '{"foo":"bar"}'
console.log(codec.decodeKey(key)) // -> { foo: 'bar' }
codec = Codec([opts])
Create a new codec, with a global options object.
codec.encodeKey(key[, opts])
Encode key with the given opts.
codec.encodeValue(value[, opts])
Encode value with the given opts.
codec.encodeBatch(batch[, opts])
Encode batch ops with the given opts.
codec.encodeLtgt(ltgt)
Encode the ltgt values of the option object ltgt.
codec.decodeKey(key[, opts])
Decode key with the given opts.
codec.decodeValue(value[, opts])
Decode value with the given opts.
codec.createStreamDecoder([opts])
Create a function with signature (key, value) that, for each key-value pair returned from a levelup read stream, returns the decoded value to be emitted.
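As a rough illustration, the function returned for a JSON value encoding might behave like this (a sketch, not the actual implementation; the real decoder also honors the active key encoding and the given opts):

```javascript
// Hypothetical stand-in for what codec.createStreamDecoder() returns,
// assuming a 'json' value encoding: decode each value as it is emitted.
function makeStreamDecoder () {
  return function (key, value) {
    return JSON.parse(value)
  }
}

const decode = makeStreamDecoder()
console.log(decode('user:1', '{"name":"alice"}').name) // 'alice'
```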
codec.keyAsBuffer([opts])
Check whether opts and the global opts call for a binary key encoding.
codec.valueAsBuffer([opts])
Check whether opts and the global opts call for a binary value encoding.
codec.encodings
The built-in encodings, as an object of the form:
{
  [type]: encoding
}
See below for a list and the format of encoding.
Type | Input | Stored as | Output |
---|---|---|---|
utf8 | String or Buffer | String or Buffer | String |
json | Any JSON type | JSON string | Input |
binary | Buffer, string or byte array | Buffer | As stored |
hex, ascii, base64, ucs2, utf16le, utf-16le | String or Buffer | Buffer | String |
none a.k.a. id | Any type (bypass encoding) | Input* | As stored |
* Stores may have their own type coercion. Whether type information is preserved depends on the abstract-leveldown implementation as well as the underlying storage (LevelDB, IndexedDB, etc).
An encoding is an object of the form:
{
encode: function (data) {
return data
},
decode: function (data) {
return data
},
buffer: Boolean,
type: 'example'
}
All of these properties are required.
The buffer boolean tells consumers whether to fetch data as a Buffer before calling your decode() function on that data. If buffer is true, it is assumed that decode() takes a Buffer. If false, it is assumed that decode() takes any other type (usually a string).
To explain this in the grand scheme of things, consider a store like leveldown, which can return either a Buffer or a string, both sourced from the same byte array. Wrap this store with encoding-down and it will select the optimal data type based on the buffer property of the active encoding. If your decode() function needs a string (and the data can legitimately become a UTF-8 string), you should set buffer to false. This avoids the cost of converting a Buffer to a string.
The type string should be a unique name.
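For example, a custom encoding conforming to the format above could store values as hex strings (the encoding and its 'hex-string' name are made up for illustration):

```javascript
// A hypothetical custom encoding: encode a UTF-8 string to a hex string,
// decode the hex string back. All four properties are required.
const hexStringEncoding = {
  encode: function (data) {
    return Buffer.from(String(data), 'utf8').toString('hex')
  },
  decode: function (data) {
    // buffer is false, so decode receives a string rather than a Buffer
    return Buffer.from(data, 'hex').toString('utf8')
  },
  buffer: false,
  type: 'hex-string'
}

console.log(hexStringEncoding.encode('abc')) // '616263'
console.log(hexStringEncoding.decode('616263')) // 'abc'
```

Such an object can then be passed as the keyEncoding or valueEncoding option instead of a built-in encoding name.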
Level/codec is an OPEN Open Source Project. This means that:
Individuals making significant and valuable contributions are given commit-access to the project to contribute as they see fit. This project is more like an open wiki than a standard guarded open source project.
See the Contribution Guide for more details.
Author: Level
Source Code: https://github.com/Level/codec
License: MIT license
Min is a fast, minimal browser that protects your privacy. It includes an interface designed to minimize distractions, along with a range of additional features.
Download Min from the releases page, or learn more on the website.
You can find prebuilt binaries for Min here. Alternatively, skip to the section below for instructions on how to build Min directly from source.
sudo dpkg -i /path/to/download # Debian/Ubuntu
sudo rpm -i /path/to/download --ignoreos # Fedora/Red Hat
sudo pacman -Sy min # Arch
If you want to develop Min:
Run npm install to install dependencies.
Start Min with npm run start.
Press ctrl+r (or cmd+r on Mac) twice to restart the browser.
In order to build Min from source, follow the installation instructions above, then use one of the following commands to create binaries:
npm run buildWindows
npm run buildMacIntel
npm run buildMacArm
npm run buildDebian
npm run buildRaspi (for 32-bit Raspberry Pi)
npm run buildLinuxArm64 (for 64-bit Raspberry Pi or other ARM Linux)
npm run buildRedhat
Depending on the platform you are building for, you may need to install additional dependencies:
Building Linux packages on macOS requires fakeroot and dpkg: run brew install fakeroot dpkg first.
Building for macOS may require setting an SDK root, for example: export SDKROOT=/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX11.1.sdk. The exact command will depend on where Xcode is installed and which SDK version you're using.
Building on Windows requires npm config set msvs_version 2019 (or the appropriate version).
Thanks for taking the time to contribute to Min!
If you're experiencing a bug or have a suggestion for how to improve Min, please open a new issue.
If you have questions about using Min, need help getting started with development, or want to talk about what we're working on, join our Discord server.
To add a translation: in the localization/languages directory, create a new file and name it "[your language code].json". Translate the strings in it, using the en-US.json file in the same directory as a reference.
Author: Minbrowser
Source Code: https://github.com/minbrowser/min
License: Apache-2.0 license
An abstract-level database for browsers, backed by IndexedDB. The successor to level-js. If you are upgrading, please see UPGRADING.md.
📌 Which module should I use? What is abstract-level? Head over to the FAQ.
const { BrowserLevel } = require('browser-level')
// Create a database called 'example'
const db = new BrowserLevel('example', { valueEncoding: 'json' })
// Add an entry with key 'a' and value 1
await db.put('a', 1)
// Add multiple entries
await db.batch([{ type: 'put', key: 'b', value: 2 }])
// Get value of key 'a': 1
const value = await db.get('a')
// Iterate entries with keys that are greater than 'a'
for await (const [key, value] of db.iterator({ gt: 'a' })) {
console.log(value) // 2
}
The API of browser-level follows that of abstract-level with just two additional constructor options (see below) and one additional method (see below). As such, the majority of the API is documented in abstract-level. The createIfMissing and errorIfExists options of abstract-level are not supported here.
Like other implementations of abstract-level, browser-level has first-class support of binary keys and values, using either Uint8Array or Buffer. In order to sort string and binary keys the same way as other databases, browser-level internally converts data to a Uint8Array before passing it to IndexedDB. If you have no need to work with Buffer keys or values, you can choose to omit the buffer shim from a JavaScript bundle (through configuration of Webpack, Browserify or other bundlers).
Due to limitations of IndexedDB, browser-level does not offer snapshot guarantees. Such a guarantee would mean that an iterator does not see the data of simultaneous writes; it would be reading from a snapshot in time. In contrast, a browser-level iterator reads a few entries ahead and then opens a new IndexedDB transaction on the next read. A "few" means all entries for iterator.all(), size amount of entries for iterator.nextv(size) and a hardcoded 100 entries for iterator.next(). Individual calls to those methods have snapshot guarantees but repeated calls do not.
The result is that an iterator will include the data of simultaneous writes, if db.put(), db.del() or db.batch() are called in between creating the iterator and consuming it, or in between calls to iterator.next() or iterator.nextv(). For example:
const iterator = db.iterator()
await db.put('abc', '123')
for await (const [key, value] of iterator) {
// This might be 'abc'
console.log(key)
}
If snapshot guarantees are a must for your application, use iterator.all() and call it immediately after creating the iterator:
const entries = await db.iterator({ limit: 50 }).all()
// Synchronously iterate the result
for (const [key, value] of entries) {
console.log(key)
}
db = new BrowserLevel(location[, options])
Create a new database or open an existing one. The required location argument is the string name of the IDBDatabase to be opened, as well as the name of the object store within that database. The name of the IDBDatabase will be prefixed with options.prefix.
Besides abstract-level options, the optional options argument may contain:
prefix (string, default: 'level-js-'): Prefix for the IDBDatabase name. Can be set to an empty string. The default is compatible with level-js.
version (string or number, default: 1): The version to open the IDBDatabase with.
See IDBFactory#open() for more details about database name and version.
BrowserLevel.destroy(location[, prefix][, callback])
Delete the IndexedDB database at the given location. If prefix is not given, it defaults to the same value as the BrowserLevel constructor does. The callback function will be called when the destroy operation is complete, with a possible error argument. If no callback is provided, a promise is returned. This method is an additional method that is not part of the abstract-level interface.
Before calling destroy(), close a database if it's using the same location and prefix:
const db = new BrowserLevel('example')
await db.close()
await BrowserLevel.destroy('example')
With npm do:
npm install browser-level
This module is best used with browserify or similar bundlers.
Level/browser-level is an OPEN Open Source Project. This means that:
Individuals making significant and valuable contributions are given commit-access to the project to contribute as they see fit. This project is more like an open wiki than a standard guarded open source project.
See the Contribution Guide for more details.
Author: Level
Source Code: https://github.com/Level/browser-level
License: MIT license
Beaker is an experimental peer-to-peer Web browser. It adds new APIs for building hostless applications while remaining compatible with the rest of the Web.
Visit the Releases Page to find the installer you need.
Requires node 12 or higher.
In Linux (and in some cases macOS) you need libtool, m4, autoconf, and automake:
sudo apt-get install libtool m4 make g++ autoconf # debian/ubuntu
sudo dnf install libtool m4 make gcc-c++ libXScrnSaver # fedora
brew install libtool autoconf automake # macos
In Windows, you'll need to install Python 2.7, Visual Studio 2015 or 2017, and Git. (You might try windows-build-tools.) Then run:
npm config set python c:/python27
npm config set msvs_version 2017
npm install -g node-gyp
npm install -g gulp
To build:
git clone https://github.com/beakerbrowser/beaker.git
cd beaker/scripts
npm install # don't worry about v8 api errors building native modules - rebuild will fix
npm run rebuild # needed after each install. see https://github.com/electron/electron/issues/5851
npm start
If you pull latest from the repo and get weird module errors, do:
npm run burnthemall
This invokes the mad king, who will torch your node_modules/ and do the full install/rebuild process for you. (We chose that command name when GoT was still cool.) npm start should work afterward.
If you're doing development, run npm run watch to have assets built automatically.
DEBUG: which log systems to output? A comma-separated string. Can be beaker, dat, bittorrent-dht, dns-discovery, hypercore-protocol. Specify * for all.
BEAKER_OPEN_URL: open the given URL on load, rather than the previous session or default tab.
BEAKER_USER_DATA_PATH: override the user-data path, therefore changing where data is read/written. Useful for testing. For the default value, see userData in the electron docs.
BEAKER_DAT_QUOTA_DEFAULT_BYTES_ALLOWED: override the default max-quota for bytes allowed to be written by a dat site. Useful for testing. Default value is '500mb'. This can be a Number or a String. Check bytes.parse for supported units and abbreviations.
See SECURITY.md for reporting security issues and vulnerabilities.
Launching from tmux is known to cause issues with GUI apps in macOS. On Beaker, it may cause the application to hang during startup.
Please feel free to open usability issues. Join us at beakerbrowser on Freenode.
Author: Beakerbrowser
Source Code: https://github.com/beakerbrowser/beaker
License: MIT license
An abstract-leveldown compliant store on top of IndexedDB.
📌 This module will soon be deprecated, because it is superseded by browser-level.
Here are the goals of level-js:
Pass the full abstract-leveldown test suite
Support Buffer keys and values
Being abstract-leveldown compliant means you can use many of the Level modules on top of this library.
If you are upgrading: please see UPGRADING.md.
const levelup = require('levelup')
const leveljs = require('level-js')
const db = levelup(leveljs('bigdata'))
db.put('hello', Buffer.from('world'), function (err) {
if (err) throw err
db.get('hello', function (err, value) {
if (err) throw err
console.log(value.toString()) // 'world'
})
})
With async/await:
const levelup = require('levelup')
const leveljs = require('level-js')
const db = levelup(leveljs('bigdata'))
await db.put('hello', Buffer.from('world'))
const value = await db.get('hello')
Keys and values can be a string or Buffer. Any other type will be irreversibly stringified. The only exceptions are null and undefined; keys and values of those types are rejected.
In order to sort string and Buffer keys the same way, for compatibility with leveldown and the larger ecosystem, level-js internally converts keys and values to binary before passing them to IndexedDB.
If you desire non-destructive encoding (e.g. to store and retrieve numbers as-is), wrap level-js with encoding-down. Alternatively install level, which conveniently bundles levelup, level-js and encoding-down. Such an approach is also recommended if you want to achieve universal (isomorphic) behavior. For example, you could have leveldown in a backend and level-js in the frontend. The level package does exactly that.
When getting or iterating keys and values, regardless of the type with which they were stored, keys and values will be returned as a Buffer unless the asBuffer, keyAsBuffer or valueAsBuffer options are set, in which case strings are returned. Setting these options is not needed when level-js is wrapped with encoding-down, which determines the optimal return type by the chosen encoding.
db.get('key', { asBuffer: false })
db.iterator({ keyAsBuffer: false, valueAsBuffer: false })
With npm do:
npm install level-js
Not to be confused with leveljs.
This library is best used with browserify.
db = leveljs(location[, options])
Returns a new leveljs instance. location is the string name of the IDBDatabase to be opened, as well as the object store within that database. The database name will be prefixed with options.prefix.
options
The optional options argument may contain:
prefix (string, default: 'level-js-'): Prefix for the IDBDatabase name.
version (string or number, default: 1): The version to open the database with.
See IDBFactory#open for more details.
Level/level-js is an OPEN Open Source Project. This means that:
Individuals making significant and valuable contributions are given commit-access to the project to contribute as they see fit. This project is more like an open wiki than a standard guarded open source project.
See the Contribution Guide for more details.
Author: Level
Source Code: https://github.com/Level/level-js
License: MIT license
In-memory abstract-leveldown store for Node.js and browsers.
📌 This module will soon be deprecated, because it is superseded by memory-level.
If you are upgrading: please see UPGRADING.md.
const levelup = require('levelup')
const memdown = require('memdown')
const db = levelup(memdown())
db.put('hey', 'you', (err) => {
if (err) throw err
db.get('hey', { asBuffer: false }, (err, value) => {
if (err) throw err
console.log(value) // 'you'
})
})
With async/await:
await db.put('hey', 'you')
const value = await db.get('hey', { asBuffer: false })
Your data is discarded when the process ends or when you release a reference to the store. Note as well that, although the internals of memdown operate synchronously, levelup does not.
Keys and values can be strings or Buffers. Any other key type will be irreversibly stringified. The only exceptions are null and undefined; keys and values of those types are rejected.
const db = levelup(memdown())
db.put('example', 123, (err) => {
if (err) throw err
db.createReadStream({
keyAsBuffer: false,
valueAsBuffer: false
}).on('data', (entry) => {
console.log(typeof entry.key) // 'string'
console.log(typeof entry.value) // 'string'
})
})
If you desire non-destructive encoding (e.g. to store and retrieve numbers as-is), wrap memdown with encoding-down. Alternatively install level-mem, which conveniently bundles levelup, memdown and encoding-down. Such an approach is also recommended if you want to achieve universal (isomorphic) behavior. For example, you could have leveldown in a backend and memdown in the frontend.
const encode = require('encoding-down')
const db = levelup(encode(memdown(), { valueEncoding: 'json' }))
db.put('example', 123, (err) => {
if (err) throw err
db.createReadStream({
keyAsBuffer: false,
valueAsBuffer: false
}).on('data', (entry) => {
console.log(typeof entry.key) // 'string'
console.log(typeof entry.value) // 'number'
})
})
A memdown store is backed by a fully persistent data structure and thus has snapshot guarantees, meaning that reads operate on a snapshot in time, unaffected by simultaneous writes.
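To illustrate what that guarantee means, here is a simplified model (memdown actually uses a persistent tree, not copies; this sketch only demonstrates the observable behavior):

```javascript
// Simplified model of snapshot isolation: each write produces a new state
// object, and a snapshot keeps a reference to the state that was current
// when it was taken. Later writes replace `state`, not the snapshot.
function createStore () {
  let state = {}
  return {
    put (key, value) {
      state = Object.assign({}, state, { [key]: value })
    },
    snapshot () {
      const frozen = state
      return {
        keys: () => Object.keys(frozen)
      }
    }
  }
}

const store = createStore()
store.put('a', 1)
const snap = store.snapshot()
store.put('b', 2)
console.log(snap.keys()) // [ 'a' ], the write after the snapshot is not visible
```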
In addition to the regular npm test, you can test memdown in a browser of choice with:
npm run test-browser-local
To check code coverage:
npm run coverage
Level/memdown is an OPEN Open Source Project. This means that:
Individuals making significant and valuable contributions are given commit-access to the project to contribute as they see fit. This project is more like an open wiki than a standard guarded open source project.
See the Contribution Guide for more details.
Author: Level
Source Code: https://github.com/Level/memdown
License: MIT license
level-packager
levelup package helper for distributing with an abstract-leveldown store.
Exports a single function which takes a single argument: an abstract-leveldown compatible storage back-end for levelup. The function returns a constructor function that will bundle levelup with the given abstract-leveldown replacement. The full API is supported, including optional functions, destroy() and repair(). Encoding functionality is provided by encoding-down.
The constructor function has a .errors property which provides access to the different error types from level-errors.
For example use-cases, see:
Also available is a test.js file that can be used to verify that the user-package works as expected.
If you are upgrading: please see UPGRADING.md.
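The shape of the exported function can be sketched as follows. This is a simplification with hypothetical stand-ins so it is self-contained: the real level-packager requires levelup and encoding-down itself rather than taking them as arguments, and also wires up destroy(), repair() and .errors.

```javascript
// Sketch of the packager pattern: given an abstract-leveldown factory,
// return a constructor that bundles it with levelup and encoding-down.
function packager (leveldown, levelup, encode) {
  function Level (location, options) {
    return levelup(encode(leveldown(location), options), options)
  }
  return Level
}

// Hypothetical stand-ins, for illustration only:
const fakeDown = (location) => ({ location })
const fakeEncode = (db, options) => db
const fakeLevelup = (db, options) => ({ db })

const Level = packager(fakeDown, fakeLevelup, fakeEncode)
console.log(Level('mydb').db.location) // 'mydb'
```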
Level/packager is an OPEN Open Source Project. This means that:
Individuals making significant and valuable contributions are given commit-access to the project to contribute as they see fit. This project is more like an open wiki than a standard guarded open source project.
See the Contribution Guide for more details.
📌 This module will soon be deprecated.
Author: Level
Source Code: https://github.com/Level/packager
License: MIT license
An abstract prototype matching the leveldown API. Useful for extending levelup functionality by providing a replacement to leveldown.
📌 This module will soon be deprecated, because it is superseded by abstract-level.
This module provides a simple base prototype for a key-value store. It has a public API for consumers and a private API for implementors. To implement an abstract-leveldown compliant store, extend its prototype and override the private underscore versions of the methods. For example, to implement put(), override _put() on your prototype.
Where possible, the default private methods have sensible noop defaults that essentially do nothing. For example, _open(callback) will invoke callback on a next tick. Other methods like _clear(..) have functional defaults. Each method listed below documents whether implementing it is mandatory.
The private methods are always provided with consistent arguments, regardless of what is passed in through the public API. All public methods provide argument checking: if a consumer calls open() without a callback argument, they'll get an Error('open() requires a callback argument').
Where optional arguments are involved, private methods receive sensible defaults: a get(key, callback) call translates to _get(key, options, callback), where the options argument is an empty object. These arguments are documented below.
If you are upgrading: please see UPGRADING.md.
Let's implement a simplistic in-memory leveldown replacement:
var AbstractLevelDOWN = require('abstract-leveldown').AbstractLevelDOWN
var util = require('util')
// Constructor
function FakeLevelDOWN () {
AbstractLevelDOWN.call(this)
}
// Our new prototype inherits from AbstractLevelDOWN
util.inherits(FakeLevelDOWN, AbstractLevelDOWN)
FakeLevelDOWN.prototype._open = function (options, callback) {
// Initialize a memory storage object
this._store = {}
// Use nextTick to be a nice async citizen
this._nextTick(callback)
}
FakeLevelDOWN.prototype._serializeKey = function (key) {
// As an example, prefix all input keys with an exclamation mark.
// Below methods will receive serialized keys in their arguments.
return '!' + key
}
FakeLevelDOWN.prototype._put = function (key, value, options, callback) {
this._store[key] = value
this._nextTick(callback)
}
FakeLevelDOWN.prototype._get = function (key, options, callback) {
var value = this._store[key]
if (value === undefined) {
// 'NotFound' error, consistent with LevelDOWN API
return this._nextTick(callback, new Error('NotFound'))
}
this._nextTick(callback, null, value)
}
FakeLevelDOWN.prototype._del = function (key, options, callback) {
delete this._store[key]
this._nextTick(callback)
}
Now we can use our implementation with levelup:
var levelup = require('levelup')
var db = levelup(new FakeLevelDOWN())
db.put('foo', 'bar', function (err) {
if (err) throw err
db.get('foo', function (err, value) {
if (err) throw err
console.log(value) // 'bar'
})
})
See memdown if you are looking for a complete in-memory replacement for leveldown.
db = constructor(..)
Constructors typically take a location argument pointing to a location on disk where the data will be stored. Since not all implementations are disk-based and some are non-persistent, implementors are free to take zero or more arguments in their constructor.
db.status
A read-only property. An abstract-leveldown compliant store can be in one of the following states:
'new' - newly created, not opened or closed
'opening' - waiting for the store to be opened
'open' - successfully opened the store, available for use
'closing' - waiting for the store to be closed
'closed' - store has been successfully closed, should not be used.
db.supports
A read-only manifest. Might be used like so:
if (!db.supports.permanence) {
throw new Error('Persistent storage is required')
}
if (db.supports.bufferKeys && db.supports.promises) {
await db.put(Buffer.from('key'), 'value')
}
db.open([options, ]callback)
Open the store. The callback function will be called with no arguments when the store has been successfully opened, or with a single error argument if the open operation failed for any reason.
The optional options argument may contain:
createIfMissing (boolean, default: true): If true and the store doesn't exist, it will be created. If false and the store doesn't exist, callback will receive an error.
errorIfExists (boolean, default: false): If true and the store exists, callback will receive an error.
Not all implementations support the above options.
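The interplay of these two options can be sketched as a simple decision (synchronous pseudologic for illustration; a real store opens asynchronously and passes the error to callback):

```javascript
// Hypothetical check mirroring createIfMissing / errorIfExists semantics.
function checkOpen (storeExists, options) {
  if (!storeExists && !options.createIfMissing) {
    return new Error('store does not exist')
  }
  if (storeExists && options.errorIfExists) {
    return new Error('store already exists')
  }
  return null // no error: open (and possibly create) the store
}

console.log(checkOpen(false, { createIfMissing: true, errorIfExists: false })) // null
console.log(checkOpen(true, { createIfMissing: true, errorIfExists: true }).message)
// 'store already exists'
```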
db.close(callback)
Close the store. The callback function will be called with no arguments if the operation is successful or with a single error argument if closing failed for any reason.
db.get(key[, options], callback)
Get a value from the store by key. The optional options object may contain:
asBuffer (boolean, default: true): Whether to return the value as a Buffer. If false, the returned type depends on the implementation.
The callback function will be called with an Error if the operation failed for any reason, including if the key was not found. If successful, the first argument will be null and the second argument will be the value.
db.getMany(keys[, options][, callback])
Get multiple values from the store by an array of keys. The optional options object may contain:
asBuffer (boolean, default: true): Whether to return each value as a Buffer. If false, the returned type depends on the implementation.
The callback function will be called with an Error if the operation failed for any reason. If successful, the first argument will be null and the second argument will be an array of values in the same order as keys. If a key was not found, the relevant value will be undefined.
If no callback is provided, a promise is returned.
db.put(key, value[, options], callback)
Store a new entry or overwrite an existing entry. There are no options by default but implementations may add theirs. The callback function will be called with no arguments if the operation is successful or with an Error if putting failed for any reason.
db.del(key[, options], callback)
Delete an entry. There are no options by default but implementations may add theirs. The callback function will be called with no arguments if the operation is successful or with an Error if deletion failed for any reason.
db.batch(operations[, options], callback)
Perform multiple put and/or del operations in bulk. The operations argument must be an Array containing a list of operations to be executed sequentially, although as a whole they are performed as an atomic operation.
Each operation is contained in an object having the following properties: type, key and value, where type is either 'put' or 'del'. In the case of 'del' the value property is ignored.
There are no options by default but implementations may add theirs. The callback function will be called with no arguments if the batch is successful or with an Error if the batch failed for any reason.
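For example, the operations argument could look like this (keys and values are illustrative):

```javascript
// A batch that inserts two entries and deletes a third, as one atomic unit.
const operations = [
  { type: 'put', key: 'alice', value: '42' },
  { type: 'put', key: 'bob', value: '7' },
  { type: 'del', key: 'carol' } // any value property would be ignored here
]

// Against an opened store:
// db.batch(operations, function (err) {
//   if (err) throw err
// })
console.log(operations.length) // 3
```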
db.batch()
Returns a chainedBatch.
db.iterator([options])
Returns an iterator. Accepts the following range options:
gt (greater than) and gte (greater than or equal) define the lower bound of the range to be iterated. Only entries where the key is greater than (or equal to) this option will be included in the range. When reverse=true the order will be reversed, but the entries iterated will be the same.
lt (less than) and lte (less than or equal) define the higher bound of the range to be iterated. Only entries where the key is less than (or equal to) this option will be included in the range. When reverse=true the order will be reversed, but the entries iterated will be the same.
reverse (boolean, default: false): iterate entries in reverse order. Beware that a reverse seek can be slower than a forward seek.
limit (number, default: -1): limit the number of entries collected by this iterator. This number represents a maximum number of entries and may not be reached if you get to the end of the range first. A value of -1 means there is no limit. When reverse=true the entries with the highest keys will be returned instead of the lowest keys.
Note: Zero-length strings, buffers and arrays as well as null and undefined are invalid as keys, yet valid as range options. These types are significant in encodings like bytewise and charwise as well as some underlying stores like IndexedDB. Consumers of an implementation should assume that { gt: undefined } is not the same as {}. An implementation can choose to:
If you are an implementor, a final note: the abstract test suite does not test these types. Whether they are supported or how they sort is up to you; add custom tests accordingly.
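For string keys, the effect of the range options above can be modeled with a simple filter (a sketch of the semantics only; real implementations evaluate ranges inside the underlying store, not in JavaScript):

```javascript
// Hypothetical predicate mirroring gt/gte/lt/lte semantics for string keys.
function inRange (key, options) {
  if ('gt' in options && !(key > options.gt)) return false
  if ('gte' in options && !(key >= options.gte)) return false
  if ('lt' in options && !(key < options.lt)) return false
  if ('lte' in options && !(key <= options.lte)) return false
  return true
}

const keys = ['a', 'b', 'c', 'd']
console.log(keys.filter((k) => inRange(k, { gt: 'a', lte: 'c' }))) // [ 'b', 'c' ]
```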
In addition to range options, iterator() takes the following options:
keys (boolean, default: true): whether to return the key of each entry. If set to false, calls to iterator.next(callback) will yield keys with a value of undefined.
values (boolean, default: true): whether to return the value of each entry. If set to false, calls to iterator.next(callback) will yield values with a value of undefined.
keyAsBuffer (boolean, default: true): whether to return the key of each entry as a Buffer. If false, the returned type depends on the implementation.
valueAsBuffer (boolean, default: true): whether to return the value of each entry as a Buffer.
Lastly, an implementation is free to add its own options.
db.clear([options, ]callback)
This method is experimental. Not all implementations support it yet.
Delete all entries or a range. Not guaranteed to be atomic. Accepts the following range options (with the same rules as on iterators):
gt (greater than) and gte (greater than or equal) define the lower bound of the range to be deleted. Only entries where the key is greater than (or equal to) this option will be included in the range. When reverse=true the order will be reversed, but the entries deleted will be the same.
lt (less than) and lte (less than or equal) define the higher bound of the range to be deleted. Only entries where the key is less than (or equal to) this option will be included in the range. When reverse=true the order will be reversed, but the entries deleted will be the same.
reverse (boolean, default: false): delete entries in reverse order. Only effective in combination with limit, to remove the last N records.
limit (number, default: -1): limit the number of entries to be deleted. This number represents a maximum number of entries and may not be reached if you get to the end of the range first. A value of -1 means there is no limit. When reverse=true the entries with the highest keys will be deleted instead of the lowest keys.
If no options are provided, all entries will be deleted. The callback function will be called with no arguments if the operation was successful or with an Error if it failed for any reason.
chainedBatch
chainedBatch.put(key, value[, options])
Queue a put operation on this batch. This may throw if key or value is invalid. There are no options by default but implementations may add theirs.
chainedBatch.del(key[, options])
Queue a del operation on this batch. This may throw if key is invalid. There are no options by default but implementations may add theirs.
chainedBatch.clear()
Clear all queued operations on this batch.
chainedBatch.write([options, ]callback)
Commit the queued operations for this batch. All operations will be written atomically, that is, they will either all succeed or fail with no partial commits.
There are no options by default but implementations may add theirs. The callback function will be called with no arguments if the batch is successful or with an Error if the batch failed for any reason.
After write has been called, no further operations are allowed.
chainedBatch.db
A reference to the db that created this chained batch.
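The chained batch contract described above can be sketched as follows (a minimal illustration of the queue-then-commit pattern, not the abstract-leveldown implementation):

```javascript
// Minimal sketch of a chained batch: queue operations, commit with write().
function ChainedBatch (db) {
  this.db = db
  this._operations = []
  this._written = false
}

ChainedBatch.prototype._checkWritten = function () {
  if (this._written) throw new Error('write() already called on this batch')
}

ChainedBatch.prototype.put = function (key, value) {
  this._checkWritten()
  this._operations.push({ type: 'put', key: key, value: value })
  return this // chainable
}

ChainedBatch.prototype.del = function (key) {
  this._checkWritten()
  this._operations.push({ type: 'del', key: key })
  return this
}

ChainedBatch.prototype.clear = function () {
  this._checkWritten()
  this._operations = []
  return this
}

ChainedBatch.prototype.write = function (callback) {
  this._checkWritten()
  this._written = true
  // Delegate to the db's batch(), which performs the atomic commit
  this.db.batch(this._operations, callback)
}

// Usage with a hypothetical db object:
const fakeDb = { batch: (ops, cb) => cb(null, ops.length) }
new ChainedBatch(fakeDb)
  .put('a', '1')
  .del('b')
  .write((err, count) => {
    if (err) throw err
    console.log(count) // 2
  })
```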
iterator
An iterator allows you to iterate the entire store or a range. It operates on a snapshot of the store, created at the time db.iterator() was called. This means reads on the iterator are unaffected by simultaneous writes. Most but not all implementations can offer this guarantee.
Iterators can be consumed with for await...of or by manually calling iterator.next() in succession. In the latter mode, iterator.end() must always be called. In contrast, finishing, throwing or breaking from a for await...of loop automatically calls iterator.end().
An iterator reaches its natural end in the following situations:
The end of the underlying store has been reached
The end of the range (as defined by range options) has been reached
The last iterator.seek()
was out of range.
An iterator keeps track of whether a next()
is in progress and whether end()
has been called. It therefore does not allow concurrent next()
calls; it does allow end()
while a next()
is in progress; and it allows neither next()
nor end()
after end()
has been called.
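The state tracking described above can be sketched as a small guard. This is a hypothetical illustration, not the actual AbstractIterator internals:

```javascript
// Hypothetical sketch of the next()/end() state tracking described above.
// Callbacks are delivered on a microtask, as the docs recommend.
class GuardedIterator {
  constructor (entries) {
    this._entries = entries.slice() // snapshot of [key, value] pairs
    this._nexting = false
    this._ended = false
  }

  next (callback) {
    if (this._nexting) throw new Error('next() is already in progress')
    if (this._ended) throw new Error('end() has already been called')
    this._nexting = true
    queueMicrotask(() => {
      this._nexting = false
      const entry = this._entries.shift()
      if (entry) callback(null, entry[0], entry[1])
      else callback() // natural end: key and value are undefined
    })
  }

  end (callback) {
    if (this._ended) throw new Error('end() has already been called')
    this._ended = true // allowed even while a next() is in progress
    queueMicrotask(() => callback())
  }
}
```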
for await...of iterator
Yields arrays containing a key
and value
. The type of key
and value
depends on the options passed to db.iterator()
.
try {
for await (const [key, value] of db.iterator()) {
console.log(key)
}
} catch (err) {
console.error(err)
}
Note for implementors: this uses iterator.next()
and iterator.end()
under the hood so no further method implementations are needed to support for await...of
.
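As a sketch of how for await...of can be layered on callback-style next() and end() — a hypothetical adapter, not the actual implementation:

```javascript
// Hypothetical adapter showing how for await...of can be built on
// callback-style next() and end(), as described above.
function asyncEntries (iterator) {
  const next = () => new Promise((resolve, reject) => {
    iterator.next((err, key, value) => {
      if (err) reject(err)
      else if (key === undefined && value === undefined) resolve(null) // natural end
      else resolve([key, value])
    })
  })
  const end = () => new Promise((resolve, reject) => {
    iterator.end((err) => err ? reject(err) : resolve())
  })
  return {
    async * [Symbol.asyncIterator] () {
      try {
        let entry
        while ((entry = await next()) !== null) yield entry
      } finally {
        await end() // always end, also on break or throw
      }
    }
  }
}
```

The finally block is what makes finishing, throwing or breaking from the loop automatically call end().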
iterator.next([callback])
Advance the iterator and yield the entry at that key. If an error occurs, the callback
function will be called with an Error
. Otherwise, the callback
receives null
, a key
and a value
. The type of key
and value
depends on the options passed to db.iterator()
. If the iterator has reached its natural end, both key
and value
will be undefined
.
If no callback is provided, a promise is returned for either an array (containing a key
and value
) or undefined
if the iterator reached its natural end.
Note: Always call iterator.end()
, even if you received an error and even if the iterator reached its natural end.
iterator.seek(target)
Seek the iterator to a given key or the closest key. Subsequent calls to iterator.next()
(including implicit calls in a for await...of
loop) will yield entries with keys equal to or larger than target
, or equal to or smaller than target
if the reverse
option passed to db.iterator()
was true.
If range options like gt
were passed to db.iterator()
and target
does not fall within that range, the iterator will reach its natural end.
Note: At the time of writing, leveldown
is the only known implementation to support seek()
. In other implementations, it is a noop.
iterator.end([callback])
End iteration and free up underlying resources. The callback
function will be called with no arguments on success or with an Error
if ending failed for any reason.
If no callback is provided, a promise is returned.
iterator.db
A reference to the db
that created this iterator.
The following applies to any method above that takes a key
argument or option: all implementations must support a key
of type String and should support a key
of type Buffer. A key
may not be null
, undefined
, a zero-length Buffer, zero-length string or zero-length array.
The following applies to any method above that takes a value
argument or option: all implementations must support a value
of type String or Buffer. A value
may not be null
or undefined
due to preexisting significance in streams and iterators.
Support of other key and value types depends on the implementation as well as its underlying storage. See also db._serializeKey
and db._serializeValue
.
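These rules can be expressed as a small validation helper. This is a hypothetical sketch; abstract-leveldown performs equivalent checks internally:

```javascript
// Hypothetical helpers mirroring the key and value rules above.
function checkKey (key) {
  if (key === null || key === undefined) {
    return new Error('key cannot be null or undefined')
  }
  // Zero-length strings, Buffers and arrays are not allowed as keys
  if ((typeof key === 'string' || Buffer.isBuffer(key) || Array.isArray(key)) && key.length === 0) {
    return new Error('key cannot be an empty String, Buffer or Array')
  }
  return null
}

function checkValue (value) {
  // null and undefined have preexisting significance in streams and iterators
  if (value === null || value === undefined) {
    return new Error('value cannot be null or undefined')
  }
  return null
}
```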
Each of these methods will receive exactly the number and order of arguments described. Optional arguments will receive sensible defaults. All callbacks are error-first and must be asynchronous.
If an operation within your implementation is synchronous, be sure to invoke the callback on a next tick using queueMicrotask
, process.nextTick
or some other means of microtask scheduling. For convenience, the prototypes of AbstractLevelDOWN
, AbstractIterator
and AbstractChainedBatch
include a _nextTick
method that is compatible with node and browsers.
db = AbstractLevelDOWN([manifest])
The constructor. Sets the .status
to 'new'
. Optionally takes a manifest object which abstract-leveldown
will enrich:
AbstractLevelDOWN.call(this, {
bufferKeys: true,
snapshots: true,
// ..
})
db._open(options, callback)
Open the store. The options
object will always have the following properties: createIfMissing
, errorIfExists
. If opening failed, call the callback
function with an Error
. Otherwise call callback
without any arguments.
The default _open()
is a sensible noop and invokes callback
on a next tick.
db._close(callback)
Close the store. If closing failed, call the callback
function with an Error
. Otherwise call callback
without any arguments.
The default _close()
is a sensible noop and invokes callback
on a next tick.
db._serializeKey(key)
Convert a key
to a type supported by the underlying storage. All methods below that take a key
argument or option - including db._iterator()
with its range options and iterator._seek()
with its target
argument - will receive serialized keys. For example, if _serializeKey
is implemented as:
FakeLevelDOWN.prototype._serializeKey = function (key) {
return Buffer.isBuffer(key) ? key : String(key)
}
Then db.get(2, callback)
translates into db._get('2', options, callback)
. Similarly, db.iterator({ gt: 2 })
translates into db._iterator({ gt: '2', ... })
and iterator.seek(2)
translates into iterator._seek('2')
.
If the underlying storage supports any JavaScript type or if your implementation wraps another implementation, it is recommended to make _serializeKey
an identity function (returning the key as-is). Serialization is irreversible, unlike encoding as performed by implementations like encoding-down
. This also applies to _serializeValue
.
The default _serializeKey()
is an identity function.
db._serializeValue(value)
Convert a value
to a type supported by the underlying storage. All methods below that take a value
argument or option will receive serialized values. For example, if _serializeValue
is implemented as:
FakeLevelDOWN.prototype._serializeValue = function (value) {
return Buffer.isBuffer(value) ? value : String(value)
}
Then db.put(key, 2, callback)
translates into db._put(key, '2', options, callback)
.
The default _serializeValue()
is an identity function.
db._get(key, options, callback)
Get a value by key
. The options
object will always have the following properties: asBuffer
. If the key does not exist, call the callback
function with a new Error('NotFound')
. Otherwise call callback
with null
as the first argument and the value as the second.
The default _get()
invokes callback
on a next tick with a NotFound
error. It must be overridden.
db._getMany(keys, options, callback)
This new method is optional for the time being. To enable its tests, set the getMany
option of the test suite to true
.
Get multiple values by an array of keys
. The options
object will always have the following properties: asBuffer
. If an error occurs, call the callback
function with an Error
. Otherwise call callback
with null
as the first argument and an array of values as the second. If a key does not exist, set the relevant value to undefined
.
The default _getMany()
invokes callback
on a next tick with an array of values that is equal in length to keys
and is filled with undefined
. It must be overridden to support getMany()
but this is currently an opt-in feature. If the implementation does support getMany()
then db.supports.getMany
must be set to true via the constructor.
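As a sketch of the _getMany() contract over a Map standing in for the underlying storage (hypothetical, not the real default):

```javascript
// Hypothetical sketch of the _getMany() contract: an array of values
// equal in length to keys, with undefined for keys that do not exist.
function getMany (store, keys, callback) {
  queueMicrotask(() => {
    const values = keys.map((key) => store.has(key) ? store.get(key) : undefined)
    callback(null, values)
  })
}
```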
db._put(key, value, options, callback)
Store a new entry or overwrite an existing entry. There are no default options but options
will always be an object. If putting failed, call the callback
function with an Error
. Otherwise call callback
without any arguments.
The default _put()
invokes callback
on a next tick. It must be overridden.
db._del(key, options, callback)
Delete an entry. There are no default options but options
will always be an object. If deletion failed, call the callback
function with an Error
. Otherwise call callback
without any arguments.
The default _del()
invokes callback
on a next tick. It must be overridden.
db._batch(operations, options, callback)
Perform multiple put and/or del operations in bulk. The operations
argument is always an Array
containing a list of operations to be executed sequentially, although as a whole they should be performed as an atomic operation. Each operation is guaranteed to have at least type
and key
properties. There are no default options but options
will always be an object. If the batch failed, call the callback
function with an Error
. Otherwise call callback
without any arguments.
The default _batch()
invokes callback
on a next tick. It must be overridden.
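To make these contracts concrete, here is a toy in-memory store following the same method shapes. It is a standalone sketch; a real implementation would extend AbstractLevelDOWN, implement the underscore methods on its prototype and pass the abstract test suite:

```javascript
// Toy in-memory store following the _get/_put/_del/_batch contracts
// described above. Standalone sketch, not extending AbstractLevelDOWN.
class ToyDOWN {
  constructor () {
    this._store = new Map()
  }

  _put (key, value, options, callback) {
    this._store.set(key, value)
    queueMicrotask(() => callback())
  }

  _get (key, options, callback) {
    if (!this._store.has(key)) {
      queueMicrotask(() => callback(new Error('NotFound')))
    } else {
      const value = this._store.get(key)
      queueMicrotask(() => callback(null, value))
    }
  }

  _del (key, options, callback) {
    this._store.delete(key)
    queueMicrotask(() => callback())
  }

  _batch (operations, options, callback) {
    // Applied synchronously here, so the whole batch is trivially atomic
    for (const op of operations) {
      if (op.type === 'put') this._store.set(op.key, op.value)
      else this._store.delete(op.key)
    }
    queueMicrotask(() => callback())
  }
}
```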
db._chainedBatch()
The default _chainedBatch()
returns a functional AbstractChainedBatch
instance that uses db._batch(array, options, callback)
under the hood. The prototype is available on the main exports for you to extend. If you want to implement chainable batch operations in a different manner then you should extend AbstractChainedBatch
and return an instance of this prototype in the _chainedBatch()
method:
var AbstractChainedBatch = require('abstract-leveldown').AbstractChainedBatch
var inherits = require('util').inherits
function ChainedBatch (db) {
AbstractChainedBatch.call(this, db)
}
inherits(ChainedBatch, AbstractChainedBatch)
FakeLevelDOWN.prototype._chainedBatch = function () {
return new ChainedBatch(this)
}
db._iterator(options)
The default _iterator()
returns a noop AbstractIterator
instance. It must be overridden, by extending AbstractIterator
(available on the main module exports) and returning an instance of this prototype in the _iterator(options)
method.
The options
object will always have the following properties: reverse
, keys
, values
, limit
, keyAsBuffer
and valueAsBuffer
.
db._clear(options, callback)
This method is experimental and optional for the time being. To enable its tests, set the clear
option of the test suite to true
.
Delete all entries or a range. Does not have to be atomic. It is recommended (and possibly mandatory in the future) to operate on a snapshot so that writes scheduled after a call to clear()
will not be affected.
The default _clear()
uses _iterator()
and _del()
to provide a reasonable fallback, but requires binary key support. It is recommended to implement _clear()
with more performant primitives than _iterator()
and _del()
if the underlying storage has such primitives. Implementations that don't support binary keys must implement their own _clear()
.
Implementations that wrap another db
can typically forward the _clear()
call to that db
, having transformed range options if necessary.
The options
object will always have the following properties: reverse
and limit
.
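A sketch of such a fallback over a Map-backed store, honouring the reverse and limit options (hypothetical and simplified; the real default uses _iterator() and _del()):

```javascript
// Hypothetical sketch of a _clear() fallback, honouring reverse and limit.
function clear (store, options, callback) {
  const { reverse = false, limit = Infinity } = options
  let keys = [...store.keys()].sort() // visit keys in sorted order
  if (reverse) keys.reverse() // delete the highest keys first
  if (limit !== Infinity && limit !== -1) keys = keys.slice(0, limit)
  for (const key of keys) store.delete(key)
  queueMicrotask(() => callback())
}
```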
iterator = AbstractIterator(db)
The first argument to this constructor must be an instance of your AbstractLevelDOWN
implementation. The constructor will set iterator.db
which is used to access db._serialize*
and ensures that db
will not be garbage collected in case there are no other references to it.
iterator._next(callback)
Advance the iterator and yield the entry at that key. If nexting failed, call the callback
function with an Error
. Otherwise, call callback
with null
, a key
and a value
.
The default _next()
invokes callback
on a next tick. It must be overridden.
iterator._seek(target)
Seek the iterator to a given key or the closest key. This method is optional.
iterator._end(callback)
Free up underlying resources. This method is guaranteed to only be called once. If ending failed, call the callback
function with an Error
. Otherwise call callback
without any arguments.
The default _end()
invokes callback
on a next tick. Overriding is optional.
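A minimal _next() over a snapshotted array might look like this. It is a hypothetical standalone sketch; a real iterator would extend AbstractIterator:

```javascript
// Hypothetical iterator sketch: _next() yields entries from a snapshot
// taken at construction, then signals the natural end by calling back
// with no arguments.
class ToyIterator {
  constructor (db) {
    this.db = db
    // Snapshot the entries at construction time, in key order
    this._entries = [...db._store.entries()].sort((a, b) => a[0] < b[0] ? -1 : 1)
    this._pos = 0
  }

  _next (callback) {
    const entry = this._entries[this._pos++]
    queueMicrotask(() => {
      if (entry) callback(null, entry[0], entry[1])
      else callback() // natural end: key and value are undefined
    })
  }

  _end (callback) {
    this._entries = [] // free underlying resources
    queueMicrotask(() => callback())
  }
}
```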
chainedBatch = AbstractChainedBatch(db)
The first argument to this constructor must be an instance of your AbstractLevelDOWN
implementation. The constructor will set chainedBatch.db
which is used to access db._serialize*
and ensures that db
will not be garbage collected in case there are no other references to it.
chainedBatch._put(key, value, options)
Queue a put
operation on this batch. There are no default options but options
will always be an object.
chainedBatch._del(key, options)
Queue a del
operation on this batch. There are no default options but options
will always be an object.
chainedBatch._clear()
Clear all queued operations on this batch.
chainedBatch._write(options, callback)
The default _write
method uses db._batch
. If the _write
method is overridden it must atomically commit the queued operations. There are no default options but options
will always be an object. If committing fails, call the callback
function with an Error
. Otherwise call callback
without any arguments.
To prove that your implementation is abstract-leveldown
compliant, include the abstract test suite in your test.js
(or similar):
const test = require('tape')
const suite = require('abstract-leveldown/test')
const YourDOWN = require('.')
suite({
test: test,
factory: function () {
return new YourDOWN()
}
})
This is the most minimal setup. The test
option must be a function that is API-compatible with tape
. The factory
option must be a function that returns a unique and isolated database instance. The factory will be called many times by the test suite.
If your implementation is disk-based we recommend using tempy
(or similar) to create unique temporary directories. Your setup could look something like:
const test = require('tape')
const tempy = require('tempy')
const suite = require('abstract-leveldown/test')
const YourDOWN = require('.')
suite({
test: test,
factory: function () {
return new YourDOWN(tempy.directory())
}
})
As not every implementation can be fully compliant due to limitations of its underlying storage, some tests may be skipped. For example, to skip snapshot tests:
suite({
// ..
snapshots: false
})
This also serves as a signal to users of your implementation. The following options are available:
bufferKeys
: set to false
if binary keys are not supported by the underlying storage
seek
: set to false
if your iterator
does not implement _seek
clear
: defaults to false
until a next major release. Set to true
if your implementation either implements _clear()
itself or is suitable to use the default implementation of _clear()
(which requires binary key support).
getMany
: defaults to false
until a next major release. Set to true
if your implementation implements _getMany()
.
snapshots
: set to false
if your implementation cannot offer the snapshot guarantee, i.e. if reads on an iterator are affected by simultaneous writes
createIfMissing
and errorIfExists
: set to false
if db._open()
does not support these options.
This metadata will be moved to manifests (db.supports
) in the future.
To perform (a)synchronous work before or after each test, you may define setUp
and tearDown
functions:
suite({
// ..
setUp: function (t) {
t.end()
},
tearDown: function (t) {
t.end()
}
})
testCommon
The input to the test suite is a testCommon
object. Should you need to reuse testCommon
for your own (additional) tests, use the included utility to create a testCommon
with defaults:
const test = require('tape')
const suite = require('abstract-leveldown/test')
const YourDOWN = require('.')
const testCommon = suite.common({
test: test,
factory: function () {
return new YourDOWN()
}
})
suite(testCommon)
The testCommon
object will have all the properties described above: test
, factory
, setUp
, tearDown
and the skip options. You might use it like so:
test('setUp', testCommon.setUp)
test('custom test', function (t) {
var db = testCommon.factory()
// ..
})
test('another custom test', function (t) {
var db = testCommon.factory()
// ..
})
test('tearDown', testCommon.tearDown)
With npm do:
npm install abstract-leveldown
Level/abstract-leveldown
is an OPEN Open Source Project. This means that:
Individuals making significant and valuable contributions are given commit-access to the project to contribute as they see fit. This project is more like an open wiki than a standard guarded open source project.
See the Contribution Guide for more details.
Author: Level
Source Code: https://github.com/Level/abstract-leveldown
License: MIT license
1656983400
A convenience package that bundles levelup
, encoding-down
and memdown
and exposes levelup
on its export.
📌 This module will soon be deprecated, because it is superseded by
memory-level
.
Use this package to avoid having to explicitly install memdown
when you want to use memdown
with levelup
for non-persistent levelup
data storage.
const level = require('level-mem')
// Create our in-memory database
const db = level()
// Put a key & value
await db.put('name', 'Level')
// Get value by key
const value = await db.get('name')
console.log(value)
See levelup
and memdown
for more details.
If you are upgrading: please see UPGRADING.md
.
Level/mem
is an OPEN Open Source Project. This means that:
Individuals making significant and valuable contributions are given commit-access to the project to contribute as they see fit. This project is more like an open wiki than a standard guarded open source project.
See the Contribution Guide for more details.
Support us with a monthly donation on Open Collective and help us continue our work.
Author: Level
Source Code: https://github.com/Level/mem
License: MIT license
1656975960
Universal abstract-level
database for Node.js and browsers. This is a convenience package that exports classic-level
in Node.js and browser-level
in browsers, making it an ideal entry point to start creating lexicographically sorted key-value databases.
📌 Which module should I use? What is
abstract-level
? Head over to the FAQ.
If you are upgrading: please see UPGRADING.md
.
const { Level } = require('level')
// Create a database
const db = new Level('example', { valueEncoding: 'json' })
// Add an entry with key 'a' and value 1
await db.put('a', 1)
// Add multiple entries
await db.batch([{ type: 'put', key: 'b', value: 2 }])
// Get value of key 'a': 1
const value = await db.get('a')
// Iterate entries with keys that are greater than 'a'
for await (const [key, value] of db.iterator({ gt: 'a' })) {
console.log(value) // 2
}
All asynchronous methods also support callbacks.
Callback example
db.put('a', { x: 123 }, function (err) {
if (err) throw err
db.get('a', function (err, value) {
console.log(value) // { x: 123 }
})
})
TypeScript type declarations are included and cover the methods that are common between classic-level
and browser-level
. Usage from TypeScript requires generic type parameters.
TypeScript example
// Specify types of keys and values (any, in the case of json).
// The generic type parameters default to Level<string, string>.
const db = new Level<string, any>('./db', { valueEncoding: 'json' })
// All relevant methods then use those types
await db.put('a', { x: 123 })
// Specify different types when overriding encoding per operation
await db.get<string, string>('a', { valueEncoding: 'utf8' })
// Though in some cases TypeScript can infer them
await db.get('a', { valueEncoding: db.valueEncoding('utf8') })
// It works the same for sublevels
const abc = db.sublevel('abc')
const xyz = db.sublevel<string, any>('xyz', { valueEncoding: 'json' })
With npm do:
npm install level
For use in browsers, this package is best used with browserify
, webpack
, rollup
or similar bundlers. For a quick start, visit browserify-starter
or webpack-starter
.
At the time of writing, level
works in Node.js 12+ and Electron 5+ on Linux, Mac OS, Windows and FreeBSD, including any future Node.js and Electron release thanks to Node-API, including ARM platforms like Raspberry Pi and Android, as well as in Chrome, Firefox, Edge, Safari, iOS Safari and Chrome for Android. For details, see Supported Platforms of classic-level
and Browser Support of browser-level
.
Binary keys and values are supported across the board.
The API of level
follows that of abstract-level
. The documentation below covers it all except for Encodings, Events and Errors which are exclusively documented in abstract-level
. For options and additional methods specific to classic-level
and browser-level
, please see their respective READMEs.
An abstract-level
and thus level
database is at its core a key-value database. A key-value pair is referred to as an entry here and typically returned as an array, comparable to Object.entries()
.
db = new Level(location[, options])
Create a new database or open an existing database. The location
argument must be a directory path (relative or absolute) where LevelDB will store its files, or in browsers, the name of the IDBDatabase
to be opened.
The optional options
object may contain:
keyEncoding
(string or object, default 'utf8'
): encoding to use for keys
valueEncoding
(string or object, default 'utf8'
): encoding to use for values.
See Encodings for a full description of these options. Other options
(except passive
) are forwarded to db.open()
which is automatically called in a next tick after the constructor returns. Any read & write operations are queued internally until the database has finished opening. If opening fails, those queued operations will yield errors.
db.status
Read-only getter that returns a string reflecting the current state of the database:
'opening'
- waiting for the database to be opened
'open'
- successfully opened the database
'closing'
- waiting for the database to be closed
'closed'
- successfully closed the database.
db.open([callback])
Open the database. The callback
function will be called with no arguments when successfully opened, or with a single error argument if opening failed. If no callback is provided, a promise is returned. Options passed to open()
take precedence over options passed to the database constructor. The createIfMissing
and errorIfExists
options are not supported by browser-level
.
The optional options
object may contain:
createIfMissing
(boolean, default: true
): if true
, create an empty database if one doesn't already exist. If false
and the database doesn't exist, opening will fail.
errorIfExists
(boolean, default: false
): if true
and the database already exists, opening will fail.
passive
(boolean, default: false
): wait for, but do not initiate, opening of the database.
It's generally not necessary to call open()
because it's automatically called by the database constructor. It may however be useful to capture an error from failure to open, that would otherwise not surface until another method like db.get()
is called. It's also possible to reopen the database after it has been closed with close()
. Once open()
has then been called, any read & write operations will again be queued internally until opening has finished.
The open()
and close()
methods are idempotent. If the database is already open, the callback
will be called in a next tick. If opening is already in progress, the callback
will be called when that has finished. If closing is in progress, the database will be reopened once closing has finished. Likewise, if close()
is called after open()
, the database will be closed once opening has finished and the prior open()
call will receive an error.
db.close([callback])
Close the database. The callback
function will be called with no arguments if closing succeeded or with a single error
argument if closing failed. If no callback is provided, a promise is returned.
A database may have associated resources like file handles and locks. When the database is no longer needed (for the remainder of a program) it's recommended to call db.close()
to free up resources.
After db.close()
has been called, no further read & write operations are allowed unless and until db.open()
is called again. For example, db.get(key)
will yield an error with code LEVEL_DATABASE_NOT_OPEN
. Any unclosed iterators or chained batches will be closed by db.close()
and can then no longer be used even when db.open()
is called again.
db.supports
A manifest describing the features supported by this database. Might be used like so:
if (!db.supports.permanence) {
throw new Error('Persistent storage is required')
}
db.get(key[, options][, callback])
Get a value from the database by key
. The optional options
object may contain:
keyEncoding
: custom key encoding for this operation, used to encode the key.
valueEncoding
: custom value encoding for this operation, used to decode the value.
The callback
function will be called with an error if the operation failed. If the key was not found, the error will have code LEVEL_NOT_FOUND
. If successful the first argument will be null
and the second argument will be the value. If no callback is provided, a promise is returned.
db.getMany(keys[, options][, callback])
Get multiple values from the database by an array of keys
. The optional options
object may contain:
keyEncoding
: custom key encoding for this operation, used to encode the keys.
valueEncoding
: custom value encoding for this operation, used to decode values.
The callback
function will be called with an error if the operation failed. If successful the first argument will be null
and the second argument will be an array of values with the same order as keys
. If a key was not found, the relevant value will be undefined
. If no callback is provided, a promise is returned.
db.put(key, value[, options][, callback])
Add a new entry or overwrite an existing entry. The optional options
object may contain:
keyEncoding
: custom key encoding for this operation, used to encode the key.
valueEncoding
: custom value encoding for this operation, used to encode the value.
The callback
function will be called with no arguments if the operation was successful or with an error if it failed. If no callback is provided, a promise is returned.
db.del(key[, options][, callback])
Delete an entry by key
. The optional options
object may contain:
keyEncoding
: custom key encoding for this operation, used to encode the key.
The callback
function will be called with no arguments if the operation was successful or with an error if it failed. If no callback is provided, a promise is returned.
db.batch(operations[, options][, callback])
Perform multiple put and/or del operations in bulk. The operations
argument must be an array containing a list of operations to be executed sequentially, although as a whole they are performed as an atomic operation.
Each operation must be an object with at least a type
property set to either 'put'
or 'del'
. If the type
is 'put'
, the operation must have key
and value
properties. It may optionally have keyEncoding
and / or valueEncoding
properties to encode keys or values with a custom encoding for just that operation. If the type
is 'del'
, the operation must have a key
property and may optionally have a keyEncoding
property.
An operation of either type may also have a sublevel
property, to prefix the key of the operation with the prefix of that sublevel. This allows atomically committing data to multiple sublevels. Keys and values will be encoded by the sublevel, to the same effect as a sublevel.batch(..)
call. In the following example, the first value
will be encoded with 'json'
rather than the default encoding of db
:
const people = db.sublevel('people', { valueEncoding: 'json' })
const nameIndex = db.sublevel('names')
await db.batch([{
type: 'put',
sublevel: people,
key: '123',
value: {
name: 'Alice'
}
}, {
type: 'put',
sublevel: nameIndex,
key: 'Alice',
value: '123'
}])
The optional options
object may contain:
keyEncoding
: custom key encoding for this batch, used to encode keys.
valueEncoding
: custom value encoding for this batch, used to encode values.
Encoding properties on individual operations take precedence. In the following example, the first value will be encoded with the 'utf8'
encoding and the second with 'json'
.
await db.batch([
{ type: 'put', key: 'a', value: 'foo' },
{ type: 'put', key: 'b', value: 123, valueEncoding: 'json' }
], { valueEncoding: 'utf8' })
The callback
function will be called with no arguments if the batch was successful or with an error if it failed. If no callback is provided, a promise is returned.
chainedBatch = db.batch()
Create a chained batch, when batch()
is called with zero arguments. A chained batch can be used to build and eventually commit an atomic batch of operations. Depending on how it's used, it is possible to obtain greater performance with this form of batch()
. On browser-level
however, it is just sugar.
await db.batch()
.del('bob')
.put('alice', 361)
.put('kim', 220)
.write()
iterator = db.iterator([options])
Create an iterator. The optional options
object may contain the following range options to control the range of entries to be iterated:
gt
(greater than) or gte
(greater than or equal): define the lower bound of the range to be iterated. Only entries where the key is greater than (or equal to) this option will be included in the range. When reverse
is true the order will be reversed, but the entries iterated will be the same.
lt
(less than) or lte
(less than or equal): define the higher bound of the range to be iterated. Only entries where the key is less than (or equal to) this option will be included in the range. When reverse
is true the order will be reversed, but the entries iterated will be the same.
reverse
(boolean, default: false
): iterate entries in reverse order. Beware that a reverse seek can be slower than a forward seek.
limit
(number, default: Infinity
): limit the number of entries yielded. This number represents a maximum number of entries and will not be reached if the end of the range is reached first. A value of Infinity
or -1
means there is no limit. When reverse
is true the entries with the highest keys will be returned instead of the lowest keys.
The gte
and lte
range options take precedence over gt
and lt
respectively. If no range options are provided, the iterator will visit all entries of the database, starting at the lowest key and ending at the highest key (unless reverse
is true). In addition to range options, the options
object may contain:
keys
(boolean, default: true
): whether to return the key of each entry. If set to false
, the iterator will yield keys that are undefined
. Prefer to use db.keys()
instead.
values
(boolean, default: true
): whether to return the value of each entry. If set to false
, the iterator will yield values that are undefined
. Prefer to use db.values()
instead.
keyEncoding
: custom key encoding for this iterator, used to encode range options, to encode seek()
targets and to decode keys.
valueEncoding
: custom value encoding for this iterator, used to decode values.
📌 To instead consume data using streams, see
level-read-stream
and
level-web-stream
.
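These range semantics, including the precedence of gte and lte over gt and lt, can be sketched as a plain predicate for lexicographically compared string keys (a hypothetical helper, not part of the level API):

```javascript
// Hypothetical predicate implementing the range-option semantics
// described above, for string keys compared lexicographically.
function inRange (key, options) {
  // gte and lte take precedence over gt and lt respectively
  if ('gte' in options) {
    if (key < options.gte) return false
  } else if ('gt' in options) {
    if (key <= options.gt) return false
  }
  if ('lte' in options) {
    if (key > options.lte) return false
  } else if ('lt' in options) {
    if (key >= options.lt) return false
  }
  return true
}
```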
keyIterator = db.keys([options])
Create a key iterator, having the same interface as db.iterator()
except that it yields keys instead of entries. If only keys are needed, using db.keys()
may increase performance because values won't have to be fetched, copied or decoded. Options are the same as for db.iterator()
except that db.keys()
does not take keys
, values
and valueEncoding
options.
// Iterate lazily
for await (const key of db.keys({ gt: 'a' })) {
console.log(key)
}
// Get all at once. Setting a limit is recommended.
const keys = await db.keys({ gt: 'a', limit: 10 }).all()
valueIterator = db.values([options])
Create a value iterator, having the same interface as db.iterator()
except that it yields values instead of entries. If only values are needed, using db.values()
may increase performance because keys won't have to be fetched, copied or decoded. Options are the same as for db.iterator()
except that db.values()
does not take keys
and values
options. Note that it does take a keyEncoding
option, relevant for the encoding of range options.
// Iterate lazily
for await (const value of db.values({ gt: 'a' })) {
console.log(value)
}
// Get all at once. Setting a limit is recommended.
const values = await db.values({ gt: 'a', limit: 10 }).all()
db.clear([options][, callback])
Delete all entries or a range. Not guaranteed to be atomic. Accepts the following options (with the same rules as on iterators):
gt
(greater than) or gte
(greater than or equal): define the lower bound of the range to be deleted. Only entries where the key is greater than (or equal to) this option will be included in the range. When reverse
is true the order will be reversed, but the entries deleted will be the same.
lt
(less than) or lte
(less than or equal): define the higher bound of the range to be deleted. Only entries where the key is less than (or equal to) this option will be included in the range. When reverse
is true the order will be reversed, but the entries deleted will be the same.
reverse
(boolean, default: false
): delete entries in reverse order. Only effective in combination with limit
, to delete the last N entries.
limit
(number, default: Infinity
): limit the number of entries to be deleted. This number represents a maximum number of entries and will not be reached if the end of the range is reached first. A value of Infinity
or -1
means there is no limit. When reverse
is true the entries with the highest keys will be deleted instead of the lowest keys.
keyEncoding
: custom key encoding for this operation, used to encode range options.
The gte
and lte
range options take precedence over gt
and lt
respectively. If no options are provided, all entries will be deleted. The callback
function will be called with no arguments if the operation was successful or with an error if it failed. If no callback is provided, a promise is returned.
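Since a real database isn't available here, the way the range options select entries can be sketched in plain JavaScript. The selectRange helper below is hypothetical, not part of the level API; it only models the option semantics (gte/lte taking precedence over gt/lt, reverse deleting from the highest keys, limit capping the count):

```javascript
// Hypothetical sketch of how clear()'s range options select entries
// from a sorted list of keys. Not the real implementation.
function selectRange (sortedKeys, { gt, gte, lt, lte, reverse = false, limit = Infinity } = {}) {
  let range = sortedKeys.filter(k =>
    (gte !== undefined ? k >= gte : gt === undefined || k > gt) &&
    (lte !== undefined ? k <= lte : lt === undefined || k < lt)
  )
  if (reverse) range = range.reverse() // highest keys first
  return limit === -1 ? range : range.slice(0, limit)
}

console.log(selectRange(['a', 'b', 'c', 'd'], { gt: 'a', limit: 2 })) // ['b', 'c']
console.log(selectRange(['a', 'b', 'c', 'd'], { reverse: true, limit: 1 })) // ['d']
```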
sublevel = db.sublevel(name[, options])
Create a sublevel that has the same interface as db
(except for additional methods specific to classic-level
or browser-level
) and prefixes the keys of operations before passing them on to db
. The name
argument is required and must be a string.
const example = db.sublevel('example')
await example.put('hello', 'world')
await db.put('a', '1')
// Prints ['hello', 'world']
for await (const [key, value] of example.iterator()) {
console.log([key, value])
}
Sublevels effectively separate a database into sections. Think SQL tables, but evented, ranged and real-time! Each sublevel is an AbstractLevel
instance with its own keyspace, events and encodings. For example, it's possible to have one sublevel with 'buffer'
keys and another with 'utf8'
keys. The same goes for values. Like so:
db.sublevel('one', { valueEncoding: 'json' })
db.sublevel('two', { keyEncoding: 'buffer' })
An own keyspace means that sublevel.iterator()
only includes entries of that sublevel, sublevel.clear()
will only delete entries of that sublevel, and so forth. Range options get prefixed too.
Fully qualified keys (as seen from the parent database) take the form of prefix + key
where prefix
is separator + name + separator
. If name
is empty, the effective prefix is two separators. Sublevels can be nested: if db
is itself a sublevel then the effective prefix is a combined prefix, e.g. '!one!!two!'
. Note that a parent database will see its own keys as well as keys of any nested sublevels:
// Prints ['!example!hello', 'world'] and ['a', '1']
for await (const [key, value] of db.iterator()) {
console.log([key, value])
}
:pushpin: The key structure is equal to that of
subleveldown
which offered sublevels before they were built-in toabstract-level
. This means that anabstract-level
sublevel can read sublevels previously created with (and populated by)subleveldown
.
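The prefix scheme described above can be sketched with a small hypothetical helper (assuming the default '!' separator; sublevelPrefix is not part of the real API):

```javascript
// Hypothetical sketch: compute the effective prefix of (nested) sublevels,
// where each name contributes separator + name + separator.
function sublevelPrefix (names, separator = '!') {
  return names.map(name => separator + name + separator).join('')
}

console.log(sublevelPrefix(['example'])) // '!example!'
console.log(sublevelPrefix(['one', 'two'])) // '!one!!two!'
console.log(sublevelPrefix(['example']) + 'hello') // fully qualified key: '!example!hello'
```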
Internally, sublevels operate on keys that are either a string, Buffer or Uint8Array, depending on parent database and choice of encoding. Which is to say: binary keys are fully supported. The name
must however always be a string and can only contain ASCII characters.
The optional options object may contain:

separator (string, default: '!'): character for separating sublevel names from user keys and each other. Must sort before characters used in name. An error will be thrown if that's not the case.

keyEncoding (string or object, default: 'utf8'): encoding to use for keys.

valueEncoding (string or object, default: 'utf8'): encoding to use for values.

The keyEncoding and valueEncoding options are forwarded to the AbstractLevel constructor and work the same, as if a new, separate database was created. They default to 'utf8' regardless of the encodings configured on db. Other options are forwarded too, but abstract-level (and therefore level) has no relevant options at the time of writing. For example, setting the createIfMissing option will have no effect. Why is that?
Like regular databases, sublevels open themselves but they do not affect the state of the parent database. This means a sublevel can be individually closed and (re)opened. If the sublevel is created while the parent database is opening, it will wait for that to finish. If the parent database is closed, then opening the sublevel will fail and subsequent operations on the sublevel will yield errors with code LEVEL_DATABASE_NOT_OPEN
.
chainedBatch

chainedBatch.put(key, value[, options])

Queue a put operation on this batch, not committed until write() is called. This will throw a LEVEL_INVALID_KEY or LEVEL_INVALID_VALUE error if key or value is invalid. The optional options object may contain:

keyEncoding: custom key encoding for this operation, used to encode the key.

valueEncoding: custom value encoding for this operation, used to encode the value.

sublevel (sublevel instance): act as though the put operation is performed on the given sublevel, to similar effect as sublevel.batch().put(key, value). This allows atomically committing data to multiple sublevels. The key will be prefixed with the prefix of the sublevel, and the key and value will be encoded by the sublevel (using the default encodings of the sublevel unless keyEncoding and/or valueEncoding are provided).

chainedBatch.del(key[, options])

Queue a del operation on this batch, not committed until write() is called. This will throw a LEVEL_INVALID_KEY error if key is invalid. The optional options object may contain:

keyEncoding: custom key encoding for this operation, used to encode the key.

sublevel (sublevel instance): act as though the del operation is performed on the given sublevel, to similar effect as sublevel.batch().del(key). This allows atomically committing data to multiple sublevels. The key will be prefixed with the prefix of the sublevel, and the key will be encoded by the sublevel (using the default key encoding of the sublevel unless keyEncoding is provided).

chainedBatch.clear()

Clear all queued operations on this batch.
chainedBatch.write([options][, callback])
Commit the queued operations for this batch. All operations will be written atomically, that is, they will either all succeed or fail with no partial commits.
There are no options
(that are common between classic-level
and browser-level
). Note that write()
does not take encoding options. Those can only be set on put()
and del()
.
The callback
function will be called with no arguments if the batch was successful or with an error if it failed. If no callback is provided, a promise is returned.
After write()
or close()
has been called, no further operations are allowed.
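The semantics above (operations are queued, applied only on write(), and no further operations are allowed afterwards) can be illustrated with a plain-JavaScript sketch backed by a Map. This is a hypothetical model, not the real abstract-level implementation:

```javascript
// Hypothetical sketch of chained batch semantics over a Map-backed store.
class ChainedBatchSketch {
  constructor (store) {
    this.store = store
    this.ops = [] // queued operations, applied only on write()
    this.written = false
  }
  put (key, value) { this.ops.push({ type: 'put', key, value }); return this }
  del (key) { this.ops.push({ type: 'del', key }); return this }
  clear () { this.ops = []; return this } // drop queued operations
  get length () { return this.ops.length }
  write () {
    if (this.written) throw new Error('Batch already written')
    for (const op of this.ops) {
      if (op.type === 'put') this.store.set(op.key, op.value)
      else this.store.delete(op.key)
    }
    this.written = true
  }
}

const store = new Map([['a', 'old']])
const batch = new ChainedBatchSketch(store).put('b', '2').del('a')
console.log(batch.length) // 2
batch.write()
console.log(store.has('a'), store.get('b')) // false '2'
```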
chainedBatch.close([callback])
Free up underlying resources. This should be done even if the chained batch has zero queued operations. Automatically called by write()
so normally not necessary to call, unless the intent is to discard a chained batch without committing it. The callback
function will be called with no arguments. If no callback is provided, a promise is returned. Closing the batch is an idempotent operation, such that calling close()
more than once is allowed and makes no difference.
chainedBatch.length
The number of queued operations on the current batch.
chainedBatch.db
A reference to the database that created this chained batch.
iterator
An iterator allows one to lazily read a range of entries stored in the database. The entries will be sorted by keys in lexicographic order (in other words: byte order) which in short means key 'a'
comes before 'b'
and key '10'
comes before '2'
.
A classic-level
iterator reads from a snapshot of the database, created at the time db.iterator()
was called. This means the iterator will not see the data of simultaneous write operations. A browser-level
iterator does not offer such guarantees, as is indicated by db.supports.snapshots
. That property will be true in Node.js and false in browsers.
Iterators can be consumed with for await...of
and iterator.all()
, or by manually calling iterator.next()
or nextv()
in succession. In the latter case, iterator.close()
must always be called. In contrast, finishing, throwing, breaking or returning from a for await...of
loop automatically calls iterator.close()
, as does iterator.all()
.
An iterator reaches its natural end in the following situations:
iterator.seek() was out of range.

An iterator keeps track of calls that are in progress. It doesn't allow concurrent next(), nextv() or all() calls (including a combination thereof) and will throw an error with code LEVEL_ITERATOR_BUSY if that happens:
// Not awaited and no callback provided
iterator.next()
try {
// Which means next() is still in progress here
iterator.all()
} catch (err) {
console.log(err.code) // 'LEVEL_ITERATOR_BUSY'
}
for await...of iterator
Yields entries, which are arrays containing a key
and value
. The type of key
and value
depends on the options passed to db.iterator()
.
try {
for await (const [key, value] of db.iterator()) {
console.log(key)
}
} catch (err) {
console.error(err)
}
iterator.next([callback])
Advance to the next entry and yield that entry. If an error occurs, the callback
function will be called with an error. Otherwise, the callback
receives null
, a key
and a value
. The type of key
and value
depends on the options passed to db.iterator()
. If the iterator has reached its natural end, both key
and value
will be undefined
.
If no callback is provided, a promise is returned for either an entry array (containing a key
and value
) or undefined
if the iterator reached its natural end.
Note: iterator.close()
must always be called once there's no intention to call next()
or nextv()
again. Even if such calls yielded an error and even if the iterator reached its natural end. Not closing the iterator will result in memory leaks and may also affect performance of other operations if many iterators are unclosed and each is holding a snapshot of the database.
iterator.nextv(size[, options][, callback])
Advance repeatedly and get at most size
amount of entries in a single call. Can be faster than repeated next()
calls. The size
argument must be an integer and has a soft minimum of 1. There are no options
at the moment.
If an error occurs, the callback
function will be called with an error. Otherwise, the callback
receives null
and an array of entries, where each entry is an array containing a key and value. The natural end of the iterator will be signaled by yielding an empty array. If no callback is provided, a promise is returned.
const iterator = db.iterator()
while (true) {
const entries = await iterator.nextv(100)
if (entries.length === 0) {
break
}
for (const [key, value] of entries) {
// ..
}
}
await iterator.close()
iterator.all([options][, callback])
Advance repeatedly and get all (remaining) entries as an array, automatically closing the iterator. Assumes that those entries fit in memory. If that's not the case, instead use next()
, nextv()
or for await...of
. There are no options
at the moment. If an error occurs, the callback
function will be called with an error. Otherwise, the callback
receives null
and an array of entries, where each entry is an array containing a key and value. If no callback is provided, a promise is returned.
const entries = await db.iterator({ limit: 100 }).all()
for (const [key, value] of entries) {
// ..
}
iterator.seek(target[, options])
Seek to the key closest to target
. Subsequent calls to iterator.next()
, nextv()
or all()
(including implicit calls in a for await...of
loop) will yield entries with keys equal to or larger than target
, or equal to or smaller than target
if the reverse
option passed to db.iterator()
was true.
The optional options
object may contain:
keyEncoding
: custom key encoding, used to encode the target
. By default the keyEncoding
option of the iterator is used or (if that wasn't set) the keyEncoding
of the database.If range options like gt
were passed to db.iterator()
and target
does not fall within that range, the iterator will reach its natural end.
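The seek behavior over a sorted key list can be sketched with a hypothetical seekIndex helper (not part of the real API): position at the first key greater than or equal to the target, or the last key less than or equal to it when iterating in reverse, with -1 standing in for the natural end.

```javascript
// Hypothetical sketch of seek(): find the index the iterator would
// continue from after seeking to target.
function seekIndex (sortedKeys, target, reverse = false) {
  if (reverse) {
    for (let i = sortedKeys.length - 1; i >= 0; i--) {
      if (sortedKeys[i] <= target) return i
    }
    return -1 // out of range: the iterator reaches its natural end
  }
  return sortedKeys.findIndex(k => k >= target)
}

const keys = ['a', 'c', 'e']
console.log(seekIndex(keys, 'b'))       // 1 -> next() would yield 'c'
console.log(seekIndex(keys, 'b', true)) // 0 -> next() would yield 'a'
console.log(seekIndex(keys, 'f'))       // -1 -> natural end
```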
iterator.close([callback])
Free up underlying resources. The callback
function will be called with no arguments. If no callback is provided, a promise is returned. Closing the iterator is an idempotent operation, such that calling close()
more than once is allowed and makes no difference.
If a next(), nextv() or all() call is in progress, closing will wait for that to finish. After close() has been called, further calls to next(), nextv() or all() will yield an error with code LEVEL_ITERATOR_NOT_OPEN.
iterator.db
A reference to the database that created this iterator.
iterator.count
Read-only getter that indicates how many keys have been yielded so far (by any method) excluding calls that errored or yielded undefined
.
iterator.limit
Read-only getter that reflects the limit
that was set in options. Greater than or equal to zero. Equals Infinity
if no limit, which allows for easy math:
const hasMore = iterator.count < iterator.limit
const remaining = iterator.limit - iterator.count
keyIterator
A key iterator has the same interface as iterator
except that its methods yield keys instead of entries. For the keyIterator.next(callback)
method, this means that the callback
will receive two arguments (an error and key) instead of three. Usage is otherwise the same.
valueIterator
A value iterator has the same interface as iterator
except that its methods yield values instead of entries. For the valueIterator.next(callback)
method, this means that the callback
will receive two arguments (an error and value) instead of three. Usage is otherwise the same.
sublevel
A sublevel is an instance of the AbstractSublevel
class, which extends AbstractLevel
and thus has the same API as documented above. Sublevels have a few additional properties.
sublevel.prefix
Prefix of the sublevel. A read-only string property.
const example = db.sublevel('example')
const nested = example.sublevel('nested')
console.log(example.prefix) // '!example!'
console.log(nested.prefix) // '!example!!nested!'
sublevel.db
Parent database. A read-only property.
const example = db.sublevel('example')
const nested = example.sublevel('nested')
console.log(example.db === db) // true
console.log(nested.db === db) // true
Level/level
is an OPEN Open Source Project. This means that:
Individuals making significant and valuable contributions are given commit-access to the project to contribute as they see fit. This project is more like an open wiki than a standard guarded open source project.
See the Contribution Guide for more details.
Author: Level
Source Code: https://github.com/Level/level
License: MIT license
SnowHaze is the first and only iOS browser that truly protects your data! Designed for best possible privacy and security and made to be easily accessible for both beginners and tech-savvy privacy enthusiasts.
SnowHaze offers everything from ad and tracking script blockers to a no-log VPN service.
Our primary focus has been on making SnowHaze the safest and most private browser on iOS. We also take small things seriously. What are these small things you ask? For example, SnowHaze will never establish any connection to any server without your explicit consent. Including our own servers. Visit our homepage to see a complete list of SnowHaze's features.
Get SnowHaze for free on the App Store.
The main purpose of this repository is to allow everybody to check SnowHaze's source code. This helps to find bugs easier and everybody can be assured that only best practices are used.
SnowHaze comes with an extensive database containing
Due to binding contracts, we are currently not allowed to publish the decrypted database. We have added a test database which only contains a few entries for each category. This allows you to test the functionality of the database.
However, for everyday use as a private browser, we still suggest downloading SnowHaze for free from the App Store with the newest database.
An Apple ID, CocoaPods and Xcode 12.3 are needed to build SnowHaze.
The following steps are needed to build SnowHaze:

cd into the respective directory

pod install
In case you want to deploy SnowHaze to a real device:
This is not our working repository and we only push versions to this repository that have made it through Apple's review process and will be released.
Please get in touch with us if you would like to contribute to SnowHaze. We would love to have you on board with us! As this is not our working repository, we cannot accept pull-requests on this repository.
Download Details:
Author: snowhaze
Source Code: https://github.com/snowhaze/SnowHaze-iOS
License: View license
Onion Browser
This is the Onion Browser 2.X branch, based on Endless. The old version of Onion Browser can be found here.
Onion Browser is a free web browser for iPhone and iPad that encrypts and tunnels web traffic through the Tor network. See the official site for more details and App Store links.
The following features are new to Onion Browser, by way of the upstream work on Endless:
Multiple tab support
Search from URL bar
Ability to configure security and privacy settings (script blocking, etc) on a per-site basis
Per-site cookie handling
HTTPS Everywhere support
HTTP Strict Transport Security (HSTS) support, pre-loaded with the Chromium ruleset
Ability to view SSL certificate information, to allow manual verification of SSL certificates
These people helped with translations. Thank you so much, folks!
Download Details:
Author: OnionBrowser
Source Code: https://github.com/OnionBrowser/OnionBrowser
License: View license
Browse like no one’s watching. The new Firefox Focus automatically blocks a wide range of online trackers — from the moment you launch it to the second you leave it. Easily erase your history, passwords and cookies, so you won’t get followed by things like unwanted ads.
Download on the App Store.
We encourage you to participate in this open source project. We love Pull Requests, Bug Reports, ideas, (security) code reviews or any kind of positive contribution. Please read the Community Participation Guidelines.
If you're looking for a good way to get started contributing, check out some good first issues.
We also tag recommended bugs for contributions with help wanted.
This branch works with Xcode 10.0 and supports iOS 11.0+.
This branch is written in Swift 4.2.
Pull requests should be submitted with master as the base branch and should also be written in Swift 4.2.
git clone https://github.com/mozilla-mobile/focus-ios.git
3. Pull in the project dependencies:
cd focus-ios
./checkout.sh
4. Open Blockzilla.xcodeproj
in Xcode.
5. Build the Focus
scheme in Xcode.
⚠️ Development of this project is not currently a high priority. Because of this, we cannot guarantee timely reviews or interactions on this repository. If you would like to contribute to one of our other iOS projects, we recommend checking out Firefox iOS. We greatly appreciate your interest in and contributions towards Focus and look forward to working with you on other projects!
Download Details:
Author: mozilla-mobile
Source Code: https://github.com/mozilla-mobile/focus-ios
License: MPL-2.0 and 2 other licenses found
Download on the App Store.
This branch only works with Xcode 11.5, Swift 5.2 and supports iOS 12.0 and above.
Please make sure you aim your pull requests in the right direction.
For bug fixes and features for a specific release use the version branch.
Want to contribute but don't know where to start? Here is a list of issues that are contributor friendly
brew update
brew install node
pip3 install virtualenv
3. Clone the repository:
git clone https://github.com/mozilla-mobile/firefox-ios
4. Install Node.js dependencies, build user scripts and update content blocker:
cd firefox-ios
sh ./bootstrap.sh
5. Open Client.xcodeproj
in Xcode.
6. Build the Fennec
scheme in Xcode.
User Scripts (JavaScript injected into the WKWebView
) are compiled, concatenated and minified using webpack. User Scripts to be aggregated are placed in the following directories:
/Client
|-- /Frontend
|-- /UserContent
|-- /UserScripts
|-- /AllFrames
| |-- /AtDocumentEnd
| |-- /AtDocumentStart
|-- /MainFrame
|-- /AtDocumentEnd
|-- /AtDocumentStart
This reduces the total possible number of User Scripts down to four. The compiled files produced by concatenating and minifying the User Scripts placed in these folders reside in /Client/Assets
and are named accordingly:
AllFramesAtDocumentEnd.js
AllFramesAtDocumentStart.js
MainFrameAtDocumentEnd.js
MainFrameAtDocumentStart.js
To simplify the build process, these compiled files are checked-in to this repository. When adding or editing User Scripts, these files can be re-compiled with webpack
manually. This requires Node.js to be installed and all required npm
packages can be installed by running npm install
in the root directory of the project. User Scripts can be compiled by running the following npm
command in the root directory of the project:
npm run build
Download Details:
Author: mozilla-mobile
Source Code: https://github.com/mozilla-mobile/firefox-ios
License: MPL-2.0 license
Application security is an important factor for every web application. Web developers use various strategies to improve the security layer of their web applications, such as implementing vulnerability prevention techniques.
Web application security risks generally increase when you start rendering raw HTML and manipulating the DOM with untrusted content. If you render HTML directly from a third-party source and that source is affected by an internet-based threat, attackers can execute JavaScript code on your application users' computers without their consent. These security attacks are known as XSS (cross-site scripting) attacks.
HTML sanitization is a strategy recommended by OWASP for preventing XSS vulnerabilities in web applications. HTML sanitization offers a security mechanism to remove unsafe (and potentially malicious) content from untrusted raw HTML strings before presenting them to the user.
The experimental built-in browser Sanitizer API helps you insert untrusted HTML strings into your web application's DOM in a safe way.
HTML sanitization generally refers to removing potentially malicious JavaScript content from raw HTML strings. There are two different implementations of HTML sanitization: client-side (DOM-level) sanitization and server-side sanitization.
In fact, we need to use both sanitization layers to prevent XSS vulnerabilities. If your database is affected by malicious XSS payloads, the client-side sanitization layer will protect all application users, but if an attacker sends malicious HTML directly from the RESTful API, server-side sanitization will protect the system.
Web developers tend to use the following libraries for client-side/DOM-level sanitization:

htmlparser2-based libraries for Node.js and the browser, which are very popular among React developers because there is a wrapper library especially for React.

These libraries generally parse unsafe HTML using the browser's built-in DOM iterator or a custom HTML parser that excludes unsafe HTML content before using innerHTML.
The HTML Sanitizer API is a browser feature that helps safely add unsafe HTML strings or documents to web pages. It provides methods for sanitizing existing DOM elements and for obtaining new, sanitized DOM elements from a raw HTML string.
The solutions discussed above offer fairly good security for preventing XSS attacks, but there are still several issues. These libraries have to keep their sanitization specifications up to date as browser standards change. For example, if the standard HTML specification introduced a potentially unsafe HTML attribute, the sanitization strategy of these libraries would become unstable.
Library-based sanitization can also be slow because parsing happens twice: first during the library's sanitization process, and again during the browser's DOM parsing process when we inject safe HTML into a web page.
The goal of the HTML Sanitizer API is to mitigate DOM-based XSS attacks with browser-native sanitization features.
A big attraction of native sanitization is that it gives us the setHTML function, which parses and manipulates the DOM directly based on the sanitization rules.
Now that we know the background, features and current development status of the Sanitizer API, let's look at the API specification that is exposed to the JavaScript environment.
The Sanitizer API comes with two main developer interfaces: the Sanitizer class and the Element.setHTML method.
The Sanitizer class and configuration
The Sanitizer class helps create a new HTML sanitizer object for your sanitization requirements. It comes with the following syntax:
new Sanitizer()
new Sanitizer(config)
We can create a new sanitizer object with the default configuration using the parameterless constructor, as shown in the following syntax. The default configuration creates a Sanitizer object with an allowlist-based technique to mitigate known XSS vulnerabilities.
const sanitizer = new Sanitizer();
However, we can customize the Sanitizer object by passing a configuration object, as shown below.
const sanitizer = new Sanitizer(config);
The configuration object has the following definition; note that this configuration definition may change in the future, since the API proposal is still in the web incubator.
{
allowElements: <string Array>,
blockElements: <string Array>,
dropElements: <string Array>,
allowAttributes: <Object>,
dropAttributes: <Object>,
allowCustomElements: <Boolean>,
allowComments: <Boolean>
}
allowElements: a list of elements that the sanitizer should keep.

blockElements: a list of elements that the sanitizer should exclude while keeping their child elements.

dropElements: excludes elements like the blockElements property does, but also removes the entire child tree belonging to the excluded node.

allowAttributes: allowed attributes, as an object mapping attribute names to arrays of element names; for example, 'class': ['div'] allows the class attribute on all div elements, and the asterisk character (*) can be used to allow a specific attribute on any HTML element.

dropAttributes: the opposite of the allowAttributes property.

allowCustomElements: a boolean to allow or disallow custom elements (defaults to false).

allowComments: a boolean to allow or disallow comments (defaults to false).

For example, we can initialize a custom Sanitizer object that allows only basic HTML tags and inline styling, as shown below.
{
'allowElements': [
'div',
'span',
'p',
'em',
'b'
],
'allowAttributes': {
'style': ['*']
}
}
sanitize, sanitizeFor and setHTML
The Sanitizer class helps us initialize an HTML Sanitizer object, but we need to use other methods to actually apply the sanitizer instance in web applications. After we go through the following API specification, I will explain how to use the Sanitizer API in the tutorial section.
The Sanitizer.sanitize method
sanitize(input)
We can use the sanitize method to apply sanitization rules to pre-existing DOM nodes. This function accepts a Document or DocumentFragment object and returns a sanitized DocumentFragment as output.
The Sanitizer.sanitizeFor method
sanitizeFor(element, input)
We can use this method to get a sanitized element node by passing an unsafe HTML string. In other words, it returns a DOM node of type element after parsing the input string according to the sanitization rules.
The Element.setHTML method
setHTML(input, sanitizer)
This method is a safer, more settled version of the Element.innerHTML property. The innerHTML property accepts any HTML string and is prone to XSS payloads. The setHTML method, by contrast, accepts a sanitizer instance and sanitizes potentially harmful HTML content before injecting new nodes into the DOM.
You can use the early implementations of the Sanitizer API in Google Chrome/Chromium ≥ 93 and Firefox ≥ 83. These early implementations are generally not enabled by default in either browser, so we first need to enable them by changing browser settings.
If you are using Chrome/Chromium, you can enable the #sanitizer-api flag by navigating to the chrome://flags URL, as follows.
If you are using Mozilla Firefox, you can enable this feature via about:config, as follows.
In this tutorial, I will use Mozilla Firefox 96 to experiment with the upcoming Sanitizer API examples.
Let's try the Sanitizer API with practical examples. I will use the JsFiddle online editor to demonstrate these examples, but you can also try them in your local development environment by creating an HTML file.
Let's start with the basics. How can we generate a safer DOM node from an unsafe HTML string with the Sanitizer API? Look at the following example code.
<div id="container"></div>
<script>
// unsafe HTML string
const unsafeHTML = `<p onclick="alert('Hello')">Hello</p>`;
// Find the container node
const container = document.getElementById('container');
// Create a sanitizer object with the default config
const sanitizer = new Sanitizer();
// Inject new DOM nodes in a safer way
container.setHTML(unsafeHTML, sanitizer);
</script>
Here, we used the setHTML setter instead of the innerHTML property. If you inspect the DOM after running the above code, you can see that the setHTML method automatically excluded onclick before rendering the child elements into the container node.
You can verify the unsafety of the innerHTML property using the following code.
<div id="container"></div>
<script>
// unsafe HTML string
const unsafeHTML = `<p onclick="alert('Hello')">Hello</p>`;
// Find the container node
const container = document.getElementById('container');
// Inject new DOM nodes
container.innerHTML = unsafeHTML;
</script>
The above code injects new DOM nodes with the unsafe event handlers, as shown below.
Demonstration of security issues with the innerHTML property.
You can get the sanitized raw HTML string by reading the innerHTML property of the sanitized DOM element, but doing so somewhat defeats the main goal behind the Sanitizer API, which is to inject DOM content safely, not to use the Sanitizer API as just another sanitization library.
Using sanitizeFor
Earlier, we used the setHTML method to render an unsafe HTML string immediately as part of the sanitization process, but in some scenarios we still need to render new elements later, after sanitization.
For example, web developers often need to render unsafe HTML strings from the internet into a WYSIWYG editor after its rendering process. As an optimized, error-free solution, we can first fetch the content, apply sanitization, and then render the sanitized nodes once the editor component is fully rendered.
We can sanitize and temporarily store the result as a specific DOM node with the sanitizeFor method. Look at the following example.
<div id="container">Loading...</div>
<script>
// unsafe HTML string
const unsafeHTML = `<p onclick="alert('Hello')">Hello</p>`;
// Create a sanitizer object with the default config
const sanitizer = new Sanitizer();
// Hold sanitized node
const sanitizedDiv = sanitizer.sanitizeFor('div', unsafeHTML);
// Inject nodes after sometime
setTimeout(() => {
// Find the container node
const container = document.getElementById('container');
// Inject the sanitized DOM node
container.replaceChildren(sanitizedDiv);
}, 1000);
</script>
The above code sanitizes an unsafe HTML string and saves the sanitized DOM node in a constant. Later, it injects the sanitized DOM node into the relevant container node using the replaceChildren method. Note that we used a one-second delay intentionally to simulate network and rendering latency.
Demonstration of how to use the sanitizeFor function.
Iframes are useful for adding third-party widgets and web pages to our web applications, but they typically come with some security concerns, since we load web content from other origins (often third-party sources). So it is undoubtedly safer to sanitize web content loaded through iframes.
Earlier, we used a string as the input for the Sanitizer API methods, but now we need to sanitize pre-existing DOM nodes. To do this, we need a function that accepts HTML document fragments or documents.
Remember the sanitize method? Look at the following example.
<iframe id="webpage"></iframe> <!-- Use a URL with cross-origin policy -->
<br/>
<button onclick="sanitize()">Sanitize</button>
<script>
function sanitize() {
// Create a sanitizer object with the default config
const sanitizer = new Sanitizer();
// Find the iframe node
const iframe = document.getElementById('webpage');
// Sanitize the iframe's document node
const sanitizedFrameNodes = sanitizer.sanitize(iframe.contentWindow.document);
iframe.replaceChildren(sanitizedFrameNodes);
}
</script>
If we create a new Sanitizer class instance without passing a configuration object, the API will use a default configuration that mitigates known XSS vulnerabilities. But you can customize the sanitization logic by passing a configuration object.
Suppose you need to allow basic HTML tags and inline styles for a dynamic div element. We can implement a sanitizer for this requirement using a custom configuration, as shown below.
<div id="container"></div>
<script>
// unsafe HTML string
const unsafeHTML = `<div onclick="alert('Hello')">
<p><b>Hello Sanitizer API</b></p>
<p><em onmousemove="window.location.reload()">Test</em></p>
<img src="image.png" alt="Test"/>
</div>`;
// Find the container node
const container = document.getElementById('container');
// Create a sanitizer object with a custom config
const sanitizer = new Sanitizer(
{
'allowElements': [
'div',
'span',
'p',
'em',
'b'
],
'allowAttributes': {
'style': ['*']
}
});
// Inject new DOM nodes in a safer way
const sanitizedDiv = sanitizer.sanitizeFor('div', unsafeHTML);
container.replaceChildren(sanitizedDiv);
</script>
Note that we can also achieve the same result using the setHTML function, but I used replaceChildren instead, since Firefox's experimental setHTML implementation kept the img tag even after sanitization.
Be careful when you use custom sanitizer configurations. You have full control to allow any element and attribute when you customize configurations; for example, the following sanitizer configuration makes your web application prone to XSS, since it allows the onclick event handler.
{
'allowElements': ['div', 'p', 'em'],
'allowAttributes': {
'onclick': ['*']
}
}
Beware of incorrect Sanitizer API configurations!
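To build intuition for why that configuration is dangerous, here is an illustrative toy filter in plain JavaScript. This is NOT the real Sanitizer API, which parses HTML properly rather than using regular expressions; the hypothetical toyFilterAttributes function only shows how an allow-listed onclick attribute lets an XSS payload survive.

```javascript
// Toy illustration only (NOT the real Sanitizer API): a naive regex-based
// attribute filter that strips on* event-handler attributes unless they
// appear in the allow list, mimicking the effect of 'allowAttributes'.
function toyFilterAttributes(html, allowedAttributes) {
  return html.replace(/\s(on\w+)="[^"]*"/g, (match, attrName) =>
    allowedAttributes.includes(attrName) ? match : ''
  );
}

const unsafe = '<p onclick="alert(1)" onmouseover="steal()">Hi</p>';

// Safe default: no event handlers allowed, both are stripped
console.log(toyFilterAttributes(unsafe, []));
// → <p>Hi</p>

// Dangerous config: onclick is allow-listed, so the payload survives
console.log(toyFilterAttributes(unsafe, ['onclick']));
// → <p onclick="alert(1)">Hi</p>
```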
Browser developers and security engineers typically submit new browser API proposals to the W3C organization for general approval. After the incubation and approval period, the W3C adds the particular specification to the official web standard.
Several contributors started drafting the Sanitizer API proposal in 2016 in a GitHub repository. In late 2021, the API proposal reached the draft stage in the official web incubator. Today, the web developer community improves the specification by suggesting various ideas and strives to make it an official web standard.
Additionally, Google Chrome/Chromium ≥ 93 and Firefox ≥ 83 provide early implementations of the Sanitizer API for web developers who are interested in trying them out now. These early implementations are not stable and are still subject to change. You can see the full browser support details on CanIUse.
However, this browser feature only works in secure contexts. In other words, you can only use it over HTTPS connections. You can still use the Sanitizer API in your local development environment, though, because the standard secure-context policy identifies localhost (or 127.0.0.1) as a secure context.
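Since the API is experimental and limited to secure contexts, it is worth feature-detecting before relying on it. The canUseSanitizer helper below is a hypothetical sketch that combines the standard window.isSecureContext check with a simple presence test for the Sanitizer global.

```javascript
// Hypothetical feature-detection sketch: verify both a secure context
// and the experimental Sanitizer API before using either.
function canUseSanitizer() {
  const hasWindow = typeof window !== 'undefined';
  // Secure contexts include HTTPS pages and localhost
  const secure = hasWindow && window.isSecureContext === true;
  // The experimental API exposes a Sanitizer constructor on window
  const supported = hasWindow && 'Sanitizer' in window;
  return secure && supported;
}

console.log(canUseSanitizer()); // false outside a browser (e.g., Node.js)
```

An application could fall back to a library such as sanitize-html when this check returns false.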
In this tutorial, we learned how to use the experimental Sanitizer API through several examples, starting by enabling it from the browser's experimental features list. Although Google Chrome/Chromium and Mozilla Firefox offer early implementations of this API specification, it is still in the W3C incubator program. In other words, the proposal's editors may change the API specification based on community suggestions and known security vulnerabilities. If you have any suggestions to improve the Sanitizer API's structure, you can submit an issue to the Sanitizer API incubator repository on GitHub.
The Sanitizer API promises to help frontend and framework developers. For example, React developers often tend to use the sanitize-html library and React's dangerouslySetInnerHTML prop to render unsafe HTML strings into the DOM.
However, if the experimental Sanitizer API becomes a browser standard, React will be able to offer a developer-friendly method (like setHTML) to sanitize and inject arbitrary HTML strings without affecting the bundle size.
Frameworks that ship custom HTML sanitization implementations, such as Angular, can reduce their bundle size by using the native sanitization API. However, as mentioned earlier, the Sanitizer API is still experimental, so don't use it in production systems until it becomes stable and is approved by the W3C.
You can experiment further with the Sanitizer API in the online HTML Sanitizer API playground.
This story was originally published at https://blog.logrocket.com/what-you-need-know-inbuilt-browser-html-sanitization/