Dylan Iqbal

Micromark 3.0: A Small Compliant Markdown Parser

micromark

The smallest CommonMark compliant markdown parser with positional info and concrete tokens.

Feature highlights

  • compliant: 100% to CommonMark (and 100% to GFM with a plugin)
  • small: the smallest CommonMark parser that exists
  • safe: safe by default, no XSS from embedded HTML or dangerous protocols
  • robust: ~1.8k tests, 100% code coverage, fuzz testing

When to use this

  • If you just want to turn markdown into HTML (w/ maybe a few extensions)
  • If you want to do really complex things with markdown

See § Comparison for more info

Intro

micromark is a long-awaited markdown parser. It uses a state machine to parse the entirety of markdown into concrete tokens. It’s the smallest 100% CommonMark compliant markdown parser in JavaScript. It was made to replace the internals of remark-parse, the most popular markdown parser. Its API compiles to HTML, but its parts are made to be used separately, so as to generate syntax trees (mdast-util-from-markdown) or compile to other output formats.

Install

npm:

npm install micromark

Use

Typical use (buffering):

import {micromark} from 'micromark'

console.log(micromark('## Hello, *world*!'))

Yields:

<h2>Hello, <em>world</em>!</h2>

You can pass extensions (in this case micromark-extension-gfm):

import {micromark} from 'micromark'
import {gfm, gfmHtml} from 'micromark-extension-gfm'

const value = '* [x] contact@example.com ~~strikethrough~~'

const result = micromark(value, {
  extensions: [gfm()],
  htmlExtensions: [gfmHtml]
})

console.log(result)

Yields:

<ul>
<li><input checked="" disabled="" type="checkbox"> <a href="mailto:contact@example.com">contact@example.com</a> <del>strikethrough</del></li>
</ul>

Streaming interface:

import fs from 'fs'
import {stream} from 'micromark/stream'

fs.createReadStream('example.md')
  .on('error', handleError)
  .pipe(stream())
  .pipe(process.stdout)

function handleError(error) {
  // Handle your error here!
  throw error
}

API

micromark core has two entries in its export map: micromark and micromark/stream.

micromark exports the following identifier: micromark. micromark/stream exports the following identifier: stream. There are no default exports.

The export map supports the endorsed development condition. Run node --conditions development module.js to get instrumented dev code. Without this condition, production code is loaded. See § Size & debug for more info.

micromark(value[, encoding][, options])

Compile markdown to HTML.

Parameters
value

Markdown to parse (string or Buffer).

encoding

Character encoding to understand value as when it’s a Buffer (string, default: 'utf8').

options.defaultLineEnding

Value to use for line endings not in value (string, default: first line ending or '\n').

Generally, micromark copies line endings ('\r', '\n', '\r\n') in the markdown document over to the compiled HTML. In some cases, such as > a, CommonMark requires that extra line endings are added: <blockquote>\n<p>a</p>\n</blockquote>.
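
For example (a sketch; the line endings are shown escaped for readability, the actual output contains real CRLFs):

import {micromark} from 'micromark'

// '> a' contains no line ending itself, so the added ones use the default:
console.log(micromark('> a', {defaultLineEnding: '\r\n'}))

Yields (escaped):

<blockquote>\r\n<p>a</p>\r\n</blockquote>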

options.allowDangerousHtml

Whether to allow embedded HTML (boolean, default: false). See § Security.

options.allowDangerousProtocol

Whether to allow potentially dangerous protocols in links and images (boolean, default: false). URLs relative to the current protocol are always allowed (such as, image.jpg). For links, the allowed protocols are http, https, irc, ircs, mailto, and xmpp. For images, the allowed protocols are http and https. See § Security.
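
To illustrate (a sketch; unsafe URLs are emptied by default, though the exact output may differ slightly between versions):

import {micromark} from 'micromark'

console.log(micromark('[click](javascript:alert(1))'))
// something like: <p><a href="">click</a></p>

console.log(micromark('[click](javascript:alert(1))', {allowDangerousProtocol: true}))
// <p><a href="javascript:alert(1)">click</a></p>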

options.extensions

Array of syntax extensions (Array.<SyntaxExtension>, default: []). See § Extensions.

options.htmlExtensions

Array of HTML extensions (Array.<HtmlExtension>, default: []). See § Extensions.

Returns

string — Compiled HTML.

stream(options?)

Streaming interface of micromark. Compiles markdown to HTML. options are the same as the buffering API above. Note that some of the work to parse markdown can be done streaming, but in the end buffering is required.

micromark does not handle errors for you, so you must handle errors on whatever streams you pipe into it. As markdown does not know errors, micromark itself does not emit errors.

Extensions

micromark supports extensions. There are two types of extensions for micromark: SyntaxExtension, which change how markdown is parsed, and HtmlExtension, which change how it compiles. They can be passed in options.extensions or options.htmlExtensions, respectively.

As a user of extensions, refer to each extension’s readme for more on how to use them. As a (potential) author of extensions, refer to § Extending markdown and § Creating a micromark extension.

List of extensions

SyntaxExtension

A syntax extension is an object whose fields are typically the names of hooks, referring to where constructs “hook” into. The values at those fields are objects that map character codes to constructs.

The built in constructs are an example. See it and existing extensions for inspiration.
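
As a sketch, the shape looks like this (the names are illustrative; see the variables case below for a working example):

const mySyntaxExtension = {
  // The hook: where the construct is tried (`text`, `flow`, `string`, …)
  text: {
    // …keyed by the character code that can start it (123 is `{`):
    123: {
      name: 'myConstruct',
      tokenize(effects, ok, nok) {
        // A stub state machine that never matches:
        return (code) => nok(code)
      }
    }
  }
}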

HtmlExtension

An HTML extension is an object whose fields are typically enter or exit (reflecting whether a token is entered or exited). The values at those fields are objects that map token names to handlers.

See existing extensions for inspiration.
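
Again as a sketch (the token name is illustrative; this.tag is one of the compile context helpers, next to this.raw, this.buffer, and this.resume used later in this readme):

const myHtmlExtension = {
  enter: {
    myToken() {
      this.tag('<span>') // called when a `myToken` token starts
    }
  },
  exit: {
    myToken() {
      this.tag('</span>') // called when it ends
    }
  }
}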

Extending markdown

micromark lets you change markdown syntax, yes, but there are alternatives. The alternatives are often better.

Over the years, many micromark and remark users have asked about their unique goals for markdown. Some exemplary goals are:

  1. I want to add rel="nofollow" to external links
  2. I want to add links from headings to themselves
  3. I want line breaks in paragraphs to become hard breaks
  4. I want to support embedded music sheets
  5. I want authors to add arbitrary attributes
  6. I want authors to mark certain blocks with meaning, such as tip, warning, etc
  7. I want to combine markdown with JS(X)
  8. I want to support our legacy flavor of markdown-like syntax

These can be solved in different ways, and which solution is best is both subjective and dependent on unique needs. Often, there is already a solution in the form of an existing remark or rehype plugin. Respectively, their solutions are:

  1. remark-external-links
  2. rehype-autolink-headings
  3. remark-breaks
  4. custom plugin similar to rehype-katex but integrating abcjs
  5. either remark-directive with a custom plugin, or rehype-attr
  6. remark-directive combined with a custom plugin
  7. combining the existing micromark MDX extensions however you please, such as done by mdx-js/mdx or xdm
  8. Writing a micromark extension

Looking at these from a higher level, they can be categorized:

  • Changing the output by transforming syntax trees (1 and 2)

    This category is nice as the format remains plain markdown that authors are already familiar with and which will work with existing tools and platforms.

    Implementations will deal with the syntax tree (mdast) and the ecosystems remark and rehype. There are many existing utilities for working with that tree. Many remark plugins and rehype plugins also exist.

  • Using and abusing markdown to add new meaning (3, 4, potentially 5)

    This category is similar to Changing the output by transforming syntax trees, but adds a new meaning to certain things which already have semantics in markdown.

    Some examples in pseudo code:

      • **A list item with the first paragraph bold**, followed by more
        content, is turned into `<dl>` / `<dt>` / `<dd>` elements
      • The title attribute on links or images is
        [overloaded](/url 'rel:nofollow') with a new meaning
      • Fenced code can include data (say, CSV turned into a graph), marked by
        the language name or an extra word after it:

        ```csv
        fenced,code,can,include,data
        which,is,turned,into,a,graph
        ```

      • HTML, especially comments, could be used as markers

    
    
  • Arbitrary extension mechanism (potentially 5; 6)

    This category is nice when content should contain embedded “components”. Often this means it’s required for authors to have some programming experience. There are three good ways to solve arbitrary extensions.

    HTML: Markdown already has an arbitrary extension syntax. It works in most places and authors are already familiar with the syntax, but it’s reasonably hard to implement securely. Certain platforms will remove HTML completely, others sanitize it to varying degrees. HTML also supports custom elements. These could be used and enhanced by client side JavaScript or enhanced when transforming the syntax tree.

    Generic directives: although a proposal and not supported on most platforms, directives do work with many tools already. They’re not the easiest to author compared to, say, a heading, but sometimes that’s okay. They do have potential: they nicely solve the need for an infinite number of potential extensions to markdown in a single markdown-esque way.

    MDX also adds support for components by swapping HTML out for JS(X). JSX is an extension to JavaScript, so MDX is something along the lines of literate programming. This does require knowledge of React (or Vue) and JavaScript, excluding some authors.

  • Extending markdown syntax (7 and 8)

    Extending the syntax of markdown means:

    • Authors won’t be familiar with the syntax
    • Content won’t work in other places (such as on GitHub)
    • It defeats the purpose of markdown: being simple to author and looking like what it means

    …and it’s hard to do as it requires some in-depth knowledge of JavaScript and parsing. But it’s possible and in certain cases very powerful.

Creating a micromark extension

This section shows how to create an extension for micromark that parses “variables” (a way to render some data) and another that turns a default construct off.

Stuck? See support.md.

Prerequisites
  • You should possess an intermediate to high understanding of JavaScript: it’s going to get a bit complex
  • Read the readme of unified (until you hit the API section) to better understand where micromark fits
  • Read the § Architecture section to understand how micromark works
  • Read the § Extending markdown section to understand whether it’s a good idea to extend the syntax of markdown
Extension basics

micromark supports two types of extensions. Syntax extensions change how markdown is parsed. HTML extensions change how it compiles.

HTML extensions are not always needed, as micromark is often used through mdast-util-from-markdown to parse to a markdown syntax tree. So instead of an HTML extension, a from-markdown utility is needed. Then a mdast-util-to-markdown utility, which is responsible for serializing syntax trees to markdown, is also needed.

When developing something for internal use only, you can pick and choose which parts you need. When open sourcing an extension, it should probably contain four parts: a syntax extension, an HTML extension, a from-markdown utility, and a to-markdown utility.

On to our first case!

Case: variables

Let’s first outline what we want to make: render some data, similar to how Liquid and the like work, in our markdown. It could look like this:

Hello, {planet}!

Turned into:

<p>Hello, Venus!</p>

An opening curly brace, followed by one or more characters, and then a closing brace. We’ll then look up planet in some object and replace the variable with its corresponding value, to get something like Venus out.

It looks simple enough, but with markdown there are often a couple more things to think about. For this case, I can see the following:

  • Is there a “block” version too?
  • Are spaces allowed? Line endings? Should initial and final white space be ignored?
  • Balanced nested braces? Superfluous ones such as {{planet}} or meaningful ones such as {a {pla} net}?
  • Character escapes ({pla\}net}) and character references ({pla&#x7d;net})?

To keep things as simple as possible, let’s not support a block syntax, see spaces as special, support line endings, or support nested braces. But to learn interesting things, we will support character escapes and -references.

Note that this particular case is already solved quite nicely by micromark-extension-mdx-expression. It’s a bit more powerful and does more things, but it can be used to solve this case and otherwise serve as inspiration.

Setup

Create a new folder, enter it, and set up a new package:

mkdir example
cd example
npm init -y

In this example we’ll use ESM, so add type: 'module' to package.json:

@@ -2,6 +2,7 @@
   "name": "example",
   "version": "1.0.0",
   "description": "",
+  "type": "module",
   "main": "index.js",
   "scripts": {
     "test": "echo \"Error: no test specified\" && exit 1"

Add a markdown file, example.md, with the following text:

Hello, {planet}!

{pla\}net} and {pla&#x7d;net}.

To check if our extension works, add an example.js module, with the following code:

import {promises as fs} from 'node:fs'
import {micromark} from 'micromark'
import {variables} from './index.js'

main()

async function main() {
  const buf = await fs.readFile('example.md')
  const out = micromark(buf, {extensions: [variables]})
  console.log(out)
}

While working on the extension, run node example to see whether things work. Feel free to add more examples of the variables syntax in example.md if needed.

Our extension doesn’t work yet, for one because micromark is not installed:

npm install micromark --save-dev

…and we need to write our extension. Let’s do that in index.js:

export const variables = {}

Although our extension doesn’t do anything, running node example now somewhat works!
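
At this point nothing is hooked in yet, so micromark parses the braces as plain text (and resolves the character escape and reference). Expect output along these lines:

<p>Hello, {planet}!</p>
<p>{pla}net} and {pla}net}.</p>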

Syntax extension

Much in micromark is based on character codes (see § Preprocess). For this extension, the relevant codes are:

  • -5 — M-0005 CARRIAGE RETURN (CR)
  • -4 — M-0004 LINE FEED (LF)
  • -3 — M-0003 CARRIAGE RETURN LINE FEED (CRLF)
  • null — EOF (end of the stream)
  • 92 — U+005C BACKSLASH (\)
  • 123 — U+007B LEFT CURLY BRACE ({)
  • 125 — U+007D RIGHT CURLY BRACE (})

Also relevant are the content types (see § Content types). This extension is a text construct, as it’s parsed alongside links and such. The content inside it (between the braces) is string, to support character escapes and -references.

Let’s write our extension. Add the following code to index.js:

const variableConstruct = {name: 'variable', tokenize: variableTokenize}

export const variables = {text: {123: variableConstruct}}

function variableTokenize(effects, ok, nok) {
  return start

  function start(code) {
    console.log('start:', effects, code)
    return nok(code)
  }
}

The above code exports an extension with the identifier variables. The extension defines a text construct for the character code 123. The construct has a name, so that it can be turned off (optional, see next case), and it has a tokenize function that sets up a state machine, which receives effects and the ok and nok states. ok can be used when successful, nok when not, and so constructs are a bit similar to how promises can resolve or reject. tokenize returns the initial state, start, which itself receives the current character code, prints some debugging information, and then returns a call to nok.

Ensure that things work by running node example and see what it prints.

Now we need to define our states and figure out how variables work. Some people prefer sketching a diagram of the flow. I often prefer writing it down in pseudo-code prose. I’ve also found that test driven development works well, where I write unit tests for how it should work, then write the state machine, and finally use a code coverage tool to ensure I’ve thought of everything.

In prose, what we have to code looks like this:

  • start: Receive 123 as code, enter a token for the whole (let’s call it variable), enter a token for the marker (variableMarker), consume code, exit the marker token, enter a token for the contents (variableString), switch to begin
  • begin: If code is 125, reconsume in nok. Else, reconsume in inside
  • inside: If code is -5, -4, -3, or null, reconsume in nok. Else, if code is 125, exit the string token, enter a variableMarker, consume code, exit the marker token, exit the variable token, and switch to ok. Else, consume, and remain in inside.

That should be it! Replace variableTokenize with the following to include the needed states:

function variableTokenize(effects, ok, nok) {
  return start

  function start(code) {
    effects.enter('variable')
    effects.enter('variableMarker')
    effects.consume(code)
    effects.exit('variableMarker')
    effects.enter('variableString')
    return begin
  }

  function begin(code) {
    return code === 125 ? nok(code) : inside(code)
  }

  function inside(code) {
    if (code === -5 || code === -4 || code === -3 || code === null) {
      return nok(code)
    }

    if (code === 125) {
      effects.exit('variableString')
      effects.enter('variableMarker')
      effects.consume(code)
      effects.exit('variableMarker')
      effects.exit('variable')
      return ok
    }

    effects.consume(code)
    return inside
  }
}

Run node example again and see what it prints! The HTML compiler ignores things it doesn’t know, so variables are now removed.

We have our first syntax extension, and it sort of works, but we don’t handle character escapes and -references yet. We need to do two things to make that work: a) skip over \\ and \} in our algorithm, b) tell micromark to parse them.

Change the code in index.js to support escapes like so:

@@ -23,6 +23,11 @@ function variableTokenize(effects, ok, nok) {
       return nok(code)
     }

+    if (code === 92) {
+      effects.consume(code)
+      return insideEscape
+    }
+
     if (code === 125) {
       effects.exit('variableString')
       effects.enter('variableMarker')
@@ -35,4 +40,13 @@ function variableTokenize(effects, ok, nok) {
     effects.consume(code)
     return inside
   }
+
+  function insideEscape(code) {
+    if (code === 92 || code === 125) {
+      effects.consume(code)
+      return inside
+    }
+
+    return inside(code)
+  }
 }

Finally add support for character references and character escapes between braces by adding a special token that defines a content type:

@@ -11,6 +11,7 @@ function variableTokenize(effects, ok, nok) {
     effects.consume(code)
     effects.exit('variableMarker')
     effects.enter('variableString')
+    effects.enter('chunkString', {contentType: 'string'})
     return begin
   }

@@ -29,6 +30,7 @@ function variableTokenize(effects, ok, nok) {
     }

     if (code === 125) {
+      effects.exit('chunkString')
       effects.exit('variableString')
       effects.enter('variableMarker')
       effects.consume(code)

Tokens with a contentType will be replaced by postprocess (see § Postprocess) by the tokens belonging to that content type.

HTML extension

Up next is an HTML extension to replace variables with data. Change example.js to use one like so:

@@ -1,11 +1,12 @@
 import {promises as fs} from 'node:fs'
 import {micromark} from 'micromark'
-import {variables} from './index.js'
+import {variables, variablesHtml} from './index.js'

 main()

 async function main() {
   const buf = await fs.readFile('example.md')
-  const out = micromark(buf, {extensions: [variables]})
+  const html = variablesHtml({planet: '1', 'pla}net': '2'})
+  const out = micromark(buf, {extensions: [variables], htmlExtensions: [html]})
   console.log(out)
 }

And add the HTML extension, variablesHtml, to index.js like so:

@@ -52,3 +52,19 @@ function variableTokenize(effects, ok, nok) {
     return inside(code)
   }
 }
+
+export function variablesHtml(data = {}) {
+  return {
+    enter: {variableString: enterVariableString},
+    exit: {variableString: exitVariableString},
+  }
+
+  function enterVariableString() {
+    this.buffer()
+  }
+
+  function exitVariableString() {
+    var id = this.resume()
+    if (id in data) {
+      this.raw(this.encode(data[id]))
+    }
+  }
+}

variablesHtml is a function that receives an object mapping “variables” to strings and returns an HTML extension. The extension hooks two functions to variableString, one when it starts, the other when it ends. We don’t need to do anything to handle the other tokens as they’re already ignored by default. enterVariableString calls buffer, which is a function that “stashes” what would otherwise be emitted. exitVariableString calls resume, which is the inverse of buffer and returns the stashed value. If the variable is defined, we ensure it’s made safe (with this.encode) and finally output that (with this.raw).
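
Given the data passed in example.js ({planet: '1', 'pla}net': '2'}), running node example should now print something like:

<p>Hello, 1!</p>
<p>2 and 2.</p>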

Further exercises

It works! We’re done! Of course, it can be better, such as with the following potential features:

  • Add support for empty variables
  • Add support for spaces between markers and string
  • Add support for line endings in variables
  • Add support for nested braces
  • Add support for blocks
  • Add warnings on undefined variables
  • Use micromark-build, and use assert, debug, and micromark-util-symbol (see § Size & debug)
  • Add mdast-util-from-markdown and mdast-util-to-markdown utilities to parse and serialize the AST
Case: turn off constructs

Sometimes it’s needed to turn a default construct off. That’s possible through a syntax extension. Note that not everything can be turned off (such as paragraphs) and even if it’s possible to turn something off, it could break micromark (such as character escapes).

To disable constructs, refer to them by name in an array at the disable.null field of an extension:

import {micromark} from 'micromark'

const extension = {disable: {null: ['codeIndented']}}

console.log(micromark('\ta', {extensions: [extension]}))

Yields:

<p>a</p>
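
The same mechanism should work for any construct that exposes a name, including ones from extensions (a sketch, assuming the variables extension from the previous case):

import {micromark} from 'micromark'
import {variables} from './index.js'

const noVariables = {disable: {null: ['variable']}}

// With the construct disabled, the braces stay plain text:
console.log(micromark('{planet}', {extensions: [variables, noVariables]}))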

Architecture

micromark is maintained as a monorepo. Many of its internals, which are used in micromark (core) but also useful for developers of extensions or integrations, are available as separate modules. Each module maintained here is available in packages/.

Overview

The naming scheme in packages/ is as follows:

  • micromark-build — Small CLI to build dev code into production code
  • micromark-core-commonmark — CommonMark constructs used in micromark
  • micromark-factory-* — Reusable subroutines used to parse parts of constructs
  • micromark-util-* — Reusable helpers often needed when parsing markdown
  • micromark — Core module

micromark has two interfaces: buffering (maintained in micromark/dev/index.js) and streaming (maintained in micromark/dev/stream.js). The former takes all input at once whereas the latter uses a Node.js stream to accept input in pieces. They thinly wrap how data flows through micromark:

                                            micromark
+------------------------------------------------------------------------------------------------+
|            +------------+         +-------+         +-------------+         +---------+        |
| -markdown->+ preprocess +-chunks->+ parse +-events->+ postprocess +-events->+ compile +-html-> |
|            +------------+         +-------+         +-------------+         +---------+        |
+------------------------------------------------------------------------------------------------+

Preprocess

The preprocessor (micromark/dev/lib/preprocess.js) takes markdown and turns it into chunks.

A chunk is either a character code or a slice of a buffer in the form of a string. Chunks are used because strings are more efficient storage than character codes, but limited in what they can represent. For example, the input ab\ncd is represented as ['ab', -4, 'cd'] in chunks.

A character code is often the same as what String#charCodeAt() yields but micromark adds meaning to certain other values.

In micromark, the actual character U+0009 CHARACTER TABULATION (HT) is replaced by one M-0002 HORIZONTAL TAB (HT) and between 0 and 3 M-0001 VIRTUAL SPACE (VS) characters, depending on the column at which the tab occurred. For example, the input \ta is represented as [-2, -1, -1, -1, 97] and a\tb as [97, -2, -1, -1, 98] in character codes.

The characters U+000A LINE FEED (LF) and U+000D CARRIAGE RETURN (CR) are replaced by virtual characters depending on whether they occur together: M-0003 CARRIAGE RETURN LINE FEED (CRLF), M-0004 LINE FEED (LF), and M-0005 CARRIAGE RETURN (CR). For example, the input a\r\nb\nc\rd is represented as [97, -5, 98, -4, 99, -3, 100] in character codes.

The 0 (U+0000 NUL) character code is replaced by U+FFFD REPLACEMENT CHARACTER (�).

The null code represents the end of the input stream (called eof for end of file).

Parse

The parser (micromark/dev/lib/parse.js) takes chunks and turns them into events.

An event is the start or end of a token amongst other events. Tokens can “contain” other tokens, even though they are stored in a flat list, by entering before and exiting after them.

A token is a span of one or more codes. Tokens are most of what micromark produces: the built in HTML compiler or other tools can turn them into different things. Tokens are essentially names attached to a slice, such as lineEndingBlank for certain line endings, or codeFenced for a whole fenced code.
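
For illustration, simplified (real tokens carry more fields):

// An event is a tuple: ['enter' | 'exit', token, context]
// A token is a typed span with positional info, roughly:
// {
//   type: 'codeFenced',
//   start: {line: 1, column: 1, offset: 0},
//   end: {line: 3, column: 4, offset: 13}
// }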

Sometimes, more info is attached to tokens, such as _open and _close by attention (strong, emphasis) to signal whether the sequence can open or close an attention run. These fields have to do with how the parser works, which is complex and not always pretty.

Certain fields (previous, next, and contentType) are used in many cases: linked tokens for subcontent. Linked tokens are used because outer constructs are parsed first. Take for example:

- *a
  b*.

  1. The list marker and the space after it are parsed first
  2. The rest of the line is a chunkFlow token
  3. The two spaces on the second line are a linePrefix of the list
  4. The rest of the line is another chunkFlow token

The two chunkFlow tokens are linked together and the chunks they span are passed through the flow tokenizer. There the chunks are seen as chunkContent and passed through the content tokenizer. There the chunks are seen as a paragraph, whose content is chunkText, and passed through the text tokenizer. Finally, the attention (emphasis) and data (“raw” characters) are parsed there, and we’re done!

Content types

The parser starts out with a document tokenizer. Document is the top-most content type, which includes containers such as block quotes and lists. Containers in markdown come from the margin and include more constructs on the lines that define them.

Flow represents the sections (block constructs such as ATX and setext headings, HTML, indented and fenced code, thematic breaks), which like document are also parsed per line. An example is HTML, which has a certain starting condition (such as <script> on its own line), then continues for a while, until an end condition is found (such as </script>). If that line with an end condition is never found, that flow goes until the end.

Content is zero or more definitions, and then zero or one paragraph. It’s a weird one, and needed to make certain edge cases around definitions spec compliant. Definitions are unlike other things in markdown, in that they behave like text in that they can contain arbitrary line endings, but have to end at a line ending. If they end in something else, the whole definition instead is seen as a paragraph.

The content in markdown first needs to be parsed up to this level to figure out which things are defined, for the whole document, before continuing on with text, as whether a link or image reference forms or not depends on whether it’s defined. This unfortunately prevents a true streaming markdown parser.
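
A quick way to see this document-wide resolution (a sketch; the definition comes after the reference, yet the link still forms):

import {micromark} from 'micromark'

console.log(micromark('[a]\n\n[a]: /url'))

Yields:

<p><a href="/url">a</a></p>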

Text contains phrasing content (rich inline text: autolinks, character escapes and -references, code, hard breaks, HTML, images, links, emphasis, strong).

String is a limited text-like content type which only allows character references and character escapes. It exists in things such as identifiers (media references, definitions), titles, and URLs.

Constructs

Constructs are the things that make up markdown. Some examples are lists, thematic breaks, or character references.

Note that, as a general rule of thumb, markdown is really weird. It’s essentially made up of edge cases rather than logical rules. When browsing the built in constructs, or venturing to build your own, you’ll find confusing new things and run into complex custom hooks.

One more reasonable construct is the thematic break (see code). It’s an object that defines a name and a tokenize function. Most of what constructs do is defined in their required tokenize function, which sets up a state machine to handle character codes streaming in.
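
In skeleton form, a construct looks like this (illustrative, mirroring the variables case above):

const myConstruct = {
  name: 'myConstruct',
  tokenize(effects, ok, nok) {
    return start

    function start(code) {
      // Inspect `code`, enter/exit tokens and consume codes via `effects`,
      // then return `ok` on a match or `nok` to backtrack.
      return nok(code)
    }
  }
}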

Postprocess

The postprocessor (micromark/dev/lib/postprocess.js) is a small step that takes events, ensures all their nested content is parsed, and returns the modified events.

Compile

The compiler (micromark/dev/lib/compile.js) takes events and turns them into HTML. While micromark was created mostly to advance markdown parsing irrespective of compiling to HTML, the common case of doing so is built in. A built in HTML compiler is useful because it allows us to check for compliancy to CommonMark, the de facto norm of markdown, specified in roughly 650 input/output cases. The parsing parts can still be used separately to build ASTs, CSTs, or many other output formats.

The compiler has an interface that accepts lists of events instead of the whole at once, but because markdown can’t truly stream, events are buffered before compiling and outputting the final result.

Examples

GitHub flavored markdown (GFM)

To support GFM (autolink literals, strikethrough, tables, and tasklists) use micromark-extension-gfm. Say we have a file like this:

# GFM

## Autolink literals

www.example.com, https://example.com, and contact@example.com.

## Strikethrough

~one~ or ~~two~~ tildes.

## Table

| a | b  |  c |  d  |
| - | :- | -: | :-: |

## Tasklist

* [ ] to do
* [x] done

Then do something like this:

import fs from 'node:fs'
import {micromark} from 'micromark'
import {gfm, gfmHtml} from 'micromark-extension-gfm'

const doc = fs.readFileSync('example.md')

console.log(micromark(doc, {extensions: [gfm()], htmlExtensions: [gfmHtml]}))

Show equivalent HTML

<h1>GFM</h1>
<h2>Autolink literals</h2>
<p><a href="http://www.example.com">www.example.com</a>, <a href="https://example.com">https://example.com</a>, and <a href="mailto:contact@example.com">contact@example.com</a>.</p>
<h2>Strikethrough</h2>
<p><del>one</del> or <del>two</del> tildes.</p>
<h2>Table</h2>
<table>
<thead>
<tr>
<th>a</th>
<th align="left">b</th>
<th align="right">c</th>
<th align="center">d</th>
</tr>
</thead>
</table>
<h2>Tasklist</h2>
<ul>
<li><input disabled="" type="checkbox"> to do</li>
<li><input checked="" disabled="" type="checkbox"> done</li>
</ul>

Math

To support math use micromark-extension-math. Say we have a file like this:

Lift($L$) can be determined by Lift Coefficient ($C_L$) like the following equation.

$$
L = \frac{1}{2} \rho v^2 S C_L
$$

Then do something like this:

import fs from 'node:fs'
import {micromark} from 'micromark'
import {math, mathHtml} from 'micromark-extension-math'

const doc = fs.readFileSync('example.md')

console.log(micromark(doc, {extensions: [math], htmlExtensions: [mathHtml()]}))

Show equivalent HTML

(the generated KaTeX markup is abridged to … below)

<p>Lift(<span class="katex">…</span>) can be determined by Lift Coefficient (<span class="katex">…</span>) like the following equation.</p>
<div class="math math-display"><span class="katex">…</span></div>

Footnotes

To support footnotes use micromark-extension-footnote. Say we have a file like this:

Here is a footnote call,[^1] and another.[^longnote]

[^1]: Here is the footnote.

[^longnote]: Here’s one with multiple blocks.

    Subsequent paragraphs are indented to show that they
belong to the previous footnote.

        { some.code }

    The whole paragraph can be indented, or just the first
    line.  In this way, multi-paragraph footnotes work like
    multi-paragraph list items.

This paragraph won’t be part of the note, because it
isn’t indented.

Here is an inline note.^[Inline notes are easier to write, since
you don’t have to pick an identifier and move down to type the
note.]

Then do something like this:

import fs from 'node:fs'
import {micromark} from 'micromark'
import {footnote, footnoteHtml} from 'micromark-extension-footnote'

const doc = fs.readFileSync('example.md')

console.log(
  micromark(doc, {extensions: [footnote], htmlExtensions: [footnoteHtml()]})
)

Show equivalent HTML

<p>Here is a footnote call,<a href="#fn1" class="footnote-ref" id="fnref1"><sup>1</sup></a> and another.<a href="#fn2" class="footnote-ref" id="fnref2"><sup>2</sup></a></p>
<p>This paragraph won’t be part of the note, because it
isn’t indented.</p>
<p>Here is an inline note.<a href="#fn3" class="footnote-ref" id="fnref3"><sup>3</sup></a></p>
<div class="footnotes">
<hr />
<ol>
<li id="fn1">
<p>Here is the footnote.<a href="#fnref1" class="footnote-back">↩︎</a></p>
</li>
<li id="fn2">
<p>Here’s one with multiple blocks.</p>
<p>Subsequent paragraphs are indented to show that they
belong to the previous footnote.</p>
<pre><code>{ some.code }
</code></pre>
<p>The whole paragraph can be indented, or just the first
line.  In this way, multi-paragraph footnotes work like
multi-paragraph list items.<a href="#fnref2" class="footnote-back">↩︎</a></p>
</li>
<li id="fn3">
<p>Inline notes are easier to write, since
you don’t have to pick an identifier and move down to type the
note.<a href="#fnref3" class="footnote-back">↩︎</a></p>
</li>
</ol>
</div>

Syntax tree

A higher level project, mdast-util-from-markdown, can give you an AST.

import fromMarkdown from 'mdast-util-from-markdown' // This wraps micromark.

const result = fromMarkdown('## Hello, *world*!')

console.log(result.children[0])

Yields:

{
  type: 'heading',
  depth: 2,
  children: [
    {type: 'text', value: 'Hello, ', position: [Object]},
    {type: 'emphasis', children: [Array], position: [Object]},
    {type: 'text', value: '!', position: [Object]}
  ],
  position: {
    start: {line: 1, column: 1, offset: 0},
    end: {line: 1, column: 19, offset: 18}
  }
}

Another level up is remark, which provides a nice interface and hundreds of plugins.

Markdown

CommonMark

The first definition of “Markdown” gave several examples of how it worked, showing input Markdown and output HTML, and came with a reference implementation (Markdown.pl). When new implementations followed, they mostly followed the first definition, but deviated from the first implementation, and added extensions, thus making the format a family of formats.

Some years later, an attempt was made to standardize the differences between implementations, by specifying how several edge cases should be handled, through more input and output examples. This is known as CommonMark, and many implementations now work towards some degree of CommonMark compliancy. Still, CommonMark describes what the output in HTML should be given some input, which leaves many edge cases up for debate, and does not answer what should happen for other output formats.

micromark passes all tests from CommonMark and has many more tests to match the CommonMark reference parsers. Finally, it comes with CMSM, which describes how to parse markup, instead of documenting input and output examples.

Grammar

The syntax of markdown can be described in Backus–Naur form (BNF) as:

markdown = .*

No, that’s not a typo: markdown has no syntax errors; anything thrown at it renders something.
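
In code (a sketch; the exact output doesn’t matter, the point is that it never throws):

import {micromark} from 'micromark'

// Any input, however mangled, produces some HTML:
console.log(micromark('}]*_~ `( anything goes'))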

Project

Comparison

There are many other markdown parsers out there and maybe they’re better suited to your use case! Here is a short comparison of a couple in JavaScript. Note that this list is made by the folks who make micromark and remark, so there is some bias.

Note: these are, in fact, not really comparable: micromark (and remark) focus on completely different things than other markdown parsers do. Sure, you can generate HTML from markdown with them, but micromark (and remark) are created for (abstract or concrete) syntax trees—to inspect, transform, and generate content, so that you can make things like MDX, Prettier, or Gatsby.

micromark

micromark can be used in two different ways. It can either be used, optionally with existing extensions, to get HTML easily. Or, it can give tremendous power, such as access to all tokens with positional info, at the cost of being hard to get into. It’s super small, pretty fast, and has 100% CommonMark compliance. It has syntax extensions, such as supporting 100% GFM compliance (with micromark-extension-gfm), but they’re rather complex to write. It’s the newest parser on the block, which means it’s fresh and well suited for contemporary markdown needs, but it’s also battle-tested, and already the 3rd most popular markdown parser in JavaScript.

If you’re looking for fine grained control, use micromark. If you just want HTML from markdown, use micromark.

remark

remark is the most popular markdown parser. It’s built on top of micromark and boasts syntax trees. For an analogy, it’s like if Babel, ESLint, and more, were one project. It supports the syntax extensions that micromark has (so it’s 100% CM compliant and can be 100% GFM compliant), but most of the work is done in plugins that transform or inspect the tree, and there’s tons of them. Transforming the tree is relatively easy: it’s a JSON object that can be manipulated directly. remark is stable, widely used, and extremely powerful for handling complex data.

You probably should use remark.

marked

marked is the oldest markdown parser on the block. It’s been around for ages, is battle tested, small, popular, and has a bunch of extensions, but doesn’t match CommonMark or GFM, and is unsafe by default.

If you have markdown you trust and want to turn it into HTML without a fuss, and don’t care about perfect compatibility with CommonMark or GFM, but do appreciate a small bundle size and stability, use marked.

markdown-it

markdown-it is a good, stable, and essentially CommonMark compliant markdown parser, with (optional) support for some GFM features as well. It’s used a lot as a direct dependency in packages, but is rather big. It shines at syntax extensions, where you want to support not just markdown, but your (company’s) version of markdown.

If you need a couple of custom syntax extensions to your otherwise CommonMark-compliant markdown, and want to get HTML out, use markdown-it.

Others

There are lots of other markdown parsers! Some say they’re small, or fast, or that they’re CommonMark compliant, but that’s not always true. This list is not supposed to be exhaustive (but it covers the most relevant ones). It’s a snapshot in time of why (not) to use (alternatives to) micromark: they’re all good choices, depending on what your goals are.

Test

micromark is tested with the ~650 CommonMark tests and more than 1.2k extra tests confirmed with CM reference parsers. These tests reach all branches in the code, which means that this project has 100% code coverage. Finally, we use fuzz testing to ensure micromark is stable, reliable, and secure.

To build, format, and test the codebase, use $ npm test after clone and install. The $ npm run test-api and $ npm run test-coverage scripts run the unit tests alone, or the unit tests with coverage, respectively.

The $ npm run test-fuzz script does fuzz testing for 15 minutes. The timeout is provided by GNU coreutils timeout(1), which might not be available on your system. Either install timeout or remove that part temporarily from the script and manually exit the program after a while.

Size & debug

micromark is really small. A ton of time went into making sure it minifies well, by the way code is written but also through custom build scripts to pre-evaluate certain expressions. Furthermore, care went into making it compress well with gzip and brotli.

Normally, you’ll use the pre-evaluated version of micromark. While developing, debugging, or testing your code, you should switch to use code instrumented with assertions and debug messages:

node --conditions development module.js

To see debug messages, set the DEBUG env variable (to micromark, or to * for everything):

DEBUG="*" node --conditions development module.js

Version

micromark adheres to semver since 3.0.0.

Security

The typical security aspect discussed for markdown is cross-site scripting (XSS) attacks. Markdown itself is safe if it does not include embedded HTML or dangerous protocols in links/images (such as javascript: or data:). micromark makes any markdown safe by default, even if HTML is embedded or dangerous protocols are used, as it encodes or drops them. Turning on the allowDangerousHtml or allowDangerousProtocol options for user-provided markdown opens you up to XSS attacks.
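
For example (a sketch; the exact encoded output may vary slightly between versions):

import {micromark} from 'micromark'

console.log(micromark('<script>alert(1)</script>'))
// something like: &lt;script&gt;alert(1)&lt;/script&gt;

console.log(micromark('<script>alert(1)</script>', {allowDangerousHtml: true}))
// <script>alert(1)</script> — only safe if you trust the input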

Another security aspect is DDoS attacks. For example, an attacker could throw a 100mb file at micromark, in which case the JavaScript engine will run out of memory and crash. It is also possible to crash micromark with smaller payloads, notably when thousands of links, images, emphasis, or strong are opened but not closed. It is wise to cap the accepted size of input (500kb can hold a big book) and to process content in a different thread or worker so that it can be stopped when needed.
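
A minimal guard might look like this (a sketch; the limit and the function name are ours):

import {micromark} from 'micromark'

const limit = 500 * 1024 // ~500kb of markdown can hold a big book

function renderUntrusted(value) {
  if (value.length > limit) throw new Error('markdown input too large')
  return micromark(value)
}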

Using extensions might also be unsafe; refer to their documentation for more information.

For more information on markdown sanitation, see improper-markup-sanitization.md by @chalker.

See security.md in micromark/.github for how to submit a security report.

Contribute

See contributing.md in micromark/.github for ways to get started. See support.md for ways to get help.

This project has a code of conduct. By interacting with this repository, organisation, or community you agree to abide by its terms.

Origin story

Over the summer of 2018, micromark was planned, and the idea shared in August with a couple of friends and potential sponsors. The problem I (@wooorm) had was that issues were piling up in remark and other repos, but my day job (teaching) was fun, fulfilling, and deserved time too. It was getting hard to combine the two. The thought was to feed two birds with one scone: fix the issues in remark with a new markdown parser (codename marydown) while being financially supported by sponsors building fancy stuff on top, such as Gatsby, Contentful, and Vercel (ZEIT at the time). @johno was making MDX on top of remark at the time (important historical note: several other folks were working on JSX + markdown too). We bundled our strengths: MDX was getting some traction and we thought together we could perhaps make something sustainable.

In November 2018, we launched with the idea for micromark to solve all existing bugs, sustaining the existing hundreds of projects, and furthering the exciting high-level project MDX. We pushed a single name: unified (which back then was a small but essential part of the chain). Gatsby and Vercel were immediate sponsors. We didn’t know whether it would work, and it worked. But now you have a new problem: you are getting some financial support (much more than other open source projects) but it’s not enough money for rent, and too much money to print stickers with. You still have your job and issues are still piling up.

At the start of summer 2019, after a couple months of saving up donations, I quit my job and worked on unified through fall. That got the number of open issues down significantly and set up a strong governance and maintenance system for the collective. But when the time came to work on micromark, the money was gone again, so I contracted through winter 2019, and in spring 2020 I could do about half open source, half contracting. One of the contracting gigs was to write a new MDX parser, for which I also documented how to do that with a state machine in prose. That gave me the insight into how the same could be done for markdown: I drafted CMSM, which was some of the core ideas for micromark, but in prose.

In May 2020, Salesforce reached out: they saw the bugs in remark, how micromark could help, and the initial work on CMSM. And they had thousands of Markdown files. In a move uncharacteristic for open source, they decided to fund my work on micromark. A large part of what maintaining open source means is putting out fires, triaging issues, and making sure users and sponsors are happy, so it was amazing to get several months to just focus and make something new. I remember feeling that this project would probably be the hardest thing I’d work on: yeah, parsers are pretty difficult, but markdown is on another level. Markdown is such a giant stack of edge cases on edge cases on even more weirdness, what a mess. On August 20, 2020, I released 2.0.0, the first working version of micromark. And it’s hard to describe how that moment felt. It was great.

Download Details:

Author: micromark
Official Website: https://github.com/micromark/micromark
License: MIT © Titus Wormer

#micromark #markdown #javascript #html #css

A Wrapper for Sembast and SQFlite to Enable Easy

FHIR_DB

This is really just a wrapper around Sembast_SQFLite - so all of the heavy lifting was done by Alex Tekartik. I highly recommend that if you have any questions about working with this package that you take a look at Sembast. He's also just a super nice guy, and even answered a question for me when I was deciding which sembast version to use. As usual, ResoCoder also has a good tutorial.

I have an interest in low-resource settings and thus a specific reason to be able to store data offline. To encourage this use, there are a number of other packages I have created based around the data format FHIR. FHIR® is the registered trademark of HL7 and is used with the permission of HL7. Use of the FHIR trademark does not constitute endorsement of this product by HL7.

Using the Db

So, while not absolutely necessary, I highly recommend that you use some sort of interface class. This adds the benefit of more easily handling errors, plus if you change to a different database in the future, you don't have to change the rest of your app, just the interface.

I've used something like this in my projects:

class IFhirDb {
  IFhirDb();
  final ResourceDao resourceDao = ResourceDao();

  Future<Either<DbFailure, Resource>> save(Resource resource) async {
    Resource resultResource;
    try {
      resultResource = await resourceDao.save(resource);
    } catch (error) {
      return left(DbFailure.unableToSave(error: error.toString()));
    }
    return right(resultResource);
  }

  Future<Either<DbFailure, List<Resource>>> returnListOfSingleResourceType(
      String resourceType) async {
    List<Resource> resultList;
    try {
      resultList =
          await resourceDao.getAllSortedById(resourceType: resourceType);
    } catch (error) {
      return left(DbFailure.unableToObtainList(error: error.toString()));
    }
    return right(resultList);
  }

  Future<Either<DbFailure, List<Resource>>> searchFunction(
      String resourceType, String searchString, String reference) async {
    List<Resource> resultList;
    try {
      resultList =
          await resourceDao.searchFor(resourceType, searchString, reference);
    } catch (error) {
      return left(DbFailure.unableToObtainList(error: error.toString()));
    }
    return right(resultList);
  }
}

I like this because in case there's an i/o error or something, it won't crash your app. Then, you can call this interface in your app like the following:

final patient = Patient(
    resourceType: 'Patient',
    name: [HumanName(text: 'New Patient Name')],
    birthDate: Date(DateTime.now()),
);

final saveResult = await IFhirDb().save(patient);

This will save your newly created patient to the locally embedded database.

IMPORTANT: this database will expect that all previously created resources have an id. When you save a resource, it will check to see if that resource type has already been stored. (Each resource type is saved in it's own store in the database). It will then check if there is an ID. If there's no ID, it will create a new one for that resource (along with metadata on version number and creation time). It will save it, and return the resource. If it already has an ID, it will copy the the old version of the resource into a _history store. It will then update the metadata of the new resource and save that version into the appropriate store for that resource. If, for instance, we have a previously created patient:

{
    "resourceType": "Patient",
    "id": "fhirfli-294057507-6811107",
    "meta": {
        "versionId": "1",
        "lastUpdated": "2020-10-16T19:41:28.054369Z"
    },
    "name": [
        {
            "given": ["New"],
            "family": "Patient"
        }
    ],
    "birthDate": "2020-10-16"
}

And we update the last name to 'Provider'. The above version of the patient will be kept in _history, while in the 'Patient' store in the db, we will have the updated version:

{
    "resourceType": "Patient",
    "id": "fhirfli-294057507-6811107",
    "meta": {
        "versionId": "2",
        "lastUpdated": "2020-10-16T19:45:07.316698Z"
    },
    "name": [
        {
            "given": ["New"],
            "family": "Provider"
        }
    ],
    "birthDate": "2020-10-16"
}

This way we can keep track of all previous version of all resources (which is obviously important in medicine).

For most of the interactions (saving, deleting, etc), they work the way you'd expect. The only difference is search. Because Sembast is NoSQL, we can search on any of the fields in a resource. If in our interface class, we have the following function:

  Future<Either<DbFailure, List<Resource>>> searchFunction(
      String resourceType, String searchString, String reference) async {
    List<Resource> resultList;
    try {
      resultList =
          await resourceDao.searchFor(resourceType, searchString, reference);
    } catch (error) {
      return left(DbFailure.unableToObtainList(error: error.toString()));
    }
    return right(resultList);
  }

You can search for all immunizations of a certain patient:

searchFunction(
        'Immunization', 'patient.reference', 'Patient/$patientId');

This function will search through all entries in the 'Immunization' store. It will look at all 'patient.reference' fields, and return any that match 'Patient/$patientId'.

The last thing I'll mention is that this is a password protected db, using AES-256 encryption (although it can also use Salsa20). Anytime you use the db, you have the option of using a password for encryption/decryption. Remember, if you setup the database using encryption, you will only be able to access it using that same password. When you're ready to change the password, you will need to call the update password function. If we again assume we created a change password method in our interface, it might look something like this:

class IFhirDb {
  IFhirDb();
  final ResourceDao resourceDao = ResourceDao();
  ...
    Future<Either<DbFailure, Unit>> updatePassword(String oldPassword, String newPassword) async {
    try {
      await resourceDao.updatePw(oldPassword, newPassword);
    } catch (error) {
      return left(DbFailure.unableToUpdatePassword(error: error.toString()));
    }
    return right(Unit);
  }

You don't have to use a password, and in that case, it will save the db file as plain text. If you want to add a password later, it will encrypt it at that time.

General Store

After using this for a while in an app, I've realized that it needs to be able to store data apart from just FHIR resources, at least on occasion. For this, I've added a second class for all versions of the database called GeneralDao. This is similar to the ResourceDao, but fewer options. So, in order to save something, it would look like this:

await GeneralDao().save('password', {'new':'map'});
await GeneralDao().save('password', {'new':'map'}, 'key');

The difference between these two options is that the first one will generate a key for the map being stored, while the second will store the map using the key provided. Both will return the key after successfully storing the map.

Other functions available include:

// deletes everything in the general store
await GeneralDao().deleteAllGeneral('password'); 

// delete specific entry
await GeneralDao().delete('password','key'); 

// returns map with that key
await GeneralDao().find('password', 'key'); 

FHIR® is a registered trademark of Health Level Seven International (HL7) and its use does not constitute an endorsement of products by HL7®

Use this package as a library

Depend on it

Run this command:

With Flutter:

 $ flutter pub add fhir_db

This will add a line like this to your package's pubspec.yaml (and run an implicit flutter pub get):

dependencies:
  fhir_db: ^0.4.3

Alternatively, your editor might support or flutter pub get. Check the docs for your editor to learn more.

Import it

Now in your Dart code, you can use:

import 'package:fhir_db/dstu2.dart';
import 'package:fhir_db/dstu2/fhir_db.dart';
import 'package:fhir_db/dstu2/general_dao.dart';
import 'package:fhir_db/dstu2/resource_dao.dart';
import 'package:fhir_db/encrypt/aes.dart';
import 'package:fhir_db/encrypt/salsa.dart';
import 'package:fhir_db/r4.dart';
import 'package:fhir_db/r4/fhir_db.dart';
import 'package:fhir_db/r4/general_dao.dart';
import 'package:fhir_db/r4/resource_dao.dart';
import 'package:fhir_db/r5.dart';
import 'package:fhir_db/r5/fhir_db.dart';
import 'package:fhir_db/r5/general_dao.dart';
import 'package:fhir_db/r5/resource_dao.dart';
import 'package:fhir_db/stu3.dart';
import 'package:fhir_db/stu3/fhir_db.dart';
import 'package:fhir_db/stu3/general_dao.dart';
import 'package:fhir_db/stu3/resource_dao.dart'; 

example/lib/main.dart

import 'package:fhir/r4.dart';
import 'package:fhir_db/r4.dart';
import 'package:flutter/material.dart';
import 'package:test/test.dart';

Future<void> main() async {
  WidgetsFlutterBinding.ensureInitialized();

  final resourceDao = ResourceDao();

  // await resourceDao.updatePw('newPw', null);
  await resourceDao.deleteAllResources(null);

  group('Playing with passwords', () {
    test('Playing with Passwords', () async {
      final patient = Patient(id: Id('1'));

      final saved = await resourceDao.save(null, patient);

      await resourceDao.updatePw(null, 'newPw');
      final search1 = await resourceDao.find('newPw',
          resourceType: R4ResourceType.Patient, id: Id('1'));
      expect(saved, search1[0]);

      await resourceDao.updatePw('newPw', 'newerPw');
      final search2 = await resourceDao.find('newerPw',
          resourceType: R4ResourceType.Patient, id: Id('1'));
      expect(saved, search2[0]);

      await resourceDao.updatePw('newerPw', null);
      final search3 = await resourceDao.find(null,
          resourceType: R4ResourceType.Patient, id: Id('1'));
      expect(saved, search3[0]);

      await resourceDao.deleteAllResources(null);
    });
  });

  final id = Id('12345');
  group('Saving Things:', () {
    test('Save Patient', () async {
      final humanName = HumanName(family: 'Atreides', given: ['Duke']);
      final patient = Patient(id: id, name: [humanName]);
      final saved = await resourceDao.save(null, patient);

      expect(saved.id, id);

      expect((saved as Patient).name?[0], humanName);
    });

    test('Save Organization', () async {
      final organization = Organization(id: id, name: 'FhirFli');
      final saved = await resourceDao.save(null, organization);

      expect(saved.id, id);

      expect((saved as Organization).name, 'FhirFli');
    });

    test('Save Observation1', () async {
      final observation1 = Observation(
        id: Id('obs1'),
        code: CodeableConcept(text: 'Observation #1'),
        effectiveDateTime: FhirDateTime(DateTime(1981, 09, 18)),
      );
      final saved = await resourceDao.save(null, observation1);

      expect(saved.id, Id('obs1'));

      expect((saved as Observation).code.text, 'Observation #1');
    });

    test('Save Observation1 Again', () async {
      final observation1 = Observation(
          id: Id('obs1'),
          code: CodeableConcept(text: 'Observation #1 - Updated'));
      final saved = await resourceDao.save(null, observation1);

      expect(saved.id, Id('obs1'));

      expect((saved as Observation).code.text, 'Observation #1 - Updated');

      expect(saved.meta?.versionId, Id('2'));
    });

    test('Save Observation2', () async {
      final observation2 = Observation(
        id: Id('obs2'),
        code: CodeableConcept(text: 'Observation #2'),
        effectiveDateTime: FhirDateTime(DateTime(1981, 09, 18)),
      );
      final saved = await resourceDao.save(null, observation2);

      expect(saved.id, Id('obs2'));

      expect((saved as Observation).code.text, 'Observation #2');
    });

    test('Save Observation3', () async {
      final observation3 = Observation(
        id: Id('obs3'),
        code: CodeableConcept(text: 'Observation #3'),
        effectiveDateTime: FhirDateTime(DateTime(1981, 09, 18)),
      );
      final saved = await resourceDao.save(null, observation3);

      expect(saved.id, Id('obs3'));

      expect((saved as Observation).code.text, 'Observation #3');
    });
  });

  group('Finding Things:', () {
    test('Find 1st Patient', () async {
      final search = await resourceDao.find(null,
          resourceType: R4ResourceType.Patient, id: id);
      final humanName = HumanName(family: 'Atreides', given: ['Duke']);

      expect(search.length, 1);

      expect((search[0] as Patient).name?[0], humanName);
    });

    test('Find 3rd Observation', () async {
      final search = await resourceDao.find(null,
          resourceType: R4ResourceType.Observation, id: Id('obs3'));

      expect(search.length, 1);

      expect(search[0].id, Id('obs3'));

      expect((search[0] as Observation).code.text, 'Observation #3');
    });

    test('Find All Observations', () async {
      final search = await resourceDao.getResourceType(
        null,
        resourceTypes: [R4ResourceType.Observation],
      );

      expect(search.length, 3);

      final idList = [];
      for (final obs in search) {
        idList.add(obs.id.toString());
      }

      expect(idList.contains('obs1'), true);

      expect(idList.contains('obs2'), true);

      expect(idList.contains('obs3'), true);
    });

    test('Find All (non-historical) Resources', () async {
      final search = await resourceDao.getAll(null);

      expect(search.length, 5);
      final patList = search.toList();
      final orgList = search.toList();
      final obsList = search.toList();
      patList.retainWhere(
          (resource) => resource.resourceType == R4ResourceType.Patient);
      orgList.retainWhere(
          (resource) => resource.resourceType == R4ResourceType.Organization);
      obsList.retainWhere(
          (resource) => resource.resourceType == R4ResourceType.Observation);

      expect(patList.length, 1);

      expect(orgList.length, 1);

      expect(obsList.length, 3);
    });
  });

  group('Deleting Things:', () {
    test('Delete 2nd Observation', () async {
      await resourceDao.delete(
          null, null, R4ResourceType.Observation, Id('obs2'), null, null);

      final search = await resourceDao.getResourceType(
        null,
        resourceTypes: [R4ResourceType.Observation],
      );

      expect(search.length, 2);

      final idList = [];
      for (final obs in search) {
        idList.add(obs.id.toString());
      }

      expect(idList.contains('obs1'), true);

      expect(idList.contains('obs2'), false);

      expect(idList.contains('obs3'), true);
    });

    test('Delete All Observations', () async {
      await resourceDao.deleteSingleType(null,
          resourceType: R4ResourceType.Observation);

      final search = await resourceDao.getAll(null);

      expect(search.length, 2);

      final patList = search.toList();
      final orgList = search.toList();
      patList.retainWhere(
          (resource) => resource.resourceType == R4ResourceType.Patient);
      orgList.retainWhere(
          (resource) => resource.resourceType == R4ResourceType.Organization);

      expect(patList.length, 1);

      expect(orgList.length, 1);
    });

    test('Delete All Resources', () async {
      await resourceDao.deleteAllResources(null);

      final search = await resourceDao.getAll(null);

      expect(search.length, 0);
    });
  });

  group('Password - Saving Things:', () {
    test('Save Patient', () async {
      await resourceDao.updatePw(null, 'newPw');
      final humanName = HumanName(family: 'Atreides', given: ['Duke']);
      final patient = Patient(id: id, name: [humanName]);
      final saved = await resourceDao.save('newPw', patient);

      expect(saved.id, id);

      expect((saved as Patient).name?[0], humanName);
    });

    test('Save Organization', () async {
      final organization = Organization(id: id, name: 'FhirFli');
      final saved = await resourceDao.save('newPw', organization);

      expect(saved.id, id);

      expect((saved as Organization).name, 'FhirFli');
    });

    test('Save Observation1', () async {
      final observation1 = Observation(
        id: Id('obs1'),
        code: CodeableConcept(text: 'Observation #1'),
        effectiveDateTime: FhirDateTime(DateTime(1981, 09, 18)),
      );
      final saved = await resourceDao.save('newPw', observation1);

      expect(saved.id, Id('obs1'));

      expect((saved as Observation).code.text, 'Observation #1');
    });

    test('Save Observation1 Again', () async {
      final observation1 = Observation(
          id: Id('obs1'),
          code: CodeableConcept(text: 'Observation #1 - Updated'));
      final saved = await resourceDao.save('newPw', observation1);

      expect(saved.id, Id('obs1'));

      expect((saved as Observation).code.text, 'Observation #1 - Updated');

      expect(saved.meta?.versionId, Id('2'));
    });

    test('Save Observation2', () async {
      final observation2 = Observation(
        id: Id('obs2'),
        code: CodeableConcept(text: 'Observation #2'),
        effectiveDateTime: FhirDateTime(DateTime(1981, 09, 18)),
      );
      final saved = await resourceDao.save('newPw', observation2);

      expect(saved.id, Id('obs2'));

      expect((saved as Observation).code.text, 'Observation #2');
    });

    test('Save Observation3', () async {
      final observation3 = Observation(
        id: Id('obs3'),
        code: CodeableConcept(text: 'Observation #3'),
        effectiveDateTime: FhirDateTime(DateTime(1981, 09, 18)),
      );
      final saved = await resourceDao.save('newPw', observation3);

      expect(saved.id, Id('obs3'));

      expect((saved as Observation).code.text, 'Observation #3');
    });
  });

  group('Password - Finding Things:', () {
    test('Find 1st Patient', () async {
      final search = await resourceDao.find('newPw',
          resourceType: R4ResourceType.Patient, id: id);
      final humanName = HumanName(family: 'Atreides', given: ['Duke']);

      expect(search.length, 1);

      expect((search[0] as Patient).name?[0], humanName);
    });

    test('Find 3rd Observation', () async {
      final search = await resourceDao.find('newPw',
          resourceType: R4ResourceType.Observation, id: Id('obs3'));

      expect(search.length, 1);

      expect(search[0].id, Id('obs3'));

      expect((search[0] as Observation).code.text, 'Observation #3');
    });

    test('Find All Observations', () async {
      final search = await resourceDao.getResourceType(
        'newPw',
        resourceTypes: [R4ResourceType.Observation],
      );

      expect(search.length, 3);

      final idList = [];
      for (final obs in search) {
        idList.add(obs.id.toString());
      }

      expect(idList.contains('obs1'), true);

      expect(idList.contains('obs2'), true);

      expect(idList.contains('obs3'), true);
    });

    test('Find All (non-historical) Resources', () async {
      final search = await resourceDao.getAll('newPw');

      expect(search.length, 5);
      final patList = search.toList();
      final orgList = search.toList();
      final obsList = search.toList();
      patList.retainWhere(
          (resource) => resource.resourceType == R4ResourceType.Patient);
      orgList.retainWhere(
          (resource) => resource.resourceType == R4ResourceType.Organization);
      obsList.retainWhere(
          (resource) => resource.resourceType == R4ResourceType.Observation);

      expect(patList.length, 1);

      expect(orgList.length, 1);

      expect(obsList.length, 3);
    });
  });

  group('Password - Deleting Things:', () {
    test('Delete 2nd Observation', () async {
      await resourceDao.delete(
          'newPw', null, R4ResourceType.Observation, Id('obs2'), null, null);

      final search = await resourceDao.getResourceType(
        'newPw',
        resourceTypes: [R4ResourceType.Observation],
      );

      expect(search.length, 2);

      final idList = [];
      for (final obs in search) {
        idList.add(obs.id.toString());
      }

      expect(idList.contains('obs1'), true);

      expect(idList.contains('obs2'), false);

      expect(idList.contains('obs3'), true);
    });

    test('Delete All Observations', () async {
      await resourceDao.deleteSingleType('newPw',
          resourceType: R4ResourceType.Observation);

      final search = await resourceDao.getAll('newPw');

      expect(search.length, 2);

      final patList = search.toList();
      final orgList = search.toList();
      patList.retainWhere(
          (resource) => resource.resourceType == R4ResourceType.Patient);
      orgList.retainWhere(
          (resource) => resource.resourceType == R4ResourceType.Organization);

      expect(patList.length, 1);

      expect(orgList.length, 1);
    });

    test('Delete All Resources', () async {
      await resourceDao.deleteAllResources('newPw');

      final search = await resourceDao.getAll('newPw');

      expect(search.length, 0);

      await resourceDao.updatePw('newPw', null);
    });
  });
} 

Download Details:

Author: MayJuun

Source Code: https://github.com/MayJuun/fhir/tree/main/fhir_db

#sqflite  #dart  #flutter 

Dylan Iqbal

1624590352

Micromark 3.0: A Small Compliant Markdown Parser

micromark

The smallest CommonMark compliant markdown parser with positional info and concrete tokens.

Feature highlights

When to use this

  • If you just want to turn markdown into HTML (w/ maybe a few extensions)
  • If you want to do really complex things with markdown

See § Comparison for more info

Intro

micromark is a long awaited markdown parser. It uses a state machine to parse the entirety of markdown into concrete tokens. It’s the smallest 100% CommonMark compliant markdown parser in JavaScript. It was made to replace the internals of remark-parse, the most popular markdown parser. Its API compiles to HTML, but its parts are made to be used separately, so as to generate syntax trees (mdast-util-from-markdown) or compile to other output formats.

Contents

Install

npm:

npm install micromark

Use

Typical use (buffering):

import {micromark} from 'micromark'

console.log(micromark('## Hello, *world*!'))

Yields:

<h2>Hello, <em>world</em>!</h2>

You can pass extensions (in this case micromark-extension-gfm):

import {micromark} from 'micromark'
import {gfm, gfmHtml} from 'micromark-extension-gfm'

const value = '* [x] contact@example.com ~~strikethrough~~'

const result = micromark(value, {
  extensions: [gfm()],
  htmlExtensions: [gfmHtml]
})

console.log(result)

Yields:

<ul>
<li><input checked="" disabled="" type="checkbox"> <a href="mailto:contact@example.com">contact@example.com</a> <del>strikethrough</del></li>
</ul>

Streaming interface:

import fs from 'fs'
import {stream} from 'micromark/stream'

fs.createReadStream('example.md')
  .on('error', handleError)
  .pipe(stream())
  .pipe(process.stdout)

function handleError(error) {
  // Handle your error here!
  throw error
}

API

micromark core has two entries in its export map: micromark and micromark/stream.

micromark exports the following identifier: micromark. micromark/stream exports the following identifier: stream. There are no default exports.

The export map supports the endorsed development condition. Run node --conditions development module.js to get instrumented dev code. Without this condition, production code is loaded. See § Size & debug for more info.

micromark(value[, encoding][, options])

Compile markdown to HTML.

Parameters
value

Markdown to parse (string or Buffer).

encoding

Character encoding to understand value as when it’s a Buffer (string, default: 'utf8').

options.defaultLineEnding

Value to use for line endings not in value (string, default: first line ending or '\n').

Generally, micromark copies line endings ('\r', '\n', '\r\n') in the markdown document over to the compiled HTML. In some cases, such as > a, CommonMark requires that extra line endings are added: <blockquote>\n<p>a</p>\n</blockquote>.

options.allowDangerousHtml

Whether to allow embedded HTML (boolean, default: false). See § Security.

options.allowDangerousProtocol

Whether to allow potentially dangerous protocols in links and images (boolean, default: false). URLs relative to the current protocol are always allowed (such as, image.jpg). For links, the allowed protocols are http, https, irc, ircs, mailto, and xmpp. For images, the allowed protocols are http and https. See § Security.

options.extensions

Array of syntax extensions (Array.<SyntaxExtension>, default: []). See § Extensions.

options.htmlExtensions

Array of HTML extensions (Array.<HtmlExtension>, default: []). See § Extensions.

Returns

string — Compiled HTML.

stream(options?)

Streaming interface of micromark. Compiles markdown to HTML. options are the same as the buffering API above. Note that some of the work to parse markdown can be done streaming, but in the end buffering is required.

micromark does not handle errors for you, so you must handle errors on whatever streams you pipe into it. As markdown has no concept of syntax errors, micromark itself never emits errors.

Extensions

micromark supports extensions. There are two types of extensions for micromark: SyntaxExtension, which changes how markdown is parsed, and HtmlExtension, which changes how it compiles. They can be passed in options.extensions or options.htmlExtensions, respectively.

As a user of extensions, refer to each extension’s readme for more on how to use them. As a (potential) author of extensions, refer to § Extending markdown and § Creating a micromark extension.

List of extensions

SyntaxExtension

A syntax extension is an object whose fields are typically the names of hooks (such as text or flow), referring to where constructs “hook” into. The fields of those hook objects are character codes, with constructs as values.

The built in constructs are an example. See it and existing extensions for inspiration.
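For a sense of the shape, here is a minimal sketch (the construct and its name are hypothetical) that hooks into text at character code 123 (U+007B LEFT CURLY BRACE):

const myConstruct = {name: 'myConstruct', tokenize: myTokenize}

export const mySyntaxExtension = {text: {123: myConstruct}}

function myTokenize(effects, ok, nok) {
  return start

  function start(code) {
    // A real construct would enter tokens and consume codes here.
    return nok(code)
  }
}

The variables construct in § Creating a micromark extension below fleshes this shape out.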

HtmlExtension

An HTML extension is an object whose fields are typically enter or exit (reflecting whether a token is entered or exited). The fields of those objects are token names, with handlers as values.

See existing extensions for inspiration.
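For the shape, a minimal sketch (the myToken token is hypothetical) that wraps a token in a tag:

export const myHtmlExtension = {
  enter: {myToken: enterMyToken},
  exit: {myToken: exitMyToken}
}

function enterMyToken() {
  this.tag('<span>')
}

function exitMyToken() {
  this.tag('</span>')
}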

Extending markdown

micromark lets you change markdown syntax, yes, but there are alternatives. The alternatives are often better.

Over the years, many micromark and remark users have asked about their unique goals for markdown. Some exemplary goals are:

  1. I want to add rel="nofollow" to external links
  2. I want to add links from headings to themselves
  3. I want line breaks in paragraphs to become hard breaks
  4. I want to support embedded music sheets
  5. I want authors to add arbitrary attributes
  6. I want authors to mark certain blocks with meaning, such as tip, warning, etc
  7. I want to combine markdown with JS(X)
  8. I want to support our legacy flavor of markdown-like syntax

These can be solved in different ways and which solution is best is both subjective and dependant on unique needs. Often, there is already a solution in the form of an existing remark or rehype plugin. Respectively, their solutions are:

  1. remark-external-links
  2. rehype-autolink-headings
  3. remark-breaks
  4. custom plugin similar to rehype-katex but integrating abcjs
  5. either remark-directive and a custom plugin or with rehype-attr
  6. remark-directive combined with a custom plugin
  7. combining the existing micromark MDX extensions however you please, such as done by mdx-js/mdx or xdm
  8. Writing a micromark extension

Looking at these from a higher level, they can be categorized:

  • Changing the output by transforming syntax trees (1 and 2)

    This category is nice as the format remains plain markdown that authors are already familiar with and which will work with existing tools and platforms.

    Implementations will deal with the syntax tree (mdast) and the ecosystems remark and rehype. There are many existing utilities for working with that tree. Many remark plugins and rehype plugins also exist.

  • Using and abusing markdown to add new meaning (3, 4, potentially 5)

    This category is similar to Changing the output by transforming syntax trees, but adds a new meaning to certain things which already have semantics in markdown.

    Some examples in pseudo code:

    *   **A list item with the first paragraph bold**

        And then more content, is turned into `<dl>` / `<dt>` / `<dd>` elements.

    Or, the title attribute on links or images is [overloaded](/url 'rel:nofollow')
    with a new meaning.

    Fenced code can carry data, either in its body or after the code language name:

    ```csv
    fenced,code,can,include,data
    which,is,turned,into,a,graph
    ```

    HTML, especially comments, could be used as markers:

    <!--marker-->
  • Arbitrary extension mechanism (potentially 5; 6)

    This category is nice when content should contain embedded “components”. Often this means it’s required for authors to have some programming experience. There are three good ways to solve arbitrary extensions.

    HTML: Markdown already has an arbitrary extension syntax. It works in most places and authors are already familiar with the syntax, but it’s reasonably hard to implement securely. Certain platforms will remove HTML completely, others sanitize it to varying degrees. HTML also supports custom elements. These could be used and enhanced by client side JavaScript or enhanced when transforming the syntax tree.

    Generic directives: although a proposal and not supported on most platforms, directives do work with many tools already. They’re not the easiest to author compared to, say, a heading, but sometimes that’s okay. They do have potential: they nicely solve the need for an infinite number of potential extensions to markdown in a single markdown-esque way.

    MDX also adds support for components by swapping HTML out for JS(X). JSX is an extension to JavaScript, so MDX is something along the lines of literate programming. This does require knowledge of React (or Vue) and JavaScript, excluding some authors.

  • Extending markdown syntax (7 and 8)

    Extend the syntax of markdown means:

    • Authors won’t be familiar with the syntax
    • Content won’t work in other places (such as on GitHub)
    • Defeating the purpose of markdown: being simple to author and looking like what it means

    …and it’s hard to do as it requires some in-depth knowledge of JavaScript and parsing. But it’s possible and in certain cases very powerful.

Creating a micromark extension

This section shows how to create an extension for micromark that parses “variables” (a way to render some data) and one to turn a default construct off.

Stuck? See support.md.

Prerequisites
  • You should possess an intermediate to high understanding of JavaScript: it’s going to get a bit complex
  • Read the readme of unified (until you hit the API section) to better understand where micromark fits
  • Read the § Architecture section to understand how micromark works
  • Read the § Extending markdown section to understand whether it’s a good idea to extend the syntax of markdown
Extension basics

micromark supports two types of extensions. Syntax extensions change how markdown is parsed. HTML extensions change how it compiles.

HTML extensions are not always needed, as micromark is often used through mdast-util-from-markdown to parse to a markdown syntax tree. So instead of an HTML extension, a from-markdown utility is needed. Then a mdast-util-to-markdown utility, which is responsible for serializing syntax trees to markdown, is also needed.

When developing something for internal use only, you can pick and choose which parts you need. When open sourcing your extension, it should probably contain four parts: a syntax extension, an HTML extension, a from-markdown utility, and a to-markdown utility.

On to our first case!

Case: variables

Let’s first outline what we want to make: render some data, similar to how Liquid and the like work, in our markdown. It could look like this:

Hello, {planet}!

Turned into:

<p>Hello, Venus!</p>

An opening curly brace, followed by one or more characters, and then a closing brace. We’ll then look up planet in some object and replace the variable with its corresponding value, to get something like Venus out.

It looks simple enough, but with markdown there are often a couple more things to think about. For this case, I can see the following:

  • Is there a “block” version too?
  • Are spaces allowed? Line endings? Should initial and final white space be ignored?
  • Balanced nested braces? Superfluous ones such as {{planet}} or meaningful ones such as {a {pla} net}?
  • Character escapes ({pla\}net}) and character references ({pla&#x7d;net})?

To keep things as simple as possible, let’s not support a block syntax, not treat spaces as special, and not support line endings or nested braces. But to learn interesting things, we will support character escapes and character references.

Note that this particular case is already solved quite nicely by micromark-extension-mdx-expression. It’s a bit more powerful and does more things, but it can be used to solve this case and otherwise serve as inspiration.

Setup

Create a new folder, enter it, and set up a new package:

mkdir example
cd example
npm init -y

In this example we’ll use ESM, so add "type": "module" to package.json:

@@ -2,6 +2,7 @@
   "name": "example",
   "version": "1.0.0",
   "description": "",
+  "type": "module",
   "main": "index.js",
   "scripts": {
     "test": "echo \"Error: no test specified\" && exit 1"

Add a markdown file, example.md, with the following text:

Hello, {planet}!

{pla\}net} and {pla&#x7d;net}.

To check if our extension works, add an example.js module, with the following code:

import {promises as fs} from 'node:fs'
import {micromark} from 'micromark'
import {variables} from './index.js'

main()

async function main() {
  const buf = await fs.readFile('example.md')
  const out = micromark(buf, {extensions: [variables]})
  console.log(out)
}

While working on the extension, run node example to see whether things work. Feel free to add more examples of the variables syntax in example.md if needed.

Our extension doesn’t work yet, for one because micromark is not installed:

npm install micromark --save-dev

…and we need to write our extension. Let’s do that in index.js:

export const variables = {}

Although our extension doesn’t do anything, running node example now somewhat works!
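It should print the file compiled as regular CommonMark, with the escape and the character reference resolved but our variables left untouched:

<p>Hello, {planet}!</p>
<p>{pla}net} and {pla}net}.</p>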

Syntax extension

Much in micromark is based on character codes (see § Preprocess). For this extension, the relevant codes are:

  • -5 — M-0005 CARRIAGE RETURN (CR)
  • -4 — M-0004 LINE FEED (LF)
  • -3 — M-0003 CARRIAGE RETURN LINE FEED (CRLF)
  • null — EOF (end of the stream)
  • 92 — U+005C BACKSLASH (\)
  • 123 — U+007B LEFT CURLY BRACE ({)
  • 125 — U+007D RIGHT CURLY BRACE (})

Also relevant are the content types (see § Content types). This extension is a text construct, as it’s parsed alongside links and such. The content inside it (between the braces) is string, to support character escapes and character references.

Let’s write our extension. Add the following code to index.js:

const variableConstruct = {name: 'variable', tokenize: variableTokenize}

export const variables = {text: {123: variableConstruct}}

function variableTokenize(effects, ok, nok) {
  return start

  function start(code) {
    console.log('start:', code)
    return nok(code)
  }
}

The above code exports an extension with the identifier variables. The extension defines a text construct for the character code 123. The construct has a name, so that it can be turned off (optional, see next case), and it has a tokenize function that sets up a state machine, which receives effects and the ok and nok states. ok can be used when successful, nok when not, and so constructs are a bit similar to how promises can resolve or reject. tokenize returns the initial state, start, which itself receives the current character code, prints some debugging information, and then returns a call to nok.

Ensure that things work by running node example and see what it prints.
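It should log start: 123 once for each opening curly brace in example.md (three times); because start returns nok, the compiled output is unchanged for now.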

Now we need to define our states and figure out how variables work. Some people prefer sketching a diagram of the flow. I often prefer writing it down in pseudo-code prose. I’ve also found that test driven development works well, where I write unit tests for how it should work, then write the state machine, and finally use a code coverage tool to ensure I’ve thought of everything.

In prose, what we have to code looks like this:

  • start: Receive 123 as code, enter a token for the whole (let’s call it variable), enter a token for the marker (variableMarker), consume code, exit the marker token, enter a token for the contents (variableString), switch to begin
  • begin: If code is 125, reconsume in nok. Else, reconsume in inside
  • inside: If code is -5, -4, -3, or null, reconsume in nok. Else, if code is 125, exit the string token, enter a variableMarker, consume code, exit the marker token, exit the variable token, and switch to ok. Else, consume, and remain in inside.

That should be it! Replace variableTokenize with the following to include the needed states:

function variableTokenize(effects, ok, nok) {
  return start

  function start(code) {
    effects.enter('variable')
    effects.enter('variableMarker')
    effects.consume(code)
    effects.exit('variableMarker')
    effects.enter('variableString')
    return begin
  }

  function begin(code) {
    return code === 125 ? nok(code) : inside(code)
  }

  function inside(code) {
    if (code === -5 || code === -4 || code === -3 || code === null) {
      return nok(code)
    }

    if (code === 125) {
      effects.exit('variableString')
      effects.enter('variableMarker')
      effects.consume(code)
      effects.exit('variableMarker')
      effects.exit('variable')
      return ok
    }

    effects.consume(code)
    return inside
  }
}

Run node example again and see what it prints! The HTML compiler ignores things it doesn’t know, so variables are now removed.

We have our first syntax extension, and it sort of works, but we don’t handle character escapes and character references yet. We need to do two things to make that work: a) skip over \\ and \} in our algorithm; b) tell micromark to parse them.

Change the code in index.js to support escapes like so:

@@ -23,6 +23,11 @@ function variableTokenize(effects, ok, nok) {
       return nok(code)
     }

+    if (code === 92) {
+      effects.consume(code)
+      return insideEscape
+    }
+
     if (code === 125) {
       effects.exit('variableString')
       effects.enter('variableMarker')
@@ -35,4 +40,13 @@ function variableTokenize(effects, ok, nok) {
     effects.consume(code)
     return inside
   }
+
+  function insideEscape(code) {
+    if (code === 92 || code === 125) {
+      effects.consume(code)
+      return inside
+    }
+
+    return inside(code)
+  }
 }

Finally add support for character references and character escapes between braces by adding a special token that defines a content type:

@@ -11,6 +11,7 @@ function variableTokenize(effects, ok, nok) {
     effects.consume(code)
     effects.exit('variableMarker')
     effects.enter('variableString')
+    effects.enter('chunkString', {contentType: 'string'})
     return begin
   }

@@ -29,6 +30,7 @@ function variableTokenize(effects, ok, nok) {
     }

     if (code === 125) {
+      effects.exit('chunkString')
       effects.exit('variableString')
       effects.enter('variableMarker')
       effects.consume(code)

Tokens with a contentType will be replaced by postprocess (see § Postprocess) by the tokens belonging to that content type.

HTML extension

Up next is an HTML extension to replace variables with data. Change example.js to use one like so:

@@ -1,11 +1,12 @@
 import {promises as fs} from 'node:fs'
 import {micromark} from 'micromark'
-import {variables} from './index.js'
+import {variables, variablesHtml} from './index.js'

 main()

 async function main() {
   const buf = await fs.readFile('example.md')
-  const out = micromark(buf, {extensions: [variables]})
+  const html = variablesHtml({planet: '1', 'pla}net': '2'})
+  const out = micromark(buf, {extensions: [variables], htmlExtensions: [html]})
   console.log(out)
 }

And add the HTML extension, variablesHtml, to index.js like so:

@@ -52,3 +52,19 @@ function variableTokenize(effects, ok, nok) {
     return inside(code)
   }
 }
+
+export function variablesHtml(data = {}) {
+  return {
+    enter: {variableString: enterVariableString},
+    exit: {variableString: exitVariableString},
+  }
+
+  function enterVariableString() {
+    this.buffer()
+  }
+
+  function exitVariableString() {
+    const id = this.resume()
+    if (id in data) {
+      this.raw(this.encode(data[id]))
+    }
+  }
+}

variablesHtml is a function that receives an object mapping “variables” to strings and returns an HTML extension. The extension hooks two functions to variableString, one when it starts, the other when it ends. We don’t need to do anything to handle the other tokens as they’re already ignored by default. enterVariableString calls buffer, which is a function that “stashes” what would otherwise be emitted. exitVariableString calls resume, which is the inverse of buffer and returns the stashed value. If the variable is defined, we ensure it’s made safe (with this.encode) and finally output that (with this.raw).
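With the data passed in example.js, running node example should now print:

<p>Hello, 1!</p>
<p>2 and 2.</p>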

Further exercises

It works! We’re done! Of course, it can be better, such as with the following potential features:

  • Add support for empty variables
  • Add support for spaces between markers and string
  • Add support for line endings in variables
  • Add support for nested braces
  • Add support for blocks
  • Add warnings on undefined variables
  • Use micromark-build, and use assert, debug, and micromark-util-symbol (see § Size & debug)
  • Add mdast-util-from-markdown and mdast-util-to-markdown utilities to parse and serialize the AST
Case: turn off constructs

Sometimes you need to turn a default construct off. That’s possible through a syntax extension. Note that not everything can be turned off (such as paragraphs), and even when turning something off is possible, doing so could break micromark (such as character escapes).

To disable constructs, refer to them by name in an array at the disable.null field of an extension:

import {micromark} from 'micromark'

const extension = {disable: {null: ['codeIndented']}}

console.log(micromark('\ta', {extensions: [extension]}))

Yields:

<p>a</p>

Architecture

micromark is maintained as a monorepo. Many of its internals, which are used in micromark (core) but also useful for developers of extensions or integrations, are available as separate modules. Each module maintained here is available in packages/.

Overview

The naming scheme in packages/ is as follows:

  • micromark-build — Small CLI to build dev code into production code
  • micromark-core-commonmark — CommonMark constructs used in micromark
  • micromark-factory-* — Reusable subroutines used to parse parts of constructs
  • micromark-util-* — Reusable helpers often needed when parsing markdown
  • micromark — Core module

micromark has two interfaces: buffering (maintained in micromark/dev/index.js) and streaming (maintained in micromark/dev/stream.js). The former takes all input at once whereas the latter uses a Node.js stream to take input separately. They thinly wrap how data flows through micromark:

                                            micromark
+-----------------------------------------------------------------------------------------------+
|            +------------+         +-------+         +-------------+         +---------+       |
| -markdown->+ preprocess +-chunks->+ parse +-events->+ postprocess +-events->+ compile +-html- |
|            +------------+         +-------+         +-------------+         +---------+       |
+-----------------------------------------------------------------------------------------------+

Preprocess

The preprocessor (micromark/dev/lib/preprocess.js) takes markdown and turns it into chunks.

A chunk is either a character code or a slice of a buffer in the form of a string. Chunks are used because strings are more efficient storage than character codes, but limited in what they can represent. For example, the input ab\ncd is represented as ['ab', -4, 'cd'] in chunks.

A character code is often the same as what String#charCodeAt() yields but micromark adds meaning to certain other values.

In micromark, the actual character U+0009 CHARACTER TABULATION (HT) is replaced by one M-0002 HORIZONTAL TAB (HT) and between 0 and 3 M-0001 VIRTUAL SPACE (VS) characters, depending on the column at which the tab occurred. For example, the input \ta is represented as [-2, -1, -1, -1, 97] and a\tb as [97, -2, -1, -1, 98] in character codes.

The characters U+000A LINE FEED (LF) and U+000D CARRIAGE RETURN (CR) are replaced by virtual characters depending on whether they occur together: M-0003 CARRIAGE RETURN LINE FEED (CRLF), M-0004 LINE FEED (LF), and M-0005 CARRIAGE RETURN (CR). For example, the input a\r\nb\nc\rd is represented as [97, -5, 98, -4, 99, -3, 100] in character codes.

The 0 (U+0000 NUL) character code is replaced by U+FFFD REPLACEMENT CHARACTER (�).

The null code represents the end of the input stream (called eof for end of file).
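Putting these rules together, the input \ta\r\nb is represented as [-2, -1, -1, -1, 97, -5, 98] in character codes, with the null (eof) code signalling the end of the stream after that.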

Parse

The parser (micromark/dev/lib/parse.js) takes chunks and turns them into events.

An event is the start or end of a token amongst other events. Tokens can “contain” other tokens, even though they are stored in a flat list, by entering before and exiting after them.

A token is a span of one or more codes. Tokens are most of what micromark produces: the built in HTML compiler or other tools can turn them into different things. Tokens are essentially names attached to a slice, such as lineEndingBlank for certain line endings, or codeFenced for a whole fenced code.

Sometimes, more info is attached to tokens, such as _open and _close by attention (strong, emphasis) to signal whether the sequence can open or close an attention run. These fields have to do with how the parser works, which is complex and not always pretty.

Certain fields (previous, next, and contentType) are used in many cases: linked tokens for subcontent. Linked tokens are used because outer constructs are parsed first. Take for example:

- *a
  b*.

  1. The list marker and the space after it are parsed first
  2. The rest of the line is a chunkFlow token
  3. The two spaces on the second line are a linePrefix of the list
  4. The rest of the line is another chunkFlow token

The two chunkFlow tokens are linked together and the chunks they span are passed through the flow tokenizer. There the chunks are seen as chunkContent and passed through the content tokenizer. There the chunks are seen as a paragraph, whose contents are seen as chunkText and passed through the text tokenizer. Finally, the attention (emphasis) and data (“raw” characters) are parsed there, and we’re done!

Content types

The parser starts out with a document tokenizer. Document is the top-most content type, which includes containers such as block quotes and lists. Containers in markdown come from the margin and include more constructs on the lines that define them.

Flow represents the sections (block constructs such as ATX and setext headings, HTML, indented and fenced code, thematic breaks), which like document are also parsed per line. An example is HTML, which has a certain starting condition (such as <script> on its own line), then continues for a while, until an end condition is found (such as </style>). If that line with an end condition is never found, that flow goes until the end.

Content is zero or more definitions, and then zero or one paragraph. It’s a weird one, and needed to make certain edge cases around definitions spec compliant. Definitions are unlike other things in markdown: like text they can contain arbitrary line endings, but they have to end at a line ending; if they end in something else, the whole definition is instead seen as a paragraph.

The content in markdown first needs to be parsed up to this level to figure out which things are defined, for the whole document, before continuing on with text, as whether a link or image reference forms or not depends on whether it’s defined. This unfortunately prevents a true streaming markdown parser.

Text contains phrasing content (rich inline text: autolinks, character escapes and -references, code, hard breaks, HTML, images, links, emphasis, strong).

String is a limited text-like content type which only allows character references and character escapes. It exists in things such as identifiers (media references, definitions), titles, or URLs and such.

Constructs

Constructs are the things that make up markdown. Some examples are lists, thematic breaks, or character references.

Note that, as a general rule of thumb, markdown is really weird. It’s essentially made up of edge cases rather than logical rules. When browsing the built in constructs, or venturing to build your own, you’ll find confusing new things and run into complex custom hooks.

One more reasonable construct is the thematic break (see code). It’s an object that defines a name and a tokenize function. Most of what constructs do is defined in their required tokenize function, which sets up a state machine to handle character codes streaming in.

Postprocess

The postprocessor (micromark/dev/lib/postprocess.js) is a small step that takes events, ensures all their nested content is parsed, and returns the modified events.

Compile

The compiler (micromark/dev/lib/compile.js) takes events and turns them into HTML. While micromark was created mostly to advance markdown parsing irrespective of compiling to HTML, the common case of doing so is built in. A built in HTML compiler is useful because it allows us to check for compliance with CommonMark, the de facto norm of markdown, specified in roughly 650 input/output cases. The parsing parts can still be used separately to build ASTs, CSTs, or many other output formats.

The compiler has an interface that accepts lists of events instead of the whole at once, but because markdown can’t truly stream, events are buffered before compiling and outputting the final result.

Examples

GitHub flavored markdown (GFM)

To support GFM (autolink literals, strikethrough, tables, and tasklists) use micromark-extension-gfm. Say we have a file like this:

# GFM

## Autolink literals

www.example.com, https://example.com, and contact@example.com.

## Strikethrough

~one~ or ~~two~~ tildes.

## Table

| a | b  |  c |  d  |
| - | :- | -: | :-: |

## Tasklist

* [ ] to do
* [x] done

Then do something like this:

import fs from 'node:fs'
import {micromark} from 'micromark'
import {gfm, gfmHtml} from 'micromark-extension-gfm'

const doc = fs.readFileSync('example.md')

console.log(micromark(doc, {extensions: [gfm()], htmlExtensions: [gfmHtml]}))

Show equivalent HTML

<h1>GFM</h1>
<h2>Autolink literals</h2>
<p><a href="http://www.example.com">www.example.com</a>, <a href="https://example.com">https://example.com</a>, and <a href="mailto:contact@example.com">contact@example.com</a>.</p>
<h2>Strikethrough</h2>
<p><del>one</del> or <del>two</del> tildes.</p>
<h2>Table</h2>
<table>
<thead>
<tr>
<th>a</th>
<th align="left">b</th>
<th align="right">c</th>
<th align="center">d</th>
</tr>
</thead>
</table>
<h2>Tasklist</h2>
<ul>
<li><input disabled="" type="checkbox"> to do</li>
<li><input checked="" disabled="" type="checkbox"> done</li>
</ul>

Math

To support math use micromark-extension-math. Say we have a file like this:

Lift($L$) can be determined by Lift Coefficient ($C_L$) like the following equation.

$$
L = \frac{1}{2} \rho v^2 S C_L
$$

Then do something like this:

import fs from 'node:fs'
import {micromark} from 'micromark'
import {math, mathHtml} from 'micromark-extension-math'

const doc = fs.readFileSync('example.md')

console.log(micromark(doc, {extensions: [math], htmlExtensions: [mathHtml()]}))

Show equivalent HTML

<p>Lift (rendered KaTeX markup for L) can be determined by Lift Coefficient (rendered KaTeX markup for C_L) like the following equation.</p>
<div class="math math-display">(rendered KaTeX markup for the equation, omitted here for brevity)</div>

Footnotes

To support footnotes use micromark-extension-footnote. Say we have a file like this:

Here is a footnote call,[^1] and another.[^longnote]

[^1]: Here is the footnote.

[^longnote]: Here’s one with multiple blocks.

    Subsequent paragraphs are indented to show that they
belong to the previous footnote.

        { some.code }

    The whole paragraph can be indented, or just the first
    line.  In this way, multi-paragraph footnotes work like
    multi-paragraph list items.

This paragraph won’t be part of the note, because it
isn’t indented.

Here is an inline note.^[Inlines notes are easier to write, since
you don’t have to pick an identifier and move down to type the
note.]

Then do something like this:

import fs from 'node:fs'
import {micromark} from 'micromark'
import {footnote, footnoteHtml} from 'micromark-extension-footnote'

const doc = fs.readFileSync('example.md')

console.log(
  micromark(doc, {extensions: [footnote], htmlExtensions: [footnoteHtml()]})
)

Show equivalent HTML

<p>Here is a footnote call,<a href="#fn1" class="footnote-ref" id="fnref1"><sup>1</sup></a> and another.<a href="#fn2" class="footnote-ref" id="fnref2"><sup>2</sup></a></p>
<p>This paragraph won’t be part of the note, because it
isn’t indented.</p>
<p>Here is an inline note.<a href="#fn3" class="footnote-ref" id="fnref3"><sup>3</sup></a></p>
<div class="footnotes">
<hr />
<ol>
<li id="fn1">
<p>Here is the footnote.<a href="#fnref1" class="footnote-back">↩︎</a></p>
</li>
<li id="fn2">
<p>Here’s one with multiple blocks.</p>
<p>Subsequent paragraphs are indented to show that they
belong to the previous footnote.</p>
<pre><code>{ some.code }
</code></pre>
<p>The whole paragraph can be indented, or just the first
line.  In this way, multi-paragraph footnotes work like
multi-paragraph list items.<a href="#fnref2" class="footnote-back">↩︎</a></p>
</li>
<li id="fn3">
<p>Inlines notes are easier to write, since
you don’t have to pick an identifier and move down to type the
note.<a href="#fnref3" class="footnote-back">↩︎</a></p>
</li>
</ol>
</div>

Syntax tree

A higher level project, mdast-util-from-markdown, can give you an AST.

import {fromMarkdown} from 'mdast-util-from-markdown' // This wraps micromark.

const result = fromMarkdown('## Hello, *world*!')

console.log(result.children[0])

Yields:

{
  type: 'heading',
  depth: 2,
  children: [
    {type: 'text', value: 'Hello, ', position: [Object]},
    {type: 'emphasis', children: [Array], position: [Object]},
    {type: 'text', value: '!', position: [Object]}
  ],
  position: {
    start: {line: 1, column: 1, offset: 0},
    end: {line: 1, column: 19, offset: 18}
  }
}

Another level up is remark, which provides a nice interface and hundreds of plugins.

Markdown

CommonMark

The first definition of “Markdown” gave several examples of how it worked, showing input Markdown and output HTML, and came with a reference implementation (Markdown.pl). When new implementations followed, they mostly followed the first definition, but deviated from the first implementation, and added extensions, thus making the format a family of formats.

Some years later, an attempt was made to standardize the differences between implementations, by specifying how several edge cases should be handled, through more input and output examples. This is known as CommonMark, and many implementations now work towards some degree of CommonMark compliancy. Still, CommonMark describes what the output in HTML should be given some input, which leaves many edge cases up for debate, and does not answer what should happen for other output formats.

micromark passes all tests from CommonMark and has many more tests to match the CommonMark reference parsers. Finally, it comes with CMSM, which describes how to parse markup, instead of documenting input and output examples.

Grammar

The syntax of markdown can be described in Backus–Naur form (BNF) as:

markdown = .*

No, that’s not a typo: markdown has no syntax errors; anything thrown at it renders something.

Project

Comparison

There are many other markdown parsers out there and maybe they’re better suited to your use case! Here is a short comparison of a couple in JavaScript. Note that this list is made by the folks who make micromark and remark, so there is some bias.

Note: these are, in fact, not really comparable: micromark (and remark) focus on completely different things than other markdown parsers do. Sure, you can generate HTML from markdown with them, but micromark (and remark) are created for (abstract or concrete) syntax trees—to inspect, transform, and generate content, so that you can make things like MDX, Prettier, or Gatsby.

micromark

micromark can be used in two different ways. It can either be used, optionally with existing extensions, to get HTML easily. Or, it can give tremendous power, such as access to all tokens with positional info, at the cost of being hard to get into. It’s super small, pretty fast, and has 100% CommonMark compliance. It has syntax extensions, such as supporting 100% GFM compliance (with micromark-extension-gfm), but they’re rather complex to write. It’s the newest parser on the block, which means it’s fresh and well suited for contemporary markdown needs, but it’s also battle-tested, and already the 3rd most popular markdown parser in JavaScript.

If you’re looking for fine grained control, use micromark. If you just want HTML from markdown, use micromark.

remark

remark is the most popular markdown parser. It’s built on top of micromark and boasts syntax trees. For an analogy, it’s like if Babel, ESLint, and more, were one project. It supports the syntax extensions that micromark has (so it’s 100% CM compliant and can be 100% GFM compliant), but most of the work is done in plugins that transform or inspect the tree, and there’s tons of them. Transforming the tree is relatively easy: it’s a JSON object that can be manipulated directly. remark is stable, widely used, and extremely powerful for handling complex data.

You probably should use remark.

marked

marked is the oldest markdown parser on the block. It’s been around for ages, is battle tested, small, popular, and has a bunch of extensions, but doesn’t match CommonMark or GFM, and is unsafe by default.

If you have markdown you trust and want to turn it into HTML without a fuss, and don’t care about perfect compatibility with CommonMark or GFM, but do appreciate a small bundle size and stability, use marked.

markdown-it

markdown-it is a good, stable, and essentially CommonMark compliant markdown parser, with (optional) support for some GFM features as well. It’s used a lot as a direct dependency in packages, but is rather big. It shines at syntax extensions, where you want to support not just markdown, but your (company’s) version of markdown.

If you need a couple of custom syntax extensions to your otherwise CommonMark-compliant markdown, and want to get HTML out, use markdown-it.

Others

There are lots of other markdown parsers! Some say they’re small, or fast, or that they’re CommonMark compliant—but that’s not always true. This list is not supposed to be exhaustive; it covers the most relevant ones, as a snapshot in time of why (not) to use (alternatives to) micromark. They’re all good choices, depending on what your goals are.

Test

micromark is tested with the ~650 CommonMark tests and more than 1.2k extra tests confirmed with CM reference parsers. These tests reach all branches in the code, which means that this project has 100% code coverage. Finally, we use fuzz testing to ensure micromark is stable, reliable, and secure.

To build, format, and test the codebase, use $ npm test after cloning and installing. The $ npm run test-api script runs just the unit tests; $ npm run test-coverage runs them and checks their coverage.

The $ npm run test-fuzz script does fuzz testing for 15 minutes. The timeout is provided by GNU coreutils timeout(1), which might not be available on your system. Either install timeout or remove that part temporarily from the script and manually exit the program after a while.

Size & debug

micromark is really small. A ton of time went into making sure it minifies well, by the way code is written but also through custom build scripts to pre-evaluate certain expressions. Furthermore, care went into making it compress well with gzip and brotli.

Normally, you’ll use the pre-evaluated version of micromark. While developing, debugging, or testing your code, you should switch to use code instrumented with assertions and debug messages:

node --conditions development module.js

To see debug messages, set a DEBUG env variable to micromark (or * for everything):

DEBUG="*" node --conditions development module.js

Version

micromark adheres to semver since 3.0.0.

Security

The typical security aspect discussed for markdown is cross-site scripting (XSS) attacks. Markdown itself is safe if it does not include embedded HTML or dangerous protocols in links/images (such as javascript: or data:). micromark makes any markdown safe by default, even if HTML is embedded or dangerous protocols are used, as it encodes or drops them. Turning on the allowDangerousHtml or allowDangerousProtocol options for user-provided markdown opens you up to XSS attacks.

Another security aspect is DDoS attacks. For example, an attacker could throw a 100 MB file at micromark, in which case the JavaScript engine will run out of memory and crash. It is also possible to crash micromark with smaller payloads, notably when thousands of links, images, emphasis, or strong sequences are opened but not closed. It is wise to cap the accepted size of input (500 kB can hold a big book) and to process content in a different thread or worker so that it can be stopped when needed.
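A minimal sketch of such a cap (the limit and the safeMicromark name are illustrative, assuming string input):

import {micromark} from 'micromark'

// An example threshold; tune to your use case.
const limit = 500 * 1024

function safeMicromark(value) {
  if (value.length > limit) {
    throw new Error('markdown input too large')
  }

  return micromark(value)
}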

Using extensions might also be unsafe; refer to their documentation for more information.

For more information on markdown sanitization, see improper-markup-sanitization.md by @chalker.

See security.md in micromark/.github for how to submit a security report.

Contribute

See contributing.md in micromark/.github for ways to get started. See support.md for ways to get help.

This project has a code of conduct. By interacting with this repository, organisation, or community you agree to abide by its terms.

Origin story

Over the summer of 2018, micromark was planned, and the idea shared in August with a couple of friends and potential sponsors. The problem I (@wooorm) had was that issues were piling up in remark and other repos, but my day job (teaching) was fun, fulfilling, and deserved time too. It was getting hard to combine the two. The thought was to feed two birds with one scone: fix the issues in remark with a new markdown parser (codename marydown) while being financially supported by sponsors building fancy stuff on top, such as Gatsby, Contentful, and Vercel (ZEIT at the time). @johno was making MDX on top of remark at the time (important historical note: several other folks were working on JSX + markdown too). We bundled our strengths: MDX was getting some traction and we thought together we could perhaps make something sustainable.

In November 2018, we launched with the idea for micromark to solve all existing bugs, sustaining the existing hundreds of projects, and furthering the exciting high-level project MDX. We pushed a single name: unified (which back then was a small but essential part of the chain). Gatsby and Vercel were immediate sponsors. We didn’t know whether it would work, and it worked. But now you have a new problem: you are getting some financial support (much more than other open source projects) but it’s not enough money for rent, and too much money to print stickers with. You still have your job and issues are still piling up.

At the start of summer 2019, after a couple months of saving up donations, I quit my job and worked on unified through fall. That got the number of open issues down significantly and set up a strong governance and maintenance system for the collective. But when the time came to work on micromark, the money was gone again, so I contracted through winter 2019, and in spring 2020 I could do about half open source, half contracting. One of the contracting gigs was to write a new MDX parser, for which I also documented how to do that with a state machine in prose. That gave me the insight into how the same could be done for markdown: I drafted CMSM, which was some of the core ideas for micromark, but in prose.

In May 2020, Salesforce reached out: they saw the bugs in remark, how micromark could help, and the initial work on CMSM. And they had thousands of Markdown files. In a move uncharacteristic for open source, they decided to fund my work on micromark. A large part of what maintaining open source means is putting out fires, triaging issues, and making sure users and sponsors are happy, so it was amazing to get several months to just focus and make something new. I remember feeling that this project would probably be the hardest thing I’d work on: yeah, parsers are pretty difficult, but markdown is on another level. Markdown is such a giant stack of edge cases on edge cases on even more weirdness; what a mess. On August 20, 2020, I released 2.0.0, the first working version of micromark. And it’s hard to describe how that moment felt. It was great.

Download Details:

Author: micromark
Official Website: https://github.com/micromark/micromark
License: MIT © Titus Wormer

#micromark #markdown #javascript #html #css

Let Developers Grasp Only One Button Component

From then on, developers only need to master one Button component.

It supports corners, borders, icons, special effects, a loading mode, and a high-quality Neumorphism style.

Author: Newton (coorchice.cb@alibaba-inc.com)

✨ Features

Rich corner effects

Exquisite border decoration

Gradient effects

Flexible icon support

Thoughtful loading mode

Cool interactive special effects

Shadows with a greater sense of space

High-quality Neumorphism style

🛠 Guide

⚙️ Parameters

🔩 Basic parameters

| Param | Type | Necessary | Default | Description |
| --- | --- | --- | --- | --- |
| onPressed | VoidCallback | true | null | Click callback. If null, FButton will enter an unavailable state |
| onPressedDown | VoidCallback | false | null | Callback when pressed |
| onPressedUp | VoidCallback | false | null | Callback when lifted |
| onPressedCancel | VoidCallback | false | null | Callback when the press is cancelled |
| height | double | false | null | Height |
| width | double | false | null | Width |
| style | TextStyle | false | null | Text style |
| disableStyle | TextStyle | false | null | Text style when unavailable |
| alignment | Alignment | false | null | Alignment |
| text | String | false | null | Button text |
| color | Color | false | null | Button color |
| disabledColor | Color | false | null | Color when FButton is unavailable |
| padding | EdgeInsetsGeometry | false | null | FButton internal spacing |
| corner | FCorner | false | null | Configure corners of the widget |
| cornerStyle | FCornerStyle | false | FCornerStyle.round | Configure the corner style of the widget: round for rounded corners, bevel for beveled |
| strokeColor | Color | false | Colors.black | Border color |
| strokeWidth | double | false | 0 | Border width. The border will appear when strokeWidth > 0 |
| gradient | Gradient | false | null | Configure gradient colors. Will override color |
| activeMaskColor | Color | false | Colors.transparent | The color of the mask when pressed |
| surfaceStyle | FSurface | false | FSurface.Flat | Surface style. See [FSurface] for details |

💫 Effect parameters

| Param | Type | Necessary | Default | Description |
| --- | --- | --- | --- | --- |
| clickEffect | bool | false | false | Whether to enable click effects |
| hoverColor | Color | false | null | FButton color when hovering |
| onHover | ValueChanged | false | null | Callback when the mouse enters/exits the component range |
| highlightColor | Color | false | null | The color of the FButton when touched. effect: true required |

🔳 Shadow parameters

| Param | Type | Necessary | Default | Description |
| --- | --- | --- | --- | --- |
| shadowColor | Color | false | Colors.grey | Shadow color |
| shadowOffset | Offset | false | Offset.zero | Shadow offset |
| shadowBlur | double | false | 1.0 | Shadow blur degree; the larger the value, the larger the shadow range |

🖼 Icon & Loading parameters

| Param | Type | Necessary | Default | Description |
| --- | --- | --- | --- | --- |
| image | Widget | false | null | An icon can be configured for FButton |
| imageMargin | double | false | 6.0 | Spacing between icon and text |
| imageAlignment | ImageAlignment | false | ImageAlignment.left | Relative position of icon and text |
| loading | bool | false | false | Whether to enter the loading state |
| loadingWidget | Widget | false | null | Loading widget shown in the loading state. Will override the default loading effect |
| clickLoading | bool | false | false | Whether to enter the loading state after clicking FButton |
| loadingColor | Color | false | null | Loading color |
| loadingStrokeWidth | double | false | 4.0 | Loading stroke width |
| hideTextOnLoading | bool | false | false | Whether to hide text in the loading state |
| loadingText | String | false | null | Loading text |
| loadingSize | double | false | 12 | Loading size |

🍭 Neumorphism Style

| Param | Type | Necessary | Default | Description |
|---|---|---|---|---|
| isSupportNeumorphism | bool | false | false | Whether to support the Neumorphism style. When enabled, [highlightColor] will be invalid |
| lightOrientation | FLightOrientation | false | FLightOrientation.LeftTop | Valid when [isSupportNeumorphism] is true. The light source direction has four options: upper left, lower left, upper right, and lower right. Controls the illumination direction, which affects the highlight and shadow directions |
| highlightShadowColor | Color | false | null | The bright shadow color when the Neumorphism style is turned on |

📺 Demo

🔩 Basic Demo

// FButton #1
FButton(
  height: 40,
  alignment: Alignment.center,
  text: "FButton #1",
  style: TextStyle(color: Colors.white),
  color: Color(0xffffab91),
  onPressed: () {},
)

// FButton #2
FButton(
  padding: const EdgeInsets.fromLTRB(12, 8, 12, 8),
  text: "FButton #2",
  style: TextStyle(color: Colors.white),
  color: Color(0xffffab91),
  corner: FCorner.all(6.0),
)

// FButton #3
FButton(
  padding: const EdgeInsets.fromLTRB(12, 8, 12, 8),
  text: "FButton #3",
  style: TextStyle(color: Colors.white),
  disableStyle: TextStyle(color: Colors.black38),
  color: Color(0xffF8AD36),

  /// set disable Color
  disabledColor: Colors.grey[300],
  corner: FCorner.all(6.0),
)

By simply configuring text and onPressed, you can construct an available FButton.

If onPressed is not set, FButton will automatically be recognized as unavailable. In that case, FButton will use a default unavailable-status style.

You can also freely configure the style of FButton when it is not available via the disabledXXX attribute.

🎈 Corner & Stroke

// #1
FButton(
  width: 130,
  text: "FButton #1",
  style: TextStyle(color: Colors.white),
  color: Color(0xffFF7043),
  onPressed: () {},
  clickEffect: true,
  
  /// set corner size
  corner: FCorner.all(25),
),

// #2
FButton(
  width: 130,
  text: "FButton #2",
  style: TextStyle(color: Colors.white),
  color: Color(0xffFFA726),
  onPressed: () {},
  clickEffect: true,
  corner: FCorner(
    leftBottomCorner: 40,
    leftTopCorner: 6,
    rightTopCorner: 40,
    rightBottomCorner: 6,
  ),
),

// #3
FButton(
  width: 130,
  text: "FButton #3",
  style: TextStyle(color: Colors.white),
  color: Color(0xffFFc900),
  onPressed: () {},
  clickEffect: true,
  corner: FCorner(leftTopCorner: 10),
  
  /// set corner style
  cornerStyle: FCornerStyle.bevel,
  strokeWidth: 0.5,
  strokeColor: Color(0xffF9A825),
),

// #4
FButton(
  width: 130,
  padding: EdgeInsets.fromLTRB(6, 16, 30, 16),
  text: "FButton #4",
  style: TextStyle(color: Colors.white),
  color: Color(0xff00B0FF),
  onPressed: () {},
  clickEffect: true,
  corner: FCorner(
      rightTopCorner: 25,
      rightBottomCorner: 25),
  cornerStyle: FCornerStyle.bevel,
  strokeWidth: 0.5,
  strokeColor: Color(0xff000000),
),

You can add rounded corners to FButton via the corner property. You can even control each corner individually.

By default, the corners of FButton are rounded. By setting cornerStyle: FCornerStyle.bevel, you can get a bevel effect.

FButton supports borders: as long as strokeWidth > 0, you get the border effect 🥳.

🌈 Gradient


FButton(
  width: 100,
  height: 60,
  text: "#1",
  style: TextStyle(color: Colors.white),
  color: Color(0xffFFc900),
  
  /// set gradient
  gradient: LinearGradient(colors: [
    Color(0xff00B0FF),
    Color(0xffFFc900),
  ]),
  onPressed: () {},
  clickEffect: true,
  corner: FCorner.all(8),
)

Through the gradient attribute, you can build an FButton with gradient colors, and you can freely build many kinds of gradients.

🍭 Icon

FButton(
  width: 88,
  height: 38,
  padding: EdgeInsets.all(0),
  text: "Back",
  style: TextStyle(color: Colors.white),
  color: Color(0xffffc900),
  onPressed: () {
    toast(context, "Back!");
  },
  clickEffect: true,
  corner: FCorner(
    leftTopCorner: 25,
    leftBottomCorner: 25,),
  
  /// set icon
  image: Icon(
    Icons.arrow_back_ios,
    color: Colors.white,
    size: 12,
  ),

  /// Configure the spacing between icon and text
  imageMargin: 8,
),

FButton(
  onPressed: () {},
  image: Icon(
    Icons.print,
    color: Colors.grey,
  ),
  imageMargin: 8,

  /// Configure the relative position of icons and text
  imageAlignment: ImageAlignment.top,
  text: "Print",
  style: TextStyle(color: textColor),
  color: Colors.transparent,
),

The image property can set an image for FButton, and you can adjust the position of the image relative to the text through imageAlignment.

If the button does not need a background, just set color: Colors.transparent.

🔥 Effect


FButton(
  width: 200,
  text: "Try Me!",
  style: TextStyle(color: textColor),
  color: Color(0xffffc900),
  onPressed: () {},
  clickEffect: true,
  corner: FCorner.all(9),
  
  /// set pressed color
  highlightColor: Color(0xffE65100).withOpacity(0.20),
  
  /// set hover color
  hoverColor: Colors.redAccent.withOpacity(0.16),
),

The highlight color of FButton can be configured through the highlightColor property.

hoverColor configures the color used when the mouse moves within the range of FButton, which is useful in Web development.

🔆 Loading

FButton(
  text: "Click top loading",
  style: TextStyle(color: textColor),
  color: Color(0xffffc900),
  ...

  /// set loading size
  loadingSize: 15,

  /// Configure the spacing between loading and text
  imageMargin: 6,
  
  /// set loading width
  loadingStrokeWidth: 2,

  /// Whether to support automatic loading by clicking
  clickLoading: true,

  /// set loading color
  loadingColor: Colors.white,

  /// set loading text
  loadingText: "Loading...",

  /// Configure the relative position of loading and text
  imageAlignment: ImageAlignment.top,
),

// #2
FButton(
  width: 170,
  height: 70,
  text: "Click to loading",
  style: TextStyle(color: textColor),
  color: Color(0xffffc900),
  onPressed: () { },
  ...
  imageMargin: 8,
  loadingSize: 15,
  loadingStrokeWidth: 2,
  clickLoading: true,
  loadingColor: Colors.white,
  loadingText: "Loading...",

  /// Hide text when loading
  hideTextOnLoading: true,
)


FButton(
  width: 170,
  height: 70,
  alignment: Alignment.center,
  text: "Click to loading",
  style: TextStyle(color: Colors.white),
  color: Color(0xff90caf9),
  ...
  imageMargin: 8,
  clickLoading: true,
  hideTextOnLoading: true,

  /// Configure custom loading style
  loadingWidget: CupertinoActivityIndicator(),
),

Through the loading attribute, you can configure Loading effects for FButton.

When FButton is in the Loading state, it enters an unavailable state: onPressed will no longer be triggered, and unavailable styles will be applied.

At the same time, loadingText will overwrite text if it is not null.

The click start Loading effect can be achieved through the clickLoading attribute.

The position of loading will be affected by the imageAlignment attribute.

When hideTextOnLoading: true, if FButton is in the loading state, its text will be hidden.

Through loadingWidget, developers can set completely customized loading styles.

Shadow


FButton(
  width: 200,
  text: "Shadow",
  style: TextStyle(color: Colors.white),
  color: Color(0xffffc900),
  onPressed: () {},
  clickEffect: true,
  corner: FCorner.all(28),
  
  /// set shadow color
  shadowColor: Colors.black87,

  /// Sets the standard deviation of the component's Gaussian convolution with the shadow shape.
  shadowBlur: _shadowBlur,
),

FButton allows you to configure the color, size, and position of the shadow.

🍭 Neumorphism Style

FButton(

  /// Turn on Neumorphism support
  isSupportNeumorphism: true,

  /// Configure light source direction
  lightOrientation: lightOrientation,

  /// Configure highlight shadow
  highlightShadowColor: Colors.white,

  /// Configure dark shadows
  shadowColor: mainShadowColor,
  strokeColor: mainBackgroundColor,
  strokeWidth: 3.0,
  width: 190,
  height: 60,
  text: "FWidget",
  style: TextStyle(
      color: mainTextTitleColor, fontSize: neumorphismSize_2_2),
  alignment: Alignment.center,
  color: mainBackgroundColor,
  ...
)

FButton brings an incredible, ultra-high texture Neumorphism style to developers.

Developers only need to configure the isSupportNeumorphism parameter to enable and disable the Neumorphism style.

If you want to adjust the style of Neumorphism, you can make subtle adjustments through several attributes related to Shadow, among which:

shadowColor: configures the dark shadow color

highlightShadowColor: configures the highlight (bright) shadow color

FButton also provides the lightOrientation parameter, which even allows developers to adjust the light source angle to obtain different Neumorphism effects.

😃 How to use?

Add dependencies in the project pubspec.yaml file:

🌐 pub dependency

dependencies:
  fbutton: ^<version number>

⚠️ Attention: please go to pub (https://pub.dev/packages/fbutton) to get the latest version number of FButton.

🖥 git dependencies

dependencies:
  fbutton:
    git:
      url: 'git@github.com:Fliggy-Mobile/fbutton.git'
      ref: '<Branch number or tag number>'

Use this package as a library

Depend on it

Run this command:

With Flutter:

 $ flutter pub add fbutton_nullsafety

This will add a line like this to your package's pubspec.yaml (and run an implicit flutter pub get):

dependencies:
  fbutton_nullsafety: ^5.0.0

Alternatively, your editor might support flutter pub get. Check the docs for your editor to learn more.

Import it

Now in your Dart code, you can use:

import 'package:fbutton_nullsafety/fbutton_nullsafety.dart';

Download Details:

Author: Fliggy-Mobile

Source Code: https://github.com/Fliggy-Mobile/fbutton

#button  #flutter 

kolade seun

1633111730

QishioSoci Review ⚠️Warning⚠️ Don’t Buy Yet

THE ULTIMATE SOLUTION TO SOCIAL MEDIA AUTOMATION


In recent years, people are spending more and more time on social media because they haven’t been able to meet with people in person. Facebook and Instagram are reporting skyrocketing usage and engagement numbers since March of last year. Where better to put your products and links than right where everyone is hanging out?

But the biggest downside of Facebook for us marketers is that we need to be constantly logged in to interact with potential customers and clients. In a global business world, it’s very easy to miss out on that lead because it was the middle of the night and you just had to get some sleep. It’s time to put a stop to this.

So in today’s review, I will show you the ultimate solution that won’t just knock out other competitors but will also strategically grow your businesses online, capture more leads and generate more sales daily from social media platforms.

The application called QishioSoci promises to generate affiliate commissions, but not using any of the “mundane old methods” that you’ve seen over and over again. QishioSoci lets you design, post, sell and respond on social media, all practically on autopilot – all from one dashboard.

So if you want to make sure customers are greeted and comments are replied to, as if it was you at the keyboard, then QishioSoci – for a one-time price will fit all your social media business needs.

Excited yet? Let’s jump in right now!

👉⚠️Click Here To Get QishioSoci And Custom Bonuses⚠️👈

QISHIOSOCI REVIEW – THE OVERVIEW


| Item | Details |
|---|---|
| Creator | Kenny Tan et al. |
| Product | QishioSoci |
| Launch Date | 2021-Oct-01 |
| Launch Time | 11:00 EST |
| Official website | Click Here |
| Front-End Price | $13 – $16.29 |
| Bonuses | HUGE BONUSES OF DIFFERENT CATEGORIES AT THE END OF THE REVIEW |
| Skill | All Levels |
| Guarantee | 30 Days Money Back Guarantee |
| Niche | Tools & Software |
| Support | Effective Response |
| Recommend | Highly Recommend! |

ABOUT THE PRODUCT

QishioSoci is a brand new, affiliate-marketing-centric social media scheduling and management app created by Kenny Tan and his team. These guys have been creating top-selling software for a while and offer great support. QishioSoci allows you to automate Facebook and Instagram comments, includes a Facebook Messenger bot, and sends great buyer traffic to you at the same time, all in just 3 simple steps:

Step 1: Grab QishioSoci Now!

But act fast, the price is rising every hour!

Step 2: Login To The Cloud-Based App

Login and add some simple details – such as your affiliate link

Step 3: Push The Button & Relax

The app will unleash free traffic from 40+ sources to your link on autopilot

👉⚠️Click Here To Get QishioSoci And Custom Bonuses⚠️👈

ABOUT THE CREATOR


This product is brought to you by Kenny Tan, who is an expert in the field of online marketing. Although he made his debut not long ago, he has earned high recommendations and praise from both experts and users.

Let’s take a look at some of his successful launches before: ContinuumMail, QishioBits, QishioVid, QishioSuite, Qishio EEzey, Qishio Trafik, Qishio Burner.

This time, Kenny and his team have the perfect system to make your social media campaigns super easy, faster and done in a way that’s never been done before.

QISHIOSOCI REVIEW – WHAT CAN YOU GET INSIDE THIS SOFTWARE?

Let’s take a closer look at what you can get inside of QishioSoci:

   ♦   REVOLUTIONARY AFFILIATE MARKETING APP

QishioSoci is the world’s first cloud-based automated affiliate marketing platform that does all of the ‘hard work’ for you. Drive traffic from 40+ high-traffic sources to your links and make money in just 1 click. Inside the app, you will find the following features:

   ♦   FIND OFFERS

   [+]   Search or find offers through WarriorPlus, JVZoo

   [+]   Get your affiliate link

   ♦   PAGE BUILDER

   [+]   Drag & drop elements

   [+]   Easy & simple styling

   [+]   Mobile responsive design

   [+]   Fully customizable

   [+]   20+ DFY templates

   ♦   FB POSTING

   [+]   Text, image, multi-image, video & link post

   [+]   Carousel & slideshow post

   [+]   CTA button post

   [+]   Schedule/instant post to your all Facebook pages with a single click

   [+]   Periodic re-posting ability

   [+]   Enable auto comment reply campaign with the post.

   [+]   Full report of posting

   [+]   Emoji library

   ♦   MESSENGER BOT

   [+]   Reply with text, file, image, audio, video, gif

   [+]   Generic template, carousel template, media template

   [+]   Post-back buttons, quick reply buttons

   [+]   Button of URL, phone number, webview, user birthday

   [+]   Quick reply button of user email, phone number

   [+]   Personalized reply with first name, last name

   [+]   Sync existing leads & migrate as bot subscribers

   [+]   Subscriber profile with gender, time zone & locale

   [+]   Segment subscriber by post-back button click

   [+]   Segment subscriber by private reply

   [+]   Segment subscribers by adding labels manually

   [+]   Typing on enable option

   [+]   Custom delay in each reply

   [+]   Mark seen action enable the option

   [+]   Persistent menu

   [+]   Different persistent menu ads for different locales

   [+]   Your brand URL set option in the persistent menu

   [+]   Collect phone number from quick reply

   [+]   Re-arrange bot replies by dragging and dropping

   [+]   Collect email from quick reply & Mailchimp integration, ActiveCampaign integration, Sendinblue integration, mautic integration, acelle integration

   [+]   Download email & phone number as CSV

   [+]   Error reporting log of reply

   [+]   Export bot settings

   [+]   Save exported bot data as a template

   [+]   Admin can save exported bot data as a template for users

   [+]   Import exported bot data for any page

   [+]   Visual & interactive tree view of full bot

   ♦   ONE TIME NOTIFICATION (OTN) BROADCASTING

   [+]   One time notification request button in bot settings

   [+]   One time notification broadcasting after 24 hours

   [+]   Send promotional message

   [+]   Send message with template

   ♦   FB AUTOPILOT

   [+]   Auto comment on page post as page

   [+]   One-time & periodic comment

   [+]   Serial & random periodic comment

   [+]   Auto comment template management

   [+]   Emoji and spintax comment

   [+]   Choose time & date interval of comment

   [+]   Increase page engagement

   [+]   Auto private reply for post comment.

   [+]   Auto private reply with template message (image, video, buttons, quick reply, carousel, generic template)

   [+]   Auto comment reply with webhook as instant.

   [+]   Auto comment reply for post comment.

   [+]   Auto like on the comment

   [+]   Dark post reply

   [+]   Reply multi-image post’s each image

   [+]   Highly customizable auto private reply & comment reply text.

   [+]   Filtering word-based auto private reply & comment reply option.

   [+]   Full report of auto private reply & comment reply.

   [+]   Segment subscribers

   [+]   Emoji and spintax message

   ♦   IG AUTOPILOT

   [+]   Auto comment reply for post comment.

   [+]   Keyword filtering word-based comment reply option.

   [+]   Manual comment on the post

   ♦   STEP-BY-STEP TRAINING

👉⚠️Click Here To Get QishioSoci And Custom Bonuses⚠️👈

QishioSoci is 100% beginner-friendly so anyone can log in and use the software regardless of their experience. However, the creators include training that shows you how to make money with the platform for those of you who need that extra helping hand.


QISHIOSOCI REVIEW – HOW TO WORK ON IT PROPERLY?

Here let me show you how you can successfully apply this QishioSoci to your work and start making money in just a few minutes.

First things first, you need to log into your account.

Once you successfully log in, you will be directed to the Main Dashboard.

THE QUICK WALK-THRU OF THE MAIN FEATURES OF QISHIOSOCI:

   ♦   IMPORT ACCOUNTS

With this section, you are able to link up your social accounts. Just click Login with Facebook, then enter your account name and password to log in.

   ♦   FIND OFFERS

Click on the “Find Offers” section and you can browse the many offers, check out their sales pages, and grab your affiliate links.

In particular, you will get access to the JVZoo and WarriorPlus hookups, which allow you to easily find potential offers to promote.

Just insert your keyword and you can find offers along with their sales pages and all the relevant information.

   ♦   BUILD YOUR PAGES

Go to the “Page Builder” section and choose a template from a library of ready-to-use designs.

More than that, you can redesign the selected template with QishioSoci. You can add images, text, dividers, videos, and so much more.

There are a variety of editing tools to customize your template.

   ♦   ADJUST THE CONTENT


   ♦   CHANGE THE IMAGE


   ♦   DOWNLOAD

Once you finish editing, don’t forget to hit “Download” to save your page.


   ♦   DOWNLOADED PAGES

This section stores all the pages you have already created. You can copy a page’s link, download it, or delete it at any time!


   ♦   FB POSTING

In the “Multimedia Post” from the “FB Posting” section, you can manage your post by adding text, images, links…

It’s time to enter some information: Campaign name, Messages, Posting time… After completing your post, click on “Create Campaign”.

Apart from Multimedia Posts, this tool also gives you the ability to create CTA Posts and Carousel/slider posts.


   ♦   FB AUTOPILOT 

Click on “FB/IG Autopilot” to access the Autopilot Tools and set up auto comments, replies, and campaigns.

   [+]   Comment Template

QishioSoci will post comments to Facebook automatically, so all you have to do is create comment templates for the different products you promote.


Fill in all required information and save your changes. 


   [+]   Reply Template

If someone comments on your post, you can still reply to them immediately without being online at that moment.

Thanks to this function of QishioSoci, you can easily create an auto-reply template. 

Just click on “Create new template”, choose a page for auto-reply and some of your auto-reply modes such as enabling comment replies… After that, enter your auto-reply campaign name and click on “Save”.

👉⚠️Click Here To Get QishioSoci And Custom Bonuses⚠️👈


   [+]   Automation Campaign

This feature empowers you to create an automation campaign.

   [+]   Report

The “Report” section shows you the auto comment report and the reply report, so you can track all of it: which product converts, which gets you clicks, which is making money…

   ♦   INSTAGRAM AUTOPILOT

It works the same way as FB Autopilot.

QishioSoci Review – The Demo Video


👉⚠️Click Here To Get QishioSoci And Custom Bonuses⚠️👈

THE REASONS FOR GIVING IT A TRY

QishioSoci comes loaded with robust features that enable you to generate winning content at the push of a button. Inside QishioSoci, you will find the ability to promote any affiliate offer you want on any of your Facebook or Instagram pages and watch the Facebook algorithm send you targeted traffic.

Then your comment and messenger bots will answer for you while you’re busy sleeping, eating, or spending time with those you love.

In simple terms, the application sends a tsunami of targeted buyers to your affiliate links from a huge range of high-traffic websites on autopilot, exposing hundreds of millions of buyers to your offers. This method does not rely on you creating videos, photos, or any content at all; everything is automated for you.

On top of that, the creators have made this application the simplest tool to work with. There are video tutorials to help you in case you get stuck somewhere. But even if that doesn’t help and you need more assistance, the support team is more than happy to help.

Also, there are beta testers with no experience who made money with the application. If you can follow a few simple instructions, you can drive traffic and make money with this.


QISHIOSOCI REVIEW – PRICE AND THE UPSELLS

QISHIOSOCI FE

During the launch phase, you can access QishioSoci for a one-time investment.

With just $13 – $16.29 to spend, you will get access to all of the amazing features I mentioned above:


However, the price will increase to a monthly subscription soon, so you need to invest now while the offer is still valid. Be sure to grab this golden opportunity quickly! I know you don’t want to miss out on it and then regret it later!

Don’t hesitate because if you are not satisfied with this product, you can always ask for a full refund within 30 days of your purchase. You don’t need to take any risk buying this product!


QISHIOSOCI REVIEW – THE UPGRADES

Also, if you want to maximize your benefits with this product, you can consider buying these upsells once you check out:

OTO 1: QishioSoci Unlimited – $27 – $37

OTO 2: QishioSoci Automation – $47 – $67

OTO 3: QishioSoci DFY – $97 – $197

OTO 4: QishioSoci Reseller – $47 – $67

OTO 5: QishioSoci Steal Our Website Traffic – $67 – $97

OTO 6: Qishio DFY Traffic – $397

👉⚠️Click Here To Get QishioSoci And Custom Bonuses⚠️👈

QISHIOSOCI REVIEW – PROS AND CONS

PROS:

   ♥   Generates the fastest results you have ever seen

   ♥   Is equally effective for experienced and beginner marketers

   ♥   Sends free targeted traffic to your affiliate links from 40 sources on complete autopilot

   ♥   Fixes all of your traffic generation problems in 1 click

   ♥   Requires no technical skills, experience or budget

   ♥   100% newbie friendly

   ♥   Pay once only

CONS:

   X   I have nothing bad to say about this amazing product

WHO IS THIS FOR?

From my own experience, this amazing product is cut out for:

   ♥   Affiliate marketer

   ♥   Product creator

   ♥   Entrepreneur

   ♥   Newbie

   ♥   Business owner

   ♥   Local or small business

   ♥   Local Marketing Consultants

   ♥   Website Owner

   ♥   SEOer

   ♥   Ecom site owner

   ♥   Freelancer

   ♥   Blogger

   ♥   Author and coach

QISHIOSOCI REVIEW – CONCLUSION

The bottom line is, QishioSoci is the most uncomplicated social media management & marketing technology ever. It lets you design, post, sell and respond on social media, all practically on autopilot – all from one dashboard with no recurring fees.

So I hope my QishioSoci review has given you enough useful information. Please remember that this is a golden opportunity to transform your life, and keep in mind that this kind of product cannot get any cheaper, so be quick, because this good deal certainly won’t last long!

Once again, I wish you all a good choice. Thank you for reading my review!


👉⚠️Click Here To Get QishioSoci And Custom Bonuses⚠️👈

THE BONUSES FROM ME

The bonuses are carefully selected and presented with descriptions, in the hope of facilitating your online business activities.


You will get the first 6 powerful Packages for purchasing FE + 1 OTO

(Buy FE only? No worries! Pick 4 packages to your liking!)

(Bonus Delivery Note is at the end of this QishioSoci review)

PACKAGE 01: ADD TO YOUR DESIGN SERVICE

Bonus #1: DesignBundle – The Ultimate 10-In-1 Web & Graphics Design Suite


Bonus #2: All-in-One Solution to Create STUNNING Pro Quality Video Thumbnails

Thumbnail Templates, which are available in Standard Video Size, Square Video Size & Stories Video Size

Animated Thumbnail Template Samples

Bonus #3: MARKET CRUSH

Your Marketing Needs To Be Professional AND Consistent. You Need PORTFOLIO Marketing.

*Agency License: Sell Edited Portfolios to Business Clients.
*White Label License: Sell product and raw template files as your own


Bonus #4: LOCAL NICHE ARTICLE PACK


Bonus #5: SOCIAL COVER GRAPHICS


PACKAGE 02: EARN WITH SOCIAL POSTING SERVICES

Part 1 – 350 Business Templates


PART 2: 6 RESOURCES FOR SOCIAL POSTING TEMPLATE

Bonus #1: Food Social Media Kit


Bonus #2: 140+ Instagram Template Pack

Instagram Post Templates Full Bundle Pack suitable for all social media promotions.


Bonus #3: Creative Social Media Templates


Bonus #4: Instagram Quotes Stories Pack suitable for all social media kits


Bonus #5: 40 Pinterest Quotes


Bonus #6: Shutterstock Collection


PACKAGE 03: ADD TO YOUR CONTENT SERVICES

Want to create professional, unique content to engage visitors and gain better rankings? There’s no better way than providing informative content that keeps them on your website or social media pages longer. I have collected some great sources of e-books covering a variety of hot topics (self-help, health & wellness, making money online) with PLR, helping you attract more eyeballs.

Bonus #1: 70 Ebooks on Health, Fitness & Weightloss with PLR

Health & Wellness has been among the hottest niches, as people, no matter their background, culture, or economic status, want to be happy and healthy. They are easily attracted to content on these topics and are willing to buy products or treatments that will help them improve their health & lifestyle.

That’s why this bonus package will give you an unfair advantage in generating content for your online presence. You will save a huge amount of money on copywriting services.



Bonus #2: Executive Collection PLR

The ONLY Personal Development PLR Ever Created By an Executive Director of the John Maxwell Team


Executive Collection is a brand new line of premium, gorgeous, high production value PLR courses that you’ll actually be proud to offer to your subscribers and customers.


PACKAGE 04: MAKE MONEY CREATING ADS FOR BUSINESS

Quick Adz – Create High Converting Animated Ads In Just 10 Minutes with 440+ Multipurpose Video Templates


Here’s What You Will Get Inside Quick Adz

20 MODULES OF THE MOST EYE CATCHING & PROFITABLE 2021 DESIGNS

GOOGLE ADS ANIMATED TEMPLATES

SOCIAL MEDIA ANIMATED TEMPLATES

STATIC MARKETING PACK (YOUTUBE – FACEBOOK – TWITTER) COVERS

Module #1 – Animated Google Ads Design Templates

SAMPLE: SOCIAL MEDIA NICHE “TRAVEL”

 

 

SAMPLE: SOCIAL MEDIA NICHE “COFFEE SHOP”

 

 

Module #2 – Animated Social Media Design Templates

SAMPLE: SOCIAL MEDIA NICHE “TRAVEL”

 

 

SOCIAL MEDIA NICHE “COFFEE SHOP”

 

 

Module #3 – Static Cover Design Templates For Facebook, Twitter & YouTube


PACKAGE 05: BRING MORE TRAFFIC TO YOUR BLOGS WITH VIDEO & SOCIAL MEDIA

This package is aimed to help you generate better social media & content marketing campaigns:

PACKAGE 06: SOCIAL MEDIA & VIDEO BONUSES


FROM YOUR THIRD PURCHASE, PICK 2 EXTRA PACKAGES BELOW FOR EACH OTO PURCHASE MADE

EXTRA PACKAGE 01: VIDEO MATERIALS

Part 1: Motion Graphics Pack


The only setup & effects toolkit that is packed with 4500+ ready-to-use elements & presets that are just a few clicks away from turning your content into a masterpiece.

For a more intuitive and faster experience, this Graphics Library also comes with the AtomX Extension, an After Effects extension bundled in the package.

The extension is really simple to use, and as the toolkit is packed with a huge collection of elements & presets, the AtomX Extension makes it a lot simpler to find the right assets for the right job.

Below is a quick recap of what you’re getting:

  • 70 Slideshows
  • 160+ Typography Slides
  • 15+ Typography Backgrounds
  • 60 IG Stories
  • 200+ Titles
  • 50 Wedding & Floral Titles
  • 200+ Lower Thirds
  • 60+ Logo Reveals
  • 180+ Social Media Elements
  • 30 Animated Devices
  • 60 Call outs
  • 300+ Shape Elements
  • 200+ Icons
  • 100+ Backgrounds
  • 50+ Infographics
  • 25 Audio Spectrums
  • 500+ Sound FX
  • 50 Brush Transitions
  • 100+ Flat Transitions
  • 60 Ink Transitions
  • 70 Seamless Transitions
  • 30 Shape Transitions
  • 60 Displacement Transitions
  • 50+ Animated Gradients
  • 30+ Animated IG Post
  • 25+ Audio Spectrums
  • 75+ Color Filters
  • 70 Color Palettes
  • and so much more…


Part 2: Smart Animation Pro FE + OTO 1 + Launch Bonuses

You’re getting several sets of characters to make videos for any marketing goal: sales videos, whiteboard videos, explainer videos, tutorial videos, etc., which you can then place on your video website for more traffic and better sales conversion.


EXTRA PACKAGE 02: MORE MORE MORE TRAFFIC

Traffic generation is your struggle? No more worries! This bonus package will help you out!

EXTRA PACKAGE 03: AGENCY MARKETING KIT


Bonus 8: Moto Theme 4.0 with 2 OTO PLUS Unlimited Sites


Bonus 9: Content & Print Ready Graphics For Boosting Your Brand On Social Media


Bonus 10: Funnel & Templates To Boost Conversion


Bonus #3: PLR Jackpot 2

You’re getting PLR ebooks on Business & Money, niche-related topics ranging from SEO methods to YouTube strategies to viral methods, Personal Development, Health and Wellness, Internet Marketing, Self-Help, and more. All of these ebooks include .docx files, .pdf files, hi-res covers, and .psd files.

EXTRA PACKAGE 04: THE NECESSARY WEAPONS

EXTRA PACKAGE 05: LEAD GENERATION BONUSES

Find it hard to generate leads for your campaigns? The bonus package below might help you with that!

EXTRA PACKAGE 06: LIST BUILDING

(17 BONUSES)


EXTRA PACKAGE 07: VIDEO MATERIALS – ENVIDIO YOUTUBER THINGS

Producing a stunning video is hard?

Moreover, due to short attention spans, we only have a couple of seconds to attract people to watch our videos. If we fail, no matter how high-quality our videos are, we try in vain!

So I hope to help you in this part by offering Envidio – YouTuber Things (FE and OTO 1) as a bonus, to create a better intro for an awesome video and more. The details of Envidio FE are listed below. And OTO 1 (DELUXE) gives you more elements with a developer license.

EXTRA PACKAGE 08: AFFILIATE MARKETING BONUSES


HELP WITH YOUR AFFILIATE CAMPAIGNS

Besides email marketing, hopefully this package will give you another idea for getting sales and save you money on some extra tools you need for your promotion campaigns.

EXTRA PACKAGE 09: GRAPHICS BONUSES


EXTRA BONUS PACKAGE 10: THEME AND PLUGIN BONUSES


EXTRA PACKAGE 11: HANDY SOFTWARE

(28 BONUSES)


EXTRA PACKAGE 12: SEO BONUS

👉⚠️Click Here To Get QishioSoci And Custom Bonuses⚠️👈


Dylan  Iqbal

Dylan Iqbal

1638243664

ElectroDB: A DynamoDB Library to Make Single Table Designs Easier

ElectroDB

ElectroDB is a DynamoDB library that eases the use of multiple entities and complex hierarchical relationships in a single DynamoDB table.


 

Introducing: The NEW ElectroDB Playground

 

Try out and share ElectroDB Models, Services, and Single Table Design at electrodb.fun


Features


Turn this

tasks
  .patch({ 
    team: "core",
    task: "45-662", 
    project: "backend"
  })
  .set({ status: "open" })
  .add({ points: 5 })
  .append({ 
    comments: [{
      user: "janet",
      body: "This seems half-baked."
    }] 
  })
  .where(( {status}, {eq} ) => eq(status, "in-progress"))
  .go();

Into This

{
    "UpdateExpression": "SET #status = :status_u0, #points = #points + :points_u0, #comments = list_append(#comments, :comments_u0), #updatedAt = :updatedAt_u0, #gsi1sk = :gsi1sk_u0",
    "ExpressionAttributeNames": {
        "#status": "status",
        "#points": "points",
        "#comments": "comments",
        "#updatedAt": "updatedAt",
        "#gsi1sk": "gsi1sk"
    },
    "ExpressionAttributeValues": {
        ":status0": "in-progress",
        ":status_u0": "open",
        ":points_u0": 5,
        ":comments_u0": [
            {
                "user": "janet",
                "body": "This seems half-baked."
            }
        ],
        ":updatedAt_u0": 1630977029015,
        ":gsi1sk_u0": "$assignments#tasks_1#status_open"
    },
    "TableName": "your_table_name",
    "Key": {
        "pk": "$taskapp#team_core",
        "sk": "$tasks_1#project_backend#task_45-662"
    },
    "ConditionExpression": "attribute_exists(pk) AND attribute_exists(sk) AND #status = :status0"
}

Table of Contents


Project Goals

ElectroDB focuses on simplifying the process of modeling, enforcing data constraints, querying across entities, and formatting complex DocumentClient parameters. Three important design considerations were made during the development of ElectroDB:

  1. ElectroDB should be able to be useful without having to query the database itself [read more].
  2. ElectroDB should be able to be added to a project that already has established tables, data, and access patterns [read more].
  3. ElectroDB should not require additional design considerations on top of those made for DynamoDB, and therefore should be able to be removed from a project at any time without sacrifice.

Installation

Install from NPM

npm install electrodb --save

Usage

Require/import Entity and/or Service from electrodb:

const {Entity, Service} = require("electrodb");
// or 
import {Entity, Service} from "electrodb";

Entities and Services

To see full examples of ElectroDB in action, go to the Examples section.

Entity allows you to create separate and individual business objects in a DynamoDB table. When queried, your results will not include other Entities that also exist in the same table. This allows you to easily achieve single table design, as recommended by AWS. For more detail, read Entities.

Service allows you to build relationships across Entities. A service imports Entity Models, builds individual Entities, and creates Collections to allow cross Entity querying. For more detail, read Services.

You can use Entities independent of Services; you do not need to import models into a Service to use them individually. However, if you intend to make queries that join or span multiple Entities, you will need to use a Service.

Entities

In ElectroDB, an Entity represents a single business object. For example, in a simple task tracking application, one Entity could represent an Employee and another a Task that is assigned to an employee.

Require or import Entity from electrodb:

const {Entity} = require("electrodb");
// or
import {Entity} from "electrodb";

When using TypeScript, for strong type checking, be sure to either add your model as an object literal to the Entity constructor or create your model using const assertions with the as const syntax.
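
For illustration, here is a minimal sketch of both approaches; the books model, its attributes, and the table name below are hypothetical:

import { Entity } from "electrodb";

// Option 1: pass the model as an object literal directly to the constructor,
// letting TypeScript infer the exact attribute and index types.
const books = new Entity({
  model: { entity: "books", version: "1", service: "library" },
  attributes: {
    bookId: { type: "string" },
    title: { type: "string", required: true },
  },
  indexes: {
    book: {
      pk: { field: "pk", composite: ["bookId"] },
      sk: { field: "sk", composite: [] },
    },
  },
}, { table: "my_table_name" });

// Option 2: define the model separately and freeze it with a const assertion,
// so the Entity constructor still receives the narrowed literal types.
const BooksModel = {
  model: { entity: "books", version: "1", service: "library" },
  attributes: {
    bookId: { type: "string" },
    title: { type: "string", required: true },
  },
  indexes: {
    book: {
      pk: { field: "pk", composite: ["bookId"] },
      sk: { field: "sk", composite: [] },
    },
  },
} as const;

const booksAgain = new Entity(BooksModel, { table: "my_table_name" });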

Services

In ElectroDB, a Service represents a collection of related Entities. Services allow you to build queries that span across Entities. Similar to Entities, Services can coexist on a single table without collision. You can use Entities independent of Services; you do not need to import models into a Service to use them individually. However, you do need to use a Service if you intend to make queries that join multiple Entities.

Require:

const {Service} = require("electrodb");
// or
import {Service} from "electrodb";

TypeScript Support

Previously it was possible to generate type definition files (.d.ts) for your Models, Entities, and Services with the Electro CLI. New with version 0.10.0 is TypeScript support for Entities and Services.

As of this writing, this functionality is still a work in progress, and enforcement of some of ElectroDB's query constraints has not yet been written into the type checks. Most notable are the following constraints, which are not yet enforced by the type checker but are enforced at query runtime:

  • Sort Key Composite Attribute order is not strongly typed. Sort Key Composite Attributes must be provided in the order they are defined on the model to build the key appropriately. This will not cause an error at query runtime, but be sure your partial Sort Keys are provided in accordance with your model to fully leverage Sort Key queries. For more information about composite attribute ordering, see the section on Composite Attributes.
  • Put/Create/Update/Patch/Delete operations that partially impact index composite attributes are not statically typed. When performing a put or update type operation that impacts a composite attribute of a secondary index, ElectroDB performs a check at runtime to ensure all composite attributes of that key are included. This is detailed more in the section Composite Attribute and Index Considerations.
  • Use of the params method does not yet return strict types.
  • Use of the raw or includeKeys query options do not yet impact the returned types.

If you experience any issues using TypeScript with ElectroDB, your feedback is very important: please create a GitHub issue so it can be addressed.

See the section Exported TypeScript Types to read more about the useful types exported from ElectroDB.

TypeScript Services

New with version 0.10.0 is TypeScript support. To ensure accurate types, TypeScript users should create their services by passing an object literal or const object that maps Entity alias names to Entity instances.

const table = "my_table_name";
const employees = new Entity(EmployeesModel, { client, table });
const tasks = new Entity(TasksModel, { client, table });
const TaskApp = new Service({employees, tasks});

The property name you assign to the entity will then be the "alias", or name, by which you can reference that entity through the Service. Aliases can be useful if you are building a service with multiple versions of the same entity, or if you wish to change the reference name of an entity without impacting the schema/key names of that entity.

Services take an optional second parameter, similar to Entities, with a client and table. Using this constructor interface, the Service will use the values already set on the joined entities, if they were provided, or the values passed here to override the client or table name on the individual entities.

Not yet available for TypeScript, this pattern will also accept Models, or a mix of Entities and Models, in the same object literal format.

Join

When using JavaScript, use join to add Entities or Models onto a Service.

NOTE: If using TypeScript, see Joining Entities at Service construction for TypeScript to learn how to "join" entities for use in a TypeScript project.

Independent Models

let table = "my_table_name";
let employees = new Entity(EmployeesModel, { client, table });
let tasks = new Entity(TasksModel, { client, table });

Joining Entity instances to a Service

// Joining Entity instances to a Service
let TaskApp = new Service("TaskApp", { client, table });
TaskApp
    .join(employees) // available at TaskApp.entities.employees
    .join(tasks);    // available at TaskApp.entities.tasks

Joining models to a Service

let TaskApp = new Service("TaskApp", { client, table });
TaskApp
    .join(EmployeesModel) // available at TaskApp.entities.employees (based on entity name in model)
    .join(TasksModel);    // available at TaskApp.entities.tasks (based on entity name in model)

Joining Entities or Models with an alias

let TaskApp = new Service("TaskApp", { client, table });
TaskApp
    .join("personnel", EmployeesModel) // available at TaskApp.entities.personnel
    .join("directives", TasksModel); // available at TaskApp.entities.directives

Joining Entities at Service construction for TypeScript

let TaskApp = new Service({
    personnel: employees, // available at TaskApp.entities.personnel
    directives: tasks, // available at TaskApp.entities.directives
});

When joining a Model/Entity to a Service, ElectroDB will perform a number of validations to ensure that the Entity conforms to expectations collectively established by all joined Entities.

  • Entity names must be unique across a Service.
  • Collection names must be unique across a Service.
  • All Collections must map to the same DynamoDB indexes with the same index field names. See Indexes.
  • Partition Key Composite Attributes on a Collection must have the same attribute names and labels (if applicable). See Attribute Definitions.
  • The name of the Service in the Model must match the Name defined on the Service instance.
  • Joined instances must be type Model or Entity.
  • If the attributes of an Entity have overlapping names with other attributes in that service, they must all have compatible or matching attribute definitions.
  • All models conform to the same model format. If you created your model prior to ElectroDB version 0.9.19 see section Version 1 Migration.

Model

Create an Entity's schema, as in the example below.

const DynamoDB = require("aws-sdk/clients/dynamodb");
const {v4: uuidv4} = require("uuid"); // used by the default callbacks below
const {Entity, Service} = require("electrodb");
const client = new DynamoDB.DocumentClient();
const EmployeesModel = {
    model: {
        entity: "employees",
        version: "1",
        service: "taskapp",
    },
    attributes: {
        employee: {
            type: "string",
            default: () => uuidv4(),
        },
        firstName: {
            type: "string",
            required: true,
        },
        lastName: {
            type: "string",
            required: true,
        },
        office: {
            type: "string",
            required: true,
        },
        title: {
            type: "string",
            required: true,
        },
        team: {
            type: ["development", "marketing", "finance", "product", "cool cats and kittens"],
            required: true,
        },
        salary: {
            type: "string",
            required: true,
        },
        manager: {
            type: "string",
        },
        dateHired: {
            type: "string",
            validate: /^\d{4}-\d{2}-\d{2}$/gi
        },
        birthday: {
            type: "string",
            validate: /^\d{4}-\d{2}-\d{2}$/gi
        },
    },
    indexes: {
        employee: {
            pk: {
                field: "pk",
                composite: ["employee"],
            },
            sk: {
                field: "sk",
                composite: [],
            },
        },
        coworkers: {
            index: "gsi1pk-gsi1sk-index",
            collection: "workplaces",
            pk: {
                field: "gsi1pk",
                composite: ["office"],
            },
            sk: {
                field: "gsi1sk",
                composite: ["team", "title", "employee"],
            },
        },
        teams: {
            index: "gsi2pk-gsi2sk-index",
            pk: {
                field: "gsi2pk",
                composite: ["team"],
            },
            sk: {
                field: "gsi2sk",
                composite: ["title", "salary", "employee"],
            },
        },
        employeeLookup: {
            collection: "assignments",
            index: "gsi3pk-gsi3sk-index",
            pk: {
                field: "gsi3pk",
                composite: ["employee"],
            },
            sk: {
                field: "gsi3sk",
                composite: [],
            },
        },
        roles: {
            index: "gsi4pk-gsi4sk-index",
            pk: {
                field: "gsi4pk",
                composite: ["title"],
            },
            sk: {
                field: "gsi4sk",
                composite: ["salary", "employee"],
            },
        },
        directReports: {
            index: "gsi5pk-gsi5sk-index",
            pk: {
                field: "gsi5pk",
                composite: ["manager"],
            },
            sk: {
                field: "gsi5sk",
                composite: ["team", "office", "employee"],
            },
        },
    },
};

const TasksModel = {
    model: {
        entity: "tasks",
        version: "1",
        service: "taskapp",
    },
    attributes: {
        task: {
            type: "string",
            default: () => uuidv4(),
        },
        project: {
            type: "string",
        },
        employee: {
            type: "string",
        },
        description: {
            type: "string",
        },
    },
    indexes: {
        task: {
            pk: {
                field: "pk",
                composite: ["task"],
            },
            sk: {
                field: "sk",
                composite: ["project", "employee"],
            },
        },
        project: {
            index: "gsi1pk-gsi1sk-index",
            pk: {
                field: "gsi1pk",
                composite: ["project"],
            },
            sk: {
                field: "gsi1sk",
                composite: ["employee", "task"],
            },
        },
        assigned: {
            collection: "assignments",
            index: "gsi3pk-gsi3sk-index",
            pk: {
                field: "gsi3pk",
                composite: ["employee"],
            },
            sk: {
                field: "gsi3sk",
                composite: ["project", "task"],
            },
        },
    },
};

Model Properties

| Property | Description |
|---|---|
| model.service | Name of the application using the entity, used to namespace all entities |
| model.entity | Name of the entity that the schema represents |
| model.version | (optional) The version number of the schema, used to namespace keys |
| attributes | An object containing each attribute that makes up the schema |
| indexes | An object containing table indexes, including the values for the table's default Partition Key and Sort Key |

Service Options

Optional second parameter

| Property | Description |
|---|---|
| table | The name of the DynamoDB table in AWS. |
| client | (optional) An instance of the docClient from the aws-sdk, for use when querying a DynamoDB table. This is optional if you wish to only use the params functionality, but required if you actually need to query against a database. |

Attributes

Attributes define an Entity record. The AttributeName represents the value your code will use to represent an attribute.

Pro-Tip: Using the field property, you can map an AttributeName to a different field name in your table. This can be useful to utilize existing tables, existing models, or even to reduce record sizes via shorter field names. For example, you may refer to an attribute as organization but want to save the attribute with a field name of org in DynamoDB.
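
As a sketch of that mapping (the organization attribute below is hypothetical), your code refers to the attribute as organization while DynamoDB stores it under org:

attributes: {
  organization: {
    type: "string",
    // Your code reads/writes `organization`; the table stores the value
    // under the shorter field name "org".
    field: "org",
  }
}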

Simple Syntax

Assign just the type of the attribute directly to the attribute name. Currently supported types are "string", "number", "boolean", "list", "map", "set", an array of strings representing a fixed set of possible values, or "any", which disables value type checking on that attribute.

attributes: {
    <AttributeName>: "string" | "number" | "boolean" | "list" | "map" | "set" | "any" | string[] | ReadonlyArray<string> 
}

Expanded Syntax

Use the expanded syntax to build out more robust attribute options.

attributes: {
    <AttributeName>: {
        type: "string" | "number" | "boolean" | "list" | "map" | "set" | "any" | ReadonlyArray<string>;
        required?: boolean;
        default?: <type> | (() => <type>);
        validate?: RegExp | ((value: <type>) => void | string);
        field?: string;
        readOnly?: boolean;
        label?: string;
        cast?: "number"|"string"|"boolean";
        get?: (attribute: <type>, schema: any) => <type> | void | undefined;
        set?: (attribute?: <type>, schema?: any) => <type> | void | undefined; 
        watch?: "*" | string[];
    }
}

NOTE: When using get/set in TypeScript, be sure to use the ?: syntax to denote an optional attribute on set

Attribute Definition

| Property | Type | Required | Types | Description |
|---|---|---|---|---|
| type | string, ReadonlyArray&lt;string&gt;, string[] | yes | all | Accepts the values "string", "number", "boolean", "map", "list", "set", an array of strings representing a finite list of acceptable values (["option1", "option2", "option3"]), or "any", which disables value type checking on that attribute. |
| required | boolean | no | all | Flag an attribute as required to be present when creating a record. This also acts as a kind of NOT NULL flag, preventing the attribute from being removed directly. |
| hidden | boolean | no | all | Flag an attribute as hidden to remove the property from results before they are returned. |
| default | value, () =&gt; value | no | all | Either the default value itself or a synchronous function that returns the desired value. Applied before set and before the required check. |
| validate | RegExp, (value: any) =&gt; void, (value: any) =&gt; string | no | all | Either a regex or a synchronous callback that returns an error string (resulting in an exception using the string as the error's message) or throws an exception in the event of an error. |
| field | string | no | all | The name of the attribute as it exists in DynamoDB, if named differently in the schema attributes. Defaults to the AttributeName as defined in the schema. |
| readOnly | boolean | no | all | Prevents an attribute from being updated after the record has been created. Attributes used in the composition of the table's primary Partition Key and Sort Key are read-only by default. The one exception to readOnly is for properties that also use the watch property; read attribute watching for more detail. |
| label | string | no | all | Used in index composition to prefix key composite attributes. By default, the AttributeName is used as the label. |
| cast | "number", "string", "boolean" | no | all | Optionally cast attribute values when interacting with DynamoDB. Current options include "number", "string", and "boolean". |
| set | (attribute, schema) =&gt; value | no | all | A synchronous callback allowing you to apply changes to a value before it is set in params or applied to the database. The first value represents the value passed to ElectroDB; the second value is the attributes passed on that update/put. |
| get | (attribute, schema) =&gt; value | no | all | A synchronous callback allowing you to apply changes to a value after it is retrieved from the database. The first value represents the value returned from the database; the second value is the attributes retrieved from the database. |
| watch | Attribute[], "*" | no | root-only | Define other attributes that will always trigger your attribute's getter and setter callbacks after their own getter/setter callbacks are executed. Only available on root level attributes. |
| properties | {[key: string]: Attribute} | yes* | map | Define the properties available on a "map" attribute; required if your attribute is a map. Syntax for map properties is the same as root level attributes. |
| items | Attribute | yes* | list | Define the attribute type your list attribute will contain; required if your attribute is a list. Syntax for list items is the same as a single attribute. |
| items | "string", "number" | yes* | set | Define the item type your set attribute will contain ("string" or "number"); required if your attribute is a set. |

Enum Attributes

When using TypeScript, if you wish to also enforce this type, make sure to use the as const syntax. If TypeScript is not told this array is readonly, then even when your model is passed directly to the Entity constructor, it will not resolve the unique values within that array.

This may be desirable, however, as enforcing the type value can require consumers of your model to do more work to resolve the type beyond just the type string.

NOTE: Regardless of whether you use TypeScript or JavaScript, ElectroDB will enforce at runtime that supplied values match the supplied array of values.

The following example shows the differences in how TypeScript may enforce your enum value:

attributes: {
  myEnumAttribute1: {
      type: ["option1", "option2", "option3"]        // TypeScript enforces as `string[]`
  },
  myEnumAttribute2: {
    type: ["option1", "option2", "option3"] as const // TypeScript enforces as `"option1" | "option2" | "option3" | undefined`
  },
  myEnumAttribute3: {
    required: true,
    type: ["option1", "option2", "option3"] as const // TypeScript enforces as `"option1" | "option2" | "option3"`
  }
}

Map Attributes

Map attributes leverage DynamoDB's native support for object-like structures. The attributes within a map are defined under the properties property, using a syntax that mirrors the one used to define root level attributes. You are not limited in the types of attributes you can nest inside a map attribute.

attributes: {
  myMapAttribute: {
    type: "map",
    properties: {
      myStringAttribute: {
        type: "string"
      },
      myNumberAttribute: {
        type: "number"
      }
    }
  }
}

List Attributes

List attributes model array-like structures with DynamoDB's List type. The elements of a List attribute are defined using the items property. Similar to Map properties, ElectroDB does not restrict the types of items that can be used with a list.

attributes: {
  myStringList: { 
    type: "list",
    items: {
      type: "string"
    },
  },
  myMapList: {
    type: "list",
    items: {
      type: "map",
      properties: {
        myStringAttribute: {
          type: "string"
        },
        myNumberAttribute: {
          type: "number"
        }
      }
    }
  }
}

Set Attributes

The Set attribute is arguably DynamoDB's most powerful type. ElectroDB supports String and Number Sets using the items property set as either "string" or "number".

In addition to having the same modeling benefits you get with other attributes, ElectroDB also simplifies the use of Sets by removing the need to use DynamoDB's special createSet class to work with Sets. ElectroDB Set Attributes accept Arrays, JavaScript native Sets, and objects from createSet as values. ElectroDB will manage the casting of values to a DynamoDB Set value prior to saving and ElectroDB will also convert Sets back to JavaScript arrays on retrieval.

NOTE: If you are using TypeScript, Sets are currently typed as Arrays to simplify the type system. Again, ElectroDB will handle the conversion of these Arrays without the need to use client.createSet().

attributes: {
  myStringSet: {
    type: "set",
    items: "string"
  },
  myNumberSet: {
    type: "set",
    items: "number"
  }
}
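
To illustrate the flexible inputs described above, here is a hedged usage sketch (the entity instance and its id attribute are hypothetical):

await myEntity.put({
  id: "abc",
  myStringSet: ["tag1", "tag2"],  // a plain array is accepted
  myNumberSet: new Set([1, 2, 3]) // a native JavaScript Set is accepted
}).go();

// On retrieval, both sets come back as plain JavaScript arrays.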

Attribute Getters and Setters

Using get and set on an attribute allows you to apply logic just before and just after modifying or retrieving a field from DynamoDB. Both callbacks should be pure synchronous functions and may be invoked multiple times during one query.

The first argument in an attribute's get or set callback is the value received in the query. The second argument, called "item", is an object containing the values of other attributes on the item as it was given or retrieved. If your attribute uses watch, the getter or setter of the attribute being watched will be invoked before your own getter or setter, and the updated value will be present on the "item" argument instead of the original.

NOTE: Using getters/setters on Composite Attributes is not recommended without considering the consequences of how that will impact your keys. When a Composite Attribute is supplied for a new record via a put or create operation, or is changed via a patch or update operation, the attribute's set callback will be invoked prior to formatting/building your record's keys when creating or updating a record.

ElectroDB invokes an Attribute's get method in the following circumstances:

  1. If a field exists on an item after retrieval from DynamoDB, the attribute associated with that field will have its getter method invoked.
  2. After a put or create operation is performed, attribute getters are applied against the object originally received and returned.
  3. When using ElectroDB's attribute watching functionality, an attribute will have its getter callback invoked whenever the getter callback of any "watched" attributes are invoked. Note: The getter of an Attribute Watcher will always be applied after the getters for the attributes it watches.

ElectroDB invokes an Attribute's set callback in the following circumstances:

  1. Setters for all Attributes will always be invoked when performing a create or put operation.
  2. Setters will only be invoked when an Attribute is modified when performing a patch or update operation.
  3. When using ElectroDB's attribute watching functionality, an attribute will have its setter callback invoked whenever the setter callback of any "watched" attributes are invoked. Note: The setter of an Attribute Watcher will always be applied after the setters for the attributes it watches.
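
For example, a minimal sketch of a getter/setter pair (the username attribute is hypothetical):

attributes: {
  username: {
    type: "string",
    // invoked on create/put, and on update/patch when this attribute is modified
    set: (value) => typeof value === "string" ? value.trim().toLowerCase() : value,
    // invoked after the field is retrieved from DynamoDB, before results are returned;
    // the returned value replaces the retrieved one (here it is passed through unchanged)
    get: (value) => value
  }
}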

NOTE: As of ElectroDB 1.3.0, the watch property is only possible for root level attributes. Watch is currently not supported for nested attributes like properties on a "map" or items of a "list".

Attribute Watching

Attribute watching is a powerful feature in ElectroDB that can be used to solve many unique challenges with DynamoDB. In short, you can define a column to have its getter/setter callbacks called whenever another attribute's getter or setter callbacks are called. If you haven't read the section on Attribute Getters and Setters, it will provide you with more context about when an attribute's mutation callbacks are called.

Because DynamoDB allows for a flexible schema, and ElectroDB allows for optional attributes, it is possible for items belonging to an entity to not have all attributes when setting or getting records. Sometimes values or changes to other attributes will require corresponding changes to another attribute. Sometimes, to fully leverage some advanced model denormalization or query access patterns, it is necessary to duplicate some attribute values with similar or identical values. This functionality has many uses; below are just a few examples of how you can use watch:

NOTE: Using the watch property impacts the order in which getters and setters are called. You cannot watch another attribute that also uses watch, so ElectroDB first invokes the getters or setters of attributes without the watch property, then subsequently invokes the getters or setters of attributes that use watch.

myAttr: { 
  type: "string",
  watch: ["otherAttr"],
  set: (myAttr, {otherAttr}) => {
    // Whenever "myAttr" or "otherAttr" are updated from an `update` or `patch` operation, this callback will be fired. 
    // Note: myAttr or otherAttr could be independently undefined because either attribute could have triggered this callback
  },
  get: (myAttr, {otherAttr}) => {
    // Whenever "myAttr" or "otherAttr" are retrieved from a `query` or `get` operation, this callback will be fired. 
    // Note: myAttr or otherAttr could be independently undefined because either attribute could have triggered this callback.
  } 
}

Attribute Watching: Watch All

If your attribute needs to watch for any changes to an item, you can model this by supplying the watch property a string value of "*":

myAttr: { 
  type: "string",
  watch: "*", // "watch all"
  set: (myAttr, allAttributes) => {
    // Whenever an `update` or `patch` operation is performed, this callback will be fired. 
    // Note: myAttr or the attributes under `allAttributes` could be independently undefined because either attribute could have triggered this callback
  },
  get: (myAttr, allAttributes) => {
    // Whenever a `query` or `get` operation is performed, this callback will be fired. 
    // Note: myAttr or the attributes under `allAttributes` could be independently undefined because either attribute could have triggered this callback
  } 
}

Attribute Watching Examples

Example 1 - A calculated attribute that depends on the value of another attribute:

In this example, we have an attribute "fee" that needs to be updated any time an item's "price" attribute is updated. The attribute "fee" uses watch to have its setter callback called any time "price" is updated via a put, create, update, or patch operation.

Try it out!

{
  model: {
    entity: "products",
    service: "estimator",
    version: "1"
  },
  attributes: {
    product: {
      type: "string"
    },
    price: {
      type: "number",
      required: true
    },
    fee: {
      type: "number",
      watch: ["price"],
      set: (_, {price}) => {
        return price * .2;
      }
    }
  },
  indexes: {
    pricing: {
      pk: {
        field: "pk",
        composite: ["product"]
      },
      sk: {
        field: "sk",
        composite: []
      }
    }
  }
}

Example 2 - Making a virtual attribute that never persists to the database:

In this example we have an attribute "displayPrice" that needs its getter called anytime an item's "price" attribute is retrieved. The attribute "displayPrice" uses watch to return a formatted price string whenever an item with a "price" attribute is queried. Additionally, "displayPrice" always returns undefined from its setter callback to ensure that it will never write data back to the table.

{
  model: {
    entity: "services",
    service: "costEstimator",
    version: "1"
  },
  attributes: {
    service: {
      type: "string"
    },
    price: {
      type: "number",
      required: true
    },
    displayPrice: {
      type: "string",
      watch: ["price"],
      get: (_, {price}) => {
        return "$" + price;  
      },
      set: () => undefined
    }
  },
  indexes: {
    pricing: {
      pk: {
        field: "pk",
        composite: ["service"]
      },
      sk: {
        field: "sk",
        composite: []
      }
    }
  }
}

Example 3 - Creating a more filter-friendly version of an attribute without impacting the original attribute:

In this example we have an attribute "descriptionSearch" which will help our users easily filter for transactions by "description". To ensure our filters will not take into account a description's character casing, descriptionSearch duplicates the value of "description" so it can be used in filters without impacting the original "description" value. Without ElectroDB's watch functionality, accomplishing this would require you to either duplicate this logic or permanently modify the property itself. Additionally, the "descriptionSearch" attribute uses hidden: true to ensure this value will not be presented to the user.

{
  model: {
    entity: "transaction",
    service: "bank",
    version: "1"
  },
  attributes: {
    accountNumber: {
      type: "string"
    },
    transactionId: {
      type: "string"
    },
    amount: {
      type: "number",
    },
    description: {
      type: "string",
    },
    descriptionSearch: {
      type: "string",
      hidden: true,
      watch: ["description"],
      set: (_, {description}) => {
        if (typeof description === "string") {
            return description.toLowerCase();
        }
      }
    }
  },
  indexes: {
    transactions: {
      pk: {
        field: "pk",
        composite: ["accountNumber"]
      },
      sk: {
        field: "sk",
        composite: ["transactionId"]
      }
    }
  }
}

Example 4 - Creating an updatedAt property:

In this example we can easily create both updatedAt and createdAt attributes on our model. createdAt will use ElectroDB's set and readOnly attribute properties, while updatedAt will make use of readOnly, and watch with the "watchAll" syntax: {watch: "*"}. By supplying an asterisk, instead of an array of attribute names, attributes can be defined to watch all changes to all attributes.

Using watch in conjunction with readOnly is another powerful modeling technique. This combination allows you to model attributes that can only be modified via the model and not via the user. This is useful for attributes that need to be locked down and/or strictly calculated.

Notable about this example is that both updatedAt and createdAt use the set property without using its arguments. The readOnly property only prevents modification of an attribute via update and patch. By disregarding the arguments passed to set, the updatedAt and createdAt attributes are effectively locked down from user influence/manipulation.

{
  model: {
    entity: "transaction",
    service: "bank",
    version: "1"
  },
  attributes: {
    accountNumber: {
      type: "string"
    },
    transactionId: {
      type: "string"
    },
    description: {
      type: "string",
    },
    createdAt: {
      type: "number",
      readOnly: true,
      set: () => Date.now()
    },
    updatedAt: {
      type: "number",
      readOnly: true,
      watch: "*",
      set: () => Date.now()
    }
  },
  indexes: {
    transactions: {
      pk: {
        field: "pk",
        facets: ["accountNumber"]
      },
      sk: {
        field: "sk",
        facets: ["transactionId"]
      }
    }
  }
}

Calculated Attributes

See: Attribute Watching (Example 1).

Virtual Attributes

See: Attribute Watching (Example 2).

CreatedAt and UpdatedAt Attributes

See: Attribute Watching (Example 4).

Attribute Validation

The validate property allows for multiple function/type signatures. Here are the different combinations ElectroDB supports:

| signature | behavior |
| --------- | -------- |
| RegExp | ElectroDB will call .test(val) on the provided regex with the value passed to this attribute. |
| (value: T) => string | If a string value with length is returned, the text will be considered the reason the value is invalid. A new exception is generated using this text as the message. |
| (value: T) => boolean | If a boolean value is returned, true or truthy values signify that a value is invalid, while false or falsey values are considered valid. |
| (value: T) => void | A void or undefined return value is treated as successful; in this scenario you can throw an Error yourself to interrupt the query. |
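
A hedged sketch of the first two signatures above (the attribute names are hypothetical):

attributes: {
  zipCode: {
    type: "string",
    validate: /^\d{5}$/ // regex: ElectroDB calls .test(value)
  },
  price: {
    type: "number",
    validate: (value) => {
      if (value < 0) {
        // a returned string becomes the thrown error's message
        return "price must be non-negative";
      }
    }
  }
}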

Indexes

When using ElectroDB, indexes are referenced by their AccessPatternName. This allows you to maintain generic index names on your DynamoDB table, but reference domain specific names while using your ElectroDB Entity. These will often be referenced as "Access Patterns".

All DynamoDB tables start with at least a Partition Key and an optional Sort Key; together these are referred to as the "Table Index". The indexes object requires at least the definition of this Table Index's Partition Key and (if applicable) Sort Key.

In your model, the Table Index is expressed as an Access Pattern without an index property. For Secondary Indexes, use the index property to define the name of the index as defined on your DynamoDB table.

Within these Access Patterns, you define the Partition Key and (optionally) Sort Keys that are present on your DynamoDB table and map each key's name on the table with the field property.

indexes: {
    [AccessPatternName]: {
        pk: {
            field: string; 
            composite: AttributeName[];
            template?: string;
        },
        sk?: {
            field: string;
            composite: AttributeName[];
            template?: string;
        },
        index?: string
        collection?: string | string[]
    }
}
| Property | Type | Required | Description |
| -------- | ---- | -------- | ----------- |
| pk | object | yes | Configuration for the pk of that index or table. |
| pk.composite | string, string[] | yes | An array that represents the order in which attributes are concatenated to build the key (see Composite Attribute Arrays below for more on this functionality). |
| pk.template | string | no | A string that represents the template with which attributes are composed to form a key (see Composite Attribute Templates below for more on this functionality). |
| pk.field | string | yes | The name of the attribute as it exists in DynamoDB, if named differently in the schema attributes. |
| pk.casing | "default", "upper", "lower", "none" | no | The casing ElectroDB applies to this key when building it (see Index Casing below). |
| sk | object | no | Configuration for the sk of that index or table. |
| sk.composite | string, string[] | no | Same as pk.composite: the order in which attributes are concatenated to build the key. |
| sk.template | string | no | A string that represents the template with which attributes are composed to form a key (see Composite Attribute Templates below for more on this functionality). |
| sk.field | string | yes | The name of the attribute as it exists in DynamoDB, if named differently in the schema attributes. |
| sk.casing | "default", "upper", "lower", "none" | no | The casing ElectroDB applies to this key when building it (see Index Casing below). |
| index | string | no | Required when the index defined is a Secondary Index; left blank for the table's primary index. |
| collection | string, string[] | no | Used to define a Collection (or Sub-Collections) across Entities within a Service (see Collections below). |

Indexes Without Sort Keys

When an index does not have a Sort Key, this should be expressed as an index without an sk property at all. Indexes without an sk cannot have a collection; see Collections for more detail.

NOTE: It is generally recommended to always use Sort Keys when using ElectroDB as they allow for more advanced query opportunities. Even if your model doesn't need an additional property to define a unique record, having an sk with no defined composite attributes (e.g. an empty array) still opens the door to many more query opportunities like collections.

// ElectroDB interprets this as an index *not having* an SK.
{
  indexes: {
    myIndex: {
      pk: {
        field: "pk",
        composite: ["id"]
      }
    }
  }
}

Try it out!

Indexes With Sort Keys

When an index has a Sort Key, this should be expressed as an index with an sk property. If you don't wish to use the Sort Key in your model, but it does exist on the table, simply use an empty array for the composite property. An empty array is still very useful, and opens the door to more query opportunities and access patterns like collections.

// ElectroDB interprets this as an index *having* an SK, but this model doesn't assign any composite attributes to it.
{
  indexes: {
    myIndex: {
      pk: {
        field: "pk",
        composite: ["id"]
      },
      sk: {
        field: "sk",
        composite: []
      }
    }
  }
}

Try it out!

Numeric Keys

If you have an index where the Partition or Sort Keys are expected to be numeric values, you can accomplish this with the template property on the index that requires numeric keys. Define the attribute used in the composite template as type "number", and then create a template string with only the attribute's name.

For example, this model defines both the Partition and Sort Key as numeric:

const schema = {
  model: {
    entity: "numeric",
    service: "example",
    version: "1"
  },
  attributes: {
    number1: {
      type: "number" // defined as number
    },
    number2: {
      type: "number"  // defined as number
    }
  },
  indexes: {
    record: {
      pk: {
        field: "pk",
        template: "${number1}" // will build PK as numeric value 
      },
      sk: {
        field: "sk",
        template: "${number2}" // will build SK as numeric value
      }
    }
  }
}

Try it out!
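
Under this model, a hedged sketch of the resulting keys (assuming the schema above has been passed to an Entity instance named numericRecord): because templated keys are not prefixed and the attributes are typed "number", the keys resolve to numbers rather than composed strings.

await numericRecord.get({ number1: 1, number2: 2 }).go();

// Key params, roughly:
// { Key: { pk: 1, sk: 2 }, TableName: "your_table_name" }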

Index Casing

DynamoDB is a case-sensitive data store, and therefore it is common to convert the casing of keys to uppercase or lowercase prior to saving, updating, or querying data in your table. ElectroDB, by default, will lowercase all keys when preparing query parameters. For those who are using ElectroDB with an existing dataset, have preferences on upper or lowercase, or wish to not convert case at all, this can be configured on an index key field basis.

In the example below, we configure the casing ElectroDB will use individually for the Partition Key and Sort Key on the GSI "gsi1". For the index's PK, mapped to gsi1pk, ElectroDB will convert the key to uppercase prior to its use in queries. For the index's SK, mapped to gsi1sk, ElectroDB will not convert the case of the key prior to its use in queries.

{
  indexes: {
    myIndex: {
      index: "gsi1",
      pk: {
        field: "gsi1pk",
        casing: "upper", // Acct_0120 -> ACCT_0120
        composite: ["organizationId"]
      },
      sk: {
        field: "gsi1sk",
        casing: "none", // Acct_0120 -> Acct_0120 
        composite: ["accountId"]
      }
    }
  }
}

Try it out!

NOTE: Casing is a very important decision when modeling your data in DynamoDB. While choosing upper/lower is largely a personal preference, once you have begun loading records in your table it can be difficult to change your casing after the fact. Unless you have good reason, allowing for mixed case keys can make querying data difficult because it will require database consumers to always have a knowledge of their data's case.

| Casing Option | Effect |
| ------------- | ------ |
| default | The default for keys is lowercase, i.e. lower. |
| lower | Will convert the key to lowercase prior to its use. |
| upper | Will convert the key to uppercase prior to its use. |
| none | Will not perform any casing changes when building keys. |

Facets

As of version 0.11.1, "Facets" have been renamed to "Composite Attributes", and all documentation has been updated to reflect that change.

Composite Attributes

A Composite Attribute is a segment of a key based on one of the attributes. Composite Attributes are concatenated together to form either a Partition Key or a Sort Key, which together define an index.

NOTE: Only attributes with a type of "string", "number", "boolean", or string[] (enum) can be used as composite attributes.

There are two ways to provide composite attributes:

  1. As a Composite Attribute Array
  2. As a Composite Attribute Template

For example, in the following Access Pattern, "locations" is made up of the composite attributes storeId, mallId, buildingId and unitId which map to defined attributes in the model:

// Input
{
    storeId: "STOREVALUE",
    mallId: "MALLVALUE",
    buildingId: "BUILDINGVALUE",
    unitId: "UNITVALUE"
};

// Output:
{
    pk: '$mallstoredirectory_1#storeid_storevalue',
    sk: '$mallstores#mallid_mallvalue#buildingid_buildingvalue#unitid_unitvalue'
}

For PK values, the service and version values from the model are prefixed onto the key.

For SK values, the entity value from the model is prefixed onto the key.

Composite Attribute Arrays

Within a Composite Attribute Array, each element is the name of the corresponding Attribute defined in the Model. The attributes chosen, and the order in which they are specified, will translate to how your composite keys will be built by ElectroDB.

NOTE: If the Attribute has a label property, that will be used to prefix the composite attributes, otherwise the full Attribute name will be used.

attributes: {
    storeId: {
        type: "string",
        label: "sid",
    },
    mallId: {
        type: "string",
        label: "mid",
    },
    buildingId: {
        type: "string",
        label: "bid",
    },
    unitId: {
        type: "string",
        label: "uid",
    }
},
indexes: {
    locations: {
        pk: {
            field: "pk",
            composite: ["storeId"]
        },
        sk: {
            field: "sk",
            composite: ["mallId", "buildingId", "unitId"]
        }
    }
}
    
// Input
{
    storeId: "STOREVALUE",
    mallId: "MALLVALUE",
    buildingId: "BUILDINGVALUE",
    unitId: "UNITVALUE"
};

// Output:
{
    pk: '$mallstoredirectory_1#sid_storevalue',
    sk: '$mallstores#mid_mallvalue#bid_buildingvalue#uid_unitvalue'
}

Try it out!

Composite Attribute Templates

In a Composite Template, you provide a formatted template for ElectroDB to use when making keys. Composite Attribute Templates allow for potential ElectroDB adoption on already established tables and records.

Attributes are identified by surrounding the attribute with ${...} braces. For example, the syntax ${storeId} will match storeId attribute in the model.

The convention for composing a key is to use the # symbol to separate attributes and an underscore to attach labels. For example, composing both mallId and buildingId with labels would be expressed as mid_${mallId}#bid_${buildingId}.

NOTE: ElectroDB will not prefix templated keys with the Entity, Service, Version, or Collection. This will give you greater control of your keys but will limit ElectroDB's ability to prevent leaking entities with some queries.

ElectroDB will continue to always add a trailing delimiter to composite attributes when keys are partially supplied. The section on BeginsWith Queries goes into more detail about how ElectroDB builds indexes from composite attributes.

{
    model: {
        entity: "MallStoreCustom",
        version: "1",
        service: "mallstoredirectory"
    },
  attributes: {
      storeId: {
          type: "string"
      },
      mallId: {
          type: "string"
      },
      buildingId: {
          type: "string"
      },
      unitId: {
          type: "string"
      }
  },
  indexes: {
      locations: {
          pk: {
              field: "pk",
              template: "sid_${storeId}"
          },
          sk: {
              field: "sk",
              template: "mid_${mallId}#bid_${buildingId}#uid_${unitId}"
          }
      }
  }
}


// Input
{
    storeId: "STOREVALUE",
    mallId: "MALLVALUE",
    buildingId: "BUILDINGVALUE",
    unitId: "UNITVALUE"
};

// Output:
{
    pk: 'sid_storevalue',
    sk: 'mid_mallvalue#bid_buildingvalue#uid_unitvalue'
}

Try it out!

Templates and Composite Attribute Arrays

The example above shows indexes defined only with the template property. This property alone is enough to work with ElectroDB, however it can be useful to also include a composite array with the names of the Composite Attributes included in the template string. Doing so achieves the following benefits:

  1. ElectroDB will enforce that the template you have supplied actually resolves to the composite attributes specified in the array.
  2. If you use ElectroDB with TypeScript, supplying the composite array will ensure the indexes' Composite Attributes are typed just the same as if you had not used a composite template.

An example of using template while also using composite:

{
  indexes: {
    locations: {
      pk: {
        field: "pk",
        template: "sid_${storeId}"
        composite: ["storeId"]
      },
      sk: {
        field: "sk",
        template: "mid_${mallId}#bid_${buildingId}#uid_${unitId}",
        composite: ["mallId", "buildingId", "unitId"]
      }
    }
  }
}

Try it out!

Composite Attribute and Index Considerations

As described in the above two sections (Composite Attributes, Indexes), ElectroDB builds your keys using the attribute values defined in your model and provided on your query. Here are a few considerations to take into account when thinking about how to model your indexes:

Your table's primary Partition and Sort Keys cannot be changed after a record has been created. Be mindful not to use attributes whose values can change as composite attributes for your primary table index.

When updating/patching an attribute that is also a composite attribute for a secondary index, ElectroDB will perform a runtime check to ensure the operation will not leave a key in a partially built state. For example: if a Sort Key is defined as having the Composite Attributes ["prop1", "prop2", "prop3"], then an update to the prop1 attribute will require supplying the prop2 and prop3 attributes as well. This prevents a loss of key fidelity because ElectroDB is not able to partially update a key in place with its existing values.
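
A hedged sketch of that runtime check (the entity, its attributes, and the index are hypothetical), assuming a secondary index whose Sort Key is composed of ["prop1", "prop2", "prop3"]:

// Updating only prop1 cannot rebuild the full sort key, so ElectroDB throws at runtime.
await thing.update({ id: "123" }).set({ prop1: "a" }).go();

// Supplying every composite attribute of the affected key allows the update.
await thing.update({ id: "123" })
  .set({ prop1: "a", prop2: "b", prop3: "c" })
  .go();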

As described and detailed in Composite Attribute Arrays, you can use the label property on an attribute to shorten a composite attribute's prefix on a key. This can help trim down the length of your keys.

Attributes as Indexes

It may be the case that an index field is also an attribute. For example, a table may have been created with a Primary Index partition key of accountId, and that same field is used to store the accountId value used by the application. The following are a few examples of how to model that schema with ElectroDB:

NOTE: If you have the unique opportunity to use ElectroDB with a new project, it is strongly recommended to use generically named index fields that are separate from your business attributes.

Using composite

When your attribute's name, or the field property on an attribute, matches the field property on an index's pk or sk, ElectroDB will forego its usual index key prefixing.

{
  model: {
    entity: "your_entity_name",
    service: "your_service_name",
    version: "1"
  },
  attributes: {
    accountId: {
      type: "string"
    },
    productNumber: {
      type: "number"
    }
  },
  indexes: {
    products: {
      pk: {
        field: "accountId",
        composite: ["accountId"]
      },
      sk: {
        field: "productNumber",
        composite: ["productNumber"]
      }
    }
  }
}

Try it out!

Using template

Another approach uses the template property, which lets you format exactly how your key should be built when interacting with DynamoDB. In this case composite is optional when using template, but including it helps with TypeScript typing.

{
  model: {
    entity: "your_entity_name",
    service: "your_service_name",
    version: "1"
  },
  attributes: {
    accountId: {
      type: "string" // string and number types are both supported
    }      
  },
  indexes: {
    "your_access_pattern_name": {
      pk: {
        field: "accountId",
        composite: ["accountId"], // `composite` is optional when using `template` but is required when using TypeScript
        template: "${accountId}"
      },
      sk: {...}
    }
  }
}

Try it out!

Advanced use of template

When your string attribute is also an index key, and using key templates, you can also add static prefixes and postfixes to your attribute. Under the covers, ElectroDB will leverage this template while interacting with DynamoDB but will allow you to maintain a relationship with the attribute value itself.

For example, given the following model:

{
  model: {
    entity: "your_entity_name",
    service: "your_service_name",
    version: "1"
  },
  attributes: {
    accountId: {
      type: "string" // only string types are both supported for this example
    },
    organizationId: {
      type: "string"
    },
    name: {
      type: "string"
    }
  },
  indexes: {
    "your_access_pattern_name": {
      pk: {
        field: "accountId",
        composite: ["accountId"],
        template: "prefix_${accountId}_postfix"
      },
      sk: {
        field: "organizationId",
        composite: ["organizationId"]
      }
    }
  }
}

Try it out!

ElectroDB will accept a get request like this:

await myEntity.get({
  accountId: "1111-2222-3333-4444",
  organizationId: "AAAA-BBBB-CCCC-DDDD"
}).go()

ElectroDB will then query DynamoDB with the following params (note the pre/postfix on accountId):

NOTE: ElectroDB defaults keys to lowercase, though this can be configured using Index Casing.

{
  Key: {
    accountId: "prefix_1111-2222-3333-4444_postfix",
    organizationId: `aaaa-bbbb-cccc-dddd`, 
  },
  TableName: 'your_table_name'
}

When returned from a query, however, ElectroDB will return the following, trimming the key of its prefix and postfix:

{
  accountId: "1111-2222-3333-4444",
  organizationId: "aaaa-bbbb-cccc-dddd",
  name: "your_item_name"
}

Collections

A Collection is a grouping of Entities with the same Partition Key, which allows you to make efficient queries across multiple entities. If your background is SQL, think of Partition Keys as Foreign Keys and a Collection as a View composed of multiple joined Entities.

NOTE: ElectroDB Collections use DynamoDB queries to retrieve results. One query is made to retrieve results for all Entities (one of the benefits of single table design); however, like the query method, ElectroDB will paginate through all results for a given query.

Collections are defined on an Index, and the name of the collection should represent what the query would return as a pseudo Entity. Additionally, Collection names must be unique across a Service.

NOTE: A collection name should be unique to a single common index across entities.

const DynamoDB = require("aws-sdk/clients/dynamodb");
const table = "projectmanagement";
const client = new DynamoDB.DocumentClient();

const employees = new Entity({
  model: {
    entity: "employees",
    version: "1",
    service: "taskapp",
  },
  attributes: {
    employeeId: {
      type: "string"
    },
    organizationId: {
      type: "string"
    },
    name: {
      type: "string"
    },
    team: {
      type: ["jupiter", "mercury", "saturn"]
    }
  },
  indexes: {
    staff: {
      pk: {
        field: "pk",
        composite: ["organizationId"]
      },
      sk: {
        field: "sk",
        composite: ["employeeId"]
      }
    },
    employee: {
      collection: "assignments",
      index: "gsi2",
      pk: {
        field: "gsi2pk",
        composite: ["employeeId"],
      },
      sk: {
        field: "gsi2sk",
        composite: [],
      },
    }
  }
}, { client, table })

const tasks = new Entity({
  model: {
    entity: "tasks",
    version: "1",
    service: "taskapp",
  },
  attributes: {
    taskId: {
      type: "string"
    },
    employeeId: {
      type: "string"
    },
    projectId: {
      type: "string"
    },
    title: {
      type: "string"
    },
    body: {
      type: "string"
    }
  },
  indexes: {
    project: {
      pk: {
        field: "pk",
        composite: ["projectId"]
      },
      sk: {
        field: "sk",
        composite: ["taskId"]
      }
    },
    assigned: {
      collection: "assignments",
      index: "gsi2",
      pk: {
        field: "gsi2pk",
        composite: ["employeeId"],
      },
      sk: {
        field: "gsi2sk",
        composite: ["projectId"],
      },
    }
  }
}, { client, table });

const TaskApp = new Service({employees, tasks});

await TaskApp.collections
    .assignments({employeeId: "JExotic"})
    .go();

// Equivalent Parameters
{
  "TableName": 'projectmanagement',
  "ExpressionAttributeNames": { '#pk': 'gsi2pk', '#sk1': 'gsi2sk' },
  "ExpressionAttributeValues": { ':pk': '$taskapp_1#employeeid_joeexotic', ':sk1': '$assignments' },
  "KeyConditionExpression": '#pk = :pk and begins_with(#sk1, :sk1)',
  "IndexName": 'gsi2'
}

Try it out!

Collection Queries vs Entity Queries

To query across entities, collection queries make use of ElectroDB's Sort Key structure, which prefixes Sort Key fields with the collection name. Unlike an Entity Query, Collection Queries only leverage Composite Attributes from an access pattern's Partition Key.

To better explain how Collection Queries are formed, here is a juxtaposition of an Entity Query's parameters vs a Collection Query's parameters:

Entity Query

await TaskApp.entities
    .tasks.query
    .assigned({employeeId: "JExotic"})
    .go();

// Equivalent Parameters
{
  KeyConditionExpression: '#pk = :pk and begins_with(#sk1, :sk1)',
  TableName: 'projectmanagement',
  ExpressionAttributeNames: { '#pk': 'gsi2pk', '#sk1': 'gsi2sk' },
  ExpressionAttributeValues: {
    ':pk': '$taskapp#employeeid_jexotic',
    ':sk1': '$assignments#tasks_1'
  },
  IndexName: 'gsi2'
}

Try it out!

Collection Query

await TaskApp.collections
    .assignments({employeeId: "JExotic"})
    .go();

// Equivalent Parameters
{
  KeyConditionExpression: '#pk = :pk and begins_with(#sk1, :sk1)',
  TableName: 'projectmanagement',
  ExpressionAttributeNames: { '#pk': 'gsi2pk', '#sk1': 'gsi2sk' },
  ExpressionAttributeValues: { ':pk': '$taskapp#employeeid_jexotic', ':sk1': '$assignments' },
  IndexName: 'gsi2'
}

Try it out!

The notable difference between the two is how much of the Sort Key is specified at query time.

Entity Query:

ExpressionAttributeValues: { ':sk1': '$assignments#tasks_1' },

Collection Query:

ExpressionAttributeValues: { ':sk1': '$assignments' },

Collection Response Structure

Unlike Entity Queries which return an array, Collection Queries return an object. This object will have a key for every Entity name (or Entity Alias) associated with that Collection, and an array for all results queried that belong to that Entity.

For example, using the "TaskApp" models defined above, we would expect the following response from a query to the "assignments" collection:

let results = await TaskApp.collections
        .assignments({employeeId: "JExotic"})
        .go();

{
    tasks: [...],    // tasks for employeeId "JExotic" 
    employees: [...] // employee record(s) with employeeId "JExotic"
}

Because the Tasks and Employee Entities both associated their index (gsi2) with the same collection name (assignments), ElectroDB is able to associate the two entities via a shared Partition Key. As stated in the collections section, querying across Entities by PK can be comparable to querying across a foreign key in a traditional relational database.

Sub-Collections

Sub-Collections are an extension of Collection functionality that allow you to model more advanced access patterns. Collections and Sub-Collections are defined on Indexes via a property called collection, as either a string or string array respectively.

The following is an example of functionally identical collections, implemented as a string (referred to as a "collection") and then as a string array (referred to as sub-collections):

As a string (collection):

{
  collection: "assignments",
  pk: {
    field: "pk",
    composite: ["employeeId"]
  },
  sk: {
    field: "sk",
    composite: ["projectId"]
  }
}

As a string array (sub-collections):

{
  collection: ["assignments"],
  pk: {
    field: "pk",
    composite: ["employeeId"]
  },
  sk: {
    field: "sk",
    composite: ["projectId"]
  }
}

Both implementations above will create a "collections" method called assignments when added to a Service.

const results = await TaskApp.collections
    .assignments({employeeId: "JExotic"})
    .go();

The advantage to using a string array to define collections is the ability to express sub-collections. Below is an example of three entities using sub-collections, followed by an explanation of their sub-collection definitions:

Sub-Collection Entities

import {Entity, Service} from "electrodb"
import DynamoDB from "aws-sdk/clients/dynamodb";
const table = "projectmanagement";
const client = new DynamoDB.DocumentClient();

const employees = new Entity({
  model: {
    entity: "employees",
    version: "1",
    service: "taskapp",
  },
  attributes: {
    employeeId: {
      type: "string"
    },
    organizationId: {
      type: "string"
    },
    name: {
      type: "string"
    },
    team: {
      type: ["jupiter", "mercury", "saturn"] as const
    }
  },
  indexes: {
    staff: {
      pk: {
        field: "pk",
        composite: ["organizationId"]
      },
      sk: {
        field: "sk",
        composite: ["employeeId"]
      }
    },
    employee: {
      collection: "contributions",
      index: "gsi2",
      pk: {
        field: "gsi2pk",
        composite: ["employeeId"],
      },
      sk: {
        field: "gsi2sk",
        composite: [],
      },
    }
  }
}, { client, table })

const tasks = new Entity({
  model: {
    entity: "tasks",
    version: "1",
    service: "taskapp",
  },
  attributes: {
    taskId: {
      type: "string"
    },
    employeeId: {
      type: "string"
    },
    projectId: {
      type: "string"
    },
    title: {
      type: "string"
    },
    body: {
      type: "string"
    }
  },
  indexes: {
    project: {
      collection: "overview",
      pk: {
        field: "pk",
        composite: ["projectId"]
      },
      sk: {
        field: "sk",
        composite: ["taskId"]
      }
    },
    assigned: {
      collection: ["contributions", "assignments"] as const,
      index: "gsi2",
      pk: {
        field: "gsi2pk",
        composite: ["employeeId"],
      },
      sk: {
        field: "gsi2sk",
        composite: ["projectId"],
      },
    }
  }
}, { client, table });

const projectMembers = new Entity({
  model: {
    entity: "projectMembers",
    version: "1",
    service: "taskapp",
  },
  attributes: {
    employeeId: {
      type: "string"
    },
    projectId: {
      type: "string"
    },
    name: {
      type: "string"
    },
  },
  indexes: {
    members: {
      collection: "overview",
      pk: {
        field: "pk",
        composite: ["projectId"]
      },
      sk: {
        field: "sk",
        composite: ["employeeId"]
      }
    },
    projects: {
      collection: ["contributions", "assignments"] as const,
      index: "gsi2",
      pk: {
        field: "gsi2pk",
        composite: ["employeeId"],
      },
      sk: {
        field: "gsi2sk",
        composite: [],
      },
    }
  }
}, { client, table }); 

const TaskApp = new Service({employees, tasks, projectMembers});

Try it out!

TypeScript Note: Use as const syntax when defining collection as a string array for improved type support

The last line of the code block above creates a Service called TaskApp using the Entity instances created above its declaration. By creating a Service, ElectroDB will identify and validate the sub-collections defined across all three models. The result in this case is three unique collections: "overview", "contributions", and "assignments".

The simplest collection to understand is overview. This collection is defined on the table's Primary Index, composed of a projectId in the Partition Key, and is currently implemented by two Entities: tasks and projectMembers. If another entity were to be added to our service, it could "join" this collection by implementing an identical Partition Key composite (projectId) and labeling itself as part of the overview collection. The following is an example of using the overview collection:

// overview
const results = await TaskApp.collections
    .overview({projectId: "SD-204"})
    .go();

// results 
{ 
  tasks: [...],         // tasks associated with projectId "SD-204"
  projectMembers: [...] // employees of project "SD-204"
}

// parameters
{
  KeyConditionExpression: '#pk = :pk and begins_with(#sk1, :sk1)',
  TableName: 'projectmanagement',
  ExpressionAttributeNames: { '#pk': 'pk', '#sk1': 'sk' },
  ExpressionAttributeValues: { ':pk': '$taskapp#projectid_sd-204', ':sk1': '$overview' }
}

Try it out!

Unlike overview, the contributions and assignments collections are more complex.

In the case of contributions, all three entities implement this collection on the gsi2 index, and compose their Partition Key with the employeeId attribute. The assignments collection, however, is only implemented by the tasks and projectMembers Entities. Below is an example of using these collections:

NOTE: Collection values of collection: "contributions" and collection: ["contributions"] are interpreted by ElectroDB as being the same implementation.

// contributions
const results = await TaskApp.collections
        .contributions({employeeId: "JExotic"})
        .go();

// results 
{
  tasks: [...], // tasks assigned to employeeId "JExotic" 
  projectMembers: [...], // projects with employeeId "JExotic"
  employees: [...] // employee record(s) with employeeId "JExotic"
}

{
  KeyConditionExpression: '#pk = :pk and begins_with(#sk1, :sk1)',
  TableName: 'projectmanagement',
  ExpressionAttributeNames: { '#pk': 'gsi2pk', '#sk1': 'gsi2sk' },
  ExpressionAttributeValues: { ':pk': '$taskapp#employeeid_jexotic', ':sk1': '$contributions' },
  IndexName: 'gsi2'
}

Try it out!

// assignments
const results = await TaskApp.collections
        .assignments({employeeId: "JExotic"})
        .go();

// results 
{
  tasks: [...],          // tasks assigned to employeeId "JExotic" 
  projectMembers: [...], // projects with employeeId "JExotic"
}

{
  KeyConditionExpression: '#pk = :pk and begins_with(#sk1, :sk1)',
  TableName: 'projectmanagement',
  ExpressionAttributeNames: { '#pk': 'gsi2pk', '#sk1': 'gsi2sk' },
  ExpressionAttributeValues: {
    ':pk': '$taskapp#employeeid_jexotic',
    ':sk1': '$contributions#assignments'
  },
  IndexName: 'gsi2'
}

Try it out!

Looking above we can see that the assignments collection is actually a subset of the results that could be queried with the contributions collection. The power behind having the assignments sub-collection is the flexibility to further slice and dice your cross-entity queries into more specific and performant queries.

If you're interested in the naming used in the collection and access pattern definitions above, check out the section on Naming Conventions.

Index and Collection Naming Conventions

ElectroDB puts an emphasis on allowing users to define more domain specific naming. Instead of referring to indexes by their name on the table, ElectroDB allows users to define their indexes as Access Patterns.

Please refer to the Entities defined in the section Sub-Collection Entities as the source of examples within this section.

Index Naming Conventions

The following is an access pattern on the "employees" entity defined here:

staff: {
  pk: {
    field: "pk",
    composite: ["organizationId"]
  },
  sk: {
    field: "sk",
    composite: ["employeeId"]
  }
}

This Access Pattern is defined on the table's Primary Index (note the lack of an index property), is given the name staff, and is composed of an organizationId and an employeeId.

When deciding on an Access Pattern name, ask yourself, "What would the array of items returned represent if I only supplied the Partition Key". In this example case, the entity defines an "Employee" by its organizationId and employeeId. If you performed a query against this index, and only provided organizationId you would then expect to receive all Employees for that Organization. From there, the name staff was chosen because the focus becomes "What are these Employees to that Organization?".

This convention also becomes evident when you consider that the Access Pattern name becomes the name of the method you use to query that index.

await employee.query.staff({organizationId: "nike"}).go();

Collection Naming Conventions

The following are access patterns on entities defined here:

// employees entity
employee: {
  collection: "contributions",
  index: "gsi2",
  pk: {
    field: "gsi2pk",
    composite: ["employeeId"],
  },
  sk: {
    field: "gsi2sk",
    composite: [],
  },
}

// tasks entity
assigned: {
  collection: ["contributions", "assignments"],
  index: "gsi2",
  pk: {
    field: "gsi2pk",
    composite: ["employeeId"],
  },
  sk: {
    field: "gsi2sk",
    composite: ["projectId"],
  },
}

// projectMembers entity
projects: {
  collection: ["contributions", "assignments"] as const,
  index: "gsi2",
  pk: {
    field: "gsi2pk",
    composite: ["employeeId"],
  },
  sk: {
    field: "gsi2sk",
    composite: [],
  },
}

In the case of the entities above, we see an example of a sub-collection. ElectroDB will use the above definitions to generate two collections: contributions and assignments.

The considerations for naming a collection are nearly identical to the considerations for naming an index: What do the query results from supplying just the Partition Key represent? In the case of collections you must also consider what the results represent across all of the involved entities, and the entities that may be added in the future.

For example, the contributions collection is named such because, when given an employeeId, we receive the employee's details, the tasks assigned to that employee, and the projects where they are currently a member.

In the case of assignments, we receive a subset of contributions when supplying an employeeId: only the tasks and projects they are "assigned" to are returned.

Filters

Filters are no longer the preferred way to add FilterExpressions. Checkout the Where section to find out about how to apply FilterExpressions and ConditionExpressions.

Building thoughtful indexes can make queries simple and performant. Sometimes you need to filter results down further. By adding Filters to your model, you can extend your queries with custom filters. Below is the traditional way you would add a filter to Dynamo's DocumentClient directly alongside how you would accomplish the same using a Filter function.

{
  "IndexName": "idx2",
  "TableName": "StoreDirectory",
  "ExpressionAttributeNames": {
    "#rent": "rent",
    "#discount": "discount",
    "#pk": "idx2pk",
    "#sk1": "idx2sk"
  },
  "ExpressionAttributeValues": {
    ":rent1": "2000.00",
    ":rent2": "5000.00",
    ":discount1": "1000.00",
    ":pk": "$mallstoredirectory_1#mallid_eastpointe",
    ":sk1": "$mallstore#leaseenddate_2020-04-01#rent_",
    ":sk2": "$mallstore#leaseenddate_2020-07-01#rent_"
  },
  "KeyConditionExpression": ",#pk = :pk and #sk1 BETWEEN :sk1 AND :sk2",
  "FilterExpression": "(#rent between :rent1 and :rent2) AND #discount <= :discount1"
}

Defined on the model

Deprecated but functional with 1.x

Filters can be defined on the model and used in your query chain.

/**
 * Filter by low rent for a specific mall or a leaseEnd within a specific range
 * @param {Object} attributes - All attributes from the model with methods for each filter operation
 * @param {...*} values - Values passed when calling the filter in a query chain.
 */
filters: {
    rentPromotions: function(attributes, minRent, maxRent, promotion)  {
        let {rent, discount} = attributes;
        return `
            ${rent.between(minRent, maxRent)} AND ${discount.lte(promotion)}
        `
    }
}


let StoreLocations = new Entity(model, {table: "StoreDirectory"});
let maxRent = "5000.00";
let minRent = "2000.00";
let promotion = "1000.00";
let stores = await StoreLocations.query
    .stores({ mallId: "EastPointe" })
    .between({ leaseEndDate:  "2020-04-01" }, { leaseEndDate:  "2020-07-01" })
    .rentPromotions(minRent, maxRent, promotion)
    .go();

// Equivalent Parameters
{
  IndexName: 'idx2',
  TableName: 'StoreDirectory',
  ExpressionAttributeNames: {
    '#rent': 'rent',
    '#discount': 'discount',
    '#pk': 'idx2pk',
    '#sk1': 'idx2sk'
  },
  ExpressionAttributeValues: {
    ':rent1': '2000.00',
    ':rent2': '5000.00',
    ':discount1': '1000.00',
    ':pk': '$mallstoredirectory_1#mallid_eastpointe',
    ':sk1': '$mallstore#leaseenddate_2020-04-01#rent_',
    ':sk2': '$mallstore#leaseenddate_2020-07-01#rent_'
  },
  KeyConditionExpression: '#pk = :pk and #sk1 BETWEEN :sk1 AND :sk2',
  FilterExpression: '(#rent between :rent1 and :rent2) AND #discount <= :discount1'
}

Defined via Filter method after query operators

Filters are no longer the preferred way to add FilterExpressions. Checkout the Where section to find out about how to apply FilterExpressions and ConditionExpressions.

The easiest way to use filters is to use them inline in your query chain.

let StoreLocations  =  new Entity(model, {table: "StoreDirectory"});
let maxRent = "5000.00";
let minRent = "2000.00";
let promotion = "1000.00";
let stores  =  await StoreLocations.query
    .leases({ mallId: "EastPointe" })
    .between({ leaseEndDate:  "2020-04-01" }, { leaseEndDate:  "2020-07-01" })
    .filter(({rent, discount}) => `
        ${rent.between(minRent, maxRent)} AND ${discount.lte(promotion)}
    `)
    .go();

// Equivalent Parameters
{
  IndexName: 'idx2',
  TableName: 'StoreDirectory',
  ExpressionAttributeNames: {
    '#rent': 'rent',
    '#discount': 'discount',
    '#pk': 'idx2pk',
    '#sk1': 'idx2sk'
  },
  ExpressionAttributeValues: {
    ':rent1': '2000.00',
    ':rent2': '5000.00',
    ':discount1': '1000.00',
    ':pk': '$mallstoredirectory_1#mallid_eastpointe',
    ':sk1': '$mallstore#leaseenddate_2020-04-01#rent_',
    ':sk2': '$mallstore#leaseenddate_2020-07-01#rent_'
  },
  KeyConditionExpression: '#pk = :pk and #sk1 BETWEEN :sk1 AND :sk2',
  FilterExpression: '(#rent between :rent1 and :rent2) AND #discount <= :discount1'
}

Filter functions allow you to write a FilterExpression without having to worry about the complexities of expression attributes. To accomplish this, ElectroDB injects an object attributes as the first parameter to all Filter Functions. This object contains every Attribute defined in the Entity's Model with the following operators as methods:

| operator | example | result |
| -------- | ------- | ------ |
| gte | rent.gte(maxRent) | #rent >= :rent1 |
| gt | rent.gt(maxRent) | #rent > :rent1 |
| lte | rent.lte(maxRent) | #rent <= :rent1 |
| lt | rent.lt(maxRent) | #rent < :rent1 |
| eq | rent.eq(maxRent) | #rent = :rent1 |
| ne | rent.ne(maxRent) | #rent <> :rent1 |
| begins | rent.begins(maxRent) | begins_with(#rent, :rent1) |
| exists | rent.exists() | attribute_exists(#rent) |
| notExists | rent.notExists() | attribute_not_exists(#rent) |
| contains | rent.contains(maxRent) | contains(#rent, :rent1) |
| notContains | rent.notContains(maxRent) | not contains(#rent, :rent1) |
| between | rent.between(minRent, maxRent) | (#rent between :rent1 and :rent2) |
| name | rent.name() | #rent |
| value | rent.value(maxRent) | :rent1 |

This functionality allows you to write the remaining logic of your FilterExpression with ease. Add complex nested and/or conditions or other FilterExpression logic while ElectroDB handles the ExpressionAttributeNames and ExpressionAttributeValues.
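
For instance, a hedged sketch of a nested condition, reusing the StoreDirectory attributes from the examples above (the threshold values are arbitrary):

let filtered = await StoreLocations.query
    .leases({ mallId: "EastPointe" })
    .filter(({ rent, discount, category }) => `
        (${rent.lte("2000.00")} OR ${discount.gte("500.00")}) AND ${category.eq("food/coffee")}
    `)
    .go();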

Multiple Filters

Filters are no longer the preferred way to add FilterExpressions. Checkout the Where section to find out about how to apply FilterExpressions and ConditionExpressions.

It is possible to chain together multiple filters. The resulting FilterExpressions are concatenated with an implicit AND operator.

let MallStores = new Entity(model, {table: "StoreDirectory"});
let stores = await MallStores.query
    .leases({ mallId: "EastPointe" })
    .between({ leaseEndDate: "2020-04-01" }, { leaseEndDate: "2020-07-01" })
    .filter(({ rent, discount }) => `
        ${rent.between("2000.00", "5000.00")} AND ${discount.eq("1000.00")}
    `)
    .filter(({ category }) => `
        ${category.eq("food/coffee")}
    `)
    .go();

// Equivalent Parameters
{
  TableName: 'StoreDirectory',
  ExpressionAttributeNames: {
    '#rent': 'rent',
    '#discount': 'discount',
    '#category': 'category',
    '#pk': 'idx2pk',
    '#sk1': 'idx2sk'
  },
  ExpressionAttributeValues: {
    ':rent1': '2000.00',
    ':rent2': '5000.00',
    ':discount1': '1000.00',
    ':category1': 'food/coffee',
    ':pk': '$mallstoredirectory_1#mallid_eastpointe',
    ':sk1': '$mallstore#leaseenddate_2020-04-01#storeid_',
    ':sk2': '$mallstore#leaseenddate_2020-07-01#storeid_'
  },
  KeyConditionExpression: '#pk = :pk and #sk1 BETWEEN :sk1 AND :sk2',
  IndexName: 'idx2',
  FilterExpression: '(#rent between :rent1 and :rent2) AND (#discount = :discount1 AND #category = :category1)'
}

Where

The where() method is an improvement on the filter() method. Unlike filter, where will be compatible with upcoming features related to complex types.

Building thoughtful indexes can make queries simple and performant. Sometimes you need to filter results down further or add conditions to an update/patch/put/create/delete/remove action.

FilterExpressions

Below is the traditional way you would add a FilterExpression to Dynamo's DocumentClient directly alongside how you would accomplish the same using the where method.

animals.query
  .exhibit({habitat: "Africa"})
  .where(({isPregnant, offspring}, {exists, eq}) => `
    ${eq(isPregnant, true)} OR ${exists(offspring)}
  `)
  .go()

// Equivalent Parameters
{
  "KeyConditionExpression": "#pk = :pk and begins_with(#sk1, :sk1)",
  "TableName": "zoo_manifest",
  "ExpressionAttributeNames": {
    "#isPregnant": "isPregnant",
    "#offspring": "offspring",
    "#pk": "gsi1pk",
    "#sk1": "gsi1sk"
  },
  "ExpressionAttributeValues": {
    ":isPregnant0": true,
    ":pk": "$zoo#habitat_africa",
    ":sk1": "$animals_1#enclosure_"
  },
  "IndexName": "gsi1pk-gsi1sk-index",
  "FilterExpression": "#isPregnant = :isPregnant0 OR attribute_exists(#offspring)"
}

Try it out!

ConditionExpressions

Below is the traditional way you would add a ConditionExpression to Dynamo's DocumentClient directly alongside how you would accomplish the same using the where method.

animals.update({
    animal: "blackbear",
    name: "Isabelle"
  })
  // no longer pregnant because Ernesto was born!
  .set({
    isPregnant: false,
    lastEvaluation: "2021-09-12",
    lastEvaluationBy: "stephanie.adler"
  })
  // welcome to the world Ernesto!
  .append({
    offspring: [{
      name: "Ernesto",
      birthday: "2021-09-12",
      note: "healthy birth, mild pollen allergy"
    }]
  })
  // using the where clause can guard against making
  // updates against stale data
  .where(({isPregnant, lastEvaluation}, {lt, eq}) => `
    ${eq(isPregnant, true)} AND ${lt(lastEvaluation, "2021-09-12")}
  `)
  .go()

// Equivalent Parameters
{
  "UpdateExpression": "SET #isPregnant = :isPregnant_u0, #lastEvaluation = :lastEvaluation_u0, #lastEvaluationBy = :lastEvaluationBy_u0, #offspring = list_append(#offspring, :offspring_u0)",
  "ExpressionAttributeNames": {
    "#isPregnant": "isPregnant",
    "#lastEvaluation": "lastEvaluation",
    "#lastEvaluationBy": "lastEvaluationBy",
    "#offspring": "offspring"
  },
  "ExpressionAttributeValues": {
    ":isPregnant0": true,
    ":lastEvaluation0": "2021-09-12",
    ":isPregnant_u0": false,
    ":lastEvaluation_u0": "2021-09-12",
    ":lastEvaluationBy_u0": "stephanie.adler",
    ":offspring_u0": [
      {
        "name": "Ernesto",
        "birthday": "2021-09-12",
        "note": "healthy birth, mild pollen allergy"
      }
    ]
  },
  "TableName": "zoo_manifest",
  "Key": {
    "pk": "$zoo#animal_blackbear",
    "sk": "$animals_1#name_isabelle"
  },
  "ConditionExpression": "#isPregnant = :isPregnant0 AND #lastEvaluation < :lastEvaluation0"
}

Where with Complex Attributes

ElectroDB supports using the where() method with DynamoDB's complex attribute types: map, list, and set. When using the injected attributes object, simply drill into the attribute itself to apply your filter directly to the required property or element.

The following are examples of how to filter on complex attributes:

Example 1: Filtering on a map attribute

animals.query
    .farm({habitat: "Africa"})
    .where(({veterinarian}, {eq}) => eq(veterinarian.name, "Herb Peterson"))
    .go()

Example 2: Filtering on an element in a list attribute

animals.query
  .exhibit({habitat: "Tundra"})
  .where(({offspring}, {eq}) => eq(offspring[0].name, "Blitzen"))
  .go()

Attributes and Operations

Where functions allow you to write a FilterExpression or ConditionExpression without having to worry about the complexities of expression attributes. To accomplish this, ElectroDB injects an object attributes as the first parameter to all Filter Functions, and an object operations as the second parameter. Pass the properties from the attributes object to the methods found on the operations object, along with inline values, to set filters and conditions.

NOTE: where callbacks must return a string. All methods on the operations object return strings, so you can return the result of an operation method directly or use template strings to compose an expression.

// A single filter operation 
animals.update({habitat: "Africa", enclosure: "5b"})
  .set({keeper: "Joe Exotic"})
  .where((attr, op) => op.eq(attr.dangerous, true))
  .go();

// A single filter operation w/ destructuring
animals.update({animal: "tiger", name: "janet"})
  .set({keeper: "Joe Exotic"})
  .where(({dangerous}, {eq}) => eq(dangerous, true))
  .go();

// Multiple conditions - `op`
animals.update({animal: "tiger", name: "janet"})
  .set({keeper: "Joe Exotic"})
  .where((attr, op) => `
    ${op.eq(attr.dangerous, true)} AND ${op.notExists(attr.lastFed)}
  `)
  .go();

// Multiple usages of `where` (implicit AND)
animals.update({animal: "tiger", name: "janet"})
  .set({keeper: "Joe Exotic"})
  .where((attr, op) => `
    ${op.eq(attr.dangerous, true)} OR ${op.notExists(attr.lastFed)}
  `)
  .where(({birthday}, {between}) => {
    const today = Date.now();
    const lastMonth = today - 1000 * 60 * 60 * 24 * 30;
    return between(birthday, lastMonth, today);
  })
  .go();

// "dynamic" filtering
function getAnimals(habitat, keepers) {
  const query = animals.query.exhibit({habitat});
  for (const name of keepers) {
    query.where(({keeper}, {eq}) => eq(keeper, name));
  }
  return query.go();
}

const keepers = [
  "Joe Exotic",
  "Carol Baskin"
];

getAnimals("RainForest", keepers);

The attributes object contains every Attribute defined in the Entity's Model. The operations object contains the following methods:

operator | example | result
-------- | ------- | ------
eq | eq(rent, maxRent) | #rent = :rent1
ne | ne(rent, maxRent) | #rent <> :rent1
gte | gte(rent, value) | #rent >= :rent1
gt | gt(rent, maxRent) | #rent > :rent1
lte | lte(rent, maxRent) | #rent <= :rent1
lt | lt(rent, maxRent) | #rent < :rent1
begins | begins(rent, maxRent) | begins_with(#rent, :rent1)
exists | exists(rent) | attribute_exists(#rent)
notExists | notExists(rent) | attribute_not_exists(#rent)
contains | contains(rent, maxRent) | contains(#rent, :rent1)
notContains | notContains(rent, maxRent) | not contains(#rent, :rent1)
between | between(rent, minRent, maxRent) | (#rent between :rent1 and :rent2)
name | name(rent) | #rent
value | value(rent, maxRent) | :rent1
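
The name and value operations are escape hatches for composing expression fragments the other operations don't cover. A minimal sketch, assuming an offspring list attribute on the model (size() here is native DynamoDB expression syntax, not an ElectroDB operation):

animals.query
  .exhibit({habitat: "Africa"})
  .where(({offspring}, {name, value}) => `
    size(${name(offspring)}) > ${value(offspring, 2)}
  `)
  .go();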

Multiple Where Clauses

It is possible to chain multiple where clauses. The resulting FilterExpressions (or ConditionExpressions) are concatenated with an implicit AND operator.

let MallStores = new Entity(model, {table: "StoreDirectory"});
let stores = await MallStores.query
    .leases({ mallId: "EastPointe" })
    .between({ leaseEndDate: "2020-04-01" }, { leaseEndDate: "2020-07-01" })
    .where(({ rent, discount }, {between, eq}) => `
        ${between(rent, "2000.00", "5000.00")} AND ${eq(discount, "1000.00")}
    `)
    .where(({ category }, {eq}) => `
        ${eq(category, "food/coffee")}
    `)
    .go();

// Equivalent Parameters
{
  TableName: 'StoreDirectory',
  ExpressionAttributeNames: {
    '#rent': 'rent',
    '#discount': 'discount',
    '#category': 'category',
    '#pk': 'idx2pk',
    '#sk1': 'idx2sk'
  },
  ExpressionAttributeValues: {
    ':rent1': '2000.00',
    ':rent2': '5000.00',
    ':discount1': '1000.00',
    ':category1': 'food/coffee',
    ':pk': '$mallstoredirectory_1#mallid_eastpointe',
    ':sk1': '$mallstore#leaseenddate_2020-04-01#storeid_',
    ':sk2': '$mallstore#leaseenddate_2020-07-01#storeid_'
  },
  KeyConditionExpression: '#pk = :pk and #sk1 BETWEEN :sk1 AND :sk2',
  IndexName: 'idx2',
  FilterExpression: '(#rent between :rent1 and :rent2) AND (#discount = :discount1 AND #category = :category1)'
}

Parse

The parse method can be given a DocClient response and return a typed and formatted ElectroDB item.

ElectroDB's parse() method accepts results from get, delete, put, update, query, and scan operations, applies all the same operations as though the item was retrieved by ElectroDB itself, and will return null (or empty array for query results) if the item could not be parsed.

const myEntity = new Entity({...});
const getResults = await docClient.get({...}).promise();
const queryResults = await docClient.query({...}).promise();
const updateResults = await docClient.update({...}).promise();
const formattedGetResults = myEntity.parse(getResults);
const formattedQueryResults = myEntity.parse(queryResults);

Parse also accepts an optional options object as a second argument (see the section Query Options to learn more). Currently, the following query options are relevant to the parse() method:

Option | Default | Notes
------ | ------- | -----
ignoreOwnership | true | This property defaults to true here, unlike elsewhere in the library where it defaults to false. You can overwrite the default here with your own preference.
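
For example, a minimal sketch of overriding that default so that parse() rejects items not owned by the Entity:

// Assumes `results` is a raw DocumentClient response
const results = await docClient.query({...}).promise();
const items = myEntity.parse(results, {ignoreOwnership: false});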

Building Queries

For hands-on learners: the following example can be followed along with and executed on runkit: https://runkit.com/tywalch/electrodb-building-queries

ElectroDB queries use DynamoDB's query method to find records based on your table's indexes.

NOTE: By default, ElectroDB will paginate through all items that match your query. To limit the number of items ElectroDB will retrieve, read more about the Query Options pages and limit, or use the ElectroDB Pagination API for fine-grain pagination support.

Forming a composite Partition Key and Sort Key is a critical step in planning Access Patterns in DynamoDB. When planning composite keys, it is crucial to consider the order in which they are composed. As of the time of writing this documentation, DynamoDB has the following constraints that should be taken into account when planning your Access Patterns:

  1. You must always supply the Partition Key in full for all queries to DynamoDB.
  2. You currently only have the following operators available on a Sort Key: begins_with, between, >, >=, <, <=, and Equals.
  3. To act on a single record, you will need to know the full Partition Key and Sort Key for that record.

Using composite attributes to make hierarchical keys

Carefully considering your Composite Attribute order will allow ElectroDB to express hierarchical relationships and unlock more available Access Patterns for your application.

For example, let's say you have a StoreLocations Entity that represents Store Locations inside Malls:

Shopping Mall Stores

let schema = {  
    model: {
      service: "MallStoreDirectory",  
      entity: "MallStore",
      version: "1",    
    },  
    attributes: {
        cityId: {
            type: "string",
            required: true,
        }, 
        mallId: {  
            type: "string",  
            required: true,  
        },  
        storeId: {  
            type: "string",  
            required: true,  
        },  
        buildingId: {  
            type: "string",  
            required: true,  
        },  
        unitId: {  
            type: "string",  
            required: true,
        },  
        category: {  
            type: [
                "spite store",
                "food/coffee", 
                "food/meal", 
                "clothing", 
                "electronics", 
                "department", 
                "misc"
            ],  
            required: true  
        },  
        leaseEndDate: {  
            type: "string",  
            required: true  
        },
        rent: {
            type: "string",
            required: true,
            validate: /^(\d+\.\d{2})$/
        },
        discount: {
            type: "string",
            required: false,
            default: "0.00",
            validate: /^(\d+\.\d{2})$/
        }  
    },  
    indexes: {  
        stores: {  
            pk: {
                field: "pk",
                composite: ["cityId", "mallId"]
            }, 
            sk: {
                field: "sk",
                composite: ["buildingId", "storeId"]
            }  
        },  
        units: {  
            index: "gis1pk-gsi1sk-index",  
            pk: {
                field: "gis1pk",
                composite: ["mallId"]
            },  
            sk: {
                field: "gsi1sk",
                composite: ["buildingId", "unitId"]
            }  
        },
        leases: {
            index: "gis2pk-gsi2sk-index",
            pk: {
                field: "gis2pk",
                composite: ["storeId"]
            },  
            sk: {
                field: "gsi2sk",
                composite: ["leaseEndDate"]
            }  
        }
    }
};
const StoreLocations = new Entity(schema, {table: "StoreDirectory"});

Query App Records

Examples in this section use the MallStore schema defined above, and are available to interact with here: https://runkit.com/tywalch/electrodb-building-queries

All queries start from the Access Pattern defined in the schema.

const MallStore = new Entity(schema, {table: "StoreDirectory"}); 
// Each Access Pattern is available on the Entity instance
// MallStore.query.stores()
// MallStore.query.units()
// MallStore.query.leases()

Partition Key Composite Attributes

All queries require (at minimum) the Composite Attributes included in its defined Partition Key. Composite Attributes you define on the Sort Key can be partially supplied, but must be supplied in the order they are defined.

Important: Composite Attributes must be supplied in the order they are composed when invoking the Access Pattern. This is because composite attributes are used to form a concatenated key string, and if attributes are supplied out of order, it is not possible to fill the gaps in that concatenation.

const StoreLocations = new Entity({
  model: {
    service: "mallmgmt",
    entity: "store", 
    version: "1"
  },
  attributes: {
    cityId: "string",
    mallId: "string",
    storeId: "string",
    buildingId: "string",
    unitId: "string",
    name: "string",
    description: "string",
    category: "string"
  },
  indexes: {
    stores: {
      pk: {
        field: "pk",
        composite: ["cityId", "mallId"]
      },
      sk: {
        field: "sk",
        composite: ["storeId", "unitId"]
      }
    }
  }
}, {table: "StoreDirectory"});

const cityId = "Atlanta1";
const mallId = "EastPointe";
const storeId = "LatteLarrys";
const unitId = "B24";
const buildingId = "F34";

// Good: Includes at least the PK
StoreLocations.query.stores({cityId, mallId});

// Good: Includes at least the PK, and the first SK attribute
StoreLocations.query.stores({cityId, mallId, storeId});

// Good: Includes at least the PK, and all SK attributes
StoreLocations.query.stores({cityId, mallId, storeId, unitId});

// Bad: No PK composite attributes specified, will throw
StoreLocations.query.stores();

// Bad: Not All PK Composite Attributes included (cityId), will throw
StoreLocations.query.stores({mallId});

// Bad: Composite Attributes not included in order, will NOT throw, but will ignore `unitId` because `storeId` was not supplied as well
StoreLocations.query.stores({cityId, mallId, unitId});

Sort Key Operations

operator | use case
-------- | --------
begins | Keys starting with a particular set of characters.
between | Keys between a specified range.
gt | Keys greater than some value
gte | Keys greater than or equal to some value
lt | Keys less than some value
lte | Keys less than or equal to some value
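
A short sketch of these operators applied to the units access pattern defined above (values are illustrative):

// begins: all units in buildings whose id starts with "F3"
StoreLocations.query.units({mallId}).begins({buildingId: "F3"}).go();

// gte: all units in buildings "F34" and above
StoreLocations.query.units({mallId}).gte({buildingId: "F34"}).go();

// between: all units in buildings "F34" through "F40"
StoreLocations.query.units({mallId})
    .between({buildingId: "F34"}, {buildingId: "F40"})
    .go();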

Each record represents one Store location. All Stores are located in Malls we manage.

To satisfy requirements for searching based on location, you could use the following keys: Each StoreLocations record would have a Partition Key with the store's storeId. This key alone is not enough to identify a particular store. To solve this, compose a Sort Key for the store's location attribute ordered hierarchically (mall/building/unit): ["mallId", "buildingId", "unitId"].

The StoreLocations entity above, using just the stores index alone, enables four Access Patterns:

  1. All LatteLarrys locations in all Malls
  2. All LatteLarrys locations in one Mall
  3. All LatteLarrys locations in one Mall, in one specific Building
  4. A specific LatteLarrys inside of a Mall, Building, and Unit

Query Chains

Queries in ElectroDB are built around the Access Patterns defined in the Schema and are capable of using partial key Composite Attributes to create performant lookups. To accomplish this, ElectroDB offers a predictable chainable API.

Examples in this section use the StoreLocations schema defined above and can be experimented with directly on runkit: https://runkit.com/tywalch/electrodb-building-queries

The methods Get (get), Create (put), Update (update), and Delete (delete) require all composite attributes described in the Entity's primary PK and SK.

Query Method

ElectroDB queries use DynamoDB's query method to find records based on your table's indexes. To read more about queries, check out the section Building Queries.

NOTE: By default, ElectroDB will paginate through all items that match your query. To limit the number of items ElectroDB will retrieve, read more about the Query Options pages and limit, or use the ElectroDB Pagination API for fine-grain pagination support.
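
For instance, a minimal sketch using the limit Query Option to cap retrieval:

// Retrieve at most 20 items for this query
await StoreLocations.query
    .units({mallId: "EastPointe"})
    .go({limit: 20});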

Get Method

Provide all Table Index composite attributes in an object to the get method. In the event no record is found, a value of null will be returned.

NOTE: As part of ElectroDB's roll out of 1.0.0, a breaking change was made to the get method. Prior to 1.0.0, the get method would return an empty object if a record was not found. This has been changed to now return a value of null in this case.

let results = await StoreLocations.get({
    storeId: "LatteLarrys", 
    mallId: "EastPointe", 
    buildingId: "F34", 
    cityId: "Atlanta1"
}).go();

// Equivalent Params:
// {
//   Key: {
//     pk: "$mallstoredirectory#cityid_atlanta1#mallid_eastpointe",
//     sk: "$mallstore_1#buildingid_f34#storeid_lattelarrys"
//   },
//   TableName: 'StoreDirectory'
// }

Batch Get

Provide all Table Index composite attributes in an array of objects to the get method to perform a BatchGet query.

NOTE: Performing a BatchGet will return a response structure unique to BatchGet: a two-dimensional array with the results of the query and any unprocessed records. See the example below. Additionally, when performing a BatchGet the .params() method will return an array of parameters, rather than just the parameters for one docClient query. This is because ElectroDB supports BatchGet queries larger than the docClient's limit of 100 records.

If the number of records you are requesting is above the BatchGet threshold of 100 records, ElectroDB will make multiple requests to DynamoDB and return the results in a single array. By default, ElectroDB will make these requests in series, one after another. If you are confident your table can handle the throughput, you can use the Query Option concurrent. This value can be set to any number greater than zero, and will execute that number of requests simultaneously.

For example, 150 records (50 records over the DynamoDB maximum):

The default value of concurrent will be 1. ElectroDB will execute a BatchGet request of 100, then after that request has responded, make another BatchGet request for 50 records.

If you set the Query Option concurrent to 2, ElectroDB will execute a BatchGet request of 100 records, and another BatchGet request for 50 records without waiting for the first request to finish.

It is important to consider your table's throughput when setting this value.

let [results, unprocessed] = await StoreLocations.get([
    {
        storeId: "LatteLarrys", 
        mallId: "EastPointe", 
        buildingId: "F34", 
        cityId: "Atlanta1"
    },
    {
        storeId: "MochaJoes", 
        mallId: "WestEnd", 
        buildingId: "A21", 
        cityId: "Madison2"
    }   
]).go({concurrent: 1}); // `concurrent` value is optional and defaults to `1`

// Equivalent Params:
// {
//   "RequestItems": {
//     "electro": {
//       "Keys": [
//         {
//           "pk": "$mallstoredirectory#cityid_atlanta1#mallid_eastpointe",
//           "sk": "$mallstore_1#buildingid_f34#storeid_lattelarrys"
//         },
//         {
//           "pk": "$mallstoredirectory#cityid_madison2#mallid_westend",
//           "sk": "$mallstore_1#buildingid_a21#storeid_mochajoes"
//         }
//       ]
//     }
//   }
// }

The two-dimensional array returned by batch get is most easily used when destructured into two variables, in the above case: results and unprocessed.

The results array contains the records that were returned by DynamoDB as Responses on the BatchGet query. They will appear in the same format as other ElectroDB queries.

Elements of the unprocessed array are unlike results received from a query. Instead of containing all the attributes of a record, an unprocessed record only includes the composite attributes defined in the Table Index. This is in keeping with DynamoDB's practice of returning only Keys in the case of unprocessed records. For convenience, ElectroDB will return these keys as composite attributes, but you can pass the query option {unprocessed: "raw"} to override this behavior and return the Keys as they came from DynamoDB.
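
A minimal sketch of that override:

// Return unprocessed keys exactly as DynamoDB sent them
let [results, unprocessed] = await StoreLocations.get([
    {storeId, mallId, buildingId, cityId}
]).go({unprocessed: "raw"});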

Delete Method

Provide all Table Index composite attributes in an object to the delete method to delete a record.

await StoreLocations.delete({
    storeId: "LatteLarrys", 
    mallId: "EastPointe", 
    buildingId: "F34", 
    cityId: "Atlanta1"
}).go();

// Equivalent Params:
// {
//   Key: {
//     pk: "$mallstoredirectory#cityid_atlanta1#mallid_eastpointe",
//     sk: "$mallstore_1#buildingid_f34#storeid_lattelarrys"
//   },
//   TableName: 'StoreDirectory'
// }

Batch Write Delete Records

Provide all table index composite attributes in an array of objects to the delete method to batch delete records.

NOTE: Performing a Batch Delete will return an array of "unprocessed" records. An empty array signifies all records were processed. If you want the raw DynamoDB response you can always use the option {raw: true}, more detail found here: Query Options. Additionally, when performing a BatchWrite the .params() method will return an array of parameters, rather than just the parameters for one docClient query. This is because ElectroDB supports BatchWrite queries larger than the docClient's limit of 25 records.

If the number of records you are requesting is above the BatchWrite threshold of 25 records, ElectroDB will make multiple requests to DynamoDB and return the results in a single array. By default, ElectroDB will make these requests in series, one after another. If you are confident your table can handle the throughput, you can use the Query Option concurrent. This value can be set to any number greater than zero, and will execute that number of requests simultaneously.

For example, 75 records (50 records over the DynamoDB maximum):

The default value of concurrent will be 1. ElectroDB will execute a BatchWrite request of 25, then after that request has responded, make another BatchWrite request for 25 records, and then another.

If you set the Query Option concurrent to 2, ElectroDB will execute a BatchWrite request of 25 records, and another BatchWrite request for 25 records without waiting for the first request to finish. After those two have finished it will execute another BatchWrite request for 25 records.

It is important to consider your table's throughput when setting this value.

let unprocessed = await StoreLocations.delete([
    {
        storeId: "LatteLarrys", 
        mallId: "EastPointe", 
        buildingId: "F34", 
        cityId: "LosAngeles1"
    },
    {
        storeId: "MochaJoes", 
        mallId: "EastPointe", 
        buildingId: "F35", 
        cityId: "LosAngeles1"
    }
]).go({concurrent: 1}); // `concurrent` value is optional and defaults to `1`

// Equivalent Params:
{
  "RequestItems": {
    "StoreDirectory": [
      {
        "DeleteRequest": {
          "Key": {
            "pk": "$mallstoredirectory#cityid_losangeles1#mallid_eastpointe",
            "sk": "$mallstore_1#buildingid_f34#storeid_lattelarrys"
          }
        }
      },
      {
        "DeleteRequest": {
          "Key": {
            "pk": "$mallstoredirectory#cityid_losangeles1#mallid_eastpointe",
            "sk": "$mallstore_1#buildingid_f35#storeid_mochajoes"
          }
        }
      }
    ]
  }
}

Elements of the unprocessed array are unlike results received from a query. Instead of containing all the attributes of a record, an unprocessed record only includes the composite attributes defined in the Table Index. This is in keeping with DynamoDB's practice of returning only Keys in the case of unprocessed records. For convenience, ElectroDB will return these keys as composite attributes, but you can pass the query option {unprocessed: "raw"} to override this behavior and return the Keys as they came from DynamoDB.

Put Record

Provide all required Attributes as defined in the model to create a new record. ElectroDB will enforce any defined validations, defaults, casting, and field aliasing. A Put operation will trigger the default and set attribute callbacks when writing to DynamoDB. By default, after performing a put() or create() operation, ElectroDB will format and return the record through the same process as a Get/Query. This process will invoke the get callback on all included attributes. If this behaviour is not desired, use the Query Option response:"none" to return a null value.

This example includes an optional conditional expression:

await StoreLocations
  .put({
      cityId: "Atlanta1",
      storeId: "LatteLarrys",
      mallId: "EastPointe",
      buildingId: "BuildingA1",
      unitId: "B47",
      category: "food/coffee",
      leaseEndDate: "2020-03-22",
      rent: "4500.00"
  })
  .where((attr, op) => op.eq(attr.rent, "4500.00"))
  .go()

// Equivalent Params:
{
  "Item": {
    "cityId": "Atlanta1",
    "mallId": "EastPointe",
    "storeId": "LatteLarrys",
    "buildingId": "BuildingA1",
    "unitId": "B47",
    "category": "food/coffee",
    "leaseEndDate": "2020-03-22",
    "rent": "4500.00",
    "discount": "0.00",
    "pk": "$mallstoredirectory#cityid_atlanta1#mallid_eastpointe",
    "sk": "$mallstore_1#buildingid_buildinga1#storeid_lattelarrys",
    "gis1pk": "$mallstoredirectory#mallid_eastpointe",
    "gsi1sk": "$mallstore_1#buildingid_buildinga1#unitid_b47",
    "gis2pk": "$mallstoredirectory#storeid_lattelarrys",
    "gsi2sk": "$mallstore_1#leaseenddate_2020-03-22",
    "__edb_e__": "MallStore",
    "__edb_v__": "1"
  },
  "TableName": "StoreDirectory",
  "ConditionExpression": "#rent = :rent_w1",
  "ExpressionAttributeNames": {
    "#rent": "rent"
  },
  "ExpressionAttributeValues": {
    ":rent_w1": "4500.00"
  }
}

Batch Write Put Records

Provide all required Attributes as defined in the model, as an array of objects to .put(), to create multiple records. ElectroDB will enforce any defined validations, defaults, casting, and field aliasing. Another convenience ElectroDB provides is accepting BatchWrite arrays larger than the 25 record limit. This is achieved by making multiple parallel requests to DynamoDB for batches of 25 records at a time. A failure in any of these requests will cause the query to throw, so be mindful of your table's configured throughput.

NOTE: Performing a Batch Put will return an array of "unprocessed" records. An empty array signifies all records were processed. If you want the raw DynamoDB response you can always use the option {raw: true}, more detail found here: Query Options. Additionally, when performing a BatchWrite the .params() method will return an array of parameters, rather than just the parameters for one docClient query. This is because ElectroDB supports BatchWrite queries larger than the docClient's limit of 25 records.

If the number of records you are requesting is above the BatchWrite threshold of 25 records, ElectroDB will make multiple requests to DynamoDB and return the results in a single array. By default, ElectroDB will make these requests in series, one after another. If you are confident your table can handle the throughput, you can use the Query Option concurrent. This value can be set to any number greater than zero, and will execute that number of requests simultaneously.

For example, 75 records (50 records over the DynamoDB maximum):

The default value of concurrent will be 1. ElectroDB will execute a BatchWrite request of 25, then after that request has responded, make another BatchWrite request for 25 records, and then another.

If you set the Query Option concurrent to 2, ElectroDB will execute a BatchWrite request of 25 records, and another BatchWrite request for 25 records without waiting for the first request to finish. After those two have finished it will execute another BatchWrite request for 25 records.

It is important to consider your table's throughput when setting this value.

let unprocessed = await StoreLocations.put([
    {
        cityId: "LosAngeles1",
        storeId: "LatteLarrys",
        mallId: "EastPointe",
        buildingId: "F34",
        unitId: "a1",
        category: "food/coffee",
        leaseEndDate: "2022-03-22",
        rent: "4500.00"
    },
    {
        cityId: "LosAngeles1",
        storeId: "MochaJoes",
        mallId: "EastPointe",
        buildingId: "F35",
        unitId: "a2",
        category: "food/coffee",
        leaseEndDate: "2021-01-22",
        rent: "1500.00"
    }
]).go({concurrent: 1}); // `concurrent` value is optional and defaults to `1`

// Equivalent Params:
{
  "RequestItems": {
    "StoreDirectory": [
      {
        "PutRequest": {
          "Item": {
            "cityId": "LosAngeles1",
            "mallId": "EastPointe",
            "storeId": "LatteLarrys",
            "buildingId": "F34",
            "unitId": "a1",
            "category": "food/coffee",
            "leaseEndDate": "2022-03-22",
            "rent": "4500.00",
            "discount": "0.00",
            "pk": "$mallstoredirectory#cityid_losangeles1#mallid_eastpointe",
            "sk": "$mallstore_1#buildingid_f34#storeid_lattelarrys",
            "gis1pk": "$mallstoredirectory#mallid_eastpointe",
            "gsi1sk": "$mallstore_1#buildingid_f34#unitid_a1",
            "gis2pk": "$mallstoredirectory#storeid_lattelarrys",
            "gsi2sk": "$mallstore_1#leaseenddate_2022-03-22",
            "__edb_e__": "MallStore",
            "__edb_v__": "1"
          }
        }
      },
      {
        "PutRequest": {
          "Item": {
            "cityId": "LosAngeles1",
            "mallId": "EastPointe",
            "storeId": "MochaJoes",
            "buildingId": "F35",
            "unitId": "a2",
            "category": "food/coffee",
            "leaseEndDate": "2021-01-22",
            "rent": "1500.00",
            "discount": "0.00",
            "pk": "$mallstoredirectory#cityid_losangeles1#mallid_eastpointe",
            "sk": "$mallstore_1#buildingid_f35#storeid_mochajoes",
            "gis1pk": "$mallstoredirectory#mallid_eastpointe",
            "gsi1sk": "$mallstore_1#buildingid_f35#unitid_a2",
            "gis2pk": "$mallstoredirectory#storeid_mochajoes",
            "gsi2sk": "$mallstore_1#leaseenddate_2021-01-22",
            "__edb_e__": "MallStore",
            "__edb_v__": "1"
          }
        }
      }
    ]
  }
}

Elements of the unprocessed array are unlike results received from a query. Instead of containing all the attributes of a record, an unprocessed record only includes the composite attributes defined in the Table Index. This is in keeping with DynamoDB's practice of returning only Keys in the case of unprocessed records. For convenience, ElectroDB will return these keys as composite attributes, but you can pass the query option {unprocessed: "raw"} to override this behavior and return the Keys as they came from DynamoDB.

Update Record

Update Methods are available after the update() method is called, and allow you to alter an item stored in DynamoDB. The methods can be used (and reused) in a chain to form update parameters, finished with .params(), or an update operation, finished with .go(). If your application requires the update method to return values related to the update (e.g. via the ReturnValues DocumentClient parameter), you can use the Query Option {response: "none" | "all_old" | "updated_old" | "all_new" | "updated_new"} with the value that matches your need. By default, the Update operation returns an empty object when using .go().
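
For example, a sketch of asking DynamoDB to return the item as it appears after the update:

const updated = await StoreLocations
    .update({cityId, mallId, storeId, buildingId})
    .set({category: "food/meal"})
    .go({response: "all_new"});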

ElectroDB will validate an attribute's type when performing an operation (e.g. that the subtract() method can only be performed on numbers), but will defer checking the logical validity of your update operation to the DocumentClient. If your query performs multiple mutations on a single attribute, or performs other operations that are invalid given the nature of an item/attribute, ElectroDB will not validate these edge cases and will simply pass back any error(s) thrown by the DocumentClient.

Update Method | Attribute Types | Parameter
------------- | --------------- | ---------
set | number, string, boolean, enum, map, list, set, any | object
remove | number, string, boolean, enum, map, list, set, any | array
add | number, set, any | object
subtract | number | object
append | list, any | object
delete | set, any | object
data | * | callback

Updates to Composite Attributes

ElectroDB adds some constraints to update calls to prevent the accidental loss of data. If an access pattern is defined with multiple composite attributes, ElectroDB ensures those attributes cannot be updated individually. If an attribute involved in an index composite is updated, the index key must also be updated; and if the whole key cannot be formed from the attributes supplied to the update, ElectroDB cannot create the new composite key without risking an overwrite of the old data.

This example shows why a partial update to a composite key is prevented by ElectroDB:

{
  "index": "my-gsi",
  "pk": {
    "field": "gsi1pk",
    "composite": ["attr1"]
  },
  "sk": {
    "field": "gsi1sk",
    "composite": ["attr2", "attr3"]
  }
}

The above secondary index definition would generate the following index keys:

{
  "gsi1pk": "$service#attr1_value1",
  "gsi1sk": "$entity_version#attr2_value2#attr3_value6"
}

If a user attempts to update the attribute attr2, then ElectroDB has no way of knowing the value of the attribute attr3, or whether forming the composite key without it would overwrite its value. The same problem exists if a user were to update attr3: ElectroDB cannot update the key without knowing each composite attribute's value.
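
A sketch of this constraint in practice (the table's primary key attributes are elided as {...}):

// Throws: attr2 alone cannot rebuild "gsi1sk" without knowing attr3
entity.update({...}).set({attr2: "newValue"}).go();

// OK: with both attr2 and attr3 supplied, the full sort key can be rebuilt
entity.update({...}).set({attr2: "newValue", attr3: "newValue"}).go();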

In the event that a secondary index includes composite values from the table's primary index, ElectroDB will draw from the values supplied for the update key to address index gaps in the secondary index. For example:

For the defined indexes:

{
  "accessPattern1": {
    "pk": {
      "field": "pk",
      "composite": ["attr1"]
    },
    "sk": {
      "field": "sk",
      "composite": ["attr2"]
    }
  },
  "accessPattern2": {
    "index": "my-gsi",
    "pk": {
      "field": "gsi1pk",
      "composite": ["attr3"]
    },
    "sk": {
      "field": "gsi1sk",
      "composite": ["attr2", "attr4"]
    }
  }
}

A user could update attr4 alone because ElectroDB is able to leverage the value for attr2 from values supplied to the update() method:

entity.update({ attr1: "value1", attr2: "value2" })
  .set({ attr4: "value4" })
  .go();

{
  "UpdateExpression": "SET #attr4 = :attr4_u0, #gsi1sk = :gsi1sk_u0, #attr1 = :attr1_u0, #attr2 = :attr2_u0",
  "ExpressionAttributeNames": {
    "#attr4": "attr4",
    "#gsi1sk": "gsi1sk",
    "#attr1": "attr1",
    "#attr2": "attr2"
  },
  "ExpressionAttributeValues": {
    ":attr4_u0": "value4",
    // This index was successfully built
    ":gsi1sk_u0": "$update-edgecases_1#attr2_value2#attr4_value4",
    ":attr1_u0": "value1",
    ":attr2_u0": "value2"
  },
  "TableName": "test_table",
  "Key": { 
    "pk": "$service#attr1_value1", 
    "sk": "$entity_version#attr2_value2" 
  }
}

Note: Included in the update are all attributes from the table's primary index. These values are automatically included on all updates in the event an update results in an insert.

Update Method: Set

The set() method will accept all attributes defined on the model. Provide a value to apply or replace onto the item.

await StoreLocations
    .update({cityId, mallId, storeId, buildingId})
    .set({category: "food/meal"})
    .where((attr, op) => op.eq(attr.category, "food/coffee"))
    .go()

// Equivalent Params:
{
  "UpdateExpression": "SET #category = :category",
  "ExpressionAttributeNames": {
    "#category": "category"
  },
  "ExpressionAttributeValues": {
    ":category_w1": "food/coffee",
    ":category": "food/meal"
  },
  "TableName": "StoreDirectory",
  "Key": {
    "pk": "$mallstoredirectory#cityid_atlanta1#mallid_eastpointe",
    "sk": "$mallstore_1#buildingid_f34#storeid_lattelarrys"
  },
  "ConditionExpression": "#category = :category_w1"
}

Update Method: Remove

The remove() method will accept all attributes defined on the model. Unlike most other update methods, the remove() method accepts an array with the names of the attributes that should be removed.

NOTE: the attribute property required functions as a sort of NOT NULL flag. Because of this, if a property exists with required:true it will not be possible to remove that property on its own. If the attribute is a property on a map, and the map itself is not required, then the map can be removed.

await StoreLocations
    .update({cityId, mallId, storeId, buildingId})
    .remove(["category"])
    .where((attr, op) => op.eq(attr.category, "food/coffee"))
    .go()

// Equivalent Params:
{
  "UpdateExpression": "REMOVE #category",
  "ExpressionAttributeNames": {
    "#category": "category"
  },
  "ExpressionAttributeValues": {
    ":category0": "food/coffee"
  },
  "TableName": "StoreDirectory",
  "Key": {
    "pk": "$mallstoredirectory#cityid_atlanta#mallid_eastpointe",
    "sk": "$mallstore_1#buildingid_a34#storeid_lattelarrys"
  },
  "ConditionExpression": "#category = :category0"
}

Update Method: Add

The add() method will accept attributes with type number, set, and any defined on the model. In the case of a number attribute, provide a number to add to the existing attribute's value on the item.

If the attribute is defined as any, the syntax compatible with the attribute type set will be used. For this reason, do not use the attribute type any to represent a number.

const newTenant = client.createSet("larry");

await StoreLocations
    .update({cityId, mallId, storeId, buildingId})
    .add({
      rent: 100,         // "number" attribute
      tenant: ["larry"]  // "set" attribute
    })
    .where((attr, op) => op.eq(attr.category, "food/coffee"))
    .go()

// Equivalent Params:
{
  "UpdateExpression": "SET #rent = #rent + :rent0 ADD #tenant :tenant0",
  "ExpressionAttributeNames": {
    "#category": "category",
    "#rent": "rent",
    "#tenant": "tenant"
  },
  "ExpressionAttributeValues": {
    ":category0": "food/coffee",
    ":rent0": 100,
    ":tenant0": ["larry"]
  },
  "TableName": "StoreDirectory",
  "Key": {
    "pk": "$mallstoredirectory#cityid_atlanta#mallid_eastpointe",
    "sk": "$mallstore_1#buildingid_a34#storeid_lattelarrys"
  },
  "ConditionExpression": "#category = :category0"
}

Update Method: Subtract

The subtract() method will accept attributes with type number. In the case of a number attribute, provide a number to subtract from the existing attribute's value on the item.

await StoreLocations
    .update({cityId, mallId, storeId, buildingId})
    .subtract({deposit: 500})
    .where((attr, op) => op.eq(attr.category, "food/coffee"))
    .go()

// Equivalent Params:
{
  "UpdateExpression": "SET #deposit = #deposit - :deposit0",
  "ExpressionAttributeNames": {
    "#category": "category",
    "#deposit": "deposit"
  },
  "ExpressionAttributeValues": {
    ":category0": "food/coffee",
    ":deposit0": 500
  },
  "TableName": "StoreDirectory",
  "Key": {
    "pk": "$mallstoredirectory#cityid_atlanta#mallid_eastpointe",
    "sk": "$mallstore_1#buildingid_a34#storeid_lattelarrys"
  },
  "ConditionExpression": "#category = :category0"
}

Update Method: Append

The append() method will accept attributes with type list or any. This is a convenience method for working with DynamoDB lists, and is notably different from set because it will add an element to an existing array, rather than overwrite the existing value.

await StoreLocations
    .update({cityId, mallId, storeId, buildingId})
    .append({
      rentalAgreement: [{
        type: "ammendment", 
        detail: "no soup for you"
      }]
    })
    .where((attr, op) => op.eq(attr.category, "food/coffee"))
    .go()

// Equivalent Params:
{
  "UpdateExpression": "SET #rentalAgreement = list_append(#rentalAgreement, :rentalAgreement0)",
  "ExpressionAttributeNames": {
    "#category": "category",
    "#rentalAgreement": "rentalAgreement"
  },
  "ExpressionAttributeValues": {
    ":category0": "food/coffee",
    ":rentalAgreement0": [
      {
        "type": "ammendment",
        "detail": "no soup for you"
      }
    ]
  },
  "TableName": "StoreDirectory",
  "Key": {
    "pk": "$mallstoredirectory#cityid_atlanta#mallid_eastpointe",
    "sk": "$mallstore_1#buildingid_a34#storeid_lattelarrys"
  },
  "ConditionExpression": "#category = :category0"
}

Update Method: Delete

The delete() method will accept attributes with type any or set. This operation removes items from the contact attribute, defined here as a set attribute.

await StoreLocations
    .update({cityId, mallId, storeId, buildingId})
    .delete({contact: ['555-345-2222']})
    .where((attr, op) => op.eq(attr.category, "food/coffee"))
    .go()

// Equivalent Params:
{
  "UpdateExpression": "DELETE #contact :contact0",
  "ExpressionAttributeNames": {
    "#category": "category",
    "#contact": "contact"
  },
  "ExpressionAttributeValues": {
    ":category0": "food/coffee",
    ":contact0": ["555-345-2222"]
  },
  "TableName": "StoreDirectory",
  "Key": {
    "pk": "$mallstoredirectory#cityid_atlanta#mallid_eastpointe",
    "sk": "$mallstore_1#buildingid_a34#storeid_lattelarrys"
  },
  "ConditionExpression": "#category = :category0"
}

Update Method: Data

The data() method allows for a different approach to updating your item, by accepting a callback with a similar argument signature to the where clause.

The callback provided to the data method is injected with an attributes object as the first parameter, and an operations object as the second parameter. All operations accept an attribute from the attributes object as a first parameter, and optionally accept a second value parameter.

As mentioned above, this method is functionally similar to the where clause with one exception: the callback provided to data() is not expected to return a value. When you invoke an injected operation method, the side effects are applied directly to the update expression you are building.

operation | example | result | description
--------- | ------- | ------ | -----------
set | set(category, value) | #category = :category0 | Add or overwrite existing value
add | add(tenant, name) | #tenant :tenant1 | Add value to existing set attribute (used when the provided attribute is of type any or set)
add | add(rent, amount) | #rent = #rent + :rent0 | Mathematically add a given number to the existing number on the record
subtract | subtract(deposit, amount) | #deposit = #deposit - :deposit0 | Mathematically subtract a given number from the existing number on the record
remove | remove(petFee) | #petFee | Remove attribute/property from item
append | append(rentalAgreement, amendment) | #rentalAgreement = list_append(#rentalAgreement, :rentalAgreement0) | Add element to existing list attribute
delete | delete(tenant, name) | #tenant :tenant1 | Remove item from existing set attribute
del | del(tenant, name) | #tenant :tenant1 | Alias for the delete operation
name | name(rent) | #rent | Reference another attribute's name; can be passed to other operations to leverage existing attribute values in calculating new values
value | value(rent, value) | :rent1 | Create a reference to a particular value; can be passed to other operations to leverage existing attribute values in calculating new values

await StoreLocations
    .update({cityId, mallId, storeId, buildingId})
    .data((a, o) => {
        const newTenant = a.value(a.tenant, "larry");
        o.set(a.category, "food/meal");   // electrodb "enum"   -> dynamodb "string"
        o.add(a.tenant, newTenant);       // electrodb "set"    -> dynamodb "set"
        o.add(a.rent, 100);               // electrodb "number" -> dynamodb "number"
        o.subtract(a.deposit, 200);       // electrodb "number" -> dynamodb "number"
        o.remove(a.leaseEndDate);         // electrodb "string" -> dynamodb "string"
        o.append(a.rentalAgreement, [{    // electrodb "list"   -> dynamodb "list"
            type: "ammendment",           // electrodb "map"    -> dynamodb "map"
            detail: "no soup for you"
        }]);
        o.delete(a.tags, ['coffee']);     // electrodb "set"    -> dynamodb "set"
        o.del(a.contact, '555-345-2222');      // electrodb "set"    -> dynamodb "set"
        o.add(a.totalFees, o.name(a.petFee));  // electrodb "number" -> dynamodb "number"
        o.add(a.leaseHolders, newTenant); // electrodb "set"    -> dynamodb "set"
    })
    .where((attr, op) => op.eq(attr.category, "food/coffee"))
    .go()

// Equivalent Params:
{
  "UpdateExpression": "SET #category = :category_u0, #rent = #rent + :rent_u0, #deposit = #deposit - :deposit_u0, #rentalAgreement = list_append(#rentalAgreement, :rentalAgreement_u0), #totalFees = #totalFees + #petFee REMOVE #leaseEndDate, #gsi2sk ADD #tenant :tenant_u0, #leaseHolders :tenant_u0 DELETE #tags :tags_u0, #contact :contact_u0",
  "ExpressionAttributeNames": {
  "#category": "category",
    "#tenant": "tenant",
    "#rent": "rent",
    "#deposit": "deposit",
    "#leaseEndDate": "leaseEndDate",
    "#rentalAgreement": "rentalAgreement",
    "#tags": "tags",
    "#contact": "contact",
    "#totalFees": "totalFees",
    "#petFee": "petFee",
    "#leaseHolders": "leaseHolders",
    "#gsi2sk": "gsi2sk"
  },
  "ExpressionAttributeValues": {
    ":category0": "food/coffee",
    ":category_u0": "food/meal",
    ":tenant_u0": ["larry"],
    ":rent_u0": 100,
    ":deposit_u0": 200,
    ":rentalAgreement_u0": [{
      "type": "amendment",
      "detail": "no soup for you"
    }],
    ":tags_u0": ["coffee"], // <- DynamoDB Set
    ":contact_u0": ["555-345-2222"] // <- DynamoDB Set
  },
  "TableName": "electro",
  "Key": {
    "pk": `$mallstoredirectory#cityid_12345#mallid_eastpointe`,
    "sk": "$mallstore_1#buildingid_a34#storeid_lattelarrys"
  },
  "ConditionExpression": "#category = :category0"
}

Update Method: Complex Data Types

ElectroDB supports updating DynamoDB's complex types (list, map, set) with all of its Update Methods.

When using the chain methods set, add, subtract, remove, append, and delete, you can access map properties, list elements, and set items by supplying the json path of the property as the name of the attribute.

The data() method also allows for working with complex types. Unlike using the update chain methods, the data() method ensures type safety when using TypeScript. When using the injected attributes object, simply drill into the attribute itself to apply your update directly to the required object.

The following are examples of how to update complex attributes, using both chain methods and the data() method.

Example 1: Set property on a map attribute

Specifying a property on a map attribute is expressed with dot notation.

// via Chain Method
await StoreLocations
    .update({cityId, mallId, storeId, buildingId})
    .set({'mapAttribute.mapProperty':  "value"})
    .go();

// via Data Method 
await StoreLocations
    .update({cityId, mallId, storeId, buildingId})
    .data(({mapAttribute}, {set}) => set(mapAttribute.mapProperty, "value"))
    .go()

Example 2: Removing an element from a list attribute

Specifying an index on a list attribute is expressed with square brackets containing the element's index number.

// via Chain Method
await StoreLocations
    .update({cityId, mallId, storeId, buildingId})
    .remove(['listAttribute[0]'])
    .go();

// via Data Method 
await StoreLocations
    .update({cityId, mallId, storeId, buildingId})
    .data(({listAttribute}, {remove}) => remove(listAttribute[0]))
    .go();

Example 3: Adding an item to a set attribute, on a map attribute, that is an element of a list attribute

All other complex structures are simply variations on the above two examples.

// Set values must use the DocumentClient to create a `set`
const newSetValue = StoreLocations.client.createSet("setItemValue"); 

// via Data Method 
await StoreLocations
    .update({cityId, mallId, storeId, buildingId})
    .add({'listAttribute[1].setAttribute': newSetValue})
    .go();

await StoreLocations
    .update({cityId, mallId, storeId, buildingId})
    .data(({listAttribute}, {add}) => {
        add(listAttribute[1].setAttribute, newSetValue)
    })
    .go();

Scan Records

When scanning for rows, you can use filters the same as you would any query. For more information on filters, see the Where section.

Note: Scan functionality will be scoped to your Entity. This means your results will only include records that match the Entity defined in the model.

await StoreLocations.scan
    .where(({category}, {eq}) => `
        ${eq(category, "food/coffee")} OR ${eq(category, "spite store")}  
    `)
    .where(({leaseEndDate}, {between}) => `
        ${between(leaseEndDate, "2020-03", "2020-04")}
    `)
    .go()

// Equivalent Params:
{
  "TableName": "StoreDirectory",
  "ExpressionAttributeNames": {
    "#category": "category",
    "#leaseEndDate": "leaseEndDate",
    "#pk": "pk",
    "#sk": "sk",
    "#__edb_e__": "__edb_e__",
    "#__edb_v__": "__edb_v__"
  },
  "ExpressionAttributeValues": {
    ":category_w1": "food/coffee",
    ":category_w2": "spite store",
    ":leaseEndDate_w1": "2020-03",
    ":leaseEndDate_w2": "2020-04",
    ":pk": "$mallstoredirectory#cityid_",
    ":sk": "$mallstore_1#buildingid_",
    ":__edb_e__": "MallStore",
    ":__edb_v__": "1"
  },
  "FilterExpression": "begins_with(#pk, :pk) AND #__edb_e__ = :__edb_e__ AND #__edb_v__ = :__edb_v__ AND begins_with(#sk, :sk) AND (#category = :category_w1 OR #category = :category_w2) AND (#leaseEndDate between :leaseEndDate_w1 and :leaseEndDate_w2)"
}

Remove Method

A convenience method for delete with a ConditionExpression asserting that the item being deleted exists. Provide all Table Index composite attributes in an object to the remove method to remove the record.

await StoreLocations.remove({
    storeId: "LatteLarrys", 
    mallId: "EastPointe", 
    buildingId: "F34", 
    cityId: "Atlanta1"
}).go();

// Equivalent Params:
// {
//   Key: {
//     pk: "$mallstoredirectory#cityid_atlanta1#mallid_eastpointe",
//     sk: "$mallstore_1#buildingid_f34#storeid_lattelarrys"
//   },
//   TableName: 'StoreDirectory'
//   ConditionExpression: 'attribute_exists(pk) AND attribute_exists(sk)'
// }

Patch Record

In DynamoDB, update operations by default will insert a record if the record being updated does not exist. In ElectroDB, the patch method will utilize the attribute_exists() condition dynamically to ensure records are only "patched" and not inserted when updating.

For more detail on how to use the patch() method, see the section Update Record to see all the transferable requirements and capabilities available to patch().
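
As a sketch, a patch chain reads just like an update, with an existence condition added to the generated parameters:

await StoreLocations
    .patch({cityId, mallId, storeId, buildingId})
    .set({category: "food/meal"})
    .go();

// Generated params match update(), plus:
// "ConditionExpression": "attribute_exists(pk) AND attribute_exists(sk)"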

Create Record

In DynamoDB, put operations by default will overwrite a record if the record already exists. In ElectroDB, the create method will utilize the attribute_not_exists() condition dynamically to ensure records are only "created" and not overwritten when inserting new records into the table.

A Put operation will trigger the default, and set attribute callbacks when writing to DynamoDB. By default, after writing to DynamoDB, ElectroDB will format and return the record through the same process as a Get/Query, which will invoke the get callback on all included attributes. If this behaviour is not desired, use the Query Option response:"none" to return a null value.

await StoreLocations
  .create({
      cityId: "Atlanta1",
      storeId: "LatteLarrys",
      mallId: "EastPointe",
      buildingId: "BuildingA1",
      unitId: "B47",
      category: "food/coffee",
      leaseEndDate: "2020-03-22",
      rent: "4500.00"
  })
  .where((attr, op) => op.eq(attr.rent, "4500.00"))
  .go()

// Equivalent Params:
{
  "Item": {
    "cityId": "Atlanta1",
    "mallId": "EastPointe",
    "storeId": "LatteLarrys",
    "buildingId": "BuildingA1",
    "unitId": "B47",
    "category": "food/coffee",
    "leaseEndDate": "2020-03-22",
    "rent": "4500.00",
    "discount": "0.00",
    "pk": "$mallstoredirectory#cityid_atlanta1#mallid_eastpointe",
    "sk": "$mallstore_1#buildingid_buildinga1#storeid_lattelarrys",
    "gis1pk": "$mallstoredirectory#mallid_eastpointe",
    "gsi1sk": "$mallstore_1#buildingid_buildinga1#unitid_b47",
    "gis2pk": "$mallstoredirectory#storeid_lattelarrys",
    "gsi2sk": "$mallstore_1#leaseenddate_2020-03-22",
    "__edb_e__": "MallStore",
    "__edb_v__": "1"
  },
  "TableName": "StoreDirectory",
  "ConditionExpression": "attribute_not_exists(pk) AND attribute_not_exists(sk) AND #rent = :rent_w1",
  "ExpressionAttributeNames": {
    "#rent": "rent"
  },
  "ExpressionAttributeValues": {
    ":rent_w1": "4500.00"
  }
}

Find Records

DynamoDB offers three methods to query records: get, query, and scan. In ElectroDB, there is a fourth type: find. Unlike get and query, the find method does not require you to provide keys, but under the covers it will leverage the attributes provided to choose the best index to query on. Provide the find method with all properties known to match a record, and ElectroDB will generate the most performant query it can to locate the results. This can be helpful with highly dynamic querying needs. If an index cannot be satisfied with the attributes provided, scan will be used as a last resort.

NOTE: The Find method is similar to the Match method with one exception: The attributes you supply directly to the .find() method will only be used to identify and fulfill your index access patterns. Any values supplied that do not contribute to a composite key will not be applied as query filters. Furthermore, if the values you provide do not resolve to an index access pattern, then a table scan will be performed. Use the where() chain method to further filter beyond keys, or use Match for the convenience of automatic filtering based on the values given directly to that method.

await StoreLocations.find({
    mallId: "EastPointe",
    buildingId: "BuildingA1",
}).go()

// Equivalent Params:
{
  "KeyConditionExpression": "#pk = :pk and begins_with(#sk1, :sk1)",
  "TableName": "StoreDirectory",
  "ExpressionAttributeNames": {
    "#mallId": "mallId",
    "#buildingId": "buildingId",
    "#pk": "gis1pk",
    "#sk1": "gsi1sk"
  },
  "ExpressionAttributeValues": {
    ":mallId1": "EastPointe",
    ":buildingId1": "BuildingA1",
    ":pk": "$mallstoredirectory#mallid_eastpointe",
    ":sk1": "$mallstore_1#buildingid_buildinga1#unitid_"
  },
  "IndexName": "gis1pk-gsi1sk-index",
}
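
A sketch of pairing find with a where filter, since non-key values passed to find are not applied as filters:

await StoreLocations.find({
    mallId: "EastPointe",
    buildingId: "BuildingA1"
  })
  .where(({rent}, {gte}) => gte(rent, "2000.00"))
  .go();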

Match Records

Match is a convenience method based off of ElectroDB's find method. Similar to Find, Match does not require you to provide keys, but under the covers it will leverage the attributes provided to choose the best index to query on.

Match differs from Find in that it will also include all supplied values into a query filter.

await StoreLocations.match({
    mallId: "EastPointe",
    buildingId: "BuildingA1",
    leaseEndDate: "2020-03-22",
    rent: "1500.00"
}).go()

// Equivalent Params:
{
  "KeyConditionExpression": "#pk = :pk and begins_with(#sk1, :sk1)",
  "TableName": "StoreDirectory",
  "ExpressionAttributeNames": {
    "#mallId": "mallId",
    "#buildingId": "buildingId",
    "#leaseEndDate": "leaseEndDate",
    "#rent": "rent",
    "#pk": "gis1pk",
    "#sk1": "gsi1sk"
  },
  "ExpressionAttributeValues": {
    ":mallId1": "EastPointe",
    ":buildingId1": "BuildingA1",
    ":leaseEndDate1": "2020-03-22",
    ":rent1": "1500.00",
    ":pk": "$mallstoredirectory#mallid_eastpointe",
    ":sk1": "$mallstore_1#buildingid_buildinga1#unitid_"
  },
  "IndexName": "gis1pk-gsi1sk-index",
  "FilterExpression": "#mallId = :mallId1 AND#buildingId = :buildingId1 AND#leaseEndDate = :leaseEndDate1 AND#rent = :rent1"
}

After invoking the Access Pattern with the required Partition Key Composite Attributes, you can now choose what Sort Key Composite Attributes are applicable to your query. Examine the table in Sort Key Operations for more information on the available operations on a Sort Key.

Access Pattern Queries

When you define your indexes in your model, you are defining the Access Patterns of your entity. The composite attributes you choose, and their order, ultimately define the finite set of index queries that can be made. The more you can leverage these index queries the better from both a cost and performance perspective.

Unlike Partition Keys, Sort Keys can be partially provided. We can leverage this to multiply our available access patterns and use the Sort Key Operations: begins, between, lt, lte, gt, and gte. These queries are more performant and cost-effective than filters. The costs associated with DynamoDB directly correlate to how effectively you leverage Sort Key Operations.
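
For example, a sketch using the leases access pattern defined above with a partially supplied Sort Key:

// All leases for LatteLarrys ending on or after 2020-04-01
await StoreLocations.query
    .leases({storeId: "LatteLarrys"})
    .gte({leaseEndDate: "2020-04-01"})
    .go();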

For a comprehensive and interactive guide to build queries please visit this runkit: https://runkit.com/tywalch/electrodb-building-queries.

Begins With Queries

One important consideration when using Sort Key Operations is when to use, and when not to use, begins.

It is possible to partially supply Sort Key composite attributes. Sort Key attributes must be provided in the order they are defined, but it's possible to provide only a subset of the Sort Key Composite Attributes to ElectroDB. By default, when you supply a partial Sort Key in the Access Pattern method, ElectroDB will create a beginsWith query. The difference between that and using .begins() is that, with a .begins() query, ElectroDB will not post-pend the next composite attribute's label onto the query.

The difference is nuanced and makes better sense with an example, but the rule of thumb is that data passed to the Access Pattern method should represent values you know strictly equal the value you want.

The following examples will use the following Access Pattern definition for units:

{
  "units": {
    "index": "gis1pk-gsi1sk-index",
    "pk": {
      "field": "gis1pk",
      "composite attributes": [
        "mallId"
      ]
    },
    "sk": {
      "field": "gsi1sk",
      "composite attributes": [
        "buildingId",
        "unitId"
      ]
    }
  }
}

The names you have given to your indexes on your entity model/schema express themselves as "Access Pattern" methods on your Entity's query object:

// Example #1, access pattern `units`
StoreLocations.query.units({mallId, buildingId}).go();
// -----------------------^^^^^^^^^^^^^^^^^^^^^^

Data passed to the Access Pattern method is considered to be full, known, data. In the above example, we are saying we know the full mallId and the full buildingId.

Alternatively, if you only know the start of a piece of data, use .begins():

// Example #2
StoreLocations.query.units({mallId}).begins({buildingId}).go();
// ---------------------------------^^^^^^^^^^^^^^^^^^^^^

Data passed to the .begins() method is considered to be partial data. In the second example, we are saying we know the mallId, but only know the beginning of buildingId.

For the above queries we see two different sort keys:

  1. "$mallstore_1#buildingid_f34#unitid_"
  2. "$mallstore_1#buildingid_f34"

The first example shows how ElectroDB post-pends the label of the next composite attribute (unitId) on the Sort Key to ensure that buildings such as "f340" are not included in the query. This is useful to prevent common issues with overloaded sort keys like accidental over-querying.

The second example allows you to make queries that do include buildings such as "f340" or "f3409" or "f340356346".

For these reasons it is important to remember that attributes passed to the Access Pattern method are treated as full, known, data.

Collection Chains

Collections allow you to query across Entities. They can be used on a Service instance.

const DynamoDB = require("aws-sdk/clients/dynamodb");
const { Entity, Service } = require("electrodb");
const table = "projectmanagement";
const client = new DynamoDB.DocumentClient();

const employees = new Entity({
  model: {
    entity: "employees",
    version: "1",
    service: "taskapp",
  },
  attributes: {
    employeeId: {
      type: "string"
    },
    organizationId: {
      type: "string"
    },
    name: {
      type: "string"
    },
    team: {
      type: ["jupiter", "mercury", "saturn"]
    }
  },
  indexes: {
    staff: {
      pk: {
        field: "pk",
        composite: ["organizationId"]
      },
      sk: {
        field: "sk",
        composite: ["employeeId"]
      }
    },
    employee: {
      collection: "assignments",
      index: "gsi2",
      pk: {
        field: "gsi2pk",
        composite: ["employeeId"],
      },
      sk: {
        field: "gsi2sk",
        composite: [],
      },
    }
  }
}, { client, table })

const tasks = new Entity({
  model: {
    entity: "tasks",
    version: "1",
    service: "taskapp",
  },
  attributes: {
    taskId: {
      type: "string"
    },
    employeeId: {
      type: "string"
    },
    projectId: {
      type: "string"
    },
    title: {
      type: "string"
    },
    body: {
      type: "string"
    }
  },
  indexes: {
    project: {
      pk: {
        field: "pk",
        composite: ["projectId"]
      },
      sk: {
        field: "sk",
        composite: ["taskId"]
      }
    },
    assigned: {
      collection: "assignments",
      index: "gsi2",
      pk: {
        field: "gsi2pk",
        composite: ["employeeId"],
      },
      sk: {
        field: "gsi2sk",
        composite: [],
      },
    }
  }
}, { client, table });

const TaskApp = new Service({employees, tasks});

Available on your Service are two objects: entities and collections. Entities available on entities have the same capabilities as they would if created individually. When a Model is added to a Service with join, however, its Collections are automatically added and validated with the other Models joined to that Service. These Collections are available on collections.

TaskApp.collections.assignments({employeeId: "JExotic"}).params();  

// Results
{
  TableName: 'projectmanagement',
  ExpressionAttributeNames: { '#pk': 'gsi2pk', '#sk1': 'gsi2sk' },
  ExpressionAttributeValues: { ':pk': '$taskapp_1#employeeid_jexotic', ':sk1': '$assignments' },
  KeyConditionExpression: '#pk = :pk and begins_with(#sk1, :sk1)',
  IndexName: 'gsi2'
}

Collections do not have the same query functionality as an Entity, though they do allow for inline filters like an Entity. The attributes available on the filter object include all attributes across entities.

TaskApp.collections
    .assignments({employee: "CBaskin"})
    .filter((attributes) => `
        ${attributes.project.notExists()} OR ${attributes.project.contains("murder")}
    `)

// Results
{
  TableName: 'projectmanagement',
  ExpressionAttributeNames: { '#projectId': 'projectId', '#pk': 'gsi2pk', '#sk1': 'gsi2sk' },
  ExpressionAttributeValues: {
    ':projectId1': 'murder',
    ':pk': '$taskapp_1#employeeid_cbaskin',
    ':sk1': '$assignments'
  },
  KeyConditionExpression: '#pk = :pk and begins_with(#sk1, :sk1)',
  IndexName: 'gsi2',
  FilterExpression: '\n\t\tattribute_not_exists(#projectId) OR contains(#projectId, :projectId1)\n\t'
}

Execute Queries

Lastly, all query chains end with either a .go() or a .params() method invocation. These will either execute the query to DynamoDB (.go()) or return formatted parameters for use with the DynamoDB docClient (.params()).

Both .params() and .go() take a query configuration object which is detailed more in the section Query Options.

Params

The params method ends a query chain, and synchronously formats your query into an object ready for the DynamoDB docClient.

For more information on the options available in the config object, check out the section Query Options.

let config = {};
let stores = MallStores.query
    .leases({ mallId })
    .between(
      { leaseEndDate:  "2020-06-01" }, 
      { leaseEndDate:  "2020-07-31" })
    .filter((attr) => attr.rent.lte("5000.00"))
    .params(config);

// Results:
{
  IndexName: 'idx2',
  TableName: 'electro',
  ExpressionAttributeNames: { '#rent': 'rent', '#pk': 'idx2pk', '#sk1': 'idx2sk' },
  ExpressionAttributeValues: {
    ':rent1': '5000.00',
    ':pk': '$mallstoredirectory_1#mallid_eastpointe',
    ':sk1': '$mallstore#leaseenddate_2020-06-01#rent_',
    ':sk2': '$mallstore#leaseenddate_2020-07-31#rent_'
  },
  KeyConditionExpression: '#pk = :pk and #sk1 BETWEEN :sk1 AND :sk2',
  FilterExpression: '#rent <= :rent1'
}

Go

The go method ends a query chain, and asynchronously queries DynamoDB with the client provided in the model.

For more information on the options available in the config object, check out the section Query Options.

let config = {};
let stores = MallStores.query
    .leases({ mallId })
    .between(
        { leaseEndDate:  "2020-06-01" }, 
        { leaseEndDate:  "2020-07-31" })
    .filter(({rent}) => rent.lte("5000.00"))
    .go(config);

Page

NOTE: By default, ElectroDB queries will paginate through all results with the go() method. ElectroDB's page() method can be used to manually iterate through DynamoDB query results.

The page method ends a query chain, and asynchronously queries DynamoDB with the client provided in the model. Unlike the .go(), the .page() method returns a tuple.

The first element for a page query is the "pager": an object containing the composite attributes that make up the ExclusiveStartKey returned by the DynamoDB client. This is very useful in multi-tenant applications where only some composite attributes are exposed to the client, or where there is a need to prevent leaking keys between entities. If there is no ExclusiveStartKey this value will be null. On subsequent calls to .page(), pass the results returned from the previous call to .page() or construct the composite attributes yourself.

The "pager" includes the associated entity's Identifiers.

NOTE: It is highly recommended to use the query option pager: "raw" when using .page() with scan operations. This is because, when using scan on large tables, the docClient may return an ExclusiveStartKey for a record that does not belong to the entity making the query (regardless of the filters set). In these cases ElectroDB will return null (to avoid leaking the keys of other entities) when further pagination may be needed to find your records.

The second element is the results of the query, exactly as it would be returned through a query operation.

NOTE: When calling .page() the first argument is reserved for the "pager" returned from a previous query, and the second parameter is for Query Options. For more information on the options available in the config object, check out the section Query Options.

Entity Pagination

let [next, stores] = await MallStores.query
    .leases({ mallId })
    .page(); // no "pager" passed to `.page()`

let [pageTwo, moreStores] = await MallStores.query
    .leases({ mallId })
    .page(next, {}); // the "pager" from the first query (`next`) passed to the second query

// next:
// { 
//   storeId: "LatteLarrys", 
//   mallId: "EastPointe", 
//   buildingId: "BuildingA1", 
//   unitId: "B47"
//   __edb_e__: "MallStore",
//   __edb_v__: "version" 
// }

// stores
// [{
//   mall: '3010aa0d-5591-4664-8385-3503ece58b1c',
//   leaseEnd: '2020-01-20',
//   sector: '7d0f5c19-ec1d-4c1e-b613-a4cc07eb4db5',
//   store: 'MNO',
//   unit: 'B5',
//   id: 'e0705325-d735-4fe4-906e-74091a551a04',
//   building: 'BuildingE',
//   category: 'food/coffee',
//   rent: '0.00'
// },
// {
//   mall: '3010aa0d-5591-4664-8385-3503ece58b1c',
//   leaseEnd: '2020-01-20',
//   sector: '7d0f5c19-ec1d-4c1e-b613-a4cc07eb4db5',
//   store: 'ZYX',
//   unit: 'B9',
//   id: 'f201a1d3-2126-46a2-aec9-758ade8ab2ab',
//   building: 'BuildingI',
//   category: 'food/coffee',
//   rent: '0.00'
// }]

Service Pagination

NOTE: By default, ElectroDB will paginate through all results with the go() method. ElectroDB's page() method can be used to manually iterate through DynamoDB query results.

Pagination with Services is also possible. As with Entity Pagination, calling the .page() method returns a [pager, results] tuple, and the pager object returned by default is a deconstruction of the returned LastEvaluatedKey.
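
As a minimal sketch, paginating the assignments collection on the TaskApp Service defined above:

let [pager, results] = await TaskApp.collections
    .assignments({employeeId: "JExotic"})
    .page();

// pass the returned "pager" back to pick up where the last query left off
let [nextPager, moreResults] = await TaskApp.collections
    .assignments({employeeId: "JExotic"})
    .page(pager);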

Pager Query Options

The .page() method also accepts Query Options just like the .go() and .params() methods. Unlike those methods, however, the .page() method accepts Query Options as the second parameter (the first parameter is reserved for the "pager").

A notable Query Option, available only to the .page() method, is pager. This property defines the post-processing ElectroDB should perform on a returned LastEvaluatedKey, as well as how ElectroDB should interpret an incoming pager to use as an ExclusiveStartKey.

NOTE: Because the "pager" object is destructured from the keys DynamoDB returns as the LastEvaluatedKey, these composite attributes differ from the record's actual attribute values in one important way: Their string values will all be lowercase. If you intend to use these attributes in ways where their casing will matter (e.g. in a where filter), keep in mind this may result in unexpected outcomes.

The three options for the query option pager are as follows:

// LastEvaluatedKey
{
  pk: '$taskapp#country_united states of america#state_oregon',
  sk: '$offices_1#city_power#zip_34706#office_mobile branch',
  gsi1pk: '$taskapp#office_mobile branch',
  gsi1sk: '$workplaces#offices_1'
}

"named" (default): By default, ElectroDB will deconstruct the LastEvaluatedKey returned by the DocClient into it's individual composite attribute parts. The "named" option, chosen by default, also includes the Entity's column "identifiers" -- this is useful with Services where destructured pagers may be identical between more than one Entity in that Service.

// {pager: "named"} | {pager: undefined} 
{  
  "city": "power",
  "country": "united states of america",
  "state": "oregon",
  "zip": "34706",
  "office": "mobile branch",
  "__edb_e__": "offices",
  "__edb_v__": "1"
}

"item": Similar to "named", however without the Entity's "identifiers". If two Entities with a service have otherwise identical index definitions, using the "item" pager option can result in errors while paginating a Collection. If this is not a concern with your Service, or you are paginating with only an Entity, this option could be preferable because it has fewer properties.

// {pager: "item"} 
{  
  "city": "power",
  "country": "united states of america",
  "state": "oregon",
  "zip": "34706",
  "office": "mobile branch",
}

"raw": The "raw" option returns the LastEvaluatedKey as it was returned by the DynamoDB DocClient.

// {pager: "raw"} 
{
  pk: '$taskapp#country_united states of america#state_oregon',
  sk: '$offices_1#city_power#zip_34706#office_mobile branch',
  gsi1pk: '$taskapp#office_mobile branch',
  gsi1sk: '$workplaces#offices_1'
}

Pagination Example

Simple pagination example:

async function getAllStores(mallId) {
  let stores = [];
  let pager = null;

  do {
    let [next, results] = await MallStores.query
      .leases({ mallId })
      .page(pager);
    stores = [...stores, ...results]; 
    pager = next;
  } while(pager !== null);
  
  return stores;
} 

Query Examples

For a comprehensive and interactive guide to build queries please visit this runkit: https://runkit.com/tywalch/electrodb-building-queries.

const cityId = "Atlanta1";
const mallId = "EastPointe";
const storeId = "LatteLarrys";
const unitId = "B24";
const buildingId = "F34";
const june = "2020-06";
const july = "2020-07"; 
const discount = "500.00";
const maxRent = "2000.00";
const minRent = "5000.00";

// Lease Agreements by StoreId
await StoreLocations.query.leases({storeId}).go()

// Lease Agreement by StoreId for March 22nd 2020
await StoreLocations.query.leases({storeId, leaseEndDate: "2020-03-22"}).go()

// Lease agreements by StoreId for 2020
await StoreLocations.query.leases({storeId}).begins({leaseEndDate: "2020"}).go()

// Lease Agreements by StoreId after March 2020
await StoreLocations.query.leases({storeId}).gt({leaseEndDate: "2020-03"}).go()

// Lease Agreements by StoreId after, and including, March 2020
await StoreLocations.query.leases({storeId}).gte({leaseEndDate: "2020-03"}).go()

// Lease Agreements by StoreId before 2021
await StoreLocations.query.leases({storeId}).lt({leaseEndDate: "2021-01"}).go()

// Lease Agreements by StoreId before February 2021
await StoreLocations.query.leases({storeId}).lte({leaseEndDate: "2021-02"}).go()

// Lease Agreements by StoreId between 2010 and 2020
await StoreLocations.query
    .leases({storeId})
    .between(
        {leaseEndDate: "2010"}, 
        {leaseEndDate: "2020"})
    .go()

// Lease Agreements by StoreId after, and including, 2010 in the city of Atlanta and category containing food
await StoreLocations.query
    .leases({storeId})
    .gte({leaseEndDate: "2010"})
    .where((attr, op) => `
        ${op.eq(attr.cityId, "Atlanta1")} AND ${op.contains(attr.category, "food")}
    `)
    .go()
    
// Rents by City and Store whose rent discounts match a certain rent/discount criteria
await StoreLocations.query
    .units({mallId})
    .begins({leaseEndDate: june})
    .rentDiscount(discount, maxRent, minRent)
    .go()

// Stores by Mall matching a specific category
await StoreLocations.query
    .units({mallId})
    .byCategory("food/coffee")
    .go()

Query Options

Query options can be added to .params(), .go() and .page() to change query behavior or add custom parameters to a query.

By default, ElectroDB enables you to work with records using the names and properties defined in the model. Additionally, it removes the need to deal directly with the docClient parameters, which can be complex for a team without much experience with DynamoDB. The Query Options object can be passed to both the .params() and .go() methods when building your query. Below are the options available:

{
  params?: object;
  table?: string;
  raw?: boolean;
  includeKeys?: boolean;
  pager?: "raw" | "named" | "item";
  originalErr?: boolean;
  concurrent?: number;
  unprocessed?: "raw" | "item";
  response?: "default" | "none" | "all_old" | "updated_old" | "all_new" | "updated_new";
  ignoreOwnership?: boolean;
  limit?: number;
  pages?: number;
};
| Option | Default | Description |
|---|---|---|
| params | {} | Properties added to this object will be merged onto the params sent to the document client. Any conflicts with ElectroDB will favor the params specified here. |
| table | (from constructor) | Use a different table than the one defined in the Service Options. |
| raw | false | Returns query results as they were returned by the docClient. |
| includeKeys | false | By default, ElectroDB does not return partition, sort, or global keys in its response. |
| pager | "named" | Used with pagination (.page()) calls to override ElectroDB's default behaviour of breaking apart LastEvaluatedKey records into composite attributes. See the section Pager Query Options for more detail. |
| originalErr | false | By default, ElectroDB alters the stacktrace of any exceptions thrown by the DynamoDB client to give better visibility to the developer. Set this value to true to turn off this functionality and return the error unchanged. |
| concurrent | 1 | When performing batch operations, how many requests (1 batch operation == 1 request) to DynamoDB should ElectroDB make at one time. Be mindful of your DynamoDB throughput configurations. |
| unprocessed | "item" | Used in batch processing to override ElectroDB's default behaviour of breaking apart DynamoDB's Unprocessed records into composite attributes. See more detail about this in the sections for BatchGet, BatchDelete, and BatchPut. |
| response | "default" | Used as a convenience for applying the DynamoDB parameter ReturnValues. The options here are the same as the parameter values for the DocumentClient, except lowercase. The "none" option will cause the method to return null and will bypass ElectroDB's response formatting -- useful if formatting performance is a concern. |
| ignoreOwnership | false | By default, ElectroDB interrogates items returned from a query for the presence of matching entity "identifiers". This helps ensure other entities, or other versions of an entity, are filtered from your results. If you are using ElectroDB with an existing table/dataset you can turn off this feature by setting this property to true. |
| limit | none | A target for the number of items to return from DynamoDB. If this option is passed, queries on entities and through collections will paginate DynamoDB until this limit is reached or all items for that query have been returned. |
| pages | ∞ | How many DynamoDB pages a query should iterate through before stopping. By default ElectroDB paginates through all results for your query. |
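
For illustration, a sketch combining several of these options on a single query (the values are illustrative; anything under params is merged onto the parameters sent to the docClient):

await StoreLocations.query
    .leases({mallId: "EastPointe"})
    .go({
        limit: 20,                                    // stop paginating once ~20 items are returned
        ignoreOwnership: true,                        // skip entity "identifier" checks
        params: {ReturnConsumedCapacity: "TOTAL"}     // merged onto the docClient params
    });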

Errors:

| Error Code | Description |
|---|---|
| 1000s | Configuration Errors |
| 2000s | Invalid Queries |
| 3000s | User Defined Errors |
| 4000s | DynamoDB Errors |
| 5000s | Unexpected Errors |

No Client Defined On Model

Code: 1001

Why this occurred: If a DynamoDB DocClient is not passed to the constructor of an Entity or Service (client), ElectroDB will be unable to query DynamoDB. This error will only appear when a query (using go()) is made, because ElectroDB is still useful without a DocClient through the use of its params() method.

What to do about it: For an Entity be sure to pass the DocClient as the second param to the constructor:

new Entity(schema, {client})

For a Service, the client is passed the same way, as the second param to the constructor:

new Service("", {client});

Invalid Identifier

Code: 1002

Why this occurred: You tried to modify the entity identifier on an Entity.

What to do about it: Make sure you have spelled the identifier correctly or that you actually passed a replacement.

Invalid Key Composite Attribute Template

Code: 1003

Why this occurred: You are trying to use the custom Key Composite Attribute Template, and the format you passed is invalid.

What to do about it: Check out the section on [Composite Attribute Templates](#composite-attribute-templates) and verify your template conforms to the rules detailed there.

Duplicate Indexes

Code: 1004

Why this occurred: Your model contains duplicate indexes. This could be because you accidentally included an index twice or even forgot to add an index name on a secondary index, which would be interpreted as "duplicate" to the Table's Primary index.

What to do about it: Double-check the index names on your model for duplicate indexes. The error should specify which index has been duplicated. It is also possible that you have forgotten to include an index name. Each table must have at least one Table Index (which does not include an index property in ElectroDB), but all Secondary and Local indexes must include an index property with the name of that index as defined on the table.

{
  indexes: {
    index1: {
      index: "idx1", // <-- duplicate "idx1"
      pk: {},
      sk: {}
    },
    index2: {
      index: "idx1", // <-- duplicate "idx1"
      pk: {},
      sk: {}
    }
  }
}

Collection Without An SK

Code: 1005

Why this occurred: You have added a collection to an index that does not have an SK. Because Collections are used to help query across entities via the Sort Key, not having a Sort Key on an index defeats the purpose of a Collection.

What to do about it: If your index does have a Sort Key, but you are unsure of how to inform ElectroDB without setting composite attributes to the SK, add the SK object to the index and use an empty array for Composite Attributes:

// ElectroDB interprets as index *not having* an SK.
{
  indexes: {
    myIndex: {
      pk: {
        field: "pk",
        composite: ["id"]
      }
    }
  }
}

// ElectroDB interprets as index *having* an SK, but this model doesn't attach any composite attributes to it.
{
  indexes: {
    myIndex: {
      pk: {
        field: "pk",
        composite: ["id"]
      },
      sk: {
        field: "sk",
        composite: []
      }
    }
  }
}

Duplicate Collections

Code: 1006

Why this occurred: You have assigned the same collection name to multiple indexes. This is not allowed because collection names must be unique.

What to do about it: Determine a new naming scheme

Missing Primary Index

Code: 1007

Why this occurred: DynamoDB requires the definition of at least one Primary Index on the table. In ElectroDB this is defined as an Index without an index property. Each model needs at least one, and the composite attributes used for this index must ensure each composite represents a unique record.

What to do about it: Identify the index you're using as the Primary Index and ensure it does not have an index property on its definition.

// ElectroDB interprets as the Primary Index because it lacks an `index` property.
{
  indexes: {
    myIndex: {
      pk: {
        field: "pk",
        composite: ["org"]
      },
      sk: {
        field: "sk",
        composite: ["id"]
      }
    }
  }
}

// ElectroDB interprets as a Global Secondary Index because it has an `index` property.
{
  indexes: {
    myIndex: {
      index: "gsi1"
      pk: {
        field: "gsipk1",
        composite: ["org"]
      },
      sk: {
        field: "gsisk1",
        composite: ["id"]
      }
    }
  }
}

Invalid Attribute Definition

Code: 1008

Why this occurred: Some attribute on your model has an invalid configuration.

What to do about it: Use the error to identify which column needs to be examined, and double-check the properties on that attribute. Check out the section on Attributes for more information on how they are structured.

Invalid Model

Code: 1009

Why this occurred: Some properties on your model are missing or invalid.

What to do about it: Check out the section on Models to verify your model against what is expected.

Invalid Options

Code: 1010

Why this occurred: Some properties on your options object are missing or invalid.

What to do about it: Check out the section on Model/Service Options to verify your options against what is expected.

Duplicate Index Fields

Code: 1014

Why this occurred: An index in your model references a field that is already used by another index. The field property in the definition of an index is a mapping to the name of the field assigned to the PK or SK of an index.

What to do about it: This is likely a typo; if not, double-check the names of the fields you assigned to the PK and SK of your indexes. These field names must be unique.
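
For illustration, a sketch of the mistake (attribute and index names are illustrative):

{
  indexes: {
    index1: {
      pk: {
        field: "pk",
        composite: ["orgId"]
      },
      sk: {
        field: "sk", // <-- "sk" assigned here...
        composite: ["id"]
      }
    },
    index2: {
      index: "gsi1",
      pk: {
        field: "gsi1pk",
        composite: ["id"]
      },
      sk: {
        field: "sk", // <-- ...and reused here
        composite: ["orgId"]
      }
    }
  }
}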

Duplicate Index Composite Attributes

Code: 1015

Why this occurred: Within one index you tried to use the same composite attribute in both the PK and SK. A composite attribute may only be used once within an index. With ElectroDB it is not uncommon to use the same value as both the PK and SK when a Sort Key exists on a table -- this is usually done because some value is required in that column but is not necessary for that entity. If this is your situation, remember that ElectroDB does put a value in the Sort Key even if it does not include a composite attribute; check out this section for more information.

What to do about it: Determine how you can change your access pattern to not duplicate the composite attribute. Remember that an empty array for an SK is valid.

Incompatible Key Composite Attribute Template

Code: 1017

Why this occurred: You are trying to use the custom Key Composite Attribute Template, and a Composite Attribute Array on your model, and they do not contain identical composite attributes.

What to do about it: Check out the section on [Composite Attribute Templates](#composite-attribute-templates) and verify your template conforms to the rules detailed there. Both properties must contain the same attributes and be provided in the same order.

Invalid Index With Attribute Name

Code: 1018

Why this occurred: ElectroDB's design revolves around best practices related to modeling in single table design. This includes giving indexed fields generic names. If the PK and SK fields on your table indexes also match the names of attributes on your Entity you will need to make special considerations to make sure ElectroDB can accurately map your data.

What to do about it: Checkout the section Using ElectroDB with existing data to learn more about considerations to make when using attributes as index fields.

Invalid Collection on Index With Attribute Field Names

Code: 1019

Why this occurred: Collections allow for unique access patterns to be modeled between entities. It does this by appending prefixes to your key composites. If an Entity leverages an attribute field as an index key, ElectroDB will be unable to prefix your value because that would result in modifying the value itself.

What to do about it: Checkout the section Collections to learn more about collections, as well as the section Using ElectroDB with existing data to learn more about considerations to make when using attributes as index fields.

Missing Composite Attributes

Code: 2002

Why this occurred: The current request is missing some composite attributes needed to complete the query based on the model definition. Composite Attributes are used to create the Partition and Sort keys. In DynamoDB, Partition Keys cannot be partially included, and while Sort Keys can be partially included, they must be passed in the order they are defined on the model.

What to do about it: The error should describe the missing composite attributes, ensure those composite attributes are included in the query or update the model to reflect the needs of the access pattern.

Missing Table

Code: 2003

Why this occurred: You never specified a Table for DynamoDB to use.

What to do about it: Tables can be defined on the Service Options object when you create an Entity or Service, or, if that is not known at the time of creation, it can be supplied as a Query Option on each query individually. If it is supplied to both, the Query Option will override the Service Option.
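
For illustration, a sketch of both places a table can be supplied (the table name is illustrative):

// on the Entity/Service Options at creation
const StoreLocations = new Entity(model, {client, table: "StoreDirectory"});

// or per query, as a Query Option
await StoreLocations.query
    .leases({mallId: "EastPointe"})
    .go({table: "StoreDirectory"});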

Invalid Concurrency Option

Code: 2004

Why this occurred: When performing a bulk operation (Batch Get, Batch Delete Records, Batch Put Records) you can pass a Query Option called concurrent, which impacts how many batch requests can occur at the same time. Your value should pass the test of both !isNaN(parseInt(value)) and parseInt(value) > 0.

What to do about it: Expect this error only if you're providing a concurrent option. Double-check that the value you are providing is the value you expect to be passing, and that the value passes the tests listed above.
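
As a sketch, assuming the Batch Get syntax covered in the BatchGet section of this document, a valid concurrent value looks like this (the second key is illustrative):

await StoreLocations.get([
    {storeId: "LatteLarrys", mallId: "EastPointe", buildingId: "BuildingA1", unitId: "B47"},
    {storeId: "LatteLarrys", mallId: "EastPointe", buildingId: "BuildingA1", unitId: "B48"}
]).go({concurrent: 2}); // at most 2 batch requests in flight at a time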

Invalid Pages Option

Code: 2005

Why this occurred: When performing a query you can pass a Query Option called pages, which impacts how many DynamoDB pages a query should iterate through. Your value should pass the test of both !isNaN(parseInt(value)) and parseInt(value) > 0.

What to do about it: Expect this error only if you're providing a pages option. Double-check that the value you are providing is the value you expect to be passing, and that the value passes the tests listed above.

Invalid Limit Option

Code: 2006

Why this occurred: When performing a query you can pass a Query Option called limit, which impacts how many DynamoDB items a query should return. Your value should pass the test of both !isNaN(parseInt(value)) and parseInt(value) > 0.

What to do about it: Expect this error only if you're providing a limit option. Double-check that the value you are providing is the value you expect to be passing, and that the value passes the tests listed above.

Invalid Attribute

Code: 3001

Why this occurred: The value received failed either type expectations (e.g. a "number" instead of a "string"), or the user-provided "validate" callback on an attribute rejected the value.

What to do about it: Examine the error itself for more precise detail on why the failure occurred. The error object should have a property called "fields" which contains an array of every attribute that failed validation, and a reason for each. If the failure originated from a "validate" callback, the originally thrown error will be accessible via the cause property of the corresponding element within the fields array.

Below is the type definition for an ElectroValidationError:

ElectroValidationError<T extends Error = Error> extends ElectroError {
    readonly name: "ElectroValidationError"
    readonly code: number;
    readonly date: number;
    readonly isElectroError: boolean;
    ref: {
        readonly code: number;
        readonly section: string;
        readonly name: string;
        readonly sym: unique symbol;
    }
    readonly fields: ReadonlyArray<{
        /**
         * The json path to the attribute that had a validation error
         */
        readonly field: string;

        /**
         * A description of the validation error for that attribute
         */
        readonly reason: string;

        /**
         * Index of the value passed (present only in List attribute validation errors)
         */
        readonly index: number | undefined;

        /**
         * The error thrown from the attribute's validate callback (if applicable)
         */
        readonly cause: T | undefined;
    }>
}
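
As a sketch, catching and inspecting a validation failure (the rejected value is illustrative and assumes a "validate" callback on leaseEndDate):

try {
    await StoreLocations.create({
        mallId: "EastPointe",
        storeId: "LatteLarrys",
        buildingId: "BuildingA1",
        unitId: "B47",
        category: "spite store",
        leaseEndDate: "not-a-date", // suppose a "validate" callback rejects this
        rent: "5000.00",
    }).go();
} catch (err) {
    if (err.name === "ElectroValidationError") {
        for (const {field, reason, cause} of err.fields) {
            console.log(field, reason, cause);
        }
    }
}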

AWS Error

Code: 4001

Why this occurred: DynamoDB did not like something about your query.

What to do about it: By default, ElectroDB tries to keep the stack trace close to your code; ideally this can help you identify what might be going on. A tip to help with troubleshooting: use .params() to get more insight into how your query is converted to DocClient params.

Unknown Errors

Invalid Last Evaluated Key

Code: 5003

Why this occurred: Likely you were calling .page() on a scan. If you weren't please make an issue and include as much detail about your query as possible.

What to do about it: When paginating with scan queries, it is highly recommended to use the query option {pager: "raw"}. This is because, when using scan on large tables, the docClient may return an ExclusiveStartKey for a record that does not belong to the entity making the query (regardless of the filters set). In these cases ElectroDB will return null (to avoid leaking the keys of other entities) when further pagination may be needed to find your records.

// example
myModel.scan.page(null, {pager: "raw"});

No Owner For Pager

Code: 5004

Why this occurred: When using pagination with a Service, ElectroDB will try to identify which Entity is associated with the supplied pager. This error can occur when you supply an invalid pager, or when you are using a different pager option to a pager than what was used when retrieving it. Consult the section on Pagination to learn more.

What to do about it: If you are sure the pager you are passing to .page() is the same you received from .page() this could be an unexpected error. To mitigate the issue use the Query Option {pager: "raw"} and please open a support issue.

Pager Not Unique

Code: 5005

Why this occurred: When using pagination with a Service, ElectroDB will try to identify which Entity is associated with the supplied pager option. This error can occur when you supply a pager that resolves to more than one Entity. This can happen if your entities share the same composite attributes for the index you are querying on and you are using the Query Option {pager: "item"}.

What to do about it: Because this scenario is possible with otherwise well considered/thoughtful entity models, the default pager type used by ElectroDB is "named". To avoid this error, you will need to use either the "raw" or "named" pager options for any index that could result in an ambiguous Entity owner.

Examples

Want to just play with ElectroDB instead of read about it? Try it out for yourself! https://runkit.com/tywalch/electrodb-building-queries

Employee App

For an example, let's look at the needs of an application used to manage Employees. The application tracks employees, offices, tasks, and projects.

Employee App Requirements

  1. As a Project Manager, I need to find all tasks and details on a specific employee.
  2. As a Regional Manager, I need to see all details about an office and its employees
  3. As an Employee, I need to see all my Tasks.
  4. As a Product Manager, I need to see all the tasks for a project.
  5. As a Client, I need to find a physical office close to me.
  6. As a Hiring manager, I need to find employees with comparable salaries.
  7. As HR, I need to find upcoming employee birthdays/anniversaries
  8. As HR, I need to find all the employees that report to a specific manager

App Entities

const EmployeesModel = {
    model: {
      entity: "employees",
      version: "1",
      service: "taskapp",  
    },
    attributes: {
        employee: "string",
        firstName: "string",
        lastName: "string",
        office: "string",
        title: "string",
        team: ["development", "marketing", "finance", "product"],
        salary: "string",
        manager: "string",
        dateHired: "string",
        birthday: "string",
    },
    indexes: {
        employee: {
            pk: {
                field: "pk",
                composite: ["employee"],
            },
            sk: {
                field: "sk",
                composite: [],
            },
        },
        coworkers: {
            index: "gsi1pk-gsi1sk-index",
            collection: "workplaces",
            pk: {
                field: "gsi1pk",
                composite: ["office"],
            },
            sk: {
                field: "gsi1sk",
                composite: ["team", "title", "employee"],
            },
        },
        teams: {
            index: "gsi2pk-gsi2sk-index",
            pk: {
                field: "gsi2pk",
                composite: ["team"],
            },
            sk: {
                field: "gsi2sk",
                composite: ["title", "salary", "employee"],
            },
        },
        employeeLookup: {
            collection: "assignements",
            index: "gsi3pk-gsi3sk-index",
            pk: {
                field: "gsi3pk",
                composite: ["employee"],
            },
            sk: {
                field: "gsi3sk",
                composite: [],
            },
        },
        roles: {
            index: "gsi4pk-gsi4sk-index",
            pk: {
                field: "gsi4pk",
                composite: ["title"],
            },
            sk: {
                field: "gsi4sk",
                composite: ["salary", "employee"],
            },
        },
        directReports: {
            index: "gsi5pk-gsi5sk-index",
            pk: {
                field: "gsi5pk",
                composite: ["manager"],
            },
            sk: {
                field: "gsi5sk",
                composite: ["team", "office", "employee"],
            },
        },
    },
    filters: {
        upcomingCelebrations: (attributes, startDate, endDate) => {
            let { dateHired, birthday } = attributes;
            return `${dateHired.between(startDate, endDate)} OR ${birthday.between(
                startDate,
                endDate,
            )}`;
        },
    },
};

const TasksModel = {
    model: {
        entity: "tasks",
        version: "1",
        service: "taskapp",  
    }, 
    attributes: {
        task: "string",
        project: "string",
        employee: "string",
        description: "string",
    },
    indexes: {
        task: {
            pk: {
                field: "pk",
                composite: ["task"],
            },
            sk: {
                field: "sk",
                composite: ["project", "employee"],
            },
        },
        project: {
            index: "gsi1pk-gsi1sk-index",
            pk: {
                field: "gsi1pk",
                composite: ["project"],
            },
            sk: {
                field: "gsi1sk",
                composite: ["employee", "task"],
            },
        },
        assigned: {
            collection: "assignements",
            index: "gsi3pk-gsi3sk-index",
            pk: {
                field: "gsi3pk",
                composite: ["employee"],
            },
            sk: {
                field: "gsi3sk",
                composite: ["project", "task"],
            },
        },
    },
};

const OfficesModel = {
    model: {
          entity: "offices",
          version: "1",
          service: "taskapp",  
      }, 
    attributes: {
        office: "string",
        country: "string",
        state: "string",
        city: "string",
        zip: "string",
        address: "string",
    },
    indexes: {
        locations: {
            pk: {
                field: "pk",
                composite: ["country", "state"],
            },
            sk: {
                field: "sk",
                composite: ["city", "zip", "office"],
            },
        },
        office: {
            index: "gsi1pk-gsi1sk-index",
            collection: "workplaces",
            pk: {
                field: "gsi1pk",
                composite: ["office"],
            },
            sk: {
                field: "gsi1sk",
                composite: [],
            },
        },
    },
};

Join models on a new Service called EmployeeApp

const DynamoDB = require("aws-sdk/clients/dynamodb");
const client = new DynamoDB.DocumentClient({region: "us-east-1"});
const { Service } = require("electrodb");
const table = "projectmanagement";
const EmployeeApp = new Service("EmployeeApp", { client, table });

EmployeeApp
    .join(EmployeesModel) // EmployeeApp.entities.employees
    .join(TasksModel)     // EmployeeApp.entities.tasks
    .join(OfficesModel);  // EmployeeApp.entities.offices

Query Records

All tasks and employee information for a given employee

Fulfilling Requirement #1.

EmployeeApp.collections.assignements({employee: "CBaskin"}).go();

Returns the following:

{
    employees: [{
        employee: "cbaskin",
        firstName: "carol",
        lastName: "baskin",
        office: "big cat rescue",
        title: "owner",
        team: "cool cats and kittens",
        salary: "1,000,000",
        manager: "",
        dateHired: "1992-11-04",
        birthday: "1961-06-06",
    }],
    tasks: [{
        task: "Feed tigers",
        description: "Prepare food for tigers to eat",
        project: "Keep tigers alive",
        employee: "cbaskin"
    }, {
        task: "Fill water bowls",
        description: "Ensure the tigers have enough water",
        project: "Keep tigers alive",
        employee: "cbaskin"
    }]
}

Find all employees and office details for a given office

Fulfilling Requirement #2.

EmployeeApp.collections.workplaces({office: "big cat rescue"}).go()

Returns the following:

{
    employees: [{
        employee: "cbaskin",
        firstName: "carol",
        lastName: "baskin",
        office: "big cat rescue",
        title: "owner",
        team: "cool cats and kittens",
        salary: "1,000,000",
        manager: "",
        dateHired: "1992-11-04",
        birthday: "1961-06-06",
    }],
    offices: [{
        office: "big cat rescue",
        country: "usa",
        state: "florida",
        city: "tampa",
        zip: "12345",
        address: "123 Kitty Cat Lane"
    }]
}

Tasks for a given employee

Fulfilling Requirement #3.

EmployeeApp.entities.tasks.query.assigned({employee: "cbaskin"}).go();

Returns the following:

[
    {
        task: "Feed tigers",
        description: "Prepare food for tigers to eat",
        project: "Keep tigers alive",
        employee: "cbaskin"
    }, {
        task: "Fill water bowls",
        description: "Ensure the tigers have enough water",
        project: "Keep tigers alive",
        employee: "cbaskin"
    }
]

Tasks for a given project

Fulfilling Requirement #4.

EmployeeApp.entities.tasks.query.project({project: "Murder Carol"}).go();

Returns the following:

[
    {
        task: "Hire hitman",
        description: "Find someone to murder Carol",
        project: "Murder Carol",
        employee: "jexotic"
    }
];

Find office locations

Fulfilling Requirement #5.

EmployeeApp.entities.offices.query.locations({country: "usa", state: "florida"}).go()

Returns the following:

[
    {
        office: "big cat rescue",
        country: "usa",
        state: "florida",
        city: "tampa",
        zip: "12345",
        address: "123 Kitty Cat Lane"
    }
]

Find employee salaries and titles

Fulfilling Requirement #6.

EmployeeApp.entities.employees.query
    .roles({title: "animal wrangler"})
    .lte({salary: "150.00"})
    .go()

Returns the following:

[
    {
        employee: "ssaffery",
        firstName: "saff",
        lastName: "saffery",
        office: "gw zoo",
        title: "animal wrangler",
        team: "keepers",
        salary: "105.00",
        manager: "jexotic",
        dateHired: "1999-02-23",
        birthday: "1960-07-11",
    }
]

Find employee birthdays or anniversaries

Fulfilling Requirement #7.

EmployeeApp.entities.employees.query
    .coworkers({office: "gw zoo"})
    .upcomingCelebrations("2020-05-01", "2020-06-01")
    .go()

Returns the following:

[
    {
        employee: "jexotic",
        firstName: "joe",
        lastName: "maldonado-passage",
        office: "gw zoo",
        title: "tiger king",
        team: "founders",
        salary: "10000.00",
        manager: "jlowe",
        dateHired: "1999-02-23",
        birthday: "1963-03-05",
    }
]

Find direct reports

Fulfilling Requirement #8.

EmployeeApp.entities.employees.query
    .directReports({manager: "jlowe"})
    .go()

Returns the following:

[
    {
        employee: "jexotic",
        firstName: "joe",
        lastName: "maldonado-passage",
        office: "gw zoo",
        title: "tiger king",
        team: "founders",
        salary: "10000.00",
        manager: "jlowe",
        dateHired: "1999-02-23",
        birthday: "1963-03-05",
    }
]

Shopping Mall Property Management App

For an example, let's look at the needs of an application used to manage Shopping Mall properties. The application assists employees in the day-to-day operations of multiple Shopping Malls.

Shopping Mall Requirements

  1. As a Maintenance Worker, I need to know which stores are currently in each Mall down to the Building they are located.
  2. As a Helpdesk Employee, I need to locate related stores in Mall locations by Store Category.
  3. As a Property Manager, I need to identify upcoming leases in need of renewal.

Create a new Entity using the StoreLocations schema defined above

const DynamoDB = require("aws-sdk/clients/dynamodb");
const client = new DynamoDB.DocumentClient();
const StoreLocations = new Entity(model, {client, table: "StoreLocations"});

Access Patterns are accessible on the StoreLocations Entity.

PUT Record

Add a new Store to the Mall

await StoreLocations.create({
    mallId: "EastPointe",
    storeId: "LatteLarrys",
    buildingId: "BuildingA1",
    unitId: "B47",
    category: "spite store",
    leaseEndDate: "2020-02-29",
    rent: "5000.00",
}).go();

Returns the following:

{
    "mallId": "EastPointe",
    "storeId": "LatteLarrys",
    "buildingId": "BuildingA1",
    "unitId": "B47",
    "category": "spite store",
    "leaseEndDate": "2020-02-29",
    "rent": "5000.00",
    "discount": "0.00"
}

UPDATE Record

Change the Store's Lease Date

When updating a record, you must include all Composite Attributes associated with the table's primary PK and SK.

let storeId = "LatteLarrys";
let mallId = "EastPointe";
let buildingId = "BuildingA1";
let unitId = "B47";
await StoreLocations.update({storeId, mallId, buildingId, unitId}).set({
    leaseEndDate: "2021-02-28"
}).go();

Returns the following:

{
    "leaseEndDate": "2021-02-28"
}

GET Record

Retrieve a specific Store in a Mall

When retrieving a specific record, you must include all Composite Attributes associated with the table's primary PK and SK.

let storeId = "LatteLarrys";
let mallId = "EastPointe";
let buildingId = "BuildingA1";
let unitId = "B47";
await StoreLocations.get({storeId, mallId, buildingId, unitId}).go();

Returns the following:

{
    "mallId": "EastPointe",
    "storeId": "LatteLarrys",
    "buildingId": "BuildingA1",
    "unitId": "B47",
    "category": "spite store",
    "leaseEndDate": "2021-02-28",
    "rent": "5000.00",
    "discount": "0.00"
}

DELETE Record

Remove a Store location from the Mall

When removing a specific record, you must include all Composite Attributes associated with the table's primary PK and SK.

let storeId = "LatteLarrys";
let mallId = "EastPointe";
let buildingId = "BuildingA1";
let unitId = "B47";
let storeId = "LatteLarrys";
await StoreLocations.delete({storeId, mallId, buildingId, unitId}).go();

Returns the following:

{}

Query Mall Records

All Stores in a particular mall

Fulfilling Requirement #1.

let mallId = "EastPointe";
let stores = await StoreLocations.query.malls({mallId}).go();

All Stores in a particular mall building

Fulfilling Requirement #1.

let mallId = "EastPointe";
let buildingId = "BuildingA1";
let stores = await StoreLocations.query.malls({mallId, buildingId}).go();

Find the store located in unit B47

Fulfilling Requirement #1.

let mallId = "EastPointe";
let buildingId = "BuildingA1";
let unitId = "B47";
let stores = await StoreLocations.query.malls({mallId, buildingId, unitId}).go();

Stores by Category at Mall

Fulfilling Requirement #2.

let mallId = "EastPointe";
let category = "food/coffee";
let stores = await StoreLocations.query.malls({mallId}).byCategory(category).go();

Stores by upcoming lease

Fulfilling Requirement #3.

let mallId = "EastPointe";
let q2StartDate = "2020-04-01";
let stores = await StoreLocations.query.leases({mallId}).lt({leaseEndDate: q2StartDate}).go();

Stores with renewals for Q4

Fulfilling Requirement #3.

let mallId = "EastPointe";
let q4StartDate = "2020-10-01";
let q4EndDate = "2020-12-31";
let stores = await StoreLocations.query.leases({mallId})
    .between(
      {leaseEndDate: q4StartDate}, 
      {leaseEndDate: q4EndDate})
    .go();

Spite-stores with lease renewals this year

Fulfilling Requirement #3.

let mallId = "EastPointe";
let yearStarDate = "2020-01-01";
let yearEndDate = "2020-12-31";
let storeId = "LatteLarrys";
let stores = await StoreLocations.leases(mallId)
    .between (
      {leaseEndDate: yearStarDate}, 
      {leaseEndDate: yearEndDate})
    .filter(attr => attr.category.eq("Spite Store"))
    .go();

All Latte Larrys in a particular mall building

let mallId = "EastPointe";
let buildingId = "BuildingA1";
let unitId = "B47";
let storeId = "LatteLarrys";
let stores = await StoreLocations.query.malls({mallId, buildingId, storeId}).go();

Exported TypeScript Types

The following types are exported for easier use while using ElectroDB with TypeScript:

EntityRecord Type

The EntityRecord type is an object containing every attribute in an Entity's model.

Definition:

type EntityRecord<E extends Entity<any, any, any, any>> =
    E extends Entity<infer A, infer F, infer C, infer S>
        ? Item<A,F,C,S,S["attributes"]>
        : never;

Use:

type EntitySchema = EntityRecord<typeof MyEntity>

EntityItem Type

This type represents an item as it is returned from a query. This is different from the EntityRecord in that this type reflects the required, hidden, default, etc. properties defined on the attributes.

Definition:

export type EntityItem<E extends Entity<any, any, any, any>> =
  E extends Entity<infer A, infer F, infer C, infer S>
  ? ResponseItem<A, F, C, S>
  : never;

Use:

type Thing = EntityItem<typeof MyEntityInstance>;

CollectionItem Type

This type represents the value returned from a collection query, and is similar to EntityItem.

Use:

type CollectionResults = CollectionItem<typeof MyServiceInstance, "collectionName">

CreateEntityItem Type

This type represents an item that you would pass to your entity's put or create method.

Definition:

export type CreateEntityItem<E extends Entity<any, any, any, any>> =
  E extends Entity<infer A, infer F, infer C, infer S>
  ? PutItem<A, F, C, S>
  : never;

Use:

type NewThing = CreateEntityItem<typeof MyEntityInstance>;

UpdateEntityItem Type

This type represents an item that you would pass to your entity's set method when using create or update.

Definition:

export type UpdateEntityItem<E extends Entity<any, any, any, any>> =
  E extends Entity<infer A, infer F, infer C, infer S>
  ? SetItem<A, F, C, S>
  : never;

Use:

type UpdateProperties = UpdateEntityItem<typeof MyEntityInstance>;

UpdateAddEntityItem Type

This type represents an item that you would pass to your entity's add method when using create or update.

Definition:

export type UpdateAddEntityItem<E extends Entity<any, any, any, any>> =
    E extends Entity<infer A, infer F, infer C, infer S>
        ? AddItem<A, F, C, S>
        : never;
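
Use (following the pattern of the types above):

type AddProperties = UpdateAddEntityItem<typeof MyEntityInstance>;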

UpdateSubtractEntityItem Type

This type represents an item that you would pass to your entity's subtract method when using create or update.

Definition:

export type UpdateSubtractEntityItem<E extends Entity<any, any, any, any>> =
    E extends Entity<infer A, infer F, infer C, infer S>
        ? SubtractItem<A, F, C, S>
        : never;

UpdateAppendEntityItem Type

This type represents an item that you would pass to your entity's append method when using create or update.

Definition:

export type UpdateAppendEntityItem<E extends Entity<any, any, any, any>> =
    E extends Entity<infer A, infer F, infer C, infer S>
        ? AppendItem<A, F, C, S>
        : never;

UpdateRemoveEntityItem Type

This type represents an item that you would pass to your entity's remove method when using create or update.

Definition:

export type UpdateRemoveEntityItem<E extends Entity<any, any, any, any>> =
    E extends Entity<infer A, infer F, infer C, infer S>
        ? RemoveItem<A, F, C, S>
        : never;

UpdateDeleteEntityItem Type

This type represents an item that you would pass to your entity's delete method when using create or update.

Definition:

export type UpdateDeleteEntityItem<E extends Entity<any, any, any, any>> =
    E extends Entity<infer A, infer F, infer C, infer S>
        ? DeleteItem<A, F, C, S>
        : never;

Using ElectroDB With Existing Data

When using ElectroDB with an existing table and/or data model, there are a few configurations you may need to make to your ElectroDB model. Read the sections below to see if any of the following cases fits your particular needs.

Whenever using ElectroDB with existing tables/data, it is best to use the Query Option ignoreOwnership. ElectroDB leaves some meta-data on items to help ensure data queried and returned from DynamoDB does not leak between entities. Because your data was not made by ElectroDB, these checks could impede your ability to return data.

// when building params
.params({ignoreOwnership: true})
// when querying the table
.go({ignoreOwnership: true})
// when using pagination
.page(null, {ignoreOwnership: true})

Your existing index fields have values with mixed case:

DynamoDB is case-sensitive, and ElectroDB will lowercase key values by default. In the case where you modeled your data with uppercase, or did not apply case modifications, ElectroDB can be configured to match this behavior. Check out the section on Index Casing to read more.

You have index field names that match attribute names:

With Single Table Design, it is encouraged to give index fields a generic name, like pk, sk, gsi1pk, etc. In reality, it is also common for tables to have index fields that are named after the domain itself, like accountId, organizationId, etc.

ElectroDB tries to abstract away your table's key fields when working with DynamoDB: instead of defining pk or sk in your model's attributes, you define them as indexes and map other attributes onto those fields as a composite. By using separate item fields for your keys and for the actual attributes you use in your application, you can leverage more advanced modeling techniques in DynamoDB.

If your existing table uses non-generic fields that also function as attributes, check out the section Attributes as Indexes to learn more about how ElectroDB handles these types of indexes.

Electro CLI

NOTE: The ElectroCLI is currently in a beta phase and subject to change.

Electro is a CLI utility toolbox for extending the functionality of ElectroDB. Current functionality of the CLI allows you to:

  1. Execute queries against your Entities, Services, Models directly from the command line.
  2. Dynamically stand up an HTTP Service to interact with your Entities, Services, Models.

For usage and installation details you can learn more here.

Version 1 Migration

This section is to detail any breaking changes made on the journey to a stable 1.0 product.

New schema format/breaking key format change

It became clear when I added the concept of a Service that the "version" paradigm of having the version in the PK wasn't going to work. This is because collection queries use the same PK for all entities, and this would prevent some entities in a Service from changing versions without impacting the Service as a whole. The better move is to place the version in the SK after the entity name, so that all versions of an entity can be queried. This will work nicely with the migration feature I have planned to help migrate between model versions.

To address this change, I decided it would be best to change the structure for defining a model, which is then used as a heuristic to determine where to place the version in the key (PK or SK). This has the benefit of not breaking existing models, but does increase some complexity in the underlying code.

Additionally, a change was made to the Service class: new Services now take the service name as a string instead of an object as before.

In the old scheme, version came after the service name (see ^).

pk: $mallstoredirectory_1#mall_eastpointe
                        ^
sk: $mallstores#building_buildinga#store_lattelarrys

In the new scheme, version comes after the entity name (see ^).

pk: $mallstoredirectory#mall_eastpointe

sk: $mallstores_1#building_buildinga#store_lattelarrys
                ^

In practice the change looks like this for use of Entity:

const DynamoDB = require("aws-sdk/clients/dynamodb");
const {Entity} = require("electrodb");
const client = new DynamoDB.DocumentClient();
const table = "dynamodb_table_name";

// old way
let old_schema = {
  entity: "model_name",
  service: "service_name",
  version: "1",
  table: table,
  attributes: {...},
  indexes: {...}
};
new Entity(old_schema, {client});

// new way
let new_schema = {
  model: {
    entity: "model_name",
    service: "service_name",
    version: "1",
  },
  attributes: {...},
  indexes: {...}
};
new Entity(new_schema, {client, table});

Changes to usage of Service would look like this:

const DynamoDB = require("aws-sdk/clients/dynamodb");
const {Service} = require("electrodb");
const client = new DynamoDB.DocumentClient();
const table = "dynamodb_table_name";

// old way
new Service({
  service: "service_name",
  version: "1",
  table: table,
}, {client});

// new way
new Service("service_name", {client, table});

// new way (for better TypeScript support)
new Service({entity1, entity2, ...})

The renaming of index property Facets to Composite and Template

In preparation for moving the codebase to version 1.0, ElectroDB will now accept the facets property as either the composite and/or template properties. The facets property is still accepted by ElectroDB but will be deprecated sometime in the future (tbd).

This change stems from the fact that facets is already a defined term in the DynamoDB space, and that definition does not fit the use-case of how ElectroDB uses the term. To avoid confusion for new developers, the facets property shall now be called composite (as in Composite Attributes) when supplying an Array of attributes, and template when supplying a string. These are two independent fields for two reasons:

  1. ElectroDB will validate that the Composite Attributes provided map to those in the template (more validation is always nice).

  2. Allowing the composite array to be supplied independently allows Composite Attributes to remain typed even when using a Composite Attribute Template.
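
As an illustrative sketch of supplying both properties on one index (attribute names are placeholders; see the Composite Attribute Templates section for the exact template syntax):

sk: {
  field: "sk",
  composite: ["buildingId", "unitId"],
  template: "${buildingId}#${unitId}"
}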

Get Method to Return null

1.0.0 brings back a null response from the get() method when a record could not be found. Prior to 1.0.0 ElectroDB returned an empty object.
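
A minimal sketch of handling the new return value:

const store = await StoreLocations.get({storeId, mallId, buildingId, unitId}).go();
if (store === null) {
    // record not found (prior to 1.0.0 this would have been an empty object)
}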

Coming Soon

  • Default query options defined on the model to give more general control of interactions with the Entity.

Download Details:
Author: tywalch
Official Website: https://github.com/tywalch/electrodb 
License: MIT
 

#electrodb #dynamodb