JavaScript Functional Reactive Stream Libraries - Compare Leading

About Functional Reactive Streaming

At the foundation of every reactive programming application stands a library that offers data structures and operators to compose the data flow that models the solution.

These libraries, often called streaming libraries, are characterized by different factors. Let’s see some of them.

Push/Pull

In push-based behavior, the program graph is ready and constantly listening for new input data. This is, for example, the case with an event listener on a DOM element: the corresponding callback runs when a new input event occurs.

By contrast, with pull-based behavior, the input is read when needed. This usually happens with continuously changing values, for example, when we call a function that returns the current time.
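A minimal plain-JavaScript sketch of the two behaviors (the emitter here is a stand-in for a DOM event source, not any library’s API):

```javascript
// Pull: the consumer reads the value when it needs it.
const currentTime = () => Date.now()

// Push: the producer notifies registered callbacks when new data arrives.
function createEmitter () {
    const listeners = []
    return {
        addListener: fn => listeners.push(fn),
        emit: value => listeners.forEach(fn => fn(value))
    }
}

const clicks = createEmitter() // stand-in for a DOM event source
const received = []
clicks.addListener(value => received.push(value))
clicks.emit('click') // the callback runs when the event occurs
// received === ['click']
```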

Functional reactive libraries are usually focused on push, but there are experiments to expand support to pull. (See, for example, hareactive behavior [2] and XStream signals [3].)

Cold/Hot Streams

Reactive libraries offer the basis to compose our application’s business graph. When the graph starts running, the input data producers get connected and listen for input events.

Here come two flavors. A stream is defined as cold if the data producer is created inside the stream when a listener is added to it (or to a dependent stream). This usually means the stream is unicast by default and that every attached listener gets a different producer instance.

We call streams hot when the producer is created outside the stream. This case is usually multicast: many listeners can observe the same shared data source. For more explanation, I refer you to Dave Sexton’s article about hot/cold observables [4].
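The difference can be sketched in plain JavaScript (a simplified model, not any of the three libraries’ actual implementations):

```javascript
// Cold: a fresh producer runs for every listener (unicast) —
// each subscriber receives the whole sequence from the start.
function cold (values) {
    return {
        addListener (listener) {
            values.forEach(listener) // producer instantiated per listener
        }
    }
}

// Hot: one shared producer outside the stream (multicast) —
// subscribers only see values pushed after they attach.
function hot () {
    const listeners = []
    return {
        addListener: l => listeners.push(l),
        push: v => listeners.forEach(l => l(v))
    }
}

const c = cold([1, 2, 3])
const seenA = []
const seenB = []
c.addListener(v => seenA.push(v)) // seenA === [1, 2, 3]
c.addListener(v => seenB.push(v)) // seenB === [1, 2, 3], its own run

const h = hot()
const late = []
h.push('missed') // no listeners yet: the value is simply lost
h.addListener(v => late.push(v))
h.push('caught') // late === ['caught']
```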

Implicit/Explicit Time Modeling

As said, the base functionality of a reactive library is to implement the application data graph that reacts to input data and produces the desired output over time.

The time concept can be represented implicitly, by the push time of input data and the order of function calls. Or it can be represented explicitly, creating for every input value a tuple that associates it with a time number.
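A sketch of explicit time modeling: each input becomes a (time, value) tuple, so simultaneity and ordering are plain data instead of a side effect of call order (this mirrors the idea behind @most/core’s design, not its actual internals):

```javascript
// Implicit time: only the order of callback invocations encodes "when".
// Explicit time: each input is paired with a timestamp.
const timestamp = value => [Date.now(), value] // [time, value] tuple

// With explicit time, questions about simultaneity become list operations:
const events = [[0, 'a'], [5, 'b'], [5, 'c']] // two events at the same instant
const atTime5 = events.filter(([t]) => t === 5).map(([, v]) => v)
// atTime5 === ['b', 'c'] — simultaneity is visible in the data itself
```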

Sync/Async Startup

After the application graph is declared and instantiated, it is necessary to call a method to start observing for input events.

There are two kinds of strategies. With sync startup, we usually call an addListener method on a stream. The method synchronously runs the network backward, creating and observing the input sources when it reaches them.

Usually, in an application with sync startup, we’ll find many invocations of the addListener method at different points in time, whenever the application needs to observe new sources.

In async startup, we usually have a single run effect function call per application. All streams (existing and future) merge/compose down to this single final stream. The start method is usually asynchronous, and all streams wait to process input events until the whole network has warmed up.
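The two strategies can be sketched as follows (simplified stand-ins, not the libraries’ real APIs; runEffects here is a hypothetical helper named after @most/core’s entry point):

```javascript
// Sync startup: addListener immediately walks the network backward,
// so the source starts producing before the call returns.
function mapStream (source, fn) {
    return {
        addListener (listener) {
            source.addListener(v => listener(fn(v)))
        }
    }
}

// Async startup: a single runEffects call defers via the microtask queue,
// so the whole network is wired up before any event is processed.
function runEffects (stream, listener) {
    return Promise.resolve().then(() => stream.addListener(listener))
}

const source = { addListener: l => [1, 2].forEach(l) } // synchronous source
const doubled = mapStream(source, x => x * 2)

const syncSeen = []
doubled.addListener(v => syncSeen.push(v))
// syncSeen === [2, 4] already, synchronously

const asyncSeen = []
runEffects(doubled, v => asyncSeen.push(v))
// asyncSeen is still [] here; events flow only after the current tick
```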

Let’s present the contenders…

RxJS: Observable Boss

Probably the most mature and most used reactive lib in the JavaScript ecosystem.

  • Repository: https://github.com/ReactiveX/rxjs.
  • Current version: 6.3.3.
  • Stars and used by: 2.1K 2m.
  • Type support: Native TypeScript.
  • Push/pull: Push.
  • Cold/hot: Hot (can support cold).
  • Implicit/explicit time: Implicit.
  • API style: Static or dotted by pipe.
  • Native ES6 modules support: No. (Yes in v.7 currently in alpha.)

XStream: King of the Cycle

Born to better address Cycle.js-specific functional requirements, it can be considered a lite, opinionated version of RxJS, from André Staltz.

  • Repository: https://github.com/staltz/xstream.
  • Current version: 11.11.0.
  • Stars: 1.9K.
  • Type support: Native TypeScript.
  • Push/pull: Push.
  • Cold/hot: Hot.
  • Implicit/explicit time: Implicit.
  • API style: Principally dotted.
  • Native ES6 modules support: No.

@most/core: New in Town

The new incarnation of the monadic stream toolkit for FRP from Brian Cavalier.

  • Repository: https://github.com/mostjs/core.
  • Current version: 1.4.1.
  • Stars: 223 (3.1K on the main project).
  • Type support: Flow and TypeScript definitions.
  • Push/pull: Push.
  • Cold/hot: Hot.
  • Implicit/explicit time: Explicit.
  • API style: Static.
  • Native ES6 modules support: Yes.

Benchmark App

For benchmarking, a simple graph network was replicated with the three libraries.

Each version was implemented in a separate ES6 module. For every input impulse, the network generates a list of to-do items. Every item carries a copy of an old item (the old item is updated every fourth element).

The network is very simple but uses some of the principal operators of the three libs.

[Figure: the benchmark network]

Another peculiarity of the network is the presence of a “diamond”. Diamond flows are not recommended and can cause glitches during execution. In the demo, the diamond was introduced deliberately to compare how the three libs react.

In short, @most/core, with its explicit time representation and async startup, was the most intuitive and glitch-resistant. For the other two libs, it was necessary to force async behavior by adding a setTimeout to make the diamond network work as expected.
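To see why a diamond glitches under naive synchronous propagation, consider this minimal sketch (a toy reactive cell, not any of the three libraries):

```javascript
// A "diamond": one source feeds two branches that are re-combined.
// With naive synchronous propagation the combination point fires once
// per branch update and briefly observes an inconsistent pair.
function makeCell (value) {
    const deps = []
    return {
        get: () => value,
        set (v) { value = v; deps.forEach(fn => fn()) },
        onChange: fn => deps.push(fn)
    }
}

const source = makeCell(1)
let left = source.get() + 1    // branch A
let right = source.get() * 10  // branch B
const observed = []

// Each branch updates itself, then the "combined" node records the pair.
source.onChange(() => { left = source.get() + 1; observed.push([left, right]) })
source.onChange(() => { right = source.get() * 10; observed.push([left, right]) })

source.set(2)
// observed === [[3, 10], [3, 20]]: the first pair mixes the new `left`
// with the stale `right` — that intermediate state is the glitch.
```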

Let’s see the core of the three versions plus an imperative traditional version:

function todoList (deferred) {
    const list = []
    let old = {}
    for (let i = 0; i < config.iterations; i++) {
        const t = todo()
        old = t.id % 4 === 0 ? { ...t } : old
        t.old = { ...old }
        list.push(t)
    }
    return deferred.resolve(list)
}

Imperative traditional version

function todoList (deferred) {
    const ping$ = new R.Observable(listener => {
        setTimeout(() => {
            ;[...Array(config.iterations).keys()].forEach(i => listener.next(i))
        }, 0)
    })
    const todo$ = RO.share()(RO.map(todo)(ping$))
    const old$ = RO.startWith({})(RO.filter(todo => todo.id % 4 === 0)(todo$))
    const sample$ = RO.map(([last, old]) => ({
        ...last,
        old
    }))(RO.withLatestFrom(old$)(todo$))
    const list$ = RO.scan((list, item) => [...list, item], [])(sample$)
    const end$ = RO.take(1)(
        RO.filter(list => list.length === config.iterations)(list$)
    )
    return RO.tap(res => deferred.resolve(res))(end$)
}

RxJS version

function todoList (deferred) {
    const ping$ = fromArray([...Array(config.iterations).keys()])
    const todo$ = multicast(map(todo, ping$))
    const old$ = merge(
        map(todo => (todo.id % 4 === 0 ? todo : {}), take(1, todo$)),
        filter(todo => todo.id % 4 === 0, skip(1, todo$))
    )
    const sample$ = snapshot(
        (old, last) => ({...last, old}),
        old$,
        todo$
    )
    const list$ = scan((list, item) => [...list, item], [], sample$)
    const end$ = take(
        1,
        filter(list => list.length === config.iterations, list$)
    )
    return tap(res => deferred.resolve(res), end$)
}

@most/core version

function todoList (deferred) {
    const ping$ = xs.create({
        start: function (listener) {
            setTimeout(() => {
                ;[...Array(config.iterations).keys()].forEach(i =>
                    listener.next(i)
                )
            }, 0)
        },

        stop: function () {}
    })
    const todo$ = ping$.map(todo)
    const old$ = todo$.filter(todo => todo.id % 4 === 0).startWith({})
    const sample$ = sampleCombine(old$)(todo$).map(([last, old]) => ({
        ...last,
        old
    }))
    const list$ = sample$.fold((list, item) => [...list, item], [])
    const end$ = list$.filter(list => list.length === config.iterations).take(1)
    return end$.map(res => deferred.resolve(res))
}

XStream version

Speed Benchmarks

Benchmark.js was used to compare the three libraries’ speeds.

[Figure: Ops/Sec (high is better)]

  • @most/core — 60.83 ops/sec ±2.60% (72 runs sampled).
  • XStream — 54.59 ops/sec ±1.46% (81 runs sampled).
  • RxJS — 51.58 ops/sec ±2.59% (80 runs sampled).

The fastest is @most/core, done in 18.40s (yarn speed: 17.22s user, 0.58s system, 93% cpu, 18.955 total).

All three libs are very performant. @most/core was the winner with the best ops/sec value.

Memory Benchmarks

Memory management is equally important as speed, if not more so. In real-world applications, we are going to have hundreds of thousands of stream nodes, so correct and efficient memory management is fundamental.

To measure memory pressure, Node was started with the garbage collector API exposed, and the Airbnb Memwatch library was used to trace heap usage.

The libs were also compared against an imperative, mutating “traditional” version of the algorithm.

Two different tests were run: one recursive and one based on promises.

Recursive Memory Test

import { induce as mostInduce } from '../most.mjs'
import { induce as xstreamInduce } from '../xstream.mjs'
import { induce as rxjsInduce } from '../rxjs.mjs'
import { induce as imperativeInduce } from '../imperative.mjs'
import { typeWriter } from './common.mjs'
import memwatch from '@HolgerFrank/node-memwatch'

const type = process.argv[2]
const cycles = parseInt(process.argv[3])

const induce = {
    imperative: imperativeInduce,
    most: mostInduce,
    xstream: xstreamInduce,
    rxjs: rxjsInduce
}

const writer = typeWriter('recursive-' + type)

let queue = Promise.resolve()

function cycle (induce, x) {
    x > 0 &&
        induce({
            resolve: function () {
                global.gc()
                cycle(induce, x - 1)
            }
        })
}

memwatch.on('stats', function (info) {
    queue = queue.then(() => writer.writeRecords([info]))
})
global.gc()
cycle(induce[type], cycles)
global.gc()

Recursion-based memory test app

Two graphs are presented, one with the imperative version and one without. This is because the imperative version’s really bad results would obscure the comparison between the other libraries.

[Figure: Recursive algorithm (low is better)]

[Figure: Recursive algorithm, without the imperative version (low is better)]

The imperative version was unable to release memory between recursive iterations. Removing it, we see that @most/core is the best one.

RxJS and XStream are a little more expensive. XStream shows a growing trend that would deserve more investigation.

Promise-Based Memory Test

import { induce as imperativeInduce } from '../imperative.mjs'
import { induce as mostInduce } from '../most.mjs'
import { induce as xstreamInduce } from '../xstream.mjs'
import { induce as rxjsInduce } from '../rxjs.mjs'
import { typeWriter } from './common.mjs'
import memwatch from '@HolgerFrank/node-memwatch'

const type = process.argv[2]
const cycles = parseInt(process.argv[3])

const induce = {
    imperative: imperativeInduce,
    most: mostInduce,
    xstream: xstreamInduce,
    rxjs: rxjsInduce
}

const writer = typeWriter('promise-' + type)

let queue = Promise.resolve()

function cycle (induce, x) {
    let runQueue = Promise.resolve()
    for (let i = 0; i < x; i++) {
        runQueue = runQueue.then(function () {
            return new Promise(function (resolve, reject) {
                induce({ resolve })
            }).then(function () {
                return global.gc()
            })
        })
    }
    return runQueue
}

memwatch.on('stats', function (info) {
    queue = queue.then(() => writer.writeRecords([info]))
})
global.gc()
cycle(induce[type], cycles)
global.gc()

Promise-based memory test app

[Figure: Promise-based (low is better)]

Using promises, the imperative version gets its revenge with optimal memory usage. @most/core is still the best among the three stream libs, but with more deviation; XStream is the most expensive but more linear, with fewer spikes.

Conclusion

Three of the major functional reactive streaming JavaScript libraries were tested for speed and memory usage on Node.js.

All three libraries are very performant in speed and efficient in memory usage.

@most/core was identified as the best in speed and the most efficient in memory usage. Moreover, its explicit time representation and always-async startup help prevent glitches and make for an easy-to-understand, predictable data flow.

Thank you for reading!
