How to Share a Common Redis Connection on Fastify

@fastify/redis

Fastify Redis connection plugin. With it, you can share the same Redis connection across every part of your server.

Install

npm i @fastify/redis --save

Usage

Add it to your project with register and you are done!

Create a new Redis Client

Under the hood, ioredis is used as the client; the options that you pass to register will be passed to the Redis client.

const fastify = require('fastify')()

// create by specifying host
fastify.register(require('@fastify/redis'), { host: '127.0.0.1' })

// OR by specifying Redis URL
fastify.register(require('@fastify/redis'), { url: 'redis://127.0.0.1', /* other redis options */ })

// OR with more options
fastify.register(require('@fastify/redis'), { 
  host: '127.0.0.1', 
  password: '***',
  port: 6379, // Redis port
  family: 4   // 4 (IPv4) or 6 (IPv6)
})
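
As a quick follow-up sketch (the password and database index below are placeholders, not values from this article), the URL form can also carry credentials and a database index, since ioredis understands standard Redis connection URLs:

// URL form including auth and a database index (placeholder credentials)
fastify.register(require('@fastify/redis'), {
  url: 'redis://:your-password@127.0.0.1:6379/0'
})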

Accessing the Redis Client

Once you have registered your plugin, you can access the Redis client via fastify.redis.

The client is automatically closed when the fastify instance is closed.

'use strict'

const Fastify = require('fastify')
const fastifyRedis = require('@fastify/redis')

const fastify = Fastify({ logger: true })

fastify.register(fastifyRedis, { 
  host: '127.0.0.1', 
  password: 'your strong password here',
  port: 6379, // Redis port
  family: 4   // 4 (IPv4) or 6 (IPv6)
})

fastify.get('/foo', (req, reply) => {
  const { redis } = fastify
  redis.get(req.query.key, (err, val) => {
    reply.send(err || val)
  })
})

fastify.post('/foo', (req, reply) => {
  const { redis } = fastify
  redis.set(req.body.key, req.body.value, (err) => {
    reply.send(err || { status: 'ok' })
  })
})

fastify.listen({ port: 3000 }, err => {
  if (err) throw err
  console.log(`server listening on ${fastify.server.address().port}`)
})
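
Because ioredis commands also return promises when no callback is passed, the same routes can be written with async/await. The sketch below is an alternative style, not part of the plugin's own example, and the /foo-async route name is just illustrative:

// Promise-based variant of the routes above
fastify.get('/foo-async', async (req, reply) => {
  const value = await fastify.redis.get(req.query.key)
  return { value }
})

fastify.post('/foo-async', async (req, reply) => {
  await fastify.redis.set(req.body.key, req.body.value)
  return { status: 'ok' }
})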

Using an existing Redis client

You may also supply an existing Redis client instance by passing an options object with the client property set to the instance. In this case, the client is not automatically closed when the Fastify instance is closed.

'use strict'

const fastify = require('fastify')()
const Redis = require('ioredis')
const redis = new Redis({ host: 'localhost', port: 6379 })

fastify.register(require('@fastify/redis'), { client: redis })

Note: by default, @fastify/redis will not automatically close the client connection when the Fastify server shuts down. To opt in to this behavior, register the client like so:

fastify.register(require('@fastify/redis'), {
  client: redis,
  closeClient: true
})
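
As a minimal illustration of that opt-in (assuming the registration above), closing the Fastify instance then also closes the supplied client:

// With closeClient: true, fastify.close() also closes the supplied Redis client
fastify.close(() => {
  console.log('fastify closed, and the shared Redis client was closed with it')
})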

Registering multiple Redis client instances

By using the namespace option you can register multiple Redis client instances.

'use strict'

const fastify = require('fastify')()
const Redis = require('ioredis')
const redis = new Redis({ host: 'localhost', port: 6379 })

fastify
  .register(require('@fastify/redis'), {
    host: '127.0.0.1',
    port: 6380,
    namespace: 'hello'
  })
  .register(require('@fastify/redis'), {
    client: redis,
    namespace: 'world'
  })

// Here we will use the `hello` named instance
fastify.get('/hello', (req, reply) => {
  const { redis } = fastify

  redis.hello.get(req.query.key, (err, val) => {
    reply.send(err || val)
  })
})

fastify.post('/hello', (req, reply) => {
  const { redis } = fastify

  redis['hello'].set(req.body.key, req.body.value, (err) => {
    reply.send(err || { status: 'ok' })
  })
})

// Here we will use the `world` named instance
fastify.get('/world', (req, reply) => {
  const { redis } = fastify

  redis['world'].get(req.query.key, (err, val) => {
    reply.send(err || val)
  })
})

fastify.post('/world', (req, reply) => {
  const { redis } = fastify

  redis.world.set(req.body.key, req.body.value, (err) => {
    reply.send(err || { status: 'ok' })
  })
})

fastify.listen({ port: 3000 }, function (err) {
  if (err) {
    fastify.log.error(err)
    process.exit(1)
  }
})

Redis streams (Redis 5.0 or greater is required)

@fastify/redis supports Redis streams out of the box.

'use strict'

const fastify = require('fastify')()

fastify.register(require('@fastify/redis'), {
  host: '127.0.0.1',
  port: 6380
})

fastify.get('/streams', async (request, reply) => {
  // We add an entry to the stream 'my awesome fastify stream name', setting the field 'hello' to 'fastify is awesome'
  await fastify.redis.xadd(['my awesome fastify stream name', '*', 'hello', 'fastify is awesome'])

  // We read events from the beginning of the stream called 'my awesome fastify stream name'
  let redisStream = await fastify.redis.xread(['STREAMS', 'my awesome fastify stream name', 0])

  // We parse the results
  let response = []
  let events = redisStream[0][1]

  for (let i = 0; i < events.length; i++) {
    const e = events[i]
    response.push(`#LOG: id is ${e[0].toString()}`)

    // e[1] is a flat array of field/value pairs; push each element into the response
    for (const item of e[1]) {
      response.push(item.toString())
    }
  }

  reply.status(200)
  return { output: response }
  // Will return something like this:
  // { "output": ["#LOG: id is 1559985742035-0", "hello", "fastify is awesome"] }
})

fastify.listen({ port: 3000 }, function (err) {
  if (err) {
    fastify.log.error(err)
    process.exit(1)
  }
})
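
If you want to wait for new entries instead of re-reading the stream from the beginning, a blocking XREAD can be used. The following is only a sketch (the route name is made up and the 5-second timeout is arbitrary); keep in mind that a blocking read ties up the shared connection, so for heavy use a dedicated client registered under its own namespace is a safer choice:

fastify.get('/streams/latest', async (request, reply) => {
  // Block for up to 5 seconds waiting for entries newer than '$' (the current end of the stream).
  // On timeout, xread resolves with null.
  const result = await fastify.redis.xread(
    'BLOCK', 5000,
    'STREAMS', 'my awesome fastify stream name', '$'
  )
  return { output: result || [] }
})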

NB: you can find more information about Redis streams and the relevant commands (XADD, XREAD, and friends) in the official Redis documentation.

Redis connection error

The majority of errors are silent due to ioredis's silent error handling, but during plugin registration the plugin checks that the connection to the Redis instance is correctly established. You can therefore receive an ERR_AVVIO_PLUGIN_TIMEOUT error if the connection cannot be established within the expected time frame, or a dedicated error for an invalid connection.
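
To surface such a registration failure explicitly, you can wait for fastify.ready() and inspect the error before starting the server. This is a minimal sketch under the assumption that the Redis instance may be unreachable; it is not part of the plugin's documentation:

const fastify = require('fastify')({ logger: true })

fastify.register(require('@fastify/redis'), { host: '127.0.0.1' })

fastify.ready(err => {
  if (err) {
    // e.g. ERR_AVVIO_PLUGIN_TIMEOUT when the connection cannot be established in time
    fastify.log.error(err)
    process.exit(1)
  }
})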

Acknowledgements

This project is kindly sponsored by the sponsors listed in the project repository.

Download Details:
 

Author: fastify
Official Website: https://github.com/fastify/fastify-redis 
License: MIT license

