
Next.js stateless session utility using signed and encrypted cookies to store data

next-iron-session

🛠 Next.js and Express (connect middleware) stateless session utility using signed and encrypted cookies to store data

This backend utility for Next.js, Express, and Connect lets you create a session that is then stored in browser cookies via a signed and encrypted seal. This provides client sessions that are ⚒️ iron-strong.

The seal stored on the client contains the session data; your server does not, which makes it a “stateless” session from the server’s point of view. This is a different take from next-session, where the cookie contains a session ID that is then used to look up data on the server side.

Online demo at https://next-iron-session.now.sh/ 👀

The seal is signed and encrypted using @hapi/iron; iron-store is used behind the scenes. This method of storing session data is the same technique used by frameworks like Ruby on Rails.

**♻️ Password rotation is supported.** It allows you to change the password used to sign and encrypt sessions while still being able to decrypt sessions that were created with a previous password.

Next.js’s 🗿 Static generation (SG) and ⚙️ Server-side Rendering (SSR) are both supported.

There’s a Connect middleware available so you can use this library in any Connect compatible framework like Express.

By default the cookie has an ⏰ expiration time of 15 days, set via maxAge. After that, even if someone tries to reuse the cookie, next-iron-session will not accept the underlying seal because the expiration is part of the seal value. See https://hapi.dev/family/iron for more information on @hapi/iron mechanisms.

Installation

npm add next-iron-session

yarn add next-iron-session

Usage

You can find real-world examples (Next.js, Express) in the examples folder.

The password is a private key you must pass at runtime; it has to be at least 32 characters long. Use https://1password.com/password-generator/ to generate strong passwords.

⚠️ Always store passwords in secret environment variables on your platform.

pages/api/login.js:

import { withIronSession } from "next-iron-session";

async function handler(req, res) {
  // get user from database then:
  req.session.set("user", {
    id: 230,
    admin: true,
  });
  await req.session.save();
  res.send("Logged in");
}

export default withIronSession(handler, {
  cookieName: "MY_APP_COOKIE", // any name works; cookieName is required (see the API section)
  password: "complex_password_at_least_32_characters_long",
  // if your localhost is served on http:// then disable the secure flag
  cookieOptions: {
    secure: process.env.NODE_ENV === "production",
  },
});

pages/api/user.js:

import { withIronSession } from "next-iron-session";

function handler(req, res, session) {
  const user = req.session.get("user");
  res.send({ user });
}

export default withIronSession(handler, {
  cookieName: "MY_APP_COOKIE", // any name works; cookieName is required (see the API section)
  password: "complex_password_at_least_32_characters_long",
  // if your localhost is served on http:// then disable the secure flag
  cookieOptions: {
    secure: process.env.NODE_ENV === "production",
  },
});

pages/api/logout.js:

import { withIronSession } from "next-iron-session";

function handler(req, res, session) {
  req.session.destroy();
  res.send("Logged out");
}

export default withIronSession(handler, {
  cookieName: "MY_APP_COOKIE", // any name works; cookieName is required (see the API section)
  password: "complex_password_at_least_32_characters_long",
  // if your localhost is served on http:// then disable the secure flag
  cookieOptions: {
    secure: process.env.NODE_ENV === "production",
  },
});

⚠️ Sessions are automatically recreated (as empty sessions; see the sketch below) when:

  • they expire
  • a wrong password was used
  • the password id stored in the seal can no longer be found in the current password list
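In practice, this means your handler simply sees an empty session. Below is a hedged sketch of handling that case in an API route; the cookieName and the 401 response are illustrative choices, not part of the library:

import { withIronSession } from "next-iron-session";

async function handler(req, res) {
  const user = req.session.get("user");

  if (user === undefined) {
    // the seal expired, referenced an unknown password id, or failed to decrypt,
    // so next-iron-session handed us a fresh, empty session
    res.status(401).send("Please log in again");
    return;
  }

  res.send({ user });
}

export default withIronSession(handler, {
  cookieName: "MY_APP_COOKIE", // illustrative
  password: "complex_password_at_least_32_characters_long",
});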

Examples

Handle password rotation/update the password

When you want to:

  • rotate passwords for better security every two (or more, or less) weeks
  • change the password you previously used because it leaked somewhere (😱)

Then you can use multiple passwords:

Week 1:

export default withIronSession(handler, {
  password: [
    {
      id: 1,
      password: "complex_password_at_least_32_characters_long",
    },
  ],
});

Week 2:

export default withIronSession(handler, {
  password: [
    {
      id: 2,
      password: "another_password_at_least_32_characters_long",
    },
    {
      id: 1,
      password: "complex_password_at_least_32_characters_long",
    },
  ],
});

Notes:

  • id is required so that we do not have to try every password in the list when decrypting (the id is part of the cookie value).
  • The password used to encrypt session data (to seal) is always the first one in the array, so when rotating in a new password, put it first in the array.
  • Even if you start out with a single string password, you can move to array-based passwords later: the original string password is automatically given {id: 1}, as shown in the sketch below.
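To illustrate that last point, here is a hedged before/after sketch of moving from a single string password to array-based passwords; the password values are placeholders:

// Before: a single string password (internally equivalent to [{ id: 1, password: "..." }])
export default withIronSession(handler, {
  password: "complex_password_at_least_32_characters_long",
});

// After: switch to an array; the new password goes first, the old string password keeps id 1
export default withIronSession(handler, {
  password: [
    { id: 2, password: "another_password_at_least_32_characters_long" },
    { id: 1, password: "complex_password_at_least_32_characters_long" },
  ],
});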

Express / Connect middleware: ironSession

You can import and use ironSession if you want to use next-iron-session in Express and Connect.

import { ironSession } from "next-iron-session";

const session = ironSession({
  cookieName: "next-iron-session/examples/express",
  password: process.env.SECRET_COOKIE_PASSWORD,
  // if your localhost is served on http:// then disable the secure flag
  cookieOptions: {
    secure: process.env.NODE_ENV === "production",
  },
});

router.get("/profile", session, async function (req, res) {
  // now you can access all of the req.session.* utilities
  if (req.session.get("user") === undefined) {
    res.redirect("/restricted");
    return;
  }

  res.render("profile", {
    title: "Profile",
    userId: req.session.get("user").id,
  });
});

A more complete example using Express can be found in the examples folder.

Usage with next-connect

Since ironSession is an Express / Connect middleware, you can use it with next-connect:

import { ironSession } from "next-iron-session";

const session = ironSession({
  cookieName: "next-iron-session/examples/express",
  password: process.env.SECRET_COOKIE_PASSWORD,
  // if your localhost is served on http:// then disable the secure flag
  cookieOptions: {
    secure: process.env.NODE_ENV === "production",
  },
});
import nextConnect from "next-connect";

const handler = nextConnect();

handler.use(session).get((req, res) => {
  const user = req.session.get("user");
  res.send(`Hello user ${user.id}`);
});

export default handler;

API

withIronSession(handler, { password, cookieName, [ttl], [cookieOptions] })

This can be used to wrap Next.js getServerSideProps or API Routes so you can then access all req.session.* methods; a getServerSideProps sketch follows the options list below.

  • password, required: Private key used to encrypt the cookie. It has to be at least 32 characters long. Use https://1password.com/password-generator/ to generate strong passwords. password can be either a string or an array of objects like this: [{id: 2, password: "..."}, {id: 1, password: "..."}] to allow for password rotation.
  • cookieName, required: Name of the cookie to be stored
  • ttl, optional: In seconds, defaults to 14 days
  • cookieOptions, optional: Any option available from jshttp/cookie#serialize. Defaults to:
{
  httpOnly: true,
  secure: true,
  sameSite: "lax",
  // The next line makes sure the browser expires cookies before seals are considered expired by the server. It also allows for a clock difference of up to 60 seconds between server and clients.
  maxAge: (ttl === 0 ? 2147483647 : ttl) - 60,
  path: "/",
  // other options:
  // domain, if you want the cookie to be valid for the whole domain and subdomains, use domain: example.com
  // encode, there should be no need to use this option, encoding is done by next-iron-session already
  // expires, there should be no need to use this option, maxAge takes precedence
}
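Since withIronSession can wrap getServerSideProps as well, here is a hedged sketch of a server-rendered page reading the session; the file name, the cookieName, and the Profile component are illustrative:

// pages/profile.js
import { withIronSession } from "next-iron-session";

export const getServerSideProps = withIronSession(
  async ({ req }) => {
    // the same req.session.* utilities are available here as in API routes
    const user = req.session.get("user");

    if (!user) {
      // empty or missing session: render the page without a user (or redirect, if you prefer)
      return { props: {} };
    }

    return { props: { user } };
  },
  {
    cookieName: "MY_APP_COOKIE", // illustrative
    password: process.env.SECRET_COOKIE_PASSWORD,
    cookieOptions: {
      secure: process.env.NODE_ENV === "production",
    },
  }
);

export default function Profile({ user }) {
  return <pre>{user ? JSON.stringify(user, null, 2) : "Not logged in"}</pre>;
}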

ironSession({ password, cookieName, [ttl], [cookieOptions] })

Connect middleware.

import { ironSession } from "next-iron-session";

app.use(ironSession({ ...options }));

async applySession(req, res, { password, cookieName, [ttl], [cookieOptions] })

Allows you to use this module the way you want as long as you have access to req and res.

import { applySession } from "next-iron-session";

await applySession(req, res, options);

req.session.set(name, value)

req.session.get(name)

req.session.unset(name)

await req.session.save()

req.session.destroy()
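A hedged sketch exercising these methods inside an API route wrapped with withIronSession (the "cart" key is illustrative):

async function handler(req, res) {
  // write a value and persist it into the sealed cookie
  req.session.set("cart", { items: 3 });
  await req.session.save();

  // read it back (returns undefined if the key was never set)
  const cart = req.session.get("cart");

  // remove a single key; call save() again to persist the removal
  req.session.unset("cart");
  await req.session.save();

  // or drop the whole session cookie at once
  // req.session.destroy();

  res.send({ cart });
}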

FAQ

Why use pure 🍪 cookies for sessions?

This makes your sessions stateless: session data does not have to be stored on your server, and you do not need another server or service to store it. This is particularly useful in serverless architectures where you’re trying to reduce your backend dependencies.

What are the drawbacks?

There are some drawbacks to this approach:

  • you cannot invalidate a seal on demand, because no state about it is stored on the server side. The way the cookie is stored reduces the chances of this being a problem. Also, in most applications the first thing you do when receiving an authenticated request is to validate the user and their rights in your database, which covers the case where someone tries to use a token after their account was deactivated or deleted. If someone steals a user’s token, you should have a process in place to mitigate that: for example, deactivate the user and force a re-login with a flag in your database.
  • applications that do not support cookies won’t work, but you can use iron-store to implement something similar. In the future, next-iron-session could also accept basic auth or bearer tokens. Open an issue if you’re interested.
  • on most browsers, you’re limited to 4,096 bytes per cookie. To give you an idea, a next-iron-session cookie containing {user: {id: 230, admin: true}} is 358 bytes signed and encrypted: still plenty of cookie space available.
  • performance: server-side crypto could be slow; if that’s the case for you, let me know. Also, cookies are sent with every request to your website, even requests for images, so this can add overhead.

Now that you know the drawbacks, you can decide whether they are an issue for your application. More information can also be found on the Ruby on Rails website, which uses the same technique.

How is this different from JWT?

Not so much:

  • JWT is a standard; it stores metadata in the tokens themselves to ensure communication between different systems is seamless.
  • JWT tokens are signed but not encrypted: the payload is visible to clients if they inspect the token. You would have to use JWE to achieve the same.
  • the @hapi/iron mechanism is not a standard; it’s a way to sign and encrypt data into seals.

Depending on your own needs and preferences, next-iron-session may or may not fit you.

Project status

This is a recent library I authored because I needed it. While @hapi/iron is battle-tested and used in production on a lot of websites, this library is not (yet!). Please use it at your own risk.

If you find bugs or have API ideas, create an issue.

Credits

Thanks to Hoang Vo for advice and guidance while building this module. Hoang built next-connect and next-session.

Thanks to the hapi team for creating iron.

Download Details:

Author: vvo

Demo: https://next-iron-session.now.sh/

Source Code: https://github.com/vvo/next-iron-session

#nodejs #node #nextjs #javascript

Siphiwe Nair

Your Data Architecture: Simple Best Practices for Your Data Strategy

If you accumulate data on which you base your decision-making as an organization, you most probably need to think about your data architecture and consider possible best practices. Gaining a competitive edge, remaining customer-centric to the greatest extent possible, and streamlining processes to get on-the-button outcomes can all be traced back to an organization’s capacity to build a future-ready data architecture.

In what follows, we offer a short overview of the overarching capabilities of data architecture. These include user-centricity, elasticity, robustness, and the capacity to ensure the seamless flow of data at all times. Added to these are automation enablement, plus security and data governance considerations. These points form our checklist for what we perceive to be an anticipatory analytics ecosystem.

#big data #data science #big data analytics #data analysis #data architecture #data transformation #data platform #data strategy #cloud data platform #data acquisition

NBB: Ad-hoc CLJS Scripting on Node.js

Nbb

Not babashka. Node.js babashka!?

Ad-hoc CLJS scripting on Node.js.

Status

Experimental. Please report issues here.

Goals and features

Nbb's main goal is to make it easy to get started with ad hoc CLJS scripting on Node.js.

Additional goals and features are:

  • Fast startup without relying on a custom version of Node.js.
  • Small artifact (current size is around 1.2MB).
  • First class macros.
  • Support building small TUI apps using Reagent.
  • Complement babashka with libraries from the Node.js ecosystem.

Requirements

Nbb requires Node.js v12 or newer.

How does this tool work?

CLJS code is evaluated through SCI, the same interpreter that powers babashka. Because SCI works with advanced compilation, the bundle size, especially when combined with other dependencies, is smaller than what you get with self-hosted CLJS. That makes startup faster. The trade-off is that execution is less performant and that only a subset of CLJS is available (e.g. no deftype, yet).

Usage

Install nbb from NPM:

$ npm install nbb -g

Omit -g for a local install.

Try out an expression:

$ nbb -e '(+ 1 2 3)'
6

And then install some other NPM libraries to use in the script. E.g.:

$ npm install csv-parse shelljs zx

Create a script which uses the NPM libraries:

(ns script
  (:require ["csv-parse/lib/sync$default" :as csv-parse]
            ["fs" :as fs]
            ["path" :as path]
            ["shelljs$default" :as sh]
            ["term-size$default" :as term-size]
            ["zx$default" :as zx]
            ["zx$fs" :as zxfs]
            [nbb.core :refer [*file*]]))

(prn (path/resolve "."))

(prn (term-size))

(println (count (str (fs/readFileSync *file*))))

(prn (sh/ls "."))

(prn (csv-parse "foo,bar"))

(prn (zxfs/existsSync *file*))

(zx/$ #js ["ls"])

Call the script:

$ nbb script.cljs
"/private/tmp/test-script"
#js {:columns 216, :rows 47}
510
#js ["node_modules" "package-lock.json" "package.json" "script.cljs"]
#js [#js ["foo" "bar"]]
true
$ ls
node_modules
package-lock.json
package.json
script.cljs

Macros

Nbb has first class support for macros: you can define them right inside your .cljs file, like you are used to from JVM Clojure. Consider the plet macro to make working with promises more palatable:

(defmacro plet
  [bindings & body]
  (let [binding-pairs (reverse (partition 2 bindings))
        body (cons 'do body)]
    (reduce (fn [body [sym expr]]
              (let [expr (list '.resolve 'js/Promise expr)]
                (list '.then expr (list 'clojure.core/fn (vector sym)
                                        body))))
            body
            binding-pairs)))

Using this macro, we can make async code look more like sync code. Consider this puppeteer example:

(-> (.launch puppeteer)
      (.then (fn [browser]
               (-> (.newPage browser)
                   (.then (fn [page]
                            (-> (.goto page "https://clojure.org")
                                (.then #(.screenshot page #js{:path "screenshot.png"}))
                                (.catch #(js/console.log %))
                                (.then #(.close browser)))))))))

Using plet this becomes:

(plet [browser (.launch puppeteer)
       page (.newPage browser)
       _ (.goto page "https://clojure.org")
       _ (-> (.screenshot page #js{:path "screenshot.png"})
             (.catch #(js/console.log %)))]
      (.close browser))

See the puppeteer example for the full code.

Since v0.0.36, nbb includes promesa which is a library to deal with promises. The above plet macro is similar to promesa.core/let.

Startup time

$ time nbb -e '(+ 1 2 3)'
6
nbb -e '(+ 1 2 3)'   0.17s  user 0.02s system 109% cpu 0.168 total

The baseline startup time for a script is about 170ms on my laptop. Invoking via npx adds another 300ms or so; for faster startup, either use a globally installed nbb or use $(npm bin)/nbb script.cljs to bypass npx.

Dependencies

NPM dependencies

Nbb does not depend on any NPM dependencies. All NPM libraries loaded by a script are resolved relative to that script. When using the Reagent module, React is resolved in the same way as any other NPM library.

Classpath

To load .cljs files from local paths or dependencies, you can use the --classpath argument. The current dir is added to the classpath automatically. So if there is a file foo/bar.cljs relative to your current dir, then you can load it via (:require [foo.bar :as fb]). Note that nbb uses the same naming conventions for namespaces and directories as other Clojure tools: foo-bar in the namespace name becomes foo_bar in the directory name.

To load dependencies from the Clojure ecosystem, you can use the Clojure CLI or babashka to download them and produce a classpath:

$ classpath="$(clojure -A:nbb -Spath -Sdeps '{:aliases {:nbb {:replace-deps {com.github.seancorfield/honeysql {:git/tag "v2.0.0-rc5" :git/sha "01c3a55"}}}}}')"

and then feed it to the --classpath argument:

$ nbb --classpath "$classpath" -e "(require '[honey.sql :as sql]) (sql/format {:select :foo :from :bar :where [:= :baz 2]})"
["SELECT foo FROM bar WHERE baz = ?" 2]

Currently nbb only reads from directories, not jar files, so you are encouraged to use git libs. Support for .jar files will be added later.

Current file

The name of the file that is currently being executed is available via nbb.core/*file* or on the metadata of vars:

(ns foo
  (:require [nbb.core :refer [*file*]]))

(prn *file*) ;; "/private/tmp/foo.cljs"

(defn f [])
(prn (:file (meta #'f))) ;; "/private/tmp/foo.cljs"

Reagent

Nbb includes reagent.core which will be lazily loaded when required. You can use this together with ink to create a TUI application:

$ npm install ink

ink-demo.cljs:

(ns ink-demo
  (:require ["ink" :refer [render Text]]
            [reagent.core :as r]))

(defonce state (r/atom 0))

(doseq [n (range 1 11)]
  (js/setTimeout #(swap! state inc) (* n 500)))

(defn hello []
  [:> Text {:color "green"} "Hello, world! " @state])

(render (r/as-element [hello]))

Promesa

Working with callbacks and promises can become tedious. Since nbb v0.0.36 the promesa.core namespace is included with the let and do! macros. An example:

(ns prom
  (:require [promesa.core :as p]))

(defn sleep [ms]
  (js/Promise.
   (fn [resolve _]
     (js/setTimeout resolve ms))))

(defn do-stuff
  []
  (p/do!
   (println "Doing stuff which takes a while")
   (sleep 1000)
   1))

(p/let [a (do-stuff)
        b (inc a)
        c (do-stuff)
        d (+ b c)]
  (prn d))

$ nbb prom.cljs
Doing stuff which takes a while
Doing stuff which takes a while
3

Also see API docs.

Js-interop

Since nbb v0.0.75 applied-science/js-interop is available:

(ns example
  (:require [applied-science.js-interop :as j]))

(def o (j/lit {:a 1 :b 2 :c {:d 1}}))

(prn (j/select-keys o [:a :b])) ;; #js {:a 1, :b 2}
(prn (j/get-in o [:c :d])) ;; 1

Most of this library is supported in nbb, except the following:

  • destructuring using :syms
  • property access using .-x notation. In nbb, you must use keywords.

See the example of what is currently supported.

Examples

See the examples directory for small examples.

Also check out these projects built with nbb:

API

See API documentation.

Migrating to shadow-cljs

See this gist on how to convert an nbb script or project to shadow-cljs.

Build

Prerequisites:

  • babashka >= 0.4.0
  • Clojure CLI >= 1.10.3.933
  • Node.js 16.5.0 (lower version may work, but this is the one I used to build)

To build:

  • Clone and cd into this repo
  • bb release

Run bb tasks for more project-related tasks.

Download Details:
Author: borkdude
Download Link: Download The Source Code
Official Website: https://github.com/borkdude/nbb 
License: EPL-1.0

#node #javascript

Gerhard Brink

Getting Started With Data Lakes

Frameworks for Efficient Enterprise Analytics

The opportunities big data offers also come with very real challenges that many organizations are facing today. Often, it’s finding the most cost-effective, scalable way to store and process boundless volumes of data in multiple formats that come from a growing number of sources. Then organizations need the analytical capabilities and flexibility to turn this data into insights that can meet their specific business objectives.

This Refcard dives into how a data lake helps tackle these challenges at both ends — from its enhanced architecture that’s designed for efficient data ingestion, storage, and management to its advanced analytics functionality and performance flexibility. You’ll also explore key benefits and common use cases.

Introduction

As technology continues to evolve with new data sources, such as IoT sensors and social media churning out large volumes of data, there has never been a better time to discuss the possibilities and challenges of managing such data for varying analytical insights. In this Refcard, we dig deep into how data lakes solve the problem of storing and processing enormous amounts of data. While doing so, we also explore the benefits of data lakes, their use cases, and how they differ from data warehouses (DWHs).


This is a preview of the Getting Started With Data Lakes Refcard. To read the entire Refcard, please download the PDF from the link above.

#big data #data analytics #data analysis #business analytics #data warehouse #data storage #data lake #data lake architecture #data lake governance #data lake management

Eva Murphy

Google analytics Setup with Next JS, React JS using Router Events - 14

In this video, we are going to implement Google Analytics in our Next JS application. Tracking page views of an application is very important.

Google Analytics will allow us to track analytics information.

Frontend: https://github.com/amitavroy/video-reviews
API: https://github.com/amitavdevzone/video-review-api
App link: https://video-reviews.vercel.app

You can find me on:
Twitter: https://twitter.com/amitavroy7​
Discord: https://discord.gg/Em4nuvQk

#next js #js #react js #react #next #google analytics

Data Lake and Data Mesh Use Cases

As data mesh advocates come to suggest that the data mesh should replace the monolithic, centralized data lake, I wanted to check in with Dipti Borkar, co-founder and Chief Product Officer at Ahana. Dipti has been a tremendous resource for me over the years as she has held leadership positions at Couchbase, Kinetica, and Alluxio.

Definitions

  • A data lake is a concept consisting of a collection of storage instances of various data assets. These assets are stored in a near-exact, or even exact, copy of the source format, in addition to the originating data stores.
  • A data mesh is a type of data platform architecture that embraces the ubiquity of data in the enterprise by leveraging a domain-oriented, self-serve design. Mesh is an abstraction layer that sits atop data sources and provides access.

According to Dipti, while data lakes and data mesh both have use cases they work well for, data mesh can’t replace the data lake unless all data sources are created equal — and for many, that’s not the case.

Data Sources

All data sources are not equal. There are different dimensions of data:

  • Amount of data being stored
  • Importance of the data
  • Type of data
  • Type of analysis to be supported
  • Longevity of the data being stored
  • Cost of managing and processing the data

Each data source has its purpose. Some are built for fast access for small amounts of data, some are meant for real transactions, some are meant for data that applications need, and some are meant for getting insights on large amounts of data.

AWS S3

Things changed when AWS commoditized the storage layer with the AWS S3 object-store 15 years ago. Given the ubiquity and affordability of S3 and other cloud storage, companies are moving most of this data to cloud object stores and building data lakes, where it can be analyzed in many different ways.

Because of the low cost, enterprises can store all of their data — enterprise, third-party, IoT, and streaming — in an S3 data lake. However, the data cannot be processed there. You need engines on top like Hive, Presto, and Spark to process it. Hadoop tried to do this with limited success. Presto and Spark have solved the SQL-on-S3 query problem.

#big data #big data analytics #data lake #data lake and data mesh #data lake #data mesh