Callum Slater

Going Serverless With Cloudflare Workers

Originally published by Leonardo Losoviz at https://www.smashingmagazine.com

At its core, serverless is a strategy for a website’s architecture, based on deploying static files (the good old HTML, CSS and image files) on cloud-based hosting, and augmenting the website’s capabilities by accessing cloud-based, charge-per-use dynamic functionality. There is nothing mystical or mysterious about serverless: its end result is simply a website or application.

In spite of its name, “serverless” doesn’t mean “without a server”. It simply means “without my own server”. This means that my site is still hosted on some server, but by offloading this responsibility to the cloud services provider, I can devote all my energies to developing my own product (the website) and not have to worry about the infrastructure.

Serverless is very appealing for several reasons:

  • Low-cost

You only pay for what you use. Hosting static files on the cloud can cost just a few cents a month (or even be free in some cases).

  • Fast

Static files can be delivered to the user from a Content Delivery Network (CDN) located near the user.

  • Secure

The cloud provider constantly keeps the underlying platform up-to-date.

  • Easy to scale

The cloud provider’s business is to scale up the infrastructure on demand.

Serverless is also becoming increasingly popular due to the growing availability of services offered by cloud providers, simple-yet-powerful template-based static site generators (such as Jekyll, Hugo or Gatsby) and convenient ways to feed data into the process (such as through one of the many Git-based CMSs).

The Network Is The Computer: Introducing Cloudflare Workers

Cloudflare, one of the world’s largest cloud network platforms, is well versed in providing the benefits we are after through serverless: for some time now they have made their extensive CDN available to make our sites fast, offered DDoS protection to make our sites secure, and made their 1.1.1.1 DNS service free so we could afford having privacy on the Internet, among many other services.

Their new serverless offering, Cloudflare Workers (or simply “Workers”), runs on the same global cloud network of over 165 data centers that powers those services. Cloudflare Workers is a service that provides a lightweight JavaScript execution environment to augment existing applications or create new ones.

Being stacked on top of Cloudflare’s widespread network makes Cloudflare Workers a big deal. Cloudflare can scale up its infrastructure based on spikes in demand, serving a serverless application from locations on five continents and supporting millions of users, making our applications fast, reliable, and scalable.

The Cloudflare network is powered by 165 data centers around the world.

On top of that, Cloudflare Workers provides unique features that make it an even more compelling service. Let’s explore these in detail.

Architecture Based On V8 For Fast Access And Low Cost

The Cloudflare engineers went out of their way to architect Workers, as they proudly explain in depth. Whereas almost every provider offering cloud computing has an architecture based on containers and virtual machines, Workers uses “Isolates”, the technology that allows V8 (Google Chrome’s JavaScript engine) to run thousands of processes on a single server in an efficient and secure manner.

Compared to virtual machines, Isolates greatly reduce the overhead required to execute user code, which translates into faster execution and lower use of memory.

Isolates allow thousands of processes to run efficiently on a single machine.

Cloudflare Workers is not the first serverless cloud computing platform in operation: for instance, Amazon has offered AWS Lambda and Lambda@Edge. However, as a consequence of the lower overhead produced by Isolates, Cloudflare claims that when executing a similar task, Workers beats the competition where it matters most: speed and money.

Lower Price

While a Worker offering 50 milliseconds of CPU costs $0.50 per million requests, the equivalent Lambda costs $1.84 per million. Hence, running Workers ends up being roughly 3.7x cheaper than Lambda per CPU-cycle ($1.84 / $0.50 ≈ 3.7).

Faster Access

The Cloudflare team ran tests comparing Workers against AWS Lambda and Lambda@Edge, and came to the conclusion that Workers is 441% faster than a Lambda function and 192% faster than Lambda@Edge.

This chart shows what percentage of requests to Lambda, Lambda@Edge, and Cloudflare Workers were faster than a given number of milliseconds.

The better performance achieved by Cloudflare Workers is confirmed by the third-party site serverless-benchmark.com, which measures the performance of serverless providers and provides continuously updated statistics.

Statistics for Overhead (the time from request to response without the actual time the function took) and Cold start (the latency it takes a function to respond to the event) for Cloudflare Workers and its competitors.

Coded In JavaScript, Modeled On The Service Workers API

Because it is based on V8, programming for Workers is done in those languages supported by V8: JavaScript and languages that support compilation to WebAssembly, such as Go and Rust. V8’s code is merged into Workers at least once a week, so we can always expect it to support the latest implemented flavor of ECMAScript.

Workers are modeled on the Service Workers available in modern web browsers, and they use the same API whenever possible. This is significant: Because Service Workers are part of the foundation to create a Progressive Web App (PWA), creating Workers is done through an API that developers are already familiar with (or may be in the process of learning) for creating modern web applications.

In addition, relying on the Service Workers API allows developers to boost their productivity since it allows isomorphism of code, i.e. the same code that powers the Service Worker can be used for a Cloudflare Worker. Even though this is not always feasible because of the different contexts (while a Service Worker runs in the browser, the Cloudflare Worker runs in the network), certain use cases could apply to both contexts.

For instance, among the Service Workers recipes described in serviceworke.rs, recipes for API Analytics, Load Balancer, and Dependency Injection can be implemented on both the client side and the network using the same code (or most of it). And even when the functionality makes sense only on either the client-side or the network, it can be partly implemented using chunks of code that are context-agnostic and can be conveniently reused.
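
As a minimal sketch of such context-agnostic code (a hypothetical handler, not one of the recipes above), the following relies only on the fetch event, Request, and Response APIs that both environments share, so it can be registered unchanged in either:

// Context-agnostic logic: uses only APIs shared by Service Workers
// and Cloudflare Workers.
async function handleRequest(request) {
  const response = await fetch(request)
  // ...inspect or modify the response here (headers, analytics, etc.)...
  return response
}

// Registration is identical in the browser and on Cloudflare's network.
addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})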

Furthermore, using the same API for Service Workers and Cloudflare Workers makes it easy to provide progressive enhancement. An application can execute a Service Worker whenever possible, and fall back on a Cloudflare Worker when the user visits the site for the first time (when the Service Worker is not yet installed on the client), or when Service Workers are not supported (for instance, when the browser is old, or it is just Opera Mini).
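
On the client side, such progressive enhancement could look like the sketch below (the /sw.js path is a placeholder): browsers that support Service Workers register one, while first-time visits and unsupported browsers simply let requests flow through to the network, where the Cloudflare Worker applies the same logic.

// Register the Service Worker where supported; otherwise requests
// fall through to the network, where the Cloudflare Worker runs.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js')
    .catch(err => console.log('Service Worker registration failed:', err))
}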

Finally, relying on a unique API simplifies the overall language stack, once again making it easier for the developer to get more work done. For instance, defining a caching policy for the CDN in Varnish is done through the Varnish Configuration Language, which has its own syntax. Cloudflare Workers, though, enables developers to code the same tasks through, you guessed it, the Service Workers API.

It Leverages The Modern Toolbox

In addition to not requiring developers to learn any new language or API, Workers follows modern conventions and provides integration with popular technologies, allowing us to use our current toolbox.

Let’s See Some Practical Examples

It’s time to have fun! Let’s play with some Workers based on common use cases to see how we can augment our sites or even create new ones.

Cloudflare makes available a browser-based testing tool, the Cloudflare Workers Playground. This tool is comprehensive and easy to use: simply paste the Worker script into the left-side panel, execute it against the URL defined in the top-right bar, see the results in the 'Preview' tab and the source code in the 'Testing' tab (from which we can also add custom headers), and call console.log inside the script to print output to the DevTools pane at the bottom right. To share (or also store) your script, you can simply copy your browser's URL at that point in time.

The Playground allows us to test-drive our Cloudflare Workers.

Starting with the Playground will take you far, but, at some point, you will want to test on the actual Cloudflare network and, even better, deploy your scripts for production. For this, your site must be set up with Cloudflare. If you already are a Cloudflare user, simply sign in, navigate to the ‘Workers’ tab on the dashboard, and you are ready to go.

If you are not a Cloudflare user, you can either sign up, or you can request a workers.dev subdomain, under which you will soon be able to deploy your Workers. The workers.dev site is currently accepting reservations of subdomains, so hurry up and reserve yours before it is taken by someone else!

workers.dev is currently accepting reservations of subdomains.

The recipes below have been taken from the Cloudflare Workers Recipe cookbook, from the examples repository on GitHub, and from the Cloudflare blog. Each example has a link to the script code in the Playground.

Static Site Hosting

The most straightforward use case for Workers is to create a new site, dynamically responding to requests without needing to connect to an origin server at all. So, hello world!

addEventListener('fetch', event => {
  event.respondWith(new Response('<html><body><p>Hello world!</p></body></html>'))
})

Instead of printing the HTML output in the script, we can also host static HTML files with some hosting service, and fetch these with a simple Worker script. Indeed, the Worker script can retrieve the content of any file available on the Internet: While the domain under which the Worker is executed must be handled by Cloudflare, the origin website from which the script fetches content does not have to. And this works not just for HTML pages, but also for CSS and JS assets, images, and everything else.

The script below, for instance, renders a page that is hosted under DigitalOcean Spaces:

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  const parsedUrl = new URL(request.url)
  let path = parsedUrl.pathname

  let lastSegment = path.substring(path.lastIndexOf('/'))
  if (lastSegment.indexOf('.') === -1) {
    path += '/index.html'
  }

  return fetch("https://cloudflare-example-space.nyc3.digitaloceanspaces.com" + path)
}

Building APIs

A prominent use case for Workers is creating APIs. For instance, the script below powers an API service that states if a domain redirects to HTTPS or not:

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

/**
 * Fetch a request and follow redirects
 * @param {Request} request
 */
async function handleRequest(request) {
  let headers = new Headers({
    'Content-Type': 'text/html',
    'Access-Control-Allow-Origin': '*'
  })
  const SECURE_RESPONSE = new Response('secure', {status: 200, headers: headers})
  const INSECURE_RESPONSE = new Response('not secure', {status: 200, headers: headers})
  const NO_SUCH_SITE = new Response('website not found', {status: 200, headers: headers})

  let domain = new URL(request.url).searchParams.get('domain')
  if (domain === null) {
    return new Response('Please pass in domain via query string', {status: 404})
  }
  try {
    let resp = await fetch(`http://${domain}`, {headers: {'User-Agent': request.headers.get('User-Agent')}})
    if (resp.redirected == true && resp.url.startsWith('https')) {
      return SECURE_RESPONSE
    }
    else if (resp.redirected == false && resp.status == 502) {
      return NO_SUCH_SITE
    }
    else {
      return INSECURE_RESPONSE
    }
  }
  catch (e) {
    return new Response(`Something went wrong ${e}`, {status: 404})
  }
}

Workers can also connect to several origins in parallel and combine all the responses into a single response. For instance, the script below powers an API service that simultaneously retrieves the price for several cryptocurrency coins:

addEventListener('fetch', event => {
  event.respondWith(fetchAndApply(event.request))
})

/**
 * Make multiple requests,
 * aggregate the responses and
 * send it back as a single response
 */
async function fetchAndApply(request) {
  const init = {
    method: 'GET',
    headers: {'Authorization': 'XXXXXX'}
  }
  const [btcResp, ethResp, ltcResp] = await Promise.all([
    fetch('https://api.coinbase.com/v2/prices/BTC-USD/spot', init),
    fetch('https://api.coinbase.com/v2/prices/ETH-USD/spot', init),
    fetch('https://api.coinbase.com/v2/prices/LTC-USD/spot', init)
  ])

  const btc = await btcResp.json()
  const eth = await ethResp.json()
  const ltc = await ltcResp.json()

  let combined = {}
  combined['btc'] = btc['data'].amount
  combined['ltc'] = ltc['data'].amount
  combined['eth'] = eth['data'].amount

  const responseInit = {
    headers: {'Content-Type': 'application/json'}
  }
  return new Response(JSON.stringify(combined), responseInit)
}

Making the API highly dynamic by retrieving data from a database is covered too! Workers KV is a global, low-latency, key-value data store. It is optimized for quick and frequent reads, while data should be written sparingly. Hence, a sensible approach is to input data through the Cloudflare API:

curl "https://api.cloudflare.com/client/v4/accounts/$ACCOUNT_ID/storage/kv/namespaces/$NAMESPACE_ID/values/first-key" \
  -X PUT \
  -H "X-Auth-Email: $CLOUDFLARE_EMAIL" \
  -H "X-Auth-Key: $CLOUDFLARE_AUTH_KEY" \
  --data 'My first value!'

And then the values can be read from within the Worker script:

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  const value = await FIRST_KV_NAMESPACE.get("first-key")
  if (value === null)
    return new Response("Value not found", {status: 404})

  return new Response(value)
}
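
Besides the REST API, values can also be written from within a Worker itself. Below is a minimal sketch, assuming the namespace is bound to the script as FIRST_KV_NAMESPACE (keeping in mind that KV favors frequent reads and sparing writes, and that writes propagate with eventual consistency):

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  // Store (or overwrite) a value in the bound KV namespace.
  await FIRST_KV_NAMESPACE.put('first-key', 'My first value!')
  return new Response('Value stored')
}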

At the time of writing, KV is still in beta and released only to beta testers. If you are interested in testing it out, you can reach out to the Cloudflare team and request access.

Geo-Targeting

Cloudflare detects the origin IP of the incoming request and appends a two-letter country code to header ‘Cf-Ipcountry’. The script below reads this header, obtains the country code, and then redirects to the corresponding site version if it exists:

addEventListener('fetch', event => {
  event.respondWith(fetchAndApply(event.request))
})

async function fetchAndApply(request) {

  const country = request.headers.get('Cf-Ipcountry').toLowerCase()
  let url = new URL(request.url)

  const target_url = 'https://' + url.hostname + '/' + country
  const target_url_response = await fetch(target_url)

  if (target_url_response.status === 200) {
    return new Response('', {
      status: 302,
      headers: {
        'Location': target_url
      }
    })
  } else {
    return fetch(request)
  }
}
}

A similar approach can be applied to implement load balancing, choosing from among multiple origins to improve speed or reliability.
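
As a rough sketch of that idea (the origin hostnames below are placeholders, not real endpoints), a Worker can pick one of several origins at random and proxy the request to it:

addEventListener('fetch', event => {
  event.respondWith(fetchAndApply(event.request))
})

async function fetchAndApply(request) {
  // Hypothetical origin pool; replace with your own hosts.
  const origins = ['origin-1.example.com', 'origin-2.example.com']
  const origin = origins[Math.floor(Math.random() * origins.length)]

  // Keep the path, query string, method, and headers; swap only the hostname.
  const url = new URL(request.url)
  url.hostname = origin

  return fetch(new Request(url, request))
}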

Enhanced Security

The scripts below add security rules and filters to block unwanted visitors and bots.

Ignore the POST and PUT HTTP requests:

addEventListener('fetch', event => {
  event.respondWith(fetchAndApply(event.request))
})

async function fetchAndApply(request) {
  if (request.method === 'POST' || request.method === 'PUT') {
    return new Response('Sorry, this page is not available.',
        { status: 403, statusText: 'Forbidden' })
  }

  return fetch(request)
}
}

Deny a spider or crawler:

addEventListener('fetch', event => {
  event.respondWith(fetchAndApply(event.request))
})

async function fetchAndApply(request) {
  if (request.headers.get('user-agent').includes('annoying_robot')) {
    return new Response('Sorry, this page is not available.',
        { status: 403, statusText: 'Forbidden' })
  }

  return fetch(request)
}
}

Prevent a specific IP from connecting:

addEventListener('fetch', event => {
  event.respondWith(fetchAndApply(event.request))
})

async function fetchAndApply(request) {
  if (request.headers.get('cf-connecting-ip') === '225.0.0.1') {
    return new Response('Sorry, this page is not available.',
        { status: 403, statusText: 'Forbidden' })
  }

  return fetch(request)
}
}

A/B Testing

We can easily create a Worker to control A/B tests:

addEventListener('fetch', event => {
  event.respondWith(fetchAndApply(event.request))
})

async function fetchAndApply(request) {
  const name = 'experiment-0'
  let group          // 'control' or 'test', set below
  let isNew = false  // is the group newly-assigned?

  // Determine which group this request is in.
  const cookie = request.headers.get('Cookie')
  if (cookie && cookie.includes(`${name}=control`)) {
    group = 'control'
  } else if (cookie && cookie.includes(`${name}=test`)) {
    group = 'test'
  } else {
    // 50/50 Split
    group = Math.random() < 0.5 ? 'control' : 'test'
    isNew = true
  }

  // We'll prefix the request path with the experiment name. This way,
  // the origin server merely has to have two copies of the site under
  // top-level directories named "control" and "test".
  let url = new URL(request.url)
  // Note that url.pathname always begins with a /, so we don't
  // need to explicitly add one after ${group}.
  url.pathname = `/${group}${url.pathname}`

  request = new Request(url, request)

  let response = await fetch(request)

  if (isNew) {
    // The experiment was newly-assigned, so add a Set-Cookie header
    // to the response. We need to re-construct the response to make
    // the headers mutable.
    response = new Response(response.body, response)
    response.headers.append('Set-Cookie', `${name}=${group}; path=/`)
  }

  return response
}

Serving Device-Based Content

The script below delivers different content based on the device being used:

addEventListener('fetch', event => {
  event.respondWith(fetchAndApply(event.request))
})

async function fetchAndApply(request) {
  let uaSuffix = ''

  const ua = request.headers.get('user-agent')
  if (ua.match(/iphone/i) || ua.match(/ipod/i)) {
    uaSuffix = '/mobile'
  } else if (ua.match(/ipad/i)) {
    uaSuffix = '/tablet'
  }

  return fetch(request.url + uaSuffix, request)
}

Conditional Routing

By passing custom values through headers, we can fetch the most specific content:

addEventListener('fetch', event => {
  event.respondWith(fetchAndApply(event.request))
})

async function fetchAndApply(request) {
  let suffix = ''
  // Assuming that the client is sending a custom header
  const cryptoCurrency = request.headers.get('X-Crypto-Currency')
  if (cryptoCurrency === 'BTC') {
    suffix = '/btc'
  } else if (cryptoCurrency === 'XRP') {
    suffix = '/xrp'
  } else if (cryptoCurrency === 'ETH') {
    suffix = '/eth'
  }

  return fetch(request.url + suffix, request)
}

Enhanced Performance

Workers makes available a Cache API through which we can save the results of computationally intensive operations and have them ready for immediate use from then on:

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event))
})

async function handleRequest(event) {
  let cache = caches.default
  let response = await cache.match(event.request)

  if (!response) {
    // doSuperComputationallyHeavyThing() stands in for whatever
    // expensive operation produces the Response being cached.
    response = doSuperComputationallyHeavyThing()
    event.waitUntil(cache.put(event.request, response.clone()))
  }

  return response
}

For instance, through the Cache API we can cache responses to GraphQL requests whose results have not changed:

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event))
})

async function handleRequest(event) {
  let cache = caches.default
  let response = await cache.match(event.request)

  if (!response) {
    response = await fetch(event.request)
    if (response.ok) {
      event.waitUntil(cache.put(event.request, response.clone()))
    }
  }

  return response
}

Many Others

The list of useful applications goes on and on; many more examples can be found in the Cloudflare Workers recipe cookbook and the examples repository mentioned above.

Wrapping Up: “The Network Is The Computer”

Because speed matters, websites are going serverless. Cloudflare Workers is a new offering that enables this transition. It blurs the boundaries between the computer and the network, enabling developers to deploy apps globally that run on the fabric of the Internet itself, leveraging Cloudflare’s worldwide network of servers to run our code near where our users are located. It is fast, cheap, and secure, and it scales as much as we need it to.


Further reading about Serverless

Introduction to Serverless

Create and Deploy AWS and AWS Lambda using Serverless Framework

Running TensorFlow on AWS Lambda using Serverless

Easily Deploy a Serverless Node App with ZEIT Now

What Is Serverless Computing?

Building a Full-Stack Serverless App with Cloudflare Workers

Serverless PHP on App Engine and Cloud Firestore with Firevel (serverless Laravel framework)

#serverless #microservices #aws #web-service #cloud

Fannie Zemlak

What's New in Go 1.15

The Go team announced Go 1.15 on 11 August 2020. Highlighted updates and features include substantial improvements to the Go linker, improved allocation for small objects at high core counts, X.509 CommonName deprecation, GOPROXY support for skipping proxies that return errors, a new embedded tzdata package, several core library improvements, and more.

As Go promises backward compatibility, after upgrading to the latest Go 1.15 version, almost all existing Golang applications or programs continue to compile and run as they did on older Golang versions.

#go #golang #go 1.15 #go features #go improvement #go package #go new features

Hermann Frami

Serverless Plugin for Microservice Code Management and Deployment

Serverless M

Serverless M (or Serverless Modular) is a plugin for the Serverless framework. This plugin helps you manage multiple serverless projects with a single serverless.yml file, and gives you super-charged CLI options that you can use to create new features, build them in a single file, and deploy them all in parallel.


Currently this plugin is tested for the below stack only

  • AWS
  • NodeJS λ
  • Rest API (You can use other events as well)

Prerequisites

Make sure you have the serverless CLI installed

# Install serverless globally
$ npm install serverless -g

Getting Started

To start the serverless modular project locally, you can either start with the ES5 or ES6 templates, or add it as a plugin.

ES6 Template install

# Step 1. Download the template
$ sls create --template-url https://github.com/aa2kb/serverless-modular/tree/master/template/modular-es6 --path myModularService

# Step 2. Change directory
$ cd myModularService

# Step 3. Create a package.json file
$ npm init

# Step 4. Install dependencies
$ npm i serverless-modular serverless-webpack webpack --save-dev

ES5 Template install

# Step 1. Download the template
$ sls create --template-url https://github.com/aa2kb/serverless-modular/tree/master/template/modular-es5 --path myModularService

# Step 2. Change directory
$ cd myModularService

# Step 3. Create a package.json file
$ npm init

# Step 4. Install dependencies
$ npm i serverless-modular --save-dev

If you don't want to use the templates above, you can just add it to your existing project.

Adding it as plugin

plugins:
  - serverless-modular

Now you are all set to start building your serverless modular functions.

API Reference

The serverless CLI can be accessed by

# Serverless Modular CLI
$ serverless modular

# shorthand
$ sls m

Serverless Modular CLI is based on 5 main commands

  • sls m init
  • sls m feature
  • sls m function
  • sls m build
  • sls m deploy

init command

sls m init

The serverless init command helps in creating a basic .gitignore that is useful for serverless modular.

The basic .gitignore for serverless modular looks like this

#node_modules
node_modules

#sm main functions
sm.functions.yml

#serverless file generated by build
src/**/serverless.yml

#main serverless directories generated for sls deploy
.serverless

#feature serverless directories generated sls deploy
src/**/.serverless

#serverless logs file generated for main sls deploy
.sm.log

#serverless logs file generated for feature sls deploy
src/**/.sm.log

#Webpack config copied in each feature
src/**/webpack.config.js

feature command

The feature command helps in building new features for your project

options (feature Command)

This command comes with three options

--name: Specify the name you want for your feature

--remove: set value to true if you want to remove the feature

--basePath: Specify the basepath you want for your feature, this base path should be unique for all features. helps in running offline with offline plugin and for API Gateway

| options    | shortcut | required | values      | default value |
|------------|----------|----------|-------------|---------------|
| --name     | -n       |          | string      | N/A           |
| --remove   | -r       |          | true, false | false         |
| --basePath | -p       |          | string      | same as name  |

Examples (feature Command)

Creating a basic feature

# Creating a jedi feature
$ sls m feature -n jedi

Creating a feature with different base path

# A feature with different base path
$ sls m feature -n jedi -p tatooine

Deleting a feature

# Anakin is going to delete the jedi feature
$ sls m feature -n jedi -r true

function command

The function command helps in adding new function to a feature

options (function Command)

This command comes with four options

--name: Specify the name you want for your function

--feature: Specify the name of the existing feature

--path: Specify the path for HTTP endpoint helps in running offline with offline plugin and for API Gateway

--method: Specify the path for HTTP method helps in running offline with offline plugin and for API Gateway

| options   | shortcut | required | values | default value |
|-----------|----------|----------|--------|---------------|
| --name    | -n       |          | string | N/A           |
| --feature | -f       |          | string | N/A           |
| --path    | -p       |          | string | same as name  |
| --method  | -m       |          | string | 'GET'         |

Examples (function Command)

Creating a basic function

# Creating a cloak function for jedi feature
$ sls m function -n cloak -f jedi

Creating a basic function with different path and method

# Creating a cloak function for jedi feature with custom path and HTTP method
$ sls m function -n cloak -f jedi -p powers -m POST

build command

The build command helps in building the project for local or global scope

options (build Command)

This command comes with two options

--scope: Specify the scope of the build, use this with "--feature" tag

--feature: Specify the name of the existing feature you want to build

| options   | shortcut | required | values | default value |
|-----------|----------|----------|--------|---------------|
| --scope   | -s       |          | string | local         |
| --feature | -f       |          | string | N/A           |

Saving build Config in serverless.yml

You can also save config in serverless.yml file

custom:
  smConfig:
    build:
      scope: local

Examples (build Command)

all feature build (local scope)

# Building all local features
$ sls m build

Single feature build (local scope)

# Building a single feature
$ sls m build -f jedi -s local

All features build global scope

# Building all features with global scope
$ sls m build -s global

deploy command

The deploy command helps in deploying serverless projects to AWS (it uses sls deploy command)

options (deploy Command)

This command comes with four options

--sm-parallel: Specify if you want to deploy parallel (will only run in parallel when doing multiple deployments)

--sm-scope: Specify if you want to deploy local features or global

--sm-features: Specify the local features you want to deploy (comma separated if multiple)

| options           | shortcut | required | values        | default value |
|-------------------|----------|----------|---------------|---------------|
| --sm-parallel     |          |          | true, false   | true          |
| --sm-scope        |          |          | local, global | local         |
| --sm-features     |          |          | string        | N/A           |
| --sm-ignore-build |          |          | string        | false         |

Saving deploy Config in serverless.yml

You can also save config in serverless.yml file

custom:
  smConfig:
    deploy:
      scope: local
      parallel: true
      ignoreBuild: true

Examples (deploy Command)

Deploy all features locally

# deploy all local features
$ sls m deploy

Deploy all features globally

# deploy all global features
$ sls m deploy --sm-scope global

Deploy single feature

# deploy a single feature
$ sls m deploy --sm-features jedi

Deploy Multiple features

# deploy multiple features
$ sls m deploy --sm-features jedi,sith,dark_side

Deploy Multiple features in sequence

# deploy multiple features in sequence
$ sls m deploy  --sm-features jedi,sith,dark_side --sm-parallel false

Author: aa2kb
Source Code: https://github.com/aa2kb/serverless-modular 
License: MIT license

#serverless #aws #node #lambda 

Ida Nader

Eliminating cold starts with Cloudflare Workers

A “cold start” is the time it takes to load and execute a new copy of a serverless function for the first time. It’s a problem that’s both complicated to solve and costly to fix. Other serverless platforms make you choose between suffering from random increases in execution time, or paying your way out with synthetic requests to keep your function warm. Cold starts are a horrible experience, especially when serverless containers can take full seconds to warm up.

Unlike containers, Cloudflare Workers utilize isolate technology, which measure cold starts in single-digit milliseconds. Well, at least they did. Today, we’re removing the need to worry about cold starts entirely, by introducing support for Workers that have no cold starts at all – that’s right, zero. Forget about cold starts, warm starts, or… any starts, with Cloudflare Workers you get always-hot, raw performance in more than 200 cities worldwide.

Why is there a cold start problem?

It’s impractical to keep everyone’s functions warm in memory all the time. Instead, serverless providers only warm up a function after the first request is received. Then, after a period of inactivity, the function becomes cold again and the cycle continues.

For Workers, this has never been much of a problem. In contrast to containers that can spend full seconds spinning up a new containerized process for each function, the isolate technology behind Workers allows it to warm up a function in under 5 milliseconds.

Learn more about how isolates enable Cloudflare Workers to be performant and secure here.

Cold starts are ugly. They’re unexpected, unavoidable, and cause unpredictable code execution times. You shouldn’t have to compromise your customers’ experience to enjoy the benefits of serverless. In a collaborative effort between our Workers and Protocols teams, we set out to create a solution where you never have to worry about cold starts, warm starts, or pre-warming ever again.

How is a zero cold start even possible?

Like many features at Cloudflare, security and encryption make our network more intelligent. Since 95% of Worker requests are securely handled over HTTPS, we engineered a solution that uses the Internet’s encryption protocols to our advantage.

Before a client can send an HTTPS request, it needs to establish a secure channel with the server. This process is known as “handshaking” in the TLS, or Transport Layer Security, protocol. Most clients also send a hostname (e.g. cloudflare.com) in that handshake, which is referred to as the SNI, or Server Name Indication. The server receives the handshake, sends back a certificate, and now the client is allowed to send its original request, encrypted.

Previously, Workers would only load and compile after the entire handshake process was complete, which involves two round-trips between the client and server. But wait, we thought, if the hostname is present in the handshake, why wait until the entire process is done to preload the Worker? Since the handshake takes some time, there is an opportunity to warm up resources during the waiting time before the request arrives.

With our newest optimization, when Cloudflare receives the first packet during TLS negotiation, the “ClientHello,” we hint the Workers runtime to eagerly load that hostname’s Worker. After the handshake is done, the Worker is warm and ready to receive requests. Since it only takes 5 milliseconds to load a Worker, and the average latency between a client and Cloudflare is more than that, the cold start is zero. The Worker starts executing code the moment the request is received from the client.
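
To put rough numbers on it (illustrative figures, not Cloudflare's measurements): if the round trip between the client and Cloudflare takes 20 milliseconds, the handshake hands the runtime at least one full round trip of lead time before the encrypted request arrives, roughly four times the ~5 milliseconds needed to load a Worker, so the load completes with time to spare.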

#serverless week #serverless #cloudflare workers #javascript #product news

Serverless Applications - Pros and Cons to Help Businesses Decide - Prismetric

In the past few years, especially after Amazon Web Services (AWS) introduced its Lambda platform, serverless architecture became the business realm’s buzzword. The increasing popularity of serverless applications saw market leaders like Netflix, Airbnb, Nike, etc., adopting the serverless architecture to handle their backend functions better. Moreover, serverless architecture’s market size is expected to reach a whopping $9.17 billion by the year 2023.

(Figure: Global serverless architecture market, 2019–2023)

Why use serverless computing?
As a business, it is best to approach a professional mobile app development company to build apps that are deployed on various servers; nevertheless, businesses should understand that the benefits of serverless applications lie in the ideal business implementations they make possible, not in the hype created by cloud vendors. With the serverless architecture, developers can easily run arbitrary code on demand without worrying about the underlying hardware.

But as is the case with all game-changing trends, many businesses opt for serverless applications just for the sake of being up-to-date with their peers without thinking about the actual need of their business.

Serverless applications work well with stateless use cases, those which execute cleanly and hand off to the next operation in a sequence. On the other hand, serverless architecture is not a fit for predictable applications where there is a lot of reading and writing in the backend system.

Another benefit of working with the serverless software architecture is that the third-party service provider will charge based on the total number of requests. As the number of requests increases, the charge is bound to increase, but then it will cost significantly less than a dedicated IT infrastructure.

Defining serverless software architecture
In serverless software architecture, the application logic is implemented in an environment where operating systems, servers, or virtual machines are not visible. The application logic still ultimately executes on an operating system running on physical servers, but the difference is that managing that infrastructure is the sole responsibility of the service provider, while the mobile app developer focuses only on writing code.

There are two different approaches when it comes to serverless applications:

Backend as a service (BaaS)
Function as a service (FaaS)

1. Backend as a service (BaaS)

The basic function of the growing number of third-party services is to provide server-side logic and maintain their internal state. This has led to applications that do not have server-side logic or any application-specific logic of their own; they depend on third-party services for everything.

Other examples of such third-party services are Auth0 and AWS Cognito (authentication as a service), Amazon Kinesis and Keen IO (analytics as a service), and many more.

2. Function as a Service (FaaS)

FaaS is the modern alternative to traditional architecture when the application still requires server-side logic. With Function as a Service, the developer can focus on implementing stateless functions that are triggered by events and can communicate efficiently with the external world.

FaaS serverless architecture is mostly used with microservices architecture. AWS Lambda, Google Cloud Functions, etc., are some examples of FaaS implementations.

Pros of Serverless applications
There are specific ways in which serverless applications can redefine the way business is done in the modern age, and they have some distinct advantages over traditional cloud platforms. Here are a few:

🔹 Highly Scalable
The flexible nature of the serverless architecture makes it ideal for scaling applications. The vendor runs each of the functions in a separate container, allowing them to be optimized automatically and effectively. Moreover, unlike in the traditional cloud, one doesn't need to purchase a fixed number of resources with serverless applications, and can be as flexible as possible.

🔹 Cost-Effective
As the organizations don’t need to spend hundreds and thousands of dollars on hardware, they don’t need to pay anything to the engineers to maintain the hardware. The serverless application’s pricing model is execution based as the organization is charged according to the executions they have made.

The company that uses serverless applications is allotted a specific amount of time, and the pricing of an execution depends on the memory required. The different types of costs associated with a physical or virtual server, such as presence detection, access authorization, and image processing, are completely eliminated with serverless applications.

🔹 Focuses on user experience
As the companies don’t always think about maintaining the servers, it allows them to focus on more productive things like developing and improving customer service features. A recent survey says that about 56% of the users are either using or planning to use the serverless applications in the coming six months.

Moreover, the money companies save with serverless apps, since they don't have to maintain any hardware, can then be utilized to enhance the level of customer service and the features of the apps.

🔹 Ease of migration
It is easy to get started with serverless applications by porting individual features and operate them as on-demand events. For example, in a CMS, a video plugin requires transcoding video for different formats and bitrates. If the organization wished to do this with a WordPress server, it might not be a good fit as it would require resources dedicated to serving pages rather than encoding the video.

Moreover, the benefits of serverless applications can be used optimally to handle metadata encoding and creation. Similarly, serverless apps can be used in other plugins that are often prone to critical vulnerabilities.

Cons of serverless applications
Despite having some clear benefits, serverless applications are not specific for every single use case. We have listed the top things that an organization should keep in mind while opting for serverless applications.

🔹 Complete dependence on third-party vendor
In the realm of serverless applications, the third-party vendor is the king, and the organizations have no options but to play according to their rules. For example, if an application is set in Lambda, it is not easy to port it into Azure. The same is the case for coding languages. In present times, only Python developers and Node.js developers have the luxury to choose between existing serverless options.

Therefore, if you are planning to consider serverless applications for your next project, make sure that your vendor has everything needed to complete the project.

🔹 Challenges in debugging with traditional tools
It isn’t easy to perform debugging, especially for large enterprise applications that include various individual functions. Serverless applications use traditional tools and thus provide no option to attach a debugger in the public cloud. The organization can either do the debugging process locally or use logging for the same purpose. In addition to this, the DevOps tools in the serverless application do not support the idea of quickly deploying small bits of codes into running applications.

#serverless-application #serverless #serverless-computing #serverless-architeture #serverless-application-prosand-cons

Eldora Bradtke

Cloudflare Workers Announces Broad Language Support

We initially launched Cloudflare Workers with support for JavaScript and languages that compile to WebAssembly, such as Rust, C, and C++. Since then, Cloudflare and the community have improved the usability of TypeScript on Workers. But we haven’t talked much about the many other popular languages that compile to JavaScript. Today, we’re excited to announce support for Python, Scala, Kotlin, Reason and Dart.

You can build applications on Cloudflare Workers using your favorite language starting today.

Getting Started

Getting started is as simple as installing Wrangler, then running generate with the template for your chosen language: Python, Scala, Kotlin, Dart, or Reason. For Python, this looks like:

wrangler generate my-python-project https://github.com/cloudflare/python-worker-hello-world

Follow the installation instructions in the README inside the generated project directory, then run wrangler publish. You can see the output of your Worker at your workers.dev subdomain, e.g. https://my-python-project.cody.workers.dev/. You can sign up for a free Workers account if you don’t have one yet.
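
Once published, you can check the deployed Worker from the command line (using the example subdomain above):

# Fetch the response of the deployed Worker
$ curl https://my-python-project.cody.workers.dev/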

That’s it. It is really easy to write in your favorite languages. But, this wouldn’t be a very compelling blog post if we left it at that. Now, I’ll shift the focus to how we added support for these languages and how you can add support for others.

How it all works under the hood

Language features are important. For instance, it’s hard to give up the safety and expressiveness of pattern matching once you’ve used it. Familiar syntax matters to us as programmers.

You may also have existing code in your preferred language that you’d like to reuse. Just keep in mind that the advantages of running on V8 come with the limitation that if you use libraries that depend on native code or language-specific VM features, they may not translate to JavaScript. WebAssembly may be an option in that case. But for memory-managed languages you’re usually better off compiling to JavaScript, at least until the story around garbage collection for WebAssembly stabilizes.

I’ll walk through how the Worker language templates are made using a representative example of a dynamically typed language, Python, and a statically typed language, Scala. If you want to follow along, you’ll need to have Wrangler installed and configured with your Workers account. If it’s your first time using Workers it’s a good idea to go through the quickstart.

#cloudflare workers #serverless #serverless week #javascript #python #wrangler #scala