Hunter Krajcik

First Look at Serverless Architecture: Why, What, and the How

Implement your first serverless app

I recently came across serverless architecture, and at first sight it looked like a one-stop solution for all my infrastructure problems. After spending more time reading about why it exists, though, I found that serverless architecture aims to solve a specific set of use cases, so I decided to write an article detailing my take on it.

#serverless 


Serverless Vs Microservices Architecture - A Deep Dive

Companies need to think long-term before even starting a software development project. These needs are addressed at the level of architecture: business owners want to ensure agility, scalability, and performance.

The top contenders for scalable solutions are serverless and microservices. Both architectures prioritize security but approach it in their own ways. Let’s take a look at how businesses can benefit from adopting serverless architecture versus microservices, and examine their differences, advantages, and use cases.

#serverless #microservices #architecture #software-architecture #serverless-architecture #microservice-architecture #serverless-vs-microservices #hackernoon-top-story

Ellie Windler

Why Serverless Architecture Is The Future Of Software Architecture?

Any business that wants to scale its applications cost-effectively turns to cloud computing. Even leading technology companies like Quora, Facebook, LinkedIn, Pinterest, and Spotify benefit from cloud computing infrastructure.

In this article, we are going to take a deep look at the concept of serverless: how it works and why it is useful for your business.

#serverless-architecture #what-is-serverless-architecture #serverless

Christa Stehr

Overcoming Common Serverless Challenges with Mainframe CICS Programs

By this point most enterprises, including those running on legacy infrastructures, are familiar with the benefits of serverless computing:

  • Greater scalability
  • Faster development
  • More efficient deployment
  • Lower cost

The benefits of agility and cost reduction are especially relevant in the current macroeconomic environment when customer behavior is changing, end-user needs are difficult to predict, and development teams are under pressure to do more with less.

So serverless is a no-brainer, right?

Not exactly. Serverless might be relatively painless for a new generation of cloud-native software companies that grew up in a world of APIs and microservices, but it creates headaches for the many organizations that still rely heavily on legacy infrastructure.

In particular, enterprises running mainframe CICS programs are likely to encounter frustrating stumbling blocks on the path to launching Functions as a Service (FaaS). This population includes global enterprises that depend on CICS applications to effectively manage high-volume transactional processing requirements – particularly in the banking, financial services, and insurance industries.

These organizations stand to achieve time and cost savings through a modern approach to managing legacy infrastructure, as opposed to launching serverless applications on a brittle foundation. Here are three of the biggest obstacles they face and how to overcome them.

Challenge #1: Middleware that introduces complexity, technical debt, and latency

Many organizations looking to integrate CICS applications into a microservices or serverless architecture rely on middleware (e.g., an ESB or SOA) to access data from the underlying applications. This strategy introduces significant runtime performance challenges and creates what one bank’s chief architect referred to as a “lasagna architecture,” making DevOps impossible.

#serverless-architecture #serverless-functions #serverless-benefits #mainframes #serverless-api #serverless-integration

Hermann Frami

Serverless Plugin for Microservice Code Management and Deployment

Serverless M

Serverless M (or Serverless Modular) is a plugin for the Serverless Framework. It helps you manage multiple serverless projects with a single serverless.yml file, and it gives you supercharged CLI options that you can use to create new features, build them in a single file, and deploy them all in parallel.


Currently this plugin is tested only with the following stack:

  • AWS
  • NodeJS λ
  • Rest API (You can use other events as well)

Prerequisites

Make sure you have the serverless CLI installed

# Install serverless globally
$ npm install serverless -g
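
You can verify the installation by checking the version:

# Verify the installation
$ serverless --version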

Getting Started

To start a serverless modular project locally, you can either start with the ES5 or ES6 template or add it as a plugin to an existing project.

ES6 Template install

# Step 1. Download the template
$ sls create --template-url https://github.com/aa2kb/serverless-modular/tree/master/template/modular-es6 --path myModularService

# Step 2. Change directory
$ cd myModularService

# Step 3. Create a package.json file
$ npm init

# Step 4. Install dependencies
$ npm i serverless-modular serverless-webpack webpack --save-dev

ES5 Template install

# Step 1. Download the template
$ sls create --template-url https://github.com/aa2kb/serverless-modular/tree/master/template/modular-es5 --path myModularService

# Step 2. Change directory
$ cd myModularService

# Step 3. Create a package.json file
$ npm init

# Step 4. Install dependencies
$ npm i serverless-modular --save-dev

If you don't want to use the templates above, you can just add the plugin to your existing project.

Adding it as plugin

plugins:
  - serverless-modular

Now you are all set to start building your serverless modular functions.
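
For context, a minimal serverless.yml using the plugin might look like this (the service name, runtime, and smConfig values here are assumptions for illustration; smConfig is explained in the build and deploy sections below):

# Hypothetical serverless.yml (illustrative only)
service: my-modular-service

provider:
  name: aws
  runtime: nodejs12.x

plugins:
  - serverless-modular

custom:
  smConfig:
    build:
      scope: local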

API Reference

The Serverless Modular CLI can be accessed by:

# Serverless Modular CLI
$ serverless modular

# shorthand
$ sls m

The Serverless Modular CLI is based on 5 main commands:

  • sls m init
  • sls m feature
  • sls m function
  • sls m build
  • sls m deploy

init command

sls m init

The init command creates a basic .gitignore file tailored for serverless modular.

The basic .gitignore for serverless modular looks like this

#node_modules
node_modules

#sm main functions
sm.functions.yml

#serverless file generated by build
src/**/serverless.yml

#main serverless directories generated for sls deploy
.serverless

#feature serverless directories generated by sls deploy
src/**/.serverless

#serverless logs file generated for main sls deploy
.sm.log

#serverless logs file generated for feature sls deploy
src/**/.sm.log

#Webpack config copied in each feature
src/**/webpack.config.js

feature command

The feature command helps in creating new features for your project.

options (feature Command)

This command comes with three options

--name: Specify the name you want for your feature

--remove: Set the value to true if you want to remove the feature

--basePath: Specify the base path you want for your feature. This base path should be unique across features; it helps when running offline with the offline plugin and with API Gateway.

option      shortcut  required  values       default value
--name      -n        yes       string       N/A
--remove    -r        no        true, false  false
--basePath  -p        no        string       same as name

Examples (feature Command)

Creating a basic feature

# Creating a jedi feature
$ sls m feature -n jedi

Creating a feature with different base path

# A feature with different base path
$ sls m feature -n jedi -p tatooine

Deleting a feature

# Anakin is going to delete the jedi feature
$ sls m feature -n jedi -r true
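
Judging from the ignore patterns in the init section, creating a feature scaffolds a per-feature directory under src; a hypothetical layout after creating the jedi feature might look like this (the exact files are an assumption, not verified plugin output):

src/
  jedi/                 # one directory per feature
    serverless.yml      # generated for the feature by the build command
    webpack.config.js   # webpack config copied into each feature
sm.functions.yml        # generated function index at the project root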

function command

The function command helps in adding a new function to a feature.

options (function Command)

This command comes with four options

--name: Specify the name you want for your function

--feature: Specify the name of the existing feature

--path: Specify the path for the HTTP endpoint; it helps when running offline with the offline plugin and with API Gateway

--method: Specify the HTTP method; it helps when running offline with the offline plugin and with API Gateway

option     shortcut  required  values  default value
--name     -n        yes       string  N/A
--feature  -f        yes       string  N/A
--path     -p        no        string  same as name
--method   -m        no        string  'GET'

Examples (function Command)

Creating a basic function

# Creating a cloak function for jedi feature
$ sls m function -n cloak -f jedi

Creating a basic function with different path and method

# Creating a cloak function for jedi feature with custom path and HTTP method
$ sls m function -n cloak -f jedi -p powers -m POST
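
In plain Serverless Framework terms, the second example should map to an HTTP event along these lines (a sketch only; the handler path and the exact wiring the plugin generates are assumptions):

# Hypothetical generated function definition (illustrative only)
functions:
  cloak:
    handler: src/jedi/cloak.handler  # assumed handler location
    events:
      - http:
          path: powers   # from the -p option; the feature basePath may be prefixed
          method: post   # from the -m option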

build command

The build command helps in building the project for local or global scope

options (build Command)

This command comes with two options

--scope: Specify the scope of the build; use this with the "--feature" flag

--feature: Specify the name of the existing feature you want to build

option     shortcut  required  values  default value
--scope    -s        no        string  local
--feature  -f        no        string  N/A

Saving build Config in serverless.yml

You can also save the build config in your serverless.yml file:

custom:
  smConfig:
    build:
      scope: local

Examples (build Command)

All features build (local scope)

# Building all local features
$ sls m build

Single feature build (local scope)

# Building a single feature
$ sls m build -f jedi -s local

All features build (global scope)

# Building all features with global scope
$ sls m build -s global

deploy command

The deploy command helps in deploying serverless projects to AWS (it uses the sls deploy command under the hood).

options (deploy Command)

This command comes with four options

--sm-parallel: Specify if you want to deploy in parallel (deployments only run in parallel when there are multiple of them)

--sm-scope: Specify whether you want to deploy local features or global

--sm-features: Specify the local features you want to deploy (comma separated if multiple)

--sm-ignore-build: Specify if you want to skip the build step during deployment (see ignoreBuild in the config below)

option             shortcut  required  values         default value
--sm-parallel      (none)    no        true, false    true
--sm-scope         (none)    no        local, global  local
--sm-features      (none)    no        string         N/A
--sm-ignore-build  (none)    no        string         false

Saving deploy Config in serverless.yml

You can also save the deploy config in your serverless.yml file:

custom:
  smConfig:
    deploy:
      scope: local
      parallel: true
      ignoreBuild: true
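
Merging the build and deploy examples above, a combined smConfig block would look like this:

custom:
  smConfig:
    build:
      scope: local       # scope used by sls m build
    deploy:
      scope: local       # deploy local features
      parallel: true     # deploy multiple features in parallel
      ignoreBuild: true  # skip the build step during deploy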

Examples (deploy Command)

Deploy all features locally

# deploy all local features
$ sls m deploy

Deploy all features globally

# deploy all global features
$ sls m deploy --sm-scope global

Deploy single feature

# deploy a single feature
$ sls m deploy --sm-features jedi

Deploy Multiple features

# deploy multiple features
$ sls m deploy --sm-features jedi,sith,dark_side

Deploy Multiple features in sequence

# deploy multiple features in sequence
$ sls m deploy --sm-features jedi,sith,dark_side --sm-parallel false

Author: aa2kb
Source Code: https://github.com/aa2kb/serverless-modular 
License: MIT license

#serverless #aws #node #lambda 

Christa Stehr

Predicting The Cost and Performance of Serverless Workloads Under Different Workload Intensities

Serverless Computing is the most promising trend for the future of Cloud Computing. As of 2020, all major cloud providers offer a wide variety of serverless services. Some of the FaaS offerings provided by different cloud providers are AWS Lambda, Google Cloud Functions, Google Cloud Run, Azure Functions, and IBM Cloud Functions. If you want to use your current infrastructure, you could also use open-source alternatives like OpenFaaS, IronFunctions, Apache OpenWhisk, Kubeless, Fission, OpenLambda, and Knative.

In a previous article, I walked through the most important autoscaling patterns used in major cloud services, along with their pros and cons. In this post, I will go through the process of predicting key performance characteristics and the cost of scale-per-request serverless platforms (like AWS Lambda, IBM Cloud Functions, Azure Functions, and Google Cloud Functions) under different workload intensities (in terms of requests per second) using a performance model. I will also include a link to a simulator that can generate more detailed insights at the end.

The Performance Model

A performance model is “A model created to define the significant aspects of the way in which a proposed or actual system operates in terms of resources consumed, contention for resources, and delays introduced by processing or physical limitations” [source]. So using a performance model, you can “predict” how different characteristics of your service will change in different settings without needing to perform costly experiments for them.

The performance model we will be using today is from one of my recent papers called “Performance Modeling of Serverless Computing Platforms”. You can try an interactive version of my model to see what kind of information you can expect from it.

Prerequisites

The user needs to provide a handful of input properties to the performance model, some of which have sensible default values.

The only system property you need to provide is the “idle expiration time”: the amount of time the serverless platform keeps your function instance around after the last request before terminating it and freeing its resources (to learn more about this, you will have to read my paper, especially the system description section). The good news is that this is a fixed value for all workloads, so you don’t need to think about it: it is 10 minutes for AWS Lambda, Google Cloud Functions, and IBM Cloud Functions, and 20 minutes for Azure Functions.
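
As a back-of-the-envelope illustration (my own simplification, not the full model from the paper): if requests arrive as a Poisson process with rate $\lambda$ and the platform keeps an idle instance for $E$ seconds, then for a single instance the chance that a request finds no warm instance is the chance that the gap since the previous request exceeds $E$:

P(\text{cold start}) = P(\Delta t > E) = e^{-\lambda E}

With one request per minute ($\lambda = 1/60$ per second) and a 10-minute expiration ($E = 600$ s), $e^{-\lambda E} = e^{-10} \approx 4.5 \times 10^{-5}$, so virtually every request after the first hits a warm instance.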

The next thing you need is the cold/warm response time of your function. The only way to get this value, for now, is by actually running your code on the platform and measuring the response times. Of course, there are tools that can help you with that, but I haven’t used them, so I would be glad if you could tell me in the comments how they worked for you. Tools like AWS Lambda Power Tuning can also tell you the response time for different memory settings, so you can check which one fits your QoS guarantees.
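
For a quick manual check, you can time a deployed HTTP-triggered function yourself; in the sketch below the endpoint URL is a placeholder for your own function’s URL:

# First call after a long idle period: likely a cold start
$ curl -s -o /dev/null -w "%{time_total}\n" https://your-endpoint.example.com/hello

# An immediate second call: should hit a warm instance
$ curl -s -o /dev/null -w "%{time_total}\n" https://your-endpoint.example.com/hello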

#serverless-computing #performance #serverless-architecture #serverless #serverless-apps