Poppy Cooke

Introduction to Serverless

Cloud computing, which started with Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS), is fast moving into a Function-as-a-Service (FaaS) model where users don’t have to think about servers. Serverless applications don’t require the provisioning, scaling, or management of servers, as everything required to run and scale the applications with high availability is provided by the platform itself. This allows users to focus on building their core products, instead of worrying about the runtime infrastructure demands; and to pay only for the actual compute time and resources consumed, instead of the total uptime.

Server-Less?

Serverless is merely an abstraction wherein the users (developers and maintainers in general) have zero or minimal visibility and control on the “server” aspects of their target environment — processes, the kernel, the filesystem, OS configurations, and so forth. From a traditional developer perspective, this may seem like a considerable restriction, but the same abstraction opens up wonderful possibilities for platform providers to optimize their infrastructure utilization (more bang for their buck), and for users to avoid unnecessary charges with per-usage billing. In this sense alone it’s a win-win, not to mention the ensuing resource optimization and “green computing” benefits.

FaaS is solely the computation aspect of serverless, which in itself is quite broad in scope: storage, hosting, relational/non-relational persistence, messaging, API/web service provisioning and management, and even IoT coordination. The first commercially successful serverless (“pay-as-you-go”) platform was Google’s App Engine in 2008 (which is now, arguably, categorized as a PaaS). However, the real surge came in 2014, when AWS announced Lambda, its “serverless compute” offering, as a novel pay-per-execution alternative to pay-per-allocation VMs.

The Awakening

As of now, we see a few major players in the arena: AWS with Lambda (which, not coincidentally, fits nicely with the “as-a-service” stack it has been building since the beginning); Microsoft Azure with Azure Functions (catching up fast, along with its other service offerings like Azure Storage, CosmosDB, and Event Grid); and Google Cloud Platform with Cloud Functions and Firebase Functions (the former having gone GA in July this year, although it seemingly lags behind its competitors, and the latter being a more “managed” version of the former, tailored for the Firebase backend-as-a-service (BaaS) platform). In addition, we see many new players waiting in line: IBM Cloud Functions based on OpenWhisk, PubNub Functions, MongoDB Stitch, and Function Compute from Alibaba Cloud.

In addition to such public cloud-based providers, there’s an increasing number of do-it-yourself serverless platforms which one can deploy on their own cloud infrastructure — Kubernetes-based Kubeless and Funktion; Apache OpenWhisk; Docker-based Oracle Fn and IronFunctions; even totally novel experiments like OpenFaaS on Raspberry Pi.

“Why” and “When”

Given the pay-per-use model of serverless, it is mostly suited for batch and event-driven workloads. An ideal example is end-of-month payroll processing; rather than running and maintaining a dedicated compute instance (such as a server, container, or platform (PaaS) app) throughout the month, or going through the hassle of starting, configuring, running, and shutting down such an instance just for the EoM batch process (while having to pay for the passive resources for the whole month), the organization can simply deploy a set of serverless components (a function, connected to a pay-per-use data store or source, such as Google Cloud Datastore or AWS DynamoDB). This way, the payroll personnel can simply invoke the function on demand to get their EoM calculations done, and literally forget about it — until the end of next month.
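
To make the shape of such a deployment concrete, here is a minimal sketch (not taken from any real system) of what the on-demand EoM function could look like on AWS Lambda with DynamoDB; the table name, schema, and field names are all hypothetical:

# payroll.py - hypothetical on-demand EoM payroll function for AWS Lambda
import json

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("timesheets")  # hypothetical pay-per-use DynamoDB table


def handler(event, context):
    # Invoked on demand at the end of the month, e.g. via
    # aws lambda invoke --function-name payroll --payload '{"month": "2018-11"}' out.json
    month = event["month"]
    items = table.scan()["Items"]  # fine for a small table; paginate/query at scale
    payroll = {
        item["employee_id"]: float(item["hours"]) * float(item["hourly_rate"])
        for item in items
        if item["month"] == month
    }
    return {"statusCode": 200, "body": json.dumps(payroll)}

Between invocations nothing runs and nothing is billed; the function and the table exist only as configuration until the next request arrives.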

Another remarkable feature of serverless functions is their near-infinite scalability — the ability to scale from virtually nothing to literally tens of thousands of concurrent instances — which makes them perfect candidates for handling highly variant and highly unpredictable loads, such as traffic for a real-time sports score app. In most cases, it is much more economical to have serverless infrastructure handling on-demand loads rather than running, managing, and paying for over-provisioned VM instances or container clusters just to handle a rarely occurring peak load.

A Blooming Ecosystem

Parallel to platforms, serverless frameworks are also booming. The literal Serverless Framework and SAM (Serverless Application Model) from AWS are two of the key players. Both are open-source (although the mechanisms behind SAM are embedded in the opaque AWS platform), have installable CLI components that can be set up to run quickly, and integrate with CI/CD workflows as well. The Serverless Framework supports multiple languages, provides lifecycle management, and is extensible via plugins (already growing into a rich ecosystem), whereas SAM has relatively limited capabilities due to being tightly coupled with AWS and its CloudFormation deployment service. While a basic Serverless Framework app could be portable across cloud platforms (currently AWS, GCP, Azure, and OpenWhisk), customizability and advanced features still have to be achieved via platform-specific configurations.
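
For illustration, a minimal Serverless Framework service definition for AWS could look like the following sketch (service and handler names are placeholders):

# serverless.yml - minimal single-function service (illustrative)
service: payroll-demo

provider:
  name: aws            # swapping providers (google, azure, openwhisk) needs provider-specific config
  runtime: nodejs8.10
  region: us-east-1

functions:
  calculate:
    handler: handler.calculate   # exported function "calculate" in handler.js
    events:
      - http:                    # AWS-specific event: an API Gateway endpoint
          path: payroll
          method: post

Running sls deploy packages the code and hands the rest (the CloudFormation stack, API Gateway wiring, and IAM roles) to the platform.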

Additionally, different forms of serverless development tools are emerging: frameworks like Apex, Up, and Spring Cloud Function; design stacks like Project Flogo; IDEs like AWS Cloud9 and SLAppForge Sigma; API-centric tooling like Shep and Lambda Forest; testing tools like SAM Local; and so forth. Anibal maintains an extensive curated list of such tooling in his GitHub repo, which is one of the most comprehensive sources for staying current with serverless technologies.

Operational concerns of serverless — security, compliance, monitoring, and management — are just as important as the development aspect. PureSec, Protego, and other security-oriented ventures are introducing security best practices to the serverless world (least privileges for function runtimes, vulnerability analysis, DDoS/attack detection, and so forth) whereas others like Dashbird, IOpipe, and Epsagon are improving the monitoring aspects via unified dashboards, log analysis, automatic alerting and throttling, performance tracking, and cost estimation/prediction.

Many prominent enterprises are already moving parts of their production workloads to serverless; notable names include Coca-Cola, The New York Times, Thomson Reuters, Fuji, and Localytics. So far, these moves have mainly focused on event-driven workloads such as analytics, IoT coordination, on-demand image processing, and email sending. However, there are also full-stack examples like OpenGamma and Torii. Furthermore, recent surveys by Serverless, Inc. revealed that, among its respondents, enterprise serverless usage increased from 43% in 2017 to 82% in 2018.

Fragile: Handle with Care

Despite its traction and popularity, it should never be forgotten that serverless is no silver bullet for all your cost and scalability issues. In fact, beyond a certain point, serverless can be more expensive than dedicated provisioning; and that point can be closer than you expect. As a rule of thumb, if your application needs continuous computation at high concurrency (the latter being inevitable for cases like HTTP request handling), you are better off running a dedicated compute instance, which can usually serve multiple concurrent requests and generally has access to larger memory and disk capacities as well. “Ops-free” provisioning notwithstanding, there are written accounts of how untamed scalability can wreak havoc on your bill, as well as of how giving up on serverless can cut costs by orders of magnitude.
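
A back-of-the-envelope calculation illustrates the break-even point. The sketch below uses illustrative 2018-era us-east-1 prices (roughly $0.20 per million Lambda requests, about $0.0000167 per GB-second, and around $33/month for a small always-on VM); the exact figures are assumptions, but the shape of the curve is the point:

# Rough Lambda-vs-VM monthly cost comparison (illustrative prices, not quotes)
REQ_PRICE = 0.20 / 1_000_000   # USD per request
GB_SECOND = 0.0000167          # USD per GB-second of execution
VM_MONTHLY = 33.0              # small always-on instance, on-demand

def lambda_monthly_cost(req_per_sec, mem_gb=0.512, duration_s=0.2):
    requests = req_per_sec * 30 * 24 * 3600
    return requests * (REQ_PRICE + duration_s * mem_gb * GB_SECOND)

for rps in (1, 10, 50):
    print(f"{rps:>3} req/s -> ${lambda_monthly_cost(rps):,.0f}/month (VM: ${VM_MONTHLY}/month)")

With these numbers, Lambda wins easily at one request per second (about $5/month) but overtakes the VM somewhere below ten sustained requests per second: exactly the “continuous computation at high concurrency” regime described above.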

“Going serverless” is not an overnight feat, especially when “serverfull” architecture is already baked into your systems, developers, and maintainers. The “think serverless” initiative should be adhered to from the start of a project, with developers, team leads, architects, and even project managers being well aware of the strengths, limitations, and pitfalls of the serverless “version” of what they are about to implement. Without a well-planned architecture, a serverless system can very easily turn into a mess or a white elephant — the chances are much higher than with a “traditional” server architecture.

Despite all these hardships, pitfalls, and controversies, serverless will survive — and thrive — just like it is doing now. Grab it by its correct end, and you’ll attain your nirvana.

#aws #web-service #microservices #serverless

Hermann Frami

Serverless Plugin for Microservice Code Management and Deployment

Serverless M

Serverless M (or Serverless Modular) is a plugin for the Serverless Framework. This plugin helps you manage multiple serverless projects with a single serverless.yml file. It gives you supercharged CLI options that you can use to create new features, build them in a single file, and deploy them all in parallel.

Currently, this plugin is tested only with the stack below:

  • AWS
  • NodeJS λ
  • REST API (you can use other events as well)

Prerequisites

Make sure you have the serverless CLI installed

# Install serverless globally
$ npm install serverless -g

Getting Started

To start a serverless modular project locally, you can either start with the ES5 or ES6 template or add the plugin to an existing project.

ES6 Template install

# Step 1. Download the template
$ sls create --template-url https://github.com/aa2kb/serverless-modular/tree/master/template/modular-es6 --path myModularService

# Step 2. Change directory
$ cd myModularService

# Step 3. Create a package.json file
$ npm init

# Step 4. Install dependencies
$ npm i serverless-modular serverless-webpack webpack --save-dev

ES5 Template install

# Step 1. Download the template
$ sls create --template-url https://github.com/aa2kb/serverless-modular/tree/master/template/modular-es5 --path myModularService

# Step 2. Change directory
$ cd myModularService

# Step 3. Create a package.json file
$ npm init

# Step 4. Install dependencies
$ npm i serverless-modular --save-dev

If you don't want to use the templates above, you can just add the plugin to your existing project.

Adding it as plugin

plugins:
  - serverless-modular

You are now all set to start building your serverless modular functions.

API Reference

The Serverless Modular CLI can be accessed by:

# Serverless Modular CLI
$ serverless modular

# shorthand
$ sls m

The Serverless Modular CLI is based on five main commands:

  • sls m init
  • sls m feature
  • sls m function
  • sls m build
  • sls m deploy

init command

sls m init

The init command helps create a basic .gitignore file that is useful for serverless modular.

The basic .gitignore for serverless modular looks like this

#node_modules
node_modules

#sm main functions
sm.functions.yml

#serverless file generated by build
src/**/serverless.yml

#main serverless directories generated for sls deploy
.serverless

#feature serverless directories generated sls deploy
src/**/.serverless

#serverless logs file generated for main sls deploy
.sm.log

#serverless logs file generated for feature sls deploy
src/**/.sm.log

#Webpack config copied in each feature
src/**/webpack.config.js

feature command

The feature command helps in creating new features for your project.

options (feature Command)

This command comes with three options

--name: Specify the name you want for your feature

--remove: set value to true if you want to remove the feature

--basePath: Specify the base path you want for your feature; this base path should be unique across all features. It helps when running offline with the offline plugin and for API Gateway.

option      shortcut  required  values       default value
--name      -n        yes       string       N/A
--remove    -r        no        true, false  false
--basePath  -p        no        string       same as name

Examples (feature Command)

Creating a basic feature

# Creating a jedi feature
$ sls m feature -n jedi

Creating a feature with different base path

# A feature with different base path
$ sls m feature -n jedi -p tatooine

Deleting a feature

# Anakin is going to delete the jedi feature
$ sls m feature -n jedi -r true

function command

The function command helps in adding a new function to a feature.

options (function Command)

This command comes with four options

--name: Specify the name you want for your function

--feature: Specify the name of the existing feature

--path: Specify the path for the HTTP endpoint; helps when running offline with the offline plugin and for API Gateway

--method: Specify the HTTP method; helps when running offline with the offline plugin and for API Gateway

option     shortcut  required  values  default value
--name     -n        yes       string  N/A
--feature  -f        yes       string  N/A
--path     -p        no        string  same as name
--method   -m        no        string  'GET'

Examples (function Command)

Creating a basic function

# Creating a cloak function for jedi feature
$ sls m function -n cloak -f jedi

Creating a basic function with different path and method

# Creating a cloak function for jedi feature with custom path and HTTP method
$ sls m function -n cloak -f jedi -p powers -m POST

build command

The build command helps in building the project for local or global scope

options (build Command)

This command comes with two options

--scope: Specify the scope of the build; use this together with the "--feature" option

--feature: Specify the name of the existing feature you want to build

option     shortcut  required  values  default value
--scope    -s        no        string  local
--feature  -f        no        string  N/A

Saving build Config in serverless.yml

You can also save config in serverless.yml file

custom:
  smConfig:
    build:
      scope: local

Examples (build Command)

all feature build (local scope)

# Building all local features
$ sls m build

Single feature build (local scope)

# Building a single feature
$ sls m build -f jedi -s local

All features build global scope

# Building all features with global scope
$ sls m build -s global

deploy command

The deploy command helps in deploying serverless projects to AWS (it uses the sls deploy command under the hood).

options (deploy Command)

This command comes with four options

--sm-parallel: Specify if you want to deploy in parallel (deployments only run in parallel when there are multiple of them)

--sm-scope: Specify if you want to deploy local features or global

--sm-features: Specify the local features you want to deploy (comma-separated if multiple)

--sm-ignore-build: Specify if you want to skip the build step and deploy the existing build (see the ignoreBuild key in the config below)

option             shortcut  required  values         default value
--sm-parallel      N/A       no        true, false    true
--sm-scope         N/A       no        local, global  local
--sm-features      N/A       no        string         N/A
--sm-ignore-build  N/A       no        string         false

Saving deploy Config in serverless.yml

You can also save config in serverless.yml file

custom:
  smConfig:
    deploy:
      scope: local
      parallel: true
      ignoreBuild: true

Examples (deploy Command)

Deploy all features locally

# deploy all local features
$ sls m deploy

Deploy all features globally

# deploy all global features
$ sls m deploy --sm-scope global

Deploy single feature

# deploy a single feature
$ sls m deploy --sm-features jedi

Deploy Multiple features

# deploy multiple features
$ sls m deploy --sm-features jedi,sith,dark_side

Deploy Multiple features in sequence

# deploy multiple features in sequence
$ sls m deploy --sm-features jedi,sith,dark_side --sm-parallel false

Author: aa2kb
Source Code: https://github.com/aa2kb/serverless-modular 
License: MIT license

#serverless #aws #node #lambda 

Serverless Applications - Pros and Cons to Help Businesses Decide - Prismetric

In the past few years, especially after Amazon Web Services (AWS) introduced its Lambda platform, serverless architecture became the business realm’s buzzword. The increasing popularity of serverless applications saw market leaders like Netflix, Airbnb, Nike, etc., adopting the serverless architecture to handle their backend functions better. Moreover, serverless architecture’s market size is expected to reach a whopping $9.17 billion by the year 2023.

[Figure: Global Serverless Architecture Market, 2019–2023]

Why use serverless computing?
As a business it is best to approach a professional mobile app development company to build apps that are deployed on various servers; nevertheless, businesses should understand that the benefits of serverless applications lie in the business implementations they make possible, not in the hype created by cloud vendors. With serverless architecture, developers can run arbitrary code on demand without worrying about the underlying hardware.

But as is the case with all game-changing trends, many businesses opt for serverless applications just for the sake of keeping up with their peers, without thinking about their business's actual needs.

Serverless applications work well for stateless use cases: those that execute cleanly and hand off to the next operation in a sequence. On the other hand, serverless architecture is not a good fit for predictable applications that involve a lot of reading and writing in the backend system.

Another benefit of working with serverless software architecture is that the third-party service provider charges based on the total number of requests. As the number of requests increases, the charge is bound to increase, but it will still cost significantly less than dedicated IT infrastructure.

Defining serverless software architecture
In serverless software architecture, the application logic is implemented in an environment where operating systems, servers, and virtual machines are not visible. The application logic still executes on an operating system running on physical servers, but managing that infrastructure is the sole responsibility of the service provider; the mobile app developer focuses only on writing code.

There are two different approaches when it comes to serverless applications:

Backend as a service (BaaS)
Function as a service (FaaS)

  1. Backend as a service (BaaS)
    BaaS applications rely on a growing number of third-party services to provide their server-side logic and to maintain their internal state. This approach leads to applications with little or no application-specific, server-side logic of their own; they depend on third-party services for everything.

Examples of such third-party services are Auth0 and AWS Cognito (authentication as a service), Amazon Kinesis and Keen IO (analytics as a service), and many more.

  2. Function as a Service (FaaS)
    FaaS is the modern alternative to traditional architecture for cases where the application still requires server-side logic. With Function as a Service, the developer can focus on implementing stateless functions that are triggered by events and communicate with the external world.

FaaS serverless architecture is mostly used with a microservices architecture, as it offloads everything else to the provider. AWS Lambda, Google Cloud Functions, etc., are some examples of FaaS implementations.

Pros of Serverless applications
Serverless applications can redefine the way business is done in the modern age, and they have some distinct advantages over traditional cloud platforms. Here are a few:

🔹 Highly Scalable
The flexible nature of serverless architecture makes it ideal for scaling applications. One benefit of serverless applications is that the vendor runs each function in a separate container, allowing the functions to be optimized automatically and effectively. Moreover, unlike in the traditional cloud, one doesn't need to purchase a fixed amount of resources up front with serverless applications and can be as flexible as possible.

🔹 Cost-Effective
As organizations don't need to spend hundreds of thousands of dollars on hardware, they also don't need to pay engineers to maintain it. The serverless pricing model is execution-based: the organization is charged according to the executions it has made.

The company using serverless applications is allotted a specific amount of execution time, and the pricing of each execution depends on the memory required. Different types of costs (for things like presence detection, access authorization, image processing, etc.) associated with a physical or virtual server are completely eliminated with serverless applications.

🔹 Focuses on user experience
As companies no longer have to think about maintaining servers, they can focus on more productive things, like developing and improving customer-facing features. A recent survey says that about 56% of users are either using or planning to use serverless applications in the coming six months.

Moreover, the money companies save with serverless apps, since they don't have to maintain any hardware, can be utilized to enhance the level of customer service and the features of the apps.

🔹 Ease of migration
It is easy to get started with serverless applications by porting individual features and operating them as on-demand events. For example, in a CMS, a video plugin requires transcoding video into different formats and bitrates. Doing this on a WordPress server might not be a good fit, as it would tie up resources dedicated to serving pages rather than encoding video.

Moreover, the benefits of serverless applications can be used optimally to handle metadata encoding and creation. Similarly, serverless apps can be used in other plugins that are often prone to critical vulnerabilities.

Cons of serverless applications
Despite having some clear benefits, serverless applications are not suitable for every single use case. We have listed the top things that an organization should keep in mind while opting for serverless applications.

🔹 Complete dependence on third-party vendor
In the realm of serverless applications, the third-party vendor is king, and organizations have no option but to play by the vendor's rules. For example, if an application is built on Lambda, it is not easy to port it to Azure. The same is the case for coding languages: at present, only Python and Node.js developers have the luxury of choosing between existing serverless options.

Therefore, if you are planning to consider serverless applications for your next project, make sure that your vendor has everything needed to complete the project.

🔹 Challenges in debugging with traditional tools
It isn't easy to perform debugging, especially for large enterprise applications that comprise many individual functions. Traditional debugging tools assume access to the runtime, and there is no option to attach a debugger in the public cloud. The organization can either debug locally or use logging for the same purpose. In addition to this, the DevOps tools in the serverless world do not yet support quickly deploying small bits of code into running applications.

#serverless-application #serverless #serverless-computing #serverless-architeture #serverless-application-prosand-cons

Christa Stehr

Overcoming Common Serverless Challenges with Mainframe CICS Programs

By this point most enterprises, including those running on legacy infrastructures, are familiar with the benefits of serverless computing:

  • Greater scalability
  • Faster development
  • More efficient deployment
  • Lower cost

The benefits of agility and cost reduction are especially relevant in the current macroeconomic environment when customer behavior is changing, end-user needs are difficult to predict, and development teams are under pressure to do more with less.

So serverless is a no-brainer, right?

Not exactly. Serverless might be relatively painless for a new generation of cloud-native software companies that grew up in a world of APIs and microservices, but it creates headaches for the many organizations that still rely heavily on legacy infrastructure.

In particular, enterprises running mainframe CICS programs are likely to encounter frustrating stumbling blocks on the path to launching Functions as a Service (FaaS). This population includes global enterprises that depend on CICS applications to effectively manage high-volume transactional processing requirements – particularly in the banking, financial services, and insurance industries.

These organizations stand to achieve time and cost savings through a modern approach to managing legacy infrastructure, as opposed to launching serverless applications on a brittle foundation. Here are three of the biggest obstacles they face and how to overcome them.

Challenge #1

Middleware that introduces complexity, technical debt, and latency. Many organizations looking to integrate CICS applications into a microservices or serverless architecture rely on middleware (e.g., an ESB or SOA) to access data from the underlying applications. This strategy introduces significant runtime performance challenges and creates what one bank’s chief architect referred to as a “lasagna architecture,” making DevOps impossible.

#serverless architecture #serverless functions #serverless benefits #mainframes #serverless api #serverless integration

Christa Stehr

Predicting The Cost and Performance of Serverless Workloads Under Different Workload Intensities

Serverless computing is the most promising trend for the future of cloud computing. As of 2020, all major cloud providers offer a wide variety of serverless services. Some of the FaaS offerings from different cloud providers are AWS Lambda, Google Cloud Functions, Google Cloud Run, Azure Functions, and IBM Cloud Functions. If you want to use your current infrastructure, you could also use open-source alternatives like OpenFaaS, IronFunctions, Apache OpenWhisk, Kubeless, Fission, OpenLambda, and Knative.

In a previous article, I went through the most important autoscaling patterns used in major cloud services, along with their pros and cons. In this post, I will go through the process of predicting key performance characteristics and the cost of scale-per-request serverless platforms (like AWS Lambda, IBM Cloud Functions, Azure Functions, and Google Cloud Functions) under different workload intensities (in terms of requests per second) using a performance model. I will also include a link to a simulator that can generate more detailed insights at the end.

The Performance Model

A performance model is “A model created to define the significant aspects of the way in which a proposed or actual system operates in terms of resources consumed, contention for resources, and delays introduced by processing or physical limitations” [source]. So using a performance model, you can “predict” how different characteristics of your service will change in different settings without needing to perform costly experiments for them.

The performance model we will be using today is from one of my recent papers called “Performance Modeling of Serverless Computing Platforms”. You can try an interactive version of my model to see what kind of information you can expect from it.

Prerequisites

[Figure: the input properties that need to be provided by the user to the performance model, along with their default values]

The only system property you need to provide is the “idle expiration time”, which is the amount of time the serverless platform will keep your function instance around after your last request before terminating it and freeing its resources (to learn more about this, you are going to have to read my paper, especially the system description section). The good news is that this is a fixed value for all workloads, so you don't need to think about it: 10 minutes for AWS Lambda, Google Cloud Functions, and IBM Cloud Functions, and 20 minutes for Azure Functions.

The next thing you need is the cold/warm response time of your function. The only way to get this value, for now, is by actually running your code on the platform and measuring the response times. Of course, there are tools that can help you with that, but I haven't used them, so I would be glad if you could tell me in the comments how they were. Tools like AWS Lambda Power Tuning can also tell you the response time for different memory settings, so you can check which one fits your QoS guarantees.
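
To give a flavor of what such a model computes, here is a deliberately simplified single-instance approximation (my own sketch, not the model from the paper): assuming Poisson request arrivals, a request hits a cold instance whenever the previous request arrived more than the idle expiration time earlier, so the cold-start probability is e^(-rate × expiration).

# Simplified cold-start estimate for scale-per-request platforms
# (single-instance approximation under Poisson arrivals; the paper's
# model is considerably more detailed)
import math

def estimate(rps, idle_expiration_s, cold_ms, warm_ms):
    p_cold = math.exp(-rps * idle_expiration_s)  # P(inter-arrival > expiration)
    mean_ms = p_cold * cold_ms + (1 - p_cold) * warm_ms
    return p_cold, mean_ms

# e.g. a 10-minute idle expiration and illustrative cold/warm timings
p, ms = estimate(rps=0.001, idle_expiration_s=600, cold_ms=800, warm_ms=50)
print(f"cold-start probability ≈ {p:.1%}, mean response ≈ {ms:.0f} ms")

At one request every ~17 minutes, over half of all requests would be cold; at one request per second, cold starts all but vanish. The full model (and the simulator mentioned above) additionally accounts for concurrency and instance pools.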

#serverless-computing #performance #serverless-architecture #serverless #serverless-apps

Hermann Frami

Serverless Framework: Deploy on Scaleway Functions

The Scaleway functions plugin for Serverless Framework allows users to deploy their functions and containers to Scaleway Functions with a simple serverless deploy.

The Serverless Framework handles everything from creating namespaces to function/code deployment by calling API endpoints under the hood.

Requirements

  • Install node.js
  • Install Serverless CLI (npm install serverless -g)

Let's work in ~/my-srvless-projects

# mkdir ~/my-srvless-projects
# cd ~/my-srvless-projects

Create a Project

The easiest way to create a project is to use one of our templates. The list of templates is here

Let's use python3

serverless create --template-url https://github.com/scaleway/serverless-scaleway-functions/tree/master/examples/python3 --path myService

Once it's done, we can install mandatory node packages used by serverless

cd myService
npm i

Note: these packages are only used by serverless, they are not shipped with your functions.

Configure your functions

Your functions are defined in the serverless.yml file created:

service: scaleway-python3
configValidationMode: off

useDotenv: true

provider:
  name: scaleway
  runtime: python310
  # Global environment variables - used in every function
  env:
    test: test
  # Storing credentials in this file is strongly discouraged for security reasons; please refer to README.md for best practices
  scwToken: <scw-token>
  scwProject: <scw-project-id>
  # region in which the deployment will happen (default: fr-par)
  scwRegion: <scw-region>

plugins:
  - serverless-scaleway-functions
  
package:
  patterns:
    - '!node_modules/**'
    - '!.gitignore'
    - '!.git/**'

functions:
  first:
    handler: handler.handle
    # Local environment variables - used only in given function
    env:
      local: local

Note: provider.name and plugins MUST NOT be changed; they enable us to use the Scaleway provider.

This file contains the configuration of one namespace containing one or more functions (in this example, only one) of the same runtime (here python3)

The different parameters are listed below (a combined example follows the list):

  • service: your namespace name
  • useDotenv: Load environment variables from .env files (default: false), read Security and secret management
  • configValidationMode: Configuration validation: 'error' (fatal error), 'warn' (logged to the output) or 'off' (default: warn)
  • provider.runtime: the runtime of your functions (check the supported runtimes above)
  • provider.env: environment variables attached to your namespace are injected to all your namespace functions
  • provider.secret: secret environment variables attached to your namespace are injected to all your namespace functions, see this example project
  • scwToken: Scaleway token you got in prerequisites
  • scwProject: Scaleway org id you got in prerequisites
  • scwRegion: Scaleway region in which the deployment will take place (default: fr-par)
  • package.patterns: usually, you don't need to configure it. Enables you to include/exclude directories to/from the deployment
  • functions: Configuration of your functions. It's a YAML dictionary, with the key being the function name
    • handler (Required): file or function which will be executed. See the next section for runtime specific handlers
    • env (Optional): environment variables specific for the current function
    • secret (Optional): secret environment variables specific for the current function, see this example project
    • minScale (Optional): how many function instances we keep running (default: 0)
    • maxScale (Optional): maximum number of instances this function can scale to (default: 20)
    • memoryLimit: RAM allocated to the function instances. See the introduction for the list of supported values
    • timeout: is the maximum duration in seconds that the request will wait to be served before it times out (default: 300 seconds)
    • runtime: (Optional) runtime of the function, if you need to deploy multiple functions with different runtimes in your Serverless Project. If absent, provider.runtime will be used to deploy the function, see this example project.
    • events (Optional): List of events to trigger your functions (e.g, trigger a function based on a schedule with CRONJobs). See events section below
    • custom_domains (Optional): List of custom domains, refer to Custom Domain Documentation
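
As an illustration, here is a sketch combining several of the optional parameters above in one function definition (the values are arbitrary, and memoryLimit must be one of the supported values mentioned in the introduction):

functions:
  first:
    handler: handler.handle
    minScale: 1        # keep one instance warm to avoid cold starts
    maxScale: 10       # cap horizontal scaling
    memoryLimit: 256   # RAM per instance
    timeout: 60        # seconds before the request times out
    env:
      local: local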

Security and secret management

Your configuration file may contain sensitive data; your project ID and token must not be shared and must not be committed to VCS.

To keep your information safe and still be able to share or commit your serverless.yml file, you should remove your credentials from the file. Then you can:

  • use global environment variables
  • use .env file and keep it secret

To use a .env file, you can modify your serverless.yml file as follows:

# This will allow the plugin to read your .env file
useDotenv: true

provider:
  name: scaleway
  runtime: node16

  scwToken: ${env:SCW_SECRET_KEY}
  scwProject: ${env:SCW_DEFAULT_PROJECT_ID}
  scwRegion: ${env:SCW_REGION}

Then create a .env file next to your serverless.yml file, containing the following values:

SCW_SECRET_KEY=XXX
SCW_DEFAULT_PROJECT_ID=XXX
SCW_REGION=fr-par

You can use this pattern to hide your secrets (for example, a connection string to a database or an S3 bucket).

Functions Handler

Based on the chosen runtime, the handler variable of a function might vary.

Using ES Modules

Node has two module systems: CommonJS modules and ECMAScript (ES) modules. By default, Node treats your code files as CommonJS modules; however, ES modules have also been available since the release of the node16 runtime on Scaleway Serverless Functions. ES modules give you a more modern way to re-use your code.

According to the official documentation, to use ES modules you can specify the module type in package.json, as in the following example:

  ...
  "type": "module",
  ...

This then enables you to write your code for ES modules:

export {handle};

function handle (event, context, cb) {
    return {
        body: process.version,
        statusCode: 200,
    };
};

The use of ES modules is encouraged, since they are more efficient and make setup and debugging much easier.

Note that using "type": "module" or "type": "commonjs" in your package.json will enable/disable some features in Node runtime. For a comprehensive list of differences, please refer to the official documentation, the following is a summary only:

  • commonjs is used as default value
  • commonjs allows you to use require/module.exports (synchronous code loading, it basically copies all file contents)
  • module allows you to use import/export ES6 instructions (asynchronous loading, more optimized as it imports only the pieces of code you need)

Node

Path to your handler file (from serverless.yml), omitting ./ and ../, with the exported function name appended to use as a handler:

- src
  - handlers
    - firstHandler.js  => module.exports.myFirstHandler = ...
    - secondHandler.js => module.exports.mySecondHandler = ...
- serverless.yml

In serverless.yml:

provider:
  # ...
  runtime: node16
functions:
  first:
    handler: src/handlers/firstHandler.myFirstHandler
  second:
    handler: src/handlers/secondHandler.mySecondHandler

Python

Similar to Node, the path to the handler file, e.g. src/testing/handler.py:

- src
  - handlers
    - firstHandler.py  => def my_first_handler
    - secondHandler.py => def my_second_handler
- serverless.yml

In serverless.yml:

provider:
  # ...
  runtime: python310 # or python37, python38, python39
functions:
  first:
    handler: src/handlers/firstHandler.my_first_handler
  second:
    handler: src/handlers/secondHandler.my_second_handler

Golang

Path to your handler's package, for example if I have the following structure:

- src
  - testing
    - handler.go -> package main in src/testing subdirectory
  - second
    - handler.go -> package main in src/second subdirectory
- serverless.yml
- handler.go -> package main at the root of project

Your serverless.yml functions should look something like this:

provider:
  # ...
  runtime: go118
functions:
  main:
    handler: "."
  testing:
    handler: src/testing
  second:
    handler: src/second

Events

With events, you may link your functions with specific triggers, which might include CRON Schedule (Time based), MQTT Queues (Publish on a topic will trigger the function), S3 Object update (Upload an object will trigger the function).

Note that we do not include HTTP triggers in our event types, as an HTTP endpoint is created for every function. Triggers are just another way to invoke your function; you will always be able to execute your code via HTTP.

Here is a list of supported triggers on Scaleway Serverless, and the configuration parameters required to deploy them:

  • schedule: Trigger your function based on CRON schedules
    • rate: CRON Schedule (UNIX Format) on which your function will be executed
    • input: key-value mapping to define arguments that will be passed into your function's event object during execution.

To link a Trigger to your function, you may define a key events in your function:

functions:
  handler: myHandler.handle
  events:
    # "events" is a list of triggers, the first key being the type of trigger.
    - schedule:
        # CRON Job Schedule (UNIX Format)
        rate: '1 * * * *'
        # Input variable are passed in your function's event during execution
        input:
          key: value
          key2: value2

You may link events to your containers too (see the Managing containers section below for more information on how to deploy containers):

custom:
  containers:
    mycontainer:
      directory: my-directory
      # Events key
      events:
        - schedule:
            rate: '1 * * * *'
            input:
              key: value
              key2: value2

You may refer to the following examples:

Custom domains

Custom domains allow users to use their own domains.

For domain configuration, please refer to the Scaleway documentation.

Example integration with the Serverless Framework:

functions:
  first:
    handler: handler.handle
    # Local environment variables - used only in given function
    env:
      local: local
    custom_domains:
      - func1.scaleway.com
      - func2.scaleway.com

Note: as your domain must have a record pointing to your function hostname, you should deploy your function once first to read its hostname. Custom domain configuration will be available after the first deploy.

Note: Serverless Framework will consider the configuration file as the source of truth.

If you create a domain with other tools (Scaleway's console, CLI, or API), you must reference the created domain in your serverless configuration file; otherwise it will be deleted, as the Serverless Framework gives priority to its own configuration.

Managing containers

Requirements: You need to have Docker installed to be able to build and push your image to your Scaleway registry.

You must define your containers inside the custom.containers field in your serverless.yml manifest. Each container must specify the relative path of its application directory (containing the Dockerfile, and all files related to the application to deploy):

custom:
  containers:
    mycontainer:
      directory: my-container-directory
      # port: 8080
      # Environment only available in this container 
      env:
        MY_VARIABLE: "my-value"

Here is an example of the files you should have; the directory containing your Dockerfile and scripts is my-container-directory.

.
├── my-container-directory
│   ├── Dockerfile
│   ├── requirements.txt
│   ├── server.py
│   └── (...)
├── node_modules
│   ├── serverless-scaleway-functions
│   └── (...)
├── package-lock.json
├── package.json
└── serverless.yml

Scaleway's platform will automatically inject a PORT environment variable on which your server should be listening for incoming traffic. By default, this PORT is 8080. You may change the port in your serverless.yml.
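
For reference, a minimal server.py honoring that contract could look like this sketch (assuming nothing beyond the PORT variable described above):

# server.py - minimal container entrypoint listening on the injected PORT
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"Hello from my container\n")

if __name__ == "__main__":
    port = int(os.environ.get("PORT", "8080"))  # injected by the platform
    HTTPServer(("0.0.0.0", port), Handler).serve_forever()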

You may use the container example to get started.

Logs

The serverless logs command lets you watch the logs of a specific function or container.

Pass the function or container name you want to fetch the logs for with --function:

serverless logs --function <function_or_container_name>

Info

The serverless info command gives you information about your current deployment state in JSON format.

Documentation and useful Links

Contributing

This plugin is mainly developed and maintained by the Scaleway Serverless Team, but you are free to open issues or discuss with us on our community Slack channels #serverless-containers and #serverless-functions.

Author: Scaleway
Source Code: https://github.com/scaleway/serverless-scaleway-functions 
License: MIT license

#serverless #function #aws #lambda