Sean Robertson

Pre-Generated JAMstack Sites with Serverless Rendering as a Fallback

You might be seeing the term JAMstack popping up more and more frequently. I’ve been a fan of it as an approach for some time.

One of the principles of JAMstack is that of pre-rendering. In other words, it generates your site into a collection of static assets in advance, so that it can be served to your visitors with maximum speed and minimum overhead from a CDN or other optimized static hosting environment.

But if we are going to pre-generate our sites ahead of time, how do we make them feel dynamic? How do we build sites that need to change often? How do we work with things like user generated content?

As it happens, this can be a great use case for serverless functions. JAMstack and Serverless are the best of friends. They complement each other wonderfully.

In this article, we’ll look at a pattern of using serverless functions as a fallback for pre-generated pages in a site that is composed almost entirely of user generated content. We’ll use a technique of optimistic URL routing where the 404 page is a serverless function to add serverless rendering on the fly.

Buzzwordy? Perhaps. Effective? Most certainly!

You can go and have a play with the demo site to help you imagine this use case. But only if you promise to come back.

[https://vlolly.net](https://vlolly.net)

Is that you? You came back? Great. Let’s dig in.

The idea behind this little example site is that it lets you create a nice, happy message and virtual pick-me-up to send to a friend. You can write a message, customize a lollipop (or a popsicle, for my American friends) and get a URL to share with your intended recipient. And just like that, you’ve brightened up their day. What’s not to love?

Traditionally, we’d build this site using some server-side scripting to handle the form submissions, add new lollies (our user generated content) to a database and generate a unique URL. Then we’d use some more server-side logic to parse requests for these pages, query the database to get the data needed to populate a page view, render it with a suitable template, and return it to the user.

That all seems logical.

But how much will it cost to scale?

Technical architects and tech leads often get this question when scoping a project. They need to plan, pay for, and provision enough horsepower in case of success.

This virtual lollipop site is no mere trinket. This thing is going to make me a gazillionaire due to all the positive messages we all want to send each other! Traffic levels are going to spike as the word gets out. I had better have a good strategy of ensuring that the servers can handle the hefty load. I might add some caching layers, some load balancers, and I’ll design my database and database servers to be able to share the load without groaning from the demand to make and serve all these lollies.

Except… I don’t know how to do that stuff.

And I don’t know how much it would cost to add that infrastructure and keep it all humming. It’s complicated.

This is why I love to simplify my hosting by pre-rendering as much as I can.

Serving static pages is significantly simpler and cheaper than serving pages dynamically from a web server which needs to perform some logic to generate views on demand for every visitor.

Since we are working with lots of user generated content, it still makes sense to use a database, but I’m not going to manage that myself. Instead, I’ll choose one of the many database options available as a service. And I’ll talk to it via its APIs.

I might choose Firebase, or MongoDB, or any number of others. Chris compiled a few of these on an excellent site about serverless resources which is well worth exploring.

In this case, I selected Fauna to use as my data store. Fauna has a nice API for stashing and querying data. It is a NoSQL-flavored data store and gives me just what I need.
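
Incidentally, the queries later in this article assume two Fauna indexes exist (all_lollies and lolly_by_path). Their one-time creation isn't shown in this walkthrough, but in the Fauna shell it might look something like this (a sketch using the same classic FQL style as the rest of the article):

// One-time setup in the Fauna shell (a sketch; names match the queries used below).
// An index to paginate over every lolly:
CreateIndex({ name: "all_lollies", source: Class("lollies") })

// An index to look a lolly up by its unique path:
CreateIndex({
  name: "lolly_by_path",
  source: Class("lollies"),
  terms: [{ field: ["data", "lollyPath"] }]
})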

Critically, Fauna have made an entire business out of providing database services. They have the deep domain knowledge that I’ll never have. By using a database-as-a-service provider, I just inherited an expert data service team for my project, complete with high availability infrastructure, capacity and compliance peace of mind, skilled support engineers, and rich documentation.

Such are the advantages of using a third-party service like this rather than rolling your own.

Architecture

I often find myself doodling the logical flow of things when I’m working on a proof of concept. Here’s my doodle for this site:

[Image: architecture doodle]

And a little explanation:

  1. A user creates a new lollipop by completing a regular old HTML form.
  2. The new content is saved in a database, and its submission triggers a new site generation and deployment.
  3. Once the site deployment is complete, the new lollipop will be available on a unique URL. It will be a static page served very rapidly from the CDN with no dependency on a database query or a server.
  4. Until the site generation is complete, any new lollipops will not be available as static pages. Unsuccessful requests for lollipop pages fall back to a page which dynamically generates the lollipop page by querying the database API on the fly.

This kind of approach, which first assumes static/pre-generated assets and only falls back to a dynamic render when a static view is not available, was usefully described by Markus Schork of Unilever as “Static First”, which I rather like.

In a little more detail

You could just dive into the code for this site, which is open source and available for you to explore, or we could talk some more.

You want to dig in a little further and explore the implementation of this example? OK, I’ll explain in a little more detail:

  • Getting data from the database to generate each page
  • Posting data to a database API with a serverless function
  • Triggering a full site re-generation
  • Rendering on demand when pages are yet to be generated

Generating pages from a database

In a moment, we’ll talk about how we post data into the database, but first, let’s assume that there are some entries in the database already. We are going to want to generate a site which includes a page for each and every one of those.

Static site generators are great at this. They chomp through data, apply it to templates, and output HTML files ready to be served. We could use any generator for this example. I chose Eleventy due to its relative simplicity and the speed of its site generation.

To feed Eleventy some data, we have a number of options. One is to give it some JavaScript which returns structured data. This is perfect for querying a database API.

Our Eleventy data file will look something like this:

// Set up a connection with the Fauna database.
// Use an environment variable to authenticate
// and get access to the database.
const faunadb = require('faunadb');
const q = faunadb.query;
const client = new faunadb.Client({
  secret: process.env.FAUNADB_SERVER_SECRET
});

module.exports = () => {
  return new Promise((resolve, reject) => {
    // get the most recent 100,000 entries (for the sake of our example)
    client.query(
      q.Paginate(q.Match(q.Ref("indexes/all_lollies")),{size:100000})
    ).then((response) => {
      // get all data for each entry
      const lollies = response.data;
      const getAllDataQuery = lollies.map((ref) => {
        return q.Get(ref);
      });
      return client.query(getAllDataQuery).then((ret) => {
        // send the data back to Eleventy for use in the site build
        resolve(ret);
      });
    }).catch((error) => {
      console.log("error", error);
      reject(error);
    });
  })
}

I named this file lollies.js, which makes all the data it returns available to Eleventy in a collection called lollies.

We can now use that data in our templates. If you’d like to see the code which takes that and generates a page for each item, you can see it in the code repository.
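
To give a flavor of that, a minimal Eleventy template using its pagination feature to emit one page per lolly might look something like this (an illustrative sketch; the real template in the repo does more):

---
pagination:
  data: lollies
  size: 1
  alias: lolly
permalink: "/lolly/{{ lolly.data.lollyPath }}/"
---
<h1>To {{ lolly.data.recipientName }}</h1>
<p>{{ lolly.data.message }}</p>
<p>From {{ lolly.data.sendersName }}</p>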

Submitting and storing data without a server

When we create a new lolly page we need to capture user content in the database so that it can be used to populate a page at a given URL in the future. For this, we are using a traditional HTML form which posts data to a suitable form handler.

The form looks something like this (or see the full code in the repo):

<form name="new-lolly" action="/new" method="POST">

  <!-- Default "flavors": 3 bands of colors with color pickers -->
  <input type="color" id="flavourTop" name="flavourTop" value="#d52358" />
  <input type="color" id="flavourMiddle" name="flavourMiddle" value="#e95946" />
  <input type="color" id="flavourBottom" name="flavourBottom" value="#deaa43" />

  <!-- Message fields -->
  <label for="recipientName">To</label>
  <input type="text" id="recipientName" name="recipientName" />

  <label for="message">Say something nice</label>
  <textarea name="message" id="message" cols="30" rows="10"></textarea>

  <label for="sendersName">From</label>
  <input type="text" id="sendersName" name="sendersName" />

  <!-- A descriptive submit button -->
  <input type="submit" value="Freeze this lolly and get a link">

</form>

We have no web servers in our hosting scenario, so we will need to devise somewhere to handle the HTTP POSTs being submitted from this form. This is a perfect use case for a serverless function. I’m using Netlify Functions for this. You could use AWS Lambda, Google Cloud Functions, or Azure Functions if you prefer, but I like the simplicity of the workflow with Netlify Functions, and the fact that this keeps my serverless API and my UI together in one code repository.

It is good practice to avoid leaking back-end implementation details into your front-end. A clear separation helps to keep things more portable and tidy. Take a look at the action attribute of the form element above. It posts data to a path on my site called /new which doesn’t really hint at what service this will be talking to.

We can use redirects to route that to any service we like. I’ll send it to a serverless function which I’ll be provisioning as part of this project, but it could easily be customized to send the data elsewhere if we wished. Netlify gives us a simple and highly optimized redirects engine which directs our traffic out at the CDN level, so users are very quickly routed to the correct place.

The redirect rule below (which lives in my project’s [netlify.toml](https://github.com/philhawksworth/virtual-lolly/blob/master/netlify.toml) file) will proxy requests to /new through to a serverless function hosted by Netlify Functions called [newLolly.js](https://github.com/philhawksworth/virtual-lolly/blob/master/src/functions/newLolly.js).

# resolve the "new" URL to a function
[[redirects]]
  from = "/new"
  to = "/.netlify/functions/newLolly"
  status = 200

Let’s look at that serverless function which:

  • stores the new data in the database,
  • creates a new URL for the new page and
  • redirects the user to the newly created page so that they can see the result.

First, we’ll require the various utilities we’ll need to parse the form data, connect to the Fauna database and create readably short unique IDs for new lollies.

const faunadb = require('faunadb');          // For accessing FaunaDB
const shortid = require('shortid');          // Generate short unique URLs
const querystring = require('querystring');  // Help us parse the form data

// First we set up a new connection with our database.
// An environment variable helps us connect securely
// to the correct database.
const q = faunadb.query
const client = new faunadb.Client({
  secret: process.env.FAUNADB_SERVER_SECRET
})

Now we’ll add some code to handle requests to the serverless function. The handler function will parse the request to get the data we need from the form submission, generate a unique ID for the new lolly, and then create it as a new record in the database.

// Handle requests to our serverless function
exports.handler = (event, context, callback) => {

  // get the form data
  const data = querystring.parse(event.body);
  // add a unique path id. And make a note of it - we'll send the user to it later
  const uniquePath = shortid.generate();
  data.lollyPath = uniquePath;

  // assemble the data ready to send to our database
  const lolly = {
    data: data
  };

  // Create the lolly entry in the fauna db
  client.query(q.Create(q.Ref('classes/lollies'), lolly))
    .then((response) => {
      // Success! Redirect the user to the unique URL for this new lolly page
      return callback(null, {
        statusCode: 302,
        headers: {
          Location: `/lolly/${uniquePath}`,
        }
      });
    }).catch((error) => {
      console.log('error', error);
      // Error! Return the error with statusCode 400
      return callback(null, {
        statusCode: 400,
        body: JSON.stringify(error)
      });
    });

}

Let’s check our progress. We have a way to create new lolly pages in the database. And we’ve got an automated build which generates a page for every one of our lollies.

To ensure that there is a complete set of pre-generated pages for every lolly, we should trigger a rebuild whenever a new one is successfully added to the database. That is delightfully simple to do. Our build is already automated thanks to our static site generator. We just need a way to trigger it. With Netlify, we can define as many build hooks as we like. They are webhooks which will rebuild and deploy our site if they receive an HTTP POST request. Here’s the one I created in the site’s admin console in Netlify:

[Image: Netlify build hook]

To regenerate the site, including a page for each lolly recorded in the database, we can make an HTTP POST request to this build hook as soon as we have saved our new data to the database.

This is the code to do that:

const axios = require('axios'); // Simplify making HTTP POST requests

// Trigger a new build to freeze this lolly forever
axios.post('https://api.netlify.com/build_hooks/5d46fa20da4a1b70XXXXXXXXX')
.then(function (response) {
  // Report back in the serverless function's logs
  console.log(response);
})
.catch(function (error) {
  // Describe any errors in the serverless function's logs
  console.log(error);
});

You can see it in context, added to the success handler for the database insertion in the full code.
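
Pieced together, the success path of the handler ends up looking roughly like this (a condensed sketch of what’s in the repo):

client.query(q.Create(q.Ref('classes/lollies'), lolly))
  .then((response) => {
    // Trigger a rebuild so this lolly eventually gets its static page.
    // We don't wait for the build to finish; the 404 fallback covers the gap.
    axios.post('https://api.netlify.com/build_hooks/5d46fa20da4a1b70XXXXXXXXX');
    // Send the user straight to the new lolly's URL
    return callback(null, {
      statusCode: 302,
      headers: {
        Location: `/lolly/${uniquePath}`,
      }
    });
  });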

This is all great if we are happy to wait for the build and deployment to complete before we share the URL of our new lolly with its intended recipient. But we are not a patient lot, and when we get that nice new URL for the lolly we just created, we’ll want to share it right away.

Sadly, if we hit that URL before the site has finished regenerating to include the new page, we’ll get a 404. But happily, we can use that 404 to our advantage.

Optimistic URL routing and serverless fallbacks

With custom 404 routing, we can choose to send every failed request for a lolly page to a page which can look for the lolly data directly in the database. We could do that with client-side JavaScript if we wanted, but even better would be to generate a ready-to-view page dynamically from a serverless function.

Here’s how:

Firstly, we need to tell all those hopeful requests for a lolly page that come back empty to go instead to our serverless function. We do that with another rule in our Netlify redirects configuration:

# unfound lollies should proxy to the API directly
[[redirects]]
  from = "/lolly/*"
  to = "/.netlify/functions/showLolly?id=:splat"
  status = 302

This rule will only be applied if the request for a lolly page did not find a static page ready to be served. It creates a temporary redirect (HTTP 302) to our serverless function, which looks something like this:

const faunadb = require('faunadb');                  // For accessing FaunaDB
const pageTemplate = require('./lollyTemplate.js');  // A JS template literal

// setup and auth the Fauna DB client
const q = faunadb.query;
const client = new faunadb.Client({
  secret: process.env.FAUNADB_SERVER_SECRET
});

exports.handler = (event, context, callback) => {

  // get the lolly ID from the request
  const path = event.queryStringParameters.id.replace("/", "");

  // find the lolly data in the DB
  client.query(
    q.Get(q.Match(q.Index("lolly_by_path"), path))
  ).then((response) => {
    // if found return a view
    return callback(null, {
      statusCode: 200,
      body: pageTemplate(response.data)
    });

  }).catch((error) => {
    // not found or an error, send the sad user to the generic error page
    console.log('Error:', error);
    return callback(null, {
      body: JSON.stringify(error),
      statusCode: 301,
      headers: {
        Location: `/melted/index.html`,
      }
    });
  });
}
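
We haven’t seen lollyTemplate.js, but since it’s required as a JS template literal, a sketch might look along these lines (illustrative only; the field names match the form shown earlier, and the real module lives in the repo):

// lollyTemplate.js (sketch). Takes a lolly's data and returns a complete HTML page.
module.exports = ({ recipientName, message, sendersName }) => `<!DOCTYPE html>
<html>
  <head><title>A lolly for ${recipientName}</title></head>
  <body>
    <h1>To ${recipientName}</h1>
    <p>${message}</p>
    <p>From ${sendersName}</p>
  </body>
</html>`;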

If a request for any other page (not within the /lolly/ path of the site) should 404, we won’t send that request to our serverless function to check for a lolly. We can just send the user directly to a 404 page. Our [netlify.toml](https://github.com/philhawksworth/virtual-lolly/blob/master/netlify.toml) config lets us define as many levels of 404 routing as we’d like, by adding fallback rules further down in the file. The first successful match in the file will be honored.

# unfound lollies should proxy to the API directly
[[redirects]]
  from = "/lolly/*"
  to = "/.netlify/functions/showLolly?id=:splat"
  status = 302

# Real 404s can just go directly here:
[[redirects]]
  from = "/*"
  to = "/melted/index.html"
  status = 404

And we’re done! We’ve now got a site which is static first, and which will try to render content on the fly with a serverless function if a URL has not yet been generated as a static file.

Pretty snappy!

Supporting larger scale

Our technique of triggering a build to regenerate the lollipop pages every single time a new entry is created might not be optimal forever. While it’s true that the automation of the build means it is trivial to redeploy the site, we might want to start throttling and optimizing things when we start to get very popular. (Which can only be a matter of time, right?)

That’s fine. Here are a couple of things to consider when we have very many pages to create, and more frequent additions to the database:

  • Instead of triggering a rebuild for each new entry, we could rebuild the site as a scheduled job. Perhaps this could happen once an hour or once a day (see the sketch after this list).
  • If building once per day, we might decide to only generate the pages for new lollies submitted in the last day, and cache the pages generated each day for future use. This kind of logic in the build would help us support massive numbers of lolly pages without the build getting prohibitively long. But I’ll not go into intra-build caching here. If you are curious, you could ask about it over in the Netlify Community forum.
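
As an illustration of that first idea, anything that can make a scheduled HTTP POST will do. A hypothetical GitHub Actions workflow, for example, could hit the build hook on a cron (this workflow is an assumption for illustration, not part of the project; the hook ID secret name is made up too):

# .github/workflows/rebuild.yml - a hypothetical scheduled rebuild trigger
name: scheduled-rebuild
on:
  schedule:
    - cron: "0 * * * *" # once an hour
jobs:
  trigger:
    runs-on: ubuntu-latest
    steps:
      # POST an empty payload to the Netlify build hook to start a build
      - run: curl -s -X POST -d '{}' "https://api.netlify.com/build_hooks/${{ secrets.BUILD_HOOK_ID }}"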

By combining static, pre-generated assets with serverless fallbacks for dynamic rendering, we can satisfy a surprisingly broad set of use cases, all while avoiding the need to provision and maintain lots of dynamic infrastructure.

What other use cases might you be able to satisfy with this “Static First” approach?

#javascript #serverless #web-service #microservices #web-development

Hermann Frami

Serverless Plugin for Microservice Code Management and Deployment

Serverless M

Serverless M (or Serverless Modular) is a plugin for the Serverless framework. This plugin helps you manage multiple serverless projects with a single serverless.yml file. It gives you supercharged CLI options that you can use to create new features, build them in a single file, and deploy them all in parallel.

Currently this plugin is tested with the below stack only:

  • AWS
  • NodeJS λ
  • Rest API (You can use other events as well)

Prerequisites

Make sure you have the serverless CLI installed

# Install serverless globally
$ npm install serverless -g

Getting Started

To start a serverless modular project locally, you can either start with the ES5 or ES6 templates, or add it as a plugin to an existing project.

ES6 Template install

# Step 1. Download the template
$ sls create --template-url https://github.com/aa2kb/serverless-modular/tree/master/template/modular-es6 --path myModularService

# Step 2. Change directory
$ cd myModularService

# Step 3. Create a package.json file
$ npm init

# Step 4. Install dependencies
$ npm i serverless-modular serverless-webpack webpack --save-dev

ES5 Template install

# Step 1. Download the template
$ sls create --template-url https://github.com/aa2kb/serverless-modular/tree/master/template/modular-es5 --path myModularService

# Step 2. Change directory
$ cd myModularService

# Step 3. Create a package.json file
$ npm init

# Step 4. Install dependencies
$ npm i serverless-modular --save-dev

If you don’t want to use the templates above, you can just add the plugin to your existing project.

Adding it as plugin

plugins:
  - serverless-modular

Now you’re all set to start building your serverless modular functions.

API Reference

The serverless CLI can be accessed by

# Serverless Modular CLI
$ serverless modular

# shorthand
$ sls m

Serverless Modular CLI is based on five main commands:

  • sls m init
  • sls m feature
  • sls m function
  • sls m build
  • sls m deploy

init command

sls m init

The serverless init command helps in creating a basic .gitignore that is useful for serverless modular.

The basic .gitignore for serverless modular looks like this

#node_modules
node_modules

#sm main functions
sm.functions.yml

#serverless file generated by build
src/**/serverless.yml

#main serverless directories generated for sls deploy
.serverless

#feature serverless directories generated sls deploy
src/**/.serverless

#serverless logs file generated for main sls deploy
.sm.log

#serverless logs file generated for feature sls deploy
src/**/.sm.log

#Webpack config copied in each feature
src/**/webpack.config.js

feature command

The feature command helps in building new features for your project

options (feature Command)

This command comes with three options

--name: Specify the name you want for your feature

--remove: set value to true if you want to remove the feature

--basePath: Specify the base path you want for your feature. This base path should be unique across all features; it helps when running offline with the offline plugin and for API Gateway

| options | shortcut | required | values | default value |
| --- | --- | --- | --- | --- |
| --name | -n |  | string | N/A |
| --remove | -r |  | true, false | false |
| --basePath | -p |  | string | same as name |

Examples (feature Command)

Creating a basic feature

# Creating a jedi feature
$ sls m feature -n jedi

Creating a feature with different base path

# A feature with different base path
$ sls m feature -n jedi -p tatooine

Deleting a feature

# Anakin is going to delete the jedi feature
$ sls m feature -n jedi -r true

function command

The function command helps in adding a new function to a feature

options (function Command)

This command comes with four options

--name: Specify the name you want for your function

--feature: Specify the name of the existing feature

--path: Specify the path for the HTTP endpoint. This helps when running offline with the offline plugin and for API Gateway

--method: Specify the HTTP method. This helps when running offline with the offline plugin and for API Gateway

| options | shortcut | required | values | default value |
| --- | --- | --- | --- | --- |
| --name | -n |  | string | N/A |
| --feature | -f |  | string | N/A |
| --path | -p |  | string | same as name |
| --method | -m |  | string | 'GET' |

Examples (function Command)

Creating a basic function

# Creating a cloak function for jedi feature
$ sls m function -n cloak -f jedi

Creating a basic function with different path and method

# Creating a cloak function for jedi feature with custom path and HTTP method
$ sls m function -n cloak -f jedi -p powers -m POST

build command

The build command helps in building the project for local or global scope

options (build Command)

This command comes with two options

--scope: Specify the scope of the build; use this with the "--feature" option

--feature: Specify the name of the existing feature you want to build

| options | shortcut | required | values | default value |
| --- | --- | --- | --- | --- |
| --scope | -s |  | string | local |
| --feature | -f |  | string | N/A |

Saving build Config in serverless.yml

You can also save config in serverless.yml file

custom:
  smConfig:
    build:
      scope: local

Examples (build Command)

All features build (local scope)

# Building all local features
$ sls m build

Single feature build (local scope)

# Building a single feature
$ sls m build -f jedi -s local

All features build (global scope)

# Building all features with global scope
$ sls m build -s global

deploy command

The deploy command helps in deploying serverless projects to AWS (it uses the sls deploy command)

options (deploy Command)

This command comes with four options

--sm-parallel: Specify if you want to deploy in parallel (will only run in parallel when doing multiple deployments)

--sm-scope: Specify if you want to deploy local features or global

--sm-features: Specify the local features you want to deploy (comma separated if multiple)

| options | shortcut | required | values | default value |
| --- | --- | --- | --- | --- |
| --sm-parallel |  |  | true, false | true |
| --sm-scope |  |  | local, global | local |
| --sm-features |  |  | string | N/A |
| --sm-ignore-build |  |  | string | false |

Saving deploy Config in serverless.yml

You can also save config in serverless.yml file

custom:
  smConfig:
    deploy:
      scope: local
      parallel: true
      ignoreBuild: true

Examples (deploy Command)

Deploy all features locally

# deploy all local features
$ sls m deploy

Deploy all features globally

# deploy all global features
$ sls m deploy --sm-scope global

Deploy single feature

# deploy a single feature
$ sls m deploy --sm-features jedi

Deploy Multiple features

# deploy multiple features
$ sls m deploy --sm-features jedi,sith,dark_side

Deploy Multiple features in sequence

# deploy multiple features in sequence
$ sls m deploy --sm-features jedi,sith,dark_side --sm-parallel false

Author: aa2kb
Source Code: https://github.com/aa2kb/serverless-modular 
License: MIT license

#serverless #aws #node #lambda 

Hermann Frami

Serverless Plugin: AWS API Gateway integration Helper

Serverless Plugin: AWS Api Gateway integration helper

Overview

The plugin provides the functionality to merge OpenAPI specification files (formerly known as Swagger) with one or multiple YML files containing the x-amazon-apigateway extensions. There are several use cases for keeping this information separated; for example, you may need to deploy different API Gateway integrations depending on the stage environment.

When dealing with functional tests you do not want to test the production environment, but only a mocking response.

The plugin supports YML-based OpenApi 3 specification files only

Features

  • deploy stage dependent x-amazon-apigateway integrations
  • separate infrastructure (aws) from openapi specification
  • use mock integrations for functional testing
  • auto-generating CORS methods, headers and api gateway mocking response
  • hook into the package & deploy lifecycle and generate combined openApi files on the fly during deployment
  • auto-inject generated openApi file into the Body property of specified API Gateway
  • generate mocking responses without specifying x-amazon-apigateway-integration objects
  • generate request-validation blocks
  • generate all required x-amazon-apigateway-integration objects automatically
  • full proxy generation support with the [NEW] Proxy Manager feature

See the examples folder for a full working example

Installation & Setup

Run npm install in your Serverless project.

$ npm install --save-dev serverless-openapi-integration-helper

Add the plugin to your serverless.yml file

plugins:
  - serverless-openapi-integration-helper

Plugin configuration

You can configure the plugin under the key openApiIntegration. See the Configuration Reference for a list of available options.

The mapping array must be used to configure where the files containing the x-amazon-apigateway-integration blocks are located.

openApiIntegration:
    package: true #New feature! Hook into the package & deploy process
    inputFile: schema.yml
    mapping:
       - stage: [dev, prod] #multiple stages
         path: integrations
       - stage: test #single stage
         path: mocks

In the above example all YML files inside the integrations directory will be processed and merged with the schema.yml file when deploying the dev stage

serverless deploy --stage=dev

To use a different x-amazon-apigateway integration for functional tests (e.g. with mocking responses), the directory mocks is processed and merged with the schema.yml file when deploying the test stage

serverless deploy --stage=test

Usage

You can set up a fully working API Gateway with any OpenApi 3.0 specification file. First, create the input file containing the OpenApi specification:

# ./schema.yml
openapi: 3.0.0
info:
  description: User Registration
  version: 1.0.0
  title: UserRegistration
paths:
  /api/v1/user:
    post:
      summary: adds a user
      requestBody:
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/Customer'
      responses:
        '201':
          description: user created
components:
  schemas:
    Customer:
      type: object
      required:
        - email_address
        - password
      properties:
        email_address:
          type: string
          example: test@example.com
        password:
          type: string
          format: password
          example: someStrongPassword#

The plugin will generate the x-amazon-apigateway integration objects for all methods that do not have an integration.

#generate a file containing a gateway mock integration in the directory /mocks
serverless integration create --output mocks --type mock --stage=test

#generate a file containing the production integration in the directory integrations/
serverless integration create --output integrations --type http --stage=prod

Supported types are

  • http_proxy
  • http
  • aws
  • aws_proxy
  • mock

The plugin now generates a merged file during deployment that is automatically injected into your serverless resources

#Create OpenApi File containing mocking responses (usable in functional tests) and deploy to ApiGateway
serverless deploy --stage=test
#Create OpenApi File containing the production integration and deploy to ApiGateway
serverless deploy --stage=prod

The generated output is automatically injected in the resources.Resources.YOUR_API_GATEWAY.Properties.Body property

resources:
  Resources:
    ApiGatewayRestApi:
      Type: AWS::ApiGateway::RestApi
      Properties:
        ApiKeySourceType: HEADER
        Body: ~ #autogenerated by plugin
        Description: "Some Description"
        FailOnWarnings: false
        Name: ${opt:stage, self:provider.stage}-some-name
        EndpointConfiguration:
          Types:
            - REGIONAL
    ApiGatewayDeployment:
      Type: AWS::ApiGateway::Deployment
      Properties:
        RestApiId:
          Ref: ApiGatewayRestApi
        StageName: ${opt:stage, self:provider.stage}

Commands

Manual merge

The merge command can be used independently with

serverless integration merge --stage=dev

Of course, the API Gateway Body property then has to be specified manually:

resources:
  Resources:
    ApiGatewayRestApi:
      Type: AWS::ApiGateway::RestApi
      Properties:
        ApiKeySourceType: HEADER
        Body: ${file(openapi-integration/api.yml)}

CORS generator

The plugin can generate full CORS support out of the box.

openApiIntegration:
  cors: true
  ...

If enabled, the plugin generates all required OPTIONS methods as well as the required header information and adds a mocking response to API Gateway. You can customize the CORS templates by placing your own files inside a directory openapi-integration (in your project root). The following files can be overwritten:

| Filename | Description |
| --- | --- |
| headers.yml | All headers required for CORS support |
| integration.yml | Contains the x-amazon-apigateway-integration block |
| path.yml | OpenApi specification for the OPTIONS method |
| response-parameters.yml | The response parameters of the x-amazon-apigateway-integration responses |

See the EXAMPLES directory for detailed instructions.
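
To give an idea of the generated output, an auto-generated OPTIONS method might look roughly like this (an illustrative sketch; the actual templates ship with the plugin and may differ):

/api/v1/user:
  options:
    summary: CORS support
    responses:
      '200':
        description: Default response for the CORS method
        headers:
          Access-Control-Allow-Origin:
            schema:
              type: string
    x-amazon-apigateway-integration:
      type: mock
      requestTemplates:
        application/json: '{"statusCode": 200}'
      responses:
        default:
          statusCode: "200"
          responseParameters:
            method.response.header.Access-Control-Allow-Origin: "'*'"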

Auto Mock Generator

If enabled, the plugin generates mocking responses for all methods that do not have an x-amazon-apigateway-integration block defined. It takes the first 2xx response defined in the openApi specification and generates a simple mocking response on the fly

openApiIntegration:
  autoMock: true
  ...
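
For illustration, the kind of x-amazon-apigateway-integration block the auto-mock generator might emit for the POST /api/v1/user endpoint above could look like this (a sketch based on its first 2xx response, here 201; the actual output may differ):

x-amazon-apigateway-integration:
  type: mock
  requestTemplates:
    application/json: '{"statusCode": 201}'
  responses:
    default:
      statusCode: "201"
      responseTemplates:
        application/json: '{}'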

When using the autoMock feature, you do not need to specify mapping paths, since all endpoints are mocked automatically

openApiIntegration:
    package: true
    inputFile: schema.yml
    mapping: ~

VALIDATION generator

The plugin supports full request validation out of the box

openApiIntegration:
  validation: true
  ...

If enabled, the plugin generates the x-amazon-apigateway-request-validators blocks and adds a basic request validation to all methods. You can customize the VALIDATION template by placing your own files inside a directory openapi-integration (in your project root). The following files can be overwritten:

| Filename | Description |
| --- | --- |
| request-validator.yml | The x-amazon-apigateway-request-validators block |

See the EXAMPLES directory for detailed instructions.
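
For reference, the blocks this produces are the standard API Gateway extensions; a basic setup might look like this (a sketch; the actual template ships with the plugin):

# declared once at the top level of the OpenApi document
x-amazon-apigateway-request-validators:
  basic:
    validateRequestBody: true
    validateRequestParameters: true

# then referenced from each method
x-amazon-apigateway-request-validator: basic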

Proxy Manager

The proxyManager feature automates the complete generation of an HTTP proxy integration. You only have to define the target URL, and all necessary AWS integration blocks are generated on the fly during deployment.

openApiIntegration:
   cors: true
   validation: true
   mapping:
      - stage: [dev, prod]
        proxyManager:
           type: http_proxy
           baseUrl: https://www.example.com
           pattern: "(?<=api\/v1)\/.+"
  ...

With this setting, no separate integration files need to be created

A combination of your own and auto-generated files is still possible without any problems

Proxy Manager configuration

type

At the moment, only http_proxy is supported.

baseUrl

The base URL is required to map the path from the OpenApi specification to the URI of the AWS integration.

Example:

#original openapi specification
paths:
   /api/v1/user:
    post:
      ... 

will be translated to

#generated openapi specification output
paths:
   /api/v1/user:
      post:
         ...
         x-amazon-apigateway-integration:
            type: http_proxy
            passthroughBehavior: when_no_match
            httpMethod: POST
            uri: https://www.example.com/api/v1/user

pattern

The pattern can be used to adapt the mapping of the base URL using a regexp, e.g. to remove a prefix or a version string.

Example:

baseUrl: https://www.example.com
pattern: "(?<=api\/v1)\/.+"

will translate the route /api/v1/user to https://www.example.com/user
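
The lookbehind in that pattern keeps everything from the slash after api/v1 onwards. A quick way to sanity-check the idea in Node (illustrative only, not part of the plugin):

// Quick sanity check of the pattern logic (illustrative)
const pattern = /(?<=api\/v1)\/.+/;
const route = '/api/v1/user';
const suffix = route.match(pattern)[0]; // "/user"
console.log('https://www.example.com' + suffix); // https://www.example.com/user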

Configuration Reference

Configure the plugin under the key openApiIntegration:

openApiIntegration:
  inputFile: schema.yml #required
  package: true #optional, defaults to false 
  inputDirectory: ./ #optional, defaults to ./
  cors: true #optional, defaults to false
  autoMock: true #optional, defaults to false
  validation: true #optional, defaults to false
  mapping: #optional, can be completely blank if autoMock option is enabled
    - stage: [dev, prod] #multiple stages
      path: integrations
      proxyManager: #optional
         type: http_proxy
         baseUrl: https://example.com
         pattern: "(?<=v1)\/.+"
    - stage: test #single stage
      path: mocks/customer.yml
  outputFile: api.yml #optional, defaults to api.yml
  outputDirectory: openapi-integration #optional, defaults to ./openapi-integration

Known Issues

Stage deployment

When using the Serverless framework only to deploy your AWS resources, without any Lambda functions or triggers, the API Gateway deployment does not behave as expected. Any deployment to an existing stage will be ignored, since CloudFormation does not redeploy a stage if the DeploymentIdentifier has not changed.

The plugin serverless-random-gateway-deployment-id solves this problem by adding a random id to the deployment-name and all references to it on every deploy

See the examples folder for a full working example

Variable Resolving

Serverless variables inside the OpenApi integration files are not resolved correctly when using the package & deploy hooks. This problem can be solved by using API Gateway stage variables.

See the examples folder for a full working example

Example

service:
  name: user-registration

provider:
  name: aws
  stage: dev
  region: eu-central-1

plugins:
  - serverless-openapi-integration-helper
  
openApiIntegration:
  inputFile: schema.yml
  package: true
  mapping:
    - path: integrations
      stage: [dev, prod]
    - path: mocks/customer.yml
      stage: test

functions:

resources:
  Resources:
    ApiGatewayRestApi:
      Type: AWS::ApiGateway::RestApi
      Properties:
        ApiKeySourceType: HEADER
        Body: ~
        Description: "Some Description"
        FailOnWarnings: false
        Name: ${opt:stage, self:provider.stage}-some-name
        EndpointConfiguration:
          Types:
            - REGIONAL
    ApiGatewayDeployment:
      Type: AWS::ApiGateway::Deployment
      Properties:
        RestApiId:
          Ref: ApiGatewayRestApi
        StageName: ${opt:stage, self:provider.stage}

serverless deploy --stage=test
serverless deploy --stage=prod

Approach to a functional test of schema validation

The plugin works well in combination with the serverless-plugin-test-helper to automate tests against the deployed API Gateway.

Install the plugin test helper package:

npm install --save-dev serverless-plugin-test-helper

Add the plugin as a dependency in your serverless configuration file and configure it according to its README:

#./serverless.yml
plugins:
  - serverless-plugin-test-helper
  - serverless-openapi-integration-helper

[...]

resources:
  Outputs:
    GatewayUrl: # This is the key that will be used in the generated outputs file
      Description: This is a helper for functional tests
      Value: !Join
        - ''
        - - 'https://'
          - !Ref ApiGatewayRestApi
          - '.execute-api.'
          - ${opt:region, self:provider.region}
          - '.amazonaws.com/'
          - ${opt:stage, self:provider.stage}

  Resources:
    ApiGatewayRestApi:
      Type: AWS::ApiGateway::RestApi
      Properties:
        ApiKeySourceType: HEADER
        Body: ~
        Description: User Registration (${opt:stage, self:provider.stage})
        FailOnWarnings: false
        Name: ${opt:stage, self:provider.stage}-gateway
        EndpointConfiguration:
          Types:
            - REGIONAL
    ApiGatewayDeployment:
      Type: AWS::ApiGateway::Deployment
      Properties:
        RestApiId:
          Ref: ApiGatewayRestApi
        StageName: ${opt:stage, self:provider.stage}

Testing the schema validation

Add a functional test (e.g., with Jest):

//tests/registration.js
import {getOutput} from 'serverless-plugin-test-helper';
import axios from 'axios';

axios.defaults.adapter = require('axios/lib/adapters/http'); // force the Node.js http adapter so the request hits the real gateway instead of a mocked XHR

const URL = getOutput('GatewayUrl');
test('request validation on registration', async () => {
    expect.assertions(1);
    const {status} = await axios.post(URL + '/api/v1/user',
        {
            "email_address": "test@example.com",
            "password": "someStrongPassword#"
        },
        {
            headers: {
                'Content-Type': 'application/json',
            }
        });
    expect(status).toEqual(201);
});

test('request validation on registration (invalid request)', async () => {
    expect.assertions(1);
    try {
        await axios.post(URL + '/api/v1/user',
            {
                "email": "test@example.com"
            },
            {
                headers: {
                    'Content-Type': 'application/json',
                }
            });
    } catch (e) {
        expect(e.response).toMatchObject({
            statusText: 'Bad Request',
            status: 400
        });
    }
});

Then run the functional tests:

serverless deploy --stage=test
npm test
serverless remove --stage=test

These commands will

  • merge the OpenAPI specification file with the MOCK integration configured before
  • deploy to API Gateway in an isolated TEST infrastructure environment (your other environments are not affected, since we deploy to a separate gateway)
  • run the tests and verify that the schema validation is correct
  • remove all TEST resources if the tests succeed

See the examples folder for a full working example.

Feedback is appreciated! If you have an idea for how this plugin/library can be improved (or even just a complaint or criticism), please open an issue.

Author: yndlingsfar
Source Code: https://github.com/yndlingsfar/serverless-openapi-integration-helper 
License: MIT license

#serverless #openapi #aws 

amelia jones

1591340335

How To Take Help Of Referencing Generator

APA Referencing Generator

Many students use APA as the main citation style in their university or college assignments. However, many people find it difficult to write references for their sources correctly; it is easy to miss authors' names and dates. An APA referencing generator reduces this burden, making it much easier for students to finish assignments on time.

The functioning of APA referencing generator

If you are struggling to write APA references, you can take the help of an APA referencing generator. It will create a well-formed reference list; you only have to enter the information about each source. Just ensure that the text itself is credible and original, since copying references wholesale is a copyright violation.

You can use a referencing generator with just a click. It will generate the right references for all your sources; you then only need to organize them in alphabetical order. A correctly formatted reference list also helps you earn better grades.

How to use APA referencing generator?

Select what needs to be cited, such as a journal, book, or film. Choose the type of citation list required and fill in all the required fields: dates, author name, title, editor name, edition, publisher name, chapter number, page numbers, and journal title. Click to generate the reference and you will get the desired result.

Chicago Referencing Generator

Do you need the Chicago citation style? You can rely on a Chicago Referencing Generator to get the right citation in just a click. The generator was created to help students cite their research papers in Chicago style, and it has proved to be one of the quickest citation generators on the market. It helps sort out homework citation issues in a few seconds, saving a lot of time and energy.

This tool helps researchers, professional writers, and students manage and generate in-text citations for essays. It makes writing Chicago style fast and easy, and it also provides details and directions for formatting and citing resources.

So, stop wasting time and try a Chicago referencing generator or an APA referencing generator. These citation generators help solve citation problems, and you can easily create citations using endnotes and footnotes.

In this way you can generate bibliographies, references, in-text citations, and title pages. The referencing styles are fully automatic: you only have to enter certain details about the source and you will get the citation in the proper, required format.

So, if you are facing any problem in doing an assignment, you can take assignment help.
If you require help with an assignment, then livewebtutors is the right place for you. If you look at our prices, you will see that they are actually very affordable, and you can always expect a discount. Our team is capable and versatile enough to offer you exactly what you need: the best services at prices you can afford.

Read more: Are you struggling to write a bibliography? Use a Harvard referencing generator.

#apa referencing generator #harvard referencing generator #chicago referencing generator #mla referencing generator #deakin referencing generator #oxford referencing generator

Serverless Applications - Pros and Cons to Help Businesses Decide - Prismetric

In the past few years, especially after Amazon Web Services (AWS) introduced its Lambda platform, serverless architecture became the business realm’s buzzword. The increasing popularity of serverless applications saw market leaders like Netflix, Airbnb, Nike, etc., adopting the serverless architecture to handle their backend functions better. Moreover, serverless architecture’s market size is expected to reach a whopping $9.17 billion by the year 2023.

[Figure: Global Serverless Architecture Market, 2019–2023]

Why use serverless computing?
As a business, it is best to approach a professional mobile app development company to build and deploy your apps; nevertheless, businesses should understand that the benefit of serverless applications lies in the practical business implementations they enable, not in the hype created by cloud vendors. With serverless architecture, developers can run arbitrary code on demand without worrying about the underlying hardware.

But as is the case with all game-changing trends, many businesses opt for serverless applications just for the sake of keeping up with their peers, without considering the actual needs of their business.

Serverless applications work well for stateless use cases: those that execute cleanly and hand off to the next operation in a sequence. On the other hand, serverless architecture is not a good fit for predictable workloads with a lot of reading and writing in the backend system.

Another benefit of working with serverless software architecture is that the third-party service provider charges based on the total number of requests. As the number of requests increases, the charge increases, but it will still cost significantly less than a dedicated IT infrastructure.

Defining serverless software architecture
In serverless software architecture, the application logic is implemented in an environment where operating systems, servers, and virtual machines are not visible to the developer. The application logic still runs on an operating system on physical servers, but managing that infrastructure is the sole responsibility of the service provider; the mobile app developer focuses only on writing code.

There are two different approaches when it comes to serverless applications. They are

Backend as a service (BaaS)
Function as a service (FaaS)

  1. Backend as a service (BaaS)
    A growing number of third-party services provide server-side logic and maintain their internal state for you. This has led to applications that have no application-specific server-side logic of their own and depend on third-party services for everything.

Examples of such third-party services are Auth0 and AWS Cognito (authentication as a service), Amazon Kinesis and Keen IO (analytics as a service), and many more.

  2. Function as a Service (FaaS)
    FaaS is the modern alternative to traditional architecture when the application still requires server-side logic. With Function as a Service, the developer can focus on implementing stateless functions that are triggered by events and communicate efficiently with the external world.

FaaS serverless architecture is mostly used together with a microservices architecture, since each small unit of logic maps naturally onto an independent function. AWS Lambda, Google Cloud Functions, etc., are some examples of FaaS implementations.
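
To make this concrete, here is a minimal sketch of such a stateless, event-triggered function as it might look on AWS Lambda's Node.js runtime (illustrative only):

// Receives an event, computes a response, and keeps no state between calls.
exports.handler = async (event) => {
    const name = (event.queryStringParameters || {}).name || 'world';
    return {
        statusCode: 200,
        body: JSON.stringify({message: `Hello, ${name}!`}),
    };
};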

Pros of serverless applications
There are specific ways in which serverless applications can redefine how business is done in the modern age, and they have some distinct advantages over traditional cloud platforms. Here are a few –

🔹 Highly Scalable
The flexible nature of serverless architecture makes it ideal for scaling applications. The vendor runs each function in a separate container, which allows functions to be optimized automatically and effectively. Moreover, unlike in the traditional cloud, one doesn't need to purchase a fixed amount of resources up front; serverless applications can be as flexible as demand requires.

🔹 Cost-Effective
As organizations don't need to spend hundreds of thousands of dollars on hardware, they also don't need to pay engineers to maintain that hardware. The serverless pricing model is execution based: the organization is charged according to the executions it has made.

The company that uses serverless applications is allotted a specific amount of execution time, and the price of an execution depends on the memory required. Different types of costs associated with a physical or virtual server (presence detection, access authorization, image processing, etc.) are completely eliminated with serverless applications.
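
As a rough, purely illustrative calculation (the rates below are assumptions, not any vendor's current prices), execution-based pricing can be estimated like this:

// Illustrative cost model: charged per request and per GB-second of compute.
const requests = 1000000;             // invocations per month
const avgDurationSec = 0.2;           // average execution time in seconds
const memoryGb = 0.5;                 // allocated memory in GB
const pricePerGbSecond = 0.0000167;   // assumed compute rate
const pricePerMillionRequests = 0.20; // assumed request rate

const gbSeconds = requests * avgDurationSec * memoryGb; // 100,000 GB-seconds
const monthlyCost = gbSeconds * pricePerGbSecond
    + (requests / 1000000) * pricePerMillionRequests;

console.log(monthlyCost.toFixed(2)); // ~1.87 (dollars per month)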

🔹 Focuses on user experience
Since companies don't have to constantly think about maintaining servers, they can focus on more productive things, like developing and improving customer-facing features. A recent survey says that about 56% of users are either using or planning to use serverless applications in the coming six months.

Moreover, the money companies save with serverless apps, since they don't have to maintain any hardware, can be put into enhancing customer service and app features.

🔹 Ease of migration
It is easy to get started with serverless applications by porting individual features and operating them as on-demand events. For example, in a CMS, a video plugin requires transcoding video into different formats and bitrates. Doing this on a WordPress server might not be a good fit, as it would consume resources that should be dedicated to serving pages rather than encoding video.

Serverless functions, on the other hand, are an optimal way to handle that metadata creation and encoding. Similarly, serverless apps can take over from other plugins that are often prone to critical vulnerabilities.

Cons of serverless applications
Despite having some clear benefits, serverless applications are not suitable for every single use case. We have listed the top things that an organization should keep in mind when opting for serverless applications.

🔹 Complete dependence on third-party vendor
In the realm of serverless applications, the third-party vendor is king, and organizations have no option but to play by its rules. For example, if an application is built on Lambda, it is not easy to port it to Azure. The same is the case for coding languages: at present, mostly Python and Node.js developers have the luxury of choosing between the existing serverless options.

Therefore, if you are planning to use serverless applications for your next project, make sure that your vendor offers everything needed to complete the project.

🔹 Challenges in debugging with traditional tools
It isn't easy to debug serverless applications, especially large enterprise applications composed of many individual functions. Traditional tools provide no option to attach a debugger in the public cloud, so the organization must either debug locally or rely on logging. In addition, the DevOps tools around serverless applications do not yet support quickly deploying small bits of code into running applications.

#serverless-application #serverless #serverless-computing #serverless-architeture #serverless-application-prosand-cons