AWS Lambda vs. Azure Functions vs. Google Functions

There is no doubt that serverless is a growing trend. This article reviews the three big players so you can more clearly decide which one to use.

Serverless has been around for more than two years now and is not a new phenomenon. Our recent DevOps Pulse Survey showed, however, that only 30% of respondents are currently using serverless. As opposed to Docker, for example, which was quickly adopted and has transformed the way developers build and deploy code, serverless is taking its time.

Still, there is little doubt that the general usage trend is pointing upwards.

So, What Is Serverless?

Serverless allows developers to build and run applications and services without thinking about the servers actually running the code. Serverless services, or FaaS (Functions-as-a-Service) providers, implement this concept by allowing developers to upload their code while taking care of deploying, running, and scaling it. Thus, serverless can help create an environment that allows DevOps teams to focus on improving code, processes, and upgrade procedures, instead of on provisioning, scaling, and maintaining servers.

Amazon was the first cloud provider to offer a full serverless platform with Lambda and was followed by Microsoft and Google who introduced Azure Functions and Cloud Functions, respectively.

Today, these are the three main providers of serverless platforms, with huge IT organizations such as Netflix, Dropbox, and others building their entire backend services on top of them.

AWS Lambda

AWS is regarded as the groundbreaking innovator that created the concept of serverless and has integrated it completely as part of its cloud portfolio.

Lambda has native support for code written in JavaScript, Python, Java (Java 8 compatible), and C#, and allows the creation of a wrapper that can be added to Go, PHP, or Ruby projects, thus allowing execution of the code when triggered.

Lambda is positioned at the heart of AWS’s offering, acting in some ways as a gateway to almost every other cloud service it provides. Integration with S3 and Kinesis allows log analysis and on-the-fly filtering or image processing and backup — triggered by activities in these AWS services. The DynamoDB integration adds another layer of triggers for operations performed outside the real-time ecosystem. Lambda can also act as the full backend service of a web, mobile, or IoT application — receiving requests from the client through the Amazon API Gateway, which converts these requests into API calls that are later translated to predefined triggers running specific functions.

A fundamental problem for applications developed using a serverless architecture is maintaining the state of a function. As one of the main reasons to use serverless is cost reduction — through paying only for function execution duration — the natural behavior of the platform is to shut down functions as soon as they have completed their task. This presents a difficulty for developers, as functions cannot use other functions’ knowledge without having a third-party tool to collect and manage it. Step Functions, a new module recently released by Amazon, addresses this exact challenge. It helps by logging the state of each function, so it can be used by other functions or for root-cause analysis.

Managing your functions in Lambda allows you to invoke them from an endless list of sources. Amazon services such as CodeCommit and CloudWatch Logs can trigger events for new or tracked deployment processes, AWS Config API can trigger events for evaluating whether AWS resource configurations comply with pre-defined rules, and even Amazon’s Alexa can trigger Lambda events and analyze voice-activated commands. Amazon also allows users to define workflows for more complicated triggers, including decision trees for different business use-cases originating from the same function.

An interesting trend based around Lambda is the development of Lambda frameworks. Several companies and individual contributors developed and open-sourced code that helps with building and deploying event-driven functions. These frameworks give developers a template into which code is inserted, and bring a built-in integration with the other Amazon services.

Amazon created serverless to save money, and IT organizations adopt it for the same reason. As concurrent servers are no longer required, resource usage drops significantly, and the cost is calculated according to execution time. Amazon charges for execution in 100 ms units. The first one million requests are free each month, and every 100,000 requests after that cost $0.04 — divided between request and compute charges.
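A hypothetical back-of-envelope estimator makes the arithmetic concrete. It uses only the figures quoted above (one million free requests, then $0.04 per 100,000 requests, billed in 100 ms units); real AWS pricing also factors in allocated memory, so treat this as an illustration, not a billing tool:

```javascript
// Back-of-envelope Lambda cost sketch based on the rates quoted above.
// Returns cents so the arithmetic stays exact. The combined rate is an
// assumption for illustration; real AWS bills request and compute
// (memory-dependent) charges separately.
function estimateMonthlyCostCents(requests, avgDurationMs) {
  const FREE_REQUESTS = 1_000_000;
  const CENTS_PER_100K = 4; // assumed combined request + compute rate
  const billable = Math.max(0, requests - FREE_REQUESTS);
  // Duration is rounded up to the nearest 100 ms billing unit.
  const units = Math.ceil(avgDurationMs / 100);
  return (billable / 100_000) * CENTS_PER_100K * units;
}

console.log(estimateMonthlyCostCents(3_000_000, 250)); // 240
```

Under these assumed rates, three million invocations averaging 250 ms come to 240 cents, about $2.40 — tiny, until request counts or durations spike.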

These figures might seem meager, but if execution time exceeds goals, request numbers spike, and triggers are not monitored, costs can multiply by hundreds of percent and diminish the value of serverless.

Logs and metrics are critically important, and tools that process and analyze them to produce valuable information must be part of the ecosystem. The ELK Stack is the most common method used for log analysis, and as such is now being used for Lambda function analysis as well. Other monitoring tools, such as Datadog and New Relic, are also used, displaying usage metrics and giving a real-time view of traffic and function utilization. Some tools even allow alerts to be generated when latency, load, or network throughput thresholds are exceeded.

Azure Functions

Azure Functions, Microsoft’s counterpart of Lambda, was introduced only at the end of March 2016. Microsoft is working hard to close the functionality and conceptual gap with Amazon, but lacking its competitor’s broad cloud portfolio, Functions has a narrower scope in terms of overall functionality.

However, it does offer powerful integrations and practical functionality. Microsoft allows functions to be coded in its native languages — C# and F# — inside the web functions editor, or they can be written and uploaded using common script-based options — Bash, Batch, and PowerShell. Developers are also able to write functions in JavaScript or Python.

Azure has out-of-the-box integrations with VS Team Services, GitHub, and Bitbucket, allowing easy setup of a continuous integration process and the deploying of code in the cloud.

Azure Functions supports several types of event triggers. Cron jobs enable timer-based events for scheduled tasks, while events on Microsoft’s SaaS services, for example, OneDrive or SharePoint, can be configured to trigger operations in Functions. Common triggers for real-time processing of data or files add the ability to operate a serverless bot that uses Cortana as the information provider.

But Microsoft hasn’t stopped there and is now attempting to address the needs of less technical users, involving them in the process by making serverless simpler and more approachable for non-coders. Logic Apps, which according to Microsoft is the “workflow orchestration engine in the cloud,” is designed to allow business-oriented personnel to define and manage data processing tasks and to set workflow paths.

Microsoft has taken the same pricing approach as Amazon, calculating the total cost from both the number of triggers and execution time. The same prices also apply, with the first one million requests being free, while exceeding this limit will cost $0.02 for every 100,000 executions and another $0.02 per 100,000 GB-seconds of compute.

Microsoft understands the importance and complexity of monitoring resources through logs and metrics in a serverless environment, and also that this is something missing from its offering. For this reason, it is currently in the process of purchasing Cloudyn — a cloud management and, more importantly, cost optimization software producer. This purchase, once confirmed, will give Microsoft’s users the added value of understanding how the business manages resource usage and allow them to make conscious decisions on utilization and improving cloud efficiency.

Google Cloud Functions

Google was the last to join the serverless scene. Its current support is very limited, allowing functions to be written only in JavaScript, and triggering events solely on Google’s internal event bus — Cloud Pub/Sub topics. HTTP triggers are supported as well, as are mobile events from Firebase.

Google is still missing some important integrations with storage and other cloud services that help with business-related triggers, but this isn’t the most problematic part. Google restricts projects to fewer than 20 triggers.

Monitoring is enabled via the Stackdriver logging tool, which is very handy and easy to use, but does not supply all the information and metrics serverless users require.

On top of all the missing functionality and limited capabilities, Google charges the highest price for function triggering. After the one million free requests are used, its prices are double those of the other providers — $0.04 per 100,000 invocations, plus $0.04 per 100,000 milliseconds of compute.

It seems that Google is still designing and learning how its serverless platform is going to work, but once this process is finalized, it will have to align with the other providers.


On top of its cost saving and maintenance reduction benefits, serverless also encourages correct coding and pushes for effective, fast execution as a result of its pay-per-use model. Organizations aiming to reduce their monthly payment for serverless services must reduce their runtime. Developers who can reduce function run-time and write the smallest independent pieces of code will be able to take greater advantage of serverless and significantly reduce costs for their organization.

The Serverless Cost Calculator allows costs to be estimated according to predicted number of executions and average execution time and can help developers who would like to introduce serverless in their organization by clearly displaying potential savings.

The table below contains a summary of the key attributes of each provider:


A Guide to Functional Programming in Python with Examples

Functional programming is a form of the declarative programming paradigm in which functions represent relations among objects, as in mathematics. Thus, functions are much more than ordinary routines.

This programming paradigm can be implemented in a variety of languages. There are several functional programming languages, such as Clojure, Erlang, or Haskell. Many languages support functional programming in addition to other paradigms: C++, C#, F#, Java, Python, JavaScript, and others.

In this article, you’ll find explanations on several important principles and concepts related to functional programming:

  • pure functions,
  • anonymous functions,
  • recursive functions,
  • first-class functions,
  • immutable data types.

Pure Functions

A pure function is a function that:

  • is deterministic — returns the same result if provided the same arguments,
  • has no side effects.

If a function uses an object from a higher scope or random numbers, communicates with files and so on, it might be impure because its result doesn’t depend only on its arguments.

A function that modifies objects outside its scope, writes to files, prints to the console, and so on, has side effects and might be impure as well.

Pure functions usually don’t use objects from outer scopes and thus avoid shared states. This might simplify a program and help escape some errors.

Anonymous Functions

Anonymous (lambda) functions can be very convenient for functional programming constructs. They don’t have names and usually are created ad-hoc, with a single purpose.

In Python, you create an anonymous function with the lambda keyword:

lambda x, y: x + y

The above statement creates an anonymous function that accepts two arguments and returns their sum. In the next example, the functions f and g do the same thing:

>>> f = lambda x, y: x + y
>>> def g(x, y):
...     return x + y

Recursive Functions

A recursive function is a function that calls itself during the execution. For example, we can use recursion to find the factorial in the functional style:

>>> def factorial_r(n):
...     if n == 0:
...         return 1
...     return n * factorial_r(n - 1)

Alternatively, we can solve the same problem with the while or for loop:

>>> def factorial_l(n):
...     if n == 0:
...         return 1
...     product = 1
...     for i in range(1, n+1):
...         product *= i
...     return product

First Class Functions

In functional programming, functions are first-class objects — data treated the same way as any other type. Functions that take other functions as arguments or return them are called higher-order functions.

Functions (or, to be more precise, their pointers or references) can be passed as arguments to and returned from other functions. They can also be used as variables inside programs.

The code below illustrates passing the built-in function max as an argument of the function f and calling it from inside f.

>>> def f(function, *arguments):
...     return function(*arguments)

>>> f(max, 1, 2, 4, 8, 16)
16

Three especially important functional programming concepts are:

  • mapping,
  • filtering,
  • reducing.

They are all supported in Python.

Mapping is performed with the built-in class map. It takes a function (or method, or any callable) as the first argument and an iterable (like a list or tuple) as the second argument, and returns the iterator with the results of calling the function on the items of the iterable:

>>> list(map(abs, [-2, -1, 0, 1, 2]))
[2, 1, 0, 1, 2]

In this example, the built-in function abs is called with the arguments -2, -1, 0, 1 and 2, respectively. We can obtain the same result with the list comprehension:

>>> [abs(item) for item in [-2, -1, 0, 1, 2]]
[2, 1, 0, 1, 2]

We don’t have to use built-in functions. It is possible to provide a custom function (or method, or any callable). Lambda functions can be particularly convenient in such cases:

>>> list(map(lambda item: 2 * item, [-2, -1, 0, 1, 2]))
[-4, -2, 0, 2, 4]

The statement above multiplied each item of the list [-2, -1, 0, 1, 2] by 2 using the custom (lambda) function lambda item: 2 * item. Of course, we can use a comprehension to achieve the same thing:

>>> [2 * item for item in [-2, -1, 0, 1, 2]]
[-4, -2, 0, 2, 4]

Filtering is performed with the built-in class filter. It also takes a function (or method, or any callable) as the first argument and iterable as the second. It calls the function on the items of the iterable and returns a new iterable with the items for which the function returned True or anything that is evaluated as True. For example:

>>> list(filter(lambda item: item >= 0, [-2, -1, 0, 1, 2]))
[0, 1, 2]

The statement above returns the list of non-negative items of [-2, -1, 0, 1, 2], as defined with the function lambda item: item >= 0. Again, the same result can be achieved using the comprehension:

>>> [item for item in [-2, -1, 0, 1, 2] if item >= 0]
[0, 1, 2]

Reducing is performed with the function reduce from the module functools. Again, it takes two arguments: a function and an iterable. It calls the function on the first two items of the iterable, then on the result of this operation and the third item, and so on. It returns a single value. For example, we can find the sum of all items of a list like this:

>>> import functools
>>> functools.reduce(lambda x, y: x + y, [1, 2, 4, 8, 16])
31

This example is just for illustration. The preferred way of calculating the sum is using the built-in reducing function sum:

>>> sum([1, 2, 4, 8, 16])
31

Immutable Data Types

An immutable object is an object whose state can’t be modified once it’s created. Conversely, a mutable object allows changes in its state.

Immutable objects are generally desirable in functional programming.

For example, in Python, lists are mutable, while tuples are immutable:

>>> a = [1, 2, 4, 8, 16]
>>> a[0] = 32 # OK. You can modify lists.
>>> a
[32, 2, 4, 8, 16]
>>> a = (1, 2, 4, 8, 16)
>>> a[0] = 32 # Wrong! You can't modify tuples.
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'tuple' object does not support item assignment

We can modify lists by appending new elements to them, but when we try to do this with tuples, they are not changed; instead, new instances are created:

>>> # Lists (mutable)
>>> a = [1, 2, 4] # a is a list
>>> b = a # a and b refer to the same list object
>>> id(a) == id(b)
True
>>> a += [8, 16] # a is modified and so is b - they refer to the same object
>>> a
[1, 2, 4, 8, 16]
>>> b
[1, 2, 4, 8, 16]
>>> id(a) == id(b)
True
>>> # Tuples (immutable)
>>> a = (1, 2, 4) # a is a tuple
>>> b = a # a and b refer to the same tuple object
>>> id(a) == id(b)
True
>>> a += (8, 16) # new tuple is created and assigned to a; b is unchanged
>>> a # a refers to the new object
(1, 2, 4, 8, 16)
>>> b # b refers to the old object
(1, 2, 4)
>>> id(a) == id(b)
False

Advantages of Functional Programming

The underlying concepts and principles — especially higher-order functions, immutable data and the lack of side effects — imply important advantages of functional programs:

  • they might be easier to comprehend, implement, test and debug,
  • they might be shorter and more concise (compare two programs for calculating the factorial above),
  • they might be less error-prone,
  • they are easier to work with when implementing parallel execution.

Functional programming is a valuable paradigm worth learning. In addition to the advantages listed above, it’ll probably give you a new perspective on solving programming problems.

What is Functional Programming?

Functional programming is declarative rather than imperative, and application state flows through pure functions. Contrast this with object oriented programming, where application state is usually shared and colocated with methods in objects.

Functional programming is a programming paradigm, meaning that it is a way of thinking about software construction based on some fundamental, defining principles (listed above). Other examples of programming paradigms include object oriented programming and procedural programming.

Functional code tends to be more concise, more predictable, and easier to test than imperative or object oriented code — but if you’re unfamiliar with it and the common patterns associated with it, functional code can also seem a lot more dense, and the related literature can be impenetrable to newcomers.

If you start googling functional programming terms, you’re going to quickly hit a brick wall of academic lingo that can be very intimidating for beginners. To say it has a learning curve is a serious understatement. But if you’ve been programming in JavaScript for a while, chances are good that you’ve used a lot of functional programming concepts & utilities in your real software.

Don’t let all the new words scare you away. It’s a lot easier than it sounds.

The hardest part is wrapping your head around all the unfamiliar vocabulary. There are a lot of ideas in the innocent looking definition above which all need to be understood before you can begin to grasp the meaning of functional programming:

  • Pure functions
  • Function composition
  • Avoid shared state
  • Avoid mutating state
  • Avoid side effects

In other words, if you want to know what functional programming means in practice, you have to start with an understanding of those core concepts.

A pure function is a function which:

  • Given the same inputs, always returns the same output, and
  • Has no side-effects

Pure functions have lots of properties that are important in functional programming, including referential transparency (you can replace a function call with its resulting value without changing the meaning of the program).
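A minimal sketch of the contrast, with made-up example functions: the pure one depends only on its inputs, while the impure one silently depends on shared, mutable state.

```javascript
// Pure: the result is fully determined by the arguments.
const add = (a, b) => a + b;

// Impure: the result also depends on external, mutable state.
let tax = 0.2; // shared state
const addTax = price => price * (1 + tax);

console.log(add(2, 3));   // 5, always
console.log(addTax(100)); // 120 — but only while tax === 0.2
```

Because add is referentially transparent, every call site of add(2, 3) could be replaced by the literal 5 without changing the program; no such substitution is safe for addTax.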

Function composition is the process of combining two or more functions in order to produce a new function or perform some computation. For example, the composition f . g (the dot means “composed with”) is equivalent to f(g(x)) in JavaScript. Understanding function composition is an important step towards understanding how software is constructed using functional programming.
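A two-function compose helper makes the f(g(x)) pattern reusable (the inc and double functions here are illustrative placeholders):

```javascript
// compose(f, g) returns a new function equivalent to x => f(g(x)).
const compose = (f, g) => x => f(g(x));

const inc = x => x + 1;
const double = x => x * 2;

// Apply inc first, then double: double(inc(3)) = double(4) = 8.
const incThenDouble = compose(double, inc);
console.log(incThenDouble(3)); // 8
```

Note that order matters: compose(inc, double) would compute inc(double(3)) = 7 instead.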

Shared State

Shared state is any variable, object, or memory space that exists in a shared scope, or as the property of an object being passed between scopes. A shared scope can include global scope or closure scopes. Often, in object oriented programming, objects are shared between scopes by adding properties to other objects.

For example, a computer game might have a master game object, with characters and game items stored as properties owned by that object. Functional programming avoids shared state — instead relying on immutable data structures and pure calculations to derive new data from existing data.

The problem with shared state is that in order to understand the effects of a function, you have to know the entire history of every shared variable that the function uses or affects.

Imagine you have a user object which needs saving. Your saveUser() function makes a request to an API on the server. While that’s happening, the user changes their profile picture with updateAvatar() and triggers another saveUser() request. On save, the server sends back a canonical user object that should replace whatever is in memory in order to sync up with changes that happen on the server or in response to other API calls.

Unfortunately, the second response gets received before the first response, so when the first (now outdated) response gets returned, the new profile pic gets wiped out in memory and replaced with the old one. This is an example of a race condition — a very common bug associated with shared state.

Another common problem associated with shared state is that changing the order in which functions are called can cause a cascade of failures because functions which act on shared state are timing dependent:

// With shared state, the order in which function calls are made
// changes the result of the function calls.
const x = {
  val: 2
};

const x1 = () => x.val += 1;

const x2 = () => x.val *= 2;

x1();
x2();

console.log(x.val); // 6

// This example is exactly equivalent to the above, except...
const y = {
  val: 2
};

const y1 = () => y.val += 1;

const y2 = () => y.val *= 2;

// ...the order of the function calls is reversed...
y2();
y1();

// ... which changes the resulting value:
console.log(y.val); // 5

Timing dependency example

When you avoid shared state, the timing and order of function calls don’t change the result of calling the function. With pure functions, given the same input, you’ll always get the same output. This makes function calls completely independent of other function calls, which can radically simplify changes and refactoring. A change in one function, or the timing of a function call won’t ripple out and break other parts of the program.

const x = {
  val: 2
};

const x1 = x => Object.assign({}, x, { val: x.val + 1});

const x2 = x => Object.assign({}, x, { val: x.val * 2});

console.log(x1(x2(x)).val); // 5

const y = {
  val: 2
};

// Since there are no dependencies on outside variables,
// we don't need different functions to operate on different
// variables.

// this space intentionally left blank

// Because the functions don't mutate, you can call these
// functions as many times as you want, in any order, 
// without changing the result of other function calls.

console.log(x1(x2(y)).val); // 5

In the example above, we use Object.assign() and pass in an empty object as the first parameter to copy the properties of x instead of mutating it in place. In this case, it would have been equivalent to simply create a new object from scratch, without Object.assign(), but this is a common pattern in JavaScript to create copies of existing state instead of using mutations, which we demonstrated in the first example.

If you look closely at the console.log() statements in this example, you should notice something I’ve mentioned already: function composition. Recall from earlier, function composition looks like this: f(g(x)). In this case, we replace f() and g() with x1() and x2() for the composition: x1 . x2.

Of course, if you change the order of the composition, the output will change. Order of operations still matters. f(g(x)) is not always equal to g(f(x)), but what doesn’t matter anymore is what happens to variables outside the function — and that’s a big deal. With impure functions, it’s impossible to fully understand what a function does unless you know the entire history of every variable that the function uses or affects.

Remove function call timing dependency, and you eliminate an entire class of potential bugs.

Immutability

An immutable object is an object that can’t be modified after it’s created. Conversely, a mutable object is any object which can be modified after it’s created.

Immutability is a central concept of functional programming because without it, the data flow in your program is lossy. State history is abandoned, and strange bugs can creep into your software.

In JavaScript, it’s important not to confuse const with immutability. const creates a variable name binding which can’t be reassigned after creation. const does not create immutable objects. You can’t reassign the binding to a different object, but you can still change the properties of the object it refers to, which means that bindings created with const are mutable, not immutable.

Immutable objects can’t be changed at all. You can make a value truly immutable by deep freezing the object. JavaScript has a method that freezes an object one-level deep:

const a = Object.freeze({
  foo: 'Hello',
  bar: 'world',
  baz: '!'
});

a.foo = 'Goodbye';
// Error: Cannot assign to read only property 'foo' of object Object

But frozen objects are only superficially immutable. For example, the following object is mutable:

const a = Object.freeze({
  foo: { greeting: 'Hello' },
  bar: 'world',
  baz: '!'
});

a.foo.greeting = 'Goodbye';

console.log(`${ a.foo.greeting }, ${ }${ a.baz }`);

As you can see, the top level primitive properties of a frozen object can’t change, but any property which is also an object (including arrays, etc…) can still be mutated — so even frozen objects are not immutable unless you walk the whole object tree and freeze every object property.

In many functional programming languages, there are special immutable data structures called trie data structures (pronounced “tree”) which are effectively deep frozen — meaning that no property can change, regardless of the level of the property in the object hierarchy.

Tries use structural sharing to share reference memory locations for all the parts of the object which are unchanged after an object has been copied by an operation, which uses less memory, and enables significant performance improvements for some kinds of operations.

For example, you can use identity comparisons at the root of an object tree for comparisons. If the identity is the same, you don’t have to walk the whole tree checking for differences.

There are several libraries in JavaScript which take advantage of tries, including Immutable.js and Mori.

I have experimented with both, and tend to use Immutable.js in large projects that require significant amounts of immutable state.

Side Effects

A side effect is any application state change that is observable outside the called function other than its return value. Side effects include:

  • Modifying any external variable or object property (e.g., a global variable, or a variable in the parent function scope chain)
  • Logging to the console
  • Writing to the screen
  • Writing to a file
  • Writing to the network
  • Triggering any external process
  • Calling any other functions with side-effects

Side effects are mostly avoided in functional programming, which makes the effects of a program much easier to understand, and much easier to test.

Haskell and other functional languages frequently isolate and encapsulate side effects from pure functions using monads. The topic of monads is deep enough to write a book on, so we’ll save that for later.

What you do need to know right now is that side-effect actions need to be isolated from the rest of your software. If you keep your side effects separate from the rest of your program logic, your software will be much easier to extend, refactor, debug, test, and maintain.

This is the reason that most front-end frameworks encourage users to manage state and component rendering in separate, loosely coupled modules.

Reusability Through Higher Order Functions

Functional programming tends to reuse a common set of functional utilities to process data. Object oriented programming tends to colocate methods and data in objects. Those colocated methods can only operate on the type of data they were designed to operate on, and often only the data contained in that specific object instance.

In functional programming, any type of data is fair game. The same map() utility can map over objects, strings, numbers, or any other data type because it takes a function as an argument which appropriately handles the given data type. FP pulls off its generic utility trickery using higher order functions.

JavaScript has first class functions, which allows us to treat functions as data — assign them to variables, pass them to other functions, return them from functions, etc…

A higher order function is any function which takes a function as an argument, returns a function, or both. Higher order functions are often used to:

  • Abstract or isolate actions, effects, or async flow control using callback functions, promises, monads, etc…
  • Create utilities which can act on a wide variety of data types
  • Partially apply a function to its arguments or create a curried function for the purpose of reuse or function composition
  • Take a list of functions and return some composition of those input functions
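The partial-application bullet can be sketched in a few lines; multiply is a hypothetical example, not a library function:

```javascript
// A higher order function: multiply returns another function,
// giving curry-style partial application of the first argument.
const multiply = a => b => a * b;

const double = multiply(2); // partially applied: b => 2 * b
const triple = multiply(3);

console.log(double(21)); // 42
console.log(triple(7));  // 21
```

Each specialized function can then be reused or fed into a composition pipeline without repeating the captured argument.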

Containers, Functors, Lists, and Streams

A functor is something that can be mapped over. In other words, it’s a container which has an interface which can be used to apply a function to the values inside it. When you see the word functor, you should think “mappable”.

Earlier we learned that the same map() utility can act on a variety of data types. It does that by lifting the mapping operation to work with a functor API. The important flow control operations used by map() take advantage of that interface. In the case of, the container is an array, but other data structures can be functors, too — as long as they supply the mapping API.

Let’s look at how Array.prototype.map() allows you to abstract the data type from the mapping utility to make map() usable with any data type. We’ll create a simple double() mapping that simply multiplies any passed in values by 2:

const double = n => n * 2;
const doubleMap = numbers =>;
console.log(doubleMap([2, 3, 4])); // [ 4, 6, 8 ]

What if we want to operate on targets in a game to double the number of points they award? All we have to do is make a subtle change to the double() function that we pass into map(), and everything still works:

const double = n => n.points * 2;

const doubleMap = numbers =>;

console.log(doubleMap([
  { name: 'ball', points: 2 },
  { name: 'coin', points: 3 },
  { name: 'candy', points: 4 }
])); // [ 4, 6, 8 ]

The concept of using abstractions like functors & higher order functions in order to use generic utility functions to manipulate any number of different data types is important in functional programming. You’ll see a similar concept applied in all sorts of different ways.

“A list expressed over time is a stream.”

All you need to understand for now is that arrays and functors are not the only way this concept of containers and values in containers applies. For example, an array is just a list of things. A list expressed over time is a stream — so you can apply the same kinds of utilities to process streams of incoming events — something that you’ll see a lot when you start building real software with FP.

Declarative vs Imperative

Functional programming is a declarative paradigm, meaning that the program logic is expressed without explicitly describing the flow control.

Imperative programs spend lines of code describing the specific steps used to achieve the desired results — the flow control: How to do things.

Declarative programs abstract the flow control process, and instead spend lines of code describing the data flow: What to do. The how gets abstracted away.

For example, this imperative mapping takes an array of numbers and returns a new array with each number multiplied by 2:

const doubleMap = numbers => {
  const doubled = [];
  for (let i = 0; i < numbers.length; i++) {
    doubled.push(numbers[i] * 2);
  }
  return doubled;
};

console.log(doubleMap([2, 3, 4])); // [4, 6, 8]

Imperative data mapping

This declarative mapping does the same thing, but abstracts the flow control away using the functional Array.prototype.map() utility, which allows you to more clearly express the flow of data:

const doubleMap = numbers => => n * 2);

console.log(doubleMap([2, 3, 4])); // [4, 6, 8]

Imperative code frequently utilizes statements. A statement is a piece of code which performs some action. Examples of commonly used statements include for, if, switch, throw, etc…

Declarative code relies more on expressions. An expression is a piece of code which evaluates to some value. Expressions are usually some combination of function calls, values, and operators which are evaluated to produce the resulting value.

These are all examples of expressions:

2 * 2
doubleMap([2, 3, 4])
Math.max(4, 3, 2)

Usually in code, you’ll see expressions being assigned to an identifier, returned from functions, or passed into a function. Before being assigned, returned, or passed, the expression is first evaluated, and the resulting value is used.
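As a quick illustration, here is the same branch written first as a statement and then as an expression (score and the label names are just examples):

```javascript
const score = 72;

// Statement: the if performs an action; it does not evaluate to a value.
let label;
if (score > 50) {
  label = 'pass';
} else {
  label = 'fail';
}

// Expression: the ternary evaluates to a value we can assign directly.
const labelExpr = score > 50 ? 'pass' : 'fail';

console.log(label, labelExpr); // pass pass
```

Notice that the expression form needs no mutable variable at all; the value is produced and assigned in one step.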


Functional programming favors:

  • Pure functions instead of shared state & side effects
  • Immutability over mutable data
  • Function composition over imperative flow control
  • Lots of generic, reusable utilities that use higher order functions to act on many data types instead of methods that only operate on their colocated data
  • Declarative rather than imperative code (what to do, rather than how to do it)
  • Expressions over statements
  • Containers & higher order functions over ad-hoc polymorphism
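To tie a few of these together, here is a small sketch of function composition with pure functions; compose() is a common FP utility, written here by hand rather than taken from a library:

```javascript
// compose() applies functions right to left: compose(f, g)(x) === f(g(x)).
const compose = (...fns) => x => fns.reduceRight((acc, fn) => fn(acc), x);

// Two small pure functions...
const add1 = n => n + 1;
const double = n => n * 2;

// ...composed declaratively instead of sequenced imperatively.
const add1ThenDouble = compose(double, add1);

console.log(add1ThenDouble(3)); // 8
```

There is no shared state, no mutation, and no explicit flow control: just expressions built from pure functions.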

Building Bliss- Serverless Fullstack React with Prisma 2 and GraphQL


This type of solution has only recently become available, and while it is still in beta, it represents a full stack developer's paradise: you can develop an app, deploy it, forget about any of the DevOps particulars, and be confident that it will work regardless of load. Here is what this stack gives you:


  • One command to deploy the entire stack (Now)
  • Infinitely scalable, pay for what you use (lambda functions)
  • No servers to maintain (lambda functions)
  • All the advantages of React (composability, reusability and strong community support)
  • Server-side rendering for SEO (Next.js)
  • Correctly rendered social media link shares in Facebook and Twitter (Next.js)
  • Easy to evolve api (GraphQL)
  • One Schema to maintain for the entire stack (Prisma 2)
  • Secure secret management (Now)
  • Easy to set up development environment with hot code reloading (Docker)
  • Strongly typed (GraphQL and Typescript) that is autogenerated when possible (graphql-gen)

Before you start, you should set up and configure an RDS instance as described in our previous blog post.



We will pick up from the example from our multi-part blog series [1][2][3]. If you aren't interested in following along from the start, you can start by checking out the repo from the now-serverless-start tag:

<pre class="ql-syntax" spellcheck="false">git clone
git fetch && git fetch --tags
git checkout now-serverless-start
</pre>

I. Install and clean up dependencies

Upgrade to Next v9

In the frontend/package.json make sure that next has a version of "9.0.2" or greater. Previously we were using a canary version of 8.1.1 for typescript support, but since then version 9 of Next was released, so we want to make sure we can take advantage of all the latest goodies.

Install webpack to the frontend

As a precaution, you should install webpack to the frontend folder. I've seen inconsistent behavior with now where, if webpack is not installed, the deploy will sometimes fail saying that it needs webpack. From what I've read online, it shouldn't be required, so this is likely a bug, but it can't hurt to add it:

<pre class="ql-syntax" spellcheck="false">npm install --save-dev webpack </pre>

Remove the main block from package.json and frontend/package.json

When we generated our package.json files, it auto-populated the main field. Since we are not using this feature and don't even have an index.js file in either folder, we should remove it from both files. In frontend/package.json, remove line 5. We didn't use it previously and it has the potential to confuse the now service.

<pre class="ql-syntax" spellcheck="false">"main": "index.js", </pre>

Also, do the same in the package.json in the root folder.

Install Prisma2 to the backend

Although we globally install prisma2 in our docker containers, we now need to add it to our backend package.json file so that when we use the now service it will be available during the build step in AWS. Navigate to the backend folder and install prisma2:

<pre class="ql-syntax" spellcheck="false">npm install --save-dev prisma2 </pre>

Install Zeit Now

We should install now globally so that we will be able to run it from the command line:

<pre class="ql-syntax" spellcheck="false">npm install -g now </pre>

II. Add Environmental Variables

Add a .env file to the root of your project. Add the following variables which we will use across our docker environment.

<pre class="ql-syntax" spellcheck="false">MYSQL_URL=mysql://root:prisma@mysql:3306/prisma
BACKEND_URL=http://backend:4000/graphql
FRONTEND_URL=http://localhost:3000
</pre>

Modify the docker-compose.yml file to inject these new variables into our docker containers. This is what the updated file looks like:


<pre class="ql-syntax" spellcheck="false">version: '3.7'
services:
  mysql:
    container_name: mysql
    ports:
      - '3306:3306'
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_DATABASE: prisma
      MYSQL_ROOT_PASSWORD: prisma
    volumes:
      - mysql:/var/lib/mysql
  prisma:
    links:
      - mysql
    depends_on:
      - mysql
    container_name: prisma
    ports:
      - '5555:5555'
    build:
      context: backend/prisma
      dockerfile: Dockerfile
    environment:
      MYSQL_URL: ${MYSQL_URL}
    volumes:
      - /app/prisma
  backend:
    links:
      - mysql
    depends_on:
      - mysql
      - prisma
    container_name: backend
    ports:
      - '4000:4000'
    build:
      context: backend
      dockerfile: Dockerfile
      args:
        - MYSQL_URL=${MYSQL_URL}
    environment:
      MYSQL_URL: ${MYSQL_URL}
      FRONTEND_URL: ${FRONTEND_URL}
    volumes:
      - ./backend:/app
      - /app/node_modules
      - /app/prisma
  frontend:
    container_name: frontend
    ports:
      - '3000:3000'
    build:
      context: frontend
      dockerfile: Dockerfile
    environment:
      BACKEND_URL: ${BACKEND_URL}
    volumes:
      - ./frontend:/app
      - /app/node_modules
      - /app/.next

volumes: # define our mysql volume used above
  mysql:
</pre>

Let's take a look at the parts that were changed. Below are the blocks we added to the file above, snipped for brevity:

<pre class="ql-syntax" spellcheck="false">prisma:
  # ..more lines
  environment:
    MYSQL_URL: ${MYSQL_URL}
backend:
  # ..more lines
  build:
    context: backend
    dockerfile: Dockerfile
    args:
      - MYSQL_URL=${MYSQL_URL}
  environment:
    MYSQL_URL: ${MYSQL_URL}
    FRONTEND_URL: ${FRONTEND_URL}
frontend:
  # ..more lines
  environment:
    BACKEND_URL: ${BACKEND_URL}
</pre>
We added environment blocks to the prisma studio, backend, and frontend containers. Because of the .env file, any variable we define there, such as VAR1=my-variable, can be referenced in the yml as ${VAR1}, which behaves as if we had used the my-variable string directly in that spot of the yml file.

Dynamically set Backend url on the frontend

We need to set the uri that the frontend connects to dynamically instead of hardcoding it. In the frontend/utils/init-apollo.js we previously had this line which would connect to localhost if the request came from a user or from the backend if it came from the next.js server:

<pre class="ql-syntax" spellcheck="false">uri: isBrowser ? 'http://localhost:4000' : 'http://backend:4000', // Server URL (must be absolute)
</pre>

We need to still keep track of whether we are in the browser or server in the docker environment. In addition, though, we need to check whether we are in a docker environment or whether we are deployed via now into a lambda function.

We can access environment variables using process.env.ENVIRONMENTAL_VARIABLE. We check whether the url matches our local environment url; if so, we know we are in a docker environment. Now our logic is: if we are in a docker environment and the browser is making the request, we return localhost; otherwise, we pass the BACKEND_URL as the uri.


<pre class="ql-syntax" spellcheck="false">function create(initialState) {
  // Check out if you want to use the AWSAppSyncClient
  const isBrowser = typeof window !== 'undefined'
  const isDocker = process.env.BACKEND_URL === 'http://backend:4000/graphql'
  return new ApolloClient({
    connectToDevTools: isBrowser,
    ssrMode: !isBrowser, // Disables forceFetch on the server (so queries are only run once)
    link: new HttpLink({
      uri:
        isDocker && isBrowser
          ? 'http://localhost:4000/graphql'
          : process.env.BACKEND_URL,
      credentials: 'same-origin', // Additional fetch() options like credentials or headers
      fetch: !isBrowser && fetch, // Use fetch() polyfill on the server
    }),
    cache: new InMemoryCache().restore(initialState || {}),
  })
}
</pre>
Now that should really be all that we need to do, but since Next.js is both rendered on the server and in the client, we won't have access to server environmental variables unless we take one more step. We need to expose the variable in our frontend/next.config.js file:


<pre class="ql-syntax" spellcheck="false">const withCSS = require('@zeit/next-css')

module.exports = withCSS({
  target: 'serverless',
  env: {
    BACKEND_URL: process.env.BACKEND_URL,
  },
})
</pre>
Note that due to how Next.js handles process.env, you cannot destructure variables off of it. The line below will not work; we need to use the entire process.env.BACKEND_URL expression.

<pre class="ql-syntax" spellcheck="false">const { BACKEND_URL } = process.env // NO!
</pre>

III. Configure our backend server

Update the backend server to the /graphql backend and configure CORS

We updated the url above to the /graphql endpoint for the backend server. We are doing this because in now we will deploy our backend graphql server to the /graphql route of our deployment url. We need to make this change in our backend/src/index.ts so that the server will be running at the /graphql endpoint instead of /.

In addition, while we are here, we will disable subscriptions and enable CORS. CORS stands for cross origin resource sharing, and it tells the backend server which frontend servers it should accept requests from. This ensures that if someone else stood up a frontend next server pointed at our backend server, all of their requests would fail. To see why we need this, imagine how damaging it could be if someone bought a lookalike domain (I'm just making this up) and pointed their frontend server at the real backend server of amazon's shopping portal. A fake amazon frontend could gather all sorts of customer information while still sending real requests to amazon's actual backend server. Yikes!

In order to enable CORS we will pass in our frontend url. We will also enable credentials for future authentication-related purposes.


<pre class="ql-syntax" spellcheck="false">server.start(
  {
    endpoint: '/graphql',
    playground: '/graphql',
    subscriptions: false,
    cors: {
      credentials: true,
      origin: process.env.FRONTEND_URL,
    },
  },
  () => console.log(`🚀 Server ready`)
)
</pre>
Update the backend/prisma/project.prisma file to use environmental variables and set our platform.

We can use env("MYSQL_URL"), which will read our MYSQL_URL environmental variable. Starting with prisma preview-3+, we need to specify which platforms we plan to use with prisma2. We can use "native" for our docker work, but we need "linux-glibc-libssl1.0.2" for Zeit Now.


<pre class="ql-syntax" spellcheck="false">datasource db {
  provider = "mysql"
  url      = env("MYSQL_URL")
}

generator photon {
  provider  = "photonjs"
  platforms = ["native", "linux-glibc-libssl1.0.2"]
}

// Rest of file
</pre>

Update the backend/Dockerfile to pass the environmental variable into the prisma2 generate step. We first define a docker argument named MYSQL_URL using ARG. Then we take the MYSQL_URL environmental variable and assign it to this newly created ARG.

We need the MYSQL_URL environment variable so that our url from the prisma file gets evaluated properly.


<pre class="ql-syntax" spellcheck="false">FROM node:10.16.0
RUN npm install -g --unsafe-perm prisma2

RUN mkdir /app
WORKDIR /app

COPY package*.json ./
COPY prisma ./prisma/

ARG MYSQL_URL
ENV MYSQL_URL "$MYSQL_URL"

RUN npm install
RUN prisma2 generate

CMD ["npm", "start"]
</pre>

Note that the only reason we have access to the $MYSQL_URL variable in this Dockerfile is the args block that we previously added to the docker-compose.yml file. Variables added to the environment block of docker-compose are only available during the runtime of the containers, not during the build step, which is when the Dockerfile is executed.

<pre class="ql-syntax" spellcheck="false">backend:
  # ..more lines
  build:
    context: backend
    dockerfile: Dockerfile
    args:
      - MYSQL_URL=${MYSQL_URL}
</pre>

IV. Add our Now Configuration

Create now secrets

Locally, we have been using the .env file to store our secrets. Although we commit that file to our repo, the only reason we can do that is that it contains no sensitive environmental variables. If you ever add real secrets to that file, such as a stripe key, you must never commit it to github, or else you risk them being compromised!

For production, we need a more secure way to store secrets. Now provides a nice way to do this:

<pre class="ql-syntax" spellcheck="false">now secret add my_secret my_value
</pre>

Now will encrypt and store these secrets on their servers, and when we upload our app we can use them, but we won't be able to read them out even if we try to be sneaky with console.logs. We need to create now secrets for the following variables from our .env file:

<pre class="ql-syntax" spellcheck="false">MYSQL_URL=mysql://user:password@your-rds-host:3306/prisma
</pre>

Note that you won't know your-now-url until your first deploy, so you can always skip this step for now, get to Step V of this tutorial, and deploy your site. The deployment url will be the last line of the console output. Then come back to this step, add the now secrets, and redeploy the site.

Add a now.json file to the root directory

We need to create a now.json file which will dictate the details of how we deploy our site. The first part of it has environmental variables for both the build and the runtime. We will use the secrets we created in the previous step via the @our-secret-name syntax. If you forget which names you used, you can always type now secrets ls and you will get the names of the secrets (but, critically, not the secrets themselves).

Next, we have to define our build steps. In our case, we have to build both our nextjs application and our graphql-yoga server. The nextjs app is built using a specially designed @now/next builder, and we can just point it at our next.config.js file in the frontend folder. Our other build will use the index.ts file in the backend/src directory; the builder is smart enough to compile the code down to javascript and deploy it to a lambda function.

Finally, we have to define our routes. The backend server will end up at the /graphql endpoint, while the frontend will handle everything else. This ensures that any page we go to on our deployment will be forwarded to the nextjs server, except the /graphql endpoint.


<pre class="ql-syntax" spellcheck="false">{
  "version": 2,
  "build": {
    "env": {
      "MYSQL_URL": "@mysql_url",
      "BACKEND_URL": "@backend_url",
      "FRONTEND_URL": "@frontend_url"
    }
  },
  "env": {
    "MYSQL_URL": "@mysql_url",
    "BACKEND_URL": "@backend_url",
    "FRONTEND_URL": "@frontend_url"
  },
  "builds": [
    {
      "src": "frontend/next.config.js",
      "use": "@now/next"
    },
    {
      "src": "backend/src/index.ts",
      "use": "@now/node",
      "config": { "maxLambdaSize": "20mb" }
    }
  ],
  "routes": [
    { "src": "/graphql", "dest": "/backend/src/index.ts" },
    {
      "src": "/(.*)",
      "dest": "/frontend/$1",
      "headers": {
        "x-request-path": "$1"
      }
    }
  ]
}
</pre>
Add a .nowignore file to the root directory

Finally, we can add our ignore file which will tell now which things it shouldn't bother to upload.


<pre class="ql-syntax" spellcheck="false">**/node_modules
</pre>

V. Deploy our now full stack site

This part is easy. Simply type now from the root folder and let it fly!

Thanks for reading. If you liked this post, share it with all of your programming buddies!
