Understanding Functional Programming in 40 Minutes

Functional programming has finally escaped from academia. These days developers are building real systems in functional programming languages like Clojure, Scala, Elixir and F#. Functional techniques are also seeping into more traditional languages like Java and Ruby. Unfortunately somewhere along the way functional programming has also developed a reputation for being deep and mysterious: Good programs achieve the Zen-like state of being functional which somehow involves immutability, higher order functions and being referentially transparent.

In this talk Russ Olsen will strip away the cloud of mystery to uncover the simple — and wonderful — truth about functional programming: It can make your programming life easier by letting you do simple things simply while also providing you with the sharp tools you need to tackle more complex problems.

The Rust Programming Language - Understanding Functions in Rust

Functions are the building blocks of readable, maintainable, and reusable code.

Every Rust program has at least one function, the main function:

fn main() {
}

This is the simplest possible function declaration. As we mentioned before, fn says ‘this is a function’, followed by the name, some parentheses because this function takes no arguments, and then some curly braces to indicate the body. Here’s a function named foo:


fn foo() {
}

So, what about taking arguments? Here’s a function that prints a number:


fn print_number(x: i32) {
    println!("x is: {}", x);
}

Here’s a complete program that uses print_number:

fn main() {
    print_number(5);
}

fn print_number(x: i32) {
    println!("x is: {}", x);
}

As you can see, function arguments work very similarly to let declarations: you add a type to the argument name, after a colon.

Here’s a complete program that adds two numbers together and prints them:

fn main() {
    print_sum(5, 6);
}

fn print_sum(x: i32, y: i32) {
    println!("sum is: {}", x + y);
}

You separate arguments with a comma, both when you call the function and when you declare it.

Unlike let, you must declare the types of function arguments. This does not work:

fn print_sum(x, y) {
    println!("sum is: {}", x + y);
}

You get this error:

expected one of `!`, `:`, or `@`, found `)`
fn print_sum(x, y) {

This is a deliberate design decision. While full-program inference is possible, languages which have it, like Haskell, often suggest that documenting your types explicitly is a best-practice. We agree that forcing functions to declare types while allowing for inference inside of function bodies is a wonderful sweet spot between full inference and no inference.

What about returning a value? Here’s a function that adds one to an integer:


fn add_one(x: i32) -> i32 {
    x + 1
}

Rust functions return exactly one value, and you declare the type after an ‘arrow’, which is a dash (-) followed by a greater-than sign (>). The last line of a function determines what it returns. You’ll note the lack of a semicolon here. If we added it in:

fn add_one(x: i32) -> i32 {
    x + 1;
}

We would get an error:

error: not all control paths return a value
fn add_one(x: i32) -> i32 {
     x + 1;
}

help: consider removing this semicolon:
     x + 1;
          ^

This reveals two interesting things about Rust: it is an expression-based language, and semicolons are different from semicolons in other ‘curly brace and semicolon’-based languages. These two things are related.

Expressions vs. Statements

Rust is primarily an expression-based language. There are only two kinds of statements, and everything else is an expression.

So what's the difference? Expressions return a value, and statements do not. That’s why we end up with ‘not all control paths return a value’ here: the statement x + 1; doesn’t return a value. There are two kinds of statements in Rust: ‘declaration statements’ and ‘expression statements’. Everything else is an expression. Let’s talk about declaration statements first.

In some languages, variable bindings can be written as expressions, not statements. Like Ruby:

x = y = 5

In Rust, however, using let to introduce a binding is not an expression. The following will produce a compile-time error:

let x = (let y = 5); // Expected identifier, found keyword `let`.

The compiler is telling us here that it was expecting to see the beginning of an expression, and a let can only begin a statement, not an expression.

Note that assigning to an already-bound variable (e.g. y = 5) is still an expression, although its value is not particularly useful. Unlike other languages where an assignment evaluates to the assigned value (e.g. 5 in the previous example), in Rust the value of an assignment is an empty tuple () because the assigned value can have only one owner, and any other returned value would be too surprising:


let mut y = 5;

let x = (y = 6);  // `x` has the value `()`, not `6`.

The second kind of statement in Rust is the expression statement. Its purpose is to turn any expression into a statement. In practical terms, Rust's grammar expects statements to follow other statements, so you use semicolons to separate expressions from each other. As a result, Rust looks a lot like most other languages that require you to use semicolons at the end of every line, and you will see semicolons at the end of almost every line of Rust code you see.

What is this exception that makes us say "almost"? You saw it already, in this code:


fn add_one(x: i32) -> i32 {
    x + 1
}

Our function claims to return an i32, but with a semicolon, it would return () instead. Rust realizes this probably isn’t what we want, and suggests removing the semicolon in the error we saw before.

Early returns

But what about early returns? Rust does have a keyword for that, return:


fn foo(x: i32) -> i32 {
    return x;

    // We never run this code!
    x + 1
}

Using a return as the last line of a function works, but is considered poor style:


fn foo(x: i32) -> i32 {
    return x + 1;
}

The previous definition without return may look a bit strange if you haven’t worked in an expression-based language before, but it becomes intuitive over time.

Diverging functions

Rust has some special syntax for ‘diverging functions’, which are functions that do not return:


fn diverges() -> ! {
    panic!("This function never returns!");
}

panic! is a macro, similar to println!() that we’ve already seen. Unlike println!(), panic!() causes the current thread of execution to crash with the given message. Because this function will cause a crash, it will never return, and so it has the type ‘!’, which is read ‘diverges’.
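
For instance, a complete hello.rs wrapping diverges() in a main function might look like this (a minimal sketch, consistent with the hello.rs line numbers shown in the output below):

fn diverges() -> ! {
    panic!("This function never returns!");
}

fn main() {
    diverges();
}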

If you add a main function that calls diverges() and run it, you’ll get some output that looks like this:

thread 'main' panicked at 'This function never returns!', hello.rs:2

If you want more information, you can get a backtrace by setting the RUST_BACKTRACE environment variable:

$ RUST_BACKTRACE=1 ./diverges
thread 'main' panicked at 'This function never returns!', hello.rs:2
Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
stack backtrace:
  hello::diverges
        at ./hello.rs:2
  hello::main
        at ./hello.rs:6

If you want the complete backtrace and filenames:

$ RUST_BACKTRACE=full ./diverges
thread 'main' panicked at 'This function never returns!', hello.rs:2
stack backtrace:
   1:     0x7f402773a829 - sys::backtrace::write::h0942de78b6c02817K8r
   2:     0x7f402773d7fc - panicking::on_panic::h3f23f9d0b5f4c91bu9w
   3:     0x7f402773960e - rt::unwind::begin_unwind_inner::h2844b8c5e81e79558Bw
   4:     0x7f4027738893 - rt::unwind::begin_unwind::h4375279447423903650
   5:     0x7f4027738809 - diverges::h2266b4c4b850236beaa
   6:     0x7f40277389e5 - main::h19bb1149c2f00ecfBaa
   7:     0x7f402773f514 - rt::unwind::try::try_fn::h13186883479104382231
   8:     0x7f402773d1d8 - __rust_try
   9:     0x7f402773f201 - rt::lang_start::ha172a3ce74bb453aK5w
  10:     0x7f4027738a19 - main
  11:     0x7f402694ab44 - __libc_start_main
  12:     0x7f40277386c8 - <unknown>
  13:                0x0 - <unknown>

If you need to override an already set RUST_BACKTRACE, in cases when you cannot just unset the variable, then set it to 0 to avoid getting a backtrace. Any other value (even no value at all) turns on backtrace.

$ export RUST_BACKTRACE=1
...
$ RUST_BACKTRACE=0 ./diverges 
thread 'main' panicked at 'This function never returns!', hello.rs:2
note: Run with `RUST_BACKTRACE=1` for a backtrace.

RUST_BACKTRACE also works with Cargo’s run command:

$ RUST_BACKTRACE=full cargo run
     Running `target/debug/diverges`
thread 'main' panicked at 'This function never returns!', hello.rs:2
stack backtrace:
   1:     0x7f402773a829 - sys::backtrace::write::h0942de78b6c02817K8r
   2:     0x7f402773d7fc - panicking::on_panic::h3f23f9d0b5f4c91bu9w
   3:     0x7f402773960e - rt::unwind::begin_unwind_inner::h2844b8c5e81e79558Bw
   4:     0x7f4027738893 - rt::unwind::begin_unwind::h4375279447423903650
   5:     0x7f4027738809 - diverges::h2266b4c4b850236beaa
   6:     0x7f40277389e5 - main::h19bb1149c2f00ecfBaa
   7:     0x7f402773f514 - rt::unwind::try::try_fn::h13186883479104382231
   8:     0x7f402773d1d8 - __rust_try
   9:     0x7f402773f201 - rt::lang_start::ha172a3ce74bb453aK5w
  10:     0x7f4027738a19 - main
  11:     0x7f402694ab44 - __libc_start_main
  12:     0x7f40277386c8 - <unknown>
  13:                0x0 - <unknown>

A diverging function can be used as any type:


let x: i32 = diverges();
let x: String = diverges();

Function pointers

We can also create variable bindings which point to functions:


let f: fn(i32) -> i32;

f is a variable binding which points to a function that takes an i32 as an argument and returns an i32. For example:


fn plus_one(i: i32) -> i32 {
    i + 1
}

// Without type inference:
let f: fn(i32) -> i32 = plus_one;

// With type inference:
let f = plus_one;

We can then use f to call the function:


let six = f(5);
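
As a small extra sketch going slightly beyond the text above, the same fn type can also be written as a parameter type, so a function pointer can be handed to another function:

fn plus_one(i: i32) -> i32 {
    i + 1
}

// A `fn` type used as a parameter type.
fn apply(f: fn(i32) -> i32, arg: i32) -> i32 {
    f(arg)
}

fn main() {
    let six = apply(plus_one, 5);
    println!("six is: {}", six);
}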

Python Tutorial | Python Functions and Functional Programming

In this post, we will introduce the functional programming paradigm. We will learn about lambda expressions in Python, important functional functions, and the concept of partials.

Most of us have been introduced to Python as an object-oriented language, but Python functions are also useful tools for data scientists and programmers alike. While classes and objects are easy to start working with, there are other ways to write your Python code. Languages like Java can make it hard to move away from object-oriented thinking, but Python makes it easy.

Given that Python facilitates different approaches to writing code, a logical follow-up question is: what is a different way to write code? While there are several answers to this question, the most common alternative style of writing code is called functional programming. Functional programming gets its name from writing functions which provide the main source of logic in a program.

In this post, we will:

  • Explain the basics of functional programming by comparing it to object-oriented programming.

  • Cover why you might want to incorporate functional programming in your own code.

  • Show you how Python allows you to switch between the two.

Comparing object-oriented to functional

The easiest way to introduce functional programming is to compare it to something we’re already aware of: object-oriented programming. Suppose we wanted to create a line counter class that takes in a file, reads each line, then counts the total amount of lines in the file. Using a class, it could look something like the following:

class LineCounter:
    def __init__(self, filename):
        self.file = open(filename, 'r')
        self.lines = []

    def read(self):
        self.lines = [line for line in self.file]

    def count(self):
        return len(self.lines)

While not the best implementation, it does provide an insight into object-oriented design. Within the class, there are the familiar concepts of methods and properties. The properties set and retrieve the state of the object, and the methods manipulate that state.

For both these concepts to work, the object’s state must change over time. This change of state is evident in the lines property after calling the read() method. As an example, here’s how we would use this class:

# example_file.txt contains 100 lines.
lc = LineCounter('example_file.txt')
print(lc.lines)
>> []
print(lc.count())
>> 0

# The lc object must read the file to
# set the lines property.
lc.read()
# The `lc.lines` property has been changed.
# This is called changing the state of the lc
# object.
print(lc.lines)
>> ['Hello world!', ...]
print(lc.count())
>> 100

The ever-changing state of an object is both its blessing and curse. To understand why a changing state can be seen as a negative, we have to introduce an alternative. The alternative is to build the line counter as a series of independent functions.

def read(filename):
    with open(filename, 'r') as f:
        return [line for line in f]

def count(lines):
    return len(lines)

example_lines = read('example_log.txt')
lines_count = count(example_lines)

Working with pure functions

In the previous example, we were able to count the lines only with the use of functions. When we only use functions, we are applying a functional approach to programming which is, non-excitingly, called functional programming. The concepts behind functional programming require functions to be stateless and to rely only on their given inputs to produce an output.

The functions that meet the above criteria are called pure functions. Here’s an example to highlight the difference between pure and non-pure functions:

# Create a global variable `A`.
A = 5

def impure_sum(b):
    # Adds two numbers, but uses the
    # global `A` variable.
    return b + A

def pure_sum(a, b):
    # Adds two numbers, using
    # ONLY the local function inputs.
    return a + b

print(impure_sum(6))
>> 11

print(pure_sum(4, 6))
>> 10

The benefit of using pure functions over impure (non-pure) functions is the reduction of side effects. **Side effects** occur when there are changes performed within a function’s operation that are outside its scope. For example, they occur when we change the state of an object, perform any I/O operation, or even call print():

def read_and_print(filename):
    with open(filename) as f:
        # Side effect of opening a
        # file outside of function.
        data = [line for line in f]
    for line in data:
        # Writing to standard output with
        # print() is a side effect.
        print(line)

Programmers reduce side effects in their code to make it easier to follow, test, and debug. The more side effects a codebase has, the harder it is to step through a program and understand its sequence of execution.

While it’s tempting to try to eliminate all side effects, they’re often needed to make programming easier. If we were to ban all side effects, then you wouldn’t be able to read in a file, call print, or even assign a variable within a function. Advocates for functional programming understand this tradeoff, and try to eliminate side effects where possible without sacrificing development time.

The Lambda Expression

Instead of the def syntax for function declaration, we can use a lambda expression to write Python functions. The lambda syntax closely follows the def syntax, but it’s not a 1-to-1 mapping. Here’s an example of building a function that adds two integers:

# Using `def` (old way).
def old_add(a, b):
    return a + b

# Using `lambda` (new way).
new_add = lambda a, b: a + b

old_add(10, 5) == new_add(10, 5)
>> True

The lambda expression takes in a comma-separated sequence of inputs (like def). Then, immediately following the colon, it returns the expression without using an explicit return statement. Finally, when assigning the lambda expression to a variable, it acts exactly like a Python function, and can be called using the function call syntax: new_add().

If we didn’t assign lambda to a variable name, it would be called an anonymous function. These **anonymous functions** are extremely helpful, especially when using them as an input for another function. For example, the sorted() function takes in an optional key argument (a function) that describes how the items in a list should be sorted.

unsorted = [('b', 6), ('a', 10), ('d', 0), ('c', 4)]

# Sort on the second tuple value (the integer).
print(sorted(unsorted, key=lambda x: x[1]))
>> [('d', 0), ('c', 4), ('b', 6), ('a', 10)]

The Map Function

While the ability to pass functions in as arguments is not unique to Python, it is not something every programming language supports. Languages that allow this type of behavior are said to have **first-class functions**. Any language that contains first-class functions can be written in a functional style.
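
For example (a small illustrative sketch, not from the original post), a Python function can be bound to another name or handed to another function like any other value:

def shout(text):
    return text.upper() + '!'

def apply_twice(func, value):
    # `func` is just another argument: a first-class function.
    return func(func(value))

say = shout                        # Bind the function object to a new name.
print(say('hello'))                # HELLO!
print(apply_twice(shout, 'hi'))    # HI!!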

There are a set of important first-class functions that are commonly used within the functional paradigm. These functions take in a Python iterable, and, like sorted(), apply a function for each element in the list. Over the next few sections, we will examine each of these functions, but they all follow the general form of function_name(function_to_apply, iterable_of_elements).

The first function we’ll work with is the map() function. The map() function takes in a function and an iterable (i.e. a list), and creates a new iterable object, a special map object, with the function applied to every element.

# Pseudocode for map.
def map(func, seq):
    # Return `Map` object with
    # the function applied to every
    # element.
    return Map(
        func(x)
        for x in seq
    )

Here’s how we could use map to add 10 or 20 to every element in a list:

values = [1, 2, 3, 4, 5]

# Note: We convert the returned map object to
# a list data structure.
add_10 = list(map(lambda x: x + 10, values))
add_20 = list(map(lambda x: x + 20, values))

print(add_10)
>> [11, 12, 13, 14, 15]

print(add_20)
>> [21, 22, 23, 24, 25]

Note that it’s important to cast the return value from map() to a list object. The returned map object is difficult to work with if you’re expecting it to function like a list: first, printing it does not show each of its items, and second, you can only iterate over it once.
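
To illustrate both points, here's a quick sketch (the exact object representation printed will vary):

values = [1, 2, 3]
doubled = map(lambda x: x * 2, values)

print(doubled)        # <map object at 0x...>, not the items themselves.
print(list(doubled))  # [2, 4, 6]
print(list(doubled))  # [] -- the map object has already been consumed.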

The Filter Function

The second function we’ll work with is the filter() function. The filter() function takes in an iterable and a first-class function that must return a bool value, and creates a new iterable object (this time, a special filter object). The new object is a filtered iterable of all the elements for which the function returned True.

# Pseudocode for filter.
def filter(evaluate, seq):
    # Return `Filter` object containing
    # only the elements for which the
    # evaluate function returned True.
    return Filter(
        x for x in seq
        if evaluate(x) is True
    )

Here’s how we could filter odd or even values from a list:

values = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

# Note: We convert the returned filter object to
# a list data structure.
even = list(filter(lambda x: x % 2 == 0, values))
odd = list(filter(lambda x: x % 2 == 1, values))

print(even)
>> [2, 4, 6, 8, 10]

print(odd)
>> [1, 3, 5, 7, 9]

The Reduce Function

The last function we’ll look at is the reduce() function from the functools package. The reduce() function takes in an iterable, and then reduces the iterable to a single value. Reduce is different from filter() and map(), because reduce() takes in a function that has two input values.

Here’s an example of how we can use reduce() to sum all elements in a list.

from functools import reduce

values = [1, 2, 3, 4]

summed = reduce(lambda a, b: a + b, values)
print(summed)
>> 10

An interesting note to make is that you do not have to operate on the second value in the lambda expression. For example, you can write a function that always returns the first value of an iterable:

from functools import reduce

values = [1, 2, 3, 4, 5]

# By convention, we add `_` as a placeholder for an input
# we do not use.
first_value = reduce(lambda a, _: a, values)
print(first_value)
>> 1

Rewriting with list comprehensions

Because we eventually convert to lists, we should rewrite the map() and filter() functions using list comprehension instead. This is the more pythonic way of writing them, as we are taking advantage of the Python syntax for making lists. Here’s how you could translate the previous examples of map() and filter() to list comprehensions:

values = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

# Map.
add_10 = [x + 10 for x in values]
print(add_10)
>> [11, 12, 13, 14, 15, 16, 17, 18, 19, 20]

# Filter.
even = [x for x in values if x % 2 == 0]
print(even)
>> [2, 4, 6, 8, 10]

From the examples, you can see that we don’t need to add the lambda expressions. If you are looking to add map(), or filter() functions to your own code, this is usually the recommended way. However, in the next section, we’ll provide a case to still use the map() and filter() functions.

Writing Function Partials

Sometimes we want to use the behavior of a function, but decrease the number of arguments it takes. The purpose is to “save” one of the inputs, and create a new function that defaults the behavior using the saved input. Suppose we wanted to write a function that would always add 2 to any number:

def add_two(b):
    return 2 + b

print(add_two(4))
>> 6

The add_two function is similar to the general function, $f(a,b) = a + b$, only it defaults one of the arguments ($a = 2$). In Python, we can use partial from the functools module to set these argument defaults. partial takes in a function and “freezes” any number of args (or kwargs), starting from the first argument, then returns a new function with the default inputs.

from functools import partial

def add(a, b):
    return a + b

add_two = partial(add, 2)
add_ten = partial(add, 10)

print(add_two(4))
>> 6
print(add_ten(4))
>> 14

Partials can take in any function, including ones from the standard library.

# A partial that grabs IP addresses using
# the `map` function from the standard library.
extract_ips = partial(
    map,
    lambda x: x.split(' ')[0]
)
lines = read('example_log.txt')
ip_addresses = list(extract_ips(lines))

Summary

In this post, we introduced the paradigm of functional programming. We learned about the lambda expression in Python, important functional functions, and the concept of partials. Overall, we showed that Python provides a programmer with the tools to easily switch between functional programming and object-oriented programming.

What is Functional Programming?

Functional programming has become a really hot topic in the JavaScript world. Functional programming (often abbreviated FP) is the process of building software by composing pure functions, avoiding shared state, mutable data, and side-effects.

Functional programming is declarative rather than imperative, and application state flows through pure functions. Contrast with object oriented programming, where application state is usually shared and colocated with methods in objects.

Functional programming is a programming paradigm, meaning that it is a way of thinking about software construction based on some fundamental, defining principles (listed above). Other examples of programming paradigms include object oriented programming and procedural programming.

Functional code tends to be more concise, more predictable, and easier to test than imperative or object oriented code — but if you’re unfamiliar with it and the common patterns associated with it, functional code can also seem a lot more dense, and the related literature can be impenetrable to newcomers.

If you start googling functional programming terms, you’re going to quickly hit a brick wall of academic lingo that can be very intimidating for beginners. To say it has a learning curve is a serious understatement. But if you’ve been programming in JavaScript for a while, chances are good that you’ve used a lot of functional programming concepts & utilities in your real software.

Don’t let all the new words scare you away. It’s a lot easier than it sounds.

The hardest part is wrapping your head around all the unfamiliar vocabulary. There are a lot of ideas in the innocent looking definition above which all need to be understood before you can begin to grasp the meaning of functional programming:

  • Pure functions
  • Function composition
  • Avoid shared state
  • Avoid mutating state
  • Avoid side effects

In other words, if you want to know what functional programming means in practice, you have to start with an understanding of those core concepts.

A pure function is a function which:

  • Given the same inputs, always returns the same output, and
  • Has no side-effects

Pure functions have lots of properties that are important in functional programming, including referential transparency (you can replace a function call with its resulting value without changing the meaning of the program).
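
As a quick sketch of the difference (illustrative only, not from the original article):

// Pure: the output depends only on the input, and nothing outside is touched.
const double = x => x * 2;
console.log(double(21)); // 42, every time.

// Impure: the result depends on hidden, changing state.
let counter = 0;
const next = () => ++counter; // Mutates outside state: a side effect.
console.log(next()); // 1
console.log(next()); // 2 -- same (empty) input, different output.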

Function composition is the process of combining two or more functions in order to produce a new function or perform some computation. For example, the composition f . g (the dot means “composed with”) is equivalent to f(g(x)) in JavaScript. Understanding function composition is an important step towards understanding how software is constructed using functional programming.
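
Here is a minimal sketch of composition (the compose2 helper is illustrative, not a built-in):

const inc = n => n + 1;
const double = n => n * 2;

// "double composed with inc": apply inc first, then double.
const compose2 = (f, g) => x => f(g(x));

const doubleAfterInc = compose2(double, inc);
console.log(doubleAfterInc(20)); // double(inc(20)) === 42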

Shared State

Shared state is any variable, object, or memory space that exists in a shared scope, or as the property of an object being passed between scopes. A shared scope can include global scope or closure scopes. Often, in object oriented programming, objects are shared between scopes by adding properties to other objects.

For example, a computer game might have a master game object, with characters and game items stored as properties owned by that object. Functional programming avoids shared state — instead relying on immutable data structures and pure calculations to derive new data from existing data.

The problem with shared state is that in order to understand the effects of a function, you have to know the entire history of every shared variable that the function uses or affects.

Imagine you have a user object which needs saving. Your saveUser() function makes a request to an API on the server. While that’s happening, the user changes their profile picture with updateAvatar() and triggers another saveUser() request. On save, the server sends back a canonical user object that should replace whatever is in memory in order to sync up with changes that happen on the server or in response to other API calls.

Unfortunately, the second response gets received before the first response, so when the first (now outdated) response gets returned, the new profile pic gets wiped out in memory and replaced with the old one. This is an example of a race condition — a very common bug associated with shared state.

Another common problem associated with shared state is that changing the order in which functions are called can cause a cascade of failures because functions which act on shared state are timing dependent:

// With shared state, the order in which function calls are made
// changes the result of the function calls.
const x = {
  val: 2
};

const x1 = () => x.val += 1;

const x2 = () => x.val *= 2;

x1();
x2();

console.log(x.val); // 6

// This example is exactly equivalent to the above, except...
const y = {
  val: 2
};

const y1 = () => y.val += 1;

const y2 = () => y.val *= 2;

// ...the order of the function calls is reversed...
y2();
y1();

// ... which changes the resulting value:
console.log(y.val); // 5

Timing dependency example

When you avoid shared state, the timing and order of function calls don’t change the result of calling the function. With pure functions, given the same input, you’ll always get the same output. This makes function calls completely independent of other function calls, which can radically simplify changes and refactoring. A change in one function, or the timing of a function call won’t ripple out and break other parts of the program.

const x = {
  val: 2
};

const x1 = x => Object.assign({}, x, { val: x.val + 1});

const x2 = x => Object.assign({}, x, { val: x.val * 2});

console.log(x1(x2(x)).val); // 5


const y = {
  val: 2
};

// Since there are no dependencies on outside variables,
// we don't need different functions to operate on different
// variables.

// this space intentionally left blank


// Because the functions don't mutate, you can call these
// functions as many times as you want, in any order, 
// without changing the result of other function calls.
x2(y);
x1(y);

console.log(x1(x2(y)).val); // 5

In the example above, we use Object.assign() and pass in an empty object as the first parameter to copy the properties of x instead of mutating it in place. In this case, it would have been equivalent to simply create a new object from scratch, without Object.assign(), but this is a common pattern in JavaScript to create copies of existing state instead of using mutations, which we demonstrated in the first example.

If you look closely at the console.log() statements in this example, you should notice something I’ve mentioned already: function composition. Recall from earlier, function composition looks like this: f(g(x)). In this case, we replace f() and g() with x1() and x2() for the composition: x1 . x2.

Of course, if you change the order of the composition, the output will change. Order of operations still matters. f(g(x)) is not always equal to g(f(x)), but what doesn’t matter anymore is what happens to variables outside the function — and that’s a big deal. With impure functions, it’s impossible to fully understand what a function does unless you know the entire history of every variable that the function uses or affects.
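
To make that concrete, here is a small sketch in the same style as the pure example above:

const addOne = o => Object.assign({}, o, { val: o.val + 1 });
const double = o => Object.assign({}, o, { val: o.val * 2 });

console.log(addOne(double({ val: 2 })).val); // (2 * 2) + 1 === 5
console.log(double(addOne({ val: 2 })).val); // (2 + 1) * 2 === 6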

Remove function call timing dependency, and you eliminate an entire class of potential bugs.

Immutability

An immutable object is an object that can’t be modified after it’s created. Conversely, a mutable object is any object which can be modified after it’s created.

Immutability is a central concept of functional programming because without it, the data flow in your program is lossy. State history is abandoned, and strange bugs can creep into your software.

In JavaScript, it’s important not to confuse const with immutability. const creates a variable name binding which can’t be reassigned after creation. const does not create immutable objects. You can’t reassign the binding to refer to a different object, but you can still change the properties of the object it refers to, which means that bindings created with const are mutable, not immutable.
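
A small sketch of the distinction:

const point = { x: 1 };

point.x = 2;          // Allowed: `const` froze the binding, not the object.
console.log(point.x); // 2

// point = { x: 3 };  // TypeError: Assignment to constant variable.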

Immutable objects can’t be changed at all. You can make a value truly immutable by deep freezing the object. JavaScript has a method that freezes an object one-level deep:

const a = Object.freeze({
  foo: 'Hello',
  bar: 'world',
  baz: '!'
});

a.foo = 'Goodbye';
// Error: Cannot assign to read only property 'foo' of object Object

But frozen objects are only superficially immutable. For example, the following object is mutable:

const a = Object.freeze({
  foo: { greeting: 'Hello' },
  bar: 'world',
  baz: '!'
});

a.foo.greeting = 'Goodbye';

console.log(`${ a.foo.greeting }, ${ a.bar }${a.baz}`);

As you can see, the top level primitive properties of a frozen object can’t change, but any property which is also an object (including arrays, etc…) can still be mutated — so even frozen objects are not immutable unless you walk the whole object tree and freeze every object property.
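
If you do need deep immutability for a plain object, one common approach is to freeze recursively. This deepFreeze helper is an illustrative sketch (not a standard API), and it doesn't guard against cyclic references:

const deepFreeze = obj => {
  Object.values(obj)
    .filter(value => typeof value === 'object' && value !== null)
    .forEach(deepFreeze); // Freeze nested objects (and arrays) first.
  return Object.freeze(obj);
};

const a = deepFreeze({
  foo: { greeting: 'Hello' },
  bar: 'world',
  baz: '!'
});

a.foo.greeting = 'Goodbye'; // Silently ignored (throws in strict mode).
console.log(`${ a.foo.greeting }, ${ a.bar }${ a.baz }`); // Hello, world!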

In many functional programming languages, there are special immutable data structures called trie data structures (pronounced “tree”) which are effectively deep frozen — meaning that no property can change, regardless of the level of the property in the object hierarchy.

Tries use structural sharing to share reference memory locations for all the parts of the object which are unchanged after an object has been copied by an operator, which uses less memory, and enables significant performance improvements for some kinds of operations.

For example, you can use identity comparisons at the root of an object tree for comparisons. If the identity is the same, you don’t have to walk the whole tree checking for differences.

There are several libraries in JavaScript which take advantage of tries, including Immutable.js and Mori.

I have experimented with both, and tend to use Immutable.js in large projects that require significant amounts of immutable state.
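
For a rough feel of the style, here is a tiny sketch using Immutable.js's Map (assuming the immutable package is installed; see its documentation for the full API):

const { Map } = require('immutable');

const map1 = Map({ a: 1, b: 2 });
const map2 = map1.set('b', 50); // Returns a new map; map1 is untouched.

console.log(map1.get('b')); // 2
console.log(map2.get('b')); // 50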

Side Effects

A side effect is any application state change that is observable outside the called function other than its return value. Side effects include:

  • Modifying any external variable or object property (e.g., a global variable, or a variable in the parent function scope chain)
  • Logging to the console
  • Writing to the screen
  • Writing to a file
  • Writing to the network
  • Triggering any external process
  • Calling any other functions with side-effects

Side effects are mostly avoided in functional programming, which makes the effects of a program much easier to understand, and much easier to test.

Haskell and other functional languages frequently isolate and encapsulate side effects from pure functions using monads. The topic of monads is deep enough to write a book on, so we’ll save that for later.

What you do need to know right now is that side-effect actions need to be isolated from the rest of your software. If you keep your side effects separate from the rest of your program logic, your software will be much easier to extend, refactor, debug, test, and maintain.

This is the reason that most front-end frameworks encourage users to manage state and component rendering in separate, loosely coupled modules.

Reusability Through Higher Order Functions

Functional programming tends to reuse a common set of functional utilities to process data. Object oriented programming tends to colocate methods and data in objects. Those colocated methods can only operate on the type of data they were designed to operate on, and often only the data contained in that specific object instance.

In functional programming, any type of data is fair game. The same map() utility can map over objects, strings, numbers, or any other data type because it takes a function as an argument which appropriately handles the given data type. FP pulls off its generic utility trickery using higher order functions.

JavaScript has first class functions, which allows us to treat functions as data — assign them to variables, pass them to other functions, return them from functions, etc…

A higher order function is any function which takes a function as an argument, returns a function, or both. Higher order functions are often used to:

  • Abstract or isolate actions, effects, or async flow control using callback functions, promises, monads, etc…
  • Create utilities which can act on a wide variety of data types
  • Partially apply a function to its arguments or create a curried function for the purpose of reuse or function composition
  • Take a list of functions and return some composition of those input functions
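
For instance, here is a minimal higher order function, sketched for illustration (withLogging is not from the article):

// Takes a function and returns a new function: a higher order function.
const withLogging = fn => (...args) => {
  console.log('calling with', args);
  return fn(...args);
};

const add = (a, b) => a + b;
const loggedAdd = withLogging(add);

console.log(loggedAdd(2, 3)); // Logs the arguments, then prints 5.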

Containers, Functors, Lists, and Streams

A functor is something that can be mapped over. In other words, it’s a container which has an interface which can be used to apply a function to the values inside it. When you see the word functor, you should think “mappable”.

Earlier we learned that the same map() utility can act on a variety of data types. It does that by lifting the mapping operation to work with a functor API. The important flow control operations used by map() take advantage of that interface. In the case of Array.prototype.map(), the container is an array, but other data structures can be functors, too — as long as they supply the mapping API.

Let’s look at how Array.prototype.map() allows you to abstract the data type from the mapping utility to make map() usable with any data type. We’ll create a simple double() mapping that simply multiplies any passed in values by 2:

const double = n => n * 2;
const doubleMap = numbers => numbers.map(double);
console.log(doubleMap([2, 3, 4])); // [ 4, 6, 8 ]

What if we want to operate on targets in a game to double the number of points they award? All we have to do is make a subtle change to the double() function that we pass into map(), and everything still works:

const double = n => n.points * 2;

const doubleMap = numbers => numbers.map(double);

console.log(doubleMap([
  { name: 'ball', points: 2 },
  { name: 'coin', points: 3 },
  { name: 'candy', points: 4}
])); // [ 4, 6, 8 ]

The concept of using abstractions like functors & higher order functions in order to use generic utility functions to manipulate any number of different data types is important in functional programming. You’ll see a similar concept applied in all sorts of different ways.

“A list expressed over time is a stream.”

All you need to understand for now is that arrays and functors are not the only way this concept of containers and values in containers applies. For example, an array is just a list of things. A list expressed over time is a stream — so you can apply the same kinds of utilities to process streams of incoming events — something that you’ll see a lot when you start building real software with FP.

Declarative vs Imperative

Functional programming is a declarative paradigm, meaning that the program logic is expressed without explicitly describing the flow control.

Imperative programs spend lines of code describing the specific steps used to achieve the desired results — the flow control: How to do things.

Declarative programs abstract the flow control process, and instead spend lines of code describing the data flow: What to do. The how gets abstracted away.

For example, this imperative mapping takes an array of numbers and returns a new array with each number multiplied by 2:

const doubleMap = numbers => {
  const doubled = [];
  for (let i = 0; i < numbers.length; i++) {
    doubled.push(numbers[i] * 2);
  }
  return doubled;
};

console.log(doubleMap([2, 3, 4])); // [4, 6, 8]

Imperative data mapping

This declarative mapping does the same thing, but abstracts the flow control away using the functional Array.prototype.map() utility, which allows you to more clearly express the flow of data:

const doubleMap = numbers => numbers.map(n => n * 2);

console.log(doubleMap([2, 3, 4])); // [4, 6, 8]

Imperative code frequently utilizes statements. A statement is a piece of code which performs some action. Examples of commonly used statements include for, if, switch, throw, etc…

Declarative code relies more on expressions. An expression is a piece of code which evaluates to some value. Expressions are usually some combination of function calls, values, and operators which are evaluated to produce the resulting value.

These are all examples of expressions:

2 * 2
doubleMap([2, 3, 4])
Math.max(4, 3, 2)

Usually in code, you’ll see expressions being assigned to an identifier, returned from functions, or passed into a function. Before being assigned, returned, or passed, the expression is first evaluated, and the resulting value is used.

Conclusion

Functional programming favors:

  • Pure functions instead of shared state & side effects
  • Immutability over mutable data
  • Function composition over imperative flow control
  • Lots of generic, reusable utilities that use higher order functions to act on many data types instead of methods that only operate on their colocated data
  • Declarative rather than imperative code (what to do, rather than how to do it)
  • Expressions over statements
  • Containers & higher order functions over ad-hoc polymorphism