Rust Vs. Haskell: Which Language is Best for API Design?

Today, we’re going to compare two popular programming languages that might not immediately spring to mind when you think of designing an API. We’ll be doing a side-by-side comparison of Haskell vs. Rust to determine which language is best for API design.

When it comes to designing, building, and maintaining an API, it’s not immediately obvious which development tools and programming languages you should use. Given that APIs are essentially the nervous system of modern apps, it makes sense that there are copious resources out there for programmers and developers.

Knowing which development tools to use to create your own API depends on your level of technical expertise. Some development environments offer a barebones command-line programming environment. Others function more like an app, with fancy GUIs and lots of bells and whistles, with code debuggers and copious built-in libraries.

Introducing Haskell

Haskell is one of the most powerful and reliable functional programming languages out there. Haskell’s emphasis on high-level, declarative programming lets developers focus on getting results rather than getting bogged down in endless minutiae.

Programming in Haskell also allows for fast prototyping, thanks to its excellent compiler, getting apps and software onto the market much more quickly than other development languages. This makes Haskell a good fit for smaller startups or those looking to launch their first app.

Meet Rust

Mozilla is dedicated to developing tools for and evolving the web using open standards, starting with their flagship browser, Firefox. Most major browsers, including Firefox, are written largely in C++. Firefox features 12,900,992 lines of code; Google Chrome has 4,490,488. While this makes the programs fast, some argue it makes them less safe. The memory manipulations of C and C++ are not checked for validity, and when something goes wrong it can lead to program crashes, memory leaks, buffer overflows, segmentation faults, and null pointer dereferences.

Rust defaults to writing “safe code”: memory is allocated when an object is created and automatically deallocated when its owner goes out of scope, with these ownership rules enforced by the compiler at compile time.
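To make that concrete, here’s a minimal, hedged sketch of ownership in action (the names and values are invented for illustration): when a value’s owner goes out of scope its memory is freed automatically, and moving a value transfers that responsibility.

```rust
// Ownership sketch: memory is freed when the owning variable goes out
// of scope, and passing a value by move transfers ownership.
fn total_len(v: Vec<String>) -> usize {
    v.iter().map(|s| s.len()).sum()
} // `v` is dropped here; its heap memory is deallocated automatically

fn main() {
    let names = vec![String::from("ada"), String::from("grace")];
    let total = total_len(names); // ownership of `names` moves into the call
    // println!("{:?}", names);   // compile error: `names` was moved
    assert_eq!(total, 8);
    println!("total characters: {}", total);
}
```

Uncommenting the last println! produces a compile-time error rather than a runtime crash — the class of bug C and C++ would only surface at runtime, if at all.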

This security and efficiency are some of the reasons why Rust is one of the most beloved programming languages among developers and programmers, as shown in this Stack Overflow survey.

Haskell Vs. Rust

According to this StackShare chart, Rust and Haskell have a number of similarities and a few notable differences. For starters, Rust is slightly more popular, with 416 developers using Rust as opposed to 347 developing with Haskell.

Due to its popularity, there’s a great deal more Rust content on the Internet than there is for Haskell. There are over 23,000 references to Rust on Hacker News, while Haskell only has 763. Haskell has more than three times as much content on Stack Overflow as Rust, however, thanks to its longevity.

The advantages of Rust, according to Stack Overflow programmers, include:

  • Guaranteed memory safety (75 votes)
  • Speed (64 votes)
  • Minimal runtime (46 votes)
  • Open source (46 votes)
  • Pattern matching (38 votes)
  • Type inference (36 votes)
  • Algebraic data types (34 votes)
  • Concurrent (34 votes)
  • Efficient C bindings (28 votes)
  • Practical (28 votes)

The advantages of Haskell, on the other hand, include:

  • Purely-functional programming (66 votes)
  • Statically typed (53 votes)
  • Type-safe (44 votes)
  • Great community (29 votes)
  • Open source (29 votes)
  • Composable (28 votes)
  • Built-in concurrency (24 votes)
  • Built-in parallelism (22 votes)
  • Referentially transparent (17 votes)
  • Generics (15 votes)

The cons of using Rust include:

  • Ownership learning curve
  • Variable shadowing
  • Hard to learn

The cons of using Haskell:

  • No good ABI
  • Unpredictable performance
  • Poor documentation for libraries
  • Poor packaging for apps
  • Confusing error messages
  • Slow compiling
  • No best practices
  • Too many distractions in language extensions

Programs that integrate with Rust:

  • Remacs
  • Sentry
  • Iron
  • Leaf
  • Pencil
  • Ruru
  • Sapper
  • Helix
  • Tokamak
  • Rocket
  • Airbrake
  • Yew Framework
  • Dependabot
  • Tower Web

Programs that integrate with Haskell:

  • Eta
  • Yesod
  • Rollbar
  • Miso
  • Buddy

Finally, take a look at this Google Trends graph of interest over time in Rust vs. Haskell:

As you can see, while both programming languages have their ups and downs, Rust is considerably more popular than Haskell. This means there are more resources available for Rust, which makes it a better pick for building APIs if you want something that works straight out of the gate.

Haskell, however, is adept at fast prototyping and building frameworks. As an added benefit, the code you write in a Haskell prototype can become part of the finished product.

Haskell vs. Rust: Which Is Better For Designing APIs?

Now that we know a bit more about Haskell and Rust, let’s delve into the heart of the matter.

Which programming language is best for API design? That will depend on what you’re trying to build, as well as how comfortable you are with programming.

Let’s take a look at some specific instances, to help you figure out which approach is right for your API design.

Designing A RESTful API With Haskell

Designing an API with a functional programming language may seem like a lot to take on. It doesn’t have to be, however, as there are third-party tools that make web development with Haskell easy. For example, the SNAP framework acts as a translator, letting Haskell communicate with the web easily and painlessly.

Getting Started With Haskell and SNAP

You’re going to start off by running a few setup commands from inside a clone of the tutorial repository:

    cd snap-api-tutorial
    git checkout baseline
    cabal sandbox init
    cabal install snap
    cabal install --dependencies-only

Creating An API Snaplet

Snaplets are composable pieces of a SNAP application; SNAP applications are created by nesting snaplets. Look at src/Application.hs and you’ll notice the application initializer app is composed with the makeSnaplet function.

We’re going to start by making a snaplet called Api. This snaplet is responsible for creating the top-level /api namespace. You’re going to load a few language extensions, import the necessary modules, and define the Api data type. Then you’ll define the snaplet’s initializer.

    -- new file: src/api/Core.hs
    {-# LANGUAGE OverloadedStrings #-}
    module Api.Core where
    import Snap.Snaplet
    data Api = Api
    apiInit :: SnapletInit b Api
    apiInit = makeSnaplet "api" "Core Api" Nothing $ return Api

Notice the b in the apiInit :: SnapletInit b Api line instead of a concrete App type. This means the snaplet can be nested in any base application, not just App. This is the basis of SNAP composability.

Now you’re going to tell the App datatype to expect an Api snaplet.

    -- src/Application.hs
    import Api.Core (Api(Api))
    data App = App { _api :: Snaplet Api }

Then, you’ll nest the Api snaplet within the App snaplet, using nestSnaplet:

    nestSnaplet :: ByteString -> Lens v (Snaplet v1) -> SnapletInit b v1 -> Initializer b v (Snaplet v1)

The first argument defines the root base URL for the snaplet’s routes, /api in this instance. The second argument is a Lens identifying the snaplet, generated by the makeLenses function in src/Application.hs. The final argument is the snaplet initializer apiInit we defined previously.

    -- src/Site.hs
    import Api.Core (Api(Api), apiInit)

    app :: SnapletInit App App
    app = makeSnaplet "app" "An snaplet example application." Nothing $ do
      api <- nestSnaplet "api" api apiInit
      addRoutes routes
      return $ App api

Now you’ve nested your first Api snaplet. It doesn’t have any routes yet, however, so you can’t tell whether it’s working. Adding an /api/status route that always responds with 200 OK will let you see output from this snaplet.

Snap route handlers normally have a type of the form Handler b v (). Handler is an instance of MonadSnap, which provides stateful access to the HTTP request and response.

All of the request and response modifications take place inside a Handler monad, so we’ll define respondOk :: Handler b Api ().

    -- src/api/Core.hs
    import           Snap.Core
    import qualified Data.ByteString.Char8 as B

    apiRoutes :: [(B.ByteString, Handler b Api ())]
    apiRoutes = [("status", method GET respondOk)]

    respondOk :: Handler b Api ()
    respondOk = modifyResponse $ setResponseCode 200

    apiInit :: SnapletInit b Api
    apiInit = makeSnaplet "api" "Core Api" Nothing $ do
        addRoutes apiRoutes
        return Api

Now look at the type signatures for modifyResponse and setResponseCode:

    modifyResponse :: (MonadSnap m) => (Response -> Response) -> m ()
    setResponseCode :: Int -> Response -> Response

This means setResponseCode takes an integer and returns a Response-modifying function, which can be passed to modifyResponse; modifyResponse then performs the modification inside the monad.

Now run the following code:

    $ cabal run -- -p 9000
    $ curl -I -XGET "localhost:9000/api/status"

    HTTP/1.1 200 OK
    Server: Snap
    Date: ...
    Transfer-Encoding: chunked

 This should give you your first response.

A Todo Snaplet

Now that we’ve seen how to get a simple response out of a snaplet, let’s learn how to make a Todo snaplet inside of the Api snaplet. Then we’ll connect that Todo snaplet to a database and write GET and POST handlers for /api/todos, which allow you to create and fetch todo items.

We’ll start with some boilerplate code, which will define our snaplet, then nest it inside of the API snaplet.

   -- new file: src/api/services/TodoService.hs

    {-# LANGUAGE OverloadedStrings #-}

    module Api.Services.TodoService where

    import Api.Types (Todo(Todo))
    import Control.Lens (makeLenses)
    import Snap.Core
    import Snap.Snaplet

    data TodoService = TodoService

    todoServiceInit :: SnapletInit b TodoService
    todoServiceInit = makeSnaplet "todos" "Todo Service" Nothing $ return TodoService

    -- src/api/Core.hs

    {-# LANGUAGE TemplateHaskell #-}

    import Control.Lens (makeLenses)
    import Api.Services.TodoService(TodoService(TodoService), todoServiceInit)
    -- ...

    data Api = Api { _todoService :: Snaplet TodoService }

    makeLenses ''Api
    -- ...

    apiInit :: SnapletInit b Api
    apiInit = makeSnaplet "api" "Core Api" Nothing $ do
      ts <- nestSnaplet "todos" todoService todoServiceInit
      addRoutes apiRoutes
      return $ Api ts

Next, we’ll nest a PostgreSQL snaplet, provided by snaplet-postgresql-simple, into the TodoService. This gives the TodoService a connection to the database and makes queries possible. Then you’re going to import Aeson, which encodes your responses into JSON via the Todo type’s ToJSON instance.

   -- src/api/services/TodoService.hs

    {-# LANGUAGE TemplateHaskell #-}
    {-# LANGUAGE FlexibleInstances #-}

    import Control.Lens (makeLenses)
    import Control.Monad.State.Class (get)
    import Data.Aeson (encode)
    import Snap.Snaplet.PostgresqlSimple
    import qualified Data.ByteString.Char8 as B
    -- ...

    data TodoService = TodoService { _pg :: Snaplet Postgres }

    makeLenses ''TodoService
    -- ...

    todoServiceInit :: SnapletInit b TodoService
    todoServiceInit = makeSnaplet "todos" "Todo Service" Nothing $ do
      pg <- nestSnaplet "pg" pg pgsInit
      return $ TodoService pg

    instance HasPostgres (Handler b TodoService) where
      getPostgresState = with pg get

A little bit of SQL sets up the database and inserts a few lines of test data:

    CREATE DATABASE snaptutorial;
    CREATE TABLE todos (id SERIAL, text TEXT);
    INSERT INTO todos (text) VALUES ('First todo');
    INSERT INTO todos (text) VALUES ('Second todo');

Finally, you’re going to configure the postgres snaplet by editing its configuration file (snaplets/postgresql-simple/devel.cfg, generated the first time the app runs) with your database’s connection details.

Now you’re ready to run your first GET to /api/todos. We’ll retrieve all of the rows of the todos table, convert them into Todo values, then serialize them as JSON.

First, you’re going to use the query_ function, which takes a SQL string and returns a monadic array of any type that implements the FromRow typeclass:

    query_ :: (HasPostgres m, FromRow r) => Query -> m [r]

Next, you’re going to use writeLBS together with the encode function to write a JSON string to the response body:

    writeLBS :: MonadSnap m => ByteString -> m ()

This function calls the modifyResponse function mentioned earlier.

Then you’re going to use the execute function (the write-oriented counterpart of query_) to insert the value gathered from getPostParam into the database:

    todoRoutes :: [(B.ByteString, Handler b TodoService ())]
    todoRoutes = [("/", method GET getTodos)
                 ,("/", method POST createTodo)]

    getTodos :: Handler b TodoService ()
    getTodos = do
      todos <- query_ "SELECT * FROM todos"
      modifyResponse $ setHeader "Content-Type" "application/json"
      writeLBS . encode $ (todos :: [Todo])

    createTodo :: Handler b TodoService ()
    createTodo = do
      todoTextParam <- getPostParam "text"
      _ <- execute "INSERT INTO todos (text) VALUES (?)" (Only todoTextParam)
      modifyResponse $ setResponseCode 201

Here, Only is postgresql-simple’s wrapper for single-value collections.

Here’s the finished version:

    $ cabal run -- -p 9000
    $ curl -i -XPOST --data "text=Third todo" "localhost:9000/api/todos"

    HTTP/1.1 201 Created
    Server: Snap
    Date: ...
    Transfer-Encoding: chunked

    $ psql snaptutorial
    snaptutorial=# SELECT * FROM todos;

     id |    text
    ----+-------------
      1 | First todo
      2 | Second todo
      3 | Third todo

Now you have a working REST API written in Haskell with SNAP. If you want to know more about the SNAP framework, you can read the SNAP documentation or visit #snapframework on freenode.

Designing a REST API in Rust

Now that we’ve learned how to set up an API in Haskell, let’s turn our attention to Rust. Seeing how to set up an API in Rust will help give you an idea of which language might be best for designing your API.

First off, we’re going to load some crates, which are Rust’s libraries. We’ll be using Rocket to create the API and Diesel to handle the database. Diesel works with Postgres, MySQL, and SQLite.

Define Your Dependencies

Before you begin, you’re going to define your dependencies:

    [dependencies]
    rocket = "0.3.6"
    rocket_codegen = "0.3.6"
    diesel = { version = "1.0.0", features = ["postgres"] }
    dotenv = "0.9.0"
    r2d2-diesel = "1.0"
    r2d2 = "0.8"
    serde = "1.0"
    serde_derive = "1.0"
    serde_json = "1.0"

    [dependencies.rocket_contrib]
    version = "0.3.6"
    default-features = false
    features = ["json"]

You’ll notice that a number of crates are being loaded. We’ve already mentioned Rocket and Diesel. rocket_codegen provides Rocket’s code-generation macros, while dotenv allows variables to be read from an external file. r2d2 and r2d2-diesel manage a pool of database connections for Diesel. Last but not least, serde, serde_derive, and serde_json are used for serializing and deserializing the data sent to and retrieved from the REST API.

In this instance, postgres has been specified so that only the Postgres modules of the diesel crate are included. If you wanted to access another database, or multiple databases, you would specify them here, or eliminate the features list altogether.

One final note: to use Rocket, you need the nightly build of Rust, since it relies on features not included in the stable builds.

Accessing The Database With Diesel

First, we’re going to set up Diesel. Once that’s done, you’ll define the schema you’ll use to construct the application.

To set up Diesel, begin by installing the Diesel CLI. If you don’t know how to do that, here’s a guide on getting started with Diesel.

We’ll be using Postgres, since Diesel doesn’t support every MySQL feature. Postgres is fast and easy to set up and will give you all of the features you need to create a database.

Create A Table

Start off by setting the DATABASE_URL used to connect to Postgres by simply writing it to the .env file:

    echo DATABASE_URL=postgres://postgres:password@localhost/rust-web-with-rocket > .env

Now you’re going to run diesel setup to create the database and an empty migrations folder to use later.

You will be modeling people who can be added to, retrieved, modified, or deleted from the database. You’re going to need a table to store them in. To do so, you’re going to create your first migration.

    diesel migration generate create_people

This creates two new files, stored in the migrations folder. up.sql is for upgrading and is where you’ll place the SQL that creates your table; down.sql is for downgrading, so you can undo the upgrade if need be.

In this example, you’re going to create the people table.

    CREATE TABLE people (
      id SERIAL PRIMARY KEY,
      first_name VARCHAR NOT NULL,
      last_name VARCHAR NOT NULL,
      age INT NOT NULL,
      profession VARCHAR NOT NULL,
      salary INT NOT NULL
    );

To undo the table creation, you only have to use:

    DROP TABLE people;

To execute a migration, run:

    diesel migration run
If you need to undo the migration, simply use:

    diesel migration revert

Map To Structs

Now that your people table is created, you’re ready to start adding data to it. Seeing as how Diesel is an ORM, you’re going to need to translate the table’s rows into something Rust can work with. You’re going to use a struct to do that.

    use super::schema::people;

    #[derive(Queryable, AsChangeset, Serialize, Deserialize)]
    #[table_name = "people"]
    pub struct Person {
        pub id: i32,
        pub first_name: String,
        pub last_name: String,
        pub age: i32,
        pub profession: String,
        pub salary: i32,
    }

You’re going to write a struct that represents each record in the people table, otherwise known as a person. You’re going to use three attributes particular to Diesel: #[derive(Queryable)], #[derive(AsChangeset)], and #[table_name].

#[derive(Queryable)] generates the code that retrieves a Person from the database. #[derive(AsChangeset)] makes it possible to use update.set later, if you so choose. #[table_name = "people"] names the table explicitly, since Diesel would otherwise infer the plural of Person as persons rather than people. If you were using a name with a regular plural, this step wouldn’t be necessary.

The other attributes allow JSON data to interact with the REST API. #[derive(Serialize)] and #[derive(Deserialize)] both come from the serde crate. We will delve more fully into these derives a little later on.
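As a general illustration of what #[derive(...)] does (using only standard-library traits here, not Diesel’s or serde’s), the compiler generates trait implementations you would otherwise write by hand:

```rust
// Derives ask the compiler (or a crate's procedural macro, in the case
// of Queryable and Serialize) to generate trait impls for the struct.
#[derive(Debug, Clone, PartialEq)]
struct Person {
    first_name: String,
    age: i32,
}

fn main() {
    let a = Person { first_name: String::from("Ada"), age: 36 };
    let b = a.clone();  // from derive(Clone)
    assert_eq!(a, b);   // from derive(PartialEq)
    println!("{:?}", a); // from derive(Debug)
}
```

Diesel’s and serde’s derives work the same way; they just generate database-mapping and serialization code instead of these standard traits.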

Now you’re going to create a schema: specifically, a Rust schema using the table! macro, which handles the Rust-to-database mapping.

Run the following command:

diesel print-schema > src/

This generates the following file:

    table! {
        people (id) {
            id -> Int4,
            first_name -> Varchar,
            last_name -> Varchar,
            age -> Int4,
            profession -> Varchar,
            salary -> Int4,
        }
    }
Now you’re going to run SELECT and UPDATE queries, using the Person struct we created earlier. DELETE doesn’t require a struct as it only requires the record’s ID. You’re also going to use INSERT, but in a different way than what’s recommended in the Diesel documentation.

    #[derive(Insertable)]
    #[table_name = "people"]
    struct InsertablePerson {
        first_name: String,
        last_name: String,
        age: i32,
        profession: String,
        salary: i32,
    }

    impl InsertablePerson {
        fn from_person(person: Person) -> InsertablePerson {
            InsertablePerson {
                first_name: person.first_name,
                last_name: person.last_name,
                age: person.age,
                profession: person.profession,
                salary: person.salary,
            }
        }
    }
InsertablePerson is almost identical to the Person struct, with one key difference: there’s no id field. The id column is populated automatically by the database when you insert, so it’s not needed here.

Finally, #[derive(Insertable)] is added to generate the code to insert a new record.
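As an aside, this kind of struct-to-struct mapping is often expressed in idiomatic Rust with the standard From trait; here’s a hedged sketch using a trimmed-down pair of structs (the fields are invented for illustration, and this is an alternative to, not the tutorial’s, from_person):

```rust
// Sketch: Person -> InsertablePerson via the standard From trait.
struct Person {
    id: i32, // dropped during conversion; the database assigns it
    first_name: String,
    last_name: String,
}

struct InsertablePerson {
    first_name: String,
    last_name: String, // note: no id field
}

impl From<Person> for InsertablePerson {
    fn from(p: Person) -> Self {
        InsertablePerson { first_name: p.first_name, last_name: p.last_name }
    }
}

fn main() {
    let p = Person { id: 7, first_name: "Ada".into(), last_name: "Lovelace".into() };
    let ins: InsertablePerson = p.into(); // id is discarded here
    assert_eq!(ins.first_name, "Ada");
    println!("ok");
}
```

Implementing From also gives you .into() for free, which reads naturally at call sites.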

Running Queries

Now that your tables are created and the structs are mapped to it, you’re going to put them into action.

Here’s how you implement the basic REST API:

    use diesel;
    use diesel::prelude::*;
    use schema::people;
    use people::Person;

    pub fn all(connection: &PgConnection) -> QueryResult<Vec<Person>> {
        people::table.load::<Person>(connection)
    }

    pub fn get(id: i32, connection: &PgConnection) -> QueryResult<Person> {
        people::table.find(id).get_result::<Person>(connection)
    }

    pub fn insert(person: Person, connection: &PgConnection) -> QueryResult<Person> {
        diesel::insert_into(people::table)
            .values(&InsertablePerson::from_person(person))
            .get_result(connection)
    }

    pub fn update(id: i32, person: Person, connection: &PgConnection) -> QueryResult<Person> {
        diesel::update(people::table.find(id))
            .set(&person)
            .get_result(connection)
    }

    pub fn delete(id: i32, connection: &PgConnection) -> QueryResult<usize> {
        diesel::delete(people::table.find(id))
            .execute(connection)
    }

diesel is used to access the insert_into, update, and delete functions. diesel::prelude::* brings in a range of structs and modules that are useful when working with Diesel; in this example, we’re using PgConnection and QueryResult. We’re also including schema::people so we can access the people table from Rust and run methods on it.

Let’s look at one of these functions more closely:

    pub fn get(id: i32, connection: &PgConnection) -> QueryResult<Person> {
        people::table.find(id).get_result::<Person>(connection)
    }

In this example, a QueryResult is returned from each function. Diesel returns QueryResult<T> from each method; it is an abbreviation for Result<T, Error>, thanks to this line:

    pub type QueryResult<T> = Result<T, Error>;

Using QueryResult lets us handle whatever goes wrong if the query fails for any reason. If you wanted to return a Person directly from a function instead, you’d call expect on the result to surface the error immediately.

Since we’re using Postgres, the PgConnection type is used. There are counterparts for other databases, like MysqlConnection.
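Since QueryResult<T> is just Result<T, Error>, the map/map_err plumbing the handlers rely on can be sketched with plain standard-library types (DbError here is an invented stand-in enum, not Diesel’s real error type):

```rust
// Stand-in for Diesel's error type, for illustration only.
#[derive(Debug, PartialEq)]
enum DbError { NotFound, Other }

// Mirrors Diesel's `pub type QueryResult<T> = Result<T, Error>;`
type QueryResult<T> = Result<T, DbError>;

// Fake "query": id 1 exists, everything else is missing.
fn get(id: i32) -> QueryResult<&'static str> {
    match id {
        1 => Ok("First todo"),
        _ => Err(DbError::NotFound),
    }
}

// Map the success into a response body and the failure into an HTTP
// status code, like the .map / .map_err chains in the Rocket handlers.
fn handler(id: i32) -> Result<String, u16> {
    get(id)
        .map(|row| format!("{{\"text\": \"{}\"}}", row))
        .map_err(|e| match e { DbError::NotFound => 404, _ => 500 })
}

fn main() {
    assert_eq!(handler(1), Ok(String::from("{\"text\": \"First todo\"}")));
    assert_eq!(handler(2), Err(404));
    println!("ok");
}
```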

Here’s another example:

    pub fn insert(person: Person, connection: &PgConnection) -> QueryResult<Person> {
        diesel::insert_into(people::table)
            .values(&InsertablePerson::from_person(person))
            .get_result(connection)
    }

This is a little different from the earlier get function. Instead of reading an entry from people::table, the table is passed into Diesel’s insert_into function. Earlier, we created the InsertablePerson struct to represent new records; its values are built from a Person using the from_person function. Finally, get_result executes the statement and returns the inserted row.

Launching Rocket

At this point, your database should be up and running. Now we just need to create the REST API and link it to the back-end. In Rocket, this consists of incoming requests and handler functions that deal with the requests. So you’ve got to create the routes and the handler functions.

Handler Functions

It’s slightly easier to start with the handlers and work backwards, so you know what your routes are being mapped to. Here are all of the handlers you’ll need to implement the HTTP verbs GET, POST, PUT, DELETE:

    use connection::DbConn;
    use diesel::result::Error;
    use std::env;
    use people;
    use people::Person;
    use rocket::http::Status;
    use rocket::response::{Failure, status};
    use rocket_contrib::Json;

    #[get("/")]
    fn all(connection: DbConn) -> Result<Json<Vec<Person>>, Failure> {
        people::repository::all(&connection)
            .map(|people| Json(people))
            .map_err(|error| error_status(error))
    }

    fn error_status(error: Error) -> Failure {
        Failure(match error {
            Error::NotFound => Status::NotFound,
            _ => Status::InternalServerError
        })
    }

    #[get("/<id>")]
    fn get(id: i32, connection: DbConn) -> Result<Json<Person>, Failure> {
        people::repository::get(id, &connection)
            .map(|person| Json(person))
            .map_err(|error| error_status(error))
    }

    #[post("/", format = "application/json", data = "<person>")]
    fn post(person: Json<Person>, connection: DbConn) -> Result<status::Created<Json<Person>>, Failure> {
        people::repository::insert(person.into_inner(), &connection)
            .map(|person| person_created(person))
            .map_err(|error| error_status(error))
    }

    fn person_created(person: Person) -> status::Created<Json<Person>> {
        let host = env::var("ROCKET_ADDRESS").expect("ROCKET_ADDRESS must be set");
        let port = env::var("ROCKET_PORT").expect("ROCKET_PORT must be set");
        status::Created(
            format!("{host}:{port}/people/{id}", host = host, port = port, id = person.id),
            Some(Json(person)))
    }

    #[put("/<id>", format = "application/json", data = "<person>")]
    fn put(id: i32, person: Json<Person>, connection: DbConn) -> Result<Json<Person>, Failure> {
        people::repository::update(id, person.into_inner(), &connection)
            .map(|person| Json(person))
            .map_err(|error| error_status(error))
    }

    #[delete("/<id>")]
    fn delete(id: i32, connection: DbConn) -> Result<status::NoContent, Failure> {
        match people::repository::get(id, &connection) {
            Ok(_) => people::repository::delete(id, &connection)
                .map(|_| status::NoContent)
                .map_err(|error| error_status(error)),
            Err(error) => Err(error_status(error))
        }
    }

Each of these functions defines a REST verb and the path needed to reach it. Part of the path is still missing; it will be defined when the routes are created.

Assume the handler methods live under localhost:8000/people until we get into the routing.

Here’s one of the easier handlers:

    #[get("/")]
    fn all(connection: DbConn) -> Result<Json<Vec<Person>>, Failure> {
        people::repository::all(&connection)
            .map(|people| Json(people))
            .map_err(|error| error_status(error))
    }

    fn error_status(error: Error) -> Failure {
        Failure(match error {
            Error::NotFound => Status::NotFound,
            _ => Status::InternalServerError
        })
    }

To call this endpoint with curl, use:

    curl localhost:8000/people

Now let’s look at the PUT handler:

    #[put("/<id>", format = "application/json", data = "<person>")]
    fn put(id: i32, person: Json<Person>, connection: DbConn) -> Result<Json<Person>, Failure> {
        people::repository::update(id, person.into_inner(), &connection)
            .map(|person| Json(person))
            .map_err(|error| error_status(error))
    }

The differences between this function and the previous all example are the id and person variables. <id> in the path binds the id argument, and data = "<person>" maps the request body onto the person argument. The format property specifies the content type of the request body, in this case JSON, which is deserialized into Json<Person>.

We’re using serde again here, to build the Json<Person> from the request body.

We’re going to use into_inner() to extract the inner Person from the Json wrapper. We’re also going to use update, which maps either the result or the error into the returned Result. Since we’re also using error_status, an error response is produced if no record matches the given ID.

If you wanted to insert a new record when the ID doesn’t exist, you’d match on Error::NotFound and use code similar to what’s used in the POST handler.
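The <id> binding itself is handled by Rocket’s code generation, but the idea can be sketched in plain Rust (match_id_route is an invented toy for illustration, not a Rocket API):

```rust
// Toy sketch of how a route pattern like "/people/<id>" binds a path
// segment to a typed argument; Rocket generates this matching for you.
fn match_id_route(pattern: &str, path: &str) -> Option<i32> {
    let pat: Vec<&str> = pattern.split('/').collect();
    let seg: Vec<&str> = path.split('/').collect();
    if pat.len() != seg.len() {
        return None; // different number of segments: no match
    }
    let mut id = None;
    for (p, s) in pat.iter().zip(seg.iter()) {
        if *p == "<id>" {
            id = Some(s.parse::<i32>().ok()?); // typed conversion, like the i32 argument
        } else if p != s {
            return None; // literal segment mismatch
        }
    }
    id
}

fn main() {
    assert_eq!(match_id_route("/people/<id>", "/people/42"), Some(42));
    assert_eq!(match_id_route("/people/<id>", "/people/abc"), None); // not an i32
    assert_eq!(match_id_route("/people/<id>", "/todos/42"), None);
    println!("ok");
}
```

A request whose segment doesn’t parse as an i32 simply fails to match, which is also how Rocket skips a route whose typed parameter can’t be converted.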

On that note, the Post function looks like:

    #[post("/", format = "application/json", data = "<person>")]
    fn post(person: Json<Person>, connection: DbConn) -> Result<status::Created<Json<Person>>, Failure> {
        people::repository::insert(person.into_inner(), &connection)
            .map(|person| person_created(person))
            .map_err(|error| error_status(error))
    }

    fn person_created(person: Person) -> status::Created<Json<Person>> {
        status::Created(
            format!("{host}:{port}/people/{id}", host = host(), port = port(), id = person.id),
            Some(Json(person)))
    }

    fn host() -> String {
        env::var("ROCKET_ADDRESS").expect("ROCKET_ADDRESS must be set")
    }

    fn port() -> String {
        env::var("ROCKET_PORT").expect("ROCKET_PORT must be set")
    }

This function uses pieces similar to the PUT handler we’ve already discussed. The main difference is that POST returns a 201 Created rather than a 200 OK. To produce that result, the handler returns status::Created<Json<Person>> instead of plain Json<Person>; this is what causes the 201 status code.

To build the status::Created struct, the created record and the path to retrieve it must be passed into its constructor, so a client can fetch the new record with a GET request.


Routing

The handlers have all been set up; now we need to route requests to them. Each of these handlers relates to people, so they’re all going to be mounted under /people.

    use people;
    use rocket;
    use connection;

    pub fn create_routes() {
        rocket::ignite()
            .manage(connection::init_pool())
            .mount("/people",
                   routes![people::handler::all,
                           people::handler::get,
                           people::handler::post,
                           people::handler::put,
                           people::handler::delete])
            .launch();
    }

create_routes is called by the main function to get everything started. rocket::ignite() creates a new Rocket instance, the handler functions are mounted onto the base request path /people via the routes! macro, and launch() runs the application.


Earlier, we used environment variables to retrieve the port and host of the running server. There are multiple ways to configure these: you can create an .env file or a Rocket.toml file.

When using .env files, values must be named ROCKET_{PARAM}, where PARAM is the property you’re trying to set: ADDRESS stands for the host, while PORT stands for the port.

The .env file would look something like this:

    ROCKET_ADDRESS=localhost
    ROCKET_PORT=8000
If you wanted to use Rocket.toml instead, it might look like:

    [development]
    address = "localhost"
    port = 8000

If you don’t include either of these, Rocket falls back to its default configuration.
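The ROCKET_{PARAM} lookup the handlers perform with std::env can be sketched like this (rocket_config_or is an invented helper, with a fallback default added for illustration; the real handlers simply expect the variables to be set):

```rust
use std::env;

// Look up a ROCKET_-prefixed variable, falling back to a default when
// it isn't set anywhere; names mirror ROCKET_ADDRESS / ROCKET_PORT.
fn rocket_config_or(param: &str, default: &str) -> String {
    env::var(format!("ROCKET_{}", param)).unwrap_or_else(|_| default.to_string())
}

fn main() {
    env::set_var("ROCKET_PORT", "8000"); // simulate a value loaded from .env
    let address = rocket_config_or("ADDRESS", "localhost"); // unset -> default
    let port = rocket_config_or("PORT", "8000");
    println!("server at {}:{}", address, port);
}
```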

If you want to learn more about configuring Rocket, check out Rocket’s documentation.

Last Step: Create The Main Method

The final step is to create the main method so the application can run.

    #![feature(plugin, decl_macro, custom_derive)]
    #![plugin(rocket_codegen)]

    #[macro_use]
    extern crate diesel;
    extern crate dotenv;
    extern crate r2d2;
    extern crate r2d2_diesel;
    extern crate rocket;
    extern crate rocket_contrib;
    #[macro_use]
    extern crate serde_derive;

    use dotenv::dotenv;

    mod people;
    mod schema;
    mod connection;
    mod router; // module containing create_routes, shown in the routing section

    fn main() {
        dotenv().ok();
        router::create_routes();
    }

In this instance, all main does is load the environment variables and start Rocket by calling create_routes. The rest of the code declares the various crates in one place so declarations aren’t littered throughout the code.

To see the complete code, you can check out LankyDan’s GitHub.

Conclusion: Rust Vs. Haskell: Which Language Is Best For Building APIs?

As we stated at the beginning, knowing which programming language will best suit your needs isn’t just following a step-by-step recipe or formula. It depends on numerous variables, including your technical proficiency and what you’re working on.

That being said, there are a few reasons why Rust has some advantages over Haskell for building APIs, most notably its popularity. Rust has been trending in recent years, so there’s a ton of useful libraries and frameworks, not to mention a vibrant community to help you answer any questions you might encounter.

Secondly, Rust is also preferable when size, speed, and security matter, which is most of the time, at this point in the Web’s evolution.

Haskell definitely requires more technical understanding. There are certain advantages to using functional programming for constructing APIs. The main advantage for using Haskell for your API design is its utility in rapid prototyping. The code you write while constructing your prototype should still be able to be used in your official product.

Unless you’re a seasoned API designer, or you’re trying to get an app to market as quickly as possible, Rust has a slight advantage over Haskell for API design, in our opinion.


Design Patterns for Beginners with real-world Examples

Design Patterns for Beginners with real-world Examples

Design Patterns for Beginners with real-world Examples. When do you use Design Patterns? How do you implement different Design Patterns in Java? What are Design Patterns? Why do you use Design Patterns? What are the different types of Design Patterns? When do you use Design Patterns? What are the real-world examples for Design Patterns.

Design Patterns tutorial explained in simple words using real-world examples.


  • 0:00:00 Introduction
  • 0:01:40 What are Design Patterns?
  • 0:04:15 How to Take This Course
  • 0:05:50 The Essentials
  • 0:06:53 Getting Started with Java
  • 0:09:23 Classes
  • 0:13:34 Coupling
  • 0:15:34 Interfaces
  • 0:21:17 Encapsulation
  • 0:26:25 Abstraction
  • 0:30:33 Inheritance
  • 0:32:55 Polymorphism
  • 0:36:42 UML
  • 0:40:52 Memento Pattern
  • 0:42:43 Solution
  • 0:48:31 Implementation
  • 0:54:22 State Pattern
  • 0:59:46 Solution
  • 1:02:59 Implementation
  • 1:09:31 Abusing the Design Patterns
  • 1:11:18 Abusing the State Pattern

The Basic Design Patterns All Developers Need to Know in 2020


The Basic Design Patterns All Developers Need to Know. What is a Design Pattern? There are about 26 Patterns currently discovered. In this post, you'll see the 3 types of Design Patterns all Developers should know: Creational, Structural and Behavioral. We will go through one basic design pattern for each classified type.

What is a Design Pattern?

Design patterns are design-level solutions for recurring problems that we software engineers come across often. It's not code - I repeat, ❌ not code. It is a description of how to tackle these problems and design a solution.

Using these patterns is considered good practice, as the design of the solution is quite tried and tested, resulting in higher readability of the final code. Design patterns are quite often created for and used by OOP Languages, like Java, in which most of the examples from here on will be written.

Types of design patterns

There are about 26 Patterns currently discovered (I hardly think I will do them all…).

These 26 can be classified into 3 types:

1. Creational: These patterns are designed for class instantiation. They can be either class-creation patterns or object-creational patterns.

2. Structural: These patterns are designed with regard to a class's structure and composition. The main goal of most of these patterns is to increase the functionality of the class(es) involved, without changing much of its composition.

3. Behavioral: These patterns are designed depending on how one class communicates with others.

In this post, we will go through one basic design pattern for each classified type.

Type 1: Creational - The Singleton Design Pattern

The Singleton Design Pattern is a Creational pattern, whose objective is to create only one instance of a class and to provide only one global access point to that object. A commonly cited example of such a class in Java is Calendar, which you cannot instantiate directly; you obtain an object through its getInstance() method instead.
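You can see this access pattern in the standard library itself - Calendar's constructor is protected, so the compiler pushes you through the static factory method:

```java
import java.util.Calendar;

public class CalendarDemo {
	public static void main(String[] args) {
		// "new Calendar()" would not compile here: the constructor is protected.
		Calendar now = Calendar.getInstance();
		System.out.println(now.getTime());
	}
}
```

(Strictly speaking, Calendar.getInstance() returns a fresh object per call, so it is a static factory rather than a true singleton - but the access pattern is the same one the singleton uses.)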

A class using the singleton design pattern will include:

  1. A private static variable, holding the only instance of the class.
  2. A private constructor, so it cannot be instantiated anywhere else.
  3. A public static method, to return the single instance of the class.

There are many different implementations of singleton design. Today, I'll be going through the implementations of:

1. Eager Instantiation

2. Lazy Instantiation

3. Thread-safe Instantiation

Eager Beaver

public class EagerSingleton {
	// create the single instance of the class when it is loaded.
	private static EagerSingleton instance = new EagerSingleton();

	// private constructor, so it cannot be instantiated outside this class.
	private EagerSingleton() {  }

	// get the only instance of the object created.
	public static EagerSingleton getInstance() {
		return instance;
	}
}

This type of instantiation happens during class loading, as the instantiation of the variable instance happens outside any method. This poses a hefty drawback if the client application never uses the class at all: the object is created regardless. The contingency plan for that case is lazy instantiation.

Lazy Days

There isn’t much difference from the above implementation. The main differences are that the static variable is initially declared null, and is only instantiated within the getInstance() method if - and only if - the instance variable remains null at the time of the check.

public class LazySingleton {
	// initialize the instance as null.
	private static LazySingleton instance = null;

	// private constructor, so it cannot be instantiated outside this class.
	private LazySingleton() {  }

	// check if the instance is null, and if so, create the object.
	public static LazySingleton getInstance() {
		if (instance == null) {
			instance = new LazySingleton();
		}
		return instance;
	}
}

This fixes one problem, but another one still exists. What if two different clients access the Singleton class at the same time, right down to the millisecond? They will both check whether the instance is null at the same moment, both find that it is, and each will create its own instance of the class. To fix this, thread-safe instantiation has to be implemented.

(Thread) Safety is Key

In Java, the keyword synchronized is used on methods or objects to implement thread safety, so that only one thread will access a particular resource at one time. The class instantiation is put within a synchronized block so that the method can only be accessed by one client at a given time.

public class ThreadSafeSingleton {
	// initialize the instance as null.
	private static ThreadSafeSingleton instance = null;

	// private constructor, so it cannot be instantiated outside this class.
	private ThreadSafeSingleton() {  }

	// check if the instance is null, within a synchronized block. If so, create the object.
	public static ThreadSafeSingleton getInstance() {
		synchronized (ThreadSafeSingleton.class) {
			if (instance == null) {
				instance = new ThreadSafeSingleton();
			}
		}
		return instance;
	}
}

The overhead for the synchronized method is high, and reduces the performance of the whole operation.

For example, if the instance variable has already been instantiated, then each time any client accesses the getInstance() method, the synchronized method is run and the performance drops. This just happens in order to check if the instance variables’ value is null. If it finds that it is, it leaves the method.

To reduce this overhead, double-checked locking is used: the null check is performed before the synchronized block as well, so the lock is only taken the first time, while the instance still has to be created. (Strictly speaking, for this to be fully safe in Java, the instance field should also be declared volatile, otherwise another thread may observe a partially constructed object.)

// double-checked locking is used to reduce the overhead of the synchronized block
// (declare the instance field volatile for this to be fully thread-safe)
public static ThreadSafeSingleton getInstanceDoubleLocking() {
	if (instance == null) {
		synchronized (ThreadSafeSingleton.class) {
			if (instance == null) {
				instance = new ThreadSafeSingleton();
			}
		}
	}
	return instance;
}
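Whichever variant you pick, the observable behavior is the same: every call to getInstance() hands back the very same object. Here is a minimal, self-contained check, using a hypothetical Config class condensed from the variants above:

```java
public class SingletonDemo {
	// minimal eager singleton, mirroring the classes above
	static class Config {
		private static final Config INSTANCE = new Config();
		private Config() { }
		public static Config getInstance() { return INSTANCE; }
	}

	public static void main(String[] args) {
		Config a = Config.getInstance();
		Config b = Config.getInstance();
		System.out.println(a == b); // prints: true - both references point to the same instance
	}
}
```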

Now onto the next classification.

Type 2: Structural - The Decorator Design Pattern

I’m gonna give you a small scenario to give a better context to why and where you should use the Decorator Pattern.

Say you own a coffee shop, and like any newbie, you start out with just two types of plain coffee: the house blend and the dark roast. In your billing system, there is one class for each coffee blend, both inheriting from the Beverage abstract class. People actually start to come by and have your wonderful (albeit bitter) coffee. Then there are the coffee newbs that, God forbid, want sugar or milk. Such a travesty for coffee!

Now you need to offer those two add-ons as well, both on the menu and, unfortunately, in the billing system. Your IT person makes a subclass for both coffees, one including sugar, the other milk. Then, since customers are always right, one of them says these dreaded words:

“Can I get a milk coffee, with sugar, please?”


There goes your billing system laughing in your face again. Well, back to the drawing board….

The IT person then adds milk coffee with sugar as another subclass to each parent coffee class. The rest of the month is smooth sailing: people lining up to have your coffee, you actually making money.

But wait, there’s more!

The world is against you once again. A competitor opens up across the street, with not just 4 types of coffee, but more than 10 add-ons as well!

You buy all those and more, to sell better coffee yourself, and only then remember that you forgot to update that dratted billing system. You can't possibly make subclasses for every combination of every add-on with every coffee blend. Not to mention the size of the final system.

Time to actually invest in a proper billing system. You find new IT personnel who actually know what they are doing, and they say:

“Why, this will be so much easier and smaller if it used the decorator pattern.”

What on earth is that?

The decorator design pattern falls into the structural category, which deals with the actual structure of a class, whether by inheritance, composition, or both. The goal of this pattern is to modify an object's functionality at runtime. It is one of the many design patterns that use abstract classes and interfaces with composition to get the desired result.

Let's give math a chance (shudder) to bring this all into perspective.

Take 4 coffee blends and 10 add-ons, and suppose we stuck to generating a subclass for each different combination of add-ons for one type of coffee. Each add-on is either present or not, so that's:

2¹⁰ − 1 = 1023 subclasses

(we subtract the 1 to drop the combination with no add-ons at all, which is just the plain blend). And that's for just one coffee blend. Multiply that 1023 by 4 and you get a whopping 4092 different subclasses! Talk about all that coding…

But the decorator pattern will require only 16 classes in this scenario. Wanna bet?

If we map out our scenario according to the class diagram above, we get 4 classes for the 4 coffee blends, 10 for the add-ons, 1 for the abstract component and 1 more for the abstract decorator. See! 16! Now hand over that $100. (jk, but it will not be refused if given… just saying)

As you can see from above, just as the concrete coffee blends are subclasses of the Beverage abstract class, the AddOn abstract class also inherits from it. Its subclasses, the add-ons, each wrap a Beverage and add new methods to extend the base object's functionality when needed.

Let’s get to coding, to see this pattern in use.

First, the abstract Beverage class that all the different coffee blends will inherit from:

public abstract class Beverage {
	private String description;

	public Beverage(String description) {
		this.description = description;
	}

	public String getDescription() {
		return description;
	}

	public abstract double cost();
}

Then to add both the concrete coffee blend classes.

public class HouseBlend extends Beverage {
	public HouseBlend() {
		super("House blend");
	}

	public double cost() {
		return 250;
	}
}

public class DarkRoast extends Beverage {
	public DarkRoast() {
		super("Dark roast");
	}

	public double cost() {
		return 300;
	}
}

The AddOn abstract class also inherits from the Beverage abstract class (more on this below).

public abstract class AddOn extends Beverage {
	protected Beverage beverage;

	public AddOn(String description, Beverage bev) {
		super(description);
		this.beverage = bev;
	}

	public abstract String getDescription();
}

And now the concrete implementations of this abstract class:

public class Sugar extends AddOn {
	public Sugar(Beverage bev) {
		super("Sugar", bev);
	}

	public String getDescription() {
		return beverage.getDescription() + " with Sugar";
	}

	public double cost() {
		return beverage.cost() + 50;
	}
}

public class Milk extends AddOn {
	public Milk(Beverage bev) {
		super("Milk", bev);
	}

	public String getDescription() {
		return beverage.getDescription() + " with Milk";
	}

	@Override
	public double cost() {
		return beverage.cost() + 100;
	}
}

As you can see above, we can pass any subclass of Beverage to any subclass of AddOn, and get the added cost as well as the updated description. And, since the AddOn class is essentially of type Beverage, we can pass an AddOn into another AddOn. This way, we can add any number of add-ons to a specific coffee blend.

Now to write some code to test this out.

public class CoffeeShop {
	public static void main(String[] args) {
		HouseBlend houseblend = new HouseBlend();
		System.out.println(houseblend.getDescription() + ": " + houseblend.cost());

		Milk milkAddOn = new Milk(houseblend);
		System.out.println(milkAddOn.getDescription() + ": " + milkAddOn.cost());

		Sugar sugarAddOn = new Sugar(milkAddOn);
		System.out.println(sugarAddOn.getDescription() + ": " + sugarAddOn.cost());
	}
}

The final result is:

House blend: 250.0
House blend with Milk: 350.0
House blend with Milk with Sugar: 400.0

It works! We were able to add more than one add-on to a coffee blend and successfully update its final cost and description, without the need to make infinite subclasses for each add-on combination for all coffee blends.

Finally, to the last category.

Type 3: Behavioral - The Command Design Pattern

A behavioral design pattern focuses on how classes and objects communicate with each other. The main focus of the command pattern is to promote a higher degree of loose coupling between the parties involved (read: classes).

Uhhhh… What’s that?

Coupling is the way that two (or more) classes interact with each other. The ideal scenario is that these classes do not depend heavily on each other; that's loose coupling. So a better definition for loose coupling would be: classes that are interconnected while making the least possible use of each other.
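To make that concrete, here is a small sketch (the Notifier, EmailNotifier, SmsNotifier, and OrderProcessor names are made up for illustration). The processor only knows the interface, never a concrete class, so implementations can be swapped in without touching it - that's loose coupling:

```java
public class CouplingDemo {
	interface Notifier {
		void send(String message);
	}

	static class EmailNotifier implements Notifier {
		public void send(String message) {
			System.out.println("Emailing: " + message);
		}
	}

	static class SmsNotifier implements Notifier {
		public void send(String message) {
			System.out.println("Texting: " + message);
		}
	}

	// OrderProcessor depends only on the Notifier interface, never on a
	// concrete notifier, so it never needs to change when notifiers do.
	static class OrderProcessor {
		private final Notifier notifier;

		OrderProcessor(Notifier notifier) {
			this.notifier = notifier;
		}

		void process() {
			notifier.send("Order processed");
		}
	}

	public static void main(String[] args) {
		new OrderProcessor(new EmailNotifier()).process(); // prints: Emailing: Order processed
		new OrderProcessor(new SmsNotifier()).process();   // prints: Texting: Order processed
	}
}
```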

The need for this pattern arose when requests needed to be sent without consciously knowing what you are asking for or who the receiver is.

In this pattern, the invoking class is decoupled from the class that actually performs an action. The invoker class only has the callable method execute, which runs the necessary command, when the client requests it.

Let's take a basic real-world example: ordering a meal at a fancy restaurant. As the flow goes, you give your order (command) to the waiter (invoker), who then hands it over to the chef (receiver), so you can get food. Might sound simple… but a bit meh to code.

The idea is pretty simple, but the implementation takes a slightly roundabout route.

The flow of operation on the technical side is, you make a concrete command, which implements the Command interface, asking the receiver to complete an action, and send the command to the invoker. The invoker is the person that knows when to give this command. The chef is the only one who knows what to do when given the specific command/order. So, when the execute method of the invoker is run, it, in turn, causes the command objects’ execute method to run on the receiver, thus completing necessary actions.

What we need to implement is;

  1. An interface Command
  2. A class Order that implements Command interface
  3. A class Waiter (invoker)
  4. A class Chef (receiver)

So, the coding goes like this:

Chef, the receiver

public class Chef {
	public void cookPasta() {
		System.out.println("Chef is cooking Chicken Alfredo…");
	}

	public void bakeCake() {
		System.out.println("Chef is baking Chocolate Fudge Cake…");
	}
}

Command, the interface

public interface Command {
	public abstract void execute();
}

Order, the concrete command

public class Order implements Command {
	private Chef chef;
	private String food;

	public Order(Chef chef, String food) {
		this.chef = chef; = food;
	}

	public void execute() {
		if ("Pasta")) {
			chef.cookPasta();
		} else {
			chef.bakeCake();
		}
	}
}

Waiter, the invoker

public class Waiter {
	private Order order;

	public Waiter(Order ord) {
		this.order = ord;
	}

	// the invoker just runs whatever order it was handed.
	public void execute() {
		order.execute();
	}
}

You, the client
public class Client {
	public static void main(String[] args) {
		Chef chef = new Chef();

		Order order = new Order(chef, "Pasta");
		Waiter waiter = new Waiter(order);
		waiter.execute();

		order = new Order(chef, "Cake");
		waiter = new Waiter(order);
		waiter.execute();
	}
}

As you can see above, the Client makes an Order and sets the receiver as the Chef. The Order is sent to the Waiter, who will know when to execute it (i.e. when to give the chef the order to cook). When the invoker is executed, the Order's execute method is run on the receiver (i.e. the chef is given the command to either cook pasta or bake cake).

Quick recap

In this post we went through:

  1. What a design pattern really is,
  2. The different types of design patterns and why they are different, and
  3. One basic or common design pattern for each type.

I hope this was helpful.

Find the code repo for the post, here.

Web Development with Rust - 03/x: Create a REST API


Since Rust is a statically typed language with a strong compiler, you won't face many of the common pitfalls of running a web service in production. There are still runtime errors you have to cover, though.

  1. HTTP Requests
  2. POST/PUT/PATCH/DELETE are special
  3. The Job of a Framework
  4. Creating an API spec
  5. Crafting the API
  6. Input Validation
  7. Summary

APIs are the bread and butter of the modern, fast-paced web environment. Frontend applications, other web services, and IoT devices all need to be able to talk to your service. API endpoints are like doors: you decide what comes in through them, and in which format.

HTTP Requests

When we talk about creating an API we basically mean a web application which listens on certain paths and responds accordingly. But first things first. For two devices to be able to communicate with each other there has to be an established TCP connection.

TCP is a protocol the two parties can use to establish a connection. After establishing this connection, each party can send and receive messages. HTTP is another protocol, built on top of TCP, and it defines the contents of the requests and responses.

So on the Rust side of things, TCP is implemented in the standard library, while HTTP is not. Whichever framework you chose in the previous article, they all implement HTTP and are therefore able to receive and send HTTP-formatted messages.

An example GET request looks like this:

GET / HTTP/1.1
Accept-Language: en

It includes:

  • GET: the HTTP method
  • /: The path
  • HTTP/1.1: The version of the HTTP protocol
  • Host: The host/domain of the server we want to request data from
  • Accept-Language: Which language we prefer and understand

The most commonly used HTTP methods are:

  • GET
  • POST
  • PUT

We are using GET every time we browse the web. If we want to alter data, however (like using POST to send data over to another server), we need to be more cautious and precise.

First, not everyone is allowed to just send a bunch of data to another server. Our API can, for example, say: "I only accept data from servers whose host name I trust."
Therefore, when you send a POST to another server, what actually happens is the CORS workflow:

We first ask the server what is allowed: where do you accept requests from, and which headers do you accept? If we fulfill all of these requirements, then we can send the POST.

Disclaimer: Not all frameworks (rocket and tide among them) implement CORS in their core. However, in a professional environment, you often handle CORS on the DevOps side of things and put it, for example, in your NGINX config.

The Job of a Framework

We use the hard work of other people to create web applications. Everything has to be implemented at some point, just not by you, most of the time. A framework covers the following concerns:

  • Start a web server and open a PORT
  • Listen to requests on this PORT
  • If a request comes in, look at the Path in the HTTP header
  • Route the request to the handler according to the Path
  • Help you extract the information out of the request
  • Pack the generated data and HTTP status code (created by you) into a response
  • Send the response back to the sender

The Rust web framework tide includes http-service, which provides the basic abstractions you need when working with HTTP calls. The crate http-service is built on top of hyper, which transforms TCP-Streams to valid HTTP requests and responses.

Your job is to create routes like /users/:id and add a route_handler which is a function to handle the requests on this particular path. The framework makes sure that it directs the incoming HTTP requests to this particular handler.

Creating an API spec

You have to define your resources first, to get an idea of what your application needs to handle and to uncover the relationships between them. So if you want to build an idea-upvoting site, you would have:

  • Users
  • Ideas
  • Votes

A simple spec for this scenario would look like this:

  • Users
      • POST /users
      • GET /users
      • PUT /users/:user_id
      • PATCH /users/:user_id
      • DELETE /users/:user_id
      • GET /users/:user_id

Ideas and Votes behave accordingly. A spec is helpful for two reasons:

  • It gives you guidelines, so you don't forget a path
  • It helps to communicate to your API users what to expect

You can use tools like Swagger to write a full spec, which also describes the structure of the data and the messages/responses for each path and route.

A more professional spec would include the return values for each route and the request and response bodies. However, the spec can only be finalized once you know how your API should look and behave. To get started, a simple list is enough.

Crafting the API

Depending on the framework you are using, your implementation will look different. You have to have the following features on your radar:

  • Creating routes for each method (like"/users").post(post_users_handler))
  • Extracting information from the request (like headers, URI parameters, and JSON from the request body)
  • Creating responses with proper HTTP status codes (200, 201, 400, 404, etc.)

I am using the latest version of tide for this web series. You can add it in your Cargo.toml file and use it for your web app:

tide = "0.1.0"

Our first User implementation will look roughly like this (sketched against the tide 0.1 API; the repository linked below has the exact code):

// Database is this series' own storage type; get_all/insert/delete are
// assumed helper methods on it, alongside the get/set used below.
async fn handle_get_users(cx: Context<Database>) -> EndpointResult {
    // respond with every user in the database as JSON
    Ok(response::json(cx.app_data().get_all()))
}

async fn handle_get_user(cx: Context<Database>) -> EndpointResult {
    let id = cx.param("id").client_err()?;
    if let Some(user) = cx.app_data().get(id) {
        Ok(response::json(user))
    } else {
        Err(StatusCode::NOT_FOUND)?
    }
}

async fn handle_update_user(mut cx: Context<Database>) -> EndpointResult<()> {
    let user = await!(cx.body_json()).client_err()?;
    let id = cx.param("id").client_err()?;

    if cx.app_data().set(id, user) {
        Ok(())
    } else {
        Err(StatusCode::NOT_FOUND)?
    }
}

async fn handle_create_user(mut cx: Context<Database>) -> EndpointResult<String> {
    let user = await!(cx.body_json()).client_err()?;
    Ok(cx.app_data().insert(user).to_string())
}

async fn handle_delete_user(cx: Context<Database>) -> EndpointResult<String> {
    let id = cx.param("id").client_err()?;
    Ok(cx.app_data().delete(id).to_string())
}

fn main() {
    // We create a new application with a basic, local database.
    // You can use your own implementation, or none: App::new(())
    let mut app = App::new(Database::default());"/users")
        .get(handle_get_users)
        .post(handle_create_user);"/users/:id")
        .get(handle_get_user)
        .patch(handle_update_user)
        .delete(handle_delete_user);

    app.serve("").unwrap();
}



You can find the full implementation of the code in the GitHub repository to this series.

We see that we first have to create a new App:

let mut app = App::new(())

add routes:"/users")

and, for each route, register the HTTP methods we want to handle:"/users").get(handle_get_users)

Each framework has a different method of extracting parameters and JSON bodies. Actix uses Extractors; rocket uses request guards.

With tide, you can access request parameters and bodies and database connections through Context. So when we want to update a User with a specific id, we send a PATCH to /users/:id. From there, we call the handle_update_user method.

Inside this method, we can access the id from the URI like this:

let id = cx.param("id").client_err()?;

Each framework is also handling its own way of sending responses back to the sender. Tide is using EndpointResult, rocket is using Response and actix HttpResponse.

Everything else is completely up to you. The framework might help you with session management and authentication, but you can also implement this yourself.

My suggestion is: Build the first skeleton of your app with the framework of your choice, figure out how to extract information out of requests and how to form responses. Once this is working, you can use your Rust skills to build small or big applications as you wish.

Input Validation

Your best friend in the Rust world will be serde. It will help you parse JSON and other formats, and will also allow you to serialize your data.

When we talk about input validation, we want to make sure the data we are getting has the right format. Let's say we are extracting the JSON body out of a request:

let user: User = serde_json::from_str(&request_body).unwrap();

We are using serde_json here to transform a JSON string into a struct of our choice. So if we created this struct:

#[derive(Deserialize)]
struct User {
    name: String,
    height: u32,
}

we want to make sure the sender includes both name and height. If we just call serde_json::from_str and unwrap the result, and the sender forgot to pass on the height, the app will panic and shut down, since we expect the parse to succeed and yield a User.

We can improve the error handling like this:

let user: User = match serde_json::from_str(&request_body) {
    Ok(user) => user,
    Err(error) => handle_error_case(error),
};

We catch the error and call our handle_error_case method to handle it gracefully.

Summary

  1. Pick a framework of your choice:
      • rocket is nightly
      • actix is stable
      • tide is fostered close to the Rust core team and also works on Rust nightly
  2. Know that there is no common CORS handling (yet). The recommendation is to handle it on the DevOps side (NGINX, for example)
  3. After picking a framework, spec out your resources (/users: GET, POST etc.)
  4. Figure out how your framework of choice handles extracting parameters and JSON from the request, and how to form a response
  5. Validate your input via match and serde_json
Thanks for visiting, keep visiting! If you liked this post, share it with all of your programming buddies!


This post was originally published here