Garry Taylor

Rust Vs. Haskell: Which Language is Best for API Design?

When it comes to designing, building, and maintaining an API, it’s not immediately obvious which development tools and programming languages you should use. Seeing as how APIs are essentially the nervous system of mobile apps, it makes sense that there would be copious amounts of resources for programmers and developers.

Knowing which development tools to use to create your own API depends on your level of technical expertise. Some development environments offer a barebones command-line programming experience. Others function more like an app, with fancy GUIs, built-in debuggers, and copious bundled libraries.

Introducing Haskell

Haskell is one of the most powerful and reliable functional programming languages out there. Haskell’s emphasis on high-level, declarative programming lets developers focus on getting results rather than getting bogged down in endless minutiae.

Programming in Haskell also allows for fast prototyping, thanks to its excellent compiler, so apps and software can get to market much more quickly than with many other languages. This makes Haskell a good fit for smaller startups or teams looking to launch their first app.

Meet Rust

Mozilla is dedicated to developing tools for and evolving the web using open standards, starting with its flagship browser, Firefox. Like every major browser on the market, Firefox is written largely in C++: Firefox comprises roughly 12,900,992 lines of code, and Google Chrome about 4,490,488. While this makes these programs fast, some argue it makes them less safe. The memory manipulations of C and C++ are not checked for validity, and when something goes wrong it can lead to crashes, memory leaks, buffer overflows, segmentation faults, and null pointer dereferences.

Rust defaults to writing “safe code”: memory is allocated to an object when it is created and is not freed until the object is no longer in use, and the compiler’s ownership rules enforce this at compile time.
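To give a feel for what that means in practice, here is a minimal sketch (separate from the tutorial code later in this post) showing the compiler rejecting a use-after-move that C or C++ would happily accept:

    fn main() {
        let name = String::from("Ferris");

        // Ownership of the String moves into build_greeting here.
        let greeting = build_greeting(name);
        println!("{}", greeting);

        // Uncommenting the next line is a compile-time error:
        // `name` was moved above, so it can no longer be used.
        // println!("{}", name);
    }

    fn build_greeting(name: String) -> String {
        format!("Hello, {}!", name)
    }

The point is that the mistake is caught before the program ever runs, rather than surfacing as a crash or silent memory corruption in production.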

This security and efficiency are among the reasons why Rust is consistently one of the most loved programming languages among developers and programmers, as shown in this Stack Overflow survey.

Haskell Vs. Rust

According to this StackShare chart, Rust and Haskell have a number of similarities and a few notable differences. For starters, Rust is slightly more popular, with 416 developers using Rust as opposed to 347 developing with Haskell.

Due to its popularity, there’s a great deal more Rust content on the Internet than there is for Haskell. There are over 23,000 references to Rust on Hacker News, while Haskell only has 763. Haskell, however, has more than three times as much content on Stack Overflow as Rust, thanks to its longevity.

The advantages of Rust, according to Stack Overflow programmers include:

  • Guaranteed memory safety (75 votes)
  • Speed (64 votes)
  • Minimal runtime (46 votes)
  • Open source (46 votes)
  • Pattern matching (38 votes)
  • Type inference (36 votes)
  • Algebraic data types (34 votes)
  • Concurrent (34 votes)
  • Efficient C bindings (28 votes)
  • Practical (28 votes)

The advantages of Haskell, on the other hand, include:

  • Purely-functional programming (66 votes)
  • Statically typed (53 votes)
  • Type-safe (44 votes)
  • Great community (29 votes)
  • Open source (29 votes)
  • Composable (28 votes)
  • Built-in concurrency (24 votes)
  • Built-in parallelism (22 votes)
  • Referentially transparent (17 votes)
  • Generics (15 votes)

The cons of using Rust include:

  • Ownership learning curve
  • Variable shadowing
  • Hard to learn

The cons of using Haskell:

  • No good ABI
  • Unpredictable performance
  • Poor documentation for libraries
  • Poor packaging for apps
  • Confusing error messages
  • Slow compiling
  • No best practices
  • Too many distractions in language extensions

Programs that integrate with Rust:

  • Remacs
  • Sentry
  • Iron
  • Leaf
  • Pencil
  • Ruru
  • Sapper
  • Helix
  • Tokamak
  • Rocket
  • Airbrake
  • Yew Framework
  • Dependabot
  • Tower Web

Programs that integrate with Haskell:

  • Eta
  • Yesod
  • Rollbar
  • Miso
  • Buddy

Finally, take a look at this Google Trends graph of interest over time in Rust vs. Haskell:


As you can see, while both programming languages have their ups and downs, Rust is considerably more popular than Haskell. This means there are more resources available for Rust, which makes it a better pick for building APIs if you want something that will work straight out of the gate.

Haskell, however, is adept at fast prototyping and building frameworks, and as an additional benefit, the code you write while prototyping can become part of the finished product.

Haskell vs. Rust: Which Is Better For Designing APIs?

Now that we know a bit more about Haskell and Rust, let’s delve into the heart of the matter.

Which programming language is best for API design? That will depend on what you’re trying to do with it, as well as how comfortable you are with programming.

Let’s take a look at some specific instances, to help you figure out which approach is right for your API design.

Designing A RESTful API With Haskell

Designing an API with a functional programming language may seem like a lot to take on. It doesn’t have to be, however, as there are third-party tools that make web development with Haskell easy. For example, the Snap framework acts as a translator between your Haskell code and the web, making communication easy and painless.

Getting Started With Haskell and SNAP

You’re going to start off by running a few setup commands in your shell. You can also clone the accompanying tutorial project directly from thoughtbot.

    cd snap-api-tutorial
    git checkout baseline
    cabal sandbox init
    cabal install snap
    cabal install --dependencies-only

Creating An API Snaplet

Snaplets are composable pieces of a Snap application; Snap applications are created by nesting snaplets. Look at Application.hs and you’ll notice the application initializer ‘app’ is built with the makeSnaplet function.

We’re going to start by making a snaplet called ‘Api’. This snaplet is responsible for creating the top-level /api namespace. You’re going to load a few language extensions, import the necessary modules, and define the ‘Api’ data type. Then you’ll define the snaplet’s initializer.

    -- new file: src/api/Core.hs
    {-# LANGUAGE OverloadedStrings #-}
    module Api.Core where
    import Snap.Snaplet
    data Api = Api
    apiInit :: SnapletInit b Api
    apiInit = makeSnaplet "api" "Core Api" Nothing $ return Api

Notice the ‘b’ in the ‘apiInit :: SnapletInit b Api’ signature instead of a concrete App type. This means the snaplet can be loaded into any base application, not just App. This is the basis of Snap’s composability.

Now you’re going to tell the ‘App’ datatype to expect an API snaplet.

    -- src/Application.hs
    import Api.Core (Api(Api))
    data App = App { _api :: Snaplet Api }

 Then, you’ll nest the Api snaplet within the App snaplet, using nestSnaplet:

    nestSnaplet :: ByteString -> Lens v (Snaplet v1) -> SnapletInit b v1 -> Initializer b v (Snaplet v1)

The first argument defines the root base URL for the snaplet’s routes, /api in this instance. The second argument is a Lens identifying the snaplet, generated by the makeLenses function in src/Application.hs. The final argument is the snaplet initializer apiInit we previously defined.

    -- src/Site.hs
    import Api.Core (Api(Api), apiInit)

    app :: SnapletInit App App
    app = makeSnaplet "app" "A snaplet example application." Nothing $ do
      api <- nestSnaplet "api" api apiInit
      addRoutes routes
      return $ App api

Now you’ve nested your first Api snaplet. It doesn’t have any routes yet, however, so you don’t know whether it’s working. Adding an /api/status route that always responds with a ‘200 OK’ will let you see output from this snaplet.

Snap route handlers normally return a value of type Handler b v (). Handler is an instance of MonadSnap, which provides stateful access to the HTTP request and response.

All of the request and response modifications take place inside the Handler monad. So we’ll define respondOk :: Handler b Api ().

    -- src/api/Core.hs
    import           Snap.Core
    import qualified Data.ByteString.Char8 as B

    apiRoutes :: [(B.ByteString, Handler b Api ())]
    apiRoutes = [("status", method GET respondOk)]

    respondOk :: Handler b Api ()
    respondOk = modifyResponse $ setResponseCode 200

    apiInit :: SnapletInit b Api
    apiInit = makeSnaplet "api" "Core Api" Nothing $ do
        addRoutes apiRoutes
        return Api

Now look at the type signatures for modifyResponse and setResponseCode:

    modifyResponse :: (MonadSnap m) => (Response -> Response) -> m ()
    setResponseCode :: Int -> Response -> Response

This means setResponseCode takes an integer and returns a Response-modifying function that can be passed on to modifyResponse, which applies the modification inside the Snap monad.

Now run the following code:

    $ cabal run -- -p 9000
    $ curl -I -XGET "localhost:9000/api/status"

    HTTP/1.1 200 OK
    Server: Snap 0.9.4.6
    Date: …
    Transfer-Encoding: chunked

 This should give you your first response.

A Todo Snaplet

Now that we’ve seen how to get a simple response out of a snaplet, let’s learn how to make a Todo snaplet inside of the Api snaplet. Then we’ll connect that Todo snaplet to a database and write GET and POST handlers for /api/todos, which allow you to create and fetch todo items.

We’ll start with some boilerplate code, which will define our snaplet, then nest it inside of the API snaplet.

    -- new file: src/api/services/TodoService.hs

    {-# LANGUAGE OverloadedStrings #-}

    module Api.Services.TodoService where

    import Api.Types (Todo(Todo))
    import Control.Lens (makeLenses)
    import Snap.Core
    import Snap.Snaplet

    data TodoService = TodoService

    todoServiceInit :: SnapletInit b TodoService
    todoServiceInit = makeSnaplet "todos" "Todo Service" Nothing $ return TodoService

    -- src/api/Core.hs

    {-# LANGUAGE TemplateHaskell #-}

    import Control.Lens (makeLenses)
    import Api.Services.TodoService (TodoService(TodoService), todoServiceInit)
    -- ...

    data Api = Api { _todoService :: Snaplet TodoService }

    makeLenses ''Api
    -- ...

    apiInit :: SnapletInit b Api
    apiInit = makeSnaplet "api" "Core Api" Nothing $ do
      ts <- nestSnaplet "todos" todoService todoServiceInit
      addRoutes apiRoutes
      return $ Api ts

Next, we’ll nest a PostgreSQL snaplet, provided by snaplet-postgresql-simple, into the TodoService. This provides the TodoService with a connection to the database and makes queries possible. Then you’re going to import Aeson and encode your responses into JSON using a ToJSON instance for the Todo type defined in Api.Types.
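That module isn’t shown in this walkthrough, so here’s a plausible minimal sketch of what Api.Types might contain; the exact field names and the FromRow instance are assumptions, chosen to match the imports and queries used below:

    -- sketch of the Api.Types module
    {-# LANGUAGE OverloadedStrings #-}

    module Api.Types where

    import Data.Aeson (ToJSON, toJSON, object, (.=))
    import Data.Text (Text)
    import Database.PostgreSQL.Simple.FromRow (FromRow, fromRow, field)

    data Todo = Todo
      { todoId   :: Int
      , todoText :: Text
      }

    -- Lets query results be converted straight into Todo values.
    instance FromRow Todo where
      fromRow = Todo <$> field <*> field

    -- Lets Aeson serialize a Todo to JSON.
    instance ToJSON Todo where
      toJSON (Todo tId tText) = object [ "id" .= tId, "text" .= tText ]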

    -- src/api/services/TodoService.hs

    {-# LANGUAGE TemplateHaskell #-}
    {-# LANGUAGE FlexibleInstances #-}

    import Control.Lens (makeLenses)
    import Control.Monad.State.Class (get)
    import Data.Aeson (encode)
    import Snap.Snaplet.PostgresqlSimple
    import qualified Data.ByteString.Char8 as B
    -- ...

    data TodoService = TodoService { _pg :: Snaplet Postgres }

    makeLenses ''TodoService
    -- ...

    todoServiceInit :: SnapletInit b TodoService
    todoServiceInit = makeSnaplet "todos" "Todo Service" Nothing $ do
      pg <- nestSnaplet "pg" pg pgsInit
      return $ TodoService pg

    instance HasPostgres (Handler b TodoService) where
      getPostgresState = with pg get

A little bit of SQL sets up the database and inserts a few lines of test data:

    CREATE DATABASE snaptutorial;
    CREATE TABLE todos (id SERIAL, text TEXT);
    INSERT INTO todos (text) VALUES ('First todo');
    INSERT INTO todos (text) VALUES ('Second todo');

Finally, you’re going to configure the postgres snaplet by editing the following file:

snaplets/api/snaplets/todos/snaplets/postgresql-simple/devel.cfg
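snaplet-postgresql-simple reads plain key-value settings from that file. A local setup would look roughly like this (the credentials are placeholders for your own Postgres instance):

    host = "localhost"
    port = 5432
    user = "postgres"
    pass = "password"
    db = "snaptutorial"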

 Now you’re ready to run your first GET to /api/todos. We’ll retrieve all of the rows of the todos table, convert them into Todo data, then serialize them as JSON to get your first response.

First, you’re going to use the query_ function, which takes a SQL string and returns, inside the Handler monad, a list of rows whose type implements the FromRow typeclass:

    query_ :: (HasPostgres m, FromRow r) => Query -> m [r]

Next, you’re going to use writeLBS together with the encode function to write a JSON string to the response body:

    writeLBS :: MonadSnap m => ByteString -> m ()

This function calls the modifyResponse function mentioned earlier.
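Putting query_, encode, and writeLBS together gives you the getTodos handler referenced by todoRoutes below; this sketch assumes the Todo type and FromRow instance from the Api.Types module sketched earlier:

    getTodos :: Handler b TodoService ()
    getTodos = do
      -- Load every row from the todos table as a Todo.
      todos <- query_ "SELECT id, text FROM todos"
      -- Serialize the list to JSON and write it to the response body.
      modifyResponse $ setHeader "Content-Type" "application/json"
      writeLBS $ encode (todos :: [Todo])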

Then you’re going to use the execute function (the write counterpart of query_) to insert the data gathered from getPostParam into the database:

    todoRoutes :: [(B.ByteString, Handler b TodoService ())]
    todoRoutes = [("/", method GET getTodos)
                 ,("/", method POST createTodo)]

    createTodo :: Handler b TodoService ()
    createTodo = do
      todoTextParam <- getPostParam "text"
      newTodo <- execute "INSERT INTO todos (text) VALUES (?)" (Only todoTextParam)
      modifyResponse $ setResponseCode 201

Here, Only is postgresql-simple’s wrapper for single-value parameter collections.

Here’s the finished version:

    $ cabal run -- -p 9000
    $ curl -i -XPOST --data "text=Third todo" "localhost:9000/api/todos"

    HTTP/1.1 201 Created
    Server: Snap 0.9.4.6
    Date: ...
    Transfer-Encoding: chunked

    $ psql snaptutorial
    $ SELECT * FROM todos;

     id |     text
    ----+--------------
      1 | First todo
      2 | Second todo
      3 | Third todo

Now you have a working REST API written in Haskell with Snap. If you want to know more about the Snap framework, you can read the Snap documentation or visit the #snapframework channel on freenode.

Designing a REST API in Rust

Now that we’ve learned how to set up an API in Haskell, let’s turn our attention to Rust. Seeing how to set up an API in Rust will help give you an idea of which language might be best for designing your API.

First off, we’re going to load some crates, which are Rust’s libraries. We’ll be using Rocket to create the API and Diesel to handle the database. Diesel works with Postgres, MySQL, and SQLite.

Define Your Dependencies

Before you begin, you’re going to define your dependencies:

    [dependencies]
    rocket = "0.3.6"
    rocket_codegen = "0.3.6"
    diesel = { version = "1.0.0", features = ["postgres"] }
    dotenv = "0.9.0"
    r2d2-diesel = "1.0"
    r2d2 = "0.8"
    serde = "1.0"
    serde_derive = "1.0"
    serde_json = "1.0"

    [dependencies.rocket_contrib]
    version = ""
    default-features = false
    features = ["json"]

You’ll notice that there are a number of crates being loaded. We’ve already mentioned Rocket and Diesel. rocket_codegen provides Rocket’s code-generation macros, while dotenv allows variables to be read from an external file. r2d2 and r2d2-diesel provide a pool of database connections for Diesel. Last but not least, serde, serde_derive, and serde_json are used for serialization and deserialization of data sent to and retrieved from the REST API.

In this instance, postgres has been specified so that only the Postgres modules of the diesel crate are included. If you want to access another database, or multiple databases, you only need to specify them, or eliminate the features list altogether.

One final note: to use Rocket at this version, you need the nightly build of Rust, since it relies on features not available in the stable builds.
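If you manage toolchains with rustup, switching this project over to nightly is a single command run from the project directory:

    rustup override set nightly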

Accessing The Database With Diesel

First, we’re going to start off by setting up Diesel. Once that’s set up, you’ll have your schema defined that you’ll use to construct the application.

To set up Diesel, begin by installing the Diesel CLI. If you don’t know how to do that, here’s a guide on getting started with Diesel.
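In short, installing the CLI with only the Postgres backend enabled looks like this:

    cargo install diesel_cli --no-default-features --features postgres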

We’ll be using Postgres rather than MySQL, since not all of the features we need here are available with MySQL. Postgres is fast and easy to set up and will give you all of the features you need to create a database.

Create A Table

Start off by setting the DATABASE_URL environment variable to connect to Postgres, or simply add it to the .env file:

    echo DATABASE_URL=postgres://postgres:password@localhost/rust-web-with-rocket > .env

 Now you’re going to run diesel setup to create a database and an empty migrations folder to use later.

You will be modeling people who can be added to, retrieved, modified, or deleted from the database. You’re going to need a table to store them in. To do so, you’re going to create your first migration.

    diesel migration generate create_people

This creates two new files, stored in a new folder under migrations. up.sql is for upgrading and is where you’ll place the SQL to create your table; down.sql is for downgrading, so you can undo the upgrade if need be.

In this example, you’re going to create the people table.

    CREATE TABLE people (
      id SERIAL PRIMARY KEY,
      first_name VARCHAR NOT NULL,
      last_name VARCHAR NOT NULL,
      age INT NOT NULL,
      profession VARCHAR NOT NULL,
      salary INT NOT NULL
    )

To undo the table creation, you only have to use:

    DROP TABLE people 

To execute the migration, run:

    diesel migration run

If you need to revert and re-apply the latest migration, simply use:

    diesel migration redo

Map To Structs

Now that your people table is created, you’re ready to start adding data to it. Seeing as how Diesel is an ORM, you’re going to need to map the table’s rows onto something Rust can work with. You’re going to use a struct to do that.

    use super::schema::people;

    #[derive(Queryable, AsChangeset, Serialize, Deserialize)]
    #[table_name = "people"]
    pub struct Person {
        pub id: i32,
        pub first_name: String,
        pub last_name: String,
        pub age: i32,
        pub profession: String,
        pub salary: i32,
    }

You’re going to write a struct that represents each record in the people table, otherwise known as a person. You’re going to use three attributes particular to Diesel: #[derive(Queryable)], #[derive(AsChangeset)] and #[table_name].

#[derive(Queryable)] generates the code that retrieves a Person from the database. #[derive(AsChangeset)] makes it possible to use update().set() later on, if you so choose. #[table_name = "people"] names the table explicitly, because Diesel would otherwise infer the plural of Person as "persons" rather than "people". If you were using another name with a more regular plural, this step wouldn’t be necessary.

The other derives allow JSON data to flow in and out of the REST API. #[derive(Serialize)] and #[derive(Deserialize)] both come from the serde crate. We will delve more fully into these a little later on.

Now you’re going to generate a Rust schema, using the table! macro to handle the Rust-to-database mapping.

Run the following command:

    diesel print-schema > src/schema.rs

This generates the following file:

    table! {
        people (id) {
            id -> Int4,
            first_name -> Varchar,
            last_name -> Varchar,
            age -> Int4,
            profession -> Varchar,
            salary -> Int4,
        }
    }

Now you’re going to run SELECT and UPDATE queries, using the Person struct we created earlier. DELETE doesn’t require a struct as it only requires the record’s ID. You’re also going to use INSERT, but in a different way than what’s recommended in the Diesel documentation.

    #[derive(Insertable)]
    #[table_name = "people"]
    struct InsertablePerson {
        first_name: String,
        last_name: String,
        age: i32,
        profession: String,
        salary: i32,
    }

    impl InsertablePerson {
        fn from_person(person: Person) -> InsertablePerson {
            InsertablePerson {
                first_name: person.first_name,
                last_name: person.last_name,
                age: person.age,
                profession: person.profession,
                salary: person.salary,
            }
        }
    }

InsertablePerson is almost identical to the Person struct but with one key difference: there’s no id field. The id is generated automatically by the database when you insert a record, so it isn’t needed here.

Finally, #[derive(Insertable)] is added to generate the code to insert a new record.

Running Queries

Now that your tables are created and the structs are mapped to it, you’re going to put them into action.

Here’s how you implement the basic REST API:

    use diesel;
    use diesel::prelude::*;
    use schema::people;
    use people::Person;

    pub fn all(connection: &PgConnection) -> QueryResult<Vec<Person>> {
        people::table.load::<Person>(connection)
    }

    pub fn get(id: i32, connection: &PgConnection) -> QueryResult<Person> {
        people::table.find(id).get_result::<Person>(connection)
    }

    pub fn insert(person: Person, connection: &PgConnection) -> QueryResult<Person> {
        diesel::insert_into(people::table)
            .values(&InsertablePerson::from_person(person))
            .get_result(connection)
    }

    pub fn update(id: i32, person: Person, connection: &PgConnection) -> QueryResult<Person> {
        diesel::update(people::table.find(id))
            .set(&person)
            .get_result(connection)
    }

    pub fn delete(id: i32, connection: &PgConnection) -> QueryResult<usize> {
        diesel::delete(people::table.find(id))
            .execute(connection)
    }

Diesel is brought in to access the insert_into, update and delete functions. diesel::prelude::* imports a range of structs and traits needed to run Diesel queries; in this example, we’re using PgConnection and QueryResult. We also import schema::people so we can refer to the people table from Rust and run queries against it.

Let’s look at one of these functions more closely:

    pub fn get(id: i32, connection: &PgConnection) -> QueryResult<Person> {
        people::table.find(id).get_result::<Person>(connection)
    }

In this example, a QueryResult is returned from each function. Diesel returns QueryResult<T> from each method; it is an abbreviation for Result<T, Error>, thanks to this type alias:

    pub type QueryResult<T> = Result<T, Error>;

Using QueryResult lets us handle the case where a query fails for any reason. If you wanted to return a Person directly from a function instead, you’d call expect on the result to surface the error immediately.
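For illustration, a hypothetical variant of get that unwraps the result in place (not part of the repository code above) would look like this:

    // Hypothetical helper: panics with the Diesel error instead of returning a QueryResult.
    pub fn get_unchecked(id: i32, connection: &PgConnection) -> Person {
        people::table
            .find(id)
            .get_result::<Person>(connection)
            .expect("Failed to load person from the database")
    }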

Since we’re using Postgres, the PgConnection type is used. There are other connection types for different databases, like MysqlConnection, for example.

Here’s another example:

    pub fn insert(person: Person, connection: &PgConnection) -> QueryResult<Person> {
        diesel::insert_into(people::table)
            .values(&InsertablePerson::from_person(person))
            .get_result(connection)
    }

This is a little different from the earlier get function. Instead of querying people::table directly, the table is passed into Diesel’s insert_into function. Earlier, we created the InsertablePerson struct to represent new records, and from_person converts the incoming Person into it. get_result executes the statement and returns the inserted row.

Launching Rocket

At this point, your database should be up and running. Now we just need to create the REST API and link it to the back-end. In Rocket, this consists of routes for incoming requests and handler functions that deal with those requests. So you’ve got to create the routes and the handler functions.

Handler Functions

It’s slightly easier to start with the handlers and work backwards, so you know what your routes are being mapped to. Here are all of the handlers you’ll need to implement the HTTP verbs GET, POST, PUT, DELETE:

    use connection::DbConn;
    use diesel::result::Error;
    use std::env;
    use people;
    use people::Person;
    use rocket::http::Status;
    use rocket::response::{Failure, status};
    use rocket_contrib::Json;

    #[get("/")]
    fn all(connection: DbConn) -> Result<Json<Vec<Person>>, Failure> {
        people::repository::all(&connection)
            .map(|people| Json(people))
            .map_err(|error| error_status(error))
    }

    fn error_status(error: Error) -> Failure {
        Failure(match error {
            Error::NotFound => Status::NotFound,
            _ => Status::InternalServerError
        })
    }

    #[get("/<id>")]
    fn get(id: i32, connection: DbConn) -> Result<Json<Person>, Failure> {
        people::repository::get(id, &connection)
            .map(|person| Json(person))
            .map_err(|error| error_status(error))
    }

    #[post("/", format = "application/json", data = "<person>")]
    fn post(person: Json<Person>, connection: DbConn) -> Result<status::Created<Json<Person>>, Failure> {
        people::repository::insert(person.into_inner(), &connection)
            .map(|person| person_created(person))
            .map_err(|error| error_status(error))
    }

    fn person_created(person: Person) -> status::Created<Json<Person>> {
        let host = env::var("ROCKET_ADDRESS").expect("ROCKET_ADDRESS must be set");
        let port = env::var("ROCKET_PORT").expect("ROCKET_PORT must be set");
        status::Created(
            format!("{host}:{port}/people/{id}", host = host, port = port, id = person.id).to_string(),
            Some(Json(person)))
    }

    #[put("/<id>", format = "application/json", data = "<person>")]
    fn put(id: i32, person: Json<Person>, connection: DbConn) -> Result<Json<Person>, Failure> {
        people::repository::update(id, person.into_inner(), &connection)
            .map(|person| Json(person))
            .map_err(|error| error_status(error))
    }

    #[delete("/<id>")]
    fn delete(id: i32, connection: DbConn) -> Result<status::NoContent, Failure> {
        match people::repository::get(id, &connection) {
            Ok(_) => people::repository::delete(id, &connection)
                .map(|_| status::NoContent)
                .map_err(|error| error_status(error)),
            Err(error) => Err(error_status(error))
        }
    }

Each of these functions defines a REST verb and the path needed to reach it. Part of the path is still missing; it will be defined when the routes are created.

Assume the handler methods are localhost:8000/people until we get into the routing.

Here’s one of the easier handlers:

    #[get("/")]
    fn all(connection: DbConn) -> Result<Json<Vec<Person>>, Failure> {
        people::repository::all(&connection)
            .map(|people| Json(people))
            .map_err(|error| error_status(error))
    }

    fn error_status(error: Error) -> Failure {
        Failure(match error {
            Error::NotFound => Status::NotFound,
            _ => Status::InternalServerError
        })
    }

To call this handler with curl, use:

    curl localhost:8000/people

Now let’s look at the PUT handler:

    #[put("/<id>", format = "application/json", data = "<person>")]
    fn put(id: i32, person: Json<Person>, connection: DbConn) -> Result<Json<Person>, Failure> {
        people::repository::update(id, person.into_inner(), &connection)
            .map(|person| Json(person))
            .map_err(|error| error_status(error))
    }

The difference between this function and the previous all example is the id and person parameters. <id> in the path is bound to the id argument, and data = "<person>" maps the request body onto the person argument. The format property specifies the content type the request body must have, in this case JSON, which is deserialized into Json<Person>.

We’re using serde again to deserialize the JSON request body into a Person.

We’re going to use into_inner() to extract the Person from its Json wrapper. We’re also going to use update, which maps either the result or the error into the Result return value. Since we’re also using error_status, an error is returned if the ID doesn’t match an existing record.

If you instead wanted to insert a new record when the ID doesn’t exist, you’d handle Error::NotFound and run code similar to what’s used in the post function, as in the sketch below.
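A hypothetical upsert-style variant of the put handler (not part of the original code) might look like this:

    // Hypothetical: update the record if it exists, insert it otherwise.
    #[put("/<id>", format = "application/json", data = "<person>")]
    fn put_or_create(id: i32, person: Json<Person>, connection: DbConn) -> Result<Json<Person>, Failure> {
        match people::repository::get(id, &connection) {
            Ok(_) => people::repository::update(id, person.into_inner(), &connection)
                .map(|person| Json(person))
                .map_err(|error| error_status(error)),
            Err(Error::NotFound) => people::repository::insert(person.into_inner(), &connection)
                .map(|person| Json(person))
                .map_err(|error| error_status(error)),
            Err(error) => Err(error_status(error)),
        }
    }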

On that note, the post function itself looks like this:

    #[post("/", format = "application/json", data = "<person>")]
    fn post(person: Json<Person>, connection: DbConn) -> Result<status::Created<Json<Person>>, Failure> {
        people::repository::insert(person.into_inner(), &connection)
            .map(|person| person_created(person))
            .map_err(|error| error_status(error))
    }

    fn person_created(person: Person) -> status::Created<Json<Person>> {
        status::Created(
            format!("{host}:{port}/people/{id}", host = host(), port = port(), id = person.id).to_string(),
            Some(Json(person)))
    }

    fn host() -> String {
        env::var("ROCKET_ADDRESS").expect("ROCKET_ADDRESS must be set")
    }

    fn port() -> String {
        env::var("ROCKET_PORT").expect("ROCKET_PORT must be set")
    }

This function uses pieces similar to the PUT handler we’ve already discussed. The main difference is that POST returns a 201 Created rather than a 200 OK. To produce that, the Result type uses status::Created<Json<Person>> instead of Json<Person>; this is what causes the 201 status code.

To construct the status::Created struct, the created record and the path where it can be retrieved with a GET request must be passed into the constructor.

Routing

The handlers have all been set up. Now we need to wire routes up to them. Each of these handlers relates to people, so they’re all going to be mounted under /people.

    use people;
    use rocket;
    use connection;

    pub fn create_routes() {
        rocket::ignite()
            .manage(connection::init_pool())
            .mount("/people",
                   routes![people::handler::all,
                           people::handler::get,
                           people::handler::post,
                           people::handler::put,
                           people::handler::delete],
            ).launch();
    }

create_routes is called by the main function to get everything started. ignite() creates a new instance of Rocket. The handler functions are mounted under the base request path /people, listed inside the routes! macro, and launch() runs the application.

Configurations

Earlier, we read environment variables to retrieve the host and port of the running server. Here’s how to change them. There are multiple ways of going about this: you can use an .env file or a Rocket.toml file.

When using .env files, values must be named ROCKET_{PARAM}, where PARAM is the setting you’re trying to define: ADDRESS sets the host while PORT sets the port.

The .env file would look something like this:

    ROCKET_ADDRESS=localhost
    ROCKET_PORT=8000

If you wanted to use Rocket.toml instead, it might look like:

    [development]
    address = "localhost"
    port = 8000

If you don’t include either of these, Rocket falls back to its default configuration.

If you want to learn more about configuring Rocket, check out Rocket’s documentation.

Last Step: Create The Main Method

The final step is to create the main method so the application can run.

    #![feature(plugin, decl_macro, custom_derive)]
    #![plugin(rocket_codegen)]

    #[macro_use]
    extern crate diesel;
    extern crate dotenv;
    extern crate r2d2;
    extern crate r2d2_diesel;
    extern crate rocket;
    extern crate rocket_contrib;
    #[macro_use]
    extern crate serde_derive;

    use dotenv::dotenv;

    mod people;
    mod schema;
    mod connection;

    fn main() {
        dotenv().ok();
        people::router::create_routes();
    }

In this instance, all main is doing is loading the environment variables and starting Rocket by calling create_routes. The rest of the file just declares the external crates and modules in one place so they’re not littered throughout the code.

To see the complete code, you can check out LankyDan’s GitHub.

Conclusion: Rust Vs. Haskell: Which Language Is Best For Building APIs?

As we stated at the beginning, knowing which programming language will best suit your needs isn’t just following a step-by-step recipe or formula. It depends on numerous variables, including your technical proficiency and what you’re working on.

That being said, there are a few reasons why Rust has some advantages over Haskell for building APIs, most notably its popularity. Rust has been trending in recent years, so there’s a ton of useful libraries and frameworks, not to mention a vibrant community to help you answer any questions you might encounter.

Secondly, Rust is also preferable when size, speed, and security matter, which is most of the time, at this point in the Web’s evolution.

Haskell definitely requires more technical understanding, but there are certain advantages to using functional programming for constructing APIs. The main advantage of using Haskell for your API design is its utility in rapid prototyping: the code you write while constructing your prototype can still be used in your finished product.

Unless you’re a seasoned API designer, or you’re trying to get an app to market as quickly as possible, Rust has a slight advantage over Haskell for API design, in our opinion.

Thanks for reading

If you liked this post, share it with all of your programming buddies!

Follow us on Facebook | Twitter


