What is REST API? | Restful Web Service

In this post, "Restful Web Service", you'll learn: what web services are, what an API is, what a REST API is, how REST works, and how to implement a REST API.
A REST API defines a set of functions to process requests and responses via the HTTP protocol.

REST is used in mobile applications as well as in web applications.

What is REST? What are RESTful Web Services?

This tutorial provides an introduction to RESTful web services, covering what REST is and how it builds on HTTP.

REST stands for REpresentational State Transfer. It is a popular architectural style for building APIs today.

You Will Learn
  • What is REST?
  • What are the fundamentals of REST APIs?
  • How do you make use of HTTP when building REST APIs?
  • What is a Resource?
  • How do you identify REST API Resources?
  • What are some of the best practices in designing REST API?
What Is REST?

The acronym REST stands for REpresentational State Transfer. The term was originally coined by Roy Fielding, one of the principal authors of the HTTP specification. The striking feature of REST services is that they aim to make the best use of HTTP. Let's now have a quick overview of HTTP.

A Relook at HTTP

Think of a typical browsing session: we open the browser and visit a web page, click on one of the results, follow a link on the page we end up on, and land on another page. This is how we typically browse the web.

When we browse the internet, a lot happens behind the scenes. The following is a simplified view of what happens between the browser and the servers running on the visited websites:

The HTTP Protocol

When you enter a URL such as https://www.google.com in the browser, a request is sent to the server on the website identified by the URL. That server then sends back a response. The important thing is the formats of these requests and responses. These formats are defined by a protocol called HTTP (Hyper Text Transfer Protocol).

When you type in a URL at the browser, it sends out a GET request to the identified server. The server then replies with an HTTP response that contains data in HTML (Hyper Text Markup Language). The browser then takes this HTML and displays it on your screen.

Let's say you are filling in a form present on a web page with a list of details. In such a scenario when you click the Submit button, an HTTP POST request gets sent out to the server.

HTTP and RESTful Web Services

HTTP provides the base layer for building web services. Therefore, it is important to understand HTTP. Here are a few key abstractions.


Resources

A resource is a key abstraction that HTTP centers around. A resource is anything you want to expose to the outside world through your application. For instance, if we write a todo management application, examples of resources are:

  • A specific user
  • A specific todo
  • A list of todos

Resource URIs

When you develop RESTful services, you need to focus your thinking on the resources in the application. The way we identify a resource to expose is to assign a URI (Uniform Resource Identifier) to it. For example:

  • The URI for the user Ranga is /users/ranga
  • The URI for all the todos belonging to Ranga is /users/ranga/todos
  • The URI for the first todo that Ranga has is /users/ranga/todos/1

Resource Representation

REST does not worry about how you represent your resource. It could be XML, HTML, JSON, or something entirely different! The only important thing is that you clearly define your resource and perform whatever actions are supported on it by making use of features already provided by HTTP. Examples are:

  • Create a user: POST /users
  • Delete a user: DELETE /users/1
  • Get all users: GET /users
  • Get a single user: GET /users/1
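The verb-plus-URI pairs above amount to a simple dispatch table. The following standalone Rust sketch (illustrative only — no real server or framework involved) shows how a method and path together identify an operation:

```rust
// Map an HTTP method and path to the operation it represents.
// The routes mirror the /users examples above; this is a sketch,
// not a real router.
fn describe(method: &str, path: &str) -> &'static str {
    match (method, path) {
        ("POST", "/users") => "create a user",
        ("GET", "/users") => "get all users",
        ("GET", "/users/1") => "get a single user",
        ("DELETE", "/users/1") => "delete a user",
        _ => "unknown operation",
    }
}

fn main() {
    // The same URI means different things under different verbs.
    println!("{}", describe("GET", "/users/1"));
    println!("{}", describe("DELETE", "/users/1"));
}
```

Note how the same URI (/users/1) maps to different operations depending on the verb — that is the core idea of using HTTP methods against resources.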
REST and Resources

A significant point to note is that with REST, you need to think about your application in terms of resources:

  • Identify what resources you want to expose to the outside world
  • Make use of the verbs already specified by HTTP to perform operations on these resources

Here is how a REST service is generally implemented:

  • Data Exchange Format: No restriction is imposed over here. JSON is a highly popular format, although others such as XML can be used as well
  • Transport: Always HTTP. REST is completely built on top of HTTP.
  • Service Definition: There is no standard to specify this, and REST is flexible. This could be a drawback in some scenarios, as it might be necessary for the consuming application to understand the request and response formats. There are widely used options, however, such as WADL (Web Application Description Language) and Swagger.

REST focuses on resources and how effectively you perform operations on them using HTTP.

The Components of HTTP

HTTP defines the following for a request:

  • Method (the HTTP verb, e.g. GET or POST)
  • URI of the resource
  • Request headers
  • Request body

For the response, HTTP defines the:

  • Status code
  • Response headers
  • Response body

HTTP Request Methods

The method used in an HTTP request indicates what action you want to perform with that request. Important examples are:

  • GET: Retrieve details of a resource
  • POST : Create a new resource
  • PUT: Update an existing resource
  • DELETE: Delete a resource

HTTP Response Status Code

A status code is always present in an HTTP response. Common examples are:

  • 200: Success
  • 404: Resource not found
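Each numeric status code pairs with a standard reason phrase defined by the HTTP specification. A tiny, hypothetical lookup in Rust (the function itself is just for illustration):

```rust
// Translate a numeric HTTP status code into its standard reason phrase.
fn reason_phrase(status: u16) -> &'static str {
    match status {
        200 => "OK",
        201 => "Created",
        304 => "Not Modified",
        400 => "Bad Request",
        404 => "Not Found",
        500 => "Internal Server Error",
        _ => "Unknown",
    }
}

fn main() {
    println!("{} {}", 200, reason_phrase(200));
    println!("{} {}", 404, reason_phrase(404));
}
```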

In this article, we had a high-level look at REST. We stressed the fact that HTTP is the building block of REST services. HTTP is a protocol that is used to define the structure of browser requests and responses. We saw that HTTP deals mainly with resources that are exposed on web servers. Resources are identified using URIs, and operations on these resources are performed using verbs defined by HTTP.

Finally, we looked at how REST services make the best use of features offered by HTTP to expose resources to the outside world. REST does not put any restrictions on the resource representation formats or on the service definition.

Web Development with Rust - 03/x: Create a REST API


  1. HTTP Requests
  2. POST/PUT/PATCH/DELETE are special
  3. The Job of a Framework
  4. Creating an API spec
  5. Crafting the API
  6. Input Validation
  7. Summary

APIs are the bread and butter of the modern, fast-paced web. Frontend applications, other web services, and IoT devices need to be able to talk to your service. API endpoints are like doors: you decide what comes in, and in which format.

Since Rust is a statically typed language with a strong compiler, you won't face many of the common pitfalls of running a web service in production. There are still runtime errors you have to cover, though.

HTTP Requests

When we talk about creating an API, we basically mean a web application that listens on certain paths and responds accordingly. But first things first: for two devices to be able to communicate with each other, there has to be an established TCP connection.

TCP is a protocol the two parties can use to establish a connection. After establishing this connection, you can receive and send messages to the other party. HTTP is another protocol, built on top of TCP, which defines the contents of the requests and responses.

On the Rust side of things, TCP is implemented in the Rust standard library; HTTP is not. Whichever framework you chose in the previous article, they all implement HTTP and are therefore able to receive and send HTTP-formatted messages.

An example GET request looks like this:

GET / HTTP/1.1
Host: api.awesomerustwebapp.com
Accept-Language: en

It includes:

  • GET: the HTTP method
  • /: The path
  • HTTP/1.1: The version of the HTTP protocol
  • Host: The host/domain of the server we want to request data from
  • Accept-Language: Which language we prefer and understand
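Because an HTTP message is plain text, picking it apart is straightforward. Here is a small, stdlib-only Rust sketch that parses the request line of the message above (a simplification of what a real HTTP implementation does):

```rust
// Split the first line of a raw HTTP message into method, path, and version.
fn parse_request_line(raw: &str) -> Option<(&str, &str, &str)> {
    let mut parts = raw.lines().next()?.split_whitespace();
    Some((parts.next()?, parts.next()?, parts.next()?))
}

fn main() {
    let raw = "GET / HTTP/1.1\r\nHost: api.awesomerustwebapp.com\r\nAccept-Language: en";
    if let Some((method, path, version)) = parse_request_line(raw) {
        println!("method={} path={} version={}", method, path, version);
    }
}
```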

The most commonly used HTTP methods are:

  • GET
  • POST
  • PUT

POST/PUT/PATCH/DELETE are special

We are using GET every time we browse the web. If we want to alter data, however (like using POST to send data over to another server), we need to be more cautious and precise.

First, not everyone is allowed to just send a bunch of data to another server. Our API can, for example, say: "I only accept data from the server with the host name allowed.awesomerustapp.com."

Therefore, when you send a POST to another server from the browser, what actually happens first is the CORS workflow:

We first ask the server what is allowed: where do you accept requests from, and which headers do you accept? If we fulfill all of these requirements, then we can send the POST.

Disclaimer: Not all frameworks (like rocket and tide) implement CORS in their core. However, in a professional environment, you often handle CORS on the DevOps side of things and put it, for example, in your NGINX config.

The Job of a Framework

We use the hard work of other people to create web applications. Everything has to be implemented at some point, just not by you most of the time. A framework covers the following concerns:

  • Start a web server and open a PORT
  • Listen to requests on this PORT
  • If a request comes in, look at the Path in the HTTP header
  • Route the request to the handler according to the Path
  • Help you extract the information out of the request
  • Pack the generated data and HTTP status code (created by you) into a response
  • Send the response back to the sender

The Rust web framework tide includes http-service, which provides the basic abstractions you need when working with HTTP calls. The crate http-service is built on top of hyper, which transforms TCP streams into valid HTTP requests and responses.

Your job is to create routes like /users/:id and add a route_handler which is a function to handle the requests on this particular path. The framework makes sure that it directs the incoming HTTP requests to this particular handler.
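Under the hood, routing a path like /users/:id boils down to segment-by-segment matching. This sketch is my own simplification of the idea, not tide's actual implementation:

```rust
// Match a concrete path against a pattern such as "/users/:id".
// Segments starting with ':' capture the corresponding path segment.
fn match_route<'a>(pattern: &'a str, path: &'a str) -> Option<Vec<(&'a str, &'a str)>> {
    let pat: Vec<&str> = pattern.split('/').filter(|s| !s.is_empty()).collect();
    let seg: Vec<&str> = path.split('/').filter(|s| !s.is_empty()).collect();
    if pat.len() != seg.len() {
        return None; // different number of segments: no match
    }
    let mut params = Vec::new();
    for (p, s) in pat.iter().zip(seg.iter()) {
        if let Some(name) = p.strip_prefix(':') {
            params.push((name, *s)); // bind the parameter
        } else if p != s {
            return None; // literal segment differs
        }
    }
    Some(params)
}

fn main() {
    println!("{:?}", match_route("/users/:id", "/users/42"));
    println!("{:?}", match_route("/users/:id", "/ideas/42"));
}
```

A real router also dispatches on the HTTP method and handles trailing slashes, wildcards, and precedence, but the parameter binding works essentially like this.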

Creating an API spec

You have to define your resources first to get an idea of what your application needs to handle and to uncover relationships between them. So if you want to build an idea-upvoting site, you would have:

  • Users
  • Ideas
  • Votes

A simple spec for this scenario would look like this:

  • Users
  • POST /users
  • GET /users
  • PUT /users/:user_id
  • PATCH /users/:user_id
  • DELETE /users/:user_id
  • GET /users/:user_id

Ideas and Votes behave accordingly. A spec is helpful for two reasons:

  • It gives you guidelines so you don't forget a path
  • It helps to communicate to your API users what to expect

You can use tools like Swagger to write a full spec, which also describes the structure of the data and the messages/responses for each path and route.

A more professional spec would include the return values for each route and the request and response bodies. However, the spec can be finalized once you know how your API should look and behave. To get started, a simple list is enough.

Crafting the API

Depending on the framework you are using, your implementation will look different. You have to have the following features on your radar:

  • Creating routes for each method (like app.at("/users").post(post_users_handler))
  • Extracting information from the request (like headers, uri-params and JSON from the request body)
  • Creating responses with proper HTTP status codes (200, 201, 400, 404, etc.)

I am using the latest version of tide for this web series. You can add it to your Cargo.toml file and use it for your web app:

tide = "0.1.0"

Our first User implementation will look like this:

async fn handle_get_users(cx: Context<Database>) -> EndpointResult {
    Ok(response::json(cx.app_data().get_all()))
}

async fn handle_get_user(cx: Context<Database>) -> EndpointResult {
    let id = cx.param("id").client_err()?;
    if let Some(user) = cx.app_data().get(id) {
        Ok(response::json(user))
    } else {
        Err(StatusCode::NOT_FOUND)?
    }
}

async fn handle_update_user(mut cx: Context<Database>) -> EndpointResult<()> {
    let user = await!(cx.body_json()).client_err()?;
    let id = cx.param("id").client_err()?;

    if cx.app_data().set(id, user) {
        Ok(())
    } else {
        Err(StatusCode::NOT_FOUND)?
    }
}

async fn handle_create_user(mut cx: Context<Database>) -> EndpointResult<String> {
    let user = await!(cx.body_json()).client_err()?;
    Ok(cx.app_data().insert(user).to_string())
}

async fn handle_delete_user(cx: Context<Database>) -> EndpointResult<String> {
    let id = cx.param("id").client_err()?;
    Ok(cx.app_data().delete(id).to_string())
}

fn main() {
    // We create a new application with a basic, local database
    // You can use your own implementation, or none: App::new(())
    let mut app = App::new(Database::default());
    app.at("/users")
        .post(handle_create_user)
        .get(handle_get_users);
    app.at("/users/:id")
        .get(handle_get_user)
        .patch(handle_update_user)
        .delete(handle_delete_user);
    app.serve("127.0.0.1:8000").unwrap();
}



You can find the full implementation of the code in the GitHub repository to this series.

We see that we first have to create a new App:

let mut app = App::new(());

add routes:

app.at("/users/:id")

and, for each route, add the HTTP requests we want to handle:

app.at("/users/:id").get(handle_get_user).patch(handle_update_user).delete(handle_delete_user);
Each framework has a different way of extracting parameters and JSON bodies. Actix uses Extractors, rocket uses Request Guards.

With tide, you can access request parameters, request bodies, and database connections through Context. So when we want to update a User with a specific id, we send a PATCH to /users/:id. From there, we call the handle_update_user method.

Inside this method, we can access the id from the URI like this:

let id = cx.param("id").client_err()?;

Each framework also has its own way of sending responses back to the sender. Tide uses EndpointResult, rocket uses Response, and actix HttpResponse.

Everything else is completely up to you. The framework might help you with session management and authentication, but you can also implement this yourself.

My suggestion is: Build the first skeleton of your app with the framework of your choice, figure out how to extract information out of requests and how to form responses. Once this is working, you can use your Rust skills to build small or big applications as you wish.

Input Validation

Your best friend in the Rust world will be serde. It will help you parse JSON and other formats, but will also allow you to serialize your data.

When we talk about input validation, we want to make sure the data we are getting has the right format. Let's say we are extracting the JSON body out of a request:

let user: User = serde_json::from_str(&request_body).unwrap();

We are using serde_json here to transform a JSON string into a struct of our choice. So if we created this struct:

#[derive(Serialize, Deserialize)]
struct User {
    name: String,
    height: u32,
}
we want to make sure the sender includes both name and height. If we just call serde_json::from_str and unwrap the result, and the sender forgot to pass on the height, the app will panic and shut down, since we expect the parsed value to be a user: let user: User.

We can improve the error handling like this:

let user: User = match serde_json::from_str(&request_body) {
    Ok(user) => user,
    Err(error) => handle_error_case(error),
};
We catch the error and call our handle_error_case method to handle it gracefully.

Summary

  1. Pick a framework of your choice
  2. rocket is nightly
  3. actix is stable
  4. tide is fostered close to the Rust Core and also works on Rust nightly
  5. Know that there is no common CORS handling (yet). The recommendation is to handle this on the DevOps side (NGINX, for example)
  6. After picking a framework, spec out your resources (/users: GET, POST etc.)
  7. Figure out how the framework of your choice is handling extracting parameters and JSON from the request and how to form a response
  8. Validate your input via match and serde_json


Learn about Developing REST APIs

This article introduces a set of tools essential to building REST APIs. The tools are platform independent, which means they are applicable to REST APIs built with any technology stack. The goal of this article is to familiarise novice API developers with different stages of API development and introduce tools that help with those stages. Detailed coverage of these tools can be found on the web. The different phases of API development are enumerated below.

  1. Design — The main goal here is to define the shape of APIs, document interfaces, and provide stub endpoints.
  2. Testing — Here, we do functional testing of APIs by sending a request and analyzing the response at different levels of visibility, namely, application, HTTP, and network.
  3. Web Hosting — When deployed on the web, there are HTTP tools that help with the hosting of APIs for performance, security, and reliability.
  4. Performance — Before moving on to production, we use tools for performance testing of APIs that tell us how much load APIs may support.
  5. Observability — Once the API is deployed in production, testing in production provides the overall health of live APIs and alerts us if any problem occurs.
  6. Management — Lastly, we will take a look at some of the tools for API management activities like traffic shaping, blue-green deployment, canary, etc.

The following figure shows different stages highlighting the tools.

We will illustrate the usage of tools on APIs exposed by a web application as we elaborate on each phase of API development. Product Catalog is a Spring Boot web application that manages a catalog of products. It exposes REST APIs to perform CRUD operations on a product catalog.


Design

In the design phase, the API developer collaborates with clients of the API and the data provider to arrive at the shape of the API. A REST API essentially consists of exchanging JSON messages over HTTP. JSON is the dominant format in REST APIs since it is compact, easy to understand, and flexible, and it does not require declaring a schema up front. Different clients can use the same API and read only the data they need.

We will illustrate API design using Swagger. It is a tool that uses an open format to describe APIs, coupled with a web UI for visualizing and sharing. There is no separation between design and implementation: it is an API documentation tool where the documentation is hosted alongside the API. The benefit of this is that the API and the documentation remain in sync. The drawback is that only API developers can change the structure of the API. Since the documentation is generated from the API, we need to build the skeleton of our API first. We have used Spring Boot to develop the API and the Springfox package to generate the Swagger documentation. Bring the swagger2 and swagger-ui Maven dependencies into your pom.xml.
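The exact artifacts depend on your setup; a commonly used pair of Springfox dependencies looks like the following (the version shown is an example — check Maven Central for a current one):

```xml
<!-- Springfox Swagger2 generates the API description -->
<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-swagger2</artifactId>
    <version>2.9.2</version>
</dependency>
<!-- Springfox Swagger UI serves the interactive documentation page -->
<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-swagger-ui</artifactId>
    <version>2.9.2</version>
</dependency>
```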


Add SwaggerConfig.java to the project with the following content.

package com.rks.catalog.configuration;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import springfox.documentation.builders.PathSelectors;
import springfox.documentation.builders.RequestHandlerSelectors;
import springfox.documentation.spi.DocumentationType;
import springfox.documentation.spring.web.plugins.Docket;
import springfox.documentation.swagger2.annotations.EnableSwagger2;
@Configuration
@EnableSwagger2
public class SwaggerConfig {
    @Bean
    public Docket api() {
        return new Docket(DocumentationType.SWAGGER_2)
                .select()
                .apis(RequestHandlerSelectors.any())
                .paths(PathSelectors.any())
                .build();
    }
}
This configuration tells Swagger to scan all the controllers and include all the URLs defined in those controllers for API documentation.

Once the application is started, the Swagger documentation of the APIs can be accessed in the browser (with Springfox, the default path is /swagger-ui.html).


Click on each API to examine the details — the URL, HTTP headers, and the HTTP body where applicable. A useful feature is the "Try it out!" button, which provides a sandbox environment that lets people play with the API to get a feel for it before they start plugging them in their apps.


Testing

Functional testing of REST APIs entails sending HTTP requests and checking responses so that we can verify that APIs behave as we expect. REST uses HTTP for transport, and HTTP specifies the request and response formats of the API. TCP/IP, in turn, takes the HTTP messages and decides how to transport them over the wire. We introduce three sets of tools to test APIs at these three layers of the protocol stack: REST clients for the REST layer, web debuggers for the HTTP layer, and packet sniffers for the TCP/IP layer.

  • Postman — Postman is a REST client that allows us to test REST APIs. It allows us to:
  • Create HTTP requests and generate equivalent cURL commands that can be used in scripts.
  • Create multiple environments for Dev, Test, Pre-Prod as each environment has different configurations.
  • Create a test collection having multiple tests for each product area. The different parts of a test can be parameterized, which allows us to switch between environments.
  • Create code snippets in JavaScript to augment our tests, e.g., assert return codes or set environment variables.
  • Automate running of tests with a command-line tool called Newman.
  • Import/export test collections and environments.

  • cURL — It is a command-line tool that uses its own HTTP stack and is available cross-platform.
curl -X POST \
  http://localhost:8080/books \
  -H 'Cache-Control: no-cache' \
  -H 'Content-Type: application/json' \
  -d '{ ... }'
  • Burp — Burp is an HTTP debugger that lets us see the web traffic that goes between the client and the API. It runs as a proxy between the client and the server. This allows us to intercept the request and the response and modify them to create scenarios that are otherwise difficult to test without changing the client. It is a suite of tools mainly used for security testing, but it can be very useful for API testing as well. Set up Postman to send requests to the Burp proxy, and configure Burp to intercept the client request and server response. Then intercept the request and response as shown below.

  • Wireshark — Verification of some features of an API, e.g., encryption, compression, etc., requires us to look a level deeper to see what is being sent and received on the network. Wireshark is a tool that monitors a network interface and keeps a copy of all TCP packets that pass through it. Traffic is split by layers: HTTP, TCP, IP, etc. It also helps us troubleshoot issues that require us to go deeper, e.g., the TLS handshake.

Web Hosting

In this section, we will look at some of the features of the HTTP protocol that, if properly used, help us deliver performant, highly available, robust, and secure APIs. In particular, we will cover three parts of HTTP protocol — Caching for performance, DNS for high availability and scalability, and TLS for transport security.

  • Caching — Caching is one of the best ways to improve client performance and reduce the load on an API. HTTP allows clients to save a copy of a resource locally by sending a caching header in the response. The next time the client sends an HTTP request for the same resource, it is served from the local cache. This saves both network traffic and compute load on the API.
  • HTTP 1.0 Expiration Caching. HTTP 1.0 provides the Expires header in the HTTP response, indicating the time when the resource will expire. This can be useful for shared resources with a fixed expiration time.
  • HTTP 1.1 Expiration Caching. HTTP 1.1 provides a more flexible expiration header, Cache-Control, that instructs a client to cache the resource for a period set in the max-age value. There is another value, s-maxage, that can be set for intermediaries, e.g., a caching proxy.
  • HTTP Validation Caching. With caching, there is the problem of a client having an outdated resource, or of two clients having different versions of the same resource. If this is not acceptable, or if there are personalized resources that cannot be cached, e.g., auth tokens, HTTP provides validation caching. With validation caching, HTTP provides response headers, ETag or the Last-Modified timestamp. If the API returns either of the two headers, clients cache it and include it in subsequent GET calls to the API:
GET http://api.endpoint.com/books
If-None-Match: "4v44ffgg1e"

If the resource has not changed, the API returns a 304 Not Modified response with no body, and the client can safely use its cached copy.
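The server-side decision can be sketched in a few lines of Rust (a hypothetical helper, not tied to any framework or real header parsing):

```rust
// Decide the status code for a conditional GET: if the client's
// If-None-Match value equals the resource's current ETag, the cached
// copy is still valid and we answer 304 with no body.
fn conditional_get_status(current_etag: &str, if_none_match: Option<&str>) -> u16 {
    match if_none_match {
        Some(tag) if tag == current_etag => 304, // Not Modified
        _ => 200,                                // OK, send the full body
    }
}

fn main() {
    println!("{}", conditional_get_status("4v44ffgg1e", Some("4v44ffgg1e")));
    println!("{}", conditional_get_status("4v44ffgg1e", None));
}
```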

  • DNS — The Domain Name System finds IP addresses for a domain name so that clients can route their requests to the correct server. When an HTTP request is made, clients first query a DNS server to find the address for the host and then send the request directly to that IP address. DNS is a multi-tiered system that is heavily cached to ensure requests are not slowed down. Clients maintain a DNS cache, then there are intermediate DNS servers leading all the way to a nameserver. DNS provides CNAME (Canonical Name) records to access different parts of the server, e.g., both the API and the webserver may be hosted on the same machine with two different CNAMEs, api.endpoint.com and www.endpoint.com, or the CNAMEs may point to different servers. CNAMEs also let us segregate parts of our API. For HTTP GET requests, we can have separate CNAMEs for static and transactional resources, which lets us set up a fronting proxy for resources that are likely to be cache hits. We can also have a CNAME for HTTP POST requests to separate reads and writes so that we can scale them independently, or provide a fast lane for priority customers.

With an advanced DNS service like Route 53, a single CNAME, instead of pointing to a single server, may point to multiple servers. A routing policy may then be configured for weighted routing, latency-based routing, or fault tolerance.

  • TLS — We can secure our APIs with TLS, which lets us serve requests over HTTPS. HTTPS works on the basic security principle of a key pair. To enable HTTPS on our API, we need a certificate on our server that contains a public/private key pair. The server sends the public key to the client, which uses it to encrypt data, and the server uses its private key to decrypt it. When the client first connects to an HTTPS endpoint, there is a handshake where the client and server agree on how to encrypt the traffic. They exchange another key, unique to the session, which is used to encrypt and decrypt data for the life of that session. There is a performance hit during the initial handshake due to the asymmetric encryption, but once the connection is established, symmetric encryption is used, which is quite fast.

For proxies to cache TLS traffic, we have to upload the same certificate that is used to encrypt the traffic. The proxy should be able to decrypt the traffic, save it in its cache, encrypt it with the same certificate, and send it to the client. Some proxy servers do not allow this. In such situations, one solution is to have two CNAMEs: one for static, cacheable resources over HTTP, while requests for non-cacheable, personalized resources are served by the API directly over a secured TLS channel.


Performance

In this section, we will look at tools to load test our API so that we can quantify how much traffic our infrastructure can cope with. The basic idea behind performance testing is to send lots of requests to the API at the same time and see at what point performance degrades and ultimately fails. The answers we look for are:

  • What response times can the API give under different load conditions?
  • How many concurrent requests can the API handle without errors?
  • What infrastructure is required to deliver the desired performance?

loader.io is a free, cloud-based load testing service that allows us to stress test our APIs. To get a baseline performance for an API, different kinds of load tests can be run with increasing loads, measured in requests per second, to find out performance figures quantified by errors and response times:

  • Soak test — average load for long periods, e.g., run for 48 hours @ 1 request per second. This will uncover memory leaks or other similar latent bugs.
  • Load test — peak load, e.g., run 2K requests per second with 6 instances of the API.
  • Stress test — way-over-peak load, e.g., run 10K requests per second for 10 minutes.

This also lets us decide the infrastructure that will let us deliver API with desired performance numbers and whether our solution scales linearly.
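To put the example figures above in perspective, here is the arithmetic for the total number of requests each profile generates (the numbers are the article's examples, not recommendations):

```rust
// Total requests = rate (requests/second) * duration (seconds).
fn total_requests(rps: u64, seconds: u64) -> u64 {
    rps * seconds
}

fn main() {
    // Soak test: 48 hours at 1 request per second.
    println!("soak:   {}", total_requests(1, 48 * 60 * 60));
    // Stress test: 10 minutes at 10K requests per second.
    println!("stress: {}", total_requests(10_000, 10 * 60));
}
```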


Observability

Once the API is deployed in production, that does not mean we can forget about it. Production deployment kicks off another phase of testing: testing in production, which may uncover issues that remained uncaught in earlier phases. Testing in production includes a set of activities clubbed together as observability, which includes logging, monitoring, and tracing. The tools for these activities help us diagnose and resolve issues found in production.

  • Logging — Logging needs to be done explicitly by the developers using their preferred logging framework and a logging standard. For example, one log statement for every 10 lines of code (or more if the code is complex), with log levels split as: 60 percent DEBUG, 25 percent INFO, 10 percent WARN, and 5 percent ERROR.
  • Monitoring — Monitoring runs at a higher level than logging. While logging explicitly tells us what is going on with the API, monitoring provides the overall health of the API using generic metrics exposed by the platform and the API itself. Metrics are typically exposed by an agent deployed on the server, or as part of the solution, and are collected periodically by a monitoring system deployed remotely.

Diagnostic endpoints may also be included in the solution to tell us the overall health of the API.

  • Tracing — Zipkin is a distributed tracing system. It helps gather timing data needed to troubleshoot latency problems in microservice architectures.

Enabling Centralized Logging covers logging and tracing. For monitoring, interesting metrics may be stored in a time-series store like Prometheus and visualized using Grafana.


Management

API management tools serve as a gateway and provide services that let:

  • API clients provision themselves by getting an API key
  • API providers configure DNS, caching, throttling policies, API versioning, and canarying

These features and more are available on AWS API Gateway.

Thanks for reading. Originally published on https://dzone.com