When building web services, simpler is better. Learn how to build a Rust web service without using a web framework.

In my experience, when building web services, simpler is better. It’s tempting to take a fully featured, heavyweight web framework, drop it in, and use its convention-over-configuration approach to just “get stuff done”. Many developers have used this approach successfully, myself included.

However, I noticed after a while that this hidden complexity, which is huge, has some drawbacks. These drawbacks can range from degraded performance, to time lost debugging issues somewhere in the sprawling network of transitive dependencies, to simply not knowing what’s going on. At the same time, writing everything from scratch isn’t optimal either: it’s extremely time-consuming, and there is potential to introduce errors at every step.

I’m not saying that widely used open-source web frameworks are error-free — far from it. But there are, at the least, more eyes on them and more stakeholders actively finding and fixing bugs.

In my opinion, there is a happy medium. The goal is to keep complexity low, while retaining most of the convenience and development speed you get from having everything in one dependency.

This sweet spot might be different for different people because, depending on your experience, what you’re comfortable writing yourself or using a microdependency for will vary. But the general approach — to take multiple, small(er) libraries and build a minimal system out of them — has worked well for me in the past. One advantage is that the dependencies are small enough that you can actually, in finite time, go to their repository to read, understand, and, if necessary, fix them. What’s more, you’ll have a better overview of what’s in the codebase (i.e., what can go wrong) and what isn’t, which is difficult with huge frameworks. This enables you to actually tailor the system to the problem you’re solving.

You can define base-level APIs exactly as you need them to build your system without having to fight the framework to get things done. This requires a certain level of experience to get right, but compared to the time you’d otherwise spend wrestling with a framework, it’s certainly worth the effort.

In this tutorial, we’ll show you how to build a Rust web service without using a web framework.

We won’t be building everything from scratch, though. For our HTTP server, we’ll use hyper, which uses the tokio runtime underneath. Neither of these libraries is the most lightweight or minimal of options, but both are widely used and the concepts described here will apply regardless of the libraries used.

We’ll build a basic web server with a somewhat flexible routing API and a few sample handlers to show it off. By no means will our finished product be ready to use in production as-is, but by the end of the tutorial, you should have a clear idea of how you can extend it to get there.

Setup

To follow along, you’ll need a recent Rust installation (1.39+) and a tool to send HTTP requests, such as cURL.

First, create a new Rust project.

cargo new rust-minimal-web-example
cd rust-minimal-web-example

Next, edit the Cargo.toml file and add the following dependencies.

[dependencies]
futures = { version = "0.3.6", default-features = false, features = ["async-await"] }
hyper = "0.13"
tokio = { version = "0.2", features = ["macros", "rt-threaded"] }
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
route-recognizer = "0.2"
bytes = "0.5"
async-trait = "0.1"

That’s quite a few dependencies for a “minimal” web application. What’s up with that?

Since we’re not using a full web framework but trying to build our own system composed of several micro-libraries, the overall complexity is still lower, even if the number of direct dependencies goes up.

It’s not necessarily the number of direct dependencies we’re worried about, but the number of transitive dependencies and the amount of code that glues them together.

Since we’re using hyper as an async HTTP server, we also need an async runtime. In this example, we use tokio, but we could also use a lighter-weight solution, such as smol.

The serde and serde_json dependencies are necessary for handling incoming JSON. We could probably get away with using nanoserde as well if we wanted to minimize everything. The route-recognizer crate is a very small, lightweight router that can also handle paths with parameters, such as /product/:product_id.
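
To get a feel for what route-recognizer gives us, here is a small, standalone sketch (not part of our web service) that registers a parameterized route and matches a concrete path against it. The path and handler value are made up for illustration, and the exact accessors may differ slightly between crate versions.

use route_recognizer::Router;

fn demo_routing() {
    // Register a route pattern containing a named parameter.
    let mut router = Router::new();
    router.add("/product/:product_id", "product_handler");

    // Recognize a concrete path; on a match we get the handler back,
    // plus the captured parameters.
    let matched = router.recognize("/product/42").expect("route should match");
    assert_eq!(*matched.handler, "product_handler");
    assert_eq!(matched.params.find("product_id"), Some("42"));
}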

The remaining libraries, namely bytes, async-trait, and futures, are there to help us build our router. They’re very likely transitive dependencies of whatever we’re using already anyway, so they don’t add any weight, and they’re not particularly heavy in any case. In the case of futures, we could also use the futures-lite crate.

Handler API

Let’s build this from the bottom up and look at the API we want for our handlers first. Then, we’ll implement a router and, at the end, put everything together.

We’ll create three handlers in this example:

  • GET /test, a basic handler that returns a string to show it works
  • POST /send, a handler expecting a JSON payload, which returns an error if it isn’t valid
  • GET /params/:some_param, a simple handler to show off how we handle path parameters
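
To make the intended behavior concrete, here’s roughly how we’ll be able to call these endpoints with cURL once everything is wired up (the port is an assumption at this point; we’ll pick it when we set up the server):

curl http://localhost:8080/test
curl -X POST http://localhost:8080/send -d '{"name": "some name", "active": true}'
curl http://localhost:8080/params/1234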

Let’s implement those in handler.rs to see the API we’d like to create for them.

use crate::{Context, Response};
use hyper::StatusCode;
use serde::Deserialize;

pub async fn test_handler(ctx: Context) -> String {
    format!("test called, state_thing was: {}", ctx.state.state_thing)
}

#[derive(Deserialize)]
struct SendRequest {
    name: String,
    active: bool,
}

pub async fn send_handler(mut ctx: Context) -> Response {
    let body: SendRequest = match ctx.body_json().await {
        Ok(v) => v,
        Err(e) => {
            return hyper::Response::builder()
                .status(StatusCode::BAD_REQUEST)
                .body(format!("could not parse JSON: {}", e).into())
                .unwrap()
        }
    };

    Response::new(
        format!(
            "send called with name: {} and active: {}",
            body.name, body.active
        )
        .into(),
    )
}

pub async fn param_handler(ctx: Context) -> String {
    let param = match ctx.params.find("some_param") {
        Some(v) => v,
        None => "empty",
    };
    format!("param called, param was: {}", param)
}

The test_handler is very simple, but we’ve already seen one important concept: the Context. To use request state (such as the body, query parameters, and headers) within a handler, we need some way to get it in there.

Also, depending on how we want to architect the system, we might want to make certain shared systems (an HTTP client or a database repository) available to handlers. In this example, we’ll use a Context object to encapsulate these things. In this case, we return the content of the ctx.state.state_thing variable, which is a dummy version of some actual application state. We’ll look at how it’s implemented later on, but it holds every bit of information the handler might need to do its work.

For the send_handler, we already see Context in action. We expect a SendRequest JSON payload in this handler, so we use the ctx.body_json() method to parse the request body to a SendRequest. If this goes wrong, we simply return a 400 error.

An interesting difference between test_handler and send_handler is the return type. We want to be able to return values of different types. In the most basic case, a String or a &'static str will do. In other cases, such as here, we want to return a Result<> or a raw Response.

The third handler, param_handler, shows off how we can use Context to get to path parameters, which are defined in the route. All handlers are async functions, but we could also create a handler using impl Future by hand.
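
As a hypothetical illustration (this isn’t part of the tutorial’s code), a handler written “by hand” could be a plain function returning a type that implements Future, for example via an async block:

use std::future::Future;

// A handler without `async fn`: a normal function returning `impl Future`.
// `manual_handler` is just an example name and isn't used elsewhere.
pub fn manual_handler(ctx: Context) -> impl Future<Output = String> {
    async move { format!("manual called, state_thing was: {}", ctx.state.state_thing) }
}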

In main.rs, we can add the definition for the Context and some helper types.

use bytes::Bytes;
use hyper::{body::to_bytes, Body, Request};
use route_recognizer::Params;

type Response = hyper::Response<hyper::Body>;
type Error = Box<dyn std::error::Error + Send + Sync + 'static>;

#[derive(Clone, Debug)]
pub struct AppState {
    pub state_thing: String,
}

#[derive(Debug)]
pub struct Context {
    pub state: AppState,
    pub req: Request<Body>,
    pub params: Params,
    body_bytes: Option<Bytes>,
}

impl Context {
    pub fn new(state: AppState, req: Request<Body>, params: Params) -> Context {
        Context {
            state,
            req,
            params,
            body_bytes: None,
        }
    }

    pub async fn body_json<T: serde::de::DeserializeOwned>(&mut self) -> Result<T, Error> {
        let body_bytes = match self.body_bytes {
            Some(ref v) => v,
            _ => {
                let body = to_bytes(self.req.body_mut()).await?;
                self.body_bytes = Some(body);
                self.body_bytes.as_ref().expect("body_bytes was set above")
            }
        };
        Ok(serde_json::from_slice(&body_bytes)?)
    }
}

Here, we define Response to save us some typing, as well as Error, a generic error type. In practice, we’d probably want to use a custom error type to propagate errors throughout our application.
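
As a sketch of what such a custom error type could look like (the variants here are invented for illustration and not used in the rest of the tutorial):

use std::fmt;

// A hypothetical application error type we could propagate instead of the boxed Error alias.
#[derive(Debug)]
pub enum AppError {
    BadRequest(String),
    Internal(String),
}

impl fmt::Display for AppError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            AppError::BadRequest(msg) => write!(f, "bad request: {}", msg),
            AppError::Internal(msg) => write!(f, "internal error: {}", msg),
        }
    }
}

impl std::error::Error for AppError {}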

Next, we define our AppState struct. In this example, this simply holds the aforementioned dummy application state. This AppState could, for example, also hold shared references to a cache, a database repository, or other application-wide things you’d want your handlers to access.
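
Purely as a sketch (and not the AppState we actually use in this tutorial), shared resources like that could be handed to handlers by wrapping them in Arc; UserRepo here is a hypothetical trait standing in for a database repository.

use std::sync::Arc;

// A hypothetical repository trait; in a real application this would talk to a database.
pub trait UserRepo: Send + Sync {
    fn find_name(&self, id: u64) -> Option<String>;
}

// An illustrative, richer application state; not the one used in this tutorial.
#[derive(Clone)]
pub struct SharedAppState {
    pub state_thing: String,
    pub user_repo: Arc<dyn UserRepo>,
}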

The Context itself is just a struct containing the AppState, the incoming Request, the path parameters if there are any, and body_bytes. It exposes one function called body_json, which sets body_bytes and tries to parse it to the given type.

We memoize body_bytes here because it’s possible we’d want to access the body multiple times during the request’s life cycle. This way, we only have to read and store it once (for example, in a middleware).
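
As a small, hypothetical illustration of that: a handler (or middleware) could call body_json more than once, and only the first call would actually consume the request body; later calls deserialize from the cached bytes.

// Not part of the tutorial's handlers; this just shows the memoization in action.
async fn inspect_twice(mut ctx: Context) -> Result<(), Error> {
    let first: serde_json::Value = ctx.body_json().await?; // reads and caches body_bytes
    let second: serde_json::Value = ctx.body_json().await?; // served from the cached bytes
    assert_eq!(first, second);
    Ok(())
}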
