Building a GraphQL server in Next.js

Many people only think of Next.js as a frontend React framework, providing server-side rendering, built-in routing, and a number of performance features. All of this is still true, but since version 9, Next.js also supports API routes, an easy way to provide a backend to your React frontend code, all within the same package and setup.

In this article, we will learn how to use API routes to set up a GraphQL API within a Next.js app. It will start with the basic setup, and then cover some more in-depth concepts such as CORS, loading data from Postgres via the Knex package, improving performance using the DataLoader package and pattern, and avoiding costly N+1 queries.

Setting up Next.js

The easiest way to set up Next.js is to run the command npx create-next-app. npx ships with npm 5.2 and later; if you don’t have it, you can install it globally by running npm i -g npx.

There is even an example setup you can use for the very thing we are going to cover today, setting up Next.js with a GraphQL API: npx create-next-app --example api-routes-graphql. That said, we’re going to set things up ourselves, focusing on a number of additional concepts, so I have chosen to go with the bare-bones starter app.

Adding an API route

With Next.js set up, we’re going to add an API (server) route to our app. This is as easy as creating a file within the pages/api folder called graphql.js. For now, its contents will be:

// pages/api/graphql.js
export default (_req, res) => {
  res.end("GraphQL!");
};

Done! Just joking… if only it were that easy! The above code will simply respond with the text “GraphQL!”, but with this setup we could respond with any JSON that we wanted, reading query params, headers, etc… from the req (request) object.
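
For illustration, here is a hypothetical sketch of an API route that reads a query param and responds with JSON, using the res.status and res.json helpers Next.js provides (the name param and greeting shape are made up for this example):

// pages/api/hello.js (a hypothetical route, separate from our GraphQL endpoint)
export default (req, res) => {
  // e.g. GET /api/hello?name=Leigh responds with { "greeting": "Hello, Leigh!" }
  const { name = "world" } = req.query;
  res.status(200).json({ greeting: `Hello, ${name}!` });
};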

What we want to produce

At the end of this example, we want to be able to perform the following query of albums and artists, loaded efficiently from our Postgres database:

{
  albums(first: 5) {
    id
    name
    year
    artist {
      id
      name
    }
  }
}

Producing output which might resemble:

{
  "data": {
    "albums": [
      {
        "id": "1",
        "name": "Turn It Around",
        "year": "2003",
        "artist": {
          "id": "1",
          "name": "Comeback Kid"
        }
      },
      {
        "id": "2",
        "name": "Wake the Dead",
        "year": "2005",
        "artist": {
          "id": "1",
          "name": "Comeback Kid"
        }
      }
    ]
  }
}

Basic GraphQL setup

Setting up a GraphQL server involves four steps:

  1. Defining type definitions which describe the GraphQL schema
  2. Creating resolvers: The ability to generate a response to a query or mutation
  3. Creating an Apollo Server
  4. Creating a handler that will tie things into the Next.js API request and response lifecycle

After importing the gql function from apollo-server-micro (install it first with yarn add apollo-server-micro), we can define our type definitions, describing the schema of our GraphQL server. Eventually we’ll expand on this, but for now we have a single field we can query called hello that responds with a String.

import { ApolloServer, gql } from "apollo-server-micro";

const typeDefs = gql`
  type Query {
    hello: String!
  }
`;

With our schema defined, we need to write the code that will enable our server to respond to queries and mutations. These are called resolvers, and each field (such as hello) requires a function that will produce some result. The result of the resolver functions must line up with the types defined above.

The arguments each resolver function receives are:

  • parent: This is typically ignored at the Query (topmost) level, but will be used as we eventually tackle albums and artists
  • arguments: In the first example, which included albums(first: 5), this would arrive at our resolver function as {first: 5}, allowing us to access the field’s arguments
  • context: Context is global state, such as who the authenticated user is, or in our case, the global instance of DataLoader

const resolvers = {
  Query: {
    hello: (_parent, _args, _context) => "Hello!"
  }
};

Passing the typeDefs and resolvers to a new instance of ApolloServer gets us up and running:

const apolloServer = new ApolloServer({
  typeDefs,
  resolvers,
  context: () => {
    return {};
  }
});

From the apolloServer we can access a handler, which is in charge of the request and response lifecycle. We also need to export a config object that stops Next.js from parsing the body of incoming HTTP requests, which the GraphQL handler requires in order to work correctly:

const handler = apolloServer.createHandler({ path: "/api/graphql" });

export const config = {
  api: {
    bodyParser: false
  }
};

export default handler;
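
With the dev server running (yarn dev), a quick way to smoke-test the endpoint is a fetch call from the browser console; assuming the setup above, something like this should log back our hello field:

fetch("/api/graphql", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ query: "{ hello }" })
})
  .then(res => res.json())
  .then(console.log); // => { data: { hello: "Hello!" } }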

Adding CORS support

If we would like to allow or limit cross-origin requests, we can add the micro-cors package (yarn add micro-cors) to handle CORS for us:

import Cors from "micro-cors";

const cors = Cors({
  allowMethods: ["POST", "OPTIONS"]
});

export default cors(handler);

In this case, I have limited the cross-origin HTTP methods to POST and OPTIONS, changing the default export to have our handler passed to the cors function.

Dynamic data with Postgres and Knex

Hard-coding data can get boring… it’s time to load it from the database! There’s a tiny bit of setup to get this up and running. First, install the required packages: yarn add knex pg.

Create a knexfile.js file, configuring Knex so it knows how to talk to our database. In this setup, an ENV variable is required in order to know how to connect to the database. If you are using Now along with Next.js, I have an article which talks about setting up secrets. My local ENV variable looks something like PG_CONNECTION_STRING="postgres://user@localhost:5432/next-graphql" (substitute your own Postgres user and database):

// knexfile.js
module.exports = {
  development: {
    client: "postgresql",
    connection: process.env.PG_CONNECTION_STRING,
    migrations: {
      tableName: "knex_migrations"
    }
  },

  production: {
    client: "postgresql",
    connection: process.env.PG_CONNECTION_STRING,
    migrations: {
      tableName: "knex_migrations"
    }
  }
};

Next, we are able to create database migrations to set up our artists and albums tables. Empty migration files are created with the command yarn run knex migrate:make create_artists (and a similar one for albums). The migration for artists looks like this:

exports.up = function(knex) {
  return knex.schema.createTable("artists", function(table) {
    table.increments("id");
    table.string("name", 255).notNullable();
    table.string("url", 255).notNullable();
  });
};

exports.down = function(knex) {
  return knex.schema.dropTable("artists");
};

And the migration for albums looks like this:

exports.up = function(knex) {
  return knex.schema.createTable("albums", function(table) {
    table.increments("id");
    table.integer("artist_id").notNullable();
    table.string("name", 255).notNullable();
    table.string("year").notNullable();

    table.index("artist_id");
    table.index("name");
  });
};

exports.down = function(knex) {
  return knex.schema.dropTable("albums");
};

With our tables in place (run the migrations with yarn run knex migrate:latest), I ran the following insert statements in Postico to set up a few dummy records:

INSERT INTO artists("name", "url") VALUES('Comeback Kid', 'http://comeback-kid.com/');
INSERT INTO albums("artist_id", "name", "year") VALUES(1, 'Turn It Around', '2003');
INSERT INTO albums("artist_id", "name", "year") VALUES(1, 'Wake the Dead', '2005');

The last step before updating our GraphQL API to load data from the database is to create a connection to our DB within the graphql.js file.

import knex from "knex";

const db = knex({
  client: "pg",
  connection: process.env.PG_CONNECTION_STRING
});

New definitions and resolvers

Let’s remove our hello query and resolvers, replacing them with definitions for loading albums and artists from the database:

const typeDefs = gql`
  type Query {
    albums(first: Int = 25, skip: Int = 0): [Album!]!
  }

  type Artist {
    id: ID!
    name: String!
    url: String!
    albums(first: Int = 25, skip: Int = 0): [Album!]!
  }

  type Album {
    id: ID!
    name: String!
    year: String!
    artist: Artist!
  }
`;

const resolvers = {
  Query: {
    albums: (_parent, args, _context) => {
      return db
        .select("*")
        .from("albums")
        .orderBy("year", "asc")
        // cap the page size at 50, regardless of what the client asks for
        .limit(Math.min(args.first, 50))
        .offset(args.skip);
    }
  },

  Album: {
    id: (album, _args, _context) => album.id,
    artist: (album, _args, _context) => {
      return db
        .select("*")
        .from("artists")
        .where({ id: album.artist_id })
        .first();
    }
  },

  Artist: {
    id: (artist, _args, _context) => artist.id,
    albums: (artist, args, _context) => {
      return db
        .select("*")
        .from("albums")
        .where({ artist_id: artist.id })
        .orderBy("year", "asc")
        .limit(Math.min(args.first, 50))
        .offset(args.skip);
    }
  }
};

You’ll have noticed that I didn’t define every single field for the Album and Artist resolvers. If a field simply reads an attribute of the same name from its parent object, you can skip defining a resolver for it; this is why Artist doesn’t have a name resolver, for example. To be honest, we could remove the id resolvers as well!
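
For illustration, the default resolver behaves roughly like this (a simplified sketch, not the actual graphql-js implementation):

// Any field without an explicit resolver falls back to reading
// the property of the same name off the parent object
const defaultResolver = fieldName => parent => parent[fieldName];

// So omitting Artist's name resolver is equivalent to writing:
// Artist: { name: artist => artist.name }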

Avoiding N+1 queries with DataLoader

There is a hidden problem with the above resolvers… specifically, loading the artist for each album. This SQL query gets run for each album, meaning that if you have fifty albums to display, you will have to perform fifty additional SQL queries to load each album’s artist. Marc-André Giroux has a great article on this problem, and we’re going to discover how to solve it right now!

The first step is to define a loader. The purpose of a loader is to pool up IDs (of artists, in our case) and load them all at once in a single batch, rather than one at a time:

import DataLoader from "dataloader";

const loader = {
  artist: new DataLoader(ids =>
    db
      .table("artists")
      .whereIn("id", ids)
      .select()
      // DataLoader expects results in the same order as the incoming ids,
      // so map over the ids rather than returning the rows directly
      .then(rows => ids.map(id => rows.find(row => row.id === id)))
  )
};

We need to pass our loader to our GraphQL resolvers, which can be done via context. One caveat worth knowing: DataLoader caches every id it loads, so in a production app you would typically construct a fresh loader inside the context function on each request; a shared instance is fine for this example.

const apolloServer = new ApolloServer({
  typeDefs,
  resolvers,
  context: () => {
    return { loader };
  }
});

This allows us to update our album resolver to utilize the DataLoader:

const resolvers = {
  //...
  Album: {
    id: (album, _args, _context) => album.id,
    artist: (album, _args, { loader }) => {
      return loader.artist.load(album.artist_id);
    }
  }
  //...
};

The end result is a single query to the database (a WHERE id IN (...) under the hood) to load all the artists at once… N+1 problem solved!

Conclusion

In this article, we were able to create a GraphQL server with CORS support, loading data from Postgres, and stomping out N+1 performance issues using DataLoader. Not bad for a day’s work! The next step might involve adding mutations along with some authentication, enabling users to create and modify data with the correct permissions. As you can see, Next.js is no longer just for the frontend. It has first-class support for server endpoints and is the perfect place to put your GraphQL API.

Source code: https://github.com/leighhalliday/nextjs-graphql-example

JavaScript developers: should you be using Web Workers?

Do you think JavaScript developers should be making more use of Web Workers to shift execution off of the main thread?

Originally published by David Gilbertson at https://medium.com

So, Web Workers. Those wonderful little critters that allow us to execute JavaScript off the main thread.

Also known as “no, you’re thinking of Service Workers”.

Photo by Caleb Jones on Unsplash

Before I get into the meat of the article, please sit for a lesson in how computers work:

Understood? Good.

For the red/green colourblind, let me explain. While a CPU is doing one thing, it can’t be doing another thing, which means you can’t sort a big array while a user scrolls the screen.

This is bad, if you have a big array and users with fingers.

Enter, Web Workers. These split open the atomic concept of a ‘CPU’ and allow us to think in terms of threads. We can use one thread to handle user-facing work like touch events and rendering the UI, and different threads to carry out all other work.
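
As a concrete sketch of the mechanics (the file name and the sorting example are mine, not from the original diagrams): the main thread posts data to a worker, the worker does the heavy lifting, and the result comes back as a message:

// sort-worker.js: runs on its own thread
self.onmessage = event => {
  const sorted = [...event.data].sort((a, b) => a - b); // the heavy work
  self.postMessage(sorted);
};

// main thread: stays free to handle scrolling and taps
const worker = new Worker("sort-worker.js");
worker.onmessage = event => console.log("sorted!", event.data);
worker.postMessage(bigArray); // bigArray: whatever large dataset you need sorted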

Check that out, the main thread is green the whole way through, ready to receive and respond to the gentle caress of a user.

You’re excited (I can tell): if we only have UI code on the main thread and all other code can go in a worker, things are going to be amazing (said the way Oprah would say it).

But cool your jets for just a moment, because websites are mostly about the UI — it’s why we have screens. And a lot of a user’s interactions with your site will be tapping on the screen, waiting for a response, reading, tapping, looking, reading, and so on.

So we can’t just say “here’s some JS that takes 20ms to run, chuck it on a thread”, we must think about where that execution time exists in the user’s world of tap, read, look, read, tap…

I like to boil this down to one specific question:

Is the user waiting anyway?

Imagine we have created some sort of git-repository-hosting website that shows all sorts of things about a repository. We have a cool feature called ‘issues’. A user can even click an ‘issues’ tab in our website to see a list of all issues relating to the repository. Groundbreaking!

When our users click this issues tab, the site is going to fetch the issue data, process it in some way — perhaps sort, or format dates, or work out which icon to show — then render the UI.

Inside the user’s computer, that’ll look exactly like this.

Look at that processing stage, locking up the main thread even though it has nothing to do with the UI! That’s terrible, in theory.

But think about what the human is actually doing at this point. They’re waiting for the common trio of network/process/render; just sittin’ around with less to do than the Bolivian Navy.

Because we care about our users, we show a loading indicator to let them know we’ve received their request and are working on it — putting the human in a ‘waiting’ state. Let’s add that to the diagram.

Now that we have a human in the picture, we can mix in a Web Worker and think about the impact it will have on their life:

Hmmm.

First thing to note is that we’re not doing anything in parallel. We need the data from the network before we process it, and we need to process the data before we can render the UI. The elapsed time doesn’t change.

(BTW, the time involved in moving data to a Web Worker and back is negligible: 1ms per 100 KB is a decent rule of thumb.)

So we can move work off the main thread and have a page that is responsive during that time, but to what end? If our user is sitting there looking at a spinner for 600ms, have we enriched their experience by having a responsive screen for the middle third?

No.

I’ve fudged these diagrams a little bit to make them the gorgeous specimens of graphic design that they are, but they’re not really to scale.

When responding to a user request, you’ll find that the network and DOM-manipulating part of any given task take much, much longer than the pure-JS data processing part.

I saw an article recently making the case that updating a Redux store was a good candidate for Web Workers because it’s not UI work (and non-UI work doesn’t belong on the main thread).

Chucking the data processing over to a worker thread sounds sensible, but the idea struck me as a little, umm, academic.

First, let’s split instances of ‘updating a store’ into two categories:

  1. Updating a store in response to a user interaction, then updating the UI in response to the data change
  2. Not that first one

In the first scenario, a user taps a button on the screen — perhaps to change the sort order of a list. The store updates, and this results in a re-rendering of the DOM (since that’s the point of a store).

Let me just delete one thing from the previous diagram:

In my experience, it is rare that the store-updating step goes beyond a few dozen milliseconds, and is generally followed by ten times that in DOM updating, layout, and paint. If I’ve got a site that’s taking longer than this, I’d be asking questions about why I have so much data in the browser and so much DOM, rather than on which thread I should do my processing.

So the question we’re faced with is the same one from above: the user tapped something on the screen, we’re going to work on that request for hopefully less than a second, why would we want to make the screen responsive during that time?

OK what about the second scenario, where a store update isn’t in response to a user interaction? Performing an auto-save, for example — there’s nothing more annoying than an app becoming unresponsive doing something you didn’t ask it to do.

Actually there’s heaps of things more annoying than that. Teens, for example.

Anyhoo, if you’re doing an auto-save and taking 100ms to process data client-side before sending it off to a server, then you should absolutely use a Web Worker.

In fact, any ‘background’ task that the user hasn’t asked for, or isn’t waiting for, is a good candidate for moving to a Web Worker.

The matter of value

Complexity is expensive, and implementing Web Workers ain’t cheap.

If you’re using a bundler — and you are — you’ll have a lot of reading to do, and probably npm packages to install. If you’ve got a create-react-app app, prepare to eject (and put aside two days twice a year to update 30 different packages when the next version of Babel/Redux/React/ESLint comes out).

Also, if you want to share anything fancier than plain data between a worker and the main thread you’ve got some more reading to do (comlink is your friend).
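
For a taste of what comlink buys you (a rough sketch of its style, assuming a bundler that can load workers): it wraps the postMessage plumbing in proxies, so calling into a worker looks like an ordinary async function call:

// worker.js: expose an object whose methods the main thread can call
import * as Comlink from "comlink";

Comlink.expose({
  // a hypothetical expensive function
  heavyCalculation(input) {
    return input * 2;
  }
});

// main.js: every call returns a Promise, since it crosses threads
import * as Comlink from "comlink";

const api = Comlink.wrap(new Worker("./worker.js"));
api.heavyCalculation(21).then(result => console.log(result)); // 42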

What I’m getting at is this: if the benefit is real, but minimal, then you’ve gotta ask if there’s something else you could spend a day or two on with a greater benefit to your users.

This thinking is true of everything, of course, but I’ve found that Web Workers have a particularly poor benefit-to-effort ratio.

Hey David, why you hate Web Workers so bad?

Good question.

This is a doweling jig:

I own a doweling jig. I love my doweling jig. If I need to drill a hole into the end of a piece of wood and ensure that it’s perfectly perpendicular to the surface, I use my doweling jig.

But I don’t use it to eat breakfast. For that I use a spoon.

Four years ago I was working on some fancy animations. They looked slick on a fast device, but janky on a slow one. So I wrote fireball-js, which executes a rudimentary performance benchmark on the user’s device and returns a score, allowing me to run my animations only on devices that would render them smoothly.

Where’s the best spot to run some CPU intensive code that the user didn’t request? On a different thread, of course. A Web Worker was the correct tool for the job.

Fast forward to 2019 and you’ll find me writing a routing algorithm for a mapping application. This requires parsing a big fat GeoJSON map into a collection of nodes and edges, to be used when a user asks for directions. The processing isn’t in response to a user request and the user isn’t waiting on it. And so, a Web Worker is the correct tool for the job.

It was only when doing this that it dawned on me: in the intervening quartet of years, I have seen exactly zero other instances where Web Workers would have improved the user experience.

Contrast this with a recent resurgence in Web Worker wonderment, and combine that contrast with the fact that I couldn’t think of anything else to write about, then concatenate that combined contrast with my contrarian character and you’ve got yourself a blog post telling you that maybe Web Workers are a teeny-tiny bit overhyped.

Thanks for reading
