Build Isomorphic Apps with Next.js

React’s virtual DOM makes it possible to render components anywhere, so rendering is not tightly coupled to the browser. Next.js builds on this idea and provides server-side rendering by default, using React components that render on both the client and the server.

In this post, we’ll build a simple timer component. It keeps track of elapsed minutes and seconds and updates once every second. The app is isomorphic, meaning it renders on both the client and the server, and the same logic is reused to load the component and fire updates. Next.js takes care of the universal rendering.

For example, to fire updates every second:

componentDidMount() {  
  this.intervalTimer = setInterval(() => this.increase(), 1000);
}

With Next.js, you are free to use whatever tools already work with React. For this demo, I’ll pick TypeScript, Redux, and Enzyme.

There is some plumbing required to get Redux and TypeScript working with Next.js. The App component, for example, needs a custom extension so the Redux store can be passed to a <Provider /> when the component loads. The App props need both the default set that comes with Next.js and the custom store, and this same store is what Next.js uses for universal rendering. To set up TypeScript, Next.js has a plugin, @zeit/next-typescript, that you can configure. We’ll forgo most of the plumbing to keep the sample code focused; feel free to check out the GitHub repo for the details.
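For orientation, the custom App roughly takes the shape below. This is a sketch rather than the repo’s exact code: withReduxStore is a hypothetical helper that creates the store once per request on the server and reuses it in the browser, and the import paths are made up.

// pages/_app.tsx
import App from 'next/app';
import React from 'react';
import { Provider } from 'react-redux';
import withReduxStore from '../lib/with-redux-store'; // hypothetical helper
import { ReduxAppProps } from '../store/types';       // defined in the TypeScript section below

class TimerApp extends App<ReduxAppProps> {
  render() {
    const { Component, pageProps, reduxStore } = this.props;

    return (
      <Provider store={reduxStore}>
        <Component {...pageProps} />
      </Provider>
    );
  }
}

export default withReduxStore(TimerApp);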

When the component is ready in Next.js, you render it through an index route. For example:

const IndexPage: React.FC = () => <Timer />;
export default IndexPage;

Next.js has a pages folder where each .tsx or .ts file becomes a route; the framework and React render the rest. The index page doesn’t concern itself with prop parameters, because those are encapsulated in a state container like Redux.

TypeScript

The timer needs a state object to keep track of the minutes and seconds. Dispatched actions will increase the timer in seconds, and the component props encapsulate both the current state and the action function. In TypeScript, we can model these concepts with type annotations. This lets the app scale as requirements grow, because a set of static types makes it easier to refactor and make changes at will.

For example:

import { AppProps } from 'next/app'; // the default App props type from Next.js
import { Store } from 'redux';

interface TimerState {
  seconds: number;
  minutes: number;
}

const INCREASE_SECONDS = 'INCREASE_SECONDS';

interface IncreaseTimerAction {  
  type: typeof INCREASE_SECONDS;
}

type TimerActionTypes = IncreaseTimerAction;

interface TimerProps {  
  timer: TimerState;
  increaseTimer: () => void;
}

interface ReduxAppProps extends AppProps {  
  reduxStore: Store;
}

Note that ReduxAppProps extends the default Next.js AppProps type; this is how the custom Redux store becomes part of the App’s props. For dispatched actions, we encapsulate all actions in a single type. If there are more actions, add them with a union type; in TypeScript, you do this with a pipe, for example Action1Type | Action2Type. This gives the reducer a single, exhaustive type to switch over rather than a loose collection of action shapes.
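To round out the types, the initial state and the action creator referenced later can be as small as this (a sketch; initialTimerState and increaseTimer are the names the reducer and the component props rely on):

const initialTimerState: TimerState = {
  seconds: 0,
  minutes: 0
};

const increaseTimer = (): IncreaseTimerAction => ({
  type: INCREASE_SECONDS
});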

Redux

For Redux, the biggest piece is the reducer. This is where dispatched actions go to produce the next state. The reducer receives the current state and the dispatched action; the initial state comes from a default parameter the first time it runs. We’ll use the types declared above to nail down the type contracts, which communicates intent for each action type and keeps the reducer manageable as more action types are added.

For example:

const reducer = (
  state = initialTimerState,
  action: TimerActionTypes
): TimerState => {
  switch (action.type) {
    case INCREASE_SECONDS: {
      // Roll the seconds over into a new minute at the 60-second mark.
      const isOverAMinute: boolean = state.seconds >= 59;

      return {
        seconds: isOverAMinute ? 0 : state.seconds + 1,
        minutes: isOverAMinute ? state.minutes + 1 : state.minutes
      };
    }

    default:
      return state;
  }
};

This handles rolling the seconds over into a new minute. Note that each call returns a new state object. Next.js can execute this code on both the client and the server; there is no special code necessary to make that work. Also note the type annotations after the colon, for example : TimerState, which tell TypeScript to check the types at compile time. You typically run the compiler with npm run type-check, and the type checker can run during a build to block any commits that break the contracts.
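As a sketch of the remaining wiring, the same reducer can back a store factory that both the server and the browser call (the initializeStore name is an assumption, and the repo may route this through a helper instead), while npm run type-check usually maps to tsc --noEmit in package.json:

import { createStore, Store } from 'redux';

// One store factory for both the server and the browser.
const initializeStore = (): Store<TimerState, TimerActionTypes> =>
  createStore(reducer, initialTimerState);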

Enzyme

The timer component shows the minutes and seconds separated by a colon, padding both with a zero when they are below ten, for example 00:00. A componentDidMount method starts the timer with a dispatched action. Enzyme can shallow render the component so Jest can verify how the timer renders.

For example:

it('pads minutes and seconds', () => {  
  const component = shallow(<TimerComponent
    timer={{seconds: 0, minutes: 0}}
    increaseTimer={() => {}} />);

  expect(component.find('p').text()).toEqual('00:00');
  component.unmount();
});

To clear the interval set by setInterval, be sure to unmount the component; this calls the componentWillUnmount method. Note the use of find to query the rendered output for a p tag. Think of Enzyme as the jQuery of testing React components. Changing the seconds prop into a string trips the type checker and throws a compiler error, which turns this unit test into sound code that does what the type contracts say it should.
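For reference, a bare-bones version of the component under test might look something like this. It is a sketch, not the repo’s exact code: the import paths are made up, and the connect call at the end is what lets the index page render <Timer /> with no props.

import React from 'react';
import { connect } from 'react-redux';
import { increaseTimer } from '../store/actions';        // hypothetical paths
import { TimerProps, TimerState } from '../store/types';

// Pad a value to two digits, for example 7 -> '07'.
const pad = (value: number): string => value.toString().padStart(2, '0');

class TimerComponent extends React.Component<TimerProps> {
  private intervalTimer?: ReturnType<typeof setInterval>;

  increase() {
    this.props.increaseTimer();
  }

  componentDidMount() {
    this.intervalTimer = setInterval(() => this.increase(), 1000);
  }

  componentWillUnmount() {
    if (this.intervalTimer) {
      clearInterval(this.intervalTimer);
    }
  }

  render() {
    const { minutes, seconds } = this.props.timer;

    return <p>{pad(minutes)}:{pad(seconds)}</p>;
  }
}

const Timer = connect(
  (state: TimerState) => ({ timer: state }), // assuming the timer reducer is the root reducer
  { increaseTimer }
)(TimerComponent);

export { TimerComponent, Timer };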

Jest works with TypeScript out of the box as of the latest version. Be sure to set the Enzyme adapter and check that it matches the React version, for example enzyme-adapter-react-16. Jest needs to know about this adapter through jest.config.js. Note that the component is isolated enough that it doesn’t depend on Redux or Next.js.
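A sketch of that configuration, assuming a jest.setup.js at the project root (the file names are an assumption, not necessarily what the repo uses):

// jest.config.js
module.exports = {
  setupFilesAfterEnv: ['<rootDir>/jest.setup.js']
};

// jest.setup.js
const Enzyme = require('enzyme');
const Adapter = require('enzyme-adapter-react-16');

Enzyme.configure({ adapter: new Adapter() });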

Conclusion

Working with Next.js is a lot like working with React. If you are familiar with React, then Next.js feels like home.

Next.js offers code splitting, filesystem-based routing, and hot code reloading out of the box. These are some very advanced features that work well with React components. TypeScript plays well with the rest of the tools and does not get in the way.

The app is isomorphic because it does universal rendering. This is super nice to have since it reduces load time in the browser: the initial page doesn’t have to wait on JavaScript before it can render.

JavaScript developers, should you be using Web Workers?

Do you think JavaScript developers should be making more use of Web Workers to shift execution off the main thread?

Originally published by David Gilbertson at https://medium.com

So, Web Workers. Those wonderful little critters that allow us to execute JavaScript off the main thread.

Also known as “no, you’re thinking of Service Workers”.


Before I get into the meat of the article, please sit for a lesson in how computers work:

Understood? Good.

For the red/green colourblind, let me explain. While a CPU is doing one thing, it can’t be doing another thing, which means you can’t sort a big array while a user scrolls the screen.

This is bad, if you have a big array and users with fingers.

Enter, Web Workers. These split open the atomic concept of a ‘CPU’ and allow us to think in terms of threads. We can use one thread to handle user-facing work like touch events and rendering the UI, and different threads to carry out all other work.

Check that out, the main thread is green the whole way through, ready to receive and respond to the gentle caress of a user.
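If you haven’t used the API before, the split looks roughly like this (a minimal sketch; sort-worker.js, bigArray, and renderList are placeholders):

// main.js: hand the heavy work to a worker so the main thread stays free
const worker = new Worker('sort-worker.js');

worker.postMessage(bigArray);       // data is structured-cloned over to the worker
worker.onmessage = (event) => {
  renderList(event.data);           // back on the main thread with the result
};

// sort-worker.js: runs on its own thread
self.onmessage = (event) => {
  const sorted = event.data.slice().sort((a, b) => a - b);
  self.postMessage(sorted);
};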

You’re excited (I can tell): if we only have UI code on the main thread and all other code can go in a worker, things are going to be amazing (said the way Oprah would say it).

But cool your jets for just a moment, because websites are mostly about the UI — it’s why we have screens. And a lot of a user’s interactions with your site will be tapping on the screen, waiting for a response, reading, tapping, looking, reading, and so on.

So we can’t just say “here’s some JS that takes 20ms to run, chuck it on a thread”, we must think about where that execution time exists in the user’s world of tap, read, look, read, tap…

I like to boil this down to one specific question:

Is the user waiting anyway?

Imagine we have created some sort of git-repository-hosting website that shows all sorts of things about a repository. We have a cool feature called ‘issues’. A user can even click an ‘issues’ tab in our website to see a list of all issues relating to the repository. Groundbreaking!

When our users click this issues tab, the site is going to fetch the issue data, process it in some way — perhaps sort, or format dates, or work out which icon to show — then render the UI.
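In code, that request is a strictly sequential chain, something like this (a sketch with made-up function names):

// Each step needs the previous step's output: network, then processing, then render.
fetch('/api/issues')
  .then((response) => response.json())
  .then((issues) => {
    const processed = sortAndFormatIssues(issues);  // hypothetical processing step
    renderIssueList(processed);                     // hypothetical DOM update
  });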

Inside the user’s computer, that’ll look exactly like this.

Look at that processing stage, locking up the main thread even though it has nothing to do with the UI! That’s terrible, in theory.

But think about what the human is actually doing at this point. They’re waiting for the common trio of network/process/render; just sittin’ around with less to do than the Bolivian Navy.

Because we care about our users, we show a loading indicator to let them know we’ve received their request and are working on it — putting the human in a ‘waiting’ state. Let’s add that to the diagram.

Now that we have a human in the picture, we can mix in a Web Worker and think about the impact it will have on their life:

Hmmm.

First thing to note is that we’re not doing anything in parallel. We need the data from the network before we process it, and we need to process the data before we can render the UI. The elapsed time doesn’t change.

(BTW, the time involved in moving data to a Web Worker and back is negligible: 1ms per 100 KB is a decent rule of thumb.)

So we can move work off the main thread and have a page that is responsive during that time, but to what end? If our user is sitting there looking at a spinner for 600ms, have we enriched their experience by having a responsive screen for the middle third?

No.

I’ve fudged these diagrams a little bit to make them the gorgeous specimens of graphic design that they are, but they’re not really to scale.

When responding to a user request, you’ll find that the network and DOM-manipulating part of any given task take much, much longer than the pure-JS data processing part.

I saw an article recently making the case that updating a Redux store was a good candidate for Web Workers because it’s not UI work (and non-UI work doesn’t belong on the main thread).

Chucking the data processing over to a worker thread sounds sensible, but the idea struck me as a little, umm, academic.

First, let’s split instances of ‘updating a store’ into two categories:

  1. Updating a store in response to a user interaction, then updating the UI in response to the data change
  2. Not that first one

In the first scenario, a user taps a button on the screen, perhaps to change the sort order of a list. The store updates, and this results in a re-rendering of the DOM (since that’s the point of a store).

Let me just delete one thing from the previous diagram:

In my experience, it is rare that the store-updating step goes beyond a few dozen milliseconds, and is generally followed by ten times that in DOM updating, layout, and paint. If I’ve got a site that’s taking longer than this, I’d be asking questions about why I have so much data in the browser and so much DOM, rather than on which thread I should do my processing.

So the question we’re faced with is the same one from above: the user tapped something on the screen, we’re going to work on that request for hopefully less than a second, why would we want to make the screen responsive during that time?

OK what about the second scenario, where a store update isn’t in response to a user interaction? Performing an auto-save, for example — there’s nothing more annoying than an app becoming unresponsive doing something you didn’t ask it to do.

Actually there’s heaps of things more annoying than that. Teens, for example.

Anyhoo, if you’re doing an auto-save and taking 100ms to process data client-side before sending it off to a server, then you should absolutely use a Web Worker.

In fact, any ‘background’ task that the user hasn’t asked for, or isn’t waiting for, is a good candidate for moving to a Web Worker.

The matter of value

Complexity is expensive, and implementing Web Workers ain’t cheap.

If you’re using a bundler — and you are — you’ll have a lot of reading to do, and probably npm packages to install. If you’ve got a create-react-app app, prepare to eject (and put aside two days twice a year to update 30 different packages when the next version of Babel/Redux/React/ESLint comes out).

Also, if you want to share anything fancier than plain data between a worker and the main thread you’ve got some more reading to do (comlink is your friend).
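For what it’s worth, the comlink flow looks roughly like this (a sketch; the store object, reduce function, and renderList are made up for illustration):

// store.worker.js: expose an object; comlink does the postMessage plumbing
import * as Comlink from 'comlink';

const store = {
  state: { items: [] },
  update(action) {
    this.state = reduce(this.state, action);   // hypothetical reducer
    return this.state;
  }
};

Comlink.expose(store);

// main.js: wrapped calls come back as promises
import * as Comlink from 'comlink';

const store = Comlink.wrap(new Worker('./store.worker.js', { type: 'module' }));

store.update({ type: 'SORT_LIST' }).then((nextState) => {
  renderList(nextState);                       // hypothetical UI update
});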

What I’m getting at is this: if the benefit is real, but minimal, then you’ve gotta ask if there’s something else you could spend a day or two on with a greater benefit to your users.

This thinking is true of everything, of course, but I’ve found that Web Workers have a particularly poor benefit-to-effort ratio.

Hey David, why you hate Web Workers so bad?

Good question.

This is a doweling jig:

I own a doweling jig. I love my doweling jig. If I need to drill a hole into the end of a piece of wood and ensure that it’s perfectly perpendicular to the surface, I use my doweling jig.

But I don’t use it to eat breakfast. For that I use a spoon.

Four years ago I was working on some fancy animations. They looked slick on a fast device, but janky on a slow one. So I wrote fireball-js, which executes a rudimentary performance benchmark on the user’s device and returns a score, allowing me to run my animations only on devices that would render them smoothly.

Where’s the best spot to run some CPU intensive code that the user didn’t request? On a different thread, of course. A Web Worker was the correct tool for the job.

Fast forward to 2019 and you’ll find me writing a routing algorithm for a mapping application. This requires parsing a big fat GeoJSON map into a collection of nodes and edges, to be used when a user asks for directions. The processing isn’t in response to a user request and the user isn’t waiting on it. And so, a Web Worker is the correct tool for the job.

It was only when doing this that it dawned on me: in the intervening quartet of years, I have seen exactly zero other instances where Web Workers would have improved the user experience.

Contrast this with a recent resurgence in Web Worker wonderment, and combine that contrast with the fact that I couldn’t think of anything else to write about, then concatenate that combined contrast with my contrarian character and you’ve got yourself a blog post telling you that maybe Web Workers are a teeny-tiny bit overhyped.

Thanks for reading
