Originally published by Bogdan Nechyporenko at dzone.com

The goal of my article is to share my experience of configuring Nest.js, GraphQL, and Apollo together. I have extensive experience with some of this software, such as React, Ramda, Gulp, and Webpack. Others, such as TypeScript and GraphQL, I've touched on a bit: I know the context and the ideas behind them, but I don't have much hands-on experience. And others still, such as Apollo and Material UI, I have almost no experience with.

The goal of this project was two-fold. I wanted to familiarize myself with software I hadn't previously worked with extensively, and to create an open-source, highly customizable WebStore (in terms of UI, services, databases, integration layers, localization, etc.); as a result, there shouldn't be any obstacle to making individual modules work with a completely different ecosystem.

My plan is to publish periodic articles describing new features, how they were built, and what kind of architecture was used. I'll use branching, so you can always check out the corresponding state of the project and be confident that the content of each article matches the source code on GitHub. For this article, you need to switch to the 0.1.x-maintenance branch.

As an initial design, I took an amazing material dashboard from Creative Tim's website.

Technology-wise, I used:

  • TypeScript for the frontend and backend.
  • Nest.js for the backend.
  • GraphQL for API communication between the frontend and backend, with type-graphql as a facilitator.
  • MySQL for initial database storage.
  • TypeORM as the ORM.
  • React for the frontend.
  • Apollo for the GraphQL client.
  • Material UI components as the basis of the design.
  • Gulp as a build system tool.
  • Webpack to create the frontend bundle.
  • Jest for unit testing.
  • Cypress for integration testing.
  • Lerna to manage the mono repository.
  • Circle CI for continuous integration.

The Project

Circle CI

To check the project on each commit, I decided to use Circle CI. 


  1. Create a .circleci folder in the root of the project.
  2. Create a config.yml file in the .circleci folder.

In config.yml, we use Docker base images on which we can install extra modules. In CircleCI, you can specify several Docker images, and it behaves the same way as Docker Compose, which is really handy. Here, a Node Docker image is chosen so it's possible to work with npm for the frontend, along with a MySQL image with a preconfigured database.

      - image: circleci/node:12
      - image: circleci/mysql:5.7
        environment:
          MYSQL_ROOT_PASSWORD: zebra
          MYSQL_DATABASE: zebra
          MYSQL_USER: zebra
          MYSQL_PASSWORD: zebra

When the Docker images are downloaded, you can configure your own custom steps and the job names you'd like to run.
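As a sketch of how those pieces fit together (the job name and the exact steps here are illustrative, not copied from the project), a minimal config.yml combining both images with a couple of steps might look like this:

```yaml
version: 2
jobs:
  build:
    docker:
      - image: circleci/node:12
      - image: circleci/mysql:5.7
        environment:
          MYSQL_ROOT_PASSWORD: zebra
          MYSQL_DATABASE: zebra
          MYSQL_USER: zebra
          MYSQL_PASSWORD: zebra
    steps:
      - checkout          # pull the source code
      - run: yarn install # install all dependencies
      - run: yarn test    # run the test suite
```

Both containers share a network, so tests in the Node container can reach MySQL on localhost, much like with Docker Compose.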


Gulp is used to create a seamless flow of consecutive commands; some of them can run in parallel, others sequentially.

To run the backend server, you need to execute the command: gulp nest-server

To run the frontend server, you need to execute the command: gulp dev-server  

If you have a look at the e2e.js file, you'll see which commands are run before the e2e tests start on Cypress.

gulp nest-server runs npm run start and waits until the server is up and running by pinging the URL http://localhost:3333/ping every five seconds. If the URL is still not accessible after the fifth attempt, the process is killed.

gulp dev-server runs a webpack client, which builds the frontend source code into a bundle, starts a dev server based on Express.js with a configured proxy to the backend server, and makes the frontend page accessible at http://localhost:6517.
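The ping-and-retry behavior of gulp nest-server can be sketched like this (the function and parameter names are illustrative, not the project's actual helpers):

```typescript
// Ping the server every few seconds and give up after a fixed number of attempts.
async function waitForServer(
    ping: () => Promise<boolean>, // e.g. an HTTP GET to http://localhost:3333/ping
    attempts: number = 5,
    intervalMs: number = 5000
): Promise<boolean> {
    for (let i = 0; i < attempts; i++) {
        if (await ping()) {
            return true; // the server answered, we can proceed
        }
        // wait before the next attempt
        await new Promise(resolve => setTimeout(resolve, intervalMs));
    }
    return false; // at this point the caller kills the process
}
```

The same helper can be reused by the e2e flow to make sure both servers are up before Cypress starts.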


To be able to run npm install  or yarn in different folders, I used Lerna.

You can find the config file for Lerna in the root of the project, in the file named lerna.json:

{
  "npmClient": "yarn", // changes the default NPM client to Yarn
  "version": "independent", // allows us to publish each package with an independent version
  "parallel": true, // allows us to build packages in parallel
  "stream": true, // stream output from child processes immediately, prefixed with the originating package name
  "packages": [
    "src/**" // glob to match directories with package.json, which afterwards will be treated as packages
  ],
  "useWorkspaces": true, // enables integration with Yarn Workspaces
  "workspaces": ["src/*"] // specifying the sources of workspaces
}


To create a bundle for the frontend, we use Webpack. The configuration is located in gulp/tasks/webpack.js. There are two configurations: one for development and another for the production environment. The biggest difference is that for production, the code is minimized and the webpack dev server client is not included; that client is used only in development to provide hot reloading.
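The switch between the two configurations can be sketched as follows; this is a hypothetical shape, and the real gulp/tasks/webpack.js may structure its options differently:

```typescript
// Build a webpack-style config object for either environment.
function makeConfig(isProduction: boolean) {
    return {
        mode: isProduction ? 'production' : 'development',
        // the bundle is minimized only for production
        optimization: {minimize: isProduction},
        // the dev server (hot reloading, proxy to the backend) exists only in development
        devServer: isProduction ? undefined : {
            port: 6517,
            proxy: {'/graphql': 'http://localhost:3333'}
        }
    };
}
```

Keeping the shared parts in one function and branching only on the few differing options avoids maintaining two diverging config files.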


As the backend server, I used Nest.js. It provides functionality that boosts development speed. The Nest server is started with a command defined in the package.json residing in the server folder: "ts-node -r tsconfig-paths/register src/main.ts"

Let's have a look at the main.ts file:

import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';

async function bootstrap() {
    const app = await NestFactory.create(AppModule);
    await app.listen(3333);
}
bootstrap();
We create the application from the main module and wait until it starts listening on the defined port. This is the entry point to the backend of the application.

import {Module} from '@nestjs/common';
import {GraphQLModule} from '@nestjs/graphql';
import {ProductModule} from './modules/products/product-module';

@Module({
    imports: [
        ProductModule,
        GraphQLModule.forRoot({
            installSubscriptionHandlers: true,
            autoSchemaFile: 'schema.gql'
        })
    ]
})
export class AppModule {}

On that level, we define the module that works with products and the name of the generated GraphQL schema file for the entire application. Each specific part of the database should be placed in its own module; then your application is granular enough to be split into logical parts, and after a while you can replace or remove outdated parts with less hassle. At this point in time, the current implementation contains only one module: ProductModule.

import {Module} from '@nestjs/common';
import {DatabaseModule} from '../database/database.module';
import {productProviders} from './product-providers';
import {ProductService} from './product-service';
import {ProductResolver} from './product-resolver';

@Module({
    imports: [DatabaseModule],
    providers: [
        ...productProviders, // creates a connection to a product repository
        ProductService, // creates a transformation between DB entities and GraphQL objects
        ProductResolver // creates a layer to communicate with the database
    ]
})
export class ProductModule {}
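To make the ProductService responsibility concrete, here is a minimal sketch (with hypothetical interface names, not the project's actual types) of the kind of transformation it performs between a database entity and the GraphQL object:

```typescript
// Shapes standing in for the TypeORM entity and the type-graphql object.
interface ProductRow { id: number; name: string; description: string; price: number; }
interface ProductDto { id: string; name: string; description: string; price: number; }

function toGraphQLProduct(entity: ProductRow): ProductDto {
    return {
        id: String(entity.id), // the GraphQL ID scalar is serialized as a string
        name: entity.name,
        description: entity.description,
        price: entity.price
    };
}
```

Keeping this mapping in the service layer means the resolver never sees raw database rows.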


To avoid mapping every object field to a database table manually, I decided to use an ORM solution, TypeORM. It supports a number of databases, which is quite handy. The initial implementation is done for MySQL, but with a little effort it can be changed to another database. In a later stage, I'd like to demonstrate how to connect to other databases.

import {Entity, PrimaryGeneratedColumn, Column} from 'typeorm';

@Entity()
export class ProductEntity {
    @PrimaryGeneratedColumn()
    id: number;
    @Column()
    name: string;
    @Column()
    description: string;
    @Column()
    price: number;
}

As you can see from this example, it's quite a simple and straightforward way of annotating entity fields to map them onto a database table.

To define the database implementation and all the required configuration, you'll need to create an ormconfig.json file in the root of the module.

{
  "type": "mysql",
  "host": "localhost",
  "port": 3306,
  "username": "zebra",
  "password": "zebra",
  "database": "zebra",
  "synchronize": true,
  "logging": false,
  "entities": ["src/modules/**/entity/*.ts"],
  "migrations": ["src/modules/**/migration/*.ts"],
  "subscribers": ["src/modules/**/subscriber/*.ts"],
  "cli": {
    "entitiesDir": "src/modules/**/entity",
    "migrationsDir": "src/modules/**/migration",
    "subscribersDir": "src/modules/**/subscriber"
  }
}

Apart from the database credentials, you define which folders have to be scanned for entities, migrations, subscribers, etc. That's done with performance in mind, especially once the project grows to a monstrous size.


GraphQL gives us a different way for the frontend and backend to communicate. Additionally, when you have a complicated graph of objects, GraphQL allows you to specify exactly which objects and which fields you are interested in; in the REST approach, you get the full object or the full graph of objects.
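For example, with the product schema shown later in this article, a client can ask for just two of the four Product fields:

```graphql
# Only name and price travel over the wire; a REST endpoint would typically
# return the whole Product object.
{
  products {
    name
    price
  }
}
```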

As GraphQL is based on strong typing, you have to declare input/output types explicitly. Based on the objects (product.ts, productInput.ts) annotated by the type-graphql library and the mutations defined in product-resolver.ts, the GraphQL schema (schema.gql) is generated at compilation time:

# -----------------------------------------------
# -----------------------------------------------

type Mutation {
  addProduct(product: ProductInput!): Product!
  removeProduct(name: String!): Product!
}

type Product {
  id: ID!
  name: String!
  description: String
  price: Float!
}

input ProductInput {
  name: String!
  description: String
  price: Float!
}

type Query {
  product(name: String!): Product

  """Get all the products from around the world"""
  products: [Product!]!
}

In this project, I decided to use the entities as the source of truth and generate the schema from them, but you can also configure things the other way round and define the schema manually. That might be useful in cases where you don't want to rely on an auto-generation tool.


For the frontend, I decided to create a single section, as it's quite hard to separate each topic cleanly. As I have experience with React and really love it, I'm using it in this project. Previously, I hadn't worked with TypeScript; on my current project at work I really don't see any problem with using pure JavaScript, especially since ES6 appeared, and replacing a working solution without a significant reason is never a good idea.

Let's have a look at the entry point of the frontend side: 

import historyService from './history';
import React from 'react';
import ReactDOM from 'react-dom';
import {AppContainer} from 'react-hot-loader';
import {Provider} from 'react-redux';
import {ApolloClient} from 'apollo-client';
import {InMemoryCache} from 'apollo-cache-inmemory';
import {HttpLink} from 'apollo-link-http';
import {ApolloProvider} from 'react-apollo';
import {Redirect, Route, Switch} from 'react-router';
import {ConnectedRouter} from 'connected-react-router';
import {store} from './plumbing';
import Admin from './components/Admin';

const cache = new InMemoryCache();
const link = new HttpLink({
    uri: 'http://localhost:3333/graphql'
});
const client = new ApolloClient({
    cache,
    link
});

ReactDOM.render(
    <AppContainer>
        {/* applying the Apollo Client */}
        <ApolloProvider client={client}>
            {/* applying the Redux store */}
            <Provider store={store}>
                {/* applying React Redux routing */}
                <ConnectedRouter history={historyService.history} key={Math.random()}>
                    <div className="app-root">
                        {/* defining the application routes */}
                        <Switch>
                            <Route path="/admin" component={Admin}/>
                            <Redirect from="/" to="/admin/table"/>
                        </Switch>
                    </div>
                </ConnectedRouter>
            </Provider>
        </ApolloProvider>
    </AppContainer>,
    document.getElementById('root')
);

As we are using GraphQL, we have to somehow read that format on the frontend and map it to React elements. To solve that, we're using the Apollo Client; in order to use it, you have to connect it to the exposed GraphQL schema, which is done by the HttpLink configured above. The connection, at this point, is not secured; that's why knowing the link address is enough. In the next stage, I'll walk through a few ways to make the connection secure.

Another thing you have to configure is where to keep the cache. Currently, I'm using the simplest solution: keeping the cache in memory via InMemoryCache.

The render point for the React application is the ReactDOM.render call; document.getElementById('root') attaches the component tree to the element with id="root" in the index.html file.

Currently, Redux is used only for routing. As the project evolves, Redux will keep the internal state of components, so you can easily control a component from another component or define its state. That's a far more flexible and stable approach than keeping internal state inside the component itself, as is done now in Admin.tsx.

class Dashboard extends React.Component<Props, State> {
    static propTypes: { classes: Validator<NonNullable<object>> };

    constructor(props) {
        super(props);
        this.state = {mobileOpen: false}; // this part is better kept in the Redux store
    }
}

Let's have a look at how React components communicate with GraphQL, starting from the React component in  ProductItems.tsx.

export default compose(
    // @ts-ignore
    withApollo,
    graphql(GET_ALL_PRODUCTS, {name: 'allProducts'}),
    graphql(REMOVE_PRODUCT, {name: 'removeProduct'})
)(ProductItems);

Here is the important line: withApollo. It creates the connection with the Apollo Client. We then inject two functions into the properties, defined by GET_ALL_PRODUCTS and REMOVE_PRODUCT. If you open both of them, you'll find a name defined for each of these variables; for example, in GET_ALL_PRODUCTS you find the name allProducts, and that's why we destructure it from the properties in ProductItems:

function ProductItems(props) {
    const {allProducts, classes, removeProduct} = props;
    return (

Here is how the query itself is defined:

import {gql} from 'apollo-boost';
import {graphql} from 'react-apollo';

export const GET_ALL_PRODUCTS = gql`
  { products { id name description price } }
`;

export default graphql(GET_ALL_PRODUCTS, {name: 'allProducts'});

So, in essence, you provide a GraphQL query whose options contain at least a name for the UI connection.

Data modification works the same way, just with extra parameters. Let's have a look at add-product-mutation.ts:

import {gql} from 'apollo-boost';
import {graphql} from 'react-apollo';
import {GET_ALL_PRODUCTS} from '../queries/get-all-products';

const ADD_PRODUCT = gql`
  mutation AddProduct($product: ProductInput!) {
    addProduct(product: $product) { id name }
  }
`;

export default graphql(ADD_PRODUCT, {
    name: 'addProduct',
    options: () => ({
        refetchQueries: [{query: GET_ALL_PRODUCTS}]
    })
});

In the parameters, alongside a name, an options field is defined. It would work even without it, but watch what refetchQueries does here: it solves the caching problem. Once you add a new product, you need to update the table, and for that you need to tell the Apollo Client which queries have to be refetched. It doesn't do this automatically, in order not to degrade the performance of the application, so you, as a developer, have to decide where and when the data has to be refetched.
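As an illustration, a component handler could call the injected mutation like this (the handler name and the product values are hypothetical; the product shape follows the ProductInput type from the schema):

```typescript
// Calling the injected `addProduct` prop from a component handler. When the
// returned promise resolves, Apollo refetches GET_ALL_PRODUCTS and the
// product table updates.
type AddProductFn = (opts: {
    variables: {product: {name: string; description?: string; price: number}}
}) => Promise<unknown>;

function submitNewProduct(addProduct: AddProductFn) {
    return addProduct({
        variables: {product: {name: 'Zebra mug', description: 'A striped mug', price: 9.99}}
    });
}
```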


Test suites are missing so far, though CircleCI and Cypress are fully configured and working. As the article has become quite long, I chose to split that information into the next article, which will describe how you can test even this simple application.

Please leave your feedback in the comments section; I'd really appreciate hearing your opinion about the article and the code. 




How to Build a Webstore Using Modern Stack (Nest.js, GraphQL, Apollo) Part 1