Building a GraphQL API with Node, Express and MongoDB

Create a GraphQL API with Node.js, Mongoose, and Express. This quick-and-dirty tutorial is just the beginning of all the fun you can have using GraphQL to make your development stronger, cleaner, and more efficient.

GraphQL is a technology that helps developers build robust software more quickly. The ability to request all of the information you need in a single request is a game changer.

It has simplified the back-end development of APIs for consumption by mobile and web applications that would normally rely on RESTful APIs. A normal RESTful API may have several end points for various entities (e.g., users, submissions, etc.); with GraphQL, you can get all of this information in a single go using GraphQL’s query language, also known as GQL.
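For example, a single hypothetical query along these lines (the field names are invented purely for illustration) could return a user together with their submissions in one request:

{
    user(id: "1") {
        name
        submissions {
            title
            createdAt
        }
    }
}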

In this tutorial, I’ll walk you through how to build a GraphQL API with graphql-compose-mongoose, as well as a few other tools. And, of course, everything will be to ES6 spec using Node.js. If this sounds like an exciting adventure, read on.

Getting Started

To get started, we’ll need to double-check you have a few prerequisites to ensure both that you understand the technology and that you can complete the tutorial in full.

Prerequisites

  • Node.js (13.x or later)
  • Yarn (brew install yarn on macOS)
  • An understanding of JavaScript and the ES6 spec
  • An account with MongoDB Atlas or a local instance of MongoDB running
Directory Structure

To start, create a new directory.

You can name your directory whatever you would like; for this tutorial, we’re going to create a to-do application, so I called mine todo.

mkdir todo && cd todo

mkdir.sh

Next, let’s go ahead and generate our package.json file using Yarn. We’ll add modules, as necessary, as we continue to move forward.

yarn init

init.js

Note: Answer the questions as prompted. Nothing here is strictly required; set whatever you’d like as your defaults.

Because we are using ES6, we’ll need to transpile all code from ES6 to vanilla JavaScript. To do so, let’s go ahead and create a src directory. Note that we’ll also need to set up the required structure within the src directory. The script below will accomplish the following:

  • Make a src directory
  • Move into the src directory
  • Generate schema, models, scripts, and utils directories
mkdir src && cd src && mkdir models schema scripts utils

src.sh

Lastly, we’ll create an index.js file, which will allow us to import our dependent files and directories:

touch index.js

index.sh

Inside of index.js, place the following contents, and save:

import dotenv from 'dotenv';
import express from 'express';
import { ApolloServer } from 'apollo-server-express';

import mongoose from 'mongoose';

import './utils/db';
import schema from './schema';

dotenv.config();

const app = express();

const server = new ApolloServer({
    schema,
    cors: true,
    playground: process.env.NODE_ENV === 'development' ? true : false,
    introspection: true,
    tracing: true,
    path: '/',
});

server.applyMiddleware({
    app,
    path: '/',
    cors: true,
    onHealthCheck: () =>
        // eslint-disable-next-line no-undef
        new Promise((resolve, reject) => {
            if (mongoose.connection.readyState > 0) {
                resolve();
            } else {
                reject();
            }
        }),
});

app.listen({ port: process.env.PORT }, () => {
    console.log(`🚀 Server listening on port ${process.env.PORT}`);
    console.log(`😷 Health checks available at ${process.env.HEALTH_ENDPOINT}`);
});

index.js

Package File

Now that we have the base files in place, let’s go ahead and add the required production packages to our package.json file using Yarn, like so:

yarn add @babel/cli @babel/core @babel/node @babel/preset-env apollo-engine apollo-server-express body-parser cors dotenv express graphql graphql-compose graphql-compose-connection graphql-compose-mongoose graphql-middleware graphql-tools mongoose mongoose-bcrypt mongoose-timestamp

package.sh

And for development packages, add the following:

yarn add --dev babel-eslint babel-loader babel-preset-env eslint eslint-plugin-babel eslint-plugin-import eslint-plugin-node eslint-plugin-promise fs-extra nodemon prettier

package.sh

Now that we have the necessary packages installed, we can modify our package.json file to allow for additional functionality.

Let’s modify it to add scripts and hooks; the relevant sections are shown below.

Scripts

The below will allow us to run scripts via Yarn (e.g., yarn <INSERT SCRIPT HERE>). For example, we can lint our code using yarn lint, and it’ll perform ESLint and Prettier operations on our files.

"scripts": {
    "build": "babel src --out-dir dist",
    "start": "node dist/index.js",
    "dev": "nodemon --exec npx babel-node src/index.js",
    "prettier": "prettier --config ./.prettierrc --write \"**/*.js\"",
    "pretest": "eslint --ignore-path .gitignore .",
    "postinstall": "rm -rf dist && yarn run build",
    "lint": "yarn prettier --write --check --config ./.prettierrc \"**/*.js\" && eslint --fix ./src",
    "release": "release-it patch --no-npm.publish"
}

scripts.json

Similar to above, we’ll add a Husky script that will trigger on the pre-commit hook, effectively running yarn lint for us prior to committing code. (Husky itself needs to be installed as a dev dependency: yarn add --dev husky.)

This is an excellent practice for maintaining quality, clean code:

"husky": {
    "hooks": {
        "pre-commit": "yarn lint"
    }
}

husky.json

That’s all for scripts. Let’s continue on.

Configuring Babel, Prettier, and ESLint

We’ve taken the necessary steps to install the correct packages for Babel, Prettier, and ESLint.

Now, it’s time to add the configuration files to the root of your project. Move to the root, and add the following files:

.babelrc


{
    "presets": [
        [
            "env",
            {
                "targets": {
                    "node": "current"
                }
            }
        ]
    ]
}

.babelrc

.prettierrc

{
    "trailingComma": "es5",
    "tabWidth": 4,
    "semi": true,
    "singleQuote": true
}

.prettierrc

.eslintrc.json

{
    "plugins": ["babel"],
    "extends": ["eslint:recommended"],
    "rules": {
        "no-console": 0,
        "no-mixed-spaces-and-tabs": 1,
        "comma-dangle": 0,
        "no-unused-vars": 1,
        "eqeqeq": [2, "smart"],
        "no-useless-concat": 2,
        "default-case": 2,
        "no-self-compare": 2,
        "prefer-const": 2,
        "object-shorthand": 1,
        "array-callback-return": 2,
        "valid-typeof": 2,
        "arrow-body-style": 2,
        "require-await": 2,
        "react/prop-types": 0,
        "no-var": 2,
        "linebreak-style": [2, "unix"],
        "semi": [1, "always"]
    },
    "env": {
        "node": true
    },
    "parser": "babel-eslint",
    "parserOptions": {
        "sourceType": "module",
        "ecmaVersion": 2018,
        "ecmaFeatures": {
            "modules": true
        }
    }
}

.eslintrc.json

Perfect! We’re making progress.

Onto the next section.

Creating Our Models

The reason I enjoy working with graphql-compose-mongoose is that it allows me to use Mongoose models rather than writing GraphQL models by hand (which, by the way, can become quite cumbersome on a large application).

Head over to src/models, and create a new file named user.js. Inside this file, we’ll define all of the required characteristics that make up a user. This will be a small file, but feel free to add additional information to the user record if you wish (for example, a password using mongoose-bcrypt).

import mongoose, { Schema } from 'mongoose';
import timestamps from 'mongoose-timestamp';
import { composeWithMongoose } from 'graphql-compose-mongoose';

export const UserSchema = new Schema(
    {
        name: {
            type: String,
            trim: true,
            required: true,
        },
        email: {
            type: String,
            lowercase: true,
            trim: true,
            unique: true,
            required: true,
        },
    },
    {
        collection: 'users',
    }
);

UserSchema.plugin(timestamps);

UserSchema.index({ createdAt: 1, updatedAt: 1 });

export const User = mongoose.model('User', UserSchema);
export const UserTC = composeWithMongoose(User);

user.js

Next, let’s create a task.js file (given that this is, after all, a to-do GraphQL API):

import mongoose, { Schema } from 'mongoose';
import timestamps from 'mongoose-timestamp';
import { composeWithMongoose } from 'graphql-compose-mongoose';

export const TaskSchema = new Schema(
    {
        user: {
            type: Schema.Types.ObjectId,
            ref: 'User',
            required: true,
        },
        task: {
            type: String,
            trim: true,
            required: true,
        },
        description: {
            type: String,
            trim: true,
            required: true,
        },
    },
    {
        collection: 'tasks',
    }
);

TaskSchema.plugin(timestamps);

TaskSchema.index({ createdAt: 1, updatedAt: 1 });

export const Task = mongoose.model('Task', TaskSchema);
export const TaskTC = composeWithMongoose(Task);

task.js

We now have two models/schemas: UserSchema and TaskSchema.

A user is an individual entity, and a task always belongs to a user. From this, we will eventually be able to pull all tasks for a user in a single GraphQL call. Pretty cool, right?
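If you want that relationship to be traversable in GraphQL, graphql-compose lets you wire the two type composers together with addRelation. A minimal sketch, assuming it lives somewhere both composers are imported (for example, src/schema/index.js); the tasks field name is our choice:

import { UserTC } from '../models/user';
import { TaskTC } from '../models/task';

// Expose a user's tasks as a `tasks` field on the User type.
UserTC.addRelation('tasks', {
    resolver: () => TaskTC.getResolver('findMany'),
    prepareArgs: {
        // Filter tasks by the parent user's _id.
        filter: (source) => ({ user: source._id }),
    },
    projection: { _id: 1 }, // ensure the parent _id is always fetched
});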

Creating Our Schemas

Schemas are an interesting part of this implementation. They essentially allow us to define which calls can and cannot be made to the server.

Schemas are made up of queries and mutations, where queries allow you to fetch data, and mutations allow you to modify data. Let’s create our schemas for both the user and task model.

Inside of the schema directory, create a file called user.js. Then, drop the following contents into the file:

import { UserTC } from '../models/user';

const UserQuery = {
    userById: UserTC.getResolver('findById'),
    userByIds: UserTC.getResolver('findByIds'),
    userOne: UserTC.getResolver('findOne'),
    userMany: UserTC.getResolver('findMany'),
    userCount: UserTC.getResolver('count'),
    userConnection: UserTC.getResolver('connection'),
    userPagination: UserTC.getResolver('pagination'),
};

const UserMutation = {
    userCreateOne: UserTC.getResolver('createOne'),
    userCreateMany: UserTC.getResolver('createMany'),
    userUpdateById: UserTC.getResolver('updateById'),
    userUpdateOne: UserTC.getResolver('updateOne'),
    userUpdateMany: UserTC.getResolver('updateMany'),
    userRemoveById: UserTC.getResolver('removeById'),
    userRemoveOne: UserTC.getResolver('removeOne'),
    userRemoveMany: UserTC.getResolver('removeMany'),
};

export { UserQuery, UserMutation };

user.js

Next, let’s create one called task.js:

import { TaskTC } from '../models/task';

const TaskQuery = {
    taskById: TaskTC.getResolver('findById'),
    taskByIds: TaskTC.getResolver('findByIds'),
    taskOne: TaskTC.getResolver('findOne'),
    taskMany: TaskTC.getResolver('findMany'),
    taskCount: TaskTC.getResolver('count'),
    taskConnection: TaskTC.getResolver('connection'),
    taskPagination: TaskTC.getResolver('pagination'),
};

const TaskMutation = {
    taskCreateOne: TaskTC.getResolver('createOne'),
    taskCreateMany: TaskTC.getResolver('createMany'),
    taskUpdateById: TaskTC.getResolver('updateById'),
    taskUpdateOne: TaskTC.getResolver('updateOne'),
    taskUpdateMany: TaskTC.getResolver('updateMany'),
    taskRemoveById: TaskTC.getResolver('removeById'),
    taskRemoveOne: TaskTC.getResolver('removeOne'),
    taskRemoveMany: TaskTC.getResolver('removeMany'),
};

export { TaskQuery, TaskMutation };

task.js

To tie things together, we’ll generate an index.js file in the root of the directory (src/schema) and import our schemas:

import { SchemaComposer } from 'graphql-compose';

import db from '../utils/db'; // eslint-disable-line no-unused-vars

const schemaComposer = new SchemaComposer();

import { UserQuery, UserMutation } from './user';
import { TaskQuery, TaskMutation } from './task';

schemaComposer.Query.addFields({
    ...UserQuery,
    ...TaskQuery,
});

schemaComposer.Mutation.addFields({
    ...UserMutation,
    ...TaskMutation,
});

export default schemaComposer.buildSchema();

index.js

Now that we have full CRUD capabilities with GraphQL, let’s add our final utilities.

Build Script

The build script allows you to transform your Mongoose-style schemas into pure GraphQL schemas. Pretty fancy, huh?

Create a file called buildSchema.js inside of src/scripts, and drop the following code in:

import fs from 'fs-extra';
import path from 'path';
import { graphql } from 'graphql';
import { introspectionQuery, printSchema } from 'graphql/utilities';

import Schema from '../schema';

async function buildSchema() {
    await fs.ensureFile(path.join(__dirname, '../data/schema.graphql.json'));
    await fs.ensureFile(path.join(__dirname, '../data/schema.graphql'));

    fs.writeFileSync(
        path.join(__dirname, '../data/schema.graphql.json'),
        JSON.stringify(await graphql(Schema, introspectionQuery), null, 2)
    );

    fs.writeFileSync(
        path.join(__dirname, '../data/schema.graphql'),
        printSchema(Schema)
    );
}

async function run() {
    await buildSchema();
    console.log('Schema build complete!');
}

run().catch(e => {
    console.error(e);
    process.exit(1);
});

buildSchema.js

This script can be run with babel-node (for example, npx babel-node src/scripts/buildSchema.js) and will output the raw GraphQL schema files into a data directory.

Database Connectivity

What’s an API without a database? That’s why we’ll need to create a connection from Mongoose to MongoDB.

If you haven’t already created a .env file in the root directory, now’s the time to do so. You’ll want to ensure it has the following environment variables (HEALTH_ENDPOINT matches the path Apollo Server uses for its health check):

NODE_ENV=development
PORT=8000
HEALTH_ENDPOINT=/.well-known/apollo/server-health
MONGODB_URI=YOUR_MONGODB_URI

.env

Once your .env file’s in place, let’s go ahead and create another file inside of src/utils. Name the file db.js, and add the following contents:

import mongoose from 'mongoose';
import dotenv from 'dotenv';

dotenv.config();

mongoose.Promise = global.Promise;

const connection = mongoose.connect(process.env.MONGODB_URI, {
    autoIndex: true,
    reconnectTries: Number.MAX_VALUE,
    reconnectInterval: 500,
    poolSize: 50,
    bufferMaxEntries: 0,
    keepAlive: 120,
    useNewUrlParser: true,
});

mongoose.set('useCreateIndex', true);

connection
    .then(db => db)
    .catch(err => {
        console.log(err);
    });

export default connection;

db.js

Note: If you don’t have MongoDB up and running locally, MongoDB Atlas is a great alternative. Not only is it free, but it packs enough power on the free tier to run a development application without any issues. Check it out here.

The Playground

Your GraphQL API is now complete. Run the command yarn dev, and you’ll be able to spin up the GraphQL playground, which allows you to add, modify, remove, and query users and tasks, all in one call.

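For example, you could run operations like these in the playground (execute one operation at a time; the sample values are arbitrary):

mutation CreateUser {
    userCreateOne(record: { name: "Ada Lovelace", email: "ada@example.com" }) {
        record {
            _id
            name
            email
        }
    }
}

query ListUsers {
    userMany {
        _id
        name
        email
    }
}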

Conclusion

This quick-and-dirty tutorial is just the beginning of all the fun you can have using GraphQL to make your development stronger, cleaner, and more efficient.

Try expanding on what you’ve just built to add additional functionality to the models, or venture out on your own to improve one of your existing applications — or even spin up a new one; I’d love to hear more about all that you decide to do.

Until then, thank you for following me along throughout this tutorial, and stay tuned for future updates. Happy coding!

Building a GraphQL API with Node.js and MongoDB

In this tutorial, you’ll build and deploy a GraphQL server with Node.js that can query and mutate data from a MongoDB database that is running on Ubuntu 18.04.

While REST APIs are amongst the most popular when it comes to client consumption, they are not the only way to consume data, and they aren’t always the best way. For example, it’s common to deal with many endpoints, or with endpoints that return massive amounts of data you don’t need. This is where GraphQL comes in.

With GraphQL you can query your API in the same sense that you would query a database. You write a query, define the data you want returned, and you get what you requested. Nothing more, nothing less. I actually had the opportunity to interview the co-creator of GraphQL on my podcast in an episode titled, GraphQL for API Development, and in that episode we discuss GraphQL at a high level.

You might remember that I wrote a tutorial titled, Getting Started with GraphQL Development Using Node.js which focused on mock data and no database. This time around we’re going to take a look at including MongoDB as our NoSQL data layer.

A few assumptions are going to be made going forward. I’m going to assume that you already have access to a MongoDB instance. If you don’t and need help getting MongoDB setup, you might check out my tutorial titled, Getting Started with MongoDB as a Docker Container Deployment. The other assumption is that you have Node.js installed and configured. While this tutorial will be a working example, if you want to get into more depth with GraphQL, I suggest you check out my eBook and video course titled Web Services for the JavaScript Developer.

Creating a New Node.js Project for GraphQL Development

The first step in this tutorial will be to create a new project with the dependencies and boilerplate GraphQL code. From the command line, execute the following commands:

npm init -y
npm install express express-graphql graphql mongoose --save

The above commands will create a new package.json file and install our dependencies. The backbone of this project will use Express.js, which is also commonly used for RESTful APIs. However, we’ll be processing GraphQL requests and connecting them to MongoDB with the Mongoose ODM.

The next step is to create an app.js file and include the following boilerplate JavaScript code:

const Express = require("express");
const ExpressGraphQL = require("express-graphql");
const Mongoose = require("mongoose");
const {
    GraphQLID,
    GraphQLString,
    GraphQLList,
    GraphQLNonNull,
    GraphQLObjectType,
    GraphQLSchema
} = require("graphql");

var app = Express();

Mongoose.connect("mongodb://localhost/thepolyglotdeveloper");

const PersonModel = Mongoose.model("person", {
    firstname: String,
    lastname: String
});

const PersonType = new GraphQLObjectType({
    name: "Person",
    fields: {
        id: { type: GraphQLID },
        firstname: { type: GraphQLString },
        lastname: { type: GraphQLString }
    }
});

const schema = new GraphQLSchema({});

app.use("/graphql", ExpressGraphQL({
    schema: schema,
    graphiql: true
}));

app.listen(3000, () => {
    console.log("Listening at :3000...");
});

In the above code we are importing our dependencies, initializing Express.js, and connecting to our MongoDB instance with Mongoose. We’re planning to use a very simple Mongoose model which will also be seen in our GraphQL model.

Rather than reiterating the content that was discussed in the previous tutorial, we’re going to focus primarily on the schema in our project which will contain queries and mutations.

Designing GraphQL Queries for Retrieving MongoDB NoSQL Data

When it comes to our schema, there will be queries for retrieving data and mutations for creating, updating, or deleting data. We’re going to start by designing our queries which will retrieve data from MongoDB.

Take a look at the following queries:

const schema = new GraphQLSchema({
    query: new GraphQLObjectType({
        name: "Query",
        fields: {
            people: {
                type: GraphQLList(PersonType),
                resolve: (root, args, context, info) => {
                    return PersonModel.find().exec();
                }
            },
            person: {
                type: PersonType,
                args: {
                    id: { type: GraphQLNonNull(GraphQLID) }
                },
                resolve: (root, args, context, info) => {
                    return PersonModel.findById(args.id).exec();
                }
            }
        }
    })
});

In the above code we have a people query as well as a person query. One will retrieve multiple documents and the other will retrieve a single document. When we wish to query for multiple documents, we specify we want to return a GraphQLList of the PersonType that we had created. This PersonType maps to our document model. We can simply do a find for all documents within our people MongoDB collection.

The person query is similar, but not quite the same. In the person query we accept an argument which must be present. With that argument we can use the findById function and return the result.

We didn’t need to specify the document properties because they are already conveniently mapped to our model because of our naming conventions. If we wanted to use different names, we could specify the properties in the find operation.

If we wanted to query for our data from a client facing application, we could run something like this:

{
    people {
        id,
        firstname,
        lastname
    }
    person(id: "123") {
        firstname
    }
}

You can try to run this query by navigating to http://localhost:3000/graphql in your web browser because we have GraphiQL enabled for troubleshooting.

Designing GraphQL Mutations for Creating, Changing, or Deleting MongoDB Documents

Now that we can query for documents, we probably want a way to create documents. Mutations are designed nearly the same as queries, but they are executed differently in the client facing application.

Within your schema we can modify it to the following:

const schema = new GraphQLSchema({
    query: new GraphQLObjectType({
        name: "Query",
        fields: {
            people: {
                type: GraphQLList(PersonType),
                resolve: (root, args, context, info) => {
                    return PersonModel.find().exec();
                }
            },
            person: {
                type: PersonType,
                args: {
                    id: { type: GraphQLNonNull(GraphQLID) }
                },
                resolve: (root, args, context, info) => {
                    return PersonModel.findById(args.id).exec();
                }
            }
        }
    }),
    mutation: new GraphQLObjectType({
        name: "Mutation",
        fields: {
            person: {
                type: PersonType,
                args: {
                    firstname: { type: GraphQLNonNull(GraphQLString) },
                    lastname: { type: GraphQLNonNull(GraphQLString) }
                },
                resolve: (root, args, context, info) => {
                    var person = new PersonModel(args);
                    return person.save();
                }
            }
        }
    })
});

Notice that now we have a mutation and not just a query. For the mutation, we are requiring two arguments to exist and we are using them in the resolve function. Using the arguments, without further data validation, we create a new model instance and save it to the database. The results are returned to the client that executed the mutation.

To try this mutation, the following could be executed:

mutation CreatePerson($firstname: String!, $lastname: String!) {
    person(firstname: $firstname, lastname: $lastname) {
        id,
        firstname,
        lastname
    }
}

The above code would make sense in GraphiQL. The variables would be populated in the Query Variables pane of the GraphiQL interface. Various frameworks will have different expectations when it comes to executing mutations, but the same idea applies.
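
For example, the Query Variables pane might contain:

{
    "firstname": "Leonardo",
    "lastname": "da Vinci"
}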

Conclusion

You just saw how to use MongoDB as the NoSQL database in your GraphQL web application. GraphQL can be used in combination with REST or as an alternative depending on your business needs. If you’d like to know how to create a RESTful application with MongoDB, you might want to check out my previous tutorial titled, Building a REST API with MongoDB, Mongoose, and Node.js.

This example was quite simple and a lot more can be accomplished with GraphQL.


How to set up a powerful API with Node.js, GraphQL, MongoDB, Hapi, and Swagger

Separating your frontend and backend has many advantages:

  • The biggest reason why reusable APIs are popular — APIs allow you to consume data from a web client, mobile app, desktop app — any client really.
  • Separation of concerns. Long gone are the days where you have one monolithic-like app where everything is bundled together. Imagine you have an extremely convoluted application. Your only option is to hire extremely experienced/senior developers due to the natural complexity.

I’m all for hiring juniors and training your staff, and that’s exactly why you should separate concerns. With separation of concerns, you can reduce the complexity of your application by splitting responsibilities into “micro-services” where each team is specialized in their micro-service.

As mentioned above, the on-boarding/ramp-up process is much quicker thanks to splitting up responsibilities (backend team, frontend team, dev ops team, and so on).


Forward thinking and getting started

We will be building a very powerful, yet flexible, GraphQL API based on Node.js, backed by MongoDB, with Swagger documentation.

The main backbone of our API will be Hapi.js. We will go over all the technology in substantial detail.

At the very end, we will have a very powerful GraphQL API with great documentation.

The cherry on top will be our integration with the client (React, Vue, Angular).


Prerequisites

  • NodeJS installed
  • Basic JavaScript
  • Terminal (any will do, preferably bash-based)
  • Text editor (any will do)
  • MongoDB (install instructions here) — Mac: brew install mongodb

Let’s go!

Open the terminal and create the project. Inside the project directory we initialize a Node project.

Creating our project
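
For example (the directory name here is just a placeholder):

mkdir painting-api && cd painting-api
npm init -y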

Next, we want to setup our Hapi server, so let’s install the dependencies. You can either use Yarn or NPM.

yarn add hapi nodemon

Before we go on, let’s talk about what hapi.js is and what it can do for us.

hapi enables developers to focus on writing reusable application logic instead of spending time building infrastructure.

Instead of going with Express, we are going with Hapi. In a nutshell, Hapi is a Node framework. The reason why I chose Hapi is rather simple: simplicity and flexibility over boilerplate code.

Hapi enables us to build our API in a very rapid manner.

The second dependency we installed was the good-ole nodemon. Nodemon restarts our server automatically whenever we make changes. It speeds up our development by a big factor.

Let’s open our project with a text editor. I chose Visual Studio Code.

Setting up a Hapi server is very straightforward. Create an index.js file at the root directory; a minimal sketch of its contents follows the list below:

  • We require the hapi dependency
  • Secondly, we make a constant called server which creates a new instance of our Hapi server — as the arguments, we pass an object with the port and host options.
  • Third and finally, we create an asynchronous expression called init. Inside the init method, we have another asynchronous method which starts the server (see server.start()). At the bottom we call the init() function.
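
Here’s that sketch (port 4000 and host 'localhost' are assumptions, based on the URL used later in the post):

const Hapi = require('hapi');

// Create the server with port and host options.
const server = Hapi.server({
    port: 4000,
    host: 'localhost'
});

// Start the server and log its URI.
const init = async () => {
    await server.start();
    console.log(`Server running at: ${server.info.uri}`);
};

init();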

Now, if we head over to http://localhost:4000, we should see a 404 error response.

Which is perfectly fine, since the Hapi server expects a route and a handler. More on that in a second.

Let’s quickly add the script to run our server with nodemon. Open package.json and edit the scripts section.
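
A sketch of what the scripts section might look like (the script name start is our choice):

"scripts": {
    "start": "nodemon index.js"
}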

Now we can start the server with npm start 😎


Routing

Routing is very intuitive with Hapi. Let’s say you hit / — what would you expect to happen? There are three main components in play here.

  • What’s the path? — path
  • What’s the HTTP method? Is it a GET — POST or something else? — method
  • What will happen if that route is reached? — handler

Inside the init method, we attach a new route to our server by calling server.route with an options object.
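
A sketch of that root route (the returned string is arbitrary):

// Inside init(), before server.start()
server.route({
    method: 'GET',
    path: '/',
    handler: (request, h) => {
        return 'Hello, hapi!';
    }
});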

If we refresh our page, we should see the return value of our root handler.

Well done, but there is so much more we can do!


Setting up our database

Right, next up we are going to set up our database. We’re going to use MongoDB with Mongoose (yarn add mongoose).

Let’s face it, writing MongoDB validation, casting and business logic boilerplate is a drag. That’s why we wrote Mongoose.

The final ingredient related to our database is mlab. Instead of running Mongo on our local computer, we are going to use a cloud provider like mlab.

The reason why I chose mlab is because of the free plan (useful for prototyping) and how simple it is to use. There are more alternatives out there, and I encourage you to explore all of them ❤

Head over to https://mlab.com/ and signup.

Let’s create our database.

And finally create a user for the database. That will be all we will be editing on mlab.


Connecting mongoose with mlab

Open index.js and add the following lines and credentials. We are basically just telling Mongoose which database we want to connect to. Make sure to use your own credentials.
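
A sketch of those lines; every part of the connection string below is a placeholder, so substitute the host, port, user, password, and database name that mlab gives you:

const mongoose = require('mongoose');

// Replace the placeholders with your own mlab credentials.
mongoose.connect('mongodb://<dbuser>:<dbpassword>@ds012345.mlab.com:12345/<dbname>');

mongoose.connection.once('open', () => {
    console.log('connected to database');
});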

If you want to brush up your MongoDB skills, here’s a solid series.

If everything went according to plan, we should see ‘connected to database’ in the console.


Wohoo!

Good job! Take a quick break and grab some coffee, we are almost ready to dive into the “cool parts”.


Creating Models

With MongoDB, we follow the convention of models. In other words — data modeling.

It’s a relatively simple concept which you will be able to grasp. Basically we just declare our schema for collections. Think of collections as tables in an SQL database.

Let’s create a directory called models. Inside we will create a file Painting.js

Painting.js is our painting model. It will hold all data related to paintings. Here’s how it will look (a sketch follows the list below):

  • We require the mongoose dependency.
  • We declare our PaintingSchema by calling the mongoose schema constructor and passing in the options. Notice how it’s strongly typed: for example the name field can consist of a string, and techniques consists of an array of strings.
  • We export the model and name it Painting
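
Here’s the sketch of models/Painting.js (the field names match those used later in the POST route):

const mongoose = require('mongoose');

// Schema for the paintings collection; note the strong typing.
const PaintingSchema = new mongoose.Schema({
    name: String,
    url: String,
    techniques: [String]
});

module.exports = mongoose.model('Painting', PaintingSchema);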

Let’s fetch all of our paintings from the database

First, we need to import the Painting model into index.js.
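
In index.js, that import might look like this (the path assumes the models directory created above):

const Painting = require('./models/Painting');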


Adding new routes

Ideally, we want to have URL endpoints reflecting our actions.

such as /api/v1/paintings — /api/v1/paintings/{id} — and so on.

Let’s start off with a GET and POST route. GET fetches all the paintings and POST adds a new painting.

Notice that the route is now an array of objects instead of a single object (see the sketch after the list below). Also, arrow functions 😊

  • We created a GET for [/api/v1/paintings](http://localhost:4000/api/v1/paintings) path. Inside the handler we are calling the mongoose schema. Mongoose has built-in methods — the handy method we are using is find() which returns all paintings since we’re not passing in any conditions to find by. Therefore it returns all records.
  • We also created a POST for the same path. The reason for that is we’re following REST conventions. Let’s deconstruct (pun intended) the route handler — remember in our Painting schema we declared three fields: name — url — techniques 
  • Here we are just accepting those arguments from the request (we will be doing that with postman in a sec) and passing the request arguments to our mongoose schema. After we’re done passing arguments, we call the save() method on our new record, which saves it to the mlab database.
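
Here’s a sketch of both routes, using the hapi 17 handler signature and the Painting model imported above:

server.route([
    {
        // GET: return every painting in the collection.
        method: 'GET',
        path: '/api/v1/paintings',
        handler: (request, h) => {
            return Painting.find();
        }
    },
    {
        // POST: create a new painting from the request payload, then save it.
        method: 'POST',
        path: '/api/v1/paintings',
        handler: (request, h) => {
            const { name, url, techniques } = request.payload;
            const painting = new Painting({ name, url, techniques });
            return painting.save();
        }
    }
]);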

If we head over to http://localhost:4000/api/v1/paintings we should see an empty array.

Why empty? Well we haven’t added any paintings just yet. Let’s do that now!

Install postman, it’s available for all platforms.

After installation, open postman.

  • On the left you can see the method options. Change that to POST
  • Next to the POST method we have the URL. That’s the URL we want to send our method to.
  • On the right you can see blue button which sends the request.
  • Below the URL bar we have the options. Click on the body and fill in the fields like in the example.
{
  "name": "Mona Lisa",
  "url": "https://en.wikipedia.org/wiki/Mona_Lisa#/media/File:Mona_Lisa,_by_Leonardo_da_Vinci,_from_C2RMF_retouched.jpg",
  "techniques": ["Portrait"]
}

POST paintings

Alright. Good to go! Let’s open http://localhost:4000/api/v1/paintings

Excellent! We still have some way to go! Next up — GraphQL!


Here’s the source code just in case anyone needs it :-)


Working with MongoDB and GraphQL

Developer Advocate Joe Karlsson gives an introduction to GraphQL and refactors an old project to use MongoDB Atlas
