Dylan Iqbal

Building A REST API With MongoDB, Mongoose, And Node.js

In this tutorial, we build a REST API with MongoDB, Mongoose, and Node.js. Mongoose is a Node.js package for modeling MongoDB data.

About a week or so ago I had written a tutorial titled, Getting Started with MongoDB as a Docker Container Deployment, which focused on the deployment of MongoDB. In that tutorial we saw how to interact with the MongoDB instance using the shell client, but what if we wanted to actually develop a web application with MongoDB as our NoSQL database?

In this tutorial we’re going to see how to develop a REST API with create, retrieve, update, and delete (CRUD) endpoints using Node.js and the very popular Mongoose object document modeler (ODM) to interact with MongoDB.

Before we get too invested, I want to point out that this tutorial won’t focus on installing, configuring, or deploying MongoDB instances. I recommend you take a look at my previous tutorial if you need help with that. The assumption is that your MongoDB database is ready to go.

Configuring Node.js with Express Framework

To start things off, let’s go ahead and create a fresh Node.js project with the appropriate dependencies. From the command line, execute the following:

npm init -y
npm install express body-parser mongoose --save


The above commands will create a new package.json file and install Express.js, Mongoose, and a package that will allow us to pass JSON data around in our requests.

For simplicity, we’re going to keep all of our code within a single JavaScript file. Create an app.js file at the root of your project and include the following boilerplate code:

const Express = require("express");
const Mongoose = require("mongoose");
const BodyParser = require("body-parser");

var app = Express();

app.use(BodyParser.json());
app.use(BodyParser.urlencoded({ extended: true }));

app.post("/person", async (request, response) => {});
app.get("/people", async (request, response) => {});
app.get("/person/:id", async (request, response) => {});
app.put("/person/:id", async (request, response) => {});
app.delete("/person/:id", async (request, response) => {});

app.listen(3000, () => {
    console.log("Listening at :3000...");
});


In the above code we are doing a few things. First, we import each of the dependencies that we downloaded when creating our project. Next, we initialize the Express framework and configure the body-parser package so we can receive JSON data in request payloads.

The API we develop will be create, retrieve, update, and delete (CRUD) oriented, hence the five prepared endpoint functions. We’ll be adding the MongoDB and Mongoose logic to each of these endpoint functions.

Finally, we are listening for requests to our application on port 3000.

Interacting with MongoDB using the Mongoose ODM

With the foundation of our REST API in place, we can focus on the database logic. Within the project’s app.js file, include the following near where you initialized the Express framework:

Mongoose.connect("mongodb://localhost/thepolyglotdeveloper");


The above line will connect to our MongoDB instance running on localhost and it will either use or create a thepolyglotdeveloper database. If you followed the Docker example, the database will have already been created. If you’re not using localhost, change it up as necessary.
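
Keep in mind that connecting is asynchronous. In recent versions of Mongoose, connect returns a promise, so as a small, optional sketch you could log a connection failure instead of letting it fail silently; the log messages below are just illustrative:

Mongoose.connect("mongodb://localhost/thepolyglotdeveloper")
    .then(() => console.log("Connected to MongoDB"))
    .catch(error => console.error("Could not connect to MongoDB", error));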

After the connection information is in place, we can define our document models. This particular example will only have a single document model that looks like the following:

const PersonModel = Mongoose.model("person", {
    firstname: String,
    lastname: String
});


Our model is person, which will create a people collection within our database. Each document in the collection will have the firstname and lastname properties. The model can be significantly more complex than what we have here.
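
For example, and purely as an illustrative sketch that is not used in the rest of this tutorial, a richer model could mark fields as required, constrain values, and add defaults through a Mongoose schema; the extra fields below are hypothetical:

const DetailedPersonModel = Mongoose.model("detailedperson", new Mongoose.Schema({
    firstname: { type: String, required: true },
    lastname: { type: String, required: true },
    email: { type: String, match: /.+@.+/ },
    age: { type: Number, min: 0 },
    created: { type: Date, default: Date.now }
}));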

So let’s take a look at each of our endpoints, starting with the creation of data:

app.post("/person", async (request, response) => {
    try {
        var person = new PersonModel(request.body);
        var result = await person.save();
        response.send(result);
    } catch (error) {
        response.status(500).send(error);
    }
});


When the client makes a POST request to our endpoint, we can use our PersonModel and the JSON payload provided to save a document into the database. Only the most basic data validation happens on our JSON payload, and rather than using promises directly, we are using async and await, which in my opinion is a little cleaner. If the save is successful, we return the data back to the client facing application.

Now that we have data in our database, we can try to retrieve it:

app.get("/people", async (request, response) => {
    try {
        var result = await PersonModel.find().exec();
        response.send(result);
    } catch (error) {
        response.status(500).send(error);
    }
});


There are two different retrievals that can happen. We can retrieve everything, or we can retrieve something in particular. The first retrieval that we are looking at is the everything scenario. Calling find with no conditions will get all data for a given collection, which in our scenario is the people collection. That data is then returned to the client facing application.

On the other side of things, we can try to retrieve a single document based on its stored id value:

app.get("/person/:id", async (request, response) => {
    try {
        var person = await PersonModel.findById(request.params.id).exec();
        response.send(person);
    } catch (error) {
        response.status(500).send(error);
    }
});


Given an id value defined by the client facing application, we can call the findById function rather than the find function. This function will find a single document based on the associated id. When we have this data, we return it to the user.

Not bad so far, right?

Let’s finish this simple API with an update and a delete endpoint. Starting with an update, we can do the following:

app.put("/person/:id", async (request, response) => {
    try {
        var person = await PersonModel.findById(request.params.id).exec();
        person.set(request.body);
        var result = await person.save();
        response.send(result);
    } catch (error) {
        response.status(500).send(error);
    }
});


When the client provides an id value, we can first find the document by that id. Once we’ve found the document, we can set the properties that were provided in the request payload. Again, basic validation is performed. For example, if a property is provided that doesn’t appear in our model, it will be stripped out. In our model, none of the properties are required, which means any valid data that appears in the payload will override what already exists. We can then save the changes back to the database and return the data to the client.
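
As a quick sketch of that stripping behavior, suppose the client sends a hypothetical nickname property that our model does not define:

// Hypothetical PUT payload: { "firstname": "Jane", "nickname": "JJ" }
var person = await PersonModel.findById(request.params.id).exec();
person.set(request.body);
var result = await person.save();
// result contains the updated firstname and the existing lastname, but no
// nickname, because nickname is not part of the person model.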

Our final endpoint for this example is the delete endpoint:

app.delete("/person/:id", async (request, response) => {
    try {
        var result = await PersonModel.deleteOne({ _id: request.params.id }).exec();
        response.send(result);
    } catch (error) {
        response.status(500).send(error);
    }
});


Again, we are expecting an id to be provided from the client facing application. When we have an id, we can use it in the deleteOne function and the document will be removed from the database.

If you run your application, you can play around with it using a tool like Postman or similar.
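
If you’d rather script the check than click through Postman, the sketch below exercises each endpoint in order. It assumes the API is running locally on port 3000 and that you’re on Node.js 18 or newer, where the fetch API is available globally; the sample names are made up:

const base = "http://localhost:3000";

(async () => {
    // Create a person, keeping the generated _id from the response
    const created = await (await fetch(`${base}/person`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ firstname: "Jane", lastname: "Doe" })
    })).json();

    // Retrieve everything, then the single document we just created
    console.log(await (await fetch(`${base}/people`)).json());
    console.log(await (await fetch(`${base}/person/${created._id}`)).json());

    // Update and then delete that document
    await fetch(`${base}/person/${created._id}`, {
        method: "PUT",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ firstname: "Janet" })
    });
    await fetch(`${base}/person/${created._id}`, { method: "DELETE" });
})();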

Conclusion

You just saw how to build a simple RESTful API using popular technologies such as Node.js, JavaScript, and MongoDB as the NoSQL database. If you’ve been following the blog, you might remember I did something similar in a tutorial titled, Developing a RESTful API with Node.js and MongoDB Atlas. With Atlas, we built a REST API, but it was with a cloud deployment of MongoDB.

If you’re interested in learning more about RESTful API development, I encourage you to check out my eBook and video course titled, Web Services for the JavaScript Developer, as it goes into significantly more depth.

A video version of this article can be found below.

#mongodb #node-js #rest #api


NBB: Ad-hoc CLJS Scripting on Node.js

Nbb

Not babashka. Node.js babashka!?

Ad-hoc CLJS scripting on Node.js.

Status

Experimental. Please report issues here.

Goals and features

Nbb's main goal is to make it easy to get started with ad hoc CLJS scripting on Node.js.

Additional goals and features are:

  • Fast startup without relying on a custom version of Node.js.
  • Small artifact (current size is around 1.2MB).
  • First class macros.
  • Support building small TUI apps using Reagent.
  • Complement babashka with libraries from the Node.js ecosystem.

Requirements

Nbb requires Node.js v12 or newer.

How does this tool work?

CLJS code is evaluated through SCI, the same interpreter that powers babashka. Because SCI works with advanced compilation, the bundle size, especially when combined with other dependencies, is smaller than what you get with self-hosted CLJS. That makes startup faster. The trade-off is that execution is less performant and that only a subset of CLJS is available (e.g. no deftype, yet).

Usage

Install nbb from NPM:

$ npm install nbb -g

Omit -g for a local install.

Try out an expression:

$ nbb -e '(+ 1 2 3)'
6

And then install some other NPM libraries to use in the script. E.g.:

$ npm install csv-parse shelljs zx

Create a script which uses the NPM libraries:

(ns script
  (:require ["csv-parse/lib/sync$default" :as csv-parse]
            ["fs" :as fs]
            ["path" :as path]
            ["shelljs$default" :as sh]
            ["term-size$default" :as term-size]
            ["zx$default" :as zx]
            ["zx$fs" :as zxfs]
            [nbb.core :refer [*file*]]))

(prn (path/resolve "."))

(prn (term-size))

(println (count (str (fs/readFileSync *file*))))

(prn (sh/ls "."))

(prn (csv-parse "foo,bar"))

(prn (zxfs/existsSync *file*))

(zx/$ #js ["ls"])

Call the script:

$ nbb script.cljs
"/private/tmp/test-script"
#js {:columns 216, :rows 47}
510
#js ["node_modules" "package-lock.json" "package.json" "script.cljs"]
#js [#js ["foo" "bar"]]
true
$ ls
node_modules
package-lock.json
package.json
script.cljs

Macros

Nbb has first class support for macros: you can define them right inside your .cljs file, like you are used to from JVM Clojure. Consider the plet macro to make working with promises more palatable:

(defmacro plet
  [bindings & body]
  (let [binding-pairs (reverse (partition 2 bindings))
        body (cons 'do body)]
    (reduce (fn [body [sym expr]]
              (let [expr (list '.resolve 'js/Promise expr)]
                (list '.then expr (list 'clojure.core/fn (vector sym)
                                        body))))
            body
            binding-pairs)))

Using this macro we can make async code look more like sync code. Consider this puppeteer example:

(-> (.launch puppeteer)
      (.then (fn [browser]
               (-> (.newPage browser)
                   (.then (fn [page]
                            (-> (.goto page "https://clojure.org")
                                (.then #(.screenshot page #js{:path "screenshot.png"}))
                                (.catch #(js/console.log %))
                                (.then #(.close browser)))))))))

Using plet this becomes:

(plet [browser (.launch puppeteer)
       page (.newPage browser)
       _ (.goto page "https://clojure.org")
       _ (-> (.screenshot page #js{:path "screenshot.png"})
             (.catch #(js/console.log %)))]
      (.close browser))

See the puppeteer example for the full code.

Since v0.0.36, nbb includes promesa which is a library to deal with promises. The above plet macro is similar to promesa.core/let.

Startup time

$ time nbb -e '(+ 1 2 3)'
6
nbb -e '(+ 1 2 3)'   0.17s  user 0.02s system 109% cpu 0.168 total

The baseline startup time for a script is about 170ms on my laptop. When invoked via npx this adds another 300ms or so, so for faster startup, either use a globally installed nbb or use $(npm bin)/nbb script.cljs to bypass npx.

Dependencies

NPM dependencies

Nbb itself does not depend on any NPM libraries. All NPM libraries loaded by a script are resolved relative to that script. When using the Reagent module, React is resolved in the same way as any other NPM library.

Classpath

To load .cljs files from local paths or dependencies, you can use the --classpath argument. The current dir is added to the classpath automatically. So if there is a file foo/bar.cljs relative to your current dir, then you can load it via (:require [foo.bar :as fb]). Note that nbb uses the same naming conventions for namespaces and directories as other Clojure tools: foo-bar in the namespace name becomes foo_bar in the directory name.

To load dependencies from the Clojure ecosystem, you can use the Clojure CLI or babashka to download them and produce a classpath:

$ classpath="$(clojure -A:nbb -Spath -Sdeps '{:aliases {:nbb {:replace-deps {com.github.seancorfield/honeysql {:git/tag "v2.0.0-rc5" :git/sha "01c3a55"}}}}}')"

and then feed it to the --classpath argument:

$ nbb --classpath "$classpath" -e "(require '[honey.sql :as sql]) (sql/format {:select :foo :from :bar :where [:= :baz 2]})"
["SELECT foo FROM bar WHERE baz = ?" 2]

Currently nbb only reads from directories, not jar files, so you are encouraged to use git libs. Support for .jar files will be added later.

Current file

The name of the file that is currently being executed is available via nbb.core/*file* or on the metadata of vars:

(ns foo
  (:require [nbb.core :refer [*file*]]))

(prn *file*) ;; "/private/tmp/foo.cljs"

(defn f [])
(prn (:file (meta #'f))) ;; "/private/tmp/foo.cljs"

Reagent

Nbb includes reagent.core which will be lazily loaded when required. You can use this together with ink to create a TUI application:

$ npm install ink

ink-demo.cljs:

(ns ink-demo
  (:require ["ink" :refer [render Text]]
            [reagent.core :as r]))

(defonce state (r/atom 0))

(doseq [n (range 1 11)]
  (js/setTimeout #(swap! state inc) (* n 500)))

(defn hello []
  [:> Text {:color "green"} "Hello, world! " @state])

(render (r/as-element [hello]))

Promesa

Working with callbacks and promises can become tedious. Since nbb v0.0.36 the promesa.core namespace is included with the let and do! macros. An example:

(ns prom
  (:require [promesa.core :as p]))

(defn sleep [ms]
  (js/Promise.
   (fn [resolve _]
     (js/setTimeout resolve ms))))

(defn do-stuff
  []
  (p/do!
   (println "Doing stuff which takes a while")
   (sleep 1000)
   1))

(p/let [a (do-stuff)
        b (inc a)
        c (do-stuff)
        d (+ b c)]
  (prn d))
$ nbb prom.cljs
Doing stuff which takes a while
Doing stuff which takes a while
3

Also see API docs.

Js-interop

Since nbb v0.0.75 applied-science/js-interop is available:

(ns example
  (:require [applied-science.js-interop :as j]))

(def o (j/lit {:a 1 :b 2 :c {:d 1}}))

(prn (j/select-keys o [:a :b])) ;; #js {:a 1, :b 2}
(prn (j/get-in o [:c :d])) ;; 1

Most of this library is supported in nbb, except the following:

  • destructuring using :syms
  • property access using .-x notation. In nbb, you must use keywords.

See the example of what is currently supported.

Examples

See the examples directory for small examples.

Also check out these projects built with nbb:

API

See API documentation.

Migrating to shadow-cljs

See this gist on how to convert an nbb script or project to shadow-cljs.

Build

Prerequisites:

  • babashka >= 0.4.0
  • Clojure CLI >= 1.10.3.933
  • Node.js 16.5.0 (lower version may work, but this is the one I used to build)

To build:

  • Clone and cd into this repo
  • bb release

Run bb tasks for more project-related tasks.

Download Details:
Author: borkdude
Download Link: Download The Source Code
Official Website: https://github.com/borkdude/nbb 
License: EPL-1.0

#node #javascript

Wilford Pagac

What is REST API? An Overview | Liquid Web

What is REST?

The REST acronym stands for “REpresentational State Transfer” and is designed to take advantage of existing HTTP protocols when used for Web APIs. It is very flexible in that it is not tied to resources or methods and has the ability to handle different calls and data formats. Because a REST API is not constrained to an XML format like SOAP, it can return multiple other formats depending on what is needed. If a service adheres to this style, it is considered a “RESTful” application. REST allows components to access and manage functions within another application.

REST was initially defined in Roy Fielding’s dissertation twenty years ago. He proposed these standards as an alternative to SOAP (the Simple Object Access Protocol, a simple standard for accessing objects and exchanging structured messages within a distributed computing environment). REST (or RESTful) defines the general rules used to regulate the interactions between web apps utilizing the HTTP protocol for CRUD (create, retrieve, update, delete) operations.

What is an API?

An API (or Application Programming Interface) provides a method of interaction between two systems.

What is a RESTful API?

A RESTful API (or application program interface) uses HTTP requests to GET, PUT, POST, and DELETE data following the REST standards. This allows two pieces of software to communicate with each other. In essence, REST API is a set of remote calls using standard methods to return data in a specific format.
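
As a minimal illustration, assuming a hypothetical service at https://example.com/api that exposes an articles resource, those verbs typically map to operations like the following:

GET     https://example.com/api/articles       retrieve all articles
GET     https://example.com/api/articles/42    retrieve article 42
POST    https://example.com/api/articles       create a new article
PUT     https://example.com/api/articles/42    update article 42
DELETE  https://example.com/api/articles/42    delete article 42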

The systems that interact in this manner can be very different. Each app may use a unique programming language, operating system, database, etc. So, how do we create a system that can easily communicate with and understand other apps? This is where the REST API is used as an interaction system.

When using a RESTful API, we should determine in advance what resources we want to expose to the outside world. Typically, the RESTful API service is implemented, keeping the following ideas in mind:

  • Format: There should be no restrictions on the data exchange format
  • Implementation: REST is based entirely on HTTP
  • Service Definition: Because REST is very flexible, the API can be modified to ensure the application understands the request/response format.
  • The RESTful API focuses on resources and how efficiently you perform operations on them using HTTP.

The features of the REST API design style state:

  • Each entity must have a unique identifier.
  • Standard methods should be used to read and modify data.
  • It should provide support for different types of resources.
  • The interactions should be stateless.

For REST to fit this model, we must adhere to the following rules:

  • Client-Server Architecture: The interface is separate from the server-side data repository. This affords flexibility and the development of components independently of each other.
  • Detachment: No client state or session is stored on the server between requests; the interactions are stateless.
  • Cacheability: It must be explicitly stated whether the client can store responses.
  • Multi-level: The API should work whether it interacts directly with a server or through an additional layer, like a load balancer.
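
As a small sketch of the detachment (statelessness) rule, every request carries everything the server needs to process it, such as credentials, instead of relying on a session remembered from an earlier call. The endpoint and token below are hypothetical:

// Each request is self-contained; the server keeps no session between calls.
fetch("https://example.com/api/articles", {
    headers: { "Authorization": "Bearer <token>", "Accept": "application/json" }
}).then(response => response.json()).then(console.log);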

#tutorials #api #application #application programming interface #crud #http #json #programming #protocols #representational state transfer #rest #rest api #rest api graphql #rest api json #rest api xml #restful #soap #xml #yaml

An API-First Approach For Designing Restful APIs | Hacker Noon

I’ve been working with Restful APIs for some time now and one thing that I love to do is to talk about APIs.

So, today I will show you how to build an API using the API-First approach and Design First with OpenAPI Specification.

First things first: if you don’t know what an API-First approach means, it would be a good idea to stop reading this and check the blog post that I wrote for the Farfetch blog, where I explain everything you need to know to start an API using API-First.

Preparing the ground

Before you get your hands dirty, let’s prepare the ground and understand the use case that will be developed.

Tools

If you want to reproduce the examples shown here, you will need some of the items below.

  • NodeJS
  • OpenAPI Specification
  • Text Editor (I’ll use VSCode)
  • Command Line

Use Case

To keep things easy to understand, let’s use the Todo List app, a very common concept even beyond the software development community.

#api #rest-api #openai #api-first-development #api-design #apis #restful-apis #restful-api

Hire Dedicated Node.js Developers - Hire Node.js Developers

If you look at the backend technology used by today’s most popular apps, there is one thing you will find common among them: the use of the NodeJS framework. Yes, the NodeJS framework is that effective and successful.

If you wish to have a strong backend for efficient app performance, then use NodeJS at the backend.

WebClues Infotech offers experienced and expert professionals at different levels for your app development needs. So hire a dedicated NodeJS developer from WebClues Infotech that matches your experience and expertise requirements.

So what are you waiting for? Get your app developed with strong performance parameters from WebClues Infotech

For inquiry click here: https://www.webcluesinfotech.com/hire-nodejs-developer/

Book Free Interview: https://bit.ly/3dDShFg

#hire dedicated node.js developers #hire node.js developers #hire top dedicated node.js developers #hire node.js developers in usa & india #hire node js development company #hire the best node.js developers & programmers

Dylan Iqbal

Using Hapi.js, Mongoose, And MongoDB To Build A REST API

In this tutorial we’re going to develop a simple RESTful API using Hapi.js as the backend framework, Joi for validation, Mongoose as the object document modeler, and MongoDB as the NoSQL database. Rather than just using Hapi.js as a drop-in framework replacement, I wanted to improve upon what we had previously seen by simplifying functions and validating client provided data.

To continue on my trend of MongoDB with Node.js material, I thought it would be a good idea to use one of my favorite Node.js frameworks. Previously I had written about using Express.js with Mongoose, but this time I wanted to evaluate the same tasks using Hapi.js.

If you haven’t seen my previous tutorial, don’t worry because it is not a requirement. However, the previous tutorial is a valuable read if you’re evaluating Node.js frameworks. What is required is having a MongoDB instance available to you. If you’re unfamiliar with deploying MongoDB, you might want to check out my tutorial titled, Getting Started with MongoDB as a Docker Container Deployment.

Creating a Hapi.js Project with MongoDB and Data Validation Support

With MongoDB available to us, we can create a fresh Hapi.js project with all the appropriate dependencies. Create a new project directory and execute the following commands:

npm init -y
npm install hapi joi mongoose --save


The above commands will create a new package.json file and install the Hapi.js framework, the Joi validation framework, and the Mongoose object document modeler (ODM).

We’re going to add all of our application code into a single project file. Create an app.js file and include the following boilerplate JavaScript code:

const Hapi = require("hapi");
const Mongoose = require("mongoose");
const Joi = require("joi");

const server = new Hapi.Server({ "host": "localhost", "port": 3000 });

server.route({
    method: "POST",
    path: "/person",
    options: {
        validate: {}
    },
    handler: async (request, h) => {}
});

server.route({
    method: "GET",
    path: "/people",
    handler: async (request, h) => {}
});

server.route({
    method: "GET",
    path: "/person/{id}",
    handler: async (request, h) => {}
});

server.route({
    method: "PUT",
    path: "/person/{id}",
    options: {
        validate: {}
    },
    handler: async (request, h) => {}
});

server.route({
    method: "DELETE",
    path: "/person/{id}",
    handler: async (request, h) => {}
});

server.start();


We’ve added quite a bit of code to our app.js file, but it isn’t really anything beyond the Hapi.js getting started material found on the framework’s website. Essentially we’ve imported the dependencies that we downloaded, defined our server’s settings, defined the routes, which are also referred to as endpoints, and started the server.

You’ll notice that not all of our routes are the same. We’re developing a create, retrieve, update, and delete (CRUD) based REST API with validation on some of our endpoints. In particular we’ll be adding validation logic to the endpoints that save data to the database, not retrieve or remove.

With our boilerplate code in place, let’s take a look at configuring MongoDB and adding our endpoint logic.

Interacting with the Database using the Mongoose ODM

Remember, I’m assuming you already have access to an instance of MongoDB. At the top of your app.js file, after where you defined your server configuration, we need to connect to MongoDB. Include the following line to establish a connection:

Mongoose.connect("mongodb://localhost/thepolyglotdeveloper");


You’ll need to swap out my connection string information with your own. When working with Mongoose, we need to have a model defined for each of our collections. Since this is a simple example, we’ll only have one model and it looks like the following:

const PersonModel = Mongoose.model("person", {
    firstname: String,
    lastname: String
});


Each of our documents will contain a firstname and a lastname, but neither of the two fields are required. These documents will be saved to the people collection which is the plural form of our ODM model.

At this point in time MongoDB is ready to be used.

It is now time to start developing our API endpoints, so starting with the creation endpoint, we might have something like this:

server.route({
    method: "POST",
    path: "/person",
    options: {
        validate: {}
    },
    handler: async (request, h) => {
        try {
            var person = new PersonModel(request.payload);
            var result = await person.save();
            return h.response(result);
        } catch (error) {
            return h.response(error).code(500);
        }
    }
});


We’ve skipped the validation logic for now, but inside our handler we are taking the payload data sent with the request from the client and creating a new instance of our model. Using our model we can save to the database and return the response back to the client. Mongoose will do basic validation against the payload data based on the schema, but we can do better. This is where Joi comes in with Hapi.js.

Let’s look at the validate object in our route:

validate: {
    payload: {
        firstname: Joi.string().required(),
        lastname: Joi.string().required()
    },
    failAction: (request, h, error) => {
        return error.isJoi ? h.response(error.details[0]).takeover() : h.response(error).takeover();
    }
}


In the validate object, we are choosing to validate our payload. We also have the option to validate the params as well as the query of a request, but neither is necessary here. While we could do some very complex validation, we’re just validating that both properties are present. Rather than returning a vague error to the user if either is missing, we’re returning the exact error using failAction, which is optional.
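
Joi can express much richer rules than a simple presence check. As a hedged sketch that is not used elsewhere in this tutorial, a stricter payload validator might look like the following; the email and age fields are hypothetical additions to the model:

validate: {
    payload: {
        firstname: Joi.string().min(1).max(50).required(),
        lastname: Joi.string().min(1).max(50).required(),
        email: Joi.string().email().optional(),
        age: Joi.number().integer().min(0).optional()
    }
}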

Now let’s take a look at retrieving the data that had been created. In a typical CRUD scenario, we can retrieve all data or a particular piece of data. We’re going to accommodate both scenarios.

server.route({
    method: "GET",
    path: "/people",
    handler: async (request, h) => {
        try {
            var person = await PersonModel.find().exec();
            return h.response(person);
        } catch (error) {
            return h.response(error).code(500);
        }
    }
});


The above route will execute the find function in Mongoose with no query parameters. This means that there are no criteria to search for, which results in all data being returned from the collection. Similarly, we could return a particular piece of data.

If we wanted to return a particular piece of data we could either provide parameters in the find function, or use the following:

server.route({
    method: "GET",
    path: "/person/{id}",
    handler: async (request, h) => {
        try {
            var person = await PersonModel.findById(request.params.id).exec();
            return h.response(person);
        } catch (error) {
            return h.response(error).code(500);
        }
    }
});


In the above endpoint we are accepting an id route parameter and are using the findById function. The data returned from the interaction is returned to the client facing application.

With the create and retrieve endpoints out of the way, we can bring this tutorial to an end with the update and delete endpoints. Starting with the update endpoint, we might have something like this:

server.route({
    method: "PUT",
    path: "/person/{id}",
    options: {
        validate: {
            payload: {
                firstname: Joi.string().optional(),
                lastname: Joi.string().optional()
            },
            failAction: (request, h, error) => {
                return error.isJoi ? h.response(error.details[0]).takeover() : h.response(error).takeover();
            }
        }
    },
    handler: async (request, h) => {
        try {
            var result = await PersonModel.findByIdAndUpdate(request.params.id, request.payload, { new: true });
            return h.response(result);
        } catch (error) {
            return h.response(error).code(500);
        }
    }
});


Just like with the create endpoint, we are validating our data. However, our validation is a little different from the previous endpoint. Instead of making our properties required, we are just saying they are optional. With this setup, any property that shows up in the payload that isn’t in our validator will throw an error. So, for example, if I wanted to include a middle name, it would fail.

Inside the handler function, we can use a shortcut function called findByIdAndUpdate, which allows us to find a document and update it in the same operation rather than doing it in two steps. We are including the new setting so that the latest document information is returned to the client.

The delete endpoint will be a lot simpler:

server.route({
    method: "DELETE",
    path: "/person/{id}",
    handler: async (request, h) => {
        try {
            var result = await PersonModel.findByIdAndDelete(request.params.id);
            return h.response(result);
        } catch (error) {
            return h.response(error).code(500);
        }
    }
});


Using an id parameter passed from the client, we can execute the findByIdAndDelete function which will find a document by the id, then remove it in one swoop rather than using two steps.

You should be able to play around with the API now. You might want to use a tool like Postman before trying it with a frontend framework like Angular or Vue.js.

Conclusion

You just saw how to create a REST API with Hapi.js and MongoDB. While we used Mongoose and Joi to help us with the job, there are other alternatives that can be used as well.

While Hapi.js is awesome, in my opinion, if you’d like to check out how to accomplish the same using a popular framework like Express.js, you might want to check out my tutorial titled, Building a REST API with MongoDB, Mongoose, and Node.js. I’ve also written a version of this tutorial using Couchbase as the NoSQL database. That version of the tutorial can be found here.

A video version of this tutorial can be seen below.

#mongodb #rest #api #node-js #hapi-js