Setting up MongoDB for Bi-Temporal Data

In this tutorial, I will demonstrate how to integrate BarbelHisto with MongoDB to build a properly audit-proof document management solution.

1. Get Your Version of BarbelHisto

To make MongoDB work with BarbelHisto, you need two Maven dependencies to get started: the BarbelHisto core library and the BarbelHisto mongo package. Check the project documentation for the exact coordinates.

2. Develop a **Client** POJO

Implement a POJO like this one:

public class Client {

    @DocumentId
    private String clientId;
    private String title;
    private String name;
    private String firstname;
    private String address;
    private String email;
    private LocalDate dateOfBirth;
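
    // all-args constructor, getters, and setters omitted for brevity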

}

Notice that we use the @DocumentId annotation to tell BarbelHisto that the clientId uniquely identifies the client and should be used as the document ID. BarbelHisto will maintain a document journal for each document ID.

3. Create an Instance of **BarbelHisto** With MongoDB Listeners

Like so:

MongoClient mongoClient = SimpleMongoListenerClient.create("mongodb://localhost:12345").getMongoClient();
// update listener
SimpleMongoUpdateListener updateListener = SimpleMongoUpdateListener.create(mongoClient, "testDb", "testCol", Client.class, BarbelHistoContext.getDefaultGson());
// pre-fetch listener
SimpleMongoLazyLoadingListener loadingListener = SimpleMongoLazyLoadingListener.create(mongoClient, "testDb", "testCol", Client.class, BarbelHistoContext.getDefaultGson());
// locking listener
MongoPessimisticLockingListener lockingListener = MongoPessimisticLockingListener.create(mongoClient, "lockDb", "docLocks");
// BarbelHisto instance
BarbelHisto<Client> mongoBackedHisto = BarbelHistoBuilder.barbel().withSynchronousEventListener(updateListener)
                .withSynchronousEventListener(loadingListener).withSynchronousEventListener(lockingListener).build();

You can use your own MongoClient settings if you like; the BarbelHisto mongo package provides a client creation class for convenience. Three listeners are registered synchronously with BarbelHisto. The SimpleMongoUpdateListener forwards updates saved against BarbelHisto to the MongoDB shadow collection. The SimpleMongoLazyLoadingListener ensures that data is fetched into the local BarbelHisto instance when clients perform queries using the BarbelHisto.retrieve() methods. The MongoPessimisticLockingListener locks journals in MongoDB when a client performs an update using BarbelHisto.save().

4. Do Some Bi-Temporal **save()** Operations

With this setup, you can now store and retrieve bi-temporal data with a MongoCollection as a remote data source.

Client client = new Client("1234", "Mr.", "Smith", "Martin", "some street 11", "[email protected]", LocalDate.of(1973, 6, 20));
mongoBackedHisto.save(client, LocalDate.now(), LocalDate.MAX);
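
Saving against the same document ID again creates a new version in that client's journal. As a sketch (assuming the POJO has the usual setters, which are not shown above), a follow-up save could record an address change that becomes effective at a later date:

// hypothetical setter on the Client POJO
client.setAddress("new street 5");
// record the change, effective from 1 April 2019 onwards
mongoBackedHisto.save(client, LocalDate.of(2019, 4, 1), LocalDate.MAX);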

5. Retrieve Bi-Temporal Data From MongoDB

Later, in other sessions of your web application, you can retrieve the client using the BarbelHisto.retrieve() from MongoDB.

Client client = mongoBackedHisto.retrieveOne(BarbelQueries.effectiveNow("1234"));

Use BarbelQueries to run queries against the MongoDB-backed BarbelHisto instance. You can also combine several BarbelQueries.

List<Client> clients = mongoBackedHisto.retrieve(QueryFactory.and(BarbelQueries.effectiveNow("1234"),BarbelQueries.effectiveNow("1234")));

Be careful if you don’t specify document IDs in your query: this might cause a full load of the MongoDB collection.

That’s it. There is nothing more to do.

Let’s access the version data for the client object we’ve just added and retrieved earlier in this tutorial. Every object received from BarbelHisto can be cast to Bitemporal to access its version data.

Bitemporal clientBitemporal = (Bitemporal)client;
System.out.println(clientBitemporal.getBitemporalStamp().toString());

If you’d like to print out what MongoDB knows about your client, then pretty print the document journal like so:

System.out.println(mongoBackedHisto.prettyPrintJournal("1234"));

This should return a printout that looks similar to this one:

Document-ID: 1234

|Version-ID                              |Effective-From |Effective-Until |State   |Created-By           |Created-At                                   |Inactivated-By       |Inactivated-At                               |Data                           |
|----------------------------------------|---------------|----------------|--------|---------------------|---------------------------------------------|---------------------|---------------------------------------------|-------------------------------|
|d18cd394-aa62-429b-a23d-46e935f80e71    |2019-03-01     |999999999-12-31 |ACTIVE  |SYSTEM               |2019-03-01T10:46:27.236+01:00[Europe/Berlin] |NOBODY               |2199-12-31T23:59:00Z                         |EffectivePeriod [from=2019-03- |

You can get the complete code from this tutorial in this test case.

Configuring for Performance

In the previous paragraphs, I’ve used the lazy loading and update listeners to integrate a MongoCollection via the synchronous event bus. This configuration has advantages, but also some drawbacks, especially in scenarios where high performance is a key requirement.

Registering listeners with the synchronous event bus eases error handling because clients can react immediately when an exception is thrown in a listener. On the other hand, synchronous means waiting(!) for a response, which isn’t always necessary, especially for update operations.
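
For example, with the synchronous setup shown earlier, a failing MongoDB listener can be handled right at the call site (a minimal sketch, assuming listener failures propagate to the caller as runtime exceptions):

try {
    mongoBackedHisto.save(client, LocalDate.now(), LocalDate.MAX);
} catch (RuntimeException e) {
    // the synchronous MongoDB listener failed, so the client can react
    // immediately, e.g. retry the save or report the error to the caller
    e.printStackTrace();
}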

Also, the lazy loading listener requires the user to pass the journal ID in queries to work properly. In some situations, this isn’t enough. Clients may want to define complex custom queries against BarbelHisto that combine many attributes but contain no document IDs. In fact, these complex queries should return the document IDs as a result rather than take them as a parameter, for instance, when an address or client name is known and the user needs to find the corresponding client record or policy number (which is the document ID in many cases).

To address the complex query and performance requirements, you can configure BarbelHisto in a more advanced performance setup. One option is to forgo the lazy loading listener and instead use a disk-persistent indexed collection as the object query pool, while registering the SimpleMongoUpdateListener with the asynchronous event bus. In such a setup, complex queries can be defined without restrictions, the data is still shadowed to a MongoCollection of your choice, and everything works considerably faster than in the synchronous scenario described above.

// define primary key in POJO class -> versionId!
SimpleAttribute<Client, String> primaryKey =
    new SimpleAttribute<Client, String>("versionId") {
        public String getValue(Client object, QueryOptions queryOptions) {
            return (String) ((Bitemporal) object).getBitemporalStamp().getVersionId();
        }
    };
// define the update listener
SimpleMongoUpdateListener updateListener = SimpleMongoUpdateListener.create(mongoClient,
        "testSuiteDb", "testCol", Client.class, BarbelHistoContext.getDefaultGson());
// make the BarbelHisto backbone disk-persistent and register the
// Mongo update listener with the asynchronous event bus
BarbelHisto<Client> histo = BarbelHistoBuilder.barbel()
        .withBackboneSupplier(() -> new ConcurrentIndexedCollection<Client>(
                DiskPersistence.onPrimaryKeyInFile(primaryKey, new File("test.dat"))))
        .withAsynchronousEventListener(updateListener)
        .build();


In the first step, you define the primaryKey, which is mandatory when you use disk persistence. Use the versionId as the primary key, as demonstrated above. Then define the update listener and register it with the BarbelHistoBuilder. Also register a DiskPersistence backbone using the builder. In the above example, your data is kept in the test.dat file and also in the shadow MongoCollection.
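
With such a backbone, queries no longer have to carry a document ID. As an illustration only (the name attribute and its getName() getter are assumptions, not part of BarbelHisto), a CQEngine attribute over the client's name allows lookups that return the matching clients, and therefore their document IDs:

// hypothetical attribute over the POJO's name field (assumes a getName() getter)
SimpleAttribute<Client, String> name = new SimpleAttribute<Client, String>("name") {
    public String getValue(Client object, QueryOptions queryOptions) {
        return object.getName();
    }
};
// find clients by name without knowing the document ID up front
List<Client> smiths = histo.retrieve(QueryFactory.equal(name, "Smith"));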

Look into this test case to see the complete scenario, and let us know your thoughts and questions in the comments below.


How to connect to MongoDB database from Node.js?

Use Node.js? Want to learn MongoDB? In today’s post, we’ll work through connecting to a MongoDB database from a Node.js script, retrieving a list of databases, and printing the results to your console.

Set up

Before we begin, we need to ensure you’ve completed a few prerequisite steps.

Install Node.js

First, make sure you have a supported version of Node.js installed (the MongoDB Node.js Driver requires Node 4.x or greater; for these examples, I've used Node.js 10.16.3).

Install the MongoDB Node.js Driver

The MongoDB Node.js Driver allows you to easily interact with MongoDB databases from within Node.js applications. You’ll need the driver in order to connect to your database and execute the queries described in this Quick Start series.

If you don’t have the MongoDB Node.js Driver installed, you can install it with the following command.

npm install mongodb

At the time of writing, this installed version 3.3.2 of the driver. Running npm list mongodb will display the currently installed driver version number. For more details on the driver and installation, see the official documentation.

Create a free MongoDB Atlas cluster and load the sample data

Next, you’ll need a MongoDB database. Your database will be stored inside of a cluster. At a high level, a cluster is a set of nodes where copies of your database will be stored.

The easiest way to get started with MongoDB is to use Atlas, MongoDB’s fully-managed database-as-a-service. Head over to Atlas and create a new cluster in the free tier. Once your cluster is created, load the sample data.

Get started with an M0 cluster on Atlas today. It's free forever, and it’s the easiest way to try out the steps in this blog series.

If you’re not familiar with how to create a new cluster and load the sample data, check out this video tutorial from MongoDB Developer Advocate Maxime Beugnet.

Get your cluster’s connection info

The final step is to prep your cluster for connection.

In Atlas, navigate to your cluster and click CONNECT. The Cluster Connection Wizard will appear.

The Wizard will prompt you to whitelist your current IP address and create a MongoDB user if you haven’t already done so. Be sure to note the username and password you use for the new MongoDB user as you’ll need them in a later step.

Next, the Wizard will prompt you to choose a connection method. Select Connect Your Application. When the Wizard prompts you to select your driver version, select Node.js and 3.0 or later. Copy the provided connection string.

For more details on how to access the Connection Wizard and complete the steps described above, see the official documentation.

Connect to your database from a Node.js application

Now that everything is set up, it’s time to code! Let’s write a Node.js script that connects to your database and lists the databases in your cluster.

Import MongoClient

The MongoDB module exports MongoClient, and that’s what we’ll use to connect to a MongoDB database. We can use an instance of MongoClient to connect to a cluster, access the database in that cluster, and close the connection to that cluster.

const {MongoClient} = require('mongodb');

Create our main function

Let’s create an asynchronous function named main() where we will connect to our MongoDB cluster, call functions that query our database, and disconnect from our cluster.

The first thing we need to do inside of main() is create a constant for our connection URI. The connection URI is the connection string you copied in Atlas in the previous section. When you paste the connection string, don’t forget to update <username> and <password> to be the credentials for the user you created in the previous section. Note: the username and password you provide in the connection string are NOT the same as your Atlas credentials.

/**
 * Connection URI. Update <username>, <password>, and <your-cluster-url> to reflect your cluster.
 * See https://docs.mongodb.com/ecosystem/drivers/node/ for more details
 */
const uri = "mongodb+srv://<username>:<password>@<your-cluster-url>/test?retryWrites=true&w=majority";

Now that we have our URI, we can create an instance of MongoClient.

const client = new MongoClient(uri);

Note: When you run this code, you may see DeprecationWarnings around the URL string parser and the Server Discover and Monitoring engine. If you see these warnings, you can remove them by passing options to the MongoClient. For example, you could instantiate MongoClient by calling new MongoClient(uri, { useNewUrlParser: true, useUnifiedTopology: true }). See the Node.js MongoDB Driver API documentation for more information on these options.

Now we’re ready to use MongoClient to connect to our cluster. client.connect() will return a promise. We will use the await keyword when we call client.connect() to indicate that we should block further execution until that operation has completed.

await client.connect();

Now we are ready to interact with our database. Let’s build a function that prints the names of the databases in this cluster. It’s often useful to contain this logic in well named functions in order to improve the readability of your codebase. Throughout this series, we’ll create new functions similar to the function we’re creating here as we learn how to write different types of queries. For now, let’s call a function named listDatabases().

await listDatabases(client);

Let’s wrap our calls to functions that interact with the database in a try/catch statement so that we handle any unexpected errors.

try {
    await client.connect();

    await listDatabases(client);

} catch (e) {
    console.error(e);
}

We want to be sure we close the connection to our cluster, so we’ll end our try/catch with a finally statement.

finally {
    await client.close();
}

Once we have our main() function written, we need to call it. Let’s send the errors to the console.

main().catch(console.error);

Putting it all together, our main() function and our call to it will look something like the following.

async function main(){
    /**
     * Connection URI. Update <username>, <password>, and <your-cluster-url> to reflect your cluster.
     * See https://docs.mongodb.com/ecosystem/drivers/node/ for more details
     */
    const uri = "mongodb+srv://<username>:<password>@<your-cluster-url>/test?retryWrites=true&w=majority";

    const client = new MongoClient(uri);

    try {
        // Connect to the MongoDB cluster
        await client.connect();

        // Make the appropriate DB calls
        await  listDatabases(client);

    } catch (e) {
        console.error(e);
    } finally {
        await client.close();
    }
}

main().catch(console.error);

List the databases in our cluster

In the previous section, we referenced the listDatabases() function. Let’s implement it!

This function will retrieve a list of databases in our cluster and print the results in the console.

async function listDatabases(client){
    // list the databases in the connected cluster
    const databasesList = await client.db().admin().listDatabases();

    console.log("Databases:");
    databasesList.databases.forEach(db => console.log(` - ${db.name}`));
}

Save Your File

You’ve been implementing a lot of code. Save your changes, and name your file something like connection.js. To see a copy of the complete file, visit the nodejs-quickstart GitHub repo.

Execute Your Node.js Script

Now you’re ready to test your code! Execute your script by running a command like the following in your terminal: node connection.js

You will see output like the following:

Databases:
 - sample_airbnb
 - sample_geospatial
 - sample_mflix
 - sample_supplies
 - sample_training
 - sample_weatherdata
 - admin
 - local

What’s next?

Today, you were able to connect to a MongoDB database from a Node.js script, retrieve a list of databases in your cluster, and view the results in your console. Nice!

In future posts in this series, we’ll dive into each of the CRUD (create, read, update, and delete) operations as well as topics like change streams, transactions, and the aggregation pipeline, so you’ll have the tools you need to successfully interact with data in your databases.

Series versions

The examples in this article were created with the following application versions:

MongoDB: 4.0
MongoDB Node.js Driver: 3.3.2
Node.js: 10.16.3

MongoDB Tutorial - How to backup and restore a database

This is a quick example showing how to easily back up and restore a MongoDB database from the command line, using the current date as the backup folder name (YYYYMMDD).

Originally published at https://jasonwatmore.com

Backup MongoDB database using date for the folder name

The mongodump command will create a backup / dump of the MongoDB database with the name specified by the --db [DB NAME] argument.

The --out /var/backups/`date +"%Y%m%d"` argument specifies the output directory as /var/backups/[TODAY'S DATE], e.g. /var/backups/20190903.

sudo mongodump --db [DB NAME] --out /var/backups/`date +"%Y%m%d"`

Restore MongoDB database from backup

The mongorestore command restores a database to the destination --db [DB NAME] from the specified directory, e.g. /var/backups/20190903.

sudo mongorestore --db [DB NAME] /var/backups/[BACKUP FOLDER NAME]

Thanks for reading



Learn MongoDB - MongoDB Tutorial for Beginners - Getting Started with MongoDB - Part 3/3

What you’ll learn

  • Work with MongoDB with Clarity and Confidence
  • Use 4 tools easily: MongoCHEF, NOSQL Manager, RoboMongo, MongoBooster
  • Do Regex, GridFS, Replication, Sharding, Full text search
  • Basic and Advanced CRUD operations using MongoDB
  • Import and Export data from MongoDB
  • Work with MapReduce, Embedded Documents, Save & Insert, indexing, capped collections, TTL
  • Bonus section: Use Java, C#, PHP, Node.js to access MongoDB features like CRUD, GridFS
  • Bonus section: 50 minutes of MongoDB key feature exercises
  • 100+ Quizzes and 40+ Activities
