Hands-on tutorial on the basics of the Kafka Streams API and KSQL

Event-driven architectures comprise complex business processes interconnected with streams of events. These are often online service use cases, but also backend processes such as billing, fulfillment or fraud detection, which may need to be decoupled from the frontend where users click buttons and expect things to happen.


The event-driven model provides many benefits: it decouples services from one another, adds a degree of pluggability to the architecture, enables services to evolve independently, and so on.


In his book Designing Event-Driven Systems, Ben Stopford explains how event-driven architectures can be used to build business-critical systems. He describes the value of turning databases “inside out” and treating event streams as a “source of truth.” Among other things, the book shows you how to apply patterns, including event collaboration, event sourcing and CQRS for building microservices and event-oriented architectures.


Such systems typically use Apache Kafka® as the foundation. Kafka acts as a central data plane that holds shared events and keeps services in sync. Its distributed cluster technology provides availability, resiliency and performance properties that strengthen the architecture, leaving the programmer to simply write and deploy client applications that will run load balanced and be highly available.


If you are ready to move from reading about these fundamental concepts to more hands-on learning, Confluent offers several resources:


This two-part blog series will help you develop and validate real-time streaming applications. In part 1, we introduce a new resource: a hands-on tutorial for developers.


In part 2, we will look at validating those streaming applications. For now, let’s talk about this new tutorial for developers.


Tutorial for developers

This free, self-paced tutorial is a great introduction for developers who are just getting started with stream processing. You will learn the basics of the Kafka Streams API, which is far richer than the plain Kafka producer and consumer clients, along with common patterns for designing and building event-driven applications.

The tutorial is based on a small microservices ecosystem, showcasing an order management workflow, such as one you might find in retail and online shopping. It is built using Kafka Streams, whereby business events that describe the order management workflow propagate through this ecosystem. The blog post Building a Microservices Ecosystem with Kafka Streams and KSQL outlines the approach used.

In this example, the system centers on an Orders Service which exposes a REST interface to POST and GET Orders. Posting an Order creates an event in Kafka that is recorded in the topic orders. This is picked up by three different validation engines (Fraud Service, Inventory Service and Order Details Service), which validate the order in parallel, emitting a PASS or FAIL based on whether each validation succeeds.


The result of each validation is pushed through a separate topic, Order Validations, so that we retain the single-writer status of the Orders Service → Orders Topic (Ben Stopford’s book discusses several options for managing consistency in event collaboration). The results of the various validation checks are aggregated in the Validation Aggregator Service, which then moves the order to a Validated or Failed state, based on the combined result.


To allow users to GET any order, the Orders Service creates a queryable materialized view (embedded inside the Orders Service), using a state store in each instance of the service, so that any Order can be requested historically. Note also that the Orders Service can be scaled out over a number of nodes, in which case GET requests must be routed to the correct node to get a certain key. This is handled automatically using the interactive queries functionality in Kafka Streams.
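
To make the routing concrete, here is a minimal sketch (not the tutorial’s actual code) of how a GET handler can use interactive queries: it asks Kafka Streams which instance hosts the key, forwards the request if the key lives on another instance, and otherwise reads from the local state store. The store name "orders-store", the String-typed values and the fetchRemotely helper are assumptions, and the exact method names vary between Kafka Streams versions.

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;
import org.apache.kafka.streams.state.StreamsMetadata;

public class OrderLookup {

    // Returns the order payload for orderId, assuming a store named "orders-store"
    // holding String values (the tutorial itself stores Order objects).
    public String getOrder(KafkaStreams streams, String orderId, String thisHost) {
        // Ask Kafka Streams which application instance hosts this key.
        StreamsMetadata metadata =
                streams.metadataForKey("orders-store", orderId, Serdes.String().serializer());

        if (!metadata.host().equals(thisHost)) {
            // The key lives on another instance: forward the HTTP GET there.
            return fetchRemotely(metadata.host(), metadata.port(), orderId);
        }

        // The key is local: query the materialized view directly.
        ReadOnlyKeyValueStore<String, String> store =
                streams.store("orders-store", QueryableStoreTypes.keyValueStore());
        return store.get(orderId);
    }

    private String fetchRemotely(String host, int port, String orderId) {
        // Hypothetical helper: issue an HTTP GET against the owning instance.
        throw new UnsupportedOperationException("remote lookup not shown");
    }
}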


The Orders Service also includes a blocking HTTP GET so that clients can read their own writes. In this way, we bridge the synchronous, blocking paradigm of a RESTful interface with the asynchronous, non-blocking processing performed server side.


There is a simple service that sends emails, and another that collates orders and makes them available in a search index using Elasticsearch.


Finally, Confluent KSQL is running with persistent queries to enrich streams and to also check for fraudulent behavior.


Here is a diagram of the microservices and the related Kafka topics:

To use the tutorial, first you have to properly set up your environment. You can use a local Confluent Platform install or Docker.


Then run the full end-to-end working solution, which requires no code development, to see a customer-representative deployment of a streaming application. This provides context for each of the exercises in which you will develop pieces of the microservices.


After you have successfully run the full solution, go through the individual exercises in the tutorial to better understand the basic principles of streaming applications. For each exercise, the tutorial provides a stub file in which you complete the code. By working through these exercises, you will learn the patterns for writing solid streaming applications and gain experience with the Kafka Streams API. Complete the code (there are hints if you need them!), then compile and run the provided tests to ensure it works!


The tutorial walks through the following exercises:


Exercise 1: Persist events

In this exercise, you will persist events into Kafka by producing records that represent customer orders. An event is simply a thing that happened or occurred. An event in a business is some fact that occurred, such as a sale, an invoice, a trade, a customer experience, etc., and it is the source of truth. In event-oriented architectures, events are first-class citizens that constantly push data into applications. Client applications can then react to these streams of events in real time and decide what to do next.
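
As a flavor of what this first exercise involves, the sketch below produces a single order event to the orders topic using the plain Java producer client. The broker address and the JSON-string payload are assumptions made to keep the snippet self-contained; the tutorial itself works with typed Order objects and Avro serialization.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class OrderProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Key = order id, value = order payload. A JSON string stands in here for the
            // tutorial's Avro-serialized Order object.
            producer.send(new ProducerRecord<>("orders", "order-1",
                    "{\"id\":\"order-1\",\"customerId\":15,\"state\":\"CREATED\"}"));
        }
    }
}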

Exercise 2: Event-driven applications

In this exercise, you will let the order event itself trigger a service. In such an event-driven design, the event stream is the inter-service communication, which leads to less coupling and fewer queries, enables services to cross deployment boundaries and avoids synchronous execution. In contrast, service-based architectures are often designed to be request driven, in which services send commands to other services to tell them what to do, await a response or send queries to get the resulting state.

A visual summary of commands, events and queries
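
To illustrate the event-driven style, here is a hedged sketch (not the tutorial’s solution) of a service that is triggered purely by order events: it subscribes to the orders topic with Kafka Streams and performs a side effect for every new record, with no request/response call involved. The application id, broker address and plain String serdes are assumptions.

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class EmailService {
    public static void main(String[] args) {
        // Application id and String serdes are assumptions for this sketch.
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "email-service");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // The arrival of an order event is itself the trigger: no other service
        // has to call this one to tell it what to do.
        KStream<String, String> orders = builder.stream("orders");
        orders.foreach((orderId, order) ->
                System.out.println("Sending confirmation email for " + orderId));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}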


Exercise 3: Enriching streams with joins

In this exercise, you will write a service that enriches the streaming order information by joining it with streaming payment information and data from a customer database. Many stream processing applications in practice are coded as streaming joins. For example, applications backing an online shop might need to access multiple updating database tables (e.g., sales prices, inventory, customer information) in order to enrich a new data record (e.g., customer transaction) with context information. In these scenarios, you may need to perform table lookups at very large scale and with a low processing latency.

A stateful streaming service that joins two streams at runtime

A popular pattern is to make the information in the databases available in Kafka through so-called change data capture (CDC), together with Kafka’s Connect API to pull in the data from the database (read more in Robin Moffatt’s blog post No More Silos: How to Integrate Your Databases with Apache Kafka and CDC). Once the data is in Kafka, client applications can perform very fast and efficient joins of such tables and streams, rather than requiring the application to make a query to a remote database over the network for each record.
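
The sketch below shows the general shape of such joins in the Kafka Streams DSL: a windowed stream-stream join of orders with payments, followed by a stream-table join against a customers table (which could be populated via CDC as described above). The topic names, String payloads, one-minute join window and output topic are illustrative assumptions, not the tutorial’s exact code, and default String serdes are assumed to be configured.

import java.time.Duration;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;

public class OrderEnricher {
    // Builds the enrichment topology: orders joined with payments (stream-stream),
    // then with a customers table (stream-table). All names and String payloads
    // are illustrative; default String serdes are assumed to be configured.
    static void buildTopology(StreamsBuilder builder) {
        KStream<String, String> orders = builder.stream("orders");
        KStream<String, String> payments = builder.stream("payments");
        KTable<String, String> customers = builder.table("customers");

        // Stream-stream join: pair each order with its payment arriving within one minute.
        KStream<String, String> ordersWithPayments = orders.join(
                payments,
                (order, payment) -> order + "|" + payment,
                JoinWindows.of(Duration.ofMinutes(1)));

        // Stream-table join: look up the customer record for each order key.
        ordersWithPayments
                .join(customers, (orderPayment, customer) -> orderPayment + "|" + customer)
                .to("enriched-orders");
    }
}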

Exercise 4: Filtering and branching

Kafka can capture a lot of information related to an event in a single Kafka topic. Client applications can then manipulate that data based on some user-defined criteria to create new streams of data that they can act on. In this exercise, you will define one set of criteria to filter records in a stream, and then another set of criteria to branch records into two different streams.
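
A minimal sketch of both operations with the Kafka Streams DSL follows; the predicates and topic names are made up for illustration and are not the tutorial’s actual criteria.

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;

public class FilterAndBranch {
    // Predicates and topic names below are made-up criteria for illustration.
    @SuppressWarnings("unchecked")
    static void buildTopology(StreamsBuilder builder) {
        KStream<String, String> orders = builder.stream("orders");

        // Filtering: keep only records that match the first set of criteria.
        KStream<String, String> validated =
                orders.filter((key, value) -> value.contains("VALIDATED"));

        // Branching: split the filtered stream into two new streams.
        KStream<String, String>[] branches = validated.branch(
                (key, value) -> value.contains("UK"),   // branch 0: UK orders
                (key, value) -> true);                  // branch 1: everything else

        branches[0].to("uk-orders");
        branches[1].to("other-orders");
    }
}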

Exercise 5: Stateful operations

In this exercise, you will create a session window to define five-minute windows for processing. Aggregations combine current record values with previous record values; they are stateful operations because they maintain data during processing. Oftentimes, they are combined with windowing capabilities in order to run computations in real time over a window of time. Additionally, you will use a stateful operation to collapse duplicate records in a stream.
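
For instance, a session-windowed count over the orders stream might look like the sketch below, where a session closes after five minutes of inactivity. The topic name, the String types and the choice of a simple count() as the aggregation are assumptions for illustration.

import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.SessionWindows;
import org.apache.kafka.streams.kstream.Windowed;

public class SessionCount {
    static void buildTopology(StreamsBuilder builder) {
        // Topic name and String types are assumptions for this sketch.
        KStream<String, String> orders = builder.stream("orders");

        // Count orders per key within sessions that close after five minutes of inactivity.
        // The resulting KTable could be written to an output topic or queried interactively.
        KTable<Windowed<String>, Long> ordersPerSession = orders
                .groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
                .windowedBy(SessionWindows.with(Duration.ofMinutes(5)))
                .count();
    }
}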

Exercise 6: State stores

In this exercise, you will create a state store, which is a disk-resident hash table held inside the API for the client application. The state store can be used within stream processing applications to store and query data, an important capability when implementing stateful operations. It can be used to remember recently received input records, to track rolling aggregates, to de-duplicate input records, and more.

State stores in Kafka Streams can be used to create use-case-specific views right inside the service

A state store is also backed by a Kafka topic and comes with all the Kafka guarantees. Consequently, other applications can also interactively query another application’s state store. Querying state stores is always read-only to guarantee that the underlying state stores will never be mutated out of band (i.e., you cannot add new entries).
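
Here is a brief sketch of both sides, under assumed names and types: the topology materializes a per-key count into a named state store, and a separate method queries that store read-only through the interactive queries API (the store API changed slightly in newer Kafka Streams versions).

import java.util.Properties;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

public class OrderCountStore {

    // Builds a topology that materializes a per-key count of order events into a named
    // state store. Topic and store names are assumptions; default String serdes are assumed.
    static KafkaStreams build(Properties props) {
        StreamsBuilder builder = new StreamsBuilder();
        builder.<String, String>stream("orders")
                .groupByKey()
                .count(Materialized.<String, Long, KeyValueStore<Bytes, byte[]>>as("orders-count-store"));
        return new KafkaStreams(builder.build(), props);
    }

    // Interactive query: the store is exposed read-only, so callers can get() but never put().
    static Long countFor(KafkaStreams streams, String orderId) {
        ReadOnlyKeyValueStore<String, Long> store =
                streams.store("orders-count-store", QueryableStoreTypes.keyValueStore());
        return store.get(orderId);
    }
}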


Exercise 7: Enrichment with KSQL

Confluent KSQL is the streaming SQL engine that enables real-time data processing against Apache Kafka. It provides an easy-to-use, yet powerful interactive SQL interface for stream processing on Kafka, without requiring you to write code in a programming language such as Java or Python.

KSQL is scalable, elastic, fault tolerant and able to support a wide range of streaming operations, including data filtering, transformations, aggregations, joins, windowing and sessionization. In this exercise, you will create one persistent query that enriches the orders stream with customer information. You will create another persistent query that detects fraudulent behavior by counting the number of orders in a given window.

Interested in more?

The new tutorial Introduction to Streaming Application Development is a great introduction for developers to learn the basics of the Kafka Streams API, and apply them to a retail microservices example with an event-driven architecture. With each of these exercises, you can dive in and run the end-to-end automated demo.

We hope you will stay with us for part 2 of this blog series, which will help you be successful in validating your streaming applications and cover unit testing, integration testing, Avro and schema compatibility testing, Confluent Cloud™ tools and multi-datacenter testing.

Validating that a solution works is just as important as implementing one. It provides assurance that the application is working as designed, can handle unexpected events and can evolve without breaking existing functionality.

Learn to Build SQL Query | Ultimate SQL and Database Concepts | Simpliv

Description
SQL developers are earning higher salaries in the IT industry, but it's not just about writing queries; it's about understanding and applying the right query at the right time. This course will help you understand complex SQL statements in an easy way.

Moreover, this course will teach you how to extract data from a database and write complex queries against it. The course takes a broad view, covering Structured Query Language (SQL) concepts as a whole, whether students work with MySQL, Microsoft SQL Server, Oracle Server, etc.

This course has 5 chapters in which you will learn:

Chapter 1 Fundamentals

Fundamentals
Building Blocks
Selecting Records from DB
Working with Arithmetic Expressions
Chapter 2 Conditioning Sorting and Operators

Logical Operators
Comparison Operators
Operator Precedence
Sorting Results
Chapter 3 Functions

Character Functions
Number Functions
Date Functions
Conversions
General Purpose Functions
Nesting Functions
Chapter 4 Grouping

Multiple Row Functions on a single Table
Multiple Row Functions on Many Tables
Chapter 5 Joins

Understanding Primary Key
Understanding Foreign Key
Understanding Need of Joins
Cartesian Product
Equi Join, Simple Join, Self Join
Non-Equi Join
Outer Join
Self Join
The course is designed for college and university students who want solid SQL and database concepts in a short period of time.

Who this course is for:

Beginners
University or College students
Anyone who wants Solid SQL Concepts
Basic knowledge
No prior knowledge is required
PC or MAC
What you will learn
SQL Fundamentals
Understand Complex SQL Concepts in Easy way using daily life examples
Construct SQL Statements
Use SQL to retrieve data from database
Selecting Data From Database
Restricting and Sorting Data from DB
Grouping Data From DB
Construct SQL statements that will let you work with more than two tables
Use SQL Functions
Work with SQL Operators and find out precedence
Nesting in SQL
Joins

How to Build a CRUD API with Java, MongoDB, and Spring Boot


This tutorial shows how to build a CRUD API with Java, MongoDB, and Spring Boot. You will create a Java data model class and map it to a MongoDB domain document using Spring Data annotations, use a simple embedded MongoDB database as the datastore, and use Spring Boot to quickly and easily expose your data model via a REST API. Finally, you will secure the REST API using Okta and Okta’s Spring Boot Starter.

This tutorial leverages two technologies that are commonly used to build web services: MongoDB and Java (we’ll actually use Spring Boot). MongoDB is a NoSQL database, a generic term for non-relational databases that differentiates them from relational databases. Relational databases, such as MySQL, Postgres, and SQL Server, store data in large tables with well-defined structures. These structures are strong and tight and not easily changed or customized on a per-record basis (this structure can also be a strength, depending on the use case, but we won’t get too deep into that here). Further, because relational databases grew up pre-internet, they were designed to run on monolithic servers. This makes them hard to scale and sync across multiple machines.

NoSQL databases like MongoDB were developed, to a large degree, to fit the needs of internet scaling, where server loads can balloon dramatically and the preferred growth pattern is the replication of servers, not scaling a single monolithic server. MongoDB is a document-based database that natively stores JSON and was built for distributed scaling. Mongo documents are JSON objects and have no predetermined structure on the side of the database. The structure of the documents is determined by the application and can be changed dynamically, adding or removing fields as needed. This means that Mongo documents are very flexible (possibly a blessing and a curse, FYI). Also, because MongoDB stores JSON documents, it has become very popular with many of the JS-based front ends where JavaScript is king and JSON is easily handled.

Spring Boot is an easy-to-use web application framework from Spring that can be used to create enterprise web services and web applications. They’ve done an admirable job simplifying the underlying complexity of the Spring framework, while still exposing all of its power. And no XML required! Spring Boot can be deployed in a traditional WAR format or can be run stand-alone using embedded Tomcat (the default), Jetty, or Undertow. With Spring you get the benefit of literally decades of proven enterprise Java expertise - Spring has run thousands of production applications - combined with the simplicity of a modern, “just works” web framework, incredible depth of features, and great community support.

In this tutorial, you will create a simple Java class file to model your data, you will store this data in a MongoDB database, and you will expose the data with a REST API. To do this, you will use Spring Boot and Spring Data.

Once you have created an unsecured REST API, you are going to use Okta and Spring Security (along with Okta’s Spring Boot Starter) to quickly and easily add JSON Web Token (JWT) authentication to your web service.

Install Java, Spring Boot, MongoDB, and Other Project Dependencies

You’ll need to install a few things before you get started.

Java 11: This project uses Java 11. If you don’t have Java 11, you can install OpenJDK. Instructions are found on the OpenJDK website. OpenJDK can also be installed using Homebrew. SDKMAN is another great option for installing and managing Java versions.

HTTPie: This is a simple command-line utility for making HTTP requests. You’ll use this to test the REST application. Check out the installation instructions on their website.

Okta Developer Account: You’ll be using Okta as an OAuth/OIDC provider to add JWT authentication and authorization to the application. Go to developer.okta.com/signup and sign up for a free developer account, if you haven’t already.

Download a Skeleton Project From Spring Initializr

To create a skeleton project, you can use Spring Initializr. It’s a great way to quickly configure a starter for a Spring Boot project.

Open this link to view and download your pre-configured starter project on Spring Initializr.

Take a look at the settings if you like. You can even preview the project by clicking the Explore button at the bottom of the page.

Once you’re ready, click the green Generate button at the bottom of the page to download the starter project to your computer.

The starter for this project is a Spring Boot 2.2.2 project that uses Java as the application language and Gradle as the build system (there are other options for both). We’ve covered Gradle in-depth in a few other posts (see below), so we won’t go into too much detail here, except to say that you won’t need to install anything for Gradle to work because of the Gradle wrapper, which includes a version of Gradle with the project.

The included dependencies in this project are:

  • Spring Web (spring-boot-starter-web): web application functionality
  • Spring Data MongoDB (spring-boot-starter-data-mongodb): MongoDB functionality
  • Embedded MongoDB Database (de.flapdoodle.embed.mongo): embed an in-memory MongoDB database, great for testing and tutorials like this
  • Rest Repositories (spring-boot-starter-data-rest): needed for the @RepositoryRestResource annotation, which allows us to quickly generate a REST api from our domain classes
  • Okta (okta-spring-boot-starter): starter that simplifies integrating OAuth 2.0 and OIDC authentication and authorization
  • Lombok (lombok): a getter, constructor, and setter helper generator via annotations

Before you do anything else, you need to make two changes to the build.gradle file.

  1. Temporarily comment out the dependency okta-spring-boot-starter
  2. Change de.flapdoodle.embed.mongo from testImplementation to implementation

You’re doing number one because you won’t be configuring the JWT OAuth until later in the tutorial, and the application won’t run with this dependency in it unless it is configured. You’re changing the de.flapdoodle.embed.mongo dependency because typically this embedded database is only used in testing, but for the purposes of this tutorial, you’re using it in the actual implementation. In a production situation, you’d use a real MongoDB instance.

The dependencies {} block should look like this:

dependencies {  
    implementation 'org.springframework.boot:spring-boot-starter-data-mongodb'  
    implementation 'org.springframework.boot:spring-boot-starter-data-rest'  
    implementation 'org.springframework.boot:spring-boot-starter-web'  
    //implementation 'com.okta.spring:okta-spring-boot-starter:1.3.0'  
    compileOnly 'org.projectlombok:lombok'  
    annotationProcessor 'org.projectlombok:lombok'  
    testImplementation('org.springframework.boot:spring-boot-starter-test') {
        exclude group: 'org.junit.vintage', module: 'junit-vintage-engine'
    }
    implementation 'de.flapdoodle.embed:de.flapdoodle.embed.mongo'  
}

With that done, you can run the application using:

./gradlew bootRun

If all goes well, you’ll see a bunch of output that ends with something like:

...
2019-12-16 20:19:16.430  INFO 69710 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8080 (http) with context path ''
2019-12-16 20:19:16.447  INFO 69710 --- [           main] c.o.m.m.MongodboauthApplication          : Started MongodboauthApplication in 16.557 seconds (JVM running for 18.032)

Open a second shell and use HTTPie to make a request:

$ http :8080

HTTP/1.1 200
Content-Type: application/hal+json;charset=UTF-8
Date: Mon, 16 Dec 2019 03:21:21 GMT
Transfer-Encoding: chunked

{
    "_links": {
        "self": {
            "href": "http://localhost:8080/profile"
        }
    }
}

The astute among you might be wondering why this request returned a 200 instead of a 404 (since you haven’t actually defined a controller endpoint).

When you included the spring-boot-starter-data-rest dependency, it included the functionality to automatically generate a “hypermedia-based RESTful front end” (as Spring describes it in their docs).

Create a Hypermedia-based RESTful Front End

What is a “hypermedia-based RESTful front end”? It is a REST API that uses Hypertext Application Language (HAL) format to output descriptive JSON. From the HAL Specification GitHub page:

HAL is a simple format that gives a consistent and easy way to hyperlink between resources in your API. Adopting HAL will make your API explorable, and its documentation easily discoverable from within the API itself. In short, it will make your API easier to work with and therefore more attractive to client developers.

Thus it’s a systematic way for a REST API to describe itself to client applications and for the client applications to easily navigate between the various endpoints.

Currently, there’s not much going on with the application, so there isn’t much to see in the response. It’ll make more sense a little later as we add endpoints and data.

Create a Domain Class with Java

To get the ball rolling, you need to create a domain class. Your application is going to be a simple refrigerator inventory application. You’ll be able to add, update, delete, and list all the items in a refrigerator. Each item will have 4 properties: 1) a unique ID assigned by the database, 2) a name, 3) an owner, and 4) an expiration date.

Create a RefrigeratorItem Java class and copy and paste the code below into it.

src/main/java/com/okta/mongodb/mongodboauth/RefrigeratorItem.java

package com.okta.mongodb.mongodboauth;  

import lombok.AllArgsConstructor;  
import lombok.Data;  
import lombok.NoArgsConstructor;  
import org.springframework.data.annotation.Id;  
import org.springframework.data.mongodb.core.mapping.Document;  

import java.util.Date;  

@Document  
@Data  
@AllArgsConstructor  
@NoArgsConstructor  
public class RefrigeratorItem {  

    @Id  
    private String id;  
    private String name;
    private String owner;
    private Date expiration;
}

The @Document annotation is the Spring Data annotation that marks this class as defining a MongoDB document data model. The other annotations are Lombok helpers that save us from the drudgery of creating various getters, setters, and constructors. See more about Lombok at the project’s website.

NOTE: If you’re using an IDE for this tutorial, you may need to install and enable the Lombok plugin.

Create a Spring Data Repository

The next step is to define a Spring Data repository. This is where some pretty incredible auto-magicking happens. You’re going to create an interface that extends the Spring Data interface MongoRepository. This parent interface includes all the necessary code for reading and writing our domain class to and from the database. Further, you will use the @RepositoryRestResource annotation to tell Spring Boot to automatically generate a REST endpoint for the data using the HAL JSON spec mentioned above.

Create the repository class shown below.

src/main/java/com/okta/mongodb/mongodboauth/RefrigeratorRepository.java

package com.okta.mongodb.mongodboauth;  

import org.springframework.data.mongodb.repository.MongoRepository;  
import org.springframework.data.rest.core.annotation.RepositoryRestResource;  

@RepositoryRestResource(collectionResourceRel = "fridge", path = "fridge")
public interface RefrigeratorRepository extends MongoRepository<RefrigeratorItem, String> {  
}

You might notice that in the @RepositoryRestResource annotation you are specifying the /fridge URL context for the generated endpoints.

Test the Mongo Repository and Add Some Data

Stop the app (Control-C, if it’s still running) and re-run it.

./gradlew bootRun

Test the home endpoint again.

$ http :8080

HTTP/1.1 200
Content-Type: application/hal+json;charset=UTF-8
Date: Mon, 16 Dec 2019 03:41:05 GMT
Transfer-Encoding: chunked

{
    "_links": {
        "fridge": {
            "href": "http://localhost:8080/fridge{?page,size,sort}",
            "templated": true
        },
        "profile": {
            "href": "http://localhost:8080/profile"
        }
    }
}

This time you’ll see the /fridge endpoint is listed.

Test it out with http :8080/fridge. You should see a response like the one below:

{
    "_embedded": {
        "fridge": []
    },
    "_links": {
        "profile": {
            "href": "http://localhost:8080/profile/fridge"
        },
        "self": {
            "href": "http://localhost:8080/fridge{?page,size,sort}",
            "templated": true
        }
    },
    "page": {
        "number": 0,
        "size": 20,
        "totalElements": 0,
        "totalPages": 0
    }
}

Not a whole lot going on yet, but that’s easily changed. You’re going to use POST requests to add some data to the embedded MongoDB database. But first, you need to configure an application property.

Add the following line to your src/main/resources/application.properties file.

spring.jackson.date-format=MM-dd-yyyy

This tells Spring the expected date format for the expiration property, which will allow it to properly parse the JSON string into a Java date.

Stop (Control-C) and restart the application.

./gradlew bootRun

Now add some data using the following requests.

http POST :8080/fridge name=milk owner=Andrew expiration=01-01-2020 
http POST :8080/fridge name=cheese owner=Andrew expiration=02-10-2020
http POST :8080/fridge name=pizza owner=Andrew expiration=03-30-2020

Check out the inventory now and you should see these new items.

$ http :8080/fridge

HTTP/1.1 200
Content-Type: application/hal+json;charset=UTF-8
Date: Mon, 16 Dec 2019 03:45:23 GMT
Transfer-Encoding: chunked

{
    "_embedded": {
        "fridge": [
            {
                "_links": {
                    "refrigeratorItem": {
                        "href": "http://localhost:8080/fridge/5dae7b4c6a99f01364de916c"
                    },
                    "self": {
                        "href": "http://localhost:8080/fridge/5dae7b4c6a99f01364de916c"
                    }
                },
                "expiration": "01-01-2020",
                "name": "milk",
                "owner": "Andrew"
            },
            {
                "_links": {
                    "refrigeratorItem": {
                        "href": "http://localhost:8080/fridge/5dae7b4d6a99f01364de916d"
                    },
                    "self": {
                        "href": "http://localhost:8080/fridge/5dae7b4d6a99f01364de916d"
                    }
                },
                "expiration": "02-10-2020",
                "name": "cheese",
                "owner": "Andrew"
            },
            {
                "_links": {
                    "refrigeratorItem": {
                        "href": "http://localhost:8080/fridge/5dae7b4f6a99f01364de916e"
                    },
                    "self": {
                        "href": "http://localhost:8080/fridge/5dae7b4f6a99f01364de916e"
                    }
                },
                "expiration": "03-30-2020",
                "name": "pizza",
                "owner": "Andrew"
            }
        ]
    },
    "_links": {
        "profile": {
            "href": "http://localhost:8080/profile/fridge"
        },
        "self": {
            "href": "http://localhost:8080/fridge{?page,size,sort}",
            "templated": true
        }
    },
    "page": {
        "number": 0,
        "size": 20,
        "totalElements": 3,
        "totalPages": 1
    }
}

Notice that the returned JSON gives you the URL for each individual item. If you wanted to delete the first item in the list above, you could run the following request.

http DELETE :8080/fridge/5dae7b4c6a99f01364de916c

The long string, 5dae7b4c6a99f01364de916c, is the unique ID for that item. MongoDB doesn’t use sequential integer IDs like SQL databases often do; it generates unique, random object IDs instead.

If you wanted to update an item, you could use a PUT, as shown below.

http PUT :8080/fridge/5dae7b4f6a99f01364de916e name="old pizza" expiration="03-30-2020" owner="Erin"

Note that with a PUT you have to send data for all the fields, not just the field you want to update, otherwise the omitted fields are set to null. If you just want to update select fields, use a PATCH.

http PATCH :8080/fridge/5dae7b4f6a99f01364de916e owner="Andrew"

With that rather paltry amount of work, you’ve created a MongoDB database model and exposed it to the world using a REST API. Pretty sweet!

The next step is to secure it. The last thing you need is hackers breaking into your house and stealing your pizza and cheese.

Create an OIDC Application for Your Java + MongoDB App

Okta is a software-as-a-service identity management provider. We provide solutions that make adding authentication and authorization to web applications easy. In this tutorial, you are going to use Okta to add JSON Web Token authentication and authorization to your application using OAuth 2.0 and OpenID Connect (OIDC).

OAuth 2.0 is an authorization protocol (verifying what the client or user is allowed to do) and OIDC is an authentication protocol (verifying the identity of the user) built on top of OAuth 2.0. They are a set of open standards that help ensure your web application’s security is handled safely and effectively. Together they provide a complete authentication and authorization protocol.

They are not, however, implementations. That’s where Okta comes in. Okta will be the identity provider and your Spring Boot app will be the client.

You should have already signed up for a free developer account at Okta. Navigate to the developer dashboard at https://developer.okta.com. If this is your first time logging in, you may need to click the Admin button.

To configure JSON Web Token (JWT) authentication and authorization, you need to create an OIDC application.

From the top menu, click on the Applications button. Click the Add Application button.

Select application type Web and click Next.

Give the app a name. I named mine “Spring Boot Mongo”.

Under Login redirect URIs, add a new URI: https://oidcdebugger.com/debug.

Under Grant types allowed, check Implicit (Hybrid).

The rest of the default values will work.

Click Done.

Leave the page open or take note of the Client ID. You’ll need it in a bit when you generate a token.

To test the REST API, you’re going to use the OpenID Connect Debugger to generate a token. This is why you need to add the login redirect URI and allow the implicit grant type.

Configure Spring Boot for OAuth 2.0

Now you need to update the Spring Boot application to use JWT authentication. First, open your build.gradle file and uncomment the okta-spring-boot-starter dependency.

dependencies {  
    ...
    implementation 'com.okta.spring:okta-spring-boot-starter:1.3.0'  <-- UNCOMMENT ME
    ... 
}

Next, open your src/main/resources/application.properties file and add your Okta Issuer URI to it. The Issuer URI can be found by opening your Okta developer dashboard. From the top menu, select API and Authorization Servers. Your Issuer URI can be found in the panel in the row for the default authorization server.

spring.jackson.date-format=MM-dd-yyyy  
okta.oauth2.issuer=https://{yourOktaUrl}/oauth2/default

The last update is to add a new class called SecurityConfiguration.

src/main/java/com/okta/mongodb/mongodboauth/SecurityConfiguration.java

package com.okta.mongodb.mongodboauth;  

import org.springframework.context.annotation.Configuration;  
import org.springframework.security.config.annotation.web.builders.HttpSecurity;  
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;  

@Configuration  
public class SecurityConfiguration extends WebSecurityConfigurerAdapter {  

    @Override  
    public void configure (HttpSecurity http) throws Exception {  
        http.authorizeRequests()
            .anyRequest().authenticated()
            .and()
            .oauth2ResourceServer()
            .jwt();
    }
}

This simple class configures Spring Boot to authenticate all requests and to use an OAuth 2.0 resource server with JWT authentication and authorization.

Now if you restart the application and try a request, you’ll get a 401.

$ http :8080

HTTP/1.1 401
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Content-Length: 0
...

This is the expected response. Your REST API is now protected and requires a valid token.

Generate a Token Using the OpenID Connect Debugger

To access your now-protected server, you need a valid JSON Web Token. The OIDC Debugger is a handy page that will allow you to generate a JWT.

Open the OIDC Debugger.

You will need to fill in the following values.

  • Authorization URI: https://{yourOktaUrl}/oauth2/default/v1/authorize
  • Client ID: the Client ID from your Okta OIDC application
  • State: just fill in any non-blank value (this is used in production to help protect against cross-site forgery attacks)
  • Response type: check box for token

The rest of the default values should work. Scroll down to the bottom and click Send Request.

If all went well, you will see your brand new access token.

Copy the token to your clipboard and store it in a shell variable like so:

TOKEN=eyJraWQiOiJrQkNxZ3o1MmQtOUhVSl94c0x4aGtzYlJxUDVD...

Now you can make authenticated and authorized requests.

$ http :8080 "Authorization: Bearer $TOKEN"

HTTP/1.1 200
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
...

{
    "_links": {
        "fridge": {
            "href": "http://localhost:8080/fridge{?page,size,sort}",
            "templated": true
        },
        "profile": {
            "href": "http://localhost:8080/profile"
        }
    }
}

In this tutorial, you created a Java data model class and mapped it to a MongoDB domain document using Spring Data annotations. You used a simple embedded MongoDB database as the datastore. You used Spring Boot to quickly and easily expose your data model via a REST API. Finally, you secured the REST API using Okta and Okta’s Spring Boot Starter.

The source code for this tutorial is available on GitHub at oktadeveloper/okta-java-mongodb-example.

Migrate Entity Framework Core to SQL Database on Startup


This ASP.NET Core tutorial explains how to migrate Entity Framework Core changes to a SQL database on startup: it shows how to automatically apply database migrations from code in ASP.NET Core, using Entity Framework Core from the Startup.cs file and the EF Core DB Context service.

Example code tested with ASP.NET Core 3.1

This is a super quick example of how to automatically migrate database changes from code in ASP.NET Core using Entity Framework Core from the Startup.cs file.

Solution

Register the EF Core DB Context as an ASP.NET Core Service

The Entity Framework Core DB Context is registered as a service with the ASP.NET Core Dependency Injection (DI) system from the ConfigureServices() method of the Startup.cs file.

// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
    services.AddDbContext<DataContext>(x => x.UseSqlite("Data Source=LocalDatabase.db"));

    ...
}

Use the EF Core DB Context Service to automatically migrate database changes

An instance of the EF Core DB Context service is injected as a parameter into the Configure() method of the Startup.cs file; the DB context instance is then used to apply any pending migrations to the database by calling the Database.Migrate() method.

// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IWebHostEnvironment env, DataContext dataContext)
{
    // migrate any database changes on startup (includes initial db creation)
    dataContext.Database.Migrate();

    ...
}

Extra Info

While updating the tutorial from an EF Core InMemory database to SQLite I ran into some difficulties trying to automatically run database migrations from the Startup.cs. At first I was following a tutorial on the MS Docs website that called services.BuildServiceProvider().GetService<MyDatabaseContext>().Database.Migrate(); from within the ConfigureServices() method, but this resulted in the following warning in the console when I ran the application:

Startup.cs(39,13): warning ASP0000: Calling 'BuildServiceProvider' from application code results in an additional copy of singleton services being created.
Consider alternatives such as dependency injecting services as parameters to 'Configure'.
[/Users/jwatmore/Projects/aspnet-core-3-registration-login-api/WebApi.csproj]