Java is a general-purpose, statically typed, object-oriented programming language whose programs compile to bytecode that runs on the Java Virtual Machine (JVM). The "Java platform" is the name for a computing environment in which the tools for developing and running Java programs are installed.

Helidon: Java Libraries for Microservices

Project Helidon is a set of Java libraries for writing microservices. Helidon supports two programming models:

  • Helidon MP: MicroProfile 3.3
  • Helidon SE: a small, functional style API

In either case your application is just a Java SE program.

Downloads / Accessing Binaries

There are no Helidon downloads. Just use our Maven releases (group ID io.helidon). See the Getting Started guide.
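For example, adding the Helidon SE web server from those Maven releases to a pom.xml might look like the following sketch (the version number is illustrative; check the Helidon documentation for the current release):

```xml
<dependency>
    <groupId>io.helidon.webserver</groupId>
    <artifactId>helidon-webserver</artifactId>
    <!-- illustrative version; use the latest release -->
    <version>2.5.0</version>
</dependency>
```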

Helidon CLI


# MacOS:
curl -O
chmod +x ./helidon
sudo mv ./helidon /usr/local/bin/

# Linux:
curl -O
chmod +x ./helidon
sudo mv ./helidon /usr/local/bin/

# Windows:
PowerShell -Command Invoke-WebRequest -Uri "" -OutFile "C:\Windows\system32\helidon.exe"

See this document for more info.


You need JDK 17+ to build Helidon.

You also need Maven. We recommend 3.6.1 or newer.

Building the documentation requires the dot utility from Graphviz. This is included in many Linux distributions. For other platforms, see the Graphviz downloads page.

Full build

$ mvn install


# cd to the component you want to check
$ mvn validate -Pcheckstyle


# cd to the component you want to check
$ mvn validate -Pcopyright


# cd to the component you want to check
$ mvn verify -Pspotbugs

Build Scripts

Build scripts are located in etc/scripts. These are primarily used by our pipeline, but a couple are handy to use on your desktop to verify your changes.

  • Run a full copyright check
  • Run a full style check


The latest documentation and javadocs are available online.

Get Started

See the Getting Started guide.

Get Help

Get Involved

Stay Informed

Download Details:
Author: oracle
Source Code:
License: Apache-2.0 license

#java #microservice


Eureka: AWS Service Registry for Resilient Mid-tier Load Balancing


Eureka is a RESTful (Representational State Transfer) service that is primarily used in the AWS cloud for the discovery, load balancing, and failover of middle-tier servers. It plays a critical role in Netflix's mid-tier infrastructure.


The build requires Java 8 because some of its dependencies (such as Servo) are built with Java 8, but source and target compatibility are still set to 1.7. Note that a tag should be checked out before performing a build.


For any non-trivial change (or any change that is large in lines of code), please open an issue first to make sure there's alignment on the scope, the approach, and the viability.


Support is mostly community-driven: feel free to open an issue with your question; the maintainers look over these periodically. Issues with the smallest possible reproduction have the best chance of being answered.


Please see the wiki for detailed documentation.

Download Details:
Author: Netflix
Source Code:
License: Apache-2.0 license

#java #microservice


Consul API: Java Client for Consul HTTP API


Java client for Consul HTTP API

Supports all API endpoints, all consistency modes, and parameters (tags, datacenters, etc.)

How to use

ConsulClient client = new ConsulClient("localhost");

// set KV
byte[] binaryData = new byte[] {1, 2, 3, 4, 5, 6, 7};
client.setKVBinaryValue("someKey", binaryData);

client.setKVValue("com.my.app.foo", "foo");
client.setKVValue("com.my.app.bar", "bar");
client.setKVValue("com.your.app.foo", "hello");
client.setKVValue("com.your.app.bar", "world");

// get single KV for key
Response<GetValue> keyValueResponse = client.getKVValue("com.my.app.foo");
System.out.println(keyValueResponse.getValue().getKey() + ": " + keyValueResponse.getValue().getDecodedValue()); // prints "com.my.app.foo: foo"

// get list of KVs for key prefix (recursive)
Response<List<GetValue>> keyValuesResponse = client.getKVValues("com.my");
keyValuesResponse.getValue().forEach(value -> System.out.println(value.getKey() + ": " + value.getDecodedValue())); // prints "com.my.app.foo: foo" and "com.my.app.bar: bar"

// list known datacenters
Response<List<String>> response = client.getCatalogDatacenters();
System.out.println("Datacenters: " + response.getValue());

// register new service
NewService newService = new NewService();
newService.setId("myapp_01");
newService.setName("myapp");
newService.setTags(Arrays.asList("EU-West", "EU-East"));
newService.setPort(8080);
client.agentServiceRegister(newService);

// register new service with associated health check
NewService newServiceWithCheck = new NewService();
newServiceWithCheck.setId("myapp_02");
newServiceWithCheck.setName("myapp");
newServiceWithCheck.setPort(8080);

NewService.Check serviceCheck = new NewService.Check();
serviceCheck.setHttp("http://localhost:8080/health");
serviceCheck.setInterval("10s");
newServiceWithCheck.setCheck(serviceCheck);

client.agentServiceRegister(newServiceWithCheck);

// query for healthy services based on name (returns myapp_01 and myapp_02 if healthy)
HealthServicesRequest request = HealthServicesRequest.newBuilder()
        .setPassing(true)
        .build();
Response<List<HealthService>> healthyServices = client.getHealthServices("myapp", request);

// query for healthy services based on name and tag (returns myapp_01 if healthy)
HealthServicesRequest taggedRequest = HealthServicesRequest.newBuilder()
        .setTag("EU-West")
        .setPassing(true)
        .build();
Response<List<HealthService>> healthyTaggedServices = client.getHealthServices("myapp", taggedRequest);
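A note on getDecodedValue() in the snippet above: Consul's HTTP KV API returns stored values Base64-encoded, so a decoding helper like the client's boils down to a plain JDK Base64 decode. A minimal sketch (the class and method names here are illustrative, not part of consul-api):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class DecodedValueDemo {
    // Consul's HTTP KV API returns the stored value Base64-encoded;
    // a getDecodedValue()-style helper essentially reverses that encoding.
    static String decodeValue(String base64Value) {
        return new String(Base64.getDecoder().decode(base64Value), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // "Zm9v" is the Base64 encoding of "foo"
        System.out.println(decodeValue("Zm9v")); // prints "foo"
    }
}
```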

How to add consul-api into your project


compile "com.ecwid.consul:consul-api:1.4.5"



How to build from sources

  • Checkout the sources
  • ./gradlew build

Gradle will compile the sources, package the classes (along with sources and javadocs) into jars, and run all tests. The build results will be located in the build/libs/ folder.

Download Details:
Author: Ecwid
Source Code:
License: Apache-2.0 license

#java #microservice


Armeria: Asynchronous RPC/REST Client/server Library Written in Java


Build a reactive microservice at your pace, not theirs.

Armeria is your go-to microservice framework for any situation. You can build any type of microservice leveraging your favorite technologies, including gRPC, Thrift, Kotlin, Retrofit, Reactive Streams, Spring Boot and Dropwizard.

It is open-sourced by the creator of Netty and his colleagues at LINE Corporation.


How to reach us — chat, questions and newsletters

Visit the community to chat with us, ask questions and learn how to contribute.

Download Details:
Author: line
Source Code:
License: Apache-2.0 license

#java #microservice

Coding Fan


Best 5 IntelliJ IDEA Plugins for Java Developer

Like me, many Java programmers use IntelliJ IDEA to write code. IDEA provides rich and powerful features, such as automatic code completion, editing and navigation, and powerful search. Working with IntelliJ IDEA gives you a great coding experience. Today, I will recommend five excellent third-party plugins that have greatly improved my coding efficiency.

(00:00) What you will learn
(01:33) GenerateAllSetter Plugin
(10:06) Maven Helper Plugin
(15:08) Codota AI Autocomplete Plugin
(16:47) GsonFormat Plugin
(19:20) Key Promoter X Plugin





Apollo: Java Libraries for Writing Composable Microservices


Apollo is a set of Java libraries that we use at Spotify when writing microservices. Apollo includes modules such as an HTTP server and a URI routing system, making it trivial to implement RESTful API services.

Apollo has been used in production at Spotify for a long time. As a part of the work to release version 1.0.0 we moved the development of Apollo into the open.

There are three main libraries in Apollo:

  • apollo-http-service
  • apollo-api
  • apollo-core

If you need to solve a problem where the main APIs aren't powerful enough, apollo-environment provides more hooks, allowing you to modify the core behaviours of Apollo.

Apollo HTTP Service

The apollo-http-service library is a standardized assembly of Apollo modules. It incorporates both apollo-api and apollo-core and ties them together with other modules to provide a standard API service that uses HTTP for incoming and outgoing communication.

Apollo API

The apollo-api library is the Apollo library you are most likely to interact with. It gives you the tools you need to define your service routes and your request/reply handlers.

Here, for example, we define that our service will respond to a GET request on the path / with the string "hello world":

public static void init(Environment environment) {
    environment.routingEngine()
        .registerAutoRoute(Route.sync("GET", "/", requestContext -> "hello world"));
}

The apollo-api library provides several ways to help you define your request/reply handlers. You can specify how responses should be serialized (such as JSON). Read more about this library in the Apollo API Readme.
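To make the routing idea concrete, here is a toy, framework-free sketch of what a sync route conceptually is: a table mapping an HTTP method plus a path to a handler function. This is an illustration only, not Apollo's implementation; MiniRouter and its methods are invented for this example.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Toy sketch of the idea behind a sync route: a routing table that maps
// "METHOD path" keys to synchronous handler functions.
public class MiniRouter {
    private final Map<String, Function<String, String>> routes = new HashMap<>();

    public void registerRoute(String method, String path, Function<String, String> handler) {
        routes.put(method + " " + path, handler);
    }

    public String handle(String method, String path, String requestBody) {
        Function<String, String> handler = routes.get(method + " " + path);
        return handler == null ? "404 Not Found" : handler.apply(requestBody);
    }

    public static void main(String[] args) {
        MiniRouter router = new MiniRouter();
        router.registerRoute("GET", "/", request -> "hello world");
        System.out.println(router.handle("GET", "/", "")); // prints "hello world"
    }
}
```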

Apollo Core

The apollo-core library manages the lifecycle (loading, starting, and stopping) of your service. You do not usually need to interact directly with apollo-core; think of it merely as "plumbing". For more information about this library, see the Apollo Core Readme.

Apollo Test

In addition to the three main Apollo libraries listed above, to help you write tests for your service we have an additional library called apollo-test. It has helpers to set up a service for testing, and to mock outgoing request responses.

Getting Started with Apollo

Apollo will be distributed as a set of Maven artifacts, which makes it easy to get started no matter the build tool: Maven, Ant + Ivy, or Gradle. Below is a very simple but functional service; more extensive examples are available in the examples directory. Until these artifacts are released, you can build and install Apollo from source by running mvn install.

public final class App {

    public static void main(String... args) throws LoadingException {
        HttpService.boot(App::init, "my-app", args);
    }

    static void init(Environment environment) {
        environment.routingEngine()
            .registerAutoRoute(Route.sync("GET", "/", rc -> "hello world"));
    }
}

Apollo Metadata

Metadata about an Apollo-based service, such as endpoints, is generated at runtime. At Spotify we use this to keep track of our running services. More info can be found here.

Examples from spotify-api-example:

$ curl http://localhost:8080/_meta/0/endpoints

{
  "result": {
    "docstring": null,
    "endpoints": [
      {
        "docstring": "Get the latest albums on Spotify.\n\nUses the public Spotify API to get 'new' albums.",
        "method": [
          "GET"
        ],
        "methodName": "/albums/new[GET]",
        "uri": "/albums/new"
      },
      {
        "docstring": "Responds with a 'pong!' if the service is up.\n\nUseful endpoint for doing health checks.",
        "method": [
          "GET"
        ],
        "methodName": "/ping[GET]",
        "queryParameters": [],
        "uri": "/ping"
      }
    ]
  }
}

$ curl http://localhost:8080/_meta/0/info

{
  "result": {
    "buildVersion": "spotify-api-example-service 1.3.1",
    "componentId": "spotify-api-example-service",
    "containerVersion": "apollo-http 2.0.0-SNAPSHOT",
    "serviceUptime": 778.249,
    "systemVersion": "java 1.8.0_111"
  }
}


Introduction Website
Maven site


Download Details:
Author: spotify
Source Code:
License: Apache-2.0 license

#java #microservice


ActiveJ: A Modern Java Platform Built From The Ground Up


ActiveJ is a modern Java platform built from the ground up. It is designed to be self-sufficient (no third-party dependencies), simple, lightweight and provides competitive performance. ActiveJ consists of a range of libraries, from dependency injection and high-performance asynchronous I/O (inspired by Node.js), to application servers and big data solutions. You can use ActiveJ to build scalable web applications, distributed systems and use it for high-load data processing.

ActiveJ components

ActiveJ consists of several modules, which can be logically grouped into the following categories:

  • Async core - High-performance asynchronous IO with the efficient event loop, NIO, promises, streaming, and CSP. Alternative to Netty, RxJava, Akka, and others. (Promise, Eventloop, Net, CSP, Datastream)
  • HTTP - High-performance HTTP server and client with WebSocket support. It can be used as a simple web server or as an application server. Alternative to other conventional HTTP clients and servers. (HTTP)
  • ActiveJ Inject - Lightweight library for dependency injection. Optimized for fast application start-up and performance at runtime. Supports annotation-based component wiring as well as reflection-free wiring. (ActiveJ Inject)
  • Boot - Production-ready tools for running and monitoring an ActiveJ application. Concurrent control of services lifecycle based on their dependencies. Various service monitoring utilities with JMX and Zabbix support. (Launcher, Service Graph, JMX, Triggers)
  • Bytecode manipulation
    • ActiveJ Codegen - Dynamic bytecode generator for classes and methods on top of ObjectWeb ASM library. Abstracts the complexity of direct bytecode manipulation and allows you to create custom classes on the fly using Lisp-like AST expressions. (ActiveJ Codegen)
    • ActiveJ Serializer - Fast and space-efficient serializers created with bytecode engineering. Introduces schema-free approach for best performance. (ActiveJ Serializer)
    • ActiveJ Specializer - Innovative technology to improve class performance at runtime by automatically converting class instances into specialized static classes and class instance fields into baked-in static fields. Provides a wide variety of JVM optimizations for static classes that are impossible otherwise: dead code elimination, aggressive inlining of methods and static constants. (ActiveJ Specializer)
  • Cloud components
    • ActiveJ FS - Asynchronous abstraction over the file system for building efficient, scalable local or remote file storages that support data redundancy, rebalancing, and resharding. (ActiveJ FS)
    • ActiveJ RPC - High-performance binary client-server protocol. Allows building distributed, sharded, and fault-tolerant microservice applications. (ActiveJ RPC)
    • Various extra services: ActiveJ CRDT, Redis client, Memcache, OLAP Cube, Dataflow
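As a rough illustration of the event-loop model the async core is built around (a toy sketch, not ActiveJ's Eventloop; MiniEventLoop is invented for this example), a single-threaded loop executes queued tasks one at a time, so callbacks never race each other and no locks are needed:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Toy single-threaded event loop: tasks are queued and run one at a time
// on a single thread; a task may enqueue follow-up tasks (continuations).
public class MiniEventLoop {
    private final Queue<Runnable> tasks = new ArrayDeque<>();

    public void post(Runnable task) {
        tasks.add(task);
    }

    public void run() {
        Runnable task;
        while ((task = tasks.poll()) != null) {
            task.run(); // tasks may post follow-up tasks
        }
    }

    public static void main(String[] args) {
        MiniEventLoop loop = new MiniEventLoop();
        StringBuilder log = new StringBuilder();
        loop.post(() -> {
            log.append("read;");
            loop.post(() -> log.append("write;")); // continuation, like a Promise callback
        });
        loop.run();
        System.out.println(log); // prints "read;write;"
    }
}
```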

Quick start

Paste this snippet into your terminal...

mvn archetype:generate -DarchetypeGroupId=io.activej -DarchetypeArtifactId=archetype-http -DarchetypeVersion=5.2

... and open the project in your favorite IDE. Then build the application and run it. Open your browser on localhost:8080 to see the "Hello World" message.

Full-featured embedded web application server with Dependency Injection:

public final class HttpHelloWorldExample extends HttpServerLauncher {
    @Provides
    AsyncServlet servlet() {
        return request -> HttpResponse.ok200().withPlainText("Hello, World!");
    }

    public static void main(String[] args) throws Exception {
        Launcher launcher = new HttpHelloWorldExample();
        launcher.launch(args);
    }
}

Some technical details about the example above:

  • The JAR file size is only 1.4 MB. By comparison, the minimum size of a Spring web application is about 17 MB.
  • The cold start time is 0.65 sec.
  • The ActiveJ Inject DI library used is 5.5 times faster than Guice and hundreds of times faster than Spring.

To learn more about ActiveJ, please visit the ActiveJ website or follow our 5-minute getting-started guide.

Examples of using the ActiveJ platform and all ActiveJ libraries can be found in the examples module.

Release notes for ActiveJ can be found here

Download Details:
Author: activej
Source Code:
License: Apache-2.0 license

#java #microservice


NATS Client: A Java Client for The NATS Messaging System

NATS - Java Client

A Java client for the NATS messaging system.

A Note on Versions

This is version 2.x of the java-nats library. This version is a ground-up rewrite of the original library. Part of the goal of the rewrite was to address the excessive use of threads: we created a Dispatcher construct to allow applications to control thread creation more intentionally. This version also removes all non-JDK runtime dependencies.

The API is simple to use and highly performant.

Version 2+ uses a simplified versioning scheme. Any issues will be fixed in the incremental version number. As a major release, the major version has been updated to 2 to allow clients to limit their use of this new API. With the addition of drain() we updated to 2.1; NKey support moved us to 2.2.

The NATS server was renamed from gnatsd to nats-server around version 2.4.4. This README and other files try to use the new name, but some underlying code may change over several versions. If you are building from source, please keep an eye out for issues and report them.

Version 2.5.0 adds some back pressure to publish calls to alleviate issues when there is a slow network. This may alter performance characteristics of publishing apps, although the total performance is equivalent.

Previous versions are still available in the repo.

Versions 2.11.6 and server versions

Version 2.11.6 is the last java-nats version which is supported to work with server v2.3.4 and earlier. It will not be officially supported to work with servers after v2.3.4, but should be fine if you don't use the queue behavior advertised in example code and provided with java-nats 2.11.5. The example does not work correctly against server versions after server v2.3.4 due to a significant change made to correct queue behavior that was considered wrong.

If you want to take advantage of the fixes and features provided in the server after v2.3.4, you must upgrade to the release version 2.12.0 or later.

SSL/TLS Performance

After recent tests we realized that TLS performance is lower than we would like. After researching the problem and possible solutions we came to a few conclusions:

  • TLS performance for the native JDK has historically not been great
  • TLS performance is better in JDK 12 than in JDK 8
  • A small fix to the library in 2.5.1 allows the use of alternative TLS providers; conscrypt provided the best performance in our tests
  • TLS still comes at a price (1 Gb/s vs 4 Gb/s in some tests), but using the JNI libraries can result in a 10x boost in our testing
  • If TLS performance is reasonable for your application, we recommend using the J2SE implementation for simplicity

To use conscrypt or wildfly, you will need to add the appropriate jars to your class path and create an SSL context manually. This context can be passed to the Options used when creating a connection. The NATSAutoBench example provides a conscrypt flag which can be used to try out the library; manually including the jar is required.

OCSP Stapling

Our server now supports OCSP stapling. To enable Java to automatically check the stapling when making TLS connections, you must set system properties. This can be done from your command line or from your Java code:

System.setProperty("jdk.tls.client.enableStatusRequestExtension", "true");
System.setProperty("com.sun.net.ssl.checkRevocation", "true");

For more information, see the Oracle Java documentation page on Client-Driven OCSP and OCSP Stapling

Also, there is a detailed OCSP Example that shows how to create SSL contexts enabling OCSP stapling.

UTF-8 Subjects

The client protocol spec doesn't explicitly state the encoding of subjects. Some clients use ASCII and some use UTF-8, which matches ASCII for the characters A-Z, a-z, and 0-9. Until 2.1.2, the 2.0+ version of the Java client used ASCII for performance reasons. As of 2.1.2 you can choose to support UTF-8 subjects via the Options. Keep in mind that there is a small performance penalty for UTF-8 encoding and decoding in benchmarks, but depending on your application this cost may be negligible. Also keep in mind that not all clients support UTF-8, so test accordingly.
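The ASCII/UTF-8 compatibility mentioned above is easy to verify with plain JDK code; the subject strings below are made-up examples:

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class SubjectEncoding {
    public static void main(String[] args) {
        // For subjects containing only A-Z, a-z, 0-9 (and '.'), the two
        // encodings produce identical bytes on the wire.
        String asciiSubject = "updates.device42";
        System.out.println(Arrays.equals(
                asciiSubject.getBytes(StandardCharsets.US_ASCII),
                asciiSubject.getBytes(StandardCharsets.UTF_8))); // prints "true"

        // Non-ASCII characters need UTF-8; the ASCII encoder replaces them
        // with '?', so the byte sequences differ.
        String utf8Subject = "updates.gerät";
        System.out.println(Arrays.equals(
                utf8Subject.getBytes(StandardCharsets.US_ASCII),
                utf8Subject.getBytes(StandardCharsets.UTF_8))); // prints "false"
    }
}
```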

NKey-based Challenge Response Authentication

The NATS server is adding support for a challenge response authentication scheme based on NKeys. Version 2.2.0 of the Java client supports this scheme via an AuthHandler interface. Version 2.3.0 replaced several NKey methods that used strings with methods using char[] to improve security.


The java-nats client is provided in a single jar file, with a single external dependency for the encryption in NKey support. See Building From Source for details on building the library.

Downloading the Jar

You can download the latest jar from Maven Central.

The examples are available in the source repository.

To use NKeys, you will need the ed25519 library.

Using Gradle

The NATS client is available in the Maven central repository, and can be imported as a standard dependency in your build.gradle file:

dependencies {
    implementation 'io.nats:jnats:2.15.4'
}

If you need the latest and greatest before Maven central updates, you can use:

repositories {
    maven {
        url ""
    }
}

If you need a snapshot version, you must add the url for the snapshots and change your dependency.

repositories {
    maven {
        url ""
    }
}

dependencies {
    implementation 'io.nats:jnats:2.15.4-SNAPSHOT'
}

Using Maven

The NATS client is available on the Maven central repository, and can be imported as a normal dependency in your pom.xml file:
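Mirroring the Gradle coordinates shown above, the pom.xml dependency would look like this sketch:

```xml
<dependency>
    <groupId>io.nats</groupId>
    <artifactId>jnats</artifactId>
    <version>2.15.4</version>
</dependency>
```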


If you need the absolute latest, before it propagates to maven central, you can use the repository:

        <id>sonatype releases</id>

If you need a snapshot version, you must enable snapshots and change your dependency.

        <id>sonatype snapshots</id>


If you are using the 1.x version of java-nats and don't want to upgrade to 2.0.0, please use ranges in your POM file; java-nats-streaming 1.x uses [1.1, 1.9.9) for this.

Basic Usage

Sending and receiving with NATS is as simple as connecting to the nats-server and publishing or subscribing for messages. A number of examples are provided in this repo as described in the Examples Readme.


There are several ways to connect using the Java library:

  1. Connect to a local server on the default port:
Connection nc = Nats.connect();

2.    Connect to one or more servers using a URL:

//single URL
Connection nc = Nats.connect("nats://myhost:4222");

//comma-separated list of URLs
Connection nc = Nats.connect("nats://myhost:4222,nats://myhost:4223");

3.   Connect to one or more servers with a custom configuration:

Options o = new Options.Builder().server("nats://serverone:4222").server("nats://servertwo:4222").maxReconnects(-1).build();
Connection nc = Nats.connect(o);

See the javadoc for a complete list of configuration options.

4.   Connect asynchronously; this requires a callback to tell the application when the client is connected:

Options options = new Options.Builder().server(Options.DEFAULT_URL).connectionListener(handler).build();
Nats.connectAsynchronously(options, true);

This feature is experimental, please let us know if you like it.

5.   Connect with authentication handler:

AuthHandler authHandler = Nats.credentials(System.getenv("NATS_CREDS"));
Connection nc = Nats.connect("nats://myhost:4222", authHandler);


Once connected, publishing is accomplished via one of three methods:

  1. With a subject and message body:
nc.publish("subject", "hello world".getBytes(StandardCharsets.UTF_8));

2.   With a subject and message body, as well as a subject for the receiver to reply to:

nc.publish("subject", "replyto", "hello world".getBytes(StandardCharsets.UTF_8));

3.   As a request that expects a reply. This method uses a Future to allow the application code to wait for the response. Under the covers a request/reply pair is the same as a publish/subscribe; the library just manages the subscription for you.

Future<Message> incoming = nc.request("subject", "hello world".getBytes(StandardCharsets.UTF_8));
Message msg = incoming.get(500, TimeUnit.MILLISECONDS);
String response = new String(msg.getData(), StandardCharsets.UTF_8);

All of these methods, as well as the incoming message code use byte arrays for maximum flexibility. Applications can send JSON, Strings, YAML, Protocol Buffers, or any other format through NATS to applications written in a wide range of languages.
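For instance, sending JSON is just a matter of encoding and decoding the text on either side; a minimal, library-free sketch of what travels in a payload:

```java
import java.nio.charset.StandardCharsets;

public class PayloadDemo {
    public static void main(String[] args) {
        // NATS payloads are raw bytes; the application picks the format.
        // Here the payload happens to be a JSON document.
        String json = "{\"city\":\"Oslo\",\"temp\":-3}";

        // What a publisher would hand to nc.publish(subject, ...)
        byte[] payload = json.getBytes(StandardCharsets.UTF_8);

        // What a subscriber would do with msg.getData()
        String roundTripped = new String(payload, StandardCharsets.UTF_8);
        System.out.println(roundTripped.equals(json)); // prints "true"
    }
}
```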

ReplyTo When Making A Request

The Message object allows you to set a replyTo, but in requests, the replyTo is reserved for internal use as the address for the server to respond to the client with the consumer's reply.

Listening for Incoming Messages

The Java NATS library provides two mechanisms to listen for messages, three if you include the request/reply discussed above.

  1. Synchronous subscriptions where the application code manually asks for messages and blocks until they arrive. Each subscription is associated with a single subject, although that subject can be a wildcard.
Subscription sub = nc.subscribe("subject");
Message msg = sub.nextMessage(Duration.ofMillis(500));

String response = new String(msg.getData(), StandardCharsets.UTF_8);

2.   A Dispatcher that will call application code in a background thread. Dispatchers can manage multiple subjects with a single thread and shared callback.

Dispatcher d = nc.createDispatcher((msg) -> {
    String response = new String(msg.getData(), StandardCharsets.UTF_8);
    // process the message
});
d.subscribe("subject");

A dispatcher can also accept individual callbacks for any given subscription.

Dispatcher d = nc.createDispatcher((msg) -> {});

Subscription s = d.subscribe("some.subject", (msg) -> {
    String response = new String(msg.getData(), StandardCharsets.UTF_8);
    System.out.println("Message received (up to 100 times): " + response);
});
d.unsubscribe(s, 100);
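The idea behind a Dispatcher, one background thread draining deliveries and invoking per-subject callbacks, can be sketched with plain JDK concurrency utilities (a toy analogy, not the jnats implementation; MiniDispatcher and its methods are invented for this example):

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Consumer;

// Toy dispatcher: a single background thread drains a queue of
// (subject, body) deliveries and invokes the matching callback, so many
// subscriptions share one thread instead of one thread each.
public class MiniDispatcher implements AutoCloseable {
    private final Map<String, Consumer<String>> handlers = new ConcurrentHashMap<>();
    private final BlockingQueue<String[]> queue = new LinkedBlockingQueue<>();
    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    public MiniDispatcher() {
        worker.submit(() -> {
            try {
                while (true) {
                    String[] delivery = queue.take(); // [subject, body]
                    Consumer<String> handler = handlers.get(delivery[0]);
                    if (handler != null) {
                        handler.accept(delivery[1]);
                    }
                }
            } catch (InterruptedException ignored) {
                // close() interrupts the worker; just exit
            }
        });
    }

    public void subscribe(String subject, Consumer<String> handler) {
        handlers.put(subject, handler);
    }

    // Stands in for a message arriving from the server
    public void deliver(String subject, String body) {
        queue.add(new String[] {subject, body});
    }

    @Override
    public void close() {
        worker.shutdownNow();
    }
}
```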


Publishing and subscribing to JetStream enabled servers is straightforward. A JetStream enabled application will connect to a server, establish a JetStream context, and then publish or subscribe. This can be mixed and matched with standard NATS subjects, and JetStream subscribers, depending on configuration, can receive messages both from streams and directly from other NATS producers.

The JetStream Context

After establishing a connection as described above, create a JetStream Context.

JetStream js = nc.jetStream();

You can pass options to configure the JetStream client, although the defaults should suffice for most users. See the JetStreamOptions class.

There is no limit to the number of contexts used, although normally one would only require a single context. Contexts may be prefixed to be used in conjunction with NATS authorization.


To publish messages, use the JetStream.publish(...) API. A stream must be established before publishing. You can publish in either a synchronous or asynchronous manner.


       // create a typical NATS message
       Message msg = NatsMessage.builder()
               .subject("foo")
               .data("hello", StandardCharsets.UTF_8)
               .build();

       PublishAck pa = js.publish(msg);

See the JetStream examples for a detailed and runnable example.

If there is a problem, an exception will be thrown, and the message may not have been persisted. Otherwise, the stream name and sequence number are returned in the publish acknowledgement.

There are a variety of publish options that can be set when publishing. When duplicate checking has been enabled on the stream, a message ID should be set. One set of options are expectations. You can set a publish expectation such as a particular stream name, previous message ID, or previous sequence number. These are hints to the server that it should reject messages where these are not met, primarily for enforcing your ordering or ensuring messages are not stored on the wrong stream.

The PublishOptions are immutable, but the builder can be re-used by clearing the expectations.

For example:

      PublishOptions.Builder pubOptsBuilder = PublishOptions.builder()
              .expectedStream("TEST")
              .messageId("mid1");
      PublishAck pa = js.publish("foo", null, pubOptsBuilder.build());

      pubOptsBuilder.clearExpected();
      pa = js.publish("foo", null, pubOptsBuilder.build());

See the JetStream examples for a detailed and runnable example.


      List<CompletableFuture<PublishAck>> futures = new ArrayList<>();
      for (int x = 1; x < roundCount; x++) {
          // create a typical NATS message
          Message msg = NatsMessage.builder()
                  .subject("foo")
                  .data("hello", StandardCharsets.UTF_8)
                  .build();

          // Publish a message and save the future for its ack
          futures.add(js.publishAsync(msg));
      }

      for (CompletableFuture<PublishAck> future : futures) {
          // ... process the futures
      }

See the JetStream examples for a detailed and runnable example.

ReplyTo When Publishing

The Message object allows you to set a replyTo, but in publish requests, the replyTo is reserved for internal use as the address for the server to respond to the client with the PublishAck.


There are two methods of subscribing, Push and Pull with each variety having its own set of options and abilities.

Push Subscribing

Push subscriptions can be synchronous or asynchronous. The server pushes messages to the client.


        Dispatcher disp = ...;

        MessageHandler handler = (msg) -> {
            // Process the message.
            // Ack the message depending on the ack model
        };

        PushSubscribeOptions so = PushSubscribeOptions.builder()
                .durable("durable-name") // optional
                .build();

        boolean autoAck = ...;

        js.subscribe("my-subject", disp, handler, autoAck, so);

See the JetStream examples for a detailed and runnable example.


See the JetStream examples for a detailed and runnable example.

         PushSubscribeOptions so = PushSubscribeOptions.builder()
                 .durable("durable-name") // optional
                 .build();

         // Subscribe synchronously, then just wait for messages.
         JetStreamSubscription sub = js.subscribe("subject", so);

         Message msg = sub.nextMessage(Duration.ofSeconds(1));

Pull Subscribing

Pull subscriptions are always synchronous. The server organizes messages into a batch which it sends when requested.

        PullSubscribeOptions pullOptions = PullSubscribeOptions.builder()
                .durable("durable-name") // required for pull subscriptions
                .build();

        JetStreamSubscription sub = js.subscribe("subject", pullOptions);


        List<Message> messages = sub.fetch(100, Duration.ofSeconds(1));
        for (Message m : messages) {
            // process message
            m.ack();
        }

The fetch pull is a macro pull that uses advanced pulls under the covers to return a list of messages. The list may be empty or contain at most the batch size. All status messages are handled for you. The client can provide a timeout to wait for the first message in a batch. The fetch call returns when the batch is ready. The timeout may be exceeded if the server sent messages very near the end of the timeout period.

See the JetStream examples for detailed and runnable examples.


        Iterator<Message> iter = sub.iterate(100, Duration.ofSeconds(1));
        while (iter.hasNext()) {
            Message m = iter.next();
            // process message
            m.ack();
        }

The iterate pull is a macro pull that uses advanced pulls under the covers to return an iterator. The iterator may have no messages up to at most the batch size. All status messages are handled for you. The client can provide a timeout to wait for the first message in a batch. The iterate call returns the iterator immediately, but under the covers it will wait for the first message based on the timeout. The timeout may be exceeded if the server sent messages very near the end of the timeout period.

See the JetStream examples for detailed and runnable examples.

Batch Size:

        sub.pull(100);
        Message m = sub.nextMessage(Duration.ofSeconds(1));

An advanced version of pull specifies a batch size. When asked, the server will send whatever messages it has up to the batch size. If it has no messages it will wait until it has some to send. The client may time out before that time. If there are less than the batch size available, you can ask for more later. Once the entire batch size has been filled, you must make another pull request.

See the JetStream examples for detailed and runnable examples.

No Wait and Batch Size:

        sub.pullNoWait(100);
        Message m = sub.nextMessage(Duration.ofSeconds(1));

An advanced version of pull also specifies a batch size. When asked, the server will send whatever messages it has, up to the batch size, but will never wait for the batch to fill; the client returns immediately. If there are fewer messages than the batch size available, you will get what is available and a 404 status message indicating the server did not have enough messages. You must make a pull request every time. This is an advanced API.

See the JetStream examples for a detailed and runnable example.

Expires In and Batch Size:

        sub.pullExpiresIn(100, Duration.ofSeconds(3));
        Message m = sub.nextMessage(Duration.ofSeconds(4));

Another advanced version of pull specifies a maximum time to wait for the batch to fill. The server returns messages when either the batch is filled or the time expires. It's important to set your client's timeout to be longer than the time you've asked the server to expire in. You must make a pull request every time. In subsequent pulls, you will receive multiple 408 status messages, one for each message the previous batch was short; you can just ignore these. This is an advanced API.

See the JetStream examples for detailed and runnable examples.

Ordered Push Subscription Option

You can now set a Push Subscription option called "Ordered". When you set this flag, the library takes over creation of the consumer and creates a subscription that guarantees the order of messages. This consumer will use flow control with a default heartbeat of 5 seconds. Messages will not require acks, as the Ack Policy will be set to No Ack. When creating the subscription, there are some restrictions on the consumer configuration settings:

  • Ack policy must be AckPolicy.None (or left un-set). maxAckPending will be ignored.
  • Deliver Group (aka Queue) cannot be used
  • You cannot set a durable consumer name
  • You cannot set the deliver subject
  • max deliver can only be set to 1 (or left un-set)
  • The idle heartbeat cannot be less than 5 seconds. Flow control will automatically be used.

You can however set the deliver policy which will be used to start the subscription.
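Within these restrictions, creating an ordered push subscription can be sketched as follows (the server URL and subject name are placeholder assumptions):

```java
import io.nats.client.Connection;
import io.nats.client.JetStream;
import io.nats.client.JetStreamSubscription;
import io.nats.client.Nats;
import io.nats.client.PushSubscribeOptions;

public class OrderedPushSketch {
    public static void main(String[] args) throws Exception {
        try (Connection nc = Nats.connect("nats://localhost:4222")) {
            JetStream js = nc.jetStream();

            // the library creates the consumer itself: no durable, no
            // deliver group, Ack Policy None, flow control + 5s heartbeat
            PushSubscribeOptions so = PushSubscribeOptions.builder()
                .ordered(true)
                .build();

            JetStreamSubscription sub = js.subscribe("my-subject", so);
            // messages arrive in guaranteed order and need no acks
        }
    }
}
```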

Subscription Creation Checks

Subscription creation performs many checks to make sure that a valid, operable subscription can be made. The SO group covers validations that can occur when building push or pull subscribe options. The SUB group covers validations that occur when creating a subscription.

JsSoDurableMismatch [SO-90101]: Builder durable must match the consumer configuration durable if both are provided.
JsSoDeliverGroupMismatch [SO-90102]: Builder deliver group must match the consumer configuration deliver group if both are provided.
JsSoDeliverSubjectMismatch [SO-90103]: Builder deliver subject must match the consumer configuration deliver subject if both are provided.
JsSoOrderedNotAllowedWithBind [SO-90104]: Bind is not allowed with an ordered consumer.
JsSoOrderedNotAllowedWithDeliverGroup [SO-90105]: Deliver group is not allowed with an ordered consumer.
JsSoOrderedNotAllowedWithDurable [SO-90106]: Durable is not allowed with an ordered consumer.
JsSoOrderedNotAllowedWithDeliverSubject [SO-90107]: Deliver subject is not allowed with an ordered consumer.
JsSoOrderedRequiresAckPolicyNone [SO-90108]: Ordered consumer requires Ack Policy None.
JsSoOrderedRequiresMaxDeliver [SO-90109]: Max deliver is limited to 1 with an ordered consumer.
JsSubPullCantHaveDeliverGroup [SUB-90001]: Pull subscriptions can't have a deliver group.
JsSubPullCantHaveDeliverSubject [SUB-90002]: Pull subscriptions can't have a deliver subject.
JsSubPushCantHaveMaxPullWaiting [SUB-90003]: Push subscriptions cannot supply max pull waiting.
JsSubQueueDeliverGroupMismatch [SUB-90004]: Queue / deliver group mismatch.
JsSubFcHbNotValidPull [SUB-90005]: Flow Control and/or heartbeat is not valid with a pull subscription.
JsSubFcHbNotValidQueue [SUB-90006]: Flow Control and/or heartbeat is not valid in queue mode.
JsSubNoMatchingStreamForSubject [SUB-90007]: No matching streams for subject.
JsSubConsumerAlreadyConfiguredAsPush [SUB-90008]: Consumer is already configured as a push consumer.
JsSubConsumerAlreadyConfiguredAsPull [SUB-90009]: Consumer is already configured as a pull consumer.
JsSubSubjectDoesNotMatchFilter [SUB-90011]: Subject does not match consumer configuration filter.
JsSubConsumerAlreadyBound [SUB-90012]: Consumer is already bound to a subscription.
JsSubExistingConsumerNotQueue [SUB-90013]: Existing consumer is not configured as a queue / deliver group.
JsSubExistingConsumerIsQueue [SUB-90014]: Existing consumer is configured as a queue / deliver group.
JsSubExistingQueueDoesNotMatchRequestedQueue [SUB-90015]: Existing consumer deliver group does not match requested queue / deliver group.
JsSubExistingConsumerCannotBeModified [SUB-90016]: Existing consumer cannot be modified.
JsSubConsumerNotFoundRequiredInBind [SUB-90017]: Consumer not found, required in bind mode.
JsSubOrderedNotAllowOnQueues [SUB-90018]: Ordered consumer not allowed on queues.
JsSubPushCantHaveMaxBatch [SUB-90019]: Push subscriptions cannot supply max batch.
JsSubPushCantHaveMaxBytes [SUB-90020]: Push subscriptions cannot supply max bytes.

Message Acknowledgements

There are multiple types of acknowledgements in JetStream:

  • Message.ack(): Acknowledges a message.
  • Message.ackSync(Duration): Acknowledges a message and waits for a confirmation. When used with deduplication this creates exactly-once delivery guarantees (within the deduplication window). It may significantly impact the performance of the system.
  • Message.nak(): A negative acknowledgment indicating processing failed and the message should be resent later.
  • Message.term(): Never send this message again, regardless of configuration.
  • Message.inProgress(): Indicates the message is still being processed and resets the redelivery timer in the server. The message must be acknowledged later when processing is complete.

Note that an exactly-once delivery guarantee can be achieved by using a consumer with explicit ack mode attached to a stream set up with a deduplication window and using ackSync to acknowledge messages. The guarantee is only valid for the duration of the deduplication window.
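The acknowledgement styles above could be exercised in a message handler like this sketch (the branching logic is illustrative, not prescribed by the library):

```java
import io.nats.client.Message;

public class AckSketch {
    // hypothetical helper deciding what to do with a JetStream message;
    // the ok/fatal/slow flags stand in for your own processing outcome
    static void handle(Message m, boolean ok, boolean fatal, boolean slow) {
        if (slow) {
            m.inProgress();   // still working: reset the redelivery timer
        }
        if (ok) {
            m.ack();          // normal acknowledgement
        } else if (fatal) {
            m.term();         // never redeliver this message
        } else {
            m.nak();          // processing failed, ask for redelivery
        }
    }
}
```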

Advanced Usage


TLS

NATS supports TLS 1.2. The server can be configured to verify client certificates or not. Depending on this setting, the client has several options.

  1. The Java library allows the use of the tls:// protocol in its urls. This setting expects a default SSLContext to be set. You can set this default context using System properties, or in code. For example, you could run the publish example using:
java io.nats.examples.NatsPub tls://localhost:4443 test "hello world"

where the following properties are being set:

This method can be used with or without client verification.

2.   During development, or behind a firewall where the client can trust the server, the library supports the opentls:// protocol which will use a special SSLContext that trusts all server certificates, but provides no client certificates.

java io.nats.examples.NatsSub opentls://localhost:4443 test 3

This method requires that client verification is off.

3.   Your code can build an SSLContext to work with or without client verification.

SSLContext ctx = createContext();
Options options = new Options.Builder().server(ts.getURI()).sslContext(ctx).build();
Connection nc = Nats.connect(options);

If you want to try out these techniques, take a look at the examples for instructions.

Also, here are some places in the code that may help
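As one concrete sketch, a createContext() helper like the one used in option 3 might be built from the keystore/truststore pair described in the TLS Certs section of this article; the file names and the "password" store password are assumptions matching that section:

```java
import java.io.FileInputStream;
import java.security.KeyStore;
import java.security.SecureRandom;

import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public class TlsContextSketch {
    // builds an SSLContext from keystore.jks / truststore.jks
    public static SSLContext createContext() throws Exception {
        char[] password = "password".toCharArray(); // assumed store password

        // client identity (only needed when the server verifies clients)
        KeyStore keyStore = KeyStore.getInstance("PKCS12");
        keyStore.load(new FileInputStream("keystore.jks"), password);
        KeyManagerFactory kmf =
            KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(keyStore, password);

        // trusted CA used to verify the server's certificate
        KeyStore trustStore = KeyStore.getInstance("PKCS12");
        trustStore.load(new FileInputStream("truststore.jks"), password);
        TrustManagerFactory tmf =
            TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustStore);

        SSLContext ctx = SSLContext.getInstance("TLSv1.2");
        ctx.init(kmf.getKeyManagers(), tmf.getTrustManagers(), new SecureRandom());
        return ctx;
    }
}
```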

Clusters & Reconnecting

The Java client will automatically reconnect if it loses its connection to the nats-server. If given a single server, the client will keep trying that one. If given a list of servers, the client will rotate between them. When the nats servers are in a cluster, they will tell the client about the other servers, so that in the simplest case a client can connect to one server, learn about the cluster, and reconnect to another server if its initial one goes down.

To tell the connection about multiple servers for the initial connection, use the servers() method on the options builder, or call server() multiple times.

String[] serverUrls = {"nats://serverOne:4222", "nats://serverTwo:4222"};
Options o = new Options.Builder().servers(serverUrls).build();

Reconnection behavior is controlled via a few options, see the javadoc for the Options.Builder class for specifics on reconnect limits, delays and buffers.
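For example, reconnect behavior might be tuned like this sketch (the limits and delays shown are illustrative; consult the Options.Builder javadoc for the full set):

```java
import java.time.Duration;

import io.nats.client.Options;

public class ReconnectSketch {
    public static void main(String[] args) {
        String[] serverUrls = {"nats://serverOne:4222", "nats://serverTwo:4222"};

        Options o = new Options.Builder()
            .servers(serverUrls)                    // rotate across both servers
            .maxReconnects(10)                      // give up after 10 attempts
            .reconnectWait(Duration.ofSeconds(2))   // pause between attempts
            .build();
        // pass o to Nats.connect(o) to use these settings
    }
}
```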


The io.nats.examples package contains two benchmarking tools, modeled after tools in other NATS clients. Both run against an existing nats-server. The first, io.nats.examples.benchmark.NatsBench, runs two simple tests: the first simply publishes messages, while the second also receives them. Tests are run with 1 thread/connection per publisher or subscriber. Running on an iMac (2017) with a 4.2 GHz Intel Core i7 and 64 GB of memory produced results like:

Starting benchmark(s) [msgs=5000000, msgsize=256, pubs=2, subs=2]
Current memory usage is 966.14 mb / 981.50 mb / 14.22 gb free/total/max
Use ctrl-C to cancel.
Pub Only stats: 9,584,263 msgs/sec ~ 2.29 gb/sec
 [ 1] 4,831,495 msgs/sec ~ 1.15 gb/sec (2500000 msgs)
 [ 2] 4,792,145 msgs/sec ~ 1.14 gb/sec (2500000 msgs)
  min 4,792,145 | avg 4,811,820 | max 4,831,495 | stddev 19,675.00 msgs
Pub/Sub stats: 3,735,744 msgs/sec ~ 912.05 mb/sec
 Pub stats: 1,245,680 msgs/sec ~ 304.12 mb/sec
  [ 1] 624,385 msgs/sec ~ 152.44 mb/sec (2500000 msgs)
  [ 2] 622,840 msgs/sec ~ 152.06 mb/sec (2500000 msgs)
   min 622,840 | avg 623,612 | max 624,385 | stddev 772.50 msgs
 Sub stats: 2,490,461 msgs/sec ~ 608.02 mb/sec
  [ 1] 1,245,230 msgs/sec ~ 304.01 mb/sec (5000000 msgs)
  [ 2] 1,245,231 msgs/sec ~ 304.01 mb/sec (5000000 msgs)
   min 1,245,230 | avg 1,245,230 | max 1,245,231 | stddev .71 msgs
Final memory usage is 2.02 gb / 2.94 gb / 14.22 gb free/total/max

The second, io.nats.examples.autobench.NatsAutoBench, runs a series of tests with various message sizes. Running this test on the same iMac resulted in:

PubOnly 0b           10,000,000          8,464,850 msg/s       0.00 b/s
PubOnly 8b           10,000,000         10,065,263 msg/s     76.79 mb/s
PubOnly 32b          10,000,000         12,534,612 msg/s    382.53 mb/s
PubOnly 256b         10,000,000          7,996,057 msg/s      1.91 gb/s
PubOnly 512b         10,000,000          5,942,165 msg/s      2.83 gb/s
PubOnly 1k            1,000,000          4,043,937 msg/s      3.86 gb/s
PubOnly 4k              500,000          1,114,947 msg/s      4.25 gb/s
PubOnly 8k              100,000            460,630 msg/s      3.51 gb/s
PubSub 0b            10,000,000          3,155,673 msg/s       0.00 b/s
PubSub 8b            10,000,000          3,218,427 msg/s     24.55 mb/s
PubSub 32b           10,000,000          2,681,550 msg/s     81.83 mb/s
PubSub 256b          10,000,000          2,020,481 msg/s    493.28 mb/s
PubSub 512b           5,000,000          2,000,918 msg/s    977.01 mb/s
PubSub 1k             1,000,000          1,170,448 msg/s      1.12 gb/s
PubSub 4k               100,000            382,964 msg/s      1.46 gb/s
PubSub 8k               100,000            196,474 msg/s      1.50 gb/s
PubDispatch 0b       10,000,000          4,645,438 msg/s       0.00 b/s
PubDispatch 8b       10,000,000          4,500,006 msg/s     34.33 mb/s
PubDispatch 32b      10,000,000          4,458,481 msg/s    136.06 mb/s
PubDispatch 256b     10,000,000          2,586,563 msg/s    631.49 mb/s
PubDispatch 512b      5,000,000          2,187,592 msg/s      1.04 gb/s
PubDispatch 1k        1,000,000          1,369,985 msg/s      1.31 gb/s
PubDispatch 4k          100,000            403,314 msg/s      1.54 gb/s
PubDispatch 8k          100,000            203,320 msg/s      1.55 gb/s
ReqReply 0b              20,000              9,548 msg/s       0.00 b/s
ReqReply 8b              20,000              9,491 msg/s     74.15 kb/s
ReqReply 32b             10,000              9,778 msg/s    305.59 kb/s
ReqReply 256b            10,000              8,394 msg/s      2.05 mb/s
ReqReply 512b            10,000              8,259 msg/s      4.03 mb/s
ReqReply 1k              10,000              8,193 msg/s      8.00 mb/s
ReqReply 4k              10,000              7,915 msg/s     30.92 mb/s
ReqReply 8k              10,000              7,454 msg/s     58.24 mb/s
Latency 0b    5,000     35 /  49.20 / 134    +/- 0.77  (microseconds)
Latency 8b    5,000     35 /  49.54 / 361    +/- 0.80  (microseconds)
Latency 32b   5,000     35 /  49.27 / 135    +/- 0.79  (microseconds)
Latency 256b  5,000     41 /  56.41 / 142    +/- 0.90  (microseconds)
Latency 512b  5,000     40 /  56.41 / 174    +/- 0.91  (microseconds)
Latency 1k    5,000     35 /  49.76 / 160    +/- 0.80  (microseconds)
Latency 4k    5,000     36 /  50.64 / 193    +/- 0.83  (microseconds)
Latency 8k    5,000     38 /  55.45 / 206    +/- 0.88  (microseconds)

It is worth noting that in both cases memory was not a factor; the processor and OS were more of a consideration. To test this, take a look at the NatsBench results again. Those are run without any constraint on the Java heap and end up doubling the used memory. However, if we run the same test again constrained to 1 GB using -Xmx1g, the performance is comparable, differing primarily by the "noise" we see between test runs with the same settings.

Starting benchmark(s) [msgs=5000000, msgsize=256, pubs=2, subs=2]
Current memory usage is 976.38 mb / 981.50 mb / 981.50 mb free/total/max
Use ctrl-C to cancel.

Pub Only stats: 10,123,382 msgs/sec ~ 2.41 gb/sec
 [ 1] 5,068,256 msgs/sec ~ 1.21 gb/sec (2500000 msgs)
 [ 2] 5,061,691 msgs/sec ~ 1.21 gb/sec (2500000 msgs)
  min 5,061,691 | avg 5,064,973 | max 5,068,256 | stddev 3,282.50 msgs

Pub/Sub stats: 3,563,770 msgs/sec ~ 870.06 mb/sec
 Pub stats: 1,188,261 msgs/sec ~ 290.10 mb/sec
  [ 1] 594,701 msgs/sec ~ 145.19 mb/sec (2500000 msgs)
  [ 2] 594,130 msgs/sec ~ 145.05 mb/sec (2500000 msgs)
   min 594,130 | avg 594,415 | max 594,701 | stddev 285.50 msgs
 Sub stats: 2,375,839 msgs/sec ~ 580.04 mb/sec
  [ 1] 1,187,919 msgs/sec ~ 290.02 mb/sec (5000000 msgs)
  [ 2] 1,187,920 msgs/sec ~ 290.02 mb/sec (5000000 msgs)
   min 1,187,919 | avg 1,187,919 | max 1,187,920 | stddev .71 msgs

Final memory usage is 317.62 mb / 960.50 mb / 960.50 mb free/total/max

Building From Source

The build depends on Gradle, and contains gradlew to simplify the process. After cloning, you can build the repository and run the tests with a single command:

> git clone
> cd
> ./gradlew clean build

Or to build without tests

> ./gradlew clean build -x test

This will place the class files in a new build folder. To just build the jar:

> ./gradlew jar

The jar will be placed in build/libs.

You can also build the java doc, and the samples jar using:

> ./gradlew javadoc
> ./gradlew exampleJar

The java doc is located in build/docs and the example jar is in build/libs. Finally, to run the tests with the coverage report:

> ./gradlew test jacocoTestReport

which will create a folder called build/reports/jacoco containing the file index.html you can open and use to browse the coverage. Keep in mind we have focused on library test coverage, not coverage for the examples.

Many of the tests run nats-server on a custom port. If nats-server is in your path they should just work, but in cases where it is not, or an IDE running the tests has issues with the path, you can specify the nats-server location with the nats_server_path environment variable.

TLS Certs

The raw TLS test certs are in src/test/resources/certs and come from the nats.go repository. However, the Java client also needs keystore.jks and truststore.jks files for creating a context. These can be created using:

> cd src/test/resources
> keytool -keystore truststore.jks -alias CARoot -import -file certs/ca.pem -storepass password -noprompt -storetype pkcs12
> cat certs/client-key.pem certs/client-cert.pem > combined.pem
> openssl pkcs12 -export -in combined.pem -out cert.p12
> keytool -importkeystore -srckeystore cert.p12 -srcstoretype pkcs12 -deststoretype pkcs12 -destkeystore keystore.jks
> keytool -keystore keystore.jks -alias CARoot -import -file certs/ca.pem -storepass password -noprompt
> rm cert.p12 combined.pem

Download Details:
Author: nats-io
Source Code:
License: Apache-2.0 license


NATS Client: A Java Client for The NATS Messaging System

Smack: Cross-platform XMPP Client Library Written in Java



Smack is an open-source, highly modular, easy to use, XMPP client library written in Java for Java SE compatible JVMs and Android.

Being a pure Java library, it can be embedded into your applications to create anything from a full XMPP instant messaging client to simple XMPP integrations such as sending notification messages and presence-enabling devices. Smack and XMPP allow you to easily exchange data in various ways e.g., fire-and-forget, publish-subscribe, between human and non-human endpoints (M2M, IoT, …).
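For instance, the notification-sending case above might look like this with the Smack 4.x API (the domain, credentials, and recipient JID are placeholder assumptions):

```java
import org.jivesoftware.smack.AbstractXMPPConnection;
import org.jivesoftware.smack.chat2.ChatManager;
import org.jivesoftware.smack.tcp.XMPPTCPConnection;
import org.jivesoftware.smack.tcp.XMPPTCPConnectionConfiguration;
import org.jxmpp.jid.EntityBareJid;
import org.jxmpp.jid.impl.JidCreate;

public class NotifySketch {
    public static void main(String[] args) throws Exception {
        XMPPTCPConnectionConfiguration config = XMPPTCPConnectionConfiguration.builder()
            .setXmppDomain("example.org")                 // placeholder domain
            .setUsernameAndPassword("alerts", "secret")   // placeholder account
            .build();

        AbstractXMPPConnection connection = new XMPPTCPConnection(config);
        connection.connect().login();

        // fire-and-forget: send one message and disconnect
        EntityBareJid to = JidCreate.entityBareFrom("admin@example.org");
        ChatManager.getInstanceFor(connection)
                   .chatWith(to)
                   .send("disk space low on host-17");

        connection.disconnect();
    }
}
```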

More information is provided by the Overview.

Getting started

Start with having a look at the Documentation and the Javadoc.

Instructions on how to use Smack in your Java or Android project are provided in the Smack Readme and Upgrade Guide.

Professional Services

Smack is a collaborative effort of many people. Some are paid, e.g., by their employer or a third party, for their contributions. But many contribute in their spare time for free. While we try to provide the best possible XMPP library for Android and Java SE-compatible execution environments by following state-of-the-art software engineering practices, the API may not always perfectly fit your requirements. Hence we welcome contributions and encourage discussion about how Smack can be further improved. We also provide paid services ranging from XMPP/Smack related consulting to designing and developing features to accommodate your needs. Please contact Florian Schmaus for further information.

Bug Reporting

Only a few users have access for filing bugs in the tracker. New users should:

  1. Read "How to ask for help or report an issue"
  2. Create a discourse account (you can also sign up with your Google account).
  3. Login to the forum account
  4. Press "New Topic" in your toolbar and choose the 'Smack Support' sub-category.

Please search for your issues in the bug tracker before reporting.


The developers hang around in the project's chat room. Remember that it may take some time (~hours) to get a response.

You can also reach us via the Smack Support Forum if you have questions or need support, or the Smack Developers Forum if you want to discuss Smack development.


If you want to start developing for Smack and eventually contribute code back, then please have a look at the Guidelines for Smack Developers and Contributors. The guidelines also contain development quickstart instructions.


Ignite Realtime

Ignite Realtime is an Open Source community composed of end-users and developers around the world who are interested in applying innovative, open-standards-based RealTime Collaboration to their businesses and organizations. We're aimed at disrupting proprietary, non-open standards-based systems and invite you to participate in what's already one of the biggest and most active Open Source communities.

Smack - an Ignite Realtime community project.

Download Details:
Author: igniterealtime
Source Code:
License: Apache-2.0 license


Smack: Cross-platform XMPP Client Library Written in Java

RabbitMQ Java Client

This repository contains source code of the RabbitMQ Java client. The client is maintained by the RabbitMQ team at Pivotal.

Dependency (Maven Artifact)

This package is published to several Maven package repositories:


5.x Series

These client releases are independent of RabbitMQ server releases and can be used with RabbitMQ server 3.x. They require Java 8 or higher.



compile 'com.rabbitmq:amqp-client:5.15.0'

4.x Series

As of 1 January 2021 the 4.x branch is no longer supported.

These client releases are independent of RabbitMQ server releases and can be used with RabbitMQ server 3.x. They require Java 6 or higher.



compile 'com.rabbitmq:amqp-client:4.12.0'

Experimenting with JShell

You can experiment with the client from JShell. This requires Java 9 or higher.

git clone
cd rabbitmq-java-client
./mvnw test-compile jshell:run
import com.rabbitmq.client.*
ConnectionFactory cf = new ConnectionFactory()
Connection c = cf.newConnection()
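Outside JShell, a minimal publish-and-consume round trip might look like this sketch (the queue name "hello" and the localhost broker are assumptions):

```java
import java.nio.charset.StandardCharsets;

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.GetResponse;

public class RoundTripSketch {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumes a local broker, e.g. the Docker one above

        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {
            // non-durable, non-exclusive, non-auto-delete queue
            channel.queueDeclare("hello", false, false, false, null);

            // publish to the default exchange, routed by queue name
            channel.basicPublish("", "hello", null,
                "Hello, world!".getBytes(StandardCharsets.UTF_8));

            // pull the message straight back with auto-ack
            GetResponse response = channel.basicGet("hello", true);
            System.out.println(new String(response.getBody(), StandardCharsets.UTF_8));
        }
    }
}
```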

Building from Source

Getting the Project and its Dependencies

git clone
cd rabbitmq-java-client
make deps

Building the JAR File

./mvnw clean package -Dmaven.test.skip -P '!setup-test-cluster'

Launching Tests with the Broker Running in a Docker Container

Run the broker:

docker run -it --rm --name rabbitmq -p 5672:5672 rabbitmq:3.8

Launch "essential" tests (takes about 10 minutes):

./mvnw verify -P '!setup-test-cluster' \
    -Drabbitmqctl.bin=DOCKER:rabbitmq \

Launch a single test:

./mvnw verify -P '!setup-test-cluster' \
    -Drabbitmqctl.bin=DOCKER:rabbitmq \

Launching Tests with a Local Broker

The tests can run against a local broker as well. The rabbitmqctl.bin system property must point to the rabbitmqctl program:

./mvnw verify -P '!setup-test-cluster' \
       -Dtest-broker.A.nodename=rabbit@$(hostname) \
       -Drabbitmqctl.bin=/path/to/rabbitmqctl \

To launch a single test:

./mvnw verify -P '!setup-test-cluster' \
       -Dtest-broker.A.nodename=rabbit@$(hostname) \
       -Drabbitmqctl.bin=/path/to/rabbitmqctl \


See Contributing and How to Run Tests.


This library uses semantic versioning.


See the RabbitMQ Java libraries support page for the support timeline of this library.

Download Details:
Author: rabbitmq
Source Code:
License: Unknown and 3 other licenses found


RabbitMQ Java Client

Nakadi: Provides A RESTful API on top Of Kafka

Nakadi Event Broker 

Nakadi is a distributed event bus broker that implements a RESTful API abstraction on top of Kafka-like queues, which can be used to send, receive, and analyze streaming data in real time, in a reliable and highly available manner.

One of the most prominent use cases of Nakadi is to decouple micro-services by building data streams between producers and consumers.

The main users of Nakadi are developers and analysts. Nakadi provides features like REST-based integration, multiple consumers, ordered delivery, an interactive UI, fully managed operation, security, data-quality assurance, abstraction of big-data technology, and push-model-based consumption.

Nakadi is in active development and is currently in production inside Zalando as the backbone of our microservices, sending millions of events daily with a throughput of hundreds of gigabytes per second. In one line, Nakadi is a high-scalability data stream for enterprise engineering teams.

Nakadi Deployment Diagram

More detailed information can be found on our website.

Project goal

The goal of Nakadi (ნაკადი means stream in Georgian) is to provide an event broker infrastructure to:

  • Abstract event delivery via a secured API.

This allows microservices teams to maintain service boundaries, and not directly depend on any specific message broker technology. Access can be managed individually for every queue and secured using OAuth and custom authorization plugins.

  • Enable convenient development of event-driven applications and asynchronous microservices.

Event types can be defined with Event type schemas and managed via a registry. All events will be validated against the schema before publishing. This guarantees data quality and consistency for consumers.

  • Efficient low latency event delivery.

Once a publisher sends an event using a simple HTTP POST, consumers can be pushed to via a streaming HTTP connection, allowing near real-time event processing. The consumer connection has keepalive controls and support for managing stream offsets using subscriptions.

Development status

  • Nakadi is high-load production ready.
  • Zalando uses Nakadi as its central Event Bus Service.
  • Nakadi reliably handles traffic from thousands of event types with a throughput of hundreds of gigabytes per second.
  • The project is in active development.




  • REST abstraction over Kafka-like queues.
  • CRUD for event types.
  • Event batch publishing.
  • Low-level interface (deprecated).
    • manual client side partition management is needed
    • no support of commits
  • High-level interface (Subscription API).
    • automatic redistribution of partitions between consuming clients
    • commits should be issued to move server-side cursors


  • Schema registry.
  • Several event type categories (Undefined, Business, Data Change).
  • Several partitioning strategies (Random, Hash, User defined).
  • Event enrichment strategies.
  • Schema evolution.
  • Events validation using an event type schema.


  • OAuth2 authentication.
  • Per-event type authorization.
  • Blacklist of users and applications.


  • STUPS platform compatible.
  • ZMON monitoring compatible.
  • SLO monitoring.
  • Timelines:
    • this allows transparently switching production and consumption to a different cluster (tier, region, AZ) without moving actual data and without any service degradation.
    • opens the possibility of implementing other streaming technologies and engines besides Kafka (like Amazon Kinesis or Google Cloud Pub/Sub)

Read more about latest development on the releases page.

Additional features that we plan to cover in the future are:

  • Support for different streaming technologies and engines. Nakadi currently uses Apache Kafka as its broker, but other providers (such as Kinesis) will be possible.
  • Filtering of events for subscribing consumers.
  • Store old published events forever using transparent fallback backup storages like AWS S3.
  • Separate the internal schema register to standalone service.
  • Use additional schema formats and protocols like Avro, protobuf and others.

Related projects

The zalando-nakadi organisation contains many useful related projects like

How to contribute to Nakadi

Read our contribution guidelines on how to submit issues and pull requests, then get Nakadi up and running locally using Docker:


The Nakadi server is a Java 8 Spring Boot application. It uses Kafka 1.1.1 as its broker and PostgreSQL 9.5 as its supporting database.

Nakadi requires recent versions of docker and docker-compose. In particular, docker-compose >= v1.7.0 is required. See Install Docker Compose for information on installing the most recent docker-compose version.

The project is built with Gradle. The ./gradlew wrapper script will bootstrap the right Gradle version if it's not already installed.


To get the source, clone the git repository.

git clone


The Gradle setup is fairly standard; the main tasks are:

  • ./gradlew build: run a build and test
  • ./gradlew clean: clean down the build

Some other useful tasks are:

  • ./gradlew startNakadi: build Nakadi and start docker-compose services: nakadi, postgresql, zookeeper and kafka
  • ./gradlew stopNakadi: shutdown docker-compose services
  • ./gradlew startStorages: start docker-compose services: postgres, zookeeper and kafka (useful for development purposes)
  • ./gradlew fullAcceptanceTest: start Nakadi configured for acceptance tests and run acceptance tests

For working with an IDE, the eclipse IDE task is available, and you can import build.gradle into IntelliJ IDEA directly.

Running a Server

Note: Nakadi Docker for ARM processors is available at here

From the project's home directory you can start Nakadi via Gradle:

./gradlew startNakadi

This will build the project and run docker compose with 4 services:

  • Nakadi (8080)
  • PostgreSQL (5432)
  • Kafka (9092)
  • Zookeeper (2181)

To stop the running Nakadi server:

./gradlew stopNakadi

Using Nakadi and its API

Please read the manual for the full API usage details.

Creating Event Types

The Nakadi API allows the publishing and consuming of events over HTTP. To do this the producer must register an event type with the Nakadi schema registry.

This example shows a minimalistic undefined category event type with a wildcard schema:

curl -v -XPOST http://localhost:8080/event-types -H "Content-type: application/json" -d '{
  "name": "order.ORDER_RECEIVED",
  "owning_application": "order-service",
  "category": "undefined",
  "schema": {
    "type": "json_schema",
    "schema": "{ \"additionalProperties\": true }"
  }
}'

Note: This is not a recommended category and schema. It should be used only for testing.

You can read more about this in the manual.

Consuming Events

You can open a stream for an event type via the events sub-resource:

curl -v http://localhost:8080/event-types/order.ORDER_RECEIVED/events

HTTP/1.1 200 OK

{"cursor":{"partition":"0","offset":"82376-000087231"},"events":[{"order_number": "ORDER_001"}]}
{"cursor":{"partition":"0","offset":"82376-000087232"},"events":[{"order_number": "ORDER_002"}]}
{"cursor":{"partition":"0","offset":"82376-000087233"},"events":[{"order_number": "ORDER_003"}]}

You will see events appear here when you publish them, for example from another console. Records without an events field are keep-alive messages.

Note: the low-level API should be used only for debugging. It is not recommended for production systems. For production systems, please use the Subscriptions API.

Publishing Events

Events for an event type can be published by posting to its "events" collection:

curl -v -XPOST http://localhost:8080/event-types/order.ORDER_RECEIVED/events \
 -H "Content-type: application/json" \
 -d '[{
    "order_number": "24873243241"
  }, {
    "order_number": "24873243242"
  }]'

HTTP/1.1 200 OK  

Read more in the manual.


Nakadi accepts contributions from the open-source community.

Please read

Please also note our


This email address serves as the main contact address for this project.

Bug reports and feature requests are more likely to be addressed if posted as issues here on GitHub.

Download Details:
Author: zalando
Source Code:
License: MIT license


Nakadi: Provides A RESTful API on top Of Kafka

JeroMQ: Pure Java Implementation Of Libzmq


Pure Java implementation of libzmq


  • Based on libzmq 4.1.7.
  • ZMTP/3.0 compliant.
  • The tcp:// and inproc:// protocols are compatible with zeromq.
  • The ipc:// protocol works only between jeromq processes (it uses tcp:// internally).
  • Security mechanisms.
  • Performance that's not too bad compared to native libzmq.
  • Exactly the same developer experience as zeromq and jzmq.


  • ipc:// protocol interoperability with zeromq (Java doesn't support UNIX domain sockets).
  • pgm:// protocol: cannot find a pgm Java implementation.
  • norm:// protocol: cannot find a Java implementation.
  • tipc:// protocol: cannot find a Java implementation.
  • GSSAPI mechanism is not yet implemented.
  • TCP KeepAlive count, idle, and interval cannot be set via Java; set them at the OS level.
  • Interrupting threads is still unsupported: the library is NOT Thread.interrupt safe.


Contributions welcome! See the contribution guidelines for details about the contribution process and useful development tasks.



Add it to your Maven project's pom.xml:


    <!-- for the latest SNAPSHOT -->

    <!-- If you can't find the latest snapshot -->


To generate an Ant build file from pom.xml, issue the following command:

mvn ant:ant

Getting started

Simple example

Here is how you might implement a server that prints the messages it receives and responds to them with "Hello, world!":

import org.zeromq.SocketType;
import org.zeromq.ZMQ;
import org.zeromq.ZContext;

public class hwserver {
    public static void main(String[] args) throws Exception {
        try (ZContext context = new ZContext()) {
            // Socket to talk to clients
            ZMQ.Socket socket = context.createSocket(SocketType.REP);
            socket.bind("tcp://*:5555");

            while (!Thread.currentThread().isInterrupted()) {
                // Block until a message is received
                byte[] reply = socket.recv(0);

                // Print the message
                System.out.println(
                    "Received: [" + new String(reply, ZMQ.CHARSET) + "]"
                );

                // Send a response
                String response = "Hello, world!";
                socket.send(response.getBytes(ZMQ.CHARSET), 0);
            }
        }
    }
}
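A matching request client for such a server might look like the following sketch (it assumes the server is bound to tcp://localhost:5555):

```java
import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

public class hwclient {
    public static void main(String[] args) {
        try (ZContext context = new ZContext()) {
            // REQ socket pairs with the server's REP socket
            ZMQ.Socket socket = context.createSocket(SocketType.REQ);
            socket.connect("tcp://localhost:5555"); // assumed server endpoint

            // send a request, then block until the reply arrives
            socket.send("Hello".getBytes(ZMQ.CHARSET), 0);
            byte[] reply = socket.recv(0);
            System.out.println("Received: " + new String(reply, ZMQ.CHARSET));
        }
    }
}
```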

More examples

The JeroMQ translations of the zguide examples are a good reference for recommended usage.


For API-level documentation, see the Javadocs.

This repo also has a doc folder, which contains assorted "how to do X" guides and other useful information about various topics related to using JeroMQ.

Download Details:
Author: zeromq
Source Code:
License: MPL-2.0 license


JeroMQ: Pure Java Implementation Of Libzmq
Best of Crypto

Best of Crypto


Binance Toolbox in Java

A collection of Java examples that connects to the Binance API endpoints based on binance-connector-java.


Replace LATEST_VERSION with the latest version number and paste the snippet below in pom.xml


Run mvn install where pom.xml is located to install the dependency.

Running a java file

mvn compile exec:java -Dexec.mainClass="<java_file_name>"

API key & secret

To access user information, e.g. account balance, you will need to set up an API key and secret.


Fill in the API key and secret key parameters in the example files.

If the API server returns the error "Invalid API-key, IP, or permissions for action.", please check this topic.
The forum has plenty of topics covering the most common questions; it's the best place to ask or search API-related questions.

Learn More

Download Details:
Author: binance
Source Code:

#Binance #blockchain #java

Binance Toolbox in Java

EventBus: A Publish/Subscribe Event Bus for Android and Java


EventBus is a publish/subscribe event bus for Android and Java.


  • simplifies the communication between components
    • decouples event senders and receivers
    • performs well with Activities, Fragments, and background threads
    • avoids complex and error-prone dependencies and life cycle issues
  • makes your code simpler
  • is fast
  • is tiny (~60k jar)
  • is proven in practice by apps with 1,000,000,000+ installs
  • has advanced features like delivery threads, subscriber priorities, etc.

EventBus in 3 steps

  1. Define events:
public static class MessageEvent { /* Additional fields if needed */ }

2.   Prepare subscribers: Declare and annotate your subscribing method, optionally specify a thread mode:

@Subscribe(threadMode = ThreadMode.MAIN)  
public void onMessageEvent(MessageEvent event) {
    // Do something
}

Register and unregister your subscriber. For example on Android, activities and fragments should usually register according to their life cycle:

@Override
public void onStart() {
    super.onStart();
    EventBus.getDefault().register(this);
}

@Override
public void onStop() {
    super.onStop();
    EventBus.getDefault().unregister(this);
}

3.   Post events:

 EventBus.getDefault().post(new MessageEvent());
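Outside Android, the same three steps work in plain Java. The sketch below puts them together in one class (the class and field names are illustrative); with the default POSTING thread mode, the subscriber runs synchronously on the posting thread:

```java
import org.greenrobot.eventbus.EventBus;
import org.greenrobot.eventbus.Subscribe;

public class Example {
    // 1. The event class: a plain object carrying the payload
    public static class MessageEvent {
        final String message;
        MessageEvent(String message) { this.message = message; }
    }

    // 2. The subscriber method, found via the @Subscribe annotation
    @Subscribe
    public void onMessageEvent(MessageEvent event) {
        System.out.println("Got: " + event.message);
    }

    public static void main(String[] args) {
        Example subscriber = new Example();
        EventBus.getDefault().register(subscriber);

        // 3. Post an event; the registered subscriber above receives it
        EventBus.getDefault().post(new MessageEvent("Hello EventBus"));

        EventBus.getDefault().unregister(subscriber);
    }
}
```

Registering an object scans it for @Subscribe methods; unregistering stops delivery, which is why Android components tie this to their life cycle as shown above.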

Read the full getting started guide.

There are also some examples.

Note: we highly recommend the EventBus annotation processor with its subscriber index. This will avoid some reflection related problems seen in the wild.

Add EventBus to your project

Available on Maven Central.

Android projects:


Java projects:


R8, ProGuard

If your project uses R8 or ProGuard this library ships with embedded rules.

Homepage, Documentation, Links

For more details please check the EventBus website. Here are some direct links you may find useful:





Download Details:
Author: greenrobot
Source Code:
License: Apache-2.0 license


EventBus: A Publish/Subscribe Event Bus for Android and Java

Apache RocketMQ: Fast, and Scalable Distributed Messaging Platform

Apache RocketMQ

Apache RocketMQ is a distributed messaging and streaming platform with low latency, high performance and reliability, trillion-level capacity and flexible scalability.

It offers a variety of features:

  • Messaging patterns including publish/subscribe, request/reply and streaming
  • Financial grade transactional message
  • Built-in fault tolerance and high availability configuration options based on DLedger
  • A variety of cross language clients, such as Java, C/C++, Python, Go, Node.js
  • Pluggable transport protocols, such as TCP, SSL, AIO
  • Built-in message tracing capability, with support for OpenTracing
  • Versatile big-data and streaming ecosystem integration
  • Message retroactivity by time or offset
  • Reliable FIFO and strict ordered messaging in the same queue
  • Efficient pull and push consumption model
  • Million-level message accumulation capacity in a single queue
  • Multiple messaging protocols like JMS and OpenMessaging
  • Flexible distributed scale-out deployment architecture
  • Lightning-fast batch message exchange system
  • Various message filter mechanisms, such as SQL and Tag
  • Docker images for isolated testing and cloud isolated clusters
  • Feature-rich administrative dashboard for configuration, metrics and monitoring
  • Authentication and authorization
  • Free open source connectors, for both sources and sinks
  • Lightweight real-time computing
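As a taste of the publish/subscribe pattern listed above, a minimal synchronous producer using the RocketMQ Java client might look like the sketch below (the producer group, name-server address, and topic are placeholders):

```java
import org.apache.rocketmq.client.producer.DefaultMQProducer;
import org.apache.rocketmq.client.producer.SendResult;
import org.apache.rocketmq.common.message.Message;

public class SyncProducer {
    public static void main(String[] args) throws Exception {
        // A producer group names a set of producers acting as one logical sender
        DefaultMQProducer producer = new DefaultMQProducer("example_group");
        producer.setNamesrvAddr("localhost:9876"); // placeholder name server
        producer.start();

        // Topic, tag, and body; tags enable server-side filtering
        Message msg = new Message("TopicTest", "TagA",
                "Hello RocketMQ".getBytes(java.nio.charset.StandardCharsets.UTF_8));

        // Synchronous send: blocks until the broker acknowledges
        SendResult result = producer.send(msg);
        System.out.println("Send status: " + result.getSendStatus());

        producer.shutdown();
    }
}
```

A consumer would subscribe to the same topic with a tag expression (e.g. "TagA || TagB"), which is where the Tag-based filtering mentioned above comes in.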

Apache RocketMQ Community

Learn it & Contact us


We always welcome new contributions, whether trivial cleanups or big new features; for more details, see here.

Export Control Notice

This distribution includes cryptographic software. The country in which you currently reside may have restrictions on the import, possession, use, and/or re-export to another country, of encryption software. BEFORE using any encryption software, please check your country's laws, regulations and policies concerning the import, possession, or use, and re-export of encryption software, to see if this is permitted. See for more information.

The U.S. Government Department of Commerce, Bureau of Industry and Security (BIS), has classified this software as Export Commodity Control Number (ECCN) 5D002.C.1, which includes information security software using or performing cryptographic functions with asymmetric algorithms. The form and manner of this Apache Software Foundation distribution makes it eligible for export under the License Exception ENC Technology Software Unrestricted (TSU) exception (see the BIS Export Administration Regulations, Section 740.13) for both object code and source code.

The following provides more details on the included cryptographic software:

This software uses Apache Commons Crypto ( to support authentication, and encryption and decryption of data sent across the network between services.

Download Details:
Author: apache
Source Code:
License: Apache-2.0 license


Apache RocketMQ: Fast, and Scalable Distributed Messaging Platform