Project Helidon is a set of Java libraries for writing microservices. Helidon supports two programming models: Helidon SE, a lightweight microframework, and Helidon MP, an implementation of Eclipse MicroProfile.
In either case your application is just a Java SE program.
There are no Helidon downloads. Just use our Maven releases (groupId io.helidon). See Getting Started at https://helidon.io.
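As a hedged illustration only, a Maven dependency on a Helidon artifact looks like the snippet below; the artifact and version shown are examples, so check the Getting Started guide for the coordinates that match your Helidon flavor and release.

```xml
<dependency>
    <groupId>io.helidon.webserver</groupId>
    <artifactId>helidon-webserver</artifactId>
    <version>2.5.0</version>
</dependency>
```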
macOS:
curl -O https://helidon.io/cli/latest/darwin/helidon
chmod +x ./helidon
sudo mv ./helidon /usr/local/bin/
Linux:
curl -O https://helidon.io/cli/latest/linux/helidon
chmod +x ./helidon
sudo mv ./helidon /usr/local/bin/
Windows:
PowerShell -Command Invoke-WebRequest -Uri "https://helidon.io/cli/latest/windows/helidon.exe" -OutFile "C:\Windows\system32\helidon.exe"
See this document for more info.
You need JDK 17+ to build Helidon.
You also need Maven. We recommend 3.6.1 or newer.
Building the documentation requires the dot utility from Graphviz. This is included in many Linux distributions. For other platforms see https://www.graphviz.org/.
Full build
$ mvn install
Checkstyle
# Cd to the component you want to check
$ mvn validate -Pcheckstyle
Copyright
# Cd to the component you want to check
$ mvn validate -Pcopyright
Spotbugs
# Cd to the component you want to check
$ mvn verify -Pspotbugs
Build Scripts
Build scripts are located in etc/scripts
. These are primarily used by our pipeline, but a couple are handy to use on your desktop to verify your changes.
copyright.sh: Run a full copyright check
checkstyle.sh: Run a full style check
Latest documentation and javadocs are available at https://helidon.io/docs/latest.
Download Details:
Author: oracle
Source Code: https://github.com/oracle/helidon
License: Apache-2.0 license
#java #microservice
Eureka is a RESTful (Representational State Transfer) service that is primarily used in the AWS cloud for service discovery, load balancing, and failover of middle-tier servers. It plays a critical role in Netflix's mid-tier infrastructure.
The build requires Java 8 because some required libraries (servo) are Java 8, but the source and target compatibility are still set to 1.7. Note that tags should be checked out to perform a build.
For any non-trivial change (or a large LoC-wise change), please open an issue first to make sure there's alignment on the scope, the approach, and the viability.
This project is mostly community-driven; feel free to open an issue with your question, and the maintainers will look it over periodically. Issues with the most minimal repro possible have the highest chance of being answered.
Please see the wiki for detailed documentation.
Download Details:
Author: Netflix
Source Code: https://github.com/Netflix/eureka
License: Apache-2.0 license
#java #microservice
A Java client for the Consul HTTP API.
Supports all API endpoints, all consistency modes, and parameters (tags, datacenters, etc.)
ConsulClient client = new ConsulClient("localhost");
// set KV
byte[] binaryData = new byte[] {1,2,3,4,5,6,7};
client.setKVBinaryValue("someKey", binaryData);
client.setKVValue("com.my.app.foo", "foo");
client.setKVValue("com.my.app.bar", "bar");
client.setKVValue("com.your.app.foo", "hello");
client.setKVValue("com.your.app.bar", "world");
// get single KV for key
Response<GetValue> keyValueResponse = client.getKVValue("com.my.app.foo");
System.out.println(keyValueResponse.getValue().getKey() + ": " + keyValueResponse.getValue().getDecodedValue()); // prints "com.my.app.foo: foo"
// get list of KVs for key prefix (recursive)
Response<List<GetValue>> keyValuesResponse = client.getKVValues("com.my");
keyValuesResponse.getValue().forEach(value -> System.out.println(value.getKey() + ": " + value.getDecodedValue())); // prints "com.my.app.foo: foo" and "com.my.app.bar: bar"
//list known datacenters
Response<List<String>> response = client.getCatalogDatacenters();
System.out.println("Datacenters: " + response.getValue());
// register new service
NewService newService = new NewService();
newService.setId("myapp_01");
newService.setName("myapp");
newService.setTags(Arrays.asList("EU-West", "EU-East"));
newService.setPort(8080);
client.agentServiceRegister(newService);
// register new service with associated health check
NewService newService = new NewService();
newService.setId("myapp_02");
newService.setTags(Collections.singletonList("EU-East"));
newService.setName("myapp");
newService.setPort(8080);
NewService.Check serviceCheck = new NewService.Check();
serviceCheck.setScript("/usr/bin/some-check-script");
serviceCheck.setInterval("10s");
newService.setCheck(serviceCheck);
client.agentServiceRegister(newService);
// query for healthy services based on name (returns myapp_01 and myapp_02 if healthy)
HealthServicesRequest request = HealthServicesRequest.newBuilder()
.setPassing(true)
.setQueryParams(QueryParams.DEFAULT)
.build();
Response<List<HealthService>> healthyServices = client.getHealthServices("myapp", request);
// query for healthy services based on name and tag (returns myapp_01 if healthy)
HealthServicesRequest request = HealthServicesRequest.newBuilder()
.setTag("EU-West")
.setPassing(true)
.setQueryParams(QueryParams.DEFAULT)
.build();
Response<List<HealthService>> healthyServices = client.getHealthServices("myapp", request);
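The decoded values in the KV example above exist because Consul's HTTP API returns KV values base64-encoded, and getDecodedValue() reverses that. A minimal, dependency-free sketch of just the decoding step (the class name is illustrative, not part of the library):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class KvDecodeSketch {
    // Consul's HTTP KV API returns the Value field base64-encoded;
    // this mirrors what GetValue.getDecodedValue() does under the covers.
    public static String decode(String base64Value) {
        return new String(Base64.getDecoder().decode(base64Value), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(decode("Zm9v")); // prints "foo"
    }
}
```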
compile "com.ecwid.consul:consul-api:1.4.5"
<dependency>
<groupId>com.ecwid.consul</groupId>
<artifactId>consul-api</artifactId>
<version>1.4.5</version>
</dependency>
Gradle will compile the sources, package the classes (plus sources and javadocs) into jars, and run all tests. The build results will be located in the build/libs/ folder.
Download Details:
Author: Ecwid
Source Code: https://github.com/Ecwid/consul-api
License: Apache-2.0 license
#java #microservice
Build a reactive microservice at your pace, not theirs.
Armeria is your go-to microservice framework for any situation. You can build any type of microservice leveraging your favorite technologies, including gRPC, Thrift, Kotlin, Retrofit, Reactive Streams, Spring Boot and Dropwizard.
It is open-sourced by the creator of Netty and his colleagues at LINE Corporation.
Visit the community to chat with us, ask questions and learn how to contribute.
Download Details:
Author: line
Source Code: https://github.com/line/armeria
License: Apache-2.0 license
#java #microservice
Like many Java programmers, I use IntelliJ IDEA to write code. IDEA provides rich and powerful features, such as automatic code completion, editing and navigation aids, and powerful search. Working with IntelliJ IDEA gives you a great coding experience. Today, I will recommend five excellent third-party plugins; thanks to these plugins, my coding efficiency has greatly improved.
Agenda
(00:00) What you will learn
(01:33) GenerateAllSetter Plugin
(10:06) Maven Helper Plugin
(15:08) Codota AI Autocomplete Plugin
(16:47) GsonFormat Plugin
(19:20) Key Promoter X Plugin
GitHub: https://github.com/Java-Techie-jt
Apollo is a set of Java libraries that we use at Spotify when writing microservices. Apollo includes modules such as an HTTP server and a URI routing system, making it trivial to implement RESTful API services.
Apollo has been used in production at Spotify for a long time. As a part of the work to release version 1.0.0 we moved the development of Apollo into the open.
There are three main libraries in Apollo: apollo-api, apollo-core, and apollo-http-service.
If you need to solve a problem where the main APIs aren't powerful enough, apollo-environment provides more hooks, allowing you to modify the core behaviours of Apollo.
The apollo-http-service library is a standardized assembly of Apollo modules. It incorporates both apollo-api and apollo-core and ties them together with other modules to get a standard api service using http for incoming and outgoing communication.
The apollo-api library is the Apollo library you are most likely to interact with. It gives you the tools you need to define your service routes and your request/reply handlers.
Here, for example, we define that our service will respond to a GET request on the path / with the string "hello world":
public static void init(Environment environment) {
environment.routingEngine()
.registerAutoRoute(Route.sync("GET", "/", requestContext -> "hello world"));
}
The apollo-api library provides several ways to help you define your request/reply handlers. You can specify how responses should be serialized (such as JSON). Read more about this library in the Apollo API Readme.
The apollo-core library manages the lifecycle (loading, starting, and stopping) of your service. You do not usually need to interact directly with apollo-core; think of it merely as "plumbing". For more information about this library, see the Apollo Core Readme.
In addition to the three main Apollo libraries listed above, to help you write tests for your service we have an additional library called apollo-test. It has helpers to set up a service for testing, and to mock outgoing request responses.
Apollo will be distributed as a set of Maven artifacts, which makes it easy to get started no matter the build tool: Maven, Ant + Ivy, or Gradle. Below is a very simple but functional service; more extensive examples are available in the examples directory. Until these are released, you can build and install Apollo from source by running mvn install.
public final class App {
public static void main(String... args) throws LoadingException {
HttpService.boot(App::init, "my-app", args);
}
static void init(Environment environment) {
environment.routingEngine()
.registerAutoRoute(Route.sync("GET", "/", rc -> "hello world"));
}
}
Metadata about an Apollo-based service, such as endpoints, is generated at runtime. At Spotify we use this to keep track of our running services. More info can be found here.
Examples from spotify-api-example:
$ curl http://localhost:8080/_meta/0/endpoints
{
"result": {
"docstring": null,
"endpoints":[
{
"docstring": "Get the latest albums on Spotify.\n\nUses the public Spotify API https://api.spotify.com to get 'new' albums.",
"method": [
"GET"
],
"methodName": "/albums/new[GET]",
"queryParameters":[],
"uri": "/albums/new"
},
{
"docstring": "Responds with a 'pong!' if the service is up.\n\nUseful endpoint for doing health checks.",
"method": [
"GET"
],
"methodName": "/ping[GET]",
"queryParameters": [],
"uri": "/ping"
},
...
]
}
}
$ curl http://localhost:8080/_meta/0/info
{
"result": {
"buildVersion": "spotify-api-example-service 1.3.1",
"componentId": "spotify-api-example-service",
"containerVersion": "apollo-http2.0.0-SNAPSHOT",
"serviceUptime": 778.249,
"systemVersion": "java 1.8.0_111"
}
}
Introduction Website
JavaDocs
Maven site
Download Details:
Author: spotify
Source Code: https://github.com/spotify/apollo
License: Apache-2.0 license
#java #microservice
ActiveJ is a modern Java platform built from the ground up. It is designed to be self-sufficient (no third-party dependencies), simple, lightweight and provides competitive performance. ActiveJ consists of a range of libraries, from dependency injection and high-performance asynchronous I/O (inspired by Node.js), to application servers and big data solutions. You can use ActiveJ to build scalable web applications, distributed systems and use it for high-load data processing.
ActiveJ consists of several modules, which can be logically grouped into the following categories:
Paste this snippet into your terminal...
mvn archetype:generate -DarchetypeGroupId=io.activej -DarchetypeArtifactId=archetype-http -DarchetypeVersion=5.2
... and open the project in your favorite IDE. Then build the application and run it. Open your browser on localhost:8080 to see the "Hello World" message.
public final class HttpHelloWorldExample extends HttpServerLauncher {
@Provides
AsyncServlet servlet() {
return request -> HttpResponse.ok200().withPlainText("Hello, World!");
}
public static void main(String[] args) throws Exception {
Launcher launcher = new HttpHelloWorldExample();
launcher.launch(args);
}
}
Some technical details about the example above:
To learn more about ActiveJ, please visit https://activej.io or follow our 5-minute getting-started guide.
Examples of using the ActiveJ platform and all ActiveJ libraries can be found in the examples module.
Release notes for ActiveJ can be found here.
Download Details:
Author: activej
Source Code: https://github.com/activej/activej
License: Apache-2.0 license
#java #microservice
A Java client for the NATS messaging system.
This is version 2.x of the java-nats library. This version is a ground-up rewrite of the original library. Part of the goal of this rewrite was to address the excessive use of threads; we created a Dispatcher construct to allow applications to control thread creation more intentionally. This version also removes all non-JDK runtime dependencies.
The API is simple to use and highly performant.
Version 2+ uses a simplified versioning scheme. Any issues will be fixed in the incremental version number. As a major release, the major version has been updated to 2 to allow clients to limit their use of this new API. With the addition of drain() we updated to 2.1; NKey support moved us to 2.2.
The NATS server renamed itself from gnatsd to nats-server around 2.4.4. This and other files try to use the new names, but some underlying code may change over several versions. If you are building yourself, please keep an eye out for issues and report them.
Version 2.5.0 adds some back pressure to publish calls to alleviate issues when there is a slow network. This may alter performance characteristics of publishing apps, although the total performance is equivalent.
Previous versions are still available in the repo.
Version 2.11.6 is the last java-nats version which is supported to work with server v2.3.4 and earlier. It will not be officially supported to work with servers after v2.3.4, but should be fine if you don't use the queue behavior advertised in example code NatsJsPushSubQueueDurable.java
and provided with java-nats 2.11.5. The example does not work correctly against server versions after server v2.3.4 due to a significant change made to correct queue behavior that was considered wrong.
If you want to take advantage of the fixes and features provided in the server after v2.3.4, you must upgrade to the release version 2.12.0 or later.
After recent tests we realized that TLS performance is lower than we would like. After researching the problem and possible solutions we came to a few conclusions:
To use conscrypt or wildfly, you will need to add the appropriate jars to your class path and create an SSL context manually. This context can be passed to the Options used when creating a connection. The NATSAutoBench example provides a conscrypt flag which can be used to try out the library; manually including the jar is required.
Our server now supports OCSP stapling. To enable Java to automatically check the stapling when making TLS connections, you must set system properties. This can be done from your command line or from your Java code:
System.setProperty("jdk.tls.client.enableStatusRequestExtension", "true");
System.setProperty("com.sun.net.ssl.checkRevocation", "true");
For more information, see the Oracle Java documentation page on Client-Driven OCSP and OCSP Stapling
Also, there is a detailed OCSP Example that shows how to create SSL contexts enabling OCSP stapling.
The client protocol spec doesn't explicitly state the encoding of subjects. Some clients use ASCII and some use UTF-8, which matches ASCII for the characters A-Z, a-z, and 0-9. Until 2.1.2 the 2.0+ version of the Java client used ASCII for performance reasons. As of 2.1.2 you can choose to support UTF-8 subjects via the Options. Keep in mind that there is a small performance penalty for UTF-8 encoding and decoding in benchmarks, but depending on your application this cost may be negligible. Also, keep in mind that not all clients support UTF-8, so test accordingly.
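A small standalone check of that claim: for subjects made only of ASCII letters, digits, and dots, the two encodings produce identical bytes, which is why the default ASCII path is safe (the class name and subjects are illustrative):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class SubjectEncodingSketch {
    // Returns true when a subject encodes to identical bytes under
    // US-ASCII and UTF-8, i.e. when the encoding choice cannot matter.
    public static boolean sameBytes(String subject) {
        return Arrays.equals(subject.getBytes(StandardCharsets.US_ASCII),
                             subject.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        System.out.println(sameBytes("orders.created"));   // true: pure ASCII
        System.out.println(sameBytes("commandes.créées")); // false: 'é' is multi-byte in UTF-8
    }
}
```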
The NATS server is adding support for a challenge response authentication scheme based on NKeys. Version 2.2.0 of the Java client supports this scheme via an AuthHandler interface. Version 2.3.0 replaced several NKey methods that used strings with methods using char[] to improve security.
The java-nats client is provided in a single jar file, with a single external dependency for the encryption in NKey support. See Building From Source for details on building the library.
You can download the latest jar at https://search.maven.org/remotecontent?filepath=io/nats/jnats/2.15.4/jnats-2.15.4.jar.
The examples are available at https://search.maven.org/remotecontent?filepath=io/nats/jnats/2.15.4/jnats-2.15.4-examples.jar.
To use NKeys, you will need the ed25519 library, which can be downloaded at https://repo1.maven.org/maven2/net/i2p/crypto/eddsa/0.3.0/eddsa-0.3.0.jar.
The NATS client is available in the Maven central repository, and can be imported as a standard dependency in your build.gradle
file:
dependencies {
implementation 'io.nats:jnats:2.15.4'
}
If you need the latest and greatest before Maven central updates, you can use:
repositories {
jcenter()
maven {
url "https://oss.sonatype.org/content/repositories/releases"
}
}
If you need a snapshot version, you must add the url for the snapshots and change your dependency.
repositories {
...
maven {
url "https://oss.sonatype.org/content/repositories/snapshots"
}
}
dependencies {
implementation 'io.nats:jnats:2.15.4-SNAPSHOT'
}
The NATS client is available on the Maven central repository, and can be imported as a normal dependency in your pom.xml file:
<dependency>
<groupId>io.nats</groupId>
<artifactId>jnats</artifactId>
<version>2.15.4</version>
</dependency>
If you need the absolute latest, before it propagates to maven central, you can use the repository:
<repositories>
<repository>
<id>sonatype releases</id>
<url>https://oss.sonatype.org/content/repositories/releases</url>
<releases>
<enabled>true</enabled>
</releases>
</repository>
</repositories>
If you need a snapshot version, you must enable snapshots and change your dependency.
<repositories>
<repository>
<id>sonatype snapshots</id>
<url>https://oss.sonatype.org/content/repositories/snapshots</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
</repository>
</repositories>
<dependency>
<groupId>io.nats</groupId>
<artifactId>jnats</artifactId>
<version>2.15.4-SNAPSHOT</version>
</dependency>
If you are using the 1.x version of java-nats and don't want to upgrade to 2.0.0, please use ranges in your POM file; java-nats-streaming 1.x uses [1.1, 1.9.9) for this.
Sending and receiving with NATS is as simple as connecting to the nats-server and publishing or subscribing for messages. A number of examples are provided in this repo as described in the Examples Readme.
There are five different ways to connect using the Java library:
1. Connect to a local server on the default port:
Connection nc = Nats.connect();
2. Connect to one or more servers using a URL:
//single URL
Connection nc = Nats.connect("nats://myhost:4222");
//comma-separated list of URLs
Connection nc = Nats.connect("nats://myhost:4222,nats://myhost:4223");
3. Connect to one or more servers with a custom configuration:
Options o = new Options.Builder().server("nats://serverone:4222").server("nats://servertwo:4222").maxReconnects(-1).build();
Connection nc = Nats.connect(o);
See the javadoc for a complete list of configuration options.
4. Connect asynchronously, this requires a callback to tell the application when the client is connected:
Options options = new Options.Builder().server(Options.DEFAULT_URL).connectionListener(handler).build();
Nats.connectAsynchronously(options, true);
This feature is experimental, please let us know if you like it.
5. Connect with authentication handler:
AuthHandler authHandler = Nats.credentials(System.getenv("NATS_CREDS"));
Connection nc = Nats.connect("nats://myhost:4222", authHandler);
Once connected, publishing is accomplished via one of three methods:
1. With a subject and message body:
nc.publish("subject", "hello world".getBytes(StandardCharsets.UTF_8));
2. With a subject and message body, as well as a subject for the receiver to reply to:
nc.publish("subject", "replyto", "hello world".getBytes(StandardCharsets.UTF_8));
3. As a request that expects a reply. This method uses a Future to allow the application code to wait for the response. Under the covers a request/reply pair is the same as a publish/subscribe pair; the library simply manages the subscription for you.
Future<Message> incoming = nc.request("subject", "hello world".getBytes(StandardCharsets.UTF_8));
Message msg = incoming.get(500, TimeUnit.MILLISECONDS);
String response = new String(msg.getData(), StandardCharsets.UTF_8);
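The timeout on incoming.get(...) matters: if no responder answers in time, a TimeoutException is thrown. A plain-JDK sketch of handling that, using a CompletableFuture as a stand-in for the NATS request (the class and method names are illustrative):

```java
import java.nio.charset.StandardCharsets;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class RequestTimeoutSketch {
    // Waits up to `millis` for a reply, returning null when no responder
    // answers in time, instead of letting TimeoutException propagate.
    public static String awaitReply(Future<byte[]> incoming, long millis) throws Exception {
        try {
            return new String(incoming.get(millis, TimeUnit.MILLISECONDS), StandardCharsets.UTF_8);
        } catch (TimeoutException e) {
            return null; // no responder answered before the deadline
        }
    }

    public static void main(String[] args) throws Exception {
        Future<byte[]> reply = CompletableFuture.completedFuture("pong".getBytes(StandardCharsets.UTF_8));
        System.out.println(awaitReply(reply, 500)); // prints "pong"
    }
}
```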
All of these methods, as well as the incoming message code use byte arrays for maximum flexibility. Applications can send JSON, Strings, YAML, Protocol Buffers, or any other format through NATS to applications written in a wide range of languages.
The Message object allows you to set a replyTo, but in requests, the replyTo is reserved for internal use as the address for the server to respond to the client with the consumer's reply.
The Java NATS library provides two mechanisms to listen for messages, three if you include the request/reply pattern discussed above.
1. A synchronous subscription where the application polls for messages:
Subscription sub = nc.subscribe("subject");
Message msg = sub.nextMessage(Duration.ofMillis(500));
String response = new String(msg.getData(), StandardCharsets.UTF_8);
2. A Dispatcher that will call application code in a background thread. Dispatchers can manage multiple subjects with a single thread and shared callback.
Dispatcher d = nc.createDispatcher((msg) -> {
String response = new String(msg.getData(), StandardCharsets.UTF_8);
...
});
d.subscribe("subject");
A dispatcher can also accept individual callbacks for any given subscription.
Dispatcher d = nc.createDispatcher((msg) -> {});
Subscription s = d.subscribe("some.subject", (msg) -> {
String response = new String(msg.getData(), StandardCharsets.UTF_8);
System.out.println("Message received (up to 100 times): " + response);
});
d.unsubscribe(s, 100);
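The unsubscribe-after-100 call above caps how many deliveries the handler sees. The same idea in plain Java, with an atomic counter guarding the callback (names are illustrative; this is a sketch of the pattern, not the library's implementation):

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Consumer;

public class CountLimitedHandlerSketch {
    // Wraps a handler so it runs at most `max` times, mirroring
    // the effect of d.unsubscribe(s, 100) on a subscription.
    public static Consumer<String> limit(int max, Consumer<String> delegate) {
        AtomicInteger remaining = new AtomicInteger(max);
        return msg -> {
            if (remaining.getAndDecrement() > 0) {
                delegate.accept(msg);
            }
        };
    }

    public static void main(String[] args) {
        AtomicInteger seen = new AtomicInteger();
        Consumer<String> handler = limit(2, m -> seen.incrementAndGet());
        for (int i = 0; i < 5; i++) {
            handler.accept("msg-" + i);
        }
        System.out.println(seen.get()); // prints 2
    }
}
```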
Publishing and subscribing to JetStream-enabled servers is straightforward. A JetStream-enabled application will connect to a server, establish a JetStream context, and then publish or subscribe. This can be mixed and matched with standard NATS subjects, and JetStream subscribers, depending on configuration, can receive messages both from streams and directly from other NATS producers.
After establishing a connection as described above, create a JetStream Context.
JetStream js = nc.jetStream();
You can pass options to configure the JetStream client, although the defaults should suffice for most users. See the JetStreamOptions
class.
There is no limit to the number of contexts used, although normally one would only require a single context. Contexts may be prefixed to be used in conjunction with NATS authorization.
To publish messages, use the JetStream.publish(...)
API. A stream must be established before publishing. You can publish in either a synchronous or asynchronous manner.
Synchronous:
// create a typical NATS message
Message msg = NatsMessage.builder()
.subject("foo")
.data("hello", StandardCharsets.UTF_8)
.build();
PublishAck pa = js.publish(msg);
See NatsJsPub.java
in the JetStream examples for a detailed and runnable example.
If there is a problem, an exception will be thrown and the message may not have been persisted. Otherwise, the stream name and sequence number are returned in the publish acknowledgement.
There are a variety of publish options that can be set when publishing. When duplicate checking has been enabled on the stream, a message ID should be set. One set of options are expectations. You can set a publish expectation such as a particular stream name, previous message ID, or previous sequence number. These are hints to the server that it should reject messages where these are not met, primarily for enforcing your ordering or ensuring messages are not stored on the wrong stream.
The PublishOptions are immutable, but the builder can be re-used for expectations by clearing the expected values.
For example:
PublishOptions.Builder pubOptsBuilder = PublishOptions.builder()
.expectedStream("TEST")
.messageId("mid1");
PublishAck pa = js.publish("foo", null, pubOptsBuilder.build());
pubOptsBuilder.clearExpected()
.setExpectedLastMsgId("mid1")
.setExpectedLastSequence(1)
.messageId("mid2");
pa = js.publish("foo", null, pubOptsBuilder.build());
See NatsJsPubWithOptionsUseCases.java
in the JetStream examples for a detailed and runnable example.
Asynchronous:
List<CompletableFuture<PublishAck>> futures = new ArrayList<>();
for (int x = 1; x < roundCount; x++) {
// create a typical NATS message
Message msg = NatsMessage.builder()
.subject("foo")
.data("hello", StandardCharsets.UTF_8)
.build();
// Publish a message
futures.add(js.publishAsync(msg));
}
for (CompletableFuture<PublishAck> future : futures) {
... process the futures
}
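One hedged way to fill in that loop: join each future and count failed acknowledgements. PublishAck is replaced with String here so the sketch stays dependency-free; the class and method names are illustrative.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;

public class AwaitAcksSketch {
    // Blocks on each future in turn, returning how many publishes
    // failed to be acknowledged (any exception counts as a failure).
    public static int awaitAll(List<CompletableFuture<String>> futures) {
        int failed = 0;
        for (CompletableFuture<String> f : futures) {
            try {
                f.join(); // waits for the ack (or error) to arrive
            } catch (CompletionException e) {
                failed++;
            }
        }
        return failed;
    }

    public static void main(String[] args) {
        List<CompletableFuture<String>> futures = new ArrayList<>();
        futures.add(CompletableFuture.completedFuture("ack-1"));
        futures.add(CompletableFuture.failedFuture(new RuntimeException("no ack")));
        System.out.println(awaitAll(futures)); // prints 1
    }
}
```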
See the NatsJsPubAsync.java
in the JetStream examples for a detailed and runnable example.
The Message object allows you to set a replyTo, but in publish requests, the replyTo is reserved for internal use as the address for the server to respond to the client with the PublishAck.
There are two methods of subscribing, Push and Pull with each variety having its own set of options and abilities.
Push subscriptions can be synchronous or asynchronous. The server pushes messages to the client.
Asynchronous:
Dispatcher disp = ...;
MessageHandler handler = (msg) -> {
// Process the message.
// Ack the message depending on the ack model
};
PushSubscribeOptions so = PushSubscribeOptions.builder()
.durable("optional-durable-name")
.build();
boolean autoAck = ...
js.subscribe("my-subject", disp, handler, autoAck);
See the NatsJsPushSubWithHandler.java
in the JetStream examples for a detailed and runnable example.
Synchronous:
See NatsJsPushSub.java
in the JetStream examples for a detailed and runnable example.
PushSubscribeOptions so = PushSubscribeOptions.builder()
.durable("optional-durable-name")
.build();
// Subscribe synchronously, then just wait for messages.
JetStreamSubscription sub = js.subscribe("subject", so);
nc.flush(Duration.ofSeconds(5));
Message msg = sub.nextMessage(Duration.ofSeconds(1));
Pull subscriptions are always synchronous. The server organizes messages into a batch which it sends when requested.
PullSubscribeOptions pullOptions = PullSubscribeOptions.builder()
.durable("durable-name-is-required")
.build();
JetStreamSubscription sub = js.subscribe("subject", pullOptions);
Fetch:
List<Message> messages = sub.fetch(100, Duration.ofSeconds(1));
for (Message m : messages) {
// process message
m.ack();
}
The fetch pull is a macro pull that uses advanced pulls under the covers to return a list of messages. The list may be empty or contain at most the batch size. All status messages are handled for you. The client can provide a timeout to wait for the first message in a batch. The fetch call returns when the batch is ready. The timeout may be exceeded if the server sent messages very near the end of the timeout period.
See NatsJsPullSubFetch.java
and NatsJsPullSubFetchUseCases.java
in the JetStream examples for a detailed and runnable example.
Iterate:
Iterator<Message> iter = sub.iterate(100, Duration.ofSeconds(1));
while (iter.hasNext()) {
Message m = iter.next();
// process message
m.ack();
}
The iterate pull is a macro pull that uses advanced pulls under the covers to return an iterator. The iterator may have no messages up to at most the batch size. All status messages are handled for you. The client can provide a timeout to wait for the first message in a batch. The iterate call returns the iterator immediately, but under the covers it will wait for the first message based on the timeout. The timeout may be exceeded if the server sent messages very near the end of the timeout period.
See NatsJsPullSubIterate.java
and NatsJsPullSubIterateUseCases.java
in the JetStream examples for a detailed and runnable example.
Batch Size:
sub.pull(100);
...
Message m = sub.nextMessage(Duration.ofSeconds(1));
An advanced version of pull specifies a batch size. When asked, the server will send whatever messages it has, up to the batch size. If it has no messages it will wait until it has some to send; the client may time out before then. If fewer than the batch size are available, you can ask for more later. Once the entire batch size has been filled, you must make another pull request.
See NatsJsPullSubBatchSize.java
and NatsJsPullSubBatchSizeUseCases.java
in the JetStream examples for detailed and runnable example.
No Wait and Batch Size:
sub.pullNoWait(100);
...
Message m = sub.nextMessage(Duration.ofSeconds(1));
An advanced version of pull also specifies a batch size. When asked, the server will send whatever messages it has, up to the batch size, but will never wait for the batch to fill, and the client will return immediately. If fewer than the batch size are available, you will get what is available plus a 404 status message indicating the server did not have enough messages. You must make a pull request every time. This is an advanced API.
See the NatsJsPullSubNoWaitUseCases.java
in the JetStream examples for a detailed and runnable example.
Expires In and Batch Size:
sub.pullExpiresIn(100, Duration.ofSeconds(3));
...
Message m = sub.nextMessage(Duration.ofSeconds(4));
Another advanced version of pull specifies a maximum time to wait for the batch to fill. The server returns messages when either the batch is filled or the time expires. It's important to set your client's timeout to be longer than the time you've asked the server to expire in. You must make a pull request every time. In subsequent pulls, you will receive multiple 408 status messages, one for each message the previous batch was short; you can ignore these. This is an advanced API.
See NatsJsPullSubExpire.java
and NatsJsPullSubExpireUseCases.java
in the JetStream examples for detailed and runnable examples.
You can now set a Push Subscription option called "Ordered". When you set this flag, the library will take over creation of the consumer and create a subscription that guarantees the order of messages. This consumer will use flow control with a default heartbeat of 5 seconds. Messages will not require acks as the Ack Policy will be set to No Ack. When creating the subscription, there are some restrictions on the consumer configuration settings.
You can however set the deliver policy which will be used to start the subscription.
Subscription creation has many checks to make sure that a valid, operable subscription can be made. The SO group contains validations that can occur when building push or pull subscribe options. The SUB group contains validations that occur when creating a subscription.
Name | Group | Code | Description |
---|---|---|---|
JsSoDurableMismatch | SO | 90101 | Builder durable must match the consumer configuration durable if both are provided. |
JsSoDeliverGroupMismatch | SO | 90102 | Builder deliver group must match the consumer configuration deliver group if both are provided. |
JsSoDeliverSubjectMismatch | SO | 90103 | Builder deliver subject must match the consumer configuration deliver subject if both are provided. |
JsSoOrderedNotAllowedWithBind | SO | 90104 | Bind is not allowed with an ordered consumer. |
JsSoOrderedNotAllowedWithDeliverGroup | SO | 90105 | Deliver group is not allowed with an ordered consumer. |
JsSoOrderedNotAllowedWithDurable | SO | 90106 | Durable is not allowed with an ordered consumer. |
JsSoOrderedNotAllowedWithDeliverSubject | SO | 90107 | Deliver subject is not allowed with an ordered consumer. |
JsSoOrderedRequiresAckPolicyNone | SO | 90108 | Ordered consumer requires Ack Policy None. |
JsSoOrderedRequiresMaxDeliver | SO | 90109 | Max deliver is limited to 1 with an ordered consumer. |
JsSubPullCantHaveDeliverGroup | SUB | 90001 | Pull subscriptions can't have a deliver group. |
JsSubPullCantHaveDeliverSubject | SUB | 90002 | Pull subscriptions can't have a deliver subject. |
JsSubPushCantHaveMaxPullWaiting | SUB | 90003 | Push subscriptions cannot supply max pull waiting. |
JsSubQueueDeliverGroupMismatch | SUB | 90004 | Queue / deliver group mismatch. |
JsSubFcHbNotValidPull | SUB | 90005 | Flow Control and/or heartbeat is not valid with a pull subscription. |
JsSubFcHbNotValidQueue | SUB | 90006 | Flow Control and/or heartbeat is not valid in queue mode. |
JsSubNoMatchingStreamForSubject | SUB | 90007 | No matching streams for subject. |
JsSubConsumerAlreadyConfiguredAsPush | SUB | 90008 | Consumer is already configured as a push consumer. |
JsSubConsumerAlreadyConfiguredAsPull | SUB | 90009 | Consumer is already configured as a pull consumer. |
removed | SUB | 90010 | |
JsSubSubjectDoesNotMatchFilter | SUB | 90011 | Subject does not match consumer configuration filter. |
JsSubConsumerAlreadyBound | SUB | 90012 | Consumer is already bound to a subscription. |
JsSubExistingConsumerNotQueue | SUB | 90013 | Existing consumer is not configured as a queue / deliver group. |
JsSubExistingConsumerIsQueue | SUB | 90014 | Existing consumer is configured as a queue / deliver group. |
JsSubExistingQueueDoesNotMatchRequestedQueue | SUB | 90015 | Existing consumer deliver group does not match requested queue / deliver group. |
JsSubExistingConsumerCannotBeModified | SUB | 90016 | Existing consumer cannot be modified. |
JsSubConsumerNotFoundRequiredInBind | SUB | 90017 | Consumer not found, required in bind mode. |
JsSubOrderedNotAllowOnQueues | SUB | 90018 | Ordered consumer not allowed on queues. |
JsSubPushCantHaveMaxBatch | SUB | 90019 | Push subscriptions cannot supply max batch. |
JsSubPushCantHaveMaxBytes | SUB | 90020 | Push subscriptions cannot supply max bytes. |
There are multiple types of acknowledgements in JetStream:

- Message.ack(): Acknowledges a message.
- Message.ackSync(Duration): Acknowledges a message and waits for a confirmation. When used with deduplication this creates exactly-once delivery guarantees (within the deduplication window). This may significantly impact performance of the system.
- Message.nak(): A negative acknowledgment indicating processing failed and the message should be resent later.
- Message.term(): Never send this message again, regardless of configuration.
- Message.inProgress(): The message is being processed; resets the redelivery timer on the server. The message must be acknowledged later when processing is complete.

Note that an exactly-once delivery guarantee can be achieved by using a consumer with explicit ack mode attached to a stream set up with a deduplication window, and using ackSync to acknowledge messages. The guarantee is only valid for the duration of the deduplication window.
NATS supports TLS 1.2. The server can be configured to verify client certificates or not. Depending on this setting the client has several options.
1. The Java library can use the JVM's default SSLContext, configured via the standard javax.net.ssl system properties:
java -Djavax.net.ssl.keyStore=src/test/resources/keystore.jks -Djavax.net.ssl.keyStorePassword=password -Djavax.net.ssl.trustStore=src/test/resources/truststore.jks -Djavax.net.ssl.trustStorePassword=password io.nats.examples.NatsPub tls://localhost:4443 test "hello world"
where the following properties are being set:
-Djavax.net.ssl.keyStore=src/test/resources/keystore.jks
-Djavax.net.ssl.keyStorePassword=password
-Djavax.net.ssl.trustStore=src/test/resources/truststore.jks
-Djavax.net.ssl.trustStorePassword=password
This method can be used with or without client verification.
2. During development, or behind a firewall where the client can trust the server, the library supports the opentls:// protocol which will use a special SSLContext that trusts all server certificates, but provides no client certificates.
java io.nats.examples.NatsSub opentls://localhost:4443 test 3
This method requires that client verification is off.
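Under the hood, such a trust-everything context can be built with plain JSSE. The following is only a minimal sketch of the idea, not the library's actual implementation (which lives in SSLUtils.java); it should never be used outside development:

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;
import java.security.SecureRandom;
import java.security.cert.X509Certificate;

public class TrustAllContext {
    // Build an SSLContext whose trust manager accepts any server certificate.
    public static SSLContext create() throws Exception {
        TrustManager[] trustAll = new TrustManager[] {
            new X509TrustManager() {
                public X509Certificate[] getAcceptedIssuers() { return new X509Certificate[0]; }
                public void checkClientTrusted(X509Certificate[] chain, String authType) { }
                public void checkServerTrusted(X509Certificate[] chain, String authType) { }
            }
        };
        SSLContext ctx = SSLContext.getInstance("TLSv1.2");
        ctx.init(null, trustAll, new SecureRandom()); // no client key material
        return ctx;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(create().getProtocol());
    }
}
```

Because no key material is supplied, this context provides no client certificates, which is why client verification must be off on the server.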
3. Your code can build an SSLContext to work with or without client verification.
SSLContext ctx = createContext();
Options options = new Options.Builder().server(ts.getURI()).sslContext(ctx).build();
Connection nc = Nats.connect(options);
If you want to try out these techniques, take a look at the README.md for instructions.
Also, here are some places in the code that may help:
https://github.com/nats-io/nats.java/blob/main/src/main/java/io/nats/client/support/SSLUtils.java
https://github.com/nats-io/nats.java/blob/main/src/test/java/io/nats/client/TestSSLUtils.java
The Java client will automatically reconnect if it loses its connection to the nats-server. If given a single server, the client will keep trying that one. If given a list of servers, the client will rotate between them. When the nats servers are in a cluster, they will tell the client about the other servers, so that in the simplest case a client could connect to one server, learn about the cluster and reconnect to another server if its initial one goes down.
To tell the connection about multiple servers for the initial connection, use the servers() method on the options builder, or call server() multiple times.
String[] serverUrls = {"nats://serverOne:4222", "nats://serverTwo:4222"};
Options o = new Options.Builder().servers(serverUrls).build();
Reconnection behavior is controlled via a few options, see the javadoc for the Options.Builder class for specifics on reconnect limits, delays and buffers.
The io.nats.examples package contains two benchmarking tools, modeled after tools in other NATS clients. Both examples run against an existing nats-server. The first, io.nats.examples.benchmark.NatsBench, runs two simple tests: the first simply publishes messages, the second also receives messages. Tests are run with 1 thread/connection per publisher or subscriber. Running on an iMac (2017) with a 4.2 GHz Intel Core i7 and 64GB of memory produced results like:
Starting benchmark(s) [msgs=5000000, msgsize=256, pubs=2, subs=2]
Current memory usage is 966.14 mb / 981.50 mb / 14.22 gb free/total/max
Use ctrl-C to cancel.
Pub Only stats: 9,584,263 msgs/sec ~ 2.29 gb/sec
[ 1] 4,831,495 msgs/sec ~ 1.15 gb/sec (2500000 msgs)
[ 2] 4,792,145 msgs/sec ~ 1.14 gb/sec (2500000 msgs)
min 4,792,145 | avg 4,811,820 | max 4,831,495 | stddev 19,675.00 msgs
Pub/Sub stats: 3,735,744 msgs/sec ~ 912.05 mb/sec
Pub stats: 1,245,680 msgs/sec ~ 304.12 mb/sec
[ 1] 624,385 msgs/sec ~ 152.44 mb/sec (2500000 msgs)
[ 2] 622,840 msgs/sec ~ 152.06 mb/sec (2500000 msgs)
min 622,840 | avg 623,612 | max 624,385 | stddev 772.50 msgs
Sub stats: 2,490,461 msgs/sec ~ 608.02 mb/sec
[ 1] 1,245,230 msgs/sec ~ 304.01 mb/sec (5000000 msgs)
[ 2] 1,245,231 msgs/sec ~ 304.01 mb/sec (5000000 msgs)
min 1,245,230 | avg 1,245,230 | max 1,245,231 | stddev .71 msgs
Final memory usage is 2.02 gb / 2.94 gb / 14.22 gb free/total/max
The second, called io.nats.examples.autobench.NatsAutoBench, runs a series of tests with various message sizes. Running this test on the same iMac resulted in:
PubOnly 0b 10,000,000 8,464,850 msg/s 0.00 b/s
PubOnly 8b 10,000,000 10,065,263 msg/s 76.79 mb/s
PubOnly 32b 10,000,000 12,534,612 msg/s 382.53 mb/s
PubOnly 256b 10,000,000 7,996,057 msg/s 1.91 gb/s
PubOnly 512b 10,000,000 5,942,165 msg/s 2.83 gb/s
PubOnly 1k 1,000,000 4,043,937 msg/s 3.86 gb/s
PubOnly 4k 500,000 1,114,947 msg/s 4.25 gb/s
PubOnly 8k 100,000 460,630 msg/s 3.51 gb/s
PubSub 0b 10,000,000 3,155,673 msg/s 0.00 b/s
PubSub 8b 10,000,000 3,218,427 msg/s 24.55 mb/s
PubSub 32b 10,000,000 2,681,550 msg/s 81.83 mb/s
PubSub 256b 10,000,000 2,020,481 msg/s 493.28 mb/s
PubSub 512b 5,000,000 2,000,918 msg/s 977.01 mb/s
PubSub 1k 1,000,000 1,170,448 msg/s 1.12 gb/s
PubSub 4k 100,000 382,964 msg/s 1.46 gb/s
PubSub 8k 100,000 196,474 msg/s 1.50 gb/s
PubDispatch 0b 10,000,000 4,645,438 msg/s 0.00 b/s
PubDispatch 8b 10,000,000 4,500,006 msg/s 34.33 mb/s
PubDispatch 32b 10,000,000 4,458,481 msg/s 136.06 mb/s
PubDispatch 256b 10,000,000 2,586,563 msg/s 631.49 mb/s
PubDispatch 512b 5,000,000 2,187,592 msg/s 1.04 gb/s
PubDispatch 1k 1,000,000 1,369,985 msg/s 1.31 gb/s
PubDispatch 4k 100,000 403,314 msg/s 1.54 gb/s
PubDispatch 8k 100,000 203,320 msg/s 1.55 gb/s
ReqReply 0b 20,000 9,548 msg/s 0.00 b/s
ReqReply 8b 20,000 9,491 msg/s 74.15 kb/s
ReqReply 32b 10,000 9,778 msg/s 305.59 kb/s
ReqReply 256b 10,000 8,394 msg/s 2.05 mb/s
ReqReply 512b 10,000 8,259 msg/s 4.03 mb/s
ReqReply 1k 10,000 8,193 msg/s 8.00 mb/s
ReqReply 4k 10,000 7,915 msg/s 30.92 mb/s
ReqReply 8k 10,000 7,454 msg/s 58.24 mb/s
Latency 0b 5,000 35 / 49.20 / 134 +/- 0.77 (microseconds)
Latency 8b 5,000 35 / 49.54 / 361 +/- 0.80 (microseconds)
Latency 32b 5,000 35 / 49.27 / 135 +/- 0.79 (microseconds)
Latency 256b 5,000 41 / 56.41 / 142 +/- 0.90 (microseconds)
Latency 512b 5,000 40 / 56.41 / 174 +/- 0.91 (microseconds)
Latency 1k 5,000 35 / 49.76 / 160 +/- 0.80 (microseconds)
Latency 4k 5,000 36 / 50.64 / 193 +/- 0.83 (microseconds)
Latency 8k 5,000 38 / 55.45 / 206 +/- 0.88 (microseconds)
It is worth noting that in both cases memory was not a factor; the processor and OS were more of a consideration. To test this, take a look at the NatsBench results again. Those are run without any constraint on the Java heap and end up doubling the used memory. However, if we run the same test again with the heap constrained to 1GB using -Xmx1g, the performance is comparable, differentiated primarily by "noise" that we can see between test runs with the same settings.
Starting benchmark(s) [msgs=5000000, msgsize=256, pubs=2, subs=2]
Current memory usage is 976.38 mb / 981.50 mb / 981.50 mb free/total/max
Use ctrl-C to cancel.
Pub Only stats: 10,123,382 msgs/sec ~ 2.41 gb/sec
[ 1] 5,068,256 msgs/sec ~ 1.21 gb/sec (2500000 msgs)
[ 2] 5,061,691 msgs/sec ~ 1.21 gb/sec (2500000 msgs)
min 5,061,691 | avg 5,064,973 | max 5,068,256 | stddev 3,282.50 msgs
Pub/Sub stats: 3,563,770 msgs/sec ~ 870.06 mb/sec
Pub stats: 1,188,261 msgs/sec ~ 290.10 mb/sec
[ 1] 594,701 msgs/sec ~ 145.19 mb/sec (2500000 msgs)
[ 2] 594,130 msgs/sec ~ 145.05 mb/sec (2500000 msgs)
min 594,130 | avg 594,415 | max 594,701 | stddev 285.50 msgs
Sub stats: 2,375,839 msgs/sec ~ 580.04 mb/sec
[ 1] 1,187,919 msgs/sec ~ 290.02 mb/sec (5000000 msgs)
[ 2] 1,187,920 msgs/sec ~ 290.02 mb/sec (5000000 msgs)
min 1,187,919 | avg 1,187,919 | max 1,187,920 | stddev .71 msgs
Final memory usage is 317.62 mb / 960.50 mb / 960.50 mb free/total/max
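The free/total/max figures in these reports come from the JVM's Runtime API; the sketch below shows how such a line can be produced (the actual NatsBench formatting may differ):

```java
public class MemReport {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        double mb = 1024.0 * 1024.0;
        // free  = unallocated space within the current heap
        // total = current heap size, grows up to max
        // max   = ceiling set by -Xmx
        System.out.printf("Current memory usage is %.2f mb / %.2f mb / %.2f mb free/total/max%n",
                rt.freeMemory() / mb, rt.totalMemory() / mb, rt.maxMemory() / mb);
    }
}
```

With -Xmx1g, maxMemory() reports roughly 1GB, which is why total and max converge in the constrained run above.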
The build depends on Gradle, and contains gradlew
to simplify the process. After cloning, you can build the repository and run the tests with a single command:
> git clone https://github.com/nats-io/nats.java
> cd nats.java
> ./gradlew clean build
Or to build without tests
> ./gradlew clean build -x test
This will place the class files in a new build
folder. To just build the jar:
> ./gradlew jar
The jar will be placed in build/libs
.
You can also build the java doc, and the samples jar using:
> ./gradlew javadoc
> ./gradlew exampleJar
The java doc is located in build/docs
and the example jar is in build/libs
. Finally, to run the tests with the coverage report:
> ./gradlew test jacocoTestReport
which will create a folder called build/reports/jacoco
containing the file index.html
you can open and use to browse the coverage. Keep in mind we have focused on library test coverage, not coverage for the examples.
Many of the tests run nats-server on a custom port. If nats-server is in your path they should just work, but in cases where it is not, or an IDE running tests has issues with the path, you can specify the nats-server location with the environment variable nats_server_path.
The raw TLS test certs are in src/test/resources/certs and come from the nats.go repository. However, the java client also needs keystore.jks and truststore.jks files for creating a context. These can be created using:
> cd src/test/resources
> keytool -keystore truststore.jks -alias CARoot -import -file certs/ca.pem -storepass password -noprompt -storetype pkcs12
> cat certs/client-key.pem certs/client-cert.pem > combined.pem
> openssl pkcs12 -export -in combined.pem -out cert.p12
> keytool -importkeystore -srckeystore cert.p12 -srcstoretype pkcs12 -deststoretype pkcs12 -destkeystore keystore.jks
> keytool -keystore keystore.jks -alias CARoot -import -file certs/ca.pem -storepass password -noprompt
> rm cert.p12 combined.pem
Download Details:
Author: nats-io
Source Code: https://github.com/nats-io/nats.java
License: Apache-2.0 license
Smack is an open-source, highly modular, easy to use, XMPP client library written in Java for Java SE compatible JVMs and Android.
Being a pure Java library, it can be embedded into your applications to create anything from a full XMPP instant messaging client to simple XMPP integrations such as sending notification messages and presence-enabling devices. Smack and XMPP allow you to easily exchange data in various ways e.g., fire-and-forget, publish-subscribe, between human and non-human endpoints (M2M, IoT, …).
More information is provided by the Overview.
Start with having a look at the Documentation and the Javadoc.
Instructions on how to use Smack in your Java or Android project are provided in the Smack Readme and Upgrade Guide.
Smack is a collaborative effort of many people. Some are paid, e.g., by their employer or a third party, for their contributions. But many contribute in their spare time for free. While we try to provide the best possible XMPP library for Android and Java SE-compatible execution environments by following state-of-the-art software engineering practices, the API may not always perfectly fit your requirements. Hence we welcome contributions and encourage discussion about how Smack can be further improved. We also provide paid services ranging from XMPP/Smack related consulting to designing and developing features to accommodate your needs. Please contact Florian Schmaus for further information.
Only a few users have access to file bugs in the tracker. New users should:
Please search for your issues in the bug tracker before reporting.
The developers hang around in smack@conference.igniterealtime.org. You may use this link to join the room via inverse.chat. Remember that it may take some time (~hours) to get a response.
You can also reach us via the Smack Support Forum if you have questions or need support, or the Smack Developers Forum if you want to discuss Smack development.
If you want to start developing for Smack and eventually contribute code back, then please have a look at the Guidelines for Smack Developers and Contributors. The guidelines also contain development quickstart instructions.
Ignite Realtime
Ignite Realtime is an Open Source community composed of end-users and developers around the world who are interested in applying innovative, open-standards-based RealTime Collaboration to their businesses and organizations. We're aimed at disrupting proprietary, non-open standards-based systems and invite you to participate in what's already one of the biggest and most active Open Source communities.
Smack - an Ignite Realtime community project.
Download Details:
Author: igniterealtime
Source Code: https://github.com/igniterealtime/Smack
License: Apache-2.0 license
This repository contains source code of the RabbitMQ Java client. The client is maintained by the RabbitMQ team at Pivotal.
This package is published to several Maven package repositories:
These client releases are independent of RabbitMQ server releases and can be used with RabbitMQ server 3.x. They require Java 8 or higher.
<dependency>
<groupId>com.rabbitmq</groupId>
<artifactId>amqp-client</artifactId>
<version>5.15.0</version>
</dependency>
compile 'com.rabbitmq:amqp-client:5.15.0'
As of 1 January 2021 the 4.x branch is no longer supported.
These client releases are independent of RabbitMQ server releases and can be used with RabbitMQ server 3.x. They require Java 6 or higher.
<dependency>
<groupId>com.rabbitmq</groupId>
<artifactId>amqp-client</artifactId>
<version>4.12.0</version>
</dependency>
compile 'com.rabbitmq:amqp-client:4.12.0'
You can experiment with the client from JShell. This requires Java 9 or later.
git clone https://github.com/rabbitmq/rabbitmq-java-client.git
cd rabbitmq-java-client
./mvnw test-compile jshell:run
...
import com.rabbitmq.client.*
ConnectionFactory cf = new ConnectionFactory()
Connection c = cf.newConnection()
...
c.close()
/exit
git clone git@github.com:rabbitmq/rabbitmq-java-client.git
cd rabbitmq-java-client
make deps
./mvnw clean package -Dmaven.test.skip -P '!setup-test-cluster'
Run the broker:
docker run -it --rm --name rabbitmq -p 5672:5672 rabbitmq:3.8
Launch "essential" tests (takes about 10 minutes):
./mvnw verify -P '!setup-test-cluster' \
-Drabbitmqctl.bin=DOCKER:rabbitmq \
-Dit.test=ClientTests,FunctionalTests,ServerTests
Launch a single test:
./mvnw verify -P '!setup-test-cluster' \
-Drabbitmqctl.bin=DOCKER:rabbitmq \
-Dit.test=DeadLetterExchange
The tests can run against a local broker as well. The rabbitmqctl.bin
system property must point to the rabbitmqctl
program:
./mvnw verify -P '!setup-test-cluster' \
-Dtest-broker.A.nodename=rabbit@$(hostname) \
-Drabbitmqctl.bin=/path/to/rabbitmqctl \
-Dit.test=ClientTests,FunctionalTests,ServerTests
To launch a single test:
./mvnw verify -P '!setup-test-cluster' \
-Dtest-broker.A.nodename=rabbit@$(hostname) \
-Drabbitmqctl.bin=/path/to/rabbitmqctl \
-Dit.test=DeadLetterExchange
See Contributing and How to Run Tests.
This library uses semantic versioning.
See the RabbitMQ Java libraries support page for the support timeline of this library.
Download Details:
Author: rabbitmq
Source Code: https://github.com/rabbitmq/rabbitmq-java-client
License: Unknown and 3 other licenses found
Nakadi is a distributed event bus broker that implements a RESTful API abstraction on top of Kafka-like queues, which can be used to send, receive, and analyze streaming data in real time, in a reliable and highly available manner.
One of the most prominent use cases of Nakadi is to decouple micro-services by building data streams between producers and consumers.
The main users of Nakadi are developers and analysts. Nakadi provides features such as REST-based integration, multiple consumers, ordered delivery, an interactive UI, full management, security, data-quality assurance, abstraction of big-data technology, and push-model-based consumption.
Nakadi is in active development and is currently in production inside Zalando as the backbone of our microservices, sending millions of events daily with a throughput of more than hundreds of gigabytes per second. In one line, Nakadi is a high-scalability data stream for enterprise engineering teams.
More detailed information can be found on our website.
The goal of Nakadi (ნაკადი means stream in Georgian) is to provide an event broker infrastructure to:
This allows microservices teams to maintain service boundaries, and not directly depend on any specific message broker technology. Access can be managed individually for every queue and secured using OAuth and custom authorization plugins.
Event types can be defined with Event type schemas and managed via a registry. All events will be validated against the schema before publishing. This guarantees data quality and consistency for consumers.
Once a publisher sends an event using a simple HTTP POST, consumers can be pushed to via a streaming HTTP connection, allowing near real-time event processing. The consumer connection has keepalive controls and support for managing stream offsets using subscriptions.
Features are grouped into Stream, Schema, Security, and Operations areas.
Read more about the latest development on the releases page.
The zalando-nakadi organisation contains many useful related projects like
Read our contribution guidelines on how to submit issues and pull requests, then get Nakadi up and running locally using Docker:
The Nakadi server is a Java 8 Spring Boot application. It uses Kafka 1.1.1 as its broker and PostgreSQL 9.5 as its supporting database.
Nakadi requires recent versions of docker and docker-compose. In particular, docker-compose >= v1.7.0 is required. See Install Docker Compose for information on installing the most recent docker-compose version.
The project is built with Gradle. The ./gradlew
wrapper script will bootstrap the right Gradle version if it's not already installed.
To get the source, clone the git repository.
git clone https://github.com/zalando/nakadi.git
The Gradle setup is fairly standard; the main tasks are:

- ./gradlew build: run a build and test
- ./gradlew clean: clean down the build

Some other useful tasks are:
- ./gradlew startNakadi: build Nakadi and start docker-compose services: nakadi, postgresql, zookeeper and kafka
- ./gradlew stopNakadi: shutdown docker-compose services
- ./gradlew startStorages: start docker-compose services: postgres, zookeeper and kafka (useful for development purposes)
- ./gradlew fullAcceptanceTest: start Nakadi configured for acceptance tests and run acceptance tests

For working with an IDE, the eclipse IDE task is available, and you'll be able to import the build.gradle into IntelliJ IDEA directly.
Note: Nakadi Docker for ARM processors is available here
From the project's home directory you can start Nakadi via Gradle:
./gradlew startNakadi
This will build the project and run docker compose with 4 services:
To stop the running Nakadi server:
./gradlew stopNakadi
Please read the manual for the full API usage details.
The Nakadi API allows the publishing and consuming of events over HTTP. To do this the producer must register an event type with the Nakadi schema registry.
This example shows a minimalistic undefined
category event type with a wildcard schema:
curl -v -XPOST http://localhost:8080/event-types -H "Content-type: application/json" -d '{
"name": "order.ORDER_RECEIVED",
"owning_application": "order-service",
"category": "undefined",
"schema": {
"type": "json_schema",
"schema": "{ \"additionalProperties\": true }"
}
}'
Note: This is not a recommended category and schema. It should be used only for testing.
You can read more about this in the manual.
You can open a stream for an event type via the events
sub-resource:
curl -v http://localhost:8080/event-types/order.ORDER_RECEIVED/events
HTTP/1.1 200 OK
{"cursor":{"partition":"0","offset":"82376-000087231"},"events":[{"order_number": "ORDER_001"}]}
{"cursor":{"partition":"0","offset":"82376-000087232"}}
{"cursor":{"partition":"0","offset":"82376-000087232"},"events":[{"order_number": "ORDER_002"}]}
{"cursor":{"partition":"0","offset":"82376-000087233"},"events":[{"order_number": "ORDER_003"}]}
You will see the events when you publish them from another console, for example. The records without an events field are keep-alive messages.
Note: the low-level API should be used only for debugging. It is not recommended for production systems. For production systems, please use the Subscriptions API.
Events for an event type can be published by posting to its "events" collection:
curl -v -XPOST http://localhost:8080/event-types/order.ORDER_RECEIVED/events \
-H "Content-type: application/json" \
-d '[{
"order_number": "24873243241"
}, {
"order_number": "24873243242"
}]'
HTTP/1.1 200 OK
Read more in the manual.
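The same publish call can be issued from Java with the JDK's built-in HttpClient. This is only a sketch assuming a local Nakadi at localhost:8080, as in the curl example; the request is just built and inspected here, since actually sending it requires a running server:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class PublishSketch {
    public static void main(String[] args) {
        // A one-event batch, matching the curl example
        String body = "[{\"order_number\": \"24873243241\"}]";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/event-types/order.ORDER_RECEIVED/events"))
                .header("Content-type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        // To actually send (requires a running Nakadi):
        // java.net.http.HttpClient.newHttpClient().send(request,
        //         java.net.http.HttpResponse.BodyHandlers.ofString());
        System.out.println(request.method() + " " + request.uri());
    }
}
```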
Nakadi accepts contributions from the open-source community.
Please read CONTRIBUTING.md
.
Please also note our CODE_OF_CONDUCT.md
.
This email address serves as the main contact address for this project.
Bug reports and feature requests are more likely to be addressed if posted as issues here on GitHub.
Download Details:
Author: zalando
Source Code: https://github.com/zalando/nakadi
License: MIT license
Pure Java implementation of libzmq
Contributions welcome! See CONTRIBUTING.md for details about the contribution process and useful development tasks.
Add it to your Maven project's pom.xml
:
<dependency>
<groupId>org.zeromq</groupId>
<artifactId>jeromq</artifactId>
<version>0.5.2</version>
</dependency>
<!-- for the latest SNAPSHOT -->
<dependency>
<groupId>org.zeromq</groupId>
<artifactId>jeromq</artifactId>
<version>0.5.3-SNAPSHOT</version>
</dependency>
<!-- If you can't find the latest snapshot -->
<repositories>
<repository>
<id>sonatype-nexus-snapshots</id>
<url>https://oss.sonatype.org/content/repositories/snapshots</url>
<releases>
<enabled>false</enabled>
</releases>
<snapshots>
<enabled>true</enabled>
</snapshots>
</repository>
</repositories>
To generate an ant build file from pom.xml
, issue the following command:
mvn ant:ant
Here is how you might implement a server that prints the messages it receives and responds to them with "Hello, world!":
import org.zeromq.SocketType;
import org.zeromq.ZMQ;
import org.zeromq.ZContext;
public class hwserver
{
public static void main(String[] args) throws Exception
{
try (ZContext context = new ZContext()) {
// Socket to talk to clients
ZMQ.Socket socket = context.createSocket(SocketType.REP);
socket.bind("tcp://*:5555");
while (!Thread.currentThread().isInterrupted()) {
// Block until a message is received
byte[] reply = socket.recv(0);
// Print the message
System.out.println(
"Received: [" + new String(reply, ZMQ.CHARSET) + "]"
);
// Send a response
String response = "Hello, world!";
socket.send(response.getBytes(ZMQ.CHARSET), 0);
}
}
}
}
The JeroMQ translations of the zguide examples are a good reference for recommended usage.
For API-level documentation, see the Javadocs.
This repo also has a doc folder, which contains assorted "how to do X" guides and other useful information about various topics related to using JeroMQ.
Download Details:
Author: zeromq
Source Code: https://github.com/zeromq/jeromq
License: MPL-2.0 license
A collection of Java examples that connect to the Binance API endpoints, based on binance-connector-java.
Replace LATEST_VERSION with the latest version number and paste the snippet below into pom.xml
<dependency>
<groupId>io.github.binance</groupId>
<artifactId>binance-connector-java</artifactId>
<version>LATEST_VERSION</version>
</dependency>
Run mvn install
where pom.xml
is located to install the dependency.
mvn compile exec:java -Dexec.mainClass="<java_file_name>"
To get a user's information, e.g. account balance, you will need to set up an API key/secret from:

Production: https://www.binance.com/en/my/settings/api-management
Testnet: https://testnet.binance.vision/

Fill in the API/secret key parameters in PrivateConfig.java
If the API server returns the error "Invalid API-key, IP, or permissions for action.", please check this topic: https://dev.binance.vision/t/why-do-i-see-this-error-invalid-api-key-ip-or-permissions-for-action/93
This forum has plenty of topics covering most common questions; it's the best place to ask or search for API-related questions.
Download Details:
Author: binance
Source Code: https://github.com/binance/binance-toolbox-java
License:
EventBus is a publish/subscribe event bus for Android and Java.
EventBus...
1. Define events:

public static class MessageEvent { /* Additional fields if needed */ }
2. Prepare subscribers: Declare and annotate your subscribing method, optionally specify a thread mode:
@Subscribe(threadMode = ThreadMode.MAIN)
public void onMessageEvent(MessageEvent event) {
// Do something
}
Register and unregister your subscriber. For example on Android, activities and fragments should usually register according to their life cycle:
@Override
public void onStart() {
super.onStart();
EventBus.getDefault().register(this);
}
@Override
public void onStop() {
super.onStop();
EventBus.getDefault().unregister(this);
}
3. Post events:
EventBus.getDefault().post(new MessageEvent());
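The register/post/unregister flow above can be illustrated with a tiny hand-rolled bus in plain Java. This is only a sketch of the publish/subscribe pattern, not the greenrobot library, which additionally dispatches by event class and supports thread modes:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Toy single-event-type bus illustrating register/post/unregister.
public class MiniBus<E> {
    private final List<Consumer<E>> subscribers = new CopyOnWriteArrayList<>();

    public void register(Consumer<E> subscriber)   { subscribers.add(subscriber); }
    public void unregister(Consumer<E> subscriber) { subscribers.remove(subscriber); }
    public void post(E event)                      { subscribers.forEach(s -> s.accept(event)); }

    public static void main(String[] args) {
        MiniBus<String> bus = new MiniBus<>();
        Consumer<String> printer = msg -> System.out.println("Received: " + msg);
        bus.register(printer);
        bus.post("MessageEvent");   // delivered to the registered subscriber
        bus.unregister(printer);
        bus.post("ignored");        // no subscribers left, nothing printed
    }
}
```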
Read the full getting started guide.
There are also some examples.
Note: we highly recommend the EventBus annotation processor with its subscriber index. This will avoid some reflection-related problems seen in the wild.
Available on Maven Central.
Android projects:
implementation("org.greenrobot:eventbus:3.3.1")
Java projects:
implementation("org.greenrobot:eventbus-java:3.3.1")
<dependency>
<groupId>org.greenrobot</groupId>
<artifactId>eventbus-java</artifactId>
<version>3.3.1</version>
</dependency>
If your project uses R8 or ProGuard this library ships with embedded rules.
For more details please check the EventBus website. Here are some direct links you may find useful:
Download Details:
Author: greenrobot
Source Code: https://github.com/greenrobot/EventBus
License: Apache-2.0 license
Apache RocketMQ is a distributed messaging and streaming platform with low latency, high performance and reliability, trillion-level capacity and flexible scalability.
It offers a variety of features:
We always welcome new contributions, whether for trivial cleanups, big new features or other material rewards; more details can be found here.
This distribution includes cryptographic software. The country in which you currently reside may have restrictions on the import, possession, use, and/or re-export to another country, of encryption software. BEFORE using any encryption software, please check your country's laws, regulations and policies concerning the import, possession, or use, and re-export of encryption software, to see if this is permitted. See http://www.wassenaar.org/ for more information.
The U.S. Government Department of Commerce, Bureau of Industry and Security (BIS), has classified this software as Export Commodity Control Number (ECCN) 5D002.C.1, which includes information security software using or performing cryptographic functions with asymmetric algorithms. The form and manner of this Apache Software Foundation distribution makes it eligible for export under the License Exception ENC Technology Software Unrestricted (TSU) exception (see the BIS Export Administration Regulations, Section 740.13) for both object code and source code.
The following provides more details on the included cryptographic software:
This software uses Apache Commons Crypto (https://commons.apache.org/proper/commons-crypto/) to support authentication, and encryption and decryption of data sent across the network between services.
Download Details:
Author: apache
Source Code: https://github.com/apache/rocketmq
License: Apache-2.0 license