In this post, we intend to bridge the gap between the world of Reactor and the powerful trends on the horizon of programming. Here we’ll discuss and demonstrate concepts, advantages, drawbacks and an approach for building an API using Project Reactor.
There are currently several programming paradigms; in this post we’ll discuss reactive programming, which focuses on the asynchronous management of finite and infinite data flows. Reactor is a reactive programming library for Java which provides the basis for developing non-blocking applications, thus representing a change in how we think about an application’s execution model. Reactor was developed by Pivotal, a software and services company active worldwide in many areas of software engineering.
We would like to share some highlights, new trends in Java, and the world of programming that we have learned thanks to our project experiences and research. Our aim was to achieve better results with more complex applications but in a simplified way.
Taking as an example the development of an event manager with Twitter, we will explain how we consume and save responses to various requests in a non-blocking manner.
To talk about Project Reactor we must first define what Reactive Programming is. It’s a paradigm or microarchitecture built around the routing and consumption of Streams (data emitted over time): data flows and the propagation of the changes generated in the application are handled so that the system provides fast and consistent response times (responsive), remains responsive in error situations (resilient) and under workload increases (elastic), all based on the exchange of asynchronous messages (message-driven). Reactive Programming follows the Observer design pattern: when an object changes state, the other objects are notified and updated, thus reducing the inefficient use of resources. More can be learned about this topic in the reactive programming documentation.
Within this conceptual frame of reactive programming, we can begin to examine Project Reactor. As mentioned above, this is a library that exhibits the following characteristics:
Backpressure grants the Consumer of an asynchronous stream the ability to tell the Producer how much data should be sent, preventing events from being emitted at a rate faster than the consumer’s processing capabilities. Reactor provides several strategies to reduce the amount of data that gets sent, including engaging buffers and using the windowing technique, which allows a program to analyze the data from the last n seconds every m seconds.
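As a rough sketch of these strategies, the following uses the count-based variants of buffering and windowing (the time-based overloads take a Duration instead of a count):

```java
import java.util.List;
import reactor.core.publisher.Flux;

public class BackpressureDemo {
    public static void main(String[] args) {
        // buffer(3): group the stream into lists of at most three elements,
        // so a slower consumer receives fewer, larger messages.
        List<List<Integer>> buffers = Flux.range(1, 7)
                .buffer(3)
                .collectList()
                .block();
        System.out.println(buffers); // [[1, 2, 3], [4, 5, 6], [7]]

        // window(3): the same grouping idea, but each group is itself a Flux
        // that can be analyzed independently (the windowing technique).
        List<Integer> windowSums = Flux.range(1, 6)
                .window(3)
                .flatMap(window -> window.reduce(Integer::sum))
                .collectList()
                .block();
        System.out.println(windowSums); // [6, 15]
    }
}
```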
Summing up, Project Reactor can sustain a high message rate while keeping a very low memory footprint. Thanks to these features it is suitable for creating efficient event-based applications, allowing them to cope with more requests at the same time in an asynchronous way, which is ideal for high-latency applications.
The main artifact Project Reactor employs is reactor-core, a reactive library that focuses on the Reactive Streams specification and targets Java 8. Reactor has two reactive types that implement the Publisher interface but also provide a broad set of operators: Flux and Mono. These types allow applications to serve more requests at the same time, and both support non-blocking backpressure.
Both types offer standard factory methods for their creation, among which we find create, defer and error.
Flux
Source: Reactor 3 Reference Guide. https://projectreactor.io/docs/core/release/reference/
A Flux object represents a reactive sequence of 0 to N elements, and also allows the generation of sources from arbitrary callback types. The following code fragment shows one of the most common and basic examples for creating a Flux.
// Creates a Flux containing integer values
Flux<Integer> integerFlux = Flux.just(1, 2, 3);
// Creates a Flux containing string values
Flux<String> stringFlux = Flux.just("Hello", "World", "Wolox");
// Creates a Flux from an already existing list
List<String> stringList = Arrays.asList("Hello", "World", "Wolox");
Flux<String> fluxFromList = Flux.fromIterable(stringList);
// It works the same with Java Streams (which are not reactive).
Stream<String> stringStream = stringList.stream();
Flux<String> fluxFromStream = Flux.fromStream(stringStream);
In the snippet, we see the creation of Fluxes of integers and strings, and even from a Java stream. The Flux.just(…) method creates a Flux that emits the specified elements, which are captured at the time of instance creation. The Flux.fromIterable(…) method creates a Flux that emits the elements contained in the provided Iterable, creating a new iterator for each Subscriber. Later on, we’ll see more elaborate Flux implementations used to subscribe to and consume data from an external API.
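Keep in mind that a Flux does nothing until someone subscribes to it. As a minimal sketch, the values above can be transformed and consumed like this:

```java
import reactor.core.publisher.Flux;

public class FluxSubscribeDemo {
    public static void main(String[] args) {
        // Nothing is emitted until subscribe(...) is called.
        Flux.just("Hello", "World", "Wolox")
                .map(String::toUpperCase)        // transform each element
                .subscribe(System.out::println); // consume the results
    }
}
```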
Mono
Source: Reactor 3 Reference Guide. https://projectreactor.io/docs/core/release/reference/
A Mono object represents a single or empty result (0…1), generated deterministically from scratch or from arbitrary callback types. Below, some of the most common ways of creating a Mono are shown. The Mono.empty() method creates a Mono that completes without emitting any element.
// Creating a Mono containing "Hello World Wolox"
Mono<String> helloWorldWolox = Mono.just("Hello World Wolox");
// Creating an empty Mono
Mono<T> empty = Mono.empty();
// Creating a mono from a Callable
Mono<String> helloWorldWoloxCallable = Mono.fromCallable(() -> "Hello World Wolox");
// Same with Java 8 method reference
Mono<User> user = Mono.fromCallable(UserService::fetchAnyUser);
Schedulers
Reactor uses a Scheduler to determine the context in which arbitrary tasks are executed, providing the guarantees required by a reactive Stream. We can also use or create efficient Schedulers for subscribeOn and publishOn. It’s possible to use multiple Reactor instances instantiated with different Schedulers.
// Insert a person, calling your DAO
Mono<Person> personWrapper = Mono
        .fromCallable(() -> personDao.insertPerson(person));
return personWrapper.subscribeOn(Schedulers.elastic());

// Get a person by identification
Mono<Person> personWrapper = Mono
        .fromCallable(() -> personDao.findByRut(id));
return personWrapper.subscribeOn(Schedulers.elastic());
The snippet shows two examples that use Schedulers. The first defines a variable of type Mono that calls the DAO to insert a Person object. The Mono.fromCallable method expects the supplied callable (in this case insertPerson) to return a value of type T, and creates a non-blocking Mono. This method even captures errors implicitly and maps them to a Mono.error(…).
It should be clarified that Reactor implements its own error handling, and as just mentioned, it does so through the error() method. The Mono.defer(…) method works similarly to fromCallable but with a difference: its supplier must return a Mono. Since it doesn’t capture errors for us, we need to handle them ourselves. Once the Mono is created, it’s returned through a subscribeOn, passing Schedulers.elastic() as parameter, which returns a shared Scheduler instance; multiple calls to this function will return the same Scheduler.
In other words, it dynamically creates ExecutorService-based workers as necessary and reuses the inactive ones, caching the worker thread pools. Worker pools that remain inactive for 60 seconds are disposed of. The elastic() method is a useful way to give a blocking process its own threads so that other resources aren’t compromised, which makes it a common default choice for wrapping blocking calls. Note that although the method in the other example returns a Mono, it follows the same behavior as the previous case in order to abstract its data.
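A minimal sketch of the fromCallable versus defer difference, using a hypothetical riskyLookup() method that fails (both the method name and the fallback value are illustrative, standing in for a real DAO call):

```java
import reactor.core.publisher.Mono;

public class DeferDemo {
    // Hypothetical blocking call that fails, standing in for a DAO method.
    static String riskyLookup() {
        throw new IllegalStateException("lookup failed");
    }

    public static void main(String[] args) {
        // fromCallable captures the thrown exception and maps it to
        // Mono.error(...), so a recovery operator downstream can handle it.
        String recovered = Mono.fromCallable(DeferDemo::riskyLookup)
                .onErrorReturn("fallback")
                .block();
        System.out.println(recovered); // fallback

        // defer expects the supplier itself to return a Mono, so error cases
        // must be produced (or handled) explicitly by our own code.
        String deferred = Mono.defer(() -> Mono.just("deferred value"))
                .block();
        System.out.println(deferred); // deferred value
    }
}
```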
Now that we have a clear concept of Reactor we are going to demonstrate and develop an implementation of how it works with the consumption of external APIs, in this case with Twitter API. We will observe how it responds asynchronously to all the tweets that are captured from the application and show how it processes the information in a non-blocking way.
We must first configure the build.gradle file with the necessary dependencies for the example. In the following snippet we see the Reactor and Twitter dependencies (we’ll use Twitter4J), Spring Boot, among others. IntelliJ IDEA is used as the IDE.
dependencies {
    implementation 'org.springframework.boot:spring-boot-starter'
    implementation 'org.json:json:20180813'
    testImplementation 'org.springframework.boot:spring-boot-starter-test'
    testCompile group: 'com.github.javafaker', name: 'javafaker', version: '0.15'
    testCompile group: 'org.hamcrest', name: 'hamcrest-all', version: '1.3'
    testCompile group: 'io.projectreactor', name: 'reactor-test'
    compile 'org.springframework.boot:spring-boot-starter-data-jpa'
    compile 'org.springframework.boot:spring-boot-starter-web'
    compile 'com.h2database:h2'
    compile 'org.springframework.boot:spring-boot-starter-thymeleaf'
    compile group: 'com.google.guava', name: 'guava', version: '27.0-jre'
    compile group: 'com.fasterxml.jackson.datatype', name: 'jackson-datatype-jsr310', version: '2.9.8'
    compile group: 'io.projectreactor', name: 'reactor-core', version: '3.2.6.RELEASE'
    compile "io.projectreactor.netty:reactor-netty:0.8.5.RELEASE"
    compile group: 'org.twitter4j', name: 'twitter4j-stream', version: '4.0.2'
}
Coding
A service called TwitterService is created, which ensures that our stream always exists, in this case with subscribers on Twitter. It does so through the ConnectableFlux abstract class, which allows subscribers to accumulate before connecting to the data source. This means that calling subscribe() doesn’t start broadcasting immediately, so we can add several subscriptions in advance. The service then validates the Twitter stream, calling a method that configures and builds it by adding credentials and the methods to implement, and finally attaches the stream to a Listener; all of this happens when the stream has no connection and hasn’t started accumulating subscriptions.
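This accumulate-then-connect behavior can be sketched in isolation (unrelated to Twitter): publish() turns a Flux into a ConnectableFlux that only starts emitting once connect() is called.

```java
import reactor.core.publisher.ConnectableFlux;
import reactor.core.publisher.Flux;

public class ConnectableFluxDemo {
    public static void main(String[] args) {
        ConnectableFlux<Integer> hot = Flux.range(1, 3).publish();

        // Subscriptions accumulate, but nothing is emitted yet.
        hot.subscribe(i -> System.out.println("first subscriber: " + i));
        hot.subscribe(i -> System.out.println("second subscriber: " + i));

        // connect() triggers emission to every accumulated subscriber.
        hot.connect();
    }
}
```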
There are several properties to configure Twitter4J, either by creating an instance of the ConfigurationBuilder class to do it manually, or it could also be done through the creation of a twitter4j.properties file. The following example was done in the service using the first method. In this configuration, it’s necessary to create the consumerKey, consumerSecret, accessToken and accessTokenSecret, each with their respective credentials.
public class TwitterService {

    private static ConnectableFlux<Status> twitterStream;

    public static synchronized ConnectableFlux<Status> getTwitterStream() {
        if (twitterStream == null) {
            initTwitterStream();
        }
        return twitterStream;
    }

    private static void initTwitterStream() {
        Flux<Status> stream = Flux.create(emitter -> {
            StatusListener listener = new StatusListener() {
                @Override
                public void onException(Exception e) {
                    emitter.error(e);
                }

                @Override
                public void onDeletionNotice(StatusDeletionNotice arg) {
                }

                @Override
                public void onScrubGeo(long userId, long upToStatusId) {
                }

                @Override
                public void onStallWarning(StallWarning warning) {
                }

                @Override
                public void onStatus(Status status) {
                    emitter.next(status);
                }

                @Override
                public void onTrackLimitationNotice(int numberOfLimitedStatuses) {
                    System.out.println(numberOfLimitedStatuses);
                }
            };

            ConfigurationBuilder cb = new ConfigurationBuilder();
            cb.setDebugEnabled(true)
                .setOAuthConsumerKey("YOUR_ACCESS_KEY")
                .setOAuthConsumerSecret("YOUR_ACCESS_SECRET")
                .setOAuthAccessToken("YOUR_ACCESS_TOKEN")
                .setOAuthAccessTokenSecret("YOUR_ACCESS_TOKEN_SECRET");

            TwitterStream twitterStream = new TwitterStreamFactory(cb.build()).getInstance();
            twitterStream.addListener(listener);
            twitterStream.sample();
        });
        twitterStream = stream.publish();
        twitterStream.connect();
    }
}
Following this, an instance of the TwitterStream interface is created, passing the configuration made in the previous step as a parameter. Then a StatusListener is attached, working as a stream reader. With the sample() method we start listening to a random sample of all public statuses. The final result is that the stream.publish() method turns our Flux into a ConnectableFlux, while the connect() method sends a connection request to the API to open up the data stream and receive the tweets. The data transmission model opens a pipeline through which data is sent as it occurs, with an indefinite lifetime.
With the service configured, a controller called TwitterController is created to handle communication between the external service and the application, producing different results from the data of the captured tweets. Four endpoints are created, each with different behavior and data, to show the different actions that can be achieved with Reactor and the Twitter API.
@RestController
@RequestMapping("/api/tweets")
public class TwitterController {

    @GetMapping(path = "/filtered", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<String> filtered() {
        ConnectableFlux<Status> flux = TwitterService.getTwitterStream();
        return flux
            .filter(status -> status.getText().contains("the"))
            .map(status -> status.getText());
    }

    @GetMapping(path = "/feed", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<String> feed() {
        ConnectableFlux<Status> flux = TwitterService.getTwitterStream();
        return flux.map(status -> status.getText());
    }

    @GetMapping(path = "/onePerSecond", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<String> onePerSecond() {
        ConnectableFlux<Status> flux = TwitterService.getTwitterStream();
        Flux<Status> filtered = flux.filter(status -> {
            Place place = status.getPlace();
            if (place != null) {
                return place.getCountryCode().equalsIgnoreCase("us");
            }
            return false;
        });
        return filtered
            .map(status -> status.getCreatedAt().toGMTString() + " "
                + status.getPlace().getCountryCode() + " " + status.getText());
    }

    @GetMapping(path = "/grouped", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<String> grouped() {
        ConnectableFlux<Status> flux = TwitterService.getTwitterStream();
        Flux<Status> filtered = flux.filter(status -> {
            Place place = status.getPlace();
            if (place != null) {
                return place.getCountryCode().equalsIgnoreCase("us");
            }
            return false;
        });
        return Flux.interval(Duration.ofSeconds(1))
            .zipWith(filtered, (tick, status) -> status)
            .map(status -> status.getText());
    }
}
The first endpoint, filtered(), is a GET method that returns a Flux. In this function we get the connection to the external API to gather all tweets, then apply a filter to keep only those statuses containing the word “the”, and finally map the result and return the obtained tweets. In the second endpoint, feed(), the implemented Twitter service is called again and all published statuses are returned through a map.
In the third endpoint, onePerSecond(), the same process of obtaining all tweets through the service is followed: a Flux is created, filtering by the tweet’s place or location when present. Place is an interface provided by the Twitter API that extends TwitterResponse, among other interfaces; the snippet shows the methods it offers. The endpoint validates that a Place exists and only returns tweets located within the United States. Finally, using the map function, parsed tweets are returned in a cleaner format.
public interface Place extends TwitterResponse, Comparable<Place>, java.io.Serializable {
    String getName();
    String getStreetAddress();
    String getCountryCode();
    String getId();
    String getCountry();
    String getPlaceType();
    String getURL();
    String getFullName();
    String getBoundingBoxType();
    GeoLocation[][] getBoundingBoxCoordinates();
    String getGeometryType();
    GeoLocation[][] getGeometryCoordinates();
    Place[] getContainedWithIn();
}
In the fourth and final endpoint, grouped(), tweets are released one per second. The filter for tweets originating in the United States is defined once again, but this time the Flux.interval(…) method is used to produce a tick every second, together with the zipWith(…) method that merges the ticks with the previously filtered stream before returning its publications.
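The rate-limiting effect of combining interval with zipWith can be sketched in isolation, using a short interval and a fixed list standing in for live tweets:

```java
import java.time.Duration;
import java.util.List;
import reactor.core.publisher.Flux;

public class ZipWithIntervalDemo {
    public static void main(String[] args) {
        // zipWith pairs each tick with the next element, so elements are
        // released at the interval's pace; the zipped Flux completes when
        // the shorter source (the fixed list) completes.
        List<String> out = Flux.interval(Duration.ofMillis(100))
                .zipWith(Flux.just("tweet-1", "tweet-2", "tweet-3"),
                         (tick, tweet) -> tweet)
                .collectList()
                .block();
        System.out.println(out); // [tweet-1, tweet-2, tweet-3]
    }
}
```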
As illustrated by these examples, we can go deeper and achieve fascinating things in today’s applications using Reactor, especially for microservices, a growing trend in digital transformation. I therefore invite readers to continue engaging with this amazing topic. In summary, I’d like to highlight the advantages and drawbacks of reactive programming with Reactor.
Advantages
We can serve many requests or messages with only one or a few threads.
Callbacks can be executed asynchronously, potentially saving resources on our side.
It achieves loosely coupled programming and tends to isolate faults or errors, so it’s easily scalable and can anticipate the number of events it will have to handle.
Through the efficient use of resources it does much more with less; specifically, we can process higher workloads with fewer threads.
Drawbacks
More intensive memory usage is needed to store large data flows, as they are kept alive for a long time.
It differs from conventional programming and may be hard to understand in the beginning.
Most of the complexity must be dealt with when declaring the service.
It doesn’t work well for applications with very little data flow, as it can make simple programs unnecessarily complex, or possibly even hurt performance.
Although Reactor has only been around for a short time, it has achieved great impact on applications that suffer from high latency, enabling better processing and response performance. This makes it ideal for the programming world’s new trends, bringing reactive programming within reach in Java.
It’s also proving itself to be a strong library: nowadays we have devices and applications connected to the Internet 24/7, and we need to show information almost instantaneously to millions of users, generating very intense loads. That’s why I see great potential in Reactor responding in an optimal and correct way to these massive data demands, meaning that the application responds as the user expects.
Finally, I’d like to thank Matias de Santi for being one of the contributors to the code proposed in this post.
OpenJDK, or Open Java Development Kit, is a free, open-source implementation of the Java Platform, Standard Edition (Java SE). It contains the virtual machine, the Java Class Library, and the Java compiler. The difference between OpenJDK and the Oracle JDK is that OpenJDK is the open-source reference implementation, while the Oracle JDK is a build based on OpenJDK that is not open source and requires a license to use.
In this article, we will be installing OpenJDK on CentOS 8.
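As a sketch, on CentOS 8 the installation typically comes down to a dnf command (the package name below assumes OpenJDK 11 from the default AppStream repository):

```shell
# Install OpenJDK 11 (the -devel package includes the javac compiler)
sudo dnf install -y java-11-openjdk-devel

# Verify the installation
java -version
javac -version
```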
According to some surveys, such as JetBrains’s great survey, Java 8 is currently the most used version of Java, despite being a 2014 release.
What you are reading is one in a series of articles titled ‘Going beyond Java 8,’ inspired by the contents of my book, Java for Aliens. These articles will guide you step-by-step through the most important features introduced to the language, starting from version 9. The aim is to make you aware of how important it is to move forward from Java 8, explaining the enormous advantages that the latest versions of the language offer.
In this article, we will talk about the most important new feature introduced with Java 10. Officially called local variable type inference, this feature is better known as the introduction of the word var. Despite the complicated name, it is actually quite a simple feature to use. However, some observations need to be made before we can see the impact that the introduction of var has on other pre-existing characteristics.
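A minimal sketch of how var reads in practice, using only standard-library code:

```java
import java.util.List;

public class VarDemo {
    public static void main(String[] args) {
        var message = "inference";        // inferred as String
        var numbers = List.of(1, 2, 3);   // inferred as List<Integer>
        var total = 0;                    // inferred as int

        for (var n : numbers) {           // var also works in for loops
            total += n;
        }
        System.out.println(message + " " + total); // inference 6
    }
}
```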
If you’re here for the top tips, we assume you’re past the “how to learn Java” part and have already boarded your flight of learning Java. In this lesson, apart from just listing some do’s and don’ts, we’ll be asking some basic questions that will help you align your path with what’s best for you.
Determining your goal and creating a learning strategy is more significant than you probably think. Your ambition, execution, and consistency can make or break your career. So if you want to become a full-time Java developer, following a roadmap goes without saying.
Mastering the basics doesn’t mean learning syntax by heart while being unable to do anything with it. It actually means you’re comfortable working with keywords, know the language’s conventions, use variables and loops smartly, know how to choose a data structure for a given problem, can apply an object-oriented approach (since Java is an object-oriented language), and understand encapsulation and how to work with it. With so much content freely available on the web, newbies are likely to fall prey to trying to learn too much in too short a period of time. However, you need to understand that you can’t build a sustainable building on a weak foundation. Hence, it’s always helpful to give due time to all the concepts in order to truly “master” them.
On March 16th, 2021, Java 16 was GA. With this new release, tons of new exciting features have been added. Check out the release notes to know more about these changes in detail. This article’s focus will be on Java Records, which got delivered with JEP 395. Records were first introduced in JDK 14 as a preview feature proposed by JEP 359, and with JDK 15, they remained in preview with JEP 384. However, with JDK 16, Records are no longer in preview.
I have picked Records because they are definitely the most favored feature added in Java 16, according to this Twitter poll by Java Champion Mala Gupta.
I also conducted a similar survey, but it was focused on features from Java 8 onwards. The results were not unexpected, as Java 8 is still widely used. This is unfortunate, though, as tons of new features and improvements have been added in newer Java versions. But in terms of features, Java 8 was definitely a game-changer from a developer perspective.
So let’s discuss what the fuss is about Java Records.
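Before diving in, here is a minimal sketch of what a record gives you out of the box: a constructor, accessors, equals/hashCode and toString, all generated from the header alone.

```java
public class RecordDemo {
    // One line replaces a whole class of fields, constructor and accessors.
    record Point(int x, int y) { }

    public static void main(String[] args) {
        Point p = new Point(3, 4);
        System.out.println(p.x());                     // 3
        System.out.println(p.equals(new Point(3, 4))); // true
        System.out.println(p);                         // Point[x=3, y=4]
    }
}
```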
The key to reactive programming is to react. You don’t say “do this now,” you say “do this when.” The “when” applies to when you have work to do. The work comes to you as events: a message on a message bus or an HTTP request.
First, I should explain the reason reactive programming is important. One of the benefits of Java is relatively easy threading. That has made threads the predominant model for handling events. When you get an event, you dispatch a thread to handle it. The problem is when you get a lot of events, you wind up creating a lot of threads. Threads can be expensive; each one has stack memory and switching threads requires a system call and context switch.
The Node.js system only had a single thread when it was created. (It introduced worker threads in v10.5.0). And yet it became a very popular system for building servers that could handle thousands of requests. It does this by using an event-driven idiom for handling requests. Because it only had a single thread, most libraries that implement things like HTTP servers or clients, database clients, or other I/O intensive libraries had to use the single event loop of the single thread.
But Java used the thread-per-request model, which has become a bottleneck in scaling. Languages like Scala, so named because it could be more scalable, were created with extensive frameworks to enable event-driven or asynchronous I/O. Java 8 introduced CompletableFuture. Java 9 introduced the Flow class with its Publisher and Subscriber. These are used as the basis for the two fully reactive frameworks, RxJava and Project Reactor.
Because Java has used the thread-per-request model for so long, most libraries that deal with I/O will block. They can block because they expect to own the thread they’re running on and won’t block other requests. But now that we can use the asynchronous model, they become a problem. And because Java now has a hybrid model, it’s hard to tell when and how you should use threads when in a mostly asynchronous system.
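To make the Flow API concrete, here is a minimal sketch using the standard library's SubmissionPublisher; the Subscriber requests items one at a time, which is exactly the backpressure handshake the reactive frameworks build on:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class FlowDemo {
    public static void main(String[] args) throws InterruptedException {
        List<String> received = new CopyOnWriteArrayList<>();
        CountDownLatch done = new CountDownLatch(1);

        SubmissionPublisher<String> publisher = new SubmissionPublisher<>();
        publisher.subscribe(new Flow.Subscriber<>() {
            private Flow.Subscription subscription;

            @Override
            public void onSubscribe(Flow.Subscription s) {
                subscription = s;
                s.request(1); // backpressure: ask for one item at a time
            }

            @Override
            public void onNext(String item) {
                received.add(item);
                subscription.request(1); // ready for the next one
            }

            @Override
            public void onError(Throwable t) {
                done.countDown();
            }

            @Override
            public void onComplete() {
                done.countDown();
            }
        });

        publisher.submit("request-1");
        publisher.submit("request-2");
        publisher.close();
        done.await(); // delivery is asynchronous, so wait for completion
        System.out.println(received); // [request-1, request-2]
    }
}
```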