Writing clean code that nobody mocks is a little difficult, but by cultivating good habits and following best practices, it is not impossible to achieve.
Message-driven systems are those that communicate primarily through asynchronous, non-blocking messages. Messages enable us to build systems that are both resilient and elastic, and therefore responsive under a variety of situations.
JAVA FUTURES – These allow us to isolate blocking operations to a separate thread so that the execution of the main thread continues uninterrupted. The result of a future is handled through a callback.
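As a minimal sketch of that idea using `CompletableFuture` (the class name, the `fetchAsync` method, and the stand-in `slowComputation` are illustrative, not from the original post):

```java
import java.util.concurrent.CompletableFuture;

public class FutureDemo {
    // Stand-in for a slow, blocking computation we want off the main thread.
    static int slowComputation() {
        return 21 * 2;
    }

    // supplyAsync runs the blocking work on a pooled worker thread;
    // thenApply registers a callback that fires once the result is ready,
    // so the main thread never blocks waiting for it.
    public static CompletableFuture<String> fetchAsync() {
        return CompletableFuture
                .supplyAsync(FutureDemo::slowComputation)
                .thenApply(result -> "Result: " + result);
    }

    public static void main(String[] args) {
        fetchAsync().thenAccept(System.out::println); // callback consumes the result
        System.out.println("Main thread continues uninterrupted");
    }
}
```

Note that `main` returns immediately after registering the callback; only a test or caller that explicitly calls `join()` would block for the value.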
I will be covering the concept of the Circuit Breaker in Akka. Before moving forward, just think of a situation when you are making a request to a website and it is taking too much time. You try to refresh the page, and it is still the same.
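Akka ships its own `CircuitBreaker` utility; as a language-agnostic sketch of the underlying idea only (this is not Akka's API, and the class and threshold below are hypothetical), a breaker counts consecutive failures and starts fast-failing once a threshold is crossed, sparing the struggling service:

```java
import java.util.function.Supplier;

// Illustrative circuit-breaker sketch: after `maxFailures` consecutive
// failures the breaker "opens" and returns the fallback immediately
// instead of invoking the slow service. (Real breakers, including
// Akka's, also add timeouts and a half-open reset state.)
public class SimpleCircuitBreaker {
    private final int maxFailures;
    private int consecutiveFailures = 0;

    public SimpleCircuitBreaker(int maxFailures) {
        this.maxFailures = maxFailures;
    }

    public boolean isOpen() {
        return consecutiveFailures >= maxFailures;
    }

    public <T> T call(Supplier<T> task, T fallback) {
        if (isOpen()) return fallback;   // fast-fail: don't touch the service
        try {
            T result = task.get();
            consecutiveFailures = 0;     // a success closes the breaker again
            return result;
        } catch (RuntimeException e) {
            consecutiveFailures++;
            return fallback;
        }
    }
}
```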
In this blog, we will be talking about one of the core modules of resilience4j: Retry. If you are not familiar with the resilience4j library, you can refer to my last blog, Bulkhead with Resilience4j. It is about a two-minute read.
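As a plain-Java illustration of what a Retry module does conceptually (this is not the resilience4j API, which additionally supports wait durations, backoff, and events; the `RetryDemo` class and `retry` method are hypothetical):

```java
import java.util.function.Supplier;

public class RetryDemo {
    // Invoke `task` up to `maxAttempts` times, returning the first
    // successful result; if every attempt fails, rethrow the last failure.
    public static <T> T retry(int maxAttempts, Supplier<T> task) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return task.get();
            } catch (RuntimeException e) {
                last = e;   // remember the failure and try again
            }
        }
        throw last;         // all attempts exhausted
    }
}
```

In resilience4j the same decoration is configured declaratively rather than hand-rolled, which is what the blog goes on to cover.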
Lambda expressions were one of the new features introduced in Java 8. They help clean up verbose code by providing a concise, local way to reduce redundancy, keeping code short and self-explanatory. Beyond saving code, Java's lambda expressions are important in functional programming: they allow developers to write in a functional style by acting as functions without belonging to any class.
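A small sketch of that style (the `LambdaDemo` class and `shout` method are illustrative names, not from the original post):

```java
import java.util.List;
import java.util.stream.Collectors;

public class LambdaDemo {
    public static List<String> shout(List<String> words) {
        // The lambda `w -> w.toUpperCase()` acts as a function without
        // belonging to any class: it implements Function<String, String>
        // in a single, local expression.
        return words.stream()
                    .map(w -> w.toUpperCase())
                    .collect(Collectors.toList());
    }
}
```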
The concept of JPMS, i.e. the Java Platform Module System, came in Java 9. Its development first started in 2005, and in 2017 the concept was finally delivered under the project named Jigsaw.
The world is a stage where all of us are artists, and constant learning is the foundation of success. So, here we are going to learn about a query language.
In Cypress, we can parameterize our tests with the help of scripts. To achieve parameterization in Cypress, we can add scripts with all the required commands to our package.json file.
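For instance, the scripts section of package.json might look like the following (the script names and the `ENV` key are hypothetical; `cypress run --env` and `Cypress.env()` are the real Cypress mechanisms):

```json
{
  "scripts": {
    "cy:staging": "cypress run --env ENV=staging",
    "cy:prod": "cypress run --env ENV=prod"
  }
}
```

Inside a spec, the value passed on the command line can then be read with `Cypress.env('ENV')`, letting the same test run against different targets.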
As a container-orchestration system for automating application deployment, Kubernetes is adopted and widely practiced by many teams, and that is where namespaces come in.
This article will cover the fundamentals of the Scala language. In Scala, we work with values and compose them to obtain other values. The structures we use to compose them are expressions, and they behave exactly as we would expect.
One of the most frequently used transformations in Apache Spark is the join operation. Joins in Apache Spark allow the developer to combine two or more DataFrames based on certain (sortable) keys.
This is the era of containerization and orchestration, where the majority of applications are following the trend of running in a container, which is in turn deployed to a Kubernetes cluster.
Building a Reactive System is all about the balance between consistency and availability and the consequences of picking one over the other. This article mainly focuses on consistency and availability and how they impact the scalability of a system.
This blog pertains to Time Travel and Fail-safe in Snowflake; I will explain everything you need to know about these features with practical examples. So let's get started. Snowflake allows accessing historical data from a point in the past that may have since been modified or deleted.
In this article, we will be talking about Informatica Intelligent Cloud Services (IICS) Application Integration. IICS is a cloud-based data integration platform that provides CDI (Cloud Data Integration), CAI (Cloud Application Integration), and API management between cloud and on-premise applications.
This blog provides some ways through which backup and restore in Jenkins can be carried out. Data loss can be the result of hardware or software failure, data corruption, a human-caused event, or accidental deletion of data.
Welcome back, folks, to this blog series on Spark Structured Streaming. This blog is a continuation of the earlier blog "Understanding Stateful Streaming".
Spark Structured Streaming – Stateful Streaming. Welcome back, folks, to this blog series on Spark Structured Streaming. This blog is a continuation of the earlier blog "Internals of Structured Streaming".
The Knime Analytics Platform provides its users a way to consume messages from Apache Kafka and publish the transformed results back to Kafka. This allows users to integrate their Knime workflows easily with a distributed streaming pub-sub mechanism. With Knime 3.6+, users get a Kafka extension with three new nodes: 1. Kafka Connector 2. Kafka Consumer