
The JVM Architecture Explained

Originally published by Jackson Joseraj at https://dzone.com

Every Java developer knows that bytecode is executed by the JRE (Java Runtime Environment). But many don't know that the JRE is the implementation of the Java Virtual Machine (JVM), which analyzes the bytecode, interprets it, and executes it. As developers, it is very important that we know the architecture of the JVM, as it enables us to write code more efficiently. In this article, we will learn more deeply about the JVM architecture in Java and the different components of the JVM.

What Is the JVM?

A virtual machine is a software implementation of a physical machine. Java was developed with the concept of WORA (Write Once, Run Anywhere), which runs on a VM. The compiler compiles a Java source file into a .class file, and that .class file is fed into the JVM, which loads and executes it. Below is a diagram of the architecture of the JVM.

JVM Architecture Diagram
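To make the compile-and-run flow above concrete, here is a minimal sketch (the class name is arbitrary): compile it once with javac, and the resulting bytecode runs on any JVM.

public class Hello {
    public static void main(String[] args) {
        // javac Hello.java produces platform-independent Hello.class bytecode;
        // java Hello makes the JVM load and execute that class file.
        System.out.println("Hello from the JVM");
    }
}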

How Does the JVM Work?

As shown in the above architecture diagram, the JVM is divided into three main subsystems:

  1. ClassLoader Subsystem
  2. Runtime Data Area
  3. Execution Engine

1. ClassLoader Subsystem

Java's dynamic class loading functionality is handled by the ClassLoader subsystem. It loads, links, and initializes a class file when the class is referred to for the first time at runtime, not at compile time.

1.1 Loading

Classes are loaded by this component. Three ClassLoaders help achieve this: Bootstrap ClassLoader, Extension ClassLoader, and Application ClassLoader.

  1. Bootstrap ClassLoader – Responsible for loading classes from the bootstrap classpath, which is nothing but rt.jar. This loader is given the highest priority.
  2. Extension ClassLoader – Responsible for loading classes inside the ext folder (jre\lib\ext).
  3. Application ClassLoader – Responsible for loading classes from the application-level classpath, the path mentioned in the CLASSPATH environment variable, etc.

The above ClassLoaders follow the delegation hierarchy algorithm while loading class files: a loader first delegates a load request to its parent and only attempts to load the class itself if the parent cannot. You can observe the hierarchy at runtime, as shown below.
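This is a minimal sketch, assuming a runtime earlier than Java 9 (from Java 9 onward, the Extension ClassLoader is replaced by the platform class loader and rt.jar no longer exists):

public class ClassLoaderDemo {
    public static void main(String[] args) {
        // Application classes are loaded by the Application (system) class loader.
        ClassLoader appLoader = ClassLoaderDemo.class.getClassLoader();
        System.out.println("Application loader: " + appLoader);

        // Its parent is the Extension class loader.
        System.out.println("Parent loader: " + appLoader.getParent());

        // Core classes such as java.lang.String come from the Bootstrap loader,
        // which is implemented natively and therefore reported as null.
        System.out.println("String's loader: " + String.class.getClassLoader());
    }
}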

1.2 Linking

  1. Verify – The bytecode verifier checks whether the generated bytecode is proper; if verification fails, we get a verification error (java.lang.VerifyError).
  2. Prepare – Memory is allocated for all static variables and assigned default values.
  3. Resolve – All symbolic memory references are replaced with the original references from the Method Area.

1.3 Initialization

This is the final phase of class loading; here, all static variables are assigned their original values, and the static block is executed.
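Here is a minimal sketch of the Prepare and Initialization phases together; the class and variable names are illustrative:

public class InitDemo {
    // During Prepare, COUNT is allocated and given the default value 0;
    // during Initialization it is assigned its original value, 42.
    static int COUNT = 42;

    // The static block also runs during Initialization, the first time
    // the class is actively used.
    static {
        System.out.println("Static block runs once, COUNT = " + COUNT);
    }

    public static void main(String[] args) {
        System.out.println("main sees COUNT = " + COUNT);
    }
}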

2. Runtime Data Area

The Runtime Data Area is divided into five major components:

  1. Method Area – All class-level data is stored here, including static variables. There is only one Method Area per JVM, and it is a shared resource.
  2. Heap Area – All objects and their corresponding instance variables and arrays are stored here. There is also only one Heap Area per JVM. Since the Method Area and Heap Area are shared by multiple threads, the data stored there is not thread-safe.
  3. Stack Area – For every thread, a separate runtime stack is created. For every method call, one entry called a Stack Frame is made in the stack memory. All local variables are created in the stack memory. The Stack Area is thread-safe since it is not a shared resource. Each Stack Frame is divided into three subentities:
    • Local Variable Array – Stores the local variables involved in the method and their corresponding values.
    • Operand Stack – If any intermediate operation needs to be performed, the Operand Stack acts as a runtime workspace for it.
    • Frame Data – All symbols corresponding to the method are stored here. In the case of an exception, the catch block information is also maintained in the Frame Data.
  4. PC Registers – Each thread has its own PC Register, which holds the address of the currently executing instruction; once the instruction executes, the PC Register is updated with the next instruction.
  5. Native Method Stacks – A Native Method Stack holds native method information. For every thread, a separate Native Method Stack is created.
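Here is a minimal sketch contrasting the shared Heap with the per-thread Stack; all names are illustrative:

public class RuntimeAreasDemo {
    // Class-level data lives in the Method Area; the StringBuilder object it
    // references lives in the Heap and is visible to all threads.
    static final StringBuilder shared = new StringBuilder();

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            // 'local' is created in the current thread's own Stack Frame,
            // so each thread gets an independent copy: no synchronization needed.
            int local = 0;
            for (int i = 0; i < 1000; i++) local++;
            synchronized (shared) { // Heap data is shared, so it must be guarded
                shared.append(Thread.currentThread().getName())
                      .append(" counted ").append(local).append('\n');
            }
        };
        Thread t1 = new Thread(task, "t1");
        Thread t2 = new Thread(task, "t2");
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.print(shared);
    }
}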

3. Execution Engine

The bytecode that has been loaded into the Runtime Data Area is executed by the Execution Engine, which reads the bytecode and executes it piece by piece.

  1. Interpreter – The interpreter interprets the bytecode quickly but executes it slowly. Its disadvantage is that when one method is called multiple times, a new interpretation is required every time.
  2. JIT Compiler – The JIT Compiler neutralizes the disadvantage of the interpreter. The Execution Engine uses the interpreter to convert the bytecode, but when it finds repeated code it uses the JIT Compiler, which compiles the entire bytecode and changes it to native code. This native code is used directly for repeated method calls, which improves the performance of the system. The JIT Compiler consists of:
    • Intermediate Code Generator – Produces intermediate code.
    • Code Optimizer – Responsible for optimizing the intermediate code generated above.
    • Target Code Generator – Responsible for generating machine code, i.e. native code.
    • Profiler – A special component responsible for finding hotspots, i.e. methods that are called multiple times.
  3. Garbage Collector – Collects and removes unreferenced objects. Garbage collection can be triggered by calling System.gc(), but execution is not guaranteed; see the sketch below.
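A minimal sketch of the System.gc() hint; finalize() is deprecated since Java 9 and is used here only to observe collection:

public class GcDemo {
    @Override
    protected void finalize() {
        System.out.println("GcDemo instance was collected");
    }

    public static void main(String[] args) throws InterruptedException {
        GcDemo d = new GcDemo();
        d = null;          // the object is now unreferenced, hence eligible for GC
        System.gc();       // a request, not a command: execution is not guaranteed
        Thread.sleep(500); // give the collector a chance to run before exiting
    }
}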

Java Native Interface (JNI): JNI interacts with the Native Method Libraries and provides the native libraries required by the Execution Engine.

Native Method Libraries: A collection of the native libraries required by the Execution Engine.


#java


Serverless Vs Microservices Architecture - A Deep Dive

Companies need to be thinking long-term before even starting a software development project. These needs are addressed at the level of architecture: business owners want to ensure agility, scalability, and performance.

The top contenders for scalable solutions are serverless and microservices. Both architectures prioritize security but approach it in their own ways. Let's take a look at how businesses can benefit from adopting serverless architecture vs. microservices, and examine their differences, advantages, and use cases.

#serverless #microservices #architecture #software-architecture #serverless-architecture #microservice-architecture #serverless-vs-microservices #hackernoon-top-story


MicroServices Architecture Explained Simply | Why to Use MicroServices Architecture?

In this video, we will see what the microservices architecture is and why developers prefer it over the monolithic architecture. The architecture is explained in a simple way using a day-to-day life example.

#microservices #microservices architecture #explained


Road to Simplicity: Hexagonal Architecture [Part One]

Writing software taught me that well-written software is simple software.

So I started to think about how to achieve simplicity in a methodological way. This is the first story of a series about this methodology. Naturally, it's a snapshot, because it's in constant evolution.

Simplicity

A definition of simplicity is:

The quality or condition of being easy to understand or do.

Oxford dictionary (https://www.lexico.com/en/definition/simplicity)

So, simple software is software that is easy to understand.

After all, software is written by humans, for humans. This implies that it should be understandable. Simplicity guarantees that understanding it isn't an intellectual pain.

Software solves a problem, so to build the former you should understand the latter. But to build simple software, you should understand the problem clearly.

First step: architecture

On Martin Fowler's blog there is a deep definition of architecture and its explanation:

“Architecture is about the important stuff. Whatever that is.”

On first blush, that sounds trite, but I find it carries a lot of richness.

It means that the heart of thinking architecturally about software is to decide what is important, (i.e. what is architectural), and then expend energy on keeping those architectural elements in good condition.

Ultimately, the important stuff is about the problem being solved. In other words, about the software domain.

So we need an architecture that allows us to express - clearly - the software domain.

I think that the hexagonal architecture (a.k.a. ports and adapter architecture) is an ideal candidate.

It’s based on layered architecture, so the outer layer depends on the inner layer. Each layer is represented as a hexagon.

Here is a UML-like diagram to express the concepts below:

In this architecture, the innermost hexagon is dedicated to the software domain. Here we define domain objects and we express clearly:

  • what the domain does, as input ports or use cases (I prefer the latter term because of its expressiveness).
  • what the domain needs in order to fulfill the use cases, as output ports.

Conceptually, on the sides of the domain layer there are the use case and output port interfaces.

The communication between the outer layers and the domain layer happens through these interfaces.

The outer layers provide the output port implementations and use the use case interfaces.

The implementations and the use case clients are called adapters, because they adapt our interfaces to a specific technology.

This relation is an instance of the dependency inversion principle. Simply put: the high-level concept, the domain, doesn't rely on a specific technology; instead, low-level concepts depend upon high-level concepts. In other words, our code is technology agnostic.

As you can see, the concepts expressed in the outer layers are just details. The really important stuff, the domain, is isolated and expressed clearly.

Code

A little project accompanies this series to show this methodology. It’s written in Java with the reactive paradigm from the beginning. For this reason the ReactiveX library is also used in the domain layer.

The software analyzes the capabilities of the machine (e.g. the Java version, the network speed, and so on) and exposes them through a REST API.

It's inspired by a real-world piece of software that I wrote for work.

The first step is to define the innermost hexagon.

We can already identify:

  • the main use case, expressed as GetCapabilitiesUseCase
  • the object that describes the machine capabilities, expressed as Capabilities

The use case is an interface:

(if you have never used ReactiveX: a Single means that the method will asynchronously return an object or an error)

public interface GetCapabilitiesUseCase {
  Single<Capabilities> getCapabilities();
}

The Capabilities objects are immutable (precisely, they're value objects), and there is an associated builder (I'm using Lombok annotations to generate the code):

@RequiredArgsConstructor
@Value
@Builder
public class Capabilities {
  private final String javaVersion;
  private final Long networkSpeed;
}
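To make the adapter idea concrete, here is a hypothetical output port and an adapter for it; neither type belongs to the article's actual code, and the import of Single should match whichever ReactiveX version the project uses:

// Output port: what the domain needs, expressed as an interface in the
// innermost hexagon (hypothetical name).
public interface CapabilitiesProbe {
  Single<Capabilities> probe();
}

// Adapter in an outer layer: implements the output port with a specific
// technology, here plain system properties (hypothetical name).
public class SystemPropertiesProbe implements CapabilitiesProbe {
  @Override
  public Single<Capabilities> probe() {
    return Single.fromCallable(() -> Capabilities.builder()
        .javaVersion(System.getProperty("java.version"))
        .networkSpeed(0L) // measuring network speed would be another adapter's job
        .build());
  }
}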

#architecture #software-architecture #programming #java #hexagonal-architecture #reactive-programming #software-development #software-engineering

Event-Driven Architecture as a Strategy

Event-driven architecture, or EDA, is an integration pattern where applications are oriented around publishing events and responding to events. It provides five key benefits to modern application architecture: scalability, resilience, agility, data sharing, and cloud-enabling.

This article explores how EDA fits into enterprise integration, its three styles, how it enables business strategy, its benefits and trade-offs, and the next steps to start an EDA implementation.

Although there are many brokers you can use to publish event messages, the open-source software Apache Kafka has emerged as the market leader in this space. This article focuses on a Kafka-based EDA, but many of the principles here apply to any EDA implementation.

Spectrum of Integration

If asked to describe integration a year ago, I would have said there are two modes: application integration and data integration. Today I’d say that integration is on a spectrum, with data on one end, application on the other end, and event integration in the middle.

The spectrum of integration: data integration on one end, application integration on the other, and event integration in the middle.

Application integration is REST, SOAP, ESB, etc. These are patterns for making functionality in one application run in response to a request from another app. It's especially strong for B2B partnership and exposing value in one application to another. It's less strong for many data use cases, like BI reporting and ML pipelines, since most application integrations wait passively to be invoked by a client, rather than actively pushing data where it needs to go.

Data integration is patterns for getting data from point A to point B, including ETL, managed file transfer, etc. They're strong for BI reporting, ML pipelines, and other data movement tasks, but weaker than application integration for many B2B partnerships and applications sharing functionality.

Event integration has one foot in data and the other in application integration, and it largely gets the benefits of both. When one application subscribes to another app’s events, it can trigger application code in response to those events, which feels a bit like an API from application integration. The events triggering this functionality also carry with them a significant amount of data, which feels a bit like data integration.

EDA strikes a balance between the two classic integration modes. Refactoring traditional application integrations into an event integration pattern opens more doors for analytics, machine learning, BI, and data synchronization between applications. It gets the best of application and data integration patterns. This is especially relevant for companies moving towards an operating model of leveraging data to drive new channels and partnerships. If your integration strategy does not unlock your data, then that strategy will fail. But if your integration strategy unlocks data at the expense of application architecture that’s scalable and agile, then again it will fail. Event integration strikes a balance between both those needs.

Strategy vs. Tactic

EDA often begins with isolated teams as a tactic for delivering projects. Ideally, such a project would take a deliberative approach to EDA and use a common event message broker, usually a cloud-native broker on AWS, Azure, etc. In practice, different teams select different brokers to meet their immediate needs and do not consider integration beyond their project scope. Eventually, they may face the need for enterprise integration at a later date.

A major transition in EDA maturity happens when the investment in EDA shifts from a project tactic to enterprise strategy via a common event bus, usually Apache Kafka. Events can take a role in the organization’s business and technical innovation across the enterprise. Data becomes more rapidly shareable across the enterprise and also between you and your external strategic partners.

EDA Styles

Before discussing the benefits of EDA, let’s cover the three common styles of EDA: event notification, event-carried state transfer, and event sourcing.

Event Notification

This pattern publishes events with minimal information: the event type, timestamps, and a key value like an account number or some other key of the entity that raised the event. This informs subscribers that an event occurred, but if a subscriber needs any information about how that event changed things (like which fields changed, etc.), it must invoke a data retrieval service on the system of record. This is the simplest form of EDA, but it provides the least benefit.
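A minimal sketch of such a payload, using a Java record (Java 16+) with hypothetical field names:

public record OrderEventNotification(
        String eventType,            // e.g. "ORDER_UPDATED"
        String orderNumber,          // key of the entity that raised the event
        java.time.Instant occurredAt) {
    // No state is carried: a subscriber that needs details must call
    // back to the system of record.
}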

Event-Carried State Transfer

In this pattern, the events carry all information about the state change, typically a before and after image. Subscribing systems can then maintain their own cache of the data without the need to retrieve it from the system of record.

This builds resilience, since the subscribing systems can keep functioning even if the source becomes unavailable. It helps performance, as there is no remote call required to access source information. For example, if an inventory system publishes the full state of all inventory changes, a sales service subscribing to it can know the current inventory without retrieving it from the inventory system; it can simply use the cache it built from the inventory events, even during an inventory service outage.

It also helps performance because the subscriber's data storage can be custom-tuned to that subscriber's unique performance needs. Using the previous example, perhaps the inventory service is best suited to a relational database, but the sales service could get better performance from a NoSQL database like MongoDB. Since the sales service no longer needs to retrieve from the inventory service, it is at liberty to use a different DBMS than the inventory service. Additionally, if the inventory service has an outage, the sales service is unaffected, since it pulls inventory data from its local cache.

The cons are that a lot of data gets copied around, and there is more complexity on the receivers, since they have to sort out maintaining all the state they receive.
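A minimal sketch of an event-carried state transfer payload for the inventory example above, again using a Java record (Java 16+) with hypothetical names:

public record InventoryChanged(
        String sku,
        long quantityBefore,   // before image of the state
        long quantityAfter,    // after image of the state
        java.time.Instant occurredAt) {

    // A subscriber applies each event to its own local cache, so it never
    // has to call back to the inventory service.
    public static void apply(java.util.Map<String, Long> cache, InventoryChanged event) {
        cache.put(event.sku(), event.quantityAfter());
    }
}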

#integration #microservices #data #kafka #enterprise architecture #event driven architecture #application architecture


Internals of Compiler and JVM

Hello guys, I am back with a new blog, and in it we are going to talk about some important aspects of Java, such as:

  • Difference between JDK, JRE, and JVM
  • The Java Compiler
  • Internals of JVM

Let us first start with the differences between JDK, JRE, and JVM.

  • JDK – JDK stands for Java Development Kit. It provides a software development environment for developing Java applications. It includes the Java Runtime Environment (JRE), an interpreter (java), a compiler (javac), an archiver (jar), a documentation generator (javadoc), and the other tools required for developing applications.
  • JRE – JRE stands for Java Runtime Environment. The JRE is the core libraries plus the Java Virtual Machine. It provides an environment in which to execute a Java application.
  • JVM – JVM stands for Java Virtual Machine. Whenever you execute a program via the java command, it creates a virtual environment in which the program is loaded along with the core libraries.

Compilation

The Java Compiler compiles source files (*.java) into class files. Each class file contains machine-independent bytecode and, once compiled, can be executed on any machine. Class files are therefore platform-independent, whereas the JVM is platform-dependent: the JVM makes use of the internals of the operating system, which is why we have different setups for different operating systems. The JVM transforms the bytecode into machine code, or native code.
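The same compiler can also be invoked programmatically. A minimal sketch using the standard javax.tools API (the source file name is illustrative):

import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

public class CompileDemo {
    public static void main(String[] args) {
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        if (compiler == null) {
            // A bare JRE has no compiler; a JDK is required.
            System.err.println("No compiler available: run on a JDK");
            return;
        }
        int result = compiler.run(null, null, null, "Hello.java"); // same as: javac Hello.java
        System.out.println(result == 0
                ? "Compiled Hello.java to platform-independent Hello.class"
                : "Compilation failed");
    }
}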

The compilation of source files involves the following steps:

  • Parse – Reads the source files and maps the resulting token sequence onto the Abstract Syntax Tree (AST). The AST is a tree representation of the abstract syntactic structure of the source code; each node in the tree denotes a construct occurring in the source. The syntax is "abstract" in the sense that it does not represent every detail appearing in the real syntax, but rather just the structural or content-related details.


  • Enter – Enters symbols for the definitions into the symbol table. The symbol table stores information about various entities such as variable names, function names, objects, classes, interfaces, etc. A symbol table may serve the following purposes

#compilers #jvm #architecture #java