1597926840
VGG is an acronym for the Visual Geometry Group at Oxford University, and VGG-16 is a 16-layer network proposed by this group. These 16 layers are the ones that contain trainable parameters; there are other layers as well, such as the max-pooling layers, but those contain no trainable parameters. This architecture was the first runner-up of the Visual Recognition Challenge of 2014, i.e. **ILSVRC-2014**, and was developed by **Simonyan** and **Zisserman**.
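As a sanity check on that layer count, here is a back-of-the-envelope sketch in plain Java that tallies VGG-16's trainable parameters, assuming the standard configuration (13 convolutional layers with 3x3 kernels plus 3 fully connected layers, 224x224 RGB inputs, 1,000 output classes); the max-pooling layers contribute nothing:

public class Vgg16ParamCount {
    public static void main(String[] args) {
        // The 13 convolutional layers as {inChannels, outChannels}; all use 3x3 kernels.
        int[][] convs = {
            {3, 64}, {64, 64},
            {64, 128}, {128, 128},
            {128, 256}, {256, 256}, {256, 256},
            {256, 512}, {512, 512}, {512, 512},
            {512, 512}, {512, 512}, {512, 512}
        };
        long total = 0;
        for (int[] c : convs) {
            total += 3L * 3 * c[0] * c[1] + c[1]; // kernel weights + biases
        }
        // The 3 fully connected layers: 7*7*512 -> 4096 -> 4096 -> 1000.
        int[][] fcs = {{7 * 7 * 512, 4096}, {4096, 4096}, {4096, 1000}};
        for (int[] f : fcs) {
            total += (long) f[0] * f[1] + f[1]; // weights + biases
        }
        System.out.println("VGG-16 trainable parameters: " + total); // 138,357,544
    }
}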
The VGG research group released a series of convolutional network models ranging from VGG11 to VGG19. The group's main intention was to understand how the depth of convolutional networks affects the accuracy of models for large-scale image classification and recognition. The smallest, VGG11, has 8 convolutional layers and 3 fully connected layers, while the largest, VGG19, has 16 convolutional layers and the same 3 fully connected layers. The different VGG variants are exactly the same in the last three fully connected layers. The overall structure comprises 5 sets of convolutional layers, each followed by a max-pooling layer. The difference is that as the depth increases, i.e. as we move from VGG11 to VGG19, more and more cascaded convolutional layers are added within these five sets.
The figure below shows the overall network configurations of the different VGG models, which use the same principles and vary only in depth.
Image from the original paper (Reference [1])
From the comparison table above, we can see that as the models move from simpler to more complex, the depth of the network increases. This reflects a sound way to approach any problem: solve it first with a simpler model, then gradually optimize it by adding complexity.
#vgg16 #data-science #artificial-intelligence #deep-learning-network #image-classification #deep learning
1600088400
Companies need to think long-term before even starting a software development project. These needs are addressed at the level of architecture: business owners want to ensure agility, scalability, and performance.
The top contenders for scalable solutions are serverless and microservices. Both architectures prioritize security but approach it in their own ways. Let’s take a look at how businesses can benefit from adopting serverless architecture versus microservices, and examine their differences, advantages, and use cases.
#serverless #microservices #architecture #software-architecture #serverless-architecture #microservice-architecture #serverless-vs-microservices #hackernoon-top-story
1624596420
In this tutorial, we’ll walk through a simple implementation of the hexagonal architecture. Alistair Cockburn proposed this concept in 2005.
This architecture is also called “Ports and Adapters”.
We’ll go through the concepts first and then learn the implementation of this architecture using Java.
Hexagonal Architecture holds that an application’s business logic should be isolated from external applications.
Based on this concept, we can divide our application into 3 parts: the business logic is the inside of the application, while the user side and the server side are the outside of the application.
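To make this concrete, here is a minimal sketch in Java with hypothetical names (each type would normally live in its own file): the business logic sees only port interfaces, while adapters on the outside bind those ports to a concrete technology.

// Inside the hexagon: a domain object.
class Order {
    final String id;
    Order(String id) { this.id = id; }
}

// Input port (use case): what the outside may ask the business logic to do.
interface PlaceOrderUseCase {
    void placeOrder(Order order);
}

// Output port: what the business logic needs from the outside.
interface OrderRepository {
    void save(Order order);
}

// Business logic: depends only on ports, never on a concrete technology.
class OrderService implements PlaceOrderUseCase {
    private final OrderRepository repository;
    OrderService(OrderRepository repository) { this.repository = repository; }
    @Override
    public void placeOrder(Order order) {
        repository.save(order); // domain rules would go here
    }
}

// Outside the hexagon: an adapter binding the output port to a technology.
class InMemoryOrderRepository implements OrderRepository {
    private final java.util.Map<String, Order> store = new java.util.HashMap<>();
    @Override
    public void save(Order order) { store.put(order.id, order); }
}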
#java #rest-api #design-patterns #hexagonal-architecture #programming #hexagonal architecture — implementation in java
1595927640
Writing software taught me that well-written software is simple software.
So I started to think about how to achieve simplicity in a methodical way. This is the first story of a series about this methodology. Naturally, it's a snapshot, because the methodology is in constant evolution.
A definition of simplicity is:
The quality or condition of being easy to understand or do.
Oxford dictionary (https://www.lexico.com/en/definition/simplicity)
So, simple software is software that is easy to understand.
After all, software is written by humans for humans. This implies that it should be understandable. Simplicity guarantees that understanding it isn't an intellectual pain.
A piece of software solves a problem. So, to build the former, you should understand the latter.
But to build a simple piece of software you should understand the problem clearly.
On Martin Fowler's blog there is a deep definition of architecture and its explanation:
“Architecture is about the important stuff. Whatever that is.”
On first blush, that sounds trite, but I find it carries a lot of richness.
It means that the heart of thinking architecturally about software is to decide what is important, (i.e. what is architectural), and then expend energy on keeping those architectural elements in good condition.
Ultimately, the important stuff is about the problem being solved. In other words, it's about the software domain.
So we need an architecture that allows us to express the software domain clearly.
I think that the hexagonal architecture (a.k.a. ports and adapters architecture) is an ideal candidate.
It's based on a layered architecture, so the outer layers depend on the inner layers. Each layer is represented as a hexagon.
Here is a UML-like diagram expressing the concepts below:
In this architecture, the innermost hexagon is dedicated to the software domain. Here we define the domain objects and we express the domain concepts clearly.
Conceptually, on the sides of the domain layer there are the use case interfaces and the output port interfaces.
The communication between the outer layers and the domain layer happens through these interfaces.
The outer layers provide the output port implementations, and they use the use case interfaces.
These implementations and the use case clients are called adapters, because they adapt our interfaces to a specific technology.
This relation is an instance of the dependency inversion principle. Simply put: the high-level concept, the domain, doesn't rely on a specific technology. Instead, the low-level concepts depend upon the high-level concepts.
In other words, our code is technology agnostic.
As you can see, the concepts expressed in the outer layers are just details.
The really important stuff, the domain, is isolated and expressed clearly.
A little project accompanies this series to show this methodology. It's written in Java with the reactive paradigm from the beginning. For this reason, the ReactiveX library is also used in the domain layer.
The software analyzes the capabilities of the machine (e.g. the Java version, the network speed, and so on) and exposes them through a REST API.
It's inspired by real-world software that I wrote for work.
The first step is to define the innermost hexagon.
We can already identify the use case interface and the domain objects.
The use case is an interface:
(if you have never used ReactiveX: a Single means that the method will asynchronously return a single object or an error)
import io.reactivex.rxjava3.core.Single; // RxJava 3; with RxJava 2 the import would be io.reactivex.Single

public interface GetCapabilitiesUseCase {
    Single<Capabilities> getCapabilities();
}
The Capabilities objects are immutable (precisely, they're value objects), and there is an associated builder (I'm using Lombok annotations to generate the code):
import lombok.Builder;
import lombok.RequiredArgsConstructor;
import lombok.Value;

@RequiredArgsConstructor
@Value
@Builder
public class Capabilities {
    private final String javaVersion;
    private final Long networkSpeed;
}
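To connect this to the ports-and-adapters picture above, here is a possible domain-side sketch. The output port name, its methods, and the service class are hypothetical, not taken from the accompanying project:

import io.reactivex.rxjava3.core.Single;

// Output port (hypothetical): implemented by an adapter in an outer layer.
interface CapabilitiesOutputPort {
    Single<String> findJavaVersion();
    Single<Long> measureNetworkSpeed();
}

// Domain service: implements the use case through the output port only,
// so the domain stays technology agnostic.
class GetCapabilitiesService implements GetCapabilitiesUseCase {
    private final CapabilitiesOutputPort outputPort;

    GetCapabilitiesService(CapabilitiesOutputPort outputPort) {
        this.outputPort = outputPort;
    }

    @Override
    public Single<Capabilities> getCapabilities() {
        // Combine the two asynchronous values into one immutable Capabilities object.
        return Single.zip(
                outputPort.findJavaVersion(),
                outputPort.measureNetworkSpeed(),
                (javaVersion, networkSpeed) -> Capabilities.builder()
                        .javaVersion(javaVersion)
                        .networkSpeed(networkSpeed)
                        .build());
    }
}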
#architecture #software-architecture #programming #java #hexagonal-architecture #reactive-programming #software-development #software-engineering
1596259380
Event-driven architecture, or EDA, is an integration pattern where applications are oriented around publishing events and responding to events. It provides five key benefits to modern application architecture: scalability, resilience, agility, data sharing, and cloud-enabling.
This article explores how EDA fits into enterprise integration, its three styles, how it enables business strategy, its benefits and trade-offs, and the next steps to start an EDA implementation.
Although there are many brokers you can use to publish event messages, the open-source software Apache Kafka has emerged as the market leader in this space. This article focuses on a Kafka-based EDA, but most of the principles here apply to any EDA implementation.
If asked to describe integration a year ago, I would have said there are two modes: application integration and data integration. Today I’d say that integration is on a spectrum, with data on one end, application on the other end, and event integration in the middle.
The spectrum of integration.
Application integration is REST, SOAP, ESB, etc. These are patterns for making functionality in one application run in response to a request from another app. It’s especially strong for B2B partnership and exposing value in one application to another. It’s less strong for many data use cases, like BI reporting and ML pipelines, since most application integrations wait passively to be invoked by a client, rather than actively pushing data where it needs to go.
Data integration is patterns for getting data from point A to point B, including ETL, managed file transfer, etc. They’re strong for BI reporting, ML pipelines, and other data movement tasks, but weaker than application integration for many B2B partnerships and applications sharing functionality.
Event integration has one foot in data and the other in application integration, and it largely gets the benefits of both. When one application subscribes to another app’s events, it can trigger application code in response to those events, which feels a bit like an API from application integration. The events triggering this functionality also carry with them a significant amount of data, which feels a bit like data integration.
EDA strikes a balance between the two classic integration modes. Refactoring traditional application integrations into an event integration pattern opens more doors for analytics, machine learning, BI, and data synchronization between applications. It gets the best of application and data integration patterns. This is especially relevant for companies moving towards an operating model of leveraging data to drive new channels and partnerships. If your integration strategy does not unlock your data, then that strategy will fail. But if your integration strategy unlocks data at the expense of application architecture that’s scalable and agile, then again it will fail. Event integration strikes a balance between both those needs.
EDA often begins with isolated teams adopting it as a tactic for delivering projects. Ideally, such projects would take a deliberative approach to EDA with a common event message broker, usually a cloud-native broker on AWS, Azure, etc. In practice, different teams select different brokers to meet their immediate needs, without considering integration beyond their project scope. Eventually, they may face the need for enterprise integration at a later date.
A major transition in EDA maturity happens when the investment in EDA shifts from a project tactic to enterprise strategy via a common event bus, usually Apache Kafka. Events can take a role in the organization’s business and technical innovation across the enterprise. Data becomes more rapidly shareable across the enterprise and also between you and your external strategic partners.
Before discussing the benefits of EDA, let’s cover the three common styles of EDA: event notification, event-carried state transfer, and event sourcing.
This pattern publishes events with minimal information: the event type, timestamps, and a key value such as an account number or some other key of the entity that raised the event. This informs subscribers that an event occurred, but if subscribers need any information about how that event changed things (like which fields changed), they must invoke a data retrieval service from the system of record. This is the simplest form of EDA, but it provides the least benefit.
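As an illustration, here is a minimal sketch of publishing such a notification with the Kafka Java client; the topic name, key, and payload fields are hypothetical:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class EventNotificationPublisher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Minimal payload: event type, timestamp, and the key of the entity that changed.
            String payload = "{\"eventType\":\"ACCOUNT_UPDATED\"," +
                    "\"timestamp\":\"2020-08-01T12:00:00Z\",\"accountNumber\":\"12345\"}";
            producer.send(new ProducerRecord<>("account-events", "12345", payload));
        }
    }
}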
In this pattern, the events carry all information about the state change, typically a before and after image. Subscribing systems can then store their cache of data without the need to retrieve it from the system of record.
This builds resilience since the subscribing systems can function if the source becomes unavailable. It helps performance, as there’s no remote call required to access source information. For example, if an inventory system publishes the full state of all inventory changes, a sales service subscribing to it can know the current inventory without retrieving from the inventory system — it can simply use the cache it built from the inventory events, even during an inventory service outage.
It also helps performance because the subscriber’s data storage can be custom-tuned for that subscriber’s unique performance needs. Using the previous example, perhaps the inventory service is best suited to a relational database, but the sales service could get better performance from a NoSQL database like MongoDB. Since the sales service no longer needs to retrieve data from the inventory service, it’s at liberty to use a different DBMS than the inventory service does.
The cons are that a lot of data is copied around, and there is more complexity on the receivers, since they have to manage all the state they are receiving.
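Sticking with the inventory example, here is a sketch of how a subscribing sales service might build its local cache from event-carried state transfer messages using the Kafka Java client; the topic and group names are hypothetical:

import java.time.Duration;
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class InventoryCacheBuilder {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "sales-service");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        Map<String, String> localInventory = new ConcurrentHashMap<>();
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("inventory-events"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // Each event carries the full state, so no call back to the
                    // inventory service is needed to keep the local cache current.
                    localInventory.put(record.key(), record.value());
                }
            }
        }
    }
}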
#integration #microservices #data #kafka #enterprise architecture #event driven architecture #application architecture
1622262713
Evolution doesn't happen only in the IT sector; it happens in every other field, and even humans have evolved. So in this article I will explain the evolution of software architecture and take you from standalone systems to microservices.
**A one-tier application is also known as a standalone application, and it is the simplest architecture.** It is equivalent to running the application on a personal computer: all the components required for the application to run are in a single application or server. The presentation layer, business logic (application) layer, and data layer are all located on a single machine or in a single software package. Some examples of one-tier architecture are an MP3 player and MS Office. In a one-tier architecture, data can be stored in the local system or on a shared drive. Some of the advantages of this type of system are,
When we think about some of the disadvantages,
After one-tier architecture, we moved to two-tier architecture. The **two-tier architecture is also known as a client-server application**. This architecture is divided into two parts:
1. Client Application (Client Tier)
2. Database (Data Tier)
The client system handles both the presentation and business (application) layers, and the server system handles the database layer. Basically, the client system sends a request to the server system, and the server system processes the request and sends the data back to the client. So the communication takes place between the client and the server (see the sketch at the end of this section). When we consider the advantages,
Some of the disadvantages are,
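To make the request/response flow described above concrete, here is a minimal two-tier sketch using plain Java sockets, with the server standing in for the data tier; the port number and the toy protocol are hypothetical:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class TwoTierDemo {
    public static void main(String[] args) throws Exception {
        ServerSocket serverSocket = new ServerSocket(5000); // bind before the client connects

        // Server system (data tier): processes the request and sends back the data.
        Thread server = new Thread(() -> {
            try (Socket client = serverSocket.accept();
                 BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()));
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                String request = in.readLine(); // e.g. "GET_PRICE:apple"
                out.println("apple=0.99");      // result looked up in the data tier
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        server.start();

        // Client system (presentation + business logic): sends the request.
        try (Socket socket = new Socket("localhost", 5000);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()))) {
            out.println("GET_PRICE:apple");
            System.out.println("Server replied: " + in.readLine());
        }

        server.join();
        serverSocket.close();
    }
}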
#software-architecture #2-tier-architecture #microservices #soa #3-tier-architecture