Julie Donnelly

Modules and Architectures - 02L
Course website: http://bit.ly/DLSP21-home
Playlist: http://bit.ly/DLSP21-YouTube
Speaker: Yann LeCun

Chapters

  • 00:00:00 – Welcome to class
  • 00:00:38 – Non-linear functions
  • 00:14:34 – Q&A
  • 00:28:09 – Softargmax and softargmin
  • 00:38:10 – Logsoftargmax
  • 00:47:14 – Cost functions
  • 00:58:39 – Architectures: multiplicative interaction
  • 01:09:48 – Mixture of experts
  • 01:27:50 – Parameter transformations


Serverless Vs Microservices Architecture - A Deep Dive

Companies need to think long-term before even starting a software development project. These needs are addressed at the level of architecture: business owners want to ensure agility, scalability, and performance.

The top contenders for scalable solutions are serverless and microservices. Both architectures prioritize security but approach it in their own ways. Let’s take a look at how businesses can benefit from adopting serverless architecture versus microservices, and examine their differences, advantages, and use cases.

#serverless #microservices #architecture #software-architecture #serverless-architecture #microservice-architecture #serverless-vs-microservices #hackernoon-top-story

Ray Patel

Top 20 Most Useful Python Modules or Packages


Welcome to my blog! In this article, we will cover the top 20 most useful Python modules and packages, modules that every Python developer should know.

I’ve split these Python modules into four categories to make things a little easier for us. The categories are:

  1. Web Development
  2. Data Science
  3. Machine Learning
  4. AI and Graphical User Interfaces

Near the end of the article, I also share my personal favorite Python module, so make sure you stay tuned to see what that is. Also, be sure to share your favorite Python module with me in the comments down below.

#python #packages or libraries #python 20 modules #python 20 most useful modules #python interesting modules #top 20 python libraries #top 20 python modules #top 20 python packages

Fannie Zemlak

Road to Simplicity: Hexagonal Architecture [Part One]

Writing software taught me that well-written software is simple software.

So I started to think about how to achieve simplicity in a methodical way. This is the first story of a series about this methodology. Naturally, it’s a snapshot, because the methodology is in constant evolution.

Simplicity

A definition of simplicity is:

The quality or condition of being easy to understand or do.

Oxford dictionary (https://www.lexico.com/en/definition/simplicity)

So, simple software is software that is easy to understand.

After all, software is written by humans, for humans. This implies that it should be understandable. Simplicity guarantees that understanding it isn’t an intellectual pain.

Software solves a problem, so to build the former you should understand the latter.

But to build simple software, you should understand the problem clearly.

First step: architecture

On Martin Fowler’s blog there is an insightful definition of architecture, along with its explanation:

“Architecture is about the important stuff. Whatever that is.”

On first blush, that sounds trite, but I find it carries a lot of richness.

It means that the heart of thinking architecturally about software is to decide what is important, (i.e. what is architectural), and then expend energy on keeping those architectural elements in good condition.

Ultimately, the important stuff is the problem being solved; in other words, the software domain.

So we need an architecture that allows us to express the software domain clearly.

I think that the hexagonal architecture (a.k.a. ports and adapters architecture) is an ideal candidate.

It’s based on a layered architecture, so the outer layers depend on the inner layers. Each layer is represented as a hexagon.

A UML-like diagram (not reproduced here) expresses the concepts below.

In this architecture the innermost hexagon is dedicated to the software domain. Here we define the domain objects and we express clearly:

  • what the domain does, as input ports or use cases (I prefer the latter term because it’s more expressive);
  • what the domain needs, in order to fulfill its use cases, as output ports.

Conceptually, the use case and output port interfaces sit on the sides of the domain layer.

The communication between the outer layers and the domain layer happens through these interfaces.

The outer layer provides the output port implementations and uses the use case interfaces.

The implementations and the use case clients are called adapters, because they adapt our interfaces to a specific technology.

This relation is an instance of the dependency inversion principle. Simply put: the high-level concept, the domain, doesn’t rely on a specific technology. Instead, the low-level concepts depend on the high-level concept.

In other words, our code is technology agnostic.

As you can see, the concepts expressed in the outer layers are just details.

The really important stuff, the domain, is isolated and expressed clearly.

Code

A little project accompanies this series to show this methodology. It’s written in Java, using the reactive paradigm from the beginning; for this reason the ReactiveX library is also used in the domain layer.

The software analyzes the capabilities of the machine (e.g. the Java version, the network speed, and so on) and exposes them through a REST API.

It’s inspired by a real-world piece of software that I wrote for work.

The first step is to define the innermost hexagon.

We can already identify:

  • the main use case, expressed as GetCapabilitiesUseCase
  • the object that describes the machine capabilities, expressed as Capabilities

The use case is an interface:

(If you’ve never used ReactiveX: a Single means that the method will asynchronously return either a single object or an error.)

import io.reactivex.Single; // RxJava 2 (in RxJava 3 this is io.reactivex.rxjava3.core.Single)

// Input port (use case): what the domain offers to the outside world.
public interface GetCapabilitiesUseCase {
  Single<Capabilities> getCapabilities();
}

The Capabilities objects are immutable (more precisely, they’re value objects), and there is an associated builder (I’m using Lombok annotations to generate the code):

import lombok.Builder;
import lombok.RequiredArgsConstructor;
import lombok.Value;

// Immutable value object describing the machine capabilities.
@RequiredArgsConstructor
@Value
@Builder
public class Capabilities {
  private final String javaVersion;
  private final Long networkSpeed;
}
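To make the ports-and-adapters relation concrete, here is a hypothetical sketch of an output port the domain might need, a domain service implementing the use case, and a technology-specific adapter implementing the output port. The names (CapabilitiesProbeOutputPort, GetCapabilitiesService, SystemPropertiesProbeAdapter) are mine, not necessarily the ones used later in the series; note how the domain code depends only on interfaces, while the adapter depends on the domain:

import io.reactivex.Single;

// Output port: what the domain needs, expressed as an interface (hypothetical name).
interface CapabilitiesProbeOutputPort {
  Single<String> readJavaVersion();
  Single<Long> measureNetworkSpeed();
}

// Domain service: implements the use case using only the output port interface.
class GetCapabilitiesService implements GetCapabilitiesUseCase {
  private final CapabilitiesProbeOutputPort probe;

  GetCapabilitiesService(CapabilitiesProbeOutputPort probe) {
    this.probe = probe;
  }

  @Override
  public Single<Capabilities> getCapabilities() {
    return Single.zip(
        probe.readJavaVersion(),
        probe.measureNetworkSpeed(),
        (javaVersion, networkSpeed) -> Capabilities.builder()
            .javaVersion(javaVersion)
            .networkSpeed(networkSpeed)
            .build());
  }
}

// Adapter (outer layer): binds the output port to a specific technology,
// here plain JVM system properties.
class SystemPropertiesProbeAdapter implements CapabilitiesProbeOutputPort {
  @Override
  public Single<String> readJavaVersion() {
    return Single.fromCallable(() -> System.getProperty("java.version"));
  }

  @Override
  public Single<Long> measureNetworkSpeed() {
    // A real adapter would run a network benchmark; a fixed value keeps the sketch self-contained.
    return Single.just(100L);
  }
}

In a composition root (for example the application’s main class or a dependency injection configuration), the adapter is instantiated and passed to the service, so the source-code dependency points toward the domain while the runtime behavior flows outward.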

#architecture #software-architecture #programming #java #hexagonal-architecture #reactive-programming #software-development #software-engineering

Event-Driven Architecture as a Strategy

Event-driven architecture, or EDA, is an integration pattern where applications are oriented around publishing events and responding to events. It provides five key benefits to modern application architecture: scalability, resilience, agility, data sharing, and cloud-enabling.

This article explores how EDA fits into enterprise integration, its three styles, how it enables business strategy, its benefits and trade-offs, and the next steps to start an EDA implementation.

Although there are many brokers you can use to publish event messages, the open-source software Apache Kafka has emerged as the market leader in this space. This article is focused on a Kafka-based EDA, but most of the principles here apply to any EDA implementation.

Spectrum of Integration

If asked to describe integration a year ago, I would have said there are two modes: application integration and data integration. Today I’d say that integration is on a spectrum, with data on one end, application on the other end, and event integration in the middle.

[Figure: the spectrum of integration, with data integration on one end, application integration on the other, and event integration in the middle.]

Application integration is REST, SOAP, ESB, etc. These are patterns for making functionality in one application run in response to a request from another app. It’s especially strong for B2B partnerships and for exposing value in one application to another. It’s less strong for many data use cases, like BI reporting and ML pipelines, since most application integrations wait passively to be invoked by a client rather than actively pushing data where it needs to go.

Data integration is the set of patterns for getting data from point A to point B, including ETL, managed file transfer, etc. They’re strong for BI reporting, ML pipelines, and other data movement tasks, but weaker than application integration for many B2B partnerships and for applications sharing functionality.

Event integration has one foot in data and the other in application integration, and it largely gets the benefits of both. When one application subscribes to another app’s events, it can trigger application code in response to those events, which feels a bit like an API from application integration. The events triggering this functionality also carry with them a significant amount of data, which feels a bit like data integration.

EDA strikes a balance between the two classic integration modes. Refactoring traditional application integrations into an event integration pattern opens more doors for analytics, machine learning, BI, and data synchronization between applications. It gets the best of application and data integration patterns. This is especially relevant for companies moving towards an operating model of leveraging data to drive new channels and partnerships. If your integration strategy does not unlock your data, then that strategy will fail. But if your integration strategy unlocks data at the expense of application architecture that’s scalable and agile, then again it will fail. Event integration strikes a balance between both those needs.

Strategy vs. Tactic

EDA often begins with isolated teams using it as a tactic for delivering projects. Ideally, such projects would take a deliberate approach to EDA and share a common event message broker, usually a cloud-native broker on AWS, Azure, etc. In practice, different teams select different brokers to meet their immediate needs and do not consider integration beyond their project scope, only to face the need for enterprise integration at a later date.

A major transition in EDA maturity happens when the investment in EDA shifts from a project tactic to an enterprise strategy built on a common event bus, usually Apache Kafka. Events can then play a role in the organization’s business and technical innovation across the enterprise, and data becomes more rapidly shareable across the enterprise as well as with your external strategic partners.

EDA Styles

Before discussing the benefits of EDA, let’s cover the three common styles of EDA: event notification, event-carried state transfer, and event sourcing.

Event Notification

This pattern publishes events with minimal information: the event type, timestamps, and a key value, like an account number or some other key of the entity that raised the event. This informs subscribers that an event occurred, but if subscribers need any information about how that event changed things (like which fields changed), they must invoke a data retrieval service from the system of record. This is the simplest form of EDA, but it provides the least benefit.
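As a minimal sketch of what publishing such a notification might look like with a Kafka producer (the broker address, topic name, and JSON shape below are assumptions for illustration, not prescribed by the pattern):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AccountEventNotifier {
  public static void main(String[] args) {
    // Minimal producer configuration; the broker address is an assumption for this sketch.
    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

    try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
      // The event carries only the event type, a timestamp, and the key of the entity that raised it.
      String key = "account-42";
      String value = "{\"eventType\":\"AccountUpdated\",\"occurredAt\":\"2021-01-01T00:00:00Z\",\"accountNumber\":\"42\"}";
      producer.send(new ProducerRecord<>("account-events", key, value));
    }
  }
}

Because the payload carries only the key of the changed entity, every subscriber that needs the details still has to call back into the system of record.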

Event-Carried State Transfer

In this pattern, the events carry all the information about the state change, typically a before and after image. Subscribing systems can then maintain their own cache of the data without needing to retrieve it from the system of record.

This builds resilience since the subscribing systems can function if the source becomes unavailable. It helps performance, as there’s no remote call required to access source information. For example, if an inventory system publishes the full state of all inventory changes, a sales service subscribing to it can know the current inventory without retrieving from the inventory system — it can simply use the cache it built from the inventory events, even during an inventory service outage.

It also helps performance because the subscriber’s data store can be tuned for that subscriber’s unique performance needs. Using the previous example, perhaps the inventory service is best served by a relational database, but the sales service could get better performance from a NoSQL database like MongoDB. Since the sales service no longer needs to retrieve data from the inventory service, it’s at liberty to use a different DBMS than the inventory service. Additionally, if the inventory service has an outage, the sales service is unaffected, since it pulls inventory data from its local cache.

The cons are that a lot of data is copied around, and there is more complexity on the receivers, since they have to manage all the state they are receiving.
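To illustrate, here is a hypothetical sketch of an event-carried state transfer payload and the subscriber-side cache described above; the class and field names are illustrative, not taken from any particular system:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// The event carries the full state change: a before and an after image.
class InventoryChangedEvent {
  final String sku;
  final Integer quantityBefore;
  final Integer quantityAfter;

  InventoryChangedEvent(String sku, Integer quantityBefore, Integer quantityAfter) {
    this.sku = sku;
    this.quantityBefore = quantityBefore;
    this.quantityAfter = quantityAfter;
  }
}

// A subscribing sales service keeps its own cache of inventory levels,
// so it never has to call the inventory system of record.
class SalesInventoryCache {
  private final Map<String, Integer> quantityBySku = new ConcurrentHashMap<>();

  void onInventoryChanged(InventoryChangedEvent event) {
    // The before image could be compared with the cached value to detect missed or out-of-order events.
    quantityBySku.put(event.sku, event.quantityAfter);
  }

  // Reads are served locally, even during an inventory service outage.
  int availableQuantity(String sku) {
    return quantityBySku.getOrDefault(sku, 0);
  }
}

The trade-off mentioned above shows up directly in onInventoryChanged: every subscriber carries the code and storage needed to keep its own copy of the state consistent.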

#integration #microservices #data #kafka #enterprise architecture #event driven architecture #application architecture

Nat Grady

Evolution of Software Architecture

Evolution doesn’t happen only in IT; it happens in every other field, and even humans have evolved. So in this article I will explain the evolution of software architecture and take you from standalone systems to microservices.

One-Tier Architecture

**A one-tier application is also known as a standalone application, which is the simplest architecture.** It is equivalent to running the application on a personal computer: all the components required for the application to run are on a single machine or server. The presentation layer, business logic (application) layer, and data layer are all located on a single machine or in a single software package. Some examples of one-tier architecture are an MP3 player and MS Office. In a one-tier architecture, data can be stored on the local system or on a shared drive. Some of the advantages of this type of system are:

  1. It’s simple and easy to implement.
  2. Very efficient.
  3. No compatibility and no context switching issues

When we think about some of the disadvantages,

  1. Since it runs on only one machine, it does not support remote or distributed access to data resources.

Two-Tier Architecture

After the one-tier architecture, we moved to the two-tier architecture. **The two-tier architecture is also known as a client-server application.** This architecture is divided into two parts. Those are:

1. Client Application (Client Tier)

2. Database (Data Tier)

The client system handles both the presentation and business (application) layers, and the server system handles the database layer. Basically, the client system sends a request to the server system, and the server system processes the request and sends the data back to the client. So the communication takes place between the client and the server. When we consider the advantages,

  1. Easy to maintain, and modification is relatively easy.
  2. Faster communication.

Some of the disadvantages are,

  1. Application performance decreases as the number of users increases.
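To make the client-server request/response flow described above concrete, here is a minimal, hypothetical sketch using Java’s built-in HTTP server and client; the port, path, and response payload are assumptions for illustration only:

import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class TwoTierSketch {
  public static void main(String[] args) throws Exception {
    // Server tier: owns the data and answers requests (here it just returns a canned record).
    HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
    server.createContext("/customers/42", exchange -> {
      byte[] body = "{\"id\":42,\"name\":\"Alice\"}".getBytes(StandardCharsets.UTF_8);
      exchange.sendResponseHeaders(200, body.length);
      try (var out = exchange.getResponseBody()) {
        out.write(body);
      }
    });
    server.start();

    // Client tier: handles presentation and business logic, and asks the server for data.
    HttpClient client = HttpClient.newHttpClient();
    HttpRequest request = HttpRequest.newBuilder(URI.create("http://localhost:8080/customers/42")).GET().build();
    HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
    System.out.println("Server replied: " + response.body());

    server.stop(0);
  }
}

In a real two-tier system the server tier would query a database instead of returning a canned record, but the shape of the interaction, a request going in and data coming back, is the same.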

#software-architecture #2-tier-architecture #microservices #soa #3-tier-architecture