Bongani Ngema

Open Event Attendee Android General App

Open Event Attendee App

An events app to discover events happening around the world using the Open Event Platform on Eventyay.

Eventyay Attendee App provides the following features for users:

  • View all events published by organizers
  • Filter events by date, time, location, and event name
  • Buy tickets and register as an attendee for any event
  • Pay for orders via PayPal or Stripe
  • View important event details such as the location, date, and timing of the event
  • View all tickets bought for an event along with their status
  • Check in easily using QR codes for tickets and see check-in timings
  • View similar events
  • Mark an event as a favorite

Application is available here:

Get it on Google Play Get it on F-Droid

Communication

Please join our mailing list here to discuss questions regarding the project.

Our chat channel is on Gitter here.

Screenshots


Development

A native Android app written in Kotlin, using the Open Event Server for its API.

Libraries used and their documentation

Project Conventions

There are certain conventions we follow in this project; we recommend that you become familiar with them so that the development process is uniform for everyone:

Project Structure

Generally, projects are created using a package-by-layer approach, where packages are named after layers like ui, activity, fragment, etc. This quickly becomes unscalable in large projects: a large number of unrelated classes are crammed into one layer, and it becomes difficult to navigate through them.
Instead, we follow package by feature, which, at the cost of flatness, gives us packages of isolated, related classes that are likely to form a complete, self-sufficient component of the application. Each package contains all related classes of the view, the presenter, and their implementations such as Activities and Fragments.
A notable exception to this is the helper module and data classes such as Models and Repositories, as they are used across components.

Separation of concerns

Lastly, each class should perform only one task, do it well, and be unit tested for it. For example, if a presenter is doing more than it should, i.e., parsing dates or implementing search logic, it is better to move that logic into its own class. There can be exceptions to this practice, but if the functionality can be generalised and reused, it should most definitely be moved into its own class and unit tested.

Contributions Best Practices

For first time Contributors

First time contributors can read CONTRIBUTING.md file for help regarding creating issues and sending pull requests.

Branch Policy

We have the following branches

  • development All development goes on in this branch. If you're making a contribution, you are supposed to make a pull request to development. PRs to development branch must pass a build check and a unit-test check on Circle CI.
  • master This contains shipped code. After significant features/bugfixes are accumulated on development, we make a version update and make a release.

Please note:

Each push to the master branch automatically publishes the application to the Play Store as an alpha release. Thus, on each merge into master, the versionCode and versionName MUST be changed accordingly in app/build.gradle:

versionCode (Integer): to be monotonically incremented with each merge. Failure to do so will lead to a publishing error, so this is a crucial step before any merge.

versionName (String): the user-visible version of the app, to be changed following semantic versioning.

  • apk This branch contains two APKs that are automatically generated on each merged pull request: a) a debug APK and b) a release APK.
    • Please download and test the app built from the code on the development and master branches here.
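As an illustration, the version bump in app/build.gradle might look like the following sketch (the numbers are hypothetical examples, not the app's real versions):

```groovy
android {
    defaultConfig {
        // versionCode: monotonically incremented integer; the Play Store rejects
        // an upload whose versionCode is not higher than the previous one
        versionCode 142        // e.g. previous merge used 141
        // versionName: user-visible string, bumped following semantic versioning
        versionName "1.8.0"    // e.g. previous release was "1.7.1"
    }
}
```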

Code practices

Please help us follow the best practices to make it easy for the reviewer as well as the contributor. We want to focus on the code quality more than on managing pull request ethics.

  • Single commit per pull request
  • For writing commit messages please read the COMMITSTYLE carefully. Kindly adhere to the guidelines.
  • Follow uniform design practices. The design language must be consistent throughout the app.
  • The pull request will not get merged until the commits are squashed. If there are multiple commits on the PR, the commit author needs to squash them; the maintainers will not cherry-pick and squash-merge them.
  • If the PR is related to any front end change, please attach relevant screenshots in the pull request description.

Join the development

  • Before you join development, please set up the project on your local machine, run it and go through the application completely. Press on any button you can find and see where it leads to. Explore. (Don't worry ... Nothing will happen to the app or to you due to the exploring :wink: Only thing that will happen is, you'll be more familiar with what is where and might even get some cool ideas on how to improve various aspects of the app.)
  • If you would like to work on an issue, drop in a comment on the issue. If it is already assigned to someone but there is no sign of any work being done, please feel free to drop in a comment so that the issue can be assigned to you if the previous assignee has dropped it entirely.

For Testers: Testing the App

Installing the APK on your device: You can get the debug APK as well as the release APK from the apk branch of the repository. After each PR merge, both APKs are automatically updated, so just download the APK you want and install it on your device. The APKs will always be the latest ones.

Download Details:

Author: Fossasia
Source Code: https://github.com/fossasia/open-event-attendee-android 
License: Apache-2.0 license

#kotlin #android #event 


How to implement Event Sourcing with SpringBoot

The name comes directly from the fact that in event sourcing, events are the source of truth: all other data and data structures are just derived from the events. In theory, we can erase all of those other storages; as long as we keep the event log, we can always regenerate them. Event sourcing stores an ordered log of our operations. If we look at a shopping cart, for example:

  • First, we initialize the shopping cart.
  • We add a new product.
  • We may remove the product because we decided we added it by mistake.
  • Then we add another product.
  • At the end, we confirm the cart.

A nice thing about event sourcing is that it lets us do time traveling. Since we have recorded the sequence of events, we can always go back: we can take the events up to a point in time, apply them from the initial state, and see exactly what had happened by then.
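Replaying a recorded event log to rebuild state can be sketched in plain Java. The event and cart types here are hypothetical, just to illustrate the idea of deriving state from the log:

```java
import java.util.ArrayList;
import java.util.List;

public class ReplayDemo {
    // A minimal event: "ADDED" or "REMOVED" plus the product it concerns.
    record CartEvent(String type, String product) {}

    // Rebuild the cart's state purely by folding over the event log.
    static List<String> replay(List<CartEvent> log) {
        List<String> cart = new ArrayList<>();
        for (CartEvent e : log) {
            if (e.type().equals("ADDED")) cart.add(e.product());
            else if (e.type().equals("REMOVED")) cart.remove(e.product());
        }
        return cart;
    }

    public static void main(String[] args) {
        List<CartEvent> log = List.of(
                new CartEvent("ADDED", "phone"),
                new CartEvent("REMOVED", "phone"),   // added by mistake
                new CartEvent("ADDED", "laptop"));
        System.out.println(replay(log));             // [laptop]
    }
}
```

Replaying a prefix of the same log is exactly the "time travel" described above: the state at any past moment falls out of the events recorded up to that moment.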

Use case:

Let’s take a use case. Gaurav is a shopkeeper; he sells electronic items like mobile phones, laptops, etc. He wants to keep track of the stock in his shop and to know whether the shop has stock of a particular item without checking manually. He wants an app for it.

The app has three functionalities:

  • The user can add new stock.
  • The user can remove stock after selling it.
  • The user can find the current stock of a particular item.

event-sourcing

In event sourcing, you just capture user events and add them to the database. You keep adding new events for every user action; no record is updated or deleted in the database, only events are appended. With each event, you also store event data specific to that event.

In this way you maintain the history of user actions. This is useful if your application has security requirements to audit all user actions, and in any application where you want a history of user actions (e.g. GitHub commits, analytics applications, etc.). To know the current state of an entity, you simply iterate through its events and derive it.

The project structure will be as follows-

project-structure

The pom.xml will be as follows-

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>
	<parent>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-parent</artifactId>
		<version>2.7.0</version>
		<relativePath/> <!-- lookup parent from repository -->
	</parent>
	<groupId>com.example</groupId>
	<artifactId>stockmanagement_eventstore</artifactId>
	<version>1.0.0</version>
	<name>stockmanagement_eventstore</name>
	<description>Demo project for Event Sourcing</description>
	<properties>
		<java.version>11</java.version>
	</properties>
	<dependencies>
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-data-jpa</artifactId>
		</dependency>
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-web</artifactId>
		</dependency>
		<!-- H2 database dependency(in-memory databases ) -->
		<dependency>
			<groupId>com.h2database</groupId>
			<artifactId>h2</artifactId>
			<scope>runtime</scope>
		</dependency>
		<dependency>
			<groupId>com.google.code.gson</groupId>
			<artifactId>gson</artifactId>
		</dependency>
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-test</artifactId>
			<scope>test</scope>
		</dependency>
		<!-- Lombok removes boilerplate code -->
		<dependency>
			<groupId>org.projectlombok</groupId>
			<artifactId>lombok</artifactId>
			<optional>true</optional>
		</dependency>
	</dependencies>

	<build>
		<plugins>
			<plugin>
				<groupId>org.springframework.boot</groupId>
				<artifactId>spring-boot-maven-plugin</artifactId>
			</plugin>
		</plugins>
	</build>

</project>

Created Stock Model class

package com.example.stock.management;

import lombok.Data;

//entity model
@Data
public class Stock {

	private String name;
	private int quantity;
	private String user;
	
}

EventStore class will be as follows

package com.example.stock.management;

import java.time.LocalDateTime;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

import lombok.Data;

@Entity
@Data
public class EventStore {

	@Id
	@GeneratedValue(strategy = GenerationType.IDENTITY)
	private long eventId;

	private String eventType;

	private String entityId;
	
	private String eventData;

	private LocalDateTime eventTime;

}

Created StockEvent interface

package com.example.stock.management;

public interface StockEvent {

}

Here is the StockAddedEvent class and its implementation

package com.example.stock.management;

import lombok.Builder;
import lombok.Data;

@Builder
@Data
public class StockAddedEvent implements StockEvent {

	private Stock stockDetails;
	
}

Created StockRemovedEvent class and its implementation

package com.example.stock.management;

import lombok.Builder;
import lombok.Data;

@Builder
@Data
public class StockRemovedEvent implements StockEvent {
	
	private Stock stockDetails;
}

Added EventRepository class

package com.example.stock.management;

import java.time.LocalDateTime;

import org.springframework.data.repository.CrudRepository;
import org.springframework.stereotype.Component;

@Component
public interface EventRepository extends CrudRepository<EventStore, Long>{

	Iterable<EventStore> findByEntityId(String entityId);
	
	Iterable<EventStore> findByEntityIdAndEventTimeLessThanEqual(String entityId,LocalDateTime date);

}

Created EventService class

package com.example.stock.management;

import java.time.LocalDateTime;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;

@Service
public class EventService {

	@Autowired
	private EventRepository repo;

	public void addEvent(StockAddedEvent event) throws JsonProcessingException {

		EventStore eventStore = new EventStore();
		eventStore.setEventData(new ObjectMapper().writeValueAsString(event.getStockDetails()));
		eventStore.setEventType("STOCK_ADDED");
		eventStore.setEntityId(event.getStockDetails().getName());
		eventStore.setEventTime(LocalDateTime.now());
		repo.save(eventStore);
	}

	public void addEvent(StockRemovedEvent event) throws JsonProcessingException {

		EventStore eventStore = new EventStore();
		eventStore.setEventData(new ObjectMapper().writeValueAsString(event.getStockDetails()));
		eventStore.setEventType("STOCK_REMOVED");
		eventStore.setEntityId(event.getStockDetails().getName());
		eventStore.setEventTime(LocalDateTime.now());
		repo.save(eventStore);
	}

	public Iterable<EventStore> fetchAllEvents(String name) {

		return repo.findByEntityId(name);

	}
	
	public Iterable<EventStore> fetchAllEventsTillDate(String name,LocalDateTime date) {

		return repo.findByEntityIdAndEventTimeLessThanEqual(name, date);

	}
}

Created the StockController class for adding a stock item, removing a stock item, and getting the current count of stock.

package com.example.stock.management;

import java.time.LocalDate;
import java.time.LocalDateTime;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.DeleteMapping;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

import com.fasterxml.jackson.core.JsonProcessingException;
import com.google.gson.Gson;

@RestController
public class StockController {

	@Autowired
	private EventService service;

	// Adding a stock item
	@PostMapping("/stock")
	public void addStock(@RequestBody Stock stockRequest) throws JsonProcessingException {

		StockAddedEvent event = StockAddedEvent.builder().stockDetails(stockRequest).build();
		service.addEvent(event);
	}

	// To remove item from a stock
	@DeleteMapping("/stock")
	public void removeStock(@RequestBody Stock stock) throws JsonProcessingException {

		StockRemovedEvent event = StockRemovedEvent.builder().stockDetails(stock).build();
		service.addEvent(event);
	}

	//To get current count of stock
	@GetMapping("/stock")
	public Stock getStock(@RequestParam("name") String name) throws JsonProcessingException {

		Iterable<EventStore> events = service.fetchAllEvents(name);

		Stock currentStock = new Stock();
		currentStock.setName(name);
		currentStock.setUser("NA");

		for (EventStore event : events) {

			Stock stock = new Gson().fromJson(event.getEventData(), Stock.class);

			if (event.getEventType().equals("STOCK_ADDED")) {

				currentStock.setQuantity(currentStock.getQuantity() + stock.getQuantity());
			} else if (event.getEventType().equals("STOCK_REMOVED")) {

				currentStock.setQuantity(currentStock.getQuantity() - stock.getQuantity());
			}
		}

		return currentStock;
	}
	
	@GetMapping("/events")
	public Iterable<EventStore> getEvents(@RequestParam("name") String name) throws JsonProcessingException {

		Iterable<EventStore> events = service.fetchAllEvents(name);

		return events;

	}
	
	@GetMapping("/stock/history")
	public Stock getStockUntilDate(@RequestParam("date") String date,@RequestParam("name") String name) throws JsonProcessingException {
	
		String[] dateArray = date.split("-");
		
		LocalDateTime dateTill = LocalDate.of(Integer.parseInt(dateArray[0]), Integer.parseInt(dateArray[1]), Integer.parseInt(dateArray[2])).atTime(23, 59);
		
		Iterable<EventStore> events = service.fetchAllEventsTillDate(name,dateTill);

		Stock currentStock = new Stock();

		currentStock.setName(name);
		currentStock.setUser("NA");

		for (EventStore event : events) {

			Stock stock = new Gson().fromJson(event.getEventData(), Stock.class);

			if (event.getEventType().equals("STOCK_ADDED")) {

				currentStock.setQuantity(currentStock.getQuantity() + stock.getQuantity());
			} else if (event.getEventType().equals("STOCK_REMOVED")) {

				currentStock.setQuantity(currentStock.getQuantity() - stock.getQuantity());
			}
		}

		return currentStock;

	}
}

StockmanagementEventstoreApplication class will be as follows

package com.example.stock.management;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// Main class of the application
@SpringBootApplication
public class StockmanagementEventstoreApplication {

	public static void main(String[] args) {
		SpringApplication.run(StockmanagementEventstoreApplication.class, args);
	}

}

Added application.yml file

spring:
  datasource:
    url:  jdbc:h2:mem:testdb
    driverClassName: org.h2.Driver
    username: sa
    password:
 
  jpa:
    database-platform: org.hibernate.dialect.H2Dialect
    
  h2:
    console: 
      enabled: true 
      path: /h2

Work Flow

Start the StockmanagementEventstoreApplication app

Adding some items to stock:

stock-item1

stock-item2

stock-item3

Let’s check the database :

stock-database

We are able to get the current stock by hitting the GET API

get-stock-item-via-name

If we want to know what the stock was the day before:

get-stock-item-via-date

Conclusion

Fetching the current state of an entity is not straightforward and does not scale in event sourcing. This can be mitigated by taking snapshots: compute the state of the entity at a particular time, store it somewhere, and then only replay the events that occurred after that snapshot time. For more, you can refer to the documentation: https://www.baeldung.com/cqrs-event-sourcing-java
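The snapshot idea can be sketched in plain Java. The types here are hypothetical; a real implementation would persist the snapshot and fetch only the later events by timestamp, much like the EventRepository query above:

```java
import java.util.List;

public class SnapshotDemo {
    // A stock movement: positive quantity for STOCK_ADDED, negative for STOCK_REMOVED.
    record StockEvent(int quantityDelta) {}

    // A snapshot: the precomputed quantity as of some point in the event log.
    record Snapshot(int quantity) {}

    // Current state = snapshot state + only the events recorded after the snapshot,
    // instead of replaying the entire log from the beginning.
    static int currentQuantity(Snapshot snapshot, List<StockEvent> eventsAfterSnapshot) {
        int quantity = snapshot.quantity();
        for (StockEvent e : eventsAfterSnapshot) {
            quantity += e.quantityDelta();
        }
        return quantity;
    }

    public static void main(String[] args) {
        Snapshot snapshot = new Snapshot(40);                // state as of snapshot time
        List<StockEvent> tail = List.of(new StockEvent(10), new StockEvent(-5));
        System.out.println(currentQuantity(snapshot, tail)); // 45
    }
}
```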

Original article source at: https://blog.knoldus.com/

#event #springboot 


Learn Axon Terminologies

In this blog, we will discuss Axon Framework terminology and Axon Server. Before diving into the structure of an Axon application, it helps to know the concepts that come into play when working with one: Command Query Responsibility Segregation (CQRS), using commands, events, and queries as a message-driven API, Domain-Driven Design (DDD), event sourcing, and, very importantly, evolutionary microservices.

We intend to cover in detail the Axon terminology that the Axon Framework provides to help build applications based on CQRS/DDD and event sourcing.

A summary of the various terminologies is given below

Query

Axon queries describe a request for information or state. A query can have multiple handlers; when dispatching a query, the client indicates whether it wants a result from one or from all available query handlers.

  • Query Processing 
  • Query Dispatchers
  • Query Handlers
  • Implementations
  • Configuration

Messages

Messaging is one of Axon's core concepts. All communication between components is done via message objects. This gives these components the location transparency needed to scale and distribute them when necessary.

In Axon, all communication between components is done with explicit messages, represented by the Message interface. A Message consists of a payload, an application-specific object that represents the actual functional message, and metadata, a set of key-value pairs describing the context of the message.

  • Message Concept
  • Message Correlation
  • Message Interceptor
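The payload-plus-metadata shape of a message can be illustrated with a plain-Java sketch. This is not Axon's actual Message interface, and the command type is invented for the example; it only shows the separation of functional content from context:

```java
import java.util.Map;

public class MessageDemo {
    // A message wraps an application-specific payload with contextual metadata.
    record Message<T>(T payload, Map<String, String> metaData) {}

    // The payload is the functional content, e.g. an intent to add stock.
    record AddStockCommand(String item, int quantity) {}

    public static void main(String[] args) {
        Message<AddStockCommand> message = new Message<>(
                new AddStockCommand("laptop", 3),
                Map.of("correlationId", "42", "user", "gaurav")); // context, not content
        System.out.println(message.payload().item());             // laptop
    }
}
```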

Command

An Axon application accepts Command Messages from the outside world.

Axon commands describe an intent to change the application's state. They are implemented as (preferably read-only) POJOs that are wrapped in one of the CommandMessage interface implementations.

Commands always have exactly one destination. While the sender does not care which component handles the command or where that component resides, it may be interested in the outcome.

  • Modeling
  • Command Dispatchers
  • Command Handler
  • Infrastructure
  • Configuration

Event

Event sourcing is an architectural pattern in which events are considered the "source of truth", based on which components build their internal state. Events are objects that describe something that has occurred in an Axon-based application. A typical source of events is the aggregates: when something happens within an aggregate, it raises an event. In the Axon Framework, events can be any object, though it is highly encouraged to make sure all events are serializable. The event sourcing handlers combined form the aggregate; this is where all the state changes happen.

  • Event Dispatchers
  • Event Handler
  • Event Processors
  • Event Bus
  • Event Store
  • Event Versioning

Axon Server

On the Axon Server side, we expect support for both message handling and event storage. Axon Server comes with a zero-configuration message router and event store that combine gracefully with Axon to provide a solution both for storing events and for delivering messages between components.

Needless to say, it ticks all these boxes. Axon Server is built from scratch in Java specifically to meet these requirements. It manages files directly and does not depend on an underlying database system to store events.

Event Sourcing 

Event sourcing is an architectural pattern in which events are considered the "source of truth", based on which components build their internal state.

EventStore

The database “EventStore” (written with quotes to emphasize it is the name of a database) is a built-for-purpose solution and therefore meets all the requirements in our list. “EventStore” is a popular option written in .NET (with Java clients written using Akka). Axon Framework gives a huge selection of options for where to store your events, from traditional RDBMS options like PostgreSQL or MySQL to NoSQL databases such as MongoDB.

Summary

In this blog, we’ve summarized what queries, commands, messages, and an event store database need, and looked at the various options available. For the feature set and performance of your event sourcing system, we recommend specialized storage; particularly if you are already leveraging Axon Framework, choosing Axon Server is a logical choice to make.

Original article source at: https://blog.knoldus.com/

#event #java #query 

Nigel Uys

Real-Time Event Processing with Kafka

Introduction to Real-Time Event Processing with Kafka

As the industry grows, the data produced has also increased. This data can be a great asset to the business if analyzed properly. Most tech companies receive data in raw form, and processing it is challenging. Apache Kafka, an open-source streaming platform, helps you deal with this problem. It allows you to perform basic tasks like moving data from source to destination as well as more complex tasks like altering the structure of the data and performing aggregations, all on the fly in real time. Real-time event processing with Kafka in a serverless environment makes your job easier by taking on the overhead of managing the server and allowing you to focus solely on building your application.

The new technologies give us the ability to develop and deploy lifesaving applications at unprecedented speed — while also safeguarding privacy. Source: Tracking People And Events In Real Time

What is a Serverless Environment and Apache Kafka?

Serverless is a form of computing architecture in which all computational capacities are transferred to cloud platforms; this can help increase speed and performance. A serverless environment helps build and run applications and use various services without worrying about the server. It enables developers to put all their effort into the core development of their applications, removing the overhead of managing the server and freeing up that time to build better applications.
Apache Kafka is an open-source event streaming platform that provides data storing, reading, and analyzing capabilities. It has high throughput, reliability, and a replication factor that makes it highly fault-tolerant. It is fast and scalable. It is distributed, allowing its users to run it across many machines, giving it extra processing power and storage capability. It was initially built as a messaging queue system but has evolved into a full-fledged event streaming platform over time.

How does Apache Kafka Work?

Kafka acts as a messenger, sending messages from one application to another. Messages sent by the producer (sender) are grouped into a topic that the consumer (subscriber) subscribes to as a stream of data.

How Apache Kafka Works

Kafka Streams API and KSQL for real-time event streaming: Kafka Streams is a client library for analyzing data. A stream is a continuous flow of data to be analyzed for our purposes. Kafka Streams helps us read this data in real time with milliseconds of latency, perform aggregation functions, and write the output to a new topic. The picture below shows the working of an application that uses the Apache Kafka Streams library.
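The kind of continuous aggregation a Kafka Streams application performs can be sketched in plain Java. This sketch keeps a running count per key in memory; a real Streams topology would consume the records from a topic and write the counts to another topic, but the per-key state update is the same idea:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class StreamCountDemo {
    // Fold a stream of record keys into a running count per key, the way a
    // grouped count aggregation in a stream processor updates its state
    // as each record arrives.
    static Map<String, Long> countByKey(List<String> keys) {
        Map<String, Long> counts = new HashMap<>();
        for (String key : keys) {
            counts.merge(key, 1L, Long::sum); // update state per incoming record
        }
        return counts;
    }

    public static void main(String[] args) {
        List<String> incoming = List.of("clicks", "views", "clicks");
        System.out.println(countByKey(incoming)); // running counts after three records
    }
}
```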

What are the Features of Kafka Stream and KSQL?

  1. High scalability, elasticity, and fault tolerance
  2. Deploys on the cloud and on VMs
  3. Write standard Java/Scala applications
  4. No separate cluster needed
  5. It is viable for any case: small, medium, or large

KSQL is a streaming SQL engine for real-time event processing against Apache Kafka. It provides an easy yet powerful interactive SQL interface for stream processing, relieving you from writing any Java or Python code.

Different use cases of KSQL 

  1. Filtering data: filter the data using a simple SQL-like query with a where clause.
  2. Data transformation and conversion: data conversion becomes very handy with KSQL; for instance, converting data from JSON to Avro format is easy.
  3. Data enrichment with joins: the join function helps enrich data.
  4. Data manipulation with scalar functions: data can be analyzed with aggregation, processing, and window operations; you can perform aggregation functions such as sum, count, and average on your data. If you want the data of, let us say, the last twenty minutes or the previous day, that can also be done using a window function.

Read more about Apache Kafka Security with Kerberos on Kubernetes.

 Features of KSQL

  1. Develop on macOS, Linux, and Windows
  2. Deploy to containers, the cloud, and VMs
  3. High scalability, elasticity, and fault tolerance
  4. It is viable for any case: small, medium, or large
  5. Integrated with Kafka security

Kafka on AWS, on Azure and on GCP

On AWS

AWS provides Amazon MSK, a fully managed service that allows you to build Apache Kafka applications for real-time event processing. It can be tedious to manage the setup and scaling of Apache Kafka clusters in production. If you run it on your own, you have to provision servers, configure Kafka manually, replace servers on failure, integrate upgrades and server patches, architect the cluster to maximize availability, ensure data safety, and plan scaling events from time to time to support load changes. Amazon MSK makes it a cakewalk to create and run production applications on Apache Kafka without Kafka infrastructure management expertise, taking the weight of managing infrastructure off your shoulders so you can focus on building applications.

Benefits of Amazon MSK

  • Amazon MSK is fully compatible with Apache Kafka, which allows you to migrate your application to AWS without making any changes.
  • It enables you to focus on building applications by taking on the overhead of managing your Apache Kafka cluster.
  • Amazon MSK creates multi-replicated Kafka clusters, manages them, and replaces brokers on failure, thus ensuring high availability.
  • It provides high security for your cluster.

Before discussing how Kafka works on Azure, let us quickly get insight into Microsoft Azure.

What is Microsoft Azure?

Well, Azure is a set of cloud services provided by Microsoft to meet your daily business challenges by giving you the utility to build, manage, deploy, and scale applications over an extensive global platform. It provides Azure HDInsight, a cloud-based service used for data analytics. It allows us to run popular open-source frameworks, including Apache Kafka, with effective cost and enterprise-grade services. Azure enables massive data processing with minimal effort, complemented by the benefits of an open-source ecosystem. Quickstart: to create an Apache Kafka cluster on Azure HDInsight using the Azure portal, follow the steps given below:

  • Sign in to the Azure portal and select + Create a resource
  • To go to the Create HDInsight cluster page, select Analytics => Azure HDInsight
  • From the Basics tab, provide the information marked (*):
  1. Subscription: provide the Azure subscription used for the cluster
  2. Resource group: enter the appropriate resource group (HDInsight)
  3. Cluster details: provide all the cluster details (cluster name, location, type)
  4. Cluster credentials: give all the cluster credentials (username, password, Secure Shell (SSH) username)
  • For the next step, select the Storage tab and provide the details:
  1. Primary storage type: set to default (Azure)
  2. Selection method: set to default
  3. Primary storage account: select your preference from the drop-down menu
  • Now select the Security + networking tab and choose your desired settings
  • Next, click on the Configuration + pricing tab and select the number of nodes and sizes for the various fields (zookeeper = 3, worker nodes = 4 are preferred for the Apache Kafka guarantees)
  • The next step is to select Review + create (it takes approximately 20 minutes to start the cluster)
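For reference, roughly the same cluster can be sketched with the Azure CLI. The resource names and credentials below are placeholders, and the exact flags may differ across CLI versions, so treat this as an illustration rather than a copy-paste command:

```shell
# Illustrative az CLI invocation for an HDInsight Kafka cluster (placeholder values)
az hdinsight create \
  --name CLUSTERNAME \
  --resource-group RESOURCEGROUP \
  --type kafka \
  --workernode-count 4 \
  --http-user admin --http-password 'PASSWORD' \
  --ssh-user sshuser --ssh-password 'PASSWORD' \
  --storage-account STORAGEACCOUNT
```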

Command to connect to the cluster: ssh sshuser@CLUSTERNAME-ssh.azurehdinsight.net

Know more about Stream Processing with Apache Flink and Kafka

Kafka on Azure

  • It is managed and provides simplified configuration
  • It uses Azure managed disks, which provide up to 16 TB of storage per Kafka broker
  • Microsoft guarantees a 99.9% SLA (service level agreement) on Kafka uptime

Azure separates Kafka's single-dimension view of the rack into a two-dimension rack view (update domain and fault domain) and provides tools to rebalance Kafka partitions across these domains.

Kafka for Google Cloud Platform (GCP)

Following Kafka's huge demand and its adoption by developers due to its large scalability and workload-handling capabilities, almost all developers are shifting towards stream-oriented applications rather than state-oriented ones. Breaking the stereotype that managing Kafka requires expert skills, Confluent Cloud provides developers with fully managed Kafka, so they need not worry about the gruesome work of managing it. It is built as a 'cloud-native service' that provides developers with a serverless environment with utility pricing, charging for the streams used. Confluent Cloud is available on GCP, and developers can use it by signing up and paying for usage; it thus provides an integrated environment for billing using metered GCP capacity. Tools provided by Confluent Cloud, such as the Confluent Schema Registry, the BigQuery Collector, and support for KSQL, are covered by this subscription, freeing developers from having to take care of the technical plumbing or write that code themselves.

How Kafka works on Amazon MSK ?

In a few steps, you can provision your Apache Kafka cluster by logging in to Amazon MSK. Amazon MSK manages, integrates, and upgrades the Apache Kafka cluster for you, leaving you free to build your application.

What are the Steps to Deploy Confluent Cloud?

Given below are the steps to deploy Confluent Cloud on GCP.

  1. Spin up Confluent Cloud: First, log in, select the project in which to spin up Confluent Cloud, and then select the 'Marketplace' option.
  2. Select Confluent Cloud in Marketplace: In the Marketplace, select Apache Kafka on Confluent Cloud.
  3. Buy Confluent Cloud: The Confluent Cloud purchase page will show up; purchase it by clicking the 'purchase' button.
  4. Enable Confluent Cloud on GCP: After purchasing, enable the API by clicking the 'enable' button once the API page opens.
  5. Register the Admin User: Register as the cloud's primary user from the 'Manage via Confluent' page, then verify the email address.

After all these steps, the user can log in to their account. Some decisions about the clusters still have to be made, but they are not as complicated as when users manage Apache Kafka entirely on their own. This makes developing event-streaming applications easier and provides a better experience.

Conclusion

So, in a nutshell, real-time event processing with Kafka is in huge demand among developers due to Kafka's extensive scalability and workload-handling capabilities; almost all developers are shifting towards stream-oriented applications rather than state-oriented ones. Combining this with a serverless environment reduces the burden of managing the cluster, letting us focus on development and leaving most of the operational work to the serverless environment.

Original article source at: https://www.xenonstack.com/

#kafka #event 

Monty Boehm

1669643708

How to Remove JavaScript Event Listeners Attached To HTML Elements

JavaScript: how to remove event listeners

JavaScript provides a built-in function called removeEventListener() that you can use to remove event listeners attached to HTML elements. Suppose you have an event listener attached to a <button> element as follows:

<body>
  <button id="save">Save</button>
  <script>
    let button = document.getElementById("save");

    function fnClick(event) {
      alert("Button save is clicked");
    }
    
    button.addEventListener("click", fnClick);
  </script>
</body>

To remove the "click" event listener attached from the <script> tag above, you need to use the removeEventListener() method, passing the type of the event and the callback function to remove from the element:

button.removeEventListener("click", fnClick);

The above code should suffice to remove the "click" event listener from the button element. Notice how you need to call the removeEventListener() method on the element while also passing the fnClick function reference to the method. To correctly remove an event listener, you need a reference both to the element the listener is attached to and to the callback function itself.

This is why it’s not recommended to pass a nameless callback function to event listeners as follows:

button.addEventListener("click", function(event){
  alert("Button save is clicked");
})

Without the callback function name as in the example above, you won’t be able to remove the event listener.
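If you do need to detach a nameless listener, modern browsers (and Node.js 15+) also let you pass an AbortSignal in the options of addEventListener; aborting the controller removes the listener without needing a function reference. A minimal sketch, using a bare EventTarget as a stand-in for the button element (the pattern is identical for DOM elements):

```javascript
// Sketch: removing an anonymous listener via AbortController.
const controller = new AbortController();
const target = new EventTarget(); // stand-in for the <button> element

let clicks = 0;
target.addEventListener(
  "click",
  () => { clicks++; },          // nameless callback, normally unremovable
  { signal: controller.signal } // tie the listener to the signal
);

target.dispatchEvent(new Event("click")); // handler runs: clicks is 1
controller.abort();                       // detaches the listener
target.dispatchEvent(new Event("click")); // handler no longer runs
```

A side benefit of this approach is that several listeners registered with the same signal are all removed by a single abort() call.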

Removing event listener after click

Sometimes, you may also want to disable the button element and remove the event listener to prevent a double-click from your users. You can do so by writing the removeEventListener() method inside the addEventListener() method as shown below:

<body>
  <button id="save">Save</button>
  <script>
    let button = document.getElementById("save");

    function fnClick(event) {
      alert("Button save is clicked");
      button.disabled = true; // disable button
      button.removeEventListener("click", fnClick); // remove event listener
    }

    button.addEventListener("click", fnClick);
  </script>
</body>

In the code above, the button element will be disabled and the event listener will be removed after a "click" event is triggered.
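When the handler should only ever run once, you can also let the runtime do the removal for you by passing { once: true } to addEventListener (supported in modern browsers and Node.js 15+). A small sketch, again with an EventTarget standing in for the button; note this only removes the listener, so disabling the button is still up to you:

```javascript
// Sketch: a self-removing listener via the { once: true } option.
const target = new EventTarget(); // stand-in for the <button> element

let saves = 0;
target.addEventListener("click", () => { saves++; }, { once: true });

target.dispatchEvent(new Event("click")); // handler runs, then auto-detaches
target.dispatchEvent(new Event("click")); // nothing happens this time
// saves is 1
```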

And that’s how you remove JavaScript event listeners attached to HTML elements. You need to keep references to the element you want to remove the listener from, the type of the event, and the callback function executed by the event so that you can remove the event listener without any error.

Original article source at: https://sebhastian.com/

#javascript #event #elements 

Bongani Ngema

1669464016

How to Toggle A Div Element Display By using Button onclick Event

JavaScript - How to show and hide div by a button click

To display or hide a <div> by a <button> click, you can add the onclick event listener to the <button> element.

The onclick listener for the button will have a function that changes the display style property of the <div> from its default value (which is block) to none.

For example, suppose you have an HTML <body> element as follows:

<body>
  <div id="first">This is the FIRST div</div>
  <div id="second">This is the SECOND div</div>
  <div id="third">This is the THIRD div</div>
  <button id="toggle">Hide THIRD div</button>
</body>

The <button> element above is created to hide or show the <div id="third"> element on click.

You need to add the onclick event listener to the <button> element like this:

const targetDiv = document.getElementById("third");
const btn = document.getElementById("toggle");
btn.onclick = function () {
  if (targetDiv.style.display !== "none") {
    targetDiv.style.display = "none";
  } else {
    targetDiv.style.display = "block";
  }
};

Setting the display property to none makes the HTML render as if the <div> element never existed.

When you click the <button> element again, the display property is set back to block, so the <div> is rendered on the HTML page again.

Since this solution is using JavaScript API native to the browser, you don’t need to install any JavaScript libraries like jQuery.
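The if/else on style.display can also be collapsed into a single flip of the element's boolean hidden property, which hides an element much like display: none. A minimal sketch of that toggle logic, using a plain object in place of the real DOM element so it is self-contained:

```javascript
// Stand-in for document.getElementById("third"); a real DOM element
// exposes the same boolean `hidden` property.
const targetDiv = { hidden: false };

// In the page you would assign this function as btn.onclick.
function toggleThirdDiv() {
  targetDiv.hidden = !targetDiv.hidden; // hide if shown, show if hidden
}

toggleThirdDiv(); // targetDiv.hidden is now true  (div disappears)
toggleThirdDiv(); // targetDiv.hidden is now false (div is back)
```

One difference from the style.display approach: un-hiding restores whatever display value your CSS assigns, rather than forcing block.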

You can add the JavaScript code to your HTML <body> tag using the <script> tag as follows:

<body>
  <div id="first">This is the FIRST div</div>
  <div id="second">This is the SECOND div</div>
  <div id="third">This is the THIRD div</div>
  <button id="toggle">Hide THIRD div</button>

  <script>
    const targetDiv = document.getElementById("third");
    const btn = document.getElementById("toggle");
    btn.onclick = function () {
      if (targetDiv.style.display !== "none") {
        targetDiv.style.display = "none";
      } else {
        targetDiv.style.display = "block";
      }
    };
  </script>
</body>

Feel free to use and modify the code above in your project.

I hope this tutorial has been useful for you. 👍

Original article source at: https://sebhastian.com/

#javascript #event 

Oral Brekke

1669022820

How React onChange Event Handlers Work

Let's learn how to use React onChange events properly for keeping track of user input.

The onChange event handler is a prop that you can pass into JSX <input> elements.

This prop is provided by React so that your application can listen to user input in real-time.

When an onChange event occurs, the prop will call the function you passed as its parameter.

Here’s an example of the onChange event in action:

import React from "react";

function App() {
  function handleChange(event) {
    console.log(event.target.value);
  }

  return (
    <input
      type="text"
      name="firstName"
      onChange={handleChange}
    />
  );
}

export default App;

In the example above, the handleChange() function will be called every time the onChange event occurs for the <input> element.

The event object passed into the handleChange() function contains all the detail about the input event.

You can also declare a function right inside the onChange prop like this:

import React from "react";

function App() {
  return (
    <input
      type="text"
      name="firstName"
      onChange={event => console.log("onchange is triggered")}
    />
  );
}

export default App;

Now whenever you type something into the text box, React will trigger the function that we passed into the onChange prop.

Common use cases for React onChange event handler

In regular HTML, form elements such as <input> and <textarea> usually maintain their own value:

<input id="name" type="text" />

Which you can retrieve by using the document selector:

var name = document.getElementById("name").value;

In React, however, developers are encouraged to store input values in the component’s state object.

This way, React components that render <input> elements also control what happens on subsequent user input.

First, you create a state for the input as follows:

import React, { useState } from "react";

function App(props) {
  const [name, setName] = useState("");
}

Then, you create an input element and call the setName function to update the name state.

Every time the onChange event is triggered, React will pass the event argument into the function that you define inside the prop:

import React, { useState } from "react";

function App(props) {
  const [name, setName] = useState("");

  return (
    <input
      type="text"
      name="firstName"
      onChange={event => setName(event.target.value)}
    />
  );
}

Finally, you use the value of name state and put it inside the input’s value prop:

return (
  <input
    type="text"
    name="firstName"
    onChange={event => setName(event.target.value)}
    value={name}
  />
);

You can retrieve input value in event.target.value and input name in event.target.name.

As in the previous example, you can also separate the onChange handler into its own function. The event object is commonly shortened as e like this:

import React, { useState } from "react";

function App(props) {
  const [name, setName] = useState("");

  function handleChange(e) {
    setName(e.target.value);
  }

  return (
    <input 
      type="text" 
      name="firstName" 
      onChange={handleChange} 
      value={name} 
    />
  );
}

This pattern of using React’s onChange event and the component state will encourage developers to use state as the single source of truth.

Instead of using the Document object to retrieve input values, you retrieve them from the state.
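The same pattern scales to forms with several inputs: because event.target.name identifies the field, one handler can update the matching key in a single state object. The state-merging step is sketched below without JSX so it stands alone (applyChange is a hypothetical helper name, not a React API):

```javascript
// Merge one input's change into a form-state object, keyed by the
// input's name attribute. Returns a new object; never mutates state.
function applyChange(state, event) {
  return { ...state, [event.target.name]: event.target.value };
}

// Inside a component you would call it from onChange, for example:
//   onChange={e => setForm(prev => applyChange(prev, e))}

const initial = { firstName: "", lastName: "" };
const next = applyChange(initial, {
  target: { name: "firstName", value: "Ada" },
});
console.log(next.firstName); // "Ada"
console.log(next.lastName);  // ""
```

Returning a fresh object (instead of mutating) matters here, since React relies on new state references to know when to re-render.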

And now you’ve learned how React onChange event handler works. Nice job! 👍

Original article source at: https://sebhastian.com/

#react #event 

Hermann Frami

1668115440

React to any Event with Serverless Functions Across Clouds

The Event Gateway combines both API Gateway and Pub/Sub functionality into a single event-driven experience. It's dataflow for event-driven, serverless architectures. It routes Events (data) to Functions (serverless compute). Everything it cares about is an event! Even calling a function. It makes it easy to share events across different systems, teams and organizations!

Use the Event Gateway right now, by running the Event Gateway Getting Started Application with the Serverless Framework.

Features:

  • Platform agnostic - All your cloud services are now compatible with one another: share cross-cloud functions and events with AWS Lambda, Microsoft Azure, IBM Cloud and Google Cloud Platform.
  • Send events from any cloud - Data streams in your application become events. Centralize events from any cloud provider to get a bird’s eye view of all the data flowing through your cloud.
  • React to cross-cloud events - You aren’t locked in to events and functions being on the same provider: Any event, on any cloud, can trigger any function. Set events and functions up like dominoes and watch them fall.
  • First-class support for CloudEvents - Emit and react to events in CloudEvents standard.
  • Expose events to your team - Share events and functions to other parts of the application. Your teammates can find them and utilize them in their own services.
  • Extendable through middleware - Perform data transforms, authorizations, serializations, and other custom computes straight from the Event Gateway.

The Event Gateway is an L7 proxy and realtime dataflow engine, intended for use with Functions-as-a-Service on AWS, Azure, Google & IBM.

The project is under heavy development. The APIs will continue to change until we release a 1.0.0 version. It's not yet ready for production applications.

Event Gateway - Build event-driven integrations with lambda, cloud functions, kubernetes

Reference

  1. API
  2. Event Types
  3. Subscription Types
  4. Architecture
  5. Clustering
  6. System Events and Plugin System
  7. System Metrics
  8. Reliability Guarantees

Quick Start

Getting Started

Looking for an example to get started? The easiest way to use the Event Gateway is with the serverless-event-gateway-plugin with the Serverless Framework. Check out the Getting Started Example to deploy your first service to the Event Gateway.

Running the Event Gateway

Hosted version

If you don't want to run the Event Gateway yourself, you can use the hosted version provided by the Serverless team. Sign up here!

via Docker

There is an official Docker image.

docker run -p 4000:4000 -p 4001:4001 serverless/event-gateway --dev

Binary

On macOS or Linux run the following to download the binary:

curl -sfL https://raw.githubusercontent.com/serverless/event-gateway/master/install.sh | sh

On Windows download binary.

Then run the binary in development mode with:

$ event-gateway --dev

Kubernetes

The repo contains helm charts for a quick deploy to an existing cluster using native nginx Ingress. To deploy a development cluster you can follow the minikube instructions.


If you want more detailed information on running and developing with the Event Gateway, please check Running Locally and Developing guides.

Motivation

  • It is cumbersome to plug things into each other. This should be easy! Why do I need to set up a queue system to keep track of new user registrations or failed logins?
  • Introspection is terrible. There is no performant way to emit logs and metrics from a function. How do I know a new piece of code is actually working? How do I feed metrics to my existing monitoring system? How do I plug this function into my existing analytics system?
  • Using new functions is risky without the ability to incrementally deploy them.
  • The AWS API Gateway is frequently cited as a performance and cost-prohibitive factor for using AWS Lambda.

Components

Event Registry

The Event Registry is a single source of truth about events occurring in a space. Every event emitted to a space has to have its event type registered beforehand. The Event Registry also provides a way to authorize incoming events. Please check the Event Types reference for more information.

Function Discovery

Discover and call serverless functions from anything that can reach the Event Gateway. Function Discovery supports the following function types:

  • FaaS functions (AWS Lambda, Google Cloud Functions, Azure Functions, IBM Cloud Functions)
  • Connectors (AWS Kinesis, AWS Kinesis Firehose, AWS SQS)
  • HTTP endpoints/Webhook (e.g. POST http://example.com/function)

Function Discovery stores information about functions, allowing the Event Gateway to call them in reaction to a received event.

Example: Register An AWS Lambda Function

curl example

curl --request POST \
  --url http://localhost:4001/v1/spaces/default/functions \
  --header 'content-type: application/json' \
  --data '{
    "functionId": "hello",
    "type": "awslambda",
    "provider":{
      "arn": "arn:aws:lambda:us-east-1:377024778620:function:bluegreen-dev-hello",
      "region": "us-east-1"
    }
}'

Node.js SDK example

const eventGateway = new EventGateway({ url: 'http://localhost' })
eventGateway.registerFunction({
  functionId: 'sendEmail',
  type: 'awslambda',
  provider: {
    arn: 'xxx',
    region: 'us-west-2'
  }
})

Subscriptions

Lightweight pub/sub system. Allows functions to asynchronously receive custom events. Instead of rewriting your functions every time you want to send data to another place, this can be handled entirely in configuration using the Event Gateway. This completely decouples functions from one another, reducing communication costs across teams, eliminates effort spent redeploying functions, and allows you to easily share events across functions, HTTP services, even different cloud providers. Functions may be registered as subscribers to a custom event. When an event occurs, all subscribers are called asynchronously with the event as its argument.

Creating a subscription requires providing the ID of a registered function, an event type, an HTTP method (POST by default), and a path (/ by default). The method and path properties define the HTTP endpoint that the Events API will listen on.

Event Gateway supports two subscription types: async and sync. Please check Subscription Types reference for more information.

Example: Subscribe to an Event

curl example

curl --request POST \
  --url http://localhost:4001/v1/spaces/default/subscriptions \
  --header 'content-type: application/json' \
  --data '{
    "type": "async",
    "eventType": "user.created",
    "functionId": "sendEmail",
    "path": "/myteam"
  }'

Node.js SDK example

const eventGateway = new EventGateway({ url: 'http://localhost' })
eventGateway.subscribe({
  type: 'async',
  eventType: 'user.created',
  functionId: 'sendEmail',
  path: '/myteam'
})

The sendEmail function will be invoked for every user.created event sent to the <Events API>/myteam endpoint.

Example: Emit an Event

curl example

curl --request POST \
  --url http://localhost:4000/ \
  --header 'content-type: application/json' \
  --data '{
    "eventType": "myapp.user.created",
    "eventID": "66dfc31d-6844-42fd-b1a7-a489a49f65f3",
    "cloudEventsVersion": "0.1",
    "source": "/myapp/services/users",
    "eventTime": "1990-12-31T23:59:60Z",
    "data": { "userID": "123" },
    "contentType": "application/json"
  }'

Node.js SDK example

const eventGateway = new EventGateway({ url: 'http://localhost' })
eventGateway.emit({
  "eventType": "myapp.user.created",
  "eventID": "66dfc31d-6844-42fd-b1a7-a489a49f65f3",
  "cloudEventsVersion": "0.1",
  "source": "/myapp/services/users",
  "eventTime": "1990-12-31T23:59:60Z",
  "data": { "userID": "123" },
  "contentType": "application/json"
})

Example: Subscribe to an http.request Event

Not all data arrives as events; that's why the Event Gateway has a special, built-in http.request event type that enables subscribing to raw HTTP requests.

curl example

curl --request POST \
  --url http://localhost:4001/v1/spaces/default/subscriptions \
  --header 'content-type: application/json' \
  --data '{
    "type": "sync",
    "eventType": "http.request",
    "functionId": "listUsers",
    "method": "GET",
    "path": "/users"
  }'

Node.js SDK example

const eventGateway = new EventGateway({ url: 'http://localhost' })
eventGateway.subscribe({
  type: 'sync',
  eventType: 'http.request',
  functionId: 'listUsers',
  method: 'GET',
  path: '/users'
})

The listUsers function will be invoked for every HTTP GET request to the <Events API>/users endpoint.

Spaces

One additional concept in the Event Gateway is Spaces. Spaces provide isolation between resources. A space is a coarse-grained sandbox in which entities (Functions and Subscriptions) can interact freely. All actions are possible within a space: publishing, subscribing and invoking.

Space is not about access control/authentication/authorization. It's only about isolation. It doesn't enforce any specific subscription path.

This is how Spaces fit different needs depending on use-case:

  • single user - single user uses default space for registering function and creating subscriptions.
  • multiple teams/departments - different teams/departments use different spaces for isolation and for hiding internal implementation and architecture.

Technically speaking, Space is a mandatory field ("default" by default) on the Function and Subscription objects that the user has to provide during function registration or subscription creation. Space is a first-class concept in the Config API, which can register a function in a specific space or list all functions or subscriptions from a space.

CloudEvents Support

The Event Gateway has first-class support for CloudEvents. This means a few things.

First of all, if the event emitted to the Event Gateway is in CloudEvents format, the Event Gateway is able to recognize it and trigger proper subscriptions based on event type specified in the event. Event Gateway supports both Structured Content and Binary Content modes described in HTTP Transport Binding spec.

Secondly, there is a special, built-in HTTP Request Event type allowing reacting to raw HTTP requests that are not formatted according to CloudEvents spec. This event type can be especially helpful for building REST APIs.

Currently, Event Gateway supports CloudEvents v0.1 schema specification.

SDKs

Versioning

This project uses Semantic Versioning 2.0.0. We are in initial development phase right now (v0.X.Y). The public APIs should not be considered stable. Every breaking change will be listed in the release changelog.

FAQ

What The Event Gateway is NOT

  • it's not a replacement for message queues (no message ordering, currently weak durability guarantees only)
  • it's not a replacement for streaming platforms (no processing capability and consumers group)
  • it's not a replacement for existing service discovery solutions from the microservices world

Event Gateway vs FaaS Providers

The Event Gateway is NOT a FaaS platform. It integrates with existing FaaS providers (AWS Lambda, Google Cloud Functions, Azure Functions, OpenWhisk Actions). The Event Gateway enables building large serverless architectures in a unified way across different providers.

Background

SOA came along with a new set of challenges. In monolithic architectures, it was simple to call a built-in library or rarely-changing external service. In SOA it involves much more network communication which is not reliable. The main problems to solve include:

  1. Where is the service deployed? How many instances are there? Which instance is the closest to me? (service discovery)
  2. Requests to the service should be balanced between all service instances (load balancing)
  3. If a remote service call failed I want to retry it (retries)
  4. If the service instance failed I want to stop sending requests there (circuit breaking)
  5. Services are written in multiple languages, I want to communicate between them using the best language for the particular task (sidecar)
  6. Calling remote service should not require setting up new connection every time as it increases request time (persistent connections)

The following systems are solutions to those problems:

The main goal of those tools is to manage the inconveniences of network communication.

Microservices Challenges & FaaS

The greatest benefit of serverless/FaaS is that it solves almost all of the above problems:

  1. service discovery: I don't care! I have a function name, that's all I need.
  2. load balancing: I don't care! I know that there will be a function to handle my request (blue/green deployments still an issue though)
  3. retries: It's highly unusual that my request will not proceed as function instances are ephemeral and failing function is immediately replaced with a new instance. If it happens I can easily send another request. In case of failure, it's easy to understand what is the cause.
  4. circuit breaking: Functions are ephemeral and auto-scaled, low possibility of flooding/DoS & cascading failures.
  5. sidecar: calling function is as simple as calling method from cloud provider SDK.
  6. persistent connections: in FaaS, setting up a persistent connection between two functions defeats the purpose, as function instances are ephemeral.

Tools like Envoy/Linkerd solve a different domain of technical problems that doesn't occur in the serverless space. They have a lot of features that are unnecessary in the context of serverless computing.

Service Discovery in FaaS = Function Discovery

Service discovery problems may be relevant to serverless architectures, especially when we have a multi-cloud setup or want to call a serverless function from a legacy system (microservices, etc...). There is a need for some proxy that knows where the function is actually deployed and has retry logic built in. Mapping from a function name to serverless function-calling metadata is a different problem from tracking the availability of a changing number of service instances. That's why there is room for new tools that solve the function-discovery problem rather than the service-discovery problem. Those problems are fundamentally different.

Download Details:

Author: Serverless
Source Code: https://github.com/serverless/event-gateway 
License: Apache-2.0 license

#serverless #event #golang #pubsub 

Hermann Frami

1667716620

KEDA: Kubernetes-based Event Driven Autoscaling

KEDA

Kubernetes-based Event Driven Autoscaling

KEDA allows for fine-grained autoscaling (including to/from zero) for event driven Kubernetes workloads. KEDA serves as a Kubernetes Metrics Server and allows users to define autoscaling rules using a dedicated Kubernetes custom resource definition.

KEDA can run on both the cloud and the edge, integrates natively with Kubernetes components such as the Horizontal Pod Autoscaler, and has no external dependencies.

We are a Cloud Native Computing Foundation (CNCF) incubation project.

Getting started

You can find several samples for various event sources here.

Deploying KEDA

There are many ways to deploy KEDA including Helm, Operator Hub and YAML files.

Documentation

Interested to learn more? Head over to keda.sh.

Community

If interested in contributing or participating in the direction of KEDA, you can join our community meetings! Learn more about them on our website.

Just want to learn or chat about KEDA? Feel free to join the conversation in #KEDA on the Kubernetes Slack!

Adopters - Become a listed KEDA user!

We are always happy to list users who run KEDA in production, learn more about it here.

Governance & Policies

You can learn about the governance of KEDA here.

Roadmap

We use GitHub issues to build our backlog, a complete overview of all open items and our planning.

Learn more about our roadmap here.

Releases

You can find the latest releases here.

Contributing

You can find contributing guide here.

Building & deploying locally

Learn how to build & deploy KEDA locally here.

Download Details:

Author: Kedacore
Source Code: https://github.com/kedacore/keda 
License: Apache-2.0 license

#serverless #kubernetes #event 

Rupert Beatty

1665913800

Swift-nio: Event-driven Network Application Framework

SwiftNIO

SwiftNIO is a cross-platform asynchronous event-driven network application framework for rapid development of maintainable high performance protocol servers & clients.

It's like Netty, but written for Swift.

Repository organization

The SwiftNIO project is split across multiple repositories:

Repository (with the NIO 2 / Swift 5.5.2+ version to depend on):

  • https://github.com/apple/swift-nio: SwiftNIO core, from: "2.0.0"
  • https://github.com/apple/swift-nio-ssl: TLS (SSL) support, from: "2.0.0"
  • https://github.com/apple/swift-nio-http2: HTTP/2 support, from: "1.0.0"
  • https://github.com/apple/swift-nio-extras: useful additions around SwiftNIO, from: "1.0.0"
  • https://github.com/apple/swift-nio-transport-services: first-class support for macOS, iOS, tvOS, and watchOS, from: "1.0.0"
  • https://github.com/apple/swift-nio-ssh: SSH support, .upToNextMinor(from: "0.2.0")

NIO 2.29.0 and older support Swift 5.0+, NIO 2.39.0 and older support Swift 5.2+.

Within this repository we have a number of products that provide different functionality. This package contains the following products:

  • NIO. This is an umbrella module exporting NIOCore, NIOEmbedded and NIOPosix.
  • NIOCore. This provides the core abstractions and types for using SwiftNIO (see "Conceptual Overview" for more details). Most NIO extension projects that provide things like new EventLoops and Channels or new protocol implementations should only need to depend on NIOCore.
  • NIOPosix. This provides the primary [EventLoopGroup], EventLoop, and Channels for use on POSIX-based systems. This is our high performance core I/O layer. In general, this should only be imported by projects that plan to do some actual I/O, such as high-level protocol implementations or applications.
  • NIOEmbedded. This provides EmbeddedChannel and EmbeddedEventLoop, implementations of the NIOCore abstractions that provide fine-grained control over their execution. These are most often used for testing, but can also be used to drive protocol implementations in a way that is decoupled from networking altogether.
  • NIOConcurrencyHelpers. This provides a few low-level concurrency primitives that are used by NIO implementations, such as locks and atomics.
  • NIOFoundationCompat. This extends a number of NIO types for better interoperation with Foundation data types. If you are working with Foundation data types such as Data, you should import this.
  • NIOTLS. This provides a few common abstraction types for working with multiple TLS implementations. Note that this doesn't provide TLS itself: please investigate swift-nio-ssl and swift-nio-transport-services for concrete implementations.
  • NIOHTTP1. This provides a low-level HTTP/1.1 protocol implementation.
  • NIOWebSocket. This provides a low-level WebSocket protocol implementation.
  • NIOTestUtils. This provides a number of helpers for testing projects that use SwiftNIO.

Protocol Implementations

Below you can find a list of a few protocol implementations that are done with SwiftNIO. This is a non-exhaustive list of protocols that are either part of the SwiftNIO project or are accepted into the SSWG's incubation process. All of the libraries listed below do all of their I/O in a non-blocking fashion using SwiftNIO.

Low-level protocol implementations

Low-level protocol implementations are often a collection of ChannelHandlers that implement a protocol but still require the user to have a good understanding of SwiftNIO. Often, low-level protocol implementations will then be wrapped in high-level libraries with a nicer, more user-friendly API.

Protocol (repository, module, comment):

  • HTTP/1: apple/swift-nio, module NIOHTTP1 (official NIO project)
  • HTTP/2: apple/swift-nio-http2, module NIOHTTP2 (official NIO project)
  • WebSocket: apple/swift-nio, module NIOWebSocket (official NIO project)
  • TLS: apple/swift-nio-ssl, module NIOSSL (official NIO project)
  • SSH: apple/swift-nio-ssh, module n/a (official NIO project)

High-level implementations

High-level implementations are usually libraries that come with an API that doesn't expose SwiftNIO's ChannelPipeline and can therefore be used with very little (or no) SwiftNIO-specific knowledge. The implementations listed below do still do all of their I/O in SwiftNIO and integrate really well with the SwiftNIO ecosystem.

Protocol (repository, module, comment):

  • HTTP: swift-server/async-http-client, module AsyncHTTPClient (SSWG community project)
  • gRPC: grpc/grpc-swift, module GRPC (also offers a low-level API; SSWG community project)
  • APNS: kylebrowning/APNSwift, module APNSwift (SSWG community project)
  • PostgreSQL: vapor/postgres-nio, module PostgresNIO (SSWG community project)
  • Redis: mordil/swift-redi-stack, module RediStack (SSWG community project)

Supported Versions

SwiftNIO 2

This is the current version of SwiftNIO and will be supported for the foreseeable future.

The most recent versions of SwiftNIO support Swift 5.5.2 and newer. The minimum Swift version supported by each SwiftNIO release is detailed below:

| SwiftNIO | Minimum Swift Version |
|---|---|
| 2.0.0 ..< 2.30.0 | 5.0 |
| 2.30.0 ..< 2.40.0 | 5.2 |
| 2.40.0 ..< 2.43.0 | 5.4 |
| 2.43.0 ... | 5.5.2 |

SwiftNIO 1

SwiftNIO 1 is considered end of life - it is strongly recommended that you move to a newer version. The Core NIO team does not actively work on this version. No new features will be added to this version but PRs which fix bugs or security vulnerabilities will be accepted until the end of May 2022.

If you have a SwiftNIO 1 application or library that you would like to migrate to SwiftNIO 2, please check out the migration guide we prepared for you.

The latest released SwiftNIO 1 version supports Swift 4.0, 4.1, 4.2, and 5.0.

Supported Platforms

SwiftNIO aims to support all of the platforms where Swift is supported. Currently, it is developed and tested on macOS and Linux.

Compatibility

SwiftNIO follows SemVer 2.0.0 with a separate document declaring SwiftNIO's Public API.

What this means for you is that you should depend on SwiftNIO with a version range that covers everything from the minimum SwiftNIO version you require up to the next major version. In SwiftPM that can easily be done by specifying, for example, from: "2.0.0", meaning that you support SwiftNIO in every version starting from 2.0.0 up to (and excluding) 3.0.0. SemVer and SwiftNIO's Public API guarantees should result in a working program without you having to test every single version for compatibility.

Conceptual Overview

SwiftNIO is fundamentally a low-level tool for building high-performance networking applications in Swift. It particularly targets those use-cases where using a "thread-per-connection" model of concurrency is inefficient or untenable. This is a common limitation when building servers that use a large number of relatively low-utilization connections, such as HTTP servers.

To achieve its goals SwiftNIO extensively uses "non-blocking I/O": hence the name! Non-blocking I/O differs from the more common blocking I/O model because the application does not wait for data to be sent to or received from the network: instead, SwiftNIO asks the kernel to notify it when I/O operations can be performed without waiting.

SwiftNIO does not aim to provide high-level solutions like, for example, web frameworks do. Instead, SwiftNIO is focused on providing the low-level building blocks for these higher-level applications. When it comes to building a web application, most users will not want to use SwiftNIO directly: instead, they'll want to use one of the many great web frameworks available in the Swift ecosystem. Those web frameworks, however, may choose to use SwiftNIO under the covers to provide their networking support.

The following sections will describe the low-level tools that SwiftNIO provides, and provide a quick overview of how to work with them. If you feel comfortable with these concepts, then you can skip right ahead to the other sections of this README.

Basic Architecture

The basic building blocks of SwiftNIO are the following 8 types of objects:

  • EventLoopGroup
  • EventLoop
  • Channel
  • ChannelHandler
  • Bootstrap
  • ByteBuffer
  • EventLoopPromise
  • EventLoopFuture

All SwiftNIO applications are ultimately constructed of these various components.

EventLoops and EventLoopGroups

The basic I/O primitive of SwiftNIO is the event loop. The event loop is an object that waits for events (usually I/O related events, such as "data received") to happen and then fires some kind of callback when they do. In almost all SwiftNIO applications there will be relatively few event loops: usually only one or two per CPU core the application wants to use. Generally speaking event loops run for the entire lifetime of your application, spinning in an endless loop dispatching events.

Event loops are gathered together into event loop groups. These groups provide a mechanism to distribute work around the event loops. For example, when listening for inbound connections the listening socket will be registered on one event loop. However, we don't want all connections that are accepted on that listening socket to be registered with the same event loop, as that would potentially overload one event loop while leaving the others empty. For that reason, the event loop group provides the ability to spread load across multiple event loops.

In SwiftNIO today there is one EventLoopGroup implementation, and two EventLoop implementations. For production applications there is the MultiThreadedEventLoopGroup, an EventLoopGroup that creates a number of threads (using the POSIX pthreads library) and places one SelectableEventLoop on each one. The SelectableEventLoop is an event loop that uses a selector (either kqueue or epoll depending on the target system) to manage I/O events from file descriptors and to dispatch work. These EventLoops and EventLoopGroups are provided by the NIOPosix module. Additionally, there is the EmbeddedEventLoop, which is a dummy event loop that is used primarily for testing purposes, provided by the NIOEmbedded module.

EventLoops have a number of important properties. Most vitally, they are the way all work gets done in SwiftNIO applications. In order to ensure thread-safety, any work that wants to be done on almost any of the other objects in SwiftNIO must be dispatched via an EventLoop. EventLoop objects own almost all the other objects in a SwiftNIO application, and understanding their execution model is critical for building high-performance SwiftNIO applications.

Channels, Channel Handlers, Channel Pipelines, and Channel Contexts

While EventLoops are critical to the way SwiftNIO works, most users will not interact with them substantially beyond asking them to create EventLoopPromises and to schedule work. The parts of a SwiftNIO application most users will spend the most time interacting with are Channels and ChannelHandlers.

Almost every file descriptor that a user interacts with in a SwiftNIO program is associated with a single Channel. The Channel owns this file descriptor, and is responsible for managing its lifetime. It is also responsible for processing inbound and outbound events on that file descriptor: whenever the event loop has an event that corresponds to a file descriptor, it will notify the Channel that owns that file descriptor.

Channels by themselves, however, are not useful. After all, it is a rare application that doesn't want to do anything with the data it sends or receives on a socket! So the other important part of the Channel is the ChannelPipeline.

A ChannelPipeline is a sequence of objects, called ChannelHandlers, that process events on a Channel. The ChannelHandlers process these events one after another, in order, mutating and transforming events as they go. This can be thought of as a data processing pipeline; hence the name ChannelPipeline.

All ChannelHandlers are either Inbound or Outbound handlers, or both. Inbound handlers process "inbound" events: events like reading data from a socket, reading socket close, or other kinds of events initiated by remote peers. Outbound handlers process "outbound" events, such as writes, connection attempts, and local socket closes.

Each handler processes the events in order. For example, read events are passed from the front of the pipeline to the back, one handler at a time, while write events are passed from the back of the pipeline to the front. Each handler may, at any time, generate either inbound or outbound events that will be sent to the next handler in whichever direction is appropriate. This allows handlers to split up reads, coalesce writes, delay connection attempts, and generally perform arbitrary transformations of events.

In general, ChannelHandlers are designed to be highly re-usable components. This means they tend to be designed to be as small as possible, performing one specific data transformation. This allows handlers to be composed together in novel and flexible ways, which helps with code reuse and encapsulation.

ChannelHandlers are able to keep track of where they are in a ChannelPipeline by using a ChannelHandlerContext. These objects contain references to the previous and next channel handler in the pipeline, ensuring that it is always possible for a ChannelHandler to emit events while it remains in a pipeline.

SwiftNIO ships with many ChannelHandlers built in that provide useful functionality, such as HTTP parsing. In addition, high-performance applications will want to provide as much of their logic as possible in ChannelHandlers, as it helps avoid problems with context switching.

Additionally, SwiftNIO ships with a few Channel implementations. In particular, it ships with ServerSocketChannel, a Channel for sockets that accept inbound connections; SocketChannel, a Channel for TCP connections; and DatagramChannel, a Channel for UDP sockets. All of these are provided by the NIOPosix module. It also provides EmbeddedChannel, a Channel primarily used for testing, provided by the NIOEmbedded module.
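To make this concrete, here is a minimal sketch of an inbound handler (assuming the swift-nio package; the class name is ours, not part of the library) that echoes every chunk of received bytes straight back, similar in spirit to the NIOEchoServer example:

```swift
import NIOCore

// An inbound handler: it receives "read" events travelling front-to-back
// and emits "write" events travelling back-to-front.
final class EchoHandler: ChannelInboundHandler {
    // The pipeline delivers reads to this handler as ByteBuffers.
    typealias InboundIn = ByteBuffer
    typealias OutboundOut = ByteBuffer

    func channelRead(context: ChannelHandlerContext, data: NIOAny) {
        // Send the bytes back out through the outbound side of the pipeline.
        context.write(data, promise: nil)
    }

    func channelReadComplete(context: ChannelHandlerContext) {
        // Coalesce writes: flush once per read burst rather than per message.
        context.flush()
    }
}
```
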

A Note on Blocking

One of the important notes about ChannelPipelines is that they are thread-safe. This is very important for writing SwiftNIO applications, as it allows you to write much simpler ChannelHandlers in the knowledge that they will not require synchronization.

However, this is achieved by dispatching all code on the ChannelPipeline on the same thread as the EventLoop. This means that, as a general rule, ChannelHandlers must not call blocking code without dispatching it to a background thread. If a ChannelHandler blocks for any reason, all Channels attached to the parent EventLoop will be unable to progress until the blocking call completes.

This is a common concern when writing SwiftNIO applications. If it is useful to write code in a blocking style, it is highly recommended that you dispatch that work to a different thread and return to the event loop when it completes.

Bootstrap

While it is possible to configure and register Channels with EventLoops directly, it is generally more useful to have a higher-level abstraction to handle this work.

For this reason, SwiftNIO ships a number of Bootstrap objects whose purpose is to streamline the creation of channels. Some Bootstrap objects also provide other functionality, such as support for Happy Eyeballs for making TCP connection attempts.

Currently SwiftNIO ships with three Bootstrap objects in the NIOPosix module: ServerBootstrap, for bootstrapping listening channels; ClientBootstrap, for bootstrapping client TCP channels; and DatagramBootstrap for bootstrapping UDP channels.
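For illustration, here is a sketch of a ServerBootstrap (assuming the swift-nio package; the host, port, and handler choice are ours) that binds a listening channel and adds NIOCore's built-in BackPressureHandler to each accepted connection:

```swift
import NIOCore
import NIOPosix

let group = MultiThreadedEventLoopGroup(numberOfThreads: System.coreCount)

let bootstrap = ServerBootstrap(group: group)
    // Allow fast restarts of the server on the same port.
    .serverChannelOption(ChannelOptions.socketOption(.so_reuseaddr), value: 1)
    // Configure the pipeline of every accepted child channel.
    .childChannelInitializer { channel in
        channel.pipeline.addHandler(BackPressureHandler())
    }

let serverChannel = try bootstrap.bind(host: "127.0.0.1", port: 9999).wait()
print("Listening on \(serverChannel.localAddress!)")

// Block until the listening channel is closed.
try serverChannel.closeFuture.wait()
try group.syncShutdownGracefully()
```
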

ByteBuffer

The majority of the work in a SwiftNIO application involves shuffling buffers of bytes around. At the very least, data is sent and received to and from the network in the form of buffers of bytes. For this reason it's very important to have a high-performance data structure that is optimized for the kind of work SwiftNIO applications perform.

For this reason, SwiftNIO provides ByteBuffer, a fast copy-on-write byte buffer that forms a key building block of most SwiftNIO applications. This type is provided by the NIOCore module.

ByteBuffer provides a number of useful features, and in addition provides a number of hooks to use it in an "unsafe" mode. This turns off bounds checking for improved performance, at the cost of potentially opening your application up to memory correctness problems.

In general, it is highly recommended that you use the ByteBuffer in its safe mode at all times.
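A short sketch of the safe API (assuming NIOCore from the swift-nio package):

```swift
import NIOCore

// Writes append at the writer index and grow the buffer as needed.
var buffer = ByteBufferAllocator().buffer(capacity: 16)
buffer.writeString("Hello, ")
buffer.writeInteger(UInt8(33)) // "!" as a raw byte

// Reads consume from the reader index and return nil instead of trapping
// when not enough readable bytes remain.
let hello = buffer.readString(length: 7)      // Optional("Hello, ")
let bang = buffer.readInteger(as: UInt8.self) // Optional(33)
print(hello ?? "", bang ?? 0)
```
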

For more details on the API of ByteBuffer, please see our API documentation, linked below.

Promises and Futures

One major difference between writing concurrent code and writing synchronous code is that not all actions will complete immediately. For example, when you write data on a channel, it is possible that the event loop will not be able to immediately flush that write out to the network. For this reason, SwiftNIO provides EventLoopPromise<T> and EventLoopFuture<T> to manage operations that complete asynchronously. These types are provided by the NIOCore module.

An EventLoopFuture<T> is essentially a container for the return value of a function that will be populated at some time in the future. Each EventLoopFuture<T> has a corresponding EventLoopPromise<T>, which is the object that the result will be put into. When the promise is succeeded, the future will be fulfilled.

If you had to poll the future to detect when it completed that would be quite inefficient, so EventLoopFuture<T> is designed to have managed callbacks. Essentially, you can hang callbacks off the future that will be executed when a result is available. The EventLoopFuture<T> will even carefully arrange the scheduling to ensure that these callbacks always execute on the event loop that initially created the promise, which helps ensure that you don't need too much synchronization around EventLoopFuture<T> callbacks.
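As a sketch of the pattern (assuming the swift-nio package): the promise is the writable side, the future is the readable side, and callbacks attached to the future run on the event loop that created the promise:

```swift
import NIOCore
import NIOPosix

let group = MultiThreadedEventLoopGroup(numberOfThreads: 1)
let loop = group.next()

// Create the promise on an event loop; hand out its future to callers.
let promise = loop.makePromise(of: String.self)
let future: EventLoopFuture<String> = promise.futureResult

// This callback runs on `loop` once a result is available.
future.whenSuccess { value in
    print("received: \(value)")
}

// Later, typically from I/O completion code, fulfill the promise.
promise.succeed("hello")

// Off the event loop, it is safe to block waiting for the result.
let result = try future.wait()
assert(result == "hello")
try group.syncShutdownGracefully()
```
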

Another important topic for consideration is the difference between how the promise passed to close works as opposed to closeFuture on a Channel. For example, the promise passed into close will succeed after the Channel is closed down but before the ChannelPipeline is completely cleared out. This will allow you to take action on the ChannelPipeline before it is completely cleared out, if needed. If it is desired to wait for the Channel to close down and the ChannelPipeline to be cleared out without any further action, then the better option would be to wait for the closeFuture to succeed.

There are several functions for applying callbacks to EventLoopFuture<T>, depending on how and when you want them to execute. Details of these functions are left to the API documentation.

Design Philosophy

SwiftNIO is designed to be a powerful tool for building networked applications and frameworks, but it is not intended to be the perfect solution for all levels of abstraction. SwiftNIO is tightly focused on providing the basic I/O primitives and protocol implementations at low levels of abstraction, leaving more expressive but slower abstractions to the wider community to build. The intention is that SwiftNIO will be a building block for server-side applications, not necessarily the framework those applications will use directly.

Applications that need extremely high performance from their networking stack may choose to use SwiftNIO directly in order to reduce the overhead of their abstractions. These applications should be able to maintain extremely high performance with relatively little maintenance cost. SwiftNIO also focuses on providing useful abstractions for this use-case, such that extremely high performance network servers can be built directly.

The core SwiftNIO repository will contain a few extremely important protocol implementations, such as HTTP, directly in tree. However, we believe that most protocol implementations should be decoupled from the release cycle of the underlying networking stack, as the release cadence is likely to be very different (either much faster or much slower). For this reason, we actively encourage the community to develop and maintain their protocol implementations out-of-tree. Indeed, some first-party SwiftNIO protocol implementations, including our TLS and HTTP/2 bindings, are developed out-of-tree!

Documentation

Example Usage

There are currently several example projects that demonstrate how to use SwiftNIO.

To build & run them, run the following command, replacing TARGET_NAME with the folder name under ./Sources:

swift run TARGET_NAME

For example, to run NIOHTTP1Server, run the following command:

swift run NIOHTTP1Server

Getting Started

SwiftNIO primarily uses SwiftPM as its build tool, so we recommend using that as well. If you want to depend on SwiftNIO in your own project, it's as simple as adding a dependencies clause to your Package.swift:

dependencies: [
    .package(url: "https://github.com/apple/swift-nio.git", from: "2.0.0")
]

and then adding the appropriate SwiftNIO module(s) to your target dependencies. The syntax for adding target dependencies differs slightly between Swift versions. For example, if you want to depend on the NIOCore, NIOPosix and NIOHTTP1 modules, specify the following dependencies:

Swift 5.4 and newer (swift-tools-version:5.4)

dependencies: [.product(name: "NIOCore", package: "swift-nio"),
               .product(name: "NIOPosix", package: "swift-nio"),
               .product(name: "NIOHTTP1", package: "swift-nio")]

Using Xcode Package support

If your project is set up as an Xcode project and you're using Xcode 11+, you can add SwiftNIO as a dependency to your Xcode project by clicking File -> Swift Packages -> Add Package Dependency. In the upcoming dialog, please enter https://github.com/apple/swift-nio.git and click Next twice. Finally, select the targets you are planning to use (for example NIOCore, NIOHTTP1, and NIOFoundationCompat) and click Finish. You will now be able to import NIOCore (as well as all the other targets you have selected) in your project.

To work on SwiftNIO itself, or to investigate some of the demonstration applications, you can clone the repository directly and use SwiftPM to help build it. For example, you can run the following commands to compile and run the example echo server:

swift build
swift test
swift run NIOEchoServer

To verify that it is working, you can use another shell to attempt to connect to it:

echo "Hello SwiftNIO" | nc localhost 9999

If all goes well, you'll see the message echoed back to you.

To work on SwiftNIO in Xcode 11+, you can just open the Package.swift file in Xcode and use Xcode's support for SwiftPM Packages.

If you want to develop SwiftNIO with Xcode 10, you have to generate an Xcode project:

swift package generate-xcodeproj

An alternative: using docker-compose

Alternatively, you may want to develop or test with docker-compose.

First make sure you have Docker installed, next run the following commands:

docker-compose -f docker/docker-compose.yaml run test

Will create a base image with the Swift runtime and other build and test dependencies, compile SwiftNIO, and run the unit and integration tests.

docker-compose -f docker/docker-compose.yaml up echo

Will create a base image, compile SwiftNIO, and run a sample NIOEchoServer on localhost:9999. Test it by echo Hello SwiftNIO | nc localhost 9999.

docker-compose -f docker/docker-compose.yaml up http

Will create a base image, compile SwiftNIO, and run a sample NIOHTTP1Server on localhost:8888. Test it by curl http://localhost:8888.

docker-compose -f docker/docker-compose.yaml -f docker/docker-compose.2204.57.yaml run test

Will create a base image using Ubuntu 22.04 and Swift 5.7, compile SwiftNIO, and run the unit and integration tests. Files exist for other Ubuntu and Swift versions in the docker directory.

Developing SwiftNIO

Note: This section is only relevant if you would like to develop SwiftNIO yourself. You can ignore the information here if you just want to use SwiftNIO as a SwiftPM package.

For the most part, SwiftNIO development is as straightforward as any other SwiftPM project. With that said, we do have a few processes that are worth understanding before you contribute. For details, please see CONTRIBUTING.md in this repository.

Prerequisites

SwiftNIO's main branch is the development branch for the next releases of SwiftNIO 2; it is Swift 5-only.

To be able to compile and run SwiftNIO and the integration tests, you need to have a few prerequisites installed on your system.

macOS

  • Xcode 11.4 or newer, Xcode 12 recommended.

Linux

  • Swift 5.5 or newer from swift.org/download. We always recommend using the latest released version.
  • netcat (for integration tests only)
  • lsof (for integration tests only)
  • shasum (for integration tests only)

Ubuntu 18.04

# install swift tarball from https://swift.org/downloads
apt-get install -y git curl libatomic1 libxml2 netcat-openbsd lsof perl

Fedora 28+

dnf install swift-lang /usr/bin/nc /usr/bin/lsof /usr/bin/shasum

Speeding up testing

It's possible to run the test suite in parallel, which can save significant time if you have a larger multi-core machine; just add --parallel when running the tests. This can speed up the run time of the test suite by 30x or more.

swift test --parallel

Download Details:

Author: Apple
Source Code: https://github.com/apple/swift-nio 
License: Apache-2.0 license


Swift-nio: Event-driven Network Application Framework

6 Favorite PHP Libraries for Working with Event and Task Queues

In today's post we will learn about 6 Favorite PHP Libraries for Working with Event and Task Queues. 

What is a Task Queue?

A Task Queue is a lightweight, dynamically allocated queue that one or more Worker Entities poll for Tasks.

Task Queues do not have any ordering guarantees. A Task may remain in a Task Queue for some time if there is a backlog that hasn't been drained.

There are two types of Task Queues, Activity Task Queues and Workflow Task Queues.

Table of contents:

  • Bernard - A multibackend abstraction library.
  • BunnyPHP - A performant pure-PHP AMQP (RabbitMQ) sync and also async (ReactPHP) library.
  • Pheanstalk - A Beanstalkd client library.
  • PHP AMQP - A pure PHP AMQP library.
  • Tarantool Queue - PHP bindings for Tarantool Queue.
  • Thumper - A RabbitMQ pattern library.

1 - Bernard:

A multibackend abstraction library.

Bernard makes it super easy and enjoyable to do background processing in PHP. It does this by utilizing queues and long running processes. It supports normal queueing drivers but also implements simple ones with Redis and Doctrine.

Currently these are the supported backends, with more coming with each release:

  • Predis / PhpRedis
  • Amazon SQS
  • Iron MQ
  • Doctrine DBAL
  • Pheanstalk
  • PhpAmqp / RabbitMQ
  • Queue interop

Install

Via Composer

$ composer require bernard/bernard

Testing

We try to follow BDD and TDD, as such we use both phpspec and phpunit to test this library.

$ composer test

You can run the functional tests by executing:

$ composer test-functional

View on Github

2 - BunnyPHP:

A performant pure-PHP AMQP (RabbitMQ) sync and also async (ReactPHP) library.

Performant pure-PHP AMQP (RabbitMQ) sync/async (ReactPHP) library

Requirements

BunnyPHP requires PHP 7.1 and newer.

Installation

Add as Composer dependency:

$ composer require bunny/bunny:@dev

Tutorial

Connecting

When instantiating it, the BunnyPHP Client accepts an array of connection options:

$connection = [
    'host'      => 'HOSTNAME',
    'vhost'     => 'VHOST',    // The default vhost is /
    'user'      => 'USERNAME', // The default user is guest
    'password'  => 'PASSWORD', // The default password is guest
];

$bunny = new Client($connection);
$bunny->connect();

Connecting with SSL/TLS

Options for SSL/TLS connections should be specified under the ssl key:

$connection = [
    'host'      => 'HOSTNAME',
    'vhost'     => 'VHOST',    // The default vhost is /
    'user'      => 'USERNAME', // The default user is guest
    'password'  => 'PASSWORD', // The default password is guest
    'ssl'       => [
        'cafile'      => 'ca.pem',
        'local_cert'  => 'client.cert',
        'local_pk'    => 'client.key',
    ],
];

$bunny = new Client($connection);
$bunny->connect();

For options description - please see SSL context options.

Note: invalid SSL configuration will cause connection failure.

See also common configuration variants.

Publish a message

Now that we have a connection with the server we need to create a channel and declare a queue to communicate over before we can publish a message, or subscribe to a queue for that matter.

$channel = $bunny->channel();
$channel->queueDeclare('queue_name'); // Queue name

With a communication channel set up, we can now publish a message to the queue:

$channel->publish(
    $message,    // The message you're publishing as a string
    [],          // Any headers you want to add to the message
    '',          // Exchange name
    'queue_name' // Routing key, in this example the queue's name
);

Subscribing to a queue

Subscribing to a queue can be done in two ways. The first way will run indefinitely:

$channel->run(
    function (Message $message, Channel $channel, Client $bunny) {
        $success = handleMessage($message); // Handle your message here

        if ($success) {
            $channel->ack($message); // Acknowledge message
            return;
        }

        $channel->nack($message); // Mark message fail, message will be redelivered
    },
    'queue_name'
);

The other way lets you run the client for a specific amount of time consuming the queue before it stops:

$channel->consume(
    function (Message $message, Channel $channel, Client $client){
        $channel->ack($message); // Acknowledge message
    },
    'queue_name'
);
$bunny->run(12); // Client runs for 12 seconds and then stops

View on Github

3 - Pheanstalk:

A Beanstalkd client library.

Pheanstalk is a pure PHP 7.1+ client for the beanstalkd workqueue. It has been actively developed, and used in production by many, since late 2008.

Created by Paul Annesley, Pheanstalk is rigorously unit tested and written using encapsulated, maintainable object oriented design. Community feedback, bug reports and patches have led to a stable 1.0 release in 2010, a 2.0 release in 2013, and a 3.0 release in 2014.

Pheanstalk 3.0 introduces PHP namespaces, PSR-1 and PSR-2 coding standards, and PSR-4 autoloader standard.

beanstalkd up to the latest version 1.10 is supported. All commands and responses specified in the protocol documentation for beanstalkd 1.3 are implemented.

Pheanstalk 4

In 2018 Sam Mousa took on the responsibility of maintaining Pheanstalk.

Pheanstalk 4.0 drops support for older PHP versions. It contains the following changes (among other things):

  • Strict PHP type hinting
  • Value objects for Job IDs
  • Functions without side effects
  • Dropped support for persistent connections
  • Add support for multiple socket implementations (streams extension, socket extension, fsockopen)

Dropping support persistent connections

Persistent connections are a feature where a TCP connection is kept alive between different requests to reduce the overhead of TCP connection set-up. When reusing TCP connections we must always guarantee that the application protocol, in this case beanstalk's protocol, is in a proper state. This is hard, and in some cases impossible; at the very least it means we must do some checks that cause round trips. Consider for example a connection that has just sent the command PUT 0 4000. The beanstalk server is now going to read 4000 bytes, but if the PHP script crashes during this write, the next request gets assigned this TCP socket. Previously, Pheanstalk reset the connection to a known state by subscribing to the default tube with use default. Since the beanstalk server is expecting 4000 bytes, it will just write this command into the job and wait for more bytes.

To prevent these kinds of issues the simplest solution is to not use persistent connections.

Dropped connection handling

Depending on the socket implementation used, we might not be able to enable TCP keepalive. Without TCP keepalive there is no way for us to detect dropped connections; the underlying OS may wait up to 15 minutes to decide that a TCP connection on which no packets are being sent is disconnected. When using a socket implementation that supports read timeouts, like SocketSocket (which uses the socket extension), we use read and write timeouts to detect broken connections. The issue with the beanstalk protocol is that it allows no packets to be sent for extended periods of time. Solutions are to either catch these connection exceptions and reconnect, or to use reserveWithTimeout() with a timeout that is less than the read/write timeouts.

Example code for a job runner could look like this (this is real production code):

while(true) {
    $job = $beanstalk->reserveWithTimeout(50);
    $this->stdout('.', Console::FG_CYAN);
    if (isset($job)) {
        $this->ensureDatabase($db);
        try {
            /** @var HookTask $task */
            $task = $taskFactory->createFromJson($job->getData());

            $commandBus->handle($task);
            $this->stdout("Deleting job: {$job->getId()}\n", Console::FG_GREEN);
            $beanstalk->delete($job);
        } catch (\Throwable $t) {
            \Yii::error($t);
            $this->stderr("\n{$t->getMessage()}\n", Console::FG_RED);
            $this->stderr("{$t->getTraceAsString()}\n", Console::FG_RED);

            $this->stdout("Burying job: {$job->getId()}\n", Console::FG_YELLOW);
            $beanstalk->bury($job);
        }
    }
}

Here connection errors will cause the process to exit (and be restarted by a task manager).

Functions with side effects

In version 4, functions with side effects have been removed. Functions like putInTube internally did several things:

  1. Switch to the tube
  2. Put the job in the new tube

In this example, the tube changes, meaning that the connection is now in a different state. This is not intuitive, and it forces any user of the connection to always switch or check the current tube. Another issue with this approach is that it is harder to deal with errors: if an exception occurs, it is unclear whether we did or did not switch tube.

Installation with Composer

Install pheanstalk as a dependency with composer:

composer require pda/pheanstalk

View on Github

4 - PHP AMQP:

A pure PHP AMQP library.

This library is a pure PHP implementation of the AMQP 0-9-1 protocol. It's been tested against RabbitMQ.

The library was used for the PHP examples of RabbitMQ in Action and the official RabbitMQ tutorials.

Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms.

Project Maintainers

Thanks to videlalvaro and postalservice14 for creating php-amqplib.

The package is now maintained by Ramūnas Dronga, Luke Bakken and several VMware engineers working on RabbitMQ.

Setup

Ensure you have composer installed, then run the following command:

$ composer require php-amqplib/php-amqplib

That will fetch the library and its dependencies inside your vendor folder. Then you can add the following to your .php files in order to use the library:

require_once __DIR__.'/vendor/autoload.php';

Then you need to use the relevant classes, for example:

use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Message\AMQPMessage;

Usage

With RabbitMQ running open two Terminals and on the first one execute the following commands to start the consumer:

$ cd php-amqplib/demo
$ php amqp_consumer.php

Then on the other Terminal do:

$ cd php-amqplib/demo
$ php amqp_publisher.php some text to publish

You should see the message arriving at the process on the other Terminal.

Then, to stop the consumer, send it the quit message:

$ php amqp_publisher.php quit

If you need to listen to the sockets used to connect to RabbitMQ then see the example in the non blocking consumer.

$ php amqp_consumer_non_blocking.php

View on Github

5 - Tarantool Queue:

PHP bindings for Tarantool Queue.

Tarantool is a NoSQL database running in a Lua application server. It integrates Lua modules, called LuaRocks. This package provides PHP bindings for the Tarantool Queue LuaRock.

Installation

The recommended way to install the library is through Composer:

composer require tarantool/queue

Before start

In order to use the queue, you first need to make sure that your Tarantool instance is configured, up, and running. The minimal required configuration might look like this:

-- queues.lua

box.cfg {listen = 3301}

queue = require('queue')
queue.create_tube('foobar', 'fifottl', {if_not_exists = true})

You can read more about the box configuration in the official Tarantool documentation. More information on queue configuration can be found here.

To start the instance you need to copy (or symlink) the queues.lua file into the /etc/tarantool/instances.enabled directory and run the following command:

sudo tarantoolctl start queues

Working with queue

Once you have your instance running, you can start by creating a queue object with the queue (tube) name you defined in the Lua script:

use Tarantool\Queue\Queue;

...

$queue = new Queue($client, 'foobar');

where $client is an instance of Tarantool\Client\Client from the tarantool/client package.

Data types

Under the hood Tarantool uses MessagePack binary format to serialize/deserialize data being stored in a queue. It can handle most of the PHP data types (except resources and closures) without any manual pre- or post-processing:

$queue->put('foo');
$queue->put(true);
$queue->put(42);
$queue->put(4.2);
$queue->put(['foo' => ['bar' => ['baz' => null]]]);
$queue->put(new MyObject());

To learn more about object serialization, please follow this link.

Tasks

Most of the Queue API methods return a Task object containing the following getters:

Task::getId()
Task::getState() // States::READY, States::TAKEN, States::DONE, States::BURIED or States::DELAYED
Task::getData()

And some sugar methods:

Task::isReady()
Task::isTaken()
Task::isDone()
Task::isBuried()
Task::isDelayed()

View on Github

6 - Thumper:

A RabbitMQ pattern library.

Thumper is a PHP library that aims to abstract several messaging patterns that can be implemented over RabbitMQ.

Inside the examples folder you can see how to implement RPC, parallel processing, simple queue servers and pub/sub.

Install

Via Composer

$ composer require php-amqplib/thumper

About the Examples

Each example has a README.md file that shows how to execute it. All the examples expect that RabbitMQ is running; they have been tested using RabbitMQ 2.1.1.

For example, publishing a message to RabbitMQ is as simple as this:

	$producer = new Thumper\Producer($connection);
	$producer->setExchangeOptions(array('name' => 'hello-exchange', 'type' => 'direct'));
	$producer->publish($argv[1]);

And then to consume them on the other side of the wire:

	$myConsumer = function($msg)
	{
	  echo $msg, "\n";
	};

	$consumer = new Thumper\Consumer($connection);
	$consumer->setExchangeOptions(array('name' => 'hello-exchange', 'type' => 'direct'));
	$consumer->setQueueOptions(array('name' => 'hello-queue'));
	$consumer->setCallback($myConsumer); //myConsumer could be any valid PHP callback
	$consumer->consume(5); //5 is the number of messages to consume.

Queue Server

This example illustrates how to create a producer that will publish jobs into a queue. Those jobs will be processed later by a consumer, or several of them.

RPC

This example illustrates how to do RPC over RabbitMQ. We have an RPC client that sends requests to a server, which returns the number of characters in the provided strings. The server code is inside the parallel_processing folder.

Parallel Processing

This example is based on the RPC one. In this case it shows how to achieve parallel execution with PHP. Let's say that you have to execute two expensive tasks, one taking 5 seconds and the other 10. Instead of waiting 15 seconds, we can send the requests in parallel and then wait for the replies, which should now take 10 seconds (the time of the slowest task).
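
The timing claim above can be sketched in JavaScript (for illustration only; Thumper itself is PHP): firing both requests before waiting on either reply makes the total wait equal to the slowest task, not the sum.

```javascript
// Two simulated "expensive tasks": each resolves after the given delay.
const task = (ms, name) =>
  new Promise(resolve => setTimeout(() => resolve(name), ms));

async function runParallel() {
  const start = Date.now();
  // Send both requests first, then wait for both replies.
  const results = await Promise.all([task(20, 'fast'), task(40, 'slow')]);
  const elapsed = Date.now() - start;
  return { results, elapsed };
}

runParallel().then(({ results, elapsed }) => {
  // elapsed is ~40 ms (the slowest task), not ~60 ms.
  console.log(results.join(', '), `${elapsed}ms`);
});
```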

Topic

In this case we can see how to achieve publish/subscribe with RabbitMQ. The example is about logging. We can log with several levels and subjects and then have consumers that listen to different log levels act accordingly.

View on Github

Thank you for following this article.

Related videos:

Laravel & RabbitMQ queues

#php #event #task #queues 


10 Favorite PHP Libraries for Event

In today's post we will learn about 10 of the best PHP libraries for working with events.

What is an event?

Events are actions performed by users within an app, such as completing a level or making a purchase. Any and all actions within apps can be defined as an event. These can be tracked with your mobile measurement partner (MMP) to learn how users interact with your app.

Table of contents:

  • Amp - An event driven non-blocking I/O library.
  • Broadway - An event source and CQRS library.
  • CakePHP Event - An event dispatcher library.
  • Elephant.io - Yet another web socket library.
  • Evenement - An event dispatcher library.
  • Event - An event library with a focus on domain events.
  • Pawl - An asynchronous web socket client.
  • Prooph Event Store - An event source component to persist event messages
  • PHP Defer - Golang's defer statement for PHP.
  • Ratchet - A web socket library.

1 - Amp:

An event driven non-blocking I/O library.

Amp is a non-blocking concurrency framework for PHP. It provides an event loop, promises and streams as a base for asynchronous programming.

Promises in combination with generators are used to build coroutines, which allow writing asynchronous code just like synchronous code, without any callbacks.
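
The coroutine mechanism can be illustrated in JavaScript (the idea is the same as Amp's PHP generators, but none of this is Amp's actual API): a small driver resumes a generator each time a yielded promise resolves, so the generator body reads like synchronous code.

```javascript
// Drive a generator: resume it with each resolved value until it returns.
function run(genFn) {
  const gen = genFn();
  return new Promise((resolve, reject) => {
    function step(value) {
      const { value: yielded, done } = gen.next(value);
      if (done) return resolve(yielded);
      Promise.resolve(yielded).then(step, reject);
    }
    step(undefined);
  });
}

// Usage: each `yield` suspends until the promise settles, without callbacks.
run(function* () {
  const a = yield Promise.resolve(2);
  const b = yield Promise.resolve(3);
  return a * b;
}).then(result => console.log(result)); // logs 6
```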

Installation

This package can be installed as a Composer dependency.

composer require amphp/amp

This installs the basic building blocks for asynchronous applications in PHP. We offer a lot of repositories building on top of this repository.

Documentation

Documentation can be found on amphp.org as well as in the ./docs directory. Each package has its own ./docs directory.

Requirements

This package requires PHP 7.0 or later. Many of the other packages raised their requirement to PHP 7.1. No extensions required!

Optional Extensions

Extensions are only needed if your app requires a very high number of concurrent socket connections; by default this limit is usually configured at around 1024 file descriptors.

Examples

Examples can be found in the ./examples directory of this repository as well as in the ./examples directory of our other libraries.

View on Github

2 - Broadway:

An event source and CQRS library.

Broadway is a project providing infrastructure and testing helpers for creating CQRS and event sourced applications. Broadway tries hard to not get in your way. The project contains several loosely coupled components that can be used together to provide a full CQRS/ES experience.

Installation

$ composer require broadway/broadway

Documentation

You can find detailed documentation of the Broadway bundle on broadway.github.io/broadway.

Feel free to join #qandidate on freenode with questions and remarks!

Acknowledgements

The broadway project is heavily inspired by other open source projects such as AggregateSource, Axon Framework and Ncqrs.

We would also like to thank Benjamin, Marijn and Mathias for the conversations we had along the way that helped us shape the broadway project; in particular, Marijn, for giving us access to his in-house developed CQRS framework.

View on Github

3 - CakePHP Event:

An event dispatcher library.

This library emulates several aspects of how events are triggered and managed in popular JavaScript libraries such as jQuery: An event object is dispatched to all listeners. The event object holds information about the event, and provides the ability to stop event propagation at any point. Listeners can register themselves or can delegate this task to other objects and have the chance to alter the state and the event itself for the rest of the callbacks.

Usage

Listeners need to be registered into a manager and events can then be triggered so that listeners can be informed of the action.

use Cake\Event\Event;
use Cake\Event\EventDispatcherTrait;

class Orders
{

	use EventDispatcherTrait;

	public function placeOrder($order)
	{
		$this->doStuff();
		$event = new Event('Orders.afterPlace', $this, [
			'order' => $order
		]);
		$this->getEventManager()->dispatch($event);
	}
}

$orders = new Orders();
$orders->getEventManager()->on(function ($event) {
	// Do something after the order was placed
	...
}, 'Orders.afterPlace');

$orders->placeOrder($order);

The above code allows you to easily notify the other parts of the application that an order has been created. You can then do tasks like send email notifications, update stock, log relevant statistics and other tasks in separate objects that focus on those concerns.

View on Github

4 - Elephant.io:

Yet another web socket library.

        ___     _,.--.,_         Elephant.io is a rough websocket client
      .-~   ~--"~-.   ._ "-.     written in PHP. Its goal is to ease the
     /      ./_    Y    "-. \    communications between your PHP Application and
    Y       :~     !         Y   a real-time server.
    lq p    |     /         .|
 _   \. .-, l    /          |j   Requires PHP 5.4 and openssl, licensed under
()\___) |/   \_/";          !    the MIT License.
 \._____.-~\  .  ~\.      ./
            Y_ Y_. "vr"~  T      Built-in Engines :
            (  (    |L    j      - Socket.io 2.x
            [nn[nn..][nn..]      - Socket.io 1.x
          ~~~~~~~~~~~~~~~~~~~    - Socket.io 0.x (courtesy of @kbu1564)

NOTICE

As the maintainers no longer use this library, support has sadly been dropped. But rejoice: a new repo is now maintained in its own organization: https://github.com/ElephantIO/elephant.io

Installation

We suggest using Composer: php composer.phar require wisembly/elephant.io. Alternatively, you can check the releases page or use the git clone URLs.

Documentation

The docs are not written yet, but you should check the example directory to get a basic idea of how this library is meant to work.

View on Github

5 - Evenement:

An event dispatcher library.

It has the same design goals as Silex and Pimple, to empower the user while staying concise and simple.

It is very strongly inspired by the EventEmitter API found in node.js.

Fetch

The recommended way to install Événement is through composer.

Just create a composer.json file for your project:

{
    "require": {
        "evenement/evenement": "^3.0 || ^2.0"
    }
}

Note: The 3.x version of Événement requires PHP 7 and the 2.x version requires PHP 5.4. If you are using PHP 5.3, please use the 1.x version:

{
    "require": {
        "evenement/evenement": "^1.0"
    }
}

And run these two commands to install it:

$ curl -s http://getcomposer.org/installer | php
$ php composer.phar install

Now you can add the autoloader, and you will have access to the library:

<?php
require 'vendor/autoload.php';

Usage

Creating an Emitter

<?php
$emitter = new Evenement\EventEmitter();

Adding Listeners

<?php
$emitter->on('user.created', function (User $user) use ($logger) {
    $logger->log(sprintf("User '%s' was created.", $user->getLogin()));
});

Removing Listeners

<?php
$emitter->off('user.created', function (User $user) use ($logger) {
    $logger->log(sprintf("User '%s' was created.", $user->getLogin()));
});

Emitting Events

<?php
$emitter->emit('user.created', [$user]);
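
Since Événement is modeled on node.js's EventEmitter, the on/off/emit flow can be sketched with a minimal JavaScript emitter (an illustration of the pattern, not Événement's own code). One caveat worth noting: emitters typically match listeners by identity when removing them, so keep a reference to the exact closure you registered.

```javascript
// Minimal emitter sketch: listeners stored per event name, removed by identity.
class Emitter {
  constructor() { this.listeners = new Map(); }
  on(event, fn) {
    if (!this.listeners.has(event)) this.listeners.set(event, []);
    this.listeners.get(event).push(fn);
  }
  off(event, fn) {
    const fns = this.listeners.get(event) || [];
    const i = fns.indexOf(fn); // identity match: must be the registered function
    if (i !== -1) fns.splice(i, 1);
  }
  emit(event, ...args) {
    for (const fn of this.listeners.get(event) || []) fn(...args);
  }
}

// Usage: keep a reference to the listener so it can be removed later.
const emitter = new Emitter();
const log = [];
const onCreated = user => log.push(`User '${user}' was created.`);
emitter.on('user.created', onCreated);
emitter.emit('user.created', 'alice');
emitter.off('user.created', onCreated);
emitter.emit('user.created', 'bob'); // no longer logged
```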

Tests

$ ./vendor/bin/phpunit

View on Github

6 - Event:

An event library with a focus on domain events.

Installation

composer require league/event

Usage

Step 1: Create an event dispatcher

use League\Event\EventDispatcher;

$dispatcher = new EventDispatcher();

For more information about setting up the dispatcher, view the documentation about dispatcher setup.

Step 2: Subscribe to an event

Listeners can subscribe to events with the dispatcher.

$dispatcher->subscribeTo($eventIdentifier, $listener);

For more information about subscribing, view the documentation about subscribing to events.

Step 3: Dispatch an event

Events can be dispatched by the dispatcher.

$dispatcher->dispatch($event);

For more information about dispatching, view the documentation about dispatching events.

View on Github

7 - Pawl:

An asynchronous web socket client.

Install via composer:

composer require ratchet/pawl

Usage

Pawl as a standalone app: Connect to an echo server, send a message, display output, close connection:

<?php

require __DIR__ . '/vendor/autoload.php';

\Ratchet\Client\connect('wss://echo.websocket.org:443')->then(function($conn) {
    $conn->on('message', function($msg) use ($conn) {
        echo "Received: {$msg}\n";
        $conn->close();
    });

    $conn->send('Hello World!');
}, function ($e) {
    echo "Could not connect: {$e->getMessage()}\n";
});

Classes

There are 3 primary classes to be aware of and use in Pawl:

Connector:

Makes HTTP requests to servers, returning a promise that, if successful, will resolve to a WebSocket object. A connector is configured via its constructor and a request is made by invoking the class. Multiple connections can be established through a single connector. The invoke method has 3 parameters:

  • $url: String; A valid uri string (starting with ws:// or wss://) to connect to (also accepts PSR-7 Uri object)
  • $subProtocols: Array; An optional indexed array of WebSocket sub-protocols to negotiate with the server. If any are provided, the connection will fail unless the client and server can agree on one
  • $headers: Array; An optional associative array of additional headers to use when initiating the handshake. A common header to set is Origin

WebSocket:

This is the object used to interact with a WebSocket server. It has two methods: send and close. It has two public properties: request and response which are PSR-7 objects representing the client and server side HTTP handshake headers used to establish the WebSocket connection.

Message:

This is the object received from a WebSocket server. It has a __toString method, which is usually how you will want to access the received data. If you need to do binary messaging, you will most likely need to use the methods on the object.

View on Github

8 - Prooph Event Store:

An event source component to persist event messages

Installation

You can install prooph/event-store via composer by adding "prooph/event-store": "dev-master" as a requirement to your composer.json.

Available persistent implementations

Documentation

See: https://github.com/prooph/documentation

Will be published on the website soon.

Contribute

Please feel free to fork and extend existing plugins or add new ones, and send a pull request with your changes! To establish a consistent code quality, please provide unit tests for all your changes and adapt the documentation where necessary.

Version Guidance

Version   Status        PHP Version   Support Until
5.x       EOL           >= 5.5        EOL
6.x       Maintained    >= 5.5        3 Dec 2017
7.x       Latest        >= 7.1        active
8.x       Development   >= 7.4        active

View on Github

9 - PHP Defer:

Golang's defer statement for PHP.

The defer statement originally comes from Golang. This library allows you to use the defer functionality in your PHP code.

Usage

<?php

defer($context, $callback);

defer requires two parameters: $context and $callback.

  1. $context - unused in your app, required to achieve the "defer" effect. I recommend always using $_.
  2. $callback - a callback which is executed after the surrounding function returns.

Examples

Defer the execution of a code

<?php

function helloGoodbye()
{
    defer($_, function () {
        echo "goodbye\n";
    });

    defer($_, function () {
        echo "...\n";
    });

    echo "hello\n";
}

echo "before hello\n";
helloGoodbye();
echo "after goodbye\n";

// Output:
//
// before hello
// hello
// ...
// goodbye
// after goodbye

Defer and exceptions

<?php

function throwException()
{
    defer($_, function () {
        echo "after exception\n";
    });

    echo "before exception\n";

    throw new \Exception('My exception');
}

try {
    throwException();
} catch (\Exception $e) {
    echo "exception has been caught\n";
}

// Output:
//
// before exception
// after exception
// exception has been caught
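
The behavior in both examples, LIFO execution at function exit even when an exception is thrown, can be sketched in JavaScript with try/finally (a hypothetical analogue for illustration, not the php-defer implementation):

```javascript
// Run fn with a `defer` callback-collector; registered callbacks run LIFO
// when fn exits, whether it returns normally or throws.
function withDefer(fn) {
  const deferred = [];
  try {
    return fn(cb => deferred.push(cb));
  } finally {
    while (deferred.length) deferred.pop()(); // last registered runs first
  }
}

const output = [];
withDefer(defer => {
  defer(() => output.push('goodbye'));
  defer(() => output.push('...'));
  output.push('hello');
});
console.log(output); // [ 'hello', '...', 'goodbye' ]
```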

Installation

PHP Defer supports all PHP versions from 5.3 to 8.0. The following command will install the latest possible version of PHP Defer for your PHP interpreter.

composer require "php-defer/php-defer:^3.0|^4.0|^5.0"

View on Github

10 - Ratchet:

A web socket library.

A PHP library for asynchronously serving WebSockets. Build up your application through simple interfaces and re-use your application without changing any of its code just by combining different components.

Requirements

Shell access is required and root access is recommended. To avoid proxy/firewall blockage it's recommended that WebSockets be requested on port 80 or 443 (SSL), which requires root access. To do this alongside your existing web stack, you can either use a reverse proxy or two separate machines. You can find more details in the server conf docs.

A quick example

<?php
use Ratchet\MessageComponentInterface;
use Ratchet\ConnectionInterface;

// Make sure composer dependencies have been installed
require __DIR__ . '/vendor/autoload.php';

/**
 * chat.php
 * Send any incoming messages to all connected clients (except sender)
 */
class MyChat implements MessageComponentInterface {
    protected $clients;

    public function __construct() {
        $this->clients = new \SplObjectStorage;
    }

    public function onOpen(ConnectionInterface $conn) {
        $this->clients->attach($conn);
    }

    public function onMessage(ConnectionInterface $from, $msg) {
        foreach ($this->clients as $client) {
            if ($from != $client) {
                $client->send($msg);
            }
        }
    }

    public function onClose(ConnectionInterface $conn) {
        $this->clients->detach($conn);
    }

    public function onError(ConnectionInterface $conn, \Exception $e) {
        $conn->close();
    }
}

// Run the server application through the WebSocket protocol on port 8080
$app = new Ratchet\App('localhost', 8080);
$app->route('/chat', new MyChat, array('*'));
$app->route('/echo', new Ratchet\Server\EchoServer, array('*'));
$app->run();

Run the server:

$ php chat.php

Then some JavaScript in the browser:

var conn = new WebSocket('ws://localhost:8080/echo');
conn.onmessage = function(e) { console.log(e.data); };
conn.onopen = function(e) { conn.send('Hello Me!'); };

View on Github

Thank you for following this article.

Related videos:

PHP Event Calendar using FullCalendar JS Library

#php #event 


Discrete Event Process Oriented Simulation Framework Written in Julia

SimJulia

A discrete event process oriented simulation framework written in Julia inspired by the Python library SimPy.

Installation

SimJulia.jl is a registered package, and is installed by running

julia> Pkg.add("SimJulia")

Contributing

  • To discuss problems or feature requests, file an issue. For bugs, please include as much information as possible, including operating system, Julia version, and versions of the dependencies: DataStructures and ResumableFunctions.
  • To contribute, make a pull request. Contributions should include tests for any new features/bug fixes.

Release Notes

  • v0.8.2 (2021)
    • implementation of Store based on a Dict
  • v0.8.1 (2021)
    • some minor bug fixes
    • uses ResumableFunctions v0.6 or higher
  • v0.8 (2019)
    • adds support for Julia v1.2.
  • v0.7 (2018)
    • adds support for Julia v1.0
  • v0.6 (2018)
    • adds support for Julia v0.7.
    • the @oldprocess macro and the produce / consume functions are removed because they are no longer supported.
  • v0.5 (2018)
    • The old way of making processes is deprecated in favor of the semi-coroutine approach as implemented in ResumableFunctions. The @process macro replaces the @coroutine macro. The old @process macro is temporarily renamed @oldprocess and will be removed when the infrastructure supporting the produce and the consume functions is no longer available in Julia. (DONE)
    • This version no longer integrates a continuous time solver. A continuous simulation framework based on DISCO and inspired by the standalone QSS solver, using SimJulia as its discrete-event engine, can be found in the repository QuantizedStateSystems (WIP).
    • Documentation is automated with Documenter.jl (WIP: Overview and Tutorial OK).
  • v0.4.1 (2017)
    • the @resumable and @yield macros are put in a separate package, ResumableFunctions.
    • Users have to take into account the following syntax change: @yield return arg is replaced by @yield arg.
  • v0.4 (2017) only supports Julia v0.6 and above. It is a complete rewrite: more julian and less pythonic. The discrete event features are on par with v0.3 (SimPy v3) and following features are added:
    • Scheduling of events can be done with Base.Dates.Datetime and Base.Dates.Period
    • Two ways of making Processes are provided:
      • using the existing concept of Tasks
      • using a novel finite-state-machine approach
    • A continuous time solver based on the standalone QSS solver is implemented. Only non-stiff systems can be solved efficiently.
  • v0.3 (2015) synchronizes the API with SimPy v3 and is Julia v0.3, v0.4 and v0.5 compatible:
    • Documentation is available at readthedocs.
    • The continuous time solver is not implemented.
  • v0.2 (2014) introduces a continuous time solver inspired by the Simula library DISCO and is Julia v0.2 and v0.3 compatible.
  • v0.1 (2013) is a Julia clone of SimPy v2 and is Julia v0.2 compatible.

Todo

  • Transparent statistics gathering for resources.
  • Update of documentation.

Documentation

Download Details:

Author: BenLauwens
Source Code: https://github.com/BenLauwens/SimJulia.jl 
License: MIT license

#julia #event #framework 

Lawson Wehner

1660901542

The `TypedEventNotifier` Library Allows Notifying Listeners

The TypedEventNotifier library allows notifying listeners with an object. Listeners can subscribe to only a specific type or group of objects.

Installation

Add to pubspec.yaml:

dependencies:
  typed_event_notifier: ... # latest package version

Usage

See the full example in the /example folder:

import 'package:typed_event_notifier/typed_event_notifier.dart';


/// Class [ExampleNotifier].
/// 
/// The example of notifier.
/// It can send notifications to listeners with an object
/// and notify listeners if they are registered for this object type
/// or extended objects.
class ExampleNotifier extends TypedEventNotifier<Event> {
  /// Create [ExampleNotifier] instance.
  ExampleNotifier();

  int _currentPage = 0;
  final Set<int> _loadedPages = <int>{};

  /// Will notify listeners with [CurrentPageChangedEvent] event.
  set currentPage(int index) {
    _currentPage = index;
    notifyListeners(CurrentPageChangedEvent(currentPage: _currentPage));
  }

  /// Will notify listeners with [PagesLoadedEvent] event.
  set loadedPages(Set<int> set) {
    _loadedPages.addAll(set);
    notifyListeners(PagesLoadedEvent(pages: set));
  }
}


// Part of the example: a listener for the `current page changed` event only.
class _CurrentPageOnlyListenerState extends State<CurrentPageOnlyListener> {
  String message = 'CurrentPageOnly: empty';

  // Will receive events only with CurrentPageChangedEvent type.
  void currentPageChanged(CurrentPageChangedEvent event) {
    setState(() {
      message = 'CurrentPageOnly: now current page is ${event.currentPage}';
    });
  }

  @override
  void initState() {
    widget.notifier.addListener(currentPageChanged);
    super.initState();
  }

  @override
  void dispose() {
    widget.notifier.removeListener(currentPageChanged);
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    return Text(message);
  }

}

// Part of the example: a listener for any event.
class _AnyListenerState extends State<AnyListener> {
  String message = 'Any: empty';

  // Will receive events with CurrentPageChangedEvent and PagesLoadedEvent type.
  void any(Event event) {
    if (event is CurrentPageChangedEvent) {
      setState(() {
        message = 'Any: now current page is ${event.currentPage}';
      });
    }
    if (event is PagesLoadedEvent) {
      setState(() {
        message = 'Any: new loaded pages is ${event.pages}';
      });
    }
  }

  @override
  void initState() {
    widget.notifier.addListener(any);
    super.initState();
  }

  @override
  void dispose() {
    widget.notifier.removeListener(any);
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    return Text(message);
  }
}

// The example events, which will be sent through the notifier.
// They share an abstract base class (used as the parent type)
// that the concrete event types extend;
// for example, two types with different content.
/// Class [Event]. 
abstract class Event {
  /// Create [Event] instance.
  Event();
}

/// Class [CurrentPageChangedEvent].
class CurrentPageChangedEvent extends Event {
  /// Index of current page.
  final int currentPage;

  /// Create [CurrentPageChangedEvent] instance.
  CurrentPageChangedEvent({
    required this.currentPage,
  }) : super();
}

/// Class [PagesLoadedEvent].
class PagesLoadedEvent extends Event {
  /// Indexes of loaded pages.
  final Set<int> pages;

  /// Create [PagesLoadedEvent] instance.
  PagesLoadedEvent({
    required this.pages,
  }) : super();
}

Use this package as a library

Depend on it

Run this command:

With Flutter:

 $ flutter pub add typed_event_notifier

This will add a line like this to your package's pubspec.yaml (and run an implicit flutter pub get):

dependencies:
  typed_event_notifier: ^0.0.2

Alternatively, your editor might support flutter pub get. Check the docs for your editor to learn more.

Import it

Now in your Dart code, you can use:

import 'package:typed_event_notifier/typed_event_notifier.dart';

example/lib/main.dart

import 'dart:math';

import 'package:flutter/foundation.dart';
import 'package:flutter/material.dart';
import 'package:typed_event_notifier/typed_event_notifier.dart';

void main() {
  runApp(const App());
}

/// Example app.
class App extends StatelessWidget {
  /// Create [App] instance.
  const App({Key? key}) : super(key: key);

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Flutter Demo',
      theme: ThemeData(
        primarySwatch: Colors.blue,
      ),
      home: MyHomePage(
        title: 'Event Notifier Demo',
        notifier: notifier,
      ),
    );
  }
}

/*
The example events, which will be sent through the notifier.
They share an abstract base class (used as the parent type)
that the concrete event types extend;
for example, two types with different content.
 */

/// Class [Event].
abstract class Event {
  /// Create [Event] instance.
  Event();
}

/// Class [CurrentPageChangedEvent].
class CurrentPageChangedEvent extends Event {
  /// Index of current page.
  final int currentPage;

  /// Create [CurrentPageChangedEvent] instance.
  CurrentPageChangedEvent({
    required this.currentPage,
  }) : super();
}

/// Class [PagesLoadedEvent].
class PagesLoadedEvent extends Event {
  /// Indexes of loaded pages.
  final Set<int> pages;

  /// Create [PagesLoadedEvent] instance.
  PagesLoadedEvent({
    required this.pages,
  }) : super();
}

/*
The example of notifier.
It can send notifications to listeners with an object
and notify listeners if they are registered for this object type
or extended objects.
 */

/// Instance of the demo.
final ExampleNotifier notifier = ExampleNotifier();

/// Class [ExampleNotifier]
class ExampleNotifier extends TypedEventNotifier<Event> {
  /// Create [ExampleNotifier] instance.
  ExampleNotifier();

  int _currentPage = 0;

  /// Current index of page.
  int get currentPage => _currentPage;

  set currentPage(int index) {
    _currentPage = index;
    notifyListeners(CurrentPageChangedEvent(currentPage: currentPage));
  }

  final Set<int> _loadedPages = <int>{};

  /// List of indexes of loaded pages.
  List<int> get loadedPages => _loadedPages.toList(growable: false);

  set loadedPages(List<int> list) {
    final Set<int> loadedPages = list.toSet();
    _loadedPages.addAll(loadedPages);
    notifyListeners(PagesLoadedEvent(pages: loadedPages));
  }
}

/*
The example of listener on `current page changed` event only.
 */

/// Class [CurrentPageOnlyListener].
class CurrentPageOnlyListener extends StatefulWidget {
  /// Create [CurrentPageOnlyListener] instance.
  const CurrentPageOnlyListener({
    required this.notifier,
    Key? key,
  }) : super(key: key);

  /// Notifier.
  final ExampleNotifier notifier;

  @override
  State<CurrentPageOnlyListener> createState() =>
      _CurrentPageOnlyListenerState();
}

class _CurrentPageOnlyListenerState extends State<CurrentPageOnlyListener> {
  String message = 'CurrentPageOnly: empty';

  void currentPageChanged(CurrentPageChangedEvent event) {
    setState(() {
      message = 'CurrentPageOnly: now current page is ${event.currentPage}';
    });
  }

  @override
  void initState() {
    widget.notifier.addListener(currentPageChanged);
    super.initState();
  }

  @override
  void dispose() {
    widget.notifier.removeListener(currentPageChanged);
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    return Text(message);
  }
}

/*
The example of listener on any event.
 */

/// Class [AnyListener].
class AnyListener extends StatefulWidget {
  /// Create [AnyListener] instance.
  const AnyListener({
    required this.notifier,
    Key? key,
  }) : super(key: key);

  /// Notifier.
  final ExampleNotifier notifier;

  @override
  State<AnyListener> createState() => _AnyListenerState();
}

class _AnyListenerState extends State<AnyListener> {
  String message = 'Any: empty';

  void any(Event event) {
    if (event is CurrentPageChangedEvent) {
      setState(() {
        message = 'Any: now current page is ${event.currentPage}';
      });
    }
    if (event is PagesLoadedEvent) {
      setState(() {
        message = 'Any: new loaded pages is ${event.pages}';
      });
    }
  }

  @override
  void initState() {
    widget.notifier.addListener(any);
    super.initState();
  }

  @override
  void dispose() {
    widget.notifier.removeListener(any);
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    return Text(message);
  }
}

/// Class [MyHomePage].
class MyHomePage extends StatelessWidget {
  /// Create [MyHomePage] instance.
  const MyHomePage({
    required this.title,
    required this.notifier,
    Key? key,
  }) : super(key: key);

  /// Title of homepage.
  final String title;

  /// Notifier.
  final ExampleNotifier notifier;

  void _setNewCurrentPage() {
    final Random random = Random();
    notifier.currentPage = random.nextInt(100);
  }

  void _setNewLoadedPages() {
    final Random random = Random();
    notifier.loadedPages = <int>[
      random.nextInt(100),
      random.nextInt(100),
      random.nextInt(100)
    ];
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: Text(title),
      ),
      body: Center(
        child: Column(
          mainAxisAlignment: MainAxisAlignment.center,
          children: <Widget>[
            CurrentPageOnlyListener(notifier: notifier),
            const SizedBox(height: 10),
            AnyListener(notifier: notifier),
            const SizedBox(height: 40),
            const Text(
              'You can push the buttons to notify listeners.',
            ),
            const SizedBox(height: 20),
            ElevatedButton(
              onPressed: _setNewCurrentPage,
              child: const Text('New Current Page'),
            ),
            const SizedBox(height: 10),
            ElevatedButton(
              onPressed: _setNewLoadedPages,
              child: const Text('New Loaded Pages List'),
            ),
          ],
        ),
      ),
    );
  }
}

Download Details:

Author: EvGeniyLell
Source Code: https://github.com/EvGeniyLell/typed_event_notifier 
License: MIT license

#flutter #dart #type #event 

Dexter Goodwin

1660469700

Loopbench: Benchmark Your Event Loop

loopbench

Benchmark your event loop, extracted from hapi, hoek, heavy and boom.

Install

To install loopbench, simply use npm:

npm i loopbench --save

Example

See example.js.

API


loopbench([opts])

Creates a new instance of loopbench.

Options:

  • sampleInterval: the interval at which the event loop should be sampled, defaults to 5.
  • limit: the maximum amount of delay that is tolerated before overLimit becomes true and the load event is emitted, defaults to 42.

Events:

  • load, emitted when instance.delay > instance.limit
  • unload, emitted when overLimit goes from true to false

instance.delay

The delay in milliseconds (and fractions) from the expected run. It might be negative (on older versions of Node.js).


instance.limit

The maximum amount of delay that is tolerated before overLimit becomes true and the load event is emitted.


instance.overLimit

Is true if instance.delay > instance.limit.


instance.stop()

Stops the sampling.
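
The sampling idea behind loopbench can be sketched in plain Node.js (the names here are ours for illustration, not the library's API): schedule a timer and measure how late it actually fires; if the delay exceeds the limit, the event loop is considered overloaded.

```javascript
const SAMPLE_INTERVAL = 5; // ms between samples (loopbench's default)
const LIMIT = 42;          // ms of tolerated delay (loopbench's default)

// Compare when a sample actually fired against when it was expected to fire.
function sample(expectedAt, firedAt) {
  const delay = firedAt - expectedAt;
  return { delay, overLimit: delay > LIMIT };
}

const expectedAt = Date.now() + SAMPLE_INTERVAL;
setTimeout(() => {
  const { delay, overLimit } = sample(expectedAt, Date.now());
  console.log(`delay=${delay}ms overLimit=${overLimit}`);
}, SAMPLE_INTERVAL);

// Busy-block the loop for ~100 ms so the sample fires late.
const busyUntil = Date.now() + 100;
while (Date.now() < busyUntil) { /* spin */ }
```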

Download Details:

Author: Mcollina
Source Code: https://github.com/mcollina/loopbench 
License: MIT license

#javascript #loop #event 
