An events app to discover events happening around the world using the Open Event Platform on Eventyay.
Eventyay Attendee App provides the following features for users:
The application is available here:
Please join our mailing list here to discuss questions regarding the project.
Our chat channel is on Gitter here.
A native Android app written in Kotlin, using the Open Event Server for its API.
There are certain conventions we follow in the project, we recommend that you become familiar with these so that the development process is uniform for everyone:
Generally, projects are created using a package-by-layer approach, where packages are named after layers like ui, activity, fragment, etc. This quickly becomes unscalable in large projects: a large number of unrelated classes are crammed into one layer, and it becomes difficult to navigate through them.
Instead, we follow package by feature, which at the cost of flatness of our project, provides us packages of isolated functioning related classes which are likely to be a complete self-sufficient component of the application. Each package contains all related classes of view, presenter, their implementations like Activities and Fragments.
A notable exception to this is the helper module and data classes like Models and Repositories, as they are used in a cross-component way.
Lastly, each class should only perform one task, do it well, and be unit tested for it. For example, if a presenter is doing more than it should, i.e., parsing dates or implementing search logic, move that logic into its own class. There can be exceptions to this practice, but if the functionality can be generalised and reused, it should most definitely be moved into its own class and unit tested.
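For instance, date formatting extracted out of a presenter into its own single-purpose class might look like the sketch below (Java for illustration; the class and method names are hypothetical, not from this project):

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.Locale;

// A single-purpose class: turns ISO event timestamps into display strings.
// Because it does exactly one thing, it is trivial to unit test in isolation,
// and the presenter that uses it stays focused on presentation logic.
public class EventDateFormatter {
    private static final DateTimeFormatter OUTPUT =
            DateTimeFormatter.ofPattern("dd MMM yyyy, HH:mm", Locale.ENGLISH);

    public String format(String isoTimestamp) {
        return LocalDateTime.parse(isoTimestamp).format(OUTPUT);
    }

    public static void main(String[] args) {
        System.out.println(new EventDateFormatter().format("2023-05-01T18:30:00"));
    }
}
```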
First time contributors can read CONTRIBUTING.md file for help regarding creating issues and sending pull requests.
We have the following branches:
Please note that:
Each push to the master branch automatically publishes the application to the Play Store as an alpha release. Thus, on each merge into master, the versionCode and versionName MUST be changed accordingly in app/build.gradle.
versionCode : Integer : To be monotonically incremented with each merge. Failure to do so will lead to publishing error, and thus is a crucial step before any merge
versionName : String : User visible version of the app. To be changed following semantic versioning
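As an illustration, the relevant fields in app/build.gradle look something like this (the values shown are hypothetical):

```groovy
android {
    defaultConfig {
        // versionCode: increment on every merge into master,
        // otherwise the Play Store publish step fails
        versionCode 123
        // versionName: user-visible version, bumped per semantic versioning
        versionName "1.8.0"
    }
}
```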
Please help us follow the best practices to make it easy for the reviewer as well as the contributor. We want to focus on the code quality more than on managing pull request ethics.
Installing the APK on your device: You can get the debug APK as well as the release APK from the apk branch of the repository. After each PR merge, both APKs are updated automatically, so just download the APK you want and install it on your device; it will always be the latest build.
Author: FOSSASIA
Source Code: https://github.com/fossasia/open-event-attendee-android
License: Apache-2.0 license
The name comes directly from the fact that in event sourcing, events are the source of truth: all other data and data structures are just derived from the events. In theory we can erase all of those other storages, because as long as we keep the event log we can always regenerate them. Event sourcing stores an ordered log of our operations, which we can see by looking at a shopping cart, for example.
A nice thing about event sourcing is that we are able to do time traveling. If we have recorded the sequence of events, we can always go back: we can take the events up to some point in time, apply them, and see what the state was at that time.
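The replay idea above can be sketched in a few lines of plain Java (hypothetical names, not tied to any framework): state is never stored directly, only derived by folding over the event log up to a chosen point in time.

```java
import java.util.ArrayList;
import java.util.List;

public class ReplayDemo {
    // An event records what happened, with a logical timestamp.
    record QuantityChanged(int delta, long time) {}

    // Derive the quantity by replaying all events up to a point in time.
    static int replayUntil(List<QuantityChanged> log, long until) {
        int quantity = 0;
        for (QuantityChanged e : log) {
            if (e.time() <= until) {
                quantity += e.delta(); // apply the event to the state
            }
        }
        return quantity;
    }

    public static void main(String[] args) {
        List<QuantityChanged> log = new ArrayList<>();
        log.add(new QuantityChanged(+5, 1)); // 5 items added
        log.add(new QuantityChanged(-2, 2)); // 2 items removed
        log.add(new QuantityChanged(+3, 3)); // 3 items added

        System.out.println(replayUntil(log, 2)); // state as of time 2 -> 3
        System.out.println(replayUntil(log, 3)); // current state -> 6
    }
}
```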
Let’s take a use case. Gaurav is a shopkeeper; he sells electronic items like mobile phones, laptops, etc. He wants to keep track of the stock in his shop and to know whether a particular item is in stock without checking manually. He wants an app for it.
The app has three functionalities:
In event sourcing you just capture user events and add them to the database. You keep adding new events for every user action; no record is ever updated or deleted in the database, only events are added. With each event, you also store event data specific to that event.
In this way you maintain the history of user actions. This is useful if your application has security requirements to audit all user actions, and in any application where you want a history of user actions (e.g. Git commits, analytics applications, etc.). To know the current state of an entity, you simply iterate through its events and apply them in order.
The project structure will be as follows-
The pom.xml will be as follows-
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.7.0</version>
<relativePath/> <!-- lookup parent from repository -->
</parent>
<groupId>com.example</groupId>
<artifactId>stockmanagement_eventstore</artifactId>
<version>1.0.0</version>
<name>stockmanagement_eventstore</name>
<description>Demo project for Event Sourcing</description>
<properties>
<java.version>11</java.version>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<!-- H2 database dependency(in-memory databases ) -->
<dependency>
<groupId>com.h2database</groupId>
<artifactId>h2</artifactId>
<scope>runtime</scope>
</dependency>
<dependency>
<groupId>com.google.code.gson</groupId>
<artifactId>gson</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
<!-- Lombok remove boilerplate codes -->
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
<optional>true</optional>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
</project>
Created Stock Model class
package com.example.stock.management;
import lombok.Data;
//entity model
@Data
public class Stock {
private String name;
private int quantity;
private String user;
}
EventStore class will be as follows
package com.example.stock.management;
import java.time.LocalDateTime;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import lombok.Data;
@Entity
@Data
public class EventStore {
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private long eventId;
private String eventType;
private String entityId;
private String eventData;
private LocalDateTime eventTime;
}
Created StockEvent interface
package com.example.stock.management;
public interface StockEvent {
}
Here is the StockAddedEvent class and its implementation
package com.example.stock.management;
import lombok.Builder;
import lombok.Data;
@Builder
@Data
public class StockAddedEvent implements StockEvent {
private Stock stockDetails;
}
Created the StockRemovedEvent class and its implementation
package com.example.stock.management;
import lombok.Builder;
import lombok.Data;
@Builder
@Data
public class StockRemovedEvent implements StockEvent {
private Stock stockDetails;
}
Added EventRepository class
package com.example.stock.management;
import java.time.LocalDateTime;
import org.springframework.data.repository.CrudRepository;
import org.springframework.stereotype.Component;
@Component
public interface EventRepository extends CrudRepository<EventStore, Long>{
Iterable<EventStore> findByEntityId(String entityId);
Iterable<EventStore> findByEntityIdAndEventTimeLessThanEqual(String entityId,LocalDateTime date);
}
Created EventService class
package com.example.stock.management;
import java.time.LocalDateTime;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
@Service
public class EventService {
@Autowired
private EventRepository repo;
public void addEvent(StockAddedEvent event) throws JsonProcessingException {
EventStore eventStore = new EventStore();
eventStore.setEventData(new ObjectMapper().writeValueAsString(event.getStockDetails()));
eventStore.setEventType("STOCK_ADDED");
eventStore.setEntityId(event.getStockDetails().getName());
eventStore.setEventTime(LocalDateTime.now());
repo.save(eventStore);
}
public void addEvent(StockRemovedEvent event) throws JsonProcessingException {
EventStore eventStore = new EventStore();
eventStore.setEventData(new ObjectMapper().writeValueAsString(event.getStockDetails()));
eventStore.setEventType("STOCK_REMOVED");
eventStore.setEntityId(event.getStockDetails().getName());
eventStore.setEventTime(LocalDateTime.now());
repo.save(eventStore);
}
public Iterable<EventStore> fetchAllEvents(String name) {
return repo.findByEntityId(name);
}
public Iterable<EventStore> fetchAllEventsTillDate(String name,LocalDateTime date) {
return repo.findByEntityIdAndEventTimeLessThanEqual(name, date);
}
}
Created the StockController class for adding a stock item, removing a stock item, and getting the current count of stock.
package com.example.stock.management;
import java.time.LocalDate;
import java.time.LocalDateTime;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.DeleteMapping;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import com.fasterxml.jackson.core.JsonProcessingException;
import com.google.gson.Gson;
@RestController
public class StockController {
@Autowired
private EventService service;
// Adding a stock item
@PostMapping("/stock")
public void addStock(@RequestBody Stock stockRequest) throws JsonProcessingException {
StockAddedEvent event = StockAddedEvent.builder().stockDetails(stockRequest).build();
service.addEvent(event);
}
// To remove item from a stock
@DeleteMapping("/stock")
public void removeStock(@RequestBody Stock stock) throws JsonProcessingException {
StockRemovedEvent event = StockRemovedEvent.builder().stockDetails(stock).build();
service.addEvent(event);
}
//To get current count of stock
@GetMapping("/stock")
public Stock getStock(@RequestParam("name") String name) throws JsonProcessingException {
Iterable<EventStore> events = service.fetchAllEvents(name);
Stock currentStock = new Stock();
currentStock.setName(name);
currentStock.setUser("NA");
for (EventStore event : events) {
Stock stock = new Gson().fromJson(event.getEventData(), Stock.class);
if (event.getEventType().equals("STOCK_ADDED")) {
currentStock.setQuantity(currentStock.getQuantity() + stock.getQuantity());
} else if (event.getEventType().equals("STOCK_REMOVED")) {
currentStock.setQuantity(currentStock.getQuantity() - stock.getQuantity());
}
}
return currentStock;
}
@GetMapping("/events")
public Iterable<EventStore> getEvents(@RequestParam("name") String name) throws JsonProcessingException {
Iterable<EventStore> events = service.fetchAllEvents(name);
return events;
}
@GetMapping("/stock/history")
public Stock getStockUntilDate(@RequestParam("date") String date,@RequestParam("name") String name) throws JsonProcessingException {
LocalDateTime dateTill = LocalDate.parse(date).atTime(23, 59); // expects ISO format, e.g. 2022-12-01
Iterable<EventStore> events = service.fetchAllEventsTillDate(name,dateTill);
Stock currentStock = new Stock();
currentStock.setName(name);
currentStock.setUser("NA");
for (EventStore event : events) {
Stock stock = new Gson().fromJson(event.getEventData(), Stock.class);
if (event.getEventType().equals("STOCK_ADDED")) {
currentStock.setQuantity(currentStock.getQuantity() + stock.getQuantity());
} else if (event.getEventType().equals("STOCK_REMOVED")) {
currentStock.setQuantity(currentStock.getQuantity() - stock.getQuantity());
}
}
return currentStock;
}
}
StockmanagementEventstoreApplication class will be as follows
package com.example.stock.management;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
// Main class of the application
@SpringBootApplication
public class StockmanagementEventstoreApplication {
public static void main(String[] args) {
SpringApplication.run(StockmanagementEventstoreApplication.class, args);
}
}
Added application.yml file
spring:
datasource:
url: jdbc:h2:mem:testdb
driverClassName: org.h2.Driver
username: sa
password:
jpa:
database-platform: org.hibernate.dialect.H2Dialect
h2:
console:
enabled: true
path: /h2
Start the StockmanagementEventstoreApplication app
Adding some items to stock:
Let’s check the database:
We are able to get the current stock by hitting the GET API
If we want to know what the stock was the day before, we can use the history endpoint.
Fetching the current state of an entity is not straightforward and does not scale well in event sourcing. This can be mitigated by taking snapshots: compute the state of the entity at a particular time, store it somewhere, and then only replay the events that occurred after that snapshot. For more, you can refer to the documentation: https://www.baeldung.com/cqrs-event-sourcing-java
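A rough sketch of the snapshot optimization in plain Java (hypothetical names; a real system would persist the snapshot rather than keep it in memory):

```java
import java.util.List;

public class SnapshotDemo {
    record Event(int delta, long time) {}
    // A snapshot captures the derived state up to some point in the log.
    record Snapshot(int quantity, long asOf) {}

    // Build a snapshot by replaying the full log once.
    static Snapshot takeSnapshot(List<Event> log, long asOf) {
        int q = 0;
        for (Event e : log) {
            if (e.time() <= asOf) q += e.delta();
        }
        return new Snapshot(q, asOf);
    }

    // Current state = snapshot + only the events after the snapshot time.
    static int currentState(Snapshot snap, List<Event> log) {
        int q = snap.quantity();
        for (Event e : log) {
            if (e.time() > snap.asOf()) q += e.delta();
        }
        return q;
    }

    public static void main(String[] args) {
        List<Event> log = List.of(new Event(10, 1), new Event(-3, 2), new Event(4, 3));
        Snapshot snap = takeSnapshot(log, 2);        // quantity 7 as of time 2
        System.out.println(currentState(snap, log)); // replays only the event at time 3 -> 11
    }
}
```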
Original article source at: https://blog.knoldus.com/
In this blog we will discuss Axon Framework terminology and Axon Server. In part one we'll look at the structure of an Axon application, but before diving into that, we'll first cover a couple of the Axon concepts that come into play when working with an Axon application. You might want to learn more about concepts like Command Query Responsibility Segregation (CQRS), using commands, events, and queries as a message-driven API, Domain-Driven Design (DDD), event sourcing, and, very importantly, evolutionary microservices.
We intend to cover in detail the terminology that the Axon Framework provides to help build applications based on CQRS, DDD, and event sourcing.
A summary of the various terminologies is given below:
Axon queries describe a request for information or state. A query can have multiple handlers; when dispatching a query, the client indicates whether it wants a result from one or from all available query handlers.
In Axon one of the core concepts is messaging. All communication between components is done by message objects. This gives these components the location transparency needed to be able to scale and distribute the components when necessary.
In Axon, all communication between components is done with explicit messages, represented by the Message interface. A Message consists of a payload, which is an application-specific object representing the actual functional message, and metadata, which is a set of key-value pairs describing the context of the message.
An Axon application accepts command messages from the outside world as its true source of intent.
Axon commands describe an intent to change the application’s state. They are implemented as (preferably read-only) POJOs that are wrapped using one of the CommandMessage interface implementations.
Commands always have exactly one destination. While the sender does not care which component handles the command or where that component resides, it may be interested in the outcome.
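This routing difference between commands and queries can be sketched with a tiny in-memory dispatcher in plain Java (this is not the actual Axon API, just an illustration of the concept): a command is dispatched to exactly one handler, while a query may gather answers from all registered handlers.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

public class DispatchDemo {
    // One handler per command type; many handlers per query type.
    private Function<String, String> commandHandler;
    private final List<Function<String, String>> queryHandlers = new ArrayList<>();

    void registerCommandHandler(Function<String, String> h) { commandHandler = h; }
    void registerQueryHandler(Function<String, String> h) { queryHandlers.add(h); }

    // A command goes to exactly one destination.
    String send(String command) {
        return commandHandler.apply(command);
    }

    // A query may collect a result from every registered handler.
    List<String> query(String query) {
        List<String> results = new ArrayList<>();
        for (Function<String, String> h : queryHandlers) {
            results.add(h.apply(query));
        }
        return results;
    }

    public static void main(String[] args) {
        DispatchDemo bus = new DispatchDemo();
        bus.registerCommandHandler(c -> "handled: " + c);
        bus.registerQueryHandler(q -> "replica-1 answer");
        bus.registerQueryHandler(q -> "replica-2 answer");
        System.out.println(bus.send("AddStock"));     // exactly one handler runs
        System.out.println(bus.query("GetStock"));    // both handlers answer
    }
}
```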
Event sourcing is an architectural pattern in which events are considered the “source of truth”, based on which components build their internal state. Events are objects that describe something that has occurred in an Axon-based application. A typical source of events is the aggregate: when something happens within the aggregate, it raises an event. In the Axon Framework, events can be any object, though it is highly encouraged to make sure all events are serializable. The event sourcing handlers combined form the aggregate; this is where all the state changes happen.
On the Axon Server side of things, we expect support for both message handling and event storage. Axon Server comes with a zero-configuration message router and event store that combine gracefully with Axon to provide a solution both for storing events and for delivering messages between components.
Needless to say, it ticks all these boxes. Axon Server is built from scratch in Java specifically to meet all of these requirements. It manages files directly and does not depend on an underlying database system to store events.
Event sourcing is an architectural pattern in which events are considered the “source of truth”, based on which components build their internal state.
The database “EventStore” (written with quotes to emphasize it is the name of the database) is a built-for-purpose solution and therefore meets all the requirements in our list. “EventStore” is a popular option written in .NET (with Java clients written using Akka). Axon Framework gives a huge selection of options for where to store your events, from traditional RDBMS options like PostgreSQL or MySQL to NoSQL databases such as MongoDB.
In this blog, we’ve summarized the requirements for queries, commands, messages, and an event store database, and looked at the various options available. Given Axon’s feature set and the performance demands of an event sourcing system, we recommend specialized storage; particularly if you are already leveraging Axon Framework, choosing Axon Server is a logical choice.
Original article source at: https://blog.knoldus.com/
As the industry grows, the amount of data produced has also increased. This data can be a great asset to the business if analyzed properly. Most tech companies receive data in raw form, and it becomes challenging to process. Apache Kafka, an open-source streaming platform, helps you deal with this problem. It allows you to perform basic tasks like moving data from source to destination, as well as more complex tasks like altering the structure of data and performing aggregations, all on the fly in real time. Real-time event processing with Kafka in a serverless environment makes your job easier by taking on the overhead of managing servers and allowing you to focus solely on building your application.
The new technologies give us the ability to develop and deploy lifesaving applications at unprecedented speed — while also safeguarding privacy. Source: Tracking People And Events In Real Time
Serverless is a form of computing architecture wherein the computational capacity is moved to cloud platforms, which can help increase speed and performance. A serverless environment helps build and run applications and use various services without worrying about servers. This enables developers to put all their effort into the core development of their applications, removing the overhead of managing servers and using that time to build better applications.
Apache Kafka is an open-source event streaming platform that provides data storing, reading, and analyzing capabilities. It has high throughput, reliability, and replication, which make it highly fault-tolerant. It is fast and scalable. It is distributed, allowing its users to run it across many machines, thus giving it extra processing power and storage capability. It was initially built as a messaging queue system but has evolved into a full-fledged event streaming platform over time. Different use cases of Kafka are:
Kafka acts as a messenger sending messages from one application to another. Messages sent by the producer (sender) are grouped into a topic that the consumer (subscriber) subscribes to as a stream of data.
Kafka Streams API and KSQL for real-time event streaming
Kafka Streams is a client library that analyzes data. A stream is a continuous flow of data to be analyzed for our purposes. It helps us read this data in real time with milliseconds of latency, perform aggregation functions, and write the output to a new topic. The picture below shows the working of an application that uses the Kafka Streams library.
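To make the producer/topic/consumer relationship concrete, here is a tiny in-memory sketch in plain Java (not the real Kafka client API, which requires a running broker): a topic behaves like an append-only log, and each consumer reads from it at its own offset.

```java
import java.util.ArrayList;
import java.util.List;

public class TopicDemo {
    // A topic is an ordered, append-only log of messages.
    private final List<String> log = new ArrayList<>();

    void produce(String message) {
        log.add(message);
    }

    // Each consumer tracks its own offset into the log.
    static class Consumer {
        private int offset = 0;

        List<String> poll(TopicDemo topic) {
            List<String> batch = new ArrayList<>(topic.log.subList(offset, topic.log.size()));
            offset = topic.log.size(); // advance past what we've read
            return batch;
        }
    }

    public static void main(String[] args) {
        TopicDemo topic = new TopicDemo();
        Consumer consumer = new Consumer();
        topic.produce("order-1");
        topic.produce("order-2");
        System.out.println(consumer.poll(topic)); // both messages so far
        topic.produce("order-3");
        System.out.println(consumer.poll(topic)); // only the new message
    }
}
```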
KSQL: a streaming SQL engine for Apache Kafka. KSQL is a streaming SQL engine for real-time event processing against Apache Kafka. It provides an easy yet powerful interactive SQL interface for stream processing, relieving you from writing any Java or Python code.
Read more about Apache Kafka Security with Kerberos on Kubernetes.
AWS provides Amazon MSK, a fully managed service that allows you to build Apache Kafka applications for real-time event processing. It can be a tedious task to manage the setup and scaling of Apache Kafka clusters in production. When you run Kafka on your own, you need to provision servers, configure them manually, replace failed servers, integrate upgrades and server patches, architect the cluster to maximize availability, ensure data safety, and plan scaling events from time to time to support load changes. Amazon MSK makes it a cakewalk to create and run production applications on Apache Kafka without infrastructure management expertise, taking the weight of managing infrastructure off your shoulders so you can focus on building applications.
Benefits of Amazon MSK
Before discussing how Kafka works on Azure, let us quickly get insight into Microsoft Azure.
Well, Azure is a set of cloud services provided by Microsoft to meet your daily business challenges by giving you the utility to build, manage, deploy, and scale applications over an extensive global platform. It provides Azure HDInsight, a cloud-based service used for data analytics. It allows us to run popular open-source frameworks, including Apache Kafka, with effective cost and enterprise-grade services. Azure enables massive data processing with minimal effort, complemented by the benefits of an open-source ecosystem.
Quickstart: create a Kafka cluster using the Azure portal in HDInsight. To create an Apache Kafka cluster on Azure HDInsight, follow the steps given below.
Command to connect to the cluster: ssh sshuser@CLUSTERNAME-ssh.azurehdinsight.net
Know more about Stream Processing with Apache Flink and Kafka
Azure extends Kafka's single-dimension view of the rack to a two-dimension rack view (update domain and fault domain) and provides tools to rebalance Kafka partitions across these domains.
Given Kafka's huge adoption by developers due to its large scalability and workload-handling capabilities, almost all developers are shifting towards stream-oriented applications rather than state-oriented ones. Breaking the stereotype that managing Kafka requires expert skills, Confluent Cloud provides developers with fully managed Kafka, so they need not worry about the gruesome work of managing it. It is built as a 'cloud-native service' that provides developers with a serverless environment with utility pricing, charging for the streams used. Confluent Cloud is available on GCP, and developers can use it by signing up and paying for usage; it provides an integrated environment for billing using metered GCP capacity. Tools provided by Confluent Cloud, such as Confluent Schema Registry, BigQuery Collector, and support for KSQL, are covered by this subscription, enabling developers to build without having to take care of the technical plumbing themselves.
How does Kafka work on Amazon MSK?
In a few steps, you can provision your Apache Kafka cluster by logging on to Amazon MSK, which manages the cluster and integrates upgrades, letting you freely build your application.
Given below are the steps to deploy Confluent Cloud.
After all these steps, the user will log in to their account. The user still needs to make some decisions regarding the clusters, but nothing as complicated as when managing Apache Kafka independently. This makes the development of event-streaming applications easier and provides a better experience.
So, in a nutshell, we can say that real-time event processing with Kafka is in huge demand among developers due to its extensive scalability and workload-handling capabilities; almost all developers are shifting towards stream-oriented applications rather than state-oriented ones. Combining this with a serverless environment reduces the burden of managing the cluster, letting us focus more on development and leaving most of the operational work to the serverless environment.
Original article source at: https://www.xenonstack.com/
JavaScript provides a built-in function called removeEventListener() that you can use to remove event listeners attached to HTML elements. Suppose you have an event listener attached to a <button> element as follows:
<body>
<button id="save">Save</button>
<script>
let button = document.getElementById("save");
function fnClick(event) {
alert("Button save is clicked");
}
button.addEventListener("click", fnClick);
</script>
</body>
To remove the "click"
event listener attached from the <script>
tag above, you need to use the removeEventListener()
method, passing the type
of the event and the callback
function to remove from the element:
button.removeEventListener("click", fnClick);
The above code should suffice to remove the "click" event listener from the button element. Notice how you need to call the removeEventListener() method on the element while also passing the fnClick function reference to the method. To correctly remove an event listener, you need a reference both to the element with the listener and to the callback function.
This is why it’s not recommended to pass a nameless callback function to event listeners as follows:
button.addEventListener("click", function(event){
alert("Button save is clicked");
})
Without the callback function name as in the example above, you won’t be able to remove the event listener.
Sometimes, you may also want to disable the button element and remove the event listener to prevent a double-click from your users. You can do so by calling the removeEventListener() method inside the callback passed to the addEventListener() method, as shown below:
<body>
<button id="save">Save</button>
<script>
let button = document.getElementById("save");
function fnClick(event) {
alert("Button save is clicked");
button.disabled = true; // disable button
button.removeEventListener("click", fnClick); // remove event listener
}
button.addEventListener("click", fnClick);
</script>
</body>
In the code above, the button element will be disabled and the event listener will be removed after a "click" event is triggered.
And that’s how you remove JavaScript event listeners attached to HTML elements. You need to keep references to the element you want to remove the listener from, the type of the event, and the callback function executed by the event so that you can remove the event listener without any error.
Original article source at: https://sebhastian.com/
To display or hide a <div> by a <button> click, you can add the onclick event listener to the <button> element.
The onclick listener for the button will have a function that changes the display attribute of the <div> from the default value (which is block) to none.
For example, suppose you have an HTML <body> element as follows:
<body>
<div id="first">This is the FIRST div</div>
<div id="second">This is the SECOND div</div>
<div id="third">This is the THIRD div</div>
<button id="toggle">Hide THIRD div</button>
</body>
The <button> element above is created to hide or show the <div id="third"> element on click.
You need to add the onclick event listener to the <button> element like this:
const targetDiv = document.getElementById("third");
const btn = document.getElementById("toggle");
btn.onclick = function () {
if (targetDiv.style.display !== "none") {
targetDiv.style.display = "none";
} else {
targetDiv.style.display = "block";
}
};
The HTML will be rendered as if the <div> element never existed by setting the display attribute to none.
When you click the <button> element again, the display attribute will be set back to block, so the <div> will be rendered back in the HTML page.
Since this solution is using JavaScript API native to the browser, you don’t need to install any JavaScript libraries like jQuery.
You can add the JavaScript code to your HTML <body> tag using the <script> tag as follows:
<body>
<div id="first">This is the FIRST div</div>
<div id="second">This is the SECOND div</div>
<div id="third">This is the THIRD div</div>
<button id="toggle">Hide THIRD div</button>
<script>
const targetDiv = document.getElementById("third");
const btn = document.getElementById("toggle");
btn.onclick = function () {
if (targetDiv.style.display !== "none") {
targetDiv.style.display = "none";
} else {
targetDiv.style.display = "block";
}
};
</script>
</body>
Feel free to use and modify the code above in your project.
I hope this tutorial has been useful for you. 👍
Original article source at: https://sebhastian.com/
Let's learn how to use React onChange events properly for keeping track of user input.
The onChange event handler is a prop that you can pass into JSX <input> elements.
This prop is provided by React so that your application can listen to user input in real-time.
When an onChange event occurs, the prop will call the function you passed as its parameter.
Here’s an example of the onChange event in action:
import React from "react";
function App() {
function handleChange(event) {
console.log(event.target.value);
}
return (
<input
type="text"
name="firstName"
onChange={handleChange}
/>
);
}
export default App;
In the example above, the handleChange() function will be called every time the onChange event occurs for the <input> element.
The event object passed into the handleChange() function contains all the details about the input event.
You can also declare a function right inside the onChange prop like this:
import React from "react";
function App() {
return (
<input
type="text"
name="firstName"
onChange={event => console.log("onchange is triggered")}
/>
);
}
export default App;
Now whenever you type something into the text box, React will trigger the function that we passed into the onChange prop.
In regular HTML, form elements such as <input> usually maintain their own value:
<input id="name" type="text" />
Which you can retrieve by using the document selector:
var name = document.getElementById("name").value;
In React however, it is encouraged for developers to store input values in the component’s state object.
This way, the React component that renders the <input> element will also control what happens on subsequent user input.
First, you create a state for the input as follows:
import React, { useState } from "react";
function App(props) {
const [name, setName] = useState("");
}
Then, you create an input element and call the setName function to update the name state. Every time the onChange event is triggered, React will pass the event argument into the function that you define inside the prop:
import React, { useState } from "react";

function App(props) {
  const [name, setName] = useState("");

  return (
    <input
      type="text"
      name="firstName"
      onChange={event => setName(event.target.value)}
    />
  );
}
Finally, you use the value of the `name` state and put it inside the input's `value` prop:
return (
  <input
    type="text"
    name="firstName"
    onChange={event => setName(event.target.value)}
    value={name}
  />
);
You can retrieve the input's value from `event.target.value` and its name from `event.target.name`.
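As a quick illustration, here is a handler reading both properties from the same event object. The handler name and the plain object standing in for React's SyntheticEvent are assumptions for demonstration only:

```javascript
// Hypothetical handler for illustration: reads the input's name and value
// from the event object. In React this would receive a SyntheticEvent;
// here we pass a plain object shaped the same way.
function describeChange(event) {
  return `${event.target.name} = ${event.target.value}`;
}

// Simulated event, shaped like what React passes to onChange handlers:
const fakeEvent = { target: { name: "firstName", value: "Ada" } };
console.log(describeChange(fakeEvent)); // "firstName = Ada"
```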
As in the previous example, you can also separate the `onChange` handler into its own function. The `event` object is commonly shortened to `e`, like this:
import React, { useState } from "react";

function App(props) {
  const [name, setName] = useState("");

  function handleChange(e) {
    setName(e.target.value);
  }

  return (
    <input
      type="text"
      name="firstName"
      onChange={handleChange}
      value={name}
    />
  );
}
This pattern of using React's `onChange` event together with component state encourages developers to use state as the single source of truth.
Instead of using the `document` object to retrieve input values, you retrieve them from the state.
And now you've learned how the React `onChange` event handler works. Nice job! 👍
Original article source at: https://sebhastian.com/
The Event Gateway combines both API Gateway and Pub/Sub functionality into a single event-driven experience. It's dataflow for event-driven, serverless architectures. It routes Events (data) to Functions (serverless compute). Everything it cares about is an event! Even calling a function. It makes it easy to share events across different systems, teams and organizations!
Use the Event Gateway right now, by running the Event Gateway Getting Started Application with the Serverless Framework.
Features:
The Event Gateway is a L7 proxy and realtime dataflow engine, intended for use with Functions-as-a-Service on AWS, Azure, Google & IBM.
The project is under heavy development. The APIs will continue to change until we release a 1.0.0 version. It's not yet ready for production applications.
Looking for an example to get started? The easiest way to use the Event Gateway is with the `serverless-event-gateway-plugin` and the Serverless Framework. Check out the Getting Started Example to deploy your first service to the Event Gateway.
If you don't want to run the Event Gateway yourself, you can use the hosted version provided by the Serverless team. Sign up here!
There is an official Docker image.
docker run -p 4000:4000 -p 4001:4001 serverless/event-gateway --dev
On macOS or Linux run the following to download the binary:
curl -sfL https://raw.githubusercontent.com/serverless/event-gateway/master/install.sh | sh
On Windows download binary.
Then run the binary in development mode with:
$ event-gateway --dev
The repo contains `helm` charts for a quick deploy to an existing cluster using the native nginx Ingress. To deploy a development cluster you can follow the minikube instructions.
If you want more detailed information on running and developing with the Event Gateway, please check Running Locally and Developing guides.
The Event Registry is a single source of truth about events occurring in a space. Every event emitted to a space has to have its event type registered beforehand. The Event Registry also provides a way to authorize incoming events. Please check the Event Types reference for more information.
Discover and call serverless functions from anything that can reach the Event Gateway. Function Discovery supports the following function types:
Function Discovery stores information about functions, allowing the Event Gateway to call them in reaction to a received event.
curl example
curl --request POST \
  --url http://localhost:4001/v1/spaces/default/functions \
  --header 'content-type: application/json' \
  --data '{
    "functionId": "hello",
    "type": "awslambda",
    "provider": {
      "arn": "arn:aws:lambda:us-east-1:377024778620:function:bluegreen-dev-hello",
      "region": "us-east-1"
    }
  }'
Node.js SDK example
const eventGateway = new EventGateway({ url: 'http://localhost' })
eventGateway.registerFunction({
  functionId: 'sendEmail',
  type: 'awslambda',
  provider: {
    arn: 'xxx',
    region: 'us-west-2'
  }
})
A lightweight pub/sub system allows functions to asynchronously receive custom events. Instead of rewriting your functions every time you want to send data to another place, this can be handled entirely in configuration using the Event Gateway. This completely decouples functions from one another, reducing communication costs across teams, eliminating effort spent redeploying functions, and allowing you to easily share events across functions, HTTP services, and even different cloud providers. Functions may be registered as subscribers to a custom event. When an event occurs, all subscribers are called asynchronously with the event as their argument.
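To make the routing idea concrete, here is a minimal in-memory sketch. The class and method names are invented for illustration and this is not the Event Gateway implementation: subscribers are registered per event type, and emitting an event hands it to every matching subscriber.

```javascript
// Minimal pub/sub sketch, illustration only, not the real Event Gateway.
class TinyEventRouter {
  constructor() {
    this.subscribers = new Map(); // eventType -> array of handler functions
  }

  subscribe(eventType, handler) {
    const handlers = this.subscribers.get(eventType) || [];
    handlers.push(handler);
    this.subscribers.set(eventType, handlers);
  }

  // Invoke every subscriber of the event's type with the event as argument.
  // (The real gateway invokes functions asynchronously over the network;
  // this sketch calls them directly.) Returns the number notified.
  emit(event) {
    const handlers = this.subscribers.get(event.eventType) || [];
    handlers.forEach(handler => handler(event));
    return handlers.length;
  }
}

const router = new TinyEventRouter();
router.subscribe("user.created", e => console.log("sendEmail got", e.data.userID));
router.emit({ eventType: "user.created", data: { userID: "123" } });
```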
Creating a subscription requires providing the ID of a registered function, an event type, an HTTP method (`POST` by default), and a path (`/` by default). The method and path properties define the HTTP endpoint that the Events API will listen on.
The Event Gateway supports two subscription types: `async` and `sync`. Please check the Subscription Types reference for more information.
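As a rough sketch of the difference between the two types (the function and object shapes below are invented for illustration, not the Event Gateway's internals): a `sync` subscription waits for the function's result and returns it as the HTTP response, while an `async` subscription invokes the function without waiting and acknowledges the event immediately.

```javascript
// Sketch only: invented names, not the Event Gateway's actual internals.
// "sync" waits for the function result; "async" fires and acknowledges.
function dispatch(subscription, event, invoke) {
  if (subscription.type === "sync") {
    // The function's return value becomes the HTTP response body.
    return { status: 200, body: invoke(event) };
  }
  // Fire-and-forget: invoke, then acknowledge immediately.
  invoke(event);
  return { status: 202, body: null };
}

// Example: a sync dispatch returns the handler's result.
const response = dispatch({ type: "sync" }, { userID: "123" }, e => `sent to ${e.userID}`);
console.log(response); // { status: 200, body: 'sent to 123' }
```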
curl example
curl --request POST \
  --url http://localhost:4001/v1/spaces/default/subscriptions \
  --header 'content-type: application/json' \
  --data '{
    "type": "async",
    "eventType": "user.created",
    "functionId": "sendEmail",
    "path": "/myteam"
  }'
Node.js SDK example
const eventGateway = new EventGateway({ url: 'http://localhost' })
eventGateway.subscribe({
  type: 'async',
  eventType: 'user.created',
  functionId: 'sendEmail',
  path: '/myteam'
})
The `sendEmail` function will be invoked for every `user.created` event sent to the `<Events API>/myteam` endpoint.
curl example
curl --request POST \
  --url http://localhost:4000/ \
  --header 'content-type: application/json' \
  --data '{
    "eventType": "myapp.user.created",
    "eventID": "66dfc31d-6844-42fd-b1a7-a489a49f65f3",
    "cloudEventsVersion": "0.1",
    "source": "/myapp/services/users",
    "eventTime": "1990-12-31T23:59:60Z",
    "data": { "userID": "123" },
    "contentType": "application/json"
  }'
Node.js SDK example
const eventGateway = new EventGateway({ url: 'http://localhost' })
eventGateway.emit({
  "eventType": "myapp.user.created",
  "eventID": "66dfc31d-6844-42fd-b1a7-a489a49f65f3",
  "cloudEventsVersion": "0.1",
  "source": "/myapp/services/users",
  "eventTime": "1990-12-31T23:59:60Z",
  "data": { "userID": "123" },
  "contentType": "application/json"
})
The http.request event

Not all data are events. That's why the Event Gateway has a special, built-in `http.request` event type that enables subscribing to raw HTTP requests.
curl example
curl --request POST \
  --url http://localhost:4001/v1/spaces/default/subscriptions \
  --header 'content-type: application/json' \
  --data '{
    "type": "sync",
    "eventType": "http.request",
    "functionId": "listUsers",
    "method": "GET",
    "path": "/users"
  }'
Node.js SDK example
const eventGateway = new EventGateway({ url: 'http://localhost' })
eventGateway.subscribe({
  type: 'sync',
  eventType: 'http.request',
  functionId: 'listUsers',
  method: 'GET',
  path: '/users'
})
The `listUsers` function will be invoked for every HTTP GET request to the `<Events API>/users` endpoint.
One additional concept in the Event Gateway is Spaces. Spaces provide isolation between resources. A space is a coarse-grained sandbox in which entities (Functions and Subscriptions) can interact freely. All actions are possible within a space: publishing, subscribing, and invoking.
A space is not about access control, authentication, or authorization. It's only about isolation. It doesn't enforce any specific subscription path.
This is how Spaces fit different needs depending on use-case:
Technically speaking, space is a mandatory field ("default" by default) on every Function or Subscription object that the user has to provide during function registration or subscription creation. Space is a first-class concept in the Config API, which can register a function in a specific space or list all functions or subscriptions from a space.
The Event Gateway has first-class support for CloudEvents. This means a few things.
First of all, if an event emitted to the Event Gateway is in the CloudEvents format, the Event Gateway is able to recognize it and trigger the proper subscriptions based on the event type specified in the event. The Event Gateway supports both the Structured Content and Binary Content modes described in the HTTP Transport Binding spec.
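As a rough sketch of the Structured Content mode (the helper name here is invented for illustration): the entire CloudEvent travels as the JSON request body, identified by the CloudEvents media type in the content-type header.

```javascript
// Illustration only: build the pieces of a Structured Content mode request.
// In this mode the whole CloudEvent is the JSON body, and the content type
// identifies the body as a CloudEvent.
function buildStructuredRequest(cloudEvent) {
  return {
    method: "POST",
    headers: { "content-type": "application/cloudevents+json" },
    body: JSON.stringify(cloudEvent),
  };
}

const request = buildStructuredRequest({
  eventType: "myapp.user.created",
  cloudEventsVersion: "0.1",
  source: "/myapp/services/users",
  data: { userID: "123" },
});
console.log(request.headers["content-type"]); // application/cloudevents+json
```

In Binary Content mode, by contrast, only the `data` payload travels in the body and the event attributes move into HTTP headers.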
Secondly, there is a special, built-in HTTP Request Event type that allows reacting to raw HTTP requests not formatted according to the CloudEvents spec. This event type can be especially helpful for building REST APIs.
Currently, Event Gateway supports CloudEvents v0.1 schema specification.
This project uses Semantic Versioning 2.0.0. We are in the initial development phase right now (v0.X.Y), so the public APIs should not be considered stable. Every breaking change will be listed in the release changelog.
The Event Gateway is NOT a FaaS platform. It integrates with existing FaaS providers (AWS Lambda, Google Cloud Functions, Azure Functions, OpenWhisk Actions). The Event Gateway enables building large serverless architectures in a unified way across different providers.
SOA came along with a new set of challenges. In monolithic architectures, it was simple to call a built-in library or a rarely-changing external service. SOA involves much more network communication, which is not reliable. The main problems to solve include:
The following systems are solutions to those problems:
The main goal of those tools is to manage the inconveniences of network communication.
The greatest benefit of serverless/FaaS is that it solves almost all of the above problems:
Tools like Envoy and Linkerd solve a different domain of technical problems, one that doesn't occur in the serverless space. They have a lot of features that are unnecessary in the context of serverless computing.
Service discovery problems may still be relevant to serverless architectures, especially in a multi-cloud setup or when calling a serverless function from a legacy system (microservices, etc.). There is a need for a proxy that knows where the function is actually deployed and has retry logic built in. Mapping a function name to serverless function calling metadata is a different problem from tracking the availability of a changing number of service instances. That's why there is room for new tools that solve the function discovery problem rather than the service discovery problem. These problems are fundamentally different.
Author: Serverless
Source Code: https://github.com/serverless/event-gateway
License: Apache-2.0 license
Kubernetes-based Event Driven Autoscaling
KEDA allows for fine-grained autoscaling (including to/from zero) for event driven Kubernetes workloads. KEDA serves as a Kubernetes Metrics Server and allows users to define autoscaling rules using a dedicated Kubernetes custom resource definition.
KEDA can run on both the cloud and the edge, integrates natively with Kubernetes components such as the Horizontal Pod Autoscaler, and has no external dependencies.
We are a Cloud Native Computing Foundation (CNCF) incubation project.
You can find several samples for various event sources here.
There are many ways to deploy KEDA including Helm, Operator Hub and YAML files.
Interested to learn more? Head over to keda.sh.
If interested in contributing or participating in the direction of KEDA, you can join our community meetings! Learn more about them on our website.
Just want to learn or chat about KEDA? Feel free to join the conversation in #KEDA on the Kubernetes Slack!
We are always happy to list users who run KEDA in production; learn more about it here.
You can learn about the governance of KEDA here.
We use GitHub issues to build our backlog, which provides a complete overview of all open items and our planning.
Learn more about our roadmap here.
You can find the latest releases here.
You can find contributing guide here.
Learn how to build & deploy KEDA locally here.
Author: Kedacore
Source Code: https://github.com/kedacore/keda
License: Apache-2.0 license
SwiftNIO is a cross-platform asynchronous event-driven network application framework for rapid development of maintainable high performance protocol servers & clients.
It's like Netty, but written for Swift.
The SwiftNIO project is split across multiple repositories:
Repository | NIO 2 (Swift 5.5.2+) |
---|---|
SwiftNIO core (https://github.com/apple/swift-nio) | from: "2.0.0" |
TLS (SSL) support (https://github.com/apple/swift-nio-ssl) | from: "2.0.0" |
HTTP/2 support (https://github.com/apple/swift-nio-http2) | from: "1.0.0" |
useful additions around SwiftNIO (https://github.com/apple/swift-nio-extras) | from: "1.0.0" |
first-class support for macOS, iOS, tvOS, and watchOS (https://github.com/apple/swift-nio-transport-services) | from: "1.0.0" |
SSH support (https://github.com/apple/swift-nio-ssh) | .upToNextMinor(from: "0.2.0") |
NIO 2.29.0 and older support Swift 5.0+; NIO 2.39.0 and older support Swift 5.2+.
Within this repository we have a number of products that provide different functionality. This package contains the following products:
- `NIO`. This is an umbrella module exporting `NIOCore`, `NIOEmbedded` and `NIOPosix`.
- `NIOCore`. This provides the core abstractions and types for using SwiftNIO (see "Conceptual Overview" for more details). Most NIO extension projects that provide things like new `EventLoop`s and `Channel`s or new protocol implementations should only need to depend on `NIOCore`.
- `NIOPosix`. This provides the primary `EventLoopGroup`, `EventLoop`, and `Channel`s for use on POSIX-based systems. This is our high performance core I/O layer. In general, this should only be imported by projects that plan to do some actual I/O, such as high-level protocol implementations or applications.
- `NIOEmbedded`. This provides `EmbeddedChannel` and `EmbeddedEventLoop`, implementations of the `NIOCore` abstractions that provide fine-grained control over their execution. These are most often used for testing, but can also be used to drive protocol implementations in a way that is decoupled from networking altogether.
- `NIOConcurrencyHelpers`. This provides a few low-level concurrency primitives that are used by NIO implementations, such as locks and atomics.
- `NIOFoundationCompat`. This extends a number of NIO types for better interoperation with Foundation data types. If you are working with Foundation data types such as `Data`, you should import this.
- `NIOTLS`. This provides a few common abstraction types for working with multiple TLS implementations. Note that this doesn't provide TLS itself: please investigate swift-nio-ssl and swift-nio-transport-services for concrete implementations.
- `NIOHTTP1`. This provides a low-level HTTP/1.1 protocol implementation.
- `NIOWebSocket`. This provides a low-level WebSocket protocol implementation.
- `NIOTestUtils`. This provides a number of helpers for testing projects that use SwiftNIO.

Below you can find a list of a few protocol implementations that are done with SwiftNIO. This is a non-exhaustive list of protocols that are either part of the SwiftNIO project or have been accepted into the SSWG's incubation process. All of the libraries listed below do all of their I/O in a non-blocking fashion using SwiftNIO.
Low-level protocol implementations are often a collection of `ChannelHandler`s that implement a protocol but still require the user to have a good understanding of SwiftNIO. Often, low-level protocol implementations will then be wrapped in high-level libraries with a nicer, more user-friendly API.
Protocol | Client | Server | Repository | Module | Comment |
---|---|---|---|---|---|
HTTP/1 | ✅ | ✅ | apple/swift-nio | NIOHTTP1 | official NIO project |
HTTP/2 | ✅ | ✅ | apple/swift-nio-http2 | NIOHTTP2 | official NIO project |
WebSocket | ✅ | ✅ | apple/swift-nio | NIOWebSocket | official NIO project |
TLS | ✅ | ✅ | apple/swift-nio-ssl | NIOSSL | official NIO project |
SSH | ✅ | ✅ | apple/swift-nio-ssh | n/a | official NIO project |
High-level implementations are usually libraries that come with an API that doesn't expose SwiftNIO's `ChannelPipeline` and can therefore be used with very little (or no) SwiftNIO-specific knowledge. The implementations listed below do still do all of their I/O in SwiftNIO and integrate really well with the SwiftNIO ecosystem.
Protocol | Client | Server | Repository | Module | Comment |
---|---|---|---|---|---|
HTTP | ✅ | ❌ | swift-server/async-http-client | AsyncHTTPClient | SSWG community project |
gRPC | ✅ | ✅ | grpc/grpc-swift | GRPC | also offers a low-level API; SSWG community project |
APNS | ✅ | ❌ | kylebrowning/APNSwift | APNSwift | SSWG community project |
PostgreSQL | ✅ | ❌ | vapor/postgres-nio | PostgresNIO | SSWG community project |
Redis | ✅ | ❌ | mordil/swift-redi-stack | RediStack | SSWG community project |
This is the current version of SwiftNIO and will be supported for the foreseeable future.
The most recent versions of SwiftNIO support Swift 5.5.2 and newer. The minimum Swift versions supported by SwiftNIO releases are detailed below:
SwiftNIO | Minimum Swift Version |
---|---|
2.0.0 ..< 2.30.0 | 5.0 |
2.30.0 ..< 2.40.0 | 5.2 |
2.40.0 ..< 2.43.0 | 5.4 |
2.43.0 ... | 5.5.2 |
SwiftNIO 1 is considered end of life - it is strongly recommended that you move to a newer version. The Core NIO team does not actively work on this version. No new features will be added to this version but PRs which fix bugs or security vulnerabilities will be accepted until the end of May 2022.
If you have a SwiftNIO 1 application or library that you would like to migrate to SwiftNIO 2, please check out the migration guide we prepared for you.
The latest released SwiftNIO 1 version supports Swift 4.0, 4.1, 4.2, and 5.0.
SwiftNIO aims to support all of the platforms where Swift is supported. Currently, it is developed and tested on macOS and Linux, and is known to support the following operating system versions:
SwiftNIO follows SemVer 2.0.0 with a separate document declaring SwiftNIO's Public API.
What this means for you is that you should depend on SwiftNIO with a version range that covers everything from the minimum SwiftNIO version you require up to the next major version. In SwiftPM that can be easily done specifying for example from: "2.0.0"
meaning that you support SwiftNIO in every version starting from 2.0.0 up to (excluding) 3.0.0. SemVer and SwiftNIO's Public API guarantees should result in a working program without having to worry about testing every single version for compatibility.
SwiftNIO is fundamentally a low-level tool for building high-performance networking applications in Swift. It particularly targets those use-cases where using a "thread-per-connection" model of concurrency is inefficient or untenable. This is a common limitation when building servers that use a large number of relatively low-utilization connections, such as HTTP servers.
To achieve its goals SwiftNIO extensively uses "non-blocking I/O": hence the name! Non-blocking I/O differs from the more common blocking I/O model because the application does not wait for data to be sent to or received from the network: instead, SwiftNIO asks for the kernel to notify it when I/O operations can be performed without waiting.
SwiftNIO does not aim to provide high-level solutions like, for example, web frameworks do. Instead, SwiftNIO is focused on providing the low-level building blocks for these higher-level applications. When it comes to building a web application, most users will not want to use SwiftNIO directly: instead, they'll want to use one of the many great web frameworks available in the Swift ecosystem. Those web frameworks, however, may choose to use SwiftNIO under the covers to provide their networking support.
The following sections will describe the low-level tools that SwiftNIO provides, and provide a quick overview of how to work with them. If you feel comfortable with these concepts, then you can skip right ahead to the other sections of this README.
The basic building blocks of SwiftNIO are the following 8 types of objects:
- `EventLoopGroup`, a protocol, provided by `NIOCore`.
- `EventLoop`, a protocol, provided by `NIOCore`.
- `Channel`, a protocol, provided by `NIOCore`.
- `ChannelHandler`, a protocol, provided by `NIOCore`.
- `Bootstrap`, several related structures, provided by `NIOCore`.
- `ByteBuffer`, a struct, provided by `NIOCore`.
- `EventLoopFuture`, a generic class, provided by `NIOCore`.
- `EventLoopPromise`, a generic struct, provided by `NIOCore`.

All SwiftNIO applications are ultimately constructed of these various components.
The basic I/O primitive of SwiftNIO is the event loop. The event loop is an object that waits for events (usually I/O related events, such as "data received") to happen and then fires some kind of callback when they do. In almost all SwiftNIO applications there will be relatively few event loops: usually only one or two per CPU core the application wants to use. Generally speaking event loops run for the entire lifetime of your application, spinning in an endless loop dispatching events.
Event loops are gathered together into event loop groups. These groups provide a mechanism to distribute work around the event loops. For example, when listening for inbound connections the listening socket will be registered on one event loop. However, we don't want all connections that are accepted on that listening socket to be registered with the same event loop, as that would potentially overload one event loop while leaving the others empty. For that reason, the event loop group provides the ability to spread load across multiple event loops.
In SwiftNIO today there is one `EventLoopGroup` implementation, and two `EventLoop` implementations. For production applications there is the `MultiThreadedEventLoopGroup`, an `EventLoopGroup` that creates a number of threads (using the POSIX `pthreads` library) and places one `SelectableEventLoop` on each one. The `SelectableEventLoop` is an event loop that uses a selector (either `kqueue` or `epoll` depending on the target system) to manage I/O events from file descriptors and to dispatch work. These `EventLoop`s and `EventLoopGroup`s are provided by the `NIOPosix` module. Additionally, there is the `EmbeddedEventLoop`, which is a dummy event loop that is used primarily for testing purposes, provided by the `NIOEmbedded` module.
`EventLoop`s have a number of important properties. Most vitally, they are the way all work gets done in SwiftNIO applications. In order to ensure thread-safety, any work that wants to be done on almost any of the other objects in SwiftNIO must be dispatched via an `EventLoop`. `EventLoop` objects own almost all the other objects in a SwiftNIO application, and understanding their execution model is critical for building high-performance SwiftNIO applications.
While `EventLoop`s are critical to the way SwiftNIO works, most users will not interact with them substantially beyond asking them to create `EventLoopPromise`s and to schedule work. The parts of a SwiftNIO application most users will spend the most time interacting with are `Channel`s and `ChannelHandler`s.
Almost every file descriptor that a user interacts with in a SwiftNIO program is associated with a single `Channel`. The `Channel` owns this file descriptor, and is responsible for managing its lifetime. It is also responsible for processing inbound and outbound events on that file descriptor: whenever the event loop has an event that corresponds to a file descriptor, it will notify the `Channel` that owns that file descriptor.
`Channel`s by themselves, however, are not useful. After all, it is a rare application that doesn't want to do anything with the data it sends or receives on a socket! So the other important part of the `Channel` is the `ChannelPipeline`.
A `ChannelPipeline` is a sequence of objects, called `ChannelHandler`s, that process events on a `Channel`. The `ChannelHandler`s process these events one after another, in order, mutating and transforming events as they go. This can be thought of as a data processing pipeline; hence the name `ChannelPipeline`.
All `ChannelHandler`s are either Inbound or Outbound handlers, or both. Inbound handlers process "inbound" events: events like reading data from a socket, reading socket close, or other kinds of events initiated by remote peers. Outbound handlers process "outbound" events, such as writes, connection attempts, and local socket closes.
Each handler processes the events in order. For example, read events are passed from the front of the pipeline to the back, one handler at a time, while write events are passed from the back of the pipeline to the front. Each handler may, at any time, generate either inbound or outbound events that will be sent to the next handler in whichever direction is appropriate. This allows handlers to split up reads, coalesce writes, delay connection attempts, and generally perform arbitrary transformations of events.
In general, `ChannelHandler`s are designed to be highly re-usable components. This means they tend to be designed to be as small as possible, performing one specific data transformation. This allows handlers to be composed together in novel and flexible ways, which helps with code reuse and encapsulation.
`ChannelHandler`s are able to keep track of where they are in a `ChannelPipeline` by using a `ChannelHandlerContext`. These objects contain references to the previous and next channel handler in the pipeline, ensuring that it is always possible for a `ChannelHandler` to emit events while it remains in a pipeline.
SwiftNIO ships with many `ChannelHandler`s built in that provide useful functionality, such as HTTP parsing. In addition, high-performance applications will want to provide as much of their logic as possible in `ChannelHandler`s, as it helps avoid problems with context switching.
Additionally, SwiftNIO ships with a few `Channel` implementations. In particular, it ships with `ServerSocketChannel`, a `Channel` for sockets that accept inbound connections; `SocketChannel`, a `Channel` for TCP connections; and `DatagramChannel`, a `Channel` for UDP sockets. All of these are provided by the `NIOPosix` module. It also provides `EmbeddedChannel`, a `Channel` primarily used for testing, provided by the `NIOEmbedded` module.
A Note on Blocking
One of the important notes about `ChannelPipeline`s is that they are thread-safe. This is very important for writing SwiftNIO applications, as it allows you to write much simpler `ChannelHandler`s in the knowledge that they will not require synchronization.
However, this is achieved by dispatching all code on the `ChannelPipeline` on the same thread as the `EventLoop`. This means that, as a general rule, `ChannelHandler`s must not call blocking code without dispatching it to a background thread. If a `ChannelHandler` blocks for any reason, all `Channel`s attached to the parent `EventLoop` will be unable to progress until the blocking call completes.
This is a common concern while writing SwiftNIO applications. If it is useful to write code in a blocking style, it is highly recommended that you dispatch work to a different thread when you're done with it in your pipeline.
While it is possible to configure and register `Channel`s with `EventLoop`s directly, it is generally more useful to have a higher-level abstraction to handle this work.
For this reason, SwiftNIO ships a number of `Bootstrap` objects whose purpose is to streamline the creation of channels. Some `Bootstrap` objects also provide other functionality, such as support for Happy Eyeballs for making TCP connection attempts.
Currently SwiftNIO ships with three `Bootstrap` objects in the `NIOPosix` module: `ServerBootstrap`, for bootstrapping listening channels; `ClientBootstrap`, for bootstrapping client TCP channels; and `DatagramBootstrap`, for bootstrapping UDP channels.
The majority of the work in a SwiftNIO application involves shuffling buffers of bytes around. At the very least, data is sent and received to and from the network in the form of buffers of bytes. For this reason it's very important to have a high-performance data structure that is optimized for the kind of work SwiftNIO applications perform.
For this reason, SwiftNIO provides `ByteBuffer`, a fast copy-on-write byte buffer that forms a key building block of most SwiftNIO applications. This type is provided by the `NIOCore` module.
`ByteBuffer` provides a number of useful features, and in addition provides a number of hooks to use it in an "unsafe" mode. This turns off bounds checking for improved performance, at the cost of potentially opening your application up to memory correctness problems.
In general, it is highly recommended that you use the `ByteBuffer` in its safe mode at all times.
For more details on the API of `ByteBuffer`, please see our API documentation, linked below.
One major difference between writing concurrent code and writing synchronous code is that not all actions will complete immediately. For example, when you write data on a channel, it is possible that the event loop will not be able to immediately flush that write out to the network. For this reason, SwiftNIO provides `EventLoopPromise<T>` and `EventLoopFuture<T>` to manage operations that complete asynchronously. These types are provided by the `NIOCore` module.
An `EventLoopFuture<T>` is essentially a container for the return value of a function that will be populated at some time in the future. Each `EventLoopFuture<T>` has a corresponding `EventLoopPromise<T>`, which is the object that the result will be put into. When the promise is succeeded, the future will be fulfilled.
If you had to poll the future to detect when it completed, that would be quite inefficient, so `EventLoopFuture<T>` is designed to have managed callbacks. Essentially, you can hang callbacks off the future that will be executed when a result is available. The `EventLoopFuture<T>` will even carefully arrange the scheduling to ensure that these callbacks always execute on the event loop that initially created the promise, which helps ensure that you don't need too much synchronization around `EventLoopFuture<T>` callbacks.
Another important topic for consideration is the difference between how the promise passed to `close` works as opposed to the `closeFuture` on a `Channel`. For example, the promise passed into `close` will succeed after the `Channel` is closed down, but before the `ChannelPipeline` is completely cleared out. This will allow you to take action on the `ChannelPipeline` before it is completely cleared out, if needed. If it is desired to wait for the `Channel` to close down and the `ChannelPipeline` to be cleared out without any further action, then the better option would be to wait for the `closeFuture` to succeed.
There are several functions for applying callbacks to an `EventLoopFuture<T>`, depending on how and when you want them to execute. Details of these functions are left to the API documentation.
SwiftNIO is designed to be a powerful tool for building networked applications and frameworks, but it is not intended to be the perfect solution for all levels of abstraction. SwiftNIO is tightly focused on providing the basic I/O primitives and protocol implementations at low levels of abstraction, leaving more expressive but slower abstractions to the wider community to build. The intention is that SwiftNIO will be a building block for server-side applications, not necessarily the framework those applications will use directly.
Applications that need extremely high performance from their networking stack may choose to use SwiftNIO directly in order to reduce the overhead of their abstractions. These applications should be able to maintain extremely high performance with relatively little maintenance cost. SwiftNIO also focuses on providing useful abstractions for this use-case, such that extremely high performance network servers can be built directly.
The core SwiftNIO repository will contain a few extremely important protocol implementations, such as HTTP, directly in tree. However, we believe that most protocol implementations should be decoupled from the release cycle of the underlying networking stack, as the release cadence is likely to be very different (either much faster or much slower). For this reason, we actively encourage the community to develop and maintain their protocol implementations out-of-tree. Indeed, some first-party SwiftNIO protocol implementations, including our TLS and HTTP/2 bindings, are developed out-of-tree!
There are currently several example projects that demonstrate how to use SwiftNIO.
To build and run them, run the following command, replacing TARGET_NAME with the folder name under ./Sources:
swift run TARGET_NAME
For example, to run NIOHTTP1Server, run the following command:
swift run NIOHTTP1Server
SwiftNIO primarily uses SwiftPM as its build tool, so we recommend using that as well. If you want to depend on SwiftNIO in your own project, it's as simple as adding a dependencies
clause to your Package.swift
:
dependencies: [
.package(url: "https://github.com/apple/swift-nio.git", from: "2.0.0")
]
and then adding the appropriate SwiftNIO module(s) to your target dependencies. The syntax for adding target dependencies differs slightly between Swift versions. For example, if you want to depend on the NIOCore, NIOPosix, and NIOHTTP1 modules, specify the following dependencies:
// swift-tools-version:5.4
dependencies: [
    .product(name: "NIOCore", package: "swift-nio"),
    .product(name: "NIOPosix", package: "swift-nio"),
    .product(name: "NIOHTTP1", package: "swift-nio")
]
If your project is set up as an Xcode project and you're using Xcode 11+, you can add SwiftNIO as a dependency by clicking File -> Swift Packages -> Add Package Dependency. In the upcoming dialog, enter https://github.com/apple/swift-nio.git and click Next twice. Finally, select the targets you are planning to use (for example NIOCore, NIOHTTP1, and NIOFoundationCompat) and click Finish. Now you will be able to import NIOCore (as well as all the other targets you have selected) in your project.
To work on SwiftNIO itself, or to investigate some of the demonstration applications, you can clone the repository directly and use SwiftPM to help build it. For example, you can run the following commands to compile and run the example echo server:
swift build
swift test
swift run NIOEchoServer
To verify that it is working, you can use another shell to attempt to connect to it:
echo "Hello SwiftNIO" | nc localhost 9999
If all goes well, you'll see the message echoed back to you.
To work on SwiftNIO in Xcode 11+, you can just open the Package.swift
file in Xcode and use Xcode's support for SwiftPM Packages.
If you want to develop SwiftNIO with Xcode 10, you have to generate an Xcode project:
swift package generate-xcodeproj
docker-compose
Alternatively, you may want to develop or test with docker-compose
.
First, make sure you have Docker installed; then run the following commands:
docker-compose -f docker/docker-compose.yaml run test
Will create a base image with the Swift runtime and other build and test dependencies, compile SwiftNIO, and run the unit and integration tests.
docker-compose -f docker/docker-compose.yaml up echo
Will create a base image, compile SwiftNIO, and run a sample NIOEchoServer
on localhost:9999
. Test it by echo Hello SwiftNIO | nc localhost 9999
.
docker-compose -f docker/docker-compose.yaml up http
Will create a base image, compile SwiftNIO, and run a sample NIOHTTP1Server
on localhost:8888
. Test it by curl http://localhost:8888
docker-compose -f docker/docker-compose.yaml -f docker/docker-compose.2204.57.yaml run test
Will create a base image using Ubuntu 22.04 and Swift 5.7, compile SwiftNIO, and run the unit and integration tests. Files exist for other Ubuntu and Swift versions in the docker directory.
Note: This section is only relevant if you would like to develop SwiftNIO yourself. You can ignore the information here if you just want to use SwiftNIO as a SwiftPM package.
For the most part, SwiftNIO development is as straightforward as any other SwiftPM project. With that said, we do have a few processes that are worth understanding before you contribute. For details, please see CONTRIBUTING.md
in this repository.
SwiftNIO's main branch is the development branch for the next releases of SwiftNIO 2; it is Swift 5-only.
To be able to compile and run SwiftNIO and the integration tests, you need to have a few prerequisites installed on your system.
# install swift tarball from https://swift.org/downloads
apt-get install -y git curl libatomic1 libxml2 netcat-openbsd lsof perl
dnf install swift-lang /usr/bin/nc /usr/bin/lsof /usr/bin/shasum
It's possible to run the test suite in parallel, which can save significant time on a larger multi-core machine; just add --parallel when running the tests. This can speed up the run time of the test suite by 30x or more.
swift test --parallel
Author: Apple
Source Code: https://github.com/apple/swift-nio
License: Apache-2.0 license
In today's post we will learn about 6 Favorite PHP Libraries for Working with Event and Task Queues.
What is a Task Queue?
A Task Queue is a lightweight, dynamically allocated queue that one or more Worker Entities poll for Tasks.
Task Queues do not have any ordering guarantees. A Task can stay in a Task Queue for some time if there is a backlog that wasn't drained during that period.
There are two types of Task Queues, Activity Task Queues and Workflow Task Queues.
Table of contents:
A multibackend abstraction library.
Bernard makes it super easy and enjoyable to do background processing in PHP. It does this by utilizing queues and long running processes. It supports normal queueing drivers but also implements simple ones with Redis and Doctrine.
Currently these are the supported backends, with more coming with each release:
Via Composer
$ composer require bernard/bernard
We try to follow BDD and TDD, as such we use both phpspec and phpunit to test this library.
$ composer test
You can run the functional tests by executing:
$ composer test-functional
A performant pure-PHP AMQP (RabbitMQ) library, with both synchronous and asynchronous (ReactPHP) support.
BunnyPHP requires PHP 7.1 and newer.
Add as Composer dependency:
$ composer require bunny/bunny:@dev
When instantiating it, the BunnyPHP Client accepts an array of connection options:
$connection = [
'host' => 'HOSTNAME',
'vhost' => 'VHOST', // The default vhost is /
'user' => 'USERNAME', // The default user is guest
'password' => 'PASSWORD', // The default password is guest
];
$bunny = new Client($connection);
$bunny->connect();
Options for SSL connections should be specified under the ssl key:
$connection = [
'host' => 'HOSTNAME',
'vhost' => 'VHOST', // The default vhost is /
'user' => 'USERNAME', // The default user is guest
'password' => 'PASSWORD', // The default password is guest
'ssl' => [
'cafile' => 'ca.pem',
'local_cert' => 'client.cert',
'local_pk' => 'client.key',
],
];
$bunny = new Client($connection);
$bunny->connect();
For options description - please see SSL context options.
Note: invalid SSL configuration will cause connection failure.
See also common configuration variants.
Now that we have a connection with the server we need to create a channel and declare a queue to communicate over before we can publish a message, or subscribe to a queue for that matter.
$channel = $bunny->channel();
$channel->queueDeclare('queue_name'); // Queue name
With a communication channel set up, we can now publish a message to the queue:
$channel->publish(
$message, // The message you're publishing as a string
[], // Any headers you want to add to the message
'', // Exchange name
'queue_name' // Routing key, in this example the queue's name
);
Subscribing to a queue can be done in two ways. The first way will run indefinitely:
$channel->run(
function (Message $message, Channel $channel, Client $bunny) {
$success = handleMessage($message); // Handle your message here
if ($success) {
$channel->ack($message); // Acknowledge message
return;
}
$channel->nack($message); // Mark message fail, message will be redelivered
},
'queue_name'
);
The other way lets you run the client for a specific amount of time consuming the queue before it stops:
$channel->consume(
function (Message $message, Channel $channel, Client $client){
$channel->ack($message); // Acknowledge message
},
'queue_name'
);
$bunny->run(12); // Client runs for 12 seconds and then stops
A Beanstalkd client library.
Pheanstalk is a pure PHP 7.1+ client for the beanstalkd workqueue. It has been actively developed, and used in production by many, since late 2008.
Created by Paul Annesley, Pheanstalk is rigorously unit tested and written using encapsulated, maintainable object oriented design. Community feedback, bug reports and patches have led to a stable 1.0 release in 2010, a 2.0 release in 2013, and a 3.0 release in 2014.
Pheanstalk 3.0 introduces PHP namespaces, PSR-1 and PSR-2 coding standards, and PSR-4 autoloader standard.
beanstalkd up to the latest version 1.10 is supported. All commands and responses specified in the protocol documentation for beanstalkd 1.3 are implemented.
In 2018 Sam Mousa took on the responsibility of maintaining Pheanstalk.
Pheanstalk 4.0 drops support for older PHP versions. It contains the following changes (among other things):
Persistent connections are a feature where a TCP connection is kept alive between different requests to reduce the overhead of TCP connection setup. When reusing TCP connections we must always guarantee that the application protocol, in this case beanstalkd's protocol, is in a proper state. This is hard, and in some cases impossible; at the very least it means we must do some tests which cause roundtrips. Consider for example a connection that has just sent the command PUT 0 4000. The beanstalkd server is now going to read 4000 bytes, but if the PHP script crashes during this write, the next request gets assigned this TCP socket. To reset the connection to a known state, Pheanstalk used to subscribe to the default tube with use default. Since the beanstalkd server is expecting 4000 bytes, it will just write this command into the job and wait for more bytes.
To prevent these kinds of issues the simplest solution is to not use persistent connections.
Depending on the socket implementation used, we might not be able to enable TCP keepalive. Without TCP keepalive there is no way for us to detect dropped connections; the underlying OS may wait up to 15 minutes to decide that a TCP connection on which no packets are being sent is disconnected. When using a socket implementation that supports read timeouts, like SocketSocket (which uses the socket extension), we use read and write timeouts to detect broken connections. The issue with the beanstalkd protocol is that it allows no packets to be sent for extended periods of time; solutions are to either catch these connection exceptions and reconnect, or to use reserveWithTimeout() with a timeout that is less than the read/write timeouts.
Example code for a job runner could look like this (this is real production code):
while(true) {
$job = $beanstalk->reserveWithTimeout(50);
$this->stdout('.', Console::FG_CYAN);
if (isset($job)) {
$this->ensureDatabase($db);
try {
/** @var HookTask $task */
$task = $taskFactory->createFromJson($job->getData());
$commandBus->handle($task);
$this->stdout("Deleting job: {$job->getId()}\n", Console::FG_GREEN);
$beanstalk->delete($job);
} catch (\Throwable $t) {
\Yii::error($t);
$this->stderr("\n{$t->getMessage()}\n", Console::FG_RED);
$this->stderr("{$t->getTraceAsString()}\n", Console::FG_RED);
$this->stdout("Burying job: {$job->getId()}\n", Console::FG_YELLOW);
$beanstalk->bury($job);
}
}
}
Here connection errors will cause the process to exit (and be restarted by a task manager).
In version 4, functions with side effects have been removed. Functions like putInTube internally did several things:
In this example, the tube changes, meaning that the connection is now in a different state. This is not intuitive and forces any user of the connection to always switch or check the current tube. Another issue with this approach is that it is harder to deal with errors: if an exception occurs, it is unclear whether we did or did not switch tubes.
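A minimal sketch of the explicit v4-style API (the tube name and payload are hypothetical; `Pheanstalk::create()`, `useTube()`, and `put()` are the documented Pheanstalk 4 methods):

```php
<?php
require __DIR__ . '/vendor/autoload.php';

use Pheanstalk\Pheanstalk;

$pheanstalk = Pheanstalk::create('127.0.0.1');

// In v4 you switch tubes explicitly, so the connection state is obvious,
// instead of relying on a putInTube()-style helper with hidden side effects.
$pheanstalk->useTube('emails');
$pheanstalk->put(json_encode(['to' => 'user@example.com']));
```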
Install pheanstalk as a dependency with composer:
composer require pda/pheanstalk
A pure PHP AMQP library.
This library is a pure PHP implementation of the AMQP 0-9-1 protocol. It's been tested against RabbitMQ.
The library was used for the PHP examples of RabbitMQ in Action and the official RabbitMQ tutorials.
Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms.
Thanks to videlalvaro and postalservice14 for creating php-amqplib
.
The package is now maintained by Ramūnas Dronga, Luke Bakken and several VMware engineers working on RabbitMQ.
Ensure you have composer installed, then run the following command:
$ composer require php-amqplib/php-amqplib
That will fetch the library and its dependencies inside your vendor folder. Then you can add the following to your .php files in order to use the library
require_once __DIR__.'/vendor/autoload.php';
Then you need to use
the relevant classes, for example:
use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Message\AMQPMessage;
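A minimal publisher sketch using these classes (the queue name and message are placeholders; the calls mirror the official RabbitMQ "Hello World" tutorial for php-amqplib):

```php
<?php
require_once __DIR__ . '/vendor/autoload.php';

use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Message\AMQPMessage;

// Connect to a local RabbitMQ broker with the default credentials.
$connection = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest');
$channel = $connection->channel();

// Declare the queue (idempotent) and publish via the default exchange,
// using the queue name as the routing key.
$channel->queue_declare('hello', false, false, false, false);
$channel->basic_publish(new AMQPMessage('Hello World!'), '', 'hello');

$channel->close();
$connection->close();
```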
With RabbitMQ running open two Terminals and on the first one execute the following commands to start the consumer:
$ cd php-amqplib/demo
$ php amqp_consumer.php
Then on the other Terminal do:
$ cd php-amqplib/demo
$ php amqp_publisher.php some text to publish
You should see the message arriving at the process on the other terminal.
Then to stop the consumer, send to it the quit
message:
$ php amqp_publisher.php quit
If you need to listen to the sockets used to connect to RabbitMQ then see the example in the non blocking consumer.
$ php amqp_consumer_non_blocking.php
PHP bindings for Tarantool Queue.
Tarantool is a NoSQL database running in a Lua application server. It integrates Lua modules, distributed as LuaRocks. This package provides PHP bindings for the Tarantool Queue LuaRock.
The recommended way to install the library is through Composer:
composer require tarantool/queue
In order to use queue, you first need to make sure that your Tarantool instance is configured, up and running. The minimal required configuration might look like this:
-- queues.lua
box.cfg {listen = 3301}
queue = require('queue')
queue.create_tube('foobar', 'fifottl', {if_not_exists = true})
You can read more about the box configuration in the official Tarantool documentation. More information on queue configuration can be found here.
To start the instance you need to copy (or symlink) queues.lua
file into the /etc/tarantool/instances.enabled
directory and run the following command:
sudo tarantoolctl start queues
Once you have your instance running, you can start by creating a queue object with the queue (tube) name you defined in the Lua script:
use Tarantool\Queue\Queue;
...
$queue = new Queue($client, 'foobar');
where $client
is an instance of Tarantool\Client\Client
from the tarantool/client package.
Under the hood Tarantool uses MessagePack binary format to serialize/deserialize data being stored in a queue. It can handle most of the PHP data types (except resources and closures) without any manual pre- or post-processing:
$queue->put('foo');
$queue->put(true);
$queue->put(42);
$queue->put(4.2);
$queue->put(['foo' => ['bar' => ['baz' => null]]]);
$queue->put(new MyObject());
To learn more about object serialization, please follow this link.
Most of the Queue API methods return a Task object containing the following getters:
Task::getId()
Task::getState() // States::READY, States::TAKEN, States::DONE, States::BURY or States::DELAYED
Task::getData()
And some sugar methods:
Task::isReady()
Task::isTaken()
Task::isDone()
Task::isBuried()
Task::isDelayed()
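The consuming side is symmetric. Continuing the example above (the timeout value and the `process()` handler are hypothetical; `take()`, `ack()`, `getData()`, and `getId()` are the package's documented methods):

```php
<?php
// Take a task off the queue, waiting up to 1 second for one to appear,
// process it, then acknowledge it so it is removed from the queue.
$task = $queue->take(1.0);

if ($task !== null) {
    process($task->getData());   // process() is a hypothetical handler
    $queue->ack($task->getId());
}
```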
A RabbitMQ pattern library.
Thumper is a PHP library that aims to abstract several messaging patterns that can be implemented over RabbitMQ.
Inside the examples folder you can see how to implement RPC, parallel processing, simple queue servers and pub/sub.
Via Composer
$ composer require php-amqplib/thumper
Each example has a README.md file that shows how to execute it. All the examples expect that RabbitMQ is running; they have been tested using RabbitMQ 2.1.1.
For example, publishing a message to RabbitMQ is as simple as this:
$producer = new Thumper\Producer($connection);
$producer->setExchangeOptions(array('name' => 'hello-exchange', 'type' => 'direct'));
$producer->publish($argv[1]);
And then to consume them on the other side of the wire:
$myConsumer = function($msg)
{
echo $msg, "\n";
};
$consumer = new Thumper\Consumer($connection);
$consumer->setExchangeOptions(array('name' => 'hello-exchange', 'type' => 'direct'));
$consumer->setQueueOptions(array('name' => 'hello-queue'));
$consumer->setCallback($myConsumer); //myConsumer could be any valid PHP callback
$consumer->consume(5); //5 is the number of messages to consume.
This example illustrates how to create a producer that will publish jobs into a queue. Those jobs will be processed later by one or more consumers.
This example illustrates how to do RPC over RabbitMQ. We have an RPC client that sends requests to a server that returns the number of characters in the provided strings. The server code is inside the parallel_processing folder.
This example is based on the RPC one. In this case it shows how to achieve parallel execution with PHP. Let's say that you have to execute two expensive tasks, one taking 5 seconds and the other 10. Instead of waiting 15 seconds, we can send the requests in parallel and then wait for the replies, which should now take about 10 seconds (the time of the slowest task).
In this case we can see how to achieve publish/subscribe with RabbitMQ. The example is about logging. We can log with several levels and subjects and then have consumers that listen to different log levels act accordingly.
Thank you for following this article.
In today's post we will learn about the 10 Best PHP Libraries for Events.
What is an event?
Events are actions performed by users within an app, such as completing a level or making a purchase. Any and all actions within apps can be defined as an event. These can be tracked with your mobile measurement partner (MMP) to learn how users interact with your app.
Table of contents:
An event driven non-blocking I/O library.
Amp is a non-blocking concurrency framework for PHP. It provides an event loop, promises and streams as a base for asynchronous programming.
Promises in combination with generators are used to build coroutines, which allow writing asynchronous code just like synchronous code, without any callbacks.
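For instance, with Amp v2 a coroutine is just a generator driven by the event loop. This sketch uses the real `Amp\Loop::run()` and `Amp\Delayed` APIs; the delay value is arbitrary:

```php
<?php
require __DIR__ . '/vendor/autoload.php';

use Amp\Delayed;
use Amp\Loop;

Loop::run(function () {
    echo "start\n";

    // yield suspends the coroutine without blocking the event loop;
    // Delayed resolves after the given number of milliseconds.
    yield new Delayed(100);

    echo "done after 100 ms\n";
});
```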
This package can be installed as a Composer dependency.
composer require amphp/amp
This installs the basic building blocks for asynchronous applications in PHP. We offer a lot of repositories building on top of this repository, e.g.:
- amphp/byte-stream, providing a stream abstraction
- amphp/socket, providing a socket layer for UDP and TCP including TLS
- amphp/parallel, providing parallel processing to utilize multiple CPU cores and offload blocking operations
- amphp/http-client, providing an HTTP/1.1 and HTTP/2 client
- amphp/http-server, providing an HTTP/1.1 and HTTP/2 application server
- amphp/mysql and amphp/postgres, for non-blocking database access
Documentation can be found on amphp.org as well as in the ./docs directory. Each package has its own ./docs directory.
This package requires PHP 7.0 or later. Many of the other packages raised their requirement to PHP 7.1. No extensions required!
Optional Extensions
Extensions are only needed if your app requires a high number of concurrent socket connections; usually this limit is configured at around 1024 file descriptors.
Examples can be found in the ./examples
directory of this repository as well as in the ./examples
directory of our other libraries.
An event source and CQRS library.
Broadway is a project providing infrastructure and testing helpers for creating CQRS and event sourced applications. Broadway tries hard to not get in your way. The project contains several loosely coupled components that can be used together to provide a full CQRS\ES experience.
$ composer require broadway/broadway
You can find detailed documentation of the Broadway bundle on broadway.github.io/broadway.
Feel free to join #qandidate on freenode with questions and remarks!
The broadway project is heavily inspired by other open source projects such as AggregateSource, Axon Framework and Ncqrs.
We also like to thank Benjamin, Marijn and Mathias for the conversations we had along the way that helped us shape the broadway project. In particular Marijn for giving us access to his in-house developed CQRS framework.
An event dispatcher library.
This library emulates several aspects of how events are triggered and managed in popular JavaScript libraries such as jQuery: An event object is dispatched to all listeners. The event object holds information about the event, and provides the ability to stop event propagation at any point. Listeners can register themselves or can delegate this task to other objects and have the chance to alter the state and the event itself for the rest of the callbacks.
Listeners need to be registered into a manager and events can then be triggered so that listeners can be informed of the action.
use Cake\Event\Event;
use Cake\Event\EventDispatcherTrait;
class Orders
{
use EventDispatcherTrait;
public function placeOrder($order)
{
$this->doStuff();
$event = new Event('Orders.afterPlace', $this, [
'order' => $order
]);
$this->getEventManager()->dispatch($event);
}
}
$orders = new Orders();
$orders->getEventManager()->on(function ($event) {
// Do something after the order was placed
...
}, 'Orders.afterPlace');
$orders->placeOrder($order);
The above code allows you to easily notify the other parts of the application that an order has been created. You can then do tasks like send email notifications, update stock, log relevant statistics and other tasks in separate objects that focus on those concerns.
Yet another web socket library.
___ _,.--.,_ Elephant.io is a rough websocket client
.-~ ~--"~-. ._ "-. written in PHP. Its goal is to ease the
/ ./_ Y "-. \ communications between your PHP Application and
Y :~ ! Y a real-time server.
lq p | / .|
_ \. .-, l / |j Requires PHP 5.4 and openssl, licensed under
()\___) |/ \_/"; ! the MIT License.
\._____.-~\ . ~\. ./
Y_ Y_. "vr"~ T Built-in Engines :
( ( |L j - Socket.io 2.x
[nn[nn..][nn..] - Socket.io 1.x
~~~~~~~~~~~~~~~~~~~ - Socket.io 0.x (courtesy of @kbu1564)
NOTICE
As this lib is no longer used by the maintainers, support has sadly been dropped. But rejoice: a new repo is now maintained in its own organization: https://github.com/ElephantIO/elephant.io ! :)
Installation
We suggest you use Composer, with the following: php composer.phar require wisembly/elephant.io. For other ways, you can check the release page or the git clone URLs.
Documentation
The docs are not written yet, but you should check the example directory to get a basic knowledge on how this library is meant to work.
An event dispatcher library.
It has the same design goals as Silex and Pimple, to empower the user while staying concise and simple.
It is very strongly inspired by the EventEmitter API found in node.js.
The recommended way to install Événement is through composer.
Just create a composer.json file for your project:
{
"require": {
"evenement/evenement": "^3.0 || ^2.0"
}
}
Note: The 3.x
version of Événement requires PHP 7 and the 2.x
version requires PHP 5.4. If you are using PHP 5.3, please use the 1.x
version:
{
"require": {
"evenement/evenement": "^1.0"
}
}
And run these two commands to install it:
$ curl -s http://getcomposer.org/installer | php
$ php composer.phar install
Now you can add the autoloader, and you will have access to the library:
<?php
require 'vendor/autoload.php';
<?php
$emitter = new Evenement\EventEmitter();
<?php
$emitter->on('user.created', function (User $user) use ($logger) {
$logger->log(sprintf("User '%s' was created.", $user->getLogin()));
});
<?php
$emitter->off('user.created', function (User $user) use ($logger) {
$logger->log(sprintf("User '%s' was created.", $user->getLogin()));
});
<?php
$emitter->emit('user.created', [$user]);
$ ./vendor/bin/phpunit
An event library with a focus on domain events.
Installation
composer require league/event
Usage
Step 1: Create an event dispatcher
use League\Event\EventDispatcher;
$dispatcher = new EventDispatcher();
For more information about setting up the dispatcher, view the documentation about dispatcher setup.
Step 2: Subscribe to an event
Listeners can subscribe to events with the dispatcher.
$dispatcher->subscribeTo($eventIdentifier, $listener);
For more information about subscribing, view the documentation about subscribing to events.
Step 3: Dispatch an event
Events can be dispatched by the dispatcher.
$dispatcher->dispatch($event);
For more information about dispatching, view the documentation about dispatching events.
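Putting the three steps together (the `UserRegistered` event class and the listener here are illustrative; `subscribeTo()` and `dispatch()` are the league/event v3 dispatcher methods):

```php
<?php
require __DIR__ . '/vendor/autoload.php';

use League\Event\EventDispatcher;

// An illustrative domain event: in v3, any object can be an event,
// identified by its class name.
class UserRegistered
{
    public function __construct(public string $username) {}
}

$dispatcher = new EventDispatcher();

// Subscribe a callable listener to the event identifier.
$dispatcher->subscribeTo(UserRegistered::class, function (UserRegistered $event) {
    echo "Welcome, {$event->username}!\n";
});

// Dispatch an event instance; the listener above is invoked.
$dispatcher->dispatch(new UserRegistered('alice'));
```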
An asynchronous web socket client.
composer require ratchet/pawl
Pawl as a standalone app: Connect to an echo server, send a message, display output, close connection:
<?php
require __DIR__ . '/vendor/autoload.php';
\Ratchet\Client\connect('wss://echo.websocket.org:443')->then(function($conn) {
$conn->on('message', function($msg) use ($conn) {
echo "Received: {$msg}\n";
$conn->close();
});
$conn->send('Hello World!');
}, function ($e) {
echo "Could not connect: {$e->getMessage()}\n";
});
There are 3 primary classes to be aware of and use in Pawl:
Connector:
Makes HTTP requests to servers, returning a promise that, if successful, will resolve to a WebSocket object. A connector is configured via its constructor and a request is made by invoking the class. Multiple connections can be established through a single connector. The invoke method has 3 parameters:
Origin
WebSocket:
This is the object used to interact with a WebSocket server. It has two methods: send
and close
. It has two public properties: request
and response
which are PSR-7 objects representing the client and server side HTTP handshake headers used to establish the WebSocket connection.
Message:
This is the object received from a WebSocket server. It has a __toString
method which is how most times you will want to access the data received. If you need to do binary messaging you will most likely need to use methods on the object.
An event source component to persist event messages
You can install prooph/event-store via composer by adding "prooph/event-store": "dev-master"
as requirement to your composer.json.
See: https://github.com/prooph/documentation
Will be published on the website soon.
Please feel free to fork and extend existing plugins or add new ones and send a pull request with your changes! To establish a consistent code quality, please provide unit tests for all your changes and adapt the documentation where needed.
Version | Status | PHP Version | Support Until |
---|---|---|---|
5.x | EOL | >= 5.5 | EOL |
6.x | Maintained | >= 5.5 | 3 Dec 2017 |
7.x | Latest | >= 7.1 | active |
8.x | Development | >= 7.4 | active |
Golang's defer statement for PHP.
The defer statement originally comes from Golang. This library allows you to use the defer functionality in your PHP code.
<?php
defer($context, $callback);
defer
requires two parameters: $context
and $callback
.
$context - unused in your app, required to achieve the "defer" effect. I recommend always using $_.
$callback - a callback which is executed after the surrounding function returns.
<?php
function helloGoodbye()
{
defer($_, function () {
echo "goodbye\n";
});
defer($_, function () {
echo "...\n";
});
echo "hello\n";
}
echo "before hello\n";
helloGoodbye();
echo "after goodbye\n";
// Output:
//
// before hello
// hello
// ...
// goodbye
// after goodbye
<?php
function throwException()
{
defer($_, function () {
echo "after exception\n";
});
echo "before exception\n";
throw new \Exception('My exception');
}
try {
throwException();
} catch (\Exception $e) {
echo "exception has been caught\n";
}
// Output:
//
// before exception
// after exception
// exception has been caught
PHP Defer supports all PHP versions from ^5.3
to ^8.0
. The following command will install the latest possible version of PHP Defer for your PHP interpreter.
composer require "php-defer/php-defer:^3.0|^4.0|^5.0"
A web socket library.
A PHP library for asynchronously serving WebSockets. Build up your application through simple interfaces and re-use your application without changing any of its code just by combining different components.
Shell access is required and root access is recommended. To avoid proxy/firewall blockage it's recommended WebSockets are requested on port 80 or 443 (SSL), which requires root access. In order to do this, along with your sync web stack, you can either use a reverse proxy or two separate machines. You can find more details in the server conf docs.
<?php
use Ratchet\MessageComponentInterface;
use Ratchet\ConnectionInterface;
// Make sure composer dependencies have been installed
require __DIR__ . '/vendor/autoload.php';
/**
* chat.php
* Send any incoming messages to all connected clients (except sender)
*/
class MyChat implements MessageComponentInterface {
protected $clients;
public function __construct() {
$this->clients = new \SplObjectStorage;
}
public function onOpen(ConnectionInterface $conn) {
$this->clients->attach($conn);
}
public function onMessage(ConnectionInterface $from, $msg) {
foreach ($this->clients as $client) {
if ($from != $client) {
$client->send($msg);
}
}
}
public function onClose(ConnectionInterface $conn) {
$this->clients->detach($conn);
}
public function onError(ConnectionInterface $conn, \Exception $e) {
$conn->close();
}
}
// Run the server application through the WebSocket protocol on port 8080
$app = new Ratchet\App('localhost', 8080);
$app->route('/chat', new MyChat, array('*'));
$app->route('/echo', new Ratchet\Server\EchoServer, array('*'));
$app->run();
$ php chat.php
// Then some JavaScript in the browser:
var conn = new WebSocket('ws://localhost:8080/echo');
conn.onmessage = function(e) { console.log(e.data); };
conn.onopen = function(e) { conn.send('Hello Me!'); };
Thank you for following this article.
A discrete-event, process-oriented simulation framework written in Julia, inspired by the Python library SimPy.
SimJulia.jl is a registered package, and is installed by running
julia> Pkg.add("SimJulia")
- DataStructures and ResumableFunctions.
- The @oldprocess macro and the produce / consume functions are removed because they are no longer supported.
- The @process macro replaces the @coroutine macro. The old @process macro is temporarily renamed @oldprocess and will be removed when the infrastructure supporting the produce and consume functions is no longer available in Julia. (DONE)
- The @resumable and @yield macros are put in a separate package, ResumableFunctions: @yield return arg is replaced by @yield arg.
- Base.Dates.Datetime and Base.Dates.Period
- Processes are provided: Tasks
Author: BenLauwens
Source Code: https://github.com/BenLauwens/SimJulia.jl
License: MIT license
The TypedEventNotifier library allows notifying listeners with an object. Listeners can be subscribed to only a special type or group of objects.
Add on pubspec.yml:
dependencies:
  typed_event_notifier: ... # latest package version
See example in /example
folder
import 'package:typed_event_notifier/typed_event_notifier.dart';
/// Class [ExampleNotifier].
///
/// An example notifier.
/// It can send a notification object to listeners,
/// notifying the listeners that are registered for that object's type
/// or for a parent type of it.
class ExampleNotifier extends TypedEventNotifier<Event> {
/// Create [ExampleNotifier] instance.
ExampleNotifier();
int _currentPage = 0;
final Set<int> _loadedPages = <int>{};
/// Will notify listeners with [CurrentPageChangedEvent] event.
set currentPage(int index) {
_currentPage = index;
notifyListeners(CurrentPageChangedEvent(currentPage: _currentPage));
}
/// Will notify listeners with [PagesLoadedEvent] event.
set loadedPages(Set<int> set) {
_loadedPages.addAll(set);
notifyListeners(PagesLoadedEvent(pages: set));
}
}
// Part of the example: a listener for the `current page changed` event only.
class _CurrentPageOnlyListenerState extends State<CurrentPageOnlyListener> {
String message = 'CurrentPageOnly: empty';
// Will receive events only with CurrentPageChangedEvent type.
void currentPageChanged(CurrentPageChangedEvent event) {
setState(() {
message = 'CurrentPageOnly: now current page is ${event.currentPage}';
});
}
@override
void initState() {
widget.notifier.addListener(currentPageChanged);
super.initState();
}
@override
void dispose() {
widget.notifier.removeListener(currentPageChanged);
super.dispose();
}
@override
Widget build(BuildContext context) {
return Text(message);
}
}
// Part of the example: a listener for any event.
class _AnyListenerState extends State<AnyListener> {
String message = 'Any: empty';
// Will receive events with CurrentPageChangedEvent and PagesLoadedEvent type.
void any(Event event) {
if (event is CurrentPageChangedEvent) {
setState(() {
message = 'Any: now current page is ${event.currentPage}';
});
}
if (event is PagesLoadedEvent) {
setState(() {
message = 'Any: new loaded pages is ${event.pages}';
});
}
}
@override
void initState() {
widget.notifier.addListener(any);
super.initState();
}
@override
void dispose() {
widget.notifier.removeListener(any);
super.dispose();
}
@override
Widget build(BuildContext context) {
return Text(message);
}
}
// The example events that will be sent through the notifier.
// They share an abstract base class (used as the parent type)
// that concrete event types extend;
// here, two types with different content.
/// Class [Event].
abstract class Event {
/// Create [Event] instance.
Event();
}
/// Class [CurrentPageChangedEvent].
class CurrentPageChangedEvent extends Event {
/// Index of current page.
final int currentPage;
/// Create [CurrentPageChangedEvent] instance.
CurrentPageChangedEvent({
required this.currentPage,
}) : super();
}
/// Class [PagesLoadedEvent].
class PagesLoadedEvent extends Event {
/// Indexes of loaded pages.
final Set<int> pages;
/// Create [PagesLoadedEvent] instance.
PagesLoadedEvent({
required this.pages,
}) : super();
}
Run this command:
With Flutter:
$ flutter pub add typed_event_notifier
This will add a line like this to your package's pubspec.yaml (and run an implicit flutter pub get):
dependencies:
typed_event_notifier: ^0.0.2
Alternatively, your editor might support flutter pub get. Check the docs for your editor to learn more.
Now in your Dart code, you can use:
import 'package:typed_event_notifier/typed_event_notifier.dart';
example/lib/main.dart
import 'dart:math';
import 'package:flutter/foundation.dart';
import 'package:flutter/material.dart';
import 'package:typed_event_notifier/typed_event_notifier.dart';
void main() {
runApp(const App());
}
/// Example app.
class App extends StatelessWidget {
/// Create [App] instance.
const App({Key? key}) : super(key: key);
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter Demo',
theme: ThemeData(
primarySwatch: Colors.blue,
),
home: MyHomePage(
title: 'Event Notifier Demo',
notifier: notifier,
),
);
}
}
/*
The example events that will be sent through the notifier.
They share an abstract base class (used as the parent type)
that concrete event types extend;
here, two types with different content.
*/
/// Class [Event].
abstract class Event {
/// Create [Event] instance.
Event();
}
/// Class [CurrentPageChangedEvent].
class CurrentPageChangedEvent extends Event {
/// Index of current page.
final int currentPage;
/// Create [CurrentPageChangedEvent] instance.
CurrentPageChangedEvent({
required this.currentPage,
}) : super();
}
/// Class [PagesLoadedEvent].
class PagesLoadedEvent extends Event {
/// Indexes of loaded pages.
final Set<int> pages;
/// Create [PagesLoadedEvent] instance.
PagesLoadedEvent({
required this.pages,
}) : super();
}
/*
The example notifier.
It can send a notification object to listeners,
notifying the listeners that are registered for that object's type
or for a parent type of it.
*/
/// The notifier instance used by the demo.
final ExampleNotifier notifier = ExampleNotifier();
/// Class [ExampleNotifier]
class ExampleNotifier extends TypedEventNotifier<Event> {
/// Create [ExampleNotifier] instance.
ExampleNotifier();
int _currentPage = 0;
/// Current index of page.
int get currentPage => _currentPage;
set currentPage(int index) {
_currentPage = index;
notifyListeners(CurrentPageChangedEvent(currentPage: currentPage));
}
final Set<int> _loadedPages = <int>{};
/// List of indexes of loaded pages.
List<int> get loadedPages => _loadedPages.toList(growable: false);
set loadedPages(List<int> list) {
final Set<int> loadedPages = list.toSet();
_loadedPages.addAll(loadedPages);
notifyListeners(PagesLoadedEvent(pages: loadedPages));
}
}
/*
An example listener for the `current page changed` event only.
*/
/// Class [CurrentPageOnlyListener].
class CurrentPageOnlyListener extends StatefulWidget {
/// Create [CurrentPageOnlyListener] instance.
const CurrentPageOnlyListener({
required this.notifier,
Key? key,
}) : super(key: key);
/// Notifier.
final ExampleNotifier notifier;
@override
State<CurrentPageOnlyListener> createState() =>
_CurrentPageOnlyListenerState();
}
class _CurrentPageOnlyListenerState extends State<CurrentPageOnlyListener> {
String message = 'CurrentPageOnly: empty';
void currentPageChanged(CurrentPageChangedEvent event) {
setState(() {
message = 'CurrentPageOnly: now current page is ${event.currentPage}';
});
}
@override
void initState() {
widget.notifier.addListener(currentPageChanged);
super.initState();
}
@override
void dispose() {
widget.notifier.removeListener(currentPageChanged);
super.dispose();
}
@override
Widget build(BuildContext context) {
return Text(message);
}
}
/*
An example listener for any event.
*/
/// Class [AnyListener].
class AnyListener extends StatefulWidget {
/// Create [AnyListener] instance.
const AnyListener({
required this.notifier,
Key? key,
}) : super(key: key);
/// Notifier.
final ExampleNotifier notifier;
@override
State<AnyListener> createState() => _AnyListenerState();
}
class _AnyListenerState extends State<AnyListener> {
String message = 'Any: empty';
void any(Event event) {
if (event is CurrentPageChangedEvent) {
setState(() {
message = 'Any: now current page is ${event.currentPage}';
});
}
if (event is PagesLoadedEvent) {
setState(() {
message = 'Any: new loaded pages is ${event.pages}';
});
}
}
@override
void initState() {
widget.notifier.addListener(any);
super.initState();
}
@override
void dispose() {
widget.notifier.removeListener(any);
super.dispose();
}
@override
Widget build(BuildContext context) {
return Text(message);
}
}
/// Class [MyHomePage].
class MyHomePage extends StatelessWidget {
/// Create [MyHomePage] instance.
const MyHomePage({
required this.title,
required this.notifier,
Key? key,
}) : super(key: key);
/// Title of homepage.
final String title;
/// Notifier.
final ExampleNotifier notifier;
void _setNewCurrentPage() {
final Random random = Random();
notifier.currentPage = random.nextInt(100);
}
void _setNewLoadedPages() {
final Random random = Random();
notifier.loadedPages = <int>[
random.nextInt(100),
random.nextInt(100),
random.nextInt(100)
];
}
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: Text(title),
),
body: Center(
child: Column(
mainAxisAlignment: MainAxisAlignment.center,
children: <Widget>[
CurrentPageOnlyListener(notifier: notifier),
const SizedBox(height: 10),
AnyListener(notifier: notifier),
const SizedBox(height: 40),
const Text(
'You can push the buttons to notify listeners.',
),
const SizedBox(height: 20),
ElevatedButton(
onPressed: _setNewCurrentPage,
child: const Text('New Current Page'),
),
const SizedBox(height: 10),
ElevatedButton(
onPressed: _setNewLoadedPages,
child: const Text('New Loaded Pages List'),
),
],
),
),
);
}
}
Author: EvGeniyLell
Source Code: https://github.com/EvGeniyLell/typed_event_notifier
License: MIT license
1660469700
Benchmark your event loop, extracted from hapi, hoek, heavy and boom.
To install loopbench, simply use npm:
npm i loopbench --save
See example.js.
Creates a new instance of loopbench.
Options:
sampleInterval: the interval at which the event loop should be sampled, defaults to 5.
limit: the maximum amount of delay that is tolerated before overLimit becomes true and the load event is emitted, defaults to 42.
Events:
load, emitted when instance.delay > instance.limit
unload, emitted when overLimit goes from true to false
instance.delay is the delay in milliseconds (and fractions) from the expected run. It might be negative (on older Node.js versions).
instance.limit is the maximum amount of delay that is tolerated before overLimit becomes true and the load event is emitted.
instance.overLimit is true if instance.delay > instance.limit.
instance.stop() stops the sampling.
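The measurement idea behind loopbench, scheduling a repeating timer and comparing when it actually fires against when it was expected to, can be sketched with plain Node.js timers. This is a simplified illustration of the technique, not loopbench's implementation (see example.js for real usage):

```javascript
// Simplified sketch of event-loop delay sampling (not loopbench's actual code).
function makeSampler({ sampleInterval = 5, limit = 42 } = {}) {
  const sampler = { delay: 0, limit, overLimit: false };
  let last = Date.now();
  const timer = setInterval(() => {
    const now = Date.now();
    // How much later than scheduled did this tick fire?
    sampler.delay = now - last - sampleInterval;
    sampler.overLimit = sampler.delay > sampler.limit;
    last = now;
  }, sampleInterval);
  timer.unref(); // do not keep the process alive just for sampling
  sampler.stop = () => clearInterval(timer);
  return sampler;
}

// Usage: block the event loop and observe the measured delay.
const sampler = makeSampler({ sampleInterval: 5, limit: 42 });
const end = Date.now() + 100;
while (Date.now() < end) {} // busy-wait starves the event loop for ~100 ms
setTimeout(() => {
  // After the blocking loop, delay should be large and overLimit true.
  console.log(sampler.delay, sampler.overLimit);
  sampler.stop();
}, 0);
```

Any synchronous work that starves the event loop delays the sampling timer, so the measured lateness is a direct proxy for event-loop load, which is the signal loopbench exposes through delay, overLimit, and the load/unload events.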
Author: Mcollina
Source Code: https://github.com/mcollina/loopbench
License: MIT license