Mike Kozey

Simulating CloudEvents With AsyncAPI and Microcks

CloudEvents and AsyncAPI are complementary specifications that help define your Event-Driven Architecture. Microcks lets you simulate CloudEvents to speed up development and ensure the autonomy of development teams.

The rise of Event-Driven Architecture (EDA) is a necessary evolutionary step toward cloud-native applications. Events are the ultimate weapon for decoupling the microservices within your architecture. They bring great benefits like space and time decoupling, better resiliency, and elasticity.

But events also come with challenges! One of the first challenges you will face when starting out as a development team (aside from the technology choice) is how to describe the structure of these events. Another challenge comes very quickly after: How can we work efficiently as a team without having to wait for someone else's events?

We’ll explore those two particular challenges and see how to simulate events using CloudEvents, AsyncAPI, and Microcks.

CloudEvents or AsyncAPI?

New standards like CloudEvents and AsyncAPI have emerged recently to address this need for structure description. People keep asking: Should I use CloudEvents or AsyncAPI? There is a common belief that CloudEvents and AsyncAPI compete on the same scope. I see things differently, and I’d like to explain why. Read on!

What Is CloudEvents?

From cloudevents.io:

CloudEvents is a specification for describing event data in common formats to provide interoperability across services, platforms, and systems.

CloudEvents’ purpose is to establish a common format for describing event data. It is part of the CNCF’s Serverless Working Group. Many integrations already exist within Knative Eventing, TriggerMesh, and Azure Event Grid, allowing true cross-vendor platform interoperability.
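To make the idea of a common format concrete, here is a minimal sketch of what a CloudEvents event looks like when serialized as JSON in structured mode. The event type, source, and payload below are hypothetical examples of my own, not taken from the specification or from Microcks:

```python
import json
from datetime import datetime, timezone
from uuid import uuid4

# A hypothetical "order created" event wrapped in the CloudEvents 1.0
# context attributes (specversion, id, source, and type are required).
event = {
    "specversion": "1.0",
    "id": str(uuid4()),
    "source": "/orders/service",           # hypothetical producer URI
    "type": "com.example.order.created",   # hypothetical event type
    "time": datetime.now(timezone.utc).isoformat(),
    "datacontenttype": "application/json",
    "data": {"orderId": "12345", "totalAmount": 59.90},
}

print(json.dumps(event, indent=2))
```

Whatever the business payload in `data`, the surrounding envelope stays the same, which is what gives consumers and intermediaries a predictable way to route and handle events.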

#tutorial #microservices #asyncapi #api mocking #cloudevents


Alec Nikolaus

Simulating a Queuing System in Python

We all have visited a bank at some point in our lives, and we are familiar with how banks operate. Customers enter, wait in a queue for their number to be called out, get service from a teller, and finally leave. This is a queueing system, and we encounter many queueing systems in our day-to-day lives; from grocery stores to amusement parks, they're everywhere. That's why we must try to make them as efficient as possible. There is a lot of randomness involved in these systems, which can cause huge delays, result in long queues, reduce efficiency, and even cause monetary loss. The randomness can be addressed by developing a discrete-event simulation model, which can be extremely helpful in improving operational efficiency by analyzing key performance measures.

In this project, I am going to be simulating a queueing system for a bank.

Let’s consider a bank that has two tellers. Customers arrive at the bank about every 3 minutes on average, according to a Poisson process. This arrival rate is assumed here but should be modeled from actual data to get accurate results. Customers wait in a single line for an idle teller. This type of system is referred to as an M/M/2 queueing system. The average time it takes to serve a customer is 1.2 minutes for the first teller and 1.5 minutes for the second teller; the service times are assumed to be exponential. When a customer enters the bank and both tellers are idle, they choose either one with equal probability. If a customer enters the bank and there are four people waiting in the line, they will leave the bank with probability 50%. If a customer enters the bank and there are five or more people waiting in the line, they will leave the bank with probability 60%.

Let's first try to visualize the system.

[Diagram: customers arrive, wait in a single queue, and are served by one of the two tellers]

Great! Now let's start building the model.
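Here is a minimal sketch of such a model using SimPy, a discrete-event simulation library for Python. The rates and balking rules come from the description above; the structure, parameter names, and the reporting at the end are my own choices and may differ from how the original post builds its model:

```python
import random
import simpy

ARRIVAL_MEAN = 3.0                # mean inter-arrival time (minutes), Poisson arrivals
SERVICE_MEAN = {1: 1.2, 2: 1.5}   # mean exponential service time per teller (minutes)
SIM_TIME = 8 * 60                 # simulate one 8-hour day, in minutes

def customer(env, tellers, waits, balked):
    arrival = env.now
    waiting = len(tellers.get_queue)          # customers already waiting in line
    # Balking rules: 4 waiting -> leave with prob 0.5, 5+ waiting -> leave with prob 0.6.
    if (waiting == 4 and random.random() < 0.5) or \
       (waiting >= 5 and random.random() < 0.6):
        balked.append(arrival)
        return
    # Wait for an idle teller. (Simplification: when both tellers are idle,
    # this sketch takes whichever is first in the store rather than choosing at random.)
    teller = yield tellers.get()
    waits.append(env.now - arrival)           # time spent waiting in the queue
    yield env.timeout(random.expovariate(1.0 / SERVICE_MEAN[teller]))
    yield tellers.put(teller)                 # the teller becomes idle again

def arrivals(env, tellers, waits, balked):
    while True:
        yield env.timeout(random.expovariate(1.0 / ARRIVAL_MEAN))
        env.process(customer(env, tellers, waits, balked))

env = simpy.Environment()
tellers = simpy.Store(env, capacity=2)
tellers.items = [1, 2]                        # both tellers start idle
waits, balked = [], []
env.process(arrivals(env, tellers, waits, balked))
env.run(until=SIM_TIME)

print(f"served {len(waits)} customers, {len(balked)} balked")
print(f"average wait: {sum(waits) / len(waits):.2f} minutes")
```

Running it a single time gives one possible day at the bank; because of the randomness, key measures such as the average wait should be estimated over many replications.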

#simulation #decision-making #queuing-theory #discrete-event-simulation #python

Mia Marquardt

Easier to use common Machine Learning in High Performance Computing simulations

SmartSim

SmartSim makes it easier to use common Machine Learning (ML) libraries, like PyTorch and TensorFlow, in High Performance Computing (HPC) simulations and workloads.

SmartSim provides an API to connect HPC (MPI + X) simulations written in Fortran, C, C++, and Python to an in-memory database called the Orchestrator. The Orchestrator is built on Redis, a popular caching database written in C. This connection between simulation and database is the fundamental paradigm of SmartSim. Simulations in the aforementioned languages can stream data to the Orchestrator and pull the data out in Python for online analysis, visualization, and training.
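To give a feel for that streaming pattern, here is a rough sketch using the SmartRedis Python client, the client library used to talk to the Orchestrator. It assumes an Orchestrator is already running at the address shown; the address, key name, and data are placeholders of my own, and the exact client API should be checked against the SmartSim/SmartRedis documentation:

```python
import numpy as np
from smartredis import Client  # SmartRedis client used to talk to the Orchestrator

# Assumption: an Orchestrator (the Redis-backed database) is already running,
# for example one launched through a SmartSim Experiment. The address is a placeholder.
client = Client(address="127.0.0.1:6379", cluster=False)

# The simulation side streams data into the database under a key...
state = np.random.rand(64, 64).astype(np.float32)
client.put_tensor("sim_state_step_0", state)

# ...and a separate Python analysis or training script pulls it back out
# for online analysis, visualization, or training.
retrieved = client.get_tensor("sim_state_step_0")
print(retrieved.shape)
```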

In addition, the Orchestrator is equipped with ML inference runtimes: PyTorch, TensorFlow, and ONNX. From inside a simulation, users can store and execute trained models and retrieve the result.

#machine learning #simulation #pytorch #tensorflow #high performance computing simulations

andrew cls

How can I run iPhone simulator over full-screen? - Fullscreen Xcode 11 and Simulator (2020)

https://www.youtube.com/watch?v=1EN988Xu8sU&t=15s


Madilyn Kihn

Simulation is Everywhere!

Simulation is everywhere!
In my next article, I describe how we use simulation at rideOS to improve our partners’ fleet.
tl;dr of this article: Simulation can be, and has been, used in a wide variety of domains.
In my previous article, we learned how simulation is about making something “similar enough”, and we primarily considered two domains: video games and professional training.

#research #robotics #simulation #artificial-intelligence #animation