A Julia Package for Dynamical Billiard Systems in Two Dimensions


A Julia package for dynamical billiard systems in two dimensions. The goal of the package is to provide a flexible and intuitive framework for fast implementation of billiard systems of arbitrary construction.

If you have used this package for research that resulted in a publication, please be kind enough to cite the papers listed in the CITATION.bib file.


Please see the documentation for a list of features, tutorials and installation instructions.


This package is mainly developed by George Datseris. However, this development would not have been possible without significant help from other people:

  1. Lukas Hupe (@lhupe) contributed the Lyapunov spectrum calculation for magnetic propagation, implemented the boundary map function and made other contributions in bringing this package to version 2.0 (see here).
  2. Diego Tapias (@dapias) contributed the Lyapunov spectrum calculation method for straight propagation.
  3. David P. Sanders (@dpsanders) and Ragnar Fleischmann contributed in fruitful discussions about the programming and physics of billiard systems all-around.
  4. Christopher Rackauckas (@ChrisRackauckas) helped set up the continuous integration, testing, documentation publishing and all-around package development-related concepts.
  5. Tony Kelman (@tkelman) helped significantly in the package publication process, especially in making it work correctly without destroying METADATA.jl.

Download Details:

Author: JuliaDynamics
Source Code: https://github.com/JuliaDynamics/DynamicalBilliards.jl 
License: View license

#julia #physics #models #chaos 


Toxiproxy: A TCP Proxy To Simulate Network and System Conditions


Toxiproxy is a framework for simulating network conditions. It's made specifically to work in testing, CI and development environments, supporting deterministic tampering with connections, but with support for randomized chaos and customization. Toxiproxy is the tool you need to prove with tests that your application doesn't have single points of failure. We've been successfully using it in all development and test environments at Shopify since October, 2014. See our blog post on resiliency for more information.

Toxiproxy usage consists of two parts. A TCP proxy written in Go (what this repository contains) and a client communicating with the proxy over HTTP. You configure your application to make all test connections go through Toxiproxy and can then manipulate their health via HTTP. See Usage below on how to set up your project.

For example, to add 1000ms of latency to the response of MySQL from the Ruby client:

Toxiproxy[:mysql_master].downstream(:latency, latency: 1000).apply do
  Shop.first # this takes at least 1s
end

To take down all Redis instances:

Toxiproxy[/redis/].down do
  Shop.first # this will throw an exception
end

While the examples in this README are currently in Ruby, there's nothing stopping you from creating a client in any other language (see Clients).

Why yet another chaotic TCP proxy?

The existing ones we found didn't provide the kind of dynamic API we needed for integration and unit testing. Linux tools like nc and so on are not cross-platform and require root, which makes them problematic in test, development and CI environments.



Let's walk through an example with a Rails application. Note that Toxiproxy is in no way tied to Ruby, it's just been our first use case. You can see the full example at sirupsen/toxiproxy-rails-example. To get started right away, jump down to Usage.

For our popular blog, for some reason we're storing the tags for our posts in Redis and the posts themselves in MySQL. We might have a Post class that includes some methods to manipulate tags in a Redis set:

class Post < ActiveRecord::Base
  # Return an Array of all the tags.
  def tags
    TagRedis.smembers(tag_key)
  end

  # Add a tag to the post.
  def add_tag(tag)
    TagRedis.sadd(tag_key, tag)
  end

  # Remove a tag from the post.
  def remove_tag(tag)
    TagRedis.srem(tag_key, tag)
  end

  # Return the key in Redis for the set of tags for the post.
  def tag_key
    "post:tags:#{id}" # example key scheme
  end
end
We've decided that erroring while writing to the tag data store (adding/removing) is OK. However, if the tag data store is down, we should be able to see the post with no tags. We could simply rescue the Redis::CannotConnectError around the SMEMBERS Redis call in the tags method. Let's use Toxiproxy to test that.

Since we've already installed Toxiproxy and it's running on our machine, we can skip to step 2. This is where we need to make sure Toxiproxy has a mapping for Redis tags. To config/boot.rb (before any connection is made) we add:

require 'toxiproxy'

Toxiproxy.populate([
  {
    name: "toxiproxy_test_redis_tags",
    listen: "127.0.0.1:22222",  # the port the test client connects to below
    upstream: "127.0.0.1:6379"  # assuming Redis on its default port
  }
])

Then in config/environments/test.rb we set the TagRedis to be a Redis client that connects to Redis through Toxiproxy by adding this line:

TagRedis = Redis.new(port: 22222)

All calls in the test environment now go through Toxiproxy. That means we can add a unit test where we simulate a failure:

test "should return empty array when tag redis is down when listing tags" do
  @post.add_tag "mammals"

  # Take down all Redises in Toxiproxy
  Toxiproxy[/redis/].down do
    assert_equal [], @post.tags
  end
end
The test fails with Redis::CannotConnectError. Perfect! Toxiproxy took down the Redis successfully for the duration of the closure. Let's fix the tags method to be resilient:

def tags
  TagRedis.smembers(tag_key)
rescue Redis::CannotConnectError
  []
end
The tests pass! We now have a unit test that proves fetching the tags when Redis is down returns an empty array, instead of throwing an exception. For full coverage you should also write an integration test that wraps fetching the entire blog post page when Redis is down.

Full example application is at sirupsen/toxiproxy-rails-example.


Configuring a project to use Toxiproxy consists of three steps:

  1. Installing Toxiproxy
  2. Populating Toxiproxy
  3. Using Toxiproxy

1. Installing Toxiproxy


See Releases for the latest binaries and system packages for your architecture.


On Ubuntu/Debian:

$ wget -O toxiproxy-2.1.4.deb https://github.com/Shopify/toxiproxy/releases/download/v2.1.4/toxiproxy_2.1.4_amd64.deb
$ sudo dpkg -i toxiproxy-2.1.4.deb
$ sudo service toxiproxy start


With Homebrew:

$ brew tap shopify/shopify
$ brew install toxiproxy

Or with MacPorts:

$ port install toxiproxy


Toxiproxy for Windows is available for download at https://github.com/Shopify/toxiproxy/releases/download/v2.1.4/toxiproxy-server-windows-amd64.exe


Toxiproxy is available on the GitHub container registry. Old versions <= 2.1.4 are available on Docker Hub.

$ docker pull ghcr.io/shopify/toxiproxy
$ docker run --rm -it ghcr.io/shopify/toxiproxy

If using Toxiproxy from the host rather than other containers, enable host networking with --net=host.

$ docker run --rm --entrypoint="/toxiproxy-cli" -it ghcr.io/shopify/toxiproxy list


If you have Go installed, you can build Toxiproxy from source using the Makefile:

$ make build
$ ./toxiproxy-server

Upgrading from Toxiproxy 1.x

In Toxiproxy 2.0 several changes were made to the API that make it incompatible with version 1.x. In order to use version 2.x of the Toxiproxy server, you will need to make sure your client library supports the same version. You can check which version of Toxiproxy you are running by looking at the /version endpoint.

See the documentation for your client library for specific library changes. Detailed changes for the Toxiproxy server can be found in CHANGELOG.md.

2. Populating Toxiproxy

When your application boots, it needs to make sure that Toxiproxy knows which endpoints to proxy where. The main parameters are the proxy name, the address for Toxiproxy to listen on, and the address of the upstream.

Some client libraries have helpers for this task, which is essentially just making sure each proxy in a list is created. Example from the Ruby client:

# Make sure `shopify_test_redis_master` and `shopify_test_mysql_master` are
# present in Toxiproxy
Toxiproxy.populate([
  {
    name: "shopify_test_redis_master",
    listen: "127.0.0.1:22220",  # the port used in the examples below
    upstream: "127.0.0.1:6379"  # assuming Redis on its default port
  },
  {
    name: "shopify_test_mysql_master",
    listen: "127.0.0.1:24220",  # illustrative; any free non-ephemeral port works
    upstream: "127.0.0.1:3306"  # assuming MySQL on its default port
  }
])

This code needs to run as early in boot as possible, before any code establishes a connection through Toxiproxy. Please check your client library for documentation on the population helpers.

Alternatively use the CLI to create proxies, e.g.:

toxiproxy-cli create -l localhost:26379 -u localhost:6379 shopify_test_redis_master

We recommend a naming scheme such as the above: <app>_<env>_<data store>_<shard>. This makes sure there are no clashes between applications using the same Toxiproxy.

For large applications we recommend storing the Toxiproxy configurations in a separate configuration file. We use config/toxiproxy.json. This file can be passed to the server using the -config option, or loaded by the application to use with the populate function.

An example config/toxiproxy.json:

[
  {
    "name": "web_dev_frontend_1",
    "listen": "[::]:18080",
    "upstream": "webapp.domain:8080",
    "enabled": true
  },
  {
    "name": "web_dev_mysql_1",
    "listen": "[::]:13306",
    "upstream": "database.domain:3306",
    "enabled": true
  }
]

Use ports outside the ephemeral port range to avoid random port conflicts. It's 32,768 to 61,000 on Linux by default, see /proc/sys/net/ipv4/ip_local_port_range.
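A quick way to sanity-check a chosen listen port against the default Linux ephemeral range, as a hedged Python helper (the live range on your machine is in /proc/sys/net/ipv4/ip_local_port_range):

```python
def outside_ephemeral_range(port, low=32768, high=61000):
    """True if `port` avoids the default Linux ephemeral port range,
    reducing the chance of a random port conflict with Toxiproxy listeners."""
    return port < low or port > high
```

For example, the listen ports used in this README (22220, 22222) are safely below the range.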

3. Using Toxiproxy

To use Toxiproxy, you now need to configure your application to connect through Toxiproxy. Continuing with our example from step two, we can configure our Redis client to connect through Toxiproxy:

# old straight to redis
redis = Redis.new(port: 6380)

# new through toxiproxy
redis = Redis.new(port: 22220)

Now you can tamper with it through the Toxiproxy API. In Ruby:

redis = Redis.new(port: 22220)

Toxiproxy[:shopify_test_redis_master].downstream(:latency, latency: 1000).apply do
  redis.get("test") # will take 1s
end
Or via the CLI:

toxiproxy-cli toxic add -t latency -a latency=1000 shopify_test_redis_master

Please consult your respective client library on usage.

4. Logging

There are the following log levels: panic, fatal, error, warn (or warning), info, debug and trace. The level can be set via the LOG_LEVEL environment variable.


Toxics manipulate the pipe between the client and upstream. They can be added to and removed from proxies using the HTTP API. Each toxic has its own parameters to change how it affects the proxy links.

For documentation on implementing custom toxics, see CREATING_TOXICS.md


latency

Add a delay to all data going through the proxy. The delay is equal to latency +/- jitter.


  • latency: time in milliseconds
  • jitter: time in milliseconds
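The latency toxic's semantics can be sketched with a tiny Python model (an illustration of the parameters, not the Go implementation; the uniform distribution is a modelling assumption):

```python
import random

def sample_delay_ms(latency, jitter, rng=random):
    """Per-write delay applied by a latency toxic: latency +/- jitter,
    drawn uniformly (assumption for illustration)."""
    return latency + rng.uniform(-jitter, jitter)
```

With latency=1000 and jitter=250, each write is delayed somewhere between 750 and 1250 ms.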


down

Bringing a service down is not technically a toxic in the implementation of Toxiproxy. This is done by POSTing to /proxies/{proxy} and setting the enabled field to false.


bandwidth

Limit a connection to a maximum number of kilobytes per second.


  • rate: rate in KB/s


slow_close

Delay the TCP socket from closing until delay has elapsed.


  • delay: time in milliseconds


timeout

Stops all data from getting through, and closes the connection after timeout. If timeout is 0, the connection won't close, and data will be delayed until the toxic is removed.


  • timeout: time in milliseconds


reset_peer

Simulate TCP RESET (Connection reset by peer) on the connections by closing the stub Input immediately or after a timeout.


  • timeout: time in milliseconds


slicer

Slices TCP data up into small bits, optionally adding a delay between each sliced "packet".


  • average_size: size in bytes of an average packet
  • size_variation: variation in bytes of an average packet (should be smaller than average_size)
  • delay: time in microseconds to delay each packet by
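To show how average_size and size_variation interact, here is a minimal Python sketch of the chunking step (the real toxic slices the live TCP stream; the function name is hypothetical):

```python
import random

def slice_packets(data, average_size, size_variation, rng=random):
    """Split `data` into chunks of average_size +/- size_variation bytes,
    mimicking the slicer toxic's packet slicing."""
    chunks, i = [], 0
    while i < len(data):
        size = max(1, average_size + rng.randint(-size_variation, size_variation))
        chunks.append(data[i:i + size])
        i += size
    return chunks
```

Joining the chunks back together always reproduces the original byte stream; only the packet boundaries (and optionally timing) change.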


limit_data

Closes the connection when the transmitted data exceeds the limit.

  • bytes: number of bytes it should transmit before connection is closed
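A Python sketch of the limit_data semantics (illustrative only): data passes through until the byte budget is exhausted, after which the real toxic closes the connection.

```python
def transmit_until_limit(chunks, limit):
    """Forward chunks until `limit` bytes have been transmitted, then stop
    (the real toxic closes the connection at that point)."""
    sent, out = 0, []
    for chunk in chunks:
        remaining = limit - sent
        if remaining <= 0:
            break
        out.append(chunk[:remaining])
        sent += len(out[-1])
    return b"".join(out)
```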


All communication with the Toxiproxy daemon from the client happens through the HTTP interface, which is described here.

Toxiproxy listens for HTTP on port 8474.

Proxy fields:

  • name: proxy name (string)
  • listen: listen address (string)
  • upstream: proxy upstream address (string)
  • enabled: true/false (defaults to true on creation)

To change a proxy's name, it must be deleted and recreated.

Changing the listen or upstream fields will restart the proxy and drop any active connections.

If listen is specified with a port of 0, toxiproxy will pick an ephemeral port. The listen field in the response will be updated with the actual port.

If you change enabled to false, it will take down the proxy. You can switch it back to true to reenable it.

Toxic fields:

  • name: toxic name (string, defaults to <type>_<stream>)
  • type: toxic type (string)
  • stream: link direction to affect (defaults to downstream)
  • toxicity: probability of the toxic being applied to a link (defaults to 1.0, 100%)
  • attributes: a map of toxic-specific attributes

See Toxics for toxic-specific attributes.

The stream direction must be either upstream or downstream. upstream applies the toxic on the client -> server connection, while downstream applies the toxic on the server -> client connection. This can be used to modify requests and responses separately.


All endpoints are JSON.

  • GET /proxies - List existing proxies and their toxics
  • POST /proxies - Create a new proxy
  • POST /populate - Create or replace a list of proxies
  • GET /proxies/{proxy} - Show the proxy with all its active toxics
  • POST /proxies/{proxy} - Update a proxy's fields
  • DELETE /proxies/{proxy} - Delete an existing proxy
  • GET /proxies/{proxy}/toxics - List active toxics
  • POST /proxies/{proxy}/toxics - Create a new toxic
  • GET /proxies/{proxy}/toxics/{toxic} - Get an active toxic's fields
  • POST /proxies/{proxy}/toxics/{toxic} - Update an active toxic
  • DELETE /proxies/{proxy}/toxics/{toxic} - Remove an active toxic
  • POST /reset - Enable all proxies and remove all active toxics
  • GET /version - Returns the server version number
  • GET /metrics - Returns Prometheus-compatible metrics

Populating Proxies

Proxies can be added and configured in bulk using the /populate endpoint. This is done by passing a json array of proxies to toxiproxy. If a proxy with the same name already exists, it will be compared to the new proxy and replaced if the upstream and listen address don't match.

A /populate call can be included for example at application start to ensure all required proxies exist. It is safe to make this call several times, since proxies will be untouched as long as their fields are consistent with the new data.
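The idempotency rule above can be expressed as a small predicate (a Python sketch of the documented behaviour, not the server's Go code): a proxy of the same name is recreated only when its addresses changed.

```python
def needs_replacement(existing, incoming):
    """True when /populate must replace a same-named proxy:
    only if the listen or upstream address differs."""
    return (existing["listen"] != incoming["listen"]
            or existing["upstream"] != incoming["upstream"])
```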

CLI Example

$ toxiproxy-cli create -l localhost:26379 -u localhost:6379 redis
Created new proxy redis
$ toxiproxy-cli list
Listen          Upstream        Name  Enabled Toxics
======================================================================
127.0.0.1:26379 localhost:6379  redis true    None

Hint: inspect toxics with `toxiproxy-cli inspect <proxyName>`
$ redis-cli -p 26379
127.0.0.1:26379> SET omg pandas
OK
127.0.0.1:26379> GET omg
"pandas"

$ toxiproxy-cli toxic add -t latency -a latency=1000 redis
Added downstream latency toxic 'latency_downstream' on proxy 'redis'

$ redis-cli -p 26379
127.0.0.1:26379> GET omg
"pandas"
(1.00s)
127.0.0.1:26379> DEL omg
(integer) 1

$ toxiproxy-cli toxic remove -n latency_downstream redis
Removed toxic 'latency_downstream' on proxy 'redis'

$ redis-cli -p 26379
127.0.0.1:26379> GET omg
(nil)

$ toxiproxy-cli delete redis
Deleted proxy redis

$ redis-cli -p 26379
Could not connect to Redis at 127.0.0.1:26379: Connection refused


Toxiproxy exposes Prometheus-compatible metrics via its HTTP API at /metrics. See METRICS.md for full descriptions.

Frequently Asked Questions

How fast is Toxiproxy? The speed of Toxiproxy depends largely on your hardware, but you can expect a latency of < 100µs when no toxics are enabled. When running with GOMAXPROCS=4 on a Macbook Pro we achieved ~1000MB/s throughput, and as high as 2400MB/s on a higher end desktop. Basically, you can expect Toxiproxy to move data around at least as fast as the app you're testing.

Can Toxiproxy do randomized testing? Many of the available toxics can be configured to have randomness, such as jitter in the latency toxic. There is also a global toxicity parameter that specifies the percentage of connections a toxic will affect. This is most useful for things like the timeout toxic, which would allow X% of connections to timeout.

I am not seeing my Toxiproxy actions reflected for MySQL. MySQL will prefer the local Unix domain socket for some clients, no matter which port you pass it if the host is set to localhost. Configure your MySQL server to not create a socket, and use 127.0.0.1 as the host. Remember to remove the old socket after you restart the server.

Toxiproxy causes intermittent connection failures. Use ports outside the ephemeral port range to avoid random port conflicts. It's 32,768 to 61,000 on Linux by default, see /proc/sys/net/ipv4/ip_local_port_range.

Should I run a Toxiproxy for each application? No, we recommend using the same Toxiproxy for all applications. To distinguish between services we recommend naming your proxies with the scheme: <app>_<env>_<data store>_<shard>. For example, shopify_test_redis_master or shopify_development_mysql_1.


  • make. Build a toxiproxy development binary for the current platform.
  • make all. Build Toxiproxy binaries and packages for all platforms. Requires Go with cross-compilation enabled for Linux and Darwin (amd64), as well as goreleaser in your $PATH, to build the binaries and the Linux package.
  • make test. Run the Toxiproxy tests.



Author: Shopify
Source Code: https://github.com/shopify/toxiproxy 
License: MIT license

#go #golang #testing #proxy #chaos 

Lulu Hegmann


Web Service Which Feeds Navitia with Real-Time Disruptions: Chaos


Chaos is the web service which can feed Navitia with real-time disruptions. It can work together with Kirin which can feed Navitia with real-time delays.

chaos schema global

API Documentation


For french users, you can see this FAQ


The hard way

Clone the Chaos repository
git clone git@github.com:CanalTP/Chaos.git
cd Chaos

  • PostgreSQL 9.6 sudo apt-get install postgresql-9.6 postgresql-server-dev-9.6 libpq-dev
  • RabbitMQ
  • Install Python2.7 sudo apt-get install python2.7 python2.7-dev

    or sudo apt install python2 python2-dev on recent Linux releases

  • Install pip

  • Install virtualenv

virtualenv venv
source venv/bin/activate
pip install -r requirements.txt

Install & build protobuf
  • Install protobuf

You can use sudo apt-get install protobuf-compiler if you’re sure it won’t install version 3.x.x (incompatible).

Or install protoc by building it from source: protobuf v2.6.1. After downloading, from inside the unzipped folder:

./configure
make
sudo make install
make clean

Check your version

protoc --version

  • Build protobuf, back into Chaos project folder
git submodule init
git submodule update
./setup.py build_pbf

Create the database
sudo -i -u postgres
# Create a user
createuser -P navitia
(password "navitia")

# Create database
createdb -O navitia chaos

# Create database for tests
createdb -O navitia chaos_testing
ctrl + d

Cache configuration

To improve its performance Chaos can use Redis.

Install Redis

Installing Redis

Using Chaos without Redis

You can deactivate Redis usage in default_settings.py by changing ‘CACHE_TYPE’ to ‘simple’

Using Chaos without cache

For development purpose you can deactivate cache usage in default_settings.py by forcing ‘CACHE_TYPE’ to ‘null’

Run Chaos with honcho (optional)
Install honcho

You can use honcho for managing Procfile-based applications.

pip install honcho

Upgrade database
honcho run ./manage.py db upgrade

RabbitMQ (optional)

RabbitMQ is optional and you can deactivate it if you don’t want to send disruptions to a queue.

# chaos/default_settings.py

Run Chaos
honcho start

The easy way (with Docker)
git clone git@github.com:CanalTP/Chaos.git
cd Chaos
git submodule init
git submodule update
docker-compose up -d

To watch logs output:

docker-compose logs -f

Chaos will be accessible on http://chaos_ws_1.docker if you are using the docker-gen-hosts tool, it will also be accessible on http://chaos-ws.local.canaltp.fr The database will be accessible at ‘chaos_database_1.docker’ and default RabbitMQ interface at ‘http://chaos_rabbitmq_1.docker:15672’.

Security (optional)

If you want to add more security, you can add a file chaos/clients_tokens.json with the client code and navitia tokens like:

{
  "client_code": [
    "a_navitia_token"
  ]
}

client_code should be the same as the value of the X-Customer-Id header in the HTTP request, and the token should be the same as the value of the Authorization header. If the file doesn't exist, security is disabled.

You can add a ‘master’ key in the file. It will allow you to access all resources for all clients.
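The check described above can be sketched in Python (hypothetical helper and parameter names; the real service reads chaos/clients_tokens.json):

```python
def authorized(headers, clients_tokens, master_key=None):
    """Mimic the documented token check: X-Customer-Id selects the client,
    Authorization must carry one of its tokens; no token file means security
    is disabled, and a 'master' key grants access to all clients."""
    if clients_tokens is None:
        return True  # file absent: security disabled
    token = headers.get("Authorization")
    if master_key is not None and token == master_key:
        return True
    client = headers.get("X-Customer-Id")
    return token in clients_tokens.get(client, [])
```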


Unit tests

cd tests
honcho run nosetests

Functional tests

cd tests
honcho run lettuce

To stop directly on faulty test

cd tests
honcho run lettuce --failfast

With docker

docker-compose -f docker-compose.test.yml build --pull
docker-compose -f docker-compose.test.yml up -d
docker-compose -f docker-compose.test.yml exec -T chaos /bin/sh ./docker/tests.sh
docker-compose -f docker-compose.test.yml down --remove-orphans


Copyright © since 2001, Kisio Digital and/or its affiliates. All rights reserved. This project is part of Navitia surround, the sprawling API to build cool stuff with public transport.

Hope you’ll enjoy and contribute to this project, powered by Kisio Digital (www.kisio.com).

Help us simplify mobility and open up public transport: a never-ending quest for a more responsive way of traveling!

Download Details:

Author: CanalTP
Download Link: Download The Source Code
Official Website: https://github.com/CanalTP/Chaos
License: AGPL-3.0

This program is free software; you can redistribute it and/or modify it under the terms of the GNU Affero General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more details.

You should have received a copy of the GNU Affero General Public License along with this program. If not, see http://www.gnu.org/licenses/.

Stay tuned

Twitter @navitia

Tchat channel #navitia on riot

Forum Navitia on googlegroups


#chaos #docker #developer

Sofia Gardiner


The DiRT on Chaos Engineering at Google • Jason Cahoon • GOTO 2021

COURTNEY NASH: Prerequisites for Chaos Engineering

Chaos Engineering is often characterized as “breaking things in production” which lends it an air of something only feasible for elite or sophisticated organizations. In practice, it’s been a key element in digital transformation from the ground up for a number of companies ranging from pre-streaming Netflix to those in highly regulated industries like healthcare and financial services. In this talk, you’ll learn the basic prerequisites for Chaos Engineering, including a couple pragmatic ways to get started.

JASON CAHOON: The DiRT on Chaos Engineering @ Google

A shallow dive into 15 years of Chaos Engineering at Google, the lessons we’ve learned performing many thousands of disaster tests on production systems, and some tips on how to approach getting started with Chaos Engineering at your own organization.


  • 00:00 Intro
  • 01:02 DiRT: Disaster Resiliency Testing
  • 02:53 Why?
  • 04:38 What we test?
  • 06:01 Testing themes
  • 10:01 Practical vs theoretical
  • 12:31 How?
  • 15:12 Picking what to test
  • 16:29 Steps for bootstrapping a disaster testing program
  • 18:25 Testing production vs testing in production
  • 20:16 Really, you’re breaking production though?!
  • 23:00 Reporting on results
  • 24:24 What have we learned?
  • 26:55 Test example: Run at service level
  • 28:51 Test example: Toggle the O-N / O-F-F discriminator
  • 30:25 Test example: Run without dependencies
  • 31:53 Test example: Hacked!

#chaos #chaos-engineering #developer

Anton Palyonko


Automating Chaos with LitmusChaos to ensure Kubernetes Application Resiliency

As resilience use-cases proliferate, Chaos Engineering has become a compelling practice for enhancing your application resilience in production. If you’ve ever gone through the pain and anxiety of responding to an unexpected failure in your production system, then Chaos Engineering is the right fit for you.

Whether you want to run chaos manually or through CI/CD, Litmus brings together configurable environments to trigger chaos experiments automatically as application states change. Chaos on the Edge with Litmus brings together thought leaders, technologies, and customers across the entire Kubernetes community to share their knowledge and insight. If Chaos Engineering is on your radar, this is a talk you won't want to miss!

#kubernetes #chaos

Sofia Gardiner


Combining Chaos, Observability & Resilience to get Chaos Engineering

YURY NINO: Combining Chaos, Observability and Resilience to get Chaos Engineering

Chaos Engineering is becoming common practice for development teams. Many companies and technology experts have done a great job promoting a premise: “Reliability” is the most important feature in software applications. “Resilience” combined with “Chaos Engineering” is key for reaching this. However, the complete equation requires observability. Chaos Engineering and Resilience without observability is just Chaos.

MIKOLAJ PAWLIKOWSKI: Making Chaos Engineering Boring: debunking myths hampering adoption

Chaos Engineering offers some of the best ROI your teams can get: a little bit of experimentation can prevent colossal problems. But like any new technology, it needs to move through the adoption curve, from the obscure into the mainstream. In this talk, I’m going to address some of the roadblocks you’ll run into when trying to get CE onto the roadmap. Let’s debunk some myths!


  • 00:00 Intro
  • 01:25 Agenda
  • 02:21 Example: US flight 1549
  • 03:22 Chaos foundations
  • 05:53 Observability
  • 12:04 Reliability & resilience
  • 16:17 Chaos engineering
  • 22:56 Outro

Read the full abstract here:

#chaos #developer


The Principles of Chaos Engineering

Resilience is something those who use Kubernetes to run apps and microservices in containers aim for. When a system is resilient, it can handle losing a portion of its microservices and components without the entire system becoming inaccessible.

Resilience is achieved by integrating loosely coupled microservices. When a system is resilient, microservices can be updated or taken down without having to bring the entire system down. Scaling becomes easier too, since you don’t have to scale the whole cloud environment at once.

That said, resilience is not without its challenges. Building microservices that are independent yet work well together is not easy.

What Is Chaos Engineering?

Chaos Engineering has been around for almost a decade now, but it is still a relevant and useful concept to incorporate when improving your whole system architecture. In essence, Chaos Engineering is the process of triggering and injecting faults into a system deliberately. Instead of waiting for errors to occur, engineers can take deliberate steps to cause (or simulate) errors in a controlled environment.

Chaos Engineering allows for better, more advanced resilience testing. Developers can now experiment in cloud-native distributed systems. Experiments involve testing both the physical infrastructure and the cloud ecosystem.

Chaos Engineering is not a new approach. In fact, companies like Netflix have been using resilience testing for years through Chaos Monkey, an in-house Chaos Engineering framework designed to strengthen their cloud infrastructure.

When dealing with a large-scale distributed system, Chaos Engineering provides an empirical way of building confidence by anticipating faults instead of reacting to them. The chaotic condition is triggered intentionally for this purpose.

There are a lot of analogies depicting how Chaos Engineering works, but the traffic light analogy represents the concept best. Conventional testing is similar to testing traffic lights individually to make sure that they work.

Chaos Engineering, on the other hand, means closing out a busy array of intersections to see how traffic reacts to the chaos of losing traffic lights. Since the test is run deliberately, more insights can be collected from the process.

#devops #chaos engineering #chaos monkey #chaos #chaos testing
