An Open Source Chaos Engineering Library from AWS

AWS engineers recently wrote about AWSSSMChaosRunner, an open source chaos engineering tool they used for fault-injection testing in Prime Video. Built on AWS Systems Manager, which can execute arbitrary commands on EC2 instances, the tool helped the team find and mitigate latency-related issues.

AWSSSMChaosRunner uses AWS Systems Manager to remotely execute commands against a specific set of EC2 instances. The commands, specified declaratively as a collection, define the faults to be injected.

Varun Jewalikar, Software Engineer at Prime Video, and Adrian Hornsby, Principal Developer Advocate (Architecture) at AWS, write that typical chaos engineering experiments include simulating resource exhaustion and a failed or slow network. There are countermeasures for such scenarios but “they are rarely adequately tested, as unit or integration tests generally can’t validate them with high confidence”.

AWS Systems Manager is a tool that can perform various operational tasks across AWS resources through an agent component called SSM Agent. The agent, pre-installed by default on certain Windows and Linux AMIs, supports “Documents”, which are similar to executable runbooks. It can also run simple shell scripts, a feature AWSSSMChaosRunner leverages. The SendCommand API in SSM enables running commands across multiple instances, which can be targeted by AWS tags, and CloudWatch can be used to view the logs from all the instances in a single place.
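As an illustrative sketch, a tag-targeted SendCommand invocation via the AWS CLI could look like the following. The tag key, tag value, and command are hypothetical placeholders; running it requires configured AWS credentials and the SSM Agent on the target instances.

```shell
# Run a shell command on every instance carrying a given tag
# (hypothetical tag; requires AWS credentials and SSM Agent on targets).
TAG_KEY="Service"
TAG_VALUE="my-video-service"
TARGETS="Key=tag:${TAG_KEY},Values=${TAG_VALUE}"

aws ssm send-command \
  --document-name "AWS-RunShellScript" \
  --targets "$TARGETS" \
  --parameters 'commands=["echo chaos-experiment"]' \
  --cloud-watch-output-config "CloudWatchOutputEnabled=true"
```

Enabling CloudWatch output is what lets you view the logs from all targeted instances in one place, as the article notes.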

The agent takes care of security aspects such as creating a user for command execution on the EC2 instance. Examples of what the chaos runner can do include silently dropping all outgoing TCP traffic on a specific port, introducing network latency on an interface, and hogging the CPU. It’s important to note that the currently supported failure injections operate at either the infrastructure layer or the AWS service layer.
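For illustration, the kinds of Linux commands such experiments typically push through SSM look like the following. The port, interface, and duration are hypothetical; run them only as root on a disposable test instance.

```shell
# Introduce 200 ms of latency on eth0 (undo with the del command below).
tc qdisc add dev eth0 root netem delay 200ms
# tc qdisc del dev eth0 root netem

# Silently drop all outgoing TCP traffic to port 5432.
iptables -A OUTPUT -p tcp --dport 5432 -j DROP

# Hog a CPU core for 60 seconds (timeout kills the busy loop).
timeout 60 sh -c 'while :; do :; done'
```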

#chaos engineering #aws #devops #news

Tyrique Littel

An Open-Source Book About the Open Source World

“Open source” today is a term that often covers many things, such as open knowledge (Wikimedia projects), open hardware (Arduino, Raspberry Pi), open formats (ODT/ODS/ODP), and so on.

It is a world of opportunities that can be difficult to navigate, not only for newcomers but also for intermediate contributors. This article will help you discover how to approach specific roles, activities, or projects/communities in the best way.

Everything Started with “Coaching for Open Source Communities 2.0”

I decided to write a book, in my personal style, about my experience of the last 7 to 8 years in open source. I was surprised when I reached 100 pages covering various topics.

My idea was to write something that I would like to read, so nothing that is boring or complicated, but full of real facts.

The second goal was to include my experience but also my philosophy on contributing and how I contribute daily.

Thirdly, I wanted to give a lot of hints and resources and an overall view of this open source world.

Basically, I wanted to write something different from self-help or coaching books that include just a list of suggestions and best practices. Instead, I take real examples from real life in the OSS world.

As a contributor and developer, I prefer to have real cases to study: best practices are useful, but we need to learn from others, and this world is full of good and bad cases to discover.

I started writing the book after FOSDEM 2019, following two years on the Mozilla Reps Council. At that edition of FOSDEM I gave the talk “Coaching for Open Source Communities 2.0”, and after the feedback at the conference, and reflecting on my various roles, activities, and projects, it was time to write something.

In the end it wasn’t a manual but a book that gathered my experience, learnings, and best practices across localization, development, project maintenance, sysadmin work, community management, mentoring, public speaking, and more. It contains the following sections:

  • Biography - Not for self-promotion, but to explain my point of view and tell my story, which may be inspiring for others
  • Philosophy - Not the usual description of Open Source or the four freedoms, but what Open Source means and how you can help
  • How to live inside the Open Source world - A discovery of communications and tools, understanding the various kinds of people, and the best way to talk with your community
  • How to choose a project - Starting with some questions for yourself, and how to involve more people in your project
  • The activity - Open Source is based on tasks that fall into areas such as support, testing, marketing, and development
  • How to use your time - We are busy, we have a life, a job, and a family, but Open Source can be time-consuming
  • Why documentation is important - How writing documentation can be healthy for your community and for the project’s future and brand

There are also three appendices: manuals I wrote over the years, gathered and improved for this book, covering community management, public speaking, and mentoring.

The book ends with my point of view about the future and what we have to do to change opinions about those topics.

I wrote this book and published it in October 2019, but it was only possible with the help of reviewers and localizers who contributed improvements. Yes, because this book is open source and free for everyone.

I picked the GPL because that license changed the world, and my life, for the better; using it is simply a tribute. This choice may seem odd, since this is a book and licenses like Creative Commons are usually a better fit for written works.

#open-source #contributing-to-open-source #programming #software-development #development #coding #books #open-source-software

Ray Patel

Top 8 Java Open Source Projects You Should Get Your Hands-on [2021]

Learning Java is no easy feat. It’s a prevalent, in-demand programming language with applications in numerous sectors. We all know that the best way to learn a new skill is to use it. That’s why we recommend working on projects.

So if you’re a Java student, then you’ve come to the right place as this article will help you learn about the most popular Java open source projects. This way, you’d have a firm grasp of industry trends and the programming language’s applications.

However, before we discuss its various projects, it’s crucial to examine the place where you can get those projects – GitHub. Let’s begin.

#full stack development #java open source projects #java projects #open source projects #top 8 java open source projects


Chaos Engineering — How to Break AWS Infrastructure on Purpose

> 1. What is Chaos Engineering and why it matters

Chaos Engineering is the practice of testing a system’s robustness, reliability, and ability to survive a disaster without manual intervention.

It is a process where we deliberately and productively disrupt our infrastructure, then test how quickly and efficiently our applications and infra heal themselves and how well they hold up during a disaster or any system catastrophe.

Sounds interesting, huh?

Well, it is, because we get to experiment with, play with, and disrupt our infra, keenly observe how it reacts, and learn and improve from it. This makes our infra more robust and stable and gives us more confidence in our production stacks (which, I think, is very important).

We learn the weaknesses and leaks in our system, which helps us overcome those issues beforehand in our test environment.

There are many chaos experiments we can perform on our system, such as deleting a random EC2 instance or deleting services, which we shall explore in the last section.
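For a taste of what such an experiment looks like, here is a sketch of the classic “terminate a random instance” experiment using the AWS CLI. The ASG name is hypothetical, the commands require configured credentials, and this should only ever be run against a test environment.

```shell
ASG="chaos-demo-asg"   # hypothetical Auto Scaling Group name

# List the instance IDs currently in the group.
IDS=$(aws autoscaling describe-auto-scaling-groups \
  --auto-scaling-group-names "$ASG" \
  --query 'AutoScalingGroups[0].Instances[].InstanceId' \
  --output text)

# Pick one instance at random and terminate it.
VICTIM=$(printf '%s' "$IDS" | tr '\t' '\n' | shuf -n 1)
aws ec2 terminate-instances --instance-ids "$VICTIM"
```

The ASG should then detect the lost instance and launch a replacement, which is exactly the self-healing behavior the experiment is meant to verify.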

> 2. Addressing Prerequisites — Set up your AWS Account and CLI on your Terminal
Let’s get our hands dirty by getting our infra ready to disrupt.

Prerequisites:

  1. Get the Access Key ID and Secret Access Key from your AWS account
  2. Install the AWS CLI on your local machine
  3. Configure AWS credentials for the AWS account on your machine
  4. Set up infra: create an Auto Scaling Group with 3 EC2 instances as its desired and minimum capacity (assume tasks/services are running inside them)
  5. Validate the AWS CLI by checking the number of instances in the newly created ASG

Get the Access Key ID and Secret Access Key from AWS Account

Go to https://aws.amazon.com/console/ and log in to the AWS Console. Navigate to IAM → Dashboard → Manage Security Credentials → Access Keys tab and copy your Access Key ID and Secret Access Key.

Go ahead and create one if you don’t have one.

[Image: AWS Access Keys (masked for security)]

Install AWS CLI on your local machine

After jotting down the keys, let’s install AWS CLI v2 on your system. If you already have it installed, please proceed to Step 3, where we configure the credentials.

Install AWS CLI by following the commands mentioned in the AWS documentation.

“Installing the AWS CLI version 2 on macOS” (docs.aws.amazon.com) describes how to install, update, and remove the AWS CLI version 2 on macOS.

After installing the AWS CLI, open your Mac Terminal and type aws; that should print usage output like the image below, confirming that the AWS CLI has been installed successfully.

[Image: AWS CLI validation]

Configure AWS credentials for the AWS Account on your machine

Now it’s time to map your AWS credentials on your local machine. We need to configure the Access Key ID and Secret Access Key so that you can connect to your AWS account from your machine and create and disrupt the infra using the AWS CLI.

aws configure should do the trick; it asks for the credentials, the region, and the output format. You might want to configure it as in the image below.


We can validate this by inspecting ~/.aws/credentials.

This file shows the credentials we just added in the terminal. With this step finished, we now have access to the AWS account from our machine through the AWS CLI. Eureka!
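A typical ~/.aws/credentials written by aws configure looks like this (placeholder values shown; never commit real keys anywhere):

```ini
# ~/.aws/credentials — written by `aws configure`
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```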

Set up infra — create an Auto Scaling Group with 3 EC2 instances as its desired and minimum capacity (assume tasks/services are running inside them).

We will use the AWS CLI to run the chaos experiment and disrupt the instances. For the time being, we shall create the Auto Scaling Group and attach 3 EC2 instances to it using the AWS Console.

Go straight to the AWS Console, search for EC2, open the “Auto Scaling Groups” tab, and create a new Auto Scaling Group.

a. Select the appropriate instance type (preferably a t2.micro, free tier).

b. Create a new Launch Configuration and associate an IAM role if you have one.

c. Create the ASG with a minimum of 3 EC2 instances and a maximum of 6, and place it in the required VPC and subnets. Defaults are sufficient for this sample experiment.
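A rough CLI equivalent of the console steps above looks like this. All names, the AMI ID, and the subnet ID are placeholders you would substitute for your own, and the commands require configured credentials.

```shell
# Create a launch configuration, then the ASG (placeholder identifiers).
aws autoscaling create-launch-configuration \
  --launch-configuration-name chaos-demo-lc \
  --image-id ami-xxxxxxxxxxxxxxxxx \
  --instance-type t2.micro

aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name chaos-demo-asg \
  --launch-configuration-name chaos-demo-lc \
  --min-size 3 --max-size 6 --desired-capacity 3 \
  --vpc-zone-identifier subnet-xxxxxxxx
```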


Validate the AWS CLI by checking the number of instances in the newly created ASG.

The new ASG gets created, and 3 new EC2 instances are automatically launched and reach a steady state. We have established the infra. For this experiment, assume this is how our backend infrastructure is set up; now we shall start disrupting it. We discuss more disruption techniques in the last section.
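A quick way to validate the instance count from the CLI might look like this (the ASG name is a hypothetical placeholder; it requires the credentials configured earlier):

```shell
ASG="chaos-demo-asg"
# Count the instances in the group; expect 3 at steady state.
aws autoscaling describe-auto-scaling-groups \
  --auto-scaling-group-names "$ASG" \
  --query 'length(AutoScalingGroups[0].Instances)' \
  --output text
```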


#chaos-testing #chaos-monkey #disruption #aws #chaos-engineering

Houston Sipes

Did Google Open-Sourcing Kubernetes Backfire?

Over the last few years, Kubernetes has become the de-facto standard for container orchestration and has won the race against Docker to be the most loved platform among developers. Released in 2014, Kubernetes has come a long way and is now used across the entire cloud landscape. In fact, recent reports state that of 109 tools surveyed for managing containers, 89% leverage some version of Kubernetes.

Kubernetes, although inspired by Borg, is an open-source project by Google that has been donated to a vendor-neutral body, the Cloud Native Computing Foundation. This can be attributed to Google’s vision of creating a platform that can be used by every firm in the world, including the large tech companies, and that can host multiple cloud platforms and data centres. The entire reason for handing control to the CNCF is to develop the platform in the best interest of its users, without vendor lock-in.

#opinions #google open source #google open source tools #google opening kubernetes #kubernetes #kubernetes platform #kubernetes tools #open source kubernetes backfired